Quick-start with Gluster on AWS

I wanted to play around with Gluster a bit, and EC2 has gotten cheap enough that it makes sense to spin up a few instances. My goal is simple: set up Gluster running on two servers in different regions, and see how everything works between them. This is in no way a production-ready guide, or even necessarily good practice. But I found the official guides lacking and confusing. (For reference, they have a Really, Really Quick Start Guide and also one tailored to EC2. Both took some tweaking.) Here’s what I did:

  • Start two EC2 instances. I used “Amazon Linux” on a t2.micro, and started one each in Sydney and Oregon. (Using different regions is in no way required; I’m doing that because I’m specifically curious how it will behave in that case.)
  • Configure the security groups from the outset. Every node needs access to every other node on the following ports (this was different for older versions); a sample aws CLI invocation follows the port list:
    • TCP and UDP 111 (portmap)
    • TCP 49152
    • TCP 24007-24008
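If you drive AWS from the command line, the ingress rules might look something like this (the security group ID and source CIDR are placeholders for your own values; repeat in each region against that region’s group, and note that cross-region traffic will arrive from the other instance’s public IP):
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 111 --cidr 203.0.113.10/32
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol udp --port 111 --cidr 203.0.113.10/32
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 49152 --cidr 203.0.113.10/32
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 24007-24008 --cidr 203.0.113.10/32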
  • Create a 5GB (or whatever you like, really) EBS volume for each instance; attach them. This will be our ‘brick’ that Gluster uses.
  • Pop this in /etc/yum.repos.d/glusterfs-epel.repo:
# Place this file in your /etc/yum.repos.d/ directory

[glusterfs-epel]
name=GlusterFS is a clustered file-system capable of scaling to several petabytes.
baseurl=http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-6/$basearch/
enabled=1
skip_if_unavailable=1
gpgcheck=0

[glusterfs-noarch-epel]
name=GlusterFS is a clustered file-system capable of scaling to several petabytes.
baseurl=http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-6/noarch
enabled=1
skip_if_unavailable=1
gpgcheck=0

[glusterfs-source-epel]
name=GlusterFS is a clustered file-system capable of scaling to several petabytes. - Source
baseurl=http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-6/SRPMS
enabled=0
skip_if_unavailable=1
gpgcheck=0
  • sudo yum install glusterfs glusterfs-fuse glusterfs-geo-replication glusterfs-server. This should pull in the necessary dependencies.
  • Now, set up those volumes:
    • sudo fdisk /dev/sdf (or whatever it was attached as); create a partition spanning the disk
    • Create a filesystem on it; I used sudo mkfs.ext4 /dev/sdf1 for now
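If you’d rather not step through fdisk interactively, something along these lines should produce the same result (this assumes the volume really did attach as /dev/sdf; on many instance types it shows up as /dev/xvdf instead):
sudo parted -s /dev/sdf mklabel msdos mkpart primary ext4 1MiB 100%
sudo mkfs.ext4 /dev/sdf1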
  • Create a mountpoint; mount
sudo mkdir -p /exports/sdf1
sudo mount /dev/sdf1 /exports/sdf1
sudo mkdir -p /exports/sdf1/brick
  • Edit /etc/fstab and add the appropriate line, like:
/dev/sdf1   /exports/sdf1 ext4  defaults        0   0
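Since the device name the kernel reports can differ from the one you attached (/dev/xvdf vs. /dev/sdf), you may prefer to mount by UUID instead; blkid will print it, and the fstab line would then look something like this (the UUID shown is a placeholder):
# print the filesystem's UUID
sudo blkid /dev/sdf1
# then use it in /etc/fstab (placeholder UUID shown):
UUID=0aabb1c2-3456-7890-abcd-ef0123456789   /exports/sdf1 ext4  defaults        0   0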
  • Start gluster on each node; sudo service glusterd start
  • Peer detection… This tripped me up big time. The only way I got this to work was by creating fake hostnames for each box in /etc/hosts; I used gluster01 and gluster02 for names. Each box maps its own name to 127.0.0.1, and each also needs an entry resolving the other box’s name to that box’s public IP (with the instances in different regions, the public address is the only one that will work). Then, from one node (it doesn’t matter which), probe the other by the hostname you just set up, as sketched below. You don’t need to repeat this from the other host; they’ll see each other.
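As a concrete sketch, gluster01’s /etc/hosts might contain something like the following (the public IP for gluster02 is a placeholder; gluster02 gets the mirror image), and the probe itself is a single command:
# /etc/hosts on gluster01
127.0.0.1      localhost gluster01
203.0.113.10   gluster02

# then, from gluster01:
sudo gluster peer probe gluster02
sudo gluster peer status    # should show gluster02 as a connected peer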
  • Create the volume with replication level 2 (one copy per node), on one of them:
sudo gluster volume create test1 rep 2 gluster01:/exports/sdf1/brick gluster02:/exports/sdf1/brick

This will fail miserably if you didn’t get the hostname thing right. You can’t do it by public IP, and you can’t directly use localhost. If it works right, you’ll see “volume create: test1: success: please start the volume to access data”. So, let’s do that.

  • sudo gluster volume start test1 (you can then inspect it with sudo gluster volume status)
  • Now, mount it. On each box: sudo mkdir /mnt/storage. Then, on each box, mount it with a reference to one of the Gluster nodes: sudo mount -t glusterfs gluster01:test1 /mnt/storage (you could use gluster01:test1 or gluster02:test1; either will find the right volume). This may take a bit if it’s going across oceans.
  • cd into /mnt/storage, create a file, and see that it appears on the other. Magic!
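Something like this, run from one box, should show up on the other (the filename is arbitrary):
# on one box:
echo hello | sudo tee /mnt/storage/hello.txt

# on the other:
ls -l /mnt/storage
cat /mnt/storage/hello.txt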

Please keep in mind that this was the bare minimum for a cobbled-together test, and is surely not a good production setup.

Also, replicating Gluster between Sydney and Oregon is horribly slow. Don’t do that! Even when it’s not across continents, Gluster doesn’t do well across a WAN.

GPG/PGP Keysigning

I just got back from this year’s OpenStack Summit, which was a great experience. In addition to many fruitful sessions about OpenStack itself, a keysigning party was held. This was the first such session I’ve attended, and the use of PKI for signing/encrypting mail is something that’s only recently drawn my interest.

One thing that I find interesting is that there’s no central authority from which keys derive trust, unlike SSL in browsers. Instead, it’s a web-of-trust model. Individuals cryptographically sign each other’s public keys to denote trust in them. If you’ve verified my key, and I sign Bob’s key saying I’ve verified it, then, if you trust me, you can trust Bob’s key.

At the keysigning party, we used the Sassaman Projected Method, in which we each stood up, presented something like a passport on the projector, and verbally verified that the list of key fingerprints compiled before the event was valid. (We also verified the MD5 and SHA sums of the list itself before beginning, so that we knew we were working with the same list.)
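The checksum step is nothing fancy; assuming the list was handed out as a plain text file (keylist.txt is a hypothetical name here), everyone runs the usual tools and compares the output aloud:
md5sum keylist.txt
sha1sum keylist.txt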

GPG setup notes

I’m not going to cover the basics, because myriad other sources already do a much better job. But a few helpful hints for your gpg.conf:

  • You can set a default-key value if you have more than one key.
  • Ensure that require-cross-certification is present
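In gpg.conf, those two hints look something like this (the key ID is a placeholder; substitute your own):
default-key 2BE02E05
require-cross-certification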

You may also want to set up a keyserver different from the default. Here is what I have:

keyserver hkps://hkps.pool.sks-keyservers.net
keyserver-options ca-cert-file=~/.gnupg/sks-keyservers.netCA.pem
keyserver-options auto-key-retrieve
keyserver-options no-honor-keyserver-url

This uses the SKS Keyservers pool, a pool of almost 100 keyservers that all exchange keys with one another. More specifically, it selects the HKPS subpool, which runs SSL on port 443. To use it, you must grab their self-signed CA certificate (the sks-keyservers.netCA.pem file referenced above). (Note that the use of SSL is more to prevent a middleman from eavesdropping than to prevent tampering with your keys; that security comes from the keys themselves.)

The auto-key-retrieve option is there so that when I get new email in mutt signed with a key I haven’t seen before, the key will be fetched automatically. The no-honor-keyserver-url option ensures that we always use our HKPS-enabled pool, even if a key embeds a preferred keyserver URL pointing somewhere else, so we stay on HKPS.

Keysigning Process

caff automates much of this. On Fedora, it’s provided by pgp-tools.
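So installation on Fedora is just:
sudo yum install pgp-tools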

  • After installing it, run caff once to have it generate a ~/.caffrc file.
  • Edit ~/.caffrc to taste:
    • Make sure that $CONFIG{'owner'} and $CONFIG{'email'} are set properly.
    • If your machine doesn’t run a properly-configured MTA, add a line to relay mail through a mailserver, like so: $CONFIG{'mailer-send'} = [ 'smtp.corp.example.com'].

caff maintains its own gpg.conf file, in ~/.caff/gnupghome/. You may want to customize it, or just symlink your main one to it. Partly because I missed exactly what was happening at first, I instead imported keys to my normal keyring, and just pointed caff to that keyring. I used -R to prevent it from fetching keys, and --key-file ~/.gnupg/pubring.gpg to pull from my normal keyring. This probably made things more difficult than needed.
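If you do go the symlink route, it’s just something like:
mkdir -p ~/.caff/gnupghome
ln -s ~/.gnupg/gpg.conf ~/.caff/gnupghome/gpg.conf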

One thing that took me a moment was how to look up a fingerprint. For example, if my key fingerprint is 5150 9442 00FE 3099 4CA8 D2EA E639 859C 2BE0 2E05, how do I look that up? It turns out to be simple: take the last eight hex characters (2BE02E05), which form the short key ID, prepend 0x, and search.

So my workflow was:

gpg2 --search-keys 0x2be02e05 # and import
caff -R --key-file ~/.gnupg/pubring.gpg 0x2be02e05 # and follow steps

Of course, be sure that the fingerprint matches, and that you’ve validated the person’s identity in real life before signing. Once you run caff, it will have you sign the key and email it to each address on file.

Other stuff

Lazy distro mirrors with squid

I have a problem that I think a lot of fellow developers probably have: I have enough computers (or virtual machines!) running the same operating system version(s) that I would benefit from a local mirror of them, but I don’t have so many systems that it’s actually reasonable for me to run a full mirror, which would entail rsyncing a bunch of content daily, much of which may be packages I would never use. And using a proxy server isn’t terribly practical, because with a bunch of semi-round-robin mirrors, it’s likely that two systems would pull the same package from different mirrors. A proxy server would have no way of knowing (ahead of time) that the two documents were actually the same.

What I wanted for a long time was a “lazy” mirror — something that would appear to my systems as a full mirror, but would act more as a proxy. When a client installed a particular version of a particular package for the first time, it would go fetch that package from a “real” mirror, and then cache it for a long time. Subsequent requests for the same package from my “mirror” would be served from cache. I was convinced that this was impossible to do with a proxy server. Worse, I wanted to mirror multiple repos — Fedora and CentOS and EPEL, and maybe even Ubuntu. There’s no way squid can do that.

I was wrong. squid is pretty awesome. We just pull a few tricks:

  • Instead of using squid as a traditional proxy server that listens on port 3128, use it as a reverse proxy / accelerator that listens on port 80. (This is, incidentally, what sites like Wikipedia do.)
  • Massage (okay, abuse) the refresh_pattern rules to cache RPM files (etc.) for a very long time. Normally it is an awful, awful idea for proxy servers to interfere with the Cache-Control / Expires headers that sites serve. But in the case of a mirror, we know that any updates to a package will necessarily bump the version number in the URL. Ergo, we can pretty safely cache RPMs indefinitely.
  • Set up name-based virtual hosting with squid, so that centos-mirror.lan and fedora-mirror.lan can point to different mirrors.

Two other important steps involve setting up cache_dir reasonably (by default, at least in the packages on CentOS 6, squid will only cache data in RAM), and bumping up maximum_object_size from the default of 4MB.

Here is the relevant section of my squid.conf. (The “irrelevant” section of my squid.conf is a bunch of acl lines that I haven’t really customized and can probably be deleted.)

# Listen on port 80, not 3128
# 'accel' tells squid that it's a reverse proxy
# 'defaultsite' sets the hostname that will be used if none is provided
# 'vhost' tells squid that it'll use name-based virtual hosting. I'm not
#   sure if this is actually needed.
http_port 80 accel defaultsite=mirror.lowell.lan vhost

# Create a disk-based cache of up to 10GB in size:
# (10000 is the size in MB. 16 and 256 seem to set how many subdirectories
#  are created, and are default values.)
cache_dir ufs /var/spool/squid 10000 16 256

# Use the LFUDA cache eviction policy -- Least Frequently Used, with
#  Dynamic Aging. http://www.squid-cache.org/Doc/config/cache_replacement_policy/
# It's more important to me to keep bigger files in cache than to keep
# more, smaller files -- I am optimizing for bandwidth savings, not latency.
cache_replacement_policy heap LFUDA

# Do unholy things with refresh_pattern.
# The top two are new lines, and probably aren't everything you would ever
# want to cache -- I don't account for VM images, .deb files, etc.
# They're cached for 129600 minutes, which is 90 days.
# refresh-ims and override-expire are described in the configuration here:
#  http://www.squid-cache.org/Doc/config/refresh_pattern/
# but basically, refresh-ims makes squid check with the backend server
# when someone does a conditional get, to be cautious.
# override-expire lets us override the specified expiry time. (This is
#  illegal per the RFC, but works for our specific purposes.)
# You will probably want to tune this part.
refresh_pattern -i \.rpm$ 129600 100% 129600 refresh-ims override-expire
refresh_pattern -i \.iso$ 129600 100% 129600 refresh-ims override-expire
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern .               0       20%     4320

# This is OH SO IMPORTANT: squid defaults to not caching objects over
# 4MB, which may be a reasonable default, but is awful behavior on our
# pseudo-mirror. Let's make it 4GB:
maximum_object_size 4096 MB

# Now, let's set up several mirrors. These work sort of like Apache
# name-based virtual hosts -- you get different content depending on
# which hostname you use in your request, even on the same IP. This lets
# us mirror more than one distro on the same machine.

# cache_peer is used here to set an upstream origin server:
#   'mirror.us.as6453.net' is the hostname of the mirror I connect to.
#   'parent' tells squid that this is a 'parent' server, not a peer
#    '80 0' sets the HTTP port (80) and ICP port (0)
#    'no-query' stops ICP queries, which should only be used between squid servers
#    'originserver' tells squid that this is a server that originates content,
#      not another squid server.
#    'name=as6453' tags it with a name we use on the next line.
# cache_peer_domain is used for virtual hosting.
#    'as6453' is the name we set on the previous line (for cache_peer)
#    subsequent words are virtual hostnames it answers to. (This particular
#     mirror has Fedora and Debian content mirrored.) These are the hostnames
#     you set up and will use to access content.
# Taken together, these two lines tell squid that, when it gets a request for
#  content on fedora-mirror.lowell.lan or debian-mirror.lowell.lan, it should
#  route the request to mirror.us.as6453.net and cache the result.
cache_peer mirror.us.as6453.net parent 80 0 no-query originserver name=as6453
cache_peer_domain as6453 fedora-mirror.lowell.lan debian-mirror.lowell.lan

# Another, for CentOS:
cache_peer mirrors.seas.harvard.edu parent 80 0 no-query originserver name=harvard
cache_peer_domain harvard centos-mirror.lowell.lan

You will really want to customize this. The as6453.net and harvard.edu mirrors happen to be geographically close to me and very fast, but that might not be true for you. Check out the CentOS mirror list and Fedora mirror list to find something close by. (And perhaps fetch a file or two with wget to check speeds.) And I’m reasonably confident that you don’t have a lowell.lan domain in your home.

If you can find one mirror that has all the distros you need, you don’t need to bother with virtual hosts.

You can edit the respective repos in /etc/yum.repos.d/ to point to the hostnames you set up; pay attention to whether your chosen upstream mirror’s URL structure matches what the repo file expects.
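As a sketch, a CentOS base repo pointed at the pseudo-mirror might look like the following, with the stock mirrorlist= line removed or commented out (the path after the hostname has to match how your chosen upstream mirror lays out its tree, so adjust accordingly):

[base]
name=CentOS-$releasever - Base (local squid pseudo-mirror)
baseurl=http://centos-mirror.lowell.lan/centos/$releasever/os/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6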

You can just drop the hostnames in /etc/hosts if you don’t have a home DNS server, e.g.:

172.16.1.100 fedora-mirror.lowell.lan centos-mirror.lowell.lan
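
A quick sanity check is to fetch the same file twice and watch squid’s X-Cache header go from MISS to HIT (the package path is a placeholder; any RPM your mirror serves will do):

curl -s -o /dev/null -D - http://centos-mirror.lowell.lan/centos/6/os/x86_64/Packages/somepackage.rpm | grep -i X-Cache
# run it twice: the first response should say MISS, the second HIT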