Quick-start with Gluster on AWS

I wanted to play around with Gluster a bit, and EC2 has gotten cheap enough that it makes sense to spin up a few instances. My goal is simple: set up Gluster running on two servers in different regions, and see how everything works between them. This is in no way a production-ready guide, or even necessarily good practice, but I found the official guides lacking and confusing. (For reference, they have a Really, Really Quick Start Guide and also one tailored to EC2. Both took some tweaking.) Here’s what I did:

  • Start two EC2 instances. I used “Amazon Linux” on a t2.micro, and started one each in Sydney and Oregon. (Using different regions is in no way required; I’m doing that because I’m specifically curious how it will behave in that case.)
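If you’d rather script the launches than click through the console, a rough aws-cli equivalent looks something like the following; the AMI ID and key pair name are placeholders, and the AMI ID differs per region.
# Placeholder AMI and key pair; run once per region (e.g. us-west-2 and ap-southeast-2)
aws ec2 run-instances --image-id ami-xxxxxxxx --instance-type t2.micro --key-name mykey --region us-west-2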
  • Configure the security groups from the outset. Every node needs access to every other node on the following ports (this was different for older versions; a CLI sketch follows the list):
    • TCP and UDP 111 (portmap)
    • TCP 49152
    • TCP 24007-24008
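If you want to script the security group rules too, a sketch with the aws CLI might look like this. It assumes a security group named gluster in each region (use --group-id instead for a non-default-VPC group), and in practice you’d restrict --cidr to the other node’s address rather than opening the ports to the world.
# Hypothetical group name and wide-open CIDR; tighten both for anything real
aws ec2 authorize-security-group-ingress --group-name gluster --protocol tcp --port 111 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-name gluster --protocol udp --port 111 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-name gluster --protocol tcp --port 24007-24008 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-name gluster --protocol tcp --port 49152 --cidr 0.0.0.0/0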
  • Create a 5GB (or whatever you like, really) EBS volume for each instance; attach them. This will be our ‘brick’ that Gluster uses.
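The same step from the CLI, if you’re so inclined, is roughly this; the availability zone, volume ID, and instance ID are placeholders.
# Create a 5GB volume in the instance's AZ, then attach it as /dev/sdf
aws ec2 create-volume --size 5 --availability-zone us-west-2a
aws ec2 attach-volume --volume-id vol-xxxxxxxx --instance-id i-xxxxxxxx --device /dev/sdf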
  • Pop this in /etc/yum.repos.d/glusterfs-epel.repo:
# Place this file in your /etc/yum.repos.d/ directory

[glusterfs-epel]
name=GlusterFS is a clustered file-system capable of scaling to several petabytes.
baseurl=http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-6/$basearch/
enabled=1
skip_if_unavailable=1
gpgcheck=0

[glusterfs-noarch-epel]
name=GlusterFS is a clustered file-system capable of scaling to several petabytes.
baseurl=http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-6/noarch
enabled=1
skip_if_unavailable=1
gpgcheck=0

[glusterfs-source-epel]
name=GlusterFS is a clustered file-system capable of scaling to several petabytes. - Source
baseurl=http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-6/SRPMS
enabled=0
skip_if_unavailable=1
gpgcheck=0
  • sudo yum install glusterfs glusterfs-fuse glusterfs-geo-replication glusterfs-server. This should pull in the necessary dependencies.
  • Now, set up those volumes:
    • sudo fdisk /dev/sdf (or whatever it was attached as); create a partition spanning the disk
    • Create a filesystem on it; I used sudo mkfs.ext4 /dev/sdf1 for now
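If you’d rather not walk through fdisk interactively, a non-interactive sketch of those two steps (assuming the volume really did show up as /dev/sdf; on some instances it appears as /dev/xvdf) would be:
# Partition the whole disk and put ext4 on it, non-interactively
sudo parted -s /dev/sdf mklabel msdos
sudo parted -s /dev/sdf mkpart primary ext4 1MiB 100%
sudo mkfs.ext4 /dev/sdf1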
  • Create a mountpoint; mount
sudo mkdir -p /exports/sdf1
sudo mount /dev/sdf1 /exports/sdf1
sudo mkdir -p /exports/sdf1/brick
  • Edit /etc/fstab and add the appropriate line, like:
/dev/sdf1   /exports/sdf1 ext4  defaults        0   0
  • Start gluster on each node; sudo service glusterd start
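If you also want glusterd to come back after a reboot, something like this should do it on Amazon Linux’s SysV-style init:
sudo chkconfig glusterd on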
  • Peer probing… This tripped me up big time. The only way I got this to work was by creating fake hostnames for each box in /etc/hosts. I used gluster01 and gluster02 for the names. On gluster01, /etc/hosts maps gluster01 to 127.0.0.1; on gluster02, it maps gluster02 to 127.0.0.1. Then, from one node (it doesn’t matter which), probe the other by the hostname you just set up. You don’t need to repeat it from the other host; they’ll see each other.
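To make that concrete, here’s a sketch of what the /etc/hosts entries and the probe look like. The step above only covers the localhost mapping; each box also has to be able to resolve the other’s name, so I’m assuming an entry pointing at the other node’s public IP (203.0.113.20 is a placeholder).
# /etc/hosts on gluster01
127.0.0.1      localhost gluster01
203.0.113.20   gluster02
# Then, from gluster01 only:
sudo gluster peer probe gluster02
sudo gluster peer status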
  • Create the volume, with a replica count of 2 (one copy per node), on one of them:
sudo gluster volume create test1 replica 2 gluster01:/exports/sdf1/brick gluster02:/exports/sdf1/brick

This will fail miserably if you didn’t get the hostname thing right. You can’t do it by public IP, and you can’t directly use localhost. If it works right, you’ll see “volume create: test1: success: please start the volume to access data”. So, let’s do that.

  • sudo gluster volume start test1 (you can then inspect it with sudo gluster volume status)
  • Now, mount it. On each box, sudo mkdir /mnt/storage. Then, on each box, mount it with a reference to one of the Gluster nodes: sudo mount -t glusterfs gluster01:test1 /mnt/storage (you could use gluster01:test1 or gluster02:test1; either will find the right volume). This may take a bit if it’s going across oceans.
  • cd into /mnt/storage, create a file, and see that it appears on the other node. Magic!
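A quick way to see the replication in action, assuming both boxes have the volume mounted at /mnt/storage:
# On gluster01:
echo "hello from gluster01" | sudo tee /mnt/storage/hello.txt
# On gluster02:
cat /mnt/storage/hello.txt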

Please keep in mind that this was the bare minimum for a cobbled-together test, and is surely not a good production setup.

Also, replicating Gluster between Sydney and Oregon is horribly slow. Don’t do that! Even when it’s not across continents, Gluster doesn’t do well across a WAN.

HP Cloud working with Aeolus

I’m happy to report that the latest code we’ve added, which adds OpenStack support to Aeolus and will ship with our next release, is working successfully with HP Cloud, expanding our repertoire of public clouds.

While the support should allow us to work with all OpenStack-based public cloud providers, in practice the APIs various providers expose often diverge enough from stock OpenStack to prevent them from working. Rackspace, for example, has modified its authentication API enough to prevent authentication with Deltacloud today. I had similar issues trying Internap’s AgileCLOUD, which uses the hAPI interface that Voxel provided. (I understand that there’s a proper OpenStack environment in the works, though.)

But enough about what doesn’t work — HP’s cloud service does work! Getting it set up took a little figuring out, though, so I wanted to share some details.

First things first, you’ll need to activate one or more of the Availability Zones for its Compute service:

Until at least one is activated, you’ll have a tough time authenticating and it won’t be apparent why. (Or, at least, this was my experience.)

Once in, you’ll want to head over to the API Keys section to (you guessed it) get your API keys. Here’s an example of what it might look like (with randomized values):

(Just to be clear, the keys and tenant information were artificially generated for this screenshot.)

At the bottom is the Keystone entrypoint you’ll want to put in to set up the Provider:

This much is straightforward. Adding a Provider Account is a little more of an adventure.

Despite what their documentation may say, the only way I’ve been able to authenticate through Deltacloud has been with my username and the tenant name shown — not the API keys, and not the Tenant ID.

In the example above, my Tenant Name is “example-tenant1”, with a username of “example”. So in Conductor, I’d want to enter “example+example-tenant1”, since we need to join username and tenant name that way. Password is what you use to log into the account.
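
If you want to sanity-check that username/tenant combination outside of Conductor, you can POST it directly to the Keystone v2.0 tokens endpoint. The endpoint below is a placeholder; use the Keystone entrypoint shown on the API Keys page, and substitute your real password.

curl -s -X POST https://<keystone-entrypoint>/v2.0/tokens \
  -H 'Content-Type: application/json' \
  -d '{"auth": {"passwordCredentials": {"username": "example", "password": "secret"}, "tenantName": "example-tenant1"}}'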

Here you’ll notice that I cheat — Glance URL is currently a required field in Conductor. As best as I can tell, HP Cloud does not currently expose Glance to users, so there is not actually a valid Glance URL available. I’ve opened an issue to fix this in Conductor, but for right this second I just used localhost:1234 which passes validation.

As this may imply, we don’t presently support building images for HP Cloud, either, though there’s work being done to allow snapshot-style builds (in which a minimal OS is booted on the cloud, customized in place, and then snapshotted). What does work today is image import.

It took me a moment to figure out how to import a reference to an HP Cloud image. If you view the Servers tab within an Availability Zone and click “Create a new server from an Image”, you’ll get a dialog like this:

The orangey-red arrows point to the image IDs — 54021 for the first one, 78265 for a CentOS 6.3 image, etc. These integers are what you enter into Conductor to import an image:
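
If you’d rather not fish the IDs out of that dialog, the stock nova client can list them too. This assumes you have python-novaclient installed; the credentials below are placeholders, and OS_AUTH_URL is the Keystone entrypoint from the API Keys page.

# Placeholder credentials
export OS_USERNAME=example
export OS_PASSWORD=secret
export OS_TENANT_NAME=example-tenant1
export OS_AUTH_URL=https://<keystone-entrypoint>/v2.0/
nova image-list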

With an image imported, the launch process is just like with other providers, and you’ll be able to download a generated keypair and ssh in.

Of course, the job isn’t finished. The ability to build and push images is important for our cross-cloud workflow, and it’s something that’s in progress. And the Glance URL process is quite broken. But, despite these headaches, it works — I’ve got an instance running there launched through Conductor.

Digital Ocean

I came across Digital Ocean today, and am fairly interested. (Though I’m not really planning on jumping from my current host.)

The premise is that they offer SSD-backed cloud servers. They’re not the only ones doing that, but their pricing is beyond competitive. The front page advertises a VM with 20GB of SSD storage and 512 MB RAM for $5/month. (And unmetered transfer.) Prices climb a bit as you go, but stay pretty proportional — $20/month for a 2GB instance with 2 cores and 40GB of SSD-backed storage. That’s a very good deal — but almost frighteningly low, along the lines of “Would you like this 40-cent bottle of champagne?”, in that it leaves me a bit worried about what’s “wrong”. (Though I’ve yet to find anything.)

In not-very-scientific (nor real-world) tests, hdparm -t showed 310.29 MB/sec throughput (932MB in 3.00 seconds). Various speed tests gave scattered results, from 2-6 MB/sec (16-48 Mbps), though it’s entirely possible that the bottleneck was the remote server. I must say, though, that yum is faster than I have ever seen it before.
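
For reference, the throughput number came from hdparm’s buffered-read test, something along these lines; the device name is a guess and may differ on a given droplet.

sudo hdparm -t /dev/vda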

They do seem to block outbound ICMP, probably due to abuse problems. They also appear to block NTP, which is odd and makes me wonder what else is blocked.
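
The sort of quick checks behind those observations, if you want to try them on your own droplet:

# If outbound ICMP is blocked, this will simply time out
ping -c 3 8.8.8.8
# Query (without setting) the time from a public NTP pool server
ntpdate -q pool.ntp.org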

I don’t plan on switching over any time soon, but at the same time, it’s tempting to think of $10/month as a reasonable expenditure if I find myself needing something to host the occasional app or whatnot.