HP Cloud working with Aeolus

I’m happy to report that the latest code we’ve added, which adds OpenStack support to Aeolus and will ship with our next release, is working successfully with HP Cloud, expanding our repertoire of public clouds.

While this support should in principle cover all OpenStack-based public cloud providers, in practice the APIs individual providers expose often diverge enough from stock OpenStack to break things. Rackspace, for example, has modified its authentication API enough to prevent authentication with Deltacloud today. I had similar issues trying Internap’s AgileCLOUD, which is using the hAPI interface that Voxel provided. (I understand that there’s a proper OpenStack environment in the works, though.)

But enough about what doesn’t work — HP’s cloud service does work! Getting it set up took a little figuring out, though, so I wanted to share some details.

First things first, you’ll need to activate one or more of the Availability Zones for HP Cloud’s Compute service:

Until at least one is activated, you’ll have a tough time authenticating and it won’t be apparent why. (Or, at least, this was my experience.)

Once in, you’ll want to head over to the API Keys section to (you guessed it) get your API keys. Here’s an example of what it might look like (with randomized values):

(Just to be clear, the keys and tenant information were artificially generated for this screenshot.)

At the bottom is the Keystone entrypoint you’ll want to put in to set up the Provider:

This much is straightforward. Adding a Provider Account is a little more of an adventure.

Despite what their documentation may say, the only way I’ve been able to authenticate through Deltacloud has been with my username and the tenant name shown — not the API keys, and not the Tenant ID.

In the example above, my Tenant Name is “example-tenant1”, with a username of “example”. So in Conductor, I’d want to enter “example+example-tenant1”, since we need to join username and tenant name that way. Password is what you use to log into the account.
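The joining is just string concatenation; a quick sketch, using the values from the example above:

```shell
# Deltacloud expects the OpenStack username and tenant name joined with '+'.
# These values match the example screenshot; substitute your own.
OS_USERNAME="example"
OS_TENANT_NAME="example-tenant1"
DC_USERNAME="${OS_USERNAME}+${OS_TENANT_NAME}"
echo "$DC_USERNAME"
```

That combined string goes in Conductor’s username field; the password field takes your normal account password.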

Here you’ll notice that I cheat — Glance URL is currently a required field in Conductor. As best as I can tell, HP Cloud does not currently expose Glance to users, so there is not actually a valid Glance URL available. I’ve opened an issue to fix this in Conductor, but for right this second I just used localhost:1234 which passes validation.

As this may imply, we don’t presently support building images for HP Cloud either, though there’s work under way to allow snapshot-style builds (in which a minimal OS is booted on the cloud, customized in place, and then snapshotted). Image imports, though, do work today.

It took me a moment to figure out how to import a reference to an HP Cloud image. If you view the Servers tab within an Availability Zone and click “Create a new server from an Image”, you’ll get a dialog like this:

The orangey-red arrows point to the image IDs — 54021 for the first one, 78265 for a CentOS 6.3 image, and so on. These integers are what you enter into Conductor to import an image:

With an image imported, the launch process is just like with other providers, and you’ll be able to download a generated keypair and ssh in.
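Once the keypair is downloaded, the usual ssh dance applies — a sketch, with a hypothetical key filename and a placeholder address:

```shell
# "hp-key.pem" is a hypothetical name for the keypair Conductor generates.
KEY=hp-key.pem
touch "$KEY"        # stand-in for the downloaded key, for this sketch
chmod 600 "$KEY"    # ssh refuses private keys with looser permissions
# ssh -i "$KEY" root@<instance-address>   # substitute your instance's address
```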

Of course, the job isn’t finished. The ability to build and push images is important for our cross-cloud workflow, and it’s something that’s in progress. And the Glance URL process is quite broken. But, despite these headaches, it works — I’ve got an instance running there launched through Conductor.

Simple disposable VMs with snap-guest

Have you ever wished you could easily spin up a virtual machine for a little testing? Something quick, but something you could (optionally) throw away when you were done?

Of course you have. And I think snap-guest is the answer to your dreams (mine, too!). It allows you to set up a “base” image, and then easily spin up copy-on-write copies of it.


You can follow the installation instructions in the README, though note the trap — the syntax in the symlink example is backwards. (Also, if you use sudo, you may want to copy the script into /usr/bin instead of symlinking it into /usr/local/bin, since sudo’s default PATH may not include /usr/local/bin.) With that set up, I built two base virtual machines: one RHEL 6.3, the other Fedora 17. (I plan to set up more soon.)
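For reference, ln -s takes the target first and the link name second — my understanding is that’s the part the README reverses. The snap-guest path below is an assumption; a scratch-file demonstration follows:

```shell
# Correct order: ln -s TARGET LINK_NAME, e.g. (path is an assumption):
#   sudo ln -s /path/to/snap-guest/snap-guest /usr/local/bin/snap-guest
# Quick demonstration with scratch files:
touch /tmp/snap-guest-target
ln -sf /tmp/snap-guest-target /tmp/snap-guest-link
readlink /tmp/snap-guest-link
```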

The “base” VMs are something you should set up, shut down, and then never touch again: booting a base image after guests have been cloned from it changes the backing file underneath those copy-on-write copies and corrupts them. So create a base image that everything else will be derived from. Here’s what I did after the base install:

  • yum update
  • yum install ntp, set it up with working servers, and chkconfig ntpd on
  • Set up EPEL
  • yum install bash-completion git screen telnet (telnet is for checking ports, not insecure logins!)
  • Add a non-privileged user
  • I added repos for Aeolus, but did not install anything from them for the base image.
  • Disable smartd, enable acpid
  • Allow incoming traffic on ports 22, 80, 443, 3000 in the firewall
  • Set up Avahi — yum install avahi, chkconfig avahi-daemon on, and open UDP port 5353 in the firewall. Do the same on your desktop, edit /etc/nsswitch.conf‘s “hosts:” line to read “hosts: files mdns4_minimal [NOTFOUND=return] dns myhostname”. Now, ssh vm-hostname.local will “just work”. (Thanks, eck, for this trick!)
  • Clean things out for provisioning of guests: touch /.unconfigure; yum clean all; rm -rf /etc/ssh/ssh_host_*; poweroff
  • In hindsight, it might have been worthwhile to set up a basic local LDAP server on the guest so that I could test Conductor against it when needed.
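Strung together, the steps above look roughly like this — a provisioning sketch for RHEL 6 / Fedora of that era, run as root. Package names and ports come from the list above; the EPEL setup, firewall mechanics, and user name are left as placeholders:

```shell
# Rough sketch of the base-image prep steps above; run as root.
yum -y update
yum -y install ntp && chkconfig ntpd on
# ...set up EPEL and any extra repos (e.g. Aeolus) here...
yum -y install bash-completion git screen telnet
useradd someuser                      # non-privileged user; name is up to you
chkconfig smartd off
chkconfig acpid on
# open TCP ports 22, 80, 443, 3000 and UDP 5353 in the firewall,
# e.g. via system-config-firewall or iptables rules
yum -y install avahi && chkconfig avahi-daemon on
# final cleanup so cloned guests re-provision themselves:
touch /.unconfigure
yum clean all
rm -rf /etc/ssh/ssh_host_*
poweroff
```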

When the machine shuts down, you shouldn’t boot it again, unless you are prepared to wipe out any derivative guests.


I ended up using a slightly longer command than is strictly required, because I didn’t love all of the defaults:

sudo /usr/local/bin/snap-guest -b Fedora-17-base -t test_f17_guest -m 2048 -c4 -n bridge=br0

This will clone the “Fedora-17-base” image, starting a “test_f17_guest” VM. -m 2048 tells it to use 2048 MB of RAM instead of 800 MB, -c4 gives it 4 cores, and -n bridge=br0 brings it up on my host’s br0 bridged interface for networking. Obviously, customize all of this as required.

Note that the system will come up with a hostname matching whatever you used with -t. If you set up Avahi as I outlined above, you should be able to “ssh test_f17_guest.local” and log right in.

I still have some kinks to work out, like network interfaces coming up under different names. But I think this is going to be immensely useful going forward. Historically, needing to test a patch on RHEL, or finding a clean Fedora system to test an upstream patch on to rule out issues with my local setup, has been a real timesink. Now it takes about 10 seconds to make a cloned guest, and under a minute for it to boot. I can re-use guests, or just trash them when I’m done.

Aeolus Community-Building Meetup #2

Yesterday we had our second Aeolus Community-Building Meeting. The idea was initially suggested by Nitesh, a community member who wanted to share some ideas for growing the community. We met a few weeks ago, with the notes published here.

This week we had a follow-up call, with much greater attendance.

We’ve been using Google Hangouts for these meetings with pretty good success. The videoconference aspect is pretty neat, and Mo had good luck with the screen-sharing aspect as well. This time we did a Hangout On Air, which let people watch the stream live on YouTube, and let us post a recording after the fact.

You can watch the recording here. (As a warning, my audio is really low, so if you crank the audio up for me, be prepared for everyone else to blow out your eardrums in comparison.)

Here’s a quick recap of what was discussed:

  • Mo Morsi gave a preview of his upcoming talks on Aeolus. He put the slides on the Aeolus Presentations wiki page.
  • We should have a release calendar on our site, and also make sure to make noise (on our blog, and on Twitter/Facebook/etc.) about our releases.
  • It would be valuable to look at integrating with CloudStack and documenting how to use it. Google Compute / Google Cloud Platform, too!
  • Our templates site is a really good idea that needs to be finished up and launched. The template repo Justin set up is a good interim solution.
  • It would be interesting to look at integrating with other sources of templates, like Bitnami or CloudFormation.
  • In general, we really need to support more providers, and finally arrive at a complete API.
  • Justin has been working on getting OpenShift Origin running on Aeolus. This would be an excellent blog post, and maybe a good marketing angle.
  • Sponsoring “drinkups” seems to work well for attracting interest, though it’s not clear if there’s actually a good return on it.
  • Someone suggested cotton candy at booths. I love that idea. Cotton candy kind of looks like a cloud, too!
  • Stickers and flyers are a must. We need to get a ton ordered.
  • We should do more with social media. We have a Facebook page and a Google Plus page, and we’re @aeolusproject on Twitter, and we have a blog. But we don’t make extensive use of them. What we really need are others to blog about us, retweet us, and share our stuff on Facebook.
  • We need to get people to take these tasks and commit to making them happen, and we should be in good shape for the next meeting!

Procedurally, we had some notes for how to make things go more smoothly next time:

  • I really, really need to buy a headset so that people can actually hear me.
  • When you have a large group of people who don’t all know each other, you should have people introduce themselves.
  • The 10-person limit is a problem. I think I might do a separate post on some things I have in mind for how to handle this next time.
  • We need an easier way to share the link ahead of time, both for the Hangout and for the live YouTube stream for people who don’t intend to participate.
  • We need to pay attention to IRC during the call, so those watching on YouTube can participate.

While comments are welcome on this post, please don’t comment here at the expense of replying on aeolus-devel!

Aeolus and OpenStack: Today’s status

I’m currently working on a task to finally get Aeolus to play nicely with OpenStack. This work has been hamstrung in the past by various other components not supporting OpenStack, and then by some quirkiness with the OpenStack server I was trying to test against. The short version is that it doesn’t work yet, but hopefully will at the end of this sprint. What follows are some notes on where things stand right now.

If you’re on F16, you’ll need to gem install openstack, and then update to something newer than the 0.5.0-10 build of Deltacloud that’s in the Fedora repos. Fedora 17 has a new enough Deltacloud (it’s at 1.0.0-8 right now), but you still need to gem install openstack.

Marios has an excellent post from February (!) on using Deltacloud with OpenStack. This provided some clues I needed to get this running.

Right now, I had to start Deltacloud manually for this to work: deltacloudd -i openstack -P <auth URL>. -i openstack loads the OpenStack driver, and -P points Deltacloud at the OpenStack authentication URL. This corresponds to the OS_AUTH_URL value that may be created by an admin.
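For the record, a full invocation looks like this — the endpoint is a made-up example; use your own Keystone URL (the same value as your OS_AUTH_URL):

```shell
# Start Deltacloud with the OpenStack driver, pointed at Keystone.
# The URL is a placeholder; substitute your environment's OS_AUTH_URL.
deltacloudd -i openstack -P "http://keystone.example.com:5000/v2.0"
```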

From there, you should be able to add the provider to Conductor:

(Out of abundant paranoia I have blurred some internal IPs and hostnames, even though they’re not necessarily sensitive information.)

That screenshot leaves me wanting to clarify some of these fields in Conductor. “Provider URL” is the usual Deltacloud URL. (I just happen to be running it in a non-standard way.) I wish the text would indicate this. “Openstack API_ENTRYPOINT” is, well, the OpenStack API entrypoint, but I’d love to find a friendly name for this. This value also happens to be the exact same thing you passed in the -P flag to Deltacloud, which doesn’t seem as DRY as it could be, though I’m not sure if we can effectively share it between Deltacloud and Conductor.

Now, onto adding a Provider Account! This part is going to need some work.

As Marios mentions, OpenStack’s “v2” authentication actually requires three fields — a tenant name, a username, and a password. For Deltacloud, you need to combine your username and tenant name with a ‘+’, e.g., “matt+mytenantname”, and then enter your password normally. Today, you need to do this in the Provider Account screen. I intend to break these fields out in Conductor, though, so you can enter the three fields normally.

Presently, this is as far as I can get because of BZ #858030, in which we reject hardware profiles that define “0 GB” of storage, as many OpenStack setups seem to do. I will send a (trivial) patch for that, though.

This is our current progress. After this, I’d like to move onto the real stuff — importing and building images, and launching instances!