Fog – Automating Cloud Servers

This post is part of my weekly tech learning series, where I take one hour each week to try out a piece of technology that I’d like to learn.

Today I decided to try something new. Since I’ve been working with JavaScript libraries for the past few weeks, I wanted to go in a completely different direction.

Fog

I settled on the fog Ruby gem, which abstracts about a dozen cloud services behind a common API.

Since I mostly use VPSs for myself and my clients, I wanted to explore Amazon’s EC2 and the Rackspace Cloud. Both of these are covered under fog’s Compute section. My goal was to configure and boot an Ubuntu LTS server on each of these services.

(I’d also like to take a look at fog’s Storage section later. I use Amazon S3 for much of my hosting, and having programmatic access to it would be useful.)

Rackspace Cloud

Since I already have an account with Rackspace, I figured I’d start there. fog’s tutorial covered Rackspace, so I walked through it to set up a new default server (Ubuntu 10.04 with 256MB of RAM). Changing the RAM looks easy, as it’s configured by the flavor_id attribute. The OS can be changed via the image_id attribute, but I couldn’t find an easy way to list all of the available images. I’d prefer an Ubuntu 12.04 instance, but for time’s sake I’ll leave it at 10.04 for now.

One thing that wasn’t mentioned in the documentation is that fog now reads credentials from a ~/.fog file. That lets you enter everything there and keep authentication keys out of your source code. From what I saw, you can also point it at a different file, so you could have project-specific configurations, which is great for freelance Ruby developers.
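For reference, my ~/.fog ended up looking roughly like this (the key names come from fog’s README; the values are placeholders):

```yaml
:default:
  :rackspace_username:    your-rackspace-username
  :rackspace_api_key:     your-rackspace-api-key
  :aws_access_key_id:     YOUR-AWS-ACCESS-KEY
  :aws_secret_access_key: YOUR-AWS-SECRET-KEY
```

Setting the FOG_RC environment variable points fog at an alternate credentials file, which is how you’d get the per-project setup.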

Unfortunately I ran into a few errors trying to use Rackspace. First, the bootstrap command used in the tutorial needed some additional parameters, specifically :public_key_path and :private_key_path. Otherwise you’ll get an error like:

“ArgumentError: public_key is required for this operation”

After adding the keys, I ran bootstrap again. This command is supposed to create a new server, boot it up, and configure SSH access by copying over your SSH keys.

Unfortunately I ran into another error, this time “Net::SSH::Disconnect: disconnected: Too many authentication failures for root (2)”. This means Net::SSH couldn’t log into the server, so either the user, the password, or the SSH keys were wrong. I tried switching the user to ‘ubuntu’, since Ubuntu typically disables the root user, but that didn’t work either.

I suspect the problem is with Net::SSH and my SSH keys. I create new keys for each client I work with and have a few old ones lying around; looking at my ~/.ssh/ directory, I count 16 keys.
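From what I’ve read, that “Too many authentication failures” error usually means the SSH agent offered several of those keys before the right one, and the server cut the connection once it hit its MaxAuthTries limit. Limiting which key gets offered should fix the manual case; the Host pattern below is illustrative:

```
# ~/.ssh/config
# Offer only this key to these hosts instead of everything in the agent.
# The Host pattern is an example; match it to your server's address.
Host 50.56.*
    IdentityFile ~/.ssh/fog
    IdentitiesOnly yes
```

Net::SSH has an equivalent :keys_only => true option, though I haven’t checked whether fog’s bootstrap passes it through.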

After turning off public key authentication on the server, I was able to ssh in manually, so I’m about 80% sure the problem is on my laptop’s end.

Since the server was created successfully, I decided to move on to EC2. I’ll go back and debug my SSH configuration later, but I’d like to get a little experience with EC2 before I run out of time.

AWS EC2

EC2 was new to me. I’ve used S3 for years but had never needed EC2’s compute side. After configuring fog for Rackspace, it took just a few changes in my code to switch over to EC2.

The default image is still Ubuntu 10.04, but with 613MB of RAM (t1.micro), the smallest size EC2 allows.

I didn’t have any problems with EC2 once I got my SSH keys figured out. The one catch: AWS doesn’t support DSA keypairs, only RSA.
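So if your existing keys are DSA, generate an RSA pair for fog to use. The ~/.ssh/fog path is just what my scripts below expect:

```shell
# AWS keypairs must be RSA; DSA keys are rejected.
# -N '' skips the passphrase; set one if you prefer.
mkdir -p ~/.ssh
ssh-keygen -t rsa -b 2048 -N '' -f ~/.ssh/fog -C 'fog bootstrap key'
```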

Summary

I was impressed by how well fog abstracted the differences between EC2 and Rackspace. The API is generic enough to do some scripting with, though I’d be careful building a lot of automation around it; otherwise you might end up with hundreds of servers sitting around.

I don’t see myself using fog for my own servers anytime soon, because I mostly work with one or two servers at a time and don’t need to scale that often. The largest configuration I’ve done was a four-node web cluster on the Rackspace cloud: one proxy, two app servers, and one database server.

There are a few places where I can see fog helping me, though:

  • Storing files to S3, e.g. from the command line or as a bulk upload.
  • Synchronizing files between S3 and Rackspace Cloud Files.
  • Automating the creation of a temporary server, like for doing remote pair programming. This kind of server would be started, used, and then destroyed a few hours later.

I’ve even been toying with the idea of doing all of my development on a remote server and using my laptop (or even my iPad) as a dumb terminal. The cost would be minimal, and I could spin up a larger server whenever I needed more power (e.g. a 30GB Rackspace server is only $1.20/hour, less than $10/day for an 8-hour workday).

Code

There isn’t much code to show for fog, especially since I did a lot of exploration using its console mode. Here are the server creation scripts I did use.

# rackspace_server.rb
require 'rubygems'
require 'fog'
 
connection = Fog::Compute.new({:provider => 'Rackspace'})
 
server = connection.servers.bootstrap(:public_key_path => '/home/edavis/.ssh/fog.pub',
                                      :private_key_path => '/home/edavis/.ssh/fog',
                                      :username => 'ubuntu')

# ec2_server.rb
require 'rubygems'
require 'fog'
 
 
connection = Fog::Compute.new({
                                :provider => 'AWS',
                                :aws_access_key_id => ENV['S3_ACCESS_KEY'],
                                :aws_secret_access_key => ENV['S3_SECRET_ACCESS_KEY']
                              })
 
server = connection.servers.bootstrap(:private_key_path => '~/.ssh/fog',
                                      :public_key_path => '~/.ssh/fog.pub',
                                      :username => 'ubuntu')

Like I said, pretty much identical.

Next Friday I’m planning to explore EventMachine and get my Ruby event-processing on.