Setting up on EC2

Thanks to my friend Dustin and his EC2 demo using the Elasticfox Firefox Extension for Amazon EC2, I got an EC2 image set up. With other references (Link 1, Link 2, Link 3) I was also able to create my own AMI.

Some notes specific to my configuration.

Pre-configure the ElasticFox key so SSH connections can be launched directly from ElasticFox.

mkdir ~/ec2-keys
mv ~/Downloads/elasticfox.pem ~/ec2-keys/id_elasticfox
chmod 600 ~/ec2-keys/id_elasticfox
chmod 700 ~/ec2-keys/
ssh -i /Users/rbradfor/ec2-keys/id_elasticfox [email protected]

Installed Software.

apt-get update
apt-get -y autoremove
apt-get -y install apache2
apt-get -y install mysql-server
# Prompts for password (very annoying)
apt-get -y install php5
apache2ctl graceful
echo "Hello World" > /var/www/index.html
echo "<?php phpinfo(); ?>" > /var/www/phpinfo.php

Configuration to save AMI.

scp -i ~/ec2-keys/id_elasticfox ~/ec2-keys/id_elasticfox pk-CHK7DP4475BWUKIUF4WFDIW3VMYDYOHQ.pem cert-CHK7DP4475BWUKIUF4WFDIW3VMYDYOHQ.pem [email protected]:/mnt
ec2-bundle-vol -d /mnt -c cert-CHK7DP4475BWUKIUF4WFDIW3VMYDYOHQ.pem -k pk-CHK7DP4475BWUKIUF4WFDIW3VMYDYOHQ.pem -u AccountNumber -r i386 -p ubuntu804_lamp
ec2-upload-bundle -b rbradford_804_lamp_ami -m /mnt/ubuntu804_lamp.manifest.xml -a AccessID -s SecretKey

Giving control of your data to the cloud

I’ve been doing more research and evaluation of cloud computing. Specifically, my focus has been on data stores, considering how to augment an existing operation using a popular database such as MySQL.

I’ve been looking first at Google App Engine, and now that my SimpleDB beta access arrived today, I will be looking there next.

Some observations I’ve struggled with are:

  • No native CLI, say for basic data setup. You can run SELECT statements programmatically via a Query object in an SQL-like syntax called GQL, but you can’t do DML (no INSERT, UPDATE or DELETE).
  • No simple data viewer. In production you would not use one, but I’m evaluating and still looking at functionality, verification of results, etc. A phpMyAdmin clone is what I’m seeking, for example. I suspect this would have made a good Google Summer of Code project.
  • Python only. Spending half a day learning Python syntax was fine for me, but it was another small starting hurdle. If your organization doesn’t do or use Python, this is another skill or resource needed.
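To illustrate the first point: GQL really is SELECT-only, and writes must go through the Python API instead (e.g. entity.put(), db.delete()). The statements below follow GQL's documented shape, but the validator helper is a hypothetical sketch of my own, not part of the SDK.

```python
# GQL is SQL-like but read-only: the grammar contains only SELECT,
# with bindings (:1), ORDER BY and LIMIT. No DML exists.
GQL_EXAMPLES = [
    "SELECT * FROM Greeting WHERE author = :1 ORDER BY date DESC LIMIT 10",
    "SELECT * FROM Task WHERE done = FALSE",
]

def is_valid_gql(statement):
    """Hypothetical check: GQL statements must begin with SELECT."""
    return statement.strip().upper().startswith("SELECT")

for stmt in GQL_EXAMPLES:
    assert is_valid_gql(stmt)

# Writes go through the Python API (entity.put(), db.delete(entity));
# there is no GQL equivalent of a statement like this:
assert not is_valid_gql("DELETE FROM Greeting WHERE date < :1")
```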

But the biggest concern and hurdle, coming from my traditional principles, is loss of control: loss of control over monitoring and instrumentation, performance, availability, and backup and recovery.

Recent issues with performance and unavailability have highlighted that App Engine is not suitable for mission-critical websites, at least not as your primary platform. I see huge benefits in augmenting access to information, perhaps more historical data for example; a well-defined API within your application could easily support using cloud storage as secondary storage and as primary retrieval for less important data.

My focus when revisiting will be on means of object translation between tables and the Datastore, and maybe an API for data transfer.

I suspect that when I get to evaluating EC2/S3 more I will have much more support by being able to leverage existing tools and techniques.

Your site unavailable page

When your site is down, what do people see? If it is overloaded, do you respond well or not?

For larger organizations with the infrastructure and DNS management, this should be part of your DR strategy. Yesterday was Firefox Download Day. I don’t think Mozilla coped as well as they could have; they were not prepared. Looking at the two screenshots below you can see examples of the error pages I got.

In this case it was a planned event; the increase in traffic was predictable ahead of time. Surely they could have had a static page about the event and the high load, or even a static page of links if your goal was to download Firefox 3.
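A static holding page like that can even be served framework-free. A minimal sketch using only the Python standard library; the port, wording, and the choice of HTTP 503 with Retry-After are my assumptions:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Static "site unavailable" page served for every request. Returning
# HTTP 503 with Retry-After tells clients and crawlers the outage is
# temporary, rather than a hard failure.
MAINTENANCE_PAGE = (
    b"<html><body><h1>Site temporarily unavailable</h1>"
    b"<p>We are aware of the high load and are working on it.</p>"
    b"</body></html>"
)

class MaintenanceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(503)
        self.send_header("Retry-After", "3600")
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(MAINTENANCE_PAGE)))
        self.end_headers()
        self.wfile.write(MAINTENANCE_PAGE)

    def log_message(self, fmt, *args):
        pass  # keep the sketch quiet

# To serve it: HTTPServer(("", 8080), MaintenanceHandler).serve_forever()
```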

My site certainly doesn’t support automated failover; I have to make a DNS change at my domain registrar, Go Daddy, to a different host that is already prepared, and then wait for delegation. But I am now prepared for a significant outage like the previous one.

What determines authoritative information

I needed to visit a particular store in New York on Sunday, on a friend’s referral. I knew they had two locations. Like all tech-savvy people I googled sports authority new york. I even visited the website from the link of the top result (with map). I typed the address from the map into my iPhone; why somebody hasn’t invented a means to point and click I don’t know (it probably does exist, but finding it and knowing about it is a completely more complex problem). I even clicked on the map and zoomed in to find the nearest subway stop.

I got on the NY subway and headed into Manhattan. What resulted was me scratching my head when I could not find the intended store. In fact, the store never existed at 57 W 57th St. Even trying the phone number as per Google resulted in a no answer. Fortunately the trusty iPhone with Internet access and viewing the store locator on the official Sports Authority website enabled me to find the closest store, over a mile away.

I have often joked about the reliance on online information and the assumption that even trusted sites provide accurate information. I’ve used this example previously: I know 1 mile is approximately 1.6 kilometers, and if you put in “convert 1 mile to kilometer”, Google gives you the answer “1 mile = 1.609344 kilometers”. What if that were in fact wrong, and it was 1.659344, for example?

Where are the safeguards for verifying information? Could it even be possible?
The ready availability of information does not equate to good information; indeed, today searching for something can provide too much information and not exactly what you are seeking.

One wonders!

Don't use HostMonster

Following a 2-4 day outage of my dedicated server at my hosting provider, I decided to move non-critical websites to shared hosting. I have one account with 1&1, but after a recommendation from a friend I created a second account with HostMonster, to share load and act as a backup. I was able to move things over and get some domains there, but it didn’t last long.


Probably about a week after my account was created, they decided to move my account. They didn’t notify me it was going to happen; they just did it. They lost all my files and did not tell me, even after multiple inquiries and phone calls.

Here is some history.

Tue, Jun 10, 2008 at 8:59 PM

Dear Hostmonster Customer,

Hostmonster has started migrating your account (ronaldbr).
Below you will find important migration details.  Please refer to your ticket
number 0 for any specific details.

Although Hostmonster will do everything possible to ensure that your
migration goes quickly and smoothly; it is important to understand
that your account will be moving from one physical server location to
another. During migration the IP address attached to your domain name
will be changed from your old server IP to your new server IP. This will
cause a temporary interruption in email, ftp, and the visibility
of your website.

This window of interruption occurs because most Internet Service Providers
(ISP's) take 24-72 hours to clear their Cache.

Although this window could last approximately 24-72 hours it typically
only lasts 48 hours before your site becomes fully functional again. Your
web browser (IE, Firefox, Netscape) has a Cached version of your site
stored on your local system. In some cases it will help if you clear your
browser cache. For more Information about Cache and clearing your browser
Cache please review our article on:

If after waiting 48 hours and clearing your browsers Cache, your
website has not begun functioning normally please contact our World
Class U.S. based Support Team by phone:

   Hostmonster Support:

       * Main Line: (866) 573-4678

       * Outside U.S: (801) 494-8462

       Support Questions: Press 2

Important Migration Details:

Your username and cPanel password will remain the same.

   * Migration Date: June 10, 2008

   * Migration Start Time: 06:00 PM MST

   * Migration End Time: (estimate) 04:00 AM MST

   * Old Server IP:

   * New Server IP:

Check out the CEO's Blog!

Come see the latest news, information, and updates on Hostmonster. While
you're there tell me how you think our company is doing!

Thank you again for choosing HostMonster.Com!

Matt Heaton (CEO)

I opened a ticket 24 hrs later at Wed Jun 11 2008 09:26PM.

I got a quick response, but was lied to when told no data was lost.

Wed Jun 11 2008 10:02PM by [email protected]
Dear Customer,
Thanks for contacting us.
We apologize for the trouble you've been having, we are working on the issue the migration has some complications none of your data was lost please allow 24 hours tops for the site to be fully functional again.
Level 1 Support Engineer

I asked for a reason why this was done, and why I wasn’t even notified. I was given a lame response containing both an “isn’t likely” it will happen again and a “might be needed” in the same sentence.

Thu Jun 12 2008 10:13PM by [email protected]
We migrated your site to free up hard-disk space on our server, and I apologize that appropriate notice wasn't given. I have notified my supervisor in an effort to recommend improved communications. It isn't likely that your account will be migrated again for our business needs, but it might be needed in the future.

Thank you for your inquiry.

John Pratt
Support Level 1

It is now Friday morning, probably my second or third call, and some lame text has been added to my ticket saying it’s being escalated.

Fri Jun 13 2008 08:32AM by [email protected]
I am reopening this ticket. It was supposedly moved from host97 to host262 but cpanel man. is still showing host97 and tracert shows host97. I've tried going to both and but it's not logging me in. I've tried to change the password and tried both and with the new password and won't login either. An L3 says the username broke in the migration but it's a different kind of "broken" since it doesn't have the error message in Cpanel Manager. I got L3 approval to move this ticket to the escalations queue.

Nicholas Martin
Support Level 1

So now it’s Sunday morning, and I’m on the phone again; no information, reason, or help forthcoming. A note was added to the ticket Saturday 2pm (not visible to me): apparently all files were lost on both the old server and the new server. When exactly were you going to tell the customer?

This service is woeful, I want my money back.

Of course, when I said I wanted to cancel my account and get my money back (I can’t log in, remember), I was told I would have to call back when the billing department was open.

Well, my complaint will be going to [email protected] – Supervisor of Host Monster Tech Support.

Handling Disaster 101

I’ve had to accept the “practice what you preach” pill recently due to a disaster at my hosting provider. See Learning from a Disaster.

While it was my own personal site on a dedicated server and not a revenue-generating business, I found that my MySQL backup strategy was incomplete (it is also based on code four years old). I found that I had not tested my disaster recovery plan adequately. I have used my backup and recovery approach successfully in the past for various controlled situations and testing.

So what mistakes did I make? There were two.

1. I was using a cold backup approach, specifically copying the entire MySQL database in a controlled manner at the file-system level, with copies also stored on an alternate shared hosting server. This works fine when your backup server supports restoring data in this format; however, if your backup shared hosting facility does not give you access to the MySQL data area, it does not work. Not wanting to pay for two dedicated hosts, I find this backup solution impractical for my present hosting. Time to consider alternatives, such as being prepared with an EC2 image.

2. I recently moved to running two MySQL instances, 5.0 GA and 5.1 RC. The problem is I didn’t adjust my backup scripts to reflect the two instances. So when my server was unavailable for 43 hours I was completely stuck; at best I could throw up a Site Unavailable page rather than my website. Combined with my hosting provider totally screwing up DNS, and admin access to manage DNS, for 3-4 days beyond the almost 2 days of total server unavailability, I also had to change who supports my DNS. It would have been easier if I had a DNS DR plan.
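A logical backup also sidesteps the first mistake, since a mysqldump file can be restored on any host with MySQL access, shared hosting included. A minimal sketch of the per-instance fix; the instance names, ports, user and paths are assumptions for illustration:

```python
# Drive the backup from a list of instances so a second MySQL
# instance (e.g. 5.1 RC alongside 5.0 GA) cannot be silently skipped.
INSTANCES = [
    {"name": "mysql50_ga", "port": 3306},
    {"name": "mysql51_rc", "port": 3307},
]

def dump_command(instance, user="backup", outdir="/backup"):
    """Build a logical (mysqldump) backup command for one instance."""
    return [
        "mysqldump",
        "--user=%s" % user,
        "--port=%d" % instance["port"],
        "--protocol=TCP",          # force the port, not a shared socket
        "--all-databases",
        "--single-transaction",    # consistent InnoDB snapshot
        "--result-file=%s/%s.sql" % (outdir, instance["name"]),
    ]

for inst in INSTANCES:
    print(" ".join(dump_command(inst)))
```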

The lesson is, no matter how trivial the information or website, if you actually want the content, make sure you test your recovery strategy ahead of a disaster.

Learning from a Disaster

As Farhan has already pointed out to us in Disaster is Inevitable – Must shutdown generators, my primary hosting provider The Planet had a serious meltdown: 9,000 servers unavailable, along with DNS and their administration application.

My server was effectively totally unavailable from 3PM Saturday until 10AM Monday, 43 hours in total.

The problem didn’t stop there. I started and verified servers and domains, but some 8 hours later I found that the DNS was wrong on two important domains (I hadn’t discovered this earlier because I have them in my local /etc/hosts), as I had moved them to a different IP about 2 weeks earlier.

The Planet denied any problems; a ticket was logged to get them fixed because the admin interface was still down.
Wind forward, and the Service Updates page states:

June 3 – 3:00pm CDT
All DNS Zone files for ns1, ns2, ns5 and ns6 are completely updated as of the information that was available 5:30PM Central Saturday. All DNS servers have been rebooted and BIND has been restarted.

What a flat-out lie. A DNS lookup directly against the name server confirms it’s wrong. At 12:00am Wednesday, the admin interface finally gives access to view and update zone files. What the? It indicates an IP address that is what it should be, but clearly not what the name servers are actually returning.
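Checking a name server directly, bypassing local caches and /etc/hosts, is what exposed this, e.g. `dig @ns1.example.com mydomain.com A +noall +answer` (the server name here is a placeholder). A small sketch of parsing that output so the check can be scripted; the sample data is illustrative, not a capture from the incident:

```python
# Parse the answer section of `dig +noall +answer` output to see
# which A records a specific name server is really serving.
SAMPLE = (
    "example.com.\t3600\tIN\tA\t192.0.2.10\n"
    "example.com.\t3600\tIN\tA\t192.0.2.11\n"
)

def a_records(dig_output):
    """Return IPv4 addresses from A records in dig answer lines."""
    addresses = []
    for line in dig_output.splitlines():
        fields = line.split()
        if len(fields) == 5 and fields[2] == "IN" and fields[3] == "A":
            addresses.append(fields[4])
    return addresses

# Alarm if the IP you moved to is not what the name server returns.
expected = "192.0.2.10"
assert expected in a_records(SAMPLE)
```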

I will so be sending a complaint to [email protected] when I finally get this resolved.

And just to make this all worse, my professional site is at present of critical importance. It’s my only exposure for providing information to potential employers, and it’s used for the preparation of consulting information. With events starting today for NY Internet Week, I was forced earlier today to purchase DNS services elsewhere. That also didn’t go to plan, but that’s yet another story.

Working with Google App Engine

Yesterday I took a more serious look at Google App Engine; I had gotten a developer account some weeks ago.

After going through the getting started demo some time ago, I chose an idea for a Facebook application and started in true eXtreme Programming (XP) style (i.e. what’s the bare minimum required for a first iteration?). I taught myself some Python, and within just a few minutes had some working data being randomly generated, entirely within the development SDK environment on my MacBook. I was not able to deploy initially via the big blue deploy button; the catch is you have to register the application manually online first.

Then it all worked, and hey presto, I had my application up at the provided domain hosting.

Having come from a truly relational environment, most notably MySQL in recent years, I found the Datastore API different in a number of ways.

  • There is no equivalent of sequences/auto-increment. There is an internal unique key, but it’s a string, not an integer, so I could not reuse it as one.
  • The ListProperty enables Python lists (like arrays) to be stored easily.
  • The ReferenceProperty acts as a foreign-key-like relationship, and can then be referenced further within an object hierarchy.
  • I really missed an interactive interface. You have no ability to look at your data; specifically, I wanted to inspect some data, then delete some data, but I had to do all of this via code.
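To make the first and third points concrete, here is a small pure-Python mimic (emphatically not the App Engine API) of entities whose keys are opaque strings rather than reusable integers, and whose reference properties are simply stored keys:

```python
import uuid

# Pure-Python mimic of two Datastore traits (NOT the App Engine API):
# every entity gets an opaque string key rather than an auto-increment
# integer, and a reference property just stores another entity's key.
class Entity:
    def __init__(self, kind, **props):
        # Real Datastore keys encode app id, kind and id/name; an
        # opaque string stands in for that structure here.
        self.key = "%s:%s" % (kind, uuid.uuid4().hex)
        self.props = props

author = Entity("Author", name="Ronald", tags=["mysql", "python"])  # list property
post = Entity("Post", title="Hello", author=author.key)             # reference

assert isinstance(post.props["author"], str)  # a key string, not an int
assert post.props["author"] == author.key
```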

Having developed a skeleton Facebook application before in PHP, I figured a Python version would not be much more work, but here is where I got stumped. The information at Hosting a Facebook Application on Google AppEngine, leveraging the PyFacebook project, didn’t enable me to integrate Google App Engine with Facebook just yet.

This had me thinking I needed to resort to a standalone simple Python Facebook application to confirm the PyFacebook usage. Now my problems started. Under Mac OS X it’s a lot more complex to install and configure Python/Django etc. than under Linux. I tried to do it on my dedicated server, but drat, Python there is at 2.3.4, and it seems 2.5.x is needed.

Still, it was a valuable exercise. I dropped the Facebook goal and just worked on more Google App Engine stuff. It is still early days, but it was productive to try out this new technology.

What I need to work on now is how to hold state within the Python infrastructure so I can manage a user login, and store and retrieve user data for my sample app.