Basic scalability principles to avert downtime

In the press over the last two days has been the reported outage of Amazon Web Services Elastic Compute Cloud (EC2) in just one Northern Virginia data center. This has affected many large websites including FourSquare, Hootsuite, Reddit and Quora. A detailed list can be found at ec2disabled.com.

For these popular websites, was this avoidable? Absolutely.

Basic scalability principles, if deployed in these systems' architecture, would have averted the significant downtime regardless of the development stack. While I work primarily with MySQL, these principles are not new, nor are they complicated; they are fundamental concepts in scalability that apply to any technology, including the popular MongoDB being used by a number of affected sites.

Scalability 101 involves some simple rules. Here are just two that seem to have been ignored by many of those affected by this recent AWS EC2 outage.

  1. Never put all your eggs in one basket. If you rely on AWS completely, or on just one availability zone, you are putting all your eggs in one basket.
  2. Always keep your important data close to home. When it comes to what is most critical to your business, you need access to and control of your information. At 5am, when the CEO asks how long the business will be unavailable and what is needed to resolve the problem, “We have no control over this and have no ETA” is not an acceptable answer.

With a successful implementation and appropriate data redundancy you may not have an environment immediately available, but you do have access to your important information and the ability to create one quickly. Many large hosting companies can provide additional hardware on near demand, especially if you have an initial minimal footprint. Indeed, using Amazon Web Services (AWS) as a means to avert a data center disaster is an ideal implementation of Infrastructure as a Service (IaaS). Even with this issue, organizations that had planned for this type of outage could have easily migrated to another, unaffected AWS availability zone.
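
As a purely illustrative sketch (the AMI, key pair, instance type and zone below are placeholders, not details from any affected site), an organization with its data and a registered AMI available elsewhere can launch replacement capacity in an unaffected availability zone with the standard EC2 API tools:

# Launch a replacement instance in an unaffected availability zone
# (ami-xxxxxxxx, my-keypair and us-east-1b are placeholder values)
ec2-run-instances ami-xxxxxxxx -k my-keypair -t m1.large -z us-east-1b
ec2-describe-instances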

Furthermore, a system architecture that supports various levels of data availability and scalability ensures you can handle many more types of unavailability without the significant system downtime recently seen. There are many different types of availability and unavailability; know what your definition of downtime is. Supporting disaster scenarios should be a primary focus of scalability, not an afterthought.

As an expert in performance and scalability I can help your organization design a suitable architecture to support successful scalability and disaster recovery. This is not rocket science; however, many organizations gamble without the expertise of a professional to ensure business viability.

Problems compiling MySQL 5.4

It seems that in the year Sun had for improving MySQL, and with an entirely new 5.4 branch, the development team could not fix the autoconf and compile dependency problems that have been in MySQL for all the years I've been compiling it. Drizzle has got this right, thanks to the great work of Monty Taylor.

I'm working on the Wafflegrid AWS EC2 AMIs for Matt Yonkovit, and while compiling 5.1 was straightforward under Ubuntu 8.10 Intrepid, compiling 5.4 was more complicated.

For MySQL 5.1 I needed only to do the following:

# Ubuntu 8.10 build prerequisites, then the standard autoconf build
apt-get install -y build-essential
apt-get install libncurses5-dev
./configure
make
make install

For MySQL 5.4, I elected to use the BUILD scripts (based on Wafflegrid recommendations). That didn't get far before I needed:

apt-get install -y automake libtool
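
For reference, the BUILD-script route looks roughly like this (a sketch from memory, assuming the 5.4 source tree layout; compile-pentium64-max is just one of the provided targets, not necessarily the Wafflegrid-recommended one):

cd /src/mysql-5.4.0-beta
BUILD/autorun.sh              # regenerate the autotools files (needs automake and libtool)
BUILD/compile-pentium64-max   # a predefined configure and make combination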

You then have to let MySQL 5.4 compile for 10+ minutes before getting an obscure error, and then consider what dependencies may be missing. I don't like to do a blanket apt-get of a long list of proposed packages unless I know they are actually needed.

The error was:

make[1]: Entering directory `/src/mysql-5.4.0-beta/sql'
make[1]: warning: -jN forced in submake: disabling jobserver mode.
/bin/bash ../ylwrap sql_yacc.yy y.tab.c sql_yacc.cc y.tab.h sql_yacc.h y.output sql_yacc.output -- -d --verbose
make -j 6 gen_lex_hash
make[2]: Entering directory `/src/mysql-5.4.0-beta/sql'
rm -f mini_client_errors.c
/bin/ln -s ../libmysql/errmsg.c mini_client_errors.c
make[2]: warning: -jN forced in submake: disabling jobserver mode.
rm -f pack.c
../ylwrap: line 111: -d: command not found
/bin/ln -s ../sql-common/pack.c pack.c
....
make[1]: Leaving directory `/src/mysql-5.4.0-beta/sql'
make: *** [all-recursive] Error 1

What a lovely error: ../ylwrap: line 111: -d: command not found

ylwrap is part of yacc, and by default in this instance it's not even an installed package. I've compiled MySQL long enough to know that it requires yacc, and actually bison, but do you think it would hurt if configure told the user this?
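
The missing dependency is bison; installing it and re-running the configure/BUILD step resolves this class of error:

apt-get install -y bison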

It's also been some time since I've compiled MySQL source, having focused instead on Drizzle. I had forgotten just how many compile warnings MySQL throws. Granted, a warning is not an error, but you should not just ignore them when building a quality product.

Announcing Drizzle on EC2

I have published the very first shareable Drizzle Amazon Machine Image (AMI) for AWS EC2, based on the good feedback from my discussion at the Drizzle Developer Day on what options we should try.

This first version is a 32-bit developer instance, showcasing Drizzle and all the necessary developer tools to build Drizzle from source.

What you will find on drizzle-ami/intrepid-dev32 – ami-b858bfd1

Ubuntu 8.10 Intrepid 32 bit base server installation:

  • build tools
  • drizzle dependencies
  • bzr 1.31.1

From the respective source trees the following software is available:

  • drizzle 2009.04.997
  • libdrizzle 0.0.2
  • gearman 0.0.4
  • memcached 1.2.8
  • libmemcached 0.28

Drizzle has been configured with the necessary dependencies for PAM authentication, http_auth, libgearman and MD5, but these don't seem to be available in the binary distribution.

I will be creating additional AMIs, including 64-bit and LAMP-ready binary-only images.

The following example shows using drizzle on this AMI. Some further work is necessary for full automation, parameters and logging. I’ve raised a number of issues the Drizzle Developers are now hard at work on.

1. Starting Drizzle

ssh <user>@<ec2-public-dns>
sudo /etc/init.d/drizzle-server.init start &

2. Testing Drizzle (the sakila database has been installed)

$ drizzle
Welcome to the Drizzle client..  Commands end with ; or \g.
Your Drizzle connection id is 4
Server version: 2009.04.997 Source distribution

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

drizzle> select version();
+-------------+
| version()   |
+-------------+
| 2009.04.997 |
+-------------+
1 row in set (0 sec)

drizzle> select count(*) from sakila.film;
+----------+
| count(*) |
+----------+
|     1000 |
+----------+
1 row in set (0 sec)

3. Compiling Drizzle

sudo su - drizzle
ls
deploy  drizzle  libdrizzle  sakila-drizzle
cd drizzle
./configure --help
Description of plugins:

   === HTTP Authentication Plugin ===
  Plugin Name:      auth_http
  Description:      HTTP based authentications
  Supports build:   static and dynamic

   === PAM Authenication Plugin ===
  Plugin Name:      auth_pam
  Description:      PAM based authenication.
  Supports build:   dynamic

   === compression UDFs ===
  Plugin Name:      compression
  Description:      UDF Plugin for compression
  Supports build:   static and dynamic
  Status:           mandatory

   === crc32 UDF ===
  Plugin Name:      crc32
  Description:      UDF Plugin for crc32
  Supports build:   static and dynamic
  Status:           mandatory

   === Error Message Plugin ===
  Plugin Name:      errmsg_stderr
  Description:      Errmsg Plugin that sends messages to stderr.
  Supports build:   dynamic

   === Daemon Example Plugin ===
  Plugin Name:      hello_world
  Description:      UDF Plugin for Hello World.
  Supports build:   dynamic

   === Gearman Logging Plugin ===
  Plugin Name:      logging_gearman
  Description:      Logging Plugin that logs to Gearman.
  Supports build:   dynamic

   === Query Logging Plugin ===
  Plugin Name:      logging_query
  Description:      Logging Plugin that logs all queries.
  Supports build:   static and dynamic
  Status:           mandatory

   === Syslog Logging Plugin ===
  Plugin Name:      logging_syslog
  Description:      Logging Plugin that writes to syslog.
  Supports build:   static and dynamic
  Status:           mandatory

   === MD5 UDF ===
  Plugin Name:      md5
  Description:      UDF Plugin for MD5
  Supports build:   static and dynamic

   === One Thread Per Connection Scheduler ===
  Plugin Name:      multi_thread
  Description:      plugin for multi_thread
  Supports build:   static
  Status:           mandatory

   === Old libdrizzle Protocol ===
  Plugin Name:      oldlibdrizzle
  Description:      plugin for oldlibdrizzle
  Supports build:   static
  Status:           mandatory

   === Pool of Threads Scheduler ===
  Plugin Name:      pool_of_threads
  Description:      plugin for pool_of_threads
  Supports build:   static
  Status:           mandatory

   === Default Signal Handler ===
  Plugin Name:      signal_handler
  Description:      plugin for signal_handler
  Supports build:   static
  Status:           mandatory

   === Single Thread Scheduler ===
  Plugin Name:      single_thread
  Description:      plugin for single_thread
  Supports build:   static
  Status:           mandatory

   === Archive Storage Engine ===
  Plugin Name:      archive
  Description:      Archive Storage Engine
  Supports build:   static
  Status:           mandatory

   === Blackhole Storage Engine ===
  Plugin Name:      blackhole
  Description:      Basic Write-only Read-never tables
  Supports build:   static and dynamic
  Configurations:   max, max-no-ndb

   === CSV Storage Engine ===
  Plugin Name:      csv
  Description:      Stores tables in text CSV format
  Supports build:   static
  Status:           mandatory

   === Memory Storage Engine ===
  Plugin Name:      heap
  Description:      Volatile memory based tables
  Supports build:   static
  Status:           mandatory

   === InnoDB Storage Engine ===
  Plugin Name:      innobase
  Description:      Transactional Tables using InnoDB
  Supports build:   static and dynamic
  Configurations:   max, max-no-ndb
  Status:           mandatory

   === MyISAM Storage Engine ===
  Plugin Name:      myisam
  Description:      Traditional non-transactional MySQL tables
  Supports build:   static
  Status:           mandatory


Report bugs to <http://bugs.launchpad.net/drizzle>.

Setting up MySQL on Amazon Web Services (AWS) Presentation

On Tuesday at the MySQL Camp 2009 in Santa Clara I presented Setting up MySQL on Amazon Web Services (AWS).

This presentation assumed you know nothing about AWS and have no account. With Internet access via a browser and a valid credit card, you can have your own running web server on the Internet in under 10 minutes, just by pointing and clicking.

We also stepped beyond online point and click into more detail, using the supplied command line tools to demonstrate some more advanced usage.
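
As an indication of that command line usage (a sketch with placeholder names, not the exact commands from the slides):

ec2-add-keypair my-keypair                    # create an SSH key pair
ec2-run-instances ami-xxxxxxxx -k my-keypair  # launch an instance from an AMI
ec2-describe-instances                        # list instances and their public DNS names
ec2-terminate-instances i-xxxxxxxx            # shut the instance down when finished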

Extending application data to the cloud

I was one of the invited panel speakers at A Panel on Cloud Computing this week in New York. As one of two non-vendor presenters, it was a great experience to be invited and to be involved with the vendors.

While I never got to use my slides (available here), I did get to present certain content, and indeed the questions and discussions on the night touched on other points of my content.

Cloud computing is here; these are early days, and new players will continue to emerge. For example, from the panel there was AppNexus, reviewed favorably at InfoWorld in comparison with EC2 and Google App Engine; 10gen, an open source stack solution; and Kaavo, which from an initial 60 seconds of playing provides a management service on top of AWS similar to what ElasticFox provides. I need to investigate further how far the feature set extends and whether it would compete with others such as RightScale.

The greatest mystery came from Hank Williams and his stealth startup Kloudshare. He did elaborate on where they aim to provide services. A new term discussed was “Tools as a Service”, akin metaphorically to moving from writing in assembly language to the advanced frameworks of today's generation of languages such as Java and Ruby.

Thanks to Murat Aktihanoglu of Unype, who chaired the event.

Your data and the cloud

I will be speaking on July 29th in New York at an Entrepreneurs Forum, on A Free Panel on Cloud Computing. With a number of experts including Hank Williams of KloudShare, Mike Nolet of AppNexus, and Hans Zaunere of New York PHP fame, it should be a great event.

The focus of my presentation will be “Extending existing applications to leverage the cloud”, where I will discuss both the advantages of the cloud and the complexities and issues you will encounter, such as data management, data consistency, loss of control, security and latency.

Using traditional MySQL-based applications, I'll be providing an approach that can lead to your application gaining the greater power of cloud computing.
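
As one illustration of the kind of first step involved (a minimal sketch of my own choosing rather than the presentation content; host names and credentials are placeholders), seeding a cloud-hosted MySQL replica from an existing application database might look like this:

# Take a consistent dump of the existing master, recording its binary log position
mysqldump --single-transaction --master-data=2 --all-databases | gzip > backup.sql.gz
# Copy the dump to the cloud instance and load it
scp backup.sql.gz user@ec2-host:
ssh user@ec2-host 'gunzip -c backup.sql.gz | mysql'
# Replication is then configured on the replica with CHANGE MASTER TO and START SLAVE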


About the Author

Ronald Bradford provides Consulting and Advisory Services in Data Architecture, Performance and Scalability for MySQL Solutions. An IT industry professional for two decades with extensive database experience in MySQL, Oracle and Ingres, his expertise covers data architecture, software development, migration, performance analysis and production system implementations. His knowledge from 10 years of consulting across many industry sectors, technologies and countries has provided unique insight into providing solutions to problems. For more information, Contact Ronald.

Getting Started with Simple DB

Continuing my investigation of alternative data management options with cloud computing, I'm now evaluating Amazon SimpleDB. Still in restricted beta, it helps to have a friend on the inside.

Working through the Getting Started Guide (API Version 2007-11-07) was OK, though annoying in parts. Here are some issues I found; I was working with Java as the programming language.

  • The docs enable you to view the language syntax in Java, C#, Perl, PHP, VB.NET and ScratchPad, and you can also restrict the view to a specific language; a rather cool feature. One observation is that there is no Python, which is rather ironic, as my first investigation was Google App Engine (GAE), where the only language is Python, something I had to learn first.
  • Preparing the Samples asks you to download the Amazon SimpleDB sample code, but this is not actually a link to the sample code, rather an index to Community Code. I used the Java Library for Amazon SimpleDB, which wasn't even on the first page of results.
  • The supplied docs for specifying the classpath were rather wrong; it helps to simply find all the .jar files and include them. Mine looks like the following (there is a short usage sketch after this list):
    • #!/bin/sh
      #  http://docs.amazonwebservices.com/AmazonSimpleDB/2007-11-07/GettingStartedGuide/?ref=get-started

      SDB_HOME="/put/directory/to/unzip/here"
      export CLASSPATH=$CLASSPATH:$SDB_HOME/src/com/amazonaws/sdb/samples/
      export CLASSPATH=$CLASSPATH:$SDB_HOME/lib/amazon-simpledb-2007-11-07-java-library.jar
      export CLASSPATH=$CLASSPATH:$SDB_HOME/third-party/log4j-1.2.14/log4j-1.2.14.jar
      export CLASSPATH=$CLASSPATH:$SDB_HOME/third-party/commons-codec-1.3/commons-codec-1.3.jar
      export CLASSPATH=$CLASSPATH:$SDB_HOME/third-party/commons-logging-1.1/commons-logging-1.1.jar
      export CLASSPATH=$CLASSPATH:$SDB_HOME/third-party/jaxb-ri-2.1/jaxb-xjc.jar
      export CLASSPATH=$CLASSPATH:$SDB_HOME/third-party/jaxb-ri-2.1/activation.jar
      export CLASSPATH=$CLASSPATH:$SDB_HOME/third-party/jaxb-ri-2.1/jaxb-impl.jar
      export CLASSPATH=$CLASSPATH:$SDB_HOME/third-party/jaxb-ri-2.1/jaxb-api.jar
      export CLASSPATH=$CLASSPATH:$SDB_HOME/third-party/jaxb-ri-2.1/jsr173_1.0_api.jar
      export CLASSPATH=$CLASSPATH:$SDB_HOME/third-party/commons-httpclient-3.0.1/commons-httpclient-3.0.1.jar

  • All the examples in the docs then refer to making changes such as “invokeCreateDomain(service, action); line and add the following lines after // @TODO: set action parameters here:”. The problem is that the samples don't have an action variable, but rather a variable called request. The comment in the code “// @TODO: set request parameters here” is at least accurate.
  • The docs contain a lot of Java syntax that would not, for example, compile correctly. There are plenty of occurrences of a missing semicolon ‘;’.
  • Each example defines:
    String accessKeyId = "";
    String secretAccessKey = "";
    This is OK for the first example, but as soon as I moved to the second, I refactored these into an interface called Constants.
  • In all the examples, they never provide any sample output, which would help just to confirm things. At the Java Library for Amazon SimpleDB download link there is an example output, but it's outdated and does not include the new data attribute called BoxUsage. My output is:
    • CreateDomain Action Response
      =============================================================================
      
          CreateDomainResponse
              ResponseMetadata
                  RequestId
                      f04df8eb-71fa-4d4e-9bd5-cc98e853a2e4
                  BoxUsage
                      0.0055590278
      
  • And now some specifics. In a relational database such as MySQL, you have Instance/Schema/Table/Column. SimpleDB would appear to require a separate AWS account for instance management; that's probably a good thing, as it will enable tracking of costs. There appears to be no concept of a schema. Data is stored in Domains, the equivalent of a Table. Within each Domain, you specify Attributes, a correlation with Columns. One key difference is the ability to define a set of Attributes with the same identity (much like a list, which is supported via Python/GAE). For any row of data, you must specify an itemName, this being equivalent to a Primary Key. These names take me back to the old days (20 years ago) of Logical Data Models that used entities, attributes and relationships.
  • The term Replace is used when updating data for a given row.
  • When retrieving data, you first return a list of itemNames; you can then retrieve the Attributes for a given item.
  • You can perform a simple where qualification using a Query Expression, including against multiple Attributes via intersection syntax.
  • An observation of significant concern is the lack of security around any type of operation. The Getting Started guide ends with deleting the Domain. Is there no means to define permissions for different types of users, such as an application user and a database administrator for managing the objects?
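
For completeness, compiling and running one of the samples once the classpath script above is in place looks roughly like this (a sketch; sdb-classpath.sh is a file name I chose for that script, and CreateDomainSample stands in for whichever sample class you edited):

. ./sdb-classpath.sh         # source the classpath script shown above
cd $SDB_HOME/src
javac com/amazonaws/sdb/samples/CreateDomainSample.java
java com.amazonaws.sdb.samples.CreateDomainSample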

Well, it took me longer to write this post than to run through the example, but at least on a lazy Sunday afternoon, a first look at SimpleDB was quite simple.

I did also run into an error initially. I first started just via the CLI under Linux (CentOS 5), but switched to installing Eclipse on Mac OS X for better error management, and of course the error didn't occur there.

Setting up on EC2

Thanks to my friend Dustin and his EC2 demo using the Elasticfox Firefox Extension for Amazon EC2, I got an EC2 image set up. With other references (Link 1, Link 2, Link 3) I was also able to create my own AMI.

Some notes specific to my configuration.

Pre-configure the ElasticFox key for launching SSH connections directly from ElasticFox.

mkdir ~/ec2-keys
mv ~/Downloads/elasticfox.pem ~/ec2-keys/id_elasticfox
chmod 600 ~/ec2-keys/id_elasticfox
chmod 700 ~/ec2-keys/
ssh -i /Users/rbradfor/ec2-keys/id_elasticfox <user>@<ec2-public-dns>

Installed Software.

apt-get update
apt-get -y autoremove
apt-get -y install apache2
apt-get -y install mysql-server
# Prompts for password (very annoying)
apt-get -y install php5
apache2ctl graceful
echo "Hello World" > /var/www/index.html
echo "< ? phpinfo() ?>" > /var/www/phpinfo.php

Configuration to save the AMI.

scp -i ~/ec2-keys/id_elasticfox ~/ec2-keys/id_elasticfox pk-CHK7DP4475BWUKIUF4WFDIW3VMYDYOHQ.pem cert-CHK7DP4475BWUKIUF4WFDIW3VMYDYOHQ.pem <user>@<ec2-public-dns>:/mnt
ec2-bundle-vol -d /mnt -c cert-CHK7DP4475BWUKIUF4WFDIW3VMYDYOHQ.pem -k pk-CHK7DP4475BWUKIUF4WFDIW3VMYDYOHQ.pem -u AccountNumber -r i386 -p ubuntu804_lamp
ec2-upload-bundle -b rbradford_804_lamp_ami -m /mnt/ubuntu804_lamp.manifest.xml -a AccessID -s SecretKey
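
The uploaded bundle still needs to be registered before it appears as a launchable AMI; with the EC2 API tools that is one further step (using the bucket and manifest names from above):

ec2-register rbradford_804_lamp_ami/ubuntu804_lamp.manifest.xml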