The Effective MySQL Book Series

Announced on Sunday at Oracle Open World 2011 is the release of the Effective MySQL book series, starting with the "Effective MySQL: Optimizing SQL Statements" title. The goal of the Effective MySQL series is a highly practical, concise and topic-specific reference, providing applicable knowledge on every page. A feedback comment provided today was "no fluff", which is a great comment that reinforces the practical nature of the series.

Details on the Effective MySQL Optimizing SQL Statements page include a sample chapter, code downloads and purchase links for print and e-books at Amazon, McGraw-Hill and Barnes & Noble.

Basic scalability principles to avert downtime

In the press over the last two days has been the reported outage of the Amazon Web Services Elastic Compute Cloud (EC2) in just one Northern Virginia data center. This has affected many large websites including FourSquare, Hootsuite, Reddit and Quora. A detailed list can be found at ec2disabled.com.

Was this avoidable for these popular websites? Absolutely.

Basic scalability principles, if deployed in these systems' architecture, would have averted the significant downtime regardless of your development stack. While I work primarily in MySQL, these principles are not new, nor are they complicated; they are fundamental concepts in scalability that apply to any technology, including the popular MongoDB that is being used by a number of affected sites.

Scalability 101 involves some simple rules. Here are just two that seem to have been ignored by many affected by this recent AWS EC2 outage.

  1. Never put all your eggs in one basket. If you rely on AWS completely, or you rely on just one availability zone, that is putting all your eggs in one basket.
  2. Always keep your important data close to home. When it comes to what is most critical to your business you need access to, and control of, your information. At 5am, when the CEO asks how long the business will be unavailable and what is needed to resolve the problem, "We have no control over this and have no ETA" is not an acceptable answer. One way of keeping a copy of your data close to home is sketched below.
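One practical way to satisfy the second rule is an off-site MySQL replica outside your primary provider, for example in a second region or at a different hosting company. The following is a minimal sketch only; the host name, user, password and binary log coordinates are placeholders you would take from your own master.

# On the off-site replica (placeholders throughout; take real values from SHOW MASTER STATUS on the master)
mysql -e "CHANGE MASTER TO
  MASTER_HOST='primary.example.com',
  MASTER_USER='repl',
  MASTER_PASSWORD='********',
  MASTER_LOG_FILE='mysql-bin.000001',
  MASTER_LOG_POS=4;
  START SLAVE;"

# Verify replication is running and how far behind it is
mysql -e "SHOW SLAVE STATUS\G"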

With a successful implementation and appropriate data redundancy you may not have an environment immediately available; however, you have access to your important information and the ability to create one quickly. Many large hosting companies can provide additional hardware on near demand, especially if you have an initial minimal footprint. Indeed, using Amazon Web Services (AWS) as a means to avert a data center disaster is an ideal implementation of Infrastructure as a Service (IaaS). Even with this issue, organizations that had planned for this type of outage could have easily migrated to another AWS availability zone that was unaffected.

Furthermore, a system architecture that supports various levels of data availability and scalability ensures you can handle many more types of unavailability without the significant system downtime recently seen. There are many different types of availability and unavailability; know what your definition of downtime is, and make supporting disasters a primary focus of scalability, not an afterthought.

As an expert in performance and scalability I can help your organization in the design of a suitable architecture to support successful scalability and disaster recovery. This is not rocket science; however, many organizations gamble by not engaging the expertise of a professional to ensure business viability.

Upcoming NY Presentation – How Better Indexes Save You Money

For all those in New York, this is an upcoming MySQL presentation held in conjunction with our colleagues at General Assembly on March 22nd, 2011.

This presentation, "How Better Indexes Save You Money", will discuss how one simple technique can result in huge MySQL performance improvements with zero code changes necessary. Many people think they know indexes; however, MySQL and the MySQL storage engines have some specifics that differ from more traditional RDBMS products. Learn some of the key analysis and verification techniques and be able to see immediate potential results in performance.
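As a taste of the analysis and verification cycle this type of talk covers, the sketch below uses EXPLAIN before and after adding an index. The table and column names are purely illustrative and not taken from the presentation itself.

# Illustrative only: hypothetical schema and column names
mysql test -e "EXPLAIN SELECT * FROM orders WHERE customer_id = 42\G"   # type: ALL indicates a full table scan
mysql test -e "ALTER TABLE orders ADD INDEX (customer_id)"
mysql test -e "EXPLAIN SELECT * FROM orders WHERE customer_id = 42\G"   # expect type: ref, key: customer_id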

You can find more details at Meetup.com EffectiveMySQL. This new group is all about highly technical MySQL-related content: "no fluff, just stuff".

Microsoft's position on MySQL

While Oracle provides no official information that it plans to improve MySQL and use it as a product to compete with Microsoft SQL Server, it is rather obvious from what little information you can glean from public announcements that this is a clear business goal.

Microsoft, however, is publicly seeking a Senior Product Manager, MySQL Compete in the Marketing department. The goal is nothing technical; it's all PR to dismiss MySQL as a viable product. I quote: "you will equip field and partners to win in competitive engagements against MySQL, and you will influence market perception in favor of Microsoft technologies." Here is the full job description for those that want an amusing read.

This information came from an Oracle colleague of mine based in Asia.

A Cassandra twitter clone

Following my successful Cassandra cluster setup, and having a potential client example to work with running Ruby on Rails (RoR), I came across the following examples in Ruby.

Not being a Ruby developer, I thought it was time to investigate further. Starting first on Mac OS X 10.5, I found the first-line example of installing Cassandra via gem unsuccessful.

$ gem install cassandra
Updating metadata for 1 gems from http://gems.rubyforge.org
.
complete
ERROR:  could not find cassandra locally or in a repository

Some more reading highlights: "Otherwise, you need to install Java 1.6, Git 1.6, Ruby, and Rubygems in some reasonable way."

In case you didn’t read my earlier posts, Java 6 is installed, but not the default.

export JAVA_HOME=/System/Library/Frameworks/JavaVM.framework/Versions/1.6/Home
export PATH=$JAVA_HOME/bin:$PATH

I managed to install RubyGems by following Installing Ruby on Rails on Mac OS X.

$ sudo gem install rubygems-update

Updating metadata for 1 gems from http://gems.rubyforge.org
.
complete
Successfully installed rubygems-update-1.3.6
1 gem installed
Installing ri documentation for rubygems-update-1.3.6...
Installing RDoc documentation for rubygems-update-1.3.6...
Could not find main page README
Could not find main page README
Could not find main page README
Could not find main page README

$ sudo update_rubygems
RubyGems 1.3.6 installed

=== 1.3.6 / 2010-02-17

NOTE:

http://rubygems.org is now the default source for downloading gems.

You may have sources set via ~/.gemrc, so you should replace
http://gems.rubyforge.org with http://rubygems.org

http://gems.rubyforge.org will continue to work for the forseeable future.

New features:

* `gem` commands
  * Added `gem push` and `gem owner` for interacting with modern/Gemcutter
    sources
  * `gem dep` now supports --prerelease.
  * `gem fetch` now supports --prerelease.
  * `gem server` now supports --bind.  Patch #27357 by Bruno Michel.
  * `gem rdoc` no longer overwrites built documentation.  Use --overwrite
    force rebuilding.  Patch #25982 by Akinori MUSHA.
* Captial letters are now allowed in prerelease versions.

Bug fixes:

* Development deps are no longer added to rubygems-update gem so older
  versions can update sucessfully.
* Installer bugs:
  * Prerelease gems can now depend on non-prerelease gems.
  * Development dependencies are ignored unless explicitly needed.  Bug #27608
    by Roger Pack.
* `gem` commands
  * `gem which` now fails if no paths were found.  Adapted patch #27681 by
    Caio Chassot.
  * `gem server` no longer has invalid markup.  Bug #27045 by Eric Young.
  * `gem list` and friends show both prerelease and regular gems when
    --prerelease --all is given
* Gem::Format no longer crashes on empty files.  Bug #27292 by Ian Ragsdale.
* Gem::GemPathSearcher handles nil require_paths. Patch #27334 by Roger Pack.
* Gem::RemoteFetcher no longer copies the file if it is where we want it.
  Patch #27409 by Jakub Šťastný.

Deprecation Notices:

* lib/rubygems/timer.rb has been removed.
* Gem::Dependency#version_requirements is deprecated and will be removed on or
  after August 2010.
* Bulk index update is no longer supported.
* Gem::manage_gems was removed in 1.3.3.
* Time::today was removed in 1.3.3.


------------------------------------------------------------------------------

RubyGems installed the following executables:
	/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/gem

NOTE: This second command took over 60 seconds with no user feedback.

I was then able to successfully install Cassandra via Ruby's gem package manager.

$ sudo gem install cassandra
Building native extensions.  This could take a while...
Successfully installed thrift-0.2.0
Successfully installed thrift_client-0.4.0
Successfully installed simple_uuid-0.1.0
Successfully installed cassandra-0.7.5
4 gems installed
Installing ri documentation for thrift-0.2.0...

Enclosing class/module 'thrift_module' for class BinaryProtocolAccelerated not known

Enclosing class/module 'thrift_module' for class BinaryProtocolAccelerated not known
Installing ri documentation for thrift_client-0.4.0...
Installing ri documentation for simple_uuid-0.1.0...
Installing ri documentation for cassandra-0.7.5...
Installing RDoc documentation for thrift-0.2.0...

Enclosing class/module 'thrift_module' for class BinaryProtocolAccelerated not known

Enclosing class/module 'thrift_module' for class BinaryProtocolAccelerated not known
Installing RDoc documentation for thrift_client-0.4.0...
Installing RDoc documentation for simple_uuid-0.1.0...
Installing RDoc documentation for cassandra-0.7.5...

My use of cassandra_helper provided the following expected dependency error.

$ cassandra_helper cassandra
Set the CASSANDRA_INCLUDE environment variable to use a non-default cassandra.in.sh and friends.
(in /Library/Ruby/Gems/1.8/gems/cassandra-0.7.5)
You need to install git 1.6 or 1.7

I found instructions to install git at Installing git (OSX) and installed it via the GUI installer.

I had to add git to my current session's PATH to get my Ruby Cassandra installation working.

$ export PATH=/usr/local/git/bin:$PATH

$ cassandra_helper cassandra
Set the CASSANDRA_INCLUDE environment variable to use a non-default cassandra.in.sh and friends.
(in /Library/Ruby/Gems/1.8/gems/cassandra-0.7.5)
Checking Cassandra out from git
Initialized empty Git repository in /Users/rbradfor/cassandra/server/.git/
remote: Counting objects: 16715, done.
remote: Compressing objects: 100% (2707/2707), done.
remote: Total 16715 (delta 9946), reused 16011 (delta 9364)
Receiving objects: 100% (16715/16715), 19.22 MiB | 1.15 MiB/s, done.
Resolving deltas: 100% (9946/9946), done.
Updating Cassandra.
Buildfile: build.xml

clean:

BUILD SUCCESSFUL
Total time: 2 seconds
HEAD is now at 298a0e6 check-in debian packaging
Building Cassandra
Buildfile: build.xml

build-subprojects:

init:
    [mkdir] Created dir: /Users/rbradfor/cassandra/server/build/classes
    [mkdir] Created dir: /Users/rbradfor/cassandra/server/build/test/classes
    [mkdir] Created dir: /Users/rbradfor/cassandra/server/src/gen-java

check-gen-cli-grammar:

gen-cli-grammar:
     [echo] Building Grammar /Users/rbradfor/cassandra/server/src/java/org/apache/cassandra/cli/Cli.g  ....

build-project:
     [echo] apache-cassandra-incubating: /Users/rbradfor/cassandra/server/build.xml
    [javac] Compiling 247 source files to /Users/rbradfor/cassandra/server/build/classes
    [javac] Note: Some input files use or override a deprecated API.
    [javac] Note: Recompile with -Xlint:deprecation for details.
    [javac] Note: Some input files use unchecked or unsafe operations.
    [javac] Note: Recompile with -Xlint:unchecked for details.

build:

BUILD SUCCESSFUL
Total time: 42 seconds
CASSANDRA_HOME: /Users/rbradfor/cassandra/server
CASSANDRA_CONF: /Library/Ruby/Gems/1.8/gems/cassandra-0.7.5/conf
Listening for transport dt_socket at address: 8888
DEBUG - Loading settings from /Library/Ruby/Gems/1.8/gems/cassandra-0.7.5/conf/storage-conf.xml
....

I was then able to complete the example at up and running with cassandra, running it via the Ruby interactive console.

I was also able to fire up the cassandra-cli and see the data added in Ruby.

$ bin/cassandra-cli -host localhost
Connected to localhost/9160
cassandra> get Twitter.Statuses['1']
=> (column=user_id, value=5, timestamp=1267072406503471)
=> (column=text, value=Nom nom nom nom nom., timestamp=1267072406503471)
Returned 2 results.
cassandra> get Twitter.UserRelationships['5'];
=> (super_column=user_timeline,
     (column=???!??zvZ+?!, value=1, timestamp=1267072426991872)
     (column=??-?!???C?th?, value=2, timestamp=1267072427019091))
Returned 1 results.

Not sure about the data in the second example; the unreadable column names are presumably the binary-packed UUIDs the Ruby client uses for the user_timeline super column, which cassandra-cli renders as garbage.

Getting started with Cassandra

With the motivation from today's public news on Twitter's move from MySQL to Cassandra, my own desire to extend my skills following in-depth discussions at last November's Open SQL Camp, and yesterday's discussion with a new client on persistent key-value store products, today I downloaded, installed and configured Cassandra for the first time. Not that today's news was unexpected; if you follow the Twitter Engineering Open Source projects you would have seen Cassandra, as well as other products, being used or evaluated by Twitter.

So I went from nothing to a working Cassandra node in under 5 minutes. This is what I did.

  1. While I knew this was an Apache project, a Google search yielded The Apache Cassandra Project at http://incubator.apache.org/cassandra/ as the third link. Congrats to Cassandra, now a top-level Apache project; this URL will update soon.
  2. Download Cassandra. Hard to miss with a big green button on the home page. The current version is 0.5.
  3. I read Getting Started, which is the third top-level link on the menu after Home and Download. Step 1 is picking a version, which I've already done; Step 2 is running a single node.
  4. The Getting Started page indicated a problem on Mac OS X with the required minimum Java version. I was installing on Mac OS X 10.5 and CentOS 5.4. I've experienced this Java 6 default path issue before, so I set my JAVA_HOME and PATH accordingly (after I updated the wiki with the correct value).
  5. I extracted the tar file, changed to the directory and took a look at the README.txt file. Yes, I always check this first with any software, and it was relevant here because it includes valuable instructions on creating the default data and log directories.
  6. Start with bin/cassandra -f. No problems!
  7. I then followed the instructions from the link in Step 2 with the CassandraCli, which tests and confirms the installation is operational. The whole sequence is summarized in the sketch after this list.
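For reference, here is the above sequence condensed into shell commands. This is a sketch only: the exact tarball name, and the data/log directory locations taken from README.txt, may differ for your version, so adjust accordingly.

export JAVA_HOME=/System/Library/Frameworks/JavaVM.framework/Versions/1.6/Home
export PATH=$JAVA_HOME/bin:$PATH

tar xzf apache-cassandra-incubating-0.5.0-bin.tar.gz
cd apache-cassandra-incubating-0.5.0
sudo mkdir -p /var/lib/cassandra /var/log/cassandra    # default data and log directories per README.txt
sudo chown $USER /var/lib/cassandra /var/log/cassandra

bin/cassandra -f                      # run in the foreground
bin/cassandra-cli -host localhost     # in a second terminal, confirm the node is operational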

OK, a working environment. I've now installed on a second machine and tested it; however, I now need to configure the cluster, and the documentation is not as straightforward. Time to try out Google again.

On a side note, this is one reason why I love open source. I followed the instructions online and found a mistake in the Mac OS X path; I simply registered and corrected it, providing the benefit of my experience for the next reader(s).

You may also like to view future related posts.

What do MySQL staff think of the acquisition?

It finally dawned on me while reflecting on the year past this Sunday that the missing voice since the announcement of the Oracle acquisition of Sun Microsystems (and therefore MySQL) has been the MySQL employees.

When I worked as an employee for MySQL Inc, the acquisition by Sun Microsystems in 2008 led to several requirements for employees.

  • You were not allowed to talk about the acquisition publicly.
  • You were not allowed to communicate with any Sun (i.e. the acquirer) resources.

In other words it was "business as usual", which is really an oxymoron, because business will never be exactly as it was before the announcement. The ongoing delay in the pending acquisition by Oracle Corporation is really hurting everybody's ability to get on with doing their jobs, be happy with their work, and make a difference in open source and in the lives of all who benefit from using MySQL.

I'm sure many that have words to say are disappointed, worried or even fearful for their own future careers. Comments are always welcome via Mr Anonymous using 10 minute email.

What's new in MySQL 5.4.1

Absolutely nothing?

5.4.0 was released with a change in the MySQL binary distributions, delivering only one 64-bit Linux platform and two Sun Solaris platforms. This was officially announced on April 21, 2009, however the 5.4.0 Release Notes state 05 April 2009. It's not a big deal, but consistency would be nice.

I've seen 5.4.1 mentioned in a few posts, so I decided to try it out. Spending the time to read the 5.4.1 Release Notes for what has changed in two months before downloading and installing, you read:

This release does not differ from 5.4.0 except that binary distributions are available for all MySQL-supported platforms.

Is this going to be the new policy from Sun? Release for Solaris platforms first, then later release for other platforms?

O'Reilly Twitter Boot Camp a success

The first O'Reilly Twitter Boot Camp (#OTBC) was held in New York as a precursor to the 140 Characters Conference (#140conf) on Monday 15th June, 2009.

The opening and closing keynotes were like matching bookends for The Twitter Book (#twitterbook), offered to all attendees and authored by the keynoters @timoreilly and @SarahM.

Attendees came from across the country. Just a few I spoke with came from LA – @EricMueller of @FLWbooks, Texas – @marlaerwin, Vancouver – HootSuite, Las Vegas – @zappos, Boston – @mvolpe, Philadelphia, @SBrownCCI from Cincinnati and @sticky_mommy from Vermont.

The demographics of attendees were a little different from my usual O'Reilly conferences of MySQL, OSCON and Web 2.0. There were fewer than half the attendees with laptops at hand for notes and twittering, offset by the high number of BlackBerry, or should I say shaq-berry, users (thanks Ami @digitalroyalty), easily seen from the back of the steep and dark auditorium. A greater proportion of different industries and genders led to many questions and discussions from users, not just technologists.

The morning panel sessions afforded no question time due to speakers providing good but overtime content. Over lunch, Mike Volpe of HubSpot, a corporate sponsor for the day, set the standard by asking his panel of speakers to stick to time. This afforded almost 30 minutes of question time and a roar of approval from the crowd.

There is a lot of valuable information you can find with a Twitter search of #OTBC. A few examples include:

  • @archivesnext: Good advice: RT @mpedson RT @timoreilly: Twitter usage policy from @zappos at #OTBC: “Just be real and use your best judgement.”
  • @GeekGirlCamp: Hmmmm. Lots of conflicting views on following on Twitter here. What makes YOU follow someone? Would love to know… #OTBC
  • @CarriBugbee: ROI is a tricky thing on twitter; if you’re using it solely to generate revenue, you might be there for the wrong reason – @wholefoods #otbc
  • @mvolpe: “Driving ROI on Twitter” slides and video of my presentation later today for #OTBC – http://tinyurl.com/061509mvolpe
  • @ronaldbradford: Best Practices for Twitter – Build a commercial-grade profile. @CarriBugbee at #OTBC
  • @journalismgal: Ask questions within your tweets even something as simple as your fab apple #otbc
  • @ronaldbradford: Do stay tuned in. Nights, weekends, holidays are all twitter time. Maria Erwin @wholefoods at #OTBC
  • @harrybrelsford: Is Twitter the new Google? That is belief of @erictpeterson Twitter is creating entire new businesses (Flash Light books) #otbc #smbnation

My individual brands of @ronaldbradford and @MySQLExpert will certainly benefit from the wealth of knowledge of the day. If only I had my Twitter name on the T-shirt I was wearing for the event.

The only downsides to the venue were the lack of power for attendees, flaky Internet and a basement auditorium with no cell phone service. These are important things to reconsider for an online technology conference. In true form, the attendees, including myself @ronaldbradford, @SBrownCCI, @GeekGirlCamp and @14str8, used the medium of the conference; our voices were heard and some limited power was made available. Thanks O'Reilly for listening.

Thank you to all speakers @katmeyer, @timoreilly, @steverubel, @zappos, @carribugbee, @twittermoms, @flwbooks, @davidjdeal, @bethharte, @dunkindonuts, @reggiebradford, @wholefoods, @tedmurphy, @adbroad, @digitalroyalty, @erictpeterson, @mvolpe, @laureltouby, @sarahm and to Zappos.com for the after event happy hour.

Problems compiling MySQL 5.4

It seems that in the year Sun has had for improving MySQL, and with an entirely new 5.4 branch, the development team could not fix the autoconf and compile dependency issues that have been in MySQL for all the years I've been compiling MySQL. Drizzle has got it right, thanks to the great work of Monty Taylor.

I'm working on the Wafflegrid AWS EC2 AMIs for Matt Yonkovit, and while compiling 5.1 was straightforward under Ubuntu 8.10 Intrepid, compiling 5.4 was more complicated.

For MySQL 5.1 I needed only to do the following:

apt-get install -y build-essential
apt-get install libncurses5-dev
./configure
make
make install

For MySQL 5.4, I elected to use the BUILD scripts (based on Wafflegrid recommendations). That didn't go far before I needed:

apt-get install -y automake libtool

You then have to let MySQL 5.4 compile for 10+ minutes to get an obscure error, and then consider what dependencies may be missing.
I don’t like to do a blanket apt-get of a long list of proposed packages unless I know they are actually needed.

The error was:

make[1]: Entering directory `/src/mysql-5.4.0-beta/sql'
make[1]: warning: -jN forced in submake: disabling jobserver mode.
/bin/bash ../ylwrap sql_yacc.yy y.tab.c sql_yacc.cc y.tab.h sql_yacc.h y.output sql_yacc.output -- -d --verbose
make -j 6 gen_lex_hash
make[2]: Entering directory `/src/mysql-5.4.0-beta/sql'
rm -f mini_client_errors.c
/bin/ln -s ../libmysql/errmsg.c mini_client_errors.c
make[2]: warning: -jN forced in submake: disabling jobserver mode.
rm -f pack.c
../ylwrap: line 111: -d: command not found
/bin/ln -s ../sql-common/pack.c pack.c
....
make[1]: Leaving directory `/src/mysql-5.4.0-beta/sql'
make: *** [all-recursive] Error 1

What a lovely error: ../ylwrap: line 111: -d: command not found

ylwrap wraps the call to yacc, and by default in this instance yacc is not even an installed package. I've compiled MySQL long enough to know that it requires yacc, and actually bison, but do you think it would hurt if configure told the user this?
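For anyone hitting the same ylwrap error, the missing piece is bison (which provides the yacc functionality); a minimal sketch of the likely fix:

apt-get install -y bison
# then clean and re-run the chosen BUILD script (or configure/make) so sql_yacc.cc is regenerated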

It's also been some time since I've compiled MySQL source, having focused instead on Drizzle. I had forgotten just how many compile warnings MySQL throws. Granted, a warning is not an error, but you should not just ignore them when building a quality product.

Leveraging the power of Twitter

Last week I posted the following Twitter request: "Can somebody loan me (or buy me) a Dell 2950 decked out so I can run and publish some benchmarks. Please!"

In a same-day response I was offered access to two Dell 1950s, and today I'm actually using these machines for my own testing. I would like to thank cafemom (Barry, Anthony & Dan) for the loan of the hardware.

And now the chance to better understand the RAID configuration of the Dell PERC controllers, trying out some different RAID types, LVM configurations and disk tests. When I'm done with my system administrator refresher, I'll then be trying some different MySQL benchmarks to test various MySQL configuration settings, including using the new Juice benchmark.
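For the disk tests, something along the lines of the sysbench fileio mode is what I have in mind; the sketch below is illustrative only, with the file size and run time picked arbitrarily rather than from any agreed test plan.

# Random read/write I/O test against a 16GB working set (illustrative values)
sysbench --test=fileio --file-total-size=16G prepare
sysbench --test=fileio --file-total-size=16G --file-test-mode=rndrw --max-time=300 --max-requests=0 run
sysbench --test=fileio --file-total-size=16G cleanup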

Announcing Drizzle on EC2

I have published the very first sharable Drizzle Amazon Machine Image (AMI) for AWS EC2, based on the good feedback from my discussion at the Drizzle Developer Day on what options we should try.

This first version is a 32bit Developer instance, showcasing Drizzle and all necessary developer tools to build Drizzle from source.

What you will find on drizzle-ami/intrepid-dev32 – ami-b858bfd1

Ubuntu 8.10 Intrepid 32 bit base server installation:

  • build tools
  • drizzle dependencies
  • bzr 1.31.1

From the respective source trees the following software is available:

  • drizzle 2009.04.997
  • libdrizzle 0.0.2
  • gearman 0.0.4
  • memcached 1.2.8
  • libmemcached 0.28

Drizzle has been configured with the necessary dependencies for PAM authentication, http_auth, libgearman and MD5, but these don't seem to be available in the binary distribution.

I will be creating additional AMIs, including 64-bit and LAMP-ready binary-only images.

The following example shows using drizzle on this AMI. Some further work is necessary for full automation, parameters and logging. I’ve raised a number of issues the Drizzle Developers are now hard at work on.

1. Starting Drizzle

ssh [email protected]
sudo /etc/init.d/drizzle-server.init start &

2. Testing Drizzle (the sakila database has been installed)

$ drizzle
Welcome to the Drizzle client..  Commands end with ; or \g.
Your Drizzle connection id is 4
Server version: 2009.04.997 Source distribution

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

drizzle> select version();
+-------------+
| version()   |
+-------------+
| 2009.04.997 |
+-------------+
1 row in set (0 sec)

drizzle> select count(*) from sakila.film;
+----------+
| count(*) |
+----------+
|     1000 |
+----------+
1 row in set (0 sec)

3. Compiling Drizzle

sudo su - drizzle
ls
deploy  drizzle  libdrizzle  sakila-drizzle
cd drizzle
./configure --help
Description of plugins:

   === HTTP Authentication Plugin ===
  Plugin Name:      auth_http
  Description:      HTTP based authentications
  Supports build:   static and dynamic

   === PAM Authenication Plugin ===
  Plugin Name:      auth_pam
  Description:      PAM based authenication.
  Supports build:   dynamic

   === compression UDFs ===
  Plugin Name:      compression
  Description:      UDF Plugin for compression
  Supports build:   static and dynamic
  Status:           mandatory

   === crc32 UDF ===
  Plugin Name:      crc32
  Description:      UDF Plugin for crc32
  Supports build:   static and dynamic
  Status:           mandatory

   === Error Message Plugin ===
  Plugin Name:      errmsg_stderr
  Description:      Errmsg Plugin that sends messages to stderr.
  Supports build:   dynamic

   === Daemon Example Plugin ===
  Plugin Name:      hello_world
  Description:      UDF Plugin for Hello World.
  Supports build:   dynamic

   === Gearman Logging Plugin ===
  Plugin Name:      logging_gearman
  Description:      Logging Plugin that logs to Gearman.
  Supports build:   dynamic

   === Query Logging Plugin ===
  Plugin Name:      logging_query
  Description:      Logging Plugin that logs all queries.
  Supports build:   static and dynamic
  Status:           mandatory

   === Syslog Logging Plugin ===
  Plugin Name:      logging_syslog
  Description:      Logging Plugin that writes to syslog.
  Supports build:   static and dynamic
  Status:           mandatory

   === MD5 UDF ===
  Plugin Name:      md5
  Description:      UDF Plugin for MD5
  Supports build:   static and dynamic

   === One Thread Per Connection Scheduler ===
  Plugin Name:      multi_thread
  Description:      plugin for multi_thread
  Supports build:   static
  Status:           mandatory

   === Old libdrizzle Protocol ===
  Plugin Name:      oldlibdrizzle
  Description:      plugin for oldlibdrizzle
  Supports build:   static
  Status:           mandatory

   === Pool of Threads Scheduler ===
  Plugin Name:      pool_of_threads
  Description:      plugin for pool_of_threads
  Supports build:   static
  Status:           mandatory

   === Default Signal Handler ===
  Plugin Name:      signal_handler
  Description:      plugin for signal_handler
  Supports build:   static
  Status:           mandatory

   === Single Thread Scheduler ===
  Plugin Name:      single_thread
  Description:      plugin for single_thread
  Supports build:   static
  Status:           mandatory

   === Archive Storage Engine ===
  Plugin Name:      archive
  Description:      Archive Storage Engine
  Supports build:   static
  Status:           mandatory

   === Blackhole Storage Engine ===
  Plugin Name:      blackhole
  Description:      Basic Write-only Read-never tables
  Supports build:   static and dynamic
  Configurations:   max, max-no-ndb

   === CSV Storage Engine ===
  Plugin Name:      csv
  Description:      Stores tables in text CSV format
  Supports build:   static
  Status:           mandatory

   === Memory Storage Engine ===
  Plugin Name:      heap
  Description:      Volatile memory based tables
  Supports build:   static
  Status:           mandatory

   === InnoDB Storage Engine ===
  Plugin Name:      innobase
  Description:      Transactional Tables using InnoDB
  Supports build:   static and dynamic
  Configurations:   max, max-no-ndb
  Status:           mandatory

   === MyISAM Storage Engine ===
  Plugin Name:      myisam
  Description:      Traditional non-transactional MySQL tables
  Supports build:   static
  Status:           mandatory


Report bugs to <http://bugs.launchpad.net/drizzle>.

A change in the MySQL Binary distributions

Yesterday was the surprise announcement of MySQL 5.4 at the 2009 MySQL Conference and Expo. It was unfortunate that the supporting information was not that forthcoming on the MySQL website. I tried for several hours to download it, but no mirrors were initially available. Today I see some information on the mysql.com home page and was finally able to get the binary.

What I found most significant with this new major version release is a change in the binary distribution, as seen on the Download page.

MySQL 5.4 is only available on 3 platforms:

  • Linux (AMD64 / Intel EM64T)
  • Solaris 10 (SPARC, 64-bit)
  • Solaris 10 (AMD64 / Intel EM64T, 64-bit)

I was also surprised that this beta release highlights the emphasis on community contributions (long overdue), yet the community, and indeed many employees of Sun/MySQL, were simply unaware of this work. This is clearly a change in involving the community. While I applaud the beta status, hopefully a more stable product to start with, its development was done in a very closed company model.

Setting up MySQL on Amazon Web Services (AWS) Presentation

On Tuesday at the MySQL Camp 2009 in Santa Clara I presented Setting up MySQL on Amazon Web Services (AWS).

This presentation assumed you knew nothing about AWS and had no account. With Internet access via a browser and a valid credit card, you can have your own running web server on the Internet in under 10 minutes, just point and click.

We also stepped into some more detail with online point-and-click, and with the supplied command line tools, to demonstrate some more advanced usage.
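As a flavour of the command line side, the sketch below uses the classic ec2-api-tools to launch an instance and put MySQL on it. It is only a sketch: the AMI id, key pair name, user and hostname are placeholders, and it assumes the EC2_PRIVATE_KEY/EC2_CERT environment variables are already configured for the tools.

# Create a key pair and open SSH access (placeholder names throughout)
ec2-add-keypair mykey          # save the private key portion of the output to mykey.pem, then chmod 600 mykey.pem
ec2-authorize default -p 22

# Launch a small instance from a public AMI (ami-xxxxxxxx is a placeholder)
ec2-run-instances ami-xxxxxxxx -k mykey -t m1.small
ec2-describe-instances         # note the public hostname once the instance is 'running'

# Install MySQL on the new instance (user and hostname depend on the AMI)
ssh -i mykey.pem root@ec2-xx-xx-xx-xx.compute-1.amazonaws.com
apt-get install -y mysql-server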

MySQL Users Conference Opening Lines

The opening introduction from Colin Charles got us started. Karen Tegan Padir, VP of MySQL & Software Infrastructure, gave the opening keynote.

She comes from a strong tech background and is passionate about open source, the communities and how to make a successful product.

There isn't a person who goes a day without interacting with a website or hardware system that uses a MySQL database.

The big news was the announcement of MySQL 5.4 – Performance & Scalability. Key features include:

  • InnoDB scalability on 16-way x86 and 64-way CMT servers
  • subquery optimization
  • new query algorithms
  • improved stored procedures, and prepared statements
  • enhanced Information Schema
  • improved DTrace Support

More information at MySQL 5.4 Announcement Details….

Other key points include:

1. Ken Jacobs announced today an embedded InnoDB with a powerful API (not SQL based). Read more at Innobase Introduces Embedded InnoDB.
2. MySQL Cluster 7.0 is also released today. Some benchmarks show 4.3x improvements. New features also include LDAP support.
3. The next release of MySQL Query Analyzer, 2.1, was announced.
4. Sun announces a commitment to accept contributions from the community.
5. The community also gets the Monthly Rapid Updates.
6. The MySQL Drizzle project is discussed as a technology incubator.

Partners of the year: Intel, Infobright and Lifeboard.
Application of the year: Zappos, Alcatel-Lucent and Symantec.
Community members of the year: Marc Delisle, Ronald Bradford, Shlomi Noach.

Where is the MySQL in Sun's announcement

I find it surprising that in the official Sun announcement there is no mention of MySQL, for two reasons. First, this was Sun's largest single purchase, at $1 billion, only 12 months ago. Second, MySQL's largest competitor is Oracle.

While the Sun website shows the news in grandeur, the MySQL website is noticeably absent of any information on its owner's acquisition.

On my professional side, as an independent speaker for Sun Microsystems with plans for upcoming webinars and future speaking on “Best Practices in Migrating to MySQL from Oracle”, this news does not benefit my bottom line.

Twitter Tips

I have in the past questioned the value of Twitter as an effective business tool, but it continues to defy the general inability of social media to bridge the business gap.

Even with still-continual growth problems (at least it's not down as much), Twitter is everywhere I go, see or do. You see it at business events, on business cards, at meetups, even on CNN Headline News. There are so many different Twitter sites, applications, widgets, etc. that I'm surprised there isn't a Twitter index just of the Twitter-related sites.

I have now incorporated Twitter into my professional site and I'm using this micro-blogging approach more to share my professional skills and interests with my growing band of followers. I don't expect to make the Twitter top list, which is headed by CNN Breaking News with 667,353 followers.

Even Lance Armstrong (who rates 9th) used Twitter this week for press releases about his injuries.

For more reading check out How Twitter Makes You A Better Writer and 27 Twitter Applications Your Small Business Can Use Today.

I was surprised to see How to get a job by blogging: Tips for setting up the kind of professional blog that will get you hired barely mention Twitter.

Now be sure to add a background appropriate to your Twitter. This one is wicked.

Classic quotes – Community One East

The CommunityOne East 2009 conference has finished up. There were a few classic statements made by the speakers during the day. They included:

“We have a community reception, that’s a long way to say free beer.”

“Google is the dial tone of the Internet, if it’s not there people start freaking out.”

“I am an insom-maniac, a late night hacker.”

“Having a successful catastrophic – Achieving your marketing goals, and your site crashes due to the load.”

“Ruby is a beautiful expressive fun language.”

(talking about cloud providers) "Lock-in, it's like marriage, it's not necessarily a bad thing."

“It wasn’t a red carpet, but it was carpeted.”

Event: CommunityOne East in New York, NY.
Article Author: Ronald Bradford

Your Code, Your Community, Your Cloud… Project Kenai

Following the opening keynote announcement about Kenai I ventured into a talk on Project Kenai.

With today's economy, the drive towards efficiency is certainly a key consideration; it was quoted that dedicated hosting servers run at only 30% efficiency.

Again, an overview of Cloud Computing:

  • Economics – Pay as you go,
  • Developer Centric – rapid self provisioning, api-driven, faster deployment
  • Flexibility – standard services, elastic, on demand, multi-tenant

Types of Clouds

  • Public – pay as you go, multi-tenant application and services
  • Private – Cloud computing model run within a company’s own data center
  • Mixed – Mixed user of public and private clouds according to applications

SmugMug was referenced as a Mixed Cloud example.

Cloud Layers

  • Infrastructure as a Service – basic storage and compute capabilities offered as a service (e.g. AWS)
  • Platform as a Service – developer platform with built-in services, e.g. Google App Engine
  • Software as a Service – applications offered on demand over the network, e.g. salesforce.com

Some issues raised about these layers included:

  • IaaS issues include service level, privacy, security and cost of exit
  • PaaS – an interesting point, and one that is the bane of MySQL performance tuning: instrumentation
  • SaaS – nothing you need to download; you take the pieces you need and interact with the cloud. More services are simply like doing your tax online.

Sun offers Project Kenai as well as Zembly.

Project Kenai

  • A platform and ecosystem for developers.
  • Freely host open source projects and code.
  • Connect, community, collaborate and Code with peers
  • Eventually easily deploy application/services to “clouds”

Kenai Features

  • Code Repository with SVN, Mercurial, or an external repository
  • Issue tracking with bugzilla, jira
  • collaboration tools such as wiki, forums, mailing lists
  • document hosting
  • your profile
  • administrative role

Within Kenai you can open up to 5 open source projects and view various metrics of the repositories, issue trackers, wiki, etc.

The benefits were given as: the features are integrated into your project, not distributed across different sites. Agile development within the project sees a release every two weeks. Integration with NetBeans and Eclipse is underway.

Kenai is targeted as being the core of the next generation of Sun's collaboration tools. However, when I asked for more details about uptake within Sun, it's only a request, not a requirement, for internal teams.

The APIs for the Sun Cloud are at http://kenai.com/projects/suncloudapis.

Event: CommunityOne East in New York, NY.
Presenter: Tori Wieldt, Sun Microsystems
Article Author: Ronald Bradford

Everybody is talking About Clouds

From the opening keynote at CommunityOne East we begin with Everybody is talking About Clouds.

It's difficult to get a good definition; the opening cloud definition today was Software/Platform/Storage/Database/Infrastructure as a Service, Grid Computing, Virtualization, Utility Computing, Application Hosting. Basically all the buzzwords we currently know.

Cloud computing has the ideal of truly bringing freedom of choice. Inside or outside of an enterprise, the lower barrier, time and cost of that freedom of choice give opportunities including:

  • Self-service provisioning
  • Scale up, Scale down.
  • Pay for only what you use.

Sun’s Vision has existed since 1984 with “The NETWORK is the Computer”.

Today, Sun's view includes many clouds, public and private, tuned for different application needs (geographical, political), with a goal of being open and compatible.

How do we think into the future for developing and deploying into the cloud? The answer given today was the Sun Open Cloud Platform, which includes the set of core technologies, APIs and protocols that Sun hopes to see taken up by many different providers.

The Sun Cloud Platform

  • Products and Technologies – VirtualBox, Sun xVM, Q-layer, MySQL
  • Expertise and Services
  • Partners – Zmanda, Rightscale, Kickapps
  • Open Communities – GlassFish, Java, OpenOffice, ZFS, NetBeans, Eucalyptus

The Sun Cloud includes:

  • Compute Service
  • Storage Service
  • Virtual Data Center
  • Open API – Public, RESTful, Java, Python, Ruby

The public API has been released today and is available under Kenai. It includes two key points:

  • Everything is a resource: HTTP GET, POST, PUT, etc.
  • A single starting point; other URIs are discoverable.

What was initially shown were CLI examples, great to see this is still common; a demonstration using drag and drop via a web interface was also given, showing a load-balanced, multi-tiered, multi-server environment. This was started and tested during the presentation.

Then, using Cyberduck (a WebDAV client on Mac OS X), the storage component at storage.network.com was accessed directly, and from OpenOffice you now get options to Get/Save to Cloud (using the TwoGuys.com Virtual Data Center example document).

Seamless integration between the tools, and the service. That was impressive.

More information at sun.com/cloud. You can also get more details in the Sun Microsystems Unveils Open Cloud Platform official press release.

Event: CommunityOne East in New York, NY.
Article Author: Ronald Bradford

CommunityOne East – An open developer conference

With an opening video from thru-you.com – an individual taking random YouTube videos and producing video mashups – the CommunityOne East conference in New York, NY begins.

The opening introduction was by Chief Sustainability Officer Dave Douglas. Interesting job title.

His initial discussion was around the relationship between technology and society, and included a plug for his upcoming book "Citizen Engineer" – the responsibilities of a 21st century engineer. He quotes "Crisis loves an Innovation" from Jonathan Schwartz, and extends it with "Crisis loves a Community".

He asks us to consider the wider community ecosystem, such as schools, towns, governments, NGOs, etc., in our usage and knowledge of technology.

Event: CommunityOne East in New York, NY.
Author: Ronald Bradford

Hurting the little guy?

Today I came back from the dentist; as if that wasn't bad enough news, I got an email from Google AdWords titled Your Google AdWords Approval Status.

In the email, all my AdWords campaigns are now disapproved, because of:

SUGGESTIONS:
-> Ad Content: Please remove the following trademark from your ad:
mysql.

Yeah right. I can't put the word 'MySQL' in my ads. How are people now to find me? It would appear that many ads have been pulled, not just mine. Is this a proactive measure by Google? Is this a complaint from the MySQL trademark holder, Sun Microsystems?

I’d like any comment, feedback or suggestions on how one can proceed here.

It reminds me of the days CentOS advertised itself as an “Open source provider of a popular North American Operating System”, or something of that nature.

NY Tech 1995-2008. Opening Web 2.0 Expo NY Keynote

Web 2.0 Expo NY keynotes are happening today. Technology in use included CrowdVine, which I'd not heard of, and plenty of Twitter feeds such as w2e_NY08.

The opening keynote was Fred Wilson from Union Square Ventures with his presentation New York's Web Industry From 1995 to 2008: From Nascent to Ascendent.

Some stats on seed and early-stage deals:

  • 1995 – 230 in the SF Bay Area, 30 in NY
  • 2008 – 360 in the SF Bay Area, 116 in NY

Fred first asked: "New York is not an alley. Call it Broadway, or just New York."

Here is a summary of his history of New York Web Industry.

  • 1991 – ZDNet
  • 1993 – New York Online Dialup services
  • 1993 – Jupiter Communications online conference
  • 1993 – Prodigy
  • 1994 – Startups such as Pseudo, Total New York, Razorfish.
  • 1994 – Time Warner Pathfinder
  • 1995 – NYIC, 55 Broad St. – technology-oriented building
  • 1995 – Seth Godin – Yoyodyne – Permission Marketing
  • 1995 – itraffic, agency.com, NY Times online
  • 1995 – Softbank, DoubleClick, 24×7, Real Media
  • 1996 – Silicon Alley Reporter
  • 1996 – ivillage, the knot
  • 1996 – Flatiron Partners – good sued for that
  • 1997 – The Silicon Alley Report Radio Show
  • 1997 – mining co.
  • 1997 – Total NY sold to AOL
  • 1997 – Agency rollups: Razorfish buying 4 companies
  • 1997 – DoubleClick IPO
  • 1998 – Seth Godin moves to Yahoo
  • 1998 – Burn Rate
  • 1998 – Kozmo – We’ll be right over
  • 1998 – was the last year of sanity in the Internet wave
  • 1999 – The start of the boom
  • 1999 – The big players came online, all hell breaks loose. 200 startups were funded in 1999, 300 in 2000.
  • 2000 – The Crash & Burn
  • 2000 – f**kedcompany
  • 2000 – Google came to New York. – 86th St Starbucks
  • 2001 – Layoffs, Landlords and bankruptcies
  • 2002 – Rock bottom
  • 2003 – Renewal
  • 2003 – Blogging started gizmodo
  • 2003 – Web 2.0 coined
  • 2003 – del.icio.us was launched from a computer in an apartment
  • 2004 – NY Tech Meetup
  • 2004 – Union Square Ventures $120million raised
  • 2005 – about.com acquired by NY Times
  • 2005 – Etsy
  • 2006 – Google took over port authority building, now with 750 engineers in NY
  • 2008 – Web 2.0 comes to New York City

In funded Internet companies, New York is now at one third the level of Silicon Valley, compared to one eighth in 1995.

One thing mentioned was a documentary called "We Live in Public". Some of the footage from 1999 is so early Big Brother.

An interesting approach to free hosting

I came across the OStatic free hosting service that provides Solaris + GlassFish (Java container) + MySQL.

They offer: "Now you can get free Web hosting on Cloud Computing environment free of charge for up to 12 months."

The catch: "accumulate 400 Points for participating on the site".

A rather novel approach: you get 100 points for registering, but then only 5-15 points per task. That equates to somewhere between 20 and 60 additional tasks (300 points at 5-15 points each), such as answering a question or reviewing somebody's application. That seems like a lot of work.

In this case, free definitely has a cost and time factor to consider.

VirtualBox, compiling Part 2

So I managed, after some trial and error, to find all the dependencies for compiling VirtualBox 1.6.4 under Ubuntu 8.04, then found the Linux build instructions to confirm.

The build was not successful, however, throwing the following error:

kBuild: Compiling dyngen - dyngen.c
kBuild: Linking dyngen
kmk[2]: Leaving directory `/usr/local/VirtualBox-1.6.4/src/recompiler'
kmk[2]: Entering directory `/usr/local/VirtualBox-1.6.4/src/apps'
kmk[2]: pass_bldprogs: No such file or directory
kmk[2]: *** No rule to make target `pass_bldprogs'. Stop.
kmk[2]: Leaving directory `/usr/local/VirtualBox-1.6.4/src/apps'
kmk[1]: *** [pass_bldprogs_before] Error 2
kmk[1]: Leaving directory `/usr/local/Virtu

After more searching, I found I needed to add two more files manually. Read more here.

After a long wait, compiling for 20+ minutes, and a necessary reboot as upgraded images threw another error, I got 1.6.4 running and was able to boot a Fedora Core 9 image created under 1.5.6.

But the real test, and the need for this version, was to install Intrepid.

This also failed with a kernel panic during boot. More info: this has been reported as an Ubuntu bug and a VirtualBox bug.

More work still needed.

Virtual Box, a world of hurt

I successfully installed VirtualBox via a few simple apt-get commands under Ubuntu 8.04 via these instructions.

It started fine after two small, annoying "install this module", "add this group" messages. I was even able to install Ubuntu Intrepid from an .iso. But from here it was downhill.

Attempting to start the VM gives the error:

This kernel requires the following features not present on the CPU:
pae
Unable to boot - please use a kernel appropriate for the CPU

Some digging around gave confirmation that the current packaged version of VirtualBox doesn't support PAE. You'd think they could tell you before you successfully install an OS. I'm running 1.5.6; I need 1.6.x.

$ dpkg -l | grep virtualbox
ii  virtualbox-ose                             1.5.6-dfsg-6ubuntu1                      x86 virtualization solution - binaries
ii  virtualbox-ose-modules-2.6.24-19-generic   24.0.4                                   virtualbox-ose module for linux-image-2.6.24
ii  virtualbox-ose-source                      1.5.6-dfsg-6ubuntu1                      x86 virtualization solution - kernel module
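As an aside, before blaming the hypervisor it is worth confirming that the host CPU itself advertises PAE; a quick check (my own habit, not from the VirtualBox docs) is:

# A non-zero count means the host CPU reports the pae flag
grep -c -w pae /proc/cpuinfo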

Off to the Virtual Box Downloads to get 1.6.4
Don't make the same mistake as I did and use the first download link; that's the commercial version, which doesn't install what you expect. You need the OSE. Of course this is not packaged, it's only source.

  ./configure
Checking for environment: Determined build machine: linux.x86, target machine: linux.x86, OK.
Checking for kBuild: found, OK.
Checking for gcc: found version 4.2.3, OK.
Checking for as86:
  ** as86 (variable AS86) not found!

OK, well I went through this step about 4 times, installing one package at a time; I wish they could do a pre-check and give you all the missing requirements at once. I installed bin86, bcc and iasl.
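For anyone following along, the packages I ended up installing one by one could presumably be pulled in with a single command:

# As root, or prefix with sudo
apt-get install -y bin86 bcc iasl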

Then I got to the following error:

$ ./configure
...
Checking for libxml2:
  ** not found!

Well, it's installed. All too hard; throw VirtualBox away as virtualization software. And why am I using it anyway? Because VMware Server doesn't work under Ubuntu 8.04 either, because of some ancient gcc dependency. Seems I may have to go back to that. I just want working virtualization on the most popular Linux distro to install other current distros. It's not a difficult request.

$ dpkg -l | grep libxml
ii  libxml-parser-perl                         2.34-4.3                                 Perl module for parsing XML files
ii  libxml-twig-perl                           1:3.32-1                                 Perl module for processing huge XML document
ii  libxml2                                    2.6.31.dfsg-2ubuntu1                     GNOME XML library
ii  libxml2-utils                              2.6.31.dfsg-2ubuntu1                     XML utilities
ii  python-libxml2                             2.6.31.dfsg-2ubuntu1                     Python bindings for the GNOME XML library
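In hindsight the dpkg listing above holds the likely answer: the runtime library is installed but not its development headers, which is what a configure check looks for. On Ubuntu that most likely means installing the -dev packages, something along these lines:

# libxml2-dev provides the headers configure checks for; libxslt1-dev is commonly needed alongside it
apt-get install -y libxml2-dev libxslt1-dev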

Project Darkstar

It may sound like either an astronomical research project or a Star Wars spin-off, but Project Darkstar is an open source infrastructure from Sun Microsystems that aims to "simplify the development and operation of massively scalable online games, virtual worlds, and social networking applications."

The advertising sounds promising. Like many sites, the emphasis seems to be on gaming throughout the material; it is interesting they threw in the term "social networking applications" specifically in the opening descriptions.

I believe it is worthy of investigation, if only to see how they solve some classic problems. So, learn some more, start your rockets and participate.

Extending application data to the cloud

I was one of the invited panel speakers at a panel on Cloud Computing this week in New York. As one of two non-vendor presenters, it was a great experience to be invited and be involved with the vendors.

While I never got to use my slides (available here), I did get to present certain content, and indeed the questions and discussions on the night covered other points of my content.

Cloud computing is here; it's early days and new players will continue to emerge. For example, from the panel there was AppNexus, reviewed favorably at InfoWorld in comparison with EC2 and Google App Engine; 10gen, an open source stack solution; and Kaavo, which from an initial 60 seconds of playing provides a management service on top of AWS similar to what ElasticFox provides. I need to investigate further how much the feature set extends and would compete with others like RightScale, for example.

The greatest mystery came from Hank Williams and his stealth Kloudshare. He did elaborate more on where they aim to provide services. A new term discussed was "Tools as a Service", akin to moving, metaphorically, from writing in assembly language to the advanced frameworks of today's generation of languages such as Java and Ruby.

Thanks to Murat Aktihanoglu of Unype who chaired the event.