MySQL Conference – Opening Keynote with Mårten Mickos

It’s an early start this morning at 8:20am at MySQL Conference 2007 with CEO Mårten Mickos’ keynote talk Welcome and State of MySQL AB.

Here are some of the key points that made an impression on me.


“The Participatory & Disruptive spirit of the Dolphin.”

Open Source disrupts inefficient models and produces new wealth.

Architecture of Participation.

  • GPL and FOSS licenses
  • Google
  • del.icio.us

MySQL Architecture of Participation
You have the forge, the planet, the community contributor program and many more.

Production of Amateurs

  • YouTube
  • Second Life
  • Wikipedia

Some really great Quotes:

“Fail often to succeed soon.” IDEO company slogan

“Noah’s Ark was built by amateurs, but the Titanic by professionals.”

Innovation Happens Here

MySQL Monitoring & Advisory Service

During his presentation of the MySQL Network Enterprise Dashboard, if you were quick you would have noticed the MySQL version 6.0.0-alpha-pb518.log.

Leading Applications

  • Open Source ISVs
  • Web
  • Web 2.0
  • On-Demand
  • Hosting
  • Telco & VOIP
  • Security
  • Enterprise 2.0
  • Hub & Spoke

We want to be known as The Online Database.

Drawn to the Lamp

  • Microsoft
  • Oracle
  • IBM
  • Sun
  • HP
  • Dell
  • Unisys

They all have an Open Source strategy: they develop open source products, and they use and partner with MySQL.

He also mentioned MySQL Enterprise Connection Alliance – MECA.

Global Distributed Organization

  • 70% work from home
  • 100 major locations in nearly 30 different countries.
  • There is a great focus on culture – global culture.

Disruptive Businesses

  • A smarter way to produce the goods
  • A smarter way to distribute them
  • Serve the underserved
  • Keep it simple
  • Innovate with technology to give customers more convenience
  • Make money
  • Gartner: 66% of enterprises are deploying MySQL or planning to
  • 750,000 newsletter subscribers
  • MySQL installed in over 11 million installations

Going Forward

  • Scalability
  • Availability
  • Manageability
MySQL – The best online database in the world.

MySQL Conference – Rewarding the Community

At MySQL Conference 2007, CEO Mårten Mickos in his opening keynote Welcome and State of MySQL AB rewarded the community: those that contributed to “The best database in the world”.

2007 MySQL Applications of the Year
#1 in Online Video
#1 in 3G Mobile Entertainment
#1 in Creative Software

And the winners: YouTube, Amp’d Mobile, and Adobe.

2007 MySQL Partners of the Year
#1 reseller of MySQL Enterprise to govt
#1 in MySQL Enterprise integration
#1 in Open Source

And the winners: Carahsoft, HP, and Red Hat.

2007 Community Members of the Year
Quality Contributor
Community Code Contributor
Community Advocate

And the Winners

Martin Friebe
Paul McCullagh
Sheeri Kritzer

MySQL Conference – Scaling and High Availability Architectures Tutorial

My first tutorial today at MySQL Conference 2007 is Scaling and High Availability Architectures by Jeremy Cole and Eric Bergen of Proven Scaling.

Basic Tenets

While not discussed explicitly, the premise is to cache everything: memcached is a key component of any scalable system.

Lifetime of a scalable system

Using the analogy of a newborn child, Jeremy stepped us through the categories Newborn, Toddler, Teenager, Late teens to 20s, and Adult.

Late teens to 20s is where most systems die a slow death, what he termed “the awkward stage”. This is where scalability is critical, and a meltdown, for example, can ruin you. Downtime is also simply not acceptable to your user community.

When you’re an Adult you need to perfect the ability to deploy incremental changes to your application seamlessly.

As the system grows, optimizations that worked at an earlier stage may now be hurting your system. It’s important to revisit them during each stage.

Partitioning

Most applications implement a horizontal partitioning model, where different components of your system can be scaled by a “partition key”. The different models include fixed “hash key” partitioning, dynamic “directory” partitioning, partition by “group” and partition by “user”.

The Dynamic “directory” is a lot harder to implement, but is ultimately more scalable.

One of partitioning’s difficulties is inter-partition interaction. Ultimately the solution is duplicating meta-data or duplicating data. Overall reporting is also more difficult: what if we want an average for users per location, but we have partitioned by user? Most systems are user driven and partition by user. A newer strategy is to partition by group.

For implementing fixed hash key partitioning:

  • Divide data into B buckets
  • Divide the B buckets over M machines

You define 1024 buckets (easily divisible), numbered 0-1023 (user_id % 1024), and then map them by range to physical machines: 0-255, 256-511, 512-767, 768-1023. The plus side is that it is very easy to implement; you can always derive where something is. The biggest problem is scalability, e.g. going from 4 machines to 5. You also don’t have any fine grained control over buckets.
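As a minimal sketch of that mapping (the user id and machine names are illustrative):

-- Fixed hash: 1024 buckets (user_id % 1024), mapped by range to 4 machines.
SET @user_id = 123456;
SET @bucket  = @user_id % 1024;
SELECT CASE
         WHEN @bucket BETWEEN 0   AND 255 THEN 'db1'
         WHEN @bucket BETWEEN 256 AND 511 THEN 'db2'
         WHEN @bucket BETWEEN 512 AND 767 THEN 'db3'
         ELSE 'db4'
       END AS machine;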

For dynamic directory partitioning you maintain a database of mappings to partitions. A user can easily be moved at a later date, at a much finer grain. MySQL Cluster is designed for this type of application; it is not necessary however, as a well configured InnoDB solution with memcached can easily provide the same functionality. The only writes are for new users or updates to partition keys, with a lot of reads.
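As a sketch, the directory itself can be as simple as one small, heavily-read table (all names are illustrative):

-- Dynamic directory: one row per user, pointing at its partition.
CREATE TABLE user_directory (
  user_id      INT UNSIGNED NOT NULL PRIMARY KEY,
  partition_id SMALLINT UNSIGNED NOT NULL
) ENGINE=InnoDB;

-- Reads dominate; writes occur only for new users or re-partitioning.
SELECT partition_id FROM user_directory WHERE user_id = 123456;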

HiveDB

This open source product implements a “standard” partition-by-key MySQL system written in Java.
Many organizations have built somewhat similar systems, but this is an example of one that has been open sourced.

More information at www.hivedb.org.

The HiveDB API layer should be the only code that needs to be re-written in your application development language (e.g. PHP, Ruby) when needed.

High Availability

The obvious goals.

  • Avoid downtime due to failures.
  • No single point of failure.
  • Extremely fast failover.
  • No dependency on DNS changes.
  • No dependency on code changes.
  • Painless and seamless failover.
  • Fail-back must be just as painless.

The overall objective is speed.

Understanding MySQL Replication is important to understanding HA options.

MySQL Replication is master-slave, one-way, asynchronous replication; a minimal slave configuration is sketched after the following list.

  • Slave requests binary logs from last position.
  • Master sends binary logs up to current time.
  • Master keeps sending binary logs in real-time.
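For reference, pointing a slave at a master to begin this cycle is a few statements (host and log coordinates are illustrative):

CHANGE MASTER TO
  MASTER_HOST='master.example.com',
  MASTER_USER='repl',
  MASTER_PASSWORD='secret',
  MASTER_LOG_FILE='master-bin.000001',
  MASTER_LOG_POS=4;
START SLAVE;
SHOW SLAVE STATUS\G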

Dual Master provides an easy configuration for failover, though it doesn’t provide benefits in throughput. It can help solve online schema changes without downtime, assuming existing queries will work against both the pre and post schema (SET SQL_LOG_BIN=0 for the session is the tip). There are a number of caveats.
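A sketch of that tip in practice on the passive master of a dual-master pair (table and column names are illustrative):

SET SESSION sql_log_bin = 0;   -- keep this ALTER out of the binary log
ALTER TABLE orders ADD COLUMN notes VARCHAR(255);
SET SESSION sql_log_bin = 1;
-- then switch application traffic and repeat on the other master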

Ultimately for High Availability you have a trade off: (minute) data loss versus scalability.

SHOW PROFILE

I’ve been playing more with the SHOW PROFILE command, available as part of MySQL Community 5.0.37. Thanks to the author Jeremy Cole. This command can provide some interesting insight into the workings of MySQL. Like most new releases of software, however, it encourages users to make suggestions for new features and functionality. Here is my wish list. Some I figure are practical, some are wish list, but if you never ask you never know.

  1. The unit of measure for duration is seconds. It would be great if it could be tuned for display, say to milliseconds. If you look at my first example, all figures are effectively at millisecond or even microsecond granularity.
  2. I would like to see a total for the query. Again in Example 1, you have to add up all the figures to determine this query took 8ms.
  3. Again in Example 1, perhaps a percentage of total time for each line duration may be helpful.
  4. More descriptive status descriptions (this is part of the MySQL internal code and not the patch)
  5. SET PROFILING=1; can only be set on the current session, making it impossible to easily monitor a JDBC multi-connection test. There needs to be a way to enable it for a session other than the current interactive one, while still being able to see the results. You can’t do a SHOW PROFILE via a JDBC connection!
  6. I’d like to see a threshold, so queries running under the threshold are discarded, much like the long-query-time option. This would enable you to run a large number of SQL statements and have profiles logged only for the longer running ones.
  7. I’d like to see a level of logging to file, again like the slow query log, so you can simply gather information on a working system and review it at some later time. Combined with the previous point, you would then have a microsecond slow query log with explicit details.

One major benefit of the SHOW PROFILE command is that I can accurately get a figure for how long a query is taking (time in milliseconds). You just have to sum all the figures (see wish list point 2).
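For context, enabling the profiler and pulling up a profile within a session looks like this (the statement profiled is purely illustrative):

SET PROFILING = 1;                 -- enable profiling for this session
SELECT COUNT(*) FROM mysql.user;   -- any statement to be profiled
SHOW PROFILES;                     -- list profiled statements and their query ids
SHOW PROFILE FOR QUERY 1;          -- per-step status and duration breakdown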

By default, the source details are not provided; you need to specify the SOURCE operand, which helps both in comparing with any debugging output and in tracing through the code. As in Example 1, I needed to find why 95% of the time was in a step with the most descriptive line of ‘end’.

Example 1

mysql> show profile SOURCE,MEMORY for query 4;
+--------------------+------------+-----------------------+---------------+-------------+
| Status             | Duration   | Source_function       | Source_file   | Source_line |
+--------------------+------------+-----------------------+---------------+-------------+
| Opening tables     | 0.00013200 | open_tables           | sql_base.cc   |        2106 |
| System lock        | 0.00001800 | mysql_lock_tables     | lock.cc       |         153 |
| Table lock         | 0.00000600 | mysql_lock_tables     | lock.cc       |         162 |
| init               | 0.00001300 | mysql_select          | sql_select.cc |        2073 |
| optimizing         | 0.00004800 | optimize              | sql_select.cc |         617 |
| statistics         | 0.00002500 | optimize              | sql_select.cc |         773 |
| preparing          | 0.00005200 | optimize              | sql_select.cc |         783 |
| executing          | 0.00002200 | exec                  | sql_select.cc |        1407 |
| Sending data       | 0.00000500 | exec                  | sql_select.cc |        1925 |
| end                | 0.00786600 | mysql_select          | sql_select.cc |        2118 |
| query end          | 0.00001400 | mysql_execute_command | sql_parse.cc  |        5085 |
| freeing items      | 0.00000700 | mysql_parse           | sql_parse.cc  |        5973 |
| closing tables     | 0.00001900 | dispatch_command      | sql_parse.cc  |        2120 |
| logging slow query | 0.00001000 | log_slow_statement    | sql_parse.cc  |        2178 |
| cleaning up        | 0.00000500 | dispatch_command      | sql_parse.cc  |        2143 |
+--------------------+------------+-----------------------+---------------+-------------+
15 rows in set (0.01 sec)

Example 2

We were experiencing increased latency in JDBC with a particular query. On a recommendation from MySQL Support we tried SET SQL_BUFFER_RESULT=1;

mysql> show profile source for query 14;
+------------------------------+------------+-------------------------+---------------+-------------+
| Status                       | Duration   | Source_function         | Source_file   | Source_line |
+------------------------------+------------+-------------------------+---------------+-------------+
| Opening tables               | 0.00006025 | open_tables             | sql_base.cc   |        2106 |
| System lock                  | 0.00004875 | mysql_lock_tables       | lock.cc       |         153 |
| Table lock                   | 0.00000400 | mysql_lock_tables       | lock.cc       |         162 |
| init                         | 0.00001600 | mysql_select            | sql_select.cc |        2073 |
| optimizing                   | 0.00005675 | optimize                | sql_select.cc |         617 |
| statistics                   | 0.00001250 | optimize                | sql_select.cc |         773 |
| preparing                    | 0.00005175 | optimize                | sql_select.cc |         783 |
| Creating tmp table           | 0.00001275 | optimize                | sql_select.cc |        1206 |
| executing                    | 0.00006025 | exec                    | sql_select.cc |        1407 |
| Copying to tmp table         | 0.00000400 | exec                    | sql_select.cc |        1547 |
| converting HEAP to MyISAM    | 0.04820900 | create_myisam_from_heap | sql_select.cc |        9914 |
| Copying to tmp table on disk | 0.04049075 | create_myisam_from_heap | sql_select.cc |        9968 |
| Sending data                 | 1.29302000 | exec                    | sql_select.cc |        1925 |
| end                          | 0.09398425 | mysql_select            | sql_select.cc |        2118 |
| removing tmp table           | 0.00004975 | free_tmp_table          | sql_select.cc |        9856 |
| end                          | 0.00089125 | free_tmp_table          | sql_select.cc |        9884 |
| query end                    | 0.00001850 | mysql_execute_command   | sql_parse.cc  |        5085 |
| freeing items                | 0.00000825 | mysql_parse             | sql_parse.cc  |        5973 |
| closing tables               | 0.00003425 | dispatch_command        | sql_parse.cc  |        2120 |
| logging slow query           | 0.00001325 | log_slow_statement      | sql_parse.cc  |        2178 |
| cleaning up                  | 0.00000675 | dispatch_command        | sql_parse.cc  |        2143 |
+------------------------------+------------+-------------------------+---------------+-------------+
21 rows in set (0.00 sec)

Looking at the “converting HEAP to MyISAM” and “Copying to tmp table on disk” lines indicated that the temporary table was being flushed to disk, so we added SET SESSION tmp_table_size=20*1024*1024;
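As a sketch, the session change was along these lines (the value is illustrative; note that MySQL caps in-memory temporary tables at the smaller of tmp_table_size and max_heap_table_size, so the latter may need raising too):

SET SESSION tmp_table_size = 20*1024*1024;        -- allow 20MB in-memory temporary tables
SET SESSION max_heap_table_size = 20*1024*1024;   -- the effective cap is the smaller of the two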

mysql> show profile source for query 18;
+----------------------+------------+-----------------------+---------------+-------------+
| Status               | Duration   | Source_function       | Source_file   | Source_line |
+----------------------+------------+-----------------------+---------------+-------------+
| Opening tables       | 0.00006050 | open_tables           | sql_base.cc   |        2106 |
| System lock          | 0.00001250 | mysql_lock_tables     | lock.cc       |         153 |
| Table lock           | 0.00000400 | mysql_lock_tables     | lock.cc       |         162 |
| init                 | 0.00000775 | mysql_select          | sql_select.cc |        2073 |
| optimizing           | 0.00005475 | optimize              | sql_select.cc |         617 |
| statistics           | 0.00001225 | optimize              | sql_select.cc |         773 |
| preparing            | 0.00005075 | optimize              | sql_select.cc |         783 |
| Creating tmp table   | 0.00001350 | optimize              | sql_select.cc |        1206 |
| executing            | 0.00006125 | exec                  | sql_select.cc |        1407 |
| Copying to tmp table | 0.00000375 | exec                  | sql_select.cc |        1547 |
| Sending data         | 0.29110925 | exec                  | sql_select.cc |        1925 |
| end                  | 0.08023800 | mysql_select          | sql_select.cc |        2118 |
| removing tmp table   | 0.00001525 | free_tmp_table        | sql_select.cc |        9856 |
| end                  | 0.05971400 | free_tmp_table        | sql_select.cc |        9884 |
| query end            | 0.00001925 | mysql_execute_command | sql_parse.cc  |        5085 |
| freeing items        | 0.00000425 | mysql_parse           | sql_parse.cc  |        5973 |
| closing tables       | 0.00004625 | dispatch_command      | sql_parse.cc  |        2120 |
| logging slow query   | 0.00000800 | log_slow_statement    | sql_parse.cc  |        2178 |
| cleaning up          | 0.00000300 | dispatch_command      | sql_parse.cc  |        2143 |
+----------------------+------------+-----------------------+---------------+-------------+
19 rows in set (0.00 sec)

Google’s worst nightmare

Today while waiting at the airport, I took a look at the news stand, and right there on the cover of Fast Company were two words: Google and Wikipedia. Given Wikipedia is a poster boy of MySQL, it was an immediate purchase just to see what was being said.

So the title of the cover was Google’s worst nightmare – Wikipedia’s Jimmy Wales has his sights set on the search business.

Now, I often use Wikipedia to search for things directly rather than using Google. I’ve found it usually to be more accurate, particularly on topics I know it will contain. The reference to disappointed search users (Google and Yahoo tied at 2.3 out of 5 in user satisfaction) hits home for me as well, and that’s exactly the idea behind www.wikia.com.

The article states “Google doesn’t have an effective method of locking in its customers the way earlier info-tech leaders did”. Well I’d agree in relation to searching, but Google is rapidly releasing new products, new features and acquiring good ideas. It has the wealth (money and gene pool) to make an impact.

Some more on www.wikia.com: “Wikia offers free MediaWiki hosting for your community to build a free content wiki-based website. Find out why you should use Wikia and request a new wiki in any language or see our list of wikis to visit an existing one.”

Today it has 69 categories; I’m sure tomorrow it will be more. Today’s featured page is The Marvel Database; others like Wookieepedia – The Star Wars encyclopedia, and hundreds of entries in the Games category alone, make it something to look at one of these days when I have free time.

And best of all it runs the LAMP stack: Linux, Apache, MySQL, PHP.

Get behind a new exciting site

As I write this blog I have over 90 draft blog posts. That’s 9-0. Why do I have so many? The main reason is I want to say something, and I’ve either not completed it, or not researched it sufficiently to consider the entry complete.

This frustrates me, as sometimes I just want to get the word out on something, or give my opinion, or share something great I’ve discovered. I do it for me; I don’t really care if anybody actually reads my stuff, but I’m surprised sometimes when I get comments at how people actually get to see my blog.

JotThat is a surprisingly simple yet brilliant idea. It’s quite simply a site for making Jots: quick notes, a passing comment, a thought, something you want to either remember or say, all in a simple Jot form.

What makes JotThat in my eyes? Well it’s simple, and I strive for simple. It’s extremely easy to use. It’s built on MySQL, so that gains my attention. It’s something just a little different from everything you see each day.

I especially like the new feature of being able to email your Jots to your account. I was at the train station tonight, then later at dinner, and got some great input from somebody. I wanted to write it down (I forget things a lot more these days), so instead of adding it to notes in my PDA, I simply emailed it to my JotThat account. Easy as!

The site is still in its early stages, but please get behind the project. I always like to get referrals from people on everything from websites to products, even services like a good dentist. I recommend you check this one out, as well as supporting the MySQL open source community.

MySQL LOAD DATA Trick

I learned a new trick today with LOAD DATA INFILE. I’m migrating some data from an external source, and the date format is not the MySQL required YYYY-MM-DD; it was DD-MMM-YY. So how do you load this into a DATE field?


$ echo "02-FEB-07" > /tmp/t1.psv
$ mysql -umysql
USE test;
DROP TABLE IF EXISTS t1;
CREATE TABLE t1(d1 DATE);
LOAD DATA LOCAL INFILE '/tmp/t1.psv'
INTO TABLE t1 (@var1)
SET d1=STR_TO_DATE(@var1,'%d-%M-%y');
SELECT * FROM t1;
EXIT

The trick is to bind the appropriate column within the file being loaded to a variable, @var1 in my example and use the SET syntax to perform a function on the variable. Rather cool.

A good tip to know.

That missing INNODB STATUS

On Thursday I saw something I’d not seen before: an empty InnoDB status. Given the amount of output normally shown it was certainly a first. It looked like this:

mysql> SHOW ENGINE INNODB STATUS;
+--------+------+--------+
| Type   | Name | Status |
+--------+------+--------+
| InnoDB |      |        |
+--------+------+--------+
1 row in set (0.03 sec)

To answer some of the most obvious questions.

  • Yes it was a working existing MySQL instance, with InnoDB correctly configured. Indeed we had been benchmarking for several hours.
  • MySQL Server was running, indeed a command selecting data from the mysql schema worked just fine after seeing this (All other tables were Innodb).
  • Absolutely nothing in the host MySQL error log. (This was the second most disappointing aspect)
  • The process list showed two queries that had been running for some time; everything was taking > 1 second. (This was the most disappointing.)

So the problem was that MySQL had effectively hung when dealing with queries solely on InnoDB tables. Closer investigation found that another application process had filled the /tmp file system. Reclaiming space didn’t cause MySQL and InnoDB to start operating again. Even a shutdown of MySQL failed, with mysqld having to be killed manually.

For those super inquisitive, the version was 5.1.16-ndb-6.2.0-log, and yes it is a Cluster release. I’ve yet to test the problem on a normal 5.1 version and log a bug appropriately if it exists.

I suspect in our benchmark we definitely need to include some timeout handling so the queries would fail (they were both UPDATEs), but it did have the customer asking why, to which there was no answer.

Watching Replication in action

For all those instant GUI people out there, there is an easy way to watch the present status of your MySQL Slaves using the watch command.

$ watch -n 1 -d "mysql -uroot -pxxxx mysql -e 'SHOW SLAVE STATUS\G'"

The watch command provides a view of a file or command output, refreshed at a given interval (the -n <seconds> option). You can also specify a granularity finer than one second, for example 0.5. -d also highlights the differences for you. So while you see the following output with your SHOW SLAVE STATUS, on a loaded system you will also see bin-log and relay-log changes, and perhaps Seconds_Behind_Master.

The question is, Why is Seconds_Behind_Master the last column in this display?


*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: localhost
Master_User: repl
Master_Port: 10002
Connect_Retry: 60
Master_Log_File: master-bin.000006
Read_Master_Log_Pos: 102
Relay_Log_File: newyork-relay-bin.000055
Relay_Log_Pos: 244
Relay_Master_Log_File: master-bin.000006
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 102
Relay_Log_Space: 539
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 0

Smarter indexing for column LIKE '%string%'

With my very heavy travel and skilling load I’ve not had time to scratch myself. That hasn’t stopped the brain working overtime on various issues, including the classic problem of finding a pattern in a string starting with a wildcard character. On a recent gig I saw the true classic:


SELECT columns
FROM users
WHERE username LIKE '%str%'
OR firstname LIKE '%str%'
OR lastname LIKE '%str%'

I went through the various options and comments on the leading ‘%’, ORs, combined columns, FULLTEXT (which doesn’t work in this case), merge indexing etc. However it perplexed me that nobody has really solved this problem, or at least shared their solutions.

I have an idea, a theory, and while I’d love to prove/disprove it, I simply just don’t have the time. So here are my notes; hopefully somebody can comment positively/negatively, do some research, or encourage me to pursue it more.

The Problem

The problem is quite simply the leading wildcard, Der!. So how do you eliminate the wildcard from the search?

My Idea

Working with the earlier example, and having a concatenated field of the three components (username, firstname, lastname), my idea is to encode the string into 27 bits: a bit for each alphabetic character (A-Z) found, and bit 27 for any non-alphabetic character except whitespace.

This column is then indexed and searched for bitwise matches. Here is a pictorial description.

String             a b c d e f g h i j k l m n o p q r s t u v w x y z
‘Ronald Bradford’  1 1   1   1           1   1 1     1
Search: ‘brad’     1 1   1                           1                  *** MATCH ***
Search: ‘fred’           1 1 1                       1                  NO MATCH

My idea is very simple; the question is, will it actually work?

The Questions

The goal is quite obviously index utilization that narrows matches to a maximum of say 10%-25% of rows. Factors that would affect this include:

  • If the search string is one character, like a vowel, I could see this not being useful, so an implied criterion for optimal use is at least 2 or 3 characters.
  • I see the effectiveness being lost on large column values; it could be that only up to say 30 characters is optimal, and strings of 100+ characters would be ineffective.
  • It doesn’t support case-sensitive searching or non-ASCII searching.

The Tests

  • Create a good sized table and distribution of first/last names (an IMDB Actors table would work)
  • Create a concatenated column of searchable fields (e.g. details = CONCAT_WS(' ', firstname, lastname))
  • Create a function to return the bitwised search value (e.g. bitize(str); a sketch follows this list)
  • Create Indexed bitwise column and pre-populate accordingly (e.g. bitfulldetails)
  • Create BEFORE INSERT|BEFORE UPDATE Triggers to populate Indexed bitwised column
  • Test the sucker (of course you will need to include the actual LIKE command as well in the WHERE clause)
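Here is the sketch of the bitize() function mentioned above, as an untested MySQL stored function (bits 0-25 for a-z, bit 26 as the 27th bit for other non-space characters):

DELIMITER //
CREATE FUNCTION bitize(str VARCHAR(255)) RETURNS INT UNSIGNED
DETERMINISTIC
BEGIN
  DECLARE bits INT UNSIGNED DEFAULT 0;
  DECLARE i INT DEFAULT 1;
  DECLARE c CHAR(1);
  WHILE i <= CHAR_LENGTH(str) DO
    SET c = LOWER(SUBSTRING(str, i, 1));
    IF c BETWEEN 'a' AND 'z' THEN
      SET bits = bits | (1 << (ASCII(c) - ASCII('a')));   -- one bit per letter found
    ELSEIF c <> ' ' THEN
      SET bits = bits | (1 << 26);                        -- the 27th bit: other characters
    END IF;
    SET i = i + 1;
  END WHILE;
  RETURN bits;
END//
DELIMITER ;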

A sample query would then be:


SELECT columns
FROM users
WHERE (bitize('str') & bitfulldetails) = bitize('str')
AND fulldetails LIKE '%str%'

So, the challenge to all those budding MySQL gurus: does it seem plausible?

CU@UC07


I’ll be speaking at the upcoming 2007 MySQL Conference & Expo (why they dropped the word User, who knows), this time with Guy Harrison (author of many books including MySQL Stored Procedures). We will be talking on MySQL for Oracle DBAs and Developers.

Anyway, good friend Paul McCullagh, creator of PBXT, will also be speaking on PrimeBase XT: Design and Implementation of a Transactional Storage Engine. He coined in an email to me “CU at the UC”. I’ve done a further level of refactoring, and added marketing. You can buy the shirt online here. (More colors including black, and more products, are coming; if you want it now, please ask.)

Using Innodb Primary Keys wisely

At a customer site recently I came across the following table definition for an InnoDB table: 140 columns, a 3-part primary key totaling 44 bytes, and 2 million rows giving over 900MB in data size. It had 15 indexes, totaling over 2.3GB in size. Add into the mix a Windows Server 2003 OS, a 640MB innodb_buffer_pool_size setting and table scans out the wazoo. This all leads to very poor performance.

It is generally considered best practice for InnoDB to use the shortest primary key possible, and there is a clear reason for this: InnoDB stores the full primary key with every secondary index entry. So if an indexed column was 4 bytes in length, in this example the index row would be 48 bytes (before overhead). Fortunately an easy solution presented itself: because of this index storage requirement, InnoDB will create an internal 6 byte primary key if none exists for a table. I of course had known about this but had never tried it in a production situation. I come from the old school where every table is defined with a primary key.

So an ALTER TABLE [name] DROP PRIMARY KEY resulted in a long wait, and a reduction in the index size to 900MB. WOOT! Now, on closer analysis, the primary key was the primary key because it was the unique requirement for the table. No problem, I just add a unique key to replace the previously dropped primary key. A check of the index size then showed 2.3GB. What the!

It seems if you read the fine print of the MySQL documentation for InnoDB Table Structures there is an exception clause: if you do not define a PRIMARY KEY for your table, MySQL picks the first UNIQUE index that has only NOT NULL columns and InnoDB uses it as the clustered index.

In my situation, the unique key I added was in turn converted internally to the new primary key. Drat! So to the InnoDB developers out there: I’d like to see a way for the internally generated key to remain in this situation, or at least the ability for the designer to choose this behavior.

The only remedy is to physically create an INTEGER UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY column. This may not seem like a big thing, but the customer did not want to make any schema changes. However it was a necessity, simply as the first step to getting reasonable performance.
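A minimal sketch of that change (table, column and key names are illustrative):

ALTER TABLE big_table
  DROP PRIMARY KEY,
  ADD COLUMN id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  ADD UNIQUE KEY uk_natural (col_a, col_b, col_c);

Every secondary index entry then carries a 4 byte primary key value instead of the original 44 bytes.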

This was only the first part of the problem; solving the full table scans via creative indexing was not possible, and code changes were out of the question for an immediate fix. The real problem was the size of the innodb_buffer_pool_size. It was just too small to hold both the table data and index pages in memory. Perfmon analysis showed the machine simply went into intensive disk I/O every time any queries were run. On a Linux system with just a database and InnoDB tables, it is generally recommended that 70-80% of available memory be allocated to the innodb_buffer_pool_size.

Alas this is 32-bit Windows, and there is an implied 2GB memory limit for any process, so the best one could manage in this situation was 1600MB.
The long and the short of it is that even with poor database design, an immediate performance improvement occurred with an optimal primary key and sufficient memory allocated to the correct MySQL buffers. This is only a temporary fix for a greater problem.

MySQL Camp T-Shirts


For those that attended the MySQL Camp at Google HQ late last year, you may have seen me with my own T-Shirt designs. A number of people inquired about getting them. I’ve finally got around to making them available online, so anybody that would like one can order online.

There are two different shirts. If you want your name on the shirt, you need to make sure you choose the correct one.

  • Early Adopters – For those that were the first 48 that signed up, your name as well as position and company are on the shirt.
  • The Herd – For everybody that registered on the website, your name is on the shirt.

Ok. I’ve already been asked why 48. This was the number of registrants when I got the shirt made back in Australia a week or so before the Camp.

There are also plenty more of my MySQL designs at my MySQL Wear Store.

For those that also liked the runner-up pin “A mouse is a waste of a perfectly good hand”, you can also get this in its original graphical shirt design at Geek Cool – CLI.

MySQL Predictions for 2007

I’m interested to know what people consider 2007 will hold for MySQL.

The announcement of “You” as Time Person of the Year can only be considered a huge boost to the opportunities in 2007. So here are my 7 predictions for 2007 (in no significant order).

  1. 2007 will be the year of the storage engine. We will see 5 offerings of transactional storage engines, and 20+ available and practical engines for management of some form of data.
  2. 2007 will see MySQL 5.1 GA (finally).
  3. 2007 will see MySQL release its own Falcon storage engine (GA not until Q4 :-().
  4. The MySQL Winter of Code will enable the contributions of the community to change feature development. I foresee a Bounty system from an external party or parties for MySQL Features emerging.
  5. MySQL will make major press inroads to the RDBMS Big 3 of Oracle, SQL Server and IBM DB2.
  6. Despite the efforts of MySQL AB, major installations of MySQL 4.0 and 4.1, including large ISPs, will hamper the uptake of 5.0 and 5.1 and the decommissioning of 4.x.
  7. A major country government will make an announcement to move to Open Source across servers and desktops, and MySQL will contribute to being an enterprise database offering in systems replacements as part of a longer term strategy.

NY Tech Meetup – Idea Virus

One more thing that came from the NY Tech Meetup last night was the Idea Virus. It was handed out on a piece of paper. Here is what it said.

Idea Virus

  1. Think of a novel or awesome idea
  2. Write it on the back of your business card
  3. Spread the viral idea to somebody else
  4. Duplicate and re-propagate ideas that you like

When you go home, leave your accumulated ideas in the Halloween bucket. Check out idea-virus.org to see the results.

So I go to the site, and all I get is a Not quite ready yet… message page. They should have at least put up what I just typed out, and maybe some more “Coming Soon”, or a way to submit via email or something. 15 mins of work, rather than 2 mins of work, would not leave people like me going “Well, why bother!”

NY Tech Meetup

Tonight I headed to the NY Tech Meetup organized by the CEO of Meetup and co-founder of Fotolog, the company my friend Frank works for.

This forum provided quick presentations by new NY high tech ventures and other interesting discussions, enabling further networking between people.

A Perfect Thing


The first speaker was Steven Levy, mentioned on the site as Newsweek’s tech editor & all-around geek writer extraordinaire. He is the author of “The Perfect Thing”, a story of the Apple iPod. He shared a funny story of a dinner where he was seated with Bill Gates at the Microsoft XP launch in late 2001; he had just that week received his first iPod following its launch. When he gave it to Bill Gates, he observed what he described as a mind meld, a vortex between Bill’s brain and the iPod, as he checked it out, exploring all the menu options and buttons. 45 seconds later came a comment of something like: looks great, and it works with a Macintosh.

Urbis

Our second speaker was Steve Spurgat from www.urbis.com. The blurb: Urbis is a creative community with three types of users: creative people, those who love and support creative people, and those who have opportunities for creative people. It’s very creative. Some of the interesting features of this site included:

  • Can pre-define the people that can review your submissions, by various criteria, meaning that your feedback can be restricted.
  • You can specify your specific goals for your submission.
  • You can select the present opportunities for your submission.
  • There is an economy system: to see reviews of your own work you must review others’.

Presently only writing is available, but Music, Art and Film will be available in the next few months. With some 12,000+ members and 13% active, it’s a good start.

There was also discussion of copyright (Urbis being a registered copyright agent complying with government guidelines) and of revenue models, including the option of fees from publishers and the potential of ad copy. A competitor site, Trigger Street, started by Kevin Spacey, was also mentioned.

One Web 2.0 thing I liked about this site, and the next, was that the website was the presentation (no PowerPoint); while the speaker talked, the home page of the website was displayed with its content dynamically changing, in this case reviews being submitted online. A good selling point.

LinkStorms

Scott Kolber of LinkStorms was our next presenter. LinkStorms is described as the next generation of links for the web, providing context-specific fast links and specific navigation from a button, image, or banner ad.

The revenue model is CPM plus a publisher setup, maintenance and support fee structure. Apparently it achieves up to a 40% click-thru rate, which is extraordinary compared to the current stats of < 1% for general banners.

When asked what was different with this model, the answer was "the results. It's a better user experience looking at ads".

You can see it in action at Premiere Magazine – The Departed.

CogMap

Brent Halliburton and his approach to a wikipedia of organization charts with CogMap certainly got the best response from the crowd. A good comedian, Brent made the mistake, on a slow Internet connection, of demonstrating interactively with an example from the audience rather than his own prepared content. It ended up not rendering, then crashing, but he managed to turn it around into a plus and the best applause of the night.

His idea provoked a wide range of comment and feedback, and when asked why: “Because if you’re an entrepreneur you do things”. “In the big scheme of things I don’t have all the answers. I just put it out there.”

uPlayMe

David Fishman provided the last presentation, of uPlayMe, a Windoze program that provides a slant on community social networking via entertainment, specifically what people are actually playing in Windows Media Player, for example. It’s designed to help people discover other people with the same interests, or weird interests. Some other sites mentioned in the discussion included Last.fm, Pandora and MOG.

2007 Predictions

We ended with audience-participated 2007 predictions. They included:

  • No prediction (from the first person asked, who I believe was from the Board of Advisors)
  • IP TV market and integration with the TV
  • Will see a Billion $ organization from the NY community
  • The buzz of radios that can do multiple gigabits of transfer between neighbours (yes, it sounded weird)
  • Era of the connected home, Computer, TV, Stereo
  • Some political thing at change.net
  • Another political thing, an organic style camp debrief
  • The Term 2.0 will cease being used in 2007
  • Skype will be a source of major innovation
  • NY will produce a billion dollar Internet company

Pluggable Storage Engines – What is the potential?

I started this post a month ago, but after Kaj’s discussion on the same topic at the MySQL Camp I figured it was time to post.

I had dinner with a friend recently (a very smart friend), and our conversation led him to ask “What’s different about MySQL?”. One of the things I tried to describe was the “Pluggable Storage Engine Architecture” (PSE), a potential for the future that I expect will set MySQL apart from all other open source and even commercial databases.

Here are some details of the example I tried to provide, given somebody who understands enough of the general principles of RDBMSs.

Consider that information (intelligent data) could be available within a relational database via the appropriate tools and language (e.g. SQL), yet not be physically constrained to tables, columns and rows of data with an application to manage that data, which is the present traditional approach. Let’s use the images you take with your digital camera as an example.

In a typical RDBMS application you would create an application to manage the content of your data, with a number of tables, links to the images etc. Of course you would also need an application to both view and manage this information.

What if you simply pointed your database at a directory of images and were then able to query information such as photos by date, by size, by album, from a certain location, or with a given keyword? Most of this information about digital photographs is already there, encoded in the Exif format embedded within JPEG images.
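To make this concrete, a purely hypothetical Exif storage engine (no such engine exists; every name here is imaginary) might let you write:

-- Imagine each JPEG in a directory exposed as a row,
-- with columns derived from its Exif headers.
SELECT file_name, focal_length, date_time_original
FROM photos
WHERE date_time_original BETWEEN '2006-02-01' AND '2006-03-01'
ORDER BY date_time_original;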

So what’s missing from this information? Tags and comments are the most obvious, because this information can’t be determined electronically; it is something humans add. If you could also embed this information into an image with a suitable tool then you would be ready to manage your photos.

A further extension would be to have Image Analysis capabilities that enabled you to search for photos that contained the sky, or people, or something that was the color red.

What if in the future your camera had a built-in GPS and this information was recorded within Exif? The ability to link your output to popular online mapping software such as Google Maps would then exist. You could use your digital camera to track your movements, taking photos that could plot your path over a holiday, also enabling location based queries.

It was interesting to postulate what ideas may be possible in the future. I suspect that it won’t be long before we actually see this. So what are the other potentials you may not have considered? Another example may be an MP3 jukebox style PSE, managing all the information held within the ID tags of MP3s, allowing you to do with music what could be done with images.

References

Exif Example

Here is some example Exif content using ExifTool:

 ./exiftool ~/Desktop/2006_02_23_AirShow/IMG_5966.JPG
ExifTool Version Number         : 6.50
File Name                       : IMG_5966.JPG
Directory                       : /home/rbradfor/Desktop/2006_02_23_AirShow
File Size                       : 2 MB
File Modification Date/Time     : 2006:09:24 17:44:32
File Type                       : JPEG
MIME Type                       : image/jpeg
Make                            : Canon
Camera Model Name               : Canon EOS 300D DIGITAL
Orientation                     : Horizontal (normal)
X Resolution                    : 180
Y Resolution                    : 180
Resolution Unit                 : inches
Modify Date                     : 2006:02:23 16:01:56
Y Cb Cr Positioning             : Centered
Exposure Time                   : 1/320
F Number                        : 10.0
ISO                             : 200
Exif Version                    : 0221
Date/Time Original              : 2006:02:23 16:01:56
Create Date                     : 2006:02:23 16:01:56
Components Configuration        : YCbCr
Compressed Bits Per Pixel       : 3
Shutter Speed Value             : 1/320
Aperture Value                  : 10.0
Max Aperture Value              : 3.5
Flash                           : No Flash
Focal Length                    : 18.0mm
Macro Mode                      : Unknown (0)
Self-timer                      : 0
Quality                         : Fine
Canon Flash Mode                : Off
Continuous Drive                : Single
Focus Mode                      : AI Focus AF
Canon Image Size                : Large
Easy Mode                       : Manual
Digital Zoom                    : Unknown (-1)
Contrast                        : +1
Saturation                      : +1
Sharpness                       : +1
Camera ISO                      : n/a
Metering Mode                   : Evaluative
Focus Range                     : Not Known
AF Point                        : Manual AF point selection
Canon Exposure Mode             : Program AE
Lens Type                       : Unknown (-1)
Long Focal                      : 55
Short Focal                     : 18
Focal Units                     : 1
Max Aperture                    : 3.6
Min Aperture                    : 22
Flash Activity                  : 0
Flash Bits                      : (none)
Zoom Source Width               : 3072
Zoom Target Width               : 3072
Color Tone                      : Normal
Focal Plane X Size              : 23.22mm
Focal Plane Y Size              : 15.49mm
Auto ISO                        : 100
Base ISO                        : 200
Measured EV                     : 9.00
Target Aperture                 : 10
Target Exposure Time            : 1/318
Exposure Compensation           : 0
White Balance                   : Auto
Slow Shutter                    : None
Shot Number In Continuous Burst : 0
Flash Guide Number              : 0
Flash Exposure Compensation     : 0
Auto Exposure Bracketing        : Off
AEB Bracket Value               : 0
Focus Distance Upper            : -0.01
Focus Distance Lower            : 5.46
Bulb Duration                   : 0
Camera Type                     : EOS Mid-range
Auto Rotate                     : None
ND Filter                       : Unknown (-1)
Self-timer 2                    : 0
Bracket Mode                    : Off
Bracket Value                   : 0
Bracket Shot Number             : 0
Canon Image Type                : IMG:EOS 300D DIGITAL JPEG
Canon Firmware Version          : Firmware Version 1.1.1
Camera Body No.                 : 0930402471
Serial Number Format            : Format 1
File Number                     : 159-5966
Owner's Name                    :
Canon Model ID                  : EOS Digital Rebel / 300D / Kiss Digital
Canon File Length               : 2387078
WB RGGB Levels Auto             : 1726 832 831 948
WB RGGB Levels Daylight         : 0 0 0 0
WB RGGB Levels Shade            : 0 0 0 0
WB RGGB Levels Cloudy           : 0 0 0 0
WB RGGB Levels Tungsten         : 0 0 0 0
WB RGGB Levels Fluorescent      : 0 0 0 0
WB RGGB Levels Flash            : 0 0 0 0
WB RGGB Levels Custom           : 0 0 0 0
WB RGGB Levels Kelvin           : 0 0 0 0
Color Temperature               : 5200
Num AF Points                   : 7
Canon Image Width               : 3072
Canon Image Height              : 2048
Canon Image Width As Shot       : 3072
Canon Image Height As Shot      : 2048
AF Points Used                  : Mid-left
Preview Quality                 : Normal
Preview Image Length            : 278318
Preview Image Width             : 1536
Preview Image Height            : 1024
Preview Image Start             : 2108760
Preview Focal Plane X Resolution: 3443.9
Preview Focal Plane Y Resolution: 3442.0
User Comment                    :
Flashpix Version                : 0100
Color Space                     : sRGB
Exif Image Width                : 3072
Exif Image Length               : 2048
Interoperability Index          : R98 - DCF basic file (sRGB)
Interoperability Version        : 0100
Related Image Width             : 3072
Related Image Length            : 2048
Focal Plane X Resolution        : 3443.946
Focal Plane Y Resolution        : 3442.017
Focal Plane Resolution Unit     : inches
Sensing Method                  : One-chip color area
File Source                     : Digital Camera
Custom Rendered                 : Normal
Exposure Mode                   : Auto
Scene Capture Type              : Standard
Compression                     : JPEG (old-style)
Thumbnail Offset                : 2560
Thumbnail Length                : 7680
Image Width                     : 3072
Image Height                    : 2048
Aperture                        : 10.0
Drive Mode                      : Single-frame shooting
Flash                           : Off
Image Size                      : 3072x2048
Lens                            : 18.0 - 55.0mm
Preview Image                   : (Binary data 278318 bytes, use -b option to extract)
Preview Image Size              : 1536x1024
Scale Factor To 35mm Equivalent : 1.6
Shooting Mode                   : Program AE
Shutter Speed                   : 1/320
Thumbnail Image                 : (Binary data 7680 bytes, use -b option to extract)
WB RGGB Levels                  : 1726 832 831 948
Blue Balance                    : 1.140108
Circle Of Confusion             : 0.019 mm
Focal Length                    : 18.0mm (35mm equivalent: 27.9mm)
Hyperfocal Distance             : 1.67 m
LV                              : 14.0
Lens                            : 18.0 - 55.0mm (35mm equivalent: 27.9 - 85.3mm)
Red Balance                     : 2.075767

The desire for Performance SQL Tips

It seems people are clamoring for a more consolidated help guide for SQL performance tips.

Jay Pipes at the MySQL Camp ran a session Interactive Top 10 SQL performance Tips. There was plenty of input and discussion, and at the time Sheeri simply typed them into a wiki page for later work.

Well it seems even that rough list is popular at del.icio.us, ranking near the top of the Hot List on the front page. I saw it earlier when it was second or third, but didn’t think of taking a screen shot until now; it’s still high.

I’d say we could get a Top 10 for up to 10 different categories rather easily. Good luck Jay.

The MySQL Joust

At our MySQL Camp, Jay and Brian pitted off in the Umbrella Joust. Not sure if there was a winner, or a loser, but in the end no blood was spilt (except Leslie’s, but that’s another story).

See these and more camp photos at Flickr.




The Falcon!

Some early notes by Brian Aker on Falcon as discussed at the MySQL Camp.

Falcon is a transactional engine MySQL will be introducing. The first discussions were held about 3 years ago with Ann Harrison, and about 1 1/2 years ago MySQL started taking the possibilities seriously.

Falcon is not an InnoDB replacement. It’s a different way of looking at the problem in how it views and manages transactions, and how it’s designed. It flips around the way data is stored. Some points:

  • It uses as much memory as possible, like Oracle SGA or InnoDB pool.
  • It has a row cache not a page cache for more optimal memory use.
  • No locking at all. Jim doesn’t believe in it for concurrency control. It has total versioning.
  • Falcon has to keep all changes in memory, so it is not great for user transactions that may take longer.
  • Characteristics: well optimised for short fast web transactions; designed for environments with lots of memory.

In general discussion it was mentioned from the floor that there is a fear there will be so many storage engine options that you will need a matrix of what is good for what.

In conclusion, Brian mentioned it will be alpha before the end of the year.

MyISAM++

Monty gave us a quick overview of the next generation of MyISAM. It is set to include:

  • New data disk format
  • Transaction support
  • multi-versioning
  • row level locking with escalation to table level locks (interesting)
  • bitmap indexes and new indexes optimizing table scans, with up to 1000x performance.

No details of time frame were given for delivery, however development is well underway.

Doxygen Project

What the?

Well, this is the inheritance diagram of the Item class in the MySQL 5.1 source tree, nicely documented using the Doxygen tool, as mentioned by Jay in his presentation at MySQL Camp.

Jay started the Community Doxygen Project on the Forge to improve the level of commenting enabling a better platform for the community to contribute MySQL server code changes.

At this early stage David Shrewsbury is working on fine tuning initial documentation examples for QA and review. You can check out the Status Page of automated commenting conversion.

You can see the present documentation of MySQL 5.1 source here.

The joys of working at Google


So, mid morning, especially after a heavy and late night of drinking with new friends in Palo Alto, I was seeking a high-caffeine pick-me-up drink at Day #3 of the MySQL Camp. Yesterday I had a Bawls, and after enjoying it I was a little concerned that when I returned to New York I would not be able to buy it. You can get it at Think Geek but that’s more complicated than a local supermarket.

So after getting a Googler to take me to the cafe fridge we find out that there weren’t any there. No problem, let’s just go this way I’m told. So we start a quick tour of the larger cafe area and another set of fridges, but no Bawls; we keep walking, no more again. At this time the recommendation is I should try a Rockstar, but so far no luck either. Then we head into another area (all in the same building) to a micro kitchen: no Bawls and no Rockstar, but man, it’s an entire kitchen, with like 15+ cereals alone on tap. My host is now committed to me finding and trying his recommendation of a Rockstar. Off now, still in the same building, upstairs. At this time I’m blown away. The first desk I actually walk past has a Dell 24″ Widescreen LCD monitor, of which I have one (see my comments here). One blink later I see a desk in the next area with two side by side in vertical mode. Blink again, and in this area there are four desks, and each desk has twin Dell 24″ Widescreen LCD monitors. They were everywhere I looked. WOW!!!!

So at the next micro kitchen we finally achieve our objective. A Rockstar. Sometimes the journey is just as rewarding as the prize. In this case I got a quick and very amazing tour of just a small part of one building.

Some details about the Rockstar. Firstly, not bad; it certainly was a pick me up, and it didn’t take long to kick in. Then I looked at the Supplement Facts on the can: % Daily Value = 130%. So this is today’s total intake plus 1/3 of tomorrow’s. Oops! Breakfast, morning tea, and now Indian for lunch! (Update: I’m a dick, hence the need for the drink originally, as beer killed too many brain cells. As pointed out, it’s a value of 130 for calories and blank for % daily value.)

I was also told that Googlers can get addicted to the high energy drinks here. Yes, I’m certain that is true.

Return to Google Lobby – Camp Photo


Early on Sunday, Day #3, I dragged a few willing participants out for a “different photo” based on the umbrellas in each lobby. It worked out well. Special thanks to Kynan, who ran around to the other lobbies to find additional umbrellas. (He is the one holding the white one, and yes, that’s a utility kilt.)

I’ll be uploading more in this series to My Flickr Photos MySQL Camp 01 soon.



You can get a larger copy of image Here.

MySQL Replibeertion

MySQL Replibeertion was the last scheduled session on Day 2; notwithstanding the free beer (a lot of it), there was a serious side with a Replication Discussion.

One of the first questions by Jeremy was “Are there any big replication users?” to which Sheeri quickly replied “Are you calling me fat again?”

This was a highly interactive session, here are some of the points from the audience.

Some Uses of Replication

  • Backup
  • Hot standby
  • Scaling
  • Data Warehousing
    • Slaves are larger than your database
  • For no special reason
  • Consolidation of multiple sources
  • Support for multiple indexes

Issues

  • Can break
  • replication lag
  • bi-directional replication is not supported
  • hard to set up replication/initialization, up to the point of running one command
  • hard to know when the slave is out of sync (working but broken) and to diagnose it
  • does not manage binary logs for you (max-bin-logs)
  • no row level replication (5.1 adds row based replication, change based replication, good and bad)
  • Serialized execution on the slave
  • Master does not keep track of the slaves (to the master, the slave is just another connection). Jeremy’s comment: “it really really sucks in production systems.”
  • No multi-master replication. A slave can not have more than one master.
  • Ring replication: when something breaks there is no way to know what is right.
  • no ability for delayed replication.
  • no way to get binlogs back (manually tweak the info file).
  • Master doesn’t care what data is on the slave.
  • A replication backup is really only good for the restoration of that machine
  • Default reconnection timeout is way, way too high (default of 1 minute). It should be at most 1 second with exponential fallback (master-connect-retry); there is no fallback, no max number of retries, and it is logged in the error log every time.

Feature Requests (things replication needs, what you want to see)

  • Delayed Replication
  • Registered Slaves in the Master
  • Import Binlog
  • Checksum Table Events (Need ability for table checksum to be added to binlog periodically so it can be checked by the slave.)
  • Global Sequence Number
  • Connect Retry Exponential Backoff
  • Heterogeneous Replication (Oracle to MySQL). Golden Gate Software has a commercial offering
  • Command Exclusion List (sql_log_bin=0)
  • Replication filters by data on the slave
  • Show upcoming queries, skip query
  • Multi-Master to one slave
  • piping mysqlbinlog output into the mysql client fails for some character sets.
  • Binlog index capability
  • Checksum of Binary Events to determine a command is valid
  • a command line interface in mysqlbinlog so you could go backwards and forwards, then execute commands.

Check out more at Google Code Blog.

Day 2 – Memorable Quotes

Continuing on from my Day 1 – Memorable Quotes from the MySQL Camp.

“Are there any big replication users” — Jeremy “Are you calling me fat again” — Sheeri

“Only some of us have problems with interruptions.” — Jeremy to Jay

“It really really sucks in production systems.” — Jeremy About Slave management by Master.

“So there are like 12 people here, it must be the CEO’s turn to talk.” — Mårten Mickos, MySQL CEO

“Kegs and Eggs” — Joel S., regarding all the beer that would still be available at breakfast tomorrow.

“You can fight to the death for it”, Jeremy to his two employees Joel and Justin about who gets to be called employee #1.

“Patches go to employee #1” — Ronald, directed to Joel when a replication patch was coined by Jeremy and Eric.

“It’s a little like Google, there are no numbers”. In response to getting any dates/times on a commitment to functionality by MySQL.

“There is a way, but you don’t want to do it.” — Monty on a topic in using Replication Slave for Master Backups

“The Blackhole storage engine is really really scary. It’s not just the name, it’s a hack.” — Jeremy

“It will suck you in.” — More on the Blackhole Storage Engine.

“It still scares me.” — Jeremy, after a long discussion by Brian on the Blackhole architecture concluding with its transactional state.

“I’m not sure I’d buy that.” — Brian continuing on more comments about the blackhole discussion.

“Let’s not optimize things that won’t happen in the grand scheme of things” — Jeremy

“You asked what I wanted to see, not what was practical” — Sheeri

“Wasting network bandwidth is great” — Jeremy

“People do lots of weird things for performance”.

“All you need is beer and love”.

“Oh” — Sheeri. Long pause. “Light bulb pops up” — Jeremy

“Are we eating Oracle’s lunch? No, we are eating Oracle’s dessert” — Mårten Mickos, MySQL CEO