MySQL Replication 102

One of the most frequently asked questions is how to set up MySQL replication. The MySQL Reference Manual provides a good Replication How To as a starting guide for MySQL Replication 101.

MySQL replication has many uses including read scalability, backups, failover, online maintenance, upgrade testing and verification, software upgrades, alternative data or structure for performance queries and even benchmarking to name the popular uses.

When reviewing an installation of MySQL replication, I use the following checklist to confirm that your replication environment is operational.

Master Variables

  • server-id – Replication will not work without this correctly set and unique
  • log-bin – Pre-requisite for working replication
  • log-bin-index
  • max_binlog_size
  • binlog_cache_size
  • expire_logs_days – a value from 5 to 10 is good; if not set, this can result in a full disk.
  • binlog-do-db/binlog-ignore-db – Use with caution
  • sync_binlog
  • innodb_support_xa
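
As a point of reference, a minimal master configuration covering these settings might look like the following (the values and paths are illustrative assumptions only, not recommendations for your environment):

[mysqld]
server-id          = 1
log-bin            = /var/lib/mysql/bin-log
log-bin-index      = /var/lib/mysql/bin-log.index
max_binlog_size    = 256M
binlog_cache_size  = 1M
expire_logs_days   = 7
sync_binlog        = 1
innodb_support_xa  = 1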

Slave Variables

  • server-id – Replication will not work without this correctly set and unique
  • read_only = TRUE
  • log-bin – may or may not be present
  • relay-log
  • relay-log-index
  • max_binlog_size
  • binlog_cache_size
  • expire_logs_days – a value from 5 to 10 is good; if not set, this can result in a full disk.
  • replicate-do-???? – Warning, use these with caution. Your slave will not be the same as your master.
  • slave-skip-errors – Warning, this can lead to your slave being inconsistent with your master.
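
Similarly, an illustrative slave configuration (again, example values and paths only) might be:

[mysqld]
server-id        = 2
read_only        = TRUE
relay-log        = /var/lib/mysql/relay-log
relay-log-index  = /var/lib/mysql/relay-log.index
expire_logs_days = 7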

On the Master I audit the following information.

  • SHOW MASTER STATUS
    • If any Binlog_Do_DB, then a SHOW SCHEMAS for verification
  • SHOW MASTER LOGS
    • Confirm physical files as well as available diskspace on log-bin disk partition
  • SHOW SLAVE STATUS (in a true master/slave environment this should be empty)
  • SHOW GLOBAL VARIABLES LIKE 'binlog_cache_size';
  • SHOW GLOBAL STATUS LIKE 'Binlog%';
  • SELECT host,user,password FROM mysql.user WHERE Repl_slave_priv='Y' AND Super_priv='N';

On the Slave I audit the following information.

  • SHOW SLAVE STATUS
  • SHOW MASTER STATUS – This will determine if you have log-bin enabled on the slave

The key information for MySQL slaves is in the SHOW SLAVE STATUS command. An example output is:

mysql> show slave status\G
*************************** 1. row ***************************
             Slave_IO_State:
                Master_Host: 10.10.1.1
                Master_User: slave
                Master_Port: 3306
              Connect_Retry: 60
            Master_Log_File: bin-log.001817
        Read_Master_Log_Pos: 369684547
             Relay_Log_File: relay-log.000449
              Relay_Log_Pos: 42347742
      Relay_Master_Log_File: bin-log.001817
           Slave_IO_Running: No
          Slave_SQL_Running: No
            Replicate_Do_DB:
        Replicate_Ignore_DB:
         Replicate_Do_Table:
     Replicate_Ignore_Table:
    Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
                 Last_Errno: 0
                 Last_Error:
               Skip_Counter: 0
        Exec_Master_Log_Pos: 369684547
            Relay_Log_Space: 42347742
            Until_Condition: None
             Until_Log_File:
              Until_Log_Pos: 0
         Master_SSL_Allowed: No
         Master_SSL_CA_File:
         Master_SSL_CA_Path:
            Master_SSL_Cert:
          Master_SSL_Cipher:
             Master_SSL_Key:
      Seconds_Behind_Master: NULL
1 row in set (0.00 sec)

It is important that you learn and understand these values. In the above case, replication is NOT running, as indicated by Slave_IO_Running and Slave_SQL_Running.
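
As an illustration only, once the cause reported in Last_Errno/Last_Error has been investigated and addressed, replication is typically resumed and then re-checked with:

mysql> START SLAVE;
mysql> SHOW SLAVE STATUS\G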

This information is just an introduction to what to look at. In my next lesson, I’ll cover the output of the various commands in more detail, and describe the relationship of the underlying files that are important for a working MySQL Replication environment.

Other References

Verifying MySQL Replication in Action
MySQL Replication Architecture

Where can you find MySQL Events?

As a frequent traveler for my MySQL consulting (last 4 weeks were Sydney, San Francisco, New York and Vancouver), I like to keep abreast of any local tech event that includes MySQL that I may be able to attend.

Now there is a consolidated location that you can use, the Open Source Events Calendar. Kudos to the MySQL Community team members Lenz Grimmer and Giuseppe Maxia who have put this together.

We need your help. If you have a local event, please submit your event request. This project needs the support of all.

You will also find valuable conference information including dates for close of proposals. A great tool for scheduling your upcoming conference year.

InnoDB I_S.tables.table_rows out by a factor of 100x

I’ve always believed that the MySQL information_schema.tables.table_rows figure for InnoDB tables, while approximate, was approximately accurate.

Today I found that the figures varied on one table from 10x to 100x wrong.

Before performing an ALTER I always verify sizes for reference.
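
The sizes below come from INFORMATION_SCHEMA; a sketch of the kind of query used (the column aliases are my own, and the MB figures are derived from data_length and index_length) is:

mysql> SELECT table_schema, table_name, engine, row_format, table_rows, avg_row_length,
         ROUND((data_length+index_length)/1024/1024,4) AS total_mb,
         ROUND(data_length/1024/1024,4) AS data_mb,
         ROUND(index_length/1024/1024,4) AS index_mb,
         CURDATE() AS today
       FROM information_schema.tables
       WHERE table_schema = DATABASE() AND table_name = 's_a';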

+---------+------------+--------+------------+------------+----------------+------------+------------+-----------+------------+
| table_s | table_name | engine | row_format | table_rows | avg_row_length | total_mb   | data_mb    | index_mb  | today      |
+---------+------------+--------+------------+------------+----------------+------------+------------+-----------+------------+
| XXXXX   | s_a        | InnoDB | Compact    |     208993 |           7475 |  1491.5312 |  1490.0156 |    1.5156 | 2009-09-09 |

mysql> alter table s_a modify col VARCHAR(255);
Query OK, 23471 rows affected (4 min 26.23 sec)

After

+---------+------------+--------+------------+------------+----------------+------------+------------+-----------+------------+
| XXXXX   | s_a        | InnoDB | Compact    |    2407063 |            629 |  1447.0312 |  1445.5156 |    1.5156 | 2009-09-09 |

I was caught out by this. The table reported 200k rows, but the alter returned 23k; that’s roughly 10x out.
I ran my query again, and the second time I got:

+---------+------------+--------+------------+------------+----------------+------------+------------+-----------+------------+
| XXXXX   | s_a        | InnoDB | Compact    |      21813 |          69487 |  1447.0312 |  1445.5156 |    1.5156 | 2009-09-09 |

This is closer to what I’d expected, 22k versus 23k.

I have to say, while I have always treated the data and index sizes as accurate, I now have little confidence in the table_rows value.

MySQL DML stats per table

MySQL provides a level of statistics for your INSERT, UPDATE, DELETE, REPLACE Data Manipulation Language (DML) commands using the STATUS output of the various Com_ variables; however, these are per-server stats. I would like per-table stats.

You can achieve this with tools such as MySQL Proxy and mk-query-digest, however there is actually a very simple solution that requires no additional tools.
The following 1 line Linux command (reformatted for ease of reading) gave me exactly what I wanted, and it had ZERO impact on the database.

$ mysqlbinlog /path/to/mysql-bin.000999 |
   grep -i -e "^update" -e "^insert" -e "^delete" -e "^replace" -e "^alter" |
   cut -c1-100 | tr '[A-Z]' '[a-z]' |
   sed -e "s/\t/ /g;s/\`//g;s/(.*$//;s/ set .*$//;s/ as .*$//" | sed -e "s/ where .*$//" |
   sort | uniq -c | sort -nr


  33389 update e_acc
  17680 insert into r_b
  17680 insert into e_rec
  14332 insert into rcv_c
  13543 update e_rec
  10805 update loc
   3339 insert into r_att
   2781 insert into o_att
...

Granted, the syntax could do with some regex improvements, but in 2 minutes I was able to deduce some approximate load. The mysqlbinlog command also gives the option to retrieve data for a given time period, so it is very easy to get these statistics on a per-hour basis.
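
For example, to restrict the analysis to a single hour you can pass a time window to mysqlbinlog and feed the output into the same pipeline as above (the datetime values here are illustrative):

$ mysqlbinlog --start-datetime="2009-09-09 10:00:00" --stop-datetime="2009-09-09 11:00:00" \
    /path/to/mysql-bin.000999 | ...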

Sometimes the most simple options are right in front of you; you just need to strive to find the simplest solution.

Has your blog been hacked?

While not a MySQL topic, I am posting this here as most of my readers view my MySQL blog: my WordPress blog has been hacked. Has yours?

Like many, I’m sure you may have read about it, for example at WordPress blogs under attack from hack attack, but I was surprised when my custom permalinks did not work.

Being surprised, I looked at the Administrator accounts and found that there was one more than the number being displayed in the list. I had to dig into the database to find the problem.

mysql> select * from wp_users where ID in (select user_id from wp_usermeta where meta_key = 'wp_capabilities' and meta_value like '%admin%');
+-----+-------------+------------------------------------+---------------+------------------------------+---------------------------+---------------------+---------------------+-------------+--------------+
| ID  | user_login  | user_pass                          | user_nicename | user_email                   | user_url                  | user_registered     | user_activation_key | user_status | display_name |
+-----+-------------+------------------------------------+---------------+------------------------------+---------------------------+---------------------+---------------------+-------------+--------------+
|   1 | admin       | $P$BHZFK/prDplb/W/024yrH49JvAmmCE. | ronald        | [email protected] | http://ronaldbradford.com | 2005-11-21 23:43:47 |                     |           0 | Ronald       |
| 127 | ronald      | $P$B..e75VtFsv9bUGj5H5NTiXXPQIitr1 | ronald        | [email protected]    | http://ronaldbradford.com | 2009-02-22 20:13:33 |                     |           0 | ronald       |
| 133 | ChaseKent87 | $P$Bl8cVSzBums33Md6u2PQtUVY2PPBHK. | chasekent87   |                              |                           | 2009-09-05 06:36:59 |                     |           0 | ChaseKent87  |
+-----+-------------+------------------------------------+---------------+------------------------------+---------------------------+---------------------+---------------------+-------------+--------------+
3 rows in set (0.00 sec)

mysql> delete from wp_users where ID=133;
mysql> delete from wp_usermeta where user_id=133;

However the damage has been done, and an update to the recommended 2.8.4 is unlikely to fix the data corruption.

Being a good DBA I have a nightly backup of my database. Being a diligent system administrator, I have not 1 copy but 3 copies of my system: one on my web site and two offsite.

The problem is I don’t keep older backups of my data, only a day old version.

What do you monitor in MySQL?

If you are unfamiliar with what to monitor in MySQL, start by looking at what popular monitoring products monitor. For example, the following is the list of MySQL Cacti Plugin measurements.

Innodb Buffer Pool Activity

  • Pages Created
  • Pages Written
  • Pages Read

Innodb Buffer Pool Pages

  • Pool Size
  • Database Pages
  • Free Pages
  • Modified Pages

Innodb File I/O

  • File Reads
  • File Writes
  • Log Writes
  • File Fsyncs

Innodb Pending I/O

  • Aio Log Ios
  • Aio Sync ios
  • Buffer Pool Flushes
  • Chkp Writes
  • Ibuf Aio Reads
  • Log Flushes
  • Log Writes
  • Normal Aio Reads
  • Normal Aio Writes

Innodb Insert Buffer

  • Inserts
  • Merged
  • Merges

Innodb Log

  • Log Buffer Size
  • Log Bytes Written
  • Log Bytes Flushed
  • Unflushed Log

Innodb Row Operations

  • Rows Read
  • Rows Deleted
  • Rows Updated
  • Rows Inserted

Innodb Semaphores

  • Spin Rounds
  • Spin Waits
  • OS Waits

Innodb Transactions

  • Innodb Transactions
  • Current Transactions
  • History List
  • Read Views

MySQL Binary/Relay Logs

  • Binlog Cache use
  • Binlog Cache Disk Use
  • Binary Log Space
  • Relay Log Space

MySQL Command Counters

  • Questions
  • SELECT
  • DELETE
  • INSERT
  • UPDATE
  • REPLACE
  • LOAD
  • DELETE MULTI
  • INSERT SELECT
  • UPDATE MULTI
  • REPLACE SELECT

MySQL Connections

  • Max Connections
  • Max Used Connections
  • Aborted Clients
  • Aborted Connects
  • Threads Connected
  • Connections

MySQL Files and Tables

  • Table Cache
  • Open Tables
  • Open Files
  • Opened Tables

MySQL Network Traffic

  • Bytes Received
  • Bytes Sent

MySQL Processlist

  • State Closing Tables
  • State Copying to Tmp Table
  • State End
  • State Freeing Items
  • State Init
  • State Locked
  • State Login
  • State Preparing
  • State Reading From Net
  • State Sending Data
  • State Sorting Result
  • State Statistics
  • State Updating
  • State Writing to Net
  • State None
  • State Other

MySQL Query Cache

  • Queries In Cache
  • Hits
  • Inserts
  • Not Cached
  • Lowmem Prunes

MySQL Query Cache Memory

  • Query Cache Size
  • Free Memory
  • Total Blocks
  • Free Blocks

MySQL Replication

  • Slave Running
  • Slave Stopped
  • Slave Lag
  • Slave Open Temp Tables
  • Slave Retried Transactions

MySQL Select Types

  • Select Full Join
  • Select Full Range Join
  • Select Range
  • Select Range Check
  • Select Scan

MySQL Sorts

  • Sort Rows
  • Sort Range
  • Sort Merge Passes
  • Sort Scan

MySQL Table Locks

  • Table Locks Immediate
  • Table Locks Waited
  • Slow Queries

MySQL Temporary Objects

  • Created Tmp Tables
  • Created Tmp Disk Tables
  • Created Tmp Files

MySQL Threads

  • Thread Cache Size
  • Threads Created
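
Most of these measurements map to counters available from SHOW GLOBAL STATUS and SHOW ENGINE INNODB STATUS; as an illustrative sample, a few of the corresponding status variables can be inspected manually with:

mysql> SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages%';
mysql> SHOW GLOBAL STATUS LIKE 'Created_tmp%';
mysql> SHOW GLOBAL STATUS LIKE 'Threads_%';
mysql> SHOW GLOBAL STATUS LIKE 'Table_locks%';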

SQL Analysis with MySQL Proxy – Part 2

As I outlined in Part 1, MySQL Proxy can be one tool for performing SQL analysis. The catch with any monitoring is that the act of monitoring will affect the results, in this case the performance. I don’t recommend enabling this level of detailed monitoring in production; these techniques are designed for development, testing, and possibly stress testing.

This leads to the question, how do I monitor SQL in production? The simple answer is: sampling. Take a representative sample of your production system. The implementation of this depends on many factors, including your programming technology stack and your MySQL topology.

If for example you are using PHP, then running MySQL Proxy on a production system and executing firewall rules to redirect incoming 3306 traffic to 4040 for a short period of time, e.g. 2 seconds, can provide a wealth of information as to what’s happening on the server right now. I have used this very successfully in production as an information gathering and analysis tool. It is also reasonably easy to configure and execute, and the impact of any failure is minimized due to the short sampling time.
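
A sketch of this kind of sampling window, assuming iptables and a proxy already listening on port 4040 (the rule only affects remote client traffic, the syntax is illustrative, and it should be tested carefully before any production use):

$ iptables -t nat -A PREROUTING -p tcp --dport 3306 -j REDIRECT --to-ports 4040
$ sleep 2
$ iptables -t nat -D PREROUTING -p tcp --dport 3306 -j REDIRECT --to-ports 4040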

If you run a distributed environment with MySQL slaves, or many application servers, you can also introduce sampling to a certain extent at these specific points; however, as with scaling options, it is key to be able to handle and process the write load accurately.

Another performance improvement is to move the processing of the gathered information in MySQL Proxy to a separate thread or process, removing this work from the thread execution path and therefore increasing performance. I’m interested to explore the option of passing this information off to memcached or gearman and having MySQL Proxy simply capture the packet information and distribute the output. I have yet to see how memcached and/or gearman integrate with the Lua/C bindings. If anybody has experience or knowledge I would be interested to know more.

It is interesting to know that Drizzle provides a plugin to send this level of logging information to gearman automatically.

SQL query analysis with MySQL Proxy

Long before there was the official Query Analyzer (QUAN), a component of MySQL Enterprise, SQL analysis was possible using MySQL Proxy.

The following is an introduction to logging and query analysis with MySQL Proxy.

Get MySQL Proxy

You need to first download MySQL Proxy. In this example I am using the Linux RHEL5 64bit OS and Version 0.7.2

$ wget http://dev.mysql.com/get/Downloads/MySQL-Proxy/mysql-proxy-0.7.2-linux-rhel5-x86-64bit.tar.gz/from/http://mirror.trouble-free.net/mysql_mirror/
$ tar xvfz mysql-proxy-0.7.2-linux-rhel5-x86-64bit.tar.gz
$ ln -s mysql-proxy-0.7.2-linux-rhel5-x86-64bit mysql-proxy
$ export PATH=`pwd`/mysql-proxy/sbin:$PATH
$ mysql-proxy --help-all

Pre-requisites

MySQL Proxy uses TCP/IP, so it is important you connect via the actual hostname. You should first confirm this, as appropriate MySQL permissions may be necessary. For example:

$ mysql -h`hostname` -u<user> -p

On confirmation this works, you can then connect directly to the proxy

$ mysql -h`hostname` -P4040 -u<user> -p

Logging

$ cd mysql-proxy/share/doc/mysql-proxy/
$ wget -O log.lua http://ronaldbradford.com/mysql-dba/mysql-proxy/log.lua
$ mysql-proxy --proxy-lua-script=share/doc/mysql-proxy/log.lua &
$ tail -f mysql.log

This script is based on simple query logging which requires a modification to work in more current versions of MySQL proxy.

$ mysql -h`hostname` -P4040 -u<user> -p
mysql>  SELECT host,user,password FROM mysql.user;
mysql>  SELECT table_schema,COUNT(*) FROM information_schema.tables GROUP BY table_schema;
mysql>  SELECT NOW(), SLEEP(3);
mysql>  EXIT
$ cat mysql.log
2009-09-02 17:15:01     58 -- select @@version_comment limit 1
2009-09-02 17:16:15     58 -- SELECT host,user,password FROM mysql.user
2009-09-02 17:16:30     58 -- SELECT table_schema,COUNT(*) FROM information_schema.tables GROUP BY table_schema
2009-09-02 17:16:39     58 -- SELECT NOW(), SLEEP(3)

Query Analysis

Restart proxy with the histogram.lua sample provided.

$ mysql-proxy --proxy-lua-script=share/doc/mysql-proxy/histogram.lua &

Connect and run some queries.

$ mysql -h`hostname` -P4040 -u<user> -p
mysql>  SELECT host,user,password FROM mysql.user;
mysql>  SELECT table_schema,COUNT(*) FROM information_schema.tables GROUP BY table_schema;
mysql>  SELECT NOW(), SLEEP(3);

While connected to the proxy, you can now review data from two pseudo tables.

mysql>  SELECT * FROM histogram.tables;
mysql>  SELECT * FROM histogram.queries\G
mysql>  DELETE FROM histogram.tables;
mysql>  DELETE FROM histogram.queries;

mysql> SELECT * FROM histogram.tables;
+---------------------------+-------+--------+
| table                     | reads | writes |
+---------------------------+-------+--------+
| information_schema.tables |     1 |      0 |
| mysql.user                |     1 |      0 |
+---------------------------+-------+--------+

mysql> SELECT * FROM histogram.queries;
+--------------------------------------------------------------------------------------------------+-------+----------------+----------------+
| query                                                                                            | count | max_query_time | avg_query_time |
+--------------------------------------------------------------------------------------------------+-------+----------------+----------------+
| SELECT @@version_comment LIMIT ?                                                                 |     1 |            300 |            300 |
| SELECT `table_schema` , COUNT( * ) FROM `information_schema` . `tables` GROUP BY `table_schema`  |     1 |           1822 |           1822 |
| SELECT `host` , `user` , `password` FROM `mysql` . `user`                                        |     1 |            494 |            494 |
| SELECT NOW( ) , SLEEP( ? )                                                                       |     1 |        3000735 |        3000735 |
+--------------------------------------------------------------------------------------------------+-------+----------------+----------------+

Moving forward

The power is that with Lua you have the flexibility to write your own logging. Some improvements to these scripts could be:

  • Add the query time, number of rows, and result set size to the logging
  • Be able to sort histogram results or see top percentile. Being able to copy data into real tables would enable any level of analysis
  • Combine the logging and histogram scripts
  • Enable global enable/disabling of logging with SET GLOBAL commands
  • Support variable length IN queries, those that pass multiple values, so you end up with a subset of all queries
  • Provide an actual query example, making it easy to produce a QEP. For normalized queries you need to do additional work to find values.
  • The histogram does not support the C API multi query functionality, where multiple queries can be passed to the server at one time. The problem is there is no way to time the individual queries.

Read on in SQL Analysis with MySQL Proxy – Part 2.

References

A good introduction document
MySQL Proxy – From architecture to implementation – OSCON 2008

Seeking public data for benchmarks

I have several side projects when time permits, and one is benchmarking various MySQL technologies (e.g. MySQL 5.0, 5.1, 5.4), variants (e.g. MariaDB, Drizzle) and storage engines (e.g. Tokutek, InnoDB plugin), and even other products like Tokyo Cabinet which is gaining large implementations.

You have two options with benchmarks: the brute force approach with tools such as Sysbench, TPC, Juice Benchmark, iibench, mysqlslap and skyload, or the realistic approach. I prefer the realistic approach, however this always relies on clients’ private data. What is first needed is better access to public data for benchmarks. I have compiled this list to date and I am seeking additional sources for reference.

Of course, the data is only the starting point; having representative transactions and queries to execute, a framework to execute them, and a reporting module are also necessary. The introduction of Lua into Sysbench may now make it a better option than my tool of choice, mybench, which I use simply because I can configure, write and deploy it for a client generally in under 1 hour.

If anybody has other good references to free public data that’s easily loadable into MySQL please let me know.

Setting up a Virtual IP address (VIP)

These instructions are for CentOS/Redhat Linux distributions.

1. Identify your current NICs and IP addresses in use.

$ /sbin/ifconfig
eth0      Link encap:Ethernet  HWaddr 00:30:48:98:9C:A6
          inet addr:192.168.53.201  Bcast:192.168.53.255  Mask:255.255.255.0
          inet6 addr: 0080::230:48ff:fe98:9ca6/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:6159779 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6137085 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:100
          RX bytes:1158210510 (1.0 GiB)  TX bytes:541617489 (516.5 MiB)
          Memory:e8000000-e8020000

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:3791468208 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3791468208 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:740660664181 (689.7 GiB)  TX bytes:740660664181 (689.7 GiB)

Determine which NIC you want the VIP on, in this case eth0 (the private address). We want to add the address 192.168.53.220

$  echo "DEVICE=eth0
BOOTPROTO=static
BROADCAST=192.168.53.255
IPADDR=192.168.53.220
NETMASK=255.255.255.0
NETWORK=192.168.53.0
ONBOOT=yes" > /etc/sysconfig/network-scripts/ifcfg-eth0:0
$ /etc/sysconfig/network-scripts/ifup-aliases eth0
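
These steps make the alias persistent across reboots. For a quick, non-persistent test (my addition, not part of the original steps, and it will not survive a reboot) you can also bring the alias up directly:

$ /sbin/ifconfig eth0:0 192.168.53.220 netmask 255.255.255.0 up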

Check your /sbin/ifconfig output and you should now have a virtual IP address.

$ /sbin/ifconfig
eth0      Link encap:Ethernet  HWaddr 99:30:48:98:9C:A6
          inet addr:192.168.53.201  Bcast:192.168.53.255  Mask:255.255.255.0
          inet6 addr: 0080::230:48ff:fe98:9ca6/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:6159779 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6137085 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:100
          RX bytes:1158210510 (1.0 GiB)  TX bytes:541617489 (516.5 MiB)
          Memory:e8000000-e8020000

eth0:0    Link encap:Ethernet  HWaddr 99:30:48:98:9C:A7
          inet addr:192.168.53.220  Bcast:192.168.53.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          Memory:e8100000-e8120000

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:3791468208 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3791468208 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:740660664181 (689.7 GiB)  TX bytes:740660664181 (689.7 GiB)

We need more CATs

Before you think I’ve posted an animal story in my MySQL category, please read on. For reference, The RAT and the CAT is something I wrote back in 2006 that explains the CAT part.

I was reviewing the website performance of a client tonight. I had access to an idle system, so being lazy in collecting SQL after my initial audit of the system and MySQL, I turned on the General Query Log and hit the home page. NOTE: Do not do this on a running production system, especially with any volume.
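
For reference, on MySQL 5.1 and later the General Query Log can be toggled dynamically without a restart (on 5.0 it requires the --log option and a restart); a sketch, with the log file path being my own assumption:

mysql> SET GLOBAL general_log_file = '/tmp/capture.log';
mysql> SET GLOBAL general_log = 'ON';
-- hit the page you want to capture, then
mysql> SET GLOBAL general_log = 'OFF';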

What I got was 77 lines of output (after I removed newlines in the queries). 74 were queries, and 72 were against the same table, via some unnecessarily complex sub-joins. I wish I could have laughed, but it wasn’t really funny.

My Friday wisdom for writing SQL is this:

  • Don’t write duplicate SQL statements
  • Look at the entire process/function, not just an individual SQL statement
  • Don’t use SELECT *. While I could go into detail, in this example there were 4, I repeat 4, TEXT columns whose output was not used.
  • All developers should know what a QEP is, and use it.
  • The best performance improvement you can make with an SQL statement, is to never execute it. Even better, never think about writing it.

I trust that just looking at the output below will impart more wisdom.

1 Connect     XXXX
1 Init DB     XXXX
1 Query       SELECT * FROM categories where cparent_id = 0 AND show_on_navigation =1  order by cid ASC
1 Query       Select home_title, home_keyword from general_meta where id = '1'
1 Query       SELECT * FROM categories where cparent_id = 0 and show_on_navigation = 1  order by cid ASC
1 Query       SELECT *, CASE WHEN cname = 'Babies Clothes' THEN 'baby clothes' ELSE (CASE WHEN cname = 'Babies Toys' THEN 'baby toys' ELSE (CASE WHEN cname = 'Babies Shoes' THEN 'baby shoes' ELSE cname END) END) END AS ccname FROM categories where cparent_id = '45' and show_on_navigation = 1  order by cid ASC
1 Query       SELECT * FROM ( SELECT * , ( SELECT CASE WHEN cname LIKE '%Babies Clothes%' THEN 'baby ' ELSE ( CASE WHEN cname = 'Men''s Clothes' THEN 'mens ' ELSE ( CASE WHEN cname = 'Women''s Clothes' THEN 'womens ' ELSE ( CASE WHEN cname LIKE '%Boy''s Clothes%' THEN 'boys ' ELSE ( CASE WHEN cname LIKE '%Girl''s Clothes%' THEN 'girls ' ELSE '' END ) END ) END ) END ) END AS pretext FROM categories WHERE cid = '2') AS pretext FROM categories cp WHERE cp.cparent_id = '2' AND show_on_navigation =1)abc ORDER BY cid ASC
1 Query       SELECT * FROM ( SELECT * , ( SELECT CASE WHEN cname LIKE '%Babies Clothes%' THEN 'baby ' ELSE ( CASE WHEN cname = 'Men''s Clothes' THEN 'mens ' ELSE ( CASE WHEN cname = 'Women''s Clothes' THEN 'womens ' ELSE ( CASE WHEN cname LIKE '%Boy''s Clothes%' THEN 'boys ' ELSE ( CASE WHEN cname LIKE '%Girl''s Clothes%' THEN 'girls ' ELSE '' END ) END ) END ) END ) END AS pretext FROM categories WHERE cid = '5') AS pretext FROM categories cp WHERE cp.cparent_id = '5' AND show_on_navigation =1)abc ORDER BY cid ASC
1 Query       SELECT * FROM ( SELECT * , ( SELECT CASE WHEN cname LIKE '%Babies Clothes%' THEN 'baby ' ELSE ( CASE WHEN cname = 'Men''s Clothes' THEN 'mens ' ELSE ( CASE WHEN cname = 'Women''s Clothes' THEN 'womens ' ELSE ( CASE WHEN cname LIKE '%Boy''s Clothes%' THEN 'boys ' ELSE ( CASE WHEN cname LIKE '%Girl''s Clothes%' THEN 'girls ' ELSE '' END ) END ) END ) END ) END AS pretext FROM categories WHERE cid = '6') AS pretext FROM categories cp WHERE cp.cparent_id = '6' AND show_on_navigation =1)abc ORDER BY cid ASC
1 Query       SELECT * FROM categories where cparent_id = '81' and show_on_navigation = 1  order by cid ASC
1 Query       SELECT * FROM categories where cparent_id = '82' and show_on_navigation = 1  order by cid ASC
1 Query       SELECT * FROM categories where cparent_id = '83' and show_on_navigation = 1  order by cid ASC
1 Query       SELECT * FROM categories where cparent_id = '84' and show_on_navigation = 1  order by cid ASC
1 Query       SELECT * FROM categories where cparent_id = '85' and show_on_navigation = 1  order by cid ASC
1 Query       SELECT * FROM categories where cparent_id = '86' and show_on_navigation = 1  order by cid ASC
1 Query       SELECT * FROM categories where cparent_id = '87' and show_on_navigation = 1  order by cid ASC
1 Query       SELECT * FROM ( SELECT * , ( SELECT CASE WHEN cname LIKE '%Babies Clothes%' THEN 'baby ' ELSE ( CASE WHEN cname = 'Men''s Clothes' THEN 'mens ' ELSE ( CASE WHEN cname = 'Women''s Clothes' THEN 'womens ' ELSE ( CASE WHEN cname LIKE '%Boy''s Clothes%' THEN 'boys ' ELSE ( CASE WHEN cname LIKE '%Girl''s Clothes%' THEN 'girls ' ELSE '' END ) END ) END ) END ) END AS pretext FROM categories WHERE cid = '8') AS pretext FROM categories cp WHERE cp.cparent_id = '8' AND show_on_navigation =1)abc ORDER BY cid ASC
1 Query       SELECT * FROM categories where cparent_id = '133' and show_on_navigation = 1  order by cid ASC
1 Query       SELECT * FROM categories where cparent_id = '134' and show_on_navigation = 1  order by cid ASC
1 Query       SELECT * FROM categories where cparent_id = '135' and show_on_navigation = 1  order by cid ASC
1 Query       SELECT * FROM ( SELECT * , ( SELECT CASE WHEN cname LIKE '%Babies Clothes%' THEN 'baby ' ELSE ( CASE WHEN cname = 'Men''s Clothes' THEN 'mens ' ELSE ( CASE WHEN cname = 'Women''s Clothes' THEN 'womens ' ELSE ( CASE WHEN cname LIKE '%Boy''s Clothes%' THEN 'boys ' ELSE ( CASE WHEN cname LIKE '%Girl''s Clothes%' THEN 'girls ' ELSE '' END ) END ) END ) END ) END AS pretext FROM categories WHERE cid = '9') AS pretext FROM categories cp WHERE cp.cparent_id = '9' AND show_on_navigation =1)abc ORDER BY cid ASC
1 Query       SELECT * FROM categories where cparent_id = '88' and show_on_navigation = 1  order by cid ASC
1 Query       SELECT * FROM ( SELECT * , ( SELECT CASE WHEN cname LIKE '%Babies Clothes%' THEN 'baby ' ELSE ( CASE WHEN cname = 'Men''s Clothes' THEN 'mens ' ELSE ( CASE WHEN cname = 'Women''s Clothes' THEN 'womens ' ELSE ( CASE WHEN cname LIKE '%Boy''s Clothes%' THEN 'boys ' ELSE ( CASE WHEN cname LIKE '%Girl''s Clothes%' THEN 'girls ' ELSE '' END ) END ) END ) END ) END AS pretext FROM categories WHERE cid = '40') AS pretext FROM categories cp WHERE cp.cparent_id = '40' AND show_on_navigation =1)abc ORDER BY cid ASC
1 Query       SELECT * FROM categories where cparent_id = '669' and show_on_navigation = 1  order by cid ASC
1 Query       SELECT * FROM categories where cparent_id = '670' and show_on_navigation = 1  order by cid ASC
1 Query       SELECT *, CASE WHEN cname = 'Babies Clothes' THEN 'baby clothes' ELSE (CASE WHEN cname = 'Babies Toys' THEN 'baby toys' ELSE (CASE WHEN cname = 'Babies Shoes' THEN 'baby shoes' ELSE cname END) END) END AS ccname FROM categories where cparent_id = '46' and show_on_navigation = 1  order by cid ASC
1 Query       SELECT * FROM ( SELECT * , ( SELECT CASE WHEN cname LIKE '%Babies Clothes%' THEN 'baby ' ELSE ( CASE WHEN cname = 'Men''s Clothes' THEN 'mens ' ELSE ( CASE WHEN cname = 'Women''s Clothes' THEN 'womens ' ELSE ( CASE WHEN cname LIKE '%Boy''s Clothes%' THEN 'boys ' ELSE ( CASE WHEN cname LIKE '%Girl''s Clothes%' THEN 'girls ' ELSE '' END ) END ) END ) END ) END AS pretext FROM categories WHERE cid = '10') AS pretext FROM categories cp WHERE cp.cparent_id = '10' AND show_on_navigation =1)abc ORDER BY cid ASC
1 Query       SELECT * FROM categories where cparent_id = '90' and show_on_navigation = 1  order by cid ASC
1 Query       SELECT * FROM categories where cparent_id = '91' and show_on_navigation = 1  order by cid ASC
1 Query       SELECT * FROM categories where cparent_id = '92' and show_on_navigation = 1  order by cid ASC
1 Query       SELECT * FROM categories where cparent_id = '93' and show_on_navigation = 1  order by cid ASC
1 Query       SELECT * FROM categories where cparent_id = '94' and show_on_navigation = 1  order by cid ASC
1 Query       SELECT * FROM categories where cparent_id = '95' and show_on_navigation = 1  order by cid ASC
1 Query       SELECT * FROM categories where cparent_id = '96' and show_on_navigation = 1  order by cid ASC
1 Query       SELECT * FROM categories where cparent_id = '97' and show_on_navigation = 1  order by cid ASC
1 Query       SELECT * FROM ( SELECT * , ( SELECT CASE WHEN cname LIKE '%Babies Clothes%' THEN 'baby ' ELSE ( CASE WHEN cname = 'Men''s Clothes' THEN 'mens ' ELSE ( CASE WHEN cname = 'Women''s Clothes' THEN 'womens ' ELSE ( CASE WHEN cname LIKE '%Boy''s Clothes%' THEN 'boys ' ELSE ( CASE WHEN cname LIKE '%Girl''s Clothes%' THEN 'girls ' ELSE '' END ) END ) END ) END ) END AS pretext FROM categories WHERE cid = '11') AS pretext FROM categories cp WHERE cp.cparent_id = '11' AND show_on_navigation =1)abc ORDER BY cid ASC
1 Query       SELECT * FROM ( SELECT * , ( SELECT CASE WHEN cname LIKE '%Babies Clothes%' THEN 'baby ' ELSE ( CASE WHEN cname = 'Men''s Clothes' THEN 'mens ' ELSE ( CASE WHEN cname = 'Women''s Clothes' THEN 'womens ' ELSE ( CASE WHEN cname LIKE '%Boy''s Clothes%' THEN 'boys ' ELSE ( CASE WHEN cname LIKE '%Girl''s Clothes%' THEN 'girls ' ELSE '' END ) END ) END ) END ) END AS pretext FROM categories WHERE cid = '12') AS pretext FROM categories cp WHERE cp.cparent_id = '12' AND show_on_navigation =1)abc ORDER BY cid ASC
1 Query       SELECT * FROM ( SELECT * , ( SELECT CASE WHEN cname LIKE '%Babies Clothes%' THEN 'baby ' ELSE ( CASE WHEN cname = 'Men''s Clothes' THEN 'mens ' ELSE ( CASE WHEN cname = 'Women''s Clothes' THEN 'womens ' ELSE ( CASE WHEN cname LIKE '%Boy''s Clothes%' THEN 'boys ' ELSE ( CASE WHEN cname LIKE '%Girl''s Clothes%' THEN 'girls ' ELSE '' END ) END ) END ) END ) END AS pretext FROM categories WHERE cid = '13') AS pretext FROM categories cp WHERE cp.cparent_id = '13' AND show_on_navigation =1)abc ORDER BY cid ASC
1 Query       SELECT * FROM ( SELECT * , ( SELECT CASE WHEN cname LIKE '%Babies Clothes%' THEN 'baby ' ELSE ( CASE WHEN cname = 'Men''s Clothes' THEN 'mens ' ELSE ( CASE WHEN cname = 'Women''s Clothes' THEN 'womens ' ELSE ( CASE WHEN cname LIKE '%Boy''s Clothes%' THEN 'boys ' ELSE ( CASE WHEN cname LIKE '%Girl''s Clothes%' THEN 'girls ' ELSE '' END ) END ) END ) END ) END AS pretext FROM categories WHERE cid = '14') AS pretext FROM categories cp WHERE cp.cparent_id = '14' AND show_on_navigation =1)abc ORDER BY cid ASC
1 Query       SELECT * FROM ( SELECT * , ( SELECT CASE WHEN cname LIKE '%Babies Clothes%' THEN 'baby ' ELSE ( CASE WHEN cname = 'Men''s Clothes' THEN 'mens ' ELSE ( CASE WHEN cname = 'Women''s Clothes' THEN 'womens ' ELSE ( CASE WHEN cname LIKE '%Boy''s Clothes%' THEN 'boys ' ELSE ( CASE WHEN cname LIKE '%Girl''s Clothes%' THEN 'girls ' ELSE '' END ) END ) END ) END ) END AS pretext FROM categories WHERE cid = '69') AS pretext FROM categories cp WHERE cp.cparent_id = '69' AND show_on_navigation =1)abc ORDER BY cid ASC
1 Query       SELECT *, CASE WHEN cname = 'Babies Clothes' THEN 'baby clothes' ELSE (CASE WHEN cname = 'Babies Toys' THEN 'baby toys' ELSE (CASE WHEN cname = 'Babies Shoes' THEN 'baby shoes' ELSE cname END) END) END AS ccname FROM categories where cparent_id = '47' and show_on_navigation = 1  order by cid ASC
1 Query       SELECT * FROM ( SELECT * , ( SELECT CASE WHEN cname LIKE '%Babies Clothes%' THEN 'baby ' ELSE ( CASE WHEN cname = 'Men''s Clothes' THEN 'mens ' ELSE ( CASE WHEN cname = 'Women''s Clothes' THEN 'womens ' ELSE ( CASE WHEN cname LIKE '%Boy''s Clothes%' THEN 'boys ' ELSE ( CASE WHEN cname LIKE '%Girl''s Clothes%' THEN 'girls ' ELSE '' END ) END ) END ) END ) END AS pretext FROM categories WHERE cid = '15') AS pretext FROM categories cp WHERE cp.cparent_id = '15' AND show_on_navigation =1)abc ORDER BY cid ASC
1 Query       SELECT * FROM categories where cparent_id = '98' and show_on_navigation = 1  order by cid ASC
1 Query       SELECT * FROM categories where cparent_id = '99' and show_on_navigation = 1  order by cid ASC
1 Query       SELECT * FROM categories where cparent_id = '100' and show_on_navigation = 1  order by cid ASC
1 Query       SELECT * FROM categories where cparent_id = '101' and show_on_navigation = 1  order by cid ASC
1 Query       SELECT * FROM categories where cparent_id = '102' and show_on_navigation = 1  order by cid ASC
1 Query       SELECT * FROM categories where cparent_id = '103' and show_on_navigation = 1  order by cid ASC
1 Query       SELECT * FROM categories where cparent_id = '104' and show_on_navigation = 1  order by cid ASC
1 Query       SELECT * FROM ( SELECT * , ( SELECT CASE WHEN cname LIKE '%Babies Clothes%' THEN 'baby ' ELSE ( CASE WHEN cname = 'Men''s Clothes' THEN 'mens ' ELSE ( CASE WHEN cname = 'Women''s Clothes' THEN 'womens ' ELSE ( CASE WHEN cname LIKE '%Boy''s Clothes%' THEN 'boys ' ELSE ( CASE WHEN cname LIKE '%Girl''s Clothes%' THEN 'girls ' ELSE '' END ) END ) END ) END ) END AS pretext FROM categories WHERE cid = '16') AS pretext FROM categories cp WHERE cp.cparent_id = '16' AND show_on_navigation =1)abc ORDER BY cid ASC
1 Query       SELECT * FROM ( SELECT * , ( SELECT CASE WHEN cname LIKE '%Babies Clothes%' THEN 'baby ' ELSE ( CASE WHEN cname = 'Men''s Clothes' THEN 'mens ' ELSE ( CASE WHEN cname = 'Women''s Clothes' THEN 'womens ' ELSE ( CASE WHEN cname LIKE '%Boy''s Clothes%' THEN 'boys ' ELSE ( CASE WHEN cname LIKE '%Girl''s Clothes%' THEN 'girls ' ELSE '' END ) END ) END ) END ) END AS pretext FROM categories WHERE cid = '17') AS pretext FROM categories cp WHERE cp.cparent_id = '17' AND show_on_navigation =1)abc ORDER BY cid ASC
1 Query       SELECT * FROM ( SELECT * , ( SELECT CASE WHEN cname LIKE '%Babies Clothes%' THEN 'baby ' ELSE ( CASE WHEN cname = 'Men''s Clothes' THEN 'mens ' ELSE ( CASE WHEN cname = 'Women''s Clothes' THEN 'womens ' ELSE ( CASE WHEN cname LIKE '%Boy''s Clothes%' THEN 'boys ' ELSE ( CASE WHEN cname LIKE '%Girl''s Clothes%' THEN 'girls ' ELSE '' END ) END ) END ) END ) END AS pretext FROM categories WHERE cid = '18') AS pretext FROM categories cp WHERE cp.cparent_id = '18' AND show_on_navigation =1)abc ORDER BY cid ASC
1 Query       SELECT * FROM ( SELECT * , ( SELECT CASE WHEN cname LIKE '%Babies Clothes%' THEN 'baby ' ELSE ( CASE WHEN cname = 'Men''s Clothes' THEN 'mens ' ELSE ( CASE WHEN cname = 'Women''s Clothes' THEN 'womens ' ELSE ( CASE WHEN cname LIKE '%Boy''s Clothes%' THEN 'boys ' ELSE ( CASE WHEN cname LIKE '%Girl''s Clothes%' THEN 'girls ' ELSE '' END ) END ) END ) END ) END AS pretext FROM categories WHERE cid = '19') AS pretext FROM categories cp WHERE cp.cparent_id = '19' AND show_on_navigation =1)abc ORDER BY cid ASC
1 Query       SELECT * FROM ( SELECT * , ( SELECT CASE WHEN cname LIKE '%Babies Clothes%' THEN 'baby ' ELSE ( CASE WHEN cname = 'Men''s Clothes' THEN 'mens ' ELSE ( CASE WHEN cname = 'Women''s Clothes' THEN 'womens ' ELSE ( CASE WHEN cname LIKE '%Boy''s Clothes%' THEN 'boys ' ELSE ( CASE WHEN cname LIKE '%Girl''s Clothes%' THEN 'girls ' ELSE '' END ) END ) END ) END ) END AS pretext FROM categories WHERE cid = '20') AS pretext FROM categories cp WHERE cp.cparent_id = '20' AND show_on_navigation =1)abc ORDER BY cid ASC
1 Query       SELECT *, CASE WHEN cname = 'Babies Clothes' THEN 'baby clothes' ELSE (CASE WHEN cname = 'Babies Toys' THEN 'baby toys' ELSE (CASE WHEN cname = 'Babies Shoes' THEN 'baby shoes' ELSE cname END) END) END AS ccname FROM categories where cparent_id = '45'  order by cid ASC
1 Query       SELECT * FROM ( SELECT * , ( SELECT CASE WHEN cname LIKE '%Babies Clothes%' THEN 'baby ' ELSE ( CASE WHEN cname = 'Men''s Clothes' THEN 'mens ' ELSE ( CASE WHEN cname = 'Women''s Clothes' THEN 'womens ' ELSE ( CASE WHEN cname LIKE '%Boy''s Clothes%' THEN 'boys ' ELSE ( CASE WHEN cname LIKE '%Girl''s Clothes%' THEN 'girls ' ELSE '' END ) END ) END ) END ) END AS pretext FROM categories WHERE cid = '2') AS pretext FROM categories cp WHERE cp.cparent_id = '2' AND show_on_navigation =1)abc ORDER BY cid ASC
1 Query       SELECT * FROM ( SELECT * , ( SELECT CASE WHEN cname LIKE '%Babies Clothes%' THEN 'baby ' ELSE ( CASE WHEN cname = 'Men''s Clothes' THEN 'mens ' ELSE ( CASE WHEN cname = 'Women''s Clothes' THEN 'womens ' ELSE ( CASE WHEN cname LIKE '%Boy''s Clothes%' THEN 'boys ' ELSE ( CASE WHEN cname LIKE '%Girl''s Clothes%' THEN 'girls ' ELSE '' END ) END ) END ) END ) END AS pretext FROM categories WHERE cid = '5') AS pretext FROM categories cp WHERE cp.cparent_id = '5' AND show_on_navigation =1)abc ORDER BY cid ASC
1 Query       SELECT * FROM ( SELECT * , ( SELECT CASE WHEN cname LIKE '%Babies Clothes%' THEN 'baby ' ELSE ( CASE WHEN cname = 'Men''s Clothes' THEN 'mens ' ELSE ( CASE WHEN cname = 'Women''s Clothes' THEN 'womens ' ELSE ( CASE WHEN cname LIKE '%Boy''s Clothes%' THEN 'boys ' ELSE ( CASE WHEN cname LIKE '%Girl''s Clothes%' THEN 'girls ' ELSE '' END ) END ) END ) END ) END AS pretext FROM categories WHERE cid = '6') AS pretext FROM categories cp WHERE cp.cparent_id = '6' AND show_on_navigation =1)abc ORDER BY cid ASC
1 Query       SELECT * FROM ( SELECT * , ( SELECT CASE WHEN cname LIKE '%Babies Clothes%' THEN 'baby ' ELSE ( CASE WHEN cname = 'Men''s Clothes' THEN 'mens ' ELSE ( CASE WHEN cname = 'Women''s Clothes' THEN 'womens ' ELSE ( CASE WHEN cname LIKE '%Boy''s Clothes%' THEN 'boys ' ELSE ( CASE WHEN cname LIKE '%Girl''s Clothes%' THEN 'girls ' ELSE '' END ) END ) END ) END ) END AS pretext FROM categories WHERE cid = '8') AS pretext FROM categories cp WHERE cp.cparent_id = '8' AND show_on_navigation =1)abc ORDER BY cid ASC
1 Query       SELECT * FROM ( SELECT * , ( SELECT CASE WHEN cname LIKE '%Babies Clothes%' THEN 'baby ' ELSE ( CASE WHEN cname = 'Men''s Clothes' THEN 'mens ' ELSE ( CASE WHEN cname = 'Women''s Clothes' THEN 'womens ' ELSE ( CASE WHEN cname LIKE '%Boy''s Clothes%' THEN 'boys ' ELSE ( CASE WHEN cname LIKE '%Girl''s Clothes%' THEN 'girls ' ELSE '' END ) END ) END ) END ) END AS pretext FROM categories WHERE cid = '9') AS pretext FROM categories cp WHERE cp.cparent_id = '9' AND show_on_navigation =1)abc ORDER BY cid ASC
1 Query       SELECT * FROM ( SELECT * , ( SELECT CASE WHEN cname LIKE '%Babies Clothes%' THEN 'baby ' ELSE ( CASE WHEN cname = 'Men''s Clothes' THEN 'mens ' ELSE ( CASE WHEN cname = 'Women''s Clothes' THEN 'womens ' ELSE ( CASE WHEN cname LIKE '%Boy''s Clothes%' THEN 'boys ' ELSE ( CASE WHEN cname LIKE '%Girl''s Clothes%' THEN 'girls ' ELSE '' END ) END ) END ) END ) END AS pretext FROM categories WHERE cid = '40') AS pretext FROM categories cp WHERE cp.cparent_id = '40' AND show_on_navigation =1)abc ORDER BY cid ASC
1 Query       SELECT *, CASE WHEN cname = 'Babies Clothes' THEN 'baby clothes' ELSE (CASE WHEN cname = 'Babies Toys' THEN 'baby toys' ELSE (CASE WHEN cname = 'Babies Shoes' THEN 'baby shoes' ELSE cname END) END) END AS ccname FROM categories where cparent_id = '46'  order by cid ASC
1 Query       SELECT * FROM ( SELECT * , ( SELECT CASE WHEN cname LIKE '%Babies Clothes%' THEN 'baby ' ELSE ( CASE WHEN cname = 'Men''s Clothes' THEN 'mens ' ELSE ( CASE WHEN cname = 'Women''s Clothes' THEN 'womens ' ELSE ( CASE WHEN cname LIKE '%Boy''s Clothes%' THEN 'boys ' ELSE ( CASE WHEN cname LIKE '%Girl''s Clothes%' THEN 'girls ' ELSE '' END ) END ) END ) END ) END AS pretext FROM categories WHERE cid = '10') AS pretext FROM categories cp WHERE cp.cparent_id = '10' AND show_on_navigation =1)abc ORDER BY cid ASC
1 Query       SELECT * FROM ( SELECT * , ( SELECT CASE WHEN cname LIKE '%Babies Clothes%' THEN 'baby ' ELSE ( CASE WHEN cname = 'Men''s Clothes' THEN 'mens ' ELSE ( CASE WHEN cname = 'Women''s Clothes' THEN 'womens ' ELSE ( CASE WHEN cname LIKE '%Boy''s Clothes%' THEN 'boys ' ELSE ( CASE WHEN cname LIKE '%Girl''s Clothes%' THEN 'girls ' ELSE '' END ) END ) END ) END ) END AS pretext FROM categories WHERE cid = '11') AS pretext FROM categories cp WHERE cp.cparent_id = '11' AND show_on_navigation =1)abc ORDER BY cid ASC
1 Query       SELECT * FROM ( SELECT * , ( SELECT CASE WHEN cname LIKE '%Babies Clothes%' THEN 'baby ' ELSE ( CASE WHEN cname = 'Men''s Clothes' THEN 'mens ' ELSE ( CASE WHEN cname = 'Women''s Clothes' THEN 'womens ' ELSE ( CASE WHEN cname LIKE '%Boy''s Clothes%' THEN 'boys ' ELSE ( CASE WHEN cname LIKE '%Girl''s Clothes%' THEN 'girls ' ELSE '' END ) END ) END ) END ) END AS pretext FROM categories WHERE cid = '12') AS pretext FROM categories cp WHERE cp.cparent_id = '12' AND show_on_navigation =1)abc ORDER BY cid ASC
1 Query       SELECT * FROM ( SELECT * , ( SELECT CASE WHEN cname LIKE '%Babies Clothes%' THEN 'baby ' ELSE ( CASE WHEN cname = 'Men''s Clothes' THEN 'mens ' ELSE ( CASE WHEN cname = 'Women''s Clothes' THEN 'womens ' ELSE ( CASE WHEN cname LIKE '%Boy''s Clothes%' THEN 'boys ' ELSE ( CASE WHEN cname LIKE '%Girl''s Clothes%' THEN 'girls ' ELSE '' END ) END ) END ) END ) END AS pretext FROM categories WHERE cid = '13') AS pretext FROM categories cp WHERE cp.cparent_id = '13' AND show_on_navigation =1)abc ORDER BY cid ASC
1 Query       SELECT * FROM ( SELECT * , ( SELECT CASE WHEN cname LIKE '%Babies Clothes%' THEN 'baby ' ELSE ( CASE WHEN cname = 'Men''s Clothes' THEN 'mens ' ELSE ( CASE WHEN cname = 'Women''s Clothes' THEN 'womens ' ELSE ( CASE WHEN cname LIKE '%Boy''s Clothes%' THEN 'boys ' ELSE ( CASE WHEN cname LIKE '%Girl''s Clothes%' THEN 'girls ' ELSE '' END ) END ) END ) END ) END AS pretext FROM categories WHERE cid = '14') AS pretext FROM categories cp WHERE cp.cparent_id = '14' AND show_on_navigation =1)abc ORDER BY cid ASC
1 Query       SELECT * FROM ( SELECT * , ( SELECT CASE WHEN cname LIKE '%Babies Clothes%' THEN 'baby ' ELSE ( CASE WHEN cname = 'Men''s Clothes' THEN 'mens ' ELSE ( CASE WHEN cname = 'Women''s Clothes' THEN 'womens ' ELSE ( CASE WHEN cname LIKE '%Boy''s Clothes%' THEN 'boys ' ELSE ( CASE WHEN cname LIKE '%Girl''s Clothes%' THEN 'girls ' ELSE '' END ) END ) END ) END ) END AS pretext FROM categories WHERE cid = '69') AS pretext FROM categories cp WHERE cp.cparent_id = '69' AND show_on_navigation =1)abc ORDER BY cid ASC
1 Query       SELECT *, CASE WHEN cname = 'Babies Clothes' THEN 'baby clothes' ELSE (CASE WHEN cname = 'Babies Toys' THEN 'baby toys' ELSE (CASE WHEN cname = 'Babies Shoes' THEN 'baby shoes' ELSE cname END) END) END AS ccname FROM categories where cparent_id = '47'  order by cid ASC
1 Query       SELECT * FROM ( SELECT * , ( SELECT CASE WHEN cname LIKE '%Babies Clothes%' THEN 'baby ' ELSE ( CASE WHEN cname = 'Men''s Clothes' THEN 'mens ' ELSE ( CASE WHEN cname = 'Women''s Clothes' THEN 'womens ' ELSE ( CASE WHEN cname LIKE '%Boy''s Clothes%' THEN 'boys ' ELSE ( CASE WHEN cname LIKE '%Girl''s Clothes%' THEN 'girls ' ELSE '' END ) END ) END ) END ) END AS pretext FROM categories WHERE cid = '15') AS pretext FROM categories cp WHERE cp.cparent_id = '15' AND show_on_navigation =1)abc ORDER BY cid ASC
1 Query       SELECT * FROM ( SELECT * , ( SELECT CASE WHEN cname LIKE '%Babies Clothes%' THEN 'baby ' ELSE ( CASE WHEN cname = 'Men''s Clothes' THEN 'mens ' ELSE ( CASE WHEN cname = 'Women''s Clothes' THEN 'womens ' ELSE ( CASE WHEN cname LIKE '%Boy''s Clothes%' THEN 'boys ' ELSE ( CASE WHEN cname LIKE '%Girl''s Clothes%' THEN 'girls ' ELSE '' END ) END ) END ) END ) END AS pretext FROM categories WHERE cid = '16') AS pretext FROM categories cp WHERE cp.cparent_id = '16' AND show_on_navigation =1)abc ORDER BY cid ASC
1 Query       SELECT * FROM ( SELECT * , ( SELECT CASE WHEN cname LIKE '%Babies Clothes%' THEN 'baby ' ELSE ( CASE WHEN cname = 'Men''s Clothes' THEN 'mens ' ELSE ( CASE WHEN cname = 'Women''s Clothes' THEN 'womens ' ELSE ( CASE WHEN cname LIKE '%Boy''s Clothes%' THEN 'boys ' ELSE ( CASE WHEN cname LIKE '%Girl''s Clothes%' THEN 'girls ' ELSE '' END ) END ) END ) END ) END AS pretext FROM categories WHERE cid = '17') AS pretext FROM categories cp WHERE cp.cparent_id = '17' AND show_on_navigation =1)abc ORDER BY cid ASC
1 Query       SELECT * FROM ( SELECT * , ( SELECT CASE WHEN cname LIKE '%Babies Clothes%' THEN 'baby ' ELSE ( CASE WHEN cname = 'Men''s Clothes' THEN 'mens ' ELSE ( CASE WHEN cname = 'Women''s Clothes' THEN 'womens ' ELSE ( CASE WHEN cname LIKE '%Boy''s Clothes%' THEN 'boys ' ELSE ( CASE WHEN cname LIKE '%Girl''s Clothes%' THEN 'girls ' ELSE '' END ) END ) END ) END ) END AS pretext FROM categories WHERE cid = '18') AS pretext FROM categories cp WHERE cp.cparent_id = '18' AND show_on_navigation =1)abc ORDER BY cid ASC
1 Query       SELECT * FROM ( SELECT * , ( SELECT CASE WHEN cname LIKE '%Babies Clothes%' THEN 'baby ' ELSE ( CASE WHEN cname = 'Men''s Clothes' THEN 'mens ' ELSE ( CASE WHEN cname = 'Women''s Clothes' THEN 'womens ' ELSE ( CASE WHEN cname LIKE '%Boy''s Clothes%' THEN 'boys ' ELSE ( CASE WHEN cname LIKE '%Girl''s Clothes%' THEN 'girls ' ELSE '' END ) END ) END ) END ) END AS pretext FROM categories WHERE cid = '19') AS pretext FROM categories cp WHERE cp.cparent_id = '19' AND show_on_navigation =1)abc ORDER BY cid ASC
1 Query       SELECT * FROM ( SELECT * , ( SELECT CASE WHEN cname LIKE '%Babies Clothes%' THEN 'baby ' ELSE ( CASE WHEN cname = 'Men''s Clothes' THEN 'mens ' ELSE ( CASE WHEN cname = 'Women''s Clothes' THEN 'womens ' ELSE ( CASE WHEN cname LIKE '%Boy''s Clothes%' THEN 'boys ' ELSE ( CASE WHEN cname LIKE '%Girl''s Clothes%' THEN 'girls ' ELSE '' END ) END ) END ) END ) END AS pretext FROM categories WHERE cid = '20') AS pretext FROM categories cp WHERE cp.cparent_id = '20' AND show_on_navigation =1)abc ORDER BY cid ASC
1 Query       select * from general_meta where id=1
1 Quit

Handy MySQL documentation indexes

Updated
If you’re wanting to know more about MySQL indexes on tables, then check out Understanding Different MySQL Index Implementations.

I just discovered today in the MySQL 5.1 Reference Manual a handy set of additional indexes in the System Navigation section.

  • Index
  • Standard Index
  • C Function Index
  • Command Index
  • Function Index
  • INFORMATION_SCHEMA Index
  • Transaction Isolation Level Index
  • JOIN Types Index
  • Operator Index
  • Option Index
  • Privileges Index
  • SQL Modes Index
  • Status Variable Index
  • Statement/Syntax Index
  • System Variable Index

Perhaps they have been around for some time and I’ve not noticed, but they are much better than searching when you already know the content type of what you are looking for, as per the index list.

Have you checked your MySQL error log today?

As a consultant I would be rich if I made money every time I asked “Have you checked the MySQL error log?”

Today’s special found in a 13GB MySQL server error log.

090819 22:49:37InnoDB: Warning: difficult to find free blocks from
InnoDB: the buffer pool (1101071 search iterations)! Consider
InnoDB: increasing the buffer pool size.
InnoDB: It is also possible that in your Unix version
InnoDB: fsync is very slow, or completely frozen inside
InnoDB: the OS kernel. Then upgrading to a newer version
InnoDB: of your operating system may help. Look at the
InnoDB: number of fsyncs in diagnostic info below.
InnoDB: Pending flushes (fsync) log: 0; buffer pool: 0
InnoDB: 167 OS file reads, 1 OS file writes, 1 OS fsyncs
InnoDB: Starting InnoDB Monitor to print further
InnoDB: diagnostics to the standard output.
090819 22:49:37InnoDB: Warning: difficult to find free blocks from
InnoDB: the buffer pool (1101051 search iterations)! Consider
InnoDB: increasing the buffer pool size.
InnoDB: It is also possible that in your Unix version
InnoDB: fsync is very slow, or completely frozen inside
InnoDB: the OS kernel. Then upgrading to a newer version
InnoDB: of your operating system may help. Look at the
InnoDB: number of fsyncs in diagnostic info below.
InnoDB: Pending flushes (fsync) log: 0; buffer pool: 0
InnoDB: 167 OS file reads, 1 OS file writes, 1 OS fsyncs
InnoDB: Starting InnoDB Monitor to print further
InnoDB: diagnostics to the standard output.
090819 22:49:37InnoDB: Warning: difficult to find free blocks from
InnoDB: the buffer pool (1101072 search iterations)! Consider
InnoDB: increasing the buffer pool size.
InnoDB: It is also possible that in your Unix version
InnoDB: fsync is very slow, or completely frozen inside
InnoDB: the OS kernel. Then upgrading to a newer version
InnoDB: of your operating system may help. Look at the
InnoDB: number of fsyncs in diagnostic info below.
InnoDB: Pending flushes (fsync) log: 0; buffer pool: 0
InnoDB: 167 OS file reads, 1 OS file writes, 1 OS fsyncs
InnoDB: Starting InnoDB Monitor to print further
InnoDB: diagnostics to the standard output.

As you can see, the same error is occurring, in this example 3 times in the last second. To find the cause of the error I didn’t have to look far, because I had already checked the /etc/my.cnf file.

$ cat /etc/my.cnf
[mysqld]
set-variable = max_connections=500
innodb_buffer_pool_size = 500
safe-show-database

Where do people come up with these my.cnf files? A decision was made to create this one rather than use either no file at all or at least one of the sample files provided with the installation that are still on the system.
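
For reference, innodb_buffer_pool_size is interpreted in bytes when no suffix is given, so the 500 above is far below any workable size. Something along these lines (the values are illustrative assumptions, not a recommendation for this host) would be a more sensible starting point:

[mysqld]
max_connections         = 500
innodb_buffer_pool_size = 512M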

$ find / -name "*my*cnf"
/home/dontcold/.my.cnf
/etc/my.cnf
/root/.my.cnf
/usr/local/cpanel/whostmgr/my.cnf
/usr/share/doc/MySQL-server-5.0.77/my-small.cnf
/usr/share/doc/MySQL-server-5.0.77/my-medium.cnf
/usr/share/doc/MySQL-server-5.0.77/my-innodb-heavy-4G.cnf
/usr/share/doc/MySQL-server-5.0.77/my-large.cnf
/usr/share/doc/MySQL-server-5.0.77/my-huge.cnf
/usr/share/mysql/my-small.cnf
/usr/share/mysql/my-medium.cnf
/usr/share/mysql/my-innodb-heavy-4G.cnf
/usr/share/mysql/my-large.cnf
/usr/share/mysql/my-huge.cnf

What is interesting is that I’ve seen nearly the same file on a previous installation, which I documented in For MySQL DBA fame and glory. Prize included.

Getting started with Gearman

Gearman is an open source generic framework for distributed processing. At OSCON 2009 I attended the Gearman: Build Your Own Distributed Platform in 3 Hours tutorial.

While it’s very easy to install Gearman and follow the first example, if you missed the all-important additional PHP steps listed on just one slide you may be left with the “Class ‘GearmanClient’ not found” error.

The following are detailed instructions for the installation and configuration of Gearman and PHP on Ubuntu 9.04 Jaunty.

Add the Drizzle PPA to get pre-packaged versions of Gearman.

cp /etc/apt/sources.list /etc/apt/sources.list.orig
echo "deb http://ppa.launchpad.net/drizzle-developers/ppa/ubuntu intrepid main
deb-src http://ppa.launchpad.net/drizzle-developers/ppa/ubuntu intrepid main" >> /etc/apt/sources.list
apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 06899068
apt-get update

Get the gearman packages

apt-get install -y gearman gearman-job-server gearman-tools libgearman1 libgearman-dev libgearman-dbg libgearman-doc

Get the Gearman PHP extension.

wget http://pecl.php.net/get/gearman-0.4.0.tgz
tar xvfz gearman-0.4.0.tgz
cd gearman-0.4.0/
phpize
./configure
make
make install

If phpize is not available then you are missing the development packages.

$ apt-get install php5-dev

You also need to configure PHP to load the extension. This will vary on different Linux environments. In this case:

echo 'extension="gearman.so"' >>/etc/php5/cli/php.ini

Verify the PHP Gearman extension is configured.

$ php --info | grep gearman
gearman
gearman support => enabled
libgearman version => 0.8

Now you are ready for working with the Gearman PHP examples.
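
As a quick sanity check before moving to PHP, the gearman command line client from gearman-tools can run a trivial worker and submit a job to it (a hedged example; the exact options may vary between versions):

$ gearman -w -f wc -- wc -l &
$ gearman -f wc < /etc/passwd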

Setting up sysbench with MySQL & Drizzle

Sysbench is an open source product that enables you to perform various system benchmarks, including databases. Drizzle performs regression testing of every trunk revision with a branched version of sysbench within Drizzle Automation.

A pending branch https://code.launchpad.net/~elambert/sysbench/trunk_drizzle_merge by Eric Lambert now enables side by side testing with MySQL and Drizzle. On a system running MySQL and Drizzle I was able to install this sysbench branch with the following commands.

cd bzr
bzr branch lp:~elambert/sysbench/trunk_drizzle_merge
cd trunk_drizzle_merge/
./autogen.sh
./configure
make
sudo make install

Running the default Lua tests supplied required me to ensure drizzle was in my path and that I created the ‘sbtest’ schema. I’ll be sure to add that checking to my future benchmark scripts.

$ cd sysbench/tests/db
$ sysbench --test=insert.lua --db_driver=drizzle prepare
sysbench v0.4.10:  multi-threaded system evaluation benchmark

FATAL: unable to connect to Drizzle server: 23
FATAL: error 0: Unknown database 'sbtest'
FATAL: failed to execute function `prepare': insert.lua:7: Failed to connect to the database
$ drizzle -e "create schema sbtest"
$ sysbench --test=insert.lua --db_driver=drizzle prepare
sysbench v0.4.10:  multi-threaded system evaluation benchmark

Creating table 'sbtest'...

And running produces the following results.

$ sysbench --num-threads=1 --test=insert.lua --db_driver=drizzle run
sysbench v0.4.10:  multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1

Threads started!

OLTP test statistics:
    queries performed:
        read:                            0
        write:                           10000
        other:                           0
        total:                           10000
    transactions:                        0      (0.00 per sec.)
    deadlocks:                           0      (0.00 per sec.)
    read/write requests:                 10000  (879.68 per sec.)
    other operations:                    0      (0.00 per sec.)

Test execution summary:
    total time:                          11.3678s
    total number of events:              10000
    total time taken by event execution: 11.3354s
    per-request statistics:
         min:                                  0.32ms
         avg:                                  1.13ms
         max:                                 68.74ms
         approx.  95 percentile:               2.41ms

Threads fairness:
    events (avg/stddev):           10000.0000/0.00
    execution time (avg/stddev):   11.3354/0.0

Rerunning the prepare step also showed there is no automatic cleanup to allow for automated re-runs.

$ sysbench --test=insert.lua --db_driver=drizzle prepare
Creating table 'sbtest'...
ALERT: Drizzle Query Failed: 1050:Table 'sbtest' already exists
FATAL: failed to execute function `prepare': insert.lua:57: Database query failed

For MySQL

$ sysbench --test=insert.lua --db_driver=mysql --mysql_table_engine=innodb prepare
sysbench v0.4.10:  multi-threaded system evaluation benchmark

Creating table 'sbtest'...

Unfortunately this doesn’t actually create the table in the right storage engine; I had to hack the code to ensure I was comparing InnoDB in each test.

$ sysbench --num-threads=1 --test=insert.lua --db_driver=mysql run
sysbench v0.4.10:  multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1

Threads started!

OLTP test statistics:
    queries performed:
        read:                            0
        write:                           10000
        other:                           0
        total:                           10000
    transactions:                        0      (0.00 per sec.)
    deadlocks:                           0      (0.00 per sec.)
    read/write requests:                 10000  (897.67 per sec.)
    other operations:                    0      (0.00 per sec.)

Test execution summary:
    total time:                          11.1399s
    total number of events:              10000
    total time taken by event execution: 11.1084s
    per-request statistics:
         min:                                  0.27ms
         avg:                                  1.11ms
         max:                                252.63ms
         approx.  95 percentile:               2.48ms

Threads fairness:
    events (avg/stddev):           10000.0000/0.00
    execution time (avg/stddev):   11.1084/0.00


Armed with a working environment I can now write some more realistic production like tests in Lua.

configure: error: mysql_config executable not found

If you’re compiling a product that has a dependency on MySQL, you can easily get the error:

configure: error: mysql_config executable not found

I generally don’t see this problem because I use MySQL binary tar files; however, if you use MySQL packages, such as those on Ubuntu, you can easily miss the required dependency.

My currently installed MySQL packages on this Ubuntu machine are:

$ sudo dpkg -l | grep mysql
ii  libdbd-mysql                               0.8.2-1-4.1                               MySQL database server driver for libdbi
ii  libdbd-mysql-perl                          4.008-1                                   A Perl5 database interface to the MySQL data
ii  libmysqlclient15off                        5.1.30really5.0.75-0ubuntu10.2            MySQL database client library
ii  libmysqlclient16                           5.1.31-1ubuntu2                           MySQL database client library
ii  libqt4-sql-mysql                           4.5.0-0ubuntu4.1                          Qt 4 MySQL database driver
ii  mysql-client-5.1                           5.1.31-1ubuntu2                           MySQL database client binaries
ii  mysql-common                               5.1.30really5.0.75-0ubuntu10.2            MySQL database common files
ii  mysql-server-5.1                           5.1.31-1ubuntu2                           MySQL database server binaries
ii  php5-mysql                                 5.2.6.dfsg.1-3ubuntu4.1                   MySQL module for php5

The missing link is the development version of the libmysqlclient library.

sudo apt-get install libmysqlclient15-dev

Understanding Different MySQL Index Implementations

It is important to know and understand that while indexing columns in MySQL will generally improve performance, using the appropriate type of index can make a greater impact on performance.

There are four general index types to consider when creating an appropriate index to optimize SQL queries.

  • Column Index
  • Concatenated Index
  • Covering Index
  • Partial Index

For the purpose of this discussion I am excluding other specialized index types such as fulltext, spatial and the hash index of the MEMORY engine.

Example Table

For the following examples, I will use this test table structure.

DROP TABLE IF EXISTS t1;
CREATE TABLE t1(
  id INT UNSIGNED NOT NULL AUTO_INCREMENT,
  user_name VARCHAR(20) NOT NULL,
  first_name VARCHAR(30) NOT NULL,
  last_name VARCHAR(30) NOT NULL,
  external_id INT UNSIGNED NOT NULL,
  country_id SMALLINT UNSIGNED NOT NULL,
  PRIMARY KEY(id)
) ENGINE=InnoDB;

Column Index

Quite simply, you have an index on a single column to help with performance. For example, if you were to query your data on external_id, without an index the system will need to read all data pages and then sequentially scan pages to identify matching records. As there is no information known about how many rows satisfy the criteria, all data must be read. You can confirm this with the QEP (Query Execution Plan), as sketched below.

SELECT id, user_name
FROM   t1
WHERE external_id = 1;
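
As a minimal sketch (using the example table above), the full table scan can be confirmed with EXPLAIN before the index below is added; with no index on external_id, expect type ALL and key NULL.

EXPLAIN
SELECT id, user_name
FROM   t1
WHERE  external_id = 1;
-- Without an index on external_id: type = ALL, key = NULL (full table scan).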

By adding an index to external_id, the query is optimized to only look at records that satisfy your criteria.

ALTER TABLE t1
  ADD INDEX (external_id);

Concatenated Index

I often see many single column indexes on tables when these are simply not needed, and will generally not be used. This is easily identified when looking at the QEP and seeing 3, 4 or 5 possible keys.
You need to also consider in your MySQL index theory that, in general, only one index is used for each table in a MySQL query. There are a few exceptions, however these are rare.

A concatenated index uses multiple columns. Let’s look at a modified version of our query.

SELECT id, user_name
FROM   t1
WHERE external_id = 1
AND      country_id = 5;

The original external_id index will be used; however, if we create a concatenated index on external_id and country_id we improve the query path.

ALTER TABLE t1
  DROP INDEX external_id,
  ADD INDEX (external_id, country_id);

What about an index on country_id, external_id? If your access to your data always includes these two columns, you can consider swapping the columns based on cardinality. However, if you have queries that search on external_id alone as well as on external_id and country_id, then an index on country_id, external_id will not be used by queries filtering only on external_id.

Tip: In the QEP, look at the key_len column to determine how much of a concatenated index is actually being used.
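
As a sketch of reading key_len against the example table: external_id is an INT UNSIGNED NOT NULL (4 bytes) and country_id a SMALLINT UNSIGNED NOT NULL (2 bytes), so a key_len of 6 shows both columns of the concatenated index are being used, while 4 would show only the leading external_id column.

EXPLAIN
SELECT id, user_name
FROM   t1
WHERE  external_id = 1
AND    country_id = 5;
-- key_len = 6 (4 + 2): both index columns used
-- key_len = 4        : only the leading external_id column used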

Covering Index

A covering index, as the name describes, covers all columns in a query. The benefit of a covering index is that the lookup of the necessary B-tree index pages alone satisfies the query, and no additional data page lookups are necessary.

If we revisit our earlier example, by dropping the external_id index and creating a concatenated index on external_id and user_name, we satisfy the query entirely from the index.

ALTER TABLE t1
  DROP INDEX external_id,
  ADD INDEX (external_id, user_name);
SELECT id, user_name
FROM   t1
WHERE external_id = 1;

With MySQL, the QEP will indicate ‘Using index’ in the Extra column. This is not a reference to an index merely being used; it means the index satisfies all requirements of the query.
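
A quick sketch to verify this against the example table; because InnoDB secondary index entries also store the primary key value, id is available from the (external_id, user_name) index without touching the data pages.

EXPLAIN
SELECT id, user_name
FROM   t1
WHERE  external_id = 1;
-- With the (external_id, user_name) index in place, expect Extra to include 'Using index'.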

Partial Index

The final type is the partial index. This is a MySQL feature which allows you to specify a subset (a leading prefix) of a column for the index.

Let’s say we query data and allow pattern matching on last name.

SELECT id, first_name, last_name, user_name
FROM   t1
WHERE last_name like 'A%'

We should add an index to last_name to improve performance.

ALTER TABLE t1
  ADD INDEX (last_name);

Depending on the average length of data in last_name (you can use PROCEDURE ANALYSE as a quick tool to sample this, as sketched below), creating a partial index may greatly reduce the size of the index, and minimize the additional data lookups required.
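
A minimal sketch of sampling the column with PROCEDURE ANALYSE; the exact columns returned vary slightly by version, but the maximum and average length values in its output are what guide the choice of prefix length.

SELECT last_name
FROM   t1
PROCEDURE ANALYSE();
-- Review the reported maximum and average lengths for last_name
-- before settling on a prefix such as last_name(10).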

ALTER TABLE t1
  DROP INDEX last_name,
  ADD INDEX (last_name(10));

In this example, you would want to investigate the size of the index, the improvement, and the number of additional reads necessary for sample queries. If your frequently accessed data is generally hot, the benefit of a smaller index will not be offset by additional data seeks.
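
One hedged way to compare index size before and after the change is the Index_length value reported for the table (for InnoDB these figures are estimates, and they cover all secondary indexes on the table).

SHOW TABLE STATUS LIKE 't1';

-- Or via INFORMATION_SCHEMA:
SELECT table_name, data_length, index_length
FROM   information_schema.tables
WHERE  table_schema = DATABASE()
AND    table_name = 't1';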

Conclusion

As with any performance tuning, sufficient analysis and before and after testing is necessary for your specific environment.

Some future topics on indexes not discussed here include:

  • Using UNIQUE indexes
  • The impact of NULL columns and values on indexes
  • Eliminating filesort by using indexes
  • The effect of too many indexes
  • Index cardinality


A common question I am also asked is about function-based indexes. MySQL provides no means to use a scalar function applied to a column in an index.
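
One common workaround, sketched below with a hypothetical last_name_upper column, is to maintain an extra indexed column holding the function result, populated by the application or by triggers, and to filter on that column instead.

-- Hypothetical example: materialise UPPER(last_name) so an index can be used.
ALTER TABLE t1
  ADD COLUMN last_name_upper VARCHAR(30) NOT NULL,
  ADD INDEX (last_name_upper);

UPDATE t1 SET last_name_upper = UPPER(last_name);

SELECT id, first_name, last_name
FROM   t1
WHERE  last_name_upper = 'SMITH';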

mysql.com and related sites are down

I tried to go to mysql.com and Planet MySQL over my lunch break at OSCON 2009 to find the websites are down. It seems from conversations with fellow Drizzle colleagues this has been down for some time.

What does your site look like when your system is unavailable or down?

This is a question I ask clients. What redundancy do you have in place for DNS, for a site unavailable page, for a static copy of content?

I learned my first personal lesson several years ago when, at The Planet, my server and 9,000 others were unavailable for at least 40 hours due to an explosion and fire at a data center. While I had copies of my site and shared hosting options elsewhere, all DNS was also in the same unavailable data center. This was definitely a shortcoming of the host provider at the time.

For any commercial site, it is important that you at least have geographical redundancy for DNS. Let’s use mysql.com as an example investigation.

Identify DNS records

$ dig mysql.com

; <<>> DiG 9.4.3-P1 <<>> mysql.com
;; global options:  printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 63421
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 4, ADDITIONAL: 0

;; QUESTION SECTION:
;mysql.com.			IN	A

;; ANSWER SECTION:
mysql.com.		2839	IN	A	213.136.52.29

;; AUTHORITY SECTION:
mysql.com.		72	IN	NS	ns7.sun.com.
mysql.com.		72	IN	NS	ns8.sun.com.
mysql.com.		72	IN	NS	ns1.sun.com.
mysql.com.		72	IN	NS	ns2.sun.com.

;; ADDITIONAL SECTION:
ns1.sun.com.		86045	IN	A	192.18.128.11
ns2.sun.com.		86075	IN	A	192.18.99.5
ns7.sun.com.		86085	IN	A	192.18.43.15
ns8.sun.com.		86093	IN	A	192.18.43.12

;; Query time: 2 msec
;; SERVER: 10.10.16.2#53(10.10.16.2)
;; WHEN: Wed Jul 22 14:18:11 2009
;; MSG SIZE  rcvd: 183

I am definitely no expert in networking; my understanding is that your defined DNS servers contain your primary information, which is then delegated to servers worldwide.

These servers are up and running. Note that having no ping response is not in itself an indicator that a server is unavailable.

mactaz:~ rbradfor$ ping -c 1 ns1.sun.com
PING ns1.sun.com (192.18.128.11): 56 data bytes
64 bytes from 192.18.128.11: icmp_seq=0 ttl=242 time=66.891 ms

--- ns1.sun.com ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max/stddev = 66.891/66.891/66.891/0.000 ms
mactaz:~ rbradfor$ ping -c 1 ns2.sun.com
PING ns2.sun.com (192.18.99.5): 56 data bytes
64 bytes from 192.18.99.5: icmp_seq=0 ttl=239 time=58.879 ms

--- ns2.sun.com ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max/stddev = 58.879/58.879/58.879/0.000 ms
mactaz:~ rbradfor$ ping -c 1 ns7.sun.com
PING ns7.sun.com (192.18.43.15): 56 data bytes
64 bytes from 192.18.43.15: icmp_seq=0 ttl=244 time=3.921 ms

--- ns7.sun.com ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max/stddev = 3.921/3.921/3.921/0.000 ms
mactaz:~ rbradfor$ ping -c 1 ns8.sun.com
PING ns8.sun.com (192.18.43.12): 56 data bytes
64 bytes from 192.18.43.12: icmp_seq=0 ttl=244 time=4.076 ms

--- ns8.sun.com ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max/stddev = 4.076/4.076/4.076/0.000 ms

They even appear to be in different locations which is good.

$ traceroute 192.18.128.11
traceroute to 192.18.128.11 (192.18.128.11), 64 hops max, 40 byte packets
 1  10.10.0.1 (10.10.0.1)  1.575 ms  0.882 ms  1.538 ms
 2  10.10.16.2 (10.10.16.2)  0.329 ms  0.366 ms  0.376 ms
 3  gateway.above.net (209.133.114.1)  1.567 ms  0.785 ms  0.863 ms
 4  ge-11-0-2.er1.sjc2.us.above.net (64.124.196.161)  1.386 ms  1.567 ms  1.214 ms
 5  xe-0-1-0.mpr4.sjc7.us.above.net (64.125.30.178)  2.177 ms  1.907 ms  1.873 ms
 6  above-att.sjc7.us.above.net (64.125.12.118)  5.361 ms  3.927 ms  3.717 ms
 7  cr2.sffca.ip.att.net (12.123.15.162)  66.434 ms  66.523 ms  66.694 ms
 8  cr2.la2ca.ip.att.net (12.122.31.133)  67.472 ms  66.008 ms  65.632 ms
 9  cr2.dlstx.ip.att.net (12.122.28.177)  66.003 ms  66.372 ms  66.723 ms
10  cr1.attga.ip.att.net (12.122.28.173)  66.472 ms  66.001 ms  66.908 ms
11  gar1.chlnc.ip.att.net (12.122.141.77)  66.139 ms  65.835 ms  65.892 ms
12  12.125.220.10 (12.125.220.10)  67.209 ms  66.569 ms  66.529 ms
13  cltea-ns-1.sun.com (192.18.128.11)  66.357 ms  66.756 ms  66.386 ms
mactaz:~ rbradfor$ traceroute 192.18.99.5
traceroute to 192.18.99.5 (192.18.99.5), 64 hops max, 40 byte packets
 1  10.10.0.1 (10.10.0.1)  1.159 ms  0.763 ms  0.704 ms
 2  10.10.16.2 (10.10.16.2)  0.298 ms  0.303 ms  0.290 ms
 3  gateway.above.net (209.133.114.1)  0.637 ms  0.784 ms  0.937 ms
 4  ge-11-0-2.er1.sjc2.us.above.net (64.124.196.161)  1.513 ms  1.743 ms  1.746 ms
 5  xe-0-1-0.mpr4.sjc7.us.above.net (64.125.30.178)  2.066 ms  1.417 ms  4.144 ms
 6  above-att.sjc7.us.above.net (64.125.12.118)  3.835 ms  3.374 ms  4.001 ms
 7  cr2.sffca.ip.att.net (12.123.15.162)  56.427 ms  56.191 ms  55.553 ms
 8  cr1.dvmco.ip.att.net (12.122.28.54)  55.819 ms  55.508 ms  55.442 ms
 9  gar1.dvmco.ip.att.net (12.122.144.37)  55.429 ms  55.406 ms  55.401 ms
10  12.125.159.146 (12.125.159.146)  59.293 ms  59.501 ms  59.237 ms
11  192.18.101.249 (192.18.101.249)  58.936 ms  59.099 ms  60.184 ms
12  brm-ea-ns-1.Sun.COM (192.18.99.5)  60.090 ms  59.285 ms  59.289 ms
mactaz:~ rbradfor$ traceroute 192.18.43.15
traceroute to 192.18.43.15 (192.18.43.15), 64 hops max, 40 byte packets
 1  10.10.0.1 (10.10.0.1)  1.070 ms  0.639 ms  0.639 ms
 2  10.10.16.2 (10.10.16.2)  0.323 ms  0.238 ms  0.242 ms
 3  gateway.above.net (209.133.114.1)  1.524 ms  2.697 ms  0.615 ms
 4  ge-11-0-2.er1.sjc2.us.above.net (64.124.196.161)  1.463 ms  1.510 ms  1.922 ms
 5  xe-0-1-0.mpr4.sjc7.us.above.net (64.125.30.178)  7.735 ms  2.136 ms  66.881 ms
 6  xe-0-0-0.mpr3.sjc7.us.above.net (64.125.27.85)  1.744 ms  3.131 ms  1.874 ms
 7  * above-level3.sjc7.us.above.net (64.125.13.242)  49.976 ms  2.078 ms
 8  ae-11-69.car1.SanJose1.Level3.net (4.68.18.3)  124.861 ms  206.837 ms  5.631 ms
 9  SUN-MICROSY.car1.SanJose1.Level3.net (4.53.16.50)  3.182 ms  3.579 ms  3.348 ms
10  192.18.44.18 (192.18.44.18)  4.168 ms  4.611 ms  4.146 ms
11  * * *
12  * * *
13  * *^C
mactaz:~ rbradfor$ traceroute 192.18.43.12
traceroute to 192.18.43.12 (192.18.43.12), 64 hops max, 40 byte packets
 1  10.10.0.1 (10.10.0.1)  1.206 ms  0.818 ms  0.879 ms
 2  10.10.16.2 (10.10.16.2)  0.348 ms  0.485 ms  0.465 ms
 3  gateway.above.net (209.133.114.1)  10.055 ms  1.911 ms  1.775 ms
 4  ge-11-0-2.er1.sjc2.us.above.net (64.124.196.161)  1.278 ms  0.963 ms  1.307 ms
 5  xe-0-1-0.mpr4.sjc7.us.above.net (64.125.30.178)  2.243 ms  2.004 ms  2.041 ms
 6  * xe-0-0-0.mpr3.sjc7.us.above.net (64.125.27.85)  2.016 ms  2.104 ms
 7  above-level3.sjc7.us.above.net (64.125.13.242)  2.143 ms  1.471 ms  2.106 ms
 8  ae-41-99.car1.SanJose1.Level3.net (4.68.18.195)  2.970 ms  3.103 ms ae-31-89.car1.SanJose1.Level3.net (4.68.18.131)  2.876 ms
 9  SUN-MICROSY.car1.SanJose1.Level3.net (4.53.16.50)  3.054 ms  3.414 ms  2.925 ms
10  192.18.44.18 (192.18.44.18)  3.721 ms  3.643 ms  3.622 ms
11  scaea-ns-1.sun.com (192.18.43.12)  4.350 ms  3.905 ms  4.188 ms

A traceroute of mysql.com shows it is outside of the Sun network where at least the DNS servers are located.

$ traceroute 213.136.52.29
traceroute to 213.136.52.29 (213.136.52.29), 64 hops max, 40 byte packets
 1  10.10.0.1 (10.10.0.1)  1.243 ms  0.750 ms  0.844 ms
 2  10.10.16.2 (10.10.16.2)  0.397 ms  0.353 ms  0.413 ms
 3  gateway.above.net (209.133.114.1)  1.254 ms  1.021 ms  0.976 ms
 4  ge-11-0-2.er1.sjc2.us.above.net (64.124.196.161)  1.448 ms  0.933 ms  14.524 ms
 5  * xe-0-1-0.mpr4.sjc7.us.above.net (64.125.30.178)  1.734 ms  2.025 ms
 6  sjo-bb1-link.telia.net (213.248.94.29)  2.001 ms  1.942 ms  2.212 ms
 7  nyk-bb2-link.telia.net (80.91.254.176)  75.310 ms  81.628 ms  75.063 ms
 8  kbn-bb2-link.telia.net (80.91.254.90)  175.072 ms  175.445 ms  174.846 ms
 9  s-bb2-pos7-0-0.telia.net (213.248.65.30)  181.580 ms  181.930 ms  182.126 ms
10  s-b3-link.telia.net (80.91.253.226)  184.610 ms  198.216 ms  184.766 ms
11  bahnhof-110262-s-b3.c.telia.net (213.248.97.42)  182.919 ms  185.830 ms  184.827 ms
12  * * *
13  tsic2-gw.bahnhof.net (85.24.151.133)  186.588 ms  186.847 ms  188.352 ms
14  tsic3-gw.bahnhof.net (85.24.151.135)  183.782 ms  183.355 ms  184.660 ms
15  pio-dr1.pio-dr2.bahnhof.net (85.24.151.7)  186.142 ms  186.809 ms  186.723 ms
16  mysql-gw-sec-c.bahnhof.net (85.24.153.74)  183.821 ms  183.793 ms  183.597 ms
17  * * *
18  * * *
19  * * *
20  * * *
21  * * *

For such a significant open source product, I’m surprised at this level of complete unavailability without even a site unavailable page.

NOTE Further update. It’s been reported the site has been down now for 8+ hours.

Drizzle Query logging

Currently Drizzle offers three (3) separate query logging plugins. These plugins offer an extensible means of gathering all or selected queries and provide the foundation for a query analyser tool. Additional filtering includes selecting queries by execution time, result size, rows processed and by any given regular expression via PCRE.

During this tutorial I’ll be stepping through the various logging_query parameters which log SQL in a CSV format.

Confirm Logging Plugins

You can view the current ACTIVE plugins in Drizzle with the following SQL.

drizzle> select version();
+--------------+
| version()    |
+--------------+
| 2009.07.1097 |
+--------------+

drizzle> select * from information_schema.plugins where plugin_name like 'logging%';
+-----------------+----------------+---------------+--------------------------------------+---------------------------------+----------------+
| PLUGIN_NAME     | PLUGIN_VERSION | PLUGIN_STATUS | PLUGIN_AUTHOR                        | PLUGIN_DESCRIPTION              | PLUGIN_LICENSE |
+-----------------+----------------+---------------+--------------------------------------+---------------------------------+----------------+
| logging_gearman | 0.1            | ACTIVE        | Mark Atwood  mark @fallenpegasus.com | Log queries to a Gearman server | GPL            |
| logging_query   | 0.2            | ACTIVE        | Mark Atwood  mark @fallenpegasus.com | Log queries to a CSV file       | GPL            |
| logging_syslog  | 0.2            | ACTIVE        | Mark Atwood  mark @fallenpegasus.com | Log to syslog                   | GPL            |
+-----------------+----------------+---------------+--------------------------------------+---------------------------------+----------------+
3 rows in set (0.01 sec)

Logging all queries

You can define the following configuration variables to enable query logging.

/etc/drizzle/drizzled.cnf
[drizzled]
logging_query_enable=true
logging_query_filename=/var/log/drizzle/general.csv

You can confirm the settings with the following SHOW VARIABLES.

drizzle> show global variables like 'logging_query%';
+---------------------------------------+------------------------------+
| Variable_name                         | Value                        |
+---------------------------------------+------------------------------+
| logging_query_enable                  | ON                           |
| logging_query_filename                | /var/log/drizzle/general.csv |
| logging_query_pcre                    |                              |
| logging_query_threshold_big_examined  | 0                            |
| logging_query_threshold_big_resultset | 0                            |
| logging_query_threshold_slow          | 0                            |
+---------------------------------------+------------------------------+

The following shows the queries being logged.

$ cat /var/log/drizzle/general.csv
1248214561824590,1,1,"","select @@version_comment limit 1","Query",1248214561824590,1240,1240,1,00,0
1248214582588346,1,3,"","show global variables like 'logging_query%'","Query",1248214582588346,1958,1706,6,62,0

Unfortunately the log does not yet provide a header. You need to turn to the source code to get a better description of the columns.

      snprintf(msgbuf, MAX_MSG_LEN,
               "%"PRIu64",%"PRIu64",%"PRIu64",\"%.*s\",\"%s\",\"%.*s\","
               "%"PRIu64",%"PRIu64",%"PRIu64",%"PRIu64",%"PRIu64
               "%"PRIu32",%"PRIu32"\n",
               t_mark,
               session->thread_id,
               session->query_id,
               // dont need to quote the db name, always CSV safe
               dbl, dbs,
               // do need to quote the query
               quotify((unsigned char *)session->query,
                       session->query_length, qs, sizeof(qs)),
               // command_name is defined in drizzled/sql_parse.cc
               // dont need to quote the command name, always CSV safe
               (int)command_name[session->command].length,
               command_name[session->command].str,
               // counters are at end, to make it easier to add more
               (t_mark - session->connect_utime),
               (t_mark - session->start_utime),
               (t_mark - session->utime_after_lock),
               session->sent_row_count,
               session->examined_row_count,
               session->tmp_table,
               session->total_warn_count);

The important parts of this information include:

  • getmicrotime – 1248214561824590
  • Session Id – 1
  • Query Id – 1
  • Schema
  • The Query: “show global variables like ‘logging_query%'”
  • The Query type “Query”
  • Time session connected – 1248214582588346
  • The total execution time – 1958
  • The execution time after necessary locks – 1706
  • The number of rows returned – 6
  • The number of rows examined – 6
  • The number of temporary tables used – 2
  • The total warning count – 0

I also found what I believe is a formatting problem logged as Bug #402831.

You can enable logging dynamically.

drizzle> select now();
+---------------------+
| now()               |
+---------------------+
| 2009-07-22 02:14:31 |
+---------------------+
1 row in set (0 sec)

drizzle> set global logging_query_enable=true;
Query OK, 0 rows affected (0 sec)

drizzle> select curdate();
+------------+
| curdate()  |
+------------+
| 2009-07-22 |
+------------+
1 row in set (0 sec)

drizzle> set global logging_query_enable=false;
Query OK, 0 rows affected (0 sec)

drizzle> select now();
+---------------------+
| now()               |
+---------------------+
| 2009-07-22 02:14:54 |
+---------------------+
1 row in set (0 sec)
1248228876381645,4,3,"","set global logging_query_enable=true","Query",1248228876381645,761,761,0,00,0
1248228886866882,4,4,"","select curdate()","Query",1248228886866882,105,105,1,00,0

I was not able to alter the logging_query_filename dynamically. Need to confirm with the development team about this functionality for the future.

drizzle> set global logging_query_filename='/tmp/general.csv';
ERROR 1238 (HY000): Variable 'logging_query_filename' is a read only variable

Logging slow queries

If you just wanted to emulate the MySQL slow query log, with a long_query_time of 1 second, you could use the following.

/etc/drizzle/drizzled.cnf
[drizzled]
logging_query_enable=true
logging_query_filename=/var/log/drizzle/slow.csv
logging_query_threshold_slow=1000000

Drizzle supports the ability to set a threshold in microseconds.

NOTE: I wanted to demonstrate this using the popular MySQL SLEEP() function, only to find this is currently not available in Drizzle. This is an ideal example of a simple UDF that can be written and added to Drizzle. One day if I ever have the time.

Here is some sample output using queries > 1 second.

1248216457856195,1,43,"test","insert into numbers   select...","Query",1248216457856195,2160680,2160620,0,26214420,0
1248216462738678,1,45,"test","insert into numbers   select...","Query",1248216462738678,4530327,4530263,0,52428821,0
1248216472430813,1,47,"test","insert into numbers   select...","Query",1248216472430813,8990965,8990890,0,104857622,0
1248216473592812,1,48,"test","select @counter := count(*) from numbers","Query",1248216473592812,1152319,1152257,1,104857622,0

Logging by threshold

Drizzle query logging provides the ability to log queries by two thresholds: the number of rows in the result, and the number of rows examined by the storage engine.

/etc/drizzle/drizzled.cnf
[drizzled]
logging_query_enable=true
logging_query_filename=/var/log/drizzle/slow.csv
logging_query_threshold_big_resultset=100

1248216631322097,1,5,"test","select * from numbers limit 100","Query",1248216631322097,281,217,100,1002,0
1248216642763174,1,6,"test","select * from numbers limit 101","Query",1248216642763174,268,215,101,1012,0

/etc/drizzle/drizzled.cnf
[drizzled]
logging_query_enable=true
logging_query_filename=/var/log/drizzle/slow.csv
logging_query_threshold_big_examined=1000

1248216785430588,1,6,"test","select * from numbers limit 1000","Query",1248216785430588,8055,7983,1000,10002,0
1248216800327928,1,7,"test","select count(*) from numbers","Query",1248216800327928,1041322,1041222,1,10485762,0

Logging by pattern

The final option is to return queries that match a given pattern via a PCRE expression.


/etc/drizzle/drizzled.cnf
[drizzled]
logging_query_enable=true
logging_query_filename=/var/log/drizzle/slow.csv
logging_query_pcre=now

drizzle> select now();
+---------------------+
| now()               |
+---------------------+
| 2009-07-22 03:24:32 |
+---------------------+
1 row in set (0 sec)

drizzle> select curdate();
+------------+
| curdate()  |
+------------+
| 2009-07-22 |
+------------+
1 row in set (0 sec)

drizzle> select "now";
+-----+
| now |
+-----+
| now |
+-----+
1 row in set (0 sec)

drizzle> select "know how";
+----------+
| know how |
+----------+
| know how |
+----------+
1 row in set (0 sec)
1248233072792211,3,2,"","select now()","Query",1248233072792211,154,154,1,00,0
1248233085807520,3,4,"","select "now"","Query",1248233085807520,92,92,1,00,0
1248233096659018,3,5,"","select "know how"","Query",1248233096659018,75,75,1,00,0

Another example using a pattern.

/etc/drizzle/drizzled.cnf
[drizzled]
logging_query_enable=true
logging_query_filename=/var/log/drizzle/slow.csv
logging_query_pcre="[0-9][0-9][0-9]"
drizzle> select 1;
+---+
| 1 |
+---+
| 1 |
+---+
1 row in set (0 sec)

drizzle> select 11;
+----+
| 11 |
+----+
| 11 |
+----+
1 row in set (0 sec)

drizzle> select 111;
+-----+
| 111 |
+-----+
| 111 |
+-----+
1 row in set (0 sec)

drizzle> select 1111;
+------+
| 1111 |
+------+
| 1111 |
+------+
1 row in set (0 sec)

drizzle> select 11+22;
+-------+
| 11+22 |
+-------+
|    33 |
+-------+
1 row in set (0 sec)
1248233336460373,3,4,"","select 111","Query",1248233336460373,79,79,1,00,0
1248233339300429,3,5,"","select 1111","Query",1248233339300429,82,82,1,00,0

Unfortunately it seems that this variable is also not configurable dynamically at this time.

drizzle> set global logging_query_pcre="now";
ERROR 1238 (HY000): Variable 'logging_query_pcre' is a read only variable

This is definitely an improvement over current MySQL logging.

What's new in MySQL 5.4.1

Absolutely nothing?

5.4.0 was released with a change in the MySQL binary distributions, delivering only one 64-bit Linux platform and two Sun Solaris platforms. This was officially announced on April 21, 2009; however, the 5.4.0 Release Notes state 05 April 2009. It’s not a big deal, but consistency would be nice.

I’ve seen 5.4.1 mentioned in a few posts, so I decided to try it out. Spending the time to read what has changed in two months via the 5.4.1 Release Notes before downloading and installing, all you read is:

This release does not differ from 5.4.0 except that binary distributions are available for all MySQL-supported platforms.

Is this going to be the new policy from Sun? Release for Solaris platforms first, then later release for other platforms?

What to do at 3:25am

Look at MySQL bug reports of course? Well actually I’m writing multiple blog posts, and I was confirming additional reference sources and links when I came across MySQL Bug #29847 – Large CPU usage of InnoDB crash recovery with a big buf pool.

Taking the time to actually read the information exchange, I stumbled upon the following.

[8 Jun 23:29] liz drachnik

Hello Heikki -

In order for us to continue the process of reviewing your contribution to MySQL - We need
you to review and sign the Sun|MySQL contributor agreement (the "SCA")

The process is explained here:

http://forge.mysql.com/wiki/Sun_Contributor_Agreement

Getting a signed/approved SCA on file will help us facilitate your contribution-- this
one, and others in the future.

Thank you !

Liz Drachnik  - Program Manager - MySQL

Oops. Well, it made me laugh out loud for so many reasons. First, you’re talking to the creator of InnoDB, a part of MySQL for at least 5+ years. Second, there is clearly an agreement between Oracle and MySQL already for the incorporation of InnoDB in the current builds, but mostly because at this late stage of the probable Oracle acquisition (which, side note, has seemed too quiet for too long) it seems rather a moot point to be chasing up paperwork.

Fortunately sanity obviously prevailed; unfortunately the public record persists. Still, humor is always good.

[9 Jun 18:04] Liz Drachnik

Please disregard the previous request for an SCA.
thank you

Never let your binlog directory fill up

Recently, while running a number of disaster recovery tests with a client, I came across a nasty situation which was not part of the original plan and provided a far worse disaster than expected.

I should preface this with some of the environment details.

  • MySQL 5.0 Enterprise 5.0.54
  • RHEL 5 64bit
  • Master and 2 Slaves
  • MySQL Data and MySQL Binary Logs/MySQL Error Logs are on separate disk partitions

While running stress tests under high load, we tested the filling of the partition containing the logs. This partition included the binary log and MySQL error log.

The observed behavior was:

  • An error message was written to the MySQL error log. See below.
  • Application throughput dropped, but did not stop.
  • Binary logs stopped occurring.
  • MySQL proactively stopped logging but continued to process transactions.

The end result was:

  • The error log was ultimately truncated after a reboot, so if the information was not captured while this was happening, these important messages would have been lost.
  • The primary slave used for read scalability and the secondary slave used for backups were now unusable.
  • The backup and recovery process using slaves and point-in-time recovery via binary logs was now unusable.
  • The three backup methods in use for the client were ineffective. It was necessary to disable access to the Master, take a full backup, and then sync the slaves from this copy.

090710 19:01:25 [ERROR] /opt/mysql/bin/mysqld: Disk is full writing '/mysqllog/binlog/hostname-3306-bin.000020'
     (Errcode: 28). Waiting for someone to free space... Retry in 60 secs
090710 19:01:46 [ERROR] Error writing file '/mysqllog/slow_log/hostname_3306_slow_queries.log' (errno: 1)
090710 19:02:25 [ERROR] Error writing file '/mysqllog/binlog/hostname-3306-bin' (errno: 28)
090710 19:02:25 [ERROR] Could not use /mysqllog/binlog/hostname-3306-bin for logging (error 28).
    Turning logging off for the whole duration of the MySQL server process. 
    To turn it on again: fix the cause, shutdown the MySQL server and restart it.
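
Two preventative measures that would have helped here, sketched with an assumed retention of 7 days (the value must be long enough for all slaves and for point-in-time recovery requirements): let MySQL expire binary logs automatically, and monitor or purge them before the partition fills.

-- Automatically purge binary logs older than 7 days (also settable in my.cnf).
SET GLOBAL expire_logs_days = 7;

-- Review the binary logs and their sizes.
SHOW MASTER LOGS;

-- Manually purge logs no longer required by any slave
-- (confirm slave positions with SHOW SLAVE STATUS first).
PURGE BINARY LOGS BEFORE NOW() - INTERVAL 7 DAY;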

Updated

I discuss in detail the options for the MySQL error log including recommendations for the MySQL error log file location in Monitoring MySQL – The error log

Getting wireless working on Ubuntu Macbook

I run Ubuntu 9.04 Jaunty on my Macbook. Previously, when installing Ubuntu 8.10, wireless worked automatically; with 9.04 it did not.

This is what I did to fix it.

  1. Verify your Macbook is seeing the Broadcom controller. See below for the lspci command and expected output.
  2. Go to System -> Administration -> Hardware Drivers. The Broadcom STA wireless driver is activated; deactivate it.
  3. Add to /etc/modules a line containing wl
  4. Reboot
  5. Go to System -> Administration -> Hardware Drivers. Activate the Broadcom STA wireless driver.
  6. Reboot
  7. Wireless is now operational.

$ lspci
...
02:00.0 Network controller: Broadcom Corporation BCM4328 802.11a/b/g/n (rev 03)
03:00.0 Ethernet controller: Marvell Technology Group Ltd. Marvell Yukon 88E8058 PCI-E Gigabit Ethernet Controller (rev 13)
04:03.0 FireWire (IEEE 1394): Agere Systems FW323 (rev 61)

For reference, modprobe wl does not return any output in my environment, yet wireless works fine.

Understanding more InnoDB MVCC

As I had written earlier in Understanding InnoDB MVCC, I am trying to understand why InnoDB is taking a lock on data during an UPDATE when I do not expect this to happen.

Not wanting to go looking at the InnoDB source code to understand the problem, I’m endeavouring to use existing MySQL monitoring to try and understand the problem better. In this case, I’m going to investigate SHOW ENGINE INNODB STATUS first. An old but still relevant article for reference is SHOW INNODB STATUS walk through. The MySQL High Performance book is also a good starting reference.

I’ve just installed 5.1.36 on a new Linux 64 bit laptop for this test.

For now all I’ve done is ensure the innodb_buffer_pool_size is sufficient to hold the entire table in memory.

So what happens in the 21.5 seconds these queries took to run?

--thread 1

mysql> start transaction;
Query OK, 0 rows affected (0.00 sec)

mysql> update numbers
    -> set f2 = f2 +200
    -> where id between 1 and 1000000; commit;
Query OK, 1000000 rows affected (21.50 sec)
Rows matched: 1000000  Changed: 1000000  Warnings: 0
Query OK, 0 rows affected (0.02 sec)

--thread 2

Query OK, 0 rows affected (0.00 sec)

mysql> update numbers
    -> set f2 = f2 +300
    -> where id between 1000001 and 2000000; commit;
Query OK, 1000000 rows affected (20.06 sec)
Rows matched: 1000000  Changed: 1000000  Warnings: 0
Query OK, 0 rows affected (0.02 sec)

I sampled SHOW ENGINE INNODB STATUS 1718 times. Probably not the best approach, but it did highlight some things.

The overall state of the main InnoDB thread process cycled through states including sleeping, making checkpoint, flushing log, waiting for server activity, purging and reserving kernel mutex.

     49 Main thread process no. 5489, id 139872150796624, state: flushing log
    709 Main thread process no. 5489, id 139872150796624, state: making checkpoint
      1 Main thread process no. 5489, id 139872150796624, state: purging
      2 Main thread process no. 5489, id 139872150796624, state: reserving kernel mutex
    956 Main thread process no. 5489, id 139872150796624, state: sleeping
      1 Main thread process no. 5489, id 139872150796624, state: waiting for server activity

I was surprised to see making checkpoint here. I’ve actually run this monitoring on two separate servers, both running 5.1.x, and both times this occurred. On this test machine, I also saw an increase in the flush list and pending checkpoint writes to confirm this. I’ve tried in the past to monitor this closely, so this test will actually be useful for a different problem analysis at a later time.

Some 3,800 OS waits (that is, context switches) indicate that some critical code needed to use a mutex. These mutexes, listed in the Semaphores section, occurred in multiple areas including:

      4 Mutex at 0x1684b08 created file mem/mem0pool.c line 206, lock var 0
     61 Mutex at 0x7f369033e2b8 created file srv/srv0srv.c line 875, lock var 0
      1 Mutex at 0x7f36903408b8 created file fil/fil0fil.c line 1313, lock var 1
     22 Mutex at 0x7f3690340cb8 created file buf/buf0buf.c line 597, lock var 0
      5 Mutex at 0x7f36903410e0 created file btr/btr0sea.c line 139, lock var 0
      1 Mutex at 0x7f36903410e0 created file btr/btr0sea.c line 139, lock var 1
     21 Mutex at 0x7f3690341910 created file log/log0log.c line 738, lock var 0
     85 Mutex at 0x7f3690341910 created file log/log0log.c line 738, lock var 1
      8 Mutex at 0x7f36903508c0 created file trx/trx0rseg.c line 147, lock var 0
    232 Mutex at 0x7f36903508c0 created file trx/trx0rseg.c line 147, lock var 1

One transaction waited only a little more than 1 second, indicated by 58 samples.

---TRANSACTION 0 1327, ACTIVE 22 sec, process no 5489, OS thread id 139872177113424 fetching rows
mysql tables in use 1, locked 1
LOCK WAIT 2008 lock struct(s), heap size 292848, 1002005 row lock(s), undo log entries 1000000
MySQL thread id 66, query id 11435 localhost root Updating
update numbers
set f2 = f2 +200
where id between 1 and 1000000
------- TRX HAS BEEN WAITING 1 SEC FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 0 page no 2612 n bits 568 index `PRIMARY` of table `test`.`numbers` trx id 0 1327 lock_mode X waiting
Record lock, heap no 256 PHYSICAL RECORD: n_fields 5; compact format; info bits 0
 0: len 4; hex 000f4241; asc   BA;; 1: len 6; hex 000000000530; asc      0;; 2: len 7; hex 00000000f20110; asc        ;; 3: len 4; hex 800f4241; asc   BA;; 4: len 4; hex 8005032c; asc    ,;;

And the blocker in this instance, while this lock wait was occurring, continued to acquire row locks and create undo log entries. That was to be expected.

963 lock struct(s), heap size 292848, 979851 row lock(s), undo log entries 977891
1979 lock struct(s), heap size 292848, 988001 row lock(s), undo log entries 986025
1989 lock struct(s), heap size 292848, 993013 row lock(s), undo log entries 991027

I double checked to ensure no other queries were inside the InnoDB queue.

      4 0 queries inside InnoDB, 0 queries in queue
    303 1 queries inside InnoDB, 0 queries in queue
   1411 2 queries inside InnoDB, 0 queries in queue

Only 12 of the 1718 samples showed any pending asynchronous I/O writes; however, there was a higher number of pending buffer pool and log syncs. This is all to be expected.

This quick inspection, especially at 1am, has not given me any answers. With this additional information gathered, a better understanding is still required.

Understanding InnoDB MVCC

Multi-version concurrency control (MVCC) is a database design theory that enables relational databases to support concurrency, or more simply, multiple users accessing common data in your database.

In MySQL the InnoDB storage engine provides MVCC, row-level locking, full ACID compliance as well as other features.

In my understanding of database theory, access to modify independent sections of unique data (e.g. UPDATE) under MVCC should fully support concurrency. I have however experienced a level of exclusive locking under Innodb.

I wanted to clearly document this situation so I could then seek the advice of the gurus in InnoDB internals, such as Mark Callaghan, Percona and the InnoDB development team, for example. I’m happy to say I’m not a MySQL expert in every aspect of MySQL, specifically internals, where I have not had the time to read the code in detail and understand all the internal workings.

The situation

Single table updates on a range of rows by primary key are being blocked by other similar operations on the same table, even though the set of data for each query is effectively unique.

Reproducing the problem

$ mysql -u -p test
drop table if exists numbers;
create table numbers (id int unsigned not null primary key, f1 int not null, f2 int not null) engine=innodb;

delimiter $$

drop procedure if exists fill_numbers $$
create procedure fill_numbers(in p_max int)
deterministic
begin
  declare counter int default 1;
  truncate table numbers;
  insert into numbers values (1,1,1);
  while counter < p_max
  do
      insert into numbers (id,f1, f2)
          select id + counter, counter + f1, id - f2
          from numbers;
      select count(*) into counter from numbers;
      select counter;
  end while;
end $$
delimiter ;

call fill_numbers(2000000);

In two separate threads I execute similar statements on different ranges of the primary key.

--thread 1
start transaction;
update numbers
set f2 = f2 +200
where id between 1 and 1000000;
commit;

--thread 2
start transaction;
update numbers
set f2 = f2 +300
where id between 1000001 and 2000000;
commit;

And in a third thread we can monitor the transactions inside Innodb.

-- thread 3
show engine innodb statusG

During the update process, the following error can be observed.

---TRANSACTION 0 7741, ACTIVE 20 sec, process no 2159, OS thread id 1188534592 fetching rows, thread declared inside InnoDB 275
mysql tables in use 1, locked 1
2007 lock struct(s), heap size 292848, 1001862 row lock(s), undo log entries 999858
MySQL thread id 918563, query id 16802707 localhost root Updating
update numbers set f2 = f2 +300 where id between 1000001 and 2000000
---TRANSACTION 0 7740, ACTIVE 21 sec, process no 2159, OS thread id 1178949952 fetching rows
mysql tables in use 1, locked 1
LOCK WAIT 2008 lock struct(s), heap size 292848, 1002005 row lock(s), undo log entries 1000000
MySQL thread id 918564, query id 16802694 localhost root Updating
update numbers set f2 = f2 +200 where id between 1 and 1000000
------- TRX HAS BEEN WAITING 3 SEC FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 0 page no 16052 n bits 568 index `PRIMARY` of table `test`.`numbers` trx id 0 7740 lock_mode X waiting
Record lock, heap no 256 PHYSICAL RECORD: n_fields 5; compact format; info bits 0
 0: len 4; hex 000f4241; asc   BA;; 1: len 6; hex 000000001e3d; asc      =;; 2: len 7; hex 00000033630110; asc    3c  ;; 3: len 4; hex 800f4241; asc   BA;; 4: len 4; hex 80050584; asc     ;;

The problem has been reproduced on various different MySQL versions and different hardware including, 5.0.67, 5.0.81 and 5.1.25.

What is causing the problem?

  • Is it a bug? No.
  • Is my understanding of MVCC theory incorrect? Maybe.
  • Is InnoDB’s implementation of MVCC incomplete? No. Heikki and his team have a far greater understanding of database theory than most database experts.
  • Is it the MySQL kernel interfering with the InnoDB storage engine? No, this is not possible as the MySQL kernel has passed the queries to InnoDB, and InnoDB is handling these threads independently.
  • Is it a gap locking issue, a problem that can cause deadlocks when inserting data in a high concurrency situation? Not likely, as the data is inserted in primary key (i.e. auto increment) order, and there are no gaps.
  • Is it related to InnoDB’s access method via the primary key, where InnoDB uses a clustered index to store the primary key? Given the data is physically in primary key order, this clustered index should in theory reduce possible locking.
  • Is it related to the page size of indexes, e.g. the 16k index page, effectively causing a page level lock for overlapping index data? My understanding is that InnoDB supports row level locking, and MVCC should cater for this.
  • Is it related to the ranges of primary keys being adjacent, i.e. 1,000,000 and 1,000,001? Not likely, as I can reproduce the problem with non-adjacent ranges.
  • Is it some weird interaction with managing the undo space of the transactions in the InnoDB buffer pool?
  • Is it some weird interaction with marking/locking the dirty (modified) pages in the InnoDB buffer pool?
  • Is it some weird interaction with logging the successful InnoDB transactions to the redo logs?

I’ve listed these points more as an information exercise, for all those that have less understanding of the problem, to see my thought process.

Additional testing can definitely be performed. Additional analysis of InnoDB internals with SHOW ENGINE INNODB STATUS such as spin waits, OS waits (context switches), looking at Mutexes with SHOW ENGINE INNODB MUTEX can be undertaken.

My hope and request is that this has been observed by others and that a simple hybrid solution exists.

Killing my softly with QUERY

The MySQL KILL command as the name suggests kills queries that are running.

After identifying the Id using the SHOW PROCESSLIST command, the User of the connection/thread or a database user with SUPER privileges can execute KILL [id]; to remove the connection/thread.

However, there is the ability to kill just the query that is being executed rather than the entire connection. The default when not specified is to kill the connection; however, you can optionally specify the CONNECTION or QUERY keywords.

For example, below is an interactive test.

Thread 1:

mysql> select sleep(10);

Thread 2:

mysql> show processlist;
+----+------+-----------+------+---------+------+-----------+------------------+
| Id | User | Host      | db   | Command | Time | State     | Info             |
+----+------+-----------+------+---------+------+-----------+------------------+
| 23 | root | localhost | NULL | Query   |    2 | executing | select sleep(10) |
| 24 | root | localhost | NULL | Query   |    0 | NULL      | show processlist |
+----+------+-----------+------+---------+------+-----------+------------------+
2 rows in set (0.00 sec)

mysql> kill query 23;

Thread 1:
Notice the query exits in less than 10 seconds, but the connection is still valid.

+-----------+
| sleep(10) |
+-----------+
|         1 |
+-----------+
1 row in set (7.27 sec)

mysql> select sleep(10);

Thread 2:

mysql> show processlist;
+----+------+-----------+------+---------+------+-----------+------------------+
| Id | User | Host      | db   | Command | Time | State     | Info             |
+----+------+-----------+------+---------+------+-----------+------------------+
| 23 | root | localhost | NULL | Query   |    4 | executing | select sleep(10) |
| 24 | root | localhost | NULL | Query   |    0 | NULL      | show processlist |
+----+------+-----------+------+---------+------+-----------+------------------+
2 rows in set (0.00 sec)

mysql> kill 23;
Query OK, 0 rows affected (0.00 sec)

Thread 1:
The query is killed and the connection is lost. An attempt to run the command again within the interactive client causes the client to obtain a new connection.

ERROR 2013 (HY000): Lost connection to MySQL server during query
mysql> select sleep(10);
ERROR 2006 (HY000): MySQL server has gone away
No connection. Trying to reconnect...
Connection id:    25
Current database: *** NONE ***

+-----------+
| sleep(10) |
+-----------+
|         0 |
+-----------+
1 row in set (10.00 sec)
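
As a small sketch building on this, and assuming MySQL 5.1 or later where INFORMATION_SCHEMA.PROCESSLIST is available, long running statements can be identified with SQL rather than by eyeballing SHOW PROCESSLIST, and then killed individually with KILL QUERY.

-- Find statements running longer than 60 seconds, excluding this connection.
SELECT id, user, host, db, time, info
FROM   information_schema.processlist
WHERE  command = 'Query'
AND    time > 60
AND    id <> CONNECTION_ID();

-- For a chosen Id, terminate just the statement and keep the connection.
KILL QUERY 23;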

Installing Perl RRD module – RRDs.pm

Perform a quick check if the module is available.

$ perl -MRRDs -le 'print q(ok!)'
Can't locate RRDs.pm in @INC (@INC contains: /usr/lib64/perl5/site_perl/5.8.8/x86_64-linux-thread-multi /usr/lib64/perl5/site_perl/5.8.7/x86_64-linux-thread-multi /usr/lib64/perl5/site_perl/5.8.6/x86_64-linux-thread-multi /usr/lib64/perl5/site_perl/5.8.5/x86_64-linux-thread-multi /usr/lib/perl5/site_perl/5.8.8 /usr/lib/perl5/site_perl/5.8.7 /usr/lib/perl5/site_perl/5.8.6 /usr/lib/perl5/site_perl/5.8.5 /usr/lib/perl5/site_perl /usr/lib64/perl5/vendor_perl/5.8.8/x86_64-linux-thread-multi /usr/lib64/perl5/vendor_perl/5.8.7/x86_64-linux-thread-multi /usr/lib64/perl5/vendor_perl/5.8.6/x86_64-linux-thread-multi /usr/lib64/perl5/vendor_perl/5.8.5/x86_64-linux-thread-multi /usr/lib/perl5/vendor_perl/5.8.8 /usr/lib/perl5/vendor_perl/5.8.7 /usr/lib/perl5/vendor_perl/5.8.6 /usr/lib/perl5/vendor_perl/5.8.5 /usr/lib/perl5/vendor_perl /usr/lib64/perl5/5.8.8/x86_64-linux-thread-multi /usr/lib/perl5/5.8.8 .).
BEGIN failed--compilation aborted.

Check the installed packages on this CentOS 5 server.

$ rpm -qa | grep rrd
rrdtool-1.2.27-3.el5
rrdtool-1.2.27-3.el5

Do a search for related rrdtool modules.

$ sudo yum search rrdtool
Loading "fastestmirror" plugin
Loading mirror speeds from cached hostfile
 * epel: ftp.SURFnet.nl
 * base: mirrors.service.softlayer.com
 * updates: mirrors.service.softlayer.com
 * addons: mirrors.service.softlayer.com
 * extras: mirrors.service.softlayer.com
Excluding Packages in global exclude list
Finished
rrdtool.i386 : Round Robin Database Tool to store and display time-series data
queuegraph.noarch : A RRDtool frontend for Mail statistics
rrdtool-perl.x86_64 : Perl RRDtool bindings
rrdtool-tcl.x86_64 : Tcl RRDtool bindings
mailgraph-selinux.noarch : A RRDtool frontend for Mail statistics
collectd-rrdtool.x86_64 : RRDTool module for collectd
rrdtool.i386 : Round Robin Database Tool to store and display time-series data
munin.noarch : Network-wide graphing framework (grapher/gatherer)
rrdtool.x86_64 : Round Robin Database Tool to store and display time-series data
ruby-RRDtool.x86_64 : RRDTool for Ruby
munin-node.noarch : Network-wide graphing framework (node)
rrdtool-ruby.x86_64 : Ruby RRDtool bindings
sagator.noarch : Antivir/antispam gateway for smtp server
rrdtool-php.x86_64 : PHP RRDtool bindings
rrdtool-devel.x86_64 : RRDtool libraries and header files
rrdtool-python.x86_64 : Python RRDtool bindings
ganglia-gmetad.x86_64 : Ganglia Metadata collection daemon
rrdtool-doc.x86_64 : RRDtool documentation
cacti.noarch : An rrd based graphing tool
queuegraph-selinux.noarch : A RRDtool frontend for Mail statistics
rrdtool-devel.i386 : RRDtool libraries and header files
cacti.noarch : An rrd based graphing tool
sysusage.noarch : System monitoring based on perl, rrdtool, and sysstat
mailgraph.noarch : A RRDtool frontend for Mail statistics
rrdtool.x86_64 : Round Robin Database Tool to store and display time-series data

Install what looks like the right module.

$ sudo yum install rrdtool-perl
Loading "fastestmirror" plugin
Loading mirror speeds from cached hostfile
 * epel: mirrors.ircam.fr
 * base: mirrors.service.softlayer.com
 * updates: mirrors.service.softlayer.com
 * addons: mirrors.service.softlayer.com
 * extras: mirrors.service.softlayer.com
Excluding Packages in global exclude list
Finished
Setting up Install Process
Parsing package install arguments
Resolving Dependencies
--> Running transaction check
---> Package rrdtool-perl.x86_64 0:1.2.27-3.el5 set to be updated
--> Finished Dependency Resolution

Dependencies Resolved

=============================================================================
 Package                 Arch       Version          Repository        Size
=============================================================================
Installing:
 rrdtool-perl            x86_64     1.2.27-3.el5     epel               34 k

Transaction Summary
=============================================================================
Install      1 Package(s)
Update       0 Package(s)
Remove       0 Package(s)

Total download size: 34 k
Downloading Packages:
(1/1): rrdtool-perl-1.2.2 100% |=========================|  34 kB    00:00
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing: rrdtool-perl                 ######################### [1/1]

Installed: rrdtool-perl.x86_64 0:1.2.27-3.el5
Complete!

Test shows it should now be available.

$ perl -MRRDs -le 'print q(ok!)'
ok!

An important Drizzle/MySQL difference

There are many features that are similar in MySQL and Drizzle. There are also many that are not.

I’ve previously discussed topics like Datatypes and tables, SQL_MODE and SHOW.

A key difference in Drizzle is the definition of utf8 as 4 bytes, not 3 bytes as in MySQL. This, combined with there being no other character sets, has an impact on the key lengths supported in InnoDB.

During a recent test with a client, I was unable to successfully migrate the schema and provide the same structure due to unique indexes defined on utf8 VARCHAR(255) fields.

Here is the problem.

mysql> create table t1(c1 int unsigned not null auto_increment primary key, c2 varchar(255) not null, unique key (c2)) engine=innodb default charset latin1;
Query OK, 0 rows affected (0.05 sec)
mysql> create table t2(c1 int unsigned not null auto_increment primary key, c2 varchar(255) not null, unique key (c2)) engine=innodb default charset utf8;
Query OK, 0 rows affected (0.03 sec)
mysql> create table t3(c1 int unsigned not null auto_increment primary key, c2 varchar(256) not null, unique key (c2)) engine=innodb default charset utf8;
ERROR 1071 (42000): Specified key was too long; max key length is 767 bytes
drizzle> create table t1(c1 int not null auto_increment primary key, c2 varchar(255) not null, unique key (c2)) engine=innodb default charset latin1;
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your Drizzle server version for the right syntax to use near 'charset latin1' at line 1
drizzle> create table t1(c1 int not null auto_increment primary key, c2 varchar(255) not null, unique key (c2)) engine=innodb;
ERROR 1071 (42000): Specified key was too long; max key length is 767 bytes

Only a maximum of 191 characters is now possible. InnoDB’s index key limit of 767 bytes divided by 4 bytes per utf8 character gives 191 (767 / 4 = 191.75), whereas with MySQL’s 3-byte utf8 a VARCHAR(255) fits within the limit (255 * 3 = 765 bytes).

drizzle> create table t1(c1 int not null auto_increment primary key, c2 varchar(191) not null, unique key (c2)) engine=innodb;
Query OK, 0 rows affected (0.03 sec)
drizzle> create table t1(c1 int not null auto_increment primary key, c2 varchar(192) not null, unique key (c2)) engine=innodb;
ERROR 1071 (42000): Specified key was too long; max key length is 767 bytes

The confusion over global and session status

I was trying to demonstrate to a client how to monitor queries that generate internal temporary tables. In the EXPLAIN plan you see ‘Using temporary’. Within MySQL you can use SHOW STATUS to look at the counters for queries that create temporary tables.

There is the issue that the act of monitoring impacts the results: SHOW STATUS itself creates a temporary table. You can see this in the following example.

mysql> select version();
+-----------------+
| version()       |
+-----------------+
| 5.1.31-1ubuntu2 |
+-----------------+
1 row in set (0.00 sec)

mysql> show global status like 'created_tmp%';
+-------------------------+-------+
| Variable_name           | Value |
+-------------------------+-------+
| Created_tmp_disk_tables | 48    |
| Created_tmp_files       | 5     |
| Created_tmp_tables      | 155   |
+-------------------------+-------+
3 rows in set (0.00 sec)

mysql> show global status like 'created_tmp%';
+-------------------------+-------+
| Variable_name           | Value |
+-------------------------+-------+
| Created_tmp_disk_tables | 48    |
| Created_tmp_files       | 5     |
| Created_tmp_tables      | 156   |
+-------------------------+-------+
3 rows in set (0.00 sec)

What has perplexed me in the past, and I can’t explain, is that SHOW SESSION STATUS for this example does not increment. It’s confusing to tell a client to use SHOW SESSION STATUS for SQL statements when the behavior differs from SHOW GLOBAL STATUS. For example, no increment:

mysql> show session status like 'created_tmp%';
+-------------------------+-------+
| Variable_name           | Value |
+-------------------------+-------+
| Created_tmp_disk_tables | 0     |
| Created_tmp_files       | 5     |
| Created_tmp_tables      | 2     |
+-------------------------+-------+
3 rows in set (0.00 sec)

mysql> show session status like 'created_tmp%';
+-------------------------+-------+
| Variable_name           | Value |
+-------------------------+-------+
| Created_tmp_disk_tables | 0     |
| Created_tmp_files       | 5     |
| Created_tmp_tables      | 2     |
+-------------------------+-------+
3 rows in set (0.00 sec)

Let’s look at a query that creates a temporary table.

mysql> explain select t1.* from t1,t2 where t1.c1 = t2.c2 order by t2.c2, t1.c1;
+----+-------------+-------+------+---------------+------+---------+------+------+---------------------------------+
| id | select_type | table | type | possible_keys | key  | key_len | ref  | rows | Extra                           |
+----+-------------+-------+------+---------------+------+---------+------+------+---------------------------------+
|  1 | SIMPLE      | t1    | ALL  | NULL          | NULL | NULL    | NULL |    3 | Using temporary; Using filesort |
|  1 | SIMPLE      | t2    | ALL  | NULL          | NULL | NULL    | NULL |    3 | Using where; Using join buffer  |
+----+-------------+-------+------+---------------+------+---------+------+------+---------------------------------+
2 rows in set (0.03 sec)

If we use session status, the query gives an increment of exactly 1: two baseline checks below both show 2, and after running the query the count is 3.

mysql> show session status like 'created_tmp%';
+-------------------------+-------+
| Variable_name           | Value |
+-------------------------+-------+
| Created_tmp_disk_tables | 0     |
| Created_tmp_files       | 5     |
| Created_tmp_tables      | 2     |
+-------------------------+-------+
3 rows in set (0.00 sec)

mysql> show session status like 'created_tmp%';
+-------------------------+-------+
| Variable_name           | Value |
+-------------------------+-------+
| Created_tmp_disk_tables | 0     |
| Created_tmp_files       | 5     |
| Created_tmp_tables      | 2     |
+-------------------------+-------+
3 rows in set (0.00 sec)

mysql> select SQL_NO_CACHE t1.* from t1,t2 where t1.c1 = t2.c2 order by t2.c2, t1.c1;
Empty set (0.00 sec)

mysql> show session status like 'created_tmp%';
+-------------------------+-------+
| Variable_name           | Value |
+-------------------------+-------+
| Created_tmp_disk_tables | 0     |
| Created_tmp_files       | 5     |
| Created_tmp_tables      | 3     |
+-------------------------+-------+
3 rows in set (0.00 sec)

If we use global status, the count increases by 2 (from 171 to 173): one for the query and one for the second SHOW GLOBAL STATUS itself. In this case it is an idle server, so I know there is no other activity; in a real-world situation that isolation is not possible.

mysql> show global status like 'created_tmp%';
+-------------------------+-------+
| Variable_name           | Value |
+-------------------------+-------+
| Created_tmp_disk_tables | 48    |
| Created_tmp_files       | 5     |
| Created_tmp_tables      | 171   |
+-------------------------+-------+
3 rows in set (0.00 sec)

mysql> select SQL_NO_CACHE t1.* from t1,t2 where t1.c1 = t2.c2 order by t2.c2, t1.c1;
Empty set (0.00 sec)

mysql> show global status like 'created_tmp%';
+-------------------------+-------+
| Variable_name           | Value |
+-------------------------+-------+
| Created_tmp_disk_tables | 48    |
| Created_tmp_files       | 5     |
| Created_tmp_tables      | 173   |
+-------------------------+-------+
3 rows in set (0.00 sec)
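
One approach when demonstrating this to clients is to capture the session counter before and after the statement and report the difference. This is only a sketch, assuming MySQL 5.1 or later where the information_schema.SESSION_STATUS table is available; note that the information_schema lookups may themselves create temporary tables, so treat the delta as indicative rather than exact.

-- Capture the session counter before the statement under test
SELECT VARIABLE_VALUE INTO @tmp_before
  FROM information_schema.SESSION_STATUS
 WHERE VARIABLE_NAME = 'Created_tmp_tables';

-- The statement being measured
SELECT SQL_NO_CACHE t1.* FROM t1, t2 WHERE t1.c1 = t2.c2 ORDER BY t2.c2, t1.c1;

-- Report how many internal temporary tables the statement added
SELECT VARIABLE_VALUE - @tmp_before AS created_tmp_tables_delta
  FROM information_schema.SESSION_STATUS
 WHERE VARIABLE_NAME = 'Created_tmp_tables';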

Benchmarking Drizzle with MyBench (DBD::drizzle)

With thanks to Patrick Galbraith and his DBD::drizzle 0.200, I am now able to run client benchmarks side by side against MySQL and Drizzle.

For simple benchmarking with clients, generally when I have little time, I use the simple Perl framework mybench. I was able to change just the connection string and run the same tests.

The diff of my two scripts was:

---
> my $user      = $opt{u} || "appuser";
> my $pass      = $opt{p} || "password";
> my $port      = $opt{P} || 3306;
> my $dsn       = "DBI:mysql:$db:$host;port=$port";
---
< my $user      = $opt{u} || "root";
< my $pass      = $opt{p} || "";
< my $port      = $opt{P} || 4427;
< my $dsn       = "DBI:drizzle:$db:$host;port=$port";
---

It is too early to tell what improvement Drizzle will make. My first run of single-threaded and multi-threaded tests shows an improvement in all figures for Drizzle over MySQL; however, I will need to repeat this against various versions of MySQL, including the latest 5.0, to confirm.