Performance v Scalability – For Employers

In a recent discussion with a peer who was reviewing a job description he was applying for, we got into the specifics of a Performance Engineer versus a Scalability Engineer.

Performance and Scalability are two very different goals. While it is true that improving performance can lead to increased capacity with the same physical resources, increasing the scalability of your application does not necessarily lead to improved performance.

Performance is all about perception. In layman’s terms, it is how quickly you can provide a response to a request from your customer. As volume increases, performance generally degrades after a certain point, and as volume continues to grow the outcome is often complete failure. A suitably scalable architecture enables you to provide consistent performance for a given and growing workload.

A Scalability Engineer needs to have architectural skills, management skills, deployment skills and automation skills. A Performance Engineer needs to have more specific technology skills, development skills and some architectural skills.

A great example of a performance problem is when a client contacts me to help with a slow performing website. If the home page takes 5 seconds to load, but only 500ms of that is the actual page generation, and the database accounts for at most that 500ms, then in isolation as a database expert I could only improve on 10% of the actual problem. As a performance engineer, knowledge of the full stack is essential: the web container, the data store accesses (persistent and non-persistent), optimizing the network payload size with compression, and the various techniques for caching and parallelism.

A scalability problem is when your site supports 5,000 concurrent users, but it needs to support 25,000. Applying the skills just listed will not solve your scalability need. Adding 5x the servers is a simple way to support more concurrent users, but where is the bottleneck or limitation of your application as you scale? Does adding 5x web servers place too much load on your caching tier or your database tier? Most applications use load balancing for web traffic, so adding a web server is generally straightforward (to a point), but can your application even support adding more database servers? Or does your architecture provide read scalability, but not write scalability? Not being able to scale writes is a clear limiting factor for scalability. Most scalability needs require (re)architecture of your stack, and the management of how this can be achieved while maintaining an operational site. Once you have 500+ servers, adding 50 more is generally the role of good automated deployment processes; the problem is usually greater when moving from 5 servers to 25.

For employers writing a job description with a specific job title, consider whether the objectives in the description match the title.

This leads to the question, what about a Reliability Engineer? That is another detailed discussion that relates to performance and scalability, but again has very different goals. Clearly defining your uptime needs is just one question a reliability engineer needs to ask.

Announcing the MySQL Plugin for New Relic

Many application developers will know of New Relic, a SaaS performance and monitoring tool targeted at web application monitoring for PHP, Ruby, Java, .Net, Python and Node.

With today’s release (June 19, 2013) of the New Relic Platform, custom monitoring of data stores including MySQL is now possible. Try it now for free. This link will provide you a free standard account (no cost, no billing details necessary) that enables you to perform application monitoring, server monitoring, MySQL instance monitoring and monitoring of many other products via plugins.

Over the next few posts I will be discussing some of the design decisions I made for this MySQL plugin. New Relic has features that really help developers monitor and diagnose the application (I have been surprised how it has helped in debugging DB and OS problems directly from PHP code, for example). However, knowing the server resources and the database usage is often critical to seeing the whole picture, and with the new plugins New Relic gives developers, system administrators and database administrators some well targeted insights. When building custom dashboards you can see CPU usage, database usage, your web application volume, page load time and other metrics on one page.

The MySQL plugin has two prerequisites: a MySQL server running 5.0 or better, and a Java JRE 1.6 or better. The plugin can run directly on each MySQL server/instance (and therefore needs the JRE there), or a single server can be configured to collect statistics from all MySQL instances and report them to New Relic. There is nothing that restricts this plugin to particular MySQL variants/forks; in fact I specifically designed the plugin to be forward compatible with new versions and status variables, for example, and to support custom recording of any metrics (more on that later).

This is the first release of the New Relic Platform and the MySQL plugin, so I expect a lot of refinements, improvements and suggestions as we move forward. As an integral part of developing the MySQL plugin and using the New Relic Java SDK, a number of roadmap items to better serve MySQL and other products will be coming in future releases. The beta version of the MySQL plugin has been running on production MySQL servers for several months now and has been working well.

New Relic provides two ways to display data: graphs and tables. There is a handy information option in Server monitoring that is not presently available for plugins. Graphs work best with multiple data points and constantly changing data, recording averages. Looking at the SQL commands, for example, it is great to see the total breakdown, whereas monitoring MySQL replication lag (a single metric that hopefully is generally zero) can appear a little bland. A handy trick is to click on a legend entry; this toggles the displayed value and can really help when one value in a graph dominates the scale.

At present the plugin has 4 tabs of display:

  • Overview provides a high level view of total reads v total writes, database connections, network communications and a table of key utilizations (which I will discuss more in another post)
  • Query Analysis shows details of SQL commands, temporary (memory/disk) tables, slow queries, query cache usage, select query types, sort types and table locking.
  • InnoDB Metrics include buffer pool operations, a page breakdown, row operations, log writes, log and data fsyncs, checkpoint age, history list, internal threads and mutexes.
  • Replication shows lag, relay log volume, I/O thread lag, slave errors and master binary log volume (when on a master).

The plugin is written to be extensible via JSON configuration. Those wishing to monitor different variables, or custom metrics from storage engine vendors like Tokutek, can easily define them as either a key/value set or a single row of metrics. However, the initial version of the dashboards does not allow customers to modify the supplied dashboards. Requests are welcome for me to expand the current global dashboards.

In the News

Reasons to use MySQL 5.5 Presentation

I recently gave a presentation at the New York Effective MySQL Meetup on the new features of MySQL 5.5 and some of the compelling reasons to upgrade. There are also a number of new MySQL variables that can have a dramatic effect on performance in a highly transactional environment; innodb_buffer_pool_instances and innodb_purge_threads are just two to consider.
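
Neither of these variables is dynamic in 5.5, so both are set in the configuration file and require a restart; a minimal my.cnf sketch, with sizes that are assumptions to be tuned for your hardware and workload:

[mysqld]
# Split the InnoDB buffer pool into several instances to reduce
# internal mutex contention (values below are illustrative only)
innodb_buffer_pool_size      = 8G
innodb_buffer_pool_instances = 4
# Move purge work out of the InnoDB master thread (5.5 supports 0 or 1)
innodb_purge_threads         = 1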

For more information on all the new variables, status variables, reserved words and benchmarks of new features you can Download Presentation Slides.

Extra: Using Index

Many people consider this information in the MySQL Query Execution Plan (QEP) to indicate that the referenced table is using an index. It actually means that ONLY the index is used. For larger and more frequent queries this can provide a significant boost.
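
A minimal sketch to illustrate (the table and index here are hypothetical): when every column the query references is contained in the index, no data page lookups are required and EXPLAIN reports ‘Using index’.

CREATE TABLE orders (
  id      INT UNSIGNED NOT NULL AUTO_INCREMENT,
  user_id INT UNSIGNED NOT NULL,
  created DATE NOT NULL,
  total   DECIMAL(10,2) NOT NULL,
  PRIMARY KEY (id),
  INDEX (user_id, created)
) ENGINE=InnoDB;

-- Both referenced columns are in the (user_id, created) index,
-- so the Extra column of the QEP shows 'Using index'
EXPLAIN SELECT user_id, created FROM orders WHERE user_id = 42;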

In a recent example, adding an index dropped a query from 190ms to 6ms; adding a better index then dropped that 6ms query to 1.2ms. When executed hundreds or thousands of times per second, these millisecond improvements can have a huge benefit in greater scalability. While people often tune slow running queries, in a well tuned system it is shaving milliseconds off queries, in this example making a 6ms query 80% better, that is the far greater improvement.

You can get a detailed explanation of how to identify, create and verify covering indexes from my Percona Live presentation Improving performance with better indexes where I also include another great 10 table join example, reducing a query running 20,000+ times per second from 175ms to 10ms.

Part 2 – Simple lessons in improving scalability

Given the popular response to my first lesson in improving scalability, where I detailed simple ways to eliminate unnecessary SQL, let me share another common MySQL scalability bottleneck that can be instantly overcome.

Analyzing the writes that occur on a system can expose obvious potential bottlenecks. The MySQL Binary Log is a wealth of information that can be mined. Simple DML Counts per table can be achieved by a single line command.

Let’s look at the following example output of a production system:

mysqlbinlog /path/to/mysql-bin.000999 |
   grep -i -e "^update" -e "^insert" -e "^delete" -e "^replace" -e "^alter" |
   cut -c1-100 | tr '[A-Z]' '[a-z]' |
   sed -e "s/\t/ /g;s/\`//g;s/(.*$//;s/ set .*$//;s/ as .*$//" | sed -e "s/ where .*$//" |
   sort | uniq -c | sort -nr

Of the approx 100,000 DML statements we get the following breakdown.

  55283 update sessions
  25204 insert into sessions
  12610 update items
  10536 insert into item_categories
   7532 update users
   5168 delete from item_categories
 

More than 50% of the statements written to the binary log, and therefore replicated, are UPDATEs of the sessions table. A further 25% are INSERTs into the same table. That is 75% of DML statements in just the two most frequent statements.

What is disappointing is that these statements do not belong in MySQL. This is an example of MySQL being used for a purpose where other products are better suited. While there is an argument for using MySQL to store this data, the impact on MySQL memory management, backup/recovery, and slave replication throughput and lag can significantly affect the scalability of your important MySQL data.

What is observed here is session management, for which a key-value store product should be used instead. In most circumstances it is likely this information does not even need to be persisted. The obvious replacement here is memcached. If you do wish to persist this data there is an ever increasing list of products including Redis, Tokyo Cabinet/Kyoto Cabinet, Membrain, Voldemort etc. that are specifically designed as key-value stores. Even the popular NoSQL MongoDB can easily be substituted as a key-value session manager, with the added benefit of being a more fully functional product for other purposes.
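
As an illustrative sketch only (assuming PHP with the memcache extension; your framework may provide its own session handler), moving PHP session storage out of MySQL can be as little as two php.ini settings:

; php.ini - store PHP sessions in memcached instead of the database
session.save_handler = memcache
session.save_path    = "tcp://127.0.0.1:11211"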

This is a common mistake when using a framework such as Ruby on Rails (RoR) or PHP CodeIgniter, among many others.

Simple lessons in improving scalability

It can be very easy to improve the scalability of a MySQL server with a few simple rules. Here is one of them.

“The most efficient way to improve an SQL statement is to eliminate it”

There are numerous ways to eliminate SQL statements, however before I give a classic example that I’ve observed again with a client, let me explain the basic premise of why this improves scalability.

The MySQL kernel can only physically process a certain number of SQL statements in a given time period (e.g. per second). Regardless of the type of machine you have, there is a physical limit. If you eliminate SQL statements that are unwarranted and unnecessary, you automatically enable more important SQL statements to run. There are numerous other downstream effects, however this is the simple math: to run more SQL, reduce the number of SQL statements you need to run.

Here is the output of a small sample of analyzed TCP/IP packets via mk-query-digest.

# Rank Query ID           Response time Calls R/Call Apdx V/M   Item
# ==== ================== ============= ===== ====== ==== ===== ==========
#    1 0xD631CB919867DB50  0.0436 47.3%    92 0.0005 1.00  0.00 SELECT TTDOD
#    2 0x04FE01C5B31FD305  0.0258 27.9%   329 0.0001 1.00  0.00 ADMIN PING
#    3 0x93321857BCD8E771  0.0229 24.8%    36 0.0006 1.00  0.00 SELECT TTD

There are many problems here, including the Row At a Time (RAT) nature of the SQL and the excessive pings, however that’s a topic for another time. Let us look at the first statement.

SELECT `Date` FROM TTDOD WHERE ID = 9999;

That seems a simple enough query however let’s look at the table.

mysql> select count(*) from TTDOD;
+----------+
| count(*) |
+----------+
|        0 |
+----------+

In this case, the query will NEVER return any rows because the table is currently empty. Sure, this may change in the future, however as this is more of an exception-processing situation, the simple act of tracking the knowledge that this table rarely has any rows, and building a way to inform the application of this, can completely eliminate the need for this query to ever be executed.

FYI, the above sample is from less than 2 seconds of sampling. Removing the first query reduces the number of queries executed in this time slice by 20%. Regardless of whether this is typical load or load during a batch job, the principle stands. We have not even started to look at what we can do with the next query.
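
As an aside, a packet sample like the one above can be gathered with tcpdump and fed to mk-query-digest; a minimal sketch, where the interface, packet count and file name are assumptions:

# Capture a short sample of MySQL traffic as text, then analyze it
tcpdump -s 65535 -x -nn -q -tttt -i eth0 -c 1000 port 3306 > mysql.tcp.txt
mk-query-digest --type tcpdump mysql.tcp.txt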

Five reasons to upgrade to MySQL 5.5

Updated Nov 2011. Check out my Reasons to use MySQL 5.5 Presentation for more in-depth information about installing/configuring and using MySQL 5.5

I have been looking forward to the general availability (GA) release of MySQL 5.5 since it was publicly announced in September that we would see it in 2010. While I already have a production client running 5.5.7rc, the badge of general availability is a great way to promote why environments should consider moving to MySQL 5.5. Here is my quick short list of why I’d promote moving to MySQL 5.5.

1. Improved integration

The first significant improvement is that InnoDB is now firmly the default, built-in storage engine. The InnoDB plugin 1.1.x is now the builtin version of the engine, not a plugin version, and the 1.1.x version has continued improvements over the 1.0.x version available as an included but not enabled plugin in current MySQL 5.1.x versions. Removing the complexity for end users over the choice of InnoDB and the necessary configuration changes is a great simplification. The new file format introduced in the InnoDB plugin 1.0.x (known as Barracuda), and the additional new row formats such as dynamic and compressed, should also be considered for improved I/O performance. Greater education and customer comparison results are still needed here.
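
To make the file format discussion concrete, here is a hedged sketch of enabling Barracuda and creating a compressed table (the table itself is hypothetical, and any compression benefit depends entirely on your data):

SET GLOBAL innodb_file_format = 'Barracuda';
SET GLOBAL innodb_file_per_table = 1;

CREATE TABLE audit_log (
  id      BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
  message TEXT NOT NULL,
  created DATETIME NOT NULL,
  PRIMARY KEY (id)
) ENGINE=InnoDB ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;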

2. Improved performance

“Improved scalability on multi-core CPUs” is the tag line Oracle touts, and while the marketing graphs do show improved performance with huge numbers of threads, it is the continued work on supporting many cores that is the key to having MySQL perform on newer hardware. MySQL was always known as software that runs on low cost commodity hardware, however it also needs to work well on new hardware architectures.

InnoDB has many improvements; you can see a long list at 13.7.7. Performance and Scalability Enhancements. I have seen an upgrade from 5.0 to 5.5 significantly reduce query contention. Multiple buffer pool instances, multiple rollback segments, additional threads and functional mutexes have all helped reduce mutex contention at high throughput.

3. Improved recoverability

While I have yet to use semisynchronous replication in a production situation, I see this as continued progress in improving the recoverability and resilience of MySQL. This work was first seen in the Google patch for MySQL 5.0 several years ago. The lack of a single unbreakable backup strategy is the number 1 overlooked feature in MySQL as reported by customer advisory groups in the past. The move to making InnoDB the default storage engine is another small step towards improved recoverability in a default MySQL setting.

4. Improved instrumentation

A lack of detailed instrumentation rates number 2 on my list of areas of improvement needed in MySQL. With 5.5 there are continued improvements. While some of these are very technical for high end analysis (e.g. the performance_schema), simple improvements in common instrumentation can help you identify triggers for poor performance. There is a lot more information available in the INFORMATION_SCHEMA. Here is a great example via SHOW PROCESSLIST that shows a 4 second lock in the Query Cache (QC), something that until now was not easy to observe inside the Query Cache black box.

... | Query   |    4 | Waiting for query cache lock | REPLACE INTO statistics_w SET IP = ...

Identification is the first necessary step to continued improvement.

My work with the PERFORMANCE_SCHEMA has been limited to experimenting, such as described in Tracking IO with performance_schema, however a recent discussion with Mark Leith highlighted that with the performance_schema it will be possible to determine, for example, the exact size of disk based temporary tables created by the MySQL kernel.

5. Improved development methodology

We have heard a lot of noise from a vocal few that Oracle will not do right by MySQL: the product, the community, the OEM providers. While I believe it is too early to tell the long term plans for MySQL under Oracle, I believe that actions speak louder than words. Oracle has made a number of commitments publicly, Oracle is investing in resources to improve the development of the product, and it’s clear the targeting of the Microsoft OS highlights that MySQL has a future as a SQL Server competitor. Oracle brings a great wealth of experience, resources and processes, and hopefully that will be invested in MySQL.

There is a good list of What Is New in MySQL 5.5 as part of the current documentation; features I’ve not mentioned include SIGNAL/RESIGNAL, the change in the default storage engine, partitioning additions and pluggable MySQL authentication capability.

The MySQL documentation team also does a great job. An example of new work is a great comparison of Options and Variables from all 5.x versions shown on a single page for comparison.

The way forward with MySQL

Oracle, as steward of MySQL, is here to stay. While people still grumble about it, everybody has to get over it. For the end consumer little will change; for clients that use MySQL Enterprise Support and are affected by the changes in policy, many other viable support options exist. What is unknown, and my single point of issue with the EU 10 point statement, is what will happen with OEM providers. This affects the financial viability of a number of providers, and while I’m not directly involved, I do not want to see this good work lost to the MySQL community.

Points of uncertainty include what features will be developed next, whether they will be pay-only features, whether they will be available to enterprise customers and not community users, what the frequency of community releases will be, etc. These still need good PR from Oracle for the MySQL community. For those concerned, Oracle has made continued investment in acquired RDBMS products including BerkeleyDB, InnoDB and Rdb. Oracle Rdb, for example, was acquired in 1994 and is still being actively developed and supported. As long as MySQL is profitable, we will see continued releases.

About the Author

Ronald Bradford is a well respected expert in the MySQL community. Ronald is internationally recognized as an Oracle ACE Director in MySQL, the highest industry recognition and is also a published author of Expert PHP and MySQL. He is available for MySQL consulting work now.

MySQL HandlerSocket under Ubuntu

Starting with the great work of Yoshinori-san, Using MySQL as a NoSQL – A story for exceeding 750,000 qps on a commodity server, and Golan Zakai, who posted Installing Dena’s HandlerSocket NoSQL plugin for MySQL on CentOS, I configured and tested HandlerSocket under Ubuntu 10.04 64bit.

NOTE: This machine already compiles MySQL and Drizzle. You should refer to appropriate source compile instructions for necessary dependencies.

# Get Software
cd /some/path
export DIR=`pwd`
wget http://download.github.com/ahiguti-HandlerSocket-Plugin-for-MySQL-1.0.6-10-gd032ec0.tar.gz
wget http://mysql.mirror.iweb.ca/Downloads/MySQL-5.1/mysql-5.1.52.tar.gz
wget  http://mysql.mirror.iweb.ca/Downloads/MySQL-5.1/mysql-5.1.52-linux-x86_64-glibc23.tar.gz

# Install MySQL Binaries
tar xvfz mysql-5.1.52-linux-x86_64-glibc23.tar.gz
ln -s mysql-5.1.52-linux-x86_64-glibc23 mysql5152
rm -rf mysql5152/mysql-test
rm -rf mysql5152/sql-bench
cd mysql5152
scripts/mysql_install_db
bin/mysqld_safe &

# Install MySQL Source
tar xvfz mysql-5.1.52.tar.gz

# Install HandlerSocket
tar xvfz ahiguti-HandlerSocket-Plugin-for-MySQL-1.0.6-10-gd032ec0.tar.gz
cd ahiguti-HandlerSocket-Plugin-for-MySQL-d032ec0/
./autogen.sh
./configure --with-mysql-source=${DIR}/mysql-5.1.52 --with-mysql-bindir=${DIR}/mysql5152/bin
make
sudo make install

# libtool: install: /usr/bin/install -c .libs/handlersocket.lai /home/rbradfor/projects/handlersocket/mysql5152/lib/plugin/handlersocket.la

# Install the Perl dependency
cd perl-Net-HandlerSocket
perl Makefile.PL
Checking if your kit is complete...
Warning: the following files are missing in your kit:
	lib/HandlerSocket.pm
Please inform the author.
Writing Makefile for Net::HandlerSocket
# For some reason I had to run this twice; after the error I checked, saw nothing wrong, ran it a second time and it worked ???
perl Makefile.PL
Writing Makefile for Net::HandlerSocket
make
sudo make install

Files found in blib/arch: installing files in blib/lib into architecture dependent library tree
Installing /usr/local/lib/perl/5.10.1/auto/Net/HandlerSocket/HandlerSocket.so
Installing /usr/local/lib/perl/5.10.1/auto/Net/HandlerSocket/HandlerSocket.bs
Installing /usr/local/lib/perl/5.10.1/Net/HandlerSocket.pm
Installing /usr/local/lib/perl/5.10.1/Net/HandlerSocket/Pool.pm
Installing /usr/local/man/man3/Net::HandlerSocket.3pm
Appending installation info to /usr/local/lib/perl/5.10.1/perllocal.pod

cp lib/Net/HandlerSocket.pm blib/lib/Net/HandlerSocket.pm
cp lib/Net/HandlerSocket/Pool.pm blib/lib/Net/HandlerSocket/Pool.pm
/usr/bin/perl /usr/share/perl/5.10/ExtUtils/xsubpp  -typemap /usr/share/perl/5.10/ExtUtils/typemap  HandlerSocket.xs > HandlerSocket.xsc && mv HandlerSocket.xsc HandlerSocket.c
Please specify prototyping behavior for HandlerSocket.xs (see perlxs manual)
g++ -fPIC -c  -I. -I../libhsclient -D_REENTRANT -D_GNU_SOURCE -DDEBIAN -fno-strict-aliasing -pipe -fstack-protector -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -g -O3 -Wall -Wno-unused   -DVERSION="0.01" -DXS_VERSION="0.01" -fPIC "-I/usr/lib/perl/5.10/CORE"   HandlerSocket.c
Running Mkbootstrap for Net::HandlerSocket ()
chmod 644 HandlerSocket.bs
rm -f blib/arch/auto/Net/HandlerSocket/HandlerSocket.so
LD_RUN_PATH="/usr/local/lib" g++ -fPIC  -shared -O2 -g -L/usr/local/lib -fstack-protector HandlerSocket.o  -o blib/arch/auto/Net/HandlerSocket/HandlerSocket.so 	
	   -lhsclient  	

chmod 755 blib/arch/auto/Net/HandlerSocket/HandlerSocket.so
cp HandlerSocket.bs blib/arch/auto/Net/HandlerSocket/HandlerSocket.bs
chmod 644 blib/arch/auto/Net/HandlerSocket/HandlerSocket.bs
Manifying blib/man3/Net::HandlerSocket.3pm

Now all built, it’s as simple as the following to install:

${DIR}/mysql5152/bin/mysql -uroot
mysql> INSTALL PLUGIN HandlerSocket SONAME 'handlersocket.so';
mysql> SHOW PLUGINS;
+---------------+----------+----------------+------------------+---------+
| Name          | Status   | Type           | Library          | License |
+---------------+----------+----------------+------------------+---------+
| binlog        | ACTIVE   | STORAGE ENGINE | NULL             | GPL     |
| partition     | ACTIVE   | STORAGE ENGINE | NULL             | GPL     |
| ARCHIVE       | ACTIVE   | STORAGE ENGINE | NULL             | GPL     |
| BLACKHOLE     | ACTIVE   | STORAGE ENGINE | NULL             | GPL     |
| CSV           | ACTIVE   | STORAGE ENGINE | NULL             | GPL     |
| FEDERATED     | DISABLED | STORAGE ENGINE | NULL             | GPL     |
| MEMORY        | ACTIVE   | STORAGE ENGINE | NULL             | GPL     |
| InnoDB        | ACTIVE   | STORAGE ENGINE | NULL             | GPL     |
| MyISAM        | ACTIVE   | STORAGE ENGINE | NULL             | GPL     |
| MRG_MYISAM    | ACTIVE   | STORAGE ENGINE | NULL             | GPL     |
| handlersocket | ACTIVE   | DAEMON         | handlersocket.so | BSD     |
+---------------+----------+----------------+------------------+---------+
11 rows in set (0.00 sec)

However, I found it was not actually running, even after restarting MySQL. You need the following configuration:

cd ${DIR}/mysql5152
echo "[mysqld]
plugin-load=handlersocket.so
loose_handlersocket_port = 9998 # the port number to bind to (for read requests)
loose_handlersocket_port_wr = 9999 # the port number to bind to (for write requests)
loose_handlersocket_threads = 16 # the number of worker threads (for read requests)
loose_handlersocket_threads_wr = 1 # the number of worker threads (for write requests)" >> my.cnf

After restarting you get the following confirmation.

mysql>SHOW PROCESSLIST;
+----+-------------+-----------------+---------------+---------+------+-------------------------------------------+------------------+
| Id | User        | Host            | db            | Command | Time | State                                     | Info             |
+----+-------------+-----------------+---------------+---------+------+-------------------------------------------+------------------+
|  1 | system user | connecting host | NULL          | Connect | NULL | handlersocket: mode=rd, 0 conns, 0 active | NULL             |
|  2 | system user | connecting host | NULL          | Connect | NULL | handlersocket: mode=rd, 0 conns, 0 active | NULL             |
|  3 | system user | connecting host | NULL          | Connect | NULL | handlersocket: mode=rd, 0 conns, 0 active | NULL             |
|  4 | system user | connecting host | NULL          | Connect | NULL | handlersocket: mode=rd, 0 conns, 0 active | NULL             |
|  5 | system user | connecting host | NULL          | Connect | NULL | handlersocket: mode=rd, 0 conns, 0 active | NULL             |
|  6 | system user | connecting host | NULL          | Connect | NULL | handlersocket: mode=rd, 0 conns, 0 active | NULL             |
|  7 | system user | connecting host | NULL          | Connect | NULL | handlersocket: mode=rd, 0 conns, 0 active | NULL             |
|  8 | system user | connecting host | NULL          | Connect | NULL | handlersocket: mode=rd, 0 conns, 0 active | NULL             |
|  9 | system user | connecting host | NULL          | Connect | NULL | handlersocket: mode=rd, 0 conns, 0 active | NULL             |
| 10 | system user | connecting host | NULL          | Connect | NULL | handlersocket: mode=rd, 0 conns, 0 active | NULL             |
| 11 | system user | connecting host | NULL          | Connect | NULL | handlersocket: mode=rd, 0 conns, 0 active | NULL             |
| 12 | system user | connecting host | NULL          | Connect | NULL | handlersocket: mode=rd, 0 conns, 0 active | NULL             |
| 13 | system user | connecting host | NULL          | Connect | NULL | handlersocket: mode=rd, 0 conns, 0 active | NULL             |
| 14 | system user | connecting host | NULL          | Connect | NULL | handlersocket: mode=rd, 0 conns, 0 active | NULL             |
| 15 | system user | connecting host | NULL          | Connect | NULL | handlersocket: mode=rd, 0 conns, 0 active | NULL             |
| 16 | system user | connecting host | NULL          | Connect | NULL | handlersocket: mode=rd, 0 conns, 0 active | NULL             |
| 17 | system user | connecting host | handlersocket | Connect | NULL | handlersocket: mode=wr, 0 conns, 0 active | NULL             |
| 18 | root        | localhost       | NULL          | Query   |    0 | NULL                                      | SHOW PROCESSLIST |
+----+-------------+-----------------+---------------+---------+------+-------------------------------------------+------------------+
18 rows in set (0.00 sec)

My Test Data

USE test
CREATE TABLE user (
  user_id INT UNSIGNED NOT NULL,
  name VARCHAR(50) NOT NULL,
  email VARCHAR(255) NOT NULL,
  created TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY(user_id)
) ENGINE=InnoDB;

INSERT INTO user (user_id,name, email) VALUES
(100,'Mickey Mouse','[email protected]'),
(101,'Marvin the Martian','[email protected]'),
(102,'Spiderman','[email protected]');

SELECT user_id,name,email FROM user WHERE user_id=101;
+---------+--------------------+---------------------+
| user_id | name               | email               |
+---------+--------------------+---------------------+
|     101 | Marvin the Martian | [email protected] |
+---------+--------------------+---------------------+

My test retrieval program is basically unchanged from the original post.

$ cat retrieve.pl
#!/usr/bin/perl

use strict;
use warnings;
use Net::HandlerSocket;

#1. establishing a connection
my $args = { host => '10.0.0.6', port => 9998 };
my $hs = new Net::HandlerSocket($args);

#2. initializing an index so that we can use in main logics.
 # MySQL tables will be opened here (if not opened)
my $res = $hs->open_index(0, 'test', 'user', 'PRIMARY',
    'name,email,created');
die $hs->get_error() if $res != 0;

#3. main logic
 #fetching rows by id
 #execute_single (index id, cond, cond value, max rows, offset)
$res = $hs->execute_single(0, '=', [ '101' ], 1, 0);
die $hs->get_error() if $res->[0] != 0;
shift(@$res);
for (my $row = 0; $row < 1; ++$row) {
  my $user_name= $res->[$row + 0];
  my $user_email= $res->[$row + 1];
  my $created= $res->[$row + 2];
  print "$user_namet$user_emailt$createdn";
}

#4. closing the connection
$hs->close();

If your HandlerSocket is not running you will see:

$ perl retrieve.pl
FATAL_EXIT: getaddrinfo failed: ip_to_remote_host:9998
$ telnet 10.0.0.6 9998
Trying 10.0.0.6...
telnet: Unable to connect to remote host: Connection refused

When working

$ telnet 10.0.0.6 9998
Trying 10.0.0.6...
Connected to 10.0.0.6.
Escape character is '^]'.
P	0	test	user	PRIMARY	name,email,created
0	1
0	=	1	101
0	3	Marvin the Martian	[email protected]	2010-11-05 13:11:34


$ perl retrieve.pl
Marvin the Martian	[email protected]	2010-11-05 13:11:34

Again, great work by Yoshinori-san for making this code production deployable. Now to run some real benchmarks.

Improving MySQL Insert throughput

There are three simple practices that can improve general INSERT throughput. Each requires consideration of how the data is collected and what data loss is acceptable in a disaster.

General inserting of rows can be performed as single INSERTs, for example:

INSERT INTO table (col1, col2, col3) VALUES (?, ?, ?);
INSERT INTO table (col1, col2, col3) VALUES (?, ?, ?);
INSERT INTO table (col1, col2, col3) VALUES (?, ?, ?);

While this works, there are two scalability limitations. First is the network overhead of the round trip for each SQL statement; second is the synchronous nature, that is, your code can not continue until your INSERT is successfully completed.

The first improvement is to use MySQL’s multi-value capability with INSERT. That is, you can insert multiple rows with a single INSERT statement. For example:

INSERT INTO table (col1, col2, col3) VALUES (?, ?, ?),  (?, ?, ?), (?, ?, ?);

Depending on how you collect the information to be inserted, you may be able to implement this easily. The benefit, as previously mentioned in The value of multi insert values, is an 88% improvement in performance.

One system variable to be aware of is max_allowed_packet. You may have to adjust this for larger INSERT statements.
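
For example, a quick check and adjustment (the 64MB figure is an assumption; size it to comfortably hold your largest batched INSERT, and note existing connections keep their previous session value):

mysql> SHOW GLOBAL VARIABLES LIKE 'max_allowed_packet';
mysql> SET GLOBAL max_allowed_packet = 64 * 1024 * 1024;

To persist the change across restarts, also set max_allowed_packet in the [mysqld] section of your configuration file.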

Next is the ability to INSERT data based on information already in another table. You can also leverage another storage engine, for example MEMORY, to batch up data to be inserted via this approach.

INSERT INTO table (col1, col2, col3) SELECT col1, col2, col3 FROM anothertable

The third option is to batch load your data from a flat file. Depending on how you source the information you are inserting, this may also provide a significant improvement in bulk insert throughput.

LOAD DATA [CONCURRENT] INFILE 'file'
INTO TABLE table (col1, col2, col3);

On a closing note, the choice of storage engine can also have a significant effect on INSERT throughput. MySQL also offers other non-ANSI options including DELAYED, IGNORE and LOW_PRIORITY; these should definitely be avoided.

Successful MySQL Scalability Presentation

Last night I was the invited guest at the SF MySQL Meetup. In my presentation “Successful MySQL Scalability” I talked about a set of principles to ensure appropriate system architecture, data availability and best practices to build an ideal solution for your business. The presentation was also live streamed and is available online.

Speaking at Surge Scalability 2010 – Baltimore, MD

I will be joining a great list of quality speakers including John Allspaw, Theo Schlossnagle, Rasmus Lerdorf and Tom Cook at Surge 2010 in Baltimore, Maryland on Thursday September 30 and Friday October 1, 2010.

My presentation, “The most common MySQL scalability mistakes, and how to avoid them”, will include discussion of various experiences observed in the field as a MySQL Consultant and MySQL Performance Tuning expert.

Abstract:

The most common mistakes are easy to avoid, however many startups continue to fall prey, with impacts including large re-design costs, delays in new feature releases, lower staff productivity and less than ideal ROI. All growing and successful sites need to achieve higher Availability, seamless Scalability and proven Resilience. Know the right MySQL environments to provide a suitable architecture and application design to support these essential needs.

Overview:

Some details of the presentation would include:

  • The different types of accessible data  (e.g. R/W, R, none)
  • What limits MySQL availability (e.g. software upgrades, blocking statements, locking etc)
  • The three components of scalability – Read Scalability/Write Scalability/Caching
  • Design practices for increasing scalability and not physical resources
  • Disaster is inevitable. Having a tested and functional failover strategy
  • When other products are better (e.g. Static files, Session management via Key/Value store)
  • What a lack of accurate monitoring causes
  • What a lack of breakability testing causes
  • What does “No Downtime” mean to your organization?
  • Implementing a successful “failed whale” approach with preemptive analysis
  • Identifying when MySQL is not your bottleneck

Optimizing SQL Performance – The Art of Elimination

The most efficient performance optimization of an SQL statement is to eliminate it. Cary Millsap’s recent Kaleidoscope presentation again highlighted that improving performance is a function of code path. Removing code will improve performance.

You may think it hard to eliminate SQL, however when you know every SQL statement that is executed in your code path, obvious improvements may be possible. Sometimes simply observing the sequence in which SQL is executed can lead to great gains. Let me provide some actual client examples that were discovered by using the MySQL General Log.
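
Since MySQL 5.1 the general log can be toggled at runtime, which makes short, targeted captures like these practical; a minimal sketch (the file path is an assumption, and because every statement is logged, keep the window brief on a busy server):

mysql> SET GLOBAL general_log_file = '/tmp/capture.log';
mysql> SET GLOBAL general_log = 'ON';
-- exercise the page or code path being analyzed, then turn it off
mysql> SET GLOBAL general_log = 'OFF';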

Example 1

5 Query   SELECT *  FROM `artist`
5 Query   SELECT *  FROM `artist`
5 Query   SELECT *  FROM `artist`
5 Query   SELECT *  FROM `artist`
5 Query   SELECT *  FROM `artist`
5 Query   SELECT *  FROM `artist` WHERE (ArtistID = 196 )
5 Query   SELECT *  FROM `artist` WHERE (ArtistID = 2188 )
5 Query   SELECT *  FROM `artist`
5 Query   SELECT *  FROM `artist`
5 Query   SELECT *  FROM `artist`

In this example, all of the above was executed for a single page load. Not only did I find a bug where full-table scans occurred rather than qualified lookups, there were also many repeated and unnecessary occurrences.

Example 2

SELECT option_name, option_value FROM wp_options WHERE autoload = 'yes'
SELECT option_value FROM wp_options WHERE option_name = 'aiosp_title_format' LIMIT 1
SELECT option_value FROM wp_options WHERE option_name = 'ec3_show_only_even' LIMIT 1
SELECT option_value FROM wp_options WHERE option_name = 'ec3_num_months' LIMIT 1
SELECT option_value FROM wp_options WHERE option_name = 'ec3_day_length' LIMIT 1
SELECT option_value FROM wp_options WHERE option_name = 'ec3_hide_event_box' LIMIT 1
SELECT option_value FROM wp_options WHERE option_name = 'ec3_advanced' LIMIT 1
SELECT option_value FROM wp_options WHERE option_name = 'ec3_navigation' LIMIT 1
SELECT option_value FROM wp_options WHERE option_name = 'ec3_disable_popups' LIMIT 1
SELECT option_value FROM wp_options WHERE option_name = 'sidebars_widgets' LIMIT 1

This is a stock WordPress installation and highlights classic Row At a Time (RAT) processing.

Example 3

SELECT * FROM activities_theme WHERE theme_parent_id=0
SELECT * FROM activities_theme WHERE theme_parent_id=1
SELECT * FROM activities_theme WHERE theme_parent_id=2
SELECT * FROM activities_theme WHERE theme_parent_id=11
SELECT * FROM activities_theme WHERE theme_parent_id=16

In this client example, again RAT processing, I provided a code improvement to run these multiple queries in a single statement, otherwise known as Chunk At a Time (CAT) processing. It’s not rocket science, however eliminating the network component of several SQL statements can greatly reduce page load time.

SELECT *
FROM   activities_theme
WHERE  theme_parent_id in  (0,1,2,11,16) 

Example 4

The following represents one of the best improvements. During capture, the following query was executed 6,000 times over a 5 minute period. While you may think this is acceptable, the value passed was 0. The pages_id is an auto_increment column, which by definition never has a 0 value. In this instance, a simple boundary condition in the code would eliminate this query.

SELECT pages_id, pages_livestats_code, pages_title,
       pages_parent, pages_exhibid, pages_theme,
       pages_accession_num
FROM pages WHERE pages_id = 0

There are many tips to improving and optimizing SQL. This is the simplest and often overlooked starting point.

Related articles include: The RAT and the CAT and We need more CAT’s

Timing your SQL queries

When working interactively with the MySQL client, you receive feedback on the time the query took to complete, to a granularity of 10 ms.

Enabling profiling is a simple way to get more accurate timing of running queries. In the following example you can see the time the kernel took to run an EXPLAIN, the query, an ALTER, and then the repeated EXPLAIN and query.


mysql> set profiling=1;
mysql> EXPLAIN SELECT ...
mysql> SELECT ...
mysql> ALTER ...
mysql> show profiles;
+----------+------------+-------------------------
| Query_ID | Duration   | Query
+----------+------------+-------------------------
|        1 | 0.00036500 | EXPLAIN SELECT sbvi.id a
|        2 | 0.00432700 | SELECT sbvi.id as sbvi_i
|        3 | 2.83206100 | alter table sbvi drop in
|        4 | 0.00047500 | explain SELECT sbvi.id a
|        5 | 0.00367100 | SELECT sbvi.id as sbvi_i
+----------+------------+-------------------------
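
Beyond the summary, SHOW PROFILE breaks an individual statement down by execution stage; a quick sketch using the Query_ID values listed above:

mysql> SHOW PROFILE FOR QUERY 3;                -- stage-by-stage timing of the ALTER
mysql> SHOW PROFILE CPU, BLOCK IO FOR QUERY 2;  -- add CPU and block I/O detail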

More information at Show Profiles documentation page.

Why is my database slow?

Not part of my Don’t Assume series, but when a client states “Why is my database slow?”, you need to determine if the database is indeed slow.

Some simple tools come to the rescue here; one is Firebug. If a web page takes 5 seconds to load, but the .htm file takes 400ms, and 100+ assets are being downloaded from one base URL, then is the database actually slow? Tuning the database will only improve the 400ms portion of the 5,000ms download.

There are some very simple tips here. MySQL is my domain expertise and I will not profess to improving the entire stack, however perception is everything to a user and you can often do a lot. Some simple points include:

  • Know about blocking assets in your <head> element, e.g. .js files.
  • Streamline .js, .css and images to what’s needed, e.g. don’t download a 100k image only to resize it to a thumbnail via style elements.
  • Sprites. Like many efficient but simple SQL statements, network overhead is your greatest expense.
  • Splitting images to a different domain.
  • Splitting images across multiple domains (e.g. only 3 via CNAME are needed). Hint: learn about the protocol’s per-host connection limits.
  • Cookieless domains for static assets
  • Lighter web container for static assets (e.g. nginx, lighttpd)
  • Know about caching, expires and etags
  • Stripping out http://www.domain.com from all your internal links (that one alone saved 12% of HTML page size for a client). You may ask whether that is really a big deal; well, in a high volume site, the sooner you can release the socket on your webserver, the sooner you can start serving a different request.

Like tuning a database, some things work better than others, some require more testing than others, and consultants never tell you all the tricks.

References

As with everything in tuning, do your research and also determine what works in your environment and what doesn’t. Two excellent resources to start with are Steve Souders and Best Practices for Speeding Up Your Web Site by Yahoo.

2010 MySQL Conference Presentations

I have uploaded my three presentations from the 2010 MySQL Users Conference in Santa Clara, California which was my 5th consecutive year appearing as a speaker.

A full history of my MySQL presentations can be found on the Presenting page.

Ineffective concatenated indexes

In MySQL, significant performance improvements can be achieved by the correct use of indexes. It is important to understand the different MySQL index implementations; one key improvement over indexes defined on single columns is to use multiple-column indexes, more commonly known as concatenated indexes.

However it’s also possible to define ineffective indexes. This example shows you how to identify a concatenated index that is ineffective.

CREATE TABLE example (
  id INT UNSIGNED NOT NULL AUTO_INCREMENT,
  a  INT UNSIGNED NOT NULL,
  b  INT UNSIGNED NOT NULL,
  c  INT UNSIGNED NOT NULL,
  d  INT UNSIGNED NOT NULL,
  x  VARCHAR(10),
  y  VARCHAR(10),
  z  VARCHAR(10),
PRIMARY KEY (id),
UNIQUE INDEX (a,b,c,d)
) ENGINE=InnoDB;

INSERT INTO example(a,b,c,d) VALUES
(1,0,1,1),(1,0,1,2), (1,0,2,3), (1,0,4,5),
(2,0,2,1),(2,0,2,2), (2,0,2,3), (2,0,2,5),
(3,0,2,1),(3,0,2,3), (3,0,3,3), (3,0,3,5);

And our sample query is

SELECT id,x,y,z
FROM   example
WHERE  a = 2
AND    c = 2
AND    d = 3;

The EXPLAIN plan is

+----+-------------+---------+------+---------------+------+---------+-------+------+-------------+
| id | select_type | table   | type | possible_keys | key  | key_len | ref   | rows | Extra       |
+----+-------------+---------+------+---------------+------+---------+-------+------+-------------+
|  1 | SIMPLE      | example | ref  | a             | a    | 4       | const |    4 | Using where |
+----+-------------+---------+------+---------------+------+---------+-------+------+-------------+

While we are using the index (see the key column), the full benefit of the index is not utilized (see the key_len column). The key_len of 4 indicates the number of bytes used, that is, only 1 INT column.

Let’s look at a second example.

SELECT id,x,y,z
FROM   example
WHERE  a = 2
AND    b = 0
AND    c = 2
AND    d = 3;

+----+-------------+---------+-------+---------------+------+---------+-------------------------+------+-------+
| id | select_type | table   | type  | possible_keys | key  | key_len | ref                     | rows | Extra |
+----+-------------+---------+-------+---------------+------+---------+-------------------------+------+-------+
|  1 | SIMPLE      | example | const | a             | a    | 16      | const,const,const,const |    1 |       |
+----+-------------+---------+-------+---------------+------+---------+-------------------------+------+-------+

In this example the index is used however the key_len is 16, that is 4 x 4 byte INT columns. This is ideal for this index.

In the above example, the client was using a common data structure however was not using one column; its values were all effectively 0. Certain queries were written with this knowledge and instead of specifying the column they elected to omit it, however the impact of this developer code change was increased load on the database and less efficient performance.

While this was easily addressed by a code change, an alternative could have been to change the index definition. It was not possible to remove the column due to legacy requirements.

ALTER TABLE example
DROP INDEX a,  ADD UNIQUE INDEX (a,c,d);
mysql> explain SELECT id,x,y,z FROM   example  WHERE  a = 2 AND    c = 2 AND    d = 3;
+----+-------------+---------+-------+---------------+------+---------+-------------------+------+-------+
| id | select_type | table   | type  | possible_keys | key  | key_len | ref               | rows | Extra |
+----+-------------+---------+-------+---------------+------+---------+-------------------+------+-------+
|  1 | SIMPLE      | example | const | a             | a    | 12      | const,const,const |    1 |       |
+----+-------------+---------+-------+---------------+------+---------+-------------------+------+-------+

A better solution may be to enforce better integrity on this business rule, that b only ever contains 0. MySQL does not support check constraints. The closest option would be to use an ENUM data type and a STRICT sql_mode, however ENUM is only a string object so this would break other code.

ALTER TABLE example MODIFY b ENUM('0') NOT NULL DEFAULT '0';

Not only is performance impacted in this situation; in other examples of ineffective indexes it is simply a waste of disk space to store additional columns in an index that are never used, and also a waste of memory, as the index pages held in memory contain information that is not used.

10x Performance Improvements in MySQL – A Case Study

The slides for my presentation at FOSDEM 2010 are now available online at slideshare. In this presentation I describe a successful client implementation with the result of 10x performance improvements. My presentation covers monitoring, reviewing and analyzing SQL, the art of indexes, improving SQL, storage engines and caching.

The end result was a page load improvement from a 700+ms load time to a consistent 60ms.

Speaking at MySQL UC 2010

My talk on 10x performance improvements – A case study has just been approved for the 2010 MySQL Conference. This will be my 5th straight year speaking at the MySQL conferences. For those in Europe wanting a sneak peek I am also speaking at FOSDEM 2010 in Brussels on Feb 7th where I’ll be giving an abridged version.

As an independent MySQL consultant, my work generally covers performance tuning and scalability and sometimes database architecture. Often however my work involves a review of a given problem and recommendations (immediate, short and long term). I’m rarely involved in the full implementation and generally do not see the full fruits of the proposed work.

Recently however I was able to work with a client, first in resolving critical performance issues and then in a review of the application architecture and MySQL environment, provide recommendations and also helping internal resources in the successful implementation. The result was a very successful engagement and an ideal case study on a strategy for tackling improving performance of an application using MySQL.

With an existing environment that included a MySQL master, 3 database slaves and 6 web servers this application provided the grounds of a well sized configuration and the need for greater availability and scalability.

The Call for Papers for the 2010 MySQL conference is still open. If you would like to share your experiences with MySQL submit your talk today.



SQL Analysis with MySQL Proxy – Part 2

As I outlined in Part 1, MySQL Proxy can be one tool for performing SQL analysis. The catch with any monitoring is that the act of monitoring will affect the results, in this case the performance. I don’t recommend enabling this level of detailed monitoring in production; these techniques are designed for development, testing, and possibly stress testing.

This leads to the question, how do I monitor SQL in production? The simple answer is sampling: take a representative sample of your production system. The implementation of this depends on many factors including your programming technology stack and your MySQL topology.

If, for example, you are using PHP, then running MySQL Proxy on a production system and executing firewall rules to redirect incoming 3306 traffic to 4040 for a period of time, e.g. 2 seconds, can provide a wealth of information as to what’s happening on the server right now. I have used this very successfully in production as an information gathering and analysis tool. It is also reasonably easy to configure and execute, and the impact of any failure is minimized due to the short sampling time.
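
As a hedged illustration of that redirect (the interface, addresses and chain depend on your environment; remote client traffic is handled in PREROUTING, while locally originated connections would need a similar rule in the OUTPUT chain):

# Redirect incoming MySQL traffic to MySQL Proxy listening on port 4040
iptables -t nat -A PREROUTING -p tcp --dport 3306 -j REDIRECT --to-ports 4040
# ... sample for a couple of seconds, then remove the rule ...
iptables -t nat -D PREROUTING -p tcp --dport 3306 -j REDIRECT --to-ports 4040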

If you run a distributed environment with MySQL slaves, or many application servers, you can also introduce sampling at these specific points, however as with scaling options, it is key to be able to handle and process the write load accurately.

Another performance improvement is to move the processing of gathered information in MySQL Proxy to a separate thread or process, removing this work from the execution path and therefore increasing performance. I’m interested to explore the option of passing this information off to memcached or gearman, and having MySQL Proxy simply capture the packet information and distribute the output. I have yet to see how memcached and/or gearman integrate with the Lua/C bindings. If anybody has experience or knowledge, I would be interested to know more.

It is interesting to note that Drizzle provides a plugin to send this level of logging information to gearman automatically.

Understanding Different MySQL Index Implementations

It is important to know and understand that while indexing columns in MySQL will generally improve performance, using the appropriate type of index can make a greater impact on performance.

There are four general index types to consider when creating an appropriate index to optimize SQL queries.

  • Column Index
  • Concatenated Index
  • Covering Index
  • Partial Index

For the purpose of this discussion I am excluding other specialized index types such as fulltext, spatial, and the hash indexes of the MEMORY engine.

Example Table

For the following examples, I will use this test table structure.

DROP TABLE IF EXISTS t1;
CREATE TABLE t1(
  id INT UNSIGNED NOT NULL AUTO_INCREMENT,
  user_name VARCHAR(20) NOT NULL,
  first_name VARCHAR(30) NOT NULL,
  last_name VARCHAR(30) NOT NULL,
  external_id INT UNSIGNED NOT NULL,
  country_id SMALLINT UNSIGNED NOT NULL,
  PRIMARY KEY(id)
) ENGINE=InnoDB;

Column Index

Quite simply, you have an index on a single column to help with performance. For example, if you were to query your data on external_id, without an index the system will need to read all data pages and then sequentially scan them to identify matching records. As there is no information known about how many rows satisfy the criteria, all data must be read. You can confirm this with the QEP.

SELECT id, user_name
FROM   t1
WHERE external_id = 1;

By adding an index to external_id, the query is optimized to only look at records that satisfy your criteria.

ALTER TABLE t1
  ADD INDEX (external_id);

Concatenated Index

I often see many single column indexes on tables when these are simply not needed and generally will not be used. This is easily identified when looking at the QEP and seeing 3, 4 or 5 possible keys.
You also need to consider in your MySQL index theory that, in general, only one index is used for each table in a MySQL query. There are a few exceptions, however these are rare.

A concatenated index uses multiple columns. Let’s look at a modified version of our query.

SELECT id, user_name
FROM   t1
WHERE external_id = 1
AND      country_id = 5;

The original external_id index will be used, however if we create a concatenated index on external_id and country_id we improve the query path.

ALTER TABLE t1
  DROP INDEX external_id,
  ADD INDEX (external_id, country_id);

What about an index on (country_id, external_id)? If your access to your data always includes these two columns, you can consider ordering the columns based on cardinality. However, if you have queries that search on external_id alone as well as on external_id and country_id, then an index on (country_id, external_id) will not serve the external_id-only queries.

Tip: In the QEP look at the key_len to determine how effective concatenated indexes are.

Covering Index

A covering index, as the name describes, covers all columns in a query. The benefit of a covering index is that the lookup of the various B-tree index pages satisfies the query, and no additional data page lookups are necessary.

If we revisit our earlier example, by replacing the external_id index with a concatenated index on external_id and user_name, we satisfy the following query entirely from the index.

ALTER TABLE t1
  DROP INDEX external_id,
  ADD INDEX (external_id, user_name);
SELECT id, user_name
FROM   t1
WHERE external_id = 1;

With MySQL, the QEP will indicate ‘Using index’ in the Extra column. This is not a reference to the index merely being used; it means the index alone satisfies all requirements of the query.

Partial Index

The final type is the partial index. This is a MySQL feature which allows you to specify a subset of a column for the index.

Let’s say we query data and allow pattern matching on last name.

SELECT id, first_name, last_name, user_name
FROM   t1
WHERE last_name like 'A%'

We should add an index to last_name to improve performance.

ALTER TABLE t1
  ADD INDEX (last_name);

Depending on the average length of data in last_name (you can use PROCEDURE ANALYSE as a quick tool to sample this), creating a partial index may greatly reduce the size of the index, and minimize the additional data lookups required.

ALTER TABLE t1
  DROP INDEX last_name,
  ADD INDEX (last_name(10));

In this example, you would want to investigate the size of the index, the improvement, and then the amount of additional reads necessary for sample queries. If your accessed data is generally hot, then the benefit of a smaller index will not be impacted by additional data seeks.
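
As referenced above, PROCEDURE ANALYSE gives a quick sample of the actual value lengths to guide the choice of prefix; a minimal sketch:

mysql> SELECT last_name FROM t1 PROCEDURE ANALYSE()\G
-- Review the reported minimum, maximum and average lengths to pick a sensible prefix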

Conclusion

As with any performance tuning, sufficient analysis and before and after testing is necessary for your specific environment.

Some future topics on indexes not discussed here include:

  • Using UNIQUE indexes
  • The impact of NULL columns and values on indexes
  • Eliminating filesort by using indexes
  • The effect of too many indexes
  • Index cardinality


A common question I am also asked is about function-based indexes. MySQL provides no means to use a scalar function applied to a column in an index.

Percona Performance Conference Talk

My final presentation during the 2009 MySQL Conference and Expo week was with the Percona Performance Conference on the topic of The Ideal Performance Architecture. My talk included discussions on Technology, Disk, Memory, Indexes, SQL and Data.

Updated 09/18/09: you can now see video of the event at Percona TV.