The MySQL documentation is not always right

Let me preface this post by saying I think the MySQL documentation is an excellent and highly accurate resource, and that the MySQL docs team do a great job. However, like software and people, documentation is not perfect.

As members of the MySQL community we can all contribute to improving it by reading the documentation and logging any issues as Documentation Bugs.

Some time ago, in a discussion with a friend and colleague, we were talking about historical defaults that were finally improved in MySQL 5.4. The specific discussion was about the new default innodb_buffer_pool_size, and while we both agreed it had increased significantly, one of us said 1GB, the other said 128MB. Who was right? Well, we both were, and we were both inaccurate, depending on the version.

Referencing the 5.4 Manual in InnoDB Startup Options and System Variables, the current value for Linux is 128M, but for Windows it’s 1GB.

However, I was confident I had been told in a presentation, perhaps even the keynote, that the value was 1GB. Firing up my server with the original version I used, 5.4.0 (which was not available on Windows), we find that the default for Linux was indeed 1GB at some time, i.e. in the first release.

mysql> show global variables like 'innodb_buffer_pool_size';
+-------------------------+------------+
| Variable_name           | Value      |
+-------------------------+------------+
| innodb_buffer_pool_size | 1073741824 |
+-------------------------+------------+
1 row in set (0.00 sec)

mysql> select 1073741824/1024/1024 as MB;
+---------------+
| MB            |
+---------------+
| 1024.00000000 |
+---------------+
1 row in set (0.00 sec)

mysql> show global variables like 'version%';
+-------------------------+------------------------------+
| Variable_name           | Value                        |
+-------------------------+------------------------------+
| version                 | 5.4.0-beta                   |
| version_comment         | MySQL Community Server (GPL) |
| version_compile_machine | x86_64                       |
| version_compile_os      | unknown-linux-gnu            |
+-------------------------+------------------------------+
4 rows in set (0.00 sec)

I’m not trying to nitpick here; I’m highlighting that MySQL is an evolving product with many different versions and architectures. It’s the job of the MySQL community to help make the documentation the best it can be for all readers. In this case I’ve not logged the issue, because 5.4 is a defunct product, however if you want an example of how I identified a problem, provided a test case, and saw my contribution reviewed, verified and implemented, check out Bug #51739 --core-file is not default TRUE (incorrect docs).

In conclusion, always read the documentation, but pay special attention to matching the documentation version against the version you are actually running. Defaults change between versions; innodb_thread_concurrency is a complex example, and I was once caught out with a large enterprise client by assuming a Connector/J option defaulted to true, which it did in 5.0.6, but in 5.0.5, the version the client was running, it was false.
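One way to verify a compiled-in default for the exact binary you are running, with no configuration files applied, is a quick sketch like this (paths assumed):

$ bin/mysqld --no-defaults --verbose --help | grep innodb_buffer_pool_size

Comparing that output with SHOW GLOBAL VARIABLES tells you whether your running value came from the binary itself or from a configuration file.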

An old saying, “trust but verify”, is a good motto to consider.

The Drizzle Census

One thing I have often wondered is just how many MySQL instances exist in the world, and what MySQL versions and architectures are in use. We hear of 50,000 Windows downloads per day, but this is misleading because MySQL is basically bundled with Linux by default or installed from various repositories, and Linux servers power many websites.

In Drizzle we have a proposed plan, the Drizzle Census. At the productive Drizzle Developers Day recently at the 2010 MySQL conference we sat down and created a blueprint, and a subsequent high level spec, of what we considered this optional plugin should do. We didn’t get as far as I would have liked with a code skeleton to at least gather and store a sample result, but the hope is that with the community we will in the near future.

Here is the list of information we decided was appropriate and valuable to collect anonymously.

  • Kernel Version/Architecture
  • CPU type
  • SID – HASH(processor_id,listener address,first listener port)
  • Drizzle Version
  • Drizzle Uptime
  • Drizzled process memory usage
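As a minimal sketch of how such a SID might be derived (the hash function, delimiter and shell variables here are my assumptions, not an agreed spec):

# hypothetical inputs for HASH(processor_id, listener address, first listener port)
$ echo -n "${processor_id}:${listener_address}:${listener_port}" | sha1sum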

Installation issues with MySQL 5.5.4 and resolveip

I was installing the latest MySQL 5.5.4 on a new machine and came across the following issues during installation, steps I generally perform on other versions without incident.

$ sudo su -
$ cd /opt
$ wget http://dev.mysql.com/get/Downloads/MySQL-5.5/mysql-5.5.4-m3-linux2.6-x86_64.tar.gz/from/http://mysql.mirrors.hoobly.com/
$ tar xvfz mysql-5.5.4-m3-linux2.6-x86_64.tar.gz
$ cd mysql-5.5.4-m3-linux2.6-x86_64
$ scripts/mysql_install_db
Neither host 'barney' nor 'localhost' could be looked up with /usr/bin/resolveip
Please configure the 'hostname' command to return a correct hostname.
If you want to solve this at a later stage, restart this script with the --force option

Perform some checks

$ /usr/bin/resolveip
-su: /usr/bin/resolveip: No such file or directory

$ hostname
barney

$ head -2 /etc/hosts
127.0.0.1	localhost
127.0.1.1	barney

I was surprised to find that my Ubuntu 10.04 Lucid Lynx box had a different IP for the hostname, rather than 127.0.0.1. That was a little weird. I changed this just for comfort.
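The change amounted to pointing the hostname at the loopback address; a sketch of the edit:

$ sudo sed -i -e "s/^127.0.1.1/127.0.0.1/" /etc/hosts
$ head -2 /etc/hosts
127.0.0.1	localhost
127.0.0.1	barney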

I was not familiar with this command so I tried to install it.

$ apt-get install resolveip
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Couldn't find package resolveip

$  apt-cache search resolveip

No luck there. Off to Google to search for resolveip and ubuntu, and you find the following online manual page, which surprisingly shows the following as the owner of the command.
COPYRIGHT
Copyright 2007-2008 MySQL AB, 2009 Sun Microsystems, Inc.

Thinking it may now actually be packaged with MySQL, we find it in the untarred directory.

$ bin/resolveip `hostname`
IP address of barney is 127.0.0.1

I’m still none the wiser regarding this command. I’ve decided to use the --force option for now and we will see!

2010 MySQL Conference Presentations

I have uploaded my three presentations from the 2010 MySQL Users Conference in Santa Clara, California which was my 5th consecutive year appearing as a speaker.

A full history of my MySQL presentations can be found on the Presenting page.

My acceptance with Oracle as ACE Director

I hinted last week of my acceptance with Oracle before the formal announcement this week at the MySQL Users Conference, not for a job, but as an Oracle ACE Director. In today’s State of the MySQL Community keynote by Kaj Arnö I was one of the first three MySQL nominees that are now part of this program.

What exactly is an ACE Director? Using the description from the Oracle website.

Oracle ACEs and Oracle ACE Directors are known for their strong credentials as Oracle community enthusiasts and advocates, with candidates nominated by anyone in the Oracle Technology and Applications communities. The baseline requirements are the same for both designations; however, Oracle ACE Directors work more closely and formally with Oracle in terms of their community activity.

What does this mean to me?

As a significant contributor to the community, I now have the opportunity to continue that work and to contribute to how Oracle interacts with, promotes and involves the MySQL community. As stewards, our role as Oracle ACE Directors is to be actively involved. I look forward to the challenge of helping shape and improve our State of the MySQL Community.

News and References
Welcome, Oracle ACE Directors for MySQL

State of the Dolphin – Opening keynote

Edward Screven, Chief Corporate Architect of Oracle, provided the opening keynote at the 2010 MySQL Users Conference.

Overall I was disappointed. The first half was more an Oracle sales pitch; we had some product announcements, we had some 5.5 performance buzz. While a few numbers and features were indeed great to hear, there was a clear lack of information about the MySQL ecosystem, including employees, alumni and various support services. I hope more is unveiled this week.

Some notes from the session:

  • Oracle’s Strategy covers storage, servers, virtual machines, operating system, database, middleware, applications
  • We build a complete technology stack that is “open” and “integrated” based on “open standards”
  • Products talk via open standards, with the intention that customers do not feel locked in to any technology
  • Examples include Apache, Java, Linux, Xen, Eclipse, and InnoDB
  • Unbreakable Linux now has over 4,500 customers

After the sales pitch we got down to more about MySQL.

What does MySQL mean to Oracle? It makes the Oracle solution more complete as a stack for customers.

What is the investment in MySQL?

  • Make MySQL a better MySQL
  • Develop, promote and support MySQL
  • MySQL community edition

Integration with Oracle Enterprise Manager, Oracle Secure Backup and Oracle Audit Vault infrastructure. This I expected and have blogged about, so I’m glad to see this commitment.

MySQL 5.5 is now in Alpha; some features are:

  • InnoDB will be default engine
  • Semi sync replication
  • Replication heartbeat
  • Signal
  • Performance Schema

MySQL 5.5 is planned to be faster, with InnoDB Performance Improvements & MySQL Performance Improvements.
MySQL 5.5 sysbench claims: reads 200% faster, writes 364% faster.

MySQL Workbench 5.2 announcement

  • SQL Development
  • Database Administration
  • Data Modelling

MySQL Cluster 7.1 GA announcement

  • Improved Administration
  • Higher Performance
  • Carrier Grade Availability & Performance

MySQL Enterprise Backup announcement

  • Online backup for InnoDB only
  • Formerly InnoDB Hot Backup, with additional features including incremental backups

MySQL Enterprise Monitor 2.2 Beta announcement

In closing, the statement was “MySQL lets Oracle be more complete at the database layer”. Is that good for the MySQL Community, or better for the Oracle revenue model?

My acceptance with Oracle

There have been a number of April fools jokes today so I thought I’d add my own to the list. While this sounds unexpected it’s actually no joke.

I just accepted a position with Oracle yesterday but I can’t say any more about the details until the MySQL users conference in a few weeks.

A special thanks to Lenz, Kaj & Giuseppe, who championed everything to make it all happen; I really didn’t have to do anything other than accept.

Using ROLLBACK with MyISAM

Using ROLLBACK with MyISAM is useless. A ROLLBACK command is used to undo any DML that occurs during a transaction (i.e. between START TRANSACTION and COMMIT). The MySQL default storage engine, MyISAM, does not support transactions.

It is easy with the SHOW GLOBAL STATUS command to see if your application code uses ROLLBACK. By taking two samples you can look at the delta over time; a sketch for capturing the samples follows the report below. The statpack utility is one product that provides a human friendly display of this delta. As seen below, the use of ROLLBACK, in combination with the read/write ratio and --skip-innodb in my.cnf, indicates unnecessary database work.

====================================================================================================
                    Variable    Delta/Percentage         Per Second              Total
====================================================================================================

                                         Statement Activity
====================================================================================================

                     SELECT:        1,135,589                 1,309.79              189,279,510 (49.62%)
                     INSERT:            6,171                     7.12                  431,987 (0.11%)
                     UPDATE:            4,800                     5.54                  334,620 (0.09%)
                     DELETE:              312                     0.36                   17,910 (0.00%)
                    REPLACE:                0                     0.00                        0 (0.00%)
          INSERT ... SELECT:              121                     0.14                    4,042 (0.00%)
         REPLACE ... SELECT:               11                     0.01                      109 (0.00%)
               Multi UPDATE:                0                     0.00                       30 (0.00%)
               Multi DELETE:                0                     0.00                       28 (0.00%)
                     COMMIT:                0                     0.00                        0 (0.00%)
                   ROLLBACK:        1,154,987                 1,332.16              191,382,775 (50.17%)
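The two samples that feed a report like this can be captured very simply; a minimal sketch (the interval and file names are arbitrary, and the two files are then processed by statpack):

$ mysql -uroot -p -e "SHOW GLOBAL STATUS" > status.sample.1
$ sleep 3600
$ mysql -uroot -p -e "SHOW GLOBAL STATUS" > status.sample.2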

If the ROLLBACK command doesn’t do anything, you may be tempted to think it doesn’t do much harm. Think again. In the following example of statements analyzed via TCP packets, ROLLBACK accounted for 21% of the execution time of all SQL in this sample.

# Profile
# Rank Query ID           Response time    Calls R/Call   Item
# ==== ================== ================ ===== ======== ================
#    1 0x4ED092EFA577DAB7     0.0106 24.8%     1   0.0106 SELECT p
#    2 0xC9ECBBF2C88C2336     0.0102 23.8%    52   0.0002 SELECT r_c
#    3 0x19C8068B5C1997CD     0.0092 21.6%   138   0.0001 ROLLBACK
#    4 0x448E4AEB7E02AF72     0.0091 21.3%    52   0.0002 SELECT r_t
#    5 0x56438040F4B2B894     0.0015  3.6%     2   0.0008 SELECT h_c
#    6 0x164962ED9B451586     0.0012  2.9%     9   0.0001 SELECT r
#    7 0x8FDE1484818AAACE     0.0008  1.9%     8   0.0001 SELECT p_c

In a well tuned system, the greatest time in executing an SQL statement is not running the SQL inside the MySQL kernel; it is the network latency of making the call and the time taken to return the requested resultset.

In this extreme case on a production system, half the statements executed were unnecessary.

Uncovering this issue took three commands and less than 5 minutes of my time. The statpack report uncovered 4 additional red flags at the same time.

If you are not monitoring your production system, start now. For assistance on what to monitor and how to analyze it, please contact me for more information.

How to find MySQL developers?

Brian wrote recently Where did all of the MySQL Developers Go?, while over in Drizzle land they have been accepted for the Google Summer of Code along with many other open source projects. MySQL, from my observation, was a noticeable absentee.

Historically, the lack of opportunity for community contributions to be accepted and implemented in, say, under 5 years has really hurt MySQL in recent times. There is plenty of history here, so it’s not worth repeating. The current landscape of patches, forks and custom MySQL binaries from storage engine providers has produced a boom of innovation that sadly is now lost from the core MySQL product.

In Drizzle, community contribution is actively sought, and a good portion of committed code is not from the core Drizzle developers (wherever they work). Padraig, for example, a Drizzle GSoC contributor last year, is helping to mentor this year. The Drizzle project contribution philosophy, GSoC and other activities such as the Drizzle Developer Day all enable the next generation of developers to be part of ongoing project development.

Oracle, what are you going to do to foster an active community and new long term developers for MySQL?

Understanding Drizzle user authentication options – Part 2

A key differentiator of Drizzle from its original MySQL roots is user based authentication. Gone is the host/user and schema/table/column model that was stored in the MyISAM based mysql.user table.

Authentication is now completely pluggable, leveraging existing systems such as PAM, LDAP via PAM, and HTTP authentication.

In this post I’ll talk about HTTP authentication, which requires an external HTTP server to implement successfully. You can look at Part 1 for PAM authentication.

Compiling for http auth support

By default, during compilation you may see:

checking for libcurl... no
configure: WARNING: libcurl development lib not found: not building auth_http plugin. On Debian this is found in libcurl4-gnutls-dev. On RedHat it's in libcurl-devel.

In my case I needed:

$ sudo yum install curl-devel

NOTE: Bug #527255 talks about issues with the message being incorrect regarding libcurl-devel, however it appears the message may be valid for Fedora installs.

After successfully installing the necessary prerequisite you should see:

checking for libcurl... yes
checking how to link with libcurl... -lcurl
checking if libcurl has CURLOPT_USERNAME... no

HTTP Authentication

We need to enable the plugin at server startup.

$ sbin/drizzled --mysql-protocol-port=3399 --plugin_add=auth_http &

You need to ensure the auth_http plugin is active by checking the data dictionary plugins table.

drizzle> select * from data_dictionary.plugins where plugin_name='auth_http';
+-------------+----------------+-----------+-------------+
| PLUGIN_NAME | PLUGIN_TYPE    | IS_ACTIVE | MODULE_NAME |
+-------------+----------------+-----------+-------------+
| auth_http   | Authentication | TRUE      |             |
+-------------+----------------+-----------+-------------+

The auth_http plugin also has the following system variables.

drizzle> SHOW GLOBAL VARIABLES LIKE '%http%';
+------------------+-------------------+
| Variable_name    | Value             |
+------------------+-------------------+
| auth_http_enable | OFF               |
| auth_http_url    | http://localhost/ |
+------------------+-------------------+
2 rows in set (0 sec)

In order to configure HTTP authentication, you need the following settings added to your drizzled.cnf file. For example:

$ cat etc/drizzled.cnf
[drizzled]
auth_http_enable=TRUE
auth_http_url=http://thedrizzler.com/auth

NOTE: Replace the domain name with something you have, even localhost.

A Drizzle restart gives us

$ bin/drizzle -e "SHOW GLOBAL VARIABLES LIKE 'auth_http%'"
+------------------+-----------------------------+
| Variable_name    | Value                       |
+------------------+-----------------------------+
| auth_http_enable | ON                          |
| auth_http_url    | http://thedrizzler.com/auth |
+------------------+-----------------------------+

By default, currently, if the settings result in an invalid URL, account validation does not fail and you can still log in. It is recommended that you always configure PAM authentication as well, as a fallback.
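A sketch of enabling both plugins at startup; I am assuming here that plugin_add accepts a comma-separated list, otherwise repeat the option:

$ sbin/drizzled --mysql-protocol-port=3399 --plugin_add=auth_http,auth_pam &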

$ wget -O tmp http://thedrizzler.com/auth
--17:32:32--  http://thedrizzler.com/auth
Resolving thedrizzler.com... 208.43.73.220
Connecting to thedrizzler.com|208.43.73.220|:80... connected.
HTTP request sent, awaiting response... 404 Not Found
17:32:32 ERROR 404: Not Found.

$ bin/drizzle
drizzle > exit

Configuring passwords

To correctly configure your web server to perform the HTTP auth, you can use this Apache syntax as an example.

The following is added to the VirtualHost entry in your web server configuration.

<Directory /var/www/drizzle/auth>
AllowOverride FileInfo All AuthConfig
AuthType Basic
AuthName "Drizzle Access Only"
AuthUserFile /home/drizzle/.authentication
Require valid-user
</Directory>
$ sudo su -
$ mkdir /var/www/drizzle/auth
$ touch /var/www/drizzle/auth/index.htm
$ apachectl graceful

We check that the URL now requires authorization.

$ wget -O tmp http://thedrizzler.com/auth
--17:35:48--  http://thedrizzler.com/auth
Resolving thedrizzler.com... 208.43.73.220
Connecting to thedrizzler.com|208.43.73.220|:80... connected.
HTTP request sent, awaiting response... 401 Authorization Required
Authorization failed.

You need to create the username/password for access.

$ htpasswd -cb /home/drizzle/.authentication testuser sakila
$ cat /home/drizzle/.authentication
testuser:85/7CbdeVql4E

Confirm that the http auth with correct user/password works.

$ wget -O tmp http://thedrizzler.com/auth --user=testuser --password=sakila
--17:37:45--  http://thedrizzler.com/auth
Resolving thedrizzler.com... 208.43.73.220
Connecting to thedrizzler.com|208.43.73.220|:80... connected.
HTTP request sent, awaiting response... 301 Moved Permanently

Drizzle HTTP Authentication in action

By default we now can’t log in.

$ bin/drizzle
ERROR 1045 (28000): Access denied for user ''@'127.0.0.1' (using password: NO)
$ bin/drizzle --user=testuser --password=sakila999
ERROR 1045 (28000): Access denied for user 'testuser'@'127.0.0.1' (using password: YES)

$ bin/drizzle --user=testuser --password=sakila
Welcome to the Drizzle client.  Commands end with ; or \g.
Your Drizzle connection id is 6
Server version: 7 Source distribution (trunk)

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

drizzle>

Understanding Drizzle user authentication options – Part 1

A key differentiator of Drizzle from its original MySQL roots is user based authentication. Gone is the host/user and schema/table/column model that was stored in the MyISAM based mysql.user table.

Authentication is now completely pluggable, leveraging existing systems such as PAM, LDAP via PAM, and HTTP authentication.

In this post I’ll talk about PAM authentication, which is effectively your current Linux based user security.

This information is based on the current build 1317.

Compiling for PAM support

Your Drizzle environment needs to be compiled with PAM support. If it is missing, you would have received the following warning during configure.

$ ./configure
...
checking for libpam... no
configure: WARNING: Couldn't find PAM development support, pam_auth will not be built. On Debian, libpam is in libpam0g-dev. On RedHat it's in pam-devel.

The solution is provided in the warning message, which is another great thing about Drizzle. The pre-checks for dependencies and helpful messages like these far exceed the MySQL equivalent compilation process. In my case:

$ sudo yum install pam-devel

When correctly configured, it should look like:

checking for libpam... yes
checking how to link with libpam... -lpam

Working with PAM

You need to enable the PAM authentication plugin at drizzled startup.

sbin/drizzled --plugin_add=auth_pam &

Unfortunately, connecting fails to work:

time sbin/drizzle --user=testuser --password=***** --port=4427
real 0m0.003s
user 0m0.003s
sys 0m0.001s

A look into the source at src/drizzle-2010.03.1317/plugin/auth_pam/auth_pam.cc shows a required config file:

117     retval= pam_start("check_user", userinfo.name, &conv_info, &pamh);

Configuring PAM

In order to enable PAM with Drizzle you need to have the following system configuration.

$ cat /etc/pam.d/check_user
auth    required        pam_unix.so
account required        pam_unix.so

$ time sbin/drizzle --user=testuser --password=***** --port=4427
ERROR 1045 (28000): Access denied for user 'testuser'@'127.0.0.1' (using password: YES)

real 0m2.055s
user 0m0.002s
sys 0m0.002s

This did some validation but still failed.

It seems Bug #484069 may fix this problem, however this is not currently in the main line!

Stay Tuned!

Using ext4 for MySQL

This week with a client I saw ext4 used for the first time on a production MySQL system, which was running Ubuntu 9.10 (Karmic Koala). I observed today while installing 9.10 Server locally that ext4 is now the default option. The ext4 filesystem is described as offering better performance, reliability and features, and there is also information about improvements in journaling.

At OSCON 2009 I attended a presentation on Linux Filesystem Performance for Databases by Selena Deckelmann in which ext4 was included. While it provided some improvements in sequential reading and writing, there were issues with random I/O, which is the key for RDBMS products.

Is the RAID configuration (e.g. RAID 5, RAID 10), stripe size, buffer caches, LVM etc. more important than upgrading from ext3 to ext4? I don’t have access to any test equipment to determine this myself, however I’d like to know of any experiences from members of the MySQL community, and whether anybody has experienced general problems running ext4.
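If you do have spare hardware, sysbench’s fileio test is one way to compare the two filesystems on the random I/O pattern that matters; a sketch (sizes and durations are arbitrary):

$ sysbench --test=fileio --file-total-size=8G prepare
$ sysbench --test=fileio --file-total-size=8G --file-test-mode=rndrw --max-time=300 --max-requests=0 run
$ sysbench --test=fileio --file-total-size=8G cleanup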

ext4 References

Drizzle's Data Dictionary and Global Status

With the recent news by Brian about the Data Dictionary in Drizzle replacing the INFORMATION_SCHEMA, I was looking into the server status variables (aka INFORMATION_SCHEMA.GLOBAL_STATUS) and I came across an interesting discovery.

select * from data_dictionary.global_status;
...
| Table_locks_immediate      | 0              |
| Table_locks_waited         | 0              |
| Threads_connected          | 8134064        |
| Uptime                     | 332            |
| Uptime_since_flush_status  | 332            |
+----------------------------+----------------+
51 rows in set (0 sec)

This only retrieved 51 rows, which is way less than previously. What I wanted was clearly missing: all the old Com_ status variables. Looking at what the data_dictionary actually has available revealed a new table.

drizzle> select * from data_dictionary.global_statements;
+-----------------------+----------------+
| VARIABLE_NAME         | VARIABLE_VALUE |
+-----------------------+----------------+
| admin_commands        | 0              |
| alter_db              | 0              |
| alter_table           | 0              |
| analyze               | 0              |
| begin                 | 0              |
| change_db             | 1              |
| check                 | 0              |
| checksum              | 0              |
| commit                | 0              |
| create_db             | 0              |
| create_index          | 0              |
| create_table          | 0              |
| delete                | 0              |
| drop_db               | 0              |
| drop_index            | 0              |
| drop_table            | 0              |
| empty_query           | 0              |
| flush                 | 0              |
| insert                | 0              |
| insert_select         | 0              |
| kill                  | 0              |
| load                  | 0              |
| release_savepoint     | 0              |
| rename_table          | 0              |
| replace               | 0              |
| replace_select        | 0              |
| rollback              | 0              |
| rollback_to_savepoint | 0              |
| savepoint             | 0              |
| select                | 10             |
| set_option            | 0              |
| show_create_db        | 0              |
| show_create_table     | 0              |
| show_errors           | 0              |
| show_warnings         | 0              |
| truncate              | 0              |
| unlock_tables         | 0              |
| update                | 0              |
+-----------------------+----------------+
38 rows in set (0 sec)

Kudos for this. Looking at the list I saw an obvious omission: “ping”, something that caught me out some years ago with huge admin_commands counts (300-500 per second). I’m also a fan of Mark’s recent work An evening hack – Com_ping in MySQL.

How do I identify the MySQL my.cnf file?

If you are unfamiliar with administering MySQL, the current MySQL configuration file is generally named my.cnf (my.ini on Windows). Where is that file?

If only that question was easy to answer!

Use of configuration files

MySQL will by default use at least one configuration file from the following locations. MySQL also uses a cascade approach to configuration files; when you have multiple files in the appropriate paths you can see unexpected behavior when you override certain values in different files.

You can, however, specify --no-defaults to use no configuration file, or add options to your command line execution, so even looking at all configuration files is no guarantee of your operating configuration.

However for most environments, these complexities do not exist.

Default Location

By default and on single instance MySQL servers you are most likely to find this file called my.cnf and found at:

  • /etc/my.cnf
  • /etc/mysql/my.cnf

These are known as the global options files.

Alternative Locations

MySQL has both instance specific and user specific locations. For the inclusion of an instance specific file, the location is:

  • $MYSQL_HOME/my.cnf

where MYSQL_HOME is a defined environment variable. Historical MySQL versions also looked at [datadir]/my.cnf however I am unaware if this is applicable in 5.x versions.

You can also specify options on a per user basis for default inclusion. These are found at:

  • $HOME/.my.cnf

Distro specific locations

Ubuntu for example also provides an ability to add options via an include directory.
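For example, on Ubuntu the global my.cnf typically ends with an !includedir directive, and any .cnf files placed in that directory are read as well:

$ tail -1 /etc/mysql/my.cnf
!includedir /etc/mysql/conf.d/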

Specifying a configuration at runtime

While you may have these default files, you may elect to start mysqld with a specific configuration file as specified by --defaults-file. This option overrides all global/instance/user locations and uses just this configuration file. You can also specify additional configuration that supplements, and does not override, the defaults with --defaults-extra-file.

What files are on my system?

Again, assuming the default names you can perform a brute force check with:

$ sudo find / -name "*my*cnf"

This is actually worthwhile, especially if you find a /root/.my.cnf file, which contains default MySQL settings for the operating system ‘root’ user.

MySQL recommendations

MySQL by default provides a number of sample configuration files, however these are generally outdated, especially for newer hardware. These files include my-huge.cnf, my-large.cnf, my-medium.cnf, my-small.cnf and my-innodb-heavy-4G.cnf. Don’t assume replacing your configuration with one of these files will make your system perform better.

MySQL made some attempt to correct these, and at least some very poor defaults, with MySQL 5.4, however I am unsure what’s in MySQL 5.5.

MySQL Configuration at runtime

While several commands can help with identifying your configuration files and printing defaults, it’s also possible to change your configuration at runtime. Such changes may not be reflected in your configuration files and pose an additional mismatch.
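To see what the default files actually provide to a given program, and what arguments mysqld would start with, a quick sketch (paths assumed):

$ my_print_defaults mysqld
$ bin/mysqld --print-defaults

Comparing this against SHOW GLOBAL VARIABLES helps spot runtime changes that are not persisted in any file.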

References

Don't Assume – Per Session Buffers

MySQL has a number of global buffers, i.e. your SGA. There are also a number of per session/thread buffers that, combined with other memory usage, constitute an unbounded PGA. One of the most common errors in mis-configured MySQL environments is setting the 4 primary per session buffers thinking they are global buffers.

Global buffers include:

  • innodb_buffer_pool_size
  • innodb_additional_mem_pool_size
  • innodb_log_buffer_size
  • key_buffer_size
  • query_cache_size

The four important per session buffers are:

  • sort_buffer_size
  • join_buffer_size
  • read_buffer_size
  • read_rnd_buffer_size

I have seen people set these values > 5M. The defaults range from 128K to 256K. My advice for any value above 256K is simple: what proof do you have this works better? When nothing is forthcoming, the first move is to revert to the defaults, or a maximum of 256K, for some benchmarkable results. The primary reason for this is MySQL internals; as quoted by Monty Taylor, for values > 256K it uses mmap() instead of malloc() for memory allocation.

These are not all the per session buffers you need to be aware of. Others include thread_stack, max_allowed_packet, binlog_cache_size and, most importantly, max_connections.

MySQL also uses memory in other areas, most noticeably in internal temporary tables and MEMORY based tables.

As I mentioned, there is no bound on the total process memory allocation for MySQL, so some incorrectly configured variables can easily blow your memory usage.

References

About “Don’t Assume”

“Don’t Assume” is a series of posts to help the Oracle DBA understand, use and appreciate the subtle differences and unique characteristics of the MySQL RDBMS in comparison to Oracle. These points are essential to operating MySQL effectively in a production environment and avoiding any loss of data or availability.

For more posts in this series be sure to follow the mysql4oracledba tag and also watch out for MySQL for Oracle DBA presentations.

The MySQLCamp for the Oracle DBA is a series of educational talks all Oracle DBA resources should attend. Two presentations from this series, IGNITION and LIFTOFF, will be presented at the MySQL Users Conference 2010 in Santa Clara, April 2010. This series also includes JUMPSTART and VELOCITY. If you would like to hear these presentations in your area, please contact me.

Free advice on your my.cnf

Today, while on IRC in #pentaho, I came across a discussion and a published my.cnf. In this configuration I found some grossly incorrect values for per session buffers (see below).

It doesn’t take a MySQL expert to spot the issues, however there is plenty of bad information available on the Internet, and developers who don’t know MySQL well can easily be misled. This has spurred me to create a program to rid the world of bad MySQL configuration. While my task is potentially infinite, it will enable me to give back and hopefully do a small amount of good. You never know, saving those CPU cycles may save energy and help the planet.

Stay tuned for more details of my program.

[mysqld]
...
sort_buffer_size = 6144K
myisam_sort_buffer_size = 1G
join_buffer_size = 1G
bulk_insert_buffer_size = 1G
read_buffer_size     = 6144K
read_rnd_buffer_size = 6144K
key_buffer_size		= 1024M
max_allowed_packet	= 32M
thread_stack		= 192K
thread_cache_size       = 256

query_cache_limit	= 512M
query_cache_size        = 512M
...
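To see why such values are dangerous, add up the four per session buffers above for a single connection’s worst case:

mysql> SELECT (6144*1024 + 1073741824 + 6144*1024 + 6144*1024)/1024/1024/1024 AS worst_case_gb_per_connection;

That is roughly 1GB per connection before any global buffers are counted, and a join buffer can be allocated for each non-indexed join in a single query, so a server handling even a few dozen concurrent connections could request far more memory than it has.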

MySQL is crashing, what do I do?

Let me start by saying the majority of environments never experience MySQL crashing. I have seen production environments up for years. On my own server I have seen 575 days of MySQL uptime, and the problem then was hardware, not MySQL.

However it does occur, and the reasons may be obscure.

Confirming mysqld has crashed

To the unsuspecting, MySQL may indeed be crashing and you never know about it. The reason is that most MySQL installations have two running processes: mysqld and mysqld_safe.

ps -ef | grep mysqld
root     28822     1  0 Feb22 ?        00:00:00 /bin/sh bin/mysqld_safe
mysql    28910 28822  0 Feb22 ?        00:30:08 /opt/mysql51/bin/mysqld --basedir=/opt/mysql51 --datadir=/opt/mysql51/data --user=mysql --log-error=/opt/mysql51/log/error.log --pid-file=/opt/mysql51/data/dc1.onegreendog.com.pid

One of the functions of mysqld_safe is to restart mysqld if it fails. Unless you review your MySQL error log, particularly on low volume systems, you will never know. Hint: Have you checked your MySQL error log today?

You can quickly determine your instance uptime via SQL.

mysql> SHOW GLOBAL STATUS LIKE '%uptime%';
+---------------+---------+
| Variable_name | Value   |
+---------------+---------+
| Uptime        | 1033722 |
+---------------+---------+
1 row in set (0.00 sec)

This is the number of seconds since start time. While that is not easily readable for humans, the following is a more user friendly display. (NOTE: works for 5.1+ only.)

mysql> SELECT FROM_UNIXTIME(UNIX_TIMESTAMP() - variable_value) AS server_start
FROM INFORMATION_SCHEMA.GLOBAL_STATUS
WHERE variable_name='Uptime';
+---------------------+
| server_start        |
+---------------------+
| 2010-02-22 15:22:13 |
+---------------------+
1 row in set (0.07 sec)

Debugging a mysqld core file

When correctly configured, mysqld will generate a core file (See How to crash mysqld intentionally for background information on required settings).

Your first check is to determine whether the mysqld binary used has had its debugging information and symbols stripped. You need this information not stripped in order to identify symbol names.

$ file bin/mysqld
bin/mysqld: ELF 64-bit LSB executable, AMD x86-64, version 1 (SYSV), for GNU/Linux 2.4.0,
dynamically linked (uses shared libs), for GNU/Linux 2.4.0, not stripped

You can use gdb, and with the backtrace command (bt) you can see a stack trace of calls. This won’t greatly help the average DBA without C or MySQL internals knowledge, however it’s essential information for getting to the bottom of the problem.

In the following example I’m going to use Bug #38508 to intentionally crash my test instance.

mysql> drop table if exists t1,t2;
mysql> create table t1(a bigint);
mysql> create table t2(b tinyint);
mysql> insert into t2 values (null);
mysql> prepare stmt from "select 1 from t1 join  t2 on a xor b where b > 1  and a =1";
mysql> execute stmt;
mysql> execute stmt;
ERROR 2013 (HY000): Lost connection to MySQL server during query

Lost connection is the first sign of a problem. We check the error log to confirm.

$ tail data/`hostname`.err
100306 14:51:49 - mysqld got signal 11 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help diagnose
the problem, but since we have already crashed, something is definitely wrong
and this may fail.

key_buffer_size=8384512
read_buffer_size=131072
max_used_connections=1
max_threads=151
threads_connected=1
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 338301 K
bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

thd: 0x521f160
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0x401b6100 thread_stack 0x40000
/home/rbradfor/mysql/mysql-5.1.38-linux-x86_64-glibc23/bin/mysqld(my_print_stacktrace+0x2e)[0x8abfbe]
/home/rbradfor/mysql/mysql-5.1.38-linux-x86_64-glibc23/bin/mysqld(handle_segfault+0x322)[0x5df252]
/lib64/libpthread.so.0[0x35fb00de80]
/home/rbradfor/mysql/mysql-5.1.38-linux-x86_64-glibc23/bin/mysqld(_ZN9Item_cond10fix_fieldsEP3THDPP4Item+0x7f)[0x5654ff]
/home/rbradfor/mysql/mysql-5.1.38-linux-x86_64-glibc23/bin/mysqld(_ZN9Item_cond10fix_fieldsEP3THDPP4Item+0xb8)[0x565538]
/home/rbradfor/mysql/mysql-5.1.38-linux-x86_64-glibc23/bin/mysqld(_Z11setup_condsP3THDP10TABLE_LISTS2_PP4Item+0xf6)[0x621f96]
/home/rbradfor/mysql/mysql-5.1.38-linux-x86_64-glibc23/bin/mysqld(_ZN4JOIN7prepareEPPP4ItemP10TABLE_LISTjS1_jP8st_orderS7_S1_S7_P13st_select_lexP18st_select_lex_unit+0x2db)[0x645f3b]
/home/rbradfor/mysql/mysql-5.1.38-linux-x86_64-glibc23/bin/mysqld(_Z12mysql_selectP3THDPPP4ItemP10TABLE_LISTjR4ListIS1_ES2_jP8st_orderSB_S2_SB_yP13select_resultP18st_select_lex_unitP13st_select_lex+0x7a4)[0x654d24]
/home/rbradfor/mysql/mysql-5.1.38-linux-x86_64-glibc23/bin/mysqld(_Z13handle_selectP3THDP6st_lexP13select_resultm+0x16c)[0x659f9c]
/home/rbradfor/mysql/mysql-5.1.38-linux-x86_64-glibc23/bin/mysqld[0x5ec92a]
/home/rbradfor/mysql/mysql-5.1.38-linux-x86_64-glibc23/bin/mysqld(_Z21mysql_execute_commandP3THD+0x602)[0x5efb22]
/home/rbradfor/mysql/mysql-5.1.38-linux-x86_64-glibc23/bin/mysqld(_ZN18Prepared_statement7executeEP6Stringb+0x3bd)[0x66587d]
/home/rbradfor/mysql/mysql-5.1.38-linux-x86_64-glibc23/bin/mysqld(_ZN18Prepared_statement12execute_loopEP6StringbPhS2_+0x7c)[0x66874c]
/home/rbradfor/mysql/mysql-5.1.38-linux-x86_64-glibc23/bin/mysqld(_Z22mysql_sql_stmt_executeP3THD+0xa7)[0x668c27]
/home/rbradfor/mysql/mysql-5.1.38-linux-x86_64-glibc23/bin/mysqld(_Z21mysql_execute_commandP3THD+0x1123)[0x5f0643]
/home/rbradfor/mysql/mysql-5.1.38-linux-x86_64-glibc23/bin/mysqld(_Z11mysql_parseP3THDPKcjPS2_+0x357)[0x5f5047]
/home/rbradfor/mysql/mysql-5.1.38-linux-x86_64-glibc23/bin/mysqld(_Z16dispatch_command19enum_server_commandP3THDPcj+0xe93)[0x5f5ee3]
/home/rbradfor/mysql/mysql-5.1.38-linux-x86_64-glibc23/bin/mysqld(_Z10do_commandP3THD+0xe6)[0x5f67a6]
/home/rbradfor/mysql/mysql-5.1.38-linux-x86_64-glibc23/bin/mysqld(handle_one_connection+0x246)[0x5e9146]
/lib64/libpthread.so.0[0x35fb006307]
/lib64/libc.so.6(clone+0x6d)[0x35fa4d1ded]
Trying to get some variables.
Some pointers may be invalid and cause the dump to abort...
thd->query at 0x5249320 = select 1 from t1 join  t2 on a xor b where b > 1  and a =1
thd->thread_id=1
thd->killed=NOT_KILLED
The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
information that should help you find out what is causing the crash.
Writing a core file
100306 14:51:49 mysqld_safe Number of processes running now: 0
100306 14:51:49 mysqld_safe mysqld restarted
100306 14:51:49 [Note] Plugin 'FEDERATED' is disabled.
100306 14:51:50  InnoDB: Started; log sequence number 0 44233
100306 14:51:50 [Note] Event Scheduler: Loaded 0 events
100306 14:51:50 [Note] /home/rbradfor/mysql/mysql-5.1.38-linux-x86_64-glibc23/bin/mysqld: ready for connections.
Version: '5.1.38'  socket: '/tmp/mysql.sock.3999'  port: 3999  MySQL Community Server (GPL)

Having confirmed we got the “Writing a core file” line, we can find and use the core file.

$ find . -name "core*"
./data/core.23290
$ gdb bin/mysqld data/core.23290
GNU gdb Red Hat Linux (6.5-37.el5_2.2rh)
Copyright (C) 2006 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB.  Type "show warranty" for details.
This GDB was configured as "x86_64-redhat-linux-gnu"...Using host libthread_db library "/lib64/libthread_db.so.1".

Reading symbols from /lib64/libpthread.so.0...done.
Loaded symbols for /lib64/libpthread.so.0
Reading symbols from /lib64/libdl.so.2...done.
Loaded symbols for /lib64/libdl.so.2
Reading symbols from /lib64/libcrypt.so.1...done.
Loaded symbols for /lib64/libcrypt.so.1
Reading symbols from /lib64/libnsl.so.1...done.
Loaded symbols for /lib64/libnsl.so.1
Reading symbols from /lib64/libm.so.6...done.
Loaded symbols for /lib64/libm.so.6
Reading symbols from /lib64/libc.so.6...done.
Loaded symbols for /lib64/libc.so.6
Reading symbols from /lib64/ld-linux-x86-64.so.2...done.
Loaded symbols for /lib64/ld-linux-x86-64.so.2
Reading symbols from /lib64/libgcc_s.so.1...done.
Loaded symbols for /lib64/libgcc_s.so.1
Core was generated by `/home/rbradfor/mysql/mysql-5.1.38-linux-x86_64-glibc23/bin/mysqld --defaults-fi'.
Program terminated with signal 11, Segmentation fault.
#0  0x00000035fb00b142 in pthread_kill () from /lib64/libpthread.so.0
(gdb) bt
#0  0x00000035fb00b142 in pthread_kill () from /lib64/libpthread.so.0
#1  0x00000000005df285 in handle_segfault (sig=11) at mysqld.cc:2552
#2  
#3  0x00000000005654ff in Item_cond::fix_fields (this=0x5249dd0, thd=0x521f160, ref=) at item_cmpfunc.cc:3900
#4  0x0000000000565538 in Item_cond::fix_fields (this=0x52435b8, thd=0x521f160, ref=) at item_cmpfunc.cc:3912
#5  0x0000000000621f96 in setup_conds (thd=0x521f160, tables=, leaves=0x52494d0, conds=0x5244e38) at sql_base.cc:7988
#6  0x0000000000645f3b in JOIN::prepare (this=0x5243770, rref_pointer_array=0x5248a90, tables_init=, wild_num=, conds_init=,
    og_num=, order_init=0x0, group_init=0x0, having_init=0x0, proc_param_init=0x0, select_lex_arg=0x52488c0, unit_arg=0x5248498) at sql_select.cc:412
#7  0x0000000000654d24 in mysql_select (thd=0x521f160, rref_pointer_array=0x5c3fd0, tables=0x4, wild_num=0, fields=@0x52489c8, conds=0x52435b8, og_num=0, order=0x0, group=0x0, having=0x0,
    proc_param=0x0, select_options=0, result=0x52688a0, unit=0x5248498, select_lex=0x52488c0) at sql_select.cc:2377
#8  0x0000000000659f9c in handle_select (thd=0x521f160, lex=0x52483f8, result=0x52688a0, setup_tables_done_option=0) at sql_select.cc:268
#9  0x00000000005ec92a in execute_sqlcom_select (thd=0x521f160, all_tables=0x52494d0) at sql_parse.cc:5011
#10 0x00000000005efb22 in mysql_execute_command (thd=0x521f160) at sql_parse.cc:2206
#11 0x000000000066587d in Prepared_statement::execute (this=0x5245d60, expanded_query=, open_cursor=false) at sql_prepare.cc:3579
#12 0x000000000066874c in Prepared_statement::execute_loop (this=0x5245d60, expanded_query=0x401b43c0, open_cursor=false, packet=, packet_end=)
    at sql_prepare.cc:3253
#13 0x0000000000668c27 in mysql_sql_stmt_execute (thd=) at sql_prepare.cc:2524
#14 0x00000000005f0643 in mysql_execute_command (thd=0x521f160) at sql_parse.cc:2215
#15 0x00000000005f5047 in mysql_parse (thd=0x521f160, inBuf=0x5243520 "execute stmt", length=12, found_semicolon=0x401b6060) at sql_parse.cc:5931
#16 0x00000000005f5ee3 in dispatch_command (command=COM_QUERY, thd=0x521f160, packet=0x525fde1 "execute stmt", packet_length=) at sql_parse.cc:1213
#17 0x00000000005f67a6 in do_command (thd=0x521f160) at sql_parse.cc:854
#18 0x00000000005e9146 in handle_one_connection (arg=dwarf2_read_address: Corrupted DWARF expression.
) at sql_connect.cc:1127
#19 0x00000035fb006307 in start_thread () from /lib64/libpthread.so.0
#20 0x00000035fa4d1ded in clone () from /lib64/libc.so.6
(gdb) quit

You can use gdb to obtain additional information, depending on what is available in the core.
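For example, a few generally useful gdb commands at this point:

(gdb) bt full                  # backtrace with local variables
(gdb) thread apply all bt      # backtraces for every thread
(gdb) frame 3                  # switch to a specific frame from the backtrace
(gdb) info locals              # variables in the current frame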

Now what?

Is the problem a bug? Is it data corruption? Is it hardware related?

Gathering the information is the first step in informing you of more detail that will enable you to search, discuss and seek professional advice to address your problem.

References

Advanced reporting options for MySQL

I’m seeking help from the MySQL community on what tools are used today to generate complex reports for enterprise applications that use MySQL. In the Oracle world you have Oracle Report Writer; in the Microsoft world, Crystal Reports.

In the open source world there are Jasper Reports, Pentaho Reports and BIRT, however I don’t know how powerful complex reporting is with these.

If anybody has experience using or evaluating these tools please let me know. This may lead to possible work.

Migrating MySQL latin1 to utf8 – The process

Having covered the preparation and character set options of performing a latin1 to utf8 MySQL migration, just how do you perform the migration correctly?

Example Case

Just to recap, we have the following example table and data.

mysql> select c,length(c),char_length(c),charset(c), hex(c) from conv.test_latin1;
+---------------+-----------+----------------+------------+----------------------------+
| c             | length(c) | char_length(c) | charset(c) | hex(c)                     |
+---------------+-----------+----------------+------------+----------------------------+
| a             |         1 |              1 | latin1     | 61                         |
| abc           |         3 |              3 | latin1     | 616263                     |
| ☺             |         3 |              3 | latin1     | E298BA                     |
| abc ☺☹☻       |        13 |             13 | latin1     | 61626320E298BAE298B9E298BB |
+---------------+-----------+----------------+------------+----------------------------+

Migration approach 1

The easiest way is to use mysqldump to dump the schema and data, change the schema definitions, and then reload via the client, all with the correct client character sets. Working on our sample table test_latin1, which I have in a conv schema, you can do:

$ mysqldump -uroot -p --default-character-set=latin1 --skip-set-charset conv > conv.sql
$ sed -i -e "s/latin1/utf8/g" conv.sql
$ mysql -uroot -p --default-character-set=utf8 conv < conv.sql

If you attempt this, which technically works (I've seen a similar example on the Internet), you're bound to screw up something. While the mysqldump and mysql load use the correct client command line options, performing a blind conversion of all references of latin1 to utf8 will not only change your table definition; in my example it changes the name of the table, and it would change any data values that contained the word 'latin1'. For example:

mysql> select c,length(c),char_length(c),charset(c), hex(c) from conv.test_utf8;
+---------+-----------+----------------+------------+----------------------------+
| c       | length(c) | char_length(c) | charset(c) | hex(c)                     |
+---------+-----------+----------------+------------+----------------------------+
| a       |         1 |              1 | utf8       | 61                         |
| abc     |         3 |              3 | utf8       | 616263                     |
| ?       |         3 |              1 | utf8       | E298BA                     |
| abc ??? |        13 |              7 | utf8       | 61626320E298BAE298B9E298BB |
+---------+-----------+----------------+------------+----------------------------+
4 rows in set (0.00 sec)

mysql> set names utf8;
Query OK, 0 rows affected (0.00 sec)

mysql> select c,length(c),char_length(c),charset(c), hex(c) from conv.test_utf8;
+---------------+-----------+----------------+------------+----------------------------+
| c             | length(c) | char_length(c) | charset(c) | hex(c)                     |
+---------------+-----------+----------------+------------+----------------------------+
| a             |         1 |              1 | utf8       | 61                         |
| abc           |         3 |              3 | utf8       | 616263                     |
| ☺             |         3 |              1 | utf8       | E298BA                     |
| abc ☺☹☻       |        13 |              7 | utf8       | 61626320E298BAE298B9E298BB |
+---------------+-----------+----------------+------------+----------------------------+
4 rows in set (0.01 sec)

Be sure to realize you now need to connect with --default-character-set=utf8, or, as in our example, use SET NAMES as a stop-gap measure.

Migration Approach 2

A good practice with mysqldump, regardless of migration, is to always dump the schema with the --no-data option, and then dump the data separately with the --no-create-info option. In this case you can then manually edit the schema file, carefully changing latin1 where appropriate. This is your production data, so some manual hand verification is a good thing. The additional benefit is you can create your schema and verify the syntax is correct before loading any data. While mysqldump creates a single file, with this dump and reload approach you can split your data file and achieve some level of parallelism for data loading, depending on your hardware capabilities.

$ mysqldump -uroot -p --default-character-set=latin1 --skip-set-charset --no-data conv > conv.schema.sql
$ mysqldump -uroot -p --default-character-set=latin1 --skip-set-charset --no-create-info conv > conv.data.sql
# For simplicity in this example I'm using a more strict global replacement. NOTE: this syntax depends on the version of mysql
$ sed -i -e "s/DEFAULT CHARSET=latin1/DEFAULT CHARSET=utf8/g" conv.schema.sql
$ mysql -uroot -p --default-character-set=utf8 conv < conv.schema.sql
$ mysql -uroot -p --default-character-set=utf8 conv < conv.data.sql
$ mysql -uroot -p --default-character-set=utf8 conv
mysql> select c,length(c),char_length(c),charset(c), hex(c) from conv.test_latin1;
+---------------+-----------+----------------+------------+----------------------------+
| c             | length(c) | char_length(c) | charset(c) | hex(c)                     |
+---------------+-----------+----------------+------------+----------------------------+
| a             |         1 |              1 | utf8       | 61                         |
| abc           |         3 |              3 | utf8       | 616263                     |
| ☺             |         3 |              1 | utf8       | E298BA                     |
| abc ☺☹☻       |        13 |              7 | utf8       | 61626320E298BAE298B9E298BB |
+---------------+-----------+----------------+------------+----------------------------+
4 rows in set (0.00 sec)

Other software interactions

While I only discuss the MySQL data and mysql client usage here, you should ensure that your programming language connecting to MySQL uses the correct character set management; e.g. PHP uses the default_charset setting in php.ini, and even your web server may benefit from the correct encoding, e.g. Apache httpd uses AddDefaultCharset in httpd.conf.
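As a minimal sketch of those two settings for a utf8 deployment (values assumed):

# httpd.conf
AddDefaultCharset UTF-8

; php.ini
default_charset = "UTF-8"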

Migration Approach Problems

In our example the migration was successful, however, like life, your production data is unlikely to be perfect. In my next post I will talk about identifying and handling exceptions, including data that was wrongly encoded into latin1 originally, or translations such as HTML entities from rich editor input fields.

Don't Assume – Data Integrity

MySQL has the same level of data integrity for numbers and strings as Oracle, when MySQL is correctly configured. By default (and I wish I knew why this is still the default), MySQL performs silent conversions on boundary conditions of data, so your data will not always be what you specified. Let’s look at the following examples to demonstrate the default behavior.

For numbers

mysql> DROP TABLE IF EXISTS example;
mysql> CREATE TABLE example(i1  TINYINT, i2 TINYINT UNSIGNED, c1 VARCHAR(5));
mysql> INSERT INTO example (i1) VALUES (1), (-1), (100), (500);
Query OK, 4 rows affected, 1 warning (0.08 sec)
mysql> SELECT * FROM example;
+------+------+------+
| i1   | i2   | c1   |
+------+------+------+
|    1 | NULL | NULL |
|   -1 | NULL | NULL |
|  100 | NULL | NULL |
|  127 | NULL | NULL |
+------+------+------+
4 rows in set (0.00 sec)

As you can see, for one value we inserted 500, yet the value of 127 is stored. For this example I have used the TINYINT numeric data type to demonstrate truncation. TINYINT is a 1 byte integer that stores values from -128 to +127. Unlike Oracle, MySQL has 9 different data types for numeric columns, and using these wisely can improve your database disk footprint; see for example BIGINT v INT. Is there a big deal?

MySQL also has a nice feature for numeric data types, the UNSIGNED attribute that ensures only a positive integer or 0 value. Let’s see what happens with this column.

Unsigned

mysql> TRUNCATE TABLE example;
mysql> INSERT INTO example (i2) VALUES (1), (-1), (100), (500);
Query OK, 4 rows affected, 2 warnings (0.00 sec)
mysql> SELECT * FROM example;
+------+------+------+
| i1   | i2   | c1   |
+------+------+------+
| NULL |    1 | NULL |
| NULL |    0 | NULL |
| NULL |  100 | NULL |
| NULL |  255 | NULL |
+------+------+------+
4 rows in set (0.00 sec)

Now you see that -1 and 500 are again not the expected values, and where before 500 was silently truncated to 127, now it’s truncated to 255.

For Strings

As you can now assume, the following also occurs for strings.

mysql> TRUNCATE TABLE example;
mysql> INSERT INTO example (c1) VALUES (NULL),('a'),('abcde'),('xyz12345');
Query OK, 4 rows affected, 1 warning (0.00 sec)
mysql> SELECT * FROM example;
+------+------+-------+
| i1   | i2   | c1    |
+------+------+-------+
| NULL | NULL | NULL  |
| NULL | NULL | a     |
| NULL | NULL | abcde |
| NULL | NULL | xyz12 |
+------+------+-------+
4 rows in set (0.00 sec)

Show warnings

As you can see here, the mysql client shows that warnings occurred, but if you don’t review the warnings you would never know, a check that is rarely performed in development with richer programming languages. Let us look at these actual warnings more closely.

mysql> INSERT INTO example (i1) VALUES (1), (-1), (100), (500);
Query OK, 4 rows affected, 1 warning (0.08 sec)
mysql> SHOW WARNINGS;
+---------+------+---------------------------------------------+
| Level   | Code | Message                                     |
+---------+------+---------------------------------------------+
| Warning | 1264 | Out of range value for column 'i1' at row 4 |
+---------+------+---------------------------------------------+

mysql> INSERT INTO example (i2) VALUES (1), (-1), (100), (500);
Query OK, 4 rows affected, 2 warnings (0.00 sec)
mysql> SHOW WARNINGS;
+---------+------+---------------------------------------------+
| Level   | Code | Message                                     |
+---------+------+---------------------------------------------+
| Warning | 1264 | Out of range value for column 'i2' at row 2 |
| Warning | 1264 | Out of range value for column 'i2' at row 4 |
+---------+------+---------------------------------------------+
2 rows in set (0.00 sec)
mysql> INSERT INTO example (c1) VALUES (NULL),('a'),('abcde'),('xyz12345');
Query OK, 4 rows affected, 1 warning (0.00 sec)
mysql> SHOW WARNINGS;
+---------+------+-----------------------------------------+
| Level   | Code | Message                                 |
+---------+------+-----------------------------------------+
| Warning | 1265 | Data truncated for column 'c1' at row 4 |
+---------+------+-----------------------------------------+
1 row in set (0.00 sec)

Using sql_mode

The solution is the sql_mode configuration variable, with at minimum the value STRICT_ALL_TABLES defined. We can demonstrate the expected behavior with the following syntax.

mysql> set SESSION sql_mode=STRICT_ALL_TABLES;
Query OK, 0 rows affected (0.00 sec)

mysql> TRUNCATE TABLE example;
Query OK, 0 rows affected (0.00 sec)

mysql> INSERT INTO example (i1) VALUES (1), (-1), (100), (500);
ERROR 1264 (22003): Out of range value for column 'i1' at row 4
mysql> SELECT * FROM example;
+------+------+------+
| i1   | i2   | c1   |
+------+------+------+
|    1 | NULL | NULL |
|   -1 | NULL | NULL |
|  100 | NULL | NULL |
+------+------+------+
3 rows in set (0.00 sec)

As you can see, even though the INSERT statement returned an error, the rows preceding the failing row were actually stored. You should read Don’t Assume – Transactions for some insights here.
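
Rather than relying on each session to set this, you would normally make strict mode the server default. A minimal sketch, assuming a standard my.cnf layout:

[mysqld]
sql_mode = STRICT_ALL_TABLES

Note that any client can still override sql_mode per session, so it is worth verifying what your application connections actually use.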

When it comes to dates, there is greater complexity and this is grounds for another entry of this series.

About “Don’t Assume”

“Don’t Assume” is a series of posts to help the Oracle DBA understand, use and appreciate the subtle differences and unique characteristics of the MySQL RDBMS in comparison to Oracle. These points are essential to operate MySQL effectively in a production environment and avoid any loss of data or availability.

For more posts in this series be sure to follow the mysql4oracledba tag and also watch out for MySQL for Oracle DBA presentations.

The MySQLCamp for the Oracle DBA is a series of educational talks all Oracle DBA resources should attend. Two presentations from this series, IGNITION and LIFTOFF, will be presented at the MySQL Users Conference 2010 in Santa Clara, April 2010. This series also includes JUMPSTART and VELOCITY. If you would like to hear these presentations in your area, please contact me.

How to crash mysqld intentionally

While some may think I’m daft, I have a legitimate reason for wanting to crash mysqld. However first we need to find a way to crash it.

Great thanks to Alan K, Mark L, Harrison and Hartmut on #mysql-dev for several suggestions and a config option I was unaware of. My investigation even led to a documentation bug logged as #51739.

My first thought was to find a known bug, and if necessary install the affected version to test it. A good one was suggested, Bug #48508, which fails on several versions and which I will use to demonstrate with. However, the simplest way is to issue kill -11 against the mysqld process.

By default, no core file will be produced, which is what I’m seeking, but with the right options this is possible. First, the user running mysqld probably has a core file size limit of 0.

$ ulimit -c
0

You can fix this with ulimit, or you can specify core-file-size=unlimited in the [mysqld_safe] section.

$ ulimit -c unlimited
$ ulimit -c
unlimited

The option I was not aware of is that you also have to specify core-file in your my.cnf

[mysqld]
core-file

For my CentOS 5.4 installation I also ran the following kernel commands, though this may be unnecessary.

sudo /sbin/sysctl -w kernel.core_pattern="core"
sudo /sbin/sysctl -w fs.suid_dumpable=1

It is now easy to produce a core file.

$ bin/mysqld_safe &
$ killall -11 mysqld
$ bin/mysqld_safe: line 137:  2656 Segmentation fault      (core dumped) ...
100304 16:46:43 mysqld_safe Number of processes running now: 0
100304 16:46:43 mysqld_safe mysqld restarted
$ find . -name "core*"
./data/core.99999

NOTE: Do not run killall on a multi-instance server. I use this syntax here only for simplicity in presentation. It is best to run ps and kill the appropriate pid.
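
For example, a safer sequence might look like this (a sketch; the pid value is illustrative, matching the mysqld process seen above):

$ ps -ef | grep [m]ysqld
mysql     2656     1  0 16:45 ?        00:00:01 /usr/sbin/mysqld
$ kill -11 2656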

On a side note, I also tried to produce a core on Mac OS X without success. I’d still like to document that way, so if anybody can assist please ping me.

Don't Assume – Transactions

MySQL by default is a NON transactional database. For the hobbyist (see The Hobbyist and the Professional), startup entrepreneur and website developer this may not appear an issue, however to the seasoned Oracle DBA who has only used Oracle the concept is very foreign.

In MySQL you have to be concerned with two situations that will catch the unprepared out. The first is the default autocommit mode. This is TRUE, i.e. all statements are automatically committed on completion.

mysql> SELECT @@autocommit,TRUE;
+--------------+------+
| @@autocommit | TRUE |
+--------------+------+
|            1 |    1 |
+--------------+------+
1 row in set (0.00 sec)
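
If you want the Oracle-like behavior of changes remaining uncommitted until you say so, you can disable autocommit for your session, or use an explicit transaction:

mysql> SET autocommit=0;

mysql> START TRANSACTION;
...
mysql> COMMIT;

As the next point explains, this only protects you when the underlying tables are transactional.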

The second is the storage engine used. Again a foreign term for Oracle DBAs, a storage engine is a technology that stores and retrieves the underlying data from the MySQL database. MySQL has many different storage engines, each with relative strengths and weaknesses and different features. For the purpose of this discussion it is important to know that engines are either non-transactional or transactional. The default storage engine MyISAM is NON transactional. MySQL also provides by default the InnoDB storage engine, which is transactional. There are distinct advantages of a non-transactional environment which I will not go into at this time.
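
Before relying on transactional behavior, confirm which engine a given table actually uses. Two standard ways to check (the table name here is a placeholder):

mysql> SHOW CREATE TABLE your_table\G
mysql> SELECT TABLE_NAME, ENGINE
    -> FROM   INFORMATION_SCHEMA.TABLES
    -> WHERE  TABLE_SCHEMA = DATABASE();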

Having recently written about this in my upcoming book Expert PHP and MySQL I will demonstrate what happens with both MyISAM and InnoDB.

Non-transactional Tables

To show the difference, Listing 6-7 demonstrates that atomicity is not possible with non-transactional tables. The following tables are used in this example.

DROP TABLE IF EXISTS non_trans_parent;
CREATE TABLE non_trans_parent (
  id   INT UNSIGNED NOT NULL AUTO_INCREMENT,
  val  VARCHAR(10) NOT NULL,
PRIMARY KEY (id),
UNIQUE KEY (val)
) ENGINE=MyISAM DEFAULT CHARSET latin1;
DROP TABLE IF EXISTS non_trans_child;
CREATE TABLE non_trans_child (
  id        INT UNSIGNED NOT NULL AUTO_INCREMENT,
  parent_id INT UNSIGNED NOT NULL,
  created   TIMESTAMP NOT NULL,
PRIMARY KEY (id),
INDEX (parent_id)
) ENGINE=MyISAM DEFAULT CHARSET latin1;

To test things out, perform a sample transaction that inserts records into these two tables:

START TRANSACTION;
INSERT INTO non_trans_parent(val) VALUES('a');
INSERT INTO non_trans_child(parent_id,created) VALUES(LAST_INSERT_ID(),NOW());
INSERT INTO non_trans_parent (val) VALUES('a');
ERROR 1062 (23000): Duplicate entry 'a' for key 'val'
ROLLBACK;
Query OK, 0 rows affected, 1 warning (0.00 sec)
mysql> SHOW WARNINGS;
+---------+------+---------------------------------------------------------------+
| Level   | Code | Message                                                       |
+---------+------+---------------------------------------------------------------+
| Warning | 1196 | Some non-transactional changed tables couldn't be rolled back |
+---------+------+---------------------------------------------------------------+
SELECT * FROM non_trans_parent;
+----+-----+
| id | val |
+----+-----+
|  1 | a   |
+----+-----+
SELECT * FROM non_trans_child;
+----+-----------+---------------------+
| id | parent_id | created             |
+----+-----------+---------------------+
|  1 |         1 | 2009-09-21 23:44:25 |
+----+-----------+---------------------+

As you can see, data that you would have expected the rollback to remove is still present.

Transactional Tables

Repeat these SQL statements using the transactional storage engine InnoDB; you will observe the difference between transactional and non transactional processing. The following tables, shown in Listing 6-8, are used in this example.

DROP TABLE IF EXISTS trans_parent;
CREATE TABLE trans_parent (
  id   INT UNSIGNED NOT NULL AUTO_INCREMENT,
  val  VARCHAR(10) NOT NULL,
PRIMARY KEY (id),
UNIQUE KEY (val)
) ENGINE=InnoDB DEFAULT CHARSET latin1;
DROP TABLE IF EXISTS trans_child;
CREATE TABLE trans_child (
  id        INT UNSIGNED NOT NULL AUTO_INCREMENT,
  parent_id INT UNSIGNED NOT NULL,
  created   TIMESTAMP NOT NULL,
PRIMARY KEY (id),
INDEX (parent_id)
) ENGINE=InnoDB DEFAULT CHARSET latin1;

Perform a sample transaction that inserts records into these two tables:

START TRANSACTION;
INSERT INTO trans_parent (val) VALUES('a');
INSERT INTO trans_child (parent_id,created) VALUES(LAST_INSERT_ID(),NOW());
INSERT INTO trans_parent (val) VALUES('a');
ERROR 1062 (23000): Duplicate entry 'a' for key 'val'
ROLLBACK;
Query OK, 0 rows affected (0.01 sec)
SELECT * FROM trans_parent;
Empty set (0.00 sec)
SELECT * FROM trans_child;
Empty set (0.00 sec)

As you can see, no data has been recorded as part of the failing transaction.


Upcoming book – Expert PHP and MySQL

This month will see the release of the book Expert PHP and MySQL, of which I was a co-author. Initially this will be available for purchase in PDF format from the Wrox website and I am hopeful it will be available in print format for the MySQL Users Conference.

More than just your standard PHP and MySQL, there is detailed content on technologies including Memcached, Sphinx, Gearman, MySQL UDFs and PHP extensions. We will be posting more information at www.ExpertPhpandMySQL.com. You can download a PDF version of Chapter 1, Techniques Every Expert Programmer Needs to Know.

The book includes the following content:

  1. Techniques Every Expert Programmer Needs to Know
  2. Advanced PHP Constructs
  3. MySQL Drivers and Storage Engines
  4. Improved Performance through Caching
  5. Memcached MySQL
  6. Advanced MySQL
  7. Extending MySQL with User-defined Functions
  8. Writing PHP Extensions
  9. Full Text Search using SPHINX
  10. Multi-tasking in PHP and MySQL
  11. Rewrite Rules
  12. User Authentication with PHP and MySQL
  13. Understanding the INFORMATION_SCHEMA
  14. Security
  15. Service and Command Lines
  16. Optimization and Debugging

Don't Assume – Common Terminology

In Oracle the default transaction isolation is READ_COMMITTED. In MySQL the default is REPEATABLE_READ. Because MySQL also has READ_COMMITTED, I have seen in more than one production MySQL environment a transaction isolation of READ_COMMITTED. The explanation, and ultimately incorrect assumption, is that because the default in Oracle is READ_COMMITTED it was made the default in MySQL.

I’m not going to discuss the specific differences of these isolation levels, except to say that READ_COMMITTED in Oracle more closely relates to the MySQL default of REPEATABLE_READ and not READ_COMMITTED. Just because the same term for a common feature exists, don’t assume the underlying functionality is the same, or that either or both actually conform to the ANSI SQL standard.

While switching your MySQL environment to READ_COMMITTED is possible, there is still conjecture whether this actually provides any performance improvement. There are different cases of improving locking contention. In one case Heikki Tuuri, the creator of InnoDB, suggests READ_COMMITTED may overcome an adjacent range gap locking contention problem, while in a tpcc-like benchmark a far greater number of deadlocks were detected.

I will close by stating two facts. When changing the MySQL transaction isolation from the default of REPEATABLE_READ you are using a code path that is less tested and less frequently used compared to the millions of default MySQL installations, and you are also required to change the default replication format, again a code path less tested and potentially a significant increase in I/O load.
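
For reference, this is how you would inspect and change the isolation level on a 5.x server, where the variable is named tx_isolation:

mysql> SELECT @@GLOBAL.tx_isolation, @@SESSION.tx_isolation;
mysql> SET GLOBAL tx_isolation='READ-COMMITTED';

or permanently in your configuration file:

[mysqld]
transaction-isolation = READ-COMMITTED

If you do this on 5.1 with binary logging enabled, the replication format change mentioned above (binlog_format=ROW) also applies.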


Don't Assume – Session Scope

MySQL system variables and status variables have two scopes, GLOBAL and SESSION, which are self-explanatory. This is important to realize when altering system variables dynamically. The following example does not produce the expected results.

mysql> USE test;
Database changed
mysql> CREATE TABLE example1(
    -> id INT UNSIGNED NOT NULL AUTO_INCREMENT,
    -> col1 VARCHAR(10) NOT NULL,
    -> PRIMARY KEY(id)
    -> );
Query OK, 0 rows affected (0.29 sec)

mysql> SHOW CREATE TABLE example1\G
*************************** 1. row ***************************
       Table: example1
Create Table: CREATE TABLE `example1` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `col1` varchar(10) NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

We see that the table has a default CHARACTER SET of latin1. If you wanted to ensure all tables are created as utf8 you would change the appropriate system variable. For example, we change the GLOBAL system variable and re-create the table.

mysql> SHOW GLOBAL VARIABLES like 'char%';
+--------------------------+----------------------------------------------------------------+
| Variable_name            | Value                                                          |
+--------------------------+----------------------------------------------------------------+
| character_set_client     | latin1                                                         |
| character_set_connection | latin1                                                         |
| character_set_database   | latin1                                                         |
| character_set_filesystem | binary                                                         |
| character_set_results    | latin1                                                         |
| character_set_server     | latin1                                                         |
| character_set_system     | utf8                                                           |
| character_sets_dir       | /Users/rbradfor/mysql/mysql-5.1.39-osx10.5-x86/share/charsets/ |
+--------------------------+----------------------------------------------------------------+
8 rows in set (0.00 sec)

mysql> SET GLOBAL character_set_server=utf8;
Query OK, 0 rows affected (0.10 sec)
mysql> DROP TABLE example1;
Query OK, 0 rows affected (0.00 sec)
mysql> CREATE TABLE example1( id INT UNSIGNED NOT NULL AUTO_INCREMENT, col1 VARCHAR(10) NOT NULL, PRIMARY KEY(id) );
Query OK, 0 rows affected (0.12 sec)

mysql> SHOW CREATE TABLE example1\G
*************************** 1. row ***************************
       Table: example1
Create Table: CREATE TABLE `example1` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `col1` varchar(10) NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1
1 row in set (0.02 sec)

The table is still latin1. This is because we now have a SESSION scope that differs from the GLOBAL scope, as seen in this output.

mysql> SELECT @@GLOBAL.character_set_server,@@SESSION.character_set_server;
+-------------------------------+--------------------------------+
| @@GLOBAL.character_set_server | @@SESSION.character_set_server |
+-------------------------------+--------------------------------+
| utf8                          | latin1                         |
+-------------------------------+--------------------------------+
1 row in set (0.00 sec)

The solution is easy, however the trap can be easily overlooked, especially when changing other MySQL system variables.

mysql> SET SESSION character_set_server=utf8;
Query OK, 0 rows affected (0.00 sec)

mysql> DROP TABLE example1;
Query OK, 0 rows affected (0.00 sec)

mysql> CREATE TABLE example1( id INT UNSIGNED NOT NULL AUTO_INCREMENT, col1 VARCHAR(10) NOT NULL, PRIMARY KEY(id) );
Query OK, 0 rows affected (0.09 sec)

mysql> SHOW CREATE TABLE example1\G
*************************** 1. row ***************************
       Table: example1
Create Table: CREATE TABLE `example1` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `col1` varchar(10) NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8
1 row in set (0.00 sec)

By default, when not specified in SHOW and SET commands, the scope is SESSION; prior to MySQL 5.0.2 the default for SHOW STATUS was GLOBAL. This is a note you will find in the 5.0 Reference Manual but not the current GA version 5.1 Reference Manual. See also SHOW STATUS Gotcha written in August 2006.
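
The safest habit is to always state the scope explicitly in both SHOW and SET, and to remember that SET GLOBAL does not alter the session you are already in:

mysql> SHOW GLOBAL VARIABLES LIKE 'character_set_server';
mysql> SHOW SESSION VARIABLES LIKE 'character_set_server';
mysql> SET GLOBAL character_set_server=utf8;
mysql> SET SESSION character_set_server=utf8;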

There are also other gotchas with scope that we will discuss at some other time.


Don't Assume Series – MySQL for the Oracle DBA

As part of my MySQLCamp for the Oracle DBA series of talks to help the Oracle DBA understand, use and appreciate MySQL, I have also developed a series of short, interesting posts I have termed “Don’t Assume”. Many of these are recurring points from my consulting experiences observing Oracle DBAs using MySQL. I am putting the finishing touches to my MySQL for the Oracle DBA series of talks and I’m excited to highlight some of the subtle differences and unique characteristics of the MySQL RDBMS in comparison to Oracle and, to some extent, other products including SQL Server.

Stay tuned for more soon.

I will be presenting at the MySQL Users Conference 2010 in Santa Clara, April 2010 two presentations from this series, IGNITION and LIFTOFF. This series also includes JUMPSTART and VELOCITY.

Edward Screven of Oracle to Answer Questions for future of MySQL

For those of you on the O’Reilly MySQL conference list you will no doubt see this email, but for other readers here are the important bits.


Oracle Executive Will Speak at O’Reilly MySQL Conference & Expo
Edward Screven to Answer Questions re: Future of MySQL

Sebastopol, CA, February 24, 2010—Wonder about the future of MySQL? Curious about what Oracle plans for the open source database software? Expect answers when Edward Screven, Oracle’s chief corporate architect and leader of the MySQL business, speaks at the O’Reilly MySQL Conference & Expo, scheduled for April 12-15, at the Santa Clara Convention Center and the Hyatt Regency Santa Clara.

Edward Screven reports to CEO Larry Ellison, and he drives technology and architecture decisions across all Oracle products to ensure that product directions are consistent with Oracle’s overall strategy. He’ll discuss the current and future state of MySQL, now part of the Oracle family of products. His presentation will also cover Oracle’s investment in MySQL technology and community, as well as the role that open source in general is playing within heterogeneous customer environments around the world.

I have not found a link yet to provide reference to this.

Ineffective concatenated indexes

In MySQL significant performance improvements can be achieved by the correct use of indexes. It is important to understand different MySQL index implementations, and one key improvement over indexes defined on single columns is to use multiple-column indexes, more commonly known as concatenated indexes.

However it’s also possible to define ineffective indexes. This example shows you how to identify a concatenated index that is ineffective.

CREATE TABLE example (
  id INT UNSIGNED NOT NULL AUTO_INCREMENT,
  a  INT UNSIGNED NOT NULL,
  b  INT UNSIGNED NOT NULL,
  c  INT UNSIGNED NOT NULL,
  d  INT UNSIGNED NOT NULL,
  x  VARCHAR(10),
  y  VARCHAR(10),
  z  VARCHAR(10),
PRIMARY KEY (id),
UNIQUE INDEX (a,b,c,d)
) ENGINE=InnoDB;

INSERT INTO example(a,b,c,d) VALUES
(1,0,1,1),(1,0,1,2), (1,0,2,3), (1,0,4,5),
(2,0,2,1),(2,0,2,2), (2,0,2,3), (2,0,2,5),
(3,0,2,1),(3,0,2,3), (3,0,3,3), (3,0,3,5);

And our sample query is

SELECT id,x,y,z
FROM   example
WHERE  a = 2
AND    c = 2
AND    d = 3;

The EXPLAIN plan is

+----+-------------+---------+------+---------------+------+---------+-------+------+-------------+
| id | select_type | table   | type | possible_keys | key  | key_len | ref   | rows | Extra       |
+----+-------------+---------+------+---------------+------+---------+-------+------+-------------+
|  1 | SIMPLE      | example | ref  | a             | a    | 4       | const |    4 | Using where |
+----+-------------+---------+------+---------------+------+---------+-------+------+-------------+

While we are using the index (see the key column), the full benefit of the index is not utilized (see the key_len). The key_len of 4 indicates the number of bytes used, that is only the first INT column (a). Because a concatenated index is used left to right, without a predicate on b the index cannot be used for c and d.

Let’s look at a second example.

SELECT id,x,y,z
FROM   example
WHERE  a = 2
AND    b = 0
AND    c = 2
AND    d = 3;

+----+-------------+---------+-------+---------------+------+---------+-------------------------+------+-------+
| id | select_type | table   | type  | possible_keys | key  | key_len | ref                     | rows | Extra |
+----+-------------+---------+-------+---------------+------+---------+-------------------------+------+-------+
|  1 | SIMPLE      | example | const | a             | a    | 16      | const,const,const,const |    1 |       |
+----+-------------+---------+-------+---------------+------+---------+-------------------------+------+-------+

In this example the index is used and the key_len is 16, that is 4 x 4-byte INT columns. This is ideal usage of this index.

In the above example, the client was using a common data structure, however one column was effectively unused; its values were all 0. Certain queries were written with this knowledge, and instead of specifying the column the developers elected to omit it. The impact of this code change was increased load on the database and less efficient performance.

While this was easily addressed by a code change, an alternative could have been to change the index definition. It was not possible to remove the column due to legacy requirements.

ALTER TABLE example
DROP INDEX a,  ADD UNIQUE INDEX (a,c,d);
mysql> explain SELECT id,x,y,z FROM   example  WHERE  a = 2 AND    c = 2 AND    d = 3;
+----+-------------+---------+-------+---------------+------+---------+-------------------+------+-------+
| id | select_type | table   | type  | possible_keys | key  | key_len | ref               | rows | Extra |
+----+-------------+---------+-------+---------------+------+---------+-------------------+------+-------+
|  1 | SIMPLE      | example | const | a             | a    | 12      | const,const,const |    1 |       |
+----+-------------+---------+-------+---------------+------+---------+-------------------+------+-------+

A better solution may be to enforce better integrity on this business rule, that b only contains 0. MySQL does not support check constraints. The closest option would be to use an ENUM data type and a STRICT sql_mode, however ENUM is only a string object so this would break other code.

ALTER TABLE example MODIFY b ENUM('0') NOT NULL DEFAULT '0';

Not only is performance impacted in this situation; in other examples of ineffective indexes it’s simply a waste of disk space to store additional columns in an index that are never used, and also a waste of memory, as the index pages stored in memory contain information that is not used.

Getting started with Cassandra

With the motivation of today’s public news on Twitter’s move from MySQL to Cassandra, my own desire following in-depth discussions at last November’s Open SQL Camp to consider Cassandra, and yesterday’s discussion with a new client on persistent key-value store products, today I downloaded, installed and configured Cassandra for the first time. Not that today’s news was unexpected; if you follow the Twitter Engineering Open Source projects you would have seen Cassandra, as well as other products, being used or evaluated by Twitter.

So I went from nothing to a working Cassandra node in under 5 minutes. This is what I did.

  1. While I knew this was an Apache project, a Google search yielded The Apache Cassandra Project as the 3rd link, at http://incubator.apache.org/cassandra/. Congrats to Cassandra, now a top level Apache project. This url will update soon.
  2. Download Cassandra. Hard to miss with a big green button on home page. Current version is 0.5
  3. I read Getting Started, which is the 3rd top level link on the menu after Home and Download. Step 1 is picking a version, which I’ve already done; Step 2 is Running a single node.
  4. The Getting Started page indicated a problem on Mac OS X with the required minimum Java version. I was installing on Mac OS X 10.5 and CentOS 5.4. I’ve experienced this Java 6 default path issue before. I set my JAVA_HOME and PATH accordingly (after I updated the wiki with the correct value).
  5. I extracted the tar file, changed to the directory and took a look at the README.txt file. Yes, I always check this first with any software, and it is relevant because it includes valuable instructions on creating the default data and log directories (see the sketch after this list).
  6. Start with bin/cassandra -f. No problems!
  7. I then followed the instructions from the link in Step 2 with the CassandraCli. This tests and confirms the installation is operational.
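
For those who want the condensed version, the whole exercise is only a few commands. A sketch of the steps above (the tarball name and the default data and log directories are assumptions based on the 0.5 release, so verify against your download and README.txt):

$ tar xzf apache-cassandra-0.5.0-bin.tar.gz
$ cd apache-cassandra-0.5.0
$ sudo mkdir -p /var/lib/cassandra /var/log/cassandra
$ sudo chown `whoami` /var/lib/cassandra /var/log/cassandra
$ bin/cassandra -f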

Ok, a working environment. I’ve now installed on a second machine and tested it, however I now need to configure the cluster, and the documentation is not as straightforward. Time to try out Google again.

On a side note, this is one reason why I love Open Source. I followed the instructions online and found a mistake in the Mac OS X path; I simply registered and corrected it, providing the benefit of my experience for the next reader(s).

You may also like to view future posts as I continue with Cassandra.

The correct approach to rolling MySQL logs

I say correct because there are several incorrect approaches to managing MySQL logs. In MySQL you have two important log files, the MySQL error log (configured with --log-error) and the MySQL slow query log (configured with --log-slow-queries, or --slow-query-log and --slow-query-log-file which are available from 5.1.29).

The ideal management of these log files is different for each type of file.

The MySQL Error Log

With the error log you want to have one file showing a contiguous history of the server instance. This log provides valuable information over time and you should not discard it. You do NOT want to roll your error log. If you feel the content in the error log is too much, then these are errors you need to be addressing, then archiving after correction. There are, however, circumstances where error logs are rolled outside of your control.

The first is the FLUSH LOGS command. When this command is run, the error log is renamed with an -old suffix and a new log is created. For example:

$ ls -l log/error*
-rw-rw---- 1 mysql root  1733 Feb 22 16:11 error.log

$ mysql -uroot -p -e "FLUSH LOGS"

$ ls -l log/error*
-rw-rw---- 1 mysql mysql    0 Feb 22 18:08 error.log
-rw-rw---- 1 mysql root  1733 Feb 22 16:11 error.log-old

What happens when you run this command again?

$ mysql -uroot -p -e "FLUSH LOGS"

$ ls -l log/error*
-rw-rw---- 1 mysql mysql 0 Feb 22 18:10 log/error.log
-rw-rw---- 1 mysql mysql 0 Feb 22 18:08 log/error.log-old

You have now lost all valuable information in the error log, both the current log and the -old log are ZERO bytes in size.

The second example is the Ubuntu-specific MySQL distribution, which logs MySQL information to the system log (i.e. /var/log/syslog). You then receive a daily log rotation via the default OS logrotate settings, and effectively lose information after 7 days. Here is what you will find on a stock Ubuntu server.

$ ls -l /var/log
...
-rw-r----- 1 mysql       adm       0 2008-05-28 20:33 mysql.err
-rw-r----- 1 mysql       adm       0 2008-05-28 20:33 mysql.log
...
-rw-r----- 1 syslog      adm  278480 2010-02-22 20:22 syslog
-rw-r----- 1 syslog      adm  366934 2010-02-22 06:25 syslog.0
-rw-r----- 1 syslog      adm   21025 2010-02-21 06:27 syslog.1.gz
-rw-r----- 1 syslog      adm   18551 2010-02-20 06:47 syslog.2.gz
-rw-r----- 1 syslog      adm   20086 2010-02-19 06:25 syslog.3.gz
-rw-r----- 1 syslog      adm   17135 2010-02-18 06:40 syslog.4.gz
-rw-r----- 1 syslog      adm   19238 2010-02-17 06:32 syslog.5.gz
-rw-r----- 1 syslog      adm   16101 2010-02-16 06:34 syslog.6.gz

You have to trawl the syslog files to find any mysql specific information, and even that is not possible with one command.

$ grep mysql syslog syslog.0

$ zcat syslog*gz | grep mysql
Feb 16 22:12:20 db1 mysqld[21769]: 100216 22:12:20 [ERROR] /usr/sbin/mysqld: Incorrect key file for table '/tmp/#sql_5508_40.MYI'; try to repair it
Feb 16 22:12:20 db1 mysqld[21769]: 100216 22:12:20 [ERROR] /usr/sbin/mysqld: Incorrect key file for table '/tmp/#sql_5508_33.MYI'; try to repair it
Feb 16 22:30:15 db1 mysqld[21769]: 100216 22:19:13 [ERROR] /usr/sbin/mysqld: Incorrect key file for table '/tmp/#sql_5508_29.MYI'; try to repair it
Feb 16 22:30:17 db1 mysqld[21769]: 100216 22:19:13 [ERROR] /usr/sbin/mysqld: Incorrect key file for table '/tmp/#sql_5508_44.MYI'; try to repair it
...

I’m lucky I looked, because tomorrow would have been too late to see these errors, as this is the oldest log file.

I place a disclaimer that if you have proper backups in place these files are retrievable, but that takes time and effort which I consider unnecessary for a database server. The mysql log is one of the most important log files for this type of server; you want this information on the server.

The solution to this problem is easy in Ubuntu: always define the log-error option in both the [mysqld] and [mysqld_safe] sections, pointing to a specific file, e.g. /var/log/mysql/mysql.log
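
A minimal sketch using that path:

[mysqld]
log-error = /var/log/mysql/mysql.log

[mysqld_safe]
log-error = /var/log/mysql/mysql.log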

The MySQL slow query log

With the slow query log, you DO want to rotate the log file produced. The best practice is to rotate this file daily. In addition you should analyze the log file, producing a top 5 or top 10 list of slow SQL queries each day. As load may not always be predictable, it’s ideal to also analyze the combined logs of the past 7 days for cross reference.

The daily granularity allows you to track your load of slow performing queries more consistently, and it also enables you to verify more easily the impact of improvements once they have been deployed.
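
The mysqldumpslow utility included with MySQL is the simplest starting point for this analysis. For example, to report the top 10 queries sorted by total query time (the log path is whatever you configured above):

$ mysqldumpslow -s t -t 10 /var/log/mysql/slow.log

More capable third-party analysis tools exist, but this approach requires nothing extra on the server.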

Care needs to be taken when rolling your log files; simply moving the log file will not work, as the following example shows. (A good tip to help in slow query SQL analysis is to comment your SQL.)

$ mysql -uroot -p -e "select sleep(5)"
$ tail -1 log/slow.log
select sleep(5);

$ ls -l log/slow*
-rw-rw---- 1 mysql mysql 352 Feb 22 15:26 log/slow.log

$ mv log/slow.log log/slow.log.1; touch log/slow.log
$ ls -l log/slow*
-rw-r--r-- 1 root  root    0 Feb 22 15:26 log/slow.log
-rw-rw---- 1 mysql mysql 352 Feb 22 15:26 log/slow.log.1


$ mysql -uroot -p -e "SELECT SLEEP(4)"

$ ls -l log/slow*
-rw-r--r-- 1 root  root    0 Feb 22 15:26 log/slow.log
-rw-rw---- 1 mysql mysql 533 Feb 22 16:01 log/slow.log.1
$ tail -1 log/slow.log.1
SELECT SLEEP(4);

As you can see, the new slow.log was not written to; the query went to the previous file, which mysqld still holds open (the same inode).

$ rm -f log/slow.log; mv log/slow.log.1 log/slow.log
$ ls -l log/slow.log*
-rw-rw---- 1 mysql mysql 533 Feb 22 16:01 log/slow.log
$ cp log/slow.log log/slow.log.`date +%M`; > log/slow.log
$ mysql -uroot -p -e "SELECT SLEEP(3)"
$ ls -l log/slow.log*
-rw-rw---- 1 mysql mysql 181 Feb 22 16:03 log/slow.log
-rw-r----- 1 root  root  533 Feb 22 16:03 log/slow.log.03
$  tail -1 log/slow.log
SELECT SLEEP(3);

$ cp log/slow.log log/slow.log.`date +%M`; > log/slow.log
$ mysql -uroot -p -e "SELECT SLEEP(6)"
$ ls -l log/slow.log*
-rw-rw---- 1 mysql mysql 181 Feb 22 16:04 log/slow.log
-rw-r----- 1 root  root  533 Feb 22 16:03 log/slow.log.03
-rw-r----- 1 root  root  181 Feb 22 16:04 log/slow.log.04
$ tail -1 log/slow.log
SELECT SLEEP(6);

As you can see, by copying and truncating you can perform an effective log rotate manually. Ideally you should configure logrotate to manage this log.
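
A minimal logrotate sketch using the same copy and truncate approach (the path and retention are assumptions to adapt for your environment):

/var/log/mysql/slow.log {
    daily
    rotate 7
    copytruncate
    compress
    missingok
    notifempty
}

The copytruncate directive is the important one; it preserves the open file exactly as the manual copy above did, and avoids a FLUSH LOGS that, as shown earlier, would also roll your error log.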

More Information