ImageMagick on Mac OS X

Wanting to do some image manipulation, I realized my Linux scripts don’t run under Mac OS X, as ImageMagick was not installed via MacPorts.

However installation failed:

$ sudo port install imagemagick
--->  Computing dependencies for ImageMagick
--->  Verifying checksum(s) for xorg-libX11
Error: Checksum (md5) mismatch for libX11-1.3.3.tar.bz2
Error: Checksum (sha1) mismatch for libX11-1.3.3.tar.bz2
Error: Checksum (rmd160) mismatch for libX11-1.3.3.tar.bz2
Error: Target org.macports.checksum returned: Unable to verify file checksums
Error: The following dependencies failed to build: xorg-libXext xorg-libX11 xorg-libXt xorg-libsm xorg-libice
Error: Status 1 encountered during processing.
Before reporting a bug, first run the command again with the -d flag to get complete output.

Figuring that some of my packages might require an upgrade, I ran:

$ sudo port selfupdate
$ sudo port -d upgrade outdated

This also failed. Turning to the FAQ, it seemed what I needed to do was remove and re-install the offending package producing the checksum error, via the following syntax:

$ sudo port clean --all <portname>
$ sudo port install <portname>

It seemed I had to do this for several packages manually; however, in the end removing and re-installing a number of packages addressed the problem, and now ImageMagick is happily running on Mac OS X.

bash-3.2$ sudo port clean --all xorg-libX11
--->  Cleaning xorg-libX11
bash-3.2$ sudo port install xorg-libX11
--->  Computing dependencies for ImageMagick
--->  Verifying checksum(s) for xorg-libX11
Error: Checksum (md5) mismatch for libX11-1.3.3.tar.bz2
Error: Checksum (sha1) mismatch for libX11-1.3.3.tar.bz2
Error: Checksum (rmd160) mismatch for libX11-1.3.3.tar.bz2
Error: Target org.macports.checksum returned: Unable to verify file checksums
Error: The following dependencies failed to build: xorg-libXext xorg-libX11 xorg-libXt xorg-libsm xorg-libice
Error: Status 1 encountered during processing.
Before reporting a bug, first run the command again with the -d flag to get complete output.
bash-3.2$ sudo port clean --all libX11
Error: Port libX11 not found
Before reporting a bug, first run the command again with the -d flag to get complete output.
bash-3.2$ sudo port clean --all xorg-libX11
--->  Cleaning xorg-libX11
bash-3.2$ sudo port install xorg-libX11

tcpdump errors on FreeBSD for mk-query-digest

While I regularly use this tcpdump command for MySQL query analysis with mk-query-digest, I recently found that it didn’t work on FreeBSD.

$ tcpdump -i bge0 port 3306 -s 65535 -x -n -q -tttt -c 5
tcpdump: syntax error

This left me perplexed, as reading the man page seemed to indicate my options were valid. I tried a few variations just to be sure, without success.

$ tcpdump -i bge0 -c 5 port 3306 -x
tcpdump: syntax error
$ tcpdump -i bge0 -c 5 port 3306 -q
tcpdump: syntax error
$ tcpdump -i bge0 -c 5 port 3306 -tttt
tcpdump: syntax error

The solution was actually quite simple in the end. It had nothing to do with the options themselves; it had everything to do with their order. Placing the port filter expression last solved the problem.

$ tcpdump -i bge0 -s 65535 -x -n -q -tttt -c 5 port 3306
$ uname -a
FreeBSD db4.example.com 6.3-RELEASE-p3 FreeBSD 6.3-RELEASE-p3 #0: Wed Jul 16 05:13:50 EDT 200

MySQL Best Practices: User Security

It is critical that you do not use the default MySQL installation security; it’s simply insecure.

Default Installation

When installed, MySQL allows any user with physical access to the server to connect via unauthenticated (anonymous) users. MySQL also provides complete access to all super user privileges via the ‘root’ user, which has no default password.

$ mysql -uroot
mysql> SELECT host,user,password FROM mysql.user;
+--------------+------+-------------------------------------------+
| host         | user | password                                  |
+--------------+------+-------------------------------------------+
| localhost    | root |                                           |
| server.local | root |                                           |
| 127.0.0.1    | root |                                           |
| localhost    |      |                                           |
| server.local |      |                                           |
+--------------+------+-------------------------------------------+

What you see here are two types of users.

  • The ‘root’ user, which has MySQL super user privileges for your server via ‘localhost’ connections, with no password.
  • Unauthenticated users, indicated by the blank ‘user’ column.

The absolute minimum you should do is run the provided mysql_secure_installation command for immediate improvements. When running this command you are prompted for the following
options; the output has been trimmed for presentation purposes.

$ mysql_secure_installation
Enter current password for root (enter for none):
Set root password? [Y/n] y
New password:
Re-enter new password:
Remove anonymous users? [Y/n] Y
Disallow root login remotely? [Y/n] Y
Remove test database and access to it? [Y/n] Y
Reload privilege tables now? [Y/n] Y

If you revisit the permissions now, you’ll see what you would expect from a more secure initial installation.

mysql> SELECT host,user,password FROM mysql.user;
+-----------+------+-------------------------------------------+
| host      | user | password                                  |
+-----------+------+-------------------------------------------+
| localhost | root | *FDAF706717E70DB8DDAD0C5214B13770E1A80B0E |
+-----------+------+-------------------------------------------+

This is only the first step to hardening your MySQL instance and server.

Recommendations

The following are my recommendations for minimum MySQL security permissions; a worked sketch follows the list:

  • Always set a MySQL ‘root’ user password
  • Change the MySQL ‘root’ user id to a different name, e.g. ‘dba’
  • Only enable SUPER privileges to dba accounts, and only ever for ‘localhost’.
  • Application user permissions should be as restrictive as possible.
  • Never use ‘%’ for a hostname
  • Never use ALL TO *.*
  • Ideally the application should have at least two types of users, a read/write user and a read user.
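
As a sketch of these recommendations, the following shows the style of grants I mean. The schema ‘appdb’, the ‘10.0.0.%’ network and all user names and passwords are hypothetical placeholders for your environment.

mysql> RENAME USER 'root'@'localhost' TO 'dba'@'localhost';
mysql> SET PASSWORD FOR 'dba'@'localhost' = PASSWORD('a-strong-password');
mysql> CREATE USER 'app_rw'@'10.0.0.%' IDENTIFIED BY 'another-password';
mysql> GRANT SELECT, INSERT, UPDATE, DELETE ON appdb.* TO 'app_rw'@'10.0.0.%';
mysql> CREATE USER 'app_ro'@'10.0.0.%' IDENTIFIED BY 'a-third-password';
mysql> GRANT SELECT ON appdb.* TO 'app_ro'@'10.0.0.%';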

There is a lot more to be discussed about physical operating system security and the MySQL permission/privilege model. One product I know of that helps is SecuRich – the MySQL Security Package, featuring roles, password history and many other cool functionalities.

References

A recent post by Lance Miller quoted the following.


I can’t tell you how many times in the past 18 months that I’ve found real enterprises running vulnerable databases with default passwords, weak passwords and no real permissions management. It’s bad enough that the stats right now are this (so I guess I can tell you):
– 9 out of 10 organizations have a Microsoft SQL Database with a blank “sa” password (or an sa password of “sa”, “sql” or “password”)
– 9 out of 10 organizations have a Postgres Database with a default password
– 9 out of 10 organizations have a Sybase Database with a default password

The article didn’t include MySQL; however, some organizations don’t change the default password either, though probably not 9 out of 10 in my experience.


MySQL Monitoring – What's really needed

The implementation of MySQL monitoring is critical for any organization that uses a database and wants to avoid the inevitable disaster. There are 3 important components, each serving a key purpose in “MySQL Monitoring” in general:

  • Monitoring – Historical and graphical information
  • Alerting – Tell me when something is wrong
  • Dashboard – The State of NOW

Monitoring

There is no one option for monitoring that is significantly better than another. A short list of what’s on offer can be found at http://monitoring-mysql.com/monitoring-products. What’s important is that you have monitoring in place so you can historically review situations, compare across your servers, and better identify physical or database bottlenecks. My recommended products are Cacti, which is packaged with most popular Linux distributions and so can be installed via a single apt-get/yum/rpm command (see the example below), and the MySQL Cacti Templates. This is not the best product solution or combination; it is in my opinion the most common, and covers all the essential bases.
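
For example, on the monitoring host (package names current at the time of writing; on RedHat/CentOS the cacti package comes from the EPEL repository):

$ sudo apt-get install cacti      # Debian/Ubuntu
$ sudo yum install cacti          # RedHat/CentOS, via EPEL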

It’s best to use a different web server (i.e. one that is publicly accessible) as the monitoring server, rather than installing the web interface on any single DB server.

Alerting

Alerting is key to notifying you of a problem without the need for somebody to be viewing a screen and seeing it happen via your monitoring. The identification of high CPU load, a disk nearing capacity, database locking etc. often helps avoid a problem before it becomes some level of disaster. Almost all companies use Nagios or a derivative such as the main fork Icinga, or products that include Nagios like Opsview or GroundWork.

Dashboard

While both monitoring and alerting are necessary, they both lack a key component needed for successful administration: timing. Both of these earlier options sample infrequently, e.g. every 1 minute or more generally every 5 minutes (by default), but problems can happen quickly. This is why each organization needs a dashboard. I don’t know of any products here, unless you try to adapt a monitoring product, but what’s needed is a very lightweight and very business centric single page of Green/Yellow/Red statuses of your environment, including databases, web servers, response time, traffic etc. This is for the state of NOW. A dashboard should sample every 5 to 10 seconds. I have seen larger and more successful companies use various home grown implementations. I developed a product for one company; you can see the screen output in a presentation at http://www.slideshare.net/ronaldbradford/10x-performance-improvements. It included the following on a single page (a minimal polling sketch follows the list):

  • 5 DB servers, monitoring load average, ping time, database connections (active, free, locked), and replication availability and lag
  • 5 web servers, monitoring load average, ping time and Apache connections
  • Application metrics, monitoring 3 different page load times and page size
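
As a minimal sketch of the idea (not the product I built), a dashboard poller can be as simple as a loop sampling key indicators every few seconds. The hostnames are hypothetical and MySQL credentials are assumed to come from a suitable .my.cnf.

#!/bin/sh
# Hypothetical dashboard poller: sample every 5 seconds
while true
do
  for HOST in db1 db2 db3 db4 db5     # your DB servers
  do
    # Threads_connected as a proxy for database connections
    CONN=$(mysql -h $HOST -N -e "SHOW GLOBAL STATUS LIKE 'Threads_connected'" | awk '{print $2}')
    PING=$(ping -c 1 $HOST | awk -F/ 'END {print $5}')   # average round trip in ms
    echo "$(date '+%H:%M:%S') $HOST connections=$CONN ping=${PING}ms"
  done
  sleep 5
done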

It is often important to be able to identify a key problem and drill down quickly, rather than the usual “the website is slow” question followed by investigating the same repetitive tasks. You need to automate, and be more proactive in response, especially to load and locking issues.

Advanced Monitoring

Information is only half the requirement; it is what you do with this information that determines how to be proactive rather than reactive. If I were the DBA of a company I’d do even more than these initial 3 steps, which are a necessary base. I would also monitor, for example:

  • Database size and growth. This is important to be preemptive about your capacity. Example SQL at http://ronaldbradford.com/mysql-dba/#allschemas
  • Error Log changes
  • Backup timing. This is important as your DB grows, since it affects recovery.
  • Recovery timing
  • Gather raw MySQL status information, because monitoring tools only capture what you ask them to, not everything. While you may not analyze it all now, you may want to look back in time in the future (see the sketch following this list). Example scripts at http://ronaldbradford.com/mysql-dba/#log-stats
  • Hourly/daily text reports. Producing an easily readable SHOW GLOBAL STATUS report, such as with statpack, will for example enable me to know network throughput in the DB, transaction throughput and key indicators of locking, disk access etc. While you may have a graphic interface, it’s a lot easier to automate and grep text reports.
  • Proactive restrictions. The Twitter fail whale is a great example: as the system moves closer to known limits, but before reaching them, they start limiting load. This includes, for example, disabling less critical but resource intensive functionality, e.g. people search. They also start rejecting connections so the system does not reach a crash state. This could be proactively changing timeout values so the DB fails queries, and the web servers respond accordingly with a try-again approach.
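
As referenced above, a sketch of gathering raw status snapshots; the destination path and schedule are hypothetical.

#!/bin/sh
# Snapshot raw MySQL status/variables for later analysis
# crontab example:  0 * * * * /usr/local/bin/mysql-snapshot.sh
DIR=/var/log/mysql-stats
STAMP=$(date +%Y%m%d.%H%M)
mkdir -p $DIR
mysql -N -e "SHOW GLOBAL STATUS" > $DIR/status.$STAMP.txt
mysql -N -e "SHOW GLOBAL VARIABLES" > $DIR/variables.$STAMP.txt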

While these are all important, if you have only limited resources, too much information can be just as much of a burden; people then start ignoring the information and miss what’s important.

Finalized speakers list for Kaleidoscope conference

We have secured approval for our final two speakers and now have a full schedule for the four-day MySQL track at the ODTUG Kaleidoscope conference. The conference is in Washington DC from Monday June 28th to Thursday July 1st. Welcome to Josh Sled and Craig Sylvester, who join our existing list of speakers.

This conference will include 19 sessions of dedicated MySQL content from Monday through Thursday by well qualified MySQL community members, as well as a forum discussion and reception on Monday night. You don’t need to be an Oracle developer to benefit from this conference. We will be offering a discount code for MySQL attendees in the coming days.

If you are in the DC area, the Monday night forum (known as the sundown sessions) as well as the reception are FREE for the MySQL community. This was a great gesture by the Oracle Developer Tools Users Group to openly invite the MySQL community to meet and interact. We ask that you register your name and email for confirmation of numbers.

Speakers List

  • Philip Antoniades, Oracle/MySQL
  • Ronald Bradford, 42SQL
  • Sheeri K. Cabral, The Pythian Group
  • Laine Campbell, PalominoDB
  • Patrick Galbraith, Northscale
  • Sarah Novotny, Blue Gecko
  • Padraig O’Sullivan, Akiba Technologies Inc.
  • Jay Pipes, Rackspace Cloud
  • Dossy Shiobara, Panoptic.com
  • Josh Sled, Oracle/MySQL
  • Craig Sylvester, Oracle/MySQL
  • Matt Yonkovit, Percona


Why is my database slow?

Not part of my Don’t Assume series, but when a client states “Why is my database slow?”, you need to determine whether the database is indeed slow.

Some simple tools come to the rescue here; one is Firebug. If a web page takes 5 seconds to load, but the .htm file takes 400ms, and 100+ assets are downloaded from one base URL, then is the database actually slow? Tuning the database will only improve the 400ms portion of the 5,000ms download.

There are some very simple tips here. MySQL is my domain expertise and I will not profess to improving the entire stack; however, perception is everything to a user and you can often do a lot. Some simple points include:

  • Know about blocking assets in your <head> element, e.g. .js files.
  • Streamline .js, .css and images to what’s needed, e.g. don’t download a 100k image only to resize it to a thumbnail via style elements.
  • Sprites. Like many efficient but simple SQL statements, network overhead is your greatest expense.
  • Split images to a different domain.
  • Split images across multiple domains (e.g. only 3, via CNAME, are needed). Hint: Learn about the protocol.
  • Cookieless domains for static assets.
  • A lighter web container for static assets (e.g. nginx, lighttpd).
  • Know about caching, expires and etags (a hedged Apache example follows this list).
  • Strip http://www.domain.com from all your internal links (that one alone saved 12% of HTML page size for a client). You may ask whether that is really a big deal; well, in a high volume site, the sooner you can release the socket on your web server, the sooner you can start serving a different request.
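
As a hedged example of the caching and expires points, an Apache fragment for a static asset domain might look like the following; it requires mod_expires, and the path and times are illustrative only.

<Directory /var/www/static>
    ExpiresActive On
    ExpiresDefault "access plus 1 month"
    ExpiresByType image/png "access plus 1 year"
    ExpiresByType image/jpeg "access plus 1 year"
    FileETag None
</Directory>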

Like tuning a database, some things work better than others, some require more testing than others, and consultants never tell you all the tricks.

References

As with everything in tuning, do your research and also determine what works in your environment and what doesn’t. Two excellent resources to start with are Steve Souders and Best Practices for Speeding Up Your Web Site by Yahoo.

More on understanding sort_buffer_size

There have been a few posts today by Sheeri and Baron on the MySQL sort_buffer_size variable. I wanted to add some more information about this buffer: what is impacted when it is changed, and what to do about it.

The first thing you need to know is that sort_buffer_size is a per session buffer; that is, this memory is assigned per connection/thread. I’ve seen clients set this assuming it’s a global buffer (see Don’t Assume – Per Session Buffers).

Second, at the OS level and independent of MySQL, there is a threshold at 256K. From Monty Taylor: “if a buffer is set to over 256K, it uses mmap() instead of malloc() for memory allocation. Actually, this is a libc malloc thing and is tunable, but defaults to 256k.” He goes on in a further post to show that the impact for a buffer > 256K is 37x slower. This applies to all per session buffers, not just the sort buffer. I have recently heard about this limit being 512K; I wasn’t able to pin down the specific speaker to see whether this related to a newer library, kernel or OS.

With the sort_buffer_size we are lucky regarding MySQL instrumentation: there is the Sort_merge_passes status variable. While it’s not perfect, it does indicate whether the size of the buffer is insufficient. However, even with a sort_buffer_size of say 256K, seeing Sort_merge_passes increase slowly does not mean you have to increase the buffer.
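
A minimal check is to compare two samples of the sort counters; the delta, relative to overall sort activity (Sort_range, Sort_scan, Sort_rows), matters more than the raw total.

mysql> SHOW GLOBAL STATUS LIKE 'Sort%';
(wait a representative interval under normal load, then repeat)
mysql> SHOW GLOBAL STATUS LIKE 'Sort%';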

So, all this still does not tell you how to tune the buffer. Unfortunately with MySQL there is no easy answer. You do need to monitor the mysqld memory usage overall, especially if you are using persistent connections. A connection/thread will not release the memory assigned until it is closed, so it’s important to monitor for memory creep of the PGA, knowing what your initial SGA is. Morgan Tocker wrote a patch in Bug #33540 to create a RESET CONNECTION type command.

You do need to look at memory as a bottleneck. You need to learn how MySQL uses memory, not just the sort_buffer_size. I actually started many years ago to write global/session variables to indicate when buffers were used, and how much, starting with the sort_buffer_size, which was buried down in some very old filesort code. When I sought the input of an expert C coder on this, they wondered how the code, especially one loop handler, even actually worked.

Nobody knows what the optimal setting is, and that’s the problem. In certain areas, especially memory usage, the MySQL instrumentation is simply non-existent, and I’d like to see this fixed.

In conclusion, if I ever see a sort_buffer_size above 256K, e.g. 1M or 2M, I always reset it to 256K. My reasoning is simple: until you have evidence that in your specific environment increasing the buffer makes performance better, it’s better to use a smaller value. There are bigger wins, like not sorting at all, better design, or simplifying or eliminating SQL.

References

sort_buffer_size and Knowing Why
How to tune MySQL’s sort_buffer_size
Read Buffer performance hit
more on malloc() speed

Free MySQL Event in Washington DC

As the program chair for the recently announced MySQL Track at the ODTUG Kaleidoscope conference in Washington DC, we are also looking into an associated free community event for MySQL locals, in addition to the dedicated four-day track.

Please let us know your name and email via the form at http://ronaldbradford.com/ODTUG/free-event/ so we can provide more details in the coming week as we try to finalize logistics.

Registration will be necessary for attendance; however, for now we just want to know who is local so we can provide more details soon!

Update: Full details of the free Monday night sundown sessions and reception can be found at MySQL track with free event at Kaleidoscope 2010.

The MySQL community impacting the Oracle community

I’m happy to announce that the MySQL community has been given the opportunity to speak at the upcoming Oracle Developer Tools User Group (ODTUG) Kaleidoscope conference in Washington DC. We will be releasing more details of the MySQL presentations and topics this week, and we are finalizing details of possible options to include the local MySQL community during the event.

The various independent Oracle User Groups in North America that embody “by the community and for the community” have been very positive about including the MySQL community. With the Sun/MySQL (now Oracle) community team of Giuseppe Maxia, Lenz Grimmer and Kaj Arnö, and Oracle ACE Directors Sheeri K Cabral and myself, we have been happy with the openness and willingness to include us in the larger Oracle ecosystem.

We’ll announce the schedule when we finalize it, but we have had a great response from an impressive list of speakers.


The MySQL documentation is not always right

Let me preface this post with the statement that I think the MySQL documentation is an excellent and highly accurate resource. I think the MySQL docs team do a great job; however, like software and people, documentation is not perfect.

As members of the MySQL community you can always contribute to improve the process by reading the documentation and logging any issues as Documentation Bugs.

Some time ago, in a discussion with a friend and colleague, we were talking about historical defaults that were finally improved in MySQL 5.4. The specific discussion was on the new default innodb_buffer_pool_size, and we both agreed it had increased significantly. One said 1GB, the other said 128MB. Who was right? Well, we both were, and we were both inaccurate, depending on the version.

Referencing the 5.4 manual in InnoDB Startup Options and System Variables, the current value for Linux is 128M, but for Windows it’s 1GB.

However, I was confident I had been told in a presentation, perhaps even the keynote, that the value was 1GB. Firing up my server with the original 5.4.0 version I used (which was not available on Windows), we find that the default for Linux was 1GB at some point, i.e. in the first release.

mysql> show global variables like 'innodb_buffer_pool_size';
+-------------------------+------------+
| Variable_name           | Value      |
+-------------------------+------------+
| innodb_buffer_pool_size | 1073741824 |
+-------------------------+------------+
1 row in set (0.00 sec)

mysql> select 1073741824/1024/1024 as MB;
+---------------+
| MB            |
+---------------+
| 1024.00000000 |
+---------------+
1 row in set (0.00 sec)

mysql> show global variables like 'version%';
+-------------------------+------------------------------+
| Variable_name           | Value                        |
+-------------------------+------------------------------+
| version                 | 5.4.0-beta                   |
| version_comment         | MySQL Community Server (GPL) |
| version_compile_machine | x86_64                       |
| version_compile_os      | unknown-linux-gnu            |
+-------------------------+------------------------------+
4 rows in set (0.00 sec)

I’m not trying to nitpick here; I’m highlighting that MySQL is an evolving product with many different versions and architectures. It’s the job of the MySQL community to help make the documentation the best it can be for all readers. In this case I’ve not logged the issue, because 5.4 is a defunct product; however, if you want an example of how I identified a problem, provided a test case, and saw my contribution reviewed, verified and implemented, check out Bug #51739 --core-file is not default TRUE (incorrect docs).

In conclusion, always read the documentation, but pay special attention to the version the documentation matches and the version you are actually running. Defaults change between versions; innodb_thread_concurrency is a complex example, and I’ve been caught out with a large enterprise client assuming the default of a Connector/J option was true, which it was in 5.0.6, but in 5.0.5, the version the client was running, it was false.

An old saying, “trust but verify”, is a good motto to consider.

The Drizzle Census

One thing I have often wondered is just how many MySQL instances exist in the world, and what MySQL versions and architectures are in use. We hear of 50,000 Windows downloads per day, but this is misleading because MySQL is basically bundled with Linux by default or installed from various repositories, and Linux servers power many websites.

In Drizzle we have a proposed plan: the Drizzle Census. At the productive Drizzle Developers Day recently at the 2010 MySQL conference, we sat down and created a blueprint, and a subsequent high level spec, of what we considered this optional plugin should do. We didn’t get as far as I would have liked with a code skeleton to at least gather and store a sample result, but the hope is that with the community we will in the near future.

Here is the list of information we decided was appropriate as anonymous information of value.

  • Kernel Version/Architecture
  • CPU type
  • SID – HASH(processor_id,listener address,first listener port)
  • Drizzle Version
  • Drizzle Uptime
  • Drizzled process memory usage

Installation issues with MySQL 5.5.4 and resolveip

I was installing the latest MySQL 5.5.4 on a new machine and came across the following issues during installation, with steps I generally perform on other versions without incident.

$ sudo su -
$ cd /opt
$ wget http://dev.mysql.com/get/Downloads/MySQL-5.5/mysql-5.5.4-m3-linux2.6-x86_64.tar.gz/from/http://mysql.mirrors.hoobly.com/
$ tar xvfz mysql-5.5.4-m3-linux2.6-x86_64.tar.gz
$ cd mysql-5.5.4-m3-linux2.6-x86_64
$ scripts/mysql_install_db
Neither host 'barney' nor 'localhost' could be looked up with /usr/bin/resolveip
Please configure the 'hostname' command to return a correct hostname.
If you want to solve this at a later stage, restart this script with the --force option

Perform some checks

$ /usr/bin/resolveip
-su: /usr/bin/resolveip: No such file or directory

$ hostname
barney

$ head -2 /etc/hosts
127.0.0.1	localhost
127.0.1.1	barney

I was surprised to find that my Ubuntu 10.04 Lucid Lynx box had a different IP for the hostname, rather than 127.0.0.1. That was a little weird. I changed this just for comfort.
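
For reference, the change was simply pointing the hostname at the loopback address:

$ head -2 /etc/hosts
127.0.0.1	localhost
127.0.0.1	barney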

I was not familiar with this command so I tried to install it.

$ apt-get install resolveip
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Couldn't find package resolveip

$  apt-cache search resolveip

No luck there. Off to Google to search for resolveip and ubuntu, and you find the following online manual page, which surprisingly shows the following as the owner of the command.
COPYRIGHT
Copyright 2007-2008 MySQL AB, 2009 Sun Microsystems, Inc.

Suspecting it’s actually packaged with MySQL, we find this in the untarred directory.

$ bin/resolveip `hostname`
IP address of barney is 127.0.0.1

I’m now still none the wiser regarding this command. I’ve decided to take the --force option for now and we will see!

2010 MySQL Conference Presentations

I have uploaded my three presentations from the 2010 MySQL Users Conference in Santa Clara, California, which was my 5th consecutive year appearing as a speaker.

A full history of my MySQL presentations can be found on the Presenting page.

My acceptance with Oracle as ACE Director

I hinted last week at my acceptance with Oracle before the formal announcement this week at the MySQL Users Conference: not for a job, but as an Oracle ACE Director. In today’s State of the MySQL Community keynote by Kaj Arnö, I was one of the first three MySQL nominees who are now part of this program.

What exactly is an ACE Director? Using the description from the Oracle website:

Oracle ACEs and Oracle ACE Directors are known for their strong credentials as Oracle community enthusiasts and advocates, with candidates nominated by anyone in the Oracle Technology and Applications communities. The baseline requirements are the same for both designations; however, Oracle ACE Directors work more closely and formally with Oracle in terms of their community activity.

What does this mean to me?

As a significant contributor to the community, I now have the opportunity to continue, as well as to contribute to how Oracle interacts with, promotes and involves the MySQL community. As stewards, our role as Oracle ACE Directors is to be actively involved. I look forward to the challenge of helping shape and improve our State of the MySQL Community.

News and References
Welcome, Oracle ACE Directors for MySQL

State of the Dolphin – Opening keynote

Edward Screven, Chief Corporate Architect of Oracle, provided the opening keynote at the 2010 MySQL Users Conference.

Overall I was disappointed. The first half was more an Oracle sales pitch; we had some product announcements and some 5.5 performance buzz. While a few numbers and features were indeed great to hear, there was a clear lack of information for the MySQL ecosystem, including employees, alumni and various support services. I hope more is unveiled this week.

Some notes from the session:

  • Oracle’s strategy covers storage, servers, virtual machines, operating systems, databases, middleware and applications
  • Oracle builds a complete technology stack that is “open” and “integrated”, based on “open standards”
  • Products talk via open standards, with the intention that customers do not feel locked in to any technology
  • Examples include Apache, Java, Linux, Xen, Eclipse and InnoDB
  • Unbreakable Linux now has over 4,500 customers

After the sales pitch we got down to more about MySQL.

What does MySQL mean to Oracle? “We make the Oracle solution more complete as a stack for customers.”

What is the investment in MySQL?

  • Make MySQL a better MySQL
  • Develop, promote and support MySQL
  • MySQL community edition

Integration with Oracle Enterprise Manager, Oracle Secure Backup and the Oracle Audit Vault infrastructure. This I expected and have blogged about, so I’m glad to see the commitment.

MySQL 5.5 is now in Alpha. Some features are:

  • InnoDB will be default engine
  • Semi sync replication
  • Replication heartbeat
  • Signal
  • Performance Schema

MySQL 5.5 is planned to be faster, with InnoDB performance improvements and MySQL performance improvements.
The MySQL 5.5 sysbench claims: reads 200% faster, writes 364% faster.

MySQL Workbench 5.2 announcement

  • SQL Development
  • Database Administration
  • Data Modelling

MySQL Cluster 7.1 GA announcement

  • Improved Administration
  • Higher Performance
  • Carrier Grade Availability & Performance

MySQL Enterprise Backup announcement

  • Online backup for InnoDB only
  • Formerly InnoDB Hot Backup, with additional features including incremental backups

MySQL Enterprise Monitor 2.2 beta announcement

In closing the statement was “MySQL lets Oracle be more complete at the database layer”. Is that good for the MySQL Community or better for the Oracle revenue model?

My acceptance with Oracle

There have been a number of April Fools’ jokes today, so I thought I’d add my own to the list. While this sounds unexpected, it’s actually no joke.

I just accepted a position with Oracle yesterday but I can’t say any more about the details until the MySQL users conference in a few weeks.

A special thanks to Lenz, Kaj and Giuseppe, who championed everything to make it all happen; I really didn’t have to do anything other than accept.

Using ROLLBACK with MyISAM

Using ROLLBACK with MyISAM is useless. A ROLLBACK command is used to undo any DML that occurs during a transaction (i.e. between START TRANSACTION and COMMIT). The MySQL default storage engine, MyISAM, does not support transactions.
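
You can demonstrate this in a few statements with a throwaway table. The ROLLBACK raises a warning that non-transactional tables could not be rolled back, and the COUNT(*) still returns 1; nothing was undone.

mysql> CREATE TABLE t_myisam (id INT) ENGINE=MyISAM;
mysql> START TRANSACTION;
mysql> INSERT INTO t_myisam VALUES (1);
mysql> ROLLBACK;
mysql> SHOW WARNINGS;
mysql> SELECT COUNT(*) FROM t_myisam;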

It is easy with the SHOW GLOBAL STATUS command to see if your application code uses ROLLBACK. By performing two samples you can look at the delta over time. The statpack utility is one product that provides a human friendly display of this delta. As seen below, the use of ROLLBACK, in combination with the read/write ratio and the my.cnf --skip-innodb setting, indicates unnecessary database work.

====================================================================================================
                    Variable    Delta/Percentage         Per Second              Total
====================================================================================================

                                         Statement Activity
====================================================================================================

                     SELECT:        1,135,589                 1,309.79              189,279,510 (49.62%)
                     INSERT:            6,171                     7.12                  431,987 (0.11%)
                     UPDATE:            4,800                     5.54                  334,620 (0.09%)
                     DELETE:              312                     0.36                   17,910 (0.00%)
                    REPLACE:                0                     0.00                        0 (0.00%)
          INSERT ... SELECT:              121                     0.14                    4,042 (0.00%)
         REPLACE ... SELECT:               11                     0.01                      109 (0.00%)
               Multi UPDATE:                0                     0.00                       30 (0.00%)
               Multi DELETE:                0                     0.00                       28 (0.00%)
                     COMMIT:                0                     0.00                        0 (0.00%)
                   ROLLBACK:        1,154,987                 1,332.16              191,382,775 (50.17%)

If the ROLLBACK command doesn’t do anything, you may be tempted to think it does no harm. Think again. In the following example of statements analyzed via TCP packets, ROLLBACK accounted for 21% of the execution time of all SQL in this sample.

# Profile
# Rank Query ID           Response time    Calls R/Call   Item
# ==== ================== ================ ===== ======== ================
#    1 0x4ED092EFA577DAB7     0.0106 24.8%     1   0.0106 SELECT p
#    2 0xC9ECBBF2C88C2336     0.0102 23.8%    52   0.0002 SELECT r_c
#    3 0x19C8068B5C1997CD     0.0092 21.6%   138   0.0001 ROLLBACK
#    4 0x448E4AEB7E02AF72     0.0091 21.3%    52   0.0002 SELECT r_t
#    5 0x56438040F4B2B894     0.0015  3.6%     2   0.0008 SELECT h_c
#    6 0x164962ED9B451586     0.0012  2.9%     9   0.0001 SELECT r
#    7 0x8FDE1484818AAACE     0.0008  1.9%     8   0.0001 SELECT p_c

In a well tuned system, the greatest time to execute an SQL statement is not the running of the SQL inside the MySQL kernel; it is the network latency of making the call, and the time taken to return the requested result set.

In this extreme case on a production system, half the statements executed were unnecessary.

Uncovering this issue took three commands and less than 5 minutes of my time. The statpack report uncovered 4 additional red flags at the same time.

If you are not monitoring your production system, start now. For assistance on what to monitor and how to analyze it, please contact me for more information.

Installing Ubuntu Desktop 10.04 with LVM

With a new quad core desktop with 8GB RAM and a 1TB HDD, I wanted to install the Ubuntu desktop version using LVM. This is not possible with the “Desktop CD”. You need to use the “Alternate CD”, which will easily allow you to configure LVM via a text/curses installation and also give you a desktop GNOME environment. The “Server CD” also gives you LVM options, but no GUI.

While there are complicated instructions on how to configure/setup LVM with various versions of Ubuntu, all you need with Ubuntu 10.04 is the right CD.

While installing I also read up on two tips that I found of benefit.

  • During installation of the base system, package unpacking and setup messages are redirected to tty4. You can access this terminal by pressing Left Alt+F4; get back to the main installer process with Left Alt+F1.
  • You can get a terminal window easily during installation by switching to the second virtual console by pressing Left Alt+F2

Installation was rather seamless; the only annoyance was the curses interface not displaying clearly on my Dell 2407WFP monitor during the installation process. The monitor works fine with the installed GUI at 1920×1200.

How to find MySQL developers?

Brian wrote recently Where did all of the MySQL Developers Go?, while over in Drizzle land they have been accepted for the Google Summer of Code along with many other open source projects. MySQL, from my observation, is a noticeable absentee.

Historically, the lack of opportunity to enable community contributions and see them implemented in, say, under 5 years has really hurt MySQL in recent times. There is plenty of history here, so that’s not worth repeating. The current landscape of patches, forks and custom MySQL binaries from storage engine providers has produced a boom of innovation that sadly is now lost to the core MySQL product.

In Drizzle, community contribution is actively sought, and a good portion of committed code is not from the core Drizzle developers (wherever they work). Padraig, for example, a Drizzle GSoC project contributor last year, is helping to mentor this year. The Drizzle project contribution philosophy, GSoC and other activities such as the Drizzle Developer Day all enable the next generation of developers to be part of ongoing project development.

Oracle, what are you going to do to foster an active community and new long term developers for MySQL?

Understanding Drizzle user authentication options – Part 2

A key differentiator of Drizzle from its original MySQL roots is user based authentication. Gone is the host/user and schema/table/column model that was stored in the MyISAM based mysql.user table.

Authentication is now completely pluggable, leveraging existing systems such as PAM, LDAP via PAM, and HTTP authentication.

In this post I’ll talk about HTTP authentication, which requires an external HTTP server to implement successfully. See Part 1 for PAM authentication.

Compiling for HTTP auth support

By default during compilation you may find:

checking for libcurl... no
configure: WARNING: libcurl development lib not found: not building auth_http plugin. On Debian this is found in libcurl4-gnutls-dev. On RedHat it's in libcurl-devel.

In my case I needed:

$ sudo yum install curl-devel

NOTE: Bug #527255 talks about the message being incorrect regarding libcurl-devel; however, it appears this may be valid on Fedora installs.

After successfully installing the necessary prerequisite you should see:

checking for libcurl... yes
checking how to link with libcurl... -lcurl
checking if libcurl has CURLOPT_USERNAME... no

HTTP Authentication

We need to enable the plugin at server startup.

$ sbin/drizzled --mysql-protocol-port=3399 --plugin_add=auth_http &

You need to ensure the auth_http plugin is active by checking the data_dictionary plugins table.

drizzle> select * from data_dictionary.plugins where plugin_name='auth_http';
+-------------+----------------+-----------+-------------+
| PLUGIN_NAME | PLUGIN_TYPE    | IS_ACTIVE | MODULE_NAME |
+-------------+----------------+-----------+-------------+
| auth_http   | Authentication | TRUE      |             |
+-------------+----------------+-----------+-------------+

The auth_http plugin also has the following system variables.

drizzle> SHOW GLOBAL VARIABLES LIKE '%http%';
+------------------+-------------------+
| Variable_name    | Value             |
+------------------+-------------------+
| auth_http_enable | OFF               |
| auth_http_url    | http://localhost/ |
+------------------+-------------------+
2 rows in set (0 sec)

In order to configure HTTP authentication, you need the following settings added to your drizzled.cnf file. For example:

$ cat etc/drizzled.cnf
[drizzled]
auth_http_enable=TRUE
auth_http_url=http://thedrizzler.com/auth

NOTE: Replace the domain name with something you have, even localhost.

A Drizzle restart gives us

$ bin/drizzle -e "SHOW GLOBAL VARIABLES LIKE 'auth_http%'"
+------------------+-----------------------------+
| Variable_name    | Value                       |
+------------------+-----------------------------+
| auth_http_enable | ON                          |
| auth_http_url    | http://thedrizzler.com/auth |
+------------------+-----------------------------+

Currently, if the settings result in an invalid URL, account validation does not fail and you can still log in. It is recommended that you always configure PAM authentication as well, as a fallback.

$ wget -O tmp http://thedrizzler.com/auth
--17:32:32--  http://thedrizzler.com/auth
Resolving thedrizzler.com... 208.43.73.220
Connecting to thedrizzler.com|208.43.73.220|:80... connected.
HTTP request sent, awaiting response... 404 Not Found
17:32:32 ERROR 404: Not Found.

$ bin/drizzle
drizzle > exit

Configuring passwords

To correctly configure your web server to perform the HTTP auth, you can use this Apache syntax as an example.

The following is added to the VirtualHost entry in your web server configuration.

<Directory /var/www/drizzle/auth>
AllowOverride FileInfo All AuthConfig
AuthType Basic
AuthName "Drizzle Access Only"
AuthUserFile /home/drizzle/.authentication
Require valid-user
</Directory>
$ sudo su -
$ mkdir /var/www/drizzle/auth
$ touch /var/www/drizzle/auth/index.htm
$ apachectl graceful

We check that the URL now requires authentication.

$ wget -O tmp http://thedrizzler.com/auth
--17:35:48--  http://thedrizzler.com/auth
Resolving thedrizzler.com... 208.43.73.220
Connecting to thedrizzler.com|208.43.73.220|:80... connected.
HTTP request sent, awaiting response... 401 Authorization Required
Authorization failed.

You need to create the username/password for access.

$ htpasswd -cb /home/drizzle/.authentication testuser sakila
$ cat /home/drizzle/.authentication
testuser:85/7CbdeVql4E

Confirm that HTTP auth with the correct user/password works.

$ wget -O tmp http://thedrizzler.com/auth --user=testuser --password=sakila
--17:37:45--  http://thedrizzler.com/auth
Resolving thedrizzler.com... 208.43.73.220
Connecting to thedrizzler.com|208.43.73.220|:80... connected.
HTTP request sent, awaiting response... 301 Moved Permanently

Drizzle HTTP Authentication in action

By default we now can’t log in:

$ bin/drizzle
ERROR 1045 (28000): Access denied for user ''@'127.0.0.1' (using password: NO)
$ bin/drizzle --user=testuser --password=sakila999
ERROR 1045 (28000): Access denied for user 'testuser'@'127.0.0.1' (using password: YES)

$ bin/drizzle --user=testuser --password=sakila
Welcome to the Drizzle client.  Commands end with ; or \g.
Your Drizzle connection id is 6
Server version: 7 Source distribution (trunk)

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

drizzle>

Understanding Drizzle user authentication options – Part 1

A key differentiator of Drizzle from its original MySQL roots is user based authentication. Gone is the host/user and schema/table/column model that was stored in the MyISAM based mysql.user table.

Authentication is now completely pluggable, leveraging existing systems such as PAM, LDAP via PAM, and HTTP authentication.

In this post I’ll talk about PAM authentication, which is effectively your current Linux based user security.

This information is based on the current build 1317.

Compiling for PAM support

Your Drizzle environment needs to be compiled with PAM support. If not, you would have received the following warning during configure.

$ ./configure
...
checking for libpam... no
configure: WARNING: Couldn't find PAM development support, pam_auth will not be built. On Debian, libpam is in libpam0g-dev. On RedHat it's in pam-devel.

The solution is provided in the warning message, which is another great thing about Drizzle: the pre-checks for dependencies and optional messages like these far exceed the MySQL equivalent compilation process. In my case:

$ sudo yum install pam-devel

When correctly configured, it should look like:

checking for libpam... yes
checking how to link with libpam... -lpam

Working with PAM

You need to enable the PAM authentication plugin at drizzled startup.

sbin/drizzled --plugin_add=auth_pam &

Unfortunately, connecting fails immediately:

$ time bin/drizzle --user=testuser --password=***** --port=4427
real 0m0.003s
user 0m0.003s
sys 0m0.001s

A look into the source at src/drizzle-2010.03.1317/plugin/auth_pam/auth_pam.cc shows a needed config file:

117     retval= pam_start("check_user", userinfo.name, &conv_info, &pamh);

Configuring PAM

In order to enable PAM with Drizzle you need to have the following system configuration.

$ cat /etc/pam.d/check_user
auth    required        pam_unix.so
account required        pam_unix.so

$ time bin/drizzle --user=testuser --password=***** --port=4427
ERROR 1045 (28000): Access denied for user 'testuser'@'127.0.0.1' (using password: YES)

real 0m2.055s
user 0m0.002s
sys 0m0.002s

This did some validation but still failed.

It seems Bug #484069 may fix this problem; however, it is not currently in the main line!

Stay Tuned!

Gearman examples under Mac OS X

Today I listened in on the O’Reilly webcast Introduction to Gearman by Eric Day of Rackspace. I thought I would follow along on the machine at hand, a Mac with OS X 10.5; however, I again got caught up with the gearman PHP extension integration. A look at an older post, Getting started with Gearman (based on Ubuntu), showed it needed an update for Mac.

First I downloaded and installed the latest gearman. This was version 0.12, which includes libgearman.
You should always check for any more recent updates.

wget http://launchpad.net/gearmand/trunk/0.12/+download/gearmand-0.12.tar.gz
tar xvfz gearmand-0.12.tar.gz
cd gearmand-0.12
./configure
make
sudo make install
ls -l /usr/local/lib/libg*
#-rwxr-xr-x  1 root  wheel  79808 Mar 12 13:33 /usr/local/lib/libgearman.4.dylib
#lrwxr-xr-x  1 root  wheel     18 Mar 12 13:33 /usr/local/lib/libgearman.dylib -> libgearman.4.dylib
#-rwxr-xr-x  1 root  wheel    960 Mar 12 13:33 /usr/local/lib/libgearman.la

gearmand was installed in /usr/local/sbin, and gearman in /usr/local/bin.

Next we needed the gearman PHP extension from pecl

wget http://pecl.php.net/get/gearman-0.7.0.tgz
tar xvfz gearman-0.7.0.tgz
cd gearman-0.7.0
phpize
./configure
make
sudo make install
# Installing shared extensions:     /usr/lib/php/extensions/no-debug-non-zts-20060613/

Take note of the extension location, as I needed this for the next step.

PHP was already installed, which was good.

$ which php
/usr/bin/php

However I found no configuration loaded.

$ php --info | grep -i configuration
Configuration File (php.ini) Path => /etc
Loaded Configuration File => (none)
Configuration

What exists is a default example only. In order to include the gearman extension I needed to do the following:

$ sudo cp /etc/php.ini.default /etc/php.ini
$ sudo vi /etc/php.ini

# Set extension directory
extension_dir = "/usr/lib/php/extensions/no-debug-non-zts-20060613/"
# Add Gearman extension
extension="gearman.so"

And a confirmation.

$ php --info | egrep -i "(configuration|gearman)"
Configuration File (php.ini) Path => /etc
Loaded Configuration File => /private/etc/php.ini
Configuration
gearman
gearman support => enabled
libgearman version => 0.12

Ready now to try out the PHP examples.
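
To close the loop, here is a minimal hedged sketch of a worker and client pair using the extension’s classes. Note the client method was do() in this era of the extension and was later renamed doNormal(); ‘reverse’ is the customary example function, and gearmand must be running on its default port 4730.

<?php
// worker.php - register a "reverse" function and wait for jobs
function reverse_fn($job)
{
    return strrev($job->workload());
}
$worker = new GearmanWorker();
$worker->addServer('127.0.0.1', 4730);
$worker->addFunction('reverse', 'reverse_fn');
while ($worker->work());

<?php
// client.php - submit a job and print the result
$client = new GearmanClient();
$client->addServer('127.0.0.1', 4730);
print $client->do('reverse', 'Hello Gearman') . "\n";

Run the worker in one terminal and the client in another.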

Using ext4 for MySQL

This week with a client I saw ext4 used for the first time on a production MySQL system, which was running Ubuntu 9.10 (Karmic Koala). I observed today while installing 9.10 Server locally that ext4 is the default option. The ext4 filesystem is described as having better performance, reliability and features, and there is also information about improvements in journaling.

At OSCON 2009 I attended a presentation on Linux Filesystem Performance for Databases by Selena Deckelmann, in which ext4 was included. While it provides some improvements in sequential reading and writing, there were issues with random I/O, which is the key access pattern for RDBMS products.

Is the RAID configuration (e.g. RAID 5, RAID 10), stripe size, buffer caches, LVM etc. more important than upgrading from ext3 to ext4? I don’t have access to any test equipment to determine this myself; however, I’d like to know of any experiences from members of the MySQL community, and whether anybody has experienced general problems running ext4.


Drizzle's Data Dictionary and Global Status

With the recent news by Brian about the Data Dictionary in Drizzle replacing the INFORMATION_SCHEMA, I was looking into the server status variables (aka INFORMATION_SCHEMA.GLOBAL_STATUS) and came across an interesting discovery.

select * from data_dictionary.global_status;
...
| Table_locks_immediate      | 0              |
| Table_locks_waited         | 0              |
| Threads_connected          | 8134064        |
| Uptime                     | 332            |
| Uptime_since_flush_status  | 332            |
+----------------------------+----------------+
51 rows in set (0 sec)

This only retrieved 51 rows, which is far fewer than previously. What I wanted was clearly missing: all the old Com_ status variables. Looking at what the data_dictionary actually has available revealed a new table.

drizzle> select * from data_dictionary.global_statements;
+-----------------------+----------------+
| VARIABLE_NAME         | VARIABLE_VALUE |
+-----------------------+----------------+
| admin_commands        | 0              |
| alter_db              | 0              |
| alter_table           | 0              |
| analyze               | 0              |
| begin                 | 0              |
| change_db             | 1              |
| check                 | 0              |
| checksum              | 0              |
| commit                | 0              |
| create_db             | 0              |
| create_index          | 0              |
| create_table          | 0              |
| delete                | 0              |
| drop_db               | 0              |
| drop_index            | 0              |
| drop_table            | 0              |
| empty_query           | 0              |
| flush                 | 0              |
| insert                | 0              |
| insert_select         | 0              |
| kill                  | 0              |
| load                  | 0              |
| release_savepoint     | 0              |
| rename_table          | 0              |
| replace               | 0              |
| replace_select        | 0              |
| rollback              | 0              |
| rollback_to_savepoint | 0              |
| savepoint             | 0              |
| select                | 10             |
| set_option            | 0              |
| show_create_db        | 0              |
| show_create_table     | 0              |
| show_errors           | 0              |
| show_warnings         | 0              |
| truncate              | 0              |
| unlock_tables         | 0              |
| update                | 0              |
+-----------------------+----------------+
38 rows in set (0 sec)

Kudos for this. Looking at the list I saw an obvious omission: “ping”, something that caught me out some years ago with huge numbers of admin_commands (300-500 per second). I’m also a fan of Mark’s recent work An evening hack – Com_ping in MySQL.

How do I identify the MySQL my.cnf file?

If you are unfamiliar with administering MySQL, the current MySQL configuration file is generally named my.cnf (my.ini on Windows). But where is that file?

If only that question were easy to answer!

Use of configuration files

MySQL will by default use at least one configuration file from the following defaults. MySQL also uses a cascade approach to configuration files; when you have multiple files in the appropriate paths, you can see unexpected behavior when certain values are overridden in different files.

You can, however, specify --no-defaults to use no configuration file, or add options to your command line execution, so even looking at all configuration files is no guarantee of your operating configuration.

However for most environments, these complexities do not exist.

Default Location

By default, on single instance MySQL servers, you are most likely to find this file named my.cnf at:

  • /etc/my.cnf
  • /etc/mysql/my.cnf

These are known as the global options files.

Alternative Locations

MySQL has both instance specific and user specific locations. For the inclusion of an instance specific file, the location is:

  • $MYSQL_HOME/my.cnf

where MYSQL_HOME is a defined environment variable. Historical MySQL versions also looked at [datadir]/my.cnf; however, I am unsure whether this is still applicable in 5.x versions.

You can also specify options on a per user basis for default inclusion. This file is found at:

  • $HOME/.my.cnf

Distro specific locations

Ubuntu for example also provides an ability to add options via an include directory.

Specifying a configuration at runtime

While you may have these default files, you may elect to start mysqld with a specific configuration file as specified by --defaults-file. This option overrides all global/instance/user locations and uses just this configuration file. You can also specify additional configuration that supplements, rather than overrides, the defaults with --defaults-extra-file.
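
You can also ask the binaries themselves which files they would read and which options they would then apply. The first command prints the search order (the exact paths vary by build):

$ mysqld --verbose --help | grep -A 1 "Default options"
$ mysql --print-defaults
$ my_print_defaults mysqld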

What files are on my system?

Again, assuming the default names you can perform a brute force check with:

$ sudo find / -name "*my*cnf"

This is actually worthwhile, especially if you find a /root/.my.cnf file, which holds default MySQL settings for the operating system ‘root’ user.

MySQL recommendations

MySQL by default provides a number of sample files; however, these are generally outdated, especially for newer hardware. These files include my-huge.cnf, my-large.cnf, my-medium.cnf, my-small.cnf and my-innodb-heavy-4G.cnf. Don’t assume replacing your configuration with one of these files will make your system perform better.

MySQL made some attempt to correct these, and at least some very poor defaults, with MySQL 5.4; however, I am unsure what’s in MySQL 5.5.

MySQL Configuration at runtime

While several commands can help with identifying your configuration files and printing defaults etc., it’s also possible to change your configuration at runtime. It’s possible that these changes are not reflected in your configuration files and pose an additional mismatch.


Don't Assume – Per Session Buffers

MySQL has a number of global buffers, i.e. your SGA. There are also a number of per session/thread buffers that, combined with other memory usage, constitute an unbounded PGA. One of the most common errors in misconfigured MySQL environments is setting the 4 primary per session buffers thinking they are global buffers.

Global buffers include:

  • innodb_buffer_pool_size
  • innodb_additional_mem_pool_size
  • innodb_log_buffer_size
  • key_buffer_size
  • query_cache_size

The four important per session buffers are:

  • sort_buffer_size
  • join_buffer_size
  • read_buffer_size
  • read_rnd_buffer_size

I have seen people set these values > 5M. The defaults range from 128K to 256K. My advice for any values above 256K is simple: what proof do you have this works better? When nothing is forthcoming, the first move is to revert to defaults, or a maximum of 256K, for some benchmarkable results. The primary reason for this is MySQL internals; as quoted by Monty Taylor, for values > 256K it uses mmap() instead of malloc() for memory allocation.

These are not all the per session buffers you need to be aware of. Others include thread_stack, max_allowed_packet, binlog_cache_size and, most importantly, max_connections.

MySQL also uses memory in other areas, most noticeably in internal temporary tables and MEMORY based tables.

As I mentioned, there is no bound on the total process memory allocation for MySQL, so some incorrectly configured variables can easily blow out your memory usage.
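
As a rough back-of-envelope check, the classic approximation below sums the global buffers plus max_connections times the per session buffers. Treat it as indicative only; it ignores internal temporary tables, MEMORY tables and other allocations.

mysql> SELECT (@@key_buffer_size + @@innodb_buffer_pool_size + @@query_cache_size
  + @@max_connections * (@@sort_buffer_size + @@join_buffer_size
  + @@read_buffer_size + @@read_rnd_buffer_size + @@thread_stack))
  / 1024 / 1024 AS theoretical_max_mb;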


About “Don’t Assume”

“Don’t Assume” is a series of posts to help the Oracle DBA understand, use and appreciate the subtle differences and unique characteristics of the MySQL RDBMS in comparison to Oracle. These points are essential to operating MySQL effectively in a production environment and avoiding any loss of data or availability.

For more posts in this series be sure to follow the mysql4oracledba tag and also watch out for MySQL for Oracle DBA presentations.

The MySQLCamp for the Oracle DBA is a series of educational talks all Oracle DBA resources should attend. Two presentations from this series, IGNITION and LIFTOFF, will be presented at the MySQL Users Conference 2010 in Santa Clara, April 2010. This series also includes JUMPSTART and VELOCITY. If you would like to hear these presentations in your area, please contact me.

Free advice on your my.cnf

Today, while on IRC in #pentaho, I came across a discussion and a published my.cnf. In this configuration I found some grossly incorrect values for per session buffers (see below).

It doesn’t take a MySQL expert to spot the issues; however, there is plenty of bad information available on the Internet, and developers who don’t know MySQL well can easily be misled. This has spurred me to create a program to rid the world of bad MySQL configuration. While my task is potentially infinite, it will enable me to give back and hopefully do a small amount of good. You never know, saving those CPU cycles may save energy and help the planet.

Stay tuned for more details of my program.

[mysqld]
...
sort_buffer_size = 6144K
myisam_sort_buffer_size = 1G
join_buffer_size = 1G
bulk_insert_buffer_size = 1G
read_buffer_size     = 6144K
read_rnd_buffer_size = 6144K
key_buffer_size		= 1024M
max_allowed_packet	= 32M
thread_stack		= 192K
thread_cache_size       = 256

query_cache_limit	= 512M
query_cache_size        = 512M
...
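
When reviewing a configuration like this, remember to compare it with what the running server has actually applied, as values may also have been changed at runtime:

mysql> SHOW GLOBAL VARIABLES LIKE '%buffer_size';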

MySQL is crashing, what do I do?

Let me start by saying the majority of environments never experience MySQL crashes. I have seen production environments up for years. On my own server I have seen 575 days of MySQL uptime, and the eventual problem was hardware, not MySQL.

However it does occur, and the reasons may be obscure.

Confirming mysqld has crashed

To the unsuspecting, MySQL may indeed be crashing and you would never know about it. The reason is that most MySQL installations have two running processes: mysqld and mysqld_safe.

ps -ef | grep mysqld
root     28822     1  0 Feb22 ?        00:00:00 /bin/sh bin/mysqld_safe
mysql    28910 28822  0 Feb22 ?        00:30:08 /opt/mysql51/bin/mysqld --basedir=/opt/mysql51 --datadir=/opt/mysql51/data --user=mysql --log-error=/opt/mysql51/log/error.log --pid-file=/opt/mysql51/data/dc1.onegreendog.com.pid

One of the functions of mysqld_safe is to restart mysqld if it fails, so unless you review your MySQL error log, particularly on low volume systems, you may never know. Hint: have you checked your MySQL error log today?
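
A quick way to spot silent restarts is to search the error log for the mysqld_safe restart message (the log path here is taken from the ps output above; substitute your own --log-error setting):

$ grep "mysqld restarted" /opt/mysql51/log/error.log

Any output means mysqld has died and been restarted behind your back.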

You can quickly determine your instance uptime via SQL.

mysql> SHOW GLOBAL STATUS LIKE '%uptime%';
+---------------+---------+
| Variable_name | Value   |
+---------------+---------+
| Uptime        | 1033722 |
+---------------+---------+
1 row in set (0.00 sec)

This is the number of seconds since start time. While not easily readable for humans, the following query provides a more user friendly display. (NOTE: this works for 5.1+ only, as it uses INFORMATION_SCHEMA.GLOBAL_STATUS.)

mysql> SELECT FROM_UNIXTIME(UNIX_TIMESTAMP() - variable_value) AS server_start
FROM INFORMATION_SCHEMA.GLOBAL_STATUS
WHERE variable_name='Uptime';
+---------------------+
| server_start        |
+---------------------+
| 2010-02-22 15:22:13 |
+---------------------+
1 row in set (0.07 sec)

Debugging a mysqld core file

When correctly configured, mysqld will generate a core file (See How to crash mysqld intentionally for background information on required settings).
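
If you have not yet configured this, a minimal sketch of the usual settings follows; see that post for the full details. These settings are an assumption of a typical setup, and the ulimit must be in effect in the shell that starts mysqld_safe:

$ ulimit -c unlimited

[mysqld]
core-file

[mysqld_safe]
core-file-size = unlimited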

Your first check is to determine whether the mysqld binary in use has had its debugging information and symbols stripped. You need a binary that is not stripped in order to identify symbol names.

$ file bin/mysqld
bin/mysqld: ELF 64-bit LSB executable, AMD x86-64, version 1 (SYSV), for GNU/Linux 2.4.0,
dynamically linked (uses shared libs), for GNU/Linux 2.4.0, not stripped

You can use gdb, and with the backtrace command (bt) you can see a stack trace of calls. Without C or MySQL internals knowledge this won’t greatly help the average DBA; however, it’s essential information for getting to the bottom of the problem.

In the following example I’m going to use Bug #38508 to intentionally crash my test instance.

mysql> drop table if exists t1,t2;
mysql> create table t1(a bigint);
mysql> create table t2(b tinyint);
mysql> insert into t2 values (null);
mysql> prepare stmt from "select 1 from t1 join  t2 on a xor b where b > 1  and a =1";
mysql> execute stmt;
mysql> execute stmt;
ERROR 2013 (HY000): Lost connection to MySQL server during query

Lost connection is the first sign of a problem. We check the error log to confirm.

$ tail data/`hostname`.err
100306 14:51:49 - mysqld got signal 11 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help diagnose
the problem, but since we have already crashed, something is definitely wrong
and this may fail.

key_buffer_size=8384512
read_buffer_size=131072
max_used_connections=1
max_threads=151
threads_connected=1
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 338301 K
bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

thd: 0x521f160
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0x401b6100 thread_stack 0x40000
/home/rbradfor/mysql/mysql-5.1.38-linux-x86_64-glibc23/bin/mysqld(my_print_stacktrace+0x2e)[0x8abfbe]
/home/rbradfor/mysql/mysql-5.1.38-linux-x86_64-glibc23/bin/mysqld(handle_segfault+0x322)[0x5df252]
/lib64/libpthread.so.0[0x35fb00de80]
/home/rbradfor/mysql/mysql-5.1.38-linux-x86_64-glibc23/bin/mysqld(_ZN9Item_cond10fix_fieldsEP3THDPP4Item+0x7f)[0x5654ff]
/home/rbradfor/mysql/mysql-5.1.38-linux-x86_64-glibc23/bin/mysqld(_ZN9Item_cond10fix_fieldsEP3THDPP4Item+0xb8)[0x565538]
/home/rbradfor/mysql/mysql-5.1.38-linux-x86_64-glibc23/bin/mysqld(_Z11setup_condsP3THDP10TABLE_LISTS2_PP4Item+0xf6)[0x621f96]
/home/rbradfor/mysql/mysql-5.1.38-linux-x86_64-glibc23/bin/mysqld(_ZN4JOIN7prepareEPPP4ItemP10TABLE_LISTjS1_jP8st_orderS7_S1_S7_P13st_select_lexP18st_select_lex_unit+0x2db)[0x645f3b]
/home/rbradfor/mysql/mysql-5.1.38-linux-x86_64-glibc23/bin/mysqld(_Z12mysql_selectP3THDPPP4ItemP10TABLE_LISTjR4ListIS1_ES2_jP8st_orderSB_S2_SB_yP13select_resultP18st_select_lex_unitP13st_select_lex+0x7a4)[0x654d24]
/home/rbradfor/mysql/mysql-5.1.38-linux-x86_64-glibc23/bin/mysqld(_Z13handle_selectP3THDP6st_lexP13select_resultm+0x16c)[0x659f9c]
/home/rbradfor/mysql/mysql-5.1.38-linux-x86_64-glibc23/bin/mysqld[0x5ec92a]
/home/rbradfor/mysql/mysql-5.1.38-linux-x86_64-glibc23/bin/mysqld(_Z21mysql_execute_commandP3THD+0x602)[0x5efb22]
/home/rbradfor/mysql/mysql-5.1.38-linux-x86_64-glibc23/bin/mysqld(_ZN18Prepared_statement7executeEP6Stringb+0x3bd)[0x66587d]
/home/rbradfor/mysql/mysql-5.1.38-linux-x86_64-glibc23/bin/mysqld(_ZN18Prepared_statement12execute_loopEP6StringbPhS2_+0x7c)[0x66874c]
/home/rbradfor/mysql/mysql-5.1.38-linux-x86_64-glibc23/bin/mysqld(_Z22mysql_sql_stmt_executeP3THD+0xa7)[0x668c27]
/home/rbradfor/mysql/mysql-5.1.38-linux-x86_64-glibc23/bin/mysqld(_Z21mysql_execute_commandP3THD+0x1123)[0x5f0643]
/home/rbradfor/mysql/mysql-5.1.38-linux-x86_64-glibc23/bin/mysqld(_Z11mysql_parseP3THDPKcjPS2_+0x357)[0x5f5047]
/home/rbradfor/mysql/mysql-5.1.38-linux-x86_64-glibc23/bin/mysqld(_Z16dispatch_command19enum_server_commandP3THDPcj+0xe93)[0x5f5ee3]
/home/rbradfor/mysql/mysql-5.1.38-linux-x86_64-glibc23/bin/mysqld(_Z10do_commandP3THD+0xe6)[0x5f67a6]
/home/rbradfor/mysql/mysql-5.1.38-linux-x86_64-glibc23/bin/mysqld(handle_one_connection+0x246)[0x5e9146]
/lib64/libpthread.so.0[0x35fb006307]
/lib64/libc.so.6(clone+0x6d)[0x35fa4d1ded]
Trying to get some variables.
Some pointers may be invalid and cause the dump to abort...
thd->query at 0x5249320 = select 1 from t1 join  t2 on a xor b where b > 1  and a =1
thd->thread_id=1
thd->killed=NOT_KILLED
The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
information that should help you find out what is causing the crash.
Writing a core file
100306 14:51:49 mysqld_safe Number of processes running now: 0
100306 14:51:49 mysqld_safe mysqld restarted
100306 14:51:49 [Note] Plugin 'FEDERATED' is disabled.
100306 14:51:50  InnoDB: Started; log sequence number 0 44233
100306 14:51:50 [Note] Event Scheduler: Loaded 0 events
100306 14:51:50 [Note] /home/rbradfor/mysql/mysql-5.1.38-linux-x86_64-glibc23/bin/mysqld: ready for connections.
Version: '5.1.38'  socket: '/tmp/mysql.sock.3999'  port: 3999  MySQL Community Server (GPL)

Having confirmed the “Writing a core file” line, we can find and use the core file.

$ find . -name "core*"
./data/core.23290
$ gdb bin/mysqld data/core.23290
GNU gdb Red Hat Linux (6.5-37.el5_2.2rh)
Copyright (C) 2006 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB.  Type "show warranty" for details.
This GDB was configured as "x86_64-redhat-linux-gnu"...Using host libthread_db library "/lib64/libthread_db.so.1".

Reading symbols from /lib64/libpthread.so.0...done.
Loaded symbols for /lib64/libpthread.so.0
Reading symbols from /lib64/libdl.so.2...done.
Loaded symbols for /lib64/libdl.so.2
Reading symbols from /lib64/libcrypt.so.1...done.
Loaded symbols for /lib64/libcrypt.so.1
Reading symbols from /lib64/libnsl.so.1...done.
Loaded symbols for /lib64/libnsl.so.1
Reading symbols from /lib64/libm.so.6...done.
Loaded symbols for /lib64/libm.so.6
Reading symbols from /lib64/libc.so.6...done.
Loaded symbols for /lib64/libc.so.6
Reading symbols from /lib64/ld-linux-x86-64.so.2...done.
Loaded symbols for /lib64/ld-linux-x86-64.so.2
Reading symbols from /lib64/libgcc_s.so.1...done.
Loaded symbols for /lib64/libgcc_s.so.1
Core was generated by `/home/rbradfor/mysql/mysql-5.1.38-linux-x86_64-glibc23/bin/mysqld --defaults-fi'.
Program terminated with signal 11, Segmentation fault.
#0  0x00000035fb00b142 in pthread_kill () from /lib64/libpthread.so.0
(gdb) bt
#0  0x00000035fb00b142 in pthread_kill () from /lib64/libpthread.so.0
#1  0x00000000005df285 in handle_segfault (sig=11) at mysqld.cc:2552
#2  <signal handler called>
#3  0x00000000005654ff in Item_cond::fix_fields (this=0x5249dd0, thd=0x521f160, ref=<value optimized out>) at item_cmpfunc.cc:3900
#4  0x0000000000565538 in Item_cond::fix_fields (this=0x52435b8, thd=0x521f160, ref=<value optimized out>) at item_cmpfunc.cc:3912
#5  0x0000000000621f96 in setup_conds (thd=0x521f160, tables=<value optimized out>, leaves=0x52494d0, conds=0x5244e38) at sql_base.cc:7988
#6  0x0000000000645f3b in JOIN::prepare (this=0x5243770, rref_pointer_array=0x5248a90, tables_init=<value optimized out>, wild_num=<value optimized out>, conds_init=<value optimized out>,
    og_num=<value optimized out>, order_init=0x0, group_init=0x0, having_init=0x0, proc_param_init=0x0, select_lex_arg=0x52488c0, unit_arg=0x5248498) at sql_select.cc:412
#7  0x0000000000654d24 in mysql_select (thd=0x521f160, rref_pointer_array=0x5c3fd0, tables=0x4, wild_num=0, fields=@0x52489c8, conds=0x52435b8, og_num=0, order=0x0, group=0x0, having=0x0,
    proc_param=0x0, select_options=0, result=0x52688a0, unit=0x5248498, select_lex=0x52488c0) at sql_select.cc:2377
#8  0x0000000000659f9c in handle_select (thd=0x521f160, lex=0x52483f8, result=0x52688a0, setup_tables_done_option=0) at sql_select.cc:268
#9  0x00000000005ec92a in execute_sqlcom_select (thd=0x521f160, all_tables=0x52494d0) at sql_parse.cc:5011
#10 0x00000000005efb22 in mysql_execute_command (thd=0x521f160) at sql_parse.cc:2206
#11 0x000000000066587d in Prepared_statement::execute (this=0x5245d60, expanded_query=<value optimized out>, open_cursor=false) at sql_prepare.cc:3579
#12 0x000000000066874c in Prepared_statement::execute_loop (this=0x5245d60, expanded_query=0x401b43c0, open_cursor=false, packet=<value optimized out>, packet_end=<value optimized out>)
    at sql_prepare.cc:3253
#13 0x0000000000668c27 in mysql_sql_stmt_execute (thd=<value optimized out>) at sql_prepare.cc:2524
#14 0x00000000005f0643 in mysql_execute_command (thd=0x521f160) at sql_parse.cc:2215
#15 0x00000000005f5047 in mysql_parse (thd=0x521f160, inBuf=0x5243520 "execute stmt", length=12, found_semicolon=0x401b6060) at sql_parse.cc:5931
#16 0x00000000005f5ee3 in dispatch_command (command=COM_QUERY, thd=0x521f160, packet=0x525fde1 "execute stmt", packet_length=<value optimized out>) at sql_parse.cc:1213
#17 0x00000000005f67a6 in do_command (thd=0x521f160) at sql_parse.cc:854
#18 0x00000000005e9146 in handle_one_connection (arg=dwarf2_read_address: Corrupted DWARF expression.
) at sql_connect.cc:1127
#19 0x00000035fb006307 in start_thread () from /lib64/libpthread.so.0
#20 0x00000035fa4d1ded in clone () from /lib64/libc.so.6
(gdb) quit

You can use gdb to obtain additional information, depending on what is available in the core.
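
For example, thread apply all bt dumps a backtrace for every thread rather than just the crashing one, frame N switches to a specific frame from the bt output, and info locals shows the local variables in the current frame:

(gdb) thread apply all bt
(gdb) frame 3
(gdb) info locals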

Now what?

Is the problem a bug? Is it data corruption? Is it hardware related?

Gathering this information is the first step, providing the detail that will enable you to search, discuss and seek professional advice to address your problem.

Advanced reporting options for MySQL

I’m seeking help from the MySQL community regarding what tools are used today to generate complex reports for enterprise applications that use MySQL. In the Oracle world you have Oracle Report Writer; in the Microsoft world, Crystal Reports.

In the open source world there are Jasper Reports, Pentaho Reports and BIRT; however, I don’t know how powerful these are for complex reporting.

If anybody has experience using or evaluating these tools please let me know. This may lead to possible work.

Migrating MySQL latin1 to utf8 – The process

Having covered the preparation and character set options of performing a latin1 to utf8 MySQL migration, just how do you perform the migration correctly?

Example Case

Just to recap, we have the following example table and data.

mysql> select c,length(c),char_length(c),charset(c), hex(c) from conv.test_latin1;
+---------------+-----------+----------------+------------+----------------------------+
| c             | length(c) | char_length(c) | charset(c) | hex(c)                     |
+---------------+-----------+----------------+------------+----------------------------+
| a             |         1 |              1 | latin1     | 61                         |
| abc           |         3 |              3 | latin1     | 616263                     |
| ☺             |         3 |              3 | latin1     | E298BA                     |
| abc ☺☹☻       |        13 |             13 | latin1     | 61626320E298BAE298B9E298BB |
+---------------+-----------+----------------+------------+----------------------------+

Migration Approach 1

The easiest way is to use mysqldump to dump the schema and data, change the schema definitions, and then reload via the client, all with the correct client character sets. Working on our sample table test_latin1, which I have in a conv schema, you can do the following.

$ mysqldump -uroot -p --default-character-set=latin1 --skip-set-charset conv > conv.sql
$ sed -i -e "s/latin1/utf8/g" conv.sql
$ mysql -uroot -p --default-character-set=utf8 conv < conv.sql

If you attempt this, which technically works (I've seen a similar example on the Internet), you're bound to screw up something. While the mysqldump and mysql load use the correct client command line options, performing a blind conversion of all references of latin1 to utf8 will not only change your table definition; in my example it changes the name of the table, and it would change any data values that contained the word 'latin1'. For example.

mysql> select c,length(c),char_length(c),charset(c), hex(c) from conv.test_utf8;
+---------+-----------+----------------+------------+----------------------------+
| c       | length(c) | char_length(c) | charset(c) | hex(c)                     |
+---------+-----------+----------------+------------+----------------------------+
| a       |         1 |              1 | utf8       | 61                         |
| abc     |         3 |              3 | utf8       | 616263                     |
| ?       |         3 |              1 | utf8       | E298BA                     |
| abc ??? |        13 |              7 | utf8       | 61626320E298BAE298B9E298BB |
+---------+-----------+----------------+------------+----------------------------+
4 rows in set (0.00 sec)

mysql> set names utf8;
Query OK, 0 rows affected (0.00 sec)

mysql> select c,length(c),char_length(c),charset(c), hex(c) from conv.test_utf8;
+---------------+-----------+----------------+------------+----------------------------+
| c             | length(c) | char_length(c) | charset(c) | hex(c)                     |
+---------------+-----------+----------------+------------+----------------------------+
| a             |         1 |              1 | utf8       | 61                         |
| abc           |         3 |              3 | utf8       | 616263                     |
| ☺             |         3 |              1 | utf8       | E298BA                     |
| abc ☺☹☻       |        13 |              7 | utf8       | 61626320E298BAE298B9E298BB |
+---------------+-----------+----------------+------------+----------------------------+
4 rows in set (0.01 sec)

Be sure to realize you now need to connect with --default-character-set=utf8; in our example, we use set names as a stopgap measure.
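
To avoid relying on set names in every session, you can make utf8 the client default in your configuration file. A minimal sketch:

[client]
default-character-set = utf8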

Migration Approach 2

A good practice with using mysqldump, regardless of migration, is to always dump the schema with the --no-data option, and then dump the data separately with the --no-create-info option. In this case, you can then manually edit the schema file, carefully changing latin1 where appropriate. This is your production data, so some manual hand verification is a good thing.

The additional benefit is you can create your schema and verify the syntax is correct before loading any data. While mysqldump creates a single file, with this dump and reload approach you can split your data file and perform some level of parallelism for data loading, depending on your hardware capabilities.

$ mysqldump -uroot -p --default-character-set=latin1 --skip-set-charset --no-data conv > conv.schema.sql
$ mysqldump -uroot -p --default-character-set=latin1 --skip-set-charset --no-create-info conv > conv.data.sql
# For simplicity in this example I'm using a stricter global replacement. NOTE: this syntax depends on the version of MySQL
$ sed -i -e "s/DEFAULT CHARSET=latin1/DEFAULT CHARSET=utf8/g" conv.schema.sql
$ mysql -uroot -p --default-character-set=utf8 conv < conv.schema.sql
$ mysql -uroot -p --default-character-set=utf8 conv < conv.data.sql
$ mysql -uroot -p --default-character-set=utf8 conv
mysql> select c,length(c),char_length(c),charset(c), hex(c) from conv.test_latin1;
+---------------+-----------+----------------+------------+----------------------------+
| c             | length(c) | char_length(c) | charset(c) | hex(c)                     |
+---------------+-----------+----------------+------------+----------------------------+
| a             |         1 |              1 | utf8       | 61                         |
| abc           |         3 |              3 | utf8       | 616263                     |
| ☺             |         3 |              1 | utf8       | E298BA                     |
| abc ☺☹☻       |        13 |              7 | utf8       | 61626320E298BAE298B9E298BB |
+---------------+-----------+----------------+------------+----------------------------+
4 rows in set (0.00 sec)

Other software interactions

While I only discuss the MySQL data and mysql client usage here, you should ensure that your programming language connecting to MySQL uses the correct character set management, e.g. PHP uses the default_charset setting in php.ini. Even your webserver may benefit from the correct encoding, e.g. Apache httpd uses AddDefaultCharset in httpd.conf.
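
As a sketch of those two settings (values assumed; adjust to your environment):

# php.ini
default_charset = "UTF-8"

# httpd.conf
AddDefaultCharset UTF-8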

Migration Approach Problems

In our example the migration was successful; however, like life, your production data is unlikely to be perfect. In my next post I will talk about identifying and handling exceptions, including data that was wrongly encoded into latin1 originally, or data requiring translation, such as HTML entities from rich editor input fields.