Poor programming practices

When will it stop? Amateur programmers who simply cut and paste code make life harder for the good programmers in the ecosystem trying to make a decent living. I was reviewing a developed (but incomplete) PHP/MySQL system built on a common framework (which in itself is irrelevant for this post).

In one source file there were 12 repetitions of the following code:

    //permissions
    $this->security_model->setUserPermissions($id);
    if (!array_key_exists($id, $this->session->userdata['permissions']) OR
        !array_key_exists('id', $this->session->userdata['permissions'][$id]) OR
        !array_key_exists('scope', $this->session->userdata['permissions'][$id]['name'])){
      $this->session->set_flashdata('alert', 'You are not authorized to go there.');
      redirect($this->agent->referrer());
    }

It's bad enough when code is repeated and not put into a simple refactored function. When it's repeated 12 times in one file, and OMG over 100 times in the product, that is a recipe for bugs and high maintenance costs due to extremely poor coding practice.

Upgrading to MySQL 5.5 on Ubuntu 10.04 LTS

Ubuntu does not provide an apt-get repository package for MySQL 5.5 on this older OS, however this is still a widely used long term support version. The following steps will upgrade an existing MySQL 5.1 apt-get version to a standard MySQL 5.5 binary.

1. Remove existing MySQL 5.1, retaining data and configuration

sudo su -
service mysql stop
cp -r /etc/mysql /etc/mysql.51
cp -r /var/lib/mysql /var/lib/mysql.51
which mysqld
dpkg -P mysql-server mysql-server-5.1 mysql-server-core-5.1
which mysqld
which mysql
dpkg -P mysql-client-5.1 mysql-client-core-5.1
which mysql
dpkg -P libdbd-mysql-perl libmysqlclient16 mysql-common
# This will not remove /etc/mysql if any other files are in the directory
dpkg -l | grep mysql
[ -d /etc/mysql ] && mv /etc/mysql /etc/mysql.uninstall
cp -r /etc/mysql.51 /etc/mysql

2. Prepare configuration and required directories.

sudo su -
MYCNF="/etc/mysql/my.cnf"
grep basedir ${MYCNF}
sed -ie "s/^basedir.*$/basedir=/opt/mysql/" ${MYCNF}
grep basedir ${MYCNF}
sed -ie "/^[mysqld_safe]/a
skip-syslog" ${MYCNF}
chown -R mysql:mysql /var/lib/mysql
mkdir -p /var/run/mysqld
sudo chown mysql:mysql /var/run/mysqld

3. Install MySQL 5.5

sudo su -
mkdir -p /opt
cd /opt
# Install MySQL 5.5 Binaries
apt-get install -y libaio-dev  # New 5.5 dependency
wget http://cdn.mysql.com/Downloads/MySQL-5.5/mysql-5.5.28-linux2.6-x86_64.tar.gz
tar xvfz mysql*.tar.gz
ln -s mysql-5.5.28-linux2.6-x86_64 /opt/mysql
echo "export MYSQL_HOME=/opt/mysql
export PATH=$MYSQL_HOME/bin:$PATH" > /etc/profile.d/mysql.sh
chmod +x /etc/profile.d/mysql.sh
. /etc/profile.d/mysql.sh
echo $MYSQL_HOME
echo $PATH
which mysql

4. Upgrade and verify MySQL Instance

su - mysql
cd $MYSQL_HOME
bin/mysqld_safe --skip-syslog &
tail /var/log/mysql/error.log
# There will be some expected ERRORS in error log
bin/mysql_upgrade -uroot
bin/mysqladmin -uroot  shutdown
bin/mysqld_safe --skip-syslog &
tail -100 /var/log/mysql/error.log
mysql -uroot -e "SELECT VERSION();"
bin/mysqladmin -uroot  shutdown
exit

5. Setup MySQL for system use

# As Root
sudo su -
INIT="/etc/init.d/mysqld"
cp /opt/mysql/support-files/mysql.server ${INIT}
sed -ie "s/^basedir=$/basedir=/opt/mysql/;s/^datadir=$/datadir=/var/lib/mysql/" ${INIT}
${INIT} start
mysql -uroot -e "SELECT VERSION();"
${INIT} stop
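
Optionally, to have this init script start MySQL at boot as the packaged version did, register it with sysvinit (a suggestion only; adjust to your environment):

# Optional: start MySQL automatically at boot with the new init script
update-rc.d mysqld defaults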

Not a cool new feature for Master_Host

I was surprised to find on a customer MySQL server this new syntax for Master_host in SHOW SLAVE STATUS.

mysql> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
               Slave_IO_State: Connecting to master
                  Master_Host: testdb1.xxx.com or 10.XXX.XX.XXX
                  Master_User: repl
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: db1-354215-bin-log.000005
          Read_Master_Log_Pos: 1624
               Relay_Log_File: db2-354214-relay-log.000001
   

Is this a fancy new Percona Server feature? No. It’s operator error.

We read a little further to find:

mysql> SHOW SLAVE STATUS\G
...
             Slave_IO_Running: Connecting

...
                Last_IO_Errno: 2005
                Last_IO_Error: error connecting to master 'repl@ testdb1.xxx.com or 10.XXX.XX.XXX' - retry-time: 60  retries: 86400

How can this be created? When using MySQL MHA, you get the following message in the output of the commands that manage replication.

...
Thu Jan  3 17:06:40 2013 - [info]  All other slaves should start replication from here.
Statement should be: CHANGE MASTER TO MASTER_HOST='testdb1.xxx.com or 10.XXX.XX.XXX', MASTER_PORT=3306,
 MASTER_LOG_FILE='db1-354215-bin-log.000005', MASTER_LOG_POS=1624, MASTER_USER='repl',
MASTER_PASSWORD='xxx';
...

Needless to say, this syntax was taken literally, and MySQL did not complain.

I would suggest that while MySQL does not validate that the MASTER_HOST value in the CHANGE MASTER TO command is resolvable, it should at least validate that the value is a hostname or an IPv4/IPv6 address; a space (' ') is not a valid character in any of these.
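
For reference, the corrected statement on the affected slave should name just one resolvable host from the MHA output, for example (a sketch using the values shown above; the password is elided as in the original output):

STOP SLAVE;
CHANGE MASTER TO MASTER_HOST='testdb1.xxx.com', MASTER_PORT=3306,
  MASTER_LOG_FILE='db1-354215-bin-log.000005', MASTER_LOG_POS=1624,
  MASTER_USER='repl', MASTER_PASSWORD='xxx';
START SLAVE;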

Installing MySQL MHA with Percona Server

MySQL MHA by Oracle ACE Director Yoshinori Matsunobu is an excellent open source tool to help in providing HA with native MySQL replication. The installation however is dependent on some Perl packages and to the untrained eye this may be an issue if you are using Percona Server as your choice of MySQL implementation.

The MHA Node page requires the perl-DBD-MySQL package to be installed. The installation on RedHat/CentOS/Oracle Linux looks like this:

$ sudo yum install perl-DBD-MySQL
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: mirror.anl.gov
* extras: mirror.anl.gov
* updates: mirror.anl.gov
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package perl-DBD-MySQL.x86_64 0:3.0007-2.el5 set to be updated
--> Finished Dependency Resolution

However, recently with a client the following error “mysql conflicts with Percona-Server” occurred.

$ sudo yum install perl-DBD-MySQL
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: mirror.anl.gov
* extras: mirror.anl.gov
* updates: mirror.anl.gov
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package perl-DBD-MySQL.x86_64 0:3.0007-2.el5 set to be updated
--> Processing Dependency: libmysqlclient.so.15(libmysqlclient_15)(64bit) for package: perl-DBD-MySQL
--> Processing Dependency: libmysqlclient.so.15()(64bit) for package: perl-DBD-MySQL
--> Running transaction check
---> Package mysql.x86_64 0:5.0.95-1.el5_7.1 set to be updated
--> Processing Conflict: mysql conflicts MySQL
--> Finished Dependency Resolution
mysql-5.0.95-1.el5_7.1.x86_64 from updates has depsolving problems
 --> mysql conflicts with Percona-Server-client-55
Error: mysql conflicts with Percona-Server-client-55

This is actually an easy problem to solve, but if you are unfamiliar with the various MySQL RPM packages and you installed Percona Server with the standard documented commands, you may get lost. What is necessary is the installation of the compatibility libraries, which is as simple as:

$ sudo yum install -y Percona-Server-shared-compat
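
With the shared-compat package providing the older libmysqlclient.so.15 library, re-running the original installation should now resolve without dragging in the conflicting mysql package:

$ sudo yum install -y perl-DBD-MySQL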

SQLSTATE[HY000]: General error: 2006 MySQL server has gone away

This would have to be one of the most common MySQL error messages that is misleading to the end user developer. The MySQL Manual page confirms the broad range of possible conditions, but offers little to a PHP developer that does not speak MySQL Geek. I am commonly asked by developers to help solve this issue.

The problem is that there are several conditions that can cause this error, and a more meaningful explanation to the end user would help in addressing the issue. In general terms, this actually means “Your SQL statement has failed because the connection to the database has been disconnected because of ???”.

Here are a few common situations and how to check for what “???” is.

1. Your MySQL server really did go away.

We can easily check this by looking at the server uptime and the server error log.

$ mysql -uroot -p -e "show global status like 'uptime';"
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| Uptime        | 68928 |
+---------------+-------+
1 row in set (0.04 sec)
$ tail /var/log/mysql/error.log
130101 22:22:30 InnoDB: Initializing buffer pool, size = 256.0M
130101 22:22:30 InnoDB: Completed initialization of buffer pool
130101 22:22:30 InnoDB: highest supported file format is Barracuda.
130101 22:22:30 InnoDB: 1.1.8 started; log sequence number 63444325509
130101 22:22:30 [Note] Server hostname (bind-address): '127.0.0.1'; port: 3306
130101 22:22:30 [Note]   - '127.0.0.1' resolves to '127.0.0.1';
130101 22:22:30 [Note] Server socket created on IP: '127.0.0.1'.
130101 22:22:30 [Note] Event Scheduler: Loaded 0 events
130101 22:22:30 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.5.28-cll'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Server (GPL)

In both these cases, the server has been up some time, and there are zero error messages to indicate problems.

If the MySQL server did go away, was it shutdown or did it crash? The MySQL error log will provide the answers. Generally the mysql daemon (mysqld) will be restarted by the mysqld_safe wrapper process.
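
A quick way to check for an unexpected restart is to look for recent startup, shutdown, or crash signatures in the error log (log path assumed from the example above):

$ grep -i -E "mysqld_safe Starting|Shutdown complete|signal" /var/log/mysql/error.log | tail -5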

2. The connection timed out

$ mysql -uroot -p -e "show global variables like '%timeout';"
+----------------------------+----------+
| Variable_name              | Value    |
+----------------------------+----------+
| connect_timeout            | 30       |
| delayed_insert_timeout     | 300      |
| innodb_lock_wait_timeout   | 50       |
| innodb_rollback_on_timeout | OFF      |
| interactive_timeout        | 28800    |
| lock_wait_timeout          | 31536000 |
| net_read_timeout           | 30       |
| net_write_timeout          | 60       |
| slave_net_timeout          | 3600     |
| wait_timeout               | 28800    |
+----------------------------+----------+

These values are relatively sane MySQL defaults. If, however, you have very short timeouts, you may get this error. Here is just one example.

mysql> SET SESSION wait_timeout=5;

## Wait 10 seconds

mysql> SELECT NOW();
ERROR 2006 (HY000): MySQL server has gone away
No connection. Trying to reconnect...
Connection id:    132361
Current database: *** NONE ***

+---------------------+
| NOW()               |
+---------------------+
| 2013-01-02 11:31:15 |
+---------------------+
1 row in set (0.00 sec)

3. Your SQL statement was killed

Some systems will proactively kill SQL statements that have been running too long. You can easily check whether this may be happening by looking at how many KILL statements have been executed.

$ mysql -uroot -p -e "show global status like 'com_kill'"
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| Com_kill      | 0     |
+---------------+-------+

Not killed this time.

4. Your SQL statement was too large.

A little harder to test and verify, but MySQL uses a maximum packet size for communications between the server and the client. If this includes large fields (for example BLOB columns), you may be getting a termination of your SQL statement due to size.

By default this is relatively small.

mysql> show global variables like 'max_allowed_packet';
+--------------------+---------+
| Variable_name      | Value   |
+--------------------+---------+
| max_allowed_packet | 1048576 |
+--------------------+---------+
1 row in set (0.00 sec)

You can increase it, for example to 16M, with:

mysql> set global max_allowed_packet=1024*1024*16;
mysql> show global variables like 'max_allowed_packet';
+--------------------+----------+
| Variable_name      | Value    |
+--------------------+----------+
| max_allowed_packet | 16777216 |
+--------------------+----------+
1 row in set (0.00 sec)

The good news is that this was the cause for the customer today, and now there are no more errors!

Be sure to persist this value across MySQL restarts by adding it to your configuration file.

#my.cnf
[mysqld]
max_allowed_packet = 16M

Open Source Database Schemas

I am seeking the help of the community. I am working on an evaluation project about schema design in open source applications. While it's possible for me to download the software of many popular apps, install it, and then do a mysqldump, that takes time. Quite often there is no simple schema.sql file, but rather a process for creating the schema. If you are using an open source project, would you take a moment and run the following?

$ mysqldump -u[user] -p --skip-lock-tables --no-data --databases [schema]  > [schema].sql

This will only dump the table definitions, and should therefore contain nothing company specific. I have at this time:

  • WordPress
  • Drupal
  • Mediawiki
  • OS Commerce
  • Joomla
  • EzPublish
  • PHPWiki

I am open to any projects, and it doesn't matter if the version is not the most current; what I am seeking is to understand trends. There is a huge list of possibilities including Bugzilla, PHPBB, SugarCRM, Magento, and PHPWiki, just to name a few.

Your help would be greatly appreciated. You are welcome to add a comment, or email me at [this domain] with your results.

I am also happy to accept sanitized schemas of any projects, however please ensure no company specific or proprietary information is provided.

Controlled failover simplicity with MySQL

As part of a recent engagement, I described the relevant products to manage a MySQL pair (i.e. an Active/Passive MySQL masters configuration). This included the steps to undertake a controlled failover to support software maintenance using manual procedures. The upcoming Effective MySQL: Replication Techniques in Depth book details each step and all conditions to review over a dozen pages. While the steps are straightforward and generally well known, scripting them for your environment takes a certain amount of work to ensure your information is correct and application connectivity loss is kept to a minimum.

In Continuent Tungsten (which I have just been reviewing these past few weeks), I achieved the same result with a single command.

$ echo "switch" | /opt/continuent/tungsten/tungsten-manager/bin/cctrl

In addition to all the checks and balances to ensure data is consistent and no information can be lost, Continuent Tungsten Connectors ensure ALL database connections are maintained, i.e. they are not dropped. This is ideal for an application that uses persistent connections, e.g. Java applications. This is a feature that other options do not provide. This command also supports additional read slaves with no additional work.

This post has been in draft for a while; with the current chatter on disasters, controlled and uncontrolled failover, and disaster preparedness, the protection of your company's information is very important. There are several options for implementing a more highly available (HA) MySQL solution depending on your business continuity requirements.

The fine print

Here is the full output of the command, between 3 servers, alpha, beta and gamma.

$ echo "switch" | /opt/continuent/tungsten/tungsten-manager/bin/cctrl
Tungsten Enterprise 1.5.2 build 69
connect to 'nyc@alpha'
nyc: session established
[LOGICAL] /nyc > switch
SELECTED SLAVE: beta@nyc
PURGE REMAINING ACTIVE SESSIONS ON CURRENT MASTER 'alpha@nyc'
PURGED A TOTAL OF 0 ACTIVE SESSIONS ON MASTER 'alpha@nyc'
FLUSH TRANSACTIONS ON CURRENT MASTER 'alpha@nyc'
PUT THE NEW MASTER 'beta@nyc' ONLINE
PUT THE PRIOR MASTER 'alpha@nyc' ONLINE AS A SLAVE
RECONFIGURING SLAVE 'gamma@nyc' TO POINT TO NEW MASTER 'beta@nyc'
SWITCH TO 'beta@nyc' WAS SUCCESSFUL
[LOGICAL] /nyc >
Exiting...

Joining the Continuent Team

This month I have joined the team at Continuent. No stranger to the MySQL ecosystem, Continuent provides replication and clustering technology for managing data between MySQL, Oracle, PostgreSQL, Vertica and a growing list of data stores.

I have known many of the team at Continuent for some time, and will again be joining Giuseppe Maxia from our days at MySQL Inc/AB starting back in 2006.

I am looking forward to taking the hard work out of administration of MySQL systems with the simplicity of Continuent Tungsten, simplifying tasks including automatic failover, multi-master and geo cluster redundancy to a single command.

Catch me speaking at the upcoming MySQL Connect (San Francisco) and Percona Live (New York) conferences, and where the third book of the Effective MySQL Series Replication Techniques in Depth will also be available for sale.

Looking for MySQL 4.1

I had need today to download a version of MySQL 4.1 to test something. The MySQL Developer Zone archives no longer provide any software before 5.0.

While this version may have long reached EOL and is no longer supported, customers still do run it.

If anybody can help out with binaries (on several OS's), it would be appreciated.

SPOF Internet

SPOF (i.e. Single Point of Failure) is the bane of technologists. Avoiding SPOF generally requires redundancy, and redundancy has a cost, often more than a business is prepared to pay. In the database field I see this regularly and advise clients on how to improve availability and potentially avoid disasters that can affect their business.

Today, at approximately 10:30am, the Con Edison work crew in front of my home (digging a 5″ deep trench down the road) severed multiple Time Warner Cable fibre connections. ($#&*, and the lack of ownership to correct this in a timely manner is another story.) No Internet, no ability to work actively with clients (which I was doing), etc., etc.

As an individual that works from home, I have recognized this SPOF and have redundancy in place: a Verizon MiFi hotspot, normally used for travel, but also a backup in times of Internet downtime at my home. The moral here is that one level of redundancy is often not enough. Just as MySQL replication is not a backup solution, only one part thereof, my backup redundancy was in maintenance mode (in this case loaned to a good overseas friend who is visiting and traveling in the US). Disaster often strikes unexpectedly and often causes multiple failures, this being an example of a cascading failure in my Internet redundancy procedures.

The fact that I am writing this post shows that I have a second backup, a portable WiFi hotspot on my T-Mobile phone. It's not great, but this is an emergency.

This is not a satisfactory solution long term. My first estimate for repairs was September 17th. Even after stressing that was unsatisfactory, my second estimate is still unacceptable for my business.

Amen to co-working spaces in New York, which offer the ability to work at a location for a fixed daily/weekly/monthly cost. It pays to know where they are, and to have an informal relationship for such an emergency. While inconvenient, I can take a laptop and have power and Internet to work for some core hours in the day, again far less than ideal.

For those that have actually decided to read this far, the moral of the story is this. What is your plan when this happens to your business connection, or your primary MySQL database server? What is acceptable downtime, and how do you address correcting issues outside of your control (e.g. an explosion in a data center taking out 15,000 servers, which has happened to me, or damage to the 5 servers you have, all in a single rack)? With practically every client, there is no defined plan in the event of a disaster. There should be.

MySQL client password security

In case you missed it, MySQL 5.6.6, also known as Milestone 9, was recently released. I have yet to install this, however just one part of the MySQL 5.6.6 Release Notes makes installing and testing it high on my TODO list.

Updated 20 Sep, 2012. Be sure to also read Todd's post Understanding mysql_config_editor's security aspects for a more in-depth and accurate description of this new feature. In summary, “It makes secure access via MySQL client applications easier to use”.

That is, the reported improvements in password management. From the release notes:

Security Improvements

These security improvements were implemented:

MySQL now provides a method for storing authentication credentials securely in an option file named .mylogin.cnf. To create the file, use the mysql_config_editor utility. The file can be read later by MySQL client programs to obtain authentication credentials for connecting to a MySQL server. mysql_config_editor writes the .mylogin.cnf file using encryption so the credentials are not stored as clear text, and its contents when decrypted by client programs are used only in memory. In this way, passwords can be stored in a file in non-cleartext format and used later without ever needing to be exposed on the command line or in an environment variable. This improves security for interactive use of MySQL client programs, as well as security for noninteractive tasks that require a MySQL password from a file. For more information, see Section 4.6.6, “mysql_config_editor — MySQL Configuration Utility”.

The .mylogin.cnf file can contain multiple sets of options, known as “login paths.” To specify which option group to use from the .mylogin.cnf file for connecting to the server, use the --login-path option. See Section 4.2.3.4, “Command-Line Options that Affect Option-File Handling”.
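
A quick illustration of the documented usage (the login path name and credentials here are examples only):

$ mysql_config_editor set --login-path=local --host=localhost --user=root --password
Enter password:
$ mysql --login-path=local -e "SELECT VERSION();"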

There are additional improvements and modifications around encryption. Well worth reading about in MySQL 5.6.6 Release Notes.

When is a crashing MySQL bug not a bug?

Answer: When Oracle acknowledges the bug in 5.5.25 (to the owner only), corrects the bug in 5.5.27 (to the owner only), yet hides all information of its existence.

Recently a colleague and good friend discovered a bug in MySQL 5.5 replication that would crash MySQL. This was initially reported as Bug #65740, and after a lot of back and forth, a reproducible test case was found. Excellent work on the part of my colleague to spend the time to clearly identify the specific conditions. I remember looking at this initial thread in detail for an UPDATE statement using variables combined with an --ignore-database configuration option.

With no explanation from Oracle, this bug was subsequently marked as private (after I had originally viewed the thread publicly) and corrected, yet the corrected bug is not referenced in the 5.5.27 Release Notes, even though the private bug clearly states:

[20 Jul 11:59] XXXX
Fixed in 5.1.65/5.5.27.

The reason I raise this concern is the lack of consistency. That this was not included in the Release Notes is not acceptable in my view. However, the installation of 5.5.25 was needed to address another obscure Bug #45670 that was corrected and remains open to the MySQL Community.

I do not yet know of a public statement by Oracle that details why certain information about MySQL is open and certain information is closed.

Recent Presentations in Cali, Colombia

On July 4 I gave two presentations at the OTN Tour Day, and on July 5 I gave three presentations at the MySQL Training Days. This was my 3rd visit to Colombia and it was great to see a receptive audience. Thanks to Robin for organizing the events in 2010, 2011 and 2012.

You can download all presentations from the provided links.

New security fixes for MySQL reported

6 new security fixes for Oracle MySQL have been detailed in the most current Oracle Critical Patch Update (CPU).

These are:

  • CVE-2012-1735 (5.5.23 and earlier)
  • CVE-2012-0540 (5.1.62 and earlier, 5.5.23 and earlier)
  • CVE-2012-1757 (5.5.23 and earlier)
  • CVE-2012-1756 (5.5.23 and earlier)
  • CVE-2012-1734 (5.1.62 and earlier, 5.5.23 and earlier)
  • CVE-2012-1689 (5.1.62 and earlier, 5.5.22 and earlier)

Oracle strongly recommends that customers apply CPU fixes as soon as possible. Unfortunately there is no easy description for MySQL users of what that really entails. There is a reference to the Critical Patch Update July 2012 Patch Delivery Document for Oracle Sun Products Suite, My Oracle Support Note 1446033.1, however all the information is behind a support license. There appears to be no information easily available for community users.

A full description of these CVEs can be found here. Unfortunately most say “Vulnerability in the MySQL Server component of Oracle MySQL (subcomponent: Server). Supported versions that are affected are 5.5.23 and earlier. Easily exploitable vulnerability allows successful authenticated network attacks via multiple protocols. Successful attack of this vulnerability can result in unauthorized ability to cause a hang or frequently repeatable crash (complete DOS) of MySQL Server.”, which is effectively useless information.

There is external information that can be found at the National Vulnerability Database (not linked in the Oracle article). For example http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2012-1735 however this does not provide any more meaningful information either.

MySQL 5.5.25a was recently released on 2012-07-05; however, the release notes for this version, for 5.5.25 and 5.5.24, and for the yet to be released 5.5.26 provide no information about these security issues.

While security is important for a database and system administrator, on first inspection the information provided does not offer an easy way to assess the risk and take appropriate actions.

More information at

I will be speaking at Percona Live New York

Percona Live New York City, October 1 - 2, 2012
Percona is back for a second New York Percona Live Conference. As the resident New York MySQL Expert, I will again be presenting. My session will be on MySQL Backup and Recovery Essentials.

You can only present so much in one hour, and this presentation just touches on the highlights of what is possible. More detailed information about the right backup and recovery strategy and associated tools is available in my current book Effective MySQL: Backup and Recovery.

Encrypting your MySQL backups and more

Assuming you have a backup and recovery strategy in place, how secure is your data? Does a hacker need to obtain access to your production system bypassing all the appropriate security protection you have in place, or just the unencrypted data on the backup server?

Encryption with zNcrypt

The following steps demonstrate how I set up a mysqldump encrypted backup with zNcrypt, a product from Gazzang. You can request a free trial evaluation of the software from http://gazzang.com/request-a-trial. I asked for an AWS EC2 instance, and was able to provide my bootstrap instructions for OS and MySQL installation. Following installation and configuration, the first step is to verify the zNcrypt process is running:

$ sudo ezncrypt-service status
  ezncrypt | Checking system dependencies
** ezncrypt system is UP and running **
       log | File: /var/log/ezncrypt/ezncrypt-service.log

If the process is not running you would find the following error message:

$ sudo ezncrypt-service status
  ezncrypt | Checking system dependencies
** ezncrypt system is NOT running **
       log | File: /var/log/ezncrypt/ezncrypt-service.log

$ sudo ezncrypt-service start
  ezncrypt | Checking system dependencies
  ezncrypt | checking encryption directories
    keymgr | Retrieving key from KSS
           |  > Encryption password retrieved from KSS
  ezncrypt | starting service
           |  > using "aes_256" cipher algorithm
           | done!
    access | Loading access control list
           | done!
  ezncrypt | Thank you for using ezncrypt.
       log | File: /var/log/ezncrypt/ezncrypt-service.log

Under the covers you will find the following attached devices, and no actual processes.

$ df -h
Filesystem ...
...
/var/lib/ezncrypt/storage/encrypted_private
/var/lib/ezncrypt/ezncrypted

$ ps -ef | grep ezn
uid  4947  3327  0 23:15 pts/3    00:00:00 grep ezn

$ ps -ef | grep cry
root        30     2  0 21:41 ?        00:00:00 [ecryptfs-kthrea]
root        31     2  0 21:41 ?        00:00:00 [crypto]
uid  4951  3327  0 23:15 pts/3    00:00:00 grep cry

The next step is to create a backup directory and encrypt all contents that are placed in the directory. ezNcrypt uses the concept of an @category for referencing an encrypted file or directory.

$ mkdir /mysql/backup/encrypted
$ sudo ezncrypt --encrypt @backup /mysql/backup/encrypted
  ezncrypt | Checking system dependencies
           | Verifying ezncrypt license
           | getting information about location
           |   > path: /var/lib/ezncrypt/ezncrypted/backup
  ezncrypt | Checking encryption status
           | done!
    keymgr | Retrieving key from KSS
           |  > Encryption password retrieved from KSS
           | generating keys
           | done!
    backup | backing up data
           | This can take a while. Please be patient
           |  > backing up /mysql/backup/encrypted
           |  > File: /opt/ezncrypt/backup/2012-04-27/encrypted.tar.gz
           | done!
  ezncrypt | encrypting files
           |  > checking disk space
           |  > encrypting /mysql/backup/encrypted
           | done!
  ezncrypt | congratulations. you have encrypted your Files!!
       log | File: /var/log/ezncrypt/ezncrypt.log

The underlying regular directory is now replaced:

$ ls -l /mysql/backup
total 0
lrwxrwxrwx 1 root root 59 2012-04-27 00:03 encrypted -> /var/lib/ezncrypt/ezncrypted/backup//mysql/backup/encrypted

Any attempts to write to this encrypted directory will now fail, even with the Linux super user:

$ mysqldump --all-databases > /mysql/backup/encrypted/edump1.sql
-bash: /mysql/backup/encrypted/edump1.sql: Permission denied

$ sudo mysqldump --all-databases > /mysql/backup/encrypted/edump1.sql
-bash: /mysql/backup/encrypted/edump1.sql: Permission denied

In order to read and write from an encrypted directory you need to grant access controls to a given program, for example mysqldump:

$ sudo ezncrypt-access-control -a "ALLOW @backup * /usr/bin/mysqldump"
passphrase:
salt:
Rule added

You verify the defined access control rules with:

$ sudo ezncrypt-access-control -L
passphrase:
salt:
# -  Type     Category       Path    Process
1    ALLOW    @backup        *       /usr/bin/mysqldump

However, writing with mysqldump still causes an error because it is the shell redirection that is performing the write, as seen in the system error log:

$ mysqldump --all-databases > /mysql/backup/encrypted/edump1.sql
-bash: /mysql/backup/encrypted/edump1.sql: Permission denied
$ dmesg | tail
[4138848.618559] ezncryptfs: DENIED type="acl" exec="/bin/bash" script="/dev/pts/4" comm="bash" path="/var/lib/ezncrypt/ezncrypted/backup" pid=7448 uid=1000

You can use the --result-file option with mysqldump to enable the process to create the file directly. For example:

$ time mysqldump --all-databases   --result-file=/mysql/backup/encrypted/edump2.sql
real      1m34.714s
user      0m59.388s
sys       0m9.589s

$ sudo ezncrypt-run "ls -l /mysql/backup/encrypted/"
passphrase:
salt:
total 3.0G
-rw-rw-r-- 1 uid gid 2.9G 2012-04-27 02:43 edump2.sql

In this single test, the transparent encryption added only a nominal overhead to the mysqldump backup. You can easily extract the file from the encrypted directory; however, that would defeat the purpose of using encryption. The following syntax is shown just to confirm the validity of the encrypted file:

$ sudo /usr/sbin/ezncrypt-run "cp /mysql/backup/encrypted/edump2.sql ."
passphrase:
salt:
$ ls -al edump*
total 3916
-rw-r--r-- 1 uid gid 2.9G 2012-04-27 02:55 edump2.sql


$ grep "^CREATE.*DATABASE" edump2.sql
CREATE DATABASE /*!32312 IF NOT EXISTS*/ `book2` /*!40100 DEFAULT CHARACTER SET latin1 */;
CREATE DATABASE /*!32312 IF NOT EXISTS*/ `employees` ...
CREATE DATABASE /*!32312 IF NOT EXISTS*/ `musicbrainz` ...
CREATE DATABASE /*!32312 IF NOT EXISTS*/ `mysql` ...
CREATE DATABASE /*!32312 IF NOT EXISTS*/ `sakila` ...
CREATE DATABASE /*!32312 IF NOT EXISTS*/ `world_innodb` ...
CREATE DATABASE /*!32312 IF NOT EXISTS*/ `world_myisam` ...

When using correctly configured directories and access controls, the use is truly transparent to the backup process.

Restoring an encrypted file is a little more involved. The best approach is to create a script to perform the work, then encrypt this script. When executed, the script will have the permissions necessary to read and apply the encrypted file.
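
A minimal sketch of such a restore script, assuming the dump file created earlier and that the script (or its interpreter) has been granted an appropriate access control rule:

#!/bin/sh
# restore.sh - a sketch only; the dump path and credentials are assumptions.
# Encrypt this script and add an ezncrypt ALLOW rule covering it so that the
# redirection below can read the encrypted dump.
mysql -uroot -p < /mysql/backup/encrypted/edump2.sql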

Perhaps the best tip about using this type of transparent encryption is that it is possible to store the MySQL user and password securely in a plain text configuration file and use it with the appropriate MySQL client commands. This helps address another common security problem.
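
As an illustration of the idea (the option file location and access rule here are assumptions for this sketch), a client option file stored in the encrypted directory can be read by the mysql client once that client is granted access:

$ sudo ezncrypt-access-control -a "ALLOW @backup * /usr/bin/mysql"
$ mysql --defaults-extra-file=/mysql/backup/encrypted/client.cnf -e "SELECT VERSION();"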

What compression do you use?

The following is an evaluation of various compression utilities that I tested when reviewing the options for MySQL backup strategies. The overall winner in performance was pigz, a parallel implementation of gzip. If you use gzip today, as most organizations do, this one change will improve your backup compression times.
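
For example, swapping gzip for pigz in a typical backup pipeline is a drop-in change (paths and options here are illustrative only):

# Compress the dump using all available cores instead of one
$ mysqldump --all-databases | pigz -1 > /mysql/backup/full.sql.gz

# Decompression remains gzip-compatible
$ pigz -dc /mysql/backup/full.sql.gz | mysql -uroot -p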

Details of the test:

  • The database is 5.4GB of data
  • mysqldump produces a backup file of 2.9GB
  • The server is an AWS t1.xlarge with a dedicated EBS volume for backups

The following testing was performed to compare the time and % compression savings of various available open source products. This was not an exhaustive test with multiple iterations and different types of data files.

Compression Utility    Compression Time (sec)    Decompression Time (sec)    New Size (% Saving)
lzo (-3)               21                        34                          1.5GB (48%)
pigz (-1)              43                        33                          995MB (64%)
pigz (-3)              56                        34                          967MB (67%)
gzip (-1)              81                        43                          995MB (64%)
fastlz                 92                        128                         1.3GB (55%)
pigz (-6)              105                       25                          902MB (69%)
gzip (-3)              106                       43                          967MB (67%)
compress               145                       36                          1.1GB (62%)
pigz (-9)              202                       23                          893MB (70%)
gzip (-6)              232                       78                          902MB (69%)
zip                    234                       50                          902MB (69%)
gzip (-9)              405                       43                          893MB (70%)
bzip2                  540                       175                         757MB (74%)
rzip                   11 minutes                360                         756MB (74%)
lzo (-9)               20 minutes                82                          1.2GB (58%)
7z                     33 minutes                122                         669MB (77%)
lzip                   47 minutes                132                         669MB (77%)
lzma                   58 minutes                180                         639MB (78%)
xz                     59 minutes                160                         643MB (78%)

Observations

  • The percentage savings and compression time of results will vary depending on the type of data that is stored in the MySQL database.
  • The pigz compression utility was the surprising winner for best compression time, producing at least the same compression size as gzip while being a full 50% faster.
  • For these compression tests, only one large file was used. Some utilities work much better with many smaller files.

Find out more information about these tests and the results in Effective MySQL: Backup and Recovery

Recent Presentations Buenos Aires MySQL/NoSQL/Cloud Conference

The first annual Latin America MySQL/NoSQL/Cloud Conference was held in Buenos Aires Argentina from June 26-28. Kudos to Santiago Lertora from Binlogic who had the vision for the conference in his country and made it happen. I look forward to the second annual event.

My first presentation was “Improving Performance with Better Indexes”. This presentation details the six steps of SQL performance analysis: Capture, Identify, Confirm, Analyze, Optimize and Verify. An explanation of MySQL EXPLAIN and working examples of creating indexes and better covering indexes are provided. A production example of a 13 table join is used to detail how covering indexes and partial column indexes can make a dramatic improvement in performance. Download Presentation (PDF).

More detailed information about EXPLAIN and creating indexes is available in the book Effective MySQL: Optimizing SQL Statements.

My second presentation was “MySQL Backup and Recovery Essentials”. This presentation covers the most common options for MySQL backup and the respective restore options. Also covered is the importance of the master binary logs and point in time recovery capabilities. Download Presentation (PDF)

More detailed information about the right backup and recovery strategy and associated tools is available in the book Effective MySQL: Backup and Recovery.

References

Latin America MySQL/NoSQL/Cloud Conference Program.

Upcoming MySQL Connect Presentations


The MySQL Connect 2012 conference, being held in San Francisco on Sep 29-30, has a long list of quality MySQL speakers, including myself. I will be giving two presentations:

CON8322 – Lessons from Managing 500+ MySQL Instances

In this presentation, learn about the issues of managing a large number of instances of MySQL, supporting 50 billion SQL statements per day. Topics covered:
  • The need for monitoring and instrumentation
  • How to automate installations, upgrades, and deployments
  • Issues with MySQL’s Replication feature with 300 slaves per master
  • Traffic minimization techniques
  • Creating high availability with regions and zones
  • Real-time traffic stats (aggregated every five seconds)

CON8320 – Improving Performance with Better Indexes

Learn how to use one simple advanced technique to make better indexes in MySQL and improve your queries by 500 percent or more. Even with a highly indexed schema, you can achieve significant improvements in performance by creating better indexes. This presentation introduces an approach to correct identification and verification of problem SQL statements and then describes the means of identifying index choices for optimization. Then it discusses not only how to apply indexes to improve query performance but also how to apply better indexes and provide even greater performance gains.

You can also read more information with my Interview about MySQL Connect.

Recent Presentations at Charlotte South East LinuxFest

At the recent South East LinuxFest in June 2012 I gave two MySQL presentations.

The first was on Explaining the MySQL Explain. This presentation details the MySQL Query Execution Plan (QEP) of an SQL statement and how to understand and interpret the information from the EXPLAIN command. Also discussed are additional commands and tools that provide supplementary information. These are essential skills that will be used daily in production operations. Download Presentation (PDF)

More detailed information about EXPLAIN and associated commands is available in the book Effective MySQL: Optimizing SQL Statements.

The second was on MySQL Disasters, and how to avoid yours. Organizations are always making improvements for scalability; however, disaster preparedness is the poor cousin. This presentation shows how to easily avoid the most common MySQL disaster situations. Backup and recovery is critical for business continuity, yet many websites run the risk of data loss or corruption because existing procedures (if any) are generally flawed.
Download Presentation (PDF)

More detailed information about the right backup and recovery strategy and associated tools is available in book Effective MySQL: Backup and Recovery.

References

South East Linux Fest Agenda

REPOST: A Tragically Comedic Security Flaw in MySQL

“In short, if you try to authenticate to a MySQL server affected by this flaw, there is a chance it will accept your password even if the wrong one was supplied. The following one-liner in bash will provide access to an affected MySQL server as the root user account, without actually knowing the password.”

$ for i in `seq 1 1000`; do mysql -u root --password=bad -h 127.0.0.1 2>/dev/null; done
mysql>

The following are confirmed distributions that are vulnerable:

  • Ubuntu Linux 64-bit ( 10.04, 10.10, 11.04, 11.10, 12.04 ) ( via many including @michealc )
  • OpenSuSE 12.1 64-bit MySQL 5.5.23-log ( via @michealc )
  • Debian Unstable 64-bit 5.5.23-2 ( via @derickr )
  • Fedora ( via hexed and confirmed by Red Hat )
  • Arch Linux (unspecified version)

Full details can be found at https://community.rapid7.com/community/metasploit/blog/2012/06/11/cve-2012-2122-a-tragically-comedic-security-flaw-in-mysql

The black vodka MySQL tradition

Many do not need any further introduction to this Monty tradition at MySQL events. For the New York Effective MySQL Meetup group this was a new experience, one I had the opportunity to share with many at our recent meeting. In 12 months the group has grown to over 280 members, and now recent attendees have experienced black vodka first hand.

A special thanks to Monty Program AB and Colin Charles for providing the alcohol.


South America Speaking Events

Following my 2 presentations at SouthEast LinuxFest on Friday and Open DB Camp on Sunday in Charlotte, NC, I will then be speaking at the first Latin America MySQL event in Buenos Aires, Argentina later this month. This will include at least six MySQL Alumni and key presentations from MariaDB and Tokutek.

I will then be attending the OTN Tour 2012 event in Cali, Colombia the following week, followed by a dedicated 2 day MySQL Training Days event.

UTF-8 with MySQL and LAMP

A recent question on a mailing list was about best practices for UTF-8 and PHP/MySQL. The following are the configurations I used in my multi-language projects.

MySQL UTF-8 Configuration

# my.cnf
[mysqld]
default_character_set = utf8
character_set_client       = utf8
character_set_server       = utf8
[client]
default_character_set = utf8

PHP UTF-8 Configuration

#php.ini
default_charset = "utf-8"

Apache UTF-8 Configuration

#httpd.conf
AddDefaultCharset UTF-8
<VirtualHost>
    AddCharset UTF-8   .htm
</VirtualHost>

HTML file UTF-8 Configuration

 <meta charset="utf-8">

PHP file UTF-8 Configuration

header('Content-type: text/html; charset=UTF-8');

MySQL connection (extra precaution)

SET NAMES utf8;
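
To confirm what the server and a given connection are actually using, a quick check is:

$ mysql -uroot -p -e "SHOW VARIABLES LIKE 'character_set%';"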

Shell UTF-8

And last but not least, even editing files in the shell can be affected (e.g. UTF-8 data to be inserted into the database from a file). Ensure at least:

#~/.profile
export LC_CTYPE=en_US.UTF-8
export LANG=en_US.UTF-8

Amateurs – They give us professionals a bad name

Any person with half a brain would see from the error messages below that the MySQL server is not operating optimally, or more specifically that the MySQL upgrade has not completed successfully, and yet users can go happily use the website. It amazes me when web hosting providers tell their paying client that an upgrade has been performed, yet they did not have the intelligence to actually look at the error log for confirmation. Got a mysql> prompt? It's all good. One of the first things I check is the error log.

When will people learn that the MySQL error log is a valuable resource, both for what it contains and for what it should not contain?

120426 17:36:00 [Note] /usr/libexec/mysqld: Shutdown complete

120426 17:36:00 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
120426 17:36:00 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
120426 17:36:00 [Note] Plugin 'FEDERATED' is disabled.
/usr/libexec/mysqld: Table 'mysql.plugin' doesn't exist
120426 17:36:00 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it.
120426 17:36:00 InnoDB: The InnoDB memory heap is disabled
120426 17:36:00 InnoDB: Mutexes and rw_locks use GCC atomic builtins
120426 17:36:00 InnoDB: Compressed tables use zlib 1.2.3
120426 17:36:00 InnoDB: Using Linux native AIO
120426 17:36:00 InnoDB: Initializing buffer pool, size = 128.0M
120426 17:36:00 InnoDB: Completed initialization of buffer pool
120426 17:36:00 InnoDB: highest supported file format is Barracuda.
120426 17:36:00  InnoDB: Waiting for the background threads to start
120426 17:36:01 InnoDB: 1.1.8 started; log sequence number 232577699
120426 17:36:01 [ERROR] Missing system table mysql.proxies_priv; please run mysql_upgrade to create it
120426 17:36:01 [ERROR] Can't open and lock privilege tables: Table 'mysql.servers' doesn't exist
120426 17:36:01 [ERROR] Native table 'performance_schema'.'events_waits_current' has the wrong structure
120426 17:36:01 [ERROR] Native table 'performance_schema'.'events_waits_history' has the wrong structure
120426 17:36:01 [ERROR] Native table 'performance_schema'.'events_waits_history_long' has the wrong structure
120426 17:36:01 [ERROR] Native table 'performance_schema'.'setup_consumers' has the wrong structure
120426 17:36:01 [ERROR] Native table 'performance_schema'.'setup_instruments' has the wrong structure
120426 17:36:01 [ERROR] Native table 'performance_schema'.'setup_timers' has the wrong structure
120426 17:36:01 [ERROR] Native table 'performance_schema'.'performance_timers' has the wrong structure
120426 17:36:01 [ERROR] Native table 'performance_schema'.'threads' has the wrong structure
120426 17:36:01 [ERROR] Native table 'performance_schema'.'events_waits_summary_by_thread_by_event_name' has the wrong structure
120426 17:36:01 [ERROR] Native table 'performance_schema'.'events_waits_summary_by_instance' has the wrong structure
120426 17:36:01 [ERROR] Native table 'performance_schema'.'events_waits_summary_global_by_event_name' has the wrong structure
120426 17:36:01 [ERROR] Native table 'performance_schema'.'file_summary_by_event_name' has the wrong structure
120426 17:36:01 [ERROR] Native table 'performance_schema'.'file_summary_by_instance' has the wrong structure
120426 17:36:01 [ERROR] Native table 'performance_schema'.'mutex_instances' has the wrong structure
120426 17:36:01 [ERROR] Native table 'performance_schema'.'rwlock_instances' has the wrong structure
120426 17:36:01 [ERROR] Native table 'performance_schema'.'cond_instances' has the wrong structure
120426 17:36:01 [ERROR] Native table 'performance_schema'.'file_instances' has the wrong structure
120426 17:36:01 [ERROR] Column count of mysql.db is wrong. Expected 22, found 20. Created with MySQL 50077, now running 50523. Please use mysql_upgrade to fix this error.
120426 17:36:01 [ERROR] mysql.user has no `Event_priv` column at position 29
120426 17:36:01 [ERROR] Cannot open mysql.event
120426 17:36:01 [ERROR] Event Scheduler: An error occurred when initializing system tables. Disabling the Event Scheduler.
120426 17:36:01 [Note] /usr/libexec/mysqld: ready for connections.
Version: '5.5.23-cll'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Server (GPL) by Atomicorp
120426 17:46:01 [ERROR] Missing system table mysql.proxies_priv; please run mysql_upgrade to create it
120426 17:46:01 [ERROR] Can't open and lock privilege tables: Table 'mysql.servers' doesn't exist
120426 17:46:01 [ERROR] Column count of mysql.proc is wrong. Expected 20, found 16. Created with MySQL 50077, now running 50523. Please use mysql_upgrade to fix this error.
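
For the record, these errors are exactly what mysql_upgrade exists to address; a sketch of the fix (service name and credentials assumed):

$ mysql_upgrade -uroot -p
$ service mysqld restart
# then re-check the error log to confirm the [ERROR] lines are gone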

Some more light reading at Have you checked your MySQL error log today? and Monitoring MySQL – The error log

I want a mysqldump --ignore-database option

While working with RDS and Google Cloud SQL I have come to realize that excluding the mysql schema from a mysqldump is important. However, with many databases the --all-databases option only enables you to select all or none. There is however an easy solution to exclude one or more databases in mysqldump with this little gem I created.

$ time mysqldump --databases `mysql --skip-column-names -e "SELECT GROUP_CONCAT(schema_name SEPARATOR ' ') FROM information_schema.schemata WHERE schema_name NOT IN ('mysql','performance_schema','information_schema');"` > /mysql/backup/rds2.sql

And you can exclude as many schemas as you want.

I checked mysqldump --help and there was no such option in MySQL 5.1, asked a colleague just to be sure I wasn't wasting my time, and it took all of 2 minutes to create and test a working solution.
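
The same idea is a little easier to read (and to extend with more excluded schemas) when the schema list is captured in a variable first; this is just a restatement of the command above:

$ DBS=$(mysql --skip-column-names -e "SELECT GROUP_CONCAT(schema_name SEPARATOR ' ')
    FROM information_schema.schemata
    WHERE schema_name NOT IN ('mysql','performance_schema','information_schema');")
$ time mysqldump --databases ${DBS} > /mysql/backup/rds2.sql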

When is a database schema not a database schema?

mysql> show schemas;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| innodb             |
| mysql              |
| performance_schema |
+--------------------+
4 rows in set (0.00 sec)
mysql> drop schema innodb;
ERROR 1010 (HY000): Error dropping database (can't rmdir './innodb/', errno: 17)

This additional 'innodb' schema is included in an AWS RDS installation; it is really just a directory placed in the MySQL data directory, which is why the DROP fails. You should not put directories in the MySQL data directory.

MySQL now has two user conferences (*)

PC World has written a post with this title(*) about the upcoming MySQL Connect conference and references the Percona Live conference and an official Percona comment. As this is not syndicated in Planet MySQL I encourage you to read the full article.

This is the MySQL conference to get technical presentations from the many great Oracle/MySQL technical staff who will not be in attendance at Percona Live. There will also be a strong community presence speaking at MySQL Connect in September. While Oracle was organizing a dedicated MySQL event in April for the community, with all vendors including Percona, to replace the conference dropped by long-term partner O'Reilly (kudos for many years of great events), Percona decided to go at it without including Oracle, the owners and developers of MySQL. The statement quoted in the PC World article regarding “lack of momentum around the [annual community] event” is clearly inaccurate and not a true representation of actual events.

It is difficult to keep up with all the community events Oracle is now running including multiple OTN MySQL Developer days per month across the US and Europe. I will be speaking at the upcoming OTN NY event in April, the Rocky Mountain Training Day in May, and hopefully the MySQL Innovation day in June. Get the full list at Upcoming MySQL Events.

Indeed, MySQL content and presentations have been represented at Oracle Open World for a number of years; 2011 had a very large turnout and many MySQL presentations. As a senior consultant for MySQL Inc, I manned a MySQL booth in the OOW exhibition hall back in 2007, prior to both the Sun and Oracle acquisitions.