Migrating a MyISAM schema to use Referential Integrity

Here are some of the steps involved, using InnoDB, the current de facto MySQL transactional engine. Of course, Falcon, PBXT and others will in time enable alternative engines to be used.

Convert Table Storage Engine Types

$ mysql -u[user] -p[password] [database] -e "SHOW TABLES" | grep -v "Tables_in" | sed -e "s/^/ALTER TABLE /" | sed -e "s/$/ ENGINE=InnoDB;/" > upgrade.sql
$ mysql -u[user] -p[password] [database] < upgrade.sql
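
The generated upgrade.sql simply contains one ALTER TABLE statement per table, for example (a sketch; the table names will vary with your schema):

ALTER TABLE wp_options ENGINE=InnoDB;
ALTER TABLE wp_posts ENGINE=InnoDB;
ALTER TABLE wp_users ENGINE=InnoDB;
ALTER TABLE wp_usermeta ENGINE=InnoDB;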

NOTE: This may not work for all tables; for example, tables with FULLTEXT indexes will fail.
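
One way to identify the affected tables before running the script is to check the INFORMATION_SCHEMA for FULLTEXT indexes (assuming MySQL 5.0 or later, where INFORMATION_SCHEMA is available):

mysql> SELECT DISTINCT TABLE_SCHEMA, TABLE_NAME FROM INFORMATION_SCHEMA.STATISTICS WHERE INDEX_TYPE = 'FULLTEXT';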

For the introduction of Referential Integrity we need to ensure the following (a sketch of these changes follows the list):

  • Each Foreign Key column should have an index. Prior to 4.1 I think this was a requirement; regardless, it is good general practice for SQL performance.
  • The data types must match between the Primary Key and Foreign Key columns. The most common oversight is UNSIGNED; you must also, for example, use INT and INT, and not INT and BIGINT.
  • Optional Foreign Keys must allow NULL values, rather than the common practice of having a default of 0.
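
As a sketch of the kind of changes involved (the parent and child tables, columns and index name here are purely illustrative, not from the schema below), aligning an optional Foreign Key column with its parent might look like:

ALTER TABLE child MODIFY parent_id INT UNSIGNED NULL DEFAULT NULL;
UPDATE child SET parent_id = NULL WHERE parent_id = 0;
ALTER TABLE child ADD INDEX child_parent_idx (parent_id);
ALTER TABLE child ADD FOREIGN KEY (parent_id) REFERENCES parent(id);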

A sample Foreign Key syntax:

mysql> ALTER TABLE wp_usermeta ADD FOREIGN KEY usermeta_user_fk (user_id) REFERENCES wp_user(id);
ERROR 1005 (HY000): Can't create table './wordpress/#sql-cd9_10.frm' (errno: 150)

A closer investigation of what this error really is:

$ perror 150
MySQL error code 150: Foreign key constraint is incorrectly formed
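
When the cause is still not obvious, SHOW ENGINE INNODB STATUS includes a LATEST FOREIGN KEY ERROR section that gives more detail than errno 150 (on older versions the command is SHOW INNODB STATUS):

mysql> SHOW ENGINE INNODB STATUS\G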

A confirmation of table definitions.

mysql> desc wp_usermeta;
+------------+---------------------+------+-----+---------+----------------+
| Field      | Type                | Null | Key | Default | Extra          |
+------------+---------------------+------+-----+---------+----------------+
| umeta_id   | bigint(20)          |      | PRI | NULL    | auto_increment |
| user_id    | bigint(20) unsigned |      | MUL | 0       |                |
| meta_key   | varchar(255)        | YES  | MUL | NULL    |                |
| meta_value | longtext            | YES  |     | NULL    |                |
+------------+---------------------+------+-----+---------+----------------+
4 rows in set (0.00 sec)

mysql> desc wp_users;
+---------------------+---------------------+------+-----+---------------------+----------------+
| Field               | Type                | Null | Key | Default             | Extra          |
+---------------------+---------------------+------+-----+---------------------+----------------+
| id                  | bigint(20) unsigned |      | PRI | NULL                | auto_increment |
| user_login          | varchar(60)         |      | UNI |                     |                |
| user_pass           | varchar(64)         |      |     |                     |                |
| user_nicename       | varchar(50)         |      |     |                     |                |
| user_email          | varchar(100)        |      |     |                     |                |
| user_url            | varchar(100)        |      |     |                     |                |
| user_registered     | datetime            |      |     | 0000-00-00 00:00:00 |                |
| user_activation_key | varchar(60)         |      |     |                     |                |
| user_status         | int(11)             |      |     | 0                   |                |
| display_name        | varchar(250)        |      |     |                     |                |
+---------------------+---------------------+------+-----+---------------------+----------------+
10 rows in set (0.00 sec)

A second pair of eyes (thanks Jon) showed that I actually spelt a table name wrong. Did you spot it? Of course it would have been nice if the error message actually told me this. This rather bland message Can’t create table could actually mean:

  • a missing index (pre 4.1)
  • incompatible column data types
  • an invalid table name

I’m sure if I tried to break it I’d find more examples, but this is just a trap for unsuspecting people.
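
With the table name corrected the constraint is created as expected, assuming all existing user_id values actually reference rows in wp_users (otherwise the existing data needs cleaning up first):

mysql> ALTER TABLE wp_usermeta ADD FOREIGN KEY usermeta_user_fk (user_id) REFERENCES wp_users(id);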

Now, migrating an existing schema to use Referential Integrity provides some initial benefits (row-level locking, protection against misplaced key updates/deletes), but it does not provide true integrity unless your application has been written to support transactions. Chances are it hasn’t, but this is at least the first step.
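
As a minimal sketch of what transaction support means at the application level (again using the illustrative parent and child tables from earlier; the name and note columns are also only for illustration), related changes are grouped so they either all succeed or are all rolled back:

mysql> START TRANSACTION;
mysql> INSERT INTO parent (name) VALUES ('example');
mysql> INSERT INTO child (parent_id, note) VALUES (LAST_INSERT_ID(), 'related row');
mysql> COMMIT;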

