On Thursday I saw something I'd not seen before: an empty InnoDB status. Given the amount of output this command normally produces, it was certainly a first. It looked like this:
```
mysql> SHOW ENGINE INNODB STATUS;
+--------+------+--------+
| Type   | Name | Status |
+--------+------+--------+
| InnoDB |      |        |
+--------+------+--------+
1 row in set (0.03 sec)
```
To answer some of the most obvious questions:
- Yes, it was a working, existing MySQL instance with InnoDB correctly configured. Indeed, we had been benchmarking against it for several hours.
- The MySQL server was running; indeed, a query selecting data from the mysql schema worked just fine after seeing this. (All other tables were InnoDB.)
- There was absolutely nothing in the host's MySQL error log. (This was the second most disappointing aspect.)
- The process list showed two queries that had been running for some time, and everything was taking > 1 second. (This was the most disappointing.)
So the problem was that MySQL had effectively hung when dealing with queries against InnoDB tables. Closer investigation found that another application's process had filled the /tmp file system. Reclaiming the space didn't cause MySQL and InnoDB to start operating again. Even a shutdown of MySQL failed, and mysqld had to be killed manually.
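In hindsight, something as simple as a free-space check in the benchmark harness would have pointed at /tmp much sooner. A minimal sketch, assuming a Python harness and an arbitrary 100 MB threshold (both assumptions on my part, not details from the actual setup):

```python
# Hypothetical pre-flight check for the benchmark harness: abort the run
# early if /tmp is close to full. The 100 MB threshold is an assumption.
import os

def tmp_free_bytes(path="/tmp"):
    stat = os.statvfs(path)
    return stat.f_bavail * stat.f_frsize  # bytes available to unprivileged users

if tmp_free_bytes() < 100 * 1024 * 1024:
    raise SystemExit("/tmp is nearly full; aborting the benchmark run")
```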
For the super inquisitive, the version was 5.1.16-ndb-6.2.0-log, and yes, it is a Cluster release. I've yet to test the problem on a normal 5.1 version and, if it exists there, log a bug appropriately.
I suspect our benchmark needs to include some timeout handling so the queries would fail (they were both UPDATEs), but it did have the customer asking why, to which there was no answer.
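As a rough illustration of what that timeout handling might look like, here is a minimal client-side sketch: a watchdog that issues KILL QUERY against the benchmark connection if a statement runs too long. It assumes a Python harness using mysql-connector-python, with made-up connection details, table and limit, none of which are from the actual benchmark. And of course, when the server itself is wedged (as here, where even a shutdown failed), the most this buys you is a visible failure rather than a silent hang.

```python
# Hypothetical client-side timeout for a benchmark statement: a second
# connection issues KILL QUERY if the UPDATE has not finished in time.
# Connection details, the 30-second limit and the table are all made up.
import threading
import mysql.connector

TIMEOUT_SECONDS = 30

def run_with_timeout(conn_args, statement):
    work = mysql.connector.connect(**conn_args)    # runs the statement
    admin = mysql.connector.connect(**conn_args)   # used only to kill it
    thread_id = work.connection_id                 # server-side id of the work connection

    def kill_if_still_running():
        # KILL QUERY aborts the statement but leaves the connection open.
        admin.cursor().execute("KILL QUERY %d" % thread_id)

    timer = threading.Timer(TIMEOUT_SECONDS, kill_if_still_running)
    timer.start()
    try:
        cursor = work.cursor()
        cursor.execute(statement)   # raises an error if the query was killed
        work.commit()
    finally:
        timer.cancel()
        admin.close()
        work.close()

run_with_timeout(
    {"host": "127.0.0.1", "user": "bench", "password": "bench", "database": "test"},
    "UPDATE t SET c = c + 1 WHERE id = 1",
)
```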