When mysqldump --no-set-names matters

I had a perplexing problem yesterday: a MySQL dump and restore was producing different results when verified with MaatKit mk-table-checksum.

mk-table-checksum --algorithm=BIT_XOR h=192.168.X.XX,u=user,p=password --databases=db1 --tables=c
DATABASE TABLE   CHUNK HOST         ENGINE      COUNT         CHECKSUM TIME WAIT STAT  LAG
db1      c           0 192.168.X.XX InnoDB     215169         d1d52a31    2    0 NULL NULL
mk-table-checksum --algorithm=BIT_XOR h=localhost,u=user,p=password --databases=db1 --tables=c
DATABASE TABLE   CHUNK HOST      ENGINE      COUNT         CHECKSUM TIME WAIT STAT  LAG
db1      c           0 localhost InnoDB     215169         91e7f182    0    0 NULL NULL

It was rather crazy until I reviewed the mysqldump settings I was using, and I realized I was using --no-set-names.
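For reference, here is a sketch of the two dump invocations being compared; the host, credentials and file names are placeholders rather than my actual command line:

# produce comparable dumps of db1.c with and without --no-set-names
mysqldump -h 192.168.X.XX -u user -p db1 c > with_set_names.sql
mysqldump -h 192.168.X.XX -u user -p --no-set-names db1 c > without_set_names.sql
diff without_set_names.sql with_set_names.sql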

So just what does this option remove? Here is a diff of mysqldump output with and without it.

5a6,10
>
> /*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */;
> /*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */;
> /*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */;
> /*!40101 SET NAMES utf8 */;
153a159,161
> /*!40101 SET CHARACTER_SET_CLIENT=@OLD_CHARACTER_SET_CLIENT */;
> /*!40101 SET CHARACTER_SET_RESULTS=@OLD_CHARACTER_SET_RESULTS */;
> /*!40101 SET COLLATION_CONNECTION=@OLD_COLLATION_CONNECTION */;
156c164

As you can see, the dump executes a SET NAMES utf8. The problem here is that the table I am exporting is DEFAULT CHARSET=latin1, and no columns are defined as utf8.
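One way the connection character set can change what actually lands in a latin1 column (a minimal sketch; the byte values are illustrative, not taken from my table):

-- 0xC3A9 is 'é' in utf8 but 'Ã©' in latin1; compare what a latin1 column would store
SELECT HEX(CONVERT(CONVERT(X'C3A9' USING utf8) USING latin1));   -- E9 (converted)
SELECT HEX(CONVERT(CONVERT(X'C3A9' USING latin1) USING latin1)); -- C3A9 (unchanged)

If the restore interprets the dump's bytes under a different character set than the dump assumed, the stored bytes, and therefore the BIT_XOR checksum, can differ.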

I'm no expert in character sets, but this strikes me as strange: the problem is resolved, but not to my satisfaction or comfort level.
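If I revisit this, the restore I would try (a sketch, assuming the table really is latin1 only) forces the client character set rather than relying on the dump file to set it:

# force the connection character set when replaying a --no-set-names dump
mysql --default-character-set=latin1 db1 < without_set_names.sql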

