Archive for August, 2007

The woes of MySQL Community tools under Solaris

Friday, August 31st, 2007

Yesterday I attempted to get a working MySQL environment to support a number of utilities we all use, including mytop, innotop, mybench and mysqltoolkit. These products require a number of Perl dependencies, and while installing those may be a rather trivial task under Linux with the power of CPAN, working on Solaris is a whole different story.

For the record, I’m working with Solaris 9 SPARC 64bit.

I won’t detail how hard it was to get to this point, except to say thanks to Jeremy, Baron and Frank so far. Here is where I’m at.

You need a number of pre-requisites, most available from sunfreeware.com.

Pre-Requisites

$ wget ftp://ftp.sunfreeware.com/pub/freeware/sparc/9/libiconv-1.11-sol9-sparc-local.gz
$ wget ftp://ftp.sunfreeware.com/pub/freeware/sparc/9/gcc-3.4.6-sol9-sparc-local.gz
$ wget ftp://ftp.sunfreeware.com/pub/freeware/sparc/9/make-3.81-sol9-sparc-local.gz
$ gunzip libiconv-1.11-sol9-sparc-local.gz gcc-3.4.6-sol9-sparc-local.gz make-3.81-sol9-sparc-local.gz
$ pkgadd -d libiconv-1.11-sol9-sparc-local
$ pkgadd -d gcc-3.4.6-sol9-sparc-local
$ pkgadd -d make-3.81-sol9-sparc-local
$ wget http://search.cpan.org/CPAN/authors/id/A/AB/ABURLISON/Solaris-PerlGcc-1.3.tar.gz
$ gunzip Solaris-PerlGcc-1.3.tar.gz
$ tar xvf Solaris-PerlGcc-1.3.tar
$ cd Solaris-PerlGcc-1.3
$ perl Makefile.PL
$ make install
$ wget http://search.cpan.org/CPAN/authors/id/M/MS/MSCHWERN/ExtUtils-MakeMaker-6.36.tar.gz
$ gunzip ExtUtils-MakeMaker-6.36.tar.gz
$ tar xvf ExtUtils-MakeMaker-6.36.tar
$ cd ExtUtils-MakeMaker-6.36
$ perl Makefile.PL
$ make install

Back to installing Term::ReadKey

$ PATH=/usr/local/bin:$PATH;export PATH
$ PATH=/usr/perl5/5.6.1/bin/:$PATH;export PATH

$ perlgcc Makefile.PL
Checking if your kit is complete...
Looks good
Writing Makefile for Term::ReadKey
netcontrol1:/tmp/TermReadKey-2.30# make install
make: *** No rule to make target `/usr/perl5/5.6.1/lib/sun4-solaris-64int/Config.pm', needed by `Makefile'.  Stop.

This is where the trail ends, as my friend Google has not helped me past this point, so I’m turning to the community. If anybody can assist, please let me know.

A MySQL Story

Sunday, August 26th, 2007

The Taso Dump

I’ve got to go, my time has come. (maintenance window needed)
Could not get a lock. (table lock)
I needed a blocker to ensure a single thread. (--single-transaction)

Finally, got a lock. (--lock-tables)
Need to ensure no transactions lost.
Dump, Dump, Dump. (mysqldump)

I’m having flow problems.
Was the buffer too small? (key_buffer_size)
Was it the query cache? (query_cache)
My Dirty Pages may be too high? (innodb_max_dirty_pages_pct)
Or was it just Too Many Connections? (max_connections)
But it was just waiting on the flush (flush tables with read lock)

Time passes, Time Passes. No output yet.
Is it network saturation?
Is it IO bound?
Do I need a better flushing method? (innodb_flush_method)
No, it was just the lag? (Seconds_behind_master)

Dump is complete (unlock tables)
Now it’s time to Purge (purge logs)

If there is an attempt to restore.
The backup would be empty.
Blackhole would be found as the cause.
In the end, all transactions lost!

If you weren’t at Friday dinner after MySQL Camp II you missed it.
It’s not meant to be correct MySQL grammar; it’s just some random words we were throwing around.

MySQL Camp II – Post Dinner

Sunday, August 26th, 2007

MySQL Camp II is complete. A small group of about 18 had a post-camp dinner at Tiny Thai in New York City. Some elected to drive from Brooklyn; they arrived at least 30 minutes after those of us that took the subway.

I have a lot of notes to write up, if time ever permits. For now, the few that joined for drinks are pictured below. I know other people took photos of the camp; for a change I actually took none. If you want to add a link in the comments to photos from the camp, that would be great.

MySQL Camp II Post Dinner Drinks


MySQL Camp II – Introductions

Friday, August 24th, 2007

We have started MySQL Camp II. The first session is Introductions.

I didn’t catch all the employers, but here is part of the list of attendees at the Introductions session. Great to see multiple people from many places including ESPN, priceline.com, Proven Scaling, Solid Tech (sponsors of Dorsal Source), ForSaleByOwner, fontshop.com, 9Mmedia, CafeMom, JP Morgan, Upoc, ClubMom, Stock Photo Finder, AmieStreet, LogicWorks, Skoll (Distributed Continuous QA), AOL, Minggl, Minggl New Test Site and Visibone. Others included OnlineBuddies.com, the NT Snort User Group, the DreamweaverNY User Group and a law firm, as well as a few people from MySQL.

Many people mentioned having an Oracle background or working with Oracle now, including at least 6 people that heard me speak at “MySQL DBA Bootcamp for the Oracle DBA” last week.

There were a lot of MySQL beginners here, which was really great.

And now we are onto the discussion of the sessions.

MySQL Camp II – It begins

Thursday, August 23rd, 2007


Well readers, you’re either here or you’re not. MySQL Camp II starts today in Brooklyn, New York, at Polytechnic University. Last night’s pre-camp drinks meetup in NYC went well, but today it’s brass tacks time.

For those of you not able to make it, #mysql-camp on Freenode IRC will be the place to hang out to hear what’s happening. If you’re not at the camp, please identify yourself. Be sure to also check out the Camp Web Site MySQL Camp II for the plans for today and tomorrow.

For those of you not here, MySQL Camp III is already in planning.

MySQL Backup & Recovery – Part 1

Tuesday, August 21st, 2007

I realized recently, from observation, that some smaller websites which use MySQL do not have a working backup and recovery strategy. Some may have a backup approach, but it will not work in a recovery. As the first of a number of discussions on Backup & Recovery, here is Part 1.

Using straight ‘cp’ for Backup

Using ‘cp’ to simply copy your running database *DOES NOT WORK*. You may be lucky, but in a world of guarantees, there is no guarantee that you can recover your system. Why is this so?

  • The data is not consistent during the backup. If it takes, say, 5 minutes to copy your files, they are probably copied in alphabetical order. What if data is written during the backup to a table starting with ‘A’ and a table starting with ‘Z’? The ‘A’ file has already been copied.
  • When using MyISAM, indexes are not flushed to disk, unlike table data. While MyISAM has the facility to recover and rebuild indexes using myisamchk, you need to know that you have corruption, you need to rebuild your tables offline, and there is no assurance your data can be corrected.
  • Likewise with InnoDB, data and indexes are not flushed to disk. While InnoDB does flush the redo logs to enable crash recovery, I have seen on a production website (granted, running 4.1) that InnoDB failed to recover, causing major downtime and serious business ramifications.

Using ‘cp’ correctly

There are two ways to copy your database correctly. The first is to shut down your database, then copy your data files. Be sure to also copy your my.cnf file; this is important in a recovery process. The second is to use the FLUSH TABLES WITH READ LOCK command. For example:

mysql> FLUSH TABLES WITH READ LOCK;
-- In second session copy all data
mysql> UNLOCK TABLES;

The advantage is you don’t have to shut down your instance; the disadvantage is that FLUSH TABLES WITH READ LOCK will halt any write statements to your database, so depending on the time taken to copy your data files, that could have a significant effect on your system. In a high-write environment the FLUSH command itself could also take a significant time to complete.

I’ll leave you with one simple tip. You should always have sufficient disk space on your machine for performing the backup. This is important for two reasons. First, if you compress your backup during the process, you are taking longer than a straight copy and your database is unavailable longer. Second, over 90% of recoveries use last night’s backup, and the time taken to uncompress the backup affects your recovery time, particularly in a large system. Your ‘cp’ should indeed be a two-phase process: first ‘cp’ on the local server for a local backup, then create a compressed off-server backup.
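The two-phase approach can be sketched as a script. This is a minimal sketch only; all the paths are assumptions, and the demo data directory stands in for your real datadir, which you would copy inside FLUSH TABLES WITH READ LOCK or with the server shut down.

```shell
#!/bin/sh
# Sketch of a two-phase 'cp' backup (all paths are illustrative).
DATADIR=/tmp/demo-datadir     # stands in for your MySQL datadir
LOCALBAK=/tmp/demo-backup     # local, uncompressed copy for fast restore
OFFSITE=/tmp/demo-offsite     # staging area for the compressed off-server copy

# Demo data so the sketch is self-contained; in real use the datadir exists.
mkdir -p "$DATADIR" "$LOCALBAK" "$OFFSITE"
echo "table data" > "$DATADIR/a.MYD"

# Phase 1: fast local copy. The database is unavailable only for this step.
cp -rp "$DATADIR/." "$LOCALBAK/"

# Phase 2: compress the local copy for the off-server archive. Recovery
# uses the uncompressed local copy, so a restore does not pay the
# decompression cost.
tar cf - -C "$LOCALBAK" . | gzip > "$OFFSITE/backup.tar.gz"

ls "$LOCALBAK/a.MYD" "$OFFSITE/backup.tar.gz"
```

The off-server copy would then be transferred elsewhere (scp, rsync); only the compressed archive needs to leave the machine.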

In my next Part, I’ll discuss more alternatives to improving your backup strategy.

References

FLUSH TABLES

Today’s interesting MySQL Error Message

Monday, August 20th, 2007

You have to love error messages sometimes. Today it was “ERROR 1289 (HY000): The 'UNKNOWN' feature is disabled”.

For those interested:

mysql> select version();
+-------------+
| version()   |
+-------------+
| 6.0.0-alpha |
+-------------+
1 row in set (0.00 sec)
mysql> set session sql_mode=NO_ENGINE_SUBSTITUTION;
Query OK, 0 rows affected (0.00 sec)

mysql> create table T2 (id INT UNSIGNED NOT NULL) ENGINE=InnoDB;
ERROR 1289 (HY000): The 'UNKNOWN' feature is disabled; you need MySQL built with 'UNKNOWN'
to have it working

In this case it’s obviously something that’s not quite right, so being a good community contributor I raised a bug. You can read more at Bug #29373.

MySQL values contributions from the community, even ones as trivial as documentation or error messages. If you have an inquiry, raise it at the forums or on the lists; if it’s obviously an issue, raise it in the Bugs System.

SHOW STATUS WHERE

Monday, August 20th, 2007

When you use SHOW STATUS you can restrict the output with the LIKE syntax, allowing for a subset of values. For example:

mysql> SHOW GLOBAL STATUS LIKE 'Com%';
+--------------------------+-------+
| Variable_name            | Value |
+--------------------------+-------+
| Com_admin_commands       | 0     |
| Com_alter_db             | 0     |
| Com_alter_table          | 0     |
| Com_analyze              | 0     |
| Com_backup_table         | 0     |
| Com_begin                | 0     |
| Com_change_db            | 0     |
| Com_change_master        | 0     |
...

That’s great, but sometimes you want specific values. Using WHERE can achieve this. For example:

mysql> SHOW GLOBAL STATUS WHERE VARIABLE_NAME IN ('Com_insert','Innodb_buffer_pool_pages_latched','threads_running');
+----------------------------------+-------+
| Variable_name                    | Value |
+----------------------------------+-------+
| Com_insert                       | 0     |
| Innodb_buffer_pool_pages_latched | 0     |
| Threads_running                  | 1     |
+----------------------------------+-------+
3 rows in set (0.00 sec)

Cool. The downside is you lose the wildcard capability; however, you can string conditions together with LIKE:

mysql> SHOW GLOBAL STATUS WHERE VARIABLE_NAME LIKE 'innodb%' OR VARIABLE_NAME LIKE  'com%';

MySQL NY Meetup – Part 2 in our series

Monday, August 20th, 2007

Last night at the MySQL NY Meetup we continued on from a very successful July presentation on “Practical Performance Tips & Tricks”. I must admit, after speaking and standing all day for the MySQL DBA Bootcamp for the Oracle DBA it was a stretch, and we didn’t cover all the material as expected, but the evening was still very productive for everybody. Links are here for my August Presentation and July Presentation.

Thanks to Marc and the team from LogicWorks for again sponsoring our NY Meetup Event. We don’t get the beer and food any other way.

As a consultant working on client sites, even in the time since the previous meeting I have seen a number of simple steps that can be applied to every single MySQL environment, and last night’s talk placed some focus on this. The following is the homework that has been set for attendees, so next month we can have a panel discussion and then continue with more analysis of MySQL.

This homework focuses on three areas. It seems like a lot, but it will be worth it.

  1. Collection of SQL statements (in various forms)
  2. Monitoring of raw MySQL Status information
  3. Review of Schema Data Types for more optimal storage

Collection

This information is to be collected.

  • Production is to have the slow query log enabled and long_query_time = 1
  • Daily review of the production slow query log to view the longest-running and most frequent queries (Slide Aug-13)
  • All development and test environments are to have the general log and slow query log enabled.
  • All application paths are to be tested in development or testing, to capture all SQL. Use the self-documenting approach (Slide: Aug-21)
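As a starting point, the logging settings above might look like the following in my.cnf. This is a sketch using the 5.0-era option names; the log file paths are assumptions to adapt to your installation.

```ini
[mysqld]
# Production: catch queries running 1 second or longer
log-slow-queries = /var/log/mysql/slow.log
long_query_time  = 1

# Development/test only: the general log records every statement
log = /var/log/mysql/general.log
```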

Outputs
Your outputs will include offending SQL statements, via the slow query log, that require investigation; extra credit for doing an EXPLAIN and a SHOW CREATE TABLE for each table in the query, for analysis of improving the queries.
You will also have a review of every SQL statement. A simple desk check will determine whether there are any queries which are exact duplicates, and whether there are any queries that could be combined (as they use the same WHERE condition). Are there any queries that are repeated a lot, but with only changing values? These are clear candidates for combining into a single query.
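Part of that desk check can be automated. The sketch below (my own illustration, not from the talk) normalizes literal values so queries differing only in values group together; the sample queries are invented, and mysqldumpslow applies the same style of abstraction to the slow query log.

```shell
#!/bin/sh
# Group queries that differ only in literal values (a rough sketch).
# Invented sample input stands in for the captured general/slow log statements.
cat <<'EOF' > /tmp/queries.txt
SELECT * FROM orders WHERE customer_id = 42
SELECT * FROM orders WHERE customer_id = 99
SELECT name FROM users WHERE id = 7
EOF

# Replace quoted strings and bare numbers with placeholders, then count
# each normalized form, most frequent first.
sed -e "s/'[^']*'/'S'/g" -e 's/[0-9][0-9]*/N/g' /tmp/queries.txt \
  | sort | uniq -c | sort -rn | tee /tmp/query_summary.txt
```

The two orders queries collapse to one normalized form with a count of 2, flagging them as candidates for a combined query.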

MySQL Status Information

The MySQL status variables can help point to where to look for problems. These are the raw fundamentals that either home-grown scripts or the MySQL Network Monitoring and Advisory Service use to monitor important information (e.g. buffer hit ratio) and to provide alerts of present or potential pending problems. We will not be reviewing all of these status variables (~250) or developing any detailed analysis; however, this is designed to show you the building blocks.

You should collect figures for 1 minute, 1 hour and 1 day that represent appropriate load on your system.

1 Minute

$ mysqladmin -uroot -p -r -i 1 -c 60 extended-status | grep -v " | 0 " > minute.log

1 Hour

$ mysqladmin -uroot -p -i 60 -c 60 extended-status > hourly.log

1 Day

$ mysqladmin -uroot -p -i 3600 -c 24 extended-status > daily.log

Extra credit for downloading Statpack and doing an analysis of the statistics. The trick is to take the first and last status outputs from each of the hourly and daily logs to get a 1-hour picture and a 1-day picture.
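The first/last trick reduces to subtracting two snapshots, counter by counter. A sketch with awk follows; the two sample files and their numbers are invented stand-ins for the first and last extended-status blocks in hourly.log, and Statpack performs this same arithmetic across the full set of counters.

```shell
#!/bin/sh
# Compute counter deltas between the first and last status snapshots.
# Invented mysqladmin extended-status samples stand in for hourly.log entries.
cat <<'EOF' > /tmp/status_first.txt
| Com_insert    | 1000 |
| Com_select    | 5000 |
EOF
cat <<'EOF' > /tmp/status_last.txt
| Com_insert    | 1600 |
| Com_select    | 9200 |
EOF

# Read the first sample into an array keyed by variable name, then
# print last-minus-first for each counter in the second sample.
awk -F'|' 'NR==FNR { first[$2] = $3; next }
           { printf "%s %d\n", $2, $3 - first[$2] }' \
    /tmp/status_first.txt /tmp/status_last.txt | tee /tmp/status_delta.txt
```

Each output line is a counter name and its change over the sampling window, which is the figure you actually want for rates like inserts per hour.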

Review Schema

For reviewing the schema, try to use a static, production-sized copy (e.g. a copy of production in a test environment). This ensures you don’t make changes that could cause your production system to fail, and that before and after figures are for the same data set, since it’s not changing during production use.

  • Calculate size of present data
  • Review schema for immediate size improvements (Slide Jul-59)
  • Apply the size improvements and re-run the calculation to see the difference.

SQL

Calculate Size of all schemas

select table_schema, count(*) as tables,
           sum(data_length+index_length)/1024/1024 as total_mb,
           sum(data_length)/1024/1024 as data_mb,
           sum(index_length)/1024/1024 as index_mb
from information_schema.tables
group by table_schema
order by 3 desc;

Calculate Size of all tables for given schema

use test;
select table_name,engine,row_format,avg_row_length,table_rows,
          (data_length+index_length)/1024/1024 as total_mb,
          (data_length)/1024/1024 as data_mb,
          (index_length)/1024/1024 as index_mb
from information_schema.tables
where table_schema=database()
order by 6 desc;

Learning MySQL as an Oracle DBA

Sunday, August 19th, 2007

Updated

I have an entire section now devoted to various MySQL for the Oracle DBA Resources. You will find additional information here.

This week I presented two one day free seminars, “MySQL DBA Bootcamp for the Oracle DBA” in New York and San Francisco. Both were very successful days providing an opportunity to speak to seasoned enterprise professionals.

One question I was asked was “As an Oracle DBA, how can I become a MySQL DBA, what do I do, where do I start?”

Here are my references and recommendations that have zero cost to get started.

  • Read the MySQL Documentation Reference Manual.
  • Download MySQL, install it and use it.
  • The MySQL Developer Zone is a great source for articles, information and references.
  • Planet MySQL is our official consolidated blog aggregator. Read it daily.
  • The MySQL Forge is a detailed reference of MySQL-related projects, code snippets, a wiki and MySQL WorkLog: a look at what MySQL is doing in future versions, and what others like yourself would like to see in future versions.
  • The MySQL Conference website has many papers from the recent 2007 Conference. You can also review the 2006 Conference Papers.
  • MySQL has various Forums and Email Lists. We have a specific Oracle Forum to assist Oracle DBAs and Developers with MySQL questions.
  • MySQL also provides a large number of White Papers and Case Studies in its Why MySQL section. These are helpful to see how MySQL is being used today.
  • Register at mysql.com. If you joined a list, filled in a download form or responded to a forum, you’re already registered; if not, registering will get you a regular newsletter that provides helpful information, including events, webinars and training.
  • Sheeri Kritzer, the She-BA of MySQL and this year’s community award winner, has an extensive list of resources on her website, including podcasts and a long list of Audio & Video from the recent MySQL Conference.

Additional Oracle-to-MySQL references from recent conferences include 2006 – MySQL For Oracle DBAs and
MySQL For Oracle Developers, and 2007 – MySQL For Oracle DBAs and Developers.

For your reference material, I would initially recommend the following books.

  • “MySQL 5.0 Certification Guide” – which all attendees received for free.
  • “MySQL Cookbook” by Paul DuBois, “MySQL” by the same Paul DuBois or “Pro MySQL” by Michael Kruckenberg and Jay Pipes.
  • “MySQL Administrator’s Guide and Language Reference”: this is just a printed copy of the MySQL Manual, but some people may appreciate it.

Following that, additional resources, depending on your level of interest in development or internal workings, would include:

  • “MySQL Stored Procedure Programming” by Guy Harrison.
  • “Understanding MySQL Internals” by Sasha Pachev.

MySQL Professional Services also provides training and certification for MySQL. With 9 different training courses held worldwide, and 5 different certification courses, there are various programs to suit DBAs and Developers at different skill levels. For more information see MySQL Training and Certification.

Let’s improve MySQL security

Sunday, August 19th, 2007

We have all done it in the past, and probably most people reading this will admit (or lie about) still doing it, but everybody must start making an effort to improve MySQL security in the usage of your MySQL installation, including just on your laptop, and in presentations that people read.

I spotted a reference article on Planet MySQL this evening, and even without looking at the details, the syntax presented typifies two basic and fundamental MySQL security 101 issues.

1. Always, always, always have a password for a MySQL account, especially for the ‘root’ user.
2. Don’t use the ‘root’ user unless you really have to. The SUPER privilege is just that: SUPER. There are many things you really don’t want every person with access to have. In a larger environment you want to ensure good principles, but this applies in a single-developer environment as well.

I was asked on Thursday at a seminar I was giving, “Our developers log in as ‘root’ for development and use ‘root’ for MySQL access, is this bad?”, to which my simple response was “Yes”.

I think MySQL should improve the product and not allow a root user without a password. You can of course use mysql_secure_installation to set this, but it’s not a mandatory step. You can also use SQL_MODE to ensure users can’t be created without a password, but in both instances the horse has already bolted.

My iPhone Bill

Sunday, August 19th, 2007

Today I got back home to my iPhone bill. Lucky for me it was only 15 pages long. Rather stupid, but this news has been circulating for a few days. You can read more Google News stories, including creative titles like “How many trees did your iPhone bill kill?”, “A 300-page iPhone Bill?” and “iPhone: The 800-Pound Gorilla Spawns a 300-Page Bill”. Not only did they waste trees, but I read one article stating it cost AT&T $7 in postage for one bill.

The present news is that the only way AT&T will address this is if you agree to a paperless bill. How stupid; how about they just not print every data transfer? It’s not like you can actually get on a data plan that’s not unlimited anyway.