tcpdump errors on FreeBSD for mk-query-digest

I regularly use the following tcpdump command for MySQL query analysis with mk-query-digest, but I recently found that it didn’t work on FreeBSD:

$ tcpdump -i bge0 port 3306 -s 65535 -x -n -q -tttt -c 5
tcpdump: syntax error

It left me perplexed, and reading the man page seemed to indicate my options were valid. I tried a few variations just to be sure, without success.

$ tcpdump -i bge0 -c 5 port 3306 -x
tcpdump: syntax error
$ tcpdump -i bge0 -c 5 port 3306 -q
tcpdump: syntax error
$ tcpdump -i bge0 -c 5 port 3306 -tttt
tcpdump: syntax error

The solution was actually quite simple in the end: it had nothing to do with the options themselves, but everything to do with their order. tcpdump expects the filter expression (here, port 3306) after all options, so placing it last solved the problem.

$ tcpdump -i bge0 -s 65535 -x -n -q -tttt -c 5 port 3306
$ uname -a
FreeBSD db4.example.com 6.3-RELEASE-p3 FreeBSD 6.3-RELEASE-p3 #0: Wed Jul 16 05:13:50 EDT 200
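
For reference, the working command can then be captured to a file and fed to mk-query-digest using its tcpdump input type. The file name and the larger packet count below are only illustrative; adjust them to your own capture.

$ tcpdump -i bge0 -s 65535 -x -n -q -tttt -c 5000 port 3306 > mysql.tcpdump.txt
$ mk-query-digest --type tcpdump mysql.tcpdump.txt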

MySQL Best Practices: User Security

It is critical that you do not rely on the default MySQL installation security; it is simply insecure.

Default Installation

When installed, MySQL allows any user with physical access to the server to connect as an unauthenticated (anonymous) user. MySQL also provides complete access to all super user privileges via the ‘root’ user, which has no default password.

$ mysql -uroot
mysql> SELECT host,user,password FROM mysql.user;
+--------------+------+-------------------------------------------+
| host         | user | password                                  |
+--------------+------+-------------------------------------------+
| localhost    | root |                                           |
| server.local | root |                                           |
| 127.0.0.1    | root |                                           |
| localhost    |      |                                           |
| server.local |      |                                           |
+--------------+------+-------------------------------------------+

What you see here are two types of users.

  • The ‘root’ user, which has MySQL super user privileges for your server or ‘localhost’ connections, with no password.
  • Anonymous, unauthenticated users, indicated by the blank ‘user’ column.

The absolute minimum you should do is run the provided optional command mysql_secure_installation for immediate improvements. When running this command, you are prompted for the following options (the output has been trimmed for presentation purposes).

$ mysql_secure_installation
Enter current password for root (enter for none):
Set root password? [Y/n] y
New password:
Re-enter new password:
Remove anonymous users? [Y/n] Y
Disallow root login remotely? [Y/n] Y
Remove test database and access to it? [Y/n] Y
Reload privilege tables now? [Y/n] Y
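
If the script is not available on your platform, a minimal manual equivalent is roughly the following (the password value is obviously a placeholder):

mysql> SET PASSWORD FOR 'root'@'localhost' = PASSWORD('new-password');
mysql> DELETE FROM mysql.user WHERE user = '';
mysql> DELETE FROM mysql.user WHERE user = 'root' AND host NOT IN ('localhost', '127.0.0.1');
mysql> DROP DATABASE IF EXISTS test;
mysql> FLUSH PRIVILEGES;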

If you revisit the permissions now, you will see what you would expect from a more secure initial installation.

mysql> SELECT host,user,password FROM mysql.user;
+-----------+------+-------------------------------------------+
| host      | user | password                                  |
+-----------+------+-------------------------------------------+
| localhost | root | *FDAF706717E70DB8DDAD0C5214B13770E1A80B0E |
+-----------+------+-------------------------------------------+

This is only the first step to hardening your MySQL instance and server.

Recommendations

The following are my recommendations for the minimum MySQL security permissions:

  • Always set a MySQL ‘root’ user password
  • Change the MySQL ‘root’ user id to a different name, e.g. ‘dba’
  • Only grant SUPER privileges to DBA accounts, and only ever for ‘localhost’.
  • Application user permissions should be as restrictive as possible.
  • Never use ‘%’ for a hostname
  • Never use ALL TO *.*
  • Ideally the application should have at least two types of users: a read/write user and a read-only user (see the sketch after this list).
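
As a sketch of the last few points, assuming a hypothetical application schema ‘appdb’ and an internal application network of 10.0.0.%:

mysql> CREATE USER 'app_rw'@'10.0.0.%' IDENTIFIED BY 'a-strong-password';
mysql> GRANT SELECT, INSERT, UPDATE, DELETE ON appdb.* TO 'app_rw'@'10.0.0.%';
mysql> CREATE USER 'app_ro'@'10.0.0.%' IDENTIFIED BY 'another-strong-password';
mysql> GRANT SELECT ON appdb.* TO 'app_ro'@'10.0.0.%';

No ‘%’ hostname, no ALL on *.*, and the read-only user cannot modify data.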

There is a lot more to be discussed about physical Operating System security and the MySQL permission/privilege model. One product I know of that helps is SecuRich – The MySQL Security Package featuring roles, password history and many other useful features.

References

A recent post by Lance Miller quoted the following.


I cant tell you how many times in the past 18 months that I’ve found real enterprises running vulnerable databases with default passwords, weak passwords and no real permissions management. It’s bad enough that the stats right now are this (so I guess I can tell you):
– 9 out of 10 organizations have a Microsoft SQL Database with a blank “sa” password (or an sa password of “sa”, “sql” or “password”)
– 9 out of 10 organizations have a Postgres Database with a default password
– 9 out of 10 organizations have a Sybase Database with a default password

The article didn’t include MySQL; however, some organizations never change the default (empty) password, though probably not 9 out of 10 in my experience.

MySQL Monitoring – What's really needed

The implementation of MySQL monitoring is critical for any organization that uses a database and wants to avoid the otherwise inevitable disaster. There are 3 important components that each serve a key purpose in “MySQL Monitoring” in general:

  • Monitoring – Historical and graphical information
  • Alerting – Tell me when something is wrong
  • Dashboard – The State of NOW

Monitoring

There is no one option for Monitoring that is significantly better than another. A short list of what’s on offer can be found at http://monitoring-mysql.com/monitoring-products. What’s important is that you have monitoring in place so you can historically review situations and compare across your servers, enabling better identification of physical or database bottlenecks. My recommendations for products are Cacti, which is packaged with most popular Linux distributions and so can be installed via a single apt-get/yum/rpm command, and the MySQL Cacti Templates. This is not the best product solution or combination; it is in my opinion the most common, and it covers all the essential bases.
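
For example, installing the base Cacti package is a single command on the common distributions (package names can vary slightly by distribution and repository):

$ sudo apt-get install cacti        # Debian / Ubuntu
$ sudo yum install cacti            # RHEL / CentOS, with the EPEL repository enabled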

It’s best to designate a different web server (i.e. one that is publicly accessible) as the monitoring server rather than installing the web interface on any single DB server.

Alerting

Alerting is key to notifying you of a problem without somebody needing to watch a screen and see it happening in your monitoring. Identifying high CPU load, a disk nearing capacity, database locking etc. often helps contain a problem before it becomes some level of disaster. Almost all companies use Nagios or a derivative such as the main fork Icinga, or products that include Nagios like Opsview or GroundWork.

Dashboard

While both Monitoring and Alerting are necessary, they both lack a key component necessary for successful administration: timing. Both of the earlier options sample at coarse intervals, e.g. every 1 minute or more commonly every 5 minutes (by default), but problems can happen quickly. This is why each organization needs a Dashboard. I don’t know of any products here, unless you try to adapt a monitoring product, but what’s needed is a very lightweight and very business-centric single page of Green/Yellow/Red statuses for your environment, including databases, web servers, response time, traffic etc. This is for the state of NOW. A Dashboard should sample every 5 to 10 seconds. I have seen larger and more successful companies with various home-grown implementations. I developed a product for one company that included the following on a single page. You can see the screen output in a presentation at http://www.slideshare.net/ronaldbradford/10x-performance-improvements. This included:

  • 5 DB servers, monitoring load average, ping time, database connections (active, free, locked), replication availability and lag
  • 5 web servers monitoring load average, ping time, apache connections
  • Application metrics monitoring 3 different page load times, and page size

It is often important to be able to identify a key problem and drill down quickly, rather than starting from the usual “the website is slow” question and repeating the same investigative tasks. You need to automate, and be more proactive in your response, especially to load and locking issues.
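
As a minimal sketch of the sampling idea only, assuming local shell and MySQL access with placeholder credentials; a real dashboard would aggregate several servers and render Green/Yellow/Red statuses rather than plain text:

#!/bin/sh
# Sample the state of NOW every 5 seconds: load average and current DB connections.
while true; do
  LOAD=$(uptime | awk -F'load average' '{print $2}' | cut -d: -f2 | cut -d, -f1)
  CONN=$(mysql -uroot -N -e "SHOW GLOBAL STATUS LIKE 'Threads_connected'" | awk '{print $2}')
  echo "$(date '+%H:%M:%S') load=$LOAD connections=$CONN"
  sleep 5
done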

Advanced Monitoring

Information is only half the requirement; it is what you do with this information that determines whether you are proactive rather than reactive. If I were the DBA of a company I’d do even more than these initial 3 steps, which are a necessary base. I would also monitor, for example:

  • Database size and growth. This is important to be preemptive about your capacity. Example SQL at http://ronaldbradford.com/mysql-dba/#allschemas
  • Error Log changes
  • Backup timing. This is important as your DB grows as it affects recovery.
  • Recovery timing
  • Gather raw MySQL status information, because monitoring tools only capture what you ask them to, not everything. While you may not analyze it all now, you may want to look back in time later. Example scripts at http://ronaldbradford.com/mysql-dba/#log-stats, and see the sketch after this list.
  • Hourly/Daily text reports. Producing an easily readable SHOW GLOBAL STATUS report, for example with statpack, will tell you network throughput in the DB, transaction throughput and key indicators of locking, disk access etc. While you may have a graphical interface, it’s a lot easier to automate and grep text reports.
  • Proactive restrictions. The Twitter fail whale is a great example: as the system moves closer to known limits, but before reaching them, they start limiting load. This includes, for example, disabling less critical but resource-intensive functionality such as people search. They also start rejecting connections so they do not reach a crash state. This could mean proactively changing timeout values so the DB fails queries quickly and the web servers respond accordingly with a try-again approach.
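
A minimal sketch of the raw status gathering mentioned above, assuming a local server, suitable credentials and a writable /var/log/mysql directory:

# crontab entry: append a timestamped SHOW GLOBAL STATUS snapshot every 5 minutes
*/5 * * * * ( date; mysql -uroot -N -e 'SHOW GLOBAL STATUS' ) >> /var/log/mysql/global_status.log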

While these are important, if you have only limited resources too much information can be just as much of a burden: people start ignoring the information and miss what’s important.

Finalized speakers list for Kaleidoscope conference

We have secured approval for our final two speakers and now have a full schedule for the 4-day MySQL track at the ODTUG Kaleidoscope conference. The conference is in Washington DC from Monday June 28th to Thursday July 1st. Welcome to Josh Sled and Craig Sylvester, who will be joining our existing list of speakers.

This conference will include 19 sessions of dedicated MySQL content from Monday through Thursday by well-qualified MySQL community members, as well as a forum discussion and reception on Monday night. You don’t need to be an Oracle developer to benefit from this conference. We will be offering a discount code for MySQL attendees in the coming days.

If you are in the DC area, the Monday night forum (known as the sundown sessions) as well as the reception are FREE for the MySQL community. This was a great gesture by the Oracle Developer Tools Users Group to openly invite the MySQL community to meet and interact. We ask that you register your name and email so we can confirm numbers.

Speakers List

  • Philip Antoniades, Oracle/MySQL
  • Ronald Bradford, 42SQL
  • Sheeri K. Cabral, The Pythian Group
  • Laine Campbell, PalominoDB
  • Patrick Galbraith, Northscale
  • Sarah Novotny, Blue Gecko
  • Padraig O’Sullivan, Akiba Technologies Inc.
  • Jay Pipes, Rackspace Cloud
  • Dossy Shiobara, Panoptic.com
  • Josh Sled, Oracle/MySQL
  • Craig Sylvester, Oracle/MySQL
  • Matt Yonkovit, Percona

Why is my database slow?

Not part of my Don’t Assume series, but when a client states “Why is my database slow?”, you need to determine whether the database is indeed slow.

Some simple tools come to the rescue here; one is Firebug. If a web page takes 5 seconds to load, but the .htm file takes 400ms and the 100+ assets are downloaded from one base URL, is the database actually slow? Tuning the database will only improve the 400ms portion of the 5,000ms download.

There are some very simple tips here. MySQL is my domain of expertise and I will not profess to improve the entire stack; however, perception is everything to a user and you can often do a lot. Some simple points include:

  • Know about blocking assets in your <head> element, e.g. .js files.
  • Streamline .js, .css and images to what’s needed, e.g. don’t download a 100k image only to resize it to a thumbnail via style attributes.
  • Sprites. Like many efficient but simple SQL statements, network overhead is your greatest expense.
  • Splitting images to a different domain.
  • Splitting images across multiple domains (e.g. 3, needing only CNAME entries). Hint: learn about the protocol.
  • Cookieless domains for static assets
  • Lighter web container for static assets (e.g. nginx, lighttpd)
  • Know about caching, expires and etags (see the sketch after this list)
  • Stripping out http://www.domain.com from all your internal links (that one alone saved 12% of the HTML page size for a client). You may ask whether that is really a big deal; well, on a high-volume site, the sooner you can release the socket on your webserver, the sooner you can start serving a different request.
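
As a sketch of the lighter-container and caching points, assuming nginx serving a separate cookieless assets domain (the names and paths are illustrative):

server {
    listen      80;
    server_name static.example.com;           # cookieless domain for static assets
    root        /var/www/static;

    location ~* \.(css|js|png|jpg|gif|ico)$ {
        expires    30d;                        # far-future Expires / Cache-Control headers
        add_header Cache-Control public;
    }
}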

Like tuning a database, some things work better than others, some require more testing than others, and consultants never tell you all the tricks.

References

As with everything in tuning, do your research and also determine what works in your environment and what doesn’t. Two excellent resources to start with are Steve Souders and Best Practices for Speeding Up Your Web Site by Yahoo.

More on understanding sort_buffer_size

There have been a few posts by Sheeri and Baron today on the MySQL sort_buffer_size variable. I wanted to add some more information about this buffer: what is impacted when it is changed, and what to do about it.

The first thing you need to know is that sort_buffer_size is a per-session buffer. That is, this memory is assigned per connection/thread. I’ve seen clients set this assuming it’s a global buffer; see Don’t Assume – Per Session Buffers.

Second, at the OS level and independent of MySQL, there is an allocation threshold of 256K. From Monty Taylor: “if buffer is set to over 256K, it uses mmap() instead of malloc() for memory allocation. Actually – this is a libc malloc thing and is tunable, but defaults to 256k. From the manual:”. He goes on in a further post to show that the impact of a buffer over 256K is 37x slower. This applies to all per-session buffers, not just the sort buffer. Now, I have recently heard of this limit being 512K. I wasn’t able to pin down the specific speaker to determine whether this was a newer library, kernel or OS.

With MySQL instrumentation we are lucky for the sort_buffer_size: there is the Sort_merge_passes status variable. While it’s not perfect, it does indicate whether the size of the buffer is insufficient; however, even with a sort_buffer_size of say 256K, seeing Sort_merge_passes increase slowly does not mean you have to increase the buffer.
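
A sketch of checking the counter and experimenting for a single session only; the query is a placeholder and the value is the 256K I recommend below:

mysql> SHOW GLOBAL STATUS LIKE 'Sort_merge_passes';
mysql> SET SESSION sort_buffer_size = 262144;   -- 256K, affects this connection only
mysql> SELECT ... ORDER BY ...;                 -- the sort-heavy query under review
mysql> SHOW SESSION STATUS LIKE 'Sort_merge_passes';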

So, all this still does not tell you how to tune the buffer. Unfortunately, with MySQL there is no easy answer. You do need to monitor the overall mysqld memory usage, especially if you are using persistent connections. A connection/thread will not release the memory assigned until it is closed, so it’s important to monitor for memory creep of the PGA, knowing what your initial SGA is. Morgan Tocker wrote a patch in Bug #33540 to create a RESET CONNECTION type command.

You do need to look at memory as a bottleneck. You need to learn how MySQL uses memory, not just the sort_buffer_size. Many years ago I actually started to write global/session status variables to indicate when buffers were used and by how much, starting with the sort_buffer_size, which was buried in some very old filesort code. When I sought the input of an expert C coder on this, they wondered how the code, especially one loop handler, even worked.

Nobody knows what the optimal setting is, and that’s the problem. In certain areas, especially memory usage, the MySQL instrumentation is simply non-existent, and I’d like to see this fixed.

In conclusion, if I ever see a sort_buffer_size above 256K, e.g. 1M or 2M, I always reset it to 256K. My reasoning is simple: until you have evidence that increasing the buffer improves performance in your specific environment, it’s better to use a smaller value. There are bigger wins, like avoiding sorting, better design, or better still, simplifying or eliminating SQL.

References

sort_buffer_size and Knowing Why
How to tune MySQL’s sort_buffer_size
Read Buffer performance hit
more on malloc() speed

Free MySQL Event in Washington DC

As the program chair for the recently announced MySQL Track at the ODTUG Kaleidoscope conference in Washington DC, I can report that in addition to the dedicated 4-day track we are also looking into an associated free community event for MySQL locals.

Please let us know your name and email via the form at http://ronaldbradford.com/ODTUG/free-event/ so we can provide more details in the coming week as we try to finalize logistics.

Registration will be necessary for attendance; however, for now we just want to know who is local so we can provide more details soon!

Updated. Full details of the free Monday night sundown sessions and reception can be found at MySQL track with free event at Kaleidoscope 2010

The MySQL community impacting the Oracle community

I’m happy to announce that the MySQL community has been given the opportunity to speak at the upcoming Oracle Developer Tools User Group (ODTUG) Kaleidoscope conference in Washington DC. We will be releasing more details this week of the MySQL presentations and topics and we are finalizing details of possible options to include the local MySQL community during the event.

The various independent Oracle User Groups in North America that embody “by the community and for the community” have been very positive about including the MySQL community. Together with the Sun/MySQL (now Oracle) community team of Giuseppe Maxia, Lenz Grimmer and Kaj Arnö, and Oracle ACE Directors Sheeri K Cabral and myself, we have been happy with the openness and willingness to include us in the larger Oracle ecosystem.

We’ll announce the schedule when we finalize it, but we have had a great response from an impressive list of speakers.
