Simple lessons in improving scalability

It can be surprisingly easy to improve the scalability of a MySQL server by following a few simple rules. Here is one of them.

“The most efficient way to improve an SQL statement is to eliminate it”

There are numerous ways to eliminate SQL statements. Before I give a classic example that I observed again recently with a client, let me explain the basic premise of why this improves scalability.

The MySQL kernel can only physically process a certain number of SQL statements in a given time period (e.g. per second). Regardless of the type of machine you have, there is a physical limit. If you eliminate SQL statements that are unwarranted and unnecessary, you automatically free capacity for more important SQL statements to run. There are numerous other downstream effects, however the simple math is this: to run more SQL, reduce the number of SQL statements you need to run.

Here is the output of mk-query-digest for a small sample of analyzed TCP/IP packets.

# Rank Query ID           Response time Calls R/Call Apdx V/M   Item
# ==== ================== ============= ===== ====== ==== ===== ==========
#    1 0xD631CB919867DB50  0.0436 47.3%    92 0.0005 1.00  0.00 SELECT TTDOD
#    2 0x04FE01C5B31FD305  0.0258 27.9%   329 0.0001 1.00  0.00 ADMIN PING
#    3 0x93321857BCD8E771  0.0229 24.8%    36 0.0006 1.00  0.00 SELECT TTD

There are many problems here, including the Row At a Time (RAT) nature of the SQL and the excessive pings, however those are topics for another time. Let us look at the first statement.

SELECT `Date` FROM TTDOD WHERE ID = 9999;

That seems a simple enough query; however, let’s look at the table.

mysql> select count(*) from TTDOD;
+----------+
| count(*) |
+----------+
|        0 |
+----------+

In this case, the query will NEVER return any rows because the table is currently empty. Sure, this may change in the future, however this is more of an exception-processing situation. The simple act of managing the knowledge that this table rarely has any rows, and building a solution to inform the application of this, can completely eliminate the need for this query to ever be executed.
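One way to do this is for the application to cache the fact that the table is (almost always) empty and skip the lookup entirely. The following is only a minimal sketch in Python; the DB-API style cursor, the refresh interval and the helper name are assumptions for illustration, not part of the original application.

import time

EMPTY_CHECK_INTERVAL = 60          # seconds between re-checks of the table (assumed)
_ttdod_has_rows = False            # cached knowledge: does TTDOD have any rows?
_last_checked = 0.0

def ttdod_date(cursor, record_id):
    """Return the Date for a TTDOD row, or None without querying when the
    table is known to be empty. Tolerates up to EMPTY_CHECK_INTERVAL seconds
    of staleness; alternatively, the code that inserts into TTDOD could
    update the cached flag directly."""
    global _ttdod_has_rows, _last_checked

    now = time.time()
    if now - _last_checked > EMPTY_CHECK_INTERVAL:
        # One cheap existence check per interval replaces a lookup per request.
        cursor.execute("SELECT EXISTS(SELECT 1 FROM TTDOD)")
        _ttdod_has_rows = bool(cursor.fetchone()[0])
        _last_checked = now

    if not _ttdod_has_rows:
        return None                # the query is eliminated for the common case

    cursor.execute("SELECT `Date` FROM TTDOD WHERE ID = %s", (record_id,))
    row = cursor.fetchone()
    return row[0] if row else None

The common case now costs no round trip at all, and the occasional existence check is amortized across every request in the interval.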

FYI, the above sample is from less than 2 seconds of sampling. Removing the first query reduces the number of queries executed in this time slice by 20%. Regardless of whether this is typical load or load during a batch job, the principle stands. We have not even started to look at what we can do with the next query.

Comments

  1. says

    Heh. I remember the Query Analyzer of MySQL Enterprise Monitor being sold with a similar argument. But it’s true and effective, so it is worth repeating.

    Btw, I really look forward to becoming an active user of Maatkit in my soon-to-start new job. Thanks for posting this example of mk-query-digest, I can already see a lot of use for it.

  2. Steve says

    This particular example seems like more of an edge case to me, in that you’d have to do a query to find out the table is empty. Some applications could cache that fact, and then avoid the query later, but much of the web is still using the “stateless” model and so caching is not always possible or practical. It is most likely that tables like this that have no records (or no records for this key) are needed as part of a larger operation. Including them in a left outer join can minimize round trips to the server. We find that using queries that are slightly more complex but minimize trips to the server gives us the best performance.
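    For illustration, a LEFT JOIN along these lines folds the TTDOD lookup into a query the application already runs; the join to TTD on ID, the aliases and the helper are assumptions, not taken from the article:

    COMBINED_QUERY = """
    SELECT t.ID,
           t.`Date` AS ttd_date,
           o.`Date` AS ttdod_date      -- NULL when TTDOD has no matching row
    FROM TTD AS t
    LEFT JOIN TTDOD AS o ON o.ID = t.ID
    WHERE t.ID = %s
    """

    def fetch_with_detail(cursor, record_id):
        # One round trip returns the main row plus the optional detail row.
        cursor.execute(COMBINED_QUERY, (record_id,))
        return cursor.fetchone()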

  3. says

    Awesome article. I think there is a science, or “skill”, to tuning any SQL DBMS – and any system, really – but experience has shown me that the more information is available, the easier the task.

    Ideally folks try to write lean and efficient code 24×7 but that isn’t reality.

    I do a lot of coding in a language where realtime debugging isn’t allowed, and it has taught me to be extremely systematic from the beginning of any project, making sure there are a few things in place to assist with tuning both the application and the SQL stuff.

    1: A logging mechanism, so that every message from your program, from “just letting ya know” messages to “FATAL ERRORS”, points you back EXACTLY to where in your code stuff went awry.

    2: A “queue”, so a series of cascading errors and messages can be collected at once when a FATAL error happens – then you can track up the “stack of messages” to see where things are choking.

    3: A time-in/time-out sub-system that you can enable and disable via compiler directives or conditionals (since it slows things down), which can give you important info like how long each function is taking and where the bottlenecks are (a minimal sketch of this idea follows below).
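    As a minimal sketch of item 3 – in Python, since no language is named here, and with placeholder function names – a timing wrapper that can be switched off might look like:

    import functools
    import logging
    import time

    logging.basicConfig(level=logging.DEBUG)

    TIMING_ENABLED = True   # flip to False (or drive from config) to remove the overhead

    def timed(func):
        """Log time-in/time-out for a function when TIMING_ENABLED is set."""
        if not TIMING_ENABLED:
            return func     # no wrapper at all when disabled
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return func(*args, **kwargs)
            finally:
                logging.debug("%s took %.6f s", func.__name__, time.perf_counter() - start)
        return wrapper

    @timed
    def load_order_detail(cursor, order_id):
        # Placeholder to show where the decorator would be applied.
        cursor.execute("SELECT `Date` FROM TTDOD WHERE ID = %s", (order_id,))
        return cursor.fetchone()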

    And I believe this relates to this article because SQL is only part of the picture; adding these kinds of measures into your applications (or using a dev platform that has this kind of thing built in) helps you identify and bring to light quickly WHEN the SQL server needs its special attention!

    It’s hard enough to get code running error free – clean code, efficient, etc. All the tools in place to assist are of course welcomed – all I’m saying is that if your “dev” platform doesn’t give you enough to really tell where things are breaking down or slowing down, try to implement your own.

    And let there be no downplaying of the importance of a DBA who REALLY REALLY knows indexes, SQL plan execution, and all those other nitty-gritty (sometimes – often? – vendor-specific) tricks to get the most from the database.

    Programmers, however, absolutely need to try to be wise about minimizing SQL calls – but I just wanted to finish by saying that good design tends to run well, and will benefit the most from this kind of tuning!

    Light a fire under that application’s tail!

    Important article.
    –Jason P Sage

  4. Joe Smith says

    The problem is not that people don’t understand that eliminating a query is the best way to reduce load. It is that they do not know how to implement alternative solutions that would allow them to eliminate it. Your post would have been infinitely more beneficial had it gone over some general ways to do that and their pros/cons. Sure, they wouldn’t apply to every application out there, but it would get people thinking and might spark an idea that leads them to a solution for their particular application.

  5. Anonymous coward says

    Idunno …

    SQL/databases are good at doing some things, and code written in higher-level languages is good at doing other things. I don’t think that moving complexity out of the database and into application code, in order to lessen the number of queries being executed, is always a good thing. In particular, I think the example given in the article is not quite on target. If I have to cache the number of rows in the table in application state, that adds quite a lot of complexity to application code, which may turn out to be more expensive than actually performing the query, in terms of memory and processor cycles. That would render the optimization useless, given that there are other strategies to cope with database limitations.

    IMO, the best way to optimize overall application performance/scalability isn’t local optimization – such as eliminating SQL queries at the risk of higher complexity in application code – since local optimizations of this type can prove quite costly.

    Of course, a function call inside the application is several times cheaper than a call to a database server, especially when the database is located on a different host than the one where the application runs. But then again, a local cache inside the application doesn’t add just one level of indirection, it adds several, and it significantly increases both complexity of the application (not necessarily of the part of the application you write, but of the application overall, including all libraries you use) and usually also increases memory and processor requirements.

    IMO, the best thing to do is always write your application in the cleanest, developer-friendliest way right from the start, then, if time and budget permits, and there seems to be a need for it (i.e. load/performance tests say so), do some profiling. Only profiling will tell you exactly which parts of the application need optimization.

    Another way to scale your application, when the database is the bottleneck, other than reducing the number of queries, is to split the database into several schemas, located on different servers. At least in some cases it is more convenient than optimizing the number of queries performed, especially if this would involve significantly higher complexity in application code.
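    As a rough sketch of that idea (the schema names, host names, and the connect() callable are purely illustrative), the application can route each schema to the server that owns it:

    SCHEMA_HOSTS = {
        "orders":    {"host": "db1.example.com", "db": "orders"},
        "reporting": {"host": "db2.example.com", "db": "reporting"},
    }

    def connection_for(schema, connect):
        """Return a connection to the server that owns the given schema.
        `connect` is any DB-API style connect() callable (e.g. MySQLdb.connect)."""
        params = SCHEMA_HOSTS[schema]
        return connect(host=params["host"], db=params["db"])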

    There is an article somewhere on the web, I think written by Martin Fowler, where he describes what optimization without profiling can lead to. Once upon a time, he (provided he wrote the story) and a fellow programmer, working on an application, ran into performance problems. Each of them had several ideas on how to optimize the application, and it was almost by accident that they did some profiling before actually implementing the optimizations. What little profiling they did showed that the performance killer was some specific operations on date/time values, which were performed very many times in very many places. After optimizing just these operations, they got such an increase in performance that none of the optimizations they initially thought of was worth implementing anymore. And they got this without any significant loss of code readability or maintainability, or overall increase in complexity.

  6. Thomas Michaud says

    It sounds good in theory — test whether there is any data in the database before doing a bunch of queries.

    But it suggests something is kinda foobar too. When I write queries, usually (but not always) additional columns are cheap. Rather than write a query to test for possible rows, another query to pull back the primary key, another query to pull back the date, another query to… etc.

    Pull back the row in question, but get all the columns you need. There is a slight performance gain from testing whether any rows exist first, and load testing may reveal whether it’s worthwhile.

    But once you’ve decided to go for a record, get all the columns. Never just get a single column of data and do multiple queries. The advantage is obvious: once you’ve gone for ID=9999, you’ve already pulled back to the app what the date was.
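    For example (column names beyond `Date` are made up for illustration), one statement can return everything the caller needs:

    def fetch_ttdod(cursor, record_id):
        # One statement, every needed column; no follow-up query per column.
        cursor.execute(
            "SELECT `Date`, Status, Quantity FROM TTDOD WHERE ID = %s",
            (record_id,),
        )
        return cursor.fetchone()   # None when the row does not exist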

    Otherwise, I agree with Jason. Verify the indexes, run show-plans, understand what’s being run.

Trackbacks

  1. What are the best methods for optimizing PHP/MySQL code for speed without caching?…

    Don’t run an SQL Statement that is not ultimately needed. There are many, many examples I could provide, however the following statement suffices. “The most efficient way to improve an SQL statement is to eliminate it” Some actually examples can be fo…