Aside from profiling queries in real time, we can also profile queries that are used by daemons and cron jobs and log the results to a file. MySQL has a built-in feature that can log slow queries for us while the database daemon is running. As of MySQL 5.1.21 we can get microsecond timing on queries (previously only one-second increments were supported), so the slow-query log gives us very good measurements.

Examining query execution plans (EXPLAIN)

Before trying to optimize a slow query, we need to understand what makes it slow. For this purpose MySQL has a query examination tool called EXPLAIN. Add the reserved word EXPLAIN at the beginning of your query to get its execution plan. The execution plan literally 'explains' what the database is doing to resolve the query. The MySQL manual has a full reference guide to the different values that appear in the plan, and a full walkthrough of using EXPLAIN to optimize a query is available in a slideshow on SlideShare.
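As a rough sketch of the slow-query log setup described above (the log file path and the half-second threshold below are illustrative choices, not values from the original text), the feature can be switched on at runtime from a MySQL client:

    -- Enable the slow query log while the server is running
    SET GLOBAL slow_query_log = 'ON';
    -- Write entries to a file of our choosing (example path)
    SET GLOBAL slow_query_log_file = '/var/log/mysql/mysql-slow.log';
    -- Log any query slower than half a second; fractional values
    -- (microsecond resolution) are accepted as of MySQL 5.1.21
    SET GLOBAL long_query_time = 0.5;

And to illustrate EXPLAIN, prefixing a SELECT with the keyword returns the execution plan instead of the result set. The users and orders tables here are hypothetical, used only to show the syntax:

    EXPLAIN
    SELECT u.id, u.name, o.total
    FROM users u
    JOIN orders o ON o.user_id = u.id
    WHERE o.status = 'shipped';

The output describes, for each table in the query, which access type and which index (if any) MySQL plans to use.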
1. Looping queries

The most basic performance issues often will not be the fault of the database itself. One of the most common mistakes is to query in a loop without need. Looped SELECT queries can usually be rewritten as a JOIN. Inserting and updating rows in a loop can carry major overhead as well: those queries are generally slower than simple SELECT queries (since indexes often need to be updated), and they affect the performance of other queries because they take table or row locks while the data is written (the details depend on the table engine). I wrote an article almost two years ago on multiple-row operations that covers how to rewrite looped INSERT / UPDATE queries and includes some benchmarks to show how this improves performance.

2. Picking only needed columns

It is common to see a wildcard used to pick all columns ('SELECT * FROM ...'); this, however, is not efficient. Depending on the number of participating columns and their types (especially large types such as the TEXT variants), we could be selecting much more data from the database than we actually need. The query will take longer to return, since it has to transfer more data (from the hard disk if it doesn't hit the cache), and it will take up more memory doing so. Picking only the needed columns is a good general practice and avoids those problems.
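As a minimal sketch of the loop-to-set rewrite described above (the users, orders and order_log tables are hypothetical), the same data that a per-row loop would fetch or write can be handled in single statements:

    -- Instead of running "SELECT ... WHERE user_id = ?" once per user
    -- from application code, fetch everything with one JOIN:
    SELECT u.id, u.name, o.id AS order_id, o.total
    FROM users u
    JOIN orders o ON o.user_id = u.id
    WHERE u.id IN (1, 2, 3);

    -- Instead of one INSERT per iteration, batch the rows into a
    -- single multiple-row INSERT:
    INSERT INTO order_log (order_id, status)
    VALUES (1, 'shipped'), (2, 'shipped'), (3, 'pending');

The column-selection point can be shown on the same hypothetical table:

    -- Wasteful: pulls every column, including any large TEXT fields
    SELECT * FROM users WHERE id = 42;

    -- Better: only the columns the application actually needs
    SELECT id, name, email FROM users WHERE id = 42;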
3. Filtering rows correctly and using indexes

Our main goal is to select the smallest number of rows we need and to do so in the fastest way possible. We want to filter rows using indexes, and in general we want to avoid full table scans unless absolutely necessary (aside from edge cases where a full scan actually improves performance). The MySQL manual has some great information on optimizing the WHERE clause, and I'll dive into a bit more detail.

Filtering conditions include the WHERE, ON (for joins) and HAVING clauses. As much as possible, we want those clauses to hit indexes; unless we are selecting a very large number of rows, an index lookup is much faster than a full table scan. Those clauses should be used along with indexed columns whenever possible.
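As a sketch of index-friendly filtering (again with hypothetical tables and an index name of my own choosing), a composite index lets the WHERE and ON conditions be resolved by index lookup rather than a full table scan, and EXPLAIN can confirm that the index is actually used:

    -- Composite index covering the filtering columns used below
    CREATE INDEX idx_orders_user_status ON orders (user_id, status);

    SELECT o.id, o.total
    FROM orders o
    JOIN users u ON u.id = o.user_id   -- ON clause hits the users primary key
    WHERE o.user_id = 42               -- WHERE hits idx_orders_user_status
      AND o.status = 'shipped';

    -- The 'key' column of the plan should name idx_orders_user_status
    EXPLAIN
    SELECT o.id, o.total
    FROM orders o
    WHERE o.user_id = 42 AND o.status = 'shipped';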