Aside from profiling queries in real time, we can also profile the queries used by daemons and cron jobs and log the results to a file. MySQL has a built-in feature that can log slow queries for us while the database daemon is running. As of MySQL 5.1.21 we can get microsecond timing on queries (previously only one-second resolution was supported), so we can get very good measurements with the slow-query log.

Examining query optimization plans (EXPLAIN)

Before trying to optimize a slow query, we need to understand what makes it slow. For this purpose MySQL has a query examination tool called EXPLAIN. Add the reserved word 'EXPLAIN' at the beginning of your query to get the execution plan for the query. The execution plan literally 'explains' what the database is doing to resolve the query. The MySQL manual has a full reference guide to the different values that appear in the plan, and you can see a full walkthrough of using EXPLAIN to optimize a query in this slideshow on SlideShare.

1. Looping queries
The most basic performance issues often are not the fault of the database itself. One of the most common mistakes is to query in a loop without need. Most looped SELECT queries can be rewritten as a JOIN.

Inserting and updating rows in a loop can have major overhead as well: those queries are generally slower than simple SELECT queries (since indexes often need to be updated), and they affect the performance of other queries because they hold table / row locks while the data is written (this differs depending on the table engine). I wrote an article almost two years ago on multiple row operations that covers how to rewrite looped INSERT / UPDATE queries and includes some benchmarks to show how it improves performance.

2. Picking only needed columns

It is common to see a wildcard used to pick all columns ('SELECT * FROM ...'); this, however, is not efficient. Depending on the number of participating columns and their types (especially large types such as the TEXT variants), we could be selecting much more data from the database than we actually need. The query will take longer to return since it needs to transfer more data (from the hard disk if it doesn't hit the cache), and it will take up more memory doing so. Picking only the needed columns is a good general practice and avoids those problems.

3. Filtering rows correctly and using indexes

Our main goal is to select the smallest number of rows we need, and to do so in the fastest way possible. We want to filter rows using indexes, and in general we want to avoid full table scans unless absolutely needed (aside from edge cases where a scan actually improves performance). The MySQL manual has some great information on optimizing the WHERE clause, and I'll dive into a bit more detail below.
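The looped-SELECT anti-pattern can be sketched with Python's built-in sqlite3 module, standing in for MySQL so the example is self-contained; the `users` and `orders` tables and all names here are invented for illustration:

```python
import sqlite3

# In-memory database with hypothetical `users` and `orders` tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL);
    INSERT INTO users  VALUES (1, 'alice'), (2, 'bob');
    INSERT INTO orders VALUES (1, 1, 9.99), (2, 1, 5.00), (3, 2, 12.50);
""")

# Anti-pattern: one SELECT per user inside a loop (the "N+1 queries" shape).
looped = []
for (user_id, name) in conn.execute("SELECT id, name FROM users"):
    for (total,) in conn.execute(
            "SELECT total FROM orders WHERE user_id = ?", (user_id,)):
        looped.append((name, total))

# Rewrite: a single JOIN returns the same rows in one statement.
joined = list(conn.execute("""
    SELECT u.name, o.total
    FROM users u
    JOIN orders o ON o.user_id = u.id
"""))

assert sorted(looped) == sorted(joined)
```

With N users, the loop issues N+1 queries (each with its own round trip and parse), while the JOIN issues exactly one; the result sets are identical.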
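Looped INSERTs can usually be batched the same way. A minimal sketch, again using sqlite3 as a stand-in and a hypothetical `events` table; in MySQL the equivalent idea is a multi-row INSERT ... VALUES (...), (...) issued inside a single transaction:

```python
import sqlite3

# Hypothetical `events` table; all names are invented for the sketch.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")

rows = [(i, f"event-{i}") for i in range(1000)]

# Anti-pattern: one committed INSERT per row -> one transaction, one index
# update, and one lock cycle per statement:
#
#   for row in rows:
#       conn.execute("INSERT INTO events VALUES (?, ?)", row)
#       conn.commit()

# Rewrite: batch all rows inside a single transaction.  The connection used
# as a context manager commits once at the end (or rolls back on error).
with conn:
    conn.executemany("INSERT INTO events VALUES (?, ?)", rows)

count = conn.execute("SELECT COUNT(*) FROM events").fetchone()[0]
```

The single-transaction batch pays the commit and lock overhead once instead of a thousand times, which is where most of the benchmark improvement mentioned above comes from.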
Filtering conditions include the WHERE, ON (for joins) and HAVING clauses. As much as possible, we want those clauses to hit indexes: unless we are selecting a very large number of rows, an index lookup is much faster than a full table scan. Those clauses should be used along with
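Whether a filtering clause actually hits an index is exactly what the EXPLAIN tool described earlier reveals. As a self-contained sketch, SQLite's EXPLAIN QUERY PLAN plays the role of MySQL's EXPLAIN here; the table, columns, and index name are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT, name TEXT);
    CREATE INDEX idx_users_email ON users (email);
""")

def plan(sql):
    # The last column of each EXPLAIN QUERY PLAN row is a readable detail string.
    return " ".join(row[-1] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

# Filtering on the indexed column: the plan reports a search via the index.
indexed = plan("SELECT id FROM users WHERE email = 'a@example.com'")

# Filtering on an unindexed column: the plan falls back to a full table scan.
scanned = plan("SELECT id FROM users WHERE name = 'alice'")
```

Here `indexed` mentions idx_users_email while `scanned` reports a SCAN of the table; in MySQL the same distinction shows up in EXPLAIN's `type` and `key` columns (e.g. `ref` with a key name versus `ALL` with no key).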