- 8.2.1.1 WHERE Clause Optimization
- 8.2.1.2 Range Optimization
- 8.2.1.3 Index Merge Optimization
- 8.2.1.4 Engine Condition Pushdown Optimization
- 8.2.1.5 Index Condition Pushdown Optimization
- 8.2.1.6 Nested-Loop Join Algorithms
- 8.2.1.7 Nested Join Optimization
- 8.2.1.8 Outer Join Optimization
- 8.2.1.9 Outer Join Simplification
- 8.2.1.10 Multi-Range Read Optimization
- 8.2.1.11 Block Nested-Loop and Batched Key Access Joins
- 8.2.1.12 Condition Filtering
- 8.2.1.13 IS NULL Optimization
- 8.2.1.14 ORDER BY Optimization
- 8.2.1.15 GROUP BY Optimization
- 8.2.1.16 DISTINCT Optimization
- 8.2.1.17 LIMIT Query Optimization
- 8.2.1.18 Function Call Optimization
- 8.2.1.19 Window Function Optimization
- 8.2.1.20 Row Constructor Expression Optimization
- 8.2.1.21 Avoiding Full Table Scans
Queries, in the form of SELECT statements, perform all the lookup operations in the database.
Tuning these statements is a top priority, whether to achieve
sub-second response times for dynamic web pages, or to chop
hours off the time to generate huge overnight reports.
In addition to SELECT statements, the tuning techniques for queries also apply to constructs such as CREATE TABLE ... AS SELECT, INSERT INTO ... SELECT, and WHERE clauses in DELETE statements. Those statements have additional performance considerations because they combine write operations with the read-oriented query operations.
The main considerations for optimizing queries are:
To make a slow SELECT ... WHERE query faster, the first thing to check is whether you can add an index. Set up indexes on columns used in the WHERE clause, to speed up evaluation, filtering, and the final retrieval of results. To avoid wasted disk space, construct a small set of indexes that speed up many related queries used in your application.
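As a minimal sketch of this guideline (the `orders` table and its columns are hypothetical), a composite index covering the columns tested in the WHERE clause lets the optimizer evaluate both conditions from the index rather than examining every row:

```sql
-- Hypothetical table: lookups frequently filter on customer_id and status.
CREATE INDEX idx_orders_customer_status
    ON orders (customer_id, status);

-- Both WHERE conditions can now be resolved through the index.
SELECT order_id, total
FROM orders
WHERE customer_id = 42
  AND status = 'shipped';
```

One composite index like this can serve several related queries (for example, queries filtering on `customer_id` alone), which is how a small set of indexes covers many lookups without wasting disk space.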
Indexes are especially important for queries that reference different tables, using features such as joins and foreign keys. You can use the EXPLAIN statement to determine which indexes are used for a SELECT. See Section 8.3.1, “How MySQL Uses Indexes” and Section 8.8.1, “Optimizing Queries with EXPLAIN”.
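For example (table and column names hypothetical), prefixing a join query with EXPLAIN shows, for each table, which index the optimizer chose (the `key` column) and roughly how many rows it expects to examine (the `rows` column):

```sql
-- Inspect the chosen access plan for a two-table join.
EXPLAIN
SELECT c.name, o.total
FROM customers AS c
JOIN orders AS o ON o.customer_id = c.id
WHERE c.country = 'DE';
```

A `key` value of NULL or a large `rows` estimate on either table is a hint that an index on the join or WHERE columns is missing.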
Isolate and tune any part of the query, such as a function call, that takes excessive time. Depending on how the query is structured, a function could be called once for every row in the result set, or even once for every row in the table, greatly magnifying any inefficiency.
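As a hedged illustration (the table and the stored function `expensive_rate()` are hypothetical): if a function returns the same value for the whole query but is not declared DETERMINISTIC, MySQL may re-evaluate it for every row, so calling it once up front avoids the per-row cost:

```sql
-- Potentially slow: a nondeterministic function may be
-- re-evaluated once per row.
SELECT id, amount * expensive_rate() FROM payments;

-- Faster: evaluate the function a single time, then reuse the result.
SET @rate = expensive_rate();
SELECT id, amount * @rate FROM payments;
```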
Minimize the number of full table scans in your queries, particularly for big tables.
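One common cause of an avoidable full table scan is wrapping an indexed column in a function, which prevents the optimizer from using the index. A sketch, with a hypothetical `orders` table indexed on `created_at`:

```sql
-- Forces a full scan: the index on created_at cannot be used
-- because the column is hidden inside YEAR().
SELECT * FROM orders WHERE YEAR(created_at) = 2023;

-- Equivalent range condition on the bare column can use the index.
SELECT * FROM orders
WHERE created_at >= '2023-01-01'
  AND created_at <  '2024-01-01';
```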
Keep table statistics up to date by using the ANALYZE TABLE statement periodically, so the optimizer has the information needed to construct an efficient execution plan.
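For instance (table name hypothetical), refreshing the statistics and then inspecting the cardinality estimates the optimizer will rely on:

```sql
-- Recompute key distribution statistics for the optimizer.
ANALYZE TABLE orders;

-- View the resulting per-index cardinality estimates.
SHOW INDEX FROM orders;
```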
Learn the tuning techniques, indexing techniques, and configuration parameters that are specific to the storage engine for each table. Both InnoDB and MyISAM have sets of guidelines for enabling and sustaining high performance in queries. For details, see Section 8.5.6, “Optimizing InnoDB Queries” and Section 8.6.1, “Optimizing MyISAM Queries”.
You can optimize single-query transactions for InnoDB tables, using the technique in Section 8.5.3, “Optimizing InnoDB Read-Only Transactions”.
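A minimal sketch of the idea: declaring a transaction read-only up front lets InnoDB skip some of the bookkeeping needed for transactions that might write (table and column names hypothetical):

```sql
-- Group related reads into an explicitly read-only transaction.
START TRANSACTION READ ONLY;
SELECT COUNT(*)  FROM orders WHERE status = 'shipped';
SELECT SUM(total) FROM orders WHERE status = 'shipped';
COMMIT;
```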
Avoid transforming the query in ways that make it hard to understand, especially if the optimizer does some of the same transformations automatically.
If a performance issue is not easily solved by one of the basic guidelines, investigate the internal details of the specific query by reading the EXPLAIN plan and adjusting your indexes, WHERE clauses, join clauses, and so on. (When you reach a certain level of expertise, reading the EXPLAIN plan might be your first step for every query.)
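When the tabular EXPLAIN output is not detailed enough, the JSON format exposes more of the plan's internals, such as attached conditions and cost estimates (the query below uses hypothetical names):

```sql
-- JSON-format plan with per-table details for deeper analysis.
EXPLAIN FORMAT=JSON
SELECT order_id
FROM orders
WHERE status = 'pending'
ORDER BY created_at
LIMIT 10;
```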
Adjust the size and properties of the memory areas that MySQL uses for caching. With efficient use of the MyISAM key cache and the MySQL query cache, repeated queries run faster because the results are retrieved from memory the second and subsequent times.
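As a hedged configuration sketch (the sizes are placeholders to be tuned against the server's RAM and workload; the query cache system variables apply to MySQL 5.7 and earlier, as the query cache was removed in MySQL 8.0):

```sql
-- MyISAM key cache: memory for caching index blocks.
SET GLOBAL key_buffer_size = 268435456;   -- 256MB, placeholder value

-- Query cache (MySQL 5.7 and earlier only).
SET GLOBAL query_cache_size = 67108864;   -- 64MB, placeholder value
```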
Even for a query that runs fast using the cache memory areas, you might still optimize further so that it requires less cache memory, making your application more scalable. Scalability means that your application can handle more simultaneous users, larger requests, and so on without experiencing a big drop in performance.
Deal with locking issues, where the speed of your query might be affected by other sessions accessing the tables at the same time.
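Two standard starting points for diagnosing such contention are listing the current sessions, which shows queries stalled in a lock-wait state, and dumping the InnoDB monitor output, which includes a section on recent lock waits and deadlocks:

```sql
-- Show all sessions, including ones blocked waiting for locks.
SHOW FULL PROCESSLIST;

-- InnoDB monitor output: transactions, lock waits, latest deadlock.
SHOW ENGINE INNODB STATUS;
```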