This section describes how to configure non-persistent optimizer
statistics. Optimizer statistics are not persisted to disk when
innodb_stats_persistent=OFF or when individual tables are created
or altered with STATS_PERSISTENT=0. Instead, statistics are stored
in memory, and are lost when the server is shut down. Statistics
are also updated periodically by certain operations and under
certain conditions.
As of MySQL 5.6.6, optimizer statistics are persisted to disk by
default, enabled by the innodb_stats_persistent configuration
option. For information about persistent optimizer statistics, see
“Configuring Persistent Optimizer Statistics Parameters”.
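As a minimal sketch of the choice described above, non-persistent statistics can be selected globally or per table (the table name t1 is hypothetical):

```sql
-- Revert to non-persistent (in-memory) statistics for all InnoDB tables.
SET GLOBAL innodb_stats_persistent=OFF;

-- Or disable persistent statistics for a single, hypothetical table.
ALTER TABLE t1 STATS_PERSISTENT=0;
```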
Optimizer statistics are updated under the following circumstances:
- ANALYZE TABLE is run.

- SHOW TABLE STATUS or SHOW INDEX is run, or the
  INFORMATION_SCHEMA.TABLES or INFORMATION_SCHEMA.STATISTICS tables
  are queried, while innodb_stats_on_metadata is enabled. The
  default setting for innodb_stats_on_metadata was changed to
  OFF when persistent optimizer statistics were enabled by default
  in MySQL 5.6.6. Enabling innodb_stats_on_metadata may reduce
  access speed for schemas that have a large number of tables or
  indexes, and reduce the stability of execution plans for queries
  that involve InnoDB tables. innodb_stats_on_metadata is
  configured globally using a SET statement:

  SET GLOBAL innodb_stats_on_metadata=ON

- A mysql client is started with the --auto-rehash option enabled,
  which is the default. The auto-rehash option causes all
  InnoDB tables to be opened, and the open table operations cause
  statistics to be recalculated. To improve the startup time of the
  mysql client and to avoid updating statistics, you can turn
  auto-rehash off using the --no-auto-rehash (-A) option. The
  auto-rehash feature enables automatic name completion of
  database, table, and column names for interactive users.

- A table is opened for the first time.

- InnoDB detects that 1/16 of a table's rows have been modified
  since the last time statistics were updated.
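The statistics-update triggers listed above can also be exercised explicitly from a client session; a short sketch (the table name t1 is hypothetical):

```sql
-- Force an immediate statistics update for one table.
ANALYZE TABLE t1;

-- Make metadata statements (SHOW TABLE STATUS, SHOW INDEX, and queries
-- against INFORMATION_SCHEMA.TABLES/STATISTICS) trigger recalculation too.
SET GLOBAL innodb_stats_on_metadata=ON;

-- Inspect the resulting cardinality estimates.
SHOW INDEX FROM t1;
```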
The MySQL query optimizer uses estimated statistics about key
distributions to choose the indexes for an execution plan, based
on the relative selectivity of the index. When
InnoDB updates optimizer statistics, it samples random pages from
each index on a table to estimate the cardinality of the index.
(This technique is known as random dives.)
To give you control over the quality of the statistics estimate
(and thus better information for the query optimizer), you can
change the number of sampled pages using the parameter
innodb_stats_transient_sample_pages.
The default number of sampled pages is 8, which could be
insufficient to produce an accurate estimate, leading to poor
index choices by the query optimizer. This technique is
especially important for large tables and tables used in joins.
Unnecessary full table scans for such tables can be a substantial
performance issue. See “How to Avoid Full Table Scans” for tips on
tuning such queries.

innodb_stats_transient_sample_pages is a global parameter that can
be set at runtime.
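Because the sample size is a dynamic global variable, it can be raised at runtime and a fresh estimate collected immediately; a sketch (the value 64 and the table name t1 are illustrative, not recommendations):

```sql
-- Increase the number of pages sampled per index (the default is 8).
SET GLOBAL innodb_stats_transient_sample_pages=64;

-- Recalculate statistics for a table so the new sample size takes effect.
ANALYZE TABLE t1;
```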
The value of innodb_stats_transient_sample_pages
affects the index sampling for all InnoDB
tables and indexes when innodb_stats_persistent=OFF. Be
aware of the following potentially significant impacts when you
change the index sample size:
- Small values like 1 or 2 can result in inaccurate estimates of
  cardinality.

- Increasing the innodb_stats_transient_sample_pages
  value might require more disk reads. Values much larger
  than 8 (say, 100) can cause a big slowdown in the time it
  takes to open a table or execute SHOW TABLE STATUS.

- The optimizer might choose very different query plans based on
  different estimates of index selectivity.
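One way to observe the last of these impacts is to compare the chosen plan before and after changing the sample size and re-analyzing; a sketch (the tables and query are hypothetical):

```sql
-- Run the same EXPLAIN before and after changing the sample size and
-- re-running ANALYZE TABLE; the key and rows columns may differ.
EXPLAIN SELECT * FROM t1 JOIN t2 ON t1.k = t2.k WHERE t1.c > 100;
```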
Whatever value of innodb_stats_transient_sample_pages
works best for a system, set the option and leave it at that
value. Choose a value that results in reasonably accurate
estimates for all tables in your database without requiring
excessive I/O. Because the statistics are automatically
recalculated at various times other than on execution of
ANALYZE TABLE, it does not make
sense to increase the index sample size, run
ANALYZE TABLE, and then decrease the
sample size again. The more accurate statistics calculated by
ANALYZE TABLE running with a high value of
innodb_stats_transient_sample_pages can be wiped away later.
Smaller tables generally require fewer index samples than larger
tables. If your database has many large tables, consider using a
higher value for innodb_stats_transient_sample_pages
than if you have mostly smaller tables.