Overall application performance, CPU and I/O utilization, and the size of disk files are good indicators of how effective compression is for your application. This section builds on the performance tuning advice from Section 17.9.1.3, “Tuning Compression for InnoDB Tables”, and shows how to find problems that might not turn up during initial testing.
To dig deeper into performance considerations for compressed tables, you can monitor compression performance at runtime using the Information Schema tables described in Example 17.1, “Using the Compression Information Schema Tables”. These tables reflect the internal use of memory and the rates of compression used overall.
The INNODB_CMP table reports information about compression activity for each compressed page size (KEY_BLOCK_SIZE) in use. The information in this table is system-wide: it summarizes the compression statistics across all compressed tables in your database. You can use this data to help decide whether to compress a table, by examining the table when no other compressed tables are being accessed. Querying this table involves relatively low overhead on the server, so you might query it periodically on a production server to check the overall efficiency of the compression feature.
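For example, a quick way to inspect the system-wide counters is to query the table directly. This is only a sketch; the exact set of columns can vary between MySQL versions:

    SELECT * FROM INFORMATION_SCHEMA.INNODB_CMP;

Each row corresponds to one compressed page size in use; the COMPRESS_OPS, COMPRESS_OPS_OK, and COMPRESS_TIME columns are the ones discussed later in this section.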
The INNODB_CMP_PER_INDEX table reports information about compression activity for individual tables and indexes. This information is more targeted and more useful for evaluating compression efficiency and diagnosing performance issues one table or index at a time. (Because each InnoDB table is represented as a clustered index, MySQL does not make a big distinction between tables and indexes in this context.) The INNODB_CMP_PER_INDEX table does involve substantial overhead, so it is more suitable for development servers, where you can compare the effects of different workloads, data, and compression settings in isolation. To guard against imposing this monitoring overhead by accident, you must enable the innodb_cmp_per_index_enabled configuration option before you can query the INNODB_CMP_PER_INDEX table.
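As an illustration of the workflow on a development server, you might enable the option dynamically and then query the per-index counters. The column list shown here follows the documented layout of INNODB_CMP_PER_INDEX, but verify it against your MySQL version:

    SET GLOBAL innodb_cmp_per_index_enabled = ON;

    SELECT database_name, table_name, index_name,
           compress_ops, compress_ops_ok, compress_time,
           uncompress_ops, uncompress_time
    FROM INFORMATION_SCHEMA.INNODB_CMP_PER_INDEX
    ORDER BY compress_time DESC;

Sorting by COMPRESS_TIME surfaces the indexes that spend the most time compressing, which are usually the first candidates to examine.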
The key statistics to consider are the number of, and amount of time spent performing, compression and uncompression operations. Because MySQL splits B-tree nodes when they are too full to contain the compressed data following a modification, compare the number of “successful” compression operations with the number of such operations overall. Based on the information in the INNODB_CMP and INNODB_CMP_PER_INDEX tables, and on overall application performance and hardware resource utilization, you might make changes in your hardware configuration, adjust the size of the buffer pool, choose a different page size, or select a different set of tables to compress.
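One way to compare successful compression operations with the overall count is a ratio query along the following lines. This is a sketch only: the percentage column alias is arbitrary, and NULLIF guards against dividing by zero on an idle server:

    SELECT page_size,
           compress_ops,
           compress_ops_ok,
           ROUND(100 * compress_ops_ok / NULLIF(compress_ops, 0), 2)
             AS compress_ok_pct,
           compress_time,
           uncompress_ops,
           uncompress_time
    FROM INFORMATION_SCHEMA.INNODB_CMP;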
If the amount of CPU time required for compressing and uncompressing is high, changing to faster or multi-core CPUs can help improve performance with the same data, application workload and set of compressed tables. Increasing the size of the buffer pool might also help performance, so that more uncompressed pages can stay in memory, reducing the need to uncompress pages that exist in memory only in compressed form.
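On MySQL versions where the buffer pool can be resized online, increasing it is a single statement; the 4GB figure below is purely illustrative and should be sized to your own workload and available memory:

    -- 4294967296 bytes = 4GB; illustrative value only
    SET GLOBAL innodb_buffer_pool_size = 4294967296;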
A large number of compression operations overall (compared to the number of INSERT, UPDATE, and DELETE operations in your application and the size of the database) could indicate that some of your compressed tables are being updated too heavily for effective compression. If so, choose a larger page size, or be more selective about which tables you compress.
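If a heavily updated table compresses poorly at its current page size, one remedial step is to rebuild it with a larger KEY_BLOCK_SIZE. The table name t1 and the value 8 here are placeholders for illustration only, and the ALTER rebuilds the table, so schedule it for a quiet period:

    ALTER TABLE t1 ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;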
If the number of “successful” compression operations (COMPRESS_OPS_OK) is a high percentage of the total number of compression operations (COMPRESS_OPS), then the system is likely performing well. If the ratio is low, then MySQL is reorganizing, recompressing, and splitting B-tree nodes more often than is desirable. In this case, avoid compressing some tables, or increase KEY_BLOCK_SIZE for some of the compressed tables. You might turn off compression for tables that cause the number of “compression failures” in your application to be more than 1% or 2% of the total. (Such a failure ratio might be acceptable during a temporary operation such as a data load.)