Do not convert MySQL system tables in the
mysql database from MyISAM to
InnoDB tables. This is an unsupported
operation. If you do this, MySQL does not restart until you
restore the old system tables from a backup or regenerate them
by reinitializing the data directory (see
Section 2.10.1, “Initializing the Data Directory”).
A table can contain a maximum of 1017 columns (raised in MySQL 5.6.9 from the earlier limit of 1000). Virtual generated columns are included in this limit.
A table can contain a maximum of 64 secondary indexes.
By default, an index key for a single-column index can be up to 767 bytes. The same length limit applies to any index key prefix. See Section 14.1.14, “CREATE INDEX Syntax”. For example, you might hit this limit with a column prefix index of more than 255 characters on a
VARCHAR column, assuming a UTF-8 character set and the maximum of 3 bytes for each character. When the
innodb_large_prefix configuration option is enabled, this length limit is raised to 3072 bytes for
InnoDB tables that use DYNAMIC or COMPRESSED row format.
Attempting to use an index prefix length that is greater than the allowed maximum value produces an error. To avoid such errors in replication configurations, avoid setting the
innodb_large_prefix option on the master if it cannot also be set on the slaves, and the slaves have unique indexes that could be affected by this limit.
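To make the byte arithmetic concrete, here is a hypothetical illustration of the default 767-byte prefix limit (the table and index names are invented; each utf8 character is counted at its 3-byte maximum):

```sql
-- Hypothetical example of the default 767-byte index key prefix limit.
CREATE TABLE prefix_demo (c VARCHAR(300)) CHARACTER SET utf8 ENGINE=InnoDB;

-- 256 characters * 3 bytes = 768 bytes, one byte over the limit:
CREATE INDEX ix_too_long ON prefix_demo (c(256));
-- ERROR 1071 (42000): Specified key was too long; max key length is 767 bytes

-- 255 characters * 3 bytes = 765 bytes, within the limit:
CREATE INDEX ix_ok ON prefix_demo (c(255));
```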
The InnoDB internal maximum key length is 3500 bytes, but MySQL itself restricts this to 3072 bytes. This limit applies to the length of the combined index key in a multi-column index.
If you reduce the
InnoDB page size to 8KB or 4KB by specifying the
innodb_page_size option when creating the MySQL instance, the maximum length of the index key is lowered proportionally, based on the limit of 3072 bytes for a 16KB page size. That is, the maximum index key length is 1536 bytes when the page size is 8KB, and 768 bytes when the page size is 4KB.
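Because the page size is fixed when the instance is initialized, the only check possible at runtime is informational. A minimal sketch:

```sql
-- innodb_page_size is read-only at runtime; it can only be chosen
-- when the data directory is first initialized.
SHOW VARIABLES LIKE 'innodb_page_size';
-- 16384 (the default) implies a 3072-byte index key limit;
-- 8192 implies 1536 bytes; 4096 implies 768 bytes.
```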
The maximum row length, except for variable-length columns (VARBINARY, VARCHAR, BLOB, and
TEXT), is slightly less than half of a database page for 4KB, 8KB, 16KB, and 32KB page sizes. For example, the maximum row length for the default
innodb_page_size of 16KB is about 8000 bytes. For an
InnoDB page size of 64KB, the maximum row length is about 16000 bytes.
LONGBLOB and LONGTEXT columns must be less than 4GB, and the total row length, including BLOB and
TEXT columns, must be less than 4GB.
If a row is less than half a page long, all of it is stored locally within the page. If it exceeds half a page, variable-length columns are chosen for external off-page storage until the row fits within half a page, as described in Section 15.12.2, “File Space Management”.
Although InnoDB supports row sizes larger than 65,535 bytes internally, MySQL itself imposes a row-size limit of 65,535 bytes for the combined size of all columns:
mysql> CREATE TABLE t (a VARCHAR(8000), b VARCHAR(10000),
    -> c VARCHAR(10000), d VARCHAR(10000), e VARCHAR(10000),
    -> f VARCHAR(10000), g VARCHAR(10000)) ENGINE=InnoDB;
ERROR 1118 (42000): Row size too large. The maximum row size for the
used table type, not counting BLOBs, is 65535. You have to change some
columns to TEXT or BLOBs
On some older operating systems, files must be less than 2GB. This is not a limitation of
InnoDB itself, but if you require a large tablespace, you will need to configure it using several smaller data files rather than one large data file.
The combined size of the
InnoDB log files can be up to 512GB.
The minimum tablespace size is slightly larger than 10MB. The maximum tablespace size is four billion database pages (64TB). This is also the maximum size for a table.
The default database page size in
InnoDB is 16KB. You can lower the page size to 8KB or 4KB by specifying the
innodb_page_size option when creating the MySQL instance.

Note
Prior to MySQL 5.7.6, increasing the page size is not a supported operation. There is no guarantee that
InnoDB will function normally with a page size greater than 16KB. Problems compiling or running
InnoDB may occur. In particular,
ROW_FORMAT=COMPRESSED in the Barracuda file format assumes that the page size is at most 16KB and uses 14-bit pointers.
As of MySQL 5.7.6, 32KB and 64KB page sizes are supported, but
ROW_FORMAT=COMPRESSED is still unsupported for page sizes greater than 16KB. For both 32KB and 64KB page sizes, the maximum record size is 16KB. For
innodb_page_size=32k, the extent size is 2MB. For
innodb_page_size=64k, the extent size is 4MB.
A MySQL instance using a particular
InnoDB page size cannot use data files or log files from an instance that uses a different page size. This limitation could affect restore or downgrade operations using data from MySQL 5.6, which does support page sizes other than 16KB.
ANALYZE TABLE determines index cardinality (as displayed in the
SHOW INDEX output) by doing random dives on each of the index trees and updating index cardinality estimates accordingly. Because these are only estimates, repeated runs of
ANALYZE TABLE could produce different numbers. This makes
ANALYZE TABLE fast on
InnoDB tables but not 100% accurate because it does not take all rows into account.
You can make the statistics collected by
ANALYZE TABLE more precise and more stable by turning on the
innodb_stats_persistent configuration option, as explained in Section 14.8.11.1, “Configuring Persistent Optimizer Statistics Parameters”. When that setting is enabled, it is important to run
ANALYZE TABLE after major changes to indexed column data, because the statistics are not recalculated periodically (such as after a server restart) as they traditionally have been.
You can change the number of random dives by modifying the
innodb_stats_persistent_sample_pages system variable (if the persistent statistics setting is turned on), or the
innodb_stats_transient_sample_pages system variable (if the persistent statistics setting is turned off).
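As a sketch of these settings together (the table name orders is hypothetical):

```sql
-- Enable persistent statistics and sample more pages per index
-- for more stable cardinality estimates (the default sample is 20 pages).
SET GLOBAL innodb_stats_persistent = ON;
SET GLOBAL innodb_stats_persistent_sample_pages = 64;

-- With persistent statistics on, recompute explicitly after
-- major changes to indexed column data:
ANALYZE TABLE orders;
```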
MySQL uses index cardinality estimates only in join optimization. If some join is not optimized in the right way, you can try using
ANALYZE TABLE. In the few cases that
ANALYZE TABLE does not produce values good enough for your particular tables, you can use
FORCE INDEX with your queries to force the use of a particular index, or set the
max_seeks_for_key system variable to ensure that MySQL prefers index lookups over table scans. See Section 6.1.4, “Server System Variables”, and Section B.5.5, “Optimizer-Related Issues”.
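For example, assuming a hypothetical orders table with an index ix_customer:

```sql
-- Force the optimizer to use a particular index when its
-- cardinality estimate leads to a poor join plan:
SELECT o.* FROM orders o FORCE INDEX (ix_customer)
WHERE o.customer_id = 42;

-- Or bias the optimizer toward index lookups over table scans:
SET SESSION max_seeks_for_key = 100;
```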
If statements or transactions are running on a table, and
ANALYZE TABLE is run on the same table followed by a second
ANALYZE TABLE operation, the second
ANALYZE TABLE operation is blocked until the statements or transactions are completed. This behavior occurs because
ANALYZE TABLE marks the currently loaded table definition as obsolete when
ANALYZE TABLE is finished running. New statements or transactions (including a second
ANALYZE TABLE statement) must load the new table definition into the table cache, which cannot occur until currently running statements or transactions are completed and the old table definition is purged. Loading multiple concurrent table definitions is not supported.
SHOW TABLE STATUS does not give accurate statistics on
InnoDB tables, except for the physical size reserved by the table. The row count is only a rough estimate used in SQL optimization.
InnoDB does not keep an internal count of rows in a table because concurrent transactions might “see” different numbers of rows at the same time. To process a
SELECT COUNT(*) FROM t statement,
InnoDB scans an index of the table, which takes some time if the index is not entirely in the buffer pool. To get a fast count, you have to use a counter table you create yourself and let your application update it according to the inserts and deletes it does. If an approximate row count is sufficient,
SHOW TABLE STATUS can be used. See Section 9.5, “Optimizing for InnoDB Tables”.
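A minimal sketch of such an application-maintained counter table (the names are hypothetical; the counter update must happen in the same transaction as the data change to stay consistent):

```sql
CREATE TABLE t_row_count (cnt BIGINT UNSIGNED NOT NULL) ENGINE=InnoDB;
INSERT INTO t_row_count VALUES (0);

START TRANSACTION;
INSERT INTO t (col) VALUES ('x');
UPDATE t_row_count SET cnt = cnt + 1;  -- keep the counter in step
COMMIT;

-- Fast exact count without scanning an index of t:
SELECT cnt FROM t_row_count;
```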
InnoDBalways stores database and table names internally in lowercase. To move databases in a binary format from Unix to Windows or from Windows to Unix, create all databases and tables using lowercase names.
An AUTO_INCREMENT column ai_col must be defined as part of an index such that it is possible to perform the equivalent of an indexed
SELECT MAX(ai_col) lookup on the table to obtain the maximum column value. Typically, this is achieved by making the column the first column of some table index.
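For instance, a table definition along these lines satisfies the requirement (the table is hypothetical):

```sql
-- The AUTO_INCREMENT column is the first (here, only) column of the
-- primary key, so an indexed SELECT MAX(id) lookup is possible:
CREATE TABLE items (
  id   INT UNSIGNED NOT NULL AUTO_INCREMENT,
  name VARCHAR(64),
  PRIMARY KEY (id)
) ENGINE=InnoDB;
```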
InnoDB sets an exclusive lock on the end of the index associated with the
AUTO_INCREMENT column while initializing a previously specified
AUTO_INCREMENT column on a table.
InnoDB uses a special
AUTO-INC table lock mode where the lock is obtained and held to the end of the current SQL statement while accessing the auto-increment counter. Other clients cannot insert into the table while the
AUTO-INC table lock is held. The same behavior occurs for “bulk inserts” when innodb_autoinc_lock_mode is set to 0 or 1.
AUTO-INC locks are not used with
innodb_autoinc_lock_mode=2. For more information, see Section 15.8.6, “AUTO_INCREMENT Handling in InnoDB”.
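To check which locking behavior an instance uses, you can inspect the variable:

```sql
-- 0 ("traditional") and 1 ("consecutive") take the AUTO-INC table
-- lock for bulk inserts; 2 ("interleaved") never takes it.
SHOW VARIABLES LIKE 'innodb_autoinc_lock_mode';
```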
When you restart the MySQL server,
InnoDB may reuse an old value that was generated for an
AUTO_INCREMENT column but never stored (that is, a value that was generated during an old transaction that was rolled back).
When an AUTO_INCREMENT integer column runs out of values, a subsequent
INSERT operation returns a duplicate-key error. This is general MySQL behavior.

DELETE FROM tbl_name does not regenerate the table but instead deletes all rows, one by one.
Cascaded foreign key actions do not activate triggers.
You cannot create a table with a column name that matches the name of an internal InnoDB column (including
DB_ROW_ID, DB_TRX_ID, DB_ROLL_PTR, and DB_MIX_ID). The server reports error 1005 and refers to error −1 in the error message. This restriction applies only to use of the names in uppercase.
LOCK TABLES acquires two locks on each table if
innodb_table_locks=1 (the default). In addition to a table lock on the MySQL layer, it also acquires an
InnoDB table lock. Versions of MySQL before 4.1.2 did not acquire
InnoDB table locks; the old behavior can be selected by setting
innodb_table_locks=0. If no
InnoDB table lock is acquired,
LOCK TABLES completes even if some records of the tables are being locked by other transactions.
In MySQL 5.7,
innodb_table_locks=0 has no effect for tables locked explicitly with
LOCK TABLES ... WRITE. It does have an effect for tables locked for read or write by
LOCK TABLES ... WRITE implicitly (for example, through triggers) or by
LOCK TABLES ... READ.
All InnoDB locks held by a transaction are released when the transaction is committed or aborted. Thus, it does not make much sense to invoke
LOCK TABLES on InnoDB tables in
autocommit=1 mode because the acquired
InnoDB table locks would be released immediately.
The limit on data-modifying transactions is now 96 * 1023 concurrent transactions that generate undo records. As of MySQL 5.7.2, 32 of 128 rollback segments are assigned to non-redo logs for transactions that modify temporary tables and related objects. This reduces the maximum number of concurrent data-modifying transactions from 128K to 96K. The 96K limit assumes that transactions do not modify temporary tables. If all data-modifying transactions also modify temporary tables, the limit is 32K concurrent transactions.