Do not convert MySQL system tables in the
mysql database from MyISAM to
InnoDB tables. This is an unsupported
operation. If you do this, MySQL does not restart until you
restore the old system tables from a backup or re-generate them
with the mysql_install_db program.
A table can contain a maximum of 1000 columns.
A table can contain a maximum of 64 secondary indexes.
By default, an index key for a single-column index can be up
to 767 bytes. The same length limit applies to any index key
prefix. See Section 13.1.13, “CREATE INDEX Syntax”. For example, you
might hit this limit with a
column prefix index
of more than 255 characters on a
TEXT or VARCHAR column, assuming a UTF-8 character
set and the maximum of 3 bytes for each character. When the
innodb_large_prefix configuration option is enabled, this length
limit is raised to 3072 bytes, for
InnoDB tables that use the
DYNAMIC or COMPRESSED row format.
If you specify an index prefix length that is greater than the allowed maximum value, the length is silently reduced to the maximum length. In MySQL 5.6 and later, specifying an index prefix length greater than the maximum length produces an error.
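As a hypothetical illustration of the default 767-byte limit (table
and column names here are invented), a prefix of 255 UTF-8
characters occupies at most 255 × 3 = 765 bytes and fits, while a
prefix of 256 characters can require 768 bytes and exceeds the
limit, so it is silently reduced or rejected depending on the
server version:

mysql> CREATE TABLE t_ok (c1 VARCHAR(500) CHARACTER SET utf8,
    ->   INDEX i1 (c1(255))) ENGINE=InnoDB;  -- 765 bytes: within the limit
mysql> CREATE TABLE t_long (c1 VARCHAR(500) CHARACTER SET utf8,
    ->   INDEX i1 (c1(256))) ENGINE=InnoDB;  -- 768 bytes: over the limit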
The InnoDB internal maximum key length is
3500 bytes, but MySQL itself restricts this to 3072 bytes.
This limit applies to the length of the combined index key in
a multi-column index.
The maximum row length, except for variable-length columns
(VARBINARY, VARCHAR, BLOB and
TEXT), is slightly less than
half of a database page. That is, the maximum row length is
about 8000 bytes. LONGBLOB and LONGTEXT
columns must be less than 4GB, and the total row length,
including BLOB and TEXT columns, must be less than 4GB.
If a row is less than half a page long, all of it is stored locally within the page. If it exceeds half a page, variable-length columns are chosen for external off-page storage until the row fits within half a page, as described in Section 14.12.2, “File Space Management”.
The row size for BLOB and TEXT columns
that are chosen for external off-page storage should not
exceed 10% of the combined redo
log file size. If the row size exceeds 10% of the
combined redo log file size, InnoDB could
overwrite the most recent checkpoint, which may result in lost
data during crash recovery. (Bug #69477)
Although InnoDB supports row sizes larger
than 65,535 bytes internally, MySQL itself imposes a row-size
limit of 65,535 for the combined size of all columns:

mysql> CREATE TABLE t (a VARCHAR(8000), b VARCHAR(10000),
    -> c VARCHAR(10000), d VARCHAR(10000), e VARCHAR(10000),
    -> f VARCHAR(10000), g VARCHAR(10000)) ENGINE=InnoDB;
ERROR 1118 (42000): Row size too large. The maximum row size for the
used table type, not counting BLOBs, is 65535. You have to change
some columns to TEXT or BLOBs
On some older operating systems, files must be less than 2GB.
This is not a limitation of InnoDB itself,
but if you require a large tablespace, you will need to
configure it using several smaller data files rather than one
large data file.
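For example, a tablespace larger than 2GB can be assembled from
several smaller data files using the innodb_data_file_path option;
the file names and sizes in this my.cnf fragment are illustrative
only:

[mysqld]
innodb_data_file_path = ibdata1:2000M;ibdata2:2000M:autoextend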
The combined size of the
InnoDB log files
must be less than 4GB.
The minimum tablespace size is slightly larger than 10MB. The maximum tablespace size is four billion database pages (64TB). This is also the maximum size for a table.
Changing the page size is not a supported operation and
there is no guarantee that
InnoDB will function normally
with a page size other than 16KB. Problems compiling or
running InnoDB may occur. In particular,
ROW_FORMAT=COMPRESSED in the Barracuda
file format assumes that the page size is at most 16KB and
uses 14-bit pointers.
A version of
InnoDB built for
one page size cannot use data files or log files from a
version built for a different page size. This limitation
could affect restore or downgrade operations using data from
MySQL 5.6, which does support page sizes other than 16KB.
InnoDB tables do not support
FULLTEXT indexes.
InnoDB tables support spatial data types,
but not indexes on them.
ANALYZE TABLE determines index
cardinality (as displayed in the
Cardinality column of
SHOW INDEX output) by doing
random dives to each
of the index trees and updating index cardinality estimates
accordingly. Because these are only estimates, repeated runs
of ANALYZE TABLE could produce
different numbers. This makes ANALYZE
TABLE fast on
InnoDB tables but
not 100% accurate because it does not take all rows into
account. You can change the number of random dives by modifying the
innodb_stats_sample_pages system variable.
MySQL uses index cardinality estimates only in join
optimization. If some join is not optimized in the right way,
you can try using ANALYZE
TABLE. In the few cases that
ANALYZE TABLE does not produce
values good enough for your particular tables, you can use
FORCE INDEX with your queries to force the
use of a particular index, or set the
max_seeks_for_key system
variable to ensure that MySQL prefers index lookups over table
scans. See Section 5.1.4, “Server System Variables”, and
Section B.5.6, “Optimizer-Related Issues”.
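A sketch of the workarounds mentioned above, using an invented
table name t and index name i_c1:

mysql> ANALYZE TABLE t;                      -- refresh cardinality estimates
mysql> SELECT * FROM t FORCE INDEX (i_c1) WHERE c1 = 10;
mysql> SET max_seeks_for_key = 100;          -- bias the optimizer toward index lookups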
If statements or transactions are running on a table, and
ANALYZE TABLE is run on the
same table followed by a second
ANALYZE TABLE operation, the second
ANALYZE TABLE operation is
blocked until the statements or transactions are completed.
This behavior occurs because ANALYZE
TABLE marks the currently loaded table definition as
obsolete when ANALYZE TABLE is
finished running. New statements or transactions (including a
second ANALYZE TABLE statement)
must load the new table definition into the table cache, which
cannot occur until currently running statements or
transactions are completed and the old table definition is
purged. Loading multiple concurrent table definitions is not
supported.
InnoDB does not keep an internal count of
rows in a table because concurrent transactions might
“see” different numbers of rows at the same time.
To process a
SELECT COUNT(*) FROM t statement,
InnoDB scans an index of the
table, which takes some time if the index is not entirely in
the buffer pool. If your table does not change often, using
the MySQL query cache is a good solution. To get a fast count,
you have to use a counter table you create yourself and let
your application update it according to the inserts and
deletes it does. If an approximate row count is sufficient,
SHOW TABLE STATUS can be used.
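One way to implement such a counter table (the schema and names
here are invented for illustration) is to update it in the same
transaction as the inserts and deletes on the counted table:

mysql> CREATE TABLE t_counter (cnt BIGINT UNSIGNED NOT NULL) ENGINE=InnoDB;
mysql> INSERT INTO t_counter VALUES (0);
mysql> START TRANSACTION;
mysql> INSERT INTO t VALUES (1);             -- the application's normal insert
mysql> UPDATE t_counter SET cnt = cnt + 1;   -- keep the counter in step
mysql> COMMIT;
mysql> SELECT cnt FROM t_counter;            -- fast count, no index scan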
On Windows, InnoDB always stores database
and table names internally in lowercase. To move databases in
a binary format from Unix to Windows or from Windows to Unix,
create all databases and tables using lowercase names.
An AUTO_INCREMENT column
ai_col must be defined as part of
an index such that it is possible to perform the equivalent of
an indexed SELECT MAX(ai_col) lookup on the
table to obtain the maximum column value. Typically, this is
achieved by making the column the first column of some table
index.
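For example, this definition (the names are illustrative)
satisfies the requirement by making the
AUTO_INCREMENT column the first (and only) column of the
primary key:

mysql> CREATE TABLE t_ai (
    ->   id INT UNSIGNED NOT NULL AUTO_INCREMENT,
    ->   payload VARCHAR(100),
    ->   PRIMARY KEY (id)
    -> ) ENGINE=InnoDB;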
InnoDB sets an exclusive lock on the end of
the index associated with the
AUTO_INCREMENT column while initializing a
previously specified AUTO_INCREMENT column
on a table.
With innodb_autoinc_lock_mode=0
(“traditional” lock mode),
InnoDB uses a special
AUTO-INC table lock mode where the lock is
obtained and held to the end of the current SQL statement
while accessing the auto-increment counter. Other clients
cannot insert into the table while the
AUTO-INC table lock is held. The same
behavior occurs for “bulk inserts” with
innodb_autoinc_lock_mode=1 (“consecutive” lock mode).
AUTO-INC locks are not used with
innodb_autoinc_lock_mode=2 (“interleaved” lock mode).
For more information, see
Section 14.8.5, “AUTO_INCREMENT Handling in InnoDB”.
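The lock mode is set at server startup; for example, in a my.cnf
fragment (the chosen value is only an illustration):

[mysqld]
innodb_autoinc_lock_mode = 2   # "interleaved": no AUTO-INC table locks

Note that interleaved mode allows auto-increment values to be
assigned non-consecutively to concurrent statements, so it is safe
only with row-based binary logging.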
When you restart the MySQL server, InnoDB
may reuse an old value that was generated for an
AUTO_INCREMENT column but never stored
(that is, a value that was generated during an old transaction
that was rolled back).
When an AUTO_INCREMENT integer column runs
out of values, a subsequent INSERT
operation returns a duplicate-key error. This is general MySQL
behavior, similar to how MyISAM works.

DELETE FROM tbl_name does not
regenerate the table but instead deletes all rows, one by one.
Under some conditions,
TRUNCATE TABLE for an InnoDB table is mapped to
DELETE FROM tbl_name. See
Section 13.1.33, “TRUNCATE TABLE Syntax”.
Currently, cascaded foreign key actions do not activate triggers.
You cannot create a table with a column name that matches the
name of an internal InnoDB column (including
DB_ROW_ID, DB_TRX_ID, DB_ROLL_PTR, and
DB_MIX_ID). The server reports error 1005
and refers to error −1 in the error message. This
restriction applies only to use of the names in uppercase.
LOCK TABLES acquires two locks
on each table if innodb_table_locks=1 (the
default). In addition to a table lock on the MySQL layer, it
also acquires an
InnoDB table lock.
Versions of MySQL before 4.1.2 did not acquire
InnoDB table locks; the old behavior can be
selected by setting innodb_table_locks=0.
If no InnoDB table lock is acquired,
LOCK TABLES completes even if
some records of the tables are being locked by other
transactions.
As of MySQL 5.5.3,
innodb_table_locks=0 has no
effect for tables locked explicitly with
LOCK TABLES ...
WRITE. It still has an effect for tables locked for
read or write by
LOCK TABLES ...
WRITE implicitly (for example, through triggers) or by
LOCK TABLES ... READ.
All InnoDB locks held by a transaction are
released when the transaction is committed or aborted. Thus,
it does not make much sense to invoke
LOCK TABLES on
InnoDB tables in
autocommit=1 mode because the acquired
InnoDB table locks would be
released immediately.
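Because of this, the usual pattern is to disable autocommit before
locking InnoDB tables, so that the
InnoDB table locks persist until the transaction ends (the table
names are illustrative):

mysql> SET autocommit = 0;
mysql> LOCK TABLES t1 WRITE, t2 READ;
       ... work with t1 and t2 ...
mysql> COMMIT;
mysql> UNLOCK TABLES;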
The limit of 1023 concurrent data-modifying transactions has been raised in MySQL 5.5 and above. The limit is now 128 * 1023 concurrent transactions that generate undo records. You can remove any workarounds that require changing the proper structure of your transactions, such as committing more frequently.