Do not convert MySQL system tables in the
mysql database from
MyISAM to InnoDB tables. This is an unsupported
operation. If you do this, MySQL does not restart until you
restore the old system tables from a backup or regenerate them
with the mysql_install_db program.
A table can contain a maximum of 1000 columns.
A table can contain a maximum of 64 secondary indexes.
An index key for a single-column index can be up to 767 bytes. The same length limit applies to any index key prefix. See Section 13.1.13, “CREATE INDEX Syntax”.
The InnoDB internal maximum key length is
3500 bytes, but MySQL itself restricts this to 3072 bytes.
This limit applies to the length of the combined index key in
a multi-column index.
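The 767-byte single-column limit can be reached well below 767
characters when a multi-byte character set is used, because the
limit is counted in bytes. A sketch (table, column, and index
names here are illustrative, not from the manual); with utf8 at
up to three bytes per character, a VARCHAR(300) key needs up to
900 bytes:

mysql> CREATE TABLE t1 (c1 VARCHAR(300))
    -> ENGINE=InnoDB CHARACTER SET utf8;
mysql> CREATE INDEX i1 ON t1 (c1);
ERROR 1071 (42000): Specified key was too long; max key length is 767 bytes

Depending on the SQL mode, the server may instead create a
truncated prefix index and issue a warning rather than an error.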
The maximum row length, except for variable-length columns
(VARBINARY, VARCHAR, BLOB, and
TEXT), is slightly less than
half of a database page. That is, the maximum row length is
about 8000 bytes. LONGBLOB and LONGTEXT
columns must be less than 4GB, and the total row length,
including BLOB and TEXT columns, must be less than
4GB.
If a row is less than half a page long, all of it is stored locally within the page. If it exceeds half a page, variable-length columns are chosen for external off-page storage until the row fits within half a page, as described in the “File Space Management” section.
The row size for BLOB and TEXT columns
that are chosen for external off-page storage should not
exceed 10% of the combined redo
log file size. If the row size exceeds 10% of the
combined redo log file size, InnoDB could
overwrite the most recent checkpoint, which may result in lost
data during crash recovery. (Bug #69477)
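The combined redo log file size here is innodb_log_file_size
multiplied by innodb_log_files_in_group. A configuration sketch
(the values are illustrative, not recommendations):

[mysqld]
innodb_log_file_size=256M
innodb_log_files_in_group=2
# combined redo log size = 2 x 256MB = 512MB,
# so externally stored column data per row should stay under ~51MB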
Although InnoDB supports row sizes larger
than 65,535 bytes internally, MySQL itself imposes a row-size
limit of 65,535 for the combined size of all columns:

mysql> CREATE TABLE t (a VARCHAR(8000), b VARCHAR(10000),
    -> c VARCHAR(10000), d VARCHAR(10000), e VARCHAR(10000),
    -> f VARCHAR(10000), g VARCHAR(10000)) ENGINE=InnoDB;
ERROR 1118 (42000): Row size too large. The maximum row size for the used
table type, not counting BLOBs, is 65535. You have to change some columns
to TEXT or BLOBs
On some older operating systems, files must be less than 2GB.
This is not a limitation of InnoDB itself,
but if you require a large tablespace, you will need to
configure it using several smaller data files rather than one
large data file.
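On such systems, a large system tablespace can be built from
several data files, each under the 2GB limit, using the
innodb_data_file_path option. A sketch (the file names and
sizes are illustrative):

[mysqld]
innodb_data_file_path=ibdata1:1900M;ibdata2:1900M:autoextend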
The combined size of the
InnoDB log files
must be less than 4GB.
The minimum tablespace size is 10MB. The maximum tablespace size is four billion database pages (64TB). This is also the maximum size for a table.
Changing the page size is not a supported operation and
there is no guarantee that
InnoDB will function normally
with a page size other than 16KB. Problems compiling or
running InnoDB may occur. In particular,
ROW_FORMAT=COMPRESSED in the
InnoDB Plugin assumes that the page size
is at most 16KB and uses 14-bit pointers.
A version of
InnoDB built for
one page size cannot use data files or log files from a
version built for a different page size. This limitation
could affect restore or downgrade operations using data from
MySQL 5.6, which does support page sizes other than 16KB.
InnoDB tables do not support FULLTEXT indexes.
InnoDB tables support spatial data types,
but not indexes on them.
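A minimal sketch of this restriction (table and index names are
illustrative): the GEOMETRY column itself is accepted, but a
SPATIAL index on it is rejected with an error similar to the
following:

mysql> CREATE TABLE geo (g GEOMETRY NOT NULL) ENGINE=InnoDB;
Query OK, 0 rows affected

mysql> CREATE SPATIAL INDEX sp ON geo (g);
ERROR 1464 (HY000): The used table type doesn't support SPATIAL indexes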
ANALYZE TABLE determines index
cardinality (as displayed in the
Cardinality column of
SHOW INDEX output) by doing
eight random dives to each of the index trees and updating
index cardinality estimates accordingly. Because these are
only estimates, repeated runs of ANALYZE
TABLE may produce different numbers. This makes
ANALYZE TABLE fast on
InnoDB tables but not 100% accurate because
it does not take all rows into account.
In the InnoDB Plugin, you can change the
number of random dives by modifying the
innodb_stats_sample_pages system variable.
MySQL uses index cardinality estimates only in join
optimization. If some join is not optimized in the right way,
you can try using ANALYZE
TABLE. In the few cases that
ANALYZE TABLE does not produce
values good enough for your particular tables, you can use
FORCE INDEX with your queries to force the
use of a particular index, or set the
max_seeks_for_key system
variable to ensure that MySQL prefers index lookups over table
scans. See Section 5.1.4, “Server System Variables”, and
Section B.5.6, “Optimizer-Related Issues”.
If statements or transactions are running on a table and
ANALYZE TABLE is run on the
same table followed by a second ANALYZE
TABLE operation, the second
ANALYZE TABLE operation is
blocked until the statements or transactions are completed.
This behavior occurs because ANALYZE
TABLE marks the currently loaded table definition as
obsolete when ANALYZE TABLE is
finished running. New statements or transactions (including a
second ANALYZE TABLE statement)
must load the new table definition into the table cache, which
cannot occur until currently running statements or
transactions are completed and the old table definition is
purged. Loading multiple concurrent table definitions is not
supported.
InnoDB does not keep an internal count of
rows in a table because concurrent transactions might
“see” different numbers of rows at the same time.
To process a
SELECT COUNT(*) FROM t statement,
InnoDB scans an index of the
table, which takes some time if the index is not entirely in
the buffer pool. If your table does not change often, using
the MySQL query cache is a good solution. To get a fast count,
you have to use a counter table you create yourself and let
your application update it according to the inserts and
deletes it does. If an approximate row count is sufficient,
SHOW TABLE STATUS can be used.
See Section 14.6.8, “InnoDB Performance Tuning Tips”.
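The counter table described above can be as simple as the
following sketch (the table and column names are illustrative).
The application wraps each insert or delete on the data table
together with the counter update in one transaction:

mysql> CREATE TABLE t_rowcount (cnt INT UNSIGNED NOT NULL) ENGINE=InnoDB;
mysql> INSERT INTO t_rowcount VALUES (0);

mysql> START TRANSACTION;
mysql> INSERT INTO t VALUES (...);
mysql> UPDATE t_rowcount SET cnt = cnt + 1;
mysql> COMMIT;

Because both changes commit atomically, SELECT cnt FROM
t_rowcount always reflects the committed row count without
scanning the data table.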
On Windows, InnoDB always stores database
and table names internally in lowercase. To move databases in
a binary format from Unix to Windows or from Windows to Unix,
create all databases and tables using lowercase names.
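To enforce this on all servers involved, lowercase naming can
be forced through the lower_case_table_names option, for
example:

[mysqld]
lower_case_table_names=1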
An AUTO_INCREMENT column ai_col must be defined as part of
an index such that it is possible to perform the equivalent of
an indexed SELECT MAX(ai_col) lookup on the
table to obtain the maximum column value. Typically, this is
achieved by making the column the first column of some table
index.
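A sketch of the typical layout (the table and column names are
illustrative): the AUTO_INCREMENT column is the first, and
here only, column of the primary key, so InnoDB can find the
current maximum with the equivalent of SELECT MAX(part_id):

mysql> CREATE TABLE parts (
    ->   part_id INT NOT NULL AUTO_INCREMENT,
    ->   name VARCHAR(40),
    ->   PRIMARY KEY (part_id)
    -> ) ENGINE=InnoDB;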
InnoDB sets an exclusive lock on the end of
the index associated with the
AUTO_INCREMENT column while initializing a
previously specified AUTO_INCREMENT column
on a table.
InnoDB uses a special
AUTO-INC table lock mode where the lock is
obtained and held to the end of the current SQL statement
while accessing the auto-increment counter. Other clients
cannot insert into the table while the
AUTO-INC table lock is held. The same
behavior occurs for “bulk inserts” done with statements such
as INSERT ... SELECT. With
innodb_autoinc_lock_mode set to a non-default value,
AUTO-INC locks are not used
for some types of insert statements. For more information, see
“AUTO_INCREMENT Handling in InnoDB”.
When you restart the MySQL server, InnoDB
may reuse an old value that was generated for an
AUTO_INCREMENT column but never stored
(that is, a value that was generated during an old transaction
that was rolled back).
When an AUTO_INCREMENT integer column runs
out of values, a subsequent INSERT
operation returns a duplicate-key error. This is general MySQL
behavior, similar to how MyISAM works.
DELETE FROM tbl_name does not
regenerate the table but instead deletes all rows, one by one.
Under some conditions, TRUNCATE
tbl_name for an
InnoDB table is mapped to
DELETE FROM tbl_name. See
Section 13.1.34, “TRUNCATE TABLE Syntax”.
The LOAD TABLE FROM MASTER
statement for setting up replication slave servers does not
work for InnoDB tables. A workaround is to
alter the table to
MyISAM on the master,
then do the load, and after that alter the master table back
to InnoDB. Do not do this if the tables use
InnoDB-specific features such as foreign
keys.
Currently, cascaded foreign key actions do not activate triggers.
You cannot create a table with a column name that matches the
name of an internal InnoDB column (including
DB_ROW_ID, DB_TRX_ID,
DB_ROLL_PTR, and
DB_MIX_ID). In versions of MySQL before
5.1.10 this would cause a crash; since 5.1.10 the server
reports error 1005 and refers to error −1 in the error
message. This restriction applies only to use of the names in
uppercase.
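A sketch of the failure (the table name is illustrative); the
server rejects the definition with an error similar to:

mysql> CREATE TABLE t2 (DB_ROW_ID INT) ENGINE=InnoDB;
ERROR 1005 (HY000): Can't create table 't2' (errno: -1)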
LOCK TABLES acquires two locks
on each table if innodb_table_locks=1 (the
default). In addition to a table lock on the MySQL layer, it
also acquires an
InnoDB table lock.
Versions of MySQL before 4.1.2 did not acquire
InnoDB table locks; the old behavior can be
selected by setting innodb_table_locks=0.
If no InnoDB table lock is acquired,
LOCK TABLES completes even if
some records of the tables are being locked by other
transactions.
All InnoDB locks held by a transaction are
released when the transaction is committed or aborted. Thus,
it does not make much sense to invoke
LOCK TABLES on
InnoDB tables in
autocommit=1 mode because the
acquired InnoDB table locks would be
released immediately.
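The documented way to combine LOCK TABLES with transactional
tables such as InnoDB is to disable autocommit first, so the
table locks survive until you explicitly release them:

mysql> SET autocommit=0;
mysql> LOCK TABLES t1 WRITE, t2 READ;
... do something with tables t1 and t2 here ...
mysql> COMMIT;
mysql> UNLOCK TABLES;

Committing before UNLOCK TABLES matters because UNLOCK TABLES
implicitly commits any active transaction only at the MySQL
layer; with autocommit disabled, the explicit COMMIT makes the
intent unambiguous.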
InnoDB has a limit of 1023 concurrent
transactions that have created undo records by modifying data.
Workarounds include keeping transactions as small and fast as
possible, delaying changes until near the end of the
transaction, and using stored routines to reduce client/server
latency delays. Applications should commit transactions before
doing time-consuming client-side operations.