Do not convert MySQL system tables in the
mysql database from MyISAM to
InnoDB tables. This is an unsupported
operation. If you do this, MySQL does not restart until you
restore the old system tables from a backup or regenerate them
with the mysql_install_db script.
A table can contain a maximum of 1000 columns.
The InnoDB internal maximum key length is
3500 bytes, but MySQL itself restricts this to 3072 bytes.
(1024 bytes for non-64-bit builds before MySQL 5.0.17, and for
all builds before 5.0.15.)
An index key for a single-column index can be up to 767 bytes. The same length limit applies to any index key prefix. See Section 13.1.8, “CREATE INDEX Syntax”.
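For example, a prefix index whose byte length exceeds the limit is rejected, while a shorter prefix works (table and index names here are illustrative, assuming a single-byte character set):

```sql
CREATE TABLE t (c VARCHAR(1000)) ENGINE=InnoDB;

-- Fails: an 800-byte index key prefix exceeds the 767-byte limit.
CREATE INDEX idx_c_long ON t (c(800));

-- Works: a 255-byte prefix fits within the limit.
CREATE INDEX idx_c_short ON t (c(255));
```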
The maximum row length, except for variable-length columns
(VARBINARY, VARCHAR,
BLOB and TEXT), is slightly less than
half of a database page. That is, the maximum row length is
about 8000 bytes. LONGBLOB and LONGTEXT
columns must be less than 4GB, and the total row length,
including BLOB and TEXT columns, must be less than 4GB.
If a row is less than half a page long, all of it is stored locally within the page. If it exceeds half a page, variable-length columns are chosen for external off-page storage until the row fits within half a page, as described in “File Space Management”.
Although InnoDB supports row sizes larger
than 65,535 bytes internally, MySQL itself imposes a row-size
limit of 65,535 bytes for the combined size of all columns:
mysql> CREATE TABLE t (a VARCHAR(8000), b VARCHAR(10000),
    -> c VARCHAR(10000), d VARCHAR(10000), e VARCHAR(10000),
    -> f VARCHAR(10000), g VARCHAR(10000)) ENGINE=InnoDB;
ERROR 1118 (42000): Row size too large. The maximum row size for the
used table type, not counting BLOBs, is 65535. You have to change some
columns to TEXT or BLOBs
On some older operating systems, files must be less than 2GB.
This is not a limitation of InnoDB itself,
but if you require a large tablespace, you will need to
configure it using several smaller data files rather than one
large data file.
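On such a system, the shared tablespace can be split across several files in the server configuration, for example (file names and sizes are illustrative):

```ini
[mysqld]
innodb_data_home_dir = /usr/local/mysql/data
innodb_data_file_path = ibdata1:1900M;ibdata2:1900M;ibdata3:1900M:autoextend
```

Each file stays safely under the 2GB limit while the combined tablespace can grow much larger.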
The combined size of the
InnoDB log files
must be less than 4GB.
The minimum tablespace size is 10MB. The maximum tablespace size is four billion database pages (64TB). This is also the maximum size for a table.
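The 64TB figure follows directly from the page-count limit and the default 16KB page size, taking “four billion” to mean 2^32 pages:

```python
# Maximum tablespace size: 2^32 pages of 16KB (the default page size) each.
pages = 2 ** 32
page_size = 16 * 1024                # bytes per page
max_bytes = pages * page_size
print(max_bytes // 2 ** 40)          # size in terabytes -> 64
```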
The default database page size in InnoDB is
16KB. By recompiling the code, you can set it to values
ranging from 8KB to 64KB. You must update the values of
UNIV_PAGE_SIZE and UNIV_PAGE_SIZE_SHIFT in the
univ.i source file.
Changing the page size is not a supported operation and
there is no guarantee that
InnoDB will function normally
with a page size other than 16KB. Problems compiling or
running InnoDB may occur.
A version of
InnoDB built for
one page size cannot use data files or log files from a
version built for a different page size.
InnoDB tables do not support
FULLTEXT indexes.

InnoDB tables do not support spatial data
types before MySQL 5.0.16. As of 5.0.16,
InnoDB supports spatial data types, but not
indexes on them.
ANALYZE TABLE determines index
cardinality (as displayed in the
Cardinality column of
SHOW INDEX output) by doing
eight random dives to each of the index trees and updating
index cardinality estimates accordingly. Because these are
only estimates, repeated runs of
ANALYZE TABLE may produce different numbers. This makes
ANALYZE TABLE fast on
InnoDB tables but not 100% accurate because
it does not take all rows into account.
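For example (table name hypothetical), you can refresh the estimates and then inspect them:

```sql
-- Re-sample the index trees; repeated runs may yield different estimates.
ANALYZE TABLE t;

-- The Cardinality column shows the current (approximate) estimate.
SHOW INDEX FROM t;
```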
MySQL uses index cardinality estimates only in join
optimization. If some join is not optimized in the right way,
you can try using ANALYZE
TABLE. In the few cases that
ANALYZE TABLE does not produce
values good enough for your particular tables, you can use
FORCE INDEX with your queries to force the
use of a particular index, or set the
max_seeks_for_key system
variable to ensure that MySQL prefers index lookups over table
scans. See Section 5.1.4, “Server System Variables”, and
Section B.5.6, “Optimizer-Related Issues”.
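Both workarounds might look like this (table, index, and column names are illustrative):

```sql
-- Force the optimizer to use one particular index for this query.
SELECT * FROM t FORCE INDEX (idx_c) WHERE c = 'abc';

-- Or bias the optimizer toward index lookups for the session.
SET max_seeks_for_key = 100;
```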
If statements or transactions are running on a table, and
ANALYZE TABLE is run on the
same table followed by a second ANALYZE
TABLE operation, the second
ANALYZE TABLE operation is
blocked until the statements or transactions are completed.
This behaviour occurs because ANALYZE
TABLE marks the currently loaded table definition as
obsolete when ANALYZE TABLE is
finished running. New statements or transactions (including a
second ANALYZE TABLE statement)
must load the new table definition into the table cache, which
cannot occur until currently running statements or
transactions are completed and the old table definition is
purged. Loading multiple concurrent table definitions is not
supported.
InnoDB does not keep an internal count of
rows in a table because concurrent transactions might
“see” different numbers of rows at the same time.
To process a
SELECT COUNT(*) FROM t statement,
InnoDB scans an index of the
table, which takes some time if the index is not entirely in
the buffer pool. If your table does not change often, using
the MySQL query cache is a good solution. To get a fast count,
you have to use a counter table you create yourself and let
your application update it according to the inserts and
deletes it does. If an approximate row count is sufficient,
SHOW TABLE STATUS can be used.
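A minimal sketch of such a counter table (names are illustrative); the application updates it within the same transaction as its own inserts and deletes:

```sql
CREATE TABLE t_counter (cnt BIGINT NOT NULL) ENGINE=InnoDB;
INSERT INTO t_counter VALUES (0);

-- Alongside each application insert into t:
UPDATE t_counter SET cnt = cnt + 1;
-- Alongside each application delete from t:
UPDATE t_counter SET cnt = cnt - 1;

-- Fast count without scanning an index of t:
SELECT cnt FROM t_counter;
```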
On Windows, InnoDB always stores database
and table names internally in lowercase. To move databases in
a binary format from Unix to Windows or from Windows to Unix,
create all databases and tables using lowercase names.
An AUTO_INCREMENT column
ai_col must be defined as part of
an index such that it is possible to perform the equivalent of
an indexed SELECT MAX(ai_col) lookup on the
table to obtain the maximum column value. Typically, this is
achieved by making the column the first column of some table
index.
In MySQL 5.0 before MySQL 5.0.3,
InnoDB does not support the
AUTO_INCREMENT table option for setting the
initial sequence value in a CREATE TABLE or
ALTER TABLE statement. To set the value with
InnoDB,
insert a dummy row with a value one less and delete that dummy
row, or insert the first row with an explicit value specified.
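The dummy-row workaround might look like this (table name and starting value are illustrative):

```sql
CREATE TABLE t (id INT AUTO_INCREMENT PRIMARY KEY) ENGINE=InnoDB;

-- To start the sequence at 100: insert a dummy row with the value 99...
INSERT INTO t (id) VALUES (99);
-- ...delete it again...
DELETE FROM t WHERE id = 99;
-- ...and the next automatically generated value is 100.
INSERT INTO t (id) VALUES (NULL);
```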
While initializing a previously specified
AUTO_INCREMENT column on a table,
InnoDB sets an exclusive lock on the end of
the index associated with the
AUTO_INCREMENT column. While accessing the
auto-increment counter, InnoDB uses a
special AUTO-INC table lock mode where the
lock lasts only to the end of the current SQL statement, not
to the end of the entire transaction. Other clients cannot
insert into the table while the AUTO-INC
table lock is held. See
“AUTO_INCREMENT Handling in InnoDB”.
When you restart the MySQL server,
InnoDB
may reuse an old value that was generated for an
AUTO_INCREMENT column but never stored
(that is, a value that was generated during an old transaction
that was rolled back).
When an AUTO_INCREMENT column runs out of
values, InnoDB wraps a
BIGINT value around to the other end of its
range. However, BIGINT values have 64 bits, so
if you were to insert one million rows per second, it would
still take nearly three hundred thousand years before
BIGINT reached its upper bound.
With all other integer type columns, a duplicate-key error
results. This is general MySQL behavior, similar to how
MyISAM works.
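The "three hundred thousand years" estimate can be checked with a quick calculation:

```python
# Time to exhaust a signed BIGINT at one million inserts per second.
max_bigint = 2 ** 63 - 1
rows_per_second = 1_000_000
seconds = max_bigint / rows_per_second
years = seconds / (365 * 24 * 3600)
print(round(years))  # about 292,471 years
```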
Under some conditions,
TRUNCATE TABLE on an
InnoDB table is mapped to
DELETE FROM; in that case it does not
regenerate the table but instead deletes all rows, one by
one, and does not reset the
AUTO_INCREMENT counter. See
Section 13.1.21, “TRUNCATE TABLE Syntax”.
The LOAD TABLE FROM MASTER
statement for setting up replication slave servers does not
work for InnoDB tables. A workaround is to
alter the table to MyISAM on the master,
then do the load, and after that alter the master table back
to InnoDB. Do not do this if the tables use
InnoDB-specific features such as foreign
keys.
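The engine-swap workaround, run on the master (table name illustrative):

```sql
-- On the master: temporarily switch the table to MyISAM...
ALTER TABLE t ENGINE=MyISAM;
-- ...let the slave run LOAD TABLE t FROM MASTER, then switch back.
ALTER TABLE t ENGINE=InnoDB;
```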
Currently, cascaded foreign key actions do not activate triggers.
You cannot create a table with a column name that matches the
name of an internal InnoDB column (including
DB_ROW_ID, DB_TRX_ID,
DB_ROLL_PTR, and
DB_MIX_ID). In versions of MySQL before
5.0.21 this would cause a crash; since 5.0.21 the server
reports error 1005 and refers to error -1 in the error
message. This restriction applies only to use of the names in
uppercase.
LOCK TABLES acquires two locks
on each table if
innodb_table_locks=1 (the
default). In addition to a table lock on the MySQL layer, it
also acquires an
InnoDB table lock.
Versions of MySQL before 4.1.2 did not acquire
InnoDB table locks; the old behavior can be
selected by setting
innodb_table_locks=0. If no
InnoDB table lock is acquired,
LOCK TABLES completes even if
some records of the tables are being locked by other
transactions.

All InnoDB locks held by a transaction are
released when the transaction is committed or aborted. Thus,
it does not make much sense to invoke
LOCK TABLES on
InnoDB tables in
autocommit=1 mode, because the acquired
InnoDB table locks would be
released immediately.
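To make LOCK TABLES meaningful for InnoDB tables, disable autocommit first so the locks last until an explicit commit (table name illustrative):

```sql
SET autocommit = 0;
LOCK TABLES t WRITE;
-- ... statements that read and modify t ...
COMMIT;          -- releases the InnoDB locks
UNLOCK TABLES;   -- releases the MySQL-layer table lock
```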
InnoDB has a limit of 1023 concurrent
transactions that have created undo records by modifying data.
Workarounds include keeping transactions as small and fast as
possible, delaying changes until near the end of the
transaction, and using stored routines to reduce client/server
latency delays. Applications should commit transactions before
doing time-consuming client-side operations.