These performance tips supplement the general guidelines for fast inserts in Section 10.2.5.1, “Optimizing INSERT Statements”.
- When importing data into InnoDB, turn off autocommit mode, because it performs a log flush to disk for every insert. To disable autocommit during your import operation, surround it with `SET autocommit` and `COMMIT` statements:

  ```sql
  SET autocommit=0;
  ... SQL import statements ...
  COMMIT;
  ```

  The mysqldump option `--opt` creates dump files that are fast to import into an InnoDB table, even without wrapping them with the `SET autocommit` and `COMMIT` statements.
- If you have `UNIQUE` constraints on secondary keys, you can speed up table imports by temporarily turning off the uniqueness checks during the import session:

  ```sql
  SET unique_checks=0;
  ... SQL import statements ...
  SET unique_checks=1;
  ```

  For big tables, this saves a lot of disk I/O because InnoDB can use its change buffer to write secondary index records in a batch. Be certain that the data contains no duplicate keys.
- If you have `FOREIGN KEY` constraints in your tables, you can speed up table imports by turning off the foreign key checks for the duration of the import session:

  ```sql
  SET foreign_key_checks=0;
  ... SQL import statements ...
  SET foreign_key_checks=1;
  ```

  For big tables, this can save a lot of disk I/O.
- Use the multiple-row `INSERT` syntax to reduce communication overhead between the client and the server if you need to insert many rows:

  ```sql
  INSERT INTO yourtable VALUES (1,2), (5,5), ...;
  ```

  This tip is valid for inserts into any table, not just InnoDB tables.
- When doing bulk inserts into tables with auto-increment columns, set `innodb_autoinc_lock_mode` to 2 (interleaved) instead of 1 (consecutive). See Section 17.6.1.6, “AUTO_INCREMENT Handling in InnoDB” for details.
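  This variable is read-only at runtime, so it must be configured before the server starts. As a minimal sketch (verify the behavior against your server version), you can confirm the active mode from any session:

  ```sql
  -- innodb_autoinc_lock_mode cannot be changed with SET; to switch to
  -- interleaved mode, set innodb_autoinc_lock_mode=2 under [mysqld] in
  -- the server configuration file and restart the server.
  SELECT @@innodb_autoinc_lock_mode;
  ```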
- When performing bulk inserts, it is faster to insert rows in `PRIMARY KEY` order. InnoDB tables use a clustered index, which makes it relatively fast to use data in the order of the `PRIMARY KEY`. Performing bulk inserts in `PRIMARY KEY` order is particularly important for tables that do not fit entirely within the buffer pool.
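  A minimal sketch of the idea, using a hypothetical table `t` with `PRIMARY KEY (id)`; because the batch is already sorted by `id`, InnoDB appends rows at the end of the clustered index instead of inserting into random page positions:

  ```sql
  -- Hypothetical table for illustration.
  CREATE TABLE t (
    id  INT NOT NULL,
    val VARCHAR(20),
    PRIMARY KEY (id)
  ) ENGINE=InnoDB;

  -- Rows supplied in ascending PRIMARY KEY order.
  INSERT INTO t (id, val) VALUES
    (1, 'a'),
    (2, 'b'),
    (3, 'c');
  ```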
- For optimal performance when loading data into an InnoDB `FULLTEXT` index, follow this set of steps:

  1. Define a column `FTS_DOC_ID` at table creation time, of type `BIGINT UNSIGNED NOT NULL`, with a unique index named `FTS_DOC_ID_INDEX`. For example:

     ```sql
     CREATE TABLE t1 (
       FTS_DOC_ID BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
       title VARCHAR(255) NOT NULL DEFAULT '',
       text MEDIUMTEXT NOT NULL,
       PRIMARY KEY (FTS_DOC_ID)
     ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;

     CREATE UNIQUE INDEX FTS_DOC_ID_INDEX ON t1 (FTS_DOC_ID);
     ```

  2. Load the data into the table.

  3. Create the `FULLTEXT` index after the data is loaded.
  **Note:** When adding an `FTS_DOC_ID` column at table creation time, ensure that the `FTS_DOC_ID` column is updated when the `FULLTEXT` indexed column is updated, as the `FTS_DOC_ID` must increase monotonically with each `INSERT` or `UPDATE`. If you choose not to add the `FTS_DOC_ID` at table creation time and instead have InnoDB manage DOC IDs for you, InnoDB adds `FTS_DOC_ID` as a hidden column with the next `CREATE FULLTEXT INDEX` call. This approach, however, requires a table rebuild, which can impact performance.
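  The final step above can be sketched as follows, assuming the `t1` table from the earlier example; the index name `ft_idx` and the choice of indexed columns are assumptions for illustration:

  ```sql
  -- Build the FULLTEXT index only after the bulk load completes,
  -- so it is constructed in one pass over the loaded data rather
  -- than maintained row by row during the import.
  CREATE FULLTEXT INDEX ft_idx ON t1 (title, text);
  ```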
- If loading data into a new MySQL instance, consider disabling redo logging using the `ALTER INSTANCE {ENABLE|DISABLE} INNODB REDO_LOG` syntax. Disabling redo logging helps speed up data loading by avoiding redo log writes. For more information, see Disabling Redo Logging.

  **Warning:** This feature is intended only for loading data into a new MySQL instance. Do not disable redo logging on a production system. It is permitted to shut down and restart the server while redo logging is disabled, but an unexpected server stoppage while redo logging is disabled can cause data loss and instance corruption.
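  A sketch of the sequence on a new instance, using the syntax shown above:

  ```sql
  -- Disable redo logging before the bulk load, then re-enable it
  -- once the import finishes.
  ALTER INSTANCE DISABLE INNODB REDO_LOG;

  ... SQL import statements ...

  ALTER INSTANCE ENABLE INNODB REDO_LOG;

  -- Confirm the current state of redo logging.
  SHOW GLOBAL STATUS LIKE 'Innodb_redo_log_enabled';
  ```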
- Use MySQL Shell to import data. MySQL Shell's parallel table import utility `util.importTable()` provides rapid data import to a MySQL relational table for large data files. MySQL Shell's dump loading utility `util.loadDump()` also offers parallel load capabilities. See MySQL Shell Utilities.