MySQL 5.7 Reference Manual

Speed of INSERT Statements

To optimize insert speed, combine many small operations into a single large operation. Ideally, you make a single connection, send the data for many new rows at once, and delay all index updates and consistency checking until the very end.
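As a minimal sketch of sending many rows at once (the table and column names here are hypothetical), MySQL's multiple-row INSERT syntax carries many rows in a single statement, so the connect, send, and parse costs are paid only once:

```sql
-- Hypothetical table; one statement carries many rows,
-- paying the connect/send/parse overhead a single time.
INSERT INTO t1 (id, name) VALUES
  (1, 'a'),
  (2, 'b'),
  (3, 'c');
```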

The time required for inserting a row is determined by the following factors, where the numbers indicate approximate proportions:

  • Connecting: (3)

  • Sending query to server: (2)

  • Parsing query: (2)

  • Inserting row: (1 × size of row)

  • Inserting indexes: (1 × number of indexes)

  • Closing: (1)

This does not take into consideration the initial overhead to open tables, which is done once for each concurrently running query.

The size of the table slows down the insertion of indexes by log N, assuming B-tree indexes.

You can use the following methods to speed up inserts:

User Comments
  Posted by Nicolas Moldavsky on May 31, 2003
"If you are inserting a lot of rows from different clients, you can get higher speed by using the INSERT DELAYED statement."

I am running an application which inserts 20 to 120 rows per second on a single table (Format: Fixed / Row size: 46 / No free data) using 5 to 10 client threads. I was using INSERT DELAYED, but the delayed insert thread was consuming a lot of CPU, and every 5 seconds all the insert threads would go into "waiting for handler lock" status for about 600 ms. The application also runs selects and updates in the following proportion:

During that 600ms every 5 seconds, mysql didn't output anything to any thread (I checked it by monitoring network traffic).

Today I changed the INSERT DELAYED to a simple INSERT, and CPU usage went down by 70%; no more intermittent locks are experienced.

Platform: Dual Intel Pentium III - Linux 2.4.20-13.7smp - MySQL 4.0.12-standard. Queries per second avg: 482.485
  Posted by on August 5, 2004
The suggestion of wrapping in transactions only if you make more than about 5 modifications is, in my experience, a bit high.
I've seen massive performance increases from wrapping just 3 inserts. I imagine there is a performance gain whenever you make more than a single insertion.
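A minimal sketch of that wrapping (the table is hypothetical): grouping the inserts in one transaction lets the storage engine flush to disk once at COMMIT instead of once per statement, which is where the gain comes from.

```sql
-- Hypothetical table; with autocommit on, each INSERT would flush separately.
START TRANSACTION;
INSERT INTO t1 (id, name) VALUES (1, 'a');
INSERT INTO t1 (id, name) VALUES (2, 'b');
INSERT INTO t1 (id, name) VALUES (3, 'c');
COMMIT;
```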
  Posted by on November 1, 2005

--tmpdir (and pointing it at a large partition with plenty of free space)
may be useful if you have a small /tmp partition and are using myisamchk on a large table.


Got a table with 36 million rows, 3 columns.

mysql> select count(*) from pagelinks;
+----------+
| count(*) |
+----------+
| 36785885 |
+----------+

mysql> desc pagelinks;
+--------------+-----------------+------+-----+---------+-------+
| Field        | Type            | Null | Key | Default | Extra |
+--------------+-----------------+------+-----+---------+-------+
| pl_from      | int(8) unsigned |      | PRI | 0       |       |
| pl_namespace | int(11)         |      | PRI | 0       |       |
| pl_title     | varchar(255)    |      | PRI |         |       |
+--------------+-----------------+------+-----+---------+-------+

which I populated using

LOAD DATA INFILE '/home/murray/tagging/wikipedia/tabbed.en_pagelinks.dat' INTO TABLE wikipedia.pagelinks;

# ls -ltrh tabbed.en_pagelinks.dat
-rw-r--r-- 1 murray murray 875M Nov 1 14:03 tabbed.en_pagelinks.dat

when running

myisamchk -r -q /var/lib/mysql/wikipedia/pagelinks

eventually produces repeated errors like

myisamchk: Disk is full writing '/tmp/ST71pY9X' (Errcode: 28). Waiting for someone to free space... Retry in 60 secs

df -m /tmp shows

/dev/hda7 1786 1786 0 100% /tmp

but there is nothing actually in /tmp

# cd /
# du -sm tmp
1 tmp/

Fix was to use --tmpdir and point it at a partition with more than 2 GB of free space.

myisamchk -r -q --tmpdir=/home/data/tmp /var/lib/mysql/wikipedia/pagelinks

# mysql -V
mysql Ver 14.7 Distrib 4.1.11, for pc-linux-gnu (i386)
debian stable/sarge

  Posted by Mike Looijmans on November 24, 2005
Using LOCK TABLE .. WRITE may speed up the insertion, but when processing as well as loading rows, you will lock out any reader. A select on the primary key (which should take close to 0 seconds) takes up to 0.2 seconds when my loader is busy loading batches of 1000 rows.

A very good workaround is to put the new rows into a temp table, then flush that table every now and then in a single transaction. This makes the insert very quick, because all the checking and conversion was already done in the temp table.


-- Loop from here --
DELETE FROM ..._temp;
INSERT INTO ..._temp ... VALUES (...);
INSERT INTO ..._temp ... VALUES (...);

LOCK TABLE ... WRITE, ..._temp READ;
INSERT INTO ... SELECT ... FROM ..._temp;
UNLOCK TABLES;
-- Repeat until done --

DROP TABLE ..._temp;

What I've seen in my system, is that the INSERTs go at about the same speed as when locking the table, but the clients do not notice any performance loss at all when the loader is running.

  Posted by Jonathan Lampe on March 30, 2006
LOCK TABLES does not appear to speed up INSERT ... SELECT queries (at least under MySQL 4.0).

For example...

LOCK TABLES Folders READ, FolderPerms WRITE;
SELECT 'asc' as SID,
Folders.FolderPath FROM FolderID1
LEFT JOIN Folders ON FolderID1.ID=Folders.ID GROUP BY FolderID1.ID;

...runs just as fast with or without the LOCK/UNLOCK statements.
  Posted by Kyle Drake on May 18, 2006
If you have a situation where there are a lot of INSERTs taking place for an extended period of time (such as with a large import), edit your /etc/fstab to turn off access times with the noatime option for the partition that the data is located in. With access times enabled for the mounted partition (the default behavior), the OS has to write to the disk partition every time you access the database (even if it's just a read). On large imports, this can clog up your I/O buffers. This was a problem I had on a FreeBSD 5.4 machine. When I turned off access time, INSERT performance increased severalfold.

An example /etc/fstab entry without, and then with, noatime (consult your manpages for OS-specific settings):

/dev/da0s1a /partition ufs rw 2 2
/dev/da0s1a /partition ufs rw,noatime 2 2
  Posted by Vitaly Ignatovich on January 23, 2007
If you have a big amount of data to insert through an INSERT ... SELECT query, it might be helpful to divide it into smaller portions. I had a table with approx 30 000 000 records with 6 fields of type double. If I tried to use INSERT ... SELECT to copy all records at once into another empty table, it wouldn't go at all. But when copying 2 000 000 records at a time, it took about 10 minutes for the whole table.
It seems that MySQL executes the select completely and then starts inserting.
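A minimal sketch of that chunking, assuming hypothetical src and dst tables with an integer primary key id:

```sql
-- Copy in ranges so each INSERT ... SELECT materializes a bounded result set.
INSERT INTO dst SELECT * FROM src WHERE id >= 0       AND id < 2000000;
INSERT INTO dst SELECT * FROM src WHERE id >= 2000000 AND id < 4000000;
-- ...continue until the maximum id is covered.
```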
  Posted by Christopher Brown on January 4, 2008
Be aware that if you use replication and set sync_binlog = 1, this can have a large impact on the speed of INSERT/UPDATE/DELETE operations, especially if you are using mysqlimport or LOAD DATA LOCAL INFILE.

With sync_binlog = 1 on Solaris 10, I noted speeds dropping to about 25% of those obtained with the default sync_binlog = 0.
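As a sketch, the variable can be set in the server configuration file; note that the default shown here trades binary-log durability for speed, and sync_binlog = 1 is the safest setting:

```
[mysqld]
sync_binlog = 0
```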
