Creating a Data Snapshot Using mysqldump

One way to create a snapshot of the data in an existing master database is to use the mysqldump tool to create a dump of all the databases you want to replicate. Once the data dump has been completed, you then import this data into the slave before starting the replication process.

The example shown here dumps all databases to a file named dbdump.db, and includes the --master-data option which automatically appends the CHANGE MASTER TO statement required on the slave to start the replication process:

shell> mysqldump --all-databases --master-data > dbdump.db

If you do not use --master-data, then it is necessary to lock all tables in a separate session manually (using FLUSH TABLES WITH READ LOCK) prior to running mysqldump, then exiting or running UNLOCK TABLES from the second session to release the locks. You must also obtain binary log position information matching the snapshot, using SHOW MASTER STATUS, and use this to issue the appropriate CHANGE MASTER TO statement when starting the slave.
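
For illustration, a minimal sketch of that manual procedure (the log file name and position shown are placeholders; use the values reported by SHOW MASTER STATUS):

In the first session, lock the tables and record the coordinates:

mysql> FLUSH TABLES WITH READ LOCK;
mysql> SHOW MASTER STATUS;

Note the File and Position values from the output, then run the dump from a second session:

shell> mysqldump --all-databases > dbdump.db

When the dump has finished, release the locks in the first session:

mysql> UNLOCK TABLES;

Finally, on the slave, point replication at the recorded coordinates:

mysql> CHANGE MASTER TO
    ->     MASTER_LOG_FILE='mysql-bin.000001',
    ->     MASTER_LOG_POS=4;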

When choosing databases to include in the dump, remember that you need to filter out databases on each slave that you do not want to include in the replication process.
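
As a sketch only (these are the standard slave-side replication filter options, but the database names are placeholders): a slave that should not replicate a particular database can list it in its my.cnf, for example:

[mysqld]
replicate-ignore-db = test

or, to replicate only selected databases:

[mysqld]
replicate-do-db = mydb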

To import the data, either copy the dump file to the slave, or access the file from the master when connecting remotely to the slave.
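
For example (the host name is a placeholder), after copying the file to the slave you can load it with:

shell> mysql < dbdump.db

or, from the master, feed the file to the slave over a remote connection:

shell> mysql --host=slave_host < dbdump.db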

User Comments
  Posted by Alec Matusis on August 17, 2008
When your master is overloaded or is short on disk space, execute mysqldump on the slave, connecting remotely to the master: this saves the master from extra disk writes when creating the dump file.
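
A minimal sketch of that approach, run on the slave (host name and user are placeholders):

shell> mysqldump --host=master_host --user=repl_user --password --all-databases --master-data > dbdump.db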
  Posted by gio monte on September 23, 2008
You may also pipe the output to gzip:
mysqldump --all-databases --user=abc --password=abc | gzip > backup.sql.gz

To load the dump on another server without uncompressing it first, use the following command:
zcat backup.sql.gz | mysql --user=abc --password=abc
  Posted by Sprout Software on October 22, 2009
Note that the --master-data option only includes MASTER_LOG_FILE and MASTER_LOG_POS arguments of the "CHANGE MASTER TO" statement; you'll likely want to edit the dump to include MASTER_HOST, MASTER_USER and MASTER_PASSWORD values when importing to a fresh slave server (or realize that you'll need to re-issue a CHANGE MASTER TO statement after the import).
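For example (all values are placeholders), the full statement on a fresh slave might look like this, with the log file name and position taken from the dump:

mysql> CHANGE MASTER TO
    ->     MASTER_HOST='master_host',
    ->     MASTER_USER='repl_user',
    ->     MASTER_PASSWORD='repl_password',
    ->     MASTER_LOG_FILE='mysql-bin.000001',
    ->     MASTER_LOG_POS=4;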
  Posted by gerry myob on March 10, 2010
The mysqldump process locked up for me if I first did "FLUSH TABLES WITH READ LOCK;" as described above. I later discovered this would only happen for databases that contained an InnoDB table.

If you are having this issue, you can fix it by adding the --single-transaction argument to your mysqldump call.
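
A sketch of the call with that option added to the example at the top of this page:

shell> mysqldump --all-databases --master-data --single-transaction > dbdump.db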
  Posted by Vlatko Šurlan on May 5, 2010
There is an interesting script sample showing a way to dump MySQL databases directly into gzip and then into an SSH connection, thus creating a gzipped dump archive that never resides on the server's hard drive. This can be a handy way to ensure that the backup does not fill up the server's hard drive.
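
A minimal sketch of that idea (the remote user, host, and path are placeholders, not taken from the script mentioned above):

shell> mysqldump --all-databases --master-data | gzip | ssh backup_user@backup_host 'cat > /backups/dbdump.sql.gz'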
  Posted by Gianluigi Zanettini on July 29, 2010
"FLUSH TABLES WITH READ LOCK;" seems to be unnecessary if you run mysqldump --master-data. Since it turns on --lock-all-tables automatically, it locks all tables across all databases. This is achieved by acquiring a global read lock for the duration of the whole dump.

So the lock doesn't need to be acquired manually.
  Posted by David Forgac on September 2, 2010
@Gianluigi Yes but the point of the FLUSH TABLES WITH READ LOCK here is that you acquire it, get the master log coordinates, and then complete the dump. If the lock is not in place the database will likely be changed between reading the log coordinates and making the dump even just a couple seconds later. If you're just looking to make a database dump that may be fine but it won't work for setting up replication.
  Posted by Ruslan Kabalin on September 23, 2010
@David: But the correct coordinates (those recorded after locking all tables) will be added to the dump file automatically if the --master-data parameter is used, won't they? And the corresponding CHANGE MASTER TO MASTER_LOG_* statements will be executed on the slave at the time the dump is imported. So I do not see the point of locking tables and recording coordinates separately, since the --master-data parameter already does that.
  Posted by Ilan Hazan on April 7, 2011
Restoring a dumped table into the MySQL master server can lead to serious replication delay.
To overcome the replication delay caused by restoring the dump on the master, the massive inserts need to be spread out over time. This can be done with the MySQL SLEEP function, as sketched below.
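A hedged sketch of that idea (table name, row values, and the one-second pause are assumptions): split the large inserts in the dump into batches and place a pause between them before replaying the file on the master:

INSERT INTO tbl VALUES (...),(...),(...);
SELECT SLEEP(1);
INSERT INTO tbl VALUES (...),(...),(...);
SELECT SLEEP(1);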
  Posted by Ender Li on July 5, 2011
If get a "Access denied for user 'ODBC'@'localhost'" error message. "MySQLInstanceConfig.exe".
2.Select "Create An Anonymous Account" at "Please set the security options".
  Posted by Stephen Wylie on January 25, 2012
When using --master-data, as Sprout has pointed out, not all of the required CHANGE MASTER arguments are in the dump file.
His suggestion of editing the dump file will work, but trying to do another CHANGE MASTER to set the host, user, etc. after importing the dump will not. That will only work if you put in the log file name and coordinates again, as I found to my cost.
It won't tell you what's wrong either!
Every execution of CHANGE MASTER blows away the coordinates, as it assumes you are changing to a new master.
I think you can set the user, etc. before importing, and it should work.