Backing Up a Slave Using mysqldump

Using mysqldump to create a copy of a database enables you to capture all of the data in the database in a format that enables the information to be imported into another instance of MySQL Server (see Section 4.5.4, “mysqldump — A Database Backup Program”). Because the format of the information is SQL statements, the file can easily be distributed and applied to running servers in the event that you need access to the data in an emergency. However, if the size of your data set is very large, mysqldump may be impractical.
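
For example, a dump file can be loaded into another server instance by feeding it to the mysql client; the host name and dump file name here are placeholders:

    shell> mysql --host=replica2.example.com --user=root -p < fulldb.dump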

When using mysqldump, you should stop replication on the slave before starting the dump process to ensure that the dump contains a consistent set of data:

  1. Stop the slave from processing requests. You can stop replication completely on the slave using mysqladmin:

    shell> mysqladmin stop-slave

    Alternatively, you can stop only the slave SQL thread to pause event execution:

    shell> mysql -e 'STOP SLAVE SQL_THREAD;'

    This enables the slave to continue to receive data change events from the master's binary log and store them in the relay logs using the I/O thread, but prevents the slave from executing these events and changing its data. Within busy replication environments, permitting the I/O thread to run during the backup may speed up the catch-up process when you restart the slave SQL thread. You can verify the state of both threads with SHOW SLAVE STATUS, as shown in the example after this procedure.

  2. Run mysqldump to dump your databases. You may dump either all databases or a set of selected databases (see the example after this procedure). For example, to dump all databases:

    shell> mysqldump --all-databases > fulldb.dump
  3. Once the dump has completed, start slave operations again:

    shell> mysqladmin start-slave
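
To verify the state of the replication threads during step 1, you can inspect SHOW SLAVE STATUS; the output shown here is illustrative of a slave with only the SQL thread stopped:

    shell> mysql -e 'SHOW SLAVE STATUS\G' | grep -E 'Slave_(IO|SQL)_Running:'
            Slave_IO_Running: Yes
           Slave_SQL_Running: No

To dump only selected databases in step 2, name them using the --databases option (the database names here are placeholders):

    shell> mysqldump --databases db1 db2 > selected.dump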

In the preceding example, you may want to add login credentials (user name, password) to the commands, and bundle the process up into a script that you can run automatically each day.
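
A minimal sketch of such a script follows; the user name, password, and backup directory are placeholder assumptions, and error handling is omitted:

    #!/bin/sh
    # Pause replication, dump all databases, then resume replication.
    # USER, PASSWORD, and BACKUP_DIR are placeholders to adapt.
    USER=backup
    PASSWORD=backup_password
    BACKUP_DIR=/var/backups/mysql

    mysqladmin --user=$USER --password=$PASSWORD stop-slave
    mysqldump --user=$USER --password=$PASSWORD --all-databases \
      > "$BACKUP_DIR/fulldb-$(date +%Y%m%d).dump"
    mysqladmin --user=$USER --password=$PASSWORD start-slave

Note that passing a password on the command line makes it visible to other users on the system; for an automated job, reading credentials from an option file is generally safer.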

If you use this approach, make sure you monitor the slave replication process to ensure that the time taken to run the backup does not affect the slave's ability to keep up with events from the master. See the section “Checking Replication Status”. If the slave is unable to keep up, you may want to add another slave and distribute the backup process. For an example of how to configure this scenario, see Section 17.3.4, “Replicating Different Databases to Different Slaves”.
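
As a simple check, the Seconds_Behind_Master field of SHOW SLAVE STATUS indicates how far the slave lags after the backup completes; the value shown here is illustrative:

    shell> mysql -e 'SHOW SLAVE STATUS\G' | grep Seconds_Behind_Master
            Seconds_Behind_Master: 0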

User Comments
  Posted by Michael Maguire on September 20, 2008
This is a half-question/half-tip (I welcome being corrected on this).

Wouldn't adding the --lock-all-tables option obviate the need for stop slave/start slave?
  Posted by Ramiro Cavalcanti on March 13, 2012
Answering above:

Using --lock-tables, each database being dumped is put under a READ LOCAL lock only for the time it is dumped. So, if you have related tables across databases, this locking is not effective.

You should take a global read lock and stop the slave in order to take the backup on the slave server.

--lock-tables, -l

For each dumped database, lock all tables to be dumped before dumping them. The tables are locked with READ LOCAL to permit concurrent inserts in the case of MyISAM tables. For transactional tables such as InnoDB, --single-transaction is a much better option than --lock-tables because it does not need to lock the tables at all.

Because --lock-tables locks tables for each database separately, this option does not guarantee that the tables in the dump file are logically consistent between databases. Tables in different databases may be dumped in completely different states.
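
For example, a consistent dump of InnoDB tables without table-level locks can be taken like this (the file name is a placeholder):

    shell> mysqldump --single-transaction --all-databases > fulldb.dump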