MySQL 5.6 Reference Manual



17.1.1.5 Creating a Data Snapshot Using mysqldump

One way to create a snapshot of the data in an existing master database is to use the mysqldump tool to create a dump of all the databases you want to replicate. Once the data dump has been completed, you then import this data into the slave before starting the replication process.

The example shown here dumps all databases to a file named dbdump.db, and includes the --master-data option, which automatically appends the CHANGE MASTER TO statement required on the slave to start the replication process:

shell> mysqldump --all-databases --master-data > dbdump.db

If you do not use --master-data, then it is necessary to lock all tables manually in a separate session (using FLUSH TABLES WITH READ LOCK) before running mysqldump, and then release the locks by exiting that second session or issuing UNLOCK TABLES from it. You must also obtain the binary log position information matching the snapshot, using SHOW MASTER STATUS, and use it to issue the appropriate CHANGE MASTER TO statement when starting the slave.
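The manual procedure described above can be sketched as follows. The statements are the ones named in the text; the log file name and position in the final statement are illustrative placeholders that must be replaced with the values SHOW MASTER STATUS reports:

```sql
-- Session 1: block writes and record the binary log coordinates.
FLUSH TABLES WITH READ LOCK;
SHOW MASTER STATUS;   -- note the File and Position columns

-- Session 2 (shell), while the lock is held:
--   shell> mysqldump --all-databases > dbdump.db

-- Session 1: release the lock once the dump completes.
UNLOCK TABLES;

-- Later, on the slave, point replication at the recorded coordinates:
CHANGE MASTER TO
    MASTER_LOG_FILE='mysql-bin.000123',
    MASTER_LOG_POS=4567;
```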

When choosing databases to include in the dump, remember that you need to filter out databases on each slave that you do not want to include in the replication process.

To import the data, either copy the dump file to the slave, or access the file from the master when connecting remotely to the slave.
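Both import approaches might look like this in practice (a sketch only; the host name, user, and file paths are illustrative, and a running slave server is required):

```shell
# Option 1: copy the dump file to the slave, then load it locally there.
scp dbdump.db slave.example.com:/tmp/dbdump.db
#   ...then on the slave:
#   mysql -u root -p < /tmp/dbdump.db

# Option 2: load the file from the master host, connecting to the slave remotely.
mysql -h slave.example.com -u root -p < dbdump.db
```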


User Comments
  Posted by Alec Matusis on August 17, 2008
When your master is overloaded or short on disk space, execute mysqldump on the slave, connecting remotely to the master: this saves the master from extra disk writes when creating the dump file.
  Posted by gio monte on September 23, 2008
You may also pipe the output to gzip:
mysqldump --user=abc --password=abc | gzip > backup.sql.gz

To load the dump on another server without uncompressing it first, use the following command:
zcat backup.sql.gz | mysql --user=abc --password=abc
  Posted by Sprout Software on October 22, 2009
Note that the --master-data option only includes MASTER_LOG_FILE and MASTER_LOG_POS arguments of the "CHANGE MASTER TO" statement; you'll likely want to edit the dump to include MASTER_HOST, MASTER_USER and MASTER_PASSWORD values when importing to a fresh slave server (or realize that you'll need to re-issue a CHANGE MASTER TO statement after the import).
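Following Sprout's note, a fully edited statement on a fresh slave might look like this (the host, user, password, log file name, and position are all illustrative; the coordinates must match the values recorded in the dump):

```sql
CHANGE MASTER TO
    MASTER_HOST='master.example.com',
    MASTER_USER='repl',
    MASTER_PASSWORD='secret',
    MASTER_LOG_FILE='mysql-bin.000042',
    MASTER_LOG_POS=1234;
START SLAVE;
```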
  Posted by gerry myob on March 10, 2010
The mysqldump process locked up for me if I first did "FLUSH TABLES WITH READ LOCK;" as described above. I later discovered this would only happen for databases that contained an InnoDB table.

If you are having this issue, you can fix it by adding the --single-transaction argument to your mysqldump call.
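Assuming the goal is a consistent snapshot of InnoDB tables without holding a global read lock, the invocation would look like this (a sketch; note that --single-transaction gives a consistent read only for transactional tables, so any MyISAM tables can still change during the dump):

```shell
mysqldump --all-databases --single-transaction --master-data > dbdump.db
```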
  Posted by Vlatko Šurlan on May 5, 2010
There is an interesting script sample here: http://www.docplanet.org/linux/backing-up-linux-web-server-live-via-ssh/
showing a way to pipe a mysqldump directly into gzip and then into an ssh connection, producing a gzipped dump archive that never resides on the server's hard drive. This can be a handy way to ensure that the backup does not fill up the server's disk.
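The pattern in that script can be sketched as below. The host and path in the commented pipeline are illustrative and require a running master plus SSH access to the backup host; the executable lines verify the compression leg locally with a stand-in data producer, so no MySQL server is needed:

```shell
# The real pipeline (illustrative host and path):
#
#   mysqldump --all-databases --master-data | gzip -c | \
#       ssh backup@backup.example.com 'cat > /backups/dbdump.sql.gz'
#
# Stand-in for mysqldump output, compressed straight through a pipe:
printf 'CREATE TABLE t (id INT);\n' | gzip -c > /tmp/dump.sql.gz

# The archive decompresses back to the original stream:
zcat /tmp/dump.sql.gz
```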
  Posted by Gianluigi Zanettini on July 29, 2010
"FLUSH TABLES WITH READ LOCK;" seems to be unnecessary if you run mysqldump --master-data. Since it turns on --lock-all-tables automatically, it locks all tables across all databases. This is achieved by acquiring a global read lock for the duration of the whole dump.

So the lock doesn't need to be acquired manually.
  Posted by David Forgac on September 2, 2010
@Gianluigi Yes but the point of the FLUSH TABLES WITH READ LOCK here is that you acquire it, get the master log coordinates, and then complete the dump. If the lock is not in place the database will likely be changed between reading the log coordinates and making the dump even just a couple seconds later. If you're just looking to make a database dump that may be fine but it won't work for setting up replication.
  Posted by Ruslan Kabalin on September 23, 2010
@David: But the correct coordinates (those recorded after locking all tables) will be added to the dump file automatically if the --master-data parameter is used, won't they? And the corresponding CHANGE MASTER TO MASTER_LOG_* statement will be executed on the slave at the time the dump is imported. So I do not see the point of locking tables and recording coordinates separately, since the --master-data parameter already does all of it.
  Posted by Ilan Hazan on April 7, 2011
Restoring a dumped table into the MySQL master server can lead to serious replication delay.
To overcome the replication delay caused by restoring the dumped table on the master, the massive inserts need to be spread out over time. This can be done with the MySQL SLEEP() function.
See http://www.mysqldiary.com/as-restoring-a-dump-table-into-the-mysql-master-you-better-get-some-sleep/
  Posted by Ender Li on July 5, 2011
If you get an "Access denied for user 'ODBC'@'localhost'" error message:

1. Run "MySQLInstanceConfig.exe".
2. Select "Create An Anonymous Account" under "Please set the security options".
  Posted by Stephen Wylie on January 25, 2012
When using --master-data, as Sprout has pointed out, not all of the CHANGE MASTER TO clauses required are in the dump file.
His suggestion of editing the dump file will work, but issuing another CHANGE MASTER TO to set the host, user, and so on after importing the dump will not. That only works if you specify the log file name and coordinates again, as I found to my cost.
It won't tell you what's wrong, either!
Every execution of CHANGE MASTER TO discards the existing coordinates, because it assumes you are changing to a new master.
I think you can set the user and other connection options before importing, and it should work.