MySQL 5.6 Reference Manual

16.1.1 Using ZFS for File System Replication

Because zfs send and zfs recv exchange data as streams, you can replicate a file system from one host to another by piping the output of zfs send through ssh into zfs recv.

For example, to copy a snapshot of the scratchpool file system to a new file system called slavepool on another server, use the following command. It takes the snapshot of scratchpool, transmits it to the replica machine over ssh (using login credentials), and receives the snapshot stream into slavepool with zfs recv:

#> zfs send scratchpool@snap1 | ssh id@host pfexec zfs recv -F slavepool

The first part of the pipeline, zfs send scratchpool@snap1, streams the snapshot. The ssh command, and the command that it executes on the other server, pfexec zfs recv -F slavepool, receives the streamed snapshot data and writes it to slavepool. The -F option forces the snapshot data to be applied and is therefore destructive; that is acceptable here because this command creates the first version of the replicated file system.
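The pipeline above can be wrapped in a small helper that builds the command string first, which makes it easy to inspect before executing. This is a sketch, not part of the manual's procedure; the pool, snapshot, and login names are the same example values used above and should be replaced with your own:

```shell
#!/bin/sh
# Sketch: build the initial full-replication pipeline as a string so it
# can be reviewed (or logged) before being run. All names are examples.

build_initial_cmd() {
    # $1 = source snapshot, $2 = ssh target, $3 = destination file system
    # -F overwrites the destination, so this form is only safe for the
    # first copy or when the replica holds no changes worth keeping.
    echo "zfs send $1 | ssh $2 pfexec zfs recv -F $3"
}

# Print the command that would run; pipe the output to sh to execute it.
build_initial_cmd scratchpool@snap1 id@host slavepool
```

Printing the command rather than running it directly is a common pattern for destructive operations such as zfs recv -F, because it lets you verify the target file system before anything is overwritten.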

On the replica machine, the replicated file system contains the exact same content:

#> ls -al /slavepool/
total 23
drwxr-xr-x   6 root     root           7 Nov  8 09:13 ./
drwxr-xr-x  29 root     root          34 Nov  9 07:06 ../
drwxr-xr-x  31 root     bin           50 Jul 21 07:32 DTT/
drwxr-xr-x   4 root     bin            5 Jul 21 07:32 SUNWmlib/
drwxr-xr-x  14 root     sys           16 Nov  5 09:56 SUNWspro/
drwxrwxrwx  19 1000     1000          40 Nov  6 19:16 emacs-22.1/

To synchronize the file system later, create a new snapshot and then use the incremental mode of zfs send (the -i option) to send only the changes between the two snapshots to the replica machine:

#> zfs send -i scratchpool@snap1 scratchpool@snap2 | ssh id@host pfexec zfs recv slavepool

This operation succeeds only if the file system on the replica machine has not been modified in any way; incremental changes cannot be applied to a destination file system that has changed. Even the ls command shown above causes problems, because it updates metadata such as the last-access times of files and directories.
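One incremental sync step therefore consists of taking a new snapshot and sending the delta from the previous one. The following sketch builds that two-stage command; the pool, snapshot, host, and destination names are examples (it assumes the earlier snapshot, snap1 here, still exists on both machines):

```shell
#!/bin/sh
# Sketch: compose one incremental sync step. The previous snapshot must
# still exist on both the source and the replica for zfs send -i to work.
# All names (scratchpool, snap1, snap2, id@host, slavepool) are examples.

build_incremental_cmd() {
    # $1 = pool, $2 = previous snapshot, $3 = new snapshot,
    # $4 = ssh target, $5 = destination file system
    echo "zfs snapshot $1@$3 && zfs send -i $1@$2 $1@$3 | ssh $4 pfexec zfs recv $5"
}

# Print the command for review; pipe to sh to execute it.
build_incremental_cmd scratchpool snap1 snap2 id@host slavepool
```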

To prevent changes on the replica file system, set the file system on the replica to be read-only:

#> zfs set readonly=on slavepool

Setting readonly means that the file system on the replica cannot be changed by normal means, including its metadata. Operations that would normally update metadata (such as the ls above) silently perform their function without attempting to update the file system state.
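Because an incremental receive fails if the replica has drifted, it can be worth checking that the property is still set before each sync. The guard below is a sketch, not something the manual prescribes; it relies on zfs get with -H (suppress the header) and -o value (print only the property value):

```shell
#!/bin/sh
# Guard sketch (not from the manual): before an incremental sync, verify
# that the replica file system still has readonly=on.
# "zfs get -H -o value readonly <fs>" prints just "on" or "off".

is_readonly() {
    [ "$(zfs get -H -o value readonly "$1")" = "on" ]
}

# Intended use on the replica, with slavepool as an example name:
#   is_readonly slavepool || { echo "slavepool is writable; aborting" >&2; exit 1; }
```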

In essence, the replica file system is a static copy of the original. However, even when configured as read-only, a file system can still have snapshots applied to it. With the file system set to read-only, re-run the initial copy:

#> zfs send scratchpool@snap1 | ssh id@host pfexec zfs recv -F slavepool

Now you can make changes to the original file system and replicate them to the replica.
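The whole cycle (snapshot, send the delta, retire the old base) can then be repeated on a schedule. The sketch below prints the commands for one such cycle rather than executing them; the names are examples, and destroying the old snapshot on the source is a housekeeping choice, valid only once the newer snapshot exists on both sides to serve as the next base:

```shell
#!/bin/sh
# Sketch of one repeatable sync cycle, printed for review. Names are
# examples. After the delta lands on the replica, the newer snapshot
# becomes the common base, so the older one can be destroyed.

sync_cycle_cmds() {
    # $1 = pool, $2 = previous snapshot, $3 = new snapshot,
    # $4 = ssh target, $5 = destination file system
    echo "zfs snapshot $1@$3"
    echo "zfs send -i $1@$2 $1@$3 | ssh $4 pfexec zfs recv $5"
    echo "zfs destroy $1@$2"
}

sync_cycle_cmds scratchpool snap1 snap2 id@host slavepool
```

Running this from cron with a timestamp-based snapshot name would give simple, continuous file-system-level replication.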