
3.9 Setting Up MySQL Cluster Replication with MySQL Cluster Manager

This section provides sample steps for setting up MySQL Cluster replication with a single replication channel using MySQL Cluster Manager.

Before trying the following steps, it is recommended that you first read NDB Cluster Replication to familiarize yourself with the concepts, requirements, operations, and limitations of MySQL Cluster replication.

  1. Create and start a master cluster:

    mcm> create site --hosts=tonfisk msite;

    mcm> add package --basedir=/usr/local/cluster-mgt/cluster-7.3.2 7.3.2;

    mcm> create cluster -P 7.3.2 -R \
           ndb_mgmd@tonfisk,ndbmtd@tonfisk,ndbmtd@tonfisk,mysqld@tonfisk,mysqld@tonfisk,ndbapi@*,ndbapi@* \
           master;

    mcm> set portnumber:ndb_mgmd=4000 master;

    mcm> set port:mysqld:51=3307 master;

    mcm> set port:mysqld:50=3306 master;

    mcm> set server_id:mysqld:50=100 master;

    mcm> set log_bin:mysqld:50=binlog master;

    mcm> set binlog_format:mysqld:50=ROW master;

    mcm> set ndb_connectstring:mysqld:50=tonfisk:4000 master;

    mcm> start cluster master;
    

  2. Create and start a slave cluster (here, we begin by creating a new site called ssite just for the slave cluster; you can also skip this step and put the master and slave cluster hosts under the same site instead):

    mcm> create site --hosts=flundra ssite;

    mcm> add package --basedir=/usr/local/cluster-mgt/cluster-7.3.2 7.3.2;

    mcm> create cluster -P 7.3.2 -R \
          ndb_mgmd@flundra,ndbmtd@flundra,ndbmtd@flundra,mysqld@flundra,mysqld@flundra,ndbapi@*,ndbapi@* \
          slave;

    mcm> set portnumber:ndb_mgmd=4000 slave;

    mcm> set port:mysqld:50=3306 slave;

    mcm> set port:mysqld:51=3307 slave;

    mcm> set server_id:mysqld:50=101 slave;

    mcm> set ndb_connectstring:mysqld:50=flundra:4000 slave;

    mcm> set slave_skip_errors:mysqld=all slave;

    mcm> start cluster slave;
    

  3. Create a slave account (with the user name myslave and password mypw) on the master cluster with the appropriate privilege by logging in to the master cluster's mysql client (mysqlM) and issuing the following statement:

    
    mysqlM> GRANT REPLICATION SLAVE ON *.* TO 'myslave'@'flundra'
        -> IDENTIFIED BY 'mypw';

  4. Log in to the slave cluster client (mysqlS) and issue the following statement:

    mysqlS> CHANGE MASTER TO
        -> MASTER_HOST='tonfisk',
        -> MASTER_PORT=3306,
        -> MASTER_USER='myslave',
        -> MASTER_PASSWORD='mypw';
    

  5. Start replication by issuing the following statement with the slave cluster client (a quick verification check is sketched just after this step):

    mysqlS> START SLAVE;
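
After starting replication, you can optionally verify the setup. The following check is a minimal sketch rather than part of the MySQL Cluster Manager commands themselves; it assumes the same mysqlM and mysqlS sessions used above. On the master, confirm that the replication account exists; on the slave, confirm that both replication threads are running:

    mysqlM> SHOW GRANTS FOR 'myslave'@'flundra';

    mysqlS> SHOW SLAVE STATUS\G

In the SHOW SLAVE STATUS output, Slave_IO_Running and Slave_SQL_Running should both show Yes, and Last_Error should be empty.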
    

The above example assumes that the master and slave clusters are created at about the same time, with no data on either of them before replication starts. If the master cluster has already been operating and contains data when the slave cluster is created, then after step 3 above, follow these steps to transfer the data from the master cluster to the slave cluster and to prepare the slave cluster for replication:

  1. Back up your master cluster using the backup cluster command of MySQL Cluster Manager:

    mcm> backup cluster master;

    Note

    Only NDB tables are backed up by the command; tables using other MySQL storage engines are ignored (a query for spotting such tables is sketched after this list).

  2. Look up the backup ID of the backup you just made by listing all backups for the master cluster:

    mcm> list backups master;
    +----------+--------+---------+---------------------+---------+
    | BackupId | NodeId | Host    | Timestamp           | Comment |
    +----------+--------+---------+---------------------+---------+
    | 1        | 1      | tonfisk | 2014-10-17 20:03:23 |         |
    | 1        | 2      | tonfisk | 2014-10-17 20:03:23 |         |
    | 2        | 1      | tonfisk | 2014-10-17 20:09:00 |         |
    | 2        | 2      | tonfisk | 2014-10-17 20:09:00 |         |
    +----------+--------+---------+---------------------+---------+
    

    From the output, you can see that the latest backup you created has the backup ID 2, and that backup data exists for nodes 1 and 2.

  3. Using the backup ID and the related node IDs, identify the backup files just created under /mcm_data/clusters/cluster_name/node_id/data/BACKUP/BACKUP-backup_id/ in the master cluster's installation directory (in this case, the files under /mcm_data/clusters/master/1/data/BACKUP/BACKUP-2 and /mcm_data/clusters/master/2/data/BACKUP/BACKUP-2), and copy them to the equivalent locations for the slave cluster (in this case, /mcm_data/clusters/slave/1/data/BACKUP/BACKUP-2 and /mcm_data/clusters/slave/2/data/BACKUP/BACKUP-2 under the slave cluster's installation directory). After the copying is finished, use the following command to check that the backup is now available for the slave cluster:

    mcm> list backups slave;
    +----------+--------+---------+---------------------+---------+
    | BackupId | NodeId | Host    | Timestamp           | Comment |
    +----------+--------+---------+---------------------+---------+
    | 2        | 1      | flundra | 2014-10-17 21:19:00 |         |
    | 2        | 2      | flundra | 2014-10-17 21:19:00 |         |
    +----------+--------+---------+---------------------+---------+
    

  4. Restore the backed up data to the slave cluster (note that you need an unused ndbapi slot for the restore cluster command to work):

    mcm> restore cluster --backupid=2 slave;

  5. On the master cluster client, use the following command to identify the correct binary log file and position for replication to start:

    
    mysqlM> SHOW MASTER STATUS\G
    *************************** 1. row ***************************
                 File: binlog.000017
             Position: 2857
         Binlog_Do_DB:
     Binlog_Ignore_DB:
    Executed_Gtid_Set:
    

  6. On the slave cluster client, provide the slave cluster with the connection information for the master cluster, including the binary log file name (with the MASTER_LOG_FILE option) and position (with the MASTER_LOG_POS option) that you discovered in step 5 above:

    mysqlS> CHANGE MASTER TO
        -> MASTER_HOST='tonfisk',
        -> MASTER_PORT=3306,
        -> MASTER_USER='myslave',
        -> MASTER_PASSWORD='mypw',
        -> MASTER_LOG_FILE='binlog.000017',
        -> MASTER_LOG_POS=2857;
    
    

  7. Start replication by issuing the following statement with the slave cluster client:

    mysqlS> START SLAVE;
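
As noted in step 1 of this procedure, only NDB tables are included in a backup made with the backup cluster command; any tables using other MySQL storage engines must be copied to the slave cluster by other means (for example, with mysqldump). As a rough sketch, a query like the following on the master cluster client lists the tables in a given database that such a backup would skip; the database name mydb here is only a placeholder:

    mysqlM> SELECT TABLE_NAME, ENGINE
        ->   FROM INFORMATION_SCHEMA.TABLES
        ->  WHERE TABLE_SCHEMA = 'mydb'       -- placeholder database name
        ->    AND ENGINE <> 'ndbcluster';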
    

As an alternative to the steps above, you can follow the procedure described in NDB Cluster Backups With NDB Cluster Replication to copy the data from the master to the slave and to determine the binary log file and position at which replication should start.
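
As a rough illustration of that approach, the following queries (which assume that the mysql.ndb_apply_status and mysql.ndb_binlog_index tables are present with their default definitions) find the most recent epoch applied on the slave and then, on the master, the corresponding binary log file and position; the value obtained for @latest on the slave must be substituted manually into the query run on the master:

    mysqlS> SELECT @latest:=MAX(epoch)
        ->   FROM mysql.ndb_apply_status;

    mysqlM> SELECT SUBSTRING_INDEX(File, '/', -1) AS binlog_file, Position
        ->   FROM mysql.ndb_binlog_index
        ->  WHERE epoch > @latest    -- use the value found on the slave
        ->  ORDER BY epoch ASC LIMIT 1;

The file and position found this way can then be used with the MASTER_LOG_FILE and MASTER_LOG_POS options of CHANGE MASTER TO, as in step 6 above.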

