In the event that the primary Cluster replication process fails, it is possible to switch over to the secondary replication channel. The following procedure describes the steps required to accomplish this.
Obtain the time of the most recent global checkpoint (GCP). That is, you need to determine the most recent epoch from the ndb_apply_status table on the slave cluster, which can be found using the following query:
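A minimal form of this query, assuming the standard mysql.ndb_apply_status table, selects the greatest applied epoch into a user variable:

```sql
-- On the slave cluster's SQL node: capture the most recent
-- applied epoch in a user variable for use in the next step.
SELECT @latest:=MAX(epoch)
    FROM mysql.ndb_apply_status;
```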
In a circular replication topology, with a master and a slave running on each host, when you are using the --ndb-log-apply-status option, information about Cluster epochs is written in the slave binary logs. This means that the ndb_apply_status table contains information for the slave on this host as well as for any other host which acts as a slave of the master running on this host.
In this case, you need to determine the latest epoch on this slave to the exclusion of any epochs from any other slaves in this slave's binary log that were not listed in the IGNORE_SERVER_IDS options of the CHANGE MASTER TO statement used to set up this slave. The reason for excluding such epochs is that rows in the ndb_apply_status table whose server IDs have a match in the IGNORE_SERVER_IDS list used with the CHANGE MASTER TO statement used to prepare this slave's master are also considered to be from local servers, in addition to those having the slave's own server ID. You can retrieve this list as Replicate_Ignore_Server_Ids from the output of SHOW SLAVE STATUS. We assume that you have obtained this list and are substituting it for ignore_server_ids in the query shown here, which, like the previous version of the query, selects the greatest epoch into a variable named @latest:

SELECT @latest:=MAX(epoch)
    FROM mysql.ndb_apply_status
    WHERE server_id NOT IN (ignore_server_ids);
In some cases, it may be simpler or more efficient (or both) to use a list of the server IDs to be included and server_id IN server_id_list in the WHERE condition of the preceding query.
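The inclusive variant can be sketched as follows; here server_id_list is a placeholder (not an actual identifier) for your comma-separated list of local server IDs:

```sql
-- Inclusive variant: keep only epochs applied by known local servers.
-- server_id_list is a placeholder, e.g. 1,3 -- substitute your own IDs.
SELECT @latest:=MAX(epoch)
    FROM mysql.ndb_apply_status
    WHERE server_id IN (server_id_list);
```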
Using the information obtained from the query shown in Step 1, obtain the corresponding records from the ndb_binlog_index table on the master cluster. Prior to MySQL Cluster NDB 7.2.6, you should use the following query to accomplish this task:
SELECT
    @file:=SUBSTRING_INDEX(File, '/', -1),
    @pos:=Position
FROM mysql.ndb_binlog_index
WHERE epoch > @latest
ORDER BY epoch ASC LIMIT 1;
Beginning with MySQL Cluster NDB 7.2.6, you can take advantage of the improved binary logging of DDL statements implemented in those and later versions by using the following query to obtain the needed records from the master's ndb_binlog_index table:

SELECT
    @file:=SUBSTRING_INDEX(next_file, '/', -1),
    @pos:=next_position
FROM mysql.ndb_binlog_index
WHERE epoch = @latest
ORDER BY epoch ASC LIMIT 1;
In either case, these are the records saved on the master
since the failure of the primary replication channel. We have
employed a user variable
@latest here to
represent the value obtained in Step 1. Of course, it is not
possible for one mysqld instance to access
user variables set on another server instance directly. These
values must be “plugged in” to the second query
manually or in application code.
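As an illustration of plugging in the value manually, suppose Step 1 on the slave returned an epoch of 4134182092995 (a purely hypothetical figure); the NDB 7.2.6+ form of the query run on the master would then substitute that literal for @latest:

```sql
-- The epoch literal below is hypothetical; substitute the value
-- actually returned by the query in Step 1 on the slave.
SELECT
    @file:=SUBSTRING_INDEX(next_file, '/', -1),
    @pos:=next_position
FROM mysql.ndb_binlog_index
WHERE epoch = 4134182092995
ORDER BY epoch ASC LIMIT 1;
```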
If (and only if) you use the second of the two queries just shown against ndb_binlog_index (that is, the query that employs the next_position and next_file columns), you must ensure that the slave mysqld is started with --slave-skip-errors=ddl_exist_errors before executing START SLAVE. Otherwise, replication may stop with duplicate DDL errors.
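One way to arrange this (a sketch; adapt it to your deployment) is to set the option in the slave's configuration file rather than on the command line:

```
[mysqld]
# Skip duplicate-DDL errors on the secondary slave, as required when
# using the next_file / next_position based failover query.
slave-skip-errors=ddl_exist_errors
```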
Now it is possible to synchronize the secondary channel by running the following query on the secondary slave server:
CHANGE MASTER TO
    MASTER_LOG_FILE='@file',
    MASTER_LOG_POS=@pos;
Again we have employed user variables (in this case @file and @pos) to represent the values obtained in Step 2 and applied in Step 3; in practice these values must be inserted manually or using application code that can access both of the servers involved.
The value of @file is a string (a binary log file name), and so must be quoted when used in SQL or application code. However, the value represented by @pos must not be quoted. Although MySQL normally attempts to convert strings to numbers, this case is an exception.
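For example, with hypothetical values plugged in (both the file name and the position here are illustrations only), note that the file name is quoted while the numeric position is not:

```sql
-- Both values below are hypothetical illustrations of the quoting rule:
-- the log file name must be quoted, the numeric position must not be.
CHANGE MASTER TO
    MASTER_LOG_FILE='master-bin.000007',
    MASTER_LOG_POS=254;
```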
You can now initiate replication on the secondary channel by issuing a START SLAVE statement on the secondary slave mysqld.
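The statement in question is the standard START SLAVE:

```sql
-- Run on the secondary slave mysqld to begin applying events
-- from the position set by the CHANGE MASTER TO statement.
START SLAVE;
```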
Once the secondary replication channel is active, you can investigate the failure of the primary and effect repairs. The precise actions required to do this will depend upon the reasons for which the primary channel failed.
The secondary replication channel is to be started only if and when the primary replication channel has failed. Running multiple replication channels simultaneously can result in unwanted duplicate records being created on the replication slaves.
If the failure is limited to a single server, it should (in theory) be possible to replicate from M to S', or from M' to S; however, this has not yet been tested.