It is possible to restore from an NDB backup to a cluster having a different number of data nodes than the original from which the backup was taken. The following two sections discuss, respectively, the cases where the target cluster has a lesser or greater number of data nodes than the source of the backup.
You can restore to a cluster having fewer data nodes than the original, provided that the larger number of nodes is an even multiple of the smaller number. In the following example, we restore a backup taken on a cluster having four data nodes to a cluster having two data nodes.
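The even-multiple rule can be checked with a few lines of shell arithmetic. This is only an illustrative sketch, using the node counts from this example:

```shell
# Sketch: verify that the source node count is an even multiple of the target
# node count, and compute how many complete backup sets each target data node
# must restore (4 source nodes, 2 target nodes in this example).
SRC_NODES=4
TGT_NODES=2
if [ $((SRC_NODES % TGT_NODES)) -ne 0 ]; then
    echo "cannot restore: $SRC_NODES is not an even multiple of $TGT_NODES"
    exit 1
fi
echo "each target node restores $((SRC_NODES / TGT_NODES)) backup sets"
```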
The management server for the original cluster is on host host10. The original cluster has four data nodes, with the node IDs and host names shown in the following extract from the management server's config.ini file:

[ndbd]
NodeId=2
HostName=host2

[ndbd]
NodeId=4
HostName=host4

[ndbd]
NodeId=6
HostName=host6

[ndbd]
NodeId=8
HostName=host8
We assume that each data node was originally started with ndbmtd --ndb-connectstring=host10 or the equivalent.

Perform a backup in the normal manner. See Section 18.5.3.2, “Using The NDB Cluster Management Client to Create a Backup”, for information about how to do this.
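For instance, a backup with ID 1 could be started from the system shell as follows. This is a sketch, assuming the NDB management client is installed on the host you run it from and that the management node host10 is reachable:

```shell
# Sketch: take backup 1 on the source cluster (management node host10), if the
# NDB management client is available on this host. WAIT COMPLETED makes
# ndb_mgm block until the backup has finished on all data nodes.
MGM_HOST=host10
BACKUP_ID=1
if command -v ndb_mgm >/dev/null 2>&1; then
    ndb_mgm --ndb-connectstring="$MGM_HOST" -e "START BACKUP $BACKUP_ID WAIT COMPLETED"
else
    echo "ndb_mgm not found; run this where the NDB tools are installed"
fi
```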
The files created by the backup on each data node are listed here, where N is the node ID and B is the backup ID:

BACKUP-B-0.N.Data
BACKUP-B.N.ctl
BACKUP-B.N.log

These files are found under BackupDataDir/BACKUP/BACKUP-B on each data node. For the rest of this example, we assume that the backup ID is 1.
Have all of these files available for later copying to the new data nodes (where they can be accessed on the data node's local file system by ndb_restore). It is simplest to copy them all to a single location; we assume that this is what you have done.
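As a sketch of this step, the following simulates the twelve files that the four source nodes produce for backup ID 1 and gathers them into one staging directory. The file names are the real ones from this example; the local paths are hypothetical, and in practice you would copy the files from each node's BackupDataDir with scp or rsync:

```shell
# Simulation only: create the backup files written by nodes 2, 4, 6, and 8
# for backup ID 1, then collect all twelve in a single staging directory.
workdir=$(mktemp -d)
cd "$workdir"
mkdir staging
for n in 2 4 6 8; do
    touch "BACKUP-1-0.$n.Data" "BACKUP-1.$n.ctl" "BACKUP-1.$n.log"
    mv BACKUP-1*."$n".* staging/
done
ls staging | wc -l    # twelve files in all
```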
The management server for the target cluster is on host host20, and the target has two data nodes, with the node IDs and host names shown in this extract from the management server's config.ini file on host20:

[ndbd]
NodeId=3
HostName=host3

[ndbd]
NodeId=5
HostName=host5
Each of the data node processes on host3 and host5 should be started with ndbmtd -c host20 --initial or the equivalent, so that the new (target) cluster starts with clean data node file systems.

Copy two different sets of two backup files to each of the target data nodes. For this example, copy the backup files from nodes 2 and 4 on the original cluster to node 3 in the target cluster. These files are listed here:
BACKUP-1-0.2.Data
BACKUP-1.2.ctl
BACKUP-1.2.log
BACKUP-1-0.4.Data
BACKUP-1.4.ctl
BACKUP-1.4.log
Then copy the backup files from nodes 6 and 8 to node 5; these files are shown in the following list:
BACKUP-1-0.6.Data
BACKUP-1.6.ctl
BACKUP-1.6.log
BACKUP-1-0.8.Data
BACKUP-1.8.ctl
BACKUP-1.8.log
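The distribution of backup sets to target hosts can be expressed as a small helper, shown here as a sketch; the mapping (nodes 2 and 4 to host3, nodes 6 and 8 to host5) and the /BACKUP-1 directory are taken from this example, and the copy itself is only printed rather than performed:

```shell
# Sketch: compute the target host for each source node's backup set.
# Replace the echo with scp/rsync to perform the actual copy.
target_for() {
    case "$1" in
        2|4) echo host3 ;;
        6|8) echo host5 ;;
        *)   echo unknown ;;
    esac
}
for n in 2 4 6 8; do
    echo "copy BACKUP-1*.$n.* to $(target_for "$n"):/BACKUP-1/"
done
```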
For the remainder of this example, we assume that the respective backup files have been saved to the directory /BACKUP-1 on each of nodes 3 and 5.

On each of the two target data nodes, you must restore from both sets of backups. First, restore the backups from nodes 2 and 4 to node 3 by invoking ndb_restore on host3 as shown here:

shell> ndb_restore -c host20 --nodeid=2 --backupid=1 --restore-data --backup_path=/BACKUP-1
shell> ndb_restore -c host20 --nodeid=4 --backupid=1 --restore-data --backup_path=/BACKUP-1
Then restore the backups from nodes 6 and 8 to node 5 by invoking ndb_restore on host5, like this:

shell> ndb_restore -c host20 --nodeid=6 --backupid=1 --restore-data --backup_path=/BACKUP-1
shell> ndb_restore -c host20 --nodeid=8 --backupid=1 --restore-data --backup_path=/BACKUP-1
The node ID specified for a given ndb_restore command is that of the node in the original backup, and not that of the data node to restore it to. When restoring a backup using the method described in this section, ndb_restore connects to the management server and obtains a list of data nodes in the cluster the backup is being restored to. The restored data is distributed accordingly, so that the number of nodes in the target cluster does not need to be known or calculated when performing the backup.
When changing the total number of LCP threads or LQH threads per node group, you should recreate the schema from a backup created using mysqldump.
Create the backup of the data. You can do this by invoking the ndb_mgm client START BACKUP command from the system shell, like this:

shell> ndb_mgm -e "START BACKUP 1"

This assumes that the desired backup ID is 1.
Create a backup of the schema (see also below):
shell> mysqldump --no-data --routines --events --triggers --databases > myschema.sql
Important: You must not make any schema changes between the first and second steps.
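As a quick sanity check on the schema dump, a file produced with --no-data should contain table definitions but no row data. The following sketch uses a stand-in file so it is self-contained; run the same greps against your real myschema.sql:

```shell
# Sketch: a schema-only dump defines objects but carries no INSERT statements.
# Stand-in contents shown here (hypothetical tables t1 and t2).
cat > myschema.sql <<'EOF'
CREATE TABLE t1 (a INT PRIMARY KEY) ENGINE=NDBCLUSTER;
CREATE TABLE t2 (b INT PRIMARY KEY) ENGINE=NDBCLUSTER;
EOF
grep -c 'CREATE TABLE' myschema.sql         # number of tables in the dump
grep -c 'INSERT INTO' myschema.sql || true  # should report 0 for --no-data
```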
Copy the backup directories from above to the new cluster. For example, if the backup you want to restore has ID 1 and BackupDataDir = /backups/node_nodeid (where nodeid is the data node's node ID), then the path to the backup on this node is /backups/node_1/BACKUP/BACKUP-1. Inside this directory there are three files, listed here:

BACKUP-1-0.1.Data
BACKUP-1.1.ctl
BACKUP-1.1.log
You should copy the entire directory to the new node.
There is no requirement for the backup to be restored from a specific node or nodes.
To restore from the backup just created, perform the following steps:
Restore the schema. Import the schema file using the mysql client, as shown here:
shell> mysql < myschema.sql
Restore the data. The following commands can be run in parallel:

ndb_restore --nodeid=1 --backupid=1 --restore-data --backup_path=/backups/node_1/BACKUP/BACKUP-1 --disable-indexes
ndb_restore --nodeid=2 --backupid=1 --restore-data --backup_path=/backups/node_2/BACKUP/BACKUP-1 --disable-indexes
ndb_restore --nodeid=3 --backupid=1 --restore-data --backup_path=/backups/node_3/BACKUP/BACKUP-1 --disable-indexes
ndb_restore --nodeid=4 --backupid=1 --restore-data --backup_path=/backups/node_4/BACKUP/BACKUP-1 --disable-indexes
ndb_restore --nodeid=5 --backupid=1 --restore-data --backup_path=/backups/node_5/BACKUP/BACKUP-1 --disable-indexes
ndb_restore --nodeid=6 --backupid=1 --restore-data --backup_path=/backups/node_6/BACKUP/BACKUP-1 --disable-indexes
ndb_restore --nodeid=7 --backupid=1 --restore-data --backup_path=/backups/node_7/BACKUP/BACKUP-1 --disable-indexes
ndb_restore --nodeid=8 --backupid=1 --restore-data --backup_path=/backups/node_8/BACKUP/BACKUP-1 --disable-indexes
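Since these invocations differ only in the node ID and path, they can be generated from a loop rather than typed out. This sketch only prints the commands, using the BackupDataDir layout assumed in this example; to actually run them in parallel you might pipe the output to xargs -P or append & to each:

```shell
# Sketch: emit one ndb_restore invocation per original data node (IDs 1-8).
gen_restore_cmds() {
    for n in 1 2 3 4 5 6 7 8; do
        echo "ndb_restore --nodeid=$n --backupid=1 --restore-data --backup_path=/backups/node_$n/BACKUP/BACKUP-1 --disable-indexes"
    done
}
gen_restore_cmds
```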
Add the --ndb-connectstring option as needed.

If, for example, in step 3 you copied the backups from the “old” nodes having node IDs 1 and 2 to a “new” node whose node ID is 1, you should perform both invocations of ndb_restore (with --nodeid=1 and --nodeid=2) on the new node that has 1 as its node ID.

Rebuild the indexes. These were disabled by the --disable-indexes option used in the commands just shown. Recreating the indexes at the end avoids errors due to the restore not being consistent at all points. Rebuilding the indexes can also improve performance in some cases. To rebuild the indexes, execute the following command once, on a single node:

shell> ndb_restore --nodeid=1 --backupid=1 --backup_path=/backups/node_1/BACKUP/BACKUP-1 --rebuild-indexes
You should be aware that the supported number of partitions in each table depends on the number of data nodes, node groups, and LDM threads in the cluster. Other conditions (such as the values of MaxNoOfExecutionThreads, ThreadConfig, NoOfReplicas, and so on) being the same, a cluster with (for example) two data nodes supports fewer partitions than a cluster with eight data nodes supports. This means that using ndb_restore --restore-meta to restore the schema does not always work, since this restores a given table with the same number of partitions as in the original; it is safer to restore the schema from a backup written by mysqldump, as in the example shown previously, when restoring to a cluster having fewer data nodes, LDM threads, or both, than were used in the original cluster.
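To illustrate why the partition ceiling shrinks with the cluster, the NDB manual gives the maximum number of explicitly defined partitions per table as 8 multiplied by the number of LDM threads per node and the number of node groups; treat that formula as an assumption to verify against your NDB version. A sketch of the arithmetic:

```shell
# Sketch (formula assumed from the NDB documentation; verify for your version):
# max partitions per table = 8 * LDM threads per node * node groups.
max_partitions() {
    echo $((8 * $1 * $2))   # $1 = LDM threads per node, $2 = node groups
}
echo "2 data nodes, 4 LDM threads, 1 node group:  $(max_partitions 4 1)"
echo "8 data nodes, 4 LDM threads, 4 node groups: $(max_partitions 4 4)"
```

With NoOfReplicas=2, two data nodes form one node group while eight form four, which is why the smaller cluster supports fewer partitions under otherwise identical settings.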
The support for fewer partitions when restoring to a smaller cluster also means the maximum number of rows per table is lower. However, with the larger hash maps available in MySQL Cluster 7.2.9 and later (used by default for new tables), this is not likely to be an issue.