MySQL Cluster Manager 1.4 User Manual

Preparing the Standalone Cluster for Migration

The next step in the import process is to prepare the wild cluster for migration. This requires, among other things, removing cluster processes from control by any system service management facility, making sure all management nodes are running with configuration caching disabled, and, for MySQL Cluster Manager 1.4.6 and earlier, killing any data node angel processes that may be running. More detailed information about performing these tasks is provided in the remainder of this section.

  1. Before proceeding with any migration, it is strongly recommended that you take a backup using the ndb_mgm client's START BACKUP command.
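
    For example, a backup might be taken as follows. This is a minimal sketch that assumes ndb_mgm can reach the wild cluster's management node with its default connection settings; the guard simply makes the snippet harmless on a host where the NDB client tools are not installed:

```shell
# Hedged sketch: take a native NDB backup before migrating. "WAIT COMPLETED"
# makes ndb_mgm block until the backup has finished on all data nodes.
# Guarded so the snippet does nothing harmful without the NDB client tools.
if command -v ndb_mgm >/dev/null 2>&1; then
    ndb_mgm -e "START BACKUP WAIT COMPLETED"
else
    echo "ndb_mgm not found; run this on a host with the NDB client tools"
fi
```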

  2. Any cluster processes that are under the control of a system boot process management facility such as /etc/init.d on Linux systems or the Services Manager on Windows platforms should be removed from this facility's control. Consult your operating system's documentation for information about how to do this. Be sure not to stop any running cluster processes in the course of doing so.

  3. Create a MySQL user account on each of the wild cluster's SQL nodes for MySQL Cluster Manager to use when executing the import config and import cluster commands in the steps that follow. The account name and password that MySQL Cluster Manager uses to access MySQL nodes are specified by the mcmd options manager-username and manager-password (the default values are mcmd and super, respectively); use those credentials when creating the account on the wild cluster's SQL nodes, and grant the user all privileges on the server, including the privilege to grant privileges. For example, log in to each of the wild cluster's SQL nodes with the mysql client as root and execute the SQL statements shown here:

    CREATE USER 'mcmd'@'localhost' IDENTIFIED BY 'super';

    GRANT ALL PRIVILEGES ON *.* TO 'mcmd'@'localhost' WITH GRANT OPTION;

    Keep in mind that this must be done on all the SQL nodes, unless distributed privileges are enabled on the wild cluster.

  4. Make sure that every node of the wild cluster has been started with its node ID specified using the --ndb-nodeid option on the command line, not just in the cluster configuration file; this is required for each process to be correctly identified by mcmd during the import. You can check whether this requirement is fulfilled using the ps -ef | grep command, which shows the options with which each process was started:

    shell> ps -ef | grep ndb_mgmd
    ari       8118     1  0 20:51 ?        00:00:04 /home/ari/bin/cluster/bin/ndb_mgmd --config-file=/home/ari/bin/cluster/wild-cluster/config.ini
    --configdir=/home/ari/bin/cluster/wild-cluster --initial --ndb-nodeid=50

    (For clarity's sake, in the command output for the ps -ef | grep command in this and the upcoming sections, we are skipping the line of output for the grep process itself.)

    If the requirement is not fulfilled, restart the process with the --ndb-nodeid option; the restart can also be performed in step 5 or step 6 below for any nodes you are restarting in those steps.
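
    The check above can be scripted. The following is a minimal sketch that assumes ps output of the format shown in the examples in this section: it prints any ndb_mgmd or ndbd process whose command line lacks --ndb-nodeid, and a confirmation message when none is found.

```shell
# Sketch: flag NDB processes started without --ndb-nodeid. The bracketed
# first letter in the pattern keeps grep from matching its own process.
offenders=$(ps -ef | grep -E '[n]db_mgmd|[n]dbd' | grep -v -- '--ndb-nodeid=')
if [ -z "$offenders" ]; then
    echo "no running NDB processes are missing --ndb-nodeid"
else
    printf '%s\n' "$offenders"
fi
```

    Any line this prints identifies a process that must be restarted with --ndb-nodeid on its command line.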

  5. Make sure that the configuration cache is disabled for each management node. Since the configuration cache is enabled by default, unless the management node has been started with the --config-cache=false option, you will need to stop it and restart it with that option, along with any other options it was previously started with.

    On Linux, we can once again use ps to obtain the information we need to accomplish this step. In a shell on the host where the management node resides:

    shell> ps -ef | grep ndb_mgmd
    ari       8118     1  0 20:51 ?        00:00:04 /home/ari/bin/cluster/bin/ndb_mgmd --config-file=/home/ari/bin/cluster/wild-cluster/config.ini
    --configdir=/home/ari/bin/cluster/wild-cluster --initial --ndb-nodeid=50

    The process ID is 8118. The configuration cache is turned on by default, and a configuration directory has been specified using the --configdir option. First, terminate the management node using kill as shown here, with the process ID obtained from ps previously:

    shell> kill -15 8118

    Verify that the management node process was stopped—it should no longer appear in the output of another ps command.

    Now, restart the management node with the configuration cache disabled, along with the other options it was started with. Also, as already stated in step 4 above, make sure that the --ndb-nodeid option is specified at the restart:

    shell> /home/ari/bin/cluster/bin/ndb_mgmd --config-file=/home/ari/bin/cluster/wild-cluster/config.ini --config-cache=false  --ndb-nodeid=50
    MySQL Cluster Management Server mysql-5.7.29-ndb-7.6.13
    2016-11-08 21:29:43 [MgmtSrvr] INFO     -- Skipping check of config directory since config cache is disabled.

    Do not use 0 or OFF for the value of the --config-cache option when restarting ndb_mgmd in this step. Using either of these values instead of false at this time causes the migration of the management node process to fail at a later point in the import process.

    Verify that the process is running as expected, using ps:

    shell> ps -ef | grep ndb_mgmd
    ari      10221     1  0 19:38 ?        00:00:09 /home/ari/bin/cluster/bin/ndb_mgmd --config-file=/home/ari/bin/cluster/wild-cluster/config.ini --config-cache=false --ndb-nodeid=50

    The management node is now ready for migration.


    While our example cluster has only a single management node, it is possible for a MySQL NDB Cluster to have more than one. In such cases, you must make sure that the configuration cache is disabled for each management node, using the procedure described in this step.
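
    Rather than copying the process ID by hand, you can extract it from the ps output. Below is a minimal sketch run against the sample line captured earlier in this step; on a live host you would pipe ps -ef | grep '[n]db_mgmd' into awk instead, and pass the result to kill -15:

```shell
# Sketch: extract the PID (second column of "ps -ef" output) and form the
# kill command. The sample line is the one captured earlier in this section.
sample='ari       8118     1  0 20:51 ?        00:00:04 /home/ari/bin/cluster/bin/ndb_mgmd --config-file=/home/ari/bin/cluster/wild-cluster/config.ini'
pid=$(printf '%s\n' "$sample" | awk '{print $2}')
echo "kill -15 $pid"
```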

  6. For MySQL Cluster Manager 1.4.6 and earlier: Kill each data node angel process using the system's facility for doing so. The angel process monitors a data node process during a cluster's operation and, if necessary, attempts to restart the data node process (see the MySQL NDB Cluster FAQ for details). Before a cluster can be imported, the angel processes must be stopped. On a Linux system, you can identify angel processes in the output of the ps -ef command executed on the processes' host; here is an example of doing so on one of the sample cluster's data node hosts:

    shell> ps -ef | grep ndbd
    ari      12836     1  0 20:52 ?        00:00:00 ./bin/ndbd --initial --ndb-nodeid=2 --ndb-connectstring=
    ari      12838 12836  2 20:52 ?        00:00:00 ./bin/ndbd --initial --ndb-nodeid=2 --ndb-connectstring=

    While both the actual data node process and its angel process appear in this output as ndbd processes, you can distinguish them by their process IDs. The process ID of the angel process (12836 in the sample output above) appears twice in the command output: once as the process's own ID (in the first line of the output), and once as the parent process ID of the actual data node daemon (in the second line). Use the kill command to terminate the process with that process ID, like this:

    shell> kill -9 12836

    Verify that the angel process has been killed and the other ndbd process (the non-angel data node daemon) is still running by issuing the ps -ef command again, as shown here:

    shell> ps -ef | grep ndbd
    ari      12838     1  0 20:52 ?        00:00:02 ./bin/ndbd --initial --ndb-nodeid=2 --ndb-connectstring=

    Now repeat this process in a shell on the other data node's host, as shown here:

    shell> ps -ef | grep ndbd
    ari      11274     1  0 20:57 ?        00:00:00 ./cluster//bin/ndbd --initial --ndb-nodeid=3 --ndb-connectstring=
    ari      11276 11274  0 20:57 ?        00:00:01 ./cluster//bin/ndbd --initial --ndb-nodeid=3 --ndb-connectstring=
    shell> kill -9 11274
    shell> ps -ef | grep ndbd
    ari      11276     1  0 20:57 ?        00:00:01 ./cluster//bin/ndbd --initial --ndb-nodeid=3 --ndb-connectstring=
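
    The identification rule above (the angel is the ndbd whose process ID reappears as the parent process ID of another ndbd) can be expressed as a small awk sketch. It is shown here run against the captured sample lines from the first host; on a live host you would feed it ps -ef | grep '[n]dbd' instead:

```shell
# Sketch: find the angel ndbd automatically. Column 2 of "ps -ef" is the PID
# and column 3 is the parent PID; the angel is the ndbd whose PID also
# appears as another ndbd's parent PID.
awk '{ pid[NR] = $2; parent[$3] = 1 }
     END { for (i = 1; i <= NR; i++) if (parent[pid[i]]) print "angel PID:", pid[i] }' <<'EOF'
ari      12836     1  0 20:52 ?        00:00:00 ./bin/ndbd --initial --ndb-nodeid=2
ari      12838 12836  2 20:52 ?        00:00:00 ./bin/ndbd --initial --ndb-nodeid=2
EOF
```

    The printed PID is the one to pass to kill -9.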

    For MySQL Cluster Manager 1.4.7 and later: There is no need to kill the angel processes manually, as they are taken care of when the --remove-angel option is used with the import cluster command in the last step of the import process.

    The wild cluster's data nodes are now ready for migration.