The next step in the import process is to prepare the
“wild” cluster for migration. This requires: creating a
mcmd user account with root
privileges on all hosts in the cluster; killing any data node
angel processes that may be running; restarting all management
nodes without configuration caching; and removing cluster
processes from control by any system service management
facility. More detailed information about performing these
tasks is provided in the remainder of this section.
MySQL Cluster Manager acts through a MySQL user named
mcmd on each SQL node. It is therefore necessary to create this user and grant root privileges to it. To do this, log in to the SQL node running on host
delta and execute in the mysql client the SQL statements shown here:

CREATE USER 'mcmd'@'delta' IDENTIFIED BY 'super';

GRANT ALL PRIVILEGES ON *.* TO 'mcmd'@'delta'
    WITH GRANT OPTION;
Keep in mind that, if the “wild” cluster has more than one SQL node, you must create the
mcmd user on every one of these nodes.
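When there are several SQL nodes, you can generate the required statements for each host and pipe them into the mysql client on that host. The following is only a sketch; the helper name make_mcmd_grants and the second host name are illustrative, not part of MySQL Cluster Manager:

```shell
# Illustrative helper (not part of MySQL Cluster Manager): emit the
# CREATE USER and GRANT statements for each SQL node host given.
make_mcmd_grants() {
    for host in "$@"; do
        printf "CREATE USER 'mcmd'@'%s' IDENTIFIED BY 'super';\n" "$host"
        printf "GRANT ALL PRIVILEGES ON *.* TO 'mcmd'@'%s' WITH GRANT OPTION;\n" "$host"
    done
}

# Example: review the statements for two SQL node hosts
# (epsilon is a hypothetical second host).
make_mcmd_grants delta epsilon
```

You could then run, for example, make_mcmd_grants delta | mysql -h delta -u root -p, repeating for each SQL node host in turn.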
Kill each data node angel process using the system's facility for doing so; take care not to kill any non-angel data node daemons. On a Linux system, you can identify the angel processes by matching their process IDs with the parent process IDs of the remaining ndbd processes in the output of ps, executed here on host
beta of the example cluster. In this output, the angel process has ID 2023, which also appears as the parent process ID of the remaining ndbd process, 2024:

$> ps -ef | grep ndbd
jon   2023     1  0 18:46 ?      00:00:00 ./ndbd -c alpha
jon   2024  2023  1 18:46 ?      00:00:00 ./ndbd -c alpha
jon   2124  1819  0 18:46 pts/2  00:00:00 grep --color=auto ndbd
Use the kill command to terminate the process with the indicated process ID, like this:
$> kill -9 2023
Verify that the angel process has been killed, and that only one of the two original ndbd processes remains, by issuing ps again, as shown here:

$> ps -ef | grep ndbd
jon   2024     1  1 18:46 ?      00:00:01 ./ndbd -c alpha
jon   2150  1819  0 18:47 pts/2  00:00:00 grep --color=auto ndbd
Now repeat this process from a login shell on host
gamma, as shown here:

$> ps -ef | grep ndbd
jon   2066     1  0 18:46 ?      00:00:00 ./ndbd -c alpha
jon   2067  2066  1 18:46 ?      00:00:00 ./ndbd -c alpha
jon   3712  1704  0 18:46 pts/2  00:00:00 grep --color=auto ndbd

$> kill -9 2066

$> ps -ef | grep ndbd
jon   2067     1  1 18:46 ?      00:00:01 ./ndbd -c alpha
jon   2150  1819  0 18:47 pts/2  00:00:00 grep --color=auto ndbd
The wild cluster's data nodes are now ready for migration.
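The manual matching of parent and child process IDs shown above can also be scripted. The following sketch (the function name find_ndbd_angels is ours, not a standard tool) reads lines of the form PID PPID COMMAND, such as those produced by ps -eo pid,ppid,comm, and prints the IDs of ndbd processes that are parents of other ndbd processes, that is, the angels:

```shell
# Illustrative helper: read "PID PPID COMMAND" lines and print the IDs
# of ndbd processes that are the parent of another ndbd process (the
# angel processes). Feed it, e.g., the output of: ps -eo pid,ppid,comm
find_ndbd_angels() {
    awk '$3 == "ndbd" { is_ndbd[$1] = 1; ppid[$1] = $2 }
         END { for (p in ppid) if (ppid[p] in is_ndbd) print ppid[p] }'
}

# Example using the process listing from host beta shown earlier:
printf '2023 1 ndbd\n2024 2023 ndbd\n2124 1819 grep\n' | find_ndbd_angels
# prints: 2023
```

On a live host you would run ps -eo pid,ppid,comm | find_ndbd_angels and then kill each printed process ID, double-checking the ps output first as described above.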
Kill and restart each management node process. When restarting ndb_mgmd, its configuration cache must be disabled; since caching is enabled by default, you must start the management server with
--config-cache=false, in addition to any other options that it was previously started with.

Caution
Do not use
0 or OFF for the value of the
--config-cache option when restarting ndb_mgmd in this step. Using either of these values instead of
false at this time causes the migration of the management node process to fail at a later point in the import process.
On Linux, we can once again use ps to obtain the information we need to accomplish this, this time in a shell on host alpha:

$> ps -ef | grep ndb_mgmd
jon   16005     1  1 18:46 ?      00:00:09 ./ndb_mgmd -f /etc/mysql-cluster/config.ini
jon   16401  1819  0 18:58 pts/2  00:00:00 grep --color=auto ndb_mgmd
The process ID is 16005, and the management node was started with the
-f option (the short form of
--config-file). First, terminate the management node using kill, as shown here, with the process ID obtained from ps previously:
$> kill -9 16005
Verify that the management node process was killed, like this:
$> ps -ef | grep ndb_mgmd
jon   16532  1819  0 19:03 pts/2  00:00:00 grep --color=auto ndb_mgmd
Now restart the management node with the same options that it was previously started with, plus the configuration cache disabled. Change to the directory where ndb_mgmd is located, and restart it, like this:

$> ./ndb_mgmd -f /etc/mysql-cluster/config.ini --config-cache=false
MySQL Cluster Management Server mysql-5.6.24-ndb-7.4.6
2013-12-06 19:16:08 [MgmtSrvr] INFO     -- Skipping check of config directory since config cache is disabled.
Verify that the process is running as expected, using ps:

$> ps -ef | grep ndb_mgmd
jon   17066     1  1 19:16 ?      00:00:01 ./ndb_mgmd -f /etc/mysql-cluster/config.ini --config-cache=false
jon   17311  1819  0 19:17 pts/2  00:00:00 grep --color=auto ndb_mgmd
The management node is now ready for migration.

Important

While our example cluster has only a single management node, it is possible for a MySQL Cluster to have more than one. In such cases, you must stop and restart each management node process as just described in this step.
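When restarting several management nodes, it is easy to miss the required option on one of them. As a quick sanity check, you can test whether a node's command line, as reported by ps, contains --config-cache=false. A minimal sketch, where the function name cache_disabled is ours:

```shell
# Illustrative check: succeed only if the given command line contains
# --config-cache=false, the exact value this procedure requires
# (0 and OFF must not be used here).
cache_disabled() {
    case "$1" in
        *--config-cache=false*) return 0 ;;
        *)                      return 1 ;;
    esac
}
```

You might apply it to a running process with, for example, cache_disabled "$(ps -o args= -p PID)", substituting the management node's process ID.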
Any cluster processes that are under the control of a system boot process management facility, such as
/etc/init.d on Linux systems or the Services Manager on Windows platforms, should be removed from that facility's control. Consult your operating system's documentation for information about how to do this. Be sure not to stop any running cluster processes in the course of doing so.
Create a backup of the wild cluster's data using the START BACKUP command in the ndb_mgm management client, as shown here:

ndb_mgm> START BACKUP
Waiting for completed, this may take several minutes
Node 5: Backup 1 started from node 1
Node 5: Backup 1 started from node 1 completed
 StartGCP: 1338 StopGCP: 20134
 #Records: 205044 #LogRecords: 10112
 Data: 492807474 bytes Log: 317805 bytes
It may require some time for the backup to complete, depending on the size of the cluster's data and logs. For
START BACKUP command options and additional information, see Using The NDB Cluster Management Client to Create a Backup.
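If you later need to refer to this backup, for example to restore it with ndb_restore, you will need its backup ID (1 in the output shown above). When you capture the management client's output to a file, the ID can be extracted with a small script. A sketch, assuming output in the format shown above (the helper name backup_id is ours):

```shell
# Illustrative helper: print the backup ID from captured START BACKUP
# output, taking it from the "Backup N started from node M completed" line.
backup_id() {
    sed -n 's/.*Backup \([0-9][0-9]*\) started from node [0-9][0-9]* completed.*/\1/p' | head -n 1
}

# Example with the completion line from the transcript above:
printf 'Node 5: Backup 1 started from node 1 completed\n' | backup_id
# prints: 1
```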