MCM 1.3 is GA - Importing a running Cluster into MySQL Cluster Manager

MySQL Cluster Manager 1.3.0 is now Generally Available and can be downloaded from the Oracle Software Delivery Cloud. The release contains a number of enhancements including performance improvements, handling larger clusters and of course bug fixes. The other big feature is that you can now import an existing, running MySQL Cluster instance into MCM without having to stop it first – this is the topic for this post.

In the past, we had a nice browser-based tool (the MySQL Cluster Auto-Installer) to get a well configured cluster up and running (tuned to your environment) and we also had MySQL Cluster Manager to simplify the ongoing management of the cluster. Unfortunately, if you wanted to bring a cluster you’d created with the auto-installer (or built by hand) under MCM’s control, you first had to shut it down and then follow a manual procedure. MCM 1.3 introduces an import command that takes a running cluster and brings it under the control of MCM without having to stop the cluster (or suspend reads or writes). There are still some manual steps involved and the first half of this post will step through this process:

Once the import has been completed, the post will then step through a number of MCM tasks to test that everything has gone to plan and also to give a reminder of how simple operations such as upgrades, backup/restore and adding new nodes are once you’re using MCM.

 

Specify Cluster topology in MCM

For this example, a cluster is used that’s been created using the auto-installer that was a part of MySQL Cluster 7.3.2 (the version is significant as, from MySQL Cluster 7.3.3 onwards, the auto-installer creates .cnf configuration files for the mysqld processes rather than specifying everything on the command-line – that simplifies the import process).

Before going any further, some data is added to the database so that I can later check that it’s not been lost:
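The exact schema isn’t preserved in this archived post; as a minimal sketch of the kind of test data used (the database name, table name and host details are illustrative):

    $ mysql -h 192.168.1.10 -P 3306 -u root -e "CREATE DATABASE IF NOT EXISTS clusterdb;"
    $ mysql -h 192.168.1.10 -P 3306 -u root -e "CREATE TABLE clusterdb.simples (id INT NOT NULL PRIMARY KEY) ENGINE=NDB;"
    $ mysql -h 192.168.1.10 -P 3306 -u root -e "INSERT INTO clusterdb.simples VALUES (1),(2),(3),(4);"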

The topology of the resulting cluster can be checked using the show command:
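For example, using the ndb_mgm client (the management node’s connect-string is illustrative):

    $ ndb_mgm -c 192.168.1.10:1186 -e show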

To define (but not create) the cluster in MCM, the following entities need to be defined:

  • site: the list of hosts that the cluster will run on
  • package: location of the cluster binaries on each of the hosts in the site
  • cluster: the cluster itself (the collection of nodes/processes that make up the cluster)

The MCM daemon mcmd must first be started on each of the target hosts and the mcm client run on any host before creating each of these entities:
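A sketch of what those steps look like – the host addresses, paths, package name (7.3.2) and cluster name (mycluster) are assumptions that are reused throughout the rest of this post’s examples:

    # On each of the target hosts, start the MCM daemon
    $ mcmd &

    # From the mcm client on any one of the hosts
    $ mcm
    mcm> create site --hosts=192.168.1.10,192.168.1.11 mysite;
    mcm> add package --basedir=/usr/local/mysql_7_3_2 7.3.2;
    mcm> create cluster --import --package=7.3.2
         --processhosts=ndb_mgmd@192.168.1.10,ndbmtd@192.168.1.10,ndbmtd@192.168.1.11,mysqld@192.168.1.10,mysqld@192.168.1.11,ndbapi@*,ndbapi@*
         mycluster;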

This post doesn’t attempt to go into details about all of the commands shown above but if you’re new to MCM then check out the MySQL Cluster documentation. The --import option sets the state of each of the nodes to import; they will stay in that state until the import command is run later.

When the auto-installer created the cluster, it changed a number of the configuration parameters from their defaults (either with command-line options or using configuration files). Using the Linux ps -ef command you can identify all of these settings (either directly or by examining the configuration files that are referenced):
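For example:

    $ ps -ef | grep ndb_mgmd
    $ ps -ef | grep ndbmtd
    $ ps -ef | grep mysqld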

Now that we have a view of all of the configuration parameters, a subset of them need to be applied to the definition of the cluster in MCM (or else MCM will override them). Note that not all of the settings need porting to MCM – for example PortNumber=1186, as that is already the default value, and configdir, as MCM will use its own. So, based on the command-line options provided to the executables and the configuration files, the configuration parameters for the cluster defined in MCM are set:
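As a sketch of the syntax (the attribute values below are illustrative rather than the original cluster’s actual settings):

    mcm> set DataMemory:ndbmtd=80M,IndexMemory:ndbmtd=18M mycluster;
    mcm> set datadir:mysqld:50=/home/billy/my_cluster/mysqld_50_data mycluster;
    mcm> set port:mysqld:50=3306,port:mysqld:51=3307 mycluster;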

Note that as MCM is not yet managing the running cluster, you can break this up into multiple SET commands as it doesn’t need to restart any processes.
 

Prepare running cluster for import

As MCM takes over some functions from MySQL Cluster, we need to ensure that a couple of rules are imposed on the cluster so that there are no conflicts.

MCM is responsible for making sure that the management nodes are using the correct version of the configuration data and so we don’t want the management nodes holding onto older versions – this leads to the first rule, that the configuration cache should be disabled. This means restarting the management nodes with the config-cache parameter set to FALSE:
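A sketch for one management node – the node ID, connect-string and paths are illustrative; the node is stopped through ndb_mgm and then restarted by hand with the configuration cache disabled:

    $ ndb_mgm -c 192.168.1.10:1186 -e "49 STOP"
    $ ndb_mgmd --ndb-nodeid=49 --config-file=/home/billy/my_cluster/config.ini \
        --configdir=/home/billy/my_cluster --config-cache=FALSE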

MCM is responsible for making sure that a node is restarted in the event of the process stopping (which only happens if StopOnError is set to FALSE) and so the data nodes no longer need their angel processes (the first of the 2 ndbmtd processes we see for each data node). This means the second rule is that all of these angel processes must be killed:
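For example (the PID to use is whatever ps reports for the parent of the pair of ndbmtd processes on each data node host):

    # The angel is the parent of the two ndbmtd processes for a data node
    $ ps -ef | grep ndbmtd
    # Kill only the angel (parent) PID; the real data node process keeps running
    $ kill -9 <angel-pid>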

In order to manage the cluster, MCM needs to be able to connect to each of the MySQL Servers and so the mcmd user must be created on each of the MySQL Servers (unless you’re exploiting the ability to share user credentials between multiple MySQL Servers, in which case it only needs doing once):
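A sketch of creating that user on one of the MySQL Servers – the password and the exact privileges to grant are assumptions, so check the MCM documentation for what mcmd actually needs:

    mysql> CREATE USER 'mcmd'@'localhost' IDENTIFIED BY 'super';
    mysql> GRANT ALL PRIVILEGES ON *.* TO 'mcmd'@'localhost' WITH GRANT OPTION;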

As the running cluster was generated by a pre-MySQL Cluster 7.3.3 version of the auto-installer, all of the mysqld settings were specified as command-line options. The MCM import command is very restrictive about what command-line options are allowed and so most of the options need moving to .cnf files and the mysqld processes need to be restarted to use those configuration files (note that the cluster remains available throughout this process but there is a rolling restart of the MySQL Servers and so applications may need to temporarily switch the server they’re connected to – if this isn’t already handled by load balancing). These are the configuration files that were manually created and the commands to restart each MySQL Server:
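The original files aren’t preserved here; as a sketch, a minimal configuration file for one of the MySQL Servers and the commands to restart it might look like this (paths, port, node ID and connect-string are all illustrative):

    # /home/billy/my_cluster/my.50.cnf
    [mysqld]
    ndbcluster
    datadir=/home/billy/my_cluster/mysqld_50_data
    port=3306
    ndb-nodeid=50
    ndb-connectstring=192.168.1.10:1186

    # Restart this MySQL Server so that it now reads the configuration file
    $ mysqladmin -h 127.0.0.1 -P 3306 -u root shutdown
    $ mysqld --defaults-file=/home/billy/my_cluster/my.50.cnf &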

 

Ensure correct PID files in place

MCM tracks each of the processes in the cluster using the process IDs held in pid files. For the data nodes, MCM will automatically pick up the pid files created by the running cluster and so we need to make sure that they’re accurate (that they’re in the right place and contain the correct IDs):
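For example (paths and node IDs illustrative) – compare the contents of each data node’s pid file in its DataDir with what ps reports, and edit the file if they differ:

    $ cat /home/billy/my_cluster/ndb_1_data/ndb_1.pid
    $ ps -ef | grep ndbmtd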

For MySQL Servers and management nodes, the pid files need to be created within the MCM directory:
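The exact locations depend on your MCM data repository and node IDs; as an illustrative sketch, the PID of each process is written to a file in that node’s directory under the MCM data directory:

    # Management node (node ID 49 in this example)
    $ ps -ef | grep ndb_mgmd
    $ echo <ndb_mgmd-pid> > /home/billy/mcm_data/clusters/mycluster/49/data/ndb_49.pid
    # MySQL Server (node ID 50 in this example)
    $ ps -ef | grep mysqld
    $ echo <mysqld-pid> > /home/billy/mcm_data/clusters/mycluster/50/data/mysqld_50.pid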

The good news is that if there are any mistakes in any of these files then the import cluster --dryrun command will fail with a useful error message and so you can come back to fix things up.
 

Import cluster into MCM

OK – you’ve now done the hard bit: the prep work ahead of actually importing the cluster into MCM control. Now it’s MCM’s turn to automate the import itself.

Before the real import, we use MCM to perform a dry run to make sure that all of the prep work has been completed successfully:
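Using the cluster name assumed in the earlier examples:

    mcm> import cluster --dryrun mycluster;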

All that’s left before running the final import is to take a backup (just in case something should go very wrong):
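As the cluster isn’t under MCM control yet, this can be a native MySQL Cluster backup started through the ndb_mgm client (connect-string illustrative):

    $ ndb_mgm -c 192.168.1.10:1186 -e "START BACKUP"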

And now, finally the import itself can be run:
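The same command as the dry run, minus the --dryrun option:

    mcm> import cluster mycluster;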

After all of the prep work, that seems a bit of an anticlimax! As a first check, make sure that all of the nodes (processes) have been imported correctly:
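For example, asking show status for per-process detail:

    mcm> show status -r mycluster;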

As a second check, make sure that the data hasn’t been lost:
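Using the illustrative table from earlier:

    $ mysql -h 192.168.1.10 -P 3306 -u root -e "SELECT * FROM clusterdb.simples;"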

So that’s it, the cluster is now safely under the control of MCM. The next section performs some more tests of MCM and at the same time illustrates how much simpler some of these management steps are now we have MCM to help.
 

Try out MCM on the imported cluster

This final section shows how to exploit some of the MCM features on the cluster.

On-line backup and (especially) restore are very simple using MCM but before that can be done, we want to add extra ndbapi slots so that the restore command can be executed on the hosts running the data nodes – fortunately this is straightforward to do:
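A sketch of adding two ndbapi slots, one on each of the (assumed) data node hosts:

    mcm> add process --processhosts=ndbapi@192.168.1.10,ndbapi@192.168.1.11 mycluster;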

Note that this process took more than 2 minutes – the reason for that is that behind the scenes it performed a rolling restart of all of the existing nodes (processes) to make them aware of the new nodes. We can now confirm that these nodes have been added:

Taking a backup of the database is a single command:
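Using the cluster name assumed above:

    mcm> backup cluster mycluster;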

Before performing the restore, the test data can be removed from the database:
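For the illustrative table used earlier, that’s simply:

    $ mysql -h 192.168.1.10 -P 3306 -u root -e "DELETE FROM clusterdb.simples;"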

To restore the database, check the available backups and then restore the most recent one (note as only the data has been deleted, the -M option is used to indicate that meta-data doesn’t need to be restored). The -I option is used to specify that the backup with Id of 2 should be used:
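As a sketch – the backup Id of 2 comes from the post’s description, the rest is assumed:

    mcm> list backups mycluster;
    mcm> restore cluster -I 2 -M mycluster;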

To confirm that the restore has indeed been successful, check that the test data is back in the database:

As a final test, the cluster is upgraded to MySQL Cluster 7.3.3; all that’s needed is to define the package (telling MCM where to find the new binaries on the target hosts) and then perform the upgrade:
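A sketch, with an assumed package name and basedir for the new binaries:

    mcm> add package --basedir=/usr/local/mysql_7_3_3 7.3.3;
    mcm> upgrade cluster --package=7.3.3 mycluster;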

Conclusion

The most requested feature for MySQL Cluster Manager was the ability to take an existing (running) cluster and bring it under the control of MCM (rather than having to create the cluster using MCM in the first place). MCM 1.3 delivers this. As you’ll have noticed, the migration process involves a non-trivial amount of prep work; the reason for this is that making configuration changes to a running cluster is fairly involved, with a lot of moving parts. Hopefully you’ll also have observed that once the cluster is under the control of MCM, the management process becomes much simpler and less prone to user error.

It would be great to hear how people get on with MCM 1.3 in general and with the import in particular – please download the MySQL Cluster Manager software and try it out.