In this section, we list the basic steps required to add new data nodes to an NDB Cluster. This procedure applies whether you are using ndbd or ndbmtd binaries for the data node processes. For a more detailed example, see Section 7.15.3, “Adding NDB Cluster Data Nodes Online: Detailed Example”.
Assuming that you already have a running NDB Cluster, adding data nodes online requires the following steps:
Edit the cluster configuration file (config.ini), adding new [ndbd] sections corresponding to the nodes to be added. If the cluster uses multiple management servers, these changes must be made to all config.ini files used by the management servers.
You must be careful that node IDs for any new data nodes added in the config.ini file do not overlap node IDs used by existing nodes. If you have API nodes using dynamically allocated node IDs, and these IDs match node IDs that you want to use for new data nodes, it is possible to force any such API nodes to “migrate”, as described later in this procedure.
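For illustration, two new data nodes might be declared with sections like the following (the node IDs and host names here are placeholders, not values taken from this procedure; choose node IDs that do not collide with any existing node):

```ini
# Hypothetical example: two new [ndbd] sections added to config.ini
[ndbd]
NodeId=3
HostName=198.51.100.3

[ndbd]
NodeId=4
HostName=198.51.100.4
```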
Perform a rolling restart of all NDB Cluster management servers.
Perform a rolling restart of all existing NDB Cluster data nodes. It is not necessary (or usually even desirable) to use --initial when restarting the existing data nodes.
If you are using API nodes with dynamically allocated IDs matching any node IDs that you wish to assign to new data nodes, you must restart all API nodes (including SQL nodes) before restarting any of the data node processes in this step. This causes any API nodes with node IDs that were previously not explicitly assigned to relinquish those node IDs and acquire new ones.
Perform a rolling restart of any SQL or API nodes connected to the NDB Cluster.
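As a sketch of what these rolling restarts might look like (node IDs and the configuration file path are placeholders), a management server can be stopped and then restarted so that it rereads the configuration, and each data node can then be restarted in turn from the management client, waiting for each node to reach the started state before restarting the next:

```
shell> ndb_mgm -e "1 STOP"
shell> ndb_mgmd -f /path/to/config.ini --reload

ndb_mgm> 3 RESTART
ndb_mgm> 4 RESTART
```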
Start the new data nodes.
The new data nodes may be started in any order. They can also be started concurrently, as long as they are started after the rolling restarts of all existing data nodes have been completed, and before proceeding to the next step.
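Assuming the multithreaded data node binary and a management server running on a host named mgmhost (a placeholder), each new data node might be started like this:

```
shell> ndbmtd -c mgmhost
```

Here -c is the short form of the --ndb-connectstring option.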
Execute one or more CREATE NODEGROUP commands in the NDB Cluster management client to create the new node group or node groups to which the new data nodes will belong.
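For example, if the new data nodes have node IDs 3 and 4 (placeholder values), a node group containing both of them could be created in the management client like this:

```
ndb_mgm> CREATE NODEGROUP 3,4
```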
Redistribute the cluster's data among all data nodes, including the new ones. Normally this is done by issuing an ALTER TABLE ... ALGORITHM=INPLACE, REORGANIZE PARTITION statement in the mysql client for each NDB table.
Exception: For tables created using the MAX_ROWS option, this statement does not work; instead, use ALTER TABLE ... ALGORITHM=INPLACE MAX_ROWS=... to reorganize such tables. You should also bear in mind that using MAX_ROWS to set the number of partitions in this fashion is deprecated, and you should use PARTITION_BALANCE instead; see Setting NDB_TABLE Options, for more information.

Note
This needs to be done only for tables already existing at the time the new node group is added. Data in tables created after the new node group is added is distributed automatically; however, data added to any given table tbl that existed before the new nodes were added is not distributed using the new nodes until that table has been reorganized.
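A minimal sketch of the reorganization statement, using a hypothetical table named mytable:

```
mysql> ALTER TABLE mytable ALGORITHM=INPLACE, REORGANIZE PARTITION;
```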
ALTER TABLE ... REORGANIZE PARTITION ALGORITHM=INPLACE reorganizes partitions but does not reclaim the space freed on the “old” nodes. You can do this by issuing, for each table, an OPTIMIZE TABLE statement in the mysql client.
This works for space used by variable-width columns of in-memory tables. OPTIMIZE TABLE is not supported for fixed-width columns of in-memory tables; it is also not supported for Disk Data tables.
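For example (mytable being a hypothetical table name):

```
mysql> OPTIMIZE TABLE mytable;
```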
You can add all the nodes desired, then issue several CREATE NODEGROUP commands in succession to add the new node groups to the cluster.
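For instance, having added four new data nodes with (hypothetical) node IDs 3 through 6, you could create two node groups in succession:

```
ndb_mgm> CREATE NODEGROUP 3,4
ndb_mgm> CREATE NODEGROUP 5,6
```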