6.9 Importing Data Into MySQL Cluster

When setting up a new instance of NDB Cluster, it is common to need to import data from an existing NDB Cluster, a standalone MySQL instance, or another source. This data is most often available as an SQL dump file, such as one produced by mysqldump.
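
For example, such a dump file might be produced on the source MySQL server using mysqldump, as shown here (the database name world and the file path are placeholders only):

$> mysqldump world > path/to/dumpfile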

When importing data from an SQL file, it is often not necessary to enforce transactions or foreign keys, and temporarily disabling these features can speed up the import process greatly. This can be done using the mysql client, either from a client session, or by invoking it on the command line. Within a mysql client session, you can perform the import using the following SQL statements:

SET ndb_use_transactions=0;
SET foreign_key_checks=0;

source path/to/dumpfile;

SET ndb_use_transactions=1;
SET foreign_key_checks=1;

When performing the import in this fashion, you must enable ndb_use_transactions and foreign_key_checks again following execution of the mysql client's source command. Otherwise, later statements in the same session may also be executed without enforcing transactions or foreign key constraints, which could lead to data inconsistency.

From the system shell, you can import the SQL file while disabling enforcement of transactions and foreign keys by using the mysql client with the --init-command option, like this:

$> mysql --init-command='SET ndb_use_transactions=0; SET foreign_key_checks=0' < path/to/dumpfile

It is also possible to load the data into an InnoDB table, and to convert the table to use the NDB storage engine afterwards using ALTER TABLE ... ENGINE=NDB. Keep in mind that, if there are many tables, this may require a number of such operations; in addition, if foreign keys are used, you must order the ALTER TABLE statements carefully, since foreign keys are not supported between tables using different MySQL storage engines.
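
One possible approach, sketched here using hypothetical tables named parent and child and a foreign key named fk_child_parent, is to drop the foreign key before converting either table and to restore it once both tables use NDB:

# Drop the foreign key while both tables still use InnoDB
ALTER TABLE child DROP FOREIGN KEY fk_child_parent;

# Convert both tables to the NDB storage engine
ALTER TABLE parent ENGINE=NDB;
ALTER TABLE child ENGINE=NDB;

# Re-create the foreign key now that both tables use the same engine
ALTER TABLE child ADD CONSTRAINT fk_child_parent
    FOREIGN KEY (parent_id) REFERENCES parent (id);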

You should be aware that the methods described previously in this section are not optimized for very large data sets or large transactions. If an application requires large transactions or many concurrent transactions as part of its normal operation, you may wish to increase the value of the MaxNoOfConcurrentOperations data node configuration parameter, which reserves more memory to allow a data node to take over a transaction if its transaction coordinator stops unexpectedly.
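
A sketch of what this might look like in the cluster global configuration file (config.ini) is shown here; the parameter is set in the [ndbd default] (or [ndbd]) section, and the value used is purely illustrative:

[ndbd default]
# Illustrative value only; choose a value appropriate to the size of the
# transactions your application actually performs
MaxNoOfConcurrentOperations=131072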

You may also wish to do this when performing bulk DELETE or UPDATE operations on NDB Cluster tables. If possible, try to have applications perform these operations in chunks, for example, by adding LIMIT to such statements.
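
For example, rather than deleting all matching rows in a single statement, an application might remove them in batches, repeating the statement until it affects no further rows; the table and column names shown here are hypothetical:

# Delete at most 10000 matching rows per statement; repeat until
# the statement reports 0 rows affected
DELETE FROM t1 WHERE created < '2020-01-01' LIMIT 10000;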

If a data import operation does not complete successfully, for whatever reason, you should be prepared to perform any necessary cleanup, which may include one or more DROP TABLE statements, DROP DATABASE statements, or both. Failing to do so may leave the database in an inconsistent state.
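
As a purely illustrative example, if a failed import left a database named importdb in a partially loaded state, cleanup might look like this:

# Remove an individual, partially imported table
DROP TABLE IF EXISTS importdb.t1;

# Or discard the entire database created for the import
DROP DATABASE IF EXISTS importdb;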