MySQL NDB Cluster 6.1 - 7.1 Release Notes


Changes in MySQL Cluster NDB 6.3.26 (5.1.35-ndb-6.3.26) (2009-08-26)

This is a bugfix release, fixing recently discovered bugs in the previous MySQL Cluster NDB 6.3 release.

This release incorporates all bugfixes and changes made in previous MySQL Cluster releases, as well as all bugfixes and feature changes which were added in mainline MySQL 5.1 through MySQL 5.1.35 (see Changes in MySQL 5.1.35 (2009-05-13)).


Please refer to our bug database for more details about the individual bugs fixed in this version.

Functionality Added or Changed

  • On Solaris platforms, the MySQL Cluster management server and NDB API applications now use CLOCK_REALTIME as the default clock. (Bug #46183)

  • A new option --exclude-missing-columns has been added for the ndb_restore program. In the event that any tables in the database or databases being restored to have fewer columns than the same-named tables in the backup, the extra columns in the backup's version of the tables are ignored. For more information, see ndb_restore — Restore a MySQL Cluster Backup. (Bug #43139)
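
    A hypothetical invocation using the new option (the node ID, backup ID, and backup path shown are placeholders, not values from the source):

```shell
# Restore the data for node 2 from backup 3, ignoring any columns that
# exist in the backup but are missing from the target tables.
# (The command is built into a variable and echoed so this sketch is
# safe to run without a live cluster.)
CMD="ndb_restore --nodeid=2 --backupid=3 --restore-data \
  --exclude-missing-columns \
  --backup-path=/var/lib/mysql-cluster/BACKUP/BACKUP-3"
echo "$CMD"
```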

  • Note

    This issue, originally resolved in MySQL 5.1.16, re-occurred due to a later (unrelated) change. The fix has been re-applied.

    (Bug #25984)

Bugs Fixed

  • Restarting the cluster following a local checkpoint and an online ALTER TABLE on a non-empty table caused data nodes to crash. (Bug #46651)

  • Full table scans failed to execute when the cluster contained more than 21 table fragments.

    The number of table fragments in the cluster can be calculated as the number of data nodes, times 8 (that is, times the value of the internal constant MAX_FRAG_PER_NODE), divided by the number of replicas. Thus, when NoOfReplicas = 1 at least 3 data nodes were required to trigger this issue, and when NoOfReplicas = 2 at least 4 data nodes were required to do so.

    (Bug #46490)
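
    The fragment-count arithmetic above can be sketched as follows (MAX_FRAG_PER_NODE is the internal constant cited in the text; the function name is illustrative):

```shell
# Total table fragments = data nodes * MAX_FRAG_PER_NODE / replicas.
MAX_FRAG_PER_NODE=8
total_fragments () {
  # $1 = number of data nodes, $2 = NoOfReplicas
  echo $(( $1 * MAX_FRAG_PER_NODE / $2 ))
}
# With NoOfReplicas = 1, three data nodes already exceed the
# 21-fragment threshold at which the bug appeared:
total_fragments 3 1   # prints 24
```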

  • Killing MySQL Cluster nodes immediately following a local checkpoint could lead to a crash of the cluster when later attempting to perform a system restart.

    The exact sequence of events causing this issue was as follows:

    1. A local checkpoint (LCP) occurs.

    2. Immediately following the LCP, the master data node is killed.

    3. The remaining data nodes are killed within a few seconds of the master.

    4. An attempt is made to restart the cluster.

    (Bug #46412)

  • Ending a line in the config.ini file with an extra semicolon character (;) caused reading the file to fail with a parsing error. (Bug #46242)
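
    A hypothetical config.ini fragment showing the kind of line that previously triggered the parsing error (the parameter and value are placeholders):

```ini
[ndbd default]
# Before this fix, the stray trailing semicolon on the next line
# caused reading of config.ini to fail with a parsing error:
NoOfReplicas=2;
```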

  • When an index scan and delete were combined with a primary key delete, the index scan and delete failed to initialize a flag properly; in rare circumstances, this could cause a data node to crash. (Bug #46069)

  • OPTIMIZE TABLE on an NDB table could in some cases cause SQL and data nodes to crash. This issue was observed with both ndbd and ndbmtd. (Bug #45971)

  • The AutoReconnect configuration parameter for API nodes (including SQL nodes) has been added. This is intended to prevent API nodes from re-using allocated node IDs during cluster restarts. For more information, see Defining SQL and Other API Nodes in a MySQL Cluster.

    This fix also introduces two new methods of the NDB API Ndb_cluster_connection class: set_auto_reconnect() and get_auto_reconnect(). (Bug #45921)
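
    As a sketch, automatic reconnection can be disabled for an API node in config.ini as follows (the section and value shown are illustrative assumptions; see the linked documentation for exact usage):

```ini
[api]
# Keep this API node from automatically reconnecting, and thus from
# re-using its allocated node ID, during a cluster restart:
AutoReconnect=false
```

    The NDB API methods set_auto_reconnect() and get_auto_reconnect() mentioned above provide the same control programmatically.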

  • The signals used by ndb_restore to send progress information about backups to the cluster log accessed the cluster transporter without using any locks. Because of this, it was theoretically possible for these signals to be interfered with by heartbeat signals if both were sent at the same time, causing the ndb_restore messages to be corrupted. (Bug #45646)

  • Problems could arise when using VARCHAR columns whose size was greater than 341 characters and which used the utf8_unicode_ci collation. In some cases, this combination of conditions could cause certain queries and OPTIMIZE TABLE statements to crash mysqld. (Bug #45053)

  • An internal NDB API buffer was not properly initialized. (Bug #44977)

  • When a data node had written its GCI marker to the first page of a megabyte, and that node was later killed during restart after having processed that page (marker) but before completing an LCP, the data node could fail with file system errors. (Bug #44952)

    References: See also: Bug #42564, Bug #44291.

  • The warning message Possible bug in Dbdih::execBLOCK_COMMIT_ORD ... could sometimes appear in the cluster log. This warning is obsolete, and has been removed. (Bug #44563)

  • In some cases, OPTIMIZE TABLE on an NDB table did not free any DataMemory. (Bug #43683)

  • If the cluster crashed during the execution of a CREATE LOGFILE GROUP statement, the cluster could not be restarted afterward. (Bug #36702)

    References: See also: Bug #34102.

  • Partitioning; Disk Data: An NDB table created with a very large value for the MAX_ROWS option could cause ndbd to crash during a system restart if the table was dropped and a new table with fewer partitions, but the same table ID, was subsequently created. This was because the server attempted to examine each partition whether or not it actually existed. (Bug #45154)

    References: See also: Bug #58638.

  • Disk Data: If the value set in the config.ini file for FileSystemPathDD, FileSystemPathDataFiles, or FileSystemPathUndoFiles was identical to the value set for FileSystemPath, that parameter was ignored when starting the data node with the --initial option. As a result, the Disk Data files in the corresponding directory were not removed when performing an initial start of the affected data node or data nodes. (Bug #46243)
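
    A hypothetical configuration exhibiting the problem (the path is a placeholder): FileSystemPathDD below repeats the FileSystemPath value, which previously caused the parameter to be ignored on an --initial start, leaving the Disk Data files in that directory in place:

```ini
[ndbd default]
FileSystemPath=/var/lib/mysql-cluster
# Identical to FileSystemPath above; before this fix, this caused
# FileSystemPathDD to be ignored when starting with --initial:
FileSystemPathDD=/var/lib/mysql-cluster
```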

  • Disk Data: During a checkpoint, restore points are created for both the on-disk and in-memory parts of a Disk Data table. Under certain rare conditions, the in-memory restore point could erroneously include or exclude a row relative to the snapshot, which later led to a crash during or following recovery. (Bug #41915)

    References: See also: Bug #47832.