MySQL NDB Cluster 6.1 - 7.1 Release Notes


Changes in MySQL Cluster NDB 7.1.17 (5.1.56-ndb-7.1.17) (2011-11-15)

MySQL Cluster NDB 7.1.17 is a new release of MySQL Cluster, incorporating new features in the NDB storage engine and fixing recently discovered bugs in previous MySQL Cluster NDB 7.1 releases.

Obtaining MySQL Cluster NDB 7.1.  The latest MySQL Cluster NDB 7.1 binaries for supported platforms, as well as the source code for the latest release, can be obtained from the MySQL downloads area. You can also access the MySQL Cluster NDB 7.1 development source tree.

This release also incorporates all bugfixes and changes made in previous MySQL Cluster releases, as well as all bugfixes and feature changes which were added in mainline MySQL 5.1 through MySQL 5.1.56 (see Changes in MySQL 5.1.56 (2011-03-01)).

Functionality Added or Changed

  • Introduced the CrashOnCorruptedTuple data node configuration parameter. When this parameter is enabled, a data node that detects a corrupted tuple shuts down forcibly (fail-fast behavior). For backward compatibility, the parameter is disabled by default. (Bug #12598636)
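
    As a sketch, the parameter would be enabled in the [ndbd default] section of the cluster's global configuration file (config.ini), following the convention used for other boolean data node parameters:

```ini
[ndbd default]
# 1 = shut down the data node immediately when a corrupted tuple is
#     detected (fail-fast); 0 = legacy behavior (the default, for
#     backward compatibility).
CrashOnCorruptedTuple=1
```

    As with other data node configuration parameters, the data nodes must be restarted for a change to this setting to take effect.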

  • Added the ThreadConfig data node configuration parameter to enable control of multiple threads and CPUs when using ndbmtd, by assigning threads of one or more specified types to execute on one or more CPUs. This can provide more precise and flexible control over multiple threads than can be obtained using the LockExecuteThreadToCPU parameter. (Bug #11795581)
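
    A hedged illustration of the syntax, assuming ndbmtd and the [ndbd default] section of config.ini; the thread type names, thread counts, and CPU numbers shown are examples only, not a recommendation:

```ini
[ndbd default]
# Bind the main thread to CPU 0, two local data manager (ldm) threads
# to CPUs 1 and 2, and the I/O thread to CPU 3. Type names and CPU IDs
# here are illustrative.
ThreadConfig=main={cpubind=0},ldm={count=2,cpubind=1,2},io={cpubind=3}
```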

  • Added the ndbinfo_select_all utility, a command-line client that performs SELECT * against one or more tables in the ndbinfo information database.
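
    A usage sketch; the utility requires a running cluster, and the management host and table names below are examples:

```shell
# ndbinfo_select_all connects to the cluster and performs SELECT * on
# the named ndbinfo tables; host and table names here are illustrative.
ndbinfo_select_all --ndb-connectstring=mgmhost:1186 counters logspaces
```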

Bugs Fixed

  • Important Change; Cluster Replication: A unique key constraint violation caused NDB slaves to stop rather than continue when slave_exec_mode was set to IDEMPOTENT. In such cases, NDB now behaves as other MySQL storage engines do in IDEMPOTENT mode. (Bug #11756310)
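
    For reference, the replication mode in question is set on the slave, for example in my.cnf (placement under [mysqld] assumed):

```ini
[mysqld]
# In IDEMPOTENT mode, duplicate-key and key-not-found errors
# encountered by the slave SQL thread are suppressed; with this fix,
# NDB slaves continue in this mode instead of stopping.
slave_exec_mode=IDEMPOTENT
```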

  • When adding data nodes online, if the SQL nodes were not restarted before starting the new data nodes, the next query to be executed crashed the SQL node on which it was run. (Bug #13715216, Bug #62847)

    References: This issue is a regression of: Bug #13117187.

  • When multiple data nodes, including the node designated as master, failed during a long-running local checkpoint (LCP), any new data nodes attempting to start before all ongoing LCPs had completed later crashed. This occurred because node takeover by the new master cannot complete while local checkpoints are pending. Long-running LCPs such as those which triggered this issue can occur when fragment sizes are sufficiently large (see MySQL Cluster Nodes, Node Groups, Replicas, and Partitions, for more information). Now in such cases, data nodes other than the new master are kept from restarting until the takeover is complete. (Bug #13323589)

  • When deleting from multiple tables using a unique key in the WHERE condition, the wrong rows were deleted. In addition, UPDATE triggers failed when rows were changed by deleting from or updating multiple tables. (Bug #12718336, Bug #61705, Bug #12728221)

  • Shutting down a mysqld while under load caused the spurious error messages Opening ndb_binlog_index: killed and Unable to lock table ndb_binlog_index to be written to the cluster log. (Bug #11930428)

  • Cluster Replication: The mysqlbinlog --database option generated table mapping errors when used with NDB tables, unless the binary log was generated using --log-bin-use-v1-row-events=0. (Bug #13067813)
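
    Before this fix, the workaround was the option combination described above: have the server write the binary log with --log-bin-use-v1-row-events=0, then filter by database. A sketch, in which the database name and binary log file name are examples:

```shell
# Server started (or configured in my.cnf) with:
#   --log-bin-use-v1-row-events=0
# Then filtering by database no longer produced table mapping errors:
mysqlbinlog --database=mydb binlog.000001
```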

  • Cluster Replication: Replication of NDB tables having more columns on the slave than on the master did not always work correctly when any of the extra columns were NOT NULL, did not have a default value, or both. (Bug #11755904, Bug #47742)

  • Cluster API: When more than 32 KB of data must be sent in a single signal using the NDB API, the data is split across two or more signals, each smaller than 32 KB, which are then reassembled into the original full-length signal by the receiver. Such fragmented signals are used for some scan requests, as well as for SPJ QueryOperation requests. However, extra (spurious) signals could sometimes be sent when using fragmented signals, causing errors on the receiver; these implementation artifacts have now been eliminated. (Bug #13087016)