MySQL Cluster 6.1 - 7.1 Release Notes
Download these Release Notes
PDF (US Ltr) - 2.8Mb
PDF (A4) - 2.9Mb
EPUB - 0.6Mb


Changes in MySQL Cluster NDB 6.3.40 (5.1.51-ndb-6.3.40) (2011-01-26)

This is a bugfix release, fixing recently discovered bugs in the previous MySQL Cluster NDB 6.3 release.

Obtaining MySQL Cluster NDB 6.3. The latest MySQL Cluster NDB 6.3 binaries for supported platforms can be obtained from the MySQL downloads site; source code for the latest MySQL Cluster NDB 6.3 release can be obtained from the same location. You can also access the MySQL Cluster NDB 6.3 development source tree.

This release incorporates all bugfixes and changes made in previous MySQL Cluster releases, as well as all bugfixes and feature changes which were added in mainline MySQL 5.1 through MySQL 5.1.51 (see Changes in MySQL 5.1.51 (2010-09-10)).


Please refer to our bug database for more details about the individual bugs fixed in this version.

Functionality Added or Changed

  • Added the --skip-broken-objects option for ndb_restore. This option causes ndb_restore to ignore tables corrupted due to missing blob parts tables, and to continue reading from the backup file and restoring the remaining tables. (Bug #54613)

    References: See also Bug #51652.
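    A typical invocation might look like the following (the node ID, backup ID, and backup path shown here are illustrative; adjust them for your cluster):

    ```shell
    # Hypothetical example: restore data from backup 5 taken on node 1,
    # skipping any tables whose blob parts tables are missing from the
    # backup, rather than aborting the restore.
    ndb_restore --nodeid=1 --backupid=5 --restore-data \
        --skip-broken-objects \
        --backup_path=/var/lib/mysql-cluster/BACKUP/BACKUP-5
    ```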

Bugs Fixed

  • Two related problems could occur with read-committed scans made in parallel with transactions combining multiple (concurrent) operations:

    1. When committing a multiple-operation transaction that contained concurrent insert and update operations on the same record, the commit arrived first for the insert and then for the update. If a read-committed scan arrived between these operations, it could thus read incorrect data; in addition, if the scan read variable-size data, it could cause the data node to fail.

    2. When rolling back a multiple-operation transaction having concurrent delete and insert operations on the same record, the abort arrived first for the delete operation, and then for the insert. If a read-committed scan arrived between the delete and the insert, it could incorrectly assume that the record should not be returned (in other words, the scan treated the insert as though it had not yet been committed).

    (Bug #59496)
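    As a sketch, the first of these problems could arise with a transaction and a concurrent scan of the following shape (table and column names are illustrative only):

    ```sql
    -- Session 1: one transaction with multiple operations on the same row
    BEGIN;
    INSERT INTO t1 VALUES (1, 'a');
    UPDATE t1 SET v = 'b' WHERE pk = 1;
    COMMIT;   -- internally, the insert commits before the update

    -- Session 2: a concurrent read-committed scan
    SELECT * FROM t1;
    -- If this scan arrived between the two internal commits, it could
    -- read incorrect data; with variable-size data, it could cause the
    -- data node to fail.
    ```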

  • A row insert or update followed by a delete operation on the same row within the same transaction could in some cases lead to a buffer overflow. (Bug #59242)

    References: See also Bug #56524. This bug was introduced by Bug #35208.

  • The FAIL_REP signal, used inside the NDB kernel to declare that a node has failed, now includes the node ID of the node that detected the failure. This information can be useful in debugging. (Bug #58904)

  • Issuing EXPLAIN EXTENDED for a query that would use condition pushdown could cause mysqld to crash. (Bug #58553, Bug #11765570)

  • In some circumstances, an SQL trigger on an NDB table could read stale data. (Bug #58538)

  • During a node takeover, it was possible in some circumstances for one of the remaining nodes to send an extra transaction confirmation (LQH_TRANSCONF) signal to the DBTC kernel block, conceivably leading to a crash of the data node trying to take over as the new transaction coordinator. (Bug #58453)

  • A query having multiple predicates joined by OR in the WHERE clause and which used the sort_union access method (as shown using EXPLAIN) could return duplicate rows. (Bug #58280)
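    The affected queries were of roughly this shape, assuming separate indexes on the two columns (table, column, and index names are illustrative):

    ```sql
    -- EXPLAIN showed type: index_merge with
    -- Extra: Using sort_union(idx_a, idx_b)
    SELECT * FROM t1
    WHERE a < 10 OR b > 500;
    -- Before the fix, rows satisfying both predicates could be
    -- returned twice.
    ```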

  • Trying to drop an index while it was being used to perform scan updates caused data nodes to crash. (Bug #58277, Bug #57057)

  • When handling failures of multiple data nodes, an error in the construction of internal signals could cause the cluster's remaining nodes to crash. This issue was most likely to affect clusters with large numbers of data nodes. (Bug #58240)

  • Some queries of the form SELECT ... WHERE column IN (subquery) against an NDB table could cause mysqld to hang in an endless loop. (Bug #58163)

  • When a statement's WHERE clause included an IN condition with a value list containing a great many elements, and the statement deleted or updated enough rows that NDB processed them in batches, the number of affected rows was not computed or reported correctly. (Bug #58040)

  • A query using BETWEEN as part of a pushed-down WHERE condition could cause mysqld to hang or crash. (Bug #57735)

  • In some circumstances, it was possible for mysqld to begin a new multi-range read scan without having closed a previous one. This could lead to exhaustion of all scan operation objects, transaction objects, or lock objects (or some combination of these) in NDB, causing queries to fail with such errors as Lock wait timeout exceeded or Connect failure - out of connection objects. (Bug #57481)

    References: See also Bug #58750.

  • Queries using column IS [NOT] NULL on a table with a unique index created with USING HASH on column always returned an empty result. (Bug #57032)
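    For example, with a unique hash index on a nullable column (table and column names are illustrative), both forms of the NULL test were affected:

    ```sql
    CREATE TABLE t1 (
        pk INT NOT NULL PRIMARY KEY,
        c INT,
        UNIQUE KEY uk (c) USING HASH
    ) ENGINE=NDBCLUSTER;

    -- Before the fix, both of these queries returned an empty result
    -- even when matching rows existed:
    SELECT * FROM t1 WHERE c IS NULL;
    SELECT * FROM t1 WHERE c IS NOT NULL;
    ```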

  • With engine_condition_pushdown enabled, a query using LIKE on an ENUM column of an NDB table failed to return any results. This issue is resolved by disabling engine_condition_pushdown when performing such queries. (Bug #53360)

  • When a slash character (/) was used as part of the name of an index on an NDB table, attempting to execute a TRUNCATE TABLE statement on the table failed with the error Index not found, and the table was rendered unusable. (Bug #38914)

  • Disk Data: In certain cases, a race condition could occur when DROP LOGFILE GROUP removed the logfile group while a read or write of one of the affected files was in progress, which in turn could lead to a crash of the data node. (Bug #59502)

  • Disk Data: A race condition could sometimes be created when DROP TABLESPACE was run concurrently with a local checkpoint; this could in turn lead to a crash of the data node. (Bug #59501)

  • Disk Data: What should have been an online drop of a multi-column index was actually performed offline. (Bug #55618)

  • Disk Data: When at least one data node was not running, queries against the INFORMATION_SCHEMA.FILES table took an excessive length of time to complete because the MySQL server waited for responses from any stopped nodes to time out. Now, in such cases, MySQL does not attempt to contact nodes which are not known to be running. (Bug #54199)

  • Cluster API: Attempting to read the same value (using getValue()) more than 9000 times within the same transaction caused the transaction to hang when executed. Now when more reads are performed in this way than can be accommodated in a single transaction, the call to execute() fails with a suitable error. (Bug #58110)
