
Changes in MySQL Cluster NDB 7.1.20 (5.1.61-ndb-7.1.20) (2012-03-29)

MySQL Cluster NDB 7.1.20 is a new release of MySQL Cluster, incorporating new features in the NDB storage engine and fixing recently discovered bugs in previous MySQL Cluster NDB 7.1 releases.

Obtaining MySQL Cluster NDB 7.1. The latest MySQL Cluster NDB 7.1 binaries for supported platforms can be obtained from the MySQL downloads site. Source code for the latest MySQL Cluster NDB 7.1 release can be obtained from the same location. The MySQL Cluster NDB 7.1 development source tree is also publicly available.

This release also incorporates all bugfixes and changes made in previous MySQL Cluster releases, as well as all bugfixes and feature changes which were added in mainline MySQL 5.1 through MySQL 5.1.61 (see Changes in MySQL 5.1.61 (2012-01-10)).

Bugs Fixed

  • Important Change: A number of changes have been made in the configuration of transporter send buffers.

    1. The data node configuration parameter ReservedSendBufferMemory is now deprecated, and thus subject to removal in a future MySQL Cluster release. ReservedSendBufferMemory has been non-functional since it was introduced and remains so.

    2. TotalSendBufferMemory now works correctly with data nodes using ndbmtd.

    3. SendBufferMemory can now over-allocate into SharedGlobalMemory for ndbmtd data nodes (only).

    4. A new data node configuration parameter ExtraSendBufferMemory is introduced. Its purpose is to control how much additional memory can be allocated to the send buffer over and above that specified by TotalSendBufferMemory or SendBufferMemory. The default setting (0) allows up to 16MB to be allocated automatically. (A configuration sketch showing these parameters together follows this item.)

    (Bug #13633845, Bug #11760629, Bug #53053)
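
    The following config.ini fragment is a minimal sketch of how these send buffer parameters might be set together for data nodes running ndbmtd; the values shown are hypothetical illustrations, not recommendations:

      [ndbd default]
      # Total memory available for the send buffers of all transporters
      # on each data node
      TotalSendBufferMemory = 16M

      # Additional memory permitted over and above TotalSendBufferMemory;
      # the default of 0 allows up to 16MB to be allocated automatically
      ExtraSendBufferMemory = 32M

      # Pool from which ndbmtd data nodes (only) can over-allocate
      # send buffer memory
      SharedGlobalMemory = 128M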

  • Important Change; Cluster Replication: The master limited the number of operations per transaction to 10000 (based on TimeBetweenEpochs). This meant that a single epoch could contain more data-modification operations than the slave could apply at one time, due to the limit imposed on the slave by its own setting for MaxDMLOperationsPerTransaction.

    The fix for this issue is to allow a replication slave cluster to exceed the configured value of MaxDMLOperationsPerTransaction when necessary, so that it can apply all DML operations received from the master in the same transaction. (Bug #12825405)
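
    For reference, both parameters are data node settings in config.ini; the values in this sketch are hypothetical and simply illustrate where they are configured:

      [ndbd default]
      # Time between epochs, in milliseconds
      TimeBetweenEpochs = 100

      # Per-transaction DML limit; with this fix, a slave cluster may
      # exceed it when applying an epoch transaction from the master
      MaxDMLOperationsPerTransaction = 32768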

  • Setting insert_id had no effect on the auto-increment counter for NDB tables. (Bug #13731134)
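
    A minimal SQL sketch of the corrected behavior (the table and values are hypothetical):

      CREATE TABLE t1 (
          id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
          c CHAR(1)
      ) ENGINE=NDB;

      SET insert_id = 100;              -- seed the next AUTO_INCREMENT value
      INSERT INTO t1 (c) VALUES ('a');
      SELECT id FROM t1;                -- with this fix, returns 100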

  • A data node crashed when more than 16 GB of fixed-size memory was allocated by DBTUP to a single fragment (because the DBACC kernel block was not prepared to accept values greater than 32 bits from it, leading to an overflow). Now in such cases, the data node returns Error 889 Table fragment fixed data reference has reached maximum possible value.... When this happens, you can work around the problem by increasing the number of partitions used by the table, such as by using the MAX_ROWS option with CREATE TABLE (see the sketch following this item). (Bug #13637411)

    References: See also Bug #11747870, Bug #34348.
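
    A sketch of this workaround follows; the table definition and row count are hypothetical. Specifying a large MAX_ROWS value causes NDB to create the table with more partitions:

      CREATE TABLE t1 (
          id BIGINT NOT NULL PRIMARY KEY,
          data VARCHAR(255)
      ) ENGINE=NDB
        MAX_ROWS = 200000000;   -- hint that the table must hold many rows,
                                -- so that NDB allocates more partitions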

  • Several instances were identified and fixed in the NDB code for multi-threaded data nodes where send buffer memory (SendBufferMemory) was held exclusively by a specific thread for an unnecessarily long time. Because send buffer memory is critical to the operation of the entire node, the time during which any of these buffers can be held exclusively by a given thread is now minimized. (Bug #13618181)

  • A very large value for BackupWriteSize, as compared to BackupMaxWriteSize, BackupDataBufferSize, or BackupLogBufferSize, could cause a local checkpoint or backup to hang. (Bug #13613344)
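
    For reference, all four parameters are set for data nodes in config.ini. This sketch shows them sized consistently, with BackupWriteSize no larger than BackupMaxWriteSize and both well below the buffer sizes; the values shown are illustrative:

      [ndbd default]
      BackupDataBufferSize = 16M
      BackupLogBufferSize  = 16M
      BackupWriteSize      = 256K   # default write size
      BackupMaxWriteSize   = 1M     # maximum write size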

  • Queries using LIKE ... ESCAPE on NDB tables failed when pushed down to the data nodes. Such queries are no longer pushed down, regardless of the value of engine_condition_pushdown. (Bug #13604447, Bug #61064)
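
    A query of the form shown in this sketch (the table and pattern are hypothetical) is now evaluated in the MySQL server rather than being pushed down to the data nodes:

      -- Matches values beginning with the literal string '50%'
      SELECT * FROM t1
      WHERE code LIKE '50#%%' ESCAPE '#';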

  • To avoid TCP transporter overload, an overload flag is kept in the NDB kernel for each data node; this flag is used to abort key requests if needed, yielding error 1218 Send Buffers overloaded in NDB kernel in such cases. Scans can also put significant pressure on transporters, especially where scans with a high degree of parallelism are executed in a configuration with relatively small send buffers. However, in these cases, overload flags were not checked, which could lead to node failures due to send buffer exhaustion. Now, overload flags are checked by scans, and in cases where returning sufficient rows to match the batch size (--ndb-batch-size server option) would cause an overload, the number of rows is limited to what can be accommodated by the send buffer.

    See also Configuring MySQL Cluster Send Buffer Parameters. (Bug #13602508)
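
    The batch size referred to here is the one set using the --ndb-batch-size server option, as in this hypothetical my.cnf fragment:

      [mysqld]
      # Bytes used per round trip when fetching rows from NDB; scans now
      # return fewer rows than would fill this batch if doing so would
      # overload the transporter send buffer
      ndb-batch-size = 32768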

  • A SELECT from an NDB table using LIKE with a multibyte column (such as utf8) did not return the correct result when engine_condition_pushdown was enabled. (Bug #13579318, Bug #64039)

    References: See also Bug #13608135.
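
    A minimal sketch of the affected case (the table and data are hypothetical):

      SET engine_condition_pushdown = ON;

      CREATE TABLE t1 (
          name VARCHAR(20) CHARACTER SET utf8
      ) ENGINE=NDB;

      INSERT INTO t1 VALUES ('München');

      -- Previously could return an incorrect result when the condition
      -- was pushed down to the data nodes; '_' matches the multibyte 'ü'
      SELECT * FROM t1 WHERE name LIKE 'M_nchen';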

  • A node failure and recovery while performing a scan on more than 32 partitions led to additional node failures during node takeover. (Bug #13528976)

  • The --skip-config-cache option now causes ndb_mgmd to skip checking for the configuration directory, and thus to skip creating it in the event that it does not exist. (Bug #13428853)
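
    A hypothetical invocation (the configuration file path is illustrative):

      # Read config.ini directly on every start; do not check for,
      # create, or use the configuration cache directory
      ndb_mgmd -f /etc/mysql-cluster/config.ini --skip-config-cache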

  • Cluster Replication: Conflict detection and resolution for statements updating a given table could be employed only on the same server where the table was created. When an NDB table is created by executing DDL on an SQL node, the binary log setup portion of the processing for the CREATE TABLE statement reads the table's conflict detection function from the ndb_replication table and sets up that function for the table. However, when the created table was discovered by other SQL nodes attached to the same MySQL Cluster due to schema distribution, the conflict detection function was not correctly set up. The same problem occurred when an NDB table was discovered as a result of selecting a database for the first time (such as when executing a USE statement), and when a table was discovered as a result of scanning all files at server startup.

    These issues were due to a dependency of the conflict detection and resolution code on table objects, even in cases where checking for such objects might not be appropriate. With this fix, conflict detection and resolution for any NDB table works whether the table was created on the same SQL node, or on a different one. (Bug #13578660)
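
    For context, conflict detection is configured per table through the mysql.ndb_replication table. In this sketch, the database, table, and timestamp column names are hypothetical, and the binlog_type value 0 selects the server default; it shows NDB$MAX() resolution being enabled for db1.t1 on all SQL nodes:

      INSERT INTO mysql.ndb_replication
          (db, table_name, server_id, binlog_type, conflict_fn)
      VALUES
          ('db1', 't1', 0, 0, 'NDB$MAX(ts_col)');  -- server_id 0 = all servers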
