MySQL Cluster 6.1 - 7.1 Release Notes

Changes in MySQL Cluster NDB 7.1.10 (5.1.51-ndb-7.1.10) (2011-01-26)

MySQL Cluster NDB 7.1.10 is a new release of MySQL Cluster, incorporating new features in the NDB storage engine and fixing recently discovered bugs in MySQL Cluster NDB 7.1.9a and previous MySQL Cluster releases.

Obtaining MySQL Cluster NDB 7.1. The latest MySQL Cluster NDB 7.1 binaries for supported platforms, as well as source code for the latest MySQL Cluster NDB 7.1 release, can be obtained from the MySQL downloads site. You can also access the MySQL Cluster NDB 7.1 development source tree.

This release also incorporates all bugfixes and changes made in previous MySQL Cluster releases, as well as all bugfixes and feature changes which were added in mainline MySQL 5.1 through MySQL 5.1.51 (see Changes in MySQL 5.1.51 (2010-09-10)).

Functionality Added or Changed

  • Important Change: The following changes have been made with regard to the TimeBetweenEpochsTimeout data node configuration parameter:

    • The maximum possible value for this parameter has been increased from 32000 milliseconds to 256000 milliseconds.

    • Setting this parameter to zero now has the effect of disabling GCP stops caused by save timeouts, commit timeouts, or both.

    • The current value of this parameter and a warning are written to the cluster log whenever a GCP save takes longer than 1 minute or a GCP commit takes longer than 10 seconds.

    For more information, see Disk Data and GCP Stop errors; a brief configuration sketch follows this item. (Bug #58383)
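
    A minimal config.ini sketch of this change (the parameter name and limits are from this note; the value shown and the [ndbd default] placement are illustrative):

      [ndbd default]
      # Raise the GCP commit timeout; any value up to the new
      # 256000 ms maximum is now accepted (example value only):
      TimeBetweenEpochsTimeout=64000
      # Alternatively, TimeBetweenEpochsTimeout=0 disables GCP stops
      # caused by save timeouts, commit timeouts, or both.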

  • Added the --skip-broken-objects option for ndb_restore. This option causes ndb_restore to ignore tables corrupted due to missing blob parts tables, and to continue reading from the backup file and restoring the remaining tables; a usage sketch follows this item. (Bug #54613)

    References: See also Bug #51652.
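
    A hedged usage sketch (the node ID, backup ID, and backup path here are hypothetical; only the --skip-broken-objects option itself is from this note):

      shell> ndb_restore --nodeid=1 --backupid=5 --restore-data \
                 --backup-path=/var/lib/mysql-cluster/BACKUP/BACKUP-5 \
                 --skip-broken-objects

    With this option, tables whose blob parts tables are missing are skipped rather than causing the restore to fail.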

  • Added two new data node configuration parameters, RedoOverCommitCounter and RedoOverCommitLimit, together with a new API node configuration parameter, DefaultOperationRedoProblemAction, to provide more direct control over the handling of timeouts that occur when flushing redo logs to disk. Now, when such timeouts occur more than a specified number of times for a given redo log, any transactions that were to be written are aborted instead, and the operations contained in those transactions can either be retried or themselves be aborted.

    For more information, see Redo log over-commit handling; a configuration sketch follows this item.
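
    A configuration sketch under stated assumptions (the three parameter names are from this note; the values and section placement are illustrative, not documented defaults):

      [ndbd default]
      # Abort pending transactions after this many redo log flush
      # timeouts for a given redo log (example value):
      RedoOverCommitCounter=3
      # Time allowed for a redo log flush before it counts as
      # timed out (example value, in seconds):
      RedoOverCommitLimit=20

      [mysqld]
      # What an API node does with the operations from aborted
      # transactions: retry (queue) or abort them (example value):
      DefaultOperationRedoProblemAction=queue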

  • Cluster Replication: Added the --ndb-log-apply-status server option, which causes a replication slave to apply updates to the master's mysql.ndb_apply_status table to its own ndb_apply_status table using its own server ID in place of the master's. This option can be useful in circular or chain replication setups where you need to track updates to ndb_apply_status as they propagate from one MySQL Cluster to the next; a minimal configuration sketch follows this item.
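
    A minimal my.cnf sketch for the slave mysqld (the option name is from this note; the file layout is illustrative):

      [mysqld]
      # Apply updates to the master's mysql.ndb_apply_status table
      # using this server's own server ID:
      ndb-log-apply-status=1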

  • Cluster API: It is now possible to stop or restart a node even while other nodes are starting, using the MGM API ndb_mgm_stop4() or ndb_mgm_restart4() function, respectively, with the force parameter set to 1; a sketch follows this item. (Bug #58451)

    References: See also Bug #58319.
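
    A hedged C-language sketch of forcibly stopping a data node while other nodes are starting (the connection string and node ID are hypothetical; the function and its force parameter are from this note, and the exact signature shown is an assumption based on MGM API conventions):

      #include <mgmapi.h>
      #include <stdio.h>

      int main(void)
      {
          NdbMgmHandle h = ndb_mgm_create_handle();
          ndb_mgm_set_connectstring(h, "mgmhost:1186");  /* hypothetical */
          if (ndb_mgm_connect(h, 0, 0, 0) != 0)
          {
              fprintf(stderr, "connect failed\n");
              return 1;
          }

          int node_id = 3;     /* data node to stop (example value) */
          int disconnect = 0;
          /* abort=0, force=1: permit the stop even while other
             nodes are starting. */
          int stopped = ndb_mgm_stop4(h, 1, &node_id, 0, 1, &disconnect);
          printf("nodes stopped: %d\n", stopped);

          ndb_mgm_destroy_handle(&h);
          return 0;
      }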

Bugs Fixed

  • Cluster API: In some circumstances, very large BLOB read and write operations in MySQL Cluster applications can cause excessive resource usage and even exhaustion of memory. To fix this issue and to provide increased stability when performing such operations, it is now possible to set limits on the volume of BLOB data to be read or written within a given transaction in such a way that when these limits are exceeded, the current transaction implicitly executes any accumulated operations. This avoids an excessive buildup of pending data which can result in resource exhaustion in the NDB kernel. The limits on the amount of data to be read and on the amount of data to be written before this execution takes place can be configured separately. (In other words, it is now possible in MySQL Cluster to specify read batching and write batching that is specific to BLOB data.) These limits can be configured either on the NDB API level, or in the MySQL Server.

    On the NDB API level, four new methods are added to the NdbTransaction object. getMaxPendingBlobReadBytes() and setMaxPendingBlobReadBytes() can be used to get and to set, respectively, the maximum amount of BLOB data to be read that accumulates before this implicit execution is triggered. getMaxPendingBlobWriteBytes() and setMaxPendingBlobWriteBytes() can be used to get and to set, respectively, the maximum volume of BLOB data to be written that accumulates before implicit execution occurs.

    For the MySQL server, two new options are added. The --ndb-blob-read-batch-bytes option sets a limit on the amount of pending BLOB data to be read before triggering implicit execution, and the --ndb-blob-write-batch-bytes option controls the amount of pending BLOB data to be written. These limits can also be set in the mysqld configuration file, or read and set within the mysql client and other MySQL client applications using the corresponding server system variables; a brief sketch of both interfaces follows this item. (Bug #59113)
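
    A hedged NDB API sketch (the two setter methods are from this note; the Ndb object setup is omitted and the limits shown are example values only):

      #include <NdbApi.hpp>

      /* Assumes "ndb" is an already-connected Ndb object. */
      void set_blob_batch_limits(Ndb *ndb)
      {
          NdbTransaction *trans = ndb->startTransaction();
          if (trans == NULL)
              return;

          /* Once pending BLOB data exceeds these limits, the
             transaction implicitly executes any accumulated
             operations (example: 64 KB in each direction). */
          trans->setMaxPendingBlobReadBytes(65536);
          trans->setMaxPendingBlobWriteBytes(65536);

          /* ... define and execute BLOB operations as usual ... */

          ndb->closeTransaction(trans);
      }

    On the server side, the same limits might be adjusted at runtime through the corresponding system variables (variable names follow the option names given above):

      mysql> SET GLOBAL ndb_blob_read_batch_bytes = 65536;
      mysql> SET GLOBAL ndb_blob_write_batch_bytes = 65536;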

  • Two related problems could occur with read-committed scans made in parallel with transactions combining multiple (concurrent) operations:

    1. When committing a multiple-operation transaction that contained concurrent insert and update operations on the same record, the commit arrived first for the insert and then for the update. If a read-committed scan arrived between these operations, it could thus read incorrect data; in addition, if the scan read variable-size data, it could cause the data node to fail.

    2. When rolling back a multiple-operation transaction having concurrent delete and insert operations on the same record, the abort arrived first for the delete operation, and then for the insert. If a read-committed scan arrived between the delete and the insert, it could incorrectly assume that the record should not be returned (in other words, the scan treated the insert as though it had not yet been committed).

    (Bug #59496)

  • On Windows platforms, issuing a SHUTDOWN command in the ndb_mgm client caused management processes that had been started with the --nodaemon option to exit abnormally. (Bug #59437)

  • A row insert or update followed by a delete operation on the same row within the same transaction could in some cases lead to a buffer overflow. (Bug #59242)

    References: See also Bug #56524. This bug was introduced by Bug #35208.

  • Data nodes configured with very large amounts (multiple gigabytes) of DiskPageBufferMemory failed during startup with NDB error 2334 (Job buffer congestion). (Bug #58945)

    References: See also Bug #47984.

  • The FAIL_REP signal, used inside the NDB kernel to declare that a node has failed, now includes the node ID of the node that detected the failure. This information can be useful in debugging. (Bug #58904)

  • When executing a full table scan caused by a WHERE condition using unique_key IS NULL in combination with a join, NDB failed to close the scan. (Bug #58750)

    References: See also Bug #57481.

  • Issuing EXPLAIN EXTENDED for a query that would use condition pushdown could cause mysqld to crash. (Bug #58553, Bug #11765570)

  • In some circumstances, an SQL trigger on an NDB table could read stale data. (Bug #58538)

  • During a node takeover, it was possible in some circumstances for one of the remaining nodes to send an extra transaction confirmation (LQH_TRANSCONF) signal to the DBTC kernel block, conceivably leading to a crash of the data node trying to take over as the new transaction coordinator. (Bug #58453)

  • A query having multiple predicates joined by OR in the WHERE clause and which used the sort_union access method (as shown using EXPLAIN) could return duplicate rows. (Bug #58280)

  • Trying to drop an index while it was being used to perform scan updates caused data nodes to crash. (Bug #58277, Bug #57057)

  • When handling failures of multiple data nodes, an error in the construction of internal signals could cause the cluster's remaining nodes to crash. This issue was most likely to affect clusters with large numbers of data nodes. (Bug #58240)

  • The functions strncasecmp and strcasecmp were declared in ndb_global.h but never defined or used. The declarations have been removed. (Bug #58204)

  • Some queries of the form SELECT ... WHERE column IN (subquery) against an NDB table could cause mysqld to hang in an endless loop. (Bug #58163)

  • The number of rows affected by a statement that used a WHERE clause having an IN condition with a value list containing a great many elements, and that deleted or updated enough rows such that NDB processed them in batches, was not computed or reported correctly. (Bug #58040)

  • MySQL Cluster failed to compile correctly on FreeBSD 8.1 due to misplaced #include statements. (Bug #58034)

  • A query using BETWEEN as part of a pushed-down WHERE condition could cause mysqld to hang or crash. (Bug #57735)

  • Unlike in NDB 6.3 and earlier versions of MySQL Cluster, data nodes no longer allocated all of their memory before becoming ready to exchange heartbeat and other messages with management nodes. This caused problems when data nodes configured with large amounts of memory failed to show as connected, or showed as being in the wrong start phase, in the ndb_mgm client even after making their initial connections to the management server and fetching their configuration data. With this fix, data nodes once again allocate all memory as they did in earlier MySQL Cluster versions. (Bug #57568)

  • In some circumstances, it was possible for mysqld to begin a new multi-range read scan without having closed a previous one. This could lead to exhaustion of all scan operation objects, transaction objects, or lock objects (or some combination of these) in NDB, causing queries to fail with such errors as Lock wait timeout exceeded or Connect failure - out of connection objects. (Bug #57481)

    References: See also Bug #58750.

  • Queries using column IS [NOT] NULL on a table with a unique index created with USING HASH on the same column always returned an empty result. (Bug #57032)

  • With engine_condition_pushdown enabled, a query using LIKE on an ENUM column of an NDB table failed to return any results. This issue is resolved by disabling engine_condition_pushdown when performing such queries. (Bug #53360)

  • When a slash character (/) was used as part of the name of an index on an NDB table, attempting to execute a TRUNCATE TABLE statement on the table failed with the error Index not found, and the table was rendered unusable. (Bug #38914)

  • Partitioning; Disk Data: When using multi-threaded data nodes, an NDB table created with a very large value for the MAX_ROWS option could—if this table was dropped and a new table with fewer partitions, but having the same table ID, was created—cause ndbmtd to crash when performing a system restart. This was because the server attempted to examine each partition whether or not it actually existed.

    This issue is the same as that reported in Bug #45154, except that the current issue is specific to ndbmtd instead of ndbd. (Bug #58638)

  • Disk Data: In certain cases, a race condition could occur when DROP LOGFILE GROUP removed the logfile group while a read or write of one of the affected files was in progress, which in turn could lead to a crash of the data node. (Bug #59502)

  • Disk Data: A race condition could sometimes be created when DROP TABLESPACE was run concurrently with a local checkpoint; this could in turn lead to a crash of the data node. (Bug #59501)

  • Disk Data: What should have been an online drop of a multi-column index was actually performed offline. (Bug #55618)

  • Disk Data: When at least one data node was not running, queries against the INFORMATION_SCHEMA.FILES table took an excessive length of time to complete because the MySQL server waited for responses from any stopped nodes to time out. Now, in such cases, MySQL does not attempt to contact nodes which are not known to be running. (Bug #54199)

  • Cluster Replication: When a mysqld performing replication of a MySQL Cluster that uses ndbmtd is forcibly disconnected (thus causing an API_FAIL_REQ signal to be sent), the SUMA kernel block iterates through all active subscriptions and disables them. If a given subscription has no more active users, then this subscription is also deactivated in the DBTUP kernel block.

    This process had no flow control, and when there were many subscriptions being deactivated (more than 512), this could cause an overflow in the short-time queue defined in the DbtupProxy class.

    The fix for this problem includes implementing proper flow control for this deactivation process and increasing the size of the short-time queue in DbtupProxy. (Bug #58693)

  • Cluster API: It was not possible to obtain the status of nodes accurately after an attempt to stop a data node using ndb_mgm_stop() failed without returning an error. (Bug #58319)

  • Cluster API: Attempting to read the same value (using getValue()) more than 9000 times within the same transaction caused the transaction to hang when executed. Now when more reads are performed in this way than can be accommodated in a single transaction, the call to execute() fails with a suitable error. (Bug #58110)

  • ClusterJ: When building MySQL Cluster NDB 7.1 on Windows using vcbuild with parallelism set to 8, the clusterj.jar file was built before its dependencies, causing the build of this file to fail. (Bug #58563)
