Changes in MySQL Cluster NDB 7.1.29 (5.1.72-ndb-7.1.29) (2013-11-04)

MySQL Cluster NDB 7.1.29 is a new release of MySQL Cluster, incorporating new features in the NDB storage engine and fixing recently discovered bugs in previous MySQL Cluster NDB 7.1 releases.

Obtaining MySQL Cluster NDB 7.1. The latest MySQL Cluster NDB 7.1 binaries for supported platforms can be obtained from the MySQL downloads site; source code for the latest MySQL Cluster NDB 7.1 release can be obtained from the same location. You can also access the MySQL Cluster NDB 7.1 development source tree.

This release also incorporates all bugfixes and changes made in previous MySQL Cluster releases, as well as all bugfixes and feature changes which were added in mainline MySQL 5.1 through MySQL 5.1.72 (see Changes in MySQL 5.1.72 (2013-09-20)).

Functionality Added or Changed

  • The length of time a management node waits for a heartbeat message from another management node is now configurable using the HeartbeatIntervalMgmdMgmd management node configuration parameter added in this release. The connection is considered dead after 3 missed heartbeats, so the default interval of 1500 milliseconds yields a connection timeout of approximately 6000 ms. (Bug #17807768, Bug #16426805)
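
    A minimal config.ini fragment using the new parameter might look like this; the [MGM DEFAULT] section placement and the 2000 ms value shown here are illustrative:

      [MGM DEFAULT]
      # Heartbeat interval between management nodes, in milliseconds.
      # With 3 missed heartbeats allowed, 2000 ms yields a connection
      # timeout of approximately 8000 ms.
      HeartbeatIntervalMgmdMgmd=2000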

Bugs Fixed

  • Trying to restore to a table having a BLOB column in a different position from that of the original one caused ndb_restore --restore_data to fail. (Bug #17395298)

  • ndb_restore could abort during the last stages of a restore using attribute promotion or demotion into an existing table. This could happen if a converted attribute was nullable and the backup had been run on an active database. (Bug #17275798)

  • The DBUTIL data node block is now less strict about the order in which it receives certain messages from other nodes. (Bug #17052422)

  • The Windows error ERROR_FILE_EXISTS was not recognized by NDB, which treated it as an unknown error. (Bug #16970960)

  • RealTimeScheduler did not work correctly with data nodes running ndbmtd. (Bug #16961971)

  • Maintenance and checking of parent batch completion in the SPJ block of the NDB kernel was reimplemented. Among other improvements, the completion states of all ancestor nodes in the tree are now preserved. (Bug #16925513)

  • The LCP fragment scan watchdog periodically checks for lack of progress in a fragment scan performed as part of a local checkpoint, and shuts down the node if there is no progress after a given amount of time has elapsed. This interval, formerly hard-coded as 60 seconds, can now be configured using the LcpScanProgressTimeout data node configuration parameter added in this release.

    This configuration parameter sets the maximum time the local checkpoint can be stalled before the LCP fragment scan watchdog shuts down the node. The default is 60 seconds, which provides backward compatibility with previous releases.

    You can disable the LCP fragment scan watchdog by setting this parameter to 0. (Bug #16630410)
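
    For example, the following config.ini fragment raises the timeout for all data nodes; the 120-second value shown here is illustrative:

      [NDBD DEFAULT]
      # Maximum time (in seconds) an LCP fragment scan may make no
      # progress before the watchdog shuts down the node; 0 disables
      # the watchdog.
      LcpScanProgressTimeout=120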

  • Added the ndb_error_reporter options --connection-timeout, which makes it possible to set a timeout for connecting to nodes, --dry-scp, which disables scp connections to remote hosts, and --skip-nodegroup, which skips all nodes in a given node group. (Bug #16602002)
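
    An invocation using all three new options might look like this; the configuration file path, timeout value, and node group ID shown here are illustrative:

      shell> ndb_error_reporter /var/lib/mysql-cluster/config.ini \
                 --connection-timeout=30 --dry-scp --skip-nodegroup=1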

    References: See also Bug #11752792, Bug #44082.

  • The NDB receive thread waited unnecessarily for additional job buffers to become available when receiving data. This caused the receive mutex to be held during this wait, which could result in a busy wait when the receive thread was running with real-time priority.

    This fix also handles the case where a negative return value from the initial check of the job buffer by the receive thread prevented further execution of data reception, which could possibly lead to communication blockage or underutilization of the configured ReceiveBufferMemory. (Bug #15907515)

  • When the available job buffers for a given thread fell below the critical threshold, the internal multi-threading job scheduler waited for job buffers for incoming rather than outgoing signals to become available, which meant that the scheduler waited the maximum timeout (1 millisecond) before resuming execution. (Bug #15907122)

  • Under some circumstances, a race occurred where the wrong watchdog state could be reported. A new state name Packing Send Buffers is added for watchdog state number 11, previously reported as Unknown place. As part of this fix, the state numbers for states without names are now always reported in such cases. (Bug #14824490)

  • When a node fails, the Distribution Handler (DBDIH kernel block) takes steps together with the Transaction Coordinator (DBTC) to make sure that all ongoing transactions involving the failed node are taken over by a surviving node and either committed or aborted. Transactions taken over which are then committed belong in the epoch that is current at the time the node failure occurs, so the surviving nodes must keep this epoch available until the transaction takeover is complete. This is needed to maintain ordering between epochs.

    A problem was encountered in the mechanism intended to keep the current epoch open which led to a race condition between this mechanism and that normally used to declare the end of an epoch. This could cause the current epoch to be closed prematurely, leading to failure of one or more surviving data nodes. (Bug #14623333, Bug #16990394)

  • ndb_error_reporter did not support the --help option. (Bug #11756666, Bug #48606)

    References: See also Bug #11752792, Bug #44082.

  • When START BACKUP WAIT STARTED was run from the command line using ndb_mgm --execute (-e), the client did not exit until the backup completed. (Bug #11752837, Bug #44146)
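
    With this fix, a command such as the following returns as soon as the backup has started, rather than waiting until it has completed:

      shell> ndb_mgm -e "START BACKUP WAIT STARTED"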

  • Formerly, the node used as the coordinator or leader for distributed decision making between nodes (also known as the DICT manager—see The DBDICT Block) was indicated in the output of the ndb_mgm client SHOW command as the master node, although this node has no relationship to a master server in MySQL Replication. (It should also be noted that it is not necessary to know which node is the leader except when debugging NDBCLUSTER source code.) To avoid possible confusion, this label has been removed, and the leader node is now indicated in SHOW command output using an asterisk (*) character. (Bug #11746263, Bug #24880)
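
    For example, in SHOW output such as the following (the node IDs, addresses, and version strings shown here are illustrative), node 1 is the current leader:

      ndb_mgm> SHOW
      ...
      [ndbd(NDB)]     2 node(s)
      id=1    @192.168.0.1  (mysql-5.1.72 ndb-7.1.29, Nodegroup: 0, *)
      id=2    @192.168.0.2  (mysql-5.1.72 ndb-7.1.29, Nodegroup: 0)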

  • In a number of internal methods, program execution failed to break out of a loop after the desired condition had been met, performing unneeded work in all cases where this occurred. (Bug #69610, Bug #69611, Bug #69736, Bug #17030606, Bug #17030614, Bug #17160263)

  • ABORT BACKUP in the ndb_mgm client (see Commands in the MySQL Cluster Management Client) took an excessive amount of time to return (approximately as long as the backup would have required to complete, had it not been aborted), and failed to remove the files that had been generated by the aborted backup. (Bug #68853, Bug #17719439)

  • Attribute promotion and demotion when restoring data to NDB tables using ndb_restore --restore_data with the --promote-attributes and --lossy-conversions options have been improved as follows (see the example following this list):

    • Columns of types CHAR and VARCHAR can now be promoted to BINARY and VARBINARY, and columns of the latter two types can be demoted to one of the first two.

      Note that converted character data is not checked to conform to any character set.

    • Any of the types CHAR, VARCHAR, BINARY, and VARBINARY can now be promoted to TEXT or BLOB.

      When performing such promotions, the only other sort of type conversion that can be performed at the same time is between character types and binary types.
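
    A minimal ndb_restore invocation making use of promotion during a data restore might look like this; the node ID, backup ID, and backup path shown here are illustrative:

      shell> ndb_restore --nodeid=1 --backupid=5 --restore_data \
                 --promote-attributes \
                 --backup_path=/var/lib/mysql-cluster/BACKUP/BACKUP-5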

  • Cluster Replication: Trying to use a stale .frm file and encountering a mismatch between table definitions could cause mysqld to generate errors when writing to the binary log. (Bug #17250994)

  • Cluster Replication: Replaying a binary log that had been written by a mysqld from a MySQL Server distribution (and not from a MySQL Cluster distribution), and that contained DML statements, on a MySQL Cluster SQL node could lead to failure of the SQL node. (Bug #16742250)

  • Cluster API: The Event::setTable() method now supports a pointer or a reference to a table as its required argument. If a null table pointer is used, the method now returns -1 to make it clear that this is what has occurred. (Bug #16329082)
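
    The following NDB API (C++) fragment is a minimal sketch of checking for this condition; the dictionary pointer, table name, and event name used here are assumptions for illustration:

      // myDict is assumed to be a valid NdbDictionary::Dictionary*,
      // obtained elsewhere from Ndb::getDictionary().
      const NdbDictionary::Table *tab = myDict->getTable("mytable");

      NdbDictionary::Event myEvent("myevent");
      // setTable() now returns -1 when passed a null table pointer,
      // so a missing table can be detected here rather than later.
      if (myEvent.setTable(tab) == -1) {
        // handle the error (for example, "mytable" does not exist)
      }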
