
Changes in MySQL NDB Cluster 7.4.4 (5.6.23-ndb-7.4.4) (2015-02-26, General Availability)

MySQL NDB Cluster 7.4.4 is the first GA release of MySQL NDB Cluster 7.4, based on MySQL Server 5.6 and including new features in version 7.4 of the NDB storage engine, as well as fixing recently discovered bugs in previous NDB Cluster releases.

Obtaining MySQL NDB Cluster 7.4.  MySQL NDB Cluster 7.4 source code and binaries can be obtained from the MySQL downloads site.

For an overview of changes made in MySQL NDB Cluster 7.4, see What is New in NDB Cluster 7.4.

This release also incorporates all bug fixes and changes made in previous NDB Cluster releases, as well as all bug fixes and feature changes which were added in mainline MySQL 5.6 through MySQL 5.6.23 (see Changes in MySQL 5.6.23 (2015-02-02, General Availability)).

Bugs Fixed

  • NDB Cluster APIs: When a transaction is started from a cluster connection, Table and Index schema objects may be passed to this transaction for use. If these schema objects were acquired from a different connection (Ndb_cluster_connection object), they can be deleted at any time by the deletion or disconnection of the owning connection. This can leave the transaction holding invalid schema objects, which causes an NDB API application to fail when they are dereferenced.

    To avoid this problem, if your application uses multiple connections, you can now enable a check that detects sharing of schema objects between connections when a schema object is passed to a transaction, using the NdbTransaction::setSchemaObjectOwnerChecks() method added in this release. When this check is enabled, schema objects with the same names are acquired from the transaction's own connection and compared with the schema objects passed to the transaction; a mismatch causes the application to fail with an error. (Bug #19785977)
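The behavior of such an ownership check can be modeled roughly as follows. This is a hypothetical Python sketch of the idea, not the NDB API itself; the class and attribute names are invented for illustration:

```python
class Connection:
    """Stands in for Ndb_cluster_connection: each connection caches
    and owns its own schema objects."""
    def __init__(self, name):
        self.name = name
        self._tables = {}

    def get_table(self, table_name):
        # Each connection hands out its own copy of the schema object.
        if table_name not in self._tables:
            self._tables[table_name] = {"name": table_name, "owner": self}
        return self._tables[table_name]


class Transaction:
    """Stands in for a transaction with owner checking enabled
    (models the effect of setSchemaObjectOwnerChecks(true))."""
    def __init__(self, connection, check_owners=False):
        self.connection = connection
        self.check_owners = check_owners

    def use_table(self, table):
        if self.check_owners:
            # Re-acquire the same-named object from this transaction's own
            # connection and compare: a mismatch means the object came from
            # a different connection and may be deleted out from under us.
            own = self.connection.get_table(table["name"])
            if own is not table:
                raise RuntimeError("schema object acquired from another connection")
        return table
```

With two connections, passing a table acquired from one connection to a checked transaction on the other raises an error instead of risking a later crash on a dangling object.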

  • NDB Cluster APIs: The increase in the default number of hashmap buckets (DefaultHashMapSize API node configuration parameter) from 240 to 3840 in MySQL NDB Cluster 7.2.11 increased the size of the internal DictHashMapInfo::HashMap type considerably. This type was allocated on the stack in some getTable() calls, which could lead to stack overflow issues for NDB API users.

    To avoid this problem, the hashmap is now dynamically allocated from the heap. (Bug #19306793)

  • When upgrading a MySQL NDB Cluster from NDB 7.3 to NDB 7.4, the first data node started with the NDB 7.4 data node binary caused the master node (still running NDB 7.3) to fail with Error 2301, then itself failed during Start Phase 5. (Bug #20608889)

  • A memory leak in NDB event buffer allocation caused an event to be leaked for each epoch. (Because an SQL node uses three event buffers, each SQL node leaked three events per epoch.) This meant that a MySQL NDB Cluster mysqld leaked memory at a rate inversely proportional to the value of TimeBetweenEpochs: the smaller the value of this parameter, the more memory was leaked per unit of time. (Bug #20539452)
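The inverse relationship follows from simple arithmetic: one event is leaked per buffer per epoch, and the number of epochs per second grows as TimeBetweenEpochs shrinks. A small sketch (the per-event memory cost is left symbolic; the 100 ms figure used below is the usual TimeBetweenEpochs default, stated here as an assumption):

```python
def leaked_events_per_second(time_between_epochs_ms, buffers_per_sql_node=3):
    """One event leaked per buffer per epoch;
    epochs per second = 1000 / epoch length in milliseconds."""
    epochs_per_second = 1000.0 / time_between_epochs_ms
    return buffers_per_sql_node * epochs_per_second

# Halving TimeBetweenEpochs doubles the leak rate:
# leaked_events_per_second(100) -> 30.0 events per second
# leaked_events_per_second(50)  -> 60.0 events per second
```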

  • The values of the Ndb_last_commit_epoch_server and Ndb_last_commit_epoch_session status variables were incorrectly reported on some platforms. To correct this problem, these values are now stored internally as long long, rather than long. (Bug #20372169)
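The effect of the too-narrow type can be illustrated by masking a 64-bit epoch value down to 32 bits, as happens on platforms where long is 32 bits wide. The epoch value below is invented for illustration (NDB epoch numbers pack a major and a minor counter into the high and low 32-bit halves):

```python
def store_as_32bit_long(value):
    """Model storing a 64-bit counter in a 32-bit long: the high half is lost."""
    return value & 0xFFFFFFFF

def store_as_64bit_long_long(value):
    """Model the fixed behavior: a 64-bit long long keeps the whole value."""
    return value & 0xFFFFFFFFFFFFFFFF

# An epoch-style value with distinct high and low halves:
epoch = (123456 << 32) | 7
# Stored as a 32-bit long, only the low half (7) survives, so the
# reported status variable value is wrong; as long long it is intact.
```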

  • When restoring a MySQL NDB Cluster from backup, nodes that failed and were restarted during restoration of another node became unresponsive, which subsequently caused ndb_restore to fail and exit. (Bug #20069066)

  • When a data node fails or is being restarted, the remaining nodes in the same nodegroup resend to subscribers any data which they determine has not already been sent by the failed node. Normally, when a data node (more precisely, its SUMA kernel block) has sent all data belonging to an epoch for which it is responsible, it sends a SUB_GCP_COMPLETE_REP signal, together with a count, to all subscribers, each of which responds with a SUB_GCP_COMPLETE_ACK. When SUMA receives this acknowledgment from all subscribers, it reports this to the other nodes in the same nodegroup so that they know there is no need to resend this data in case of a subsequent node failure. If a node failed after all subscribers had sent this acknowledgement but before all the other nodes in the same nodegroup had received it from the failing node, data for some epochs could be sent (and reported as complete) twice, which could lead to an unplanned shutdown.

    The fix for this issue adds to the count reported by SUB_GCP_COMPLETE_ACK a list of identifiers which the receiver can use to keep track of which buckets are completed and to ignore any duplicate report for an already completed bucket. (Bug #17579998)
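The fix amounts to duplicate suppression keyed on bucket identifiers. A minimal sketch of the idea (the names are invented for illustration and do not reflect SUMA's actual data structures):

```python
class CompletionTracker:
    """Tracks which buckets of an epoch have already been reported
    complete, so a resend after a node failure is recognized and ignored."""
    def __init__(self):
        self.completed = set()

    def report_complete(self, epoch, bucket_id):
        key = (epoch, bucket_id)
        if key in self.completed:
            return False   # duplicate report for a completed bucket: ignore
        self.completed.add(key)
        return True        # first report for this bucket: process normally
```

A resent completion report for a bucket that was already acknowledged is simply dropped, rather than causing the epoch's data to be processed twice.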

  • The ndbinfo.restart_info table did not contain a new row as expected following a node restart. (Bug #75825, Bug #20504971)

  • The output format of SHOW CREATE TABLE for an NDB table containing foreign key constraints did not match that for the equivalent InnoDB table, which could lead to issues with some third-party applications. (Bug #75515, Bug #20364309)

  • An ALTER TABLE statement containing comments and a partitioning option against an NDB table caused the SQL node on which it was executed to fail. (Bug #74022, Bug #19667566)