MySQL NDB Cluster 7.5 Release Notes

Changes in MySQL NDB Cluster 7.5.6 (5.7.18-ndb-7.5.6) (2017-04-10, General Availability)

MySQL NDB Cluster 7.5.6 is a new release of MySQL NDB Cluster 7.5, based on MySQL Server 5.7 and including features in version 7.5 of the NDB storage engine, as well as fixing recently discovered bugs in previous NDB Cluster releases.

Obtaining MySQL NDB Cluster 7.5.  MySQL NDB Cluster 7.5 source code and binaries can be obtained from https://dev.mysql.com/downloads/cluster/.

For an overview of changes made in MySQL NDB Cluster 7.5, see What is New in NDB Cluster 7.5.

This release also incorporates all bug fixes and changes made in previous NDB Cluster releases, as well as all bug fixes and feature changes which were added in mainline MySQL 5.7 through MySQL 5.7.18 (see Changes in MySQL 5.7.18 (2017-04-10, General Availability)).

Functionality Added or Changed

  • Packaging: Yum repo packages are added for EL5, EL6, EL7, and SLES12.

    Apt repo packages are added for Debian 7, Debian 8, Ubuntu 14.04, and Ubuntu 16.04.

Bugs Fixed

  • Partitioning: The output of EXPLAIN PARTITIONS displayed incorrect values in the partitions column when run on an explicitly partitioned NDB table having a large number of partitions.

    This occurred because, when processing an EXPLAIN statement, mysqld calculated the partition ID for a hash value as (hash_value % number_of_partitions), which is correct only when the table is partitioned by HASH, since other partitioning types use different methods of mapping hash values to partition IDs. This fix replaces the partition ID calculation performed by mysqld with an internal NDB function which calculates the partition ID correctly, based on the table's partitioning type. (Bug #21068548)
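
    As a minimal illustration (the table and column names here are hypothetical, and the partition count assumes a cluster large enough to support it), the partitions column shown by a statement such as the following could be incorrect prior to this fix, since the table is explicitly partitioned but not by HASH:

        CREATE TABLE t1 (
            id INT NOT NULL PRIMARY KEY,
            c1 VARCHAR(100)
        )
        ENGINE=NDB
        PARTITION BY KEY (id)
        PARTITIONS 64;

        EXPLAIN PARTITIONS SELECT * FROM t1 WHERE id = 100;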

    References: See also: Bug #25501895, Bug #14672885.

  • Solaris: The minimum required version of Solaris is now Solaris 11 update 3, due to a dependency on system runtime libraries.

  • Solaris: MySQL is now built with Developer Studio 12.5 instead of gcc. The binaries require the Developer Studio C/C++ runtime libraries to be installed. For instructions on installing only the libraries, see:

    https://docs.oracle.com/cd/E60778_01/html/E60743/gozsu.html

  • NDB Disk Data: Stale data from NDB Disk Data tables that had been dropped could potentially be included in backups, because disk scans remained enabled for such tables. To prevent this possibility, disk scans are now disabled when taking a backup, as are other types of scans. (Bug #84422, Bug #25353234)

  • NDB Disk Data: In some cases, setting dynamic in-memory columns of an NDB Disk Data table to NULL was not handled correctly. (Bug #79253, Bug #22195588)
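
    A minimal sketch of the affected pattern (all object names here are hypothetical):

        CREATE LOGFILE GROUP lg1
            ADD UNDOFILE 'undo1.log'
            ENGINE=NDB;

        CREATE TABLESPACE ts1
            ADD DATAFILE 'data1.dat'
            USE LOGFILE GROUP lg1
            ENGINE=NDB;

        CREATE TABLE dd (
            id INT NOT NULL PRIMARY KEY,
            mem_col VARCHAR(100) COLUMN_FORMAT DYNAMIC STORAGE MEMORY,
            disk_col VARCHAR(100) STORAGE DISK
        )
        TABLESPACE ts1 STORAGE DISK
        ENGINE=NDB;

        -- Setting the dynamic in-memory column to NULL is the case
        -- that was not always handled correctly
        UPDATE dd SET mem_col = NULL WHERE id = 1;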

  • NDB Cluster APIs: When signals were sent while the client process was receiving signals such as SUB_GCP_COMPLETE_ACK and TC_COMMIT_ACK, these signals were temporarily buffered in the send buffers of the clients which sent them. If not explicitly flushed, the signals remained in these buffers until the client woke up again and flushed its buffers. Because no attempt was made to enforce an upper limit on how long a signal could remain unsent in the local client buffers, this could lead to timeouts and other misbehavior in the components waiting for these signals.

    In addition, the fix for a previous, related issue likely made this situation worse by removing client wakeups during which the client send buffers could have been flushed.

    The current fix moves the responsibility for flushing messages sent by the receivers to the poll_owner client. This means that it is no longer necessary to wake up all clients merely to have them flush their buffers. Instead, the poll_owner client (which is already running) flushes the send buffers of whatever was sent while delivering signals to the recipients. (Bug #22705935)

    References: See also: Bug #18753341, Bug #23202735.

  • CPU usage of the data node's main thread by the DBDIH master block could approach 100% at the end of a local checkpoint in certain cases where the database had a very large number of fragment replicas. This is fixed by reducing the frequency and range of fragment queue checking during an LCP. (Bug #25443080)

  • The ndb_print_backup_file utility failed when attempting to read from a backup file for a backup that included a table having more than 500 columns. (Bug #25302901)

    References: See also: Bug #25182956.

  • CMake now detects whether a GCC 5.3.0 loop optimization bug occurs and attempts a workaround if so. (Bug #25253540)

  • Multiple data node failures during a partial restart of the cluster could cause API nodes to fail. This occurred when one thread expanded an internal object ID map, thereby changing its location in memory, while another thread was still accessing the old location, leading to a segmentation fault in the latter thread.

    The internal map() and unmap() functions in which this issue arose have now been made thread-safe. (Bug #25092498)

    References: See also: Bug #25306089.

  • During the initial phase of a scan request, the DBTC kernel block sends a series of DIGETNODESREQ signals to the DBDIH block in order to obtain dictionary information for each fragment to be scanned. If DBDIH returned DIGETNODESREF, the error code from that signal was not read, and Error 218 Out of LongMessageBuffer was always returned instead. Now in such cases, the error code from the DIGETNODESREF signal is actually used. (Bug #85225, Bug #25642405)

  • A race condition was possible between schema operations on the same database object originating from different SQL nodes. This could occur when one of the SQL nodes was late in releasing its metadata lock on the affected schema object or objects, in such a fashion that the schema distribution coordinator took the lock release as acknowledgment of the wrong schema change. This could result in incorrect application of the schema changes on some or all of the SQL nodes, or in a timeout with repeated waiting max ### sec for distributing... messages in the node logs, due to failure of the distribution protocol. (Bug #85010, Bug #25557263)

    References: See also: Bug #24926009.

  • When a foreign key was added to or dropped from an NDB table using an ALTER TABLE statement, the parent table's metadata was not updated, which made it possible to execute invalid alter operations on the parent afterwards.

    Until you can upgrade to this release, you can work around this problem by running SHOW CREATE TABLE on the parent immediately after adding or dropping the foreign key; this statement causes the table's metadata to be reloaded. (Bug #82989, Bug #24666177)
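
    A sketch of this workaround, using hypothetical tables parent and child:

        ALTER TABLE child
            ADD CONSTRAINT fk_parent
            FOREIGN KEY (parent_id) REFERENCES parent (id);

        -- Reload the parent table's metadata immediately afterwards
        SHOW CREATE TABLE parent;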

  • Transactions on NDB tables with cascading foreign keys returned inconsistent results when the query cache was also enabled, because mysqld was not aware of child table updates. This meant that results for a later SELECT from the child table were fetched from the query cache, which at that point contained stale data.

    This is fixed in such cases by adding all children of the parent table to an internal list that NDB checks for updates whenever the parent is updated, so that mysqld is now properly informed of any updated child tables whose entries should be invalidated from the query cache. (Bug #81776, Bug #23553507)
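
    A minimal sketch of the affected pattern (the table names are hypothetical, and this assumes the query cache is enabled):

        CREATE TABLE parent (
            id INT NOT NULL PRIMARY KEY
        ) ENGINE=NDB;

        CREATE TABLE child (
            id INT NOT NULL PRIMARY KEY,
            parent_id INT,
            FOREIGN KEY (parent_id) REFERENCES parent (id)
                ON DELETE CASCADE
        ) ENGINE=NDB;

        SELECT * FROM child;               -- result may be cached
        DELETE FROM parent WHERE id = 1;   -- cascades to child rows
        SELECT * FROM child;               -- formerly could return stale rows
                                           -- from the query cache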

  • Ubuntu 14.04 and Ubuntu 16.04 are now supported.