Changes in MySQL NDB Cluster 7.5.16 (5.7.28-ndb-7.5.16) (2019-10-15, General Availability)

MySQL NDB Cluster 7.5.16 is a new release of MySQL NDB Cluster 7.5, based on MySQL Server 5.7 and including features in version 7.5 of the NDB storage engine, as well as fixing recently discovered bugs in previous NDB Cluster releases.

Obtaining MySQL NDB Cluster 7.5.  MySQL NDB Cluster 7.5 source code and binaries can be obtained from https://dev.mysql.com/downloads/cluster/.

For an overview of changes made in MySQL NDB Cluster 7.5, see What is New in NDB Cluster 7.5.

This release also incorporates all bug fixes and changes made in previous NDB Cluster releases, as well as all bug fixes and feature changes which were added in mainline MySQL 5.7 through MySQL 5.7.28 (see Changes in MySQL 5.7.28 (2019-10-14, General Availability)).

Functionality Added or Changed

  • ndb_restore now reports the specific NDB error number and message when it is unable to load a table descriptor from a backup .ctl file. This can happen when attempting to restore a backup taken from a later version of the NDB Cluster software to a cluster running an earlier version—for example, when the backup includes a table using a character set which is unknown to the version of ndb_restore being used to restore it. (Bug #30184265)

Bugs Fixed

  • MySQL NDB ClusterJ: If ClusterJ was deployed as a separate module of a multi-module web application, the exception java.lang.IllegalArgumentException: non-public interface is not defined by the given loader was thrown when the application tried to create a new instance of a domain object. This was because ClusterJ always tries to create a proxy class from which the domain object can be instantiated, and this proxy class implements both the domain interface and the protected DomainTypeHandlerImpl::Finalizable interface. In this case the two interfaces had different class loaders, since they belonged to different modules running on the web server; when ClusterJ tried to create the proxy class using the domain object interface's class loader, the exception just described was thrown. This fix makes the Finalizable interface public, so that the class loader of the web application can access it even when it belongs to a different module than that of the domain interface. (Bug #29895213)
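
    A minimal Java sketch of the underlying JDK behavior follows; the Domain and Finalizable interfaces here are hypothetical stand-ins rather than ClusterJ types. java.lang.reflect.Proxy refuses to implement a non-public interface that is not visible to the class loader it is given, which is exactly the condition removed by making the interface public.

      import java.lang.reflect.InvocationHandler;
      import java.lang.reflect.Proxy;

      public class ProxyLoaderDemo {
          // Hypothetical stand-ins for the domain interface and for the
          // Finalizable interface, which was non-public before this fix.
          public interface Domain { }
          interface Finalizable { }   // package-private, as before the fix

          public static void main(String[] args) {
              InvocationHandler handler = (proxy, method, margs) -> null;
              // Succeeds here because both interfaces were loaded by the
              // same class loader, so package-private Finalizable is visible.
              Object obj = Proxy.newProxyInstance(
                      Domain.class.getClassLoader(),
                      new Class<?>[] { Domain.class, Finalizable.class },
                      handler);
              System.out.println("proxy class: " + obj.getClass().getName());
              // Had the two interfaces come from different class loaders (as
              // with separate web-application modules), newProxyInstance
              // would instead throw java.lang.IllegalArgumentException:
              // non-public interface is not defined by the given loader.
          }
      }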

  • MySQL NDB ClusterJ: ClusterJ sometimes failed with a segmentation fault after reconnecting to an NDB Cluster. This was due to ClusterJ reusing database metadata objects left over from the previous connection. With this fix, such objects are discarded before reconnecting to the cluster. (Bug #29891983)
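
    A short sketch of the reconnection pattern as seen from an application follows, assuming a management server at localhost:1186 and a schema named test. SessionFactory.reconnect() asks the factory to close existing sessions and establish a new connection; with this fix, metadata cached from the old connection is discarded as part of that process.

      import java.util.Properties;

      import com.mysql.clusterj.ClusterJHelper;
      import com.mysql.clusterj.Session;
      import com.mysql.clusterj.SessionFactory;

      public class ReconnectSketch {
          public static void main(String[] args) {
              Properties props = new Properties();
              props.put("com.mysql.clusterj.connectstring", "localhost:1186"); // assumed address
              props.put("com.mysql.clusterj.database", "test");                // assumed schema

              SessionFactory factory = ClusterJHelper.getSessionFactory(props);
              try {
                  Session session = factory.getSession();
                  // ... normal operations against the cluster ...
                  session.close();

                  // Request a clean reconnection; cached metadata objects
                  // from the old connection are discarded, so sessions
                  // obtained afterwards cannot touch stale metadata.
                  factory.reconnect(30);   // timeout in seconds
                  Session fresh = factory.getSession();
                  // ... operations resume with freshly loaded metadata ...
                  fresh.close();
              } finally {
                  factory.close();
              }
          }
      }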

  • When executing a global schema lock (GSL), NDB used a single Ndb_table_guard object for successive retries when attempting to obtain a table object reference. This could never succeed after failing on the first attempt, since Ndb_table_guard assumes that the underlying object pointer is determined only once, at initialization, and thereafter returns the previously retrieved pointer from a cached reference.

    This resulted in infinite waits to obtain the GSL, causing the binlog injector thread to hang, so that mysqld considered all NDB tables to be read-only. To avoid this problem, NDB now uses a fresh instance of Ndb_table_guard for each such retry; a sketch of the pattern follows this item. (Bug #30120858)

    References: This issue is a regression of: Bug #30086352.
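
    The following Java sketch is purely illustrative: TableGuard and DICTIONARY are hypothetical stand-ins (Ndb_table_guard itself is a C++ class inside NDB), but they show why a guard that resolves its reference only at construction makes retries futile, and why a fresh guard per retry does not.

      import java.util.concurrent.atomic.AtomicReference;

      public class TableGuardSketch {
          // Stand-in for the dictionary: the table becomes available only
          // some time after the first lookup attempt.
          static final AtomicReference<String> DICTIONARY = new AtomicReference<>(null);

          // Analog of the guard: resolves the table reference once, at
          // construction, and serves the cached value thereafter.
          static class TableGuard {
              private final String table = DICTIONARY.get();
              String getTable() { return table; }
          }

          public static void main(String[] args) throws InterruptedException {
              new Thread(() -> {
                  try { Thread.sleep(50); } catch (InterruptedException e) { }
                  DICTIONARY.set("t1");   // table appears after a delay
              }).start();

              // Broken pattern: reusing one guard across retries can never
              // succeed, since the null resolved at construction is cached.
              TableGuard reused = new TableGuard();
              for (int i = 0; i < 3 && reused.getTable() == null; i++) {
                  Thread.sleep(40);
              }
              System.out.println("reused guard: " + reused.getTable()); // null

              // Fixed pattern: a fresh guard per retry sees the new state.
              String table = null;
              for (int i = 0; i < 3 && table == null; i++) {
                  table = new TableGuard().getTable();
                  if (table == null) Thread.sleep(40);
              }
              System.out.println("fresh guard:  " + table);             // t1
          }
      }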

  • Restoring a backup made from NDB 7.4 to a cluster running NDB 7.6 did not work correctly when the backup contained tables for which MAX_ROWS had been used to alter partitioning. This is fixed by ensuring that the upgrade code handling PartitionBalance supplies a valid table specification to the NDB dictionary. (Bug #29955656)

  • NDB index statistics are calculated based on the topology of one fragment of an ordered index; which fragment is used by any particular index is decided at index creation time, both when the index is originally created and when a node or system restart has recreated the index locally. This decision is based in part on the number of fragments in the index, which can change when the table is reorganized. This means that, the next time the node is restarted, it may choose a different fragment, so that no fragments, one fragment, or two fragments are used to generate the index statistics, resulting in errors from ANALYZE TABLE.

    This issue is solved by modifying the online table reorganization to recalculate the chosen fragment immediately, so that all nodes are aligned before and after any subsequent restart. (Bug #29534647)

  • During a restart when the data nodes had started but not yet elected a president, the management server received a node ID already in use error, which resulted in excessive retries and logging. This is fixed by introducing a new error 1705 Not ready for connection allocation yet for this case.

    During a restart when the data nodes had not yet completed node failure handling, a spurious Failed to allocate nodeID error was returned. This is fixed by adding a check to detect an incomplete node start and to return error 1703 Node failure handling not completed instead.

    As part of this fix, the frequency of retries has been reduced for not ready to alloc nodeID errors, an error insert has been added to simulate a slow restart for testing purposes, and log messages have been reworded to indicate that the relevant node ID allocation errors are minor and only temporary. (Bug #27484514)
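
    Since these node ID allocation errors are temporary, client applications can simply retry the connection. A minimal ClusterJ sketch follows, assuming a management server at localhost:1186; com.mysql.clusterj.connect.retries and com.mysql.clusterj.connect.delay are standard ClusterJ connection properties, though exactly which temporary errors are retried internally is up to the NDB API.

      import java.util.Properties;

      import com.mysql.clusterj.ClusterJHelper;
      import com.mysql.clusterj.SessionFactory;

      public class ConnectRetrySketch {
          public static void main(String[] args) {
              Properties props = new Properties();
              props.put("com.mysql.clusterj.connectstring", "localhost:1186"); // assumed address
              props.put("com.mysql.clusterj.database", "test");                // assumed schema
              // Ride out temporary errors (such as 1703 and 1705 above)
              // while the cluster is still restarting:
              props.put("com.mysql.clusterj.connect.retries", "10"); // attempts before failing
              props.put("com.mysql.clusterj.connect.delay", "5");    // seconds between attempts

              SessionFactory factory = ClusterJHelper.getSessionFactory(props);
              System.out.println("connected");
              factory.close();
          }
      }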