MySQL NDB Cluster 7.6 Release Notes

Changes in MySQL NDB Cluster 7.6.25 (5.7.40-ndb-7.6.25) (2023-01-18, General Availability)

MySQL NDB Cluster 7.6.25 is a new release of NDB 7.6, based on MySQL Server 5.7 and including features in version 7.6 of the NDB storage engine, as well as fixing recently discovered bugs in previous NDB Cluster releases.

Obtaining NDB Cluster 7.6.  NDB Cluster 7.6 source code and binaries can be obtained from https://dev.mysql.com/downloads/cluster/.

For an overview of changes made in NDB Cluster 7.6, see What is New in NDB Cluster 7.6.

This release also incorporates all bug fixes and changes made in previous NDB Cluster releases, as well as all bug fixes and feature changes which were added in mainline MySQL 5.7 through MySQL 5.7.40 (see Changes in MySQL 5.7.40 (2022-10-11, General Availability)).

Bugs Fixed

  • MySQL NDB ClusterJ: ClusterJ could not be built on Ubuntu 22.10 with GCC 12.2. (Bug #34666985)

  • In some contexts, a data node process may be sent SIGCHLD by other processes. Previously, the data node process bound a signal handler treating this signal as an error, which could cause the process to shut down unexpectedly when run in the foreground in a Kubernetes environment (and possibly under other conditions as well). This occurred despite the fact that a data node process never starts child processes itself, and thus there is no need to take action in such cases.

    To fix this, the handler has been modified to use SIG_IGN, which causes terminated child processes to be cleaned up automatically (see the sketch at the end of this item).

    Note

    mysqld and ndb_mgmd processes do not bind any handlers for SIGCHLD.

    (Bug #34826194)
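
    The following minimal sketch (not the actual data node source; the
    function name is an assumption) shows the general technique: installing
    SIG_IGN for SIGCHLD so that, under POSIX, terminated child processes
    are reaped automatically instead of invoking a handler that treats the
    signal as an error.

        // Hedged sketch; ignore_sigchld is an assumed name, not an
        // NDB identifier.
        #include <signal.h>

        static void ignore_sigchld()
        {
          struct sigaction sa;
          sa.sa_handler = SIG_IGN;  // POSIX: auto-reap children, no zombies
          sigemptyset(&sa.sa_mask);
          sa.sa_flags = 0;
          sigaction(SIGCHLD, &sa, nullptr);
        }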

  • Following execution of DROP NODEGROUP in the management client, attempts to create or alter an NDB table specifying an explicit number of partitions or using MAX_ROWS were rejected with Got error 771 'Given NODEGROUP doesn't exist in this cluster' from NDB. (Bug #34649576)

  • In a cluster with multiple management nodes, when one management node connected and later disconnected, the remaining management nodes did not register the disconnection and were eventually forced to shut down when stopped nodes reconnected; this occurred whenever the cluster still had live data nodes.

    Investigation showed that node disconnection was handled in the NF_COMPLETEREP path in ConfigManager, but the expected NF_COMPLETEREP signal never actually arrived. We solve this by handling disconnecting management nodes when the NODE_FAILREP signal arrives, rather than waiting for NF_COMPLETEREP. (Bug #34582919)

  • When reorganizing a table with ALTER TABLE ... REORGANIZE PARTITION following addition of new data nodes to the cluster, unique hash indexes were not redistributed properly. (Bug #30049013)

  • During a rolling restart of a cluster with two data nodes, one of them refused to start, reporting that the redo log fragment file size did not match the configured size and that an initial start of the node was required. This was fixed by handling a previously unhandled error returned by fsync() and retrying the write, as sketched below. (Bug #28674694)
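
    The exact error path is internal to the NDB file system code; the
    following minimal sketch, with an assumed function name, illustrates
    the general pattern of retrying an interrupted fsync() rather than
    treating the first failure as fatal.

        // Hedged sketch; fsync_retry is an assumed name, not an NDB
        // identifier.
        #include <errno.h>
        #include <unistd.h>

        static int fsync_retry(int fd)
        {
          int r;
          do {
            r = fsync(fd);  // flush file data and metadata to disk
          } while (r == -1 && errno == EINTR);  // retry if interrupted
          return r;  // any other error is reported to the caller
        }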

  • For a partial local checkpoint, each fragment LCP must be able to determine the precise state of the fragment at the start of the LCP, as well as the precise difference in the fragment between the start of the current LCP and the start of the previous one. This is tracked using row header and page header information; in cases where physical pages are removed, it is also tracked in logical page map information.

    A page included in the current LCP can be released before the LCP scan reaches it, due to the commit or rollback of some operation on the fragment that also releases the last used storage on the page.

    Since such a released page cannot be found by the scan, the release itself sets the LCP_SCANNED_BIT of the page map entry the page was mapped into, indicating that the page has already been handled from the point of view of the current LCP; subsequent allocation and release of pages mapped to that entry during the LCP are then ignored. The state of the entry at the start of the LCP is also recorded as allocated in the page map entry.

    These settings are cleared only when the next LCP is prepared. Any page release associated with the page map entry before that point was expected not to find the bit already set; we resolve this issue by removing this (incorrect) requirement (see the sketch following this item). (Bug #23539857)
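
    The following sketch is illustrative only; apart from LCP_SCANNED_BIT,
    the names and layout are assumptions and not actual NDB source
    identifiers. It shows the life cycle of the per-LCP state bits
    described above.

        #include <cstdint>

        struct PageMapEntry {
          // Page already handled from the point of view of the current LCP.
          static constexpr uint32_t LCP_SCANNED_BIT = 1u << 30;
          // Page recorded as allocated at the start of the LCP.
          static constexpr uint32_t ALLOCATED_AT_START_BIT = 1u << 31;
          uint32_t word;
        };

        // Page released before the LCP scan reaches it: mark the entry so
        // that later allocation/release during this LCP is ignored.
        inline void releaseBeforeScan(PageMapEntry &e)
        {
          e.word |= PageMapEntry::LCP_SCANNED_BIT |
                    PageMapEntry::ALLOCATED_AT_START_BIT;
        }

        // Preparing the next LCP clears the per-LCP state bits.
        inline void prepareNextLCP(PageMapEntry &e)
        {
          e.word &= ~(PageMapEntry::LCP_SCANNED_BIT |
                      PageMapEntry::ALLOCATED_AT_START_BIT);
        }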

  • A data node could hit an overly strict assertion when the thread liveness watchdog triggered while the node was already shutting down. We fix the issue by relaxing this assertion in such cases, as sketched below. (Bug #22159697)
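
    A minimal sketch of the pattern, with assumed names (not actual NDB
    identifiers): a watchdog-detected stall is fatal only when the node is
    not already shutting down.

        #include <cstdio>
        #include <cstdlib>

        static bool node_shutting_down = false;  // set when shutdown begins

        static void watchdog_report_stall()
        {
          if (node_shutting_down) {
            // Shutdown in progress: stalls are expected; log and continue.
            fprintf(stderr, "watchdog: stall during shutdown, ignoring\n");
            return;
          }
          abort();  // genuine liveness failure: stop the data node
        }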

  • Removed a leak of long message buffer memory that occurred each time an index was scanned for updating index statistics. (Bug #108043, Bug #34568135)

  • Fixed an uninitialized variable in Suma.cpp. (Bug #106081, Bug #33764143)