MySQL NDB Cluster 7.6.32 is a new release of NDB 7.6, based on MySQL Server 5.7 and including features in version 7.6 of the NDB storage engine, as well as fixing recently discovered bugs in previous NDB Cluster releases.
Obtaining NDB Cluster 7.6. NDB Cluster 7.6 source code and binaries can be obtained from https://dev.mysql.com/downloads/cluster/.
For an overview of changes made in NDB Cluster 7.6, see What is New in NDB Cluster 7.6.
This release also incorporates all bug fixes and changes made in previous NDB Cluster releases, as well as all bug fixes and feature changes which were added in mainline MySQL 5.7 through MySQL 5.7.44 (see Changes in MySQL 5.7.44 (2023-10-25, General Availability)).
- Important Change: For platforms on which OpenSSL libraries are bundled, the linked OpenSSL library for MySQL Server has been updated to version 3.0.15. For more information, see OpenSSL 3.0 Series Release Notes and OpenSSL Security Advisory [3rd September 2024]. (Bug #37021075)
- NDB Cluster APIs: Using NdbRecord and OO_SETVALUE from the NDB API to write the value of a Varchar, Varbinary, Longvarchar, or Longvarbinary column failed with error 829. (Bug #36989337)
- MySQL NDB ClusterJ: ReconnectTest in the ClusterJ test suite sometimes failed due to a race condition. The test has been rewritten with proper synchronization. (Bug #28550140)
- Fixed an issue relating to FTS comparisons. Our thanks to Shaohua Wang and the team at Alibaba for the contribution. (Bug #37039409)
- While dumping tablespaces, mysqldump did not properly escape certain SQL statements in its output. In addition, the dump now encloses the following identifiers within backticks: LOGFILE GROUP, TABLESPACE, and ENGINE. (Bug #37039394)
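  As an illustration, the following sketch (using hypothetical object and file names) shows the form such statements now take in the dump output, with the affected identifiers backtick-quoted:

    CREATE LOGFILE GROUP `lg1`
        ADD UNDOFILE 'lg1_undo.log'
        ENGINE=`ndbcluster`;

    CREATE TABLESPACE `ts1`
        ADD DATAFILE 'ts1_data.dat'
        USE LOGFILE GROUP `lg1`
        ENGINE=`ndbcluster`;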
- The AES_ENCRYPT() function did not always return a valid result. (Bug #37039383)
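  A minimal round-trip check, with a hypothetical key and plaintext, that exercises the function:

    -- Before this fix, the value returned by AES_ENCRYPT() was not
    -- always valid; a correct result decrypts back to the plaintext.
    SET @key_str = SHA2('My secret passphrase', 512);
    SELECT AES_DECRYPT(AES_ENCRYPT('attack at dawn', @key_str), @key_str) AS plaintext;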
- Removed node management code from TRIX that was not actually used. (Bug #37006547)
- Submitting concurrent shutdown commands for individual nodes using ndb_mgm node_id SHUTDOWN or the MGM API sometimes had one or both of the following adverse results:
    - Cluster failure when all nodes in the same node group were stopped
    - Inability to recover when all nodes in the same node group were stopped and the cluster had more than one node group
  This was due to the fact that the planned shutdown of a single node assumed that only one such shutdown occurred at a time, but did not actually check this limitation. Concurrent single-node shutdown requests are now serialized across the cluster, and any which would cause a cluster outage are rejected. (Bug #36943756)
References: See also: Bug #36839995.
- Shutdown of a data node late in a schema transaction updating index statistics caused the president node to shut down as well. (Bug #36886242)
References: See also: Bug #36877952.
- It was possible for duplicate events to be sent to user applications when a data node was shut down. (Bug #36750146)
- The server did not always handle connections correctly when running with both the thread pool and audit log plugins. (Bug #36682079)
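  A quick way to confirm the configuration in question, assuming both plugins were loaded at server startup (for example, with --plugin-load-add=thread_pool.so and --plugin-load-add=audit_log.so, the MySQL Enterprise library names on Linux):

    -- Verify that both plugins are active.
    SELECT PLUGIN_NAME, PLUGIN_STATUS
      FROM INFORMATION_SCHEMA.PLUGINS
     WHERE PLUGIN_NAME IN ('thread_pool', 'audit_log');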
- Issues arose when an attempt was made to use a SHM transporter's wakeup socket before it was ready. This was due in part to the error handling performed when setting up the SHM transporter, which did not close the socket correctly before making another setup attempt. (Bug #36568752, Bug #36623058)
- DROP INDEX with the addition of a FULLTEXT index in the same transaction sometimes led to an unplanned server exit. (Bug #36559642)
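  A sketch of the kind of operation affected, using hypothetical table and index names; combining both changes in a single ALTER TABLE statement is one way to perform them in the same transaction:

    CREATE TABLE articles (
        id    INT PRIMARY KEY,
        title VARCHAR(200),
        body  TEXT,
        KEY idx_title (title)
    ) ENGINE=InnoDB;

    -- Dropping an index while adding a FULLTEXT index in one statement
    -- could previously cause an unplanned server exit.
    ALTER TABLE articles
        DROP INDEX idx_title,
        ADD FULLTEXT INDEX ft_body (body);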