- MySQL NDB ClusterJ: Performance has been improved for accessing tables using a single-column partition key when the column is of type CHAR or VARCHAR. (Bug #35027961)
- Beginning with this release, ndb_restore implements the --timestamp-printouts option, which causes all error, info, and debug node log messages to be prefixed with timestamps. (Bug #34110068)
- Microsoft Windows: Two memory leaks found by code inspection were removed from NDB process handles on Windows platforms. (Bug #34872901)
- Microsoft Windows: On Windows platforms, the data node angel process did not detect whether a child data node process had exited normally. We fix this by keeping an open process handle to the child and using it when probing for the child's exit. (Bug #34853213)
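The fix as described suggests the standard Win32 pattern of retaining the child's process handle and polling it; the sketch below illustrates that pattern and is not taken from the NDB angel source.

    // Minimal Win32 sketch: probe a child process for exit using the
    // handle kept open since CreateProcess(), rather than re-opening
    // the process by PID. Illustrative only; not the NDB angel source.
    #include <windows.h>

    bool child_has_exited(HANDLE child, DWORD *exit_code)
    {
      // A zero timeout polls without blocking; WAIT_OBJECT_0 means the
      // process object is signaled, that is, the child has exited.
      if (WaitForSingleObject(child, 0) != WAIT_OBJECT_0)
        return false;
      return GetExitCodeProcess(child, exit_code) != FALSE;
    }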
- NDB Cluster APIs; MySQL NDB ClusterJ: MySQL ClusterJ uses a scratch buffer for primary key hash calculations which was limited to 10000 bytes; this proved too small in some cases. Now we allocate the buffer with malloc() if its fixed size is not sufficient.
This also fixes an issue with the Ndb object methods startTransaction() and computeHash() in the NDB API: previously, if either of these methods was passed a temporary buffer of insufficient size, the method failed. Now in such cases a temporary buffer is allocated.
Our thanks to Mikael Ronström for this contribution. (Bug #103814, Bug #32959894)
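For reference, the following sketch shows how these NDB API methods take an explicit scratch buffer; the function name, key value, and buffer size are assumptions for the example. With this fix, a buffer that turns out to be too small no longer causes the call to fail.

    // Sketch: computing the distribution hash for a single-column key
    // using Ndb::computeHash() with a caller-supplied scratch buffer.
    #include <NdbApi.hpp>

    int hash_for_key(const NdbDictionary::Table *tab,
                     const char *key, unsigned key_len, Uint32 *hash_out)
    {
      Ndb::Key_part_ptr key_parts[2];
      key_parts[0].ptr = key;      // single-column key value
      key_parts[0].len = key_len;
      key_parts[1].ptr = NULL;     // NULL ptr terminates the key part list

      char scratch[8192];          // illustrative size; may be too small for
                                   // wide CHAR/VARCHAR keys, which formerly
                                   // caused the call to fail
      return Ndb::computeHash(hash_out, tab, key_parts,
                              scratch, sizeof(scratch));
    }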
- NDB Cluster APIs: When dropping an event operation (NdbEventOperation) in the NDB API, it was sometimes possible for the dropped event operation to remain visible to the application after instructing the data nodes to stop sending events related to this event operation, but before all pending buffered events were consumed and discarded. This could be observed in certain cases when performing an online alter operation, such as ADD COLUMN or RENAME COLUMN, along with concurrent writes to the affected table.
Further analysis showed that the dropped events were accessible when iterating through event operations with Ndb::getGCIEventOperations(). Now, this method skips dropped events when called iteratively. (Bug #34809944)
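The iteration pattern in question looks like the following sketch (event subscription setup is assumed to happen elsewhere); after this fix, event operations that have been dropped no longer appear in the iteration.

    // Sketch: listing the event operations that contributed to the epoch
    // most recently returned by Ndb::nextEvent().
    #include <NdbApi.hpp>

    void list_epoch_event_ops(Ndb *ndb)
    {
      Uint32 iter = 0;         // opaque iterator state; start at 0
      Uint32 event_types = 0;  // receives a bitmask of event types
      const NdbEventOperation *op;
      while ((op = ndb->getGCIEventOperations(&iter, &event_types)) != NULL)
      {
        // Inspect op and event_types; dropped event operations are
        // now skipped by this iterator.
      }
    }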
- To reduce confusion between the version of the file format and the version of the cluster which produced the backup, the backup file format version is now shown by ndb_restore using hexadecimal notation. (Bug #35079426)
References: This issue is a regression of: Bug #34110068.
- Removed a memory leak in the DBDICT kernel block caused when an internal foreign key definition record was not released when no longer needed. This could be triggered by either of the following events:
  - Drop of a foreign key constraint on an NDB table
  - Rejection of an attempt to create a foreign key constraint on an NDB table
Such records use the DISK_RECORDS memory resource; you can check this on a running cluster by executing SELECT node_id, used FROM ndbinfo.resources WHERE resource_name='DISK_RECORDS' in the mysql client. This resource uses SharedGlobalMemory, exhaustion of which could lead not only to the rejection of attempts to create foreign keys, but also of queries making use of joins, since the DBSPJ block also uses shared global memory by way of QUERY_MEMORY. (Bug #35064142)
- When a transaction coordinator starts fragment scans with many fragments to scan, it may take a realtime break (RTB) during the process to ensure fair CPU access for other requests. If the requesting API disconnected and API failure handling for the scan state occurred before the RTB continuation returned, continuation processing could not proceed because the scan state had been removed. We fix this by adding appropriate checks on the scan state as part of the continuation process. (Bug #35037683)
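The fix follows the general pattern sketched below; the names and data structures here are hypothetical, not taken from the NDB kernel. A continuation resumed after a realtime break must re-validate the state it references before using it.

    // Hypothetical sketch of re-validating state in a continuation.
    #include <unordered_map>

    struct ScanState { int next_fragment; };
    static std::unordered_map<int, ScanState> g_scans;  // hypothetical registry

    void continue_fragment_scans(int scan_id)
    {
      // API-failure handling may have removed the scan state while the
      // realtime break was pending; check before proceeding.
      auto it = g_scans.find(scan_id);
      if (it == g_scans.end())
        return;  // nothing to resume
      // ... start the remaining fragment scans from it->second.next_fragment ...
    }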
- Sender and receiver signal IDs were printed in trace logs as signed values even though they are actually unsigned 32-bit numbers. This could result in confusion when the top bit was set, as it caused such numbers to be shown as negatives, counting upwards from -MAX_32_BIT_SIGNED_INT. (Bug #35037396)
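The effect is ordinary two's-complement reinterpretation, as this small standalone example (not NDB code) shows:

    // Printing a 32-bit value with the top bit set using a signed
    // conversion makes it appear negative.
    #include <cstdio>
    #include <cstdint>

    int main()
    {
      uint32_t signal_id = 0x80000001u;                      // top bit set
      std::printf("as signed:   %d\n", (int32_t) signal_id); // -2147483647
      std::printf("as unsigned: %u\n", signal_id);           // 2147483649
      return 0;
    }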
- A fiber used by the DICT block monitors all indexes, and triggers index statistics calculations if requested by DBTUX index fragment monitoring; these calculations are performed using a schema transaction. When the DICT fiber attempted but failed to seize a transaction handle for requesting that a schema transaction be started, the fiber exited, so that no more automated index statistics updates could be performed without a node failure. (Bug #34992370)
References: See also: Bug #34007422.
- Schema objects in NDB use composite versioning, comprising major and minor subversions. When a schema object is first created, its major and minor versions are set; when an existing schema object is altered in place, its minor subversion is incremented.
At restart time, each data node checks schema objects as part of recovery; for foreign key objects, the versions of referenced parent and child tables (and indexes, for foreign key references not to or from a table's primary key) are checked for consistency. The table version of this check compares only major subversions, allowing tables to evolve, but the index version also compared minor subversions; this resulted in a failure at restart time when an index had been altered.
We fix this by comparing only major subversions for indexes in such cases. (Bug #34976028)
References: See also: Bug #21363253.
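The comparison change can be pictured with the following sketch; the bit layout of the composite version is assumed for illustration and is not taken from the NDB source.

    // Composite schema-object version: major and minor subversions packed
    // into one 32-bit word (layout assumed for this example).
    #include <cstdint>

    static uint32_t version_major(uint32_t v) { return v & 0x00FFFFFF; }
    static uint32_t version_minor(uint32_t v) { return v >> 24; }

    // Former index check: compared both subversions, so an in-place index
    // alter (which increments the minor subversion) failed the restart check.
    static bool index_check_old(uint32_t a, uint32_t b)
    {
      return version_major(a) == version_major(b) &&
             version_minor(a) == version_minor(b);
    }

    // Fixed check: compare major subversions only, as for tables.
    static bool index_check_new(uint32_t a, uint32_t b)
    {
      return version_major(a) == version_major(b);
    }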
- When running an NDB Cluster with multiple management servers, termination of the ndb_mgmd processes required an excessive amount of time when shutting down the cluster. (Bug #34872372)
- When a new global checkpoint (GCP) was requested from the data nodes, as is done by the NDB Cluster handler in mysqld to speed up delivery of schema distribution events and responses, the request was sent 100 times. While the DBDIH block attempted to merge these duplicate requests into one, it was occasionally possible to trigger more than one immediate GCP. (Bug #34836471)
- When started with no connection string on the command line, ndb_waiter printed Connecting to mgmsrv at (null). Now in such cases, it prints Connecting to management server at nodeid=0,localhost:1186 if no other default host is specified.
The --help option and other ndb_waiter program output have also been improved. (Bug #12380163)
- The maximum BatchByteSize sent in SCANREQ signals was not always set correctly to reflect the limited byte size available in the client result buffers. The result buffer size calculation has been modified so that the effective batch byte size accurately reflects the maximum that may be returned by the data nodes, preventing a possible overflow of the result buffers. (Bug #90360, Bug #27834961)
References: See also: Bug #85411, Bug #25703113.
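For context, the batch sizing in question happens beneath an ordinary scan; a minimal scan skeleton is sketched below, with the table handle and error handling reduced for brevity. The effective batch byte size sent to the data nodes is derived internally from the client's result-buffer limits and, with this fix, cannot exceed what those buffers can hold.

    // Sketch: a committed-read table scan; the NDB API computes the batch
    // byte size for SCANREQ from the configured result-buffer limits.
    #include <NdbApi.hpp>

    int scan_table(Ndb *ndb, const NdbDictionary::Table *tab)
    {
      NdbTransaction *trans = ndb->startTransaction();
      if (trans == NULL)
        return -1;

      NdbScanOperation *scan = trans->getNdbScanOperation(tab);
      if (scan == NULL ||
          scan->readTuples(NdbOperation::LM_CommittedRead) != 0)
      {
        ndb->closeTransaction(trans);
        return -1;
      }
      // ... define columns with scan->getValue(), execute the transaction,
      // and iterate the result set with scan->nextResult() ...
      ndb->closeTransaction(trans);
      return 0;
    }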