NDB Client Programs: Effective with this release, the MySQL NDB Cluster Auto-Installer (ndb_setup.py) has been removed from the NDB Cluster binary and source distributions, and is no longer supported. (Bug #32084831)
References: See also: Bug #31888835.
While retrieving sorted results from a pushed-down join using ORDER BY with the index access method (and without filesort), an SQL node sometimes unexpectedly terminated. (Bug #32203548)
Logging of redo log initialization showed log part indexes rather than log part numbers. (Bug #32200635)
Signal data was overwritten (and lost) due to use of extended signal memory as temporary storage. Now in such cases, extended signal memory is not used in this fashion. (Bug #32195561)
Using the maximum size of an index key supported by index statistics (3056 bytes) caused buffer issues in data nodes. (Bug #32094904)
References: See also: Bug #25038373.
As with writing redo log records, when the file currently used for writing global checkpoint records becomes full, writing switches to the next file. This switch is not supposed to occur until the new file is actually ready to receive the records, but no check was made to ensure that this was the case. This could lead to an unplanned data node shutdown when restoring data from a backup using ndb_restore. (Bug #31585833)
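The hazard above can be illustrated with a minimal toy model (hypothetical names; not the actual NDB data-node code): a writer that switches files when the current one is full must first confirm that the next file is ready, otherwise records can land in an unprepared file.

```python
# Toy model of the file-switch hazard (hypothetical classes; not the
# real NDB implementation). The fixed writer checks readiness before
# switching; the pre-fix behavior corresponds to check_ready=False.

class LogFile:
    def __init__(self, capacity, ready=True):
        self.capacity = capacity
        self.ready = ready          # whether the file is prepared for writing
        self.records = []

    def full(self):
        return len(self.records) >= self.capacity

class Writer:
    def __init__(self, files, check_ready):
        self.files = files
        self.index = 0
        self.check_ready = check_ready  # the fix: verify before switching

    def write(self, record):
        current = self.files[self.index]
        if current.full():
            nxt = self.files[self.index + 1]
            if self.check_ready and not nxt.ready:
                # Refuse to switch until the next file is ready.
                raise RuntimeError("next file not ready; cannot switch")
            self.index += 1
            current = nxt
        current.records.append(record)
```

With check_ready=True the switch is blocked until the next file is ready; with check_ready=False the switch proceeds regardless, modeling the missing check.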
ndb_restore encountered intermittent errors while replaying backup logs that deleted blob values; this was due to the deletion of blob parts when a main table row containing one or more blob values was deleted. This is fixed by modifying ndb_restore to use the asynchronous API for blob deletes, which (unlike the synchronous API) does not trigger blob part deletes when a blob main table row is deleted, so that a delete log event for the main table deletes only the row from the main table. (Bug #31546136)
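A toy model (hypothetical structures; not the real NDB API) shows why the cascade caused intermittent replay errors: the backup log already contains explicit delete events for blob-part rows, so if the main-row delete also cascades to the parts, the logged part deletes then find nothing to delete.

```python
# Toy model of backup-log replay (hypothetical tables; not ndb_restore
# internals). cascade=True models the synchronous API, where deleting
# a main row also deletes its blob parts; cascade=False models the
# asynchronous API used by the fix.

def replay(log, main, parts, cascade):
    """Apply delete events in order; return the number of misses
    (delete events whose target row no longer exists)."""
    misses = 0
    for table, key in log:
        target = main if table == "main" else parts
        if key not in target:
            misses += 1
            continue
        del target[key]
        if table == "main" and cascade:
            # Synchronous behavior: main-row delete removes its parts too.
            for pk in [p for p in parts if p[0] == key]:
                del parts[pk]
    return misses

def demo(cascade):
    main = {1: "row"}
    parts = {(1, 0): "blob-part-0", (1, 1): "blob-part-1"}
    # The backup log records the main-row delete and explicit
    # delete events for each blob part.
    log = [("main", 1), ("parts", (1, 0)), ("parts", (1, 1))]
    return replay(log, main, parts, cascade)
```

In this sketch, demo(cascade=True) reports misses for the already-cascaded blob parts, while demo(cascade=False) replays cleanly, matching the fixed behavior.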
When a table creation schema transaction is prepared, the table is in TS_CREATING state, and is changed to TS_ACTIVE state when the schema transaction commits on the DBDIH block. In the case where the node acting as DBDIH coordinator fails while the schema transaction is committing, another node starts taking over for the coordinator. The following actions are taken when handling this node failure:
DBDICT rolls the table creation schema transaction forward and commits, resulting in the table involved changing to TS_ACTIVE state.
DBDIH starts removing the failed node from tables by moving active table replicas on the failed node from a list of stored fragment replicas to another list.
These actions are performed asynchronously and may interleave, which can cause a race condition. As a result, the list in which the replica of a failed node resides becomes nondeterministic and may differ between the recovering node (that is, the new coordinator) and the other DIH participant nodes. This difference violated a requirement for knowing in which list the failed node's replicas can be found during the failed node's recovery on the other participants.
To fix this, moving active table replicas now covers not only tables in TS_ACTIVE state, but also those in TS_CREATING (prepared) state, since the prepared schema transaction is always rolled forward.
In addition, the state of a table creation schema transaction that is being aborted is now changed from TS_CREATING to TS_DROPPING, to avoid any race condition there. (Bug #30521812)
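The fix above can be sketched as a small selection rule (hypothetical function and table names; the real logic lives in the DBDIH/DBDICT kernel blocks): node-failure handling must move the failed node's replicas for prepared tables as well as active ones, because a prepared schema transaction is always rolled forward to TS_ACTIVE.

```python
# Simplified sketch of which tables need their failed-node replicas
# moved during node-failure handling (hypothetical names; not the
# actual NDB kernel code). include_prepared=False models the pre-fix
# behavior, include_prepared=True the fixed behavior.

def tables_to_process(tables, include_prepared):
    """Return the names of tables whose failed-node replicas must be
    moved, given a mapping of table name -> schema state."""
    states = {"TS_ACTIVE"}
    if include_prepared:
        # The fix: prepared transactions are always rolled forward,
        # so TS_CREATING tables must be handled too.
        states.add("TS_CREATING")
    return [name for name, state in tables.items() if state in states]
```

Before the fix, a table still in TS_CREATING on the new coordinator was skipped, leaving its replica lists inconsistent with the other DIH participants once the transaction committed.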