This is a bugfix release, fixing recently discovered bugs in the previous MySQL Cluster NDB 6.3 release.
MySQL Cluster NDB 6.3.21 was withdrawn due to issues discovered after its release.
This release incorporates all bugfixes and changes made in previous MySQL Cluster releases, as well as all bugfixes and feature changes which were added in mainline MySQL 5.1 through MySQL 5.1.31 (see Changes in MySQL 5.1.31 (2009-01-19)).
Please refer to our bug database at http://bugs.mysql.com/ for more details about the individual bugs fixed in this version.
Important Change: Formerly, when the management server failed to create a transporter for a data node connection, net_write_timeout seconds elapsed before the data node was actually permitted to disconnect. Now in such cases the disconnection occurs immediately. (Bug #41965)
References: See also Bug #41713.
It is now possible while in Single User Mode to restart all data nodes using ALL RESTART in the management client. Restarting individual data nodes while in Single User Mode is still not permitted. (Bug #31056)
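As a sketch, the now-permitted sequence in the ndb_mgm management client looks like the following; the first command grants exclusive access to one API node (the node ID shown is illustrative), the second restarts all data nodes, and the third leaves Single User Mode afterward:

```
ndb_mgm> ENTER SINGLE USER MODE 5
ndb_mgm> ALL RESTART
ndb_mgm> EXIT SINGLE USER MODE
```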
Formerly, when using MySQL Cluster Replication, records for “empty” epochs (that is, epochs in which no changes to NDBCLUSTER data or tables took place) were inserted into the ndb_binlog_index table on the slave even when --log-slave-updates was disabled. Beginning with MySQL Cluster NDB 6.2.16 and MySQL Cluster NDB 6.3.13 this was changed so that these “empty” epochs were no longer logged. However, it is now possible to re-enable the older behavior (and cause “empty” epochs to be logged) by using the --ndb-log-empty-epochs option. For more information, see Replication Slave Options and Variables.
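As a minimal sketch, the older behavior could be re-enabled by setting the option in the slave mysqld's configuration file (the section name is the standard one; placement on the mysqld command line as --ndb-log-empty-epochs is equivalent):

```
[mysqld]
ndb-log-empty-epochs=1
```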
References: See also Bug #37472.
A maximum of 11 TUP scans were permitted in parallel. (Bug #42084)
Trying to execute an ALTER ONLINE TABLE ... ADD COLUMN statement while inserting rows into the table caused mysqld to crash. (Bug #41905)
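A sketch of the statement shape involved (the table and column names here are hypothetical, not from the bug report):

```sql
ALTER ONLINE TABLE t1 ADD COLUMN c2 INT;
```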
If the master node failed during a global checkpoint, it was possible in some circumstances for the new master to use an incorrect value for the global checkpoint index. This could occur only when the cluster used more than one node group. (Bug #41469)
API nodes disconnected too aggressively from the cluster when data nodes were being restarted. This could sometimes lead to an API node being unable to access the cluster at all during a rolling restart. (Bug #41462)
It was not possible to perform online upgrades from a MySQL Cluster NDB 6.2 release to MySQL Cluster NDB 6.3.8 or a later MySQL Cluster NDB 6.3 release. (Bug #41435)
Cluster log files were opened twice by internal log-handling code, resulting in a resource leak. (Bug #41362)
A race condition in transaction coordinator takeovers (part of node failure handling) could lead to operations (locks) not being taken over and subsequently becoming stale. This could cause subsequent node restarts to fail, and applications to get into an endless lock conflict with operations that could not complete until the node was restarted. (Bug #41297)
References: See also Bug #41295.
An abort path in the DBLQH kernel block failed to release a commit acknowledgment marker. This meant that, during node failure handling, the local query handler could be added multiple times to the marker record, which could lead to additional node failures due to an array overflow. (Bug #41296)
During node failure handling (of a data node other than the master), there was a chance that the master was waiting for a GCP_NODEFINISHED signal from the failed node after having received it from all other data nodes. If this occurred while the failed node had a transaction that was still being committed in the current epoch, the master node could crash in the DBTC kernel block upon discovering that the transaction actually belonged to an epoch which was already completed. (Bug #41295)
EXIT in the management client sometimes caused the client to hang. (Bug #40922)
In the event that a MySQL Cluster backup failed due to file permissions issues, conflicting reports were issued in the management client. (Bug #34526)
If all data nodes were shut down, MySQL clients were unable to access NDBCLUSTER tables and data even after the data nodes were restarted, unless the MySQL clients themselves were restarted. (Bug #33626)
Disk Data: Starting a cluster under load such that Disk Data tables used most of the undo buffer could cause data node failures.
The fix for this bug also corrected an issue in the LGMAN kernel block where the amount of free space remaining in the undo buffer was miscalculated, causing buffer overruns. This could cause records in the buffer to be overwritten, leading to problems when restarting data nodes. (Bug #28077)
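For context, the undo buffer in question is the one sized when creating the Disk Data log file group; a sketch of that statement, with illustrative names and sizes:

```sql
CREATE LOGFILE GROUP lg_1
    ADD UNDOFILE 'undo_1.log'
    INITIAL_SIZE 128M
    UNDO_BUFFER_SIZE 8M
    ENGINE NDBCLUSTER;
```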
Cluster Replication: Sometimes, when using the --ndb-log-orig option, the orig_server_id and orig_epoch columns of the ndb_binlog_index table on the slave contained the ID and epoch of the local server instead of those of the originating server. (Bug #41601)
mgmapi.h contained constructs that worked only in C++, not in C. (Bug #27004)