This is a bugfix release, fixing recently discovered bugs in the previous MySQL Cluster NDB 6.3 release.
MySQL Cluster NDB 6.3.21 was withdrawn due to issues discovered after its release.
This release incorporates all bugfixes and changes made in previous MySQL Cluster releases, as well as all bugfixes and feature changes which were added in mainline MySQL 5.1 through MySQL 5.1.31 (see Changes in MySQL 5.1.31 (2009-01-19)).
Please refer to our bug database at http://bugs.mysql.com/ for more details about the individual bugs fixed in this version.
Functionality Added or Changed
Formerly, when the management server failed to create a transporter for a data node connection, a timeout period elapsed before the data node was actually permitted to disconnect. Now in such cases the disconnection occurs immediately.
References: See also Bug #41713.
It is now possible while in Single User Mode to restart all data nodes using ALL RESTART in the management client. Restarting of individual nodes while in Single User Mode is still not permitted.
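As a sketch, the sequence in the ndb_mgm management client might look like the following (the node ID 5 is a placeholder for the API node granted access in Single User Mode):

```
ndb_mgm> ENTER SINGLE USER MODE 5
ndb_mgm> ALL RESTART
```

Restarting a single node (for example, 3 RESTART) is still refused until Single User Mode is exited.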
Formerly, when using MySQL Cluster Replication, records for “empty” epochs—that is, epochs in which no changes to NDBCLUSTER data or tables took place—were inserted into the ndb_binlog_index table on the slave even when --log-slave-updates was disabled. Beginning with MySQL Cluster NDB 6.2.16 and MySQL Cluster NDB 6.3.13 this was changed so that these “empty” epochs were no longer logged. However, it is now possible to re-enable the older behavior (and cause “empty” epochs to be logged) by using the --ndb-log-empty-epochs option. For more information, see Replication Slave Options and Variables.
References: See also Bug #37472.
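As a sketch, the option can be enabled in the configuration file of the slave mysqld (assuming this release, which supports the option as described above):

```
[mysqld]
ndb-log-empty-epochs=1
```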
Bugs Fixed
A maximum of 11 TUP scans were permitted in parallel.
Trying to execute an ALTER ONLINE TABLE ... ADD COLUMN statement while inserting rows into the table caused mysqld to crash.
If the master node failed during a global checkpoint, it was possible in some circumstances for the new master to use an incorrect value for the global checkpoint index. This could occur only when the cluster used more than one node group. (Bug #41469)
API nodes disconnected too aggressively from the cluster when data nodes were being restarted. This could sometimes lead to the API node being unable to access the cluster at all during a rolling restart. (Bug #41462)
It was not possible to perform online upgrades from a MySQL Cluster NDB 6.2 release to MySQL Cluster NDB 6.3.8 or a later MySQL Cluster NDB 6.3 release. (Bug #41435)
Cluster log files were opened twice by internal log-handling code, resulting in a resource leak. (Bug #41362)
A race condition in transaction coordinator takeovers (part of node failure handling) could lead to operations (locks) not being taken over and subsequently getting stale. This could lead to subsequent failures of node restarts, and to applications getting into an endless lock conflict with operations that would not complete until the node was restarted. (Bug #41297)
References: See also Bug #41295.
An abort path in the DBLQH kernel block failed to release a commit acknowledgment marker. This meant that, during node failure handling, the local query handler could be added multiple times to the marker record, which could lead to additional node failures due to an array overflow.
During node failure handling (of a data node other than the master), there was a chance that the master was waiting for a GCP_NODEFINISHED signal from the failed node after having received it from all other data nodes. If this occurred while the failed node had a transaction that was still being committed in the current epoch, the master node could crash in the DBTC kernel block when discovering that a transaction actually belonged to an epoch which was already completed.
Issuing the EXIT command in the management client sometimes caused the client to hang.
In the event that a MySQL Cluster backup failed due to file permissions issues, conflicting reports were issued in the management client. (Bug #34526)
If all data nodes were shut down, MySQL clients were unable to access NDBCLUSTER tables and data even after the data nodes were restarted, unless the MySQL clients themselves were restarted.
Disk Data: Starting a cluster under load such that Disk Data tables used most of the undo buffer could cause data node failures.
The fix for this bug also corrected an issue in the LGMAN kernel block where the amount of free space left in the undo buffer was miscalculated, causing buffer overruns. This could cause records in the buffer to be overwritten, leading to problems when restarting data nodes.
Sometimes, when using the --ndb-log-orig option, the orig_epoch and orig_server_id columns of the ndb_binlog_index table on the slave contained the ID and epoch of the local server instead.
mgmapi.h contained constructs which worked only in C++, but not in C.