This is a bugfix release, fixing recently discovered bugs in the previous MySQL Cluster NDB 6.1 release.
MySQL Cluster NDB 6.1 is no longer in development. MySQL Cluster NDB 6.1 (formerly known as “MySQL Cluster Carrier Grade Edition 6.1.x”) is no longer being developed or maintained; if you are using a MySQL Cluster NDB 6.1 release, you should consider upgrading to MySQL Cluster NDB 6.2 or 6.3.
This Beta release incorporates all bugfixes and changes made in previous MySQL Cluster NDB 6.1 releases, as well as all bugfixes and feature changes which were added in mainline MySQL 5.1 through MySQL 5.1.15 (see Changes in MySQL 5.1.15 (2007-01-25)).
Please refer to our bug database at http://bugs.mysql.com/ for more details about the individual bugs fixed in this version.
Functionality Added or Changed
The ndb_show_tables utility now displays information about table events. See ndb_show_tables — Display List of NDB Tables, for more information.
The ndbd_redo_log_reader utility is now part of the default build. For more information, see ndbd_redo_log_reader — Check and Print Content of Cluster Redo Log.
method has been added to the
It is now possible to disable arbitration by setting ArbitrationRank = 0 on all management and SQL nodes.
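As an illustrative sketch (host names here are placeholders, not taken from this changelog), disabling arbitration in this way would be done in the cluster configuration file by setting ArbitrationRank for every management node and SQL node section:

```ini
# Hypothetical config.ini excerpt; HostName values are placeholders.

[ndb_mgmd]
HostName=mgm_host
ArbitrationRank=0    # 0 = this management node is never used as an arbitrator

[mysqld]
HostName=sql_host
ArbitrationRank=0    # 0 = this SQL node is never used as an arbitrator
```

With ArbitrationRank set to 0 on all management and SQL nodes, no node is eligible to act as arbitrator, and arbitration is effectively disabled for the cluster.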
Bugs Fixed
An invalid pointer was returned following a FSCLOSECONF signal when accessing the REDO logs during a node restart or system restart.
No appropriate error message was provided when there was insufficient REDO log file space for the cluster to start. (Bug #25801)
The message Error 0 in readAutoIncrementValue(): no Error was written to the error log whenever SHOW TABLE STATUS was performed on a Cluster table that did not have an AUTO_INCREMENT column.
This improves on and supersedes an earlier fix that was made for this issue in MySQL 5.1.12.
The InvalidUndoBufferSize error used the same error code (763) as the IncompatibleVersions error. InvalidUndoBufferSize now uses its own error code (779). (Bug #26490)
Under some circumstances, following the restart of a management node, all data nodes would connect to it normally, but some of them subsequently failed to log any events to the management node. (Bug #26293)
A data node configuration parameter was not read until after distributed communication had already started between cluster nodes. When the value of this parameter was 1, this could sometimes result in data node failure due to missed heartbeats.
Takeover for local checkpointing due to multiple failures of master nodes was sometimes incorrectly handled. (Bug #26457)
The failure of a data node when restarting it with --initial could lead to failures of subsequent data node restarts.
A memory allocation failure in the cluster Subscription Manager could cause the cluster to crash.
Use of a tablespace whose INITIAL_SIZE was greater than 1 GB could cause the cluster to crash.
Disk Data: A memory overflow could occur with tables having a large amount of data stored on disk, or with queries using a very high degree of parallelism on Disk Data tables. (Bug #26514)