This is a bugfix release, fixing recently discovered bugs in the previous MySQL Cluster NDB 6.1 release.
MySQL Cluster NDB 6.1 no longer in development. MySQL Cluster NDB 6.1 (formerly known as “MySQL Cluster Carrier Grade Edition 6.1.x”) is no longer being developed or maintained; if you are using a MySQL Cluster NDB 6.1 release, you should consider upgrading to MySQL Cluster NDB 6.2 or 6.3.
This Beta release incorporates all bugfixes and changes made in previous MySQL Cluster NDB 6.1 releases, as well as all bugfixes and feature changes which were added in mainline MySQL 5.1 through MySQL 5.1.15 (see Changes in MySQL 5.1.15 (2007-01-25)).
Please refer to our bug database at http://bugs.mysql.com/ for more details about the individual bugs fixed in this version.
Functionality Added or Changed
Incompatible Change; Cluster Replication: The schema for the ndb_apply_status table in the mysql system database has changed. When upgrading to this release from a previous MySQL Cluster NDB 6.x or mainline MySQL 5.1 release, you must drop the mysql.ndb_apply_status table, then restart the server so that the table is re-created with the new schema. See MySQL Cluster Replication Schema and Tables, for additional information.
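The upgrade step described above can be sketched as follows; this is a minimal example assuming a local server and the standard mysql client, so adjust credentials and your platform's restart method as needed:

```sql
-- Drop the old table so the server can re-create it with the new schema:
DROP TABLE mysql.ndb_apply_status;

-- Then restart mysqld; for example, from the shell:
--   mysqladmin -u root -p shutdown
--   mysqld_safe &
```

On restart, the server re-creates mysql.ndb_apply_status using the new schema automatically.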
A data node failing while another data node was restarting could leave the cluster in an inconsistent state. In certain rare cases, this could lead to a race condition and the eventual forced shutdown of the cluster. (Bug #27466)
mysqld processes would sometimes crash under high load. This fix improves on and replaces a fix for this bug that was made in MySQL Cluster NDB 6.1.5.
When a data node was taking over as the master node, a race condition could sometimes occur as the node was assuming responsibility for handling of global checkpoints. (Bug #27283)
It was not possible to set
A race condition could sometimes occur if the node acting as master failed while node IDs were still being allocated during startup. (Bug #27286)
mysqld could crash shortly after a data node failure following certain DML operations. (Bug #27169)
The same failed request from an API node could be handled by the cluster multiple times, resulting in reduced performance. (Bug #27087)
The failure of a data node while restarting could cause other data nodes to hang or crash. (Bug #27003)
DROP INDEX on a Disk Data table did not always move data from memory into the tablespace.
Trying to replicate a large number of frequent updates with a relatively small relay log (max-relay-log-size set to 1M or less) could cause the slave to crash.
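As a workaround for the entry above, the relay log size limit on the slave can be raised; a hedged sketch, assuming a running MySQL 5.1 slave where you have the SUPER privilege (the value 268435456, i.e. 256MB, is only an illustrative choice):

```sql
-- Inspect the current limit (0 means max_binlog_size is used instead):
SHOW VARIABLES LIKE 'max_relay_log_size';

-- Raise the limit dynamically; this can also be set in my.cnf
-- under [mysqld] as max_relay_log_size=256M.
SET GLOBAL max_relay_log_size = 268435456;
```

The variable is dynamic in MySQL 5.1, so no restart is required, but a value set this way does not persist across server restarts unless it is also added to the configuration file.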
An issue with the way in which freed resources were handled could sometimes lead to memory corruption.
Cluster API: A delete operation using a scan followed by an insert using a scan could cause a data node to fail. (Bug #27203)