This General Availability (GA) release incorporates new features in the NDB storage engine and fixes recently discovered bugs in MySQL Cluster NDB 7.0.4.
Obtaining MySQL Cluster NDB 7.0.5. The latest MySQL Cluster NDB 7.0 binaries for supported platforms can be obtained from http://dev.mysql.com/downloads/cluster/. Source code for the latest MySQL Cluster NDB 7.0 release can be obtained from the same location. You can also access the MySQL Cluster NDB 7.0 development source tree at https://code.launchpad.net/~mysql/mysql-server/mysql-cluster-7.0.
This release also incorporates all bugfixes and changes made in previous MySQL Cluster NDB 6.1, 6.2, 6.3, and 6.4 releases, as well as all bugfixes and feature changes which were added in mainline MySQL 5.1 through MySQL 5.1.32 (see Changes in MySQL 5.1.32 (2009-02-14)).
Please refer to our bug database at http://bugs.mysql.com/ for more details about the individual bugs fixed in this version.
Added the HeartbeatOrder data node configuration parameter, which can be used to set the order in which heartbeats are transmitted between data nodes. This parameter can be useful in situations where multiple data nodes are running on the same host and a temporary disruption in connectivity between hosts would otherwise cause the loss of a node group, leading to failure of the cluster. (Bug #52182)
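As an illustration, HeartbeatOrder is set per data node in config.ini; the node IDs and values in this sketch are hypothetical, and lower values place a node earlier in the heartbeat transmission order:

```ini
# Hypothetical config.ini fragment: control the heartbeat order
# so that co-hosted data nodes are not adjacent in the circle.
[ndbd]
NodeId=1
HeartbeatOrder=10

[ndbd]
NodeId=2
HeartbeatOrder=20
```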
Two new server status variables, Ndb_scan_count and Ndb_pruned_scan_count, have been introduced. Ndb_scan_count gives the number of scans executed since the cluster was last started. Ndb_pruned_scan_count gives the number of scans for which NDBCLUSTER was able to use partition pruning. Together, these variables can be used to help determine in the MySQL server whether table scans are pruned by NDBCLUSTER. (Bug #44153)
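These counters can be inspected from any SQL node; a minimal sketch, assuming a running cluster:

```sql
-- Show both scan counters at once (requires a running MySQL Cluster).
SHOW GLOBAL STATUS LIKE 'Ndb%scan_count';
-- A value of Ndb_pruned_scan_count close to Ndb_scan_count suggests
-- that most scans are being pruned to a single partition.
```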
Important Note: Due to a problem discovered after the code freeze for this release, it is not possible to perform an online upgrade from any MySQL Cluster NDB 6.x release to MySQL Cluster NDB 7.0.5 or any earlier MySQL Cluster NDB 7.0 release.
This issue is fixed in MySQL Cluster NDB 7.0.6 and later for upgrades from MySQL Cluster NDB 6.3.8 and later MySQL Cluster NDB 6.3 releases, or from MySQL Cluster NDB 7.0.5. (Bug #44294)
Cluster Replication: If a data node failed during an event creation operation, there was a slight risk that a surviving data node could send an invalid table reference back to NDB, causing the operation to fail with a spurious Error 723 (No such table). This could take place when a data node failed while a mysqld process was setting up MySQL Cluster Replication. (Bug #43754)
Cluster API: The following issues occurred when performing an online (rolling) upgrade of a cluster to a version of MySQL Cluster that supports configuration caching from a version that does not:
When using multiple management servers, after upgrading and restarting one ndb_mgmd, any remaining management servers using the previous version of ndb_mgmd could not synchronize their configuration data.
The MGM API ndb_mgm_get_configuration_nodeid() function failed to obtain configuration data.
During initial node restarts, initialization of the REDO log was always performed 1 node at a time, during start phase 4. Now this is done during start phase 2, so that the initialization can be performed in parallel, thus decreasing the time required for initial restarts involving multiple nodes. (Bug #50062)
If the number of fragments per table rises above a certain threshold, the DBDIH kernel block's on-disk table definition grows large enough to occupy two pages. However, in MySQL Cluster NDB 7.0 (including MySQL Cluster NDB 6.4 releases), only one page was actually written, causing table definitions stored on disk to be incomplete.
This issue was not observed in MySQL Cluster release series prior to MySQL Cluster NDB 7.0. (Bug #44135)
TransactionDeadlockDetectionTimeout values less than 100 were treated as 100. This could cause scans to time out unexpectedly. (Bug #44099)
ndberror.c contained a C++-style comment, which caused builds to fail with some C compilers. (Bug #44036)
A race condition could occur when a data node failed to restart just before being included in the next global checkpoint. This could cause other data nodes to fail. (Bug #43888)
The setting for ndb_use_transactions was ignored. This issue was only known to occur in MySQL Cluster NDB 6.4.3 and MySQL Cluster NDB 7.0.4. (Bug #43236)
When a data node process had been killed after allocating a node ID, but before making contact with any other data node processes, it was not possible to restart it due to a node ID allocation failure.
This issue could affect either ndbd or ndbmtd processes. (Bug #43224)
References: This issue is a regression of: Bug #42973.
ndb_restore crashed when trying to restore a backup made to a MySQL Cluster running on a platform having different endianness from that on which the original backup was taken. (Bug #39540)
PID files for the data and management node daemons were not removed following a normal shutdown. (Bug #37225)
Invoking the management client START BACKUP command from the system shell (for example, as ndb_mgm -e "START BACKUP") did not work correctly unless the backup ID was included when the command was invoked.
Now the backup ID is no longer required in such cases, and the automatically generated backup ID is printed to stdout, just as when START BACKUP is invoked within the management client. (Bug #31754)
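Both invocation forms now behave consistently; a sketch of shell usage, assuming a reachable management server:

```shell
# Start a backup from the system shell; the backup ID is now optional,
# and the automatically generated ID is printed to stdout.
ndb_mgm -e "START BACKUP"

# Supplying an explicit backup ID continues to work as before:
ndb_mgm -e "START BACKUP 123"
```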
When aborting an operation involving both an insert and a delete, the insert and delete were aborted separately. This was because the transaction coordinator did not know that the operations affected the same row; in the case of a committed-read (tuple or index) scan, the abort of the insert was performed first, and the row was then examined after the insert was aborted but before the delete was aborted. In some cases, this could leave the row in an inconsistent state. This could occur when a local checkpoint was performed during a backup. This issue did not affect primary key operations or scans that used locks (these are serialized).
After this fix, for ordered indexes, all operations that follow the operation to be aborted are now also aborted.
Disk Data: When using multi-threaded data nodes, DROP TABLE statements on Disk Data tables could hang. (Bug #43825)
Disk Data: This fix completes one that was made for this issue in MySQL Cluster NDB 7.0.4, which did not rectify the problem in all cases. (Bug #43632)
Cluster Replication: When a table is created or altered, an NdbEventOperation is created by the mysqld process to monitor the table for subsequent logging in the binary log. If this happened during a node restart, there was a chance that the reference count on this event operation object could be incorrect, which could lead to an assertion failure in debug MySQL Cluster builds. (Bug #43752)
Cluster API: If the largest offset of a RecordSpecification used for an NdbRecord object was for the NULL bits (and thus not a column), this offset was not taken into account when calculating the size used for the RecordSpecification. This meant that the space for the NULL bits could be overwritten by key or other information. (Bug #43891)
BIT columns created using the native NDB API format that were not created as nullable could still sometimes be overwritten, or cause other columns to be overwritten.
This issue did not affect tables having BIT columns created using the mysqld format (always used by MySQL Cluster SQL nodes). (Bug #43802)