This release incorporates new features in the NDB storage engine and fixes recently discovered bugs in previous MySQL Cluster NDB 7.0 releases.
Obtaining MySQL Cluster NDB 7.0. The latest MySQL Cluster NDB 7.0 binaries for supported platforms can be obtained from http://dev.mysql.com/downloads/cluster/. Source code for the latest MySQL Cluster NDB 7.0 release can be obtained from the same location. You can also access the MySQL Cluster NDB 7.0 development source tree at https://code.launchpad.net/~mysql/mysql-server/mysql-cluster-7.0.
This release also incorporates all bugfixes and changes made in previous MySQL Cluster releases, as well as all bugfixes and feature changes which were added in mainline MySQL 5.1 through MySQL 5.1.69 (see Changes in MySQL 5.1.69 (2013-04-18)).
Please refer to our bug database at http://bugs.mysql.com/ for more details about the individual bugs fixed in this version.
Performance: In a number of cases found in various locations in the MySQL Cluster codebase, unnecessary iterations were performed; this was caused by failing to break out of a repeating control structure after a test condition had been met. This community-contributed fix removes the unneeded repetitions by supplying the missing breaks. (Bug #16904243, Bug #69392, Bug #16904338, Bug #69394, Bug #16778417, Bug #69171, Bug #16778494, Bug #69172, Bug #16798410, Bug #69207, Bug #16801489, Bug #69215, Bug #16904266, Bug #69393)
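A minimal sketch of the pattern this fix addresses (hypothetical code, not taken from the NDB sources): a search loop that should stop iterating as soon as its test condition has been met.

```cpp
#include <cassert>
#include <cstddef>

// Returns the index of the first element equal to target, or -1 if none.
// Before the fix, loops of this shape kept scanning the remaining
// elements after a match was found; the contributed patch supplies the
// missing break so the loop exits as soon as the condition is met.
static int findFirstEqual(const int *arr, std::size_t len, int target) {
    int found = -1;
    for (std::size_t i = 0; i < len; ++i) {
        if (arr[i] == target) {
            found = static_cast<int>(i);
            break;  // the fix: no further iterations are needed
        }
    }
    return found;
}
```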
The planned or unplanned shutdown of one or more data nodes while reading table data from the
ndbinfo database caused a memory leak. (Bug #16932989)
When DBDIH was updating table checkpoint information subsequent to a node failure, this could lead to a data node failure. (Bug #16904469)
In certain cases, when starting a new SQL node, mysqld failed with Error 1427 Api node died, when SUB_START_REQ reached node. (Bug #16840741)
Failure to use container classes specific to NDB during node failure handling could cause leakage of commit-ack markers, which could later lead to resource shortages or additional node crashes. (Bug #16834416)
Use of an uninitialized variable employed in connection with error handling in the DBLQH kernel block could sometimes lead to a data node crash or other stability issues for no apparent reason. (Bug #16834333)
A race condition in the time between the reception of an execNODE_FAILREP signal by the QMGR kernel block and its reception by the DBTC kernel blocks could lead to data node crashes during shutdown. (Bug #16834242)
The CLUSTERLOG command (see Commands in the MySQL Cluster Management Client) caused ndb_mgm to crash on Solaris SPARC systems. (Bug #16834030)
The LCP fragment scan watchdog periodically checks for lack of progress in a fragment scan performed as part of a local checkpoint, and shuts down the node if there is no progress after a given amount of time has elapsed. This interval, formerly hard-coded as 60 seconds, can now be configured using the
LcpScanProgressTimeout data node configuration parameter added in this release.
This configuration parameter sets the maximum time the local checkpoint can be stalled before the LCP fragment scan watchdog shuts down the node. The default is 60 seconds, which provides backward compatibility with previous releases.
You can disable the LCP fragment scan watchdog by setting this parameter to 0. (Bug #16630410)
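For example, the new parameter could be set in the [ndbd default] section of config.ini as follows (the value 120 is purely illustrative):

```ini
[ndbd default]
# Allow an LCP fragment scan to stall for up to 120 seconds before the
# watchdog shuts the node down; setting 0 would disable the watchdog.
LcpScanProgressTimeout=120
```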
When START BACKUP was run with an id that had already been used for a backup ID, an error caused by the duplicate ID occurred as expected, but following this, the START BACKUP command never completed. (Bug #16593604, Bug #68854)
When trying to specify a backup ID greater than the maximum allowed, the value was silently truncated. (Bug #16585455, Bug #68796)
The unexpected shutdown of another data node as a starting data node received its node ID caused the latter to hang in Start Phase 1. (Bug #16007980)
References: See also: Bug #18993037.
Creating more than 32 hash maps caused data nodes to fail. Usually new hash maps are created only when performing reorganization after data nodes have been added or when explicit partitioning is used, such as when creating a table with the MAX_ROWS option, or using PARTITION BY KEY() PARTITIONS. (Bug #14710311)
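As an illustration (table and column names here are hypothetical), either of the following CREATE TABLE statements uses explicit partitioning of the kind described above, and so can cause NDB to create a new hash map:

```sql
CREATE TABLE t1 (
    c1 INT NOT NULL PRIMARY KEY,
    c2 VARCHAR(100)
) ENGINE=NDB MAX_ROWS=100000000;

CREATE TABLE t2 (
    c1 INT NOT NULL PRIMARY KEY,
    c2 VARCHAR(100)
) ENGINE=NDB PARTITION BY KEY() PARTITIONS 8;
```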
When performing an INSERT ... ON DUPLICATE KEY UPDATE on an NDB table where the row to be inserted already existed and was locked by another transaction, the error message returned from the INSERT following the timeout was Transaction already aborted instead of the expected Lock wait timeout exceeded. (Bug #14065831, Bug #65130)
When START BACKUP WAIT STARTED was run from the command line (using ndb_mgm -e), the client did not exit until the backup completed. (Bug #11752837, Bug #44146)
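With this fix, an invocation such as the following returns control to the shell once the backup has started rather than waiting for it to finish:

```
ndb_mgm -e "START BACKUP WAIT STARTED"
```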
Formerly, the node used as the coordinator or leader for distributed decision making between nodes (also known as the DICT manager—see The DBDICT Block) was indicated in the output of the ndb_mgm client SHOW command as the “master” node, although this node has no relationship to a master server in MySQL Replication. (It should also be noted that it is not necessary to know which node is the leader except when debugging NDBCLUSTER source code.) To avoid possible confusion, this label has been removed, and the leader node is now indicated in SHOW command output using an asterisk (*) character. (Bug #11746263, Bug #24880)
Cluster API: For each log event retrieved using the MGM API, the log event category (ndb_mgm_event_category) was simply cast to an enum type, which resulted in invalid category values. Now an offset is added to the category following the cast to ensure that the value does not fall out of the allowed range.
Note
This change was reverted by the fix for Bug #18354165. See the MySQL Cluster API Developer documentation for ndb_logevent_get_next() for more information.
References: See also: Bug #18354165.
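A generic sketch of the kind of correction described above (hypothetical types, not the actual MGM API): when the raw value received is offset relative to the enum's first member, casting alone yields values outside the enum's valid range, so the offset must be applied as well.

```cpp
#include <cassert>

// Hypothetical category enum whose first member is 1; suppose raw
// values arrive zero-based, as in the bug described above.
enum Category { CAT_STARTUP = 1, CAT_SHUTDOWN = 2, CAT_ERROR = 3 };

// Casting raw directly would yield 0 for the first category, which is
// not a valid Category value; adding the offset along with the cast
// keeps the result within the allowed range.
static Category decodeCategory(int raw) {
    return static_cast<Category>(raw + 1);
}
```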