This release incorporates new features in the NDB storage engine and fixes recently discovered bugs in previous MySQL Cluster NDB 7.0 releases.

Obtaining MySQL Cluster NDB 7.0. The latest MySQL Cluster NDB 7.0 binaries for supported platforms can be obtained from http://dev.mysql.com/downloads/cluster/. Source code for the latest MySQL Cluster NDB 7.0 release can be obtained from the same location. You can also access the MySQL Cluster NDB 7.0 development source tree at https://code.launchpad.net/~mysql/mysql-server/mysql-cluster-7.0.
This release also incorporates all bugfixes and changes made in previous MySQL Cluster releases, as well as all bugfixes and feature changes which were added in mainline MySQL 5.1 through MySQL 5.1.69 (see Changes in MySQL 5.1.69 (2013-04-18)).
Please refer to our bug database at http://bugs.mysql.com/ for more details about the individual bugs fixed in this version.
Cluster API: Added DUMP code 2514, which provides information about counts of transaction objects per API node. For more information, see DUMP 2514. See also Commands in the MySQL Cluster Management Client. (Bug #15878085)
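As an illustration, a DUMP code such as this one is issued from the ndb_mgm management client, directed either at a single data node by node ID or at all nodes; the node ID shown here is hypothetical:

```
ndb_mgm> ALL DUMP 2514
ndb_mgm> 2 DUMP 2514
```

The resulting transaction object counts are written to the cluster log rather than returned to the client.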
When ndb_restore fails to find a table, it now includes in the error output an NDB API error code giving the reason for the failure. (Bug #16329067)
Following an upgrade to MySQL Cluster NDB 7.2.7 or later, it was not possible to downgrade online again to any previous version, due to a change in that version in the default size (number of LDM threads used) for NDB table hash maps. The fix for this issue makes the size configurable, with the addition of the DefaultHashMapSize configuration parameter.
To retain compatibility with an older release that does not support large hash maps, you can set this parameter in the cluster's config.ini file to the value used in older releases (240) before performing an upgrade, so that the data nodes continue to use smaller hash maps that are compatible with the older release. You can also now employ this parameter in MySQL Cluster NDB 7.0 and MySQL Cluster NDB 7.1 to enable larger hash maps prior to upgrading to MySQL Cluster NDB 7.2. For more information, see the description of the DefaultHashMapSize parameter. (Bug #14800539)
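A minimal sketch of how this might look in config.ini when preparing a cluster for a later online downgrade; the section placement follows the usual [ndbd default] convention, and the surrounding values are illustrative only:

```ini
[ndbd default]
NoOfReplicas=2
# Keep the older hash map size (240) so that data nodes remain
# compatible with releases that do not support large hash maps
DefaultHashMapSize=240
```

Setting the parameter before the upgrade, as described above, is what preserves the ability to downgrade online afterward.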
References: See also: Bug #14645319.
Important Change; Cluster API: When checking, as part of evaluating an if predicate, which error codes should be propagated to the application, any error code less than 6000 caused the current row to be skipped, even those codes that should have caused the query to be aborted. In addition, a scan that aborted due to an error from DBTUP when no rows had been sent to the API caused DBLQH to send a SCAN_FRAGCONF signal rather than a SCAN_FRAGREF signal to DBTC. This caused DBTC to time out waiting for a SCAN_FRAGREF signal that was never sent, and the scan was never closed.
As part of this fix, the default ErrorCode value used by NdbInterpretedCode::interpret_exit_nok() has been changed from 899 (Rowid already allocated) to 626 (Tuple did not exist). The old value continues to be supported for backward compatibility. User-defined values in the range 6000-6999 (inclusive) are also now supported. You should also keep in mind that the result of using any other ErrorCode value not mentioned here is not defined or guaranteed.
The NDB Error-Reporting Utility (ndb_error_reporter) failed to include the cluster nodes' log files in the archive it produced when the FILE option was set for the LogDestination parameter. (Bug #16765651)
References: See also: Bug #11752792, Bug #44082.
A WHERE condition that contained a boolean test of the result of an IN subselect was not evaluated correctly. (Bug #16678033)
In some cases a data node could stop with an exit code but no error message other than (null) was logged. (This could occur when using ndbd or ndbmtd for the data node process.) Now in such cases the appropriate error message is used instead (see ndbd Error Messages). (Bug #16614114)
When using tables having more than 64 fragments in a MySQL Cluster where multiple TC threads were configured (on data nodes running ndbmtd), KeyInfo memory could be freed prematurely, before scans relying on these objects could be completed, leading to a crash of the data node. (Bug #16402744)
References: See also: Bug #13799800. This issue is a regression of: Bug #14143553.
When started with --initial and an invalid --config-file (-f) option, ndb_mgmd removed the old configuration cache before verifying the configuration file. Now in such cases, ndb_mgmd first checks for the file, and continues with removing the configuration cache only if the configuration file is found and is valid. (Bug #16299289)
Executing a DUMP 2304 command during a data node restart could cause the data node to crash with a Pointer too large error. (Bug #16284258)
Improved handling of lagging row change event subscribers by setting the size of the GCP pool to the value of MaxBufferedEpochs. This fix also introduces a new MaxBufferedEpochBytes data node configuration parameter, which makes it possible to set the total number of bytes per node to be reserved for buffering epochs. In addition, a new DUMP code (8013) has been added, which causes a list of lagging subscribers for each node to be printed to the cluster log (see DUMP 8013). (Bug #16203623)
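As a hedged sketch, the new parameter would be set alongside MaxBufferedEpochs in the [ndbd default] section of config.ini; the values shown below are illustrative, not tuning recommendations:

```ini
[ndbd default]
# Number of epochs that may be buffered for lagging event subscribers
MaxBufferedEpochs=100
# Total bytes per node reserved for buffering epochs (new parameter)
MaxBufferedEpochBytes=26214400
```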
Data nodes could fail during a system restart when the host ran short of memory, due to signals of the wrong types (TRANSID_AI_R) being sent to the DBSPJ kernel block. (Bug #16187976)
Attempting to perform additional operations such as ADD COLUMN as part of an ALTER [ONLINE | OFFLINE] TABLE ... RENAME ... statement is not supported, and now fails with an ER_NOT_SUPPORTED_YET error. (Bug #16021021)
Purging the binary logs could sometimes cause mysqld to crash. (Bug #15854719)
Due to a known issue in the MySQL Server, it is possible to drop the PERFORMANCE_SCHEMA database. (Bug #15831748) In addition, when executed on a MySQL Server acting as a MySQL Cluster SQL node, DROP DATABASE caused this database to be dropped on all SQL nodes in the cluster. Now, when executing a distributed drop of a database, NDB does not delete tables that are local only. This prevents MySQL system databases from being dropped in such cases. (Bug #14798043)
References: See also: Bug #15831748.
An error message in src/mgmsrv/MgmtSrvr.cpp was corrected. (Bug #14548052, Bug #66518)
A DUMP 1000 command (see DUMP 1000) that contained extra or malformed arguments could lead to data node failures. (Bug #14537622)
Exhaustion of LongMessageBuffer memory under heavy load could cause data nodes running ndbmtd to fail. (Bug #14488185)
The help text for ndb_select_count did not include any information about using table names. (Bug #11755737, Bug #47551)
The ndb_mgm client HELP command did not show the complete syntax for the
Cluster API: The Ndb::computeHash() API method performs a malloc() if no buffer is provided for it to use. However, it was assumed that the memory thus returned would always be suitably aligned, which is not always the case. Now when malloc() provides a buffer to this method, the buffer is aligned after it is allocated, and before it is used. (Bug #16484617)