MySQL Cluster NDB 7.2.11 is a new release of MySQL Cluster, incorporating new features in the NDB storage engine and fixing recently discovered bugs in previous MySQL Cluster NDB 7.2 releases.
Obtaining MySQL Cluster NDB 7.2. MySQL Cluster NDB 7.2 source code and binaries can be obtained from http://dev.mysql.com/downloads/cluster/.
This release also incorporates all bugfixes and changes made in previous MySQL Cluster releases, as well as all bugfixes and feature changes which were added in mainline MySQL 5.5 through MySQL 5.5.29 (see Changes in MySQL 5.5.29 (2012-12-21)).
Following an upgrade to MySQL Cluster NDB 7.2.7 or later, it was not possible to downgrade online again to any previous version, due to a change in that version in the default size (number of LDM threads used) for NDB table hash maps. The fix for this issue makes the size configurable, with the addition of the DefaultHashMapSize configuration parameter.
To retain compatibility with an older release that does not support large hash maps, you can set this parameter in the cluster's config.ini file to the value used in older releases (240) before performing an upgrade, so that the data nodes continue to use smaller hash maps that are compatible with the older release. You can also now employ this parameter in MySQL Cluster NDB 7.0 and MySQL Cluster NDB 7.1 to enable larger hash maps prior to upgrading to MySQL Cluster NDB 7.2. For more information, see the description of the DefaultHashMapSize parameter. (Bug #14800539)
References: See also: Bug #14645319.
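For illustration, a config.ini fragment of this kind might set the parameter before performing the upgrade. Only the DefaultHashMapSize value (240) comes from the note above; the section layout and the other parameters shown are hypothetical placeholders:

```ini
# Hypothetical config.ini fragment; only DefaultHashMapSize comes from the
# release note. Placing it in [ndbd default] applies it to all data nodes.
[ndbd default]
NoOfReplicas=2
# 240 is the hash map size used by releases that do not support
# large hash maps, keeping data nodes downgrade-compatible.
DefaultHashMapSize=240
```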
Important Change; Cluster API: When checking, as part of evaluating an if predicate, which error codes should be propagated to the application, any error code less than 6000 caused the current row to be skipped, even those codes that should have caused the query to be aborted. In addition, a scan that aborted due to an error from DBTUP when no rows had been sent to the API caused DBLQH to send a SCAN_FRAGCONF signal rather than a SCAN_FRAGREF signal to DBTC. This caused DBTC to time out waiting for a SCAN_FRAGREF signal that was never sent, and the scan was never closed.
As part of this fix, the default ErrorCode value used by NdbInterpretedCode::interpret_exit_nok() has been changed from 899 (Rowid already allocated) to 626 (Tuple did not exist). The old value continues to be supported for backward compatibility. User-defined values in the range 6000-6999 (inclusive) are also now supported. Keep in mind that the result of using any other ErrorCode value not mentioned here is not defined or guaranteed.
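The documented set of supported ErrorCode values can be sketched as a small predicate. This helper is not part of the NDB API; only the constants (626, 899, and the 6000-6999 range) come from the note above:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical helper (not part of the NDB API) encoding the documented
// ErrorCode rules for NdbInterpretedCode::interpret_exit_nok():
//   626        - new default (Tuple did not exist)
//   899        - old default (Rowid already allocated), kept for compatibility
//   6000-6999  - user-defined values (inclusive)
// Any other value is undefined per the release note.
bool is_supported_exit_nok_code(std::uint32_t code) {
    if (code == 626 || code == 899)
        return true;                      // new default and legacy default
    return code >= 6000 && code <= 6999;  // user-defined range
}
```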
When using tables having more than 64 fragments in a MySQL Cluster where multiple TC threads were configured (on data nodes running ndbmtd), KeyInfo memory could be freed prematurely, before scans relying on these objects could be completed, leading to a crash of the data node. (Bug #16402744)
References: See also: Bug #13799800. This issue is a regression of: Bug #14143553.
When started with --initial and an invalid configuration file (-f) option, ndb_mgmd removed the old configuration cache before verifying the configuration file. Now in such cases, ndb_mgmd first checks for the file, and continues with removing the configuration cache only if the configuration file is found and is valid. (Bug #16299289)
Executing a DUMP 2304 command during a data node restart could cause the data node to crash with a Pointer too large error. (Bug #16284258)
Including a table as part of a pushed join should be rejected if there are outer joined tables between the table to be included and the tables with which it is joined; however, the check for any such outer joined tables was performed against the root of the pushed query, rather than against the common ancestor of the tables being joined. (Bug #16199028)
References: See also: Bug #16198866.
Some queries were handled differently with ndb_join_pushdown enabled, because outer join conditions were not always pruned correctly from joins before they were pushed down. (Bug #16198866)
References: See also: Bug #16199028.
Data nodes could fail during a system restart when the host ran short of memory, due to signals of the wrong type (TRANSID_AI_R) being sent to the DBSPJ kernel block. (Bug #16187976)
Attempting to perform additional operations such as ADD COLUMN as part of an ALTER [ONLINE | OFFLINE] TABLE ... RENAME ... statement is not supported, and now fails with an ER_NOT_SUPPORTED_YET error. (Bug #16021021)
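For example (table and column names here are hypothetical), such an operation must now be issued as two separate statements rather than one combined ALTER TABLE:

```sql
-- Fails in NDB 7.2.11 with ER_NOT_SUPPORTED_YET:
-- ALTER ONLINE TABLE t1 ADD COLUMN c2 INT, RENAME TO t2;

-- Supported: perform the rename and the column addition separately.
ALTER TABLE t1 RENAME TO t2;
ALTER TABLE t2 ADD COLUMN c2 INT;
```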
Due to a known issue in the MySQL Server, it is possible to drop the PERFORMANCE_SCHEMA database. (Bug #15831748) In addition, when executed on a MySQL Server acting as a MySQL Cluster SQL node, DROP DATABASE caused this database to be dropped on all SQL nodes in the cluster. Now, when executing a distributed drop of a database, NDB does not delete tables that are local only. This prevents MySQL system databases from being dropped in such cases. (Bug #14798043)
References: See also: Bug #15831748.
When performing large numbers of DDL statements (100 or more) in succession, adding an index to a table sometimes caused mysqld to crash when it could not find the table in NDB. Now when this problem occurs, the DDL statement should fail with an appropriate error.
Executing a DUMP 1000 command (see DUMP 1000) with extra or malformed arguments could lead to data node failures. (Bug #14537622)
Exhaustion of LongMessageBuffer memory under heavy load could cause data nodes running ndbmtd to fail. (Bug #14488185)
The ndb_mgm client HELP command did not show the complete syntax for the
Cluster API: The Ndb::computeHash() API method performs a malloc() if no buffer is provided for it to use. However, it was assumed that the memory thus returned would always be suitably aligned, which is not always the case. Now when malloc() provides the buffer used by this method, the buffer is aligned after being allocated and before being used. (Bug #16484617)
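The underlying technique, shown here as a generic sketch rather than the actual NDB source, is to over-allocate with malloc() and round the resulting pointer up to the required boundary:

```cpp
#include <cassert>
#include <cstdint>
#include <cstdlib>

// Generic alignment sketch (not the actual NDB fix): round a pointer up
// to the next multiple of `alignment`, which must be a power of two.
// The caller over-allocates by (alignment - 1) bytes so that the rounded
// pointer still lies within the allocation.
void* align_up(void* p, std::size_t alignment) {
    std::uintptr_t addr = reinterpret_cast<std::uintptr_t>(p);
    std::uintptr_t aligned = (addr + alignment - 1) & ~(std::uintptr_t)(alignment - 1);
    return reinterpret_cast<void*>(aligned);
}
```

A caller wanting a 64-byte-aligned buffer of 128 bytes would malloc(128 + 63), pass the result through align_up(), and free the original pointer when done.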
The mysql.server script exited with an error if the status command was executed with multiple servers running. (Bug #15852074)