MySQL Cluster NDB 7.2.11 is a new release of MySQL Cluster, incorporating new features in the NDBCLUSTER storage engine and fixing recently discovered bugs in previous MySQL Cluster NDB 7.2 releases.

Obtaining MySQL Cluster NDB 7.2. MySQL Cluster NDB 7.2 source code and binaries can be obtained from http://dev.mysql.com/downloads/cluster/.
This release also incorporates all bugfixes and changes made in previous MySQL Cluster releases, as well as all bugfixes and feature changes which were added in mainline MySQL 5.5 through MySQL 5.5.29 (see Changes in MySQL 5.5.29 (2012-12-21)).
Functionality Added or Changed
Following an upgrade to MySQL Cluster NDB 7.2.7 or later, it was not possible to downgrade online again to any previous version, due to a change in that version in the default size (number of LDM threads used) for hash maps. The fix for this issue makes the size configurable, with the addition of the DefaultHashMapSize configuration parameter.

To retain compatibility with an older release that does not support large hash maps, you can set this parameter in the config.ini file to the value used in older releases (240) before performing an upgrade, so that the data nodes continue to use smaller hash maps that are compatible with the older release. You can also now employ this parameter in MySQL Cluster NDB 7.0 and MySQL Cluster NDB 7.1 to enable larger hash maps prior to upgrading to MySQL Cluster NDB 7.2. For more information, see the description of the DefaultHashMapSize data node configuration parameter.
References: See also Bug #14645319.
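For example, to keep the smaller hash maps before performing the upgrade, the parameter described above (DefaultHashMapSize) can be set in the data node defaults section of config.ini. This is only a sketch of the relevant lines; the rest of the configuration file will differ in your installation:

```ini
[ndbd default]
# Retain the pre-7.2.7 hash map size so an online downgrade remains possible
DefaultHashMapSize=240
```

After all nodes in the cluster have been upgraded, the parameter can be removed (or raised) so that the new, larger default takes effect.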
Bugs Fixed

Important Change; Cluster API: When checking, as part of evaluating an if predicate, which error codes should be propagated to the application, any error code less than 6000 caused the current row to be skipped, even those codes that should have caused the query to be aborted. In addition, a scan that aborted due to an error from DBTUP when no rows had been sent to the API caused DBLQH to send a SCAN_FRAGCONF signal rather than a SCAN_FRAGREF signal to DBTC. This caused DBTC to time out waiting for a signal that was never sent, and the scan was never closed.

As part of this fix, the default ErrorCode value used by NdbInterpretedCode::interpret_exit_nok() has been changed from 899 (Rowid already allocated) to 626 (Tuple did not exist). The old value continues to be supported for backward compatibility. User-defined values in the range 6000-6999 (inclusive) are also now supported. Keep in mind that the result of using any other ErrorCode value not mentioned here is not defined or guaranteed.
When using tables having more than 64 fragments in a MySQL Cluster where multiple TC threads were configured (on data nodes running ndbmtd), memory belonging to internal scan records could be freed prematurely, before scans relying on those records had completed, leading to a crash of the data node.
References: See also Bug #13799800. This bug was introduced by Bug #14143553.
When started with the --config-file (or -f) option, ndb_mgmd removed the old configuration cache before verifying the configuration file. Now in such cases, ndb_mgmd first checks for the file, and continues with removing the configuration cache only if the configuration file is found and is valid.
Executing a DUMP 2304 command during a data node restart could cause the data node to crash with a Pointer too large error.
Including a table as part of a pushed join should be rejected if there are outer joined tables between the table to be included and the tables with which it is joined; however, the check performed for any such outer joined tables compared the join type against the root of the pushed query, rather than against the common ancestor of the tables being joined. (Bug #16199028)
References: See also Bug #16198866.
Some queries were handled differently with ndb_join_pushdown enabled, because outer join conditions were not always pruned correctly from joins before they were pushed down.
References: See also Bug #16199028.
Data nodes could fail during a system restart when the host ran short of memory, due to signals of the wrong types (such as TRANSID_AI_R) being sent to the DBSPJ kernel block.
Attempting to perform additional operations, such as ADD COLUMN, as part of an ALTER [ONLINE | OFFLINE] TABLE ... RENAME ... statement is not supported, and now fails with an error.
Due to a known issue in the MySQL Server, it is possible to drop the PERFORMANCE_SCHEMA database. (Bug #15831748) In addition, when executed on a MySQL Server acting as a MySQL Cluster SQL node, DROP DATABASE caused this database to be dropped on all SQL nodes in the cluster. Now, when executing a distributed drop of a database, NDB does not delete tables that are local only. This prevents MySQL system databases from being dropped in such cases.
When performing large numbers of DDL statements (100 or more) in succession, adding an index to a table sometimes caused mysqld to crash when the corresponding object could not be found in NDB. Now when this problem occurs, the DDL statement fails with an appropriate error.
A DUMP 1000 command (see DUMP 1000) that contained extra or malformed arguments could lead to data node failures; issuing such a command while the cluster was under heavy load could cause data nodes running ndbmtd to fail.
The ndb_mgm client HELP command did not show the complete syntax for the REPORT command.
The Ndb::computeHash() method performs a malloc() if no buffer is provided for it to use. However, it was assumed that the memory thus returned would always be suitably aligned, which is not always the case. Now when malloc() provides a buffer to this method, the buffer is aligned after it is allocated, and before it is used.
The mysql.server script exited with an error when the status command was executed with multiple servers running.