MySQL Cluster NDB 7.2.9 was withdrawn shortly after release, due to a problem with primary keys and tables with very many rows that was introduced in this release (Bug #16023068, Bug #67928). Users should upgrade to MySQL Cluster NDB 7.2.10, which fixes this issue.
MySQL Cluster NDB 7.2.9 is a new release of MySQL Cluster,
incorporating new features in the
NDBCLUSTER storage engine, and
fixing recently discovered bugs in previous MySQL Cluster NDB 7.2 releases.
Obtaining MySQL Cluster NDB 7.2. MySQL Cluster NDB 7.2 source code and binaries can be obtained from http://dev.mysql.com/downloads/cluster/.
This release also incorporates all bugfixes and changes made in previous MySQL Cluster releases, as well as all bugfixes and feature changes which were added in mainline MySQL 5.5 through MySQL 5.5.28 (see Changes in MySQL 5.5.28 (2012-09-28)).
Functionality Added or Changed
Important Change; ClusterJ:
A new CMake option,
WITH_NDB_JAVA, is introduced. When
this option is enabled, the MySQL Cluster build is configured to
include Java support, including support for
ClusterJ. If the JDK cannot be found,
CMake fails with an error. This option is
enabled by default; if you do not wish to
compile Java support into MySQL Cluster, you must now disable it
explicitly when configuring the build, using
-DWITH_NDB_JAVA=OFF.
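If you do not want Java support, a configure step along these lines should work; the directory names here are illustrative:

```shell
# Configure a MySQL Cluster source build with Java/ClusterJ support
# disabled; the source and build directory names are illustrative.
cd mysql-cluster-gpl-7.2.9
mkdir build && cd build
cmake .. -DWITH_NDB_JAVA=OFF
```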
Added three new columns to the
transporters table in the
ndbinfo database. The new
byte-count columns, including
bytes_received, help to provide an
overview of data transfer across the transporter links in a
MySQL Cluster. This information can be useful in verifying
system balance, partitioning, and front-end server load
balancing; it may also be of help when diagnosing network
problems arising from link saturation, hardware faults, or other
causes.
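As a sketch, the per-link counters can be inspected from the mysql client; the bytes_sent column name is an assumption based on the bytes_received column described above:

```shell
# Show per-transporter traffic counters from the ndbinfo database;
# bytes_sent is assumed as the counterpart of bytes_received.
mysql -e "SELECT node_id, remote_node_id, status, bytes_sent, bytes_received
          FROM ndbinfo.transporters
          ORDER BY bytes_received DESC;"
```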
Data node logs now provide tracking information about arbitrations, including which nodes have assumed the arbitrator role and at what times. (Bug #11761263, Bug #53736)
A slow file system during local checkpointing could exert undue
pressure on DBDIH kernel block file page
buffers, which in turn could lead to a data node crash when
these were exhausted. This fix limits the number of table
definition updates that
DBDIH can issue concurrently.
The management server process, when started with certain
startup options, could sometimes hang during shutdown.
The output from ndb_config
--configinfo now contains the
same information as that from ndb_config
--configinfo --xml, including explicit
indicators for parameters that do not require restarting a data
node with --initial to take effect.
Previously, ndb_config indicated incorrectly
that one node configuration parameter requires an initial node restart to
take effect, when in fact it does not; this error was also
present in the MySQL Cluster documentation, where it has also
been corrected.
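For example, both forms of this output can be generated as follows:

```shell
# Text and XML forms of the configuration parameter reference,
# including restart-type indicators for each parameter.
ndb_config --configinfo
ndb_config --configinfo --xml
```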
Attempting to restart more than 5 data nodes simultaneously could cause the cluster to crash. (Bug #14647210)
In MySQL Cluster NDB 7.2.7, the size of the hash map was
increased to 3840, to support larger numbers of LDM
threads. However, when upgrading a MySQL
Cluster from a previous release, existing tables could not use
or be modified online to take advantage of the new size, even
when the number of fragments was increased by (for example)
adding new data nodes to the cluster. Now in such cases,
following an upgrade, once the number of fragments has been
increased, you can run ALTER
TABLE ... REORGANIZE PARTITION on tables that were
created in MySQL Cluster NDB 7.2.6 or earlier, after which they
can use the larger hash map size.
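In the MySQL 5.5-based releases on which MySQL Cluster NDB 7.2 is built, the online form of this statement is written with ALTER ONLINE TABLE; a sketch for a hypothetical table t1:

```shell
# Redistribute an upgraded table across the increased number of
# fragments; the database and table names are illustrative.
mysql mydb -e "ALTER ONLINE TABLE t1 REORGANIZE PARTITION;"
```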
CPU consumption peaked for several seconds after the forced termination of an NDB client application. This was due to the fact that the DBTC kernel block waited in a busy loop for any open transactions owned by the disconnected API client to be terminated, and did not pause between checks for the correct state. (Bug #14550056)
Executing ALTER TABLE concurrently with other
DML statements on the same NDB table returned Got
error -1 'Unknown error code' from NDBCLUSTER.
Receiver threads could wait unnecessarily to process incomplete signals, greatly reducing performance of ndbmtd. (Bug #14525521)
On platforms where epoll was not available, configuring
ndbmtd with multiple receiver threads
caused it to fail.
Added the --connect-retries and
--connect-delay startup options for
ndbd and ndbmtd.
--connect-retries (default 12) controls how
many times the data node tries to connect to a management server
before giving up; setting it to -1 means that the data node
never stops trying to make contact.
--connect-delay sets the number of seconds to
wait between retries; the default is 5.
(Bug #14329309, Bug #66550)
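For instance, a data node can be made to retry indefinitely with a longer pause between attempts; the connection string is illustrative:

```shell
# Keep trying the management server every 10 seconds, without limit.
ndbd --ndb-connectstring=mgmhost:1186 --connect-retries=-1 --connect-delay=10
```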
Following a failed ALTER
TABLE ... REORGANIZE PARTITION statement, a subsequent
execution of this statement after adding new data nodes caused a
failure in the
DBDIH kernel block which led
to an unplanned shutdown of the cluster.
It was possible in some cases for two transactions to try to
drop tables at the same time. If the master node failed while
one of these operations was still pending, this could lead
either to additional node failures (and cluster shutdown) or to
new dictionary operations being blocked. This issue is addressed
by ensuring that the master will reject requests to start or
stop a transaction while there are outstanding dictionary
takeover requests. In addition, table-drop operations now
correctly signal when complete; previously, node takeovers
could not be confirmed while such
operations were still marked as pending completion.
The DBSPJ kernel block had no information
about which tables or indexes actually existed, or which had
been modified or dropped, since execution of a given query
began. As a result, DBSPJ might submit dictionary
requests for nonexistent tables or versions of tables, which
could cause a crash.
This fix introduces a simplified dictionary into the
DBSPJ kernel block such that
DBSPJ can now check reliably for the
existence of a particular table or version of a table on which
it is about to request an operation.
When using ndbmtd and performing joins, data
nodes could fail where ndbmtd processes were
configured to use a large number of local query handler threads
(as set in the cluster
configuration), the tables accessed by the join had a
large number of partitions, or both.
(Bug #13799800, Bug #14143553)
Disk Data: Concurrent DML and DDL operations against the same NDB table could cause mysqld to crash. (Bug #14577463)
When the value of
ndb_log_apply_status was set to
1, it was theoretically possible for the
server_id column not to be propagated
as expected.
Transactions originating on a replication master are applied on
slaves as if using AO_IgnoreError, but
transactions replayed from a binary log were not. Now
transactions being replayed from a log are handled in the same
way as those coming from a “live” replication
master. See NdbOperation::AbortOption Type, for more
information.