MySQL Cluster NDB 7.1.25 was withdrawn shortly after release, due to a problem with primary keys and tables with very many rows that was introduced in this release (Bug #16023068, Bug #67928). Users should upgrade to MySQL Cluster NDB 7.1.26, which fixes this issue.
MySQL Cluster NDB 7.1.25 is a new release of MySQL Cluster, incorporating new features in the NDBCLUSTER storage engine and fixing recently discovered bugs in previous MySQL Cluster NDB 7.1 releases.
Obtaining MySQL Cluster NDB 7.1. The latest MySQL Cluster NDB 7.1 binaries for supported platforms can be obtained from http://dev.mysql.com/downloads/cluster/. Source code for the latest MySQL Cluster NDB 7.1 release can be obtained from the same location. You can also access the MySQL Cluster NDB 7.1 development source tree at https://code.launchpad.net/~mysql/mysql-server/mysql-cluster-7.1.
This release also incorporates all bugfixes and changes made in previous MySQL Cluster releases, as well as all bugfixes and feature changes which were added in mainline MySQL 5.1 through MySQL 5.1.66 (see Changes in MySQL 5.1.66 (2012-09-28)).
Functionality Added or Changed
Added 3 new columns to the transporters table in the ndbinfo database. These columns, which include bytes_received, help to provide an overview of data transfer across the transporter links in a MySQL Cluster. This information can be useful in verifying system balance, partitioning, and front-end server load balancing; it may also be of help when diagnosing network problems arising from link saturation, hardware faults, or other causes.
Data node logs now provide tracking information about arbitrations, including which nodes have assumed the arbitrator role and at what times. (Bug #11761263, Bug #53736)
Bugs Fixed
A slow filesystem during local checkpointing could exert undue pressure on DBDIH kernel block file page buffers, which in turn could lead to a data node crash when these were exhausted. This fix limits the number of table definition updates that DBDIH can issue at one time.
The management server process, when started with certain options, could sometimes hang during shutdown.
The output from ndb_config --configinfo now contains the same information as that from ndb_config --xml, including explicit indicators for parameters that do not require restarting a data node with --initial to take effect. In addition, ndb_config indicated incorrectly that a certain data node configuration parameter requires an initial node restart to take effect, when in fact it does not; this error was also present in the MySQL Cluster documentation, where it has likewise been corrected.
CPU consumption peaked several seconds after the forced termination of an NDB client application because the DBTC kernel block waited in a busy loop for any open transactions owned by the disconnected API client to be terminated, without pausing between checks for the correct state. (Bug #14550056)
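The fix amounts to yielding between state checks rather than spinning. The following Python sketch is illustrative only (wait_for_transactions_closed and is_closed are hypothetical names, not part of the NDB API); it shows the pattern of sleeping between polls so that waiting does not saturate a CPU:

```python
import time

def wait_for_transactions_closed(is_closed, poll_interval=0.01, timeout=5.0):
    """Poll a condition without spinning: sleep between checks so the
    CPU is not saturated while waiting (the behavior the fix restores)."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if is_closed():
            return True
        time.sleep(poll_interval)  # yield instead of busy-looping
    return False

# Example: the condition becomes true on the third check.
checks = {"n": 0}
def is_closed():
    checks["n"] += 1
    return checks["n"] >= 3

print(wait_for_transactions_closed(is_closed))  # True
```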
Executing ALTER TABLE concurrently with other DML statements on the same NDB table returned Got error -1 'Unknown error code' from NDBCLUSTER.
Receiver threads could wait unnecessarily to process incomplete signals, greatly reducing performance of ndbmtd. (Bug #14525521)
On platforms where epoll was not available, configuring ndbmtd with multiple receiver threads caused it to fail.
Added the --connect-retries and --connect-delay options for ndbd and ndbmtd. --connect-retries (default 12) controls how many times the data node tries to connect to a management server before giving up; setting it to -1 means that the data node never stops trying to make contact. --connect-delay sets the number of seconds to wait between retries; the default is 5. (Bug #14329309, Bug #66550)
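The retry behavior described above can be modeled as follows (a simplified Python sketch, not the actual ndbd implementation; connect_with_retries and try_connect are illustrative names):

```python
import time

def connect_with_retries(try_connect, retries=12, delay=5.0):
    """Model of the data node's connect behavior: attempt to reach the
    management server up to `retries` times, sleeping `delay` seconds
    between attempts; retries == -1 means never give up."""
    attempt = 0
    while True:
        if try_connect():
            return True
        attempt += 1
        if retries != -1 and attempt >= retries:
            return False  # gave up after the configured number of tries
        time.sleep(delay)

# Example: a connection that succeeds on the third attempt.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    return attempts["n"] >= 3

print(connect_with_retries(flaky, retries=12, delay=0))  # True
```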
Following a failed ALTER TABLE ... REORGANIZE PARTITION statement, a subsequent execution of this statement after adding new data nodes caused a failure in the DBDIH kernel block which led to an unplanned shutdown of the cluster.
It was possible in some cases for two transactions to try to
drop tables at the same time. If the master node failed while
one of these operations was still pending, this could lead
either to additional node failures (and cluster shutdown) or to
new dictionary operations being blocked. This issue is addressed
by ensuring that the master will reject requests to start or
stop a transaction while there are outstanding dictionary
takeover requests. In addition, table-drop operations now
correctly signal when complete; previously, the kernel block handling the takeover could not confirm node takeovers while such operations were still marked as pending completion.
Previously, the DBSPJ kernel block had no information about which tables or indexes actually existed, or which had been modified or dropped, since execution of a given query began; as a result, DBSPJ might submit dictionary requests for nonexistent tables or versions of tables, which could cause a crash. This fix introduces a simplified dictionary into the DBSPJ kernel block, such that DBSPJ can now check reliably for the existence of a particular table or version of a table on which it is about to request an operation.
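The idea behind the fix can be illustrated with a small lookup structure mapping table names to schema versions, consulted before an operation is issued (a schematic Python model only; the real DBSPJ dictionary is internal to the NDB kernel):

```python
class SimpleDictionary:
    """Minimal model of a dictionary tracking which tables exist and
    at which schema version, so that a request against a dropped or
    altered table can be rejected up front rather than failing later."""

    def __init__(self):
        self._tables = {}  # table name -> current schema version

    def on_create(self, name, version=1):
        self._tables[name] = version

    def on_alter(self, name):
        self._tables[name] += 1  # altering bumps the schema version

    def on_drop(self, name):
        self._tables.pop(name, None)

    def can_request(self, name, version):
        """True only if the table exists at the expected version."""
        return self._tables.get(name) == version

d = SimpleDictionary()
d.on_create("t1")
print(d.can_request("t1", 1))  # True: table exists at this version
d.on_alter("t1")
print(d.can_request("t1", 1))  # False: the requested version is stale
d.on_drop("t1")
print(d.can_request("t1", 2))  # False: the table no longer exists
```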
When using ndbmtd and performing joins, data nodes could fail where the ndbmtd processes were configured to use a large number of local query handler threads (as set in the data node configuration), the tables accessed by the join had a large number of partitions, or both. (Bug #13799800, Bug #14143553)
When the value of ndb_log_apply_status was set to 1, it was theoretically possible for the server_id column not to be propagated.
Transactions originating on a replication master are applied on slaves in a way that ignores certain errors, but transactions replayed from a binary log were not handled in the same way. Now transactions being replayed from a log are handled in the same way as those coming from a “live” replication master. See The NdbOperation::AbortOption Type, for more information.
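As a schematic illustration of abort-on-error versus ignore-error application of a batch of operations (a Python model only; it does not use the actual NDB API, where NdbOperation::AbortOption controls this behavior):

```python
def apply_operations(ops, ignore_errors):
    """Apply a list of (name, callable) operations in order.
    With ignore_errors=True, a failing operation is recorded and
    skipped so the rest still apply; with ignore_errors=False, the
    first failure aborts the remainder of the batch."""
    applied, errors = [], []
    for name, op in ops:
        try:
            op()
            applied.append(name)
        except ValueError as exc:
            errors.append((name, str(exc)))
            if not ignore_errors:
                break  # abort on first error
    return applied, errors

# A batch in which the second operation fails:
batch = [("a", lambda: None),
         ("b", lambda: (_ for _ in ()).throw(ValueError("dup key"))),
         ("c", lambda: None)]
print(apply_operations(batch, ignore_errors=True))   # (['a', 'c'], [('b', 'dup key')])
print(apply_operations(batch, ignore_errors=False))  # (['a'], [('b', 'dup key')])
```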