MySQL Cluster NDB 7.1.25 was withdrawn shortly after release, due to a problem with primary keys and tables with very many rows that was introduced in this release (Bug #16023068, Bug #67928). Users should upgrade to MySQL Cluster NDB 7.1.26, which fixes this issue.
MySQL Cluster NDB 7.1.25 is a new release of MySQL Cluster, incorporating new features in the NDB storage engine and fixing recently discovered bugs in previous MySQL Cluster NDB 7.1 releases.
Obtaining MySQL Cluster NDB 7.1. The latest MySQL Cluster NDB 7.1 binaries for supported platforms can be obtained from http://dev.mysql.com/downloads/cluster/. Source code for the latest MySQL Cluster NDB 7.1 release can be obtained from the same location. You can also access the MySQL Cluster NDB 7.1 development source tree at https://code.launchpad.net/~mysql/mysql-server/mysql-cluster-7.1.
This release also incorporates all bugfixes and changes made in previous MySQL Cluster releases, as well as all bugfixes and feature changes which were added in mainline MySQL 5.1 through MySQL 5.1.66 (see Changes in MySQL 5.1.66 (2012-09-28)).
Added 3 new columns to the transporters table in the ndbinfo information database. The remote_address, bytes_sent, and bytes_received columns help to provide an overview of data transfer across the transporter links in a MySQL Cluster. This information can be useful in verifying system balance, partitioning, and front-end server load balancing; it may also be of help when diagnosing network problems arising from link saturation, hardware faults, or other causes. (Bug #14685458)
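For example, the per-link byte counters can be read with a simple query against the ndbinfo database; this is a minimal sketch, assuming the mysql client is connected to an SQL node attached to the cluster:

mysql> SELECT node_id, remote_node_id, bytes_sent, bytes_received FROM ndbinfo.transporters;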
Data node logs now provide tracking information about arbitrations, including which nodes have assumed the arbitrator role and at what times. (Bug #11761263, Bug #53736)
A slow filesystem during local checkpointing could exert undue pressure on DBDIH kernel block file page buffers, which in turn could lead to a data node crash when these were exhausted. This fix limits the number of table definition updates that DBDIH can issue concurrently. (Bug #14828998)
The management server process, when started with --config-cache=FALSE, could sometimes hang during shutdown. (Bug #14730537)
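As a sketch of the affected invocation (the configuration file path shown is hypothetical), the management server is started without the configuration cache like this:

shell> ndb_mgmd -f /path/to/config.ini --config-cache=FALSE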
The output from ndb_config --configinfo now contains the same information as that from ndb_config --xml, including explicit indicators for parameters that do not require restarting a data node with --initial to take effect. In addition, ndb_config indicated incorrectly that the LogLevelCheckpoint data node configuration parameter requires an initial node restart to take effect, when in fact it does not; this error was also present in the MySQL Cluster documentation, where it has also been corrected. (Bug #14671934)
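The two output formats can be compared directly from the shell; the XML form is normally requested together with --configinfo:

shell> ndb_config --configinfo
shell> ndb_config --configinfo --xml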
ALTER TABLE, when run concurrently with other DML statements on the same NDB table, returned Got error -1 'Unknown error code' from NDBCLUSTER. (Bug #14578595)
CPU consumption peaked for several seconds following the forced termination of an NDB client application, because the DBTC kernel block waited in a busy loop for any open transactions owned by the disconnected API client to be terminated, without yielding between checks for the correct state. (Bug #14550056)
Receiver threads could wait unnecessarily to process incomplete signals, greatly reducing performance of ndbmtd. (Bug #14525521)
On platforms where epoll was not available, setting multiple receiver threads with the ThreadConfig parameter caused ndbmtd to fail. (Bug #14524939)
Added the --connect-retries and --connect-delay startup options for ndbd and ndbmtd. --connect-retries (default 12) controls how many times the data node tries to connect to a management server before giving up; setting it to -1 means that the data node never stops trying to make contact. --connect-delay sets the number of seconds to wait between retries; the default is 5. (Bug #14329309, Bug #66550)
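A sketch of a data node startup using both new options (the connection string shown is hypothetical):

shell> ndbd --ndb-connectstring=mgmhost:1186 --connect-retries=-1 --connect-delay=5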
Following a failed ALTER TABLE ... REORGANIZE PARTITION statement, a subsequent execution of this statement after adding new data nodes caused a failure in the DBDIH kernel block which led to an unplanned shutdown of the cluster. DUMP code 7019 was added as part of this fix. It can be used to obtain diagnostic information relating to a failed data node. See DUMP 7019, for more information. (Bug #14220269)
References: See also: Bug #18550318.
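As a sketch of the scenario just described (database and table names are hypothetical): once new data nodes have been started and have joined the cluster, table data is redistributed from an SQL node, and DUMP 7019 can then be issued from the management client if a data node fails during the operation; consult the DUMP 7019 documentation for any required node ID argument.

mysql> ALTER ONLINE TABLE test.t1 REORGANIZE PARTITION;
shell> ndb_mgm -e "ALL DUMP 7019"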
It was possible in some cases for two transactions to try to drop tables at the same time. If the master node failed while one of these operations was still pending, this could lead either to additional node failures (and cluster shutdown) or to new dictionary operations being blocked. This issue is addressed by ensuring that the master will reject requests to start or stop a transaction while there are outstanding dictionary takeover requests. In addition, table-drop operations now correctly signal when complete, as the DBDICT kernel block could not confirm node takeovers while such operations were still marked as pending completion. (Bug #14190114)
The DBSPJ kernel block had no information about which tables or indexes actually existed, or which had been modified or dropped, since execution of a given query began. Thus, DBSPJ might submit dictionary requests for nonexistent tables or versions of tables, which could cause a crash. This fix introduces a simplified dictionary into the DBSPJ kernel block, such that DBSPJ can now check reliably for the existence of a particular table or version of a table on which it is about to request an operation. (Bug #14103195)
Previously, it was possible to store a maximum of 46137488 rows in a single MySQL Cluster partition. This limitation has now been removed. (Bug #13844405, Bug #14000373)
References: See also: Bug #13436216.
When using ndbmtd and performing joins, data nodes could fail where ndbmtd processes were configured to use a large number of local query handler threads (as set by the ThreadConfig configuration parameter), the tables accessed by the join had a large number of partitions, or both. (Bug #13799800, Bug #14143553)
Cluster Replication: When the value of ndb_log_apply_status was set to 1, it was theoretically possible for the server_id column not to be propagated correctly. (Bug #14772503)
Cluster Replication: Transactions originating on a replication master are applied on slaves as if using AO_AbortError, but transactions replayed from a binary log were not handled this way. Now transactions being replayed from a log are handled in the same way as those coming from a “live” replication master. See The NdbOperation::AbortOption Type, for more information. (Bug #14615095)