MySQL Cluster NDB 7.1.28 is a new release of MySQL Cluster,
incorporating new features in the
NDBCLUSTER storage engine and
fixing recently discovered bugs in previous MySQL Cluster NDB 7.1 releases.
Obtaining MySQL Cluster NDB 7.1. The latest MySQL Cluster NDB 7.1 binaries for supported platforms can be obtained from http://dev.mysql.com/downloads/cluster/. Source code for the latest MySQL Cluster NDB 7.1 release can be obtained from the same location. You can also access the MySQL Cluster NDB 7.1 development source tree at https://code.launchpad.net/~mysql/mysql-server/mysql-cluster-7.1.
This release also incorporates all bugfixes and changes made in previous MySQL Cluster releases, as well as all bugfixes and feature changes which were added in mainline MySQL 5.1 through MySQL 5.1.70 (see Changes in MySQL 5.1.70 (2013-06-03)).
Functionality Added or Changed
Added the ExtraSendBufferMemory parameter for
management nodes and API nodes. (Formerly, this parameter was
available only for configuring data nodes.) See the descriptions
of this parameter in the configuration sections for management
nodes and for API nodes, for more information.
Performance: In a number of cases found in various locations in the MySQL Cluster codebase, unnecessary iterations were performed; this was caused by failing to break out of a repeating control structure after a test condition had been met. This community-contributed fix removes the unneeded repetitions by supplying the missing breaks. (Bug #16904243, Bug #69392, Bug #16904338, Bug #69394, Bug #16778417, Bug #69171, Bug #16778494, Bug #69172, Bug #16798410, Bug #69207, Bug #16801489, Bug #69215, Bug #16904266, Bug #69393)
File system errors occurring during a local checkpoint (LCP) could sometimes cause an LCP to hang with no obvious cause when they were not handled correctly. Now such errors always cause the node to fail. Note that the LQH block always shuts down the node when a local checkpoint fails; the change here makes the resulting node failure occur more quickly and makes the original file system error more visible. (Bug #16961443)
The planned or unplanned shutdown of one or more data nodes
while reading table data from the
ndbinfo database caused a memory leak.
Executing DROP TABLE while
DBDIH was updating table checkpoint
information subsequent to a node failure could lead to a data
node crash.
In certain cases, when starting a new SQL node, mysqld failed with Error 1427 Api node died, when SUB_START_REQ reached node. (Bug #16840741)
Failure to use container classes specific
to NDB during node failure handling
could cause leakage of commit-ack markers, which could later
lead to resource shortages or additional node crashes.
Use of an uninitialized variable employed in connection with
error handling in the
DBLQH kernel block
could sometimes lead to a data node crash or other stability
issues for no apparent reason.
A race condition in the time between the reception of an
execNODE_FAILREP signal by the
QMGR kernel block and its reception by other kernel
blocks could lead to data node crashes during shutdown.
The CLUSTERLOG command (see
Commands in the MySQL Cluster Management Client) caused
ndb_mgm to crash on Solaris SPARC systems.
ndb_mgm treated backup IDs provided to
ABORT BACKUP commands as signed values, so
that backup IDs greater than 2^31
wrapped around to negative values. This issue also affected
out-of-range backup IDs, which wrapped around to negative values
instead of causing errors as expected in such cases. The backup
ID is now treated as an unsigned value, and
ndb_mgm now performs proper range checking
for backup ID values outside the permitted range.
(Bug #16585497, Bug #68798)
When a START BACKUP id WAIT STARTED
command was issued and id had already been used for a backup
ID, an error caused by the duplicate ID occurred as expected,
but following this, the
START BACKUP command never completed.
(Bug #16593604, Bug #68854)
When trying to specify a backup ID greater than the maximum allowed, the value was silently truncated. (Bug #16585455, Bug #68796)
The unexpected shutdown of another data node as a starting data node received its node ID caused the latter to hang in Start Phase 1. (Bug #16007980)
Creating more than 32 hash maps caused data nodes to fail.
Usually new hash maps are created only when performing
reorganization after data nodes have been added or when explicit
partitioning is used, such as when creating a table with the
MAX_ROWS option, or using
PARTITION BY KEY() PARTITIONS n.
When performing an INSERT ...
ON DUPLICATE KEY UPDATE on an
NDB table where the row to be
inserted already existed and was locked by another transaction,
the error message returned
following the timeout was Transaction already
aborted instead of the expected Lock wait
timeout exceeded.
(Bug #14065831, Bug #65130)
When using dynamic listening ports for accepting connections from API nodes, the port numbers were reported to the management server serially. This required a round trip for each API node, causing the time required for data nodes to connect to the management server to grow linearly with the number of API nodes. To correct this problem, each data node now reports all dynamic ports at once. (Bug #12593774)
For each log event retrieved using the MGM API, the log event
category was simply cast to an
enum type, which resulted
in invalid category values. Now an offset is added to the
category following the cast to ensure that the value does not
fall out of the allowed range.