MySQL Cluster NDB 7.1.8 is a new release of MySQL Cluster,
incorporating new features in the
NDB storage engine and fixing
recently discovered bugs in MySQL Cluster NDB 7.1.7 and previous
MySQL Cluster releases.
Obtaining MySQL Cluster NDB 7.1. The latest MySQL Cluster NDB 7.1 binaries for supported platforms can be obtained from http://dev.mysql.com/downloads/cluster/. Source code for the latest MySQL Cluster NDB 7.1 release can be obtained from the same location. You can also access the MySQL Cluster NDB 7.1 development source tree at https://code.launchpad.net/~mysql/mysql-server/mysql-cluster-7.1.
This release also incorporates all bugfixes and changes made in previous MySQL Cluster releases, as well as all bugfixes and feature changes which were added in mainline MySQL 5.1 through MySQL 5.1.47 (see Changes in MySQL 5.1.47 (2010-05-06)).
References: See also Bug #34325, Bug #11747863.
It is now possible, using the ndb_mgm management client or the MGM API, to force a data node shutdown or restart even if this would force the shutdown or restart of the entire cluster.
In the management client, this is implemented through the addition of the -f (force) option to the STOP and RESTART commands. For more information, see Commands in the MySQL Cluster Management Client.
The MGM API function ndb_mgm_get_version(), which was previously internal, has now been moved to the public API. This function can be used to get engine and other version information from the management server.
References: See also Bug #51273.
At startup, an ndbd or ndbmtd process creates directories for its file system without checking to see whether they already exist. Portability code added in MySQL Cluster NDB 7.0.18 and MySQL Cluster NDB 7.1.7 did not account for this fact, printing a spurious error message when a directory to be created already existed. This unneeded printout has been removed. (Bug #57087)
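The corrected behavior amounts to treating an already-existing directory as success rather than an error. A minimal sketch in Python (the actual data node code is C++; this only illustrates the pattern):

```python
import os

def ensure_fs_dir(path):
    """Create a file-system directory, silently accepting one that
    already exists; only a genuine failure propagates."""
    try:
        os.mkdir(path)
    except FileExistsError:
        pass  # already present from a previous start: not an error
```

Calling ensure_fs_dir() twice with the same path succeeds quietly both times, which is the behavior the fix restores.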
A data node can be shut down having completed and synchronized a given global checkpoint index (GCI) x, while having written a great many log records belonging to the next GCI x + 1 as part of normal operations. However, when GCI x + 1 is subsequently started, completed, and synchronized, the log records written before the shutdown must not be read. To make sure that this does not happen, the REDO log reader finds the last GCI to restore, scans forward from that point, and invalidates any log records that were not (and should never be) used.
This issue occurred because the scan stopped as soon as it encountered an empty page. This was problematic because the REDO log is divided into several files; thus, there could be log records at the beginning of the next file even when the end of the previous file was empty. These log records were never invalidated; following a start or restart, they could be reused, leading to a corrupt REDO log. (Bug #56961)
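The fixed scan can be sketched as follows. The data model here (files as lists of pages, None for an empty page) is a deliberate simplification of the on-disk REDO log format:

```python
def invalidate_redo_tail(files, start_file, start_page):
    """Erase all log records from the restore point onward.

    The bug: the scan stopped at the first empty page. Because the
    REDO log spans several files, stale records at the start of the
    next file were then never invalidated. The fix is to keep
    scanning across file boundaries rather than stopping early.
    """
    erased = 0
    for fi in range(start_file, len(files)):
        first = start_page if fi == start_file else 0
        for pi in range(first, len(files[fi])):
            page = files[fi][pi]
            if page is None:
                continue          # empty page: keep going, do not stop
            erased += len(page)
            files[fi][pi] = None  # invalidate so records cannot be reused
    return erased
```

With files = [[['r1'], None], [['r2', 'r3']]], the stale records 'r2' and 'r3' at the start of the second file are now erased as well, instead of surviving past the empty page.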
An error in program flow in the data node startup code could result in data node shutdown routines being called multiple times.
Under certain rare conditions, attempting to start more than one
ndb_mgmd process simultaneously using the
--reload option caused a race
condition such that none of the ndb_mgmd
processes could start.
During DROP TABLE operations distributed among several SQL nodes attached to a MySQL Cluster, the LOCK_OPEN lock normally protecting mysqld's internal table list is released so that other queries or DML statements are not blocked. However, to make sure that other DDL is not executed simultaneously, a global schema lock (implemented as a row-level lock by NDB) is used, such that all operations that can modify the state of the mysqld internal table list also need to acquire this global schema lock. The SHOW TABLE STATUS statement did not acquire this lock.
In certain cases, DROP DATABASE could sometimes leave behind a cached table object, which caused problems with subsequent DDL operations.
Memory pages used for DataMemory that were assigned to ordered indexes were never freed, even after any rows belonging to the corresponding indexes had been deleted.
MySQL Cluster stores, for each row in each NDB table, a Global Checkpoint Index (GCI), which identifies the last committed transaction that modified the row. As such, a GCI can be thought of as a coarse-grained row version.
Due to changes in the format used by NDB to store local checkpoints (LCPs) in MySQL Cluster NDB 6.3.11, it could happen that, following cluster shutdown and subsequent recovery, the GCI values for some rows were changed unnecessarily; this could possibly, over the course of many node or system restarts (or both), lead to an inconsistent database.
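The GCI-as-row-version idea can be illustrated with a toy model (purely illustrative; this is not NDB's actual storage layout):

```python
class ToyTable:
    """Each committed transaction carries a GCI; every row it touches
    is stamped with that GCI, acting as a coarse-grained row version."""

    def __init__(self):
        self.rows = {}  # primary key -> (value, last_committed_gci)

    def commit(self, gci, changes):
        for pk, value in changes.items():
            self.rows[pk] = (value, gci)

t = ToyTable()
t.commit(100, {1: "a", 2: "b"})
t.commit(101, {2: "c"})   # only row 2 advances to the newer version
```

Row 1 keeps version 100 while row 2 moves to 101, which is why restart code that rewrites GCI values unnecessarily can make otherwise-identical replicas diverge over many restarts.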
When multiple SQL nodes were connected to the cluster and one of them stopped in the middle of a DDL operation, the mysqld process issuing the DDL timed out with the error "distributing tbl_name timed out".
An ALTER TABLE ... ADD COLUMN operation that changed the table schema such that the number of 32-bit words used for the bitmask allocated to each DML operation increased during a transaction, where DML performed prior to the DDL was followed by either another DML operation or, if using replication, a commit, led to data node failure.
This was because the data node did not take into account that the bitmask for the before-image was smaller than the current bitmask, which caused the node to crash. (Bug #56524)
References: This bug is a regression of Bug #35208.
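The essence of the fix is to tolerate a before-image bitmask that has fewer 32-bit words than the current one by treating the missing high-order words as zero. A sketch under that assumption (function and variable names are hypothetical):

```python
def combine_bitmasks(before_words, current_words):
    """OR together two column bitmasks whose word counts may differ.

    After ALTER TABLE ... ADD COLUMN mid-transaction, the
    before-image mask recorded earlier can be shorter than the
    current mask; padding it with zero words avoids reading past
    its end (the cause of the crash).
    """
    n = max(len(before_words), len(current_words))
    before = before_words + [0] * (n - len(before_words))
    current = current_words + [0] * (n - len(current_words))
    return [b | c for b, c in zip(before, current)]
```

Here a one-word before-image combines safely with a two-word current mask instead of triggering an out-of-bounds read.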
On Windows, a data node refused to start in some cases unless the ndbd.exe executable was invoked using an absolute rather than a relative path. (Bug #56257)
A text file containing old MySQL Cluster changelog information was no longer being maintained, and so has been removed from the tree.
The failure of a data node during some scans could cause other data nodes to fail. (Bug #54945)
Exhausting the number of available commit-ack markers led to a data node crash.
When running a SELECT on an NDB table with TEXT columns, memory was allocated for the columns but was not freed until the end of the SELECT. This could cause problems with excessive memory usage when dumping (using, for example, mysqldump) tables with such columns that had many rows, large column values, or both.
References: See also Bug #56488, Bug #50310.
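The fix corresponds to releasing each row's column buffers as soon as the row has been processed, instead of holding them until the statement ends. A schematic model (the fetch/emit interface here is invented for illustration):

```python
def stream_rows(fetch_row, emit):
    """Process rows one at a time, freeing each row's buffer
    immediately after use.

    Returns the peak number of row buffers held at once; with
    per-row freeing this stays at 1 no matter how many rows are
    dumped, instead of growing with the result set.
    """
    live = peak = 0
    while (row := fetch_row()) is not None:
        live += 1
        peak = max(peak, live)
        emit(row)
        live -= 1  # released right after use, not at end of SELECT
    return peak
```

For a five-row result the peak stays at one buffer, which is why a mysqldump of a large TEXT-heavy table no longer accumulates memory across the whole SELECT.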
When an SQL node starts, as part of setting up replication, it subscribes to data events from all data nodes using a SUB_START_REQ (subscription start request) signal. Atomicity of this operation is implemented such that, if any of the nodes returns an error, a SUB_STOP_REQ (subscription stop request) is sent to any nodes that replied with a SUB_START_CONF (subscription start confirmation). However, if all data nodes returned an error, SUB_STOP_REQ was not sent to any of them.
This caused mysqld to hang when restarting (while waiting for a
response), and subsequent data node restarts to hang as well.
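The rollback logic described above can be sketched as follows. The node interface is hypothetical: sub_start_req() stands in for sending SUB_START_REQ and returns True on SUB_START_CONF.

```python
class Node:
    """Stand-in for a data node; ok controls CONF versus error."""
    def __init__(self, ok):
        self.ok = ok
        self.stopped = False
    def sub_start_req(self):
        return self.ok
    def sub_stop_req(self):
        self.stopped = True

def subscribe_all(nodes):
    """Send SUB_START_REQ to every data node.

    If any node fails, send SUB_STOP_REQ to every node that
    confirmed and return False, so the caller gets a definite
    answer instead of hanging. When *all* nodes fail, the confirmed
    list is simply empty, but failure is still reported (the case
    the original code missed).
    """
    confirmed = [n for n in nodes if n.sub_start_req()]
    if len(confirmed) == len(nodes):
        return True
    for n in confirmed:
        n.sub_stop_req()
    return False
```

The key point is that the all-nodes-failed path falls through to the same failure return as the partial-failure path, rather than being left unanswered.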
When a mysqld process was shut down while it was still performing updates, it was possible for the entry containing binary log information for the final epoch preceding shutdown to be omitted from the mysql.ndb_binlog_index table. This could sometimes occur even during a normal shutdown of mysqld.
Cluster Replication: The graceful shutdown of a data node in the master cluster could sometimes cause rows to be skipped when writing transactions to the binary log. Testing following an initial fix for this issue revealed an additional case where the graceful shutdown of a data node was not handled properly. The current fix addresses this case. (Bug #55641)
References: See also Bug #18538.
The MGM API functions
ndb_mgm_restart() set the error
code and message without first checking whether the management
server handle was
NULL, which could lead to
fatal errors in MGM API applications that depended on these functions.
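The corrected pattern is the standard defensive check, sketched here in Python with a dictionary standing in for the C handle structure (the MGM API itself is C; this only illustrates the guard):

```python
def set_handle_error(handle, code, msg):
    """Record an error on an MGM-style handle, but only if the
    handle exists.

    The bug: the error fields were written without first checking
    for a null handle, crashing applications that passed one in.
    """
    if handle is None:
        return False  # null handle: fail safely, do not dereference
    handle["error_code"] = code
    handle["error_msg"] = msg
    return True
```

A caller passing a null handle now gets a clean failure return rather than a crash inside the error-reporting path itself.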
The MGM API function ndb_mgm_get_version() did not set the error message before returning with an error. With this fix, it is now possible to retrieve an error number and error message after a failed call to this function, as expected of MGM API functions.