This release incorporates new features in the NDB storage engine and fixes recently discovered bugs in previous MySQL Cluster NDB 7.0 releases.
Obtaining MySQL Cluster NDB 7.0. The latest MySQL Cluster NDB 7.0 binaries for supported platforms can be obtained from http://dev.mysql.com/downloads/cluster/. Source code for the latest MySQL Cluster NDB 7.0 release can be obtained from the same location. You can also access the MySQL Cluster NDB 7.0 development source tree at https://code.launchpad.net/~mysql/mysql-server/mysql-cluster-7.0.
This release also incorporates all bugfixes and changes made in previous MySQL Cluster releases, as well as all bugfixes and feature changes which were added in mainline MySQL 5.1 through MySQL 5.1.51 (see Changes in MySQL 5.1.51 (2010-09-10)).
Please refer to our bug database at http://bugs.mysql.com/ for more details about the individual bugs fixed in this version.
Functionality Added or Changed
The following changes have been made with regard to this data node configuration parameter:
The maximum possible value for this parameter has been increased from 32000 milliseconds to 256000 milliseconds.
Setting this parameter to zero now has the effect of disabling GCP stops caused by save timeouts, commit timeouts, or both.
The current value of this parameter and a warning are written to the cluster log whenever a GCP save takes longer than 1 minute or a GCP commit takes longer than 10 seconds.
For more information, see Disk Data and GCP Stop errors. (Bug #58383)
Added a new option for ndb_restore. This option causes ndb_restore to ignore tables that are corrupted due to missing blob parts tables, and to continue reading from the backup file and restoring the remaining tables.
References: See also Bug #51652.
Added the --ndb-log-apply-status server option, which causes a replication slave to apply updates made to the master's ndb_apply_status table to its own ndb_apply_status table using its own server ID in place of the master's server ID. This option can be useful in circular or chain replication setups when you need to track updates to ndb_apply_status as they propagate from one MySQL Cluster to the next in the circle or chain.
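A minimal sketch of how such a setup might be configured on a slave, assuming the option described above is the --ndb-log-apply-status server option (that name, and the accompanying settings shown, are assumptions for illustration):

```ini
# my.cnf fragment for a replication slave (sketch only; the option name
# ndb-log-apply-status is an assumption based on the behavior described above)
[mysqld]
ndbcluster
ndb-log-apply-status=ON
# In circular or chain replication, each server must also log the updates
# it applies as a slave so that they propagate to the next server:
log-slave-updates=ON
```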
It is now possible to stop or restart a node even while other nodes are starting, using the corresponding MGM API stop and restart functions, respectively, with the force parameter set to 1.
References: See also Bug #58319.
In some circumstances, very large
BLOB read and write operations in
MySQL Cluster applications can cause excessive resource usage
and even exhaustion of memory. To fix this issue and to provide
increased stability when performing such operations, it is now
possible to set limits on the volume of
BLOB data to be read or written
within a given transaction in such a way that when these limits
are exceeded, the current transaction implicitly executes any
accumulated operations. This avoids an excessive buildup of
pending data which can result in resource exhaustion in the NDB
kernel. The limits on the amount of data to be read and on the
amount of data to be written before this execution takes place
can be configured separately. (In other words, it is now possible in MySQL Cluster to specify read batching and write batching that is specific to BLOB data.) These limits can be configured either on the NDB API level, or in the MySQL Server.
On the NDB API level, four new methods are added to the NdbTransaction class. getMaxPendingBlobReadBytes() and setMaxPendingBlobReadBytes() can be used to get and to set, respectively, the maximum amount of BLOB data to be read that accumulates before this implicit execution is triggered. Similarly, getMaxPendingBlobWriteBytes() and setMaxPendingBlobWriteBytes() can be used to get and to set, respectively, the maximum volume of BLOB data to be written that accumulates before implicit execution occurs.
For the MySQL server, two new options are added. The --ndb-blob-read-batch-bytes option sets a limit on the amount of pending BLOB data to be read before triggering implicit execution, and the --ndb-blob-write-batch-bytes option controls the amount of pending BLOB data to be written. These limits can also be set in the mysqld configuration file, or read and set within the mysql client and other MySQL client applications using the corresponding server system variables.
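As a sketch of how these limits might be read and set from the mysql client, assuming the corresponding system variables are named ndb_blob_read_batch_bytes and ndb_blob_write_batch_bytes (names inferred from the server options described above, not confirmed by this text):

```sql
-- Sketch only: variable names are assumptions matching the options above.
-- Cap pending BLOB reads at 64 KB and pending BLOB writes at 128 KB per
-- transaction; exceeding a cap causes the transaction to implicitly
-- execute its accumulated operations.
SET GLOBAL ndb_blob_read_batch_bytes  = 65536;
SET GLOBAL ndb_blob_write_batch_bytes = 131072;

-- Inspect the current limits:
SELECT @@global.ndb_blob_read_batch_bytes,
       @@global.ndb_blob_write_batch_bytes;
```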
Bugs Fixed
Two related problems could occur with read-committed scans made in parallel with transactions combining multiple (concurrent) operations:
When committing a multiple-operation transaction that contained concurrent insert and update operations on the same record, the commit arrived first for the insert and then for the update. If a read-committed scan arrived between these operations, it could thus read incorrect data; in addition, if the scan read variable-size data, it could cause the data node to fail.
When rolling back a multiple-operation transaction having concurrent delete and insert operations on the same record, the abort arrived first for the delete operation, and then for the insert. If a read-committed scan arrived between the delete and the insert, it could incorrectly assume that the record should not be returned (in other words, the scan treated the insert as though it had not yet been committed).
On Windows platforms, issuing a command in the ndb_mgm client caused management processes that had been started with the --nodaemon option to exit.
A row insert or update followed by a delete operation on the same row within the same transaction could in some cases lead to a buffer overflow. (Bug #59242)
References: See also Bug #56524. This bug was introduced by Bug #35208.
Data nodes configured with very large amounts of memory (multiple gigabytes) failed during startup with NDB error 2334 (Job buffer congestion).
References: See also Bug #47984.
The FAIL_REP signal, used inside the NDB
kernel to declare that a node has failed, now includes the node
ID of the node that detected the failure. This information can
be useful in debugging.
When executing a full table scan caused by a WHERE condition using unique_key IS NULL in combination with a join, NDB failed to close the scan.
References: See also Bug #57481.
EXPLAIN EXTENDED for a
query that would use condition pushdown could cause
mysqld to crash.
(Bug #58553, Bug #11765570)
In some circumstances, an SQL trigger on an
NDB table could read stale data.
During a node takeover, it was possible in some circumstances for one of the remaining nodes to send an extra transaction confirmation (LQH_TRANSCONF) signal to the DBTC kernel block, conceivably leading to a crash of the data node trying to take over as the new transaction coordinator.
A query having multiple predicates joined by OR in the WHERE clause, and which used the sort_union access method (as shown by EXPLAIN), could return incorrect results.
Trying to drop an index while it was being used to perform scan updates caused data nodes to crash. (Bug #58277, Bug #57057)
When handling failures of multiple data nodes, an error in the construction of internal signals could cause the cluster's remaining nodes to crash. This issue was most likely to affect clusters with large numbers of data nodes. (Bug #58240)
Declarations of strcasecmp and related functions appeared in ndb_global.h, but these functions were never defined or used there. The declarations have been removed.
The number of rows affected by a statement that used a WHERE clause having an IN condition with a value list containing a great many elements, and that deleted or updated enough rows such that NDB processed them in batches, was not computed or reported correctly.
MySQL Cluster failed to compile correctly on FreeBSD 8.1.
A query using
BETWEEN as part of a
WHERE condition could cause
mysqld to hang or crash.
Data nodes no longer allocated all memory prior to being ready to exchange heartbeat and other messages with management nodes, as in NDB 6.3 and earlier versions of MySQL Cluster. This caused problems when data nodes configured with large amounts of memory failed to show as connected or showed as being in the wrong start phase in the ndb_mgm client even after making their initial connections to and fetching their configuration data from the management server. With this fix, data nodes now allocate all memory as they did in earlier MySQL Cluster versions. (Bug #57568)
In some circumstances, it was possible for
mysqld to begin a new multi-range read scan
without having closed a previous one. This could lead to
exhaustion of all scan operation objects, transaction objects,
or lock objects (or some combination of these) in
NDB, causing queries to fail with such errors as Lock wait timeout exceeded or Connect failure - out of connection objects.
References: See also Bug #58750.
In certain cases, a query against a table with a unique index returned an empty result.
With engine_condition_pushdown enabled, a query using LIKE on an ENUM column of an NDB table failed to return any results. This issue is resolved by disabling engine_condition_pushdown when performing such queries.
When a slash character (/) was used as part of the name of an index on an NDB table, attempting to execute a TRUNCATE TABLE statement on the table failed with the error Index not found, and the table was left in an unusable state.
Partitioning; Disk Data:
When using multi-threaded data nodes, an NDB table created with a very large value for the MAX_ROWS option could cause ndbmtd to crash when performing a system restart, if this table was dropped and a new table with fewer partitions, but having the same table ID, was created. This was because the server attempted to examine each partition whether or not it actually existed.
This issue is the same as that reported in Bug #45154, except that the current issue is specific to ndbmtd instead of ndbd. (Bug #58638)
In certain cases, a race condition could occur when DROP LOGFILE GROUP removed the logfile group while a read or write of one of the affected files was in progress, which in turn could lead to a crash of the data node.
A race condition could sometimes be created when
DROP TABLESPACE was run
concurrently with a local checkpoint; this could in turn lead to
a crash of the data node.
Disk Data: Performing what should have been an online drop of a multi-column index was actually performed offline. (Bug #55618)
When at least one data node was not running, queries against the
INFORMATION_SCHEMA.FILES table took
an excessive length of time to complete because the MySQL server
waited for responses from any stopped nodes to time out. Now, in
such cases, MySQL does not attempt to contact nodes which are
not known to be running.
When a mysqld performing replication of a MySQL Cluster that uses ndbmtd is forcibly disconnected (thus causing an API_FAILREQ signal to be sent), the SUMA kernel block iterates through all active subscriptions and disables them. If a given subscription has no more active users, then this subscription is also deactivated. This process had no flow control, and when there were many subscriptions being deactivated (more than 512), this could cause an overflow in the short-time queue. The fix for this problem includes implementing proper flow control for this deactivation process and increasing the size of the short-time queue.
It was not possible to obtain the status of nodes accurately
after an attempt to stop a data node using
ndb_mgm_stop() failed without
returning an error.
Attempting to read the same value more than 9000 times within the same transaction caused the transaction to hang when executed. Now, when more reads are performed in this way than can be accommodated in a single transaction, the transaction fails with a suitable error.
A NOT IN predicate with a subquery containing a HAVING clause could retrieve too many rows when the subquery itself returned NULL.
(Bug #58818, Bug #11765815)
WHERE conditions of the following forms were
evaluated incorrectly and could return incorrect results:
null-valued-const-expression NOT IN (subquery) IS UNKNOWN
(Bug #58628, Bug #11765642)
WHERE conditions of the following form were
evaluated incorrectly and could return incorrect results:
(subquery) IS UNKNOWN
Outer joins with an empty table could produce incorrect results. (Bug #58422, Bug #11765451)
Condition pushdown optimization could push down conditions with incorrect column references. (Bug #58134, Bug #11765196)
Outer joins on a unique key could return incorrect results. (Bug #57034, Bug #11764219)