This release incorporates new features in the NDB storage engine and fixes recently discovered bugs in previous MySQL Cluster NDB 7.0 releases.
Obtaining MySQL Cluster NDB 7.0. The latest MySQL Cluster NDB 7.0 binaries for supported platforms can be obtained from http://dev.mysql.com/downloads/cluster/. Source code for the latest MySQL Cluster NDB 7.0 release can be obtained from the same location. You can also access the MySQL Cluster NDB 7.0 development source tree at https://code.launchpad.net/~mysql/mysql-server/mysql-cluster-7.0.
This release also incorporates all bugfixes and changes made in previous MySQL Cluster releases, as well as all bugfixes and feature changes which were added in mainline MySQL 5.1 through MySQL 5.1.51 (see Changes in MySQL 5.1.51 (2010-09-10)).
Please refer to our bug database at http://bugs.mysql.com/ for more details about the individual bugs fixed in this version.
Important Change: The following changes have been made with regard to the TimeBetweenEpochsTimeout data node configuration parameter:
The maximum possible value for this parameter has been increased from 32000 milliseconds to 256000 milliseconds.
Setting this parameter to zero now has the effect of disabling GCP stops caused by save timeouts, commit timeouts, or both.
The current value of this parameter and a warning are written to the cluster log whenever a GCP save takes longer than 1 minute or a GCP commit takes longer than 10 seconds.
For more information, see Disk Data and GCP Stop errors. (Bug #58383)
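As an illustration, the new zero setting might appear in the [ndbd default] section of config.ini as shown below; this is a sketch of the syntax only, and the value of zero is not a recommendation:

```ini
[ndbd default]
# 0 now disables GCP stops caused by save timeouts, commit
# timeouts, or both; any value up to the new maximum of
# 256000 milliseconds is accepted
TimeBetweenEpochsTimeout=0
```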
Added the --skip-broken-objects option for ndb_restore. This option causes ndb_restore to ignore tables corrupted due to missing blob parts tables, and to continue reading from the backup file and restoring the remaining tables. (Bug #54613)
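A typical invocation might look like the following; the node ID, backup ID, and backup directory shown here are placeholders for your own values:

```shell
# Restore data from backup 1 taken on data node 1, skipping any
# tables whose blob parts tables are missing from the backup
ndb_restore --nodeid=1 --backupid=1 --restore-data \
    --backup-path=/var/lib/mysql-cluster/BACKUP/BACKUP-1 \
    --skip-broken-objects
```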
References: See also: Bug #51652.
Cluster Replication: Added the --ndb-log-apply-status server option, which causes a replication slave to apply updates to the master's mysql.ndb_apply_status table to its own ndb_apply_status table using its own server ID in place of the master's server ID. This option can be useful in circular or chain replication setups when you need to track updates to ndb_apply_status as they propagate from one MySQL Cluster to the next in the circle or chain.
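On a slave in such a setup, the option can be enabled in the my.cnf file; this sketch assumes an otherwise fully configured replication slave:

```ini
[mysqld]
ndbcluster
# Re-log updates to ndb_apply_status under this server's own ID,
# so that they can be tracked as they propagate around a circular
# or chain replication topology
ndb-log-apply-status=ON
```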
Cluster API: It is now possible to stop or restart a node even while other nodes are starting, using the MGM API ndb_mgm_stop4() or ndb_mgm_restart4() functions, respectively, with the force parameter set to 1. (Bug #58451)
References: See also: Bug #58319.
Cluster API: In some circumstances, very large BLOB read and write operations in MySQL Cluster applications can cause excessive resource usage and even exhaustion of memory. To fix this issue and to provide increased stability when performing such operations, it is now possible to set limits on the volume of BLOB data to be read or written within a given transaction in such a way that when these limits are exceeded, the current transaction implicitly executes any accumulated operations. This avoids an excessive buildup of pending data which can result in resource exhaustion in the NDB kernel. The limits on the amount of data to be read and on the amount of data to be written before this execution takes place can be configured separately. (In other words, it is now possible in MySQL Cluster to specify read batching and write batching that is specific to BLOB data.) These limits can be configured either on the NDB API level, or in the MySQL Server.
On the NDB API level, four new methods are added to the NdbTransaction object. getMaxPendingBlobReadBytes() and setMaxPendingBlobReadBytes() can be used to get and to set, respectively, the maximum amount of BLOB data to be read that accumulates before this implicit execution is triggered. getMaxPendingBlobWriteBytes() and setMaxPendingBlobWriteBytes() can be used to get and to set, respectively, the maximum volume of BLOB data to be written that accumulates before implicit execution occurs.
For the MySQL server, two new options are added. The --ndb-blob-read-batch-bytes option sets a limit on the amount of pending BLOB data to be read before triggering implicit execution, and the --ndb-blob-write-batch-bytes option controls the amount of pending BLOB data to be written. These limits can also be set using the mysqld configuration file, or read and set within the mysql client and other MySQL client applications using the corresponding server system variables. (Bug #59113)
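For example, the batch limits might be set in the mysqld configuration file as follows; the 64 KB values shown here are arbitrary placeholders, not recommendations:

```ini
[mysqld]
ndbcluster
# Implicitly execute accumulated operations once this much pending
# BLOB data has been read or written within a single transaction
ndb-blob-read-batch-bytes=65536
ndb-blob-write-batch-bytes=65536
```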
Two related problems could occur with read-committed scans made in parallel with transactions combining multiple (concurrent) operations:
When committing a multiple-operation transaction that contained concurrent insert and update operations on the same record, the commit arrived first for the insert and then for the update. If a read-committed scan arrived between these operations, it could thus read incorrect data; in addition, if the scan read variable-size data, it could cause the data node to fail.
When rolling back a multiple-operation transaction having concurrent delete and insert operations on the same record, the abort arrived first for the delete operation, and then for the insert. If a read-committed scan arrived between the delete and the insert, it could incorrectly assume that the record should not be returned (in other words, the scan treated the insert as though it had not yet been committed).
On Windows platforms, issuing a SHUTDOWN command in the ndb_mgm client caused management processes that had been started with the --nodaemon option to exit abnormally. (Bug #59437)
A row insert or update followed by a delete operation on the same row within the same transaction could in some cases lead to a buffer overflow. (Bug #59242)
References: See also: Bug #56524. This issue is a regression of: Bug #35208.
Data nodes configured with very large amounts (multiple gigabytes) of DiskPageBufferMemory failed during startup with NDB error 2334 (Job buffer congestion). (Bug #58945)
References: See also: Bug #47984.
The FAIL_REP signal, used inside the NDB kernel to declare that a node has failed, now includes the node ID of the node that detected the failure. This information can be useful in debugging. (Bug #58904)
When executing a full table scan in combination with a join, NDB failed to close the scan. (Bug #58750)
References: See also: Bug #57481.
Running EXPLAIN EXTENDED for a query that would use condition pushdown could cause mysqld to crash. (Bug #58553, Bug #11765570)
In some circumstances, an SQL trigger on an NDB table could read stale data. (Bug #58538)
During a node takeover, it was possible in some circumstances for one of the remaining nodes to send an extra transaction confirmation (LQH_TRANSCONF) signal to the DBTC kernel block, conceivably leading to a crash of the data node trying to take over as the new transaction coordinator. (Bug #58453)
A query having multiple predicates joined by OR in the WHERE clause, and which used the sort_union access method (as shown using EXPLAIN), could return duplicate rows. (Bug #58280)
Trying to drop an index while it was being used to perform scan updates caused data nodes to crash. (Bug #58277, Bug #57057)
When handling failures of multiple data nodes, an error in the construction of internal signals could cause the cluster's remaining nodes to crash. This issue was most likely to affect clusters with large numbers of data nodes. (Bug #58240)
The functions strcasecmp() and strncasecmp() were declared in ndb_global.h but never defined or used. The declarations have been removed. (Bug #58204)
The number of rows affected by a statement that used a WHERE clause having an IN condition with a value list containing a great many elements, and that deleted or updated enough rows such that NDB processed them in batches, was not computed or reported correctly. (Bug #58040)
MySQL Cluster failed to compile correctly on FreeBSD 8.1 due to misplaced #include statements. (Bug #58034)
A query using BETWEEN as part of a pushed-down WHERE condition could cause mysqld to hang or crash. (Bug #57735)
Data nodes no longer allocated all memory prior to being ready to exchange heartbeat and other messages with management nodes, as they did in NDB 6.3 and earlier versions of MySQL Cluster. This caused problems in which data nodes configured with large amounts of memory failed to show as connected, or showed as being in the wrong start phase, in the ndb_mgm client even after making their initial connections to and fetching their configuration data from the management server. With this fix, data nodes now allocate all memory as they did in earlier MySQL Cluster versions. (Bug #57568)
In some circumstances, it was possible for mysqld to begin a new multi-range read scan without having closed a previous one. This could lead to exhaustion of all scan operation objects, transaction objects, or lock objects (or some combination of these) in
NDB, causing queries to fail with such errors as Lock wait timeout exceeded or Connect failure - out of connection objects. (Bug #57481)
References: See also: Bug #58750.
A query using IS NULL on a table with a unique index created on the column always returned an empty result. (Bug #57032)
With engine_condition_pushdown enabled, a query using LIKE on an ENUM column of an NDB table failed to return any results. This issue is resolved by disabling engine_condition_pushdown when performing such queries. (Bug #53360)
When a slash character (/) was used as part of the name of an index on an NDB table, attempting to execute a TRUNCATE TABLE statement on the table failed with the error Index not found, and the table was rendered unusable. (Bug #38914)
Partitioning; Disk Data: When using multi-threaded data nodes, an NDB table created with a very large value for the MAX_ROWS option could cause ndbmtd to crash when performing a system restart, if this table was dropped and a new table with fewer partitions, but having the same table ID, was created. This was because the server attempted to examine each partition whether or not it actually existed.
This issue is the same as that reported in Bug #45154, except that the current issue is specific to ndbmtd instead of ndbd. (Bug #58638)
References: See also: Bug #45154.
Disk Data: In certain cases, a race condition could occur when DROP LOGFILE GROUP removed the logfile group while a read or write of one of the affected files was in progress, which in turn could lead to a crash of the data node. (Bug #59502)
Disk Data: A race condition could sometimes be created when DROP TABLESPACE was run concurrently with a local checkpoint; this could in turn lead to a crash of the data node. (Bug #59501)
Disk Data: Performing what should have been an online drop of a multi-column index was actually performed offline. (Bug #55618)
Disk Data: When at least one data node was not running, queries against the INFORMATION_SCHEMA.FILES table took an excessive length of time to complete because the MySQL server waited for responses from any stopped nodes to time out. Now, in such cases, MySQL does not attempt to contact nodes which are not known to be running. (Bug #54199)
Cluster Replication: When a mysqld performing replication of a MySQL Cluster that uses ndbmtd is forcibly disconnected (thus causing an API_FAIL_REQ signal to be sent), the SUMA kernel block iterates through all active subscriptions and disables them. If a given subscription has no more active users, then the subscription itself is also deactivated.
This process had no flow control, and when there were many subscriptions being deactivated (more than 512), this could cause an overflow in the short-time queue defined in the DbtupProxy kernel block.
The fix for this problem includes implementing proper flow control for this deactivation process and increasing the size of the short-time queue in DbtupProxy. (Bug #58693)
Cluster API: It was not possible to obtain the status of nodes accurately after an attempt to stop a data node using ndb_mgm_stop() failed without returning an error. (Bug #58319)
Cluster API: Attempting to read the same value (using getValue()) more than 9000 times within the same transaction caused the transaction to hang when executed. Now when more reads are performed in this way than can be accommodated in a single transaction, the call to execute() fails with a suitable error. (Bug #58110)
A NOT IN predicate with a subquery containing a HAVING clause could retrieve too many rows when the subquery itself returned NULL. (Bug #58818, Bug #11765815)
WHERE conditions of the following forms were evaluated incorrectly and could return incorrect results:
WHERE null-valued-const-expression NOT IN (subquery)
WHERE null-valued-const-expression IN (subquery) IS UNKNOWN
(Bug #58628, Bug #11765642)
WHERE conditions of the following form were evaluated incorrectly and could return incorrect results:
WHERE column IN (subquery) IS UNKNOWN
Outer joins with an empty table could produce incorrect results. (Bug #58422, Bug #11765451)
Condition pushdown optimization could push down conditions with incorrect column references. (Bug #58134, Bug #11765196)
Outer joins on a unique key could return incorrect results. (Bug #57034, Bug #11764219)