MySQL Cluster NDB 7.0.9 and 7.0.9a were pulled shortly after being released due to Bug #48531 and Bug #48651. Users seeking to upgrade from a previous MySQL Cluster NDB 7.0 release should instead use MySQL Cluster NDB 7.0.9b, which contains fixes for these critical bugs, in addition to all bugfixes and improvements made in MySQL Cluster NDB 7.0.9.
This release incorporates new features in the
NDB storage engine and fixes
recently discovered bugs in MySQL Cluster NDB 7.0.8a.
This release also incorporates all bugfixes and changes made in previous MySQL Cluster releases, as well as all bugfixes and feature changes which were added in mainline MySQL 5.1 through MySQL 5.1.39 (see Changes in MySQL 5.1.39 (2009-09-04)).
Please refer to our bug database at http://bugs.mysql.com/ for more details about the individual bugs fixed in this version.
Functionality Added or Changed
Performance: Significant improvements in redo log handling and other file system operations can yield a considerable reduction in the time required for restarts. While actual restart times observed in a production setting will naturally vary according to database size, hardware, and other conditions, our own preliminary testing shows that these optimizations can yield startup times that are faster than those typical of previous MySQL Cluster releases by a factor of 50 or more.
The --with-ndb-port-base option for
configure did not function correctly, and has
been deprecated. Attempting to use this option produces the
warning Ignoring deprecated option --with-ndb-port-base.
Beginning with MySQL Cluster NDB 7.1.0, the deprecation warning
itself is removed, and the
option is simply handled as an unknown and invalid option if you
try to use it.
References: See also Bug #38502.
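As an illustration (a sketch, not taken from the original report; the
port number shown is arbitrary), invoking configure with the
deprecated option produced only the warning rather than an error:

shell> ./configure --with-ndb-port-base=2202
Ignoring deprecated option --with-ndb-port-base

Bugs Fixed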
After upgrading a MySQL Cluster containing tables having unique indexes from an NDB 6.3 release to an NDB 7.0 release, attempts to create new unique indexes failed with inconsistent trigger errors (error code 293).
For more information (including a workaround for previous MySQL Cluster NDB 7.0 releases), see Upgrade and downgrade compatibility: MySQL Cluster NDB 7.x. (Bug #48416)
When a data node failed to start due to an inability to recreate or drop objects during schema restoration (for example, because insufficient memory was available to the data node process due to issues on the host machine not directly related to MySQL Cluster), the reason for the failure was not provided. Now in such cases, a more informative error message is logged. (Bug #48232)
A table that was created following an upgrade from a MySQL
Cluster NDB 6.3 release to MySQL Cluster NDB 7.0 (starting with
version 6.4.0) or later was dropped by a system restart. This
was due to a change in the format of schema files, together with
the fact that upgrading NDB 6.3 schema files to the
NDB 7.0 format failed to change the version
number contained in the file; this meant that a system restart
re-ran the upgrade routine, which interpreted the newly created
table as an uncommitted table (which by definition ought not to
be saved). Now the version number of upgraded
NDB 6.3 schema files is set correctly during
the upgrade process.
In certain cases, performing very large inserts on
NDB tables when using
ndbmtd caused the memory allocations for
ordered or unique indexes (or both) to be exceeded. This could
cause aborted transactions and possibly lead to data node failures.
References: See also Bug #48113.
For UPDATE IGNORE statements, batching of updates is now
disabled. This is because such statements failed when batching
of updates was employed if any updates violated a unique
constraint, due to the fact that a unique constraint violation
could not be handled without aborting the transaction.
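The following minimal sketch (a hypothetical table and values, not
taken from the bug report) shows the kind of statement affected:

mysql> CREATE TABLE t (
    ->   id INT NOT NULL PRIMARY KEY,
    ->   u  INT NOT NULL,
    ->   UNIQUE KEY (u)
    -> ) ENGINE = NDB;
mysql> INSERT INTO t VALUES (1, 10), (2, 20);
mysql> UPDATE IGNORE t SET u = 20 WHERE id = 1;

Setting u = 20 for the row with id = 1 collides with the unique key
on the row with id = 2; under IGNORE, the violating update must be
skipped rather than aborting the entire transaction, which is why
batching cannot be used.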
Starting a data node with a very large amount of data
(approximately 90G or more) could lead to a crash of the node due
to job buffer congestion.
In some cases, ndbmtd could allocate more
space for the undo buffer than was actually available, leading
to a failure in the
LGMAN kernel block and
subsequent failure of the data node.
An UPDATE statement against an NDB table that matched rows but did
not actually change them could report the number of rows matched
incorrectly. For example, consider the table created and populated
using these statements:

CREATE TABLE t1 (
    c1 INT NOT NULL,
    c2 INT NOT NULL,
    PRIMARY KEY(c1),
    KEY(c2)
) ENGINE = NDB;

INSERT INTO t1 VALUES(1, 1);

The following UPDATE statements, even though they did not change
any rows, each still matched a row, but this was reported
incorrectly in both cases, as shown here:

mysql> UPDATE t1 SET c2 = 1 WHERE c1 = 1;
Query OK, 0 rows affected (0.00 sec)
Rows matched: 0  Changed: 0  Warnings: 0

mysql> UPDATE t1 SET c1 = 1 WHERE c2 = 1;
Query OK, 0 rows affected (0.00 sec)
Rows matched: 0  Changed: 0  Warnings: 0
Now in such cases, the number of rows matched is correct. (In
the case of each of the example
UPDATE statements just shown,
this is displayed as Rows matched: 1, as it should be.)
This issue could affect UPDATE
statements involving any indexed columns in
NDB tables, regardless of the type
of index (including
PRIMARY KEY) or the number
of columns covered by the index.
On Solaris, shutting down a management node failed when issuing the command to do so from a client connected to a different management node. (Bug #47948)
After changing the value of a configuration parameter to
4294967039 (the maximum) in the config.ini
file and reloading the cluster configuration, the new value was
displayed in the update information written into the cluster log
as a signed number instead of unsigned.
References: See also Bug #47932.
On Solaris 10 for SPARC, ndb_mgmd failed to
read the config.ini file when the
DiskSyncSize configuration
parameter, whose permitted range of values is
32768 to 4294967039, was set equal to 4294967040 (which is also
the value of the internal constant
MAX_INT_RNIL); nor could
DiskSyncSize be set
successfully to any value higher than the minimum.
References: See also Bug #47944.
Setting FragmentLogFileSize to a
value greater than 256 MB led to errors when trying to read the
redo log file.
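For reference, a sketch of the kind of setting involved (the 512M
figure is an arbitrary value above the 256 MB threshold):

[NDBD DEFAULT]
# Redo log files larger than 256 MB formerly caused errors when read back.
FragmentLogFileSize=512M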
SHOW CREATE TABLE did not display the
AUTO_INCREMENT value for
NDB tables having
AUTO_INCREMENT columns.
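A minimal sketch (hypothetical table) of the behavior now expected:

mysql> CREATE TABLE seq_t (
    ->   id INT NOT NULL AUTO_INCREMENT PRIMARY KEY
    -> ) ENGINE = NDB;
mysql> INSERT INTO seq_t VALUES (NULL), (NULL), (NULL);
mysql> SHOW CREATE TABLE seq_t\G

The table options in the output should now include
AUTO_INCREMENT=4; previously this clause was omitted for
NDB tables.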
An optimization in MySQL Cluster NDB 7.0 causes the
DBDICT kernel block to copy several tables at
a time when synchronizing the data dictionary to a newly started
node; previously, this was done one table at a time. However, if
the NDB tables were sufficiently large and
numerous, the internal buffer for storing them could fill up,
causing a data node crash.
In testing, it was found that having 100
tables with 128 columns each was enough to trigger this issue.
Under some circumstances, when a scan encountered an error early
in processing by the
DBTC kernel block (see
The DBTC Block), a node
could crash as a result. Such errors could be caused by
applications sending incorrect data, or, more rarely, by a
DROP TABLE operation executed in
parallel with a scan.
When starting a node and synchronizing tables, memory pages were allocated even for empty fragments. In certain situations, this could lead to insufficient memory. (Bug #47782)
During an upgrade, newer nodes (NDB kernel block
DBTUP) could in some cases try to use the
long signal format for communication with older nodes (NDB kernel
block DBUTIL) that did not understand
the newer format, causing the older data nodes to fail.
A very small race condition between
LQH_TRANSREQ signals when handling node
failure could lead to operations (locks) not being taken over
when they should have been, and subsequently becoming stale.
This could lead to node restart failures, and to applications
getting into endless lock conflicts with operations that were
not released until the node was restarted.
References: See also Bug #41297.
In some cases, the MySQL Server tried to use an error status
whose value had never been set. The problem in the code that
caused this, in hostname.cc, became apparent
when using debug builds of mysqld in MySQL
Cluster. This fix brings MySQL Cluster's error handling in
hostname.cc in line with what is
implemented in MySQL 5.4.
configure failed to honor the
--with-zlib-dir option when trying to build
MySQL Cluster from source.
Performing a system restart of the cluster after having performed a table reorganization which added partitions caused the cluster to become inconsistent, possibly leading to a forced shutdown, in either of the following cases:
When a local checkpoint was in progress but had not yet
completed, new partitions were not restored; that is, data
that was supposed to be moved could be lost instead, leading
to an inconsistent cluster. This was due to an issue whereby
the DBDIH kernel block did not save the
new table definition and instead used the old one (the
version having fewer partitions).
When the most recent LCP had completed, ordered indexes and unlogged tables were still not saved (since these did not participate in the LCP). In this case, the cluster crashed during a subsequent system restart, due to the inconsistency between the main table and the ordered index.
Now DBDIH is forced in such cases to use the
version of the table definition held by the
DBDICT kernel block, which was (already)
correct and up to date.
ndbd was not built correctly when compiled using gcc 4.4.0. (The ndbd binary was built, but could not be started.) (Bug #46113)
ndb_mgmd failed to close client connections that had timed out. After running for some time, a race condition could develop in the management server, due to ndb_mgmd having exhausted all of its file descriptors in this fashion. (Bug #45497)
References: See also Bug #47712.
If a node failed while sending a fragmented long signal, the receiving node did not free long signal assembly resources that it had allocated for the fragments of the long signal that had already been received. (Bug #44607)
Numeric configuration parameters set in
my.cnf were interpreted as signed rather
than unsigned values. The effect of this was that values of 2G
or more were truncated with the warning [MgmSrvr]
Warning: option 'option_name': signed value
opt_value adjusted to
2147483647. Now such parameter values are treated as
unsigned, so that this truncation does not take place.
This issue did not affect parameters set in the config.ini file.
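As a sketch, assuming ndb_mgmd reads cluster parameters from
my.cnf through a [cluster_config] section, a value of 2G or more
such as the following was formerly truncated (the parameter and
value are purely illustrative):

[cluster_config]
# 4G (4294967296) was formerly read as signed and truncated to 2147483647.
DataMemory=4G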
When starting a cluster with a great many tables, it was possible for MySQL client connections as well as the slave SQL thread to issue DML statements against MySQL Cluster tables before mysqld had finished connecting to the cluster and making all tables writeable. This resulted in Table ... is read only errors for clients and the Slave SQL thread.
This issue is fixed by introducing the
--ndb-wait-setup option for the
MySQL server. This provides a configurable maximum amount of
time that mysqld waits for all
NDB tables to become writeable
before enabling MySQL clients or the slave SQL thread to
access them.
References: See also Bug #46955.
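For example, the wait time might be set at server startup as in
this sketch (the 60-second value is arbitrary):

shell> mysqld_safe --ndb-wait-setup=60 &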
When building MySQL Cluster, it was possible to configure the
build using --with-ndb-port without supplying a
port number. Now in such cases, configure
fails with an error.
References: See also Bug #47941.
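Illustration (a sketch; 1186 is simply the conventional management
server port):

shell> ./configure --with-ndb-port          # now fails with an error
shell> ./configure --with-ndb-port=1186     # a port number must be supplied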
When the MySQL server SQL mode included a strict mode setting,
storage engine warnings and error codes specific to
NDB were returned when errors occurred,
instead of the MySQL server errors and error codes expected by
some programming APIs (such as Connector/J) and applications.
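For instance, STRICT_TRANS_TABLES is one such strict setting
(shown here only as an illustration of enabling a strict mode):

mysql> SET sql_mode = 'STRICT_TRANS_TABLES';

With a strict mode in force, errors raised against NDB tables
formerly surfaced with NDB-specific codes rather than the MySQL
server error codes that clients such as Connector/J expect.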
When a copying operation exhausted the available space on a data
node while copying large BLOB
columns, this could lead to failure of the data node and a
Table is full error on the SQL node which
was executing the operation. Examples of such operations include
an ALTER TABLE statement that changed an
INT column to a
BLOB column, or a bulk insert of
BLOB data that failed due to
running out of space or to a duplicate key error.
(Bug #34583, Bug #48040)
References: See also Bug #41674, Bug #45768.
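A sketch of one operation of the affected kind (hypothetical table
and column names):

ALTER TABLE t1 MODIFY COLUMN c2 BLOB;

This is a copying operation: every row is rewritten, so it can
exhaust the available space on a data node part way through.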
When mysqlbinlog --verbose was used to read a
binary log that had been written using row-based format, the
output for events that updated some but not all columns of
tables was not correct.
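For example (the log file name is arbitrary):

shell> mysqlbinlog --verbose binlog.000001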
Disk Data: A local checkpoint of an empty fragment could cause a crash during a system restart which was based on that LCP. (Bug #47832)
References: See also Bug #41915.
Disk Data: Multi-threaded data nodes could in some cases attempt to access the same memory structure in parallel, in a non-safe manner. This could result in data node failure when running ndbmtd while using Disk Data tables. (Bug #44195)
References: See also Bug #46507.
Cluster Replication: When using multiple active replication channels, it was sometimes possible that a node group failed on the slave cluster, causing the slave cluster to shut down. (Bug #47935)
When recording a binary log using the
--ndb-log-update-as-write and
--ndb-log-updated-only options
(both enabled by default) and later attempting to apply that
binary log with mysqlbinlog, any operations
that were played back from the log but which updated only some
(but not all) columns caused any columns that were not updated
to be reset to their default values.
References: See also Bug #47323, Bug #46662.
mysqlbinlog failed to apply correctly a
binary log that had been recorded using row-based format.
References: See also Bug #47323, Bug #47674.
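A typical replay invocation of the affected kind (file name and
credentials are illustrative):

shell> mysqlbinlog binlog.000001 | mysql -u root -p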
Cluster API: If an NDB API program reads the same column more than once, it is possible to exceed the maximum permissible message size, in which case the operation should be aborted due to NDB error 880 Tried to read too much - too many getValue calls. However, due to a change introduced in MySQL Cluster NDB 6.3.18, the check for this was not done correctly, which instead caused a data node crash. (Bug #48266)
Cluster API: Several NDB API methods,
NdbOperation::getErrorLine() among them,
formerly had both const and
non-const variants. The
non-const versions of these
methods have been removed. In addition, the getErrorLine()
method has been re-implemented to provide consistent internal
behavior.
Cluster API: A duplicate read of a column caused NDB API applications to crash. (Bug #45282)
Cluster API: The error handling shown in the example file
ndbapi_scan.cpp included with the MySQL
Cluster distribution was incorrect.
Installation of MySQL on Windows failed to set the correct location for the character set files, which could lead to mysqld and mysql failing to initialize properly. (Bug #17270)
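As a sketch of a manual workaround (the path shown is illustrative
of a default Windows installation), the character set file location
can be supplied explicitly in my.ini using the
character-sets-dir option:

[mysqld]
# Point the server explicitly at the installed character set files.
character-sets-dir="C:/Program Files/MySQL/MySQL Server 5.1/share/charsets"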