This is a bugfix release, fixing recently discovered bugs in the previous MySQL Cluster NDB 6.2 release.
MySQL Cluster NDB 6.2 no longer in development. MySQL Cluster NDB 6.2 is no longer being developed or maintained; if you are using a MySQL Cluster NDB 6.2 release, you should upgrade to the latest version of MySQL Cluster, which is available from http://dev.mysql.com/downloads/cluster/.
Obtaining MySQL Cluster NDB 6.2. You can download the latest MySQL Cluster NDB 6.2 source code and binaries for supported platforms from ftp://ftp.mysql.com/pub/mysql/download/cluster_telco/mysql-5.1.28-ndb-6.2.16.
This release incorporates all bugfixes and changes made in previous MySQL Cluster releases, as well as all bugfixes and feature changes which were added in mainline MySQL 5.1 through MySQL 5.1.28 (see Changes in MySQL 5.1.28 (2008-08-28)).
Please refer to our bug database at http://bugs.mysql.com/ for more details about the individual bugs fixed in this version.
Functionality Added or Changed
It is no longer a requirement for database autodiscovery that an SQL node already be connected to the cluster at the time that a database is created on another SQL node. It is no longer necessary to issue CREATE DATABASE (or CREATE SCHEMA) statements on an SQL node joining the cluster after a database is created for the new SQL node to see the database and any NDBCLUSTER tables that it contains.
Event buffer lag reports are now written to the cluster log. (Bug #37427)
A new --no-binlog option has been added for ndb_restore. When used, this option prevents information from the restoration of a cluster backup from being written to SQL node binary logs.
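As a sketch, a restore that skips binary logging might be invoked as follows; the node ID, backup ID, and backup path shown are illustrative placeholders, not values from this release note:

```shell
# Restore the data for node 2 from backup 1 without writing the restored
# rows to any SQL node binary log (IDs and path are placeholders).
ndb_restore --nodeid=2 --backupid=1 --restore_data --no-binlog \
    --backup_path=/var/lib/mysql-cluster/BACKUP/BACKUP-1
```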
The ndbd and ndb_mgmd man pages have been reclassified from volume 1 to volume 8. (Bug #34642)
Bugs Fixed
Important Change; Disk Data:
It is no longer possible on 32-bit systems to issue statements
appearing to create Disk Data log files or data files greater
than 4 GB in size. (Trying to create log files or data files
larger than 4 GB on 32-bit systems led to unrecoverable data
node failures; such statements now fail with
NDB error 1515.)
Cluster API: Changing the system time on data nodes could cause MGM API applications to hang and the data nodes to crash. (Bug #35607)
In certain rare situations, mysql-test-run.pl could fail with the error Can't use string ("value") as a HASH ref while "strict refs" in use.
Heavy DDL usage caused the mysqld processes to hang due to a timeout error (NDB error code 266).
Starting the MySQL Server with the
--ndbcluster option plus an
invalid command-line option (for example, using
--foobar) caused it to hang while shutting down the
binary log thread.
Dropping and then re-creating a database on one SQL node caused other SQL nodes to hang. (Bug #39613)
Setting a low value of MaxNoOfLocalScans (for example, 100) and performing a large number of (certain) scans could cause the Transaction Coordinator to run out of scan fragment records, and then crash. Now when this resource is exhausted, the cluster returns Error 291 (Out of scanfrag records in TC (increase MaxNoOfLocalScans)) instead.
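The limit involved is the MaxNoOfLocalScans data node parameter, set in the cluster's config.ini; a minimal sketch follows, in which the value shown is only an illustrative example, not a recommendation:

```ini
[ndbd default]
# Maximum number of concurrent local scan records per data node;
# the value below is an assumed example only.
MaxNoOfLocalScans=1000
```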
Creating a unique index on an NDBCLUSTER table caused a memory leak in the NDB Subscription Manager (SUMA) which could lead to mysqld hanging, due to the fact that the resource shortage was not reported back correctly.
References: See also Bug #39450.
Unique identifiers in tables having no primary key were not
cached. This fix has been observed to increase the efficiency of
INSERT operations on such tables
by as much as 50%.
MgmtSrvr::allocNodeId() left a mutex locked following an Ambiguity for node id %d error.
An invalid path specification caused mysql-test-run.pl to fail. (Bug #39026)
During transaction coordinator takeover (directly after node failure), the LQH block finding an operation in the LOG_COMMIT state sent an LQH_TRANS_CONF signal twice, causing the TC to fail.
An invalid memory access caused the management server to crash on Solaris SPARC platforms. (Bug #38628)
A segmentation fault caused ndbd to hang indefinitely.
ndb_mgmd failed to start on older Linux distributions (2.4 kernels) that did not support e-polling. (Bug #38592)
When restarting a data node, an excessively long shutdown message could cause the node process to crash. (Bug #38580)
ndb_mgmd sometimes performed unnecessary network I/O with the client. This in combination with other factors led to long-running threads that were attempting to write to clients that no longer existed. (Bug #38563)
ndb_restore failed with a floating point exception due to a division by zero error when trying to restore certain data files. (Bug #38520)
A failed connection to the management server could cause a resource leak in ndb_mgmd. (Bug #38424)
Failure to parse configuration parameters could cause a memory leak in the NDB log parser. (Bug #38380)
After a forced shutdown and initial restart of the cluster, it was possible for SQL nodes to retain .frm files corresponding to NDBCLUSTER tables that had been dropped, and thus to be unaware that these tables no longer existed. In such cases, attempting to re-create the tables using CREATE TABLE IF NOT EXISTS could fail with a spurious Table ... doesn't exist error.
Failure of a data node could sometimes cause mysqld to crash. (Bug #37628)
If subscription was terminated while a node was down, the epoch was not properly acknowledged by that node. (Bug #37442)
In rare circumstances, a connection followed by a disconnection could give rise to a “stale” connection where the connection still existed but was not seen by the transporter. (Bug #37338)
A failed attempt to create an NDBCLUSTER table could in some cases lead to resource leaks or cluster failures.
Checking of API node connections was not efficiently handled. (Bug #36843)
References: See also Bug #36851.
An operation on an NDBCLUSTER table on one SQL node caused a trigger on this table to be deleted on another SQL node.
SET GLOBAL ndb_extra_logging caused
mysqld to crash.
If the combined total of tables and indexes in the cluster was greater than 4096, issuing certain statements caused data nodes to fail.
When more than one SQL node connected to the cluster at the same time, table creation failed on one of them with an explicit Table exists error, which was not necessary.
mysqld failed to start after running mysql_upgrade. (Bug #35708)
Attempting to add a
UNIQUE INDEX twice to an
NDBCLUSTER table, then deleting
rows from the table could cause the MySQL Server to crash.
It was not possible to determine the value used for the
--ndb-cluster-connection-pool option in the
mysql client. Now this value is reported as a
system status variable.
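Assuming the variable follows the usual Ndb_* status naming convention (an assumption to verify against your server version), the configured pool size can then be read from the mysql client:

```sql
-- Reads the reported connection pool size; the variable name shown is an
-- assumption based on the usual Ndb_* status naming convention.
SHOW GLOBAL STATUS LIKE 'Ndb_cluster_connection_pool';
```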
The ndb_waiter utility wrongly calculated timeouts. (Bug #35435)
Where column values to be compared in a query were of the VARCHAR type, NDBCLUSTER passed a value padded to the full size of the column, which caused unnecessary data to be sent to the data nodes. This also had the effect of wasting CPU and network bandwidth, and causing condition pushdown to be disabled where it could (and should) otherwise have been applied.
ndb_restore incorrectly handled some data types when applying log files from backups. (Bug #35343)
In some circumstances, a stopped data node was handled incorrectly, leading to redo log space being exhausted following an initial restart of the node, or an initial or partial restart of the cluster (the wrong GCI might be used in such cases). This could happen, for example, when a node was stopped following the creation of a new table, but before a new LCP could be executed. (Bug #35241)
SELECT ... LIKE ... queries yielded incorrect
results when used on
NDB tables. As
part of this fix, condition pushdown of such queries has been
disabled; re-enabling it is expected to be done as part of a
later, permanent fix for this issue.
ndb_mgmd reported errors to STDOUT rather than to STDERR.
Nested Multi-Range Read scans failed when the second Multi-Range Read released the first read's unprocessed operations, sometimes leading to an SQL node crash. (Bug #35137)
In some situations, a problem with synchronizing checkpoints between nodes could cause a system restart or a node restart to fail with Error 630 during restore of TX. (Bug #34756)
References: This bug is a regression of Bug #34033.
When a secondary index on a DECIMAL column was used to retrieve data from an NDB table, no results were returned even if the target table had a matching value in the column on which the secondary index was defined.
If a data node in one node group was placed in the "not started" state, it was not possible to stop a data node in a different node group.
When configured with certain options, MySQL failed to compile on 64-bit FreeBSD systems.
ndb_restore failed when a single table was specified. (Bug #33801)
NDBCLUSTER test failures occurred in builds compiled using icc on IA64 platforms.
GCP_COMMIT did not wait for transaction
takeover during node failure. This could cause
GCP_SAVE_REQ to be executed too early. This
could also cause (very rarely) replication to skip rows.
NDB error 1427 (Api node died, when SUB_START_REQ reached node) was incorrectly classified as a schema error rather than a temporary error.
When dropping a table failed for any reason (such as when in single user mode), the corresponding .ndb file was still removed.
If an API node disconnected and then reconnected during Start Phase 8, then the connection could be "blocked"; that is, the QMGR kernel block failed to detect that the API node was in fact connected to the cluster, causing issues with the NDB Subscription Manager.
When flushing tables, there was a slight chance that the flush
occurred between the processing of one table map event and the
next. Since the tables were opened one by one, subsequent
locking of tables would cause the slave to crash. This problem
was observed when replicating
InnoDB tables, when executing multi-table
updates, and when a trigger or a stored routine performed an
(additional) insert on a table so that two tables were
effectively being inserted into in the same statement.
Cluster Replication: In some cases, dropping a database on the master could cause table logging to fail on the slave, or, when using a debug build, could cause the slave mysqld to fail completely. (Bug #39404)
Cluster Replication: During a parallel node restart, the starting nodes could (sometimes) incorrectly synchronize subscriptions among themselves. Instead, this synchronization now takes place only among nodes that have actually (completely) started. (Bug #38767)
Cluster Replication: Issuing SELECT ... FROM mysql.ndb_apply_status before the mysqld process had connected to the cluster failed, and caused this table never to be created.
In some cases, when updating only one or some columns in a
table, the complete row was written to the binary log instead of
only the updated column or columns, even when
ndb_log_updated_only was set to 1.
Passing a value greater than 65535 to certain methods caused those methods to have no effect.
method could be called multiple times without error.
Certain Multi-Range Read scans involving
IS NOT NULL comparisons
failed with an error in the
local query handler.
Problems with the public headers prevented
NDB applications from being built
with warnings turned on.
Creating a scan filter from an NdbScanOperation object that had not yet had its readTuples() method called resulted in a crash when later attempting to use the filter, and an interpreted delete created using such an operation caused the transaction to abort.
When some operations succeeded and some failed following a call to execute(Commit, AO_IgnoreOnError), a race condition could cause spurious occurrences of NDB API Error 4011 (Internal error).
Cluster API: Ordered index scans were not pruned correctly where a partitioning key was specified with an EQ-bound. (Bug #36950)
When an insert operation involving BLOB data was attempted on a row which already existed, the duplicate key error was not correctly reported and the transaction was incorrectly aborted. In some cases, the existing row could also become corrupted.
References: See also Bug #26756.
Accessing the debug version of the library via dlopen() resulted in a segmentation fault.
NdbApi.hpp depended on
ndb_global.h, which was not actually
installed, causing the compilation of programs that used
NdbApi.hpp to fail.
Attempting to pass a nonexistent column name to an NdbOperation method caused NDB API applications to crash. Now the column name is checked, and an error is returned in the event that the column is not found.
Cluster API: Creating a table on an SQL node, then starting an NDB API application that listened for events from this table, then dropping the table from an SQL node, prevented data node restarts. (Bug #32949, Bug #37279)
A buffer overrun caused erroneous results on Mac OS X.
Relocation errors were encountered when trying to compile NDB API applications on a number of platforms, including 64-bit Linux. As a result, the affected helper libraries have been changed from normal libraries to "noinst" libtool helper libraries. They are no longer installed as separate libraries; instead, all necessary symbols from these are added directly to libndbclient. This means that NDB API programs now need to be linked using only -lndbclient. (Bug #29791, Bug #11746931)
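Under this change, an NDB API build line needs only the client library; a minimal sketch follows, in which the source file name and install paths are placeholders for an actual installation:

```shell
# Link an NDB API program against libndbclient alone; the source file
# and include/library paths are placeholders.
g++ my_ndb_app.cpp -o my_ndb_app \
    -I/usr/local/mysql/include/storage/ndb \
    -I/usr/local/mysql/include/storage/ndb/ndbapi \
    -L/usr/local/mysql/lib -lndbclient
```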