This is a bugfix release, fixing recently discovered bugs in the previous MySQL Cluster NDB 6.2 release.
MySQL Cluster NDB 6.2 no longer in development. MySQL Cluster NDB 6.2 is no longer being developed or maintained; if you are using a MySQL Cluster NDB 6.2 release, you should upgrade to the latest version of MySQL Cluster.
Obtaining MySQL Cluster NDB 6.2. You can download the latest MySQL Cluster NDB 6.2 source code and binaries for supported platforms from ftp://ftp.mysql.com/pub/mysql/download/cluster_telco/mysql-5.1.28-ndb-6.2.16.
This release incorporates all bugfixes and changes made in previous MySQL Cluster releases, as well as all bugfixes and feature changes which were added in mainline MySQL 5.1 through MySQL 5.1.28 (see Changes in MySQL 5.1.28 (2008-08-28)).
Please refer to our bug database at http://bugs.mysql.com/ for more details about the individual bugs fixed in this version.
Functionality Added or Changed
It is no longer a requirement for database autodiscovery that an SQL node already be connected to the cluster at the time that a database is created on another SQL node. It is no longer necessary to issue CREATE DATABASE (or CREATE SCHEMA) statements on an SQL node joining the cluster after a database is created, in order for the new SQL node to see the database and any NDBCLUSTER tables that it contains.
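A minimal sketch of the behavior described above; the database, table, and node labels here are hypothetical, not taken from the release notes:

```sql
-- On SQL node A (already connected to the cluster):
CREATE DATABASE app_db;                -- hypothetical database name
CREATE TABLE app_db.t1 (
    id  INT NOT NULL PRIMARY KEY,
    val VARCHAR(32)
) ENGINE=NDBCLUSTER;

-- On SQL node B, even if it joins the cluster only after the
-- statements above were run, the database and its NDBCLUSTER
-- tables are discovered automatically; no explicit
-- CREATE DATABASE app_db is needed before querying:
SELECT * FROM app_db.t1;
```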
Bugs Fixed
After a forced shutdown and initial restart of the cluster, it was possible for SQL nodes to retain table definition (.frm) files corresponding to NDBCLUSTER tables that had been dropped, and thus to be unaware that these tables no longer existed. In such cases, attempting to re-create the tables using
CREATE TABLE IF NOT EXISTS
could fail with a spurious Table ... doesn't exist error.
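A sketch of the statement pattern affected by this bug; the table name is hypothetical:

```sql
-- After the forced shutdown and initial restart, the table has
-- already been dropped cluster-wide, but a stale local definition
-- file could remain on an SQL node. Re-creating it should succeed:
CREATE TABLE IF NOT EXISTS t1 (
    id INT NOT NULL PRIMARY KEY
) ENGINE=NDBCLUSTER;
-- Before this fix, the statement could instead fail on the
-- affected SQL node with a spurious "Table ... doesn't exist"
-- error.
```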
When restarting a data node, an excessively long shutdown message could cause the node process to crash. (Bug #38580)
Starting the MySQL Server with the
--ndbcluster option plus an
invalid command-line option (for example, using
--foobar) caused it to hang while shutting down.
Heavy DDL usage caused mysqld processes
to hang due to a timeout error (NDB error code 266).
Dropping and then re-creating a database on one SQL node caused other SQL nodes to hang. (Bug #39613)
Unique identifiers in tables having no primary key were not
cached. This fix has been observed to increase the efficiency of
INSERT operations on such tables
by as much as 50%.
Setting a low value of MaxNoOfLocalScans (for example,
100) and performing a large number of (certain) scans could
cause the Transaction Coordinator to run out of scan fragment
records, and then crash. Now when this resource is exhausted,
the cluster returns Error 291 (Out of scanfrag
records in TC (increase MaxNoOfLocalScans)) instead.
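As a hedged sketch, this limit is raised in the [ndbd default] section of the cluster's config.ini; the value below is purely illustrative, not a recommendation from these notes:

```ini
[ndbd default]
# Hypothetical example value; the appropriate setting depends on
# how many concurrent scans the application actually performs.
MaxNoOfLocalScans=1000
```

After changing this parameter, the cluster nodes must be restarted for the new value to take effect.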
Creating a unique index on an
NDBCLUSTER table caused a memory
leak in the subscription manager (SUMA), which could lead to mysqld
hanging, due to the fact that the resource shortage was not
reported back correctly.
References: See also Bug #39450.
MgmtSrvr::allocNodeId() left a mutex locked
following an Ambiguity for node if %d error.
An invalid path specification caused mysql-test-run.pl to fail. (Bug #39026)
During transaction coordinator takeover (directly after node
failure), the LQH block finding an operation in the
LOG_COMMIT state sent an
LQH_TRANS_CONF signal twice, causing the TC to fail.
An invalid memory access caused the management server to crash on Solaris Sparc platforms. (Bug #38628)
A segmentation fault caused
ndbd to hang indefinitely.
ndb_mgmd failed to start on older Linux distributions (2.4 kernels) that did not support e-polling. (Bug #38592)
ndb_mgmd sometimes performed unnecessary network I/O with the client. This in combination with other factors led to long-running threads that were attempting to write to clients that no longer existed. (Bug #38563)
ndb_restore failed with a floating point exception due to a division by zero error when trying to restore certain data files. (Bug #38520)
A failed connection to the management server could cause a resource leak in ndb_mgmd. (Bug #38424)
Failure to parse configuration parameters could cause a memory leak in the NDB log parser. (Bug #38380)
Renaming an NDBCLUSTER table on one
SQL node caused a trigger on this table to be deleted on
another SQL node.
Attempting to add a
UNIQUE INDEX twice to an
NDBCLUSTER table, then deleting
rows from the table could cause the MySQL Server to crash.
ndb_restore failed when a single table was specified. (Bug #33801)
GCP_COMMIT did not wait for transaction
takeover during node failure. This could cause
GCP_SAVE_REQ to be executed too early. This
could also cause (very rarely) replication to skip rows.
Cluster Replication: In some cases, dropping a database on the master could cause table logging to fail on the slave, or, when using a debug build, could cause the slave mysqld to fail completely. (Bug #39404)
Cluster Replication: During a parallel node restart, the starting nodes could (sometimes) incorrectly synchronize subscriptions among themselves. Instead, this synchronization now takes place only among nodes that have actually (completely) started. (Bug #38767)
Data could be written to the binlog incorrectly in some circumstances.
Certain Multi-Range Read scans involving
IS NOT NULL comparisons
failed with an error in the
local query handler.
Creating an NdbScanFilter
object using an
NdbScanOperation object that
had not yet had its readTuples()
method called resulted in a crash when later attempting to use the NdbScanFilter.
An interpreted delete created with an
ignore-error abort option caused the transaction to abort.
Passing a value greater than 65535 to
certain NDB API methods
caused these methods to have no effect.
An NDB API method intended to be called at most once could be called multiple times without an error being raised.
Problems with the public headers prevented
NDB applications from being built
with warnings turned on.
Accessing the debug version of
libndbclient via dlopen() resulted in a segmentation fault.