This is a bugfix release, fixing recently discovered bugs in the previous MySQL Cluster NDB 6.3 release.
Obtaining MySQL Cluster NDB 6.3. The latest MySQL Cluster NDB 6.3 binaries for supported platforms can be obtained from http://dev.mysql.com/downloads/cluster/. Source code for the latest MySQL Cluster NDB 6.3 release can be obtained from the same location. You can also access the MySQL Cluster NDB 6.3 development source tree at https://code.launchpad.net/~mysql/mysql-server/mysql-5.1-telco-6.3.
This release incorporates all bugfixes and changes made in previous MySQL Cluster releases, as well as all bugfixes and feature changes which were added in mainline MySQL 5.1 through MySQL 5.1.51 (see Changes in MySQL 5.1.51 (2010-09-10)).
Please refer to our bug database at http://bugs.mysql.com/ for more details about the individual bugs fixed in this version.
The Id configuration parameter used with
MySQL Cluster management, data, and API nodes (including SQL
nodes) is now deprecated, and the NodeId
parameter (long available as a synonym for Id
when configuring these types of nodes) should be used instead.
Id continues to be supported for reasons of
backward compatibility, but now generates a warning when used
with these types of nodes, and is subject to removal in a future
release of MySQL Cluster.
This change affects the name of the configuration parameter
only, establishing a clear preference for
NodeId in the node sections of the MySQL
Cluster global configuration (config.ini)
file. The behavior of unique identifiers for management, data,
and SQL and API nodes in MySQL Cluster has not otherwise been
altered.
The Id parameter as used in the
[computer] section of the MySQL Cluster
global configuration file is not affected by this change.
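For example, a data node section in config.ini can use NodeId in place of the deprecated Id, while the [computer] section keeps Id. (The node IDs and host name shown here are placeholders.)

```ini
[ndbd]
# Deprecated form (still accepted, but now generates a warning):
# Id=2
NodeId=2
HostName=ndbd-host.example.com

[computer]
# The [computer] section is unaffected; Id remains correct here.
Id=1
HostName=ndbd-host.example.com
```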
MySQL Cluster RPM distributions did not include a
shared-compat RPM for the MySQL Server, which
meant that MySQL applications depending on
libmysqlclient.so.15 (MySQL 5.0 and
earlier) no longer worked.
The field holding the schema version in the
LQHKEYREQ request message, used by the
local query handler when checking the major schema version of a
table, is only 16 bits wide; this could cause the check to fail
with an Invalid schema version error
(NDB error code 1227). This issue
occurred after creating and dropping (and re-creating) the same
table 65537 times, then trying to insert rows into the table.
References: See also Bug #57897.
An internal buffer overrun could cause a data node to fail. (Bug #57767)
Data nodes compiled with gcc 4.5 or higher crashed during startup. (Bug #57761)
ndb_restore now retries failed transactions when replaying log entries, just as it does when restoring data. (Bug #57618)
During a GCP takeover, it was possible for one of the data nodes
not to receive the expected protocol signal, with the result
that it would report itself as being in state
GCP_COMMITTING while the other data nodes
reported a different state.
A query whose WHERE clause combines two
adjacent ranges, of the form column < constant OR
column >= constant, when selecting from an
NDB table having a primary key
on multiple columns could result in Error 4259
Invalid set of range scan bounds if
range2 started exactly where
range1 ended and the primary key
definition declared the columns in a different order relative to
the order in the table's column list. (Such a query should
simply return all rows in the table, since the two ranges
together cover all possible values.)
Suppose the table in question was created as shown here:
CREATE TABLE t (a INT, b INT, PRIMARY KEY (b, a)) ENGINE=NDB;
This issue could then be triggered by a query such as this one:
SELECT * FROM t WHERE b < 8 OR b >= 8;
In addition, the order of the ranges in the
WHERE clause was significant; the issue was
not triggered, for example, by the query
SELECT * FROM t WHERE b <= 8 OR b > 8.
A GCP stop is detected using two parameters which determine the
maximum time that a global checkpoint or epoch can go unchanged;
one of these controls this timeout for GCPs, and the other
controls the timeout for epochs. Suppose the cluster is
configured such that the epoch timeout
(TimeBetweenEpochsTimeout) is 100 ms but
HeartbeatIntervalDBDB is 1500 ms. A node
failure can be signalled only after 4 missed
heartbeats—in this case, 6000 ms. Since this exceeds the
epoch timeout, it would cause false detection of a GCP stop. To
prevent this from happening, the configured value for the epoch
timeout is automatically adjusted, based on the values of
HeartbeatIntervalDBDB and
ArbitrationTimeout.
The current issue arose because the automatic adjustment routine
did not correctly take into consideration the fact that, during
cascading node failures, several intervals of length
4 * (HeartbeatIntervalDBDB + ArbitrationTimeout) may
elapse before all node failures have internally been resolved.
This could cause false GCP stop detection in the event of a
cascading node failure.
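The adjustment described above amounts to simple arithmetic, sketched below. This is an illustrative model only, not NDB source code: the function name and the cascading-failure multiplier are assumptions based on the description, while the 4-missed-heartbeats rule and the parameter roles come from the entry itself.

```python
# Illustrative sketch (not NDB source code) of auto-adjusting an epoch
# timeout so that worst-case node-failure detection cannot outlast it.
def adjusted_epoch_timeout(time_between_epochs_timeout_ms,
                           heartbeat_interval_dbdb_ms,
                           arbitration_timeout_ms,
                           cascading_failures=1):
    # A node failure is signalled only after 4 missed heartbeats, and each
    # failure in a cascade can add another detection + arbitration round.
    worst_case_ms = cascading_failures * 4 * (
        heartbeat_interval_dbdb_ms + arbitration_timeout_ms)
    # The effective timeout must cover the worst case; otherwise a GCP
    # stop would be reported falsely while failures are being resolved.
    return max(time_between_epochs_timeout_ms, worst_case_ms)

# The values from the entry: a 100 ms epoch timeout with 1500 ms
# heartbeats; failure detection alone takes 4 * 1500 = 6000 ms.
print(adjusted_epoch_timeout(100, 1500, 0))     # 6000
# A cascade of two failures doubles the worst case:
print(adjusted_epoch_timeout(100, 1500, 0, 2))  # 12000
```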
Certain queries with a WHERE condition
against an NDB table having a
VARCHAR column as its primary key
failed to return all matching rows.
When a data node angel process failed to fork off a new worker
process (to replace one that had failed), the failure was not
handled. This meant that the angel process either transformed
itself into a worker process, or itself failed. In the first
case, the data node continued to run, but there was no longer
any angel to restart it in the event of failure, even with
StopOnError set to 0.
The OPTION_ALLOW_BATCHING bitmask had the
same value as OPTION_PROFILING. This caused
conflicts between using transaction batching and profiling.
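This class of bug can be sketched in a few lines. The flag names below come from the entry, but the bit positions are invented for illustration:

```python
# Sketch of the bug class: two option flags accidentally sharing one bit.
# Flag names are from the entry above; the bit positions are made up.
OPTION_ALLOW_BATCHING = 1 << 20
OPTION_PROFILING = 1 << 20  # bug: collides with OPTION_ALLOW_BATCHING

options = OPTION_PROFILING  # a client enables profiling only...
print(bool(options & OPTION_ALLOW_BATCHING))  # True: batching also "on"

# With distinct bits, the two options no longer interfere:
OPTION_PROFILING = 1 << 21
options = OPTION_PROFILING
print(bool(options & OPTION_ALLOW_BATCHING))  # False
```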
Replication of SET and
ENUM columns represented using
more than 1 byte (that is, SET
columns with more than 8 members and
ENUM columns with more than 256
constants) between platforms using different endianness failed
when using the row-based format. This was because columns of
these types are represented internally using integers, but the
internal functions used by MySQL to handle them treated them as
strings.
References: See also Bug #53528.
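The mechanism behind this fix can be illustrated outside MySQL. The sketch below (plain Python, not MySQL internals) shows why copying an integer-backed value byte-for-byte, as string-handling code would, breaks when the reader has the opposite endianness:

```python
# Why treating an integer-backed column as a string breaks cross-endian
# replication: raw bytes copied unchanged decode to a different integer
# on a machine of the opposite endianness.
import struct

value = 0x0102  # e.g. an ENUM index that needs more than 1 byte

little = struct.pack('<H', value)  # bytes as a little-endian host stores them
# String-style handling copies the raw bytes unchanged; a big-endian
# reader then decodes a different value:
misread = struct.unpack('>H', little)[0]
print(hex(misread))  # 0x201, not 0x102

# Correct handling converts through the integer value, not the raw bytes:
roundtrip = struct.unpack('>H', struct.pack('>H', value))[0]
print(hex(roundtrip))  # 0x102
```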
An application dropping a table at the same time that another
application tried to set up a replication event on the same
table could lead to a crash of the data node. The same issue
could sometimes cause other failures as well.
An NDB API client program under load could abort with an
assertion error.
References: See also Bug #32708.