This is a bugfix release, fixing recently discovered bugs in the previous MySQL Cluster NDB 6.3 release.
You can obtain the GPL source code and GPL binaries for supported platforms for MySQL Cluster NDB 6.3.33 from http://dev.mysql.com/downloads/cluster/.
This release incorporates all bugfixes and changes made in previous MySQL Cluster releases, as well as all bugfixes and feature changes which were added in mainline MySQL 5.1 through MySQL 5.1.44 (see Changes in MySQL 5.1.44 (2010-02-04)).
Please refer to our bug database at http://bugs.mysql.com/ for more details about the individual bugs fixed in this version.
It is now possible to determine, using the ndb_desc utility or the NDB API, which data nodes contain replicas of which partitions. For ndb_desc, a new --extra-node-info option is added that causes this information to be included in its output. A corresponding method has also been added to the NDB API for obtaining this information.
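The new option can be used from the command line as in the following sketch (the connection string, database, and table name here are hypothetical examples):

```shell
# List the data nodes holding replicas of each partition of table t1
# in database test; host, database, and table names are examples only.
ndb_desc -c mgmhost:1186 -d test t1 --extra-node-info
```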
DUMP commands returned output to all ndb_mgm clients connected to the same MySQL Cluster. Now, these commands return their output only to the ndb_mgm client that actually issued the command.
Replication; Cluster Replication: MySQL Cluster Replication now supports attribute promotion and demotion for row-based replication between columns of different but similar types on the master and the slave. For example, it is possible to promote an INT column on the master to a BIGINT column on the slave, and to demote a TEXT column to a VARCHAR column.
The implementation of type demotion distinguishes between lossy and non-lossy type conversions, and their use on the slave can be controlled by setting the slave_type_conversions server system variable.
For more information about attribute promotion and demotion for row-based replication in MySQL Cluster, see Attribute promotion and demotion (MySQL Cluster). (Bug #47163, Bug #46584)
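As a minimal sketch of controlling these conversions on the slave (the connection details in this invocation are hypothetical; ALL_LOSSY and ALL_NON_LOSSY are the usual values for this variable):

```shell
# Permit both lossy and non-lossy type conversions on the slave;
# connection details are examples only.
mysql -e "SET GLOBAL slave_type_conversions = 'ALL_LOSSY,ALL_NON_LOSSY';"
```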
As part of the fix for this issue, rows for empty epochs are now recorded in the ndb_binlog_index table even when --ndb-log-empty-epochs is 0.
(Bug #49559, Bug #11757505)
If a node or cluster failure occurred while
mysqld was scanning the
ndb.ndb_schema table (which it does when
attempting to connect to the cluster), insufficient error handling could cause mysqld to crash in certain cases. This could happen in a MySQL Cluster with a great
many tables, when trying to restart data nodes while one or more
mysqld processes were restarting.
After running a mixed series of node and system restarts, a system restart could hang or fail altogether. This was caused by setting the value of the newest completed global checkpoint too low for a data node performing a node restart, which led to the node reporting incorrect GCI intervals for its first local checkpoint. (Bug #52217)
When performing a complex mix of node restarts and system
restarts, the node that was elected as master sometimes required
optimized node recovery due to missing
information. When this happened, the node crashed with
Failure to recreate object ... during restart, error 721 (because the object-creation code was run twice). Now when this occurs, node takeover is
executed immediately, rather than being made to wait until the
remaining data nodes have started.
References: See also Bug #48436.
The redo log protects itself from being filled up by
periodically checking how much space remains free. If
insufficient redo log space is available, it sets the state
TAIL_PROBLEM which results in transactions
being aborted with error code 410 (out of redo
log). However, this state was not set following a
node restart, which meant that if a data node had insufficient
redo log space following a node restart, it could crash a short
time later with Fatal error due to end of REDO
log. Now, this space is checked during node restarts.
The output of the ndb_mgm client
REPORT BACKUPSTATUS command could sometimes
contain errors due to uninitialized data.
A GROUP BY query against NDB tables sometimes did not use any indexes unless the query included a FORCE INDEX option. With this fix, indexes are used by such queries (where otherwise possible) even when FORCE INDEX is not specified.
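For illustration, the hint that was previously needed for such a query to use an index looked like the following (the table, column, and index names, and the connection defaults, are hypothetical):

```shell
# Before this fix, FORCE INDEX was required for the index to be used;
# after it, the index is chosen automatically where possible.
mysql -e "SELECT a, COUNT(*) FROM test.t1 FORCE INDEX (ix_a) GROUP BY a;"
```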
The ndb_mgm client sometimes inserted extra prompts within the output of some commands.
Issuing a command in the ndb_mgm client after it had lost its connection to the management server could cause the client to crash. (Bug #49219)
The ndb_print_backup_file utility failed to function, due to a previous internal change in the NDB code. (Bug #41512, Bug #48673)
When the MemReportFrequency configuration parameter was set in config.ini, the ndb_mgm REPORT MEMORYUSAGE command printed its output multiple times.
ndb_mgm -e "... REPORT ..." did not write any output.
The fix for this issue also prevents the cluster log from being flooded with INFO messages when DataMemory usage reaches 100%, and ensures that when the usage is decreased, an appropriate message is written to the cluster log.
(Bug #31542, Bug #44183, Bug #49782)
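A short sketch of the single-command form affected here (the connection string is hypothetical):

```shell
# Request a memory usage report from all data nodes; with this fix the
# report is written to stdout even in -e (single-command) mode.
ndb_mgm -c mgmhost:1186 -e "ALL REPORT MEMORYUSAGE"
```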
Column length information generated by InnoDB did not match that generated by MyISAM, which caused invalid metadata to be written to the binary log when trying to replicate GEOMETRY columns; as a result, this length information was not properly stored by the slave in its definitions of tables.
References: See also Bug #48776.
Disk Data: Inserts of blob column values into a MySQL Cluster Disk Data table that exhausted the tablespace resulted in misleading no such tuple error messages rather than the expected tablespace full error.
This issue appeared similar to Bug #48113, but had a different underlying cause. (Bug #52201)
The error message returned after attempting to execute ALTER LOGFILE GROUP on a nonexistent logfile group did not indicate the reason for the failure.
When reading blob data with lock mode LM_SimpleRead, the lock was not upgraded as expected.
A number of issues were corrected in the NDB API coding examples
found in the NDB API examples directory in the MySQL Cluster source tree. These included
possible endless recursion in
ndbapi_scan.cpp as well as problems running
some of the examples on systems using Windows or Mac OS X due to
the lettercase used for some table names.
(Bug #30552, Bug #30737)
1) In rare cases, if a thread was interrupted during a FLUSH PRIVILEGES operation, a debug assertion occurred later
due to improper diagnostics area setup. 2) A
KILL operation could cause a
console error message referring to a diagnostic area state
without first ensuring that the state existed.