This is a bugfix release, fixing recently discovered bugs in the previous MySQL Cluster NDB 6.2 release.
This release incorporates all bugfixes and changes made in previous MySQL Cluster NDB 6.2 releases, as well as all bugfixes and feature changes which were added in mainline MySQL 5.1 through MySQL 5.1.32 (see Changes in MySQL 5.1.32 (2009-02-14)).
Please refer to our bug database at http://bugs.mysql.com/ for more details about the individual bugs fixed in this version.
Functionality Added or Changed
Formerly, when the management server failed to create a
transporter for a data node connection, a timeout had to
elapse before the data node was actually permitted to
disconnect. Now in such cases the disconnection occurs immediately.
References: See also Bug #41713.
Important Change; Replication:
RESET MASTER and
RESET SLAVE now reset the value shown for
Last_SQL_Errno in the output of
SHOW SLAVE STATUS.
References: See also Bug #44270.
Important Note; Cluster Replication:
This release of MySQL Cluster derives in part from MySQL 5.1.29,
where the default value for the
--binlog-format option changed to
STATEMENT. That change does
not affect this or future MySQL Cluster NDB
6.x releases, where the default value for this option remains
MIXED, since MySQL Cluster Replication does
not work with the statement-based format.
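For clarity, the MIXED default can also be set explicitly on an SQL node; a minimal my.cnf sketch (section and option names as in standard MySQL configuration files):

```ini
[mysqld]
# MySQL Cluster SQL nodes keep MIXED as the default binary logging
# format; statement-based logging does not work with NDB replication.
binlog_format=MIXED
ndbcluster
```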
It is now possible to specify default locations for Disk Data
data files and undo log files, either together or separately,
using new data node configuration parameters.
For information about these configuration parameters, see
Data file system parameters.
It is also now possible to specify a log file group, tablespace,
or both, to be created when the cluster is started, using new
data node configuration parameters. For information about these
configuration parameters, see
Data object creation parameters.
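As an illustrative sketch only, such settings go in the [ndbd default] section of config.ini; the parameter names and value syntax below are assumptions based on the Disk Data parameters introduced in this series and should be verified against the release documentation:

```ini
[ndbd default]
# Assumed parameter names and syntax -- verify against the documentation.
# Default location for Disk Data data files and undo log files:
FileSystemPathDD=/data/cluster/disk-data
# A log file group and tablespace created at initial cluster start:
InitialLogfileGroup=undo_buffer_size=64M;undo1.log:250M
InitialTablespace=data1.dat:1G
```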
Updates of the
SYSTAB_0 system table to
obtain a unique identifier did not use transaction hints for
tables having no primary key. In such cases the NDB kernel used
a cache size of 1. This meant that each insert into a table not
having a primary key required an update of the corresponding
SYSTAB_0 entry, creating a potential bottleneck.
With this fix, inserts on
NDB tables without
primary keys can under some conditions be performed up to
100% faster than previously.
Packages for MySQL Cluster were missing the
ALTER TABLE ... REORGANIZE
PARTITION on an
NDBCLUSTER table having only one
partition caused mysqld to crash.
References: See also Bug #40389.
Failed operations on
TEXT columns were not always
reported correctly to the originating SQL node. Such errors were
sometimes reported as being due to timeouts, when the actual
problem was a transporter overload due to insufficient buffer
capacity. (Bug #39867, Bug #39879)
In some cases,
NDB did not check
correctly whether tables had changed before trying to use the
query cache. This could result in a crash of the debug MySQL server.
Trying to execute an
ALTER ONLINE TABLE
... ADD COLUMN statement while inserting rows into the
table caused mysqld to crash.
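A hypothetical example of the statement pattern involved (table and column names invented here); online-added NDB columns are generally required to be nullable and to use the dynamic column format:

```sql
-- Insert rows concurrently from another connection to reproduce the
-- original crash scenario (now fixed).
ALTER ONLINE TABLE t1 ADD COLUMN c2 INT NULL COLUMN_FORMAT DYNAMIC;
```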
When using ndbmtd, NDB kernel threads could
hang while trying to start the data nodes with the number of
execution threads set to 1.
Backup IDs greater than 2^31 were not handled correctly, causing negative values to be used in backup directory names and printouts. (Bug #43042)
Given a MySQL Cluster containing no data (that is, whose data
nodes had all been started using --initial and
into which no data had yet been imported) and having an empty
backup directory, executing
START BACKUP with
a user-specified backup ID caused the data nodes to crash.
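For reference, a user-specified backup ID is given as an argument to START BACKUP in the management client; a sketch (the ID 555 is arbitrary):

```
ndb_mgm> START BACKUP 555
```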
When using multiple management servers and starting several API nodes (possibly including one or more SQL nodes) whose connectstrings listed the management servers in different order, it was possible for two API nodes to be assigned the same node ID. When this happened, it was possible for an API node not to become fully connected, consequently producing a number of errors whose cause was not easily recognizable. (Bug #42973)
caused such tables to become locked.
References: See also Bug #16229, Bug #18135.
ndb_error_reporter worked correctly only with GNU tar. (With other versions of tar, it produced empty archives.) (Bug #42753)
In the event that a MySQL Cluster backup failed due to file permissions issues, conflicting reports were issued in the management client. (Bug #34526)
API nodes disconnected too aggressively from the cluster when data nodes were being restarted. This could sometimes lead to the API node being unable to access the cluster at all during a rolling restart. (Bug #41462)
When performing more than 32 index or tuple scans on a single fragment, the scans could be left hanging. This caused unnecessary timeouts, and in addition could possibly lead to a hang of an LCP. (Bug #42559)
References: This bug is a regression of Bug #42084.
A data node failure that occurred between calls to
was not correctly handled; a subsequent call to
caused a null pointer to be dereferenced, leading to a segfault in
Issuing SHOW GLOBAL STATUS LIKE 'NDB%' before
mysqld had connected to the cluster caused a crash.
Data node failures that occurred before all data nodes had connected to the cluster were not handled correctly, leading to additional data node failures. (Bug #42422)
When a cluster backup failed with Error 1304 (Node
node_id1: Backup request from
node_id2 failed to start), no clear
reason for the failure was provided.
As part of this fix, MySQL Cluster now retries backups in the event of sequence errors. (Bug #42354)
References: See also Bug #22698.
Executing SHOW ENGINE NDBCLUSTER STATUS on an SQL node before the management
server had connected to the cluster caused
mysqld to crash.
A maximum of 11
TUP scans were permitted in
Issuing EXIT in the management client
sometimes caused the client to hang.
If the master node failed during a global checkpoint, it was possible in some circumstances for the new master to use an incorrect value for the global checkpoint index. This could occur only when the cluster used more than one node group. (Bug #41469)
Start phase reporting was inconsistent between the management client and the cluster log. (Bug #39667)
An abort path in the
DBLQH kernel block
failed to release a commit acknowledgment marker. This meant
that, during node failure handling, the local query handler
could be added multiple times to the marker record which could
lead to additional node failures due to an array overflow.
During node failure handling (of a data node other than the
master), there was a chance that the master was waiting for a
GCP_NODEFINISHED signal from the failed node
after having received it from all other data nodes. If this
occurred while the failed node had a transaction that was still
being committed in the current epoch, the master node could
crash in the
DBTC kernel block when
discovering that a transaction actually belonged to an epoch
which was already completed.
If a transaction was aborted during the handling of a data node failure, this could lead to the later handling of an API node failure not being completed. (Bug #41214)
Memory leaks could occur in handling of strings used for storing cluster metadata and providing output to users. (Bug #38662)
A segfault in
ndbd to hang indefinitely. This fix improves
on an earlier one for this issue, first made in MySQL Cluster
NDB 6.2.16 and MySQL Cluster NDB 6.3.17.
References: See also Bug #38609.
Redo log creation was very slow on some platforms, causing MySQL Cluster to start more slowly than necessary with some combinations of hardware and operating system. This was due to all write operations being synchronized to disk while creating a redo log file. Now this synchronization occurs only after the redo log has been created. (Bug #40734)
Transaction failures took longer to handle than was necessary.
When a data node acting as transaction coordinator (TC) failed, the surviving data nodes did not inform the API node initiating the transaction until the failure had been processed by all protocols. However, the API node needed to know only about failure handling by the transaction protocol; that is, it needed to be informed only about the TC takeover process. Now, API nodes (including MySQL servers acting as cluster SQL nodes) are informed as soon as the TC takeover is complete, so that they can carry on operating more quickly. (Bug #40697)
Events logged after setting CLUSTERLOG
STATISTICS=15 in the management client did not always
include the node ID of the reporting node.
It was theoretically possible for stale data to be read from
NDBCLUSTER tables when the
transaction isolation level was set to READ COMMITTED.
Restoring a MySQL Cluster from a dump made using mysqldump failed due to a spurious error: Can't execute the given command because you have active locked tables or an active transaction. (Bug #40346)
O_DIRECT was incorrectly disabled when making
MySQL Cluster backups.
Error messages for error codes 1224 and 1227 were missing.
A duplicate key or other error raised when inserting into an
NDBCLUSTER table caused the current
transaction to abort, after which any SQL statement other than a
ROLLBACK failed. With this fix, the
NDBCLUSTER storage engine now
performs an implicit rollback when a transaction is aborted in
this way; it is no longer necessary to issue an explicit
ROLLBACK statement, and the next statement that is issued automatically
begins a new transaction.
It remains necessary in such cases to retry the complete transaction, regardless of which statement caused it to be aborted.
References: See also Bug #47654.
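A sketch of the behavior described above, using an invented table name:

```sql
CREATE TABLE t1 (a INT PRIMARY KEY) ENGINE=NDBCLUSTER;
BEGIN;
INSERT INTO t1 VALUES (1);
INSERT INTO t1 VALUES (1);   -- duplicate key error; transaction is aborted
-- Previously, every statement except ROLLBACK now failed. With this fix,
-- the abort triggers an implicit rollback, and the next statement simply
-- begins a new transaction:
INSERT INTO t1 VALUES (2);
COMMIT;
-- The complete aborted transaction (the first INSERT) must still be retried.
```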
Partitioning: A query on a user-partitioned table caused MySQL to crash, where the query had the following characteristics:
The WHERE clause referenced an indexed column that was also in the partitioning key.
The WHERE clause included a value found in the partition.
The WHERE clause used the < or <> operators to compare the indexed column's value with a constant.
The query used an ORDER BY clause on the same indexed column, and the ORDER BY clause used an explicit or implicit ASC sort priority.
Two examples of such a query are given here, where
a represents an indexed column used in the
table's partitioning key:
SELECT * FROM table WHERE a < constant ORDER BY a;
SELECT * FROM table WHERE a <> constant ORDER BY a;
References: This bug was introduced by Bug #30573, Bug #33257, Bug #33555.
Dropping or creating an index on a partitioned table managed by
the InnoDB Plugin locked the table.
Trying to execute a CREATE LOGFILE
GROUP statement using too large a value for UNDO_BUFFER_SIZE
caused data nodes to crash.
As a result of this fix, the upper limit for
UNDO_BUFFER_SIZE is now
600M; attempting to set a higher value now
fails gracefully with an error.
References: See also Bug #36702.
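A sketch of a statement at the new limit (names and sizes invented here; an UNDO_BUFFER_SIZE above 600M now fails with an error rather than crashing data nodes):

```sql
CREATE LOGFILE GROUP lg1
    ADD UNDOFILE 'undo1.log'
    INITIAL_SIZE 512M
    UNDO_BUFFER_SIZE 600M
    ENGINE NDBCLUSTER;
```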
Disk Data: Creating a Disk Data tablespace with a very large extent size caused the data nodes to fail. The issue was observed when using extent sizes of 100 MB and larger. (Bug #39096)
It was not possible to add an in-memory column online to a table
that used a table-level or column-level
DISK option. The same issue prevented ALTER
ONLINE TABLE ... REORGANIZE PARTITION from working on
Disk Data tables.
Disk Data: Creation of a tablespace data file whose size was greater than 4 GB failed silently on 32-bit platforms. (Bug #37116)
References: See also Bug #29186.
Executing CREATE LOGFILE GROUP statements on separate SQL nodes caused a
resource leak that led to data node crashes when these
statements were used again later.
Disk Data: Disk-based variable-length columns were not always handled like their memory-based equivalents, which could potentially lead to a crash of cluster data nodes. (Bug #39645)
O_SYNC was incorrectly disabled on platforms
that do not support
O_DIRECT. This issue was
noted on Solaris but could have affected other platforms not
supporting O_DIRECT.
Disk Data: Attempting to perform a system restart of the cluster when there existed a logfile group without any undo log files caused the data nodes to crash.
While issuing a CREATE LOGFILE
GROUP statement without an
UNDOFILE option fails with an error in the MySQL
server, this situation could arise if an SQL node failed
during the execution of a valid
LOGFILE GROUP statement; it is also possible to
create a logfile group without any undo log files using the
Using a path or file name longer than 128 characters for Disk
Data undo log files and tablespace data files caused a number of
issues, including failures of
CREATE TABLESPACE statements, as well as crashes of
management nodes and data nodes.
With this fix, the maximum length for path and file names used for Disk Data undo log files and tablespace data files is now the same as the maximum for the operating system. (Bug #31769, Bug #31770, Bug #31772)
Disk Data: When attempting to create a tablespace that already existed, the error message returned was Table or index with given name already exists. (Bug #32662)
Disk Data: Starting a cluster under load such that Disk Data tables used most of the undo buffer could cause data node failures.
The fix for this bug also corrected an issue in the
LGMAN kernel block where the amount of free
space left in the undo buffer was miscalculated, causing buffer
overruns. This could cause records in the buffer to be
overwritten, leading to problems when restarting data nodes.
Sometimes, when using the
orig_server_id columns of the
ndb_binlog_index table on the slave contained
the ID and epoch of the local server instead.
Some error messages from ndb_mgmd contained
newline (\n) characters. This could break the
MGM API protocol, which uses the newline as a line separator.
Cluster API: When using an ordered index scan without putting all key columns in the read mask, this invalid use of the NDB API went undetected, which resulted in the use of uninitialized memory. (Bug #42591)
It was not always possible to access parent objects directly from
NdbBlob and NdbScanOperation objects. To
alleviate this problem, a new
method has been added to
NdbBlob, and new
getNdbTransaction() methods have been added to
NdbScanOperation. In addition,
a const variant of this method is
now also available.
failed when used with incorrect column names or numbers.
Cluster API: The MGM API reset error codes on management server handles before checking them. This meant that calling an MGM API function with a null handle caused applications to crash. (Bug #40455)
Cluster API: The NDB API example programs included in MySQL Cluster source distributions failed to compile. (Bug #37491)
References: See also Bug #40238.
mgmapi.h contained constructs that worked
only in C++, not in C.