This is a bugfix release, fixing recently discovered bugs in the previous MySQL Cluster NDB 6.2 release.
This release incorporates all bugfixes and changes made in previous MySQL Cluster releases, as well as all bugfixes and feature changes which were added in mainline MySQL 5.1 through MySQL 5.1.32 (see Changes in MySQL 5.1.32 (2009-02-14)).
Please refer to our bug database at http://bugs.mysql.com/ for more details about the individual bugs fixed in this version.
Important Change: Formerly, when the management server failed to create a transporter for a data node connection, net_write_timeout seconds elapsed before the data node was actually permitted to disconnect. Now in such cases the disconnection occurs immediately. (Bug #41965)
References: See also: Bug #41713.
Important Change; Replication: RESET SLAVE now resets the values shown for Last_SQL_Errno in the output of SHOW SLAVE STATUS. (Bug #34654)
References: See also: Bug #44270.
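The corrected behavior can be verified with a session along these lines (the error number 1062 shown here is purely illustrative):
mysql> SHOW SLAVE STATUS\G
...
Last_SQL_Errno: 1062
...
mysql> RESET SLAVE;
mysql> SHOW SLAVE STATUS\G
...
Last_SQL_Errno: 0
...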
Important Note; Cluster Replication: This release of MySQL Cluster derives in part from MySQL 5.1.29, where the default value for the --binlog-format option changed to STATEMENT. That change does not affect this or future MySQL Cluster NDB 6.x releases, where the default value for this option remains MIXED, since MySQL Cluster Replication does not work with the statement-based format. (Bug #40586)
Disk Data: It is now possible to specify default locations for Disk Data data files and undo log files, either together or separately, using the data node configuration parameters FileSystemPathDD, FileSystemPathDataFiles, and FileSystemPathUndoFiles. For information about these configuration parameters, see Disk Data file system parameters.
It is also now possible to specify a log file group, tablespace, or both, that is created when the cluster is started, using the InitialLogFileGroup and InitialTablespace data node configuration parameters. For information about these configuration parameters, see Disk Data object creation parameters.
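For illustration, such defaults might be set in the [ndbd default] section of the cluster's config.ini like this (all names, paths, and sizes shown are hypothetical examples, not recommendations):
[ndbd default]
FileSystemPathDataFiles=/var/lib/ndb/data-files
FileSystemPathUndoFiles=/var/lib/ndb/undo-files
InitialLogFileGroup=name=lg_1;undo_buffer_size=64M;undo_1.log:250M
InitialTablespace=name=ts_1;extent_size=8M;data_1.dat:256M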
Performance: Updates of the SYSTAB_0 system table to obtain a unique identifier did not use transaction hints for tables having no primary key. In such cases the NDB kernel used a cache size of 1. This meant that each insert into a table not having a primary key required an update of the corresponding SYSTAB_0 entry, creating a potential performance bottleneck.
With this fix, inserts on NDB tables without primary keys can under some conditions be performed up to 100% faster than previously. (Bug #39268)
Packaging: Packages for MySQL Cluster were missing the libndbclient.a files. (Bug #42278)
ALTER TABLE ... REORGANIZE PARTITION on an NDBCLUSTER table having only one partition caused mysqld to crash. (Bug #41945)
References: See also: Bug #40389.
Cluster API: Failed operations on TEXT columns were not always reported correctly to the originating SQL node. Such errors were sometimes reported as being due to timeouts, when the actual problem was a transporter overload due to insufficient buffer space. (Bug #39867, Bug #39879)
Backup IDs greater than 2^31 were not handled correctly, causing negative values to be used in backup directory names and printouts. (Bug #43042)
When using ndbmtd, NDB kernel threads could hang while trying to start the data nodes with LockPagesInMainMemory set to 1. (Bug #43021)
When using multiple management servers and starting several API nodes (possibly including one or more SQL nodes) whose connection strings listed the management servers in different order, it was possible for two API nodes to be assigned the same node ID. When this happened, it was possible for an API node not to become fully connected, consequently producing a number of errors whose cause was not easily recognizable. (Bug #42973)
ndb_error_reporter worked correctly only with GNU tar. (With other versions of tar, it produced empty archives.) (Bug #42753)
NDBCLUSTER tables caused such tables to become locked. (Bug #42751)
References: See also: Bug #16229, Bug #18135.
When performing more than 32 index or tuple scans on a single fragment, the scans could be left hanging. This caused unnecessary timeouts and could also cause a local checkpoint (LCP) to hang. (Bug #42559)
References: This issue is a regression of: Bug #42084.
A data node failure that occurred between calls to NdbTransaction::execute() was not correctly handled; a subsequent call to nextResult() caused a null pointer to be dereferenced, leading to a segfault in mysqld. (Bug #42545)
Issuing SHOW GLOBAL STATUS LIKE 'NDB%' before mysqld had connected to the cluster caused a segmentation fault. (Bug #42458)
Data node failures that occurred before all data nodes had connected to the cluster were not handled correctly, leading to additional data node failures. (Bug #42422)
When a cluster backup failed with Error 1304 (Node node_id1: Backup request from node_id2 failed to start), no clear reason for the failure was provided.
As part of this fix, MySQL Cluster now retries backups in the event of sequence errors. (Bug #42354)
References: See also: Bug #22698.
Issuing SHOW ENGINE NDBCLUSTER STATUS on an SQL node before the management server had connected to the cluster caused mysqld to crash. (Bug #42264)
A maximum of 11 TUP scans were permitted in parallel. (Bug #42084)
Trying to execute an ALTER ONLINE TABLE ... ADD COLUMN statement while inserting rows into the table caused mysqld to crash. (Bug #41905)
If the master node failed during a global checkpoint, it was possible in some circumstances for the new master to use an incorrect value for the global checkpoint index. This could occur only when the cluster used more than one node group. (Bug #41469)
API nodes disconnected too aggressively from the cluster when data nodes were being restarted. This could sometimes lead to an API node being unable to access the cluster at all during a rolling restart. (Bug #41462)
A race condition in transaction coordinator takeovers (part of node failure handling) could lead to operations (locks) not being taken over and subsequently becoming stale. This could cause subsequent node restarts to fail, and could leave applications in an endless lock conflict with operations that would not complete until the node was restarted. (Bug #41297)
References: See also: Bug #41295.
An abort path in the DBLQH kernel block failed to release a commit acknowledgment marker. This meant that, during node failure handling, the local query handler could be added multiple times to the marker record, which could lead to additional node failures due to an array overflow. (Bug #41296)
During node failure handling (of a data node other than the master), there was a chance that the master was waiting for a GCP_NODEFINISHED signal from the failed node after having received it from all other data nodes. If this occurred while the failed node had a transaction that was still being committed in the current epoch, the master node could crash in the DBTC kernel block when discovering that a transaction actually belonged to an epoch which was already completed. (Bug #41295)
If a transaction was aborted during the handling of a data node failure, this could lead to the later handling of an API node failure not being completed. (Bug #41214)
Given a MySQL Cluster containing no data (that is, whose data nodes had all been started using --initial, and into which no data had yet been imported) and having an empty backup directory, executing START BACKUP with a user-specified backup ID caused the data nodes to crash. (Bug #41031)
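For reference, a user-specified backup ID is given as an argument to START BACKUP in the management client; in this illustrative invocation, 100 is an arbitrary example ID:
ndb_mgm> START BACKUP 100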
EXIT in the management client sometimes caused the client to hang. (Bug #40922)
Redo log creation was very slow on some platforms, causing MySQL Cluster to start more slowly than necessary with some combinations of hardware and operating system. This was due to all write operations being synchronized to disk while creating a redo log file. Now this synchronization occurs only after the redo log has been created. (Bug #40734)
Transaction failures took longer to handle than was necessary.
When a data node acting as transaction coordinator (TC) failed, the surviving data nodes did not inform the API node initiating the transaction of this until the failure had been processed by all protocols. However, the API node needed to know only about failure handling by the transaction protocol, that is, only about the TC takeover process. Now, API nodes (including MySQL servers acting as cluster SQL nodes) are informed as soon as the TC takeover is complete, so that they can resume operating more quickly. (Bug #40697)
It was theoretically possible for stale data to be read from NDBCLUSTER tables when the transaction isolation level was set to ReadCommitted. (Bug #40543)
In some cases, NDB did not check correctly whether tables had changed before trying to use the query cache. This could result in a crash of the debug MySQL server. (Bug #40464)
Restoring a MySQL Cluster from a dump made using mysqldump failed due to a spurious error: Can't execute the given command because you have active locked tables or an active transaction. (Bug #40346)
O_DIRECT was incorrectly disabled when making MySQL Cluster backups. (Bug #40205)
Events logged after setting ALL CLUSTERLOG STATISTICS=15 in the management client did not always include the node ID of the reporting node. (Bug #39839)
Start phase reporting was inconsistent between the management client and the cluster log. (Bug #39667)
A segfault in Logger::Log caused ndbd to hang indefinitely. This fix improves on an earlier one for this issue, first made in MySQL Cluster NDB 6.2.16 and MySQL Cluster NDB 6.3.17. (Bug #39180)
References: See also: Bug #38609.
Memory leaks could occur in handling of strings used for storing cluster metadata and providing output to users. (Bug #38662)
In the event that a MySQL Cluster backup failed due to file permissions issues, conflicting reports were issued in the management client. (Bug #34526)
A duplicate key or other error raised when inserting into an NDBCLUSTER table caused the current transaction to abort, after which any SQL statement other than a ROLLBACK failed. With this fix, the NDBCLUSTER storage engine now performs an implicit rollback when a transaction is aborted in this way; it is no longer necessary to issue an explicit ROLLBACK statement, and the next statement that is issued automatically begins a new transaction.
Note: It remains necessary in such cases to retry the complete transaction, regardless of which statement caused it to be aborted.
References: See also: Bug #47654.
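A sketch of the corrected behavior, assuming a hypothetical NDBCLUSTER table t with a unique column a:
mysql> BEGIN;
mysql> INSERT INTO t (a) VALUES (1);
mysql> INSERT INTO t (a) VALUES (1);
ERROR 1062 (23000): Duplicate entry '1' for key 'a'
mysql> INSERT INTO t (a) VALUES (2);
The duplicate-key error implicitly rolls back the transaction, and the final INSERT automatically begins a new transaction instead of failing.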
Error messages for NDBCLUSTER error codes 1224 and 1227 were missing. (Bug #28496)
Partitioning: A query on a user-partitioned table caused MySQL to crash, where the query had the following characteristics:
- The WHERE clause referenced an indexed column that was also in the partitioning key.
- The WHERE clause included a value found in the partition.
- The WHERE clause used the < or <> operators to compare the indexed column's value with a constant.
- The query used an ORDER BY clause, and the same indexed column was used in the ORDER BY clause.
- The ORDER BY clause used an explicit or implicit ASC sort priority.
Two examples of such a query are given here, where a represents an indexed column used in the table's partitioning key:
SELECT * FROM table WHERE a < constant ORDER BY a;
SELECT * FROM table WHERE a <> constant ORDER BY a;
This bug was introduced in MySQL Cluster NDB 6.2.16. (Bug #40954)
References: This issue is a regression of: Bug #30573, Bug #33257, Bug #33555.
Partitioning: Dropping or creating an index on a partitioned table managed by the InnoDB Plugin locked the table. (Bug #37453)
Disk Data: It was not possible to add an in-memory column online to a table that used a table-level or column-level STORAGE DISK option. The same issue prevented ALTER ONLINE TABLE ... REORGANIZE PARTITION from working on Disk Data tables. (Bug #42549)
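For example, with this fix a statement of the following form (the table and column names are illustrative; NDB requires added columns to be in-memory and dynamic-format for online operation) is expected to succeed on a Disk Data table:
ALTER ONLINE TABLE dt ADD COLUMN c INT COLUMN_FORMAT DYNAMIC STORAGE MEMORY;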
Disk Data: Issuing concurrent CREATE LOGFILE GROUP or ALTER LOGFILE GROUP statements on separate SQL nodes caused a resource leak that led to data node crashes when these statements were used again later. (Bug #40921)
Disk Data: Disk-based variable-length columns were not always handled like their memory-based equivalents, which could potentially lead to a crash of cluster data nodes. (Bug #39645)
Disk Data: Creating a Disk Data tablespace with a very large extent size caused the data nodes to fail. The issue was observed when using extent sizes of 100 MB and larger. (Bug #39096)
Disk Data: This fix improves on a previous fix for this issue, made in MySQL Cluster NDB 6.2.11. (Bug #37116)
References: See also: Bug #29186.
Disk Data: O_SYNC was incorrectly disabled on platforms that do not support O_DIRECT. This issue was noted on Solaris but could have affected other platforms not having O_DIRECT capability. (Bug #34638)
Disk Data: Trying to execute a CREATE LOGFILE GROUP statement using too large a value for UNDO_BUFFER_SIZE caused data nodes to crash.
As a result of this fix, the upper limit for UNDO_BUFFER_SIZE is now 600M; attempting to set a higher value now fails gracefully with an error. (Bug #34102)
References: See also: Bug #36702.
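For illustration, a statement such as the following (all names and sizes here are examples only) uses the new maximum, while any UNDO_BUFFER_SIZE value above 600M is now rejected with an error rather than crashing data nodes:
CREATE LOGFILE GROUP lg_1
    ADD UNDOFILE 'undo_1.log'
    INITIAL_SIZE 128M
    UNDO_BUFFER_SIZE 600M
    ENGINE NDBCLUSTER;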
Disk Data: When attempting to create a tablespace that already existed, the error message returned was Table or index with given name already exists. (Bug #32662)
Disk Data: Using a path or file name longer than 128 characters for Disk Data undo log files and tablespace data files caused a number of issues, including failures of CREATE LOGFILE GROUP, ALTER LOGFILE GROUP, CREATE TABLESPACE, and ALTER TABLESPACE statements, as well as crashes of management nodes and data nodes.
With this fix, the maximum length for path and file names used for Disk Data undo log files and tablespace data files is now the same as the maximum for the operating system. (Bug #31769, Bug #31770, Bug #31772)
Disk Data: Starting a cluster under load such that Disk Data tables used most of the undo buffer could cause data node failures.
The fix for this bug also corrected an issue in the LGMAN kernel block where the amount of free space left in the undo buffer was miscalculated, causing buffer overruns. This could cause records in the buffer to be overwritten, leading to problems when restarting data nodes. (Bug #28077)
Disk Data: Attempting to perform a system restart of the cluster when there existed a log file group without any undo log files caused the data nodes to crash.
Note: While issuing a CREATE LOGFILE GROUP statement without an ADD UNDOFILE option fails with an error in the MySQL server, this situation could arise if an SQL node failed during the execution of a valid CREATE LOGFILE GROUP statement; it is also possible to create a log file group without any undo log files using the NDB API.
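A hypothetical way to repair such a log file group before attempting the restart, assuming it is named lg_1, is to add an undo log file from an SQL node:
ALTER LOGFILE GROUP lg_1
    ADD UNDOFILE 'undo_1.log'
    INITIAL_SIZE 128M
    ENGINE NDBCLUSTER;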
Cluster Replication: When replicating between MySQL Clusters, AUTO_INCREMENT was not set properly on the slave cluster. (Bug #42232)
Cluster Replication: Sometimes, when using the --ndb-log-orig option, the orig_server_id and orig_epoch columns of the ndb_binlog_index table on the slave contained the ID and epoch of the local server instead. (Bug #41601)
Cluster API: Some error messages from ndb_mgmd contained newline (\n) characters. This could break the MGM API protocol, which uses the newline as a line separator. (Bug #43104)
Cluster API: When using an ordered index scan without putting all key columns in the read mask, this invalid use of the NDB API went undetected, which resulted in the use of uninitialized memory. (Bug #42591)
Cluster API: The MGM API reset error codes on management server handles before checking them. This meant that calling an MGM API function with a null handle caused applications to crash. (Bug #40455)
Cluster API: It was not always possible to access parent objects directly from NdbScanOperation objects. To alleviate this problem, a new getNdbOperation() method has been added to NdbBlob, and new getNdbTransaction() methods have been added to NdbScanOperation. In addition, a const variant of NdbOperation::getErrorLine() is now also available. (Bug #40242)
Cluster API: getBlobHandle() failed when used with incorrect column names or numbers. (Bug #40241)
Cluster API: The NDB API example programs included in MySQL Cluster source distributions failed to compile. (Bug #37491)
References: See also: Bug #40238.
Cluster API: mgmapi.h contained constructs that worked only in C++, not in C. (Bug #27004)