Changes in the MySQL Cluster NDB 6.3 Series

This section contains unified change history highlights for all MySQL Cluster releases based on version 6.3 of the NDB storage engine through MySQL Cluster NDB 6.3.54. Included are all changelog entries in the categories MySQL Cluster, Disk Data, and Cluster API.

For an overview of features that were added in MySQL Cluster NDB 6.3, see MySQL Cluster Development in MySQL Cluster NDB 6.3.

Changes in MySQL Cluster NDB 6.3.54 (5.1.73-ndb-6.3.54)

Version 5.1.73-ndb-6.3.54 has no changelog entries, or they have not yet been published because the product version has not yet been released.

Changes in MySQL Cluster NDB 6.3.53 (5.1.72-ndb-6.3.53)

Version 5.1.72-ndb-6.3.53 has no changelog entries, or they have not yet been published because the product version has not yet been released.

Changes in MySQL Cluster NDB 6.3.52 (5.1.69-ndb-6.3.52)

Version 5.1.69-ndb-6.3.52 has no changelog entries, or they have not yet been published because the product version has not yet been released.

Changes in MySQL Cluster NDB 6.3.51 (5.1.67-ndb-6.3.51)

Bugs Fixed

  • Node failure during the dropping of a table could lead to the node hanging when attempting to restart.

    When this happened, the NDB internal dictionary (DBDICT) lock taken by the drop table operation was held indefinitely, and the global schema lock taken by the SQL statement from which the drop operation originated was held until the NDB internal operation timed out. To aid in debugging such occurrences, a new dump code, DUMP 1228 (or DUMP DictDumpLockQueue), which dumps the contents of the DICT lock queue, has been added to the ndb_mgm client. (Bug #14787522)
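
    For example, the new dump code can be issued in the ndb_mgm client as shown here (a minimal sketch; the ALL specifier, which sends the command to all data nodes, is illustrative):

    ndb_mgm> ALL DUMP 1228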

Changes in MySQL Cluster NDB 6.3.50 (5.1.66-ndb-6.3.50)

Bugs Fixed

  • A slow filesystem during local checkpointing could exert undue pressure on DBDIH kernel block file page buffers, which in turn could lead to a data node crash when these were exhausted. This fix limits the number of table definition updates that DBDIH can issue concurrently. (Bug #14828998)

  • Setting BackupMaxWriteSize to a very large value as compared with DiskCheckpointSpeed caused excessive writes to disk and CPU usage. (Bug #14472648)
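
    For illustration only (these values are examples, not tuning recommendations; both parameters are data node parameters set in config.ini), keeping BackupMaxWriteSize modest relative to DiskCheckpointSpeed avoids the problem:

    [ndbd default]
    DiskCheckpointSpeed=10M
    BackupMaxWriteSize=1M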

  • Cluster API: When the buffer pool used for KeyInfo from NDB API requests for primary key operations and scans was exhausted while receiving the KeyInfo, the error handling path did not correctly abort the scan request. Symptoms of this incorrect error handling included a long timeout experienced by the NDB API client that requested the scan, as well as permanent leakage of the scan record, scan fragment records, and linked operation record associated with the scan.

    This issue is not present in MySQL Cluster NDB 7.0 and later, due to the replacement of the fixed-size single-purpose buffers for KeyInfo (and AttrInfo) with LongMessageBuffer, as well as improvements in error handling. (Bug #14386849)

Changes in MySQL Cluster NDB 6.3.49 (5.1.61-ndb-6.3.49)

Bugs Fixed

  • When reloading the redo log during a node or system restart, and with NoOfFragmentLogFiles greater than or equal to 42, it was possible for metadata to be read for the wrong file (or files). Thus, the node or nodes involved could try to reload the wrong set of data. (Bug #14389746)

  • If the Transaction Coordinator aborted a transaction in the prepared state, this could cause a resource leak. (Bug #14208924)

  • DUMP 2303 in the ndb_mgm client now includes the status of the single fragment scan record reserved for a local checkpoint. (Bug #13986128)

  • A shortage of scan fragment records in DBTC resulted in a leak of concurrent scan table records and key operation records. (Bug #13966723)

  • In some circumstances, transactions could be lost during an online upgrade. (Bug #13834481)

  • When trying to use --hostname=host:port to connect to a MySQL server running on a nonstandard port, the port argument was ignored. (Bug #13364905, Bug #62635)

  • Attempting to add both a column and an index on that column in the same online ALTER TABLE statement caused mysqld to fail. Although this issue affected only the mysqld shipped with MySQL Cluster, the table named in the ALTER TABLE could use any storage engine for which online operations are supported. (Bug #12755722)
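
    A statement of the following general form could trigger the failure (table, column, and index names are illustrative):

    ALTER ONLINE TABLE t1 ADD COLUMN c2 INT COLUMN_FORMAT DYNAMIC, ADD INDEX i2 (c2);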

  • Cluster API: When an NDB API application called NdbScanOperation::nextResult() again after the previous call had returned end-of-file (return code 1), a transaction object was leaked. Now when this happens, NDB returns error code 4210 (Ndb sent more info than length specified); previously in such cases, -1 was returned. In addition, the extra transaction object associated with the scan is freed, by returning it to the transaction coordinator's idle list. (Bug #11748194)

Changes in MySQL Cluster NDB 6.3.47 (5.1.56-ndb-6.3.47)

Bugs Fixed

  • When multiple data nodes failed during a local checkpoint (LCP) that took a long time to complete, and the failed nodes included the node designated as master, any new data nodes attempting to start before all ongoing LCPs had completed crashed later. This was due to the fact that node takeover by the new master cannot be completed until there are no pending local checkpoints. Long-running LCPs such as those which triggered this issue can occur when fragment sizes are sufficiently large (see MySQL Cluster Nodes, Node Groups, Replicas, and Partitions, for more information). Now in such cases, data nodes (other than the new master) are kept from restarting until the takeover is complete. (Bug #13323589)

  • When deleting from multiple tables using a unique key in the WHERE condition, the wrong rows were deleted. In addition, UPDATE triggers failed when rows were changed by deleting from or updating multiple tables. (Bug #12718336, Bug #61705, Bug #12728221)

Changes in MySQL Cluster NDB 6.3.46 (5.1.56-ndb-6.3.46)

Bugs Fixed

  • When replicating DML statements with IGNORE between clusters, the number of operations that failed due to nonexistent keys was expected to be no greater than the number of defined operations of any single type. Because the slave SQL thread defines operations of multiple types in batches together, code which relied on this assumption could cause mysqld to fail. (Bug #12859831)

  • When failure handling of an API node took longer than 300 seconds, extra debug information was included in the resulting output. In cases where the API node's node ID was greater than 48, these extra debug messages could lead to a crash, and to confusing output otherwise. This was due to an attempt to provide information specific to data nodes for API nodes as well. (Bug #62208)

  • In rare cases, a series of node restarts and crashes during restarts could lead to errors while reading the redo log. (Bug #62206)

Changes in MySQL Cluster NDB 6.3.45 (5.1.56-ndb-6.3.45)

Bugs Fixed

  • When global checkpoint indexes were written with no intervening end-of-file or megabyte border markers, this could sometimes lead to a situation in which the end of the redo log was mistakenly regarded as being between these GCIs, so that if the restart of a data node took place before the start of the next redo log was overwritten, the node encountered an Error while reading the REDO log. (Bug #12653993, Bug #61500)

    References: See also Bug #56961.

  • Error reporting has been improved for cases in which API nodes are unable to connect due to apparent unavailability of node IDs. (Bug #12598398)

  • Error messages generated for Failed to convert connection transporter registration problems were nonspecific. (Bug #12589691)

  • Under certain rare circumstances, a data node process could fail with Signal 11 during a restart. This was due to uninitialized variables in the QMGR kernel block. (Bug #12586190)

  • Handling of the MaxNoOfTables and MaxNoOfAttributes configuration parameters was not consistent in all parts of the NDB kernel; they were strictly enforced only by the DBDICT and SUMA kernel blocks. This could lead to problems in which tables could be created but not replicated. Now these parameters are treated by SUMA and DBDICT as suggested maximums rather than hard limits, as they are elsewhere in the NDB kernel. (Bug #61684)

  • Cluster API: Within a transaction, after creating, executing, and closing a scan, calling NdbTransaction::refresh() after creating and executing but not closing a second scan caused the application to crash. (Bug #12646659)

Changes in MySQL Cluster NDB 6.3.44 (5.1.56-ndb-6.3.44)

Bugs Fixed

  • Two unused test files in storage/ndb/test/sql contained incorrect versions of the GNU Lesser General Public License. The files and the directory containing them have been removed. (Bug #11810156)

    References: See also Bug #11810224.

Changes in MySQL Cluster NDB 6.3.43 (5.1.56-ndb-6.3.43)

Bugs Fixed

  • Cluster API: Performing interpreted operations using a unique index did not work correctly, because the interpret bit was kept when sending the lookup to the index table.

Changes in MySQL Cluster NDB 6.3.42 (5.1.51-ndb-6.3.42)

Bugs Fixed

  • A scan with a pushed condition (filter) using the CommittedRead lock mode could hang for a short interval when it was aborted just as it had decided to send a batch. (Bug #11932525)

  • When aborting a multi-read range scan exactly as it was changing ranges in the local query handler, LQH could fail to detect it, leaving the scan hanging. (Bug #11929643)

  • Disk Data: Limits imposed by the size of SharedGlobalMemory were not always enforced consistently with regard to Disk Data undo buffers and log files. This could sometimes cause a CREATE LOGFILE GROUP or ALTER LOGFILE GROUP statement to fail for no apparent reason, or cause the log file group specified by InitialLogFileGroup not to be created when starting the cluster. (Bug #57317)

Changes in MySQL Cluster NDB 6.3.41 (5.1.51-ndb-6.3.41)

Changes in MySQL Cluster NDB 6.3.40 (5.1.51-ndb-6.3.40)

Functionality Added or Changed

  • Added the --skip-broken-objects option for ndb_restore. This option causes ndb_restore to ignore tables corrupted due to missing blob parts tables, and to continue reading from the backup file and restoring the remaining tables. (Bug #54613)

    References: See also Bug #51652.
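
    An illustrative invocation of the --skip-broken-objects option (node ID, backup ID, and backup path are examples only):

    shell> ndb_restore --nodeid=1 --backupid=5 --backup-path=/var/lib/mysql-cluster/BACKUP/BACKUP-5 --restore-data --skip-broken-objects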

Bugs Fixed

  • Two related problems could occur with read-committed scans made in parallel with transactions combining multiple (concurrent) operations:

    1. When committing a multiple-operation transaction that contained concurrent insert and update operations on the same record, the commit arrived first for the insert and then for the update. If a read-committed scan arrived between these operations, it could thus read incorrect data; in addition, if the scan read variable-size data, it could cause the data node to fail.

    2. When rolling back a multiple-operation transaction having concurrent delete and insert operations on the same record, the abort arrived first for the delete operation, and then for the insert. If a read-committed scan arrived between the delete and the insert, it could incorrectly assume that the record should not be returned (in other words, the scan treated the insert as though it had not yet been committed).

    (Bug #59496)

  • A row insert or update followed by a delete operation on the same row within the same transaction could in some cases lead to a buffer overflow. (Bug #59242)

    References: See also Bug #56524. This bug was introduced by Bug #35208.

  • The FAIL_REP signal, used inside the NDB kernel to declare that a node has failed, now includes the node ID of the node that detected the failure. This information can be useful in debugging. (Bug #58904)

  • In some circumstances, an SQL trigger on an NDB table could read stale data. (Bug #58538)

  • During a node takeover, it was possible in some circumstances for one of the remaining nodes to send an extra transaction confirmation (LQH_TRANSCONF) signal to the DBTC kernel block, conceivably leading to a crash of the data node trying to take over as the new transaction coordinator. (Bug #58453)

  • A query having multiple predicates joined by OR in the WHERE clause and which used the sort_union access method (as shown using EXPLAIN) could return duplicate rows. (Bug #58280)

  • Trying to drop an index while it was being used to perform scan updates caused data nodes to crash. (Bug #58277, Bug #57057)

  • When handling failures of multiple data nodes, an error in the construction of internal signals could cause the cluster's remaining nodes to crash. This issue was most likely to affect clusters with large numbers of data nodes. (Bug #58240)

  • Some queries of the form SELECT ... WHERE column IN (subquery) against an NDB table could cause mysqld to hang in an endless loop. (Bug #58163)

  • The number of rows affected by a statement that used a WHERE clause having an IN condition with a value list containing a great many elements, and that deleted or updated enough rows such that NDB processed them in batches, was not computed or reported correctly. (Bug #58040)

  • A query using BETWEEN as part of a pushed-down WHERE condition could cause mysqld to hang or crash. (Bug #57735)

  • In some circumstances, it was possible for mysqld to begin a new multi-range read scan without having closed a previous one. This could lead to exhaustion of all scan operation objects, transaction objects, or lock objects (or some combination of these) in NDB, causing queries to fail with such errors as Lock wait timeout exceeded or Connect failure - out of connection objects. (Bug #57481)

    References: See also Bug #58750.

  • Queries using column IS [NOT] NULL on a table with a unique index created with USING HASH on column always returned an empty result. (Bug #57032)
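
    A minimal sketch of the kind of table and query affected (table and column names are illustrative):

    CREATE TABLE t1 (
        pk INT PRIMARY KEY,
        c INT NOT NULL,
        UNIQUE KEY (c) USING HASH
    ) ENGINE=NDB;

    SELECT * FROM t1 WHERE c IS NOT NULL;  -- incorrectly returned an empty result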

  • With engine_condition_pushdown enabled, a query using LIKE on an ENUM column of an NDB table failed to return any results. This issue is resolved by disabling engine_condition_pushdown when performing such queries. (Bug #53360)
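
    For example (names illustrative), with the variable enabled, a query such as the following could return no rows even when matching rows existed:

    SET engine_condition_pushdown = 1;
    SELECT * FROM t1 WHERE enum_col LIKE 'a%';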

  • When a slash character (/) was used as part of the name of an index on an NDB table, attempting to execute a TRUNCATE TABLE statement on the table failed with the error Index not found, and the table was rendered unusable. (Bug #38914)

  • Disk Data: In certain cases, a race condition could occur when DROP LOGFILE GROUP removed the logfile group while a read or write of one of the affected files was in progress, which in turn could lead to a crash of the data node. (Bug #59502)

  • Disk Data: A race condition could sometimes be created when DROP TABLESPACE was run concurrently with a local checkpoint; this could in turn lead to a crash of the data node. (Bug #59501)

  • Disk Data: What should have been an online drop of a multi-column index was actually performed offline. (Bug #55618)

  • Disk Data: When at least one data node was not running, queries against the INFORMATION_SCHEMA.FILES table took an excessive length of time to complete because the MySQL server waited for responses from any stopped nodes to time out. Now, in such cases, MySQL does not attempt to contact nodes which are not known to be running. (Bug #54199)

  • Cluster API: Attempting to read the same value (using getValue()) more than 9000 times within the same transaction caused the transaction to hang when executed. Now when more reads are performed in this way than can be accommodated in a single transaction, the call to execute() fails with a suitable error. (Bug #58110)

Changes in MySQL Cluster NDB 6.3.39 (5.1.51-ndb-6.3.39)

Functionality Added or Changed

  • Important Change: The Id configuration parameter used with MySQL Cluster management, data, and API nodes (including SQL nodes) is now deprecated, and the NodeId parameter (long available as a synonym for Id when configuring these types of nodes) should be used instead. Id continues to be supported for reasons of backward compatibility, but now generates a warning when used with these types of nodes, and is subject to removal in a future release of MySQL Cluster.

    This change affects the name of the configuration parameter only, establishing a clear preference for NodeId over Id in the [mgmd], [ndbd], [mysqld], and [api] sections of the MySQL Cluster global configuration (config.ini) file. The behavior of unique identifiers for management, data, and SQL and API nodes in MySQL Cluster has not otherwise been altered.

    The Id parameter as used in the [computer] section of the MySQL Cluster global configuration file is not affected by this change.
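
    For example, a data node might now be defined in config.ini as follows (the node ID and host name shown are illustrative), using NodeId rather than the deprecated Id:

    [ndbd]
    NodeId=3
    HostName=198.51.100.3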

Bugs Fixed

  • Packaging: MySQL Cluster RPM distributions did not include a shared-compat RPM for the MySQL Server, which meant that MySQL applications depending on shared client libraries from older releases (MySQL 5.0 and earlier) no longer worked. (Bug #38596)

  • The LQHKEYREQ request message used by the local query handler when checking the major schema version of a table, being only 16 bits wide, could cause this check to fail with an Invalid schema version error (NDB error code 1227). This issue occurred after creating and dropping (and re-creating) the same table 65537 times, then trying to insert rows into the table. (Bug #57896)

    References: See also Bug #57897.

  • An internal buffer overrun could cause a data node to fail. (Bug #57767)

  • Data nodes compiled with gcc 4.5 or higher crashed during startup. (Bug #57761)

  • ndb_restore now retries failed transactions when replaying log entries, just as it does when restoring data. (Bug #57618)

  • During a GCP takeover, it was possible for one of the data nodes not to receive a SUB_GCP_COMPLETE_REP signal, with the result that it would report itself as GCP_COMMITTING while the other data nodes reported GCP_PREPARING. (Bug #57522)

  • Specifying a WHERE clause of the form range1 OR range2 when selecting from an NDB table having a primary key on multiple columns could result in Error 4259 Invalid set of range scan bounds if range2 started exactly where range1 ended and the primary key definition declared the columns in a different order relative to the order in the table's column list. (Such a query should simply return all rows in the table, since any expression value < constant OR value >= constant is always true.)

    Example. Suppose t is an NDB table whose definition takes the following general form (the statement shown is a minimal reconstruction consistent with this description: the primary key spans multiple columns, declared in a different order from the table's column list):

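    CREATE TABLE t (
        a INT,
        b INT,
        PRIMARY KEY (b, a)
    ) ENGINE=NDB;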

    This issue could then be triggered by a query such as this one:

    SELECT * FROM t WHERE b < 8 OR b >= 8;

    In addition, the order of the ranges in the WHERE clause was significant; the issue was not triggered, for example, by the query SELECT * FROM t WHERE b <= 8 OR b > 8. (Bug #57396)

  • A GCP stop is detected using two parameters which determine the maximum time that a global checkpoint or epoch can go unchanged; one of these controls this timeout for GCPs and one controls the timeout for epochs. Suppose the cluster is configured such that TimeBetweenEpochsTimeout is 100 ms but HeartbeatIntervalDbDb is 1500 ms. A node failure can be signalled after 4 missed heartbeats (in this case, 6000 ms). However, this would exceed TimeBetweenEpochsTimeout, causing false detection of a GCP stop. To prevent this from happening, the configured value for TimeBetweenEpochsTimeout is automatically adjusted, based on the values of HeartbeatIntervalDbDb and ArbitrationTimeout.

    The current issue arose because the automatic adjustment routine did not correctly take into consideration the fact that, during cascading node failures, several intervals of length 4 * (HeartbeatIntervalDbDb + ArbitrationTimeout) may elapse before all node failures have been resolved internally. This could cause false detection of a GCP stop in the event of a cascading node failure. (Bug #57322)

  • Queries using WHERE varchar_pk_column LIKE 'pattern%' or WHERE varchar_pk_column LIKE 'pattern_' against an NDB table having a VARCHAR column as its primary key failed to return all matching rows. (Bug #56853)

  • When a data node angel process failed to fork off a new worker process (to replace one that had failed), the failure was not handled. This meant that the angel process either transformed itself into a worker process, or itself failed. In the first case, the data node continued to run, but there was no longer any angel to restart it in the event of failure, even with StopOnError set to 0. (Bug #53456)

  • Disk Data: Adding unique indexes to NDB Disk Data tables could take an extremely long time. This was particularly noticeable when using ndb_restore --rebuild-indexes. (Bug #57827)

  • Cluster API: An application dropping a table at the same time that another application tried to set up a replication event on the same table could lead to a crash of the data node. The same issue could sometimes cause NdbEventOperation::execute() to hang. (Bug #57886)

  • Cluster API: An NDB API client program under load could abort with an assertion error in TransporterFacade::remove_from_cond_wait_queue. (Bug #51775)

    References: See also Bug #32708.

Changes in MySQL Cluster NDB 6.3.38 (5.1.47-ndb-6.3.38)

Functionality Added or Changed

  • mysqldump as supplied with MySQL Cluster now has an --add-drop-trigger option which adds a DROP TRIGGER IF EXISTS statement before each dumped trigger definition. (Bug #55691)

    References: See also Bug #34325, Bug #11747863.
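
    An illustrative invocation of the new --add-drop-trigger option (database and output file names are examples):

    shell> mysqldump --add-drop-trigger mydb > mydb_dump.sql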

  • Cluster API: The MGM API function ndb_mgm_get_version(), which was previously internal, has now been moved to the public API. This function can be used to get NDB storage engine and other version information from the management server. (Bug #51310)

    References: See also Bug #51273.

Bugs Fixed

  • A data node can be shut down having completed and synchronized a given GCI x, while having written a great many log records belonging to the next GCI x + 1, as part of normal operations. However, when starting, completing, and synchronizing GCI x + 1 after a restart, the log records from the original run must not be read. To make sure that this does not happen, the REDO log reader finds the last GCI to restore, scans forward from that point, and erases any log records that were not (and should never be) used.

    The current issue occurred because this scan stopped immediately as soon as it encountered an empty page. This was problematic because the REDO log is divided into several files; thus, it could be that there were log records in the beginning of the next file, even if the end of the previous file was empty. These log records were never invalidated; following a start or restart, they could be reused, leading to a corrupt REDO log. (Bug #56961)

  • An error in program flow in ndbd.cpp could result in data node shutdown routines being called multiple times. (Bug #56890)

  • When distributing CREATE TABLE and DROP TABLE operations among several SQL nodes attached to a MySQL Cluster, the LOCK_OPEN lock normally protecting mysqld's internal table list is released so that other queries or DML statements are not blocked. However, to make sure that other DDL is not executed simultaneously, a global schema lock (implemented as a row-level lock by NDB) is used, such that all operations that can modify the state of the mysqld internal table list also need to acquire this global schema lock. The SHOW TABLE STATUS statement did not acquire this lock. (Bug #56841)

  • DROP DATABASE could sometimes leave behind a cached table object, which caused problems with subsequent DDL operations. (Bug #56840)

  • Memory pages used for DataMemory, once assigned to ordered indexes, were never freed, even after any rows belonging to the corresponding indexes had been deleted. (Bug #56829)

  • MySQL Cluster stores, for each row in each NDB table, a Global Checkpoint Index (GCI) which identifies the last committed transaction that modified the row. As such, a GCI can be thought of as a coarse-grained row version.

    Due to changes in the format used by NDB to store local checkpoints (LCPs) in MySQL Cluster NDB 6.3.11, it could happen that, following cluster shutdown and subsequent recovery, the GCI values for some rows could be changed unnecessarily; this could possibly, over the course of many node or system restarts (or both), lead to an inconsistent database. (Bug #56770)

  • When multiple SQL nodes were connected to the cluster and one of them stopped in the middle of a DDL operation, the mysqld process issuing the DDL timed out with the error distributing tbl_name timed out. Ignoring. (Bug #56763)

  • An online ALTER TABLE ... ADD COLUMN operation that increased the number of 32-bit words needed for the bitmask allocated to each DML operation could lead to data node failure when, within a single transaction, DML was performed before the DDL and was followed by either another DML operation or (when using replication) a commit.

    This was because the data node did not take into account that the bitmask for the before-image was smaller than the current bitmask, which caused the node to crash. (Bug #56524)

    References: This bug is a regression of Bug #35208.

  • The text file cluster_change_hist.txt containing old MySQL Cluster changelog information was no longer being maintained, and so has been removed from the tree. (Bug #56116)

  • The failure of a data node during some scans could cause other data nodes to fail. (Bug #54945)

  • Exhausting the number of available commit-ack markers (controlled by the MaxNoOfConcurrentTransactions parameter) led to a data node crash. (Bug #54944)

  • When running a SELECT on an NDB table with BLOB or TEXT columns, memory was allocated for the columns but was not freed until the end of the SELECT. This could cause problems with excessive memory usage when dumping (using for example mysqldump) tables with such columns and having many rows, large column values, or both. (Bug #52313)

    References: See also Bug #56488, Bug #50310.

Changes in MySQL Cluster NDB 6.3.37 (5.1.47-ndb-6.3.37)

Bugs Fixed

  • Following a failure of the master data node, the new master sometimes experienced a race condition which caused the node to terminate with a GcpStop error. (Bug #56044)

  • The warning MaxNoOfExecutionThreads (#) > LockExecuteThreadToCPU count (#), this could cause contention could be logged when running ndbd, even though the condition described can occur only when using ndbmtd. (Bug #54342)

  • The graceful shutdown of a data node could sometimes cause transactions to be aborted unnecessarily. (Bug #18538)

    References: See also Bug #55641.

Changes in MySQL Cluster NDB 6.3.36 (5.1.47-ndb-6.3.36)

Bugs Fixed

  • Important Change; Cluster API: The poll and select calls made by the MGM API were not interrupt-safe; that is, a signal caught by the process while waiting for an event on one or more sockets returned error -1 with errno set to EINTR. This caused problems with MGM API functions such as ndb_logevent_get_next() and ndb_mgm_get_status2().

    To fix this problem, the internal ndb_socket_poller::poll() function has been made EINTR-safe.

    The old version of this function has been retained as poll_unsafe(), for use by those parts of NDB that do not need the EINTR-safe version of the function. (Bug #55906)

  • When another data node failed, the DBTC kernel block on a given data node could time out while waiting for DBDIH to signal commits of pending transactions, leading to a crash. Now in such cases the timeout generates a printout, and the data node continues to operate. (Bug #55715)

  • The configure.js option WITHOUT_DYNAMIC_PLUGINS=TRUE was ignored when building MySQL Cluster for Windows using CMake. Among the effects of this issue was that CMake attempted to build the InnoDB storage engine as a plugin (.DLL file) even though the InnoDB Plugin is not currently supported by MySQL Cluster. (Bug #54913)

  • It was possible for a DROP DATABASE statement to remove NDB hidden blob tables without removing the parent tables, with the result that the tables, although hidden to MySQL clients, were still visible in the output of ndb_show_tables but could not be dropped using ndb_drop_table. (Bug #54788)

  • An excessive number of timeout warnings (normally used only for debugging) were written to the data node logs. (Bug #53987)

  • Disk Data: As an optimization when inserting a row into an empty page, the page is not read, but rather simply initialized. However, this optimization was performed whenever a row was inserted into an empty page, even though it should have been done only the first time that the page was used by a table or fragment. This is because, if the page had been in use, and then all records had been released from it, the page still needed to be read to learn its log sequence number (LSN).

    This caused problems only if the page had been flushed using an incorrect LSN and the data node failed before any local checkpoint was completed; completion of a local checkpoint would have removed any need to apply the undo log, in which case the incorrect LSN would have been harmless.

    The user-visible result of the incorrect LSN was that it caused the data node to fail during a restart. It was perhaps also possible (although not conclusively proven) that this issue could lead to incorrect data. (Bug #54986)

  • Cluster API: Calling NdbTransaction::refresh() did not update the timer for TransactionInactiveTimeout. (Bug #54724)

Changes in MySQL Cluster NDB 6.3.35 (5.1.47-ndb-6.3.35)

Functionality Added or Changed

  • Restrictions on some types of mismatches in column definitions when restoring data using ndb_restore have been relaxed. These include the following types of mismatches:

    • Different COLUMN_FORMAT settings (FIXED, DYNAMIC, DEFAULT)

    • Different STORAGE settings (MEMORY, DISK)

    • Different default values

    • Different distribution key settings

    Now, when one of these types of mismatches in column definitions is encountered, ndb_restore no longer stops with an error; instead, it accepts the data and inserts it into the target table, while issuing a warning to the user.

    For more information, see ndb_restore — Restore a MySQL Cluster Backup. (Bug #54423)

    References: See also Bug #53810, Bug #54178, Bug #54242, Bug #54279.

  • Introduced the HeartbeatOrder data node configuration parameter, which can be used to set the order in which heartbeats are transmitted between data nodes. This parameter can be useful in situations where multiple data nodes are running on the same host and a temporary disruption in connectivity between hosts would otherwise cause the loss of a node group, leading to failure of the cluster. (Bug #52182)
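
    For illustration (node IDs and values are examples; when this parameter is used, it should be set to a distinct nonzero value for every data node, or for none of them), the heartbeat order might be set in config.ini like this:

    [ndbd]
    NodeId=1
    HeartbeatOrder=10

    [ndbd]
    NodeId=2
    HeartbeatOrder=20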

Bugs Fixed

  • The disconnection of all API nodes (including SQL nodes) during an ALTER TABLE caused a memory leak. (Bug #54685)

  • If a node shutdown (either in isolation or as part of a system shutdown) occurred directly following a local checkpoint, it was possible that this local checkpoint would not be used when restoring the cluster. (Bug #54611)

  • When performing an online alter table where 2 or more SQL nodes connected to the cluster were generating binary logs, an incorrect message could be sent from the data nodes, causing mysqld processes to crash. This problem was often difficult to detect, because restarting SQL node or data node processes could clear the error, and because the crash in mysqld did not occur until several minutes after the erroneous message was sent and received. (Bug #54168)

  • A table having the maximum number of attributes permitted could not be backed up using the ndb_mgm client.

    The maximum number of attributes supported per table is not the same for all MySQL Cluster releases. See Limits Associated with Database Objects in MySQL Cluster, to determine the maximum that applies in the release which you are using.

    (Bug #54155)

  • During initial node restarts, initialization of the REDO log was always performed 1 node at a time, during start phase 4. Now this is done during start phase 2, so that the initialization can be performed in parallel, thus decreasing the time required for initial restarts involving multiple nodes. (Bug #50062)

  • Cluster API: When using the NDB API, it was possible to rename a table with the same name as that of an existing table.

    This issue did not affect table renames executed using SQL on MySQL servers acting as MySQL Cluster API nodes.

    (Bug #54651)

  • Cluster API: An excessive number of client connections, such that more than 1024 file descriptors, sockets, or both were open, caused NDB API applications to crash. (Bug #34303)

Changes in MySQL Cluster NDB 6.3.34 (5.1.44-ndb-6.3.34)

Functionality Added or Changed

  • A --wait-nodes option has been added for ndb_waiter. When this option is used, the program waits only for the nodes having the listed IDs to reach the desired state. For more information, see ndb_waiter — Wait for MySQL Cluster to Reach a Given Status. (Bug #52323)
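
    An illustrative invocation (connection string and node IDs are examples):

    shell> ndb_waiter --ndb-connectstring=mgmhost --wait-nodes=1,3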

  • Added the --skip-unknown-objects option for ndb_restore. This option causes ndb_restore to ignore any schema objects which it does not recognize. Currently, this is useful chiefly for restoring native backups made from a cluster running MySQL Cluster NDB 7.0 to a cluster running MySQL Cluster NDB 6.3.

Bugs Fixed

  • Incompatible Change; Cluster API: The default behavior of the NDB API Event API has changed as follows:

    Previously, when creating an Event, DDL operations (alter and drop operations on tables) were automatically reported on any event operation that used this event, but as a result of this change, this is no longer the case. Instead, you must now invoke the event's setReport() method, with the new EventReport value ER_DDL, to get this behavior.

    For existing NDB API applications where you wish to retain the old behavior, you must update the code as indicated previously and recompile after upgrading. Otherwise, DDL operations are no longer reported after upgrading libndbclient.

    For more information, see The Event::EventReport Type, and Event::setReport(). (Bug #53308)

  • When attempting to create an NDB table on an SQL node that had not yet connected to a MySQL Cluster management server since the SQL node's last restart, the CREATE TABLE statement failed as expected, but with the unexpected Error 1495 For the partitioned engine it is necessary to define all partitions. (Bug #11747335, Bug #31853)

  • Creating a Disk Data table, dropping it, then creating an in-memory table and performing a restart, could cause data node processes to fail with errors in the DBTUP kernel block if the new table's internal ID was the same as that of the old Disk Data table. This could occur because undo log handling during the restart did not check that the table having this ID was now in-memory only. (Bug #53935)

  • A table created while ndb_table_no_logging was enabled was not always stored to disk, which could lead to a data node crash with Error opening DIH schema files for table. (Bug #53934)

  • An internal buffer allocator used by NDB has the form alloc(wanted, minimum) and attempts to allocate wanted pages, but is permitted to allocate a smaller number of pages, between wanted and minimum. However, this allocator could sometimes allocate fewer than minimum pages, causing problems with multi-threaded building of ordered indexes. (Bug #53580)

  • When NDB was compiled with support for epoll but this functionality was not available at runtime, MySQL Cluster tried to fall back to using the select() function in its place. However, an extra ndbout_c() call in the transporter registry code caused ndbd to fail instead. (Bug #53482)

  • NDB truncated a column declared as DECIMAL(65,0) to a length of 64. Now such a column is accepted and handled correctly. In cases where the maximum length (65) is exceeded, NDB now raises an error instead of truncating. (Bug #53352)
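
    For example (table names illustrative):

    CREATE TABLE td (d DECIMAL(65,0)) ENGINE=NDB;  -- now accepted and handled correctly
    CREATE TABLE te (d DECIMAL(66,0)) ENGINE=NDB;  -- now raises an error rather than truncating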

  • When a watchdog shutdown occurred due to an error, the process was not terminated quickly enough, sometimes resulting in a hang. (To correct this, the internal _exit() function is now called in such situations, rather than exit().) (Bug #53246)

  • Setting DataMemory higher than 4G on 32-bit platforms caused ndbd to crash, instead of failing gracefully with an error. (Bug #52536, Bug #50928)

  • NDB did not distinguish correctly between table names differing only by lettercase when lower_case_table_names was set to 0. (Bug #33158)

  • ndb_mgm -e "ALL STATUS" erroneously reported that data nodes remained in start phase 0 until they had actually started.

Changes in MySQL Cluster NDB 6.3.33 (5.1.44-ndb-6.3.33)

Functionality Added or Changed

  • Cluster API: It is now possible to determine, using the ndb_desc utility or the NDB API, which data nodes contain replicas of which partitions. For ndb_desc, a new --extra-node-info option is added to cause this information to be included in its output. A new method Table::getFragmentNodes() is added to the NDB API for obtaining this information programmatically. (Bug #51184)

  • Formerly, the REPORT and DUMP commands returned output to all ndb_mgm clients connected to the same MySQL Cluster. Now, these commands return their output only to the ndb_mgm client that actually issued the command. (Bug #40865)

Bugs Fixed

  • If a node or cluster failure occurred while mysqld was scanning the ndb.ndb_schema table (which it does when attempting to connect to the cluster), insufficient error handling could cause mysqld to crash in certain cases. This could happen in a MySQL Cluster with a great many tables, when trying to restart data nodes while one or more mysqld processes were restarting. (Bug #52325)

  • After running a mixed series of node and system restarts, a system restart could hang or fail altogether. This was caused by setting the value of the newest completed global checkpoint too low for a data node performing a node restart, which led to the node reporting incorrect GCI intervals for its first local checkpoint. (Bug #52217)

  • When performing a complex mix of node restarts and system restarts, the node that was elected as master sometimes required optimized node recovery due to missing REDO information. When this happened, the node crashed with Failure to recreate object ... during restart, error 721 (because the DBDICT restart code was run twice). Now when this occurs, node takeover is executed immediately, rather than being made to wait until the remaining data nodes have started. (Bug #52135)

    References: See also Bug #48436.

  • The redo log protects itself from being filled up by periodically checking how much space remains free. If insufficient redo log space is available, it sets the state TAIL_PROBLEM which results in transactions being aborted with error code 410 (out of redo log). However, this state was not set following a node restart, which meant that if a data node had insufficient redo log space following a node restart, it could crash a short time later with Fatal error due to end of REDO log. Now, this space is checked during node restarts. (Bug #51723)

  • The output of the ndb_mgm client REPORT BACKUPSTATUS command could sometimes contain errors due to uninitialized data. (Bug #51316)

  • A GROUP BY query against NDB tables sometimes did not use any indexes unless the query included a FORCE INDEX option. With this fix, indexes are used by such queries (where otherwise possible) even when FORCE INDEX is not specified. (Bug #50736)

  • The ndb_mgm client sometimes inserted extra prompts within the output of the REPORT MEMORYUSAGE command. (Bug #50196)

  • Issuing a command in the ndb_mgm client after it had lost its connection to the management server could cause the client to crash. (Bug #49219)

  • The ndb_print_backup_file utility failed to function, due to a previous internal change in the NDB code. (Bug #41512, Bug #48673)

  • When the MemReportFrequency configuration parameter was set in config.ini, the ndb_mgm client REPORT MEMORYUSAGE command printed its output multiple times. (Bug #37632)

  • ndb_mgm -e "... REPORT ..." did not write any output to stdout.

    The fix for this issue also prevents the cluster log from being flooded with INFO messages when DataMemory usage reaches 100%, and ensures that when the usage is decreased, an appropriate message is written to the cluster log. (Bug #31542, Bug #44183, Bug #49782)

  • Disk Data: Inserts of blob column values into a MySQL Cluster Disk Data table that exhausted the tablespace resulted in misleading no such tuple error messages rather than the expected error tablespace full.

    This issue appeared similar to Bug #48113, but had a different underlying cause. (Bug #52201)

  • Disk Data: The error message returned after attempting to execute ALTER LOGFILE GROUP on a nonexistent logfile group did not indicate the reason for the failure. (Bug #51111)

  • Cluster API: When reading blob data with lock mode LM_SimpleRead, the lock was not upgraded as expected. (Bug #51034)

  • Cluster API: A number of issues were corrected in the NDB API coding examples found in the storage/ndb/ndbapi-examples directory in the MySQL Cluster source tree. These included possible endless recursion in ndbapi_scan.cpp as well as problems running some of the examples on systems using Windows or Mac OS X due to the lettercase used for some table names. (Bug #30552, Bug #30737)

Changes in MySQL Cluster NDB 6.3.32 (5.1.41-ndb-6.3.32)

Functionality Added or Changed

  • A new configuration parameter HeartbeatThreadPriority makes it possible to select between a first-in, first-out and a round-robin scheduling policy for management node and API node heartbeat threads, as well as to set the priority of these threads. See Defining a MySQL Cluster Management Server, or Defining SQL and Other API Nodes in a MySQL Cluster, for more information. (Bug #49617)
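
    For example, in config.ini (the policy and priority shown are illustrative):

    [mgmd]
    HeartbeatThreadPriority=FIFO,50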

  • Disk Data: The ndb_desc utility can now show the extent space and free extent space for subordinate BLOB and TEXT columns (stored in hidden BLOB tables by NDB). A --blob-info option has been added for this program that causes ndb_desc to generate a report for each subordinate BLOB table. For more information, see ndb_desc — Describe NDB Tables. (Bug #50599)

Bugs Fixed

  • When one or more data nodes read their LCPs and applied undo logs significantly faster than others, this could lead to a race condition causing system restarts of data nodes to hang. This could most often occur when using both ndbd and ndbmtd processes for the data nodes. (Bug #51644)

  • When deciding how to divide the REDO log, the DBDIH kernel block saved more than was needed to restore the previous local checkpoint, which could cause REDO log space to be exhausted prematurely (NDB error 410). (Bug #51547)

  • DML operations can fail with NDB error 1220 (REDO log files overloaded...) if the opening and closing of REDO log files takes too much time. If this occurred as a GCI marker was being written in the REDO log while REDO log file 0 was being opened or closed, the error could persist until a GCP stop was encountered. This issue could be triggered when there was insufficient REDO log space (for example, with configuration parameter settings NoOfFragmentLogFiles = 6 and FragmentLogFileSize = 6M) with a load including a very high number of updates. (Bug #51512)

    References: See also Bug #20904.

  • During an online upgrade from MySQL Cluster NDB 6.2 to MySQL Cluster NDB 6.3, a sufficiently large amount of traffic with more than 1 DML operation per transaction could cause an NDB 6.3 data node to crash an NDB 6.2 data node with an internal error in the DBLQH kernel block. (Bug #51389)

  • A side effect of the ndb_restore --disable-indexes and --rebuild-indexes options is to change the schema versions of indexes. When a mysqld later tried to drop a table that had been restored from backup using one or both of these options, the server failed to detect these changed indexes. This caused the table to be dropped, but the indexes to be left behind, leading to problems with subsequent backup and restore operations. (Bug #51374)

  • ndb_restore crashed while trying to restore a corrupted backup, due to missing error handling. (Bug #51223)

  • The ndb_restore message Successfully created index `PRIMARY`... was directed to stderr instead of stdout. (Bug #51037)

  • When using NoOfReplicas equal to 1 or 2, if data nodes from one node group were restarted 256 times and applications were running traffic such that it would encounter NDB error 1204 (Temporary failure, distribution changed), the live node in the node group would crash, causing the cluster to crash as well. The crash occurred only when the error was encountered on the 256th restart; having the error on any previous or subsequent restart did not cause any problems. (Bug #50930)

  • The AUTO_INCREMENT option for ALTER TABLE did not reset AUTO_INCREMENT columns of NDB tables. (Bug #50247)
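
    For example, the following statement now resets the AUTO_INCREMENT counter of an NDB table t1 as expected (names illustrative):

    ALTER TABLE t1 AUTO_INCREMENT = 1;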

  • A SELECT requiring a sort could fail with the error Can't find record in 'table' when run concurrently with a DELETE from the same table. (Bug #45687)

  • Disk Data: For a Disk Data tablespace whose extent size was not equal to a whole multiple of 32K, the value of the FREE_EXTENTS column in the INFORMATION_SCHEMA.FILES table was smaller than the value of TOTAL_EXTENTS.

    As part of this fix, the implicit rounding of INITIAL_SIZE, EXTENT_SIZE, and UNDO_BUFFER_SIZE performed by NDBCLUSTER (see CREATE TABLESPACE Syntax) is now done explicitly, and the rounded values are used for calculating INFORMATION_SCHEMA.FILES column values and other purposes. (Bug #49709)

    References: See also Bug #31712.

  • Disk Data: Once all data files associated with a given tablespace had been dropped, there was no way for MySQL client applications (including the mysql client) to tell that the tablespace still existed. To remedy this problem, INFORMATION_SCHEMA.FILES now holds an additional row for each tablespace. (Previously, only the data files in each tablespace were shown.) This row shows TABLESPACE in the FILE_TYPE column, and NULL in the FILE_NAME column. (Bug #31782)

  • Disk Data: It was possible to issue a CREATE TABLESPACE or ALTER TABLESPACE statement in which INITIAL_SIZE was less than EXTENT_SIZE. (In such cases, INFORMATION_SCHEMA.FILES erroneously reported the value of the FREE_EXTENTS column as 1 and that of the TOTAL_EXTENTS column as 0.) Now when either of these statements is issued such that INITIAL_SIZE is less than EXTENT_SIZE, the statement fails with an appropriate error message. (Bug #31712)

    References: See also Bug #49709.

  • Cluster API: An issue internal to ndb_mgm could cause problems when trying to start a large number of data nodes at the same time. (Bug #51273)

    References: See also Bug #51310.

Changes in MySQL Cluster NDB 6.3.31b (5.1.41-ndb-6.3.31b)

Bugs Fixed

  • Setting IndexMemory greater than 2GB could cause data nodes to crash while starting. (Bug #51256)

Changes in MySQL Cluster NDB 6.3.31a (5.1.41-ndb-6.3.31a)

Bugs Fixed

  • An initial restart of a data node configured with a large amount of memory could fail with a Pointer too large error. (Bug #51027)

    References: This bug was introduced by Bug #47818.

Changes in MySQL Cluster NDB 6.3.31 (5.1.41-ndb-6.3.31)

Functionality Added or Changed

  • Important Change: The maximum permitted value of the ndb_autoincrement_prefetch_sz system variable has been increased from 256 to 65536. (Bug #50621)
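
    For example, the variable can now be set to the new maximum:

    SET GLOBAL ndb_autoincrement_prefetch_sz = 65536;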

Bugs Fixed

  • Setting BuildIndexThreads greater than 1 with more than 31 ordered indexes caused node and system restarts to fail. (Bug #50266)

  • Dropping unique indexes in parallel while they were in use could cause node and cluster failures. (Bug #50118)

  • When setting the LockPagesInMainMemory configuration parameter failed, only the error Failed to memlock pages... was returned. Now in such cases the operating system's error code is also returned. (Bug #49724)

  • If a query on an NDB table compared a constant string value to a column, and the length of the string was greater than that of the column, condition pushdown did not work correctly. (The string was truncated to fit the column length before being pushed down.) Now in such cases, the condition is no longer pushed down. (Bug #49459)

  • Performing intensive inserts and deletes in parallel with a high scan load could cause data node crashes due to a failure in the DBACC kernel block. This was because checking for when to perform bucket splits or merges considered only the first 4 scans. (Bug #48700)

  • During Start Phases 1 and 2, the STATUS command sometimes (falsely) returned Not Connected for data nodes running ndbmtd. (Bug #47818)

  • When performing a DELETE that included a left join from an NDB table, only the first matching row was deleted. (Bug #47054)

  • mysqld could sometimes crash during a commit while trying to handle NDB Error 4028 Node failure caused abort of transaction. (Bug #38577)

  • When setting LockPagesInMainMemory, the stated memory was not allocated when the node was started, but rather only when the memory was used by the data node process for other reasons. (Bug #37430)

  • Trying to insert more rows than would fit into an NDB table caused data nodes to crash. Now in such situations, the insert fails gracefully with error 633 Table fragment hash index has reached maximum possible size. (Bug #34348)

  • Disk Data: When a crash occurs due to a problem in Disk Data code, the currently active page list is printed to stdout (that is, in one or more ndb_nodeid_out.log files). One of these lists could contain an endless loop; this caused a printout that was effectively never-ending. Now in such cases, a maximum of 512 entries is printed from each list. (Bug #42431)

Changes in MySQL Cluster NDB 6.3.30 (5.1.39-ndb-6.3.30)

Functionality Added or Changed

  • Added multi-threaded ordered index building capability during system restarts or node restarts, controlled by the BuildIndexThreads data node configuration parameter (also introduced in this release).

Changes in MySQL Cluster NDB 6.3.29 (5.1.39-ndb-6.3.29)

Functionality Added or Changed

  • This enhanced functionality is supported for upgrades to MySQL Cluster NDB 7.0 when the NDB engine version is 7.0.10 or later. (Bug #48528, Bug #49163)

  • The output from ndb_config --configinfo --xml now indicates, for each configuration parameter, the following restart type information:

    • Whether a system restart or a node restart is required when resetting that parameter;

    • Whether cluster nodes need to be restarted using the --initial option when resetting the parameter.

    (Bug #47366)

Bugs Fixed

  • Node takeover during a system restart occurs when the REDO log for one or more data nodes is out of date, so that a node restart is invoked for that node or those nodes. If this happens while a mysqld process is attached to the cluster as an SQL node, the mysqld takes a global schema lock (a row lock), while trying to set up cluster-internal replication.

    However, this setup process could fail, causing the global schema lock to be held for an excessive length of time, which made the node restart hang as well. As a result, the mysqld failed to set up cluster-internal replication, which led to tables being read only, and caused one node to hang during the restart.

    This issue could actually occur in MySQL Cluster NDB 7.0 only, but the fix was also applied in MySQL Cluster NDB 6.3, to keep the two codebases in alignment.

    (Bug #49560)

  • Sending SIGHUP to a mysqld running with the --ndbcluster and --log-bin options caused the process to crash instead of refreshing its log files. (Bug #49515)

  • If the master data node receiving a request from a newly started API or data node for a node ID died before the request had been handled, the management server waited (and kept a mutex) until all handling of this node failure was complete before responding to any other connections, instead of responding to other connections as soon as it was informed of the node failure (that is, it waited until it had received a NF_COMPLETEREP signal rather than a NODE_FAILREP signal). One visible effect of this misbehavior was that it caused management client commands such as SHOW and ALL STATUS to respond with unnecessary slowness in such circumstances. (Bug #49207)

  • When evaluating the options --include-databases, --include-tables, --exclude-databases, and --exclude-tables, the ndb_restore program overwrote the result of the database-level options with the result of the table-level options rather than merging these results together, sometimes leading to unexpected and unpredictable results.

    As part of the fix for this problem, the semantics of these options have been clarified; because of this, the rules governing their evaluation have changed slightly. These changes can be summed up as follows:

    • All --include-* and --exclude-* options are now evaluated from right to left in the order in which they are passed to ndb_restore.

    • All --include-* and --exclude-* options are now cumulative.

    • In the event of a conflict, the first (rightmost) option takes precedence.

    For more detailed information and examples, see ndb_restore — Restore a MySQL Cluster Backup. (Bug #48907)
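
    As a sketch of the new semantics (database, table, and path names are examples), the following invocation restores all tables in db1 except db1.t1, since the rightmost matching option takes precedence:

    shell> ndb_restore --nodeid=1 --backupid=5 --backup-path=/var/lib/mysql-cluster/BACKUP/BACKUP-5 --restore-data --include-databases=db1 --exclude-tables=db1.t1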

  • When performing tasks that generated large amounts of I/O (such as when using ndb_restore), an internal memory buffer could overflow, causing data nodes to fail with signal 6.

    Subsequent analysis showed that this buffer was not actually required, so this fix removes it. (Bug #48861)

  • Exhaustion of send buffer memory or long signal memory caused data nodes to crash. Now an appropriate error message is provided instead when this situation occurs. (Bug #48852)

  • Under certain conditions, accounting of the number of free scan records in the local query handler could be incorrect, so that during node recovery or local checkpoint operations, the LQH could find itself lacking a scan record that it expected to find, causing the node to crash. (Bug #48697)

    References: See also Bug #48564.

  • The creation of an ordered index on a table undergoing DDL operations could cause a data node crash under certain timing-dependent conditions. (Bug #48604)

  • During an LCP master takeover, when the newly elected master did not receive a COPY_GCI LCP protocol message but other nodes participating in the local checkpoint had received one, the new master could use an uninitialized variable, which caused it to crash. (Bug #48584)

  • When running many parallel scans, a local checkpoint (which performs a scan internally) could find itself not getting a scan record, which led to a data node crash. Now an extra scan record is reserved for this purpose, and a problem with obtaining the scan record returns an appropriate error (error code 489, Too many active scans). (Bug #48564)

  • During a node restart, logging was enabled on a per-fragment basis as the copying of each fragment was completed, but local checkpoints were not enabled until all fragments were copied, making it possible to run out of redo log file space (NDB error code 410) before the restart was complete. Now logging is enabled only after all fragments have been copied, just prior to enabling local checkpoints. (Bug #48474)

  • When employing NDB native backup to back up and restore an empty NDB table that used a non-sequential AUTO_INCREMENT value, the AUTO_INCREMENT value was not restored correctly. (Bug #48005)

  • ndb_config --xml --configinfo now indicates that parameters belonging in the [SCI], [SCI DEFAULT], [SHM], and [SHM DEFAULT] sections of the config.ini file are deprecated or experimental, as appropriate. (Bug #47365)

  • NDB stores blob column data in a separate, hidden table that is not accessible from MySQL. If this table was missing for some reason (such as accidental deletion of the file corresponding to the hidden table) when making a MySQL Cluster native backup, ndb_restore crashed when attempting to restore the backup. Now in such cases, ndb_restore fails with the error message Table table_name has blob column (column_name) with missing parts table in backup instead. (Bug #47289)

  • DROP DATABASE failed when there were stale temporary NDB tables in the database. This situation could occur if mysqld crashed during execution of a DROP TABLE statement after the table definition had been removed from NDBCLUSTER but before the corresponding .ndb file had been removed from the crashed SQL node's data directory. Now, when mysqld executes DROP DATABASE, it checks for these files and removes them if there are no corresponding table definitions for them found in NDBCLUSTER. (Bug #44529)

  • Creating an NDB table with an excessive number of large BIT columns caused the cluster to fail. Now, an attempt to create such a table is rejected with error 791 (Too many total bits in bitfields). (Bug #42046)

    References: See also Bug #42047.

  • When a long-running transaction lasting long enough to cause Error 410 (REDO log files overloaded) was later committed or rolled back, it could happen that NDBCLUSTER was not able to release the space used for the REDO log, so that the error condition persisted indefinitely.

    The most likely cause of such transactions is a bug in the application using MySQL Cluster. This fix should handle most cases where this might occur. (Bug #36500)

  • Deprecation and usage information obtained from ndb_config --configinfo regarding the PortNumber and ServerPort configuration parameters was improved. (Bug #24584)

  • Disk Data: When running a write-intensive workload with a very large disk page buffer cache, CPU usage approached 100% during a local checkpoint of a cluster containing Disk Data tables. (Bug #49532)

  • Disk Data: Repeatedly creating and then dropping Disk Data tables could eventually lead to data node failures. (Bug #45794, Bug #48910)

  • Disk Data: When the FileSystemPathUndoFiles configuration parameter was set to a non-existent path, the data nodes shut down with the generic error code 2341 (Internal program error). Now in such cases, the error reported is error 2815 (File not found).

  • Cluster API: When a DML operation failed due to a uniqueness violation on an NDB table having more than one unique index, it was difficult to determine which constraint caused the failure; it was necessary to obtain an NdbError object, then decode its details property, which in turn could lead to memory management issues in application code.

    To help solve this problem, a new API method Ndb::getNdbErrorDetail() has been added, providing a well-formatted string containing more precise information about the index that caused the unique constraint violation. The following additional changes are also made in the NDB API:

    • Use of NdbError.details is now deprecated in favor of the new method.

    • The Dictionary::listObjects() method has been modified to provide more information.

    (Bug #48851)
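
    To see why this was difficult, consider a table having more than one unique index (the names and values shown here are hypothetical):

    CREATE TABLE t (
        a INT NOT NULL PRIMARY KEY,
        b INT NOT NULL,
        c INT NOT NULL,
        UNIQUE KEY ub (b),
        UNIQUE KEY uc (c)
    ) ENGINE=NDBCLUSTER;

    INSERT INTO t VALUES (1, 1, 1);
    INSERT INTO t VALUES (2, 1, 2);

    The second INSERT fails due to a uniqueness violation on ub, but the error information formerly returned through the NDB API did not readily show whether ub or uc was responsible; Ndb::getNdbErrorDetail() now provides this detail.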

  • Cluster API: When using blobs, calling getBlobHandle() requires the full key to have been set using equal(), because getBlobHandle() must access the key for adding blob table operations. However, if getBlobHandle() was called without first setting all parts of the primary key, the application using it crashed. Now, an appropriate error code is returned instead. (Bug #28116, Bug #48973)

Changes in MySQL Cluster NDB 6.3.28b (5.1.39-ndb-6.3.28b)

Bugs Fixed

  • Using a large number of small fragment log files could cause NDBCLUSTER to crash while trying to read them during a restart. This issue was first observed with 1024 fragment log files of 16 MB each. (Bug #48651)

Changes in MySQL Cluster NDB 6.3.28a (5.1.39-ndb-6.3.28a)

Bugs Fixed

  • When the combined length of all names of tables using the NDB storage engine was greater than or equal to 1024 bytes, issuing the START BACKUP command in the ndb_mgm client caused the cluster to crash. (Bug #48531)

Changes in MySQL Cluster NDB 6.3.28 (5.1.39-ndb-6.3.28)

Functionality Added or Changed

  • Performance: Significant improvements in redo log handling and other file system operations can yield a considerable reduction in the time required for restarts. While actual restart times observed in a production setting will naturally vary according to database size, hardware, and other conditions, our own preliminary testing shows that these optimizations can yield startup times that are faster than those typical of previous MySQL Cluster releases by a factor of 50 or more.

Bugs Fixed

  • Important Change: The --with-ndb-port-base option for configure did not function correctly, and has been deprecated. Attempting to use this option produces the warning Ignoring deprecated option --with-ndb-port-base.

    Beginning with MySQL Cluster NDB 7.1.0, the deprecation warning itself is removed, and the --with-ndb-port-base option is simply handled as an unknown and invalid option if you try to use it. (Bug #47941)

    References: See also Bug #38502.

  • In certain cases, performing very large inserts on NDB tables when using ndbmtd caused the memory allocations for ordered or unique indexes (or both) to be exceeded. This could cause aborted transactions and possibly lead to data node failures. (Bug #48037)

    References: See also Bug #48113.

  • For UPDATE IGNORE statements, batching of updates is now disabled. This is because such statements failed when batching of updates was employed if any of the updates violated a unique constraint, due to the fact that a unique constraint violation could not be handled without aborting the transaction. (Bug #48036)
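
    For example, given the hypothetical table and data shown here, an UPDATE IGNORE statement such as the following could previously fail outright when its updates were batched, because changing the row having a = 2 violates the unique key:

    CREATE TABLE t (
        a INT NOT NULL PRIMARY KEY,
        b INT NOT NULL,
        UNIQUE KEY (b)
    ) ENGINE=NDBCLUSTER;

    INSERT INTO t VALUES (1, 1), (2, 2);

    UPDATE IGNORE t SET b = 1;

    With batching disabled, the conflicting update is skipped and the statement succeeds, as IGNORE intends.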

  • Starting a data node with a very large amount of DataMemory (approximately 90G or more) could lead to a crash of the node due to job buffer congestion. (Bug #47984)

  • When an UPDATE statement was issued against an NDB table where an index was used to identify rows but no data was actually changed, the NDB storage engine reported that zero rows had been matched.

    For example, consider the table created and populated using these statements:

    CREATE TABLE t1 (
        c1 INT NOT NULL,
        c2 INT NOT NULL,
        PRIMARY KEY(c1),
        KEY(c2)
    ) ENGINE=NDBCLUSTER;

    INSERT INTO t1 VALUES(1, 1);

    The following UPDATE statements, even though they did not change any rows, each still matched a row, but this was reported incorrectly in both cases, as shown here:

    mysql> UPDATE t1 SET c2 = 1 WHERE c1 = 1;
    Query OK, 0 rows affected (0.00 sec)
    Rows matched: 0  Changed: 0  Warnings: 0
    mysql> UPDATE t1 SET c1 = 1 WHERE c2 = 1;
    Query OK, 0 rows affected (0.00 sec)
    Rows matched: 0  Changed: 0  Warnings: 0

    Now in such cases, the number of rows matched is correct. (In the case of each of the example UPDATE statements just shown, this is displayed as Rows matched: 1, as it should be.)

    This issue could affect UPDATE statements involving any indexed columns in NDB tables, regardless of the type of index (including KEY, UNIQUE KEY, and PRIMARY KEY) or the number of columns covered by the index. (Bug #47955)

  • On Solaris, shutting down a management node failed when issuing the command to do so from a client connected to a different management node. (Bug #47948)

  • Setting FragmentLogFileSize to a value greater than 256 MB led to errors when trying to read the redo log file. (Bug #47908)

  • SHOW CREATE TABLE did not display the AUTO_INCREMENT value for NDB tables having AUTO_INCREMENT columns. (Bug #47865)

  • Under some circumstances, when a scan encountered an error early in processing by the DBTC kernel block (see The DBTC Block), a node could crash as a result. Such errors could be caused by applications sending incorrect data, or, more rarely, by a DROP TABLE operation executed in parallel with a scan. (Bug #47831)

  • When starting a node and synchronizing tables, memory pages were allocated even for empty fragments. In certain situations, this could lead to insufficient memory. (Bug #47782)

  • A very narrow race condition between NODE_FAILREP and LQH_TRANSREQ signals when handling node failure could lead to operations (locks) not being taken over when they should have been, and subsequently becoming stale. This could lead to node restart failures, and to applications getting into endless lock conflicts with operations that were not released until the node was restarted. (Bug #47715)

    References: See also Bug #41297.

  • configure failed to honor the --with-zlib-dir option when trying to build MySQL Cluster from source. (Bug #47223)

  • ndbd was not built correctly when compiled using gcc 4.4.0. (The ndbd binary was built, but could not be started.) (Bug #46113)

  • If a node failed while sending a fragmented long signal, the receiving node did not free long signal assembly resources that it had allocated for the fragments of the long signal that had already been received. (Bug #44607)

  • When starting a cluster with a great many tables, it was possible for MySQL client connections as well as the slave SQL thread to issue DML statements against MySQL Cluster tables before mysqld had finished connecting to the cluster and making all tables writeable. This resulted in Table ... is read only errors for clients and the Slave SQL thread.

    This issue is fixed by introducing the --ndb-wait-setup option for the MySQL server. This provides a configurable maximum amount of time that mysqld waits for all NDB tables to become writeable, before enabling MySQL clients or the slave SQL thread to connect. (Bug #40679)

    References: See also Bug #46955.

  • When building MySQL Cluster, it was possible to configure the build using --with-ndb-port without supplying a port number. Now in such cases, configure fails with an error. (Bug #38502)

    References: See also Bug #47941.

  • When the MySQL server SQL mode included STRICT_TRANS_TABLES, storage engine warnings and error codes specific to NDB were returned when errors occurred, instead of the MySQL server errors and error codes expected by some programming APIs (such as Connector/J) and applications. (Bug #35990)

  • When a copying operation exhausted the available space on a data node while copying large BLOB columns, this could lead to failure of the data node and a Table is full error on the SQL node which was executing the operation. Examples of such operations could include an ALTER TABLE that changed an INT column to a BLOB column, or a bulk insert of BLOB data that failed due to running out of space or to a duplicate key error. (Bug #34583, Bug #48040)

    References: See also Bug #41674, Bug #45768.

  • Disk Data: A local checkpoint of an empty fragment could cause a crash during a system restart which was based on that LCP. (Bug #47832)

    References: See also Bug #41915.

  • Cluster API: If an NDB API program reads the same column more than once, it is possible to exceed the maximum permissible message size, in which case the operation should be aborted due to NDB error 880 Tried to read too much - too many getValue calls. However, due to a change introduced in MySQL Cluster NDB 6.3.18, the check for this was not done correctly, which instead caused a data node crash. (Bug #48266)

  • Cluster API: The NDB API methods Dictionary::listEvents(), Dictionary::listIndexes(), Dictionary::listObjects(), and NdbOperation::getErrorLine() formerly had both const and non-const variants. The non-const versions of these methods have been removed. In addition, the NdbOperation::getBlobHandle() method has been re-implemented to provide consistent internal semantics. (Bug #47798)

  • Cluster API: A duplicate read of a column caused NDB API applications to crash. (Bug #45282)

  • Cluster API: The error handling shown in the example file ndbapi_scan.cpp included with the MySQL Cluster distribution was incorrect. (Bug #39573)

Changes in MySQL Cluster NDB 6.3.27a (5.1.37-ndb-6.3.27a)

Bugs Fixed

  • The disconnection of an API or SQL node having a node ID greater than 49 caused a forced shutdown of the cluster. (Bug #47844)

  • The error message text for NDB error code 410 (REDO log files overloaded...) was truncated. (Bug #23662)

Changes in MySQL Cluster NDB 6.3.27 (5.1.37-ndb-6.3.27)

Functionality Added or Changed

  • Disk Data: Two new columns have been added to the output of ndb_desc to make it possible to determine how much of the disk space allocated to a given table or fragment remains free. (This information is not available from the INFORMATION_SCHEMA.FILES table, since the FILES table applies only to Disk Data files.) For more information, see ndb_desc — Describe NDB Tables. (Bug #47131)

Bugs Fixed

  • mysqld allocated an excessively large buffer for handling BLOB values due to overestimating their size. (For each row, enough space was allocated to accommodate every BLOB or TEXT column value in the result set.) This could adversely affect performance when using tables containing BLOB or TEXT columns; in a few extreme cases, this issue could also cause the host system to run out of memory unexpectedly. (Bug #47574)

    References: See also Bug #47572, Bug #47573.

  • NDBCLUSTER uses a dynamically allocated buffer to store BLOB or TEXT column data that is read from rows in MySQL Cluster tables.

    When an instance of the NDBCLUSTER table handler was recycled (this can happen due to table definition cache pressure or to operations such as FLUSH TABLES or ALTER TABLE), if the last row read contained blobs of zero length, the buffer was not freed, even though the reference to it was lost. This resulted in a memory leak.

    For example, consider the table defined and populated as shown here:

    CREATE TABLE bl (
        a INT NOT NULL PRIMARY KEY,
        b LONGTEXT
    ) ENGINE=NDBCLUSTER;

    INSERT INTO bl VALUES (1, REPEAT('F', 20000));
    INSERT INTO bl VALUES (2, '');

    Now repeatedly execute a SELECT on this table such that the zero-length LONGTEXT row is read last, followed by a FLUSH TABLES statement (which forces the handler object to be re-used), as shown here:

    SELECT a, length(b) FROM bl ORDER BY a;
    FLUSH TABLES;

    Prior to the fix, this resulted in a memory leak proportional to the size of the stored LONGTEXT value each time these two statements were executed. (Bug #47573)

    References: See also Bug #47572, Bug #47574.

  • Large transactions involving joins between tables containing BLOB columns used excessive memory. (Bug #47572)

    References: See also Bug #47573, Bug #47574.

  • A variable was left uninitialized while a data node copied data from its peers as part of its startup routine; if the starting node died during this phase, this could lead to a crash of the cluster when the node was later restarted. (Bug #47505)

  • When a data node restarts, it first runs the redo log until reaching the latest restorable global checkpoint; after this it scans the remainder of the redo log file, searching for entries that should be invalidated so they are not used in any subsequent restarts. (It is possible, for example, if restoring GCI number 25, that there might be entries belonging to GCI 26 in the redo log.) However, under certain rare conditions, during the invalidation process, the redo log files themselves were not always closed while scanning ahead in the redo log. In rare cases, this could lead to MaxNoOfOpenFiles being exceeded, causing the data node to crash. (Bug #47171)

  • For very large values of MaxNoOfTables + MaxNoOfAttributes, the calculation for StringMemory could overflow when creating large numbers of tables, leading to NDB error 773 (Out of string memory, please modify StringMemory config parameter), even when StringMemory was set to 100 (100 percent). (Bug #47170)

  • The default value for the StringMemory configuration parameter, unlike other MySQL Cluster configuration parameters, was not set in ndb/src/mgmsrv/ConfigInfo.cpp. (Bug #47166)

  • Signals from a failed API node could be received after an API_FAILREQ signal (see Operations and Signals) had been received from that node, which could result in invalid states for processing subsequent signals. Now, all pending signals from a failing API node are processed before the API_FAILREQ signal is handled. (Bug #47039)

    References: See also Bug #44607.

  • Using triggers on NDB tables caused ndb_autoincrement_prefetch_sz to be treated as having the NDB kernel's internal default value (32) and the value for this variable as set on the cluster's SQL nodes to be ignored. (Bug #46712)

  • Running an ALTER TABLE statement while an NDB backup was in progress caused mysqld to crash. (Bug #44695)

  • When performing auto-discovery of tables on individual SQL nodes, NDBCLUSTER attempted to overwrite existing MyISAM .frm files and corrupted them.

    Workaround. In the mysql client, create a new table (t2) with the same definition as the corrupted table (t1). Use your system shell or file manager to rename the old .MYD file to the new file name (for example, mv t1.MYD t2.MYD). In the mysql client, repair the new table, drop the old one, and rename the new table using the old file name (for example, RENAME TABLE t2 TO t1).

    (Bug #42614)

  • Running ndb_restore with the --print or --print_log option could cause it to crash. (Bug #40428, Bug #33040)

  • An insert on an NDB table was not always flushed properly before performing a scan. One way in which this issue could manifest was that LAST_INSERT_ID() sometimes failed to return correct values when using a trigger on an NDB table. (Bug #38034)

  • When a data node received a TAKE_OVERTCCONF signal from the master before that node had received a NODE_FAILREP, a race condition could in theory result. (Bug #37688)

    References: See also Bug #25364, Bug #28717.

  • Some joins on large NDB tables having TEXT or BLOB columns could cause mysqld processes to leak memory. The joins did not need to reference the TEXT or BLOB columns directly for this issue to occur. (Bug #36701)

  • On Mac OS X 10.5, commands entered in the management client failed and sometimes caused the client to hang, although management client commands invoked using the --execute (or -e) option from the system shell worked normally.

    For example, the following command failed with an error and hung until killed manually, as shown here:

    ndb_mgm> SHOW      
    Warning, event thread startup failed, degraded printouts as result, errno=36

    However, the same management client command, invoked from the system shell as shown here, worked correctly:

    shell> ndb_mgm -e "SHOW"

    (Bug #35751)

    References: See also Bug #34438.

  • Disk Data: Calculation of free space for Disk Data table fragments was sometimes done incorrectly. This could lead to unnecessary allocation of new extents even when sufficient space was available in existing ones for inserted data. In some cases, this might also lead to crashes when restarting data nodes.

    This miscalculation was not reflected in the contents of the INFORMATION_SCHEMA.FILES table, as it applied to extents allocated to a fragment, and not to a file.

    (Bug #47072)

  • Cluster API: In some circumstances, if an API node encountered a data node failure between the creation of a transaction and the start of a scan using that transaction, then any subsequent calls to startTransaction() and closeTransaction() could cause the same transaction to be started and closed repeatedly. (Bug #47329)

  • Cluster API: Performing multiple operations using the same primary key within the same NdbTransaction::execute() call could lead to a data node crash.

    This fix does not change the fact that performing multiple operations using the same primary key within the same execute() call is not supported; because there is no way to determine the order of such operations, the result of such combined operations remains undefined.

    (Bug #44065)

    References: See also Bug #44015.

Changes in MySQL Cluster NDB 6.3.26 (5.1.35-ndb-6.3.26)

Functionality Added or Changed

  • On Solaris platforms, the MySQL Cluster management server and NDB API applications now use CLOCK_REALTIME as the default clock. (Bug #46183)

  • A new option --exclude-missing-columns has been added for the ndb_restore program. If any table in the database or databases being restored has fewer columns than the same-named table in the backup, the extra columns in the backup's version of that table are ignored. For more information, see ndb_restore — Restore a MySQL Cluster Backup. (Bug #43139)

  • Note

    This issue, originally resolved in MySQL 5.1.16, re-occurred due to a later (unrelated) change. The fix has been re-applied.

    (Bug #25984)

Bugs Fixed

  • Restarting the cluster following a local checkpoint and an online ALTER TABLE on a non-empty table caused data nodes to crash. (Bug #46651)

  • Full table scans failed to execute when the cluster contained more than 21 table fragments.

    The number of table fragments in the cluster can be calculated as the number of data nodes, times 8 (that is, times the value of the internal constant MAX_FRAG_PER_NODE), divided by the number of replicas. Thus, when NoOfReplicas = 1 at least 3 data nodes were required to trigger this issue, and when NoOfReplicas = 2 at least 4 data nodes were required to do so.

    (Bug #46490)

  • Killing MySQL Cluster nodes immediately following a local checkpoint could lead to a crash of the cluster when later attempting to perform a system restart.

    The exact sequence of events causing this issue was as follows:

    1. Local checkpoint occurs.

    2. Immediately following the LCP, kill the master data node.

    3. Kill the remaining data nodes within a few seconds of killing the master.

    4. Attempt to restart the cluster.

    (Bug #46412)

  • Ending a line in the config.ini file with an extra semicolon character (;) caused reading the file to fail with a parsing error. (Bug #46242)

  • When combining an index scan and a delete with a primary key delete, the index scan and delete failed to initialize a flag properly. This could in rare circumstances cause a data node to crash. (Bug #46069)

  • OPTIMIZE TABLE on an NDB table could in some cases cause SQL and data nodes to crash. This issue was observed with both ndbd and ndbmtd. (Bug #45971)

  • The AutoReconnect configuration parameter for API nodes (including SQL nodes) has been added. This is intended to prevent API nodes from re-using allocated node IDs during cluster restarts. For more information, see Defining SQL and Other API Nodes in a MySQL Cluster.

    This fix also introduces two new methods of the NDB API Ndb_cluster_connection class: set_auto_reconnect() and get_auto_reconnect(). (Bug #45921)

  • The signals used by ndb_restore to send progress information about backups to the cluster log accessed the cluster transporter without using any locks. Because of this, it was theoretically possible that these signals could be interfered with by heartbeat signals if both were sent at the same time, causing the ndb_restore messages to be corrupted. (Bug #45646)

  • Problems could arise when using VARCHAR columns whose size was greater than 341 characters and which used the utf8_unicode_ci collation. In some cases, this combination of conditions could cause certain queries and OPTIMIZE TABLE statements to crash mysqld. (Bug #45053)

  • An internal NDB API buffer was not properly initialized. (Bug #44977)

  • When a data node had written its GCI marker to the first page of a megabyte, and that node was later killed during restart after having processed that page (marker) but before completing an LCP, the data node could fail with file system errors. (Bug #44952)

    References: See also Bug #42564, Bug #44291.

  • The warning message Possible bug in Dbdih::execBLOCK_COMMIT_ORD ... could sometimes appear in the cluster log. This warning is obsolete, and has been removed. (Bug #44563)

  • In some cases, OPTIMIZE TABLE on an NDB table did not free any DataMemory. (Bug #43683)

  • If the cluster crashed during the execution of a CREATE LOGFILE GROUP statement, the cluster could not be restarted afterward. (Bug #36702)

    References: See also Bug #34102.

  • Partitioning; Disk Data: An NDB table created with a very large value for the MAX_ROWS option could—if this table was dropped and a new table with fewer partitions, but having the same table ID, was created—cause ndbd to crash when performing a system restart. This was because the server attempted to examine each partition whether or not it actually existed. (Bug #45154)

    References: See also Bug #58638.

  • Disk Data: If the value set in the config.ini file for FileSystemPathDD, FileSystemPathDataFiles, or FileSystemPathUndoFiles was identical to the value set for FileSystemPath, that parameter was ignored when starting the data node with the --initial option. As a result, the Disk Data files in the corresponding directory were not removed when performing an initial start of the affected data node or data nodes. (Bug #46243)

  • Disk Data: During a checkpoint, restore points are created for both the on-disk and in-memory parts of a Disk Data table. Under certain rare conditions, the in-memory restore point could include or exclude a row that should have been in the snapshot. This would later lead to a crash during or following recovery. (Bug #41915)

    References: See also Bug #47832.

Changes in MySQL Cluster NDB 6.3.25 (5.1.34-ndb-6.3.25)

Functionality Added or Changed

  • Two new server status variables Ndb_scan_count and Ndb_pruned_scan_count have been introduced. Ndb_scan_count gives the number of scans executed since the cluster was last started. Ndb_pruned_scan_count gives the number of scans for which NDBCLUSTER was able to use partition pruning. Together, these variables can be used to help determine in the MySQL server whether table scans are pruned by NDBCLUSTER. (Bug #44153)
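
    For example, both counts can be checked from the mysql client as shown here:

    mysql> SHOW GLOBAL STATUS LIKE 'Ndb%scan_count';

    Comparing the values of these two variables before and after issuing a given query indicates whether the scans performed for that query were pruned.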

  • The ndb_config utility program can now provide an offline dump of all MySQL Cluster configuration parameters including information such as default and permitted values, brief description, and applicable section of the config.ini file. A dump in text format is produced when running ndb_config with the new --configinfo option, and in XML format when the options --configinfo --xml are used together. For more information and examples, see ndb_config — Extract MySQL Cluster Configuration Information.

Bugs Fixed

  • Important Change; Partitioning: User-defined partitioning of an NDBCLUSTER table without any primary key sometimes failed, and could cause mysqld to crash.

    Now, if you wish to create an NDBCLUSTER table with user-defined partitioning, the table must have an explicit primary key, and all columns listed in the partitioning expression must be part of the primary key. The hidden primary key used by the NDBCLUSTER storage engine is not sufficient for this purpose. However, if the list of columns is empty (that is, the table is defined using PARTITION BY [LINEAR] KEY()), then no explicit primary key is required.

    This change does not affect the partitioning of tables using any storage engine other than NDBCLUSTER. (Bug #40709)
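
    For example, a table definition such as the following one satisfies the new requirement, since all columns named in the partitioning expression belong to the explicit primary key (the table and column names shown are illustrative only):

    CREATE TABLE p (
        a INT NOT NULL,
        b INT NOT NULL,
        c INT,
        PRIMARY KEY (a, b)
    ) ENGINE=NDBCLUSTER
    PARTITION BY KEY (a, b);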

  • Important Change: Previously, the configuration parameter NoOfReplicas had no default value. Now the default for NoOfReplicas is 2, which is the recommended value in most settings. (Bug #44746)

  • Packaging: The pkg installer for MySQL Cluster on Solaris did not perform a complete installation due to an invalid directory reference in the postinstall script. (Bug #41998)

  • When ndb_config could not find the file referenced by the --config-file option, it tried to read my.cnf instead, then failed with a misleading error message. (Bug #44846)

  • When a data node was down so long that its most recent local checkpoint depended on a global checkpoint that was no longer restorable, it was possible for it to be unable to use optimized node recovery when being restarted later. (Bug #44844)

    References: See also Bug #26913.

  • ndb_config --xml did not output any entries for the HostName parameter. In addition, the default listed for MaxNoOfFiles was outside the permitted range of values. (Bug #44749)

    References: See also Bug #44685, Bug #44746.

  • The output of ndb_config --xml did not provide information about all sections of the configuration file. (Bug #44685)

    References: See also Bug #44746, Bug #44749.

  • Inspection of the code revealed that several assignment operators (=) were used in place of comparison operators (==) in DbdihMain.cpp. (Bug #44567)

    References: See also Bug #44570.

  • It was possible for NDB API applications to insert corrupt data into the database, which could subsequently lead to data node crashes. Now, stricter checking is enforced on input data for inserts and updates. (Bug #44132)

  • ndb_restore failed when trying to restore data on a big-endian machine from a backup file created on a little-endian machine. (Bug #44069)

  • The file ndberror.c contained a C++-style comment, which caused builds to fail with some C compilers. (Bug #44036)

  • When trying to use a data node with an older version of the management server, the data node crashed on startup. (Bug #43699)

  • In some cases, data node restarts during a system restart could fail due to insufficient redo log space. (Bug #43156)

  • NDBCLUSTER did not build correctly on Solaris 9 platforms. (Bug #39080)

    References: See also Bug #39036, Bug #39038.

  • ndb_restore --print_data did not handle DECIMAL columns correctly. (Bug #37171)

  • The output of ndbd --help did not provide clear information about the program's --initial and --initial-start options. (Bug #28905)

  • It was theoretically possible for the value of a nonexistent column to be read as NULL, rather than causing an error. (Bug #27843)

  • Disk Data: This fix supersedes and improves on an earlier fix made for this bug in MySQL 5.1.18. (Bug #24521)

Changes in MySQL Cluster NDB 6.3.24 (5.1.32-ndb-6.3.24)

Bugs Fixed

  • Cluster Replication: If a data node failed during an event creation operation, there was a slight risk that a surviving data node could send an invalid table reference back to NDB, causing the operation to fail with a false Error 723 (No such table). This could take place when a data node failed as a mysqld process was setting up MySQL Cluster Replication. (Bug #43754)

  • Cluster API: Partition pruning did not work correctly for queries involving multiple range scans.

    As part of the fix for this issue, several improvements have been made in the NDB API, including the addition of a new NdbScanOperation::getPruned() method, a new variant of NdbIndexScanOperation::setBound(), and a new PartitionSpec data structure. (Bug #37934)

  • TransactionDeadlockDetectionTimeout values less than 100 were treated as 100. This could cause scans to time out unexpectedly. (Bug #44099)

  • A race condition could occur when a data node failed to restart just before being included in the next global checkpoint. This could cause other data nodes to fail. (Bug #43888)

  • TimeBetweenLocalCheckpoints was measured from the end of one local checkpoint to the beginning of the next, rather than from the beginning of one LCP to the beginning of the next. This meant that the time spent performing the LCP was not taken into account when determining the TimeBetweenLocalCheckpoints interval, so that LCPs were not started often enough, possibly causing data nodes to run out of redo log space prematurely. (Bug #43567)

  • Using indexes containing variable-sized columns could lead to internal errors when the indexes were being built. (Bug #43226)

  • When a data node process had been killed after allocating a node ID, but before making contact with any other data node processes, it was not possible to restart it due to a node ID allocation failure.

    This issue could affect either ndbd or ndbmtd processes. (Bug #43224)

    References: This bug was introduced by Bug #42973.

  • Some queries using combinations of logical and comparison operators on an indexed column in the WHERE clause could fail with the error Got error 4541 'IndexBound has no bound information' from NDBCLUSTER. (Bug #42857)

  • ndb_restore crashed when trying to restore a backup made to a MySQL Cluster running on a platform having different endianness from that on which the original backup was taken. (Bug #39540)

  • When aborting an operation involving both an insert and a delete, the insert and delete were aborted separately. This was because the transaction coordinator did not know that the operations affected the same row, and, in the case of a committed-read (tuple or index) scan, the abort of the insert was performed first, then the row was examined after the insert was aborted but before the delete was aborted. In some cases, this would leave the row in an inconsistent state. This could occur when a local checkpoint was performed during a backup. This issue did not affect primary key operations or scans that used locks (these are serialized).

    After this fix, for ordered indexes, all operations that follow the operation to be aborted are now also aborted.

  • Disk Data: When a log file group had an undo log file whose size was too small, restarting data nodes failed with Read underflow errors.

    As a result of this fix, the minimum permitted INITIAL_SIZE for an undo log file is now 1M (1 megabyte). (Bug #29574)

  • Cluster API: If the largest offset of a RecordSpecification used for an NdbRecord object was for the NULL bits (and thus not a column), this offset was not taken into account when calculating the size used for the RecordSpecification. This meant that the space for the NULL bits could be overwritten by key or other information. (Bug #43891)

  • Cluster API: BIT columns created using the native NDB API format that were not created as nullable could still sometimes be overwritten, or cause other columns to be overwritten.

    This issue did not affect tables having BIT columns created using the mysqld format (always used by MySQL Cluster SQL nodes). (Bug #43802)

  • Cluster API: The default NdbRecord structures created by NdbDictionary could have overlapping null bits and data fields. (Bug #43590)

  • Cluster API: When performing insert or write operations, NdbRecord permits key columns to be specified in both the key record and in the attribute record. Only one key column value for each key column should be sent to the NDB kernel, but this was not guaranteed. This is now ensured as follows: For insert and write operations, key column values are taken from the key record; for scan takeover update operations, key column values are taken from the attribute record. (Bug #42238)

  • Cluster API: Ordered index scans using NdbRecord formerly expressed a BoundEQ range as separate lower and upper bounds, resulting in 2 copies of the column values being sent to the NDB kernel.

    Now, when a range is specified by NdbIndexScanOperation::setBound(), the passed pointers, key lengths, and inclusive bits are compared, and only one copy of the equal key columns is sent to the kernel. This makes such operations more efficient, since only half as much KeyInfo is now sent for a BoundEQ range as before. (Bug #38793)

Changes in MySQL Cluster NDB 6.3.23 (5.1.32-ndb-6.3.23)

Functionality Added or Changed

  • A new data node configuration parameter MaxLCPStartDelay has been introduced to facilitate parallel node recovery by causing a local checkpoint to be delayed while recovering nodes are synchronizing data dictionaries and other meta-information. For more information about this parameter, see Defining MySQL Cluster Data Nodes. (Bug #43053)

Bugs Fixed

  • Performance: Updates of the SYSTAB_0 system table to obtain a unique identifier did not use transaction hints for tables having no primary key. In such cases the NDB kernel used a cache size of 1. This meant that each insert into a table not having a primary key required an update of the corresponding SYSTAB_0 entry, creating a potential performance bottleneck.

    With this fix, inserts on NDB tables without primary keys can under some conditions be performed up to 100% faster than previously. (Bug #39268)

  • Packaging: Packages for MySQL Cluster were missing the libndbclient.so and libndbclient.a files. (Bug #42278)

  • Partitioning: Executing ALTER TABLE ... REORGANIZE PARTITION on an NDBCLUSTER table having only one partition caused mysqld to crash. (Bug #41945)

    References: See also Bug #40389.

  • Backup IDs greater than 2^31 were not handled correctly, causing negative values to be used in backup directory names and printouts. (Bug #43042)

  • When using ndbmtd, NDB kernel threads could hang while trying to start the data nodes with LockPagesInMainMemory set to 1. (Bug #43021)

  • When using multiple management servers and starting several API nodes (possibly including one or more SQL nodes) whose connection strings listed the management servers in different order, it was possible for 2 API nodes to be assigned the same node ID. When this happened it was possible for an API node not to get fully connected, consequently producing a number of errors whose cause was not easily recognizable. (Bug #42973)

  • ndb_error_reporter worked correctly only with GNU tar. (With other versions of tar, it produced empty archives.) (Bug #42753)

  • Triggers on NDBCLUSTER tables caused such tables to become locked. (Bug #42751)

    References: See also Bug #16229, Bug #18135.

  • Given a MySQL Cluster containing no data (that is, whose data nodes had all been started using --initial, and into which no data had yet been imported) and having an empty backup directory, executing START BACKUP with a user-specified backup ID caused the data nodes to crash. (Bug #41031)

  • In some cases, NDB did not check correctly whether tables had changed before trying to use the query cache. This could result in a crash of the debug MySQL server. (Bug #40464)

  • Disk Data: It was not possible to add an in-memory column online to a table that used a table-level or column-level STORAGE DISK option. The same issue prevented ALTER ONLINE TABLE ... REORGANIZE PARTITION from working on Disk Data tables. (Bug #42549)
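
    In other words, a statement such as the following one, using a hypothetical Disk Data table dt created with a table-level STORAGE DISK option, failed prior to this fix:

    ALTER ONLINE TABLE dt ADD COLUMN c INT COLUMN_FORMAT DYNAMIC STORAGE MEMORY;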

  • Disk Data: Creating a Disk Data tablespace with a very large extent size caused the data nodes to fail. The issue was observed when using extent sizes of 100 MB and larger. (Bug #39096)

  • Disk Data: Trying to execute a CREATE LOGFILE GROUP statement using a value greater than 150M for UNDO_BUFFER_SIZE caused data nodes to crash.

    As a result of this fix, the upper limit for UNDO_BUFFER_SIZE is now 600M; attempting to set a higher value now fails gracefully with an error. (Bug #34102)

    References: See also Bug #36702.

  • Disk Data: When attempting to create a tablespace that already existed, the error message returned was Table or index with given name already exists. (Bug #32662)

  • Disk Data: Using a path or file name longer than 128 characters for Disk Data undo log files and tablespace data files caused a number of issues, including failures of CREATE LOGFILE GROUP, ALTER LOGFILE GROUP, CREATE TABLESPACE, and ALTER TABLESPACE statements, as well as crashes of management nodes and data nodes.

    With this fix, the maximum length for path and file names used for Disk Data undo log files and tablespace data files is now the same as the maximum for the operating system. (Bug #31769, Bug #31770, Bug #31772)

  • Disk Data: Attempting to perform a system restart of the cluster where there existed a logfile group without any undo log files caused the data nodes to crash.

    While issuing a CREATE LOGFILE GROUP statement without an ADD UNDOFILE option fails with an error in the MySQL server, this situation could arise if an SQL node failed during the execution of a valid CREATE LOGFILE GROUP statement; it is also possible to create a logfile group without any undo log files using the NDB API.

    (Bug #17614)

  • Cluster API: Some error messages from ndb_mgmd contained newline (\n) characters. This could break the MGM API protocol, which uses the newline as a line separator. (Bug #43104)

  • Cluster API: When using an ordered index scan without putting all key columns in the read mask, this invalid use of the NDB API went undetected, which resulted in the use of uninitialized memory. (Bug #42591)

Changes in MySQL Cluster NDB 6.3.22 (5.1.31-ndb-6.3.22)

Functionality Added or Changed

  • New options are introduced for ndb_restore for determining which tables or databases should be restored:

    • --include-tables and --include-databases can be used to restore specific tables or databases.

    • --exclude-tables and --exclude-databases can be used to exclude the specified tables or databases from being restored.

    For more information about these options, see ndb_restore — Restore a MySQL Cluster Backup. (Bug #40429)

  • Disk Data: It is now possible to specify default locations for Disk Data data files and undo log files, either together or separately, using the data node configuration parameters FileSystemPathDD, FileSystemPathDataFiles, and FileSystemPathUndoFiles. For information about these configuration parameters, see Disk Data file system parameters.

    It is also now possible to specify a log file group, tablespace, or both, that is created when the cluster is started, using the InitialLogFileGroup and InitialTablespace data node configuration parameters. For information about these configuration parameters, see Disk Data object creation parameters.
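
    When set, these parameters cause the log file group and tablespace to be created during the cluster's initial start, much as if statements such as the following had been issued on an SQL node (all names and sizes shown here are examples only):

    CREATE LOGFILE GROUP lg1
        ADD UNDOFILE 'undo1.log'
        INITIAL_SIZE 128M
        UNDO_BUFFER_SIZE 8M
        ENGINE NDBCLUSTER;

    CREATE TABLESPACE ts1
        ADD DATAFILE 'data1.dat'
        USE LOGFILE GROUP lg1
        INITIAL_SIZE 256M
        ENGINE NDBCLUSTER;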

Bugs Fixed

  • When performing more than 32 index or tuple scans on a single fragment, the scans could be left hanging. This caused unnecessary timeouts, and in addition could possibly lead to a hang of an LCP. (Bug #42559)

    References: This bug is a regression of Bug #42084.

  • A data node failure that occurred between calls to NdbIndexScanOperation::readTuples(SF_OrderBy) and NdbTransaction::execute() was not correctly handled; a subsequent call to nextResult() caused a null pointer to be dereferenced, leading to a segfault in mysqld. (Bug #42545)

  • Issuing SHOW GLOBAL STATUS LIKE 'NDB%' before mysqld had connected to the cluster caused a segmentation fault. (Bug #42458)

  • Data node failures that occurred before all data nodes had connected to the cluster were not handled correctly, leading to additional data node failures. (Bug #42422)

  • When a cluster backup failed with Error 1304 (Node node_id1: Backup request from node_id2 failed to start), no clear reason for the failure was provided.

    As part of this fix, MySQL Cluster now retries backups in the event of sequence errors. (Bug #42354)

    References: See also Bug #22698.

  • Issuing SHOW ENGINE NDBCLUSTER STATUS on an SQL node before the management server had connected to the cluster caused mysqld to crash. (Bug #42264)

Changes in MySQL Cluster NDB 6.3.21 (5.1.31-ndb-6.3.21)

Functionality Added or Changed

  • Important Change: Formerly, when the management server failed to create a transporter for a data node connection, net_write_timeout seconds elapsed before the data node was actually permitted to disconnect. Now in such cases the disconnection occurs immediately. (Bug #41965)

    References: See also Bug #41713.

  • It is now possible while in Single User Mode to restart all data nodes using ALL RESTART in the management client. Restarting individual nodes while in Single User Mode is still not permitted. (Bug #31056)

  • Formerly, when using MySQL Cluster Replication, records for empty epochs—that is, epochs in which no changes to NDBCLUSTER data or tables took place—were inserted into the ndb_apply_status and ndb_binlog_index tables on the slave even when --log-slave-updates was disabled. Beginning with MySQL Cluster NDB 6.2.16 and MySQL Cluster NDB 6.3.13 this was changed so that these empty epochs were no longer logged. However, it is now possible to re-enable the older behavior (and cause empty epochs to be logged) by using the --ndb-log-empty-epochs option. For more information, see Replication Slave Options and Variables.

    References: See also Bug #37472.

Bugs Fixed

  • A maximum of 11 TUP scans were permitted in parallel. (Bug #42084)

  • Trying to execute an ALTER ONLINE TABLE ... ADD COLUMN statement while inserting rows into the table caused mysqld to crash. (Bug #41905)

  • If the master node failed during a global checkpoint, it was possible in some circumstances for the new master to use an incorrect value for the global checkpoint index. This could occur only when the cluster used more than one node group. (Bug #41469)

  • API nodes disconnected too aggressively from the cluster when data nodes were being restarted. This could sometimes lead to the API node being unable to access the cluster at all during a rolling restart. (Bug #41462)

  • It was not possible to perform online upgrades from a MySQL Cluster NDB 6.2 release to MySQL Cluster NDB 6.3.8 or a later MySQL Cluster NDB 6.3 release. (Bug #41435)

  • Cluster log files were opened twice by internal log-handling code, resulting in a resource leak. (Bug #41362)

  • A race condition in transaction coordinator takeovers (part of node failure handling) could lead to operations (locks) not being taken over and subsequently getting stale. This could lead to subsequent failures of node restarts, and to applications getting into an endless lock conflict with operations that would not complete until the node was restarted. (Bug #41297)

    References: See also Bug #41295.

  • An abort path in the DBLQH kernel block failed to release a commit acknowledgment marker. This meant that, during node failure handling, the local query handler could be added multiple times to the marker record, which could lead to additional node failures due to an array overflow. (Bug #41296)

  • During node failure handling (of a data node other than the master), there was a chance that the master was waiting for a GCP_NODEFINISHED signal from the failed node after having received it from all other data nodes. If this occurred while the failed node had a transaction that was still being committed in the current epoch, the master node could crash in the DBTC kernel block when discovering that a transaction actually belonged to an epoch which was already completed. (Bug #41295)

  • Issuing EXIT in the management client sometimes caused the client to hang. (Bug #40922)

  • In the event that a MySQL Cluster backup failed due to file permissions issues, conflicting reports were issued in the management client. (Bug #34526)

  • If all data nodes were shut down, MySQL clients were unable to access NDBCLUSTER tables and data even after the data nodes were restarted, unless the MySQL clients themselves were restarted. (Bug #33626)

  • Disk Data: Starting a cluster under load such that Disk Data tables used most of the undo buffer could cause data node failures.

    The fix for this bug also corrected an issue in the LGMAN kernel block where the amount of free space left in the undo buffer was miscalculated, causing buffer overruns. This could cause records in the buffer to be overwritten, leading to problems when restarting data nodes. (Bug #28077)

  • Cluster API: mgmapi.h contained constructs which only worked in C++, but not in C. (Bug #27004)

Changes in MySQL Cluster NDB 6.3.20 (5.1.30-ndb-6.3.20)

Functionality Added or Changed

  • Cluster API: Two new Ndb_cluster_connection methods have been added to help in diagnosing problems with NDB API client connections. The get_latest_error() method tells whether or not the latest connection attempt succeeded; if the attempt failed, get_latest_error_msg() provides an error message giving the reason.

Bugs Fixed

  • If a transaction was aborted during the handling of a data node failure, this could lead to the later handling of an API node failure not being completed. (Bug #41214)

  • Issuing SHOW TABLES repeatedly could cause NDBCLUSTER tables to be dropped. (Bug #40854)

  • Statements of the form UPDATE ... ORDER BY ... LIMIT run against NDBCLUSTER tables failed to update all matching rows, or failed with the error Can't find record in 'table_name'. (Bug #40081)

  • Start phase reporting was inconsistent between the management client and the cluster log. (Bug #39667)

  • Status messages shown in the management client when restarting a management node were inappropriate and misleading. Now, when restarting a management node, the messages displayed are as follows, where node_id is the management node's node ID:

    ndb_mgm> node_id RESTART
    Shutting down MGM node node_id for restart
    Node node_id is being restarted

    (Bug #29275)

  • Disk Data: This improves on a previous fix for this issue that was made in MySQL Cluster 6.3.8. (Bug #37116)

    References: See also Bug #29186.

  • Cluster API: When creating a scan using an NdbScanFilter object, it was possible to specify conditions against a BIT column, but the correct rows were not returned when the scan was executed.

    As part of this fix, 4 new comparison operators have been implemented for use with scans on BIT columns:

    • COND_AND_EQ_MASK

    • COND_AND_NE_MASK

    • COND_AND_EQ_ZERO

    • COND_AND_NE_ZERO

    For more information about these operators, see The NdbScanFilter::BinaryCondition Type.

    Equivalent methods are now also defined for NdbInterpretedCode; for more information, see NdbInterpretedCode Bitwise Comparison Operations. (Bug #40535)

Changes in MySQL Cluster NDB 6.3.19 (5.1.29-ndb-6.3.19)

Functionality Added or Changed

  • Important Change; Cluster API: MGM API applications exited without raising any errors if the connection to the management server was lost. The fix for this issue includes two changes:

    1. The MGM API now provides its own SIGPIPE handler to catch the broken pipe error that occurs when writing to a closed or reset socket. This means that MGM API now behaves the same as NDB API in this regard.

    2. A new function ndb_mgm_set_ignore_sigpipe() has been added to the MGM API. This function makes it possible to bypass the SIGPIPE handler provided by the MGM API.

    (Bug #40498)

  • When performing an initial start of a data node, fragment log files were always created sparsely—that is, not all bytes were written. Now it is possible to override this behavior using the new InitFragmentLogFiles configuration parameter. (Bug #40847)

Bugs Fixed

  • Cluster API: Failed operations on BLOB and TEXT columns were not always reported correctly to the originating SQL node. Such errors were sometimes reported as being due to timeouts, when the actual problem was a transporter overload due to insufficient buffer space. (Bug #39867, Bug #39879)

  • Undo logs and data files were created in 32K increments. Now these files are created in 512K increments, resulting in shorter creation times. (Bug #40815)

  • Redo log creation was very slow on some platforms, causing MySQL Cluster to start more slowly than necessary with some combinations of hardware and operating system. This was due to all write operations being synchronized to disk while creating a redo log file. Now this synchronization occurs only after the redo log has been created. (Bug #40734)

  • Transaction failures took longer to handle than was necessary.

    When a data node acting as transaction coordinator (TC) failed, the surviving data nodes did not inform the API node initiating the transaction of this until the failure had been processed by all protocols. However, the API node needed only to know about failure handling by the transaction protocol—that is, it needed to be informed only about the TC takeover process. Now, API nodes (including MySQL servers acting as cluster SQL nodes) are informed as soon as the TC takeover is complete, so that they can carry on operating more quickly. (Bug #40697)

  • It was theoretically possible for stale data to be read from NDBCLUSTER tables when the transaction isolation level was set to ReadCommitted. (Bug #40543)

  • The LockExecuteThreadToCPU and LockMaintThreadsToCPU parameters did not work on Solaris. (Bug #40521)

  • SET SESSION ndb_optimized_node_selection = 1 failed with an invalid warning message. (Bug #40457)

  • A restarting data node could fail with an error in the DBDIH kernel block when a local or global checkpoint was started or triggered just as the node made a request for data from another data node. (Bug #40370)

  • Restoring a MySQL Cluster from a dump made using mysqldump failed due to a spurious error: Can't execute the given command because you have active locked tables or an active transaction. (Bug #40346)

  • O_DIRECT was incorrectly disabled when making MySQL Cluster backups. (Bug #40205)

  • Heavy DDL usage caused the mysqld processes to hang due to a timeout error (NDB error code 266). (Bug #39885)

  • Executing EXPLAIN SELECT on an NDBCLUSTER table could cause mysqld to crash. (Bug #39872)

  • Events logged after setting ALL CLUSTERLOG STATISTICS=15 in the management client did not always include the node ID of the reporting node. (Bug #39839)

  • The MySQL Query Cache did not function correctly with NDBCLUSTER tables containing TEXT columns. (Bug #39295)

  • A segfault in Logger::Log caused ndbd to hang indefinitely. This fix improves on an earlier one for this issue, first made in MySQL Cluster NDB 6.2.16 and MySQL Cluster NDB 6.3.17. (Bug #39180)

    References: See also Bug #38609.

  • Memory leaks could occur in handling of strings used for storing cluster metadata and providing output to users. (Bug #38662)

  • A duplicate key or other error raised when inserting into an NDBCLUSTER table caused the current transaction to abort, after which any SQL statement other than a ROLLBACK failed. With this fix, the NDBCLUSTER storage engine now performs an implicit rollback when a transaction is aborted in this way; it is no longer necessary to issue an explicit ROLLBACK statement, and the next statement that is issued automatically begins a new transaction.

    It remains necessary in such cases to retry the complete transaction, regardless of which statement caused it to be aborted.

    (Bug #32656)

    References: See also Bug #47654.
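
    For example, in a session such as the following one (with a hypothetical table t having a primary key), the final INSERT formerly failed unless an explicit ROLLBACK was issued first; now it simply begins a new transaction:

    BEGIN;
    INSERT INTO t VALUES (1);  -- succeeds
    INSERT INTO t VALUES (1);  -- duplicate key error; transaction is implicitly rolled back
    INSERT INTO t VALUES (2);  -- begins a new transaction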

  • Error messages for NDBCLUSTER error codes 1224 and 1227 were missing. (Bug #28496)

  • Disk Data: Issuing concurrent CREATE TABLESPACE, ALTER TABLESPACE, CREATE LOGFILE GROUP, or ALTER LOGFILE GROUP statements on separate SQL nodes caused a resource leak that led to data node crashes when these statements were used again later. (Bug #40921)

  • Disk Data: Disk-based variable-length columns were not always handled like their memory-based equivalents, which could potentially lead to a crash of cluster data nodes. (Bug #39645)

  • Disk Data: O_SYNC was incorrectly disabled on platforms that do not support O_DIRECT. This issue was noted on Solaris but could have affected other platforms not having O_DIRECT capability. (Bug #34638)

  • Cluster API: The MGM API reset error codes on management server handles before checking them. This meant that calling an MGM API function with a null handle caused applications to crash. (Bug #40455)

  • Cluster API: It was not always possible to access parent objects directly from NdbBlob, NdbOperation, and NdbScanOperation objects. To alleviate this problem, a new getNdbOperation() method has been added to NdbBlob and new getNdbTransaction() methods have been added to NdbOperation and NdbScanOperation. In addition, a const variant of NdbOperation::getErrorLine() is now also available. (Bug #40242)

  • Cluster API: getBlobHandle() failed when used with incorrect column names or numbers. (Bug #40241)

  • Cluster API: The MGM API function ndb_mgm_listen_event() ignored bind addresses.

    As part of this fix, it is now possible to specify bind addresses in connection strings. See MySQL Cluster Connection Strings, for more information. (Bug #38473)

  • Cluster API: The NDB API example programs included in MySQL Cluster source distributions failed to compile. (Bug #37491)

    References: See also Bug #40238.

Changes in MySQL Cluster NDB 6.3.18 (5.1.28-ndb-6.3.18)

Functionality Added or Changed

  • It is no longer a requirement for database autodiscovery that an SQL node already be connected to the cluster at the time that a database is created on another SQL node. It is no longer necessary to issue CREATE DATABASE (or CREATE SCHEMA) statements on an SQL node joining the cluster after a database is created for the new SQL node to see the database and any NDBCLUSTER tables that it contains. (Bug #39612)

Bugs Fixed

  • Starting the MySQL Server with the --ndbcluster option plus an invalid command-line option (for example, using mysqld --ndbcluster --foobar) caused it to hang while shutting down the binary log thread. (Bug #39635)

  • Dropping and then re-creating a database on one SQL node caused other SQL nodes to hang. (Bug #39613)

  • Setting a low value of MaxNoOfLocalScans (< 100) and performing a large number of (certain) scans could cause the Transaction Coordinator to run out of scan fragment records, and then crash. Now when this resource is exhausted, the cluster returns Error 291 (Out of scanfrag records in TC (increase MaxNoOfLocalScans)) instead. (Bug #39549)

  • When a transaction included a multi-row insert to an NDBCLUSTER table that caused a constraint violation, the transaction failed to roll back. (Bug #39538)

  • Creating a unique index on an NDBCLUSTER table caused a memory leak in the NDB subscription manager (SUMA) which could lead to mysqld hanging, due to the fact that the resource shortage was not reported back to the NDB kernel correctly. (Bug #39518)

    References: See also Bug #39450.

  • Embedded libmysqld with NDB did not drop table events. (Bug #39450)

  • Unique identifiers in tables having no primary key were not cached. This fix has been observed to increase the efficiency of INSERT operations on such tables by as much as 50%. (Bug #39267)

  • When restarting a data node, an excessively long shutdown message could cause the node process to crash. (Bug #38580)

  • After a forced shutdown and initial restart of the cluster, it was possible for SQL nodes to retain .frm files corresponding to NDBCLUSTER tables that had been dropped, and thus to be unaware that these tables no longer existed. In such cases, attempting to re-create the tables using CREATE TABLE IF NOT EXISTS could fail with a spurious Table ... doesn't exist error. (Bug #37921)

  • A statement of the form DELETE FROM table WHERE primary_key=value, or UPDATE table SET column=new_value WHERE primary_key=value, appeared to succeed even when there was no row whose primary key column had the stated value, with the server reporting that 1 row had been changed.

    This issue was only known to affect MySQL Cluster NDB 6.3.11 and later NDB 6.3 versions. (Bug #37153)

  • Cluster API: Passing a value greater than 65535 to NdbInterpretedCode::add_val() and NdbInterpretedCode::sub_val() caused these methods to have no effect. (Bug #39536)

Changes in MySQL Cluster NDB 6.3.17 (5.1.27-ndb-6.3.17)

Bugs Fixed

  • Packaging: Support for the InnoDB storage engine was missing from the GPL source releases. An updated GPL source tarball, mysql-5.1.27-ndb-6.3.17-innodb.tar.gz, which includes the code needed to build InnoDB, can be found on the MySQL FTP site.

  • MgmtSrvr::allocNodeId() left a mutex locked following an Ambiguity for node id %d error. (Bug #39158)

  • An invalid path specification could cause a failure. (Bug #39026)

  • During transaction coordinator takeover (directly following a node failure), an LQH block finding an operation in the LOG_COMMIT state sent the LQH_TRANS_CONF signal twice, causing the TC to fail. (Bug #38930)

  • An invalid memory access caused the management server to crash on Solaris SPARC platforms. (Bug #38628)

  • A segfault in Logger::Log caused ndbd to hang indefinitely. (Bug #38609)

  • ndb_mgmd failed to start on older Linux distributions (with 2.4 kernels) that did not support epoll. (Bug #38592)

  • ndb_mgmd sometimes performed unnecessary network I/O with the client. This in combination with other factors led to long-running threads that were attempting to write to clients that no longer existed. (Bug #38563)

  • ndb_restore failed with a floating point exception due to a division by zero error when trying to restore certain data files. (Bug #38520)

  • A failed connection to the management server could cause a resource leak in ndb_mgmd. (Bug #38424)

  • Failure to parse configuration parameters could cause a memory leak in the NDB log parser. (Bug #38380)

  • Renaming an NDBCLUSTER table on one SQL node caused a trigger on this table to be deleted on another SQL node. (Bug #36658)

  • Attempting to add a UNIQUE INDEX twice to an NDBCLUSTER table, then deleting rows from the table could cause the MySQL Server to crash. (Bug #35599)

  • ndb_restore failed when a single table was specified. (Bug #33801)

  • GCP_COMMIT did not wait for transaction takeover during node failure. This could cause GCP_SAVE_REQ to be executed too early. This could also cause (very rarely) replication to skip rows. (Bug #30780)

  • Cluster API: Support for Multi-Range Read index scans using the old API (using, for example, NdbIndexScanOperation::setBound() or NdbIndexScanOperation::end_of_bound()) was dropped in MySQL Cluster NDB 6.2. This functionality is restored in MySQL Cluster NDB 6.3 beginning with 6.3.17, but remains unavailable in MySQL Cluster NDB 6.2. Both MySQL Cluster NDB 6.2 and 6.3 support Multi-Range Read scans through the NdbRecord API. (Bug #38791)

  • Cluster API: The NdbScanOperation::readTuples() method could be called multiple times without error. (Bug #38717)

  • Cluster API: Certain Multi-Range Read scans involving IS NULL and IS NOT NULL comparisons failed with an error in the NDB local query handler. (Bug #38204)

  • Cluster API: Problems with the public headers prevented NDB applications from being built with warnings turned on. (Bug #38177)

  • Cluster API: Creating an NdbScanFilter object using an NdbScanOperation object that had not yet had its readTuples() method called resulted in a crash when later attempting to use the NdbScanFilter. (Bug #37986)

  • Cluster API: Executing an NdbRecord interpreted delete created with an ANYVALUE option caused the transaction to abort. (Bug #37672)

Changes in MySQL Cluster NDB 6.3.16 (5.1.24-ndb-6.3.16)

Functionality Added or Changed

  • Event buffer lag reports are now written to the cluster log. (Bug #37427)

  • Added the --no-binlog option for ndb_restore. When used, this option prevents the restoration of a cluster backup from being written to the binary logs of SQL nodes. (Bug #30452)
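
    A hypothetical invocation; the connection string, node ID, backup ID, and --backup_path value are placeholders, and the remaining options are the usual ndb_restore options:

        ndb_restore -c mgmhost:1186 -n 2 -b 515 -r --no-binlog \
            --backup_path=/backups/BACKUP-515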

Bugs Fixed

  • Cluster API: Changing the system time on data nodes could cause MGM API applications to hang and the data nodes to crash. (Bug #35607)

  • Failure of a data node could sometimes cause mysqld to crash. (Bug #37628)

  • DELETE ... WHERE unique_index_column=value deleted the wrong row from the table. (Bug #37516)

  • If a subscription was terminated while a node was down, the epoch was not properly acknowledged by that node. (Bug #37442)

  • libmysqld failed to wait for the cluster binary log thread to terminate before exiting. (Bug #37429)

  • In rare circumstances, a connection followed by a disconnection could give rise to a stale connection where the connection still existed but was not seen by the transporter. (Bug #37338)

  • Queries against NDBCLUSTER tables were cached only if autocommit was in use. (Bug #36692)

  • Cluster API: When some operations succeeded and some failed following a call to NdbTransaction::execute(Commit, AO_IgnoreOnError), a race condition could cause spurious occurrences of NDB API Error 4011 (Internal error). (Bug #37158)

  • Cluster API: Creating a table on an SQL node, then starting an NDB API application that listened for events from this table, then dropping the table from an SQL node, prevented data node restarts. (Bug #32949, Bug #37279)

  • Cluster API: A buffer overrun in NdbBlob::setValue() caused erroneous results on Mac OS X. (Bug #31284)

Changes in MySQL Cluster NDB 6.3.15 (5.1.24-ndb-6.3.15)

Bugs Fixed

  • In certain rare situations, a failure could occur with the error Can't use string ("value") as a HASH ref while "strict refs" in use. (Bug #43022)

  • Under some circumstances, a failed CREATE TABLE could mean that subsequent CREATE TABLE statements caused node failures. (Bug #37092)

  • A failed attempt to create an NDB table could in some cases lead to resource leaks or cluster failures. (Bug #37072)

  • Attempting to create a native backup of NDB tables having a large number of NULL columns and data could lead to node failures. (Bug #37039)

  • Checking of API node connections was not efficiently handled. (Bug #36843)

  • Attempting to delete a nonexistent row from a table containing a TEXT or BLOB column within a transaction caused the transaction to fail. (Bug #36756)

    References: See also Bug #36851.

  • If the combined total of tables and indexes in the cluster was greater than 4096, issuing START BACKUP caused data nodes to fail. (Bug #36044)

  • Where column values to be compared in a query were of the VARCHAR or VARBINARY types, NDBCLUSTER passed a value padded to the full size of the column, which caused unnecessary data to be sent to the data nodes. This also had the effect of wasting CPU and network bandwidth, and causing condition pushdown to be disabled where it could (and should) otherwise have been applied. (Bug #35393)

  • When dropping a table failed for any reason (such as the cluster being in single user mode), the corresponding .ndb file was nevertheless removed.

  • Cluster API: Ordered index scans were not pruned correctly where a partitioning key was specified with an EQ-bound. (Bug #36950)

  • Cluster API: When an insert operation involving BLOB data was attempted on a row which already existed, the expected duplicate key error was not reported and the transaction was incorrectly aborted. In some cases, the existing row could also become corrupted. (Bug #36851)

    References: See also Bug #36756.

  • Cluster API: NdbApi.hpp depended on ndb_global.h, which was not actually installed, causing the compilation of programs that used NdbApi.hpp to fail. (Bug #35853)

Changes in MySQL Cluster NDB 6.3.14 (5.1.24-ndb-6.3.14)

Bugs Fixed

  • SET GLOBAL ndb_extra_logging caused mysqld to crash. (Bug #36547)

  • A race condition caused by a failure in epoll handling could cause data nodes to fail. (Bug #36537)

  • Under certain rare circumstances, the failure of the new master node while attempting a node takeover would cause takeover errors to repeat without being resolved. (Bug #36199, Bug #36246, Bug #36247, Bug #36276)

  • When more than one SQL node connected to the cluster at the same time, creation of the mysql.ndb_schema table failed unnecessarily on one of them with an explicit Table exists error. (Bug #35943)

  • mysqld failed to start after running mysql_upgrade. (Bug #35708)

  • Notification of cascading master node failures could sometimes fail to be transmitted correctly (that is, transmission of the NF_COMPLETEREP signal could fail), leading to transactions hanging and timing out (NDB error 4012), scans hanging, and failure of the management server process. (Bug #32645)

  • NDB error 1427 (Api node died, when SUB_START_REQ reached node) was incorrectly classified as a schema error rather than a temporary error.

  • If an API node disconnected and then reconnected during Start Phase 8, then the connection could be blocked—that is, the QMGR kernel block failed to detect that the API node was in fact connected to the cluster, causing issues with the NDB Subscription Manager (SUMA).

  • Cluster API: Accessing the debug version of libndbclient using dlopen() resulted in a segmentation fault. (Bug #35927)

  • Cluster API: Attempting to pass a nonexistent column name to the equal() and setValue() methods of NdbOperation caused NDB API applications to crash. Now the column name is checked, and an error is returned in the event that the column is not found. (Bug #33747)

Changes in MySQL Cluster NDB 6.3.13 (5.1.24-ndb-6.3.13)

Bugs Fixed

  • Important Change: mysqld_safe now traps Signal 13 (SIGPIPE) so that this signal no longer kills the MySQL server process. (Bug #33984)

  • Node or system restarts could fail due to an uninitialized variable in the DBTUP kernel block. This issue was found in MySQL Cluster NDB 6.3.11. (Bug #35797)

  • If an error occurred while executing a statement involving a BLOB or TEXT column of an NDB table, a memory leak could result. (Bug #35593)

  • It was not possible to determine the value used for the --ndb-cluster-connection-pool option in the mysql client. Now this value is reported as a system status variable. (Bug #35573)
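
    A sketch, assuming the status variable is named Ndb_cluster_connection_pool:

        SHOW GLOBAL STATUS LIKE 'Ndb_cluster_connection_pool';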

  • The ndb_waiter utility wrongly calculated timeouts. (Bug #35435)

  • A SELECT on a table with a nonindexed, large VARCHAR column which resulted in condition pushdown on this column could cause mysqld to crash. (Bug #35413)

  • ndb_restore incorrectly handled some data types when applying log files from backups. (Bug #35343)

  • In some circumstances, a stopped data node was handled incorrectly, leading to redo log space being exhausted following an initial restart of the node, or an initial or partial restart of the cluster (the wrong GCI might be used in such cases). This could happen, for example, when a node was stopped following the creation of a new table, but before a new LCP could be executed. (Bug #35241)

  • SELECT ... LIKE ... queries yielded incorrect results when used on NDB tables. As part of this fix, condition pushdown of such queries has been disabled; re-enabling it is expected to be done as part of a later, permanent fix for this issue. (Bug #35185)

  • ndb_mgmd reported errors to STDOUT rather than to STDERR. (Bug #35169)

  • Nested Multi-Range Read scans failed when the second Multi-Range Read released the first read's unprocessed operations, sometimes leading to an SQL node crash. (Bug #35137)

  • In some situations, a problem with synchronizing checkpoints between nodes could cause a system restart or a node restart to fail with Error 630 during restore of TX. (Bug #34756)

    References: This bug is a regression of Bug #34033.

  • A node failure during an initial node restart followed by another node start could cause the master data node to fail, because it incorrectly gave the node permission to start even if the invalidated node's LCP was still running. (Bug #34702)

  • When a secondary index on a DECIMAL column was used to retrieve data from an NDB table, no results were returned even when the target table contained a matching value in the indexed column. (Bug #34515)

  • An UPDATE on an NDB table that set a new value for a unique key column could cause subsequent queries to fail. (Bug #34208)

  • If a data node in one node group was placed in the not started state (using node_id RESTART -n), it was not possible to stop a data node in a different node group. (Bug #34201)

  • Numerous NDBCLUSTER test failures occurred in builds compiled using icc on IA64 platforms. (Bug #31239)

  • If a START BACKUP command was issued while ndb_restore was running, the backup being restored could be overwritten. (Bug #26498)

  • REPLACE statements did not work correctly with NDBCLUSTER tables when all columns were not explicitly listed. (Bug #22045)

  • CREATE TABLE and ALTER TABLE statements using ENGINE=NDB or ENGINE=NDBCLUSTER caused mysqld to fail on Solaris 10 for x86 platforms. (Bug #19911)

  • Cluster API: Closing a scan before it was executed caused the application to segfault. (Bug #36375)

  • Cluster API: Using NDB API applications from older MySQL Cluster versions with libndbclient from newer ones caused the cluster to fail. (Bug #36124)

  • Cluster API: Some ordered index scans could return tuples out of order. (Bug #35908)

  • Cluster API: Scans having no bounds set were handled incorrectly. (Bug #35876)

  • Cluster API: NdbScanFilter::getNdbOperation(), which was inadvertently removed in MySQL Cluster NDB 6.3.11, has been restored. (Bug #35854)

Changes in MySQL Cluster NDB 6.3.10 (5.1.23-ndb-6.3.10)

Bugs Fixed

  • Due to the reduction of the number of local checkpoints from 3 to 2 in MySQL Cluster NDB 6.3.8, a data node running ndbd from MySQL Cluster NDB 6.3.8 or later, but started with a file system created by an earlier version, could incorrectly invalidate local checkpoints too early during the startup process, causing the node to fail. (Bug #34596)

Changes in MySQL Cluster NDB 6.3.9 (5.1.23-ndb-6.3.9)

Bugs Fixed

  • Cluster failures could sometimes occur when performing more than three parallel takeovers during node restarts or system restarts. This affected MySQL Cluster NDB 6.3.x releases only. (Bug #34445)

  • Upgrading a cluster that used a DataMemory setting in excess of 16 GB caused data nodes to fail. (Bug #34378)

  • Performing many SQL statements on NDB tables while in autocommit mode caused a memory leak in mysqld. (Bug #34275)

  • In certain rare circumstances, a race condition could occur between an aborted insert and a delete, leading to a data node crash. (Bug #34260)

  • Multi-table updates using ordered indexes during handling of node failures could cause other data nodes to fail. (Bug #34216)

  • When configured with NDB support, MySQL failed to compile using gcc 4.3 on 64-bit FreeBSD systems. (Bug #34169)

  • The failure of a DDL statement could sometimes lead to node failures when attempting to execute subsequent DDL statements. (Bug #34160)

  • Extremely long SELECT statements (where the text of the statement was in excess of 50000 characters) against NDB tables returned empty results. (Bug #34107)

  • When configured with NDB support, MySQL failed to compile on 64-bit FreeBSD systems. (Bug #34046)

  • Statements executing multiple inserts performed poorly on NDB tables having AUTO_INCREMENT columns. (Bug #33534)

  • The ndb_waiter utility polled ndb_mgmd excessively when obtaining the status of cluster data nodes. (Bug #32025)

    References: See also Bug #32023.

  • Transaction atomicity was sometimes not preserved between reads and inserts under high loads. (Bug #31477)

  • Having tables with a great many columns could cause Cluster backups to fail. (Bug #30172)

  • Disk Data; Cluster Replication: Statements violating unique keys on Disk Data tables (such as attempting to insert NULL into a NOT NULL column) could cause data nodes to fail. When the statement was executed from the binary log, this could also result in failure of the slave cluster. (Bug #34118)

  • Disk Data: Updating in-memory columns of one or more rows of a Disk Data table, followed by deletion and re-insertion of these rows, caused data node failures. (Bug #33619)

Changes in MySQL Cluster NDB 6.3.8 (5.1.23-ndb-6.3.8)

Functionality Added or Changed

  • Important Change; Cluster API: Because NDB_LE_MemoryUsage.page_size_kb shows memory page sizes in bytes rather than kilobytes, it has been renamed to page_size_bytes. The name page_size_kb is now deprecated and thus subject to removal in a future release, although it currently remains supported for reasons of backward compatibility. See The Ndb_logevent_type Type, for more information about NDB_LE_MemoryUsage. (Bug #30271)

  • OPTIMIZE TABLE can now be interrupted. This can be done, for example, by killing the SQL thread performing the OPTIMIZE operation.
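
    For example, a long-running OPTIMIZE can be interrupted from a second client session; the thread ID shown is illustrative:

        -- Session 1:
        OPTIMIZE TABLE t1;

        -- Session 2: locate the thread running OPTIMIZE, then interrupt it:
        SHOW PROCESSLIST;
        KILL QUERY 123;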

  • Now only 2 local checkpoints are stored, rather than 3 as in previous MySQL Cluster versions. This lowers disk space requirements and reduces the size and number of redo log files needed.

  • The mysqld option --ndb-batch-size has been added. This enables control of the size of batches used for running transactions.
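
    A my.cnf sketch; the value shown (the default batch size, in bytes) is illustrative:

        [mysqld]
        ndbcluster
        ndb-batch-size=32768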

  • Node recovery can now be done in parallel, rather than sequentially, which can result in much faster recovery times.

  • Persistence of NDB tables can now be controlled using the session variables ndb_table_temporary and ndb_table_no_logging. ndb_table_no_logging causes NDB tables not to be checkpointed to disk. ndb_table_temporary has the same effect; in addition, when ndb_table_temporary is used, no NDB table schema files are created.
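
    A sketch of ndb_table_no_logging; the table and column names are illustrative:

        SET SESSION ndb_table_no_logging = 1;
        -- Tables created while the variable is set are not checkpointed to disk:
        CREATE TABLE t_nolog (a INT PRIMARY KEY) ENGINE=NDBCLUSTER;
        SET SESSION ndb_table_no_logging = 0;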

  • ndb_restore now supports basic attribute promotion; that is, data from a column of a given type can be restored to a column using a larger type. For example, Cluster backup data taken from a SMALLINT column can be restored to a MEDIUMINT, INT, or BIGINT column.

    For more information, see ndb_restore — Restore a MySQL Cluster Backup.

Bugs Fixed

  • Important Change; Disk Data: It is no longer possible on 32-bit systems to issue statements appearing to create Disk Data log files or data files greater than 4 GB in size. (Trying to create log files or data files larger than 4 GB on 32-bit systems led to unrecoverable data node failures; such statements now fail with NDB error 1515.) (Bug #29186)

  • Replication: The code implementing heartbeats did not check for possible errors in some circumstances; this left the dump thread hanging in a loop, waiting for heartbeats, even though the slave was no longer connected. (Bug #33332)

  • High numbers of insert operations, delete operations, or both could cause NDB error 899 (Rowid already allocated) to occur unnecessarily. (Bug #34033)

  • A periodic failure to flush the send buffer by the NDB TCP transporter could cause an unnecessary delay of 10 ms between operations. (Bug #34005)

  • DROP TABLE did not free all data memory. This bug was observed in MySQL Cluster NDB 6.3.7 only. (Bug #33802)

  • A race condition could occur (very rarely) when the release of a GCI was followed by a data node failure. (Bug #33793)

  • Some tuple scans caused the wrong memory page to be accessed, leading to invalid results. This issue could affect both in-memory and Disk Data tables. (Bug #33739)

  • A failure to initialize an internal variable led to sporadic crashes during cluster testing. (Bug #33715)

  • The server failed to properly reject creation of an NDB table having an unindexed AUTO_INCREMENT column. (Bug #30417)

  • Issuing an INSERT ... ON DUPLICATE KEY UPDATE concurrently with or following a TRUNCATE TABLE statement on an NDB table failed with NDB error 4350 Transaction already aborted. (Bug #29851)

  • The Cluster backup process could not detect when there was no more disk space and instead continued to run until killed manually. Now the backup fails with an appropriate error when disk space is exhausted. (Bug #28647)

  • It was possible in config.ini to define cluster nodes having node IDs greater than the maximum permitted value. (Bug #28298)

  • Under some circumstances, a recovering data node did not use its own data, instead copying data from another node even when this was not required. This in effect bypassed the optimized node recovery protocol and caused recovery times to be unnecessarily long. (Bug #26913)

  • Cluster API: Transactions containing inserts or reads could hang during NdbTransaction::execute() calls made from NDB API applications built against a MySQL Cluster version that did not support micro-GCPs, when accessing a cluster running a later version that did support micro-GCPs. This issue was observed while upgrading from MySQL Cluster NDB 6.1.23 to MySQL Cluster NDB 6.2.10, when an API application built against the earlier version attempted to access a data node already running the later version, even after micro-GCPs had been disabled by setting TimeBetweenEpochs equal to 0. (Bug #33895)

  • Cluster API: When reading a BIT(64) value using NdbOperation::getValue(), 12 bytes were written to the buffer rather than the expected 8 bytes. (Bug #33750)

Changes in MySQL Cluster NDB 6.3.7 (5.1.23-ndb-6.3.7)

Functionality Added or Changed

  • Compressed local checkpoints and backups are now supported, resulting in space savings of 50% or more over uncompressed LCPs and backups. Compression can be enabled in the config.ini file using the two new data node configuration parameters CompressedLCP and CompressedBackup, respectively.
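
    A config.ini sketch enabling both types of compression for all data nodes:

        [ndbd default]
        CompressedLCP=1
        CompressedBackup=1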

  • It is now possible to cause statements occurring within the same transaction to be run as a batch by setting the session variable transaction_allow_batching to 1 or ON. To use this feature, autocommit must be disabled.
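
    A sketch of its use; the table is illustrative:

        SET autocommit = 0;
        SET SESSION transaction_allow_batching = ON;
        INSERT INTO t1 VALUES (1, 'a');
        INSERT INTO t1 VALUES (2, 'b');
        UPDATE t1 SET c2 = 'c' WHERE c1 = 1;
        COMMIT;  -- statements in the transaction can be sent as a batch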

  • OPTIMIZE TABLE is now supported for NDBCLUSTER tables, subject to the following limitations:

    • Only in-memory tables are supported. OPTIMIZE still has no effect on Disk Data tables.

    • Only variable-length columns are supported. However, you can force columns defined using fixed-length data types to be dynamic using the ROW_FORMAT or COLUMN_FORMAT option with a CREATE TABLE or ALTER TABLE statement.

    Memory reclaimed from an NDB table using OPTIMIZE is generally available to the cluster, and not confined to the table from which it was recovered, unlike the case with memory freed using DELETE.

    The performance of OPTIMIZE on NDB tables can be regulated by adjusting the value of the ndb_optimization_delay system variable.
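
    For example (the delay value, in milliseconds, is illustrative):

        SET GLOBAL ndb_optimization_delay = 20;
        OPTIMIZE TABLE t1;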

Bugs Fixed

  • Partitioning: When partition pruning on an NDB table resulted in an ordered index scan spanning only one partition, any descending flag for the scan was wrongly discarded, causing ORDER BY DESC to be treated as ORDER BY ASC, MAX() to be handled incorrectly, and similar problems. (Bug #33061)

  • When all data and SQL nodes in the cluster were shut down abnormally (that is, other than by using STOP in the cluster management client), ndb_mgm used excessive amounts of CPU. (Bug #33237)

  • When using micro-GCPs, if a node failed while preparing for a global checkpoint, the master node would use the wrong GCI. (Bug #32922)

  • Under some conditions, performing an ALTER TABLE on an NDBCLUSTER table failed with a Table is full error, even when only 25% of DataMemory was in use and the result should have been a table using less memory (for example, changing a VARCHAR(100) column to VARCHAR(80)). (Bug #32670)

Changes in MySQL Cluster NDB 6.3.6 (5.1.22-ndb-6.3.6)

Functionality Added or Changed

  • The output of the ndb_mgm client SHOW and STATUS commands now indicates when the cluster is in single user mode. (Bug #27999)

  • Unnecessary reads when performing a primary key or unique key update have been reduced, and in some cases, eliminated. (It is almost never necessary to read a record prior to an update, the lone exception to this being when a primary key is updated, since this requires a delete followed by an insert, which must be prepared by reading the record.) Depending on the number of primary key and unique key lookups that are performed per transaction, this can yield a considerable improvement in performance.

  • Batched operations are now better supported for DELETE and UPDATE statements (for example, UPDATE ... WHERE ... and multi-row DELETE).

  • Introduced the Ndb_execute_count status variable, which measures the number of round trips made by queries to the NDB kernel.
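
    It can be inspected like any other status variable:

        SHOW GLOBAL STATUS LIKE 'Ndb_execute_count';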

Bugs Fixed

  • In a cluster running in diskless mode and with arbitration disabled, the failure of a data node during an insert operation caused other data nodes to fail. (Bug #31980)

  • An insert or update with combined range and equality constraints failed when run against an NDB table with the error Got unknown error from NDB. An example of such a statement would be UPDATE t1 SET b = 5 WHERE a IN (7,8) OR a >= 10;. (Bug #31874)

  • An error in an if statement in sql/ could potentially lead to an infinite loop in the event of a failure when working with AUTO_INCREMENT columns in NDB tables. (Bug #31810)

  • The NDB storage engine code was not safe for strict-alias optimization in gcc 4.2.1. (Bug #31761)

  • ndb_restore displayed incorrect backup file version information. This meant (for example) that, when attempting to restore a backup made from a MySQL 5.1.22 cluster to a MySQL Cluster NDB 6.3.3 cluster, the restore process failed with the error Restore program older than backup version. Not supported. Use new restore program. (Bug #31723)

  • Following an upgrade, ndb_mgmd failed with an ArbitrationError. (Bug #31690)

  • The NDB management client command node_id REPORT MEMORY provided no output when node_id was the node ID of a management or API node. Now, when this occurs, the management client responds with Node node_id: is not a data node. (Bug #29485)

  • Performing DELETE operations after a data node had been shut down could lead to inconsistent data following a restart of the node. (Bug #26450)

  • UPDATE IGNORE could sometimes fail on NDB tables due to the use of uninitialized data when checking for duplicate keys to be ignored. (Bug #25817)

Changes in MySQL Cluster NDB 6.3.5 (5.1.22-ndb-6.3.5)

Bugs Fixed

  • A query against a table with TEXT or BLOB columns that would return more than a certain amount of data failed with Got error 4350 'Transaction already aborted' from NDBCLUSTER. (Bug #31482)

    References: This bug was introduced by Bug #29102.

Changes in MySQL Cluster NDB 6.3.4 (5.1.22-ndb-6.3.4)

Functionality Added or Changed

  • Incompatible Change: The --ndb_optimized_node_selection startup option for mysqld now permits a wider range of values and corresponding behaviors for SQL nodes when selecting a transaction coordinator.

    You should be aware that the default value and behavior as well as the value type used for this option have changed, and that you may need to update the setting used for this option in your my.cnf file prior to upgrading mysqld. See Server System Variables, for more information.
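
    A hypothetical my.cnf entry; check Server System Variables for the permitted values before relying on any specific setting:

        [mysqld]
        ndb-optimized-node-selection=3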

Bugs Fixed

  • It was possible in some cases for a node group to be lost due to missed local checkpoints following a system restart. (Bug #31525)

  • NDB tables having names containing nonalphanumeric characters (such as $) were not discovered correctly. (Bug #31470)

  • A node failure during a local checkpoint could lead to a subsequent failure of the cluster during a system restart. (Bug #31257)

  • A cluster restart could sometimes fail due to an issue with table IDs. (Bug #30975)

  • Transaction timeouts were not handled well in some circumstances, leading to an excessive number of transactions being aborted unnecessarily. (Bug #30379)

  • In some cases, the cluster management server logged entries multiple times following a restart of ndb_mgmd. (Bug #29565)

  • ndb_mgm --help did not display any information about the -a option. (Bug #29509)

  • An interpreted program of sufficient size and complexity could cause all cluster data nodes to shut down due to buffer overruns. (Bug #29390)

  • The cluster log was formatted inconsistently and contained extraneous newline characters. (Bug #25064)

Changes in MySQL Cluster NDB 6.3.3 (5.1.22-ndb-6.3.3)

Functionality Added or Changed

  • Mapping of NDB error codes to MySQL storage engine error codes has been improved. (Bug #28423)

Bugs Fixed

  • Partitioning: EXPLAIN PARTITIONS reported partition usage by queries on NDB tables according to the standard MySQL hash function rather than the hash function used in the NDB storage engine. (Bug #29550)

  • Attempting to restore a backup made on a cluster host using one endianness to a machine using the other endianness could cause the cluster to fail. (Bug #29674)

  • The description of the --print option provided in the output from ndb_restore --help was incorrect. (Bug #27683)

  • Restoring a backup made on a cluster host using one endianness to a machine using the other endianness failed for BLOB and DATETIME columns. (Bug #27543, Bug #30024)

Changes in MySQL Cluster NDB 6.3.2 (5.1.22-ndb-6.3.2)

Functionality Added or Changed

  • It is now possible to control whether fixed-width or variable-width storage is used for a given column of an NDB table by means of the COLUMN_FORMAT specifier as part of the column's definition in a CREATE TABLE or ALTER TABLE statement.

    It is also possible to control whether a given column of an NDB table is stored in memory or on disk, using the STORAGE specifier as part of the column's definition in a CREATE TABLE or ALTER TABLE statement.

    For permitted values and other information about COLUMN_FORMAT and STORAGE, see CREATE TABLE Syntax.
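
    A sketch combining both specifiers; this assumes a Disk Data tablespace ts1 has already been created, and all names are illustrative:

        CREATE TABLE t1 (
            c1 INT PRIMARY KEY,
            c2 VARCHAR(100) COLUMN_FORMAT DYNAMIC,  -- force variable-width storage
            c3 INT STORAGE DISK                     -- store this column on disk
        ) TABLESPACE ts1 ENGINE=NDBCLUSTER;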

  • A new cluster management server startup option --bind-address makes it possible to restrict management client connections to ndb_mgmd to a single host and port. For more information, see ndb_mgmd — The MySQL Cluster Management Server Daemon.
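
    A hypothetical invocation; the configuration file path, address, and port are placeholders:

        ndb_mgmd -f /var/lib/mysql-cluster/config.ini --bind-address=198.51.100.10:1186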

  • Online ADD COLUMN, ADD INDEX, and DROP INDEX operations can now be performed explicitly for NDB tables—that is, without copying or locking of the affected tables—using ALTER ONLINE TABLE. Indexes can also be created and dropped online using CREATE INDEX and DROP INDEX, respectively, using the ONLINE keyword.

    You can force operations that would otherwise be performed online to be done offline using the OFFLINE keyword.

    Renaming of tables and columns for NDB and MyISAM tables is performed in place without table copying.

    For more information, see ALTER TABLE Online Operations in MySQL Cluster, CREATE INDEX Syntax, and DROP INDEX Syntax.
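
    A sketch of these online operations; online ADD COLUMN is subject to restrictions (for example, the added column must be dynamic and nullable), and all names are illustrative:

        ALTER ONLINE TABLE t1 ADD COLUMN c4 INT COLUMN_FORMAT DYNAMIC;
        CREATE ONLINE INDEX i1 ON t1 (c2);
        DROP ONLINE INDEX i1 ON t1;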

Bugs Fixed

  • When an NDB event was left behind but the corresponding table was later recreated and received a new table ID, the event could not be dropped. (Bug #30877)

  • When creating an NDB table with a column that has COLUMN_FORMAT = DYNAMIC, but the table itself uses ROW_FORMAT=FIXED, the table is considered dynamic, but any columns for which the row format is unspecified default to FIXED. Now in such cases the server issues the warning Row format FIXED incompatible with dynamic attribute column_name. (Bug #30276)

  • An insufficiently descriptive and potentially misleading Error 4006 (Connect failure - out of connection objects...) was produced when either of the following two conditions occurred:

    1. There were no more transaction records in the transaction coordinator

    2. An NDB object in the NDB API was initialized with insufficient parallelism

    Separate error messages are now generated for each of these two cases. (Bug #11313)

Changes in MySQL Cluster NDB 6.3.0 (5.1.19-ndb-6.3.0)

Functionality Added or Changed

  • Reporting functionality has been significantly enhanced in this release:

    • A new configuration parameter BackupReportFrequency now makes it possible to cause the management client to provide status reports at regular intervals as well as for such reports to be written to the cluster log (depending on cluster event logging levels).

    • A new REPORT command has been added in the cluster management client. REPORT BackupStatus enables you to obtain a backup status report at any time during a backup. REPORT MemoryUsage reports the current data memory and index memory used by each data node; an example appears following this list. For more about the REPORT command, see Commands in the MySQL Cluster Management Client.

    • ndb_restore now provides running reports of its progress when restoring a backup. In addition, a complete status report on the backup is written to the cluster log.
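
    The REPORT command might be issued in the ndb_mgm client as follows; the node ID is illustrative:

        ndb_mgm> ALL REPORT MemoryUsage
        ndb_mgm> 2 REPORT BackupStatus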

  • A new configuration parameter ODirect causes NDB to attempt using O_DIRECT writes for local checkpoints, backups, and redo logs, often lowering CPU usage.
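
    A config.ini sketch enabling it for all data nodes:

        [ndbd default]
        ODirect=1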
