Changes in MySQL Cluster NDB 6.2.16 (5.1.28-ndb-6.2.16) (2008-10-08)

This is a bugfix release, fixing recently discovered bugs in the previous MySQL Cluster NDB 6.2 release.

MySQL Cluster NDB 6.2 no longer in development. MySQL Cluster NDB 6.2 is no longer being developed or maintained; if you are using a MySQL Cluster NDB 6.2 release, you should upgrade to the latest version of MySQL Cluster.

Obtaining MySQL Cluster NDB 6.2. You can download the latest MySQL Cluster NDB 6.2 source code and binaries for supported platforms from the MySQL downloads site.

This release incorporates all bugfixes and changes made in previous MySQL Cluster releases, as well as all bugfixes and feature changes which were added in mainline MySQL 5.1 through MySQL 5.1.28 (see Changes in MySQL 5.1.28 (2008-08-28)).


Please refer to our bug database at http://bugs.mysql.com/ for more details about the individual bugs fixed in this version.

Functionality Added or Changed

  • It is no longer a requirement for database autodiscovery that an SQL node already be connected to the cluster at the time that a database is created on another SQL node. It is also no longer necessary to issue CREATE DATABASE (or CREATE SCHEMA) statements on an SQL node joining the cluster after a database is created, in order for the new SQL node to see the database and any NDBCLUSTER tables that it contains. (Bug #39612)

  • Event buffer lag reports are now written to the cluster log. (Bug #37427)

  • Added the --no-binlog option for ndb_restore. When used, this option prevents data restored from a cluster backup from being written to SQL node binary logs. (Bug #30452)

  • The ndbd and ndb_mgmd man pages have been reclassified from volume 1 to volume 8. (Bug #34642)

Bugs Fixed

  • Important Change; Disk Data: It is no longer possible on 32-bit systems to issue statements appearing to create Disk Data log files or data files greater than 4 GB in size. (Trying to create log files or data files larger than 4 GB on 32-bit systems led to unrecoverable data node failures; such statements now fail with NDB error 1515.) (Bug #29186)

  • Cluster API: Changing the system time on data nodes could cause MGM API applications to hang and the data nodes to crash. (Bug #35607)

  • In certain rare situations, one of the Perl-based cluster utilities could fail with the error Can't use string ("value") as a HASH ref while "strict refs" in use. (Bug #43022)

  • Heavy DDL usage caused the mysqld processes to hang due to a timeout error (NDB error code 266). (Bug #39885)

  • Executing EXPLAIN SELECT on an NDBCLUSTER table could cause mysqld to crash. (Bug #39872)

  • Starting the MySQL Server with the --ndbcluster option plus an invalid command-line option (for example, using mysqld --ndbcluster --foobar) caused it to hang while shutting down the binary log thread. (Bug #39635)

  • Dropping and then re-creating a database on one SQL node caused other SQL nodes to hang. (Bug #39613)

  • Setting a low value of MaxNoOfLocalScans (< 100) and performing a large number of (certain) scans could cause the Transaction Coordinator to run out of scan fragment records, and then crash. Now when this resource is exhausted, the cluster returns Error 291 (Out of scanfrag records in TC (increase MaxNoOfLocalScans)) instead. (Bug #39549)

  • Creating a unique index on an NDBCLUSTER table caused a memory leak in the NDB subscription manager (SUMA), which could lead to mysqld hanging because the resource shortage was not correctly reported back to the NDB kernel. (Bug #39518)

    References: See also Bug #39450.

  • Unique identifiers in tables having no primary key were not cached. This fix has been observed to increase the efficiency of INSERT operations on such tables by as much as 50%. (Bug #39267)

  • MgmtSrvr::allocNodeId() left a mutex locked following an Ambiguity for node id %d error. (Bug #39158)

  • An invalid path specification caused the affected program to fail. (Bug #39026)

  • During transaction coordinator takeover (directly after node failure), the LQH block finding an operation in the LOG_COMMIT state sent an LQH_TRANS_CONF signal twice, causing the TC to fail. (Bug #38930)

  • An invalid memory access caused the management server to crash on Solaris/SPARC platforms. (Bug #38628)

  • A segfault in Logger::Log caused ndbd to hang indefinitely. (Bug #38609)

  • ndb_mgmd failed to start on older Linux distributions (2.4 kernels) that did not support e-polling. (Bug #38592)

  • When restarting a data node, an excessively long shutdown message could cause the node process to crash. (Bug #38580)

  • ndb_mgmd sometimes performed unnecessary network I/O with the client. This in combination with other factors led to long-running threads that were attempting to write to clients that no longer existed. (Bug #38563)

  • ndb_restore failed with a floating point exception due to a division by zero error when trying to restore certain data files. (Bug #38520)

  • A failed connection to the management server could cause a resource leak in ndb_mgmd. (Bug #38424)

  • Failure to parse configuration parameters could cause a memory leak in the NDB log parser. (Bug #38380)

  • After a forced shutdown and initial restart of the cluster, it was possible for SQL nodes to retain .frm files corresponding to NDBCLUSTER tables that had been dropped, and thus to be unaware that these tables no longer existed. In such cases, attempting to re-create the tables using CREATE TABLE IF NOT EXISTS could fail with a spurious Table ... doesn't exist error. (Bug #37921)

  • Failure of a data node could sometimes cause mysqld to crash. (Bug #37628)

  • If subscription was terminated while a node was down, the epoch was not properly acknowledged by that node. (Bug #37442)

  • In rare circumstances, a connection followed by a disconnection could give rise to a stale connection where the connection still existed but was not seen by the transporter. (Bug #37338)

  • Under some circumstances, a failed CREATE TABLE could mean that subsequent CREATE TABLE statements caused node failures. (Bug #37092)

  • A failed attempt to create an NDB table could in some cases lead to resource leaks or cluster failures. (Bug #37072)

  • Checking of API node connections was not efficiently handled. (Bug #36843)

  • Attempting to delete a nonexistent row from a table containing a TEXT or BLOB column within a transaction caused the transaction to fail. (Bug #36756)

    References: See also Bug #36851.

  • Queries against NDBCLUSTER tables were cached only if autocommit was in use. (Bug #36692)

  • Renaming an NDBCLUSTER table on one SQL node caused a trigger on this table to be deleted on another SQL node. (Bug #36658)

  • SET GLOBAL ndb_extra_logging caused mysqld to crash. (Bug #36547)

  • If the combined total of tables and indexes in the cluster was greater than 4096, issuing START BACKUP caused data nodes to fail. (Bug #36044)

  • When more than one SQL node connected to the cluster at the same time, creation of the mysql.ndb_schema table failed on one of them with an unnecessary Table exists error. (Bug #35943)

  • mysqld failed to start after running mysql_upgrade. (Bug #35708)

  • Attempting to add a UNIQUE INDEX twice to an NDBCLUSTER table, then deleting rows from the table could cause the MySQL Server to crash. (Bug #35599)

  • If an error occurred while executing a statement involving a BLOB or TEXT column of an NDB table, a memory leak could result. (Bug #35593)

  • It was not possible to determine the value used for the --ndb-cluster-connection-pool option in the mysql client. Now this value is reported as a system status variable. (Bug #35573)

  • The ndb_waiter utility wrongly calculated timeouts. (Bug #35435)

  • Where column values to be compared in a query were of the VARCHAR or VARBINARY types, NDBCLUSTER passed a value padded to the full size of the column, which caused unnecessary data to be sent to the data nodes. This also had the effect of wasting CPU and network bandwidth, and causing condition pushdown to be disabled where it could (and should) otherwise have been applied. (Bug #35393)

  • ndb_restore incorrectly handled some data types when applying log files from backups. (Bug #35343)

  • In some circumstances, a stopped data node was handled incorrectly, leading to redo log space being exhausted following an initial restart of the node, or an initial or partial restart of the cluster (the wrong GCI might be used in such cases). This could happen, for example, when a node was stopped following the creation of a new table, but before a new LCP could be executed. (Bug #35241)

  • SELECT ... LIKE ... queries yielded incorrect results when used on NDB tables. As part of this fix, condition pushdown of such queries has been disabled; re-enabling it is expected to be done as part of a later, permanent fix for this issue. (Bug #35185)

  • ndb_mgmd reported errors to STDOUT rather than to STDERR. (Bug #35169)

  • Nested Multi-Range Read scans failed when the second Multi-Range Read released the first read's unprocessed operations, sometimes leading to an SQL node crash. (Bug #35137)

  • In some situations, a problem with synchronizing checkpoints between nodes could cause a system restart or a node restart to fail with Error 630 during restore of TX. (Bug #34756)

    References: This bug is a regression of Bug #34033.

  • When a secondary index on a DECIMAL column was used to retrieve data from an NDB table, no results were returned even when the indexed column of the target table contained a matching value. (Bug #34515)

  • An UPDATE on an NDB table that set a new value for a unique key column could cause subsequent queries to fail. (Bug #34208)

  • If a data node in one node group was placed in the not started state (using node_id RESTART -n), it was not possible to stop a data node in a different node group. (Bug #34201)

  • When configured with NDB support, MySQL failed to compile on 64-bit FreeBSD systems. (Bug #34046)

  • ndb_restore failed when a single table was specified. (Bug #33801)

  • Numerous NDBCLUSTER test failures occurred in builds compiled using icc on IA64 platforms. (Bug #31239)

  • GCP_COMMIT did not wait for transaction takeover during node failure. This could cause GCP_SAVE_REQ to be executed too early. This could also cause (very rarely) replication to skip rows. (Bug #30780)

  • CREATE TABLE and ALTER TABLE statements using ENGINE=NDB or ENGINE=NDBCLUSTER caused mysqld to fail on Solaris 10 for x86 platforms. (Bug #19911)

  • NDB error 1427 (Api node died, when SUB_START_REQ reached node) was incorrectly classified as a schema error rather than a temporary error.

  • When dropping a table failed for any reason (such as when the cluster was in single user mode), the corresponding .ndb file was removed anyway.

  • If an API node disconnected and then reconnected during Start Phase 8, the connection could be blocked; that is, the QMGR kernel block failed to detect that the API node was in fact connected to the cluster, causing issues with the NDB Subscription Manager (SUMA).

  • Replication: When flushing tables, there was a slight chance that the flush occurred between the processing of one table map event and the next. Since the tables were opened one by one, subsequent locking of tables would cause the slave to crash. This problem was observed when replicating NDBCLUSTER or InnoDB tables, when executing multi-table updates, and when a trigger or a stored routine performed an (additional) insert on a table so that two tables were effectively being inserted into in the same statement. (Bug #36197)

  • Cluster Replication: In some cases, dropping a database on the master could cause table logging to fail on the slave, or, when using a debug build, could cause the slave mysqld to fail completely. (Bug #39404)

  • Cluster Replication: During a parallel node restart, the starting nodes could (sometimes) incorrectly synchronize subscriptions among themselves. Instead, this synchronization now takes place only among nodes that have actually (completely) started. (Bug #38767)

  • Cluster Replication: Data was written to the binary log with --log-slave-updates disabled. (Bug #37472)

  • Cluster Replication: Performing SELECT ... FROM mysql.ndb_apply_status before the mysqld process had connected to the cluster failed, and caused this table never to be created. (Bug #36123)

  • Cluster Replication: In some cases, when updating only one or some columns in a table, the complete row was written to the binary log instead of only the updated column or columns, even when ndb_log_updated_only was set to 1. (Bug #35208)

  • Cluster API: Passing a value greater than 65535 to NdbInterpretedCode::add_val() and NdbInterpretedCode::sub_val() caused these methods to have no effect. (Bug #39536)
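
    The following is a minimal sketch of the affected call, using a hypothetical table and column (tab and "val" are placeholders supplied by the caller); it is illustrative only, not the program from the bug report:

      #include <NdbApi.hpp>

      // Sketch: builds interpreted code that adds a constant larger than
      // 65535 to a column; such values previously had no effect (Bug #39536).
      int add_large_constant(const NdbDictionary::Table *tab)
      {
        NdbInterpretedCode code(tab);
        const NdbDictionary::Column *col = tab->getColumn("val"); // hypothetical column
        if (col == NULL)
          return -1;
        if (code.add_val((Uint32) col->getColumnNo(), (Uint32) 70000) != 0) // > 65535
          return -1;
        if (code.interpret_exit_ok() != 0 || code.finalise() != 0)
          return -1;
        // The finalised program would then be attached to an update operation.
        return 0;
      }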

  • Cluster API: The NdbScanOperation::readTuples() method could be called multiple times without error. (Bug #38717)

  • Cluster API: Certain Multi-Range Read scans involving IS NULL and IS NOT NULL comparisons failed with an error in the NDB local query handler. (Bug #38204)

  • Cluster API: Problems with the public headers prevented NDB applications from being built with warnings turned on. (Bug #38177)

  • Cluster API: Creating an NdbScanFilter object using an NdbScanOperation object that had not yet had its readTuples() method called resulted in a crash when later attempting to use the NdbScanFilter. (Bug #37986)
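
    The following sketch shows the safe ordering, assuming trans is an open transaction and tab an existing table; the column number and bound are chosen arbitrarily for illustration:

      #include <NdbApi.hpp>

      // Sketch: readTuples() is called on the scan operation before an
      // NdbScanFilter is constructed from it (see Bug #37986).
      int scan_with_filter(NdbTransaction *trans,
                           const NdbDictionary::Table *tab)
      {
        NdbScanOperation *scan = trans->getNdbScanOperation(tab);
        if (scan == NULL)
          return -1;
        // Define the scan first ...
        if (scan->readTuples(NdbOperation::LM_CommittedRead) != 0)
          return -1;
        // ... and only then build the filter on top of it.
        NdbScanFilter filter(scan);
        Uint32 minVal = 10;  // hypothetical bound
        if (filter.begin(NdbScanFilter::AND) != 0 ||
            filter.cmp(NdbScanFilter::COND_GT, 0, &minVal, sizeof(minVal)) != 0 ||
            filter.end() != 0)
          return -1;
        return trans->execute(NdbTransaction::NoCommit);
      }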

  • Cluster API: Executing an NdbRecord interpreted delete created with an ANYVALUE option caused the transaction to abort. (Bug #37672)

  • Cluster API: When some operations succeeded and some failed following a call to NdbTransaction::execute(Commit, AO_IgnoreError), a race condition could cause spurious occurrences of NDB API Error 4011 (Internal error). (Bug #37158)
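
    A sketch of the usual pattern for inspecting per-operation results after an ignore-error commit follows; it assumes the operations were already defined on the transaction, and that AbortOption is scoped under NdbOperation as in recent NDB API versions:

      #include <NdbApi.hpp>
      #include <cstdio>

      // Sketch: with AO_IgnoreError, execute() may report overall success
      // even though individual operations failed, so each completed
      // operation's error status is checked afterwards (see Bug #37158).
      int commit_ignoring_errors(NdbTransaction *trans)
      {
        int ret = trans->execute(NdbTransaction::Commit,
                                 NdbOperation::AO_IgnoreError);
        const NdbOperation *op = trans->getNextCompletedOperation(NULL);
        while (op != NULL)
        {
          const NdbError &err = op->getNdbError();
          if (err.code != 0)
            fprintf(stderr, "operation failed: %d (%s)\n",
                    err.code, err.message);
          op = trans->getNextCompletedOperation(op);
        }
        return ret;
      }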

  • Cluster API: Ordered index scans were not pruned correctly where a partitioning key was specified with an EQ-bound. (Bug #36950)

  • Cluster API: When an insert operation involving BLOB data was attempted on a row that already existed, no duplicate key error was correctly reported, and the transaction was incorrectly aborted. In some cases, the existing row could also become corrupted. (Bug #36851)

    References: See also Bug #26756.

  • Cluster API: Accessing the debug version of libndbclient using dlopen() resulted in a segmentation fault. (Bug #35927)

  • Cluster API: NdbApi.hpp depended on ndb_global.h, which was not actually installed, causing the compilation of programs that used NdbApi.hpp to fail. (Bug #35853)

  • Cluster API: Attempting to pass a nonexistent column name to the equal() and setValue() methods of NdbOperation caused NDB API applications to crash. Now the column name is checked, and an error is returned in the event that the column is not found. (Bug #33747)
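
    A sketch of an insert that checks these return values, using hypothetical column names "id" and "name":

      #include <NdbApi.hpp>

      // Sketch: equal() and setValue() now return nonzero instead of
      // crashing when given a column name that does not exist (Bug #33747).
      int insert_row(NdbTransaction *trans, const NdbDictionary::Table *tab)
      {
        NdbOperation *op = trans->getNdbOperation(tab);
        if (op == NULL || op->insertTuple() != 0)
          return -1;
        if (op->equal("id", (Uint32) 42) != 0)     // hypothetical key column
          return -1;                               // unknown column reported here
        if (op->setValue("name", "example") != 0)  // hypothetical data column
          return -1;
        return trans->execute(NdbTransaction::Commit);
      }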

  • Cluster API: Creating a table on an SQL node, then starting an NDB API application that listened for events from this table, then dropping the table from an SQL node, prevented data node restarts. (Bug #32949, Bug #37279)
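
    For reference, an event listener of the kind described above looks roughly like the following sketch, which assumes an event named "my_event" (a placeholder) was previously created on the table through the Dictionary::createEvent() interface:

      #include <NdbApi.hpp>

      // Sketch: subscribes to a previously created table event and polls
      // for changes (the scenario described in Bug #32949, Bug #37279).
      int listen_for_events(Ndb *ndb)
      {
        NdbEventOperation *evop = ndb->createEventOperation("my_event");
        if (evop == NULL)
          return -1;
        evop->getValue("id");       // hypothetical column to monitor
        if (evop->execute() != 0)   // start receiving events
          return -1;
        while (ndb->pollEvents(1000) > 0)  // wait up to 1 second per round
        {
          NdbEventOperation *data;
          while ((data = ndb->nextEvent()) != NULL)
          {
            // Handle insert/update/delete events for the table here.
          }
        }
        return ndb->dropEventOperation(evop);
      }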

  • Cluster API: A buffer overrun in NdbBlob::setValue() caused erroneous results on Mac OS X. (Bug #31284)

  • Cluster API: Relocation errors were encountered when trying to compile NDB API applications on a number of platforms, including 64-bit Linux. As a result, libmysys, libmystrings, and libdbug have been changed from normal libraries to noinst libtool helper libraries. They are no longer installed as separate libraries; instead, all necessary symbols from these are added directly to libndbclient. This means that NDB API programs now need to be linked using only -lndbclient. (Bug #29791, Bug #11746931)
