Changes in MySQL Cluster NDB 7.0.8 (5.1.37-ndb-7.0.8) (2009-09-30)


MySQL Cluster NDB 7.0.8 was pulled shortly after release due to Bug #47844. Users seeking to upgrade from a previous MySQL Cluster NDB 7.0 release should instead use MySQL Cluster NDB 7.0.8a, which contains a fix for this bug, in addition to all bugfixes and improvements made in MySQL Cluster NDB 7.0.8.

This release incorporates new features in the NDB storage engine and fixes recently discovered bugs in MySQL Cluster NDB 7.0.7.

This release also incorporates all bugfixes and changes made in previous MySQL Cluster releases, as well as all bugfixes and feature changes which were added in mainline MySQL 5.1 through MySQL 5.1.37 (see Changes in MySQL 5.1.37 (2009-07-13)).


Please refer to our bug database at http://bugs.mysql.com/ for more details about the individual bugs fixed in this version.

Functionality Added or Changed

  • A new option --log-name is added for ndb_mgmd. This option can be used to provide a name for the current node and then to identify it in messages written to the cluster log. For more information, see ndb_mgmd — The MySQL Cluster Management Server Daemon. (Bug #47643)
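
    For example, the management server might be started as shown here, with the given name then used to identify this node in cluster log messages (the configuration file path and log name are illustrative):

    shell> ndb_mgmd -f /var/lib/mysql-cluster/config.ini --log-name=MGM-A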

  • --config-dir is now accepted by ndb_mgmd as an alias for the --configdir option. (Bug #42013)

  • Disk Data: Two new columns have been added to the output of ndb_desc to make it possible to determine how much of the disk space allocated to a given table or fragment remains free. (This information is not available from the INFORMATION_SCHEMA.FILES table, since the FILES table applies only to Disk Data files.) For more information, see ndb_desc — Describe NDB Tables. (Bug #47131)
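
    For example, the new columns should be visible in output such as that produced by the following invocation, where dt_1 and test are an illustrative Disk Data table and database, and the -p option includes per-fragment information:

    shell> ndb_desc dt_1 -d test -p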

Bugs Fixed

  • Important Change: Previously, the MySQL Cluster management node and data node programs, when run on Windows platforms, required the --nodaemon option to produce console output. Now, these programs run in the foreground when invoked from the command line on Windows, which is the same behavior that mysqld.exe displays on Windows. (Bug #45588)

  • Important Change; Cluster Replication: In a MySQL Cluster acting as a replication slave and having multiple SQL nodes, only the SQL node receiving events directly from the master recorded DDL statements in its binary log, and then only if this SQL node had binary logging enabled; all other SQL nodes in the slave cluster failed to log DDL statements, regardless of their individual --log-bin settings.

    The fix for this issue aligns binary logging of DDL statements with that of DML statements. In particular, you should take note of the following:

    • DDL and DML statements on the master cluster are logged with the server ID of the server that actually writes the log.

    • DDL and DML statements on the master cluster are logged by any attached mysqld that has binary logging enabled.

    • Replicated DDL and DML statements on the slave are logged by any attached mysqld that has both --log-bin and --log-slave-updates enabled.

    • Replicated DDL and DML statements are logged with the server ID of the original (master) MySQL server by any attached mysqld that has both --log-bin and --log-slave-updates enabled.

    Effect on upgrades. When upgrading from a previous MySQL Cluster release, you should perform one of the following:

    1. Upgrade servers that are performing binary logging before those that are not; do not perform any DDL on old SQL nodes until all SQL nodes have been upgraded.

    2. Make sure that --log-slave-updates is enabled on all SQL nodes performing binary logging prior to the upgrade, so that all DDL is captured.
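
    For the second method, an SQL node in the slave cluster that is expected to capture replicated DDL might be run with server options along these lines (the server ID shown is illustrative):

    [mysqld]
    ndbcluster
    log-bin
    log-slave-updates
    server-id=11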


    Logging of DML statements was not affected by this issue.

    (Bug #45756)

  • The following issues with error logs generated by ndbmtd were addressed:

    1. The version string was sometimes truncated, or even missing entirely, depending on the number of threads in use (the more threads, the worse the problem). Now the version string is shown in full, as are the file names of all trace files (where available).

    2. In the event of a crash, the thread number of the thread that crashed was not printed. Now this information is supplied, if available.

    (Bug #47629)

  • mysqld allocated an excessively large buffer for handling BLOB values due to overestimating their size. (For each row, enough space was allocated to accommodate every BLOB or TEXT column value in the result set.) This could adversely affect performance when using tables containing BLOB or TEXT columns; in a few extreme cases, this issue could also cause the host system to run out of memory unexpectedly. (Bug #47574)

    References: See also Bug #47572, Bug #47573.

  • NDBCLUSTER uses a dynamically allocated buffer to store BLOB or TEXT column data that is read from rows in MySQL Cluster tables.

    When an instance of the NDBCLUSTER table handler was recycled (this can happen due to table definition cache pressure or to operations such as FLUSH TABLES or ALTER TABLE), if the last row read contained blobs of zero length, the buffer was not freed, even though the reference to it was lost. This resulted in a memory leak.

    For example, consider the table defined and populated as shown here:

    CREATE TABLE t (a INT NOT NULL PRIMARY KEY, b LONGTEXT) ENGINE=NDBCLUSTER;

    INSERT INTO t VALUES (1, REPEAT('F', 20000));
    INSERT INTO t VALUES (2, '');

    Now repeatedly execute a SELECT on this table such that the zero-length LONGTEXT row is read last, followed by a FLUSH TABLES statement (which forces the handler object to be re-used), as shown here:

    SELECT a, length(b) FROM t ORDER BY a;
    FLUSH TABLES;

    Prior to the fix, this resulted in a memory leak proportional to the size of the stored LONGTEXT value each time these two statements were executed. (Bug #47573)

    References: See also Bug #47572, Bug #47574.

  • Large transactions involving joins between tables containing BLOB columns used excessive memory. (Bug #47572)

    References: See also Bug #47573, Bug #47574.

  • After an NDB table had an ALTER ONLINE TABLE operation performed on it in a MySQL Cluster running a MySQL Cluster NDB 6.3.x release, it could not be upgraded online to a MySQL Cluster NDB 7.0.x release. This issue was detected using MySQL Cluster NDB 6.3.20, but is likely to affect any MySQL Cluster NDB 6.3.x release supporting online DDL operations. (Bug #47542)

  • When using multi-threaded data nodes (ndbmtd) with NoOfReplicas set to a value greater than 2, attempting to restart any of the data nodes caused a forced shutdown of the entire cluster. (Bug #47530)

  • A variable was left uninitialized while a data node copied data from its peers as part of its startup routine; if the starting node died during this phase, this could lead to a crash of the cluster when the node was later restarted. (Bug #47505)

  • Handling of LQH_TRANS_REQ signals was done incorrectly in DBLQH when the transaction coordinator failed during an LQH_TRANS_REQ session. This led to incorrect handling of multiple node failures, particularly when using ndbmtd. (Bug #47476)

  • The NDB kernel's parser (in ndb/src/common/util/Parser.cpp) did not interpret the backslash (\) character correctly. (Bug #47426)

  • During an online alter table operation, the new table definition was made visible to users during the prepare phase, when it should have been exposed only during and after the commit. This issue could affect NDB API applications, mysqld processes, or data node processes. (Bug #47375)

  • Aborting an online add column operation (for example, due to resource problems on a single data node, but not others) could lead to a forced node shutdown. (Bug #47364)

  • Clients attempting to connect to the cluster during shutdown could sometimes cause the management server to crash. (Bug #47325)

  • The size of the table descriptor pool used in the DBTUP kernel block was incorrect. This could lead to a data node crash when an LQH sent a CREATE_TAB_REF signal. (Bug #47215)

    References: See also Bug #44908.

  • When a data node restarts, it first runs the redo log until reaching the latest restorable global checkpoint; after this it scans the remainder of the redo log file, searching for entries that should be invalidated so they are not used in any subsequent restarts. (It is possible, for example, if restoring GCI number 25, that there might be entries belonging to GCI 26 in the redo log.) However, under certain rare conditions, during the invalidation process, the redo log files themselves were not always closed while scanning ahead in the redo log. In rare cases, this could lead to MaxNoOfOpenFiles being exceeded, causing the data node to crash. (Bug #47171)

  • For very large values of MaxNoOfTables + MaxNoOfAttributes, the calculation for StringMemory could overflow when creating large numbers of tables, leading to NDB error 773 (Out of string memory, please modify StringMemory config parameter), even when StringMemory was set to 100 (100 percent). (Bug #47170)
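
    A StringMemory value of no more than 100 is interpreted as a percentage of the calculated maximum (as with the value of 100 mentioned above), while a greater value is taken as an absolute number of bytes. The parameter might be set in config.ini as shown here (the value is illustrative):

    [ndbd default]
    StringMemory=100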

  • The default value for the StringMemory configuration parameter, unlike other MySQL Cluster configuration parameters, was not set in ndb/src/mgmsrv/ConfigInfo.cpp. (Bug #47166)

  • Signals from a failed API node could be received after an API_FAILREQ signal (see Operations and Signals) had been received from that node, which could result in invalid states for processing subsequent signals. Now, all pending signals from a failing API node are processed before any API_FAILREQ signal is received. (Bug #47039)

    References: See also Bug #44607.

  • When reloading the management server configuration, only the last changed parameter was logged. (Bug #47036)

  • When using ndbmtd, a parallel DROP TABLE operation could cause data nodes to have different views of which tables should be included in local checkpoints; this discrepancy could lead to a node failure during an LCP. (Bug #46873)

  • Using triggers on NDB tables caused ndb_autoincrement_prefetch_sz to be treated as having the NDB kernel's internal default value (32) and the value for this variable as set on the cluster's SQL nodes to be ignored. (Bug #46712)
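
    For example, this variable can be set on an SQL node as shown here (the value is illustrative); prior to the fix, using triggers on NDB tables caused such a setting to be ignored in favor of the internal default:

    mysql> SET GLOBAL ndb_autoincrement_prefetch_sz = 256;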

  • Now, when started with --initial --reload, ndb_mgmd tries to connect to an existing ndb_mgmd process having a confirmed configuration, and to copy that configuration. This works only if another management server is found, and the configuration files used by both management nodes are exactly the same.

    If no other management server is found, the local configuration file is read and used. With this change, when performing a rolling restart of a MySQL Cluster having multiple management nodes, it is now necessary to stop all ndb_mgmd processes, to start the first of these with the --reload or --initial option (or both), and then to start any remaining management nodes without using either of these two options. For more information, see Performing a Rolling Restart of a MySQL Cluster. (Bug #45495, Bug #46488, Bug #11753966, Bug #11754823)

    References: See also Bug #42015, Bug #11751233.
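
    For example, when restarting the management nodes during such a rolling restart, the commands might look like this (the configuration file path is illustrative):

    shell> ndb_mgmd -f /var/lib/mysql-cluster/config.ini --reload   # first management node only
    shell> ndb_mgmd -f /var/lib/mysql-cluster/config.ini            # each remaining management node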

  • On Windows, ndbd --initial could hang in an endless loop while attempting to remove directories. (Bug #45402)

  • For multi-threaded data nodes, insufficient fragment records were allocated in the DBDIH NDB kernel block, which could lead to error 306 when creating many tables; the number of fragment records allocated did not take into account the number of LQH instances. (Bug #44908)

  • Running an ALTER TABLE statement while an NDB backup was in progress caused mysqld to crash. (Bug #44695)

  • When performing auto-discovery of tables on individual SQL nodes, NDBCLUSTER attempted to overwrite existing MyISAM .frm files and corrupted them.

    Workaround. In the mysql client, create a new table (t2) with same definition as the corrupted table (t1). Use your system shell or file manager to rename the old .MYD file to the new file name (for example, mv t1.MYD t2.MYD). In the mysql client, repair the new table, drop the old one, and rename the new table using the old file name (for example, RENAME TABLE t2 TO t1).
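
    Assuming that the corrupted table t1 consists of a single INT column (the definition and commands here are illustrative), the workaround might look like this:

    mysql> CREATE TABLE t2 (c1 INT) ENGINE=MyISAM;
    shell> mv t1.MYD t2.MYD        # from within the database directory
    mysql> REPAIR TABLE t2;
    mysql> DROP TABLE t1;
    mysql> RENAME TABLE t2 TO t1;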

    (Bug #42614)

  • When started with the --initial and --reload options, if ndb_mgmd could not find a configuration file or connect to another management server, it appeared to hang. Now, while trying to fetch its configuration from another management node, ndb_mgmd prints a message (Trying to get configuration from other mgmd(s)) every 30 seconds for as long as it has not yet been able to do so. (Bug #42015)

    References: See also Bug #45495.

  • Running ndb_restore with the --print or --print_log option could cause it to crash. (Bug #40428, Bug #33040)

  • An insert on an NDB table was not always flushed properly before performing a scan. One way in which this issue could manifest was that LAST_INSERT_ID() sometimes failed to return correct values when using a trigger on an NDB table. (Bug #38034)
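
    A hypothetical case in which this behavior could be observed (table and trigger names are illustrative):

    mysql> CREATE TABLE t1 (a INT AUTO_INCREMENT PRIMARY KEY) ENGINE=NDB;
    mysql> CREATE TABLE t2 (a INT) ENGINE=NDB;
    mysql> CREATE TRIGGER tr AFTER INSERT ON t1 FOR EACH ROW INSERT INTO t2 VALUES (NEW.a);
    mysql> INSERT INTO t1 VALUES (NULL);
    mysql> SELECT LAST_INSERT_ID();

    Prior to the fix, the final SELECT could sometimes return an incorrect value rather than the value just inserted into t1.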

  • When a data node received a TAKE_OVERTCCONF signal from the master before that node had received a NODE_FAILREP, a race condition could in theory result. (Bug #37688)

    References: See also Bug #25364, Bug #28717.

  • Some joins on large NDB tables having TEXT or BLOB columns could cause mysqld processes to leak memory. The joins did not need to reference the TEXT or BLOB columns directly for this issue to occur. (Bug #36701)

  • On Mac OS X 10.5, commands entered in the management client failed and sometimes caused the client to hang, although management client commands invoked using the --execute (or -e) option from the system shell worked normally.

    For example, the following command failed with an error and hung until killed manually, as shown here:

    ndb_mgm> SHOW      
    Warning, event thread startup failed, degraded printouts as result, errno=36

    However, the same management client command, invoked from the system shell as shown here, worked correctly:

    shell> ndb_mgm -e "SHOW"

    (Bug #35751)

    References: See also Bug #34438.

  • Replication: In some cases, a STOP SLAVE statement could cause the replication slave to crash. This issue was specific to MySQL on Windows or Macintosh platforms. (Bug #45238, Bug #45242, Bug #45243, Bug #46013, Bug #46014, Bug #46030)

    References: See also Bug #40796.

  • Disk Data: Calculation of free space for Disk Data table fragments was sometimes done incorrectly. This could lead to unnecessary allocation of new extents even when sufficient space was available in existing ones for inserted data. In some cases, this might also lead to crashes when restarting data nodes.

    This miscalculation was not reflected in the contents of the INFORMATION_SCHEMA.FILES table, as it applied to extents allocated to a fragment, and not to a file.

    (Bug #47072)

  • Cluster API: In some circumstances, if an API node encountered a data node failure between the creation of a transaction and the start of a scan using that transaction, then any subsequent calls to startTransaction() and closeTransaction() could cause the same transaction to be started and closed repeatedly. (Bug #47329)

  • Cluster API: Performing multiple operations using the same primary key within the same NdbTransaction::execute() call could lead to a data node crash.

    This fix does not change the fact that performing multiple operations using the same primary key within the same execute() call is not supported; because there is no way to determine the order of such operations, the result of such combined operations remains undefined.

    (Bug #44065)

    References: See also Bug #44015.

  • API: The fix for Bug #24507 could lead in some cases to client application failures due to a race condition. Now the server waits for the dummy thread to return before exiting, thus making sure that only one thread can initialize the POSIX threads library. (Bug #42850)
