MySQL Cluster NDB 7.0.8 was pulled shortly after release due to Bug #47844. Users seeking to upgrade from a previous MySQL Cluster NDB 7.0 release should instead use MySQL Cluster NDB 7.0.8a, which contains a fix for this bug, in addition to all bugfixes and improvements made in MySQL Cluster NDB 7.0.8.
This release incorporates new features in the
NDB storage engine and fixes
recently discovered bugs in MySQL Cluster NDB 7.0.7.
This release also incorporates all bugfixes and changes made in previous MySQL Cluster releases, as well as all bugfixes and feature changes which were added in mainline MySQL 5.1 through MySQL 5.1.37 (see Changes in MySQL 5.1.37 (2009-07-13)).
Please refer to our bug database at http://bugs.mysql.com/ for more details about the individual bugs fixed in this version.
A new option, --log-name, is added for ndb_mgmd. This option can be used to provide a name for the current node, which is then used to identify the node in messages written to the cluster log. For more information, see ndb_mgmd — The MySQL Cluster Management Server Daemon. (Bug #47643)
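For example, the option might be used as shown here (the name MGM-A and the configuration file path are placeholder values, not requirements):

```shell
# Start the management server with a name used to identify it in the
# cluster log (MGM-A and the config file path are example values):
shell> ndb_mgmd -f /etc/mysql-cluster/config.ini --log-name=MGM-A
```

Messages written to the cluster log by this management node are then labeled MGM-A rather than a default identifier.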
--config-dir is now accepted by ndb_mgmd as an alias for the --configdir option. (Bug #42013)
Disk Data: Two new columns have been added to the output of ndb_desc to make it possible to determine how much of the disk space allocated to a given table or fragment remains free. (This information is not available from the INFORMATION_SCHEMA.FILES table, since the FILES table applies only to Disk Data files.) For more information, see ndb_desc — Describe NDB Tables. (Bug #47131)
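As a sketch, the new information can be viewed by running ndb_desc with the -p (extra partition information) option against a Disk Data table (the host, database, and table names here are placeholders):

```shell
# -c gives the management server host and -d the database;
# -p prints per-fragment information, which includes the new columns:
shell> ndb_desc -c mgmhost -d mydb dd_t -p
```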
Important Change: Previously, the MySQL Cluster management node and data node programs, when run on Windows platforms, required the --nodaemon option to produce console output. Now, these programs run in the foreground when invoked from the command line on Windows, which is the same behavior that mysqld.exe displays on Windows. (Bug #45588)
Important Change; Cluster Replication: In a MySQL Cluster acting as a replication slave and having multiple SQL nodes, only the SQL node receiving events directly from the master recorded DDL statements in its binary log, and then only if that SQL node had binary logging enabled; other SQL nodes in the slave cluster failed to log DDL statements, regardless of their individual binary logging settings.
The fix for this issue aligns binary logging of DDL statements with that of DML statements. In particular, you should take note of the following:
DDL and DML statements on the master cluster are logged with the server ID of the server that actually writes the log.
DDL and DML statements on the master cluster are logged by any attached mysqld that has binary logging enabled.
Effect on upgrades. When upgrading from a previous MySQL Cluster release, you should do either one of the following:
Upgrade servers that are performing binary logging before those that are not; do not perform any DDL on “old” SQL nodes until all SQL nodes have been upgraded.
Make sure that --log-slave-updates is enabled on all SQL nodes performing binary logging prior to the upgrade, so that all DDL is captured.
Logging of DML statements was not affected by this issue.
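The second approach above can be sketched as a my.cnf fragment for each binary-logging SQL node in the slave cluster (the server ID and log basename are placeholder values):

```ini
[mysqld]
ndbcluster
server-id         = 3            # placeholder; must be unique in the setup
log-bin           = mysqld-bin   # enable binary logging
log-slave-updates                # also log updates received from the master
```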
The following issues with error logs generated by ndbmtd were addressed:
The version string was sometimes truncated, or not shown at all, depending on the number of threads in use (the more threads, the worse the problem). Now the version string is shown in full, as are the file names for all trace files (where available).
In the event of a crash, the thread number of the thread that crashed was not printed. Now this information is supplied, if available.
mysqld allocated an excessively large buffer for handling BLOB values due to overestimating their size. (For each row, enough space was allocated to accommodate every TEXT column value in the result set.) This could adversely affect performance when using tables containing TEXT columns; in a few extreme cases, this issue could also cause the host system to run out of memory unexpectedly. (Bug #47574)
References: See also: Bug #47572, Bug #47573.
When an instance of the NDBCLUSTER table handler was recycled (this can happen due to table definition cache pressure or to operations such as ALTER TABLE), if the last row read contained blobs of zero length, the buffer was not freed, even though the reference to it was lost. This resulted in a memory leak.
For example, consider the table defined and populated as shown here:
CREATE TABLE t (a INT PRIMARY KEY, b LONGTEXT) ENGINE=NDB;
INSERT INTO t VALUES (1, REPEAT('F', 20000));
INSERT INTO t VALUES (2, '');

SELECT a, LENGTH(b) FROM t ORDER BY a;
FLUSH TABLES;
Prior to the fix, this resulted in a memory leak proportional to the size of the stored LONGTEXT value each time these two statements were executed. (Bug #47573)
References: See also: Bug #47572, Bug #47574.
Large transactions involving joins between tables containing BLOB columns used excessive memory. (Bug #47572)
References: See also: Bug #47573, Bug #47574.
If an NDB table had an ALTER ONLINE TABLE operation performed on it in a MySQL Cluster running a MySQL Cluster NDB 6.3.x release, it could not be upgraded online to a MySQL Cluster NDB 7.0.x release. This issue was detected using MySQL Cluster NDB 6.3.20, but is likely to affect any MySQL Cluster NDB 6.3.x release supporting online DDL operations. (Bug #47542)
When using multi-threaded data nodes (ndbmtd) with NoOfReplicas set to a value greater than 2, attempting to restart any of the data nodes caused a forced shutdown of the entire cluster. (Bug #47530)
A variable was left uninitialized while a data node copied data from its peers as part of its startup routine; if the starting node died during this phase, this could lead to a crash of the cluster when the node was later restarted. (Bug #47505)
Handling of LQH_TRANS_REQ signals was done incorrectly in DBLQH when the transaction coordinator failed during an LQH_TRANS_REQ session. This led to incorrect handling of multiple node failures, particularly when using ndbmtd. (Bug #47476)
The NDB kernel's parser (in
ndb/src/common/util/Parser.cpp) did not interpret the backslash (“
\”) character correctly. (Bug #47426)
During an online alter table operation, the new table definition was made available to users during the prepare phase, when it should be exposed only during and after a commit. This issue could affect NDB API applications, mysqld processes, or data node processes. (Bug #47375)
Aborting an online add column operation (for example, due to resource problems on a single data node, but not others) could lead to a forced node shutdown. (Bug #47364)
Clients attempting to connect to the cluster during shutdown could sometimes cause the management server to crash. (Bug #47325)
The size of the table descriptor pool used in the DBTUP kernel block was incorrect. This could lead to a data node crash when an LQH sent a CREATE_TAB_REF signal. (Bug #47215)
References: See also: Bug #44908.
When a data node restarts, it first runs the redo log until reaching the latest restorable global checkpoint; after this it scans the remainder of the redo log file, searching for entries that should be invalidated so they are not used in any subsequent restarts. (It is possible, for example, if restoring GCI number 25, that there might be entries belonging to GCI 26 in the redo log.) However, under certain rare conditions, during the invalidation process, the redo log files themselves were not always closed while scanning ahead in the redo log. In rare cases, this could lead to
MaxNoOfOpenFiles being exceeded, causing the data node to crash. (Bug #47171)
For very large values of MaxNoOfAttributes, the calculation for StringMemory could overflow when creating large numbers of tables, leading to NDB error 773 (Out of string memory, please modify StringMemory config parameter), even when StringMemory was set to 100 (100 percent). (Bug #47170)
The default value for the StringMemory configuration parameter, unlike other MySQL Cluster configuration parameters, was not set in ndb/src/mgmsrv/ConfigInfo.cpp. (Bug #47166)
Signals from a failed API node could be received after an API_FAILREQ signal (see Operations and Signals) had been received from that node, which could result in invalid states for processing subsequent signals. Now, all pending signals from a failing API node are processed before any API_FAILREQ signal is received. (Bug #47039)
References: See also: Bug #44607.
When reloading the management server configuration, only the last changed parameter was logged. (Bug #47036)
When using ndbmtd, a parallel DROP TABLE operation could cause data nodes to have different views of which tables should be included in local checkpoints; this discrepancy could lead to a node failure during an LCP. (Bug #46873)
Using triggers on NDB tables caused the ndb_autoincrement_prefetch_sz variable to be treated as having the NDB kernel's internal default value (32), and the value for this variable as set on the cluster's SQL nodes to be ignored. (Bug #46712)
Now, when started with
--reload, ndb_mgmd tries to connect to and to copy the configuration of an existing ndb_mgmd process with a confirmed configuration. This works only if another management server is found, and the configuration files used by both management nodes are exactly the same.
If no other management server is found, the local configuration file is read and used. With this change, it is now necessary, when performing a rolling restart of a MySQL Cluster having multiple management nodes, to stop all ndb_mgmd processes, and when restarting them, to start the first of these with the --reload or --initial option (or both options), and then to start any remaining management nodes without using either of these two options. For more information, see Performing a Rolling Restart of a MySQL Cluster. (Bug #45495, Bug #46488, Bug #11753966, Bug #11754823)
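Under the new behavior, a rolling restart of the management nodes in a cluster with two ndb_mgmd processes might be sketched as follows (the node IDs 1 and 2 and the configuration file path are placeholder values):

```shell
# Stop both management nodes first:
shell> ndb_mgm -e "1 STOP"
shell> ndb_mgm -e "2 STOP"

# Start the first management node with --reload so that it
# re-reads config.ini:
shell> ndb_mgmd -f /etc/mysql-cluster/config.ini --reload

# Start the second management node without --reload or --initial;
# it copies the confirmed configuration from the first:
shell> ndb_mgmd -f /etc/mysql-cluster/config.ini
```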
References: See also: Bug #42015, Bug #11751233.
On Windows, ndbd --initial could hang in an endless loop while attempting to remove directories. (Bug #45402)
For multi-threaded data nodes, insufficient fragment records were allocated in the DBDIH NDB kernel block, which could lead to error 306 when creating many tables; the number of fragment records allocated did not take into account the number of LQH instances. (Bug #44908)
When performing auto-discovery of tables on individual SQL nodes, NDBCLUSTER attempted to overwrite existing .frm files and corrupted them.
Workaround. In the mysql client, create a new table (t2) with the same definition as the corrupted table (t1). Use your system shell or file manager to rename the old .MYD file to the new file name (for example, mv t1.MYD t2.MYD). In the mysql client, repair the new table, drop the old one, and rename the new table using the old file name (for example, RENAME TABLE t2 TO t1).
When started with the --reload option, if ndb_mgmd could not find a configuration file or connect to another management server, it appeared to hang. Now, when trying to fetch its configuration from another management node, ndb_mgmd checks and signals (Trying to get configuration from other mgmd(s)) every 30 seconds that it has not yet done so. (Bug #42015)
References: See also: Bug #45495.
Running ndb_restore with the --print_log option could cause it to crash. (Bug #40428, Bug #33040)
An insert on an NDB table was not always flushed properly before performing a scan. One way in which this issue could manifest was that LAST_INSERT_ID() sometimes failed to return correct values when using a trigger on an NDB table. (Bug #38034)
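A sketch of the kind of statement sequence affected by this issue (all table and trigger names are illustrative, not taken from the bug report):

```sql
CREATE TABLE t1 (a INT AUTO_INCREMENT PRIMARY KEY, b INT) ENGINE=NDB;
CREATE TABLE audit (a INT) ENGINE=NDB;

CREATE TRIGGER t1_ai AFTER INSERT ON t1
  FOR EACH ROW INSERT INTO audit VALUES (NEW.a);

INSERT INTO t1 (b) VALUES (10);
-- Prior to the fix, LAST_INSERT_ID() could return an incorrect value
-- here, because the insert was not always flushed first:
SELECT LAST_INSERT_ID();
```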
When a data node received a TAKE_OVERTCCONF signal from the master before that node had received a NODE_FAILREP, a race condition could in theory result. (Bug #37688)
References: See also: Bug #25364, Bug #28717.
Some joins on large BLOB columns could cause mysqld processes to leak memory. The joins did not need to reference the BLOB columns directly for this issue to occur. (Bug #36701)
On OS X 10.5, commands entered in the management client failed and sometimes caused the client to hang, although management client commands invoked using the -e option from the system shell worked normally.
For example, the following command failed with an error and hung until killed manually, as shown here:
ndb_mgm> SHOW
Warning, event thread startup failed, degraded printouts as result, errno=36
^C
However, the same management client command, invoked from the system shell as shown here, worked correctly:
shell> ndb_mgm -e "SHOW"
References: See also: Bug #34438.
Replication: In some cases, a STOP SLAVE statement could cause the replication slave to crash. This issue was specific to MySQL on Windows or Macintosh platforms. (Bug #45238, Bug #45242, Bug #45243, Bug #46013, Bug #46014, Bug #46030)
References: See also: Bug #40796.
Disk Data: Calculation of free space for Disk Data table fragments was sometimes done incorrectly. This could lead to unnecessary allocation of new extents even when sufficient space was available in existing ones for inserted data. In some cases, this might also lead to crashes when restarting data nodes.
Note
This miscalculation was not reflected in the contents of the INFORMATION_SCHEMA.FILES table, as it applied to extents allocated to a fragment, and not to a file.
Cluster API: In some circumstances, if an API node encountered a data node failure between the creation of a transaction and the start of a scan using that transaction, then any subsequent calls to closeTransaction() could cause the same transaction to be started and closed repeatedly. (Bug #47329)
Cluster API: Performing multiple operations using the same primary key within the same NdbTransaction::execute() call could lead to a data node crash.
Note
This fix does not change the fact that performing multiple operations using the same primary key within the same execute() call is not supported; because there is no way to determine the order of such operations, the result of such combined operations remains undefined.
References: See also: Bug #44015.
API: The fix for Bug #24507 could lead in some cases to client application failures due to a race condition. Now the server waits for the “dummy” thread to return before exiting, thus making sure that only one thread can initialize the POSIX threads library. (Bug #42850)
References: This issue is a regression of: Bug #24507.