This is a bugfix release, fixing recently discovered bugs in the previous MySQL Cluster NDB 6.2 release.
This release incorporates all bugfixes and changes made in previous MySQL Cluster releases, as well as all bugfixes and feature changes which were added in mainline MySQL 5.1 through MySQL 5.1.51 (see Changes in MySQL 5.1.51 (2010-09-10)).
Please refer to our bug database at http://bugs.mysql.com/ for more details about the individual bugs fixed in this version.
Important Change: More finely grained control over restart-on-failure behavior is provided by two new data node configuration parameters. MaxStartFailRetries limits the total number of retries made before giving up on starting the data node; StartFailRetryDelay sets the number of seconds between retry attempts. These parameters are used only if StopOnError is set to 0.
For more information, see Defining MySQL Cluster Data Nodes. (Bug #54341)
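As an illustration, the new parameters might be set in config.ini as follows (the values shown are examples, not recommendations):

```ini
[ndbd default]
# Automatic restart on failure applies only when StopOnError is 0
StopOnError=0
# Give up after three failed attempts to start the data node
MaxStartFailRetries=3
# Wait 30 seconds between start attempts
StartFailRetryDelay=30
```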
Important Change: The Id configuration parameter used with MySQL Cluster management, data, and API nodes (including SQL nodes) is now deprecated, and the NodeId parameter (long available as a synonym for Id when configuring these types of nodes) should be used instead. Id continues to be supported for reasons of backward compatibility, but now generates a warning when used with these types of nodes, and is subject to removal in a future release of MySQL Cluster.
This change affects the name of the configuration parameter only, establishing a clear preference for NodeId over Id in the [api] and other node sections of the MySQL Cluster global configuration (config.ini) file. The behavior of unique identifiers for management, data, and SQL and API nodes in MySQL Cluster has not otherwise been altered.
The Id parameter as used in the [computer] section of the MySQL Cluster global configuration file is not affected by this change.
Cluster API: It is now possible to determine, using the ndb_desc utility or the NDB API, which data nodes contain replicas of which partitions. For ndb_desc, a new --extra-node-info option is added to cause this information to be included in its output. A new method Table::getFragmentNodes() is added to the NDB API for obtaining this information programmatically. (Bug #51184)
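Assuming a running cluster and a table mydb.mytable (hypothetical names), the new ndb_desc option might be invoked like this:

```shell
# Show, for each partition of the table, which data nodes hold its replicas
ndb_desc --extra-node-info -d mydb mytable
```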
On Solaris platforms, the MySQL Cluster management server and NDB API applications now use CLOCK_REALTIME as the default clock. (Bug #46183)
Cluster Replication: In circular replication, it was sometimes possible for an event to propagate such that it would be reapplied on all servers. This could occur when the originating server was removed from the replication circle and so could no longer act as the terminator of its own events, as normally happens in circular replication.
To prevent this from occurring, a new IGNORE_SERVER_IDS option is introduced for the CHANGE MASTER TO statement. This option takes a list of replication server IDs; events having a server ID which appears in this list are ignored and not applied. For more information, see CHANGE MASTER TO Syntax.
In conjunction with the introduction of IGNORE_SERVER_IDS, SHOW SLAVE STATUS has two new fields. Replicate_Ignore_Server_Ids displays information about ignored servers. Master_Server_Id displays the server_id value from the master. (Bug #47037)
References: See also: Bug #25998, Bug #27808.
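For example, after removing the server with server ID 3 from a replication circle, each remaining slave might be told to discard that server's events with a statement along these lines (the server ID shown is hypothetical):

```sql
STOP SLAVE;
CHANGE MASTER TO IGNORE_SERVER_IDS = (3);
START SLAVE;
```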
Important Change: The --with-ndb-port-base option for configure did not function correctly, and has been deprecated. Attempting to use this option produces the warning Ignoring deprecated option --with-ndb-port-base. Beginning with MySQL Cluster NDB 7.1.0, the deprecation warning itself is removed, and the --with-ndb-port-base option is simply handled as an unknown and invalid option if you try to use it. (Bug #47941)
References: See also: Bug #38502.
Important Change; Cluster Replication: In a MySQL Cluster acting as a replication slave and having multiple SQL nodes, only the SQL node receiving events directly from the master recorded DDL statements in its binary logs unless this SQL node also had binary logging enabled; otherwise, other SQL nodes in the slave cluster failed to log DDL statements, regardless of their individual binary logging settings.
The fix for this issue aligns binary logging of DDL statements with that of DML statements. In particular, you should take note of the following:
DDL and DML statements on the master cluster are logged with the server ID of the server that actually writes the log.
DDL and DML statements on the master cluster are logged by any attached mysqld that has binary logging enabled.
Effect on upgrades. When upgrading from a previous MySQL Cluster release, you should do one of the following:
Upgrade servers that are performing binary logging before those that are not; do not perform any DDL on “old” SQL nodes until all SQL nodes have been upgraded.
Make sure that --log-slave-updates is enabled on all SQL nodes performing binary logging prior to the upgrade, so that all DDL is captured.
Logging of DML statements was not affected by this issue.
The pkg installer for MySQL Cluster on Solaris did not perform a complete installation due to an invalid directory reference in the postinstall script. (Bug #41998)
Two unused test files in storage/ndb/test/sql contained incorrect versions of the GNU Lesser General Public License. The files and the directory containing them have been removed. (Bug #11810156)
References: See also: Bug #11810224.
It was possible for a DROP DATABASE statement to remove the parent tables of NDB hidden blob tables without removing the blob tables themselves, with the result that these tables, although hidden to MySQL clients, were still visible in the output of ndb_show_tables but could not be dropped using ndb_drop_table. (Bug #54788)
When performing an online ALTER TABLE operation where two or more SQL nodes connected to the cluster were generating binary logs, an incorrect message could be sent from the data nodes, causing mysqld processes to crash. This problem was often difficult to detect, because restarting SQL node or data node processes could clear the error, and because the crash in mysqld did not occur until several minutes after the erroneous message was sent and received. (Bug #54168)
A table having the maximum number of attributes permitted could not be backed up using the ndb_mgm client.
Note: The maximum number of attributes supported per table is not the same for all MySQL Cluster releases. See Limits Associated with Database Objects in MySQL Cluster, to determine the maximum that applies in the release which you are using.
Creating a Disk Data table, dropping it, then creating an in-memory table and performing a restart, could cause data node processes to fail with errors in the DBTUP kernel block if the new table's internal ID was the same as that of the old Disk Data table. This could occur because undo log handling during the restart did not check that the table having this ID was now in-memory only. (Bug #53935)
A table created while ndb_table_no_logging was enabled was not always stored to disk, which could lead to a data node crash with Error opening DIH schema files for table. (Bug #53934)
An internal buffer allocator used by NDB has the form alloc(wanted, minimum): it attempts to allocate wanted pages, but is permitted to allocate a smaller number of pages, between minimum and wanted inclusive. However, this allocator could sometimes allocate fewer than minimum pages, causing problems with multi-threaded building of ordered indexes. (Bug #53580)
With engine_condition_pushdown enabled, a query using a comparison on an ENUM column of an NDB table failed to return any results. This issue is resolved by disabling engine_condition_pushdown when performing such queries. (Bug #53360)
NDB truncated a column declared as DECIMAL(65,0) to a length of 64. Now such a column is accepted and handled correctly. In cases where the maximum length (65) is exceeded, NDB now raises an error instead of truncating. (Bug #53352)
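The new behavior can be illustrated with a sketch such as this (table and column names are hypothetical):

```sql
-- A column with the maximum supported precision (65) is now
-- accepted and stored by NDB without truncation:
CREATE TABLE t_dec (c1 DECIMAL(65,0)) ENGINE=NDB;
```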
When a watchdog shutdown occurred due to an error, the process was not terminated quickly enough, sometimes resulting in a hang. (To correct this, the internal _exit() function is now called in such situations, rather than exit().) (Bug #53246)
Setting DataMemory higher than 4G on 32-bit platforms caused ndbd to crash, instead of failing gracefully with an error. (Bug #52536, Bug #50928)
When running a SELECT on an NDB table having BLOB or TEXT columns, memory was allocated for the columns but was not freed until the end of the SELECT. This could cause problems with excessive memory usage when dumping (using for example mysqldump) tables with such columns and having many rows, large column values, or both. (Bug #52313)
References: See also: Bug #56488, Bug #50310.
With NoOfReplicas equal to 1 or 2, if data nodes from one node group were restarted 256 times and applications were running traffic such that it would encounter NDB error 1204 (Temporary failure, distribution changed), the live node in the node group would crash, causing the cluster to crash as well. The crash occurred only when the error was encountered on the 256th restart; having the error on any previous or subsequent restart did not cause any problems. (Bug #50930)
If a query on an NDB table compared a constant string value to a column, and the length of the string was greater than that of the column, condition pushdown did not work correctly. (The string was truncated to fit the column length before being pushed down.) Now in such cases, the condition is no longer pushed down. (Bug #49459)
When performing tasks that generated large amounts of I/O (such as when using ndb_restore), an internal memory buffer could overflow, causing data nodes to fail with signal 6.
Subsequent analysis showed that this buffer was not actually required, so this fix removes it. (Bug #48861)
Performing intensive inserts and deletes in parallel with a high scan load could cause data node crashes due to a failure in the DBACC kernel block. This was because checking for when to perform bucket splits or merges considered only the first 4 scans. (Bug #48700)
The creation of an ordered index on a table undergoing DDL operations could cause a data node crash under certain timing-dependent conditions. (Bug #48604)
In certain cases, performing very large inserts on NDB tables when using ndbmtd caused the memory allocations for ordered or unique indexes (or both) to be exceeded. This could cause aborted transactions and possibly lead to data node failures. (Bug #48037)
References: See also: Bug #48113.
SHOW CREATE TABLE did not display the AUTO_INCREMENT option for NDB tables having AUTO_INCREMENT columns. (Bug #47865)
Under some circumstances, when a scan encountered an error early in processing by the DBTC kernel block (see The DBTC Block), a node could crash as a result. Such errors could be caused by applications sending incorrect data, or, more rarely, by a DROP TABLE operation executed in parallel with a scan. (Bug #47831)
When starting a node and synchronizing tables, memory pages were allocated even for empty fragments. In certain situations, this could lead to insufficient memory. (Bug #47782)
mysqld allocated an excessively large buffer for handling BLOB values due to overestimating their size. (For each row, enough space was allocated to accommodate every TEXT column value in the result set.) This could adversely affect performance when using tables containing TEXT columns; in a few extreme cases, this issue could also cause the host system to run out of memory unexpectedly. (Bug #47574)
References: See also: Bug #47572, Bug #47573.
When an instance of the NDBCLUSTER table handler was recycled (this can happen due to table definition cache pressure or to operations such as ALTER TABLE), if the last row read contained blobs of zero length, the buffer was not freed, even though the reference to it was lost. This resulted in a memory leak.
For example, consider the table defined and populated as shown here:
CREATE TABLE t (a INT PRIMARY KEY, b LONGTEXT) ENGINE=NDB;
INSERT INTO t VALUES (1, REPEAT('F', 20000));
INSERT INTO t VALUES (2, '');

Now execute the following statements repeatedly:

SELECT a, length(b) FROM t ORDER BY a;
FLUSH TABLES;
Prior to the fix, this resulted in a memory leak proportional to the size of the stored LONGTEXT value each time these two statements were executed. (Bug #47573)
References: See also: Bug #47572, Bug #47574.
Large transactions involving joins between tables containing BLOB columns used excessive memory. (Bug #47572)
References: See also: Bug #47573, Bug #47574.
A variable was left uninitialized while a data node copied data from its peers as part of its startup routine; if the starting node died during this phase, this could lead to a crash of the cluster when the node was later restarted. (Bug #47505)
NDB stores blob column data in a separate, hidden table that is not accessible from MySQL. If this table was missing for some reason (such as accidental deletion of the file corresponding to the hidden table) when making a MySQL Cluster native backup, ndb_restore crashed when attempting to restore the backup. Now in such cases, ndb_restore fails with the error message Table table_name has blob column (column_name) with missing parts table in backup instead. (Bug #47289)
For very large values of MaxNoOfAttributes, the calculation for StringMemory could overflow when creating large numbers of tables, leading to NDB error 773 (Out of string memory, please modify StringMemory config parameter), even when StringMemory was set to 100 (100 percent). (Bug #47170)
The default value for the StringMemory configuration parameter, unlike other MySQL Cluster configuration parameters, was not set in ndb/src/mgmsrv/ConfigInfo.cpp. (Bug #47166)
Signals from a failed API node could be received after an API_FAILREQ signal (see Operations and Signals) has been received from that node, which could result in invalid states for processing subsequent signals. Now, all pending signals from a failing API node are processed before any API_FAILREQ signal is received. (Bug #47039)
References: See also: Bug #44607.
Using triggers on NDB tables caused ndb_autoincrement_prefetch_sz to be treated as having the NDB kernel's internal default value (32) and the value for this variable as set on the cluster's SQL nodes to be ignored. (Bug #46712)
Full table scans failed to execute when the cluster contained more than 21 table fragments.
Note: The number of table fragments in the cluster can be calculated as the number of data nodes, times 8 (that is, times the value of the internal constant MAX_FRAG_PER_NODE), divided by the number of replicas. Thus, when NoOfReplicas = 1, at least 3 data nodes were required to trigger this issue, and when NoOfReplicas = 2, at least 4 data nodes were required to do so.
Ending a line in the config.ini file with an extra semicolon character (;) caused reading the file to fail with a parsing error. (Bug #46242)
When combining an index scan and a delete with a primary key delete, the index scan and delete failed to initialize a flag properly. This could in rare circumstances cause a data node to crash. (Bug #46069)
Problems could arise when using VARCHAR columns whose size was greater than 341 characters and which used the utf8_unicode_ci collation. In some cases, this combination of conditions could cause certain queries and OPTIMIZE TABLE statements to crash mysqld. (Bug #45053)
If a node failed while sending a fragmented long signal, the receiving node did not free long signal assembly resources that it had allocated for the fragments of the long signal that had already been received. (Bug #44607)
When performing auto-discovery of tables on individual SQL nodes, NDBCLUSTER attempted to overwrite existing .frm files and corrupted them.
Workaround. In the mysql client, create a new table (t2) with the same definition as the corrupted table (t1). Use your system shell or file manager to rename the old .MYD file to the new file name (for example, mv t1.MYD t2.MYD). In the mysql client, repair the new table, drop the old one, and rename the new table using the old file name (for example, RENAME TABLE t2 TO t1).
When starting a cluster with a great many tables, it was possible for MySQL client connections as well as the slave SQL thread to issue DML statements against MySQL Cluster tables before mysqld had finished connecting to the cluster and making all tables writeable. This resulted in Table ... is read only errors for clients and the Slave SQL thread.
This issue is fixed by introducing the --ndb-wait-setup option for the MySQL server. This provides a configurable maximum amount of time that mysqld waits for all NDB tables to become writeable, before enabling MySQL clients or the slave SQL thread to connect. (Bug #40679)
References: See also: Bug #46955.
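For example, the new option might be set in the [mysqld] section of my.cnf (the timeout value shown is illustrative):

```ini
[mysqld]
ndbcluster
# Wait up to 30 seconds for all NDB tables to become writeable
# before permitting MySQL clients or the slave SQL thread to connect
ndb-wait-setup=30
```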
Running ndb_restore with the --print_log option could cause it to crash. (Bug #40428, Bug #33040)
When a slash character (/) was used as part of the name of an index on an NDB table, attempting to execute a TRUNCATE TABLE statement on the table failed with the error Index not found, and the table was rendered unusable. (Bug #38914)
When building MySQL Cluster, it was possible to configure the build using --with-ndb-port without supplying a port number. Now in such cases, configure fails with an error. (Bug #38502)
References: See also: Bug #47941.
An insert on an NDB table was not always flushed properly before performing a scan. One way in which this issue could manifest was that LAST_INSERT_ID() sometimes failed to return correct values when using a trigger on an NDB table. (Bug #38034)
If the cluster crashed during the execution of a CREATE LOGFILE GROUP statement, the cluster could not be restarted afterward. (Bug #36702)
References: See also: Bug #34102.
Some joins on large BLOB columns could cause mysqld processes to leak memory. The joins did not need to reference the BLOB columns directly for this issue to occur. (Bug #36701)
When the MySQL server SQL mode included STRICT_TRANS_TABLES, storage engine warnings and error codes specific to NDB were returned when errors occurred, instead of the MySQL server errors and error codes expected by some programming APIs (such as Connector/J) and applications. (Bug #35990)
On Mac OS X 10.5, commands entered in the management client failed and sometimes caused the client to hang, although management client commands invoked using the -e option from the system shell worked normally.
For example, the following command failed with an error and hung until killed manually, as shown here:
SHOW
Warning, event thread startup failed, degraded printouts as result, errno=36
However, the same management client command, invoked from the system shell as shown here, worked correctly:
ndb_mgm -e "SHOW"
References: See also: Bug #34438.
When a copying operation exhausted the available space on a data node while copying large BLOB columns, this could lead to failure of the data node and a Table is full error on the SQL node which was executing the operation. Examples of such operations could include an ALTER TABLE that changed an INT column to a BLOB column, or a bulk insert of BLOB data that failed due to running out of space or to a duplicate key error. (Bug #34583, Bug #48040)
References: See also: Bug #41674, Bug #45768.
Trying to insert more rows than would fit into an NDB table caused data nodes to crash. Now in such situations, the insert fails gracefully with error 633 Table fragment hash index has reached maximum possible size. (Bug #34348)
The error message text for NDB error code 410 (REDO log files overloaded...) was truncated. (Bug #23662)
Replication: When mysqlbinlog --verbose was used to read a binary log that had been written using row-based format, the output for events that updated some but not all columns of tables was not correct. (Bug #47323)
Replication: In some cases, a STOP SLAVE statement could cause the replication slave to crash. This issue was specific to MySQL on Windows or Macintosh platforms. (Bug #45238, Bug #45242, Bug #45243, Bug #46013, Bug #46014, Bug #46030)
References: See also: Bug #40796.
Disk Data: As an optimization when inserting a row into an empty page, the page is not read, but rather simply initialized. However, this optimization was performed whenever a row was inserted into an empty page, even though it should have been done only if it was the first time that the page had been used by a table or fragment. This is because, if the page had been in use, and then all records had been released from it, the page still needed to be read to learn its log sequence number (LSN).
This caused problems only if the page had been flushed using an incorrect LSN and the data node failed before any local checkpoint was completed; completing a local checkpoint would have removed any need to apply the undo log, in which case the incorrect LSN would have been ignored.
The user-visible result of the incorrect LSN was that it caused the data node to fail during a restart. It was perhaps also possible (although not conclusively proven) that this issue could lead to incorrect data. (Bug #54986)
Disk Data: For a Disk Data tablespace whose extent size was not equal to a whole multiple of 32K, the value of the FREE_EXTENTS column in the INFORMATION_SCHEMA.FILES table was smaller than the value of the TOTAL_EXTENTS column.
As part of this fix, the implicit rounding of extent and tablespace sizes performed by NDBCLUSTER (see CREATE TABLESPACE Syntax) is now done explicitly, and the rounded values are used for calculating INFORMATION_SCHEMA.FILES column values and other purposes. (Bug #49709)
References: See also: Bug #31712.
Disk Data: Inserts of blob column values into a Disk Data table that exhausted the tablespace resulted in misleading error messages about rows not being found in the table rather than the expected error Out of extents, tablespace full. (Bug #48113)
References: See also: Bug #48037, Bug #41674.
Disk Data: A local checkpoint of an empty fragment could cause a crash during a system restart which was based on that LCP. (Bug #47832)
References: See also: Bug #41915.
Disk Data: Calculation of free space for Disk Data table fragments was sometimes done incorrectly. This could lead to unnecessary allocation of new extents even when sufficient space was available in existing ones for inserted data. In some cases, this might also lead to crashes when restarting data nodes.
Note: This miscalculation was not reflected in the contents of the INFORMATION_SCHEMA.FILES table, as it applied to extents allocated to a fragment, and not to a file.
Disk Data: If the value set for FileSystemPathUndoFiles was identical to the value set for FileSystemPath, that parameter was ignored when starting the data node with the --initial option. As a result, the Disk Data files in the corresponding directory were not removed when performing an initial start of the affected data node or data nodes. (Bug #46243)
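A configuration sketch in which the undo file path differs from FileSystemPath, so that --initial removes the files as expected (the paths shown are hypothetical):

```ini
[ndbd default]
FileSystemPath=/var/lib/mysql-cluster
# Prior to this fix, a value identical to FileSystemPath was
# ignored when starting with --initial; a distinct path avoids the issue
FileSystemPathUndoFiles=/var/lib/mysql-cluster-undo
```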
Disk Data: Repeatedly creating and then dropping Disk Data tables could eventually lead to data node failures. (Bug #45794, Bug #48910)
Disk Data: When a crash occurs due to a problem in Disk Data code, the currently active page list is printed to stdout (that is, in one or more ndb_node_id_out.log files). One of these lists could contain an endless loop; this caused a printout that was effectively never-ending. Now in such cases, a maximum of 512 entries is printed from each list. (Bug #42431)
Disk Data: Once all data files associated with a given tablespace had been dropped, there was no way for MySQL client applications (including the mysql client) to tell that the tablespace still existed. To remedy this problem, INFORMATION_SCHEMA.FILES now holds an additional row for each tablespace. (Previously, only the data files in each tablespace were shown.) This row shows NULL in the FILE_NAME column. (Bug #31782)
Disk Data: It was possible to issue a CREATE TABLESPACE or ALTER TABLESPACE statement in which INITIAL_SIZE was less than EXTENT_SIZE. (In such cases, INFORMATION_SCHEMA.FILES erroneously reported the value of the FREE_EXTENTS column as 1 and that of the TOTAL_EXTENTS column as 0.) Now when either of these statements is issued such that INITIAL_SIZE is less than EXTENT_SIZE, the statement fails with an appropriate error message. (Bug #31712)
References: See also: Bug #49709.
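As a sketch, a statement such as the following, where INITIAL_SIZE (1M) is less than EXTENT_SIZE (8M), now fails with an error rather than creating an inconsistent tablespace (names and sizes are illustrative, and the logfile group lg1 is assumed to exist):

```sql
CREATE TABLESPACE ts1
    ADD DATAFILE 'ts1_data.dat'
    USE LOGFILE GROUP lg1
    EXTENT_SIZE 8M
    INITIAL_SIZE 1M
    ENGINE NDBCLUSTER;
```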
Cluster Replication: When expire_logs_days was set, the thread performing the purge of the log files could deadlock, causing all binary log operations to stop. (Bug #49536)
Cluster Replication: When using multiple active replication channels, it was sometimes possible that a node group failed on the slave cluster, causing the slave cluster to shut down. (Bug #47935)
Cluster Replication: When recording a binary log using the --ndb-log-update-as-write and --ndb-log-updated-only options (both enabled by default) and later attempting to apply that binary log with mysqlbinlog, any operations that were played back from the log but which updated only some (but not all) columns caused any columns that were not updated to be reset to their default values. (Bug #47674)
References: See also: Bug #47323, Bug #46662.
Cluster Replication: mysqlbinlog failed to apply correctly a binary log that had been recorded using --ndb-log-update-as-write=1. (Bug #46662)
References: See also: Bug #47323, Bug #47674.
Cluster Replication: When inserting rows into the mysql.ndb_binlog_index table, duplicate key errors occurred when the size of the epoch number (a 64-bit integer) exceeded 2^53. This happened because the NDB storage engine handler called the wrong overloaded variant of a MySQL Server internal API (the Field::store() method), which resulted in the epoch being mapped to a 64-bit double precision floating point number and a corresponding loss of accuracy for numbers greater than 2^53. (Bug #35217)
Cluster API: When reading blob data with lock mode LM_SimpleRead, the lock was not upgraded as expected. (Bug #51034)
Cluster API: When a DML operation failed due to a uniqueness violation on an NDB table having more than one unique index, it was difficult to determine which constraint caused the failure; it was necessary to obtain an NdbError object, then decode its details property, which could lead to memory management issues in application code.
To help solve this problem, a new API method Ndb::getNdbErrorDetail() is added, providing a well-formatted string containing more precise information about the index that caused the unique constraint violation. The following additional changes are also made in the NDB API:
NdbError.details is now deprecated in favor of the new method.
The Dictionary::listObjects() method has been modified to provide more information.
Cluster API: Several NDB API methods, among them NdbOperation::getErrorLine(), formerly had both const and non-const variants. The non-const versions of these methods have been removed. In addition, the NdbOperation::getBlobHandle() method has been re-implemented to provide consistent internal semantics. (Bug #47798)
Cluster API: In some circumstances, if an API node encountered a data node failure between the creation of a transaction and the start of a scan using that transaction, then any subsequent calls to closeTransaction() could cause the same transaction to be started and closed repeatedly. (Bug #47329)
Cluster API: A duplicate read of a column caused NDB API applications to crash. (Bug #45282)
Cluster API: Performing multiple operations using the same primary key within the same NdbTransaction::execute() call could lead to a data node crash.
Note: This fix does not change the fact that performing multiple operations using the same primary key within the same execute() call is not supported; because there is no way to determine the order of such operations, the result of such combined operations remains undefined.
References: See also: Bug #44015.
Cluster API: The error handling shown in the example file ndbapi_scan.cpp included with the MySQL Cluster distribution was incorrect. (Bug #39573)
Cluster API: When using blobs, calling getBlobHandle() requires the full key to have been set beforehand, because getBlobHandle() must access the key for adding blob table operations. However, if getBlobHandle() was called without first setting all parts of the primary key, the application using it crashed. Now, an appropriate error code is returned instead. (Bug #28116, Bug #48973)
API: The fix for Bug #24507 could lead in some cases to client application failures due to a race condition. Now the server waits for the “dummy” thread to return before exiting, thus making sure that only one thread can initialize the POSIX threads library. (Bug #42850)
References: This issue is a regression of: Bug #24507.
On some Unix/Linux platforms, an error during a build from source could be produced, referring to a missing LT_INIT program. This was due to the use of libtool versions 2.1 and earlier. (Bug #51009)
On Mac OS X or Windows, sending a SIGHUP signal to the server or an asynchronous flush (triggered by flush_time) caused the server to crash. (Bug #47525)
1) In rare cases, if a thread was interrupted during a FLUSH PRIVILEGES operation, a debug assertion occurred later due to improper diagnostics area setup. 2) A KILL operation could cause a console error message referring to a diagnostic area state without first ensuring that the state existed. (Bug #33982)
When using the NDB storage engine, SHOW TABLE STATUS displayed incorrect information for Avg_row_length. (Bug #29203)
Installation of MySQL on Windows failed to set the correct location for the character set files, which could lead to mysqld and mysql failing to initialize properly. (Bug #17270)