This section contains unified change history highlights for all
MySQL Cluster releases based on version 7.0 of the
NDBCLUSTER storage engine through
MySQL Cluster NDB 7.0.38. Included are all changelog entries in the categories MySQL Cluster, Disk Data, and Cluster API.
Early MySQL Cluster NDB 7.0 releases tagged “NDB 6.4.x” are also included in this listing.
For an overview of features that were added in MySQL Cluster NDB 7.0, see MySQL Cluster Development in MySQL Cluster NDB 7.0.
Functionality Added or Changed
Added several new columns to the transporters table and new counters to the counters table of the ndbinfo information database. The information provided may help in troubleshooting transporter overloads and problems with send buffer memory allocation. For more information, see the descriptions of these tables.
To provide information which can help in assessing the current state of arbitration in a MySQL Cluster, as well as in diagnosing and correcting arbitration problems, 3 new tables have been added to the ndbinfo information database.
When an NDB table grew to contain approximately one million rows or more per partition, it became possible to insert rows having duplicate primary or unique keys into it. In addition, primary key lookups began to fail, even when matching rows could be found in the table by other means.
This issue was introduced in MySQL Cluster NDB 7.0.36, MySQL Cluster NDB 7.1.26, and MySQL Cluster NDB 7.2.9. Signs that you may have been affected include the following:
Rows left over that should have been deleted
Rows unchanged that should have been updated
Rows with duplicate unique keys due to inserts or updates (which should have been rejected) that failed to find an existing row and thus (wrongly) inserted a new one
This issue does not affect simple scans, so you can see all rows in a given table using SELECT * FROM table and similar queries that do not depend on a primary or unique key.
Upgrading to or downgrading from an affected release can be troublesome if there are rows with duplicate primary or unique keys in the table; such rows should be merged, but the best means of doing so is application dependent.
In addition, since the key operations themselves are faulty, a merge can be difficult to achieve without taking the MySQL Cluster offline, and it may be necessary to dump, purge, process, and reload the data. Depending on the circumstances, you may want or need to process the dump with an external application, or merely to reload the dump while ignoring duplicates if the result is acceptable.
Another possibility is to copy the data into another table without the original table's unique key constraints or primary key (recall that CREATE TABLE t2 SELECT * FROM t1 does not by default copy t1's primary or unique key definitions to t2). Following this, you can remove the duplicates from the copy, then add back the unique constraints and primary key definitions. Once the copy is in the desired state, you can either drop the original table and rename the copy, or make a new dump of the copy (which can be loaded later).
(Bug #16023068, Bug #67928)
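A minimal sketch of this copy-and-merge approach, using hypothetical table and column names (t1, t2, pk_col); the actual merge logic is application dependent:

CREATE TABLE t2 ENGINE=NDBCLUSTER     # t1's key definitions are not copied
  SELECT * FROM t1 GROUP BY pk_col;   # keep one row per duplicated key value

ALTER TABLE t2 ADD PRIMARY KEY (pk_col);   # add back the key definitions

DROP TABLE t1;                             # replace the original with the copy
RENAME TABLE t2 TO t1;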
The management client command node_id REPORT BackupStatus failed with an error when used with data nodes having multiple LQH worker threads (ndbmtd data nodes). The issue did not affect the ALL REPORT BackupStatus form of this command.
The multithreaded job scheduler could be suspended prematurely when there were insufficient free job buffers to allow the threads to continue. The general rule in the job thread is that any queued messages should be sent before the thread is allowed to suspend itself, which guarantees that no other threads or API clients are kept waiting for operations which have already completed. However, the number of messages in the queue was specified incorrectly, leading to increased latency in delivering signals, sluggish response, or otherwise suboptimal performance. (Bug #15908684)
Node failure during the dropping of a table could lead to the node hanging when attempting to restart. (Bug #14787522)
The recently added LCP fragment scan watchdog occasionally reported problems with LCP fragment scans having very high table id, fragment id, and row count values.
This was due to the watchdog not accounting for the time spent draining the backup buffer used to buffer rows before writing to the fragment checkpoint file.
Now, in the final stage of an LCP fragment scan, the watchdog switches from monitoring rows scanned to monitoring the buffer size in bytes. The buffer size should decrease as data is written to the file, after which the file should be promptly closed. (Bug #14680057)
During an online upgrade, certain SQL statements could cause the server to hang, resulting in the error Got error 4012 'Request ndbd time-out, maybe due to high load or communication problems' from NDBCLUSTER. (Bug #14702377)
Job buffers act as the internal queues for work requests (signals) between block threads in ndbmtd and could be exhausted if too many signals are sent to a block thread.
The DBSPJ kernel block, which performs pushed joins, can execute multiple branches of the query tree in parallel, which means that the number of signals being sent can increase as more branches are executed. If DBSPJ execution cannot be completed before the job buffers are filled, the data node can fail.
This problem could be identified by multiple instances of the message sleeploop 10!! in the cluster out log, possibly followed by job buffer full. If the job buffers overflowed more gradually, there could also be failures due to error 1205 (Lock wait timeout exceeded), shutdowns initiated by the watchdog timer, or other timeout related errors. These were due to the slowdown caused by the 'sleeploop'.
Normally, up to a 1:4 fanout ratio between consumed and produced signals is permitted. However, since there can be a potentially unlimited number of rows returned from the scan (and multiple scans of this type executing in parallel), any ratio greater than 1:1 in such cases makes it possible to overflow the job buffers.
The fix for this issue defers any lookup child which would otherwise have been executed in parallel with another, resuming it when its parallel child completes one of its own requests. This restricts the fanout ratio for bushy scan-lookup joins to 1:1. (Bug #14709490)
References: See also Bug #14648712.
Under certain rare circumstances, MySQL Cluster data nodes could
crash in conjunction with a configuration change on the data
nodes from a single-threaded to a multi-threaded transaction
coordinator (using the
configuration parameter for ndbmtd). The
problem occurred when a mysqld that had been
started prior to the change was shut down following the rolling
restart of the data nodes required to effect the configuration change.
Functionality Added or Changed
Added 3 new columns to the transporters table in the ndbinfo database. The bytes_sent and bytes_received columns help to provide an overview of data transfer across the transporter links in a MySQL Cluster. This information can be useful in verifying system balance, partitioning, and front-end server load balancing; it may also be of help when diagnosing network problems arising from link saturation, hardware faults, or other causes.
Data node logs now provide tracking information about arbitrations, including which nodes have assumed the arbitrator role and at what times. (Bug #11761263, Bug #53736)
A slow filesystem during local checkpointing could exert undue pressure on DBDIH kernel block file page buffers, which in turn could lead to a data node crash when these were exhausted. This fix limits the number of table definition updates that DBDIH can issue concurrently.
The management server process could sometimes hang during shutdown when it had been started with certain options.
The output from ndb_config --configinfo now contains the same information as that from ndb_config --configinfo --xml, including explicit indicators for parameters that do not require restarting a data node with --initial to take effect.
ndb_config incorrectly indicated that a certain node configuration parameter requires an initial node restart to take effect, when in fact it does not; this error was also present in the MySQL Cluster documentation, where it has also been corrected.
Receiver threads could wait unnecessarily to process incomplete signals, greatly reducing performance of ndbmtd. (Bug #14525521)
Executing an online ALTER TABLE concurrently with other DML statements on the same NDB table returned Got error -1 'Unknown error code' from NDBCLUSTER.
On platforms where epoll was not available, configuring multiple receiver threads caused ndbmtd to fail.
CPU consumption peaked several seconds after the forced termination of an NDB client application because the DBTC kernel block waited in a busy loop for any open transactions owned by the disconnected API client to be terminated, without pausing between checks for the correct state. (Bug #14550056)
Added the --connect-retries and --connect-delay startup options for ndbd and ndbmtd.
--connect-retries (default 12) controls how
many times the data node tries to connect to a management server
before giving up; setting it to -1 means that the data node
never stops trying to make contact.
--connect-delay sets the number of seconds to
wait between retries; the default is 5.
(Bug #14329309, Bug #66550)
It was possible in some cases for two transactions to try to
drop tables at the same time. If the master node failed while
one of these operations was still pending, this could lead
either to additional node failures (and cluster shutdown) or to
new dictionary operations being blocked. This issue is addressed
by ensuring that the master will reject requests to start or
stop a transaction while there are outstanding dictionary
takeover requests. In addition, table-drop operations now correctly signal when complete, since previously the kernel block handling the takeover could not confirm node takeovers while such operations were still marked as pending completion.
Following a failed ALTER ONLINE TABLE ... REORGANIZE PARTITION statement, a subsequent execution of this statement after adding new data nodes caused a failure in the DBDIH kernel block which led to an unplanned shutdown of the cluster.
The DBSPJ kernel block had no information about which tables or indexes actually existed, or which had been modified or dropped, since execution of a given query began. Thus, DBSPJ might submit dictionary requests for nonexistent tables or versions of tables, which could cause a data node to fail.
This fix introduces a simplified dictionary into the DBSPJ kernel block such that DBSPJ can now check reliably for the existence of a particular table or version of a table on which it is about to request an operation.
When using ndbmtd and performing joins, data nodes could fail where ndbmtd processes were configured to use a large number of local query handler threads (as set by the MaxNoOfExecutionThreads configuration parameter), the tables accessed by the join had a large number of partitions, or both.
(Bug #13799800, Bug #14143553)
When reloading the redo log during a node or system restart in which 42 or more redo log files were in use, it was possible for metadata to be read for the wrong file (or files). Thus, the node or nodes involved could try to reload the wrong set of data.
If the Transaction Coordinator aborted a transaction in the “prepared” state, this could cause a resource leak. (Bug #14208924)
When attempting to connect using a socket with a timeout, it was possible (if the timeout was exceeded) for the socket not to be set back to blocking. (Bug #14107173)
In some circumstances, transactions could be lost during an online upgrade. (Bug #13834481)
Attempting to add both a column and an index on that column in the same online ALTER TABLE statement caused mysqld to fail. Although this issue affected only the mysqld shipped with MySQL Cluster, the table named in the ALTER TABLE could use any storage engine for which online operations are supported.
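For illustration, a statement of this general form (with a hypothetical table t1 and column c) triggered the failure when executed as an online operation:

ALTER ONLINE TABLE t1 ADD COLUMN c INT, ADD INDEX idx_c (c);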
When an NDB API application called NdbScanOperation::nextResult() again after the previous call had returned end-of-file (return code 1), a transaction object was leaked. Now when this happens, NDB returns error code 4210 (Ndb sent more info than length specified); previously in such cases, -1 was returned. In addition, the extra transaction object associated with the scan is freed, by returning it to the transaction coordinator's idle list.
The output of DUMP 2303 in the ndb_mgm client now includes the status of the single fragment scan record reserved for a local checkpoint.
A shortage of scan fragment records in DBTC resulted in a leak of concurrent scan table records and key operation records.
The ALTER ONLINE TABLE ... REORGANIZE PARTITION statement can be used
to create new table partitions after new empty nodes have been
added to a MySQL Cluster. Usually, the number of partitions to
create is determined automatically, such that, if no new
partitions are required, then none are created. This behavior
can be overridden by creating the original table using the
MAX_ROWS option, which indicates that extra
partitions should be created to store a large number of rows.
However, in this case ALTER ONLINE TABLE ... REORGANIZE PARTITION simply uses the MAX_ROWS value specified in the original CREATE TABLE statement to determine the number of partitions
required; since this value remains constant, so does the number
of partitions, and so no new ones are created. This means that
the table is not rebalanced, and the new data nodes remain
To solve this problem, support is added for ALTER ONLINE TABLE ... MAX_ROWS=newvalue REORGANIZE PARTITION, where newvalue is greater than the value used with MAX_ROWS in the original CREATE TABLE statement. This larger MAX_ROWS value implies that more partitions are required; these are allocated on the new data nodes, which restores the balanced distribution of the table data.
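A minimal sketch of this workflow, using a hypothetical table t1 and illustrative MAX_ROWS values:

CREATE TABLE t1 (
  id INT NOT NULL PRIMARY KEY,
  v  VARCHAR(100)
) ENGINE=NDBCLUSTER MAX_ROWS=100000000;

# Later, after new data nodes have been added to the cluster, specify a
# larger MAX_ROWS so that additional partitions are created on the new nodes:
ALTER ONLINE TABLE t1 MAX_ROWS=200000000 REORGANIZE PARTITION;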
In some cases, restarting data nodes spent a very long time in Start Phase 101, in which API nodes must connect to the starting node, when the API nodes trying to connect failed in a live-lock scenario. This connection process uses a handshake during which a small number of messages are exchanged, with a timeout used to detect failures during the handshake.
Prior to this fix, this timeout was set such that, if one API node encountered the timeout, all other nodes connecting would do the same. The fix also decreases this timeout. This issue (and the effects of the fix) are most likely to be observed on relatively large configurations having 10 or more data nodes and 200 or more API nodes. (Bug #13825163)
ndbmtd failed to restart when the size of a table definition exceeded 32K.
(The size of a table definition is dependent upon a number of factors, but in general the 32K limit is encountered when a table has 250 to 300 columns.) (Bug #13824773)
An initial start using ndbmtd could sometimes hang. This was due to a state which occurred when several threads tried to flush a socket buffer to a remote node. In such cases, to minimize flushing of socket buffers, only one thread actually performs the send, on behalf of all threads. However, it was possible in certain cases for there to be data in the socket buffer waiting to be sent with no thread ever being chosen to perform the send. (Bug #13809781)
When trying to use ndb_size.pl
to connect to a MySQL server running on a nonstandard port, the
port argument was ignored.
(Bug #13364905, Bug #62635)
Important Change: A number of changes have been made in the configuration of transporter send buffers.
The data node configuration parameter ReservedSendBufferMemory is now deprecated, and thus subject to removal in a future MySQL Cluster release. ReservedSendBufferMemory has been non-functional since it was introduced and remains so.
TotalSendBufferMemory now works correctly with data nodes using ndbmtd.
A new data node configuration parameter, ExtraSendBufferMemory, is introduced. Its purpose is to control how much additional memory can be allocated to the send buffer over and above that specified by SendBufferMemory. The default setting (0) allows up to 16MB to be allocated automatically.
(Bug #13633845, Bug #11760629, Bug #53053)
Queries using LIKE ... ESCAPE on NDB tables failed when pushed down to the data nodes. Such queries are no longer pushed down, regardless of the value of engine_condition_pushdown.
(Bug #13604447, Bug #61064)
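A minimal example of the kind of query affected, assuming a hypothetical NDB table t1 with a VARCHAR column c; conditions of this form are no longer pushed down:

SELECT * FROM t1 WHERE c LIKE 'a!_b%' ESCAPE '!';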
Several instances in the NDB code affecting the operation of
multi-threaded data nodes, where
SendBufferMemory was associated with a
specific thread for an unnecessarily long time, have been
identified and fixed, by minimizing the time that any of these
buffers can be held exclusively by a given thread (send buffer
memory being critical to operation of the entire node).
To avoid TCP transporter overload, an overload flag is kept in
the NDB kernel for each data node; this flag is used to abort
key requests if needed, yielding error 1218 Send
Buffers overloaded in NDB kernel in such cases.
Scans can also put significant pressure on transporters,
especially where scans with a high degree of parallelism are
executed in a configuration with relatively small send buffers.
However, in these cases, overload flags were not checked, which
could lead to node failures due to send buffer exhaustion. Now,
overload flags are checked by scans, and in cases where returning sufficient rows to match the batch size (as determined by the --ndb-batch-size server option) would cause an overload, the number of rows is limited to what can be accommodated by the send buffer.
See also Configuring MySQL Cluster Send Buffer Parameters. (Bug #13602508)
A data node crashed when more than 16G of fixed-size memory was allocated by DBTUP to one fragment (because the DBACC kernel block was not prepared to accept values greater than 32 bits from it, leading to an overflow). Now in such cases, the data node returns Error 889 Table fragment fixed data reference has reached maximum possible value.... When this happens, you can work around the problem by increasing the number of partitions used by the table (such as by using the MAX_ROWS option with CREATE TABLE).
References: See also Bug #11747870, Bug #34348.
A node failure and recovery while performing a scan on more than 32 partitions led to additional node failures during node takeover. (Bug #13528976)
The --skip-config-cache option now causes ndb_mgmd to skip checking for the configuration directory, and thus to skip creating it in the event that it does not exist.
At the beginning of a local checkpoint, each data node marks its local tables with a “to be checkpointed” flag. A failure of the master node during this process could cause either the LCP to hang, or one or more data nodes to be forcibly shut down. (Bug #13436481)
A node failure while a DDL statement on a table was executing resulted in a hung connection (and the user was not informed of any error that would cause this to happen).
References: See also Bug #13407848.
Added the MinFreePct data node configuration parameter, which specifies a percentage of data node resources to hold in reserve for restarts. The resources monitored are DataMemory, IndexMemory, and any per-table MAX_ROWS settings (see CREATE TABLE Syntax). The default value of MinFreePct is 5, which means that 5% of each of these resources is now set aside for restarts.
Certain configuration parameters used to control the maximum sizes of result batches are defined as integers. However, the values used to store these were incorrectly interpreted as numbers of bytes in the NDB kernel. This caused the DBLQH kernel block to fail to detect when the limit had actually been reached.
Because the log event buffer used internally by data nodes was circular, periodic events such as statistics events caused it to be overwritten too quickly. Now the buffer is partitioned by log event category, and its default size has been increased from 4K to 8K. (Bug #13394771)
Previously, forcing the simultaneous shutdown of multiple data nodes using SHUTDOWN -F in the ndb_mgm management client could cause the entire cluster to fail. Now in such cases, any such nodes are forced to abort immediately.
Functionality Added or Changed
Added the CrashOnCorruptedTuple data node configuration parameter. When enabled, this parameter causes data nodes to handle corrupted tuples in a fail-fast manner; in other words, whenever a data node detects a corrupted tuple, it forcibly shuts down. For backward compatibility, this parameter is disabled by default.
When a failure of multiple data nodes during a local checkpoint (LCP) that took a long time to complete included the node designated as master, any new data nodes attempting to start before all ongoing LCPs were completed later crashed. This was due to the fact that node takeover by the new master cannot be completed until there are no pending local checkpoints. Long-running LCPs such as those which triggered this issue can occur when fragment sizes are sufficiently large (see MySQL Cluster Nodes, Node Groups, Replicas, and Partitions, for more information). Now in such cases, data nodes (other than the new master) are kept from restarting until the takeover is complete. (Bug #13323589)
When deleting from multiple tables using a unique key in the WHERE condition, the wrong rows were deleted. In addition, UPDATE triggers failed when rows were changed by deleting from or updating multiple tables.
(Bug #12718336, Bug #61705, Bug #12728221)
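A minimal example of the kind of multi-table delete affected, using hypothetical tables t1 and t2 that each have a unique key uk and a join column id:

DELETE t1, t2 FROM t1 JOIN t2 ON t1.id = t2.id WHERE t1.uk = 5;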
A SubscriberNodeIdUndefined error was previously unhandled, resulting in a data node crash, but is now handled by NDB Error 1429, Subscriber node undefined in SubStartReq. (Bug #12598496)
Shutting down a mysqld while under load caused the spurious error messages Opening ndb_binlog_index: killed and Unable to lock table ndb_binlog_index to be written in the cluster log. (Bug #11930428)
Functionality Added or Changed
It is now possible to filter the output from ndb_config so that it displays only system, data node, or connection parameters and values, using one of the new options added for this purpose.
In addition, it is now possible to specify from which data node the configuration data is obtained, using the --config-from-node option that is added in this release.
For more information, see ndb_config — Extract MySQL Cluster Configuration Information. (Bug #11766870)
Incompatible Change; Cluster API: Restarting a machine hosting data nodes, SQL nodes, or both, caused such nodes when restarting to time out while trying to obtain node IDs.
As part of the fix for this issue, the behavior and default values for the NDB API Ndb_cluster_connection::connect() method have been improved. Due to these changes, the version number for the included NDB client library (libndbclient.so) has been increased from 4.0.0 to 5.0.0. For NDB API applications, this means that as part of any upgrade, you must do both of the following:
Review and possibly modify any NDB API code that uses the connect() method, in order to take into account its changed default behavior.
Recompile any NDB API applications using the new version of the client library.
Also in connection with this issue, the default value for each of the two mysqld options --ndb-wait-connected and --ndb-wait-setup has been increased to 30 seconds (from 0 and 15, respectively). In addition, a hard-coded 30-second delay was removed, so that the value of each of these options is now handled correctly in all cases.
When replicating DML statements between clusters, the number of operations that failed due to nonexistent keys was expected to be no greater than the number of defined operations of any single type. Because the slave SQL thread defines operations of multiple types in batches together, code which relied on this assumption could cause mysqld to fail.
The maximum effective value for the OverloadLimit configuration parameter was limited by the value of SendBufferMemory. Now the value set for OverloadLimit is used correctly, up to this parameter's stated maximum (4G).
AUTO_INCREMENT values were not set correctly by INSERT IGNORE statements affecting NDB tables. This could lead such statements to fail with Got error 4350 'Transaction already aborted' from NDBCLUSTER when inserting multiple rows containing duplicate values.
(Bug #11755237, Bug #46985)
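For illustration, a minimal sketch (table and column names are hypothetical) of the kind of statement that could fail before this fix:

CREATE TABLE t1 (
  id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  u  INT NOT NULL,
  UNIQUE KEY (u)
) ENGINE=NDBCLUSTER;

# Duplicate values for the unique key u could trigger error 4350
INSERT IGNORE INTO t1 (u) VALUES (1), (2), (1), (3), (2);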
When failure handling of an API node takes longer than 300 seconds, extra debug information is included in the resulting output. In cases where the API node's node ID was greater than 48, these extra debug messages could lead to a crash, and to confusing output otherwise. This was due to an attempt to provide information specific to data nodes for API nodes as well. (Bug #62208)
In rare cases, a series of node restarts and crashes during restarts could lead to errors while reading the redo log. (Bug #62206)
Functionality Added or Changed
Added the MaxDMLOperationsPerTransaction data node configuration parameter, which can be used to limit the number of DML operations used by a transaction; if the transaction requires more than this many DML operations, the transaction is aborted.
Restarting a mysqld during a rolling upgrade with data nodes running a mix of old and new versions of the MySQL Cluster software caused the mysqld to run in read-only mode. (Bug #12651364, Bug #61498)
When global checkpoint indexes were written with no intervening end-of-file or megabyte border markers, this could sometimes lead to a situation in which the end of the redo log was mistakenly regarded as being between these GCIs, so that if the restart of a data node took place before the start of the next redo log was overwritten, the node encountered an Error while reading the REDO log. (Bug #12653993, Bug #61500)
References: See also Bug #56961.
Under certain rare circumstances, a data node process could fail
with Signal 11 during a restart. This was due to uninitialized
variables in the
QMGR kernel block.
Error reporting has been improved for cases in which API nodes are unable to connect due to apparent unavailability of node IDs. (Bug #12598398)
Error messages reported for Failed to convert connection transporter registration problems were not sufficiently specific. (Bug #12589691)
Multiple management servers were unable to detect one another until all nodes had fully started. As part of the fix for this issue, two new status values, one of which is CONNECTED, can be reported for management nodes in the output of the ndb_mgm client SHOW command (see Commands in the MySQL Cluster Management Client). Corresponding status values, including NDB_MGM_NODE_STATUS_CONNECTED, are also added to the list of possible values for an ndb_mgm_node_state structure in the MGM API.
(Bug #12352191, Bug #48301)
Handling of the configuration parameters governing maximum numbers of schema objects was not consistent in all parts of the NDB kernel; they were strictly enforced only by the DBDICT and SUMA kernel blocks. This could lead to problems in which tables could be created but not replicated. Now these parameters are treated by DBDICT as suggested maximums rather than hard limits, as they are elsewhere in the NDB kernel.
It was not possible to shut down a management node while one or more data nodes were stopped (for whatever reason). This issue was a regression introduced in MySQL Cluster NDB 7.0.24 and MySQL Cluster NDB 7.1.13. (Bug #61607)
References: See also Bug #61147.
Within a transaction, after creating, executing, and closing a scan, creating and executing but not closing a second scan caused the application to crash.
Applications that included the header file
ndb_logevent.h could not be built using the
Microsoft Visual Studio C compiler or the Oracle (Sun) Studio C
compiler due to empty struct definitions.
The Ndb_getinaddr() function has been rewritten so that it no longer uses my_gethostbyname_r() (which is removed in a later version of the MySQL Server).
Two unused test files in
storage/ndb/test/sql contained incorrect
versions of the GNU Lesser General Public License. The files and
the directory containing them have been removed.
References: See also Bug #11810224.
Error 1302 gave the wrong error message (Out of backup record). This has been corrected to A backup is already running. (Bug #11793592)
In ndbmtd, a node connection event is
detected by a
CMVMI thread which sends a
CONNECT_REP signal to the
QMGR kernel block. In a few isolated
circumstances, a signal might be transferred to
QMGR directly by the
NDB transporter before the
CONNECT_REP signal actually arrived. This
resulted in reports in the error log with status
Temporary error, restart node, and the
message Internal program error.
Under heavy loads with many concurrent inserts, temporary
failures in transactions could occur (and were misreported as
being due to
NDB Error 899
Rowid already allocated). As part of the
fix for this issue,
NDB Error 899
has been reclassified as an internal error, rather than as a
temporary transaction error.
(Bug #56051, Bug #11763354)
When using two management servers, issuing in an ndb_mgm client connected to one management server a STOP command for stopping the other management server caused Error 2002 (Stop failed ... Send to process or receive failed.: Permanent error: Application error), even though the STOP command actually succeeded, and the second ndb_mgmd was shut down.
Handling of data files in MySQL Cluster Disk Data tablespaces was incorrect in some respects, which could lead to a crash in certain cases.
Functionality Added or Changed
It is now possible to add data nodes online to a running MySQL Cluster without performing a rolling restart of the cluster or starting data node processes with the --nowait-nodes option. This can be done by setting Nodegroup = 65536 in the config.ini file for any data nodes that should be started at a later time, when first starting the cluster. (It was possible to set Nodegroup to this value previously, but the management server failed to start.)
As part of this fix, a new data node configuration parameter, StartNoNodeGroupTimeout, has been added. When the management server sees that there are data nodes with no node group (that is, nodes for which Nodegroup = 65536), it waits StartNoNodeGroupTimeout milliseconds before treating these nodes as though they were listed with the --nowait-nodes option, and proceeds to start.
For more information, see Adding MySQL Cluster Data Nodes Online. (Bug #11766167, Bug #59213)
A config_generation column has been added to the nodes table of the ndbinfo database. By checking this column, it is now possible to determine which version or versions of the MySQL Cluster configuration file are in effect on the data nodes. This information can be especially useful when performing a rolling restart of the cluster to update its configuration.
Cluster API: A unique index operation is executed in two steps: a lookup on an index table, and an operation on the base table. When the operation on the base table failed, while being executed in a batch with other operations that succeeded, this could lead to a hanging execute, eventually timing out with Error 4012 (Request ndbd time-out, maybe due to high load or communication problems). (Bug #12315582)
A memory leak in LGMAN, which leaked 8 bytes of log buffer memory for every 32KB written, was introduced in MySQL Cluster NDB 7.0.9, affecting all MySQL Cluster NDB 7.1 releases as well as MySQL Cluster NDB 7.0.9 and later MySQL Cluster NDB 7.0 releases. (For example, when 128MB of log buffer memory was used, it was exhausted after writing 512GB to the undo log.) This led to a GCP stop and data node failure.
References: This bug was introduced by Bug #47966.
When using ndbmtd, a MySQL Cluster configured with 32 data nodes failed to start correctly. (Bug #60943)
When performing a TUP scan with locks in parallel, and with a highly concurrent load of inserts and deletions, the scan could sometimes fail to notice that a record had moved while waiting to acquire a lock on it, and so read the wrong record. During node recovery, this could lead to a crash of a node that was copying data to the node being started, and a possible forced shutdown of the cluster.
Cluster API: Performing interpreted operations using a unique index did not work correctly, because the interpret bit was kept when sending the lookup to the index table.
Functionality Added or Changed
Improved scaling of ordered index scan performance by removing a hard-coded limit and making the number of TUX scans per fragment configurable, by adding the MaxParallelScansPerFragment data node configuration parameter.
A server option set a status variable as well as a system variable. The status variable has been removed as redundant.
A scan with a pushed condition (filter) using the CommittedRead lock mode could hang for a short interval when it was aborted just as it had decided to send a batch.
When aborting a multi-read range scan exactly as it was changing ranges in the local query handler, LQH could fail to detect it, leaving the scan hanging. (Bug #11929643)
Schema distribution did not take place for tables converted from another storage engine to NDB using ALTER TABLE; this meant that such tables were not always visible to all SQL nodes attached to the cluster.
A GCI value inserted by ndb_restore
--restore_epoch into the
ndb_apply_status table was actually 1 less
than the correct value.
Limits imposed by the size of the shared global memory were not always enforced consistently with regard to Disk Data undo buffers and log files. This could sometimes cause a CREATE LOGFILE GROUP or ALTER LOGFILE GROUP statement to fail for no apparent reason, or cause the log file group not to be created when starting the cluster.
Functionality Added or Changed
The INFORMATION_SCHEMA.PARTITIONS table now provides disk usage as well as memory usage information for Disk Data tables. Also, this table formerly did not show any statistics for NDB tables; now the DATA_FREE and related columns contain correct information for the table's partitions.
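A query such as the following can be used to inspect this information (assuming the entry refers to the INFORMATION_SCHEMA.PARTITIONS table; the schema and table names here are hypothetical):

SELECT PARTITION_NAME, TABLE_ROWS, DATA_LENGTH, DATA_FREE
FROM INFORMATION_SCHEMA.PARTITIONS
WHERE TABLE_SCHEMA = 'test' AND TABLE_NAME = 'dd_table';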
A --rewrite-database option is added for ndb_restore, which makes it possible to restore to a database having a different name from that of the database in the backup.
For more information, see ndb_restore — Restore a MySQL Cluster Backup. (Bug #54327)
The NDB kernel now implements a number of statistical counters relating to actions performed by or affecting Ndb objects, such as starting, closing, or aborting transactions; primary key and unique key operations; table, range, and pruned scans; blocked threads waiting for various operations to complete; and data and events sent and received by Ndb objects.
These NDB API counters are incremented inside the NDB kernel whenever NDB API calls are made or data is sent to or received by the data nodes. mysqld exposes these counters as system status variables; their values can be read in the output of SHOW STATUS, or by querying the SESSION_STATUS and GLOBAL_STATUS tables in the INFORMATION_SCHEMA database. By comparing the values of these status variables prior to and following the execution of SQL statements that act on NDB tables, you can observe the corresponding actions taken on the NDB API level, which can be beneficial for monitoring and performance tuning of MySQL Cluster.
Important Note: Due to an error in merging the original fix, it did not appear in MySQL Cluster NDB 7.0.21; this oversight has been corrected in the current release. (Bug #58256)
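For illustration, the before-and-after comparison described above can be done as follows; this sketch assumes the relevant status variable names begin with Ndb_api and uses a hypothetical NDB table t1:

SHOW GLOBAL STATUS LIKE 'Ndb_api%';
SELECT COUNT(*) FROM t1;               # any statement acting on an NDB table
SHOW GLOBAL STATUS LIKE 'Ndb_api%';    # compare counter values with the first run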
When a CREATE TABLE statement failed due to NDB error 1224 (Too many fragments), it was not possible to create the table afterward unless either it had no ordered indexes or a DROP TABLE statement was issued first, even if the subsequent CREATE TABLE was valid and should otherwise have succeeded.
References: See also Bug #59751.
When a query used multiple references to or instances of the same physical table, NDB failed to recognize these multiple instances as different tables; in such cases, NDB could incorrectly allow a condition referring to these other instances to be pushed to the data nodes, even though the condition should have been rejected as unpushable, leading to invalid results.
Successive queries on the ndbinfo counters table from the same SQL node returned unchanging results. To fix this issue, and to prevent similar issues from occurring in the future, ndbinfo tables are now excluded from the query cache.
This issue affects all previous MySQL Cluster NDB 7.0 releases. (Bug #60045)
When attempting to create a table on a MySQL Cluster with many standby data nodes (setting Nodegroup = 65536 in config.ini for the nodes that should wait, starting the nodes that should start immediately with the --nowait-nodes option, and using the CREATE TABLE statement's MAX_ROWS option), mysqld miscalculated the number of fragments to use. This caused the CREATE TABLE to fail.
A CREATE TABLE failure caused by this issue in turn prevented any further attempts to create the table, even if the table structure was simplified or changed in such a way that the attempt should have succeeded.
This “ghosting” issue is handled in Bug #59756.
The logic used in determining whether to collapse a range to a
simple equality was faulty. In certain cases, this could cause
NDB to treat a range as if it were
a primary key lookup when determining the query plan to be used.
Although this did not affect the actual result returned by the
query, it could in such cases result in inefficient execution of
queries due to the use of an inappropriate query plan.
An error caused multi-threaded index building to occur on the master node only.
NDB sometimes treated a simple (not
unique) ordered index as unique.
When an NDB API client application was waiting for more scan results after calling NdbScanOperation::nextResult(), the calling thread sometimes woke up even if no new batches for any fragment had arrived, which was unnecessary and could have a negative impact on the application's performance.
When a schema operation was attempted during a node restart, it was possible to get a spurious error 711 (System busy with node restart, schema operations not allowed when a node is starting).
Functionality Added or Changed
The following changes have been made with regard to the TimeBetweenEpochsTimeout data node configuration parameter:
The maximum possible value for this parameter has been increased from 32000 milliseconds to 256000 milliseconds.
Setting this parameter to zero now has the effect of disabling GCP stops caused by save timeouts, commit timeouts, or both.
The current value of this parameter and a warning are written to the cluster log whenever a GCP save takes longer than 1 minute or a GCP commit takes longer than 10 seconds.
For more information, see Disk Data and GCP Stop errors. (Bug #58383)
Added the --skip-broken-objects option for ndb_restore. This option causes ndb_restore to ignore tables corrupted due to missing blob parts tables, and to continue reading from the backup file and restoring the remaining tables.
References: See also Bug #51652.
It is now possible to stop or restart a node even while other nodes are starting, using the MGM API functions ndb_mgm_stop4() and ndb_mgm_restart4(), respectively, with the force parameter set to 1.
References: See also Bug #58319.
In some circumstances, very large
BLOB read and write operations in
MySQL Cluster applications can cause excessive resource usage
and even exhaustion of memory. To fix this issue and to provide
increased stability when performing such operations, it is now
possible to set limits on the volume of
BLOB data to be read or written
within a given transaction in such a way that when these limits
are exceeded, the current transaction implicitly executes any
accumulated operations. This avoids an excessive buildup of
pending data which can result in resource exhaustion in the NDB
kernel. The limits on the amount of data to be read and on the
amount of data to be written before this execution takes place
can be configured separately. (In other words, it is now
possible in MySQL Cluster to specify read batching and write
batching that is specific to BLOB data.) These limits can be configured either on the NDB API
level, or in the MySQL Server.
On the NDB API level, four new methods are added to the NdbTransaction class. getMaxPendingBlobReadBytes() and setMaxPendingBlobReadBytes() can be used to get and to set, respectively, the maximum amount of BLOB data to be read that accumulates before this implicit execution is triggered. getMaxPendingBlobWriteBytes() and setMaxPendingBlobWriteBytes() can be used to get and to set, respectively, the maximum volume of BLOB data to be written that accumulates before implicit execution occurs.
For the MySQL server, two new options are added. The --ndb-blob-read-batch-bytes option sets a limit on the amount of pending BLOB data to be read before triggering implicit execution, and the --ndb-blob-write-batch-bytes option controls the amount of pending BLOB data to be written.
limits can also be set using the mysqld
configuration file, or read and set within the
mysql client and other MySQL client
applications using the corresponding server system variables.
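For example, assuming the corresponding system variables are named ndb_blob_read_batch_bytes and ndb_blob_write_batch_bytes (names taken as an assumption here), the write limit might be adjusted from the mysql client like this:

SET GLOBAL ndb_blob_write_batch_bytes = 65536;
SHOW GLOBAL VARIABLES LIKE 'ndb_blob%batch_bytes';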
In some circumstances, an SQL trigger on an
NDB table could read stale data.
Data nodes configured with very large amounts (multiple gigabytes) of memory failed during startup with NDB error 2334 (Job buffer congestion).
References: See also Bug #47984.
The functions strcasecmp() and strncasecmp() were declared in ndb_global.h but never defined or used. The declarations have been removed.
On Windows platforms, issuing a certain command in the ndb_mgm client caused management processes that had been started with the --nodaemon option to exit unexpectedly.
When handling failures of multiple data nodes, an error in the construction of internal signals could cause the cluster's remaining nodes to crash. This issue was most likely to affect clusters with large numbers of data nodes. (Bug #58240)
MySQL Cluster failed to compile correctly on FreeBSD 8.1.
Trying to drop an index while it was being used to perform scan updates caused data nodes to crash. (Bug #58277, Bug #57057)
The number of rows affected by a statement that used a WHERE clause having an IN condition with a value list containing a great many elements, and that deleted or updated enough rows such that NDB processed them in batches, was not computed or reported correctly.
A query using
BETWEEN as part of a
WHERE condition could cause
mysqld to hang or crash.
A query against a table with a unique index created with USING HASH returned an empty result.
When executing a full table scan caused by a WHERE condition using unique_key IS NULL in combination with a join, NDB failed to close the scan.
A query having multiple predicates joined by OR in the WHERE clause, and which used the sort_union access method (as shown by EXPLAIN), could return incorrect results.
When engine condition pushdown was enabled, a query using LIKE on an ENUM column of an NDB table failed to return any results. This issue is resolved by disabling condition pushdown when performing such queries.
The FAIL_REP signal, used inside the NDB kernel to declare that a node has failed, now includes the node ID of the node that detected the failure. This information can be useful in debugging.
A row insert or update followed by a delete operation on the same row within the same transaction could in some cases lead to a buffer overflow. (Bug #59242)
References: See also Bug #56524. This bug was introduced by Bug #35208.
Data nodes no longer allocated all memory prior to being ready to exchange heartbeat and other messages with management nodes, as in NDB 6.3 and earlier versions of MySQL Cluster. This caused problems when data nodes configured with large amounts of memory failed to show as connected or showed as being in the wrong start phase in the ndb_mgm client even after making their initial connections to and fetching their configuration data from the management server. With this fix, data nodes now allocate all memory as they did in earlier MySQL Cluster versions. (Bug #57568)
In some circumstances, it was possible for
mysqld to begin a new multi-range read scan
without having closed a previous one. This could lead to
exhaustion of all scan operation objects, transaction objects,
or lock objects (or some combination of these) in
NDB, causing queries to fail with
such errors as Lock wait timeout exceeded
or Connect failure - out of connection objects.
References: See also Bug #58750.
During a node takeover, it was possible in some circumstances for one of the remaining nodes to send an extra transaction confirmation (LQH_TRANSCONF) signal to the DBTC kernel block, conceivably leading to a crash of the data node trying to take over as the new transaction coordinator.
Two related problems could occur with read-committed scans made in parallel with transactions combining multiple (concurrent) operations:
When committing a multiple-operation transaction that contained concurrent insert and update operations on the same record, the commit arrived first for the insert and then for the update. If a read-committed scan arrived between these operations, it could thus read incorrect data; in addition, if the scan read variable-size data, it could cause the data node to fail.
When rolling back a multiple-operation transaction having concurrent delete and insert operations on the same record, the abort arrived first for the delete operation, and then for the insert. If a read-committed scan arrived between the delete and the insert, it could incorrectly assume that the record should not be returned (in other words, the scan treated the insert as though it had not yet been committed).
When a slash character (/) was used as part of the name of an index on an NDB table, attempting to execute a TRUNCATE TABLE statement on the table failed with the error Index not found, and the table was rendered unusable.
Partitioning; Disk Data:
When using multi-threaded data nodes, an
NDB table created with a very large
value for the
MAX_ROWS option could—if
this table was dropped and a new table with fewer partitions,
but having the same table ID, was created—cause
ndbmtd to crash when performing a system
restart. This was because the server attempted to examine each
partition whether or not it actually existed.
This issue is the same as that reported in Bug #45154, except that the current issue is specific to ndbmtd instead of ndbd. (Bug #58638)
Disk Data: Performing what should have been an online drop of a multi-column index was actually performed offline. (Bug #55618)
In certain cases, a race condition could occur when DROP LOGFILE GROUP removed the logfile group while a read or write of one of the affected files was in progress, which in turn could lead to a crash of the data node.
A race condition could sometimes be created when
DROP TABLESPACE was run
concurrently with a local checkpoint; this could in turn lead to
a crash of the data node.
When at least one data node was not running, queries against the
INFORMATION_SCHEMA.FILES table took
an excessive length of time to complete because the MySQL server
waited for responses from any stopped nodes to time out. Now, in
such cases, MySQL does not attempt to contact nodes which are
not known to be running.
Attempting to read the same value (using getValue()) more than 9000 times within the same transaction caused the transaction to hang when executed. Now when more reads are performed in this way than can be accommodated in a single transaction, the call to execute() fails with a suitable error.
It was not possible to obtain the status of nodes accurately
after an attempt to stop a data node using
ndb_mgm_stop() failed without
returning an error.
Issuing an ALL DUMP command during a rolling upgrade to MySQL Cluster NDB 7.0.20 caused the cluster to crash.
Functionality Added or Changed
ndbd now bypasses use of Non-Uniform Memory Access support on Linux hosts by default. If your system supports NUMA, you can enable it and override ndbd's use of interleaving by setting the Numa data node configuration parameter which is added in this release. See Data Nodes: Realtime Performance Parameters, for more information.
The Id configuration parameter used with MySQL Cluster management, data, and API nodes (including SQL nodes) is now deprecated, and the NodeId parameter (long available as a synonym for Id when configuring these types of nodes) should be used instead. Id continues to be supported for reasons of backward compatibility, but now generates a warning when used with these types of nodes, and is subject to removal in a future release of MySQL Cluster.
This change affects the name of the configuration parameter only, establishing a clear preference for NodeId over Id in the node sections of the MySQL Cluster global configuration (config.ini) file. The behavior of unique identifiers for management, data, and SQL and API nodes in MySQL Cluster has not otherwise been altered.
The Id parameter as used in the [computer] section of the MySQL Cluster global configuration file is not affected by this change.
MySQL Cluster RPM distributions did not include a
shared-compat RPM for the MySQL Server, which
meant that MySQL applications depending on
libmysqlclient.so.15 (MySQL 5.0 and
earlier) no longer worked.
The method for calculating table schema versions used by schema transactions did not follow the established rules for recording schemas used in the NDB kernel.
References: See also Bug #57896.
A query with a WHERE condition of the form range1 OR range2, when selecting from an NDB table having a primary key on multiple columns, could result in Error 4259 Invalid set of range scan bounds if range2 started exactly where range1 ended and the primary key definition declared the columns in a different order relative to the order in the table's column list. (Such a query should simply return all rows in the table, since any expression of this form is always true.)
For example, consider a table created like this:
CREATE TABLE t (a INT, b INT, PRIMARY KEY (b, a)) ENGINE=NDB;
This issue could then be triggered by a query such as this one:
SELECT * FROM t WHERE b < 8 OR b >= 8;
In addition, the order of the ranges in the
WHERE clause was significant; the issue was
not triggered, for example, by the query
SELECT * FROM t WHERE b
<= 8 OR b > 8.
ndb_restore now retries failed transactions when replaying log entries, just as it does when restoring data. (Bug #57618)
When a data node angel process failed to fork off a new worker
process (to replace one that had failed), the failure was not
handled. This meant that the angel process either transformed
itself into a worker process, or itself failed. In the first
case, the data node continued to run, but there was no longer
any angel to restart it in the event of failure, even with
StopOnError set to 0.
Transient errors during a local checkpoint were not retried, leading to a crash of the data node. Now when such errors occur, they are retried up to 10 times if necessary. (Bug #57650)
A query using a WHERE condition against an NDB table having a VARCHAR column as its primary key failed to return all matching rows.
The CREATE NODEGROUP and DROP NODEGROUP management client commands could cause mysqld processes to crash.
Data nodes compiled with gcc 4.5 or higher crashed during startup. (Bug #57761)
A field in the LQHKEYREQ request message used by the local query handler when checking the major schema version of a table, being only 16 bits wide, could cause this check to fail with an Invalid schema version error (NDB error code 1227). This issue occurred after creating and dropping (and re-creating) the same table 65537 times, then trying to insert rows into the table.
References: See also Bug #57897.
The disconnection of an API or management node due to missed heartbeats led to a race condition which could cause data nodes to crash. (Bug #57946)
During a GCP takeover, it was possible for one of the data nodes not to receive a GCP commit signal, with the result that it would report itself as GCP_COMMITTING while the other data nodes reported a different GCP state.
The MAX_ROWS option for CREATE TABLE was ignored, which meant that it was not possible to enable multi-threaded building of ordered indexes for the table.
On Windows, the angel process which monitors and (when necessary) restarts the data node process failed to spawn a new worker in some circumstances where the arguments vector contained extra items placed at its beginning. This could occur when the path to ndbd.exe or ndbmtd.exe contained one or more spaces. (Bug #57949)
A number of cluster log warning messages relating to deprecated configuration parameters contained spelling, formatting, and other errors. (Bug #57381)
A GCP stop is detected using 2 parameters which determine the
maximum time that a global checkpoint or epoch can go unchanged;
one of these controls this timeout for GCPs and one controls the
timeout for epochs. Suppose the cluster is configured such that the epoch timeout is 100 ms but HeartbeatIntervalDBDB is 1500 ms. A node failure can be signalled after 4 missed heartbeats; in this case, 6000 ms. However, this would exceed the epoch timeout, causing false detection of a GCP stop. To prevent this from happening, the configured value for this timeout is automatically adjusted, based on the values of HeartbeatIntervalDBDB and ArbitrationTimeout.
The current issue arose when the automatic adjustment routine did not correctly take into consideration the fact that, during cascading node failures, several intervals of length 4 * (HeartbeatIntervalDBDB + ArbitrationTimeout) may elapse before all node failures have internally been resolved. This could cause false GCP stop detection in the event of a cascading node failure.
The SUMA kernel block has a 10-element ring buffer for storing out-of-order SUB_GCP_COMPLETE_REP signals received from the local query handlers when global checkpoints are completed. In some cases, exceeding the ring buffer capacity on all nodes of a node group at the same time caused the node group to fail with an assertion.
Aborting a native NDB backup in the ndb_mgm client using the ABORT BACKUP command did not work correctly when using ndbmtd, in some cases leading to a crash.
Disk Data: When performing online DDL on Disk Data tables, scans and moving of the relevant tuples were done in more or less random order. This fix causes these scans to be done in the order of the tuples, which should improve performance of such operations due to the more sequential ordering of the scans. (Bug #57848)
References: See also Bug #57827.
An application dropping a table at the same time that another application tried to set up a replication event on the same table could lead to a crash of the data node. The same issue could sometimes cause mysqld to crash.
An NDB API client program under load could abort with an assertion error.
References: See also Bug #32708.
Functionality Added or Changed
It is now possible using the ndb_mgm management client or the MGM API to force a data node shutdown or restart even if this would force the shutdown or restart of the entire cluster.
In the management client, this is implemented through the addition of the -f (force) option to the STOP and RESTART commands.
For more information, see
Commands in the MySQL Cluster Management Client.
References: See also Bug #34325, Bug #11747863.
The MGM API function ndb_mgm_get_version(), which was previously internal, has now been moved to the public API. This function can be used to get NDB storage engine and other version information from the management server.
References: See also Bug #51273.
The failure of a data node during some scans could cause other data nodes to fail. (Bug #54945)
A text file containing old MySQL Cluster changelog information was no longer being maintained, and so has been removed from the tree.
MySQL Cluster stores, for each row in each NDB table, a Global Checkpoint Index (GCI) which identifies the last committed transaction that modified the row. As such, a GCI can be thought of as a coarse-grained row version.
Due to changes in the format used by NDB to store local checkpoints (LCPs) in MySQL Cluster NDB 6.3.11, it could happen that, following cluster shutdown and subsequent recovery, the GCI values for some rows could be changed unnecessarily; this could possibly, over the course of many node or system restarts (or both), lead to an inconsistent database.
Under certain rare conditions, attempting to start more than one
ndb_mgmd process simultaneously using the
--reload option caused a race
condition such that none of the ndb_mgmd
processes could start.
At startup, an ndbd or ndbmtd process creates directories for its file system without checking to see whether they already exist. Portability code added in MySQL Cluster NDB 7.0.18 and MySQL Cluster NDB 7.1.7 did not account for this fact, printing a spurious error message when a directory to be created already existed. This unneeded printout has been removed. (Bug #57087)
A data node can be shut down having completed and synchronized a given GCI x, while having written a great many log records belonging to the next GCI x + 1, as part of normal operations. However, when starting, completing, and synchronizing GCI x + 1 after the restart, the log records from the original start must not be read. To make sure that this does not
happen, the REDO log reader finds the last GCI to restore, scans
forward from that point, and erases any log records that were
not (and should never be) used.
The current issue occurred because this scan stopped immediately as soon as it encountered an empty page. This was problematic because the REDO log is divided into several files; thus, it could be that there were log records in the beginning of the next file, even if the end of the previous file was empty. These log records were never invalidated; following a start or restart, they could be reused, leading to a corrupt REDO log. (Bug #56961)
When multiple SQL nodes were connected to the cluster and one of
them stopped in the middle of a DDL operation, the
mysqld process issuing the DDL timed out with
the error distributing
tbl_name timed out.
Exhausting the number of available commit-ack markers (the number of which is controlled by a data node configuration parameter) led to a data node crash.
An ALTER TABLE ... ADD COLUMN operation that changed the table schema such that the number of 32-bit words used for the bitmask allocated to each DML operation increased, when it occurred during a transaction in which DML performed prior to the DDL was followed by either another DML operation or (if using replication) a commit, led to data node failure.
This was because the data node did not take into account that the bitmask for the before-image was smaller than the current bitmask, which caused the node to crash. (Bug #56524)
References: This bug is a regression of Bug #35208.
An error in program flow could result in data node shutdown routines being called multiple times.
On Windows, a data node refused to start in some cases unless the ndbd.exe executable was invoked using an absolute rather than a relative path. (Bug #56257)
Memory pages used for DataMemory, once assigned to ordered indexes, were not ever freed, even after any rows that belonged to the corresponding indexes had been deleted.
While distributing DROP TABLE operations among several SQL nodes attached to a MySQL Cluster, the LOCK_OPEN lock normally protecting mysqld's internal table list is released so that other queries or DML statements are not blocked. However, to make sure that other DDL is not executed simultaneously, a global schema lock (implemented as a row-level lock in NDB) is used, such that all operations that can modify the state of the mysqld internal table list also need to acquire this global schema lock. The SHOW TABLE STATUS statement did not acquire this lock.
When running a
SELECT on an
NDB table with
TEXT columns, memory was
allocated for the columns but was not freed until the end of the
SELECT. This could cause problems
with excessive memory usage when dumping (using for example
mysqldump) tables with such columns and
having many rows, large column values, or both.
References: See also Bug #56488, Bug #50310.
In certain cases, a cached table object could be left behind, which caused problems with subsequent DDL operations.
The MGM API function ndb_mgm_get_version() did not set the error message before returning with an error. With this fix, it is now possible, after a failed call to this function, to retrieve an error number and error message, as expected of MGM API functions.
The MGM API functions including ndb_mgm_restart() set the error code and message without first checking whether the management server handle was NULL, which could lead to fatal errors in MGM API applications that depended on these functions.
Functionality Added or Changed
More finely grained control over restart-on-failure behavior is provided by two new data node configuration parameters: one limits the total number of retries made before giving up on starting the data node, and the other sets the number of seconds between retry attempts. These parameters are used only if StopOnError is set to 0.
For more information, see Defining MySQL Cluster Data Nodes. (Bug #54341)
Following a failure of the master data node, the new master sometimes experienced a race condition which caused the node to terminate with a GcpStop error. (Bug #56044)
Startup messages previously written to stdout are now written to the cluster log instead when LogDestination is set.
Trying to create a table having a TEXT column with an empty ('') default value failed with the error Illegal null attribute. (An empty default is permitted and NDB should do the same.)
configuration parameter was not handled correctly for CPU ID
values greater than 255.
ndb_restore always reported 0 for the
GCPStop (end point of the backup). Now it
provides useful binary log position and epoch information.
The warning MaxNoOfExecutionThreads (#) > LockExecuteThreadToCPU count (#), this could cause contention could be logged when running ndbd, even though the condition it describes can occur only when using ndbmtd.
--nodaemon logged to the
console in addition to the configured log destination.
The graceful shutdown of a data node could sometimes cause transactions to be aborted unnecessarily. (Bug #18538)
References: See also Bug #55641.
Functionality Added or Changed
--server-id-bits option for mysqld
For mysqld, the
--server-id-bits option indicates
the number of least significant bits within the 32-bit server ID
which actually identify the server. Indicating that the server
ID uses less than 32 bits permits the remaining bits to be used
for other purposes by NDB API applications using the Event API
For mysqlbinlog, the --server-id-bits option tells mysqlbinlog how to interpret the server IDs in the binary log when the binary log was written by a mysqld having its server_id_bits set to less than the maximum (32).
Important Change; Cluster API:
The poll and select calls made by the MGM API were not interrupt-safe; that is, a signal caught by the process while waiting for an event on one or more sockets returned error -1 with errno set to EINTR. This caused problems with some MGM API functions. To fix this problem, the internal ndb_socket_poller::poll() function has been made interrupt-safe.
The old version of this function has been retained as
poll_unsafe(), for use by those parts of NDB
that do not need the EINTR-safe version
of the function.
When another data node failed, a given data node's DBTC kernel block could time out while waiting for DBDIH to signal commits of pending transactions, leading to a crash. Now in such cases the timeout generates a printout, and the data node continues to operate.
Starting ndb_mgmd with
--config-cache=0 caused it to
An excessive number of timeout warnings (normally used only for debugging) were written to the data node logs. (Bug #53987)
The TCP configuration parameters
HostName2 were not displayed in the
output of ndb_config
The configure.js option
WITHOUT_DYNAMIC_PLUGINS=TRUE was ignored when
building MySQL Cluster for Windows using
CMake. Among the effects of this issue was
that CMake attempted to build the
InnoDB storage engine as a plugin (that is, as a .DLL file), even though this plugin is not currently supported by MySQL Cluster.
It was possible for a DROP DATABASE statement to remove NDB hidden blob tables without removing the parent tables, with the result that the tables, although hidden to MySQL clients, were still visible in the output of ndb_show_tables but could not be dropped using ndb_drop_table.
Disk Data: As an optimization when inserting a row into an empty page, the page is not read, but rather simply initialized. However, this optimization was performed in all cases where a row was inserted into an empty page, even though it should have been done only if it was the first time that the page had been used by a table or fragment. This is because, if the page had been in use and all records had then been released from it, the page still needed to be read to learn its log sequence number (LSN).
This caused problems only if the page had been flushed using an incorrect LSN and the data node failed before any local checkpoint was completed; a completed local checkpoint would have removed the need to apply the undo log, in which case the incorrect LSN would have been ignored.
The user-visible result of the incorrect LSN was that it caused the data node to fail during a restart. It was perhaps also possible (although not conclusively proven) that this issue could lead to incorrect data. (Bug #54986)
not update the timer for
Functionality Added or Changed
Added the HeartbeatOrder data node configuration parameter, which can be used to set the order in which heartbeats are transmitted between data nodes. This parameter can be useful in situations where multiple data nodes are running on the same host and a temporary disruption in connectivity between hosts would otherwise cause the loss of a node group, leading to failure of the cluster.
Restrictions on some types of mismatches in column definitions when restoring data using ndb_restore have been relaxed. These include the following types of mismatches:
Different default values
Different distribution key settings
Now, when one of these types of mismatches in column definitions is encountered, ndb_restore no longer stops with an error; instead, it accepts the data and inserts it into the target table, while issuing a warning to the user.
For more information, see ndb_restore — Restore a MySQL Cluster Backup. (Bug #54423)
References: See also Bug #53810, Bug #54178, Bug #54242, Bug #54279.
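As a hypothetical illustration of the first kind of mismatch listed above (the table and column names here are invented for this sketch):

-- Table definition in the cluster where the backup was taken:
CREATE TABLE t_relaxed (
  id INT NOT NULL PRIMARY KEY,
  status INT NOT NULL DEFAULT 0
) ENGINE=NDB;

-- Table definition in the cluster being restored to, with a different
-- column default; ndb_restore now inserts the backed-up rows here and
-- issues a warning rather than stopping with an error:
CREATE TABLE t_relaxed (
  id INT NOT NULL PRIMARY KEY,
  status INT NOT NULL DEFAULT 1
) ENGINE=NDB;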
It is now possible to install management node and data node processes as Windows services. (See Installing MySQL Cluster Processes as Windows Services, for more information.) In addition, data node processes on Windows are now maintained by angel processes, just as they are on other platforms supported by MySQL Cluster.
The disconnection of all API nodes (including SQL nodes) during an ALTER TABLE caused a memory leak.
The presence of duplicate [tcp] sections in the config.ini file caused the management server to crash. Now in such cases, ndb_mgmd fails gracefully with an appropriate error message.
A table having the maximum number of attributes permitted could not be backed up using the ndb_mgm client.
The maximum number of attributes supported per table is not the same for all MySQL Cluster releases. See Limits Associated with Database Objects in MySQL Cluster, to determine the maximum that applies in the release which you are using.
When performing an online alter table where 2 or more SQL nodes connected to the cluster were generating binary logs, an incorrect message could be sent from the data nodes, causing mysqld processes to crash. This problem was often difficult to detect, because restarting SQL node or data node processes could clear the error, and because the crash in mysqld did not occur until several minutes after the erroneous message was sent and received. (Bug #54168)
The setting controlling the number of index build threads was ignored by ndbmtd, which made it impossible to use more than 4 cores for rebuilding indexes.
During initial node restarts, initialization of the REDO log was always performed 1 node at a time, during start phase 4. Now this is done during start phase 2, so that the initialization can be performed in parallel, thus decreasing the time required for initial restarts involving multiple nodes. (Bug #50062)
If a node shutdown (either in isolation or as part of a system shutdown) occurred directly following a local checkpoint, it was possible that this local checkpoint would not be used when restoring the cluster. (Bug #54611)
When adding multiple new node groups to a MySQL Cluster, it was necessary, for each new node group, to add only the nodes to be assigned to that node group, create the node group using CREATE NODEGROUP, and then repeat this process for each new node group to be added to the cluster. The fix for this issue makes it possible to add all of the new nodes at one time, and then issue several CREATE NODEGROUP commands in succession.
Cluster API: When using the NDB API, it was possible to rename a table with the same name as that of an existing table.
This issue did not affect table renames executed using SQL on MySQL servers acting as MySQL Cluster API nodes.
Cluster API: An excessive number of client connections, such that more than 1024 file descriptors, sockets, or both were open, caused NDB API applications to crash. (Bug #34303)
The value of an internal constant used in the implementation of
NdbScanOperation classes caused
MySQL Cluster NDB 7.0 NDB API applications compiled against
MySQL Cluster NDB 7.0.14 or earlier to fail when run with MySQL
Cluster 7.0.15, and MySQL Cluster NDB 7.1 NDB API applications
compiled against MySQL Cluster NDB 7.1.3 or earlier to break
when used with MySQL Cluster 7.1.4.
When using mysqldump to back up and restore schema information while using ndb_restore for restoring only the data, restoring to MySQL Cluster NDB 7.1.4 from an older version failed on tables having columns with default values. This was because versions of MySQL Cluster prior to MySQL Cluster NDB 7.1.4 did not have native support for default values.
In addition, the MySQL Server supports
TIMESTAMP columns having dynamic
default values, such as
CURRENT_TIMESTAMP; however, the current implementation of NDB-native default values permits only a constant default value.
To fix this issue, the manner in which TIMESTAMP column defaults are handled is reverted to its pre-NDB-7.1.4 behavior (obtaining the default value from mysqld rather than NDBCLUSTER), except where a TIMESTAMP column uses a constant default, as in the case of a column declared as TIMESTAMP DEFAULT 0 or with a similar constant value.
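To illustrate the distinction just described, consider a hypothetical table declaring both kinds of default (whether DEFAULT 0 is accepted also depends on the SQL mode in effect):

CREATE TABLE t_ts (
  id INT NOT NULL PRIMARY KEY,
  -- dynamic default: supplied by mysqld, not stored as an NDB-native default
  created TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
  -- constant default: can be stored as an NDB-native default value
  flagged TIMESTAMP DEFAULT 0
) ENGINE=NDB;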
Functionality Added or Changed
Important Change: The maximum number of attributes (columns plus indexes) per table has increased to 512.
A --wait-nodes option has been added for ndb_waiter. When this option is used, the program waits only for the nodes having the listed IDs to reach the desired state. For more information, see ndb_waiter — Wait for MySQL Cluster to Reach a Given Status.
A new option was added for ndb_restore. This option causes ndb_restore to ignore any schema objects which it does not recognize. Currently, this is useful chiefly for restoring native backups made from a cluster running MySQL Cluster NDB 7.0 to a cluster running MySQL Cluster NDB 6.3.
As part of this change, new methods relating to default values
have been added to the
Table classes in the NDB
API. For more information, see
Added the MySQL Cluster management server option --config-cache, which makes it possible to enable and disable configuration caching. This option is turned on by default; to disable configuration caching, start ndb_mgmd with --config-cache=0. See ndb_mgmd — The MySQL Cluster Management Server Daemon, for more information.
When attempting to create an NDB table on an SQL node that had not yet connected to a MySQL Cluster management server since that SQL node's last restart, the statement failed as expected, but with the unexpected Error 1495 For the partitioned engine it is necessary to define...
(Bug #11747335, Bug #31853)
An internal buffer allocator used by NDB attempts to allocate a requested number of pages, but is permitted to allocate a smaller number of pages, down to a specified minimum. However, this allocator could sometimes allocate fewer than the minimum number of pages, causing problems with multi-threaded building of ordered indexes.
higher than 4G on 32-bit platforms caused
ndbd to crash, instead of failing gracefully
with an error.
(Bug #52536, Bug #50928)
Specifying the node ID as part of the
--ndb-connectstring option to
mysqld was not handled correctly.
The fix for this issue includes the following changes:
Multiple occurrences of either of the mysqld --ndb-connectstring or --ndb-nodeid options are now handled in the same way as with other MySQL server options, in that the value set in the last occurrence of the option is the value that is used by mysqld.
When --ndb-nodeid is used, its value overrides that of any node ID setting used in --ndb-connectstring. For example, starting mysqld with --ndb-nodeid=3 now produces the same result as starting it with a connectstring that specifies nodeid=3.
The 1024-character limit on the length of the connectstring is removed, and --ndb-connectstring is now handled in this regard in the same way as other options.
In the NDB API, a new constructor was added which takes as its arguments a connectstring and the node ID to force the API node to use.
When creation of an NDB log handler failed, the memory allocated to it was freed twice.
NDB did not distinguish correctly between table names differing only by lettercase when lower_case_table_names was set to a nonzero value.
After creating NDB tables until creation of a table failed due to NDB error 905 Out of attribute records (increase MaxNoOfAttributes), and then restarting all management node and data node processes, attempting to drop and re-create one of the tables failed with the error Out of table records..., even when sufficient table records were available.
References: See also Bug #52055. This bug is a regression of Bug #44294.
When MySQL Cluster is compiled with support for epoll but this functionality is not available at runtime, it tries to fall back to using the select() function in its place. However, an issue in the transporter registry code caused ndbd to fail instead.
Creating a Disk Data table, dropping it, then creating an
in-memory table and performing a restart, could cause data node
processes to fail with errors in the
kernel block if the new table's internal ID was the same as
that of the old Disk Data table. This could occur because undo
log handling during the restart did not check that the table
having this ID was now in-memory only.
A table created while
enabled was not always stored to disk, which could lead to a
data node crash with Error opening DIH schema files
When creating an index, NDB failed to check whether the internal ID allocated to the index was within the permissible range, leading to an assertion. This
issue could manifest itself as a data node failure with
NDB error 707 (No more
table metadata records (increase MaxNoOfTables)),
when creating tables in rapid succession (for example, by a
script, or when importing from mysqldump),
even with a relatively high value for
MaxNoOfTables and a
relatively low number of tables.
ndb_restore did not raise any errors if hashmap creation failed during execution. (Bug #51434)
The value set for the ndb_mgmd option
--ndb-nodeid was not verified
prior to use as being within the permitted range (1 to 255,
inclusive), leading to a crash of the management server.
NDB truncated a column declared as
DECIMAL(65,0) to a length of 64.
Now such a column is accepted and handled correctly. In cases
where the maximum length (65) is exceeded,
NDB now raises an error instead.
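A brief illustration of the corrected behavior (the table name is hypothetical; precision 65 is also the maximum the MySQL server itself permits):

CREATE TABLE t_dec (
  -- now accepted and stored with its full declared precision by NDB
  d DECIMAL(65,0) NOT NULL,
  PRIMARY KEY (d)
) ENGINE=NDB;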
ndb_mgm -e "ALL STATUS" erroneously reported
that data nodes remained in start phase 0 until they had
Functionality Added or Changed
It is now possible to determine, using the
ndb_desc utility or the NDB API, which data
nodes contain replicas of which partitions. For
ndb_desc, a new
--extra-node-info option is
added to cause this information to be included in its output. A corresponding method has also been added to the NDB API for obtaining this information.
DUMP commands returned output to all ndb_mgm clients connected to the same MySQL Cluster. Now, these commands return their output only to the ndb_mgm client that actually issued the command.
Incompatible Change; Cluster API: The default behavior of the NDB API Event API has changed as follows:
Previously, when creating an
Event, DDL operations (alter
and drop operations on tables) were automatically reported on
any event operation that used this event, but as a result of
this change, this is no longer the case. Instead, you must now
invoke the event's
setReport() method, with
ER_DDL, to get this behavior.
For existing NDB API applications where you wish to retain the
old behavior, you must update the code as indicated previously,
then recompile, following an upgrade. Otherwise, DDL operations
are no longer reported after upgrading
The output of the ndb_mgm client
REPORT BACKUPSTATUS command could sometimes
contain errors due to uninitialized data.
The mysql client command did not work properly. This issue was known to affect only the version of the mysql client that was included with MySQL Cluster NDB 7.0 and MySQL Cluster NDB 7.1.
ha_ndbcluster.cc was not compiled with the same SAFE_MUTEX flags as the MySQL Server.
When a certain configuration parameter was set in config.ini, the ndb_mgm REPORT MEMORYUSAGE command printed its output multiple times.
An internal variable that is no longer used has been removed.
Restoring a MySQL Cluster backup between platforms having different endianness failed when also restoring metadata and the backup contained a hashmap not already present in the database being restored to. This issue was discovered when trying to restore a backup made on Solaris/SPARC to a MySQL Cluster running on Solaris/x86, but could conceivably occur in other cases where the endianness of the platform on which the backup was taken differed from that of the platform being restored to. (Bug #51432)
When performing a complex mix of node restarts and system
restarts, the node that was elected as master sometimes required
optimized node recovery due to missing
information. When this happened, the node crashed with
Failure to recreate object ... during restart, error
721 (because the
code was run twice). Now when this occurs, node takeover is
executed immediately, rather than being made to wait until the
remaining data nodes have started.
References: See also Bug #48436.
The following issues were fixed in the
When debug compiling MySQL Cluster on Windows, the mysys library was not compiled with -DSAFEMALLOC and -DSAFE_MUTEX, due to the fact that my_socket.c was misnamed as my_socket.cc. (Bug #51856)
If a node or cluster failure occurred while
mysqld was scanning the
ndb.ndb_schema table (which it does when
attempting to connect to the cluster), insufficient error
handling could lead to a crash by mysqld in
certain cases. This could happen in a MySQL Cluster with a great
many tables, when trying to restart data nodes while one or more
mysqld processes were restarting.
Information about several management client commands was missing
from (that is, truncated in) the output of the
After running a mixed series of node and system restarts, a system restart could hang or fail altogether. This was caused by setting the value of the newest completed global checkpoint too low for a data node performing a node restart, which led to the node reporting incorrect GCI intervals for its first local checkpoint. (Bug #52217)
Issuing a command in the ndb_mgm client after it had lost its connection to the management server could cause the client to crash. (Bug #49219)
In MySQL Cluster NDB 7.0 and later, DDL operations are performed within schema transactions; the NDB kernel code for starting a schema transaction checks that all data nodes are at the same version before permitting a schema transaction to start. However, when a version mismatch was detected, the client was not actually informed of this problem, which caused the client to hang. (Bug #52228)
A GROUP BY query against NDB tables sometimes did not use any indexes unless the query included an explicit index hint. With this fix, indexes are used by such queries (where otherwise possible) even when no index hint is specified.
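A sketch of the kind of query affected (table and index names are illustrative only):

CREATE TABLE t_grp (
  id INT NOT NULL PRIMARY KEY,
  grp INT NOT NULL,
  val INT NOT NULL,
  KEY idx_grp (grp)
) ENGINE=NDB;

-- Before the fix, this query might ignore idx_grp unless an explicit index
-- hint was given; after the fix, the index is considered automatically
-- where it can be used.
SELECT grp, COUNT(*) FROM t_grp GROUP BY grp;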
The redo log protects itself from being filled up by
periodically checking how much space remains free. If
insufficient redo log space is available, it sets the state
TAIL_PROBLEM which results in transactions
being aborted with error code 410 (out of redo
log). However, this state was not set following a
node restart, which meant that if a data node had insufficient
redo log space following a node restart, it could crash a short
time later with Fatal error due to end of REDO
log. Now, this space is checked during node restarts as well.
method could in some cases cause a buffer overflow.
The ndb_print_backup_file utility failed to function, due to a previous internal change in the NDB code. (Bug #41512, Bug #48673)
ndb_mgm -e "... REPORT ..." did not write any
The fix for this issue also prevents the cluster log from being
INFO messages when
DataMemory usage reaches
100%, and insures that when the usage is decreased, an
appropriate message is written to the cluster log.
(Bug #31542, Bug #44183, Bug #49782)
The error message returned after attempting to execute ALTER LOGFILE GROUP on a nonexistent logfile group did not indicate the reason for the failure.
Disk Data: Inserts of blob column values into a MySQL Cluster Disk Data table that exhausted the tablespace resulted in misleading no such tuple error messages rather than the expected error tablespace full.
This issue appeared similar to Bug #48113, but had a different underlying cause. (Bug #52201)
DDL operations on Disk Data tables having a relatively small
UNDO_BUFFER_SIZE could fail unexpectedly.
When reading blob data with lock mode LM_SimpleRead, the lock was not upgraded as expected.
A number of issues were corrected in the NDB API coding examples
found in the
directory in the MySQL Cluster source tree. These included
possible endless recursion in
ndbapi_scan.cpp as well as problems running
some of the examples on systems using Windows or Mac OS X due to
the lettercase used for some table names.
(Bug #30552, Bug #30737)
Functionality Added or Changed
A new configuration parameter
HeartbeatThreadPriority makes it possible to
select between a first-in, first-out or round-robin scheduling
policy for management node and API node heartbeat threads, as
well as to set the priority of these threads. See
Defining a MySQL Cluster Management Server, or
Defining SQL and Other API Nodes in a MySQL Cluster, for more
Start phases are now written to the data node logs. (Bug #49158)
The ndb_desc utility can now show the extent
space and free extent space for subordinate
TEXT columns (stored in hidden
BLOB tables by NDB). A
--blob-info option has been
added for this program that causes ndb_desc
to generate a report for each subordinate
BLOB table. For more information, see
ndb_desc — Describe NDB Tables.
Replication of a MySQL Cluster using multi-threaded data nodes
could fail with forced shutdown of some data nodes due to the
fact that ndbmtd exhausted
more quickly than ndbd. After this fix,
passing of replication data between the
SUMA NDB kernel blocks is done using
DataMemory rather than
Until you can upgrade, you may be able to work around this issue by increasing the relevant buffer setting; doubling the default should be sufficient in most cases.
When performing a system restart of a MySQL Cluster where multi-threaded data nodes were in use, there was a slight risk that the restart would hang due to incorrect serialization of signals passed between LQH instances and proxies; some signals were sent using a proxy, and others directly, which meant that the order in which they were sent and received could not be guaranteed. If signals arrived in the wrong order, this could cause one or more data nodes to hang. Now all signals that need to be sent and received in the same order are sent using the same path. (Bug #51645)
When one or more data nodes read their LCPs and applied undo logs significantly faster than others, this could lead to a race condition causing system restarts of data nodes to hang. This could most often occur when using both ndbd and ndbmtd processes for the data nodes. (Bug #51644)
When deciding how to divide the REDO log, the
DBDIH kernel block saved more than was needed
to restore the previous local checkpoint, which could cause REDO
log space to be exhausted prematurely.
A side effect of the ndb_restore
--rebuild-indexes options is
to change the schema versions of indexes. When a
mysqld later tried to drop a table that had
been restored from backup using one or both of these options,
the server failed to detect these changed indexes. This caused
the table to be dropped, but the indexes to be left behind,
leading to problems with subsequent backup and restore operations.
The ndb_restore message
created index `PRIMARY`... was directed to
stderr instead of stdout.
DML operations can fail with
NDB error 1220
(REDO log files overloaded...) if the
opening and closing of REDO log files takes too much time. If
this occurred as a GCI marker was being written in the REDO log
while REDO log file 0 was being opened or closed, the error
could persist until a GCP stop was encountered. This issue could
be triggered when there was insufficient REDO log space (for
example, with configuration parameter settings
NoOfFragmentLogFiles = 6 and
FragmentLogFileSize = 6M) with a load
including a very high number of updates.
References: See also Bug #20904.
equal to 1 or 2, if data nodes from one node group were
restarted 256 times and applications were running traffic such
that it would encounter
1204 (Temporary failure, distribution
changed), the live node in the node group would
crash, causing the cluster to crash as well. The crash occurred
only when the error was encountered on the 256th restart; encountering the error on any previous or subsequent restart did not cause this crash.
ndb_restore crashed while trying to restore a corrupted backup, due to missing error handling. (Bug #51223)
Cluster API: An issue internal to ndb_mgm could cause problems when trying to start a large number of data nodes at the same time. (Bug #51273)
References: See also Bug #51310.
greater than 2GB could cause data nodes to crash while starting.
An initial restart of a data node configured with a large amount of memory could fail with a Pointer too large error. (Bug #51027)
References: This bug was introduced by Bug #47818.
Functionality Added or Changed
Numeric codes used in management server status update messages in the cluster logs have been replaced with text descriptions. (Bug #49627)
References: See also Bug #44248.
When a primary key lookup on an
table containing one or more
columns was executed in a transaction, a shared lock on any blob
tables used by the
NDB table was
held for the duration of the transaction. (This did not occur for indexed or non-indexed scans.) Now in such cases, the lock is released after all
BLOB data has been read.
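A short sketch of the access pattern described above (hypothetical table):

CREATE TABLE t_blob (
  id INT NOT NULL PRIMARY KEY,
  doc TEXT
) ENGINE=NDB;

BEGIN;
-- Primary key lookup that reads the TEXT column: previously the shared lock
-- on the hidden blob table was held until COMMIT; now it is released once
-- the blob data has been read.
SELECT doc FROM t_blob WHERE id = 1;
-- ... other work in the same transaction ...
COMMIT;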
ndbmtd started on a single-core machine could sometimes fail with a Job Buffer Full error when the number of configured threads was greater than the machine could support. Now a warning is logged when this occurs.
greater than 2GB could cause data nodes to crash while starting.
An initial restart of a data node configured with a large amount of memory could fail with a Pointer too large error. (Bug #51027)
References: This bug was introduced by Bug #47818.
Functionality Added or Changed
The maximum permitted value of the
system variable has been increased from 256 to 65536.
Added multi-threaded ordered index building capability during
system restarts or node restarts, controlled by the
node configuration parameter (also introduced in this release).
Dropping unique indexes in parallel while they were in use could cause node and cluster failures. (Bug #50118)
Under some circumstances, the kernel block involved could send an excessive number of commit and completion messages, which could lead to the job buffer filling up and node failure. This was especially likely to occur when using ndbmtd with a single data node.
When attempting to join a running cluster whose management
server had been started with the
--nowait-nodes option and
having SQL nodes with dynamically allocated node IDs, a second
management server failed with spurious INTERNAL
ERROR: Found dynamic ports with value in config...
Online upgrades from MySQL Cluster NDB 7.0.9b to MySQL Cluster NDB 7.0.10 did not work correctly. Current MySQL Cluster NDB 7.0 users should upgrade directly to MySQL Cluster NDB 7.0.11 or later.
This issue is not known to have affected MySQL Cluster NDB 6.3, and it should be possible to upgrade from MySQL Cluster NDB 6.3 to MySQL Cluster NDB 7.0.10 without problems. See Upgrade and Downgrade Compatibility: MySQL Cluster NDB 6.x, for more information. (Bug #50433)
During Start Phases 1 and 2, the
command sometimes (falsely) returned
Connected for data nodes running
If a query on an
NDB table compared
a constant string value to a column, and the length of the
string was greater than that of the column, condition pushdown
did not work correctly. (The string was truncated to fit the
column length before being pushed down.) Now in such cases, the
condition is no longer pushed down.
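A sketch of the situation described (hypothetical table; the string literal is longer than the column):

CREATE TABLE t_cp (
  id INT NOT NULL PRIMARY KEY,
  code VARCHAR(5)
) ENGINE=NDB;

-- The constant 'ABCDEFGHIJ' is longer than VARCHAR(5); previously it was
-- truncated before being pushed down to the data nodes, which could give
-- wrong results. Now the condition is not pushed down and is evaluated by
-- mysqld instead.
SELECT * FROM t_cp WHERE code = 'ABCDEFGHIJ';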
Initial start of partitioned nodes did not work correctly. This issue was observed in MySQL Cluster NDB 7.0 only. (Bug #50661)
mysqld could sometimes crash during a commit while trying to handle NDB Error 4028 Node failure caused abort of transaction. (Bug #38577)
Performing intensive inserts and deletes in parallel with a high scan load could cause data node crashes due to a failure in the DBACC kernel block. This was because the check for when to perform bucket splits or merges considered only the first 4 scans.
Local query handler information was not reported or written to the cluster log correctly. This issue is thought to have been introduced in MySQL Cluster NDB 7.0.10. (Bug #50467)
Trying to insert more rows than would fit into an
NDB table caused data nodes to crash. Now in
such situations, the insert fails gracefully with error 633
Table fragment hash index has reached maximum possible size.
ndbmtd was not built on Windows (CMake did not provide a build target for it). (Bug #49325)
the stated memory was not allocated when the node was started,
but rather only when the memory was used by the data node
process for other reasons.
When setting the
configuration parameter failed, only the error Failed
to memlock pages... was returned. Now in such cases
the operating system's error code is also returned.
The CREATE NODEGROUP client command in ndb_mgm could sometimes cause the forced shutdown of a data node.
Functionality Added or Changed
Added the ndb_mgmd
--nowait-nodes option, which
permits a cluster that is configured to use multiple management
servers to be started using fewer than the number configured.
This is most likely to be useful when a cluster is configured
with two management servers and you wish to start the cluster
using only one of them.
See ndb_mgmd — The MySQL Cluster Management Server Daemon, for more information. (Bug #48669)
Whether a system restart or a node restart is required when resetting that parameter;
Whether cluster nodes need to be restarted using the
--initial option when resetting the parameter.
This enhanced functionality is supported for upgrades from MySQL
Cluster NDB 6.3 when the
version is 6.3.29 or later.
(Bug #48528, Bug #49163)
The configuration check that each management server runs to verify that all connected ndb_mgmd processes have the same configuration could fail when a configuration change took place while this check was in progress. Now in such cases, the configuration check is rescheduled for a later time, after the change is complete. (Bug #48143)
Creating an NDB table with an excessive number of large columns caused the cluster to fail. Now, an attempt to create such a table is rejected with error 791 (Too many total bits in bitfields).
References: See also Bug #42047.
When evaluating table-level and database-level include and exclude options, the ndb_restore program overwrote the result of the database-level options with the result of the table-level options rather than merging these results together, sometimes leading to unexpected and unpredictable results.
As part of the fix for this problem, the semantics of these options have been clarified; because of this, the rules governing their evaluation have changed slightly. These changes can be summed up as follows:
--exclude-* options are now evaluated from right to left in the order in which they are passed to ndb_restore.
--exclude-* options are now cumulative.
In the event of a conflict, the first (rightmost) option takes precedence.
For more detailed information and examples, see ndb_restore — Restore a MySQL Cluster Backup. (Bug #48907)
In MySQL Cluster NDB 7.0, ndb_config and
ndb_error_reporter were printing warnings
about management and data nodes running on the same host to
stdout instead of stderr,
as was the case in earlier MySQL Cluster release series.
(Bug #44689, Bug #49160)
References: See also Bug #25941.
Node takeover during a system restart occurs when the REDO log for one or more data nodes is out of date, so that a node restart is invoked for that node or those nodes. If this happens while a mysqld process is attached to the cluster as an SQL node, the mysqld takes a global schema lock (a row lock), while trying to set up cluster-internal replication.
However, this setup process could fail, causing the global schema lock to be held for an excessive length of time, which made the node restart hang as well. As a result, the mysqld failed to set up cluster-internal replication, which led to tables being read only, and caused one node to hang during the restart.
This issue could actually occur in MySQL Cluster NDB 7.0 only, but the fix was also applied to MySQL Cluster NDB 6.3, to keep the two codebases in alignment.
When running many parallel scans, a local checkpoint (which performs a scan internally) could find itself not getting a scan record, which led to a data node crash. Now an extra scan record is reserved for this purpose, and a problem with obtaining the scan record returns an appropriate error (error code 489, Too many active scans). (Bug #48564)
When a long-running transaction lasting long enough to cause
Error 410 (REDO log files overloaded) was
later committed or rolled back, it could happen that
NDBCLUSTER was not able to release
the space used for the REDO log, so that the error condition persisted.
The most likely cause of such transactions is a bug in the application using MySQL Cluster. This fix should handle most cases where this might occur. (Bug #36500)
NDB stores blob column data in a
separate, hidden table that is not accessible from MySQL. If
this table was missing for some reason (such as accidental
deletion of the file corresponding to the hidden table) when
making a MySQL Cluster native backup, ndb_restore crashed when
attempting to restore the backup. Now in such cases, ndb_restore
fails with the error message Table table_name has blob column (column_name) with missing parts table in backup instead.
If the master data node receiving a request from a newly started API or data node for a node ID died before the request had been handled, the management server waited (and kept a mutex) until all handling of this node failure was complete before responding to any other connections, instead of responding to other connections as soon as it was informed of the node failure (that is, it waited until it had received a NF_COMPLETEREP signal rather than a NODE_FAILREP signal). One visible effect of this misbehavior was that it caused management client commands such as SHOW and ALL STATUS to respond with unnecessary slowness in such circumstances. (Bug #49207)
When performing tasks that generated large amounts of I/O (such as when using ndb_restore), an internal memory buffer could overflow, causing data nodes to fail with signal 6.
Subsequent analysis showed that this buffer was not actually required, so this fix removes it. (Bug #48861)
In some situations, when it was not possible for an SQL node to start a schema transaction (necessary, for instance, as part of certain DDL operations), NDBCLUSTER did not correctly indicate the error to the MySQL server, which led mysqld to crash.
During an LCP master takeover, when the newly elected master did
not receive a
COPY_GCI LCP protocol message
but other nodes participating in the local checkpoint had
received one, the new master could use an uninitialized
variable, which caused it to crash.
DROP DATABASE failed when there
were stale temporary
NDB tables in
the database. This situation could occur if
mysqld crashed during execution of a
DROP TABLE statement after the
table definition had been removed from
NDBCLUSTER but before the
.ndb file had been removed
from the crashed SQL node's data directory. Now, when executing DROP DATABASE, mysqld checks for these files and removes them if there are no corresponding table definitions found for them.
Exhaustion of send buffer memory or long signal memory caused data nodes to crash. Now an appropriate error message is provided instead when this situation occurs. (Bug #48852)
Starting a mysqld process with
--ndb-nodeid (either as
a command-line option or by assigning it a value in
my.cnf) caused the
mysqld to get only the corresponding
connection from the
[mysqld] section in the
config.ini file having the matching ID,
even when connection pooling was enabled (that is, when the mysqld process was started with a connection pool size greater than 1).
References: See also Bug #27644, Bug #38590, Bug #41592.
Attempting to create more than 11435 tables failed with Error 306 (Out of fragment records in DIH). (Bug #49156)
During a node restart, logging was enabled on a per-fragment
basis as the copying of each fragment was completed but local
checkpoints were not enabled until all fragments were copied,
making it possible to run out of redo log file space (NDB error code 410) before the restart was complete. Now logging is enabled only after all fragments have been copied, just prior to enabling local checkpoints.
The creation of an ordered index on a table undergoing DDL operations could cause a data node crash under certain timing-dependent conditions. (Bug #48604)
A data node crashing while restarting, followed by a system restart could lead to incorrect handling of redo log metadata, causing the system restart to fail with Error while reading REDO log. (Bug #48436)
When using very large transactions containing many inserts, ndbmtd could fail with Signal 11 without an easily detectable reason, due to an internal variable being uninitialized in the event that the relevant buffer became overloaded. Now, the variable is initialized in such cases, avoiding the crash, and an appropriate error message is provided.
References: See also Bug #46914.
ndb_config --configinfo now indicates that parameters belonging in the [SHM DEFAULT] sections of the config.ini file are deprecated or experimental, as appropriate.
Under certain conditions, accounting of the number of free scan records in the local query handler could be incorrect, so that during node recovery or local checkpoint operations, the LQH could find itself lacking a scan record that it expected to find, causing the node to crash. (Bug #48697)
References: See also Bug #48564.
Disk Data: In certain limited cases, it was possible when the cluster contained Disk Data tables for ndbmtd to crash during a system restart. (Bug #48498)
References: See also Bug #47832.
Disk Data: Repeatedly creating and then dropping Disk Data tables could eventually lead to data node failures. (Bug #45794, Bug #48910)
Disk Data: When running a write-intensive workload with a very large disk page buffer cache, CPU usage approached 100% during a local checkpoint of a cluster containing Disk Data tables. (Bug #49532)
When a crash occurs due to a problem in Disk Data code, the
currently active page list is printed to
stdout (that is, in one or more
files). One of these lists could contain an endless loop; this
caused a printout that was effectively never-ending. Now in such
cases, a maximum of 512 entries is printed from each list.
NDBCLUSTER failed to provide a valid error message when it failed to commit schema transactions during an initial start if the cluster was configured using the
configuration parameter. When this parameter was set to a non-existent path, the data nodes shut down with the generic error code 2341 (Internal program error). Now in such cases, the error reported is error 2815 (File not found).
When a DML operation failed due to a uniqueness violation on an
NDB table having more than one
unique index, it was difficult to determine which constraint
caused the failure; it was necessary to obtain an NdbError object and then decode its details property, which could lead to memory management issues in application code.
To help solve this problem, a new API method was added, providing a well-formatted string containing more precise information about the index that caused the unique constraint violation. The following additional changes are also made in the NDB API:
NdbError.details is now deprecated in favor of the new method.
An existing method has been modified to provide more information.
When using blobs, calling getBlobHandle() requires the full primary key to have been set, because getBlobHandle() must access the key when adding blob table operations. However, if getBlobHandle() was called without first setting all parts of the primary key, the application using it crashed. Now, an appropriate error code is returned instead.
(Bug #28116, Bug #48973)
Using a large number of small fragment log files could cause
NDBCLUSTER to crash while trying to
read them during a restart. This issue was first observed with
1024 fragment log files of 16 MB each.
When the combined length of all names of tables using the
NDB storage engine was greater than
or equal to 1024 bytes, issuing the
BACKUP command in the ndb_mgm
client caused the cluster to crash.
Functionality Added or Changed
Performance: Significant improvements in redo log handling and other file system operations can yield a considerable reduction in the time required for restarts. While actual restart times observed in a production setting will naturally vary according to database size, hardware, and other conditions, our own preliminary testing shows that these optimizations can yield startup times that are faster than those typical of previous MySQL Cluster releases by a factor of 50 or more.
The --with-ndb-port-base option for configure did not function correctly, and has been deprecated. Attempting to use this option produces the warning Ignoring deprecated option --with-ndb-port-base.
Beginning with MySQL Cluster NDB 7.1.0, the deprecation warning
itself is removed, and the
option is simply handled as an unknown and invalid option if you
try to use it.
References: See also Bug #38502.
SHOW CREATE TABLE did not display the AUTO_INCREMENT value for NDB tables having AUTO_INCREMENT columns.
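A brief illustration (hypothetical table) of where the previously missing value appears:

CREATE TABLE t_ai (
  id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  name VARCHAR(32)
) ENGINE=NDB;

INSERT INTO t_ai (name) VALUES ('a'), ('b'), ('c');

-- With the fix, the table options in the output include the current
-- AUTO_INCREMENT counter, for example: ENGINE=ndbcluster AUTO_INCREMENT=4 ...
SHOW CREATE TABLE t_ai;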
Numeric configuration parameters set in
my.cnf were interpreted as signed rather
than unsigned values. The effect of this was that values of 2G
or more were truncated with the warning [MgmSrvr]
Warning: option '
opt_value adjusted to
2147483647. Now such parameter values are treated as
unsigned, so that this truncation does not take place.
This issue did not affect parameters set in config.ini.
If a node failed while sending a fragmented long signal, the receiving node did not free long signal assembly resources that it had allocated for the fragments of the long signal that had already been received. (Bug #44607)
configure failed to honor the
--with-zlib-dir option when trying to build
MySQL Cluster from source.
For IGNORE statements, batching of updates is now disabled. This is because such statements failed when batching of updates was employed if any updates violated a unique constraint, due to the fact that a unique constraint violation could not be handled without aborting the transaction.
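A sketch of the kind of statement affected (hypothetical table with a unique key):

CREATE TABLE t_ig (
  id INT NOT NULL PRIMARY KEY,
  u INT NOT NULL,
  UNIQUE KEY uk_u (u)
) ENGINE=NDB;

INSERT INTO t_ig VALUES (1, 10), (2, 20);

-- This update would make both rows collide on the unique key uk_u. With
-- IGNORE, the violating update must be skipped rather than aborting the
-- whole transaction, which is why batching of updates is disabled for
-- such statements.
UPDATE IGNORE t_ig SET u = 10;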
ndb_mgmd failed to close client connections that had timed out. After running for some time, a race condition could develop in the management server, due to ndb_mgmd having exhausted all of its file descriptors in this fashion. (Bug #45497)
References: See also Bug #47712.
Setting FragmentLogFileSize to a value greater than 256 MB led to errors when trying to read the redo log file.
In some cases, ndbmtd could allocate more
space for the undo buffer than was actually available, leading
to a failure in the
LGMAN kernel block and
subsequent failure of the data node.
When a copying operation exhausted the available space on a data
node while copying large
columns, this could lead to failure of the data node and a
Table is full error on the SQL node which
was executing the operation. Examples of such operations included an ALTER TABLE statement that changed an INT column to a BLOB column, or a bulk insert of BLOB data that failed due to running out of space or to a duplicate key error.
(Bug #34583, Bug #48040)
References: See also Bug #41674, Bug #45768.
An optimization in MySQL Cluster NDB 7.0 causes the
DBDICT kernel block to copy several tables at
a time when synchronizing the data dictionary to a newly started
node; previously, this was done one table at a time. However, if the NDB tables were sufficiently large and numerous, the internal buffer for storing them could fill up, causing a data node crash.
In testing, it was found that having 100
tables with 128 columns each was enough to trigger this issue.
A very small race-condition between
LQH_TRANSREQ signals when handling node
failure could lead to operations (locks) not being taken over
when they should have been, and subsequently becoming stale.
This could lead to node restart failures, and applications
getting into endless lock-conflicts with operations that were
not released until the node was restarted.
References: See also Bug #41297.
Under some circumstances, when a scan encountered an error early
in processing by the
DBTC kernel block (see
DBTC Block), a node
could crash as a result. Such errors could be caused by
applications sending incorrect data, or, more rarely, by a
DROP TABLE operation executed in
parallel with a scan.
During an upgrade, newer nodes (the NDB kernel block DBTUP) could in some cases try to use the long signal format for communication with older nodes (the DBUTIL kernel block) that did not understand the newer format, causing older data nodes to fail.
Performing a system restart of the cluster after having performed a table reorganization which added partitions caused the cluster to become inconsistent, possibly leading to a forced shutdown, in either of the following cases:
When a local checkpoint was in progress but had not yet
completed, new partitions were not restored; that is, data
that was supposed to be moved could be lost instead, leading
to an inconsistent cluster. This was due to an issue whereby the DBDIH kernel block did not save the new table definition and instead used the old one (the version having fewer partitions).
When the most recent LCP had completed, ordered indexes and unlogged tables were still not saved (since these did not participate in the LCP). In this case, the cluster crashed during a subsequent system restart, due to the inconsistency between the main table and the ordered index.
Now, DBDIH is forced in such cases to use the version of the table definition held by the DBDICT kernel block, which was (already) correct and up to date.
When starting a cluster with a great many tables, it was possible for MySQL client connections as well as the slave SQL thread to issue DML statements against MySQL Cluster tables before mysqld had finished connecting to the cluster and making all tables writeable. This resulted in Table ... is read only errors for clients and the Slave SQL thread.
This issue is fixed by introducing the
--ndb-wait-setup option for the
MySQL server. This provides a configurable maximum amount of
time that mysqld waits for all
NDB tables to become writeable,
before enabling MySQL clients or the slave SQL thread to access NDB tables.
References: See also Bug #46955.
After changing the value of a configuration parameter to 4294967039 (the maximum) in the config.ini file and reloading the cluster configuration, the new value was displayed in the update information written into the cluster log as a signed number instead of unsigned.
References: See also Bug #47932.
On Solaris 10 for SPARC, ndb_mgmd failed to read the config.ini file when the DiskSyncSize configuration parameter, whose permitted range of values is 32768 to 4294967039, was set equal to 4294967040 (which is also the value of the internal constant MAX_INT_RNIL); nor could DiskSyncSize be set successfully any higher than the minimum value.
References: See also Bug #47944.
In certain cases, performing very large inserts on
NDB tables when using
ndbmtd caused the memory allocations for
ordered or unique indexes (or both) to be exceeded. This could
cause aborted transactions and possibly lead to data node failures.
Starting a data node with a very large amount of memory (approximately 90G or more) could lead to a crash of the node due to job buffer congestion.
ndbd was not built correctly when compiled using gcc 4.4.0. (The ndbd binary was built, but could not be started.) (Bug #46113)
In some cases, the MySQL Server tried to use an error status whose value had never been set. The problem in the code that caused this was visible only when using debug builds of mysqld. This fix brings MySQL Cluster's error handling in hostname.cc in line with what is implemented in MySQL 5.4.
After upgrading a MySQL Cluster containing tables having unique indexes from an NDB 6.3 release to an NDB 7.0 release, attempts to create new unique indexes failed with inconsistent trigger errors (error code 293).
For more information (including a workaround for previous MySQL Cluster NDB 7.0 releases), see Upgrade and downgrade compatibility: MySQL Cluster NDB 7.x. (Bug #48416)
When the MySQL server SQL mode included certain settings, engine warnings and error codes specific to NDB were returned when errors occurred, instead of the MySQL server errors and error codes expected by some programming APIs (such as Connector/J) and applications.
A table that was created following an upgrade from a MySQL
Cluster NDB 6.3 release to MySQL Cluster NDB 7.0 (starting with
version 6.4.0) or later was dropped by a system restart. This
was due to a change in the format of
schema files and the fact that the upgrade of the format of
NDB 6.3 schema files to the
NDB 7.0 format failed to change the version
number contained in the file; this meant that a system restart
re-ran the upgrade routine, which interpreted the newly created
table as an uncommitted table (which by definition ought not to
be saved). Now the version number of upgraded
NDB 6.3 schema files is set correctly during
the upgrade process.
For example, consider the table created and populated using these statements:
CREATE TABLE t1 (
  c1 INT NOT NULL,
  c2 INT NOT NULL,
  PRIMARY KEY(c1),
  KEY(c2)
) ENGINE = NDB;

INSERT INTO t1 VALUES(1, 1);
The following UPDATE statements did not change any rows, but each still matched a row; this was reported incorrectly in both cases, as shown here:

mysql> UPDATE t1 SET c2 = 1 WHERE c1 = 1;
Query OK, 0 rows affected (0.00 sec)
Rows matched: 0  Changed: 0  Warnings: 0

mysql> UPDATE t1 SET c1 = 1 WHERE c2 = 1;
Query OK, 0 rows affected (0.00 sec)
Rows matched: 0  Changed: 0  Warnings: 0
Now in such cases, the number of rows matched is correct. (In
the case of each of the example
UPDATE statements just shown,
this is displayed as Rows matched: 1, as it should be.)
This issue could affect
statements involving any indexed columns in
NDB tables, regardless of the type
of index (including
PRIMARY KEY) or the number
of columns covered by the index.
On Solaris, shutting down a management node failed when issuing the command to do so from a client connected to a different management node. (Bug #47948)
When building MySQL Cluster, it was possible to specify the --with-ndb-port option for configure without supplying a port number. Now in such cases, configure fails with an error.
References: See also Bug #47941.
When a data node failed to start due to inability to recreate or drop objects during schema restoration (for example, insufficient memory being available to the data node process on account of issues not directly related to MySQL Cluster on the host machine), the reason for the failure was not provided. Now in such cases, a more informative error message is logged. (Bug #48232)
When starting a node and synchronizing tables, memory pages were allocated even for empty fragments. In certain situations, this could lead to insufficient memory. (Bug #47782)
Disk Data: Multi-threaded data nodes could in some cases attempt to access the same memory structure in parallel, in a non-safe manner. This could result in data node failure when running ndbmtd while using Disk Data tables. (Bug #44195)
References: See also Bug #46507.
Disk Data: A local checkpoint of an empty fragment could cause a crash during a system restart which was based on that LCP. (Bug #47832)
References: See also Bug #41915.
The error handling shown in the example file
ndbapi_scan.cpp included with the MySQL
Cluster distribution was incorrect.
Cluster API: If an NDB API program reads the same column more than once, it is possible to exceed the maximum permissible message size, in which case the operation should be aborted due to NDB error 880 Tried to read too much - too many getValue calls. However, due to a change introduced in MySQL Cluster NDB 6.3.18, the check for this was not done correctly, which instead caused a data node crash. (Bug #48266)
Cluster API: A duplicate read of a column caused NDB API applications to crash. (Bug #45282)
Certain NDB API methods formerly had both const and non-const variants. The const versions of these methods have been removed. In addition, one of these methods has been re-implemented to provide consistent internal behavior.
The disconnection of an API or SQL node having a node ID greater than 49 caused a forced shutdown of the cluster. (Bug #47844)
The error message text for
error code 410 (REDO log files
overloaded...) was truncated.
Functionality Added or Changed
--config-dir is now accepted by
ndb_mgmd as an alias for the
A new option was added for ndb_mgmd. This option can be used to provide a name for the current node and then to identify it in messages written to the cluster log. For more information, see ndb_mgmd — The MySQL Cluster Management Server Daemon.
Two new columns have been added to the output of
ndb_desc to make it possible to determine how
much of the disk space allocated to a given table or fragment
remains free. (This information is not available from the FILES table, which applies only to Disk Data files.) For more information, see
ndb_desc — Describe NDB Tables.
Previously, the MySQL Cluster management node and data node
programs, when run on Windows platforms, required the
--nodaemon option to produce console output.
Now, these programs run in the foreground when invoked from the
command line on Windows, which is the same behavior that
mysqld.exe displays on Windows.
Clients attempting to connect to the cluster during shutdown could sometimes cause the management server to crash. (Bug #47325)
When reloading the management server configuration, only the last changed parameter was logged. (Bug #47036)
During an online alter table operation, the new table definition was made available for users during the prepare-phase when it should only be exposed during and after a commit. This issue could affect NDB API applications, mysqld processes, or data node processes. (Bug #47375)
The default value for the
configuration parameter, unlike other MySQL Cluster
configuration parameters, was not set in
A variable was left uninitialized while a data node copied data from its peers as part of its startup routine; if the starting node died during this phase, this could lead to a crash of the cluster when the node was later restarted. (Bug #47505)
Using triggers on NDB tables caused the relevant system variable to be treated as having the NDB kernel's internal default value (32), and the value for this variable as set on the cluster's SQL nodes to be ignored.
Signals from a failed API node could be received after an
API_FAILREQ signal (see
Operations and Signals)
has been received from that node, which could result in invalid
states for processing subsequent signals. Now, all pending
signals from a failing API node are processed before any
API_FAILREQ signal is received.
References: See also Bug #44607.
Aborting an online add column operation (for example, due to resource problems on a single data node, but not others) could lead to a forced node shutdown. (Bug #47364)
Handling of LQH_TRANS_REQ signals was done incorrectly by DBLQH when the transaction coordinator failed during a session. This led to incorrect handling of multiple node failures, particularly when using ndbmtd.
The following issues with error logs generated by ndbmtd were addressed:
The version string was sometimes truncated, or even not shown, depending on the number of threads in use (the more threads, the worse the problem). Now the version string is shown in full, as well as the file names for all tracefiles (where available).
In the event of a crash, the thread number of the thread that crashed was not printed. Now this information is supplied, if available.
When an instance of the
handler was recycled (this can happen due to table definition
cache pressure or to operations such as
FLUSH TABLES or
ALTER TABLE), if the last row
read contained blobs of zero length, the buffer was not freed,
even though the reference to it was lost. This resulted in a memory leak.
For example, consider the table defined and populated as shown here:
CREATE TABLE t (a INT PRIMARY KEY, b LONGTEXT) ENGINE=NDB;
INSERT INTO t VALUES (1, REPEAT('F', 20000));
INSERT INTO t VALUES (2, '');

SELECT a, length(b) FROM t ORDER BY a;
FLUSH TABLES;
Prior to the fix, this resulted in a memory leak proportional to
the size of the stored
each time these two statements were executed.
References: See also Bug #47572, Bug #47574.
On Windows, ndbd
--initial could hang in
an endless loop while attempting to remove directories.
On Mac OS X 10.5, commands entered in the management client
failed and sometimes caused the client to hang, although
management client commands invoked using the
-e) option from the system shell worked correctly.
For example, the following command failed with an error and hung until killed manually, as shown here:
SHOW
Warning, event thread startup failed, degraded printouts as result, errno=36
However, the same management client command, invoked from the system shell as shown here, worked correctly:
ndb_mgm -e "SHOW"
References: See also Bug #34438.
The NDB kernel's parser (in
ndb/src/common/util/Parser.cpp) did not
interpret the backslash (“
When a data node received a
signal from the master before that node had received a
NODE_FAILREP, a race condition could occur.
References: See also Bug #25364, Bug #28717.
Some joins on large
BLOB columns could cause
mysqld processes to leak memory. The joins
did not need to reference the
BLOB columns directly for this
issue to occur.
When started with the
--reload options, if
ndb_mgmd could not find a configuration file
or connect to another management server, it appeared to hang.
Now, when trying to fetch its configuration from another
management node, ndb_mgmd checks and signals
Trying to get configuration from other
mgmd(s)) every 30 seconds that it has not yet done so.
References: See also Bug #45495.
An insert on an
NDB table was not
always flushed properly before performing a scan. One way in
which this issue could manifest was that
LAST_INSERT_ID() sometimes failed
to return correct values when using a trigger on an NDB table.
When using ndbmtd, a parallel
DROP TABLE operation could cause
data nodes to have different views of which tables should be
included in local checkpoints; this discrepancy could lead to a
node failure during an LCP.
Now, when started with
ndb_mgmd tries to connect to and to copy the
configuration of an existing ndb_mgmd process
with a confirmed configuration. This works only if another
management server is found, and the configuration files used by
both management nodes are exactly the same.
If no other management server is found, the local configuration
file is read and used. With this change, it is now necessary
when performing a rolling restart of a MySQL Cluster having
multiple management nodes, to stop all
ndb_mgmd processes, and when restarting them,
to start the first of these with the
--initial option (or both
options), and then to start any remaining management nodes
without using either of these two options. For more information,
see Performing a Rolling Restart of a MySQL Cluster.
(Bug #45495, Bug #46488, Bug #11753966, Bug #11754823)
References: See also Bug #42015, Bug #11751233.
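The management node part of such a rolling restart might look like the following sketch (the configuration file path is illustrative; only the ndb_mgmd steps and only the --initial option named explicitly above are shown):
# After stopping all ndb_mgmd processes:
ndb_mgmd -f /usr/local/mysql/cluster-config/config.ini --initial
# Remaining management nodes, started without --initial or --reload:
ndb_mgmd -f /usr/local/mysql/cluster-config/config.ini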
For very large values of
overflow when creating large numbers of tables, leading to
NDB error 773 (Out of
string memory, please modify StringMemory config
parameter), even when
StringMemory was set to
100 (100 percent).
Large transactions involving joins between tables containing
BLOB columns used excessive memory.
References: See also Bug #47573, Bug #47574.
If an NDB table had an
TABLE operation performed on it in a MySQL Cluster
running a MySQL Cluster NDB 6.3.x release, it could not be
upgraded online to a MySQL Cluster NDB 7.0.x release. This issue
was detected using MySQL Cluster NDB 6.3.20, but is likely to
affect any MySQL Cluster NDB 6.3.x release supporting online DDL
When a data node restarts, it first runs the redo log until
reaching the latest restorable global checkpoint; after this it
scans the remainder of the redo log file, searching for entries
that should be invalidated so they are not used in any
subsequent restarts. (It is possible, for example, if restoring
GCI number 25, that there might be entries belonging to GCI 26
in the redo log.) However, under certain rare conditions, during
the invalidation process, the redo log files themselves were not
always closed while scanning ahead in the redo log. In rare
cases, this could lead to
exceeded, causing the data node to crash.
When using multi-threaded data nodes (ndbmtd)
NoOfReplicas set to
a value greater than 2, attempting to restart any of the data
nodes caused a forced shutdown of the entire cluster.
mysqld allocated an excessively large buffer
BLOB values due to
overestimating their size. (For each row, enough space was
allocated to accommodate every
TEXT column value in the result
set.) This could adversely affect performance when using tables
TEXT columns; in a few extreme
cases, this issue could also cause the host system to run out of memory.
References: See also Bug #47572, Bug #47573.
For multi-threaded data nodes, insufficient fragment records
were allocated in the
DBDIH NDB kernel block,
which could lead to error 306 when creating many tables; the
number of fragment records allocated did not take into account
the number of LQH instances.
Running ndb_restore with the
could cause it to crash.
(Bug #40428, Bug #33040)
The size of the table descriptor pool used in the
DBTUP kernel block was incorrect. This could
lead to a data node crash when an LQH sent a
References: See also Bug #44908.
When performing auto-discovery of tables on individual SQL nodes,
NDBCLUSTER attempted to overwrite
files and corrupted them.
In the mysql client, create a new table (for example,
t2) with the same definition as the corrupted table (for example,
t1). Use your system shell or file
manager to rename the old
.MYD file to
the new file name (for example, mv t1.MYD
t2.MYD). In the mysql client,
repair the new table, drop the old one, and rename the new
table using the old file name (for example,
RENAME TABLE t2 TO t1).
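Assuming the table names used in this entry (t1 is the corrupted table, t2 the new copy), the steps just described might look like the following sketch; the column definitions are illustrative only:
CREATE TABLE t2 (a INT PRIMARY KEY, b VARCHAR(20));  # in the mysql client, same definition as t1
mv t1.MYD t2.MYD                                     # in the system shell
REPAIR TABLE t2;                                     # back in the mysql client
DROP TABLE t1;
RENAME TABLE t2 TO t1;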
Disk Data: Calculation of free space for Disk Data table fragments was sometimes done incorrectly. This could lead to unnecessary allocation of new extents even when sufficient space was available in existing ones for inserted data. In some cases, this might also lead to crashes when restarting data nodes.
This miscalculation was not reflected in the contents of the
as it applied to extents allocated to a fragment, and not to a
In some circumstances, if an API node encountered a data node
failure between the creation of a transaction and the start of a
scan using that transaction, then any subsequent calls to
closeTransaction() could cause the same
transaction to be started and closed repeatedly.
Performing multiple operations using the same primary key within
could lead to a data node crash.
This fix does not change the fact that performing
multiple operations using the same primary key within the same
not supported; because there is no way to determine the order
of such operations, the result of such combined operations
References: See also Bug #44015.
Functionality Added or Changed
The default value of the
node configuration parameter has changed from 8 to 2.
A new option
--exclude-missing-columns has been
added for the ndb_restore program. In the
event that any tables in the database or databases being
restored to have fewer columns than the same-named tables in the
backup, the extra columns in the backup's version of the
tables are ignored. For more information, see
ndb_restore — Restore a MySQL Cluster Backup.
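For example (the node ID, backup ID, and other options shown here are illustrative):
ndb_restore --nodeid=1 --backupid=1 --restore-data --exclude-missing-columns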
On Solaris platforms, the MySQL Cluster management server and
NDB API applications now use
as the default clock.
Formerly, node IDs were represented in the cluster log using a complex hexadecimal/binary encoding scheme. Now, node IDs are reported in the cluster log using numbers in standard decimal notation. (Bug #44248)
Previously, it was possible to disable arbitration only by setting
ArbitrationRank to 0 on all
management and API nodes. A new data node configuration parameter
simplifies this task; to disable arbitration, you can now use
Arbitration = Disabled in the
[ndbd default] section of the config.ini file.
It is now also possible to configure arbitration in such a way
that the cluster waits until the time determined by
passes for an external manager to perform arbitration instead of
handling it internally. This can be done by setting
Arbitration = WaitExternal in the
[ndbd default] section of the config.ini file.
The default value for the Arbitration parameter is
Default, which permits arbitration to proceed
normally, as determined by the
ArbitrationRank settings for the management
and API nodes.
For more information, see Defining MySQL Cluster Data Nodes.
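A minimal config.ini fragment using the setting described above might look like this (all other parameters omitted); substituting WaitExternal for Disabled selects the externally managed behavior instead:
[ndbd default]
Arbitration = Disabled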
This issue, originally resolved in MySQL 5.1.16, re-occurred due to a later (unrelated) change. The fix has been re-applied.
The pkg installer for MySQL Cluster on
Solaris did not perform a complete installation due to an
invalid directory reference in the postinstall script.
Problems could arise when using
whose size was greater than 341 characters and which used the
utf8_unicode_ci collation. In some cases,
this combination of conditions could cause certain queries and
OPTIMIZE TABLE statements to
The signals used by ndb_restore to send progress information about backups to the cluster log accessed the cluster transporter without using any locks. Because of this, it was theoretically possible that these signals could be interfered with by heartbeat signals if both were sent at the same time, causing the ndb_restore messages to be corrupted. (Bug #45646)
Killing MySQL Cluster nodes immediately following a local checkpoint could lead to a crash of the cluster when later attempting to perform a system restart.
The exact sequence of events causing this issue was as follows:
Local checkpoint occurs.
Immediately following the LCP, kill the master data node.
Kill the remaining data nodes within a few seconds of killing the master.
Attempt to restart the cluster.
Debugging code causing ndbd to use file compression on NTFS file systems failed with an error. (The code was removed.) This issue affected debug builds of MySQL Cluster on Windows platforms only. (Bug #44418)
DML statements run during an upgrade from MySQL Cluster NDB 6.3 to NDB 7.0 were not handled correctly. (Bug #45917)
configuration parameter for API nodes (including SQL nodes) has
been added. This is intended to prevent API nodes from re-using
allocated node IDs during cluster restarts. For more
information, see Defining SQL and Other API Nodes in a MySQL Cluster.
Ending a line in the
config.ini file with
an extra semicolon character (;) caused
reading the file to fail with a parsing error.
Creating an index when the cluster had run out of table records could cause data nodes to crash. (Bug #46295)
During a global checkpoint, LQH threads could run unevenly, causing a circular buffer overflow by the Subscription Manager, which led to data node failure. (Bug #46782)
References: See also Bug #46123, Bug #46723, Bug #45612.
The warning message Possible bug in Dbdih::execBLOCK_COMMIT_ORD ... could sometimes appear in the cluster log. This warning is obsolete, and has been removed. (Bug #44563)
REORGANIZE PARTITION could fail with Error 741
(Unsupported alter table) if the
appropriate hash-map was not present. This could occur when
adding nodes online; for example, when going from 2 data nodes
to 3 data nodes with
NoOfReplicas=1, or from
4 data nodes to 6 data nodes with NoOfReplicas=2.
GCP STOP event was written to
the cluster log as an
INFO event. Now it is
logged as a
WARNING event instead.
When combining an index scan and a delete with a primary key delete, the index scan and delete failed to initialize a flag properly. This could in rare circumstances cause a data node to crash. (Bug #46069)
Following an upgrade from MySQL Cluster NDB 6.3.x to MySQL Cluster NDB 7.0.6, DDL and backup operations failed. (Bug #46494, Bug #46563)
Due to changes in the way that
NDBCLUSTER handles schema changes
(implementation of schema transactions) in MySQL Cluster NDB
7.0, it was not possible to create MySQL Cluster tables having
more than 16 indexes using a single CREATE TABLE statement.
This issue occurs only in MySQL Cluster NDB 7.0 releases prior to 7.0.7 (including releases numbered NDB 6.4.x).
If you are not yet able to upgrade from an earlier MySQL Cluster
NDB 7.0 release, you can work around this problem by creating
the table without any indexes, then adding the indexes using a
CREATE INDEX statement
for each index.
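For example, the workaround might look like the following (table, column, and index names are illustrative):
CREATE TABLE t (c1 INT PRIMARY KEY, c2 INT, c3 INT) ENGINE=NDBCLUSTER;
CREATE INDEX i2 ON t (c2);
CREATE INDEX i3 ON t (c3);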
When using multi-threaded data node processes
(ndbmtd), it was possible for LQH threads to
continue running even after all
tables had been dropped. This meant that dropping the last
NDB table during a local
checkpoint could cause multi-threaded data nodes to fail.
Full table scans failed to execute when the cluster contained more than 21 table fragments.
The number of table fragments in the cluster can be calculated
as the number of data nodes, times 8 (that is, times the value
of the internal constant
MAX_FRAG_PER_NODE), divided by the number
of replicas. Thus, when
NoOfReplicas = 1 at
least 3 data nodes were required to trigger this issue, and
NoOfReplicas = 2 at least 4 data nodes
were required to do so.
Restarting the cluster following a local checkpoint and an
ALTER TABLE on a non-empty
table caused data nodes to crash.
On Windows, the internal
basestring_vsprintf() function did not return
a POSIX-compliant value as expected, causing the management
server to crash when trying to start a MySQL Cluster with more
than 4 data nodes.
did not build the BaseString-t test program
for Windows as the equivalent
does when building MySQL Cluster on Unix platforms.
If the cluster crashed during the execution of a
CREATE LOGFILE GROUP statement,
the cluster could not be restarted afterward.
References: See also Bug #34102.
A combination of index creation and drop operations (or creating and dropping tables having indexes) with node and system restarts could lead to a crash. (Bug #46552)
Partitioning; Disk Data:
An NDB table created with a very
large value for the
could—if this table was dropped and a new table with fewer
partitions, but having the same table ID, was
created—cause ndbd to crash when
performing a system restart. This was because the server
attempted to examine each partition whether or not it actually existed.
References: See also Bug #58638.
If the value set in the
config.ini file for
was identical to the value set for
parameter was ignored when starting the data node with the
--initial option. As a result, the
Disk Data files in the corresponding directory were not removed
when performing an initial start of the affected data node or
Functionality Added or Changed
The ndb_config utility program can now
provide an offline dump of all MySQL Cluster configuration
parameters including information such as default and permitted
values, brief description, and applicable section of the
config.ini file. A dump in text format is
produced when running ndb_config with the new
--configinfo option, and in XML format when the
--configinfo and --xml options are used together.
For more information and examples, see
ndb_config — Extract MySQL Cluster Configuration Information.
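For example (output not reproduced here):
ndb_config --configinfo
ndb_config --configinfo --xml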
Important Change; Partitioning:
User-defined partitioning of an
NDBCLUSTER table without any
primary key sometimes failed, and could cause
mysqld to crash.
Now, if you wish to create an
NDBCLUSTER table with user-defined
partitioning, the table must have an explicit primary key, and
all columns listed in the partitioning expression must be part
of the primary key. The hidden primary key used by the
NDBCLUSTER storage engine is not
sufficient for this purpose. However, if the list of columns is
empty (that is, the table is defined using
[LINEAR] KEY()), then no explicit primary key is required.
This change does not affect the partitioning of tables using any
storage engine other than NDBCLUSTER.
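A sketch of a table definition satisfying this requirement is shown here (all names are illustrative); note that the partitioning column c2 is part of the explicit primary key:
CREATE TABLE tp (
    c1 INT NOT NULL,
    c2 INT NOT NULL,
    PRIMARY KEY (c1, c2)
) ENGINE=NDBCLUSTER
PARTITION BY KEY (c2);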
Important Note: It was not possible to perform an online upgrade from any MySQL Cluster NDB 6.x release to MySQL Cluster NDB 7.0.5 or any earlier MySQL Cluster NDB 7.0 release.
With this fix, it is possible in MySQL Cluster NDB 7.0.6 and later to perform online upgrades from MySQL Cluster NDB 6.3.8 and later MySQL Cluster NDB 6.3 releases, or from MySQL Cluster NDB 7.0.5 or later MySQL Cluster NDB 7.0 releases. Online upgrades to MySQL Cluster NDB 7.0 releases previous to MySQL Cluster NDB 7.0.6 from earlier MySQL Cluster releases remain unsupported; online upgrades from MySQL Cluster NDB 7.0 releases previous to MySQL Cluster NDB 7.0.5 (including NDB 6.4.x beta releases) to later MySQL Cluster NDB 7.0 releases also remain unsupported. (Bug #44294)
In some cases, data node restarts during a system restart could fail due to insufficient redo log space. (Bug #43156)
When trying to use a data node with an older version of the management server, the data node crashed on startup. (Bug #43699)
A number of incorrectly formatted output strings in the source code caused compiler warnings. (Bug #43878)
An NDB internal timing function did not work correctly on Windows and could cause mysqld to fail on some AMD processors, or when running inside a virtual machine. (Bug #44276)
ndb_restore failed when trying to restore data on a big-endian machine from a backup file created on a little-endian machine. (Bug #44069)
When restarting a data node, management and API nodes reconnecting to it failed to re-use existing ports that had already been dynamically allocated for communications with that data node. (Bug #44866)
SSL connections to SQL nodes failed on big-endian platforms. (Bug #44295)
NDBCLUSTER did not build correctly
on Solaris 9 platforms.
References: See also Bug #39036, Bug #39038.
When a data node was down so long that its most recent local checkpoint depended on a global checkpoint that was no longer restorable, it was possible for it to be unable to use optimized node recovery when being restarted later. (Bug #44844)
References: See also Bug #26913.
When using large numbers of configuration parameters, the management server took an excessive amount of time (several minutes or more) to load these from the configuration cache when starting. This problem occurred when there were more than 32 configuration parameters specified, and became progressively worse with each additional multiple of 32 configuration parameters. (Bug #44488)
__builtin_expect() had the side effect
that compiler warnings about misuse of
(assignment) instead of
== in comparisons
were lost when building in debug mode. This is no longer
employed when configuring the build with the
References: See also Bug #44567.
Building the MySQL Cluster NDB 7.0 tree failed when using the icc compiler. (Bug #44310)
did not output any entries for the
parameter. In addition, the default listed for
MaxNoOfFiles was outside the permitted range
References: See also Bug #44685, Bug #44746.
The output of ndb_config
did not provide information about all sections of the
References: See also Bug #44746, Bug #44749.
When a data node had written its GCI marker to the first page of a megabyte, and that node was later killed during restart after having processed that page (marker) but before completing a LCP, the data node could fail with file system errors. (Bug #44952)
References: See also Bug #42564, Bug #44291.
Inspection of the code revealed that several assignment
operators (=) were used in place of
comparison operators (==).
References: See also Bug #44570.
An internal NDB API buffer was not properly initialized. (Bug #44977)
Repeated starting and stopping of data nodes could cause ndb_mgmd to fail. This issue was observed on Solaris/SPARC. (Bug #43974)
Signals providing node state information
CHANGE_NODE_STATE_REQ) were not propagated to
all blocks of ndbmtd. This could cause the following issues:
Inconsistent redo logs when performing a graceful shutdown;
Data nodes crashing when later restarting the cluster, data nodes needing to perform node recovery during the system restart, or both.
References: See also Bug #42564.
When ndb_config could not find the file
referenced by the
--config-file option, it
tried to read
my.cnf instead, then failed
with a misleading error message.
It was possible for NDB API applications to insert corrupt data into the database, which could subsequently lead to data node crashes. Now, stricter checking is enforced on input data for inserts and updates. (Bug #44132)
Online upgrades to MySQL Cluster NDB 7.0 from a MySQL Cluster NDB 6.3 release could fail due to changes in the handling of key lengths and unique indexes during node recovery. (Bug #44827)
It was theoretically possible for the value of a nonexistent
column to be read as
NULL, rather than
causing an error.
The output of ndbd
did not provide clear information about the program's
Disk Data: During a checkpoint, restore points are created for both the on-disk and in-memory parts of a Disk Data table. Under certain rare conditions, the in-memory restore point could include or exclude a row that should have been in the snapshot. This would later lead to a crash during or following recovery.
This issue was somewhat more likely to be encountered when using ndbmtd. (Bug #41915)
References: See also Bug #47832.
Disk Data: This fix supersedes and improves on an earlier fix made for this bug in MySQL 5.1.18. (Bug #24521)
Functionality Added or Changed
Two new server status variables
Ndb_scan_count gives the
number of scans executed since the cluster was last started.
the number of scans for which
NDBCLUSTER was able to use
partition pruning. Together, these variables can be used to help
determine in the MySQL server whether table scans are pruned by NDBCLUSTER.
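For example, the Ndb_scan_count counter named above can be checked from an SQL node as follows:
SHOW GLOBAL STATUS LIKE 'Ndb_scan_count';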
Important Note: Due to problem discovered after the code freeze for this release, it is not possible to perform an online upgrade from any MySQL Cluster NDB 6.x release to MySQL Cluster NDB 7.0.5 or any earlier MySQL Cluster NDB 7.0 release.
This issue is fixed in MySQL Cluster NDB 7.0.6 and later for upgrades from MySQL Cluster NDB 6.3.8 and later MySQL Cluster NDB 6.3 releases, or from MySQL Cluster NDB 7.0.5. (Bug #44294)
Cluster Replication: If a data node failed during an event creation operation, there was a slight risk that a surviving data node could send an invalid table reference back to NDB, causing the operation to fail with a false Error 723 (No such table). This could take place when a data node failed as a mysqld process was setting up MySQL Cluster Replication. (Bug #43754)
Cluster API: The following issues occurred when performing an online (rolling) upgrade of a cluster to a version of MySQL Cluster that supports configuration caching from a version that does not:
When using multiple management servers, after upgrading and restarting one ndb_mgmd, any remaining management servers using the previous version of ndb_mgmd could not synchronize their configuration data.
The MGM API
function failed to obtain configuration data.
values less than 100 were treated as 100. This could cause scans
to time out unexpectedly.
ndb_restore crashed when trying to restore a backup made to a MySQL Cluster running on a platform having different endianness from that on which the original backup was taken. (Bug #39540)
ndberror.c contained a C++-style
comment, which caused builds to fail with some C compilers.
If the number of fragments per table rises above a certain
DBDIH kernel block's
on-disk table-definition grows large enough to occupy 2 pages.
However, in MySQL Cluster NDB 7.0 (including MySQL Cluster NDB
6.4 releases), only 1 page was actually written, causing table
definitions stored on disk to be incomplete.
This issue was not observed in MySQL Cluster release series prior to MySQL Cluster NDB 7.0. (Bug #44135)
The setting for
ignored. This issue was only known to occur in MySQL Cluster NDB
6.4.3 and MySQL Cluster NDB 7.0.4.
When a data node process had been killed after allocating a node ID, but before making contact with any other data node processes, it was not possible to restart it due to a node ID allocation failure.
This issue could affect either ndbd or ndbmtd processes. (Bug #43224)
References: This bug was introduced by Bug #42973.
PID files for the data and management node daemons were not removed following a normal shutdown. (Bug #37225)
A race condition could occur when a data node failed to restart just before being included in the next global checkpoint. This could cause other data nodes to fail. (Bug #43888)
When aborting an operation involving both an insert and a delete, the insert and delete were aborted separately. This was because the transaction coordinator did not know that the operations affected the same row, and, in the case of a committed-read (tuple or index) scan, the abort of the insert was performed first, then the row was examined after the insert was aborted but before the delete was aborted. In some cases, this would leave the row in an inconsistent state. This could occur when a local checkpoint was performed during a backup. This issue did not affect primary key operations or scans that used locks (these are serialized).
After this fix, for ordered indexes, all operations that follow the operation to be aborted are now also aborted.
Invoking the management client
command from the system shell (for example, as ndb_mgm
-e "START BACKUP") did not work correctly, unless the
backup ID was included when the command was invoked.
Now, the backup ID is no longer required in such cases, and the
backup ID that is automatically generated is printed to stdout,
similar to how this is done when invoking
START BACKUP within the management client.
Disk Data: This fix completes one that was made for this issue in MySQL Cluster NDB-7.0.4, which did not rectify the problem in all cases. (Bug #43632)
When using multi-threaded data nodes,
TABLE statements on Disk Data tables could hang.
BIT columns created using the
native NDB API format that were not created as nullable could
still sometimes be overwritten, or cause other columns to be
This issue did not affect tables having
BIT columns created using the
mysqld format (always used by MySQL Cluster SQL nodes).
If the largest offset of a
RecordSpecification used for an
NdbRecord object was for the
NULL bits (and thus not a column), this
offset was not taken into account when calculating the size used
This meant that the space for the
could be overwritten by key or other information.
Functionality Added or Changed
The default values for a number of MySQL Cluster configuration
parameters relating to memory usage and buffering have changed.
These parameters include
applied to TCP transporters), and
For more information, see MySQL Cluster Configuration.
When restoring from backup, ndb_restore now reports the last global checkpoint reached when the backup was taken. (Bug #37384)
Cluster API: Partition pruning did not work correctly for queries involving multiple range scans.
As part of the fix for this issue, several improvements have
been made in the NDB API, including the addition of a new
method, a new variant of
and a new
DROP NODEGROUP command, the
SHOW in the
ndb_mgm client was not updated to reflect
the fact that the data nodes affected by this command were no
longer part of a node group.
Using indexes containing variable-sized columns could lead to internal errors when the indexes were being built. (Bug #43226)
It was not possible to add new data nodes to the cluster online using multi-threaded data node processes (ndbmtd). (Bug #43108)
The management server failed to start correctly in daemon mode. (Bug #43559)
When using ndbmtd, multiple data node failures caused the remaining data nodes to fail as well. (Bug #43109)
Some queries using combinations of logical and comparison
operators on an indexed column in the
clause could fail with the error Got error 4541
'IndexBound has no bound information' from
was measured from the end of one local checkpoint to the
beginning of the next, rather than from the beginning of one LCP
to the beginning of the next. This meant that the time spent
performing the LCP was not taken into account when determining
interval, so that LCPs were not started often enough, possibly
causing data nodes to run out of redo log space prematurely.
Disk Data: When using multi-threaded data nodes, dropping a Disk Data table followed by a data node restart led to a crash. (Bug #43632)
Disk Data: When using ndbmtd, repeated high-volume inserts (on the order of 10000 rows inserted at a time) on a Disk Data table would eventually lead to a data node crash. (Bug #41398)
Disk Data: When a log file group had an undo log file whose size was too small, restarting data nodes failed with Read underflow errors.
As a result of this fix, the minimum permitted
INITIAL_SIZE for an undo log file is now
1M (1 megabyte).
Ordered index scans using
NdbRecord formerly expressed a
BoundEQ range as separate lower and upper
bounds, resulting in 2 copies of the column values being sent to
the NDB kernel.
Now, when a range is specified by
the passed pointers, key lengths, and inclusive bits are
compared, and only one copy of the equal key columns is sent to
the kernel. This makes such operations more efficient, as half
the amount of
KeyInfo is now sent for a
BoundEQ range compared with before.
structures created by
NdbDictionary could have
overlapping null bits and data fields.
When performing insert or write operations,
NdbRecord permits key columns
to be specified in both the key record and in the attribute
record. Only one key column value for each key column should be
sent to the NDB kernel, but this was not guaranteed. This is now
ensured as follows: For insert and write operations, key column
values are taken from the key record; for scan takeover update
operations, key column values are taken from the attribute record.
Functionality Added or Changed
A new data node configuration parameter
been introduced to facilitate parallel node recovery by causing
a local checkpoint to be delayed while recovering nodes are
synchronizing data dictionaries and other meta-information. For
more information about this parameter, see
Defining MySQL Cluster Data Nodes.
New options are introduced for ndb_restore for determining which tables or databases should be restored:
--include-databases can be used to restore
specific tables or databases.
--exclude-databases can be used to exclude
the specified tables or databases from being restored.
For more information about these options, see ndb_restore — Restore a MySQL Cluster Backup. (Bug #40429)
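For example (node ID, backup ID, and database names are illustrative):
ndb_restore --nodeid=1 --backupid=5 --restore-data --include-databases=db1,db2
ndb_restore --nodeid=1 --backupid=5 --restore-data --exclude-databases=db3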
It is now possible to specify default locations for Disk Data
data files and undo log files, either together or separately,
using the data node configuration parameters
For information about these configuration parameters, see
Data file system parameters.
It is also now possible to specify a log file group, tablespace,
or both, that is created when the cluster is started, using the
node configuration parameters. For information about these
configuration parameters, see
Data object creation parameters.
Updates of the
SYSTAB_0 system table to
obtain a unique identifier did not use transaction hints for
tables having no primary key. In such cases the NDB kernel used
a cache size of 1. This meant that each insert into a table not
having a primary key required an update of the corresponding
SYSTAB_0 entry, creating a potential bottleneck.
With this fix, inserts on
NDB tables without
primary keys can under some conditions be performed up to
100% faster than previously.
References: This bug was introduced by Bug #29263.
Packages for MySQL Cluster were missing the
ALTER TABLE ... REORGANIZE
PARTITION on an
NDBCLUSTER table having only one
partition caused mysqld to crash.
References: See also Bug #40389.
If the cluster configuration cache file was larger than 32K, the management server would not start. (Bug #42543)
SHOW GLOBAL STATUS LIKE 'NDB%' before
mysqld had connected to the cluster caused a
When performing more than 32 index or tuple scans on a single fragment, the scans could be left hanging. This caused unnecessary timeouts, and in addition could possibly lead to a hang of an LCP. (Bug #42559)
References: This bug is a regression of Bug #42084.
ndb_error_reporter worked correctly only with GNU tar. (With other versions of tar, it produced empty archives.) (Bug #42753)
Given a MySQL Cluster containing no data (that is, whose data
nodes had all been started using
into which no data had yet been imported) and having an empty
backup directory, executing
START BACKUP with
a user-specified backup ID caused the data nodes to crash.
When using ndbmtd, NDB kernel threads could
hang while trying to start the data nodes with
set to 1.
When using multi-threaded data nodes,
were effectively multiplied by the number of local query
handlers in use by each ndbmtd instance.
References: See also Bug #42215.
When using ndbmtd for all data nodes, repeated failures of one data node during DML operations caused other data nodes to fail. (Bug #42450)
Data node failures that occurred before all data nodes had connected to the cluster were not handled correctly, leading to additional data node failures. (Bug #42422)
A data node failure that occurred between calls to
was not correctly handled; a subsequent call to
caused a null pointer to be dereferenced, leading to a segfault in
caused such tables to become locked.
References: See also Bug #16229, Bug #18135.
Backup IDs greater than 2^31 were not handled correctly, causing negative values to be used in backup directory names and printouts. (Bug #43042)
When using multiple management servers and starting several API nodes (possibly including one or more SQL nodes) whose connectstrings listed the management servers in different order, it was possible for 2 API nodes to be assigned the same node ID. When this happened it was possible for an API node not to get fully connected, consequently producing a number of errors whose cause was not easily recognizable. (Bug #42973)
In some cases,
NDB did not check
correctly whether tables had changed before trying to use the
query cache. This could result in a crash of the debug MySQL
When using multi-threaded data nodes, their
IndexMemory usage as
reported was multiplied by the number of local query handlers
(worker threads), making it appear that much more memory was
being used than was actually the case.
References: See also Bug #42765.
Disk Data: Creating a Disk Data tablespace with a very large extent size caused the data nodes to fail. The issue was observed when using extent sizes of 100 MB and larger. (Bug #39096)
It was not possible to add an in-memory column online to a table
that used a table-level or column-level
DISK option. The same issue prevented
ALTER ONLINE TABLE ... REORGANIZE PARTITION from working on
Disk Data tables.
Repeated insert and delete operations on disk-based tables could
lead to failures in the NDB Tablespace Manager
TSMAN kernel block).
Trying to execute a CREATE LOGFILE
GROUP statement using a value greater than
caused data nodes to crash.
As a result of this fix, the upper limit for
UNDO_BUFFER_SIZE is now
600M; attempting to set a higher value now
fails gracefully with an error.
References: See also Bug #36702.
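For example, a statement such as the following (group name, file name, and other sizes are illustrative) uses the maximum now permitted for UNDO_BUFFER_SIZE:
CREATE LOGFILE GROUP lg1
    ADD UNDOFILE 'undo_1.log'
    INITIAL_SIZE 128M
    UNDO_BUFFER_SIZE 600M
    ENGINE NDBCLUSTER;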
Using a path or file name longer than 128 characters for Disk
Data undo log files and tablespace data files caused a number of
issues, including failures of
TABLESPACE statements, as well as crashes of
management nodes and data nodes.
With this fix, the maximum length for path and file names used for Disk Data undo log files and tablespace data files is now the same as the maximum for the operating system. (Bug #31769, Bug #31770, Bug #31772)
Disk Data: When attempting to create a tablespace that already existed, the error message returned was Table or index with given name already exists. (Bug #32662)
Disk Data: Attempting to perform a system restart of the cluster where there existed a logfile group without any undo log files caused the data nodes to crash.
While issuing a CREATE LOGFILE
GROUP statement without an
UNDOFILE option fails with an error in the MySQL
server, this situation could arise if an SQL node failed
during the execution of a valid
LOGFILE GROUP statement; it is also possible to
create a logfile group without any undo log files using the
Cluster API: When using an ordered index scan without putting all key columns in the read mask, this invalid use of the NDB API went undetected, which resulted in the use of uninitialized memory. (Bug #42591)
Some error messages from ndb_mgmd contained
newline (\n) characters. This could break the
MGM API protocol, which uses the newline as a line separator.
NDBCLUSTER STATUS on an SQL node before the management
server had connected to the cluster caused
mysqld to crash.
When using ndbmtd, setting
MaxNoOfExecutionThreads to a
value higher than the actual number of cores available and with
caused the data nodes to crash.
The fix for this issue changes the behavior of
ndbmtd such that its internal job buffers no
longer rely on
Connections using IPv6 were not handled correctly by mysqld. (Bug #42413)
References: See also Bug #42412, Bug #38247.
When a cluster backup failed with Error 1304 (Node
node_id1: Backup request from
node_id2 failed to start), no clear
reason for the failure was provided.
As part of this fix, MySQL Cluster now retries backups in the event of sequence errors. (Bug #42354)
References: See also Bug #22698.
Functionality Added or Changed
Formerly, when the management server failed to create a
transporter for a data node connection,
elapsed before the data node was actually permitted to
disconnect. Now in such cases the disconnection occurs immediately.
References: See also Bug #41713.
Formerly, when using MySQL Cluster Replication, records for
“empty” epochs—that is, epochs in which no
NDBCLUSTER data or
tables took place—were inserted into the
ndb_binlog_index tables on the slave even
disabled. Beginning with MySQL Cluster NDB 6.2.16 and MySQL
Cluster NDB 6.3.13 this was changed so that these
“empty” epochs were no longer logged. However, it
is now possible to re-enable the older behavior (and cause
“empty” epochs to be logged) by using the
--ndb-log-empty-epochs option. For more
information, see Replication Slave Options and Variables.
References: See also Bug #37472.
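For example, the older behavior might be restored on a slave SQL node with an entry such as the following in my.cnf (shown here only as a sketch; the exact value syntax depends on the server version):
[mysqld]
ndb-log-empty-epochs=1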
MySQL Cluster would not compile when using
libwrap. This issue was known to occur only
in MySQL Cluster NDB 6.4.0.
When a data node connects to the management server, the node sends its node ID and transporter type; the management server then verifies that there is a transporter set up for that node and that it is in the correct state, and then sends back an acknowledgment to the connecting node. If the transporter was not in the correct state, no reply was sent back to the connecting node, which would then hang until a read timeout occurred (60 seconds). Now, if the transporter is not in the correct state, the management server acknowledges this promptly, and the node immediately disconnects. (Bug #41713)
References: See also Bug #41965.
If all data nodes were shut down, MySQL clients were unable to
access NDBCLUSTER tables and data
even after the data nodes were restarted, unless the MySQL
clients themselves were restarted.
The management server could hang after attempting to halt it with the
STOP command in the management client.
References: See also Bug #40922.
When using ndbmtd, one thread could flood another thread, which would cause the system to stop with a job buffer full condition (currently implemented as an abort). This could be caused by committing or aborting a large transaction (50000 rows or more) on a single data node running ndbmtd. To prevent this from happening, the number of signals that can be accepted by the system threads is calculated before executing them, and only executing them if sufficient space is found. (Bug #42052)
In the event that a MySQL Cluster backup failed due to file permissions issues, conflicting reports were issued in the management client. (Bug #34526)
A maximum of 11
TUP scans were permitted in
Trying to execute an
ALTER ONLINE TABLE
... ADD COLUMN statement while inserting rows into the
table caused mysqld to crash.
EXIT in the management client
sometimes caused the client to hang.
Functionality Added or Changed
MySQL Cluster now caches its configuration data. This means
that, by default, the management server only reads the global
configuration file (usually named
config.ini) the first time that it is
started, and does not automatically re-read this file when
restarted. This behavior can be controlled using new management
server options (
have been added for this purpose. For more information, see
MySQL Cluster Configuration Files, and
ndb_mgmd — The MySQL Cluster Management Server Daemon.
A multi-threaded version of the MySQL Cluster data node daemon is now available. The multi-threaded ndbmtd binary is similar to ndbd and functions in much the same way, but is intended for use on machines with multiple CPU cores.
For more information, see ndbmtd — The MySQL Cluster Data Node Daemon (Multi-Threaded).
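For example, on a multi-core host ndbmtd is started in place of ndbd (the connection string shown here is illustrative):
ndbmtd --ndb-connectstring=mgmhost:1186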
It is now possible when performing a cluster backup to determine
whether the backup matches the state of the data when the backup
began or when it ended, using the new
SNAPSHOTEND in the management client. See
Using The MySQL Cluster Management Client to Create a Backup,
for more information.
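For example, in the management client (only the SNAPSHOTEND option named in this entry is shown):
ndb_mgm> START BACKUP SNAPSHOTEND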
It is now possible while in Single User Mode to restart all data
nodes using ALL RESTART in the management
client. Restarting of individual nodes while in Single User Mode
remains not permitted.
It is now possible to add data nodes to a MySQL Cluster online—that is, to a running MySQL Cluster without shutting it down.
For information about the procedure for adding data nodes online, see Adding MySQL Cluster Data Nodes Online.
methods have been added to help in diagnosing problems with NDB
API client connections. The
method tells whether or not the latest connection attempt
succeeded; if the attempt failed,
provides an error message giving the reason.
The failure of a master node during a DDL operation caused the cluster to be unavailable for further DDL operations until it was restarted; failures of nonmaster nodes during DDL operations caused the cluster to become completely inaccessible. (Bug #36718)
API nodes disconnected too aggressively from the cluster when data nodes were being restarted. This could sometimes lead to the API node being unable to access the cluster at all during a rolling restart. (Bug #41462)
When long signal buffer exhaustion in the ndbd process resulted in a signal being dropped, the usual handling mechanism did not take fragmented signals into account. This could result in a crash of the data node because the fragmented signal handling mechanism was not able to work with the missing fragments. (Bug #39235)
Status messages shown in the management client when restarting a
management node were inappropriate and misleading. Now, when
restarting a management node, the messages displayed are as
shown here, where node_id is the
management node's node ID:

Shutting down MGM node node_id for restart
Node node_id is being restarted

ndb_mgm>
A data node failure when
NoOfReplicas was greater
than 2 caused all cluster SQL nodes to crash.