This section contains unified change history highlights for all
MySQL Cluster releases based on version 7.1 of the
NDB storage engine through MySQL
Cluster NDB 7.1.33. Included are all changelog
entries in the categories MySQL Cluster, Disk Data, and Cluster API.
For an overview of features that were added in MySQL Cluster NDB 7.1, see MySQL Cluster Development in MySQL Cluster NDB 7.1.
Version 5.1.73-ndb-7.1.33 has no changelog entries.
Functionality Added or Changed
As an aid to debugging, it is now possible to specify a human-readable name for a given Ndb object and later to retrieve it. These operations are implemented, respectively, as a setter and a getter method on the Ndb object.
To make tracing of event handling between a user application and NDB easier, you can use the reference (obtained from getReference()) followed by the name (if provided) in printouts; the reference ties together the Ndb object, the event buffer, and the NDB storage engine's event handling on the data nodes.
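The following is a minimal sketch of how such naming might be used for tracing; the method names setNdbObjectName() and getNdbObjectName() are assumed here for illustration and are not taken from the text above, and error handling is omitted:

  #include <NdbApi.hpp>
  #include <cstdio>

  int start_named_ndb(Ndb_cluster_connection &conn)
  {
    Ndb ndb(&conn, "test");
    // Assumed setter for the human-readable name; called before init().
    ndb.setNdbObjectName("event consumer #1");
    if (ndb.init() != 0)
      return -1;
    // Print the reference together with the name so that log lines from the
    // application and from the NDB event buffer can be correlated.
    printf("Ndb reference=0x%x name=%s\n",
           (unsigned) ndb.getReference(), ndb.getNdbObjectName());
    return 0;
  }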
Processing a NODE_FAILREP signal that contained an invalid node ID could cause a data node to fail. (Bug #18993037, Bug #73015)
References: This bug is a regression of Bug #16007980.
Attribute promotion between different TEXT types (TINYTEXT, TEXT, MEDIUMTEXT, and LONGTEXT) by ndb_restore was not handled properly in some cases. In addition, TEXT values are now truncated according to the limits set by mysqld (for example, values converted to TINYTEXT from another type are truncated to 256 bytes). In the case of columns using a multibyte character set, the value is truncated to the end of the last well-formed character.
Also as a result of this fix, conversion to a
TEXT column of any size that uses a different
character set from the original is now disallowed.
An ALTER TABLE ... REORGANIZE PARTITION statement issued after increasing the number of data nodes in the cluster from 4 to 16 led to a crash of the data nodes. This issue was shown to be a regression caused by a previous fix which added a new dump handler using a dump code that was already in use (7019), which caused the
command to execute two different handlers with different
semantics. The new handler was assigned a new
DUMP code (7024).
References: This bug is a regression of Bug #14220269.
ndbmtd supports multiple parallel receiver
threads, each of which performs signal reception for a subset of
the remote node connections (transporters), with the mapping of remote nodes to receiver threads decided at node startup.
Connection control is managed by the multi-instance
TRPMAN block, which is organized as a proxy
and workers, and each receiver thread has a
TRPMAN worker running locally.
The QMGR block sends signals to TRPMAN to enable and disable communications
with remote nodes. These signals are sent to the
TRPMAN proxy, which forwards them to the
workers. The workers themselves decide whether to act on
signals, based on the set of remote nodes they manage.
The current issue arises because the mechanism used by the
TRPMAN workers for determining which
connections they are responsible for was implemented in such a
way that each worker thought it was responsible for all
connections. This resulted in the
CLOSE_COMREQ being processed multiple times.
The fix ensures that each TRPMAN instance (receiver thread) processes CLOSE_COMREQ requests only for the connections that it actually manages. In addition, the correct TRPMAN instance is now chosen when routing signals for a specific remote connection.
A local checkpoint (LCP) is tracked using a global LCP state (c_lcpState), and each NDB table has a status indicator which indicates the LCP status of that table (tabLcpStatus). If the global LCP state is LCP_STATUS_IDLE, then all the tables should have an LCP status of TLS_COMPLETED.
When an LCP starts, the global LCP status is set to LCP_INIT_TABLES and the thread starts setting NDB tables to TLS_ACTIVE. If any tables are not ready for LCP, the LCP initialization procedure continues with CONTINUEB signals until all tables have become available and been marked TLS_ACTIVE. When this initialization is complete, the global LCP status is set to LCP_STATUS_ACTIVE.
This bug occurred when the following conditions were met:
An LCP was in the LCP_INIT_TABLES state, and some but not all tables had been set to TLS_ACTIVE.
The master node failed before the global LCP state changed to LCP_STATUS_ACTIVE; that is, before the LCP could finish processing all tables.
The NODE_FAILREP signal resulting from the node failure was processed before the final CONTINUEB signal from the LCP initialization process, so that the node failure was processed while the LCP remained in the LCP_INIT_TABLES state.
Following master node failure and selection of a new one, the
new master queries the remaining nodes with a
MASTER_LCPREQ signal to determine the state
of the LCP. At this point, since the LCP status was
LCP_INIT_TABLES, the LCP status was reset to
LCP_STATUS_IDLE. However, the LCP status of
the tables was not modified, so there remained tables whose LCP status was TLS_ACTIVE. Afterwards, the failed node is
removed from the LCP. If the LCP status of a given table is
TLS_ACTIVE, there is a check that the global
LCP status is not
LCP_STATUS_IDLE; this check
failed and caused the data node to fail.
Now the MASTER_LCPREQ handler ensures that tabLcpStatus for all tables is updated to TLS_COMPLETED when the global LCP status is reset to LCP_STATUS_IDLE.
The logging of insert failures has been improved. This is intended to help diagnose occasional issues seen when writing to the mysql.ndb_binlog_index table.
Using a CHAR column with the UTF8 character set as a table's primary key column led to node failure when restarting data
nodes. Attempting to restore a table with such a primary key
also caused ndb_restore to fail.
(Bug #16895311, Bug #68893)
The -o option for the ndb_select_all utility worked only when specified as the last option, and did not work with an equals sign. As part of this fix, the program's --help output was also aligned with the option's correct behavior.
(Bug #64426, Bug #16374870)
When an NDB data node indicates a buffer overflow via an empty epoch, the event buffer places an inconsistent data event in the event queue. When this was consumed, it was not removed from the event queue as expected, causing subsequent nextEvent() calls to return 0. This caused event consumption to stall, because the inconsistency remained flagged forever while event data accumulated in the queue.
Event data belonging to an empty inconsistent epoch can be found either at the beginning or somewhere in the middle of the event queue. pollEvents() returns 0 for the first case. This fix handles the second case: a nextEvent() call now dequeues the inconsistent event before it returns. In order to benefit from this fix, user applications must call nextEvent() even when pollEvents() returns 0.
pollEvents() returned 1, even when called with a wait time equal to 0 and there were no events waiting in the queue. Now in such cases it returns 0 as expected.
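A minimal sketch of an event-consumption loop following the advice above (call nextEvent() even when pollEvents() returns 0, so that an empty inconsistent epoch queued in the event buffer is dequeued rather than stalling consumption); error handling and event-operation setup are omitted:

  #include <NdbApi.hpp>

  void consume_events(Ndb *ndb)
  {
    for (;;)
    {
      (void) ndb->pollEvents(1000);     // wait up to 1 second for new epochs
      // Drain the queue even if pollEvents() reported nothing: an empty
      // inconsistent epoch may be waiting at the head of the queue.
      while (NdbEventOperation *op = ndb->nextEvent())
      {
        // ... handle the event data carried by op ...
        (void) op;
      }
    }
  }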
Functionality Added or Changed
Handling of LongMessageBuffer shortages and statistics has been improved as follows:
The default value of LongMessageBuffer has been increased from 4 MB to 64 MB.
When this resource is exhausted, a suitable informative message is now printed in the data node log describing possible causes of the problem and suggesting possible solutions.
LongMessageBuffer usage information is now shown in the ndbinfo memoryusage table. See the description of this table for an example and additional information.
The server system variables ndb_index_stat_cache_entries and ndb_index_stat_update_freq, which had been deprecated in a previous MySQL Cluster release series, have now been removed.
(Bug #11746486, Bug #26673)
When an ALTER TABLE statement changed table schemas without causing a change in the table's partitioning, the new table definition did not copy the hash map from the old definition, but used the current default hash map instead. However, the table data was not reorganized according to the new hash map, which made some rows inaccessible using a primary key lookup if the two hash maps differed.
To keep this situation from occurring, any ALTER TABLE that entails a hash map change now triggers a reorganization of the table. In addition, when copying a table definition in such cases, the hash map is now also copied.
When certain queries generated signals having more than 18 data words prior to a node failure, such signals were not written correctly in the trace file. (Bug #18419554)
ndb_show_tables sometimes failed with the
error message Unable to connect to management
server and immediately terminated, without providing
the underlying reason for the failure. To provide more useful
information in such cases, this program now also prints the most recent error from the underlying connection object used to instantiate the connection.
After dropping an NDB table, neither the cluster log nor the output of the
MemoryUsage command showed that the
IndexMemory used by that
table had been freed, even though the memory had in fact been
deallocated. This issue was introduced in an earlier MySQL Cluster NDB release.
The block threads managed by the multi-threading scheduler communicate by placing signals in an out queue or job buffer which is set up between all block threads. This queue has a fixed maximum size, such that when it is filled up, the worker thread must wait for the consumer to drain the queue. In a highly loaded system, multiple threads could end up in a circular wait lock due to full out buffers, such that they were preventing each other from performing any useful work. This condition eventually led to the data node being declared dead and killed by the watchdog timer.
To fix this problem, we detect situations in which a circular wait lock is about to begin, and cause buffers which are otherwise held in reserve to become available for signal processing by queues which are highly loaded. (Bug #18229003)
The ndb_mgm client
BACKUP command (see
Commands in the MySQL Cluster Management Client) could
experience occasional random failures when a ping was received prior to an expected response.
Now the connection established by this command is not checked
until it has been properly set up.
When performing a copying ALTER TABLE operation, mysqld creates a
new copy of the table to be altered. This intermediate table,
which is given a name bearing the prefix
#sql-, has an updated schema but contains no
data. mysqld then copies the data from the
original table to this intermediate table, drops the original
table, and finally renames the intermediate table with the name
of the original table.
mysqld regards such a table as a temporary table and does not include it in the output from SHOW TABLES. mysqldump also ignores such an intermediate table.
NDB, however, sees no difference between such an intermediate table and any other table. This difference in how intermediate tables are viewed by mysqld (and MySQL client programs) and by the NDB storage engine can give rise to problems when performing a backup and restore if an intermediate table remains in NDB, possibly left over from an ALTER TABLE that used copying. If a
schema backup is performed using mysqldump
and the mysql client, this table is not
included. However, in the case where a data backup was done
using the ndb_mgm client's
BACKUP command, the intermediate table was
included, and was also included by
ndb_restore, which then failed due to
attempting to load data into a table which was not defined in
the backed up schema.
To prevent such failures from occurring, ndb_restore now by default ignores intermediate tables created during ALTER TABLE operations (that is, tables whose names begin with the prefix #sql-). A new option is added that makes it possible to override this behavior; to cause ndb_restore to revert to the old behavior and attempt to restore such intermediate tables, disable the new option.
Data nodes running ndbmtd could stall while performing an online upgrade of a MySQL Cluster containing a great many tables from a version prior to NDB 7.1.20 to version 7.1.20 or later. (Bug #16693068)
When an NDB API client application received a signal with an invalid block or signal number, only a very brief error message that did not accurately convey the nature of the problem was printed. Now in such cases, appropriate printouts are provided when a bad signal or message is detected.
In addition, the message length is now checked to make certain
that it matches the size of the embedded signal.
Refactoring that was performed in MySQL Cluster NDB 7.1.30
inadvertently introduced a dependency in
Ndb.hpp on a file that is not included in
the distribution, which caused NDB API applications to fail to
compile. The dependency has been removed.
(Bug #18293112, Bug #71803)
References: This bug was introduced by Bug #17647637.
An NDB API application sends a scan query to a data node; the scan is processed by the transaction coordinator (TC). The TC sends an LQHKEYREQ request to the appropriate LDM, and aborts the transaction if it does not receive an LQHKEYCONF response within the specified time limit. After the transaction is successfully aborted, the TC sends a TCROLLBACKREP to the NDB API client, and the client processes this message by cleaning up any objects associated with the transaction.
The client receives the data which it has requested in the form of TRANSID_AI signals; these are buffered for sending at the data node, and may be delivered after a delay. On
receiving such a signal,
NDB checks the
transaction state and ID: if these are as expected, it processes
the signal using the
Ndb objects associated
with that transaction.
The current bug occurs when all the following conditions are fulfilled:
The transaction coordinator aborts a transaction due to delays and sends a TCROLLBACKREP signal to the client, while at the same time a TRANSID_AI signal which has been buffered for delivery at an LDM is delivered to the same client.
The NDB API client considers the transaction complete on
receipt of a
TCROLLBACKREP signal, and
immediately closes the transaction.
The client has a separate receiver thread running concurrently with the thread that is engaged in closing the transaction.
The arrival of the late TRANSID_AI interleaves with the closing of the user thread's transaction, such that TRANSID_AI processing passes the normal checks before the close operation resets the transaction state and invalidates the receiver.
When these conditions are all met, the receiver thread proceeds to continue working on the TRANSID_AI signal using the invalidated receiver. Since the receiver is already
invalidated, its usage results in a node failure.
Now the Ndb object cleanup done for TCROLLBACKREP includes invalidation of the transaction ID, so that, for a given transaction, any signal which is received after the TCROLLBACKREP arrives does not pass the transaction ID check and is silently dropped. This fix is also implemented for TCKEY_FAILREF signals.
See also Operations and Signals, for additional information about NDB messaging. (Bug #18196562)
Cluster API: ndb_restore could sometimes report Error 701 System busy with other schema operation unnecessarily when restoring in parallel. (Bug #17916243)
Compilation of ndbmtd failed on Solaris 10
and 11 for 32-bit
x86, and the binary was not
included in the binary distributions for these platforms.
Disk Data: When using Disk Data tables and ndbmtd data nodes, it was possible for the undo buffer to become overloaded, leading to a crash of the data nodes. This issue was more likely to be encountered when using Disk Data columns whose size was approximately 8K or larger. (Bug #16766493)
UINT_MAX64 was treated as a signed value by
Visual Studio 2010. To prevent this from happening, the value is
now explicitly defined as unsigned.
References: See also Bug #17647637.
Monotonic timers on several platforms can experience issues which might result in the monotonic clock doing small jumps back in time. This is due to imperfect synchronization of clocks between multiple CPU cores and does not normally have an adverse effect on the scheduler and watchdog mechanisms; so we handle some of these cases by making backtick protection less strict, although we continue to ensure that the backtick is less than 10 milliseconds. This fix also removes several checks for backticks which are thereby made redundant. (Bug #17973819)
Poor support or lack of support on some platforms for monotonic timers caused issues with delayed signal handling by the job scheduler for the multithreaded data node. Variances (timer leaps) on such platforms are now handled by the multithreaded data node process in the same way that they are by the single-threaded version. (Bug #17857442)
References: See also Bug #17475425, Bug #17647637.
When using single-threaded (ndbd) data nodes with realtime scheduling enabled, the process did not, as intended, temporarily lower its scheduling priority to normal every 10 milliseconds to give other, non-realtime threads a chance to run.
The global checkpoint lag watchdog tracked the number of times a check for GCP lag was performed using the system scheduler, and used this count to check for a timeout condition, but this caused a number of issues. To overcome these limitations, the GCP watchdog has been refactored to keep track of its own start times, and to calculate elapsed time by reading the (real) clock every time it is called.
In addition, any backticks (rare in any case) are now handled by taking the backward time as the new current time and calculating the elapsed time for this round as 0. Finally, any ill effects of a forward leap, which possibly could expire the watchdog timer immediately, are reduced by never calculating an elapsed time longer than the requested delay time for the watchdog timer. (Bug #17647469)
References: See also Bug #17842035.
Timers used in timing scheduler events in the
NDB kernel have been refactored, in
part to ensure that they are monotonic on all platforms. In
particular, on Windows, event intervals were previously
calculated using values obtained from
GetSystemTimeAsFileTime(), which reads
directly from the system time (“wall clock”), and
which may arbitrarily be reset backward or forward, leading to
false watchdog or heartbeat alarms, or even node shutdown. Lack
of timer monotonicity could also cause slow disk writes during
backups and global checkpoints. To fix this issue, the Windows
implementation now uses QueryPerformanceCounter() instead of GetSystemTimeAsFileTime(). In the event that
a monotonic timer is not found on startup of the data nodes, a
warning is logged.
In addition, on all platforms, a check is now performed at
compile time for available system monotonic timers, and the
build fails if one cannot be found; note that CLOCK_HIGHRES is now supported as an alternative to CLOCK_MONOTONIC if the latter is not available.
In certain rare cases on commit of a transaction, an Ndb object was released before the transaction coordinator (DBTC kernel block) sent the expected COMMIT_CONF signal. NDB failed to send a COMMIT_ACK signal in response, which caused a memory leak in the NDB kernel that could later lead to node failure.
Now the Ndb object is not released until the COMMIT_CONF signal has actually been received.
After restoring the database metadata (but not any data) using ndb_restore with the -m option, SQL nodes would hang while trying to SELECT from a table in the database to which the metadata was restored. In such cases the attempt to query the table now fails as expected, since the table does not actually exist until ndb_restore is executed with -r (--restore-data).
The ndbd_redo_log_reader utility now supports a help option. Using this option causes the program to print basic usage information, and then to exit.
(Bug #11749591, Bug #36805)
It was possible for an Ndb object to receive signals for handling before it was initialized, leading to thread interleaving and possible data node failure when executing a call to Ndb::init(). To guard against this happening, a check is now made when the object is starting to receive signals that the Ndb object is properly initialized before any signals are actually handled.
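A minimal sketch of the intended ordering, in which the Ndb object is fully initialized with init() before any operations are started on it; connection-string handling and error reporting are simplified:

  #include <NdbApi.hpp>

  int start_client(const char *connect_string)
  {
    ndb_init();                                  // global NDB API initialization
    Ndb_cluster_connection conn(connect_string);
    if (conn.connect() != 0)                     // contact the management server
      return -1;
    if (conn.wait_until_ready(30, 0) != 0)       // wait for the data nodes
      return -1;
    Ndb ndb(&conn, "test");
    if (ndb.init() != 0)                         // complete init() before any use
      return -1;
    // Transactions and operations are created only after init() succeeds.
    return 0;
  }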
Functionality Added or Changed
The length of time a management node waits for a heartbeat
message from another management node is now configurable, using a management node configuration parameter added in this release.
The connection is considered dead after 3 missed heartbeats. The
default value is 1500 milliseconds, or a timeout of
approximately 6000 ms.
(Bug #17807768, Bug #16426805)
ndb_restore could abort during the last stages of a restore using attribute promotion or demotion into an existing table. This could happen if a converted attribute was nullable and the backup had been run on an active database. (Bug #17275798)
The DBUTIL data node block is now less strict about the order in which it receives certain messages from other nodes.
The Windows error ERROR_FILE_EXISTS was not recognized by NDB, which treated it as an unknown error.
not work correctly with data nodes running
Maintenance and checking of parent batch completion in the SPJ block of the NDB kernel was reimplemented. Among other improvements, the completion state of all ancestor nodes in the tree is now maintained.
The LCP fragment scan watchdog periodically checks for lack of
progress in a fragment scan performed as part of a local
checkpoint, and shuts down the node if there is no progress
after a given amount of time has elapsed. This interval,
formerly hard-coded as 60 seconds, can now be configured using a data node configuration parameter added in this release.
This configuration parameter sets the maximum time the local checkpoint can be stalled before the LCP fragment scan watchdog shuts down the node. The default is 60 seconds, which provides backward compatibility with previous releases.
You can disable the LCP fragment scan watchdog by setting this parameter to 0. (Bug #16630410)
Added ndb_error_reporter options that make it possible to set a timeout for connecting to nodes, to disable scp connections to remote hosts, and to skip all nodes in a given node group.
References: See also Bug #11752792, Bug #44082.
When the available job buffers for a given thread fell below the critical threshold, the internal multi-threading job scheduler waited for job buffers for incoming rather than outgoing signals to become available, which meant that the scheduler waited the maximum timeout (1 millisecond) before resuming execution. (Bug #15907122)
The NDB receive thread waited unnecessarily for additional job buffers to become available when receiving data. This caused the receive mutex to be held during this wait,
which could result in a busy wait when the receive thread was
running with real-time priority.
This fix also handles the case where a negative return value
from the initial check of the job buffer by the receive thread
prevented further execution of data reception, which could possibly lead to communication blockage or the configured receive buffer memory being exceeded.
Under some circumstances, a race occurred where the wrong
watchdog state could be reported. A new state name
Packing Send Buffers is added for watchdog state number 11, which was previously reported as Unknown place. As part of this fix, the state numbers for any states without names are now always reported in such cases.
When a node fails, the Distribution Handler (the DBDIH kernel block) takes steps together with the Transaction Coordinator (DBTC) to make sure that all ongoing transactions involving the failed
node are taken over by a surviving node and either committed or
aborted. Transactions taken over which are then committed belong
in the epoch that is current at the time the node failure
occurs, so the surviving nodes must keep this epoch available
until the transaction takeover is complete. This is needed to
maintain ordering between epochs.
A problem was encountered in the mechanism intended to keep the current epoch open which led to a race condition between this mechanism and that normally used to declare the end of an epoch. This could cause the current epoch to be closed prematurely, leading to failure of one or more surviving data nodes. (Bug #14623333, Bug #16990394)
Formerly, the node used as the coordinator or leader for
distributed decision making between nodes (also known as the
DBDICT Block) was
indicated in the output of the ndb_mgm client
SHOW command as the “master”
node, although this node has no relationship to a master server
in MySQL Replication. (It should also be noted that it is not
necessary to know which node is the leader except when debugging
NDBCLUSTER source code.) To avoid possible
confusion, this label has been removed, and the leader node is now indicated in SHOW command output using an asterisk (*).
(Bug #11746263, Bug #24880)
When START BACKUP WAIT STARTED was run from the command line using ndb_mgm -e (--execute), the client did not exit until the backup completed.
(Bug #11752837, Bug #44146)
ndb_error_reporter did not support the --help option.
(Bug #11756666, Bug #48606)
References: See also Bug #11752792, Bug #44082.
Program execution failed to break out of a loop after meeting a desired condition in a number of internal methods, performing unneeded work in all cases where this occurred. (Bug #69610, Bug #69611, Bug #69736, Bug #17030606, Bug #17030614, Bug #17160263)
Running ABORT BACKUP in the
ndb_mgm client (see
Commands in the MySQL Cluster Management Client) took an
excessive amount of time to return (approximately as long as the
backup would have required to complete, had it not been
aborted), and failed to remove the files that had been generated
by the aborted backup.
(Bug #68853, Bug #17719439)
Note that converted character data is not checked to conform to any character set.
When performing such promotions, the only other sort of type conversion that can be performed at the same time is between character types and binary types.
The method now supports a pointer or a reference to a table as its required argument. If a null table pointer is used, the method now returns -1 to make it clear that this is what has occurred.
Functionality Added or Changed
Added the ExtraSendBufferMemory parameter for management nodes and API nodes. (Formerly, this parameter was available only for configuring data nodes.) See the descriptions of this parameter for management nodes and for API nodes for more information.
Performance: In a number of cases found in various locations in the MySQL Cluster codebase, unnecessary iterations were performed; this was caused by failing to break out of a repeating control structure after a test condition had been met. This community-contributed fix removes the unneeded repetitions by supplying the missing breaks. (Bug #16904243, Bug #69392, Bug #16904338, Bug #69394, Bug #16778417, Bug #69171, Bug #16778494, Bug #69172, Bug #16798410, Bug #69207, Bug #16801489, Bug #69215, Bug #16904266, Bug #69393)
File system errors occurring during a local checkpoint could sometimes cause an LCP to hang with no obvious cause when they were not handled correctly. Now in such cases, such errors always cause the node to fail. Note that the LQH block always shuts down the node when a local checkpoint fails; the change here is to make the (already likely) node failure occur more quickly and to make the original file system error more visible. (Bug #16961443)
Executing DROP TABLE while DBDIH was updating table checkpoint information subsequent to a node failure could lead to a data node crash.
The planned or unplanned shutdown of one or more data nodes while reading table data from the ndbinfo database caused a memory leak.
Issuing a CLUSTERLOG command (see Commands in the MySQL Cluster Management Client) caused ndb_mgm to crash on Solaris SPARC systems.
Failure to use container classes specific to NDB during node failure handling
could cause leakage of commit-ack markers, which could later
lead to resource shortages or additional node crashes.
Use of an uninitialized variable employed in connection with
error handling in the
DBLQH kernel block
could sometimes lead to a data node crash or other stability
issues for no apparent reason.
In certain cases, when starting a new SQL node, mysqld failed with Error 1427 Api node died, when SUB_START_REQ reached node. (Bug #16840741)
A race condition in the time between the reception of an execNODE_FAILREP signal by the QMGR kernel block and its reception by other kernel blocks could lead to data node crashes during shutdown.
When trying to specify a backup ID greater than the maximum allowed, the value was silently truncated. (Bug #16585455, Bug #68796)
ndb_mgm treated backup IDs provided to
ABORT BACKUP commands as signed values, so
that backup IDs greater than 2^31
wrapped around to negative values. This issue also affected
out-of-range backup IDs, which wrapped around to negative values
instead of causing errors as expected in such cases. The backup
ID is now treated as an unsigned value, and
ndb_mgm now performs proper range checking for backup ID values greater than the allowed maximum.
(Bug #16585497, Bug #68798)
When START BACKUP id WAIT STARTED was run and id had already been used for a backup ID, an error caused by the duplicate ID occurred as expected, but following this, the START BACKUP command never completed.
(Bug #16593604, Bug #68854)
The unexpected shutdown of another data node as a starting data node received its node ID caused the latter to hang in Start Phase 1. (Bug #16007980)
References: See also Bug #18993037.
Creating more than 32 hash maps caused data nodes to fail. Usually new hash maps are created only when performing reorganization after data nodes have been added or when explicit partitioning is used, such as when creating a table with the MAX_ROWS option, or using PARTITION BY KEY() PARTITIONS n.
When performing an INSERT ... ON DUPLICATE KEY UPDATE on an NDB table where the row to be inserted already existed and was locked by another transaction, the error message returned from the statement following the timeout was Transaction already aborted instead of the expected Lock wait timeout exceeded.
(Bug #14065831, Bug #65130)
When using dynamic listening ports for accepting connections from API nodes, the port numbers were reported to the management server serially. This required a round trip for each API node, causing the time required for data nodes to connect to the management server to grow linearly with the number of API nodes. To correct this problem, each data node now reports all dynamic ports at once. (Bug #12593774)
For each log event retrieved using the MGM API, the log event category was simply cast to an enum type, which resulted
in invalid category values. Now an offset is added to the
category following the cast to ensure that the value does not
fall out of the allowed range.
This change was reverted by the fix for Bug #18354165. See the
MySQL Cluster API Developer documentation for more information.
Functionality Added or Changed
Added DUMP code 2514, which provides
information about counts of transaction objects per API node.
For more information, see
DUMP 2514. See also
Commands in the MySQL Cluster Management Client.
When ndb_restore fails to find a table, it now includes in the error output an NDB API error code giving the reason for the failure. (Bug #16329067)
Following an upgrade to MySQL Cluster NDB 7.2.7 or later, it was
not possible to downgrade online again to any previous version,
due to a change in that version in the default size (number of
LDM threads used) for
hash maps. The fix for this issue makes the size configurable, with the addition of the DefaultHashMapSize configuration parameter.
To retain compatibility with an older release that does not
support large hash maps, you can set this parameter in the
config.ini file to the value
used in older releases (240) before performing an upgrade, so
that the data nodes continue to use smaller hash maps that are
compatible with the older release. You can also now employ this
parameter in MySQL Cluster NDB 7.0 and MySQL Cluster NDB 7.1 to
enable larger hash maps prior to upgrading to MySQL Cluster NDB
7.2. For more information, see the description of the DefaultHashMapSize parameter.
References: See also Bug #14645319.
Important Change; Cluster API:
When checking—as part of evaluating an
if predicate—which error codes should
be propagated to the application, any error code less than 6000
caused the current row to be skipped, even those codes that
should have caused the query to be aborted. In addition, a scan that aborted due to an error when no rows had been sent to the API caused the local query handler to send a SCAN_FRAGCONF signal rather than a SCAN_FRAGREF signal to DBTC. This caused DBTC to time out waiting for a signal that was never sent, and the scan was never closed.
As part of this fix, the default ErrorCode value used has been changed from 899 (Rowid already allocated) to 626 (Tuple did not exist). The old value continues to be supported for
backward compatibility. User-defined values in the range
6000-6999 (inclusive) are also now supported. You should also
keep in mind that the result of using any other
ErrorCode value not mentioned here is not
defined or guaranteed.
The NDB Error-Reporting Utility
(ndb_error_reporter) failed to include the
cluster nodes' log files in the archive it produced when the
FILE option was set for the parameter
References: See also Bug #11752792, Bug #44082.
In some cases a data node could stop with an exit code but with no error message other than (null) logged.
(This could occur when using ndbd or
ndbmtd for the data node process.) Now in
such cases the appropriate error message is used instead (see
ndbd Error Messages).
A WHERE condition that contained a boolean test of the result of an IN subselect was not evaluated correctly.
When using tables having more than 64 fragments in a MySQL
Cluster where multiple TC threads were configured (on data nodes
running ndbmtd), objects in memory could be freed prematurely before scans relying on these objects could be completed, leading to a crash of the data node.
References: See also Bug #13799800. This bug was introduced by Bug #14143553.
When started with the --config-file (-f) option, ndb_mgmd
removed the old configuration cache before verifying the
configuration file. Now in such cases,
ndb_mgmd first checks for the file, and
continues with removing the configuration cache only if the
configuration file is found and is valid.
Executing a DUMP 2304 command during a data
node restart could cause the data node to crash with a
Pointer too large error.
Data nodes could fail during a system restart when the host ran
short of memory, due to signals of the wrong types (such as TRANSID_AI_R) being sent to the
DBSPJ kernel block.
Improved handling of lagging row change event subscribers by setting the size of the GCP pool to the value of MaxBufferedEpochs. This fix also introduces a new MaxBufferedEpochBytes data node configuration parameter, which makes it possible to set a total number of bytes per node to be reserved for buffering epochs. In addition, a new DUMP code (8013) has been added which causes a list of lagging subscribers for each node to be printed to the cluster log (see DUMP 8013).
Attempting to perform additional operations such as ADD COLUMN as part of an ALTER [ONLINE | OFFLINE] TABLE ... RENAME ... statement is not supported, and now fails with an error.
Purging the binary logs could sometimes cause mysqld to crash. (Bug #15854719)
Due to a known issue in the MySQL Server, it is possible to drop the PERFORMANCE_SCHEMA database. (Bug #15831748) In addition, when executed on a MySQL Server acting as a MySQL Cluster SQL node, DROP DATABASE caused this database to be dropped on all SQL nodes in the cluster. Now, when executing a distributed drop of a database, NDB does not delete
tables that are local only. This prevents MySQL system databases
from being dropped in such cases.
A DUMP 1000 command (see DUMP 1000) that contained extra or malformed arguments could lead to data node failures.
An error message in
(Bug #14548052, Bug #66518)
under heavy load could cause data nodes running
ndbmtd to fail.
The help text for ndb_select_count did not include any information about using table names. (Bug #11755737, Bug #47551)
The ndb_mgm client
command did not show the complete syntax for the
The method performs a malloc() if no buffer is
provided for it to use. However, it was assumed that the memory
thus returned would always be suitably aligned, which is not
always the case. Now when malloc() is used to provide a buffer to this method, the buffer is aligned after it is allocated, and before it is used.
Functionality Added or Changed
Added several new columns to the transporters table, and new counters for the counters table, of the ndbinfo information database. The information provided may help in troubleshooting of transporter overloads and problems with send buffer memory allocation. For more information, see the descriptions of these tables.
To provide information which can help in assessing the current state of arbitration in a MySQL Cluster, as well as in diagnosing and correcting arbitration problems, 3 new tables have been added to the ndbinfo information database.
When an NDB table grew to contain
approximately one million rows or more per partition, it became
possible to insert rows having duplicate primary or unique keys
into it. In addition, primary key lookups began to fail, even
when matching rows could be found in the table by other means.
This issue was introduced in MySQL Cluster NDB 7.0.36, MySQL Cluster NDB 7.1.26, and MySQL Cluster NDB 7.2.9. Signs that you may have been affected include the following:
Rows left over that should have been deleted
Rows unchanged that should have been updated
Rows with duplicate unique keys due to inserts or updates (which should have been rejected) that failed to find an existing row and thus (wrongly) inserted a new one
This issue does not affect simple scans, so you can see all rows in a given table using SELECT * FROM table and similar queries that do not depend on a primary or unique key.
Upgrading to or downgrading from an affected release can be troublesome if there are rows with duplicate primary or unique keys in the table; such rows should be merged, but the best means of doing so is application dependent.
In addition, since the key operations themselves are faulty, a merge can be difficult to achieve without taking the MySQL Cluster offline, and it may be necessary to dump, purge, process, and reload the data. Depending on the circumstances, you may want or need to process the dump with an external application, or merely to reload the dump while ignoring duplicates if the result is acceptable.
Another possibility is to copy the data into another table without the original table's unique key constraints or primary key (recall that CREATE TABLE t2 SELECT * FROM t1 does not by default copy t1's primary or unique key definitions to t2). Following this, you can remove the duplicates from the copy, then add back the unique constraints and primary key definitions. Once the copy is in the desired state, you can either drop the original table and rename the copy, or make a new dump (which can be loaded later) from the copy.
(Bug #16023068, Bug #67928)
The management client command
BackupStatus failed with an error when used with data
nodes having multiple LQH worker threads
(ndbmtd data nodes). The issue did not affect the other form of this command.
The multi-threaded job scheduler could be suspended prematurely when there were insufficient free job buffers to allow the threads to continue. The general rule in the job thread is that any queued messages should be sent before the thread is allowed to suspend itself, which guarantees that no other threads or API clients are kept waiting for operations which have already completed. However, the number of messages in the queue was specified incorrectly, leading to increased latency in delivering signals, sluggish response, or otherwise suboptimal performance. (Bug #15908684)
The setting for an API node configuration parameter was ignored, and the default value was used instead.
Node failure during the dropping of a table could lead to the node hanging when attempting to restart.
When this happened, the
internal dictionary (
DBDICT) lock taken by
the drop table operation was held indefinitely, and the logical global schema lock taken by the SQL node from which the drop operation originated was held until the NDB internal operation timed out. To aid in debugging such occurrences, a new dump code, which dumps the contents of the dictionary lock queue, has been added in the ndb_mgm client.
The recently added LCP fragment scan watchdog occasionally reported problems with LCP fragment scans having very high table id, fragment id, and row count values.
This was due to the watchdog not accounting for the time spent draining the backup buffer used to buffer rows before writing to the fragment checkpoint file.
Now, in the final stage of an LCP fragment scan, the watchdog switches from monitoring rows scanned to monitoring the buffer size in bytes. The buffer size should decrease as data is written to the file, after which the file should be promptly closed. (Bug #14680057)
During an online upgrade, certain SQL statements could cause the server to hang, resulting in the error Got error 4012 'Request ndbd time-out, maybe due to high load or communication problems' from NDBCLUSTER. (Bug #14702377)
Job buffers act as the internal queues for work requests (signals) between block threads in ndbmtd and could be exhausted if too many signals are sent to a block thread.
Performing pushed joins in the DBSPJ kernel block can execute multiple branches of the query tree in
parallel, which means that the number of signals being sent can
increase as more branches are executed. If
DBSPJ execution cannot be completed before
the job buffers are filled, the data node can fail.
This problem could be identified by multiple instances of the message sleeploop 10!! in the cluster out log, possibly followed by job buffer full. If the job buffers overflowed more gradually, there could also be failures due to error 1205 (Lock wait timeout exceeded), shutdowns initiated by the watchdog timer, or other timeout related errors. These were due to the slowdown caused by the 'sleeploop'.
Normally up to a 1:4 fanout ratio between consumed and produced signals is permitted. However, since there can be a potentially unlimited number of rows returned from the scan (and multiple scans of this type executing in parallel), any ratio greater than 1:1 in such cases makes it possible to overflow the job buffers.
With this fix, any lookup child which otherwise would have been executed in parallel with another is deferred, resuming when its parallel child completes one of its own requests. This restricts the fanout ratio for bushy scan-lookup joins to 1:1. (Bug #14709490)
References: See also Bug #14648712.
Under certain rare circumstances, MySQL Cluster data nodes could
crash in conjunction with a configuration change on the data
nodes from a single-threaded to a multi-threaded transaction
coordinator (using the
configuration parameter for ndbmtd). The
problem occurred when a mysqld that had been
started prior to the change was shut down following the rolling
restart of the data nodes required to effect the configuration change.
Functionality Added or Changed
Added 3 new columns to the
transporters table in the
ndbinfo database. The
bytes_sent and bytes_received columns help to provide an
overview of data transfer across the transporter links in a
MySQL Cluster. This information can be useful in verifying
system balance, partitioning, and front-end server load
balancing; it may also be of help when diagnosing network
problems arising from link saturation, hardware faults, or other causes.
A slow file system during local checkpointing could exert undue pressure on DBDIH kernel block file page buffers, which in turn could lead to a data node crash when these were exhausted. This fix limits the number of table definition updates that DBDIH can issue concurrently.
The management server process could sometimes hang during shutdown when started with certain options.
The output from ndb_config
--configinfo now contains the
same information as that from ndb_config
--xml, including explicit indicators for parameters that do not require restarting a data node with --initial to take effect.
ndb_config incorrectly indicated that a node configuration parameter requires an initial node restart to take effect, when in fact it does not; this error was also present in the MySQL Cluster documentation, where it has also been corrected.
Executing ALTER TABLE concurrently with other DML statements on the same NDB table returned Got error -1 'Unknown error code' from NDBCLUSTER.
Receiver threads could wait unnecessarily to process incomplete signals, greatly reducing performance of ndbmtd. (Bug #14525521)
On platforms where epoll was not available, configuring multiple receiver threads caused ndbmtd to fail.
CPU consumption peaked several seconds after the forced termination of an NDB client application, due to the fact that the DBTC kernel block waited in a busy loop for any open transactions owned by the disconnected API client to be terminated, and did not break between checks for the correct state. (Bug #14550056)
Added the --connect-retries and --connect-delay startup options for ndbd and ndbmtd. --connect-retries (default 12) controls how many times the data node tries to connect to a management server before giving up; setting it to -1 means that the data node never stops trying to make contact. --connect-delay sets the number of seconds to wait between retries; the default is 5.
(Bug #14329309, Bug #66550)
It was possible in some cases for two transactions to try to
drop tables at the same time. If the master node failed while
one of these operations was still pending, this could lead
either to additional node failures (and cluster shutdown) or to
new dictionary operations being blocked. This issue is addressed
by ensuring that the master will reject requests to start or
stop a transaction while there are outstanding dictionary
takeover requests. In addition, table-drop operations now correctly signal when complete, as the DBDICT kernel block could not confirm node takeovers while such operations were still marked as pending completion.
Following a failed ALTER TABLE ... REORGANIZE PARTITION statement, a subsequent
execution of this statement after adding new data nodes caused a
failure in the
DBDIH kernel block which led
to an unplanned shutdown of the cluster.
A new DUMP code, 7019, was added as part of this fix.
It can be used to obtain diagnostic information relating to a
failed data node. See DUMP 7019, for more information.
References: See also Bug #18550318.
The DBSPJ kernel block had no information about which tables or indexes actually existed, or which had been modified or dropped, since execution of a given query began. As a result, DBSPJ might submit dictionary requests for nonexistent tables or versions of tables, which could cause a crash in the NDB kernel.
This fix introduces a simplified dictionary into the
DBSPJ kernel block such that
DBSPJ can now check reliably for the
existence of a particular table or version of a table on which
it is about to request an operation.
Previously, it was possible to store a maximum of 46137488 rows in a single MySQL Cluster partition. This limitation has now been removed. (Bug #13844405, Bug #14000373)
References: See also Bug #13436216.
When using ndbmtd and performing joins, data
nodes could fail where ndbmtd processes were
configured to use a large number of local query handler threads
(as set by the
configuration parameter), the tables accessed by the join had a
large number of partitions, or both.
(Bug #13799800, Bug #14143553)
When reloading the redo log during a node or system restart with the number of redo log files greater than or equal to 42, it was possible for metadata to be read for the wrong file (or files). Thus, the node or nodes involved could try to reload the wrong set of data.
When FILE was used for the value of the LogDestination parameter without also specifying the log file name, the file name defaulted to logger.log. Now in such cases, the name defaults to the standard cluster log file name.
(Bug #11764570, Bug #57417)
If the Transaction Coordinator aborted a transaction in the “prepared” state, this could cause a resource leak. (Bug #14208924)
When attempting to connect using a socket with a timeout, it was possible (if the timeout was exceeded) for the socket not to be set back to blocking. (Bug #14107173)
An error handling routine in the local query handler (DBLQH) used the wrong code path, which could corrupt the transaction ID hash, causing the data node process to fail. This could in some cases possibly lead to failures of other data nodes in the same node group when the failed node attempted to restart. (Bug #14083116)
When a fragment scan occurring as part of a local checkpoint (LCP) stopped progressing, this kept the entire LCP from completing, which could result it redo log exhaustion, write service outage, inability to recover nodes, and longer system recovery times. To help keep this from occurring, MySQL Cluster now implements an LCP watchdog mechanism, which monitors the fragment scans making up the LCP and takes action if the LCP is observed to be delinquent.
This is intended to guard against any scan related system-level I/O errors or other issues causing problems with LCP and thus having a negative impact on write service and recovery times. Each node independently monitors the progress of local fragment scans occurring as part of an LCP. If no progress is made for 20 seconds, warning logs are generated every 10 seconds thereafter for up to 1 minute. At this point, if no progress has been made, the fragment scan is considered to have hung, and the node is restarted to enable the LCP to continue.
In addition, a new ndbd exit code
NDBD_EXIT_LCP_SCAN_WATCHDOG_FAIL is added
to identify when this occurs. See
LQH Errors, for more information.
In some circumstances, transactions could be lost during an online upgrade. (Bug #13834481)
When an NDB table was created during a data node restart, the operation was rolled back in the NDB engine, but not on the SQL node where it was executed. This was due to the table's .FRM files not being cleaned up following the operation that was rolled back by NDB.
Now in such cases these files are removed.
Attempting to add both a column and an index on that column in
the same online ALTER TABLE statement caused mysqld to fail. Although
this issue affected only the mysqld shipped
with MySQL Cluster, the table named in the ALTER TABLE could use any storage engine for which online
operations are supported.
When an NDB API application repeated a scan result call after the previous call had returned end-of-file (return code 1), a transaction object was leaked. Now when this happens, NDB returns error code 4210 (Ndb sent more info than length specified); previously in such cases, -1 was
returned. In addition, the extra transaction object associated
with the scan is freed, by returning it to the transaction
coordinator's idle list.
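A minimal sketch of a scan loop consistent with this behavior, assuming the call referred to above is NdbScanOperation::nextResult(): fetching stops as soon as it returns 1 (end of data) instead of calling it again, and the scan is then closed:

  #include <NdbApi.hpp>

  int scan_table(Ndb *ndb, const char *table_name)
  {
    const NdbDictionary::Table *tab =
        ndb->getDictionary()->getTable(table_name);
    if (tab == NULL) return -1;

    NdbTransaction *trans = ndb->startTransaction();
    if (trans == NULL) return -1;

    NdbScanOperation *scan = trans->getNdbScanOperation(tab);
    if (scan == NULL ||
        scan->readTuples(NdbOperation::LM_CommittedRead) != 0 ||
        trans->execute(NdbTransaction::NoCommit) != 0)
    {
      ndb->closeTransaction(trans);
      return -1;
    }

    int rc;
    while ((rc = scan->nextResult(true)) == 0)
    {
      // Process the current row here; column values would normally be bound
      // with getValue() calls made before execute().
    }
    // rc == 1 means end of data; do not call nextResult() again.
    scan->close();
    ndb->closeTransaction(trans);
    return (rc == 1) ? 0 : -1;
  }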
The output of DUMP 2303 in the ndb_mgm
client now includes the status of the single fragment scan
record reserved for a local checkpoint.
A shortage of scan fragment records resulted in a leak of concurrent scan table records and key operation records.
An ALTER ONLINE TABLE ... REORGANIZE PARTITION statement can be used
to create new table partitions after new empty nodes have been
added to a MySQL Cluster. Usually, the number of partitions to
create is determined automatically, such that, if no new
partitions are required, then none are created. This behavior
can be overridden by creating the original table using the
MAX_ROWS option, which indicates that extra
partitions should be created to store a large number of rows.
However, in this case ALTER ONLINE TABLE ... REORGANIZE PARTITION simply uses the MAX_ROWS value specified in the original CREATE TABLE statement to determine the number of partitions
required; since this value remains constant, so does the number
of partitions, and so no new ones are created. This means that
the table is not rebalanced, and the new data nodes remain
To solve this problem, support is added for specifying a larger MAX_ROWS value with ALTER ONLINE TABLE ... REORGANIZE PARTITION, where the new value is greater than the value specified for
MAX_ROWS in the original
CREATE TABLE statement. This larger
MAX_ROWS value implies that more partitions
are required; these are allocated on the new data nodes, which
restores the balanced distribution of the table data.
In some cases, restarting data nodes spent a very long time in Start Phase 101, in which API nodes must connect to the starting node; this occurred when the API nodes trying to connect failed in a live-lock scenario. This connection process uses a handshake during which
a small number of messages are exchanged, with a timeout used to
detect failures during the handshake.
Prior to this fix, this timeout was set such that, if one API node encountered the timeout, all other nodes connecting would do the same. The fix also decreases this timeout. This issue (and the effects of the fix) are most likely to be observed on relatively large configurations having 10 or more data nodes and 200 or more API nodes. (Bug #13825163)
TABLE failed when a
ndbmtd failed to restart when the size of a table definition exceeded 32K.
(The size of a table definition is dependent upon a number of factors, but in general the 32K limit is encountered when a table has 250 to 300 columns.) (Bug #13824773)
An initial start using ndbmtd could sometimes hang. This was due to a state which occurred when several threads tried to flush a socket buffer to a remote node. In such cases, to minimize flushing of socket buffers, only one thread actually performs the send, on behalf of all threads. However, it was possible in certain cases for there to be data in the socket buffer waiting to be sent with no thread ever being chosen to perform the send. (Bug #13809781)
When trying to use ndb_size.pl
to connect to a MySQL server running on a nonstandard port, the
port argument was ignored.
(Bug #13364905, Bug #62635)
Important Change: A number of changes have been made in the configuration of transporter send buffers.
The ReservedSendBufferMemory data node configuration parameter is now deprecated, and thus subject to removal in a future MySQL Cluster release.
ReservedSendBufferMemory has been
non-functional since it was introduced and remains so.
TotalSendBufferMemory now works correctly
with data nodes using ndbmtd.
A new data node configuration parameter, ExtraSendBufferMemory, is introduced. Its purpose is to control how much additional memory can be allocated to the send buffer over and above that specified by SendBufferMemory. The default setting
(0) allows up to 16MB to be allocated automatically.
(Bug #13633845, Bug #11760629, Bug #53053)
Several instances in the NDB code affecting the operation of
multi-threaded data nodes, where
SendBufferMemory was associated with a
specific thread for an unnecessarily long time, have been
identified and fixed, by minimizing the time that any of these
buffers can be held exclusively by a given thread (send buffer
memory being critical to operation of the entire node).
Queries using LIKE ... ESCAPE on NDB tables failed when pushed down to the data nodes. Such queries are no longer pushed down, regardless of whether engine condition pushdown is enabled.
(Bug #13604447, Bug #61064)
To avoid TCP transporter overload, an overload flag is kept in
the NDB kernel for each data node; this flag is used to abort
key requests if needed, yielding error 1218 Send
Buffers overloaded in NDB kernel in such cases.
Scans can also put significant pressure on transporters,
especially where scans with a high degree of parallelism are
executed in a configuration with relatively small send buffers.
However, in these cases, overload flags were not checked, which
could lead to node failures due to send buffer exhaustion. Now,
overload flags are checked by scans, and in cases where returning sufficient rows to match the batch size (as set using the --ndb-batch-size server option)
would cause an overload, the number of rows is limited to what
can be accommodated by the send buffer.
See also Configuring MySQL Cluster Send Buffer Parameters. (Bug #13602508)
A data node crashed when more than 16G of fixed-size memory was allocated by DBTUP to a single fragment (because the DBACC kernel block was not prepared to accept values greater than 32 bits from it, leading to an
overflow). Now in such cases, the data node returns Error 889
Table fragment fixed data reference has reached
maximum possible value.... When this happens, you
can work around the problem by increasing the number of
partitions used by the table (such as by using the MAX_ROWS option with CREATE TABLE).
References: See also Bug #11747870, Bug #34348.
References: See also Bug #13608135.
A node failure and recovery while performing a scan on more than 32 partitions led to additional node failures during node takeover. (Bug #13528976)
option now causes ndb_mgmd to skip checking
for the configuration directory, and thus to skip creating it in
the event that it does not exist.
Accessing a table having a BLOB column but no primary key following a restart of the SQL node failed with Error 1 (Unknown error code).
At the beginning of a local checkpoint, each data node marks its local tables with a “to be checkpointed” flag. A failure of the master node during this process could cause either the LCP to hang, or one or more data nodes to be forcibly shut down. (Bug #13436481)
A node failure that occurred while a DDL statement was executing resulted in a hung
connection (and the user was not informed of any error that
would cause this to happen).
References: See also Bug #13407848.
Added the MinFreePct data node configuration parameter, which specifies a percentage of data node resources to hold in reserve for restarts. The resources monitored are DataMemory, IndexMemory, and any per-table MAX_ROWS settings (see CREATE TABLE Syntax). The default value of MinFreePct is 5, which means that 5% of each of these resources is now set aside for restarts.
The configuration parameters used to control the maximum sizes of result batches are defined as integers; however, the values used to store these were incorrectly interpreted as numbers of bytes in the NDB kernel. This caused the DBLQH kernel block to fail to detect when the maximum batch size had been reached. In addition, the DBSPJ kernel block could miscalculate statistics for adaptive parallelism.
Because the log event buffer used internally by data nodes was circular, periodic events such as statistics events caused it to be overwritten too quickly. Now the buffer is partitioned by log event category, and its default size has been increased from 4K to 8K. (Bug #13394771)
Previously, simultaneously forcing the shutdown of multiple data
nodes using SHUTDOWN -F in the
ndb_mgm management client could cause the
entire cluster to fail. Now in such cases, any such nodes are
forced to abort immediately.
A SubscriberNodeIdUndefined error was previously unhandled, resulting in a data node crash, but is now handled by NDB Error 1429, Subscriber node undefined in SubStartReq. (Bug #12598496)
Functionality Added or Changed
Added a data node configuration parameter which, when enabled,
causes data nodes to handle corrupted tuples in a fail-fast
manner—in other words, whenever a data node detects a
corrupted tuple, it forcibly shuts down.
For backward compatibility, this parameter is disabled by default.
Added a data node configuration parameter to enable control of multiple
threads and CPUs when using ndbmtd, by
assigning threads of one or more specified types to execute on
one or more CPUs. This can provide more precise and flexible
control over multiple threads than was previously possible.
Added the ndbinfo_select_all utility.
When adding data nodes online, if the SQL nodes were not restarted before starting the new data nodes, the next query to be executed crashed the SQL node on which it was run. (Bug #13715216, Bug #62847)
References: This bug was introduced by Bug #13117187.
When a failure of multiple data nodes during a local checkpoint (LCP) that took a long time to complete included the node designated as master, any new data nodes attempting to start before all ongoing LCPs were completed later crashed. This was due to the fact that node takeover by the new master cannot be completed until there are no pending local checkpoints. Long-running LCPs such as those which triggered this issue can occur when fragment sizes are sufficiently large (see MySQL Cluster Nodes, Node Groups, Replicas, and Partitions, for more information). Now in such cases, data nodes (other than the new master) are kept from restarting until the takeover is complete. (Bug #13323589)
When deleting from multiple tables using a unique key in the
WHERE condition, the wrong rows were deleted.
UPDATE triggers failed when rows
were changed by deleting from or updating multiple tables.
(Bug #12718336, Bug #61705, Bug #12728221)
Shutting down a mysqld while under load caused the spurious error messages Opening ndb_binlog_index: killed and Unable to lock table ndb_binlog_index to be written in the cluster log. (Bug #11930428)
When more than 32KB of data must be sent in a single signal
using the NDB API, the data is split across 2 or more signals
each of which is smaller than 32kB, and these are then
reassembled back into the original, full-length signal by the
receiver. Such fragmented signals are used for some scan
requests, as well as for SPJ
requests. However, extra (spurious) signals could sometimes be
sent when using fragmented signals, causing errors on the
receiver; these implementation artifacts have now been removed.
Functionality Added or Changed
It is now possible to filter the output from
ndb_config so that it displays only system,
data node, or connection parameters and values, using one of
several options added for this purpose.
In addition, it is now possible to specify from which data node
the configuration data is obtained, using another option added in this release.
For more information, see ndb_config — Extract MySQL Cluster Configuration Information. (Bug #11766870)
Incompatible Change; Cluster API: Restarting a machine hosting data nodes, SQL nodes, or both, caused such nodes when restarting to time out while trying to obtain node IDs.
As part of the fix for this issue, the behavior and default
values for the NDB API
method have been improved. Due to these changes, the version
number for the included NDB client library
(libndbclient.so) has been increased from
4.0.0 to 5.0.0. For NDB API applications, this means that as
part of any upgrade, you must do both of the following:
Review and possibly modify any NDB API code that uses the
method, in order to take into account its changed default behavior.
Recompile any NDB API applications using the new version of the client library.
Also in connection with this issue, the default value for each
of the two mysqld options
--ndb-wait-setup has been
increased to 30 seconds (from 0 and 15, respectively). In
addition, a hard-coded 30-second delay was removed, so that the
values set for these options are now handled correctly in all cases.
When replicating DML statements between clusters, the number of
operations that failed due to
nonexistent keys was expected to be no greater than the number
of defined operations of any single type. Because the slave SQL
thread defines operations of multiple types in batches together,
code which relied on this assumption could cause
mysqld to fail.
The maximum effective value for the OverloadLimit
configuration parameter was limited by the value of
SendBufferMemory. Now the
value set for
OverloadLimit is used
correctly, up to this parameter's stated maximum (4G).
AUTO_INCREMENT values were not set correctly
for INSERT IGNORE statements affecting
NDB tables. This could lead such
statements to fail with Got error 4350 'Transaction
already aborted' from NDBCLUSTER when inserting
multiple rows containing duplicate values.
(Bug #11755237, Bug #46985)
When failure handling of an API node takes longer than 300 seconds, extra debug information is included in the resulting output. In cases where the API node's node ID was greater than 48, these extra debug messages could lead to a crash, and to confusing output otherwise. This was due to an attempt to provide information specific to data nodes for API nodes as well. (Bug #62208)
In rare cases, a series of node restarts and crashes during restarts could lead to errors while reading the redo log. (Bug #62206)
Functionality Added or Changed
Added a data node configuration parameter which can be used to limit
the number of DML operations used by a transaction; if the
transaction requires more than this many DML operations, the
transaction is aborted.
Restarting a mysqld during a rolling upgrade with data nodes running a mix of old and new versions of the MySQL Cluster software caused the mysqld to run in read-only mode. (Bug #12651364, Bug #61498)
When global checkpoint indexes were written with no intervening end-of-file or megabyte border markers, this could sometimes lead to a situation in which the end of the redo log was mistakenly regarded as being between these GCIs, so that if the restart of a data node took place before the start of the next redo log was overwritten, the node encountered an Error while reading the REDO log. (Bug #12653993, Bug #61500)
References: See also Bug #56961.
Under certain rare circumstances, a data node process could fail
with Signal 11 during a restart. This was due to uninitialized
variables in the
QMGR kernel block.
Error reporting has been improved for cases in which API nodes are unable to connect due to apparent unavailability of node IDs. (Bug #12598398)
Error messages for Failed to convert connection transporter registration problems were not sufficiently specific. (Bug #12589691)
Multiple management servers were unable to detect one another
until all nodes had fully started. As part of the fix for this
issue, two new status values
CONNECTED can be reported for management
nodes in the output of the ndb_mgm client
SHOW command (see
Commands in the MySQL Cluster Management Client). Two
corresponding status values
NDB_MGM_NODE_STATUS_CONNECTED are also added
to the list of possible values for an
structure in the MGM API.
(Bug #12352191, Bug #48301)
Handling of certain configuration parameters was not consistent
in all parts of the NDB kernel; they were strictly
enforced only by some kernel blocks, including
SUMA. This could lead to
cases in which tables could be created but not replicated. Now
these parameters are treated by
DBDICT as suggested maximums rather than hard
limits, as they are elsewhere in the NDB kernel.
It was not possible to shut down a management node while one or more data nodes were stopped (for whatever reason). This issue was a regression introduced in MySQL Cluster NDB 7.0.24 and MySQL Cluster NDB 7.1.13. (Bug #61607)
References: See also Bug #61147.
Within a transaction, after creating, executing, and closing a scan,
creating and executing but not closing a second scan caused the
application to crash.
Applications that included the header file
ndb_logevent.h could not be built using the
Microsoft Visual Studio C compiler or the Oracle (Sun) Studio C
compiler due to empty struct definitions.
The Ndb_getinaddr() function has
been rewritten so that it no longer relies on
my_gethostbyname_r() (which is removed
in a later version of the MySQL Server).
mysql_upgrade failed when performing an
online upgrade from MySQL Cluster NDB 7.1.8 or an earlier
release to MySQL Cluster NDB 7.1.9 or later in which the SQL
nodes were upgraded before the data nodes. This issue could
occur during any online upgrade or downgrade where one or more
ndbinfo tables had more,
fewer, or differing columns between the two versions, and when
the data nodes were not upgraded before the SQL nodes.
For more information, see Upgrade and downgrade compatibility: MySQL Cluster NDB 7.x. (Bug #11885602, Bug #12581895, Bug #12581954)
Two unused test files in
storage/ndb/test/sql contained incorrect
versions of the GNU Lesser General Public License. The files and
the directory containing them have been removed.
References: See also Bug #11810224.
Error 1302 gave the wrong error message (Out of backup record). This has been corrected to A backup is already running. (Bug #11793592)
In ndbmtd, a node connection event is
detected by a
CMVMI thread which sends a
CONNECT_REP signal to the
QMGR kernel block. In a few isolated
circumstances, a signal might be transferred to
QMGR directly by the
NDB transporter before the
CONNECT_REP signal actually arrived. This
resulted in reports in the error log with status
Temporary error, restart node, and the
message Internal program error.
Under heavy loads with many concurrent inserts, temporary
failures in transactions could occur (and were misreported as
being due to
NDB Error 899
Rowid already allocated). As part of the
fix for this issue,
NDB Error 899
has been reclassified as an internal error, rather than as a
temporary transaction error.
(Bug #56051, Bug #11763354)
When using two management servers, issuing a STOP command for one
management server in an ndb_mgm client connected to the other
management server caused Error 2002 (Stop failed ...
Send to process or receive failed.: Permanent error: Application
error), even though the command actually succeeded, and the second
ndb_mgmd was shut down.
incorrect with regard to data files in MySQL Cluster Disk Data
tablespaces. This could lead to a crash when
Functionality Added or Changed
It is now possible to add data nodes online to a running MySQL
Cluster without performing a rolling restart of the cluster or
starting data node processes with the
--nowait-nodes option. This can be
done by setting Nodegroup = 65536 in the
config.ini file for
any data nodes that should be started at a later time, when
first starting the cluster. (It was possible to set
NodeGroup to this value
previously, but the management server failed to start.)
As part of this fix, a new data node configuration parameter
has been added. When the management server sees that there are
data nodes with no node group (that is, nodes for which
Nodegroup = 65536), it waits for a configured number of
milliseconds before treating these nodes as though they were
listed with the --nowait-nodes
option, and proceeds to start.
For more information, see Adding MySQL Cluster Data Nodes Online. (Bug #11766167, Bug #59213)
A config_generation column has been added to
the nodes table of the
ndbinfo database. By checking
this column, it is now possible to determine which version or
versions of the MySQL Cluster configuration file are in effect
on the data nodes. This information can be especially useful
when performing a rolling restart of the cluster to update its configuration.
Cluster API: A unique index operation is executed in two steps: a lookup on an index table, and an operation on the base table. When the operation on the base table failed, while being executed in a batch with other operations that succeeded, this could lead to a hanging execute, eventually timing out with Error 4012 (Request ndbd time-out, maybe due to high load or communication problems). (Bug #12315582)
A memory leak in
LGMAN, which leaked 8 bytes
of log buffer memory per 32KB written, was introduced in MySQL
Cluster NDB 7.0.9, affecting all MySQL Cluster NDB 7.1 releases
as well as MySQL Cluster NDB 7.0.9 and later MySQL Cluster NDB
7.0 releases. (For example, when 128MB of log buffer memory was
used, it was exhausted after writing 512GB to the undo log.)
This led to a GCP stop and data node failure.
References: This bug was introduced by Bug #47966.
When using ndbmtd, a MySQL Cluster configured with 32 data nodes failed to start correctly. (Bug #60943)
When performing a TUP scan with locks in parallel, and with a highly concurrent load of inserts and deletions, the scan could sometimes fail to notice that a record had moved while waiting to acquire a lock on it, and so read the wrong record. During node recovery, this could lead to a crash of a node that was copying data to the node being started, and a possible forced shutdown of the cluster.
Cluster API: Performing interpreted operations using a unique index did not work correctly, because the interpret bit was kept when sending the lookup to the index table.
Functionality Added or Changed
Improved the scaling of ordered index scan performance by removing
a hard-coded limit and by making the number of
TUX scans per fragment configurable through a new
data node configuration parameter.
A server option set a status variable as well as a system
variable. The status variable has been removed as redundant.
A scan with a pushed condition (filter) using the
CommittedRead lock mode could hang for a
short interval when it was aborted just as it had decided
to send a batch.
When aborting a multi-read range scan exactly as it was changing ranges in the local query handler, LQH could fail to detect it, leaving the scan hanging. (Bug #11929643)
Schema distribution did not take place for tables converted from
another storage engine to NDB using
ALTER TABLE; this meant that such
tables were not always visible to all SQL nodes attached to the cluster.
A GCI value inserted by ndb_restore
--restore_epoch into the
ndb_apply_status table was actually 1 less
than the correct value.
Limits imposed by the size of certain resources were
not always enforced consistently with regard to Disk Data undo
buffers and log files. This could sometimes cause a
CREATE LOGFILE GROUP or
ALTER LOGFILE GROUP statement to
fail for no apparent reason, or cause the log file group
not to be created when starting the cluster.
Functionality Added or Changed
Disk usage as well as memory usage information is now provided for
Disk Data tables. Also, statistics
were formerly not shown for
NDB tables; now the
DATA_FREE and related columns contain correct information
for the table's partitions.
A new option is added for ndb_restore, which makes
it possible to restore to a database having a different name
from that of the database in the backup.
For more information, see ndb_restore — Restore a MySQL Cluster Backup. (Bug #54327)
Made it possible to enable multi-threaded building of ordered
indexes during initial restarts, using a new
data node configuration parameter.
For additional information about type conversions currently supported by MySQL Cluster for attribute promotion and demotion, see Replication of Columns Having Different Data Types.
The NDB kernel now implements a number of statistical counters
relating to actions performed by or affecting
Ndb objects, such as starting,
closing, or aborting transactions; primary key and unique key
operations; table, range, and pruned scans; blocked threads
waiting for various operations to complete; and data and events
sent and received by such objects.
These NDB API counters are incremented inside the NDB kernel
whenever NDB API calls are made or data is sent to or received
by the data nodes. mysqld exposes these
counters as system status variables; their values can be read in
the output of
SHOW STATUS, or by querying the appropriate
table in the
INFORMATION_SCHEMA database. By
comparing the values of these status variables prior to and
following the execution of SQL statements that act on
NDB tables, you can observe the
corresponding actions taken on the NDB API level, which can be
beneficial for monitoring and performance tuning of MySQL Cluster.
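For instance, the counters can be viewed from an SQL node as shown here (a minimal sketch; it assumes that these status variables share the Ndb_api prefix, which is not stated explicitly in this entry):
SHOW GLOBAL STATUS LIKE 'Ndb_api%';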
When a CREATE TABLE statement
failed due to
NDB error 1224
(Too many fragments), it was not possible
to create the table afterward unless either it had no ordered
indexes, or an additional
statement was issued first, even if the subsequent
CREATE TABLE was valid and should
otherwise have succeeded.
References: See also Bug #59751.
When a query used multiple references to or instances of the
same physical tables,
NDB failed to
recognize these multiple instances as different tables; in such
cases, NDB could incorrectly allow a
condition referring to these other
instances to be pushed to the data nodes, even though the
condition should have been rejected as unpushable, leading to incorrect results.
Successive queries against the counters table in the
ndbinfo database from the same
SQL node returned unchanging results. To fix this issue, and to
prevent similar issues from occurring in the future,
ndbinfo tables are now
excluded from the query cache.
This issue affects all previous MySQL Cluster NDB 7.1 releases. (Bug #60045)
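As an illustration, repeatedly running a query such as the following on the same SQL node now reflects current values (a sketch; the column names shown are assumed):
SELECT node_id, counter_name, val FROM ndbinfo.counters;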
When attempting to create a table on a MySQL Cluster with many
standby data nodes (setting Nodegroup = 65536 in the
config.ini file for the nodes that should wait,
starting the nodes that should start immediately with the
--nowait-nodes option, and using the
CREATE TABLE statement's
MAX_ROWS option), mysqld
miscalculated the number of fragments to use. This caused the
CREATE TABLE to fail.
The CREATE TABLE failure caused
by this issue in turn prevented any further attempts to create
the table, even if the table structure was simplified or
changed in such a way that the attempt should have succeeded.
This “ghosting” issue is handled in Bug #59756.
The logic used in determining whether to collapse a range to a
simple equality was faulty. In certain cases, this could cause
NDB to treat a range as if it were
a primary key lookup when determining the query plan to be used.
Although this did not affect the actual result returned by the
query, it could in such cases result in inefficient execution of
queries due to the use of an inappropriate query plan.
NDB sometimes treated a simple (not
unique) ordered index as unique.
caused multi-threaded index building to occur on the master node
When an NDB API client application was waiting for more scan
results, the calling thread sometimes woke up even if no new batches for
any fragment had arrived, which was unnecessary, and which could
have a negative impact on the application's performance.
During a node restart, it was possible to get a spurious error
711 (System busy with node restart, schema operations
not allowed when a node is starting).
Functionality Added or Changed
The following changes have been made with regard to the
data node configuration parameter that controls detection of GCP stops:
The maximum possible value for this parameter has been increased from 32000 milliseconds to 256000 milliseconds.
Setting this parameter to zero now has the effect of disabling GCP stops caused by save timeouts, commit timeouts, or both.
The current value of this parameter and a warning are written to the cluster log whenever a GCP save takes longer than 1 minute or a GCP commit takes longer than 10 seconds.
For more information, see Disk Data and GCP Stop errors. (Bug #58383)
Added a new option for ndb_restore. This option causes
ndb_restore to ignore tables corrupted due to
missing blob parts tables, and to continue reading from the
backup file and restoring the remaining tables.
References: See also Bug #51652.
Made it possible to exercise more direct control over handling
of timeouts occurring when trying to flush redo logs to disk
using two new data node configuration parameters,
as well as a new API node configuration parameter,
all added in this release. Now, when such timeouts occur more
than a specified number of times for the flushing of a given
redo log, any transactions that were to be written are instead
aborted, and the operations contained in those transactions can
be either retried or themselves aborted.
For more information, see Redo log over-commit handling.
It is now possible to stop or restart a node even while other
nodes are starting, using the corresponding MGM API functions
with the appropriate parameter set to 1.
References: See also Bug #58319.
In some circumstances, very large
BLOB read and write operations in
MySQL Cluster applications can cause excessive resource usage
and even exhaustion of memory. To fix this issue and to provide
increased stability when performing such operations, it is now
possible to set limits on the volume of
BLOB data to be read or written
within a given transaction in such a way that when these limits
are exceeded, the current transaction implicitly executes any
accumulated operations. This avoids an excessive buildup of
pending data which can result in resource exhaustion in the NDB
kernel. The limits on the amount of data to be read and on the
amount of data to be written before this execution takes place
can be configured separately. (In other words, it is now
possible in MySQL Cluster to specify read batching and write
batching that is specific to
data.) These limits can be configured either on the NDB API
level, or in the MySQL Server.
On the NDB API level, four new methods are added. Two of these
can be used to get and to set, respectively, the maximum amount
of BLOB data to be read that
accumulates before this implicit execution is triggered; the
other two can be used to get and to set, respectively, the maximum volume
of BLOB data to be written that
accumulates before implicit execution occurs.
For the MySQL server, two new options are added. One
option sets a limit on the amount of pending
BLOB data to be read before
triggering implicit execution, and the other
option controls the amount of pending
BLOB data to be written. These
limits can also be set using the mysqld
configuration file, or read and set within the
mysql client and other MySQL client
applications using the corresponding server system variables.
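For example (a sketch only; the system variable names ndb_blob_read_batch_bytes and ndb_blob_write_batch_bytes are assumed here, since the option names are elided in this entry):
SET GLOBAL ndb_blob_read_batch_bytes = 65536;
SET GLOBAL ndb_blob_write_batch_bytes = 65536;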
In some circumstances, an SQL trigger on an
NDB table could read stale data.
MySQL Cluster failed to compile correctly on FreeBSD 8.1.
Trying to drop an index while it was being used to perform scan updates caused data nodes to crash. (Bug #58277, Bug #57057)
A query using
BETWEEN as part of a
WHERE condition could cause
mysqld to hang or crash.
A query against a table with a unique index could return an empty result.
A query having multiple predicates joined by
OR in the
WHERE clause and
which used the
sort_union access method (as
shown by EXPLAIN) could return incorrect results.
With condition pushdown enabled, a query using
LIKE on an
ENUM column of an
NDB table failed to return any
results. This issue is resolved by disabling condition pushdown when
performing such queries.
A row insert or update followed by a delete operation on the same row within the same transaction could in some cases lead to a buffer overflow. (Bug #59242)
References: See also Bug #56524. This bug was introduced by Bug #35208.
Data nodes no longer allocated all memory prior to being ready to exchange heartbeat and other messages with management nodes, as in NDB 6.3 and earlier versions of MySQL Cluster. This caused problems when data nodes configured with large amounts of memory failed to show as connected or showed as being in the wrong start phase in the ndb_mgm client even after making their initial connections to and fetching their configuration data from the management server. With this fix, data nodes now allocate all memory as they did in earlier MySQL Cluster versions. (Bug #57568)
When a slash character (/) was used as part
of the name of an index on an NDB
table, attempting to execute a
TABLE statement against it failed with the error
Index not found, and the table was left unusable.
Data nodes configured with very large amounts of memory
failed during startup with NDB error 2334 (Job buffer congestion).
References: See also Bug #47984.
On Windows platforms, issuing a
command in the ndb_mgm client caused
management processes that had been started with the
--nodaemon option to exit
Functions such as strcasecmp were declared in
ndb_global.h but never defined or used. The
declarations have been removed.
When handling failures of multiple data nodes, an error in the construction of internal signals could cause the cluster's remaining nodes to crash. This issue was most likely to affect clusters with large numbers of data nodes. (Bug #58240)
The number of rows affected by a statement that used a
WHERE clause having an
IN condition with a value list
containing a great many elements, and that deleted or updated
enough rows that NDB processed
them in batches, was not computed or reported correctly.
When executing a full table scan caused by a
WHERE condition using
unique_key IS NULL
in combination with a join, NDB
failed to close the scan.
References: See also Bug #57481.
The FAIL_REP signal, used inside the NDB
kernel to declare that a node has failed, now includes the node
ID of the node that detected the failure. This information can
be useful in debugging.
In some circumstances, it was possible for
mysqld to begin a new multi-range read scan
without having closed a previous one. This could lead to
exhaustion of all scan operation objects, transaction objects,
or lock objects (or some combination of these) in
NDB, causing queries to fail with
such errors as Lock wait timeout exceeded
or Connect failure - out of connection objects.
References: See also Bug #58750.
During a node takeover, it was possible in some circumstances
for one of the remaining nodes to send an extra transaction
confirmation (LQH_TRANSCONF) signal to the
DBTC kernel block, conceivably leading to a
crash of the data node trying to take over as the new master.
Two related problems could occur with read-committed scans made in parallel with transactions combining multiple (concurrent) operations:
When committing a multiple-operation transaction that contained concurrent insert and update operations on the same record, the commit arrived first for the insert and then for the update. If a read-committed scan arrived between these operations, it could thus read incorrect data; in addition, if the scan read variable-size data, it could cause the data node to fail.
When rolling back a multiple-operation transaction having concurrent delete and insert operations on the same record, the abort arrived first for the delete operation, and then for the insert. If a read-committed scan arrived between the delete and the insert, it could incorrectly assume that the record should not be returned (in other words, the scan treated the insert as though it had not yet been committed).
Partitioning; Disk Data:
When using multi-threaded data nodes, an
NDB table created with a very large
value for the
MAX_ROWS option could—if
this table was dropped and a new table with fewer partitions,
but having the same table ID, was created—cause
ndbmtd to crash when performing a system
restart. This was because the server attempted to examine each
partition whether or not it actually existed.
This issue is the same as that reported in Bug #45154, except that the current issue is specific to ndbmtd instead of ndbd. (Bug #58638)
Disk Data: Performing what should have been an online drop of a multi-column index was actually performed offline. (Bug #55618)
In certain cases, a race condition could occur when
DROP LOGFILE GROUP removed the
logfile group while a read or write of one of the affected files
was in progress, which in turn could lead to a crash of the data node.
A race condition could sometimes be created when
DROP TABLESPACE was run
concurrently with a local checkpoint; this could in turn lead to
a crash of the data node.
When at least one data node was not running, queries against the
INFORMATION_SCHEMA.FILES table took
an excessive length of time to complete because the MySQL server
waited for responses from any stopped nodes to time out. Now, in
such cases, MySQL does not attempt to contact nodes which are
not known to be running.
It was not possible to obtain the status of nodes accurately
after an attempt to stop a data node using
ndb_mgm_stop() failed without
returning an error.
Attempting to read the same value more
than 9000 times within the same transaction caused the
transaction to hang when executed. Now, when more reads are
performed in this way than can be accommodated in a single
transaction, the call fails with a suitable error.
Issuing an ALL DUMP command during a rolling
upgrade to MySQL Cluster NDB 7.1.9 caused the cluster to crash.
The InnoDB plugin was not included
in MySQL Cluster RPM packages.
References: See also Bug #54912.
Functionality Added or Changed
Important Change; InnoDB:
Building the MySQL Server with the
InnoDB plugin is now supported when
building MySQL Cluster. For more information, see
MySQL Cluster Installation and Upgrades.
References: See also Bug #58283.
ndbd now bypasses use of Non-Uniform Memory
Access support on Linux hosts by default. If your system
supports NUMA, you can enable it and override
ndbd's use of interleaving by setting the
Numa data node
configuration parameter, which is added in this release. See
Data Nodes: Realtime Performance Parameters, for more information.
The Id configuration parameter used with
MySQL Cluster management, data, and API nodes (including SQL
nodes) is now deprecated, and the NodeId
parameter (long available as a synonym for Id
when configuring these types of nodes) should be used instead.
Id continues to be supported for reasons of
backward compatibility, but now generates a warning when used
with these types of nodes, and is subject to removal in a future
release of MySQL Cluster.
This change affects the name of the configuration parameter
only, establishing a clear preference for
NodeId over Id in the node
sections of the MySQL Cluster global configuration
(config.ini) file. The behavior of unique
identifiers for management, data, and SQL and API nodes in MySQL
Cluster has not otherwise been altered.
The Id parameter as used in the
[computer] section of the MySQL Cluster
global configuration file is not affected by this change.
A new table providing statistics on disk page buffer usage by Disk Data
tables is added to the
ndbinfo information database.
These statistics can be used to monitor performance of reads and
writes on Disk Data tables, and to assist in the tuning of
related parameters.
MySQL Cluster RPM distributions did not include a
shared-compat RPM for the MySQL Server, which
meant that MySQL applications depending on
libmysqlclient.so.15 (MySQL 5.0 and
earlier) no longer worked.
The method for calculating table schema versions used by schema
transactions did not follow the established rules for recording
schemas used in the
References: See also Bug #57896.
A query of the form
range1 OR range2,
when selecting from an
NDB table having a primary key
on multiple columns, could result in Error 4259
Invalid set of range scan bounds if
range2 started exactly where
range1 ended and the primary key
definition declared the columns in a different order relative to
the order in the table's column list. (Such a query should
simply return all rows in the table, since any such expression
is always true.) Suppose the table is created as shown here:
CREATE TABLE t (a INT, b INT, PRIMARY KEY (b, a)) ENGINE NDB;
This issue could then be triggered by a query such as this one:
SELECT * FROM t WHERE b < 8 OR b >= 8;
In addition, the order of the ranges in the
WHERE clause was significant; the issue was
not triggered, for example, by the query
SELECT * FROM t WHERE b
<= 8 OR b > 8.
ndb_restore now retries failed transactions when replaying log entries, just as it does when restoring data. (Bug #57618)
When a data node angel process failed to fork off a new worker
process (to replace one that had failed), the failure was not
handled. This meant that the angel process either transformed
itself into a worker process, or itself failed. In the first
case, the data node continued to run, but there was no longer
any angel to restart it in the event of failure, even with
StopOnError set to 0.
Transient errors during a local checkpoint were not retried, leading to a crash of the data node. Now when such errors occur, they are retried up to 10 times if necessary. (Bug #57650)
A query with a WHERE condition against an
NDB table having a
VARCHAR column as its primary key
failed to return all matching rows.
The CREATE NODEGROUP and
DROP NODEGROUP commands could cause
mysqld processes to crash.
Data nodes compiled with gcc 4.5 or higher crashed during startup. (Bug #57761)
The schema version carried in the LQHKEYREQ request message used by the
local query handler when checking the major schema version of a
table, being only 16 bits wide, could cause this check to fail
with an Invalid schema version error
(NDB error code 1227). This issue
occurred after creating and dropping (and re-creating) the same
table 65537 times, then trying to insert rows into the table.
References: See also Bug #57897.
The disconnection of an API or management node due to missed heartbeats led to a race condition which could cause data nodes to crash. (Bug #57946)
During a GCP takeover, it was possible for one of the data nodes
not to receive the expected signal,
with the result that it would report itself as
GCP_COMMITTING while the other data nodes reported a different state.
The MAX_ROWS option for
CREATE TABLE was ignored, which
meant that it was not possible to enable multi-threaded building of ordered indexes.
On Windows, the angel process which monitors and (when necessary) restarts the data node process failed to spawn a new worker in some circumstances where the arguments vector contained extra items placed at its beginning. This could occur when the path to ndbd.exe or ndbmtd.exe contained one or more spaces. (Bug #57949)
A number of cluster log warning messages relating to deprecated configuration parameters contained spelling, formatting, and other errors. (Bug #57381)
A GCP stop is detected using 2 parameters which determine the
maximum time that a global checkpoint or epoch can go unchanged;
one of these controls this timeout for GCPs and one controls the
timeout for epochs. Suppose the cluster is configured such that
one of these timeouts is 100 ms but the other is
1500 ms. A node failure can be signalled after 4 missed
heartbeats—in this case, 6000 ms. However, this would exceed the configured timeout,
causing false detection of a GCP stop. To prevent this from
happening, the configured value for the timeout
is automatically adjusted based on the values of the heartbeat interval and related timeouts.
The current issue arose when the automatic adjustment routine
did not correctly take into consideration the fact that, during
cascading node failures, several intervals of length
4 * (HeartbeatIntervalDBDB + ArbitrationTimeout) may
elapse before all node failures have been resolved internally.
This could cause false GCP stop detection in the event of a cascading node failure.
The SUMA kernel block has a 10-element ring
buffer for storing out-of-order
SUB_GCP_COMPLETE_REP signals received from
the local query handlers when global checkpoints are completed.
In some cases, exceeding the ring buffer capacity on all nodes
of a node group at the same time caused the node group to fail
with an assertion.
Aborting a native
NDB backup in the
ndb_mgm client using the
ABORT BACKUP command did not work correctly when using
ndbmtd, in some cases leading to a node crash.
Disk Data: When performing online DDL on Disk Data tables, scans and moving of the relevant tuples were done in more or less random order. This fix causes these scans to be done in the order of the tuples, which should improve performance of such operations due to the more sequential ordering of the scans. (Bug #57848)
References: See also Bug #57827.
An application dropping a table at the same time that another
application tried to set up a replication event on the same
table could lead to a crash of the data node. The same issue
could sometimes cause
An NDB API client program under load could abort with an
assertion error in
References: See also Bug #32708.
Functionality Added or Changed
It is now possible using the ndb_mgm management client or the MGM API to force a data node shutdown or restart even if this would force the shutdown or restart of the entire cluster.
In the management client, this is implemented through the
addition of the
-f (force) option to the relevant node management commands.
For more information, see
Commands in the MySQL Cluster Management Client.
References: See also Bug #34325, Bug #11747863.
An MGM API function which
was previously internal has now been moved to the public API.
This function can be used to get NDB storage
engine and other version information from the management server.
References: See also Bug #51273.
The failure of a data node during some scans could cause other data nodes to fail. (Bug #54945)
A text file containing old MySQL Cluster changelog information was no longer
being maintained, and so has been removed from the tree.
MySQL Cluster stores, for each row in each
NDB table, a Global Checkpoint Index (GCI)
which identifies the last committed transaction that modified
the row. As such, a GCI can be thought of as a coarse-grained row version.
Due to changes in the format used by NDB to
store local checkpoints (LCPs) in MySQL Cluster NDB 6.3.11, it
could happen that, following cluster shutdown and subsequent
recovery, the GCI values for some rows could be changed
unnecessarily; this could possibly, over the course of many node
or system restarts (or both), lead to an inconsistent database.
Under certain rare conditions, attempting to start more than one
ndb_mgmd process simultaneously using the
--reload option caused a race
condition such that none of the ndb_mgmd
processes could start.
At startup, an ndbd or ndbmtd process creates directories for its file system without checking to see whether they already exist. Portability code added in MySQL Cluster NDB 7.0.18 and MySQL Cluster NDB 7.1.7 did not account for this fact, printing a spurious error message when a directory to be created already existed. This unneeded printout has been removed. (Bug #57087)
A data node can be shut down having completed and synchronized a
given GCI x, while having written a
great many log records belonging to the next GCI
x + 1, as part of normal operations.
However, when starting, completing, and synchronizing GCI
x + 1, the log records from the
original start must not be read. To make sure that this does not
happen, the REDO log reader finds the last GCI to restore, scans
forward from that point, and erases any log records that were
not (and should never be) used.
The current issue occurred because this scan stopped immediately as soon as it encountered an empty page. This was problematic because the REDO log is divided into several files; thus, it could be that there were log records in the beginning of the next file, even if the end of the previous file was empty. These log records were never invalidated; following a start or restart, they could be reused, leading to a corrupt REDO log. (Bug #56961)
When multiple SQL nodes were connected to the cluster and one of
them stopped in the middle of a DDL operation, the
mysqld process issuing the DDL timed out with
the error distributing
tbl_name timed out.
Exhausting the number of available commit-ack markers
(the maximum number of which is set by a configuration
parameter) led to a data node crash.
An ALTER TABLE ... ADD COLUMN operation that changed the table
schema such that the number of 32-bit words used for the bitmask
allocated to each DML operation increased, when performed within a
transaction in which DML was executed prior to the DDL and followed by
either another DML operation or—if using
replication—a commit, led to data node failure.
This was because the data node did not take into account that the bitmask for the before-image was smaller than the current bitmask, which caused the node to crash. (Bug #56524)
References: This bug is a regression of Bug #35208.
An error in program flow could
result in data node shutdown routines being called multiple times.
On Windows, a data node refused to start in some cases unless the ndbd.exe executable was invoked using an absolute rather than a relative path. (Bug #56257)
Memory pages assigned to ordered indexes were never freed, even after any
rows that belonged to the corresponding indexes had been deleted.
When distributing DROP TABLE operations among
several SQL nodes attached to a MySQL Cluster, the
LOCK_OPEN lock normally protecting
mysqld's internal table list is released
so that other queries or DML statements are not blocked.
However, to make sure that other DDL is not executed
simultaneously, a global schema lock (implemented as a row-level
lock in NDB) is used, such that all
operations that can modify the state of the
mysqld internal table list also need to
acquire this global schema lock. The
SHOW TABLE STATUS statement did not acquire this lock.
When running a
SELECT on an
NDB table with
TEXT columns, memory was
allocated for the columns but was not freed until the end of the
SELECT. This could cause problems
with excessive memory usage when dumping (using for example
mysqldump) tables with such columns and
having many rows, large column values, or both.
References: See also Bug #56488, Bug #50310.
In certain cases, a cached table object could be left behind,
which caused problems with subsequent DDL operations.
The MGM API function
ndb_mgm_get_version() did not
set the error message before returning with an error. With this
fix, it is now possible, after a failed call to this function,
to obtain an error number and error message, as expected of MGM API calls.
MGM API functions such as
ndb_mgm_restart() set the error
code and message without first checking whether the management
server handle was NULL, which could lead to
fatal errors in MGM API applications calling these functions.
Functionality Added or Changed
More finely grained control over restart-on-failure behavior is
provided with two new data node configuration parameters. One
limits the total number of retries made before giving up on
starting the data node; the other specifies
the number of seconds between retry attempts.
These parameters are used only if
StopOnError is set to 0.
For more information, see Defining MySQL Cluster Data Nodes. (Bug #54341)
It is no longer possible to make a dump of the
ndbinfo database using mysqldump.
Following a failure of the master data node, the new master sometimes experienced a race condition which caused the node to terminate with a GcpStop error. (Bug #56044)
Startup messages previously written by the management server
to stdout are now
written to the cluster log instead when
LogDestination is set.
Trying to create a table having a
TEXT column with a default value of
'' failed with the error Illegal null
attribute. (An empty default is permitted elsewhere in MySQL, and
NDB should behave the same way.)
A configuration parameter was not handled correctly for CPU ID
values greater than 255.
ndb_restore always reported 0 for the
GCPStop (end point of the backup). Now it
provides useful binary log position and epoch information.
The warning MaxNoOfExecutionThreads
(#) > LockExecuteThreadToCPU count
(#), this could cause
contention could be logged when running
ndbd, even though the condition described can
occur only when using ndbmtd.
A node process started with --nodaemon logged to the
console in addition to the configured log destination.
The graceful shutdown of a data node could sometimes cause transactions to be aborted unnecessarily. (Bug #18538)
References: See also Bug #55641.
Functionality Added or Changed
Added the --server-id-bits option for mysqld and mysqlbinlog.
For mysqld, the
--server-id-bits option indicates
the number of least significant bits within the 32-bit server ID
which actually identify the server. Indicating that the server
ID uses less than 32 bits permits the remaining bits to be used
for other purposes by NDB API applications using the Event API
For mysqlbinlog, the same option
tells mysqlbinlog how to interpret the server
IDs in the binary log when the binary log was written by a
mysqld having its
server_id_bits set to less than
the maximum (32).
Important Change; Cluster API:
The poll and select calls made by the MGM API were not
interrupt-safe; that is, a signal caught by the process while
waiting for an event on one or more sockets returned error -1
with errno set to
EINTR. This caused problems with some MGM API
functions.
To fix this problem, the internal
ndb_socket_poller::poll() function has been made safe against interruption by signals.
The old version of this function has been retained as
poll_unsafe(), for use by those parts of NDB
that do not need the EINTR-safe version
of the function.
When another data node failed, a given data node's
DBTC kernel block could time out while
waiting for DBDIH to signal commits of
pending transactions, leading to a crash. Now in such cases the
timeout generates a printout, and the data node continues to operate.
Starting ndb_mgmd with
--config-cache=0 caused it to
An excessive number of timeout warnings (normally used only for debugging) were written to the data node logs. (Bug #53987)
TCP configuration parameters such as
HostName2 were not displayed in the
output of ndb_config
The configure.js option
WITHOUT_DYNAMIC_PLUGINS=TRUE was ignored when
building MySQL Cluster for Windows using
CMake. Among the effects of this issue was
that CMake attempted to build the
InnoDB storage engine as a plugin
(.DLL file) even though the InnoDB
plugin is not currently supported by MySQL Cluster.
It was possible for a
DROP DATABASE statement to remove
NDB hidden blob tables without
removing the parent tables, with the result that the tables,
although hidden to MySQL clients, were still visible in the
output of ndb_show_tables but could not be
dropped using ndb_drop_table.
Disk Data: As an optimization when inserting a row to an empty page, the page is not read, but rather simply initialized. However, this optimization was performed in all cases in which a row was inserted into an empty page, even though it should have been done only if it was the first time that the page had been used by a table or fragment. This is because, if the page had been in use, and then all records had been released from it, the page still needed to be read to learn its log sequence number (LSN).
This caused problems only if the page had been flushed using an incorrect LSN and the data node failed before any local checkpoint was completed—which would remove any need to apply the undo log, hence the incorrect LSN was ignored.
The user-visible result of the incorrect LSN was that it caused the data node to fail during a restart. It was perhaps also possible (although not conclusively proven) that this issue could lead to incorrect data. (Bug #54986)
not update the timer for
Functionality Added or Changed
Restrictions on some types of mismatches in column definitions when restoring data using ndb_restore have been relaxed. These include the following types of mismatches:
Different default values
Different distribution key settings
Now, when one of these types of mismatches in column definitions is encountered, ndb_restore no longer stops with an error; instead, it accepts the data and inserts it into the target table, while issuing a warning to the user.
For more information, see ndb_restore — Restore a MySQL Cluster Backup. (Bug #54423)
References: See also Bug #53810, Bug #54178, Bug #54242, Bug #54279.
It is now possible to install management node and data node processes as Windows services. (See Installing MySQL Cluster Processes as Windows Services, for more information.) In addition, data node processes on Windows are now maintained by angel processes, just as they are on other platforms supported by MySQL Cluster.
The disconnection of all API nodes (including SQL nodes) during
an ALTER TABLE operation caused a memory leak.
The presence of duplicate
[tcp] sections in the
config.ini file caused the management
server to crash. Now in such cases, ndb_mgmd
fails gracefully with an appropriate error message.
A table having the maximum number of attributes permitted could not be backed up using the ndb_mgm client.
The maximum number of attributes supported per table is not the same for all MySQL Cluster releases. See Limits Associated with Database Objects in MySQL Cluster, to determine the maximum that applies in the release which you are using.
When performing an online alter table where 2 or more SQL nodes connected to the cluster were generating binary logs, an incorrect message could be sent from the data nodes, causing mysqld processes to crash. This problem was often difficult to detect, because restarting SQL node or data node processes could clear the error, and because the crash in mysqld did not occur until several minutes after the erroneous message was sent and received. (Bug #54168)
The setting for the relevant configuration parameter was
ignored by ndbmtd, which made it impossible
to use more than 4 cores for rebuilding indexes.
If a node shutdown (either in isolation or as part of a system shutdown) occurred directly following a local checkpoint, it was possible that this local checkpoint would not be used when restoring the cluster. (Bug #54611)
When adding multiple new node groups to a MySQL Cluster, it was
necessary for each new node group to add only the nodes to be
assigned to the new node group, create that node group using
CREATE NODEGROUP, then repeat this process
for each new node group to be added to the cluster. The fix for
this issue makes it possible to add all of the new nodes at one
time, and then issue several CREATE NODEGROUP
commands in succession.
Cluster API: When using the NDB API, it was possible to rename a table with the same name as that of an existing table.
This issue did not affect table renames executed using SQL on MySQL servers acting as MySQL Cluster API nodes.
Cluster API: An excessive number of client connections, such that more than 1024 file descriptors, sockets, or both were open, caused NDB API applications to crash. (Bug #34303)
Functionality Added or Changed
Commercial binary releases of MySQL Cluster NDB 7.1 now include
support for the
InnoDB storage engine.
References: Reverted bug patches: Bug #31989.
The value of an internal constant used in the implementation of
NdbScanOperation classes caused
MySQL Cluster NDB 7.0 NDB API applications compiled against
MySQL Cluster NDB 7.0.14 or earlier to fail when run with MySQL
Cluster 7.0.15, and MySQL Cluster NDB 7.1 NDB API applications
compiled against MySQL Cluster NDB 7.1.3 or earlier to break
when used with MySQL Cluster 7.1.4.
When using mysqldump to back up and restore schema information while using ndb_restore for restoring only the data, restoring to MySQL Cluster NDB 7.1.4 from an older version failed on tables having columns with default values. This was because versions of MySQL Cluster prior to MySQL Cluster NDB 7.1.4 did not have native support for default values.
In addition, the MySQL Server supports
TIMESTAMP columns having dynamic
default values, such as
CURRENT_TIMESTAMP; however, the current implementation
of NDB-native default values permits only a
constant default value.
To fix this issue, the manner in which
NDB handles defaults for TIMESTAMP columns is
reverted to its pre-NDB-7.1.4 behavior (obtaining the default
value from mysqld rather than
NDBCLUSTER) except where a
TIMESTAMP column uses a constant
default, as in the case of a column declared as
TIMESTAMP DEFAULT 0 or with another constant default value.
Functionality Added or Changed
Important Change: The maximum number of attributes (columns plus indexes) per table has increased to 512.
A --wait-nodes option has been added for
ndb_waiter. When this option is used, the
program waits only for the nodes having the listed IDs to reach
the desired state. For more information, see
ndb_waiter — Wait for MySQL Cluster to Reach a Given Status.
Added a new option for ndb_restore. This option causes
ndb_restore to ignore any schema objects
which it does not recognize. Currently, this is useful chiefly
for restoring native backups made from a cluster running MySQL
Cluster NDB 7.0 to a cluster running MySQL Cluster NDB 6.3.
As part of this change, new methods relating to default values
have been added to the
Table and related classes in the NDB API.
Added the MySQL Cluster management server option
--config-cache, which makes it
possible to enable and disable configuration caching. This
option is turned on by default; to disable configuration
caching, start ndb_mgmd with
--config-cache=0, or with the corresponding skip option. See
ndb_mgmd — The MySQL Cluster Management Server Daemon, for more information.
Incompatible Change; Cluster API: The default behavior of the NDB API Event API has changed as follows:
Previously, when creating an
Event, DDL operations (alter
and drop operations on tables) were automatically reported on
any event operation that used this event, but as a result of
this change, this is no longer the case. Instead, you must now
invoke the event's
setReport() method, with
ER_DDL, to get this behavior.
For existing NDB API applications where you wish to retain the
old behavior, you must update the code as indicated previously,
then recompile, following an upgrade. Otherwise, DDL operations
are no longer reported after upgrading
An internal buffer allocator used by
NDB has the form
alloc(wanted, minimum); it attempts to
allocate wanted pages, but is
permitted to allocate a smaller number of pages, down to
minimum. However, this allocator
could sometimes allocate fewer than
minimum pages, causing problems with
multi-threaded building of ordered indexes.
Setting a memory configuration parameter higher than 4G on 32-bit platforms caused
ndbd to crash, instead of failing gracefully
with an error.
(Bug #52536, Bug #50928)
When an NDB log handler failed, the memory
allocated to it was freed twice.
NDB did not distinguish correctly between table names differing
only by lettercase when
lower_case_table_names was set
After creating NDB tables until
the creation of a table failed due to
NDB error 905 Out of
attribute records (increase MaxNoOfAttributes), then
restarting all management node and data node processes,
attempting to drop and re-create one of the tables failed with
the error Out of table records..., even
when sufficient table records were available.
References: See also Bug #52055. This bug is a regression of Bug #44294.
Specifying the node ID as part of the
--ndb-connectstring option to
mysqld was not handled correctly.
The fix for this issue includes the following changes:
Multiple occurrences of mysqld connection options
such as --ndb-nodeid are now handled
in the same way as with other MySQL server options, in that
the value set in the last occurrence of the option is the
value that is used by mysqld.
When --ndb-nodeid is used,
its value overrides any node ID
setting used in the connection string; for
example, starting mysqld with
--ndb-nodeid=3 causes node ID 3 to be used
regardless of any node ID given in the connection string.
The 1024-character limit on the length of the connection
string is removed, and
--ndb-connectstring is now
handled in this regard in the same way as other mysqld options.
In the NDB API, a new constructor is
added which takes as its arguments a connection string and
the node ID to force the API node to use.
When compiled with support for
epoll but this
functionality is not available at runtime, MySQL Cluster tries
to fall back to use the
select() function in
its place. However, a problem
in the transporter registry code caused ndbd
to fail instead.
Creating a Disk Data table, dropping it, then creating an
in-memory table and performing a restart, could cause data node
processes to fail with errors in an NDB
kernel block if the new table's internal ID was the same as
that of the old Disk Data table. This could occur because undo
log handling during the restart did not check that the table
having this ID was now in-memory only.
A table created under certain conditions was not always stored to disk, which could lead to a
data node crash with Error opening DIH schema files.
When creating an index, NDB failed
to check whether the internal ID allocated to the index was
within the permissible range, leading to an assertion. This
issue could manifest itself as a data node failure with
NDB error 707 (No more
table metadata records (increase MaxNoOfTables)),
when creating tables in rapid succession (for example, by a
script, or when importing from mysqldump),
even with a relatively high value for
MaxNoOfTables and a
relatively low number of tables.
ndb_restore did not raise any errors if hashmap creation failed during execution. (Bug #51434)
The value set for the ndb_mgmd option
--ndb-nodeid was not verified
prior to use as being within the permitted range (1 to 255,
inclusive), leading to a crash of the management server.
NDB truncated a column declared as
DECIMAL(65,0) to a length of 64.
Now such a column is accepted and handled correctly. In cases
where the maximum length (65) is exceeded,
NDB now raises an error instead of truncating.
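For example, a definition such as the following is now accepted (a minimal illustration; the table and column names are hypothetical):
CREATE TABLE dec_test (d DECIMAL(65,0)) ENGINE=NDB;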
ndb_mgm -e "ALL STATUS" erroneously reported
that data nodes remained in start phase 0 until they had completed starting.
Functionality Added or Changed
The max column has been renamed to total, and the
used and total (formerly max) columns now display values
in bytes rather than memory pages.
Added the columns
total_pages to show the amount of a
resource used and total amount available in pages.
The size of the memory pages used for calculating the data
shown in the used_pages and total_pages columns is now 32K.
For more information, see The ndbinfo memoryusage Table.
has been removed from the
ndbinfo database. Information
useful to MySQL Cluster administration that was contained in
this table should be available from other ndbinfo tables.
Important Note: MySQL Cluster 7.1 is now supported for production use on Windows platforms.
Some limitations specific to Windows remain; the most important of these are given in the following list:
There is not yet any Windows installer for MySQL Cluster; you must extract, place, configure, and start the necessary MySQL Cluster executables manually.
MySQL Cluster processes cannot yet be installed as Windows services. This means that each process executable must be run from a command prompt, and cannot be backgrounded. If you close the command prompt window in which you started the process, the process terminates.
There is as yet no “angel” process for data nodes; if a data node process quits, it must be restarted manually.
ndb_error_reporter is not yet available on Windows.
The multi-threaded data node process (ndbmtd) is not yet included in the binary distribution. However, it should be built automatically if you build MySQL Cluster from source.
As with MySQL Cluster on other supported platforms, you cannot build MySQL Cluster for Windows from the MySQL Server 5.1 sources; you must use the source code from the MySQL Cluster NDB 7.1 tree.
A mysqld, when attempting to access the
ndbinfo database, crashed if it
could not contact the management server.
The mysql client
command did not work properly. This issue was only known to
affect the version of the mysql client that
was included with MySQL Cluster NDB 7.0 and MySQL Cluster NDB 7.1.
ha_ndbcluster.cc was not compiled with the same
SAFE_MUTEX flags as the MySQL Server.
The internal variable, which
is no longer used, has been removed.
Restoring a MySQL Cluster backup between platforms having different endianness failed when also restoring metadata and the backup contained a hashmap not already present in the database being restored to. This issue was discovered when trying to restore a backup made on Solaris/SPARC to a MySQL Cluster running on Solaris/x86, but could conceivably occur in other cases where the endianness of the platform on which the backup was taken differed from that of the platform being restored to. (Bug #51432)
When performing a complex mix of node restarts and system
restarts, the node that was elected as master sometimes required
optimized node recovery due to missing
information. When this happened, the node crashed with
Failure to recreate object ... during restart, error
721 (because the
code was run twice). Now when this occurs, node takeover is
executed immediately, rather than being made to wait until the
remaining data nodes have started.
References: See also Bug #48436.
Some values shown in the
memoryusage table did not
match corresponding values shown by the
When debug compiling MySQL Cluster on Windows, the mysys library was not compiled with -DSAFEMALLOC and -DSAFE_MUTEX, due to the fact that my_socket.c was misnamed as my_socket.cc. (Bug #51856)
If a node or cluster failure occurred while
mysqld was scanning the
ndb.ndb_schema table (which it does when
attempting to connect to the cluster), insufficient error
handling could lead to a crash by mysqld in
certain cases. This could happen in a MySQL Cluster with a great
many tables, when trying to restart data nodes while one or more
mysqld processes were restarting.
After running a mixed series of node and system restarts, a system restart could hang or fail altogether. This was caused by setting the value of the newest completed global checkpoint too low for a data node performing a node restart, which led to the node reporting incorrect GCI intervals for its first local checkpoint. (Bug #52217)
In MySQL Cluster NDB 7.0 and later, DDL operations are performed within schema transactions; the NDB kernel code for starting a schema transaction checks that all data nodes are at the same version before permitting a schema transaction to start. However, when a version mismatch was detected, the client was not actually informed of this problem, which caused the client to hang. (Bug #52228)
The redo log protects itself from being filled up by
periodically checking how much space remains free. If
insufficient redo log space is available, it sets the state
TAIL_PROBLEM which results in transactions
being aborted with error code 410 (out of redo
log). However, this state was not set following a
node restart, which meant that if a data node had insufficient
redo log space following a node restart, it could crash a short
time later with Fatal error due to end of REDO
log. Now, this space is checked during node restarts.
Packaging; Cluster API:
was missing from the
clusterjpa JAR file.
This could cause setting
ndb” to be rejected.
References: See also Bug #14192154.
Disk Data: Inserts of blob column values into a MySQL Cluster Disk Data table that exhausted the tablespace resulted in misleading no such tuple error messages rather than the expected error tablespace full.
This issue appeared similar to Bug #48113, but had a different underlying cause. (Bug #52201)
DDL operations on Disk Data tables having a relatively small
UNDO_BUFFER_SIZE could fail unexpectedly.
A number of issues were corrected in the NDB API coding examples
found in the
directory in the MySQL Cluster source tree. These included
possible endless recursion in
ndbapi_scan.cpp as well as problems running
some of the examples on systems using Windows or Mac OS X due to
the lettercase used for some table names.
(Bug #30552, Bug #30737)
Functionality Added or Changed
It is now possible to determine, using the
ndb_desc utility or the NDB API, which data
nodes contain replicas of which partitions. For
ndb_desc, a new
--extra-node-info option is
added to cause this information to be included in its output. A
added to the NDB API for obtaining this information
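The NDB API method is not named in this entry; the following sketch assumes it is NdbDictionary::Table::getFragmentNodes() taking a fragment ID, an output array, and the array size, so both the name and the signature should be treated as assumptions, and the table name "t1" is a placeholder:

    #include <NdbApi.hpp>
    #include <cstdio>

    // Print, for each fragment of table "t1", the data nodes holding its replicas.
    void print_fragment_nodes(Ndb* ndb)
    {
      const NdbDictionary::Dictionary* dict = ndb->getDictionary();
      const NdbDictionary::Table* tab = dict->getTable("t1");
      if (tab == NULL)
        return;

      for (Uint32 frag = 0; frag < tab->getFragmentCount(); frag++)
      {
        Uint32 nodes[4];  // assumed to be large enough for all replicas of one fragment
        // Assumed signature: returns the number of node IDs written to the array.
        Uint32 count = tab->getFragmentNodes(frag, nodes, 4);
        printf("fragment %u:", frag);
        for (Uint32 i = 0; i < count; i++)
          printf(" %u", nodes[i]);
        printf("\n");
      }
    }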
A new configuration parameter
HeartbeatThreadPriority makes it possible to
select between a first-in, first-out or round-robin scheduling
policy for management node and API node heartbeat threads, as
well as to set the priority of these threads. See
Defining a MySQL Cluster Management Server, or
Defining SQL and Other API Nodes in a MySQL Cluster, for more information.
Start phases are now written to the data node logs. (Bug #49158)
Numeric codes used in management server status update messages in the cluster logs have been replaced with text descriptions. (Bug #49627)
References: See also Bug #44248.
DUMP commands returned output to all
ndb_mgm clients connected to the same MySQL
Cluster. Now, these commands return their output only to the
ndb_mgm client that actually issued the command.
The ndb_desc utility can now show the extent
space and free extent space for subordinate
TEXT columns (stored in hidden
BLOB tables by NDB). A
--blob-info option has been
added for this program that causes ndb_desc
to generate a report for each subordinate
BLOB table. For more information, see
ndb_desc — Describe NDB Tables.
DATA_MEMORY column of the
memoryusage table was renamed
An initial restart of a data node configured with a large amount of memory could fail with a Pointer too large error. (Bug #51027)
References: This bug was introduced by Bug #47818.
ndbmtd started on a single-core machine could
sometimes fail with a Job Buffer Full
was set greater than
Now a warning is logged when this occurs.
configuration parameter was set in
config.ini, the ndb_mgm
REPORT MEMORYUSAGE command printed its
output multiple times.
greater than 2GB could cause data nodes to crash while starting.
When deciding how to divide the REDO log, the
DBDIH kernel block saved more than was needed
to restore the previous local checkpoint, which could cause REDO
log space to be exhausted prematurely (NDB error 410).
An attempted online upgrade from a MySQL Cluster NDB 6.3 or 7.0 release to a MySQL Cluster NDB 7.1 release failed, as the first upgraded data node rejected the remaining data nodes as using incompatible versions. (Bug #51429)
GROUP BY query against
NDB tables sometimes did not use
any indexes unless the query included a
INDEX option. With this fix, indexes are used by such
queries (where otherwise possible) even when
INDEX is not specified.
The output of the ndb_mgm client
REPORT BACKUPSTATUS command could sometimes
contain errors due to uninitialized data.
transporters table showed
the status of a disconnected node as
DISCONNECTING rather than DISCONNECTED.
A side effect of the ndb_restore
--rebuild-indexes options is
to change the schema versions of indexes. When a
mysqld later tried to drop a table that had
been restored from backup using one or both of these options,
the server failed to detect these changed indexes. This caused
the table to be dropped, but the indexes to be left behind,
leading to problems with subsequent backup and restore operations.
The following issues were fixed in the
DML operations can fail with
NDB error 1220
(REDO log files overloaded...) if the
opening and closing of REDO log files takes too much time. If
this occurred as a GCI marker was being written in the REDO log
while REDO log file 0 was being opened or closed, the error
could persist until a GCP stop was encountered. This issue could
be triggered when there was insufficient REDO log space (for
example, with configuration parameter settings
NoOfFragmentLogFiles = 6 and
FragmentLogFileSize = 6M) with a load
including a very high number of updates.
References: See also Bug #20904.
equal to 1 or 2, if data nodes from one node group were
restarted 256 times and applications were running traffic such
that it would encounter
1204 (Temporary failure, distribution
changed), the live node in the node group would
crash, causing the cluster to crash as well. The crash occurred
only when the error was encountered on the 256th restart; having
the error on any previous or subsequent restart did not cause this issue.
Information about several management client commands was missing
from (that is, truncated in) the output of the
Replication of a MySQL Cluster using multi-threaded data nodes
could fail with forced shutdown of some data nodes due to the
fact that ndbmtd exhausted
more quickly than ndbd. After this fix,
passing of replication data between the
SUMA NDB kernel blocks is done using
DataMemory rather than
Until you can upgrade, you may be able to work around this issue
by increasing the
setting; doubling the default should be sufficient in most cases.
Issuing a command in the ndb_mgm client after it had lost its connection to the management server could cause the client to crash. (Bug #49219)
The ndb_restore message
created index `PRIMARY`... was directed to
stderr instead of stdout.
ndb_restore crashed while trying to restore a corrupted backup, due to missing error handling. (Bug #51223)
ndb_mgm -e "... REPORT ..." did not write any output.
The fix for this issue also prevents the cluster log from being
flooded with INFO messages when
DataMemory usage reaches
100%, and ensures that when the usage decreases, an
appropriate message is written to the cluster log.
(Bug #31542, Bug #44183, Bug #49782)
For a Disk Data tablespace whose extent size was not equal to a
whole multiple of 32K, the value of the
FREE_EXTENTS column in the
INFORMATION_SCHEMA.FILES table was
smaller than the value of
As part of this fix, the implicit rounding of
UNDO_BUFFER_SIZE performed by
CREATE TABLESPACE Syntax) is now done explicitly, and
the rounded values are used for calculating
values and other purposes.
References: See also Bug #31712.
The error message returned after attempting to execute
ALTER LOGFILE GROUP on a
nonexistent log file group did not indicate the reason for the failure.
Once all data files associated with a given tablespace had been
dropped, there was no way for MySQL client applications
(including the mysql client) to tell that the
tablespace still existed. To remedy this problem,
INFORMATION_SCHEMA.FILES now holds
an additional row for each tablespace. (Previously, only the
data files in each tablespace were shown.) This row shows
TABLESPACE in the
FILE_TYPE column, and NULL in the FILE_NAME column.
It was possible to issue a
TABLESPACE statement in which
INITIAL_SIZE was less than
EXTENT_SIZE. (In such cases,
erroneously reported the value of the
FREE_EXTENTS column as
and that of the
TOTAL_EXTENTS column as
0.) Now when either of these statements is
issued such that
INITIAL_SIZE is less than
EXTENT_SIZE, the statement fails with an
appropriate error message.
References: See also Bug #49709.
Cluster API: An issue internal to ndb_mgm could cause problems when trying to start a large number of data nodes at the same time. (Bug #51273)
References: See also Bug #51310.
When reading blob data with lock mode
LM_SimpleRead, the lock was not upgraded as expected.
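For context on the API involved, the following is a minimal sketch of reading a blob value under LM_SimpleRead; the table name "t1", key column "pk", and blob column "payload" are placeholders, and error checking is abbreviated:

    #include <NdbApi.hpp>
    #include <cstring>

    // Read up to 512 bytes of blob data from one row using LM_SimpleRead.
    int read_blob_simple(Ndb* ndb)
    {
      const NdbDictionary::Dictionary* dict = ndb->getDictionary();
      const NdbDictionary::Table* tab = dict->getTable("t1");
      if (tab == NULL)
        return -1;

      NdbTransaction* trans = ndb->startTransaction();
      if (trans == NULL)
        return -1;

      NdbOperation* op = trans->getNdbOperation(tab);
      op->readTuple(NdbOperation::LM_SimpleRead);  // simple read: lock is dropped once the read completes
      op->equal("pk", 1);

      NdbBlob* blob = op->getBlobHandle("payload");
      char buf[512];
      memset(buf, 0, sizeof(buf));
      Uint32 bytes = sizeof(buf);

      int ret = -1;
      if (trans->execute(NdbTransaction::NoCommit) == 0 &&
          blob->readData(buf, bytes) == 0)  // bytes is updated to the length actually read
        ret = 0;

      ndb->closeTransaction(trans);
      return ret;
    }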
Functionality Added or Changed
ndbinfo database is added
to provide MySQL Cluster metadata in real time. The tables
making up this database contain information about memory,
buffer, and other resource usage, as well as configuration
parameters and settings, event counts, and other useful data.
Access to ndbinfo is done by
executing standard SQL queries on its tables using the
mysql command-line client or other MySQL
client application. No special setup procedures are required;
ndbinfo is created
automatically and visible in the output of
SHOW DATABASES when the MySQL
Server is connected to a MySQL Cluster.
For more information, see The ndbinfo MySQL Cluster Information Database.
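As a purely illustrative sketch (not taken from the documentation) of reading ndbinfo with an ordinary SQL query, here through the MySQL C API: the connection parameters are placeholders, and the memoryusage table is chosen only because it appears elsewhere in these notes.

    #include <mysql.h>
    #include <stdio.h>

    int main(void)
    {
      MYSQL* conn = mysql_init(NULL);
      // Placeholder host, account, and port; any SQL node attached to the cluster will do.
      if (mysql_real_connect(conn, "127.0.0.1", "user", "password",
                             "ndbinfo", 3306, NULL, 0) == NULL)
      {
        fprintf(stderr, "connect failed: %s\n", mysql_error(conn));
        return 1;
      }

      // ndbinfo tables are read with ordinary SELECT statements.
      if (mysql_query(conn, "SELECT * FROM ndbinfo.memoryusage") == 0)
      {
        MYSQL_RES* res = mysql_store_result(conn);
        unsigned int ncols = mysql_num_fields(res);
        MYSQL_ROW row;
        while ((row = mysql_fetch_row(res)) != NULL)
        {
          for (unsigned int i = 0; i < ncols; i++)
            printf("%s%s", row[i] ? row[i] : "NULL", (i + 1 < ncols) ? "\t" : "\n");
        }
        mysql_free_result(res);
      }

      mysql_close(conn);
      return 0;
    }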
ClusterJ 1.0 and ClusterJPA 1.0 are now available for
programming Java applications with MySQL Cluster. ClusterJ is a
Java connector providing an object-relational API for performing
high-speed operations such as primary key lookups on a MySQL
Cluster database, but does not require the use of the MySQL
Server or JDBC (Connector/J). ClusterJ uses a new library
NdbJTie which enables direct access from Java to the NDB API and
thus to the
engine. ClusterJPA is a new implementation of
OpenJPA, and can
use either a JDBC connection to a MySQL Cluster SQL node (MySQL
Server) or a direct connection to MySQL Cluster using NdbJTie,
depending on availability and operational performance.
ClusterJ, ClusterJPA, and NdbJTie require Java 1.5 or 1.6, and MySQL Cluster NDB 7.0 or later.
All necessary libraries and other files for ClusterJ, ClusterJPA, and NdbJTie can be found in the MySQL Cluster NDB 7.1.1 or later distribution.
When a primary key lookup on an
table containing one or more
columns was executed in a transaction, a shared lock on any blob
tables used by the
NDB table was
held for the duration of the transaction. (This did not occur
for indexed or non-indexed
Now in such cases, the lock is released after all
BLOB data has been read.
This version was for testing and internal use only, and not officially released.
Functionality Added or Changed
The default value of the
node configuration parameter has changed from 8 to 2.
Incompatible Change; Cluster API:
Several NDB API methods were declared as returning
const values even though the values returned were not
lvalues; the const qualifier has no effect in such cases and
caused compiler warnings when using gcc 4.3 or newer to perform the build.
The methods affected are
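The affected method names are not reproduced here; the following stand-alone snippet, which is not taken from the NDB API headers, merely illustrates the warning in question: a const qualifier on a by-value return type has no effect, and gcc 4.3 and later report it (type qualifiers ignored on function return type) when -Wextra or -Wignored-qualifiers is enabled.

    // Illustrative only; these are not NDB API methods.
    struct Example
    {
      const int badGet() const  { return 42; }  // warning: type qualifiers ignored on function return type
      int       goodGet() const { return 42; }  // same behavior, no warning
    };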
--with-ndb-port-base option for
configure has been removed. It is now handled
as an unknown and invalid option if you attempt to use it when
configuring a build of MySQL Cluster.
References: See also Bug #38502.
mysqld could sometimes crash during a commit while trying to handle NDB Error 4028 Node failure caused abort of transaction. (Bug #38577)