This section contains unified change history highlights for all
MySQL Cluster releases based on version 7.3 of the
NDBCLUSTER storage engine through
MySQL Cluster NDB 7.3.9. Included are all
changelog entries in the categories MySQL
Cluster, Disk Data, and Cluster API.
For an overview of features that were added in MySQL Cluster NDB 7.3, see MySQL Cluster Development in MySQL Cluster NDB 7.3.
Version 5.6.22-ndb-7.3.9 has no changelog entries.
Functionality Added or Changed
Performance: Recent improvements made to the multithreaded scheduler were intended to optimize the cache behavior of its internal data structures, with members of these structures placed such that those local to a given thread do not overflow into a cache line which can be accessed by another thread. Where required, extra padding bytes are inserted to isolate cache lines owned (or shared) by other threads, thus avoiding invalidation of the entire cache line if another thread writes into a cache line not entirely owned by itself. This optimization improved MT Scheduler performance by several percent.
It has since been found that the optimization just described
depends on the global instance of struct
thr_repository starting at a cache line
aligned base address as well as the compiler not rearranging or
adding extra padding to the scheduler struct; it was also found
that these prerequisites were not guaranteed (or even checked).
Thus, this cache line optimization has previously worked only when
g_thr_repository (that is, the global
instance) happened to be cache line aligned by accident. In
addition, on 64-bit platforms, the compiler added extra padding
words in struct
thr_safe_pool such that
attempts to pad it to a cache line aligned size failed.
The current fix ensures that g_thr_repository
is constructed on a cache line aligned address, and the
constructors have been modified so as to verify cache line aligned addresses
where these are assumed by design.
Results from internal testing show improvements in MT Scheduler read performance of up to 10% in some cases, following these changes. (Bug #18352514)
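The following is a minimal, illustrative C++ sketch of the alignment technique described above; it is not the actual NDB source, and the structure and member names are invented.

    #include <cstddef>

    static const std::size_t CACHE_LINE = 64;      // assumed cache line size

    // Each per-thread slot is padded and aligned so that it owns a whole cache
    // line; writes by one thread then never invalidate a neighbour's line.
    struct alignas(CACHE_LINE) PerThreadSlot {
        unsigned long counter;                      // thread-local hot data
        char pad[CACHE_LINE - sizeof(unsigned long)];
    };

    // Aligning the repository type guarantees that the global instance starts
    // on a cache line boundary, analogous to g_thr_repository in the fix.
    struct alignas(CACHE_LINE) Repository {
        PerThreadSlot slots[8];                     // one slot per worker thread
    };

    static Repository g_repository;

    static_assert(sizeof(PerThreadSlot) % CACHE_LINE == 0,
                  "slot must occupy whole cache lines");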
Two new example programs, demonstrating reads and writes of
VARBINARY column values, have
been added to the NDB API examples directory
in the MySQL Cluster source tree. For more information about
these programs, including source code listings, see
NDB API Simple Array Example, and
NDB API Simple Array Example Using Adapter.
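For readers without access to those listings, here is a rough, hedged sketch of reading a length-prefixed VARBINARY value through the NDB API; the table and column names are invented and error handling is abbreviated.

    #include <NdbApi.hpp>

    // Reads one VARBINARY value by primary key; "pk" and "vb_col" are
    // hypothetical names. Data returned by NdbRecAttr::aRef() is
    // length-prefixed (one length byte for columns up to 255 bytes).
    int read_varbinary(Ndb *ndb, const NdbDictionary::Table *tab)
    {
      NdbTransaction *trans = ndb->startTransaction();
      NdbOperation *op = trans->getNdbOperation(tab);
      op->readTuple();
      op->equal("pk", 1);
      NdbRecAttr *attr = op->getValue("vb_col");
      if (trans->execute(NdbTransaction::Commit) != 0)
        return -1;
      const char *raw = attr->aRef();
      unsigned length = static_cast<unsigned char>(raw[0]);
      const char *data = raw + 1;                  // actual binary payload
      (void) data;
      ndb->closeTransaction(trans);
      return static_cast<int>(length);
    }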
The global checkpoint commit and save protocols can be delayed
by various causes, including slow disk I/O. The
DIH master node monitors the progress of both
of these protocols, and can enforce a maximum lag time during
which the protocols are stalled by killing the node responsible
for the lag when it reaches this maximum. This
DIH master GCP monitor mechanism did not
perform its task more than once per master node; that is, it
failed to continue monitoring after detecting and handling a GCP lag.
References: See also Bug #19858151.
When running mysql_upgrade on a MySQL Cluster
SQL node, the expected drop of the
performance_schema database on this node was
instead performed on all SQL nodes connected to the cluster.
A number of problems relating to the fired triggers pool have been fixed, including the following issues:
When the fired triggers pool was exhausted,
NDB returned Error 218 (Out of
LongMessageBuffer). A new error code 221 is
added to cover this case.
An additional, separate case in which Error 218 was wrongly reported now returns the correct error.
Setting low values for the fired triggers pool
led to an error when no memory was allocated if there was
only one hash bucket.
An aborted transaction now releases any fired trigger
records it held. Previously, these records were held until
ApiConnectRecord was reused by a subsequent transaction.
In addition, for the
Fired Triggers pool
in the internal ndb$pools table,
the high value always equaled the total, due to the fact
that all records were momentarily seized when initializing
them. Now the high value shows the maximum following
completion of initialization.
Online reorganization when using ndbmtd data
nodes and with binary logging by mysqld
enabled could sometimes lead to failures in NDB kernel
blocks, or to silent data corruption.
References: See also Bug #19912988.
The local checkpoint ScanFrag watchdog and the global checkpoint monitor can each exclude a node when it is too slow when participating in their respective protocols. This exclusion was implemented by simply asking the failing node to shut down, which in case this was delayed (for whatever reason) could prolong the duration of the GCP or LCP stall for other, unaffected nodes.
To minimize this time, an isolation mechanism has been added to both protocols whereby any other live nodes forcibly disconnect the failing node after a predetermined amount of time. This allows the failing node the opportunity to shut down gracefully (after logging debugging and other information) if possible, but limits the time that other nodes must wait for this to occur. Now, once the remaining live nodes have processed the disconnection of any failing nodes, they can commence failure handling and restart the related protocol or protocols, even if the failed node takes an excessively long time to shut down. (Bug #19858151)
References: See also Bug #20128256.
A watchdog failure resulted from a hang while freeing a disk
page in TUP_COMMITREQ, due to use of an
uninitialized block variable.
(Bug #19815044, Bug #74380)
Multiple threads crashing led to multiple sets of trace files being printed and possibly to deadlocks. (Bug #19724313)
When a client retried against a new master a schema transaction that failed previously against the previous master while the latter was restarting, the lock obtained by this transaction on the new master prevented the previous master from progressing past start phase 3 until the client was terminated, and resources held by it were cleaned up. (Bug #19712569, Bug #74154)
When using the
NDB storage engine,
the maximum possible length of a database or table name is 63
characters, but this limit was not always strictly enforced.
This meant that a statement using a name having 64 characters,
such as CREATE DATABASE,
DROP DATABASE, or
RENAME, could cause the SQL node on which it was
executed to fail. Now such statements fail with an appropriate
error.
When a new data node started, API nodes were allowed to attempt to register themselves with the data node for executing transactions before the data node was ready. This forced the API node to wait an extra heartbeat interval before trying again.
To address this issue, a number of HA_ERR_NO_CONNECTION errors (Error 4009) that could be issued during this time have been changed to Cluster temporarily unavailable errors (Error 4035), which should allow API nodes to use new data nodes more quickly than before. As part of this fix, some errors which were incorrectly categorised have been moved into the correct categories, and some errors which are no longer used have been removed. (Bug #19524096, Bug #73758)
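A hedged sketch of how an NDB API client might treat such temporary errors as retryable follows; only the NdbError fields are taken from the API, and the function and retry policy shown here are illustrative assumptions.

    #include <NdbApi.hpp>
    #include <unistd.h>

    // Retry transaction start when the cluster reports a temporary error
    // (for example, error 4035 "Cluster temporarily unavailable").
    bool execute_with_retry(Ndb *ndb, int max_retries)
    {
      for (int attempt = 0; attempt < max_retries; attempt++)
      {
        NdbTransaction *trans = ndb->startTransaction();
        if (trans == nullptr)
        {
          const NdbError &err = ndb->getNdbError();
          if (err.status == NdbError::TemporaryError)
          {
            sleep(1);        // back off, then retry on temporary errors
            continue;
          }
          return false;      // permanent error: give up
        }
        // ... define and execute operations here ...
        ndb->closeTransaction(trans);
        return true;
      }
      return false;
    }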
When executing very large pushdown joins involving one or more
indexes each defined over several columns, it was possible in
some cases for the
DBSPJ block (see
The DBSPJ Block) in the
NDB kernel to generate
SCAN_FRAGREQ signals that were excessively
large. This caused data nodes to fail when these could not be
handled correctly, due to a hard limit in the kernel on the size
of such signals (32K). This fix bypasses that limitation by
splitting SCAN_FRAGREQ data that is too
large for one such signal, and sending the
SCAN_FRAGREQ as a chunked or fragmented
signal instead.
ndb_index_stat sometimes failed when used against a table containing unique indexes. (Bug #18715165)
Queries against tables containing a CHAR(0) column failed with ERROR 1296 (HY000): Got error 4547 'RecordSpecification has overlapping offsets' from NDBCLUSTER. (Bug #14798022)
In the NDB kernel, it was possible for a
TransporterFacade object to reset a buffer
while the data contained by the buffer was being sent, which
could lead to a race condition.
(Bug #75041, Bug #20112981)
mysql_upgrade failed to drop and recreate the
ndbinfo database and its
tables as expected.
(Bug #74863, Bug #20031425)
Due to a lack of memory barriers, MySQL Cluster programs such as
ndbmtd did not compile on some platforms.
(Bug #74782, Bug #20007248)
In some cases, when run against a table having a
DELETE trigger, a
DELETE statement that matched no
rows still caused the trigger to execute.
(Bug #74751, Bug #19992856)
A basic requirement of the NDB
storage engine's design is that the transporter registry
not attempt to receive data
and update the connection status
of the same set of transporters concurrently, due to the fact
that the updates perform final cleanup and reinitialization of
buffers used when receiving data. Changing the contents of these
buffers while reading or writing to them could lead to "garbage"
or inconsistent signals being read or written.
During the course of work done previously to improve the
implementation of the transporter facade, a mutex intended to
protect against the concurrent use of the
performReceive() and update_connections() methods on the same
transporter was inadvertently removed. This fix adds a watchdog
check for concurrent usage. In addition,
performReceive() calls are now serialized
together while polling the transporters.
(Bug #74011, Bug #19661543)
ndb_restore failed while restoring a table
which contained both a built-in conversion on the primary key
and a staging conversion on a TEXT column.
During staging, a
BLOB table is
created with a primary key column of the target type. However, a
conversion function was not provided to convert the primary key
values before loading them into the staging blob table, which
resulted in corrupted primary key values in the staging
BLOB table. While moving data from the
staging table to the target table, the
read failed because it could not find the primary key in the
staging BLOB table. Now BLOB tables are checked to see
whether there are conversions on primary keys of their main
tables. This check is done after all the main tables are
processed, so that conversion functions and parameters have
already been set for the main tables. Any conversion functions
and parameters used for the primary key in the main table are
now duplicated in the staging BLOB table.
(Bug #73966, Bug #19642978)
Corrupted messages to data nodes sometimes went undetected, causing a bad signal to be delivered to a block which aborted the data node. This failure in combination with disconnecting nodes could in turn cause the entire cluster to shut down.
To keep this from happening, additional checks are now made when unpacking signals received over TCP, including checks for byte order, compression flag (which must not be used), and the length of the next message in the receive buffer (if there is one).
Whenever two consecutive unpacked messages fail the checks just described, the current message is assumed to be corrupted. In this case, the transporter is marked as having bad data and no more unpacking of messages occurs until the transporter is reconnected. In addition, an entry is written to the cluster log containing the error as well as a hex dump of the corrupted message. (Bug #73843, Bug #19582925)
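As a purely illustrative sketch of checks of this kind (the field names are invented, and this is not the actual transporter code):

    // Sanity checks applied to a message header read from the receive buffer.
    struct MsgHeader { unsigned byteOrder; unsigned compressed; unsigned length; };

    bool header_looks_sane(const MsgHeader &h, unsigned ownByteOrder, unsigned bytesLeft)
    {
      if (h.byteOrder != ownByteOrder) return false;            // wrong byte order
      if (h.compressed)                return false;            // compression flag must not be used
      if (h.length == 0 || h.length > bytesLeft) return false;  // next message must fit in the buffer
      return true;
    }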
Transporter send buffers were not updated properly following a failed send. (Bug #45043, Bug #20113145)
When a node acting as a
DICT master fails,
the arbitrator selects another node to take over in place of the
failed node. During the takeover procedure, which includes
cleaning up any schema transactions which are still open when
the master failed, the disposition of the uncommitted schema
transaction is decided. Normally this transaction is rolled
back, but if it has completed a sufficient portion of a commit
request, the new master finishes processing the commit. Until
the fate of the transaction has been decided, no new
TRANS_END_REQ messages from clients can be
processed. In addition, since multiple concurrent schema
transactions are not supported, takeover cleanup must be
completed before any new transactions can be started.
A similar restriction applies to any schema operations which are performed in the scope of an open schema transaction. The counter used to coordinate schema operation across all nodes is employed both during takeover processing and when executing any non-local schema operations. This means that starting a schema operation while its schema transaction is in the takeover phase causes this counter to be overwritten by concurrent uses, with unpredictable results.
The scenarios just described were handled previously using a pseudo-random delay when recovering from a node failure. Now we check whether the new master has rolled forward or back any schema transactions remaining after the failure of the previous master, and avoid starting new schema transactions or performing operations using old transactions until takeover processing has cleaned up after the abandoned transaction. (Bug #19874809, Bug #74503)
When a node acting as
DICT master fails, it
is still possible to request that any open schema transaction be
either committed or aborted by sending this request to the new
DICT master. In this event, the new master
takes over the schema transaction and reports back on whether
the commit or abort request succeeded. In certain cases, it was
possible for the new master to be misidentified—that is,
the request was sent to the wrong node, which responded with an
error that was interpreted by the client application as an
aborted schema transaction, even in cases where the transaction
could have been successfully committed, had the correct node
been addressed.
(Bug #74521, Bug #19880747)
It was possible to delete an Ndb_cluster_connection object
while there remained instances of
Ndb using references to it. Now the
Ndb_cluster_connection destructor waits
for all related
Ndb objects to be released
before it completes.
References: See also Bug #19846392.
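A minimal sketch of the teardown ordering this implies for applications (object names are arbitrary): release every Ndb instance before destroying the Ndb_cluster_connection from which they were created.

    #include <NdbApi.hpp>

    void shutdown_example(Ndb_cluster_connection *conn, Ndb *ndb1, Ndb *ndb2)
    {
      delete ndb1;      // release every Ndb instance first
      delete ndb2;
      delete conn;      // the destructor now waits for any remaining Ndb objects
    }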
The buffer allocated by an
NdbScanOperation for receiving
scanned rows was not released until the
NdbTransaction owning the scan
operation was closed. This could lead to excessive memory usage
in an application where multiple scans were created within the
same transaction, even if these scans were closed at the end of
their lifecycle, unless close() was
invoked with the releaseOp argument set to
true. Now the buffer is released
whenever the cursor navigating the result set is closed with
NdbScanOperation::close(), regardless of the
value of this argument.
(Bug #75128, Bug #20166585)
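A hedged sketch of the scan lifecycle this entry describes (the table and column names are assumed); with the fix, the receive buffer is freed when the scan is closed, whichever value is passed for releaseOp.

    #include <NdbApi.hpp>

    void scan_and_close(NdbTransaction *trans, const NdbDictionary::Table *tab)
    {
      NdbScanOperation *scan = trans->getNdbScanOperation(tab);
      scan->readTuples(NdbOperation::LM_Read);
      NdbRecAttr *val = scan->getValue("c1");
      trans->execute(NdbTransaction::NoCommit);
      while (scan->nextResult(true) == 0)
      {
        // ... consume val->aRef() for each row ...
      }
      scan->close(false /* forceSend */, false /* releaseOp */);  // buffer freed here
      (void) val;
    }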
Functionality Added or Changed
After adding new data nodes to the configuration file of a MySQL
Cluster having many API nodes, but prior to starting any of the
data node processes, API nodes tried to connect to these
“missing” data nodes several times per second,
placing extra loads on management nodes and the network. To
reduce unnecessary traffic caused in this way, it is now
possible to control the amount of time that an API node waits
between attempts to connect to data nodes which fail to respond;
this is implemented in two new API node configuration parameters,
StartConnectBackoffMaxTime and ConnectBackoffMaxTime.
Time elapsed during node connection attempts is not taken into
account when applying these parameters, both of which are given
in milliseconds with approximately 100 ms resolution. As long as
the API node is not connected to any data nodes as described
previously, the value of the
StartConnectBackoffMaxTime parameter is
applied; otherwise, ConnectBackoffMaxTime is used.
In a MySQL Cluster with many unstarted data nodes, the values of these parameters can be raised to circumvent connection attempts to data nodes which have not yet begun to function in the cluster, as well as moderate high traffic to management nodes.
For more information about the behavior of these parameters, see Defining SQL and Other API Nodes in a MySQL Cluster. (Bug #17257842)
Added the --exclude-missing-tables
option for ndb_restore. When enabled, the
option causes tables present in the backup but not in the target
database to be ignored.
(Bug #57566, Bug #11764704)
When assembling error messages of the form Incorrect
state for node
node_state, written when
the transporter failed to connect, the node state was used in
place of the node ID in a number of instances, which resulted in
errors of this type for which the node state was reported
incorrectly.
(Bug #19559313, Bug #73801)
In some cases, transporter receive buffers were reset by one thread while being read by another. This happened when a race condition occurred between a thread receiving data and another thread initiating disconnect of the transporter (disconnection clears this buffer). Concurrency logic has now been implemented to keep this race from taking place. (Bug #19554279, Bug #73656)
The failure of a data node could in some situations cause a set
of API nodes to fail as well due to the sending of a
CLOSE_COMREQ signal that was sometimes not
A more detailed error report is printed in the event of a
critical failure in one of the
sendSignal*() methods, prior to crashing the
process, as was already implemented for
sendSignal(), but was missing from the other
sendSignal*() variants.
Having a crash of this type correctly reported can help with
identifying configuration or hardware issues in some cases.
References: See also Bug #19390895.
ndb_restore failed to restore the cluster's metadata when there were more than approximately 17 K data objects. (Bug #19202654)
The fix for a previous issue with the handling of multiple node failures required determining the number of TC instances the failed node was running, then taking them over. The mechanism to determine this number sometimes provided an invalid result which caused the number of TC instances in the failed node to be set to an excessively high value. This in turn caused redundant takeover attempts, which wasted time and had a negative impact on the processing of other node failures and of global checkpoints. (Bug #19193927)
References: This bug was introduced by Bug #18069334.
Parallel transactions performing reads immediately preceding a
delete on the same tuple could cause the NDB
kernel to crash. This was more likely to occur when separate TC
threads were specified using the ThreadConfig configuration parameter.
Attribute promotion between different
TEXT types (any of TINYTEXT, TEXT, MEDIUMTEXT, or LONGTEXT) by
ndb_restore was not handled properly in some
cases. In addition,
TEXT values are now
truncated according to the limits set by
mysqld (for example, values converted to
TINYTEXT from another type are truncated to
256 bytes). In the case of columns using a multibyte character
set, the value is truncated to the end of the last well-formed
character.
Also as a result of this fix, conversion to a
TEXT column of any size that uses a different
character set from the original is now disallowed.
The NDB optimized node recovery mechanism
attempts to transfer only relevant page changes to a starting
node in order to speed the recovery process; this is done by
having the starting node indicate the index of the last global
checkpoint (GCI) in which it participated, so that the node that
was already running copies only data for rows which have changed
since that GCI. Every row has a GCI metacolumn which facilitates
this; for a deleted row, the slot formerly storing this
row's data contains a GCI value, and for deleted pages,
every row on the missing page is considered changed and thus
needs to be sent.
When these changes are received by the starting node, this node performs a lookup for the page and index to determine what they contain. This lookup could cause a real underlying page to be mapped against the logical page ID, even when this page contained no data.
One way in which this issue could manifest itself occurred after
DataMemory usage approached its maximum, and deletion of many rows followed by a
rolling restart of the data nodes was performed with the
expectation that this would free memory, but in fact it was
possible in this scenario for memory not to be freed and in some
cases for memory usage actually to increase to its maximum.
This fix solves these issues by ensuring that a real physical page is mapped to a logical ID during node recovery only when this page contains actual data which needs to be stored. (Bug #18683398, Bug #18731008)
When a data node reported an inconsistency
due to a buffer overflow and no event data had yet been sent for
the current epoch, the dummy event list created to handle this
inconsistency was not deleted after the information in the dummy
event list was transferred to the completed list.
Incorrect calculation of the next autoincrement value following
a manual insertion towards the end of a cached range could
result in duplicate values sometimes being used. This issue
could manifest itself when using certain combinations of values
for auto_increment_increment and auto_increment_offset.
This issue has been fixed by modifying this calculation to
ensure that the next value from the cache as computed by
NDB is of the form
auto_increment_offset + (N * auto_increment_increment). This avoids any rounding up
by the MySQL Server of the returned value, which could result in
duplicate entries when the rounded-up value fell outside the
range of values cached by NDB.
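The corrected calculation can be illustrated with the following sketch; this is a simplification rather than the server's actual code, and keeps the cached next value on the offset-plus-multiple-of-increment grid so that the server never rounds it past other cached values.

    // Smallest value >= candidate of the form offset + N * increment.
    unsigned long long next_in_series(unsigned long long candidate,
                                      unsigned long long offset,
                                      unsigned long long increment)
    {
      if (candidate <= offset)
        return offset;
      unsigned long long steps = (candidate - offset + increment - 1) / increment;
      return offset + steps * increment;
    }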
Using the --help option with
ndb_print_file caused the program to fail.
For multithreaded data nodes, some threads do not communicate often, with the result that very old signals can remain at the top of the signal buffers. When performing a thread trace, the signal dumper calculated the latest signal ID from what it found in the signal buffers, which meant that these old signals could be erroneously counted as the newest ones. Now the signal ID counter is kept as part of the thread state, and it is this value that is used when dumping signals for trace files. (Bug #73842, Bug #19582807)
The fix for Bug #16723708 stopped the ndb_logevent_get_next()
function from casting a log event's
ndb_mgm_event_category to an
enum type, but this change interfered with
existing applications, and so the function's original
behavior is now reinstated. A new MGM API function exhibiting
the corrected behavior (ndb_logevent_get_next2()) has
been added in this release to take the place of the reverted
function, for use in applications that do not require backward
compatibility. In all other respects apart from this, the new
function is identical with its predecessor.
NDB API scans leaked
was called when an operation resulted in an error. This leak
locked up the corresponding connection objects in the
DBTC kernel block until the connection was closed.
Functionality Added or Changed
Added as an aid to debugging the ability to specify a
human-readable name for a given
Ndb object and later to
retrieve it. These operations are implemented, respectively, as
the setNdbObjectName() and getNdbObjectName() methods.
To make tracing of event handling between a user application and
NDB easier, you can use the reference (from
getReference()) followed by
the name (if provided) in printouts; the reference ties together
the Ndb object, the event buffer,
and the NDB storage engine's NDB_SHARE object.
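A short sketch of how an application might use these methods together with getReference(); the method names follow this entry, but the exact signatures shown here are assumptions.

    #include <NdbApi.hpp>
    #include <cstdio>

    void label_ndb_object(Ndb_cluster_connection *conn)
    {
      Ndb ndb(conn, "test_db");
      ndb.setNdbObjectName("order-processing worker 3");   // typically set before init()
      ndb.init();
      printf("Ndb reference %u name '%s'\n",
             (unsigned) ndb.getReference(), ndb.getNdbObjectName());
    }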
When two tables had different foreign keys with the same name,
ndb_restore considered this a name conflict
and failed to restore the schema. As a result of this fix, a
slash character (
/) is now expressly
disallowed in foreign key names, and the internal naming format
for foreign keys is now enforced by the NDB API.
Processing a NODE_FAILREP signal that contained an invalid node ID could cause a data node to fail. (Bug #18993037, Bug #73015)
References: This bug is a regression of Bug #16007980.
When building out of source, some files were written to the
source directory instead of the build directory. These included the
manifest.mf files used for creating
ClusterJ jars and the
pom.xml file used by
mvn_install_ndbjtie.sh. In addition,
ndbinfo.sql was written to the build
directory, but marked as output to the source directory in
the CMake configuration.
(Bug #18889568, Bug #72843)
When the binary log injector thread commits an epoch to the binary log and this causes the log file to reach maximum size, it may need to rotate the binary log. The rotation is not performed until either all the committed transactions from all client threads are flushed to the binary log, or a maximum of 30 seconds has elapsed. In the case where all transactions were committed prior to the 30-second wait, it was possible for committed transactions from multiple client threads to belong to newer epochs than the latest epoch committed by the injector thread, causing the thread to deadlock with itself, and causing an unnecessary 30-second delay before breaking the deadlock. (Bug #18845822)
Adding a foreign key failed with NDB Error 208 if the parent index was parent table's primary key, the primary key was not on the table's initial attributes, and the child table was not empty. (Bug #18825966)
When an NDB table served as both
the parent table and a child table for two different foreign keys
having the same name, dropping the foreign key on the child
table could cause the foreign key on the parent table to be
dropped instead, leading to a situation in which it was
impossible to drop the remaining foreign key. This situation can
be modelled using the following statements:

    CREATE TABLE parent (
        id INT NOT NULL,
        PRIMARY KEY (id)
    ) ENGINE=NDB;

    CREATE TABLE child (
        id INT NOT NULL,
        parent_id INT,
        PRIMARY KEY (id),
        INDEX par_ind (parent_id),
        FOREIGN KEY (parent_id) REFERENCES parent(id)
    ) ENGINE=NDB;

    CREATE TABLE grandchild (
        id INT,
        parent_id INT,
        INDEX par_ind (parent_id),
        FOREIGN KEY (parent_id) REFERENCES child(id)
    ) ENGINE=NDB;
With the tables created as just shown, the issue occurred when
executing the statement
ALTER TABLE child
DROP FOREIGN KEY parent_id, because it was possible in
some cases for
NDB to drop the foreign key from the
grandchild table instead. When this
happened, any subsequent attempt to drop the foreign key from
child or from the
grandchild table failed.
Running ALTER TABLE ... REORGANIZE PARTITION after increasing the
number of data nodes in the cluster from 4 to 16 led to a crash
of the data nodes. This issue was shown to be a regression
caused by a previous fix, which added a new dump handler using a
dump code that was already in use (7019), which caused the
command to execute two different handlers with different
semantics. The new handler was assigned a new
DUMP code (7024).
References: This bug is a regression of Bug #14220269.
Following a long series of inserts, when running with a
relatively small redo log and an insufficiently large value configured for it,
there remained transactions that were blocked by the lack of
redo log and were thus not aborted in the correct state (waiting
for prepare log to be sent to disk, or
LOG_QUEUED state). This caused the redo log
to remain blocked until unblocked by a completion of a local
checkpoint. This could lead to a deadlock, in which the blocked
aborts in turn blocked global checkpoints, and blocked GCPs
block LCPs. To prevent this situation from arising, we now abort
immediately when we reach the LOG_QUEUED
state in the abort state handler.
ndbmtd supports multiple parallel receiver
threads, each of which performs signal reception for a subset of
the remote node connections (transporters) with the mapping of
remote_nodes to receiver threads decided at node startup.
Connection control is managed by the multi-instance
TRPMAN block, which is organized as a proxy
and workers, and each receiver thread has a
TRPMAN worker running locally.
The QMGR block sends signals to
TRPMAN to enable and disable communications
with remote nodes. These signals are sent to the
TRPMAN proxy, which forwards them to the
workers. The workers themselves decide whether to act on
signals, based on the set of remote nodes they manage.
The current issue arises because the mechanism used by the
TRPMAN workers for determining which
connections they are responsible for was implemented in such a
way that each worker thought it was responsible for all
connections. This resulted in the
CLOSE_COMREQ being processed multiple times.
The fix keeps
TRPMAN instances (receiver
threads) from processing CLOSE_COMREQ requests
that are not their responsibility. In addition, the correct
TRPMAN instance is now chosen when
routing signals for a specific remote connection.
During data node failure handling, the transaction coordinator
performing takeover gathers all known state information for any
failed TC instance transactions, determines whether each
transaction has been committed or aborted, and informs any
involved API nodes so that they can report this accurately to
their clients. The TC instance provides this information by
sending TCKEY_FAILCONF signals to the API nodes as
appropriate to each affected transaction.
In the event that this TC instance does not have a direct
connection to the API node, it attempts to deliver the signal by
routing it through another data node in the same node group as
the failing TC, and sends a
GSN_TCKEY_FAILREFCONF_R signal to TC block
instance 0 in that data node. A problem arose in the case of
multiple transaction coordinators, when this TC instance did not
have a signal handler for such signals, which led it to fail.
This issue has been corrected by adding a handler to the TC proxy block which in such cases forwards the signal to one of the local TC worker instances, which in turn attempts to forward the signal on to the API node. (Bug #18455971)
When running with a very slow main thread, and one or more
transaction coordinator threads, on different CPUs, it was
possible to encounter a timeout when sending a
DIH_SCAN_GET_NODESREQ signal, which could
lead to a crash of the data node. Now in such cases the timeout
is handled without failing the node.
Failure of multiple nodes while using ndbmtd with multiple TC threads was not handled gracefully under a moderate amount of traffic, which could in some cases lead to an unplanned shutdown of the cluster. (Bug #18069334)
A local checkpoint (LCP) is tracked using a global LCP state
(c_lcpState), and each
NDB table has a status indicator
which indicates the LCP status of that table
(tabLcpStatus). If the global LCP state is
LCP_STATUS_IDLE, then all the tables should
have an LCP status of TLS_COMPLETED.
When an LCP starts, the global LCP status is set to
LCP_INIT_TABLES and the thread starts setting
NDB tables to
TLS_ACTIVE. If any tables are not ready for
LCP, the LCP initialization procedure continues with
CONTINUEB signals until all tables have
become available and been marked TLS_ACTIVE.
When this initialization is complete, the global LCP status is
set to LCP_STATUS_ACTIVE.
This bug occurred when the following conditions were met:
An LCP was in the LCP_INIT_TABLES state,
and some but not all tables had been set to TLS_ACTIVE.
The master node failed before the global LCP state changed
to LCP_STATUS_ACTIVE; that is, before the
LCP could finish processing all tables.
The NODE_FAILREP signal resulting from
the node failure was processed before the final
CONTINUEB signal from the LCP
initialization process, so that the node failure was
processed while the LCP remained in the
LCP_INIT_TABLES state.
Following master node failure and selection of a new one, the
new master queries the remaining nodes with a
MASTER_LCPREQ signal to determine the state
of the LCP. At this point, since the LCP status was
LCP_INIT_TABLES, the LCP status was reset to
LCP_STATUS_IDLE. However, the LCP status of
the tables was not modified, so there remained tables with
TLS_ACTIVE. Afterwards, the failed node is
removed from the LCP. If the LCP status of a given table is
TLS_ACTIVE, there is a check that the global
LCP status is not
LCP_STATUS_IDLE; this check
failed and caused the data node to fail.
Now the MASTER_LCPREQ handler ensures that
tabLcpStatus for all tables is updated to
TLS_COMPLETED when the global LCP status is
reset to LCP_STATUS_IDLE.
When performing a copying ALTER
TABLE operation, mysqld creates a
new copy of the table to be altered. This intermediate table,
which is given a name bearing the prefix
#sql-, has an updated schema but contains no
data. mysqld then copies the data from the
original table to this intermediate table, drops the original
table, and finally renames the intermediate table with the name
of the original table.
mysqld regards such a table as a temporary
table and does not include it in the output from
SHOW TABLES. mysqldump also ignores such an intermediate table.
However, NDB sees no difference
between such an intermediate table and any other table. This
difference in how intermediate tables are viewed by
mysqld (and MySQL client programs) and by the
NDB storage engine can give rise to problems
when performing a backup and restore if an intermediate table
exists in NDB, possibly left over from a failed
ALTER TABLE that used copying. If a
schema backup is performed using mysqldump
and the mysql client, this table is not
included. However, in the case where a data backup was done
using the ndb_mgm client's
BACKUP command, the intermediate table was
included, and was also included by
ndb_restore, which then failed due to
attempting to load data into a table which was not defined in
the backed up schema.
To prevent such failures from occurring,
ndb_restore now by default ignores
intermediate tables created during ALTER
TABLE operations (that is, tables whose names begin
with the prefix
#sql-). A new option,
--exclude-intermediate-sql-tables[=TRUE|FALSE],
is added that makes it possible to override the new behavior.
The option's default value is TRUE; to
cause ndb_restore to revert to the old
behavior and to attempt to restore intermediate tables, set this
option to FALSE.
The logging of insert failures has been improved. This is
intended to help diagnose occasional issues seen when writing to
the mysql.ndb_binlog_index table.
The DEFINER column in the
INFORMATION_SCHEMA.VIEWS table
contained erroneous values for views contained in the
ndbinfo information database. This could be
seen in the result of a query such as SELECT
TABLE_NAME, DEFINER FROM INFORMATION_SCHEMA.VIEWS WHERE
TABLE_SCHEMA='ndbinfo'.
Using a CHAR column with the
UTF8 character set as a table's
primary key column led to node failure when restarting data
nodes. Attempting to restore a table with such a primary key
also caused ndb_restore to fail.
(Bug #16895311, Bug #68893)
The --order (-o) option for the
ndb_select_all utility worked only when
specified as the last option, and did not work with an equals
sign. As part of this fix, the program's
output was also aligned with the
option's correct behavior.
(Bug #64426, Bug #16374870)
Setting the undo buffer size used by
InitialLogFileGroup to a
value greater than that set by SharedGlobalMemory
prevented data nodes from starting; the data nodes failed with
Error 1504 Out of logbuffer memory. While
the failure itself is expected behavior, the error message did
not provide sufficient information to diagnose the actual source
of the problem; now in such cases, a more specific error message,
Out of logbuffer memory (specify smaller
undo_buffer_size or increase SharedGlobalMemory), is provided.
(Bug #11762867, Bug #55515)
When an NDB data node indicates a
buffer overflow via an empty epoch, the event buffer places an
inconsistent data event in the event queue. When this was
consumed, it was not removed from the event queue as expected,
causing subsequent nextEvent() calls to return
0. This caused event consumption to stall because the
inconsistency remained flagged forever, while event data
accumulated in the queue.
Event data belonging to an empty inconsistent epoch can be found
either at the beginning or somewhere in the middle of the event queue.
pollEvents() returns 0 for
the first case. This fix handles the second case: calling
nextEvent() now dequeues the inconsistent
event before it returns. In order to benefit from this fix, user
applications must call
nextEvent() even when
pollEvents() returns 0.
pollEvents() returned 1, even when called with a wait time equal to 0, and
there were no events waiting in the queue. Now in such cases it
returns 0 as expected.
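A hedged sketch of the consumption pattern this implies for applications: call nextEvent() even when pollEvents() reports no pending events, so that an empty inconsistent epoch queued mid-stream is still dequeued. The timeout value is arbitrary.

    #include <NdbApi.hpp>

    void drain_events(Ndb *ndb)
    {
      int res = ndb->pollEvents(1000 /* ms */);
      // Call nextEvent() regardless of res; with this fix it removes an
      // inconsistent-epoch marker even when no regular event data is pending.
      while (NdbEventOperation *op = ndb->nextEvent())
      {
        // ... handle the event data delivered through op ...
        (void) op;
      }
      (void) res;
    }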
Functionality Added or Changed
Handling of LongMessageBuffer shortages and statistics has been improved as follows:
The default value of LongMessageBuffer
has been increased from 4 MB to 64 MB.
When this resource is exhausted, a suitable informative message is now printed in the data node log describing possible causes of the problem and suggesting possible solutions.
LongMessageBuffer usage information is
now shown in the ndbinfo.memoryusage table.
See the description of this table for an example and
additional information.
The server system variables
ndb_index_stat_cache_entries and ndb_index_stat_update_freq, which had been
deprecated in a previous MySQL Cluster release series, have now
been removed.
(Bug #11746486, Bug #26673)
When an ALTER TABLE statement
changed a table schema without causing a change in the
table's partitioning, the new table definition did not copy
the hash map from the old definition, but used the current
default hash map instead. However, the table data was not
reorganized according to the new hashmap, which made some rows
inaccessible using a primary key lookup if the two hash maps had
distributed the rows differently.
To keep this situation from occurring, any ALTER
TABLE that entails a hashmap change now triggers a
reorganization of the table. In addition, when copying a table
definition in such cases, the hashmap is now also copied.
When certain queries generated signals having more than 18 data words prior to a node failure, such signals were not written correctly in the trace file. (Bug #18419554)
Checking of timeouts is handled by the signal
TIME_SIGNAL. Previously, this signal was
generated by the QMGR
NDB kernel block in the main
thread, and sent to the QMGR, DBLQH, and DBTC blocks
(see NDB Kernel Blocks) as needed to
check (respectively) heartbeats, disk writes, and transaction
timeouts. In ndbmtd (as opposed to
ndbd), these blocks all execute in different
threads. This meant that if, for example,
QMGR was actively working and some other
thread was put to sleep, the previously sleeping thread received
a large number of TIME_SIGNAL messages
simultaneously when it was woken up again, with the effect that
effective times moved very quickly in DBLQH
as well as in DBTC. In
DBLQH, this had no noticeable adverse
effects, but this was not the case in DBTC;
the latter block could not work on transactions even though time
was still advancing, leading to a situation in which many
operations appeared to time out because the transaction
coordinator (TC) thread was comparatively slow in answering
In addition, when the TC thread slept for longer than 1500
milliseconds, the data node crashed due to detecting that the
timeout handling loop had not yet stopped. To rectify this
problem, the generation of the TIME_SIGNAL
has been moved into the local threads instead of
QMGR; this provides for better control over
how often TIME_SIGNAL messages are allowed
to be generated.
After dropping an NDB table,
neither the cluster log nor the output of the
REPORT MemoryUsage command showed that the
IndexMemory used by that
table had been freed, even though the memory had in fact been
deallocated. This issue was introduced in MySQL Cluster NDB
ndb_show_tables sometimes failed with the
error message Unable to connect to management
server and immediately terminated, without providing
the underlying reason for the failure. To provide more useful
information in such cases, this program now also prints the most
recent error from the management API handle
used to instantiate the connection.
Building with -DWITH_NDBMTD=0 did not function
correctly, which could cause the build to fail on platforms such
as ARM and Raspberry Pi which do not define the memory barrier
functions required to compile ndbmtd.
References: See also Bug #16620938.
The block threads managed by the multi-threading scheduler communicate by placing signals in an out queue or job buffer which is set up between all block threads. This queue has a fixed maximum size, such that when it is filled up, the worker thread must wait for the consumer to drain the queue. In a highly loaded system, multiple threads could end up in a circular wait lock due to full out buffers, such that they were preventing each other from performing any useful work. This condition eventually led to the data node being declared dead and killed by the watchdog timer.
To fix this problem, we detect situations in which a circular wait lock is about to begin, and cause buffers which are otherwise held in reserve to become available for signal processing by queues which are highly loaded. (Bug #18229003)
An issue found when compiling the MySQL Cluster software for
Solaris platforms could lead to problems when using
ThreadConfig on such systems.
The ndb_mgm client
BACKUP command (see
Commands in the MySQL Cluster Management Client) could
experience occasional random failures when a ping was received
prior to an expected reply.
Now the connection established by this command is not checked
until it has been properly set up.
When creating a table with foreign key referencing an index in another table, it sometimes appeared possible to create the foreign key even if the order of the columns in the indexes did not match, due to the fact that an appropriate error was not always returned internally. This fix improves the error used internally to work in most cases; however, it is still possible for this situation to occur in the event that the parent index is a unique index. (Bug #18094360)
Dropping a nonexistent foreign key on an
NDB table (using, for example,
ALTER TABLE) appeared to succeed.
Now in such cases, the statement fails with a relevant error
message, as expected.
Data nodes running ndbmtd could stall while performing an online upgrade of a MySQL Cluster containing a great many tables from a version prior to NDB 7.2.5 to version 7.2.5 or later. (Bug #16693068)
When an NDB API client application received a signal with an
invalid block or signal number, it printed
only a very brief error message that did not accurately convey
the nature of the problem. Now in such cases, appropriate
printouts are provided when a bad signal or message is detected.
In addition, the message length is now checked to make certain
that it matches the size of the embedded signal.
Refactoring that was performed in MySQL Cluster NDB 7.3.4
inadvertently introduced a dependency in
Ndb.hpp on a file that is not included in
the distribution, which caused NDB API applications to fail to
compile. The dependency has been removed.
(Bug #18293112, Bug #71803)
References: This bug was introduced by Bug #17647637.
An NDB API application sends a scan query to a data node; the
scan is processed by the transaction coordinator (TC). The TC
sends an LQHKEYREQ request to the
appropriate LDM, and aborts the transaction if it does not
receive an LQHKEYCONF response within the
specified time limit. After the transaction is successfully
aborted, the TC sends a
TCROLLBACKREP signal to the
NDB API client, which processes this message by
cleaning up any objects
associated with the transaction.
The client receives the data which it has requested in the form
of TRANSID_AI signals, which are buffered for sending
at the data node, and may be delivered after a delay. On
receiving such a signal,
NDB checks the
transaction state and ID: if these are as expected, it processes
the signal using the
Ndb objects associated
with that transaction.
The current bug occurs when all the following conditions are fulfilled:
The transaction coordinator aborts a transaction due to
delays and sends a TCROLLBACKREP signal
to the client, while at the same time a
TRANSID_AI which has been buffered for
delivery at an LDM is delivered to the same client.
The NDB API client considers the transaction complete on
receipt of a
TCROLLBACKREP signal, and
immediately closes the transaction.
The client has a separate receiver thread running concurrently with the thread that is engaged in closing the transaction.
The arrival of the late TRANSID_AI
interleaves with the closing of the user thread's
transaction such that TRANSID_AI
processing passes normal checks before closeTransaction()
resets the transaction state and invalidates the receiver.
When these conditions are all met, the receiver thread proceeds
to continue working on the TRANSID_AI signal
using the invalidated receiver. Since the receiver is already
invalidated, its usage results in a node failure.
Now the Ndb object cleanup done for
TCROLLBACKREP includes invalidation of the
transaction ID, so that, for a given transaction, any signal
which is received after the TCROLLBACKREP signal
arrives does not pass the transaction ID check and is silently
dropped. This fix is also implemented for
TCKEY_FAILREF signals.
See also Operations and Signals, for additional information about NDB messaging. (Bug #18196562)
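Purely as an illustration of the guard described here (the names and the sentinel value are invented; this is not the NDB API's internal code):

    // A late signal is processed only if its transaction ID still matches a
    // valid (non-invalidated) transaction; otherwise it is silently dropped.
    struct LateSignal { unsigned long long transId; /* payload ... */ };

    bool accept_signal(const LateSignal &sig, unsigned long long currentTransId)
    {
      const unsigned long long INVALID_TRANS_ID = ~0ULL;   // assumed sentinel
      if (currentTransId == INVALID_TRANS_ID || sig.transId != currentTransId)
        return false;   // transaction already rolled back and invalidated: drop
      return true;      // IDs match: safe to hand to the receiver
    }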
An NDB API example program included an internal header file
ndb_global.h) not found in the MySQL
Cluster binary distribution. The example now uses
a standard header file instead.
(Bug #18096866, Bug #71409)
When mysqld attempted (as a normal part of its internal operations) to drop
an index used by a foreign key constraint, the drop failed. Now
in such cases, performing the drop causes
all foreign keys on the table to be dropped, whether this table
acts as a parent table, child table, or both.
This issue did not affect dropping of indexes using SQL statements. (Bug #18069680)
References: See also Bug #17591531.
Cluster API: ndb_restore could sometimes report Error 701 System busy with other schema operation unnecessarily when restoring in parallel. (Bug #17916243)
Compilation of ndbmtd failed on Solaris 10
and 11 for 32-bit
x86, and the binary was not
included in the binary distributions for these platforms.
Disk Data: When using Disk Data tables and ndbmtd data nodes, it was possible for the undo buffer to become overloaded, leading to a crash of the data nodes. This issue was more likely to be encountered when using Disk Data columns whose size was approximately 8K or larger. (Bug #16766493)
UINT_MAX64 was treated as a signed value by
Visual Studio 2010. To prevent this from happening, the value is
now explicitly defined as unsigned.
References: See also Bug #17647637.
Interrupting a drop of a foreign key could cause the underlying table to become corrupt. (Bug #18041636)
Monotonic timers on several platforms can experience issues which might result in the monotonic clock doing small jumps back in time. This is due to imperfect synchronization of clocks between multiple CPU cores and does not normally have an adverse effect on the scheduler and watchdog mechanisms; so we handle some of these cases by making backtick protection less strict, although we continue to ensure that the backtick is less than 10 milliseconds. This fix also removes several checks for backticks which are thereby made redundant. (Bug #17973819)
Under certain specific circumstances, in a cluster having two SQL nodes, one of these could hang, and could not be accessed again even after killing the mysqld process and restarting it. (Bug #17875885, Bug #18080104)
References: See also Bug #17934985.
Poor support or lack of support on some platforms for monotonic timers caused issues with delayed signal handling by the job scheduler for the multithreaded data node. Variances (timer leaps) on such platforms are now handled by the multithreaded data node process in the same way that they are by the single-threaded version. (Bug #17857442)
References: See also Bug #17475425, Bug #17647637.
In some cases, with
ndb_join_pushdown enabled, it
was possible to obtain from a valid query the error
Got error 290 'Corrupt key in TC, unable to xfrm'
from NDBCLUSTER even though the data was not
It was determined that a
NULL in a
VARCHAR column could be used to
construct a lookup key, but since NULL is
never equal to any other value, such a lookup could simply have
been eliminated instead. This
NULL lookup in
turn led to the spurious error message.
This fix takes advantage of the fact that a key lookup with
NULL never finds any matching rows, and so
NDB does not try to perform the
lookup that would have led to the error.
It was theoretically possible in certain cases for a number of
output functions internal to the
NDB code to supply an uninitialized
buffer as output. Now in such cases, a newline character is
printed instead.
(Bug #17775602, Bug #17775772)
Use of the
localtime() function in
NDB multithreading code led to
otherwise nondeterministic failures in
ndbmtd. This fix replaces this function,
which on many platforms uses a buffer shared among multiple
threads, with localtime_r(), which can be given
a buffer of its own.
When using single-threaded (ndbd) data nodes with real-time scheduling
enabled, the data node process did not, as intended, temporarily lower its
scheduling priority to normal every 10 milliseconds to give
other, non-realtime threads a chance to run.
During arbitrator selection, the QMGR block (see
The QMGR Block) runs through
a series of states, the first few of which are (in order)
INIT, FIND, and START. A check
for an arbitration selection timeout occurred in the
FIND state, even though the corresponding
timer was not set until
QMGR reached the START state.
Attempting to read the resulting uninitialized timestamp value
could lead to false Could not find an arbitrator,
cluster is not partition-safe warnings.
This fix moves the setting of the timer for arbitration timeout
to the INIT state, so that the value later
checked in FIND is always initialized.
Timers used in timing scheduler events in the
NDB kernel have been refactored, in
part to ensure that they are monotonic on all platforms. In
particular, on Windows, event intervals were previously
calculated using values obtained from
GetSystemTimeAsFileTime(), which reads
directly from the system time (“wall clock”), and
which may arbitrarily be reset backward or forward, leading to
false watchdog or heartbeat alarms, or even node shutdown. Lack
of timer monotonicity could also cause slow disk writes during
backups and global checkpoints. To fix this issue, the Windows
implementation now uses
QueryPerformanceCounter() instead of
GetSystemTimeAsFileTime(). In the event that
a monotonic timer is not found on startup of the data nodes, a
warning is logged.
In addition, on all platforms, a check is now performed at
compile time for available system monotonic timers, and the
build fails if one cannot be found; note that
CLOCK_HIGHRES is now supported as an
alternative to CLOCK_MONOTONIC if the latter
is not available.
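An illustrative POSIX sketch of such a startup check follows, assuming clock_gettime() is available; this is not the actual NDB implementation.

    #include <time.h>
    #include <stdio.h>

    // Prefer CLOCK_MONOTONIC, fall back to CLOCK_HIGHRES where it exists,
    // and warn if no monotonic source is found.
    bool pick_monotonic_clock(clockid_t *chosen)
    {
      struct timespec ts;
      if (clock_gettime(CLOCK_MONOTONIC, &ts) == 0) { *chosen = CLOCK_MONOTONIC; return true; }
    #ifdef CLOCK_HIGHRES
      if (clock_gettime(CLOCK_HIGHRES, &ts) == 0)   { *chosen = CLOCK_HIGHRES;   return true; }
    #endif
      fprintf(stderr, "warning: no monotonic timer found, using realtime clock\n");
      *chosen = CLOCK_REALTIME;
      return false;
    }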
The global checkpoint lag watchdog tracked the number of times a check for GCP lag was performed, using the system scheduler, and used this count to check for a timeout condition; this approach caused a number of issues. To overcome these limitations, the GCP watchdog has been refactored to keep track of its own start times, and to calculate elapsed time by reading the (real) clock every time it is called.
In addition, any backticks (rare in any case) are now handled by taking the backward time as the new current time and calculating the elapsed time for this round as 0. Finally, any ill effects of a forward leap, which possibly could expire the watchdog timer immediately, are reduced by never calculating an elapsed time longer than the requested delay time for the watchdog timer. (Bug #17647469)
References: See also Bug #17842035.
The length of the interval (intended to be 10 seconds) between
warnings involving GCP_COMMIT when the GCP progress
watchdog did not detect progress in a global checkpoint was not
always calculated correctly.
Trying to drop an index used by a foreign key constraint caused data node failure. Now in such cases, the statement used to perform the drop fails. (Bug #17591531)
In certain rare cases on commit of a transaction, an
Ndb object was released before
the transaction coordinator (DBTC
block) sent the expected COMMIT_CONF signal. When this
happened, NDB failed to send a
COMMIT_ACK signal in response, which caused a
memory leak in the
NDB kernel that could later
lead to node failure.
Now the Ndb object is not released until the
COMMIT_CONF signal has actually been received.
After restoring the database metadata (but not any data) by
using ndb_restore --restore-meta
(-m), SQL nodes would hang while trying to
SELECT from a table in the
database to which the metadata was restored. In such cases the
attempt to query the table now fails as expected, since the
table does not actually exist until
ndb_restore is executed with --restore-data
(-r).
Losing its connections to the management node or data nodes
while a query against the
ndbinfo.memoryusage table was
in progress caused the SQL node where the query was issued to
fail.
(Bug #14483440, Bug #16810415)
The ndbd_redo_log_reader utility now supports a --help option.
Using this option causes the program to print basic usage
information, and then to exit.
(Bug #11749591, Bug #36805)
It was possible for an Ndb
object to receive signals for handling before it was
initialized, leading to thread interleaving and possible data
node failure when executing a call to
Ndb::init(). To guard against
this happening, a check is now made when starting to
receive signals that the
Ndb object is
properly initialized before any signals are actually handled.
Cluster API: Compilation of example NDB API program files failed due to missing include directives. (Bug #17672846, Bug #70759)
An application, having opened two distinct instances of
Ndb_cluster_connection,
attempted to use the second connection object to send signals to
itself, but these signals were blocked until the destructor was
explicitly called for that connection object.
References: This bug is a regression of Bug #16595838.
Functionality Added or Changed
The length of time a management node waits for a heartbeat
message from another management node is now configurable using
the HeartbeatIntervalMgmdMgmd
management node configuration parameter added in this release.
The connection is considered dead after 3 missed heartbeats. The
default value is 1500 milliseconds, or a timeout of
approximately 6000 ms.
(Bug #17807768, Bug #16426805)
The MySQL Cluster Auto-Installer now generates a
my.cnf file for each
mysqld in the cluster before starting it. For
more information, see
Using the MySQL Cluster Auto-Installer.
Performance: In a number of cases found in various locations in the MySQL Cluster codebase, unnecessary iterations were performed; this was caused by failing to break out of a repeating control structure after a test condition had been met. This community-contributed fix removes the unneeded repetitions by supplying the missing breaks. (Bug #16904243, Bug #69392, Bug #16904338, Bug #69394, Bug #16778417, Bug #69171, Bug #16778494, Bug #69172, Bug #16798410, Bug #69207, Bug #16801489, Bug #69215, Bug #16904266, Bug #69393)
Portions of the documentation specific to MySQL Cluster and the
NDB storage engine were not
included when installing from RPMs.
ndb_restore could abort during the last stages of a restore using attribute promotion or demotion into an existing table. This could happen if a converted attribute was nullable and the backup had been run on an active database. (Bug #17275798)
It was not possible to start MySQL Cluster processes created by the Auto-Installer on a Windows host running freeSSHd. (Bug #17269626)
The DBUTIL data node block is now less strict
about the order in which it receives certain messages from other
nodes.
ALTER ONLINE TABLE ... REORGANIZE PARTITION failed when run
against a table having or using a reference to a foreign key.
(Bug #17036744, Bug #69619)
TUPKEYREQ signals are used to read data from
the tuple manager block (
DBTUP), and are used
for all types of data access, especially for scans which read
many rows. A TUPKEYREQ specifies a series of 'columns' to be
read, which can be either single columns in a specific table, or
pseudocolumns, two of which—READ_ALL and
READ_PACKED—are aliases to read all
columns in a table, or some subset of these columns.
Pseudocolumns are used by modern NDB API applications as they
require less space in the signal used to
specify columns to be read, and can return the data in a more
compact (packed) format.
This fix moves the creation and initialization of on-stack
Signal objects to only those pseudocolumn reads which need to
send EXECUTE_DIRECT requests to other block instances,
rather than for every read. In addition, the size of an on-stack
signal is now varied to suit the requirements of each
pseudocolumn, so that only reads of the
INDEX_STAT pseudocolumn now require full
initialization (and 3KB of memory each time this is performed).
A race condition could sometimes occur when trying to lock receive threads to cores. (Bug #17009393)
Results from joins using a
WHERE condition with an
ORDER BY ... DESC clause were not sorted
correctly; the DESC keyword in such cases was
effectively ignored.
(Bug #16999886, Bug #69528)
The Windows error ERROR_FILE_EXISTS was
not recognized by NDB, which
treated it as an unknown error.
not work correctly with data nodes running
File system errors occurring during a local checkpoint could sometimes cause an LCP to hang with no obvious cause when they were not handled correctly. Now in such cases, such errors always cause the node to fail. Note that the LQH block always shuts down the node when a local checkpoint fails; the change here is to make likely node failure occur more quickly and to make the original file system error more visible. (Bug #16961443)
Maintenance and checking of parent batch completion in the
SPJ block of the NDB
kernel was reimplemented. Among other improvements, the
completion states of all ancestor nodes in the tree are now tracked.
Dropping a column, which was not itself a foreign key, from an
NDB table having foreign keys
failed with ER_TABLE_DEF_CHANGED.
The LCP fragment scan watchdog periodically checks for lack of
progress in a fragment scan performed as part of a local
checkpoint, and shuts down the node if there is no progress
after a given amount of time has elapsed. This interval,
formerly hard-coded as 60 seconds, can now be configured using the
LCPScanProgressTimeout data node configuration parameter added in this release.
This configuration parameter sets the maximum time the local checkpoint can be stalled before the LCP fragment scan watchdog shuts down the node. The default is 60 seconds, which provides backward compatibility with previous releases.
You can disable the LCP fragment scan watchdog by setting this parameter to 0. (Bug #16630410)
Added the ndb_error_reporter options
--connection-timeout=timeout,
which makes it possible to set a timeout for connecting to nodes,
--dry-scp,
which disables scp connections to remote hosts, and
--skip-nodegroup=nodegroup_id,
which skips all nodes in a given node group.
References: See also Bug #11752792, Bug #44082.
When START BACKUP id WAIT STARTED
was executed and id had already been used for a backup
ID, an error caused by the duplicate ID occurred as expected,
but following this, the
START BACKUP command never completed.
(Bug #16593604, Bug #68854)
ndb_mgm treated backup IDs provided to
ABORT BACKUP commands as signed values, so
that backup IDs greater than 2^31
wrapped around to negative values. This issue also affected
out-of-range backup IDs, which wrapped around to negative values
instead of causing errors as expected in such cases. The backup
ID is now treated as an unsigned value, and
ndb_mgm now performs proper range checking
for backup ID values greater than
(Bug #16585497, Bug #68798)
When trying to specify a backup ID greater than the maximum allowed, the value was silently truncated. (Bug #16585455, Bug #68796)
The unexpected shutdown of another data node as a starting data node received its node ID caused the latter to hang in Start Phase 1. (Bug #16007980)
References: See also Bug #18993037.
The NDB receive thread waited unnecessarily for additional job buffers to become available when receiving data. This caused the receive mutex to be held during this wait, which could result in a busy wait when the receive thread was running with real-time priority.
This fix also handles the case where a negative return value from the initial check of the job buffer by the receive thread prevented further execution of data reception, which could possibly lead to communication blockage or to exhaustion of configured buffer memory.
When the available job buffers for a given thread fell below the critical threshold, the internal multi-threading job scheduler waited for job buffers for incoming rather than outgoing signals to become available, which meant that the scheduler waited the maximum timeout (1 millisecond) before resuming execution. (Bug #15907122)
Setting lower_case_table_names to 1 or 2 on Windows systems caused ALTER TABLE ... ADD FOREIGN KEY statements against tables with names containing uppercase letters to fail with Error 155, No such table: '(null)'.
(Bug #14826778, Bug #67354)
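An illustrative statement of the affected kind, using hypothetical mixed-case table names, is shown here; with lower_case_table_names set to 1 or 2 on Windows, such a statement formerly failed with Error 155:

    ALTER TABLE Child
        ADD FOREIGN KEY fk_parent (parent_id) REFERENCES Parent (id);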
Under some circumstances, a race occurred where the wrong watchdog state could be reported. A new state name Packing Send Buffers is added for watchdog state number 11, previously reported as Unknown place. As part of this fix, the state numbers of states without names are now always reported in such cases.
When a node fails, the Distribution Handler (the DBDIH kernel block) takes steps together with the Transaction Coordinator (the DBTC kernel block) to make sure that all ongoing transactions involving the failed
node are taken over by a surviving node and either committed or
aborted. Transactions taken over which are then committed belong
in the epoch that is current at the time the node failure
occurs, so the surviving nodes must keep this epoch available
until the transaction takeover is complete. This is needed to
maintain ordering between epochs.
A problem was encountered in the mechanism intended to keep the current epoch open which led to a race condition between this mechanism and that normally used to declare the end of an epoch. This could cause the current epoch to be closed prematurely, leading to failure of one or more surviving data nodes. (Bug #14623333, Bug #16990394)
under heavy load could cause data nodes running
ndbmtd to fail.
When using dynamic listening ports for accepting connections from API nodes, the port numbers were reported to the management server serially. This required a round trip for each API node, causing the time required for data nodes to connect to the management server to grow linearly with the number of API nodes. To correct this problem, each data node now reports all dynamic ports at once. (Bug #12593774)
ndb_error_reporter did not support the --help option.
(Bug #11756666, Bug #48606)
References: See also Bug #11752792, Bug #44082.
Formerly, the node used as the coordinator or leader for distributed decision making between nodes (see The DBDICT Block) was indicated in the output of the ndb_mgm client SHOW command as the “master” node, although this node has no relationship to a master server in MySQL Replication. (It should also be noted that it is not necessary to know which node is the leader except when debugging NDBCLUSTER source code.) To avoid possible confusion, this label has been removed, and the leader node is now indicated in SHOW command output using an asterisk (*) character.
(Bug #11746263, Bug #24880)
Program execution failed to break out of a loop after meeting a desired condition in a number of internal methods, performing unneeded work in all cases where this occurred. (Bug #69610, Bug #69611, Bug #69736, Bug #17030606, Bug #17030614, Bug #17160263)
ABORT BACKUP in the
ndb_mgm client (see
Commands in the MySQL Cluster Management Client) took an
excessive amount of time to return (approximately as long as the
backup would have required to complete, had it not been
aborted), and failed to remove the files that had been generated
by the aborted backup.
(Bug #68853, Bug #17719439)
Note that converted character data is not checked to conform to any character set.
When performing such promotions, the only other sort of type conversion that can be performed at the same time is between character types and binary types.
now supports a pointer or a reference to a table as its required argument. If a null table pointer is used, the method now returns -1 to make it clear that this is what has occurred.
The MySQL Cluster installer for Windows provided a nonfunctional
option to install debug symbols (contained in
*.pdb files). This option has been removed
from the installer.
You can obtain the *.pdb debug files for a given MySQL Cluster release from the Windows .zip archive for the same release.
(Bug #16748308, Bug #69112)
mysql_upgrade failed when upgrading from MySQL Cluster NDB 7.1.26 to MySQL Cluster NDB 7.2.13 because it attempted to invoke a stored procedure before the mysql.proc table had been upgraded.
References: This bug is a regression of Bug #16226274.
The planned or unplanned shutdown of one or more data nodes
while reading table data from the
ndbinfo database caused a
Executing DROP TABLE while DBDIH was updating table checkpoint information subsequent to a node failure could lead to a data node crash.
In certain cases, when starting a new SQL node, mysqld failed with Error 1427 Api node died, when SUB_START_REQ reached node. (Bug #16840741)
Failure to use container classes specific to NDB during node failure handling could cause leakage of commit-ack markers, which could later lead to resource shortages or additional node crashes.
Use of an uninitialized variable employed in connection with
error handling in the
DBLQH kernel block
could sometimes lead to a data node crash or other stability
issues for no apparent reason.
A race condition in the time between the reception of an execNODE_FAILREP signal by the QMGR kernel block and its reception by other kernel blocks could lead to data node crashes during shutdown.
The CLUSTERLOG command (see Commands in the MySQL Cluster Management Client) caused ndb_mgm to crash on Solaris SPARC systems.
On Solaris SPARC platforms, batched key access execution of some joins could fail due to invalid memory access. (Bug #16818575)
When NDB tables had foreign key references to each other, it was necessary to drop the tables in the same order in which they were created.
The duplicate weedout algorithm introduced in MySQL 5.6 evaluates semi-joins (such as subqueries using IN) by first performing a normal join between the outer and inner table, which may create duplicates of rows from the outer (and inner) table, and then removing any duplicate result rows from the outer table by comparing their primary key values. Problems could arise when NDB constructed the key images for VARCHAR values using their maximum length, resulting in a binary key image which contained garbage past the actual lengths of the VARCHAR values, which meant that multiple instances of the same key were not binary-identical as assumed by the MySQL server.
To fix this problem, NDB now zero-pads such values to the maximum length of the column so that copies of the same key are treated as identical by the weedout process.
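A sketch of the kind of query affected, assuming two hypothetical NDB tables t1 and t2 with VARCHAR columns; the IN subquery may be evaluated as a semi-join subject to duplicate weedout:

    CREATE TABLE t1 (a VARCHAR(20) PRIMARY KEY, b VARCHAR(100)) ENGINE=NDB;
    CREATE TABLE t2 (c VARCHAR(20) PRIMARY KEY, d VARCHAR(100)) ENGINE=NDB;

    -- May be evaluated as a semi-join; duplicate result rows from t1 are
    -- weeded out by comparing primary key values, which for VARCHAR keys
    -- formerly might not be binary-identical.
    SELECT * FROM t1
    WHERE t1.b IN (SELECT t2.d FROM t2);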
DROP DATABASE failed to work
correctly when executed against a database containing
NDB tables joined by foreign key
constraints (and all such tables being contained within this
database), leaving these tables in place while dropping the
remaining tables in the database and reporting failure.
(Bug #16692652, Bug #69008)
When firstmatch=on was set in the optimizer_switch system variable, pushed joins could return too many rows.
A variable used by the batched key access implementation was not set by NDB as expected. This could cause a “batch full” condition to be reported after only a single row had been batched, effectively disabling batching altogether and leading to an excessive number of round trips between mysqld and NDB.
When started with the --config-file (short form -f) option, ndb_mgmd removed the old configuration cache before verifying the configuration file. Now in such cases, ndb_mgmd first checks for the file, and continues with removing the configuration cache only if the configuration file is found and is valid.
Creating more than 32 hash maps caused data nodes to fail. Usually new hash maps are created only when performing reorganization after data nodes have been added, or when explicit partitioning is used, such as when creating a table with the MAX_ROWS option or using PARTITION BY KEY() PARTITIONS n.
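For illustration, each of the following hypothetical table definitions supplies explicit partitioning information of the kind described above:

    CREATE TABLE t_maxrows (
        id INT PRIMARY KEY,
        val INT
    ) ENGINE=NDB MAX_ROWS=100000000;

    CREATE TABLE t_parts (
        id INT PRIMARY KEY,
        val INT
    ) ENGINE=NDB
    PARTITION BY KEY() PARTITIONS 8;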
Setting foreign_key_checks = 0 had no effect on the handling of NDB tables. Now, doing so causes checks of foreign key constraints to be suspended—that is, it has the same effect on NDB tables as it has on InnoDB tables.
(Bug #14095855, Bug #16286309)
References: See also Bug #16286164.
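For example, this setting can now be used to drop NDB tables out of referential order, just as with InnoDB; a minimal sketch using hypothetical table names:

    SET foreign_key_checks = 0;
    -- With the fix, the foreign key from child to parent is not enforced
    -- while the variable is 0, so parent can be dropped first.
    DROP TABLE parent;
    DROP TABLE child;
    SET foreign_key_checks = 1;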
ALTER LOGFILE GROUP and ALTER TABLESPACE failed with a syntax error when INITIAL_SIZE was specified using letter abbreviations such as M or G. In addition, CREATE LOGFILE GROUP failed when INITIAL_SIZE, UNDO_BUFFER_SIZE, or both options were specified using letter abbreviations.
(Bug #13116514, Bug #16104705, Bug #62858)
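Statements of the affected kind include the following sketches (object names and sizes are illustrative); with this fix, letter abbreviations such as M and G are accepted:

    CREATE LOGFILE GROUP lg1
        ADD UNDOFILE 'undo_1.log'
        INITIAL_SIZE 128M
        UNDO_BUFFER_SIZE 8M
        ENGINE NDBCLUSTER;

    -- Assumes that tablespace ts1 already exists:
    ALTER TABLESPACE ts1
        ADD DATAFILE 'data_2.dat'
        INITIAL_SIZE 1G
        ENGINE NDBCLUSTER;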
For each log event retrieved using the MGM API, the log event category was simply cast to an enum type, which resulted in invalid category values. Now an offset is added to the category following the cast to ensure that the value does not fall out of the allowed range.
This change was reverted by the fix for Bug #18354165. See the MySQL Cluster API Developer documentation for more information.
method performs a
malloc() if no buffer is
provided for it to use. However, it was assumed that the memory
thus returned would always be suitably aligned, which is not
always the case. Now when
malloc() provides a
buffer to this method, the buffer is aligned after it is
allocated, and before it is used.
Based on MySQL Server 5.6
Important Change: MySQL Cluster SQL nodes are now based on MySQL Server 5.6. For information about feature additions and other changes made in MySQL 5.6, see What Is New in MySQL 5.6.
The mysqld binary provided with MySQL Cluster NDB 7.3.1 is based on MySQL Server 5.6.10, and includes all MySQL Server 5.6 feature enhancements and bug fixes found in that release; see Changes in MySQL 5.6.10 (2013-02-05, General Availability), for information about these.
MySQL Cluster GUI Configuration Wizard
Important Change: The MySQL Cluster distribution now includes a browser-based graphical configuration wizard that assists the user in configuring and deploying a MySQL Cluster. This deployment can consist of an arbitrary number of nodes (within certain limits) on the user machine only, or include nodes distributed on a local network. The wizard can be launched from the command line (using the ndb_setup utility now included in the binary distribution) or a desktop file browser.
For more information about this tool, see The MySQL Cluster Auto-Installer.
Support for Foreign Key Constraints
MySQL Cluster now supports foreign key constraints between NDB tables, including support for CASCADE, SET NULL, and other reference options for ON DELETE and ON UPDATE actions. (MySQL currently does not support SET DEFAULT.)
MySQL generally requires that all child and parent tables in foreign key relationships employ the same storage engine; thus, to use foreign keys with MySQL Cluster tables, the child and parent table must each use the NDB storage engine. (It is not possible, for example, for a foreign key on an NDB table to reference an index of an InnoDB table.)
Note that MySQL Cluster tables that are explicitly partitioned by KEY or LINEAR KEY may contain foreign key references or be referenced by foreign keys (or both). This is unlike the case with InnoDB tables that are user partitioned, which may not have any foreign key relationships.
You can create an NDB table having a foreign key reference on another NDB table using CREATE TABLE ... [CONSTRAINT] FOREIGN KEY ... REFERENCES. A child table's foreign key definitions can be seen in the output of SHOW CREATE TABLE; you can also obtain information about foreign keys by querying the KEY_COLUMN_USAGE table in the INFORMATION_SCHEMA database.
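A minimal sketch showing a pair of NDB tables with a foreign key relationship (all names are illustrative):

    CREATE TABLE parent (
        id INT NOT NULL PRIMARY KEY
    ) ENGINE=NDB;

    CREATE TABLE child (
        id INT NOT NULL PRIMARY KEY,
        parent_id INT,
        INDEX par_idx (parent_id),
        CONSTRAINT fk_child_parent FOREIGN KEY (parent_id)
            REFERENCES parent (id)
            ON DELETE CASCADE
            ON UPDATE RESTRICT
    ) ENGINE=NDB;

The child table's foreign key definition can then be inspected using SHOW CREATE TABLE child.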
MySQL Cluster NDB 7.3 also provides support for applications written against Node.js with MySQL Cluster as the data store. The connector provides a domain object model similar in many ways to that employed by ClusterJ (see The ClusterJ API and Data Object Model) and can be used with either of two backend adapters: the ndb adapter, which uses the NDB API to provide high-performance native access to MySQL Cluster; and the mysql-js adapter, which uses the MySQL Server by way of the node-mysql driver. The connector is included in the MySQL Cluster distribution, and contains setup programs which can assist you with installation of the connector. You must have Node.js and MySQL Cluster installed prior to running the setup scripts. The node-mysql driver is also required for the mysql-js Node.js adapter; you can install this using the package management tool included with Node.js.
Functionality Added or Changed
The behavior of and the values used for the configuration parameters controlling the TCP receive and send buffer sizes have been improved. Formerly, the default values for these parameters were 70080 and 71540, respectively—which it was later found could lead to excessive timeouts in some circumstances—with the minimum for each of them being 1. Now, the default and recommended value is 0 for both parameters, which allows the operating system or platform to choose the send or receive buffer size for TCP sockets.
References: See also Bug #14168828.
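A sketch of how the new recommended value might be set explicitly in config.ini, assuming that the parameters in question are the TCP transporter buffer sizes TCP_RCV_BUF_SIZE and TCP_SND_BUF_SIZE (verify the exact parameter names against the documentation for your release):

    [tcp default]
    # Assumed parameter names; 0 lets the operating system choose
    # the receive and send buffer sizes for TCP sockets.
    TCP_RCV_BUF_SIZE=0
    TCP_SND_BUF_SIZE=0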
Added DUMP code 2514, which provides
information about counts of transaction objects per API node.
For more information, see
DUMP 2514. See also
Commands in the MySQL Cluster Management Client.
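For example, this DUMP code can be issued from the ndb_mgm client against all data nodes as follows; a single node ID can be given in place of ALL:

    ndb_mgm> ALL DUMP 2514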
When ndb_restore fails to find a table, it now includes in the error output an NDB API error code giving the reason for the failure. (Bug #16329067)
Data node logs now provide tracking information about arbitrations, including which nodes have assumed the arbitrator role and at what times. (Bug #11761263, Bug #53736)
mysqld failed to respond when
mysql_shutdown() was invoked
from a C application, or mysqladmin
shutdown was run from the command line.
When an update of an NDB table changes the primary key (or part of the primary key), the
operation is executed as a delete plus an insert. In some cases,
the initial read operation did not retrieve all column values
required by the insert, so that another read was required. This
fix ensures that all required column values are included in the
first read in such cases, which saves the overhead of an
additional read operation.
Pushed joins executed when
was also in use returned incorrect results.
Selecting from the
table while using tables with foreign keys caused
mysqld to crash.
(Bug #16246874, Bug #68224)
Including a table as part of a pushed join should be rejected if there are outer joined tables in between the table to be included and the tables with which it is joined; however, the check performed for any such outer joined tables compared the join type against the root of the pushed query, rather than against the common ancestor of the tables being joined. (Bug #16199028)
References: See also Bug #16198866.
Some queries were handled differently with ndb_join_pushdown enabled, because outer join conditions were not always pruned correctly from joins before they were pushed down.
References: See also Bug #16199028.
Attempting to perform additional operations such as ADD COLUMN as part of an ALTER [ONLINE | OFFLINE] TABLE ... RENAME ... statement is not supported, and now fails with an error.
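An illustrative statement of the now-rejected kind, using hypothetical table and column names:

    -- Combining RENAME with another operation such as ADD COLUMN in a
    -- single ALTER ONLINE TABLE statement is not supported and fails
    -- with an error:
    ALTER ONLINE TABLE t1
        RENAME TO t2,
        ADD COLUMN c2 INT;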
Purging the binary logs could sometimes cause mysqld to crash. (Bug #15854719)