This section contains unified change history highlights for all
MySQL Cluster releases based on version 7.2 of the
NDBCLUSTER storage engine through
MySQL Cluster NDB 7.2.14. Included are all
changelog entries in the categories MySQL
Cluster, Disk Data, and Cluster API.
For an overview of features that were added in MySQL Cluster NDB 7.2, see MySQL Cluster Development History.
Trying to access old rows while using role=ndb-caching after restarting the memcached daemon caused the daemon to crash. The current fix prevents the crash in such cases, but does not provide full support for accessing old rows with role=ndb-caching after the daemon has been restarted.
Functionality Added or Changed
Following an upgrade to MySQL Cluster NDB 7.2.7 or later, it was not possible to downgrade online again to any previous version, due to a change in that version in the default size used for hash maps. The fix for this issue makes the size configurable, with the addition of a new data node configuration parameter.
To retain compatibility with an older release that does not
support large hash maps, you can set this parameter in the
config.ini file to the value
used in older releases (240) before performing an upgrade, so
that the data nodes continue to use smaller hash maps that are
compatible with the older release. You can also now employ this
parameter in MySQL Cluster NDB 7.0 and MySQL Cluster NDB 7.1 to
enable larger hash maps prior to upgrading to MySQL Cluster NDB
7.2. For more information, see the description of this parameter.
References: See also Bug #14645319.
Important Change; Cluster API:
When checking—as part of evaluating an if predicate—which error codes should be propagated to the application, any error code less than 6000 caused the current row to be skipped, even those codes that should have caused the query to be aborted. In addition, a scan that aborted due to an error before any rows had been sent to the API caused a SCAN_FRAGCONF signal rather than a SCAN_FRAGREF signal to be sent to DBTC. This caused DBTC to time out waiting for a signal that was never sent, and the scan was never closed.
As part of this fix, the default error code value used in this case has been changed from 899 (Rowid already allocated) to 626 (Tuple did not exist). The old value continues to be supported for backward compatibility. User-defined values in the range 6000-6999 (inclusive) are also now supported. You should also keep in mind that the result of using any other ErrorCode value not mentioned here is not defined or guaranteed.
When using tables having more than 64 fragments in a MySQL Cluster where multiple TC threads were configured (on data nodes running ndbmtd), memory could be freed prematurely, before scans relying on it could be completed, leading to a crash of the data node.
References: See also Bug #13799800. This bug was introduced by Bug #14143553.
When started with the --config-file (-f) option, ndb_mgmd
removed the old configuration cache before verifying the
configuration file. Now in such cases,
ndb_mgmd first checks for the file, and
continues with removing the configuration cache only if the
configuration file is found and is valid.
Executing a DUMP 2304 command during a data node restart could cause the data node to crash with a Pointer too large error.
mysql_upgrade failed when run on an SQL node where distributed privileges were in use. (Bug #16226274)
Including a table as part of a pushed join should be rejected if there are outer joined tables between the table to be included and the tables with which it is joined; however, the check performed for any such outer joined tables checked the join type against the root of the pushed query, rather than against the common ancestor of the tables being joined. (Bug #16199028)
References: See also Bug #16198866.
Some queries were handled differently with ndb_join_pushdown enabled, due to the fact that outer join conditions were not always pruned correctly from joins before they were pushed down.
References: See also Bug #16199028.
Data nodes could fail during a system restart when the host ran short of memory, due to signals of the wrong type (such as TRANSID_AI_R) being sent to the DBSPJ kernel block.
Attempting to perform additional operations such as ADD COLUMN as part of an ALTER [ONLINE | OFFLINE] TABLE ... RENAME ... statement is not supported, and now fails with an error.
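For example (a hedged sketch; t1, t1_new, and c2 are placeholder names, and the rejected statement is shown only as a comment):

    -- Rejected: RENAME combined with another operation in a single statement.
    -- ALTER ONLINE TABLE t1 RENAME TO t1_new, ADD COLUMN c2 INT NULL;

    -- Workaround: issue the operations as separate statements.
    ALTER TABLE t1 ADD COLUMN c2 INT NULL;
    ALTER TABLE t1 RENAME TO t1_new;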
Due to a known issue in the MySQL Server, it is possible to drop the PERFORMANCE_SCHEMA database. (Bug #15831748) In addition, when executed on a MySQL Server acting as a MySQL Cluster SQL node, DROP DATABASE caused this database to be dropped on all SQL nodes in the cluster. Now, when executing a distributed drop of a database, NDB does not delete tables that are local only. This prevents MySQL system databases from being dropped in such cases.
When performing large numbers of DDL statements (100 or more) in succession, adding an index to a table sometimes caused mysqld to crash when it could not find the corresponding object in NDB. Now when this problem occurs, the DDL statement should fail with an appropriate error.
A DUMP 1000 command that contained extra or malformed arguments could lead to data node failures.
The ndb_mgm client
command did not show the complete syntax for the
method performs a
malloc() if no buffer is
provided for it to use. However, it was assumed that the memory
thus returned would always be suitably aligned, which is not
always the case. Now when
malloc() provides a
buffer to this method, the buffer is aligned after it is
allocated, and before it is used.
Functionality Added or Changed
Added several new columns to the transporters table and new counters to the counters table of the ndbinfo information database. The information provided may help in troubleshooting transporter overloads and problems with send buffer memory allocation. For more information, see the descriptions of these tables.
To provide information which can help in assessing the current state of arbitration in a MySQL Cluster as well as in diagnosing and correcting arbitration problems, 3 new tables have been added to the ndbinfo information database.
When an NDB table grew to contain
approximately one million rows or more per partition, it became
possible to insert rows having duplicate primary or unique keys
into it. In addition, primary key lookups began to fail, even
when matching rows could be found in the table by other means.
This issue was introduced in MySQL Cluster NDB 7.0.36, MySQL Cluster NDB 7.1.26, and MySQL Cluster NDB 7.2.9. Signs that you may have been affected include the following:
Rows left over that should have been deleted
Rows unchanged that should have been updated
Rows with duplicate unique keys due to inserts or updates (which should have been rejected) that failed to find an existing row and thus (wrongly) inserted a new one
This issue does not affect simple scans, so you can see all rows in a given table using SELECT * FROM and similar queries that do not depend on a primary or unique key.
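As a hedged illustration (table and column names are placeholders), a scan-based query of the following form can be used to look for duplicated key values, since the grouping is resolved by scanning rather than by key lookups:

    SELECT pk_col, COUNT(*) AS occurrences
        FROM t1
        GROUP BY pk_col
        HAVING COUNT(*) > 1;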
Upgrading to or downgrading from an affected release can be troublesome if there are rows with duplicate primary or unique keys in the table; such rows should be merged, but the best means of doing so is application dependent.
In addition, since the key operations themselves are faulty, a merge can be difficult to achieve without taking the MySQL Cluster offline, and it may be necessary to dump, purge, process, and reload the data. Depending on the circumstances, you may want or need to process the dump with an external application, or merely to reload the dump while ignoring duplicates if the result is acceptable.
Another possibility is to copy the data into another table without the original table's unique key constraints or primary key (recall that CREATE TABLE t2 SELECT * FROM t1 does not by default copy t1's primary or unique key definitions to t2). Following this, you can remove the duplicates from the copy, then add back the unique constraints and primary key definitions. Once the copy is in the desired state, you can either drop the original table and rename the copy, or make a new dump (which can be loaded later) from the copy.
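A hedged sketch of the copy-based approach just described, using placeholder names and leaving the actual merging of duplicates to the application:

    -- The copy is created without t1's primary or unique key definitions.
    CREATE TABLE t2 ENGINE=NDB SELECT * FROM t1;

    -- Remove or merge duplicate rows in t2 here, as appropriate for the
    -- application, then restore the key definitions.
    ALTER TABLE t2 ADD PRIMARY KEY (pk_col);

    -- Replace the original table with the cleaned-up copy.
    DROP TABLE t1;
    RENAME TABLE t2 TO t1;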
(Bug #16023068, Bug #67928)
The REPORT BackupStatus management client command failed with an error when used with data nodes having multiple LQH worker threads (ndbmtd data nodes). The issue did not affect the other form of this command.
The multithreaded job scheduler could be suspended prematurely when there were insufficient free job buffers to allow the threads to continue. The general rule in the job thread is that any queued messages should be sent before the thread is allowed to suspend itself, which guarantees that no other threads or API clients are kept waiting for operations which have already completed. However, the number of messages in the queue was specified incorrectly, leading to increased latency in delivering signals, sluggish response, or otherwise suboptimal performance. (Bug #15908684)
The setting for an API node configuration parameter was ignored, and the default value was used instead.
Node failure during the dropping of a table could lead to the node hanging when attempting to restart. (Bug #14787522)
The recently added LCP fragment scan watchdog occasionally reported problems with LCP fragment scans having very high table id, fragment id, and row count values.
This was due to the watchdog not accounting for the time spent draining the backup buffer used to buffer rows before writing to the fragment checkpoint file.
Now, in the final stage of an LCP fragment scan, the watchdog switches from monitoring rows scanned to monitoring the buffer size in bytes. The buffer size should decrease as data is written to the file, after which the file should be promptly closed. (Bug #14680057)
During an online upgrade, certain SQL statements could cause the server to hang, resulting in the error Got error 4012 'Request ndbd time-out, maybe due to high load or communication problems' from NDBCLUSTER. (Bug #14702377)
Job buffers act as the internal queues for work requests (signals) between block threads in ndbmtd and could be exhausted if too many signals are sent to a block thread.
The DBSPJ kernel block, which performs pushed joins, can execute multiple branches of the query tree in parallel, which means that the number of signals being sent can increase as more branches are executed. If DBSPJ execution cannot be completed before the job buffers are filled, the data node can fail.
This problem could be identified by multiple instances of the message sleeploop 10!! in the cluster out log, possibly followed by job buffer full. If the job buffers overflowed more gradually, there could also be failures due to error 1205 (Lock wait timeout exceeded), shutdowns initiated by the watchdog timer, or other timeout related errors. These were due to the slowdown caused by the 'sleeploop'.
Normally up to a 1:4 fanout ratio between consumed and produced signals is permitted. However, since there can be a potentially unlimited number of rows returned from the scan (and multiple scans of this type executing in parallel), any ratio greater than 1:1 in such cases makes it possible to overflow the job buffers.
The fix for this issue defers any lookup child which otherwise would have been executed in parallel with another, resuming it when its parallel child completes one of its own requests. This restricts the fanout ratio for bushy scan-lookup joins to 1:1. (Bug #14709490)
References: See also Bug #14648712.
Under certain rare circumstances, MySQL Cluster data nodes could
crash in conjunction with a configuration change on the data
nodes from a single-threaded to a multi-threaded transaction
coordinator (using the relevant configuration parameter for ndbmtd). The problem occurred when a mysqld that had been started prior to the change was shut down following the rolling restart of the data nodes required to effect the configuration change.
Functionality Added or Changed
Important Change; ClusterJ:
A new CMake option
WITH_NDB_JAVA is introduced. When
this option is enabled, the MySQL Cluster build is configured to
include Java support, including support for
ClusterJ. If the JDK cannot be found,
CMake fails with an error. This option is
enabled by default; if you do not wish to compile Java support into MySQL Cluster, you must now set this option explicitly when configuring the build, using -DWITH_NDB_JAVA=OFF.
Added 3 new columns to the transporters table in the ndbinfo database. The bytes_sent and bytes_received columns help to provide an overview of data transfer across the transporter links in a MySQL Cluster. This information can be useful in verifying system balance, partitioning, and front-end server load balancing; it may also be of help when diagnosing network problems arising from link saturation, hardware faults, or other causes.
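A hedged example query against this table (the node_id and remote_node_id columns are assumed here alongside the new byte counters):

    SELECT node_id, remote_node_id, status, bytes_sent, bytes_received
        FROM ndbinfo.transporters;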
Data node logs now provide tracking information about arbitrations, including which nodes have assumed the arbitrator role and at what times. (Bug #11761263, Bug #53736)
A slow filesystem during local checkpointing could exert undue pressure on DBDIH kernel block file page buffers, which in turn could lead to a data node crash when these were exhausted. This fix limits the number of table definition updates that DBDIH can issue at any one time.
The management server process, when started with certain options, could sometimes hang during shutdown.
The output from ndb_config
--configinfo now contains the
same information as that from ndb_config --configinfo --xml, including explicit indicators for parameters that do not require restarting a data node with --initial to take effect.
ndb_config incorrectly indicated that a certain data node configuration parameter requires an initial node restart to take effect, when in fact it does not; this error was also present in the MySQL Cluster documentation, where it has also been corrected.
Attempting to restart more than 5 data nodes simultaneously could cause the cluster to crash. (Bug #14647210)
In MySQL Cluster NDB 7.2.7, the default size of the hash map was increased to 3840. However, when upgrading a MySQL Cluster from a previous release, existing tables could not use or be modified online to take advantage of the new size, even when the number of fragments was increased by (for example) adding new data nodes to the cluster. Now in such cases, following an upgrade (and once the number of fragments has been increased), you can run ALTER ONLINE TABLE ... REORGANIZE PARTITION on tables that were created in MySQL Cluster NDB 7.2.6 or earlier, after which they can use the larger hash map size.
Executing ALTER TABLE concurrently with other DML statements on the same NDB table returned Got error -1 'Unknown error code' from NDBCLUSTER.
Receiver threads could wait unnecessarily to process incomplete signals, greatly reducing performance of ndbmtd. (Bug #14525521)
On platforms where epoll was not available, configuring multiple receiver threads caused ndbmtd to fail.
CPU consumption peaked several seconds after the forced termination of an NDB client application, due to the fact that the DBTC kernel block waited in a busy loop for any open transactions owned by the disconnected API client to be terminated, without yielding between checks for the correct state. (Bug #14550056)
Added the --connect-retries and --connect-delay options for ndbd and ndbmtd. --connect-retries (default 12) controls how many times the data node tries to
connect to a management server before giving up; setting it to
-1 means that the data node never stops trying to make contact.
--connect-delay sets the number of seconds to
wait between retries; the default is 5.
(Bug #14329309, Bug #66550)
It was possible in some cases for two transactions to try to
drop tables at the same time. If the master node failed while
one of these operations was still pending, this could lead
either to additional node failures (and cluster shutdown) or to
new dictionary operations being blocked. This issue is addressed
by ensuring that the master will reject requests to start or
stop a transaction while there are outstanding dictionary
takeover requests. In addition, table-drop operations now
correctly signal when complete, as the DBDICT kernel block could not confirm node takeovers while such operations were still marked as pending completion.
Following a failed ALTER ONLINE TABLE ... REORGANIZE PARTITION statement, a subsequent
execution of this statement after adding new data nodes caused a
failure in the
DBDIH kernel block which led
to an unplanned shutdown of the cluster.
The DBSPJ kernel block had no information about which tables or indexes actually existed, or which had been modified or dropped, since execution of a given query began. Thus, DBSPJ might submit dictionary requests for nonexistent tables or versions of tables, which could cause a crash.
This fix introduces a simplified dictionary into the
DBSPJ kernel block such that
DBSPJ can now check reliably for the
existence of a particular table or version of a table on which
it is about to request an operation.
When using ndbmtd and performing joins, data
nodes could fail where ndbmtd processes were
configured to use a large number of local query handler threads
(as set using the MaxNoOfExecutionThreads or ThreadConfig configuration parameters), the tables accessed by the join had a
large number of partitions, or both.
(Bug #13799800, Bug #14143553)
Disk Data: Concurrent DML and DDL operations against the same NDB table could cause mysqld to crash. (Bug #14577463)
An improvement introduced in MySQL Cluster NDB 7.2.7 saves memory used by buffers for communications between threads. One way in which this was implemented was by not allocating buffers between threads which were assumed not to communicate, including communication between local query handler (LQH or LDM) threads.
However, the BACKUP kernel block is used by the LDM threads, and during a native backup the first instance of this block, used by the first LDM thread, acts as a client coordinator and thus attempts to communicate with other LDM threads. This caused the
START BACKUP command to fail when using
ndbmtd configured with multiple LDM threads.
The fix for this issue restores the buffer used for communication between LDM threads in such cases. (Bug #14489398)
When reloading the redo log during a node or system restart, and the number of log files was greater than or equal to 42, it was possible for metadata to be
read for the wrong file (or files). Thus, the node or nodes
involved could try to reload the wrong set of data.
When an NDB table was created during a data node restart, the operation was rolled back in the NDB engine, but not on the SQL node where it was executed. This was due to the table's .FRM files not being cleaned up following the operation that was rolled back by NDB. Now in such cases these files are removed.
When FILE was used for the value of the LogDestination parameter without also specifying the filename, the log file name defaulted to logger.log. Now in such cases, a more appropriate default name is used.
(Bug #11764570, Bug #57417)
Packaging: Some builds on Solaris 64-bit failed because the packages exceeded the 2GB limit for the SVR4 installation layout. Now such packages are built without the embedded versions of mysqltest and mysql-client_test to save space. (Bug #14058643)
If the Transaction Coordinator aborted a transaction in the “prepared” state, this could cause a resource leak. (Bug #14208924)
When attempting to connect using a socket with a timeout, it was possible (if the timeout was exceeded) for the socket not to be set back to blocking. (Bug #14107173)
An error handling routine in the local query handler used the wrong code path, which could corrupt the transaction ID hash, causing the data node process to fail. This could in some cases possibly lead to failures of other data nodes in the same node group when the failed node attempted to restart. (Bug #14083116)
A shortage of scan fragment records resulted in a leak of concurrent scan table records and key operation records.
Attempting to add both a column and an index on that column in
the same online ALTER TABLE statement caused mysqld to
fail. Although this issue affected only the
mysqld shipped with MySQL Cluster, the table
named in the
ALTER TABLE could use any
storage engine for which online operations are supported.
Packaging; Cluster API:
A required file was missing from the clusterjpa JAR file. This could cause a setting of “ndb” to be rejected.
References: See also Bug #52106.
libndbclient did not include the
When an NDB API application called nextResult() again after the previous call had returned end-of-file (return
code 1), a transaction object was leaked. Now when this happens,
NDB returns error code 4210 (Ndb sent more info than
length specified); previously in such cases, -1 was
returned. In addition, the extra transaction object associated
with the scan is freed, by returning it to the transaction
coordinator's idle list.
The ALTER ONLINE TABLE ... REORGANIZE PARTITION statement can be used
to create new table partitions after new empty nodes have been
added to a MySQL Cluster. Usually, the number of partitions to
create is determined automatically, such that, if no new
partitions are required, then none are created. This behavior
can be overridden by creating the original table using the
MAX_ROWS option, which indicates that extra
partitions should be created to store a large number of rows.
However, in this case ALTER ONLINE TABLE ... REORGANIZE PARTITION simply uses the MAX_ROWS value specified in the original CREATE TABLE statement to determine the number of partitions required; since this value remains constant, so does the number of partitions, and so no new ones are created. This means that the table is not rebalanced, and the new data nodes remain unused.
To solve this problem, support is added for ALTER ONLINE TABLE ... MAX_ROWS=newvalue, where newvalue is greater than the value of MAX_ROWS specified in the original CREATE TABLE statement. This larger MAX_ROWS value implies that more partitions are required; these are allocated on the new data nodes, which restores the balanced distribution of the table data.
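A hedged sketch of this usage, with placeholder table definition and row counts, and assuming MAX_ROWS is given directly in the ALTER ONLINE TABLE statement as described above:

    CREATE TABLE t1 (
        id INT NOT NULL PRIMARY KEY,
        payload VARCHAR(255)
    ) ENGINE=NDB MAX_ROWS=200000000;

    -- After adding new data nodes, raise MAX_ROWS online; the larger value
    -- implies more partitions, which are allocated on the new data nodes.
    ALTER ONLINE TABLE t1 MAX_ROWS=400000000;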
When a fragment scan occurring as part of a local checkpoint (LCP) stopped progressing, this kept the entire LCP from completing, which could result in redo log exhaustion, write service outage, inability to recover nodes, and longer system recovery times. To help keep this from occurring, MySQL Cluster now implements an LCP watchdog mechanism, which monitors the fragment scans making up the LCP and takes action if the LCP is observed to be delinquent.
This is intended to guard against any scan related system-level I/O errors or other issues causing problems with LCP and thus having a negative impact on write service and recovery times. Each node independently monitors the progress of local fragment scans occurring as part of an LCP. If no progress is made for 20 seconds, warning logs are generated every 10 seconds thereafter for up to 1 minute. At this point, if no progress has been made, the fragment scan is considered to have hung, and the node is restarted to enable the LCP to continue.
In addition, a new ndbd exit code
NDBD_EXIT_LCP_SCAN_WATCHDOG_FAIL is added
to identify when this occurs. See
LQH Errors, for more information.
It could sometimes happen that a query pushed down to the data nodes could refer to buffered rows which had been released, and possibly overwritten by other rows. Such rows, if overwritten, could lead to incorrect results from a pushed query, and possibly even to failure of one or more data nodes or SQL nodes. (Bug #14010406)
The output of DUMP 2303 in the ndb_mgm client now includes the status of the single fragment scan record reserved for a local checkpoint.
Pushed joins performed as part of a stored procedure or trigger could cause spurious Out of memory errors on the SQL node where they were executed. (Bug #13945264)
References: See also Bug #13944272.
ndbmemcached exited unexpectedly when more
than 128 clients attempted to connect concurrently using
prefixes. In addition, a NOT FOUND error
was returned when a temporary error was encountered from NDB; now the error No
Ndb Instances in freelist is returned instead.
(Bug #13890064, Bug #13891085)
References: See also Bug #13945264.
When upgrading or downgrading between a MySQL Cluster version supporting distributed pushdown joins (MySQL Cluster NDB 7.2 and later) and one that did not, queries that the later MySQL Cluster version tried to push down could cause data nodes still running the earlier version to fail. Now the SQL nodes check the version of the software running on the data nodes, so that queries are not pushed down if there are any data nodes in the cluster that do not support pushdown joins. (Bug #13894817)
The memcached server failed to build correctly on 64-bit Solaris/SPARC. (Bug #13854122)
TABLE failed when a
In some cases, restarting data nodes spent a very long time in Start Phase 101, in which API nodes must connect to the starting node; this occurred when the API nodes trying to connect failed in a live-lock scenario. This connection process uses a handshake during which
a small number of messages are exchanged, with a timeout used to
detect failures during the handshake.
Prior to this fix, this timeout was set such that, if one API node encountered the timeout, all other nodes connecting would do the same. The fix also decreases this timeout. This issue (and the effects of the fix) are most likely to be observed on relatively large configurations having 10 or more data nodes and 200 or more API nodes. (Bug #13825163)
ndbmtd failed to restart when the size of a table definition exceeded 32K.
(The size of a table definition is dependent upon a number of factors, but in general the 32K limit is encountered when a table has 250 to 300 columns.) (Bug #13824773)
An initial start using ndbmtd could sometimes hang. This was due to a state which occurred when several threads tried to flush a socket buffer to a remote node. In such cases, to minimize flushing of socket buffers, only one thread actually performs the send, on behalf of all threads. However, it was possible in certain cases for there to be data in the socket buffer waiting to be sent with no thread ever being chosen to perform the send. (Bug #13809781)
When trying to use ndb_size.pl
to connect to a MySQL server running on a nonstandard port, the
port argument was ignored.
(Bug #13364905, Bug #62635)
A server system variable was inadvertently removed from the NDB 7.2 codebase prior to General Availability. This fix restores it.
(Bug #64697, Bug #13891116, Bug #13947227)
An assert in
NDB could cause
memcached to fail.
Important Change: A number of changes have been made in the configuration of transporter send buffers.
The ReservedSendBufferMemory data node configuration parameter is now deprecated, and thus subject to removal in a future MySQL Cluster release.
ReservedSendBufferMemory has been
non-functional since it was introduced and remains so.
TotalSendBufferMemory now works correctly
with data nodes using ndbmtd.
A new data node configuration parameter
is introduced. Its purpose is to control how much additional
memory can be allocated to the send buffer over and above
that specified by
SendBufferMemory. The default setting
(0) allows up to 16MB to be allocated automatically.
(Bug #13633845, Bug #11760629, Bug #53053)
Several instances in the NDB code affecting the operation of
multi-threaded data nodes, where
SendBufferMemory was associated with a
specific thread for an unnecessarily long time, have been
identified and fixed, by minimizing the time that any of these
buffers can be held exclusively by a given thread (send buffer
memory being critical to operation of the entire node).
Queries using LIKE ... ESCAPE on NDB tables failed when pushed down to the data nodes. Such queries are no longer pushed down, regardless of whether engine condition pushdown is enabled.
(Bug #13604447, Bug #61064)
To avoid TCP transporter overload, an overload flag is kept in
the NDB kernel for each data node; this flag is used to abort
key requests if needed, yielding error 1218 Send
Buffers overloaded in NDB kernel in such cases.
Scans can also put significant pressure on transporters,
especially where scans with a high degree of parallelism are
executed in a configuration with relatively small send buffers.
However, in these cases, overload flags were not checked, which
could lead to node failures due to send buffer exhaustion. Now,
overload flags are checked by scans, and in cases where
returning sufficient rows to match the batch size (as set using the --ndb-batch-size server option)
would cause an overload, the number of rows is limited to what
can be accommodated by the send buffer.
See also Configuring MySQL Cluster Send Buffer Parameters. (Bug #13602508)
A data node crashed when more than 16G of fixed-size memory was allocated by DBTUP to one fragment (because the DBACC kernel block was not prepared to accept values greater than 32 bits from it, leading to an overflow). Now in such cases, the data node returns Error 889 Table fragment fixed data reference has reached maximum possible value.... When this happens, you can work around the problem by increasing the number of partitions used by the table (such as by using the MAX_ROWS option with CREATE TABLE).
References: See also Bug #11747870, Bug #34348.
When memcached was started using ndb_engine.so with only 2 API slots available in the cluster configuration file,
memcached attempted to make a third
connection and crashed when this failed.
References: See also Bug #13608135.
A node failure and recovery while performing a scan on more than 32 partitions led to additional node failures during node takeover. (Bug #13528976)
The --skip-config-cache option now causes ndb_mgmd to skip checking
for the configuration directory, and thus to skip creating it in
the event that it does not exist.
Functionality Added or Changed
A mysqld process joining a MySQL Cluster
where distributed privileges are in use now automatically executes FLUSH PRIVILEGES as part of the connection process, so that
the cluster's distributed privileges take immediate effect
on the new SQL node.
RPM distributions of MySQL Cluster NDB 7.1 contained a number of
packages which in MySQL Cluster NDB 7.2 have been merged into the MySQL-Cluster-server RPM. However, the MySQL Cluster NDB 7.2 MySQL-Cluster-server
RPM did not actually obsolete these packages, which meant that
they had to be removed manually prior to performing an upgrade
from a MySQL Cluster NDB 7.1 RPM installation.
These packages include the separate management server, data node, and tools RPMs that were provided with MySQL Cluster NDB 7.1.
For more information, see Installing MySQL Cluster from RPM. (Bug #13545589)
Accessing a table having a BLOB column but no primary key following a restart of the SQL node
failed with Error 1 (Unknown error code).
At the beginning of a local checkpoint, each data node marks its local tables with a “to be checkpointed” flag. A failure of the master node during this process could cause either the LCP to hang, or one or more data nodes to be forcibly shut down. (Bug #13436481)
A node failure occurring while a DDL statement on a table was executing resulted in a hung
connection (and the user was not informed of any error that
would cause this to happen).
References: See also Bug #13407848.
Functionality Added or Changed
Added the ThreadConfig data node configuration parameter to enable control of multiple threads and CPUs when using ndbmtd, by assigning threads of one or more specified types to execute on one or more CPUs. This can provide more precise and flexible control over multiple threads than can be obtained using other data node configuration parameters.
Added the MinFreePct data node configuration parameter, which specifies a percentage of data node resources to hold in reserve for restarts. The resources monitored are DataMemory, IndexMemory, and any MAX_ROWS settings (see CREATE TABLE Syntax). The default value of MinFreePct is 5, which means that 5% of each of these resources is now set aside for restarts.
Executing TRUNCATE TABLE on privilege tables such as mysql.procs_priv, when these tables had been
converted to MySQL Cluster distributed grant tables, caused
mysqld to crash.
Restarting an SQL node configured for distributed grants could sometimes result in a crash. (Bug #13340819)
Previously, forcing the simultaneous shutdown of multiple data nodes using SHUTDOWN -F in the
ndb_mgm management client could cause the
entire cluster to fail. Now in such cases, any such nodes are
forced to abort immediately.
Functionality Added or Changed
Added the ndbinfo_select_all utility.
Cluster API: Added support for the Memcache API using ndbmemcache, a loadable storage engine for memcached version 1.6 and later, which can be used to provide a persistent MySQL Cluster data store, accessed using the memcache protocol.
The standard memcached caching engine is now included in the MySQL Cluster distribution. Each memcached server, in addition to providing direct access to data stored in a MySQL Cluster, is able to cache data locally and serve (some) requests from this local cache.
The memcached server can also provide an interface to existing MySQL Cluster tables that is strictly defined, so that an administrator can control exactly which tables and columns are referenced by particular memcache keys and values, and which operations are allowed on these keys and values.
For more information, see
ndbmemcache—Memcache API for MySQL Cluster.
The include/storage directory, where the
header files supplied for use in compiling MySQL Cluster
applications are normally located, was missing from MySQL
Cluster release packages for Windows.
Functionality Added or Changed
Added support for pushing down joins to the NDB kernel for
parallel execution across the data nodes, which can speed up
execution of joins across NDB tables by a factor of 20 or more in some cases. Some restrictions apply on the types of joins that can be optimized in this way; in particular, columns to be joined must use exactly the same data type, and cannot be any of the BLOB or TEXT types. In addition, columns
to be joined must be part of a table index or primary key.
Support for this feature can be enabled or disabled using the
ndb_join_pushdown server system
variable (enabled by default); see the description of this
variable for more information and examples.
As part of this improvement, the status variables Ndb_pushed_queries_defined, Ndb_pushed_queries_dropped, Ndb_pushed_queries_executed, and Ndb_pushed_reads have also been introduced for monitoring purposes. You can also see whether a given join is pushed down using EXPLAIN. In addition, several new counters relating to push-down join performance have been added to the counters table in the ndbinfo database. For more information, see the descriptions of these status variables and counters.
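A brief hedged example of checking whether a join is pushed down (table and column names are placeholders):

    SET ndb_join_pushdown = ON;

    EXPLAIN
    SELECT *
        FROM t1
        JOIN t2 ON t2.a = t1.a;   -- joined columns must use identical types

    -- Pushed-join activity is reflected in the Ndb_pushed_* status variables.
    SHOW STATUS LIKE 'Ndb_pushed%';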
The default values for a number of MySQL Cluster data node
configuration parameters have changed, to provide more
resiliency to environmental issues and better handling of some
potential failure scenarios, and to perform more reliably with
increases in memory and other resource requirements brought
about by recent improvements in join handling by
NDB. The affected parameters are as follows:
HeartbeatIntervalDbDb: Default increased from 1500 ms to 5000 ms.
ArbitrationTimeout: Default increased from 3000 ms to 7500 ms.
TimeBetweenEpochsTimeout: Now effectively disabled by default (default changed from 4000 ms to 0).
SharedGlobalMemory: Default increased from 20MB to 128MB.
MaxParallelScansPerFragment: Default increased from 32 to 256.
In addition, when
MaxNoOfLocalScans is not
specified, the value computed for it automatically has been
increased by a factor of 4 (that is, to 4 times MaxNoOfConcurrentScans times the number of data nodes in the cluster).
MySQL user privileges can now be distributed automatically
across all MySQL servers (SQL nodes) connected to the same MySQL
Cluster. Previously, each MySQL Server's user privilege
tables were required to use the
MyISAM storage engine, which meant
that a user account and its associated privileges created on one
SQL node were not available on any other SQL node without manual
intervention. MySQL Cluster now provides an SQL script file
ndb_dist_priv.sql that can be found in
share/mysql under the MySQL installation
directory; loading this script creates a stored procedure
mysql_cluster_move_privileges that can be
used following initial installation to convert the privilege
tables to the
NDB storage engine,
so that any time a MySQL user account is created, dropped, or
has its privileges updated on any SQL node, the changes take
effect immediately on all other MySQL servers attached to the
cluster. Note that you may need to execute FLUSH PRIVILEGES on any SQL nodes with connected MySQL clients (or to disconnect and then reconnect the clients) in order for those clients to be able to see the changes in privileges. Once mysql_cluster_move_privileges has been
executed successfully, all MySQL user privileges are distributed
across all connected MySQL Servers. MySQL Servers that join the
cluster after this automatically participate in the privilege distribution. ndb_dist_priv.sql also provides stored
routines that can be used to verify that the privilege tables
have been distributed successfully, and to perform other tasks.
For more information, see Distributed MySQL Privileges for MySQL Cluster.
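A hedged sketch of the conversion procedure described above; the path to ndb_dist_priv.sql depends on the installation, and the stored procedure is assumed to be created in the mysql database:

    -- Load the script that creates mysql_cluster_move_privileges and
    -- related stored routines (path shown is illustrative).
    SOURCE /usr/share/mysql/ndb_dist_priv.sql;

    -- Convert the privilege tables to NDB so that grants are shared by
    -- all SQL nodes attached to the cluster.
    CALL mysql.mysql_cluster_move_privileges();

    -- On SQL nodes with already-connected clients, reload the grant tables
    -- so that those clients see the distributed privileges.
    FLUSH PRIVILEGES;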
By default, data nodes now exhibit fail-fast behavior whenever
they encounter corrupted tuples—in other words, a data
node forcibly shuts down whenever it detects a corrupted tuple.
To override this behavior, you can disable the CrashOnCorruptedTuple data node configuration parameter, which is enabled by default. This is a change from MySQL Cluster NDB 7.0 and MySQL Cluster NDB 7.1, where this parameter was disabled (so that data nodes ignored tuple corruption) by default.
It is now possible to filter the output from
ndb_config so that it displays only system,
data node, or connection parameters and values, using one of the options --system, --nodes, or --connections. In addition, it is now possible to specify from which data node the configuration data is obtained, using the --config-from-node option that is added in this release.
For more information, see ndb_config — Extract MySQL Cluster Configuration Information. (Bug #11766870)
Incompatible Change; Cluster API: Restarting a machine hosting data nodes, SQL nodes, or both, caused such nodes when restarting to time out while trying to obtain node IDs.
As part of the fix for this issue, the behavior and default
values for the NDB API Ndb_cluster_connection::connect() method have been improved. Due to these changes, the version number for the included NDB client library (libndbclient.so) has been increased from
4.0.0 to 5.0.0. For NDB API applications, this means that as
part of any upgrade, you must do both of the following:
Review and possibly modify any NDB API code that uses the connect() method, in order to take into account its changed default values.
Recompile any NDB API applications using the new version of the client library.
Also in connection with this issue, the default value for each
of the two mysqld options --ndb-wait-connected and --ndb-wait-setup has been increased to 30 seconds (from 0 and 15, respectively). In addition, a hard-coded 30-second delay was removed, so that the values of these options are now handled correctly in all cases.
AUTO_INCREMENT values were not set correctly for INSERT IGNORE statements affecting NDB tables. This could lead such
statements to fail with Got error 4350 'Transaction
already aborted' from NDBCLUSTER when inserting
multiple rows containing duplicate values.
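A hedged sketch of the kind of statement affected (table and values are placeholders):

    CREATE TABLE t1 (
        id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
        u  INT NOT NULL,
        UNIQUE KEY (u)
    ) ENGINE=NDB;

    -- Inserting multiple rows containing duplicate unique values with
    -- INSERT IGNORE previously could fail with error 4350 instead of
    -- silently skipping the duplicates.
    INSERT IGNORE INTO t1 (u) VALUES (1), (2), (2), (3);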
(Bug #11755237, Bug #46985)