Important Change; NDB Disk Data: The following changes are made in the display of information about Disk Data files in the INFORMATION_SCHEMA.FILES table:
Tablespaces and log file groups are no longer represented in the FILES table. (These constructs are not actually files.)
Each data file is now represented by a single row in the FILES table. Each undo log file is also now represented in this table by one row only. (Previously, a row was displayed for each copy of each of these files on each data node.)
For rows corresponding to data files or undo log files, node ID and undo log buffer information is no longer displayed in the EXTRA column of the FILES table.
The removal of undo log buffer information is reverted in NDB 8.0.15. (Bug #92796, Bug #28800252)
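For illustration, the Disk Data file information described above can be viewed with a query such as the following (columns as documented for the FILES table; each data file or undo log file now appears exactly once):

SELECT FILE_NAME, FILE_TYPE, LOGFILE_GROUP_NAME, EXTRA
FROM INFORMATION_SCHEMA.FILES
WHERE ENGINE = 'ndbcluster';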
Important Change; NDB Client Programs: Removed the deprecated --ndb option for perror. Use ndb_perror to obtain error message information from NDB error codes instead. (Bug #81705, Bug #23523957)
References: See also: Bug #81704, Bug #23523926.
Important Change: Beginning with this release, MySQL NDB Cluster is being developed in parallel with the standard MySQL 8.0 server under a new unified release model with the following features:
NDB 8.0 is developed in, built from, and released with the MySQL 8.0 source code tree.
The numbering scheme for NDB Cluster 8.0 releases follows the scheme for MySQL 8.0, starting with the current MySQL release (8.0.13).
Building the source with NDB support appends -cluster to the version string returned by mysql -V, as shown here:
shell> mysql -V
mysql  Ver 8.0.13-cluster for Linux on x86_64 (Source distribution)
NDB binaries continue to display both the MySQL Server version and the NDB engine version, like this:
shell> ndb_mgm -V
MySQL distrib mysql-8.0.13 ndb-8.0.13-dmr, for Linux (x86_64)
In MySQL Cluster NDB 8.0, these two version numbers are always the same.
To build the MySQL 8.0.13 (or later) source with NDB Cluster support, use the CMake option -DWITH_NDBCLUSTER. (WL #11762)
NDB Cluster APIs: Added the setExtraMetadata() method. (WL #9865)
INFORMATION_SCHEMA tables are now populated with tablespace statistics for MySQL Cluster tables. (Bug #27167728)
It is now possible to specify a set of cores to be used for I/O threads performing offline multithreaded builds of ordered indexes, as opposed to normal I/O duties such as file I/O, compression, or decompression. "Offline" in this context refers to building of ordered indexes performed when the parent table is not being written to; such building takes place when an NDB cluster performs a node or system restart, or as part of restoring a cluster from backup using ndb_restore.
In addition, the default behavior for offline index build work is modified to use all cores available to ndbmtd, rather than limiting itself to the core reserved for the I/O thread. Doing so can improve restart and restore times, as well as performance, availability, and the user experience.
This enhancement is implemented as follows:
The default value for BuildIndexThreads is changed from 0 to 128. This means that offline ordered index builds are now multithreaded by default.
The default value for TwoPassInitialNodeRestartCopy is changed from false to true. This means that an initial node restart first copies all data from a "live" node to one that is starting (without creating any indexes), builds ordered indexes offline, and then again synchronizes its data with the live node; that is, data is synchronized twice, with the offline index build taking place between the two synchronizations. This causes an initial node restart to behave more like the normal restart of a node, and reduces the time required for building indexes.
A new thread type (idxbld) is defined for the ThreadConfig configuration parameter, to allow locking of offline index build threads to specific CPUs.
NDB now distinguishes the thread types that are accessible to ThreadConfig by the following two criteria:
Whether the thread is an execution thread. Threads of types main, ldm, recv, rep, tc, and send are execution threads; thread types io, watchdog, and idxbld refer to other types of threads.
Whether the allocation of the thread to a given task is permanent or temporary. Currently, all thread types except idxbld have permanent assignments.
For additional information, see the descriptions of the parameters in the Manual. (Bug #25835748, Bug #26928111)
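One way to observe the new thread type, assuming that index build threads are currently active and that they are reported by the ndbinfo.threads table like other threads, is a query such as this one:

SELECT node_id, thr_no, thread_name
FROM ndbinfo.threads
WHERE thread_name = 'idxbld';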
When performing an NDB backup, the ndbinfo.logbuffers table now displays information regarding buffer usage by the backup process on each data node. This is implemented as rows reflecting two new log types in addition to REDO and DD-UNDO. One of these rows has the log type BACKUP-DATA, which shows the amount of data buffer used during backup to copy fragments to backup files. The other row has the log type BACKUP-LOG, which displays the amount of log buffer used during the backup to record changes made after the backup has started. One each of these log_type rows is shown in the logbuffers table for each data node in the cluster. Rows having these two log types are present in the table only while an NDB backup is currently in progress. (Bug #25822988)
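For example, while a backup is in progress, the new rows can be inspected with a query along these lines (column names as shown in the ndbinfo.logbuffers table):

SELECT node_id, log_type, total, used
FROM ndbinfo.logbuffers
WHERE log_type IN ('BACKUP-DATA', 'BACKUP-LOG');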
Added the ODirectSyncFlag configuration parameter for data nodes. When enabled, the data node treats all completed filesystem writes to the redo log as though they had been performed using fsync.
This parameter has no effect if at least one of the following conditions is true:
ODirect is not enabled.
InitFragmentLogFiles is set to SPARSE.
Added the --logbuffer-size option for ndbd and ndbmtd, for use in debugging with a large number of log messages. This controls the size of the data node log buffer; the default (32K) is intended for normal operations. (Bug #89679, Bug #27550943)
Prior to NDB 8.0, all string hashing was based on first transforming the string into a normalized form, then MD5-hashing the resulting binary image. This could give rise to some performance problems, for the following reasons:
The normalized string is always space padded to its full length. For a VARCHAR, this often involved adding more spaces than there were characters in the original string.
The string libraries were not optimized for this space padding, and added considerable overhead in some use cases.
The padding semantics varied between character sets, some of which were not padded to their full length.
The transformed string could become quite large, even without space padding; some Unicode 9.0 collations can transform a single code point into 100 bytes of character data or more.
Subsequent MD5 hashing consisted mainly of padding with spaces, and was not particularly efficient, possibly causing additional performance penalties by flushing significant portions of the L1 cache.
Collations provide their own hash functions, which hash the string directly without first creating a normalized string. In addition, for Unicode 9.0 collations, the hashes are computed without padding.
NDB now takes advantage of this built-in function whenever hashing a string identified as using a Unicode 9.0 collation.
Since, for other collations, there are existing databases which are hash partitioned on the transformed string, NDB continues to employ the previous method for hashing strings that use these collations, to maintain compatibility. (Bug #89609, Bug #27523758)
References: See also: Bug #89590, Bug #27515000, Bug #89604, Bug #27522732.
A table created in NDB 7.6 and earlier contains metadata in the form of a compressed .frm file, which is no longer supported in MySQL 8.0. To facilitate online upgrades to NDB 8.0, NDB performs on-the-fly translation of this metadata and writes it into the MySQL Server's data dictionary, which enables the mysqld in NDB Cluster 8.0 to work with the table without preventing subsequent use of the table by a previous version of the NDB software.
Once a table's structure has been modified in NDB 8.0, its metadata is stored using the data dictionary, and the table can no longer be accessed by NDB 7.6 and earlier.
This enhancement also makes it possible to restore an NDB backup made using an earlier version to a cluster running NDB 8.0 (or later). (WL #10167)
Important Change; NDB Disk Data: It was possible to issue a CREATE TABLE statement that referred to a nonexistent tablespace. Now such a statement fails with an error. (Bug #85859, Bug #25860404)
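A minimal sketch of the now-rejected case follows; the table and tablespace names are hypothetical:

CREATE TABLE dd_t1 (
    a INT PRIMARY KEY
)
TABLESPACE no_such_ts   -- nonexistent tablespace; the statement now fails
STORAGE DISK
ENGINE NDB;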
NDB supports any of the following three values for the ROW_FORMAT option: DEFAULT, FIXED, and DYNAMIC. Formerly, any values other than these were accepted but resulted in DYNAMIC being used. Now a CREATE TABLE statement that attempts to create an NDB table fails with an error if ROW_FORMAT is used and does not have one of the three values listed. (Bug #88803, Bug #27230898)
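For example (table names hypothetical), the first statement below continues to succeed, while the second, which formerly created the table using DYNAMIC silently, is now rejected:

CREATE TABLE t_dyn (a INT PRIMARY KEY) ENGINE NDB ROW_FORMAT=DYNAMIC;    -- accepted
CREATE TABLE t_bad (a INT PRIMARY KEY) ENGINE NDB ROW_FORMAT=COMPRESSED; -- now fails with an error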
Microsoft Windows; ndbinfo Information Database: The process ID of the monitor process used on Windows platforms by RESTART to spawn and restart a mysqld is now shown in the ndbinfo.processes table as an angel_pid. (Bug #90235, Bug #27767237)
NDB Cluster APIs: The example NDB API programs, among them ndbapi_array_using_adapter, did not perform cleanup following execution; in addition, the example program ndbapi_simple_dual did not check to see whether the table used by this example already existed. Due to these issues, none of these examples could be run more than once in succession.
The issues just described have been corrected in the example sources, and the relevant code listings in the NDB API documentation have been updated to match. (Bug #27009386)
NDB Cluster APIs: A previous fix for an issue, in which the failure of multiple data nodes during a partial restart could cause API nodes to fail, did not properly check the validity of the associated
NdbReceiver object before proceeding. Now in such cases an invalid object triggers handling for invalid signals, rather than a node failure. (Bug #25902137)
References: This issue is a regression of: Bug #25092498.
NDB Cluster APIs: Incorrect results, usually an empty result set, were returned when setBound() was used to specify a NULL bound. This issue appears to have been caused by a problem in gcc, limited to cases using the old version of this method (which does not employ NdbRecord), and is fixed by rewriting the problematic internal logic in the old implementation. (Bug #89468, Bug #27461752)
NDB Cluster APIs: Released NDB API objects are kept in one or more Ndb_free_list structures for later reuse. Each list also keeps track of all objects seized from it, and makes sure that these are eventually released back to it. In the event that the internal function NdbScanOperation::init() failed, it was possible for an NdbApiSignal already allocated by the NdbOperation to be leaked. Now in such cases, NdbScanOperation::release() is called to release any objects allocated by the failed NdbScanOperation before it is returned to the free list.
This fix also handles a similar issue with NdbOperation::init(), where a failed call could also leak a signal. (Bug #89249, Bug #27389894)
NDB Cluster APIs: Removed the unused TFSentinel implementation class, which raised compiler warnings on 32-bit systems. (Bug #89005, Bug #27302881)
NDB Cluster APIs: The success of thread creation by API calls was not always checked, which could lead to timeouts in some cases. (Bug #88784, Bug #27225714)
NDB Cluster APIs: The file storage/ndb/src/ndbapi/ndberror.c was renamed to ndberror.cpp. (Bug #87725, Bug #26781567)
NDB Client Programs: When passed an invalid connection string, the ndb_mgm client did not always free up all memory used before exiting. (Bug #90179, Bug #27737906)
NDB Client Programs: ndb_show_tables did not always free up all memory which it used. (Bug #90152, Bug #27727544)
Local checkpoints did not always handle DROP TABLE operations correctly. (Bug #27926532)
References: This issue is a regression of: Bug #26908347, Bug #26968613.
In some circumstances, when a transaction was aborted in the DBTC block, there remained links to trigger records from operation records which were not yet reference-counted; when such an operation record was released, the trigger reference count was still decremented. (Bug #27629680)
An internal buffer being reused immediately after it had been freed could lead to an unplanned data node shutdown. (Bug #27622643)
References: See also: Bug #28698831.
An NDB online backup consists of data, which is fuzzy, and a redo and undo log. To restore to a consistent state, it is necessary to ensure that the log contains all of the changes spanning the capture of the fuzzy data portion and beyond to a consistent snapshot point. This is achieved by waiting for a GCI boundary to be passed after the capture of data is complete, but before stopping change logging and recording the stop GCI in the backup's metadata.
At restore time, the log is replayed up to the stop GCI, restoring the system to the state it had at the consistent stop GCI. A problem arose when, under load, it was possible to select a GCI boundary which occurred too early and did not span all the data captured. This could lead to inconsistencies when restoring the backup; these could be noticed as broken constraints or corrupted data.
Now the stop GCI is chosen so that it spans the entire duration of the fuzzy data capture process, so that the backup log always contains all data within a given stop GCI. (Bug #27497461)
References: See also: Bug #27566346.
For NDB tables, when a foreign key was added or dropped as a part of a DDL statement, the foreign key metadata for all parent tables referenced should be reloaded in the handler on all SQL nodes connected to the cluster, but this was done only on the mysqld on which the statement was executed. Due to this, any subsequent queries relying on foreign key metadata from the corresponding parent tables could return inconsistent results. (Bug #27439587)
References: See also: Bug #82989, Bug #24666177.
ANALYZE TABLE used excessive amounts of CPU on large, low-cardinality tables. (Bug #27438963)
Queries using very large lists with IN were not handled correctly, which could lead to data node failures. (Bug #27397802)
References: See also: Bug #28728603.
A data node overload could in some situations lead to an unplanned shutdown of the data node, which led to all data nodes disconnecting from the management node and API nodes.
This was due to a situation in which API_FAILREQ was not the last received signal prior to the node failure.
As part of this fix, the transaction coordinator's handling of SCAN_TABREQ signals for an ApiConnectRecord in an incorrect state was also improved. (Bug #27381901)
References: See also: Bug #47039, Bug #11755287.
In a two-node cluster, when the node having the lowest ID was started using --nostart, API clients could not connect, failing with Could not alloc node id at HOST port PORT_NO: No free node id found for mysqld(API). (Bug #27225212)
Changing MaxNoOfExecutionThreads without an initial system restart led to an unplanned data node shutdown. (Bug #27169282)
References: This issue is a regression of: Bug #26908347, Bug #26968613.
In most cases, and especially in error conditions, NDB command-line programs failed on exit to free memory used by option handling, and failed to call ndb_end(). This is fixed by removing the internal methods formerly found in storage/ndb/include/util/ndb_opts.h and replacing them with an Ndb_opts class that automatically frees such resources as part of its destructor. (Bug #26930148)
References: See also: Bug #87396, Bug #26617328.
A query against the INFORMATION_SCHEMA.FILES table returned no results when it included an ORDER BY clause. (Bug #26877788)
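A query of the following general form, which previously returned an empty result, now returns the expected rows:

SELECT FILE_NAME, FILE_TYPE
FROM INFORMATION_SCHEMA.FILES
WHERE ENGINE = 'ndbcluster'
ORDER BY FILE_NAME;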
During a restart, DBLQH loads redo log part metadata for each redo log part it manages, from one or more redo log files. Since each file has a limited capacity for metadata, the number of files which must be consulted depends on the size of the redo log part. These files are opened, read, and closed sequentially, but the closing of one file occurs concurrently with the opening of the next.
In cases where closing of the file was slow, it was possible for more than 4 files per redo log part to be open concurrently; since these files were opened using the OM_WRITE_BUFFER option, more than 4 chunks of write buffer were allocated per part in such cases. The write buffer pool is not unlimited; if all redo log parts were in a similar state, the pool was exhausted, causing the data node to shut down.
This issue is resolved by avoiding the use of OM_WRITE_BUFFER during metadata reload, so that any transient opening of more than 4 redo log files per log file part no longer leads to failure of the data node. (Bug #25965370)
Under certain conditions, data nodes restarted unnecessarily during execution of ALTER TABLE ... REORGANIZE PARTITION. (Bug #25675481)
References: See also: Bug #26735618, Bug #27191468.
Race conditions sometimes occurred during asynchronous disconnection and reconnection of the transporter while other threads concurrently inserted signal data into the send buffers, leading to an unplanned shutdown of the cluster.
As part of the work fixing this issue, the internal templating function used by the Transporter Registry when it prepares a send is refactored to use likely-or-unlikely logic to speed up execution, and to remove a number of duplicate checks for NULL. (Bug #24444908, Bug #25128512)
References: See also: Bug #20112700.
ndb_restore sometimes logged data file and log file progress values much greater than 100%. (Bug #20989106)
Removed unneeded debug printouts from the internal function ha_ndbcluster::copy_fk_for_offline_alter(). (Bug #90991, Bug #28069711)
The internal function BitmaskImpl::setRange() set one bit fewer than specified. (Bug #90648, Bug #27931995)
Inserting a row into an NDB table having a self-referencing foreign key that referenced a unique index on the table other than the primary key failed with ER_NO_REFERENCED_ROW_2. This was due to the fact that NDB checked foreign key constraints before the unique index was updated, so that the constraint check was unable to use the index for locating the row. Now, in such cases, NDB waits until all unique index values have been updated before checking foreign key constraints on the inserted row. (Bug #90644, Bug #27930382)
References: See also: Bug #91965, Bug #28486390.
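A minimal sketch of the affected case (names hypothetical): a self-referencing foreign key targets a unique key rather than the primary key, and a row referencing itself is inserted.

CREATE TABLE fk_t1 (
    id INT NOT NULL PRIMARY KEY,
    uq INT NOT NULL,
    parent_uq INT,
    UNIQUE KEY ux (uq),
    FOREIGN KEY (parent_uq) REFERENCES fk_t1 (uq)
) ENGINE NDB;

-- The inserted row references itself through the unique key; this
-- formerly failed with ER_NO_REFERENCED_ROW_2 and now succeeds.
INSERT INTO fk_t1 VALUES (1, 10, 10);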
Removed all references to the C++ register storage class in the NDB Cluster sources; use of this specifier, which was deprecated in C++11 and removed in C++17, raised warnings when building with recent compilers. (Bug #90110, Bug #27705985)
It was not possible to create an NDB table with PARTITION_BALANCE set to FOR_RA_BY_LDM_X_4. (Bug #89811, Bug #27602352)
References: This issue is a regression of: Bug #81759, Bug #23544301.
Adding an [shm] section to the global configuration file for a cluster with multiple data nodes caused default TCP connections to be lost to the node using the single section. (Bug #89627, Bug #27532407)
Removed a memory leak in the configuration file parser. (Bug #89392, Bug #27440614)
Fixed a number of implicit-fallthrough warnings, warnings raised by uninitialized values, and other warnings encountered when compiling NDB with GCC 7.2.0. (Bug #89254, Bug #89255, Bug #89258, Bug #89259, Bug #89270, Bug #27390714, Bug #27390745, Bug #27390684, Bug #27390816, Bug #27396662)
References: See also: Bug #88136, Bug #26990244.
Node connection states were not always reported correctly by ClusterMgr immediately after reporting a disconnection. (Bug #89121, Bug #27349285)
As a result of the reuse of code intended for send threads when performing an assist send, all of the local release send buffers were released to the global pool, which caused the intended level of the local send buffer pool to be ignored. Now send threads and assisting worker threads follow their own policies for maintaining their local buffer pools. (Bug #89119, Bug #27349118)
When the PGMAN block seized a new record using seizeLast, its return value was not checked, which could cause access to invalid memory. (Bug #89009, Bug #27303191)
TCROLLBACKREP signals were not handled correctly by the DBTC kernel block. (Bug #89004, Bug #27302734)
When sending priority A signals, we now ensure that the number of pending signals is explicitly initialized. (Bug #88986, Bug #27294856)
The internal function ha_ndbcluster::unpack_record() did not perform proper error handling. (Bug #88587, Bug #27150980)
CHECKSUM is not supported for NDB tables, but this was not reflected in the CHECKSUM column of the INFORMATION_SCHEMA.TABLES table, which could potentially assume a random value in such cases. Now the value of this column is always set to NULL for NDB tables, just as it is for InnoDB tables. (Bug #88552, Bug #27143813)
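For example, a query such as the following now reliably reports NULL for NDB tables:

SELECT TABLE_SCHEMA, TABLE_NAME, CHECKSUM
FROM INFORMATION_SCHEMA.TABLES
WHERE ENGINE = 'ndbcluster';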
Removed a memory leak detected when running ndb_mgm -e "CLUSTERLOG ...". (Bug #88517, Bug #27128846)
When terminating, ndb_config did not release all memory which it had used. (Bug #88515, Bug #27128398)
ndb_restore did not free memory properly before exiting. (Bug #88514, Bug #27128361)
In certain circumstances where multiple Ndb objects were being used in parallel from an API node, the block number extracted from a block reference in DBLQH was the same as that of a SUMA block even though the request was coming from an API node. Due to this ambiguity, DBLQH mistook the request from the API node for a request from a SUMA block and failed. This is fixed by checking node IDs before checking block numbers. (Bug #88441, Bug #27130570)
A join entirely within the materialized part of a semijoin was not pushed even if it could have been. In addition, EXPLAIN provided no information about why the join was not pushed. (Bug #88224, Bug #27022925)
References: See also: Bug #27067538.
All known compiler warnings raised by -Werror when building the NDB source code have been fixed. (Bug #88136, Bug #26990244)
When the duplicate weedout algorithm was used for evaluating a semijoin, the result had missing rows. (Bug #88117, Bug #26984919)
References: See also: Bug #87992, Bug #26926666.
NDB did not compile with GCC 7. (Bug #88011, Bug #26933472)
A table used in a loose scan could be used as a child in a pushed join query, leading to possibly incorrect results. (Bug #87992, Bug #26926666)
When representing a materialized semijoin in the query plan, the MySQL Optimizer inserted extra JOIN_TAB objects to represent access to the materialized subquery result. The join pushdown analyzer did not properly set up its internal data structures for these, leaving them uninitialized instead. This meant that later usage of any item objects referencing the materialized semijoin accessed an uninitialized tableno column when accessing a 64-bit tableno bitmask, possibly referring to a point beyond its end, leading to an unplanned shutdown of the SQL node. (Bug #87971, Bug #26919289)
In some cases, a SCAN_FRAGCONF signal was received after a SCAN_FRAGREQ with a close flag had already been sent, clearing the timer. When this occurred, the next SCAN_FRAGREF to arrive caused time tracking to fail. Now in such cases, a check for a cleared timer is performed prior to processing the SCAN_FRAGREF message. (Bug #87942, Bug #26908347)
While deleting an element in Dbacc, or moving it during hash table expansion or reduction, the method used (getLastAndRemove()) could return a reference to a removed element on a released page, which could later be referenced from the functions calling it. This was due to a change brought about by the implementation of dynamic index memory in NDB 7.6.2; previously, the page had always belonged to a single Dbacc instance, so accessing it was safe. This was no longer the case following the change; a page released in Dbacc could be placed directly into the global page pool where any other thread could then allocate it.
Now we make sure that newly released pages in Dbacc are kept within the current Dbacc instance and not given over directly to the global page pool. In addition, the reference to a released page has been removed; the affected internal method now returns the last element by value, rather than by reference. (Bug #87932, Bug #26906640)
References: See also: Bug #87987, Bug #26925595.
When creating a table with a nonexistent conflict detection function, NDB returned an improper error message. (Bug #87628, Bug #26730019)
ndb_top failed to build with the error "HAVE_NCURSESW_H" is not defined. (Bug #87035, Bug #26429281)
In a MySQL Cluster with one MySQL Server configured to write a binary log, failure occurred when creating and using an NDB table with non-stored generated columns. The problem arose only when the product was built with debugging support. (Bug #86084, Bug #25957586)
It was possible to create or alter a STORAGE MEMORY table using a nonexistent tablespace without any error resulting. Such an operation now fails with Error 3510 ER_TABLESPACE_MISSING_WITH_NAME, as intended. (Bug #82116, Bug #23744378)
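A sketch of the now-failing case (table and tablespace names hypothetical); the same applies to the equivalent CREATE TABLE:

ALTER TABLE t1
TABLESPACE no_such_ts   -- nonexistent tablespace
STORAGE MEMORY;         -- now fails with Error 3510 (ER_TABLESPACE_MISSING_WITH_NAME)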
The --hex option did not print trailing 0s of LONGVARBINARY values. (Bug #65560, Bug #14198580)
When the internal function ha_ndbcluster::copy_fk_for_offline_alter() checked dependent objects on a table from which it was supposed to drop a foreign key, it did not perform any filtering for foreign keys, making it possible for it to attempt retrieval of an index or trigger instead, leading to a spurious Error 723 (No such table).