MySQL NDB Cluster 8.0.18 is a new development release of NDB 8.0,
based on MySQL Server 8.0 and including features in version 8.0 of
the NDB storage engine, as well as
fixing recently discovered bugs in previous NDB Cluster releases.
Obtaining NDB Cluster 8.0. NDB Cluster 8.0 source code and binaries can be obtained from https://dev.mysql.com/downloads/cluster/.
For an overview of changes made in NDB Cluster 8.0, see What is New in MySQL NDB Cluster 8.0.
This release also incorporates all bug fixes and changes made in previous NDB Cluster releases, as well as all bug fixes and feature changes which were added in mainline MySQL 8.0 through MySQL 8.0.18 (see Changes in MySQL 8.0.18 (2019-10-14, General Availability)).
Important Change: The 63-byte limit on NDB database and table names has been removed. These identifiers may now take up to 64 bytes, as when using other MySQL storage engines. For more information, see Previous NDB Cluster Issues Resolved in NDB Cluster 8.0. (Bug #44940, Bug #11753491, Bug #27447958)
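As an illustration, a table name of 64 characters (the hypothetical name below consists of 64 'a' characters), which previously exceeded the 63-byte limit for NDB identifiers, is now accepted:

```sql
-- A 64-character table name (hypothetical example); identifiers longer
-- than 63 bytes were formerly rejected for NDB tables.
CREATE TABLE aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa (
  id INT PRIMARY KEY
) ENGINE=NDB;
```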
Important Change: Implemented the NDB_STORED_USER privilege, which enables sharing of users, roles, and privileges across all SQL nodes attached to a given NDB Cluster. This replaces the distributed grant tables mechanism from NDB 7.6 and earlier versions of NDB Cluster, which was removed in NDB 8.0.16 due to its incompatibility with changes made to the MySQL privilege system in MySQL 8.0.
A user or role which has this privilege is propagated, along with its (other) privileges, to a MySQL server (SQL node) as soon as the server connects to the cluster. Changes made to the privileges of the user or role are synchronized immediately with all connected SQL nodes.
NDB_STORED_USER can be granted to users and roles other than reserved accounts such as mysql.infoschema@localhost. A role can be shared, but assigning a shared role to a user does not cause that user to be shared; the NDB_STORED_USER privilege must be granted to the user explicitly in order for the user to be shared between NDB Cluster SQL nodes.
The NDB_STORED_USER privilege is always global and must be granted using ON *.*. This privilege is recognized only if the MySQL server enables support for the NDBCLUSTER storage engine.
For usage information, see the description of NDB_STORED_USER. Privilege Synchronization and NDB_STORED_USER provides additional information on how NDB_STORED_USER and privilege synchronization work. For information on how this change may affect upgrades to NDB 8.0 from previous versions, see Upgrading and Downgrading NDB Cluster. (WL #12637)
References: See also: Bug #29862601, Bug #29996547.
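To illustrate, a hypothetical account can be shared across all SQL nodes by granting the privilege globally, as required by the ON *.* rule described above:

```sql
-- Create an account on one SQL node; 'jon'@'localhost' is a
-- hypothetical example user.
CREATE USER 'jon'@'localhost' IDENTIFIED BY 'password';

-- NDB_STORED_USER is always a global privilege and must be granted
-- using ON *.*; once granted, the user and its privileges are
-- propagated to all connected SQL nodes.
GRANT NDB_STORED_USER ON *.* TO 'jon'@'localhost';
```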
Important Change: The maximum row size for an NDB table is increased from 14000 to 30000 bytes.
As before, only the first 264 bytes of a TEXT column count towards this total.
The maximum offset for a fixed-width column of an NDB table is 8188 bytes; this is also unchanged from previous NDB Cluster releases.
For more information, see Limits Associated with Database Objects in NDB Cluster. (WL #13079, WL #11160)
References: See also: Bug #29485977, Bug #29024275.
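For example, a table definition such as the following hypothetical one, whose row size exceeds the old 14000-byte maximum but fits within the new limit, can now be created:

```sql
-- Roughly 28000 bytes per row (hypothetical example); this exceeds the
-- old 14000-byte maximum but is within the new 30000-byte limit.
CREATE TABLE big_row (
  id INT PRIMARY KEY,
  v1 VARBINARY(14000),
  v2 VARBINARY(13900)
) ENGINE=NDB;
```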
Important Change: A new binary format has been implemented for the NDB management server's cached configuration file, which is intended to support much larger numbers of nodes in a cluster than previously. Prior to this release, the configuration file supported a maximum of 16381 sections; this number is increased to 4G.
Upgrades to the new format should not require any manual intervention, as the management server (and other cluster nodes) can still read the old format. For downgrades from this release or a later one to NDB 8.0.17 or earlier, it is necessary to remove the binary configuration files prior to starting the old management server binary, or to start it using the --initial option.
For more information, see Upgrading and Downgrading NDB Cluster. (WL #12453)
Important Change: The maximum number of data nodes supported in a single NDB cluster is raised in this release from 48 to 144. The range of supported data node IDs is increased in conjunction with this enhancement to 1-144, inclusive.
In previous releases, recommended node IDs for management nodes were 49 and 50. These values are still supported, but, if used, limit the maximum number of data nodes to 142. For this reason, the recommended node ID values for management servers are now 145 and 146.
The maximum total supported number of nodes of all types in a given cluster is 255. This total is unchanged from previous releases.
For a cluster running more than 48 data nodes, it is not possible to downgrade directly to a previous release that supports only 48 data nodes. In such cases, it is necessary to reduce the number of data nodes to 48 or fewer, and to make sure that all data nodes use node IDs that are less than 49.
This change also introduces a new version (v2) of the format used for the data node
sysfile, which records information such as the last global checkpoint index, restart status, and node group membership of each node (see NDB Cluster Data Node File System Directory). (WL #12680, WL #12564, WL #12876)
When using any of these methods, the table column values to be compared must be of exactly the same type, including with respect to length, precision, and scale. In addition, in all cases, NULL is always considered by these methods to be less than any other value. You should also be aware that, when used to compare table column values, NdbScanFilter::cmp() does not support all possible values of BinaryCondition.
For more information, see the descriptions of the individual API methods. (WL #13120)
NDB Client Programs: The dependency of the ndb_delete_all utility on the NDBT library has been removed. This library, used in NDB development for testing, is not required for normal use. The visible change for users is that ndb_delete_all no longer prints NDBT_ProgramExit - following completion of its run. Applications that depend upon this behavior should be updated to reflect this change when upgrading to this release. (WL #13223)
ndb_restore now reports the specific NDB error number and message when it is unable to load a table descriptor from a backup .ctl file. This can happen when attempting to restore a backup taken from a later version of the NDB Cluster software to a cluster running an earlier version, for example, when the backup includes a table using a character set which is unknown to the version of ndb_restore being used to restore it. (Bug #30184265)
References: See also: Bug #29929996.
NDB Cluster's condition pushdown functionality has been extended as follows:
Expressions using any previously allowed comparisons are now supported.
Comparisons between columns in the same table and of the same type are now supported. The columns must be of exactly the same type.
Example: Suppose there are two tables t1 and t2, created as shown here:
CREATE TABLE t1 (a INT, b INT, c CHAR(10), d CHAR(5)) ENGINE=NDB;
CREATE TABLE t2 LIKE t1;
The following joins can now be pushed down to the data nodes:
SELECT * FROM t1 JOIN t2 ON t2.a < t1.a+10;
SELECT * FROM t1 JOIN t2 ON t2.a = t1.a+t1.b;
SELECT * FROM t1 JOIN t2 ON t2.d = SUBSTRING(t1.c,1,5);
SELECT * FROM t1 JOIN t2 ON t2.c = CONCAT('foo',t1.d,'ba');
Supported comparisons are <, <=, =, >=, >, and <>. (Bug #29685643, WL #12956, WL #13121)
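Whether a given condition or join was actually pushed down can be checked with EXPLAIN; a sketch, assuming the t1 and t2 tables shown above:

```sql
-- For NDB tables, the Extra column of EXPLAIN output typically
-- includes a message such as "Using where with pushed condition"
-- (or notes a pushed join) when the push succeeds.
EXPLAIN SELECT * FROM t1 JOIN t2 ON t2.d = SUBSTRING(t1.c,1,5);
```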
NDB Cluster now uses tbl_name_fk_N as the naming pattern for internally generated foreign keys, which is similar to the tbl_name_ibfk_N pattern used by InnoDB. (Bug #96508, Bug #30171959)
References: See also: Bug #30210839.
Added the ndb_schema_dist_lock_wait_timeout system variable to control how long to wait for a schema lock to be released when trying to update the SQL node's local data dictionary, from the NDB data dictionary's metadata, for one or more tables currently in use. If this synchronization has not occurred by the end of this time, the SQL node returns a warning that schema distribution did not succeed; the next time that the table for which distribution failed is accessed, NDB tries once again to synchronize the table metadata. (WL #10164)
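Assuming the variable behaves like other dynamic NDB system variables, it can be inspected and adjusted at runtime; the value used here is purely illustrative:

```sql
-- Check the current timeout.
SHOW VARIABLES LIKE 'ndb_schema_dist_lock_wait_timeout';

-- Raise the timeout on a busy cluster (illustrative value).
SET GLOBAL ndb_schema_dist_lock_wait_timeout = 60;
```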
NDB table objects submitted by the metadata change monitor thread are now automatically checked for any mismatches and synchronized by the NDB binary logging thread. The status variable Ndb_metadata_synced_count, added in this release, shows the number of objects synchronized automatically; it is possible to see which objects have been synchronized by checking the cluster log. In addition, the new status variable Ndb_metadata_blacklist_size indicates the number of objects for which synchronization has failed. (WL #11914)
References: See also: Bug #30000202.
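The new status variables can be checked from any SQL node; for example:

```sql
-- Number of NDB metadata objects synchronized automatically, and the
-- number of objects for which synchronization has failed.
SHOW GLOBAL STATUS LIKE 'Ndb_metadata_synced_count';
SHOW GLOBAL STATUS LIKE 'Ndb_metadata_blacklist_size';
```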
It is now possible to build NDB Cluster for ARM CPUs from the NDB Cluster sources. Currently, we do not provide any precompiled binaries for this platform. (WL #12928)
Start times for the ndb_mgmd management node daemon have been significantly improved as follows:
More efficient handling of properties from configuration data can decrease startup times for the management server by a factor of 6 or more as compared with previous versions.
Host names not present in the management server's hosts file no longer create a bottleneck during startup, making ndb_mgmd start times up to 20 times shorter where these are used.
NDB tables can now be renamed online, using ALGORITHM=INPLACE. (WL #11734)
References: See also: Bug #28609968.
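A minimal sketch of such an online rename, assuming an existing NDB table t1:

```sql
-- Renaming an NDB table online; no table copy is performed, and the
-- table remains available during the operation.
ALTER TABLE t1 RENAME TO t1_new, ALGORITHM=INPLACE;
```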
Important Change: Because the current implementation for node failure handling cannot guarantee that even a single transaction of size MaxNoOfConcurrentOperations is completed in each round, this parameter is once again used to set a global limit on the total number of concurrent operations in all transactions within a single transaction coordinator instance. (Bug #96617, Bug #30216204)
Partitioning; NDB Disk Data: Creation of a partitioned disk data table was unsuccessful due to a missing metadata lock on the tablespace specified in the CREATE TABLE statement. (Bug #28876892)
NDB Disk Data: Tablespaces and data files are not tightly coupled in NDB, in the sense that they are represented by independent NdbDictionary objects. Thus, when metadata is restored using the ndb_restore tool, there was no guarantee that a tablespace and its associated datafile objects were restored at the same time. This led to the possibility that a tablespace mismatch was detected and automatically synchronized to the data dictionary before the datafile was restored to NDB. This issue also applied to log file groups and undo files.
To fix this problem, the metadata change monitor now submits tablespaces and log file groups only if their corresponding datafiles and undo files actually exist in NDB. (Bug #30090080)
NDB Disk Data: When a data node failed following creation and population of an NDB table having columns on disk, but prior to execution of a local checkpoint, it was possible to lose row data from the tablespace. (Bug #29506869)
NDB Cluster APIs: The NDB API examples ndbapi_array_simple.cpp (see NDB API Simple Array Example) and ndbapi_array_using_adapter.cpp (see NDB API Simple Array Example Using Adapter) made assignments directly to a std::vector array instead of using push_back() calls to do so. (Bug #28956047)
MySQL NDB ClusterJ: If ClusterJ was deployed as a separate module of a multi-module web application, when the application tried to create a new instance of a domain object, the exception java.lang.IllegalArgumentException: non-public interface is not defined by the given loader was thrown. This was because ClusterJ always tries to create a proxy class from which the domain object can be instantiated, and the proxy class is an implementation of the domain interface and the protected DomainTypeHandlerImpl::Finalizable interface. The class loaders of these two interfaces were different in this case, as they belonged to different modules running on the web server, so that when ClusterJ tried to create the proxy class using the domain object interface's class loader, the above-mentioned exception was thrown. This fix makes the Finalizable interface public so that the class loader of the web application is able to access it even if it belongs to a different module from that of the domain interface. (Bug #29895213)
MySQL NDB ClusterJ: ClusterJ sometimes failed with a segmentation fault after reconnecting to an NDB Cluster. This was due to ClusterJ reusing old database metadata objects from the old connection. With the fix, those objects are discarded before a reconnection to the cluster. (Bug #29891983)
Faulty calculation of microseconds caused the internal ndb_milli_sleep() function to sleep for too short a time. (Bug #30211922)
Once a data node is started, 95% of its configured DataMemory should be available for normal data, with 5% to spare for use in critical situations. During the node startup process, all of its configured DataMemory is usable for data, in order to minimize the risk that restoring the node data fails because some dynamic memory structure uses more pages for the same data than it did when the node was stopped. For example, a hash table grows differently during a restart than it did previously, since the order of inserts to the table differs from the historical order.
The issue raised in this bug report occurred when a check that the data memory used plus the spare data memory did not exceed the value set for DataMemory failed at the point where the spare memory was reserved. This happened as the state of the data node transitioned from starting to started, when reserving spare pages. After calculating the number of reserved pages to be used for spare memory, and then the number of shared pages (that is, pages from shared global memory) to be used for this, the number of reserved pages already allocated was not taken into consideration. (Bug #30205182)
References: See also: Bug #29616383.
Removed a memory leak found in the ndb_import utility. (Bug #30192989)
It was not possible to use ndb_restore and a backup taken from an NDB 8.0 cluster to restore to a cluster running NDB 7.6. (Bug #30184658)
References: See also: Bug #30221717.
When starting, a data node's local sysfile was not updated between the first completed local checkpoint and start phase 50. (Bug #30086352)
In the BACKUP block, the assumption was made that the first record in c_backups was the local checkpoint record, which is not always the case. Now NDB loops through the records in c_backups to find the (correct) LCP record instead. (Bug #30080194)
During node takeover for the master it was possible to end in the state LCP_STATUS_IDLE while the remaining data nodes were reporting their state as LCP_TAB_SAVED. This led to failure of the node when attempting to handle reception of an LCP_COMPLETE_REP signal, since this is not expected when idle. Now in such cases local checkpoint handling is done in a manner that ensures that this node finishes in the proper state (LCP_TAB_SAVED). (Bug #30032863)
When a MySQL Server built with NDBCLUSTER support was run on Solaris/x86, it failed during schema distribution. The root cause of the problem was an issue with the Developer Studio compiler used to build binaries for this platform when optimization level -xO2 was used. This issue is fixed by using a lower optimization level for NDBCLUSTER built for Solaris/x86. (Bug #30031130)
References: See also: Bug #28585914, Bug #30014295.
In some cases, free() was used directly to deallocate ndb_mgm_configuration objects instead of calling ndb_mgm_destroy_configuration(), which correctly uses delete for deallocation. (Bug #29998980)
Default configuration sections did not have the configuration section types set when unpacked into memory, which caused a memory leak since this meant that the section destructor would not destroy the entries for these sections. (Bug #29965125)
No error was propagated when NDB failed to discover a table due to the table format being old and no longer supported, which could cause the NDB handler to retry the discovery operation endlessly and thereby hang. (Bug #29949096, Bug #29934763)
During upgrade of an NDB Cluster when half of the data nodes were running NDB 7.6 while the remainder were running NDB 8.0, attempting to shut down those nodes which were running NDB 7.6 led to failure of one node with the error CHECK FAILEDNODEPTR.P->DBLQHFAI. (Bug #29912988, Bug #30141203)
Altering a table in the middle of an ongoing transaction caused a table discovery operation which led to the transaction being committed prematurely; in addition, no error was returned when performing further updates as part of the same transaction.
Now in such cases, the table discovery operation fails when a transaction is in progress. (Bug #29911440)
When performing a local checkpoint (LCP), a table's schema version was intermittently read as 0, which caused NDB LCP handling to treat the table as though it were being dropped. This could affect rebuilding of indexes offline by ndb_restore while the table was in the TABLE_READ_ONLY state. Now the function reading the schema version (getCreateSchemaVersion()) no longer changes it while the table is read-only. (Bug #29910397)
When an error occurred on an SQL node during schema distribution, information about this was written to the error log, but no indication was provided by the mysql client that the DDL statement in question was unsuccessful. Now in such cases, one or more generic warnings are displayed by the client to indicate that a given schema distribution operation has not been successful, with further information available in the error log of the originating SQL node. (Bug #29889869)
Errors and warnings pushed to the execution thread during metadata synchronization and metadata change detection were not properly logged and cleared. (Bug #29874313)
Altering a normal column to a stored generated column was performed online even though this is not supported. (Bug #29862463)
A pushed join with ORDER BY did not always return the rows of the result in the specified order. This could occur when the optimizer used an ordered index to provide the ordering and the index used a column from the table that served as the root of the pushed join. (Bug #29860378)
A number of issues in the Backup block for local checkpoints (LCPs) were found and fixed, including the following:
Bytes written to LCP part files were not always included in the LCP byte count.
The maximum record size for the buffer used for all LCP part files was not updated in all cases in which the table maximum record size had changed.
LCP surfacing could occur for LCP scans at times other than when receiving
It was possible in some cases for the table currently being scanned to be altered in the middle of a scan request; this behavior is not supported.
References: See also: Bug #29485977.
The requestInfo fields for the long and short forms of the LQHKEYREQ signal had different definitions; bits used for the key length in the short version were reused for flags in the long version, since the key length is implicit in the section length of the long version of the signal. It was nevertheless possible for long LQHKEYREQ signals to contain a key length in these same bits, which could be misinterpreted by the receiving local query handler, potentially leading to errors. Checks have now been implemented to make sure that this no longer happens. (Bug #29820838)
The list of dropped shares could hold only one dropped NDB_SHARE instance for each key, which prevented NDB_SHARE instances with the same key from being dropped multiple times while handlers held references to those NDB_SHARE instances. This interfered with keeping track of the memory allocated and being able to release it if mysqld shut down without all handlers having released their references to the shares. To resolve this issue, the dropped share list has been changed to use a list type which allows more than one NDB_SHARE with the same key to exist at the same time. (Bug #29812659, Bug #29812613)
Removed an ndb_restore compile-time dependency on table names that were defined by the ndbcluster plugin. (Bug #29801100)
When creating a table in parallel on multiple SQL nodes, a race condition could occur between checking that the table existed and opening the table, which caused CREATE TABLE IF NOT EXISTS to fail with Error 1. This was the result of two issues, described with their fixes here:
Opening a table whose NDB_SHARE did not exist returned the non-descriptive error message ERROR 1296 (HY000): Got error 1 'Unknown error code' from NDBCLUSTER. This is fixed with a warning describing the problem in more detail, along with a more sensible error code.
It was possible to open a table before schema synchronization was completed. This is fixed with a warning better describing the problem, along with an error indicating that the cluster is not yet ready.
In addition, this fixes a related issue in which creating indexes sometimes also failed with Error 1. (Bug #29793534, Bug #29871321)
Previously, for a pushed condition, every request sent to NDB for a given table caused the generation of a new instance of NdbInterpretedCode. When joining tables, generation of multiple requests for all tables following the first table in the query plan is very likely; if the pushed condition had no dependencies on prior tables in the query plan, identical instances of NdbInterpretedCode were generated for each request, at a significant cost in wasted CPU cycles. Now such pushed conditions are identified, and the required NdbInterpretedCode object is generated only once and reused for every request sent for this table, without the need for generating new code each time.
This change also makes it possible for Scan Filter too large errors to be detected and set during query optimization, which corrects cases where the query plan shown was inaccurate because the indicated push of a condition later had to be undone during the execution phase. (Bug #29704575)
Some instances of NdbScanFilter used in pushdown conditions were not generated properly, due to FLOAT values being represented internally as having zero length. This led to more than the expected number of rows being returned from NDB, as shown by the value of Ndb_api_read_row_count. Because the condition was re-evaluated by mysqld when generation of the scan filter failed, the end result was still correct in such cases, but any performance gain expected from pushing the condition was lost. (Bug #29699347)
When creating a table, NDB did not always determine correctly whether it exceeded the maximum allowed record size. (Bug #29698277)
NDB index statistics are calculated based on the topology of one fragment of an ordered index; the fragment chosen in any particular index is decided at index creation time, both when the index is originally created and when a node or system restart has recreated the index locally. This calculation is based in part on the number of fragments in the index, which can change when a table is reorganized. This means that the next time the node is restarted, this node may choose a different fragment, so that no fragments, one fragment, or two fragments are used to generate index statistics, resulting in errors.
This issue is solved by modifying the online table reorganization to recalculate the chosen fragment immediately, so that all nodes are aligned before and after any subsequent restart. (Bug #29534647)
As part of initializing schema distribution, each data node must maintain a subscriber bitmap providing information about the API nodes that are currently subscribed to this data node. Previously, the size of the bitmap was hard-coded to MAX_NODES (256), which meant that large amounts of memory might be allocated but never used when the cluster had significantly fewer nodes than this value. Now the size of the bitmap is determined by checking the maximum API node ID used in the cluster configuration file. (Bug #29270539)
The removal of the mysql_upgrade utility and its replacement by mysqld --initialize means that the upgrade procedure is executed much earlier than previously, possibly before NDB is fully ready to handle queries. This caused migration of the MySQL privilege tables from InnoDB to fail. (Bug #29205142)
During a restart when the data nodes had started but not yet elected a president, the management server received a node ID already in use error, which resulted in excessive retries and logging. This is fixed by introducing a new error 1705 Not ready for connection allocation yet for this case.
During a restart when the data nodes had not yet completed node failure handling, a spurious Failed to allocate nodeID error was returned. This is fixed by adding a check to detect an incomplete node start and to return error 1703 Node failure handling not completed instead.
As part of this fix, the frequency of retries has been reduced for not ready to alloc nodeID errors, an error insert has been added to simulate a slow restart for testing purposes, and log messages have been reworded to indicate that the relevant node ID allocation errors are minor and only temporary. (Bug #27484514)
NDB on Windows and macOS platforms did not always treat table names using mixed case consistently with lower_case_table_names = 2. (Bug #27307793)
The process of selecting the transaction coordinator checked for “live” data nodes but not necessarily for those that were actually available. (Bug #27160203)
The automatic metadata synchronization mechanism requires the binary logging thread to acquire the global schema lock before an object can be safely synchronized. When another thread had acquired this lock at the same time, the binary logging thread waited for up to TransactionDeadlockDetectionTimeout milliseconds and then returned failure if it was unsuccessful in acquiring the lock, which was unnecessary and which negatively impacted performance.
This has been fixed by ensuring that the binary logging thread either acquires the global schema lock immediately or else returns with an error. As part of this work, a new OO_NOWAIT option has also been implemented in the NDB API. (Bug #29740946)