The following sections describe changes in the implementation of NDB Cluster in MySQL NDB Cluster 8.0 through 8.0.21, as compared to earlier release series. NDB Cluster 8.0 is available as a General Availability (GA) release, beginning with NDB 8.0.19. NDB Cluster 7.6 and 7.5 are previous GA releases still supported in production; for information about NDB Cluster 7.6, see What is New in NDB Cluster 7.6. For similar information about NDB Cluster 7.5, see What is New in NDB Cluster 7.5. NDB Cluster 7.4 and 7.3 are previous GA releases still supported in production, although we recommend that new deployments for production use NDB Cluster 8.0; see MySQL NDB Cluster 7.3 and NDB Cluster 7.4.
Major changes and new features in NDB Cluster 8.0 which are likely to be of interest are shown in the following list:
Compatibility enhancements. The following changes reduce longstanding nonessential differences in NDB behavior as compared to that of other MySQL storage engines:
Development in parallel with MySQL server. Beginning with this release, MySQL NDB Cluster is being developed in parallel with the standard MySQL 8.0 server under a new unified release model with the following features:
NDB 8.0 is developed in, built from, and released with the MySQL 8.0 source code tree.
The numbering scheme for NDB Cluster 8.0 releases follows the scheme for MySQL 8.0, starting with version 8.0.13.
Building the source with NDB support appends -cluster to the version string returned by mysql -V, as shown here:
shell> mysql -V
mysql  Ver 8.0.21-cluster for Linux on x86_64 (Source distribution)
NDB binaries continue to display both the MySQL Server version and the NDB engine version, like this:
shell> ndb_mgm -V
MySQL distrib mysql-8.0.21 ndb-8.0.21, for Linux (x86_64)
In MySQL Cluster NDB 8.0, these two version numbers are always the same.
To build the MySQL 8.0.13 (or later) source with NDB Cluster support, use the CMake option
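As a sketch, and assuming an out-of-source build directory beside the MySQL 8.0 source tree, configuring with NDB support uses the WITH_NDBCLUSTER CMake option (other options may be required for your platform):

```shell
# From an empty build directory beside the MySQL 8.0 source tree;
# -DWITH_NDBCLUSTER=ON enables the NDB storage engine and cluster binaries.
cmake ../mysql-server -DWITH_NDBCLUSTER=ON
make -j8
```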
Database and table names. As of NDB 8.0.18, the 63-byte limit on identifiers for databases and tables is removed. These identifiers can now use up to 64 bytes, as for such objects using other MySQL storage engines. See Section 3.7.11, “Previous NDB Cluster Issues Resolved in NDB Cluster 8.0”.
Generated names for foreign keys.
NDB (version 8.0.18 and later) now uses the pattern
for naming internally generated foreign keys. This is similar to the pattern used by
Schema and metadata distribution and synchronization. NDB 8.0 makes use of the MySQL data dictionary to distribute schema information to SQL nodes joining a cluster and to synchronize new schema changes between existing SQL nodes. The following list describes individual enhancements relating to this integration work:
Schema distribution enhancements. The NDB schema distribution coordinator, which handles schema operations and tracks their progress, was extended in NDB 8.0.17 to ensure that resources used during a schema operation are released at its conclusion. Previously, some of this work was done by the schema distribution client; this was changed because the client did not always have all of the needed state information, which could lead to resource leaks when the client abandoned a schema operation prior to completion without informing the coordinator.
To help fix this issue, schema operation timeout detection has been moved from the schema distribution client to the coordinator, providing the coordinator with an opportunity to clean up any resources used during the schema operation. The coordinator now checks ongoing schema operations for timeout at regular intervals, and marks participants that have not yet completed a given schema operation as failed when detecting timeout. It also provides suitable warnings whenever a schema operation timeout occurs. (It should be noted that, after such a timeout is detected, the schema operation itself continues.) Additional reporting is done by printing a list of active schema operations at regular intervals whenever one or more of these operations is ongoing.
As an additional part of this work, a new mysqld option --ndb-schema-dist-timeout makes it possible to set the length of time to wait before a schema operation is marked as having timed out.
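For example, the option can be set in an SQL node's configuration file; the value shown here is illustrative:

```ini
[mysqld]
# Mark a distributed schema operation as timed out after 240 seconds
# (illustrative value; the option takes a number of seconds).
ndb-schema-dist-timeout=240
```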
Disk data file distribution. Beginning with NDB Cluster 8.0.14, NDB uses the MySQL data dictionary to make sure that disk data files and related constructs such as tablespaces and log file groups are correctly distributed between all connected SQL nodes.
Schema synchronization of tablespace objects. When a MySQL Server connects as an SQL node to an NDB cluster, it checks its data dictionary against the information found in the NDB dictionary.
Previously, the only NDB objects synchronized on connection of a new SQL node were databases and tables; MySQL NDB Cluster 8.0.14 and later also implement schema synchronization of disk data objects, including tablespaces and log file groups. Among other benefits, this eliminates the possibility of a mismatch between the MySQL data dictionary and the NDB dictionary following a native backup and restore, in which tablespaces and log file groups were restored to the NDB dictionary but not to the MySQL Server's data dictionary.
It is also no longer possible to issue a CREATE TABLE statement that refers to a nonexistent tablespace; such a statement now fails with an error.
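A minimal sketch of the new behavior, using hypothetical table and tablespace names:

```sql
-- ts_missing has not been created with CREATE TABLESPACE, so this
-- statement now fails with an error instead of succeeding silently:
CREATE TABLE dd_t1 (
    a INT PRIMARY KEY,
    b VARCHAR(255)
) ENGINE=NDB TABLESPACE ts_missing STORAGE DISK;
```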
Database DDL synchronization enhancements. Work done in NDB 8.0.17 ensures that synchronization of databases by newly joined (or rejoined) SQL nodes with those on existing SQL nodes makes proper use of the data dictionary, so that any database-level operations (CREATE DATABASE, ALTER DATABASE, or DROP DATABASE) that may have been missed by this SQL node are correctly duplicated on it when it connects (or reconnects) to the cluster.
As part of the schema synchronization procedure performed when starting, an SQL node now compares all databases on the cluster's data nodes with those in its own data dictionary and, if any of these is found to be missing from the SQL node's data dictionary, the SQL node installs it locally by executing a CREATE DATABASE statement. A database thus created uses the default MySQL Server database properties (such as those determined by collation_database) that are in effect on this SQL node at the time the statement is executed.
NDB metadata change detection and synchronization. NDB 8.0.16 implements a new mechanism for detecting updates to metadata for data objects such as tables, tablespaces, and log file groups in the MySQL data dictionary. This is done using a thread, the NDB metadata change monitor thread, which runs in the background and checks periodically for inconsistencies between the NDB dictionary and the MySQL data dictionary.
The monitor performs metadata checks every 60 seconds by default. The polling interval can be adjusted by setting the value of the ndb_metadata_check_interval system variable; polling can be disabled altogether by setting the ndb_metadata_check system variable to OFF. A status variable (also added in NDB 8.0.16), Ndb_metadata_detected_count, shows the number of times since mysqld was last started that inconsistencies have been detected.
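For example, to poll more frequently and then examine the detection counter (variable names as given above):

```sql
-- Check for NDB dictionary/data dictionary mismatches every 30 seconds:
SET GLOBAL ndb_metadata_check_interval = 30;

-- Or disable polling altogether:
SET GLOBAL ndb_metadata_check = OFF;

-- Number of mismatches detected since mysqld was last started:
SHOW GLOBAL STATUS LIKE 'Ndb_metadata_detected_count';
```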
Beginning in version 8.0.18, NDB table, log file group, and tablespace objects submitted by the metadata change monitor thread during operations following startup are automatically checked for mismatches and synchronized.
NDB 8.0.18 also adds two status variables relating to automatic synchronization: Ndb_metadata_synced_count shows the number of objects synchronized automatically, and Ndb_metadata_blacklist_size indicates the number of objects for which synchronization has failed. In addition, you can see which objects have been synchronized by inspecting the cluster log.
NDB 8.0.19 further enhances this functionality by adding databases to the objects for which changes are detected and synchronized. Only databases actually used by NDB tables are handled in this way; other databases which may be present in the MySQL data dictionary are ignored. This eliminates a previous requirement, in the case where a table existed in NDB but the table and the database to which it belonged did not exist on the SQL node, to create the database manually; now, in such cases, the database and all NDB tables belonging to it should be created on the SQL node automatically.
Beginning with NDB 8.0.21, more detailed information about the current state of automatic synchronization than can be obtained from log messages or status variables is provided by two new tables added to the MySQL Performance Schema. The tables are listed here:
ndb_sync_pending_objects: Contains information about database objects for which mismatches have been detected between the NDB dictionary and the MySQL data dictionary (and which have not been blacklisted from automatic synchronization).
ndb_sync_excluded_objects: Contains information about NDB database objects which have been blacklisted because they cannot be synchronized between the NDB dictionary and the MySQL data dictionary, and thus require manual intervention.
A row in one of these tables provides the database object's parent schema, name, and type. Types of objects include schemas, tablespaces, log file groups, and tables. (If the object is a log file group or tablespace, the parent schema is
NULL.) In addition, the ndb_sync_excluded_objects table shows the reason for which the object has been blacklisted.
These tables are present only if NDBCLUSTER storage engine support is enabled. For more information about these tables, see Performance Schema NDB Cluster Tables.
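For example, pending and excluded objects can be inspected directly (the row layout is as described above: parent schema, name, and type):

```sql
-- Objects awaiting automatic synchronization:
SELECT * FROM performance_schema.ndb_sync_pending_objects;

-- Objects requiring manual intervention, including the reason:
SELECT * FROM performance_schema.ndb_sync_excluded_objects;
```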
Changes in NDB table extra metadata. In NDB 8.0.14 and later, the extra metadata property of an NDB table is used for storing serialized metadata from the MySQL data dictionary, rather than the binary representation of the table as in previous versions. (This was a .frm file, which is no longer used by the MySQL Server; see MySQL Data Dictionary.) As part of the work to support this change, the available size of the table's extra metadata has been increased. This means that NDB tables created in NDB Cluster 8.0.14 and later are not compatible with previous NDB Cluster releases. Tables created in previous releases can be used with NDB 8.0.14 and later, but cannot be opened afterwards by an earlier version.
For more information, see Section 4.8, “Upgrading and Downgrading NDB Cluster”.
On-the-fly upgrades of tables using .frm files. A table created in NDB 7.6 and earlier contains metadata in the form of a compressed .frm file, which is no longer supported in MySQL 8.0. To facilitate online upgrades to NDB 8.0, NDB performs on-the-fly translation of this metadata and writes it into the MySQL Server's data dictionary, which enables the mysqld in NDB Cluster 8.0 to work with the table without preventing subsequent use of the table by a previous version of the NDB software.
Once a table's structure has been modified in NDB 8.0, its metadata is stored using the data dictionary, and it can no longer be accessed by NDB 7.6 and earlier.
This enhancement also makes it possible to restore an NDB backup made using an earlier version to a cluster running NDB 8.0 (or later).
Synchronization of user privileges with NDB_STORED_USER. A new mechanism for sharing and synchronizing users, roles, and privileges between SQL nodes is available beginning with NDB 8.0.18, using the NDB_STORED_USER privilege. Distributed privileges as implemented in NDB 7.6 and earlier (see Distributed Privileges Using Shared Grant Tables) are no longer supported.
Once a user account is created on an SQL node, the user and its privileges can be stored in NDB and thus shared between all SQL nodes in the cluster by issuing a GRANT statement such as this one:
GRANT NDB_STORED_USER ON *.* TO 'jon'@'localhost';
NDB_STORED_USER always has global scope and must be granted using ON *.*. System reserved accounts such as mysql.infoschema@localhost cannot be assigned this privilege.
Roles can also be shared between SQL nodes by issuing the appropriate GRANT NDB_STORED_USER statement. Assigning such a role to a user does not cause the user to be shared; the NDB_STORED_USER privilege must be granted to each user explicitly.
A user or role having
NDB_STORED_USER, along with its privileges, is shared with all SQL nodes as soon as they join a given NDB Cluster. Changes to the privileges of the user or role are synchronized immediately with all connected SQL nodes. It is possible to make such changes from any connected SQL node, but recommended practice is to do so from a designated SQL node only, since the order of execution of statements affecting privileges from different SQL nodes cannot be guaranteed to be the same on all SQL nodes.
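A sketch of sharing a role and a user; the account and role names here are examples:

```sql
-- Create a role and share it between all SQL nodes:
CREATE ROLE cluster_admin;
GRANT NDB_STORED_USER ON *.* TO cluster_admin;

-- Granting the role to a user does NOT share the user itself;
-- the user must be granted NDB_STORED_USER explicitly as well:
CREATE USER 'jon'@'localhost' IDENTIFIED BY 'password';
GRANT cluster_admin TO 'jon'@'localhost';
GRANT NDB_STORED_USER ON *.* TO 'jon'@'localhost';
```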
Implications for upgrades. Due to changes in the MySQL server's privilege system (see Grant Tables), privilege tables using the NDB storage engine do not function correctly in NDB 8.0. It is safe, but not necessary, to retain such privilege tables created in NDB 7.6 or earlier; they are no longer used for access control. Beginning with NDB 8.0.16, a mysqld acting as an SQL node and detecting such tables in NDB writes a warning to the MySQL server log and creates InnoDB shadow tables local to itself; such shadow tables are created on each MySQL server connected to the cluster. When performing an upgrade from NDB 7.6 or earlier, the privilege tables using NDB can be removed safely using ndb_drop_table once all MySQL servers acting as SQL nodes have been upgraded (see Section 4.8, “Upgrading and Downgrading NDB Cluster”).
The ndb_restore utility's --restore-privilege-tables option is deprecated but continues to be honored in NDB 8.0, and can still be used to restore distributed privilege tables present in a backup taken from a previous release of NDB Cluster to a cluster running NDB 8.0. These tables are handled as described in the preceding paragraph.
Shared users and grants are stored in the ndb_sql_metadata table, which ndb_restore does not restore by default in NDB 8.0.19 and later; you can specify the --include-stored-grants option to cause it to do so.
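For example, restoring data together with shared users and grants might look like this; the node ID, backup ID, and path are placeholders:

```shell
# --include-stored-grants causes the shared users and grants in the
# ndb_sql_metadata table to be restored along with the data.
ndb_restore --nodeid=1 --backupid=1 \
    --restore-data --include-stored-grants \
    --backup-path=/var/lib/mysql-cluster/BACKUP/BACKUP-1
```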
INFORMATION_SCHEMA changes. The following changes are made in the display of information regarding Disk Data files in the INFORMATION_SCHEMA database:
Tablespaces and log file groups are no longer represented in the FILES table. (These constructs are not actually files.)
Each data file is now represented by a single row in the FILES table. Each undo log file is also now represented in this table by one row only. (Previously, a row was displayed for each copy of each of these files on each data node.)
INFORMATION_SCHEMA tables are now populated with tablespace statistics for MySQL Cluster tables. (Bug #27167728)
Error information with ndb_perror. The deprecated --ndb option for perror has been removed. Instead, use ndb_perror to obtain error message information from NDB error codes. (Bug #81704, Bug #81705, Bug #23523926, Bug #23523957)
Condition pushdown enhancements. Previously, condition pushdown was limited to predicate terms referring to column values from the same table to which the condition was being pushed. In NDB 8.0.16, this restriction is removed such that column values from tables earlier in the query plan can also be referred to from pushed conditions. As of NDB 8.0.18, joins comparing column expressions are supported, as are comparisons between columns in the same table. Columns and column expressions to be compared must be of exactly the same type; this means they must also be of the same signedness, length, character set, precision, and scale, whenever these attributes apply.
Pushing down larger parts of a condition allows more rows to be filtered out by the data nodes, thereby reducing the number of rows which mysqld must handle during join processing. Another benefit of these enhancements is that filtering can be performed in parallel in the LDM threads, rather than in a single mysqld process on an SQL node; this has the potential to improve query performance significantly.
Existing rules for type compatibility between column values being compared continue to apply (see Engine Condition Pushdown Optimization).
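A sketch of a join whose condition can now be pushed; t1 and t2 are hypothetical NDB tables whose compared columns have exactly matching types:

```sql
-- The condition t2.b = t1.b refers to a column from a table earlier
-- in the query plan, which NDB 8.0.16 and later can push down to the
-- data nodes; EXPLAIN shows whether the join was in fact pushed.
EXPLAIN FORMAT=TREE
SELECT *
FROM t1
JOIN t2 ON t2.a = t1.a AND t2.b = t1.b;
```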
Increase in maximum row size. NDB 8.0.18 increases the maximum number of bytes that can be stored in an NDBCLUSTER table from 14000 to 30000 bytes.
The maximum offset for a fixed-width column of an NDB table is 8188 bytes; this is unchanged from releases prior to 8.0.18.
See Section 3.7.5, “Limits Associated with Database Objects in NDB Cluster”, for more information.
ndb_mgm SHOW command and single user mode. Beginning with NDB 8.0.17, when the cluster is in single user mode, the output of the management client SHOW command indicates which API or SQL node has exclusive access while this mode is in effect.
Online column renames. Beginning with NDB 8.0.18, columns of NDB tables can be renamed online, using ALGORITHM=INPLACE. See Section 7.14, “Online Operations with ALTER TABLE in NDB Cluster”, for more information.
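For example (table and column names are hypothetical):

```sql
-- Rename a column online; no table copy is performed, and concurrent
-- DML on the NDB table is not blocked.
ALTER TABLE t1 RENAME COLUMN old_name TO new_name, ALGORITHM=INPLACE;
```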
Improved ndb_mgmd startup times. Start times for the management node daemon (ndb_mgmd) have been significantly improved in NDB 8.0.18 and later, in the following ways:
Due to replacing the list data structure formerly used by ndb_mgmd for handling node properties from configuration data with a hash table, overall startup times for the management server have been decreased by a factor of 6 or more.
In addition, in cases where data node and SQL node host names not present in the management server's hosts file are used in the cluster configuration file, ndb_mgmd start times can be up to 20 times shorter than was previously the case.
NDB API enhancements. Beginning with NDB 8.0.18, NdbScanFilter::cmp() and several comparison methods of NdbInterpretedCode can be used to compare table column values with each other. The affected NdbInterpretedCode methods are listed here:
For all of the methods just listed, table column values to be compared must be of exactly matching types, including with respect to length, precision, signedness, scale, character set, and collation, as applicable.
See the descriptions of the individual API methods for more information.
Offline multithreaded index builds. It is now possible to specify a set of cores to be used for I/O threads performing offline multithreaded builds of ordered indexes, as opposed to normal I/O duties such as file I/O, compression, or decompression. “Offline” in this context refers to the building of ordered indexes performed while the parent table is not being written to; such building takes place when an NDB cluster performs a node or system restart, or as part of restoring a cluster from backup using ndb_restore.
In addition, the default behavior for offline index build work is modified to use all cores available to ndbmtd, rather than limiting itself to the core reserved for the I/O thread. Doing so can improve restart and restore times and performance, availability, and the user experience.
This enhancement is implemented as follows:
The default value for BuildIndexThreads is changed from 0 to 128. This means that offline ordered index builds are now multithreaded by default.
The default value for TwoPassInitialNodeRestartCopy is changed from false to true. This means that an initial node restart first copies all data from a “live” node to the one that is starting—without creating any indexes—then builds the ordered indexes offline, and then again synchronizes its data with the live node; that is, data is synchronized twice, with the indexes built offline between the two synchronizations. This causes an initial node restart to behave more like the normal restart of a node, and reduces the time required for building indexes.
A new thread type (idxbld) is defined for the ThreadConfig configuration parameter, to allow locking of offline index build threads to specific CPUs.
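A sketch of a ThreadConfig setting using the new thread type; the CPU numbers are illustrative, and the exact attribute syntax should be checked against the ThreadConfig documentation:

```ini
[ndbd default]
# Bind four LDM threads to CPUs 1-4 and lock offline index build
# (idxbld) threads to CPUs 5-8 (illustrative CPU numbers):
ThreadConfig=main,ldm={count=4,cpubind=1,2,3,4},idxbld={cpubind=5,6,7,8}
```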
NDBnow distinguishes the thread types that are accessible to
ThreadConfigby these two criteria:
Whether the thread is an execution thread. Threads of type send are execution threads; thread types
Whether the allocation of the thread to a given task is permanent or temporary. Currently all thread types except
For additional information, see the descriptions of the indicated parameters in the Manual. (Bug #25835748, Bug #26928111)
logbuffers table backup process information. When performing an NDB backup, the ndbinfo.logbuffers table now displays information regarding buffer usage by the backup process on each data node. This is implemented as rows reflecting two new log types in addition to DD-UNDO. One of these rows has the log type BACKUP-DATA, which shows the amount of data buffer used during backup to copy fragments to backup files. The other row has the log type BACKUP-LOG, which displays the amount of log buffer used during the backup to record changes made after the backup has started. One of each of these log_type rows is shown in the logbuffers table for each data node in the cluster. Rows having these two log types are present in the table only while an NDB backup is currently in progress. (Bug #25822988)
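For example, backup buffer usage can be watched while a backup runs (column names as in the ndbinfo.logbuffers table):

```sql
-- These rows appear only while an NDB backup is in progress.
SELECT node_id, log_type, total, used
FROM ndbinfo.logbuffers
WHERE log_type IN ('BACKUP-DATA', 'BACKUP-LOG');
```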
String hashing improvements. Prior to NDB 8.0, all string hashing was based on first transforming the string into a normalized form, then MD5-hashing the resulting binary image. This could give rise to some performance problems, for the following reasons:
The normalized string is always space padded to its full length. For a
VARCHAR, this often involved adding more spaces than there were characters in the original string.
The string libraries were not optimized for this space padding, which added considerable overhead in some use cases.
The padding semantics varied between character sets, some of which were not padded to their full length.
The transformed string could become quite large, even without space padding; some Unicode 9.0 collations can transform a single code point into 100 bytes or more of character data.
Subsequent MD5 hashing consisted mainly of padding with spaces, and was not particularly efficient, possibly causing additional performance penalties by flushing significant portions of the L1 cache.
Now, a collation provides its own hash function, which hashes the string directly without first creating a normalized string. In addition, for a Unicode 9.0 collation, the hash is computed without padding. NDB now takes advantage of this built-in function whenever hashing a string identified as using a Unicode 9.0 collation.
Since, for other collations, there are existing databases which are hash partitioned on the transformed string, NDB continues to employ the previous method for hashing strings that use those collations, to maintain compatibility. (Bug #89590, Bug #89604, Bug #89609, Bug #27515000, Bug #27523758, Bug #27522732)
RESET MASTER changes. Because the MySQL Server now executes RESET MASTER with a global read lock, the behavior of this statement when used with NDB Cluster has changed in the following two respects:
It is no longer guaranteed to be synchronous; that is, it is now possible that a read coming immediately before RESET MASTER is issued may not be logged until after the binary log has been rotated.
It now behaves in exactly the same fashion, whether the statement is issued on the same SQL node that is writing the binary log, or on a different SQL node in the same cluster.
ndb_log_bin default. Beginning with NDB 8.0.16, the default value of the ndb_log_bin system variable has changed from
Dynamic transactional resource allocation. Allocation of resources in the transaction coordinator (see The DBTC Block) is now performed using dynamic memory pools. This means that resource allocation determined by data node configuration parameters such as TransactionBufferMemory is now done in such a way that, as long as the load represented by each of these parameters stays within the target load for all such resources, these resources can be limited so as not to exceed the total resources available.
As part of this work, several new data node parameters controlling transactional resources in DBTC, listed here, have been added:
See the descriptions of the parameters just listed for further information.
Backups using multiple LDMs per data node. NDB backups can now be performed in a parallel fashion on individual data nodes using multiple local data managers (LDMs). (Previously, backups were done in parallel across data nodes, but were always serial within data node processes.) No special syntax is required for the START BACKUP command in the ndb_mgm client to enable this feature, but all data nodes must be using multiple LDMs. This means that data nodes must be running ndbmtd (ndbd is single-threaded and thus always has only one LDM) and must be configured to use multiple LDMs before taking the backup; you can do this by choosing an appropriate setting for one of the multi-threaded data node configuration parameters.
Backups using multiple LDMs create subdirectories, one per LDM, under the BACKUP/BACKUP-backup_id directory. ndb_restore now detects these subdirectories automatically and, if they exist, attempts to restore the backup in parallel; see Section 6.23.2, “Restoring from a backup taken in parallel”, for details. (Single-threaded backups are restored as in previous versions of NDB.) It is also possible to restore backups taken in parallel using an ndb_restore binary from a previous version of NDB Cluster by modifying the usual restore procedure; “Restoring a parallel backup serially” provides information on how to do this.
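As noted above, no special syntax is needed; a parallel backup is started in the usual way, for example:

```shell
# Start a backup from the management client; it is performed in
# parallel across LDMs automatically when the data nodes run ndbmtd
# configured with multiple LDMs.
ndb_mgm -e "START BACKUP"
```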
ARM support (source only). Beginning with NDB 8.0.18, it is possible to build NDB from source for 64-bit ARM CPUs. Currently, this support is source-only, and we do not provide any precompiled binaries for this platform.
Binary configuration file enhancements. Beginning with NDB 8.0.18, a new format is used for the management server's binary configuration file. Previously, a maximum of 16381 sections could appear in the cluster configuration file; now the maximum number of sections is 4G. This is intended to support larger numbers of nodes in a cluster than was possible before this change.
Upgrades to the new format are relatively seamless, and should seldom if ever require manual intervention, as the management server continues to be able to read the old format without issue. A downgrade from NDB 8.0.18 (or later) to an older version of the NDB Cluster software requires manual removal of any binary configuration files or, alternatively, starting the older management server binary with the
For more information, see Section 4.8, “Upgrading and Downgrading NDB Cluster”.
Increased number of data nodes. NDB 8.0.18 increases the maximum number of data nodes supported per cluster to 144 (previously, this was 48). Data nodes can now use node IDs in the range 1 to 144, inclusive.
Previously, the recommended node IDs for management nodes were 49 and 50. These are still supported for management nodes, but using them as such limits the maximum number of data nodes to 142; for this reason, it is now recommended that node IDs 145 and 146 be used for management nodes.
RedoOverCommitCounter and RedoOverCommitLimit changes. Due to ambiguities in the semantics of setting them to 0, the minimum value for each of the data node configuration parameters RedoOverCommitCounter and RedoOverCommitLimit has been increased to 1, beginning with NDB 8.0.19.
ndb_autoincrement_prefetch_sz changes. In NDB 8.0.19, the default value of the ndb_autoincrement_prefetch_sz server system variable is increased to 512.
Changes in parameter maximums and defaults. NDB 8.0.19 makes the following changes in configuration parameter maximum and default values:
Disk Data checkpointing improvements. NDB Cluster 8.0.19 provides a number of new enhancements which help to reduce the latency of checkpoints of Disk Data tables and tablespaces when using non-volatile memory devices such as solid-state drives and the NVMe specification for such devices. These improvements include those in the following list:
Avoiding bursts of checkpoint disk writes
Speeding up checkpoints for disk data tablespaces when the redo log or the undo log becomes full
Balancing checkpoints to disk and in-memory checkpoints against one another, when necessary
Protecting disk devices from overload to help ensure low latency under high loads
As part of this work, NDB 8.0.19 introduces two new data node configuration parameters. MaxDiskDataLatency places a ceiling on the degree of latency permitted for disk access, and causes transactions taking longer than this length of time to be aborted. DiskDataUsingSameDisk makes it possible to take advantage of housing Disk Data tablespaces on separate disks by increasing the rate at which checkpoints of such tablespaces can be performed.
In addition, three new tables in the ndbinfo database, also added in NDB 8.0.19 and listed here, provide information about Disk Data performance:
Memory allocation and TransactionMemory. NDB 8.0.19 introduces a new TransactionMemory parameter, which simplifies allocation of data node memory for transactions as part of the work done to pool transactional and Local Data Manager (LDM) memory. This parameter is intended to replace several older transactional memory parameters which have been deprecated.
Transaction memory can now be set in any of the three ways listed here:
Several deprecated configuration parameters are incompatible with TransactionMemory. If any of these are set, TransactionMemory cannot be set (see Parameters incompatible with TransactionMemory), and the data node's transaction memory is determined as it was prior to NDB 8.0.19.
Note: Attempting to set TransactionMemory and any of the deprecated parameters concurrently in the config.ini file prevents the management server from starting.
If TransactionMemory is set, this value is used for determining transaction memory. TransactionMemory cannot be set if any of the incompatible deprecated parameters mentioned in the previous item have also been set.
If none of the incompatible parameters are set and TransactionMemory is also not set, transaction memory is calculated by NDB.
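A minimal sketch of setting the new parameter; the value is illustrative, and none of the deprecated, incompatible parameters may appear alongside it:

```ini
[ndbd default]
# Pooled transaction memory per data node (illustrative value).
# Do not combine with deprecated parameters such as
# TransactionBufferMemory, or the management server will not start.
TransactionMemory=2G
```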
Support for additional replicas. NDB 8.0.19 increases the maximum number of replicas supported in production from 2 to 4. (Previously, it was possible to set NoOfReplicas to 3 or 4, but this was not officially supported or verified in testing.)
Restoring by slices. Beginning with NDB 8.0.20, it is possible to divide a backup into roughly equal portions (slices) and to restore these slices in parallel using two new options implemented for ndb_restore:
This makes it possible to employ multiple instances of ndb_restore to restore subsets of the backup in parallel, potentially reducing the amount of time required to perform the restore operation.
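A sketch of restoring a backup in two slices in parallel; --num-slices and --slice-id are the two options added for this purpose, and the node ID, backup ID, and path shown are placeholders:

```shell
# Each invocation restores one slice; run one instance per slice.
ndb_restore --nodeid=1 --backupid=1 --restore-data \
    --num-slices=2 --slice-id=0 \
    --backup-path=/var/lib/mysql-cluster/BACKUP/BACKUP-1 &
ndb_restore --nodeid=1 --backupid=1 --restore-data \
    --num-slices=2 --slice-id=1 \
    --backup-path=/var/lib/mysql-cluster/BACKUP/BACKUP-1 &
wait
```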
Read from any replica enabled. Beginning with NDB 8.0.19, read from any replica is enabled by default for all NDB tables. This means that the default value for the ndb_read_backup system variable is now ON, and that the value of the READ_BACKUP option is 1 when creating a new NDB table. Enabling read from any replica significantly improves performance for reads from NDB tables, with minimal impact on writes.
ndb_blob_tool enhancements. Beginning with NDB 8.0.20, the ndb_blob_tool utility can detect missing blob parts for which inline parts exist, and replace these with placeholder blob parts (consisting of space characters) of the correct length. To check whether there are missing blob parts, use the --check-missing option with this program. To replace any missing blob parts with placeholders, use the --add-missing option.
For more information, see Section 6.6, “ndb_blob_tool — Check and Repair BLOB and TEXT columns of NDB Cluster Tables”.
NDB 8.0.20 and later supports versioning for ndbinfo tables, and maintains the current definitions for its tables internally. At startup, NDB compares its supported ndbinfo version with the version stored in the data dictionary. If the versions differ, NDB drops any old ndbinfo tables and recreates them using the current definitions.
Support for Fedora Linux. Beginning with NDB 8.0.20, Fedora Linux is a supported platform for NDB Cluster Community releases and can be installed using the RPMs supplied for this purpose by Oracle. These can be obtained from the NDB Cluster downloads page.
NDB programs—NDBT dependency removal. The dependency of a number of NDB utility programs on the NDBT library has been removed. This library is used internally for development, and is not required for normal use; its inclusion in these programs could lead to unwanted issues when testing.
Affected programs are listed here, along with the NDB versions in which the dependency was removed:
The principal effect of this change for users is that these programs no longer print NDBT_ProgramExit - status following completion of a run. Applications that depend upon such behavior should be updated to reflect the change when upgrading to the indicated versions.
Pushdown of outer joins and semijoins. Work done in NDB 8.0.20 allows many outer joins and semijoins, and not only those using a primary key or unique key lookup, to be pushed down to the data nodes (see Engine Condition Pushdown Optimization).
Outer joins using scans which can now be pushed include those which meet the following conditions:
There are no unpushed conditions on the table
There are no unpushed conditions on other tables in the same join nest, or in upper join nests on which it depends
All other tables in the same join nest, or in upper join nests on which it depends, are also pushed
A semijoin that uses an index scan can now be pushed if it meets the conditions just noted for a pushed outer join, and it uses the firstMatch strategy (see Optimizing IN and EXISTS Subquery Predicates with Semijoin Transformations).
When a join cannot be pushed, EXPLAIN should provide the reason or reasons.
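A quick way to see whether a given join is pushed down is to examine its EXPLAIN output; the following is a sketch with hypothetical tables (the exact wording used to report a pushed join in the output is not shown here and may vary by release):

```sql
EXPLAIN FORMAT=TREE
SELECT t1.*, t2.*
FROM t1
LEFT JOIN t2 ON t2.a = t1.b;

-- If the join is not pushed, following the EXPLAIN with
-- SHOW WARNINGS can surface the reason reported by NDB.
SHOW WARNINGS;
```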
Foreign keys and lettercasing. NDB stores the names of foreign keys using the case with which they were defined. Formerly, when the value of the lower_case_table_names system variable was set to 0, it performed case-sensitive comparisons of foreign key names as used in SELECT and other SQL statements with the names as stored. Beginning with NDB 8.0.20, such comparisons are now always performed in a case-insensitive fashion, regardless of the value of lower_case_table_names.
Multiple transporters. NDB 8.0.20 introduces support for multiple transporters to handle node-to-node communication between pairs of data nodes. This facilitates higher rates of update operations for each node group in the cluster, and helps avoid constraints imposed by system or other limitations on inter-node communications using a single socket.
NDB now uses a number of transporters based on the number of local data management (LDM) threads or the number of transaction coordinator (TC) threads, whichever is greater. By default, the number of transporters is equal to half of this number. While the default should perform well for most workloads, it is possible to adjust the number of transporters employed by each node group by setting the NodeGroupTransporters data node configuration parameter (also introduced in NDB 8.0.20), up to a maximum of the greater of the number of LDM threads or the number of TC threads. Setting it to 0 causes the number of transporters to be the same as the number of LDM threads.
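For instance, a configuration with more LDM than TC threads might be sketched as follows (thread counts and the transporter value are illustrative only):

```ini
# config.ini fragment (NDB 8.0.20 or later)
[ndbd default]
ThreadConfig=ldm={count=8},tc={count=4}   # 8 LDM threads is the greater count
NodeGroupTransporters=8                   # up to max(LDM, TC) = 8 transporters
                                          # per node group; 0 = same as LDM count
```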
ndb_restore: primary key schema changes. NDB 8.0.21 (and later) supports different primary key definitions for source and target tables when restoring an NDB native backup with ndb_restore when it is run with the --allow-pk-changes option. Both increasing and decreasing the number of columns making up the original primary key are supported.
When the primary key is extended with an additional column or columns, any columns added must be defined as NOT NULL, and no values in any such columns may be changed during the time that the backup is being taken. Because some applications set all column values in a row when updating it, whether or not all values are actually changed, this can cause a restore operation to fail even if no values in the column to be added to the primary key have changed. You can override this behavior using the --ignore-extended-pk-updates option, also added in NDB 8.0.21; in this case, you must ensure that no such values are changed.
A column can be removed from the table's primary key whether or not this column remains part of the table.
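Putting these options together, a restore into a table with an extended primary key might be invoked as in the following sketch (connection string, IDs, and path are placeholders):

```shell
# Restore a backup into a target table whose primary key definition
# differs from that of the source table
ndb_restore --ndb-connectstring=mgmhost --nodeid=1 --backupid=5 \
    --restore-data --allow-pk-changes \
    --ignore-extended-pk-updates \
    --backup-path=/backups/BACKUP-5
```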
Merging backups with ndb_restore. In some cases, it may be desirable to consolidate data originally stored in different instances of NDB Cluster (all using the same schema) into a single target NDB Cluster. This is now supported when using backups created in the ndb_mgm client (see Section 7.3.2, “Using The NDB Cluster Management Client to Create a Backup”) and restoring them with ndb_restore, using the
--remap-column option added in NDB 8.0.21 along with --restore-data (and possibly additional compatible options as needed or desired).
--remap-column can be employed to handle cases in which primary and unique key values are overlapping between source clusters, and it is necessary that they do not overlap in the target cluster, as well as to preserve other relationships between tables such as foreign keys.
--remap-column takes as its argument a string having the format db.tbl.col:fn:args, in which db, tbl, and col are, respectively, the names of the database, table, and column, fn is the name of a remapping function, and args is one or more arguments to fn. There is no default value. Only offset is supported as the function name, with args as the integer offset to be applied to the value of the column when inserting it into the target table from the backup. The column must use one of the MySQL integer types (see Integer Types (Exact Value) - INTEGER, INT, SMALLINT, TINYINT, MEDIUMINT, BIGINT); the allowed range of the offset value is the same as the signed version of that type (this allows the offset to be negative if desired).
The new option can be used multiple times in the same invocation of ndb_restore, so that you can remap to new values multiple columns of the same table, different tables, or both. The offset value does not have to be the same for all instances of the option.
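As a sketch (database, table, column, offset, and connection details are all hypothetical), rows from a second cluster's backup could be merged into a target cluster while shifting a key column so its values do not collide with rows restored from the first cluster:

```shell
# Shift mydb.mytable.id by 1000 while restoring, so that keys from this
# backup do not overlap those already restored from another cluster
ndb_restore --ndb-connectstring=mgmhost --nodeid=1 --backupid=3 \
    --restore-data \
    --remap-column=mydb.mytable.id:offset:1000 \
    --backup-path=/backups/BACKUP-3
```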
In addition, two new options are provided for ndb_desc, also beginning in NDB 8.0.21. For more information and examples, see the description of the --remap-column option.
Send thread improvements. As of NDB 8.0.20, each send thread now handles sends to a subset of transporters, and each block thread now assists only one send thread, resulting in more send threads, and thus better performance and data node scalability.
Adaptive spin control using SpinMethod. NDB 8.0.20 introduces a simple interface for setting up adaptive CPU spin on platforms supporting it, using the SpinMethod data node parameter. This parameter has four settings, one each for static spinning, cost-based adaptive spinning, latency-optimized adaptive spinning, and adaptive spinning optimized for database machines on which each thread has its own CPU. Each of these settings causes the data node to use a set of predetermined values for one or more spin parameters which enable adaptive spinning, set spin timing, and set spin overhead, as appropriate to a given scenario, thus obviating the need to set these directly for common use cases.
For fine-tuning spin behavior, it is also possible to set these and additional spin parameters directly, using the existing SchedulerSpinTimer data node configuration parameter as well as the following DUMP commands in the ndb_mgm client:
DUMP 104000 (SetSchedulerSpinTimerAll): Sets spin time for all threads
DUMP 104001 (SetSchedulerSpinTimerThread): Sets spin time for a specified thread
DUMP 104002 (SetAllowedSpinOverhead): Sets spin overhead as the number of units of CPU time allowed to gain 1 unit of latency
DUMP 104003 (SetSpintimePerCall): Sets the time for a call to spin
DUMP 104004 (EnableAdaptiveSpinning): Enables or disables adaptive spinning
NDB 8.0.20 also adds a new TCP configuration parameter, TcpSpinTime, which sets the time to spin for a given TCP connection.
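A combined sketch follows; the SpinMethod profile name and the TcpSpinTime value shown are illustrative assumptions, not verified defaults:

```ini
# config.ini fragment (NDB 8.0.20 or later): select an adaptive spin
# profile and a per-connection TCP spin time
[ndbd default]
SpinMethod=LatencyOptimisedSpinning   # one of the four predefined profiles

[tcp default]
TcpSpinTime=500                       # time to spin on a TCP connection
```

Runtime adjustments can then be made with the DUMP commands listed above, for example ALL DUMP 104000 200 in the ndb_mgm client to set spin time for all threads.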
The ndb_top tool is also enhanced to provide spin time information per thread.
For additional information, see the description of the SpinMethod parameter, the listed DUMP commands, and Section 6.29, “ndb_top — View CPU usage information for NDB threads”.
Disk Data and cluster restarts. Beginning with NDB 8.0.21, an initial restart of the cluster forces the removal of all Disk Data objects such as tablespaces and log file groups, including any data files and undo log files associated with these objects.
See Section 7.13, “NDB Cluster Disk Data Tables”, for more information.
Disk Data extent allocation. Beginning with NDB 8.0.20, allocation of extents in data files is done in a round-robin fashion among all data files used by a given tablespace. This is expected to improve distribution of data in cases where multiple storage devices are used for Disk Data storage.
For more information, see Section 7.13.1, “NDB Cluster Disk Data Objects”.
--ndb-log-fail-terminate option. Beginning with NDB 8.0.21, you can cause the SQL node to terminate whenever it is unable to log all row events fully. This can be done by starting mysqld with the --ndb-log-fail-terminate option.
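For example (a sketch; all other server options are omitted), the behavior can be enabled on the command line or persisted in an option file:

```shell
# Start the SQL node so that it shuts down rather than continuing
# with an incomplete binary log
mysqld --ndb-log-fail-terminate

# Equivalently, in my.cnf:
#   [mysqld]
#   ndb-log-fail-terminate=ON
```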
MySQL Cluster Manager 1.4.8 also provides experimental support for NDB Cluster 8.0. MySQL Cluster Manager has an advanced command-line interface that can simplify many complex NDB Cluster management tasks. See MySQL™ Cluster Manager 1.4.8 User Manual, for more information.