MySQL NDB Cluster 8.0.31 is a new release of NDB 8.0, based on
MySQL Server 8.0 and including features in version 8.0 of the
NDB storage engine, as well as fixing
recently discovered bugs in previous NDB Cluster releases.
Obtaining NDB Cluster 8.0. NDB Cluster 8.0 source code and binaries can be obtained from https://dev.mysql.com/downloads/cluster/.
For an overview of changes made in NDB Cluster 8.0, see What is New in MySQL NDB Cluster 8.0.
This release also incorporates all bug fixes and changes made in previous NDB Cluster releases, as well as all bug fixes and feature changes which were added in mainline MySQL 8.0 through MySQL 8.0.31 (see Changes in MySQL 8.0.31 (2022-10-11, General Availability)).
- This release implements Transparent Data Encryption (TDE), which provides protection by encryption of NDB data at rest. This includes all NDB table data and log files which are persisted to disk, and is intended to protect against recovering data subsequent to unauthorized access to NDB Cluster data files such as tablespace files or logs.

To enforce encryption on files storing NDB table data, set EncryptedFileSystem to 1, which causes all data to be encrypted and decrypted as necessary, as it is written to and read from these files. These include LCP data files, redo log files, tablespace files, and undo log files.

When using file system encryption with NDB, you must also perform the following tasks:

  - Provide a password to each data node when starting or restarting it, using either one of the data node options --filesystem-password or --filesystem-password-from-stdin. This password uses the same format and is subject to the same constraints as the password used for an encrypted NDB backup (see the description of the ndb_restore --backup-password option).

    You can provide the encryption password on the command line, or in a my.cnf file. See NDB File System Encryption Setup and Usage, for more information and examples.

Only tables using the NDB storage engine are subject to encryption by this feature; see NDB File System Encryption Limitations. Other tables, such as those used for NDB schema distribution, replication, and binary logging, typically use InnoDB; see InnoDB Data-at-Rest Encryption. For information about encryption of binary log files, see Encrypting Binary Log Files and Relay Log Files.

Files generated or used by NDB processes, such as operating system logs, crash logs, and core dumps, are not encrypted. Files used by NDB but not containing any user table data are also not encrypted; these include LCP control files, schema files, and system files (see NDB Cluster Data Node File System). The management server configuration cache is also not encrypted.

In addition, NDB 8.0.31 adds a new utility, ndb_secretsfile_reader, for extracting key information from encrypted files.

This enhancement builds on work done in NDB 8.0.22 to implement encrypted NDB backups. For more information, see the description of the RequireEncryptedBackup configuration parameter, as well as Using The NDB Cluster Management Client to Create a Backup.

Note: Upgrading an encrypted filesystem to NDB 8.0.31 or later from a previous release requires a rolling initial restart of the data nodes, due to improvements in key handling.

(Bug #34417282, WL #14687, WL #15051, WL #15204)
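A minimal sketch of enabling file system encryption follows; the connection string, surrounding configuration, and the password shown here are placeholders rather than a recommended setup. In the cluster's config.ini:

    [ndbd default]
    NoOfReplicas=2
    EncryptedFileSystem=1

Each data node must then be given the password every time it starts, for example by piping it to --filesystem-password-from-stdin:

    shell> echo 'ndbsecret' | ndbmtd -c mgmhost --filesystem-password-from-stdin

Keep in mind that EncryptedFileSystem requires an initial restart of the data nodes to take effect (see the related Bug #34456384 entry later in these notes).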
- ndbinfo Information Database: Upgrades of SQL nodes from NDB 7.5 or NDB 7.6 to NDB 8.0 using RPM files did not enable the ndbinfo plugin properly. This occurred because the ndbcluster plugin is disabled during an upgrade of mysqld, and thus so is the ndbinfo plugin; this led to .frm files associated with ndbinfo tables being left behind following the upgrade.

Now in such cases, any ndbinfo table .frm files from the earlier release are removed, and the plugin is enabled. (Bug #34432446)
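After such an upgrade, one quick way to confirm that the plugin is active again is to query the Information Schema on the SQL node; this check is merely illustrative:

    mysql> SELECT PLUGIN_NAME, PLUGIN_STATUS
        ->     FROM INFORMATION_SCHEMA.PLUGINS
        ->     WHERE PLUGIN_NAME LIKE 'ndb%';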
- Important Change; NDB Client Programs: A number of NDB program options were never implemented and have now been removed. The options and the programs from which they have been dropped are listed here:

  - --ndb-optimized-node-selection: ndbd, ndbmtd, ndb_mgm, ndb_delete_all, ndb_desc, ndb_drop_index, ndb_drop_table, ndb_show_tables, ndb_blob_tool, ndb_config, ndb_index_stat, ndb_move_data, ndbinfo_select_all, ndb_select_count

  - --character-sets-dir: ndb_mgm, ndb_mgmd, ndb_config, ndb_delete_all, ndb_desc, ndb_drop_index, ndb_drop_table, ndb_show_tables, ndb_blob_tool, ndb_index_stat, ndb_move_data, ndbinfo_select_all, ndb_select_count, ndb_waiter

  - --core-file: ndb_mgm, ndb_mgmd, ndb_config, ndb_delete_all, ndb_desc, ndb_drop_index, ndb_drop_table, ndb_show_tables, ndb_blob_tool, ndb_index_stat, ndb_move_data, ndbinfo_select_all, ndb_select_count, ndb_waiter

  - --connect-retries and --connect-retry-delay: ndb_mgmd

  - --ndb-nodeid: ndb_config
See the descriptions of the indicated programs and program options in NDB Cluster Programs, for more information. (Bug #34059253)
- Important Change: The ndbcluster plugin is now included in all MySQL server builds, with the exception of builds for 32-bit platforms. As part of this work, we address a number of issues with CMake options for NDB Cluster, making the plugin option for NDBCLUSTER behave like other plugin options, and adding a new option WITH_NDB to control the build of NDB for MySQL Cluster.

This release makes the following changes in CMake options relating to MySQL Cluster:

  - Adds the WITH_NDB option (default OFF). Enabling this option causes the MySQL Cluster binaries to be built.

  - Deprecates the WITH_NDBCLUSTER option; use WITH_NDB instead.

  - Removes the WITH_PLUGIN_NDBCLUSTER option. Use WITH_NDB, instead, to build MySQL Cluster.

  - Changes the WITH_NDBCLUSTER_STORAGE_ENGINE option so that it now controls (only) whether the ndbcluster plugin itself is built. This option is now automatically set to ON when WITH_NDB is enabled for the build, so it should no longer be necessary to set it when compiling MySQL with NDB Cluster support.
For more information, see CMake Options for Compiling NDB Cluster. (WL #14788, WL #15157)
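By way of illustration, a source build that includes NDB can now be configured as shown here; the source path is a placeholder, and any other CMake options you normally use still apply:

    shell> cmake /path/to/mysql-source -DWITH_NDB=ON

The deprecated -DWITH_NDBCLUSTER=ON remains accepted for now, but new build scripts should use -DWITH_NDB=ON instead.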
- Added the --detailed-info option for ndbxfrm. This is similar to the --info option, but in addition prints out the file's header and trailer. (Bug #34380739)
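For example, where BACKUP-1-0.1.Data stands in for the name of an actual backup data file:

    shell> ndbxfrm --detailed-info BACKUP-1-0.1.Data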
- This release makes it possible to enable and disable binary logging with compressed transactions using ZSTD compression for NDB tables in a mysql or other client session while the MySQL server is running. To enable the feature, set the ndb_log_transaction_compression system variable introduced in this release to ON. The level of compression used can be controlled using the ndb_log_transaction_compression_level_zstd system variable, which is also added in this release; the default compression level is 3.

Note: Although changing the values of the binlog_transaction_compression and binlog_transaction_compression_level_zstd system variables from a client session has no effect on binary logging of NDB tables, setting --binlog-transaction-compression=ON on the command line or in a my.cnf file causes ndb_log_transaction_compression to be enabled, regardless of any setting for --ndb-log-transaction-compression. In this case, to disable binary log transaction compression for (only) NDB tables, set ndb_log_transaction_compression=OFF in a MySQL client session following startup of mysqld.

For more information, see Binary Log Transaction Compression. (Bug #32704705, Bug #32927582, WL #15138, WL #15139)
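A brief session sketch follows; the compression level shown is arbitrary:

    mysql> SET GLOBAL ndb_log_transaction_compression = ON;
    mysql> SET GLOBAL ndb_log_transaction_compression_level_zstd = 10;
    mysql> SHOW GLOBAL VARIABLES LIKE 'ndb_log_transaction_compression%';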
- When pushing a condition as part of a pushed join, it is a requirement that all table.column references are to one of the following:

  - The table to which the condition itself is pushed

  - A table which is an ancestor of the root of the pushed join

  - A table which is an ancestor of the table in the pushed query tree
In the last case, when finding possible ancestors, we did not fully identify all candidates for such tables, in either or both of these two ways:

  - Tables that were required ancestors due to nest-level dependencies were not added as ancestors.

  - Tables having all possible ancestors as either required ancestors or key parents are known to be directly joined with our ancestor, and to provide these as ancestors themselves; such tables should thus have been made available as ancestors as well, but were not.
This patch implements both cases 1 and 2. In the second case, we take a conservative approach and add only those tables having a single row lookup access type, but not those using index scans. (Bug #34508948)

- Execution of, and EXPLAIN for, some large join queries with ndb_join_pushdown enabled (the default) were rejected with NDB error QRY_NEST_NOT_SUPPORTED FirstInner/Upper has to be an ancestor or a sibling. (Bug #34486874)
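With ndb_join_pushdown enabled, a join of the general shape affected can be examined as in this hypothetical example; the tables are placeholders, and the actual plan depends on the schema and on whether NDB can push the join:

    mysql> EXPLAIN FORMAT=TREE
        ->     SELECT * FROM t1
        ->     JOIN t2 ON t2.a = t1.b
        ->     JOIN t3 ON t3.a = t2.b;

Prior to this fix, sufficiently large queries of this kind could be rejected outright with the error shown above.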
- When the NDB join pushdown handler finds a table which cannot be pushed down, it tries to produce an explanatory message communicating the reason for the rejection, which includes the names of the tables involved. In some cases, the optimizer had already optimized away the table, which meant that it could no longer be accessed by the NDB handler, resulting in failure of the query.

We fix this by introducing a check for such cases, and printing a more generic message which does not include the table name if no table is found. (Bug #34482783)
- The EncryptedFileSystem parameter was not defined with CI_RESTART_INITIAL, and so was not shown in the output of ndb_config as requiring --initial, even though the parameter does in fact require an initial restart to take effect. (Bug #34456384)
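One way to check such metadata is to examine the output of ndb_config --configinfo, for instance by filtering for this parameter (the exact attributes shown in the output may vary by version):

    shell> ndb_config --configinfo --xml | grep -i encryptedfilesystem

Before this fix, the output did not indicate that changing the parameter requires restarting the data nodes with --initial.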
- When finding tables possible to push down in a pushed join, the pushability of a table may depend on whether later tables are pushed as well. In such cases we take an optimistic approach and assume that later tables are also pushed. If this assumption fails, we might need to “unpush” a table and any other tables depending on it. Such a cascading “unpush” may be due to either or both of the following conditions:
  - A key reference referred to a column from a table which later turned out not to be pushable.

  - A pushed condition referred to a column from a table which later turned out not to be pushable.
We previously handled the first case, but handling of the second was omitted from work done in NDB 8.0.27 to enable pushing of conditions referring to columns from other tables that were part of the same pushed join. (Bug #34379950)
- NdbScanOperation errors are returned asynchronously to the client, possibly while the client is engaged in other processing. A successful call to NdbTransaction::execute() guarantees only that the scan request has been assembled and sent to the transaction coordinator without any errors; it does not wait for any sort of CONF or REF signal to be returned from the data nodes. In this particular case, the expected TAB_SCANREF signal was returned asynchronously into the client space, possibly while the client was still performing other operations.

We make this behavior more deterministic by not setting the NdbTransaction error code when a TAB_SCANREF error is received. (Bug #34348706)

- When attempting to update a VARCHAR column that was part of an NDB table's primary key, the length of the value read from the database and supplied to the cmp_attr() method was reported incorrectly. In addition to fixing this issue, we also remove an incorrect length check which required the binary byte length of the arguments to this method to be the same; this is not true of attributes being compared as characters, whose comparison semantics are defined by their character sets and collations. (Bug #34312769)
- When compiling NDB Cluster on OEL7 and OEL8 using -Og for debug builds, gcc raised a null pointer subtraction error. (Bug #34199675, Bug #34199732)

References: See also: Bug #33855533.
- ndb_blob_tool did not properly handle errors raised while reading data. (Bug #34194708)
- As part of setting up the signal execution strategy, we calculate a safe quota for the maximum number of signals to execute from each job buffer. As each executed signal is assumed to generate up to four outbound signals, we might need to limit the signal quota so that we do not overfill the out buffers. Effectively, in each round of signal execution we cannot execute more signals than 1/4 of the number of signals that can fit in the out buffers.
This calculation did not take into account work done in NDB 8.0.23 introducing the possibility of having multiple writers, all using the same available free space in the same job buffer. Thus the signal quota needed to be further divided among the workers writing to the same buffers.
Now the computation of the maximum number of signals to execute takes into account the resulting, possibly greater, number of writers to each queue. (Bug #34065930)
- When the NDB scheduler detects that job buffers are full, and starts to allocate from reserved buffers, it is expected to yield the CPU while waiting for the consumer. Just before yielding, it performs a final check for this condition before sleeping. Problems arose when this check indicated that the job buffers were not full, so that the scheduler was allowed to continue executing signals, even though the limit on how many signals it was permitted to execute was still 0. This led to a round of executing no signals, followed by another yield check, and so on, keeping the CPU occupied for no reason while waiting for something to be consumed by the receiver threads.

The root cause of the problem was that different metrics were employed for calculating the limit on signals to execute (which triggered the yield check when this limit was 0), and for the yield callback which subsequently checked whether the job buffers were actually full.

Prior to the implementation of scalable job buffers in MySQL NDB Cluster 8.0.23, NDB waited for more job buffer up to 10 times; this was inadvertently changed so that it gave up after waiting only once, despite log messages indicating that NDB had slept ten times. As part of this fix, we revert that change, so that, as before, we wait up to ten times for more job buffer before giving up. As an additional part of this work, we also remove extra (and unneeded) code previously added to detect spin waits. (Bug #34038016)

References: See also: Bug #33869715, Bug #34025532.
- Job buffers act as the communication links between data node internal block threads. When the data structures for these were initialized, a 32K page was allocated to each such link, even if these threads never (by design) communicate with each other. This wasted memory resources, and had a small performance impact, since the job buffer pages were checked frequently for available signals, making it necessary to load the unused job buffer pages into the translation lookaside buffer and the L1, L2, and L3 caches.
Now, instead, we set up an empty job buffer as a sentinel to which all the communication links refer initially. Actual (used) job buffer pages are now allocated only when we actually write signals into them, in the same way that new memory pages are allocated when a page gets full. (Bug #34032102)
- A data node could be forced to shut down due to a full job buffer, even when the local buffer was still available. (Bug #34028364)
- Made checks of pending signals by the job scheduler more consistent and reliable. (Bug #34025532)
References: See also: Bug #33869715, Bug #34038016.
- The combination of batching with multiple in-flight operations per key, use of IgnoreError, and transient errors occurring on non-primary replicas led in some cases to inconsistencies within DBTUP, resulting in replica misalignment and other issues. We now prevent this from happening by detecting when operations fail on non-primary replicas, and forcing AbortOnError handling (rollback) of the containing transaction in such cases. (Bug #34013385)

- Handling by ndb_restore of temporary errors raised by DDL operations has been improved and made consistent. In all such cases, ndb_restore now retries the operation up to MAX_RETRIES (11) times before giving up. (Bug #33982499)

- Removed the causes of many warnings raised when compiling NDB Cluster. (Bug #33797357, Bug #33881953)
- When the rate of changes was high, when event subscribers were slow to acknowledge receipt, or both, it was possible for the SUMA block to run out of space for buffering events. (Bug #30467140)
- ALTER TABLE ... COMMENT="NDB_TABLE=READ_BACKUP=1" or ALTER TABLE ... COMMENT="NDB_TABLE=READ_BACKUP=0" performs a non-copying (online) ALTER operation on a table to add or remove its READ_BACKUP property (see NDB_TABLE Options), which increments the index version of all indexes on the table. Existing statistics, stored using the previous index version, were orphaned and never deleted; this led to wasted memory and inefficient searches when collecting index statistics.

We address these issues by cleaning up the index samples, deleting any samples whose sample version is greater than or less than the current sample version; we do this also when no existing statistics are found by index ID and version, and when indexes are dropped. In this last case, we relax the bounds for the delete operation and remove all entries corresponding to the index ID in question, as opposed to both index ID and index version.
This fix cleans up the sample table which stores the bulk of index statistics data. The head table, which consists of index metadata rather than actual statistics, still contains orphaned rows, but since these occupy an insignificant amount of memory, they do not adversely affect statistics search efficiency, and stale entries are cleaned up when index IDs and versions are reused.
See also NDB API Statistics Counters and Variables. (Bug #29611297)
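For reference, the online ALTER described above takes the following general form, with t1 standing in for an actual NDB table:

    mysql> ALTER TABLE t1 ALGORITHM=INPLACE, COMMENT="NDB_TABLE=READ_BACKUP=1";

Each such statement increments the versions of the table's indexes, which is what formerly caused statistics stored under the old index versions to be orphaned.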