NDB Cluster APIs: The Node.js package included with NDB Cluster has been updated to version 16.5.0. (Bug #33770627)
Empty lines in CSV files are now accepted as valid input by ndb_import. (Previously, empty lines in such files were always rejected.) Now, if an empty value can be used as the value for a single imported column, ndb_import uses it in the same manner as
LOAD DATA. (Bug #34119833)
NDB stores blob column values differently from other types; by default, only the first 256 bytes of the value are stored in the table (“inline”), with any remainder kept in a separate blob parts table. This is true for columns of the MySQL blob and text types. (TINYBLOB and TINYTEXT are exceptions, since they are always inline-only.)
NDB stores JSON column values in a similar fashion, the only difference being that, for a JSON column, the first 4000 bytes of the value are stored inline.
Previously, it was possible to control the inline size for blob columns of NDB tables only by using the NDB API (Column::setInlineSize() method). This can now be done in the mysql client (or another application supplying an SQL interface) using a column comment consisting of an NDB_COLUMN string containing a BLOB_INLINE_SIZE specification, as part of a CREATE TABLE statement like this one:
CREATE TABLE t ( a BIGINT NOT NULL AUTO_INCREMENT PRIMARY KEY, b BLOB COMMENT 'NDB_COLUMN=BLOB_INLINE_SIZE=3000' ) ENGINE NDBCLUSTER;
In the table t created by the statement just shown, column b is a BLOB column whose first 3000 bytes are stored in t itself, rather than just the first 256 bytes. This means that, if no value stored in b exceeds 3000 bytes in length, no extra work is required to read or write any excess data from the NDB blob parts table when storing or retrieving the column value. This can improve performance significantly when performing many operations on blob columns.
The maximum supported value for BLOB_INLINE_SIZE is 29980. Setting it to any value less than 1 causes the default inline size to be used for the column.
You can also change a column's inline size by altering the column as part of a copying ALTER TABLE; ALGORITHM=INPLACE is not supported for such operations.
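As an illustrative sketch (the table and column names, and the size chosen, are hypothetical), such a change might look like this:

```sql
-- Hypothetical example: rebuild column b with a 3000-byte inline size.
-- ALGORITHM=COPY forces the copying ALTER TABLE that this option requires.
ALTER TABLE t
    MODIFY COLUMN b BLOB COMMENT 'NDB_COLUMN=BLOB_INLINE_SIZE=3000',
    ALGORITHM=COPY;
```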
BLOB_INLINE_SIZE can be used alone, or together with MAX_BLOB_PART_SIZE in the same NDB_COLUMN string. Unlike the case with MAX_BLOB_PART_SIZE, BLOB_INLINE_SIZE is also supported for JSON columns.
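A sketch of combining both options in a single comment string (the table definition and sizes here are hypothetical):

```sql
-- Hypothetical example: set both the maximum blob part size and the
-- inline size for column b in one NDB_COLUMN comment string.
CREATE TABLE t2 (
    a INT NOT NULL PRIMARY KEY,
    b BLOB COMMENT 'NDB_COLUMN=MAX_BLOB_PART_SIZE,BLOB_INLINE_SIZE=3000'
) ENGINE NDBCLUSTER;
```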
A --missing-ai-column option is added to ndb_import. This enables ndb_import to accept a CSV file from which the data for an AUTO_INCREMENT column is missing, and to supply these values itself, much as LOAD DATA does. This can be done with one or more tables for which the CSV representation contains no values for such a column.
This option works only when the CSV file contains no nonempty values for the AUTO_INCREMENT column to be imported. (Bug #102730, Bug #32553029)
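Assuming a table mydb.t1 whose CSV file omits the AUTO_INCREMENT column, an invocation might look like the following sketch (the database name, file name, and connection string are hypothetical):

```
# Hypothetical invocation: t1.csv contains no values for the table's
# AUTO_INCREMENT column; ndb_import generates them, as LOAD DATA would.
ndb_import mydb t1.csv \
    --ndb-connectstring=mgmhost:1186 \
    --missing-ai-column \
    --fields-terminated-by=","
```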
This release adds Performance Schema instrumentation for transaction batch memory used by NDBCLUSTER, making it possible to monitor memory used by transactions. For more information, see Transaction Memory Usage. (WL #15073)
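One way to observe this from the mysql client is sketched below; the LIKE pattern assumes the new instruments are registered under the memory/ndbcluster/ prefix:

```sql
-- Hypothetical query: list NDBCLUSTER memory instruments, including
-- transaction batch memory, with their currently allocated bytes.
SELECT EVENT_NAME, CURRENT_NUMBER_OF_BYTES_USED
FROM performance_schema.memory_summary_global_by_event_name
WHERE EVENT_NAME LIKE 'memory/ndbcluster/%';
```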
Important Change: When using the ThreadConfig multithreaded data node parameter to specify the threads to be created on the data nodes, the receive thread (recv) was in some cases placed in the same worker thread as block threads such as DBTC(0). This represented a regression from NDB 8.0.22 and earlier, where the receive thread is colocated only with TRPMAN, as expected in such cases.
Now, when setting the value of ThreadConfig, you must include ldm explicitly; to avoid using one or more of the ldm thread types, you must set count=0 explicitly for each applicable thread type.
In addition, a minimum value of 1 is now enforced for the recv count; setting the replication thread (rep) count to 1 also requires setting count=1 for the main thread.
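Under these rules, a config.ini fragment might look like the following sketch (the thread counts are arbitrary; the point is that each relevant thread type is listed explicitly):

```
# Hypothetical ThreadConfig setting: every thread type is spelled out,
# with count=0 used to disable a type rather than simply omitting it.
[ndbd default]
ThreadConfig=main={count=1},rep={count=1},recv={count=1},ldm={count=4}
```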
These changes can have serious implications for upgrades from previous NDB Cluster releases. For more information, see Upgrading and Downgrading NDB Cluster, as well as the description of the ThreadConfig parameter, in the MySQL Manual. (Bug #33869715)
References: See also: Bug #34038016, Bug #34025532.
macOS: ndb_import could not be compiled on macOS/ARM because the ndbgeneral library was not explicitly included in LINK_LIBRARIES. (Bug #33931512)
NDB Disk Data: The LGMAN kernel block did not initialize its local encrypted filesystem state, and did not check EncryptedFileSystem for undo log files, so that their encryption status was never actually set.
This meant that, for release builds, it was possible for the undo log files to be encrypted on some systems, even though they should not have been; in debug builds, undo log files were always encrypted. This could lead to problems when using Disk Data tables and upgrading to or from NDB 8.0.29. (A workaround is to perform initial restarts of the data nodes when doing so.)
This issue could also cause unexpected CPU load for I/O threads when there were a great many Disk Data updates to write to the undo log, or at data node startup while reading the undo log.
Note: The EncryptedFileSystem parameter, introduced in NDB 8.0.29, is considered experimental and is not supported in production.
NDB Cluster APIs: The internal function NdbThread_SetThreadPrio() sets the thread priority (thread_prio) for a given thread type when applying the setting of the ThreadConfig configuration parameter. In some cases, this function could return an error when it had actually succeeded, which could have an unfavorable impact on the performance of some NDB API applications. (Bug #34038630)
NDB Cluster APIs: Certain NdbInterpretedCode methods did not function correctly when a nonzero value was employed for the label.
Compilation of NDB Cluster on Debian 11 and Ubuntu 22.04 halted during the link time optimization phase due to source code warnings being treated as errors. (Bug #34252425)
NDB does not support in-place changes of default values for columns; such changes can be made only by using a copying ALTER TABLE. Changing an existing default value in such cases was already detected, but the addition or removal of a default value was not.
We fix this issue by detecting when a default value is added or removed during ALTER TABLE, and refusing to perform the operation in place. (Bug #34224193)
After creating a user on SQL node A and granting it the NDB_STORED_USER privilege, dropping this user from SQL node B led to inconsistent results. In some cases, the drop was not distributed, so that after the drop the user still existed on SQL node A.
The cause of this issue is that NDB maintains a cache of all local users with NDB_STORED_USER, but when a user was created on SQL node B, this cache was not updated. Later, when executing DROP USER, this led SQL node B to determine that the drop did not have to be distributed. We fix this by ensuring that this cache is updated whenever a new distributed user is created. (Bug #34183149)
When the internal ndbd_exit() function was invoked on a data node, information and error messages sent to the event logger just prior to the ndbd_exit() call were not printed in the log as expected. (Bug #34148712)
NDB Cluster did not compile correctly on Ubuntu 22.04 due to changes in OpenSSL 3.0. (Bug #34109171)
NDB Cluster would not compile correctly using GCC 8.4 due to a change in Bison fallthrough handling. (Bug #34098818)
Building the ndbcluster plugin or the libndbclient library required a number of files kept under directories specific to data nodes (src/kernel) and management servers (src/mgmsrv). These have now been moved to more suitable locations. Files moved that may be of interest include ndbd_exit_codes.cpp, ConfigInfo.cpp, mt_thr_config.cpp, and NdbinfoTables.cpp.
When an error occurred during the begin schema transaction phase, an attempt to update the index statistics automatically was made without releasing the transaction handle, leading to a leak. (Bug #34007422)
References: See also: Bug #34992370.
Path lengths were not always calculated correctly by the data nodes. (Bug #33993607)
When ndb_restore performed an NDB API operation with any concurrent NDB API events taking place, contention could occur in the event of limited resources in DBUTIL. This led to temporary errors in NDB. In such cases, ndb_restore now attempts to retry the NDB API operation which failed. (Bug #33984717)
References: See also: Bug #33982499.
Removed a duplicate check of a table pointer found in the internal method Dbtc::execSCAN_TABREQ(). (Bug #33945967)
The internal function NdbReceiver::unpackRecAttr(), which unpacks attribute values from a GSN_TRANSID_AI signal buffer, did not check that attribute sizes fit within the buffer. This could corrupt the buffer, which could in turn lead to reading beyond it and copying beyond destination buffers. (Bug #33941167)
Improved formatting of log messages such that the format string verification employed by some compilers is no longer bypassed. (Bug #33930738)
NDB internal signals were not always checked properly. (Bug #33896428)
Fixed a number of issues in the source that raised -Wunused-parameter warnings when compiling NDB Cluster with GCC 11.2. (Bug #33881953)
When an SQL node was not yet connected to NDBCLUSTER, an excessive number of warnings were written to the MySQL error log when the SQL node could not discover an NDB table. (Bug #33875273)
NDB API statistics variables such as Ndb_api_wait_nanos_count_session are used for determining CPU times and wait times for applications. These counters are intended to show the time spent waiting for responses from data nodes, but they were not entirely accurate, because time spent waiting for key requests was not included.
For more information, see NDB API Statistics Counters and Variables. (Bug #33840016)
References: See also: Bug #33850590.
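These counters can be inspected per session or globally from the mysql client, for example:

```sql
-- Show the accumulated wait-time counters (in nanoseconds) for this session.
SHOW SESSION STATUS LIKE 'Ndb_api_wait_nanos_count%';
```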
It was possible in some cases for a duplicate se_private_id entry to be installed in the MySQL data dictionary for an NDB table, even when the previous table definition should have been dropped.
When data nodes drop out of the cluster and need to rejoin, each SQL node starts to synchronize the schema definitions in its own data dictionary. The se_private_id of an NDB table installed in the data dictionary is the same as its NDB table ID. It is common for tables to be updated with different IDs, such as when executing an ALTER TABLE, DROP TABLE, or CREATE TABLE statement. The previous table definition, obtained by referencing the table by name, is usually sufficient for a drop, and thus for the new table to be installed with a new ID, since it is assumed that no other installed table definition uses that ID. An exception could occur during synchronization: if a data node shutdown left in place the previous definition of a different table having the same ID as the one to be installed, the old definition was not dropped.
To correct this issue, we now check whether the ID of the table to be installed in the data dictionary differs from that of the previous one, in which case we also check whether an old table definition already exists with that ID, and, if it does, we drop the old table before continuing. (Bug #33824058)
After receiving a COPY_FRAGREQ signal, DBLQH sometimes places the signal in a queue by copying the signal object into a stored object. Problems could arise when this signal object was used to send another signal before the incoming COPY_FRAGREQ was stored; this led to saving a corrupt signal that, when sent, prevented a system restart from completing. We fix this by using a static copy of the signal for storage and retrieval, rather than the original signal object. (Bug #33581144)
When the mysqld binary supplied with NDB Cluster was run without NDB support enabled, plugins such as ndb_transid_mysql_connection_map were still enabled, and, for example, still shown with status ACTIVE in the output of SHOW PLUGINS. (Bug #33473346)
Attempting to seize a redo log page could in theory fail due to a wrong bound error. (Bug #32959887)
When a data node was started using the --foreground option, and with a node ID not configured to connect from a valid host, the data node underwent a forced shutdown instead of reporting an error. (Bug #106962, Bug #34052740)
NDB tables were skipped in the MySQL Server upgrade phase and were instead migrated by the ndbcluster plugin at a later stage. As a result, triggers associated with NDB tables were not created during upgrades from 5.7-based versions.
This occurred because it is not possible to create such triggers when the NDB tables are migrated by the ndbcluster plugin, since metadata about the triggers is lost in the upgrade finalization phase of the MySQL Server upgrade, in which all .TRG files are deleted.
To fix this issue, we make the following changes:
Migration of NDB tables with triggers is no longer deferred during the Server upgrade phase.
NDB tables with triggers are no longer removed from the data dictionary during setup, even when initial system starts are detected.
(Bug #106883, Bug #34058984)
When initializing a file, NDBFS enabled autosync but never called sync_on_write(), so that the file was never synchronized to disk until it was closed. This meant that, for a system whose network disk was stalled for some time, the file could use up system memory on buffered file data.
We fix this by calling sync_on_write() whenever NDBFS writes to a file.
As part of this work, we increase the autosync size from 1 MB to 16 MB when initializing files.
Note: NDBFS uses O_SYNC on platforms that provide it, but does not implement OM_SYNC for opening files.
(Bug #106697, Bug #33946801, Bug #34131456)