Changes in MySQL NDB Cluster 9.0.0 (2024-07-02, Innovation Release)

MySQL NDB Cluster 9.0.0 is a new Innovation release of NDB Cluster, based on MySQL Server 9.0 and including features in version 9.0 of the NDB storage engine, as well as fixing recently discovered bugs in previous NDB Cluster releases.

Obtaining MySQL NDB Cluster 9.0.  NDB Cluster 9.0 source code and binaries can be obtained from https://dev.mysql.com/downloads/cluster/.

For an overview of major changes made in NDB Cluster 9.0, see What is New in MySQL NDB Cluster 9.0.

This release also incorporates all bug fixes and changes made in previous NDB Cluster releases, as well as all bug fixes and feature changes which were added in mainline MySQL 9 through MySQL 9.0.0 (see Changes in MySQL 9.0.0 (2024-07-01, Innovation Release)).

Important

This release is no longer available for download. It was removed due to a critical issue that could stop the server from restarting following the creation of a very large number of tables (8001 or more). Please upgrade to MySQL Cluster 9.0.1 instead.

Deprecation and Removal Notes

  • NDB Cluster APIs: Node.js support in MySQL NDB Cluster is now deprecated, and you should expect its removal in a future release.

    The NDB Cluster jones-ndb driver for Node.js remains available to interested users at https://github.com/mysql/mysql-js. (WL #16245)

  • NDB Client Programs: The ndb_size.pl utility is now deprecated and no longer supported. You can expect it to be removed in a future version of the NDB Cluster distribution; for this reason, you should modify any applications which depend on it accordingly. (WL #16456)

  • Use of an Ndb.cfg file for setting the connection string for an NDB process was not well documented or supported. This file is now formally deprecated, and you should expect support for it to be removed in a future release of MySQL Cluster. (WL #15765)

Performance Schema Notes

  • NDB Replication: Previously, information about the NDB replication applier was available to the user only as a set of server status variables, and only for the default replication channel.

    This release implements a new Performance Schema ndb_replication_applier_status table, which can be thought of as an NDB-specific extension to the existing replication_applier_status table. This enhancement provides per-channel information about an applier's status, including information relating to NDB Cluster conflict detection and resolution, in table form.

    For more information about this table, see The ndb_replication_applier_status Table. For information about replication in NDB Cluster, see NDB Cluster Replication. (WL #16013)
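
    For example, a query such as the following (a minimal sketch; only the table name is taken from this note, and the available columns are described in the table's documentation) returns applier status information for every replication channel rather than only the default one:

    SELECT * FROM performance_schema.ndb_replication_applier_status;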

Functionality Added or Changed

  • Important Change: When removal of a data node file or directory fails with a file-does-not-exist error (ENOENT), this is now treated as a successful removal.

  • ndbinfo Information Database: Added a type column to the transporter_details table in the ndbinfo information database. This column shows the type of connection used by the transporter, which is either TCP or SHM.
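
    For example, the new column can be inspected with a query such as this one (a sketch; node_id and remote_node_id are assumed here to be existing columns of the table):

    SELECT node_id, remote_node_id, type FROM ndbinfo.transporter_details;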

  • NDB Client Programs: Added the --CA-days option to ndb_sign_keys to make it possible to specify a certificate's lifetime. (Bug #36549567)

  • NDB Client Programs: When started, ndbd now produces a warning in the data node log like this one:

    2024-05-28 13:32:16 [ndbd] WARNING  -- Running ndbd with a single thread of
    signal execution.  For multi-threaded signal execution run the ndbmtd binary.

    (Bug #36326896)

Bugs Fixed

  • NDB Replication: When subscribing to changes in the mysql.ndb_apply_status table, different settings were used depending on whether ndb_log_apply_status was ON or OFF. Since ndb_log_apply_status can be changed at runtime and subscriptions are not recreated at that time, changing these settings at runtime did not have the desired effect.

    The difference between enabling ndb_log_apply_status dynamically at runtime and doing so from the start of the MySQL process lay in the format used when writing ndb_apply_status updates to the binary log: when ndb_log_apply_status was enabled at runtime, writes were still done using the UPDATE format where WRITE was intended.

    To fix this inconsistency, we now always use the WRITE format in such cases; the WRITE format also makes the binary log image slightly smaller and is thus preferred. In addition, the cleanup of old events has been improved, which better handles failed attempts to create tables and events. (Bug #36453684)
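
    For example, the runtime change described above is made with a statement such as the following (a sketch based on this note; with this fix, subsequent ndb_apply_status updates are always written using the WRITE format):

    SET GLOBAL ndb_log_apply_status = ON;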

  • NDB Replication: The binary log index purge callback was skipped for the replica applier, which caused orphan rows to be left behind in the ndb_binlog_index table. (Bug #20573020, Bug #35847745, Bug #36378551, Bug #36420628, Bug #36423593, Bug #36485220, Bug #36492736)

  • NDB Cluster APIs: TLS connection errors were printed even though TLS was not specified for connections.

    To fix this issue, following an ignored TLS error, we explicitly reset the error condition in the management handle to NO_ERROR. (Bug #36354973)

  • NDB Cluster APIs: It was possible to employ certain NDB API methods without invoking them as const, although this alternative usage had long been deprecated (and was not actually documented). Each of these methods must now always be invoked as const. (Bug #36165876)

  • NDB Client Programs: In some cases, it was not possible to load certificates generated using ndb_sign_keys. (Bug #36430004)

  • NDB Client Programs: ndb_redo_log_reader could not read data from encrypted files. (Bug #36313482)

  • NDB Client Programs: Certain command-line options did not function correctly for the ndb_redo_log_reader utility program. (Bug #36313427)

  • NDB Client Programs: ndb_redo_log_reader exited with Record type = 0 not implemented when reaching an unused page (all zero bytes) or a page which was only partially used (typically a page consisting of the page header only). (Bug #36313259)

  • NDB Client Programs: ndb_restore did not restore a foreign key whose columns differed in order from those of the parent key.

    Our thanks to Axel Svensson for the contribution. (Bug #114147, Bug #36345882)

  • The destructor for NDB_SCHEMA_OBJECT makes several assertions about the state of the schema object; this state is protected by a mutex, but the destructor did not acquire the mutex before testing the state.

    We fix this by acquiring the mutex within the destructor. (Bug #36568964)

  • NDB now writes a message to the MySQL server log before and after logging an incident in the binary log. (Bug #36548269)

  • Removed a memory leak in /util/NodeCertificate.cpp. (Bug #36537931)

  • Removed a memory leak from src/ndbapi/NdbDictionaryImpl.cpp. (Bug #36532102)

  • The internal method CertLifetime::set_cert_lifetime(X509 *cert) should set the not-before and not-after times in the certificate to those stored in the CertLifetime object, but instead it set the not-before time to the current time, and the not-after time to the current time plus the object's duration. (Bug #36514834)

  • Removed a possible use-after-free warning in ConfigObject::copy_current(). (Bug #36497108)

  • When a thread acquired or released the global schema lock required for schema changes and reads, the associated log message did not identify who performed the operation.

    To fix this issue, we now do the following:

    • Prepend to the log message the identity of the NDB Cluster component or user session responsible.

    • Provide information about the related Performance Schema thread so that it can be traced.

    (Bug #36446730)

    References: See also: Bug #36446604.

  • Metadata changes were not logged with their associated thread IDs. (Bug #36446604)

    References: See also: Bug #36446730.

  • When building NDB using lld, the build terminated prematurely with the error message ld.lld: error: version script assignment of 'local' to symbol 'my_init' failed: symbol not defined while attempting to link libndbclient.so. (Bug #36431274)

  • TLS did not fail cleanly on systems which used OpenSSL 1.0, which is unsupported. Now in such cases, users get a clear error message advising that an upgrade to OpenSSL 1.1 or later is required to use TLS with NDB Cluster. (Bug #36426461)

  • The included libxml2 library was updated to version 2.9.13. (Bug #36417013)

  • NDB Cluster's pushdown join functionality expects pushed conditions to filter exactly: no row that fails to match the condition may be returned, and every row that does match the condition must be returned. When the condition compared a BINARY value with a BINARY column, this was not always the case; if the value was shorter than the column size and the condition was pushed down to NDB, the value could compare as equal to a column value despite the differing lengths.

    Now, when deciding whether a condition is pushable, we also make sure that the length of the BINARY value exactly matches the BINARY column's size. In addition, when binary string values were used in conditions with BINARY or VARBINARY columns, the actual length of a given string value was not used, but rather an overestimate of its length; this has now been changed, which should allow more conditions comparing short string values with VARBINARY columns to be pushed down than before this fix was made. (Bug #36390313, Bug #36513270)

    References: See also: Bug #36399759, Bug #36400256. This issue is a regression of: Bug #36364619.
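
    The following sketch (table and column names are illustrative) shows a condition of the affected kind; the literal is shorter than the BINARY column, so before this fix the comparison could wrongly be pushed down and match zero-padded column values:

    CREATE TABLE t (
        id INT PRIMARY KEY,
        c BINARY(8)
    ) ENGINE=NDBCLUSTER;

    -- 0xABCD is only 2 bytes long, shorter than BINARY(8); such a
    -- condition is no longer pushed down to NDB, but is instead
    -- evaluated by the MySQL server
    EXPLAIN SELECT * FROM t WHERE c = 0xABCD;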

  • Setting AutomaticThreadConfig and NumCPUs when running single-threaded data nodes (ndbd) sometimes led to unrecoverable errors. Now ndbd ignores settings for these parameters, which are intended to apply only to multi-threaded data nodes (ndbmtd). (Bug #36388981)
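
    A minimal configuration sketch illustrating the parameters concerned (values are illustrative):

    [ndbd default]
    # Apply only to ndbmtd; ndbd now ignores these parameters
    AutomaticThreadConfig=1
    NumCPUs=8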

  • Improved the error message returned when trying to add a primary key to an NDBCLUSTER table using ALGORITHM=INPLACE. (Bug #36382071)

    References: See also: Bug #30766579.
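
    For example (a sketch; t is assumed to be an existing NDBCLUSTER table without a primary key), a statement such as the following now fails with a more informative error message:

    ALTER TABLE t ADD PRIMARY KEY (id), ALGORITHM=INPLACE;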

  • The handling of the LQH operation pool that takes place as part of TC takeover skipped the last element in either of the underlying physical pools (static or dynamic). If this element was in use, holding an operation record for a transaction belonging to a transaction coordinator on the failed node, it was not returned, resulting in an incomplete takeover that sometimes left operations behind. Such operations interfered with subsequent transactions and with the copying process (CopyFrag) used by the failed node to recover.

    To fix this problem, we avoid skipping the final record while iterating through the LQH operation records during TC takeover. (Bug #36363119)

  • The libssh library was updated to version 0.10.4. (Bug #36135621)

  • When distribution awareness was not in use, the cluster tended to choose the same data node as the transaction coordinator repeatedly. (Bug #35840020, Bug #36554026)

  • In certain cases, management nodes were unable to allocate node IDs to restarted data and SQL nodes. (Bug #35658072)

  • Setting ODirect in the cluster's configuration caused excessive logging while verifying that ODirect was actually settable for all paths. (Bug #34754817)

  • In some cases, when trying to perform an online add index operation on an NDB table with no explicit primary key (see Limitations of NDB online operations), the resulting error message did not make the nature of the problem clear. (Bug #30766579)

    References: See also: Bug #36382071.
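
    For example (a sketch with illustrative names), the improved message is returned by statements such as these:

    CREATE TABLE t2 (a INT, b INT) ENGINE=NDBCLUSTER;    -- no explicit primary key
    ALTER TABLE t2 ADD INDEX bx (b), ALGORITHM=INPLACE;  -- now yields a clearer error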