MySQL NDB Cluster 8.0 Release Notes

Changes in MySQL NDB Cluster 8.0.38 (2024-07-02, General Availability)

Important

This release is no longer available for download. It was removed due to a critical issue that could prevent the server from restarting after the creation of a very large number of tables (8001 or more). Please upgrade to MySQL NDB Cluster 8.0.39 instead.

Functionality Added or Changed

  • ndbinfo Information Database: The following columns have been added to the transporter_details table:

    • sendbuffer_used_bytes: Number of bytes of signal data currently buffered for sending by this transporter.

    • sendbuffer_max_used_bytes: Historical maximum number of bytes of signal data buffered for sending by this transporter. Reset when the transporter connects.

    • sendbuffer_alloc_bytes: Number of bytes of send buffer currently allocated to store pending send bytes for this transporter. Send buffer memory is allocated in large blocks which may be sparsely used.

    • sendbuffer_max_alloc_bytes: Historical maximum number of bytes of send buffer allocated to store pending send bytes for this transporter.

    • type: The connection type used by this transporter (TCP or SHM).

    See The ndbinfo transporter_details Table, for more information. (Bug #36579842)

    References: See also: Bug #36569947.
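    The new columns can be inspected with a query such as the following (a sketch; it requires a running cluster, and all values shown are cluster-dependent):

    ```sql
    -- Per-transporter send buffer usage, including the new columns
    SELECT node_id,
           remote_node_id,
           type,
           sendbuffer_used_bytes,
           sendbuffer_max_used_bytes,
           sendbuffer_alloc_bytes,
           sendbuffer_max_alloc_bytes
    FROM ndbinfo.transporter_details
    ORDER BY node_id, remote_node_id;
    ```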

  • NDB Client Programs: When started, ndbd now produces a warning in the data node log like this one:

    2024-05-28 13:32:16 [ndbd] WARNING  -- Running ndbd with a single thread of
    signal execution.  For multi-threaded signal execution run the ndbmtd binary.

    (Bug #36326896)

Bugs Fixed

  • NDB Client Programs: ndb_restore did not restore a foreign key whose columns differed in order from those of the parent key.

    Our thanks to Axel Svensson for the contribution. (Bug #114147, Bug #36345882)
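    A foreign key of the kind affected can be illustrated as follows (table and column names are invented for this sketch):

    ```sql
    CREATE TABLE parent (
        a INT NOT NULL,
        b INT NOT NULL,
        PRIMARY KEY (a, b)
    ) ENGINE=NDB;

    CREATE TABLE child (
        x INT NOT NULL,
        y INT NOT NULL,
        -- The referenced columns appear in the opposite order (b, a)
        -- from the parent's primary key (a, b).
        FOREIGN KEY (y, x) REFERENCES parent (b, a)
    ) ENGINE=NDB;
    ```

    Before this fix, restoring a backup containing such a child table left this foreign key missing.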

  • The destructor for NDB_SCHEMA_OBJECT makes several assertions about the state of the schema object. This state is protected by a mutex, but the destructor did not acquire the mutex before testing the state.

    We fix this by acquiring the mutex within the destructor. (Bug #36568964)

  • NDB now writes a message to the MySQL server log before and after logging an incident in the binary log. (Bug #36548269)

  • Removed a memory leak from src/ndbapi/NdbDictionaryImpl.cpp. (Bug #36532102)

  • Removed a possible use-after-free warning in ConfigObject::copy_current(). (Bug #36497108)

  • When a thread acquired or released the global schema lock required for schema changes and reads, the associated log message did not identify the user or component performing the operation.

    To fix this issue, we now do the following:

    • Prepend the message in the log with the identification of the NDB Cluster component or user session responsible.

    • Provide information about the related Performance Schema thread so that it can be traced.

    (Bug #36446730)

    References: See also: Bug #36446604.

  • Metadata changes were not logged with their associated thread IDs. (Bug #36446604)

    References: See also: Bug #36446730.

  • When building NDB using lld, the build terminated prematurely with the error message ld.lld: error: version script assignment of 'local' to symbol 'my_init' failed: symbol not defined while attempting to link libndbclient.so. (Bug #36431274)

  • An error injected in the LQH proxy and then resent to all workers was eventually cleared by the workers, but never in the LQH proxy itself. (Bug #36398772)

  • NDB Cluster's pushdown join functionality expects pushed conditions to filter exactly, so that no rows that fail the condition are returned, and all rows that match it are returned. When the condition compared a BINARY value with a BINARY column, this was not always the case: if the value was shorter than the column size and the condition was pushed down to NDB, the value could compare as equal to a column value despite the difference in length.

    Now, when deciding whether a condition is pushable, we also make sure that the BINARY value's length exactly matches the BINARY column's size. In addition, when binary string values were used in conditions with BINARY or VARBINARY columns, an overestimate of a given string value's length was used rather than its actual length. This has also been changed, and should allow more conditions comparing short string values with VARBINARY columns to be pushed down than was possible before this fix. (Bug #36390313, Bug #36513270)

    References: See also: Bug #36399759, Bug #36400256. This issue is a regression of: Bug #36364619.
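    The exact-length requirement can be illustrated with a sketch (hypothetical table; a BINARY(5) column zero-pads stored values to five bytes, so a three-byte literal must not compare equal):

    ```sql
    CREATE TABLE t (c BINARY(5)) ENGINE=NDB;
    INSERT INTO t VALUES (0x616263);     -- stored as 'abc' followed by two 0x00 bytes
    SELECT * FROM t WHERE c = 0x616263;  -- must return no rows: the lengths differ
    ```

    Before this fix, pushing such a condition down to NDB could wrongly return the row.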

  • Setting AutomaticThreadConfig and NumCPUs when running single-threaded data nodes (ndbd) sometimes led to unrecoverable errors. Now ndbd ignores settings for these parameters, which are intended to apply only to multi-threaded data nodes (ndbmtd). (Bug #36388981)
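    These parameters are normally set in the [ndbd default] section of the cluster global configuration file; a minimal fragment (illustrative values) looks like this:

    ```ini
    # config.ini fragment (illustrative values). These parameters take
    # effect only for multi-threaded data nodes (ndbmtd); ndbd now
    # ignores them instead of failing.
    [ndbd default]
    AutomaticThreadConfig=1
    NumCPUs=8
    ```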

  • Improved the error message returned when trying to add a primary key to an NDBCLUSTER table using ALGORITHM=INPLACE. (Bug #36382071)

    References: See also: Bug #30766579.
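    The statement affected looks like this (hypothetical table; NDB does not support adding a primary key in place, so the ALTER fails, now with a clearer message):

    ```sql
    CREATE TABLE t1 (c1 INT) ENGINE=NDB;
    ALTER TABLE t1 ADD PRIMARY KEY (c1), ALGORITHM=INPLACE;  -- rejected
    -- Adding a primary key to an NDB table requires ALGORITHM=COPY.
    ```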

  • The handling of the LQH operation pool performed as part of TC takeover skipped the last element of either of the underlying physical pools (static or dynamic). If this element was in use, holding an operation record for a transaction belonging to a transaction coordinator on the failed node, it was not returned, resulting in an incomplete takeover that sometimes left operations behind. Such operations interfered with subsequent transactions and with the copying process (CopyFrag) used by the failed node to recover.

    To fix this problem, we avoid skipping the final record while iterating through the LQH operation records during TC takeover. (Bug #36363119)

  • When distribution awareness was not in use, the cluster tended to choose the same data node as the transaction coordinator repeatedly. (Bug #35840020, Bug #36554026)

  • In certain cases, management nodes were unable to allocate node IDs to restarted data and SQL nodes. (Bug #35658072)

  • Setting ODirect in the cluster's configuration caused excess logging when verifying that ODirect was actually settable for all paths. (Bug #34754817)