MySQL NDB Cluster 7.6 Release Notes



Changes in MySQL NDB Cluster 7.6.27 (5.7.43-ndb-7.6.27) (2023-07-18, General Availability)

Functionality Added or Changed

  • Important Change; NDB Cluster APIs: The NdbRecord interface allows updates of a primary key value to an equal value; that is, you can update a primary key value to its current value, or to a value that compares as equal under the collation rules in use, without raising an error. NdbRecord itself makes no attempt to prevent the update; instead, the data nodes check whether a primary key is being updated to an unequal value and, if so, reject the update with Error 897: Update attempt of primary key via ndbcluster internal api.

    Previously, when using any mechanism other than NdbRecord to attempt to update a primary key value, the NDB API returned error 4202 Set value on tuple key attribute is not allowed, even when setting a value identical to the existing one. With this release, the check for updates performed by these other means is also passed off to the data nodes, as was already done for NdbRecord.

    This change applies to performing primary key updates with NdbOperation::setValue(), NdbInterpretedCode::write_attr(), and other methods of these two classes which set column values (including NdbOperation methods incValue(), subValue(), NdbInterpretedCode methods add_val(), sub_val(), and so on), as well as the OperationOptions::OO_SETVALUE extension to the NdbOperation interface. (Bug #35106292)
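    The distinction between a byte-identical value and a value that merely compares as equal under the collation can be illustrated with a small stand-alone sketch (this is not NDB API code; the case-insensitive comparison stands in for a case-insensitive collation such as latin1_swedish_ci):

    ```cpp
    #include <algorithm>
    #include <cassert>
    #include <cctype>
    #include <string>

    // Stand-in for a case-insensitive collation: two strings compare equal
    // if they match ignoring ASCII case.
    bool collation_equal(const std::string &a, const std::string &b) {
        return a.size() == b.size() &&
               std::equal(a.begin(), a.end(), b.begin(), [](char x, char y) {
                   return std::tolower((unsigned char)x) ==
                          std::tolower((unsigned char)y);
               });
    }

    // Models the data nodes' check described above: a primary key update is
    // accepted only when the new value compares equal to the current one;
    // otherwise it is rejected with Error 897.
    constexpr int ERROR_PK_UPDATE = 897;

    int check_pk_update(const std::string &current, const std::string &proposed) {
        return collation_equal(current, proposed) ? 0 : ERROR_PK_UPDATE;
    }

    int main() {
        assert(check_pk_update("abc", "abc") == 0);    // identical value: allowed
        assert(check_pk_update("abc", "ABC") == 0);    // collation-equal: allowed
        assert(check_pk_update("abc", "abd") == 897);  // unequal value: rejected
        return 0;
    }
    ```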

  • NDB 7.6 is now built with support for OpenSSL 3.0. (WL #15614)

Bugs Fixed

  • Backups using NOWAIT did not start following a restart of the data node. (Bug #35389533)

  • When handling the connection (or reconnection) of an API node, it was possible for data nodes to inform the API node that it was permitted to send requests too quickly, which could result in requests not being delivered and subsequently timing out on the API node with errors such as Error 4008 Receive from Ndb failed or Error 4012 Request ndbd time-out, maybe due to high load or communication problems. (Bug #35387076)

  • Made the following improvements in warning output:

    • Now, in addition to local checkpoint (LCP) elapsed time, the maximum time allowed without any progress is also printed.

    • Table IDs and fragment IDs are undefined and thus not relevant when an LCP has reached WAIT_END_LCP state, and are no longer printed at that point.

    • When the maximum limit was reached, the same information was previously printed twice, as both warning and crash information; it is now printed only once.

    (Bug #35376705)

  • When deferred triggers remained pending for an uncommitted transaction, a subsequent transaction could waste resources performing unnecessary checks for deferred triggers; this could lead to an unplanned shutdown of the data node if the latter transaction had no committable operations.

    This was because, in some cases, the control state was not reinitialized for management objects used by DBTC.

    We fix this by making sure that state initialization is performed for any such object before it is used. (Bug #35256375)
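    The underlying pattern can be sketched in a few lines (hypothetical names; this is not actual DBTC code): an object seized from a pool must have its control state reinitialized before reuse, so that state left behind by an earlier transaction cannot leak into a later one.

    ```cpp
    #include <cassert>
    #include <vector>

    // Hypothetical stand-in for a DBTC-style management object whose control
    // state records whether deferred triggers are pending.
    struct ManagementObject {
        bool deferredTriggersPending = false;
    };

    class Pool {
        std::vector<ManagementObject> objects_{4};
        size_t next_ = 0;
    public:
        // Seize an object for a new transaction. The fix corresponds to always
        // reinitializing the control state here, before the object is used.
        ManagementObject &seize() {
            ManagementObject &obj = objects_[next_++ % objects_.size()];
            obj.deferredTriggersPending = false;  // state initialization
            return obj;
        }
    };

    int main() {
        Pool pool;
        ManagementObject &first = pool.seize();
        first.deferredTriggersPending = true;  // transaction left pending state

        // With per-seize reinitialization, later transactions start clean
        // even when they reuse the same slot.
        for (int i = 0; i < 4; ++i)
            assert(!pool.seize().deferredTriggersPending);
        return 0;
    }
    ```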

  • A pushdown join between queries featuring very large and possibly overlapping IN() and NOT IN() lists caused SQL nodes to exit unexpectedly. One or more of the IN() (or NOT IN()) operators required in excess of 2500 arguments to trigger this issue. (Bug #35185670, Bug #35293781)
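    The shape of query that could trigger this issue can be sketched as follows (table and column names are hypothetical; only the size of the IN() list matters):

    ```cpp
    #include <cassert>
    #include <sstream>
    #include <string>

    // Build a query with a very large IN() list, of the kind described above.
    // More than 2500 arguments in a single list was required to hit the bug.
    std::string build_large_in_query(int n_args) {
        std::ostringstream q;
        q << "SELECT * FROM t1 WHERE pk IN (";
        for (int i = 0; i < n_args; ++i) {
            if (i) q << ",";
            q << i;
        }
        q << ")";
        return q.str();
    }

    int main() {
        std::string q = build_large_in_query(2600);
        // The generated list contains all 2600 arguments.
        assert(q.find("2599") != std::string::npos);
        return 0;
    }
    ```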

  • The buffers allocated for a key of size MAX_KEY_SIZE were of insufficient size. (Bug #35155005)

  • Some calls made by the ndbcluster handler to push_warning_printf() used severity level ERROR, which caused an assertion in debug builds. This fix changes all such calls to use severity WARNING instead. (Bug #35092279)

  • When a connection between a data node and an API or management node was established, but communication was available only from the other node to the data node, the data node considered the other node alive, since it was receiving heartbeats; the other node, however, did not monitor heartbeats and so reported no problems with the connection. This meant that the data node wrongly assumed the other node was fully connected.

    We solve this issue by having the API or management node side begin to monitor data node liveness even before receiving the first REGCONF signal from it; the other node sends a REGREQ signal every 100 milliseconds, and only if it receives no REGCONF from the data node in response within 60 seconds is the node reported as disconnected. (Bug #35031303)
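    The timing of the fixed scheme can be sketched with simulated time (hypothetical names; this is not the actual transporter code): one REGREQ probe per 100 ms interval, and a disconnect report only once 60 seconds pass without a REGCONF.

    ```cpp
    #include <cassert>

    constexpr int PROBE_INTERVAL_MS = 100;
    constexpr int TIMEOUT_MS = 60000;

    struct LivenessMonitor {
        int elapsed_ms = 0;
        int probes_sent = 0;
        bool confirmed = false;

        // Advance simulated time by one probe interval; returns true while
        // the data node is still considered connected.
        bool tick() {
            if (confirmed) return true;
            ++probes_sent;                  // one REGREQ per interval
            elapsed_ms += PROBE_INTERVAL_MS;
            return elapsed_ms < TIMEOUT_MS; // disconnected after 60 s unanswered
        }

        void on_regconf() { confirmed = true; }
    };

    int main() {
        // A node that never answers is reported disconnected after 600 probes.
        LivenessMonitor silent;
        int ticks = 0;
        while (silent.tick()) ++ticks;
        assert(silent.probes_sent == 600);

        // A node that answers within the window stays connected.
        LivenessMonitor healthy;
        healthy.tick();
        healthy.on_regconf();
        assert(healthy.tick());
        return 0;
    }
    ```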

  • The log contained a high volume of messages of the form DICT: index <index number> stats auto-update requested, logged by the DBDICT block each time it received a report from DBTUX requesting an update. These requests often occur in quick succession during writes to the table, with the additional possibility in this case that duplicate requests for updates to the same index were being logged.

    Now we log such messages just before DBDICT actually performs the calculation. This removes duplicate messages and spaces out messages related to different indexes. Additional debug log messages are also introduced by this fix, to improve visibility of the decisions taken and calculations performed. (Bug #34760437)
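    The logging change follows a common coalescing pattern, sketched here with hypothetical names (this is not DBDICT code): requests are only queued and deduplicated per index, and the message is emitted once, just before the calculation runs.

    ```cpp
    #include <cassert>
    #include <set>
    #include <vector>

    class StatsScheduler {
        std::set<int> pending_;      // indexes with a queued update request
    public:
        std::vector<int> log_lines;  // index IDs actually logged

        // Called for every report from the storage layer; duplicate requests
        // for the same index no longer produce any log output.
        void request_update(int index_id) { pending_.insert(index_id); }

        // Called when the stats calculation is about to be performed; this is
        // now the single point where the message is logged.
        void run_pending() {
            for (int idx : pending_) log_lines.push_back(idx);
            pending_.clear();
        }
    };

    int main() {
        StatsScheduler s;
        // A burst of writes produces many duplicate requests for index 7.
        for (int i = 0; i < 5; ++i) s.request_update(7);
        s.request_update(9);
        s.run_pending();
        // Only one message per index reaches the log.
        assert((s.log_lines == std::vector<int>{7, 9}));
        return 0;
    }
    ```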

  • Local checkpoints (LCPs) wait a fixed time for a global checkpoint (GCP) to finish during the end phase, so an LCP was sometimes performed even before all nodes had started.

    In addition, this bound, calculated by the GCP coordinator, was available only on the coordinator itself, and only when the node had been started (start phase 101).

    These two issues are fixed by calculating the bound earlier, in start phase 4; GCP participants also recalculate the bound whenever a node joins or leaves the cluster. (Bug #32528899)