MySQL NDB Cluster 7.4.22 is a new release of MySQL NDB Cluster 7.4, based on MySQL Server 5.6 and including features in version 7.4 of the NDB storage engine, as well as fixing recently discovered bugs in previous NDB Cluster releases.
Obtaining MySQL NDB Cluster 7.4. MySQL NDB Cluster 7.4 source code and binaries can be obtained from https://dev.mysql.com/downloads/cluster/.
For an overview of changes made in MySQL NDB Cluster 7.4, see What is New in NDB Cluster 7.4.
This release also incorporates all bug fixes and changes made in previous NDB Cluster releases, as well as all bug fixes and feature changes which were added in mainline MySQL 5.6 through MySQL 5.6.42 (see Changes in MySQL 5.6.42 (2018-10-22, General Availability)).
- MySQL NDB ClusterJ: When a table containing a BLOB or a TEXT field was queried with ClusterJ for a record that did not exist, an exception (“The method is not valid in current blob state”) was thrown. (Bug #28536926)

- MySQL NDB ClusterJ: A NullPointerException was thrown when a full table scan was performed with ClusterJ on tables containing either a BLOB or a TEXT field. This occurred because the necessary object initializations were omitted; this fix adds them. (Bug #28199372, Bug #91242)

-
When the SUMA kernel block receives a SUB_STOP_REQ signal, it executes the signal and then replies with SUB_STOP_CONF. (After this response is relayed back to the API, the API is free to send more SUB_STOP_REQ signals.) After sending the SUB_STOP_CONF, SUMA drops the subscription if no subscribers are present, which involves sending multiple DROP_TRIG_IMPL_REQ messages to DBTUP. LocalProxy can handle up to 21 of these requests in parallel; any more than this are queued in the Short Time Queue. When execution of a DROP_TRIG_IMPL_REQ was delayed, there was a chance for the queue to become overloaded, leading to a data node shutdown with Error in short time queue.

  This issue is fixed by delaying the execution of the SUB_STOP_REQ signal if DBTUP is already handling DROP_TRIG_IMPL_REQ signals at full capacity, rather than queueing up the DROP_TRIG_IMPL_REQ signals. (Bug #26574003)

-
Having a large number of deferred triggers could sometimes lead to job buffer exhaustion. This could occur because a single trigger can execute many operations (for example, a foreign key parent trigger may perform operations on multiple matching child table rows), and because a row operation on a base table can execute multiple triggers. In such cases, row operations are executed in batches. When execution of many triggers was deferred, meaning that all deferred triggers are executed at pre-commit, the resulting concurrent execution of a great many trigger operations could exhaust the data node job buffer or send buffer, leading to failure of the node.
This issue is fixed by limiting the number of concurrent trigger operations as well as the number of trigger fire requests outstanding per transaction.
For immediate triggers, limiting the number of concurrent trigger operations may increase the number of triggers waiting to be executed, exhausting the trigger record pool and resulting in the error Too many concurrently fired triggers (increase MaxNoOfFiredTriggers). This can be avoided by increasing MaxNoOfFiredTriggers, reducing the user transaction batch size, or both. (Bug #22529864)

  References: See also: Bug #18229003, Bug #27310330.