- Packaging: Expected NDB header files were in the devel RPM package instead of libndbclient-devel. (Bug #84580, Bug #26448330)

- MySQL NDB ClusterJ: When a table containing a BLOB or a TEXT field was queried with ClusterJ for a record that did not exist, an exception (“The method is not valid in current blob state”) was thrown. (Bug #28536926)

- MySQL NDB ClusterJ: A NullPointerException was thrown when a full table scan was performed with ClusterJ on a table containing either a BLOB or a TEXT field. This occurred because the necessary object initializations were omitted; this fix adds them. (Bug #28199372, Bug #91242)
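The following is a minimal ClusterJ sketch of the two scenarios covered by the ClusterJ entries above: a primary-key lookup that matches no row, and a full table scan, both against a table containing BLOB and TEXT columns. The table name, column names, and connection properties are hypothetical placeholders, not part of either fix.

    import java.util.List;
    import java.util.Properties;

    import com.mysql.clusterj.ClusterJHelper;
    import com.mysql.clusterj.Query;
    import com.mysql.clusterj.Session;
    import com.mysql.clusterj.SessionFactory;
    import com.mysql.clusterj.annotation.PersistenceCapable;
    import com.mysql.clusterj.annotation.PrimaryKey;
    import com.mysql.clusterj.query.QueryBuilder;
    import com.mysql.clusterj.query.QueryDomainType;

    public class BlobTableScanExample {

        // Hypothetical mapping for a table with a BLOB column ("data")
        // and a TEXT column ("notes").
        @PersistenceCapable(table = "blob_table")
        public interface BlobRow {
            @PrimaryKey
            int getId();
            void setId(int id);

            byte[] getData();           // BLOB column
            void setData(byte[] data);

            String getNotes();          // TEXT column
            void setNotes(String notes);
        }

        public static void main(String[] args) {
            Properties props = new Properties();
            props.setProperty("com.mysql.clusterj.connectstring", "localhost:1186");
            props.setProperty("com.mysql.clusterj.database", "test");

            SessionFactory factory = ClusterJHelper.getSessionFactory(props);
            Session session = factory.getSession();

            // Look up a primary key value that matches no row. On a table with
            // BLOB/TEXT columns, such a miss could previously raise
            // "The method is not valid in current blob state" rather than
            // simply returning null.
            BlobRow missing = session.find(BlobRow.class, 12345);
            System.out.println("find() returned: " + missing);

            // Full table scan: a query with no filter condition. On tables with
            // BLOB or TEXT columns, this could previously throw a
            // NullPointerException.
            QueryBuilder builder = session.getQueryBuilder();
            QueryDomainType<BlobRow> domain = builder.createQueryDefinition(BlobRow.class);
            Query<BlobRow> scan = session.createQuery(domain);
            List<BlobRow> rows = scan.getResultList();
            System.out.println("Full scan returned " + rows.size() + " rows");

            session.close();
            factory.close();
        }
    }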
- When the SUMA kernel block receives a SUB_STOP_REQ signal, it executes the signal and then replies with SUB_STOP_CONF. (After this response is relayed back to the API, the API is free to send more SUB_STOP_REQ signals.) After sending the SUB_STOP_CONF, SUMA drops the subscription if no subscribers are present, which involves sending multiple DROP_TRIG_IMPL_REQ messages to DBTUP. LocalProxy can handle up to 21 of these requests in parallel; any more than this are queued in the Short Time Queue. When execution of a DROP_TRIG_IMPL_REQ was delayed, there was a chance for the queue to become overloaded, leading to a data node shutdown with Error in short time queue.

This issue is fixed by delaying execution of the SUB_STOP_REQ signal when DBTUP is already handling DROP_TRIG_IMPL_REQ signals at full capacity, rather than queueing up the DROP_TRIG_IMPL_REQ signals. (Bug #26574003)
- Having a large number of deferred triggers could sometimes lead to job buffer exhaustion. This could occur because a single trigger can execute many operations—for example, a foreign key parent trigger may perform operations on multiple matching child table rows—and because a row operation on a base table can execute multiple triggers. In such cases, row operations are executed in batches. When execution of many triggers was deferred—meaning that all deferred triggers are executed at pre-commit—the resulting concurrent execution of a great many trigger operations could cause the data node job buffer or send buffer to be exhausted, leading to failure of the node.

This issue is fixed by limiting the number of concurrent trigger operations as well as the number of trigger fire requests outstanding per transaction.

For immediate triggers, limiting the number of concurrent trigger operations may increase the number of triggers waiting to be executed, exhausting the trigger record pool and resulting in the error Too many concurrently fired triggers (increase MaxNoOfFiredTriggers). This can be avoided by increasing MaxNoOfFiredTriggers, reducing the user transaction batch size, or both (see the example following this entry). (Bug #22529864)

References: See also: Bug #18229003, Bug #27310330.
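As a purely illustrative example (the value shown is not a recommendation), MaxNoOfFiredTriggers is a data node parameter and can be raised in the [ndbd default] section of the cluster configuration file (config.ini):

    [ndbd default]
    # Illustrative value only; choose a setting appropriate to the workload
    # if the "Too many concurrently fired triggers" error is encountered.
    MaxNoOfFiredTriggers=8000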
- When moving an OperationRec from the serial to the parallel queue, Dbacc::startNext() failed to update the Operationrec::OP_ACC_LOCK_MODE flag, which is required to reflect the accumulated OP_LOCK_MODE of all previous operations in the parallel queue. This inconsistency in the ACC lock queues caused the scan lock takeover mechanism to fail, as it incorrectly concluded that a lock to take over was not held. The same failure caused an assert when aborting an operation that was a member of such an inconsistent parallel lock queue. (Bug #92100, Bug #28530928)
- DBTUP sent the error Tuple corruption detected when a read operation attempted to read the value of a tuple inserted within the same transaction. (Bug #92009, Bug #28500861)

References: See also: Bug #28893633.
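A minimal JDBC sketch of the affected access pattern, using a hypothetical table and connection settings: a row is inserted and then read back within the same transaction, which is the operation that could previously fail against an NDB table with the Tuple corruption detected error.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ReadOwnInsertExample {
        public static void main(String[] args) throws Exception {
            // Hypothetical SQL node and credentials.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:mysql://localhost:3306/test", "user", "password");
                 Statement stmt = conn.createStatement()) {

                stmt.executeUpdate("CREATE TABLE IF NOT EXISTS t1 ("
                        + " id INT NOT NULL PRIMARY KEY,"
                        + " val VARCHAR(32)"
                        + ") ENGINE=NDB");

                conn.setAutoCommit(false);   // open an explicit transaction

                stmt.executeUpdate("INSERT INTO t1 (id, val) VALUES (1, 'x')");

                // Reading the tuple just inserted within the same transaction
                // is what previously produced the spurious error.
                try (ResultSet rs = stmt.executeQuery("SELECT val FROM t1 WHERE id = 1")) {
                    while (rs.next()) {
                        System.out.println(rs.getString("val"));
                    }
                }

                conn.commit();
            }
        }
    }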
- False constraint violation errors could occur when executing updates on self-referential foreign keys. (Bug #91965, Bug #28486390)

References: See also: Bug #90644, Bug #27930382.
- An NDB internal trigger definition could be dropped while pending instances of the trigger remained to be executed; attempting to look up the definition of a trigger that had already been released in this way caused unpredictable, and thus unsafe, behavior, possibly leading to data node failure. The root cause lay in an invalid assumption in the code that determines whether a given trigger has been released; the issue is fixed by ensuring that NDB behaves consistently, and as expected, when a trigger definition is found to have been released. (Bug #91894, Bug #28451957)

- In certain cases, a cascade update trigger was fired repeatedly on the same record, which eventually consumed all available concurrent operations, leading to Error 233 Out of operation records in transaction coordinator (increase MaxNoOfConcurrentOperations). If MaxNoOfConcurrentOperations was set to a value high enough to avoid this, the issue manifested as data nodes consuming very large amounts of CPU, very likely eventually leading to a timeout. (Bug #91472, Bug #28262259)
- Inserting a row into an NDB table having a self-referencing foreign key that referenced a unique index on the table other than the primary key failed with ER_NO_REFERENCED_ROW_2. This was because NDB checked foreign key constraints before the unique index had been updated, so that the constraint check was unable to use the index to locate the row. Now, in such cases, NDB waits until all unique index values have been updated before checking foreign key constraints on the inserted row. (Bug #90644, Bug #27930382)

References: See also: Bug #91965, Bug #28486390.
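A minimal JDBC sketch of this scenario, with a hypothetical table name, column names, and connection settings: the foreign key references the table's own unique key rather than its primary key, and the inserted row references its own unique key value. Before this fix, such an insert could be rejected with ER_NO_REFERENCED_ROW_2 even though it satisfies the constraint.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class SelfReferencingFkExample {
        public static void main(String[] args) throws Exception {
            // Hypothetical SQL node and credentials.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:mysql://localhost:3306/test", "user", "password");
                 Statement stmt = conn.createStatement()) {

                // Self-referencing foreign key on a unique key other than the
                // primary key.
                stmt.executeUpdate("CREATE TABLE t1 ("
                        + " id INT NOT NULL PRIMARY KEY,"
                        + " uk INT NOT NULL,"
                        + " parent_uk INT,"
                        + " UNIQUE KEY ux (uk),"
                        + " FOREIGN KEY (parent_uk) REFERENCES t1 (uk)"
                        + ") ENGINE=NDB");

                // The new row references its own unique key value (uk = 10).
                // Before the fix, the foreign key check ran before the unique
                // index was updated, so this insert could fail with
                // ER_NO_REFERENCED_ROW_2.
                stmt.executeUpdate("INSERT INTO t1 (id, uk, parent_uk) VALUES (1, 10, 10)");
            }
        }
    }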