Packaging: Expected NDB header files were in the devel RPM package instead of libndbclient-devel. (Bug #84580, Bug #26448330)
When the SUMA kernel block receives a SUB_STOP_REQ signal, it executes the signal, then replies with SUB_STOP_CONF. (After this response is relayed back to the API, the API is open to send more SUB_STOP_REQ signals.) After sending the SUB_STOP_CONF, SUMA drops the subscription if no subscribers are present, which involves sending multiple DROP_TRIG_IMPL_REQ requests to DBTUP. LocalProxy can handle up to 21 of these requests in parallel; any more than this are queued in the Short Time Queue. When execution of a DROP_TRIG_IMPL_REQ was delayed, there was a chance for the queue to become overloaded, leading to a data node shutdown with Error in short time queue.
This issue is fixed by delaying execution of the SUB_STOP_REQ signal when DBTUP is already handling DROP_TRIG_IMPL_REQ signals at full capacity, rather than queueing up the DROP_TRIG_IMPL_REQ signals. (Bug #26574003)
Having a large number of deferred triggers could sometimes lead to job buffer exhaustion. This could occur because a single trigger can execute many operations—for example, a foreign key parent trigger may perform operations on multiple matching child table rows—and because a row operation on a base table can execute multiple triggers. In such cases, row operations are executed in batches. When execution of many triggers was deferred—meaning that all deferred triggers are executed at pre-commit—the resulting concurrent execution of a great many trigger operations could cause the data node job buffer or send buffer to be exhausted, leading to failure of the node.
This issue is fixed by limiting the number of concurrent trigger operations as well as the number of trigger fire requests outstanding per transaction.
For immediate triggers, limiting of concurrent trigger operations may increase the number of triggers waiting to be executed, exhausting the trigger record pool and resulting in the error Too many concurrently fired triggers (increase MaxNoOfFiredTriggers). This can be avoided by increasing MaxNoOfFiredTriggers, reducing the user transaction batch size, or both. (Bug #22529864)
References: See also: Bug #18229003, Bug #27310330.
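As a sketch of the workaround mentioned above, raising MaxNoOfFiredTriggers is done in the cluster's global configuration file; the value shown here is purely illustrative, not a tuning recommendation:

```ini
; config.ini fragment (hypothetical values, assuming a standard
; NDB Cluster global configuration file)
[ndbd default]
; Raise the trigger record pool if "Too many concurrently fired
; triggers" occurs under heavy deferred-trigger load; the effective
; value needed depends on transaction batch sizes.
MaxNoOfFiredTriggers=8000
```

Reducing the user transaction batch size (fewer rows modified per batch) achieves the same relief without additional memory cost, at some throughput expense.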
When moving an OperationRec from the serial to the parallel queue, Dbacc::startNext() failed to update the Operationrec::OP_ACC_LOCK_MODE flag, which is required to reflect the accumulated OP_LOCK_MODE of all previous operations in the parallel queue. This inconsistency in the ACC lock queues caused the scan lock takeover mechanism to fail, as it incorrectly concluded that a lock to take over was not held. The same failure caused an assert when aborting an operation that was a member of such an inconsistent parallel lock queue. (Bug #92100, Bug #28530928)
DBTUP sent the error Tuple corruption detected when a read operation attempted to read the value of a tuple inserted within the same transaction. (Bug #92009, Bug #28500861)
References: See also: Bug #28893633.
False constraint violation errors could occur when executing updates on self-referential foreign keys. (Bug #91965, Bug #28486390)
References: See also: Bug #90644, Bug #27930382.
An NDB internal trigger definition could be dropped while pending instances of the trigger remained to be executed, by attempting to look up the definition for a trigger which had already been released. This caused unpredictable, and thus unsafe, behavior possibly leading to data node failure. The root cause of the issue lay in an invalid assumption in the code determining whether a given trigger had been released; the issue is fixed by ensuring that the behavior of NDB, when a trigger definition is determined to have been released, is consistent and meets expectations. (Bug #91894, Bug #28451957)
In certain cases, a cascade update trigger was fired repeatedly on the same record, which eventually consumed all available concurrent operations, leading to Error 233 Out of operation records in transaction coordinator (increase MaxNoOfConcurrentOperations). If MaxNoOfConcurrentOperations was set to a value sufficiently high to avoid this, the issue manifested as data nodes consuming very large amounts of CPU, very likely eventually leading to a timeout. (Bug #91472, Bug #28262259)
Inserting a row into an NDB table having a self-referencing foreign key that referenced a unique index on the table other than the primary key failed with ER_NO_REFERENCED_ROW_2. This was due to the fact that NDB checked foreign key constraints before the unique index was updated, so that the constraint check was unable to use the index for locating the row. Now, in such cases, NDB waits until all unique index values have been updated before checking foreign key constraints on the inserted row. (Bug #90644, Bug #27930382)
References: See also: Bug #91965, Bug #28486390.
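A minimal schema of the kind described in this entry might look as follows (table and column names are hypothetical); the key point is that the self-referencing foreign key targets a unique key other than the primary key:

```sql
-- Hypothetical table: the foreign key references the UNIQUE key uk,
-- not the primary key id.
CREATE TABLE t1 (
    id INT NOT NULL PRIMARY KEY,
    uk INT NOT NULL,
    parent_uk INT,
    UNIQUE KEY ux (uk),
    FOREIGN KEY (parent_uk) REFERENCES t1 (uk)
) ENGINE=NDB;

-- Before the fix, a single-row insert that satisfies its own
-- constraint could still be rejected with ER_NO_REFERENCED_ROW_2,
-- because the constraint was checked before the unique index on uk
-- was updated:
INSERT INTO t1 VALUES (1, 1, 1);
```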