- A query against the INFORMATION_SCHEMA.FILES table returned no results when it included an ORDER BY clause. (Bug #26877788)
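  For illustration, a query of the following shape is the kind of statement that was affected; the columns are standard INFORMATION_SCHEMA.FILES columns, but the exact query from the report is not given, so this is only a sketch:

      -- List NDB disk data files, ordered by available space.
      -- Before the fix, adding the ORDER BY clause could cause such a
      -- query to return no rows.
      SELECT FILE_NAME, TABLESPACE_NAME, FREE_EXTENTS, TOTAL_EXTENTS
      FROM INFORMATION_SCHEMA.FILES
      WHERE ENGINE = 'ndbcluster'
      ORDER BY FREE_EXTENTS DESC;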
- During a restart, DBLQH loads redo log part metadata for each redo log part it manages, from one or more redo log files. Since each file has a limited capacity for metadata, the number of files which must be consulted depends on the size of the redo log part. These files are opened, read, and closed sequentially, but the closing of one file occurs concurrently with the opening of the next. In cases where closing of the file was slow, it was possible for more than 4 files per redo log part to be open concurrently; since these files were opened using the OM_WRITE_BUFFER option, more than 4 chunks of write buffer were allocated per part in such cases. The write buffer pool is not unlimited; if all redo log parts were in a similar state, the pool was exhausted, causing the data node to shut down. This issue is resolved by avoiding the use of OM_WRITE_BUFFER during metadata reload, so that any transient opening of more than 4 redo log files per log file part no longer leads to failure of the data node. (Bug #25965370)
- Following TRUNCATE TABLE on an NDB table, its AUTO_INCREMENT ID was not reset on an SQL node not performing binary logging. (Bug #14845851)
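  As a rough sketch (the table and column names here are made up), the expected behavior is that the AUTO_INCREMENT counter restarts after TRUNCATE TABLE on every SQL node, including one with binary logging disabled:

      CREATE TABLE t1 (
        id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
        val VARCHAR(32)
      ) ENGINE=NDB;

      INSERT INTO t1 (val) VALUES ('a'), ('b'), ('c');  -- ids 1, 2, 3
      TRUNCATE TABLE t1;                                -- should reset the counter
      INSERT INTO t1 (val) VALUES ('d');

      -- Expected: the new row gets id 1. Before the fix, an SQL node not
      -- writing the binary log could continue from the old counter instead.
      SELECT id, val FROM t1;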
- In certain circumstances where multiple Ndb objects were being used in parallel from an API node, the block number extracted from a block reference in DBLQH was the same as that of a SUMA block even though the request was coming from an API node. Due to this ambiguity, DBLQH mistook the request from the API node for a request from a SUMA block and failed. This is fixed by checking node IDs before checking block numbers. (Bug #88441, Bug #27130570)
- When the duplicate weedout algorithm was used for evaluating a semijoin, the result had missing rows. (Bug #88117, Bug #26984919)
  References: See also: Bug #87992, Bug #26926666.
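  Duplicate weedout is one of the strategies the optimizer can choose for a semijoin (an IN or EXISTS subquery rewritten as a join); EXPLAIN reports it as Start temporary/End temporary in the Extra column. A hypothetical query of the affected shape, with made-up table names, might look like this:

      -- Semijoin: each customer with at least one order should appear
      -- exactly once, even if it has many matching orders.
      SELECT c.id, c.name
      FROM customers AS c
      WHERE c.id IN (SELECT o.customer_id FROM orders AS o);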
- A table used in a loose scan could be used as a child in a pushed join query, leading to possibly incorrect results. (Bug #87992, Bug #26926666)
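  LooseScan is a related semijoin strategy in which the subquery table is read through an index so that each distinct value is visited once; EXPLAIN shows it as LooseScan in the Extra column. A purely illustrative shape in which that table could also become a child of a join pushed down to NDB (whether the strategy or the pushdown is actually chosen depends on indexes and statistics):

      -- orders may be read with LooseScan over an index on customer_id
      -- while also acting as a child in a pushed join with order_items.
      SELECT c.id, c.name
      FROM customers AS c
      WHERE c.id IN (SELECT o.customer_id
                     FROM orders AS o
                     JOIN order_items AS i ON i.order_id = o.id);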
- When representing a materialized semijoin in the query plan, the MySQL Optimizer inserted extra QEP_TAB and JOIN_TAB objects to represent access to the materialized subquery result. The join pushdown analyzer did not properly set up its internal data structures for these, leaving them uninitialized instead. This meant that later usage of any item objects referencing the materialized semijoin accessed an uninitialized tableno column when accessing a 64-bit tableno bitmask, possibly referring to a point beyond its end, leading to an unplanned shutdown of the SQL node. (Bug #87971, Bug #26919289)
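  For context, a materialized semijoin executes the subquery once into an internal temporary table and then joins the outer query to that result; EXPLAIN shows the subquery with select_type MATERIALIZED. A hedged sketch of a query shape that can produce such a plan (table and column names are invented):

      -- The IN subquery may be materialized into a temporary table; the extra
      -- QEP_TAB and JOIN_TAB objects mentioned above represent access to it.
      SELECT p.id, p.title
      FROM posts AS p
      WHERE p.author_id IN (SELECT a.id
                            FROM authors AS a
                            WHERE a.country = 'DE');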
- When a data node was configured for locking threads to CPUs, it failed during startup with the error Failed to lock tid. This was a side effect of a fix for a previous issue, which disabled CPU locking based on the version of the available glibc. The specific glibc issue being guarded against is encountered only in response to an internal NDB API call (Ndb_UnlockCPU()) not used by data nodes (and which can be accessed only through internal API calls). The current fix enables CPU locking for data nodes and disables it only for the relevant API calls when an affected glibc version is used. (Bug #87683, Bug #26758939)
  References: This issue is a regression of: Bug #86892, Bug #26378589.
- The NDBFS block's OM_SYNC flag is intended to make sure that all FSWRITEREQ signals used for a given file are synchronized, but was ignored by platforms that do not support O_SYNC, meaning that this feature did not behave properly on those platforms. Now the synchronization flag is used on those platforms that do not support O_SYNC. (Bug #76975, Bug #21049554)