Important Change: The old MaxAllocate data node configuration parameter has no effect in any current version of NDB. As of this release, it is deprecated and subject to removal in a future release. (Bug #52980, Bug #11760559)
NDB Cluster APIs: Conditions pushed as part of a pushed query can now refer to columns from ancestor tables within the same pushed query.
For example, given a table created using CREATE TABLE t (x INT PRIMARY KEY, y INT) ENGINE=NDB, a query such as the one shown here can now employ condition pushdown:
SELECT * FROM t AS a LEFT JOIN t AS b ON a.x=0 AND b.y>5;
Pushed conditions may include any of the common comparison operators. Values being compared must be of the same type, including length, precision, and scale.
NULL handling is performed according to the comparison semantics specified by the ISO SQL standard; any comparison with NULL returns NULL.
For more information, see Engine Condition Pushdown Optimization.
As part of this work, the following NdbInterpretedCode methods are implemented in the NDB API for comparing column values with values of parameters:
In addition, a new NdbScanFilter::cmp_param() API method makes it possible to define comparisons between column values and parameter values. (WL #14388)
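To see whether the optimizer actually pushed a condition such as the one above, the query plan can be inspected. The following is an illustrative sketch, not output from this release, using the table t defined above; the exact plan text varies by version:

```sql
-- Sketch: check condition pushdown for a join condition that refers
-- to an ancestor table's column (table t as defined above).
EXPLAIN FORMAT=TREE
SELECT * FROM t AS a
LEFT JOIN t AS b ON a.x = 0 AND b.y > 5;
-- A pushed condition is shown in the plan; plain EXPLAIN followed by
-- SHOW WARNINGS also displays the server's rewritten form of the query.
```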
In environments that monitor and disconnect idle TCP connections, an idle cluster could suffer from unnecessary data node failures, and the failure of more than one data node could lead to an unplanned shutdown of the cluster.
To fix this problem, we introduce a new keep-alive signal (GSN_TRP_KEEP_ALIVE) that is sent on all connections between data nodes on a regular basis, by default once every 60000 milliseconds (one minute). The length of the interval between these signals can be adjusted by setting the KeepAliveSendInterval data node parameter introduced in this release, which can be set to 0 to disable keep-alive signals. You should be aware that NDB performs no checking that these signals are received and performs no disconnects on their account (this remains the responsibility of the heartbeat protocol). (Bug #32776593)
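A cluster configuration file (config.ini) fragment setting this parameter might look like the following; the interval shown is illustrative, not a recommendation:

```ini
[ndbd default]
# Send GSN_TRP_KEEP_ALIVE on all data node connections every 10 seconds
# instead of the default of one minute; setting this to 0 disables the
# keep-alive signals entirely.
KeepAliveSendInterval=10000
```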
ALTER TABLE now checks the source table's fragment commit counts before and after performing the copy. This allows the SQL node executing the ALTER TABLE to determine whether there has been any concurrent write activity to the table being altered, and, if so, to terminate the operation, which can help avoid silent loss or corruption of data. When this occurs, the ALTER TABLE statement is now rejected with the error Detected change to data in source table during copying ALTER TABLE. Alter aborted to avoid inconsistency. (Bug #24511580, Bug #25694856, WL #10540)
The creation and updating of NDB index statistics are now enabled by default. In addition, when restoring metadata, ndb_restore now creates the index statistics tables if they do not already exist. (WL #14355)
Important Change; NDB Cluster APIs: Since MySQL 8.0 uses the data dictionary to store table metadata, the following NDB API Table methods relating to .FRM files are now deprecated:
NDB 8.0 uses setExtraMetadata() for reading and writing table metadata stored in the MySQL data dictionary; you should expect the *Frm*() methods listed previously to be removed in a future release of NDB Cluster. (Bug #28248575)
Microsoft Windows: A number of warnings generated when building NDB Cluster and the NDB utilities with Visual Studio 16.9.5 were removed. (Bug #32881961)
Microsoft Windows: On Windows, it was not possible to start data nodes successfully when the cluster was configured to use more than 64 data nodes. (Bug #104682, Bug #33262452)
NDB Cluster APIs: A number of MGM API functions, including
ndb_mgm_create_logevent_handle(), did not release memory properly. (Bug #32751506)
NDB Cluster APIs: Trying to create an index using NdbDictionary with index statistics enabled and the index statistics tables missing resulted in NDB error 723 No such table existed, the missing table in this context being one of the statistics tables, which was not readily apparent to the user. Now in such cases, NDB instead returns error 4714 Index stats system tables do not exist, which is added in this release. (Bug #32739080)
A buffer used in the SUMA kernel block did not always accommodate multiple signals. (Bug #33246047)
In DbtupBuffer.cpp, the priority level is adjusted to match what is currently executing in one code path, but this adjustment was not used for short signals. This led to the risk of SCAN_FRAGCONF signals, or other signals, arriving out of order. (Bug #33206293)
A query executed as a pushed join by the NDB storage engine returned fewer rows than expected, under the following conditions:
The query contained an EXISTS subquery executed as a pushed join, using the
The subquery itself also contained an outer join using at least 2 tables, at least one of which used the
Part of the work done in NDB 8.0.23 to add query threads to the ThreadConfig parameter included the addition of a TUX scan context, used to optimize scans, but in some cases this was not set up correctly following the close of a scan. (Bug #33161080, Bug #32794719)
References: See also: Bug #33379702.
An attribute not found error was returned on a pushed join in NDB when looking up a column to add a linked value.
The issue was caused by use of the wrong lettercase for the name of the column, and is fixed by ensuring that we use the unmodified, original name of the column when performing lookups. (Bug #33104337)
Changes in NDB 8.0 resulted in a permanent error (NDB Error 261) being returned when the resources needed by a transaction's operations did not fit within those allocated for the transaction coordinator, rather than a temporary one (Error 233) as in previous versions. This is significant in NDB Replication, in which a temporary error is retried but a permanent error is not. A permanent error is suitable when the transaction itself is too large to fit in the transaction coordinator without reconfiguration; but when the transaction cannot fit due to consumption of resources by other transactions, the error should be temporary, as the transaction may be able to fit later.
The temporary error returned in such cases (NDB error 233) now has a slightly different meaning; that is, that there is insufficient pooled memory for allocating another operation. (Previously, this error meant that the limit set by MaxNoOfConcurrentOperations had been reached.)
Rather than conflate these meanings (dynamic allocation and configured limit), we add a new temporary error (Error 234) which is returned when the configured limit has been reached. See Temporary Resource error, and Application error, for more information about these errors. (Bug #32997832)
References: See also: Bug #33092571.
A check was added in QMGR to verify that the node ID received from the CM_REGREF signal is less than MAX_NDB_NODES. (Bug #32983311)
A check was reported missing from the code for handling GET_TABLEID_REQ signals. To fix this issue, all code relating to all GET_TABLEID_* signals has been removed from the NDB sources, since these signals are no longer used or supported in NDB Cluster. (Bug #32983249)
Checks were added in QMGR to ensure that process reports from signal data use appropriate node IDs. (Bug #32983240)
It was possible in some cases to specify an invalid node type when working with the internal management API. Now the API specifically disallows invalid node types, and defines an “unknown” node type (
NDB_MGM_NODE_TYPE_UNKNOWN) to cover such cases. (Bug #32957364)
NdbReceiver did not always initialize storage for a MySQL BIT column correctly. (Bug #32920099)
Receiving a spurious schema operation reply from a node not registered as a participant in the current schema operation led to an unplanned shutdown of the SQL node.
Now in such cases we discard replies from any node not registered as a participant. (Bug #32891206)
References: See also: Bug #30930132, Bug #32509544.
Values such as false for Boolean parameters such as AutomaticThreadConfig were not handled correctly when set in a .cnf file. (This issue did not affect handling of such values in .ini files.) (Bug #32871875)
Removed unneeded copying of a temporary variable which caused a compiler truncation warning in
storage/ndb/src/common/util/version.cpp. (Bug #32763321)
The maximum index size supported by the NDB index statistics implementation is 3056 bytes. Attempting to create an index of a larger size when the table held enough data to trigger a statistics update caused CREATE INDEX to be rejected with the error Got error 911 'Index stat scan requested on index with unsupported key size' from NDBCLUSTER.
This error originated in the TUX kernel block during a scan which caused the schema transaction to fail. This scan is triggered during index creation when the table contains a nonzero number of rows; this also occurs during automatic updates of index statistics or execution of ANALYZE TABLE.
Creating the index as part of CREATE TABLE or when the table contained no rows returned no error. No statistics were generated in such situations, while ANALYZE TABLE returned an error similar to the one above.
We fix this by allowing the index to be created while returning an appropriate warning from a new check introduced at the handler level. In addition, the TUX scan now handles this situation by suppressing the error, and instead returns success, effectively treating the table as an empty fragment. Otherwise, the behavior in such cases remains unchanged, with a warning returned to the client and no index statistics generated, whether or not the table contains any rows. (Bug #32749829)
References: This issue is a regression of: Bug #28714864.
A CREATE TABLE statement using ordered indexes returned an error when IndexStatAutoCreate was set to 1 and all SQL nodes had been started with --ndb-index-stat-enable=OFF, due to the fact that, when set to OFF, the option prevented the creation of the index statistics tables. Now these tables are always created at mysqld startup regardless of the value of --ndb-index-stat-enable. (Bug #32649528)
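A configuration enabling automatic creation and updating of index statistics might look like the following illustrative config.ini fragment; both are data node parameters:

```ini
[ndbd default]
# Create the index statistics tables and associated events automatically
IndexStatAutoCreate=1
# Keep index statistics up to date as table data changes
IndexStatAutoUpdate=1
```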
When an NDB schema operation was lost before the coordinator could process it, the client which logged the operation waited indefinitely for the coordinator to complete or abort it. (Bug #32593352)
References: See also: Bug #32579148.
ndb_mgmd now writes a descriptive error message to the cluster log when it is invoked with one or more invalid options. (Bug #32554492)
An IPv6 address used as part of an NDB connection string and which had only decimal digits following the first colon was incorrectly parsed, and could not be used to connect to the management server. (Bug #32532157)
Simultaneously creating a user and then granting this user the NDB_STORED_USER privilege on different MySQL servers sometimes caused these servers to hang.
This was due to the fact that, when the NDB storage engine is enabled, all SQL statements that involve users and grants are evaluated to determine whether they affect any users having the NDB_STORED_USER privilege, after which some statements are ignored, some are distributed to all SQL nodes as statements, and some are distributed to all SQL nodes as requests to read and apply a user privilege snapshot. These snapshots are stored in the mysql.ndb_sql_metadata table. Unlike a statement update, which is limited to one SQL statement, a snapshot update can contain up to seven SQL statements per user. Waiting for any lock in the NDB binary logging thread while managing distributed users could easily lead to a deadlock, when the thread was waiting for an exclusive lock on the local ACL cache.
We fix this problem by implementing explicit locking around NDB_STORED_USER snapshot updates; snapshot distribution is now performed while holding a global read lock on one row of the ndb_sql_metadata table. (Previously, both statement and snapshot distribution were performed asynchronously, with no locking.) Now, when a thread does not obtain this lock on the first attempt, a warning is raised and the deadlock is avoided.
For more information, see Privilege Synchronization and NDB_STORED_USER. (Bug #32424653)
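As a brief illustration of the mechanism described above, a user becomes a distributed user when granted the NDB_STORED_USER privilege; the user name and password here are placeholders:

```sql
-- Create a user on one SQL node and mark it for storage in NDB;
-- its definition is then distributed to all connected SQL nodes.
CREATE USER 'appuser'@'%' IDENTIFIED BY 'placeholder-password';
GRANT NDB_STORED_USER ON *.* TO 'appuser'@'%';
```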
References: See also: Bug #32832676.
It was not possible to create or update index statistics when the cluster was in single user mode, due to transactions being disallowed from any node other than the designated API node granted access, regardless of type. This prevented the data node responsible for starting transactions relating to index statistics from doing so.
We address this issue by relaxing the constraint in single user mode and allowing transactions originating from data nodes (but not from other API nodes). (Bug #32407897)
When starting multiple management nodes, the first such node waits for the others to start before committing the configuration, but this was not explicitly communicated to users. In addition, when a data node was started without all management nodes having been started, no indication was given to the user that its node ID was not allocated because no configuration had yet been committed. Now in such cases, the management node prints a message advising the user that the cluster is configured to use multiple management nodes, and to ensure that all such nodes have been started. (Bug #32339789)
To handle cases in which a cluster is restarted while the MySQL Server (SQL node) is left running, the index statistics thread is notified when an initial cluster start or restart occurs. The index statistics thread forced the creation of a fresh Ndb object and checking of various system objects, which is unnecessary when the MySQL Server is started at the same time as the initial Cluster start; this led to the unnecessary re-creation of the Ndb object.
We fix this by restarting only the listener in such cases, rather than forcing the Ndb object to be re-created. (Bug #29610555, Bug #33130864)
Removed extraneous spaces that appeared in some entries written by errors in the node logs. (Bug #29540486)
When --disable-indexes is used to restore metadata before restoring data, the tables in the target schema have no indexes. We now check, when restoring data with this option, to ensure that there are no indexes on the target table, and print the warning only if the table already has indexes. (Bug #28749799)
The NDB binlog injector thread now detects errors while handling data change events received from the storage engine. If an error is detected, the thread logs error messages and restarts itself; as part of the restart, an incident (LOST_EVENTS) entry is written to the binary log. This special entry indicates to a replication applier that the binary log is incomplete. (Bug #27150740)
When restoration of metadata was done using --disable-indexes, there was no attempt to create indexes or foreign keys dependent on these indexes, but when ndb_restore was used without the option, indexes and foreign keys were created. When --disable-indexes was used later while restoring data, NDB attempted to drop any indexes created in the previous step, but ignored the failure of a drop index operation due to a dependency on the index of a foreign key which had not been dropped. This led subsequently to problems while rebuilding indexes, when there was an attempt to create foreign keys which already existed.
We fix ndb_restore as follows:
Event buffer status messages shown by the event logger have been improved. Percentages are now displayed only when it makes sense to do so. In addition, if a maximum size is not defined, the printout shows max=unlimited. (Bug #21276857)
File handles and FileLogHandler objects created in MgmtSrvr::configure_eventlogger were leaked due to an incomplete destructor for BufferedLogHandler. This meant that, each time the cluster configuration changed in a running ndb_mgmd, the cluster log was reopened and a file handle leaked, which could lead to issues with test programs and possibly to other problems. (Bug #18192573)
When --configdir was specified as ., but with a current working directory other than DataDir, the binary configuration was created in DataDir and not in the current directory. In addition, ndb_mgmd would not start when there was an existing binary configuration.
We fix this by having ndb_mgmd check the path and refusing to start when a relative path is specified for --configdir. (Bug #11755867)
A memory leak occurred when NDBCLUSTER was unable to create a subscription for receiving cluster events. Ownership of the provided event data is supposed to be taken over, but this actually happened only when creation succeeded; in other cases, the provided event data was simply lost. (Bug #102794, Bug #32579459)
The data node configuration parameters UndoDataBuffer and UndoIndexBuffer have no effect in any currently supported version of NDB Cluster. Both parameters are now deprecated and the presence of either in the cluster configuration file raises a warning; you should expect them to be removed in a future release. (Bug #84184, Bug #26448357)
Execution of a bulk UPDATE statement using a LIMIT clause led to a debug assertion when an error was returned by NDB. We fix this by relaxing the assertion for NDB tables, since we expect in certain scenarios for an error to be returned at this juncture.
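The shape of statement involved is a bulk update restricted with LIMIT, as in this sketch (the table and column names are placeholders):

```sql
-- A bulk UPDATE with LIMIT on an NDB table; if NDB returns an error
-- partway through the operation, the server previously hit a debug
-- assertion rather than reporting the error normally.
UPDATE t SET y = y + 1 LIMIT 10;
```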