- Important Change: The old MaxAllocate data node configuration parameter has no effect in any current version of NDB. As of this release, it is deprecated and subject to removal in a future release. (Bug #52980, Bug #11760559)
- NDB Cluster APIs: Conditions pushed as part of a pushed query can now refer to columns from ancestor tables within the same pushed query. For example, given a table created using CREATE TABLE t (x INT PRIMARY KEY, y INT) ENGINE=NDB, a query such as the one shown here can now employ condition pushdown:

  SELECT * FROM t AS a LEFT JOIN t AS b ON a.x=0 AND b.y>5;

  Pushed conditions may include any of the common comparison operators <, <=, >, >=, =, and <>. Values being compared must be of the same type, including length, precision, and scale. NULL handling is performed according to the comparison semantics specified by the ISO SQL standard; any comparison with NULL returns NULL. For more information, see Engine Condition Pushdown Optimization.

  As part of this work, new NdbInterpretedCode methods for comparing column values with the values of parameters are implemented in the NDB API. In addition, a new NdbScanFilter::cmp_param() API method makes it possible to define comparisons between column values and parameter values. (WL #14388)
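To illustrate the new capability further, a pushed condition on an inner table may now compare its columns against a column of an ancestor table, not only against constants. This sketch reuses the table t from above; whether the condition is actually pushed in a given plan can be verified with EXPLAIN:

```sql
-- b.y > a.x compares a column of the inner table b with a column of its
-- ancestor a; such a condition can now be pushed down to the data nodes.
EXPLAIN FORMAT=TREE
SELECT *
FROM t AS a
LEFT JOIN t AS b ON b.x = a.x AND b.y > a.x;
```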
- In environments that monitor and disconnect idle TCP connections, an idle cluster could suffer unnecessary data node failures, and the failure of more than one data node could lead to an unplanned shutdown of the cluster. To fix this problem, we introduce a new keep-alive signal (GSN_TRP_KEEP_ALIVE) that is sent on all connections between data nodes on a regular basis, by default once every 60000 milliseconds (one minute). The length of the interval between these signals can be adjusted by setting the KeepAliveSendInterval data node parameter introduced in this release; setting it to 0 disables keep-alive signals. Be aware that NDB performs no checking that these signals are received and performs no disconnects on their account; this remains the responsibility of the heartbeat protocol. (Bug #32776593)

- A copying ALTER TABLE now checks the source table's fragment commit counts before and after performing the copy. This allows the SQL node executing the ALTER TABLE to determine whether there has been any concurrent write activity to the table being altered and, if so, to terminate the operation, which can help avoid silent loss or corruption of data. When this occurs, the ALTER TABLE statement is now rejected with the error Detected change to data in source table during copying ALTER TABLE. Alter aborted to avoid inconsistency. (Bug #24511580, Bug #25694856, WL #10540)

- The creation and updating of NDB index statistics are now enabled by default. In addition, when restoring metadata, ndb_restore now creates the index statistics tables if they do not already exist. (WL #14355)
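The KeepAliveSendInterval parameter described above is a data node parameter; a minimal config.ini sketch (the interval value is illustrative, not a recommendation):

```ini
[ndbd default]
# Send GSN_TRP_KEEP_ALIVE on every data node connection every 30 seconds;
# setting this parameter to 0 disables keep-alive signals entirely.
KeepAliveSendInterval=30000
```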
- Important Change; NDB Cluster APIs: Since MySQL 8.0 uses the data dictionary to store table metadata, the NDB API Table methods relating to .FRM files are now deprecated. NDB 8.0 uses getExtraMetadata() and setExtraMetadata() for reading and writing table metadata stored in the MySQL data dictionary; you should expect the deprecated *Frm*() methods to be removed in a future release of NDB Cluster. (Bug #28248575)

- Important Change: The default value for each of the two mysqld options --ndb-wait-connected and --ndb-wait-setup has been increased from 30 to 120 seconds. (Bug #32850056)

- Microsoft Windows: A number of warnings generated when building NDB Cluster and the NDB utilities with Visual Studio 16.9.5 were removed. (Bug #32881961)
- Microsoft Windows: On Windows, it was not possible to start data nodes successfully when the cluster was configured to use more than 64 data nodes. (Bug #104682, Bug #33262452)

- NDB Cluster APIs: A number of MGM API functions, including ndb_mgm_create_logevent_handle(), did not release memory properly. (Bug #32751506)

- NDB Cluster APIs: Trying to create an index using NdbDictionary with index statistics enabled and the index statistics tables missing resulted in NDB error 723 No such table existed, the missing table in this context being one of the statistics tables, which was not readily apparent to the user. Now in such cases, NDB instead returns error 4714 Index stats system tables do not exist, which is added in this release. (Bug #32739080)

- NDB Cluster APIs: The MySQL NoSQL Connector for JavaScript included with NDB Cluster is now built using Node.js version 12.2.6.
- A buffer used in the SUMA kernel block did not always accommodate multiple signals. (Bug #33246047)

- In DbtupBuffer.cpp, the priority level was adjusted to match the currently executing priority in one code path, but this adjustment was not applied to short signals. This led to the risk of TRANSID_AI signals, SCAN_FRAGCONF signals, or both sorts of signals arriving out of order. (Bug #33206293)
- A query executed as a pushed join by the NDB storage engine returned fewer rows than expected, under the following conditions:

  - The query contained an IN or EXISTS subquery executed as a pushed join, using the firstMatch algorithm.

  - The subquery itself also contained an outer join of at least two tables, at least one of which used the eq_ref access type.

  (Bug #33181964)
- Part of the work done in NDB 8.0.23 to add query threads through the ThreadConfig parameter included the addition of a TUX scan context used to optimize scans, but in some cases this context was not set up correctly following the close of a scan. (Bug #33161080, Bug #32794719)

  References: See also: Bug #33379702.
- An attribute not found error was returned on a pushed join in NDB when looking up a column to add a linked value. The issue was caused by use of the wrong lettercase for the name of the column, and is fixed by ensuring that we use the unmodified, original name of the column when performing lookups. (Bug #33104337)
- Changes in NDB 8.0 resulted in a permanent error (NDB Error 261) being returned when the resources needed by a transaction's operations did not fit within those allocated for the transaction coordinator, rather than a temporary error (Error 233) as in previous versions. This distinction is significant in NDB Replication, where a temporary error is retried but a permanent error is not. A permanent error is suitable when the transaction itself is too large to fit in the transaction coordinator without reconfiguration; when the transaction cannot fit because other transactions are consuming resources, the error should be temporary, since the transaction may be able to fit later, or in some other TC instance.

  The temporary error returned in such cases (NDB error 233) now has a slightly different meaning: there is insufficient pooled memory for allocating another operation. (Previously, this error meant that the limit set by MaxNoOfConcurrentOperations had been reached.) Rather than conflate these two meanings (dynamic allocation and configured limit), we add a new temporary error (Error 234) which is returned when the configured limit has been reached. See Temporary Resource error, and Application error, for more information about these errors. (Bug #32997832)

  References: See also: Bug #33092571.
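The distinction above matters mostly to code that decides whether to retry. The sketch below models it generically in Python; the error numbers come from this entry, but the classification sets and retry loop are illustrative and not part of any NDB API:

```python
# Error codes taken from the entry above; the classification and retry
# logic are an illustrative sketch, not NDB API code.
TEMPORARY_ERRORS = {233, 234}  # retryable: resources may free up later
PERMANENT_ERRORS = {261}       # not retried: transaction too large to ever fit

def apply_with_retry(attempt_fn, max_retries=3):
    """Run attempt_fn(), which returns 0 on success or an error code,
    retrying while the error is classified as temporary."""
    last = 0
    for _ in range(max_retries + 1):
        last = attempt_fn()
        if last == 0:
            return "committed"
        if last in PERMANENT_ERRORS:
            return f"aborted: permanent error {last}"
    return f"gave up: temporary error {last} persisted"

# An operation that hits temporary error 234 twice, then succeeds:
outcomes = iter([234, 234, 0])
print(apply_with_retry(lambda: next(outcomes)))  # committed
```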
- Added an ndbrequire() in QMGR to check that the node ID received in the CM_REGREF signal is less than MAX_NDB_NODES. (Bug #32983311)

- A check was reported missing from the code for handling GET_TABLEID_REQ signals. To fix this issue, all code relating to the GET_TABLEID_* signals has been removed from the NDB sources, since these signals are no longer used or supported in NDB Cluster. (Bug #32983249)

- Added an ndbrequire() in QMGR to ensure that process reports from signal data use appropriate node IDs. (Bug #32983240)

- It was possible in some cases to specify an invalid node type when working with the internal management API. Now the API specifically disallows invalid node types, and defines an “unknown” node type (NDB_MGM_NODE_TYPE_UNKNOWN) to cover such cases. (Bug #32957364)

- NdbReceiver did not always initialize storage for a MySQL BIT column correctly. (Bug #32920099)
- Receiving a spurious schema operation reply from a node not registered as a participant in the current schema operation led to an unplanned shutdown of the SQL node. Now in such cases we discard replies from any node not registered as a participant. (Bug #32891206)

  References: See also: Bug #30930132, Bug #32509544.
- The values true and false for Boolean parameters such as AutomaticThreadConfig were not handled correctly when set in a .cnf file. (This issue did not affect handling of such values in .ini files.) (Bug #32871875)

- Removed unneeded copying of a temporary variable that caused a compiler truncation warning in storage/ndb/src/common/util/version.cpp. (Bug #32763321)
- The maximum index size supported by the NDB index statistics implementation is 3056 bytes. Attempting to create an index of a larger size, when the table held enough data to trigger a statistics update, caused CREATE INDEX to be rejected with the error Got error 911 'Index stat scan requested on index with unsupported key size' from NDBCLUSTER.

  This error originated in the TUX kernel block during a scan, which caused the schema transaction to fail. This scan is triggered during index creation when the table contains a nonzero number of rows; it also occurs during automatic updates of index statistics and during execution of ANALYZE TABLE. Creating the index as part of CREATE TABLE, or when the table contained no rows, returned no error; no statistics were generated in such situations, while ANALYZE TABLE returned an error similar to the one above.

  We fix this by allowing the index to be created while returning an appropriate warning from a new check introduced at the handler level. In addition, the TUX scan now handles this situation by suppressing the error and returning success, effectively treating the table as an empty fragment. Otherwise, the behavior in such cases remains unchanged: a warning is returned to the client and no index statistics are generated, whether or not the table contains any rows. (Bug #32749829)

  References: This issue is a regression of: Bug #28714864.
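The new handler-level behavior can be summarized as a sketch; the 3056-byte limit is taken from this entry, while the function and its return values are illustrative:

```python
# Maximum index key size supported by the NDB index statistics
# implementation, per the entry above.
MAX_STAT_KEY_SIZE = 3056

def create_index(key_size_bytes):
    """Index creation now succeeds for oversized keys, returning a warning
    instead of failing the schema transaction (illustrative only)."""
    if key_size_bytes > MAX_STAT_KEY_SIZE:
        return ("created", "index statistics unsupported for this key size")
    return ("created", None)
```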
- A CREATE TABLE statement using ordered indexes returned an error when IndexStatAutoCreate was set to 1 and all SQL nodes had been started with --ndb-index-stat-enable=OFF, due to the fact that, when set to OFF, the option prevented the creation of the index statistics tables. Now these tables are always created at mysqld startup, regardless of the value of --ndb-index-stat-enable. (Bug #32649528)
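The combination that formerly failed involves a data node parameter and a mysqld option; a minimal sketch (section placement shown is typical, values illustrative):

```ini
# config.ini (cluster global configuration)
[ndbd default]
IndexStatAutoCreate=1

# my.cnf (on each SQL node); this setting no longer suppresses
# creation of the index statistics tables themselves
[mysqld]
ndb-index-stat-enable=OFF
```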
- If an NDB schema operation was lost before the coordinator could process it, the client which logged the operation waited indefinitely for the coordinator to complete or abort it. (Bug #32593352)

  References: See also: Bug #32579148.

- ndb_mgmd now writes a descriptive error message to the cluster log when it is invoked with one or more invalid options. (Bug #32554492)

- An IPv6 address used as part of an NDB connection string which had only decimal digits following the first colon was incorrectly parsed, and could not be used to connect to the management server. (Bug #32532157)
- Simultaneously creating a user and then granting this user the NDB_STORED_USER privilege on different MySQL servers sometimes caused these servers to hang. This was due to the fact that, when the NDB storage engine is enabled, all SQL statements that involve users and grants are evaluated to determine whether they affect any users having the NDB_STORED_USER privilege, after which some statements are ignored, some are distributed to all SQL nodes as statements, and some are distributed to all SQL nodes as requests to read and apply a user privilege snapshot. These snapshots are stored in the mysql.ndb_sql_metadata table. Unlike a statement update, which is limited to one SQL statement, a snapshot update can contain up to seven SQL statements per user. Waiting for any lock in the NDB binary logging thread while managing distributed users could easily lead to a deadlock when the thread was waiting for an exclusive lock on the local ACL cache.

  We fix this problem by implementing explicit locking around NDB_STORED_USER snapshot updates; snapshot distribution is now performed while holding a global read lock on one row of the ndb_sql_metadata table. (Previously, both statement and snapshot distribution were performed asynchronously, with no locking.) Now, when a thread does not obtain this lock on the first attempt, a warning is raised and the deadlock is prevented.

  For more information, see Privilege Synchronization and NDB_STORED_USER. (Bug #32424653)

  References: See also: Bug #32832676.
- It was not possible to create or update index statistics when the cluster was in single user mode, because transactions were disallowed from any node other than the designated API node granted access, regardless of node type. This prevented the data node responsible for starting transactions relating to index statistics from doing so. We address this issue by relaxing the constraint in single user mode and allowing transactions originating from data nodes (but not from other API nodes). (Bug #32407897)
- When starting multiple management nodes, the first such node waits for the others to start before committing the configuration, but this was not explicitly communicated to users. In addition, when a data node was started without all management nodes having been started, no indication was given that its node ID could not be allocated because no configuration had yet been committed. Now in such cases, the management node prints a message advising the user that the cluster is configured to use multiple management nodes, and that they should ensure all such nodes have been started. (Bug #32339789)
- To handle cases in which a cluster is restarted while the MySQL Server (SQL node) is left running, the index statistics thread is notified when an initial cluster start or restart occurs. On this notification, the index statistics thread forced the creation of a fresh Ndb object and the checking of various system objects; this is unnecessary when the MySQL Server is started at the same time as the initial cluster start, and led to the needless re-creation of the Ndb object. We fix this by restarting only the listener in such cases, rather than forcing the Ndb object to be re-created. (Bug #29610555, Bug #33130864)

- Removed extraneous spaces that appeared in some error entries written to the node logs. (Bug #29540486)
- ndb_restore raised a warning to use --disable-indexes when restoring data even after the metadata had already been restored with --disable-indexes. When --disable-indexes is used to restore metadata before restoring data, the tables in the target schema have no indexes. We now check, when restoring data with this option, that there are no indexes on the target table, and print the warning only if the table already has indexes. (Bug #28749799)
- When metadata was restored using --disable-indexes, no attempt was made to create indexes or foreign keys dependent on those indexes, but when ndb_restore was used without the option, indexes and foreign keys were created. When --disable-indexes was used later while restoring data, NDB attempted to drop any indexes created in the previous step, but ignored the failure of a drop index operation caused by a dependency on the index from a foreign key which had not been dropped. This subsequently led to problems while rebuilding indexes, when there was an attempt to create foreign keys which already existed. We fix ndb_restore as follows:

  - When --disable-indexes is used, ndb_restore now drops any foreign keys restored from the backup.

  - ndb_restore now checks for the existence of indexes before attempting to drop them.

  (Bug #26974491)
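A typical restore sequence using --disable-indexes looks like the following sketch; the node ID, backup ID, and backup path are hypothetical placeholders:

```shell
# 1. Restore metadata without indexes or dependent foreign keys
ndb_restore --nodeid=1 --backupid=5 --backup-path=/backups/BACKUP-5 \
            --restore-meta --disable-indexes
# 2. Restore data; ndb_restore now warns only if indexes are present
ndb_restore --nodeid=1 --backupid=5 --backup-path=/backups/BACKUP-5 \
            --restore-data --disable-indexes
# 3. Rebuild indexes (and re-create dependent foreign keys) at the end
ndb_restore --nodeid=1 --backupid=5 --backup-path=/backups/BACKUP-5 \
            --rebuild-indexes
```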
- The --ndb-nodegroup-map option for ndb_restore did not function as intended, and the code supporting it has been removed. The option now does nothing, and any value set for it is ignored. (Bug #25449055)

- Event buffer status messages shown by the event logger have been improved. Percentages are now displayed only when it makes sense to do so. In addition, if a maximum size is not defined, the printout shows max=unlimited. (Bug #21276857)

- File handles and FileLogHandler objects created in MgmtSrvr::configure_eventlogger were leaked due to an incomplete destructor for BufferedLogHandler. This meant that, each time the cluster configuration changed in a running ndb_mgmd, the cluster log was reopened and a file handle was leaked, which could lead to issues with test programs and possibly to other problems. (Bug #18192573)
- When --configdir was specified as . (the current directory), but with a current working directory other than DataDir, the binary configuration was created in DataDir and not in the current directory. In addition, ndb_mgmd would not start when there was an existing binary configuration in DataDir. We fix this by having ndb_mgmd check the path, and refuse to start when a relative path is specified for --configdir. (Bug #11755867)

- A memory leak occurred when NDBCLUSTER was unable to create a subscription for receiving cluster events. Ownership of the provided event data was supposed to be taken over, but this actually happened only when creation of the subscription succeeded; in other cases, the provided event data was simply lost. (Bug #102794, Bug #32579459)

- ndb_mgmd ignores the --ndb-connectstring option if --config-file is also specified. Now a warning to this effect is issued if both options are used. (Bug #102738, Bug #32554759)

- The data node configuration parameters UndoDataBuffer and UndoIndexBuffer have no effect in any currently supported version of NDB Cluster. Both parameters are now deprecated, and the presence of either in the cluster configuration file raises a warning; you should expect them to be removed in a future release. (Bug #84184, Bug #26448357)

- Execution of a bulk UPDATE statement using a LIMIT clause led to a debug assertion when an error was returned by NDB. We fix this by relaxing the assertion for NDB tables, since in certain scenarios we expect an error to be returned at this juncture.
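A statement of the shape involved, as a sketch (the table and column names are hypothetical):

```sql
-- A bulk update restricted by LIMIT; with NDB tables, an error returned
-- during execution no longer trips the debug assertion.
UPDATE t1 SET status = 'done' WHERE status = 'pending' LIMIT 100;
```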