For Windows, MSI installer packages for NDB Cluster now include a check for the required Visual Studio redistributable package, and produce a message asking the user to install it if it is missing. (Bug #30541398)
-
NDB Disk Data: An initial restart of the cluster now causes the removal of all NDB tablespaces and log file groups from the NDB dictionary and the MySQL data dictionary. This includes the removal of all data files and undo log files associated with these objects. (Bug #30435378)

References: See also: Bug #29894166.
The status variable Ndb_metadata_blacklist_size is now deprecated, and is replaced in NDB 8.0.22 by Ndb_metadata_excluded_count. (Bug #31465469)
-
It is now possible to consolidate data from separate instances of NDB Cluster into a single target NDB Cluster when the original datasets all use the same schema. This is supported when using backups created with START BACKUP in ndb_mgm and restoring them with ndb_restore, using the --remap-column option implemented in this release (along with --restore-data and possibly other options). --remap-column can be employed to handle cases in which primary or unique key values (or both) overlap between source clusters and must not overlap in the target cluster; it can also be used to preserve other relationships between tables.

When used together with --restore-data, the new option applies a function to the value of the indicated column. The value set for this option is a string of the format db.tbl.col:fn:args, whose components are listed here:

db: Database name, after performing any renames.

tbl: Table name.

col: Name of the column to be updated. This column's type must be INT or BIGINT, and can optionally be UNSIGNED.

fn: Function name; currently, the only supported name is offset.

args: The size of the offset to be added to the column value by offset. The range of the argument is that of the signed variant of the column's type; thus, negative offsets are supported.
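For example, a minimal sketch of such an invocation is shown here; the database shop, table orders, column id, node ID, backup ID, and backup path are all hypothetical placeholders:

    # add 1000 to every shop.orders.id value while restoring the data
    ndb_restore --nodeid=1 --backupid=3 --restore-data \
        --remap-column=shop.orders.id:offset:1000 \
        --backup-path=/backups/BACKUP/BACKUP-3

Here each id value from the source backup is increased by 1000 when written to the target cluster, so that it does not collide with rows restored from another source cluster.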
You can use --remap-column for updating multiple columns of the same table and different columns of different tables, as well as combinations of multiple tables and columns. Different offset values can be employed for different columns of the same table.

As part of this work, two new options are also added to ndb_desc in this release:

--auto-inc (short form -a): Includes the next auto-increment value in the output, if the table has an AUTO_INCREMENT column.

--context (short form -x): Provides extra information about the table, including the schema, database name, table name, and internal ID.
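For illustration, a hypothetical invocation using both new options against the placeholder database and table from the example above might look like this:

    ndb_desc --auto-inc --context -d shop orders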
These options may be useful for obtaining information about NDB tables when planning a merge, particularly in situations where the mysql client may not be readily available.

For more information, see the descriptions for --remap-column, --auto-inc, and --context. (Bug #30383950, WL #11796)
-
Detailed real-time information about the state of automatic metadata mismatch detection and synchronization can now be obtained from tables in the MySQL Performance Schema. These two tables are listed here:

ndb_sync_pending_objects: Contains information about NDB database objects for which mismatches have been detected between the NDB dictionary and the MySQL data dictionary. It does not include objects which have been excluded from mismatch detection due to permanent errors raised when attempting to synchronize them.

ndb_sync_excluded_objects: Contains information about NDB database objects which have been excluded because they cannot be synchronized between the NDB dictionary and the MySQL data dictionary, and thus require manual intervention. These objects are no longer subject to mismatch detection until such intervention has been performed.

In each of these tables, each row corresponds to a database object, and contains the database object's parent schema (if any), the object's name, and the object's type. Types of objects include schemas, tablespaces, log file groups, and tables. The ndb_sync_excluded_objects table shows, in addition to this information, the reason for which the object has been excluded.

Performance Schema NDB Cluster Tables provides further information about these Performance Schema tables. (Bug #30107543, WL #13712)
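As a quick sketch, the contents of both tables can be inspected from the shell with the mysql client; the exact column set should be checked against the table definitions:

    mysql -e "SELECT * FROM performance_schema.ndb_sync_pending_objects"
    mysql -e "SELECT * FROM performance_schema.ndb_sync_excluded_objects"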
-
ndb_restore now supports different primary key definitions for source and target tables when restoring from an NDB native backup, using the --allow-pk-changes option introduced in this release. Both increasing and decreasing the number of columns making up the original primary key are supported. This may be useful when it is necessary to accommodate schema version changes while restoring data, or when doing so is more efficient or less time-consuming than performing ALTER TABLE statements involving primary key changes on a great many tables following the restore operation.

When extending a primary key with additional columns, any columns added must not be nullable, and any values stored in such columns must not change while the backup is being taken; a change in the value of any such column while trying to add it to the table's primary key causes the restore operation to fail. Because some applications set the values of all columns when updating a row even if the values of one or more of the columns do not actually change, this behavior can be overridden using the --ignore-extended-pk-updates option, which is also added in this release. If you do this, care must be taken to ensure that such column values do not actually change.

When removing columns from the table's primary key, it is not necessary that the columns dropped from the primary key remain part of the table afterwards.
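A minimal sketch of such a restore follows; the node ID, backup ID, and backup path are placeholders:

    # restore data while permitting primary key definition changes
    ndb_restore --nodeid=2 --backupid=5 --restore-data \
        --allow-pk-changes --ignore-extended-pk-updates \
        --backup-path=/backups/BACKUP/BACKUP-5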
For more information, see the description of the --allow-pk-changes option in the documentation for ndb_restore. (Bug #26435136, Bug #30383947, Bug #30634010, WL #10730)
-
Added the --ndb-log-fail-terminate option for mysqld. When used, this causes the SQL node to terminate if it is unable to log all row events. (Bug #21911930)

References: See also: Bug #30383919.
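As an illustration, the option might be enabled when starting an SQL node from the shell; the connection string is a placeholder, and the option can equally be set in an option file:

    mysqld --ndbcluster --ndb-connectstring=mgm_host --ndb-log-fail-terminate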
-
When a scalar subquery has no outer references to the table to which the embedding condition is attached, the subquery may be evaluated independently of that table; that is, the subquery is not dependent. NDB now attempts to identify and evaluate such a subquery before trying to retrieve any rows from the table to which it is attached, and to use the value thus obtained in a pushed condition, rather than using the subquery which provided the value. (WL #13798)
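For illustration, a query of the kind that can now benefit might look like the following (table and column names are hypothetical); the subquery on t2 has no reference to t1, so its value can be obtained once and used in the condition pushed down for t1:

    mysql -e "SELECT * FROM t1 WHERE t1.b > (SELECT MAX(b) FROM t2)"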
-
In MySQL 8.0.17 and later, the MySQL Optimizer transforms NOT EXISTS and NOT IN queries into antijoins. NDB can now push these down to the data nodes.

This can be done when there is no unpushed condition on the table, and the query fulfills any other conditions which must be met for an outer join to be pushed down. (WL #13796, WL #13978)
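A hypothetical query of this shape (again with placeholder table and column names) that is rewritten as an antijoin and may now be pushed down:

    mysql -e "SELECT * FROM t1 WHERE t1.a NOT IN (SELECT a FROM t2)"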
-
Important Change; NDB Disk Data: An online change of tablespace is not supported for NDB tables. Now, for an NDB table, the statement ALTER TABLE ndb_table ... ALGORITHM=INPLACE, TABLESPACE=new_tablespace is specifically disallowed.

As part of this fix, the output of the ndb_desc utility is improved to include the tablespace name and ID for an NDB table which is using one. (Bug #31180526)

The wrong index was used in the array of indexes while dropping an index. For a table with 64 indexes, this caused uninitialized memory to be released. This problem also caused a memory leak when a new index was created at any later time following the drop. (Bug #31408095)
Removed an unnecessary dependency of ndb_restore on the NDBCLUSTER plugin. (Bug #31347684)
-
Objects for which auto-synchronization fails due to temporary errors, such as failed acquisitions of metadata locks, are simply removed from the list of detected objects, making such objects eligible for detection in later cycles in which the synchronization is retried and may succeed. This best-effort approach is suitable for the default auto-synchronization behavior, but is not ideal when using the ndb_metadata_sync system variable, which triggers synchronization of all metadata and, when synchronization is complete, is automatically set to false to indicate that this has been done.

What happened, when a temporary error persisted for a sizable length of time, was that metadata synchronization could take much longer than expected and, in extreme cases, could hang indefinitely, pending user action. One such case occurred when using ndb_restore with the --disable-indexes option to restore metadata, when the synchronization process entered a vicious cycle of detection and failed synchronization attempts due to the missing indexes, until the indexes were rebuilt using ndb_restore --rebuild-indexes.

The fix for this issue is, whenever ndb_metadata_sync is set to true, to exclude an object after synchronization of it fails 10 times with temporary errors, promoting these errors to a permanent error in order to prevent stalling. This is done by maintaining a list of such objects which includes a count of the number of times each such object has been retried. Validation of this list is performed during change detection in a similar manner to validation of the exclusion list. (Bug #31341888)

32-bit platforms are not supported by NDB 8.0. Beginning with this release, the build process checks the system architecture and aborts if it is not 64-bit. (Bug #31340969)
-
Page-oriented allocations on the data nodes are divided into nine resource groups, some having pages dedicated to themselves, and some having pages dedicated to shared global memory which can be allocated by any resource group. To prevent the query memory resource group from depriving other, more important resource groups such as transaction memory of resources, allocations for query memory are performed with low priority and are not allowed to use the last 10% of shared global memory. This change was introduced by poolification work done in NDB 8.0.15.
Subsequently, it was observed that the calculation for the number of pages of shared global memory kept inaccessible to query memory was correct only when no pages were in use, which is the case when the LateAlloc data node parameter is disabled (0).

This fix corrects that calculation as performed when LateAlloc is enabled. (Bug #31328947)

References: See also: Bug #31231286.
-
Multi-threaded restore is able to drive a greater cluster load than the previous single-threaded restore, especially while restoring the data file. To avoid load-related issues, the insert operation parallelism specified for an ndb_restore instance is divided equally among the part threads, so that a multithreaded instance has a level of parallelism for transactions and operations similar to that of a single-threaded instance.

An error in this division caused some part threads to have lower insert operation parallelism than intended, leading to a slower restore than expected. This fix ensures that all part threads in a multi-threaded ndb_restore instance get an equal share of the parallelism. (Bug #31256989)
DUMP 1001 (DumpPageMemoryOnFail) now prints out information about the internal state of the data node page memory manager when allocation of pages fails due to resource constraints. (Bug #31231286)
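As a sketch, the command can be sent to all data nodes from the management client like this:

    ndb_mgm -e "ALL DUMP 1001"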
Statistics generated by NDB for use in tracking internal objects allocated and deciding when to release them were not calculated correctly, with the result that the threshold for resource usage was 50% higher than intended. This fix corrects the issue, and should allow for reduced memory usage. (Bug #31127237)

The Dojo toolkit included with NDB Cluster and used by the Auto-Installer was upgraded to version 1.15.3. (Bug #31029110)
A packed version 1 configuration file returned by ndb_mgmd could contain duplicate entries following an upgrade to NDB 8.0, which made the file incompatible with clients using version 1. This occurred because the code for handling backwards compatibility assumed that the entries in each section were already sorted when merging the section with the default section. To fix this, we now make sure that this sort is performed prior to merging. (Bug #31020183)
-
When executing any of the SHUTDOWN, ALL STOP, or ALL RESTART management commands, it is possible for different nodes to attempt to stop on different global checkpoint index (GCI) boundaries. If they succeed in doing so, a subsequent system restart is slower than normal, because any nodes having an earlier stop GCI must undergo takeover as part of the process. When nodes failing on the first GCI boundary cause surviving nodes to be nonviable, the surviving nodes suffer an arbitration failure; this has the positive effect of causing such nodes to halt at the correct GCI, but can give rise to spurious errors or similar.

To avoid such issues, extra synchronization is now performed during a planned shutdown in order to reduce the likelihood that different data nodes attempt to shut down at different GCIs, as well as to avoid unnecessary node takeovers during system restarts. (Bug #31008713)
During an upgrade, a client could connect to an NDB 8.0 data node without specifying a multiple transporter instance ID, so that this ID defaulted to -1. Due to an assumption that this would occur only in the Node starting state with a single transporter, the node could hang during the restart. (Bug #30899046)
-
When an NDB cluster was upgraded from a version that does not support the data dictionary to one that does, any DDL executed on a newer SQL node was not properly distributed to older ones. In addition, newer SDI generated during DDL execution was ignored by any data nodes that had not yet been upgraded. These two issues led to schema states that were not consistent between nodes of different NDB software versions.
We fix this problem by blocking any DDL affecting NDB data objects while an upgrade from a previous NDB version to a version with data dictionary support is ongoing. (Bug #30877440)
References: See also: Bug #30184658.
-
The mysql.ndb_schema table, used internally for schema distribution among SQL nodes, has been modified in NDB 8.0. When a cluster is being upgraded from an older version of NDB, the first SQL node to be upgraded updates the definition of this table to match that used by NDB 8.0 GA releases. (For this purpose, NDB now uses 8.0.21 as the cutoff version.) This is done by dropping the existing table and re-creating it using the newer definition. SQL nodes which have not yet been upgraded receive this ndb_schema table drop event and enter read-only mode, becoming writable again only after they are upgraded.

To keep SQL nodes running older versions of NDB from going into read-only mode, we change the upgrade behavior of mysqld such that the ndb_schema table definition is updated only if all SQL nodes connected to the cluster are running an 8.0 GA version of NDB and thus have the updated ndb_schema table definition. This means that, during an upgrade to the current or any later version, no MySQL Server that is being upgraded updates the ndb_schema table if there is at least one SQL node with an older version connected to the cluster. Any SQL node running an older version of NDB remains writable throughout the upgrade process. (Bug #30876990, Bug #31016905)

ndb_import did not correctly handle the case where a CSV parser error occurred in a block of input other than the final block. (Bug #30839144)
-
When mysqld was upgraded to a version that used a new SDI version, all NDB tables became inaccessible. This was because, during an upgrade, synchronization of NDB tables relies on deserializing the SDI packed into the NDB dictionary; if the packed SDI used a format version older than the one expected by the upgraded server, deserialization could not take place, which made it impossible to create a table object in the MySQL data dictionary.

This is fixed by making it possible for NDB to bypass the SDI version check in the MySQL server when necessary to perform deserialization as part of an upgrade. (Bug #30789293, Bug #30825260)

When responding to a SCANTABREQ, an API node can provide a distribution key if it knows that the scan should work on only one fragment, in which case the distribution key should be the fragment ID, but in some cases a hash of the partition key was used instead, leading to failures in DBTC. (Bug #30774226)

Several memory leaks found in ndb_import have been removed. (Bug #30756434, Bug #30727956)
-
The master node in a backup shut down unexpectedly on receiving duplicate replies to a DEFINE_BACKUP_REQ signal. These occurred when a data node other than the master errored out during the backup, and the backup master handled the situation by sending itself a DEFINE_BACKUP_REF signal on behalf of the missing node; this resulted in two replies being received from the same node (a CONF signal from the problem node prior to shutting down, and the REF signal from the master on behalf of this node), even though the master expected only one reply per node. This scenario was also encountered for START_BACKUP_REQ and STOP_BACKUP_REQ signals.

This is fixed in such cases by allowing duplicate replies when the error is the result of an unplanned node shutdown. (Bug #30589827)
-
When updating NDB_TABLE comment options using ALTER TABLE, other options which had been set to non-default values when the table was created, but which were not specified in the ALTER TABLE statement, could be reset to their defaults.

See Setting NDB Comment Options, for more information. (Bug #30428829)
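For reference, a hypothetical statement of the kind affected, updating a single NDB_TABLE option in the comment of an existing table t1:

    mysql -e "ALTER TABLE t1 COMMENT='NDB_TABLE=READ_BACKUP=1'"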
Removed a memory leak found in the ndb_import utility. (Bug #29820879)
Incorrect handling of operations on fragment replicas during node restarts could result in a forced shutdown, or in content diverging between fragment replicas, when primary keys with nonbinary (case-sensitive) equality conditions were used. (Bug #98526, Bug #30884622)