MySQL NDB Cluster 8.0.20 is a new release of NDB 8.0, based on
MySQL Server 8.0 and including features in version 8.0 of the
NDB storage engine, as well as fixing
recently discovered bugs in previous NDB Cluster releases.
Obtaining NDB Cluster 8.0. NDB Cluster 8.0 source code and binaries can be obtained from https://dev.mysql.com/downloads/cluster/.
For an overview of changes made in NDB Cluster 8.0, see What is New in MySQL NDB Cluster 8.0.
This release also incorporates all bug fixes and changes made in previous NDB Cluster releases, as well as all bug fixes and feature changes which were added in mainline MySQL 8.0 through MySQL 8.0.20 (see Changes in MySQL 8.0.20 (2020-04-27, General Availability)).
-
Important Change: It is now possible to divide a backup into slices and to restore these in parallel using two new options implemented for the ndb_restore utility. This makes it possible to employ multiple instances of ndb_restore to restore subsets of roughly the same size of the backup in parallel, which should help to reduce the time required to restore an NDB Cluster from backup.
The --num-slices option determines the number of slices into which the backup should be divided; --slice-id provides the ID of the slice (0 to 1 less than the number of slices) to be restored by ndb_restore. Up to 1024 slices are supported.
For more information, see the descriptions of the --num-slices and --slice-id options. (Bug #30383937, WL #10691)
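A minimal sketch of driving such a parallel restore from Java, assuming ndb_restore is on the PATH; only --num-slices and --slice-id come from this release, while the connect string, node ID, backup ID, and backup path shown are ordinary ndb_restore arguments with placeholder values.

    import java.util.ArrayList;
    import java.util.List;

    public class ParallelSlicedRestore {
        public static void main(String[] args) throws Exception {
            final int numSlices = 4;                 // how many slices the backup is divided into
            List<Process> slices = new ArrayList<>();
            for (int sliceId = 0; sliceId < numSlices; sliceId++) {
                // One ndb_restore instance per slice; --num-slices and --slice-id are the
                // new options, the remaining values are placeholders for this sketch.
                ProcessBuilder pb = new ProcessBuilder(
                        "ndb_restore",
                        "--ndb-connectstring=mgmhost:1186",
                        "--nodeid=1",
                        "--backupid=1",
                        "--restore-data",
                        "--num-slices=" + numSlices,
                        "--slice-id=" + sliceId,
                        "--backup-path=/backups/BACKUP/BACKUP-1");
                slices.add(pb.inheritIO().start());
            }
            for (Process p : slices) {               // wait for every slice to finish
                if (p.waitFor() != 0) {
                    throw new IllegalStateException("restore slice failed");
                }
            }
        }
    }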
-
Important Change: To increase the rate at which update operations can be processed, NDB now supports and by default makes use of multiple transporters per node group. By default, the number of transporters used by each node group in the cluster is equal to the number of local data management (LDM) threads. While this number should be optimal for most use cases, it can be adjusted by setting the value of the NodeGroupTransporters data node configuration parameter, which is introduced in this release. The maximum is the greater of the number of LDM threads or the number of TC threads, up to an overall maximum of 32 transporters.
See Multiple Transporters, for additional information. (WL #12837)
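The following is only an illustration of the documented default and upper bound for NodeGroupTransporters, not NDB's internal logic; the thread counts passed in are arbitrary examples.

    public class NodeGroupTransportersBound {
        // Documented rule: the default equals the number of LDM threads, and the
        // maximum is the greater of the LDM or TC thread count, capped at 32.
        static int defaultTransporters(int ldmThreads) {
            return ldmThreads;
        }

        static int maxTransporters(int ldmThreads, int tcThreads) {
            return Math.min(32, Math.max(ldmThreads, tcThreads));
        }

        public static void main(String[] args) {
            System.out.println(defaultTransporters(4));   // 4
            System.out.println(maxTransporters(4, 6));    // 6
            System.out.println(maxTransporters(40, 8));   // capped at 32
        }
    }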
-
NDB Client Programs: Two options are added for the ndb_blob_tool utility, to enable it to detect missing blob parts for which inline parts exist, and to replace these with placeholder blob parts (consisting of space characters) of the correct length. To check whether there are missing blob parts, use the ndb_blob_tool --check-missing option. To replace with placeholders any blob parts which are missing, use the program's --add-missing option, also added in this release; a brief invocation sketch appears after this group of entries. (Bug #28583971)
-
NDB Client Programs: Removed a dependency from the ndb_waiter and ndb_show_tables utility programs on the NDBT library. This library, used in NDB development for testing, is not required for normal use. The visible effect of this change for users is that these programs no longer print NDBT_ProgramExit - status following completion of a run. Applications that depend upon this behavior should be updated to reflect this change when upgrading to this release. (WL #13727, WL #13728)
-
MySQL NDB ClusterJ: The unused antlr3 plugin has been removed from the ClusterJ pom file. (Bug #29931625)
-
MySQL NDB ClusterJ: The minimum Java version ClusterJ supports for MySQL NDB Cluster 8.0 is now Java 8. (Bug #29931625)
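A minimal Java sketch of the check-then-repair sequence for the ndb_blob_tool entry above; the connect string, database name, and table name are placeholders, and only --check-missing and --add-missing are new in this release.

    public class BlobPartRepair {
        // Runs ndb_blob_tool with the given action against a placeholder table.
        static int run(String action) throws Exception {
            return new ProcessBuilder(
                    "ndb_blob_tool",
                    "--ndb-connectstring=mgmhost:1186",
                    "--database=test",
                    action,                          // --check-missing or --add-missing
                    "t1")                            // table name (placeholder)
                    .inheritIO()
                    .start()
                    .waitFor();
        }

        public static void main(String[] args) throws Exception {
            run("--check-missing");                  // report blob parts that are missing
            run("--add-missing");                    // write space-filled placeholder parts
        }
    }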
-
MySQL NDB ClusterJ: A few Java APIs used by ClusterJ are now deprecated in recent Java versions. These adjustments have been made to ClusterJ:
  - Replaced all Class.newInstance() calls with Class.getDeclaredConstructor().newInstance() calls. Also updated the exception handling and the test cases wherever required.
  - All the Number classes' constructors that instantiate an object from a String or a primitive type are deprecated. Replaced all such deprecated instantiation calls with the corresponding valueOf() method calls.
  - Proxy.getProxyClass() is now deprecated. The DomainTypeHandlerImpl class now directly creates a new instance using the Proxy.newProxyInstance() method; all references to the Proxy class and its constructors have been removed from the DomainTypeHandlerImpl class. The SessionFactoryImpl class now uses the interfaces underlying the proxy object to identify the domain class rather than using the Proxy class. Also updated DomainTypeHandlerFactoryTest.
  - The finalize() method is now deprecated. This patch does not change the overriding finalize() methods, but just suppresses the warnings on them. This deprecation will be handled separately in a later patch.
  - Updated the CMake configuration to treat deprecation warnings as errors when compiling ClusterJ.
(Bug #29931625)
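A minimal standalone sketch of two of the replacements listed above (reflective instantiation and boxed-number creation); SampleBean is a placeholder class used only for illustration and is not part of ClusterJ.

    public class DeprecationExamples {
        public static class SampleBean {             // placeholder type for illustration
            public SampleBean() { }
        }

        public static void main(String[] args) throws ReflectiveOperationException {
            // Formerly: SampleBean bean = SampleBean.class.newInstance();
            // Class.newInstance() is deprecated; go through the declared constructor instead.
            SampleBean bean = SampleBean.class.getDeclaredConstructor().newInstance();

            // Formerly: Integer hours = new Integer("24");  Double ratio = new Double(0.5);
            // The Number constructors taking a String or a primitive are deprecated;
            // use the corresponding valueOf() factory methods.
            Integer hours = Integer.valueOf("24");
            Double ratio = Double.valueOf(0.5);

            System.out.println(bean.getClass().getSimpleName() + " " + hours + " " + ratio);
        }
    }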
-
NDB now supports versioning for ndbinfo tables, and maintains the current definitions for its tables internally. At startup, NDB compares its supported ndbinfo version with the version stored in the data dictionary. If the versions differ, NDB drops any old ndbinfo tables and recreates them using the current definitions. (WL #11563)
-
Many outer joins and semijoins which previously could not be pushed down to the data nodes can now be pushed (see Engine Condition Pushdown Optimization).
Outer joins which can now be pushed include those which meet the following conditions:
  - There are no unpushed conditions on this table
  - There are no unpushed conditions on other tables in the same join nest, or in upper join nests on which it depends
  - All other tables in the same join nest, or in upper join nests on which it depends, are also pushed
A semijoin using an index scan can now be pushed if it meets the conditions just noted for a pushed outer join, and it uses the firstMatch strategy. (WL #7636, WL #13576)
References: See also: Bug #28728603, Bug #28672214, Bug #29296615, Bug #29232744, Bug #29161281, Bug #28728007.
-
A new and simplified interface is implemented for enabling and configuring adaptive CPU spin. The SpinMethod data node parameter, added in this release, provides the following four settings:
  - StaticSpinning: Disables adaptive spinning; uses the static spinning employed in previous NDB Cluster releases
  - CostBasedSpinning: Enables adaptive spinning using a cost-based model
  - LatencyOptimisedSpinning: Enables adaptive spinning optimized for latency
  - DatabaseMachineSpinning: Enables adaptive spinning optimized for machines hosting databases, where each thread has its own CPU
Each of these settings causes the data node to use a set of predetermined values, as needed, for one or more of the spin parameters listed here:
  - SchedulerSpinTimer: The data node configuration parameter of this name
  - EnableAdaptiveSpinning: Enables or disables adaptive spinning; cannot be set directly in the cluster configuration file, but can be controlled directly using DUMP 104004
  - SetAllowedSpinOverhead: CPU time to allow to gain latency; cannot be set directly in the config.ini file, but possible to change directly using DUMP 104002
The presets available from SpinMethod should cover most use cases, but you can fine-tune the adaptive spin behavior using the SchedulerSpinTimer data node configuration parameter and the DUMP commands just listed, as well as additional DUMP commands in the ndb_mgm cluster management client; see the description of SchedulerSpinTimer for a complete listing.
NDB 8.0.20 also adds a new TCP configuration parameter TcpSpinTime which sets the time to spin for a given TCP connection. This can be used to enable adaptive spinning for any such connections between data nodes, management nodes, and SQL or API nodes.
The ndb_top tool is also enhanced to provide spin time information per thread; this is displayed in green in the terminal window.
For more information, see the descriptions of the SpinMethod and TcpSpinTime configuration parameters, the DUMP commands listed or indicated previously, and the documentation for ndb_top. (WL #12554)
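A minimal sketch that emits a cluster configuration fragment using the new parameters; the section names, the SpinMethod choice, and the TcpSpinTime value are assumptions of this sketch and should be adapted to, and merged into, your actual config.ini rather than used as-is.

    public class SpinConfigFragment {
        public static void main(String[] args) {
            // Data node section: pick one of the four SpinMethod settings listed above.
            // TCP section: TcpSpinTime sets the time to spin for a given TCP connection;
            // the value 200 below is only a placeholder.
            String fragment = String.join(System.lineSeparator(),
                    "[ndbd default]",
                    "SpinMethod=LatencyOptimisedSpinning",
                    "",
                    "[tcp default]",
                    "TcpSpinTime=200");
            System.out.println(fragment);
        }
    }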
-
Important Change: When lower_case_table_names was set to 0, issuing a query in which the lettercase of any foreign key names differed from the case with which they were created led to an unplanned shutdown of the cluster. This was due to the fact that mysqld treats foreign key names as case insensitive, even on case-sensitive file systems, whereas the manner in which the NDB dictionary stored foreign key names depended on the value of lower_case_table_names, such that, when this was set to 0, during lookup, NDB expected the lettercase of any foreign key names to match that with which they were created. Foreign key names which differed in lettercase could then not be found in the NDB dictionary, even though they could be found in the MySQL data dictionary, leading to the previously described issue in NDBCLUSTER. This issue did not happen when lower_case_table_names was set to 1 or 2.
The problem is fixed by making foreign key names case insensitive and removing the dependency on lower_case_table_names. This means that the following two items are now always true:
  - Foreign key names are now stored using the same lettercase with which they are created, without regard to the value of lower_case_table_names.
  - Lookups for foreign key names by NDB are now always case insensitive.
(Bug #30512043)
-
Packaging: Removed an unnecessary dependency on Perl from the mysql-cluster-community-server-minimal RPM package. (Bug #30677589)
-
Packaging: NDB did not compile successfully on Ubuntu 16.04 with GCC 5.4 due to the use of isnan() rather than std::isnan(). (Bug #30396292)
References: This issue is a regression of: Bug #30338980.
-
OS X: Removed the variable SCHEMA_UUID_VALUE_LENGTH, which was used only once in the NDB sources, and which caused compilation warnings when building on Mac OS X. The variable has been replaced with UUID_LENGTH. (Bug #30622139)
-
NDB Disk Data: Allocation of extents in tablespace data files is now performed in round-robin fashion among all data files used by the tablespace. This should provide more even distribution of data in cases where multiple storage devices are used for Disk Data storage. (Bug #30739018)
-
NDB Disk Data: Under certain conditions, checkpointing of Disk Data tables could not be completed, leading to an unplanned data node shutdown. (Bug #30728270)
-
NDB Disk Data: An uninitialized variable led to issues when performing Disk Data DDL operations following a restart of the cluster. (Bug #30592528)
-
MySQL NDB ClusterJ: When a Date value was read from an NDB cluster, ClusterJ sometimes extracted the wrong year value from the row. This was because the Utility class, when unpacking the Date value, wrongly extracted some extra bits for the year. This patch makes ClusterJ extract only the required bits. (Bug #30600320)
-
MySQL NDB ClusterJ: When the cluster's NdbOperation::AbortOption type had the value AO_IgnoreOnError and a read error occurred, ClusterJ took this to mean that the row was missing and returned null instead of an exception. This was because, with AO_IgnoreOnError, the execute() method always returns a success code after each transaction, and ClusterJ is supposed to check for any errors in any of the individual operations; however, read operations were not checked by ClusterJ in this case. With this patch, read operations are now checked for errors after query executions, so that a read error is reported as such. (Bug #30076276)
-
The fix for a previous issue in the MySQL Optimizer adversely affected engine condition pushdown for the NDB storage engine. (Bug #30375613)
References: This issue is a regression of: Bug #97552, Bug #30520749.
-
When restoring signed auto-increment columns, ndb_restore incorrectly handled negative values when determining the maximum value included in the data. (Bug #30928710)
-
On an SQL node which had been started with --ndbcluster, before any other nodes in the cluster were started, table creation succeeded while creating the ndbinfo schema, but creation of views did not, raising HA_ERR_NO_CONNECTION instead. (Bug #30846678)
-
Formerly (prior to NDB 7.6.4) an SPJ worker instance was activated for each fragment of the root table of the pushed join, but in NDB 7.6 and later, a single worker is activated for each data node and is responsible for all fragments on that data node.
Before this change was made, it was sufficient for each such worker to scan a fragment with parallelism equal to 1 for all SPJ workers to keep all local data manager threads busy. When the number of workers was reduced as a result of the change, the minimum parallelism should have been increased to equal the number of fragments per worker to maintain the degree of parallelism.
This fix ensures that this is now done. (Bug #30639503)
-
The ndb_metadata_sync system variable is set to true to trigger synchronization of metadata between the MySQL data dictionary and the NDB dictionary; when synchronization is complete, the variable is automatically reset to false to indicate that this has been done. One scenario involving the detection of a schema not present in the MySQL data dictionary but in use by the NDB dictionary sometimes led to ndb_metadata_sync being reset before all tables belonging to this schema were successfully synchronized. (Bug #30627292)
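A minimal JDBC sketch of the trigger-and-wait pattern this variable is meant for, assuming MySQL Connector/J is on the classpath and using placeholder connection details; it simply sets the global variable and polls until the server resets it to false.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class MetadataSyncTrigger {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                         "jdbc:mysql://127.0.0.1:3306/", "root", "");
                 Statement stmt = conn.createStatement()) {
                stmt.execute("SET GLOBAL ndb_metadata_sync = ON");   // trigger synchronization
                boolean syncing = true;
                while (syncing) {                                    // wait for the automatic reset
                    Thread.sleep(1000);
                    try (ResultSet rs = stmt.executeQuery(
                            "SELECT @@GLOBAL.ndb_metadata_sync")) {
                        rs.next();
                        syncing = rs.getBoolean(1);
                    }
                }
                System.out.println("metadata synchronization finished");
            }
        }
    }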
-
When using shared users and grants, all ALTER USER statements were distributed as snapshots, whether they contained plaintext passwords or not.
In addition, SHOW CREATE USER did not include resource limits (such as MAX_QUERIES_PER_HOUR) that were set to zero, which meant that these were not distributed among SQL nodes. (Bug #30600321)
-
Two buffers used for logging in QMGR were of insufficient size. (Bug #30598737)
References: See also: Bug #30593511.
-
Removed extraneous debugging output relating to SPJ from the node out logs. (Bug #30572315)
-
When performing an initial restart of an NDB Cluster, each MySQL Server attached to it as an SQL node recognizes the restart, reinstalls the ndb_schema table from the data dictionary, and then clears all NDB schema definitions created prior to the restart. Because the data dictionary was cleared only after ndb_schema was reinstalled, installation sometimes failed due to ndb_schema having the same table ID as one of the tables from before the restart was performed. This issue is fixed by ensuring that the data dictionary is cleared before the ndb_schema table is reinstalled. (Bug #30488610)
-
NDB sometimes made the assumption that the list of nodes containing index statistics was ordered, but this list is not always ordered in the same way on all nodes. This meant that in some cases NDB ignored a request to update index statistics, which could result in stale data in the index statistics tables. (Bug #30444982)
-
When the optimizer decides to presort a table into a temporary table before later tables are joined, the table to be sorted should not be part of a pushed join. Although logic was present in the abstract query plan interface to detect such query plans, it did not correctly detect all situations using a filesort into a temporary table. This is changed to check whether a filesort descriptor has been set up; if so, the table content is sorted into a temporary file as its first step of accessing the table, which greatly simplifies interpretation of the structure of the join. We now also detect when the table to be sorted is part of a pushed join, which should prevent future regressions in this interface. (Bug #30338585)
-
When a node ID allocation request failed with NotMaster temporary errors, the node ID allocation was always retried immediately, without regard to the cause of the error. This caused a very high rate of retries, whose effects could be observed as an excessive number of Alloc node id for node nnn failed log messages (on the order of 15,000 messages per second). (Bug #30293495)
-
For NDB tables having no explicit primary key, NdbReceiverBuffer could be allocated with too small a size. This was due to the fact that the attribute bitmap sent to NDB from the data nodes always includes the primary key. The extra space required for hidden primary keys is now taken into consideration in such cases. (Bug #30183466)
-
When translating an NDB table created using .frm files in a previous version of NDB Cluster and storing it as a table object in the MySQL data dictionary, it was possible for the table object to be committed even when a mismatch had been detected between the table indexes in the MySQL data dictionary and those in the same table's representation in the NDB dictionary. This issue did not occur for tables created in NDB 8.0, where it is not necessary to upgrade the table metadata in this fashion.
This problem is fixed by making sure that all such comparisons are actually performed before the table object is committed, regardless of whether the originating table was created with or without the use of .frm files to store its metadata. (Bug #29783638)
-
An error raised when obtaining cluster metadata caused a memory leak. (Bug #97737, Bug #30575163)