In this section, we discuss changes in the implementation of NDB Cluster in MySQL NDB Cluster 7.2, as compared to NDB Cluster 7.1 and earlier releases. Changes and features most likely to be of interest are shown in the following list:
NDB Cluster 7.2 is based on MySQL 5.5. For more information about new features in MySQL Server 5.5, see Section 1.4, “What Is New in MySQL 5.5”.
Version 2 binary log row events, to provide support for improvements in NDB Cluster Replication conflict detection (see next item). A given mysqld can be made to use Version 1 or Version 2 binary logging row events with the --log-bin-use-v1-row-events option.
Two new “primary wins” conflict detection and resolution functions NDB$EPOCH() and NDB$EPOCH_TRANS(), for use in replication setups with 2 NDB Clusters. For more information, see Section 18.6, “NDB Cluster Replication”.
Distribution of MySQL users and privileges across NDB Cluster SQL nodes is now supported—see Section 18.5.14, “Distributed MySQL Privileges for NDB Cluster”.
Improved support for distributed pushed-down joins, which greatly improve performance for many joins that can be executed in parallel on the data nodes.
Support for the Memcache API using the loadable ndbmemcache storage engine. See ndbmemcache—Memcache API for NDB Cluster.
This section contains information about NDB Cluster 7.2 releases through 5.5.62-ndb-7.2.36, which is a previous GA release but still supported, as is NDB Cluster 7.3. NDB Cluster 7.1, NDB Cluster 7.0, and NDB Cluster 6.3 are previous GA release series which are no longer supported. We recommend that new deployments use NDB Cluster 7.4 or NDB Cluster 7.5, both of which are available as General Availability releases. NDB Cluster 7.6, currently under development, is available for evaluation and testing as a Developer Preview release. For information about NDB Cluster 7.1 and previous releases, see the MySQL 5.1 Reference Manual.
The following improvements to NDB Cluster have been made in NDB Cluster 7.2:
Based on MySQL Server 5.5. Previous NDB Cluster release series, including NDB Cluster 7.1, used MySQL 5.1 as a base. Beginning with NDB 7.2.1, NDB Cluster 7.2 is based on MySQL Server 5.5, so that NDB Cluster users can benefit from MySQL 5.5's improvements in scalability and performance monitoring. As with MySQL 5.5, NDB 7.2.1 and later use CMake for configuring and building from source in place of GNU Autotools (used in MySQL 5.1 and NDB Cluster releases based on MySQL 5.1). For more information about changes and improvements in MySQL 5.5, see Section 1.4, “What Is New in MySQL 5.5”.
Conflict detection using GCI Reflection. NDB Cluster Replication implements a new “primary wins” conflict detection and resolution mechanism. GCI Reflection applies in two-cluster circular (“active-active”) replication setups, tracking the order in which changes are applied on the NDB Cluster designated as primary relative to changes originating on the other NDB Cluster (referred to as the secondary). This relative ordering is used to determine whether changes originating on the slave are concurrent with any changes that originate locally, and are therefore potentially in conflict. Two new conflict detection functions are added: when using NDB$EPOCH(), rows that are out of sync on the secondary are realigned with those on the primary; with NDB$EPOCH_TRANS(), this realignment is applied to transactions. For more information, see Section 18.6.11, “NDB Cluster Replication Conflict Resolution”.
Version 2 binary log row events. A new format for binary log row events, known as Version 2 binary log row events, provides support for improvements in NDB Cluster Replication conflict detection (see previous item) and is intended to facilitate further improvements in MySQL Replication. You can cause a given mysqld to use Version 1 or Version 2 binary logging row events with the --log-bin-use-v1-row-events option. For backward compatibility, Version 2 binary log row events are also available in NDB Cluster 7.0 (7.0.27 and later) and NDB Cluster 7.1 (7.1.16 and later). However, NDB Cluster 7.0 and NDB Cluster 7.1 continue to use Version 1 binary log row events as the default, whereas the default in NDB 7.2.1 and later is to use Version 2 row events for binary logging.
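As an illustration, the option can be set on the command line or in an option file; the following my.cnf fragment (a sketch, with hypothetical log file names) keeps a mysqld on Version 1 row events, for example when it replicates to servers that do not understand the Version 2 format:

```ini
# my.cnf fragment (illustrative): force Version 1 binary log row events
[mysqld]
log-bin=master-bin
log-bin-use-v1-row-events=1
```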
Distribution of MySQL users and privileges. Automatic distribution of MySQL users and privileges across all SQL nodes in a given NDB Cluster is now supported. To enable this support, you must first import an SQL script, share/mysql/ndb_dist_priv.sql, that is included with the NDB Cluster 7.2 distribution. This script creates several stored procedures which you can use to enable privilege distribution and perform related tasks.
When a new MySQL Server joins an NDB Cluster where privilege distribution is in effect, it also participates in the privilege distribution automatically.
Once privilege distribution is enabled, all changes to the grant tables made on any mysqld attached to the cluster are immediately available on any other attached MySQL Servers. This is true whether the changes are made using GRANT or any of the other statements described elsewhere in this Manual (see Section 13.7.1, “Account Management Statements”). This includes privileges relating to stored routines and views; however, automatic distribution of the views or stored routines themselves is not currently supported.
For more information, see Section 18.5.14, “Distributed MySQL Privileges for NDB Cluster”.
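A minimal setup sketch follows, assuming shell access to an SQL node; the stored procedure names shown are those the script is expected to create, but you should consult ndb_dist_priv.sql in your own distribution for the exact names:

```sql
-- First, from the shell, import the script that creates the procedures:
--   mysql -u root -p < share/mysql/ndb_dist_priv.sql

-- Convert the privilege tables so that they are shared by all SQL nodes
CALL mysql.mysql_cluster_move_privileges();

-- Verify that privilege distribution is now in effect
SELECT mysql.mysql_cluster_privileges_are_distributed();
```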
Distributed pushed-down joins. Many joins can now be pushed down to the NDB kernel for processing on NDB Cluster data nodes. Previously, a join was handled in NDB Cluster by means of repeated accesses of NDB by the SQL node; however, when pushed-down joins are enabled, a pushable join is sent in its entirety to the data nodes, where it can be distributed among the data nodes and executed in parallel on multiple copies of the data, with a single, merged result being returned to mysqld. This can greatly reduce the number of round trips between an SQL node and the data nodes required to handle such a join, significantly improving the performance of join processing.
It is possible to determine when joins can be pushed down to the data nodes by examining the join with EXPLAIN. A number of new system status variables (such as Ndb_pushed_reads) and additions to the counters table (in the ndbinfo information database) can also be helpful in determining when and how well joins are being pushed down.
More information and examples are available in the description of the ndb_join_pushdown server system variable. See also the description of the status variables referenced in the previous paragraph, as well as the description of the ndbinfo counters table.
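One possible session for checking whether a given join is pushed down might look like the following sketch (the table names t1 and t2 are hypothetical NDB tables):

```sql
-- Pushed-down joins are controlled by ndb_join_pushdown (enabled by default)
SET GLOBAL ndb_join_pushdown = ON;

-- EXPLAIN indicates a pushed join in its Extra column
EXPLAIN
SELECT *
  FROM t1
  JOIN t2 ON t2.a = t1.a;

-- Status variables report activity relating to pushed joins
SHOW GLOBAL STATUS LIKE 'Ndb_pushed%';
```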
Improved default values for data node configuration parameters. In order to provide more resiliency to environmental issues and better handling of some potential failure scenarios, and to perform more reliably with increases in memory and other resource requirements brought about by recent improvements in join handling by NDB, the default values for a number of NDB Cluster data node configuration parameters have been changed. The parameters and changes are described in the following list:
HeartbeatIntervalDbDb: Default increased from 1500 ms to 5000 ms.
ArbitrationTimeout: Default increased from 3000 ms to 7500 ms.
TimeBetweenEpochsTimeout: Now effectively disabled by default (default changed from 4000 ms to 0).
SharedGlobalMemory: Default increased from 20 MB to 128 MB.
MaxParallelScansPerFragment: Default increased from 32 to 256.
In addition, beginning with NDB 7.2.10, the value computed for MaxNoOfLocalScans when this parameter is not set in config.ini has been increased by a factor of 4.
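For reference, the new defaults correspond to explicitly setting the following in the [ndbd default] section of config.ini (a sketch; these lines can simply be omitted to accept the defaults):

```ini
[ndbd default]
HeartbeatIntervalDbDb=5000       # ms; was 1500 in earlier releases
ArbitrationTimeout=7500          # ms; was 3000
TimeBetweenEpochsTimeout=0       # 0 disables the timeout; was 4000
SharedGlobalMemory=128M          # was 20M
MaxParallelScansPerFragment=256  # was 32
```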
Fail-fast data nodes. Beginning with NDB 7.2.1, data nodes handle corrupted tuples in a fail-fast manner by default. This is a change from previous versions of NDB Cluster, where this behavior had to be enabled explicitly by enabling the CrashOnCorruptedTuple configuration parameter. In NDB 7.2.1 and later, this parameter is enabled by default and must be explicitly disabled; when it is disabled, data nodes merely log a warning whenever they detect a corrupted tuple.
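To restore the earlier warning-only behavior, the parameter can be disabled explicitly in config.ini:

```ini
[ndbd default]
CrashOnCorruptedTuple=0   # log a warning instead of failing fast
```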
Memcache API support (ndbmemcache). The Memcached server is a distributed in-memory caching server that uses a simple text-based protocol. It is often employed with key-value stores. The Memcache API for NDB Cluster, available beginning with NDB 7.2.2, is implemented as a loadable storage engine for memcached version 1.6 and later. This API can be used to access a persistent NDB Cluster data store employing the memcache protocol. It is also possible for the memcached server to provide a strictly defined interface to existing NDB Cluster tables.
Each memcache server can both cache data locally and access data stored in NDB Cluster directly. Caching policies are configurable. For more information, see ndbmemcache—Memcache API for NDB Cluster, in the NDB Cluster API Developers Guide.
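As an illustration of the text-based protocol involved, a client session against a memcached server might look like the following transcript (the key and value shown are hypothetical; the length argument to set is the value's size in bytes):

```
set mykey 0 0 5
hello
STORED
get mykey
VALUE mykey 0 5
hello
END
```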
Rows per partition limit removed. Previously it was possible to store a maximum of 46137488 rows in a single NDB Cluster partition—that is, per data node. Beginning with NDB 7.2.9, this limitation has been lifted, and there is no longer any practical upper limit to this number. (Bug #13844405, Bug #14000373)
NDB Cluster 7.2 is also supported by MySQL Cluster Manager, which provides an advanced command-line interface that can simplify many complex NDB Cluster management tasks. See MySQL™ Cluster Manager 1.3.6 User Manual, for more information.