
MySQL Cluster Development in MySQL Cluster NDB 7.2

The following improvements to MySQL Cluster have been made in MySQL Cluster NDB 7.2:

  • Based on MySQL Server 5.5. Previous MySQL Cluster release series, including MySQL Cluster NDB 7.1, used MySQL 5.1 as a base. Beginning with MySQL Cluster NDB 7.2.1, MySQL Cluster NDB 7.2 is based on MySQL Server 5.5, so that MySQL Cluster users can benefit from MySQL 5.5's improvements in scalability and performance monitoring. As with MySQL 5.5, MySQL Cluster NDB 7.2.1 and later use CMake for configuring and building from source in place of GNU Autotools (used in MySQL 5.1 and MySQL Cluster releases based on MySQL 5.1). For more information about changes and improvements in MySQL 5.5, see Section 1.4, “What Is New in MySQL 5.5”.

  • Conflict detection using GCI Reflection. MySQL Cluster Replication implements a new “primary wins” conflict detection and resolution mechanism. GCI Reflection applies in two-cluster circular active-active replication setups, tracking the order in which changes are applied on the MySQL Cluster designated as primary relative to changes originating on the other MySQL Cluster (referred to as the secondary). This relative ordering is used to determine whether changes originating on the secondary are concurrent with any changes that originate locally on the primary, and are therefore potentially in conflict. Two new conflict detection functions are added: when using NDB$EPOCH(), rows that are out of sync on the secondary are realigned with those on the primary; with NDB$EPOCH_TRANS(), this realignment is applied to entire transactions. For more information, see Section 18.6.11, “MySQL Cluster Replication Conflict Resolution”.
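
    As a brief sketch of how such a function might be selected (assuming the mysql.ndb_replication table layout described in Section 18.6.11, “MySQL Cluster Replication Conflict Resolution”; the database and table names mydb and t1 are placeholders), you could run the following on the primary:

        # Create the replication configuration table if it does not already exist
        CREATE TABLE IF NOT EXISTS mysql.ndb_replication (
            db          VARBINARY(63),
            table_name  VARBINARY(63),
            server_id   INT UNSIGNED,
            binlog_type INT UNSIGNED,
            conflict_fn VARBINARY(128),
            PRIMARY KEY USING HASH (db, table_name, server_id)
        )   ENGINE=NDB
        PARTITION BY KEY(db, table_name);

        # Realign out-of-sync rows on the secondary for mydb.t1;
        # use NDB$EPOCH_TRANS() instead to realign whole transactions
        INSERT INTO mysql.ndb_replication
            VALUES ('mydb', 't1', 0, NULL, 'NDB$EPOCH()');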

  • Version 2 binary log row events. A new format for binary log row events, known as Version 2 binary log row events, provides support for improvements in MySQL Cluster Replication conflict detection (see the previous item) and is intended to facilitate further improvements in MySQL Replication. You can cause a given mysqld to use Version 1 or Version 2 binary log row events with the --log-bin-use-v1-row-events option. For backward compatibility, Version 2 binary log row events are also available in MySQL Cluster NDB 7.0 (7.0.27 and later) and MySQL Cluster NDB 7.1 (7.1.16 and later). However, MySQL Cluster NDB 7.0 and MySQL Cluster NDB 7.1 continue to use Version 1 binary log row events as the default, whereas the default in MySQL Cluster NDB 7.2.1 and later is to use Version 2 row events for binary logging.
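
    As a quick check (a sketch, assuming that the option is also exposed as the log_bin_use_v1_row_events system variable), you can see which row event format a running mysqld writes:

        # OFF means this server writes Version 2 row events (the default in
        # MySQL Cluster NDB 7.2.1 and later); ON means it writes the older
        # Version 1 format
        SHOW GLOBAL VARIABLES LIKE 'log_bin_use_v1_row_events';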

  • Distribution of MySQL users and privileges. Automatic distribution of MySQL users and privileges across all SQL nodes in a given MySQL Cluster is now supported. To enable this support, you must first import an SQL script, share/mysql/ndb_dist_priv.sql, that is included with the MySQL Cluster NDB 7.2 distribution. This script creates several stored procedures which you can use to enable privilege distribution and perform related tasks, as shown in the sketch following this item.

    When a new MySQL Server joins a MySQL Cluster where privilege distribution is in effect, it also participates in the privilege distribution automatically.

    Once privilege distribution is enabled, all changes to the grant tables made on any mysqld attached to the cluster are immediately available on any other attached MySQL Servers. This is true whether the changes are made using CREATE USER, GRANT, or any of the other statements described elsewhere in this Manual (see Section 13.7.1, “Account Management Statements”). This includes privileges relating to stored routines and views; however, automatic distribution of the views or stored routines themselves is not currently supported.

    For more information, see Section 18.5.14, “Distributed MySQL Privileges for MySQL Cluster”.
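
    The following is a brief usage sketch, assuming the stored routine names created by ndb_dist_priv.sql as described in Section 18.5.14 (mysql_cluster_move_privileges() and mysql_cluster_privileges_are_distributed()); the statements are run in the mysql client on one SQL node:

        # Import the distribution script once (mysql client command;
        # the script is located under the MySQL installation directory)
        source share/mysql/ndb_dist_priv.sql

        # Convert the existing grant tables to NDB, enabling distribution
        CALL mysql.mysql_cluster_move_privileges();

        # Verify that the privilege tables are now distributed
        SELECT CONCAT('Conversion ',
                      IF(mysql.mysql_cluster_privileges_are_distributed(),
                         'succeeded', 'failed'),
                      '.') AS Result;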

  • Distributed pushed-down joins. Many joins can now be pushed down to the NDB kernel for processing on MySQL Cluster data nodes. Previously, a join was handled in MySQL Cluster by means of repeated accesses of NDB by the SQL node; however, when pushed-down joins are enabled, a pushable join is sent in its entirety to the data nodes, where it can be distributed among the data nodes and executed in parallel on multiple copies of the data, with a single, merged result being returned to mysqld. This can greatly reduce the number of round trips between an SQL node and the data nodes required to handle such a join, leading to significantly improved performance of join processing; a short example of enabling and verifying pushed-down joins follows this item.

    It is possible to determine when joins can be pushed down to the data nodes by examining the join with EXPLAIN. A number of new system status variables (Ndb_pushed_queries_defined, Ndb_pushed_queries_dropped, Ndb_pushed_queries_executed, and Ndb_pushed_reads) and additions to the counters table (in the ndbinfo information database) can also be helpful in determining when and how well joins are being pushed down.

    More information and examples are available in the description of the ndb_join_pushdown server system variable. See also the descriptions of the status variables referenced in the previous paragraph, as well as “The ndbinfo counters Table”.
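
    The following sketch (the tables t1 and t2 and the join condition are placeholders) shows one way to enable join pushdown for the current session, inspect a join with EXPLAIN, and then read the status variables listed above:

        # Pushdown of eligible joins is controlled by ndb_join_pushdown
        # (ON by default in MySQL Cluster NDB 7.2)
        SET ndb_join_pushdown = ON;

        # For a pushed join, the Extra column of the EXPLAIN output
        # indicates that child tables take part in a pushed join
        EXPLAIN
            SELECT *
            FROM   t1
            JOIN   t2 ON t2.a = t1.b;

        # Counters showing how many queries were defined as pushed,
        # dropped back to normal execution, or executed as pushed joins
        SHOW GLOBAL STATUS LIKE 'Ndb_pushed%';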

  • Improved default values for data node configuration parameters. To provide more resilience against environmental issues, to handle some potential failure scenarios better, and to perform more reliably with the increased memory and other resource requirements brought about by recent improvements in join handling by NDB, the default values for a number of MySQL Cluster data node configuration parameters have been changed. The parameters and changes are described in the following list:

    In addition, the value computed for MaxNoOfLocalScans when this parameter is not set in config.ini has been increased by a factor of 4.

  • Fail-fast data nodes. Beginning with MySQL Cluster NDB 7.2.1, data nodes handle corrupted tuples in a fail-fast manner by default. This is a change from previous versions of MySQL Cluster, where this behavior had to be enabled explicitly using the CrashOnCorruptedTuple configuration parameter. In MySQL Cluster NDB 7.2.1 and later, this parameter is enabled by default and must be explicitly disabled, in which case data nodes merely log a warning whenever they detect a corrupted tuple.

  • Memcache API support (ndbmemcache). The Memcached server is a distributed in-memory caching server that uses a simple text-based protocol. It is often employed with key-value stores. The Memcache API for MySQL Cluster, available beginning with MySQL Cluster NDB 7.2.2, is implemented as a loadable storage engine for memcached version 1.6 and later. This API can be used to access a persistent MySQL Cluster data store employing the memcache protocol. It is also possible for the memcached server to provide a strictly defined interface to existing MySQL Cluster tables.

    Each memcache server can both cache data locally and access data stored in MySQL Cluster directly. Caching policies are configurable. For more information, see ndbmemcache—Memcache API for MySQL Cluster, in the MySQL Cluster API Developers Guide.

  • Rows per partition limit removed. Previously it was possible to store a maximum of 46137488 rows in a single MySQL Cluster partition—that is, per data node. Beginning with MySQL Cluster NDB 7.2.9, this limitation has been lifted, and there is no longer any practical upper limit to this number. (Bug #13844405, Bug #14000373)

MySQL Cluster NDB 7.2 is also supported by MySQL Cluster Manager, which provides an advanced command-line interface that can simplify many complex MySQL Cluster management tasks. See MySQL™ Cluster Manager 1.3.6 User Manual, for more information.
