A number of limitations exist in MySQL Cluster with regard to the handling of transactions. These include the following:
Transaction isolation level. The
NDBCLUSTER storage engine
supports only the READ
COMMITTED transaction isolation level.
(InnoDB, for example, supports
READ COMMITTED, REPEATABLE READ, and
other isolation levels.) (See
Section 7.3.4, “MySQL Cluster Backup Troubleshooting”,
for information on how this can affect backing up and
restoring Cluster databases.)
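As an illustration (not taken from the original text, and using a hypothetical table t), a session can state this level explicitly; for NDB tables the setting below is effectively a no-op, since READ COMMITTED is the only level the engine honors:

```sql
-- READ COMMITTED is the only isolation level NDBCLUSTER supports,
-- so setting it explicitly merely documents the behavior.
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;

START TRANSACTION;
SELECT * FROM t WHERE a = 1;  -- sees the latest committed row version
COMMIT;
```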
TEXT columns.
NDBCLUSTER stores only part
of a column value that uses any of MySQL's
TEXT data types in the
table visible to MySQL; the remainder of the
TEXT is stored in a
separate internal table that is not accessible to MySQL.
This gives rise to two related issues of which you should
be aware whenever executing
SELECT statements on tables
that contain columns of these types:
SELECT from a
MySQL Cluster table: If the
SELECT includes a
TEXT column, the
transaction isolation level is converted to a read with
read lock. This is done to guarantee consistency.
Prior to MySQL Cluster NDB 7.0.12, for any
SELECT which used a
primary key lookup or unique key lookup to retrieve any
columns that used any of the
TEXT data types and that
was executed within a transaction, a shared read lock
was held on the table for the duration of the
transaction—that is, until the transaction was
either committed or aborted.
In MySQL Cluster NDB 7.0.12 and later, for primary key
lookups, the lock is released as soon as all
TEXT data has been read.
(Bug #49190) However, for unique key lookups, the shared
lock continues to be held for the lifetime of the
transaction.
For example, consider the table
t defined by the following
CREATE TABLE statement:

CREATE TABLE t (
    a INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    b INT NOT NULL,
    c INT NOT NULL,
    d TEXT,
    INDEX i(b),
    UNIQUE KEY u(c)
) ENGINE = NDB;
Either of the following queries on
t
causes a shared read lock, because the first query uses
a primary key lookup and the second uses a unique key
lookup:

SELECT * FROM t WHERE a = 1;
SELECT * FROM t WHERE c = 1;
However, none of the four queries shown here causes a shared read lock:

SELECT * FROM t WHERE b = 1;
SELECT * FROM t WHERE d = '1';
SELECT * FROM t;
SELECT b,c FROM t WHERE a = 1;

This is because, of these four queries, the first uses
an index scan, the second and third use table scans, and
the fourth, while using a primary key lookup, does not
retrieve the value of any
TEXT columns.
You can help minimize issues with shared read locks by
avoiding queries that use unique key lookups (or primary
key lookups in MySQL Cluster NDB 7.0.11 and earlier) to
retrieve
TEXT columns, or, in
cases where such queries are not avoidable, by
committing transactions as soon as possible afterward.
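A minimal sketch of the second recommendation, using the table t defined earlier; the comments describe the locking behavior stated above:

```sql
-- The unique key lookup on c retrieves the TEXT column d, which
-- acquires a shared read lock held until the transaction ends.
START TRANSACTION;
SELECT * FROM t WHERE c = 1;
COMMIT;   -- commit promptly to release the shared read lock

-- Any follow-on work belongs in a separate, later transaction.
```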
Rollbacks. There are no partial transactions, and no partial rollbacks of transactions. A duplicate key or similar error causes the entire transaction to be rolled back.
This behavior differs from that of other transactional
storage engines such as
InnoDB
that may roll back individual statements.
Transactions and memory usage. As noted elsewhere in this chapter, MySQL Cluster does not handle large transactions well; it is better to perform a number of small transactions with a few operations each than to attempt a single large transaction containing a great many operations. Among other considerations, large transactions require very large amounts of memory. Because of this, the transactional behavior of a number of MySQL statements is affected as described in the following list:
DELETE FROM (even with no
WHERE clause) is
transactional. For tables containing a great many rows,
you may find that performance is improved by using
several DELETE FROM ... LIMIT ...
statements to “chunk” the delete operation.
If your objective is to empty the table, then you may
wish to use TRUNCATE TABLE instead.
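The chunking approach can be sketched as follows (table t is the hypothetical NDB table from the earlier example; the batch size of 1000 is arbitrary):

```sql
-- Chunked delete: each statement is its own small transaction.
-- Repeat until ROW_COUNT() reports that no rows were affected.
DELETE FROM t WHERE b = 1 LIMIT 1000;
SELECT ROW_COUNT();   -- 0 means no matching rows remain

-- To simply empty the table, TRUNCATE TABLE avoids building
-- one very large transaction in the first place.
TRUNCATE TABLE t;
```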
When executing a LOAD DATA
INFILE statement, the
NDB engine performs
commits at irregular intervals that enable better
utilization of the communication network. It is not
possible to know ahead of time when such commits take
place.
LOAD DATA FROM MASTER is
not supported in MySQL Cluster.
Transactions and the COUNT() function.
When using MySQL Cluster Replication, it is not possible
to guarantee the transactional consistency of the
COUNT() function on the slave. In other
words, when performing on the master a series of
statements (INSERT,
DELETE, or both) that
changes the number of rows in a table within a single
transaction, executing
SELECT COUNT(*) FROM
table queries on the
slave may yield intermediate results. This is due to the
fact that SELECT COUNT(...) may perform
dirty reads, and is not a bug in the
NDB storage engine. (See Bug
#31321 for more information.)
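The effect can be illustrated with the table t from the earlier example (the row values are arbitrary; this scenario is a sketch, not a reproduction of Bug #31321):

```sql
-- On the master: a single transaction that adds many rows to t.
START TRANSACTION;
INSERT INTO t (b, c, d) VALUES (1, 1, 'x');
-- ... many more INSERT statements ...
COMMIT;

-- On the slave, run concurrently: because SELECT COUNT(...) may
-- perform dirty reads, this can return a value between the row
-- counts before and after the master's transaction.
SELECT COUNT(*) FROM t;
```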