MySQL Cluster NDB 7.1.21 is a new release of MySQL Cluster,
incorporating new features in the
NDBCLUSTER storage engine and
fixing recently discovered bugs in previous MySQL Cluster NDB 7.1 releases.
Obtaining MySQL Cluster NDB 7.1. The latest MySQL Cluster NDB 7.1 binaries for supported platforms can be obtained from http://dev.mysql.com/downloads/cluster/. Source code for the latest MySQL Cluster NDB 7.1 release can be obtained from the same location. You can also access the MySQL Cluster NDB 7.1 development source tree at https://code.launchpad.net/~mysql/mysql-server/mysql-cluster-7.1.
This release also incorporates all bugfixes and changes made in previous MySQL Cluster releases, as well as all bugfixes and feature changes which were added in mainline MySQL 5.1 through MySQL 5.1.61 (see Changes in MySQL 5.1.61 (2012-01-10)).
The ALTER ONLINE TABLE ... REORGANIZE PARTITION statement can be used
to create new table partitions after new empty nodes have been
added to a MySQL Cluster. Usually, the number of partitions to
create is determined automatically, such that, if no new
partitions are required, then none are created. This behavior
can be overridden by creating the original table using the
MAX_ROWS option, which indicates that extra
partitions should be created to store a large number of rows.
However, in this case
ALTER ONLINE TABLE ... REORGANIZE
PARTITION simply uses the
MAX_ROWS value specified in the original
CREATE TABLE statement to determine the number of partitions
required; since this value remains constant, so does the number
of partitions, and so no new ones are created. This means that
the table is not rebalanced, and the new data nodes remain empty.
To solve this problem, support is added for
ALTER ONLINE TABLE ... MAX_ROWS = newvalue, where
newvalue is greater than the value specified for
MAX_ROWS in the original
CREATE TABLE statement. This larger
MAX_ROWS value implies that more partitions
are required; these are allocated on the new data nodes, which
restores the balanced distribution of the table data.
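For example, a table created with a MAX_ROWS setting can be rebalanced after adding data nodes as shown below. The table and column names, as well as the specific MAX_ROWS values, are illustrative only; the general statement forms follow those cited in this entry.

```sql
-- Table originally created with extra partitions reserved via MAX_ROWS
CREATE TABLE t1 (
    id BIGINT NOT NULL PRIMARY KEY,
    payload VARCHAR(255)
) ENGINE=NDBCLUSTER MAX_ROWS=100000000;

-- Redistribute existing partitions after new data nodes join;
-- no new partitions are created if MAX_ROWS is unchanged:
ALTER ONLINE TABLE t1 REORGANIZE PARTITION;

-- With this fix, specifying a larger MAX_ROWS causes additional
-- partitions to be created and allocated on the new data nodes:
ALTER ONLINE TABLE t1 MAX_ROWS=200000000;
```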
In some cases, restarting data nodes spent a very long time in
Start Phase 101, in which API nodes must connect to the starting
node; this occurred when the API nodes trying to connect failed
in a live-lock scenario. This connection process uses a handshake during which
a small number of messages are exchanged, with a timeout used to
detect failures during the handshake.
Prior to this fix, this timeout was set such that, if one API node encountered the timeout, all other connecting API nodes encountered it as well. The fix also decreases this timeout. This issue (and the effects of the fix) are most likely to be observed on relatively large configurations having 10 or more data nodes and 200 or more API nodes. (Bug #13825163)
An initial start using ndbmtd could sometimes hang. This was due to a state which occurred when several threads tried to flush a socket buffer to a remote node. In such cases, to minimize flushing of socket buffers, only one thread actually performs the send, on behalf of all threads. However, it was possible in certain cases for there to be data in the socket buffer waiting to be sent with no thread ever being chosen to perform the send. (Bug #13809781)
ndbmtd failed to restart when the size of a table definition exceeded 32K. (The size of a table definition depends on a number of factors, but in general the 32K limit is encountered when a table has 250 to 300 columns.) (Bug #13824773)
When trying to use ndb_size.pl to connect to a MySQL server running on a nonstandard port, the port argument was ignored. (Bug #13364905, Bug #62635)
Cluster Replication: Error handling in conflict detection and resolution has been improved to include errors generated for reasons other than operation execution errors, and to distinguish better between permanent errors and transient errors. Transactions failing due to transient problems are now retried rather than leading to SQL node shutdown as occurred in some cases. (Bug #13428909)