MySQL Cluster NDB 7.1.21 is a new release of MySQL Cluster, incorporating new features in the NDB storage engine and fixing recently discovered bugs in previous MySQL Cluster NDB 7.1 releases.
Obtaining MySQL Cluster NDB 7.1. The latest MySQL Cluster NDB 7.1 binaries for supported platforms can be obtained from http://dev.mysql.com/downloads/cluster/. Source code for the latest MySQL Cluster NDB 7.1 release can be obtained from the same location. You can also access the MySQL Cluster NDB 7.1 development source tree at https://code.launchpad.net/~mysql/mysql-server/mysql-cluster-7.1.
This release also incorporates all bugfixes and changes made in previous MySQL Cluster releases, as well as all bugfixes and feature changes which were added in mainline MySQL 5.1 through MySQL 5.1.61 (see Changes in MySQL 5.1.61 (2012-01-10)).
Important Change: The ALTER ONLINE TABLE ... REORGANIZE PARTITION statement can be used to create new table partitions after new empty nodes have been added to a MySQL Cluster. Usually, the number of partitions to create is determined automatically, such that, if no new partitions are required, none are created. This behavior can be overridden by creating the original table with the MAX_ROWS option, which indicates that extra partitions should be created to store a large number of rows. In that case, however, ALTER ONLINE TABLE ... REORGANIZE PARTITION simply uses the MAX_ROWS value specified in the original CREATE TABLE statement to determine the number of partitions required; since this value remains constant, so does the number of partitions, and thus no new ones are created. This means that the table is not rebalanced, and the new data nodes remain empty.
To solve this problem, support has been added for ALTER ONLINE TABLE ... MAX_ROWS=newvalue, where newvalue is greater than the value used with MAX_ROWS in the original CREATE TABLE statement. This larger MAX_ROWS value implies that more partitions are required; these are allocated on the new data nodes, which restores the balanced distribution of the table data.
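As a sketch of how this rebalancing might be carried out (the table name, column definitions, and MAX_ROWS values here are illustrative, not taken from the original note):

```sql
-- Original table created with MAX_ROWS, causing extra partitions
-- to be created for a large expected row count
CREATE TABLE t1 (
    id INT NOT NULL PRIMARY KEY,
    data VARCHAR(255)
) ENGINE=NDB
  MAX_ROWS=100000000;

-- After new data nodes have been added to the cluster, raising
-- MAX_ROWS implies that more partitions are required; these are
-- allocated on the new data nodes, rebalancing the table data
ALTER ONLINE TABLE t1 MAX_ROWS=200000000;
```

The key point is that the new MAX_ROWS value must exceed the one given in the original CREATE TABLE statement; an unchanged value leaves the partition count, and therefore the data distribution, unchanged.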
ALTER ONLINE TABLE failed when a DEFAULT option was used. (Bug #13830980)
In some cases, a restarting data node spent a very long time in Start Phase 101, during which API nodes must connect to the starting node (using NdbEventOperation), because the API nodes attempting to connect failed in a live-lock scenario. This connection process uses a handshake during which a small number of messages are exchanged, with a timeout used to detect failures during the handshake.
Prior to this fix, the timeout was set such that, if one API node encountered it, all other connecting nodes did so as well. The fix also decreases this timeout. This issue (and the effects of the fix) are most likely to be observed in relatively large configurations having 10 or more data nodes and 200 or more API nodes. (Bug #13825163)
ndbmtd failed to restart when the size of a table definition exceeded 32K.
(The size of a table definition is dependent upon a number of factors, but in general the 32K limit is encountered when a table has 250 to 300 columns.) (Bug #13824773)
An initial start using ndbmtd could sometimes hang. This was due to a state that arose when several threads tried to flush a socket buffer to a remote node: to minimize flushing of socket buffers, only one thread actually performs the send, on behalf of all threads. However, it was possible in certain cases for data to remain waiting in the socket buffer with no thread ever being chosen to perform the send. (Bug #13809781)
When trying to use ndb_size.pl to connect to a MySQL server running on a nonstandard port, the port argument was ignored. (Bug #13364905, Bug #62635)
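With this fix, the port is honored; a hypothetical invocation might look like the following (the database name, host, and user shown are illustrative, and the host:port form of the --hostname option is an assumption about this version's ndb_size.pl):

```shell
# Connect to a MySQL server listening on port 3307 rather than the
# default 3306 before reporting NDB storage requirements
ndb_size.pl --database=test --hostname=dbhost:3307 --user=root
```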
Cluster Replication: Error handling in conflict detection and resolution has been improved to include errors generated for reasons other than operation execution errors, and to distinguish better between permanent errors and transient errors. Transactions failing due to transient problems are now retried rather than leading to SQL node shutdown as occurred in some cases. (Bug #13428909)