MySQL Cluster NDB 6.3.31 was withdrawn shortly after release, due to Bug #51027. Users should upgrade to MySQL Cluster NDB 6.3.31a, which fixes this issue.
This is a bugfix release, fixing recently discovered bugs in the previous MySQL Cluster NDB 6.3 release.
This release incorporates all bugfixes and changes made in previous MySQL Cluster releases, as well as all bugfixes and feature changes which were added in mainline MySQL 5.1 through MySQL 5.1.41 (see Changes in MySQL 5.1.41 (2009-11-05)).
Please refer to our bug database at http://bugs.mysql.com/ for more details about the individual bugs fixed in this version.
The maximum permitted value of the
system variable has been increased from 256 to 65536.
Cluster Replication: Because no timestamp is available for
delete operations, a delete using NDB$MAX() cannot actually be
resolved by comparing timestamp values. Because this is not
optimal for some use cases,
NDB$MAX_DELETE_WIN() has been added as a conflict
resolution function: if the “timestamp” column
value for a given row adding or updating an existing row coming
from the master is higher than that on the slave, it is applied
(as with NDB$MAX()); however, delete
operations are treated as always having the higher value.
See NDB$MAX_DELETE_WIN(column_name), for more information. (Bug #50650)
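Conflict resolution functions of this kind are selected per table through the mysql.ndb_replication table. The following sketch shows what enabling NDB$MAX_DELETE_WIN() might look like; the database, table, and column names are illustrative and not taken from this changelog, and binlog_type 7 (full row image, using updates) is assumed here.

```sql
-- Hypothetical setup: resolve conflicts on test.t1 using the
-- "timestamp" column X, with delete operations always winning.
-- server_id 0 applies the setting regardless of server ID;
-- binlog_type 7 (NBT_FULL_USE_UPDATE) is an assumption.
INSERT INTO mysql.ndb_replication
    (db, table_name, server_id, binlog_type, conflict_fn)
VALUES
    ('test', 't1', 0, 7, 'NDB$MAX_DELETE_WIN(X)');
```

The “timestamp” column need not be a SQL TIMESTAMP; any numeric column whose value increases with each update on both masters can serve.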
Cluster Replication: In circular replication, it was sometimes possible for an event to propagate such that it would be reapplied on all servers. This could occur when the originating server was removed from the replication circle and so could no longer act as the terminator of its own events, as normally happens in circular replication.
To prevent this from occurring, a new
IGNORE_SERVER_IDS option is introduced for the
CHANGE MASTER TO statement. This option
takes a list of replication server IDs; events having a server
ID which appears in this list are ignored and not applied. For
more information, see CHANGE MASTER TO Syntax.
In conjunction with the introduction of this option, the output
of SHOW SLAVE STATUS has two new fields. One of these displays
information about ignored servers; the other,
Master_Server_Id, displays the
server_id value reported by the master.
References: See also Bug #25998, Bug #27808.
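On a slave being reconfigured after a server leaves the replication circle, the new option might be used as follows (the server ID shown is illustrative):

```sql
-- Hypothetical example: server ID 3 has been removed from the
-- replication circle, so discard any of its events that are
-- still circulating.
STOP SLAVE;
CHANGE MASTER TO IGNORE_SERVER_IDS = (3);
START SLAVE;

-- Passing an empty list clears previously ignored server IDs:
-- CHANGE MASTER TO IGNORE_SERVER_IDS = ();
```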
greater than 1 with more than 31 ordered indexes caused node and
system restarts to fail.
Dropping unique indexes in parallel while they were in use could cause node and cluster failures. (Bug #50118)
When an attempt to lock data node memory, as directed by the
corresponding configuration parameter, failed, only the error
Failed to memlock pages... was returned. Now in such cases
the operating system's error code is also returned.
If a query on an
NDB table compared
a constant string value to a column, and the length of the
string was greater than that of the column, condition pushdown
did not work correctly. (The string was truncated to fit the
column length before being pushed down.) Now in such cases, the
condition is no longer pushed down.
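As an illustration (the table and column names here are hypothetical), a condition of the following shape is affected:

```sql
-- The column is CHAR(4), but the literal is 8 characters long.
-- Previously the literal was truncated to 'abcd' before being
-- pushed to the data nodes, which could return wrong results;
-- now the MySQL server evaluates such a condition itself rather
-- than pushing it down.
CREATE TABLE t1 (c CHAR(4)) ENGINE=NDBCLUSTER;
SELECT * FROM t1 WHERE c = 'abcdefgh';
```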
Performing intensive inserts and deletes in parallel with a high
scan load could cause data node crashes due to a failure in the
DBACC kernel block. This was because the check for when to
perform bucket splits or merges considered only the first
4 scans.
During Start Phases 1 and 2, the
command sometimes (falsely) returned
Connected for data nodes running
mysqld could sometimes crash during a commit while trying to handle NDB Error 4028 Node failure caused abort of transaction. (Bug #38577)
the stated memory was not allocated when the node was started,
but rather only when the memory was used by the data node
process for other reasons.
Trying to insert more rows than would fit into an
NDB table caused data nodes to crash. Now in
such situations, the insert fails gracefully with error 633
Table fragment hash index has reached maximum
When a crash occurs due to a problem in Disk Data code, the
currently active page list is printed to
stdout (that is, in one or more
files). One of these lists could contain an endless loop; this
caused a printout that was effectively never-ending. Now in such
cases, a maximum of 512 entries is printed from each list.
On Mac OS X or Windows, sending a
signal to the server or an asynchronous flush (triggered by
flush_time) caused the server to crash.
The ARCHIVE storage engine lost
records during a bulk insert.
When using the
SHOW TABLE STATUS displayed incorrect