This is a new Beta development release, incorporating new features in the NDB storage engine and fixing recently discovered bugs in MySQL Cluster NDB 6.4.0.
Obtaining MySQL Cluster NDB 6.4.1
MySQL Cluster NDB 6.4.1 is a source-only release. You can obtain the source code from ftp://ftp.mysql.com/pub/mysql/download/cluster_telco/mysql-5.1.31-ndb-6.4.1/.
This Beta release incorporates all bugfixes and changes made in previous MySQL Cluster NDB 6.1, 6.2, 6.3, and 6.4 releases, as well as all bugfixes and feature changes which were added in mainline MySQL 5.1 through MySQL 5.1.31 (see Changes in MySQL 5.1.31 (2009-01-19)).
This Beta release, like any other pre-production release, should not be installed on production-level systems or systems with critical data. Please refer to our bug database at http://bugs.mysql.com/ for more details about the individual bugs fixed in this version.
Functionality Added or Changed
Formerly, when the management server failed to create a transporter for a data node connection, the full read timeout (60 seconds) elapsed before the data node was actually permitted to disconnect. Now in such cases the disconnection occurs immediately.
References: See also Bug #41713.
Formerly, when using MySQL Cluster Replication, records for “empty” epochs—that is, epochs in which no changes to NDBCLUSTER data or tables took place—were inserted into the ndb_binlog_index table on the slave even when --log-slave-updates was disabled. Beginning with MySQL Cluster NDB 6.2.16 and MySQL Cluster NDB 6.3.13 this was changed so that these “empty” epochs were no longer logged. However, it is now possible to re-enable the older behavior (and cause “empty” epochs to be logged) by using the --ndb-log-empty-epochs option. For more information, see Replication Slave Options and Variables.
References: See also Bug #37472.
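Since --ndb-log-empty-epochs is an ordinary server option, it can also be set in the server configuration file. A minimal sketch of doing so (the option name is taken from the note above; the file placement shown is the usual [mysqld] section, not something this release note specifies):

```ini
[mysqld]
# Re-enable logging of "empty" epochs to ndb_binlog_index,
# restoring the behavior of releases before
# MySQL Cluster NDB 6.2.16 / 6.3.13.
ndb-log-empty-epochs=1
```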
Cluster Replication: IPv6 networking is now supported between MySQL Cluster SQL nodes. This means that it is now possible to replicate between instances of MySQL Cluster using IPv6 addresses.
Currently, other MySQL Cluster processes (ndbd, ndbmtd, ndb_mgmd, and ndb_mgm) do not support IPv6 connections. This means that all MySQL Cluster data nodes, management servers, and management clients must connect to and be accessible from one another using IPv4. In addition, SQL nodes must use IPv4 to communicate with the cluster. There is also not yet any support in the NDB and MGM APIs for IPv6, which means that applications written using the MySQL Cluster APIs must make connections using IPv4. For more information, see Known Issues in MySQL Cluster Replication.
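Because only SQL-node-to-SQL-node replication traffic may use IPv6, applications built on the NDB or MGM APIs must still supply IPv4 addresses when connecting to the cluster. As an illustrative sketch of checking this constraint (the helper name is our own, not part of any MySQL API):

```python
import ipaddress

def usable_for_cluster_connection(addr: str) -> bool:
    """Return True if addr is an IPv4 literal, which data nodes,
    management nodes, and NDB/MGM API clients require; per these
    release notes, IPv6 works only between SQL nodes."""
    try:
        return isinstance(ipaddress.ip_address(addr), ipaddress.IPv4Address)
    except ValueError:
        return False  # hostnames and malformed strings: resolve separately

print(usable_for_cluster_connection("192.168.0.10"))  # True: usable by all node types
print(usable_for_cluster_connection("2001:db8::1"))   # False: SQL-node replication only
```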
Bugs Fixed
Trying to execute an ALTER ONLINE TABLE ... ADD COLUMN statement while inserting rows into the table caused mysqld to crash.
When a data node connects to the management server, the node sends its node ID and transporter type; the management server then verifies that there is a transporter set up for that node and that it is in the correct state, and then sends back an acknowledgment to the connecting node. If the transporter was not in the correct state, no reply was sent back to the connecting node, which would then hang until a read timeout occurred (60 seconds). Now, if the transporter is not in the correct state, the management server acknowledges this promptly, and the node immediately disconnects. (Bug #41713)
References: See also Bug #41965.
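The before-and-after behavior of this handshake can be illustrated with a toy model (all names and the structure below are our own; this is a sketch of the logic described above, not NDB source code):

```python
# Toy model of the data-node registration handshake described above.
# All names are illustrative; this is not NDB source code.

READ_TIMEOUT = 60  # seconds a connecting node waits before a read timeout

def mgm_server_reply(transporter_ok: bool, prompt_nack: bool):
    """Return the management server's reply, or None for silence.

    prompt_nack=False models the old behavior: a transporter in the
    wrong state produced no reply at all. prompt_nack=True models the
    fix: the server acknowledges the bad state immediately.
    """
    if transporter_ok:
        return "ack"
    return "nack" if prompt_nack else None

def data_node_wait_time(reply):
    """Seconds the connecting data node spends before giving up."""
    return 0 if reply is not None else READ_TIMEOUT

# Old behavior: silence, so the node hangs for the full read timeout.
old = data_node_wait_time(mgm_server_reply(False, prompt_nack=False))
# Fixed behavior: an immediate negative acknowledgment, prompt disconnect.
new = data_node_wait_time(mgm_server_reply(False, prompt_nack=True))
print(old, new)  # 60 0
```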
In the event that a MySQL Cluster backup failed due to file permissions issues, conflicting reports were issued in the management client. (Bug #34526)
When using ndbmtd, one thread could flood another thread, which would cause the system to stop with a job buffer full condition (currently implemented as an abort). This could be caused by committing or aborting a large transaction (50000 rows or more) on a single data node running ndbmtd. To prevent this from happening, the number of signals that can be accepted by the system threads is now calculated before executing them, and they are executed only if sufficient space is found. (Bug #42052)
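The fix amounts to an admission check: before a batch of signals is handed to a thread, the free space in its job buffer is compared against the batch size. A rough sketch of that idea (names, sizes, and structure are illustrative, not ndbmtd internals):

```python
# Illustrative admission check for a bounded inter-thread job buffer.
# Capacity and names are made up; ndbmtd's real implementation differs.

from collections import deque

class JobBuffer:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.queue = deque()

    def free_slots(self) -> int:
        return self.capacity - len(self.queue)

    def try_accept(self, signals) -> bool:
        """Accept the whole batch only if it fits; otherwise reject it
        so the sender can back off, instead of overflowing the buffer
        (the pre-fix code would hit a 'job buffer full' abort here)."""
        if len(signals) > self.free_slots():
            return False
        self.queue.extend(signals)
        return True

buf = JobBuffer(capacity=4)
print(buf.try_accept(["commit"] * 3))  # True: the batch fits
print(buf.try_accept(["commit"] * 3))  # False: would overflow, sender backs off
```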
A maximum of 11 TUP scans were permitted in parallel.
The management server could hang after attempting to halt it with the STOP command in the management client.
References: See also Bug #40922.
Using EXIT in the management client sometimes caused the client to hang.
If all data nodes were shut down, MySQL clients were unable to access NDBCLUSTER tables and data even after the data nodes were restarted, unless the MySQL clients themselves were restarted.
MySQL Cluster would not compile when using libwrap. This issue was known to occur only in MySQL Cluster NDB 6.4.0.