
Changes in MySQL Cluster NDB 6.2.19 (5.1.51-ndb-6.2.19) (Not released)

This is a bugfix release, fixing recently discovered bugs in the previous MySQL Cluster NDB 6.2 release.

This release incorporates all bugfixes and changes made in previous MySQL Cluster releases, as well as all bugfixes and feature changes which were added in mainline MySQL 5.1 through MySQL 5.1.51 (see Changes in MySQL 5.1.51 (2010-09-10)).

Note

Please refer to our bug database at http://bugs.mysql.com/ for more details about the individual bugs fixed in this version.

Functionality Added or Changed

  • Important Change: More finely grained control over restart-on-failure behavior is provided with two new data node configuration parameters MaxStartFailRetries and StartFailRetryDelay. MaxStartFailRetries limits the total number of retries made before giving up on starting the data node; StartFailRetryDelay sets the number of seconds between retry attempts.

    These parameters are used only if StopOnError is set to 0.

    For more information, see Defining MySQL Cluster Data Nodes. (Bug #54341)
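
    For example, a minimal config.ini sketch (the values shown are illustrative, not recommendations) that allows a failed data node three start attempts, 30 seconds apart, before giving up:

    [ndbd default]
    StopOnError=0
    MaxStartFailRetries=3
    StartFailRetryDelay=30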

  • Important Change: The Id configuration parameter used with MySQL Cluster management, data, and API nodes (including SQL nodes) is now deprecated, and the NodeId parameter (long available as a synonym for Id when configuring these types of nodes) should be used instead. Id continues to be supported for reasons of backward compatibility, but now generates a warning when used with these types of nodes, and is subject to removal in a future release of MySQL Cluster.

    This change affects the name of the configuration parameter only, establishing a clear preference for NodeId over Id in the [mgmd], [ndbd], [mysqld], and [api] sections of the MySQL Cluster global configuration (config.ini) file. The behavior of unique identifiers for management, data, and SQL and API nodes in MySQL Cluster has not otherwise been altered.

    The Id parameter as used in the [computer] section of the MySQL Cluster global configuration file is not affected by this change.
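
    For example, an [ndbd] section of config.ini is now preferably written as shown here (the host name is a placeholder), rather than with Id=2:

    [ndbd]
    NodeId=2
    HostName=datahost1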

  • Cluster API: It is now possible to determine, using the ndb_desc utility or the NDB API, which data nodes contain replicas of which partitions. For ndb_desc, a new --extra-node-info option is added to cause this information to be included in its output. A new method Table::getFragmentNodes() is added to the NDB API for obtaining this information programmatically. (Bug #51184)
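
    For example, assuming a table t1 in database test and a management node on host mgmhost (both hypothetical), the new mapping information might be obtained as shown here; --extra-node-info is intended to be used together with --extra-partition-info:

    shell> ndb_desc -c mgmhost -d test t1 --extra-partition-info --extra-node-info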

  • On Solaris platforms, the MySQL Cluster management server and NDB API applications now use CLOCK_REALTIME as the default clock. (Bug #46183)

  • Cluster Replication: In circular replication, it was sometimes possible for an event to propagate such that it would be reapplied on all servers. This could occur when the originating server was removed from the replication circle and so could no longer act as the terminator of its own events, as normally happens in circular replication.

    To prevent this from occurring, a new IGNORE_SERVER_IDS option is introduced for the CHANGE MASTER TO statement. This option takes a list of replication server IDs; events having a server ID that appears in this list are ignored and not applied (see the example following this item). For more information, see CHANGE MASTER TO Syntax.

    In conjunction with the introduction of IGNORE_SERVER_IDS, SHOW SLAVE STATUS has two new fields. Replicate_Ignore_Server_Ids displays information about ignored servers. Master_Server_Id displays the server_id value from the master. (Bug #47037)

    References: See also: Bug #25998, Bug #27808.
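
    A minimal sketch of using the new option on a slave, assuming that events originating from server ID 3 (an illustrative value) should be discarded:

    STOP SLAVE;
    CHANGE MASTER TO IGNORE_SERVER_IDS = (3);
    START SLAVE;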

Bugs Fixed

  • Important Change: The --with-ndb-port-base option for configure did not function correctly, and has been deprecated. Attempting to use this option produces the warning Ignoring deprecated option --with-ndb-port-base.

    Beginning with MySQL Cluster NDB 7.1.0, the deprecation warning itself is removed, and the --with-ndb-port-base option is simply handled as an unknown and invalid option if you try to use it. (Bug #47941)

    References: See also: Bug #38502.

  • Important Change; Cluster Replication: In a MySQL Cluster acting as a replication slave and having multiple SQL nodes, only the SQL node receiving events directly from the master recorded DDL statements in its binary log, and then only if this SQL node had binary logging enabled; other SQL nodes in the slave cluster failed to log DDL statements, regardless of their individual --log-bin settings.

    The fix for this issue aligns binary logging of DDL statements with that of DML statements. In particular, you should take note of the following:

    • DDL and DML statements on the master cluster are logged with the server ID of the server that actually writes the log.

    • DDL and DML statements on the master cluster are logged by any attached mysqld that has binary logging enabled.

    • Replicated DDL and DML statements on the slave are logged by any attached mysqld that has both --log-bin and --log-slave-updates enabled.

    • Replicated DDL and DML statements are logged with the server ID of the original (master) MySQL server by any attached mysqld that has both --log-bin and --log-slave-updates enabled.

    Effect on upgrades.  When upgrading from a previous MySQL Cluster release, you should do one of the following (a configuration sketch follows this item):

    1. Upgrade servers that are performing binary logging before those that are not; do not perform any DDL on old SQL nodes until all SQL nodes have been upgraded.

    2. Make sure that --log-slave-updates is enabled on all SQL nodes performing binary logging prior to the upgrade, so that all DDL is captured.

    Note

    Logging of DML statements was not affected by this issue.

    (Bug #45756)
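
    For the second option, a minimal my.cnf sketch for each SQL node that performs binary logging might look like this:

    [mysqld]
    log-bin
    log-slave-updates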

  • Packaging: The pkg installer for MySQL Cluster on Solaris did not perform a complete installation due to an invalid directory reference in the postinstall script. (Bug #41998)

  • Two unused test files in storage/ndb/test/sql contained incorrect versions of the GNU Lesser General Public License. The files and the directory containing them have been removed. (Bug #11810156)

    References: See also: Bug #11810224.

  • It was possible for a DROP DATABASE statement to remove NDB hidden blob tables without removing the parent tables, with the result that the tables, although hidden to MySQL clients, were still visible in the output of ndb_show_tables but could not be dropped using ndb_drop_table. (Bug #54788)

  • When performing an online ALTER TABLE operation where two or more SQL nodes connected to the cluster were generating binary logs, an incorrect message could be sent from the data nodes, causing mysqld processes to crash. This problem was often difficult to detect, because restarting SQL node or data node processes could clear the error, and because the crash in mysqld did not occur until several minutes after the erroneous message was sent and received. (Bug #54168)

  • A table having the maximum number of attributes permitted could not be backed up using the ndb_mgm client.

    Note

    The maximum number of attributes supported per table is not the same for all MySQL Cluster releases. See Limits Associated with Database Objects in MySQL Cluster to determine the maximum that applies to the release you are using.

    (Bug #54155)
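
    For reference, the backup in question is started from the management client using the START BACKUP command, as shown here:

    ndb_mgm> START BACKUP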

  • Creating a Disk Data table, dropping it, then creating an in-memory table and performing a restart, could cause data node processes to fail with errors in the DBTUP kernel block if the new table's internal ID was the same as that of the old Disk Data table. This could occur because undo log handling during the restart did not check that the table having this ID was now in-memory only. (Bug #53935)

  • A table created while ndb_table_no_logging was enabled was not always stored to disk, which could lead to a data node crash with Error opening DIH schema files for table. (Bug #53934)

  • An internal buffer allocator used by NDB has the form alloc(wanted, minimum) and attempts to allocate wanted pages, but is permitted to allocate a smaller number of pages, between wanted and minimum. However, this allocator could sometimes allocate fewer than minimum pages, causing problems with multi-threaded building of ordered indexes. (Bug #53580)

  • With engine_condition_pushdown enabled, a query using LIKE on an ENUM column of an NDB table failed to return any results. This issue is resolved by disabling engine_condition_pushdown when performing such queries. (Bug #53360)
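
    A sketch of the kind of query that was affected, assuming a hypothetical NDB table t1 with an ENUM column e:

    SET engine_condition_pushdown = ON;
    SELECT * FROM t1 WHERE e LIKE 'red%';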

  • NDB truncated a column declared as DECIMAL(65,0) to a length of 64. Now such a column is accepted and handled correctly. In cases where the maximum length (65) is exceeded, NDB now raises an error instead of truncating. (Bug #53352)
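
    For illustration (table and column names are hypothetical):

    CREATE TABLE d1 (c1 DECIMAL(65,0)) ENGINE=NDB;  -- now accepted and handled correctly
    CREATE TABLE d2 (c2 DECIMAL(66,0)) ENGINE=NDB;  -- exceeds the maximum; now raises an error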

  • When a watchdog shutdown occurred due to an error, the process was not terminated quickly enough, sometimes resulting in a hang. (To correct this, the internal _exit() function is now called in such situations, rather than exit().) (Bug #53246)

  • Setting DataMemory higher than 4G on 32-bit platforms caused ndbd to crash, instead of failing gracefully with an error. (Bug #52536, Bug #50928)

  • When running a SELECT on an NDB table with BLOB or TEXT columns, memory was allocated for the columns but was not freed until the end of the SELECT. This could cause problems with excessive memory usage when dumping (using for example mysqldump) tables with such columns and having many rows, large column values, or both. (Bug #52313)

    References: See also: Bug #56488, Bug #50310.

  • When using NoOfReplicas equal to 1 or 2, if data nodes from one node group were restarted 256 times and applications were running traffic such that it would encounter NDB error 1204 (Temporary failure, distribution changed), the live node in the node group would crash, causing the cluster to crash as well. The crash occurred only when the error was encountered on the 256th restart; having the error on any previous or subsequent restart did not cause any problems. (Bug #50930)

  • The AUTO_INCREMENT option for ALTER TABLE did not reset AUTO_INCREMENT columns of NDB tables. (Bug #50247)
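
    A minimal sketch of the behavior that was fixed (names and values are illustrative):

    CREATE TABLE t1 (id INT AUTO_INCREMENT PRIMARY KEY) ENGINE=NDB;
    INSERT INTO t1 VALUES (NULL), (NULL), (NULL);
    ALTER TABLE t1 AUTO_INCREMENT = 100;  -- formerly had no effect on NDB tables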

  • If a query on an NDB table compared a constant string value to a column, and the length of the string was greater than that of the column, condition pushdown did not work correctly. (The string was truncated to fit the column length before being pushed down.) Now in such cases, the condition is no longer pushed down. (Bug #49459)

  • When performing tasks that generated large amounts of I/O (such as when using ndb_restore), an internal memory buffer could overflow, causing data nodes to fail with signal 6.

    Subsequent analysis showed that this buffer was not actually required, so this fix removes it. (Bug #48861)

  • Performing intensive inserts and deletes in parallel with a high scan load could cause data node crashes due to a failure in the DBACC kernel block. This was because the check for when to perform bucket splits or merges considered only the first 4 scans. (Bug #48700)

  • The creation of an ordered index on a table undergoing DDL operations could cause a data node crash under certain timing-dependent conditions. (Bug #48604)

  • In certain cases, performing very large inserts on NDB tables when using ndbmtd caused the memory allocations for ordered or unique indexes (or both) to be exceeded. This could cause aborted transactions and possibly lead to data node failures. (Bug #48037)

    References: See also: Bug #48113.

  • When employing NDB native backup to back up and restore an empty NDB table that used a non-sequential AUTO_INCREMENT value, the AUTO_INCREMENT value was not restored correctly. (Bug #48005)

  • SHOW CREATE TABLE did not display the AUTO_INCREMENT value for NDB tables having AUTO_INCREMENT columns. (Bug #47865)

  • Under some circumstances, when a scan encountered an error early in processing by the DBTC kernel block (see The DBTC Block), a node could crash as a result. Such errors could be caused by applications sending incorrect data, or, more rarely, by a DROP TABLE operation executed in parallel with a scan. (Bug #47831)

  • When starting a node and synchronizing tables, memory pages were allocated even for empty fragments. In certain situations, this could lead to insufficient memory. (Bug #47782)

  • mysqld allocated an excessively large buffer for handling BLOB values due to overestimating their size. (For each row, enough space was allocated to accommodate every BLOB or TEXT column value in the result set.) This could adversely affect performance when using tables containing BLOB or TEXT columns; in a few extreme cases, this issue could also cause the host system to run out of memory unexpectedly. (Bug #47574)

    References: See also: Bug #47572, Bug #47573.

  • NDBCLUSTER uses a dynamically allocated buffer to store BLOB or TEXT column data that is read from rows in MySQL Cluster tables.

    When an instance of the NDBCLUSTER table handler was recycled (this can happen due to table definition cache pressure or to operations such as FLUSH TABLES or ALTER TABLE), if the last row read contained blobs of zero length, the buffer was not freed, even though the reference to it was lost. This resulted in a memory leak.

    For example, consider the table defined and populated as shown here:

    CREATE TABLE t (a INT PRIMARY KEY, b LONGTEXT) ENGINE=NDB;
    
    INSERT INTO t VALUES (1, REPEAT('F', 20000));
    INSERT INTO t VALUES (2, '');
    

    Now repeatedly execute a SELECT on this table such that the zero-length LONGTEXT row is read last, followed by a FLUSH TABLES statement (which forces the handler object to be reused), as shown here:

    SELECT a, length(b) FROM t ORDER BY a;
    FLUSH TABLES;
    

    Prior to the fix, this resulted in a memory leak proportional to the size of the stored LONGTEXT value each time these two statements were executed. (Bug #47573)

    References: See also: Bug #47572, Bug #47574.

  • Large transactions involving joins between tables containing BLOB columns used excessive memory. (Bug #47572)

    References: See also: Bug #47573, Bug #47574.

  • A variable was left uninitialized while a data node copied data from its peers as part of its startup routine; if the starting node died during this phase, this could lead to a crash of the cluster when the node was later restarted. (Bug #47505)

  • NDB stores blob column data in a separate, hidden table that is not accessible from MySQL. If this table was missing for some reason (such as accidental deletion of the file corresponding to the hidden table) when making a MySQL Cluster native backup, ndb_restore crashed when attempting to restore the backup. Now in such cases, ndb_restore fails with the error message Table table_name has blob column (column_name) with missing parts table in backup instead. (Bug #47289)

  • For very large values of MaxNoOfTables + MaxNoOfAttributes, the calculation for StringMemory could overflow when creating large numbers of tables, leading to NDB error 773 (Out of string memory, please modify StringMemory config parameter), even when StringMemory was set to 100 (100 percent). (Bug #47170)

  • The default value for the StringMemory configuration parameter, unlike other MySQL Cluster configuration parameters, was not set in ndb/src/mgmsrv/ConfigInfo.cpp. (Bug #47166)

  • Signals from a failed API node could be received after an API_FAILREQ signal (see Operations and Signals) had been received from that node, which could result in invalid states for processing subsequent signals. Now, all pending signals from a failing API node are processed before any API_FAILREQ signal is received. (Bug #47039)

    References: See also: Bug #44607.

  • Using triggers on NDB tables caused ndb_autoincrement_prefetch_sz to be treated as having the NDB kernel's internal default value (32) and the value for this variable as set on the cluster's SQL nodes to be ignored. (Bug #46712)

  • Full table scans failed to execute when the cluster contained more than 21 table fragments.

    Note

    The number of table fragments in the cluster can be calculated as the number of data nodes, times 8 (that is, times the value of the internal constant MAX_FRAG_PER_NODE), divided by the number of replicas. Thus, when NoOfReplicas = 1, at least 3 data nodes were required to trigger this issue, and when NoOfReplicas = 2, at least 6 were required to do so.

    (Bug #46490)

  • Ending a line in the config.ini file with an extra semicolon character (;) caused reading the file to fail with a parsing error. (Bug #46242)

  • When combining an index scan and a delete with a primary key delete, the index scan and delete failed to initialize a flag properly. This could in rare circumstances cause a data node to crash. (Bug #46069)

  • Problems could arise when using VARCHAR columns whose size was greater than 341 characters and which used the utf8_unicode_ci collation. In some cases, this combination of conditions could cause certain queries and OPTIMIZE TABLE statements to crash mysqld. (Bug #45053)

  • Running an ALTER TABLE statement while an NDB backup was in progress caused mysqld to crash. (Bug #44695)

  • If a node failed while sending a fragmented long signal, the receiving node did not free long signal assembly resources that it had allocated for the fragments of the long signal that had already been received. (Bug #44607)

  • When performing auto-discovery of tables on individual SQL nodes, NDBCLUSTER attempted to overwrite existing MyISAM .frm files and corrupted them.

    Workaround.  In the mysql client, create a new table (t2) with the same definition as the corrupted table (t1). Use your system shell or file manager to rename the old .MYD file to the new file name (for example, mv t1.MYD t2.MYD). In the mysql client, repair the new table, drop the old one, and rename the new table using the old file name (for example, RENAME TABLE t2 TO t1).

    (Bug #42614)

  • When starting a cluster with a great many tables, it was possible for MySQL client connections as well as the slave SQL thread to issue DML statements against MySQL Cluster tables before mysqld had finished connecting to the cluster and making all tables writeable. This resulted in Table ... is read only errors for clients and the Slave SQL thread.

    This issue is fixed by introducing the --ndb-wait-setup option for the MySQL server. This provides a configurable maximum amount of time that mysqld waits for all NDB tables to become writeable, before enabling MySQL clients or the slave SQL thread to connect. (Bug #40679)

    References: See also: Bug #46955.
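
    The option can also be set in my.cnf; a sketch with an illustrative 30-second maximum wait:

    [mysqld]
    ndb-wait-setup=30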

  • Running ndb_restore with the --print or --print_log option could cause it to crash. (Bug #40428, Bug #33040)

  • When a slash character (/) was used as part of the name of an index on an NDB table, attempting to execute a TRUNCATE TABLE statement on the table failed with the error Index not found, and the table was rendered unusable. (Bug #38914)
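
    A sketch reproducing the issue (names are hypothetical; an index name containing a slash must be quoted with backticks):

    CREATE TABLE t1 (c1 INT, KEY `idx/c1` (c1)) ENGINE=NDB;
    TRUNCATE TABLE t1;  -- formerly failed with the error "Index not found"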

  • When building MySQL Cluster, it was possible to configure the build using --with-ndb-port without supplying a port number. Now in such cases, configure fails with an error. (Bug #38502)

    References: See also: Bug #47941.

  • An insert on an NDB table was not always flushed properly before performing a scan. One way in which this issue could manifest was that LAST_INSERT_ID() sometimes failed to return correct values when using a trigger on an NDB table. (Bug #38034)

  • If the cluster crashed during the execution of a CREATE LOGFILE GROUP statement, the cluster could not be restarted afterward. (Bug #36702)

    References: See also: Bug #34102.

  • Some joins on large NDB tables having TEXT or BLOB columns could cause mysqld processes to leak memory. The joins did not need to reference the TEXT or BLOB columns directly for this issue to occur. (Bug #36701)

  • When the MySQL server SQL mode included STRICT_TRANS_TABLES, storage engine warnings and error codes specific to NDB were returned when errors occurred, instead of the MySQL server errors and error codes expected by some programming APIs (such as Connector/J) and applications. (Bug #35990)

  • On OS X 10.5, commands entered in the management client failed and sometimes caused the client to hang, although management client commands invoked using the --execute (or -e) option from the system shell worked normally.

    For example, the following command failed with an error and hung until killed manually, as shown here:

    ndb_mgm> SHOW     
    Warning, event thread startup failed, degraded printouts as result, errno=36
    ^C
    

    However, the same management client command, invoked from the system shell as shown here, worked correctly:

    shell> ndb_mgm -e "SHOW"
    

    (Bug #35751)

    References: See also: Bug #34438.

  • When a copying operation exhausted the available space on a data node while copying large BLOB columns, this could lead to failure of the data node and a Table is full error on the SQL node which was executing the operation. Examples of such operations could include an ALTER TABLE that changed an INT column to a BLOB column, or a bulk insert of BLOB data that failed due to running out of space or to a duplicate key error. (Bug #34583, Bug #48040)

    References: See also: Bug #41674, Bug #45768.

  • Trying to insert more rows than would fit into an NDB table caused data nodes to crash. Now in such situations, the insert fails gracefully with error 633 Table fragment hash index has reached maximum possible size. (Bug #34348)

  • The error message text for NDB error code 410 (REDO log files overloaded...) was truncated. (Bug #23662)

  • Replication: When mysqlbinlog --verbose was used to read a binary log that had been written using row-based format, the output for events that updated some but not all columns of tables was not correct. (Bug #47323)

  • Replication: In some cases, a STOP SLAVE statement could cause the replication slave to crash. This issue was specific to MySQL on Windows or Macintosh platforms. (Bug #45238, Bug #45242, Bug #45243, Bug #46013, Bug #46014, Bug #46030)

    References: See also: Bug #40796.

  • Disk Data: As an optimization when inserting a row into an empty page, the page is not read, but rather simply initialized. However, this optimization was performed whenever an empty row was inserted, even though it should have been done only the first time that the page was used by a table or fragment. This is because, if the page had been in use and all records had then been released from it, the page still needed to be read to obtain its log sequence number (LSN).

    This caused problems only if the page had been flushed using an incorrect LSN and the data node failed before any local checkpoint was completed; completion of a local checkpoint would have removed any need to apply the undo log, in which case the incorrect LSN would simply have been ignored.

    The user-visible result of the incorrect LSN was that it caused the data node to fail during a restart. It was perhaps also possible (although not conclusively proven) that this issue could lead to incorrect data. (Bug #54986)

  • Disk Data: For a Disk Data tablespace whose extent size was not equal to a whole multiple of 32K, the value of the FREE_EXTENTS column in the INFORMATION_SCHEMA.FILES table was smaller than the value of TOTAL_EXTENTS.

    As part of this fix, the implicit rounding of INITIAL_SIZE, EXTENT_SIZE, and UNDO_BUFFER_SIZE performed by NDBCLUSTER (see CREATE TABLESPACE Syntax) is now done explicitly, and the rounded values are used for calculating INFORMATION_SCHEMA.FILES column values and other purposes. (Bug #49709)

    References: See also: Bug #31712.

  • Disk Data: Inserts of blob column values into a Disk Data table that exhausted the tablespace resulted in misleading error messages about rows not being found in the table rather than the expected error Out of extents, tablespace full. (Bug #48113)

    References: See also: Bug #48037, Bug #41674.

  • Disk Data: A local checkpoint of an empty fragment could cause a crash during a system restart which was based on that LCP. (Bug #47832)

    References: See also: Bug #41915.

  • Disk Data: Calculation of free space for Disk Data table fragments was sometimes done incorrectly. This could lead to unnecessary allocation of new extents even when sufficient space was available in existing ones for inserted data. In some cases, this might also lead to crashes when restarting data nodes.

    Note

    This miscalculation was not reflected in the contents of the INFORMATION_SCHEMA.FILES table, as it applied to extents allocated to a fragment, and not to a file.

    (Bug #47072)

  • Disk Data: If the value set in the config.ini file for FileSystemPathDD, FileSystemPathDataFiles, or FileSystemPathUndoFiles was identical to the value set for FileSystemPath, that parameter was ignored when starting the data node with the --initial option. As a result, the Disk Data files in the corresponding directory were not removed when performing an initial start of the affected data node or data nodes. (Bug #46243)

  • Disk Data: Repeatedly creating and then dropping Disk Data tables could eventually lead to data node failures. (Bug #45794, Bug #48910)

  • Disk Data: When a crash occurs due to a problem in Disk Data code, the currently active page list is printed to stdout (that is, in one or more ndb_nodeid_out.log files). One of these lists could contain an endless loop; this caused a printout that was effectively never-ending. Now in such cases, a maximum of 512 entries is printed from each list. (Bug #42431)

  • Disk Data: Once all data files associated with a given tablespace had been dropped, there was no way for MySQL client applications (including the mysql client) to tell that the tablespace still existed. To remedy this problem, INFORMATION_SCHEMA.FILES now holds an additional row for each tablespace. (Previously, only the data files in each tablespace were shown.) This row shows TABLESPACE in the FILE_TYPE column, and NULL in the FILE_NAME column. (Bug #31782)

  • Disk Data: It was possible to issue a CREATE TABLESPACE or ALTER TABLESPACE statement in which INITIAL_SIZE was less than EXTENT_SIZE. (In such cases, INFORMATION_SCHEMA.FILES erroneously reported the value of the FREE_EXTENTS column as 1 and that of the TOTAL_EXTENTS column as 0.) Now when either of these statements is issued such that INITIAL_SIZE is less than EXTENT_SIZE, the statement fails with an appropriate error message. (Bug #31712)

    References: See also: Bug #49709.
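
    For example, a CREATE TABLESPACE statement such as the following (assuming an existing log file group lg1; the sizes shown are illustrative) is now rejected with an error instead of being accepted:

    CREATE TABLESPACE ts1
        ADD DATAFILE 'ts1_data.dat'
        USE LOGFILE GROUP lg1
        INITIAL_SIZE 1M
        EXTENT_SIZE 4M
        ENGINE NDB;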

  • Cluster Replication: When expire_logs_days was set, the thread performing the purge of the log files could deadlock, causing all binary log operations to stop. (Bug #49536)

  • Cluster Replication: When using multiple active replication channels, it was sometimes possible that a node group failed on the slave cluster, causing the slave cluster to shut down. (Bug #47935)

  • Cluster Replication: When recording a binary log using the --ndb-log-update-as-write and --ndb-log-updated-only options (both enabled by default) and later attempting to apply that binary log with mysqlbinlog, any operations that were played back from the log but which updated only some (but not all) columns caused any columns that were not updated to be reset to their default values. (Bug #47674)

    References: See also: Bug #47323, Bug #46662.

  • Cluster Replication: mysqlbinlog failed to apply correctly a binary log that had been recorded using --ndb-log-update-as-write=1. (Bug #46662)

    References: See also: Bug #47323, Bug #47674.

  • Cluster Replication: When inserting rows into the mysql.ndb_binlog_index table, duplicate key errors occurred when the value of the epoch number (a 64-bit integer) exceeded 2^53. This happened because the NDB storage engine handler called the wrong overloaded variant of a MySQL Server internal API (the Field::store() method), which resulted in the epoch being mapped to a 64-bit double-precision floating point number, with a corresponding loss of accuracy for numbers greater than 2^53. (Bug #35217)

  • Cluster API: When reading blob data with lock mode LM_SimpleRead, the lock was not upgraded as expected. (Bug #51034)

  • Cluster API: When a DML operation failed due to a uniqueness violation on an NDB table having more than one unique index, it was difficult to determine which constraint caused the failure; it was necessary to obtain an NdbError object and then decode its details property, which in turn could lead to memory management issues in application code.

    To help solve this problem, a new API method Ndb::getNdbErrorDetail() is added, providing a well-formatted string containing more precise information about the index that caused the unique constraint violation. The following additional changes are also made in the NDB API:

    • Use of NdbError.details is now deprecated in favor of the new method.

    • The Dictionary::listObjects() method has been modified to provide more information.

    (Bug #48851)

  • Cluster API: The NDB API methods Dictionary::listEvents(), Dictionary::listIndexes(), Dictionary::listObjects(), and NdbOperation::getErrorLine() formerly had both const and non-const variants. The non-const versions of these methods have been removed. In addition, the NdbOperation::getBlobHandle() method has been re-implemented to provide consistent internal semantics. (Bug #47798)

  • Cluster API: In some circumstances, if an API node encountered a data node failure between the creation of a transaction and the start of a scan using that transaction, then any subsequent calls to startTransaction() and closeTransaction() could cause the same transaction to be started and closed repeatedly. (Bug #47329)

  • Cluster API: A duplicate read of a column caused NDB API applications to crash. (Bug #45282)

  • Cluster API: Performing multiple operations using the same primary key within the same NdbTransaction::execute() call could lead to a data node crash.

    Note

    This fix does not change the fact that performing multiple operations using the same primary key within the same execute() call is not supported; because there is no way to determine the order of such operations, the result of such combined operations remains undefined.

    (Bug #44065)

    References: See also: Bug #44015.

  • Cluster API: The error handling shown in the example file ndbapi_scan.cpp included with the MySQL Cluster distribution was incorrect. (Bug #39573)

  • Cluster API: When using blobs, calling getBlobHandle() requires the full key to have been set using equal(), because getBlobHandle() must access the key for adding blob table operations. However, if getBlobHandle() was called without first setting all parts of the primary key, the application using it crashed. Now, an appropriate error code is returned instead. (Bug #28116, Bug #48973)

  • API: The fix for Bug #24507 could lead in some cases to client application failures due to a race condition. Now the server waits for the dummy thread to return before exiting, thus making sure that only one thread can initialize the POSIX threads library. (Bug #42850)

    References: This issue is a regression of: Bug #24507.

  • On some Unix/Linux platforms, building from source could fail with an error referring to a missing LT_INIT program. This was due to the use of libtool versions 2.1 and earlier. (Bug #51009)

  • On OS X or Windows, sending a SIGHUP signal to the server or an asynchronous flush (triggered by flush_time) caused the server to crash. (Bug #47525)

  • In rare cases, if a thread was interrupted during a FLUSH PRIVILEGES operation, a debug assertion occurred later due to improper diagnostics area setup. In addition, a KILL operation could cause a console error message referring to a diagnostics area state without first ensuring that the state existed. (Bug #33982)

  • When using the ARCHIVE storage engine, SHOW TABLE STATUS displayed incorrect information for Max_data_length, Data_length and Avg_row_length. (Bug #29203)

  • Installation of MySQL on Windows failed to set the correct location for the character set files, which could lead to mysqld and mysql failing to initialize properly. (Bug #17270)