Packaging: Yum repo packages are added for EL5, EL6, EL7, and SLES12.
Apt repo packages are added for Debian 7, Debian 8, Ubuntu 14.04, and Ubuntu 16.04.
This was due to the fact that, when processing an EXPLAIN statement, mysqld calculates the partition ID for a hash value as (hash_value % number_of_partitions), which is correct only when the table is partitioned by HASH, since other partitioning types use different methods of mapping hash values to partition IDs. This fix replaces the partition ID calculation performed by mysqld with an internal NDB function which calculates the partition ID correctly, based on the table's partitioning type. (Bug #21068548)
References: See also: Bug #25501895, Bug #14672885.
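The difference between the two mappings can be illustrated with a short sketch (Python purely for illustration; the real calculation happens inside mysqld and NDB). Plain HASH partitioning is a simple modulus, while LINEAR HASH uses MySQL's documented powers-of-two masking algorithm, so applying the modulus rule to a LINEAR HASH table can select the wrong partition.

```python
def hash_partition_id(hash_value: int, num_parts: int) -> int:
    # Plain HASH partitioning: a simple modulus.
    return hash_value % num_parts

def linear_hash_partition_id(hash_value: int, num_parts: int) -> int:
    # LINEAR HASH: MySQL's documented powers-of-two algorithm, which
    # generally disagrees with the modulus rule above.
    v = 1
    while v < num_parts:          # smallest power of two >= num_parts
        v *= 2
    n = hash_value & (v - 1)
    while n >= num_parts:         # fold the value back into range
        v //= 2
        n &= v - 1
    return n

# With 6 partitions, hash value 7 maps differently under each scheme:
assert hash_partition_id(7, 6) == 1
assert linear_hash_partition_id(7, 6) == 3
```

Before the fix, mysqld applied the first mapping to every table when estimating partitions for EXPLAIN; after it, the table's actual partitioning type selects the mapping.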
NDB Disk Data: Stale data from NDB Disk Data tables that had been dropped could potentially be included in backups due to the fact that disk scans were enabled for these. To prevent this possibility, disk scans are now disabled—as are other types of scans—when taking a backup. (Bug #84422, Bug #25353234)
NDB Disk Data: In some cases, setting dynamic in-memory columns of an NDB Disk Data table to NULL was not handled correctly. (Bug #79253, Bug #22195588)
NDB Cluster APIs: When signals were sent while the client process was receiving signals such as TC_COMMIT_ACK, these signals were temporarily buffered in the send buffers of the clients that sent them. If not explicitly flushed, the signals remained in these buffers until the client woke up again and flushed its buffers. Because no attempt was made to enforce an upper limit on how long a signal could remain unsent in the local client buffers, this could lead to timeouts and other misbehavior in the components waiting for these signals.
In addition, the fix for a previous, related issue likely made this situation worse by removing client wakeups during which the client send buffers could have been flushed.
The current fix moves the responsibility for flushing messages sent by the receivers to the receiver (the poll_owner client). This means that it is no longer necessary to wake up all clients merely to have them flush their buffers. Instead, the poll_owner client (which is already running) flushes the send buffers for whatever was sent while delivering signals to the recipients. (Bug #22705935)
References: See also: Bug #18753341, Bug #23202735.
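The change in flush responsibility can be sketched as follows (hypothetical names; the real transporter code is C++ inside the NDB API). Before the fix, each client's send buffer was flushed only when that client next woke up; after it, the poll owner flushes whatever was buffered while it delivers signals, so no extra wakeups are needed.

```python
class Client:
    """A client whose outgoing signals are buffered locally."""
    def __init__(self, name):
        self.name = name
        self.send_buffer = []

    def buffer_signal(self, signal):
        # Signals such as TC_COMMIT_ACK sent from a receive context are
        # buffered rather than transmitted immediately.
        self.send_buffer.append(signal)

class PollOwner:
    """The already-running client that delivers signals to recipients."""
    def __init__(self, clients):
        self.clients = clients
        self.sent = []   # stands in for the actual transport

    def deliver_and_flush(self):
        # After delivering signals, flush every client's send buffer on
        # their behalf instead of waking each client up to do it.
        for client in self.clients:
            self.sent.extend(client.send_buffer)
            client.send_buffer.clear()

a, b = Client("a"), Client("b")
a.buffer_signal("TC_COMMIT_ACK")
PollOwner([a, b]).deliver_and_flush()
assert a.send_buffer == []   # flushed without waking client "a"
```

This removes the unbounded delay: a buffered signal now leaves the process as soon as the poll owner finishes its delivery pass.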
CPU usage of the data node's main thread by the DBDIH master block at the end of a local checkpoint could approach 100% in certain cases where the database had a very large number of fragment replicas. This is fixed by reducing the frequency and range of fragment queue checking during an LCP. (Bug #25443080)
The ndb_print_backup_file utility failed when attempting to read a backup file from a backup that included a table having more than 500 columns. (Bug #25302901)
References: See also: Bug #25182956.
Multiple data node failures during a partial restart of the cluster could cause API nodes to fail. This was due to expansion of an internal object ID map by one thread, which changed its location in memory while another thread was still accessing the old location, leading to a segmentation fault in the latter thread.
The unmap() functions in which this issue arose have now been made thread-safe. (Bug #25092498)
References: See also: Bug #25306089.
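The race and its fix can be sketched like this (hypothetical structure; the actual map code is C++ inside the API node). Growing the ID map reallocates its storage, so a reader still holding a reference into the old storage would touch freed memory; serializing both operations on one lock removes the race.

```python
import threading

class ObjectIdMap:
    """Growable object ID map; one lock serializes growth and lookup."""
    def __init__(self):
        self._lock = threading.Lock()
        self._slots = [None] * 4

    def map(self, obj):
        with self._lock:
            for i, slot in enumerate(self._slots):
                if slot is None:
                    self._slots[i] = obj
                    return i
            # Expansion "relocates" the storage; without the lock, a
            # concurrent unmap() could read the old, stale storage.
            i = len(self._slots)
            self._slots = self._slots + [None] * len(self._slots)
            self._slots[i] = obj
            return i

    def unmap(self, object_id):
        with self._lock:
            obj, self._slots[object_id] = self._slots[object_id], None
            return obj

m = ObjectIdMap()
ids = [m.map(f"obj{i}") for i in range(6)]   # forces one expansion
assert m.unmap(ids[5]) == "obj5"
```

In Python the unsynchronized version would merely return stale data, but in the original C++ the stale pointer dereference produced the segmentation fault described above.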
During the initial phase of a scan request, the DBTC kernel block sends a series of DIGETNODESREQ signals to the DBDIH block in order to obtain dictionary information for each fragment to be scanned. If DBDIH replied with a DIGETNODESREF signal, the error code from that signal was not read, and Error 218 Out of LongMessageBuffer was always returned instead. Now in such cases, the error code from the DIGETNODESREF signal is actually used. (Bug #85225, Bug #25642405)
There existed the possibility of a race condition between schema operations on the same database object originating from different SQL nodes. This could occur when one of the SQL nodes was late in releasing its metadata lock on the affected schema object or objects, in such a fashion that the schema distribution coordinator took the lock release as an acknowledgment for the wrong schema change. This could result in incorrect application of the schema changes on some or all of the SQL nodes, or in a timeout with repeated waiting max ### sec for distributing... messages in the node logs due to failure of the distribution protocol. (Bug #85010, Bug #25557263)
References: See also: Bug #24926009.
When a foreign key was added to or dropped from an NDB table using an ALTER TABLE statement, the parent table's metadata was not updated, which made it possible to execute invalid alter operations on the parent afterwards.
Until you can upgrade to this release, you can work around this problem by running SHOW CREATE TABLE on the parent immediately after adding or dropping the foreign key; this statement causes the table's metadata to be reloaded. (Bug #82989, Bug #24666177)
NDB tables with cascading foreign keys returned inconsistent results when the query cache was also enabled, due to the fact that mysqld was not aware of child table updates. This meant that results for a later SELECT from the child table were fetched from the query cache, which at that point contained stale data.
This is fixed in such cases by adding all children of the parent table to an internal list to be checked by NDB for updates whenever the parent is updated, so that mysqld is now properly informed of any updated child tables that should be invalidated from the query cache. (Bug #81776, Bug #23553507)