MySQL NDB Cluster 7.4.7 is a new release of MySQL NDB Cluster 7.4,
based on MySQL Server 5.6 and including features in version 7.4 of
the NDB storage engine, as well as
fixing recently discovered bugs in previous NDB Cluster releases.
Obtaining MySQL NDB Cluster 7.4. MySQL NDB Cluster 7.4 source code and binaries can be obtained from https://dev.mysql.com/downloads/cluster/.
For an overview of changes made in MySQL NDB Cluster 7.4, see What is New in NDB Cluster 7.4.
This release also incorporates all bug fixes and changes made in previous NDB Cluster releases, as well as all bug fixes and feature changes which were added in mainline MySQL 5.6 through MySQL 5.6.25 (see Changes in MySQL 5.6.25 (2015-05-29, General Availability)).
MySQL NDB ClusterJ: Under high workload, it was possible to overload the direct memory used to back domain objects, because direct memory is not garbage collected in the same manner as objects allocated on the heap. Two strategies have been added to the ClusterJ implementation: first, direct memory is now pooled, so that when a domain object is garbage collected, its direct memory can be reused by another domain object. Second, a new user-level method, release(instance), has been added to the Session interface, which allows users to release the direct memory before the corresponding domain object is garbage collected. See the description for release(T) for more information. (Bug #20504741)
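The pooling strategy can be sketched in C++ as follows. This is an illustrative model of the idea only (a free list of buffers reused on release), not ClusterJ's actual implementation; all names here are hypothetical:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical sketch: when a "domain object" is released, its direct
// buffer goes onto a free list instead of being freed, so a later
// allocation of the same size reuses it rather than allocating new
// native memory.
class DirectBufferPool {
public:
    // Returns a buffer of exactly `size` bytes, reusing a pooled one if possible.
    std::vector<char>* acquire(std::size_t size) {
        for (std::size_t i = 0; i < free_.size(); ++i) {
            if (free_[i]->size() == size) {
                std::vector<char>* buf = free_[i];
                free_.erase(free_.begin() + i);
                ++reused_;
                return buf;
            }
        }
        ++allocated_;
        return new std::vector<char>(size);
    }

    // Called when the user releases an instance (or it is collected):
    // the buffer returns to the pool instead of being freed.
    void release(std::vector<char>* buf) { free_.push_back(buf); }

    std::size_t allocations() const { return allocated_; }
    std::size_t reuses() const { return reused_; }

private:
    std::vector<std::vector<char>*> free_;
    std::size_t allocated_ = 0;
    std::size_t reused_ = 0;
};
```

Releasing explicitly, as the new release(instance) method allows, makes the buffer reusable immediately rather than only after garbage collection.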
Deprecated MySQL NDB Cluster node configuration parameters are now indicated as such by ndb_config --xml. For each parameter currently deprecated, the corresponding <param/> tag in the XML output now includes the attribute deprecated="true". (Bug #21127135)
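For example, a deprecated parameter might appear in the --xml output roughly as follows (an illustrative fragment; the parameter name and other attributes shown here are hypothetical, not actual ndb_config output):

```xml
<param name="SomeDeprecatedParameter" type="unsigned" deprecated="true"/>
```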
A number of improvements, listed here, have been made to the handling of overload issues that could arise when a great number of inserts were performed during a local checkpoint (LCP):
Failures sometimes occurred during restart processing when trying to execute the undo log, due to a problem with finding the end of the log. This happened when unwritten pages remained at the end of the first undo file when writing to the second undo file began; because the undo log is executed in reverse order, old or even nonexistent log records could be executed.
This is fixed by ensuring that execution of the undo log begins at the proper end of the log and, if execution is started earlier, that any unwritten or faulty pages are ignored.
It was possible to fail during an LCP, or when performing a COPY_FRAGREQ, due to running out of operation records. This is fixed by making sure that LCPs and COPY_FRAG use resources reserved for operation records, as was already the case with scan records. In addition, old code for ACC operations that was no longer required, but that could lead to failures, was removed.
When an LCP was performed while loading a table, it was possible to hit a livelock during LCP scans, due to the fact that each record inserted into new pages after the LCP had started had its LCP_SKIP flag set. Such records were discarded as intended by the LCP scan, but when inserts occurred faster than the LCP scan could discard records, the scan appeared to hang. As part of this issue, the scan failed to report any progress to the LCP watchdog, which killed the process after 70 seconds of livelock. This issue was observed when performing on the order of 250000 inserts per second over an extended period of time (120 seconds or more), using a single LDM.
This part of the fix makes a number of changes, listed here:
We now ensure that pages created after the LCP has started are not included in LCP scans; we also ensure that no records inserted into those pages have their LCP_SKIP flag set.
Handling of the scan protocol is changed such that a certain amount of progress is made by the LCP regardless of load; we now report progress to the LCP watchdog so that we avoid failure in the event that an LCP is making progress but not writing any records.
We now take steps to guarantee that LCP scans proceed more quickly than inserts can occur, by ensuring that this scanning activity is prioritized, and thus that the LCP is in fact (eventually) completed.
In addition, scanning is made more efficient, by prefetching tuples; this helps avoid stalls while fetching memory in the CPU.
Row checksums for preventing data corruption now include the tuple header bits.
(Bug #76373, Bug #20727343, Bug #76741, Bug #69994, Bug #20903880, Bug #76742, Bug #20904721, Bug #76883, Bug #20980229)
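The scan rules described in the list above can be modeled as follows. This is an illustrative sketch, not NDB kernel code: pages created after the LCP started are excluded entirely, LCP_SKIP records on older pages are discarded, and every examined unit counts as progress reported to the watchdog.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Illustrative model of the fixed LCP scan (all names hypothetical).
struct Record { bool lcp_skip; };
struct Page  { bool created_after_lcp_start; std::vector<Record> records; };

struct ScanResult {
    std::size_t written;    // records written to the checkpoint
    std::size_t progress;   // units of progress reported to the watchdog
};

ScanResult lcp_scan(const std::vector<Page>& pages) {
    ScanResult r{0, 0};
    for (const Page& p : pages) {
        if (p.created_after_lcp_start) {
            ++r.progress;      // skipping a whole page still counts as progress
            continue;          // its contents are not part of this LCP
        }
        for (const Record& rec : p.records) {
            ++r.progress;      // every examined record is reported as progress
            if (!rec.lcp_skip)
                ++r.written;   // only pre-LCP records are written
        }
    }
    return r;
}
```

Because progress is reported even when only discarding, the watchdog no longer sees an apparently hung scan under heavy insert load.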
Incompatible Change; NDB Cluster APIs: The
pollEvents2() method now returns -1, indicating an error, whenever a negative value is used for the time argument. (Bug #20762291)
Important Change; NDB Cluster APIs: The
Ndb::getHighestQueuedEpoch() method returned the greatest epoch in the event queue instead of the greatest epoch found after calling
pollEvents2(). (Bug #20700220)
Important Change; NDB Cluster APIs:
Ndb::pollEvents() is now compatible with the
TE_OUT_OF_MEMORY event type introduced in MySQL NDB Cluster 7.4.3. For detailed information about this change, see the description of this method in the MySQL NDB Cluster API Developer Guide. (Bug #20646496)
The behavior of
Ndb::pollEvents() has also been modified such that it now returns NDB_FAILURE_GCI (equal to
~(Uint64) 0) when a cluster failure has been detected. (Bug #18753887)
NDB Cluster APIs: Added the
Column::getSizeInBytesForRecord() method, which returns the size required for a column by an
NdbRecord, depending on the column's type (text/blob, or other). (Bug #21067283)
NDB Cluster APIs:
An event could be delivered with the TE_INCONSISTENT table event type (see Event::TableEvent) set in error. This caused a subsequent call to getEventType() to fail. (Bug #20729091)
NDB Cluster APIs: Creation and destruction of
Ndb_cluster_connection objects by multiple threads could make use of the same application lock, which in some cases led to failures in the global dictionary cache. To alleviate this problem, the creation and destruction of several internal NDB API objects have been serialized. (Bug #20636124)
NDB Cluster APIs: A number of timeouts were not handled correctly in the NDB API. (Bug #20617891)
NDB Cluster APIs: When an
Ndb object created prior to a failure of the cluster was reused, the event queue of this object could still contain data node events originating from before the failure. These events could reference “old” epochs (from before the failure occurred), which in turn could violate the assumption made by the
nextEvent() method that epoch numbers always increase. This issue is addressed by explicitly clearing the event queue in such cases. (Bug #18411034)
References: See also: Bug #20888668.
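The queue-clearing fix can be modeled like this. The types and names below are illustrative, not the actual NDB API internals; the point is that dropping pre-failure events on reuse preserves the monotonic-epoch assumption:

```cpp
#include <cassert>
#include <cstdint>
#include <deque>

// Simplified model: an event queue whose consumer assumes epoch numbers
// always increase, and which is cleared explicitly when the owning Ndb
// object is reused after a cluster failure.
class EventQueue {
public:
    void push(std::uint64_t epoch) { q_.push_back(epoch); }

    // Called when the Ndb object is reused after a cluster failure,
    // so no stale pre-failure epochs remain.
    void clear_on_reuse() { q_.clear(); }

    // Delivers the next epoch; refuses epochs older than the last seen.
    bool next_event(std::uint64_t* epoch) {
        if (q_.empty()) return false;
        std::uint64_t e = q_.front();
        q_.pop_front();
        if (e < last_) return false;  // would violate the monotonic assumption
        last_ = e;
        *epoch = e;
        return true;
    }

private:
    std::deque<std::uint64_t> q_;
    std::uint64_t last_ = 0;
};
```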
MySQL NDB ClusterJ: When used with Java 1.7 or higher, ClusterJ might cause the Java VM to crash when querying tables with BLOB columns, because NdbDictionary::createRecord calculates the wrong size needed for the record. Subsequently, when ClusterJ called NdbScanOperation::nextRecordCopyOut, the data overran the allocated buffer space. With this fix, ClusterJ checks the size calculated by NdbDictionary::createRecord and uses that value for the buffer size if it is larger than the value ClusterJ itself calculates. (Bug #20695155)
After restoring the database metadata (but not any data) by running ndb_restore -m, SQL nodes would hang while trying to SELECT from a table in the database to which the metadata was restored. In such cases the attempt to query the table now fails as expected, since the table does not actually exist until ndb_restore is executed with -r. (Bug #21184102)
References: See also: Bug #16890703.
When a great many threads opened and closed blocks in the NDB API in rapid succession, the internal
close_clnt() function synchronizing the closing of the blocks waited an insufficiently long time for a self-signal indicating potential additional signals needing to be processed. This led to excessive CPU usage by ndb_mgmd, and prevented other threads from opening or closing other blocks. This issue is fixed by changing the polling call to wait on a specific condition to be woken up (that is, when a signal has in fact been executed). (Bug #21141495)
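The pattern behind this fix, replacing a timed polling wait with a wait on the actual completion condition, might be sketched as follows. This is illustrative code, not the real close_clnt() implementation:

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <thread>

// Sketch: the closing thread blocks on a condition variable until the
// executing thread signals that the work has in fact been performed,
// instead of repeatedly waking up after a short fixed timeout.
struct CloseSync {
    std::mutex m;
    std::condition_variable cv;
    bool executed = false;

    void mark_executed() {            // called by the signal-executing thread
        std::lock_guard<std::mutex> lk(m);
        executed = true;
        cv.notify_one();
    }

    void wait_until_executed() {      // called by the closing thread
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [this] { return executed; });
    }
};

bool close_block_waits_correctly() {
    CloseSync s;
    std::thread executor([&s] { s.mark_executed(); });
    s.wait_until_executed();          // returns only once the work is done
    executor.join();
    return s.executed;
}
```

The predicate-based wait also handles spurious wakeups, so the closing thread never proceeds before the signal has actually been executed.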
Previously, multiple send threads could be invoked for handling sends to the same node; these threads then competed for the same send lock. While the send lock blocked the additional send threads, work threads could be passed to other nodes.
This issue is fixed by ensuring that new send threads are not activated while there is already an active send thread assigned to the same node. In addition, a node already having an active send thread assigned to it is no longer visible to other, already active, send threads; that is, such a node is no longer added to the node list when a send thread is currently assigned to it. (Bug #20954804, Bug #76821)
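The assignment rule can be modeled as follows. This is an illustrative sketch with hypothetical names, not the actual scheduler code:

```cpp
#include <cassert>
#include <cstddef>
#include <set>
#include <vector>

// Model: a node that already has an active send thread assigned is not
// handed to another send thread, and is not re-added to the shared
// pending-node list while that assignment lasts.
class SendNodeList {
public:
    // A node becomes pending only if no send thread is already assigned
    // to it and it is not already queued.
    void add_pending(int node) {
        if (assigned_.count(node) == 0 && !contains(node))
            pending_.push_back(node);
    }

    // A send thread picks up the next pending node and claims it.
    bool assign_next(int* node) {
        if (pending_.empty()) return false;
        *node = pending_.front();
        pending_.erase(pending_.begin());
        assigned_.insert(*node);
        return true;
    }

    void finish(int node) { assigned_.erase(node); }

    std::size_t pending_count() const { return pending_.size(); }

private:
    bool contains(int node) const {
        for (int n : pending_) if (n == node) return true;
        return false;
    }
    std::vector<int> pending_;
    std::set<int> assigned_;
};
```

With this rule, two send threads can never contend for the same node's send lock, which was the source of the wasted work described above.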
Queueing of pending operations when the redo log was overloaded (DefaultOperationRedoProblemAction API node configuration parameter) could lead to timeouts when data nodes ran out of redo log space (P_TAIL_PROBLEM errors). Now when the redo log is full, the node aborts requests instead of queuing them. (Bug #20782580)
References: See also: Bug #20481140.
The NDB event buffer can be used with an Ndb object to subscribe to table-level row change event streams. Users subscribe to an existing event; this causes the data nodes to start sending event data signals (SUB_TABLE_DATA) and epoch completion signals (SUB_GCP_COMPLETE) to the Ndb object. SUB_GCP_COMPLETE_REP signals can arrive for execution in the concurrent receiver thread before completion of the internal method call used to start a subscription. Execution of SUB_GCP_COMPLETE_REP signals depends on the total number of SUMA buckets (sub data streams), but this may not yet have been set, leading to the present issue, when the counter used for tracking the SUB_GCP_COMPLETE_REP signals (TOTAL_BUCKETS_INIT) was found to be set to erroneous values. Now TOTAL_BUCKETS_INIT is tested to be sure it has been set correctly before it is used. (Bug #20575424, Bug #76255)
References: See also: Bug #20561446, Bug #21616263.
NDB statistics queries could be delayed by the error delay set for ndb_index_stat_option (default 60 seconds) when the index that was queried had been marked with an internal error. The same underlying issue could also cause ANALYZE TABLE to hang when executed against an NDB table having multiple indexes where an internal error occurred on one or more, but not all, indexes.
Now in such cases, any existing statistics are returned immediately, without waiting for any additional statistics to be discovered. (Bug #20553313, Bug #20707694, Bug #76325)
The multithreaded scheduler sends to remote nodes either directly from each worker thread or from dedicated send threads, depending on the cluster's configuration. This send might transmit all, part, or none of the available data from the send buffers. While there remained pending send data, the worker or send threads continued trying to send in a loop. The actual size of the data sent in the most recent attempt to perform a send is now tracked, and used to detect lack of send progress by the send or worker threads. When no progress has been made, and there is no other work outstanding, the scheduler takes a 1 millisecond pause to free up the CPU for use by other threads. (Bug #18390321)
References: See also: Bug #20929176, Bug #20954804.
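The progress-detection logic might be sketched as follows, with hypothetical names; the real scheduler tracks the bytes transmitted per send attempt:

```cpp
#include <cassert>
#include <cstddef>

// Model of the fixed send loop: track the size of the most recent send
// attempt, and pause (1 ms in the real fix) only when no progress was
// made and no other work is outstanding, instead of spinning.
struct SendLoopState {
    std::size_t last_sent = 0;   // size of the most recent send attempt

    bool should_pause(std::size_t sent_now, bool other_work_pending) {
        last_sent = sent_now;
        // Pause only when the last attempt made no progress and the
        // thread has nothing else useful to do.
        return sent_now == 0 && !other_work_pending;
    }
};
```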
In some cases, attempting to restore a table that was previously backed up failed with a File Not Found error due to a missing table fragment file. This occurred as a result of the NDB kernel
BACKUP block receiving a Busy error while trying to obtain the table description, due to other traffic from external clients, and not retrying the operation.
The fix for this issue creates two separate queues for such requests, one for internal clients such as the BACKUP block or ndb_restore, and one for external clients such as API nodes, and gives priority to the internal queue.
Note that it has always been the case that external client applications using the NDB API (including MySQL applications running against an SQL node) are expected to handle Busy errors by retrying transactions at a later time; this expectation is not changed by the fix for this issue. (Bug #17878183)
References: See also: Bug #17916243.
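A minimal sketch of the two-queue prioritization described above (the names are illustrative, not the actual DBDICT or BACKUP code):

```cpp
#include <cassert>
#include <queue>
#include <string>

// Model: table-description requests from internal clients (BACKUP block,
// ndb_restore) are queued separately from external API-node requests,
// and the internal queue is always served first.
class TabInfoQueues {
public:
    void enqueue(const std::string& req, bool internal) {
        (internal ? internal_ : external_).push(req);
    }

    // Internal requests are prioritized over external ones.
    bool dequeue(std::string* req) {
        std::queue<std::string>& q = internal_.empty() ? external_ : internal_;
        if (q.empty()) return false;
        *req = q.front();
        q.pop();
        return true;
    }

private:
    std::queue<std::string> internal_;
    std::queue<std::string> external_;
};
```

External clients can still see Busy errors under load, which is why the retry expectation for NDB API applications is unchanged.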
On startup, API nodes (including mysqld processes running as SQL nodes) waited to connect with data nodes that had not yet joined the cluster. Now they wait only for data nodes that have actually already joined the cluster.
In the case of a new data node joining an existing cluster, API nodes still try to connect with the new data node within
HeartbeatIntervalDbApi milliseconds. (Bug #17312761)
In some cases, the DBDICT block failed to handle repeated GET_TABINFOREQ signals after the first one, leading to possible node failures and restarts. This could be observed after setting a sufficiently high value for MaxNoOfExecutionThreads and a low value for LcpScanProgressTimeout. (Bug #77433, Bug #21297221)
Client lookup for delivery of API signals to the correct client by the internal
TransporterFacade::deliver_signal() function had no mutex protection, which could cause issues such as timeouts encountered during testing, when other clients connected to the same TransporterFacade. (Bug #77225, Bug #21185585)
It was possible for threads to become stuck contending for the send buffer mutex when send buffers became a limiting resource, due either to insufficient send buffer resource configuration, to problems with slow or failing communications causing all send buffers to become exhausted, or to slow receivers failing to consume what was sent. In this situation, worker threads failed to allocate send buffer memory for signals and attempted to force a send in order to free up space, while at the same time the send thread was busy trying to send to the same node or nodes. All of these threads competed for the send buffer mutex, which resulted in the lockup already described, reported by the watchdog as Stuck in Send. This fix is made in two parts, listed here:
The send thread no longer holds the global send thread mutex while getting the send buffer mutex; it now releases the global mutex prior to locking the send buffer mutex. This keeps worker threads from getting stuck in send in such cases.
Locking of the send buffer mutex done by the send threads now uses a try-lock. If the try-lock fails, the node to be sent to is reinserted at the end of the list of send nodes so that it can be retried later. This removes the Stuck in Send condition for the send threads.
(Bug #77081, Bug #21109605)
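The try-lock part of the fix can be sketched as follows. This is an illustrative model in which a simple atomic flag stands in for the real send buffer mutex, so the contended case is easy to demonstrate; all names are hypothetical:

```cpp
#include <cassert>
#include <atomic>
#include <deque>

// Stand-in for the send buffer mutex: try_lock never blocks.
struct SendBufferLock {
    std::atomic<bool> locked{false};
    bool try_lock() {
        bool expected = false;
        return locked.compare_exchange_strong(expected, true);
    }
    void unlock() { locked.store(false); }
};

// Attempts one send; returns true if the buffer lock was obtained.
// On contention the node id is re-queued at the tail of `send_nodes`
// for a later retry, and the thread never blocks here.
bool try_send(SendBufferLock& lock, int node_id, std::deque<int>& send_nodes) {
    if (lock.try_lock()) {
        // ... transmit the node's pending send buffer data here ...
        lock.unlock();
        return true;
    }
    send_nodes.push_back(node_id);  // retry this node later
    return false;
}
```

Because the send thread backs off instead of waiting, a worker thread holding the same lock can always make progress, which breaks the mutual wait behind the Stuck in Send reports.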