- As part of ongoing work to improve the handling of local checkpoints and to minimize the occurrence of issues relating to Error 410 (REDO log overloaded) during LCPs, NDB now implements adaptive LCP control, which moderates LCP scan priority and LCP writes according to redo log usage. The following changes have been made with regard to NDB configuration parameters:

  - The default value of RecoveryWork is increased from 50 to 60 (60% of storage reserved for LCP files).

  - The new InsertRecoveryWork parameter controls the percentage of RecoveryWork that is reserved for insert operations. The default value is 40 (40% of RecoveryWork); the minimum and maximum are 0 and 70, respectively. Increasing this value allows for more writes during an LCP while limiting the total size of the LCP; decreasing InsertRecoveryWork limits the number of writes used during an LCP, but results in more space being used.
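  These parameters are set in the cluster global configuration file. A minimal sketch of the relevant section, using the default values described above purely for illustration:

  ```ini
  [ndbd default]
  # New default: 60% of LCP file storage reserved as recovery work
  RecoveryWork=60
  # Default: 40% of RecoveryWork reserved for insert operations (range 0-70)
  InsertRecoveryWork=40
  ```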
  Implementing LCP control provides several benefits to NDB deployments. Clusters should now survive heavy loads using default configurations much better than previously, and it should now be possible to run them reliably on systems where the available disk space is approximately 2.1 times the amount of memory allocated to the cluster (that is, the amount of DataMemory) or more. Bear in mind that this figure does not account for disk space used by tables on disk. During load testing into a single data node with decreasing redo log sizes, it was possible to load a very large quantity of data into NDB with 16 GB reserved for the redo log while using no more than 50% of the redo log at any point in time.
See What is New in NDB Cluster 7.6, as well as the descriptions of the parameters mentioned previously, for more information. (Bug #90709, Bug #27942974, Bug #27942583, WL #9638)
References: See also: Bug #27926532, Bug #27169282.
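  The 2.1 × DataMemory disk-space guideline above can be turned into a quick back-of-the-envelope calculation. The factor and the exclusion of tables on disk come from the text; the function name is illustrative, not part of any NDB tooling:

  ```python
  # Back-of-the-envelope estimate of disk space needed for LCP files,
  # per the guideline above: roughly 2.1 x DataMemory, NOT counting
  # any disk space used by tables on disk.

  def min_disk_for_lcps_gb(data_memory_gb, factor=2.1):
      """Approximate minimum disk space (GB) for reliable LCP operation."""
      return data_memory_gb * factor

  # Example: a data node configured with DataMemory=16G
  print(min_disk_for_lcps_gb(16))  # 33.6
  ```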
- ndbinfo Information Database: It was possible following a restart for (sometimes incomplete) fallback data to be used in populating the ndbinfo.processes table, which could lead to rows in this table with empty process_name values. Such fallback data is no longer used for this purpose. (Bug #27985339)
- NDB Client Programs: The executable file host_info is no longer used by ndb_setup.py. This file, along with its parent directory share/mcc/host_info, has been removed from the NDB Cluster distribution. In addition, installer code relating to an unused dojo.zip file was removed. (Bug #90743, Bug #27966467, Bug #27967561)

  References: See also: Bug #27621546.
- MySQL NDB ClusterJ: ClusterJ could not be built from source using JDK 9. (Bug #27977985)
- An NDB restore operation failed when all of the following conditions were met:

  - A data node was restarted.

  - The local checkpoint for the fragment being restored used two .ctl files.

  - The first of these .ctl files was the file in use.

  - The LCP in question consisted of more than 2002 parts.

  This happened because the array used in decompressing the .ctl file contained only 2002 elements, even though this data can contain up to 2048 parts, which led to memory being overwritten. The issue is fixed by increasing the size of the array to accommodate 2048 elements. (Bug #28303209)
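  The nature of this fix can be illustrated with a short sketch. The names and structure below are illustrative only, not the actual NDB source: sizing the buffer for the true maximum of 2048 parts, together with a bounds check, prevents the overwrite.

  ```python
  # Illustrative sketch only -- not NDB source code. The bug class:
  # a fixed-size buffer sized for 2002 parts, while the on-disk data
  # may describe up to 2048 parts.

  MAX_LCP_PARTS = 2048  # buffer capacity; before the fix this was 2002

  def decompress_parts(part_ids):
      """Copy part descriptions into a fixed-size buffer, with a bounds check."""
      if len(part_ids) > MAX_LCP_PARTS:
          # Refuse rather than write past the end of the buffer.
          raise ValueError(f"too many parts: {len(part_ids)} > {MAX_LCP_PARTS}")
      buffer = [0] * MAX_LCP_PARTS
      for i, part in enumerate(part_ids):
          # In C, i >= capacity here is exactly the memory overwrite described.
          buffer[i] = part
      return buffer

  # An LCP with 2048 parts now fits; with a 2002-element array it did not.
  assert len(decompress_parts(list(range(2048)))) == MAX_LCP_PARTS
  ```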
- Local checkpoints did not always handle DROP TABLE operations correctly. (Bug #27926532)

  References: This issue is a regression of: Bug #26908347, Bug #26968613.
- During the execution of CREATE TABLE ... IF NOT EXISTS, the internal open_table() function calls ha_ndbcluster::get_default_num_partitions() implicitly whenever it finds that the requested table already exists. In certain cases, get_default_num_partitions() was called without the associated thd_ndb object having been initialized, leading to failure of the statement with MySQL error 157 (Could not connect to storage engine). Now get_default_num_partitions() always checks for the existence of this thd_ndb object, and initializes it if necessary.
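  The pattern behind this fix, checking for a required per-thread object and initializing it lazily rather than assuming it exists, can be sketched as follows. The class and method names here are stand-ins, not the actual MySQL/NDB source identifiers:

  ```python
  # Illustrative sketch of the lazy-initialization pattern used by the fix;
  # ThdNdb and HandlerSketch are stand-ins, not actual NDB identifiers.

  class ThdNdb:
      """Stand-in for the per-thread NDB context object."""
      def __init__(self):
          self.connected = True

  class HandlerSketch:
      def __init__(self):
          self.thd_ndb = None  # may not have been set up for this thread yet

      def get_default_num_partitions(self):
          # The fix: check for the thd_ndb object and initialize it if
          # necessary, instead of failing with error 157
          # (Could not connect to storage engine).
          if self.thd_ndb is None:
              self.thd_ndb = ThdNdb()
          return 8  # placeholder: a partition count derived from the context

  h = HandlerSketch()
  print(h.get_default_num_partitions())  # 8, even though thd_ndb started as None
  ```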