The following list describes new features and other important changes in NDB Cluster 7.6 that are likely to be of interest:
New Disk Data table file format. A new file format for NDB Disk Data tables was introduced in NDB 7.6.2; it makes it possible for each Disk Data table to be identified uniquely, without reusing any table IDs. This should help resolve issues with page and extent handling that were visible to users as problems with rapid creation and dropping of Disk Data tables, and for which the old format did not provide a ready means of repair.
The new format is now used whenever new undo log file groups and tablespace data files are created. Files relating to existing Disk Data tables continue to use the old format until their tablespaces and undo log file groups are re-created.

Important: The old and new formats are not compatible; different data files or undo log files that are used by the same Disk Data table or tablespace cannot use a mix of formats.
To avoid problems relating to the old format, you should re-create any existing tablespaces and undo log file groups when upgrading. You can do this by performing an initial restart of each data node (that is, restarting each data node using the --initial option) as part of the upgrade process. When NDB 7.6 achieves GA status, you can expect this step to become mandatory when upgrading from NDB 7.5 or an earlier release series.
If you are using Disk Data tables, downgrading from any NDB 7.6 release, regardless of release status, to any NDB 7.5 or earlier release requires that you restart all data nodes with --initial as part of the downgrade process. This is because NDB 7.5 and earlier release series cannot read the new Disk Data file format.
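Restarting data nodes one at a time with the --initial option can be done from the management client; the following is a hedged sketch (the node IDs are examples, and the RESTART command options follow the ndb_mgm client documentation):

```
# Perform an initial restart (-i) of data node 2; repeat for each data node in turn
ndb_mgm -e "2 RESTART -i"

# Confirm that the node has rejoined the cluster before restarting the next one
ndb_mgm -e "SHOW"
```

Restarting the nodes one at a time keeps the cluster available throughout the procedure, provided each node group retains at least one running node.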
Data memory pooling and dynamic index memory. Memory required for indexes on NDB table columns is now allocated dynamically from the memory allocated for DataMemory.
For this reason, the IndexMemory configuration parameter is now deprecated, and is subject to removal in a future release series.

Important: Starting with NDB 7.6.2, if IndexMemory is set in the config.ini file, the management server issues on startup the warning "IndexMemory is deprecated, use Number bytes on each ndbd(DB) node allocated for storing indexes instead", and any memory assigned to this parameter is automatically added to DataMemory.
In addition, the default value for DataMemory has been increased to 98M, and the default for IndexMemory has been decreased to 0.
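With pooling in effect, a configuration needs to set only DataMemory; a minimal config.ini fragment is sketched below (the value shown is purely illustrative):

```ini
[ndbd default]
# IndexMemory is deprecated in NDB 7.6.2; index memory is now
# allocated dynamically out of DataMemory, so only DataMemory is set.
DataMemory=2G
```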
The pooling together of index memory with data memory simplifies the configuration of NDB; a further benefit of these changes is that scaling up by increasing the number of LDM threads is no longer limited by having set an insufficiently large value for IndexMemory. This is because index memory is no longer a static quantity allocated only once (when the cluster starts), but can now be allocated and deallocated as required. Previously, it was sometimes the case that increasing the number of LDM threads could lead to index memory exhaustion while large amounts of DataMemory remained available.
As part of this work, a number of uses of DataMemory not directly related to storage of table data now use transaction memory instead.
For this reason, it may be necessary on some systems to increase SharedGlobalMemory to allow transaction memory to grow when needed, such as when using NDB Cluster Replication, which requires a great deal of buffering on the data nodes. On systems performing initial bulk loads of data, it may be necessary to break up very large transactions into smaller parts.
In addition, data nodes now generate MemoryUsage events (see “NDB Cluster Log Events”) and write appropriate messages to the cluster log when resource usage reaches 99%, as well as when it reaches 80%, 90%, or 100%, as before.
Other related changes are listed here:
REPORT MEMORYUSAGE and other commands which expose memory consumption now show index memory consumption in terms of 32K pages (previously, these were 8K pages).
The ndbinfo.resources table now shows the TRANSACTION_MEMORY resource, and the RESERVED resource has been removed.
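This change can be observed from the mysql client; a sketch follows (the query assumes the ndbinfo database is enabled on the SQL node):

```sql
-- TRANSACTION_MEMORY now appears among the listed resources,
-- while RESERVED no longer does
SELECT * FROM ndbinfo.resources;
```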
ndbinfo processes and config_nodes tables. NDB 7.6.2 adds two tables to the ndbinfo information database that provide information about cluster nodes; these tables are listed here:
config_nodes: This table shows the node ID, process type, and host name for each node listed in an NDB cluster's configuration file.
processes: This table shows information about nodes currently connected to the cluster; this information includes the process name and system process ID. For each data node and SQL node, it also shows the process ID of the node's angel process. In addition, the table shows a service address for each connected node; this address can be set in NDB API applications using the Ndb_cluster_connection::set_service_uri() method, which is also added in NDB 7.6.2.
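The two tables can be queried from the mysql client; the following is a sketch, with column selections based on the ndbinfo documentation for these tables:

```sql
-- Nodes defined in the cluster's configuration file
SELECT node_id, node_type, node_hostname
  FROM ndbinfo.config_nodes;

-- Nodes currently connected, including process and angel process IDs
-- and the service address (service_URI) for each node
SELECT node_id, process_name, process_id, angel_process_id, service_URI
  FROM ndbinfo.processes;
```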
System name. The system name of an NDB cluster can be used to identify a specific cluster. Beginning with NDB 7.6.2, the MySQL Server shows this name as the value of the Ndb_system_name status variable; NDB API applications can obtain it using the Ndb_cluster_connection::get_system_name() method, which is added in the same release.
A system name based on the time the management server was started is generated automatically; you can override this value by adding a [system] section to the cluster's configuration file and setting the Name parameter to a value of your choice in this section, prior to starting the management server.
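Overriding the generated system name can be sketched as the following config.ini fragment (the name shown is an example):

```ini
[system]
# Human-readable identifier for this cluster;
# must be set before starting the management server (ndb_mgmd)
Name=MYCLUSTER_PROD
```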
ndb_import CSV import tool. ndb_import, added in NDB Cluster 7.6.2, loads CSV-formatted data directly into an NDB table using the NDB API (a MySQL server is needed only to create the table and the database in which it is located). ndb_import can be regarded as an analog of mysqlimport or the LOAD DATA INFILE SQL statement, and supports many of the same or similar options for formatting the data.
Assuming that the database and target NDB table exist, ndb_import needs only a connection to the cluster's management server (ndb_mgmd) to perform the import; for this reason, there must be an [api] slot available to the tool in the cluster's config.ini file.
See Section 21.4.14, “ndb_import — Import CSV Data Into NDB”, for more information.
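A minimal invocation might look like the following sketch (the database name, file name, host, and delimiter are examples; option spellings follow the ndb_import documentation):

```
# Import t1.csv into table t1 in database mydb,
# connecting directly to the cluster's management server
ndb_import --ndb-connectstring=mgmhost:1186 \
    --fields-terminated-by="," mydb t1.csv
```

As with LOAD DATA INFILE, the base name of the CSV file determines the target table name.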
ndb_top monitoring tool. Added the ndb_top utility, which shows CPU load and usage information for an NDB data node in real time. This information can be displayed in text format, as an ASCII graph, or both. The graph can be shown in color or in grayscale.
ndb_top connects to an NDB Cluster SQL node (that is, a MySQL Server). For this reason, the program must be able to connect as a MySQL user having the SELECT privilege on tables in the ndbinfo database.
ndb_top is available for Linux, Solaris, and Mac OS X platforms beginning with NDB 7.6.3. It is not currently available for Windows platforms.
For more information, see Section 21.4.30, “ndb_top — View CPU usage information for NDB threads”.
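A typical invocation might be sketched as follows (host, user, and node ID are examples; option names are taken from the ndb_top documentation):

```
# Monitor data node 2, connecting through the SQL node on sqlhost;
# show both text output and a color ASCII graph
ndb_top --host=sqlhost --user=root --passwd --node-id=2 --text --graph --color
```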
Code cleanup. A significant number of debugging statements and printouts not necessary for normal operations have been moved into code used only when testing or debugging NDB, or removed altogether. This removal of overhead should result in a noticeable improvement in the performance of LDM and TC threads, on the order of 10% in many cases.
LDM thread and LCP improvements. Previously, when a local data management (LDM) thread experienced I/O lag, it wrote to local checkpoints more slowly. This could happen, for example, during a disk overload condition. Problems could occur because other LDM threads did not always observe this state, and so did not reduce their own write speed accordingly.
NDB now tracks I/O lag mode globally, so that this state is reported as soon as at least one thread is writing in I/O lag mode; it then ensures that the reduced write speed for this LCP is enforced for all LDM threads for the duration of the slowdown condition. Because the reduced write speed is now observed by all LDM instances, overall capacity is increased; this enables the disk overload (or other condition inducing I/O lag) to be overcome more quickly than was previously the case.
NDB error identification. In NDB 7.6.4 and later, error messages and information can be obtained using the mysql client from a new error_messages table in the ndbinfo information database. In addition, the 7.6.4 release introduces a command-line client, ndb_perror, for obtaining information from NDB error codes; this replaces the use of perror with --ndb, which is now deprecated and subject to removal in a future release.
For more information, see “The ndbinfo error_messages Table”, and Section 21.4.17, “ndb_perror — Obtain NDB error message information”.
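Looking up an error code with the new tool is a single command; a sketch follows (the error code is an arbitrary example):

```
# Print the message and classification for NDB error code 323
ndb_perror 323
```

The same information can be retrieved in SQL by selecting from ndbinfo.error_messages on a connected SQL node.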
SPJ improvements. When executing a scan as a pushed join (that is, when the root of the query is a scan), the DBTC block sends an SPJ request to a DBSPJ instance on the same node as the fragment to be scanned. Formerly, one such request was sent for each of the node's fragments. Because the number of DBSPJ instances is normally set lower than the number of LDM instances, this meant that all SPJ instances were involved in the execution of a single query, and, in fact, some SPJ instances could (and did) receive multiple requests from the same query. In NDB 7.6.4, it becomes possible for a single SPJ request to handle a set of root fragments to be scanned, so that only a single SPJ request (SCAN_FRAGREQ) needs to be sent to any given SPJ instance (DBSPJ block) on each node.
Because DBSPJ consumes a relatively small amount of the total CPU used when evaluating a pushed join, unlike the LDM block (which is responsible for the majority of the CPU usage), introducing multiple SPJ blocks adds some parallelism, but the additional overhead also increases. By enabling a single SPJ request to handle a set of root fragments to be scanned, such that only a single SPJ request is sent to each DBSPJ instance on each node and batch sizes are allocated per fragment, the multi-fragment scan can obtain a larger total batch size. This allows for some scheduling optimizations to be done within the SPJ block, which can scan a single fragment at a time (giving it the total batch size allocation), scan all fragments in parallel using smaller sub-batches, or use some combination of the two.
This work is expected to increase performance of pushed-down joins for the following reasons:
Since multiple root fragments can be scanned by a single SPJ request, fewer SPJ requests are needed when executing a pushed join.

The increased batch size allocation available for each fragment should also, in most cases, result in fewer requests being needed to complete a join.