The location of this directory can be set using FileSystemPath; the directory itself is always named ndb_nodeid_fs, where nodeid is the data node's node ID. The file system directory contains the following directories:

- Directories named D1 and D2, each of which contains 2 subdirectories:
  - DBDICT: Contains data dictionary information. This is stored in:
    - The file P0.SchemaLog
    - A set of directories T0, T1, T2, ..., each of which contains an S0.SchemaLog file
- Directories named D8, D9, D10, and D11, each of which contains a directory named DBLQH. These contain the redo log, which is divided into four parts stored in these directories, with redo log part 0 being stored in D8, part 1 in D9, and so on.
  Within each directory can be found a DBLQH subdirectory containing the N redo log files; these are named S0.FragLog, S1.FragLog, ..., SN.FragLog, where N is equal to the value of the NoOfFragmentLogFiles configuration parameter. The default value for NoOfFragmentLogFiles is 16. The default size of each of these files is 16 MB, controlled by the FragmentLogFileSize configuration parameter.
  The size of each of the four redo log parts is NoOfFragmentLogFiles * FragmentLogFileSize. You can find out how much space the redo log is using with the ndbinfo logspaces table.
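As an illustration, with the default parameter values given above, the redo log occupies 1 GB per data node. A minimal sketch of the arithmetic (the values shown are the documented defaults; substitute those from your own config.ini):

```python
# Redo log size under default NDB settings.
NO_OF_FRAGMENT_LOG_FILES = 16          # NoOfFragmentLogFiles default
FRAGMENT_LOG_FILE_SIZE = 16 * 1024**2  # FragmentLogFileSize default: 16 MB

# One redo log part (one of D8..D11) holds N fragment log files.
part_size = NO_OF_FRAGMENT_LOG_FILES * FRAGMENT_LOG_FILE_SIZE

# The redo log as a whole is divided into four parts.
total_size = 4 * part_size

print(part_size // 1024**2, "MB per part")    # 256 MB
print(total_size // 1024**2, "MB in total")   # 1024 MB
```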
  - DBDIH: This directory contains the file PX.sysfile, which records information such as the last GCI, restart status, and node group membership of each node; its structure is defined in storage/ndb/src/kernel/blocks/dbdih/Sysfile.hpp in the NDB Cluster source tree. In addition, the SX.FragList files keep records of the fragments belonging to each table.

    The format used for the sysfile was updated from version 1 to version 2 in NDB 8.0.
  - LCP: When using full local checkpoints (LCPs), this directory holds 2 subdirectories, named 0 and 1, each of which contains local checkpoint data files, one per local checkpoint. In NDB 7.6 and later, when using partial LCPs (EnablePartialLcp is true), there can be as many as 2064 subdirectories under LCP, named 0, 1, ..., 2063, with a data file stored in each one. These directories are created as needed, in sequential order; for example, if the last data file used in the previous partial LCP was numbered 61 (in LCP/61), the next partial LCP data file is created in LCP/62.
    These subdirectories each contain a number of files whose names follow the pattern TNFM.Data, where N is a table ID and M is a fragment number. Each data node typically has one primary fragment and one backup fragment. This means that, for an NDB Cluster having 2 data nodes, and with NoOfReplicas equal to 2, M is either 0 or 1. For a 4-node cluster with NoOfReplicas equal to 2, M is either 0 or 2 on node group 1, and either 1 or 3 on node group 2.

    For a partial local checkpoint, a single data file is normally used, but when more than 12.5% of the table rows stored are to be checkpointed, up to 8 data files can be used for each LCP. Altogether, there can be from 1 to 2048 data files at any given time.
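The TNFM.Data naming pattern can be illustrated with a small parser. This is a sketch only: the helper name and the use of a regular expression are illustrative assumptions, not part of NDB itself.

```python
import re

# Hypothetical helper: split an LCP data file name of the form
# T<N>F<M>.Data into its table ID (N) and fragment number (M).
_LCP_NAME = re.compile(r"^T(\d+)F(\d+)\.Data$")

def parse_lcp_data_file(name):
    m = _LCP_NAME.match(name)
    if m is None:
        raise ValueError("not an LCP data file name: %r" % name)
    return int(m.group(1)), int(m.group(2))  # (table_id, fragment)

# For a 2-data-node cluster with NoOfReplicas = 2, M is 0 or 1:
print(parse_lcp_data_file("T13F0.Data"))  # (13, 0)
print(parse_lcp_data_file("T13F1.Data"))  # (13, 1)
```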
    When using ndbmtd there may be more than one primary fragment per node. In this case, M is a number in the range of 0 to the number of LQH worker threads in the entire cluster, less 1. The number of fragments on each data node is equal to the number of LQH workers on that node times NoOfReplicas.

    Increasing MaxNoOfExecutionThreads does not change the number of fragments used by existing tables; only newly created tables automatically use the new fragment count. To force the new fragment count to be used by an existing table after increasing MaxNoOfExecutionThreads, you must perform an ALTER TABLE ... REORGANIZE PARTITION statement (just as when adding new node groups).
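The per-node fragment count described above can be sketched as simple arithmetic; the function and its parameter names are illustrative, not part of NDB:

```python
def fragments_per_data_node(lqh_workers, no_of_replicas):
    """Fragments stored on each data node: one per LQH worker
    thread on that node, times NoOfReplicas (per the text above)."""
    return lqh_workers * no_of_replicas

# Example: 4 LQH worker threads per node and NoOfReplicas = 2
# give 8 fragments on each data node.
print(fragments_per_data_node(4, 2))  # 8
```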
  - LG: Default location for Disk Data undo log files. See Section 1.1.4, “Files Used by NDB Cluster Disk Data Tables”, NDB Cluster Disk Data Tables, and CREATE LOGFILE GROUP Statement, for more information.
  - TS: Default location for Disk Data tablespace data files. See Section 1.1.4, “Files Used by NDB Cluster Disk Data Tables”, NDB Cluster Disk Data Tables, and CREATE TABLESPACE Statement, for more information.