The listing in this section provides information about parameters
used in the
[mgm] section of a
config.ini file for configuring NDB Cluster
management nodes. For detailed descriptions and other additional
information about each of these parameters, see
“Defining an NDB Cluster Management Server”.
ArbitrationDelay: When asked to arbitrate, arbitrator waits this long before voting (milliseconds)
ArbitrationRank: If 0, then management node is not arbitrator. Kernel selects arbitrators in order 1, 2
DataDir: Data directory for this node
ExecuteOnComputer: String referencing an earlier defined COMPUTER
ExtraSendBufferMemory: Memory to use for send buffers in addition to any allocated by TotalSendBufferMemory or SendBufferMemory. Default (0) allows up to 16MB.
HeartbeatIntervalMgmdMgmd: Time between management node-to-management node heartbeats; the connection between management nodes is considered lost after 3 missed heartbeats.
HeartbeatThreadPriority: Set heartbeat thread policy and priority for management nodes; see manual for allowed values
HostName: Host name or IP address for this management node.
Id: Number identifying the management node. Deprecated; use NodeId instead.
LogDestination: Where to send log messages: console, system log, or specified log file
MaxNoOfSavedEvents: Not used
NodeId: Number uniquely identifying the management node among all nodes in the cluster.
PortNumber: Port number used to send commands to and fetch configuration from the management server
PortNumberStats: Port number used to get statistical information from a management server
TotalSendBufferMemory: Total memory to use for all transporter send buffers
wan: Use WAN TCP setting as default
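As an illustration of how several of these parameters fit together, the following is a minimal sketch of a management node section in config.ini; the host name, data directory, and node ID shown are placeholder values, not defaults from the manual:

```ini
# Hypothetical [ndb_mgmd] section ([mgm] is accepted as an alias)
[ndb_mgmd]
# Unique node ID for this node within the cluster
# (preferred over the deprecated Id parameter)
NodeId=1
# Host name or IP address for this management node (placeholder)
HostName=mgm.example.com
# Directory for this node's log and PID files (placeholder path)
DataDir=/var/lib/mysql-cluster
# Port on which the management server accepts commands and
# serves configuration (1186 is the registered default)
PortNumber=1186
# Rank 1 makes this node a preferred arbitrator;
# 0 would exclude it from arbitration
ArbitrationRank=1
```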
After making changes in a management node's configuration, it is necessary to perform a rolling restart of the cluster for the new configuration to take effect. See “Defining an NDB Cluster Management Server”, for more information.
To add new management servers to a running NDB Cluster, it is
also necessary to perform a rolling restart of all cluster nodes
after modifying any existing
config.ini files. For more information about issues arising when using
multiple management nodes, see
“Limitations Relating to Multiple NDB Cluster Nodes”.
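For example, a config.ini file for a cluster with two management nodes would contain one section per node, each with its own NodeId and HostName; the following sketch uses placeholder host names and IDs:

```ini
# Hypothetical fragment: two management nodes in one cluster
[ndb_mgmd]
NodeId=1
HostName=mgm1.example.com

[ndb_mgmd]
NodeId=2
HostName=mgm2.example.com
```

With more than one management node defined, each node's NodeId must be unique, which is why the deprecated Id parameter should be avoided in favor of NodeId.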