You need a minimum of three instances in an InnoDB Cluster to make it tolerant to the failure of one instance. Adding further instances increases the tolerance to failure of an InnoDB Cluster.
Group Replication implements compatibility policies which consider the patch version of the instances. The Cluster.addInstance() operation detects any incompatibility and, in that event, terminates with an error. See Checking the MySQL Version on Instances and Combining Different Member Versions in a Group.
If the instance already contains data, use the cluster.checkInstanceState() function first to verify that the existing data does not prevent the instance from joining a cluster. See Checking Instance State.

Note: Cluster.checkInstanceState() is deprecated and subject to removal in a future version of MySQL Shell. The checks it performed are integrated into Cluster.addInstance().
Use the Cluster.addInstance(instance) function to add an instance to the cluster, where instance is connection information to a configured instance, see Section 7.4.2, “Configuring Production Instances for InnoDB Cluster Usage”. For example:
mysql-js> cluster.addInstance('icadmin@ic-2:3306')
A new instance will be added to the InnoDB cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.
Please provide the password for 'icadmin@ic-2:3306': ********
Adding instance to the cluster ...
Validating instance at ic-2:3306...
This instance reports its own address as ic-2
Instance configuration is suitable.
The instance 'icadmin@ic-2:3306' was successfully added to the cluster.
The options dictionary of the addInstance(instance[, options]) function provides the following attributes:
- label: an identifier for the instance being added. The label must be non-empty and no greater than 256 characters long. It must be unique within the cluster and can only contain alphanumeric, _ (underscore), . (period), - (hyphen), or : (colon) characters.
- recoveryMethod: preferred method of state recovery. May be auto, clone, or incremental. Default is auto.
- waitRecovery: integer value to indicate whether the command shall wait for the recovery process to finish, and its verbosity level. Deprecated and subject to removal in a future version; use recoveryProgress instead.
- recoveryProgress: integer value which defines the recovery process verbosity level:
  - 0: do not show any progress information.
  - 1: show detailed static progress information.
  - 2: show detailed dynamic progress information using progress bars.
- password: the instance connection password. This option is deprecated, and scheduled for removal in a future version.
- memberSslMode: SSL mode used on the instance.
- ipWhitelist: the list of hosts allowed to connect to the instance for Group Replication. Deprecated, and removed in MySQL Shell 8.3.0; use ipAllowlist instead.
- ipAllowlist: the list of hosts allowed to connect to the instance for Group Replication.
- localAddress: string value with the Group Replication local address to be used instead of the automatically generated one.
- groupSeeds: string value with a comma-separated list of the Group Replication peer addresses to be used instead of the automatically generated ones. Deprecated and ignored.
- interactive: Boolean value used to enable or disable the wizards in the command execution, meaning that prompts and confirmations will be provided or not according to the value set. The default value is equal to MySQL Shell wizard mode. This option is deprecated, and scheduled for removal in a future version.
- exitStateAction: string value indicating the Group Replication exit state action.
- memberWeight: integer value with a percentage weight for automatic primary election on failover.
- autoRejoinTries: integer value defining the number of times an instance attempts to rejoin the cluster after being expelled.
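The label rules above are simple enough to capture in a few lines. The following is an illustrative sketch, not part of MySQL Shell, that checks a proposed label against the documented constraints:

```python
import re

# Hypothetical helper (not a MySQL Shell API): validates an instance
# label against the documented rules: non-empty, at most 256 characters,
# and only alphanumeric, '_', '.', '-', or ':' characters.
LABEL_RE = re.compile(r'[A-Za-z0-9_.:-]{1,256}')

def is_valid_label(label: str) -> bool:
    # fullmatch ensures the whole string satisfies the pattern,
    # which also enforces the non-empty and length limits.
    return LABEL_RE.fullmatch(label) is not None

print(is_valid_label('ic-2:3306'))   # host:port style labels are allowed
print(is_valid_label('bad label'))   # spaces are rejected
```

Uniqueness within the cluster cannot be checked locally like this; AdminAPI verifies it against the cluster's metadata.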
When a new instance is added to the cluster, the local address
for this instance is automatically added to the
group_replication_group_seeds
variable on all online cluster instances in order to allow them
to use the new instance to rejoin the group, if needed.
The instances listed in
group_replication_group_seeds
are used according to the order in which they appear in the
list. This ensures user specified settings are used first and
preferred. See Section 7.5.2, “Customizing InnoDB Cluster Member Servers” for
more information.
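To make that ordering concrete, here is a small sketch (the helper name is hypothetical, not AdminAPI code) of how a joining member's local address can be appended to an existing group_replication_group_seeds value while keeping user-specified seeds first:

```python
def append_seed(current_seeds: str, new_local_address: str) -> str:
    """Append a new peer address to a comma-separated seed list.

    Existing (user-specified) entries keep their position, so they are
    still tried first; the new member's address is added at the end.
    """
    seeds = [s for s in current_seeds.split(',') if s]
    if new_local_address not in seeds:
        seeds.append(new_local_address)
    return ','.join(seeds)

print(append_seed('ic-1:33061,ic-2:33061', 'ic-3:33061'))
# ic-1:33061,ic-2:33061,ic-3:33061
```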
If you are using MySQL 8.0.17 or later you can choose how the instance recovers the transactions it requires to synchronize with the cluster. Only when the joining instance has recovered all of the transactions previously processed by the cluster can it then join as an online instance and begin processing transactions. For more information, see Section 7.4.6, “Using MySQL Clone with InnoDB Cluster”.
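As a rough mental model of that choice (a deliberate simplification, not the actual AdminAPI decision logic), treat each instance's executed GTIDs as a set: incremental recovery is only viable when every transaction the joiner has executed is already known to the cluster, otherwise the joiner must be re-provisioned, for example via clone:

```python
def pick_recovery_method(joiner_gtids: set, cluster_gtids: set) -> str:
    """Simplified stand-in for recoveryMethod: 'auto' (illustrative only).

    If the joiner's executed transactions are a subset of the cluster's,
    the missing ones can be fetched incrementally from the binary logs;
    otherwise a full state transfer (clone) is required.
    """
    if joiner_gtids <= cluster_gtids:
        return 'incremental'
    return 'clone'

print(pick_recovery_method({'t1'}, {'t1', 't2', 't3'}))  # incremental
print(pick_recovery_method({'tX'}, {'t1', 't2'}))        # clone
```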
You can configure how Cluster.addInstance() behaves, letting recovery operations proceed in the background or monitoring different levels of progress in MySQL Shell.
Depending on which option you choose to recover the instance from the cluster, you see different output in MySQL Shell. Suppose that you are adding the instance ic-2 to the cluster, and ic-1 is the seed or donor.
-
When you use MySQL Clone to recover an instance from the cluster, the output looks like:
Validating instance at ic-2:3306...
This instance reports its own address as ic-2:3306
Instance configuration is suitable.
A new instance will be added to the InnoDB cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.
Adding instance to the cluster...
Monitoring recovery process of the new cluster member. Press ^C to stop monitoring and let it continue in background.
Clone based state recovery is now in progress.
NOTE: A server restart is expected to happen as part of the clone process. If the
server does not support the RESTART command or does not come back after a
while, you may need to manually start it back.
* Waiting for clone to finish...
NOTE: ic-2:3306 is being cloned from ic-1:3306
** Stage DROP DATA: Completed
** Clone Transfer
    FILE COPY  ############################################################  100%  Completed
    PAGE COPY  ############################################################  100%  Completed
    REDO COPY  ############################################################  100%  Completed
NOTE: ic-2:3306 is shutting down...
* Waiting for server restart... ready
* ic-2:3306 has restarted, waiting for clone to finish...
** Stage RESTART: Completed
* Clone process has finished: 2.18 GB transferred in 7 sec (311.26 MB/s)
State recovery already finished for 'ic-2:3306'
The instance 'ic-2:3306' was successfully added to the cluster.
Observe the warnings about server restart; you might have to restart an instance manually. See RESTART Statement.
-
When you use incremental recovery to recover an instance from the cluster, the output looks like:
Incremental distributed state recovery is now in progress.
* Waiting for incremental recovery to finish...
NOTE: 'ic-2:3306' is being recovered from 'ic-1:3306'
* Distributed recovery has finished
To cancel the monitoring of the recovery phase, issue CONTROL+C. This stops the monitoring but the recovery process continues in the background. The recoveryProgress integer option can be used with the Cluster.addInstance() operation to display the progress of the recovery phase.
To verify the instance has been added, use the cluster
instance's status()
function. For
example this is the status output of a sandbox cluster after
adding a second instance:
mysql-js> cluster.status()
{
"clusterName": "testCluster",
"defaultReplicaSet": {
"name": "default",
"primary": "ic-1:3306",
"ssl": "REQUIRED",
"status": "OK_NO_TOLERANCE",
"statusText": "Cluster is NOT tolerant to any failures.",
"topology": {
"ic-1:3306": {
"address": "ic-1:3306",
"mode": "R/W",
"readReplicas": {},
"role": "HA",
"status": "ONLINE"
},
"ic-2:3306": {
"address": "ic-2:3306",
"mode": "R/O",
"readReplicas": {},
"role": "HA",
"status": "ONLINE"
}
}
},
"groupInformationSourceMember": "mysql://icadmin@ic-1:3306"
}
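The OK_NO_TOLERANCE status follows from Group Replication's majority requirement: a group of n members needs floor(n/2) + 1 members alive to retain quorum, so it tolerates n - (floor(n/2) + 1) failures. A quick sketch of that arithmetic:

```python
def failure_tolerance(members: int) -> int:
    """Number of member failures a group can survive while keeping a
    majority (floor(n/2) + 1 members) available."""
    return members - (members // 2 + 1)

for n in (2, 3, 5):
    print(n, failure_tolerance(n))
# 2 0   <- the two-member cluster above: not tolerant to any failure
# 3 1
# 5 2
```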
How you proceed depends on whether the instance is local or
remote to the instance MySQL Shell is running on, and whether
the instance supports persisting configuration changes
automatically, see
Section 6.2.4, “Persisting Settings”. If the instance
supports persisting configuration changes automatically, you do
not need to persist the settings manually and can either add
more instances or continue to the next step. If the instance
does not support persisting configuration changes automatically,
you have to configure the instance locally. See
Configuring Instances with
dba.configureLocalInstance()
. This is essential to ensure that instances rejoin the cluster if they leave it.
If the instance has
super_read_only=ON
then you
might need to confirm that AdminAPI can set
super_read_only=OFF
. See
Instance Configuration in Super Read-only Mode for more
information.
Once you have your cluster deployed you can configure MySQL Router to provide high availability, see Section 6.10, “Using MySQL Router with AdminAPI, InnoDB Cluster, and InnoDB ReplicaSet”.
Note: Cluster.checkInstanceState() is deprecated and subject to removal in a future version of MySQL Shell. The checks implemented by this method are integrated into the methods which require them, such as Cluster.addInstance(), Cluster.rejoinInstance(), and so on.
The cluster.checkInstanceState()
function
can be used to verify the existing data on an instance does
not prevent it from joining a cluster. This process works by
validating the instance's global transaction identifier (GTID)
state compared to the GTIDs already processed by the cluster.
For more information on GTIDs see
GTID Format and Storage. This check
enables you to determine if an instance which has processed
transactions can be added to the cluster.
The following demonstrates issuing this in a running MySQL Shell:
mysql-js> cluster.checkInstanceState('icadmin@ic-4:3306')
The output of this function can be one of the following:
- OK new: the instance has not executed any GTID transactions, therefore it cannot conflict with the GTIDs executed by the cluster
- OK recoverable: the instance has executed GTIDs which do not conflict with the executed GTIDs of the cluster seed instances
- ERROR diverged: the instance has executed GTIDs which diverge from the executed GTIDs of the cluster seed instances
- ERROR lost_transactions: the instance has more executed GTIDs than the executed GTIDs of the cluster seed instances
Instances with an OK status can be added to the cluster because any data on the instance is consistent with the cluster. In other words the instance being checked has not executed any transactions which conflict with the GTIDs executed by the cluster, and can be recovered to the same state as the rest of the cluster instances.
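Modeling executed GTIDs as plain sets (a simplification of real GTID sets, and not MySQL Shell code), the four outcomes above map onto set relations between the instance being checked and the cluster seed:

```python
def check_instance_state(instance: set, cluster: set) -> str:
    """Classify an instance's executed-GTID set against the cluster's,
    mirroring the four cluster.checkInstanceState() outcomes.
    Illustrative sketch only."""
    if not instance:
        return 'OK new'                   # no transactions executed yet
    if instance <= cluster:
        return 'OK recoverable'           # nothing the cluster does not know
    if instance >= cluster:
        return 'ERROR lost_transactions'  # instance is ahead of the cluster
    return 'ERROR diverged'               # each side has transactions the other lacks

print(check_instance_state(set(), {'t1'}))          # OK new
print(check_instance_state({'t1'}, {'t1', 't2'}))   # OK recoverable
```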