You need a minimum of three instances in an InnoDB Cluster to make it tolerant to the failure of one instance. Adding further instances increases the tolerance to failure of an InnoDB Cluster.
Group Replication implements compatibility policies which
consider the version of the instances when combining them in a
group. The Cluster.addInstance() operation
detects any incompatibility between the version of the joining
instance and the rest of the cluster, and in that event the
operation terminates with an error. See
Checking the MySQL Version on Instances and
Combining Different Member Versions in a Group.
Use the Cluster.addInstance(instance)
function to add an instance to the cluster, where
instance is connection information to
a configured instance, see
Section 7.4.2, “Configuring Production Instances for InnoDB Cluster Usage”. For example:
mysql-js> cluster.addInstance('icadmin@ic-2:3306')
A new instance will be added to the InnoDB cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.
Please provide the password for 'icadmin@ic-2:3306': ********
Adding instance to the cluster ...
Validating instance at ic-2:3306...
This instance reports its own address as ic-2
Instance configuration is suitable.
The instance 'icadmin@ic-2:3306' was successfully added to the cluster.
The options dictionary of the
addInstance(instance[, options])
function provides the following attributes:
- label: an identifier for the instance being added. The label must be non-empty and no greater than 256 characters long. It must be unique within the cluster and can only contain alphanumeric, _ (underscore), . (period), - (hyphen), or : (colon) characters.

- recoveryMethod: the preferred method of state recovery. May be auto, clone, or incremental. Default is auto.

- recoveryProgress: integer value which defines the recovery process verbosity level:

  0: do not show any progress information.

  1: show detailed static progress information.

  2: show detailed dynamic progress information using progress bars.

- ipAllowlist: the list of hosts allowed to connect to the instance for Group Replication.

- localAddress: string value with the Group Replication local address to be used instead of the automatically generated one.

- exitStateAction: string value indicating the Group Replication exit state action.

- memberWeight: integer value with a percentage weight for automatic primary election on failover.

- autoRejoinTries: integer value defining the number of times an instance attempts to rejoin the cluster after being expelled.
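For example, several of these options can be combined in a single call. The following is an illustrative sketch: the instance address ic-3 and the option values shown are assumptions for this example, not values taken from this manual.

mysql-js> cluster.addInstance('icadmin@ic-3:3306', {label: 'ic-3', recoveryMethod: 'clone', memberWeight: 75})

Here label names the instance within the cluster, recoveryMethod forces clone-based state recovery instead of letting the operation choose, and memberWeight raises the instance's priority in automatic primary election on failover.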
When a new instance is added to the cluster, the local address
for this instance is automatically added to the
group_replication_group_seeds
variable on all online cluster instances in order to allow them
to use the new instance to rejoin the group, if needed.
The instances listed in
group_replication_group_seeds
are used according to the order in which they appear in the
list. This ensures user-specified settings are used first and
preferred. See Section 7.5.2, “Customizing InnoDB Cluster Member Servers” for
more information.
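You can inspect the resulting seed list on any online instance with plain SQL. The statement below is a sketch; the value returned depends entirely on your cluster's topology:

mysql> SELECT @@global.group_replication_group_seeds;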
If you are using MySQL 8.0.17 or later you can choose how the instance recovers the transactions it requires to synchronize with the cluster. Only when the joining instance has recovered all of the transactions previously processed by the cluster can it then join as an online instance and begin processing transactions. For more information, see Section 7.4.6, “Using MySQL Clone with InnoDB Cluster”.
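For example, to request incremental recovery explicitly rather than letting the operation choose, pass the recoveryMethod option (the instance address here is an assumption for illustration):

mysql-js> cluster.addInstance('icadmin@ic-2:3306', {recoveryMethod: 'incremental'})

If the option is omitted or set to auto, the operation selects between clone-based and incremental recovery based on what is available and safe for the joining instance.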
You can configure how Cluster.addInstance()
behaves, letting recovery operations proceed in the background
or monitoring different levels of progress in MySQL Shell.
Depending on which option you choose to recover the instance from the cluster, you see different output in MySQL Shell. Suppose that you are adding the instance ic-2 to the cluster, and ic-1 is the seed or donor.
-
When you use MySQL Clone to recover an instance from the cluster, the output looks like:
Validating instance at ic-2:3306...

This instance reports its own address as ic-2:3306
Instance configuration is suitable.
A new instance will be added to the InnoDB cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.

Adding instance to the cluster...

Monitoring recovery process of the new cluster member. Press ^C to stop
monitoring and let it continue in background.
Clone based state recovery is now in progress.

NOTE: A server restart is expected to happen as part of the clone process. If the
server does not support the RESTART command or does not come back after a while,
you may need to manually start it back.

* Waiting for clone to finish...
NOTE: ic-2:3306 is being cloned from ic-1:3306
** Stage DROP DATA: Completed
** Clone Transfer
    FILE COPY  ############################################################  100%  Completed
    PAGE COPY  ############################################################  100%  Completed
    REDO COPY  ############################################################  100%  Completed

NOTE: ic-2:3306 is shutting down...

* Waiting for server restart... ready
* ic-2:3306 has restarted, waiting for clone to finish...
** Stage RESTART: Completed
* Clone process has finished: 2.18 GB transferred in 7 sec (311.26 MB/s)

State recovery already finished for 'ic-2:3306'

The instance 'ic-2:3306' was successfully added to the cluster.
Observe the warnings about server restarts; you might have to manually restart an instance. See RESTART Statement.
-
When you use incremental recovery to recover an instance from the cluster, the output looks like:
Incremental distributed state recovery is now in progress.

* Waiting for incremental recovery to finish...
NOTE: 'ic-2:3306' is being recovered from 'ic-1:3306'
* Distributed recovery has finished
To cancel the monitoring of the recovery phase, issue
CONTROL+C. This stops the monitoring but the
recovery process continues in the background. The
recoveryProgress integer option can be used
with the Cluster.addInstance()
operation to display the progress of the recovery phase.
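For example, to suppress recovery progress output from the start and let recovery run silently in the background, set the option to 0 (the instance address is illustrative):

mysql-js> cluster.addInstance('icadmin@ic-2:3306', {recoveryProgress: 0})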
To verify the instance has been added, use the
cluster's status() function. For
example this is the status output of a sandbox cluster after
adding a second instance:
mysql-js> cluster.status()
{
"clusterName": "testCluster",
"defaultReplicaSet": {
"name": "default",
"primary": "ic-1:3306",
"ssl": "REQUIRED",
"status": "OK_NO_TOLERANCE",
"statusText": "Cluster is NOT tolerant to any failures.",
"topology": {
"ic-1:3306": {
"address": "ic-1:3306",
"mode": "R/W",
"readReplicas": {},
"role": "HA",
"status": "ONLINE"
},
"ic-2:3306": {
"address": "ic-2:3306",
"mode": "R/O",
"readReplicas": {},
"role": "HA",
"status": "ONLINE"
}
}
},
"groupInformationSourceMember": "mysql://icadmin@ic-1:3306"
}
How you proceed depends on whether the instance is local or remote to the instance MySQL Shell is running on, and whether the instance supports persisting configuration changes automatically, see Section 6.2.3, “Persisting Settings”. If the instance supports persisting configuration changes automatically, you do not need to persist the settings manually and can either add more instances or continue to the next step. If the instance does not support persisting configuration changes automatically, you have to configure the instance locally. This is essential to ensure that instances can rejoin the cluster after leaving it.
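On instances that cannot persist settings automatically, one approach is to run MySQL Shell locally on that instance and use dba.configureLocalInstance(). A sketch, assuming the same connection details shown earlier in this section:

mysql-js> dba.configureLocalInstance('icadmin@ic-2:3306')

This writes the required Group Replication settings to the instance's option file so that they survive a restart and the instance can rejoin the cluster.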
If the instance has
super_read_only=ON
then you
might need to confirm that AdminAPI can set
super_read_only=OFF
. See
Instance Configuration in Super Read-only Mode for more
information.
Once you have your cluster deployed you can configure MySQL Router to provide high availability, see Section 6.10, “Using MySQL Router with AdminAPI, InnoDB Cluster, and InnoDB ReplicaSet”.