The example tasks use AdminAPI with a sandbox deployment to set up an InnoDB Cluster.
Deploying and using local sandbox instances of MySQL enables you to test the functionality locally, before deployment on your production servers. AdminAPI has built-in functionality for creating sandbox instances that are preconfigured to work with InnoDB Cluster, InnoDB ClusterSet, and InnoDB ReplicaSet in a locally deployed scenario.
Sandbox instances are only suitable for deploying and running on your local machine for testing purposes.
To provide tolerance for one failure, create an InnoDB Cluster with three instances. In this example, you use three sandbox instances running on the same machine. In a real-world setup, those three instances could run on different hosts on the network.
Open a MySQL Shell Console session.
-
Change the active language to Python:
\py
-
To create and start the MySQL sandbox instances, use the dba.deploy_sandbox_instance() method. Issue the following three statements in MySQL Shell and enter a root password for each instance:

dba.deploy_sandbox_instance(3310)
dba.deploy_sandbox_instance(3320)
dba.deploy_sandbox_instance(3330)
Note: Use the same root password for all instances.
To create an InnoDB Cluster, complete the following steps:
-
Connect to the MySQL instance you want to be the primary instance in the InnoDB Cluster.
shell.connect('root@localhost:3310')
-
Issue the dba.create_cluster() method to create an InnoDB Cluster named devCluster, and assign the returned Cluster object to the variable cluster:

cluster = dba.create_cluster('devCluster')
This command outputs:
A new InnoDB cluster will be created on instance 'localhost:3310'.

Validating instance configuration at localhost:3310...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.

This instance reports its own address as 127.0.0.1:3310

Instance configuration is suitable.
NOTE: Group Replication will communicate with other members using '127.0.0.1:33101'.
Use the localAddress option to override.

Creating InnoDB cluster 'devCluster' on '127.0.0.1:3310'...

Adding Seed Instance...
Cluster successfully created. Use Cluster.addInstance() to add MySQL instances.
At least 3 instances are needed for the cluster to be able to withstand up to
one server failure.

<Cluster:devCluster>
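As the NOTE in the output shows, each sandbox communicates over a Group Replication port derived from its MySQL port by appending a 1 (3310 becomes 33101, 3320 becomes 33201). A minimal sketch of that default convention, for the sandbox ports used in this example:

```python
def sandbox_gr_port(mysql_port: int) -> int:
    """Default Group Replication local-address port for a sandbox:
    the MySQL port with a '1' appended (3310 -> 33101)."""
    return mysql_port * 10 + 1

for port in (3310, 3320, 3330):
    print(f"{port} -> Group Replication on 127.0.0.1:{sandbox_gr_port(port)}")
```

If these defaults clash with other services on your machine, the localAddress option mentioned in the output lets you override the port.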
-
Verify that the creation was successful by calling the cluster.status() method on the assigned variable cluster:

cluster.status()
The following status is output:
{
    "clusterName": "devCluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "127.0.0.1:3310",
        "ssl": "REQUIRED",
        "status": "OK_NO_TOLERANCE",
        "statusText": "Cluster is NOT tolerant to any failures.",
        "topology": {
            "127.0.0.1:3310": {
                "address": "127.0.0.1:3310",
                "memberRole": "PRIMARY",
                "mode": "R/W",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.28"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "127.0.0.1:3310"
}
-
The cluster is up and running but not yet tolerant to a failure. Add another MySQL server instance to the cluster using the cluster.add_instance() method:

cluster.add_instance('root@localhost:3320')
The output is:
NOTE: The target instance '127.0.0.1:3320' has not been pre-provisioned (GTID set is empty). The Shell is unable to decide whether incremental state recovery can correctly provision it.
The safest and most convenient way to provision a new instance is through automatic clone provisioning, which will completely overwrite the state of '127.0.0.1:3320' with a physical snapshot from an existing cluster member. To use this method by default, set the 'recoveryMethod' option to 'clone'.

The incremental state recovery may be safely used if you are sure all updates ever executed in the cluster were done with GTIDs enabled, there are no purged transactions and the new instance contains the same GTID set as the cluster or a subset of it. To use this method by default, set the 'recoveryMethod' option to 'incremental'.

Please select a recovery method [C]lone/[I]ncremental recovery/[A]bort (default Clone):
-
Select a recovery method from the prompt. The options are:

Clone: Takes a physical snapshot from an existing cluster member and completely overwrites the state of the instance you are adding, deleting any transactions the instance contains. The MySQL Clone plugin is installed automatically. Select Clone if you are adding either an empty instance (one that has not processed any transactions) or an instance whose transactions you do not need to retain.

Incremental recovery: Recovers all transactions processed by the cluster to the joining instance using asynchronous replication. Incremental recovery is appropriate only if you are sure that all updates ever processed by the cluster were made with global transaction identifiers (GTIDs) enabled, no transactions have been purged, and the new instance contains the same GTID set as the cluster or a subset of it.
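The safety condition for incremental recovery (the joiner's GTID set must be a subset of the cluster's) can be pictured with plain Python sets. This is an illustrative sketch only: real GTID sets are interval ranges per server UUID (e.g. '3e11fa47-...:1-100'), not explicit sets of transactions.

```python
# Illustrative only: model GTID sets as sets of
# (server_uuid, transaction_number) pairs.
cluster_gtids = {("uuid-a", n) for n in range(1, 101)}  # cluster executed 1-100
joiner_gtids = {("uuid-a", n) for n in range(1, 51)}    # joiner already has 1-50

# Incremental recovery is viable only when the joiner's set is a
# subset of the cluster's (and nothing it still needs was purged).
print(joiner_gtids <= cluster_gtids)  # True -> incremental recovery viable

# An errant transaction on the joiner breaks the subset condition:
joiner_gtids.add(("uuid-b", 1))
print(joiner_gtids <= cluster_gtids)  # False -> clone is the safe choice
```

This is why the Shell prompts: with an empty or unknown GTID history on the joiner, it cannot prove the subset condition and defaults to Clone.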
In this example, select C for Clone. The output is:

Please select a recovery method [C]lone/[I]ncremental recovery/[A]bort (default Clone): C
Validating instance configuration at localhost:3320...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.

This instance reports its own address as 127.0.0.1:3320

Instance configuration is suitable.
NOTE: Group Replication will communicate with other members using '127.0.0.1:33201'.
Use the localAddress option to override.

A new instance will be added to the InnoDB cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.

Adding instance to the cluster...

Monitoring recovery process of the new cluster member. Press ^C to stop monitoring and let it continue in background.
Clone based state recovery is now in progress.

NOTE: A server restart is expected to happen as part of the clone process. If the
server does not support the RESTART command or does not come back after a while,
you may need to manually start it back.

* Waiting for clone to finish...
NOTE: 127.0.0.1:3320 is being cloned from 127.0.0.1:3310
** Stage DROP DATA: Completed
** Clone Transfer
    FILE COPY  ############################################################  100%  Completed
    PAGE COPY  ############################################################  100%  Completed
    REDO COPY  ############################################################  100%  Completed

NOTE: 127.0.0.1:3320 is shutting down...

* Waiting for server restart... ready
* 127.0.0.1:3320 has restarted, waiting for clone to finish...
** Stage RESTART: Completed
* Clone process has finished: 72.61 MB transferred in about 1 second (~72.61 MB/s)

State recovery already finished for '127.0.0.1:3320'

The instance '127.0.0.1:3320' was successfully added to the cluster.
-
Add the third instance and again select C for the clone recovery method:

cluster.add_instance('root@localhost:3330')
-
Check the status of the cluster by entering:
cluster.status()
This outputs the following:
{
    "clusterName": "devCluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "127.0.0.1:3310",
        "ssl": "REQUIRED",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
        "topology": {
            "127.0.0.1:3310": {
                "address": "127.0.0.1:3310",
                "memberRole": "PRIMARY",
                "mode": "R/W",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.28"
            },
            "127.0.0.1:3320": {
                "address": "127.0.0.1:3320",
                "memberRole": "SECONDARY",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.28"
            },
            "127.0.0.1:3330": {
                "address": "127.0.0.1:3330",
                "memberRole": "SECONDARY",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.28"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "127.0.0.1:3310"
}

The setup of the InnoDB Cluster was successful!
The cluster can now tolerate up to one failure.
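The tolerance figure reported by cluster.status() follows from Group Replication's majority-quorum rule: a group of n members stays available as long as a majority is reachable, so it tolerates floor((n-1)/2) failures. A quick sketch of that arithmetic:

```python
def failures_tolerated(members: int) -> int:
    """Group Replication needs a majority of members to stay available,
    so a group of n members tolerates floor((n-1)/2) failures."""
    return (members - 1) // 2

for n in range(1, 6):
    print(n, "members ->", failures_tolerated(n), "failure(s) tolerated")
# 3 members tolerate 1 failure, matching the OK status above;
# going to 5 members would tolerate 2.
```

This is why the earlier create_cluster output said at least 3 instances are needed to withstand one server failure.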
Stop the primary instance, check the status of the InnoDB Cluster, and finally restart the instance.
-
Use the dba.kill_sandbox_instance() method to stop the MySQL server instance:

dba.kill_sandbox_instance(3310)
The output is:
Killing MySQL instance...

Instance localhost:3310 successfully killed.
-
Check the status of the InnoDB Cluster:
cluster.status()
The output is:
{
    "clusterName": "devCluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "127.0.0.1:3320",
        "ssl": "REQUIRED",
        "status": "OK_NO_TOLERANCE",
        "statusText": "Cluster is NOT tolerant to any failures. 1 member is not active.",
        "topology": {
            "127.0.0.1:3310": {
                "address": "127.0.0.1:3310",
                "memberRole": "SECONDARY",
                "mode": "n/a",
                "readReplicas": {},
                "role": "HA",
                "shellConnectError": "MySQL Error 2003: Could not open connection to '127.0.0.1:3310': Can't connect to MySQL server on '127.0.0.1:3310' (10061)",
                "status": "(MISSING)"
            },
            "127.0.0.1:3320": {
                "address": "127.0.0.1:3320",
                "memberRole": "PRIMARY",
                "mode": "R/W",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.28"
            },
            "127.0.0.1:3330": {
                "address": "127.0.0.1:3330",
                "memberRole": "SECONDARY",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.28"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "127.0.0.1:3320"
}
The MySQL server instance formerly running on port 3310 is MISSING.
-
Restart the missing instance by issuing the dba.start_sandbox_instance() method with the port number:

dba.start_sandbox_instance(3310)
Checking the status of the cluster shows that the instance has been restored as active in the cluster, but as a secondary member:
cluster.status()
The output is:
{
    "clusterName": "devCluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "127.0.0.1:3320",
        "ssl": "REQUIRED",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
        "topology": {
            "127.0.0.1:3310": {
                "address": "127.0.0.1:3310",
                "memberRole": "SECONDARY",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.28"
            },
            "127.0.0.1:3320": {
                "address": "127.0.0.1:3320",
                "memberRole": "PRIMARY",
                "mode": "R/W",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.28"
            },
            "127.0.0.1:3330": {
                "address": "127.0.0.1:3330",
                "memberRole": "SECONDARY",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.28"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "127.0.0.1:3320"
}
All instances are back online, and the InnoDB Cluster can tolerate one failure again.
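When scripting health checks around this workflow, the dictionary returned by cluster.status() can be inspected programmatically. A minimal sketch over a trimmed status document (the field names follow the outputs shown above; the real dictionary carries many more keys):

```python
# Trimmed example modeled on the cluster.status() outputs shown above.
status = {
    "defaultReplicaSet": {
        "status": "OK_NO_TOLERANCE",
        "topology": {
            "127.0.0.1:3310": {"status": "(MISSING)"},
            "127.0.0.1:3320": {"status": "ONLINE"},
            "127.0.0.1:3330": {"status": "ONLINE"},
        },
    }
}

def unavailable_members(status: dict) -> list:
    """Return the addresses of members not reporting ONLINE."""
    topology = status["defaultReplicaSet"]["topology"]
    return [addr for addr, member in topology.items()
            if member["status"] != "ONLINE"]

print(unavailable_members(status))  # ['127.0.0.1:3310']
```

A check like this, run in MySQL Shell's Python mode against the live status dictionary, would flag the MISSING member from the kill-instance step before tolerance is restored.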