This section explains how to set up a sandbox InnoDB Cluster deployment. You create and administer your InnoDB clusters using MySQL Shell with the included AdminAPI. This section assumes familiarity with MySQL Shell; see MySQL Shell 8.0 (part of MySQL 8.0) for further information.
Initially deploying and using local sandbox instances of MySQL is a good way to start your exploration of InnoDB Cluster. You can fully test out InnoDB Cluster locally, prior to deployment on your production servers. MySQL Shell has built-in functionality for creating sandbox instances that are correctly configured to work with Group Replication in a locally deployed scenario.
Sandbox instances are only suitable for deploying and running on your local machine for testing purposes. In a production environment the MySQL Server instances are deployed to various host machines on the network. See Section 20.2.5, “Production Deployment of InnoDB Cluster” for more information.
This tutorial shows how to use MySQL Shell to create an InnoDB Cluster consisting of three MySQL server instances.
MySQL Shell includes the AdminAPI, which adds the dba global variable that provides functions for the administration of sandbox instances. In this example setup, you create three sandbox instances using dba.deploySandboxInstance().

Start MySQL Shell from a command prompt by issuing the command:

shell> mysqlsh

MySQL Shell provides two scripting language modes, JavaScript and Python, in addition to a native SQL mode. Throughout this tutorial MySQL Shell is used in JavaScript mode, which is the default. Switch modes by issuing \js for JavaScript mode, \py for Python mode, and \sql for SQL mode. Ensure you are in JavaScript mode, then issue:

mysql-js> dba.deploySandboxInstance(3310)

The argument passed to deploySandboxInstance() is the TCP port number where the MySQL Server instance listens for connections. By default the sandbox is created in a directory named $HOME/mysql-sandboxes/port on Unix systems. For Microsoft Windows systems the directory is %userprofile%\MySQL\mysql-sandboxes\port.

The root password for the instance is prompted for.
Each instance has its own password. Defining the same password for all sandboxes in this tutorial makes it easier, but remember to use different passwords for each instance in production deployments.
To deploy further sandbox server instances, repeat the steps followed for the sandbox instance at port 3310, choosing different port numbers. For each additional sandbox instance issue:

mysql-js> dba.deploySandboxInstance(port_number)
To follow this tutorial, use port numbers 3310, 3320 and 3330 for the three sandbox server instances. Issue:
mysql-js> dba.deploySandboxInstance(3320)
mysql-js> dba.deploySandboxInstance(3330)
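If you prefer not to be prompted for the root password interactively, deploySandboxInstance() also accepts an options dictionary as a second argument. As an illustration (the port 3340 and the password shown here are placeholders, not part of this tutorial's setup):

mysql-js> dba.deploySandboxInstance(3340, {password: 'sandbox_root_password'})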
The next step is to create the InnoDB Cluster while connected to the seed MySQL Server instance. The seed instance contains the data that you want to replicate to the other instances. In this example the sandbox instances are blank, therefore we can choose any instance.
Connect MySQL Shell to the seed instance, in this case the one at port 3310:
mysql-js> \connect root@localhost:3310
The \connect MySQL Shell command is a shortcut for the shell.connect() method.
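As an illustration, connecting with the \connect command or calling the shell.connect() method directly achieves the same result; for the sandbox at port 3310 used in this tutorial:

mysql-js> shell.connect('root@localhost:3310')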
Once you have connected, AdminAPI can write to the local instance's option file. This is different from working with a production deployment, where you would need to connect to the remote instance and run the MySQL Shell application locally on the instance before AdminAPI can write to the instance's option file.
Use the dba.createCluster() method to create the InnoDB Cluster with the currently connected instance as the seed:
mysql-js> var cluster = dba.createCluster('testCluster')
The createCluster() method deploys the InnoDB Cluster metadata to the selected instance, and adds the instance you are currently connected to as the seed instance of the cluster. The createCluster() method returns the created cluster; in the example above this is assigned to the cluster variable. The parameter passed to the createCluster() method is a symbolic name given to this InnoDB Cluster, in this case testCluster.
The next step is to add more instances to the InnoDB Cluster. Any transactions that were executed by the seed instance are re-executed by each secondary instance as it is added. This tutorial uses the sandbox instances that were created earlier at ports 3320 and 3330.
The seed instance in this example was recently created, so it is nearly empty. Therefore, there is little data that needs to be replicated from the seed instance to the secondary instances. In a production environment, where you have an existing database on the seed instance, you could use a tool such as MySQL Enterprise Backup to ensure that the secondaries have matching data before replication starts. This avoids the possibility of lengthy delays while data replicates from the primary to the secondaries. See Section 17.4.4, “Using MySQL Enterprise Backup with Group Replication”.
Add the second instance to the InnoDB Cluster:

mysql-js> cluster.addInstance('root@localhost:3320')
The root user's password is prompted for.
Add the third instance:

mysql-js> cluster.addInstance('root@localhost:3330')
The root user's password is prompted for.
At this point you have created a cluster with three instances: a primary, and two secondaries.
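You can inspect which instances belong to the cluster with the cluster's describe() function; the exact output depends on your MySQL Shell version. For example, using the cluster variable assigned earlier:

mysql-js> cluster.describe()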
You can only specify localhost in addInstance() if the instance is a sandbox instance. This also applies to the implicit addInstance() that is carried out after issuing dba.rebootClusterFromCompleteOutage().
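In a production deployment you instead pass the instance's actual network address to addInstance(); for example (ic-2.example.com is a hypothetical host name used for illustration):

mysql-js> cluster.addInstance('root@ic-2.example.com:3306')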
Once the sandbox instances have been added to the cluster, the configuration required for InnoDB Cluster must be persisted to each instance's option file. Connect to each instance, for example:

mysql-js> \connect root@localhost:3310

You are prompted for the instance's password. Then issue dba.configureLocalInstance(), passing in the instance:

mysql-js> dba.configureLocalInstance('root@localhost:3310')

The configuration changes are persisted to the instance's option file.
If dba.configureLocalInstance() is not issued while connected to the instance, the configuration is not persisted to the option file. This does not stop the instance from initially joining a cluster, but it does mean that the instance cannot rejoin the cluster automatically, for example after being stopped.
Repeat the process of connecting to each sandbox instance you added to the cluster and persisting the configuration. For this example we added sandbox instances at ports 3310, 3320 and 3330. Therefore issue this for ports 3320 and 3330:
mysql-js> \connect root@localhost:port_number
mysql-js> dba.configureLocalInstance('root@localhost:port_number')
To check that the cluster has been created, use the cluster instance's status() function. See Checking the InnoDB Cluster Status.
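For example, using the cluster variable assigned when the cluster was created:

mysql-js> cluster.status()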
Once you have your cluster deployed, you can configure MySQL Router to provide high availability. See Section 20.3, “Using MySQL Router with InnoDB Cluster”.