This section explains how to set up a sandbox InnoDB cluster deployment. You create and administer your InnoDB clusters using MySQL Shell with the included AdminAPI. This section assumes familiarity with MySQL Shell; see MySQL Shell 8.0 (part of MySQL 8.0) for further information.
Initially deploying and using local sandbox instances of MySQL is a good way to start your exploration of InnoDB cluster. You can fully test out InnoDB cluster locally, prior to deployment on your production servers. MySQL Shell has built-in functionality for creating sandbox instances that are correctly configured to work with Group Replication in a locally deployed scenario.
Sandbox instances are only suitable for deploying and running on your local machine for testing purposes. In a production environment the MySQL Server instances are deployed to various host machines on the network. See Section 21.2.4, “Production Deployment of InnoDB Cluster” for more information.
This tutorial shows how to use MySQL Shell to create an InnoDB cluster consisting of three MySQL server instances. It consists of the following steps:
MySQL Shell includes the AdminAPI, which adds the dba global variable. This variable provides functions for the administration of sandbox instances. In this example setup, you create three sandbox instances using dba.deploySandboxInstance().

Start MySQL Shell from a command prompt by issuing the command:

shell> mysqlsh

MySQL Shell provides two scripting languages, JavaScript and Python, in addition to a native SQL mode. Throughout this tutorial MySQL Shell is used in JavaScript mode, which is the default. Switch modes by issuing \js for JavaScript mode, \py for Python mode, and \sql for SQL mode. Ensure you are in JavaScript mode, then deploy the first sandbox instance:

mysql-js> dba.deploySandboxInstance(3310)
The argument passed to deploySandboxInstance() is the TCP port number where the MySQL Server instance listens for connections. By default the sandbox is created in a directory named $HOME/mysql-sandboxes/port on Unix systems. For Microsoft Windows systems the directory is %userprofile%\MySQL\mysql-sandboxes\port.
The root user's password for the instance is prompted for. Each sandbox instance uses the root user and password. This is not recommended in production; see Configuring Production Instances for information about the recommended clusterAdmin user.
To deploy another sandbox server instance, repeat the steps followed for the sandbox instance at port 3310, choosing different port numbers for each instance. For each additional sandbox instance issue:
The clusterAdmin user name and password must be the same on all instances that belong to a cluster. In the case of a sandbox deployment, the root user name and password must be the same on all instances.
To follow this tutorial, use port numbers 3310, 3320 and 3330 for the three sandbox server instances. Issue:
mysql-js> dba.deploySandboxInstance(3320)
mysql-js> dba.deploySandboxInstance(3330)
To change the directory in which sandboxes are stored, for example to run multiple sandboxes on one host for testing purposes, use the MySQL Shell sandboxDir option. All subsequent sandbox related operations are then executed against the instances found in the configured directory.
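As a sketch of how this works, assuming a hypothetical writable directory /home/user/sandbox1 (substitute any path of your choosing), the sandboxDir option can be set in JavaScript mode before deploying:

```
mysql-js> shell.options.sandboxDir = '/home/user/sandbox1'
mysql-js> dba.deploySandboxInstance(3310)
```

The sandbox files for the instance at port 3310 are then created under /home/user/sandbox1 instead of the default location.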
When you deploy sandboxes, MySQL Shell searches for the mysqld binary which it then uses to create the sandbox instance. You can configure where MySQL Shell searches for the mysqld binary by modifying the PATH environment variable. This can be useful to test a new version of MySQL locally before deploying it to production. Place the directory containing the mysqld binary you want to use at the start of the PATH environment variable, then run MySQL Shell from the terminal where that PATH is set. Any sandboxes you deploy then use the mysqld binary found at the configured path.
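For instance, assuming a hypothetical server installation unpacked at /opt/mysql-latest (the actual path depends on where you installed the new version), the PATH could be adjusted in a Unix shell before starting MySQL Shell:

```
shell> export PATH=/opt/mysql-latest/bin:$PATH
shell> mysqlsh
```

Because the directory is prepended to PATH, its mysqld binary is found before any system-wide installation.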
The next step is to create the InnoDB cluster while connected to the seed MySQL Server instance. The seed instance contains the data that you want to replicate to the other instances. In this example the sandbox instances are blank, therefore we can choose any instance. Sandboxes also use MySQL Clone; see Section 21.2.5, “Using MySQL Clone with InnoDB cluster”.
Connect MySQL Shell to the seed instance, in this case the one at port 3310:
mysql-js> \connect root@localhost:3310
The \connect MySQL Shell command is a shortcut for the shell.connect() method.
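For illustration, the equivalent call using the method form with the same URI as above is:

```
mysql-js> shell.connect('root@localhost:3310')
```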
Once you have connected, AdminAPI can write to the local instance's option file. This is different to working with a production deployment, where you would need to connect to the remote instance and run the MySQL Shell application locally on the instance before AdminAPI can write to the instance's option file.
Use the dba.createCluster() method to create the InnoDB cluster with the currently connected instance as the seed:
mysql-js> var cluster = dba.createCluster('testCluster')
The createCluster() method deploys the InnoDB cluster metadata to the selected instance, and adds the instance you are currently connected to as the seed instance. The method returns the created cluster; in the example above this is assigned to the cluster variable. The parameter passed to the createCluster() method is a symbolic name given to this InnoDB cluster, in this case testCluster.
The next step is to add more instances to the InnoDB cluster. Any transactions that were executed by the seed instance are re-executed by each secondary instance as it is added. This tutorial uses the sandbox instances that were created earlier at ports 3320 and 3330.
The seed instance in this example was recently created, so it is nearly empty. Therefore, there is little data that needs to be replicated from the seed instance to the secondary instances. In a production environment, where you have an existing database on the seed instance, you could use a tool such as MySQL Enterprise Backup to ensure that the secondaries have matching data before replication starts. This avoids the possibility of lengthy delays while data replicates from the primary to the secondaries. See Section 18.4.6, “Using MySQL Enterprise Backup with Group Replication”.
Add the second instance to the InnoDB cluster:
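Using the cluster object created above and the sandbox at port 3320, the call takes the instance's connection URI:

```
mysql-js> cluster.addInstance('root@localhost:3320')
```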
The root user's password is prompted for. The specified instance is recovered from the seed instance; in other words, the transactions from the seed are copied to the instance joining the cluster.
Add the third instance in the same way:
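Following the same pattern, for the sandbox at port 3330:

```
mysql-js> cluster.addInstance('root@localhost:3330')
```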
The root user's password is prompted for.
At this point you have created a cluster with three instances: a primary, and two secondaries.
You can only specify localhost in addInstance() if the instance is a sandbox instance. This also applies to the implicit addInstance() after issuing createCluster().
Once the sandbox instances have been added to the cluster, the configuration required for InnoDB cluster must be persisted to each instance's option file. How you proceed depends on whether the instance supports persisting configuration changes automatically; see Persisting Settings.
When the MySQL instance which you are using supports persisting configuration changes automatically, adding the instance automatically configures it. When the MySQL instance does not support persisting configuration changes automatically, you have to configure the instance locally. See Configuring Instances with dba.configureLocalInstance().
To check the cluster has been created, use the cluster's status() function. See Checking a Cluster's Status with Cluster.status().
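As a quick check on the cluster object from this tutorial, the call looks like the following (the JSON output describing the primary and secondaries depends on your deployment, so it is omitted here):

```
mysql-js> cluster.status()
```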
Once you have your cluster deployed you can configure MySQL Router to provide high availability, see Section 21.4, “Using MySQL Router with InnoDB Cluster”.