
MySQL 5.7 Reference Manual  /  InnoDB Cluster User Guide  /  Getting Started with InnoDB Cluster

19.4 Getting Started with InnoDB Cluster

This section explains how to use MySQL Shell with AdminAPI to set up an InnoDB cluster and configure MySQL Router to achieve high availability.

InnoDB cluster instances are created and managed through the MySQL Shell. The MySQL Shell offers a specific Administrative Module for this purpose, called dba, that is automatically initialized at startup.

To create a new InnoDB cluster, MySQL Shell must be connected to a MySQL Server instance. By default, that instance becomes the seed instance of the new InnoDB cluster and holds the initial data set.

This tutorial describes how to create three local sandbox instances, one primary and two secondaries, the minimum required to provide high availability.

DBA Module

MySQL Shell includes the AdminAPI, which provides the dba global variable and its associated methods. These dba methods help you to administer your cluster, for example by using dba.deploySandboxInstance() to add a sandbox MySQL instance.

Note

AdminAPI is available as of MySQL Shell 1.0.8.

To list all available dba commands, use the dba.help() method. You can obtain detailed information for a specific method using the general format object.help('methodname'). For example:

mysql-js> dba.help('getCluster')

Retrieves a cluster from the Metadata Store.

SYNTAX
  <Dba>.getCluster([name])

WHERE
  name: Parameter to specify the name of the cluster to be returned.

DESCRIPTION

If name is not specified, the default cluster will be returned.

If name is specified, and no cluster with the indicated name is found, an error
will be raised.

Deploying Sandbox Instances

Deploying and using local sandbox instances of MySQL is a good way to start exploring InnoDB cluster: you can fully test out InnoDB cluster locally, prior to deployment on your production servers. MySQL Shell has built-in functionality for creating sandbox instances that are correctly configured to work with Group Replication in a locally deployed cluster.

Note

Sandbox instances are only suitable for deploying and running on your local machine.

In a production environment the MySQL Server instances would be deployed on various hosts on the network. This is explained later in this guide.

The first step is to create sandbox MySQL Server instances using MySQL Shell.

Note

A minimum of three instances is required to create an InnoDB cluster that is tolerant to the failure of one instance. If two instances leave the group unexpectedly, the cluster is no longer highly available or capable of supporting writes, and reverts to read-only mode. A cluster of five nodes is tolerant to the simultaneous failure of two instances. In the general case, the number of simultaneous failures that can be sustained while retaining functioning high availability is (nodes - 1)/2, rounded down to the nearest integer.

The dba module provides several functions for administration of sandbox instances. For this example setup, you create three sandbox instances. The AdminAPI provides a function for that purpose: dba.deploySandboxInstance().

In the following example, MySQL Shell opens a session to a server running on the local host at port 3310.

Start MySQL Shell from a command prompt by issuing the command:

shell> mysqlsh

MySQL Shell provides two scripting languages: JavaScript and Python.

Method naming in Python mode conforms to the PEP 8 Style Guide for Python Code.

Note

You can also write SQL code in a shell script file.

Throughout this guide MySQL Shell is used primarily in JavaScript mode. For more information on MySQL Shell, see Section 3.8, “MySQL Shell User Guide”.

When MySQL Shell starts, it is in JavaScript mode by default. You can switch between JavaScript, Python, and SQL modes using the commands \js, \py, and \sql.

Ensure you are in JavaScript mode by issuing the \js command, then execute:

mysql-js> dba.deploySandboxInstance(3310)
Note

Semicolons are not required at the end of statements in JavaScript mode.

The argument passed to deploySandboxInstance() is the TCP port number where the MySQL Server instance will listen for connections. By default the sandbox is created in a directory named $HOME/mysql-sandboxes/port on Unix systems. For Microsoft Windows systems the directory is %userprofile%\MySQL\mysql-sandboxes\port.

You are prompted for the root password for the instance.

Note

Each instance has its own password. Using the same password for all sandboxes makes this tutorial easier to follow, but remember to use a different password for each instance on production systems.

Repeat the above command two more times using different port numbers:

mysql-js> dba.deploySandboxInstance(3320)
mysql-js> dba.deploySandboxInstance(3330)

You now have three MySQL server sandbox instances running on ports 3310, 3320 and 3330.

Managing Sandbox Instances

Once a sandbox instance is running, it is possible to change its status at any time using the following:

  • Stop: dba.stopSandboxInstance()

  • Start: dba.startSandboxInstance()

  • Kill: dba.killSandboxInstance()

    Kills the MySQL Server instance process on the local host; useful for simulating an unexpected halt while testing failover.

  • Delete: dba.deleteSandboxInstance()

    Completely removes the sandbox instance from your file system.
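As an illustration, the lifecycle commands above could be exercised against the sandbox on port 3310 from this tutorial (a hypothetical session; kill and delete are destructive, so only run them against disposable sandboxes):

```
mysql-js> dba.stopSandboxInstance(3310)
mysql-js> dba.startSandboxInstance(3310)
mysql-js> dba.killSandboxInstance(3310)
mysql-js> dba.deleteSandboxInstance(3310)
```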

Creating the InnoDB Cluster

The next step is to create the InnoDB cluster while connected to the seed MySQL Server instance. The seed instance is the instance that you are connected to via MySQL Shell and that you want to replicate to the other instances. In this example, the sandbox instances are all blank, so any of them can serve as the seed.

Connect MySQL Shell to the seed instance, in this case the one at port 3310:

mysql-js> \connect root@localhost:3310

The syntax \connect is a shortcut for the MySQL Shell connect method shell.connect(). Alternatively use the following command:

mysql-js> shell.connect('root@localhost:3310')

Create the InnoDB cluster:

mysql-js> var cluster = dba.createCluster('test')

The parameter passed to the createCluster() function is a symbolic name given to this InnoDB cluster. The resulting InnoDB cluster is assigned to the cluster variable. This function deploys the metadata to the selected instance, configures it for Group Replication and adds the instance as the seed of the new InnoDB cluster.

After validating that the instance is properly configured, it is added to the InnoDB cluster as the seed instance and the replication subsystem is started.

The provided sandbox instances are pre-configured to work with Group Replication, but if you use a pre-existing instance, it is possible that some configuration options might not be set in a compatible way. The createCluster() command ensures that the settings are correct and if not, it changes their values. If a change requires MySQL Server to be restarted, you are prompted to restart it manually whenever convenient.

In summary, when dba.createCluster() is executed, the following steps are carried out:

  1. The InnoDB cluster Metadata Schema is created (if it does not already exist) or is updated to the latest version. Schema objects or columns are only added, never removed.

  2. The new InnoDB cluster information, including the specified name and password, is inserted into the InnoDB cluster Metadata.

  3. The seed instance (current session) is added to the InnoDB cluster as first instance of the Default ReplicaSet.

  4. The seed instance information is inserted into the InnoDB cluster Metadata.

Obtaining the cluster Instance Variable

Once you have created a cluster you can obtain the cluster instance variable using a command such as:

mysql-js> var cluster = dba.getCluster("devCluster")

Specify the name of the cluster you want the instance variable for. If you do not specify a name, the default cluster is returned.

Adding Instances to InnoDB Cluster

The next step is to add replicas to the InnoDB cluster. Any transactions that were executed by the seed instance are re-executed by each replica as they are added. To demonstrate this you use the sandbox instances that you created earlier.

The seed instance in this example was recently created, so it is nearly empty and had replication enabled when it was created. Therefore, there is little data that needs to be replicated from the primary to the secondaries. In a production environment, where you have an existing database on the seed instance, you could use a tool such as MySQL Enterprise Backup to ensure that the secondaries have matching data before replication starts. This avoids the possibility of lengthy delays while data replicates from the primary to the secondaries. Once the cluster is formed, writes to the primary result in data being replicated to the secondaries.

Add the second instance to the InnoDB cluster:

mysql-js> cluster.addInstance('root@localhost:3320')

The root user's password is prompted for.

Add the third instance:

mysql-js> cluster.addInstance('root@localhost:3330')

The root user's password is prompted for.

At this point you have created a high availability cluster with three instances: a primary, and two secondaries.

Note

To add additional detail to the logs when adding an instance to a cluster, pass in the verbose option. The last example would then look like this:

mysql-js> cluster.addInstance('root@localhost:3330', {verbose: true})

You can only specify localhost in addInstance() if the instance is a sandbox instance. This also applies to the implicit addInstance() after issuing createCluster().

Removing Instances from the InnoDB Cluster

You can remove an instance from a cluster at any time using the removeInstance() method, as in the following example:

mysql-js> cluster.removeInstance("192.168.1.1:3306")

Checking the InnoDB Cluster Status

With three instances now in our InnoDB cluster sandbox, use cluster.status() to check its status:

mysql-js> cluster.status()

This retrieves the current InnoDB cluster status and outputs a status report. There are several attributes, including the following:

The instance status is one of ONLINE, OFFLINE, RECOVERING, UNREACHABLE, or (MISSING).

  • ONLINE: The instance is online.

  • OFFLINE: The instance may have lost connection to the other instances.

  • RECOVERING: The instance is receiving updates from the seed instance and should eventually switch to ONLINE.

  • UNREACHABLE: The instance has lost communication with the cluster.

  • (MISSING): The instance belongs to the cluster's metadata but is not currently active in the corresponding Group Replication group.

The mode indicates either R/W (read-write) or R/O (read-only). Only the instance marked "R/W" can execute transactions that update the database, so it is the PRIMARY. If that instance becomes unreachable for any reason (such as an unexpected halt), one of the remaining "R/O" instances automatically takes its place and becomes the new "R/W" PRIMARY.

To check the status of the InnoDB cluster at a later time, you can get a reference to the InnoDB cluster object by connecting to any of its instances. However, if you want to make changes to the InnoDB cluster, you must connect to the PRIMARY. For information about how the InnoDB cluster is running, use the status() method:

mysql-js> var cluster = dba.getCluster()
mysql-js> cluster.status()
{
    "clusterName": "test",
    "defaultReplicaSet": {
        "status": "Cluster tolerant to up to ONE failure.",
        "topology": {
            "localhost:3310": {
                "address": "localhost:3310",
                "status": "ONLINE",
                "role": "HA",
                "mode": "R/W",
                "leaves": {
                    "localhost:3320": {
                        "address": "localhost:3320",
                        "status": "ONLINE",
                        "role": "HA",
                        "mode": "R/O",
                        "leaves": {}
                    },
                    "localhost:3330": {
                        "address": "localhost:3330",
                        "status": "ONLINE",
                        "role": "HA",
                        "mode": "R/O",
                        "leaves": {}
                    }
                }
            }
        }
    }
}

As the above output demonstrates, status information includes the InnoDB cluster name, topology, default ReplicaSet, PRIMARY, and more.
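The nested topology returned by cluster.status() can be walked programmatically, for example to locate the current PRIMARY. The sketch below is plain JavaScript operating on an object literal copied (and trimmed to the relevant fields) from the sample output above; it is an illustration, not an AdminAPI feature:

```javascript
// Sample cluster.status() topology, reduced to address/mode/leaves.
const status = {
  clusterName: "test",
  defaultReplicaSet: {
    topology: {
      "localhost:3310": {
        address: "localhost:3310", mode: "R/W",
        leaves: {
          "localhost:3320": { address: "localhost:3320", mode: "R/O", leaves: {} },
          "localhost:3330": { address: "localhost:3330", mode: "R/O", leaves: {} }
        }
      }
    }
  }
};

// Recursively flatten the topology tree into a list of instances.
function collectInstances(topology, out = []) {
  for (const node of Object.values(topology)) {
    out.push({ address: node.address, mode: node.mode });
    collectInstances(node.leaves || {}, out);
  }
  return out;
}

const instances = collectInstances(status.defaultReplicaSet.topology);
const primary = instances.find(i => i.mode === "R/W").address;
const secondaries = instances.filter(i => i.mode === "R/O").map(i => i.address);

console.log(primary);     // localhost:3310
console.log(secondaries); // [ 'localhost:3320', 'localhost:3330' ]
```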

Describing the Structure of the InnoDB Cluster

To get information about the structure of the InnoDB cluster itself, use the cluster.describe() function:

mysql-js> cluster.describe();
{
    "clusterName": "test",
    "adminType": "local",
    "defaultReplicaSet": {
        "name": "default",
        "instances": [
            {
                "name": "localhost:3310",
                "host": "localhost:3310",
                "role": "HA"
            },
            {
                "name": "localhost:3320",
                "host": "localhost:3320",
                "role": "HA"
            },
            {
                "name": "localhost:3330",
                "host": "localhost:3330",
                "role": "HA"
            }
        ]
    }
}

The output from this function shows the structure of the InnoDB cluster, including all of its configuration information, ReplicaSets, and instances.
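Unlike status(), describe() returns the instances as a flat array, which makes it easy to enumerate cluster members. The following plain-JavaScript sketch operates on an object literal copied from the sample output above:

```javascript
// Sample cluster.describe() output, as shown above.
const description = {
  clusterName: "test",
  adminType: "local",
  defaultReplicaSet: {
    name: "default",
    instances: [
      { name: "localhost:3310", host: "localhost:3310", role: "HA" },
      { name: "localhost:3320", host: "localhost:3320", role: "HA" },
      { name: "localhost:3330", host: "localhost:3330", role: "HA" }
    ]
  }
};

// Enumerate the member hosts of the default ReplicaSet.
const hosts = description.defaultReplicaSet.instances.map(i => i.host);
console.log(hosts); // [ 'localhost:3310', 'localhost:3320', 'localhost:3330' ]
```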

Rejoining a Cluster

If an instance leaves the cluster, for example because it lost connection and did not or could not automatically rejoin, it may be necessary to rejoin it to the cluster at a later stage. Because the Group Replication configuration is not stored in the instance's configuration file, restarting an instance causes it to leave its Replication Group, so it must be rejoined to add it back into the Default ReplicaSet.

The command to rejoin an instance to a cluster is cluster.rejoinInstance().

If an instance has been configured using dba.configureLocalInstance(), its Group Replication information is persisted to the configuration file and the instance rejoins the cluster automatically. More information on this can be found in the section Configuring the Instance.

Deploying MySQL Router

In order for client applications to handle failover, they need to be aware of the InnoDB cluster topology. They also need to know which instance is the PRIMARY. While it is possible for applications to implement that logic, MySQL Router can provide this functionality for you.

The recommended deployment of MySQL Router is on the same host as the application. In this tutorial, everything is running on a single host, so you deploy MySQL Router to the same host.

Assuming MySQL Router is already installed, the only required step is to bootstrap it with the location of the metadata server. The following does this, and uses all defaults:

shell> mysqlrouter --bootstrap root@localhost:3310
Please enter MySQL password for root:
MySQL Router needs to create a InnoDB cluster metadata client account.
To allow secure storage of its password, please provide an encryption key.
To generate a random encryption key to be stored in a local obscured file,
and allow the router to start without interaction, press Return to cancel
and use the --master-key-path option to specify a file location.

Please provide an encryption key:
Please confirm encryption key:

Bootstrapping system MySQL Router instance...
MySQL Router  has now been configured for the InnoDB cluster 'test'.

The following connection information can be used to connect to the cluster.

Classic MySQL protocol connections to cluster 'test':
- Read/Write Connections: localhost:6446
- Read/Only Connections: localhost:6447

X protocol connections to cluster 'test':
- Read/Write Connections: localhost:64460
- Read/Only Connections: localhost:64470

You are prompted for the instance password and encryption key for MySQL Router to use. This encryption key is used to encrypt the instance password used by MySQL Router to connect to the cluster. The ports you can use to connect to the InnoDB cluster are also displayed.

Note

Currently only Classic Protocol connections are supported between MySQL Router and InnoDB cluster.

MySQL Router connects to the InnoDB cluster, fetches its metadata, and configures itself for use. For each protocol, the generated configuration creates two TCP ports: one for read-write sessions (which are redirected to the PRIMARY) and one for read-only sessions (which are redirected to one of the SECONDARY instances).

Once bootstrapped and configured, start MySQL Router (or set up a service for it to start automatically when the system boots):

shell> mysqlrouter &

You can now connect a MySQL client, such as MySQL Shell, to one of the incoming MySQL Router ports and see how the client is transparently connected to one of the InnoDB cluster instances. To see which instance you are actually connected to, query the @@port system variable.

shell> mysqlsh --uri root@localhost:6446
mysql-js> \sql
Switching to SQL mode... Commands end with ;
mysql-sql> select @@port;
+--------+
| @@port |
+--------+
|   3310 |
+--------+
1 row in set (0.00 sec)

MySQL Router and Metadata Servers

When MySQL Router is bootstrapped, it records the bootstrap server addresses in its configuration. These servers contain metadata used by MySQL Router in order to route correctly. If additional nodes are added to the cluster later, MySQL Router uses them. If, however, all of the original metadata servers go offline for some reason, MySQL Router is no longer able to route correctly. Consider the following line in a mysqlrouter.conf file:

...
bootstrap_server_addresses=mysql://192.168.56.101:3310,mysql://192.168.56.101:3320,mysql://192.168.56.101:3330
...

There are three original metadata servers specified here. Now if two additional servers (call them D and E) were added, you would have a five node cluster, and MySQL Router routes to these additional nodes as required. If the original metadata nodes, A, B, and C, then went down over a period of time, you would be left with only nodes D and E running. Nodes D and E are still alive and form a quorum, so it should be possible to route calls to them. However, because all of the original metadata servers are down (nodes A, B, and C), MySQL Router shuts off all routing.
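The bootstrap_server_addresses line above is a comma-separated list of mysql:// URIs. As a sketch of how such a line decomposes into individual metadata endpoints (plain JavaScript, not how MySQL Router itself parses its configuration):

```javascript
// The example line from mysqlrouter.conf shown above.
const line =
  "bootstrap_server_addresses=mysql://192.168.56.101:3310,mysql://192.168.56.101:3320,mysql://192.168.56.101:3330";

const endpoints = line
  .split("=")[1]                            // drop the option name
  .split(",")                               // one entry per metadata server
  .map(u => u.replace(/^mysql:\/\//, "")); // strip the URI scheme

console.log(endpoints);
// [ '192.168.56.101:3310', '192.168.56.101:3320', '192.168.56.101:3330' ]
```

All three endpoints here are the original metadata servers; nodes added after bootstrap do not appear in this line, which is the root of the scenario described above.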

Testing Failover

To test if failover works, simulate an unexpected halt by killing the PRIMARY instance using the dba.killSandboxInstance() function and check that one of the other instances takes over automatically.

mysql-js> dba.killSandboxInstance(3310)

Then check again which instance you are connected to. The first SELECT fails because the connection to the original PRIMARY was lost; MySQL Shell automatically reconnects for you, and when you issue the command again the new port is confirmed.

mysql-js> \sql
Switching to SQL mode... Commands end with ;
mysql-sql> select @@port;
ERROR: 2013 (HY000): Lost connection to MySQL server during query
The global session got disconnected.
Attempting to reconnect to 'root@localhost:6446'...
The global session was successfully reconnected.
mysql-sql> select @@port;
+--------+
| @@port |
+--------+
|   3330 |
+--------+
1 row in set (0.00 sec)

This shows that the InnoDB cluster provided us with automatic failover, that MySQL Router has automatically reconnected us to the new PRIMARY instance, and that we have high availability.

You can bring the instance that you killed back online.

mysql-js> dba.startSandboxInstance(3310)
mysql-js> cluster.rejoinInstance('root@localhost:3310')
mysql-js> cluster.status()

Dissolving InnoDB Cluster

If you want to remove all information associated with a cluster, you can use the cluster.dissolve() method. This removes all metadata and configuration associated with the cluster. Once you have dissolved the cluster you need to create it again from scratch, using dba.createCluster().

Note

After calling cluster.dissolve(), the cluster object is no longer valid.

Using MySQL Shell to Execute a Script

You can automate cluster configuration with scripts. For example:

shell> mysqlsh -f setup-innodb-cluster.js
Note

Any command line options specified after the script file name are passed to the script and not to MySQL Shell. You can access those options using the os.argv array in JavaScript, or the sys.argv array in Python. In both cases, the first option picked up in the array is the script name.
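The argument convention described in the note, with the script name first and the remaining options after it, can be sketched as follows (plain JavaScript with a hypothetical argument array; in MySQL Shell the array is os.argv):

```javascript
// Split a MySQL Shell-style argument array into the script name and its options.
function parseScriptArgs(argv) {
  const [script, ...options] = argv;
  return { script, options };
}

const parsed = parseScriptArgs(["setup-innodb-cluster.js", "--port=3310", "--verbose"]);
console.log(parsed.script);  // setup-innodb-cluster.js
console.log(parsed.options); // [ '--port=3310', '--verbose' ]
```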

The contents of an example script file are shown here:

  print('MySQL InnoDB cluster sandbox set up\n');
  print('==================================\n');
  print('Setting up a MySQL InnoDB cluster with 3 MySQL Server sandbox instances.\n');
  print('The instances will be installed in ~/mysql-sandboxes.\n');
  print('They will run on ports 3310, 3320 and 3330.\n\n');

  var dbPass = shell.prompt('Please enter a password for the MySQL root account: ', {type:"password"});

  try {
     print('\nDeploying the sandbox instances.');
     dba.deploySandboxInstance(3310, {password: dbPass});
     print('.');
     dba.deploySandboxInstance(3320, {password: dbPass});
     print('.');
     dba.deploySandboxInstance(3330, {password: dbPass});
     print('.\nSandbox instances deployed successfully.\n\n');

     print('Setting up InnoDB cluster...\n');
     shell.connect('root@localhost:3310', dbPass);

     var cluster = dba.createCluster("devCluster");

     print('Adding instances to the cluster.');
     cluster.addInstance({user: "root", host: "localhost", port: 3320, password: dbPass});
     print('.');
     cluster.addInstance({user: "root", host: "localhost", port: 3330, password: dbPass});
     print('.\nInstances successfully added to the cluster.');

     print('\nInnoDB cluster deployed successfully.\n');
  } catch(e) {
     print('\nThe InnoDB cluster could not be created.\n\nError: ' +
           e.message + '\n');
  }
