20.6 Working with InnoDB Cluster

This section explains how to work with an InnoDB cluster and how to handle common administration tasks.

Checking the InnoDB Cluster Status

Cluster objects provide the status() method, which enables you to check how a cluster is running. Before you can check the status of the InnoDB cluster, you need a reference to the InnoDB cluster object, which you obtain by connecting to any of its instances. However, if you want to make changes to the configuration of the cluster, you must connect to a "R/W" instance. Issuing status() retrieves the status of the cluster based on the view of the cluster held by the server instance you are connected to, and outputs a status report.

Important

The instance's state in the cluster directly influences the information provided in the status report. An instance which has left the cluster provides a different view of the cluster compared to an instance which belongs to the cluster. Therefore, ensure the instance you are connected to has a status of ONLINE.

For information about how the InnoDB cluster is running, use the cluster's status() method:

mysql-js> var cluster = dba.getCluster()
mysql-js> cluster.status()
{
    "clusterName": "testCluster", 
    "defaultReplicaSet": {
        "name": "default", 
        "primary": "localhost:3320", 
	"ssl": "REQUIRED",
        "status": "OK", 
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.", 
        "topology": {
            "localhost:3310": {
                "address": "localhost:3310", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE"
            }, 
            "localhost:3320": {
                "address": "localhost:3320", 
                "mode": "R/W", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE"
            }, 
            "localhost:3330": {
                "address": "localhost:3330", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE"
            }
        }
    }
}

The output of cluster.status() provides the following information:

  • clusterName: name assigned to this cluster during dba.createCluster().

  • ssl: whether secure connections are used by the cluster or not. Shows values of REQUIRED or DISABLED, depending on how the memberSslMode option was configured during either createCluster() or addInstance(). The value returned by this parameter corresponds to the value of the group_replication_ssl_mode server variable on the instance.

  • status: The status of this element of the cluster. For the overall cluster, this describes the high availability provided by the cluster. For the possible instance states, see the list below.

  • topology: The instances which have been added to the cluster.

  • Host name of instance: The host name of an instance, for example localhost:3310.

  • mode: Whether the instance is read-write or read-only; see the description of mode below.

The instance status is one of the following:

  • ONLINE: The instance is online and participating in the cluster.

  • OFFLINE: The instance has lost connection to the other instances.

  • RECOVERING: The instance is attempting to synchronize with the cluster by retrieving transactions it needs before it can become an ONLINE member.

  • UNREACHABLE: The instance has lost communication with the cluster.

  • ERROR: The instance has encountered an error during the recovery phase or while applying a transaction.

  • (MISSING): The state of an instance which is part of the configured cluster, but is currently unavailable.

    Note

    The MISSING state is specific to InnoDB cluster; it is not a state generated by Group Replication. MySQL Shell uses this state to indicate instances that are registered in the metadata but cannot be found in the live cluster view.

Important

Once an instance enters ERROR state, the super_read_only option is set to ON. To leave the ERROR state you must manually configure the instance with super_read_only=OFF.
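
For example, once the underlying issue has been resolved, you can clear super_read_only from a session connected to the instance (a minimal sketch; run this only after the error condition has been addressed):

mysql-js> session.runSql("SET GLOBAL super_read_only=OFF");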

The mode indicates either R/W (read-write) or R/O (read-only). In single-primary mode, only the instance marked "R/W" can execute transactions that update the database, so it is the PRIMARY. If that instance becomes unreachable for any reason (such as an unexpected halt), one of the remaining "R/O" instances automatically takes over its place and becomes the new "R/W" PRIMARY. In multi-primary mode, multiple instances are marked as "R/W" and there is no elected PRIMARY.

Describing the Structure of the InnoDB Cluster

To get information about the structure of the InnoDB cluster itself, use the cluster.describe() function:

mysql-js> cluster.describe();
{
    "clusterName": "test",
    "adminType": "local",
    "defaultReplicaSet": {
        "name": "default",
        "instances": [
            {
                "name": "localhost:3310",
                "host": "localhost:3310",
                "role": "HA"
            },
            {
                "name": "localhost:3320",
                "host": "localhost:3320",
                "role": "HA"
            },
            {
                "name": "localhost:3330",
                "host": "localhost:3330",
                "role": "HA"
            }
        ]
    }
}

The output from this function shows the structure of the InnoDB cluster, including all of its configuration information.

Managing Sandbox Instances

Once a sandbox instance is running, you can change its status at any time using the following commands; a combined example follows the list:

  • To stop a sandbox instance use dba.stopSandboxInstance(instance). This stops the instance gracefully, unlike dba.killSandboxInstance(instance).

  • To start a sandbox instance use dba.startSandboxInstance(instance).

  • To kill a sandbox instance use dba.killSandboxInstance(instance). This stops the instance without gracefully stopping it and is useful in simulating unexpected halts.

  • To delete a sandbox instance use dba.deleteSandboxInstance(instance). This completely removes the sandbox instance from your file system.
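
For example, the following sequence gracefully stops, restarts, kills, and finally deletes the sandbox instance from the earlier examples (the port 3310 is an assumption carried over from those examples):

mysql-js> dba.stopSandboxInstance(3310)
mysql-js> dba.startSandboxInstance(3310)
mysql-js> dba.killSandboxInstance(3310)
mysql-js> dba.deleteSandboxInstance(3310)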

Removing Instances from the InnoDB Cluster

You can remove an instance from a cluster at any time. This is done with the removeInstance() method, as in the following example:

mysql-js> cluster.removeInstance('root@localhost:3310')

Rejoining a Cluster

If an instance leaves the cluster, for example because it lost connection and did not or could not automatically rejoin the cluster, it might be necessary to rejoin it to the cluster at a later stage. To rejoin an instance to a cluster issue cluster.rejoinInstance().

If an instance has not had its configuration persisted, for example because you have not issued dba.configureLocalInstance() locally on the instance after adding it to a cluster, the instance does not rejoin the cluster automatically upon restart. The solution is to issue cluster.rejoinInstance() so that the instance is added to the cluster again. Then connect to the instance, run MySQL Shell locally, and issue dba.configureLocalInstance(). This persists the InnoDB cluster configuration to the instance's option file and ensures the instance rejoins the cluster automatically in the future.
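
For example, assuming the sandbox instance at port 3310 restarted without its configuration having been persisted (the account and address are assumptions carried over from the earlier examples):

mysql-js> cluster.rejoinInstance('root@localhost:3310')

Then, from a MySQL Shell running locally on the instance:

mysql-js> dba.configureLocalInstance('root@localhost:3310')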

MySQL Router and Metadata Servers

When MySQL Router is bootstrapped, it records the bootstrap server addresses in its configuration. These servers contain metadata used by MySQL Router in order to route correctly. If any additional instances are added to the cluster after bootstrapping MySQL Router, they are automatically detected and used for connection routing. If, however, all of the original metadata servers go offline for some reason, MySQL Router is no longer able to route correctly. Consider the following line in a mysqlrouter.conf file:

...
bootstrap_server_addresses=mysql://192.168.56.101:3310,mysql://192.168.56.101:3320,mysql://192.168.56.101:3330
...

There are three original metadata servers specified here. Now if two additional servers (call them D and E) were added, you would have a five-instance cluster, and MySQL Router routes to these additional instances as required. If the original metadata instances A, B, and C stopped unexpectedly and left the cluster, you would be left with only instances D and E running. At this point, instances D and E are still alive and form a quorum, so it should be possible to route calls to them. However, because all of the original metadata servers (instances A, B, and C) are down, MySQL Router shuts off all routing. In such a situation you can configure MySQL Router manually:

  • Modify the MySQL Router instance's configuration file to specify the correct metadata servers in the bootstrap_server_addresses option.

  • Restart the MySQL Router instance; the updated metadata servers are detected and used. An example configuration change is shown after this list.
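
For example, the bootstrap_server_addresses option could be updated to list the surviving instances D and E (the ports 3340 and 3350 are hypothetical, chosen to continue the example above):

...
bootstrap_server_addresses=mysql://192.168.56.101:3340,mysql://192.168.56.101:3350
...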

Dissolving InnoDB Cluster

To remove all information associated with a cluster, use the cluster.dissolve() method. This removes all metadata and configuration associated with the cluster, and disables Group Replication on the instances. There is no way to undo the dissolving of a cluster; to create it again, use dba.createCluster().

Note

After issuing cluster.dissolve(), any variable assigned to the Cluster object is no longer valid.
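
For example, to dissolve a cluster (a minimal sketch; depending on the MySQL Shell version, dissolving a cluster that still contains active instances may require the force option):

mysql-js> cluster.dissolve({force: true})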

Using MySQL Shell to Execute a Script

You can automate cluster configuration with scripts. For example:

shell> mysqlsh -f setup-innodb-cluster.js

Note

Any command line options specified after the script file name are passed to the script and not to MySQL Shell. You can access those options using the os.argv array in JavaScript, or the sys.argv array in Python. In both cases, the first option picked up in the array is the script name.
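
For example, a script invoked as mysqlsh -f myscript.js somevalue could read its arguments like this (a sketch; the script name and option value are hypothetical):

  // os.argv[0] is the script name; subsequent entries are the options
  var scriptName = os.argv[0];
  var firstOption = os.argv[1];
  print('Running ' + scriptName + ' with option ' + firstOption + '\n');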

The contents of an example script file are shown here:

  print('MySQL InnoDB cluster sandbox set up\n');
  print('==================================\n');
  print('Setting up a MySQL InnoDB cluster with 3 MySQL Server sandbox instances.\n');
  print('The instances will be installed in ~/mysql-sandboxes.\n');
  print('They will run on ports 3310, 3320 and 3330.\n\n');

  var dbPass = shell.prompt('Please enter a password for the MySQL root account: ', {type:"password"});

  try {
     print('\nDeploying the sandbox instances.');
     dba.deploySandboxInstance(3310, {password: dbPass});
     print('.');
     dba.deploySandboxInstance(3320, {password: dbPass});
     print('.');
     dba.deploySandboxInstance(3330, {password: dbPass});
     print('.\nSandbox instances deployed successfully.\n\n');

     print('Setting up InnoDB cluster...\n');
     shell.connect('root@localhost:3310', dbPass);

     var cluster = dba.createCluster("devCluster");

     print('Adding instances to the cluster.');
     cluster.addInstance({user: "root", host: "localhost", port: 3320, password: dbPass});
     print('.');
     cluster.addInstance({user: "root", host: "localhost", port: 3330, password: dbPass});
     print('.\nInstances successfully added to the cluster.');

     print('\nInnoDB cluster deployed successfully.\n');
  } catch(e) {
     print('\nThe InnoDB cluster could not be created.\n\nError: ' +
           e.message + '\n');
  }

Creating a Whitelist of Servers

When using the dba.createCluster(), cluster.addInstance(), and cluster.rejoinInstance() methods, you can optionally specify a list of approved servers that belong to the cluster, referred to as a whitelist. By specifying the whitelist explicitly in this way you can increase the security of your cluster, because only servers in the whitelist can connect to the cluster. By default, if not specified explicitly, the whitelist is automatically set to the private network addresses that the server has network interfaces on. To configure the whitelist, specify the servers to add with the ipWhitelist option when using the method. For example:

mysql-js> cluster.addInstance("root:guidev!@localhost:3320", {ipWhitelist: "10.157.120.0/24, 192.168.1.110"})

This configures the instance to only accept connections from servers in the 10.157.120.0/24 network and from the server at address 192.168.1.110.

Using the ipWhitelist option configures the group_replication_ip_whitelist system variable on the instance.
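
You can verify the resulting value on the instance from an SQL session; for example, using MySQL Shell's session object (a sketch, assuming an open session to the instance):

mysql-js> session.runSql("SHOW VARIABLES LIKE 'group_replication_ip_whitelist'");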

Restoring a Cluster from Quorum Loss

If an instance (or instances) fail, then a cluster can lose its quorum, which is the ability to vote in a new primary. In this case you can re-establish quorum using the cluster.forceQuorumUsingPartitionOf() method, as shown in the following MySQL Shell example:

  // open session to a cluster

mysql-js> cluster = dba.getCluster("devCluster")

  // The cluster lost its quorum and its status shows
  // "status": "NO_QUORUM"

mysql-js> cluster.forceQuorumUsingPartitionOf("localhost:3310")

  Restoring replicaset 'default' from loss of quorum, by using the partition composed of [localhost:3310]

  Please provide the password for 'root@localhost:3310': ******
  Restoring the InnoDB cluster ...

  The InnoDB cluster was successfully restored using the partition from the instance 'root@localhost:3310'.

  WARNING: To avoid a split-brain scenario, ensure that all other members of the replicaset
  are removed or joined back to the group that was restored.

Rebooting a Cluster from a Major Outage

If your cluster suffers a complete outage, you can ensure it is reconfigured correctly using dba.rebootClusterFromCompleteOutage(). When a cluster has completely stopped, the instances must be started first, and only then can the cluster be started. For example, if the machine a sandbox cluster was running on has been restarted, and the instances were at ports 3310, 3320, and 3330, issue:

mysql-js> dba.startSandboxInstance(3310)
mysql-js> dba.startSandboxInstance(3320)
mysql-js> dba.startSandboxInstance(3330)

This ensures the sandbox instances are running. In the case of a production deployment you would have to start the instances outside of MySQL Shell. Once the instances have started, connect to an instance and run MySQL Shell. Then restart the cluster by issuing:

mysql-js> shell.connect('root@localhost:3310');
mysql-js> var cluster = dba.rebootClusterFromCompleteOutage();

This ensures the cluster is correctly reconfigured after a complete outage. It uses the instance that MySQL Shell is connected to as the new seed instance and recovers the cluster based on the existing metadata of that instance.

If this process fails, and the cluster metadata has become badly corrupted, you may need to drop the metadata and create the cluster again from scratch. You can drop the cluster metadata using dba.dropMetaDataSchema().

Warning

The dba.dropMetaDataSchema() method should only be used as a last resort, when it is not possible to restore the cluster. It cannot be undone.

Rescanning a Cluster

If changes to an instance's configuration are made without using AdminAPI, you need to rescan the cluster to update the InnoDB cluster metadata. For example, if a new instance joins the Group Replication group the InnoDB cluster metadata is not modified based on this change to the cluster because MySQL Shell was not used. In such a scenario it is necessary to rescan the cluster with cluster.rescan() to update the InnoDB cluster metadata.

After cluster.rescan() has been run, any newly discovered instances are identified. You are prompted to add each of these instances to the cluster as required, or you can choose to ignore them.

Instances that no longer belong to the cluster or which are unavailable are also reported. In this case you are prompted to remove the instance, or you can later attempt to add it back into the cluster using a command such as cluster.rejoinInstance('instance.example.com:3340').
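
For example, after a new instance has joined the underlying Group Replication group without using AdminAPI, trigger the rescan and follow the prompts:

mysql-js> cluster.rescan()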

Checking Instance State

The cluster.checkInstanceState() function can be used to verify that the existing data on an instance does not prevent it from joining a cluster. This process works by validating the instance's global transaction identifier (GTID) state against the GTIDs already processed by the cluster. For more information on GTIDs see Section 16.1.3.1, "GTID Concepts". This check enables you to determine whether an instance which has processed transactions can be added to the cluster.

The following demonstrates issuing this in a running MySQL Shell:

mysql-js> cluster.checkInstanceState('user@139.59.177.10:3306')

The output of this function can be one of the following:

  • OK new: the instance has not executed any GTID transactions, therefore it cannot conflict with the GTIDs executed by the cluster

  • OK recoverable: the instance has executed GTIDs which do not conflict with the executed GTIDs of the cluster seed instances

  • ERROR diverged: the instance has executed GTIDs which diverge from the executed GTIDs of the cluster seed instances

  • ERROR lost_transactions: the instance has more executed GTIDs than the executed GTIDs of the cluster seed instances

Instances with an OK status can be added to the cluster because any data on the instance is consistent with the cluster. In other words the instance being checked has not executed any transactions which conflict with the GTIDs executed by the cluster, and can be recovered to the same state as the rest of the cluster instances.

Creating an InnoDB Cluster From an Existing Group Replication Deployment

If you have an existing deployment of Group Replication and you want to use it to create a cluster, pass the adoptFromGR option to the dba.createCluster() function. The created InnoDB cluster runs in single-primary or multi-primary mode to match the existing replication group, which means you cannot use the multiPrimary option in combination with the adoptFromGR option.

shell> mysqlsh --uri root@192.168.0.11:3306
Creating a Session to 'root@192.168.0.11:3306'
Enter password: ****
Classic Session successfully established. No default schema selected.

MySQL Shell JavaScript Code:

mysql-js> var cluster = dba.createCluster('prodCluster', {adoptFromGR: true});

A new InnoDB cluster will be created on instance 'root@192.168.0.11:3306'.

Creating InnoDB cluster 'prodCluster' on 'root@192.168.0.11:3306'...
Adding Seed Instance...

Cluster successfully created. Use cluster.addInstance() to add MySQL instances.
At least 3 instances are needed for the cluster to be able to withstand up to
one server failure.
mysql-js> cluster.describe();
{
    "clusterName": "prodCluster",
    "adminType": "local",
    "defaultReplicaSet": {
        "name": "default",
        "instances": [
            {
                "name": "localhost:3306",
                "host": "localhost:3306",
                "role": "HA"
            },
            {
                "name": "localhost:3307",
                "host": "localhost:3307",
                "role": "HA"
            },
            {
                "name": "localhost:3308",
                "host": "localhost:3308",
                "role": "HA"
            }
        ]
    }
}

Securing your Cluster

Server instances can be configured to use secure connections. For general information on using SSL with MySQL, see Section 6.4, "Using Encrypted Connections". This section explains how to configure a cluster to use SSL. An additional security possibility is to configure which servers can access the cluster; see Creating a Whitelist of Servers.

When using createCluster() to set up a cluster, if the server instance provides SSL encryption then it is automatically enabled on the seed instance. Pass the memberSslMode option to the createCluster() method to specify a different SSL mode. The memberSslMode option is a string that configures the SSL mode to be used; it defaults to AUTO. The permitted values are DISABLED, REQUIRED, and AUTO. These modes are defined as follows; an example follows the list:

  • Setting createCluster(memberSslMode=DISABLED) ensures SSL encryption is disabled for the seed instance in the cluster.

  • Setting createCluster(memberSslMode=REQUIRED) ensures SSL encryption is enabled for the seed instance in the cluster. If it cannot be enabled, an error is raised.

  • Setting createCluster(memberSslMode=AUTO) (the default) means SSL encryption is automatically enabled if the server instance supports it, or disabled if the server does not support it.
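
For example, to require SSL encryption for the seed instance when creating a cluster (the cluster name is an assumption for illustration):

mysql-js> var cluster = dba.createCluster('testCluster', {memberSslMode: 'REQUIRED'})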

When you issue the addInstance() and rejoinInstance() commands, SSL encryption on the instance is enabled or disabled based on the setting found for the seed instance. For more control, the addInstance() and rejoinInstance() commands accept the memberSslMode option. The behavior of the commands in this case is as follows; an example follows the list:

  • Setting memberSslMode=DISABLED ensures SSL encryption is disabled for the instance in the cluster.

  • Setting memberSslMode=REQUIRED forces SSL encryption to be enabled for the instance in the cluster.

  • Setting memberSslMode=AUTO (the default) means SSL encryption is automatically enabled or disabled based on the setting used by the seed instance (the other members of the cluster) and the SSL support available on the instance itself.
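
For example, to force SSL encryption when adding an instance (the account and address are assumptions carried over from earlier examples):

mysql-js> cluster.addInstance('root@localhost:3320', {memberSslMode: 'REQUIRED'})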

When using createCluster() with the adoptFromGR option to adopt an existing Group Replication group, no SSL settings are changed on the adopted cluster:

  • memberSslMode cannot be used with adoptFromGR.

  • If the SSL settings of the adopted cluster are different from those supported by MySQL Shell, in other words SSL for Group Replication recovery and Group Communication, neither setting is modified. This means you are not able to add new instances to the cluster unless you change the settings manually for the adopted cluster.

MySQL Shell always enables or disables SSL for the cluster for both Group Replication recovery and Group Communication; see Section 17.5.2, "Secure Socket Layer Support (SSL)". SSL encryption must be enabled or disabled for all instances in the cluster, so when a new instance is added, a verification is performed and an error is issued if its SSL settings differ from those of the seed instance (for example, as the result of a createCluster() using adoptFromGR).

The deploySandboxInstance() command attempts to deploy sandbox instances with SSL encryption support by default. If it is not possible, the server instance is deployed without SSL support. Use the ignoreSslError option set to false to ensure that sandbox instances are deployed with SSL support, issuing an error if SSL support cannot be provided. When ignoreSslError is true, which is the default, no error is issued during the operation if the SSL support cannot be provided and the server instance is deployed without SSL support.
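
For example, to deploy a sandbox instance that must support SSL, raising an error otherwise (the port is an assumption):

mysql-js> dba.deploySandboxInstance(3340, {ignoreSslError: false})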

