21.4 Working with InnoDB Cluster

This section explains how to work with InnoDB cluster, and how to handle common administration tasks.

Checking Instance Configuration

Before creating a production deployment from server instances you need to check that MySQL on each instance is correctly configured. In addition to dba.configureInstance(), which checks the configuration as part of configuring an instance, you can use the dba.checkInstanceConfiguration() function. This checks that the instance satisfies the requirements described in Section 21.2.2, “InnoDB Cluster Requirements”, without changing any configuration on the instance. It does not check any data that is on the instance; see Checking Instance State for more information. The following demonstrates issuing this in a running MySQL Shell:

mysql-js> dba.checkInstanceConfiguration('ic@ic-1:3306')
Please provide the password for 'ic@ic-1:3306': ***
Validating MySQL instance at ic-1:3306 for use in an InnoDB cluster...

This instance reports its own address as ic-1
Clients and other cluster members will communicate with it through this address by default.
If this is not correct, the report_host MySQL system variable should be changed.

Checking whether existing tables comply with Group Replication requirements...
No incompatible tables detected

Checking instance configuration...

Some configuration options need to be fixed:
+--------------------------+---------------+----------------+--------------------------------------------------+
| Variable                 | Current Value | Required Value | Note                                             |
+--------------------------+---------------+----------------+--------------------------------------------------+
| binlog_checksum          | CRC32         | NONE           | Update the server variable                       |
| enforce_gtid_consistency | OFF           | ON             | Update read-only variable and restart the server |
| gtid_mode                | OFF           | ON             | Update read-only variable and restart the server |
| server_id                | 1             |                | Update read-only variable and restart the server |
+--------------------------+---------------+----------------+--------------------------------------------------+

Please use the dba.configureInstance() command to repair these issues.

{
    "config_errors": [
        {
            "action": "server_update",
            "current": "CRC32",
            "option": "binlog_checksum",
            "required": "NONE"
        },
        {
            "action": "restart",
            "current": "OFF",
            "option": "enforce_gtid_consistency",
            "required": "ON"
        },
        {
            "action": "restart",
            "current": "OFF",
            "option": "gtid_mode",
            "required": "ON"
        },
        {
            "action": "restart",
            "current": "1",
            "option": "server_id",
            "required": ""
        }
    ],
    "errors": [],
    "status": "error"
}

Repeat this process for each server instance that you plan to use as part of your cluster. The report generated after running dba.checkInstanceConfiguration() provides information about any configuration changes required before you can proceed. The action field in the config_errors section of the report tells you whether MySQL on the instance requires a restart to detect any change made to the configuration file.
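
If you are scripting these checks, note that dba.checkInstanceConfiguration() also returns the report as an object that you can inspect directly. The following is a minimal sketch only, reusing the 'ic@ic-1:3306' instance from above; the password value is illustrative and is passed as an option to avoid the interactive prompt:

// A sketch only: inspect the returned report and list options that need attention.
var report = dba.checkInstanceConfiguration('ic@ic-1:3306', {password: 'icpassword'});
if (report.status != 'ok') {
  var errors = report.config_errors;
  for (var i = 0; i < errors.length; i++) {
    // The action field is 'restart' for read-only variables and 'server_update' otherwise.
    print(errors[i].option + ': ' + errors[i].current + ' -> ' + errors[i].required +
          ' (' + errors[i].action + ')\n');
  }
}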

Configuring Local Instances

Instances which do not support persisting configuration changes automatically (see Persisting Settings) require you to connect to the server, run MySQL Shell, connect to the instance locally and issue dba.configureLocalInstance(). This enables MySQL Shell to modify the instance's option file after running the following commands against a remote instance:

  • dba.configureInstance()

  • dba.createCluster()

  • Cluster.addInstance()

  • Cluster.removeInstance()

  • Cluster.rejoinInstance()

Important

Failing to persist configuration changes to an instance's option file can result in the instance not rejoining the cluster after the next restart.

The recommended method is to log in to the remote machine, for example using SSH, run MySQL Shell as the root user and then connect to the local MySQL server. For example, use the --uri option to connect to the local instance:

shell> sudo -i mysqlsh --uri=instance

Alternatively use the \connect command to log in to the local instance. Then issue dba.configureLocalInstance(instance), where instance is the connection information for the local instance, to persist any changes made to the local instance's option file.

mysql-js> dba.configureLocalInstance('ic@ic-2:3306')

Repeat this process for each instance in the cluster which does not support persisting configuration changes automatically. For example, if you add 2 instances to a cluster which do not support persisting configuration changes automatically, you must connect to each server and persist the configuration changes required for InnoDB cluster before the instance restarts. Similarly, if you modify the cluster structure, for example by changing the number of instances, you need to repeat this process for each affected server instance to update the InnoDB cluster metadata accordingly.

Retrieving an InnoDB cluster

When you create a cluster using dba.createCluster(), the operation returns a Cluster object which can be assigned to a variable. You use this object to work with the cluster, for example to add instances or check the cluster's status. If you want to retrieve a cluster again at a later date, for example after restarting MySQL Shell, use the dba.getCluster([name],[options]) function. For example:

mysql-js> var cluster1 = dba.getCluster()

If you do not specify a cluster name then the default cluster is returned. By default MySQL Shell attempts to connect to the primary instance of the cluster when you use dba.getCluster(). Set the connectToPrimary option to configure this behavior. If connectToPrimary is true and the active global MySQL Shell session is not to a primary instance, the cluster is queried for the primary member and the cluster object connects to it. If there is no quorum in the cluster, the operation fails. If connectToPrimary is false, the cluster object uses the active session, in other words the same instance as MySQL Shell's current global session. If connectToPrimary is not specified, MySQL Shell behaves as though connectToPrimary were true and, if the primary cannot be contacted, falls back to the behavior of connectToPrimary being false.

To force connecting to a secondary when getting a cluster, establish a connection to that secondary member of the cluster and set the connectToPrimary option to false by issuing:

mysql-js> shell.connect(secondary_member)
mysql-js> var cluster1 = dba.getCluster('testCluster', {connectToPrimary:false})
Tip

Remember that secondary instances have super_read_only=ON, so you cannot write changes to them.

Checking the InnoDB Cluster Status

Cluster objects provide the status() method that enables you to check how a cluster is running. Before you can check the status of the InnoDB cluster, you need to get a reference to the InnoDB cluster object by connecting to any of its instances. However, if you want to make changes to the configuration of the cluster, you must connect to a "R/W" instance. Issuing status() retrieves the status of the cluster based on the view of the cluster that the connected server instance is aware of, and outputs a status report.

Important

The instance's state in the cluster directly influences the information provided in the status report. Therefore ensure the instance you are connected to has a status of ONLINE.

For information about how the InnoDB cluster is running, use the cluster's status() method:

mysql-js> var cluster = dba.getCluster()
mysql-js> cluster.status()
{
    "clusterName": "testcluster", 
    "defaultReplicaSet": {
        "name": "default", 
        "primary": "ic-1:3306", 
        "ssl": "REQUIRED", 
        "status": "OK", 
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.", 
        "topology": {
            "ic-1:3306": {
                "address": "ic-1:3306", 
                "mode": "R/W", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE"
            }, 
            "ic-2:3306": {
                "address": "ic-2:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE"
            }, 
            "ic-3:3306": {
                "address": "ic-3:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE"
            }
        }
    }, 
    "groupInformationSourceMember": "mysql://ic@ic-1:3306"
}

The output of cluster.status() provides the following information:

  • clusterName: name assigned to this cluster during dba.createCluster().

  • defaultReplicaSet: the server instances which belong to an InnoDB cluster and contain the data set.

  • primary: displayed when the cluster is operating in single-primary mode only. Shows the address of the current primary instance. If this field is not displayed, the cluster is operating in multi-primary mode.

  • ssl: whether secure connections are used by the cluster or not. Shows values of REQUIRED or DISABLED, depending on how the memberSslMode option was configured during either createCluster() or addInstance(). The value returned by this parameter corresponds to the value of the group_replication_ssl_mode server variable on the instance. See Securing your Cluster.

  • status: The status of this element of the cluster. For the overall cluster this describes the high availability provided by this cluster. The status is one of the following:

    • ONLINE: The instance is online and participating in the cluster.

    • OFFLINE: The instance has lost connection to the other instances.

    • RECOVERING: The instance is attempting to synchronize with the cluster by retrieving transactions it needs before it can become an ONLINE member.

    • UNREACHABLE: The instance has lost communication with the cluster.

    • ERROR: The instance has encountered an error during the recovery phase or while applying a transaction.

      Important

      Once an instance enters ERROR state, the super_read_only option is set to ON. To leave the ERROR state you must manually configure the instance with super_read_only=OFF.

    • (MISSING): The state of an instance which is part of the configured cluster, but is currently unavailable.

      Note

      The MISSING state is specific to InnoDB cluster; it is not a state generated by Group Replication. MySQL Shell uses this state to indicate instances that are registered in the metadata, but cannot be found in the live cluster view.

  • topology: The instances which have been added to the cluster.

  • Host name of instance: The host name of an instance, for example localhost:3310.

  • role: what function this instance provides in the cluster. Currently only HA, for high availability.

  • mode: whether the server is read-write ("R/W") or read-only ("R/O"). In single-primary mode, only the one instance marked "R/W" can execute transactions that update the database, so it is the primary. If that instance becomes unreachable for any reason (such as an unexpected halt), one of the remaining "R/O" instances automatically takes over its place and becomes the new "R/W" primary. In multi-primary mode, all instances are marked as "R/W" and there is no single elected primary.

  • groupInformationSourceMember: the internal connection used to get information about the cluster, shown as a URI-type string. Usually the connection initially used to create the cluster.
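
Because status() returns this report as an object, you can also act on it from a script. The following is a minimal sketch, assuming the cluster object retrieved above; it reads only the status, statusText, and primary fields shown in the example output:

// A sketch only: report the primary if the cluster is fully healthy, otherwise print why not.
var report = cluster.status();
var rs = report.defaultReplicaSet;
if (rs.status == 'OK') {
  print('Cluster healthy, primary is ' + rs.primary + '\n');
} else {
  print('Cluster degraded: ' + rs.statusText + '\n');
}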

Describing the Structure of the InnoDB Cluster

To get information about the structure of the InnoDB cluster itself, use the cluster.describe() function:

mysql-js> cluster.describe();
{
    "clusterName": "testCluster",
    "defaultReplicaSet": {
        "name": "default",
        "topology": [
            {
                "address": "ic-1:3306",
                "label": "ic-1:3306",
                "role": "HA"
            },
            {
                "address": "ic-2:3306",
                "label": "ic-1:2306",
                "role": "HA"
            },
            {
                "address": "ic-3:3306",
                "label": "ic-3:3306",
                "role": "HA"
            }
        ]
    }
}

The output from this function shows the structure of the InnoDB cluster, including all of its configuration information. The address, label, and role values match those described at Checking the InnoDB Cluster Status.
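
As with status(), the report returned by describe() can be consumed from a script. A minimal sketch, assuming the cluster object from the previous examples:

// A sketch only: list the instances registered in the cluster metadata.
var desc = cluster.describe();
var members = desc.defaultReplicaSet.topology;
for (var i = 0; i < members.length; i++) {
  print(members[i].label + ' at ' + members[i].address + ' (' + members[i].role + ')\n');
}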

Super Read-only and Instances

Whenever Group Replication stops, the super_read_only variable is set to ON to ensure no writes are made to the instance. When you try to use such an instance with the following AdminAPI commands you are given the choice to set super_read_only=OFF on the instance:

  • dba.configureInstance()

  • dba.configureLocalInstance()

  • dba.createCluster()

  • dba.rebootClusterFromCompleteOutage()

  • dba.dropMetadataSchema()

When AdminAPI encounters an instance which has super_read_only=ON, in interactive mode you are given the choice to set super_read_only=OFF. For example:

mysql-js> var myCluster = dba.createCluster('testCluster')
A new InnoDB cluster will be created on instance 'ic@ic-1:3306'.

The MySQL instance at 'ic@ic-1:3306' currently has the super_read_only 
system variable set to protect it from inadvertent updates from applications. 
You must first unset it to be able to perform any changes to this instance. 
For more information see: https://dev.mysql.com/doc/refman/en/server-system-variables.html#sysvar_super_read_only.

Note: there are open sessions to 'ic@ic-1:3306'.
You may want to kill these sessions to prevent them from performing unexpected updates: 

1 open session(s) of 'ic@ic-1:3306'. 


Do you want to disable super_read_only and continue? [y|N]:

The number of currently active sessions to the instance is shown. You must ensure that no applications can inadvertently write to the instance. By answering y you confirm that AdminAPI can write to the instance. If more than one open session to the instance is listed, exercise caution before permitting AdminAPI to set super_read_only=OFF.

To force the function to set super_read_only=OFF in a script, pass the clearReadOnly option set to true. For example dba.configureInstance(instance, {clearReadOnly: true}).
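
For example, the following sketch creates a cluster without prompting on an instance that currently has super_read_only=ON; the connection details and password are the same illustrative values used elsewhere in this section:

// A sketch only: clearReadOnly:true lets AdminAPI set super_read_only=OFF without prompting.
shell.connect('ic@ic-1:3306', 'icpassword');
var myCluster = dba.createCluster('testCluster', {clearReadOnly: true});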

Managing Sandbox Instances

Once a sandbox instance is running, it is possible to change its status at any time using the following:

  • To stop a sandbox instance use dba.stopSandboxInstance(instance). This stops the instance gracefully, unlike dba.killSandboxInstance(instance).

  • To start a sandbox instance use dba.startSandboxInstance(instance).

  • To kill a sandbox instance use dba.killSandboxInstance(instance). This stops the instance without gracefully stopping it and is useful in simulating unexpected halts.

  • To delete a sandbox instance use dba.deleteSandboxInstance(instance). This completely removes the sandbox instance from your file system.
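
For example, the following sketch simulates an unexpected halt of the sandbox instance on port 3310, brings it back, and finally stops and deletes it; the sandbox root password is an illustrative value:

// A sketch only: exercise the sandbox lifecycle commands described above.
dba.killSandboxInstance(3310);                             // simulate a crash
dba.startSandboxInstance(3310);                            // bring it back online
dba.stopSandboxInstance(3310, {password: 'sandboxpwd'});   // graceful shutdown
dba.deleteSandboxInstance(3310);                           // remove its files completely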

Removing Instances from the InnoDB Cluster

You can remove an instance from a cluster at any time should you wish to do so. This can be done with the Cluster.removeInstance(instance) method, as in the following example:

mysql-js> cluster.removeInstance('root@localhost:3310')

The instance will be removed from the InnoDB cluster. Depending on the instance
being the Seed or not, the Metadata session might become invalid. If so, please
start a new session to the Metadata Storage R/W instance.

Attempting to leave from the Group Replication group...

The instance 'localhost:3310' was successfully removed from the cluster.

You can optionally pass in the interactive option to control whether you are prompted to confirm the removal of the instance from the cluster. In interactive mode, you are prompted to continue with the removal of the instance (or not) in case it is not reachable. The cluster.removeInstance() operation ensures that the instance is removed from the metadata of all the cluster members which are ONLINE, and the instance itself.

When the instance being removed still has transactions to apply, AdminAPI waits for up to the number of seconds configured by the MySQL Shell dba.gtidWaitTimeout option for the transactions (GTIDs) to be applied. The dba.gtidWaitTimeout option has a default value of 60 seconds; see Configuring MySQL Shell for information on changing the default. If the timeout is reached while waiting for transactions to be applied and the force option is false (or not defined), an error is issued and the remove operation is aborted. If the timeout is reached and force is set to true, the operation continues without an error and removes the instance from the cluster.

Important

The force option should only be used with Cluster.removeInstance(instance) when you want to ignore errors, for example unprocessed transactions or an instance being UNREACHABLE, and do not plan to reuse the instance with the cluster. Ignoring errors when removing an instance from the cluster could result in an instance which is not in synchrony with the cluster, preventing it from rejoining the cluster at a later time. Only use the force option when you no longer plan to use the instance with the cluster; in all other cases always try to recover the instance and only remove it when it is available and healthy, in other words with the status ONLINE.
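
For scripted removals, the following sketch raises the timeout and then forces removal of an instance that is not going to be reused; the connection string is illustrative, and force: true should only be used under the conditions described in the note above:

// A sketch only: allow more time for outstanding transactions, then force the removal.
shell.options['dba.gtidWaitTimeout'] = 120;
cluster.removeInstance('ic@ic-3:3306', {force: true});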

Customizing InnoDB clusters

When you create a cluster and add instances to it, values such as the group name, the local address, and the seed instances are configured automatically by AdminAPI. These default values are recommended for most deployments, but advanced users can override them by passing the following options to dba.createCluster() and cluster.addInstance().

To customize the name of the replication group created by InnoDB cluster, pass the groupName option to the dba.createCluster() command. This sets the group_replication_group_name system variable. The name must be a valid UUID.

To customize the address which an instance provides for connections from other instances, pass the localAddress option to the dba.createCluster() and cluster.addInstance() commands. Specify the address in the format host:port. This sets the group_replication_local_address system variable on the instance. The address must be accessible to all instances in the cluster, and must be reserved for internal cluster communication only. In other words do not use this address for communication with the instance.

To customize the instances used as seeds when an instance joins the cluster, pass the groupSeeds option to the dba.createCluster() and cluster.addInstance() commands. Seed instances are contacted when a new instance joins a cluster and used to provide data to the new instance. The addresses are specified as a comma separated list such as host1:port1,host2:port2. This configures the group_replication_group_seeds system variable.

For more information see the documentation of the system variables configured by these AdminAPI options.
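
For example, the following sketch creates a cluster with an explicit group name, local address, and seed list, and then adds an instance with its own local address; the UUID, host names, and ports are illustrative values only:

// A sketch only: each option overrides a default that AdminAPI would otherwise choose.
var cluster = dba.createCluster('testCluster', {
  groupName: '550e8400-e29b-41d4-a716-446655440000',
  localAddress: 'ic-1:33061',
  groupSeeds: 'ic-1:33061,ic-2:33061'
});
cluster.addInstance('ic@ic-2:3306', {localAddress: 'ic-2:33061', groupSeeds: 'ic-1:33061'});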

Rejoining a Cluster

If an instance leaves the cluster, for example because it lost connection and did not or could not automatically rejoin the cluster, it might be necessary to rejoin it to the cluster at a later stage. To rejoin an instance to a cluster issue cluster.rejoinInstance().

Tip

If the instance has super_read_only=ON then you might need to confirm that AdminAPI can set super_read_only=OFF. See Super Read-only and Instances for more information.

In the case where an instance has not had its configuration persisted (see Persisting Settings), it does not rejoin the cluster automatically after a restart. The solution is to issue cluster.rejoinInstance() so that the instance is added to the cluster again, and to ensure the changes are persisted, as in the sketch below. Once the InnoDB cluster configuration is persisted to the instance's option file, it rejoins the cluster automatically.
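
A minimal sketch, reusing the ic-2 instance from earlier examples (the second command must be run in a MySQL Shell on ic-2's own host):

// A sketch only: rejoin the instance, then persist its configuration so it rejoins automatically next time.
cluster.rejoinInstance('ic@ic-2:3306');
// Run locally on the ic-2 host to persist the settings to its option file:
dba.configureLocalInstance('ic@ic-2:3306');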

Restoring a Cluster from Quorum Loss

If an instance (or instances) fail, then a cluster can lose its quorum, which is the ability to vote in a new primary. If you have an instance which contains the InnoDB cluster metadata, it is possible to restore the cluster. This assumes you can connect to an instance that contains the InnoDB cluster metadata, and that instance can contact the other instances you want to use to restore the cluster.

Important

This operation is potentially dangerous because it can create a split-brain scenario if incorrectly used and should be considered a last resort. Make absolutely sure that there are no partitions of this group that are still operating somewhere in the network, but not accessible from your location.

You connect to an instance which contains the cluster's metadata, then use the Cluster.forceQuorumUsingPartitionOf(instance) operation, which restores the cluster based on the metadata on instance. All the instances that are ONLINE from the point of view of the given instance are then added to the restored cluster.

mysql-js> cluster.forceQuorumUsingPartitionOf("localhost:3310")

  Restoring replicaset 'default' from loss of quorum, by using the partition composed of [localhost:3310]

  Please provide the password for 'root@localhost:3310': ******
  Restoring the InnoDB cluster ...

  The InnoDB cluster was successfully restored using the partition from the instance 'root@localhost:3310'.

  WARNING: To avoid a split-brain scenario, ensure that all other members of the replicaset
  are removed or joined back to the group that was restored.

In the event that an instance is not automatically added to the cluster, for example if its settings were not persisted, use Cluster.rejoinInstance() to manually add the instance back to the cluster.

The restored cluster might not, and does not have to, consist of all of the original instances which made up the cluster. For example, if the original cluster consisted of the following five instances:

  • ic-1

  • ic-2

  • ic-3

  • ic-4

  • ic-5

and the cluster experienced a split-brain scenario, with ic-1, ic-2, and ic-3 forming one partition while ic-4 and ic-5 formed another partition, then connecting to ic-1 and issuing Cluster.forceQuorumUsingPartitionOf('user@ic-1:3306') to restore the cluster would result in a cluster consisting of these three instances:

  • ic-1

  • ic-2

  • ic-3

because ic-1 sees ic-2 and ic-3 as ONLINE and does not see ic-4 and ic-5.

Rebooting a Cluster from a Major Outage

If your cluster suffers from a complete outage, you can ensure it is reconfigured correctly using dba.rebootClusterFromCompleteOutage(). In the event that a cluster has completely stopped, the instances must be started and only then can the cluster be started. For example if the machine a sandbox cluster was running on has been restarted, and the instances were at ports 3310, 3320 and 3330, issue:

mysql-js> dba.startSandboxInstance(3310)
mysql-js> dba.startSandboxInstance(3320)
mysql-js> dba.startSandboxInstance(3330)

This ensures the sandbox instances are running. In the case of a production deployment you would have to start the instances outside of MySQL Shell. Once the instances have started, connect to an instance and run MySQL Shell. Then restart the cluster by issuing:

mysql-js> shell.connect('root@localhost:3310');
mysql-js> var cluster = dba.rebootClusterFromCompleteOutage();
Tip

If the instance has super_read_only=ON then you might need to confirm that AdminAPI can set super_read_only=OFF. See Super Read-only and Instances for more information.

This ensures the cluster is correctly reconfigured after a complete outage. It uses the instance that MySQL Shell is connected to as the new seed instance and recovers the cluster based on the existing metadata of that instance.

If this process fails, and the cluster metadata has become badly corrupted, you might need to drop the metadata and create the cluster again from scratch. You can drop the cluster metadata using dba.dropMetadataSchema().

Warning

The dba.dropMetadataSchema() method should only be used as a last resort, when it is not possible to restore the cluster. It cannot be undone.

Rescanning a Cluster

If changes to an instance's configuration are made without using AdminAPI, you need to rescan the cluster to update the InnoDB cluster metadata. For example, if you manually add a new instance to the Group Replication group, the InnoDB cluster metadata is not modified based on this change to the cluster because MySQL Shell was not used. In such a scenario it is necessary to rescan the cluster with cluster.rescan() to update the InnoDB cluster metadata.

After cluster.rescan() has been run, any newly discovered instances are identified. You are prompted to add each of these newly discovered instances into your cluster as required, or you can choose to ignore them.

Instances that no longer belong to the cluster or which are unavailable are also reported. In this case you are prompted to remove the instance, or you can later attempt to add it back into the cluster using a command such as cluster.rejoinInstance('ic@ic-4:3306').

Checking Instance State

The cluster.checkInstanceState() function can be used to verify that the existing data on an instance does not prevent it from joining a cluster. This process works by validating the instance's global transaction identifier (GTID) state compared to the GTIDs already processed by the cluster. For more information on GTIDs see Section 17.1.3.1, “GTID Format and Storage”. This check enables you to determine if an instance which has processed transactions can be added to the cluster.

The following demonstrates issuing this in a running MySQL Shell:

mysql-js> cluster.checkInstanceState('ic@ic-4:3306')

The output of this function can be one of the following:

  • OK new: the instance has not executed any GTID transactions, therefore it cannot conflict with the GTIDs executed by the cluster

  • OK recoverable: the instance has executed GTIDs which do not conflict with the executed GTIDs of the cluster seed instances

  • ERROR diverged: the instance has executed GTIDs which diverge with the executed GTIDs of the cluster seed instances

  • ERROR lost_transactions: the instance has more executed GTIDs than the executed GTIDs of the cluster seed instances

Instances with an OK status can be added to the cluster because any data on the instance is consistent with the cluster. In other words the instance being checked has not executed any transactions which conflict with the GTIDs executed by the cluster, and can be recovered to the same state as the rest of the cluster instances.
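
From a script you can use the returned report to decide whether to add the instance. The following is a minimal sketch; it assumes the report exposes the result as state and reason fields (for example 'ok' and 'new'), which is an assumption for illustration:

// A sketch only: add ic-4 only if its GTID set is compatible with the cluster.
var result = cluster.checkInstanceState('ic@ic-4:3306');
if (result.state == 'ok') {            // assumed field name, see lead-in
  cluster.addInstance('ic@ic-4:3306');
} else {
  print('Cannot add ic-4, GTID check returned: ' + result.reason + '\n');
}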

Dissolving an InnoDB Cluster

To dissolve an InnoDB cluster you connect to a read-write instance, for example the primary in a single-primary cluster, and use the Cluster.dissolve() command. This removes all metadata and configuration associated with the cluster, and disables Group Replication on the instances. Any data that was replicated between the instances is not removed. There is no way to undo the dissolving of a cluster; to create it again, use dba.createCluster(). Therefore you must pass force: true to confirm that you want to dissolve the cluster. For example:

mysql-js> session
<ClassicSession:root@localhost:3310>
mysql-js> cluster.dissolve({force:true})
The cluster was successfully dissolved.
Replication was disabled but user data was left intact.
Note

After issuing cluster.dissolve(), any variable assigned to the Cluster object is no longer valid.

Securing your Cluster

Server instances can be configured to use secure connections. For general information on using SSL with MySQL see Section 6.4, “Using Encrypted Connections”. This section explains how to configure a cluster to use SSL. An additional security possibility is to configure which servers can access the cluster, see Creating a Whitelist of Servers.

Important

Once you have configured a cluster to use SSL you must add the servers to the ipWhitelist.

When using dba.createCluster() to set up a cluster, if the server instance provides SSL encryption then it is automatically enabled on the seed instance. Pass the memberSslMode option to the dba.createCluster() method to specify a different SSL mode. The SSL mode of a cluster can only be set at the time of creation. The memberSslMode option is a string that configures the SSL mode to be used; it defaults to AUTO. The permitted values are DISABLED, REQUIRED, and AUTO. These modes are defined as:

  • Setting createCluster({memberSslMode:'DISABLED'}) ensures SSL encryption is disabled for the seed instance in the cluster.

  • Setting createCluster({memberSslMode:'REQUIRED'}) ensures SSL encryption is enabled for the seed instance in the cluster. If it cannot be enabled an error is raised.

  • Setting createCluster({memberSslMode:'AUTO'}) (the default) enables SSL encryption automatically if the server instance supports it, or disables it if the server does not support it.

Note

When using the commercial version of MySQL, SSL is enabled by default and you might need to configure the whitelist for all instances. See Creating a Whitelist of Servers.

When you issue the cluster.addInstance() and cluster.rejoinInstance() commands, SSL encryption on the instance is enabled or disabled based on the setting found for the seed instance.

When using createCluster() with the adoptFromGR option to adopt an existing Group Replication group, no SSL settings are changed on the adopted cluster:

  • memberSslMode cannot be used with adoptFromGR.

  • If the SSL settings of the adopted cluster are different from the ones supported by MySQL Shell, in other words SSL for Group Replication recovery and Group Communication, the settings are not modified. This means you are not able to add new instances to the cluster unless you change the settings manually for the adopted cluster.

MySQL Shell always enables or disables SSL for the cluster for both Group Replication recovery and Group Communication; see Section 18.5.2, “Secure Socket Layer Support (SSL)”. SSL encryption must be either enabled or disabled for all instances in the cluster, so when adding a new instance a verification is performed and an error is issued if its settings differ from those of the seed instance (for example as the result of a dba.createCluster() using adoptFromGR).

The deploySandboxInstance() command attempts to deploy sandbox instances with SSL encryption support by default. If it is not possible, the server instance is deployed without SSL support. Use the ignoreSslError option set to false to ensure that sandbox instances are deployed with SSL support, issuing an error if SSL support cannot be provided. When ignoreSslError is true, which is the default, no error is issued during the operation if the SSL support cannot be provided and the server instance is deployed without SSL support.
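
For example, the following sketch deploys a sandbox instance that must support SSL and then creates a cluster that requires encrypted group connections; the port and password are illustrative values:

// A sketch only: ignoreSslError:false fails the deployment if SSL support cannot be provided,
// and memberSslMode:'REQUIRED' fails cluster creation if SSL cannot be enabled on the seed.
dba.deploySandboxInstance(3340, {password: 'sandboxpwd', ignoreSslError: false});
shell.connect('root@localhost:3340', 'sandboxpwd');
var cluster = dba.createCluster('testCluster', {memberSslMode: 'REQUIRED'});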

Creating a Whitelist of Servers

When using a cluster's createCluster(), addInstance(), and rejoinInstance() methods you can optionally specify a list of approved servers that belong to the cluster, referred to as a whitelist. By specifying the whitelist explicitly in this way you can increase the security of your cluster because only servers in the whitelist can connect to the cluster. Using the ipWhitelist option configures the group_replication_ip_whitelist system variable on the instance. By default, if not specified explicitly, the whitelist is automatically set to the private network addresses that the server has network interfaces on. To configure the whitelist, specify the servers to add with the ipWhitelist option when using the method. Pass the servers as a comma separated list, surrounded by quotes. For example:

mysql-js> cluster.addInstance("ic@ic-3:3306", {ipWhitelist: "203.0.113.0/24, 198.51.100.110"})

This configures the instance to only accept connections from servers at addresses 203.0.113.0/24 and 198.51.100.110. The whitelist can also include host names, which are resolved only when a connection request is made by another server.

Warning

Host names are inherently less secure than IP addresses in a whitelist. MySQL carries out FCrDNS verification, which provides a good level of protection, but can be compromised by certain types of attack. Specify host names in your whitelist only when strictly necessary, and ensure that all components used for name resolution, such as DNS servers, are maintained under your control. You can also implement name resolution locally using the hosts file, to avoid the use of external components.

Using MySQL Shell to Execute a Script

You can automate cluster configuration with scripts. For example:

shell> mysqlsh -f setup-innodb-cluster.js
Note

Any command line options specified after the script file name are passed to the script and not to MySQL Shell. You can access those options using the os.argv array in JavaScript, or the sys.argv array in Python. In both cases, the first option picked up in the array is the script name.
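
For example, a short sketch of a script that prints the options passed to it (saved as, say, show-args.js, a hypothetical file name, and run with mysqlsh -f show-args.js --target=ic-1):

// A sketch only: os.argv[0] is the script name, later entries are the options passed after it.
print('Running ' + os.argv[0] + '\n');
for (var i = 1; i < os.argv.length; i++) {
  print('Option: ' + os.argv[i] + '\n');
}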

The contents of an example script file are shown here:

print('MySQL InnoDB cluster sandbox set up\n');
print('==================================\n');
print('Setting up a MySQL InnoDB cluster with 3 MySQL Server sandbox instances.\n');
print('The instances will be installed in ~/mysql-sandboxes.\n');
print('They will run on ports 3310, 3320 and 3330.\n\n');

var dbPass = shell.prompt('Please enter a password for the MySQL root account: ', {type:"password"});

try {
   print('\nDeploying the sandbox instances.');
   dba.deploySandboxInstance(3310, {password: dbPass});
   print('.');
   dba.deploySandboxInstance(3320, {password: dbPass});
   print('.');
   dba.deploySandboxInstance(3330, {password: dbPass});
   print('.\nSandbox instances deployed successfully.\n\n');

   print('Setting up InnoDB cluster...\n');
   shell.connect('root@localhost:3310', dbPass);

   var cluster = dba.createCluster("prodCluster");

   print('Adding instances to the cluster.');
   cluster.addInstance({user: "root", host: "localhost", port: 3320, password: dbPass});
   print('.');
   cluster.addInstance({user: "root", host: "localhost", port: 3330, password: dbPass});
   print('.\nInstances successfully added to the cluster.');

   print('\nInnoDB cluster deployed successfully.\n');
} catch(e) {
   print('\nThe InnoDB cluster could not be created.\n\nError: ' +
         e.message + '\n');
}
