7.5 Configuring InnoDB Cluster

This section describes how to use AdminAPI for more detailed configuration of an InnoDB Cluster, both during the cluster creation process and after you have created it. You can use this information to modify the settings that AdminAPI applies by default when you create a cluster.

Setting Options for InnoDB Cluster

You can check and modify the settings in place for an InnoDB Cluster while the instances are online. To check the current settings of a cluster, use the following operation:

  • Cluster.options(), which lists the configuration options for the cluster and its instances. A boolean option all can also be specified to include information about all Group Replication system variables in the output.
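
For example, assuming the cluster has already been created, a quick check of its current settings looks like this:

mysql-js> var mycluster = dba.getCluster()
mysql-js> mycluster.options()            // options for the cluster and its instances
mysql-js> mycluster.options({all: true}) // also list all Group Replication system variables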

You can configure the options of an InnoDB Cluster at the cluster level or at the level of an individual instance while the instances remain online. This avoids the need to remove, reconfigure, and then re-add an instance in order to change InnoDB Cluster options. Use the following operations:

  • Cluster.setOption(option, value) to change a setting globally for all cluster instances, or to change a cluster-wide setting such as clusterName.

  • Cluster.setInstanceOption(instance, option, value) to change the setting of an individual cluster instance.

The way in which you use InnoDB Cluster options with the operations listed depends on whether the option can be changed to be the same on all instances or not. These options are changeable at both the cluster (all instances) and per instance level:

  • autoRejoinTries: the number of attempts an instance makes to rejoin the cluster after being expelled

  • exitStateAction: the action an instance takes if it leaves the cluster unexpectedly

  • memberWeight: the percentage weight of the instance in automatic primary elections on failover

These options are changeable at the cluster level only:

  • clusterName: a string identifier of the cluster

  • consistency: the consistency guarantee used when a new primary is elected

This option is changeable at the per instance level only:

  • label: a string identifier of the instance
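
For example, using illustrative host names in the same ic-1 style as the examples later in this section, you could rename the cluster globally and relabel a single instance as follows:

mysql-js> var mycluster = dba.getCluster()
mysql-js> mycluster.setOption('clusterName', 'testCluster')
mysql-js> mycluster.setInstanceOption('icadmin@ic-2:3306', 'label', 'ic-2-label')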

Customizing InnoDB Cluster Member Servers

When you create a cluster and add instances to it, values such as the group name and the local address are configured automatically by AdminAPI. The default values are recommended for most deployments, but advanced users can override the defaults by passing the following options to the dba.createCluster() and Cluster.addInstance() commands:

  • Pass the groupName option to the dba.createCluster() command to customize the name of the replication group created by InnoDB Cluster. This sets the group_replication_group_name system variable. The name must be a valid UUID.

  • Pass the localAddress option to the dba.createCluster() and Cluster.addInstance() commands to customize the address which an instance provides for connections from other instances. Specify the address in the format host:port. This sets the group_replication_local_address system variable on the instance. The address must be accessible to all instances in the cluster, and must be reserved for internal cluster communication only. In other words, do not use this address for communication with the instance.

For more information see the documentation of the system variables configured by these AdminAPI options.
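
For example, this sketch overrides both defaults, using a hypothetical UUID and the illustrative ic-2 host name, and assumes that port 33061 is reserved for internal cluster communication:

dba.createCluster('testCluster', {groupName: 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee'})
var mycluster = dba.getCluster()
mycluster.addInstance('icadmin@ic-2:3306', {localAddress: 'ic-2:33061'})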

Configuring the Election Process

You can optionally configure how a single-primary cluster elects a new primary, for example to prefer one instance as the new primary to fail over to. Pass the memberWeight option to the dba.createCluster() and Cluster.addInstance() methods when creating your cluster. The memberWeight option accepts an integer value between 0 and 100, which is a percentage weight for automatic primary election on failover. When an instance has a higher percentage number set by memberWeight, it is more likely to be elected as primary in a single-primary cluster. When a primary election takes place, if multiple instances have the same memberWeight value, the instances are prioritized by their server UUIDs in lexicographical order, and the instance with the lowest UUID is elected.

Setting the value of memberWeight configures the group_replication_member_weight system variable on the instance. Group Replication limits the value range from 0 to 100, automatically adjusting it if a higher or lower value is provided. Group Replication uses a default value of 50 if no value is provided. See Single-Primary Mode for more information.

For example, to configure a cluster where ic-3 is the preferred instance to fail over to in the event that ic-1, the current primary, leaves the cluster unexpectedly, use memberWeight as follows:

dba.createCluster('cluster1', {memberWeight:35})         // issued on ic-1, the current primary
var mycluster = dba.getCluster()
mycluster.addInstance('icadmin@ic-2', {memberWeight:25})
mycluster.addInstance('icadmin@ic-3', {memberWeight:50}) // highest weight: preferred failover target

Configuring Failover Consistency

Group Replication provides the ability to specify the failover guarantees (eventual or read your writes) if a primary failover happens in single-primary mode (see Configuring Transaction Consistency Guarantees). You can configure the failover guarantees of an InnoDB Cluster at creation by passing the consistency option (prior to version 8.0.16 this option was the failoverConsistency option, which is now deprecated) to the dba.createCluster() operation, which configures the group_replication_consistency system variable on the seed instance. This option defines the behavior of a new fencing mechanism used when a new primary is elected in a single-primary group. The fencing restricts connections from writing and reading from the new primary until it has applied any pending backlog of changes that came from the old primary (sometimes referred to as read your writes). While the fencing mechanism is in place, applications effectively do not see time going backward for a short period of time while any backlog is applied. This ensures that applications do not read stale information from the newly elected primary.

The consistency option is only supported if the target MySQL server version is 8.0.14 or later. Instances added to a cluster that has been configured with the consistency option are automatically configured so that group_replication_consistency is the same on all cluster members that support the option. The default value of the variable is controlled by Group Replication and is EVENTUAL; change the consistency option to BEFORE_ON_PRIMARY_FAILOVER to enable the fencing mechanism. Alternatively, use consistency=0 for EVENTUAL and consistency=1 for BEFORE_ON_PRIMARY_FAILOVER.
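
For example, to create a cluster with the fencing mechanism enabled from the start (the cluster name is illustrative):

mysql-js> dba.createCluster('testCluster', {consistency: 'BEFORE_ON_PRIMARY_FAILOVER'})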

Note

Using the consistency option on a multi-primary InnoDB Cluster has no effect but is allowed because the cluster can later be changed into single-primary mode with the Cluster.switchToSinglePrimaryMode() operation.

Configuring Automatic Rejoin of Instances

Instances running MySQL 8.0.16 and later support the Group Replication automatic rejoin functionality, which enables you to configure instances to automatically rejoin the cluster after being expelled. See Responses to Failure Detection and Network Partitioning for background information. AdminAPI provides the autoRejoinTries option to configure the number of tries instances make to rejoin the cluster after being expelled. By default instances do not automatically rejoin the cluster. You can configure the autoRejoinTries option at either the cluster level or for an individual instance using the following commands:

  • dba.createCluster()

  • Cluster.addInstance()

  • Cluster.setOption()

  • Cluster.setInstanceOption()

The autoRejoinTries option accepts integer values between 0 and 2016, and the default value is 0, which means that instances do not try to automatically rejoin. When you are using the automatic rejoin functionality, your cluster is more tolerant of faults, especially temporary ones such as unreliable networks. But if quorum has been lost, you should not expect members to automatically rejoin the cluster, because a majority is required to rejoin instances.
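
For example, to allow three rejoin attempts for all current cluster members, and five attempts for one specific instance (the host name is illustrative):

mysql-js> var mycluster = dba.getCluster()
mysql-js> mycluster.setOption('autoRejoinTries', 3)
mysql-js> mycluster.setInstanceOption('icadmin@ic-2:3306', 'autoRejoinTries', 5)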

Instances running MySQL version 8.0.12 and later have the group_replication_exit_state_action variable, which you can configure using the AdminAPI exitStateAction option. This controls what instances do in the event of leaving the cluster unexpectedly. By default the exitStateAction option is READ_ONLY, which means that instances which leave the cluster unexpectedly become read-only. If exitStateAction is set to OFFLINE_MODE (available from MySQL 8.0.18), instances which leave the cluster unexpectedly become read-only and also enter offline mode, where they disconnect existing clients and do not accept new connections (except from clients with administrator privileges). If exitStateAction is set to ABORT_SERVER then in the event of leaving the cluster unexpectedly, the instance shuts down MySQL, and it has to be started again before it can rejoin the cluster. Note that when you are using the automatic rejoin functionality, the action configured by the exitStateAction option only happens in the event that all attempts to rejoin the cluster fail.
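
For example, a sketch that makes all current cluster members enter offline mode if they leave the cluster unexpectedly:

mysql-js> var mycluster = dba.getCluster()
mysql-js> mycluster.setOption('exitStateAction', 'OFFLINE_MODE')  // OFFLINE_MODE requires MySQL 8.0.18 or later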

There is a chance you might connect to an instance and try to configure it using the AdminAPI, but at that moment the instance could be rejoining the cluster. This could happen whenever you use any of these operations:

  • Cluster.status()

  • dba.getCluster()

  • Cluster.rejoinInstance()

  • Cluster.addInstance()

  • Cluster.removeInstance()

  • Cluster.rescan()

  • Cluster.checkInstanceState()

These operations might provide extra information while the instance is automatically rejoining the cluster. In addition, when you are using Cluster.removeInstance(), if the target instance is automatically rejoining the cluster the operation aborts unless you pass in force:true.
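
For example, to remove an instance even while it is in the process of automatically rejoining the cluster (the host name is illustrative):

mysql-js> var mycluster = dba.getCluster()
mysql-js> mycluster.removeInstance('icadmin@ic-3:3306', {force: true})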

Configuring the Parallel Replication Applier

From version 8.0.23, instances support and enable parallel replication applier threads, sometimes referred to as a multi-threaded replica. Using multiple replica applier threads in parallel improves the throughput of both the replication applier and incremental recovery.

This means that on instances running 8.0.23 and later, the following system variables must be configured:

  • binlog_transaction_dependency_tracking=WRITESET

  • slave_preserve_commit_order=ON

  • slave_parallel_type=LOGICAL_CLOCK

By default, the number of applier threads (configured by the slave_parallel_workers system variable) is set to 4.

When you upgrade a cluster that has been running a version of MySQL server and MySQL Shell earlier than 8.0.23, the instances are not configured to use the parallel replication applier. If the parallel applier is not enabled, the output of the Cluster.status() operation shows a message in the instanceErrors field, for example:

...
"instanceErrors": [
    "NOTE: The required parallel-appliers settings are not enabled on
    the instance. Use dba.configureInstance() to fix it."
...

In this situation you should reconfigure your instances so that they use the parallel replication applier. For each instance that belongs to the InnoDB Cluster, update the configuration by issuing dba.configureInstance(instance). Note that although dba.configureInstance() is usually used before adding an instance to a cluster, in this special case there is no need to remove the instance, and the configuration change is made while it remains online.
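
For example, using the illustrative host names from earlier in this section, you would issue:

mysql-js> dba.configureInstance('icadmin@ic-1:3306')
mysql-js> dba.configureInstance('icadmin@ic-2:3306')
mysql-js> dba.configureInstance('icadmin@ic-3:3306')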

Information about the parallel replication applier is displayed in the output of the Cluster.status(extended=1) operation. For example, if the parallel replication applier is enabled, then the topology section output for the instance shows the number of threads under applierWorkerThreads. The system variables configured for the parallel replication applier are shown in the output of the Cluster.options() operation.
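
For example, with the parallel applier enabled and the default of 4 threads, the extended status output for an instance includes a line such as the following (in MySQL Shell's JavaScript mode the extended option is passed as a dictionary):

mysql-js> var mycluster = dba.getCluster()
mysql-js> mycluster.status({extended: 1})
...
        "applierWorkerThreads": 4,
...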

You can configure the number of threads which an instance uses for the parallel replication applier with the applierWorkerThreads option, which defaults to 4 threads. The option accepts integers in the range of 0 to 1024 and can only be used with the dba.configureInstance() and dba.configureReplicaSetInstance() operations. For example, to use 8 threads, issue:

mysql-js> dba.configureInstance(instance, {applierWorkerThreads: 8, restart: true})

Note

The change to the number of threads used by the parallel replication applier only occurs after the instance is restarted and has rejoined the cluster.

To disable the parallel replication applier, set the applierWorkerThreads option to 0.
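
For example (the change likewise only takes effect after the instance is restarted):

mysql-js> dba.configureInstance(instance, {applierWorkerThreads: 0, restart: true})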

InnoDB Cluster and Auto-increment

When you are using an instance as part of an InnoDB Cluster, the auto_increment_increment and auto_increment_offset variables are configured to avoid the possibility of auto increment collisions for multi-primary clusters up to a size of 9 (the maximum supported size of a Group Replication group). The logic used to configure these variables can be summarized as:

  • If the group is running in single-primary mode, then set auto_increment_increment to 1 and auto_increment_offset to 2.

  • If the group is running in multi-primary mode, then set auto_increment_increment to 7 and auto_increment_offset to 1 + server_id % 7.
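
As a worked example of the multi-primary formula above, assuming those settings: an instance with server_id 3 gets auto_increment_offset = 1 + (3 % 7) = 4, so its auto-generated values are 4, 11, 18, and so on, which cannot collide with values generated on members whose server_id has a different remainder modulo 7.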

InnoDB Cluster and Binary Log Purging

In MySQL 8, the binary log is automatically purged (as defined by binlog_expire_logs_seconds). This means that a cluster which has been running for longer than binlog_expire_logs_seconds could eventually not contain an instance with a complete binary log that contains all of the transactions applied by the instances. This could result in instances needing to be provisioned manually, for example using MySQL Enterprise Backup, before they could join the cluster. Instances running 8.0.17 and later support the MySQL Clone plugin, which resolves this issue by providing an automatic provisioning solution that does not rely on incremental recovery; see Section 7.4.6, “Using MySQL Clone with InnoDB Cluster”. Instances running a version earlier than 8.0.17 only support incremental recovery, and the result is that, depending on which version of MySQL the instance is running, instances might have to be provisioned manually. Otherwise, operations which rely on distributed recovery, such as Cluster.addInstance(), might fail.

On instances running earlier versions of MySQL the following rules are used for binary log purging:

  • Instances running a version earlier than 8.0.1 have no automatic binary log purging because the default value of expire_logs_days is 0.

  • Instances running a version later than 8.0.1 but earlier than 8.0.4 purge the binary log after 30 days because the default value of expire_logs_days is 30.

  • Instances running a version later than 8.0.10 purge the binary log after 30 days because the default value of binlog_expire_logs_seconds is 2592000 and the default value of expire_logs_days is 0.

Thus, depending on how long the cluster has been running, binary logs could have been purged and you might have to provision instances manually. Similarly, if you manually purged binary logs you could encounter the same situation. Therefore, you are strongly advised to upgrade to MySQL 8.0.17 or later to take full advantage of the automatic provisioning provided by MySQL Clone for distributed recovery, and to minimize downtime while provisioning instances for your InnoDB Cluster.