You work with an InnoDB ReplicaSet in much the same way as you
would work with an InnoDB Cluster. For example, as seen in
Adding Instances to a ReplicaSet, you assign a ReplicaSet object
to a variable and call operations that administer the
ReplicaSet, such as ReplicaSet.addInstance() to add instances,
which is the equivalent of Cluster.addInstance() in
InnoDB Cluster. Thus, much of the documentation at
Section 6.2.5, “Working with InnoDB Cluster” also
applies to InnoDB ReplicaSet. The following operations are
supported by ReplicaSet objects:
- You get online help for ReplicaSet objects, and the AdminAPI,
  using \help ReplicaSet or ReplicaSet.help(), and \help dba or
  dba.help(). See Section 6.1, “MySQL AdminAPI”.
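  For example, assuming a ReplicaSet object has been assigned to
  the variable rs (for instance by dba.getReplicaSet()), either
  of the following displays the built-in help:

  mysql-js> \help ReplicaSet
  mysql-js> rs.help()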
- You can quickly check the name of a ReplicaSet object using
  either name or ReplicaSet.getName(). For example, the
  following are equivalent:

  mysql-js> rs.name
  example
  mysql-js> rs.getName()
  example
- You check information about a ReplicaSet using the
  ReplicaSet.status() operation, which supports the extended
  option to get different levels of detail. For example:

  - The default for extended is 0, a regular level of detail.
    Only basic information about the status of the instance and
    replication is included, in addition to non-default or
    unexpected replication settings and status.
  - Setting extended to 1 includes the Metadata Version, server
    UUID, replication information such as lag and worker
    threads, the raw information used to derive the status of
    the instance, the size of the applier queue, the value of
    system variables that protect against unexpected writes,
    and so on.
  - Setting extended to 2 includes important replication related
    configuration settings, such as encrypted connections, and
    so on.

  The output of ReplicaSet.status(extended=1) is very similar to
  Cluster.status(extended=1), but the main difference is that
  the replication field is always available, because
  InnoDB ReplicaSet relies on MySQL Replication all of the time,
  unlike InnoDB Cluster, which uses it only during incremental
  recovery. For more information on the fields, see Checking a
  cluster's Status with Cluster.status().
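  For example, assuming the ReplicaSet object is assigned to the
  variable rs, the following retrieves the more detailed status;
  in MySQL Shell's JavaScript mode the extended level is passed
  as a dictionary option:

  mysql-js> rs.status({extended: 1})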
- You change the instances being used for a ReplicaSet using
  the ReplicaSet.addInstance() and ReplicaSet.removeInstance()
  operations. See Adding Instances to a ReplicaSet, and Removing
  Instances from the InnoDB Cluster. Use
  ReplicaSet.rejoinInstance() to add an instance that was
  removed back to a ReplicaSet, for example after a failover.
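  For example, the following calls are a minimal sketch,
  assuming a ReplicaSet object in the variable rs and an
  instance reachable at the hypothetical address
  admin@rs-2:3306:

  mysql-js> rs.addInstance('admin@rs-2:3306')
  mysql-js> rs.removeInstance('admin@rs-2:3306')
  mysql-js> rs.rejoinInstance('admin@rs-2:3306')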
- Use the ReplicaSet.setPrimaryInstance() operation to safely
  perform a change of the primary of a ReplicaSet to another
  instance. See Planned Changes of the ReplicaSet Primary.
- Use the ReplicaSet.forcePrimaryInstance() operation to perform
  a forced failover of the primary. See Forcing the Primary
  Instance in a ReplicaSet.
- You work with the MySQL Router instances which have been
  bootstrapped against a ReplicaSet in exactly the same way as
  with InnoDB Cluster. See Working with a Cluster's Routers for
  information on ReplicaSet.listRouters() and
  ReplicaSet.removeRouterMetadata(). For specific information on
  using MySQL Router with InnoDB ReplicaSet see Using
  ReplicaSets with MySQL Router.
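  For example, assuming a ReplicaSet object in the variable rs,
  the following is a sketch that lists the Router instances
  registered in the metadata and then removes the metadata for
  an unused Router identified by the hypothetical label
  'example.com::system':

  mysql-js> rs.listRouters()
  mysql-js> rs.removeRouterMetadata('example.com::system')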
- From version 8.0.23, InnoDB ReplicaSet supports and enables
  the parallel replication applier, sometimes referred to as a
  multi-threaded replica. Using the parallel replication applier
  with InnoDB ReplicaSet requires that your instances have the
  correct settings configured. If you are upgrading from an
  earlier version, instances require an updated configuration.
  For each instance that belongs to the InnoDB ReplicaSet,
  update the configuration by issuing
  dba.configureReplicaSetInstance(instance). Note that usually
  dba.configureReplicaSetInstance() is used before adding the
  instance to a replica set, but in this special case there is
  no need to remove the instance and the configuration change is
  made while it is online. For more information, see Configuring
  the Parallel Replication Applier.

  InnoDB ReplicaSet instances report information about the
  parallel replication applier in the output of the
  ReplicaSet.status(extended=1) operation under the replication
  field.
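  For example, to update the configuration of an online instance
  that already belongs to the ReplicaSet, at the hypothetical
  address admin@rs-1:3306, you might issue:

  mysql-js> dba.configureReplicaSetInstance('admin@rs-1:3306')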
For more information, see the linked InnoDB Cluster sections.
The following operations are specific to InnoDB ReplicaSet and
can only be called against a ReplicaSet
object:
Use the ReplicaSet.setPrimaryInstance() operation to safely
perform a change of the primary of a ReplicaSet to another
instance. The current primary is demoted to a secondary and made
read-only, while the promoted instance becomes the new primary
and is made read-write. All other secondary instances are
updated to replicate from the new primary. MySQL Router
instances which have been bootstrapped against the ReplicaSet
automatically start redirecting read-write clients to the new
primary.
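For example, the following sketch promotes the instance at the
hypothetical address rs-2:3306 to be the new primary, assuming
the ReplicaSet object is assigned to the variable rs:

mysql-js> rs.setPrimaryInstance('rs-2:3306')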
For a safe change of the primary to be possible, all replica
set instances must be reachable by MySQL Shell and have
consistent GTID_EXECUTED
sets. If the
primary is not available, and there is no way to restore it, a
forced failover might be the only option instead; see
Forcing the Primary Instance in a ReplicaSet.
During a change of primary, the promoted instance is synchronized with the old primary, ensuring that all transactions present on the primary are applied before the topology change is committed. If this synchronization step takes too long or is not possible on any of the secondary instances, the operation is aborted. In such a situation, these problematic secondary instances must be either repaired or removed from the ReplicaSet for the failover to be possible.
Unlike InnoDB Cluster, which supports automatic failover in
the event of an unexpected failure of the primary,
InnoDB ReplicaSet does not have automatic failure detection
or a consensus based protocol such as that provided by Group
Replication. If the primary is not available, a manual
failover is required. An InnoDB ReplicaSet which has lost
its primary is effectively read-only, and for any write
changes to be possible a new primary must be chosen. In the
event that you cannot connect to the primary, and you cannot
use ReplicaSet.setPrimaryInstance() to safely perform a
switchover to a new primary as described at Planned Changes of
the ReplicaSet Primary, use the
ReplicaSet.forcePrimaryInstance() operation to perform a forced
failover of the primary. This is a last resort operation that
must only be used in a disaster type scenario where the current
primary is unavailable and cannot be restored in any way.
A forced failover is a potentially destructive action and must be used with caution.
If a target instance is not given (or is null), the most
up-to-date instance is automatically selected and promoted to
be the new primary. If a target instance is provided, it is
promoted to a primary, while other reachable secondary
instances are switched to replicate from the new primary. The
target instance must have the most up-to-date
GTID_EXECUTED
set among reachable
instances, otherwise the operation fails.
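For example, assuming the ReplicaSet object is assigned to the
variable rs, the following sketches both forms; the address
rs-3:3306 is a hypothetical target instance:

mysql-js> rs.forcePrimaryInstance()            // promote the most up-to-date reachable instance
mysql-js> rs.forcePrimaryInstance('rs-3:3306') // promote a specific target instance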
A failover is different from a planned primary change because it promotes a secondary instance without synchronizing with or updating the old primary. That has the following major consequences:
Any transactions that had not yet been applied by a secondary at the time the old primary failed are lost.
If the old primary is actually still running and processing transactions, there is a split-brain and the datasets of the old and new primaries diverge.
If the last known primary is still reachable, the
ReplicaSet.forcePrimaryInstance() operation fails, to reduce the
risk of split-brain situations. But it is the administrator's
responsibility to ensure that the old primary is not reachable
by the other instances, to prevent or minimize such scenarios.
After a forced failover, the old primary is considered invalid by the new primary and can no longer be part of the replica set. If at a later date you find a way to recover the instance, it must be removed from the ReplicaSet and re-added as a new instance. If there were any secondary instances that could not be switched to the new primary during the failover, they are also considered invalid.
Data loss is possible after a failover, because the old primary might have had transactions that were not yet replicated to the secondary being promoted. Moreover, if the instance that was presumed to have failed is still able to process transactions, for example because the network where it is located is still functioning but unreachable from MySQL Shell, it continues diverging from the promoted instances. Recovering once transaction sets on instances have diverged requires manual intervention and might not be possible in some situations, even if the failed instances can be recovered. In many cases, the fastest and simplest way to recover from a disaster that required a forced failover is by discarding such diverged transactions and re-provisioning a new instance from the newly promoted primary.
From version 8.0.20, AdminAPI uses a locking mechanism to
prevent different operations from performing changes on an
InnoDB ReplicaSet simultaneously. Previously, different
instances of MySQL Shell could connect to an
InnoDB ReplicaSet at the same time and execute AdminAPI
operations simultaneously. This could lead to inconsistent
instance states and errors, for example if
ReplicaSet.addInstance() and ReplicaSet.setPrimaryInstance()
were executed in parallel.
The InnoDB ReplicaSet operations have the following locking:
- dba.upgradeMetadata() and dba.createReplicaSet() are globally
  exclusive operations. This means that if MySQL Shell executes
  these operations on an InnoDB ReplicaSet, no other operations
  can be executed against the InnoDB ReplicaSet or any of its
  instances.
- ReplicaSet.forcePrimaryInstance() and
  ReplicaSet.setPrimaryInstance() are operations that change the
  primary. This means that if MySQL Shell executes these
  operations against an InnoDB ReplicaSet, no other operations
  which change the primary, or instance change operations, can
  be executed until the first operation completes.
- ReplicaSet.addInstance(), ReplicaSet.rejoinInstance(), and
  ReplicaSet.removeInstance() are operations that change an
  instance. This means that if MySQL Shell executes these
  operations on an instance, the instance is locked for any
  further instance change operations. However, this lock is only
  at the instance level and multiple instances in an
  InnoDB ReplicaSet can each execute one of this type of
  operation simultaneously. In other words, at most one instance
  change operation can be executed at a time, per instance in
  the InnoDB ReplicaSet.
- dba.getReplicaSet() and ReplicaSet.status() are
  InnoDB ReplicaSet read operations and do not require any
  locking.
In practice, if you try to execute an InnoDB ReplicaSet related operation while another operation that cannot be executed concurrently is still running, you get an error indicating that a lock on a needed resource failed to be acquired. In this case, you should wait for the running operation which holds the lock to complete, and only then try to execute the next operation. For example:
mysql-js> rs.addInstance("admin@rs2:3306");
ERROR: The operation cannot be executed because it failed to acquire the lock on
instance 'rs1:3306'. Another operation requiring exclusive access to the
instance is still in progress, please wait for it to finish and try again.
ReplicaSet.addInstance: Failed to acquire lock on instance 'rs1:3306' (MYSQLSH
51400)
In this example, the ReplicaSet.addInstance() operation failed
because the lock on the primary instance (rs1:3306) could not be
acquired, for example because a ReplicaSet.setPrimaryInstance()
operation (or other similar operation) was still running.
Tagging is supported by ReplicaSets, and their instances. For
the purpose of tagging, ReplicaSets support the setOption(),
setInstanceOption(), and options() operations. These operations
function in generally the same way as their
Cluster
equivalents. For more information,
see Section 6.2.9, “Tagging the Metadata”. This section
documents the differences in working with tags for
ReplicaSets.
There are no other options which can be configured for ReplicaSets and their instances. For ReplicaSets, the options documented at Setting Options for InnoDB Cluster are not supported. The only supported option is the tagging described here.
The ReplicaSet.options() operation shows information about the
tags assigned to individual ReplicaSet instances as well as to
the ReplicaSet itself.
The option argument of ReplicaSet.setOption() and
ReplicaSet.setInstanceOption() only supports options with the
tag namespace and throws an error otherwise.
The ReplicaSet.setInstanceOption(instance, option, value) and
ReplicaSet.setOption(option, value) operations behave in the
same way as the Cluster equivalent operations.
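As a sketch, assuming the ReplicaSet object is assigned to the
variable rs and using a hypothetical tag name, you can set a tag
on the ReplicaSet itself and then inspect the configured tags:

mysql-js> rs.setOption('tag:created_by', 'ops-team')
mysql-js> rs.options()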
There are no differences in hiding instances as described at
Removing Instances from Routing.
For example, to hide the ReplicaSet instance rs-1, issue:

mysql-js> myReplicaSet.setInstanceOption("icadmin@rs-1:3306", "tag:_hidden", true);
A MySQL Router that has been bootstrapped against the ReplicaSet
detects the change and removes the rs-1
instance from the routing destinations.