AdminAPI's clusterSet.status() command returns a JSON object describing the status of an InnoDB ClusterSet deployment. The output includes the status of the InnoDB ClusterSet deployment itself and the global and cluster status of each InnoDB Cluster in the ClusterSet. The extended output adds the status of each member server in each cluster, information about the asynchronous replication channels managed by InnoDB ClusterSet, and other configuration and status information. The command reports the status of ClusterSet replication as well as of the servers themselves. If there are any issues, warning and error messages are included to explain the problem in more detail.
The MySQL Shell instance where you use clusterSet.status() can be connected to any active member of the InnoDB ClusterSet. The metadata can be retrieved from the primary cluster by way of any other cluster that is active in the InnoDB ClusterSet.
If there is an issue with any of the clusters in the InnoDB ClusterSet, Section 8.9, “InnoDB ClusterSet Repair and Rejoin” explains the procedure for fixing it and rejoining the cluster to the ClusterSet (or removing it if the issue cannot be fixed). If the cluster with the issue is the primary cluster, you first need to carry out a controlled switchover if it is still functioning (as described in Section 8.7, “InnoDB ClusterSet Controlled Switchover”), or an emergency failover if it is not functioning or cannot be contacted (as described in Section 8.8, “InnoDB ClusterSet Emergency Failover”).
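For orientation only, both procedures are carried out with AdminAPI commands as well; a minimal sketch, assuming a ClusterSet handle named myclusterset and a replica cluster named clustertwo to promote (placeholders here, with the full procedures described in the sections referenced above):
mysql-js> myclusterset.setPrimaryCluster("clustertwo")    // controlled switchover
mysql-js> myclusterset.forcePrimaryCluster("clustertwo")  // emergency failover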
You can use the extended option, which defaults to 0, to increase the verbosity level of the output as follows:
- extended: 0 (or omitting the option) returns basic information about the availability status of the InnoDB ClusterSet deployment, each InnoDB Cluster in the ClusterSet, and the ClusterSet replication status for each replica cluster.
- extended: 1 adds the topology for each InnoDB Cluster in the ClusterSet, the status of each individual member server in each cluster, and more detailed information about the ClusterSet replication channel's status for each replica cluster.
- extended: 2 adds further details about each individual member server in each cluster and about the ClusterSet replication channel, including the GTID set.
- extended: 3 adds important configuration settings for the ClusterSet replication channel, such as the connection retry settings.
For example:
mysql-js> myclusterset.status({extended: 1})
{
"clusters": {
"clusterone": {
"clusterRole": "PRIMARY",
"globalStatus": "OK",
"primary": "127.0.0.1:3310",
"status": "OK",
"statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
"topology": {
"127.0.0.1:3310": {
"address": "127.0.0.1:3310",
"memberRole": "PRIMARY",
"mode": "R/W",
"status": "ONLINE",
"version": "8.0.27"
},
"127.0.0.1:3320": {
"address": "127.0.0.1:3320",
"memberRole": "SECONDARY",
"mode": "R/O",
"replicationLagFromImmediateSource": "",
"replicationLagFromOriginalSource": "",
"status": "ONLINE",
"version": "8.0.27"
},
"127.0.0.1:3330": {
"address": "127.0.0.1:3330",
"memberRole": "SECONDARY",
"mode": "R/O",
"replicationLagFromImmediateSource": "",
"replicationLagFromOriginalSource": "",
"status": "ONLINE",
"version": "8.0.27"
}
},
"transactionSet": "953a51d5-2690-11ec-ba07-00059a3c7a00:1,c51c1b15-269e-11ec-b9ba-00059a3c7a00:1-131,c51c29ad-269e-11ec-b9ba-00059a3c7a00:1-8"
},
"clustertwo": {
"clusterRole": "REPLICA",
"clusterSetReplication": {
"applierStatus": "APPLIED_ALL",
"applierThreadState": "Waiting for an event from Coordinator",
"applierWorkerThreads": 4,
"receiver": "127.0.0.1:4410",
"receiverStatus": "ON",
"receiverThreadState": "Waiting for source to send event",
"source": "127.0.0.1:3310"
},
"clusterSetReplicationStatus": "OK",
"globalStatus": "OK",
"status": "OK",
"statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
"topology": {
"127.0.0.1:4410": {
"address": "127.0.0.1:4410",
"memberRole": "PRIMARY",
"mode": "R/O",
"replicationLagFromImmediateSource": "",
"replicationLagFromOriginalSource": "",
"status": "ONLINE",
"version": "8.0.27"
},
"127.0.0.1:4420": {
"address": "127.0.0.1:4420",
"memberRole": "SECONDARY",
"mode": "R/O",
"replicationLagFromImmediateSource": "",
"replicationLagFromOriginalSource": "",
"status": "ONLINE",
"version": "8.0.27"
},
"127.0.0.1:4430": {
"address": "127.0.0.1:4430",
"memberRole": "SECONDARY",
"mode": "R/O",
"replicationLagFromImmediateSource": "",
"replicationLagFromOriginalSource": "",
"status": "ONLINE",
"version": "8.0.27"
}
},
"transactionSet": "0f6ff279-2764-11ec-ba06-00059a3c7a00:1-5,953a51d5-2690-11ec-ba07-00059a3c7a00:1,c51c1b15-269e-11ec-b9ba-00059a3c7a00:1-131,c51c29ad-269e-11ec-b9ba-00059a3c7a00:1-8",
"transactionSetConsistencyStatus": "OK",
"transactionSetErrantGtidSet": "",
"transactionSetMissingGtidSet": ""
}
},
"domainName": "testclusterset",
"globalPrimaryInstance": "127.0.0.1:3310",
"metadataServer": "127.0.0.1:3310",
"primaryCluster": "clusterone",
"status": "HEALTHY",
"statusText": "All Clusters available."
}
To get a handle to a ClusterSet object representing the InnoDB ClusterSet for a target server instance, use a dba.getClusterSet() or cluster.getClusterSet() command. These commands work if the target server instance is a member of an InnoDB Cluster that is part of an InnoDB ClusterSet deployment, even if the primary cluster for the InnoDB ClusterSet deployment is not currently reachable. The target server instance itself must be reachable when you use the object. If the target instance is a member of a cluster that has been marked as invalidated, the command returns a warning, but still returns the ClusterSet object. If the target instance is not currently a member of an InnoDB ClusterSet deployment, the command returns an error.
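For example, a minimal sketch of getting the handle; the connection URI and the icadmin account are placeholders rather than values from this deployment:
mysql-js> \connect icadmin@127.0.0.1:3310
mysql-js> myclusterset = dba.getClusterSet()
<ClusterSet:testclusterset>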
The ClusterSet
object contains the connection
details of the server that you retrieved it from, so a
ClusterSet
object that you previously
retrieved from a member server that is now offline will not work
any more, and you would need to get it again from a server that
is online in the InnoDB ClusterSet deployment.
The ClusterSet
object defaults to using the
account it was fetched with for operations where permissions are
required. It is important to get the object when you are
connected to the server instance using an appropriate user
account for the operations you want to perform using it. Some
operations during the InnoDB ClusterSet deployment process
require permissions, and the default user account stored in the
object is used for this, so that the process does not need to
store any other user accounts. For monitoring and
troubleshooting an InnoDB ClusterSet that you already set up,
an InnoDB Cluster administrator account is appropriate. For
the initial cluster deployment process, the InnoDB Cluster
server configuration account is appropriate. For more
information, see
Section 8.3, “User Accounts for InnoDB ClusterSet”.
When you use the clusterSet.status() function, the overall ClusterSet status (status field) reported for an InnoDB ClusterSet deployment can be one of the following:
- HEALTHY: The primary cluster in the InnoDB ClusterSet is functioning acceptably, and all of the replica clusters are functioning acceptably.
- AVAILABLE: The primary cluster in the InnoDB ClusterSet is functioning acceptably, but one or more of the replica clusters has impaired functioning or is not functioning.
- UNAVAILABLE: The primary cluster in the InnoDB ClusterSet is not functioning, because it is offline or has lost quorum, or MySQL Shell cannot contact the primary cluster to determine its status.
The overall ClusterSet status reported for an InnoDB ClusterSet deployment depends on the overall status of each InnoDB Cluster. An InnoDB Cluster in a ClusterSet reports three statuses:
- The global status (globalStatus field) is the status of the InnoDB Cluster with regard to its role in the InnoDB ClusterSet. This status shows whether the cluster can still function acceptably in the InnoDB ClusterSet deployment, even if it has some issues, such as a member server being currently offline. An InnoDB Cluster can be marked as invalidated during a failover, regardless of the status of the member servers, and if so this is shown as the global status.
- The cluster status (status field) is the status of the InnoDB Cluster with regard to its own functioning. This status shows whether the cluster has any technical issues, such as one or more members being offline, a loss of quorum, or a Group Replication error state. A cluster can tolerate certain issues but still function acceptably as part of an InnoDB ClusterSet deployment. For this reason, with the default verbosity level, the clusterSet.status() function only reports the cluster status for those clusters where it is causing a global status issue. To view the cluster status for all clusters in the InnoDB ClusterSet, whether or not it is causing a global status issue, use the extended option to specify a higher verbosity level.
- The ClusterSet replication status (clusterSetReplicationStatus field) is the status of the ClusterSet replication channel for a replica InnoDB Cluster. This status shows whether the replica cluster has any issues with replicating from the primary cluster, so that these can be considered separately from any technical issues with the member servers in the cluster. A replica InnoDB Cluster reports the ClusterSet replication status whether or not it is causing a global status issue. A primary InnoDB Cluster does not have this status field, because the ClusterSet replication channel is not operating on the primary cluster. (A short example of inspecting these fields follows this list.)
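Because the command returns the result as a dictionary, you can drill into these fields directly in MySQL Shell. A minimal sketch, assuming the clusterone/clustertwo deployment and the myclusterset handle from the earlier example:
mysql-js> var s = myclusterset.status({extended: 1})
mysql-js> print(s["clusters"]["clustertwo"]["globalStatus"])
OK
mysql-js> print(s["clusters"]["clustertwo"]["clusterSetReplicationStatus"])
OK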
At higher verbosity levels, the extended output for the clusterSet.status() function shows the status of each member server in each InnoDB Cluster. The output includes the member's Group Replication state (memberState field) and, for a server in a replica cluster, the state of replication on the member. For information on the Group Replication states, see Group Replication Server States.
The global status (globalStatus field) reported for an InnoDB Cluster can be one of the following:
- OK: The cluster is functioning acceptably in the InnoDB ClusterSet deployment. At least one of the member servers in the cluster is in Group Replication's ONLINE state, and the replication group has quorum. If the cluster is a replica cluster, the ClusterSet replication status is also OK. This global status does not necessarily mean there are no technical issues with the cluster. Some members might be offline, or the cluster might have too few members to provide tolerance for failures. However, the cluster is functioning well enough to continue as part of the InnoDB ClusterSet deployment. A primary cluster or a replica cluster can have this global status.
- OK_NOT_REPLICATING: The cluster is functioning acceptably, but replication has stopped on the ClusterSet replication channel, either as a controlled stop or due to a replication error. Only a replica cluster can have this global status.
- OK_NOT_CONSISTENT: The cluster is functioning acceptably, but the set of transactions on the cluster (the GTID set) has diverged from that on the primary cluster, such that there are extra transactions on the replica cluster that the primary cluster does not have. Replication might have stopped on the ClusterSet replication channel, either as a controlled stop or due to a replication error, or the channel might still be replicating. Only a replica cluster can have this global status. A replica cluster with this status is not available for a planned switchover, although a forced failover is possible.
- OK_MISCONFIGURED: The cluster is functioning acceptably, but an incorrect configuration has been detected for the ClusterSet replication channel. For example, the channel might be replicating from the wrong source. The replication channel might still be running, or replication might have stopped. Only a replica cluster can have this global status.
- NOT_OK: The cluster is not functioning at all as part of the InnoDB ClusterSet deployment due to a technical issue. It has lost quorum, or all member servers are in Group Replication's OFFLINE state. A primary cluster or a replica cluster can have this global status. If a primary cluster has this global status, the InnoDB ClusterSet deployment is given the status UNAVAILABLE.
- UNKNOWN: The cluster is the primary cluster for the InnoDB ClusterSet deployment, but MySQL Shell currently cannot contact it to determine its status. While the primary cluster cannot be contacted, the InnoDB ClusterSet deployment is given the status UNAVAILABLE.
- INVALIDATED: The cluster was invalidated during a failover process. During a controlled switchover process, data consistency is assured, and the original primary cluster is demoted to a working read-only replica cluster. However, during an emergency failover process, data consistency is not assured, so for safety, the original primary cluster is marked as invalidated during the failover process. Replica clusters are also marked as invalidated if they are unreachable or unavailable at the time of the failover, or during a controlled switchover. A cluster with this global status is not functioning at all as part of the InnoDB ClusterSet deployment. The cluster does not necessarily have any technical issues, and might be capable of rejoining the InnoDB ClusterSet deployment after manual validation. If the cluster can be contacted, you should verify that it has been shut down, so that it is not accepting new transactions.
The cluster status (status field) reported for an InnoDB Cluster can be one of the following, which can all be reported for a primary cluster or a replica cluster:
- OK: All the member servers in the cluster are in Group Replication's ONLINE state, and there are three or more members in the cluster.
- OK_PARTIAL: At least three of the member servers in the cluster are in Group Replication's ONLINE state. However, one or more member servers are in Group Replication's OFFLINE, RECOVERING, ERROR, or UNREACHABLE state, so they are not currently participating as active members of the cluster. A cluster in this situation is functioning well enough to continue as part of the InnoDB ClusterSet deployment, but to bring it up to OK status, resolve the issues with the member servers.
- OK_NO_TOLERANCE: All the member servers in the cluster are in Group Replication's ONLINE state, but there are fewer than three members in the cluster, so it does not have sufficient tolerance for failures. A cluster in this situation is functioning well enough to continue as part of the InnoDB ClusterSet deployment, but to bring it up to OK status, add more member servers.
- OK_NO_TOLERANCE_PARTIAL: One or two member servers in the cluster are in Group Replication's ONLINE state, but one or more are in Group Replication's OFFLINE, RECOVERING, ERROR, or UNREACHABLE state. The cluster therefore does not have sufficient tolerance for failures because of the unavailability of some members. A cluster in this situation is functioning well enough to continue as part of the InnoDB ClusterSet deployment, but to bring it up to OK status, resolve the issues with the member servers.
- NO_QUORUM: The cluster does not have quorum, meaning that a majority of the replication group's member servers are unavailable for agreeing on a decision. Group Replication is able to reconfigure itself to the new group size if members leave voluntarily or are expelled by a group decision, so a loss of quorum means that the missing member servers have either failed or been cut off from the others by a network partition. A cluster in this situation cannot function as part of the InnoDB ClusterSet deployment. To bring a cluster in this state up to OK status, see Section 8.9, “InnoDB ClusterSet Repair and Rejoin”.
- OFFLINE: All the member servers in the cluster are in Group Replication's OFFLINE state. A cluster in this situation cannot function as part of the InnoDB ClusterSet deployment. To bring a cluster in this state up to OK status if it is not currently supposed to be offline, see Section 8.9, “InnoDB ClusterSet Repair and Rejoin”.
- ERROR: All the member servers in the cluster are in Group Replication's ERROR state. A cluster in this situation cannot function as part of the InnoDB ClusterSet deployment. To bring a cluster in this state up to OK status, see Section 8.9, “InnoDB ClusterSet Repair and Rejoin”.
- UNKNOWN: MySQL Shell cannot currently contact any member servers to determine the cluster's status. If this is the primary cluster, the InnoDB ClusterSet deployment is given the status UNAVAILABLE.
- INVALIDATED: The cluster was invalidated during a failover process. During a controlled switchover process, data consistency is assured, and the original primary cluster is demoted to a working read-only replica cluster. However, during an emergency failover process, data consistency is not assured, so for safety, the original primary cluster is marked as invalidated during the failover process. Replica clusters are also marked as invalidated if they are unreachable or unavailable at the time of the failover, or during a controlled switchover. A cluster with this status is not functioning at all as part of the InnoDB ClusterSet deployment. The cluster does not necessarily have any technical issues, and might be capable of rejoining the InnoDB ClusterSet deployment after manual validation. If the cluster can be contacted, you should verify that it has been shut down, so that it is not accepting new transactions. To handle this situation, see Section 8.9, “InnoDB ClusterSet Repair and Rejoin”.
The cluster status relates to technical issues with the
InnoDB Cluster as a Group Replication group, rather than to
the process of replication. For a replica cluster, the
ClusterSet replication status
(clusterSetReplicationStatus
field) is also
reported as follows:
- OK: The ClusterSet replication channel is running.
- STOPPED: The ClusterSet replication channel has been stopped in a controlled manner. This status is shown when the receiver thread, applier thread, or both threads have been stopped.
- CONNECTING: The replication channel is connecting. If an error occurs during connection, it is ignored until the channel state is updated to either ON or OFF.
- ERROR: The ClusterSet replication channel has stopped due to a replication error, such as an incorrect configuration or a set of transactions that differs from the set on the primary cluster.
- MISCONFIGURED: An incorrect configuration has been detected for the ClusterSet replication channel, such as replicating from the wrong source. The channel might still be running, or replication might have stopped.
- MISSING: The ClusterSet replication channel does not exist on the servers in this cluster.
- UNKNOWN: MySQL Shell cannot currently contact the replica cluster to determine the replication channel's status.
If a cluster's only issue is with the ClusterSet replication channel, issuing the clusterSet.rejoinCluster() command for the cluster automatically corrects the channel's configuration if necessary and restarts the channel. This might be sufficient to fix the issue. For instructions, see Section 8.9.5, “Rejoining a Cluster to an InnoDB ClusterSet”.
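For example, a minimal sketch using the replica cluster name from the earlier example output:
mysql-js> myclusterset.rejoinCluster("clustertwo")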
If you just want to view the topology of the InnoDB ClusterSet, and do not need status information, you can use the clusterSet.describe() function instead. This function returns a JSON object describing the topology of an InnoDB ClusterSet deployment, and giving the IP address and identifier of each member server in each InnoDB Cluster. For example:
mysql-js> myclusterset.describe()
{
"clusters": {
"clusterone": {
"clusterRole": "PRIMARY",
"topology": [
{
"address": "127.0.0.1:3310",
"label": "127.0.0.1:3310"
},
{
"address": "127.0.0.1:3320",
"label": "127.0.0.1:3320"
},
{
"address": "127.0.0.1:3330",
"label": "127.0.0.1:3330"
}
]
},
"clustertwo": {
"clusterRole": "REPLICA",
"topology": [
{
"address": "127.0.0.1:4410",
"label": "127.0.0.1:4410"
},
{
"address": "127.0.0.1:4420",
"label": "127.0.0.1:4420"
},
{
"address": "127.0.0.1:4430",
"label": "127.0.0.1:4430"
}
]
}
},
"domainName": "testclusterset",
"primaryCluster": "clusterone"
}
This information is also provided by the extended output for the clusterSet.status() function.
For information on clusterSet.setRoutingOption(), see Section 6.10.4, “Routing Options”.
To see the MySQL Router instances that are registered for the InnoDB ClusterSet, issue the clusterSet.listRouters() command in MySQL Shell while connected to any member server in the InnoDB ClusterSet deployment. The command returns details of all the registered MySQL Router instances, or of a single router instance that you specify using its router instance definition. For example:
mysql-js> myclusterset.listRouters()
{
"domainName": "testclusterset",
"routers": {
"mymachine::Rome1": {
"hostname": "mymachine",
"lastCheckIn": 2021-10-15 11:58:37,
"roPort": 6447,
"roXPort": 6449,
"rwPort": 6446,
"rwXPort": 6448,
"targetCluster": "primary",
"version": "8.0.27"
},
"mymachine2::Rome2": {
"hostname": "mymachine2",
"lastCheckIn": 2021-10-15 11:58:37,
"roPort": 6447,
"roXPort": 6449,
"rwPort": 6446,
"rwXPort": 6448,
"targetCluster": "primary",
"version": "8.0.27"
}
}
}
The instance information includes the name of the MySQL Router instance, the port numbers for read and write traffic using MySQL classic protocol and X Protocol, the target cluster, and the time the instance last checked in with the target cluster. If MySQL Router is at a lower version than that required to work with this InnoDB ClusterSet deployment, the instance information states this.
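For example, to return only one registered instance, pass its router instance definition from the output above (a minimal sketch):
mysql-js> myclusterset.listRouters("mymachine::Rome1")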
To see the routing options that are set for each MySQL Router instance, and the global policy for the InnoDB ClusterSet deployment, issue clusterSet.routingOptions() in MySQL Shell while connected to any member server in the InnoDB ClusterSet deployment. A setting for a specific MySQL Router instance overrides a global policy. For example:
mysql-js> myclusterset.routingOptions()
{
"domainName": "testclusterset",
"global": {
"invalidated_cluster_policy": "drop_all",
"target_cluster": "primary"
},
"routers": {
"mymachine::Rome1": {
"target_cluster": "primary"
"invalidated_cluster_policy": "accept_ro"
},
"mymachine2::Rome2": {}
}
}
If a particular routing option is not displayed for a MySQL Router instance, as in the example above for Rome2, it means the instance does not have that policy set, and it follows the global policy. The output for Rome1 shows "target_cluster": "primary", which is the same as the global policy. This is because Rome1 has had the routing option explicitly set to "primary" by a clusterSet.setRoutingOption() command, in which case it is displayed. To clear a routing option, set it to null.
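For example, a minimal sketch of setting and then clearing a routing option for the Rome1 instance shown above:
mysql-js> myclusterset.setRoutingOption("mymachine::Rome1", "target_cluster", "primary")
mysql-js> myclusterset.setRoutingOption("mymachine::Rome1", "invalidated_cluster_policy", null)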