MySQL Shell 8.0  /  ...  /  Inconsistent Transaction Sets (GTID Sets) in InnoDB ClusterSet Clusters

8.9.2 Inconsistent Transaction Sets (GTID Sets) in InnoDB ClusterSet Clusters

AdminAPI's clusterSet.status() command warns you if an InnoDB Cluster's GTID set is inconsistent with the GTID set on the primary cluster in the InnoDB ClusterSet. A cluster in this state has extra transactions compared to the other clusters in the InnoDB ClusterSet, and has the global status OK_NOT_CONSISTENT. The cluster continues to function in the InnoDB ClusterSet with this status, and you can carry out an emergency failover to it if its GTID set is the most up to date of the available replica clusters. However, it is not eligible for a controlled switchover, because the difference in transactions might result in clients accessing incorrect data. The cluster also cannot rejoin the InnoDB ClusterSet with extra transactions if it goes offline.

A replica cluster in an InnoDB ClusterSet is read-only, so if it has always been a replica cluster, it should not contain extra transactions unless changes were made on the cluster without using AdminAPI commands. If you need to carry out administrative transactions on an instance while Group Replication is stopped, always set the sql_log_bin system variable to OFF before issuing the administrative statements, and set it back to ON afterwards:

SET SQL_LOG_BIN=0;
<administrator action>
SET SQL_LOG_BIN=1;

Setting this system variable to OFF means that the transactions that occur from that point until you set it back to ON are not written to the binary log and do not have GTIDs assigned to them.

A situation that can create a diverged set of transactions with no outside changes is when the primary cluster becomes unreachable and an emergency failover procedure is used. If the primary cluster remains online after the failover, it could continue to accept transactions from clients through any MySQL Router instances that are still connected to it, and pass these to any replica clusters that are still connected to it. Alternatively, significant replication lag might cause the replica cluster selected as the replacement primary cluster to be missing some transactions from the primary cluster. In that case, when the old primary cluster initially comes back online as an invalidated replica cluster, the transactions that were never transferred to the replica are identified as extra transactions.
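You can cross-check the errant transactions that clusterSet.status() reports by comparing GTID sets directly with the GTID_SUBTRACT() function. For example, issue a statement of the following form on the primary instance of the affected replica cluster, substituting the gtid_executed value from the primary cluster's primary instance (the literal GTID set here is taken from the sample output later in this section):

SELECT GTID_SUBTRACT(@@GLOBAL.gtid_executed,
  '54ff337b-2ccf-11ec-95da-3c6aa7197deb:1-131,54ff3ed7-2ccf-11ec-95da-3c6aa7197deb:1-5');

Any GTIDs returned are present on the replica cluster but not on the primary cluster, and correspond to the transactionSetErrantGtidSet field in the extended status output.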

The extended output for the clusterSet.status() command identifies any clusters that have extra transactions, and assigns them the OK_NOT_CONSISTENT global status. For example:

mysql-js> myclusterset.status({extended: 1})
{
    "clusters": {
        "clusterone": {
            "clusterErrors": [
                "ERROR: Errant transactions detected"
            ],
            "clusterRole": "REPLICA",
            "clusterSetReplication": {
                "applierStatus": "APPLIED_ALL",
                "applierThreadState": "Waiting for an event from Coordinator",
                "applierWorkerThreads": 4,
                "receiver": "127.0.0.1:3310",
                "receiverStatus": "ON",
                "receiverThreadState": "Waiting for source to send event",
                "source": "127.0.0.1:4410"
            },
            "clusterSetReplicationStatus": "OK",
            "globalStatus": "OK_NOT_CONSISTENT",
            "status": "OK",
            "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
            "topology": {
                "127.0.0.1:3310": {
                    "address": "127.0.0.1:3310",
                    "memberRole": "PRIMARY",
                    "mode": "R/O",
                    "replicationLagFromImmediateSource": "",
                    "replicationLagFromOriginalSource": "",
                    "status": "ONLINE",
                    "version": "8.0.27"
                },
                "127.0.0.1:3320": {
                    "address": "127.0.0.1:3320",
                    "memberRole": "SECONDARY",
                    "mode": "R/O",
                    "replicationLagFromImmediateSource": "",
                    "replicationLagFromOriginalSource": "",
                    "status": "ONLINE",
                    "version": "8.0.27"
                },
                "127.0.0.1:3330": {
                    "address": "127.0.0.1:3330",
                    "memberRole": "SECONDARY",
                    "mode": "R/O",
                    "replicationLagFromImmediateSource": "",
                    "replicationLagFromOriginalSource": "",
                    "status": "ONLINE",
                    "version": "8.0.27"
                }
            },
            "transactionSet": "54ff337b-2ccf-11ec-95da-3c6aa7197deb:1-131,54ff3ed7-2ccf-11ec-95da-3c6aa7197deb:1-5,c06527d6-2ce3-11ec-a55e-3c6aa7197deb:1,c0653492-2ce3-11ec-a55e-3c6aa7197deb:1-5",
            "transactionSetConsistencyStatus": "INCONSISTENT",
            "transactionSetConsistencyStatusText": "There are 1 transactions that were executed in this instance that did not originate from the PRIMARY.",
            "transactionSetErrantGtidSet": "c06527d6-2ce3-11ec-a55e-3c6aa7197deb:1",
            "transactionSetMissingGtidSet": ""
        },
        "clustertwo": {
            "clusterRole": "PRIMARY",
            "globalStatus": "OK",
            "primary": "127.0.0.1:4410",
            "status": "OK",
            "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
            "topology": {
                "127.0.0.1:4410": {
                    "address": "127.0.0.1:4410",
                    "memberRole": "PRIMARY",
                    "mode": "R/W",
                    "status": "ONLINE",
                    "version": "8.0.27"
                },
                "127.0.0.1:4420": {
                    "address": "127.0.0.1:4420",
                    "memberRole": "SECONDARY",
                    "mode": "R/O",
                    "replicationLagFromImmediateSource": "",
                    "replicationLagFromOriginalSource": "",
                    "status": "ONLINE",
                    "version": "8.0.27"
                },
                "127.0.0.1:4430": {
                    "address": "127.0.0.1:4430",
                    "memberRole": "SECONDARY",
                    "mode": "R/O",
                    "replicationLagFromImmediateSource": "",
                    "replicationLagFromOriginalSource": "",
                    "status": "ONLINE",
                    "version": "8.0.27"
                }
            },
            "transactionSet": "54ff337b-2ccf-11ec-95da-3c6aa7197deb:1-131,54ff3ed7-2ccf-11ec-95da-3c6aa7197deb:1-5"
        }
    },
    "domainName": "testclusterset",
    "globalPrimaryInstance": "127.0.0.1:4410",
    "metadataServer": "127.0.0.1:4410",
    "primaryCluster": "clustertwo",
    "status": "AVAILABLE",
    "statusText": "Primary Cluster available, there are issues with a Replica cluster."
}

The safest method to reconcile an individual server's data with the rest of the InnoDB Cluster is to identify the server in the InnoDB ClusterSet deployment that has the best data (the most transactions, the most recent transactions, or the most important transactions) and use MySQL's cloning functionality to transfer the content from that server to the affected server. For instructions to do this, see Cloning Remote Data. Then use the cluster.rejoinInstance() command to have the instance rejoin the InnoDB Cluster. For details of this operation, see Rejoining an Instance to a Cluster.
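As an illustrative sketch only, using the addresses from the sample output, the clone and rejoin steps might look like this. The replication user name and password are placeholders, the clone plugin must be installed on both the donor and the recipient, and the CLONE INSTANCE statement restarts the recipient server:

-- On the affected instance, name the donor and clone its data:
SET GLOBAL clone_valid_donor_list = '127.0.0.1:4410';
CLONE INSTANCE FROM 'clone_user'@'127.0.0.1':4410 IDENTIFIED BY 'password';

Then, from MySQL Shell, have the instance rejoin its InnoDB Cluster:

mysql-js> cluster = dba.getCluster()
mysql-js> cluster.rejoinInstance('127.0.0.1:3310')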

If the whole InnoDB Cluster is affected, remove the affected cluster from the InnoDB ClusterSet deployment following the procedure in Section 8.9.4, “Removing a Cluster from an InnoDB ClusterSet”, and set up a new InnoDB Cluster in its place. The server instances in the new InnoDB Cluster will receive the correct transaction set as part of the setup process.
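For example, assuming the cluster name and addresses from the sample output, the replacement might be sketched as follows. The force option is needed only if the affected cluster cannot be contacted, and the clone recovery method provisions the new replica cluster with the primary cluster's transaction set, discarding the extra transactions:

mysql-js> myclusterset.removeCluster('clusterone', {force: true})
mysql-js> myclusterset.createReplicaCluster('127.0.0.1:3310', 'clusterone', {recoveryMethod: 'clone'})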

If you want to keep the extra transactions, an emergency failover can be carried out to make the InnoDB Cluster with those transactions into the primary cluster, following the procedure in Section 8.8, “InnoDB ClusterSet Emergency Failover”.
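Using the cluster name from the sample output, the failover command is of this form (issued while connected to a member of the cluster being promoted):

mysql-js> myclusterset.forcePrimaryCluster('clusterone')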

If you are able to deal with the problem transactions, for example by reverting their changes on the affected cluster, use a clusterSet.rejoinCluster() operation to rejoin the InnoDB Cluster to the InnoDB ClusterSet deployment. For instructions to do that, see Section 8.9.5, “Rejoining a Cluster to an InnoDB ClusterSet”.
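Again using the cluster name from the sample output, the rejoin command is of this form:

mysql-js> myclusterset.rejoinCluster('clusterone')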