21.4.2 Using AdminAPI and MySQL Router

This section explains how to use MySQL Router with AdminAPI: first testing InnoDB Cluster high availability through MySQL Router, then working with the MySQL Router instances registered against a cluster.

Testing InnoDB Cluster High Availability

To test whether InnoDB Cluster high availability works, simulate an unexpected halt by killing an instance. The cluster detects that the instance has left and reconfigures itself. Exactly how the cluster reconfigures itself depends on whether you are using a single-primary or multi-primary cluster, and on the role the instance serves within the cluster.
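
The examples that follow assume a MySQL Shell session and an AdminAPI handle to the cluster. As a minimal sketch (assuming the sandbox deployment described below, connected through MySQL Router's default read-write port 6446), you might set this up as follows:

mysql-js> \connect root@localhost:6446
mysql-js> var cluster = dba.getCluster()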

In single-primary mode:

  • If the current primary leaves the cluster, one of the secondary instances is elected as the new primary, with instances prioritized by the lowest server_uuid. MySQL Router redirects read-write connections to the newly elected primary.

  • If a current secondary leaves the cluster, MySQL Router stops redirecting read-only connections to the instance.

For more information see Section 18.1.3.1, “Single-Primary Mode”.

In multi-primary mode:

  • If a current "R/W" instance leaves the cluster, MySQL Router redirects read-write connections to the remaining primaries. If the instance that left was the last primary in the cluster, the cluster is completely gone and you cannot connect to any MySQL Router port.

For more information see Section 18.1.3.2, “Multi-Primary Mode”.
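
In either mode, you can check the cluster's topology mode and which instance is currently the primary with the Cluster.status() operation. For example (output abbreviated and illustrative; the exact fields and values depend on your deployment):

mysql-js> cluster.status()
{
    "clusterName": "example",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "localhost:3310",
        "topology": {
            ...
        },
        "topologyMode": "Single-Primary"
    }
}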

There are various ways to simulate an instance leaving a cluster. For example, you can forcibly stop the MySQL server on an instance, or use the AdminAPI dba.killSandboxInstance() operation if you are testing a sandbox deployment. In this example, assume there is a single-primary sandbox cluster deployment with three server instances, and that the instance listening at port 3310 is the current primary. Simulate the instance leaving the cluster unexpectedly:

mysql-js> dba.killSandboxInstance(3310)

The cluster detects the change and elects a new primary automatically. Assuming your session is connected to port 6446, the default read-write classic MySQL protocol port, MySQL Router should detect the change to the cluster's topology and redirect your session to the newly elected primary. To verify this, switch to SQL mode in MySQL Shell using the \sql command and query the port system variable to check which instance your session has been redirected to. Notice that the first SELECT statement fails because the connection to the original primary was lost, meaning the current session has been closed. MySQL Shell automatically reconnects for you, and when you issue the statement again the new port is confirmed.

mysql-js> \sql
Switching to SQL mode... Commands end with ;
mysql-sql> SELECT @@port;
ERROR: 2013 (HY000): Lost connection to MySQL server during query
The global session got disconnected.
Attempting to reconnect to 'root@localhost:6446'...
The global session was successfully reconnected.
mysql-sql> SELECT @@port;
+--------+
| @@port |
+--------+
|   3330 |
+--------+
1 row in set (0.00 sec)

In this example, the instance at port 3330 has been elected as the new primary. This shows that the InnoDB Cluster provided us with automatic failover, that MySQL Router has automatically reconnected us to the new primary instance, and that we have high availability.
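
To restore the original number of instances after this test, restart the halted sandbox instance and, if it does not rejoin the cluster automatically, add it back with the Cluster.rejoinInstance() operation. A minimal sketch for the sandbox deployment used above:

mysql-js> dba.startSandboxInstance(3310)
mysql-js> cluster.rejoinInstance('root@localhost:3310')

The restored instance rejoins as a secondary; the newly elected primary is unaffected.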

Working with a Cluster's Routers

You can bootstrap multiple instances of MySQL Router against a cluster or ReplicaSet. From version 8.0.19, to show a list of all registered MySQL Router instances, issue:

Cluster.listRouters()

The result provides information about each registered MySQL Router instance, such as its name in the metadata, the hostname, ports, and so on. For example, issue:

mysql-js> Cluster.listRouters()
{
    "clusterName": "example", 
    "routers": {
        "ic-1:3306": {
            "hostname": "ic-1:3306", 
            "lastCheckIn": "2020-01-16 11:43:45", 
            "roPort": 6447, 
            "roXPort": 64470, 
            "rwPort": 6446, 
            "rwXPort": 64460, 
            "version": "8.0.19"
        }
    }
}

The returned information shows:

  • The name of the MySQL Router instance.

  • The last check-in timestamp, which is generated by a periodic ping from the MySQL Router instance and stored in the metadata.

  • The hostname where the MySQL Router instance is running.

  • The Read-Only and Read-Write ports which the MySQL Router instance publishes for classic MySQL protocol connections.

  • The Read-Only and Read-Write ports which the MySQL Router instance publishes for X Protocol connections.

  • The version of this MySQL Router instance. Support for returning version was added in 8.0.19. If this operation is run against an earlier version of MySQL Router, the version field is null.

Additionally, the Cluster.listRouters() operation can show a list of instances that do not support the metadata version supported by MySQL Shell. Use the onlyUpgradeRequired option, as shown in the example below. The returned list shows only the MySQL Router instances registered with the cluster which require an upgrade of their metadata; see Section 21.2.8.2, “Upgrading InnoDB Cluster Metadata”.
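
For example, to list only the registered instances whose metadata requires an upgrade, issue:

mysql-js> Cluster.listRouters({'onlyUpgradeRequired':'true'})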

MySQL Router instances are not automatically removed from the metadata, so, for example, as you bootstrap more instances the InnoDB Cluster metadata contains a growing number of references to them. To remove a registered MySQL Router instance from a cluster's metadata, use the Cluster.removeRouterMetadata(router) operation, added in version 8.0.19. Use the Cluster.listRouters() operation to get the name of the MySQL Router instance you want to remove, and pass it in as router. For example, suppose the MySQL Router instances registered with a cluster were:

mysql-js> Cluster.listRouters()
{
    "clusterName": "testCluster",
    "routers": {
        "myRouter1": {
            "hostname": "example1.com",
            "lastCheckIn": null,
            "routerId": "1",
            "roPort": "6447",
            "rwPort": "6446",
            "version": null
        },
        "myRouter2": {
            "hostname": "example2.com",
            "lastCheckIn": "2019-11-27 16:25:00",
            "routerId": "3",
            "roPort": "6447",
            "rwPort": "6446",
            "version": "8.0.19"
        }
    }
}

Because the instance named myRouter1 has null for lastCheckIn and version, we remove this stale instance from the metadata by issuing:

mysql-js> cluster.removeRouterMetadata('myRouter1')

The MySQL Router instance specified is unregistered from the cluster by removing it from the InnoDB Cluster metadata.
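
To confirm the removal, you can issue the Cluster.listRouters() operation again; the unregistered instance no longer appears in the output (abbreviated here from the earlier example):

mysql-js> cluster.listRouters()
{
    "clusterName": "testCluster",
    "routers": {
        "myRouter2": {
            ...
        }
    }
}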