
21.4 Using MySQL Router with InnoDB Cluster

This section describes how to use MySQL Router with InnoDB cluster to achieve high availability. Regardless of whether you have deployed a sandbox or production cluster, MySQL Router can configure itself based on the InnoDB cluster's metadata using the --bootstrap option. This configures MySQL Router automatically to route connections to the cluster's server instances. Client applications connect to the ports MySQL Router provides, without any need to be aware of the InnoDB cluster topology. In the event of an unexpected failure, the InnoDB cluster adjusts itself automatically and MySQL Router detects the change. This removes the need for your client application to handle failover. For more information, see Routing for MySQL InnoDB cluster.

Note

Do not attempt to configure MySQL Router manually to redirect to the ports of an InnoDB cluster. Always use the --bootstrap option as this ensures that MySQL Router takes its configuration from the InnoDB cluster's metadata. See Cluster Metadata and State.

The recommended deployment of MySQL Router is on the same host as the application. When using a sandbox deployment, everything is running on a single host, therefore you deploy MySQL Router to the same host. When using a production deployment, we recommend deploying one MySQL Router instance to each machine used to host one of your client applications. It is also possible to deploy MySQL Router to a common machine through which your application instances connect.

Assuming MySQL Router is already installed (see Installing MySQL Router), use the --bootstrap option to provide the location of a server instance that belongs to the InnoDB cluster. MySQL Router uses the included metadata cache plugin to retrieve the InnoDB cluster's metadata, consisting of a list of server instance addresses which make up the InnoDB cluster and their role in the cluster. You pass the URI-like connection string of the server that MySQL Router should retrieve the InnoDB cluster metadata from. For example:

shell> mysqlrouter --bootstrap icadmin@ic-1:3306 --user=mysqlrouter

You are prompted for the instance password and an encryption key for MySQL Router to use. The encryption key is used to encrypt the instance password that MySQL Router uses to connect to the cluster. The ports you can use to connect to the InnoDB cluster are also displayed. The MySQL Router bootstrap process creates a mysqlrouter.conf file, with settings based on the cluster metadata retrieved from the address passed to the --bootstrap option, in the above example icadmin@ic-1:3306. Based on the retrieved InnoDB cluster metadata, MySQL Router automatically configures the mysqlrouter.conf file, including a metadata_cache section. In MySQL Router 8.0.14 and later, the --bootstrap option automatically configures MySQL Router to track and store active MySQL InnoDB cluster Metadata server addresses at the path configured by dynamic_state. This ensures that when MySQL Router is restarted it knows which MySQL InnoDB cluster Metadata server addresses are current. For more information, see the dynamic_state documentation.
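
For illustration, a configuration generated by MySQL Router 8.0.14 or later might contain entries along the following lines. The file path, generated account name, and cluster name shown here are placeholders; the values in your mysqlrouter.conf depend on your installation and cluster:

[DEFAULT]
# Bootstrap records the active metadata server addresses in this file,
# so no static bootstrap_server_addresses entry is written.
dynamic_state=/var/lib/mysqlrouter/state.json

[metadata_cache:prodCluster]
router_id=1
# Internal account generated during bootstrap.
user=mysql_router1_jy95yozko3k2
metadata_cluster=prodCluster
ttl=0.5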

In earlier MySQL Router versions, metadata server information was defined during Router's initial bootstrap operation and stored statically as bootstrap_server_addresses in the configuration file, which contained the addresses for all server instances in the cluster. For example:

[metadata_cache:prodCluster]
router_id=1
bootstrap_server_addresses=mysql://icadmin@ic-1:3306,mysql://icadmin@ic-2:3306,mysql://icadmin@ic-3:3306
user=mysql_router1_jy95yozko3k2
metadata_cluster=prodCluster
ttl=300
Tip

If using MySQL Router 8.0.13 or earlier, when you change the topology of a cluster by adding another server instance after you have bootstrapped MySQL Router, you need to update bootstrap_server_addresses based on the updated metadata. Either restart MySQL Router using the --bootstrap option, or manually edit the bootstrap_server_addresses option in the mysqlrouter.conf file and restart MySQL Router.
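
For example, if a hypothetical fourth instance ic-4 were added to the cluster, the manually updated entry would list it alongside the existing instances:

bootstrap_server_addresses=mysql://icadmin@ic-1:3306,mysql://icadmin@ic-2:3306,mysql://icadmin@ic-3:3306,mysql://icadmin@ic-4:3306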

The generated MySQL Router configuration creates TCP ports which you use to connect to the cluster. By default, ports for communicating with the cluster using both classic MySQL protocol and X Protocol are created. To use X Protocol the server instances must have X Plugin installed and configured, which is the default for MySQL 8.0 and later. The default available TCP ports are:

  • 6446 - for classic MySQL protocol read-write sessions; MySQL Router redirects incoming connections on this port to primary server instances.

  • 6447 - for classic MySQL protocol read-only sessions; MySQL Router redirects incoming connections on this port to one of the secondary server instances.

  • 64460 - for X Protocol read-write sessions; MySQL Router redirects incoming connections on this port to primary server instances.

  • 64470 - for X Protocol read-only sessions; MySQL Router redirects incoming connections on this port to one of the secondary server instances.

Depending on your MySQL Router configuration the port numbers might differ from the above, for example if you use the --conf-base-port option or if the group_replication_single_primary_mode variable is changed. The exact ports are listed when you start MySQL Router.
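
For example, to have the bootstrap process number the generated ports from a different base value (7001 here is purely illustrative), you could issue:

shell> mysqlrouter --bootstrap icadmin@ic-1:3306 --user=mysqlrouter --conf-base-port=7001

The remaining ports are derived from the base port, and the bootstrap output lists the exact ports created.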

The way incoming connections are redirected depends on the type of cluster being used. When using a single-primary cluster, by default MySQL Router publishes an X Protocol port and a classic MySQL protocol port, which clients connect to for read-write sessions and which are redirected to the cluster's single primary. With a multi-primary cluster, read-write sessions are redirected to one of the primary instances in a round-robin fashion. For example, this means that the first connection to port 6446 is redirected to the ic-1 instance, the second connection to port 6446 is redirected to the ic-2 instance, and so on. For incoming read-only connections, MySQL Router redirects connections to one of the secondary instances, also in a round-robin fashion. To modify this behavior, see the routing_strategy option.
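
As an illustration of where this behavior is configured, a bootstrapped mysqlrouter.conf contains routing sections along the following lines. The section names and the exact form of the destinations URI vary by MySQL Router version, so treat this as a sketch rather than the literal output of any particular release:

[routing:prodCluster_rw]
bind_address=0.0.0.0
bind_port=6446
# Read-write connections go to the primary instance(s).
destinations=metadata-cache://prodCluster/?role=PRIMARY
routing_strategy=first-available
protocol=classic

[routing:prodCluster_ro]
bind_address=0.0.0.0
bind_port=6447
# Read-only connections rotate through the secondary instances.
destinations=metadata-cache://prodCluster/?role=SECONDARY
routing_strategy=round-robin-with-fallback
protocol=classic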

Once bootstrapped and configured, start MySQL Router. If you used a system-wide install with the --bootstrap option, then issue:

shell> mysqlrouter &

If you installed MySQL Router to a directory using the --directory option, use the start.sh script found in the directory you installed to. Alternatively, set up a service to start MySQL Router automatically when the system boots; see Starting MySQL Router. You can now connect a MySQL client, such as MySQL Shell, to one of the incoming MySQL Router ports as described above and see how the client is transparently connected to one of the InnoDB cluster instances.

shell> mysqlsh --uri root@localhost:6446

To verify which instance you are actually connected to, issue an SQL query that selects the port system variable.

mysql-js> \sql
Switching to SQL mode... Commands end with ;
mysql-sql> select @@port;
+--------+
| @@port |
+--------+
|   3310 |
+--------+

Testing High Availability

To test if high availability works, simulate an unexpected halt by killing an instance. The cluster detects the fact that the instance left the cluster and reconfigures itself. Exactly how the cluster reconfigures itself depends on whether you are using a single-primary or multi-primary cluster, and the role the instance serves within the cluster.

In single-primary mode:

  • If the current primary leaves the cluster, one of the secondary instances is elected as the new primary, with instances prioritized by the lowest server_uuid. MySQL Router redirects read-write connections to the newly elected primary.

  • If a current secondary leaves the cluster, MySQL Router stops redirecting read-only connections to the instance.

For more information see Section 18.1.3.1, “Single-Primary Mode”.

In multi-primary mode:

  • If a current "R/W" instance leaves the cluster, MySQL Router redirects read-write connections to other primaries. If the instance which left was the last primary in the cluster then the cluster is completely gone and you cannot connect to any MySQL Router port.

For more information see Section 18.1.3.2, “Multi-Primary Mode”.

There are various ways to simulate an instance leaving a cluster. For example, you can forcibly stop the MySQL server on an instance, or use the AdminAPI dba.killSandboxInstance() operation if testing a sandbox deployment. In this example, assume there is a single-primary sandbox cluster deployment with three server instances, and that the instance listening at port 3310 is the current primary. Simulate the instance leaving the cluster unexpectedly:

mysql-js> dba.killSandboxInstance(3310)

The cluster detects the change and elects a new primary automatically. Assuming your session is connected to port 6446, the default read-write classic MySQL protocol port, MySQL Router should detect the change to the cluster's topology and redirect your session to the newly elected primary. To verify this, switch to SQL mode in MySQL Shell using the \sql command and select the instance's port variable to check which instance your session has been redirected to. Notice that the first SELECT statement fails because the connection to the original primary was lost and the current session was closed. MySQL Shell automatically reconnects for you, and when you issue the query again the new port is confirmed.

mysql-js> \sql
Switching to SQL mode... Commands end with ;
mysql-sql> SELECT @@port;
ERROR: 2013 (HY000): Lost connection to MySQL server during query
The global session got disconnected.
Attempting to reconnect to 'root@localhost:6446'...
The global session was successfully reconnected.
mysql-sql> SELECT @@port;
+--------+
| @@port |
+--------+
|   3330 |
+--------+
1 row in set (0.00 sec)

In this example, the instance at port 3330 has been elected as the new primary. This shows that the InnoDB cluster provided us with automatic failover, that MySQL Router has automatically reconnected us to the new primary instance, and that we have high availability.

MySQL Router and Metadata Servers

When MySQL Router is bootstrapped against a cluster, it records the server instances' addresses in its configuration file. If any additional instances are added to the cluster after bootstrapping MySQL Router, they are not automatically detected and therefore are not used for connection routing.

To ensure that newly added instances are routed to correctly, you must bootstrap MySQL Router against the cluster again so that it reads the updated metadata. This means that you must restart MySQL Router and include the --bootstrap option.
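
For example, re-run the original bootstrap command against the cluster. Depending on your installation, you might also need the --force option to overwrite the previously generated configuration:

shell> mysqlrouter --bootstrap icadmin@ic-1:3306 --user=mysqlrouter --force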

Working with a Cluster's Routers

You can bootstrap multiple instances of MySQL Router against a cluster or ReplicaSet. From version 8.0.19, to show a list of all registered MySQL Router instances, issue:

Cluster.listRouters()

The result provides information about each registered MySQL Router instance, such as its name in the metadata, the hostname, ports, and so on. For example, issue:

mysql-js> Cluster.listRouters()
{
    "clusterName": "example",
    "routers": {
        "ic-1:3306": {
            "hostname": "ic-1:3306",
            "lastCheckIn": "2020-01-16 11:43:45",
            "roPort": 6447,
            "roXPort": 64470,
            "rwPort": 6446,
            "rwXPort": 64460,
            "version": "8.0.19"
        }
    }
}

The returned information shows:

  • The name of the MySQL Router instance.

  • Last check-in timestamp, which is generated by a periodic ping from the MySQL Router stored in the metadata.

  • Hostname where the MySQL Router instance is running.

  • Read-Only and Read-Write ports which the MySQL Router publishes for classic MySQL protocol connections.

  • Read-Only and Read-Write ports which the MySQL Router publishes for X Protocol connections.

  • Version of this MySQL Router instance. Support for returning version was added in 8.0.19. If this operation is run against an earlier version of MySQL Router, the version field is null.

Additionally, the Cluster.listRouters() operation can show a list of instances that do not support the metadata version supported by MySQL Shell. Use the onlyUpgradeRequired option, for example by issuing Cluster.listRouters({'onlyUpgradeRequired': true}). The returned list shows only the MySQL Router instances registered with the cluster which require an upgrade of their metadata. See Section 21.3.2, “Upgrading InnoDB cluster Metadata”.

MySQL Router instances are not automatically removed from the metadata, so, for example, as you bootstrap more instances, the InnoDB cluster metadata contains a growing number of references to them. To remove a registered MySQL Router instance from a cluster's metadata, use the Cluster.removeRouterMetadata(router) operation, added in version 8.0.19. Use the Cluster.listRouters() operation to get the name of the MySQL Router instance you want to remove, and pass it in as router. For example, suppose your MySQL Router instances registered with a cluster were:

mysql-js> Cluster.listRouters()
{
    "clusterName": "testCluster",
    "routers": {
        "myRouter1": {
            "hostname": "example1.com",
            "lastCheckIn": null,
            "routerId": "1",
            "roPort": "6447",
            "rwPort": "6446",
            "version": null
        },
        "myRouter2": {
            "hostname": "example2.com",
            "lastCheckIn": "2019-11-27 16:25:00",
            "routerId": "3",
            "roPort": "6447",
            "rwPort": "6446",
            "version": "8.0.19"
        }
    }
}

Based on the fact that the instance named myRouter1 has null for lastCheckIn and version, we decide to remove this old instance from the metadata by issuing:

mysql-js> Cluster.removeRouterMetadata('myRouter1')

The MySQL Router instance specified is unregistered from the cluster by removing it from the InnoDB cluster metadata.
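
To confirm the removal, list the registered MySQL Router instances again; the myRouter1 entry no longer appears in the output:

mysql-js> Cluster.listRouters()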

Configuring the MySQL Router User

When MySQL Router connects to an InnoDB cluster or InnoDB ReplicaSet, it requires a user account which has the correct privileges. From MySQL Router version 8.0.19, this internal user can be specified using the --account option. In previous versions, MySQL Router created internal accounts at each bootstrap of the cluster, which could result in a number of accounts building up over time. From MySQL Shell version 8.0.20, you can use AdminAPI to set up the user account required for MySQL Router. Use the setupRouterAccount(user, [options]) operation to create a MySQL user account, or upgrade an existing account, so that it can be used by MySQL Router to operate on an InnoDB cluster or InnoDB ReplicaSet. This is the recommended method of configuring MySQL Router with InnoDB cluster and InnoDB ReplicaSet.

To add a new MySQL Router account named myRouter1 to the InnoDB cluster referenced by the variable testCluster, issue:

mysqlsh> testCluster.setupRouterAccount('myRouter1')

In this case, no domain is specified and so the account is created with the wildcard (%) character, which ensures that the created user can connect from any domain. To limit the account to only be able to connect from the example.com domain, issue:

mysqlsh> testCluster.setupRouterAccount('myRouter1@example.com')

The operation prompts for a password, and then sets up the MySQL Router user with the correct privileges. If the InnoDB cluster or InnoDB ReplicaSet has multiple instances, the created MySQL Router user is propagated to all of the instances. Once you have a user set up, you can use it to bootstrap MySQL Router using the --account option. For example, suppose you created a cluster administrator named adminAccount using the setupAdminAccount() operation. To bootstrap MySQL Router using the myRouter1 user created by the setupRouterAccount() operation, issue:

shell> mysqlrouter --bootstrap adminAccount@ic-1:3306 --account=myRouter1

When you already have a MySQL Router user configured, for example if you were using a version prior to 8.0.20, you can use the setupRouterAccount() operation to reconfigure the existing user. In this case, pass in the update option set to true. For example, to reconfigure the myOldRouter user, issue:

mysqlsh> testCluster.setupRouterAccount('myOldRouter', {'update':true})