
20.5 Using MySQL Router with InnoDB Cluster

This section describes how to use MySQL Router with InnoDB cluster to achieve high availability. Regardless of whether you have deployed a sandbox or production cluster, MySQL Router can be configured to route connections to the cluster. Using MySQL Router with InnoDB cluster means your client applications no longer need to handle failover: they only need to connect to the ports MySQL Router provides, without any need to be aware of the InnoDB cluster topology.

The recommended deployment of MySQL Router is on the same host as the application. When using a sandbox deployment, everything is running on a single host, so you deploy MySQL Router to the same host. When using a production deployment, we recommend deploying one MySQL Router instance to each machine used to host one of your client applications. It is also possible to deploy MySQL Router to a common machine through which your application instances connect. To configure MySQL Router automatically, you need the address of a server instance that belongs to the InnoDB cluster and the credentials of a user with access to the cluster's metadata.

Assuming MySQL Router is already installed (see Installing MySQL Router), the only required step is to start MySQL Router with the --bootstrap option, providing the location of the InnoDB cluster metadata. You pass the URI-type string of the server from which MySQL Router should retrieve the InnoDB cluster metadata. For example:

shell> mysqlrouter --bootstrap root@localhost:3310 --user=mysqlrouter

You are prompted for the instance password and an encryption key for MySQL Router to use. The encryption key is used to encrypt the instance password that MySQL Router uses to connect to the cluster. The ports you can use to connect to the InnoDB cluster are also displayed. The MySQL Router bootstrap process creates a mysqlrouter.conf file, with settings based on the cluster metadata retrieved from the address passed to the --bootstrap option, in the above example root@localhost:3310.
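For illustration, the generated mysqlrouter.conf contains a metadata cache section plus one routing section per port. The following excerpt is a sketch only; the cluster name (testCluster here), the generated router user, and the server addresses all depend on your deployment:

[metadata_cache:testCluster]
router_id=1
bootstrap_server_addresses=mysql://localhost:3310,mysql://localhost:3320,mysql://localhost:3330
user=mysql_router1_example
metadata_cluster=testCluster
ttl=300

[routing:testCluster_default_rw]
bind_address=0.0.0.0
bind_port=6446
destinations=metadata-cache://testCluster/default?role=PRIMARY
mode=read-write
protocol=classic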

The generated MySQL Router configuration creates TCP ports which you use to connect to the cluster. Ports for communicating with the cluster using both Classic MySQL protocol and X Protocol are created. To use X Protocol the server instances must have X Plugin installed and configured. For a sandbox deployment, instances have X Plugin set up automatically. For a production deployment, if you want to use X Protocol you need to install and configure X Plugin on each instance, see Section 19.3, “Setting Up MySQL as a Document Store”. The default available TCP ports are:

  • 6446 - for Classic MySQL protocol read-write sessions; MySQL Router redirects incoming connections on this port to a primary server instance.

  • 6447 - for Classic MySQL protocol read-only sessions; MySQL Router redirects incoming connections on this port to one of the secondary server instances.

  • 64460 - for X Protocol read-write sessions; MySQL Router redirects incoming connections on this port to a primary server instance.

  • 64470 - for X Protocol read-only sessions; MySQL Router redirects incoming connections on this port to one of the secondary server instances.

Depending on your MySQL Router configuration the port numbers might differ from those shown above, for example if you use the --conf-base-port option or the group_replication_single_primary_mode variable. The exact ports are listed when you start MySQL Router.
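For example, to generate a configuration whose ports are derived from a different base, you could bootstrap with --conf-base-port (shown against the sandbox address used above; the exact port layout is determined by MySQL Router at bootstrap time):

shell> mysqlrouter --bootstrap root@localhost:3310 --conf-base-port 6500 --user=mysqlrouter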

The way incoming connections are redirected depends on the type of cluster being used. When using a single-primary cluster, read-write sessions are redirected to the single primary. With a multi-primary cluster, read-write sessions are redirected to one of the primary instances. For incoming read-only connections, MySQL Router redirects them to one of the secondary instances in a round-robin fashion, as demonstrated after the connection example below.

Once bootstrapped and configured, start MySQL Router:

shell> mysqlrouter &

Alternatively, set up a service to start MySQL Router automatically when the system boots.
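On Linux platforms where MySQL Router was installed from the official packages, a systemd service is typically included; for example (the unit name mysqlrouter is an assumption, check your platform's packaging):

shell> systemctl enable mysqlrouter
shell> systemctl start mysqlrouter

You can now connect a MySQL client, such as MySQL Shell, to one of the incoming MySQL Router ports described above and see how the client is transparently connected to one of the InnoDB cluster instances. For example: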

shell> mysqlsh --uri root@localhost:6446

To verify which instance you are actually connected to, simply issue an SQL query against the port system variable.

mysql-js> \sql
Switching to SQL mode... Commands end with ;
mysql-sql> select @@port;
+--------+
| @@port |
+--------+
|   3310 |
+--------+
1 row in set (0.00 sec)
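The round-robin distribution of read-only sessions can be observed in the same way. For example, assuming the sandbox cluster instances listen at ports 3310, 3320, and 3330, each new connection to port 6447 is served by a different secondary (the output below is illustrative; your ports may differ):

shell> mysql -u root -h 127.0.0.1 -P 6447 -p -e "SELECT @@port"
+--------+
| @@port |
+--------+
|   3320 |
+--------+
shell> mysql -u root -h 127.0.0.1 -P 6447 -p -e "SELECT @@port"
+--------+
| @@port |
+--------+
|   3330 |
+--------+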

Testing High Availability

To test whether high availability works, simulate an unexpected halt by killing an instance. The cluster detects that the instance has left and reconfigures itself. Exactly how the cluster reconfigures itself depends on whether you are using a single-primary or multi-primary cluster, and on the role the instance serves within the cluster.

In single-primary mode:

  • If the current primary leaves the cluster, one of the secondary instances is elected as the new primary, with instances prioritized by the lowest server_uuid. MySQL Router redirects read-write connections to the newly elected primary.

  • If a current secondary leaves the cluster, MySQL Router stops redirecting read-only connections to the instance.

For more information see Section 17.4.1.1, “Single-Primary Mode”.
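In single-primary mode you can confirm which instance is the current primary by querying the group_replication_primary_member status variable from any member; in MySQL 5.7 it holds the server_uuid of the primary:

mysql-sql> SELECT VARIABLE_VALUE FROM performance_schema.global_status WHERE VARIABLE_NAME = 'group_replication_primary_member';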

In multi-primary mode:

  • If a current "R/W" instance leaves the cluster, MySQL Router redirects read-write connections to one of the remaining primaries. If the instance which left was the last primary in the cluster, no instances remain to serve the connections and clients can no longer connect through MySQL Router.

  • If a current "R/O" instance leaves the cluster, MySQL Router stops redirecting read-only connections to that instance.

For more information see Section 17.4.1.2, “Multi-Primary Mode”.
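In either mode you can inspect the current group membership from any instance by querying the Performance Schema table that Group Replication maintains in MySQL 5.7:

mysql-sql> SELECT MEMBER_HOST, MEMBER_PORT, MEMBER_STATE FROM performance_schema.replication_group_members;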

There are various ways to simulate an instance leaving a cluster. For example, you can forcibly stop the MySQL server on an instance, or use the AdminAPI method dba.killSandboxInstance() if testing a sandbox deployment. In this example, assume there is a single-primary sandbox cluster deployment with three server instances, and that the instance listening at port 3310 is the current primary. Simulate the instance leaving the cluster unexpectedly:

mysql-js> dba.killSandboxInstance(3310)

Switch to SQL mode in MySQL Shell using the \sql command and query the port variable to check which instance you are connected to. Notice that the first SELECT statement fails because the connection to the original primary was lost; the current session has been closed. MySQL Shell automatically reconnects for you, and when you issue the command again the new port is confirmed.

mysql-js> \sql
Switching to SQL mode... Commands end with ;
mysql-sql> SELECT @@port;
ERROR: 2013 (HY000): Lost connection to MySQL server during query
The global session got disconnected.
Attempting to reconnect to 'root@localhost:6446'...
The global session was successfully reconnected.
mysql-sql> select @@port;
+--------+
| @@port |
+--------+
|   3330 |
+--------+
1 row in set (0.00 sec)

This shows that the InnoDB cluster provided us with automatic failover, that MySQL Router has automatically reconnected us to the new primary instance, and that we have high availability.
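After the test you can restore the cluster to full strength. A sketch using the sandbox deployment from this example (MySQL Shell prompts for passwords as needed):

mysql-js> dba.startSandboxInstance(3310)
mysql-js> var cluster = dba.getCluster()
mysql-js> cluster.rejoinInstance('root@localhost:3310')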

