3.2 Trying out MySQL Router in a Sandbox

Test a MySQL Router installation by setting up a Router sandbox with InnoDB Cluster. In this setup, Router acts as an intermediate node that redirects client connections to a list of candidate servers. If one server fails, clients are redirected to the next available server in the list.

Set Up a MySQL Server Sandbox

Begin by starting three MySQL Servers. You can do this in a variety of ways, including:

  • Using the MySQL Shell AdminAPI interface that InnoDB Cluster provides. This is the recommended and simplest approach, and is documented in this section. For additional information, see MySQL AdminAPI.

    For a scripted approach, see Scripting AdminAPI; a minimal sketch follows this list.

  • Installing three MySQL Server instances, either on three separate hosts or all on the same host.

  • Using the mysql-test-run.pl script that is part of the MySQL Test Suite framework. For additional information, see The MySQL Test Suite.
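The following sketch illustrates the scripted approach mentioned above. The file name, ports, and password handling are our own placeholders rather than part of the official example, and depending on the Shell version some calls may still prompt interactively:

$> mysqlsh --js --file deploy_cluster.js

Where deploy_cluster.js might contain:

// deploy_cluster.js -- hypothetical deployment script.
// Passing a password option avoids interactive prompts.
dba.deploySandboxInstance(3310, {password: "sandbox"});
dba.deploySandboxInstance(3320, {password: "sandbox"});
dba.deploySandboxInstance(3330, {password: "sandbox"});

// Connect to the seed instance, then build the cluster.
shell.connect("root@localhost:3310", "sandbox");
var cluster = dba.createCluster("myCluster");
cluster.addInstance("root@localhost:3320", {recoveryMethod: "clone"});
cluster.addInstance("root@localhost:3330", {recoveryMethod: "clone"});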

The following example uses the AdminAPI method to set up our cluster sandbox. This is a brief overview, so see MySQL InnoDB Cluster in the InnoDB Cluster manual for additional details. The following assumes you have a current version of MySQL Shell, MySQL Server, and MySQL Router installed.

Deploy a Sandbox Cluster

This example uses MySQL Shell AdminAPI to set up an InnoDB Cluster with three MySQL instances (one primary and two secondaries), plus a bootstrapped standalone MySQL Router with a generated configuration file. Output was shortened using "...".

$> mysqlsh

mysql-js> dba.deploySandboxInstance(3310)
  ...
mysql-js> dba.deploySandboxInstance(3320)
  ...
mysql-js> dba.deploySandboxInstance(3330)
  ...

mysql-js> \connect root@localhost:3310
  ...

mysql-js> cluster = dba.createCluster("myCluster")
  ...

mysql-js> cluster.addInstance("root@localhost:3320")
  ...
mysql-js> cluster.addInstance("root@localhost:3330")
  ...

mysql-js> cluster.status()
{
  "clusterName": "myCluster", 
  "defaultReplicaSet": {
      "name": "default", 
      "primary": "127.0.0.1:3310", 
      "ssl": "REQUIRED", 
      "status": "OK", 
      "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.", 
      "topology": {
          "127.0.0.1:3310": {
              "address": "127.0.0.1:3310", 
              "memberRole": "PRIMARY", 
              "mode": "R/W", 
              "readReplicas": {}, 
              "replicationLag": null, 
              "role": "HA", 
              "status": "ONLINE", 
              "version": "8.0.27"
          }, 
          "127.0.0.1:3320": {
              "address": "127.0.0.1:3320", 
              "memberRole": "SECONDARY", 
              "mode": "R/O", 
              "readReplicas": {}, 
              "replicationLag": null, 
              "role": "HA", 
              "status": "ONLINE", 
              "version": "8.0.27"
          }, 
          "127.0.0.1:3330": {
              "address": "127.0.0.1:3330", 
              "memberRole": "SECONDARY", 
              "mode": "R/O", 
              "readReplicas": {}, 
              "replicationLag": null, 
              "role": "HA", 
              "status": "ONLINE", 
              "version": "8.0.27"
          }
      }, 
      "topologyMode": "Single-Primary"
  }, 
  "groupInformationSourceMember": "127.0.0.1:3310"
}

mysql-js> \q

Bye!
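If you prefer plain SQL over AdminAPI for verification, the Group Replication membership table reports the same topology. This is a supplementary check, not part of the original walkthrough:

$> mysql -u root -h 127.0.0.1 -P 3310 -p -e "SELECT member_host, member_port, member_state, member_role FROM performance_schema.replication_group_members"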

Set Up the Router

Next, set up MySQL Router to route connections to these MySQL instances. We'll bootstrap Router against the cluster (using --bootstrap) and create a self-contained MySQL Router installation (using --directory). Bootstrapping configures the metadata cache plugin and stores the credentials for the generated account securely in Router's keyring.

$> mysqlrouter --bootstrap root@localhost:3310 --directory /tmp/router

Please enter MySQL password for root: 
# Bootstrapping MySQL Router instance at '/tmp/router'...

- Creating account(s) (only those that are needed, if any)
- Verifying account (using it to run SQL queries that would be run by Router)
- Storing account in keyring
- Adjusting permissions of generated files
- Creating configuration /tmp/router/mysqlrouter.conf

# MySQL Router configured for the InnoDB Cluster 'myCluster'

After this MySQL Router has been started with the generated configuration

    $ mysqlrouter -c /tmp/router/mysqlrouter.conf

InnoDB Cluster 'myCluster' can be reached by connecting to:

## MySQL Classic protocol

- Read/Write Connections: localhost:6446
- Read/Only Connections:  localhost:6447

## MySQL X protocol

- Read/Write Connections: localhost:6448
- Read/Only Connections:  localhost:6449

$> cd /tmp/router

$> ./start.sh

MySQL Router is now configured and running, using the myCluster cluster that we set up earlier. The start.sh script, along with a matching stop.sh, was generated by the bootstrap as part of the self-contained installation.
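For reference, the bootstrap wrote routing sections along the following lines into /tmp/router/mysqlrouter.conf. Treat this as a trimmed sketch of the file's shape: the generated account name, section names, and some defaults vary by Router version and system:

[metadata_cache:myCluster]
cluster_type=gr
router_id=1
user=mysql_router1_...
metadata_cluster=myCluster
ttl=0.5

[routing:myCluster_rw]
bind_port=6446
destinations=metadata-cache://myCluster/?role=PRIMARY
routing_strategy=first-available
protocol=classic

[routing:myCluster_ro]
bind_port=6447
destinations=metadata-cache://myCluster/?role=SECONDARY
routing_strategy=round-robin-with-fallback
protocol=classic

The routing_strategy values explain the behavior we test next: the read-write port always picks the first available PRIMARY, while the read-only port rotates across the secondaries.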

Testing the Router

Now connect to MySQL Router as you would to any other MySQL server, by connecting to one of its configured ports.

The following example connects to MySQL Router on port 6446, the port we configured for read-write connections:

$> mysql -u root -h 127.0.0.1 -P 6446 -p
mysql> SELECT @@port;

+--------+
| @@port |
+--------+
|   3310 |
+--------+

As demonstrated, we connected to MySQL Router using port 6446, but the reported port shows we reached the MySQL instance on port 3310 (our PRIMARY). Next, let's connect to a read-only MySQL instance:

$> mysql -u root -h 127.0.0.1 -P 6447 -p
mysql> SELECT @@port;

+--------+
| @@port |
+--------+
|   3320 |
+--------+

As demonstrated, we connected to MySQL Router using port 6447, but the reported port shows we reached the MySQL instance on port 3320 (one of our secondaries). The read-only port uses a round-robin strategy by default, so each new connection is routed to a different secondary:

$> mysql -u root -h 127.0.0.1 -P 6447 -p
mysql> SELECT @@port;

+--------+
| @@port |
+--------+
|   3330 |
+--------+

As demonstrated, our second read-only connection to port 6447 reached a different secondary, this time on port 3330 instead of 3320.
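To watch the round-robin rotation without retyping the command, a quick shell loop works. The loop is our own illustration and assumes the root password is supplied through an option file or login path so mysql does not prompt on each iteration:

$> for i in 1 2 3 4; do mysql -u root -h 127.0.0.1 -P 6447 -sN -e 'SELECT @@port'; done

The reported ports should alternate between the two secondaries, 3320 and 3330.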

Now test failover by first killing the primary MySQL instance (port 3310) that we connected to above.

$> mysqlsh --uri root@127.0.0.1:6446

mysql-js> dba.killSandboxInstance(3310)

The MySQL sandbox instance on this host in 
/home/philip/mysql-sandboxes/3310 will be killed

Killing MySQL instance...

Instance localhost:3310 successfully killed.
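Our Shell session reached the killed primary through Router port 6446, so reconnect before checking the cluster. A check along these lines confirms the new topology; the output, omitted here, should report 127.0.0.1:3310 as missing and 127.0.0.1:3320 as the new primary:

mysql-js> \connect root@localhost:6446
  ...
mysql-js> dba.getCluster().status()
  ...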

You can continue checking from MySQL Shell, but let's rerun the same mysql client examples we used above:

$> mysql -u root -h 127.0.0.1 -P 6446 -p
mysql> SELECT @@port;

+--------+
| @@port |
+--------+
|   3320 |
+--------+

$> mysql -u root -h 127.0.0.1 -P 6447 -p
mysql> SELECT @@port;

+--------+
| @@port |
+--------+
|   3330 |
+--------+

As shown, despite connecting to the same ports (6446 for the primary and 6447 for a secondary), the underlying ports changed. The instance on port 3320 was promoted from secondary to primary, and the remaining secondary on port 3330 now serves the read-only port.

We have now demonstrated MySQL Router performing simple redirects to a list of primary and secondary MySQL instances.
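To return the sandbox to its original three-node state, restart the killed instance. In current versions it rejoins the group automatically when it starts; if it does not, cluster.rejoinInstance() brings it back. A sketch, with output omitted (note that the instance on port 3320 remains the primary):

mysql-js> dba.startSandboxInstance(3310)
  ...
mysql-js> cluster.rejoinInstance("root@localhost:3310")
  ...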

Bootstrapping also enables Router's REST API in the generated mysqlrouter.conf, and by default the following URL returns a swagger.json describing your local setup: https://127.0.0.1:8443/api/20190715/swagger.json. See also Chapter 6, MySQL Router REST API.
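For example, assuming the default bootstrap settings, you can fetch that specification with curl; the -k option skips verification of the self-signed certificate that bootstrap generates:

$> curl -k https://127.0.0.1:8443/api/20190715/swagger.json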