
3.2 Trying out MySQL Router in a Sandbox

Test a MySQL Router installation by setting up a Router sandbox with an InnoDB cluster. In this setup, Router acts as an intermediate node that redirects client connections to a list of servers: if one server fails, clients are redirected to the next available server in the list.
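The redirect behavior described above can be sketched in a few lines of Python. This is a simplified illustration of the "first available" idea, not Router's actual implementation; the function and variable names are invented for this example.

```python
# Minimal sketch of first-available redirection: try each destination
# in order and hand the client the first one that responds.

def first_available(destinations, is_alive):
    """Return the first reachable destination, or None if all are down."""
    for dest in destinations:
        if is_alive(dest):
            return dest
    return None

# Three sandbox instances, matching the example in this section.
servers = ["localhost:3310", "localhost:3320", "localhost:3330"]

# With all servers up, clients land on the first server in the list.
up = set(servers)
print(first_available(servers, lambda d: d in up))  # localhost:3310

# If that server fails, clients are redirected to the next one.
up.discard("localhost:3310")
print(first_available(servers, lambda d: d in up))  # localhost:3320
```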

Set Up a MySQL Server Sandbox

Begin by starting three MySQL Servers. You can do this in a variety of ways, including:

  • Using the MySQL Shell AdminAPI interface that InnoDB cluster provides. This is the recommended and simplest approach, and is documented in this section. For additional information, see InnoDB Cluster.

  • By installing three MySQL Server instances on three different hosts, or on the same host.

  • Using the mysql-test-run.pl script that is part of the MySQL Test Suite framework. For additional information, see The MySQL Test Suite.

  • Using the mysqlserverclone MySQL Utility.

The following example uses the AdminAPI method to set up our cluster sandbox. This is a brief overview, so see Getting Started with InnoDB Cluster in the InnoDB cluster manual for additional details. The following assumes you have a current version of MySQL Shell, MySQL Server, and MySQL Router installed.

Deploy a Sandbox Cluster

This example uses MySQL Shell AdminAPI to set up an InnoDB cluster with three MySQL instances (one primary and two secondaries), and a bootstrapped standalone MySQL Router with a generated configuration file. Output is shortened using "...".

shell> mysqlsh

mysql-js> dba.deploySandboxInstance(3310)
  ...
mysql-js> dba.deploySandboxInstance(3320)
  ...
mysql-js> dba.deploySandboxInstance(3330)
  ...

mysql-js> \connect root@localhost:3310
  ...

mysql-js> cluster = dba.createCluster("myCluster")
  ...

mysql-js> cluster.addInstance("root@localhost:3320")
  ...
mysql-js> cluster.addInstance("root@localhost:3330")
  ...

mysql-js> cluster.status()
{
    "clusterName": "myCluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "localhost:3310",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
        "topology": {
            "localhost:3310": {
                "address": "localhost:3310",
                "mode": "R/W",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            },
            "localhost:3320": {
                "address": "localhost:3320",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            },
            "localhost:3330": {
                "address": "localhost:3330",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            }
        }
    }
}

Set Up the Router

Next, set up MySQL Router to redirect to these MySQL instances. We'll bootstrap Router (using --bootstrap) and create a self-contained MySQL Router installation using --directory. Bootstrapping also configures the metadata cache plugin, which securely stores the credentials it needs.

shell> mysqlrouter --bootstrap root@localhost:3310 --directory /opt/myrouter

Please enter MySQL password for root:

Bootstrapping MySQL Router instance at /opt/myrouter...
MySQL Router has now been configured for the InnoDB cluster 'myCluster'.

The following connection information can be used to connect to the cluster.

Classic MySQL protocol connections to cluster 'myCluster':
- Read/Write Connections: localhost:6446
- Read/Only Connections: localhost:6447

X protocol connections to cluster 'myCluster':
- Read/Write Connections: localhost:64460
- Read/Only Connections: localhost:64470

shell> cd /opt/myrouter

shell> ./start.sh

PID 28817 written to /opt/myrouter/mysqlrouter.pid
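Bootstrapping writes a configuration file (here, /opt/myrouter/mysqlrouter.conf) that defines the metadata cache and the routing sections behind the ports listed above. The fragment below is illustrative only: the exact section names, generated router account name, and defaults vary by Router version.

```ini
# Illustrative fragment; the generated file differs by version,
# and the router account name below is a placeholder.
[metadata_cache:myCluster]
router_id=1
user=mysql_router1_example
metadata_cluster=myCluster
ttl=0.5

[routing:myCluster_rw]
bind_address=0.0.0.0
bind_port=6446
destinations=metadata-cache://myCluster/default?role=PRIMARY
routing_strategy=first-available
protocol=classic

[routing:myCluster_ro]
bind_address=0.0.0.0
bind_port=6447
destinations=metadata-cache://myCluster/default?role=SECONDARY
routing_strategy=round-robin-with-fallback
protocol=classic
```

The metadata-cache:// destination URIs mean Router resolves the actual host list dynamically from the cluster metadata rather than from a static server list.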

MySQL Router is now configured and running, and is using the myCluster cluster that we set up earlier.

Testing the Router

Now connect to MySQL Router as you would to any other MySQL Server, for example using the standard mysql client:

Note

Using the option --conf-use-sockets during MySQL Router bootstrap also configures Unix domain socket connections.

shell> mysql -u root -h 127.0.0.1 -P 6446 -p
mysql> SELECT @@port;

+--------+
| @@port |
+--------+
|   3310 |
+--------+

shell> mysql -u root -h 127.0.0.1 -P 6447 -p
mysql> SELECT @@port;

+--------+
| @@port |
+--------+
|   3320 |
+--------+

As demonstrated, we connected to MySQL Router on port 6446 but are actually connected to port 3310 (our primary). Likewise, connecting to Router on port 6447 routes us to one of the secondary MySQL instances, in this case the one on port 3320.

Now test failover by first killing the primary MySQL instance (port 3310) that we connected to above.

shell> mysqlsh --uri root@127.0.0.1:6446

mysql-js> dba.killSandboxInstance(3310)
The MySQL sandbox instance on this host in
/Users/philip/mysql-sandboxes/3310 will be killed

Killing MySQL instance...

Instance localhost:3310 successfully killed.

You could continue using MySQL Shell to check the connections, but here we repeat the same mysql client commands used above:

shell> mysql -u root -h 127.0.0.1 -P 6446 -p
mysql> SELECT @@port;

+--------+
| @@port |
+--------+
|   3320 |
+--------+

shell> mysql -u root -h 127.0.0.1 -P 6447 -p
mysql> SELECT @@port;

+--------+
| @@port |
+--------+
|   3330 |
+--------+

As shown, despite connecting to the same Router ports (6446 for the primary and 6447 for a secondary), the underlying MySQL instances changed. Our primary moved from port 3310 to 3320, while the secondary connection now goes to port 3330 instead of 3320.
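The role-based resolution behind this failover can be sketched in Python. This is a hypothetical simplification, not Router's actual code: the point is that the RW port always resolves to whichever member currently holds the PRIMARY role in the cluster metadata.

```python
# Simplified sketch: Router resolves a role against cluster metadata,
# so when the primary dies and a secondary is promoted, new RW
# connections follow the new primary automatically.

members = {"localhost:3310": "PRIMARY",
           "localhost:3320": "SECONDARY",
           "localhost:3330": "SECONDARY"}

def resolve(role):
    """Return the members currently holding the given role."""
    return [addr for addr, r in members.items() if r == role]

print(resolve("PRIMARY"))   # ['localhost:3310']

# Kill the primary; Group Replication promotes a secondary and the
# metadata cache picks up the change.
del members["localhost:3310"]
members["localhost:3320"] = "PRIMARY"

print(resolve("PRIMARY"))   # ['localhost:3320']
```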

We have now demonstrated MySQL Router performing simple redirects to a list of primary and secondary MySQL instances.

