8.5 Integrating MySQL Router With InnoDB ClusterSet

MySQL Router routes client application traffic to the appropriate clusters in an InnoDB ClusterSet deployment. You can set a global policy for MySQL Router instances that are used with the InnoDB ClusterSet deployment, and override this with settings for individual MySQL Router instances.

When you bootstrap a MySQL Router instance against an InnoDB ClusterSet deployment, it is aware of the complete topology of the ClusterSet, and can manage write and read traffic appropriately. If a controlled switchover or emergency failover takes place, the MySQL Router instances connected with the InnoDB ClusterSet are aware of this and route traffic to the new primary cluster, except for any instances that you have configured to send traffic to a specific cluster. If a cluster is invalidated, MySQL Router instances stop read and write traffic to it, except for any instances that you have configured to continue sending read traffic in that situation.

For each MySQL Router instance that you are using with InnoDB ClusterSet, you can choose to configure it to follow the primary cluster, or to connect only to a specific target InnoDB Cluster. You can change between these modes online using MySQL Shell.

Follow the primary

In this mode, MySQL Router directs application traffic, both writes and reads, to the cluster in the InnoDB ClusterSet deployment that is currently the primary cluster. This mode is the default.

Named target cluster

In this mode, MySQL Router directs application traffic to the InnoDB Cluster that you specify. This can be the primary cluster in the InnoDB ClusterSet deployment, or it can be a replica cluster. If the target cluster is currently the primary cluster, MySQL Router opens the write port and applications can write to the instance. If the target cluster is currently a read-only replica cluster, MySQL Router allows only read traffic, and denies write traffic. If this situation changes due to a switchover or failover to or from the target cluster, MySQL Router changes the permitted request types accordingly. This mode is useful if an application makes only read requests, which can be made on a replica cluster, and you want to keep that traffic routed to a local cluster.

You can also configure MySQL Router to allow or disallow read traffic to a cluster that has been marked as INVALIDATED. A cluster in this state is not currently functioning at all as part of the InnoDB ClusterSet deployment, and cannot receive writes. Although the cluster does not necessarily have any technical issues, its data is becoming stale. The default is that MySQL Router disallows reads as well as writes to an invalidated cluster (the drop_all setting), but you can choose to allow reads (the accept_ro setting).
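For example, you can allow read traffic to an invalidated cluster for a particular MySQL Router instance by setting the invalidated_cluster_policy routing option with the clusterSet.setRoutingOption() command, which is described later in this section (the router name Rome1 here is illustrative):

```
mysql-js> myclusterset.setRoutingOption('Rome1', 'invalidated_cluster_policy', 'accept_ro')
```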

To bootstrap MySQL Router against InnoDB ClusterSet, you need to use an InnoDB Cluster administrator account, or the InnoDB Cluster server configuration account, which also has the required permissions. MySQL Router then uses the MySQL Router administrator account to connect to the instances in the InnoDB ClusterSet deployment. You need to specify the user name and password for both these accounts during the bootstrap operation. See Section 8.3, “User Accounts for InnoDB ClusterSet” for more information.

Important

If you are using an existing InnoDB Cluster as the primary cluster in your InnoDB ClusterSet deployment, and you bootstrapped MySQL Router against that cluster already, follow the relevant parts of this process to bootstrap it again using the --force option against the InnoDB ClusterSet, then stop and restart MySQL Router. The settings in the MySQL Router instance's static configuration file need to be updated for InnoDB ClusterSet.

To integrate MySQL Router with an InnoDB ClusterSet deployment, follow this process:

  1. If you haven't already done so, install MySQL Router instances as appropriate for your topology. The recommended deployment of MySQL Router is on the same host as the client application. When using a sandbox deployment, everything is running on a single host, therefore you deploy MySQL Router to the same host. When using a production deployment, we recommend deploying one MySQL Router instance to each machine used to host one of your client applications. It is also possible to deploy MySQL Router to a common machine through which your application instances connect. For instructions, see Installing MySQL Router.

  2. Connect to any active member server instance in the InnoDB ClusterSet deployment, using an InnoDB Cluster administrator account or the InnoDB Cluster server configuration account, which also has the required permissions. Get the ClusterSet object using a dba.getClusterSet() or cluster.getClusterSet() command. It is important to fetch the ClusterSet object while connected using an appropriate account, because the default user account stored in the object is used for some operations, regardless of the account that you specify on the connection. For example:

    mysql-js> \connect admin2@127.0.0.1:3310
    ...
    mysql-js> myclusterset = dba.getClusterSet()
    <ClusterSet:testclusterset>

    In this example:

    • admin2@127.0.0.1:3310 is the URI-like connection string for any member server instance that is online in the cluster.

      The URI-like connection string consists of the following elements:

    • admin2 is the user name for the InnoDB Cluster administrator account.

    • 127.0.0.1:3310 is the host and port for the member server instance, as displayed by the cluster.status() command.

    • The returned ClusterSet object is assigned to the variable myclusterset.

  3. Verify that the InnoDB ClusterSet deployment is healthy, by issuing clusterSet.status() in MySQL Shell while connected to any member server in the cluster. For example:

    mysql-js> myclusterset.status({extended: 1})

    Use the extended output to see the detailed status for the clusters in the InnoDB ClusterSet topology. This gives you the host and port for each member server, so you can choose one to bootstrap MySQL Router against. See Section 8.6, “InnoDB ClusterSet Status and Topology” for more information.

  4. For each MySQL Router instance, run the mysqlrouter command in a suitable shell on the instance where MySQL Router is installed, to bootstrap MySQL Router against InnoDB ClusterSet. In this example, the force option is used because MySQL Router has previously been bootstrapped against the primary InnoDB Cluster:

    $> mysqlrouter --bootstrap icadmin@127.0.0.1:3310 --account=myRouter1 --name='Rome1' --force
    Please enter MySQL password for icadmin:
    # Bootstrapping system MySQL Router instance...
    
    Please enter MySQL password for myRouter1:
    - Creating account(s) (only those that are needed, if any)
    - Verifying account (using it to run SQL queries that would be run by Router)
    - Storing account in keyring
    - Creating configuration C:/Program Files/MySQL/MySQL Router 8.0/mysqlrouter.conf
    
    # MySQL Router configured for the ClusterSet 'testclusterset'
    
    After this MySQL Router has been started with the generated configuration
    
        > net start mysqlrouter
    or
        > C:\Program Files\MySQL\MySQL Router 8.0\bin\mysqlrouter.exe -c C:/Program Files/MySQL/MySQL Router 8.0/mysqlrouter.conf
    
    ClusterSet 'testclusterset' can be reached by connecting to:
    
    ## MySQL Classic protocol
    
    - Read/Write Connections: localhost:6446
    - Read/Only Connections:  localhost:6447
    
    ## MySQL X protocol
    
    - Read/Write Connections: localhost:6448
    - Read/Only Connections:  localhost:6449

    In this example:

    • icadmin@127.0.0.1:3310 is the URI-like connection string for any member server instance that is online in the InnoDB ClusterSet deployment. The instance can be in the primary cluster or in a replica cluster. If the instance is not the primary server in the primary cluster, the transaction is routed to the primary server, provided that the InnoDB ClusterSet deployment is healthy.

      The URI-like connection string consists of the following elements:

    • icadmin is the user name for an InnoDB Cluster administrator account that was set up using the cluster.setupAdminAccount() command on the primary cluster, then replicated to the replica clusters. The bootstrap operation prompts you for the password for the account. The password for an InnoDB Cluster administrator account is the same on all the server instances in the InnoDB ClusterSet deployment.

    • 127.0.0.1:3310 is the host and port for the member server instance, as displayed by the clusterSet.status() command.

    • myRouter1 is the user name for a MySQL Router administrator account that was set up using the cluster.setupRouterAccount() command on the primary cluster. The account is the same on all the server instances in the InnoDB ClusterSet deployment. The bootstrap operation prompts you for the password for the account.

    • --name can be used to assign a non-default name to the MySQL Router instance, to make it easily identifiable in the output from InnoDB ClusterSet status commands.

    • --force is required if you are bootstrapping MySQL Router again for an existing InnoDB Cluster where it was previously bootstrapped.

    MySQL Router connects to the server instance and retrieves the InnoDB ClusterSet metadata. The process is the same as when you bootstrap MySQL Router against an individual InnoDB Cluster. For more details about the process, see Section 6.10.3, “Deploying MySQL Router”.
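    The bootstrap operation writes a static configuration file for the instance. As a rough illustration of the result, for a ClusterSet named testclusterset (the section names, the generated account name, and the option values shown here are examples only; the exact contents depend on your MySQL Router version and bootstrap options):

    ```ini
    # Illustrative fragment of a generated mysqlrouter.conf (not authoritative)
    [metadata_cache:bootstrap]
    cluster_type=gr
    router_id=1
    user=mysql_router1_example
    metadata_cluster=testclusterset
    ttl=0.5

    # Classic protocol read/write routing, following the current primary cluster
    [routing:bootstrap_rw]
    bind_address=0.0.0.0
    bind_port=6446
    destinations=metadata-cache://testclusterset/?role=PRIMARY
    routing_strategy=first-available
    protocol=classic
    ```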

  5. After you bootstrap each MySQL Router instance, verify that it is now correctly bootstrapped against the InnoDB ClusterSet deployment, by issuing clusterSet.listRouters() in MySQL Shell while connected to any member server in the InnoDB ClusterSet. The command returns details of all the registered MySQL Router instances, or a router instance that you specify. For example:

    mysql-js> myclusterset.listRouters()
    {
        "domainName": "testclusterset",
        "routers": {
            "Rome1": {
                "hostname": "mymachine",
                "lastCheckIn": "2021-10-15 11:58:37",
                "roPort": 6447,
                "roXPort": 6449,
                "rwPort": 6446,
                "rwXPort": 6448,
                "targetCluster": "primary",
                "version": "8.0.27"
            },
            "Rome2": {
                "hostname": "mymachine2",
                "lastCheckIn": "2021-10-15 11:58:37",
                "roPort": 6447,
                "roXPort": 6449,
                "rwPort": 6446,
                "rwXPort": 6448,
                "targetCluster": "primary",
                "version": "8.0.27"
            }
        }
    }

    See MySQL Router Status for InnoDB ClusterSet for more information.

  6. To see the routing options that are set for each MySQL Router instance, and the global policy for the InnoDB ClusterSet deployment, issue clusterSet.routingOptions() in MySQL Shell while connected to any member server in the InnoDB ClusterSet deployment. For example:

    mysql-js> myclusterset.routingOptions()
    {
        "domainName": "testclusterset",
        "global": {
            "invalidated_cluster_policy": "drop_all",
            "target_cluster": "primary"
        },
        "routers": {
            "Rome1": {
                "target_cluster": "primary",
                "invalidated_cluster_policy": "accept_ro"
            },
            "Rome2": {}
        }
    }

    By default, a MySQL Router instance sends traffic to the primary cluster, and disallows both read and write traffic to a cluster that is marked as INVALIDATED. See MySQL Router Status for InnoDB ClusterSet for more information and an explanation of the output of the clusterSet.routingOptions() command.

  7. If you want to change the global routing policy or the routing policy for an individual MySQL Router instance, issue clusterSet.setRoutingOption() in MySQL Shell while connected to any member server in the InnoDB ClusterSet deployment. You can only set one routing option at a time. It takes a few seconds for a MySQL Router instance to pick up changes to a routing policy.

    For example, this command issued for the InnoDB ClusterSet myclusterset changes the target cluster for a MySQL Router instance to the cluster clustertwo:

    mysql-js> myclusterset.setRoutingOption('Rome1', 'target_cluster', 'clustertwo')
    Routing option 'target_cluster' successfully updated in router 'Rome1'.

    In this example, myclusterset is the variable for the ClusterSet object, Rome1 is the name of the MySQL Router instance, and clustertwo is the name of the specific cluster to target.

    To set the routing policy for the instance back to following the primary, issue this command:

    mysql-js> myclusterset.setRoutingOption('Rome1', 'target_cluster', 'primary')
    Routing option 'target_cluster' successfully updated in router 'Rome1'.

    To clear a routing policy for an instance, use the clusterSet.setRoutingOption() command to set the relevant policy to null. For example:

    mysql-js> myclusterset.setRoutingOption('Rome1', 'target_cluster', null)
    Routing option 'target_cluster' successfully updated in router 'Rome1'.

    To set the global routing policy, do not specify a MySQL Router instance, just the policy name and the setting. See MySQL Router Status for InnoDB ClusterSet for more information and an explanation of the available routing options.
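    For example, this command (issued without a router name) changes the global invalidated_cluster_policy so that, by default, all MySQL Router instances accept read traffic to an invalidated cluster:

    ```
    mysql-js> myclusterset.setRoutingOption('invalidated_cluster_policy', 'accept_ro')
    ```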

  8. When you are ready to start accepting connections, configure the applications to use the ports where MySQL Router is listening for traffic to the InnoDB ClusterSet deployment. Then start the MySQL Router instances using a suitable shell or script in the servers where MySQL Router is installed. See Starting MySQL Router.
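    For example, on a Linux host you can start an instance against the configuration file that the bootstrap operation generated (the path shown here is illustrative; a bootstrap run with the --directory option also generates start.sh and stop.sh scripts for this purpose):

    ```
    $> mysqlrouter --config /etc/mysqlrouter/mysqlrouter.conf &
    ```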