MySQL Router works by sitting in between applications and MySQL servers. Applications connect to Router normally as if they were connecting to an ordinary MySQL server. Whenever an application connects to Router, Router chooses a suitable MySQL server from the pool of candidates that it knows about, and then connects to it. From that moment on, Router forwards all network traffic between the application and MySQL, including responses coming back from it.
MySQL Router keeps a cached list of the online MySQL servers, that is,
the topology and state of the configured InnoDB Cluster. Initially,
the list is loaded from Router's configuration file when Router is
started. This list was generated from the InnoDB Cluster's servers
when Router was bootstrapped using the --bootstrap option.
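Bootstrapping writes this server list into Router's configuration file. A trimmed sketch of what such a generated configuration can look like (the cluster name myCluster and the user account are placeholders, and real generated files contain additional sections and options):

```ini
[metadata_cache:myCluster]
cluster_type=gr
router_id=1
user=mysql_router1_account
metadata_cluster=myCluster
ttl=0.5

[routing:myCluster_rw]
bind_address=0.0.0.0
bind_port=6446
destinations=metadata-cache://myCluster/?role=PRIMARY
routing_strategy=first-available
protocol=classic
```

The destinations line is the key part: rather than a fixed host list, it tells Router to resolve candidate servers from the metadata cache at connection time.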
To keep the cache updated, the metadata cache component keeps an open connection to one of the InnoDB Cluster servers that stores the metadata, querying both the metadata schema and live state information from MySQL's Performance Schema. The cluster metadata changes whenever the InnoDB Cluster is modified, such as when a MySQL server is added or removed using MySQL Shell, and the performance_schema tables are updated in real time by the server's Group Replication plugin whenever a cluster state change is detected, for example when one of the MySQL servers shuts down unexpectedly.
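The live state information comes from the Performance Schema tables maintained by the Group Replication plugin. As an illustration (not necessarily the exact query Router issues), the current membership and state of a cluster can be inspected like this:

```sql
-- Live cluster state as exposed by the Group Replication plugin.
-- MEMBER_STATE is ONLINE, RECOVERING, OFFLINE, ERROR, or UNREACHABLE.
SELECT MEMBER_HOST, MEMBER_PORT, MEMBER_STATE
FROM performance_schema.replication_group_members;
```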
When Router detects that a connected MySQL server has shut down, for example because the metadata cache has lost its connection and cannot reconnect, it attempts to connect to a different MySQL server and fetches the metadata and InnoDB Cluster state from that server instead.
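The failover behavior amounts to walking the cached candidate list until a reachable server answers. A minimal sketch of that idea (this is illustrative Python, not Router's actual C++ implementation; the function and parameter names are made up):

```python
def refresh_metadata(candidates, fetch):
    """Try each cached candidate server until one returns metadata.

    `candidates` is the cached server list; `fetch` is a callable that
    returns metadata for a reachable server or raises ConnectionError.
    Returns the server that answered together with its metadata.
    """
    for server in candidates:
        try:
            return server, fetch(server)
        except ConnectionError:
            continue  # server is down; fall through to the next candidate
    raise ConnectionError("no metadata server reachable")
```

Because the candidate list itself comes from the last successful metadata refresh, Router can survive the loss of its current metadata server as long as at least one other cluster member remains reachable.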
Dropping the cluster metadata using MySQL Shell, for example with
dba.dropMetadataSchema(), causes Router to drop all
current connections and refuse new ones. This causes a full outage.
Application connections to a MySQL server that shuts down are closed automatically. The applications must then reconnect to Router, which redirects them to an online MySQL server.
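On the application side, this means connection loss is handled by simply dialing Router again. A hedged sketch of such a retry helper (the names `connect_with_retry` and `connect` are hypothetical, standing in for whatever connector call the application uses):

```python
import time

def connect_with_retry(connect, attempts=3, delay=0.1):
    """Dial Router again after a dropped connection.

    `connect` opens a session to Router (e.g. via a MySQL connector) or
    raises ConnectionError. Router routes each new connection to a server
    that is currently online, so a plain retry is usually sufficient.
    """
    last_error = None
    for _ in range(attempts):
        try:
            return connect()
        except ConnectionError as exc:
            last_error = exc
            time.sleep(delay)  # brief pause before the next attempt
    raise last_error
```

In practice many connectors and pools offer built-in reconnect options, but the principle is the same: the application always targets Router's address, never an individual MySQL server.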