MySQL Router works by sitting in between applications and MySQL servers. Applications connect to Router as if it were an ordinary MySQL server. Whenever an application connects, Router chooses a suitable MySQL server from the pool of candidates it knows about and connects to it. From that moment on, Router forwards all network traffic between the application and that MySQL server, including the responses coming back from it.
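For example, an application needs nothing more than a normal MySQL connection pointed at Router's address. Port 6446 is used here as the classic-protocol read-write port, the conventional default that bootstrapping sets up; treat the host and account as placeholders:

    shell> mysql -u app_user -p -h 127.0.0.1 -P 6446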
MySQL Router keeps a cached list of the online MySQL servers, that is, the topology and state of the configured InnoDB cluster. Initially, the list is loaded from Router's configuration file when Router is started. This list was generated from the InnoDB cluster servers when Router was bootstrapped using the --bootstrap option.
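As an illustration of what bootstrapping produces, a bootstrap run and an excerpt of the routing configuration it typically generates are sketched below; the host name, directory, account, and cluster name are placeholders:

    shell> mysqlrouter --bootstrap root@ic-1:3306 --directory /opt/myrouter

    # Excerpt of the generated mysqlrouter.conf (values illustrative)
    [metadata_cache:mycluster]
    router_id=1
    user=mysql_router1_abc123
    metadata_cluster=mycluster
    ttl=0.5

    [routing:mycluster_rw]
    bind_address=0.0.0.0
    bind_port=6446
    destinations=metadata-cache://mycluster/?role=PRIMARY
    routing_strategy=first-available
    protocol=classic

The destinations line is what ties a routing port to the metadata cache: instead of a fixed host list, candidate servers are resolved at runtime from the cached cluster state.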
To keep the cache up to date, the metadata cache component holds an open connection to one of the InnoDB cluster servers that stores the metadata, querying both the metadata database and live state information from the MySQL server's Performance Schema. The cluster metadata changes whenever the InnoDB cluster is modified, such as when a MySQL server is added or removed using MySQL Shell, and the performance_schema tables are updated in real time by the MySQL server's Group Replication plugin whenever a cluster state change is detected, for example when one of the MySQL servers unexpectedly exits.
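As a sketch of the kind of information involved, both sources can be inspected manually with SQL. The metadata schema name, mysql_innodb_cluster_metadata, is the one MySQL Shell creates; the exact table layout varies between metadata versions:

    -- Live group membership maintained by the Group Replication plugin
    SELECT member_host, member_port, member_state
    FROM performance_schema.replication_group_members;

    -- Topology recorded in the cluster metadata schema
    SELECT instance_name, addresses
    FROM mysql_innodb_cluster_metadata.instances;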
When Router detects that the MySQL server it is connected to has crashed, for example because the metadata cache loses its connection and cannot reconnect, it attempts to connect to a different MySQL server and fetches the metadata and InnoDB cluster state from that server instead.
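Depending on the Router version, the list of candidate metadata servers may be persisted in a dynamic state file rather than in the static configuration, so the current cluster membership survives Router restarts. An illustrative excerpt, with assumed host names and a placeholder group identifier, looks like this:

    {
        "metadata-cache": {
            "group-replication-id": "00000000-0000-0000-0000-000000000000",
            "cluster-metadata-servers": [
                "mysql://ic-1:3306",
                "mysql://ic-2:3306",
                "mysql://ic-3:3306"
            ]
        },
        "version": "1.0.0"
    }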
Application connections to a MySQL server that crashes are automatically closed. The application must then reconnect to Router, which redirects it to an online MySQL server.
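A reconnect is all the application needs. Repeating the earlier connection before and after a failover would show Router transparently choosing a different server; the host names and port are again assumptions:

    shell> mysql -u app_user -p -h 127.0.0.1 -P 6446 -e "SELECT @@hostname"
    +------------+
    | @@hostname |
    +------------+
    | ic-1       |
    +------------+

After ic-1 fails and the application reconnects, the same command reaches a different, online server:

    shell> mysql -u app_user -p -h 127.0.0.1 -P 6446 -e "SELECT @@hostname"
    +------------+
    | @@hostname |
    +------------+
    | ic-2       |
    +------------+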