This section describes the known limitations of InnoDB Cluster. Because InnoDB Cluster uses Group Replication, you should also be aware of its limitations; see Group Replication Limitations.
If a session type is not specified when creating the global session, MySQL Shell provides automatic protocol detection which attempts to first create a NodeSession and if that fails it tries to create a ClassicSession. With an InnoDB cluster that consists of three server instances, where there is one read-write port and two read-only ports, this can cause MySQL Shell to only connect to one of the read-only instances. Therefore it is recommended to always specify the session type when creating the global session.
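For example, the session type can be made explicit by using a URI scheme when connecting (the account name, host, and ports below are illustrative placeholders, and the exact options available vary between MySQL Shell versions):

```shell
$> mysqlsh mysqlx://icadmin@ic-1:33060   # X Protocol session
$> mysqlsh mysql://icadmin@ic-1:3306     # classic protocol session
```

With an explicit scheme, MySQL Shell does not fall back between protocols, so it cannot silently end up connected to an unintended instance.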
When adding non-sandbox server instances (instances which you have configured manually rather than using dba.deploySandboxInstance()) to a cluster, MySQL Shell is not able to persist any configuration changes in the instance's configuration file. This leads to one or both of the following scenarios:
a. The Group Replication configuration is not persisted in the instance's configuration file, and upon restart the instance does not rejoin the cluster.
b. The instance is not valid for cluster usage. Although the instance can be verified with dba.checkInstanceConfiguration(), and MySQL Shell makes the required configuration changes in order to make the instance ready for cluster usage, those changes are not persisted in the configuration file and so are lost once a restart happens.
If a happens, the instance does not rejoin the cluster after a restart.
If b also happens, and you observe that the instance did not rejoin the cluster after a restart, you cannot use the recommended dba.rebootClusterFromCompleteOutage() in this situation to get the cluster back online. This is because the instance loses any configuration changes made by MySQL Shell, and because they were not persisted, the instance reverts to its previous state before being configured for the cluster. This causes Group Replication to stop responding, and eventually the command times out.
To avoid this problem it is strongly recommended to use dba.configureInstance() before adding instances to a cluster, in order to persist the configuration changes.
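As a sketch, the recommended sequence in MySQL Shell might look as follows (the icadmin account and the ic-2:3306 instance address are illustrative placeholders):

```shell
mysql-js> dba.configureInstance('icadmin@ic-2:3306')
mysql-js> var cluster = dba.getCluster()
mysql-js> cluster.addInstance('icadmin@ic-2:3306')
```

Because dba.configureInstance() persists the required settings in the instance's configuration file before the instance joins the cluster, the configuration survives a restart and the instance can rejoin automatically.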
The use of the --defaults-extra-file option to specify an option file is not supported by InnoDB Cluster server instances. InnoDB Cluster supports only a single option file per instance; no extra option files are supported. Therefore, for any operation that works with the instance's option file, the main option file must be specified. If you want to use multiple option files, you have to configure the files manually, make sure they are updated correctly according to the precedence rules for multiple option files, and ensure that the desired settings are not overwritten by options in an unrecognized extra option file.
Attempting to use instances with a host name that resolves to an IP address which does not match a real network interface fails with an error stating that This instance reports its own address as the hostname, which is not supported by the Group Replication communication layer. On Debian-based instances this means that addresses such as user@localhost cannot be used, because localhost resolves to a non-existent IP address (such as 127.0.1.1). This impacts sandbox deployments, which usually use local instances on a single machine.
A workaround is to configure the report_host system variable on each instance to use the actual IP address of your machine. Retrieve the IP address of your machine and add report_host = IP of your machine to the my.cnf file of each instance. You then need to restart the instances to apply the change.
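For example, the resulting my.cnf fragment might look like this (the IP address shown is a documentation placeholder; substitute your machine's actual address):

```ini
[mysqld]
# Report the machine's real IP address to Group Replication
report_host = 192.0.2.10
```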
When creating a cluster using dba.createCluster(), or adding an instance to an existing InnoDB Cluster by running <Cluster>.addInstance(), the following errors are logged to the MySQL error log:
2020-02-10T10:53:43.727246Z 12 [ERROR] [MY-011685] [Repl] Plugin group_replication reported: 'The group name option is mandatory'
2020-02-10T10:53:43.727292Z 12 [ERROR] [MY-011660] [Repl] Plugin group_replication reported: 'Unable to start Group Replication on boot'
These messages are harmless and relate to the way AdminAPI starts Group Replication.
When using a sandbox deployment, each sandbox instance uses a copy of the mysqld binary found in the $PATH, placed in the local mysql-sandboxes directory. If the version of mysqld changes, for example after an upgrade, sandboxes based on the previous version fail to start. This is because the sandbox binary is outdated compared to the dependencies found under the basedir. Sandbox instances are not designed for production; they are considered transient and are not supported for upgrade.
A workaround for this issue is to manually copy the upgraded mysqld binary into the bin directory of each sandbox, and then start the sandbox by issuing dba.startSandboxInstance(). The operation fails with a timeout, and the error log contains:
2020-03-26T11:43:12.969131Z 5 [System] [MY-013381] [Server] Server upgrade from '80019' to '80020' started.
2020-03-26T11:44:03.543082Z 5 [System] [MY-013381] [Server] Server upgrade from '80019' to '80020' completed.
Although the operation seems to fail with a timeout, the sandbox has started successfully.
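As a sketch, the workaround above might look like this for a sandbox listening on port 3310 (the mysqld path, sandbox directory, and port are illustrative and depend on your system):

```shell
# Copy the upgraded mysqld binary into the sandbox's bin directory
$> cp /usr/sbin/mysqld ~/mysql-sandboxes/3310/bin/mysqld

# Then, in MySQL Shell, start the sandbox; expect the call to time out
# even though the sandbox comes up successfully
mysql-js> dba.startSandboxInstance(3310)
```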
InnoDB Cluster does not manage manually configured asynchronous replication channels. Group Replication and AdminAPI do not ensure that the asynchronous replication is active on the primary only, and state is not replicated across instances. This can lead to various scenarios where replication no longer works, as well as potentially causing a split brain. Therefore, replication between one InnoDB Cluster and another is also not supported.