MySQL 5.7 Reference Manual  /  InnoDB Cluster  /  Production Deployment of InnoDB Cluster

20.4 Production Deployment of InnoDB Cluster

When working in a production environment, the MySQL server instances which make up an InnoDB cluster run on multiple host machines as part of a network, rather than on a single machine as described in Section 20.3, “Sandbox Deployment of InnoDB Cluster”. Before proceeding with these instructions, you must install the required software on each machine that you intend to add to your cluster as a server instance; see Section 20.2, “Creating an InnoDB Cluster”.

The following diagram illustrates the scenario you work with in this section:

Figure 20.2 Production Deployment


Important

Unlike a sandbox deployment, where all instances are deployed locally to one machine, for a production deployment you must connect to each machine and run MySQL Shell locally before issuing dba.configureLocalInstance() on the instance. This ensures that any configuration changes are persisted in the instance's option file. This also requires that you have access to the server and the required permissions to execute MySQL Shell.

To pass a server's connection information to AdminAPI, use URI type strings. See Section 18.2.2.1, “Connecting using a URI String” for more information.
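A URI type string combines the user, host, and port of the target server. As a minimal sketch (in Python, using the example account ic and host ic-1 from this section; not part of AdminAPI), such a string can be assembled as follows:

```python
# Assemble a MySQL URI type string of the form user@host:port.
# The account and host values below are the examples used in this
# section, not defaults.
def build_uri(user, host, port=3306):
    return "{0}@{1}:{2}".format(user, host, port)

print(build_uri("ic", "ic-1"))         # ic@ic-1:3306
print(build_uri("root", "localhost"))  # root@localhost:3306
```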

User Privileges

The user account used to administer an instance does not have to be the root account; however, the user must be assigned full read and write privileges on the InnoDB cluster metadata tables, in addition to full MySQL administrator privileges (SUPER, GRANT OPTION, CREATE, DROP, and so on). To give the user your_user the privileges needed to administer an InnoDB cluster, issue:

GRANT ALL PRIVILEGES ON mysql_innodb_cluster_metadata.* TO your_user@'%' WITH GRANT OPTION;
GRANT RELOAD, SHUTDOWN, PROCESS, FILE, SUPER, REPLICATION SLAVE, REPLICATION CLIENT,
      CREATE USER ON *.* TO your_user@'%' WITH GRANT OPTION;
GRANT SELECT ON performance_schema.* TO your_user@'%' WITH GRANT OPTION;
GRANT SELECT ON *.* TO your_user@'%' WITH GRANT OPTION;
GRANT SELECT, INSERT, UPDATE, DELETE ON mysql.* TO your_user@'%' WITH GRANT OPTION;

If only read operations are needed (for example, for monitoring purposes), an account with more restricted privileges can be used. To give the user your_user the privileges needed to monitor an InnoDB cluster, issue:

GRANT SELECT ON mysql_innodb_cluster_metadata.* TO your_user@'%';
GRANT SELECT ON performance_schema.global_status TO your_user@'%';
GRANT SELECT ON performance_schema.replication_applier_configuration TO your_user@'%';
GRANT SELECT ON performance_schema.replication_applier_status TO your_user@'%';
GRANT SELECT ON performance_schema.replication_applier_status_by_coordinator TO your_user@'%';
GRANT SELECT ON performance_schema.replication_applier_status_by_worker TO your_user@'%';
GRANT SELECT ON performance_schema.replication_connection_configuration TO your_user@'%';
GRANT SELECT ON performance_schema.replication_connection_status TO your_user@'%';
GRANT SELECT ON performance_schema.replication_group_member_stats TO your_user@'%';
GRANT SELECT ON performance_schema.replication_group_members TO your_user@'%';

In this procedure the user ic is used in examples.

Configuring Hostname

The production instances which make up a cluster run on separate machines; therefore, each machine must have a unique host name and be able to resolve the host names of the other machines which run server instances in the cluster. If this is not the case, you can:

  • configure each machine to map the IP address of each of the other machines to a host name. See your operating system documentation for details. This is the recommended solution.

  • set up a DNS service

  • configure the report_host variable in the MySQL configuration of each instance

In this procedure the host name ic-number is used in examples.
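For the hosts-file approach, the mapping might look like the following fragment, using the example host names from this procedure (the addresses shown are placeholders; substitute the actual addresses of your machines):

```
# /etc/hosts on each machine in the cluster (example addresses)
192.0.2.1   ic-1
192.0.2.2   ic-2
192.0.2.3   ic-3
```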

Verbose Logging

When working with a production deployment, it can be useful to configure verbose logging for MySQL Shell; the information in the log can help you to find and resolve any issues that occur while you are preparing server instances to work as part of an InnoDB cluster. To start MySQL Shell with a verbose logging level, use the --log-level option:

shell> mysqlsh --log-level=DEBUG3

The DEBUG3 level is recommended; see --log-level for more information. When DEBUG3 is set, the MySQL Shell log file contains lines such as Debug: execute_sql( ... ), which show the SQL queries that are executed as part of each AdminAPI call. On Unix-based systems the log file is located at ~/.mysqlsh/mysqlsh.log; on Microsoft Windows systems it is located at %APPDATA%\MySQL\mysqlsh\mysqlsh.log. See Section 18.5, “MySQL Shell Application Log” for more information.

In addition to setting the MySQL Shell log level, you can configure the amount of output AdminAPI provides in MySQL Shell after each call to the API. To increase the amount of AdminAPI output, in MySQL Shell issue:

mysql-js> dba.verbose=2

This enables the maximum output from AdminAPI calls. The available levels of output are:

  • 0 or OFF is the default. This provides minimal output and is the recommended level when not troubleshooting.

  • 1 or ON adds verbose output from each call to the AdminAPI.

  • 2 adds debug output to the verbose output providing full information about what each call to AdminAPI executes.

Checking Instance Configuration

Before creating a production deployment from server instances, you need to check that MySQL on each instance is correctly configured by using the dba.checkInstanceConfiguration() function. The following example demonstrates issuing this in a running MySQL Shell:

mysql-js> dba.checkInstanceConfiguration('ic@ic-1:3306')

Please provide the password for 'ic@ic-1:3306':
Validating instance...

The instance 'ic-1:3306' is not valid for Cluster usage.

The following issues were encountered:

- Some configuration options need to be fixed.

+----------------------------------+---------------+----------------+--------------------------------------------------+
| Variable                         | Current Value | Required Value | Note                                             |
+----------------------------------+---------------+----------------+--------------------------------------------------+
| binlog_checksum                  | CRC32         | NONE           | Update the server variable or restart the server |
| enforce_gtid_consistency         | OFF           | ON             | Restart the server                               |
| gtid_mode                        | OFF           | ON             | Restart the server                               |
| log_bin                          | 0             | 1              | Restart the server                               |
| log_slave_updates                | 0             | ON             | Restart the server                               |
| master_info_repository           | FILE          | TABLE          | Restart the server                               |
| relay_log_info_repository        | FILE          | TABLE          | Restart the server                               |
| transaction_write_set_extraction | OFF           | XXHASH64       | Restart the server                               |
+----------------------------------+---------------+----------------+--------------------------------------------------+


Please fix these issues, restart the server and try again.

{
  "config_errors": [
    {
      "action": "server_update",
      "current": "CRC32",
      "option": "binlog_checksum",
      "required": "NONE"
    },
    {
      "action": "restart",
      "current": "OFF",
      "option": "enforce_gtid_consistency",
      "required": "ON"
    },
    {
      "action": "restart",
      "current": "OFF",
      "option": "gtid_mode",
      "required": "ON"
    },
    {
      "action": "restart",
      "current": "0",
      "option": "log_bin",
      "required": "1"
    },
    {
      "action": "restart",
      "current": "0",
      "option": "log_slave_updates",
      "required": "ON"
    },
    {
      "action": "restart",
      "current": "FILE",
      "option": "master_info_repository",
      "required": "TABLE"
    },
    {
      "action": "restart",
      "current": "FILE",
      "option": "relay_log_info_repository",
      "required": "TABLE"
    },
    {
      "action": "restart",
      "current": "OFF",
      "option": "transaction_write_set_extraction",
      "required": "XXHASH64"
    }
  ],
  "errors": [],
  "restart_required": true,
  "status": "error"
}
mysql-js>

Repeat this process for each server instance that you plan to use as part of your cluster. The reports generated after running dba.checkInstanceConfiguration() provide information about any configuration changes required before you can proceed. The restart_required field in the final part of the report tells you whether MySQL on the instance requires a restart to detect any change made to the configuration file.

Configuring the Instance

If configuration issues have been identified in the report generated by running dba.checkInstanceConfiguration() against the instance, connect to the machine and reconfigure the server instance. AdminAPI provides the dba.configureLocalInstance() function that finds the MySQL server's configuration file and modifies it to ensure that the instance is correctly configured for InnoDB cluster. Alternatively make the changes to the instance's option file manually based on the information in the report. Regardless of the way you make the configuration changes, you might have to restart MySQL to ensure the configuration changes are detected.
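If you choose to edit the option file manually, the settings flagged in the report shown earlier correspond to entries such as the following in the [mysqld] section. This is an illustrative fragment based on that report; the location of your option file and its existing contents will differ:

```
[mysqld]
binlog_checksum                  = NONE
enforce_gtid_consistency         = ON
gtid_mode                        = ON
log_bin
log_slave_updates                = ON
master_info_repository           = TABLE
relay_log_info_repository        = TABLE
transaction_write_set_extraction = XXHASH64
```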

The recommended method is to log in to the remote machine, run MySQL Shell as the root user and then connect to the local MySQL server:

shell> sudo -i mysqlsh --log-level=DEBUG3

The dba.configureLocalInstance() method verifies that a suitable user is available for cluster usage, which is used for connections between members of the cluster. The root user cannot log in remotely by default, so you have three options to continue with the configuration: enable remote connections for the root user, create a new user, or proceed without either of these options. The following example demonstrates the second option: creating a new user for cluster usage. The user names accepted follow the standard MySQL account name format; see Section 6.2.3, “Specifying Account Names”.

mysql-js> dba.configureLocalInstance('root@localhost:3306')

Please provide the password for 'root@localhost:3306':

Please specify the path to the MySQL configuration file: /etc/mysql/mysql.conf.d/mysqld.cnf
Validating instance...

The configuration has been updated but it is required to restart the server.
{
  "config_errors": [
    {
      "action": "restart",
      "current": "OFF",
      "option": "enforce_gtid_consistency",
      "required": "ON"
    },
    {
      "action": "restart",
      "current": "OFF",
      "option": "gtid_mode",
      "required": "ON"
    },
    {
      "action": "restart",
      "current": "0",
      "option": "log_bin",
      "required": "1"
    },
    {
      "action": "restart",
      "current": "0",
      "option": "log_slave_updates",
      "required": "ON"
    },
    {
      "action": "restart",
      "current": "FILE",
      "option": "master_info_repository",
      "required": "TABLE"
    },
    {
      "action": "restart",
      "current": "FILE",
      "option": "relay_log_info_repository",
      "required": "TABLE"
    },
    {
      "action": "restart",
      "current": "OFF",
      "option": "transaction_write_set_extraction",
      "required": "XXHASH64"
    }
  ],
  "errors": [],
  "restart_required": true,
  "status": "error"
}
mysql-js>

As with dba.checkInstanceConfiguration(), the configuration requirements are identified, but this time the chosen configuration file is modified. For the configuration changes to take effect you might need to restart the MySQL Server.

The dba.configureLocalInstance() function also accepts the clusterAdmin and clusterAdminPassword options, which enable you to configure the cluster user and password when calling the function. clusterAdmin supports identifiers or strings for the user name and host name; if unquoted, the input is assumed to be a string. For example:

mysql-js> dba.configureLocalInstance('ic@ic-1:3306',
	      {clusterAdmin: 'icadmin@ic-1%', clusterAdminPassword: 'secret'});

This user is granted the privileges for an administrative user described at User Privileges.

Creating the Cluster

Once you have prepared your instances, use the dba.createCluster() function to create the cluster. The machine on which you are running MySQL Shell is used as the seed instance for the cluster. The seed instance is replicated to the other instances which you add to the cluster, making them replicas of the seed instance. Log in to the seed instance and run MySQL Shell locally.

shell> mysqlsh --uri ic@ic-1:3306

Creating a Session to 'ic@ic-1:3306'
Enter password: *********
Classic Session successfully established. No default schema selected.

Use the dba.createCluster(name) function to create the cluster and assign the returned cluster to a variable called cluster:

mysql-js> var cluster = dba.createCluster('devCluster')

      A new InnoDB cluster will be created on instance 'ic@ic-1:3306'.

      Creating InnoDB cluster 'devCluster' on 'ic@ic-1:3306'...
      Adding Seed Instance...

      Cluster successfully created. Use Cluster.addInstance() to add MySQL instances.
      At least 3 instances are needed for the cluster to be able to withstand up to
      one server failure.
Note

If you encounter an error related to metadata being inaccessible you might have the loopback network interface configured. For correct InnoDB cluster usage disable the loopback interface.

To check the cluster has been created, use the cluster instance's status() function. See Checking the InnoDB Cluster Status.

Use the cluster.addInstance(instance) function to add more instances to the cluster, where instance is a URI type string to connect to the local instance. The instances must have been configured for cluster usage. You need a minimum of three instances in the cluster to make it tolerant to the failure of one instance. Adding further instances increases the tolerance to failure of an instance. To add an instance to the cluster issue:

mysql-js> cluster.addInstance('ic@ic-2:3306');
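The tolerance figures above follow from the majority (quorum) requirement of Group Replication: a cluster remains available as long as a majority of its members is online. A minimal sketch of the arithmetic, assuming the simple-majority rule (an illustration, not part of AdminAPI):

```python
# Number of member failures a cluster of `size` instances can tolerate
# while a majority (quorum) of members remains online. This mirrors the
# guidance above: three instances tolerate one failure.
def tolerable_failures(size):
    return (size - 1) // 2

for size in (3, 5, 7):
    print("{0} instances tolerate {1} failure(s)".format(
        size, tolerable_failures(size)))
```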

To verify the instance has been added, use the cluster instance's status() function.

Important

At this stage, the server instances have been added to the cluster, but the changes to the InnoDB cluster metadata have only been made on the instance which you are currently connected to. To make the configuration changes persistent for all instances in the cluster, you must connect to each instance and run dba.configureLocalInstance() locally on each instance you have added. This is essential to ensure that instances can rejoin the cluster if they leave it, for example after a restart.

To persist the InnoDB cluster metadata for all instances, log in to each instance that you added to the cluster and run MySQL Shell locally.

shell> mysqlsh

Use the \connect command to log in to the MySQL server, then execute the dba.configureLocalInstance('instance') function, where instance is a URI type string used to connect to the local instance. For example:

mysql-js> dba.configureLocalInstance('ic@ic-2:3306')

Repeat this process for each server instance that you added to the cluster. Similarly, if you modify the cluster structure, for example by changing the number of instances, you need to repeat this process for each server instance so that the InnoDB cluster metadata is updated accordingly on each instance in the cluster.

Once you have your cluster deployed you can configure MySQL Router to provide high availability, see Section 20.5, “Using MySQL Router with InnoDB Cluster”.
