- 10.11.1 The replication_connection_configuration Table
- 10.11.2 The replication_connection_status Table
- 10.11.3 The replication_applier_configuration Table
- 10.11.4 The replication_applier_status Table
- 10.11.5 The replication_applier_status_by_coordinator Table
- 10.11.6 The replication_applier_status_by_worker Table
- 10.11.7 The replication_group_members Table
- 10.11.8 The replication_group_member_stats Table
The Performance Schema provides tables that expose replication
information. This is similar to the information available from
the SHOW SLAVE STATUS statement,
but representation in table form is more accessible and has
usability benefits:
- SHOW SLAVE STATUS output is useful for visual inspection, but not so much for programmatic use. By contrast, using the Performance Schema tables, information about slave status can be searched using general SELECT queries, including complex WHERE conditions, joins, and so forth. (A query sketch illustrating this appears after this list.)
- Query results can be saved in tables for further analysis, or assigned to variables and thus used in stored procedures.
- The replication tables provide better diagnostic information. For multithreaded slave operation, SHOW SLAVE STATUS reports all coordinator and worker thread errors using the Last_SQL_Errno and Last_SQL_Error fields, so only the most recent of those errors is visible and information can be lost. The replication tables store errors on a per-thread basis without loss of information.
- The last seen transaction is visible in the replication tables on a per-worker basis. This is information not available from SHOW SLAVE STATUS.
- Developers familiar with the Performance Schema interface can extend the replication tables to provide additional information by adding rows to the tables.
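As a minimal sketch of the first two points, the following queries locate applier errors with an ordinary WHERE condition and capture a status value in a user variable for later use in a stored procedure. The tables and columns are those of the Performance Schema replication tables; the filter and the variable name are only illustrative:

```sql
-- Find applier workers that have reported an error (multithreaded slave).
SELECT CHANNEL_NAME, THREAD_ID, LAST_ERROR_NUMBER, LAST_ERROR_MESSAGE
FROM performance_schema.replication_applier_status_by_worker
WHERE LAST_ERROR_NUMBER <> 0;

-- Capture the connection state in a user variable (illustrative name).
SELECT SERVICE_STATE INTO @io_state
FROM performance_schema.replication_connection_status
LIMIT 1;
```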
The Performance Schema provides the following replication-related tables (a query that lists them all appears below):

- Tables that contain information about the connection of the slave server to the master server:
  - replication_connection_configuration
  - replication_connection_status
- Tables that contain general (not thread-specific) information about the transaction applier:
  - replication_applier_configuration
  - replication_applier_status
- Tables that contain information about specific threads responsible for applying transactions received from the master:
  - replication_applier_status_by_coordinator
  - replication_applier_status_by_worker
- Tables that contain information about replication group members:
  - replication_group_members
  - replication_group_member_stats
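To see which of these tables are present on a given server, they can be listed from INFORMATION_SCHEMA. This is a generic sketch, not tied to any one table:

```sql
-- List the replication-related Performance Schema tables.
SELECT TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA = 'performance_schema'
  AND TABLE_NAME LIKE 'replication\_%'
ORDER BY TABLE_NAME;
```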
The following sections describe each replication table in more
detail, including the correspondence between the columns produced
by SHOW SLAVE STATUS and the replication table columns in which
the same information appears.
The remainder of this introduction to the replication tables
describes how the Performance Schema populates them and which
SHOW SLAVE STATUS fields are
not represented in the tables.
The Performance Schema populates the replication tables as follows:
- Prior to execution of CHANGE MASTER TO, the tables are empty.
- After CHANGE MASTER TO, the configuration parameters can be seen in the tables. At this time, there are no active slave threads, so the SERVICE_STATE columns have a value of OFF.
- After START SLAVE, non-NULL THREAD_ID values can be seen. Threads that are idle or active have a SERVICE_STATE value of ON. The thread that connects to the master server has a value of CONNECTING while it establishes the connection, and ON thereafter as long as the connection lasts.
- After STOP SLAVE, the SERVICE_STATE columns for threads that no longer exist have a value of OFF.
- The tables are preserved after STOP SLAVE or threads dying due to an error.
- The replication_applier_status_by_worker table is nonempty only when the slave is operating in multithreaded mode. That is, if the slave_parallel_workers system variable is greater than 0, this table is populated when START SLAVE is executed, and the number of rows shows the number of workers. (A query sketch follows this list.)
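On a multithreaded slave, the per-worker rows can be inspected as in the following sketch; LAST_SEEN_TRANSACTION is the per-worker transaction column in the MySQL 5.7 version of this table:

```sql
-- One row per applier worker when slave_parallel_workers > 0.
SELECT WORKER_ID, THREAD_ID, SERVICE_STATE, LAST_SEEN_TRANSACTION
FROM performance_schema.replication_applier_status_by_worker;
```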
The information in the Performance Schema replication tables
differs somewhat from the information available from
SHOW SLAVE STATUS because the
tables are oriented toward use of global transaction identifiers
(GTIDs), not file names and positions, and they represent server
UUID values, not server ID values. Due to these differences,
several SHOW SLAVE STATUS columns
are not preserved in the Performance Schema replication tables,
or are represented a different way:
- The following fields refer to file names and positions and are not preserved: Master_Log_File, Read_Master_Log_Pos, Relay_Log_File, Relay_Log_Pos, Relay_Master_Log_File, Exec_Master_Log_Pos, Until_Condition, Until_Log_File, Until_Log_Pos
- The Master_Info_File field is not preserved. It refers to the master.info file, which has been superseded by crash-safe slave tables.
- The Skip_Counter field is based on event counts, not GTIDs, and is not preserved.
- The Last_Errno and Last_Error fields are aliases for Last_SQL_Errno and Last_SQL_Error, so they are not preserved. In the Performance Schema, this error information is available in the LAST_ERROR_NUMBER and LAST_ERROR_MESSAGE columns of the replication_applier_status_by_coordinator table (and replication_applier_status_by_worker if the slave is multithreaded). Those tables provide more specific per-thread error information than is available from Last_Errno and Last_Error.
- Fields that provide information about command-line filtering options are not preserved: Replicate_Do_DB, Replicate_Ignore_DB, Replicate_Do_Table, Replicate_Ignore_Table, Replicate_Wild_Do_Table, Replicate_Wild_Ignore_Table
- The Slave_IO_State and Slave_SQL_Running_State fields are not preserved. If needed, these values can be obtained from the process list by using the THREAD_ID column of the appropriate replication table and joining it with the ID column in the INFORMATION_SCHEMA PROCESSLIST table to select the STATE column of the latter table. (A sketch of this join follows the list.)
- The Executed_Gtid_Set field can show a large set with a great deal of text. Instead, the Performance Schema tables show GTIDs of transactions that are currently being applied by the slave. Alternatively, the set of executed GTIDs can be obtained from the value of the gtid_executed system variable.
- The Seconds_Behind_Master and Relay_Log_Space fields are in to-be-decided status and are not preserved.
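Two of the alternatives above can be expressed as queries. The join below is a sketch that goes through performance_schema.threads, whose PROCESSLIST_ID column maps a Performance Schema THREAD_ID to a process list ID; the second statement reads the full executed GTID set from the gtid_executed system variable:

```sql
-- Recover the applier thread state (formerly Slave_SQL_Running_State).
SELECT r.CHANNEL_NAME, pl.STATE
FROM performance_schema.replication_applier_status_by_coordinator AS r
JOIN performance_schema.threads AS t ON t.THREAD_ID = r.THREAD_ID
JOIN INFORMATION_SCHEMA.PROCESSLIST AS pl ON pl.ID = t.PROCESSLIST_ID;

-- Full set of executed GTIDs (formerly the Executed_Gtid_Set field).
SELECT @@GLOBAL.gtid_executed;
```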
As of MySQL version 5.7.5, the following status variables
(previously monitored using SHOW
STATUS) were moved to the Performance Schema
replication tables: Slave_heartbeat_period, Slave_last_heartbeat, Slave_received_heartbeats, Slave_retried_transactions, and Slave_running.
These status variables are now only relevant when a single replication channel is being used because they only report the status of the default replication channel. When multiple replication channels exist, use the Performance Schema replication tables described in this section, which report these variables for each existing replication channel.
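For example, heartbeat-related values that SHOW STATUS formerly reported globally are now visible per channel; the column names below are those of the 5.7 replication_connection_status table:

```sql
-- Heartbeat status per replication channel.
SELECT CHANNEL_NAME, COUNT_RECEIVED_HEARTBEATS, LAST_HEARTBEAT_TIMESTAMP
FROM performance_schema.replication_connection_status;
```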
The first column of the replication Performance Schema tables is
CHANNEL_NAME. This enables the tables to be
viewed per replication channel. In a non-multisource replication
setup there is a single default replication channel. When you
are using multiple replication channels on a slave, you can
filter the tables per replication channel to monitor a specific
replication channel. See Replication Channels
and Multi-Source Replication Monitoring for more information.
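For example, to monitor one channel in a multi-source setup, filter on CHANNEL_NAME; the channel name 'source_1' here is only a placeholder (the default channel's name is the empty string):

```sql
-- Status of a single replication channel; 'source_1' is a placeholder name.
SELECT *
FROM performance_schema.replication_connection_status
WHERE CHANNEL_NAME = 'source_1';
```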