With the new Group Replication Beta release (0.8 for MySQL 5.7.14) available for download on labs.mysql.com, I expect more and more people to begin engaging in various forms of proof-of-concept attempts. So I wanted to create a “quick start guide” (to replace this outdated getting started post) focused on an example/test Linux environment. It covers how to get started, shows what a full working test setup can look like, and includes some best practice and operations advice along the way.
Note: While I use Linux in the example walkthrough, Group Replication will be supported on all MySQL 5.7 supported platforms. For the 0.8 Beta release we’ve provided a common subset of packages for testing (all 64 bit): Ubuntu 16.04, EL6, EL7, FreeBSD 10, Solaris 11 (SPARC and x86), and Windows.
What is Group Replication?
Group Replication is a plugin for the standard MySQL 5.7 Server. This plugin provides virtually synchronous replication, with built-in conflict detection/handling and consistency guarantees, all of which support multi-master, write-anywhere usage. It allows you to move from a stand-alone instance of MySQL, which is a single point of failure, to a natively distributed, highly available MySQL service (the Group Replication set) made up of N MySQL instances (the group members). Individual machines and/or MySQL instances can then fail or be taken offline for maintenance while the distributed MySQL service continues to operate and handle application traffic.
Note: Group Replication is all about providing highly available replica sets; data and writes are duplicated on each member in the group. For scaling beyond what a single system can bear, you will need an orchestration and sharding framework built around N Group Replication sets, where each replica set maintains and manages a given shard or partition of your total dataset. This type of setup—often called a “sharded cluster” today—will allow you to scale reads and writes linearly and without limit. Stay tuned for future announcements on related work!
Why Would I Use Group Replication?
If MySQL downtime would have a serious impact on your business and its operations, then you should use Group Replication. In practice, this means that most user-facing production systems should leverage Group Replication as it provides a native means for making a MySQL database highly available.
Where Can I Get It?
Given the early development stages, it’s always important to start with the latest available packages. Before GA, they will be available on labs.mysql.com, and after GA they will be available as part of the standard MySQL 5.7 Server releases (just as, for example, the Semi-Sync replication plugin is today).
Group Replication has also been marked as a “rapid” server plugin (see the informal rapid plugin concept introduced here), so after the initial GA release you can expect new releases of the plugin, containing bug fixes and new features, in later MySQL 5.7 Enterprise and Community builds. The rate of bug fixes and improvements is quite high with every passing month (a testament to the QA and development teams), so you should always start with the latest release and plan to upgrade with some regularity. I would note that the ability to do rolling upgrades also helps us significantly here.
How Would I Set It Up?
First, install the MySQL 5.7 packages that include the Group Replication plugin (the group_replication.so file located in the @@global.plugin_dir directory) on the machines that you want to participate in Group Replication. You want these machines to be on different physical hardware if possible (hardware fails, and you need redundancy there as well), with a reliable, low latency network connection between all of them, as they need to coordinate, communicate, form consensus, and of course replicate data between them.
You also want the group to consist of at least 3 members because writes require agreement from a majority of the group. With 3 members, one can fail and the other two can continue on without needing agreement from the failed member, as 2 out of 3 is still a majority. You also generally don't want an even number of members, because it's then too easy to end up with a split-brain situation: the members on either side of a network partition (or the half of the members left, which simply don't know whether the other half of the group has failed or just can't be reached) will block writes in order to preserve consistency. That situation requires manual intervention to unblock, by forcing a reconfiguration of the group membership using the group_replication_force_members option.
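For illustration, here is a rough sketch of what such a forced reconfiguration might look like, assuming only hanode2 and hanode3 (from the example setup introduced in the next note) survive a partition, and that you have verified the missing member really is down before excluding it:

-- run on one surviving member; list only the members you can confirm are alive
SET GLOBAL group_replication_force_members = '192.168.1.92:6606,192.168.1.93:6606';
-- once the group is unblocked and healthy again, clear the override
SET GLOBAL group_replication_force_members = '';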
Note: From here on out, I’ll use a 3 member group as the example setup, consisting of hosts hanode2/192.168.1.92, hanode3/192.168.1.93, and hanode4/192.168.1.94, where each node is running Oracle Linux 7. A 3 node group is a good starting point, and you can always go from 3 to 5 (or more) later.
Once you have the packages installed on all of the participating machines, we can move on to configuring each MySQL instance for Group Replication so that together they form a single highly available service. Below is the basic MySQL configuration that I use on all 3 members in my lab setup, this one taken from hanode2/192.168.1.92 (a few settings vary per member, e.g. server-id, group_replication_local_address, and group_replication_group_seeds). It should give you a good place to start; modify the parameters to fit your setup (filesystem paths, IP addresses, etc.):
# only the last two sub-sections are directly related to Group Replication
[mysqld]
server-id = 2
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

# SSL options
ssl-ca = ca.pem
ssl-cert = server-cert.pem
ssl-key = server-key.pem

# replication and binlog related options
binlog-row-image = MINIMAL
binlog-rows-query-log-events = ON
log-bin-trust-function-creators = TRUE
expire-logs-days = 90
max-binlog-size = 1G
relay-log-recovery = ON
slave-parallel-type = LOGICAL_CLOCK
slave-preserve-commit-order = ON
slave-parallel-workers = 8
slave-rows-search-algorithms = 'INDEX_SCAN,HASH_SCAN'
slave-type-conversions = ALL_NON_LOSSY
sync-master-info = 1000
sync-relay-log = 1000

# group replication pre-requisites & recommendations
log-bin
binlog-format = ROW
gtid-mode = ON
enforce-gtid-consistency = ON
log-slave-updates = ON
master-info-repository = TABLE
relay-log-info-repository = TABLE
binlog-checksum = NONE

# prevent use of non-transactional storage engines
disabled_storage_engines="MyISAM,BLACKHOLE,FEDERATED,ARCHIVE"

# InnoDB gap locks are problematic for multi-primary conflict detection; none are used with READ-COMMITTED.
# So if you don't rely on REPEATABLE-READ semantics and/or wish to use multi-primary mode, then this
# isolation level is recommended
transaction-isolation = 'READ-COMMITTED'

# group replication specific options
plugin-load = group_replication.so
group_replication = FORCE_PLUS_PERMANENT
transaction-write-set-extraction = XXHASH64
group_replication_start_on_boot = ON
group_replication_bootstrap_group = OFF
group_replication_group_name = 550fa9ee-a1f8-4b6d-9bfe-c03c12cd1c72
group_replication_local_address = '192.168.1.92:6606'
group_replication_group_seeds = '192.168.1.93:6606,192.168.1.94:6606'
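Once a member has been started with this configuration, a quick sanity check is to confirm that the plugin actually loaded by querying information_schema.plugins:

SELECT PLUGIN_NAME, PLUGIN_STATUS, PLUGIN_VERSION
  FROM information_schema.plugins
 WHERE PLUGIN_NAME = 'group_replication';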
Aside from the static config file, there’s also one runtime configuration step needed. We need to specify valid MySQL account credentials that this node can use when requesting GTIDs from existing members of the group (any available seed or donor from those listed in group_replication_group_seeds) whenever it needs to perform an automated recovery, e.g. when the node is taken offline for maintenance and then re-joins the group:
SET sql_log_bin=0;  # disable GTID generation and binary logging for our session
CHANGE MASTER TO MASTER_USER='rpl_user', MASTER_PASSWORD='rpl_pass' FOR CHANNEL 'group_replication_recovery';
SET sql_log_bin=1;  # re-enable GTID generation and binary logging for our session
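The rpl_user account referenced above is assumed to already exist on every member. If you still need to create it, a minimal sketch looks like this (run it on each member with binary logging disabled for the session so the statements are not replicated; adjust the host part and privileges to match your own security policy):

SET sql_log_bin=0;
-- hypothetical replication account matching the CHANGE MASTER statement above
CREATE USER 'rpl_user'@'%' IDENTIFIED BY 'rpl_pass';
GRANT REPLICATION SLAVE ON *.* TO 'rpl_user'@'%';
SET sql_log_bin=1;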
Note: If SELinux is enabled (you can check with sestatus -v), then you’ll need to allow mysqld to use the Group Replication communication port (6606 in my example config) so that it can bind to it and listen there. You can see which ports MySQL is currently allowed to use with semanage port -l | grep mysqld, and you can add the necessary port with semanage port -a -t mysqld_port_t -p tcp 6606.
Note: If iptables is enabled, then you’ll also need to open up the same port for communication between the machines. You can see the current firewall rules in place on each machine with iptables -L, and you can allow communication over the necessary port with iptables -A INPUT -p tcp --dport 6606 -j ACCEPT.
How Can I Start or Bootstrap the Group?
A Group Replication set is meant to be alive 24/7/365, so the initial bootstrapping of the group is a special case. We need to pick one member and declare that it is the bootstrap node by setting group_replication_bootstrap_group=ON (a minimal command sequence is sketched after the list below). This means that:
- It will not try to participate in any group communication when starting, but will instead configure the group as consisting only of itself.
- Any subsequent member that attempts to join the group will sync itself up with the state of this instance.
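Here is a minimal sketch of that bootstrap sequence, assuming you prefer to toggle the option at runtime on the chosen member (hanode2 in this walkthrough) rather than editing the config file and restarting:

-- on the one member chosen to bootstrap the group
-- (if start_on_boot already attempted, and failed, to start replication, run STOP GROUP_REPLICATION first)
SET GLOBAL group_replication_bootstrap_group = ON;
START GROUP_REPLICATION;
SET GLOBAL group_replication_bootstrap_group = OFF;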
Just remember to turn group_replication_bootstrap_group=OFF again after the first member is up. You can verify that it has bootstrapped the group by checking that it lists itself as the only ONLINE member:
mysql> select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 39a07a39-4b82-44d2-a3cd-978511564a57 | hanode2     |        3306 | ONLINE       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
1 row in set (0.00 sec)
How Can I Add Additional Members?
Now that we’ve bootstrapped the group (we have a group made up of only one member), we can add additional members (2 more in our 3 node test setup). In order to add a new member to an existing group, you need to take the following steps (a combined session sketch for the new node follows the list):
- Take a backup from one of the current members of the group using your preferred MySQL backup client tool, e.g. mysqldump or mysqlbackup. Then restore that backup onto the node that we want to add to the group, thus applying a snapshot of the state (tables, rows, and other SQL objects, along with the GTID metadata) from that current member of the group. This part is no different from setting up a slave in a standard async MySQL master/slave replication setup.
- Set up the configuration file so that this new node can participate in group replication generally (see the two group replication specific sections in the example config above), and become a member of this group specifically (group_replication_group_name).
- Specify valid MySQL credentials that this node will use when requesting GTIDs from existing members of the group (a seed or donor) as necessary to perform an automated recovery (such as when joining the group, which we’ll do next): CHANGE MASTER TO MASTER_USER='rpl_user', MASTER_PASSWORD='rpl_pass' FOR CHANNEL 'group_replication_recovery';
- Have the new node join to become a member with: STOP GROUP_REPLICATION; START GROUP_REPLICATION; (STOP is necessary because we have start_on_boot enabled).
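Putting steps 3 and 4 together, a session on the joining node (hanode3, for example) might look like this sketch, once the backup from step 1 has been restored and the config file from step 2 is in place:

SET sql_log_bin=0;
CHANGE MASTER TO MASTER_USER='rpl_user', MASTER_PASSWORD='rpl_pass'
  FOR CHANNEL 'group_replication_recovery';
SET sql_log_bin=1;
STOP GROUP_REPLICATION;
START GROUP_REPLICATION;
-- watch the member move from RECOVERING to ONLINE
SELECT MEMBER_HOST, MEMBER_STATE FROM performance_schema.replication_group_members;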
After the node has joined the group and become a member, it enters the RECOVERING state, where it automatically syncs up with the rest of the group, using the MySQL account credentials we specified in step 3, by requesting and applying all GTIDs which the group has executed but the joining node has not (any transactions the group has executed since the snapshot taken via the backup in step 1). Once the node has reached a synchronization point with the group (also referred to as a view), its state changes from RECOVERING to ONLINE and it is a fully functioning participant in the virtually synchronous replication group.
Note: You should also now execute step 3 (the CHANGE MASTER TO statement) on the node we used to bootstrap the group, if you haven’t previously done so, so that it too can later perform any automated recovery work as needed. If you’re unsure, you can check the current credentials in place with:
select user_name, user_password from mysql.slave_master_info where channel_name = 'group_replication_recovery';
How Can I Monitor the Group and Member Status?
Your primary window into the group is the performance_schema.replication_group_members table, which lists every member and its current state:

mysql> select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 39a07a39-4b82-44d2-a3cd-978511564a57 | hanode2     |        3306 | ONLINE       |
| group_replication_applier | 49311a3a-e058-46ba-8e7b-857b5db7d33f | hanode3     |        3306 | ONLINE       |
| group_replication_applier | de6858e8-0669-4b82-a188-d2906daa6d91 | hanode4     |        3306 | ONLINE       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
3 rows in set (0.00 sec)
For more detailed per-member statistics, such as queue sizes, conflicts detected, and the transactions committed on all members, use performance_schema.replication_group_member_stats:

mysql> select * from performance_schema.replication_group_member_stats\G
*************************** 1. row ***************************
                      CHANNEL_NAME: group_replication_applier
                           VIEW_ID: 14696404296429761:32
                         MEMBER_ID: 49311a3a-e058-46ba-8e7b-857b5db7d33f
       COUNT_TRANSACTIONS_IN_QUEUE: 0
        COUNT_TRANSACTIONS_CHECKED: 4
          COUNT_CONFLICTS_DETECTED: 0
COUNT_TRANSACTIONS_ROWS_VALIDATING: 0
TRANSACTIONS_COMMITTED_ALL_MEMBERS: 49311a3a-e058-46ba-8e7b-857b5db7d33f:1,
550fa9ee-a1f8-4b6d-9bfe-c03c12cd1c72:1-851,
de6858e8-0669-4b82-a188-d2906daa6d91:1-119927
    LAST_CONFLICT_FREE_TRANSACTION: 550fa9ee-a1f8-4b6d-9bfe-c03c12cd1c72:850
1 row in set (0.00 sec)
For past status, and to reconstruct a timeline and process flow across the group, you would correlate the timestamped MySQL error log messages on each member. All Group Replication messages are prefixed with the Plugin group_replication tag. For example:
2016-08-01T15:07:38.488099Z 0 [Warning] Plugin group_replication reported: 'The member with address 192.168.1.92:6606 has already sent the stable set. Therefore discarding the second message.'
2016-08-01T15:14:53.415258Z 0 [Note] Plugin group_replication reported: 'getstart group_id 514e0b66'
2016-08-01T15:14:55.575816Z 0 [Note] Plugin group_replication reported: 'Marking group replication view change with view_id 14696404296429761:34'
2016-08-01T15:14:55.611484Z 57 [Note] Start binlog_dump to master_thread_id(57) slave_server(4), pos(, 4)
2016-08-01T15:14:55.678086Z 0 [Note] Plugin group_replication reported: 'Server de6858e8-0669-4b82-a188-d2906daa6d91 was declared online within the replication group'
How Can I Debug Failures?
Your primary tools are the same ones noted above for monitoring. When something goes wrong, the MySQL error logs on each node are your main resource for figuring out what happened and why. We have pretty good info/debug style logs today that cover the entire code path: MySQL_Server->Group_Replication_plugin->MySQL_GCS_library (see upcoming blog posts that cover the internal MySQL Group Communication System component in more detail). If something fails or doesn’t work as expected, always look to the error log for answers.
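In addition to the error log, the standard replication Performance Schema tables can help; for example, applier errors for the group replication channels surface in replication_applier_status_by_worker (a general replication debugging aid, not something specific to Group Replication):

SELECT CHANNEL_NAME, LAST_ERROR_NUMBER, LAST_ERROR_MESSAGE, LAST_ERROR_TIMESTAMP
  FROM performance_schema.replication_applier_status_by_worker
 WHERE CHANNEL_NAME LIKE 'group_replication%';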
What About Application Traffic Routing and Failover?
While we work on a full end-to-end solution around Group Replication (for routing, sharding, automation/orchestration, administration, and more), we can use HAProxy for the client traffic routing in our test setup, taking advantage of its easily customizable nature to specify routing checks and rules specific to Group Replication: we only want it to route connections to ONLINE members of the group that are in the primary partition and not in read-only mode. In order to do this, I’ve installed HAProxy 1.5 on one of my primary physical lab machines, a 4th machine in this test setup (mylab/192.168.1.10), which is running Oracle Linux 6, where the yum repo contains HAProxy 1.5.4.
Until we’re able to add some similar things to the official SYS schema, let’s first go ahead and add the following helpful functions and views on any one member of the group so that they get replicated everywhere (we’ll leverage these in the HAProxy setup next):
USE sys;

DELIMITER $$

CREATE FUNCTION IFZERO(a INT, b INT)
RETURNS INT
RETURN IF(a = 0, b, a)$$

CREATE FUNCTION LOCATE2(needle TEXT(10000), haystack TEXT(10000), offset INT)
RETURNS INT
RETURN IFZERO(LOCATE(needle, haystack, offset), LENGTH(haystack) + 1)$$

CREATE FUNCTION GTID_NORMALIZE(g TEXT(10000))
RETURNS TEXT(10000)
RETURN GTID_SUBTRACT(g, '')$$

CREATE FUNCTION GTID_COUNT(gtid_set TEXT(10000))
RETURNS INT
BEGIN
  DECLARE result BIGINT DEFAULT 0;
  DECLARE colon_pos INT;
  DECLARE next_dash_pos INT;
  DECLARE next_colon_pos INT;
  DECLARE next_comma_pos INT;
  SET gtid_set = GTID_NORMALIZE(gtid_set);
  SET colon_pos = LOCATE2(':', gtid_set, 1);
  WHILE colon_pos != LENGTH(gtid_set) + 1 DO
     SET next_dash_pos = LOCATE2('-', gtid_set, colon_pos + 1);
     SET next_colon_pos = LOCATE2(':', gtid_set, colon_pos + 1);
     SET next_comma_pos = LOCATE2(',', gtid_set, colon_pos + 1);
     IF next_dash_pos < next_colon_pos AND next_dash_pos < next_comma_pos THEN
       SET result = result +
         SUBSTR(gtid_set, next_dash_pos + 1,
                LEAST(next_colon_pos, next_comma_pos) - (next_dash_pos + 1)) -
         SUBSTR(gtid_set, colon_pos + 1, next_dash_pos - (colon_pos + 1)) + 1;
     ELSE
       SET result = result + 1;
     END IF;
     SET colon_pos = next_colon_pos;
  END WHILE;
  RETURN result;
END$$

CREATE FUNCTION gr_applier_queue_length()
RETURNS INT
BEGIN
  RETURN (SELECT sys.gtid_count( GTID_SUBTRACT(
    (SELECT Received_transaction_set
       FROM performance_schema.replication_connection_status
      WHERE Channel_name = 'group_replication_applier'),
    (SELECT @@global.GTID_EXECUTED) )));
END$$

CREATE FUNCTION gr_member_in_primary_partition()
RETURNS VARCHAR(3)
BEGIN
  RETURN (SELECT IF( MEMBER_STATE='ONLINE' AND
    ((SELECT COUNT(*) FROM performance_schema.replication_group_members
       WHERE MEMBER_STATE != 'ONLINE') >=
     ((SELECT COUNT(*) FROM performance_schema.replication_group_members)/2) = 0),
    'YES', 'NO' )
    FROM performance_schema.replication_group_members
    JOIN performance_schema.replication_group_member_stats USING(member_id));
END$$

CREATE VIEW gr_member_routing_candidate_status AS SELECT
  sys.gr_member_in_primary_partition() as viable_candidate,
  IF( (SELECT (SELECT GROUP_CONCAT(variable_value)
                 FROM performance_schema.global_variables
                WHERE variable_name IN ('read_only', 'super_read_only')) != 'OFF,OFF'),
      'YES', 'NO') as read_only,
  sys.gr_applier_queue_length() as transactions_behind$$

DELIMITER ;
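As a quick sanity check, you can query the new view directly on any member; it returns one row with the viable_candidate, read_only, and transactions_behind columns that the HAProxy health check below will rely on:

SELECT * FROM sys.gr_member_routing_candidate_status;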
We can then use the gr_member_routing_candidate_status view in a shell script, mysql_gr_routing_check.sh, which we’ll place on all 3 Group Replication member machines (modify the username and password to match valid accounts on your 3 Group Replication members):
#!/bin/bash
#
# Script to make HAProxy capable of determining the routing candidate viability for MySQL Group Replication members
# Author: Matt Lord <matt.lord@oracle.com>
# Based on the original script from Unai Rodriguez and later work by Olaf van Zandwijk and Raghavendra Prabhu
#
# This password method is insecure and should not be used in a production environment!

MYSQL_USERNAME="minprivusr"
MYSQL_PASSWORD="!PassWrd29x"
MYSQL_HOST=localhost
MYSQL_PORT=3306

ROUTING_CANDIDATE=`mysql --no-defaults -nsLNE --connect-timeout=10 --host=$MYSQL_HOST --port=$MYSQL_PORT --protocol=tcp --user="$MYSQL_USERNAME" --password="$MYSQL_PASSWORD" -e 'SELECT GROUP_CONCAT(viable_candidate, read_only) FROM sys.gr_member_routing_candidate_status' 2>/dev/null | grep -v '*'`

if [ "$ROUTING_CANDIDATE" == "YESNO" ]
then
    # Member is a viable routing candidate => return HTTP 200
    # Shell return-code is 0
    echo -en "HTTP/1.1 200 OK\r\n"
    echo -en "Content-Type: text/plain\r\n"
    echo -en "Connection: close\r\n"
    echo -en "Content-Length: 57\r\n"
    echo -en "\r\n"
    echo -en "Group Replication member is a viable routing candidate.\r\n"
    exit 0
else
    # Member is not a viable routing candidate => return HTTP 503
    # Shell return-code is 1
    echo -en "HTTP/1.1 503 Service Unavailable\r\n"
    echo -en "Content-Type: text/plain\r\n"
    echo -en "Connection: close\r\n"
    echo -en "Content-Length: 61\r\n"
    echo -en "\r\n"
    echo -en "Group Replication member is not a viable routing candidate.\r\n"
    exit 1
fi
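The script assumes a minprivusr account exists on each member. One way to create it, as a sketch (run it on any ONLINE member so the account replicates to the whole group; the grants here are deliberately broad enough to read the sys objects and Performance Schema tables involved, and you may well be able to tighten them further):

-- hypothetical monitoring account matching the script's MYSQL_USERNAME/MYSQL_PASSWORD
CREATE USER 'minprivusr'@'localhost' IDENTIFIED BY '!PassWrd29x';
GRANT SELECT ON sys.* TO 'minprivusr'@'localhost';
GRANT EXECUTE ON sys.* TO 'minprivusr'@'localhost';
GRANT SELECT ON performance_schema.* TO 'minprivusr'@'localhost';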
I then copied mysql_gr_routing_check.sh to /var/lib/mysql-files on all three member machines. That directory is owned by mysql:mysql and has 750 permissions (drwxr-x---. 2 mysql mysql 60 Aug 2 13:33 /var/lib/mysql-files), so we have at least some basic security in place. We can then leverage xinetd to let HAProxy call that shell script via its httpchk option over port 6446, by first creating a service file called /etc/xinetd.d/mysql_gr_routing_check (modify the port used and IP CIDR range allowed as needed) on all three members:
# default: on
# description: check to see if the node is a viable routing candidate
service mysql_gr_routing_check
{
        disable         = no
        flags           = REUSE
        socket_type     = stream
        port            = 6446
        wait            = no
        user            = mysql
        server          = /var/lib/mysql-files/mysql_gr_routing_check.sh
        log_on_failure  += USERID
        only_from       = localhost 192.168.1.0/24
        per_source      = UNLIMITED
}
Note: If you don’t have xinetd installed, you can install it on many RPM based distros with yum install xinetd. You would then also need to ensure that the xinetd service is enabled using chkconfig or systemctl, etc.
Note: You will also need to specify the service/port combination used in /etc/services in order for xinetd to successfully load the service. So for my example here, make sure that you have this line in there (replacing any existing MySQL Proxy line for the same port):
mysql_gr_routing_check    6446/tcp                # MySQL HAProxy Check
Once our new xinetd service config file is in place, let’s (re)start the service with service xinetd restart or systemctl restart xinetd (etc.) to be sure that the new mysql_gr_routing_check service is loaded and active. You can check syslog to verify that the service was loaded OK and there were no errors.
Let’s now verify that we can get the routing candidate’s status from port 6446 on any of the three members’ machines:
[root@mylab ~]# telnet 192.168.1.92 6446
Trying 192.168.1.92...
Connected to 192.168.1.92.
Escape character is '^]'.
HTTP/1.1 200 OK
Content-Type: text/plain
Connection: close
Content-Length: 40

Group Replication member is a viable routing candidate.
Connection closed by foreign host.
We can then modify HAProxy’s /etc/haproxy/haproxy.cfg config file on the machine we want to use for MySQL traffic routing, which serves as the application’s single point of entry (mylab/192.168.1.10 in my test setup) for our distributed and highly available MySQL service, and utilize the new HTTP check service available on port 6446 on each of the Group Replication member candidates by adding the following at the end of the file:
#---------------------------------------------------------------------------------
# round robin multi-master load balancing for our MySQL Group Replication members
#---------------------------------------------------------------------------------
frontend mysql-gr-front
    bind *:3306
    mode tcp
    default_backend mysql-gr-back

backend mysql-gr-back
    mode tcp
    balance leastconn
    option httpchk
    server mysql1 192.168.1.92:3306 check port 6446 inter 1000 rise 1 fall 2
    server mysql2 192.168.1.93:3306 check port 6446 inter 1000 rise 1 fall 2
    server mysql3 192.168.1.94:3306 check port 6446 inter 1000 rise 1 fall 2
Note: If you would instead prefer to do active/passive or “first available” style routing, where all connections get routed to one node and the others serve only as backups, then you can declare the server list this way:
server mysql1 192.168.1.92:3306 check port 6446 inter 1000 rise 1 fall 2 on-marked-up shutdown-backup-sessions
server mysql2 192.168.1.93:3306 check port 6446 inter 1000 rise 1 fall 2 backup
server mysql3 192.168.1.94:3306 check port 6446 inter 1000 rise 1 fall 2 backup
We’ll then need to restart the haproxy service with service haproxy restart or systemctl restart haproxy (etc.) so that the new config file is loaded and active. We can then use the mysql command-line client to verify that MySQL traffic is now being routed to all 3 member nodes as we expect:
# the current makeup of my Group Replication set (executed on hanode4/192.168.1.94)
mysql> select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 39a07a39-4b82-44d2-a3cd-978511564a57 | hanode2     |        3306 | ONLINE       |
| group_replication_applier | 49311a3a-e058-46ba-8e7b-857b5db7d33f | hanode3     |        3306 | ONLINE       |
| group_replication_applier | de6858e8-0669-4b82-a188-d2906daa6d91 | hanode4     |        3306 | ONLINE       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
3 rows in set (0.00 sec)

# connecting to/through HAProxy (executed on mylab/192.168.1.10)
[root@mylab ~]# mysql --no-defaults --protocol=tcp -u root -p -e "select @@global.server_uuid"
Enter password:
+--------------------------------------+
| @@global.server_uuid                 |
+--------------------------------------+
| de6858e8-0669-4b82-a188-d2906daa6d91 |
+--------------------------------------+
[root@mylab ~]# mysql --no-defaults --protocol=tcp -u root -p -e "select @@global.server_uuid"
Enter password:
+--------------------------------------+
| @@global.server_uuid                 |
+--------------------------------------+
| 49311a3a-e058-46ba-8e7b-857b5db7d33f |
+--------------------------------------+
[root@mylab ~]# mysql --no-defaults --protocol=tcp -u root -p -e "select @@global.server_uuid"
Enter password:
+--------------------------------------+
| @@global.server_uuid                 |
+--------------------------------------+
| 39a07a39-4b82-44d2-a3cd-978511564a57 |
+--------------------------------------+
What’s Next?
As you can see, getting a working MySQL service consisting of 3 Group Replication members is not yet an easy “point and click” or single-command orchestrated operation. The entire process is ripe for orchestration tooling and a new MySQL Router plugin. Our goal is to have an integrated, easy-to-use, end-to-end solution, with Group Replication as its foundation. So please stay tuned as we complete the bigger native MySQL HA picture built around Group Replication!
Conclusion
I hope that you’re able to try out the latest Group Replication Beta, and if you are, that this quick start guide makes the process easier for you. If you have any questions or feedback, please feel free to post a comment here or reach out to me directly. I’d love to get your thoughts on how we can make it better as we approach GA, and beyond.
If you encounter any issues with the release, I would encourage you to file a bug report or open a support ticket. Your feedback is really critical to me and the rest of the team as we work on Group Replication!
As always, THANK YOU for using MySQL!