WL#4195: Test: Online Backup: Binlogging at restore time

Affects: Server-6.0   —   Status: Complete   —   Priority: Medium

This worklog will verify that the backup and replication features work together
without issues; in other words, combining MySQL backup with replication should
have no adverse side effects.

This worklog takes its reference from WL#4209. We need to verify that backup
and replication do not fail when performed together. The following requirements
need to be validated to confirm proper interoperability of the MySQL backup
feature with replication.

R01: Backup commands shall not be logged when run on master.

R02: Restore commands shall not be logged when run on a master. 

R03: The slave's binary log information shall be recorded in the backup logs
when backup commands are run on a slave.

R04: Restore on slave is not permitted unless replication is turned off. 

R05: Replication is not allowed to start while a restore is in progress.

R06: Effects of restore shall not be logged.

R07: Changes to backup logs shall not be logged.

R08: Backup on master should not affect its slaves.

R09: Backup and restore commands are never written to the binary log.

R10: Restore shall not record any data events in the binary log.

We also need to test the following scenarios for this worklog.

1) Point-in-time recovery: recover data up to the point of backup using the
binary log position from the backup logs, and verify that the recovered
database is replicated to the slave.

2) Perform backup and restore on the master that replicates data to two slaves.

3) Perform backup and restore on the master that replicates data to two slaves
with one slave acting as master to another slave. Also perform backup and
restore on the slave that acts as the master to another slave.

4) Perform backup on the master that replicates to a slave. Fail (shut down)
the master and replace it; restore the backup image on the new master as well
as the slave, and continue replication. Repeat the same test by replacing a
failed slave.

Note:
* The implementation of scenarios 3 and 4 using the mysql-test framework needs
to be expedited.
* An existing test, suite/rpl/t/rpl_backup.test, covers some of the
requirements of this worklog (R01, R02, R04, R05, R08, R09).
* Requirements R03, R06, R07, R10 and the scenarios listed above will be
tested in this worklog.

The following test cases are identified for this worklog:

General Requirements for all test cases
=======================================
* Create databases (db1 and db2) on the master.
* Create tables with indexes and keys, and load some data into the tables.
* Create objects in the databases (views, triggers, events, stored procedures
and functions).
* Two servers are needed, one acting as master and the other as slave. (This
prerequisite applies to tests 1 to 5.)
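The setup above could be scripted roughly as follows, in mysql-test syntax. This is a sketch only: the object names (t1, v1, trg1, p1, f1, e1) and schemas are illustrative choices, not mandated by the worklog.

```
# On the master; assumes a standard master/slave pair (e.g. rpl.inc setup).
CREATE DATABASE db1;
CREATE DATABASE db2;

CREATE TABLE db1.t1 (a INT PRIMARY KEY, b VARCHAR(32), KEY idx_b (b));
INSERT INTO db1.t1 VALUES (1,'one'), (2,'two'), (3,'three');

CREATE VIEW db1.v1 AS SELECT a, b FROM db1.t1;
CREATE TRIGGER db1.trg1 BEFORE INSERT ON db1.t1
  FOR EACH ROW SET NEW.b = UPPER(NEW.b);
CREATE PROCEDURE db1.p1() SELECT COUNT(*) FROM db1.t1;
CREATE FUNCTION db1.f1() RETURNS INT DETERMINISTIC RETURN 42;
CREATE EVENT db1.e1 ON SCHEDULE EVERY 1 HOUR
  DO DELETE FROM db1.t1 WHERE a < 0;

# Populate db2 similarly, then make sure the slave has caught up.
--sync_slave_with_master
```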

Test 1: Backup commands are run on slaves. 
======
* Clear all the backup logs using PURGE BACKUP LOGS command.
* Perform BACKUP DATABASE to db12s.bak on the slave.
* Check SHOW MASTER STATUS on the slave to note down its binary log information.
* Verify that the slave's binary log information is recorded in the backup
history log.
* Verify that performing the backup does not advance the binary log position in
the slave's status.
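The steps of Test 1 might be sketched in mysql-test syntax as below. The name of the backup history table and its columns (mysql.backup_history, binlog_file, binlog_pos, backup_id) are assumptions about the backup-logs implementation and would need to be checked against the actual schema.

```
connection slave;
PURGE BACKUP LOGS;

# Record the slave's own binlog position before the backup.
--let $pos_before = query_get_value(SHOW MASTER STATUS, Position, 1)

BACKUP DATABASE db1, db2 TO 'db12s.bak';

# R03: the slave's binlog coordinates should appear in the backup history log.
# (Table and column names here are assumed, not confirmed.)
SELECT binlog_file, binlog_pos FROM mysql.backup_history
  ORDER BY backup_id DESC LIMIT 1;

# The backup itself must not advance the slave's binlog position.
--let $pos_after = query_get_value(SHOW MASTER STATUS, Position, 1)
```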

Test 2: Restore commands run on a master
======
* Clear all the backup logs using PURGE BACKUP LOGS command.
* Note down the binary log position from master status.
* Execute RESTORE FROM db12m.bak on the master. Note that when a restore is
performed on the master, an incident event is generated, which causes the
slave to stop.
* Check the binary log events starting from the binary log position recorded
before the restore.
* Note that a restore incident event will be generated, and verify that the
binary log position is not otherwise advanced by the restore operation.
* This test will verify the requirements R06 and R10.
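A minimal mysql-test sketch of this check follows. It assumes the restore and binlog-inspection statements shown in the document; `SHOW BINLOG EVENTS FROM <pos>` reads from the first binary log unless an `IN 'log_name'` clause is given, so the exact invocation may need adjusting to the test's log file.

```
connection master;
PURGE BACKUP LOGS;

# Remember where the binlog stood before the restore.
--let $binlog_pos = query_get_value(SHOW MASTER STATUS, Position, 1)

RESTORE FROM 'db12m.bak';

# R06/R10: only an Incident event is expected after $binlog_pos;
# no Query/Table_map/Write_rows events from the restored data.
--eval SHOW BINLOG EVENTS FROM $binlog_pos
```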

Test 3: Changes to backup logs shall not be logged.
======
* Clear all the backup logs using PURGE BACKUP LOGS command.
* Check the master status and note down the binary log position. 
* Check the backup logs in slaves and also note down the binary log position.
* Perform a backup operation on the master to db12m.bak.
* Check the backup logs on the master. Verify that performing a backup on the
master does not lead to changes in the backup logs on the slave.
* Remove the backup image file db12m.bak
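One way to script the R07 check is to count backup-log rows on the slave before and after the master's backup; the counts should match. As in Test 1, the table name mysql.backup_history is an assumption about where the backup logs live.

```
connection slave;
--let $rows_before = query_get_value(SELECT COUNT(*) AS c FROM mysql.backup_history, c, 1)

connection master;
BACKUP DATABASE db1, db2 TO 'db12m.bak';
--sync_slave_with_master

# R07: the master's backup must not replicate rows into the slave's
# backup logs, so the count should be unchanged.
--let $rows_after = query_get_value(SELECT COUNT(*) AS c FROM mysql.backup_history, c, 1)
```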

Test 4: Point in time recovery
======
* Turn off replication by stopping the slave.
* Execute some DML (insert, update, delete) and DDL (create, drop, alter)
statements in the databases.
* Clear all the backup logs using PURGE BACKUP LOGS command.
* Perform a BACKUP DATABASE operation to db12m_ptr.bak and note down the
binary log position from the backup logs. Drop the databases.
* Turn on replication by starting the slave.
* Execute point-in-time recovery, using the backup's binary log position to
recover the data up to the point of backup.
* Verify that this data is replicated to the slave.
* Check that the data contents are recovered completely on both master and
slave.
* Remove the backup image file db12m_ptr.bak
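The recovery step above might be driven from mysql-test roughly as follows, replaying the master's binlog up to the position recorded in the backup logs. The `$stop_pos` and binlog file name are placeholders to be filled in by the test; `$MYSQL_BINLOG` and `$MYSQL` are the standard mysql-test-run environment variables for the client binaries.

```
connection master;
# $stop_pos = binlog position recorded in the backup history log at backup time.
# Replaying events up to that position recreates the data as of the backup.
--exec $MYSQL_BINLOG --stop-position=$stop_pos $MYSQLD_DATADIR/master-bin.000001 | $MYSQL

# The replayed statements are binlogged normally, so they flow to the slave.
--sync_slave_with_master
```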

Test 5: Fail & replace the master, restore the backup image in new master &
======  slave

* Clear all the backup logs using PURGE BACKUP LOGS command.
* Perform a BACKUP DATABASE operation of the databases on the master to
db12m.bak.
* Shut down the master and replace it with a new server.
* Restore data on the new master server from the backup image (db12m.bak).
* Restore the backup image (db12m.bak) on the slave and start replication.
* Verify all the objects and data contents in the databases.
* Remove the backup image file db12m.bak.
* Repeat the same test by shutting down the slave and replacing it with a new
server.

Test 6 : Backup and restore on master that has multiple slaves
======
Pre-requisites: establish replication between a master (m1) and two slaves (s1
and s2). Let s1 also act as master to another slave, s3, and establish that
replication.

* Clear all the backup logs using PURGE BACKUP LOGS command.
* Perform a BACKUP DATABASE operation of the databases on master m1 to
db12.bak.
* Drop the databases from the master.
* Restore data from the backup image (db12.bak) on master m1 and slaves s1 and
s2, and continue replication.
* Perform a BACKUP DATABASE operation of the databases on slave s1 to
db12s1.bak.
* Drop the databases from s1.
* Restore data from the backup image (db12s1.bak) on s1 (the master in this
case) and slave s3, and continue replication.
* Verify all the objects and data contents in the databases.
* Remove the backup image files db12.bak and db12s1.bak.
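The tiered part of this test (s1 acting as master for s3) could be sketched as follows. Note that per R04 a restore on a slave requires replication to be stopped first, so s1 stops its own slave thread before restoring; connection names are illustrative.

```
# Topology assumed: m1 -> (s1, s2), and s1 -> s3.
connection s1;
BACKUP DATABASE db1, db2 TO 'db12s1.bak';
DROP DATABASE db1;
DROP DATABASE db2;

# R04: restore on a slave is only permitted with replication turned off.
STOP SLAVE;
RESTORE FROM 'db12s1.bak';
START SLAVE;

# s3 replicates from s1, so replication below s1 should resume cleanly.
connection s3;
START SLAVE;
```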

Test case mapping
=================

The following table shows the one-to-one mapping of test cases in the LLD to
the HLS.

-----------------------------
LLD     | HLS
-----------------------------
Test 1  | R03
Test 2  | R10, R06
Test 3  | R07
Test 4  | Test scenario 1 (PITR)
Test 5  | Test scenario 4
Test 6  | Test scenarios 2, 3
-----------------------------