WL#4769: Backup: Test validity point without micro-managing sync points

Affects: Server-6.0   —   Status: Complete   —   Priority: Medium

In backup terminology, the validity point (VP) is the transaction-consistent
point in the transaction history at which the backed-up data is captured. All
transactions committed before the VP should be fully reflected after restore.
Conversely, nothing from transactions that are uncommitted at VP time should be
reflected after restore.

backup_vp_[nontx|tx].test tests the validity point with a few (up to 5)
"concurrent" transactions by micro-managing synchronization points. However,
these tests are only pseudo-concurrent, since the sync points completely
serialize the order in which the transactions execute.

This WL will result in a test where numerous transactions are executed in
parallel with BACKUP. When BACKUP has completed, the concurrent transactional
load is stopped, and the database is restored. The content of the database is
then checked to confirm that 
 a) All transactions committed before the VP are reflected
 b) No transactions committed after the VP are reflected

Bugs found:
Found BUG#45737, which this WL now depends on.

Requirements for successfully testing Validity Point
[R1] There must be many (~20) concurrent transactions executing DML operations 
     while the BACKUP command executes.
[R2] After RESTORE, the test must be able to confirm that all transactions 
     committed before the BACKUP validity point are reflected in the restored
     database.
[R3] After RESTORE, the test must be able to confirm that no transactions 
     committed after the BACKUP validity point are reflected in the restored
     database.

The relationship between BACKUP VP and binlog_start_pos
The backup image is required to be transactionally consistent. To achieve this,
the backup code blocks all commits for a short time to synchronize the data from
all involved storage engines. In MySQL Backup terminology, this synchronization
establishes the Validity Point of the backup image. 

During the same time when all commits are blocked, backup also records the
current binary log position so that the backup image can be used to setup a
slave for replication. Since only commit operations write to the binary log, the
VP and binary log position of the backup image coincide.

The test will be performed by making two instances of the same database. 

* The Restored Image database instance
The first instance is made from performing BACKUP in parallel with a highly
concurrent load. When BACKUP has completed, the concurrent load is stopped, and
the backup image is used to RESTORE the database. This database instance is
called the "Restored Image" (RI).

* The Binlog Replay database instance
Another version of the same database is created by applying the binary log. When
BACKUP has completed, the binary log is replayed to a separate instance of the
database up to the backup binlog position. This instance of the database is
called "Binlog Replay", BR.

We now have two instances of what should be completely equal databases:
 * One containing the state of the backup image.
 * Another containing an early version of the database with the binary log
   applied to it. The binary log is nothing more than a serialization of the 
   same transactions that executed concurrently with the BACKUP command. 

These two databases shall be compared to confirm that the BACKUP command creates
a backup image that is transactionally consistent at the VP.

Meeting requirements
* R1 is met by executing 20 transactions in parallel with BACKUP using the
  Stress Test tool
* R2 and R3 are met by comparing the two databases after BACKUP and RESTORE 
  have completed.

An early idea for this WL that was discarded:
Make RQG execute transactions that insert timestamps and check that the restored
database only contains the transactions with timestamps prior to the VP timestamp.

However, this idea turned out to be insufficient: since the timestamp has to be
inserted into a table *before* the transaction commits, this sequence is possible:

trans1: insert timestamp: 1000
trans2: backup reaches VP. Timestamp: 1001
trans1: commit

In this history, it looks like trans1 should be reflected in the backup image
(inserted timestamp lower than VP timestamp). However, the commit happens after
the VP and the transaction should therefore not be reflected.

Thus, timestamps can be used to check that no transactions committed *after* VP
are reflected in the backup image. On the other hand, it may be impossible to
use TS to check if transactions that should have been included in the backup
image actually are reflected.

The test will be written in Perl and pushed to the QA tree in the new directory
mysql-test-extra-6.0/mysql-test/backup-suite/. The test will be executed on a
weekly basis.

The Perl test program will go through the following steps:

1. Initialize:
  * Create database schema
  * Import data
  * Make a (file system) copy of the database 
  * Enable binary logging
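
The initialization steps above can be sketched as a short shell fragment. All
paths, the database name (vptest), and the input file names are illustrative
assumptions; the essential point is that the file-system copy is taken while
the server is down, so the BR instance later starts from a consistent snapshot:

```shell
# Create schema and import initial data (names are assumptions)
mysql -e "CREATE DATABASE IF NOT EXISTS vptest"
mysql vptest < schema.sql
mysql vptest < data.sql

# Server must be stopped before taking a file-system copy of the datadir
mysqladmin shutdown
cp -rp /var/lib/mysql /var/lib/mysql-copy

# Restart with binary logging enabled
mysqld_safe --log-bin=mysql-bin &
```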

2. Start transactional load
  * The load will be executed by the Stress Test (ST) tool
  * The Stress Test tool will run with 20 concurrent threads
  * The load will consist of insert/update/delete operations
  * Each transaction will consist of multiple operations

3. Execute BACKUP and RESTORE
  * BACKUP shall be performed concurrently with the transactions generated by 
    the ST.
  * Stop concurrent transactional load when backup has completed
  * The binlog position of the backup image is extracted by selecting from
    the backup logs (the mysql.backup_history table)
  * RESTORE the backup image.
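
Step 3 might look roughly like the following sketch. The BACKUP/RESTORE
statements follow the 6.0 online backup syntax; the backup_history column
names and socket paths are assumptions, not verified against the server:

```shell
# Run BACKUP while the Stress Test load is executing
mysql -e "BACKUP DATABASE vptest TO 'vptest.bak'"

# ... stop the concurrent transactional load here (tool-specific) ...

# Extract the binlog position recorded for this backup image
# (column names binlog_pos/backup_id are assumptions)
BINLOG_POS=$(mysql -N -e \
  "SELECT binlog_pos FROM mysql.backup_history ORDER BY backup_id DESC LIMIT 1")

# Restore the image; this database instance becomes RI
mysql -e "RESTORE FROM 'vptest.bak'"
```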

4. Set up the Binlog Replayed (BR) database
  * Boot the database copy created in step 1 in a separate MySQL instance
  * Apply the binary log up to the binary log position of the backup image
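
Applying the binary log up to the backup position can be done with
mysqlbinlog's --stop-position option; the binlog file name, socket path, and
the $BINLOG_POS variable (taken from the backup image, see step 3) are
assumptions for illustration:

```shell
# Replay the binary log into the BR instance, stopping at the
# binlog position recorded at the backup's validity point
mysqlbinlog --stop-position="$BINLOG_POS" /var/lib/mysql/mysql-bin.000001 \
  | mysql --socket=/tmp/br.sock
```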

5. Compare the two database instances
  * Dump both databases to file by using mysqldump
  * Compare the content by using the diff tool
  * The test is a success iff the diff between the dumped databases is empty.
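
The final check reduces to an empty diff between two dumps. Below is a
minimal, self-contained sketch of the pass/fail logic; stand-in files replace
real mysqldump output, and against live instances one would add
--order-by-primary and --skip-dump-date so that row order and dump timestamps
cannot cause spurious differences:

```shell
# Stand-in dump files simulating mysqldump output from the RI and BR
# instances; in the real test these would come from e.g.:
#   mysqldump --socket=... --order-by-primary --skip-dump-date vptest
cat > ri.sql <<'EOF'
INSERT INTO t1 VALUES (1),(2),(3);
EOF
cat > br.sql <<'EOF'
INSERT INTO t1 VALUES (1),(2),(3);
EOF

# The test succeeds iff the diff between the dumped databases is empty
if diff ri.sql br.sql > vp.diff; then
  echo "PASS: databases are identical at the VP"
else
  echo "FAIL: databases differ, see vp.diff"
fi
```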