4 Performance Testing

We execute performance testing to ensure that the performance profile of the products does not regress. Performance testing is done throughout the development process (in the current TRUNK branch) and focuses on throughput testing and response time testing.

Throughput Testing

Throughput testing focuses on high-concurrency scenarios, with concurrent connections ranging from 8 to 1024. Testing is performed with open source tools such as sysbench, with an emphasis on simple queries and simple OLTP transactions, but custom workloads are also included. Other tests take a much deeper look at complex OLTP throughput. These tests use short durations (5 to 10 minutes per test), varying server configuration options, and varying data set sizes for both batched and atomic transaction types. The primary purpose of these tests is to expose bottlenecks, eliminate regressions, and benchmark throughput. Results are archived in a database to facilitate retrieval for comparisons, and automated comparisons are performed to standardize evaluation.
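As an illustration only (not the internal test harness), a minimal sketch of such a concurrency sweep could look like the following. It assumes sysbench 1.x with its bundled oltp_read_write workload, a prepared local MySQL test schema, and a hypothetical results store; the connection settings, table counts, and results schema are assumptions made for the example.

# Hypothetical sketch: sweep sysbench oltp_read_write over a range of
# concurrency levels and archive the throughput of each run for later
# comparison. Assumes "sysbench ... prepare" was already run against the
# sbtest schema; the credentials and results schema are illustrative.
import re
import sqlite3
import subprocess

THREAD_LEVELS = [8, 16, 32, 64, 128, 256, 512, 1024]
DURATION_SECONDS = 300          # 5 minutes per test point
SYSBENCH_ARGS = [
    "sysbench", "oltp_read_write",
    "--mysql-host=127.0.0.1", "--mysql-user=sbtest",
    "--mysql-password=sbtest", "--mysql-db=sbtest",
    "--tables=8", "--table-size=1000000",
    f"--time={DURATION_SECONDS}",
]

def run_point(threads: int) -> float:
    """Run one sysbench point and return transactions per second."""
    out = subprocess.run(
        SYSBENCH_ARGS + [f"--threads={threads}", "run"],
        capture_output=True, text=True, check=True,
    ).stdout
    match = re.search(r"transactions:\s+\d+\s+\(([\d.]+) per sec\.\)", out)
    if match is None:
        raise RuntimeError("could not parse sysbench output")
    return float(match.group(1))

def main() -> None:
    db = sqlite3.connect("throughput_results.db")
    db.execute("""CREATE TABLE IF NOT EXISTS results (
                    run_id TEXT, threads INTEGER, tps REAL)""")
    run_id = "trunk-nightly"    # placeholder build identifier
    for threads in THREAD_LEVELS:
        tps = run_point(threads)
        db.execute("INSERT INTO results VALUES (?, ?, ?)",
                   (run_id, threads, tps))
        db.commit()
        print(f"{threads:>5} threads: {tps:.2f} tps")
    db.close()

if __name__ == "__main__":
    main()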

Response Time Testing

Response time testing is done with a single thread, because the main focus is to ensure consistency in query plans and execution time. A standardized open source test is used to improve performance and guard against regressions in the parser and optimizer. We use two different configurations, one performing CPU-bound testing and the other performing I/O-bound testing. Results are automatically compared with results from previous tests and archived in a database. From these data, summaries and reports are created.
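In simplified, hypothetical form (not the actual test suite), the single-threaded comparison step could time each query on one connection and flag deviations from an archived baseline; the connector, the query list, the baseline file, and the 5% threshold below are assumptions chosen for the example.

# Hypothetical sketch: execute a fixed set of queries on a single connection,
# time each one, and compare it against a stored baseline. The connector,
# query list, baseline file, and 5% threshold are illustrative assumptions.
import json
import time

import mysql.connector   # pip install mysql-connector-python

QUERIES = {
    "q1_point_select": "SELECT c FROM sbtest1 WHERE id = 5000",
    "q2_range_sum": "SELECT SUM(k) FROM sbtest1 WHERE id BETWEEN 1 AND 10000",
}
THRESHOLD = 1.05   # flag anything more than 5% slower than the baseline

def time_query(cursor, sql: str) -> float:
    start = time.perf_counter()
    cursor.execute(sql)
    cursor.fetchall()
    return time.perf_counter() - start

def main() -> None:
    conn = mysql.connector.connect(host="127.0.0.1", user="test",
                                   password="test", database="sbtest")
    cursor = conn.cursor()
    with open("baseline.json") as f:   # times recorded at the milestone
        baseline = json.load(f)
    for name, sql in QUERIES.items():
        elapsed = time_query(cursor, sql)
        if name in baseline and elapsed > baseline[name] * THRESHOLD:
            print(f"REGRESSION {name}: {elapsed:.4f}s "
                  f"vs baseline {baseline[name]:.4f}s")
        else:
            print(f"ok         {name}: {elapsed:.4f}s")
    cursor.close()
    conn.close()

if __name__ == "__main__":
    main()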

Apart from testing new features, all of the above tests are performed at regular intervals (daily and weekly). These tests are run on Linux and Windows platforms. For interval testing, automation is used to pull the current code, execute the tests, compare the results against previously established baselines, and report the results. If any result exceeds a preset threshold, it is tracked as a failure and prompts further investigation. That investigation may reveal that the failure is due to recently introduced changes, in which case the development teams are alerted so that they can either fix the issue or revert the change. For all tests, baselines are established during milestone revisions and are used for comparisons only against the same server revision, framework revision, server configuration, and environment. At the end of the development cycle, all tests are repeated with larger data sets on the release candidate to catch any regressions caused by last-minute code changes or by code integration.
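The automated baseline comparison described above could be approximated, again only as a hypothetical sketch, by keying baselines on the milestone revision, server configuration, and environment, and reporting a failure when a result falls outside a preset threshold; the schema, key fields, and 5% tolerance are assumptions, not the internal tooling.

# Hypothetical sketch: compare a new throughput result against the baseline
# recorded for the same milestone revision, server configuration, and
# environment, and report a failure when the drop exceeds a preset threshold.
# The schema, key fields, and 5% tolerance are illustrative assumptions.
import sqlite3

TOLERANCE = 0.05   # fail if throughput drops more than 5% below the baseline

def check_result(db, milestone, config, environment, threads, tps) -> bool:
    row = db.execute(
        """SELECT tps FROM baselines
           WHERE milestone = ? AND config = ?
             AND environment = ? AND threads = ?""",
        (milestone, config, environment, threads),
    ).fetchone()
    if row is None:
        print(f"no baseline for {milestone}/{config}/{environment}/{threads}")
        return True
    baseline_tps = row[0]
    if tps < baseline_tps * (1.0 - TOLERANCE):
        print(f"FAIL {threads} threads: {tps:.1f} tps "
              f"vs baseline {baseline_tps:.1f} tps")
        return False
    print(f"pass {threads} threads: {tps:.1f} tps "
          f"(baseline {baseline_tps:.1f})")
    return True

if __name__ == "__main__":
    db = sqlite3.connect(":memory:")
    db.execute("""CREATE TABLE baselines (
                    milestone TEXT, config TEXT, environment TEXT,
                    threads INTEGER, tps REAL)""")
    db.execute("INSERT INTO baselines VALUES ('8.0-m1', 'innodb_default', "
               "'linux-x86_64', 64, 4500.0)")
    # A result more than 5% below the 4500 tps baseline is reported as a failure.
    check_result(db, "8.0-m1", "innodb_default", "linux-x86_64", 64, 4200.0)
    db.close()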