WL#3132: Protect against avalanche of competing table scans

Affects: WorkLog-3.4   —   Status: Un-Assigned

Many websites using MySQL suffer from one big issue.

Many sites have some seldom-used pages that depend on a table scan.
Usually these pages take a few (1-3) seconds to build
and are called only a few times a day, so everything is fine.

But then some robot/spider or human calls such a page 20 times in a row,
and the MySQL server starts a number of table scans that compete for I/O.
The time needed for competing table scans rises dramatically:
10 simultaneous table scans that need 2 seconds each
do not finish in 20 seconds but take several minutes to complete.

When something like this happens, the site effectively goes down.
We have seen this many times. Even bugs.mysql.com and www.mysql.com
have been unavailable a number of times because of this problem.

Often the result of the table scan can be stored in the query cache.
So if these table scans did not run simultaneously but instead waited for the
first one to complete, the 10 scans would not need 5 minutes but only about
2 seconds in total: the first scan does the work, and the remaining 9 queries
are answered from the query cache.

If the MySQL server could serialize competing table scans,
this would lead to better performance and more stability in such cases.
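
To illustrate the intended behaviour (this is a stand-alone sketch, not actual
server code), the following C++ program serializes identical scans with a mutex
and a condition variable: the first session runs the scan, later sessions wait
and are then answered from a cached result. The names query_cache,
run_table_scan and execute are hypothetical stand-ins, and the 2-second sleep
simulates the scan.

// Minimal sketch of serializing identical table scans; not MySQL internals.
#include <chrono>
#include <condition_variable>
#include <iostream>
#include <map>
#include <mutex>
#include <string>
#include <thread>
#include <vector>

// Hypothetical stand-in for the query cache.
std::map<std::string, std::string> query_cache;

// Per-query "scan in progress" bookkeeping.
std::mutex mtx;
std::condition_variable cv;
std::map<std::string, bool> scan_running;

// Hypothetical stand-in for the expensive table scan (assumed ~2 seconds).
std::string run_table_scan(const std::string& query) {
    std::this_thread::sleep_for(std::chrono::seconds(2));
    return "result of " + query;
}

std::string execute(const std::string& query) {
    std::unique_lock<std::mutex> lock(mtx);

    // Fast path: result is already cached.
    auto hit = query_cache.find(query);
    if (hit != query_cache.end()) return hit->second;

    // An identical scan is already running: wait for it instead of starting
    // a competing scan, then retry the cache.
    while (scan_running[query]) {
        cv.wait(lock);
        hit = query_cache.find(query);
        if (hit != query_cache.end()) return hit->second;
    }

    // We are the first session: run the scan without holding the lock.
    scan_running[query] = true;
    lock.unlock();
    std::string result = run_table_scan(query);
    lock.lock();

    query_cache[query] = result;
    scan_running[query] = false;
    cv.notify_all();  // waiters wake up and find the cached result
    return result;
}

int main() {
    // 10 "clients" issue the same scan-heavy query at once; only one scan
    // actually runs, so the total time stays near 2 seconds.
    std::vector<std::thread> clients;
    for (int i = 0; i < 10; ++i)
        clients.emplace_back([] { execute("SELECT * FROM big_table"); });
    for (auto& t : clients) t.join();
    std::cout << "all 10 queries answered\n";
}

In this sketch the waiting sessions consume no I/O at all, which is the point:
instead of 10 scans fighting for the disk, one scan runs and the rest are
served from the cache once it finishes.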