The MySQL Optimizer Cost Model Project

You may not be aware of this, but the foundation that the MySQL optimizer builds on when choosing a query plan – the cost model – is for the most part very old. At least in tech terms.

Much of it was written in another millennium, at a time when “Forrest Gump” and “Titanic” won Oscars and “Baywatch” was the big thing on TV. Although the revision history doesn’t go that far back, it wouldn’t come as a surprise if it predates that annoying “Macarena” song and even “The Sign” (Ace Of Base) – don’t follow those links unless you’re feeling very brave…

Thankfully, a lot has happened since Ace of Base. Modern CPUs have three orders of magnitude more transistors than the Pentium had. A cheap laptop today has more memory than a desktop hard drive could hold back then. And then there are new kinds of hardware like SSDs, flash storage, and whatnot.

In the end, it boils down to this: The foundation of the MySQL optimizer – the cost model – is in dire need of refactoring because it no longer models the hardware MySQL runs on in a convincing way. This is something the MySQL optimizer engineers have to deal with, and that’s why there is an ongoing project simply called “The Cost Model Project”.

The Cost Model Project

We want to give a heads up on what we’re discussing and/or working on in this project, and here it is. 🙂

The following are considered by the MySQL optimizer team to be the most important limitations of the current cost model, and some thoughts on how they can best be solved:

  1. The “cost” in MySQL is an abstract number, closely related to the cost of doing one random read from an HDD. At the time, this was an obvious choice since the main goal was to efficiently utilize the disk bandwidth. In the future we would probably like to relate cost to something that makes more sense to users, e.g. wall clock time.
  2. There is a hard-coded relationship between CPU cost and IO cost. This relationship wasn’t half bad 15 or 20 years ago, but CPU speeds have improved considerably more than IO speeds since then. This will probably be fixed by making costs configurable, in a way that can better model future hardware when that time comes (a sketch of what configurable costs could look like follows this list).
  3. IO is assumed to be HDD in the model. There is no concept of SSDs, flash drives, or FusionIO. E.g., some of the IO optimizations are only relevant for spinning disks. To fix this, we need to make the cost model aware of the underlying hardware, and different kinds of hardware will need different configurations.
  4. The optimizer has no idea how much of a table (or index) has been cached. It is typically assumed that every query has to read all rows from HDD. Since memory gets cheaper at an amazing rate, more and more data can be cached. This, in turn, affects the cost of row fetch operations. The MySQL optimizer has to get information about caching from the storage engines, but we also have to modify the cost model to make use of that new information.
  5. Before the InnoDB and MySQL server teams were joined, InnoDB didn’t have a lot of influence on the MySQL optimizer development. Instead of getting incorrect cost calculations in MySQL fixed, InnoDB had to hack their own code to get reasonable execution plans. However, two wrongs don’t make a right, so it’s better to remove the hacks and fix the cost model instead.
  6. Index statistics have far too low precision for indexes with good selectivity. Currently, there is no way to tell apart an index that returns an average of 1.9 rows per lookup value from one that returns 1.1 rows per lookup value. The fix is, of course, to improve the precision of the statistics. Some traces of this are already in the 5.7.4 milestone release, and you can read more in worklog 7338.
  7. The cost model would benefit from additional and improved statistics, on both indexed and non-indexed columns. Simple statistics for non-indexed columns similar to what we currently have for indexes, histograms, and correlation histograms could all play their valuable part, but of course we’ll have to prioritize (a small histogram sketch follows this list).
  8. The estimate of how many rows from a table will be joined with the next table only takes into account the conditions used by the access method. This often leads to very wrong cost estimates. The fix was to implement condition filtering; it is already in the latest 5.7 lab release and was discussed in two earlier blog posts (part1, part2), and there is an example after this list. Also note that better statistics would improve the precision of this work.
  9. Cost calculations are spread out all over the place and duplicated extensively. If we’re ever going to get consistency and maintainability into the cost model, we need to refactor the code and hide the calculations behind APIs. Until we get there, the cost model will be extremely expensive and error prone to modify because there are so many lines of code to change. Again, some traces of this can be seen in the 5.7.4 milestone, and in worklog 7182 and worklog 7209.
  10. Networking cost is not part of the model. Adding it would allow MySQL Cluster to get proper cost calculations.
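
To make items 2, 3, and 4 a bit more concrete, here is a minimal sketch of what user-configurable cost constants could look like. The table names, constant names, and values below are illustrative assumptions, not a finished interface (they roughly follow the shape this work later took in the 5.7 series):

```sql
-- Illustrative sketch only: assumes cost constants are stored in system tables.

-- Inspect the server-level (CPU-related) cost constants.
SELECT cost_name, cost_value FROM mysql.server_cost;

-- Tell the optimizer that this machine evaluates rows cheaply
-- compared with the built-in default.
UPDATE mysql.server_cost
   SET cost_value = 0.1
 WHERE cost_name = 'row_evaluate_cost';

-- Engine-level constants could distinguish reads served from cache from reads
-- that hit storage, which is one way to model SSDs and caching (items 3 and 4).
UPDATE mysql.engine_cost
   SET cost_value = 0.25
 WHERE cost_name = 'io_block_read_cost';

-- Make the running server pick up the new values.
FLUSH OPTIMIZER_COSTS;
```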
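The precision problem in item 6 compounds quickly: with whole-number rows-per-key statistics, an index that returns 1.1 rows per lookup and one that returns 1.9 rows per lookup look identical to the optimizer, yet across a four-table join the true difference in row counts is about (1.9/1.1)³ ≈ 5×. For item 7, the statements below are a sketch of what collecting a histogram on a non-indexed column could look like; the syntax, table, and column names are assumptions for illustration (syntax of this general shape eventually appeared in a later release):

```sql
-- Hypothetical table and columns, shown only to illustrate the idea.
ANALYZE TABLE orders UPDATE HISTOGRAM ON status, ship_country WITH 64 BUCKETS;

-- With a histogram on `status`, the optimizer could estimate how many rows
-- match a skewed value instead of assuming a uniform distribution.
EXPLAIN SELECT * FROM orders WHERE status = 'CANCELLED';
```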
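Item 8 is also easy to illustrate. In the example below (hypothetical schema), the access method on employees only uses the index on dept_id; without condition filtering the optimizer assumes that every row fetched through that index is joined with offices, even though the extra predicate on hire_date removes most of them. Condition filtering lets the filtered column of EXPLAIN account for those non-index conditions, which can change the chosen join order:

```sql
-- Hypothetical schema, for illustration only.
EXPLAIN
SELECT e.name, o.city
  FROM employees e
  JOIN offices o ON o.office_id = e.office_id
 WHERE e.dept_id = 42                -- used by the index access method on employees
   AND e.hire_date >= '2010-01-01'; -- filters rows, but is not part of the access method
```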

Finally, alongside all of this major refactoring, you will also need the ability to provide a wide variety of additional Optimizer Hints in order to get a specific execution plan in various cases. But that is a topic for another blog. 🙂
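
As a small teaser: the classic index hints already let you pin down one specific decision, and comment-style hints extend the same idea to other parts of the plan. The table and index names below are hypothetical, and the comment-style syntax is only a sketch (these particular hint names arrived later in the 5.7 series):

```sql
-- Traditional index hint: force a specific index choice.
SELECT * FROM orders FORCE INDEX (idx_customer) WHERE customer_id = 42;

-- Comment-style hint (sketch): switch off index condition pushdown for one table.
SELECT /*+ NO_ICP(o) */ *
  FROM orders o
 WHERE customer_id = 42 AND total > 100;
```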

Please let us know if you have any suggestions or comments on the project. We’d love to hear from you!