MySQL 8.0.32
Source Code Documentation

Classes  
struct  JoinStatus 
Functions  
bool  IsSubjoin (Hyperedge a, Hyperedge b) 
Returns whether A is already a part of B, i.e., whether it is impossible to execute A before B. More...  
bool  CombiningWouldViolateConflictRules (const Mem_root_array< ConflictRule > &conflict_rules, const int *in_component, int left_component, int right_component) 
int  GetComponent (const NodeMap *components, const int *in_component, NodeMap tables) 
template<class Func >  
void  ConnectComponentsThroughJoins (const JoinHypergraph &graph, const OnlineCycleFinder &cycles, Func &&callback_on_join, NodeMap *components, int *in_component) 
Helper algorithm for GetCardinality() and GraphIsJoinable(); given a set of components (each typically connecting a single table at the start), connects them incrementally up through joins and calls a given callback every time we do it. More...  
double  GetCardinality (NodeMap tables_to_join, const JoinHypergraph &graph, const OnlineCycleFinder &cycles) 
For a given set of tables, try to estimate the cardinality of joining them together. More...  
double  GetCardinalitySingleJoin (NodeMap left, NodeMap right, double left_rows, double right_rows, const JoinHypergraph &graph, const JoinPredicate &pred) 
A special, much faster version of GetCardinality() that can be used when joining two partitions along a known edge. More...  
OnlineCycleFinder  FindJoinDependencies (const Hypergraph &graph, MEM_ROOT *mem_root) 
Initialize a DAG containing all inferred join dependencies from the hypergraph. More...  
bool  IsQueryGraphSimpleEnough (THD *thd, const JoinHypergraph &graph, int subgraph_pair_limit, MEM_ROOT *mem_root, int *seen_subgraph_pairs) 
void  SetNumberOfSimplifications (int num_simplifications, GraphSimplifier *simplifier) 
JoinStatus  SimulateJoin (JoinStatus left, JoinStatus right, const JoinPredicate &pred) 
Simulate the (total) costs and cardinalities of joining two sets of tables, without actually having an AccessPath for each (which is a bit heavyweight for just cost and cardinality). More...  
JoinStatus  SimulateJoin (double left_rows, JoinStatus right, const JoinPredicate &pred) 
JoinStatus  SimulateJoin (JoinStatus left, double right_rows, const JoinPredicate &pred) 
JoinStatus  SimulateJoin (double left_rows, double right_rows, const JoinPredicate &pred) 
bool  GraphIsJoinable (const JoinHypergraph &graph, const OnlineCycleFinder &cycles) 
See if a given hypergraph is impossible to join, in any way. More...  
bool anonymous_namespace{graph_simplification.cc}::CombiningWouldViolateConflictRules  (  const Mem_root_array< ConflictRule > &  conflict_rules, 
const int *  in_component,  
int  left_component,  
int  right_component  
) 
void anonymous_namespace{graph_simplification.cc}::ConnectComponentsThroughJoins  (  const JoinHypergraph &  graph, 
const OnlineCycleFinder &  cycles,  
Func &&  callback_on_join,  
NodeMap *  components,  
int *  in_component  
) 
Helper algorithm for GetCardinality() and GraphIsJoinable(); given a set of components (each typically connecting a single table at the start), connects them incrementally up through joins and calls a given callback every time we do it.
The callback must be of type
bool callback(int left_component, int right_component, const JoinPredicate &pred, int num_changed);
where num_changed is the number of tables that were in right_component but have now been combined with the ones in left_component and moved there (we always move into the component with the lowest index). The algorithm ends when callback() returns true, or when no more joins are possible.
In theory, it would be possible to accelerate this mechanism by means of the standard union-find algorithm (see e.g. https://en.wikipedia.org/wiki/Disjoint-set_data_structure), but since MAX_TABLES is so small, just using bitsets seems to work just as well. And instead of spending time on that, it would probably be better to find a complete join inference algorithm that would make GraphIsJoinable() obsolete and thus reduce the number of calls to this function.
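The callback contract above can be made concrete with a minimal sketch. The JoinCounter functor below is purely illustrative (JoinPredicate is stubbed out); it counts absorbed tables and stops the traversal once every table has been merged into a single component, which takes exactly num_tables - 1 moves.

```cpp
#include <cassert>

// Stub for illustration only; the real JoinPredicate lives in the
// hypergraph join optimizer and carries selectivity information.
struct JoinPredicate {};

// A minimal callback of the documented shape: it counts how many tables
// have been absorbed so far and stops the traversal (by returning true)
// once every table is in one component.
struct JoinCounter {
  int num_tables;     // total number of tables in the graph
  int tables_merged;  // tables moved out of their original component

  bool operator()(int left_component, int right_component,
                  const JoinPredicate &pred, int num_changed) {
    (void)left_component;
    (void)right_component;
    (void)pred;
    tables_merged += num_changed;
    // Once num_tables - 1 tables have moved, everything is in one
    // component; returning true ends the traversal.
    return tables_merged == num_tables - 1;
  }
};
```

Passing such a functor as callback_on_join is how GraphIsJoinable() can detect full connectivity without materializing any join plans.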
OnlineCycleFinder anonymous_namespace{graph_simplification.cc}::FindJoinDependencies  (  const Hypergraph &  graph, 
MEM_ROOT *  mem_root  
) 
Initialize a DAG containing all inferred join dependencies from the hypergraph.
These are join dependencies that we cannot violate no matter what we do, so we need to make sure we do not try to force join reorderings that would be in conflict with them (whether directly or transitively) – the returned OnlineCycleFinder allows us to check exactly that, and also to keep maintaining the DAG as we impose more orderings on the graph.
This graph doesn't necessarily contain all dependencies inherent in the hypergraph, but it usually contains most of them. For instance, {t2,t3}-t4 is not a subjoin of t1-{t2,t4}, but must often be ordered before it anyway, since t2 and t4 are on opposite sides of the former join. See GraphSimplificationTest.IndirectHierarchicalJoins for a concrete test.
Also, in the case of cyclic hypergraphs, the constraints in this DAG may be too strict, since it doesn't take into account that in cyclic hypergraphs we don't end up using all the edges (since the cycles are caused by redundant edges). So even if a constraint cannot be added because it would cause a cycle in the DAG, it doesn't mean that the hypergraph is unjoinable, because one of the edges involved in the cycle might be redundant and can be bypassed. See GraphSimplificationTest.CycleNeighboringHyperedges for a concrete test.
We really ought to fix this, but it's not obvious how to implement it; it seems very difficult to create a test that catches all cases and does not have any false positives in the presence of cycles (which often enable surprising orderings). Because the inference is incomplete, we need additional and fairly expensive checks later on; see the comments on GraphIsJoinable().
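The cycle-rejection behavior described above can be illustrated with a toy stand-in for OnlineCycleFinder. The real class maintains a topological order incrementally (which is much cheaper); this sketch, with invented names, simply re-checks reachability with a DFS before admitting each edge.

```cpp
#include <cassert>
#include <vector>

// Toy stand-in for OnlineCycleFinder: a DAG over small integer nodes
// where AddEdge() refuses (returns false) any edge that would create a
// cycle, mirroring how an ordering constraint is rejected when it
// contradicts already-imposed orderings.
class ToyCycleFinder {
 public:
  explicit ToyCycleFinder(int num_nodes) : m_edges(num_nodes) {}

  // Records a->b and returns true if the graph stays acyclic;
  // returns false (and records nothing) if b already reaches a.
  bool AddEdge(int a, int b) {
    if (Reaches(b, a)) return false;
    m_edges[a].push_back(b);
    return true;
  }

 private:
  // Depth-first reachability check: can we get from `from` to `to`?
  bool Reaches(int from, int to) const {
    if (from == to) return true;
    for (int next : m_edges[from]) {
      if (Reaches(next, to)) return true;
    }
    return false;
  }

  std::vector<std::vector<int>> m_edges;  // adjacency lists
};
```

As the surrounding text notes, a rejected edge only proves a contradiction within this DAG; with cyclic hypergraphs the real optimizer must still consider that a conflicting edge may be redundant.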
double anonymous_namespace{graph_simplification.cc}::GetCardinality  (  NodeMap  tables_to_join, 
const JoinHypergraph &  graph,  
const OnlineCycleFinder &  cycles  
) 
For a given set of tables, try to estimate the cardinality of joining them together.
(This essentially simulates the cardinality we'd get out of CostingReceiver, but without computing any costs or actual AccessPaths.)
This is a fairly expensive operation since we need to iterate over all hyperedges several times, so we cache the cardinalities for each hyperedge in GraphSimplifier's constructor and then reuse them until the hyperedge is changed. We could probably go even further by having a cache based on tables_to_join, as many of the hyperedges will share endpoints, but it does not seem to be worth it (based on the microbenchmark profiles).
double anonymous_namespace{graph_simplification.cc}::GetCardinalitySingleJoin  (  NodeMap  left, 
NodeMap  right,  
double  left_rows,  
double  right_rows,  
const JoinHypergraph &  graph,  
const JoinPredicate &  pred  
) 
A special, much faster version of GetCardinality() that can be used when joining two partitions along a known edge.
It reuses the existing cardinalities, and just applies the single edge and any missing WHERE predicates; this allows it to just make a single pass over those predicates and do no other work.
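The single-edge estimate boils down to one multiplication pass. The sketch below uses invented names and a plain selectivity list; the real code reads selectivities from the JoinPredicate and from the graph's WHERE predicates rather than taking them as arguments.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Hedged sketch of the single-join cardinality formula: the output of a
// join is |L| * |R| scaled by the selectivity of the join edge and of
// any WHERE predicates that first become applicable once both sides are
// present.
double EstimateJoinRows(double left_rows, double right_rows,
                        double edge_selectivity,
                        const std::vector<double> &where_selectivities) {
  double rows = left_rows * right_rows * edge_selectivity;
  for (double sel : where_selectivities) {
    rows *= sel;  // each newly-applicable WHERE predicate filters further
  }
  return rows;
}
```

Because only the single edge and its newly-applicable predicates are visited, this avoids the repeated full sweeps over all hyperedges that make the general GetCardinality() expensive.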
int anonymous_namespace{graph_simplification.cc}::GetComponent  (  const NodeMap *  components, 
const int *  in_component,  
NodeMap  tables  
) 
bool anonymous_namespace{graph_simplification.cc}::GraphIsJoinable  (  const JoinHypergraph &  graph, 
const OnlineCycleFinder &  cycles  
) 
See if a given hypergraph is impossible to join, in any way.
This is a hack to work around the fact that our inference of implicit join ordering from the hypergraph is imperfect, so that we can end up creating an impossible situation (try to force join A before join B, but B must be done before A due to graph constraints). The paper mentions that joins must be inferred, but does not provide a complete procedure, and the authors were unaware that their assumed procedure did not cover all cases (Neumann, personal communication). Thus, we run this after each join simplification we apply, to see whether we created such a contradiction (if so, we know the opposite ordering is true).
The algorithm is bare-bones: we put each node (table) into its own component, and then run through all join edges to see if we can connect those components into larger components. If we can apply enough edges (by repeated application of the entire list) that everything is connected into the same component, then there is at least one valid join order, and the graph is joinable. If not, it is impossible and we return true.
bool anonymous_namespace{graph_simplification.cc}::IsQueryGraphSimpleEnough  (  THD *  thd, 
const JoinHypergraph &  graph,  
int  subgraph_pair_limit,  
MEM_ROOT *  mem_root,  
int *  seen_subgraph_pairs  
) 
Returns whether A is already a part of B, i.e., whether it is impossible to execute A before B.
E.g., for t1 LEFT JOIN (t2 JOIN t3), the t2-t3 join will be part of the t1-{t2,t3} hyperedge, and this will return true.
Note that this definition is much more lenient than the one in the paper (Figure 4), which appears to be wrong.
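The containment idea can be sketched as a pair of bitmask tests. This is an illustrative reading of the description above, not the server's actual IsSubjoin implementation: edge A counts as part of B when every table A touches lies entirely within one endpoint of B, so A's join necessarily happens inside that side of B.

```cpp
#include <cassert>
#include <cstdint>

using NodeMap = uint64_t;  // bitmap of tables

struct Hyperedge {
  NodeMap left, right;  // the two endpoints of the edge
};

// Sketch only: A is "already a part of" B if all of A's tables fit
// within b.left or within b.right, i.e. A cannot be executed before B.
bool IsSubjoinSketch(Hyperedge a, Hyperedge b) {
  NodeMap all_a = a.left | a.right;
  return (all_a & ~b.left) == 0 || (all_a & ~b.right) == 0;
}
```

For the document's example, with t1=0x1, t2=0x2, t3=0x4: the t2-t3 edge {2,4} is contained in the right endpoint {t2,t3}=0x6 of the t1-{t2,t3} edge, so the sketch returns true.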
void anonymous_namespace{graph_simplification.cc}::SetNumberOfSimplifications  (  int  num_simplifications, 
GraphSimplifier *  simplifier  
) 
JoinStatus anonymous_namespace{graph_simplification.cc}::SimulateJoin  (  double  left_rows, 
double  right_rows,  
const JoinPredicate &  pred  
) 
JoinStatus anonymous_namespace{graph_simplification.cc}::SimulateJoin  (  double  left_rows, 
JoinStatus  right,  
const JoinPredicate &  pred  
) 
JoinStatus anonymous_namespace{graph_simplification.cc}::SimulateJoin  (  JoinStatus  left, 
double  right_rows,  
const JoinPredicate &  pred  
) 
JoinStatus anonymous_namespace{graph_simplification.cc}::SimulateJoin  (  JoinStatus  left, 
JoinStatus  right,  
const JoinPredicate &  pred  
) 
Simulate the (total) costs and cardinalities of joining two sets of tables, without actually having an AccessPath for each (which is a bit heavyweight for just cost and cardinality).
Returns the same type, so that we can succinctly simulate joining this to yet more tables.
The paper generally uses merge join as the cost function heuristic, but since we don't have merge join, and nested-loop joins are heavily dependent on context such as available indexes, we instead use our standard hash join estimation here. When we get merge joins, we should probably have a look to see whether switching to its cost function here makes sense. (Of course, we don't know what join type we will actually be using until we're done with the entire planning!)
NOTE: Keep this in sync with the cost estimation in ProposeHashJoin().
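A minimal sketch of the accumulation pattern: each simulated join carries forward only total cost and output cardinality, and the result feeds straight into the next simulated join. The per-row constant and the flat selectivity below are illustrative placeholders, not the server's actual hash join cost model (which ProposeHashJoin() defines).

```cpp
#include <cassert>
#include <cmath>

// Toy counterpart of JoinStatus: just cost and cardinality, no
// AccessPath.
struct ToyJoinStatus {
  double cost;
  double num_output_rows;
};

// Hash-join-flavored simulation: pay the accumulated cost of both
// inputs, a per-row charge for building and probing the hash table, and
// a per-row charge for emitting the output.
ToyJoinStatus SimulateHashJoinSketch(ToyJoinStatus left, ToyJoinStatus right,
                                     double selectivity) {
  constexpr double kCostPerRow = 0.1;  // assumed, for illustration only
  double out_rows = left.num_output_rows * right.num_output_rows * selectivity;
  double cost = left.cost + right.cost +
                kCostPerRow * (left.num_output_rows + right.num_output_rows) +
                kCostPerRow * out_rows;
  return {cost, out_rows};
}
```

Because the function returns the same type it consumes, chaining calls simulates an entire join tree's total cost and cardinality without building any plan fragments.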