MySQL 9.0.0
Source Code Documentation
/* Copyright (c) 2021, 2024, Oracle and/or its affiliates.

   This program is free software; you can redistribute it and/or modify
   it under the terms of the GNU General Public License, version 2.0,
   as published by the Free Software Foundation.

   This program is designed to work with certain software (including
   but not limited to OpenSSL) that is licensed under separate terms,
   as designated in a particular file or component or in included license
   documentation. The authors of MySQL hereby grant you an additional
   permission to link the program and your derivative works with the
   separately licensed software that they have either included with
   the program or referenced in the documentation.

   This program is distributed in the hope that it will be useful,
   but WITHOUT ANY WARRANTY; without even the implied warranty of
   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
   GNU General Public License, version 2.0, for more details.

   You should have received a copy of the GNU General Public License
   along with this program; if not, write to the Free Software
   Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA */
/**
  @file

  Heuristic simplification of query graphs to make them execute faster,
  largely a direct implementation of [Neu09] (any references to just
  “the paper” will generally be to that). This is needed for when
  query hypergraphs have too many possible (connected) subgraphs to
  evaluate all of them, and we need to resort to heuristics.

  The algorithm works by evaluating pairs of neighboring joins
  (largely, those that touch some of the same tables), finding obviously _bad_
  pairwise orderings and then disallowing them. I.e., if join A must
  very likely happen before join B (as measured by cost heuristics),
  we disallow the B-before-A join by extending the hyperedge of
  B to include A's nodes. This makes the graph more visually complicated
  (thus making “simplification” a bit of a misnomer), but reduces the search
  space, so that the query generally is faster to plan.

  Obviously, as the algorithm is greedy, it will sometimes make mistakes
  and make for a more expensive (or at least higher-cost) query.
  This isn't necessarily an optimal or even particularly good algorithm;
  e.g. LinDP++ [Rad19] claims significantly better results, especially
  on joins that are 40 tables or more. However, using graph simplification
  allows us to handle large queries reasonably well, while still reusing nearly
  all of our query planning machinery (i.e., we don't have to implement a
  separate query planner and cost model for large queries).

  Also note that graph simplification only addresses the problem of subgraph
  pair explosion. If each subgraph pair generates large amounts of candidate
  access paths (e.g. through parameterized paths), each subgraph pair will in
  itself be expensive, and graph simplification does not concern itself with
  this at all. Thus, to get a complete solution, we must _also_ have heuristic
  pruning of access paths within a subgraph, which we're currently missing.

  [Neu09] Neumann: “Query Simplification: Graceful Degradation for Join-Order
    Optimization”.
  [Rad19] Radke and Neumann: “LinDP++: Generalizing Linearized DP to
    Crossproducts and Non-Inner Joins”.
 */
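The edge-extension operation at the heart of this can be sketched in isolation. The following is a toy model only: it uses plain `uint64_t` node bitmasks rather than the real `hypergraph::Hyperedge`, and `ForceOrder` is a hypothetical name, not a function in this file. To force the join of `before` ahead of the join of `after`, the endpoint of `after` that overlaps `before`'s nodes absorbs all of them, so `after` only becomes applicable once everything `before` touches has already been joined:

```cpp
#include <cstdint>

// Toy hyperedge: a join between the node set `left` and the node set `right`,
// each represented as a bitmask (bit i set = table i is present).
struct Hyperedge {
  uint64_t left;
  uint64_t right;
};

// Hypothetical helper: disallow the `after`-before-`before` ordering by
// extending the overlapping endpoint of `after` with all of `before`'s nodes.
inline void ForceOrder(const Hyperedge &before, Hyperedge *after) {
  const uint64_t before_nodes = before.left | before.right;
  if ((after->left & before_nodes) != 0) {
    after->left |= before_nodes;
  } else {
    after->right |= before_nodes;
  }
}
```

Note how this only ever grows an endpoint, which is why the graph becomes visually more complicated while the set of valid join orders shrinks.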
#include <assert.h>
#include <stddef.h>

#include <limits>
#include <vector>

#include "my_compiler.h"
#include "priority_queue.h"
#include "sql/join_optimizer/hypergraph.h"
#include "sql/join_optimizer/online_cycle_finder.h"
#include "sql/mem_root_allocator.h"
#include "sql/mem_root_array.h"
#include "sql/sql_array.h"

class THD;
struct JoinHypergraph;
// Exposed for unit testing.
class GraphSimplifier {
 public:
  GraphSimplifier(THD *thd, JoinHypergraph *graph);

  // Do a single simplification step. The return enum is mostly for unit tests;
  // general code only needs to care about whether it returned
  // NO_SIMPLIFICATION_POSSIBLE or not.
  enum SimplificationResult {
    // No (more) simplifications are possible on this hypergraph.
    NO_SIMPLIFICATION_POSSIBLE,

    // We applied a simplification of the graph (forcing one join ahead of
    // another).
    APPLIED_SIMPLIFICATION,

    // We applied a simplification, but it was one that was forced upon us;
    // we intended to apply the opposite, but discovered it would leave the
    // graph in an impossible state. Thus, the graph has been changed, but the
    // actual available join orderings are exactly as they were.
    APPLIED_NOOP,

    // We applied a step that was earlier undone using UndoSimplificationStep().
    // (We do not know whether it was originally APPLIED_SIMPLIFICATION or
    // APPLIED_NOOP.)
    APPLIED_REDO_STEP,
  };
  SimplificationResult DoSimplificationStep();
  // Undo the last applied simplification step (by DoSimplificationStep()).
  // Note that this does not reset the internal state, i.e., it only puts
  // the graph back into the state before the last DoSimplificationStep()
  // call. This means that the internal happens-before graph and cardinalities
  // remain as if the step was still done. This is because when
  // DoSimplificationStep() is called after an UndoSimplificationStep() call,
  // no new work is done; the change is simply replayed again, with no
  // new computation. We only need to search for more simplifications
  // once we've replayed all undone steps. This also means that we make
  // the assumption that nobody else is changing the graph during the
  // lifetime of GraphSimplifier.
  //
  // You can call UndoSimplificationStep() several times, as long as there
  // is at least one simplification step to undo; undo/redo works essentially
  // as a stack.
  void UndoSimplificationStep();
  // How many steps we've (successfully) done and not undone.
  int num_steps_done() const {
    assert(m_done_steps.size() < size_t{std::numeric_limits<int>::max()});
    return static_cast<int>(m_done_steps.size());
  }

  // How many steps we've undone.
  int num_steps_undone() const {
    assert(m_undone_steps.size() < size_t{std::numeric_limits<int>::max()});
    return static_cast<int>(m_undone_steps.size());
  }
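The undo/redo stack behavior described above can be sketched with a toy class (the name `StepLog` and the use of plain `int` steps are hypothetical; the real class stores `SimplificationStep` objects and recomputes nothing on replay):

```cpp
#include <cstddef>
#include <vector>

// Toy model of the undo/redo scheme: applied steps live on one stack,
// undone steps on another, and a "do" after an "undo" replays the undone
// step instead of accepting a newly computed one.
class StepLog {
 public:
  // Apply a step: replay the most recently undone step if there is one,
  // otherwise accept the freshly computed candidate. Returns the step
  // that was actually applied.
  int Do(int new_step) {
    int step;
    if (!undone_.empty()) {
      step = undone_.back();  // Replay; no new computation needed.
      undone_.pop_back();
    } else {
      step = new_step;
    }
    done_.push_back(step);
    return step;
  }

  // Move the last applied step over to the undone stack.
  void Undo() {
    undone_.push_back(done_.back());
    done_.pop_back();
  }

  size_t num_done() const { return done_.size(); }
  size_t num_undone() const { return undone_.size(); }

 private:
  std::vector<int> done_;    // Chronological order of application.
  std::vector<int> undone_;  // Latest undone step last.
};
```

The key property is that a `Do` following an `Undo` returns the replayed step, not the candidate passed in, mirroring how new simplification work is only searched for once all undone steps have been replayed.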
 private:
  // Update the given join's cache in the priority queue (or take it in
  // or out of the queue), presumably after best_step.benefit has changed
  // for that join.
  //
  // After this operation, m_pq should be in a consistent state.
  void UpdatePQ(size_t edge_idx);

  // Recalculate the benefit of all orderings involving the given edge,
  // i.e., the advantage of ordering any other neighboring join before
  // or after it. (These are stored in m_cache; see NeighborCache for
  // more information on the scheme.) You will typically need to call this
  // after having modified the given join (hyperedge endpoint). Note that
  // if a given ordering has become less advantageous, this may entail
  // recalculating other nodes recursively as well, but this should be rare
  // (again, see the comments on NeighborCache).
  //
  // “begin” and “end” are the range of other joins to compare against
  // (edge1_idx itself is always excluded). It should normally be set to
  // 0 and N (the number of edges) to compare against all, but during the
  // initial population in the constructor, where every pair is considered,
  // it is used to avoid redundant computation.
  //
  // It would have been nice to somehow be able to use neighbor-of-neighbor
  // information to avoid rescanning all candidates for neighbors
  // (and the paper mentions “materializing all neighbors of a join”),
  // but given how hyperedges work, there doesn't seem to be a trivial way
  // of doing that (after A has absorbed B's nodes into one of its hyperedges,
  // it seems it could gain new neighbors that were neither neighbors of
  // A nor B).
  void RecalculateNeighbors(size_t edge1_idx, size_t begin, size_t end);

  struct ProposedSimplificationStep {
    double benefit;
    int before_edge_idx;
    int after_edge_idx;
  };
  // Returns whether two joins are neighboring (share edges),
  // and if so, estimates the benefit of joining one before the other
  // (including which one should be first) and writes into “step”.
  ALWAYS_INLINE bool EdgesAreNeighboring(size_t edge1_idx, size_t edge2_idx,
                                         ProposedSimplificationStep *step);

  struct SimplificationStep {
    int before_edge_idx;
    int after_edge_idx;

    // Old and new versions of after_edge_idx.
    hypergraph::Hyperedge old_edge;
    hypergraph::Hyperedge new_edge;
  };
  // Convert a simplification step (join A before join B) to an actual
  // idea of how to modify the given edge (new values for join B's
  // hyperedge endpoints).
  SimplificationStep ConcretizeSimplificationStep(
      ProposedSimplificationStep step);

  THD *m_thd;
  // Steps that we have applied so far, in chronological order.
  // Used so that we can undo them easily on UndoSimplificationStep().
  Mem_root_array<SimplificationStep> m_done_steps;

  // Steps that we used to have applied, but have undone, in chronological
  // order of the undo (i.e., latest undone step last).
  // DoSimplificationStep() will use these to quickly reapply an undone
  // step if needed (and then move it to the end of done_steps again).
  Mem_root_array<SimplificationStep> m_undone_steps;
  // Cache the cardinalities of (a join of) the nodes on each side of each
  // hyperedge, corresponding 1:1 index-wise to m_graph->edges. So if
  // e.g. m_graph->graph.edges[0].left contains {t1,t2,t4}, then
  // m_edge_cardinalities[0].left will contain the cardinality of joining
  // t1, t2 and t4 together.
  //
  // This cache is so that we don't need to make repeated calls to
  // GetCardinality(), which is fairly expensive. It is updated when we
  // apply simplification steps (which change the hyperedges).
  struct EdgeCardinalities {
    double left;
    double right;
  };
  Bounds_checked_array<EdgeCardinalities> m_edge_cardinalities;
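Cached per-side cardinalities feed benefit estimates of the general shape sketched below. This is a toy illustration only, for the simplest case of a chain t1 JOIN t2 JOIN t3, with a C_out-style cost (sum of intermediate result cardinalities); `OrderingBenefit` is a hypothetical helper, and the real cost model is considerably richer:

```cpp
// Compare the two possible orders of a chain t1 JOIN t2 JOIN t3.
// sel12 and sel23 are the selectivities of the two join predicates.
double OrderingBenefit(double t1, double t2, double t3, double sel12,
                       double sel23) {
  // Both orders produce the same final result, so its cardinality is a
  // shared cost term.
  const double final_card = t1 * t2 * t3 * sel12 * sel23;
  const double cost_12_first = t1 * t2 * sel12 + final_card;
  const double cost_23_first = t2 * t3 * sel23 + final_card;
  // A value > 1 means forcing (t1 JOIN t2) before (t2 JOIN t3) looks
  // better; the larger the ratio, the safer the simplification.
  return cost_23_first / cost_12_first;
}
```

With a small t1 x t2 intermediate and a large t2 x t3 one, the ratio comes out well above 1, which is exactly the kind of "obviously bad ordering" the simplifier disallows first.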
  // The graph we are simplifying.
  JoinHypergraph *m_graph;

  // Stores must-happen-before relationships between the joins (edges),
  // so that we don't end up with impossibilities. See OnlineCycleFinder
  // for more information.
  OnlineCycleFinder m_cycles;
  // Used for storing which neighbors are possible to simplify,
  // and how attractive they are. This speeds up repeated application of
  // DoSimplificationStep() significantly, as we don't have to recompute
  // the same information over and over again. This is keyed on the numerically
  // lowest join of the join pair, i.e., information about the benefit of
  // ordering join A before or after join B is stored on m_cache[min(A,B)].
  // These take part in a priority queue (see m_pq below), so that we always
  // know cheaply which one is the most attractive.
  //
  // There is a maybe surprising twist here; for any given cache node (join),
  // we only store the most beneficial ordering, and throw away all others.
  // This is because our benefit values keep changing all the time; once we've
  // chosen to put A before B, it means we've changed B, and that means every
  // single join pair involving B now needs to be recalculated anyway
  // (the costs, and thus ordering benefits, are highly dependent on the
  // hyperedge of B). Thus, storing only the best one (and by extension,
  // not having information about the other ones in the priority queue)
  // allows us to very quickly and easily throw away half of the invalidated
  // ones. We still need to check the other half (the ones that may be the best
  // for other nodes) to see if we need to invalidate them, but actual
  // invalidation is rare, as it only happens for the best simplification
  // involving that node (i.e., 1/N).
  //
  // It's unclear if this is the same scheme that the paper alludes to;
  // it mentions a priority queue and ordering by neighbor-involving joins,
  // but very little detail.
  struct NeighborCache {
    // The best simplification involving this join and a higher-indexed join,
    // and the index of that other node. (best_neighbor could be inferred
    // from the indexes in best_step and this index, but we keep it around
    // for simplicity.) best_neighbor == -1 indicates that there are no
    // possible reorderings involving this join and a higher-indexed one
    // (so it should not take part in the priority queue).
    int best_neighbor = -1;
    ProposedSimplificationStep best_step;

    // Where we are in the priority queue (heap index);
    // Priority_queue will update this for us (through MarkNeighborCache)
    // whenever we are inserted into or moved around in the queue.
    // This is so that we can easily tell the PQ to recalculate our position
    // whenever best_step.benefit changes. -1 means that we are
    // currently not in the priority queue.
    int index_in_pq = -1;
  };
  Bounds_checked_array<NeighborCache> m_cache;
  // A priority queue of which simplifications are the most attractive,
  // containing pointers into m_cache. See the documentation on NeighborCache
  // for more information.
  struct CompareByBenefit {
    bool operator()(const NeighborCache *a, const NeighborCache *b) const {
      return a->best_step.benefit < b->best_step.benefit;
    }
  };
  struct MarkNeighborCache {
    void operator()(size_t index, NeighborCache **cache) {
      (*cache)->index_in_pq = index;
    }
  };
  Priority_queue<
      NeighborCache *,
      std::vector<NeighborCache *, Mem_root_allocator<NeighborCache *>>,
      CompareByBenefit, MarkNeighborCache>
      m_pq;
};
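The `MarkNeighborCache` hook is what makes cheap priority updates possible: each element always knows its own heap position, so its priority can be changed and its position repaired in O(log n) with no search. The following self-contained toy heap illustrates the mechanism (all names here are hypothetical, not the real `Priority_queue` API):

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Toy element: a benefit plus a back-pointer to its current heap slot.
struct Entry {
  double benefit;
  int index_in_pq = -1;  // -1 means "not currently in the heap".
};

// Toy indexed max-heap: every swap writes the element's new position back
// into the element itself, in the spirit of MarkNeighborCache above.
class IndexedHeap {
 public:
  void Push(Entry *e) {
    heap_.push_back(e);
    e->index_in_pq = static_cast<int>(heap_.size()) - 1;
    SiftUp(heap_.size() - 1);
  }
  Entry *Top() const { return heap_.front(); }

  // Call after e->benefit has changed; e->index_in_pq tells us where it is,
  // so no linear search is needed.
  void Update(Entry *e) {
    SiftUp(static_cast<size_t>(e->index_in_pq));
    SiftDown(static_cast<size_t>(e->index_in_pq));
  }

 private:
  void Swap(size_t i, size_t j) {
    std::swap(heap_[i], heap_[j]);
    heap_[i]->index_in_pq = static_cast<int>(i);
    heap_[j]->index_in_pq = static_cast<int>(j);
  }
  void SiftUp(size_t i) {
    while (i > 0 && heap_[(i - 1) / 2]->benefit < heap_[i]->benefit) {
      Swap(i, (i - 1) / 2);
      i = (i - 1) / 2;
    }
  }
  void SiftDown(size_t i) {
    for (;;) {
      size_t best = i;
      for (size_t c : {2 * i + 1, 2 * i + 2}) {
        if (c < heap_.size() && heap_[best]->benefit < heap_[c]->benefit) {
          best = c;
        }
      }
      if (best == i) return;
      Swap(i, best);
      i = best;
    }
  }
  std::vector<Entry *> heap_;
};
```

Note the comparator orders by ascending benefit, which in a max-heap keeps the largest benefit on top, matching `CompareByBenefit` above.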
void SetNumberOfSimplifications(int num_simplifications,
                                GraphSimplifier *simplifier);

// See comment in .cc file.
void SimplifyQueryGraph(THD *thd, int subgraph_pair_limit,
                        JoinHypergraph *graph, GraphSimplifier *simplifier);