MySQL 9.2.0
Source Code Documentation
hash_join_iterator.h
#ifndef SQL_ITERATORS_HASH_JOIN_ITERATOR_H_
#define SQL_ITERATORS_HASH_JOIN_ITERATOR_H_

/* Copyright (c) 2019, 2024, Oracle and/or its affiliates.

   This program is free software; you can redistribute it and/or modify
   it under the terms of the GNU General Public License, version 2.0,
   as published by the Free Software Foundation.

   This program is designed to work with certain software (including
   but not limited to OpenSSL) that is licensed under separate terms,
   as designated in a particular file or component or in included license
   documentation. The authors of MySQL hereby grant you an additional
   permission to link the program and your derivative works with the
   separately licensed software that they have either included with
   the program or referenced in the documentation.

   This program is distributed in the hope that it will be useful,
   but WITHOUT ANY WARRANTY; without even the implied warranty of
   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
   GNU General Public License, version 2.0, for more details.

   You should have received a copy of the GNU General Public License
   along with this program; if not, write to the Free Software
   Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA */
#include <stdio.h>
#include <cassert>
#include <cstdint>
#include <span>
#include <vector>

#include "my_alloc.h"
#include "my_base.h"
#include "my_table_map.h"
#include "prealloced_array.h"
#include "sql/immutable_string.h"
#include "sql/item_cmpfunc.h"
#include "sql/iterators/hash_join_buffer.h"
#include "sql/iterators/hash_join_chunk.h"
#include "sql/iterators/row_iterator.h"
#include "sql/join_type.h"
#include "sql/mem_root_array.h"
#include "sql/pack_rows.h"
#include "sql/table.h"
#include "sql_string.h"

class Item;
class THD;
struct AccessPath;

struct ChunkPair {
  HashJoinChunk probe_chunk;
  HashJoinChunk build_chunk;
};
/// @file
///
/// An iterator for joining two inputs by using hashing to match rows from
/// the inputs.
///
/// The iterator starts out by doing everything in-memory. If everything fits
/// into memory, the joining algorithm for inner joins works like this:
///
/// 1) Designate one input as the "build" input and one input as the "probe"
/// input. Ideally, the smallest input measured in total size (not number of
/// rows) should be designated as the build input.
///
/// 2) Read all the rows from the build input into an in-memory hash table.
/// The hash key used in the hash table is calculated from the join attributes,
/// e.g., if we have the following query where "orders" is designated as the
/// build input:
///
///   SELECT * FROM lineitem
///     INNER JOIN orders ON orders.o_orderkey = lineitem.l_orderkey;
///
/// the hash value will be calculated from the values in the column
/// orders.o_orderkey. Note that the optimizer recognizes implicit join
/// conditions, so this also works for SQL statements like:
///
///   SELECT * FROM orders, lineitem
///     WHERE orders.o_orderkey = lineitem.l_orderkey;
///
/// 3) Then, we read the rows from the probe input, one by one. For each row,
/// a hash key is calculated for the other side of the join (the probe input)
/// using the join attribute (lineitem.l_orderkey in the above example) and the
/// same hash function as in step 2. This hash key is used to do a lookup in the
/// hash table, and for each match, an output row is produced. Note that the row
/// from the probe input is already located in the table record buffers, and the
/// matching row stored in the hash table is restored back to the record buffers
/// where it originally came from. For details around how rows are stored and
/// restored, see comments on pack_rows::StoreFromTableBuffers.
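///
/// As a minimal, self-contained illustration of steps 1-3 (not MySQL code;
/// the row structs and the std::unordered_multimap stand in for the packed
/// rows and the real hash table):
///
/// @code
///   #include <cstdint>
///   #include <iostream>
///   #include <unordered_map>
///   #include <vector>
///
///   struct OrderRow { int64_t o_orderkey; };     // build input
///   struct LineitemRow { int64_t l_orderkey; };  // probe input
///
///   int main() {
///     const std::vector<OrderRow> orders{{1}, {2}, {2}};
///     const std::vector<LineitemRow> lineitem{{2}, {3}};
///
///     // Step 2: read the build input into a hash table keyed on the join
///     // attribute. A multimap keeps duplicate keys, like the row buffer.
///     std::unordered_multimap<int64_t, OrderRow> hash_table;
///     for (const OrderRow &row : orders) {
///       hash_table.emplace(row.o_orderkey, row);
///     }
///
///     // Step 3: probe with each row from the probe input; every match
///     // produces one output row.
///     for (const LineitemRow &row : lineitem) {
///       auto [it, end] = hash_table.equal_range(row.l_orderkey);
///       for (; it != end; ++it) {
///         std::cout << "match on orderkey " << row.l_orderkey << '\n';
///       }
///     }
///   }
/// @endcode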
///
/// The size of the in-memory hash table is controlled by the system variable
/// join_buffer_size. If we run out of memory during step 2, we degrade into a
/// hybrid hash join. The data already in memory is processed using regular hash
/// join, and the remainder is processed using on-disk hash join. It works like
/// this:
///
/// 1) The rest of the rows in the build input that did not fit into the hash
/// table are partitioned out into a given number of files, represented by
/// HashJoinChunks. We create an equal number of chunk files for both the probe
/// and build input. We determine which file to put a row in by calculating a
/// hash from the join attribute like in step 2 above, but using a different
/// hash function (a sketch of this partitioning follows after step 3).
///
/// 2) Then, we read the rows from the probe input, one by one. We look for a
/// match in the hash table as described above, but the row is also written out
/// to the chunk file on disk, since it might match a row from the build input
/// that we've written to disk.
///
/// 3) When the entire probe input is read, we run the "classic" hash join on
/// each of the corresponding chunk file probe/build pairs. Since the rows are
/// partitioned using the same hash function for probe and build inputs, we know
/// that matching rows must be located in the same pair of chunk files.
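///
/// A rough standalone sketch of the partitioning in step 1 (hypothetical
/// types; the real code hashes the packed join key with xxHash64 and writes
/// the rows through HashJoinChunk):
///
/// @code
///   #include <cstddef>
///   #include <cstdint>
///   #include <functional>
///   #include <vector>
///
///   struct Row { int64_t join_key; };
///   using ChunkFile = std::vector<Row>;  // stand-in for an on-disk chunk
///
///   // Put each row in a chunk chosen by hashing the join attribute. Using
///   // the same function for build and probe rows guarantees that matching
///   // rows end up in chunk files with the same index.
///   void PartitionIntoChunks(const std::vector<Row> &rows,
///                            std::vector<ChunkFile> &chunks) {
///     for (const Row &row : rows) {
///       const size_t chunk_index =
///           std::hash<int64_t>{}(row.join_key) % chunks.size();
///       chunks[chunk_index].push_back(row);
///     }
///   }
/// @endcode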
///
/// The algorithm for semijoin is quite similar to inner joins:
///
/// 1) Designate the inner table (i.e. the IN-side of a semijoin) as the build
/// input. As semijoins only need the first matching row from the inner table,
/// we do not store duplicate keys in the hash table.
///
/// 2) Output all rows from the probe input where there is at least one matching
/// row in the hash table. In case we have degraded into on-disk hash join, we
/// write the probe row out to a chunk file only if we did not find a matching
/// row in the hash table.
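///
/// Sketched standalone (hypothetical types): a semijoin with no extra
/// conditions only needs key existence, so a structure without duplicate
/// keys suffices:
///
/// @code
///   #include <cstdint>
///   #include <unordered_set>
///   #include <vector>
///
///   // Build: one entry per distinct join key from the inner (IN-side)
///   // input. Probe: output a probe row as soon as its key is found;
///   // duplicates on the build side cannot change the result.
///   std::vector<int64_t> Semijoin(const std::vector<int64_t> &build_keys,
///                                 const std::vector<int64_t> &probe_keys) {
///     const std::unordered_set<int64_t> keys(build_keys.begin(),
///                                            build_keys.end());
///     std::vector<int64_t> result;
///     for (const int64_t probe_key : probe_keys) {
///       if (keys.count(probe_key) != 0) result.push_back(probe_key);
///     }
///     return result;
///   }
/// @endcode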
///
/// The optimizer may set up semijoins with conditions that are not pure join
/// conditions, but that must be attached to the hash join iterator anyway.
/// Consider the following query and (slightly modified) execution plan:
///
///   SELECT c FROM t WHERE 1 IN (SELECT t.c = col1 FROM t1);
///
///   -> Hash semijoin (no condition), extra conditions: (1 = (t.c = t1.col1))
///     -> Table scan on t
///     -> Hash
///       -> Table scan on t1
///
/// In this query, the optimizer has set up the condition (1 = (t.c = t1.col1))
/// as the semijoin condition. We cannot use this as a join condition, since
/// hash join only supports equi-join conditions. However, we cannot attach this
/// as a filter after the join either, as that would cause wrong results. We
/// attach these conditions as "extra" conditions to the hash join iterator,
/// which causes these notable behaviors:
///
/// a. If we have any extra conditions, we cannot reject duplicate keys in the
///    hash table: the first row matching the join condition could fail the
///    extra condition(s).
///
/// b. We can only output rows if all extra conditions pass. If any of the extra
///    conditions fail, we must go to the next matching row in the hash table
///    (see the sketch after this list).
///
/// c. In case of on-disk hash join, we must write the probe row to disk _after_
///    we have checked that there are no rows in the hash table that match any
///    of the extra conditions.
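///
/// A small sketch of behaviors a. and b. (standalone; the multimap and the
/// hard-coded condition stand in for the real hash table and Item
/// evaluation):
///
/// @code
///   #include <cstdint>
///   #include <optional>
///   #include <unordered_map>
///
///   struct BuildRow { int64_t key; int64_t col1; };
///
///   // Stand-in for evaluating the AND-ed extra conditions; here, the
///   // example condition 1 = (t.c = t1.col1) from above.
///   bool ExtraConditionsPass(const BuildRow &build, int64_t probe_col) {
///     return build.col1 == probe_col;
///   }
///
///   // Behavior a. forces duplicate keys to be kept (hence a multimap).
///   // Behavior b.: keep scanning rows with a matching key until one also
///   // passes the extra conditions; only then may a row be output.
///   std::optional<BuildRow> FirstAcceptedMatch(
///       const std::unordered_multimap<int64_t, BuildRow> &hash_table,
///       int64_t probe_key, int64_t probe_col) {
///     auto [it, end] = hash_table.equal_range(probe_key);
///     for (; it != end; ++it) {
///       if (ExtraConditionsPass(it->second, probe_col)) return it->second;
///     }
///     return std::nullopt;  // no match passed; nothing is output
///   }
/// @endcode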
///
/// If we are able to execute the hash join in memory (classic hash join),
/// the output will be sorted the same as the left (probe) input. If we start
/// spilling to disk, we lose any reasonable ordering properties.
///
/// Note that we still might end up in a case where a single chunk file from
/// disk won't fit into memory. This is resolved by reading as much as possible
/// into the hash table, and then reading the entire probe chunk file each
/// time the hash table is reloaded. This might happen if we have a very skewed
/// data set, for instance.
///
/// When we start spilling to disk, we allocate a maximum of "kMaxChunks"
/// chunk files on disk for each of the two inputs. The reason for having an
/// upper limit is to avoid running out of file descriptors.
///
/// There is also a flag we can set to avoid hash join spilling to disk
/// regardless of the input size. If the flag is set, the join algorithm works
/// like this:
///
/// 1) Read as many rows as possible from the build input into an in-memory hash
/// table.
/// 2) When the hash table is full (we have reached the limit set by the system
/// variable join_buffer_size), start reading from the beginning of the probe
/// input, probing for matches in the hash table. Output a row for each match
/// found.
/// 3) When the probe input is empty, see if there are any remaining rows in the
/// build input. If so, clear the in-memory hash table and go to step 1,
/// continuing from the build input where we stopped the last time. If not, the
/// join is done. (A compact sketch of this loop follows below.)
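///
/// A compact sketch of this refill loop (standalone; vectors and a function
/// pointer stand in for the input iterators and row output):
///
/// @code
///   #include <cstddef>
///   #include <cstdint>
///   #include <unordered_map>
///   #include <vector>
///
///   struct Row { int64_t key; };
///
///   // Hypothetical row-count cap standing in for join_buffer_size.
///   constexpr size_t kMaxRowsInHashTable = 1024;
///
///   void InMemoryJoinWithRefill(const std::vector<Row> &build,
///                               const std::vector<Row> &probe,
///                               void (*output)(const Row &, const Row &)) {
///     size_t build_pos = 0;
///     while (build_pos < build.size()) {  // step 3: more build rows left?
///       // Step 1: refill the hash table from where we stopped last time.
///       std::unordered_multimap<int64_t, Row> hash_table;
///       while (build_pos < build.size() &&
///              hash_table.size() < kMaxRowsInHashTable) {
///         hash_table.emplace(build[build_pos].key, build[build_pos]);
///         ++build_pos;
///       }
///       // Step 2: read the probe input from the beginning, once per refill.
///       for (const Row &probe_row : probe) {
///         auto [it, end] = hash_table.equal_range(probe_row.key);
///         for (; it != end; ++it) output(probe_row, it->second);
///       }
///     }
///   }
/// @endcode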
///
/// Doing everything in memory can be beneficial in a few cases. Currently, it
/// is used when we have a LIMIT without sorting or grouping in the query. The
/// gain is that we start producing output rows a lot earlier than if we were to
/// spill both inputs out to disk. It could also be beneficial if the build
/// input _almost_ fits in memory; it would likely be better to read the probe
/// input twice instead of writing both inputs out to disk. However, we do not
/// currently do any such cost-based optimization.
///
/// There is a concept called "probe row saving" in the iterator. This is a
/// technique that is enabled in two different scenarios: when a hash join build
/// chunk does not fit entirely in memory and when hash join is not allowed to
/// spill to disk. Common to these two scenarios is that a probe row will be
/// read multiple times. For certain join types (semijoin), we must take care so
/// that the same probe row is not sent to the client multiple times. Probe row
/// saving takes care of this by doing the following:
///
/// - If we realize that we are going to read the same probe row multiple times,
///   we enable probe row saving.
/// - When a probe row is read, we write the row out to a probe row saving write
///   file, given that it matches certain conditions (for semijoin we only save
///   unmatched probe rows).
/// - After the probe input is consumed, we will swap the probe row saving
///   _write_ file and the probe row saving _read_ file, making the write file
///   available for writing again (a sketch of this swap follows after this
///   list).
/// - When we are to read the probe input again, we read the probe rows from the
///   probe row saving read file. This ensures that we, for instance, do not
///   output the same probe row twice for semijoin. Note that if the rows we
///   read from the probe row saving read file will be read again (e.g., we have
///   a big hash join build chunk that is many times bigger than the available
///   hash table memory, causing us to process the chunk file in chunks), we
///   will again write the rows to a new probe row saving write file. This
///   reading from the read file and writing to a new write file continues until
///   we know that we are seeing the probe rows for the last time.
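///
/// A sketch of the write/read file swap for semijoin (standalone; plain
/// vectors stand in for the real HashJoinChunk-backed files):
///
/// @code
///   #include <cstdint>
///   #include <unordered_set>
///   #include <utility>
///   #include <vector>
///
///   struct ProbeRowSaving {
///     std::vector<int64_t> write_file;  // rows saved during this pass
///     std::vector<int64_t> read_file;   // rows to probe with in this pass
///
///     // Semijoin: save only probe rows that did not match, so that a row
///     // that already produced output is never read (and output) again.
///     void SaveIfUnmatched(int64_t probe_key,
///                          const std::unordered_set<int64_t> &hash_table) {
///       if (hash_table.count(probe_key) == 0) {
///         write_file.push_back(probe_key);
///       }
///     }
///
///     // After the probe input is consumed: swap the files, making the
///     // write file available for writing in the next pass.
///     void SwapForNextPass() {
///       std::swap(write_file, read_file);
///       write_file.clear();
///     }
///   };
/// @endcode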
///
/// We use the same methods as on-disk hash join (HashJoinChunk) for reading and
/// writing rows to files. Note that probe row saving is never enabled for inner
/// joins, since we do want to output the same probe row multiple times if it
/// matches multiple rows from the build input. There are some differences
/// regarding when probe row saving is enabled, depending on the hash join type
/// (see enum HashJoinType):
///
/// - IN_MEMORY: Probe row saving is never activated, since the probe input is
///   read only once.
/// - SPILL_TO_DISK: If a build chunk file does not fit in memory (may happen
///   with a skewed data set), we will have to read the corresponding probe
///   chunk multiple times. In this case, probe row saving is enabled as soon as
///   we see that the build chunk does not fit in memory, and remains active
///   until the entire build chunk is consumed. After the probe chunk is read
///   once, we swap the probe row saving write file and probe row saving read
///   file so that probe rows will be read from the probe row saving read file.
///   Probe row saving is deactivated once we move to the next pair of chunk
///   files.
/// - IN_MEMORY_WITH_HASH_TABLE_REFILL: Probe row saving is activated when we
///   see that the build input is too large to fit in memory. Once the probe
///   iterator has been consumed once, we swap the probe row saving write file
///   and probe row saving read file so that probe rows will be read from the
///   probe row saving read file. As long as the build input is not fully
///   consumed, we write probe rows from the read file out to a new write file,
///   swapping these files for every hash table refill. Probe row saving is
///   never deactivated in this hash join type.
///
/// Note that we always write the entire row when writing to the probe row
/// saving file. It would be possible to write only the match flag, but this is
/// tricky as long as we have the hash join type
/// IN_MEMORY_WITH_HASH_TABLE_REFILL. If we were to write only match flags in
/// this hash join type, we would have to read the probe iterator multiple
/// times. But there is no guarantee that rows will come in the same order when
/// reading an iterator multiple times (e.g. NDB does not guarantee this), so it
/// would require us to store match flags in a lookup structure using a row ID
/// as the key. Due to this, we will reconsider this if the hash join type
/// IN_MEMORY_WITH_HASH_TABLE_REFILL goes away.
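///
/// For orientation, this is roughly how a caller drives the iterator, like
/// any RowIterator (a hedged sketch of the surrounding executor loop, using
/// the usual RowIterator::Read() convention: 0 = row ready in the record
/// buffers, -1 = EOF, 1 = error):
///
/// @code
///   bool ExecuteHashJoin(RowIterator *iterator) {
///     if (iterator->Init()) return true;  // also builds the hash table
///     for (;;) {
///       const int err = iterator->Read();
///       if (err == 1) return true;    // error; my_error has been called
///       if (err == -1) return false;  // EOF, the join is done
///       // err == 0: the joined row is in the tables' record buffers.
///     }
///   }
/// @endcode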

/// The two inputs that we read from when executing a hash join.
enum class HashJoinInput {
  /// We build the hash table from this input.
  kBuild,
  /// For each row from this input, we try to find a match in the hash table.
  kProbe
};

class HashJoinIterator final : public RowIterator {
 public:
  /// Construct a HashJoinIterator.
  ///
  /// @param thd
  ///   the thread handle
  /// @param build_input
  ///   the iterator for the build input
  /// @param build_input_tables
  ///   a list of all the tables in the build input. The tables are needed for
  ///   two things:
  ///   1) Accessing the columns when creating the join key during creation of
  ///   the hash table,
  ///   2) and accessing the column data when creating the row to be stored in
  ///   the hash table and/or the chunk file on disk.
  /// @param estimated_build_rows
  ///   How many rows we assume there will be when reading the build input.
  ///   This is used to choose how many chunks we break it into on disk.
  /// @param probe_input
  ///   the iterator for the probe input
  /// @param probe_input_tables
  ///   the probe input tables. Needed for the same reasons as
  ///   build_input_tables.
  /// @param store_rowids whether we need to make sure row ids are available
  ///   for all tables below us, after Read() has been called. Used only if
  ///   we are below a weedout operation.
  /// @param tables_to_get_rowid_for a map of which tables we need to call
  ///   position() for ourselves. Tables that are in build_input_tables
  ///   but not in this map are expected to be handled by some other iterator.
  ///   Tables that are in this map but not in build_input_tables will be
  ///   ignored.
  /// @param max_memory_available
  ///   the amount of memory available, in bytes, for this hash join iterator.
  ///   This can be user-controlled by setting the system variable
  ///   join_buffer_size.
  /// @param join_conditions
  ///   a list of all the join conditions between the two inputs
  /// @param allow_spill_to_disk
  ///   whether the hash join can spill to disk. This is set to false in some
  ///   cases where we have a LIMIT in the query
  /// @param join_type
  ///   The join type.
  /// @param extra_conditions
  ///   A list of extra conditions that the iterator will evaluate after a
  ///   lookup in the hash table is done, but before the row is returned. The
  ///   conditions are AND-ed together into a single Item.
  /// @param single_row_index_lookups
  ///   All the single-row index lookups in the build input and probe input.
  /// @param first_input
  ///   The first input (build or probe) to read from. (If this is empty, we
  ///   will not have to read from the other.)
  /// @param probe_input_batch_mode
  ///   Whether we need to enable batch mode on the probe input table.
  ///   Only makes sense if it is a single table, and we are not on the
  ///   outer side of any nested loop join.
  /// @param hash_table_generation
  ///   If this is non-nullptr, it is a counter of how many times the query
  ///   block the iterator is a part of has been asked to clear hash tables,
  ///   since outer references may have changed value. It is used to know when
  ///   we need to drop our hash table; when the value changes, we need to drop
  ///   it. If it is nullptr, we _always_ drop it on Init().
  HashJoinIterator(THD *thd, unique_ptr_destroy_only<RowIterator> build_input,
                   const Prealloced_array<TABLE *, 4> &build_input_tables,
                   double estimated_build_rows,
                   unique_ptr_destroy_only<RowIterator> probe_input,
                   const Prealloced_array<TABLE *, 4> &probe_input_tables,
                   bool store_rowids, table_map tables_to_get_rowid_for,
                   size_t max_memory_available,
                   const std::vector<HashJoinCondition> &join_conditions,
                   bool allow_spill_to_disk, JoinType join_type,
                   const Mem_root_array<Item *> &extra_conditions,
                   std::span<AccessPath *> single_row_index_lookups,
                   HashJoinInput first_input, bool probe_input_batch_mode,
                   uint64_t *hash_table_generation);

  bool Init() override;

  int Read() override;

  void SetNullRowFlag(bool is_null_row) override {
    // Don't call this after Init() but before calling Read() for the first
    // time. Init() may have loaded a row that is (partially or fully) a null
    // row, so resetting the null row flags is incorrect.
    m_build_input->SetNullRowFlag(is_null_row);
    m_probe_input->SetNullRowFlag(is_null_row);
  }

  void EndPSIBatchModeIfStarted() override {
    m_build_input->EndPSIBatchModeIfStarted();
    m_probe_input->EndPSIBatchModeIfStarted();
  }

  void UnlockRow() override {
    // Since both inputs may have been materialized to disk, we cannot unlock
    // them.
  }

  int ChunkCount() { return m_chunk_files_on_disk.size(); }

 private:
  /// Read all rows from the build input and store the rows into the in-memory
  /// hash table. If the hash table goes full, the rest of the rows are written
  /// out to chunk files on disk. See the class comment for more details.
  ///
  /// @retval true in case of error
  bool BuildHashTable();

  /// Write all the remaining rows from the build table input to chunk files on
  /// disk.
  ///
  /// @return True on error, false on success.
  bool WriteBuildTableToChunkFiles();

  /// Read all rows from the next chunk file into the in-memory hash table.
  /// See the class comment for details.
  ///
  /// @retval true in case of error
  bool ReadNextHashJoinChunk();

  /// Read a single row from the probe iterator input into the tables' record
  /// buffers. If we have started spilling to disk, the row is written out to a
  /// chunk file on disk as well.
  ///
  /// The end condition is that either:
  /// a) a row is ready in the tables' record buffers, and the state will be set
  ///    to READING_FIRST_ROW_FROM_HASH_TABLE.
  /// b) there are no more rows to process from the probe input, so the iterator
  ///    state will be LOADING_NEXT_CHUNK_PAIR.
  ///
  /// @retval true in case of error
  bool ReadRowFromProbeIterator();

  /// Read a single row from the current probe chunk file into the tables'
  /// record buffers. The end conditions are the same as for
  /// ReadRowFromProbeIterator().
  ///
  /// @retval true in case of error
  bool ReadRowFromProbeChunkFile();

  /// Read a single row from the probe row saving file into the tables' record
  /// buffers.
  ///
  /// @retval true in case of error
  bool ReadRowFromProbeRowSavingFile();

  // Do a lookup in the hash table for matching rows from the build input.
  // The lookup is done by computing the join key from the probe input, and
  // using that join key for doing a lookup in the hash table. If the join key
  // contains one or more SQL NULLs, the row cannot match anything and will be
  // skipped, and the iterator state will be READING_ROW_FROM_PROBE_INPUT. If
  // not, the iterator state will be READING_FIRST_ROW_FROM_HASH_TABLE.
  //
  // After this function is called, ReadJoinedRow() will return false until
  // there are no more matching rows for the computed join key.
  void LookupProbeRowInHashTable();

  /// Take the next matching row from the hash table, and put the row into the
  /// build tables' record buffers. The function expects that
  /// LookupProbeRowInHashTable() has been called up-front. The user must
  /// call ReadJoinedRow() as long as it returns false, as there may be
  /// multiple matching rows from the hash table. It is up to the caller to set
  /// a new state in case of EOF.
  ///
  /// @retval 0 if a match was found and the row is put in the build tables'
  ///           record buffers
  /// @retval -1 if there are no more matching rows in the hash table
  int ReadJoinedRow();

  // Have we degraded into on-disk hash join?
  bool on_disk_hash_join() const { return !m_chunk_files_on_disk.empty(); }

  /// Write the last row read from the probe input out to chunk files on disk,
  /// if applicable.
  ///
  /// For inner joins, we must write all probe rows to chunk files, since we
  /// need to match the row against rows from the build input that are written
  /// out to chunk files. For semijoin, we can only write probe rows that do not
  /// match any of the rows in the hash table. Writing a probe row with a
  /// matching row in the hash table could cause the row to be returned multiple
  /// times.
  ///
  /// @retval true in case of errors.
  bool WriteProbeRowToDiskIfApplicable();

  /// @retval true if the last joined row passes all of the extra conditions.
  bool JoinedRowPassesExtraConditions() const;

  /// If true, reject duplicate keys in the hash table.
  ///
  /// Semijoins/antijoins are only interested in the first matching row from the
  /// hash table, so we can avoid storing duplicate keys in order to save some
  /// memory. However, this cannot be applied if we have any "extra" conditions:
  /// the first matching row in the hash table may fail the extra condition(s).
  ///
  /// @retval true if we can reject duplicate keys in the hash table.
  bool RejectDuplicateKeys() const {
    return m_extra_condition == nullptr &&
           (m_join_type == JoinType::SEMI || m_join_type == JoinType::ANTI);
  }

  /// Clear the row buffer and reset all iterators pointing to it. This may be
  /// called multiple times to re-init the row buffer.
  ///
  /// @retval true in case of error. my_error has been called
  bool InitRowBuffer();

  /// Prepare to read the probe iterator from the beginning, and enable batch
  /// mode if applicable. The iterator state will remain unchanged.
  ///
  /// @retval true in case of error. my_error has been called.
  bool InitProbeIterator();

  /// Mark that probe row saving is enabled, and prepare the probe row saving
  /// file for writing.
  /// @see m_write_to_probe_row_saving
  ///
  /// @retval true in case of error. my_error has been called.
  bool InitWritingToProbeRowSavingFile();

  /// Mark that we should read from the probe row saving file. The probe row
  /// saving file is rewound to the beginning.
  /// @see m_read_from_probe_row_saving
  ///
  /// @retval true in case of error. my_error has been called.
  bool InitReadingFromProbeRowSavingFile();

  /// Set the iterator state to the correct READING_ROW_FROM_PROBE_*-state.
  /// Which state we end up in depends on which hash join type we are executing
  /// (in-memory, on-disk or in-memory with hash table refill).
  void SetReadingProbeRowState();

  /// Read a joined row from the hash table, and see if it passes any extra
  /// conditions. The last probe row read will also be written to disk if needed
  /// (see WriteProbeRowToDiskIfApplicable).
  ///
  /// @retval -1 There are no more matching rows in the hash table.
  /// @retval 0 A joined row is ready.
  /// @retval 1 An error occurred.
  int ReadNextJoinedRowFromHashTable();

  enum class State {
    // We are reading a row from the probe input, where the row comes from
    // the iterator.
    READING_ROW_FROM_PROBE_ITERATOR,
    // We are reading a row from the probe input, where the row comes from a
    // chunk file.
    READING_ROW_FROM_PROBE_CHUNK_FILE,
    // We are reading a row from the probe input, where the row comes from a
    // probe row saving file.
    READING_ROW_FROM_PROBE_ROW_SAVING_FILE,
    // The iterator is moving to the next pair of chunk files, where the chunk
    // file from the build input will be loaded into the hash table.
    LOADING_NEXT_CHUNK_PAIR,
    // We are reading the first row returned from the hash table lookup that
    // also passes extra conditions.
    READING_FIRST_ROW_FROM_HASH_TABLE,
    // We are reading the remaining rows returned from the hash table lookup.
    READING_FROM_HASH_TABLE,
    // No more rows, both inputs are empty.
    END_OF_ROWS
  };

  State m_state;
  uint64_t *m_hash_table_generation;
  uint64_t m_last_hash_table_generation;

  const unique_ptr_destroy_only<RowIterator> m_build_input;
  const unique_ptr_destroy_only<RowIterator> m_probe_input;

  // The last row that was read from the hash table, or nullptr if none.
  // All rows under the same key are linked together (see the documentation
  // for LinkedImmutableString), so this allows iterating through the rows
  // until the end.
  LinkedImmutableString m_current_row{nullptr};

  // These structures hold the tables and columns that are needed for the hash
  // join. Rows/columns that are not needed are filtered out in the constructor.
  // We need to know which tables belong to each iterator, so that we can
  // compute the join key when needed.
  pack_rows::TableCollection m_probe_input_tables;
  pack_rows::TableCollection m_build_input_tables;

  // An in-memory hash table that holds rows from the build input (directly from
  // the build input iterator, or from a chunk file). See the class comment for
  // details on how and when this is used.
  hash_join_buffer::HashJoinRowBuffer m_row_buffer;

  // A list of the join conditions (all of them are equi-join conditions).
  Prealloced_array<HashJoinCondition, 4> m_join_conditions;

  // Array to hold the list of chunk files on disk in case we degrade into
  // on-disk hash join.
  Mem_root_array<ChunkPair> m_chunk_files_on_disk;

  // Which HashJoinChunk, if any, we are currently reading from, in both
  // LOADING_NEXT_CHUNK_PAIR and READING_ROW_FROM_PROBE_CHUNK_FILE.
  // It is incremented during the state LOADING_NEXT_CHUNK_PAIR.
  int m_current_chunk;

  // Which row we currently are reading from each of the hash join chunk files.
  ha_rows m_build_chunk_current_row;
  ha_rows m_probe_chunk_current_row;

  // How many rows we assume there will be when reading the build input.
  // This is used to choose how many chunks we break it into on disk.
  const double m_estimated_build_rows;

 public:
  // The seed that is used by xxHash64 when calculating the hash from a join
  // key. We use xxHash64 when calculating the hash that is used for
  // determining which chunk file a row should be placed in (in case of
  // on-disk hash join); if we used the same hash function (and seed) for
  // both operations, we would get a really bad hash table when loading
  // a chunk file to the hash table. The number is chosen randomly and has
  // no special meaning.
  static constexpr uint32_t kChunkPartitioningHashSeed{899339};

  // The maximum number of HashJoinChunks that is allocated for each of the
  // inputs in case we spill to disk. We might very well end up with fewer
  // files than this number, but we keep an upper limit so we don't risk
  // running out of file descriptors. We always use a power of two number of
  // files, which allows us to do some optimizations when calculating which
  // chunk a row should be placed in.
  static constexpr size_t kMaxChunks = 128;
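
  // Illustrative sketch (not a member of this class): with a power-of-two
  // number of chunk files, the chunk index for a row can be computed with a
  // bit mask instead of a modulo, e.g. assuming a 64-bit hash of the join
  // key:
  //
  //   size_t ChunkIndex(uint64_t hash, size_t num_chunks) {
  //     return hash & (num_chunks - 1);  // == hash % num_chunks for 2^k
  //   }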

 private:
  // A buffer that is used during two phases:
  // 1) when constructing a join key from join conditions.
  // 2) when moving a row between tables' record buffers and the hash table.
  //
  // There are two functions that need this buffer: ConstructJoinKey() and
  // StoreFromTableBuffers(). After calling one of these functions, the user
  // must take responsibility for the data if it is needed for a longer
  // lifetime.
  //
  // If there are no BLOB/TEXT columns in the join, we calculate an upper bound
  // of the row size that is used to preallocate this buffer. In the case of
  // BLOB/TEXT columns, we cannot calculate a reasonable upper bound, and the
  // row size is calculated per row. The allocated memory is kept for the
  // duration of the iterator, so that we (most likely) avoid reallocations.
  String m_temporary_row_and_join_key_buffer;

  /// The first input (build or probe) to read from. (If this is empty, we will
  /// not have to read from the other.)
  HashJoinInput m_first_input;

  // Whether we should turn on batch mode for the probe input. Batch mode is
  // enabled if the probe input consists of exactly one table, and said table
  // can return more than one row and has no associated subquery condition.
  // (See ShouldEnableBatchMode().)
  bool m_probe_input_batch_mode;

  // Whether we are allowed to spill to disk.
  bool m_allow_spill_to_disk;

  // Whether the build iterator has more rows. This is used to stop the hash
  // join iterator asking for more rows when we know for sure that the entire
  // build input is consumed. The variable is only used if m_allow_spill_to_disk
  // is false, as we have to see if there are more rows in the build input after
  // the probe input is consumed.
  bool m_build_iterator_has_more_rows;

  // What kind of join the iterator should execute.
  const JoinType m_join_type;

  // If not nullptr, an extra condition that the iterator will evaluate after a
  // lookup in the hash table is done, but before the row is returned. This is
  // needed in case we have a semijoin condition that is not an equi-join
  // condition (e.g. 't1.col1 < t2.col1').
  Item *m_extra_condition;

  // Whether we should write rows from the probe input to the probe row saving
  // write file. See the class comment on HashJoinIterator for details around
  // probe row saving.
  bool m_write_to_probe_row_saving{false};

  // Whether we should read rows from the probe row saving read file. See the
  // class comment on HashJoinIterator for details around probe row saving.
  bool m_read_from_probe_row_saving{false};

  // The probe row saving files where unmatched probe rows are written to and
  // read from.
  HashJoinChunk m_probe_row_saving_write_file;
  HashJoinChunk m_probe_row_saving_read_file;

  // Which row we currently are reading from in the probe row saving read file.
  // Used to know whether we have reached the end of the file. How many rows
  // the probe row saving read file contains is stored in the HashJoinChunk
  // (see m_probe_row_saving_read_file).
  ha_rows m_probe_row_saving_read_file_current_row;

  // All the single-row index lookups that provide rows to this iterator.
  std::span<AccessPath *> m_single_row_index_lookups;

  // The "type" of hash join we are executing. We currently have three different
  // types of hash join:
  // - In memory: We do everything in memory without any refills of the hash
  //   table. Each input is read only once, and nothing is written to disk.
  // - Spill to disk: If the build input does not fit in memory, we write both
  //   inputs out to a set of chunk files. Both inputs are partitioned using a
  //   hash function over the join attribute, ensuring that matching rows can be
  //   found in the same set of chunk files. Each pair of chunk files is then
  //   processed as an in-memory hash join.
  // - In memory with hash table refill: This is enabled if we are not allowed
  //   to spill to disk, and the build input does not fit in memory. We read as
  //   much as possible from the build input into the hash table. We then read
  //   the entire probe input, probing for matching rows in the hash table.
  //   When the probe input returns EOF, the hash table is refilled with the
  //   rows that did not fit the first time. The entire probe input is read
  //   again, and this is repeated until the entire build input is consumed.
  enum class HashJoinType {
    IN_MEMORY,
    SPILL_TO_DISK,
    IN_MEMORY_WITH_HASH_TABLE_REFILL,
  };
  HashJoinType m_hash_join_type{HashJoinType::IN_MEMORY};

  // The match flag for the last probe row read from chunk file.
  //
  // This is needed if an outer join spills to disk; a probe row can match a row
  // from the build input we haven't seen yet (it's been written out to disk
  // because the hash table was full). So when reading a probe row from a chunk
  // file, this variable holds the match flag. This flag must be a class member,
  // since one probe row may match multiple rows from the hash table; the
  // execution will go out of HashJoinIterator::Read() between each matching
  // row, causing any local match flag to lose the match flag info from the last
  // probe row read.
  bool m_probe_row_match_flag{false};

  /// If true, a row was already read from the probe input, in order to check if
  /// that input was empty. If so, we should process that row before reading
  /// another.
  bool m_probe_row_read{false};

  /// Helper function for Init(). Read the first row from m_probe_input.
  /// @returns 'true' if there was an error.
  bool ReadFirstProbeRow();

  /// Helper function for Init(). Build the hash table and check for empty query
  /// results (empty build input or non-empty build input in case of degenerate
  /// antijoin).
  /// @returns 'true' in case of error.
  bool InitHashTable();
};

#endif  // SQL_ITERATORS_HASH_JOIN_ITERATOR_H_