MySQL 9.5.0
Source Code Documentation
bka_iterator.h
#ifndef SQL_ITERATORS_BKA_ITERATOR_H_
#define SQL_ITERATORS_BKA_ITERATOR_H_

/* Copyright (c) 2019, 2025, Oracle and/or its affiliates.

   This program is free software; you can redistribute it and/or modify
   it under the terms of the GNU General Public License, version 2.0,
   as published by the Free Software Foundation.

   This program is designed to work with certain software (including
   but not limited to OpenSSL) that is licensed under separate terms,
   as designated in a particular file or component or in included license
   documentation. The authors of MySQL hereby grant you an additional
   permission to link the program and your derivative works with the
   separately licensed software that they have either included with
   the program or referenced in the documentation.

   This program is distributed in the hope that it will be useful,
   but WITHOUT ANY WARRANTY; without even the implied warranty of
   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
   GNU General Public License, version 2.0, for more details.

   You should have received a copy of the GNU General Public License
   along with this program; if not, write to the Free Software
   Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA */

/**
  Batch key access (BKA) is a join strategy that uses multi-range read (MRR)
  to get better read ordering on the table on the inner side. It reads
  a block of rows from the outer side, picks up the join keys (refs) from
  each row, and sends them all off in one big read request. The handler can
  then order these and read them in whatever order it would prefer. This is
  especially attractive when used with rotating media; the reads can then be
  ordered such that they do not require constant seeking (disk-sweep MRR,
  or DS-MRR).

  BKA is implemented with two iterators working in concert. The BKAIterator
  reads rows from the outer side into a buffer. When the buffer is full or we
  are out of rows, it sets up the key ranges and hands them over to the
  MultiRangeRowIterator, which does the actual request, and reads rows from it
  until there are none left. For each inner row returned, MultiRangeRowIterator
  loads the appropriate outer row(s) from the buffer, doing the actual join.

  The reason for this split is twofold. First, it allows us to accurately time
  (for EXPLAIN ANALYZE) the actual table read. Second, and more importantly,
  we can have other iterators between the BKAIterator and the
  MultiRangeRowIterator, in particular FilterIterator.
 */

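A rough sketch of the core idea (illustrative only, not MySQL's actual implementation): buffer the join keys from a batch of outer rows, sort them, and probe the inner index in key order, so that the lookups become one ordered sweep instead of random point reads:

```cpp
#include <algorithm>
#include <map>
#include <string>
#include <vector>

// Hypothetical outer row: a join key plus a payload.
struct OuterRow {
  int join_key;
  std::string payload;
};

// Collect the keys of a block of outer rows, sort them, and probe the inner
// index in sorted order (the "better read ordering" BKA is after).
// Returns the keys in the order they were actually probed.
std::vector<int> BatchedProbeOrder(
    const std::vector<OuterRow> &outer_batch,
    const std::multimap<int, std::string> &inner_index) {
  std::vector<int> keys;
  for (const OuterRow &row : outer_batch) keys.push_back(row.join_key);
  std::sort(keys.begin(), keys.end());  // One big ordered read request.
  std::vector<int> probed;
  for (int key : keys) {
    auto [first, last] = inner_index.equal_range(key);
    for (auto it = first; it != last; ++it) {
      // A real implementation would now join each inner row back to the
      // matching outer row(s) buffered in memory.
    }
    probed.push_back(key);
  }
  return probed;
}
```

With rotating media, probing `{1, 3, 5}` in sorted order rather than the arrival order `{5, 1, 3}` is what turns random seeks into a single disk sweep.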
#include <assert.h>
#include <stddef.h>
#include <sys/types.h>
#include <iterator>
#include <memory>
#include <span>

#include "my_alloc.h"

#include "my_inttypes.h"
#include "my_table_map.h"
#include "sql/handler.h"
#include "sql/iterators/hash_join_buffer.h"
#include "sql/iterators/row_iterator.h"
#include "sql/join_type.h"
#include "sql/mem_root_array.h"
#include "sql/pack_rows.h"
#include "sql_string.h"
#include "template_utils.h"

class Item;
class JOIN;
class MultiRangeRowIterator;
class THD;
struct AccessPath;
struct Index_lookup;
struct KEY_MULTI_RANGE;
struct TABLE;

/**
  The BKA join iterator, with an arbitrary iterator tree on the outer side
  and a MultiRangeRowIterator on the inner side (possibly with a filter or
  similar in-between). See the file comment for more details.
 */
class BKAIterator final : public RowIterator {
 public:
  /**
    @param thd Thread handle.
    @param outer_input The iterator to read the outer rows from.
    @param outer_input_tables Each outer table involved.
      Used to know which fields we are to read into our buffer.
    @param inner_input The iterator to read the inner rows from.
      Must end up in a MultiRangeRowIterator.
    @param max_memory_available Number of bytes available for row buffers,
      both outer rows and MRR buffers. Note that allocation is incremental,
      so we can allocate less than this.
    @param mrr_bytes_needed_for_single_inner_row Number of bytes MRR needs
      space for in its buffer for holding a single row from the inner table.
    @param expected_inner_rows_per_outer_row Number of inner rows we
      statistically expect for each outer row. Used for dividing the buffer
      space between inner rows and the MRR row buffer (if we expect many inner
      rows, we cannot load as many outer rows).
    @param store_rowids Whether we need to make sure all tables below us have
      row IDs available, after Read() has been called. Used only if
      we are below a weedout operation.
    @param tables_to_get_rowid_for A map of which tables BKAIterator needs
      to call position() for itself. Tables that are in outer_input_tables
      but not in this map are expected to be handled by some other iterator.
      Tables that are in this map but not in outer_input_tables will be
      ignored.
    @param mrr_iterator Pointer to the MRR iterator at the bottom of
      inner_input. Used to send row ranges and buffers.
    @param single_row_index_lookups All the single-row index lookups that
      provide input to this iterator.
    @param join_type What kind of join we are executing.
   */
  BKAIterator(THD *thd, unique_ptr_destroy_only<RowIterator> outer_input,
              const Prealloced_array<TABLE *, 4> &outer_input_tables,
              unique_ptr_destroy_only<RowIterator> inner_input,
              size_t max_memory_available,
              size_t mrr_bytes_needed_for_single_inner_row,
              float expected_inner_rows_per_outer_row, bool store_rowids,
              table_map tables_to_get_rowid_for,
              MultiRangeRowIterator *mrr_iterator,
              std::span<AccessPath *> single_row_index_lookups,
              JoinType join_type);

  void SetNullRowFlag(bool is_null_row) override {
    m_outer_input->SetNullRowFlag(is_null_row);
    m_inner_input->SetNullRowFlag(is_null_row);
  }

  void UnlockRow() override {
    // Since we don't know which condition caused the row to be rejected,
    // we can't know whether we could also unlock the outer row
    // (it may still be used as part of other joined rows).
    if (m_state == State::RETURNING_JOINED_ROWS) {
      m_inner_input->UnlockRow();
    }
  }

  void EndPSIBatchModeIfStarted() override {
    m_outer_input->EndPSIBatchModeIfStarted();
    m_inner_input->EndPSIBatchModeIfStarted();
  }

 private:
  bool DoInit() override;
  int DoRead() override;

  /// Clear out the MEM_ROOT and prepare for reading rows anew.
  void BeginNewBatch();

  /// If there are more outer rows, begin the next batch. If not,
  /// move to the EOF state.
  void BatchFinished();

  /// Find the next unmatched row, and load it for output as a NULL-complemented
  /// row. (Assumes the NULL row flag has already been set on the inner table
  /// iterator.) Returns 0 if a row was found, -1 if no row was found. (Errors
  /// cannot happen.)
  int MakeNullComplementedRow();

  /// Read a batch of outer rows (BeginNewBatch() must have been called
  /// earlier). Returns -1 for no outer rows found (sets state to END_OF_ROWS),
  /// 0 for OK (sets state to RETURNING_JOINED_ROWS) or 1 for error.
  int ReadOuterRows();

  enum class State {
    /**
      We are about to start reading outer rows into our buffer.
      A single Read() call will fill it up, so there is no
      in-between “currently reading” state.
     */
    NEED_OUTER_ROWS,

    /**
      We are returning rows from the MultiRangeRowIterator.
      (For antijoins, we are looking up the rows, but don't actually
      return them.)
     */
    RETURNING_JOINED_ROWS,

    /**
      We are an outer join or antijoin, and we're returning NULL-complemented
      rows for those outer rows that never had a matching inner row. Note that
      this is done in the BKAIterator and not the MRR iterator for two reasons:
      First, it gives more sensible EXPLAIN ANALYZE numbers. Second, the
      NULL-complemented rows could be filtered inadvertently by a
      FilterIterator before they reach the BKAIterator.
     */
    RETURNING_NULL_COMPLEMENTED_ROWS,

    /**
      Both the outer and inner side are out of rows.
     */
    END_OF_ROWS
  };

  State m_state;

  const unique_ptr_destroy_only<RowIterator> m_outer_input;
  const unique_ptr_destroy_only<RowIterator> m_inner_input;

  /// The MEM_ROOT we are storing the outer rows on, and also allocating the
  /// MRR buffer from. In total, this should not go significantly over
  /// m_max_memory_available bytes.
  MEM_ROOT m_mem_root;

  /// Buffered outer rows.
  Mem_root_array<hash_join_buffer::BufferRow> m_rows;

  /// Tables and columns needed for each outer row. Rows/columns that are not
  /// needed are filtered out in the constructor; the rest are read and stored
  /// in m_rows.
  pack_rows::TableCollection m_outer_input_tables;

  /// Used for serializing the row we read from the outer table(s), before it
  /// is stored into the MEM_ROOT and put into m_rows. Should there not be room
  /// in m_rows for the row, it will stay in this variable until we start
  /// reading the next batch of outer rows.
  ///
  /// If there are no BLOB/TEXT columns in the join, we calculate an upper
  /// bound of the row size that is used to preallocate this buffer. In the
  /// case of BLOB/TEXT columns, we cannot calculate a reasonable upper bound,
  /// and the row size is calculated per row. The allocated memory is kept for
  /// the duration of the iterator, so that we (most likely) avoid
  /// reallocations.
  String m_outer_row_buffer;

  /// Whether we have a row in m_outer_row_buffer from the previous batch of
  /// rows that we haven't stored in m_rows yet.
  bool m_has_row_from_previous_batch = false;

  /// For each outer row, how many bytes we need in the MRR buffer (i.e., the
  /// number of bytes we expect to use on rows from the inner table).
  /// This is the expected number of inner rows per key, multiplied by the
  /// (fixed) size of each inner row. We use this information to stop scanning
  /// before we've used up the entire RAM allowance on outer rows, so that
  /// we have space remaining for the inner rows (in the MRR buffer), too.
  size_t m_mrr_bytes_needed_per_row;

  /// Estimated number of bytes used on m_mem_root so far.
  size_t m_bytes_used = 0;

  /// Whether we've seen EOF from the outer iterator.
  bool m_end_of_outer_rows = false;

  /// See max_memory_available in the constructor.
  const size_t m_max_memory_available;

  /// See mrr_bytes_needed_for_single_inner_row in the constructor.
  const size_t m_mrr_bytes_needed_for_single_inner_row;

  /// See mrr_iterator in the constructor.
  MultiRangeRowIterator *const m_mrr_iterator;

  /// All the single-row index lookups that provide rows to this iterator.
  std::span<AccessPath *> m_single_row_index_lookups;

  /// The join type of the BKA join.
  JoinType m_join_type;

  /// If we are synthesizing NULL-complemented rows (for an outer join or
  /// antijoin), points to the next row within "m_rows" that we haven't
  /// considered yet.
  hash_join_buffer::BufferRow *m_current_pos;
};
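The Read() implementation cycles through the four states above. As a rough illustration (a toy model, not MySQL code), here is a left outer join driven by the same state machine, with ints for rows, a std::map standing in for the inner index, and `(key, nullopt)` standing in for a NULL-complemented row:

```cpp
#include <map>
#include <optional>
#include <utility>
#include <vector>

// Mirrors BKAIterator's internal state enum.
enum class State {
  NEED_OUTER_ROWS,
  RETURNING_JOINED_ROWS,
  RETURNING_NULL_COMPLEMENTED_ROWS,
  END_OF_ROWS
};

class ToyBkaJoin {
 public:
  ToyBkaJoin(std::vector<int> outer, std::map<int, int> inner,
             size_t batch_size)
      : m_outer(std::move(outer)),
        m_inner(std::move(inner)),
        m_batch_size(batch_size) {}

  // Returns the next joined row, or nullopt at end of rows.
  std::optional<std::pair<int, std::optional<int>>> Read() {
    for (;;) {
      switch (m_state) {
        case State::NEED_OUTER_ROWS: {
          // Fill the buffer with the next batch of outer rows.
          m_batch.clear();
          while (m_next_outer < m_outer.size() &&
                 m_batch.size() < m_batch_size)
            m_batch.push_back(m_outer[m_next_outer++]);
          if (m_batch.empty()) {
            m_state = State::END_OF_ROWS;
            break;
          }
          m_matched.assign(m_batch.size(), false);
          m_pos = 0;
          m_state = State::RETURNING_JOINED_ROWS;
          break;
        }
        case State::RETURNING_JOINED_ROWS: {
          // Return matches, remembering which outer rows found one.
          while (m_pos < m_batch.size()) {
            size_t i = m_pos++;
            auto it = m_inner.find(m_batch[i]);
            if (it != m_inner.end()) {
              m_matched[i] = true;
              return std::make_pair(m_batch[i],
                                    std::optional<int>(it->second));
            }
          }
          m_pos = 0;
          m_state = State::RETURNING_NULL_COMPLEMENTED_ROWS;
          break;
        }
        case State::RETURNING_NULL_COMPLEMENTED_ROWS: {
          // Outer join: emit unmatched outer rows with a NULL inner side.
          while (m_pos < m_batch.size()) {
            size_t i = m_pos++;
            if (!m_matched[i])
              return std::make_pair(m_batch[i], std::optional<int>());
          }
          m_state = State::NEED_OUTER_ROWS;
          break;
        }
        case State::END_OF_ROWS:
          return std::nullopt;
      }
    }
  }

 private:
  std::vector<int> m_outer;
  std::map<int, int> m_inner;
  size_t m_batch_size;
  size_t m_next_outer = 0;
  std::vector<int> m_batch;
  std::vector<bool> m_matched;
  size_t m_pos = 0;
  State m_state = State::NEED_OUTER_ROWS;
};
```

The real iterator differs in many ways (it delegates the joined-row phase to MultiRangeRowIterator and tracks matches through the match-flag bit buffer), but the batch/join/NULL-complement/refill cycle is the same.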

/**
  The iterator actually doing the reads from the inner table during BKA.
  See the file comment.
 */
class MultiRangeRowIterator final : public TableRowIterator {
 public:
  /**
    @param thd Thread handle.
    @param table The inner table to scan.
    @param ref The index condition we are looking up on.
    @param mrr_flags Flags passed on to MRR.
    @param join_type
      What kind of BKA join this MRR iterator is part of.
    @param outer_input_tables
      Which tables are on the left side of the BKA join (the MRR iterator
      is always alone on the right side). This is needed so that it can
      unpack the rows into the right tables, with the right format.
    @param store_rowids Whether we need to keep row IDs.
    @param tables_to_get_rowid_for
      Tables we need to call table->file->position() for; if a table
      is present in outer_input_tables but not this, some other iterator
      will make sure that table has the correct row ID already present
      after Read().
   */
  MultiRangeRowIterator(THD *thd, TABLE *table, Index_lookup *ref,
                        int mrr_flags, JoinType join_type,
                        const Prealloced_array<TABLE *, 4> &outer_input_tables,
                        bool store_rowids, table_map tables_to_get_rowid_for);

  /**
    Specify which outer rows to read inner rows for.
    Must be called before Init(), and stay valid until the last Read().
   */
  void set_rows(const hash_join_buffer::BufferRow *begin,
                const hash_join_buffer::BufferRow *end) {
    m_begin = begin;
    m_end = end;
  }

  /**
    Specify an unused chunk of memory MRR can use for the returned inner rows.
    Must be called before Init(), and the chunk must be at least big enough to
    hold one inner row.
   */
  void set_mrr_buffer(uchar *ptr, size_t size) {
    m_mrr_buffer.buffer = ptr;
    m_mrr_buffer.buffer_end = ptr + size;
  }

  /**
    Specify an unused chunk of memory that we can use to mark which inner rows
    have been read (by the parent BKA iterator) or not. This is used for outer
    joins to know which rows need NULL-complemented versions, and for semijoins
    and antijoins to avoid matching the same inner row more than once.

    Must be called before Init() for semijoins, outer joins and antijoins, and
    never called otherwise. There must be room for at least one bit per row
    given in set_rows().
   */
  void set_match_flag_buffer(uchar *ptr) { m_match_flag_buffer = ptr; }

  /**
    Mark that the BKA iterator has seen the last row we returned from Read().
    (It could have been discarded by a FilterIterator before it reached the
    BKA iterator.) Will be a no-op for inner joins; see
    set_match_flag_buffer().
   */
  void MarkLastRowAsRead() {
    if (m_match_flag_buffer != nullptr) {
      size_t row_number = std::distance(m_begin, m_last_row_returned);
      m_match_flag_buffer[row_number / 8] |= 1 << (row_number % 8);
    }
  }

  /**
    Check whether the given row has been marked as read
    (using MarkLastRowAsRead()) or not. Used internally when doing semijoins,
    and also by the BKAIterator when synthesizing NULL-complemented rows for
    outer joins or antijoins.
   */
  bool RowHasBeenRead(const hash_join_buffer::BufferRow *row) const {
    assert(m_match_flag_buffer != nullptr);
    size_t row_number = std::distance(m_begin, row);
    return m_match_flag_buffer[row_number / 8] & (1 << (row_number % 8));
  }

 private:
  /**
    Do the actual multi-range read with the rows given by set_rows() and using
    the temporary buffer given in set_mrr_buffer().
   */
  bool DoInit() override;

  /**
    Read another inner row (if any) and load the appropriate outer row(s)
    into the associated table buffers.
   */
  int DoRead() override;

  // Thunks from function pointers to the actual callbacks.
  static range_seq_t MrrInitCallbackThunk(void *init_params, uint n_ranges,
                                          uint flags) {
    return (pointer_cast<MultiRangeRowIterator *>(init_params))
        ->MrrInitCallback(n_ranges, flags);
  }
  static uint MrrNextCallbackThunk(void *init_params, KEY_MULTI_RANGE *range) {
    return (pointer_cast<MultiRangeRowIterator *>(init_params))
        ->MrrNextCallback(range);
  }
  static bool MrrSkipRecordCallbackThunk(range_seq_t seq, char *range_info,
                                         uchar *) {
    return (reinterpret_cast<MultiRangeRowIterator *>(seq))
        ->MrrSkipRecord(range_info);
  }

  // Callbacks we get from the handler during the actual read.
  range_seq_t MrrInitCallback(uint n_ranges, uint flags);
  uint MrrNextCallback(KEY_MULTI_RANGE *range);
  bool MrrSkipIndexTuple(char *range_info);
  bool MrrSkipRecord(char *range_info);

  /// Handler for the table we are reading from.
  handler *const m_file;

  /// The index condition.
  Index_lookup *const m_ref;

  /// Flags passed on to MRR.
  const int m_mrr_flags;

  /// Current outer rows to read inner rows for. Set by set_rows().
  const hash_join_buffer::BufferRow *m_begin;
  const hash_join_buffer::BufferRow *m_end;

  /// Which row we are at in the [m_begin, m_end) range.
  /// Used during the MRR callbacks.
  const hash_join_buffer::BufferRow *m_current_pos;

  /// What row we last returned from Read() (used for MarkLastRowAsRead()).
  const hash_join_buffer::BufferRow *m_last_row_returned;

  /// Temporary space for storing inner rows, used by MRR.
  /// Set by set_mrr_buffer().
  HANDLER_BUFFER m_mrr_buffer;

  /// See set_match_flag_buffer().
  uchar *m_match_flag_buffer = nullptr;

  /// Tables and columns needed for each outer row. Same as
  /// m_outer_input_tables in the corresponding BKAIterator.
  pack_rows::TableCollection m_outer_input_tables;

  /// The join type of the BKA join we are part of. Same as m_join_type in the
  /// corresponding BKAIterator.
  const JoinType m_join_type;
};

#endif  // SQL_ITERATORS_BKA_ITERATOR_H_
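The match-flag buffer used by MarkLastRowAsRead() and RowHasBeenRead() packs one bit per buffered outer row. A standalone sketch of the same byte/bit arithmetic (names are illustrative, not MySQL's):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// One bit per row: byte row_number / 8, bit row_number % 8, as in
// MarkLastRowAsRead() and RowHasBeenRead() above.
void MarkRow(std::vector<uint8_t> &flags, size_t row_number) {
  flags[row_number / 8] |= uint8_t{1} << (row_number % 8);
}

bool RowIsMarked(const std::vector<uint8_t> &flags, size_t row_number) {
  return flags[row_number / 8] & (uint8_t{1} << (row_number % 8));
}
```

A caller sizes the buffer as (num_rows + 7) / 8 bytes, matching the "room for at least one bit per row given in set_rows()" requirement on set_match_flag_buffer().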