MySQL 8.0.26
Source Code Documentation
AccessPath Struct Reference

Access paths are a query planning structure that corresponds 1:1 to iterators, in that an access path contains pretty much exactly the information needed to instantiate a given iterator, plus some information that is only needed during planning, such as costs. More...

#include <access_path.h>

Public Types

enum  Type : uint8_t {
  TABLE_SCAN , INDEX_SCAN , REF , REF_OR_NULL ,
  EQ_REF , PUSHED_JOIN_REF , FULL_TEXT_SEARCH , CONST_TABLE ,
  MRR , FOLLOW_TAIL , INDEX_RANGE_SCAN , DYNAMIC_INDEX_RANGE_SCAN ,
  TABLE_VALUE_CONSTRUCTOR , FAKE_SINGLE_ROW , ZERO_ROWS , ZERO_ROWS_AGGREGATED ,
  MATERIALIZED_TABLE_FUNCTION , UNQUALIFIED_COUNT , NESTED_LOOP_JOIN , NESTED_LOOP_SEMIJOIN_WITH_DUPLICATE_REMOVAL ,
  BKA_JOIN , HASH_JOIN , FILTER , SORT ,
  AGGREGATE , TEMPTABLE_AGGREGATE , LIMIT_OFFSET , STREAM ,
  MATERIALIZE , MATERIALIZE_INFORMATION_SCHEMA_TABLE , APPEND , WINDOW ,
  WEEDOUT , REMOVE_DUPLICATES , REMOVE_DUPLICATES_ON_INDEX , ALTERNATIVE ,
  CACHE_INVALIDATOR
}
 

Public Member Functions

auto & table_scan ()
 
const auto & table_scan () const
 
auto & index_scan ()
 
const auto & index_scan () const
 
auto & ref ()
 
const auto & ref () const
 
auto & ref_or_null ()
 
const auto & ref_or_null () const
 
auto & eq_ref ()
 
const auto & eq_ref () const
 
auto & pushed_join_ref ()
 
const auto & pushed_join_ref () const
 
auto & full_text_search ()
 
const auto & full_text_search () const
 
auto & const_table ()
 
const auto & const_table () const
 
auto & mrr ()
 
const auto & mrr () const
 
auto & follow_tail ()
 
const auto & follow_tail () const
 
auto & index_range_scan ()
 
const auto & index_range_scan () const
 
auto & dynamic_index_range_scan ()
 
const auto & dynamic_index_range_scan () const
 
auto & materialized_table_function ()
 
const auto & materialized_table_function () const
 
auto & unqualified_count ()
 
const auto & unqualified_count () const
 
auto & table_value_constructor ()
 
const auto & table_value_constructor () const
 
auto & fake_single_row ()
 
const auto & fake_single_row () const
 
auto & zero_rows ()
 
const auto & zero_rows () const
 
auto & zero_rows_aggregated ()
 
const auto & zero_rows_aggregated () const
 
auto & hash_join ()
 
const auto & hash_join () const
 
auto & bka_join ()
 
const auto & bka_join () const
 
auto & nested_loop_join ()
 
const auto & nested_loop_join () const
 
auto & nested_loop_semijoin_with_duplicate_removal ()
 
const auto & nested_loop_semijoin_with_duplicate_removal () const
 
auto & filter ()
 
const auto & filter () const
 
auto & sort ()
 
const auto & sort () const
 
auto & aggregate ()
 
const auto & aggregate () const
 
auto & temptable_aggregate ()
 
const auto & temptable_aggregate () const
 
auto & limit_offset ()
 
const auto & limit_offset () const
 
auto & stream ()
 
const auto & stream () const
 
auto & materialize ()
 
const auto & materialize () const
 
auto & materialize_information_schema_table ()
 
const auto & materialize_information_schema_table () const
 
auto & append ()
 
const auto & append () const
 
auto & window ()
 
const auto & window () const
 
auto & weedout ()
 
const auto & weedout () const
 
auto & remove_duplicates ()
 
const auto & remove_duplicates () const
 
auto & remove_duplicates_on_index ()
 
const auto & remove_duplicates_on_index () const
 
auto & alternative ()
 
const auto & alternative () const
 
auto & cache_invalidator ()
 
const auto & cache_invalidator () const
 

Public Attributes

enum AccessPath::Type type
 
bool count_examined_rows = false
 Whether this access path counts as one that scans a base table, and thus should be counted towards examined_rows. More...
 
int ordering_state = 0
 Which ordering the rows produced by this path follow, if any (see interesting_orders.h). More...
 
RowIterator * iterator = nullptr
 If an iterator has been instantiated for this access path, points to the iterator. More...
 
double num_output_rows {-1.0}
 Expected number of output rows, -1.0 for unknown. More...
 
double cost {-1.0}
 Expected cost to read all of this access path once; -1.0 for unknown. More...
 
double init_cost {-1.0}
 Expected cost to initialize this access path; i.e., the cost to read k out of N rows would be init_cost + (k/N) * (cost - init_cost). More...
 
double init_once_cost {0.0}
 Of init_cost, how much of the initialization needs to be done only once per query block. More...
 
double num_output_rows_before_filter {-1.0}
 If there is no filter, identical to num_output_rows and cost, respectively. More...
 
double cost_before_filter {-1.0}
 
union {
   uint64_t   filter_predicates {0}
 Bitmap of WHERE predicates that we are including on this access path, referring to the “predicates” array internal to the join optimizer. More...
 
   uint64_t   applied_sargable_join_predicates
 Bitmap of sargable join predicates that have already been applied in this access path by means of an index lookup (ref access), again referring to “predicates”, and thus should not be counted again for selectivity. More...
 
}; 
 
union {
   uint64_t   delayed_predicates {0}
 Bitmap of WHERE predicates that touch tables we have joined in, but that we could not apply yet (for instance because they reference other tables, or because we could not push them down into the nullable side of outer joins). More...
 
   uint64_t   subsumed_sargable_join_predicates
 Similar to applied_sargable_join_predicates, a bitmap of sargable join predicates that have been applied and will subsume the join predicate entirely, i.e., not only should the selectivity not be double-counted, but the predicate itself is redundant and need not be applied as a filter. More...
 
}; 
 
hypergraph::NodeMap parameter_tables {0}
 If nonzero, a bitmap of other tables whose joined-in rows must already be loaded when rows from this access path are evaluated; that is, this access path must be put on the inner side of a nested-loop join (or multiple such joins) where the outer side includes all of the given tables. More...
 
void * secondary_engine_data {nullptr}
 Auxiliary data used by a secondary storage engine while processing the access path during optimization and execution. More...
 
TABLE * table
 
struct {
   TABLE *   table
 
}   table_scan
 
int idx
 
bool use_order
 
bool reverse
 
struct {
   TABLE *   table
 
   int   idx
 
   bool   use_order
 
   bool   reverse
 
}   index_scan
 
TABLE_REF * ref
 
struct {
   TABLE *   table
 
   TABLE_REF *   ref
 
   bool   use_order
 
   bool   reverse
 
}   ref
 
struct {
   TABLE *   table
 
   TABLE_REF *   ref
 
   bool   use_order
 
}   ref_or_null
 
struct {
   TABLE *   table
 
   TABLE_REF *   ref
 
   bool   use_order
 
}   eq_ref
 
bool is_unique
 
struct {
   TABLE *   table
 
   TABLE_REF *   ref
 
   bool   use_order
 
   bool   is_unique
 
}   pushed_join_ref
 
Item_func_match * ft_func
 
struct {
   TABLE *   table
 
   TABLE_REF *   ref
 
   bool   use_order
 
   Item_func_match *   ft_func
 
}   full_text_search
 
struct {
   TABLE *   table
 
   TABLE_REF *   ref
 
}   const_table
 
AccessPath * bka_path
 
int mrr_flags
 
bool keep_current_rowid
 
struct {
   TABLE *   table
 
   TABLE_REF *   ref
 
   AccessPath *   bka_path
 
   int   mrr_flags
 
   bool   keep_current_rowid
 
}   mrr
 
struct {
   TABLE *   table
 
}   follow_tail
 
QUICK_SELECT_I * quick
 
struct {
   TABLE *   table
 
   QUICK_SELECT_I *   quick
 
}   index_range_scan
 
QEP_TAB * qep_tab
 
struct {
   TABLE *   table
 
   QEP_TAB *   qep_tab
 
}   dynamic_index_range_scan
 
Table_function * table_function
 
AccessPath * table_path
 
struct {
   TABLE *   table
 
   Table_function *   table_function
 
   AccessPath *   table_path
 
}   materialized_table_function
 
struct {
}   unqualified_count
 
struct {
}   table_value_constructor
 
struct {
}   fake_single_row
 
AccessPath * child
 
const char * cause
 
struct {
   AccessPath *   child
 
   const char *   cause
 
}   zero_rows
 
struct {
   const char *   cause
 
}   zero_rows_aggregated
 
AccessPath * outer
 
AccessPath * inner
 
const JoinPredicate * join_predicate
 
bool allow_spill_to_disk
 
bool store_rowids
 
bool rewrite_semi_to_inner
 
table_map tables_to_get_rowid_for
 
struct {
   AccessPath *   outer
 
   AccessPath *   inner
 
   const JoinPredicate *   join_predicate
 
   bool   allow_spill_to_disk
 
   bool   store_rowids
 
   bool   rewrite_semi_to_inner
 
   table_map   tables_to_get_rowid_for
 
}   hash_join
 
JoinType join_type
 
unsigned mrr_length_per_rec
 
float rec_per_key
 
struct {
   AccessPath *   outer
 
   AccessPath *   inner
 
   JoinType   join_type
 
   unsigned   mrr_length_per_rec
 
   float   rec_per_key
 
   bool   store_rowids
 
   table_map   tables_to_get_rowid_for
 
}   bka_join
 
bool pfs_batch_mode
 
struct {
   AccessPath *   outer
 
   AccessPath *   inner
 
   JoinType   join_type
 
   bool   pfs_batch_mode
 
}   nested_loop_join
 
const TABLE * table
 
KEY * key
 
size_t key_len
 
struct {
   AccessPath *   outer
 
   AccessPath *   inner
 
   const TABLE *   table
 
   KEY *   key
 
   size_t   key_len
 
}   nested_loop_semijoin_with_duplicate_removal
 
Item * condition
 
bool materialize_subqueries
 
struct {
   AccessPath *   child
 
   Item *   condition
 
   bool   materialize_subqueries
 
}   filter
 
Filesort * filesort
 
ORDER * order
 
bool remove_duplicates
 
bool unwrap_rollup
 
struct {
   AccessPath *   child
 
   Filesort *   filesort
 
   table_map   tables_to_get_rowid_for
 
   ORDER *   order
 
   bool   remove_duplicates
 
   bool   unwrap_rollup
 
}   sort
 
bool rollup
 
struct {
   AccessPath *   child
 
   bool   rollup
 
}   aggregate
 
AccessPath * subquery_path
 
Temp_table_param * temp_table_param
 
int ref_slice
 
struct {
   AccessPath *   subquery_path
 
   Temp_table_param *   temp_table_param
 
   TABLE *   table
 
   AccessPath *   table_path
 
   int   ref_slice
 
}   temptable_aggregate
 
ha_rows limit
 
ha_rows offset
 
bool count_all_rows
 
bool reject_multiple_rows
 
ha_rows * send_records_override
 
struct {
   AccessPath *   child
 
   ha_rows   limit
 
   ha_rows   offset
 
   bool   count_all_rows
 
   bool   reject_multiple_rows
 
   ha_rows *   send_records_override
 
}   limit_offset
 
JOIN * join
 
bool provide_rowid
 
struct {
   AccessPath *   child
 
   JOIN *   join
 
   Temp_table_param *   temp_table_param
 
   TABLE *   table
 
   bool   provide_rowid
 
   int   ref_slice
 
}   stream
 
MaterializePathParameters * param
 
struct {
   AccessPath *   table_path
 
   MaterializePathParameters *   param
 
}   materialize
 
TABLE_LIST * table_list
 
struct {
   AccessPath *   table_path
 
   TABLE_LIST *   table_list
 
   Item *   condition
 
}   materialize_information_schema_table
 
Mem_root_array< AppendPathParameters > * children
 
struct {
   Mem_root_array< AppendPathParameters > *   children
 
}   append
 
bool needs_buffering
 
struct {
   AccessPath *   child
 
   Temp_table_param *   temp_table_param
 
   int   ref_slice
 
   bool   needs_buffering
 
}   window
 
SJ_TMP_TABLE * weedout_table
 
struct {
   AccessPath *   child
 
   SJ_TMP_TABLE *   weedout_table
 
   table_map   tables_to_get_rowid_for
 
}   weedout
 
Item ** group_items
 
int group_items_size
 
struct {
   AccessPath *   child
 
   Item **   group_items
 
   int   group_items_size
 
}   remove_duplicates
 
unsigned loosescan_key_len
 
struct {
   AccessPath *   child
 
   TABLE *   table
 
   KEY *   key
 
   unsigned   loosescan_key_len
 
}   remove_duplicates_on_index
 
AccessPath * table_scan_path
 
TABLE_REF * used_ref
 
struct {
   AccessPath *   table_scan_path
 
   AccessPath *   child
 
   TABLE_REF *   used_ref
 
}   alternative
 
const char * name
 
struct {
   AccessPath *   child
 
   const char *   name
 
}   cache_invalidator
 

Private Attributes

union {
   struct {
      TABLE *   table
 
   }   table_scan
 
   struct {
      TABLE *   table
 
      int   idx
 
      bool   use_order
 
      bool   reverse
 
   }   index_scan
 
   struct {
      TABLE *   table
 
      TABLE_REF *   ref
 
      bool   use_order
 
      bool   reverse
 
   }   ref
 
   struct {
      TABLE *   table
 
      TABLE_REF *   ref
 
      bool   use_order
 
   }   ref_or_null
 
   struct {
      TABLE *   table
 
      TABLE_REF *   ref
 
      bool   use_order
 
   }   eq_ref
 
   struct {
      TABLE *   table
 
      TABLE_REF *   ref
 
      bool   use_order
 
      bool   is_unique
 
   }   pushed_join_ref
 
   struct {
      TABLE *   table
 
      TABLE_REF *   ref
 
      bool   use_order
 
      Item_func_match *   ft_func
 
   }   full_text_search
 
   struct {
      TABLE *   table
 
      TABLE_REF *   ref
 
   }   const_table
 
   struct {
      TABLE *   table
 
      TABLE_REF *   ref
 
      AccessPath *   bka_path
 
      int   mrr_flags
 
      bool   keep_current_rowid
 
   }   mrr
 
   struct {
      TABLE *   table
 
   }   follow_tail
 
   struct {
      TABLE *   table
 
      QUICK_SELECT_I *   quick
 
   }   index_range_scan
 
   struct {
      TABLE *   table
 
      QEP_TAB *   qep_tab
 
   }   dynamic_index_range_scan
 
   struct {
      TABLE *   table
 
      Table_function *   table_function
 
      AccessPath *   table_path
 
   }   materialized_table_function
 
   struct {
   }   unqualified_count
 
   struct {
   }   table_value_constructor
 
   struct {
   }   fake_single_row
 
   struct {
      AccessPath *   child
 
      const char *   cause
 
   }   zero_rows
 
   struct {
      const char *   cause
 
   }   zero_rows_aggregated
 
   struct {
      AccessPath *   outer
 
      AccessPath *   inner
 
      const JoinPredicate *   join_predicate
 
      bool   allow_spill_to_disk
 
      bool   store_rowids
 
      bool   rewrite_semi_to_inner
 
      table_map   tables_to_get_rowid_for
 
   }   hash_join
 
   struct {
      AccessPath *   outer
 
      AccessPath *   inner
 
      JoinType   join_type
 
      unsigned   mrr_length_per_rec
 
      float   rec_per_key
 
      bool   store_rowids
 
      table_map   tables_to_get_rowid_for
 
   }   bka_join
 
   struct {
      AccessPath *   outer
 
      AccessPath *   inner
 
      JoinType   join_type
 
      bool   pfs_batch_mode
 
   }   nested_loop_join
 
   struct {
      AccessPath *   outer
 
      AccessPath *   inner
 
      const TABLE *   table
 
      KEY *   key
 
      size_t   key_len
 
   }   nested_loop_semijoin_with_duplicate_removal
 
   struct {
      AccessPath *   child
 
      Item *   condition
 
      bool   materialize_subqueries
 
   }   filter
 
   struct {
      AccessPath *   child
 
      Filesort *   filesort
 
      table_map   tables_to_get_rowid_for
 
      ORDER *   order
 
      bool   remove_duplicates
 
      bool   unwrap_rollup
 
   }   sort
 
   struct {
      AccessPath *   child
 
      bool   rollup
 
   }   aggregate
 
   struct {
      AccessPath *   subquery_path
 
      Temp_table_param *   temp_table_param
 
      TABLE *   table
 
      AccessPath *   table_path
 
      int   ref_slice
 
   }   temptable_aggregate
 
   struct {
      AccessPath *   child
 
      ha_rows   limit
 
      ha_rows   offset
 
      bool   count_all_rows
 
      bool   reject_multiple_rows
 
      ha_rows *   send_records_override
 
   }   limit_offset
 
   struct {
      AccessPath *   child
 
      JOIN *   join
 
      Temp_table_param *   temp_table_param
 
      TABLE *   table
 
      bool   provide_rowid
 
      int   ref_slice
 
   }   stream
 
   struct {
      AccessPath *   table_path
 
      MaterializePathParameters *   param
 
   }   materialize
 
   struct {
      AccessPath *   table_path
 
      TABLE_LIST *   table_list
 
      Item *   condition
 
   }   materialize_information_schema_table
 
   struct {
      Mem_root_array< AppendPathParameters > *   children
 
   }   append
 
   struct {
      AccessPath *   child
 
      Temp_table_param *   temp_table_param
 
      int   ref_slice
 
      bool   needs_buffering
 
   }   window
 
   struct {
      AccessPath *   child
 
      SJ_TMP_TABLE *   weedout_table
 
      table_map   tables_to_get_rowid_for
 
   }   weedout
 
   struct {
      AccessPath *   child
 
      Item **   group_items
 
      int   group_items_size
 
   }   remove_duplicates
 
   struct {
      AccessPath *   child
 
      TABLE *   table
 
      KEY *   key
 
      unsigned   loosescan_key_len
 
   }   remove_duplicates_on_index
 
   struct {
      AccessPath *   table_scan_path
 
      AccessPath *   child
 
      TABLE_REF *   used_ref
 
   }   alternative
 
   struct {
      AccessPath *   child
 
      const char *   name
 
   }   cache_invalidator
 
}   u
 

Detailed Description

Access paths are a query planning structure that corresponds 1:1 to iterators, in that an access path contains pretty much exactly the information needed to instantiate a given iterator, plus some information that is only needed during planning, such as costs.

(The new join optimizer will extend this somewhat in the future. Some iterators also need the query block (i.e., the JOIN object) they are part of, but that is implicitly available when constructing the tree.)

AccessPath objects build on a variant, i.e., they can hold an access path of any type (table scan, filter, hash join, sort, etc.), although only one at a time. Currently, they contain 32 bytes of base information that is common to any access path (type identifier, costs, etc.), and then up to 40 bytes that are type-specific (e.g., for a table scan, the TABLE object). It would be nice if we could squeeze it down to 64 bytes and fit a cache line exactly, but that does not seem to be easy without fairly large contortions.

We could have solved this by inheritance, but the fixed-size design makes it possible to replace an access path when a better one is found, without introducing a new allocation, which will be important when using them as a planning structure.
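
To make the fixed-size variant design concrete, here is a minimal, self-contained C++ sketch of the pattern described above, using simplified stand-in types rather than the real header: a tagged union with asserting accessors, small enough that a cheaper plan can overwrite a more expensive one in place.

#include <cassert>
#include <cstdint>

struct TABLE;  // stand-in, opaque here
struct Item;   // stand-in, opaque here

struct AccessPathSketch {
  enum Type : uint8_t { TABLE_SCAN, FILTER } type;
  double cost{-1.0};
  double num_output_rows{-1.0};

  union {
    struct {
      TABLE *table;
    } table_scan;
    struct {
      AccessPathSketch *child;
      Item *condition;
    } filter;
  } u;

  // Like the real accessors: assert the discriminant, return the member.
  auto &table_scan() { assert(type == TABLE_SCAN); return u.table_scan; }
  auto &filter() { assert(type == FILTER); return u.filter; }
};

// Fixed size means a better plan can replace a worse one without a new
// allocation, which is the property called out above.
void ReplaceIfCheaper(AccessPathSketch *slot, const AccessPathSketch &candidate) {
  if (candidate.cost < slot->cost) *slot = candidate;
}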

Member Enumeration Documentation

◆ Type

enum AccessPath::Type : uint8_t
Enumerator
TABLE_SCAN 
INDEX_SCAN 
REF 
REF_OR_NULL 
EQ_REF 
PUSHED_JOIN_REF 
FULL_TEXT_SEARCH 
CONST_TABLE 
MRR 
FOLLOW_TAIL 
INDEX_RANGE_SCAN 
DYNAMIC_INDEX_RANGE_SCAN 
TABLE_VALUE_CONSTRUCTOR 
FAKE_SINGLE_ROW 
ZERO_ROWS 
ZERO_ROWS_AGGREGATED 
MATERIALIZED_TABLE_FUNCTION 
UNQUALIFIED_COUNT 
NESTED_LOOP_JOIN 
NESTED_LOOP_SEMIJOIN_WITH_DUPLICATE_REMOVAL 
BKA_JOIN 
HASH_JOIN 
FILTER 
SORT 
AGGREGATE 
TEMPTABLE_AGGREGATE 
LIMIT_OFFSET 
STREAM 
MATERIALIZE 
MATERIALIZE_INFORMATION_SCHEMA_TABLE 
APPEND 
WINDOW 
WEEDOUT 
REMOVE_DUPLICATES 
REMOVE_DUPLICATES_ON_INDEX 
ALTERNATIVE 
CACHE_INVALIDATOR 

Member Function Documentation

◆ aggregate() [1/2]

auto& AccessPath::aggregate ( )
inline

◆ aggregate() [2/2]

const auto& AccessPath::aggregate ( ) const
inline

◆ alternative() [1/2]

auto& AccessPath::alternative ( )
inline

◆ alternative() [2/2]

const auto& AccessPath::alternative ( ) const
inline

◆ append() [1/2]

auto& AccessPath::append ( )
inline

◆ append() [2/2]

const auto& AccessPath::append ( ) const
inline

◆ bka_join() [1/2]

auto& AccessPath::bka_join ( )
inline

◆ bka_join() [2/2]

const auto& AccessPath::bka_join ( ) const
inline

◆ cache_invalidator() [1/2]

auto& AccessPath::cache_invalidator ( )
inline

◆ cache_invalidator() [2/2]

const auto& AccessPath::cache_invalidator ( ) const
inline

◆ const_table() [1/2]

auto& AccessPath::const_table ( )
inline

◆ const_table() [2/2]

const auto& AccessPath::const_table ( ) const
inline

◆ dynamic_index_range_scan() [1/2]

auto& AccessPath::dynamic_index_range_scan ( )
inline

◆ dynamic_index_range_scan() [2/2]

const auto& AccessPath::dynamic_index_range_scan ( ) const
inline

◆ eq_ref() [1/2]

auto& AccessPath::eq_ref ( )
inline

◆ eq_ref() [2/2]

const auto& AccessPath::eq_ref ( ) const
inline

◆ fake_single_row() [1/2]

auto& AccessPath::fake_single_row ( )
inline

◆ fake_single_row() [2/2]

const auto& AccessPath::fake_single_row ( ) const
inline

◆ filter() [1/2]

auto& AccessPath::filter ( )
inline

◆ filter() [2/2]

const auto& AccessPath::filter ( ) const
inline

◆ follow_tail() [1/2]

auto& AccessPath::follow_tail ( )
inline

◆ follow_tail() [2/2]

const auto& AccessPath::follow_tail ( ) const
inline

◆ full_text_search() [1/2]

auto& AccessPath::full_text_search ( )
inline

◆ full_text_search() [2/2]

const auto& AccessPath::full_text_search ( ) const
inline

◆ hash_join() [1/2]

auto& AccessPath::hash_join ( )
inline

◆ hash_join() [2/2]

const auto& AccessPath::hash_join ( ) const
inline

◆ index_range_scan() [1/2]

auto& AccessPath::index_range_scan ( )
inline

◆ index_range_scan() [2/2]

const auto& AccessPath::index_range_scan ( ) const
inline

◆ index_scan() [1/2]

auto& AccessPath::index_scan ( )
inline

◆ index_scan() [2/2]

const auto& AccessPath::index_scan ( ) const
inline

◆ limit_offset() [1/2]

auto& AccessPath::limit_offset ( )
inline

◆ limit_offset() [2/2]

const auto& AccessPath::limit_offset ( ) const
inline

◆ materialize() [1/2]

auto& AccessPath::materialize ( )
inline

◆ materialize() [2/2]

const auto& AccessPath::materialize ( ) const
inline

◆ materialize_information_schema_table() [1/2]

auto& AccessPath::materialize_information_schema_table ( )
inline

◆ materialize_information_schema_table() [2/2]

const auto& AccessPath::materialize_information_schema_table ( ) const
inline

◆ materialized_table_function() [1/2]

auto& AccessPath::materialized_table_function ( )
inline

◆ materialized_table_function() [2/2]

const auto& AccessPath::materialized_table_function ( ) const
inline

◆ mrr() [1/2]

auto& AccessPath::mrr ( )
inline

◆ mrr() [2/2]

const auto& AccessPath::mrr ( ) const
inline

◆ nested_loop_join() [1/2]

auto& AccessPath::nested_loop_join ( )
inline

◆ nested_loop_join() [2/2]

const auto& AccessPath::nested_loop_join ( ) const
inline

◆ nested_loop_semijoin_with_duplicate_removal() [1/2]

auto& AccessPath::nested_loop_semijoin_with_duplicate_removal ( )
inline

◆ nested_loop_semijoin_with_duplicate_removal() [2/2]

const auto& AccessPath::nested_loop_semijoin_with_duplicate_removal ( ) const
inline

◆ pushed_join_ref() [1/2]

auto& AccessPath::pushed_join_ref ( )
inline

◆ pushed_join_ref() [2/2]

const auto& AccessPath::pushed_join_ref ( ) const
inline

◆ ref() [1/2]

auto& AccessPath::ref ( )
inline

◆ ref() [2/2]

const auto& AccessPath::ref ( ) const
inline

◆ ref_or_null() [1/2]

auto& AccessPath::ref_or_null ( )
inline

◆ ref_or_null() [2/2]

const auto& AccessPath::ref_or_null ( ) const
inline

◆ remove_duplicates() [1/2]

auto& AccessPath::remove_duplicates ( )
inline

◆ remove_duplicates() [2/2]

const auto& AccessPath::remove_duplicates ( ) const
inline

◆ remove_duplicates_on_index() [1/2]

auto& AccessPath::remove_duplicates_on_index ( )
inline

◆ remove_duplicates_on_index() [2/2]

const auto& AccessPath::remove_duplicates_on_index ( ) const
inline

◆ sort() [1/2]

auto& AccessPath::sort ( )
inline

◆ sort() [2/2]

const auto& AccessPath::sort ( ) const
inline

◆ stream() [1/2]

auto& AccessPath::stream ( )
inline

◆ stream() [2/2]

const auto& AccessPath::stream ( ) const
inline

◆ table_scan() [1/2]

auto& AccessPath::table_scan ( )
inline

◆ table_scan() [2/2]

const auto& AccessPath::table_scan ( ) const
inline

◆ table_value_constructor() [1/2]

auto& AccessPath::table_value_constructor ( )
inline

◆ table_value_constructor() [2/2]

const auto& AccessPath::table_value_constructor ( ) const
inline

◆ temptable_aggregate() [1/2]

auto& AccessPath::temptable_aggregate ( )
inline

◆ temptable_aggregate() [2/2]

const auto& AccessPath::temptable_aggregate ( ) const
inline

◆ unqualified_count() [1/2]

auto& AccessPath::unqualified_count ( )
inline

◆ unqualified_count() [2/2]

const auto& AccessPath::unqualified_count ( ) const
inline

◆ weedout() [1/2]

auto& AccessPath::weedout ( )
inline

◆ weedout() [2/2]

const auto& AccessPath::weedout ( ) const
inline

◆ window() [1/2]

auto& AccessPath::window ( )
inline

◆ window() [2/2]

const auto& AccessPath::window ( ) const
inline

◆ zero_rows() [1/2]

auto& AccessPath::zero_rows ( )
inline

◆ zero_rows() [2/2]

const auto& AccessPath::zero_rows ( ) const
inline

◆ zero_rows_aggregated() [1/2]

auto& AccessPath::zero_rows_aggregated ( )
inline

◆ zero_rows_aggregated() [2/2]

const auto& AccessPath::zero_rows_aggregated ( ) const
inline

Member Data Documentation

◆ 

union { ... }

◆ 

union { ... }

◆ 

struct { ... } AccessPath::aggregate

◆ allow_spill_to_disk

bool AccessPath::allow_spill_to_disk

◆ 

struct { ... } AccessPath::alternative

◆ 

struct { ... } AccessPath::append

◆ applied_sargable_join_predicates

uint64_t AccessPath::applied_sargable_join_predicates

Bitmap of sargable join predicates that have already been applied in this access path by means of an index lookup (ref access), again referring to “predicates”, and thus should not be counted again for selectivity.

Note that the filter may need to be applied nevertheless (especially in case of type conversions); see subsumed_sargable_join_predicates.

Since these refer to the same array as filter_predicates, they will never overlap with filter_predicates, and so we can reuse the same memory using a union, even though the meaning is entirely separate. If N = num_where_predicates in the hypergraph, then bits 0..(N-1) belong to filter_predicates, and the rest to applied_sargable_join_predicates.
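
A small sketch of that bit partitioning, with hypothetical helper names (illustrative only, not from the server source):

#include <cstdint>

// Bits 0..(N-1) of the shared 64-bit word belong to filter_predicates;
// everything above belongs to applied_sargable_join_predicates. N is the
// number of WHERE predicates in the hypergraph.
inline uint64_t WherePredicateMask(int num_where_predicates) {
  if (num_where_predicates >= 64) return ~uint64_t{0};
  return (uint64_t{1} << num_where_predicates) - 1;
}

inline uint64_t SargableBits(uint64_t word, int num_where_predicates) {
  return word & ~WherePredicateMask(num_where_predicates);
}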

◆ 

struct { ... } AccessPath::bka_join

◆ bka_path

AccessPath* AccessPath::bka_path

◆ 

struct { ... } AccessPath::cache_invalidator

◆ cause

const char* AccessPath::cause

◆ child

AccessPath* AccessPath::child

◆ children

Mem_root_array<AppendPathParameters>* AccessPath::children

◆ condition

Item* AccessPath::condition

◆ 

struct { ... } AccessPath::const_table

◆ cost

double AccessPath::cost {-1.0}

Expected cost to read all of this access path once; -1.0 for unknown.

◆ cost_before_filter

double AccessPath::cost_before_filter {-1.0}

◆ count_all_rows

bool AccessPath::count_all_rows

◆ count_examined_rows

bool AccessPath::count_examined_rows = false

Whether this access path counts as one that scans a base table, and thus should be counted towards examined_rows.

It can sometimes seem a bit arbitrary which iterators count towards examined_rows and which ones do not, so the only canonical reference is the tests.

◆ delayed_predicates

uint64_t AccessPath::delayed_predicates {0}

Bitmap of WHERE predicates that touch tables we have joined in, but that we could not apply yet (for instance because they reference other tables, or because we could not push them down into the nullable side of outer joins).

Used during planning only (see filter_predicates).

TODO(sgunders): Add some technique for “overflow bitset” to allow having more than 64 predicates. (For now, we refuse queries that have more.)

◆ 

struct { ... } AccessPath::dynamic_index_range_scan

◆ 

struct { ... } AccessPath::eq_ref

◆ 

struct { ... } AccessPath::fake_single_row

◆ filesort

Filesort* AccessPath::filesort

◆ 

struct { ... } AccessPath::filter

◆ filter_predicates

uint64_t AccessPath::filter_predicates {0}

Bitmap of WHERE predicates that we are including on this access path, referring to the “predicates” array internal to the join optimizer.

Since bit masks are much cheaper to deal with than creating Item objects, and we don't invent new conditions during join optimization (all of them are known when we begin optimization), we stick to manipulating bit masks during optimization, saying which filters will be applied at this node (a 1-bit means the filter will be applied here; if there are multiple ones, they are ANDed together).

This is used during join optimization only; before iterators are created, we will add FILTER access paths to represent these instead, removing the dependency on the array. Said FILTER paths are by convention created with materialize_subqueries = false, since by far the most common case is that there are no subqueries in the predicate. In other words, if you wish to represent a filter with materialize_subqueries = true, you will need to make an explicit FILTER node.

TODO(sgunders): Add some technique for “overflow bitset” to allow having more than 64 predicates. (For now, we refuse queries that have more.)
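
As an illustration of the convention above, a sketch of turning a planned filter into an explicit FILTER node. The helper is hypothetical and heap-allocates for brevity; real optimizer code would allocate on a MEM_ROOT, and this only compiles inside the server tree.

#include <access_path.h>  // as included at the top of this page

// Hypothetical helper: wrap `child` in an explicit FILTER path for the
// given condition, following the materialize_subqueries = false convention.
AccessPath *WrapInFilter(AccessPath *child, Item *condition) {
  AccessPath *path = new AccessPath;
  path->type = AccessPath::FILTER;
  path->filter().child = child;
  path->filter().condition = condition;
  path->filter().materialize_subqueries = false;  // the documented default
  return path;
}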

◆ 

struct { ... } AccessPath::follow_tail

◆ ft_func

Item_func_match* AccessPath::ft_func

◆ 

struct { ... } AccessPath::full_text_search

◆ group_items

Item** AccessPath::group_items

◆ group_items_size

int AccessPath::group_items_size

◆ 

struct { ... } AccessPath::hash_join

◆ idx

int AccessPath::idx

◆ 

struct { ... } AccessPath::index_range_scan

◆ 

struct { ... } AccessPath::index_scan

◆ init_cost

double AccessPath::init_cost {-1.0}

Expected cost to initialize this access path; i.e., the cost to read k out of N rows would be init_cost + (k/N) * (cost - init_cost).

Note that EXPLAIN prints out the cost of reading the first row, because that is easier for the user to interpret and also easier to measure in EXPLAIN ANALYZE; but it is easier to do calculations with a pure initialization cost, so that is what we use in this member. -1.0 for unknown.
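
The formula transcribes directly into code; a one-line sketch with illustrative numbers:

// Expected cost to read k out of N rows, exactly as stated above.
inline double CostToReadKRows(double init_cost, double cost, double k, double N) {
  return init_cost + (k / N) * (cost - init_cost);
}
// E.g., with init_cost = 10 and cost = 110, reading 25 of 100 rows is
// expected to cost 10 + 0.25 * 100 = 35.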

◆ init_once_cost

double AccessPath::init_once_cost {0.0}

Of init_cost, how much of the initialization needs to be done only once per query block.

(This is a cost, not a proportion.) I.e., if the access path can reuse some of its initialization work when Init() is called multiple times, this member will be nonzero. A typical example is a materialized table with rematerialize=false; the second time Init() is called, it's a no-op. Most paths will have init_once_cost = 0.0, i.e., repeated scans will cost the same. We do not intend to use this field to model cache effects.

This is currently not printed in EXPLAIN, only in the optimizer trace.
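
A natural consequence, assuming (this is an inference from the description above, not stated verbatim) that init_once_cost is simply saved on every scan after the first:

// Sketch: expected cost of num_scans full scans of the same path within
// one query block, under the assumption stated above.
inline double CostOfRepeatedScans(double cost, double init_once_cost, int num_scans) {
  if (num_scans <= 0) return 0.0;
  return cost + (num_scans - 1) * (cost - init_once_cost);
}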

◆ inner

AccessPath * AccessPath::inner

◆ is_unique

bool AccessPath::is_unique

◆ iterator

RowIterator* AccessPath::iterator = nullptr

If an iterator has been instantiated for this access path, points to the iterator.

Used for constructing iterators that need to talk to each other (e.g. for recursive CTEs, or BKA join), and also for locating timing information in EXPLAIN ANALYZE queries.

◆ join

JOIN* AccessPath::join

◆ join_predicate

const JoinPredicate* AccessPath::join_predicate

◆ join_type

JoinType AccessPath::join_type

◆ keep_current_rowid

bool AccessPath::keep_current_rowid

◆ key

KEY* AccessPath::key

◆ key_len

size_t AccessPath::key_len

◆ limit

ha_rows AccessPath::limit

◆ 

struct { ... } AccessPath::limit_offset

◆ loosescan_key_len

unsigned AccessPath::loosescan_key_len

◆ 

struct { ... } AccessPath::materialize

◆ 

struct { ... } AccessPath::materialize_information_schema_table

◆ materialize_subqueries

bool AccessPath::materialize_subqueries

◆ 

struct { ... } AccessPath::materialized_table_function

◆ 

struct { ... } AccessPath::mrr

◆ mrr_flags

int AccessPath::mrr_flags

◆ mrr_length_per_rec

unsigned AccessPath::mrr_length_per_rec

◆ name

const char* AccessPath::name

◆ needs_buffering

bool AccessPath::needs_buffering

◆ 

struct { ... } AccessPath::nested_loop_join

◆ 

struct { ... } AccessPath::nested_loop_semijoin_with_duplicate_removal

◆ num_output_rows

double AccessPath::num_output_rows {-1.0}

Expected number of output rows, -1.0 for unknown.

◆ num_output_rows_before_filter

double AccessPath::num_output_rows_before_filter {-1.0}

If there is no filter, these fields (num_output_rows_before_filter and cost_before_filter) are identical to num_output_rows and cost, respectively.

init_cost is always the same (filters have zero initialization cost).
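
Together with num_output_rows, the pre-filter value lets one back out the combined selectivity of the attached filters; a sketch (checking for the -1.0 "unknown" sentinel is left to the caller):

// Combined selectivity of the filters applied on top of this path.
inline double FilterSelectivity(double num_output_rows, double num_output_rows_before_filter) {
  return num_output_rows / num_output_rows_before_filter;
}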

◆ offset

ha_rows AccessPath::offset

◆ order

ORDER* AccessPath::order

◆ ordering_state

int AccessPath::ordering_state = 0

Which ordering the rows produced by this path follow, if any (see interesting_orders.h).

This is really a LogicalOrderings::StateIndex, but we don't want to add a dependency on interesting_orders.h from this file, so we use the base type instead of the typedef here.

◆ outer

AccessPath* AccessPath::outer

◆ param

MaterializePathParameters* AccessPath::param

◆ parameter_tables

hypergraph::NodeMap AccessPath::parameter_tables {0}

If nonzero, a bitmap of other tables whose joined-in rows must already be loaded when rows from this access path are evaluated; that is, this access path must be put on the inner side of a nested-loop join (or multiple such joins) where the outer side includes all of the given tables.

The most obvious case for this is dependent tables in LATERAL, but a more common case is when we have pushed join conditions referring to those tables; e.g., if this access path represents t1 and we have a condition t1.x=t2.x that is pushed down into an index lookup (ref access), t2 will be set in this bitmap. We can still join in other tables, deferring t2, but the bit(s) will then propagate, and we cannot be on the right side of a hash join until parameter_tables is zero again.

As a special case, we allow setting RAND_TABLE_BIT, even though it is normally part of a table_map, not a NodeMap. In this case, it specifies that the access path is entirely noncacheable, because it depends on something nondeterministic or an outer reference, and thus can never be on the right side of a hash join, ever.
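
The planning rule above reduces to a simple test; a sketch with the NodeMap represented as a plain uint64_t (hypothetical helper name):

#include <cstdint>

// A path with outstanding parameter tables must stay on the inner side of
// a nested-loop join; only once the bitmap is empty again can it be placed
// on the right side of a hash join.
inline bool CanBeOnRightSideOfHashJoin(uint64_t parameter_tables) {
  return parameter_tables == 0;
}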

◆ pfs_batch_mode

bool AccessPath::pfs_batch_mode

◆ provide_rowid

bool AccessPath::provide_rowid

◆ 

struct { ... } AccessPath::pushed_join_ref

◆ qep_tab

QEP_TAB* AccessPath::qep_tab

◆ quick

QUICK_SELECT_I* AccessPath::quick

◆ rec_per_key

float AccessPath::rec_per_key

◆ ref [1/2]

TABLE_REF* AccessPath::ref

◆  [2/2]

struct { ... } AccessPath::ref

◆ 

struct { ... } AccessPath::ref_or_null

◆ ref_slice

int AccessPath::ref_slice

◆ reject_multiple_rows

bool AccessPath::reject_multiple_rows

◆ remove_duplicates [1/2]

bool AccessPath::remove_duplicates

◆  [2/2]

struct { ... } AccessPath::remove_duplicates

◆ 

struct { ... } AccessPath::remove_duplicates_on_index

◆ reverse

bool AccessPath::reverse

◆ rewrite_semi_to_inner

bool AccessPath::rewrite_semi_to_inner

◆ rollup

bool AccessPath::rollup

◆ secondary_engine_data

void* AccessPath::secondary_engine_data {nullptr}

Auxiliary data used by a secondary storage engine while processing the access path during optimization and execution.

The secondary storage engine is free to store any useful information in this member, for example extra statistics or cost estimates. The data pointed to is fully owned by the secondary storage engine, and it is the responsibility of the secondary engine to manage the memory and make sure it is properly destroyed.

◆ send_records_override

ha_rows* AccessPath::send_records_override

◆ 

struct { ... } AccessPath::sort

◆ store_rowids

bool AccessPath::store_rowids

◆ 

struct { ... } AccessPath::stream

◆ subquery_path

AccessPath* AccessPath::subquery_path

◆ subsumed_sargable_join_predicates

uint64_t AccessPath::subsumed_sargable_join_predicates

Similar to applied_sargable_join_predicates, a bitmap of sargable join predicates that have been applied and will subsume the join predicate entirely, i.e., not only should the selectivity not be double-counted, but the predicate itself is redundant and need not be applied as a filter.

(It is an error to have a bit set here but not in applied_sargable_join_predicates.)
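
That invariant can be expressed as a one-line check (a sketch, not server code):

#include <cassert>
#include <cstdint>

// Every subsumed sargable join predicate must also be marked as applied.
inline void CheckSargableInvariant(uint64_t applied, uint64_t subsumed) {
  assert((subsumed & ~applied) == 0);
}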

◆ table [1/2]

TABLE* AccessPath::table

◆ table [2/2]

const TABLE* AccessPath::table

◆ table_function

Table_function* AccessPath::table_function

◆ table_list

TABLE_LIST* AccessPath::table_list

◆ table_path

AccessPath* AccessPath::table_path

◆ 

struct { ... } AccessPath::table_scan

◆ table_scan_path

AccessPath* AccessPath::table_scan_path

◆ 

struct { ... } AccessPath::table_value_constructor

◆ tables_to_get_rowid_for

table_map AccessPath::tables_to_get_rowid_for

◆ temp_table_param

Temp_table_param* AccessPath::temp_table_param

◆ 

struct { ... } AccessPath::temptable_aggregate

◆ type

enum AccessPath::Type AccessPath::type

◆ 

union { ... } AccessPath::u

◆ 

struct { ... } AccessPath::unqualified_count

◆ unwrap_rollup

bool AccessPath::unwrap_rollup

◆ use_order

bool AccessPath::use_order

◆ used_ref

TABLE_REF* AccessPath::used_ref

◆ 

struct { ... } AccessPath::weedout

◆ weedout_table

SJ_TMP_TABLE* AccessPath::weedout_table

◆ 

struct { ... } AccessPath::window

◆ 

struct { ... } AccessPath::zero_rows

◆ 

struct { ... } AccessPath::zero_rows_aggregated

The documentation for this struct was generated from the following file: access_path.h