MySQL 9.1.0
Source Code Documentation
The handler class is the interface for dynamically loadable storage engines. More...
#include <handler.h>
Public Types | |
enum | enum_range_scan_direction { RANGE_SCAN_ASC , RANGE_SCAN_DESC } |
enum | { NONE = 0 , INDEX , RND , SAMPLING } |
typedef ulonglong | Table_flags |
using | Blob_context = void * |
using | Load_init_cbk = std::function< bool(void *cookie, ulong ncols, ulong row_len, const ulong *col_offsets, const ulong *null_byte_offsets, const ulong *null_bitmasks)> |
This callback is called by each parallel load thread at the beginning of the parallel load for the adapter scan. More... | |
using | Load_cbk = std::function< bool(void *cookie, uint nrows, void *rowdata, uint64_t partition_id)> |
This callback is called by each parallel load thread when processing of rows is required for the adapter scan. More... | |
using | Load_end_cbk = std::function< void(void *cookie)> |
This callback is called by each parallel load thread when processing of rows has ended for the adapter scan. More... | |
typedef void(* | my_gcolumn_template_callback_t) (const TABLE *, void *) |
Callback function that will be called by my_prepare_gcolumn_template once the table has been opened. More... | |
Public Member Functions | |
void | unbind_psi () |
void | rebind_psi () |
void | start_psi_batch_mode () |
Put the handler in 'batch' mode when collecting table io instrumented events. More... | |
void | end_psi_batch_mode () |
End a batch started with start_psi_batch_mode. More... | |
bool | end_psi_batch_mode_if_started () |
If a PSI batch was started, turn it off. More... | |
handler (handlerton *ht_arg, TABLE_SHARE *share_arg) | |
virtual | ~handler (void) |
virtual std::string | explain_extra () const |
Return extra handler specific text for EXPLAIN. More... | |
virtual handler * | clone (const char *name, MEM_ROOT *mem_root) |
void | init () |
This is called after create to allow us to set up cached variables. More... | |
void | ha_set_record_buffer (Record_buffer *buffer) |
Set a record buffer that the storage engine can use for multi-row reads. More... | |
Record_buffer * | ha_get_record_buffer () const |
Get the record buffer that was set with ha_set_record_buffer(). More... | |
bool | ha_is_record_buffer_wanted (ha_rows *const max_rows) const |
Does this handler want to get a Record_buffer for multi-row reads via the ha_set_record_buffer() function? And if so, what is the maximum number of records to allocate space for in the buffer? More... | |
int | ha_open (TABLE *table, const char *name, int mode, int test_if_locked, const dd::Table *table_def) |
int | ha_close (void) |
Close handler. More... | |
int | ha_index_init (uint idx, bool sorted) |
Initialize use of index. More... | |
int | ha_index_end () |
End use of index. More... | |
int | ha_rnd_init (bool scan) |
Initialize table for random read or scan. More... | |
int | ha_rnd_end () |
End use of random access. More... | |
int | ha_rnd_next (uchar *buf) |
Read next row via random scan. More... | |
int | ha_rnd_pos (uchar *buf, uchar *pos) |
Read row via random scan from position. More... | |
int | ha_index_read_map (uchar *buf, const uchar *key, key_part_map keypart_map, enum ha_rkey_function find_flag) |
Read [part of] row via [part of] index. More... | |
int | ha_index_read_last_map (uchar *buf, const uchar *key, key_part_map keypart_map) |
int | ha_index_read_idx_map (uchar *buf, uint index, const uchar *key, key_part_map keypart_map, enum ha_rkey_function find_flag) |
Initializes an index and reads from it. More... | |
int | ha_index_next (uchar *buf) |
Reads the next row via index. More... | |
int | ha_index_prev (uchar *buf) |
Reads the previous row via index. More... | |
int | ha_index_first (uchar *buf) |
Reads the first row via index. More... | |
int | ha_index_last (uchar *buf) |
Reads the last row via index. More... | |
int | ha_index_next_same (uchar *buf, const uchar *key, uint keylen) |
Reads the next row with the same key via index. More... | |
int | ha_reset () |
Check handler usage and reset state of file to after 'open'. More... | |
int | ha_index_or_rnd_end () |
Table_flags | ha_table_flags () const |
The cached_table_flags is set at ha_open and ha_external_lock. More... | |
int | ha_external_lock (THD *thd, int lock_type) |
These functions represent the public interface to users of the handler class, hence they are not virtual. More... | |
int | ha_write_row (uchar *buf) |
int | ha_update_row (const uchar *old_data, uchar *new_data) |
Update the current row. More... | |
int | ha_delete_row (const uchar *buf) |
void | ha_release_auto_increment () |
int | ha_check_for_upgrade (HA_CHECK_OPT *check_opt) |
int | ha_check (THD *thd, HA_CHECK_OPT *check_opt) |
To be called to get 'check()' functionality. More... | |
int | ha_repair (THD *thd, HA_CHECK_OPT *check_opt) |
Repair table: public interface. More... | |
void | ha_start_bulk_insert (ha_rows rows) |
Start bulk insert. More... | |
int | ha_end_bulk_insert () |
End bulk insert. More... | |
int | ha_bulk_update_row (const uchar *old_data, uchar *new_data, uint *dup_key_found) |
Bulk update row: public interface. More... | |
int | ha_delete_all_rows () |
Delete all rows: public interface. More... | |
int | ha_truncate (dd::Table *table_def) |
Truncate table: public interface. More... | |
int | ha_optimize (THD *thd, HA_CHECK_OPT *check_opt) |
Optimize table: public interface. More... | |
int | ha_analyze (THD *thd, HA_CHECK_OPT *check_opt) |
Analyze table: public interface. More... | |
bool | ha_check_and_repair (THD *thd) |
Check and repair table: public interface. More... | |
int | ha_disable_indexes (uint mode) |
Disable indexes: public interface. More... | |
int | ha_enable_indexes (uint mode) |
Enable indexes: public interface. More... | |
int | ha_discard_or_import_tablespace (bool discard, dd::Table *table_def) |
Discard or import tablespace: public interface. More... | |
int | ha_rename_table (const char *from, const char *to, const dd::Table *from_table_def, dd::Table *to_table_def) |
Rename table: public interface. More... | |
int | ha_delete_table (const char *name, const dd::Table *table_def) |
Delete table: public interface. More... | |
void | ha_drop_table (const char *name) |
Drop table in the engine: public interface. More... | |
int | ha_create (const char *name, TABLE *form, HA_CREATE_INFO *info, dd::Table *table_def) |
Create a table in the engine: public interface. More... | |
int | ha_load_table (const TABLE &table, bool *skip_metadata_update) |
Loads a table into its defined secondary storage engine: public interface. More... | |
int | ha_unload_table (const char *db_name, const char *table_name, bool error_if_not_loaded) |
Unloads a table from its defined secondary storage engine: public interface. More... | |
virtual int | parallel_scan_init (void *&scan_ctx, size_t *num_threads, bool use_reserved_threads, size_t max_desired_threads) |
Initializes a parallel scan. More... | |
virtual int | parallel_scan (void *scan_ctx, void **thread_ctxs, Load_init_cbk init_fn, Load_cbk load_fn, Load_end_cbk end_fn) |
Run the parallel read of data. More... | |
virtual void | parallel_scan_end (void *scan_ctx) |
End of the parallel scan. More... | |
virtual bool | bulk_load_check (THD *thd) const |
Check if the table is ready for bulk load. More... | |
virtual size_t | bulk_load_available_memory (THD *thd) const |
Get the total memory available for bulk load in SE. More... | |
virtual void * | bulk_load_begin (THD *thd, size_t data_size, size_t memory, size_t num_threads) |
Begin parallel bulk data load to the table. More... | |
virtual int | bulk_load_execute (THD *thd, void *load_ctx, size_t thread_idx, const Rows_mysql &rows, Bulk_load::Stat_callbacks &wait_cbk) |
Execute bulk load operation. More... | |
virtual int | open_blob (THD *thd, void *load_ctx, size_t thread_idx, Blob_context &blob_ctx, unsigned char *blobref) |
Open a blob for write operation. More... | |
virtual int | write_blob (THD *thd, void *load_ctx, size_t thread_idx, Blob_context blob_ctx, unsigned char *blobref, const unsigned char *data, size_t data_len) |
Write to a blob. More... | |
virtual int | close_blob (THD *thd, void *load_ctx, size_t thread_idx, Blob_context blob_ctx, unsigned char *blobref) |
Close the blob. More... | |
virtual int | bulk_load_end (THD *thd, void *load_ctx, bool is_error) |
End bulk load operation. More... | |
bool | ha_get_se_private_data (dd::Table *dd_table, bool reset) |
Submit a dd::Table object representing a core DD table having hardcoded data to be filled in by the DDSE. More... | |
void | adjust_next_insert_id_after_explicit_value (ulonglong nr) |
int | update_auto_increment () |
virtual void | print_error (int error, myf errflag) |
Print error that we got from handler function. More... | |
virtual bool | get_error_message (int error, String *buf) |
Return an error message specific to this handler. More... | |
uint | get_dup_key (int error) |
virtual bool | get_foreign_dup_key (char *child_table_name, uint child_table_name_len, char *child_key_name, uint child_key_name_len) |
Retrieves the names of the table and the key for which there was a duplicate entry in the case of HA_ERR_FOREIGN_DUPLICATE_KEY. More... | |
virtual void | change_table_ptr (TABLE *table_arg, TABLE_SHARE *share) |
Change the internal TABLE_SHARE pointer. More... | |
const TABLE_SHARE * | get_table_share () const |
const TABLE * | get_table () const |
virtual double | scan_time () |
virtual double | read_time (uint index, uint ranges, ha_rows rows) |
The cost of reading a set of ranges from the table using an index to access it. More... | |
virtual double | index_only_read_time (uint keynr, double records) |
Calculate cost of 'index only' scan for given index and number of records. More... | |
virtual Cost_estimate | table_scan_cost () |
Cost estimate for doing a complete table scan. More... | |
virtual Cost_estimate | index_scan_cost (uint index, double ranges, double rows) |
Cost estimate for reading a number of ranges from an index. More... | |
virtual Cost_estimate | read_cost (uint index, double ranges, double rows) |
Cost estimate for reading a set of ranges from the table using an index to access it. More... | |
virtual double | page_read_cost (uint index, double reads) |
Cost estimate for doing a number of non-sequentially accesses against the storage engine. More... | |
virtual double | worst_seek_times (double reads) |
Provide an upper cost-limit of doing a specified number of seek-and-read key lookups. More... | |
virtual longlong | get_memory_buffer_size () const |
Return an estimate on the amount of memory the storage engine will use for caching data in memory. More... | |
double | table_in_memory_estimate () const |
Return an estimate of how much of the table that is currently stored in main memory. More... | |
double | index_in_memory_estimate (uint keyno) const |
Return an estimate of how much of the index that is currently stored in main memory. More... | |
int | ha_sample_init (void *&scan_ctx, double sampling_percentage, int sampling_seed, enum_sampling_method sampling_method, const bool tablesample) |
Initialize sampling. More... | |
int | ha_sample_next (void *scan_ctx, uchar *buf) |
Get the next record for sampling. More... | |
int | ha_sample_end (void *scan_ctx) |
End sampling. More... | |
virtual ha_rows | multi_range_read_info_const (uint keyno, RANGE_SEQ_IF *seq, void *seq_init_param, uint n_ranges, uint *bufsz, uint *flags, bool *force_default_mrr, Cost_estimate *cost) |
Get cost and other information about MRR scan over a known list of ranges. More... | |
virtual ha_rows | multi_range_read_info (uint keyno, uint n_ranges, uint keys, uint *bufsz, uint *flags, Cost_estimate *cost) |
Get cost and other information about MRR scan over some sequence of ranges. More... | |
virtual int | multi_range_read_init (RANGE_SEQ_IF *seq, void *seq_init_param, uint n_ranges, uint mode, HANDLER_BUFFER *buf) |
Initialize the MRR scan. More... | |
int | ha_multi_range_read_next (char **range_info) |
int | ha_read_range_first (const key_range *start_key, const key_range *end_key, bool eq_range, bool sorted) |
int | ha_read_range_next () |
bool | has_transactions () |
virtual uint | extra_rec_buf_length () const |
virtual bool | is_ignorable_error (int error) |
Determine whether an error can be ignored or not. More... | |
virtual bool | is_fatal_error (int error) |
Determine whether an error is fatal or not. More... | |
int | ha_records (ha_rows *num_rows) |
Wrapper function to call records() in storage engine. More... | |
int | ha_records (ha_rows *num_rows, uint index) |
Wrapper function to call records_from_index() in storage engine. More... | |
virtual ha_rows | estimate_rows_upper_bound () |
Return upper bound of current number of records in the table (max. More... | |
virtual enum row_type | get_real_row_type (const HA_CREATE_INFO *create_info) const |
Get real row type for the table created based on one specified by user, CREATE TABLE options and SE capabilities. More... | |
virtual enum ha_key_alg | get_default_index_algorithm () const |
Get default key algorithm for SE. More... | |
virtual bool | is_index_algorithm_supported (enum ha_key_alg key_alg) const |
Check if SE supports specific key algorithm. More... | |
virtual void | column_bitmaps_signal () |
Signal that the table->read_set and table->write_set table maps have changed. The handler is allowed to set additional bits in the above maps in this call. More... | |
uint | get_index (void) const |
virtual bool | start_bulk_update () |
virtual bool | start_bulk_delete () |
virtual int | exec_bulk_update (uint *dup_key_found) |
After this call all outstanding updates must be performed. More... | |
virtual void | end_bulk_update () |
Perform any needed clean-up, no outstanding updates are there at the moment. More... | |
virtual int | end_bulk_delete () |
Execute all outstanding deletes and close down the bulk delete. More... | |
void | set_end_range (const key_range *range, enum_range_scan_direction direction) |
Set the end position for a range scan. More... | |
int | compare_key (key_range *range) |
Compare if found key (in row) is over max-value. More... | |
int | compare_key_icp (const key_range *range) const |
int | compare_key_in_buffer (const uchar *buf) const |
Check if the key in the given buffer (which is not necessarily TABLE::record[0]) is within range. More... | |
virtual int | ft_init () |
virtual FT_INFO * | ft_init_ext (uint flags, uint inx, String *key) |
virtual FT_INFO * | ft_init_ext_with_hints (uint inx, String *key, Ft_hints *hints) |
int | ha_ft_read (uchar *buf) |
int | ha_read_first_row (uchar *buf, uint primary_key) |
Read first row (only) from a table. More... | |
virtual int | rnd_pos_by_record (uchar *record) |
This function only works for handlers having HA_PRIMARY_KEY_REQUIRED_FOR_POSITION set. More... | |
virtual ha_rows | records_in_range (uint inx, key_range *min_key, key_range *max_key) |
Find number of records in a range. More... | |
virtual void | position (const uchar *record)=0 |
virtual int | info (uint flag)=0 |
General method to gather info from handler. More... | |
virtual uint32 | calculate_key_hash_value (Field **field_array) |
int | ha_extra (enum ha_extra_function operation) |
Request storage engine to do an extra operation: enable,disable or run some functionality. More... | |
virtual int | extra_opt (enum ha_extra_function operation, ulong cache_size) |
virtual const handlerton * | hton_supporting_engine_pushdown () |
Get the handlerton of the storage engine if the SE is capable of pushing down some of the AccessPath functionality. More... | |
virtual bool | start_read_removal (void) |
Start read (before write) removal on the current table. More... | |
virtual ha_rows | end_read_removal (void) |
End read (before write) removal and return the number of rows really written. More... | |
virtual bool | was_semi_consistent_read () |
virtual void | try_semi_consistent_read (bool) |
Tell the engine whether it should avoid unnecessary lock waits. More... | |
virtual void | unlock_row () |
Unlock last accessed row. More... | |
virtual int | start_stmt (THD *thd, thr_lock_type lock_type) |
Start a statement when table is locked. More... | |
virtual void | get_auto_increment (ulonglong offset, ulonglong increment, ulonglong nb_desired_values, ulonglong *first_value, ulonglong *nb_reserved_values) |
Reserves an interval of auto_increment values from the handler. More... | |
void | set_next_insert_id (ulonglong id) |
void | restore_auto_increment (ulonglong prev_insert_id) |
virtual void | update_create_info (HA_CREATE_INFO *create_info) |
Update create info as part of ALTER TABLE. More... | |
virtual int | assign_to_keycache (THD *, HA_CHECK_OPT *) |
virtual int | preload_keys (THD *, HA_CHECK_OPT *) |
virtual int | indexes_are_disabled (void) |
Check if indexes are disabled. More... | |
virtual void | append_create_info (String *packet) |
virtual void | init_table_handle_for_HANDLER () |
virtual const char * | table_type () const =0 |
The following can be called without an open handler. More... | |
virtual ulong | index_flags (uint idx, uint part, bool all_parts) const =0 |
uint | max_record_length () const |
uint | max_keys () const |
uint | max_key_parts () const |
uint | max_key_length () const |
uint | max_key_part_length (HA_CREATE_INFO *create_info) const |
virtual uint | max_supported_record_length () const |
virtual uint | max_supported_keys () const |
virtual uint | max_supported_key_parts () const |
virtual uint | max_supported_key_length () const |
virtual uint | max_supported_key_part_length (HA_CREATE_INFO *create_info) const |
virtual uint | min_record_length (uint options) const |
virtual bool | low_byte_first () const |
virtual ha_checksum | checksum () const |
virtual bool | is_crashed () const |
Check if the table is crashed. More... | |
virtual bool | auto_repair () const |
Check if the table can be automatically repaired. More... | |
virtual uint | lock_count (void) const |
Get number of lock objects returned in store_lock. More... | |
virtual THR_LOCK_DATA ** | store_lock (THD *thd, THR_LOCK_DATA **to, enum thr_lock_type lock_type)=0 |
Is not invoked for non-transactional temporary tables. More... | |
virtual bool | primary_key_is_clustered () const |
Check if the primary key is clustered or not. More... | |
virtual int | cmp_ref (const uchar *ref1, const uchar *ref2) const |
Compare two positions. More... | |
virtual const Item * | cond_push (const Item *cond) |
Push condition down to the table handler. More... | |
virtual Item * | idx_cond_push (uint keyno, Item *idx_cond) |
Push down an index condition to the handler. More... | |
virtual void | cancel_pushed_idx_cond () |
Reset information about pushed index conditions. More... | |
virtual uint | number_of_pushed_joins () const |
Reports number of tables included in pushed join which this handler instance is part of. More... | |
virtual const TABLE * | member_of_pushed_join () const |
If this handler instance is part of a pushed join sequence, returns the TABLE instance that is the root of the pushed query. More... | |
virtual const TABLE * | parent_of_pushed_join () const |
If this handler instance is a child in a pushed join sequence, returns the TABLE instance that is its parent. More... | |
virtual table_map | tables_in_pushed_join () const |
int | ha_index_read_pushed (uchar *buf, const uchar *key, key_part_map keypart_map) |
int | ha_index_next_pushed (uchar *buf) |
virtual bool | check_if_incompatible_data (HA_CREATE_INFO *create_info, uint table_changes) |
Part of old, deprecated in-place ALTER API. More... | |
virtual enum_alter_inplace_result | check_if_supported_inplace_alter (TABLE *altered_table, Alter_inplace_info *ha_alter_info) |
Check if a storage engine supports a particular alter table in-place. More... | |
bool | ha_prepare_inplace_alter_table (TABLE *altered_table, Alter_inplace_info *ha_alter_info, const dd::Table *old_table_def, dd::Table *new_table_def) |
Public functions wrapping the actual handler call. More... | |
bool | ha_inplace_alter_table (TABLE *altered_table, Alter_inplace_info *ha_alter_info, const dd::Table *old_table_def, dd::Table *new_table_def) |
Public function wrapping the actual handler call. More... | |
bool | ha_commit_inplace_alter_table (TABLE *altered_table, Alter_inplace_info *ha_alter_info, bool commit, const dd::Table *old_table_def, dd::Table *new_table_def) |
Public function wrapping the actual handler call. More... | |
void | ha_notify_table_changed (Alter_inplace_info *ha_alter_info) |
Public function wrapping the actual handler call. More... | |
virtual void | use_hidden_primary_key () |
use_hidden_primary_key() is called in case of an update/delete when (table_flags() and HA_PRIMARY_KEY_REQUIRED_FOR_DELETE) is defined but we don't have a primary key. More... | |
virtual int | bulk_update_row (const uchar *old_data, uchar *new_data, uint *dup_key_found) |
This method is similar to update_row, however the handler doesn't need to execute the updates at this point in time. More... | |
virtual int | delete_all_rows () |
Delete all rows in a table. More... | |
virtual int | truncate (dd::Table *table_def) |
Quickly remove all rows from a table. More... | |
virtual int | optimize (THD *, HA_CHECK_OPT *) |
virtual int | analyze (THD *, HA_CHECK_OPT *) |
virtual bool | check_and_repair (THD *thd) |
Check and repair the table if necessary. More... | |
virtual int | disable_indexes (uint mode) |
Disable indexes for a while. More... | |
virtual int | enable_indexes (uint mode) |
Enable indexes again. More... | |
virtual int | discard_or_import_tablespace (bool discard, dd::Table *table_def) |
Discard or import tablespace. More... | |
virtual void | drop_table (const char *name) |
virtual int | create (const char *name, TABLE *form, HA_CREATE_INFO *info, dd::Table *table_def)=0 |
Create table (implementation). More... | |
virtual bool | get_se_private_data (dd::Table *dd_table, bool reset) |
virtual int | get_extra_columns_and_keys (const HA_CREATE_INFO *create_info, const List< Create_field > *create_list, const KEY *key_info, uint key_count, dd::Table *table_obj) |
Adjust definition of table to be created by adding implicit columns and indexes necessary for the storage engine. More... | |
virtual bool | set_ha_share_ref (Handler_share **arg_ha_share) |
void | set_ha_table (TABLE *table_arg) |
int | get_lock_type () const |
virtual Partition_handler * | get_partition_handler () |
bool | ha_upgrade_table (THD *thd, const char *dbname, const char *table_name, dd::Table *dd_table, TABLE *table_arg) |
Set se_private_id and se_private_data during upgrade. More... | |
void | ha_set_primary_handler (handler *primary_handler) |
Store a pointer to the handler of the primary table that corresponds to the secondary table in this handler. More... | |
handler * | ha_get_primary_handler () const |
Get a pointer to a handler for the table in the primary storage engine, if this handler is for a table in a secondary storage engine. More... | |
void | ha_mv_key_capacity (uint *num_keys, size_t *keys_length) const |
Return max limits for a single set of multi-valued keys. More... | |
virtual void | set_external_table_offload_error (const char *) |
Propagates the secondary storage engine offload failure reason for a query to the external engine when the offloaded query fails in the secondary storage engine. More... | |
virtual void | external_table_offload_error () const |
Identifies and throws the propagated external engine query offload or exec failure reason given by the external engine handler. More... | |
Static Public Member Functions | |
static bool | my_prepare_gcolumn_template (THD *thd, const char *db_name, const char *table_name, my_gcolumn_template_callback_t myc, void *ib_table) |
Callback to allow InnoDB to prepare a template for generated column processing. More... | |
static bool | my_eval_gcolumn_expr_with_open (THD *thd, const char *db_name, const char *table_name, const MY_BITMAP *const fields, uchar *record, const char **mv_data_ptr, ulong *mv_length) |
Callback for generated columns processing. More... | |
static bool | my_eval_gcolumn_expr (THD *thd, TABLE *table, const MY_BITMAP *const fields, uchar *record, const char **mv_data_ptr, ulong *mv_length) |
Callback for computing generated column values. More... | |
Public Attributes | |
handlerton * | ht |
uchar * | ref |
Pointer to current row. More... | |
uchar * | dup_ref |
Pointer to duplicate row. More... | |
ha_statistics | stats |
range_seq_t | mrr_iter |
RANGE_SEQ_IF | mrr_funcs |
HANDLER_BUFFER * | multi_range_buffer |
uint | ranges_in_seq |
bool | mrr_is_output_sorted |
bool | mrr_have_range |
KEY_MULTI_RANGE | mrr_cur_range |
key_range * | end_range |
End value for a range scan. More... | |
bool | m_virt_gcol_in_end_range = false |
Flag which tells if end_range contains a virtual generated column. More... | |
uint | errkey |
uint | key_used_on_scan |
uint | active_index |
uint | ref_length |
Length of ref (1-8 or the clustered key length) More... | |
FT_INFO * | ft_handler |
enum handler:: { ... } | inited |
bool | implicit_emptied |
const Item * | pushed_cond |
Item * | pushed_idx_cond |
uint | pushed_idx_cond_keyno |
ulonglong | next_insert_id |
next_insert_id is the next value which should be inserted into the auto_increment column: in a multi-row insert statement (like INSERT ... SELECT), for the first row where the autoinc value is not specified by the statement, get_auto_increment() is called and asked to generate a value, next_insert_id is set to the next value, and then for all other rows next_insert_id is used (and increased each time) without calling get_auto_increment(). More... | |
ulonglong | insert_id_for_cur_row |
insert id for the current row (autogenerated; if not autogenerated, it's 0). More... | |
Discrete_interval | auto_inc_interval_for_cur_row |
Interval returned by get_auto_increment() and being consumed by the inserter. More... | |
uint | auto_inc_intervals_count |
Number of reserved auto-increment intervals. More... | |
PSI_table * | m_psi |
Instrumented table associated with this handler. More... | |
std::mt19937 * | m_random_number_engine {nullptr} |
double | m_sampling_percentage |
Protected Member Functions | |
virtual int | multi_range_read_next (char **range_info) |
Get next record in MRR scan. More... | |
virtual int | records (ha_rows *num_rows) |
Number of rows in table. More... | |
virtual int | records_from_index (ha_rows *num_rows, uint index) |
Number of rows in table counted using the secondary index chosen by optimizer. More... | |
virtual int | index_read_map (uchar *buf, const uchar *key, key_part_map keypart_map, enum ha_rkey_function find_flag) |
Positions an index cursor to the index specified in the handle ('active_index'). More... | |
virtual int | index_read_idx_map (uchar *buf, uint index, const uchar *key, key_part_map keypart_map, enum ha_rkey_function find_flag) |
Positions an index cursor to the index specified in argument. More... | |
virtual int | index_next (uchar *) |
virtual int | index_prev (uchar *) |
virtual int | index_first (uchar *) |
virtual int | index_last (uchar *) |
virtual int | index_next_same (uchar *buf, const uchar *key, uint keylen) |
virtual int | index_read_last_map (uchar *buf, const uchar *key, key_part_map keypart_map) |
The following function works like index_read, but it finds the last row with the current key value or prefix. More... | |
virtual int | read_range_first (const key_range *start_key, const key_range *end_key, bool eq_range_arg, bool sorted) |
Read first row between two ranges. More... | |
virtual int | read_range_next () |
Read next row between two endpoints. More... | |
virtual int | rnd_next (uchar *buf)=0 |
virtual int | rnd_pos (uchar *buf, uchar *pos)=0 |
virtual int | ft_read (uchar *) |
virtual int | index_read_pushed (uchar *, const uchar *, key_part_map) |
virtual int | index_next_pushed (uchar *) |
virtual bool | prepare_inplace_alter_table (TABLE *altered_table, Alter_inplace_info *ha_alter_info, const dd::Table *old_table_def, dd::Table *new_table_def) |
Allows the storage engine to update internal structures with concurrent writes blocked. More... | |
virtual bool | inplace_alter_table (TABLE *altered_table, Alter_inplace_info *ha_alter_info, const dd::Table *old_table_def, dd::Table *new_table_def) |
Alter the table structure in-place with operations specified using HA_ALTER_FLAGS and Alter_inplace_info. More... | |
virtual bool | commit_inplace_alter_table (TABLE *altered_table, Alter_inplace_info *ha_alter_info, bool commit, const dd::Table *old_table_def, dd::Table *new_table_def) |
Commit or rollback the changes made during prepare_inplace_alter_table() and inplace_alter_table() inside the storage engine. More... | |
virtual void | notify_table_changed (Alter_inplace_info *ha_alter_info) |
Notify the storage engine that the table definition has been updated. More... | |
void | ha_statistic_increment (ulonglong System_status_var::*offset) const |
THD * | ha_thd () const |
PSI_table_share * | ha_table_share_psi (const TABLE_SHARE *share) const |
Acquire the instrumented table information from a table share. More... | |
virtual int | rename_table (const char *from, const char *to, const dd::Table *from_table_def, dd::Table *to_table_def) |
Default rename_table() and delete_table() rename/delete files with a given name and extensions from handlerton::file_extensions. More... | |
virtual int | delete_table (const char *name, const dd::Table *table_def) |
Delete a table. More... | |
virtual int | index_read (uchar *buf, const uchar *key, uint key_len, enum ha_rkey_function find_flag) |
virtual int | index_read_last (uchar *buf, const uchar *key, uint key_len) |
Handler_share * | get_ha_share_ptr () |
Get an initialized ha_share. More... | |
void | set_ha_share_ptr (Handler_share *arg_ha_share) |
Set ha_share to be used by all instances of the same table/partition. More... | |
void | lock_shared_ha_data () |
Take a lock for protecting shared handler data. More... | |
void | unlock_shared_ha_data () |
Release lock for protecting ha_share. More... | |
Protected Attributes | |
TABLE_SHARE * | table_share |
TABLE * | table |
Table_flags | cached_table_flags {0} |
ha_rows | estimation_rows_to_insert |
KEY_PART_INFO * | range_key_part |
bool | eq_range |
bool | in_range_check_pushed_down |
Private Types | |
enum | batch_mode_t { PSI_BATCH_MODE_NONE , PSI_BATCH_MODE_STARTING , PSI_BATCH_MODE_STARTED } |
Internal state of the batch instrumentation. More... | |
Private Member Functions | |
int | check_collation_compatibility () |
Check for incompatible collation changes. More... | |
double | estimate_in_memory_buffer (ulonglong table_index_size) const |
Make a guesstimate for how much of a table or index is in a memory buffer in the case where the storage engine has not provided any estimate for this. More... | |
int | handle_records_error (int error, ha_rows *num_rows) |
Function will handle the error code from call to records() and records_from_index(). More... | |
virtual int | extra (enum ha_extra_function operation) |
Storage engine specific implementation of ha_extra() More... | |
void | mark_trx_read_write () |
A helper function to mark a transaction read-write, if it is started. More... | |
virtual int | open (const char *name, int mode, uint test_if_locked, const dd::Table *table_def)=0 |
virtual int | close (void)=0 |
virtual int | index_init (uint idx, bool sorted) |
virtual int | index_end () |
virtual int | rnd_init (bool scan)=0 |
rnd_init() can be called two times without rnd_end() in between (it only makes sense if scan=1). More... | |
virtual int | rnd_end () |
virtual int | write_row (uchar *buf) |
Write a row. More... | |
virtual int | update_row (const uchar *old_data, uchar *new_data) |
Update a single row. More... | |
virtual int | delete_row (const uchar *buf) |
virtual int | reset () |
Reset state of file to after 'open'. More... | |
virtual Table_flags | table_flags (void) const =0 |
virtual int | external_lock (THD *thd, int lock_type) |
Is not invoked for non-transactional temporary tables. More... | |
virtual void | release_auto_increment () |
virtual int | check_for_upgrade (HA_CHECK_OPT *) |
admin commands - called from mysql_admin_table More... | |
virtual int | check (THD *, HA_CHECK_OPT *) |
virtual int | repair (THD *, HA_CHECK_OPT *) |
In this method check_opt can be modified to specify CHECK option to use to call check() upon the table. More... | |
virtual void | start_bulk_insert (ha_rows) |
virtual int | end_bulk_insert () |
virtual bool | is_record_buffer_wanted (ha_rows *const max_rows) const |
Does this handler want to get a Record_buffer for multi-row reads via the ha_set_record_buffer() function? And if so, what is the maximum number of records to allocate space for in the buffer? More... | |
virtual bool | upgrade_table (THD *thd, const char *dbname, const char *table_name, dd::Table *dd_table) |
virtual int | sample_init (void *&scan_ctx, double sampling_percentage, int sampling_seed, enum_sampling_method sampling_method, const bool tablesample) |
Initialize sampling. More... | |
virtual int | sample_next (void *scan_ctx, uchar *buf) |
Get the next record for sampling. More... | |
virtual int | sample_end (void *scan_ctx) |
End sampling. More... | |
virtual int | load_table (const TABLE &table, bool *skip_metadata_update) |
Loads a table into its defined secondary storage engine. More... | |
virtual int | unload_table (const char *db_name, const char *table_name, bool error_if_not_loaded) |
Unloads a table from its defined secondary storage engine. More... | |
virtual void | mv_key_capacity (uint *num_keys, size_t *keys_length) const |
Engine-specific function for ha_can_store_mv_keys(). More... | |
bool | filter_dup_records () |
Filter duplicate records when multi-valued index is used for retrieval. More... | |
Private Attributes | |
Record_buffer * | m_record_buffer = nullptr |
Buffer for multi-row reads. More... | |
key_range | save_end_range |
enum_range_scan_direction | range_scan_direction |
int | key_compare_result_on_equal |
handler * | m_primary_handler {nullptr} |
Pointer to the handler of the table in the primary storage engine, if this handler represents a table in a secondary storage engine. More... | |
batch_mode_t | m_psi_batch_mode |
Batch mode state. More... | |
ulonglong | m_psi_numrows |
The number of rows in the batch. More... | |
PSI_table_locker * | m_psi_locker |
The current event in a batch. More... | |
PSI_table_locker_state | m_psi_locker_state |
Storage for the event in a batch. More... | |
int | m_lock_type |
The lock type set when calling ha_external_lock(). More... | |
Handler_share ** | ha_share |
Pointer where to store/retrieve the Handler_share pointer. More... | |
bool | m_update_generated_read_fields |
Some non-virtual ha_* functions, responsible for reading rows, like ha_rnd_pos(), must ensure that virtual generated columns are calculated before they return. More... | |
Unique_on_insert * | m_unique |
Friends | |
class | Partition_handler |
class | DsMrr_impl |
The handler class is the interface for dynamically loadable storage engines.
Do not add ifdefs, and take care when adding or changing virtual functions to avoid vtable confusion.
Functions in this class accept and return table columns data. Two data representation formats are used:
[Warning: this description is work in progress and may be incomplete] The table record is stored in a fixed-size buffer:
record: null_bytes, column1_data, column2_data, ...
The offsets of the parts of the buffer are also fixed: every column has an offset to its column{i}_data, and if it is nullable it also has its own bit in null_bytes.
The record buffer only includes data about columns that are marked in the relevant column set (table->read_set and/or table->write_set, depending on the situation). <not-sure>It could be that it is required that null bits of non-present columns are set to 1</not-sure>
VARIOUS EXCEPTIONS AND SPECIAL CASES
If the table has no nullable columns, then null_bytes is still present, its length is one byte <not-sure> which must be set to 0xFF at all times. </not-sure>
If the table has columns of type BIT, then certain bits from those columns may be stored in null_bytes as well. Grep around for Field_bit for details.
For blob columns (see Field_blob), the record buffer stores the length of the data, followed by a memory pointer to the blob data. The pointer is owned by the storage engine and is valid until the next operation.
If a blob column has NULL value, then its length and blob data pointer must be set to 0.
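The null-byte layout described above can be illustrated with a small, self-contained sketch. This is not MySQL source; the offsets and bitmask values are hypothetical example values chosen only to show how a column's null flag is tested against its null byte.

// Self-contained illustration of the record layout described above.
// The offset and bitmask values are hypothetical, not taken from MySQL.
#include <cstdio>

// Returns true if the column whose null flag lives at byte `null_byte_offset`
// with bit `null_bitmask` is NULL in this record image.
static bool column_is_null(const unsigned char *record,
                           unsigned long null_byte_offset,
                           unsigned char null_bitmask) {
  return (record[null_byte_offset] & null_bitmask) != 0;
}

int main() {
  // record: one null byte, then two 4-byte columns (example layout only).
  unsigned char record[9] = {0x02, /* null bits: column 2 is NULL */
                             1, 0, 0, 0,  /* column 1 data */
                             0, 0, 0, 0}; /* column 2 data (ignored: NULL) */
  std::printf("col1 null: %d, col2 null: %d\n",
              column_is_null(record, 0, 0x01),
              column_is_null(record, 0, 0x02));
  return 0;
}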
The overview below was copied from the storage/partition/ha_partition.h when support for non-native partitioning was removed.
Object create/delete method. Normally called when a table object exists.
Metadata routines for CREATE, DROP, and RENAME TABLE are often used at ALTER TABLE (update_create_info is used from ALTER TABLE and SHOW ...).
Methods: delete_table() rename_table() create() update_create_info()
Open and close handler objects to ensure that all underlying files and objects allocated and deallocated for query handling are handled properly.
A handler object is opened as part of its initialisation and before being used for normal queries (not always before meta-data changes). If the object was opened, it will also be closed before being deleted.
Methods: open() close()
This module contains methods that are used to understand start/end of statements, transaction boundaries, and aid for proper concurrency control.
Methods: store_lock() external_lock() start_stmt() lock_count() unlock_row() was_semi_consistent_read() try_semi_consistent_read()
This part of the handler interface is used to change records for INSERT, DELETE, UPDATE and REPLACE calls, but also for other special meta-data operations such as ALTER TABLE, LOAD DATA and TRUNCATE.
These methods are used for insert (write_row), update (update_row) and delete (delete_row). All methods that change data work on one row at a time. update_row and delete_row also receive the old row. delete_all_rows will delete all rows in the table in one call, as a special optimization for DELETE FROM table.
Bulk inserts are supported if all underlying handlers support them. start_bulk_insert and end_bulk_insert are called before and after a number of calls to write_row.
Methods: write_row() update_row() delete_row() delete_all_rows() start_bulk_insert() end_bulk_insert()
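As a sketch of how these entry points are driven (assuming the MySQL server build environment; the row-preparation step is a placeholder), a caller performing a batch of inserts brackets the ha_write_row() calls with the bulk-insert hints:

// Sketch only: drives the public insert wrappers for an already opened
// handler `h` whose TABLE is `table`. Filling record[0] is left abstract.
static int insert_rows_example(handler *h, TABLE *table, ha_rows nrows) {
  h->ha_start_bulk_insert(nrows);  // hint; the engine may ignore it
  int err = 0;
  for (ha_rows i = 0; i < nrows && err == 0; ++i) {
    // ... fill table->record[0] with the next row image ...
    err = h->ha_write_row(table->record[0]);
  }
  int end_err = h->ha_end_bulk_insert();
  return err != 0 ? err : end_err;
}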
This module provides the most basic access method for any table handler: fetching all data through a full table scan. No indexes are needed to implement this part. It contains one method to start the scan (rnd_init), which can be called multiple times (typical in a nested loop join), one method to proceed to the next record (rnd_next), and one to close the scan (rnd_end). To remember a record for later access there is a method (position), and there is a method used to retrieve the record based on the stored position. The position can be a file position, a primary key, or a ROWID, depending on the underlying handler.
All functions that retrieve records and are callable through the handler interface must indicate whether a record is present after the call or not. Record found is indicated by returning 0 and setting table status to "has row". Record not found is indicated by returning a non-zero value and setting table status to "no row".
Methods:
rnd_init() rnd_end() rnd_next() rnd_pos() rnd_pos_by_record() position()
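A sketch of how a caller drives such a scan through the non-virtual wrappers (server build environment assumed; error handling simplified; the per-row processing is a placeholder):

// Sketch only: a full table scan over an open handler `h` / TABLE `table`.
static int full_scan_example(handler *h, TABLE *table) {
  int err = h->ha_rnd_init(true /* scan */);
  if (err != 0) return err;
  while ((err = h->ha_rnd_next(table->record[0])) == 0) {
    // ... process table->record[0]; h->position(table->record[0]) could be
    //     called here to remember the row for a later ha_rnd_pos() ...
  }
  h->ha_rnd_end();
  return err == HA_ERR_END_OF_FILE ? 0 : err;
}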
This part of the handler interface is used to perform access through indexes. The interface is defined as a scan interface, but the handler can also use key lookup if the index is a unique index or a primary key index. Index scans are mostly useful for SELECT queries, but are also an important part of UPDATE, DELETE, REPLACE and CREATE TABLE table AS SELECT and so forth. Naturally an index is needed for an index scan, and indexes can be either ordered or hash based. Some ordered indexes can return data in order, but not necessarily all of them. There are many flags that define the behavior of indexes in the various handlers. These methods are found in the optimizer module.
index_read is called to start a scan of an index. The find_flag defines the semantics of the scan. These flags are defined in include/my_base.h. index_read_idx is the same, but it also initializes the index before doing the same thing as index_read. Thus it is similar to index_init followed by index_read. This is also how we implement it.
index_read/index_read_idx also return the first row. Thus for key lookups, index_read will be the only call to the handler in the index scan.
index_init initializes an index before using it and index_end does any end processing needed.
Methods: index_read_map() index_init() index_end() index_read_idx_map() index_next() index_prev() index_first() index_last() index_next_same() index_read_last_map() read_range_first() read_range_next()
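For illustration, a key lookup followed by a scan of duplicates might be driven as below. This is a sketch under the same assumptions as before; the index number, key image and keypart map are placeholder inputs.

// Sketch only: exact lookup on the first key part of index `idx`,
// then iteration over rows with the same key value.
static int index_lookup_example(handler *h, TABLE *table, uint idx,
                                const uchar *key, uint key_len) {
  int err = h->ha_index_init(idx, true /* sorted */);
  if (err != 0) return err;
  err = h->ha_index_read_map(table->record[0], key,
                             1 /* keypart_map: first key part only */,
                             HA_READ_KEY_EXACT);
  while (err == 0) {
    // ... process table->record[0] ...
    err = h->ha_index_next_same(table->record[0], key, key_len);
  }
  h->ha_index_end();
  return (err == HA_ERR_END_OF_FILE || err == HA_ERR_KEY_NOT_FOUND) ? 0 : err;
}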
These calls are used to inform the handler of specifics of the ongoing scans and other actions. Most of these are used for optimisation purposes.
Methods: info() get_dynamic_partition_info extra() extra_opt() reset()
NOTE: One important part of the public handler interface that is not depicted in the methods is the attribute records, which is defined in the base class. It is read directly and is set by calling info(HA_STATUS_INFO)?
Methods: min_rows_for_estimate() get_biggest_used_partition() scan_time() read_time() records_in_range() estimate_rows_upper_bound() records()
This module contains various methods that return text messages for table types, index types and error messages.
Methods: table_type() get_row_type() print_error() get_error_message()
This module contains a number of methods defining limitations and characteristics of the handler (see also documentation regarding the individual flags).
Methods: table_flags() index_flags() min_of_the_max_uint() max_supported_record_length() max_supported_keys() max_supported_key_parts() max_supported_key_length() max_supported_key_part_length() low_byte_first() extra_rec_buf_length() min_record_length(uint options) primary_key_is_clustered() ha_key_alg get_default_index_algorithm() is_index_algorithm_supported()
cmp_ref checks if two references are the same. For most handlers this is a simple memcmp of the reference. However, some handlers use the primary key as the reference, and two references can be the same even if memcmp says they are different, due to character sets, trailing spaces and so forth.
Methods: cmp_ref()
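A minimal sketch of the memcmp-based comparison described above, shown as a free function for illustration; in an engine this would be the cmp_ref() member and ref_length would be the handler's own field.

#include <cstring>

// Byte-wise position comparison; <0, 0, >0 as documented for cmp_ref().
static int cmp_ref_bytes(const unsigned char *ref1, const unsigned char *ref2,
                         unsigned int ref_length) {
  return std::memcmp(ref1, ref2, ref_length);
}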
This module is used to handle the support of auto increments.
This variable in the handler is used as part of the handler interface. It is maintained by the parent handler object and should not be touched by child handler objects (see handler.cc for its use).
Methods: get_auto_increment() release_auto_increment()
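As a sketch of what a get_auto_increment() override can do (the atomic counter is an assumption used only for illustration, and plain unsigned long long stands in for ulonglong so the snippet is self-contained):

#include <atomic>

// Hypothetical engine-wide auto-increment counter.
static std::atomic<unsigned long long> g_next_autoinc{1};

// Mirrors the get_auto_increment() signature listed above, using plain types.
static void example_get_auto_increment(unsigned long long /*offset*/,
                                       unsigned long long increment,
                                       unsigned long long nb_desired_values,
                                       unsigned long long *first_value,
                                       unsigned long long *nb_reserved_values) {
  // Reserve nb_desired_values consecutive values in a single atomic step.
  *first_value = g_next_autoinc.fetch_add(increment * nb_desired_values);
  *nb_reserved_values = nb_desired_values;
}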
This method is a special InnoDB method called before a HANDLER query.
Methods: init_table_handle_for_HANDLER()
Fulltext index support.
Methods: ft_init_ext_with_hints() ft_init() ft_init_ext() ft_read()
Methods for in-place ALTER TABLE support (implemented by InnoDB and NDB).
Methods: check_if_supported_inplace_alter() prepare_inplace_alter_table() inplace_alter_table() commit_inplace_alter_table() notify_table_changed()
Methods: discard_or_import_tablespace()
Methods: optimize() analyze() check() repair() check_and_repair() auto_repair() is_crashed() check_for_upgrade() checksum() assign_to_keycache()
Enabling/disabling indexes is only supported by HEAP and MyISAM.
Methods: disable_indexes() enable_indexes() indexes_are_disabled()
Only used by MyISAM MERGE tables.
Methods: append_create_info()
Methods: get_partition_handler()
using handler::Blob_context = void * |
using handler::Load_cbk = std::function<bool(void *cookie, uint nrows, void *rowdata, uint64_t partition_id)> |
This callback is called by each parallel load thread when processing of rows is required for the adapter scan.
[in] | cookie | The cookie for this thread |
[in] | nrows | The nrows that are available |
[in] | rowdata | The mysql-in-memory row data buffer. This is a memory buffer for nrows records. The length of each record is fixed and communicated via Load_init_cbk |
[in] | partition_id | Partition id if it's a partitioned table, else std::numeric_limits<uint64_t>::max() |
using handler::Load_end_cbk = std::function<void(void *cookie)> |
This callback is called by each parallel load thread when processing of rows has ended for the adapter scan.
[in] | cookie | The cookie for this thread |
using handler::Load_init_cbk = std::function<bool( void *cookie, ulong ncols, ulong row_len, const ulong *col_offsets, const ulong *null_byte_offsets, const ulong *null_bitmasks)> |
This callback is called by each parallel load thread at the beginning of the parallel load for the adapter scan.
cookie | The cookie for this thread |
ncols | Number of columns in each row |
row_len | The size of a row in bytes |
col_offsets | An array of size ncols, where each element represents the offset of a column in the row data. The memory of this array belongs to the caller and will be freed after the pload_end_cbk call. |
null_byte_offsets | An array of size ncols, where each element represents the offset of the byte holding a column's null bit in the row data. The memory of this array belongs to the caller and will be freed after the pload_end_cbk call. |
null_bitmasks | An array of size ncols, where each element represents the bitmask required to get the null bit. The memory of this array belongs to the caller and will be freed after the pload_end_cbk call. |
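A sketch of callbacks matching these signatures is shown below. It assumes the MySQL server build environment; MyLoadContext and the interpretation of the bool return value (taken here as false meaning success/continue) are assumptions made for illustration.

#include <vector>

// Hypothetical per-thread cookie passed to the callbacks.
struct MyLoadContext {
  ulong ncols = 0;
  ulong row_len = 0;
  std::vector<ulong> col_offsets, null_byte_offsets, null_bitmasks;
};

handler::Load_init_cbk init_fn =
    [](void *cookie, ulong ncols, ulong row_len, const ulong *col_offsets,
       const ulong *null_byte_offsets, const ulong *null_bitmasks) {
      auto *ctx = static_cast<MyLoadContext *>(cookie);
      ctx->ncols = ncols;
      ctx->row_len = row_len;
      // Copy the arrays: they belong to the caller and are freed after the scan.
      ctx->col_offsets.assign(col_offsets, col_offsets + ncols);
      ctx->null_byte_offsets.assign(null_byte_offsets, null_byte_offsets + ncols);
      ctx->null_bitmasks.assign(null_bitmasks, null_bitmasks + ncols);
      return false;  // assumed: false means "no error"
    };

handler::Load_cbk load_fn =
    [](void *cookie, uint nrows, void *rowdata, uint64_t /*partition_id*/) {
      auto *ctx = static_cast<MyLoadContext *>(cookie);
      auto *rows = static_cast<const unsigned char *>(rowdata);
      for (uint r = 0; r < nrows; ++r) {
        const unsigned char *row = rows + r * ctx->row_len;
        for (ulong c = 0; c < ctx->ncols; ++c) {
          const bool is_null =
              (row[ctx->null_byte_offsets[c]] & ctx->null_bitmasks[c]) != 0;
          const unsigned char *col_data = row + ctx->col_offsets[c];
          (void)is_null;
          (void)col_data;  // consume the column value here
        }
      }
      return false;  // assumed: false means "keep going"
    };

handler::Load_end_cbk end_fn = [](void *cookie) {
  static_cast<MyLoadContext *>(cookie)->col_offsets.clear();  // cleanup
};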
typedef void(* handler::my_gcolumn_template_callback_t) (const TABLE *, void *) |
Callback function that will be called by my_prepare_gcolumn_template once the table has been opened.
typedef ulonglong handler::Table_flags |
void handler::adjust_next_insert_id_after_explicit_value | ( | ulonglong | nr | ) |
|
inlinevirtual |
Reimplemented in temptable::Handler, ha_innobase, and ha_myisam.
|
inlinevirtual |
Reimplemented in ha_myisammrg.
|
inlinevirtual |
Reimplemented in ha_myisam.
|
inlinevirtual |
Check if the table can be automatically repaired.
true | Can be auto repaired |
false | Cannot be auto repaired |
Reimplemented in ha_archive, ha_tina, and ha_myisam.
|
inlinevirtual |
Get the total memory available for bulk load in SE.
[in] | thd | user session |
Reimplemented in ha_innobase.
|
inlinevirtual |
Begin parallel bulk data load to the table.
[in] | thd | user session |
[in] | data_size | total data size to load |
[in] | memory | memory to be used by SE |
[in] | num_threads | number of concurrent threads used for load. |
Reimplemented in ha_innobase.
|
inlinevirtual |
Check if the table is ready for bulk load.
[in] | thd | user session |
Reimplemented in ha_innobase.
|
inlinevirtual |
End bulk load operation.
Must be called after all execution threads have completed. Must be called even if the bulk load execution failed.
[in,out] | thd | user session |
[in,out] | load_ctx | load execution context |
[in] | is_error | true, if bulk load execution has failed |
Reimplemented in ha_innobase.
|
inlinevirtual |
Execute bulk load operation.
To be called by each of the concurrent threads identified by thread index.
[in,out] | thd | user session |
[in,out] | load_ctx | load execution context |
[in] | thread_idx | index of the thread executing |
[in] | rows | rows to be loaded to the table |
Reimplemented in ha_innobase.
|
inlinevirtual |
This method is similar to update_row, however the handler doesn't need to execute the updates at this point in time.
The handler can be certain that another call to bulk_update_row will occur OR a call to exec_bulk_update before the set of updates in this query is concluded.
Note: If HA_ERR_FOUND_DUPP_KEY is returned, the handler must read all columns of the row so MySQL can create an error message. If the columns required for the error message are not read, the error message will contain garbage.
old_data | Old record |
new_data | New record |
dup_key_found | Number of duplicate keys found |
Reimplemented in ha_innopart.
|
inlinevirtual |
Reset information about pushed index conditions.
|
inlinevirtual |
Change the internal TABLE_SHARE pointer.
table_arg | TABLE object |
share | New share to use |
|
inlineprivatevirtual |
Reimplemented in temptable::Handler, ha_archive, ha_tina, ha_innobase, ha_innopart, ha_myisam, and ha_myisammrg.
|
inlinevirtual |
Check and repair the table if necessary.
thd | Thread object |
true | Error/Not supported |
false | Success |
Reimplemented in ha_archive, ha_tina, and ha_myisam.
|
private |
Check for incompatible collation changes.
HA_ADMIN_NEEDS_UPGRADE | Table may have data requiring upgrade. |
0 | No upgrade required. |
|
inlineprivatevirtual |
admin commands - called from mysql_admin_table
Reimplemented in ha_archive.
|
inlinevirtual |
Part of old, deprecated in-place ALTER API.
Reimplemented in temptable::Handler, ha_archive, ha_tina, ha_heap, ha_innobase, ha_innopart, ha_myisam, and ha_myisammrg.
|
virtual |
Check if a storage engine supports a particular alter table in-place.
altered_table | TABLE object for new version of table. |
ha_alter_info | Structure describing changes to be done by ALTER TABLE and holding data used during in-place alter. |
HA_ALTER_ERROR | Unexpected error. |
HA_ALTER_INPLACE_NOT_SUPPORTED | Not supported, must use copy. |
HA_ALTER_INPLACE_EXCLUSIVE_LOCK | Supported, but requires X lock. |
HA_ALTER_INPLACE_SHARED_LOCK_AFTER_PREPARE | Supported, but requires SNW lock during main phase. Prepare phase requires X lock. |
HA_ALTER_INPLACE_SHARED_LOCK | Supported, but requires SNW lock. |
HA_ALTER_INPLACE_NO_LOCK_AFTER_PREPARE | Supported, concurrent reads/writes allowed. However, prepare phase requires X lock. |
HA_ALTER_INPLACE_NO_LOCK | Supported, concurrent reads/writes allowed. |
HA_ALTER_INPLACE_INSTANT | Instant algorithm is supported. Prepare and main phases are no-op. Changes happen during commit phase and it should be "instant". We keep SU lock, allowing concurrent reads and writes during no-op phases and upgrade it to X lock before commit phase. |
Reimplemented in ha_innobase, and ha_innopart.
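For illustration, an override for an engine that, hypothetically, can only add indexes in place with full concurrency might look as sketched below. Real implementations inspect many more Alter_inplace_info::handler_flags bits; the code is shown as a free function rather than a member.

// Sketch only: allow ADD_INDEX in place with no locking, reject everything
// else so the server falls back to the copy algorithm.
static enum_alter_inplace_result example_check_if_supported_inplace_alter(
    TABLE * /*altered_table*/, Alter_inplace_info *ha_alter_info) {
  if (ha_alter_info->handler_flags & ~Alter_inplace_info::ADD_INDEX)
    return HA_ALTER_INPLACE_NOT_SUPPORTED;
  return HA_ALTER_INPLACE_NO_LOCK;
}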
|
inlinevirtual |
Reimplemented in ha_myisam.
Reimplemented in temptable::Handler, ha_heap, ha_innobase, ha_innopart, ha_myisam, and ha_myisammrg.
|
privatepure virtual |
Implemented in ha_innopart, ha_perfschema, mock::ha_mock, temptable::Handler, ha_archive, ha_blackhole, ha_tina, ha_example, ha_federated, ha_heap, ha_innobase, ha_myisam, and ha_myisammrg.
|
inlinevirtual |
Close the blob.
[in,out] | thd | user session |
[in,out] | load_ctx | load execution context |
[in] | thread_idx | index of the thread executing |
[in] | blob_ctx | a blob context |
Reimplemented in ha_innobase.
Compare two positions.
ref1 | First position. |
ref2 | Second position. |
<0 | ref1 < ref2. |
0 | Equal. |
>0 | ref1 > ref2. |
Reimplemented in temptable::Handler, ha_heap, ha_innobase, and ha_innopart.
|
virtual |
Signal that the table->read_set and table->write_set table maps changed The handler is allowed to set additional bits in the above map in this call.
MySQL signals that it changed the column bitmap.
Normally the handler should ignore all calls until we have done an ha_rnd_init() or ha_index_init(), write_row(), update_row() or delete_row(), as there may be several calls to this routine.
USAGE: This is for handlers that need to set up their own column bitmaps. Normally the handler should set up its own column bitmaps in index_init() or rnd_init() and in any column_bitmaps_signal() call after this.
The handler is allowed to do changes to the bitmap after an index_init or rnd_init() call is made as after this, MySQL will not use the bitmap for any program logic checking.
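For illustration, an engine that sets up its own column handling in rnd_init()/index_init() typically consults table->read_set when producing a row, roughly as sketched here (server build environment assumed; the actual decoding step is a placeholder):

// Sketch only: skip columns the server did not ask for via table->read_set.
static void fill_requested_columns_example(TABLE *table) {
  for (uint i = 0; i < table->s->fields; ++i) {
    if (!bitmap_is_set(table->read_set, i)) continue;  // column not needed
    // ... decode column i from the engine's own row format into
    //     table->field[i] / table->record[0] ...
  }
}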
|
inlineprotectedvirtual |
Commit or rollback the changes made during prepare_inplace_alter_table() and inplace_alter_table() inside the storage engine.
Note that in case of rollback the allowed level of concurrency during this operation will be the same as for inplace_alter_table() and thus might be higher than during prepare_inplace_alter_table(). (For example, concurrent writes were blocked during prepare, but might not be during rollback).
altered_table | TABLE object for new version of table. |
ha_alter_info | Structure describing changes to be done by ALTER TABLE and holding data used during in-place alter. |
commit | True => Commit, False => Rollback. |
old_table_def | dd::Table object describing old version of the table. |
new_table_def | dd::Table object for the new version of the table. Can be adjusted by this call if SE supports atomic DDL. These changes to the table definition will be persisted in the data-dictionary at statement commit time. |
true | Error |
false | Success |
Reimplemented in ha_innobase, and ha_innopart.
int handler::compare_key | ( | key_range * | range | ) |
Compare if found key (in row) is over max-value.
range | range to compare to row. May be 0 for no range |
int handler::compare_key_icp | ( | const key_range * | range | ) | const |
int handler::compare_key_in_buffer | ( | const uchar * | buf | ) | const |
Check if the key in the given buffer (which is not necessarily TABLE::record[0]) is within range.
Called by the storage engine to avoid reading too many rows.
buf | the buffer that holds the key |
-1 | if the key is within the range |
0 | if the key is equal to the end_range key, and key_compare_result_on_equal is 0 |
1 | if the key is outside the range |
Push condition down to the table handler.
cond | Condition to be pushed. The condition tree must not be modified by the caller. |
|
pure virtual |
Create table (implementation).
[in] | name | Table name. |
[in] | form | TABLE object describing the table to be created. |
[in] | info | HA_CREATE_INFO describing table. |
[in,out] | table_def | dd::Table object describing the table to be created. This object can be adjusted by storage engine if it supports atomic DDL (i.e. has HTON_SUPPORTS_ATOMIC_DDL flag set). These changes will be persisted in the data-dictionary. Can be NULL for temporary tables created by optimizer. |
0 | Success. |
non-0 | Error. |
Implemented in mock::ha_mock, ha_archive, ha_tina, ha_example, ha_federated, ha_heap, ha_innobase, ha_innopart, ha_myisam, ha_myisammrg, ha_blackhole, ha_perfschema, and temptable::Handler.
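A sketch of the simplest possible implementation, for a hypothetical engine that keeps no files of its own (similar in spirit to the example engine) and whose metadata lives entirely in the data dictionary:

// Sketch only: nothing to materialize on disk for this hypothetical engine.
static int example_create(const char * /*name*/, TABLE * /*form*/,
                          HA_CREATE_INFO * /*info*/,
                          dd::Table * /*table_def*/) {
  return 0;  // success
}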
|
inlinevirtual |
Delete all rows in a table.
This is called both for cases of truncate and for cases where the optimizer realizes that all rows will be removed as a result of an SQL statement.
If the handler doesn't support this, this function returns HA_ERR_WRONG_COMMAND and MySQL will delete the rows one by one.
Reimplemented in ha_innobase, ha_innopart, ha_perfschema, temptable::Handler, ha_tina, ha_example, ha_federated, ha_heap, and ha_myisam.
|
inlineprivatevirtual |
Reimplemented in ha_blackhole, ha_tina, ha_example, ha_federated, ha_heap, ha_innobase, ha_myisam, ha_myisammrg, ha_perfschema, temptable::Handler, and ha_innopart.
|
protectedvirtual |
Delete a table.
Used to delete a table. By the time delete_table() has been called, all opened references to this table will have been closed (and your globally shared references released). The variable name will just be the name of the table. You will need to remove any files you have created at this point. Called for base as well as temporary tables.
name | Full path of table name. |
table_def | dd::Table describing table being deleted (can be NULL for temporary tables created by optimizer). |
Reimplemented in ha_example, ha_heap, ha_perfschema, ha_innopart, ha_innobase, ha_myisam, and temptable::Handler.
|
inlinevirtual |
Disable indexes for a while.
mode | Mode. |
0 | Success. |
!= | 0 Error. |
Reimplemented in ha_heap, ha_innobase, ha_innopart, ha_myisam, and temptable::Handler.
|
inlinevirtual |
Discard or import tablespace.
[in] | discard | Indicates whether this is discard operation. |
[in,out] | table_def | dd::Table object describing the table in which tablespace needs to be discarded or imported. This object can be adjusted by storage engine if it supports atomic DDL (i.e. has HTON_SUPPORTS_ATOMIC_DDL flag set). These changes will be persisted in the data-dictionary. |
0 | Success. |
!= | 0 Error. |
Reimplemented in ha_innobase, and ha_innopart.
|
virtual |
Reimplemented in ha_heap.
|
inlinevirtual |
Enable indexes again.
mode | Mode. |
0 | Success. |
!= | 0 Error. |
Reimplemented in ha_heap, ha_innobase, ha_innopart, ha_myisam, and temptable::Handler.
|
inlinevirtual |
Execute all outstanding deletes and close down the bulk delete.
0 | Success |
>0 | Error code |
|
inlineprivatevirtual |
Reimplemented in ha_archive, ha_federated, and ha_myisam.
|
inlinevirtual |
Perform any needed clean-up, no outstanding updates are there at the moment.
void handler::end_psi_batch_mode | ( | ) |
End a batch started with start_psi_batch_mode
.
|
inline |
If a PSI batch was started, turn it off.
|
inlinevirtual |
End read (before write) removal and return the number of rows really written.
|
private |
Make a guesstimate for how much of a table or index is in a memory buffer in the case where the storage engine has not provided any estimate for this.
table_index_size | size of the table or index |
|
inlinevirtual |
Return an upper bound on the current number of records in the table (i.e., the maximum number of records one would retrieve when doing a full table scan).
If the upper bound is not known, HA_POS_ERROR should be returned as a maximum possible upper bound.
Reimplemented in ha_tina, ha_federated, ha_innobase, ha_innopart, and ha_perfschema.
|
inlinevirtual |
After this call all outstanding updates must be performed.
The number of duplicate key errors is reported in the dup_key_found parameter. It is allowed to continue the batched update after this call; the handler has to wait until end_bulk_update before changing state.
dup_key_found | Number of duplicate keys found |
0 | Success |
>0 | Error code |
|
inlinevirtual |
Return extra handler specific text for EXPLAIN.
|
inlineprivatevirtual |
Is not invoked for non-transactional temporary tables.
Tells the storage engine that we intend to read or write data from the table. This call is prefixed with a call to handler::store_lock() and is invoked only for those handler instances that stored the lock.
Calls to rnd_init / index_init are prefixed with this call. When table IO is complete, we call external_lock(F_UNLCK). A storage engine writer should expect that each call to external_lock(F_[RD|WR]LCK) is followed by a call to external_lock(F_UNLCK). If it is not, it is a bug in MySQL.
The name and signature originate from the first implementation in MyISAM, which would call fcntl
to set/clear an advisory lock on the data file in this method.
Originally this method was used to set locks on file level to enable several MySQL Servers to work on the same data. For transactional engines it has been "abused" to also mean start and end of statements to enable proper rollback of statements and transactions. When LOCK TABLES has been issued the start_stmt method takes over the role of indicating start of statement but in this case there is no end of statement indicator(?).
Called from lock.cc by lock_external() and unlock_external(). Also called from sql_table.cc by copy_data_between_tables().
thd | the current thread |
lock_type | F_RDLCK, F_WRLCK, F_UNLCK |
Reimplemented in temptable::Handler, ha_blackhole, ha_example, ha_federated, ha_heap, ha_innobase, ha_innopart, ha_myisam, and ha_myisammrg.
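A minimal sketch of an external_lock() override for a hypothetical engine (ha_sketch, statement_active and is_write_statement are illustrative names), treating F_UNLCK as the end of table IO for the statement and any other value as the start:

  int ha_sketch::external_lock(THD *, int lock_type) {
    if (lock_type == F_UNLCK) {
      statement_active = false;   // table IO for this statement is complete
    } else {
      statement_active = true;    // F_RDLCK or F_WRLCK: IO is about to start
      is_write_statement = (lock_type == F_WRLCK);
    }
    return 0;  // a non-zero return would abort the statement with an error
  }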
|
inlinevirtual |
Identifies and throws the propagated external engine query offload or exec failure reason given by the external engine handler.
|
inlineprivatevirtual |
Storage engine specific implementation of ha_extra()
operation | the operation to perform |
Reimplemented in ha_archive, ha_tina, ha_example, ha_heap, ha_innopart, ha_myisam, ha_myisammrg, ha_federated, and ha_innobase.
|
inlinevirtual |
Reimplemented in ha_myisam, and ha_myisammrg.
|
inlinevirtual |
|
private |
Filter duplicate records when multi-valued index is used for retrieval.
|
inlinevirtual |
Reimplemented in ha_blackhole, ha_innobase, ha_innopart, and ha_myisam.
|
inlinevirtual |
Reimplemented in ha_innobase, and ha_innopart.
|
inlineprotectedvirtual |
Reimplemented in ha_blackhole, ha_innobase, ha_innopart, and ha_myisam.
|
virtual |
Reserves an interval of auto_increment values from the handler.
offset | offset (modulus increment) | |
increment | increment between calls | |
nb_desired_values | how many values we want | |
[out] | first_value | the first value reserved by the handler |
[out] | nb_reserved_values | how many values the handler reserved |
offset and increment mean that we want values of the form offset + N * increment, where N >= 0 is an integer. If the function sets *first_value to ULLONG_MAX it means an error. If the function sets *nb_reserved_values to ULLONG_MAX it means it has reserved to "positive infinity".
Reimplemented in ha_archive, ha_heap, ha_innobase, ha_innopart, and ha_myisam.
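As a sketch only (assuming an engine-internal counter auto_inc_counter protected by auto_inc_mutex, both hypothetical members), an override could reserve such an interval like this:

  void ha_sketch::get_auto_increment(ulonglong offset, ulonglong increment,
                                     ulonglong nb_desired_values,
                                     ulonglong *first_value,
                                     ulonglong *nb_reserved_values) {
    std::lock_guard<std::mutex> guard(auto_inc_mutex);  // assumed member
    ulonglong next = auto_inc_counter;                  // assumed member
    if (next < offset) next = offset;
    // Round up so that (next - offset) is a multiple of increment.
    ulonglong rem = (next - offset) % increment;
    if (rem != 0) next += increment - rem;
    *first_value = next;
    *nb_reserved_values = nb_desired_values;
    auto_inc_counter = next + nb_desired_values * increment;
  }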
|
inlinevirtual |
Get default key algorithm for SE.
It is used when user has not provided algorithm explicitly or when algorithm specified is not supported by SE.
Reimplemented in ha_blackhole, ha_example, ha_heap, ha_innobase, ha_myisam, ha_myisammrg, ha_perfschema, and temptable::Handler.
uint handler::get_dup_key | ( | int | error | ) |
|
virtual |
Return an error message specific to this handler.
error | error code previously returned by handler |
buf | pointer to String where to add error message |
Reimplemented in ha_federated, ha_innobase, and temptable::Handler.
|
inlinevirtual |
Adjust definition of table to be created by adding implicit columns and indexes necessary for the storage engine.
[in] | create_info | HA_CREATE_INFO describing the table. |
[in] | create_list | List of columns in the table. |
[in] | key_info | Array of KEY objects describing table indexes. |
[in] | key_count | Number of indexes in the table. |
[in,out] | table_obj | dd::Table object describing the table to be created. Implicit columns and indexes are to be added to this object. Adjusted table description will be saved into the data-dictionary. |
0 | Success. |
non-0 | Error. |
Reimplemented in ha_innobase.
|
virtual |
Retrieves the names of the table and the key for which there was a duplicate entry in the case of HA_ERR_FOREIGN_DUPLICATE_KEY.
If any of the table or key name is not available this method will return false and will not change any of child_table_name or child_key_name.
[out] | child_table_name | Table name |
[in] | child_table_name_len | Table name buffer size |
[out] | child_key_name | Key name |
[in] | child_key_name_len | Key name buffer size |
true | table and key names were available and were written into the corresponding out parameters. |
false | table and key names were not available, the out parameters were not touched. |
Reimplemented in ha_innobase, and ha_innopart.
|
protected |
Get an initialized ha_share.
NULL | ha_share is not yet initialized. |
!= | NULL previous initialized ha_share. |
|
inline |
|
inline |
|
inlinevirtual |
Return an estimate on the amount of memory the storage engine will use for caching data in memory.
If this is unknown or the storage engine does not cache data in memory -1 is returned.
Reimplemented in ha_innobase, and temptable::Handler.
|
inlinevirtual |
Reimplemented in ha_innopart.
|
inlinevirtual |
Get real row type for the table created based on one specified by user, CREATE TABLE options and SE capabilities.
Reimplemented in ha_archive, ha_heap, and ha_innobase.
|
inlinevirtual |
Reimplemented in ha_innobase.
|
inline |
|
inline |
int handler::ha_analyze | ( | THD * | thd, |
HA_CHECK_OPT * | check_opt | ||
) |
Analyze table: public interface.
Bulk update row: public interface.
int handler::ha_check | ( | THD * | thd, |
HA_CHECK_OPT * | check_opt | ||
) |
This must be actually called to get the 'check()' functionality.
Performs checks upon the table.
thd | thread doing CHECK TABLE operation |
check_opt | options from the parser |
HA_ADMIN_OK | Successful upgrade |
HA_ADMIN_NEEDS_UPGRADE | Table has structures requiring upgrade |
HA_ADMIN_NEEDS_ALTER | Table has structures requiring ALTER TABLE |
HA_ADMIN_NOT_IMPLEMENTED | Not implemented |
bool handler::ha_check_and_repair | ( | THD * | thd | ) |
Check and repair table: public interface.
int handler::ha_check_for_upgrade | ( | HA_CHECK_OPT * | check_opt | ) |
int handler::ha_close | ( | void | ) |
Close handler.
Called from sql_base.cc, sql_select.cc, and table.cc. In sql_select.cc it is only used to close up temporary tables or during the process where a temporary table is converted over to being a myisam table. For sql_base.cc look at close_data_tables().
0 | Success |
!= | 0 Error (error code returned) |
bool handler::ha_commit_inplace_alter_table | ( | TABLE * | altered_table, |
Alter_inplace_info * | ha_alter_info, | ||
bool | commit, | ||
const dd::Table * | old_table_def, | ||
dd::Table * | new_table_def | ||
) |
Public function wrapping the actual handler call.
Allows us to enforce asserts regardless of handler implementation.
int handler::ha_create | ( | const char * | name, |
TABLE * | form, | ||
HA_CREATE_INFO * | info, | ||
dd::Table * | table_def | ||
) |
Create a table in the engine: public interface.
int handler::ha_delete_all_rows | ( | ) |
Delete all rows: public interface.
int handler::ha_delete_row | ( | const uchar * | buf | ) |
int handler::ha_delete_table | ( | const char * | name, |
const dd::Table * | table_def | ||
) |
Delete table: public interface.
int handler::ha_disable_indexes | ( | uint | mode | ) |
Disable indexes: public interface.
int handler::ha_discard_or_import_tablespace | ( | bool | discard, |
dd::Table * | table_def | ||
) |
Discard or import tablespace: public interface.
void handler::ha_drop_table | ( | const char * | name | ) |
Drop table in the engine: public interface.
int handler::ha_enable_indexes | ( | uint | mode | ) |
Enable indexes: public interface.
int handler::ha_end_bulk_insert | ( | ) |
End bulk insert.
0 | Success |
!= | 0 Failure (error code returned) |
int handler::ha_external_lock | ( | THD * | thd, |
int | lock_type | ||
) |
These functions represent the public interface to users of the handler class, hence they are not virtual.
For the inheritance interface, see the (private) functions write_row(), update_row(), and delete_row() below.
int handler::ha_extra | ( | enum ha_extra_function | operation | ) |
Request storage engine to do an extra operation: enable,disable or run some functionality.
operation | the operation to perform |
int handler::ha_ft_read | ( | uchar * | buf | ) |
|
inline |
Get a pointer to a handler for the table in the primary storage engine, if this handler is for a table in a secondary storage engine.
|
inline |
Get the record buffer that was set with ha_set_record_buffer().
bool handler::ha_get_se_private_data | ( | dd::Table * | dd_table, |
bool | reset | ||
) |
Submit a dd::Table object representing a core DD table having hardcoded data to be filled in by the DDSE.
Get the hard coded SE private data from the handler for a DD table.
This function can be used for retrieving the hard coded SE private data for the mysql.dd_properties table, before creating or opening it, or for retrieving the hard coded SE private data for a core table, before creating or opening them.
dd_table | [in,out] A dd::Table object representing a core DD table. |
reset | Reset counters. |
true | An error occurred. |
false | Success - no errors. |
int handler::ha_index_end | ( | ) |
End use of index.
0 | Success |
!= | 0 Error (error code returned) |
int handler::ha_index_first | ( | uchar * | buf | ) |
Reads the first row via index.
[out] | buf | Row data |
0 | Success |
HA_ERR_END_OF_FILE | Row not found |
!= | 0 Error |
int handler::ha_index_init | ( | uint | idx, |
bool | sorted | ||
) |
Initialize use of index.
idx | Index to use |
sorted | Use sorted order |
0 | Success |
!= | 0 Error (error code returned) |
int handler::ha_index_last | ( | uchar * | buf | ) |
Reads the last row via index.
[out] | buf | Row data |
0 | Success |
HA_ERR_END_OF_FILE | Row not found |
!= | 0 Error |
int handler::ha_index_next | ( | uchar * | buf | ) |
Reads the next row via index.
[out] | buf | Row data |
0 | Success |
HA_ERR_END_OF_FILE | Row not found |
HA_ERR_KEY_NOT_FOUND | This return value indicates duplicate row returned from storage engine during multi-value index read. |
!= | 0 Error |
int handler::ha_index_next_pushed | ( | uchar * | buf | ) |
Reads the next same row via index.
[out] | buf | Row data |
key | Key to search for | |
keylen | Length of key |
0 | Success |
HA_ERR_END_OF_FILE | Row not found |
HA_ERR_KEY_NOT_FOUND | This return value indicates a duplicate row returned from the storage engine during a multi-value index read. |
!= | 0 Error |
|
inline |
int handler::ha_index_prev | ( | uchar * | buf | ) |
Reads the previous row via index.
[out] | buf | Row data |
0 | Success |
HA_ERR_END_OF_FILE | Row not found |
HA_ERR_KEY_NOT_FOUND | This return value indicates duplicate row returned from storage engine during multi-value index read. HA_ERR_KEY_NOT_FOUND indicates end of result for ref scan. And for range and index scan, current result row needs to skipped. |
!= | 0 Error |
int handler::ha_index_read_idx_map | ( | uchar * | buf, |
uint | index, | ||
const uchar * | key, | ||
key_part_map | keypart_map, | ||
enum ha_rkey_function | find_flag | ||
) |
Initializes an index and read it.
int handler::ha_index_read_last_map | ( | uchar * | buf, |
const uchar * | key, | ||
key_part_map | keypart_map | ||
) |
int handler::ha_index_read_map | ( | uchar * | buf, |
const uchar * | key, | ||
key_part_map | keypart_map, | ||
enum ha_rkey_function | find_flag | ||
) |
Read [part of] row via [part of] index.
[out] | buf | buffer where store the data |
key | Key to search for | |
keypart_map | Which part of key to use | |
find_flag | Direction/condition on key usage |
0 | Success (found a record, and function has set table status to "has row") |
HA_ERR_END_OF_FILE | Row not found (function has set table status to "no row"). End of index passed. |
HA_ERR_KEY_NOT_FOUND | Row not found (function has set table status to "no row"). Index cursor positioned. |
!= | 0 Error |
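A minimal usage sketch (not from the source) tying the index wrappers together: position on the first row matching a packed key with ha_index_read_map(), then continue forward in index order with ha_index_next(). 'table' is an opened TABLE, and key_buf/parts are assumed to be prepared by the caller for index 0:

  int scan_from_key(TABLE *table, const uchar *key_buf, key_part_map parts) {
    handler *file = table->file;
    int err = file->ha_index_init(0, /*sorted=*/true);
    if (err != 0) return err;
    err = file->ha_index_read_map(table->record[0], key_buf, parts,
                                  HA_READ_KEY_EXACT);
    while (err == 0) {
      // The current row is in table->record[0]; process it here.
      err = file->ha_index_next(table->record[0]);
    }
    file->ha_index_end();
    // "no (more) rows" outcomes are not failures for this sketch.
    return (err == HA_ERR_END_OF_FILE || err == HA_ERR_KEY_NOT_FOUND) ? 0 : err;
  }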
int handler::ha_index_read_pushed | ( | uchar * | buf, |
const uchar * | key, | ||
key_part_map | keypart_map | ||
) |
|
inline |
Public function wrapping the actual handler call.
|
inline |
Does this handler want to get a Record_buffer for multi-row reads via the ha_set_record_buffer() function? And if so, what is the maximum number of records to allocate space for in the buffer?
Storage engines that support using a Record_buffer should override handler::is_record_buffer_wanted().
[out] | max_rows | gets set to the maximum number of records to allocate space for in the buffer if the function returns true |
true | if the handler would like a Record_buffer |
false | if the handler does not want a Record_buffer |
int handler::ha_load_table | ( | const TABLE & | table, |
bool * | skip_metadata_update | ||
) |
Loads a table into its defined secondary storage engine: public interface.
[in] | table | - The table to load into the secondary engine. Its read_set tells which columns to load. |
[out] | skip_metadata_update | - should the DD metadata be updated for the load of this table |
int handler::ha_multi_range_read_next | ( | char ** | range_info | ) |
|
inline |
Return max limits for a single set of multi-valued keys.
[out] | num_keys | number of keys to store |
[out] | keys_length | total length of keys, bytes |
|
inline |
Public function wrapping the actual handler call.
int handler::ha_open | ( | TABLE * | table, |
const char * | name, | ||
int | mode, | ||
int | test_if_locked, | ||
const dd::Table * | table_def | ||
) |
int handler::ha_optimize | ( | THD * | thd, |
HA_CHECK_OPT * | check_opt | ||
) |
Optimize table: public interface.
bool handler::ha_prepare_inplace_alter_table | ( | TABLE * | altered_table, |
Alter_inplace_info * | ha_alter_info, | ||
const dd::Table * | old_table_def, | ||
dd::Table * | new_table_def | ||
) |
Public functions wrapping the actual handler call.
int handler::ha_read_first_row | ( | uchar * | buf, |
uint | primary_key | ||
) |
Read first row (only) from a table.
This is never called for tables whose storage engine does not contain exact statistics on the number of records, e.g. InnoDB.
int handler::ha_read_range_first | ( | const key_range * | start_key, |
const key_range * | end_key, | ||
bool | eq_range, | ||
bool | sorted | ||
) |
int handler::ha_read_range_next | ( | ) |
|
inline |
Wrapper function to call records() in storage engine.
num_rows | [out] Number of rows in table. |
0 | for OK, one of the HA_xxx values in case of error. |
|
inline |
Wrapper function to call records_from_index() in storage engine.
num_rows | [out] Number of rows in table. |
index | Index chosen by optimizer for counting. |
0 | for OK, one of the HA_xxx values in case of error. |
void handler::ha_release_auto_increment | ( | ) |
int handler::ha_rename_table | ( | const char * | from, |
const char * | to, | ||
const dd::Table * | from_table_def, | ||
dd::Table * | to_table_def | ||
) |
Rename table: public interface.
int handler::ha_repair | ( | THD * | thd, |
HA_CHECK_OPT * | check_opt | ||
) |
Repair table: public interface.
int handler::ha_reset | ( | ) |
Check handler usage and reset state of file to after 'open'.
int handler::ha_rnd_end | ( | ) |
End use of random access.
0 | Success |
!= | 0 Error (error code returned) |
int handler::ha_rnd_init | ( | bool | scan | ) |
Initialize table for random read or scan.
scan | if true: Initialize for random scans through rnd_next() if false: Initialize for random reads through rnd_pos() |
0 | Success |
!= | 0 Error (error code returned) |
int handler::ha_rnd_next | ( | uchar * | buf | ) |
Read next row via random scan.
buf | Buffer to read the row into |
0 | Success |
!= | 0 Error (error code returned) |
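A minimal usage sketch (not from the source) of a full table scan through these wrappers; some engines may also return HA_ERR_RECORD_DELETED for rows that should be skipped, which this sketch does not handle:

  int count_rows(TABLE *table, ha_rows *out_count) {
    handler *file = table->file;
    int err = file->ha_rnd_init(/*scan=*/true);
    if (err != 0) return err;
    ha_rows n = 0;
    while ((err = file->ha_rnd_next(table->record[0])) == 0) ++n;  // one row per call
    file->ha_rnd_end();
    if (err != HA_ERR_END_OF_FILE) return err;  // a real error, not end of data
    *out_count = n;
    return 0;
  }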
Read row via random scan from position.
[out] | buf | Buffer to read the row into |
pos | Position from position() call |
0 | Success |
!= | 0 Error (error code returned) |
int handler::ha_sample_end | ( | void * | scan_ctx | ) |
End sampling.
[in] | scan_ctx | Scan context of the sampling |
int handler::ha_sample_init | ( | void *& | scan_ctx, |
double | sampling_percentage, | ||
int | sampling_seed, | ||
enum_sampling_method | sampling_method, | ||
const bool | tablesample | ||
) |
Initialize sampling.
[out] | scan_ctx | A scan context created by this method that has to be used in sample_next |
[in] | sampling_percentage | percentage of records that need to be sampled |
[in] | sampling_seed | random seed that the random generator will use |
[in] | sampling_method | sampling method to be used; currently only SYSTEM sampling is supported |
[in] | tablesample | true if the sampling is for tablesample |
int handler::ha_sample_next | ( | void * | scan_ctx, |
uchar * | buf | ||
) |
Get the next record for sampling.
[in] | scan_ctx | Scan context of the sampling |
[in] | buf | buffer to place the read record |
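A minimal usage sketch (not from the source) of the sampling flow documented above; the 10% percentage and fixed seed are arbitrary, and the SYSTEM enumerator name is assumed from the description of sampling_method:

  int sample_table(TABLE *table) {
    handler *file = table->file;
    void *scan_ctx = nullptr;
    int err = file->ha_sample_init(scan_ctx, /*sampling_percentage=*/10.0,
                                   /*sampling_seed=*/42,
                                   enum_sampling_method::SYSTEM,
                                   /*tablesample=*/false);
    if (err != 0) return err;
    while ((err = file->ha_sample_next(scan_ctx, table->record[0])) == 0) {
      // Each sampled row arrives in table->record[0].
    }
    file->ha_sample_end(scan_ctx);
    return (err == HA_ERR_END_OF_FILE) ? 0 : err;
  }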
void handler::ha_set_primary_handler | ( | handler * | primary_handler | ) |
Store a pointer to the handler of the primary table that corresponds to the secondary table in this handler.
|
inline |
Set a record buffer that the storage engine can use for multi-row reads.
The buffer has to be provided prior to the first read from an index or a table.
buffer | the buffer to use for multi-row reads |
void handler::ha_start_bulk_insert | ( | ha_rows | rows | ) |
Start bulk insert.
Allow the handler to optimize for multiple row insert.
rows | Estimated rows to insert |
|
protected |
|
inline |
The cached_table_flags is set at ha_open and ha_external_lock.
|
protected |
Acquire the instrumented table information from a table share.
share | a table share |
|
protected |
int handler::ha_truncate | ( | dd::Table * | table_def | ) |
Truncate table: public interface.
int handler::ha_unload_table | ( | const char * | db_name, |
const char * | table_name, | ||
bool | error_if_not_loaded | ||
) |
Unloads a table from its defined secondary storage engine: public interface.
Update the current row.
old_data | the old contents of the row |
new_data | the new contents of the row |
bool handler::ha_upgrade_table | ( | THD * | thd, |
const char * | dbname, | ||
const char * | table_name, | ||
dd::Table * | dd_table, | ||
TABLE * | table_arg | ||
) |
Set se_private_id and se_private_data during upgrade.
thd | Pointer of THD |
dbname | Database name |
table_name | Table name |
dd_table | dd::Table for the table |
table_arg | TABLE object for the table. |
false | Success |
true | Error |
int handler::ha_write_row | ( | uchar * | buf | ) |
|
private |
Function will handle the error code from call to records() and records_from_index().
error | return code from records() and records_from_index(). |
num_rows | Check if it contains HA_POS_ERROR in case error < 0. |
0 | for OK, one of the HA_xxx values in case of error. |
|
inline |
|
inlinevirtual |
Get the handlerton of the storage engine if the SE is capable of pushing down some of the AccessPath functionality.
(Join, Filter conditions, ... possibly more.)
Call the handlerton::push_to_engine() method to perform the actual pushdown of (parts of) the AccessPath functionality.
Otherwise, 'nullptr' is returned.
Push down an index condition to the handler.
The server will use this method to push down a condition it wants the handler to evaluate when retrieving records using a specified index. The pushed index condition will only refer to fields from this handler that are contained in the index (but it may also refer to fields in other handlers). Before the handler evaluates the condition it must read the content of the index entry into the record buffer.
The handler is free to decide if and how much of the condition it will take responsibility for evaluating. Based on this evaluation it should return the part of the condition it will not evaluate. If it decides to evaluate the entire condition it should return NULL. If it decides not to evaluate any part of the condition it should return a pointer to the same condition as given as argument.
keyno | the index number to evaluate the condition on |
idx_cond | the condition to be evaluated by the handler |
Reimplemented in ha_innobase, and ha_myisam.
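A minimal sketch of an override that accepts the whole condition, following the pattern used by the bundled implementations (pushed_idx_cond, pushed_idx_cond_keyno and in_range_check_pushed_down are handler members used for this purpose; ha_sketch is illustrative):

  Item *ha_sketch::idx_cond_push(uint keyno, Item *idx_cond) {
    pushed_idx_cond = idx_cond;          // remember what to evaluate ourselves
    pushed_idx_cond_keyno = keyno;       // and on which index
    in_range_check_pushed_down = true;   // end-of-range checks done here too
    return nullptr;  // nullptr: nothing left for the server to evaluate
  }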
|
inlineprivatevirtual |
Reimplemented in ha_federated, ha_innobase, ha_innopart, ha_myisam, ha_perfschema, and temptable::Handler.
|
inlineprotectedvirtual |
Reimplemented in temptable::Handler, ha_blackhole, ha_example, ha_heap, ha_innobase, ha_myisam, ha_myisammrg, and ha_innopart.
|
pure virtual |
Implemented in ha_innobase, ha_perfschema, temptable::Handler, ha_example, ha_blackhole, ha_heap, ha_myisam, ha_myisammrg, ha_archive, ha_tina, and ha_federated.
double handler::index_in_memory_estimate | ( | uint | keyno | ) | const |
Return an estimate of how much of the index that is currently stored in main memory.
This estimate should be the fraction of the index that currently is available in a main memory buffer. The estimate should be in the range from 0.0 (nothing in memory) to 1.0 (entire index in memory).
keyno | the index to get an estimate for |
|
inlineprivatevirtual |
Reimplemented in ha_myisam, ha_perfschema, ha_innobase, ha_innopart, temptable::Handler, ha_archive, and ha_federated.
|
inlineprotectedvirtual |
Reimplemented in temptable::Handler, ha_blackhole, ha_example, ha_heap, ha_innobase, ha_myisam, ha_myisammrg, and ha_innopart.
|
inlineprotectedvirtual |
Reimplemented in ha_archive, ha_blackhole, ha_example, ha_federated, ha_heap, ha_innobase, ha_myisam, ha_myisammrg, ha_perfschema, temptable::Handler, and ha_innopart.
|
inlineprotectedvirtual |
Reimplemented in ha_innobase, ha_myisam, ha_myisammrg, ha_perfschema, temptable::Handler, and ha_innopart.
|
virtual |
Calculate cost of 'index only' scan for given index and number of records.
keynr | Index number |
records | Estimated number of records to be retrieved |
|
inlineprotectedvirtual |
Reimplemented in ha_blackhole, ha_example, ha_heap, ha_innobase, ha_myisam, ha_myisammrg, temptable::Handler, and ha_innopart.
|
inlineprotectedvirtual |
Reimplemented in ha_archive, ha_federated, ha_perfschema, ha_innobase, and temptable::Handler.
|
protectedvirtual |
Positions an index cursor to the index specified in argument.
Fetches the row if available. If the key value is null, begin at the first key of the index.
Reimplemented in ha_blackhole, ha_federated, ha_heap, ha_innopart, ha_myisam, and ha_myisammrg.
|
inlineprotectedvirtual |
Reimplemented in ha_innobase, and temptable::Handler.
|
inlineprotectedvirtual |
The following functions works like index_read, but it find the last row with the current key value or prefix.
Reimplemented in ha_blackhole, ha_heap, ha_myisam, ha_myisammrg, and ha_innopart.
|
inlineprotectedvirtual |
Positions an index cursor to the index specified in the handle ('active_index').
Fetches the row if available. If the key value is null, begin at the first key of the index.
Reimplemented in ha_blackhole, ha_example, ha_heap, ha_innopart, ha_myisam, and ha_myisammrg.
|
inlineprotectedvirtual |
|
virtual |
Cost estimate for reading a number of ranges from an index.
The cost estimate will only include the cost of reading data that is contained in the index. If the records need to be read, use read_cost() instead.
index | the index number |
ranges | the number of ranges to be read |
rows | total number of rows to be read |
|
inlinevirtual |
|
pure virtual |
General method to gather info from handler.
info() is used to return information to the optimizer. SHOW also makes use of this data. Another note: if your handler doesn't provide an exact record count, you will probably want to have the following in your code: if (records < 2) records = 2; The reason is that the server will optimize for cases of only a single record. If in a table scan you don't know the number of records, it will probably be better to set records to two so you can return as many records as you need.
Along with records, a few more variables you may wish to set are: records, deleted, data_file_length, index_file_length, delete_length, check_time. Take a look at the public variables in handler.h for more information. See also my_base.h for a full description.
flag | Specifies what info is requested |
Implemented in ha_blackhole, temptable::Handler, ha_archive, ha_tina, ha_example, ha_federated, ha_heap, ha_innobase, ha_myisam, ha_myisammrg, and ha_perfschema.
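A minimal sketch of an info() override for a hypothetical engine: it fills the most commonly consulted statistics and applies the "at least 2 records" convention described above (engine_row_count(), engine_data_bytes() and engine_last_dup_key() are hypothetical helpers):

  int ha_sketch::info(uint flag) {
    if (flag & HA_STATUS_VARIABLE) {
      stats.records = engine_row_count();            // hypothetical helper
      stats.deleted = 0;
      stats.data_file_length = engine_data_bytes();  // hypothetical helper
      stats.index_file_length = 0;
      if (stats.records < 2) stats.records = 2;      // see the note above
    }
    if (flag & HA_STATUS_ERRKEY) errkey = engine_last_dup_key();  // hypothetical
    return 0;
  }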
|
inline |
This is called after create to allow us to set up cached variables.
|
inlinevirtual |
Reimplemented in ha_innobase, and temptable::Handler.
|
inlineprotectedvirtual |
Alter the table structure in-place with operations specified using HA_ALTER_FLAGS and Alter_inplace_info.
The level of concurrency allowed during this operation depends on the return value from check_if_supported_inplace_alter().
altered_table | TABLE object for new version of table. |
ha_alter_info | Structure describing changes to be done by ALTER TABLE and holding data used during in-place alter. |
old_table_def | dd::Table object describing old version of the table. |
new_table_def | dd::Table object for the new version of the table. Can be adjusted by this call if SE supports atomic DDL. These changes to the table definition will be persisted in the data-dictionary at statement commit time. |
true | Error |
false | Success |
Reimplemented in ha_innobase, and ha_innopart.
|
inlinevirtual |
Check if the table is crashed.
true | Crashed |
false | Not crashed |
Reimplemented in ha_archive, ha_tina, and ha_myisam.
|
virtual |
Determine whether an error is fatal or not.
This method is used to analyze the error to see whether the error is fatal or not. A fatal error is an error that will not be possible to handle with SP handlers and will not be subject to retry attempts on the slave.
error | error code received from the handler interface (HA_ERR_...) |
true | the error is fatal |
false | the error is not fatal |
Further comments in header file.
|
virtual |
Determine whether an error can be ignored or not.
This method is used to analyze the error to see whether the error is ignorable or not. Such errors will be reported as warnings instead of errors for IGNORE statements. This means that the statement will not abort, but instead continue to the next row.
HA_ERR_FOUND_DUP_UNIQUE is a special case in MyISAM that means the same thing as HA_ERR_FOUND_DUP_KEY, but can in some cases lead to a slightly different error message.
error | error code received from the handler interface (HA_ERR_...) |
true | the error is ignorable |
false | the error is not ignorable |
Further comments in header file.
Reimplemented in ha_innopart.
|
inlinevirtual |
Check if SE supports specific key algorithm.
Reimplemented in ha_blackhole, ha_example, ha_heap, ha_innobase, ha_myisam, ha_myisammrg, and temptable::Handler.
|
inlineprivatevirtual |
Does this handler want to get a Record_buffer for multi-row reads via the ha_set_record_buffer() function? And if so, what is the maximum number of records to allocate space for in the buffer?
Storage engines that support using a Record_buffer should override this function and return true for scans that could benefit from a buffer.
[out] | max_rows | gets set to the maximum number of records to allocate space for in the buffer if the function returns true |
true | if the handler would like a Record_buffer |
false | if the handler does not want a Record_buffer |
Reimplemented in ha_innobase.
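A minimal sketch of an override for a hypothetical engine that always asks for space for at most 100 rows (an arbitrary illustrative cap):

  bool ha_sketch::is_record_buffer_wanted(ha_rows *const max_rows) const {
    *max_rows = 100;  // upper bound on rows the buffer should hold
    return true;      // the server may now call ha_set_record_buffer()
  }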
|
inlineprivatevirtual |
Loads a table into its defined secondary storage engine.
[in] | table | - Table opened in primary storage engine. Its read_set tells which columns to load. |
[out] | skip_metadata_update | - should the DD metadata be updated for the load of this table |
Reimplemented in mock::ha_mock.
|
inlinevirtual |
Get number of lock objects returned in store_lock.
Returns the number of store locks needed in a call to store_lock(). We return the number of partitions we will lock multiplied by the number of locks needed by each partition. Assists the above functions in allocating sufficient space for lock structures.
Reimplemented in ha_innobase, and ha_myisammrg.
|
protected |
Take a lock for protecting shared handler data.
|
inlinevirtual |
|
private |
A helper function to mark a transaction read-write, if it is started.
|
inline |
|
inline |
|
inline |
|
inline |
|
inline |
|
inlinevirtual |
Reimplemented in ha_archive, ha_blackhole, ha_example, ha_federated, ha_innobase, ha_myisam, ha_myisammrg, ha_perfschema, and temptable::Handler.
|
inlinevirtual |
Reimplemented in ha_archive, ha_blackhole, ha_federated, ha_heap, ha_innobase, ha_myisam, ha_myisammrg, ha_perfschema, and temptable::Handler.
|
inlinevirtual |
Reimplemented in ha_example, ha_federated, and ha_perfschema.
|
inlinevirtual |
Reimplemented in ha_archive, ha_blackhole, ha_example, ha_federated, ha_heap, ha_innobase, ha_myisam, ha_myisammrg, and ha_perfschema.
|
inlinevirtual |
Reimplemented in ha_example, ha_federated, and ha_perfschema.
|
inlinevirtual |
If this handler instance is part of a pushed join sequence, return the TABLE instance that is the root of the pushed query.
|
inlinevirtual |
|
virtual |
Get cost and other information about MRR scan over some sequence of ranges.
Calculate estimated cost and other information about an MRR scan for some sequence of ranges.
The ranges themselves will be known only at execution phase. When this function is called we only know number of ranges and a (rough) E(records) within those ranges.
Currently this function is only called for "n-keypart singlepoint" ranges, i.e. each range is "keypart1=someconst1 AND ... AND keypartN=someconstN"
The flags parameter is a combination of those flags: HA_MRR_SORTED, HA_MRR_INDEX_ONLY, HA_MRR_NO_ASSOCIATION, HA_MRR_LIMITS.
keyno | Index number | |
n_ranges | Estimated number of ranges (i.e. intervals) in the range sequence. | |
n_rows | Estimated total number of records contained within all of the ranges | |
[in,out] | bufsz | IN: Size of the buffer available for use OUT: Size of the buffer that will be actually used, or 0 if buffer is not needed. |
[in,out] | flags | A combination of HA_MRR_* flags |
[out] | cost | Estimated cost of MRR access |
0 | OK, *cost contains cost of the scan, *bufsz and *flags contain scan parameters. |
other | Error or can't perform the requested scan |
Reimplemented in ha_innobase, and ha_myisam.
|
virtual |
Get cost and other information about MRR scan over a known list of ranges.
Calculate estimated cost and other information about an MRR scan for given sequence of ranges.
keyno | Index number | |
seq | Range sequence to be traversed | |
seq_init_param | First parameter for seq->init() | |
n_ranges_arg | Number of ranges in the sequence, or 0 if the caller can't efficiently determine it | |
[in,out] | bufsz | IN: Size of the buffer available for use OUT: Size of the buffer that is expected to be actually used, or 0 if buffer is not needed. |
[in,out] | flags | A combination of HA_MRR_* flags |
[out] | force_default_mrr | Force default MRR implementation |
[out] | cost | Estimated cost of MRR access |
The method must check thd->killed and return HA_POS_ERROR if it is not zero. This is required for a user to be able to interrupt the calculation by killing the connection/query.
HA_POS_ERROR | Error or the engine is unable to perform the requested scan. Values of OUT parameters are undefined. |
other | OK, *cost contains cost of the scan, *bufsz and *flags contain scan parameters. |
Reimplemented in ha_innobase, and ha_myisam.
|
virtual |
Initialize the MRR scan.
This function may do heavyweight scan initialization like row prefetching/sorting/etc (NOTE: but better not do it here as we may not need it, e.g. if we never satisfy the WHERE clause on previous tables. For many implementations it would be natural to do such initializations in the first multi_range_read_next() call).
mode is a combination of the following flags: HA_MRR_SORTED, HA_MRR_INDEX_ONLY, HA_MRR_NO_ASSOCIATION
seq_funcs | Range sequence to be traversed |
seq_init_param | First parameter for seq->init() |
n_ranges | Number of ranges in the sequence |
mode | Flags, see the description section for the details |
buf | INOUT: memory buffer to be used |
Until WL#2623 is done (see its text, section 3.2), the following will also hold: The caller will guarantee that if "seq->init == mrr_ranges_array_init" then seq_init_param is an array of n_ranges KEY_MULTI_RANGE structures. This property will only be used by NDB handler until WL#2623 is done.
Buffer memory management is done according to the following scenario: The caller allocates the buffer and provides it to the callee by filling the members of HANDLER_BUFFER structure. The callee consumes all or some fraction of the provided buffer space, and sets the HANDLER_BUFFER members accordingly. The callee may use the buffer memory until the next multi_range_read_init() call is made, all records have been read, or until index_end() call is made, whichever comes first.
0 | OK |
1 | Error |
Reimplemented in ha_innobase, and ha_myisam.
|
protectedvirtual |
Get next record in MRR scan.
Default MRR implementation: read the next record
range_info | OUT Undefined if HA_MRR_NO_ASSOCIATION flag is in effect Otherwise, the opaque value associated with the range that contains the returned record. |
0 | OK |
other | Error code |
Reimplemented in ha_innobase, and ha_myisam.
|
inlineprivatevirtual |
Engine-specific function for ha_can_store_mv_keys().
Dummy function. SE's overloaded method is used instead.
Reimplemented in ha_innobase.
|
static |
Callback for computing generated column values.
Storage engines that need to have virtual column values for a row can use this function to get the values computed. The storage engine must have filled in the values for the base columns that the virtual columns depend on.
thd | thread handle | |
table | table object | |
fields | bitmap of field index of evaluated generated column | |
[in,out] | record | buff of base columns generated column depends. After calling this function, it will be used to return the value of the generated columns. |
[out] | mv_data_ptr | When given (not null) and the field needs to be calculated is a typed array field, it will contain pointer to field's calculated value. |
[out] | mv_length | Length of the data above |
true | in case of error |
false | on success |
|
static |
Callback for generated columns processing.
Will open the table, in the server only, and call my_eval_gcolumn_expr_helper() to do the actual processing. This function is a variant of the other handler::my_eval_gcolumn_expr() but is intended for use when no TABLE object already exists - e.g. from purge threads.
Note! The call to open_table_uncached() must be made with the second-to-last argument (open_in_engine) set to false. Failing to do so will cause deadlocks and incorrect behavior.
thd | thread handle | |
db_name | database containing the table to open | |
table_name | name of table to open | |
fields | bitmap of field index of evaluated generated column | |
record | record buffer | |
[out] | mv_data_ptr | For a typed array field in this arg the pointer to its value is returned |
[out] | mv_length | Length of the value above |
|
static |
Callback to allow InnoDB to prepare a template for generated column processing.
This function will open the table without opening in the engine and call the provided function with the TABLE object made. The function will then close the TABLE.
thd | Thread handle |
db_name | Name of database containing the table |
table_name | Name of table to open |
myc | InnoDB function to call for processing TABLE |
ib_table | Argument for InnoDB function |
|
inlineprotectedvirtual |
Notify the storage engine that the table definition has been updated.
ha_alter_info | Structure describing changes done by ALTER TABLE and holding data used during in-place alter. |
|
inlinevirtual |
Reports number of tables included in pushed join which this handler instance is part of.
==0 -> Not pushed
|
privatepure virtual |
Implemented in ha_tina, ha_archive, ha_blackhole, ha_example, ha_federated, ha_heap, ha_innopart, ha_myisam, ha_perfschema, ha_myisammrg, ha_innobase, and temptable::Handler.
|
inlinevirtual |
Open a blob for write operation.
[in,out] | thd | user session |
[in,out] | load_ctx | load execution context |
[in] | thread_idx | index of the thread executing |
[out] | blob_ctx | a blob context |
[out] | blobref | a blob reference to be placed in the record. |
Reimplemented in ha_innobase.
|
inlinevirtual |
Reimplemented in temptable::Handler, ha_archive, ha_federated, ha_innobase, ha_innopart, and ha_myisam.
|
virtual |
Cost estimate for doing a number of non-sequentially accesses against the storage engine.
Such accesses can be either number of rows to read, or number of disk pages to access. Each handler implementation is free to interpret that as best suited, depending on what is the dominating cost for that storage engine.
This method is mainly provided as a temporary workaround for bug#33317872, where we fix problems caused by calling Cost_model::page_read_cost() directly from the optimizer. That should be avoided, as it introduced an assumption about all storage engines being disk-page based and having a 'page' cost. Furthermore, this page cost was even compared against read_cost(), which was computed with an entirely different algorithm and thus could not be compared.
The default implementation still calls Cost_model::page_read_cost(), thus behaving just as before. However, handler implementations may override it to call handler::read_cost() instead, which will probably be more correct. (If a page_read_cost should be included in the cost estimate, that should preferably be done inside each read_cost() implementation.)
Longer term we should consider removing all page_read_cost() usage from the optimizer itself, making this method obsolete.
index | the index number |
reads | the number of accesses being made |
|
inlinevirtual |
Run the parallel read of data.
[in] | scan_ctx | Scan context of the parallel read. |
[in,out] | thread_ctxs | Caller thread contexts. |
[in] | init_fn | Callback called by each parallel load thread at the beginning of the parallel load. |
[in] | load_fn | Callback called by each parallel load thread when processing of rows is required. |
[in] | end_fn | Callback called by each parallel load thread when processing of rows has ended. |
0 | on success |
Reimplemented in ha_innobase, and ha_innopart.
|
inlinevirtual |
End of the parallel scan.
[in] | scan_ctx | A scan context created by parallel_scan_init. |
Reimplemented in ha_innopart, and ha_innobase.
|
inlinevirtual |
Initializes a parallel scan.
It creates a parallel_scan_ctx that has to be used across all parallel_scan methods. Also, gets the number of threads that would be spawned for parallel scan.
[out] | scan_ctx | The parallel scan context. |
[out] | num_threads | Number of threads used for the scan. |
[in] | use_reserved_threads | true if reserved threads are to be used if we exhaust the max cap of number of parallel read threads that can be spawned at a time |
[in] | max_desired_threads | Maximum number of desired scan threads; passing 0 has no effect, it is ignored. |
0 | on success |
Reimplemented in ha_innobase, and ha_innopart.
|
inlinevirtual |
If this handler instance is a child in a pushed join sequence, return the TABLE instance that is its parent.
|
pure virtual |
Implemented in temptable::Handler, ha_archive, ha_blackhole, ha_tina, ha_example, ha_federated, ha_heap, ha_innobase, ha_innopart, ha_myisam, ha_myisammrg, ha_perfschema, and mock::ha_mock.
|
inlinevirtual |
Reimplemented in ha_myisam.
|
inlineprotectedvirtual |
Allows the storage engine to update internal structures with concurrent writes blocked.
If check_if_supported_inplace_alter() returns HA_ALTER_INPLACE_NO_LOCK_AFTER_PREPARE or HA_ALTER_INPLACE_SHARED_AFTER_PREPARE, this function is called with exclusive lock otherwise the same level of locking as for inplace_alter_table() will be used.
altered_table | TABLE object for new version of table. |
ha_alter_info | Structure describing changes to be done by ALTER TABLE and holding data used during in-place alter. |
old_table_def | dd::Table object describing old version of the table. |
new_table_def | dd::Table object for the new version of the table. Can be adjusted by this call if SE supports atomic DDL. These changes to the table definition will be persisted in the data-dictionary at statement commit time. |
true | Error |
false | Success |
Reimplemented in ha_innobase, and ha_innopart.
|
inlinevirtual |
Check if the primary key is clustered or not.
true | Primary key (if there is one) is a clustered key covering all fields |
false | otherwise |
Reimplemented in ha_innobase, and temptable::Handler.
|
virtual |
Print error that we got from handler function.
Reimplemented in ha_innopart, and ha_perfschema.
|
virtual |
Cost estimate for reading a set of ranges from the table using an index to access it.
index | the index number |
ranges | the number of ranges to be read |
rows | total number of rows to be read |
|
protectedvirtual |
Read first row between two ranges.
Store ranges for future calls to read_range_next.
start_key | Start key. Is 0 if no min range |
end_key | End key. Is 0 if no max range |
eq_range_arg | Set to 1 if start_key == end_key |
sorted | Set to 1 if result should be sorted per key |
0 | Found row |
HA_ERR_END_OF_FILE | No rows in range |
Reimplemented in ha_federated, ha_innobase, and ha_innopart.
|
protectedvirtual |
Read next row between two endpoints.
0 | Found row |
HA_ERR_END_OF_FILE | No rows in range |
Reimplemented in ha_federated, ha_innobase, and ha_innopart.
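A minimal usage sketch (not from the source) of a range scan through the public wrappers ha_read_range_first()/ha_read_range_next() documented above; min_key and max_key are key_range structures assumed to be filled in by the caller for index 0:

  int scan_range(TABLE *table, const key_range *min_key, const key_range *max_key) {
    handler *file = table->file;
    int err = file->ha_index_init(0, /*sorted=*/true);
    if (err != 0) return err;
    err = file->ha_read_range_first(min_key, max_key, /*eq_range=*/false,
                                    /*sorted=*/true);
    while (err == 0) {
      // A row within [min_key, max_key] is now in table->record[0].
      err = file->ha_read_range_next();
    }
    file->ha_index_end();
    return (err == HA_ERR_END_OF_FILE) ? 0 : err;
  }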
|
inlinevirtual |
The cost of reading a set of ranges from the table using an index to access it.
index | The index number. |
ranges | The number of ranges to be read. |
rows | Total number of rows to be read. |
This method can be used to calculate the total cost of scanning a table using an index by calling it using read_time(index, 1, table_size).
Reimplemented in ha_innobase, ha_example, ha_federated, ha_heap, and temptable::Handler.
void handler::rebind_psi | ( | ) |
|
protectedvirtual |
Number of rows in table.
If HA_COUNT_ROWS_INSTANT is set, count is available instantly. Else do a table scan.
num_rows | [out] num_rows number of rows in table. |
0 | for OK, one of the HA_xxx values in case of error. |
Reimplemented in temptable::Handler, ha_archive, ha_innobase, ha_innopart, and ha_myisammrg.
|
protectedvirtual |
Number of rows in table counted using the secondary index chosen by optimizer.
See comments in optimize_aggregated_query() .
num_rows | [out] Number of rows in table. |
index | Index chosen by optimizer for counting. |
0 | for OK, one of the HA_xxx values in case of error. |
|
inlinevirtual |
Find number of records in a range.
Given a starting key and an ending key, estimate the number of rows that will exist between the two. max_key may be empty, in which case determine if start_key matches any rows. Used by the optimizer to calculate the cost of using a particular index.
inx | Index number |
min_key | Start of range |
max_key | End of range |
Reimplemented in ha_example, ha_heap, ha_innobase, ha_innopart, ha_myisam, ha_myisammrg, and ha_federated.
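A minimal sketch of an override for a hypothetical engine that keeps no per-range statistics; returning a small constant such as 10 (as some of the bundled example engines do) keeps the index usable to the optimizer without claiming precision:

  ha_rows ha_sketch::records_in_range(uint /*inx*/, key_range * /*min_key*/,
                                      key_range * /*max_key*/) {
    return 10;  // crude guess; a real engine would consult index statistics
  }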
|
inlineprivatevirtual |
Reimplemented in ha_innobase, and ha_innopart.
|
protectedvirtual |
Default rename_table() and delete_table() rename/delete files with a given name and extensions from handlerton::file_extensions.
These methods can be overridden, but their default implementation provide useful functionality.
[in] | from | Path for the old table name. |
[in] | to | Path for the new table name. |
[in] | from_table_def | Old version of definition for table being renamed (i.e. prior to rename). |
[in,out] | to_table_def | New version of definition for table being renamed. Storage engines which support atomic DDL (i.e. having HTON_SUPPORTS_ATOMIC_DDL flag set) are allowed to adjust this object. |
>0 | Error. |
0 | Success. |
Reimplemented in temptable::Handler, ha_innobase, ha_innopart, ha_example, ha_heap, ha_myisam, and ha_perfschema.
|
inlineprivatevirtual |
In this method check_opt can be modified to specify CHECK option to use to call check() upon the table.
Reimplemented in ha_archive, ha_tina, ha_federated, ha_myisam, and ha_innopart.
|
inlineprivatevirtual |
Reset state of file to after 'open'.
This function is called after every statement for all tables used by that statement.
Reimplemented in ha_heap, ha_innobase, ha_innopart, temptable::Handler, ha_federated, ha_myisam, and ha_myisammrg.
|
inline |
|
inlineprivatevirtual |
Reimplemented in ha_tina, ha_example, ha_federated, ha_innobase, ha_innopart, ha_myisam, ha_perfschema, and temptable::Handler.
|
privatepure virtual |
rnd_init() can be called two times without rnd_end() in between (it only makes sense if scan=1). The second call should then prepare for the new table scan (e.g. if rnd_init allocates the cursor, the second call should position it to the start of the table; there is no need to deallocate and allocate it again).
Implemented in ha_blackhole, ha_example, ha_federated, ha_heap, ha_innobase, ha_innopart, ha_myisam, ha_myisammrg, ha_perfschema, ha_archive, ha_tina, mock::ha_mock, and temptable::Handler.
|
protectedpure virtual |
Implemented in ha_archive, ha_blackhole, ha_tina, ha_example, ha_federated, ha_heap, ha_innobase, ha_myisam, ha_myisammrg, ha_perfschema, temptable::Handler, ha_innopart, and mock::ha_mock.
|
inlinevirtual |
This function only works for handlers having HA_PRIMARY_KEY_REQUIRED_FOR_POSITION set.
It will return the row with the PK given in the record argument.
Reimplemented in ha_federated, and ha_innopart.
|
privatevirtual |
End sampling.
[in] | scan_ctx | Scan context of the sampling |
Reimplemented in ha_innobase, and ha_innopart.
|
privatevirtual |
Initialize sampling.
[out] | scan_ctx | A scan context created by this method that has to be used in sample_next |
[in] | sampling_percentage | percentage of records that need to be sampled |
[in] | sampling_seed | random seed |
[in] | sampling_method | sampling method to be used; currently only SYSTEM sampling is supported |
[in] | tablesample | true if the sampling is for tablesample |
Reimplemented in ha_innobase, and ha_innopart.
|
privatevirtual |
Get the next record for sampling.
[in] | scan_ctx | Scan context of the sampling |
[in] | buf | buffer to place the read record |
Reimplemented in ha_innobase, and ha_innopart.
|
inlinevirtual |
Reimplemented in ha_tina, ha_example, ha_federated, ha_heap, ha_innobase, ha_innopart, ha_myisammrg, ha_perfschema, and temptable::Handler.
void handler::set_end_range | ( | const key_range * | range, |
enum_range_scan_direction | direction | ||
) |
Set the end position for a range scan.
This is used for checking for when to end the range scan and by the ICP code to determine that the next record is within the current range.
range | The end value for the range scan |
direction | Direction of the range scan |
|
inlinevirtual |
Propagates the secondary storage engine offload failure reason for a query to the external engine when the offloaded query fails in the secondary storage engine.
|
protected |
Set ha_share to be used by all instances of the same table/partition.
arg_ha_share | Handler_share to be shared. |
|
inlinevirtual |
|
inline |
|
inline |
|
inlinevirtual |
false | Bulk delete used by handler |
true | Bulk delete not used, normal operation used |
|
inlineprivatevirtual |
Reimplemented in ha_archive, ha_federated, and ha_myisam.
|
inlinevirtual |
false | Bulk update used by handler |
true | Bulk update not used, normal operation used |
void handler::start_psi_batch_mode | ( | ) |
Put the handler in 'batch' mode when collecting table io instrumented events.
When operating in batch mode, all table io performed between start_psi_batch_mode and end_psi_batch_mode is not instrumented: the number of rows affected is counted instead in m_psi_numrows. The batch ends when end_psi_batch_mode is called.
|
inlinevirtual |
Start read (before write) removal on the current table.
|
inlinevirtual |
Start a statement when table is locked.
This method is called instead of external lock when the table is locked before the statement is executed.
thd | Thread object. |
lock_type | Type of external lock. |
>0 | Error code. |
0 | Success. |
Reimplemented in temptable::Handler, ha_innobase, and ha_innopart.
|
pure virtual |
Is not invoked for non-transactional temporary tables.
The idea with handler::store_lock() is the following:
The statement decided which locks we should need for the table: for updates/deletes/inserts we get WRITE locks, for SELECT... we get read locks.
Before adding the lock into the table lock handler (see thr_lock.c) mysqld calls store lock with the requested locks. Store lock can now modify a write lock to a read lock (or some other lock), ignore the lock (if we don't want to use MySQL table locks at all) or add locks for many tables (like we do when we are using a MERGE handler).
In some exceptional cases MySQL may send a request for a TL_IGNORE; This means that we are requesting the same lock as last time and this should also be ignored.
Called from lock.cc by get_lock_data().
Implemented in temptable::Handler, ha_archive, ha_blackhole, ha_tina, ha_example, ha_federated, ha_heap, ha_myisam, ha_myisammrg, ha_perfschema, ha_innobase, ha_innopart, and mock::ha_mock.
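A minimal sketch of a store_lock() implementation for a hypothetical engine, following the downgrade pattern described above (m_lock is an assumed THR_LOCK_DATA member of the handler; thd_in_lock_tables() is the plugin service used to detect LOCK TABLES):

  THR_LOCK_DATA **ha_sketch::store_lock(THD *thd, THR_LOCK_DATA **to,
                                        enum thr_lock_type lock_type) {
    if (lock_type != TL_IGNORE && m_lock.type == TL_UNLOCK) {
      // Let plain INSERT/UPDATE/DELETE statements run with a weaker write
      // lock, but keep the stronger lock under LOCK TABLES.
      if (lock_type >= TL_WRITE_CONCURRENT_INSERT && lock_type <= TL_WRITE &&
          !thd_in_lock_tables(thd))
        lock_type = TL_WRITE_ALLOW_WRITE;
      m_lock.type = lock_type;
    }
    *to++ = &m_lock;  // hand the (possibly modified) lock back to the server
    return to;
  }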
|
privatepure virtual |
Implemented in ha_archive, ha_blackhole, ha_tina, ha_example, ha_federated, ha_heap, ha_innobase, ha_innopart, ha_myisam, ha_myisammrg, ha_perfschema, mock::ha_mock, and temptable::Handler.
double handler::table_in_memory_estimate | ( | ) | const |
Return an estimate of how much of the table that is currently stored in main memory.
This estimate should be the fraction of the table that currently is available in a main memory buffer. The estimate should be in the range from 0.0 (nothing in memory) to 1.0 (entire table in memory).
|
virtual |
Cost estimate for doing a complete table scan.
|
pure virtual |
The following can be called without an open handler.
Implemented in ha_archive, ha_blackhole, ha_tina, ha_example, ha_federated, ha_heap, ha_innobase, ha_myisam, ha_myisammrg, ha_perfschema, mock::ha_mock, and temptable::Handler.
|
inlinevirtual |
|
inlinevirtual |
Quickly remove all rows from a table.
[in,out] | table_def | dd::Table object for table being truncated. |
Reimplemented in temptable::Handler, ha_archive, ha_federated, ha_myisammrg, and ha_perfschema.
|
inlinevirtual |
Tell the engine whether it should avoid unnecessary lock waits.
If yes, in an UPDATE or DELETE, if the row under the cursor was locked by another transaction, the engine may try an optimistic read of the last committed row value under the cursor.
Reimplemented in ha_innobase, and ha_innopart.
void handler::unbind_psi | ( | ) |
|
inlineprivatevirtual |
Unloads a table from its defined secondary storage engine.
db_name | Database name. |
table_name | Table name. |
error_if_not_loaded | If true, then errors will be reported by this function. If false, no errors will be reported (silently fail). This case of false is useful during DROP TABLE where a failure to unload should not prevent dropping the whole table. |
Reimplemented in mock::ha_mock.
|
inlinevirtual |
Unlock last accessed row.
Record currently processed was not in the result set of the statement and is thus unlocked. Used for UPDATE and DELETE queries.
Reimplemented in ha_innobase, ha_innopart, and temptable::Handler.
|
protected |
Release lock for protecting ha_share.
int handler::update_auto_increment | ( | void | ) |
|
inlinevirtual |
Update create info as part of ALTER TABLE.
Forward this handler call to the storage engine for each partition handler. The data_file_name for each partition may need to be reset if the tablespace was moved. Use a dummy HA_CREATE_INFO structure and transfer necessary data.
create_info | Create info from ALTER TABLE. |
Reimplemented in temptable::Handler, ha_archive, ha_heap, ha_innobase, ha_innopart, ha_myisam, and ha_myisammrg.
Update a single row.
Note: If HA_ERR_FOUND_DUPP_KEY is returned, the handler must read all columns of the row so MySQL can create an error message. If the columns required for the error message are not read, the error message will contain garbage.
Reimplemented in temptable::Handler, ha_blackhole, ha_tina, ha_example, ha_federated, ha_heap, ha_innobase, ha_myisam, ha_myisammrg, ha_perfschema, and ha_innopart.
|
inlineprivatevirtual |
Reimplemented in ha_innobase.
|
virtual |
use_hidden_primary_key() is called in case of an update/delete when (table_flags() and HA_PRIMARY_KEY_REQUIRED_FOR_DELETE) is defined but we don't have a primary key
Reimplemented in ha_perfschema.
|
inlinevirtual |
Normally, when running UPDATE or DELETE queries, we need to wait for other transactions to release their locks on a given row before we can read it and potentially update it. However, in READ UNCOMMITTED and READ COMMITTED, we can ignore these locks if we don't intend to modify the row (e.g., because it failed a WHERE). This is signaled through enabling “semi-consistent read”, by calling try_semi_consistent_read(true) (and then setting it back to false after finishing the query).

If semi-consistent read is enabled, and we are in READ UNCOMMITTED or READ COMMITTED, the storage engine is permitted to return rows that are locked and thus un-updatable. If the optimizer doesn't want the row, e.g., because it got filtered out, it can call unlock_row() as usual. However, if it intends to update the row, it needs to call was_semi_consistent_read() before doing so. If was_semi_consistent_read() returns false, the row was never locked to begin with and can be updated as usual. However, if it returns true, the row was read optimistically, must be discarded (i.e., do not try to update the row) and must be re-read with locking enabled. The next read call after was_semi_consistent_read() will automatically re-read the same row, this time with locking enabled.

Thus, typical use in an UPDATE scenario would look like this:

  file->try_semi_consistent_read(true);
  file->ha_rnd_init(true);
  while (file->ha_rnd_next(table->record[0]) == 0) {
    if (row is filtered...) {
      file->unlock_row();
      continue;
    }
    if (file->was_semi_consistent_read()) {
      // Discard the row; next ha_rnd_next() will read it again with locking.
      continue;
    }
    // Process row here.
  }
  file->ha_rnd_end();
  file->try_semi_consistent_read(false);

If the transaction isolation level is REPEATABLE READ or SERIALIZABLE, enabling this flag has no effect.
Reimplemented in ha_innobase, and ha_innopart.
|
virtual |
Provide an upper cost-limit of doing a specified number of seek-and-read key lookups.
This need to be comparable and calculated with the same 'metric' as page_read_cost.
reads | the number of rows read in the 'worst' case. |
|
inlinevirtual |
Write to a blob.
[in,out] | thd | user session |
[in,out] | load_ctx | load execution context |
[in] | thread_idx | index of the thread executing |
[in] | blob_ctx | a blob context |
[in] | data | data to be written to blob. |
[in] | data_len | length of data to be written in bytes. |
Reimplemented in ha_innobase.
|
inlineprivatevirtual |
Write a row.
write_row() inserts a row. buf is a byte array of data, normally record[0].
You can use the field information to extract the data from the native byte array type.
An example of this would be:

  for (Field **field = table->field; *field; field++) {
    ...
  }
buf | Buffer to write from. |
0 | Success. |
!= 0 | Error code. |
Reimplemented in ha_archive, ha_blackhole, ha_tina, ha_example, ha_federated, ha_heap, ha_innobase, ha_myisam, ha_myisammrg, ha_perfschema, temptable::Handler, and ha_innopart.
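Building on the field loop above, here is a minimal sketch of converting a row to text, assuming a hypothetical engine "example"; example_append_null() and example_append() are illustrative calls, and buf is assumed to be table->record[0], so the Field objects already point into it.

  int ha_example::write_row(uchar *buf) {
    char value_buffer[1024];
    String value(value_buffer, sizeof(value_buffer), &my_charset_bin);
    for (Field **field = table->field; *field; field++) {
      if ((*field)->is_null()) {
        example_append_null();                        // hypothetical engine call
      } else {
        (*field)->val_str(&value, &value);            // text form of the column
        example_append(value.ptr(), value.length());  // hypothetical engine call
      }
    }
    return 0;  // 0 on success, a handler error code otherwise
  }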
|
friend |
|
friend |
uint handler::active_index |
Discrete_interval handler::auto_inc_interval_for_cur_row |
Interval returned by get_auto_increment() and being consumed by the inserter.
uint handler::auto_inc_intervals_count |
Number of reserved auto-increment intervals.
Serves as a heuristic when we have no estimation of how many records the statement will insert: the more intervals we have reserved, the bigger the next one. Reset in handler::ha_release_auto_increment().
|
protected |
uchar* handler::dup_ref |
Pointer to duplicate row.
key_range* handler::end_range |
End value for a range scan.
If this is NULL the range scan has no end value. Should also be NULL when there is no ongoing range scan. Used by the read_range() functions and also evaluated by pushed index conditions.
|
protected |
uint handler::errkey |
|
protected |
FT_INFO* handler::ft_handler |
|
private |
Pointer to the location where the Handler_share pointer is stored/retrieved.
For non partitioned handlers this is &TABLE_SHARE::ha_share.
handlerton* handler::ht |
bool handler::implicit_emptied |
|
protected |
enum { ... } handler::inited |
ulonglong handler::insert_id_for_cur_row |
insert id for the current row (autogenerated; if not autogenerated, it's 0).
At first successful insertion, this variable is stored into THD::first_successful_insert_id_in_cur_stmt.
|
private |
uint handler::key_used_on_scan |
|
private |
The lock type set when calling ha_external_lock().
This is propagated down to the storage engine. The reason for also storing it here is that when doing MRR we need to create/clone a second handler object. This cloned handler object needs to know about the lock_type used.
Pointer to the handler of the table in the primary storage engine, if this handler represents a table in a secondary storage engine.
PSI_table* handler::m_psi |
Instrumented table associated with this handler.
|
private |
Batch mode state.
|
private |
The current event in a batch.
|
private |
Storage for the event in a batch.
|
private |
The number of rows in the batch.
std::mt19937* handler::m_random_number_engine {nullptr} |
|
private |
Buffer for multi-row reads.
double handler::m_sampling_percentage |
|
private |
|
private |
Some non-virtual ha_* functions, responsible for reading rows, like ha_rnd_pos(), must ensure that virtual generated columns are calculated before they return.
For that, they should set this member to true at their start, and check it before they return: if the member is still true, it means they should calculate; if it's false, it means the calculation has been done by some called lower-level function and does not need to be re-done (which is why we need this status flag: to avoid redundant calculations, for performance).
Note that when updating generated fields, the NULL row status in the underlying TABLE objects matters, so be sure to reset it if needed!
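A simplified sketch of the pattern, loosely modelled on what a wrapper like ha_rnd_pos() does; it assumes the member described here is named m_update_generated_read_fields and that update_generated_read_fields() is the server helper doing the actual calculation, and it omits instrumentation and most error handling.

  int handler::ha_rnd_pos(uchar *buf, uchar *pos) {
    m_update_generated_read_fields = table->has_gcol();  // set at the start
    int result = rnd_pos(buf, pos);                      // engine-level read
    if (result == 0 && m_update_generated_read_fields) {
      // Still true: no lower-level call calculated the virtual generated
      // columns, so do it here before returning.
      result = update_generated_read_fields(buf, table);
      m_update_generated_read_fields = false;
    }
    return result;
  }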
bool handler::m_virt_gcol_in_end_range = false |
Flag which tells if end_range contains a virtual generated column.
The content is invalid when end_range is nullptr.
KEY_MULTI_RANGE handler::mrr_cur_range |
RANGE_SEQ_IF handler::mrr_funcs |
bool handler::mrr_have_range |
bool handler::mrr_is_output_sorted |
range_seq_t handler::mrr_iter |
HANDLER_BUFFER* handler::multi_range_buffer |
ulonglong handler::next_insert_id |
next_insert_id is the next value which should be inserted into the auto_increment column: in a multi-row insert statement (like INSERT ... SELECT), for the first row where the autoinc value is not specified by the statement, get_auto_increment() is called and asked to generate a value; next_insert_id is then set to the next value, and for all other rows next_insert_id is used (and increased each time) without calling get_auto_increment().
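For example (values purely illustrative, and assuming the first reservation covers the whole statement): for a three-row INSERT ... SELECT into a table whose counter stands at 5, get_auto_increment() is consulted only for the first row; that row gets 5 and next_insert_id is set to 6, which is then used for the second row (and bumped to 7) and for the third row, without any further calls to get_auto_increment().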
const Item* handler::pushed_cond |
Item* handler::pushed_idx_cond |
uint handler::pushed_idx_cond_keyno |
|
protected |
|
private |
uint handler::ranges_in_seq |
uchar* handler::ref |
Pointer to current row.
uint handler::ref_length |
Length of ref (1-8 or the clustered key length)
|
private |
ha_statistics handler::stats |
|
protected |
|
protected |