MySQL 9.1.0
Source Code Documentation
The general architecture is that the work is done in two phases: roughly, a read phase and a write phase.
Classes

struct Builder
    For loading indexes.
struct Compare_key
    Compare the keys of an index.
struct Context
    DDL context/configuration.
struct Copy_ctx
    Context for copying a clustered index row for the index being created.
struct Cursor
    Cursor for reading the data.
struct Dup
    Structure for reporting duplicate records.
struct Fetch_sequence
    Fetch the document ID from the table.
struct File_cursor
    For loading a B-tree index from a file.
struct File_reader
    Read rows from the temporary file.
struct file_t
    Information about temporary files used in the merge sort.
struct FTS
    Full-text search index builder.
struct Gen_sequence
    Generate the next document ID using a monotonic sequence.
struct Index_defn
    Definition of an index being created.
struct Index_field
    Index field definition.
struct Insert
    Stores the information needed for the insert phase of the FTS parallel sort.
struct Key_sort_buffer
    Buffer for sorting in main memory.
struct Key_sort_buffer_cursor
    For loading an index from a sorted buffer.
struct Load_cursor
class Loader
    Build indexes on a table by reading the clustered index, creating temporary files containing the index entries, merge sorting those entries, and inserting the sorted entries into the indexes.
struct Merge_cursor
    Merge the sorted files.
struct Merge_file_sort
    Merge the blocks in the file.
struct Parallel_cursor
    Cursor used for parallel reads.
struct Row
    Physical row context.
class RTree_inserter
    Caches the R-tree index tuples made from a single clustered index page scan, and then inserts them into the corresponding index tree.
struct Sequence
    Generate the next autoinc value based on a snapshot of the session auto_increment_increment and auto_increment_offset variables.
struct Token
    Row FTS token for the plugin parser.
struct Tokenize_ctx
    Stores information from a string tokenization operation.
class Unique_os_file_descriptor
    Captures ownership and manages the lifetime of an already opened OS file descriptor.
Typedefs

using mrec_buf_t = byte[UNIV_PAGE_SIZE_MAX]
    Secondary buffer for I/O operations on merge records.
using mrec_t = byte
    Merge record in an Aligned_buffer.
using Range = std::pair<os_offset_t, os_offset_t>
    Represents a chunk in bytes: the first element is the beginning offset of the chunk and the second element is the length of the chunk.
using IO_buffer = std::pair<byte *, os_offset_t>
    Block size for DDL I/O operations.
using Latch_release = std::function<dberr_t()>
    Called when a log free check is required.
using Builders = std::vector<Builder *, ut::allocator<Builder *>>
using Merge_offsets = std::deque<os_offset_t, ut::allocator<os_offset_t>>
    Start offsets in the file from which to merge records.
Enumerations

enum class Thread_state : uint8_t { UNKNOWN, COMPLETE, EXITING, ABORT }
    Status bit used for communication between parent and child threads.
Functions

template <typename T, typename Compare>
void merge_sort(T *arr, T *aux_arr, const size_t low, const size_t high, Compare compare)
    Merge sort a given array.
static void index_build_failed(dict_index_t *index) noexcept
    Note that an index build has failed.
dberr_t pread(os_fd_t fd, void *ptr, size_t len, os_offset_t offset) noexcept
    Read a merge block from the file system.
dberr_t pwrite(os_fd_t fd, void *ptr, size_t size, os_offset_t offset) noexcept
    Write a merge block to the file system.
Unique_os_file_descriptor file_create_low(const char *path) noexcept
    Create a temporary merge file in the given path and, if UNIV_PFS_IO is defined, register the file descriptor with the Performance Schema.
bool file_create(file_t *file, const char *path) noexcept
    Create a merge file in the given location.
dict_index_t *create_index(trx_t *trx, dict_table_t *table, const Index_defn *index_def, const dict_add_v_col_t *add_v) noexcept
    Create the index and load it into the dictionary.
dberr_t drop_table(trx_t *trx, dict_table_t *table) noexcept
    Drop a table.
dberr_t lock_table(trx_t *trx, dict_table_t *table, lock_mode mode) noexcept
    Sets an exclusive lock on a table for the duration of creating indexes.
static void mark_secondary_indexes(trx_t *trx, dict_table_t *table) noexcept
    We will have to drop the secondary indexes later, when the table is in use, unless the DDL has already been externalized.
static void drop_secondary_indexes(trx_t *trx, dict_table_t *table) noexcept
    Invalidate all row_prebuilt_t::ins_graph that refer to this table.
void drop_indexes(trx_t *trx, dict_table_t *table, bool locked) noexcept
    Drop those indexes which were created before an error occurred.
Variables

long fill_factor
    InnoDB B-tree index fill factor for bulk load.
ulong fts_parser_threads = 2
    Variable specifying the number of FTS parser threads to use.
constexpr size_t IO_BLOCK_SIZE = 4 * 1024
    Minimum I/O buffer size.
static constexpr size_t SERVER_CLUSTER_INDEX_ID = 0
    Clustered index ID (always the first index).
Detailed Description

The general architecture is that the work is done in two phases: roughly, a read phase and a write phase.
The scanner pushes the document to a read handler queue for processing.
Phase 1: Start several parsing/tokenization threads that read documents from the queue, parse and tokenize each document, add the resulting rows to a buffer, sort the rows in the buffer, and then write the buffer to a temporary file. There is one file per auxiliary table per parser instance. So, if you have 2 parser threads you will end up with:
2 x FTS_NUM_AUX_INDEX files.
Phase 2: The temporary files generated during Phase 1 are not closed but are passed to the second (write) phase so that they can be merged and the rows inserted into the new FTS index. Using the example above, create FTS_NUM_AUX_INDEX threads; each thread will merge 2 files.
using ddl::Builders = std::vector<Builder *, ut::allocator<Builder *>>
using ddl::IO_buffer = std::pair<byte *, os_offset_t>

Block size for DDL I/O operations. The minimum is UNIV_PAGE_SIZE, or page_get_free_space_of_empty() rounded to a power of 2.
using ddl::Latch_release = std::function<dberr_t()>
Called when a log free check is required.
using ddl::Merge_offsets = std::deque<os_offset_t, ut::allocator<os_offset_t>>

Start offsets in the file from which to merge records.
using ddl::mrec_buf_t = byte[UNIV_PAGE_SIZE_MAX]

Secondary buffer for I/O operations on merge records.
This buffer is used for writing or reading a record that spans two Aligned_buffer blocks. Thus, it must be able to hold one merge record, whose maximum size is the same as the minimum size of an Aligned_buffer.
using ddl::mrec_t = byte

Merge record in an Aligned_buffer.
The format is the same as a record in ROW_FORMAT=COMPACT, with the exception that the REC_N_NEW_EXTRA_BYTES are omitted.
using ddl::Range = std::pair<os_offset_t, os_offset_t>

Represents a chunk in bytes: the first element is the beginning offset of the chunk and the second element is the length of the chunk.
dict_index_t *ddl::create_index(trx_t *trx, dict_table_t *table, const Index_defn *index_def, const dict_add_v_col_t *add_v) noexcept
Create the index and load it into the dictionary.
[in,out] | trx | Trx (sets error_state) |
[in,out] | table | The index is on this table |
[in] | index_def | The index definition |
[in] | add_v | New virtual columns added along with add index call |
void ddl::drop_indexes(trx_t *trx, dict_table_t *table, bool locked) noexcept
Drop those indexes which were created before an error occurred.
The data dictionary must have been locked exclusively by the caller, because the transaction will not be committed.
[in,out] | trx | Transaction |
[in] | table | Table to lock. |
[in] | locked | true=table locked, false=may need to do a lazy drop |
static void ddl::drop_secondary_indexes(trx_t *trx, dict_table_t *table) noexcept
Invalidate all row_prebuilt_t::ins_graph that refer to this table.
That is, force row_get_prebuilt_insert_row() to rebuild prebuilt->ins_node->entry_list.
[in,out] | trx | Transaction |
[in] | table | Table that owns the indexes. |
dberr_t ddl::drop_table(trx_t *trx, dict_table_t *table) noexcept
Drop a table.
The caller must have ensured that the background stats thread is not processing the table. This can be done by calling dict_stats_wait_bg_to_stop_using_table() after locking the dictionary and before calling this function.
[in,out] | trx | Transaction |
[in,out] | table | Table to drop. |
bool ddl::file_create(file_t *file, const char *path) noexcept
Create a merge file in the given location.
[out] | file | Temporary generated during DDL. |
[in] | path | Location for creating temporary file |
Unique_os_file_descriptor ddl::file_create_low(const char *path) noexcept
Create a temporary merge file in the given path and, if UNIV_PFS_IO is defined, register the file descriptor with the Performance Schema.
[in] | path | Location for creating temporary merge files. |
static void ddl::index_build_failed(dict_index_t *index) noexcept
Note that an index build has failed.
[in,out] | index | Index that failed to build. |
dberr_t ddl::lock_table(trx_t *trx, dict_table_t *table, lock_mode mode) noexcept
Sets an exclusive lock on a table, for the duration of creating indexes.
[in,out] | trx | Transaction |
[in] | table | Table to lock. |
[in] | mode | Lock mode LOCK_X or LOCK_S |
static void ddl::mark_secondary_indexes(trx_t *trx, dict_table_t *table) noexcept
We will have to drop the secondary indexes later, when the table is in use, unless the DDL has already been externalized.
Mark the indexes as incomplete and corrupted, so that other threads will stop using them. Let dict_table_close() or crash recovery or the next invocation of prepare_inplace_alter_table() take care of dropping the indexes.
[in,out] | trx | Transaction |
[in] | table | Table that owns the indexes. |
template <typename T, typename Compare>
inline void ddl::merge_sort(T *arr, T *aux_arr, const size_t low, const size_t high, Compare compare)
Merge sort a given array.
[in,out] | arr | Array to sort. |
[in,out] | aux_arr | Auxiliary space to use for sort. |
[in] | low | First element (inclusive). |
[in] | high | Number of elements to sort from low. |
[in] | compare | Function to compare two elements. |
dberr_t ddl::pread(os_fd_t fd, void *ptr, size_t len, os_offset_t offset) noexcept
Read a merge block from the file system.
[in] | fd | file descriptor. |
[out] | ptr | Buffer to read into. |
[in] | len | Number of bytes to read. |
[in] | offset | Byte offset to start reading from. |
dberr_t ddl::pwrite(os_fd_t fd, void *ptr, size_t size, os_offset_t offset) noexcept
Write a merge block to the file system.
[in] | fd | File descriptor |
[in] | ptr | Buffer to write. |
[in] | size | Number of bytes to write. |
[in] | offset | Byte offset where to write. |
long ddl::fill_factor
InnoDB B-tree index fill factor for bulk load.
extern ulong ddl::fts_parser_threads
Variable specifying the number of FTS parser threads to use.
Parallel sort degree, must be a power of 2.
constexpr size_t ddl::IO_BLOCK_SIZE = 4 * 1024
Minimum IO buffer size.
static constexpr size_t ddl::SERVER_CLUSTER_INDEX_ID = 0
Cluster index ID (always the first index).