MySQL 8.0.40
Source Code Documentation
The database buffer pool LRU replacement algorithm.
Classes

struct buf_LRU_stat_t
    Statistics for selecting the LRU list for eviction.

Typedefs

using Space_References = std::map< struct fil_space_t *, size_t >

Functions

bool buf_LRU_buf_pool_running_out (void)
    Returns true if less than 25 % of the buffer pool is available.
void buf_LRU_flush_or_remove_pages (space_id_t id, buf_remove_t buf_remove, const trx_t *trx, bool strict=true)
    Flushes all dirty pages or removes all pages belonging to a given tablespace.
void buf_LRU_insert_zip_clean (buf_page_t *bpage)
    Insert a compressed block into buf_pool->zip_clean in the LRU order.
bool buf_LRU_free_page (buf_page_t *bpage, bool zip)
    Try to free a block.
bool buf_LRU_scan_and_free_block (buf_pool_t *buf_pool, bool scan_all)
    Try to free a replaceable block.
buf_block_t * buf_LRU_get_free_only (buf_pool_t *buf_pool)
    Returns a free block from the buf_pool.
buf_block_t * buf_LRU_get_free_block (buf_pool_t *buf_pool)
    Returns a free block from the buf_pool.
bool buf_LRU_evict_from_unzip_LRU (buf_pool_t *buf_pool)
    Determines if the unzip_LRU list should be used for evicting a victim instead of the general LRU list.
void buf_LRU_block_free_non_file_page (buf_block_t *block)
    Puts a block back to the free list.
void buf_LRU_add_block (buf_page_t *bpage, bool old)
    Adds a block to the LRU list.
void buf_unzip_LRU_add_block (buf_block_t *block, bool old)
    Adds a block to the LRU list of decompressed zip pages.
void buf_LRU_make_block_young (buf_page_t *bpage)
    Moves a block to the start of the LRU list.
void buf_LRU_make_block_old (buf_page_t *bpage)
    Moves a block to the end of the LRU list.
uint buf_LRU_old_ratio_update (uint old_pct, bool adjust)
    Updates buf_pool->LRU_old_ratio.
void buf_LRU_stat_update (void)
    Update the historical stats that we are collecting for LRU eviction policy at the end of each interval.
void buf_LRU_free_one_page (buf_page_t *bpage, bool ignore_content)
    Remove one page from LRU list and put it to free list.
void buf_LRU_adjust_hp (buf_pool_t *buf_pool, const buf_page_t *bpage)
    Adjust LRU hazard pointers if needed.
void buf_LRU_validate (void)
    Validates the LRU list.
void buf_LRU_validate_instance (buf_pool_t *buf_pool)
    Validates the LRU list for one buffer pool instance.
Space_References buf_LRU_count_space_references ()
    Counts number of pages that are still in the LRU for each space instance encountered.
void buf_LRU_print (void)
    Prints the LRU list.
void buf_LRU_stat_inc_io ()
    Increments the I/O counter in buf_LRU_stat_cur.
void buf_LRU_stat_inc_unzip ()
    Increments the page_zip_decompress() counter in buf_LRU_stat_cur.

Variables

constexpr uint32_t BUF_LRU_OLD_MIN_LEN = 8 * 1024 / 16
    Minimum LRU list length for which the LRU_old pointer is defined 8 megabytes of 16k pages.
buf_LRU_stat_t buf_LRU_stat_cur
    Current operation counters.
buf_LRU_stat_t buf_LRU_stat_sum
    Running sum of past values of buf_LRU_stat_cur.

Heuristics for detecting index scan

constexpr uint32_t BUF_LRU_OLD_RATIO_DIV = 1024
    The denominator of buf_pool->LRU_old_ratio.
constexpr uint32_t BUF_LRU_OLD_RATIO_MAX = BUF_LRU_OLD_RATIO_DIV
    Maximum value of buf_pool->LRU_old_ratio.
constexpr uint32_t BUF_LRU_OLD_RATIO_MIN = 51
    Minimum value of buf_pool->LRU_old_ratio.
std::chrono::milliseconds get_buf_LRU_old_threshold ()
    Move blocks to "new" LRU list only if the first access was at least this many milliseconds ago.
The database buffer pool LRU replacement algorithm.
Created 11/5/1995 Heikki Tuuri
using Space_References = std::map< struct fil_space_t *, size_t >
void buf_LRU_add_block (buf_page_t *bpage, bool old)
Adds a block to the LRU list.
Please make sure that the page_size is already set when invoking the function, so that we can get correct page_size from the buffer page when adding a block into LRU
bpage | in: control block |
old | in: true if should be put to the old blocks in the LRU list, else put to the start; if the LRU list is very short, the block is added to the start, regardless of this parameter |
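The old flag implements the LRU midpoint insertion described above. Below is a minimal, self-contained sketch of the idea using a std::list; it only illustrates the concept, not InnoDB's actual list handling (LRU_old pointer maintenance, BUF_LRU_OLD_MIN_LEN thresholds and so on), and all names in it are invented for the illustration.

  #include <algorithm>
  #include <cstddef>
  #include <cstdint>
  #include <iterator>
  #include <list>

  struct page { std::uint64_t id; };

  /* Front of the list = most recently used, back = eviction candidates.
  old_len is the current length of the "old" sublist at the tail. */
  void lru_add_block_sketch(std::list<page> &lru, std::size_t old_len,
                            const page &p, bool old) {
    if (!old || lru.size() < 4) {
      /* New pages, or any page while the list is very short, go to the head. */
      lru.push_front(p);
    } else {
      /* Otherwise insert just in front of the "old" sublist, i.e. behind the
      midpoint, so a one-off scan cannot immediately push out hot pages. */
      auto it = lru.end();
      std::advance(it, -static_cast<std::ptrdiff_t>(std::min(old_len, lru.size())));
      lru.insert(it, p);
    }
  }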
void buf_LRU_adjust_hp (buf_pool_t *buf_pool, const buf_page_t *bpage)
Adjust LRU hazard pointers if needed.
[in] | buf_pool | Buffer pool instance |
[in] | bpage | Control block |
void buf_LRU_block_free_non_file_page (buf_block_t *block)
Puts a block back to the free list.
[in] | block | block must not contain a file page |
bool buf_LRU_buf_pool_running_out (void)
Returns true if less than 25 % of the buffer pool is available.
This can be used in heuristics to prevent huge transactions eating up the whole buffer pool for their locks.
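A hedged usage sketch of such a heuristic: a caller about to pin many pages might use the check as a guard. The reaction shown (a hypothetical flag) is purely illustrative and is not what InnoDB itself does with the result.

  /* Hypothetical caller-side guard; throttle_large_trx is an invented flag. */
  bool throttle_large_trx = false;

  if (buf_LRU_buf_pool_running_out()) {
    /* Less than 25 % of the buffer pool is free: avoid starting work that
    pins a large number of pages, or suggest a larger innodb_buffer_pool_size. */
    throttle_large_trx = true;
  }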
Space_References buf_LRU_count_space_references ()
Counts number of pages that are still in the LRU for each space instance encountered.
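Because Space_References is a std::map< fil_space_t *, size_t >, the result can be walked directly. A hedged usage sketch (the debug-style accounting is illustrative; it assumes only the InnoDB headers that declare fil_space_t and this function):

  const Space_References refs = buf_LRU_count_space_references();

  size_t still_cached = 0;
  for (const auto &entry : refs) {
    const fil_space_t *space = entry.first; /* the tablespace (space->id names it) */
    still_cached += entry.second;           /* its pages still on some LRU list */
    (void)space;
  }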
bool buf_LRU_evict_from_unzip_LRU (buf_pool_t *buf_pool)
Determines if the unzip_LRU list should be used for evicting a victim instead of the general LRU list.
[in,out] | buf_pool | buffer pool instance |
void buf_LRU_flush_or_remove_pages (space_id_t id, buf_remove_t buf_remove, const trx_t *trx, bool strict = true)
Flushes all dirty pages or removes all pages belonging to a given tablespace.
A PROBLEM: if readahead is being started, what guarantees that it will not try to read in pages after this operation has completed?
[in] | id | tablespace ID |
[in] | buf_remove | remove or flush strategy |
[in] | trx | to check if the operation must be interrupted |
[in] | strict | true, if no page from tablespace can be in buffer pool just after flush |
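A hedged usage sketch, assuming the BUF_REMOVE_ALL_NO_WRITE strategy of buf_remove_t: when a tablespace is about to be deleted, its cached pages can simply be discarded without writing them back. The surrounding drop logic, the in-scope space_id variable and the nullptr transaction (no interruption check) are illustrative only.

  /* space_id: the tablespace being dropped (a space_id_t in scope). */
  buf_LRU_flush_or_remove_pages(space_id, BUF_REMOVE_ALL_NO_WRITE,
                                nullptr /* no trx: not interruptible */);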
void buf_LRU_free_one_page (buf_page_t *bpage, bool ignore_content)
Remove one page from LRU list and put it to free list.
The caller must hold the LRU list and block mutexes and have page hash latched in X. The latch and the block mutexes will be released.
[in,out] | bpage | block, must contain a file page and be in a state where it can be freed; there may or may not be a hash index to the page |
[in] | ignore_content | true if should ignore page content, since it could be not initialized |
bool buf_LRU_free_page (buf_page_t *bpage, bool zip)
Try to free a block.
If bpage is a descriptor of a compressed-only page, the descriptor object will be freed as well. NOTE: this function may temporarily release and relock the buf_page_get_mutex(). Furthermore, the page frame will no longer be accessible via bpage. If this function returns true, it will also release the LRU list mutex. The caller must hold the LRU list and buf_page_get_mutex() mutexes.
[in] | bpage | block to be freed |
[in] | zip | true if should remove also the compressed page of an uncompressed page |
buf_block_t * buf_LRU_get_free_block (buf_pool_t *buf_pool)
Returns a free block from the buf_pool.
The block is taken off the free list. If the free list is empty, blocks are moved from the end of the LRU list to the free list. This function is called from a user thread when it needs a clean block to read in a page. Note that we only ever get a block from the free list; even when we flush a page or find a page in an LRU scan, we put it on the free list to be used.

iteration 0:
    - get a block from the free list, success: done
    - if buf_pool->try_LRU_scan is set, scan the LRU up to srv_LRU_scan_depth to find a clean block; the above will put the block on the free list, success: retry the free list
    - flush one dirty page from the tail of the LRU to disk; the above will put the block on the free list, success: retry the free list
iteration 1:
    - same as iteration 0 except: scan the whole LRU list, and scan it even if buf_pool->try_LRU_scan is not set
iteration > 1:
    - same as iteration 1 but sleep 10 ms
[in,out] | buf_pool | buffer pool instance |
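The loop above can be summarized in code. This is a hedged sketch of the documented retry structure, not the actual implementation: it assumes the InnoDB buffer pool headers for the types and the two documented functions it calls, the dirty-page flush step is a hypothetical helper, and error handling, counters and shutdown checks are omitted.

  #include <chrono>
  #include <cstddef>
  #include <thread>

  buf_block_t *get_free_block_sketch(buf_pool_t *buf_pool) {
    for (std::size_t n_iterations = 0;; ++n_iterations) {
      /* Step 1: the free list. Every path below ends up back here. */
      if (buf_block_t *block = buf_LRU_get_free_only(buf_pool)) {
        return block;
      }

      /* Step 2: evict a clean block from the LRU (whole list, and regardless
      of buf_pool->try_LRU_scan, from iteration 1 onwards). */
      const bool scan_all = (n_iterations > 0);
      if ((scan_all || buf_pool->try_LRU_scan) &&
          buf_LRU_scan_and_free_block(buf_pool, scan_all)) {
        continue; /* the freed block is now on the free list */
      }

      /* Step 3: flush one dirty page from the LRU tail so a block can be
      freed later, then retry. flush_single_page_from_lru_tail() is a
      hypothetical stand-in for the internal flushing step. */
      flush_single_page_from_lru_tail(buf_pool);

      if (n_iterations > 1) {
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
      }
    }
  }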
buf_block_t * buf_LRU_get_free_only (buf_pool_t *buf_pool)
Returns a free block from the buf_pool.
The block is taken off the free list. If the free list is empty, returns NULL.
[in] | buf_pool | buffer pool instance |
void buf_LRU_insert_zip_clean (buf_page_t *bpage)
Insert a compressed block into buf_pool->zip_clean in the LRU order.
[in] | bpage | pointer to the block in question |
void buf_LRU_make_block_old (buf_page_t *bpage)
Moves a block to the end of the LRU list.
[in] | bpage | control block |
void buf_LRU_make_block_young (buf_page_t *bpage)
Moves a block to the start of the LRU list.
[in] | bpage | control block |
uint buf_LRU_old_ratio_update (uint old_pct, bool adjust)
Updates buf_pool->LRU_old_ratio.
old_pct | in: Reserve this percentage of the buffer pool for "old" blocks. |
adjust | in: true=adjust the LRU list; false=just assign buf_pool->LRU_old_ratio during the initialization of InnoDB |
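A hedged, self-contained illustration of the percentage-to-ratio conversion implied by BUF_LRU_OLD_RATIO_DIV/MIN/MAX (documented below); the exact clamping and rounding inside buf_LRU_old_ratio_update() may differ, and the short constant names here are local to the sketch.

  #include <algorithm>
  #include <cstdint>

  constexpr uint32_t OLD_RATIO_DIV = 1024;          /* BUF_LRU_OLD_RATIO_DIV */
  constexpr uint32_t OLD_RATIO_MIN = 51;            /* BUF_LRU_OLD_RATIO_MIN, about 5 % */
  constexpr uint32_t OLD_RATIO_MAX = OLD_RATIO_DIV; /* BUF_LRU_OLD_RATIO_MAX, i.e. 100 % */

  uint32_t old_pct_to_ratio(uint32_t old_pct) {
    return std::clamp(old_pct * OLD_RATIO_DIV / 100, OLD_RATIO_MIN, OLD_RATIO_MAX);
  }

  /* Example: the innodb_old_blocks_pct default of 37 gives
  37 * 1024 / 100 = 378, i.e. roughly 37 % of the LRU list is kept "old". */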
void buf_LRU_print (void)
Prints the LRU list.
bool buf_LRU_scan_and_free_block (buf_pool_t *buf_pool, bool scan_all)
Try to free a replaceable block.
[in,out] | buf_pool | buffer pool instance |
[in] | scan_all | scan whole LRU list if true, otherwise scan only BUF_LRU_SEARCH_SCAN_THRESHOLD blocks |
inline void buf_LRU_stat_inc_io ()
Increments the I/O counter in buf_LRU_stat_cur.
inline void buf_LRU_stat_inc_unzip ()
Increments the page_zip_decompress() counter in buf_LRU_stat_cur.
void buf_LRU_stat_update (void)
Update the historical stats that we are collecting for LRU eviction policy at the end of each interval.
void buf_LRU_validate (void)
Validates the LRU list.
void buf_LRU_validate_instance (buf_pool_t *buf_pool)
Validates the LRU list for one buffer pool instance.
[in] | buf_pool | buffer pool instance |
void buf_unzip_LRU_add_block (buf_block_t *block, bool old)
Adds a block to the LRU list of decompressed zip pages.
[in] | block | control block |
[in] | old | true if should be put to the end of the list, else put to the start |
std::chrono::milliseconds get_buf_LRU_old_threshold ()
Move blocks to "new" LRU list only if the first access was at least this many milliseconds ago.
Not protected by any mutex or latch.
constexpr uint32_t BUF_LRU_OLD_MIN_LEN = 8 * 1024 / 16
Minimum LRU list length for which the LRU_old pointer is defined: 8 megabytes of 16k pages.
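The arithmetic behind the value, as a compile-time check: 8 * 1024 / 16 = 512 blocks, and 512 pages of 16 KiB each cover exactly 8 MiB.

  static_assert(8 * 1024 / 16 == 512, "BUF_LRU_OLD_MIN_LEN is 512 pages");
  static_assert(512 * 16 * 1024 == 8 * 1024 * 1024,
                "512 pages of 16 KiB each span 8 MiB");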
constexpr uint32_t BUF_LRU_OLD_RATIO_DIV = 1024
The denominator of buf_pool->LRU_old_ratio.
constexpr uint32_t BUF_LRU_OLD_RATIO_MAX = BUF_LRU_OLD_RATIO_DIV
Maximum value of buf_pool->LRU_old_ratio.
constexpr uint32_t BUF_LRU_OLD_RATIO_MIN = 51
Minimum value of buf_pool->LRU_old_ratio.
extern buf_LRU_stat_t buf_LRU_stat_cur
Current operation counters.
Not protected by any mutex. Cleared by buf_LRU_stat_update().
extern buf_LRU_stat_t buf_LRU_stat_sum
Running sum of past values of buf_LRU_stat_cur.
Updated by buf_LRU_stat_update(). Not protected by any mutex; accesses rely on memory barriers.
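A hedged sketch of the bookkeeping implied by the two counters and buf_LRU_stat_update() above. The field names io and unzip are assumptions based on the counter descriptions, and the real function also maintains a window of past intervals (aging old values out of the sum), which is omitted here.

  /* Fold the finished interval into the running sum and start a new one. */
  void lru_stat_update_sketch() {
    buf_LRU_stat_sum.io += buf_LRU_stat_cur.io;       /* assumed field name */
    buf_LRU_stat_sum.unzip += buf_LRU_stat_cur.unzip; /* assumed field name */

    buf_LRU_stat_cur.io = 0;    /* "Cleared by buf_LRU_stat_update()" */
    buf_LRU_stat_cur.unzip = 0;
  }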