MySQL 9.0.0
Source Code Documentation
buf_pool_t Struct Reference

The buffer pool structure. More...

#include <buf0buf.h>

Public Attributes

General fields
BufListMutex chunks_mutex
 protects (de)allocation of chunks: More...
 
BufListMutex LRU_list_mutex
 LRU list mutex. More...
 
BufListMutex free_list_mutex
 free and withdraw list mutex More...
 
BufListMutex zip_free_mutex
 buddy allocator mutex More...
 
BufListMutex zip_hash_mutex
 zip_hash mutex More...
 
ib_mutex_t flush_state_mutex
 Flush state protection mutex. More...
 
BufPoolZipMutex zip_mutex
 Zip mutex of this buffer pool instance, protects compressed-only pages (of type buf_page_t, not buf_block_t). More...
 
ulint instance_no
 Array index of this buffer pool instance. More...
 
ulint curr_pool_size
 Current pool size in bytes. More...
 
ulint LRU_old_ratio
 Reserve this much of the buffer pool for "old" blocks. More...
 
ulint buddy_n_frames
 Number of frames allocated from the buffer pool to the buddy system. More...
 
volatile ulint n_chunks
 Number of buffer pool chunks. More...
 
volatile ulint n_chunks_new
 New number of buffer pool chunks. More...
 
buf_chunk_t * chunks
 buffer pool chunks More...
 
buf_chunk_t * chunks_old
 old buffer pool chunks to be freed after resizing buffer pool More...
 
ulint curr_size
 Current pool size in pages. More...
 
ulint old_size
 Previous pool size in pages. More...
 
page_no_t read_ahead_area
 Size in pages of the area which the read-ahead algorithms read if invoked. More...
 
hash_table_t * page_hash
 Hash table of buf_page_t or buf_block_t file pages, buf_page_in_file() == true, indexed by (space_id, offset). More...
 
hash_table_t * zip_hash
 Hash table of buf_block_t blocks whose frames are allocated to the zip buddy system, indexed by block->frame. More...
 
std::atomic< ulint > n_pend_reads
 Number of pending read operations. More...
 
std::atomic< ulint > n_pend_unzip
 number of pending decompressions. More...
 
std::chrono::steady_clock::time_point last_printout_time
 When buf_print_io was last called. More...
 
buf_buddy_stat_t buddy_stat [BUF_BUDDY_SIZES_MAX+1]
 Statistics of buddy system, indexed by block size. More...
 
buf_pool_stat_t stat
 Current statistics. More...
 
buf_pool_stat_t old_stat
 Old statistics. More...
 

Page flushing algorithm fields

BufListMutex flush_list_mutex
 Mutex protecting the flush list access. More...
 
FlushHp flush_hp
 "Hazard pointer" used during scan of flush_list while doing flush list batch. More...
 
FlushHp oldest_hp
 Entry pointer used to scan for the oldest page, excluding pages of the system temporary tablespace. More...
 
bool init_flush [BUF_FLUSH_N_TYPES]
 This is true when a flush of the given type is being initialized. More...
 
std::array< size_t, BUF_FLUSH_N_TYPES > n_flush
 This is the number of pending writes in the given flush type. More...
 
os_event_t no_flush [BUF_FLUSH_N_TYPES]
 This is in the set state when there is no flush batch of the given type running. More...
 
ib_rbt_t * flush_rbt
 A red-black tree is used exclusively during recovery to speed up insertions in the flush_list. More...
 
ulint freed_page_clock
 A sequence number used to count the number of buffer blocks removed from the end of the LRU list; NOTE that this counter may wrap around at 4 billion! A thread is allowed to read this for heuristic purposes without holding any mutex or latch. More...
 
bool try_LRU_scan
 Set to false when an LRU scan for a free block fails. More...
 
lsn_t track_page_lsn
 Page Tracking start LSN. More...
 
lsn_t max_lsn_io
 Maximum LSN for which write io has already started. More...
 
 UT_LIST_BASE_NODE_T (buf_page_t, list) flush_list
 Base node of the modified block list. More...
 
bool is_tracking ()
 Check if the page modifications are tracked. More...
 

LRU replacement algorithm fields

ulint withdraw_target
 Target length of withdraw block list, when withdrawing. More...
 
LRUHp lru_hp
 "hazard pointer" used during scan of LRU while doing LRU list batch. More...
 
LRUItr lru_scan_itr
 Iterator used to scan the LRU list when searching for replaceable victim. More...
 
LRUItr single_scan_itr
 Iterator used to scan the LRU list when searching for single page flushing victim. More...
 
buf_page_t * LRU_old
 Pointer to the block marking the start of the "old" sublist, which holds approximately LRU_old_ratio/BUF_LRU_OLD_RATIO_DIV of the LRU list; NULL if the LRU list is shorter than BUF_LRU_OLD_MIN_LEN; NOTE: when LRU_old != NULL, the old sublist's length should always equal LRU_old_len. More...
 
ulint LRU_old_len
 Length of the LRU list from the block to which LRU_old points onward, including that block; see buf0lru.cc for the restrictions on this value; 0 if LRU_old == NULL; NOTE: LRU_old_len must be adjusted whenever the old sublist shrinks or grows! More...
 
 UT_LIST_BASE_NODE_T (buf_page_t, list) free
 Base node of the free block list. More...
 
 UT_LIST_BASE_NODE_T (buf_page_t, list) withdraw
 Base node of the withdraw block list. More...
 
 UT_LIST_BASE_NODE_T (buf_page_t, LRU) LRU
 Base node of the LRU list. More...
 
 UT_LIST_BASE_NODE_T (buf_block_t, unzip_LRU) unzip_LRU
 Base node of the unzip_LRU list. More...
 

Buddy allocator fields

The buddy allocator is used for allocating compressed page frames and buf_page_t descriptors of blocks that exist in the buffer pool only in compressed form.

buf_page_t * watch
 Sentinel records for buffer pool watches. More...
 
 UT_LIST_BASE_NODE_T (buf_page_t, list) zip_clean
 Unmodified compressed pages. More...
 
 UT_LIST_BASE_NODE_T (buf_buddy_free_t, list) zip_free[BUF_BUDDY_SIZES_MAX]
 Buddy free lists. More...
 
bool allocate_chunk (ulonglong mem_size, buf_chunk_t *chunk)
 A wrapper for buf_pool_t::allocator.allocate_large which also advises the OS that this chunk should not be dumped to a core file if that was requested. More...
 
void deallocate_chunk (buf_chunk_t *chunk)
 A wrapper for buf_pool_t::allocator.deallocate_large which also advises the OS that this chunk can be dumped to a core file. More...
 
bool madvise_dump ()
 Advises the OS that all chunks in this buffer pool instance can be dumped to a core file. More...
 
bool madvise_dont_dump ()
 Advises the OS that all chunks in this buffer pool instance should not be dumped to a core file. More...
 
bool is_flushing (buf_flush_t flush_type) const
 Checks if the batch is running, which is basically equivalent to !os_event_is_set(no_flush[type]) if you hold flush_state_mutex. More...
 
template<typename F >
void change_flush_state (buf_flush_t flush_type, F &&change)
 Executes change() which modifies fields protected by flush_state_mutex. More...
 

Detailed Description

The buffer pool structure.

NOTE! The definition appears here only for other modules of this directory (buf) to see it. Do not use from outside!

Member Function Documentation

◆ allocate_chunk()

bool buf_pool_t::allocate_chunk ( ulonglong  mem_size,
buf_chunk_t *  chunk 
)

A wrapper for buf_pool_t::allocator.allocate_large which also advises the OS that this chunk should not be dumped to a core file if that was requested.

Emits a warning to the log and disables @@global.core_file if advising was requested but could not be performed, but still returns true as the allocation itself succeeded.

Parameters
[in]	mem_size	number of bytes to allocate
[in,out]	chunk	mem and mem_pfx fields of this chunk will be updated to contain information about the allocated memory region
Returns
true iff allocated successfully
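
For illustration, a minimal stand-alone sketch of such a wrapper, assuming Linux mmap()/madvise() with MADV_DONTDUMP; allocate_chunk_sketch and its parameters are hypothetical stand-ins, not the InnoDB API:

```cpp
#include <sys/mman.h>

#include <cstdio>

// Hypothetical sketch, not the InnoDB implementation: allocate a region and
// advise the kernel not to include it in core dumps (Linux MADV_DONTDUMP).
static bool allocate_chunk_sketch(unsigned long long mem_size, void **mem) {
  *mem = mmap(nullptr, mem_size, PROT_READ | PROT_WRITE,
              MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  if (*mem == MAP_FAILED) return false;  // the allocation itself failed

  // Advising is best-effort: on failure only a warning is emitted (the real
  // code also disables @@global.core_file); the allocation still succeeded,
  // so true is still returned.
  if (madvise(*mem, mem_size, MADV_DONTDUMP) != 0) {
    std::fprintf(stderr, "warning: madvise(MADV_DONTDUMP) failed\n");
  }
  return true;
}
```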

◆ change_flush_state()

template<typename F >
void buf_pool_t::change_flush_state ( buf_flush_t  flush_type,
F &&  change 
)
inline

Executes change() which modifies fields protected by flush_state_mutex.

If it caused a change to is_flushing(flush_type), then it sets or resets the no_flush[flush_type] event to keep it in sync.

Parameters
[in]	flush_type	The type of the flush this change of state concerns
[in]	change	A callback to execute within flush_state_mutex
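
As a rough stand-alone model of this protocol (not the InnoDB implementation; the Event type and the is_flushing() condition below are inferred from the init_flush/n_flush/no_flush descriptions on this page):

```cpp
#include <array>
#include <condition_variable>
#include <cstddef>
#include <mutex>
#include <utility>

enum buf_flush_t { BUF_FLUSH_LRU, BUF_FLUSH_LIST, BUF_FLUSH_SINGLE_PAGE, BUF_FLUSH_N_TYPES };

// Stand-in for os_event_t: "set" means no batch of that type is running.
struct Event {
  bool is_set = true;
  std::condition_variable cv;
};

struct FlushStateModel {
  std::mutex flush_state_mutex;
  std::array<bool, BUF_FLUSH_N_TYPES> init_flush{};
  std::array<std::size_t, BUF_FLUSH_N_TYPES> n_flush{};
  std::array<Event, BUF_FLUSH_N_TYPES> no_flush;

  // Caller holds flush_state_mutex. A batch is running iff it is being
  // initialized or still has pending writes (inferred from the field docs).
  bool is_flushing(buf_flush_t t) const {
    return init_flush[t] || n_flush[t] > 0;
  }

  // Run `change` under flush_state_mutex; if it flipped is_flushing(t),
  // set or reset no_flush[t] so the event stays in sync with the state.
  template <typename F>
  void change_flush_state(buf_flush_t t, F &&change) {
    std::lock_guard<std::mutex> guard(flush_state_mutex);
    const bool was_flushing = is_flushing(t);
    std::forward<F>(change)();
    if (was_flushing != is_flushing(t)) {
      no_flush[t].is_set = !is_flushing(t);
      if (no_flush[t].is_set) no_flush[t].cv.notify_all();  // wake waiters
    }
  }
};
```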

◆ deallocate_chunk()

void buf_pool_t::deallocate_chunk ( buf_chunk_t *  chunk)

A wrapper for buf_pool_t::allocator.deallocate_large which also advises the OS that this chunk can be dumped to a core file.

Emits a warning to the log and disables @@global.core_file if advising was requested but could not be performed.

Parameters
[in]	chunk	mem and mem_pfx fields of this chunk will be used to locate the memory region to free

◆ is_flushing()

bool buf_pool_t::is_flushing ( buf_flush_t  flush_type) const
inline

Checks if the batch is running, which is basically equivalent to !os_event_is_set(no_flush[type]) if you hold flush_state_mutex.

It is used as the source of truth to decide when to set or reset this event. The caller should hold flush_state_mutex.

Parameters
[in]	flush_type	The type of the flush we are interested in
Returns
Should no_flush[type] be in the "unset" state?
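
Continuing the FlushStateModel sketch above, a flusher could bracket a batch like this, so that waiters blocked on no_flush[type] are released only once is_flushing() really turns false:

```cpp
FlushStateModel fs;

// Start of a flush-list batch: setting init_flush flips is_flushing() to
// true, which "unsets" (resets) no_flush[BUF_FLUSH_LIST] in the helper.
fs.change_flush_state(BUF_FLUSH_LIST, [&] { fs.init_flush[BUF_FLUSH_LIST] = true; });

// ...queue writes, incrementing n_flush under the same helper...

// End of initialization; the batch counts as "running" while n_flush > 0,
// and the event becomes set again once the last pending write completes.
fs.change_flush_state(BUF_FLUSH_LIST, [&] { fs.init_flush[BUF_FLUSH_LIST] = false; });
```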

◆ is_tracking()

bool buf_pool_t::is_tracking ( )
inline

Check if the page modifications are tracked.

Returns
true if page modifications are tracked, false otherwise.

◆ madvise_dont_dump()

bool buf_pool_t::madvise_dont_dump ( )

Advises the OS that all chunks in this buffer pool instance should not be dumped to a core file.

Emits a warning to the log if it could not succeed.

Returns
true iff succeeded, false if no OS support or failed

◆ madvise_dump()

bool buf_pool_t::madvise_dump ( )

Advises the OS that all chunks in this buffer pool instance can be dumped to a core file.

Emits a warning to the log if it could not succeed.

Returns
true iff succeeded, false if no OS support or failed

◆ UT_LIST_BASE_NODE_T() [1/7]

buf_pool_t::UT_LIST_BASE_NODE_T ( buf_block_t  ,
unzip_LRU   
)

Base node of the unzip_LRU list.

The list is protected by the LRU_list_mutex.

◆ UT_LIST_BASE_NODE_T() [2/7]

buf_pool_t::UT_LIST_BASE_NODE_T ( buf_buddy_free_t  ,
list   
)

Buddy free lists.

◆ UT_LIST_BASE_NODE_T() [3/7]

buf_pool_t::UT_LIST_BASE_NODE_T ( buf_page_t  ,
list   
)

Base node of the modified block list.

◆ UT_LIST_BASE_NODE_T() [4/7]

buf_pool_t::UT_LIST_BASE_NODE_T ( buf_page_t  ,
list   
)

Base node of the free block list.

◆ UT_LIST_BASE_NODE_T() [5/7]

buf_pool_t::UT_LIST_BASE_NODE_T ( buf_page_t  ,
list   
)

Base node of the withdraw block list.

It is used only while shrinking the buffer pool; the blocks on this list will be removed, not reused. Protected by free_list_mutex.

◆ UT_LIST_BASE_NODE_T() [6/7]

buf_pool_t::UT_LIST_BASE_NODE_T ( buf_page_t  ,
list   
)

Unmodified compressed pages.

◆ UT_LIST_BASE_NODE_T() [7/7]

buf_pool_t::UT_LIST_BASE_NODE_T ( buf_page_t  ,
LRU   
)

Base node of the LRU list.

Member Data Documentation

◆ buddy_n_frames

ulint buf_pool_t::buddy_n_frames

Number of frames allocated from the buffer pool to the buddy system.

Protected by zip_hash_mutex.

◆ buddy_stat

buf_buddy_stat_t buf_pool_t::buddy_stat[BUF_BUDDY_SIZES_MAX+1]

Statistics of buddy system, indexed by block size.

Protected by zip_free_mutex, except for the used field, which is also accessed atomically.

◆ chunks

buf_chunk_t* buf_pool_t::chunks

buffer pool chunks

◆ chunks_mutex

BufListMutex buf_pool_t::chunks_mutex

protects (de)allocation of chunks:

  • changes to chunks and n_chunks are performed while holding this latch,
  • reading buf_pool_should_madvise requires holding this latch for any buf_pool_t,
  • writing to buf_pool_should_madvise requires holding these latches for all buf_pool_t-s.
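
A minimal stand-alone sketch of this asymmetric rule, using plain std::mutex stand-ins (names are illustrative, not the InnoDB API): a reader holds any one instance's latch, while a writer must hold all of them, so no reader can observe the flag mid-change.

```cpp
#include <cstddef>
#include <mutex>
#include <vector>

// Stand-alone model of the latching rule above; names are illustrative.
std::vector<std::mutex> chunks_mutexes(8);  // one latch per buffer pool instance
bool should_madvise = false;                // models buf_pool_should_madvise

bool read_should_madvise(std::size_t instance_no) {
  // Reading needs the chunks_mutex of any one instance.
  std::lock_guard<std::mutex> guard(chunks_mutexes[instance_no]);
  return should_madvise;
}

void write_should_madvise(bool value) {
  // Writing needs the latches of all instances: any concurrent reader holds
  // at least one of them, so it cannot race with the update.
  for (auto &m : chunks_mutexes) m.lock();
  should_madvise = value;
  for (auto &m : chunks_mutexes) m.unlock();
}
```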

◆ chunks_old

buf_chunk_t* buf_pool_t::chunks_old

old buffer pool chunks to be freed after resizing buffer pool

◆ curr_pool_size

ulint buf_pool_t::curr_pool_size

Current pool size in bytes.

◆ curr_size

ulint buf_pool_t::curr_size

Current pool size in pages.

◆ flush_hp

FlushHp buf_pool_t::flush_hp

"Hazard pointer" used during scan of flush_list while doing flush list batch.

Protected by flush_list_mutex

◆ flush_list_mutex

BufListMutex buf_pool_t::flush_list_mutex

Mutex protecting the flush list access.

This mutex protects flush_list, flush_rbt and bpage::list pointers when the bpage is on flush_list. It also protects writes to bpage::oldest_modification and the flush_hp hazard pointer.

◆ flush_rbt

ib_rbt_t* buf_pool_t::flush_rbt

A red-black tree is used exclusively during recovery to speed up insertions in the flush_list.

This tree contains blocks in order of oldest_modification LSN and is kept in sync with the flush_list. Each member of the tree MUST also be on the flush_list. This tree is relevant only in recovery and is set to NULL once the recovery is over. Protected by flush_list_mutex
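
As a sketch of why an ordered structure helps here: during recovery, dirty pages are not discovered in oldest_modification order, so each insertion must locate its neighbor in the sorted flush_list, and a balanced tree makes that lookup O(log n) instead of an O(n) list walk. A stand-alone model using std::multiset in place of ib_rbt_t (all names are illustrative):

```cpp
#include <cstdint>
#include <iterator>
#include <set>

// Pages ordered by oldest_modification, mirroring the flush_list order.
struct PageModel {
  std::uint64_t oldest_modification;
};

struct ByOldestModification {
  bool operator()(const PageModel *a, const PageModel *b) const {
    return a->oldest_modification < b->oldest_modification;
  }
};

std::multiset<PageModel *, ByOldestModification> flush_rbt_model;

// Find the existing page with the greatest oldest_modification <= the new
// page's, i.e. the flush_list node to insert after (nullptr = list head).
PageModel *find_insert_after(PageModel *page) {
  auto it = flush_rbt_model.upper_bound(page);
  if (it == flush_rbt_model.begin()) return nullptr;
  return *std::prev(it);
}
```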

◆ flush_state_mutex

ib_mutex_t buf_pool_t::flush_state_mutex

Flush state protection mutex.

◆ free_list_mutex

BufListMutex buf_pool_t::free_list_mutex

free and withdraw list mutex

◆ freed_page_clock

ulint buf_pool_t::freed_page_clock

A sequence number used to count the number of buffer blocks removed from the end of the LRU list; NOTE that this counter may wrap around at 4 billion! A thread is allowed to read this for heuristic purposes without holding any mutex or latch.

For non-heuristic purposes it is protected by LRU_list_mutex.
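
A stand-alone model of why a latch-free heuristic read is safe here, including the wraparound at 4 billion: with unsigned arithmetic the distance between two clock samples stays meaningful across a wrap, and a slightly stale value only skews a heuristic (names are illustrative):

```cpp
#include <cstdint>

struct PoolClockModel {
  std::uint32_t freed_page_clock = 0;  // bumped per eviction; may wrap
};

struct BlockClockModel {
  std::uint32_t clock_at_last_access = 0;  // pool clock sampled at last access
};

// Heuristically true if roughly `threshold` evictions happened since the
// block was last accessed. The read of freed_page_clock is racy on purpose;
// unsigned subtraction keeps the distance correct across a 32-bit wrap.
bool probably_near_lru_tail(const PoolClockModel &pool,
                            const BlockClockModel &block,
                            std::uint32_t threshold) {
  return pool.freed_page_clock - block.clock_at_last_access > threshold;
}
```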

◆ init_flush

bool buf_pool_t::init_flush[BUF_FLUSH_N_TYPES]

This is true when a flush of the given type is being initialized.

Protected by flush_state_mutex.

◆ instance_no

ulint buf_pool_t::instance_no

Array index of this buffer pool instance.

◆ last_printout_time

std::chrono::steady_clock::time_point buf_pool_t::last_printout_time

When buf_print_io was last called.

Accesses are not protected.

◆ lru_hp

LRUHp buf_pool_t::lru_hp

"hazard pointer" used during scan of LRU while doing LRU list batch.

Protected by buf_pool::LRU_list_mutex

◆ LRU_list_mutex

BufListMutex buf_pool_t::LRU_list_mutex

LRU list mutex.

◆ LRU_old

buf_page_t* buf_pool_t::LRU_old

Pointer to the block marking the start of the "old" sublist, which holds approximately LRU_old_ratio/BUF_LRU_OLD_RATIO_DIV of the LRU list; NULL if the LRU list is shorter than BUF_LRU_OLD_MIN_LEN; NOTE: when LRU_old != NULL, the old sublist's length should always equal LRU_old_len.

◆ LRU_old_len

ulint buf_pool_t::LRU_old_len

Length of the LRU list from the block to which LRU_old points onward, including that block; see buf0lru.cc for the restrictions on this value; 0 if LRU_old == NULL; NOTE: LRU_old_len must be adjusted whenever the old sublist shrinks or grows!
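
A stand-alone model of that invariant: the pointer and the length move in the same step, so the old-sublist bookkeeping can never drift (names are illustrative; the real adjustment logic lives in buf0lru.cc):

```cpp
#include <cstddef>
#include <list>

struct BlockModel { int id; };

struct LruModel {
  std::list<BlockModel> lru;  // head = newest, tail = oldest
  std::list<BlockModel>::iterator LRU_old = lru.end();  // start of "old" sublist
  std::size_t LRU_old_len = 0;  // blocks from LRU_old to the tail, inclusive

  // Grow the old sublist by one block: moving LRU_old one step toward the
  // head and bumping LRU_old_len happen together, keeping the invariant.
  void grow_old_sublist() {
    if (LRU_old != lru.begin()) {
      --LRU_old;
      ++LRU_old_len;
    }
  }
};
```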

◆ LRU_old_ratio

ulint buf_pool_t::LRU_old_ratio

Reserve this much of the buffer pool for "old" blocks.

◆ lru_scan_itr

LRUItr buf_pool_t::lru_scan_itr

Iterator used to scan the LRU list when searching for replaceable victim.

Protected by buf_pool::LRU_list_mutex.

◆ max_lsn_io

lsn_t buf_pool_t::max_lsn_io

Maximum LSN for which write io has already started.

◆ n_chunks

volatile ulint buf_pool_t::n_chunks

Number of buffer pool chunks.

◆ n_chunks_new

volatile ulint buf_pool_t::n_chunks_new

New number of buffer pool chunks.

◆ n_flush

std::array<size_t, BUF_FLUSH_N_TYPES> buf_pool_t::n_flush

This is the number of pending writes in the given flush type.

Protected by flush_state_mutex.

◆ n_pend_reads

std::atomic<ulint> buf_pool_t::n_pend_reads

Number of pending read operations.

Accessed atomically

◆ n_pend_unzip

std::atomic<ulint> buf_pool_t::n_pend_unzip

number of pending decompressions.

Accessed atomically.

◆ no_flush

os_event_t buf_pool_t::no_flush[BUF_FLUSH_N_TYPES]

This is in the set state when there is no flush batch of the given type running.

Protected by flush_state_mutex.

◆ old_size

ulint buf_pool_t::old_size

Previous pool size in pages.

◆ old_stat

buf_pool_stat_t buf_pool_t::old_stat

Old statistics.

◆ oldest_hp

FlushHp buf_pool_t::oldest_hp

Entry pointer used to scan for the oldest page, excluding pages of the system temporary tablespace.

◆ page_hash

hash_table_t* buf_pool_t::page_hash

Hash table of buf_page_t or buf_block_t file pages, buf_page_in_file() == true, indexed by (space_id, offset).

page_hash is protected by an array of mutexes.

◆ read_ahead_area

page_no_t buf_pool_t::read_ahead_area

Size in pages of the area which the read-ahead algorithms read if invoked.

◆ single_scan_itr

LRUItr buf_pool_t::single_scan_itr

Iterator used to scan the LRU list when searching for single page flushing victim.

Protected by buf_pool::LRU_list_mutex.

◆ stat

buf_pool_stat_t buf_pool_t::stat

Current statistics.

◆ track_page_lsn

lsn_t buf_pool_t::track_page_lsn

Page Tracking start LSN.

◆ try_LRU_scan

bool buf_pool_t::try_LRU_scan

Set to false when an LRU scan for a free block fails.

This flag is used to avoid repeated scans of the LRU list when we know that there is no free block available within the scan depth for eviction. Set to true whenever we flush a batch from the buffer pool. Access is protected by memory barriers.
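
A stand-alone model of this protocol, with std::atomic standing in for the memory barriers mentioned above and the scan itself elided (names are illustrative):

```cpp
#include <atomic>

std::atomic<bool> try_lru_scan_model{true};

bool try_get_free_block_model() {
  if (try_lru_scan_model.load(std::memory_order_acquire)) {
    bool found = false;  // result of scanning the LRU up to scan depth (elided)
    if (found) return true;
    // Nothing evictable within the scan depth: stop other callers from
    // repeating the pointless walk until a flush produces clean blocks.
    try_lru_scan_model.store(false, std::memory_order_release);
  }
  return false;  // caller falls back to waiting for / triggering a flush
}

void on_flush_batch_completed_model() {
  // A completed batch may have created clean, evictable blocks again.
  try_lru_scan_model.store(true, std::memory_order_release);
}
```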

◆ watch

buf_page_t* buf_pool_t::watch

Sentinel records for buffer pool watches.

Scanning the array is protected by taking all page_hash latches in X. Updating or reading an individual watch page is protected by a corresponding individual page_hash latch.

◆ withdraw_target

ulint buf_pool_t::withdraw_target

Target length of withdraw block list, when withdrawing.

◆ zip_free_mutex

BufListMutex buf_pool_t::zip_free_mutex

buddy allocator mutex

◆ zip_hash

hash_table_t* buf_pool_t::zip_hash

Hash table of buf_block_t blocks whose frames are allocated to the zip buddy system, indexed by block->frame.

◆ zip_hash_mutex

BufListMutex buf_pool_t::zip_hash_mutex

zip_hash mutex

◆ zip_mutex

BufPoolZipMutex buf_pool_t::zip_mutex

Zip mutex of this buffer pool instance, protects compressed-only pages (of type buf_page_t, not buf_block_t).

