MySQL 9.1.0
Source Code Documentation
trx_lock_t Struct Reference
Latching protocol for trx_lock_t::que_state.
#include <trx0trx.h>
Public Member Functions

trx_lock_t() = default
    Default constructor.

Public Attributes

ulint n_active_thrs
    number of active query threads

trx_que_t que_state
    valid when trx->state == TRX_STATE_ACTIVE: TRX_QUE_RUNNING, TRX_QUE_LOCK_WAIT, ...

uint64_t trx_locks_version
    Incremented each time a lock is added to or removed from trx->lock.trx_locks, so that a thread iterating over the list can spot a change that occurred while it was reacquiring latches.

std::atomic<trx_t *> blocking_trx
    If this transaction is waiting for a lock, then blocking_trx points to a transaction which holds a conflicting lock.

std::atomic<lock_t *> wait_lock
    The lock request that this transaction is waiting for.

uint32_t wait_lock_type
    Stores the type of the most recent lock for which this trx had to wait.

bool was_chosen_as_deadlock_victim
    when the transaction decides to wait for a lock, it sets this to false; if another transaction chooses this transaction as a victim in deadlock resolution, it sets this to true.

std::chrono::system_clock::time_point wait_started
    Lock wait started at this time.

que_thr_t * wait_thr
    query thread belonging to this trx that is in QUE_THR_LOCK_WAIT state.

lock_pool_t rec_pool
    Pre-allocated record locks.

lock_pool_t table_pool
    Pre-allocated table locks.

ulint rec_cached
    Next free record lock in pool.

ulint table_cached
    Next free table lock in pool.

mem_heap_t * lock_heap
    Memory heap for trx_locks.

trx_lock_list_t trx_locks
    Locks requested by the transaction.

ib_vector_t * autoinc_locks
    AUTOINC locks held by this transaction.

std::atomic<ulint> n_rec_locks
    Number of rec locks in this trx.

std::atomic<bool> inherit_all
    Used to indicate that every lock of this transaction placed on a record which is being purged should be inherited to the gap.

std::atomic<trx_schedule_weight_t> schedule_weight
    Weight of the waiting transaction used for scheduling.

bool in_rollback
    True when the transaction is forced to roll back due to a deadlock check or by another high-priority transaction.

bool start_stmt
    The transaction called ha_innobase::start_stmt() to lock a table.
Latching protocol for trx_lock_t::que_state.
trx_lock_t::que_state captures the state of the query thread during the execution of a query. This is different from the transaction state. The query state of a transaction can be updated asynchronously by other threads: system threads, such as the timeout monitor thread, or user threads executing other queries. Another thing to be mindful of is that there is a delay between when a query thread is put into the LOCK_WAIT state and when it actually starts waiting. Between these two events it is possible that the query thread is granted the lock it was waiting for, which implies that the state can change asynchronously.
All these operations take place within the context of locking. Therefore, state changes within the locking code must latch the shard containing the wait_lock and acquire trx->mutex when changing trx->lock.que_state to TRX_QUE_LOCK_WAIT or trx->lock.wait_lock to a non-NULL value; when the lock wait ends, it is sufficient to acquire only trx->mutex. To query the state, either of the mutexes is sufficient within the locking code, and no mutex is required when the query thread is no longer waiting.

The struct as a whole holds the locks and state of an active transaction. It is protected by the exclusive lock_sys latch, or by trx->mutex combined with a shared lock_sys latch (unless stated otherwise for a particular field).
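The protocol above can be summarized in a minimal, self-contained sketch using std::mutex; Shard, Trx, and the function names are illustrative stand-ins, not InnoDB's actual types or API:

```cpp
#include <atomic>
#include <mutex>

enum trx_que_t { TRX_QUE_RUNNING, TRX_QUE_LOCK_WAIT };
struct lock_t;  // opaque lock object

struct Shard {  // models one lock_sys shard
  std::mutex mutex;
};

struct Trx {
  std::mutex mutex;  // models trx->mutex
  trx_que_t que_state = TRX_QUE_RUNNING;
  std::atomic<lock_t *> wait_lock{nullptr};
};

// Entering a lock wait: latch BOTH the shard that contains the new
// wait_lock and trx->mutex before publishing the new state.
void begin_lock_wait(Trx &trx, Shard &shard, lock_t *lock) {
  std::scoped_lock guard{shard.mutex, trx.mutex};
  trx.wait_lock.store(lock);
  trx.que_state = TRX_QUE_LOCK_WAIT;
}

// Ending the wait: trx->mutex alone suffices for que_state
// (resetting wait_lock itself follows wait_lock's own protocol).
void end_lock_wait(Trx &trx) {
  std::scoped_lock guard{trx.mutex};
  trx.que_state = TRX_QUE_RUNNING;
}

// Querying inside locking code: either mutex is sufficient;
// this sketch uses trx->mutex.
bool is_waiting(Trx &trx) {
  std::scoped_lock guard{trx.mutex};
  return trx.que_state == TRX_QUE_LOCK_WAIT;
}
```

std::scoped_lock acquires both mutexes in a deadlock-free order, mirroring the requirement to hold both latches when publishing the wait state.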
trx_lock_t::trx_lock_t() = default

Default constructor.
ib_vector_t* trx_lock_t::autoinc_locks
AUTOINC locks held by this transaction.
Note that these are also in the trx_locks list. This vector needs to be freed explicitly when the trx instance is destroyed. Protected by trx->mutex.
std::atomic<trx_t *> trx_lock_t::blocking_trx
If this transaction is waiting for a lock, then blocking_trx points to a transaction which holds a conflicting lock.
It is possible that the transaction has trx->lock.wait_lock == nullptr and yet a non-null value of trx->lock.blocking_trx. For example, this can happen when we are in the process of moving locks from one heap_no to another. However, this is always done while the lock_sys shards containing the queues involved are latched, and conceptually it remains true that blocking_trx is the transaction being waited for, even though temporarily there is no pointer to a particular WAITING lock object.

The field changes from null to non-null while holding this->mutex and the mutex for the lock_sys shard containing the new value of trx->lock.wait_lock. It changes from non-null to a different non-null value while holding the mutex for the lock_sys shard containing trx->lock.wait_lock. It changes from non-null to null while holding this->mutex and the mutex for the lock_sys shard containing the old value of trx->lock.wait_lock before it was changed to null.

Readers might read it without any latch, but then they should validate the value, i.e. test that it is non-null and points to a valid trx. To make any definite judgments one needs to latch the lock_sys shard containing trx->lock.wait_lock.
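A sketch of the latch-free read mode; TrxLock and the helper name blocking_trx_hint are hypothetical, introduced only for illustration:

```cpp
#include <atomic>

struct trx_t;  // opaque transaction object

struct TrxLock {
  std::atomic<trx_t *> blocking_trx{nullptr};
};

// Latch-free read: the result is only a hint. The caller must
// validate it (non-null, still a live trx) before acting on it; for
// a definite answer, latch the lock_sys shard containing
// trx->lock.wait_lock and re-read under that latch.
trx_t *blocking_trx_hint(const TrxLock &l) {
  return l.blocking_trx.load(std::memory_order_relaxed);
}
```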
bool trx_lock_t::in_rollback
True when the transaction is forced to roll back due to a deadlock check or by another high-priority transaction.

Used by debug checks in lock0lock.cc.
std::atomic<bool> trx_lock_t::inherit_all
Used to indicate that every lock of this transaction placed on a record which is being purged should be inherited to the gap.
Readers should hold a latch on the lock they'd like to learn about whether or not it should be inherited. Writers who want to set it to true, should hold a latch on the lock-sys queue they intend to add a lock to. Writers may set it to false at any time.
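A minimal sketch of these asymmetric rules, with an illustrative Queue type standing in for a lock-sys queue (not InnoDB's actual types):

```cpp
#include <atomic>
#include <mutex>

struct Queue {  // models a lock-sys queue
  std::mutex latch;
};

std::atomic<bool> inherit_all{false};

// Setting to true requires the latch on the queue a lock is about
// to be added to.
void mark_inherit_all(Queue &queue) {
  std::scoped_lock guard{queue.latch};
  inherit_all.store(true);
}

// Clearing is allowed at any time, without any latch.
void clear_inherit_all() {
  inherit_all.store(false);
}
```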
mem_heap_t* trx_lock_t::lock_heap
Memory heap for trx_locks.
Protected by trx->mutex.
ulint trx_lock_t::n_active_thrs
number of active query threads
std::atomic<ulint> trx_lock_t::n_rec_locks
Number of rec locks in this trx.
It is modified with shared lock_sys latch. It is read with exclusive lock_sys latch.
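This inverted use of a shared/exclusive latch (many concurrent writers under the shared mode, a single reader under the exclusive mode) can be modeled with std::shared_mutex; the names below are illustrative, not InnoDB's API:

```cpp
#include <atomic>
#include <mutex>
#include <shared_mutex>

struct LockSys {
  std::shared_mutex latch;  // models the global lock_sys latch
};

struct TrxLock {
  std::atomic<unsigned long> n_rec_locks{0};
};

// Writers hold only the SHARED latch, so several of them may run
// concurrently; the atomic increment keeps the counter consistent.
void on_rec_lock_added(LockSys &sys, TrxLock &l) {
  std::shared_lock guard{sys.latch};
  l.n_rec_locks.fetch_add(1, std::memory_order_relaxed);
}

// Readers take the EXCLUSIVE latch, which excludes all writers, so
// a plain load observes a stable value.
unsigned long rec_lock_count(LockSys &sys, TrxLock &l) {
  std::unique_lock guard{sys.latch};
  return l.n_rec_locks.load(std::memory_order_relaxed);
}
```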
trx_que_t trx_lock_t::que_state
valid when trx->state == TRX_STATE_ACTIVE: TRX_QUE_RUNNING, TRX_QUE_LOCK_WAIT, ...
ulint trx_lock_t::rec_cached
Next free record lock in pool.
Protected by trx->mutex.
lock_pool_t trx_lock_t::rec_pool
Pre-allocated record locks.
Protected by trx->mutex.
std::atomic<trx_schedule_weight_t> trx_lock_t::schedule_weight
Weight of the waiting transaction used for scheduling.
The higher the weight, the more willing we are to grant a lock to this transaction. Values are updated and read without any synchronization beyond that provided by atomics, as slightly stale values do not hurt correctness, only performance.
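A sketch of this relaxed-atomics usage; the assumption that trx_schedule_weight_t is an integer type is made here for illustration:

```cpp
#include <atomic>
#include <cstdint>

// Assumed integer representation of the scheduling weight.
using trx_schedule_weight_t = std::uint64_t;

std::atomic<trx_schedule_weight_t> schedule_weight{0};

// Writer: no latch. A reader observing a slightly stale weight only
// degrades scheduling quality, never correctness.
void set_weight(trx_schedule_weight_t w) {
  schedule_weight.store(w, std::memory_order_relaxed);
}

trx_schedule_weight_t get_weight() {
  return schedule_weight.load(std::memory_order_relaxed);
}
```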
bool trx_lock_t::start_stmt
The transaction called ha_innobase::start_stmt() to lock a table.
Most likely a temporary table.
ulint trx_lock_t::table_cached
Next free table lock in pool.
Protected by trx->mutex.
lock_pool_t trx_lock_t::table_pool
Pre-allocated table locks.
Protected by trx->mutex.
trx_lock_list_t trx_lock_t::trx_locks
Locks requested by the transaction.
It is sorted so that LOCK_TABLE locks come before LOCK_REC locks. Modifications are protected by trx->mutex and a shard of the lock_sys mutex. Reads can be performed while holding trx->mutex or the exclusive lock_sys latch.

One can also check whether this list is empty from the thread running this transaction without holding any latches, keeping in mind that other threads might modify the list in parallel (for example during implicit-to-explicit conversion, or when a B-tree split or merge causes locks to be moved from one page to another). We rely on the assumption that such operations do not change the "emptiness" of the list and that emptiness can be checked in a safe manner: in the current implementation the length of the list is stored explicitly, so one can read it without risking unsafe pointer operations.
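The safe emptiness check can be illustrated with a hypothetical list type that stores its length explicitly, so the check never dereferences node pointers that a concurrent thread may be relinking:

```cpp
#include <atomic>
#include <cstddef>

struct lock_t {
  lock_t *next = nullptr;
};

struct TrxLockList {
  lock_t *head = nullptr;              // traversal requires latches
  std::atomic<std::size_t> count{0};   // length stored explicitly

  // Safe without any latch: reads a single integer and never
  // follows node pointers.
  bool empty_no_latch() const {
    return count.load(std::memory_order_relaxed) == 0;
  }
};
```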
uint64_t trx_lock_t::trx_locks_version
Incremented each time a lock is added to or removed from trx->lock.trx_locks, so that a thread iterating over the list can spot a change that occurred while it was reacquiring latches.
Protected by trx->mutex.
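A sketch of the version-validation pattern this enables; the types are illustrative (the real iteration is over trx_locks, not a vector of ints):

```cpp
#include <cstdint>
#include <mutex>
#include <vector>

struct Trx {
  std::mutex mutex;                  // models trx->mutex
  std::uint64_t trx_locks_version = 0;
  std::vector<int> trx_locks;        // stand-in for the lock list
};

void add_lock(Trx &trx, int lock) {
  std::scoped_lock guard{trx.mutex};
  trx.trx_locks.push_back(lock);
  ++trx.trx_locks_version;           // announce the modification
}

// An iterator that must drop trx->mutex (e.g. to take latches in the
// right order) snapshots the version and restarts on a mismatch.
void scan_locks(Trx &trx) {
  std::unique_lock<std::mutex> guard{trx.mutex};
  for (;;) {
    const std::uint64_t seen = trx.trx_locks_version;
    guard.unlock();
    // ... reacquire other latches here ...
    guard.lock();
    if (trx.trx_locks_version == seen) break;  // list unchanged
    // otherwise the list changed while we were away; restart
  }
  // safe to continue iterating under trx->mutex
}
```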
std::atomic<lock_t *> trx_lock_t::wait_lock
The lock request that this transaction is waiting for.
It might be NULL if the transaction is not currently waiting, or if the lock was temporarily removed during B-tree reorganization and will be recreated in a different queue. Such move is protected by latching the shards containing both queues, so the intermediate state should not be observed by readers who latch the old shard.
Changes from NULL to non-NULL while holding trx->mutex and latching the shard containing the new wait_lock value. Changes from non-NULL to NULL while latching the shard containing the old wait_lock value. Never changes directly from one non-NULL value to another.
Readers should hold the exclusive global latch on lock_sys, as in general they can't know which shard the lock belongs to before reading it. However, in debug assertions, where we strongly believe we know the value of this field in advance, we can:

- read the field without any latch, if we expect the value to be NULL, or
- read it while holding the latch on the shard we expect the lock to belong to, if we expect a particular non-NULL value.
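The transition rules can be summarized in a sketch; the names are illustrative, not the real lock_set_lock_and_trx_wait()/lock_reset_lock_and_trx_wait():

```cpp
#include <atomic>
#include <mutex>

struct lock_t;  // opaque

struct Shard {
  std::mutex mutex;
};

struct Trx {
  std::mutex mutex;  // models trx->mutex
  std::atomic<lock_t *> wait_lock{nullptr};
};

// NULL -> non-NULL: hold trx->mutex AND the shard of the NEW value.
void set_wait_lock(Trx &trx, Shard &new_shard, lock_t *lock) {
  std::scoped_lock guard{new_shard.mutex, trx.mutex};
  trx.wait_lock.store(lock);
}

// non-NULL -> NULL: the shard of the OLD value is enough.
void reset_wait_lock(Trx &trx, Shard &old_shard) {
  std::scoped_lock guard{old_shard.mutex};
  trx.wait_lock.store(nullptr);
}
```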
uint32_t trx_lock_t::wait_lock_type
Stores the type of the most recent lock for which this trx had to wait.
Set to lock_get_type_low(wait_lock) together with wait_lock in lock_set_lock_and_trx_wait(). This field is not cleared when wait_lock is set to NULL during lock_reset_lock_and_trx_wait(), because in lock_wait_suspend_thread() we are interested in reporting the last known value of this field via thd_wait_begin().

When a thread has to wait for a lock, it first releases the lock-sys latch and then calls lock_wait_suspend_thread(), where among other things it reports, via thd_wait_begin(), the kind of lock (THD_WAIT_ROW_LOCK vs THD_WAIT_TABLE_LOCK) that caused the wait. There is a possibility that before it gets to call thd_wait_begin(), some other thread latches lock-sys, grants the lock, and calls lock_reset_lock_and_trx_wait(). In other words: even if another thread was quick enough to grant the lock, we still want to report the reason for attempting to sleep.

Another common scenario in which trx->lock.wait_lock is set to NULL is page reorganization: when records have to be moved between pages, their locks are moved as well, which temporarily removes the old waiting lock and then adds a new one. For example, lock_rec_move_low() first calls lock_reset_lock_and_trx_wait(), which changes trx->lock.wait_lock to NULL, and then calls lock_rec_add_to_queue() -> RecLock::create() -> RecLock::lock_add() -> lock_set_lock_and_trx_wait() to set it again to the new lock. This all happens while holding the lock-sys latch, but wait_lock_type is read without that latch, so it must not be cleared simply because somebody changed wait_lock to NULL.

Protected by trx->mutex.
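A sketch of why leaving wait_lock_type untouched is useful; the types are illustrative, with WAIT_ROW/WAIT_TABLE standing in for THD_WAIT_ROW_LOCK/THD_WAIT_TABLE_LOCK:

```cpp
#include <atomic>
#include <cstdint>
#include <mutex>

enum wait_type : std::uint32_t { WAIT_NONE, WAIT_ROW, WAIT_TABLE };

struct lock_t {
  wait_type type;
};

struct Trx {
  std::mutex mutex;  // models trx->mutex
  std::atomic<lock_t *> wait_lock{nullptr};
  std::uint32_t wait_lock_type = WAIT_NONE;
};

void start_wait(Trx &trx, lock_t *lock) {
  std::scoped_lock guard{trx.mutex};
  trx.wait_lock.store(lock);
  trx.wait_lock_type = lock->type;  // set together with wait_lock
}

void grant_wait_lock(Trx &trx) {
  std::scoped_lock guard{trx.mutex};
  trx.wait_lock.store(nullptr);
  // wait_lock_type is deliberately left untouched, so the thread
  // about to suspend can still report row vs. table as the reason.
}

std::uint32_t wait_reason(Trx &trx) {
  std::scoped_lock guard{trx.mutex};
  return trx.wait_lock_type;  // last known reason, even if granted
}
```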
std::chrono::system_clock::time_point trx_lock_t::wait_started
Lock wait started at this time.
Writes under shared lock_sys latch combined with trx->mutex. Reads require either trx->mutex or exclusive lock_sys latch.
que_thr_t* trx_lock_t::wait_thr
query thread belonging to this trx that is in QUE_THR_LOCK_WAIT state.
For threads suspended in a lock wait, this is protected by lock_sys latch for the wait_lock's shard. Otherwise, this may only be modified by the thread that is serving the running transaction.
bool trx_lock_t::was_chosen_as_deadlock_victim
when the transaction decides to wait for a lock, it sets this to false; if another transaction chooses this transaction as a victim in deadlock resolution, it sets this to true.
Protected by trx->mutex.