MySQL 8.0.39
Source Code Documentation
locksys::Latches Class Reference

The class which handles the logic of latching of lock_sys queues themselves. More...

#include <lock0latches.h>

Classes

class  Page_shards
 
class  Table_shards
 
class  Unique_sharded_rw_lock
 A helper wrapper around Shared_rw_lock which simplifies: More...
 

Public Member Functions

 ~Latches ()=default
 For some reason clang 6.0.0 and 7.0.0 (but not 8.0.0) fail at linking stage if we completely omit the destructor declaration, or use. More...
 
bool owns_exclusive_global_latch () const
 Tests if lock_sys latch is exclusively owned by the current thread. More...
 
bool owns_shared_global_latch () const
 Tests if lock_sys latch is owned in shared mode by the current thread. More...
 
bool owns_page_shard (const page_id_t &page_id) const
 Tests if given page shard can be safely accessed by the current thread. More...
 
bool owns_table_shard (const dict_table_t &table) const
 Tests if given table shard can be safely accessed by the current thread. More...
 

Private Types

using Lock_mutex = ib_mutex_t
 
using Padded_mutex = ut::Cacheline_padded< Lock_mutex >
 

Private Attributes

char pad1 [ut::INNODB_CACHE_LINE_SIZE] = {}
 padding to prevent other memory update hotspots from residing on the same memory cache line More...
 
Unique_sharded_rw_lock global_latch
 
Page_shards page_shards
 
Table_shards table_shards
 

Static Private Attributes

static constexpr size_t SHARDS_COUNT = 512
 Number of page shards, and also number of table shards. More...
 

Friends

class Global_exclusive_latch_guard
 
class Global_exclusive_try_latch
 
class Global_shared_latch_guard
 
class Shard_naked_latch_guard
 
class Shard_naked_latches_guard
 
class Unsafe_global_latch_manipulator
 You should not use this functionality in new code. More...
 

Detailed Description

The class which handles the logic of latching of lock_sys queues themselves.

The lock requests for table locks and record locks are stored in queues, and to allow concurrent operations on these queues, we need a mechanism to latch these queues in a safe and quick fashion. In the past we had a single latch which protected access to all of them. Now, we use a more granular approach. In the extreme, one could imagine protecting each queue with a separate latch. To avoid having too many latch objects, and having to create and remove them on demand, we use a more conservative approach. The queues are grouped into a fixed number of shards, and each shard is protected by its own mutex.
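For illustration, here is a minimal standalone sketch of this grouping. The names Shard_pool and shard_for are invented for the sketch and are not InnoDB's Page_shards/Table_shards; only the idea of a fixed pool of mutexes indexed by a hash of the queue's identifier is taken from the description above.

#include <array>
#include <cstddef>
#include <cstdint>
#include <mutex>

// Hypothetical sketch: a fixed number of mutexes, each protecting the group
// of queues whose identifiers hash to that shard.
template <std::size_t N>
class Shard_pool {
  std::array<std::mutex, N> shards{};

 public:
  // Map an arbitrary identifier (e.g. a page number fold) to its shard.
  std::mutex &shard_for(std::uint64_t id) { return shards[id % N]; }
};

int main() {
  Shard_pool<512> pool;
  // Latch only the shard that covers this queue, not the whole lock system.
  std::lock_guard<std::mutex> guard{pool.shard_for(42)};
}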

However, there are several rare events in which we need to "stop the world" - latch all queues, to prevent any activity inside lock-sys. One way to accomplish this would be to simply latch all the shards one by one, but it turns out to be way too slow in debug runs, where such "stop the world" events are very frequent due to lock_sys validation.

To allow for efficient latching of everything, we've introduced a global_latch, which is a read-write latch. Most of the time, we operate on one or two shards, in which case it is sufficient to s-latch the global_latch and then latch the shard's mutex. For the "stop the world" operations, we x-latch the global_latch, which prevents any other thread from latching any shard.

However, it turned out that on the ARM architecture the default implementation of the read-write latch (rw_lock_t) is too slow, because increments and decrements of the number of s-latchers are implemented as a read-update-try-to-write loop, which means multiple threads try to modify the same cache line, disrupting each other. Therefore, we use a sharded version of the read-write latch (Sharded_rw_lock), which internally uses multiple instances of rw_lock_t, spreading the load over several cache lines. Note that this sharding is a technical internal detail of the global_latch, which for all other purposes can be treated as a single entity.
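The following is a simplified, hypothetical sketch of the sharded read-write latch idea described above (it is not the actual Sharded_rw_lock): each reader registers in one shard chosen from its thread id, so concurrent s-latchers touch different cache lines, while a writer has to x-latch every shard.

#include <array>
#include <cstddef>
#include <functional>
#include <shared_mutex>
#include <thread>

class Sharded_rw_lock_sketch {
  static constexpr std::size_t N = 8;  // shard count picked for illustration
  struct alignas(64) Shard {           // pad each shard to its own cache line
    std::shared_mutex rw;
  };
  std::array<Shard, N> shards;

  std::size_t my_shard() const {
    return std::hash<std::thread::id>{}(std::this_thread::get_id()) % N;
  }

 public:
  // Readers touch a single shard; the returned index is needed for unlock.
  std::size_t s_lock() {
    const std::size_t i = my_shard();
    shards[i].rw.lock_shared();
    return i;
  }
  void s_unlock(std::size_t i) { shards[i].rw.unlock_shared(); }

  // A writer must acquire every shard, which excludes all readers.
  void x_lock() {
    for (auto &s : shards) s.rw.lock();
  }
  void x_unlock() {
    for (auto &s : shards) s.rw.unlock();
  }
};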

This is how it conceptually looks:

[ global latch ]
|
v
[table shard 1] ... [table shard 512] [page shard 1] ... [page shard 512]

So, for example, accessing two queues for two records involves the following steps:

  1. s-latch the global_latch
  2. identify the 2 pages to which the records belong
  3. identify the lock_sys 2 hash cells which contain the queues for given pages
  4. identify the 2 shard ids which contain these two cells
  5. latch mutexes for the two shards in the order of their addresses

All of the steps above (except step 2, as we usually know the page already) are accomplished with the help of a single line:

locksys::Shard_latches_guard guard{*block_a, *block_b};
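Conceptually, such a guard has to perform the remaining steps itself. The sketch below is a hypothetical, simplified expansion (the real Shard_latches_guard differs in detail): it s-latches a global read-write latch and then latches the two shard mutexes in the order of their addresses.

#include <mutex>
#include <shared_mutex>

// Sketch only: stand-ins for the global_latch and two shard mutexes.
struct Two_shards_sketch {
  std::shared_mutex &global;
  std::mutex &shard_a, &shard_b;

  Two_shards_sketch(std::shared_mutex &g, std::mutex &a, std::mutex &b)
      : global(g), shard_a(a), shard_b(b) {
    global.lock_shared();              // step 1: s-latch the global latch
    if (&shard_a == &shard_b) {        // both records fall into the same shard
      shard_a.lock();
    } else if (&shard_a < &shard_b) {  // step 5: latch in address order
      shard_a.lock();
      shard_b.lock();
    } else {
      shard_b.lock();
      shard_a.lock();
    }
  }
  ~Two_shards_sketch() {
    if (&shard_a != &shard_b) shard_b.unlock();
    shard_a.unlock();
    global.unlock_shared();
  }
};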

And to "stop the world" one can simply x-latch the global latch by using:

locksys::Global_exclusive_latch_guard guard{};

This class does not expose many public functions, as the intention is rather to use the friend guard classes, such as the Shard_latches_guard demonstrated above.

Member Typedef Documentation

◆ Lock_mutex

using locksys::Latches::Lock_mutex = ib_mutex_t
private

◆ Padded_mutex

using locksys::Latches::Padded_mutex = ut::Cacheline_padded<Lock_mutex>
private

Constructor & Destructor Documentation

◆ ~Latches()

locksys::Latches::~Latches ( )
default

For some reason clang 6.0.0 and 7.0.0 (but not 8.0.0) fail at the linking stage if we completely omit the destructor declaration, or use

~Latches() = default;

This might be somehow connected to one of these: https://bugs.llvm.org/show_bug.cgi?id=28280 https://github.com/android/ndk/issues/143 https://reviews.llvm.org/D45898 So, this declaration is just to make clang 6.0.0 and 7.0.0 happy.

Member Function Documentation

◆ owns_exclusive_global_latch()

bool locksys::Latches::owns_exclusive_global_latch ( ) const
inline

Tests if lock_sys latch is exclusively owned by the current thread.

Returns
true iff the current thread owns exclusive global lock_sys latch
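Such predicates are typically used in debug assertions. The following is a hedged sketch, assuming (as in the 8.0 sources) that the lock_sys_t instance exposes this object as lock_sys->latches and using the ut_ad debug macro; the function name is hypothetical.

/* Sketch only: guards a code path that needs the whole lock_sys to itself. */
static void lock_sys_walk_all_queues_sketch() {
  ut_ad(lock_sys->latches.owns_exclusive_global_latch());
  /* ... safe to iterate over every page and table shard here ... */
}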

◆ owns_page_shard()

bool locksys::Latches::owns_page_shard ( const page_id_t & page_id ) const
inline

Tests if given page shard can be safely accessed by the current thread.

Parameters
[in]  page_id  The space_id and page_no of the page
Returns
true iff the current thread owns exclusive global lock_sys latch or both a shared global lock_sys latch and mutex protecting the page shard
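A similar hedged sketch for a single record lock queue (the function name is hypothetical; lock_sys->latches and ut_ad as above).

/* Sketch only: reading the queue for one page requires owning its shard. */
static void lock_rec_queue_inspect_sketch(const page_id_t &page_id) {
  ut_ad(lock_sys->latches.owns_page_shard(page_id));
  /* ... the record lock queue for this page can be traversed safely ... */
}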

◆ owns_shared_global_latch()

bool locksys::Latches::owns_shared_global_latch ( ) const
inline

Tests if lock_sys latch is owned in shared mode by the current thread.

Returns
true iff the current thread owns shared global lock_sys latch

◆ owns_table_shard()

bool locksys::Latches::owns_table_shard ( const dict_table_t & table ) const
inline

Tests if given table shard can be safely accessed by the current thread.

Parameters
table  the table
Returns
true iff the current thread owns exclusive global lock_sys latch or both a shared global lock_sys latch and mutex protecting the table shard

Friends And Related Function Documentation

◆ Global_exclusive_latch_guard

friend class Global_exclusive_latch_guard
friend

◆ Global_exclusive_try_latch

friend class Global_exclusive_try_latch
friend

◆ Global_shared_latch_guard

friend class Global_shared_latch_guard
friend

◆ Shard_naked_latch_guard

friend class Shard_naked_latch_guard
friend

◆ Shard_naked_latches_guard

friend class Shard_naked_latches_guard
friend

◆ Unsafe_global_latch_manipulator

friend class Unsafe_global_latch_manipulator
friend

You should not use this functionality in new code.

Instead use Global_exclusive_latch_guard. This is intended only to be used within the lock0* module, thus this class is only accessible through lock0priv.h. It is only used by lock_rec_fetch_page() as a workaround.

Member Data Documentation

◆ global_latch

Unique_sharded_rw_lock locksys::Latches::global_latch
private

◆ pad1

char locksys::Latches::pad1[ut::INNODB_CACHE_LINE_SIZE] = {}
private

padding to prevent other memory update hotspots from residing on the same memory cache line

◆ page_shards

Page_shards locksys::Latches::page_shards
private

◆ SHARDS_COUNT

constexpr size_t locksys::Latches::SHARDS_COUNT = 512
static constexpr private

Number of page shards, and also number of table shards.

Must be a power of two
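A power-of-two count lets the shard index be computed with a cheap bit-mask instead of a modulo. The standalone sketch below illustrates the idea only; it is not the actual Page_shards/Table_shards code.

#include <cstddef>
#include <cstdint>

constexpr std::size_t SHARDS_COUNT = 512;
static_assert((SHARDS_COUNT & (SHARDS_COUNT - 1)) == 0,
              "SHARDS_COUNT must be a power of two");

/* Select the shard for a hash value with a single AND operation. */
constexpr std::size_t shard_of(std::uint64_t hash_value) {
  return static_cast<std::size_t>(hash_value & (SHARDS_COUNT - 1));
}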

◆ table_shards

Table_shards locksys::Latches::table_shards
private

The documentation for this class was generated from the following file:

lock0latches.h