MySQL 8.3.0
Source Code Documentation
lock0latches.h
/*****************************************************************************

Copyright (c) 2020, 2023, Oracle and/or its affiliates.

This program is free software; you can redistribute it and/or modify it under
the terms of the GNU General Public License, version 2.0, as published by the
Free Software Foundation.

This program is also distributed with certain software (including but not
limited to OpenSSL) that is licensed under separate terms, as designated in a
particular file or component or in included license documentation. The authors
of MySQL hereby grant you an additional permission to link the program and
your derivative works with the separately licensed software that they have
included with MySQL.

This program is distributed in the hope that it will be useful, but WITHOUT
ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
FOR A PARTICULAR PURPOSE. See the GNU General Public License, version 2.0,
for more details.

You should have received a copy of the GNU General Public License along with
this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA

*****************************************************************************/
#ifndef lock0latches_h
#define lock0latches_h

#include "dict0types.h"
#include "sync0sharded_rw.h"
#include "ut0cpu_cache.h"
#include "ut0mutex.h"

/* Forward declarations */
struct dict_table_t;
class page_id_t;

namespace locksys {
/**
The class which handles the logic of latching of lock_sys queues themselves.
The lock requests for table locks and record locks are stored in queues, and to
allow concurrent operations on these queues, we need a mechanism to latch these
queues in a safe and quick fashion.
In the past we had a single latch which protected access to all of them.
Now, we use a more granular approach.
In the extreme, one could imagine protecting each queue with a separate latch.
To avoid having too many latch objects, and having to create and remove them on
demand, we use a more conservative approach.
The queues are grouped into a fixed number of shards, and each shard is
protected by its own mutex.

However, there are several rare events in which we need to "stop the world" -
latch all queues, to prevent any activity inside lock-sys.
One way to accomplish this would be to simply latch all the shards one by one,
but it turns out to be way too slow in debug runs, where such "stop the world"
events are very frequent due to lock_sys validation.

To allow for efficient latching of everything, we've introduced a global_latch,
which is a read-write latch.
Most of the time, we operate on one or two shards, in which case it is
sufficient to s-latch the global_latch and then latch the shard's mutex.
For the "stop the world" operations, we x-latch the global_latch, which prevents
any other thread from latching any shard.

However, it turned out that on the ARM architecture the default implementation
of the read-write latch (rw_lock_t) is too slow, because incrementing and
decrementing the number of s-latchers is implemented as a
read-update-try-to-write loop, which means multiple threads try to modify the
same cache line, disrupting each other.
Therefore, we use a sharded version of the read-write latch (Sharded_rw_lock),
which internally uses multiple instances of rw_lock_t, spreading the load over
several cache lines. Note that this sharding is a technical internal detail of
the global_latch, which for all other purposes can be treated as a single
entity.

This is how it looks conceptually:
```
 [ global latch ]
 |
 v
 [table shard 1] ... [table shard 512] [page shard 1] ... [page shard 512]

```

So, for example, accessing two queues for two records involves the following
steps:
1. s-latch the global_latch
2. identify the 2 pages to which the records belong
3. identify the 2 lock_sys hash cells which contain the queues for the given
   pages
4. identify the 2 shard ids which contain these two cells
5. latch the mutexes of the two shards in the order of their addresses

All of the steps above (except 2, as we usually know the page already) are
accomplished with the help of a single line:

 locksys::Shard_latches_guard guard{*block_a, *block_b};

And to "stop the world" one can simply x-latch the global latch by using:

 locksys::Global_exclusive_latch_guard guard{};

This class does not expose too many public functions, as the intention is
rather to use the friend guard classes, like the Shard_latches_guard
demonstrated above. A slightly fuller sketch of this usage follows below.
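
For illustration only, here is how a hypothetical function that needs to
operate consistently on the lock queues of two records could be structured.
The function names and bodies are made up; the guard usage simply mirrors the
examples above (the exact guard constructors are declared in lock0guards.h):
```
 void move_locks_between_two_queues(const buf_block_t *block_a,
                                    const buf_block_t *block_b) {
   // s-latches the global_latch and latches both shard mutexes in a
   // deadlock-free order; everything is released at the end of the scope.
   locksys::Shard_latches_guard guard{*block_a, *block_b};
   // ... operate on the two record lock queues here ...
 }

 void validate_whole_lock_sys() {
   // x-latches the global_latch, so no other thread can latch any shard.
   locksys::Global_exclusive_latch_guard guard{};
   // ... inspect all queues while the world is stopped ...
 }
```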
*/
class Latches {
 private:
  using Lock_mutex = ib_mutex_t;

  /** A helper wrapper around Sharded_rw_lock which simplifies:
  - lifecycle, by providing a constructor and destructor, and
  - s-latching and s-unlatching, by keeping track of the shard id used for
    spreading the contention.
  There must be at most one instance of this class (the one in the lock_sys), as
  it uses thread_local-s to remember which shard of the sharded rw lock was used
  by this thread to perform s-latching (so hypothetical other instances would
  share this field, overwriting it and leading to errors). */
  class Unique_sharded_rw_lock {
    /** The actual rw_lock implementation doing the heavy lifting */
    Sharded_rw_lock rw_lock;

    /** The value used for m_shard_id to indicate that the current thread did
    not s-latch any of the rw_lock's shards */
    static constexpr size_t NOT_IN_USE = std::numeric_limits<size_t>::max();

    /** The id of the rw_lock's shard which this thread has s-latched, or
    NOT_IN_USE if it has not s-latched any */
    static thread_local size_t m_shard_id;

   public:
    Unique_sharded_rw_lock();
    ~Unique_sharded_rw_lock();
    bool try_x_lock(ut::Location location) {
      return rw_lock.try_x_lock(location);
    }
    /** Checks if there is a thread requesting an x-latch waiting for our
    thread to release its s-latch.
    Must be called while holding an s-latch.
    @return true iff there is an x-latcher blocked by our s-latch. */
    bool is_x_blocked_by_our_s() {
      ut_ad(m_shard_id != NOT_IN_USE);
      return rw_lock.is_x_blocked_by_s(m_shard_id);
    }
    void x_lock(ut::Location location) { rw_lock.x_lock(location); }
    void x_unlock() { rw_lock.x_unlock(); }
    void s_lock(ut::Location location) {
      ut_ad(m_shard_id == NOT_IN_USE);
      m_shard_id = rw_lock.s_lock(location);
    }
    void s_unlock() {
      ut_ad(m_shard_id != NOT_IN_USE);
      rw_lock.s_unlock(m_shard_id);
      m_shard_id = NOT_IN_USE;
    }
#ifdef UNIV_DEBUG
    bool x_own() const { return rw_lock.x_own(); }
    bool s_own() const {
      return m_shard_id != NOT_IN_USE && rw_lock.s_own(m_shard_id);
    }
#endif
  };
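
  /* Illustrative only: callers are expected to use this wrapper in strictly
  scoped s-latch/s-unlatch pairs on the same thread, for example (hypothetical
  caller):

    global_latch.s_lock(UT_LOCATION_HERE);  // remembers which rw_lock shard
                                            // was used, in m_shard_id
    ... operate on one or two lock_sys shards ...
    global_latch.s_unlock();                // releases that same shard

  which is exactly the pattern the Global_shared_latch_guard RAII class
  encapsulates. */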

  using Padded_mutex = ut::Cacheline_padded<Lock_mutex>;

  /** Number of page shards, and also the number of table shards.
  Must be a power of two */
  static constexpr size_t SHARDS_COUNT = 512;

  /*
  Functions related to sharding by page (containing the records to lock).

  This must be done in such a way that two pages which share a single lock
  queue fall into the same shard. We accomplish this by reusing the hash
  function used to determine the lock queue, and then grouping multiple queues
  into a single shard, as sketched below.
  */
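  /* A minimal sketch of that mapping, for illustration only (the real
  get_shard() lives in lock0latches.cc and reuses the lock_sys hash function,
  abstracted here as a hypothetical cell_index_of()):

    size_t page_shard = cell_index_of(page_id) % SHARDS_COUNT;

  Two pages whose locks land in the same hash cell therefore always map to the
  same shard, and thus to the same mutex. */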
  class Page_shards {
    /** Each shard is protected by a separate mutex. Mutexes are padded to
    avoid false sharing issues with the cache. */
    Padded_mutex mutexes[SHARDS_COUNT];
    /**
    Identifies the page shard which contains record locks for records from the
    given page.
    @param[in]  page_id  The space_id and page_no of the page
    @return Integer in the range [0..lock_sys_t::SHARDS_COUNT)
    */
    static size_t get_shard(const page_id_t &page_id);

   public:
    Page_shards();
    ~Page_shards();

    /**
    Returns the mutex which (together with the global latch) protects the page
    shard which contains record locks for records from the given page.
    @param[in]  page_id  The space_id and page_no of the page
    @return The mutex responsible for the shard containing the page
    */
    const Lock_mutex &get_mutex(const page_id_t &page_id) const;

    /**
    Returns the mutex which (together with the global latch) protects the page
    shard which contains record locks for records from the given page.
    @param[in]  page_id  The space_id and page_no of the page
    @return The mutex responsible for the shard containing the page
    */
    Lock_mutex &get_mutex(const page_id_t &page_id);
  };

  /*
  Functions related to sharding by table.

  We identify tables by their id. Each table has its own lock queue, so we
  simply group several such queues into a single shard, as sketched below.
  */
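  /* For illustration only (the real get_shard() lives in lock0latches.cc),
  the mapping is conceptually as simple as:

    size_t table_shard = table_id % SHARDS_COUNT;

  so the lock queue of a given table is always protected by the same mutex,
  and queues of several tables share one shard. */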
  class Table_shards {
    /** Each shard is protected by a separate mutex. Mutexes are padded to
    avoid false sharing issues with the cache. */
    Padded_mutex mutexes[SHARDS_COUNT];
    /**
    Identifies the table shard which contains locks for the given table.
    @param[in]  table_id  The id of the table
    @return Integer in the range [0..lock_sys_t::SHARDS_COUNT)
    */
    static size_t get_shard(const table_id_t table_id);

   public:
    Table_shards();
    ~Table_shards();

    /** Returns the mutex which (together with the global latch) protects the
    table shard which contains table locks for the given table.
    @param[in]  table_id  The id of the table
    @return The mutex responsible for the shard containing the table
    */
    Lock_mutex &get_mutex(const table_id_t table_id);

    /** Returns the mutex which (together with the global latch) protects the
    table shard which contains table locks for the given table.
    @param[in]  table_id  The id of the table
    @return The mutex responsible for the shard containing the table
    */
    const Lock_mutex &get_mutex(const table_id_t table_id) const;

    /** Returns the mutex which (together with the global latch) protects the
    table shard which contains table locks for the given table.
    @param[in]  table  The table
    @return The mutex responsible for the shard containing the table
    */
    const Lock_mutex &get_mutex(const dict_table_t &table) const;
  };

  /** Padding to prevent other memory update hotspots from residing on the
  same memory cache line */
  char pad1[ut::INNODB_CACHE_LINE_SIZE];

  Unique_sharded_rw_lock global_latch;

  Page_shards page_shards;

  Table_shards table_shards;

 public:
  /* You should use the following RAII guards to modify the state of Latches. */
  friend class Global_exclusive_latch_guard;
  friend class Global_exclusive_try_latch;
  friend class Global_shared_latch_guard;
  friend class Shard_naked_latch_guard;
  friend class Shard_naked_latches_guard;

  /** You should not use this functionality in new code.
  Instead use Global_exclusive_latch_guard.
  This is intended only to be used within the lock0* module, thus this class is
  only accessible through lock0priv.h.
  It is only used by lock_rec_fetch_page() as a workaround. */
  friend class Unsafe_global_latch_manipulator;

  /** For some reason clang 6.0.0 and 7.0.0 (but not 8.0.0) fail at the linking
  stage if we completely omit the destructor declaration, or use

    ~Latches() = default;

  This might be somehow connected to one of these:
  https://bugs.llvm.org/show_bug.cgi?id=28280
  https://github.com/android/ndk/issues/143
  https://reviews.llvm.org/D45898
  So, this declaration is just to make clang 6.0.0 and 7.0.0 happy.
  */
#if defined(__clang__) && (__clang_major__ < 8)
  ~Latches() {}  // NOLINT(modernize-use-equals-default)
#else
  ~Latches() = default;
#endif

#ifdef UNIV_DEBUG
  /**
  Tests if the lock_sys latch is exclusively owned by the current thread.
  @return true iff the current thread owns the exclusive global lock_sys latch
  */
  bool owns_exclusive_global_latch() const { return global_latch.x_own(); }

  /**
  Tests if the lock_sys latch is owned in shared mode by the current thread.
  @return true iff the current thread owns a shared global lock_sys latch
  */
  bool owns_shared_global_latch() const { return global_latch.s_own(); }

  /**
  Tests if the given page shard can be safely accessed by the current thread.
  @param[in]  page_id  The space_id and page_no of the page
  @return true iff the current thread owns the exclusive global lock_sys latch,
  or both a shared global lock_sys latch and the mutex protecting the page shard
  */
  bool owns_page_shard(const page_id_t &page_id) const {
    return owns_exclusive_global_latch() ||
           (page_shards.get_mutex(page_id).is_owned() &&
            owns_shared_global_latch());
  }

  /**
  Tests if the given table shard can be safely accessed by the current thread.
  @param  table  the table
  @return true iff the current thread owns the exclusive global lock_sys latch,
  or both a shared global lock_sys latch and the mutex protecting the table
  shard
  */
  bool owns_table_shard(const dict_table_t &table) const {
    return owns_exclusive_global_latch() ||
           (table_shards.get_mutex(table).is_owned() &&
            owns_shared_global_latch());
  }
#endif /* UNIV_DEBUG */
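
  /* Illustrative only: lock-sys code typically wraps these checks in debug
  assertions before touching a queue, for example (hypothetical call site):

    ut_ad(lock_sys->latches.owns_page_shard(block->get_page_id()));

  The exact accessor path is an assumption; only the owns_*() members above
  are part of this header. */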
};
}  // namespace locksys

#endif /* lock0latches_h */