Lock-free ref counter.
#include <ut0lock_free_hash.h>

size_t n_cnt_index() const
    Derive an appropriate index in m_cnt[] for the current thread.
Lock-free ref counter.
It uses a few counter variables internally to improve performance on machines with lots of CPUs.
◆ ut_lock_free_cnt_t()
inline ut_lock_free_cnt_t::ut_lock_free_cnt_t()
◆ await_release_of_old_references()
inline void ut_lock_free_cnt_t::await_release_of_old_references() const
Wait until all previously existing references get released.
This function assumes the caller has ensured that no new references will appear, or rather no new long-lived references: there can still be threads that call reference(), realize the object should no longer be referenced, and immediately release it.
◆ n_cnt_index()
inline size_t ut_lock_free_cnt_t::n_cnt_index() const  [private]
Derive an appropriate index in m_cnt[] for the current thread.
- Returns
- index in m_cnt[] for this thread to use
◆ reference()
inline handle_t ut_lock_free_cnt_t::reference()
◆ m_cnt
The shards of the counter.
We picked a number that is larger than, or close to, the number of CPUs on a typical system, yet small enough that await_release_of_old_references() finishes in reasonable time and the total size (256 * 64 B = 16 KiB) is not too large. We pad the atomics to avoid false sharing. In particular, we hope that on platforms with HAVE_OS_GETCPU the same CPU will always fetch the same counter and thus will keep it in its local cache. This should also help on NUMA architectures by avoiding the cost of synchronizing caches between CPUs.
The documentation for this class was generated from the following file: ut0lock_free_hash.h