/* SPDX-License-Identifier: GPL-2.0 */
 * seqcount_t / seqlock_t - a reader-writer consistency mechanism with
 * lockless readers (read-only retry loops), and no writer starvation.
 * - Based on x86_64 vsyscall gettimeofday: Keith Owens, Andrea Arcangeli
 * - Sequence counters with associated locks, (C) 2020 Linutronix GmbH
#include <linux/kcsan-checks.h>
 * read begin/retry/end. For readers, typically there is a call to
 * As a consequence, we take the following best-effort approach for raw usage
 * via seqcount_t under KCSAN: upon beginning a seq-reader critical section,
	lockdep_init_map(&s->dep_map, name, key, 0);
	s->sequence = 0;
 * seqcount_init() - runtime initializer for seqcount_t
 * @s: Pointer to the seqcount_t instance
	seqcount_acquire_read(&l->dep_map, 0, 0, _RET_IP_);
	seqcount_release(&l->dep_map, _RET_IP_);
 * SEQCNT_ZERO() - static initializer for seqcount_t
 * serialization at initialization time. This enables lockdep to validate
 * that the write side critical section is properly serialized.
 * preemption protection is enforced in the write side function.
 * Lockdep is never used in any of the raw write variants.
 * typedef seqcount_LOCKNAME_t - sequence counter with LOCKNAME associated
 * @lock: Pointer to the associated lock
 * LOCKNAME @lock. The lock is associated to the sequence counter in the
 * static initializer or init function. This enables lockdep to validate
 * that the write side critical section is properly serialized.
 * seqcount_LOCKNAME_init() - runtime initializer for seqcount_LOCKNAME_t
 * @s: Pointer to the seqcount_LOCKNAME_t instance
 * @lock: Pointer to the associated lock
	seqcount_init(&____s->seqcount);				\
	__SEQ_LOCK(____s->lock = (_lock));				\
 * SEQCOUNT_LOCKNAME() - Instantiate seqcount_LOCKNAME_t and helpers
 * seqprop_LOCKNAME_*() - Property accessors for seqcount_LOCKNAME_t
	return &s->seqcount;						\
	return &s->seqcount;						\
	unsigned seq = smp_load_acquire(&s->seqcount.sequence);	\
	__SEQ_LOCK(lockbase##_lock(s->lock));				\
	__SEQ_LOCK(lockbase##_unlock(s->lock));				\
	 * Re-read the sequence counter since the (possibly		\
	seq = smp_load_acquire(&s->seqcount.sequence);			\
	__SEQ_LOCK(lockdep_assert_held(s->lock));			\
	return smp_load_acquire(&s->sequence);
SEQCOUNT_LOCKNAME(rwlock, rwlock_t, __SEQ_RT, read)
 * SEQCNT_LOCKNAME_ZERO - static initializer for seqcount_LOCKNAME_t
 * @lock: Pointer to the associated LOCKNAME
 * __read_seqcount_begin() - begin a seqcount_t read section
 * @s: Pointer to seqcount_t or any of the seqcount_LOCKNAME_t variants
 * Return: count to be passed to read_seqcount_retry()
 * raw_read_seqcount_begin() - begin a seqcount_t read section w/o lockdep
 * @s: Pointer to seqcount_t or any of the seqcount_LOCKNAME_t variants
 * Return: count to be passed to read_seqcount_retry()
 * read_seqcount_begin() - begin a seqcount_t read critical section
 * @s: Pointer to seqcount_t or any of the seqcount_LOCKNAME_t variants
 * Return: count to be passed to read_seqcount_retry()
 * raw_read_seqcount() - read the raw seqcount_t counter value
 * @s: Pointer to seqcount_t or any of the seqcount_LOCKNAME_t variants
 * raw_read_seqcount opens a read critical section of the given
 * Return: count to be passed to read_seqcount_retry()
 * raw_seqcount_try_begin() - begin a seqcount_t read critical section
 * @s: Pointer to seqcount_t or any of the seqcount_LOCKNAME_t variants
 * @start: count to be passed to read_seqcount_retry()
 * Similar to raw_seqcount_begin(), except it enables eliding the critical
 * Useful when counter stabilization is more or less equivalent to taking
 * If true, start will be set to the (even) sequence count read.
 * Return: true when a read critical section is started.
 * raw_seqcount_begin() - begin a seqcount_t read critical section w/o
 * @s: Pointer to seqcount_t or any of the seqcount_LOCKNAME_t variants
 * raw_seqcount_begin opens a read critical section of the given
 * for the count to stabilize. If a writer is active when it begins, it
 * will fail the read_seqcount_retry() at the end of the read critical
 * Use this only in special kernel hot paths where the read section is
 * Return: count to be passed to read_seqcount_retry()
 * __read_seqcount_retry() - end a seqcount_t read section w/o barrier
 * @s: Pointer to seqcount_t or any of the seqcount_LOCKNAME_t variants
 * provided before actually loading any of the variables that are to be
 * Return: true if a read section retry is required, else false
	return unlikely(READ_ONCE(s->sequence) != start);
 * read_seqcount_retry() - end a seqcount_t read critical section
 * @s: Pointer to seqcount_t or any of the seqcount_LOCKNAME_t variants
 * read_seqcount_retry closes the read critical section of given
 * Return: true if a read section retry is required, else false
 * raw_write_seqcount_begin() - start a seqcount_t write section w/o lockdep
 * @s: Pointer to seqcount_t or any of the seqcount_LOCKNAME_t variants
	s->sequence++;
 * raw_write_seqcount_end() - end a seqcount_t write section w/o lockdep
 * @s: Pointer to seqcount_t or any of the seqcount_LOCKNAME_t variants
	s->sequence++;
 * write_seqcount_begin_nested() - start a seqcount_t write section with
 * @s: Pointer to seqcount_t or any of the seqcount_LOCKNAME_t variants
 * See Documentation/locking/lockdep-design.rst
	seqcount_acquire(&s->dep_map, subclass, 0, _RET_IP_);
 * write_seqcount_begin() - start a seqcount_t write side critical section
 * @s: Pointer to seqcount_t or any of the seqcount_LOCKNAME_t variants
 * Context: sequence counter write side sections must be serialized and
 * non-preemptible. Preemption will be automatically disabled if and
 * only if the seqcount write serialization lock is associated, and
 * write_seqcount_end() - end a seqcount_t write side critical section
 * @s: Pointer to seqcount_t or any of the seqcount_LOCKNAME_t variants
 * Context: Preemption will be automatically re-enabled if and only if
 * the seqcount write serialization lock is associated, and preemptible.
	seqcount_release(&s->dep_map, _RET_IP_);
 * raw_write_seqcount_barrier() - do a seqcount_t write barrier
 * @s: Pointer to seqcount_t or any of the seqcount_LOCKNAME_t variants
 * This can be used to provide an ordering guarantee instead of the usual
 * the two back-to-back wmb()s.
 * via WRITE_ONCE): a) to ensure the writes become visible to other threads
 * atomically, avoiding compiler optimizations; b) to document which writes are
 * meant to propagate to the reader critical section. This is necessary because
 * neither writes before nor after the barrier are enclosed in a seq-writer
 * void read(void)
 * void write(void)
	s->sequence++;
	s->sequence++;
 * write_seqcount_invalidate() - invalidate in-progress seqcount_t read
 * @s: Pointer to seqcount_t or any of the seqcount_LOCKNAME_t variants
 * After write_seqcount_invalidate, no seqcount_t read side operations
	s->sequence += 2;
 * A sequence counter variant where the counter even/odd value is used to
 * switch between two copies of protected data. This allows the read path,
 * typically NMIs, to safely interrupt the write side critical section.
 * As the write sections are fully preemptible, no special handling for
 * SEQCNT_LATCH_ZERO() - static initializer for seqcount_latch_t
 * seqcount_latch_init() - runtime initializer for seqcount_latch_t
 * @s: Pointer to the seqcount_latch_t instance
#define seqcount_latch_init(s) seqcount_init(&(s)->seqcount)
 * raw_read_seqcount_latch() - pick even/odd latch data copy
 * @s: Pointer to seqcount_latch_t
 * picking which data copy to read. The full counter must then be checked
	 * Due to the dependent load, a full smp_rmb() is not needed.
	return READ_ONCE(s->seqcount.sequence);
 * read_seqcount_latch() - pick even/odd latch data copy
 * @s: Pointer to seqcount_latch_t
 * picking which data copy to read. The full counter must then be checked
 * raw_read_seqcount_latch_retry() - end a seqcount_latch_t read section
 * @s: Pointer to seqcount_latch_t
 * Return: true if a read section retry is required, else false
	return unlikely(READ_ONCE(s->seqcount.sequence) != start);
 * read_seqcount_latch_retry() - end a seqcount_latch_t read section
 * @s: Pointer to seqcount_latch_t
 * Return: true if a read section retry is required, else false
 * raw_write_seqcount_latch() - redirect latch readers to even/odd copy
 * @s: Pointer to seqcount_latch_t
	s->seqcount.sequence++;
 * write_seqcount_latch_begin() - redirect latch readers to odd copy
 * @s: Pointer to seqcount_latch_t
 * queries during non-atomic modifications. If you can guarantee queries never
 * interrupt the modification -- e.g. the concurrency is strictly between CPUs
 * -- you most likely do not need this.
 * modifications to ensure queries observe either the old or the new state the
 * latch allows the same for non-atomic updates. The trade-off is doubling the
 * cost of storage; we have to maintain two copies of the entire data
 * there is always one copy in a stable state, ready to give us an answer.
 * Where a modification, which is assumed to be externally serialized, does the
 *	write_seqcount_latch_begin(&latch->seq);
 *	modify(latch->data[0], ...);
 *	write_seqcount_latch(&latch->seq);
 *	modify(latch->data[1], ...);
 *	write_seqcount_latch_end(&latch->seq);
 *		seq = read_seqcount_latch(&latch->seq);
 *		entry = data_query(latch->data[idx], ...);
 *	} while (read_seqcount_latch_retry(&latch->seq, seq));
 * So during the modification, queries are first redirected to data[1]. Then we
 * modify data[0]. When that is complete, we redirect queries back to data[0]
 * The non-requirement for atomic modifications does _NOT_ include
 * to miss an entire modification sequence, once it resumes it might
 * patterns to manage the lifetimes of the objects within.
 * write_seqcount_latch() - redirect latch readers to even copy
 * @s: Pointer to seqcount_latch_t
 * write_seqcount_latch_end() - end a seqcount_latch_t write section
 * @s: Pointer to seqcount_latch_t
 * latch-protected data have been updated.
 * seqlock_init() - dynamic initializer for seqlock_t
 * @sl: Pointer to the seqlock_t instance
	spin_lock_init(&(sl)->lock);					\
	seqcount_spinlock_init(&(sl)->seqcount, &(sl)->lock);		\
 * DEFINE_SEQLOCK(sl) - Define a statically allocated seqlock_t
 * read_seqbegin() - start a seqlock_t read side critical section
 * @sl: Pointer to seqlock_t
 * Return: count, to be passed to read_seqretry()
	return read_seqcount_begin(&sl->seqcount);
 * read_seqretry() - end a seqlock_t read side section
 * @sl: Pointer to seqlock_t
 * read_seqretry closes the read side critical section of given seqlock_t.
 * Return: true if a read section retry is required, else false
	return read_seqcount_retry(&sl->seqcount, start);
 * For all seqlock_t write side functions, use the internal
 * write_seqlock() - start a seqlock_t write side critical section
 * @sl: Pointer to seqlock_t
 * write_seqlock opens a write side critical section for the given
 * that sequential lock. All seqlock_t write side sections are thus
 * automatically serialized and non-preemptible.
 * Context: if the seqlock_t read section, or other write side critical
	spin_lock(&sl->lock);
	do_write_seqcount_begin(&sl->seqcount.seqcount);
 * write_sequnlock() - end a seqlock_t write side critical section
 * @sl: Pointer to seqlock_t
 * write_sequnlock closes the (serialized and non-preemptible) write side
	do_write_seqcount_end(&sl->seqcount.seqcount);
	spin_unlock(&sl->lock);
 * write_seqlock_bh() - start a softirqs-disabled seqlock_t write section
 * @sl: Pointer to seqlock_t
 * _bh variant of write_seqlock(). Use only if the read side section, or
 * other write side sections, can be invoked from softirq contexts.
	spin_lock_bh(&sl->lock);
	do_write_seqcount_begin(&sl->seqcount.seqcount);
 * write_sequnlock_bh() - end a softirqs-disabled seqlock_t write section
 * @sl: Pointer to seqlock_t
 * write_sequnlock_bh closes the serialized, non-preemptible, and
 * softirqs-disabled, seqlock_t write side critical section opened with
	do_write_seqcount_end(&sl->seqcount.seqcount);
	spin_unlock_bh(&sl->lock);
 * write_seqlock_irq() - start a non-interruptible seqlock_t write section
 * @sl: Pointer to seqlock_t
 * _irq variant of write_seqlock(). Use only if the read side section, or
 * other write sections, can be invoked from hardirq contexts.
	spin_lock_irq(&sl->lock);
	do_write_seqcount_begin(&sl->seqcount.seqcount);
 * write_sequnlock_irq() - end a non-interruptible seqlock_t write section
 * @sl: Pointer to seqlock_t
 * write_sequnlock_irq closes the serialized and non-interruptible
 * seqlock_t write side section opened with write_seqlock_irq().
	do_write_seqcount_end(&sl->seqcount.seqcount);
	spin_unlock_irq(&sl->lock);
	spin_lock_irqsave(&sl->lock, flags);
	do_write_seqcount_begin(&sl->seqcount.seqcount);
 * write_seqlock_irqsave() - start a non-interruptible seqlock_t write
 * @lock: Pointer to seqlock_t
 * @flags: Stack-allocated storage for saving caller's local interrupt
 *	state, to be passed to write_sequnlock_irqrestore().
 * _irqsave variant of write_seqlock(). Use it only if the read side
 * section, or other write sections, can be invoked from hardirq context.
 * write_sequnlock_irqrestore() - end non-interruptible seqlock_t write
 * @sl: Pointer to seqlock_t
 * write_sequnlock_irqrestore closes the serialized and non-interruptible
 * seqlock_t write section previously opened with write_seqlock_irqsave().
	do_write_seqcount_end(&sl->seqcount.seqcount);
	spin_unlock_irqrestore(&sl->lock, flags);
 * read_seqlock_excl() - begin a seqlock_t locking reader section
 * @sl: Pointer to seqlock_t
 * Context: if the seqlock_t write section, *or other read sections*, can
 * The opened read section must be closed with read_sequnlock_excl().
	spin_lock(&sl->lock);
 * read_sequnlock_excl() - end a seqlock_t locking reader critical section
 * @sl: Pointer to seqlock_t
	spin_unlock(&sl->lock);
 * read_seqlock_excl_bh() - start a seqlock_t locking reader section with
 * @sl: Pointer to seqlock_t
 * seqlock_t write side section, *or other read sections*, can be invoked
	spin_lock_bh(&sl->lock);
 * read_sequnlock_excl_bh() - stop a seqlock_t softirq-disabled locking
 * @sl: Pointer to seqlock_t
	spin_unlock_bh(&sl->lock);
 * read_seqlock_excl_irq() - start a non-interruptible seqlock_t locking
 * @sl: Pointer to seqlock_t
 * write side section, *or other read sections*, can be invoked from a
	spin_lock_irq(&sl->lock);
 * read_sequnlock_excl_irq() - end an interrupts-disabled seqlock_t
 * @sl: Pointer to seqlock_t
	spin_unlock_irq(&sl->lock);
	spin_lock_irqsave(&sl->lock, flags);
 * read_seqlock_excl_irqsave() - start a non-interruptible seqlock_t
 * @lock: Pointer to seqlock_t
 * @flags: Stack-allocated storage for saving caller's local interrupt
 *	state, to be passed to read_sequnlock_excl_irqrestore().
 * write side section, *or other read sections*, can be invoked from a
 * read_sequnlock_excl_irqrestore() - end non-interruptible seqlock_t
 * @sl: Pointer to seqlock_t
	spin_unlock_irqrestore(&sl->lock, flags);
 * read_seqbegin_or_lock() - begin a seqlock_t lockless or locking reader
 * @lock: Pointer to seqlock_t
 * as in read_seqlock_excl(). In the first call to this function, the
 * caller *must* initialize and pass an even value to @seq; this way, a
 * lockless read can be optimistically tried first.
 * read_seqbegin_or_lock is an API designed to optimistically try a normal
 * lockless seqlock_t read section first. If an odd counter is found, the
 * lockless read trial has failed, and the next read iteration transforms
 * This is typically used to avoid seqlock_t lockless readers starvation
 * (too many retry loops) in the case of a sharp spike in write side
 * Context: if the seqlock_t write section, *or other read sections*, can
 * value must be checked with need_seqretry(). If the read section needs to
 * need_seqretry() - validate seqlock_t "locking or lockless" read section
 * @lock: Pointer to seqlock_t
 * Return: true if a read section retry is required, false otherwise
 * done_seqretry() - end seqlock_t "locking or lockless" reader section
 * @lock: Pointer to seqlock_t
 * done_seqretry finishes the seqlock_t read side critical section started
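Putting the begin/retry/done convention together, a caller might be structured as follows. This is a non-runnable sketch of the documented calling convention; `struct foo`, `foo_snapshot`, and the field names are hypothetical, invented purely for illustration.

```c
/* Hypothetical reader using the "lockless first, then locking" protocol:
 * @seq must start even so the first pass is the optimistic lockless one.
 * If that pass fails, need_seqretry() leaves @seq odd, and the next
 * read_seqbegin_or_lock() call takes the lock instead. */
static struct foo_snapshot foo_read(struct foo *foo)
{
	struct foo_snapshot snap;
	int seq = 0;	/* even: first iteration is lockless */

	do {
		read_seqbegin_or_lock(&foo->seqlock, &seq);
		snap = foo->data;
	} while (need_seqretry(&foo->seqlock, seq));
	done_seqretry(&foo->seqlock, seq);

	return snap;
}
```

Note that the loop runs at most twice: once locklessly, and once more as a locking reader if the first pass raced with a writer, which is what bounds reader starvation.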
 * read_seqbegin_or_lock_irqsave() - begin a seqlock_t lockless reader, or
 * a non-interruptible locking reader
 * @lock: Pointer to seqlock_t
 * the seqlock_t write section, *or other read sections*, can be invoked
 * 1. The saved local interrupts state in case of a locking reader, to
 *    be passed to done_seqretry_irqrestore().
 * done_seqretry_irqrestore() - end a seqlock_t lockless reader, or a
 * non-interruptible locking reader section
 * @lock: Pointer to seqlock_t
 * This is the _irqrestore variant of done_seqretry(). The read section