Lines Matching +full:fetch +full:- +full:depth

1 /* SPDX-License-Identifier: GPL-2.0+ */
3 * Read-Copy Update mechanism for mutual exclusion
15 * For detailed explanation of Read-Copy Update mechanism see -
33 #define ULONG_CMP_GE(a, b) (ULONG_MAX / 2 >= (a) - (b))
34 #define ULONG_CMP_LT(a, b) (ULONG_MAX / 2 < (a) - (b))
36 #define USHORT_CMP_GE(a, b) (USHRT_MAX / 2 >= (unsigned short)((a) - (b)))
37 #define USHORT_CMP_LT(a, b) (USHRT_MAX / 2 < (unsigned short)((a) - (b)))
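
A quick sketch of why the unsigned subtraction makes these comparisons wraparound-safe (variable names here are illustrative, not from the header):

	unsigned long snap = ULONG_MAX - 1;	/* counter just before wrapping */
	unsigned long cur = snap + 3;		/* wraps around to 1 */

	/*
	 * snap - cur is (unsigned long)-3 == ULONG_MAX - 2, which exceeds
	 * ULONG_MAX / 2, so ULONG_CMP_LT(snap, cur) is true: snap is
	 * correctly ordered before cur despite the numeric wraparound.
	 */
	WARN_ON(!ULONG_CMP_LT(snap, cur));
	WARN_ON(!ULONG_CMP_GE(cur, snap));
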
53 * nesting depth, but makes sense only if CONFIG_PREEMPT_RCU -- in other
54 * types of kernel builds, the rcu_read_lock() nesting depth is unknowable.
56 #define rcu_preempt_depth() (current->rcu_read_lock_nesting)
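
A sketched debug use (an assumption, not from the header): assert that the caller is inside a reader, remembering that the depth is meaningful only in CONFIG_PREEMPT_RCU builds:

	if (IS_ENABLED(CONFIG_PREEMPT_RCU))
		WARN_ON_ONCE(rcu_preempt_depth() == 0);	/* expected inside a reader */
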
122 * RCU_NONIDLE - Indicate idle-loop code that needs RCU readers
125 * RCU read-side critical sections are forbidden in the inner idle loop,
126 * that is, between the rcu_idle_enter() and the rcu_idle_exit() -- RCU
127 * will happily ignore any such read-side critical sections. However,
135 * on the order of a million or so, even on 32-bit systems). It is
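
A minimal usage sketch of the pattern this macro supports; do_something_with_rcu() is a hypothetical helper:

	/* In inner-idle-loop code, where plain RCU readers are forbidden: */
	RCU_NONIDLE(do_something_with_rcu());
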
147 * Note a quasi-voluntary context switch for RCU-tasks' benefit.
155 if (!(preempt) && READ_ONCE((t)->rcu_tasks_holdout)) \
156 WRITE_ONCE((t)->rcu_tasks_holdout, false); \
169 if (!likely(READ_ONCE((t)->trc_reader_checked)) && \
170 !unlikely(READ_ONCE((t)->trc_reader_nesting))) { \
171 smp_store_release(&(t)->trc_reader_checked, true); \
203 * cond_resched_tasks_rcu_qs - Report potential quiescent states to RCU
206 * report potential quiescent states to RCU-tasks even if the cond_resched()
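
For example (a sketch; process_item() is a made-up helper), a long-running kernel loop that never otherwise sleeps can report quiescent states explicitly:

	for (i = 0; i < nr_items; i++) {
		process_item(i);		/* hypothetical per-item work */
		cond_resched_tasks_rcu_qs();	/* QS for RCU and RCU-tasks */
	}
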
306 * RCU_LOCKDEP_WARN - emit lockdep splat if specified condition is met
323 "Illegal context switch in RCU read-side critical section"); in rcu_preempt_sleep_check()
333 "Illegal context switch in RCU-bh read-side critical section"); \
335 "Illegal context switch in RCU-sched read-side critical section"); \
388 * RCU_INITIALIZER() - statically initialize an RCU-protected global variable
394 * rcu_assign_pointer() - assign to RCU-protected pointer
398 * Assigns the specified value to the specified RCU-protected
406 * will be dereferenced by RCU read-side code.
413 * impossible-to-diagnose memory corruption. So please be careful.
420 * macros, this execute-arguments-only-once property is important, so
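
A minimal publication sketch under the rules above (struct foo and global_foo are assumptions for illustration): fully initialize the object, then publish it, so readers never observe a half-built structure:

	struct foo {
		int a;
	};
	static struct foo __rcu *global_foo;	/* assumed RCU-protected global */

	int publish_foo(void)
	{
		struct foo *p = kmalloc(sizeof(*p), GFP_KERNEL);

		if (!p)
			return -ENOMEM;
		p->a = 1;				/* initialize first... */
		rcu_assign_pointer(global_foo, p);	/* ...then publish */
		return 0;
	}
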
436 * rcu_replace_pointer() - replace an RCU pointer, returning its old value
441 * Perform a replacement, where @rcu_ptr is an RCU-annotated
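
A sketched update-side replacement, reusing global_foo from the previous sketch and assuming a hypothetical foo_lock serializes updates:

	struct foo *old;

	lockdep_assert_held(&foo_lock);
	old = rcu_replace_pointer(global_foo, new_foo,
				  lockdep_is_held(&foo_lock));
	synchronize_rcu();	/* wait out readers still using old */
	kfree(old);
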
454 * rcu_access_pointer() - fetch RCU pointer with no dereferencing
457 * Return the value of the specified RCU-protected pointer, but omit the
458 * lockdep checks for being in an RCU read-side critical section. This is
460 * not dereferenced, for example, when testing an RCU-protected pointer
462 * where update-side locks prevent the value of the pointer from changing,
465 * It is also permissible to use rcu_access_pointer() when read-side
469 * when tearing down multi-linked structures after a grace period
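
For instance, a NULL test that never dereferences the pointer is legal outside any reader (a sketch, same assumed names):

	if (!rcu_access_pointer(global_foo))
		return -ENOENT;		/* value only tested, never dereferenced */
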
475 * rcu_dereference_check() - rcu_dereference with debug checking
483 * An implicit check for being in an RCU read-side critical section
488 * bar = rcu_dereference_check(foo->bar, lockdep_is_held(&foo->lock));
490 * could be used to indicate to lockdep that foo->bar may only be dereferenced
492 * the bar struct at foo->bar is held.
498 * bar = rcu_dereference_check(foo->bar, lockdep_is_held(&foo->lock) ||
499 * atomic_read(&foo->usage) == 0);
511 * rcu_dereference_bh_check() - rcu_dereference_bh with debug checking
515 * This is the RCU-bh counterpart to rcu_dereference_check().
521 * rcu_dereference_sched_check() - rcu_dereference_sched with debug checking
525 * This is the RCU-sched counterpart to rcu_dereference_check().
535 * The no-tracing version of rcu_dereference_raw() must not call
541 * rcu_dereference_protected() - fetch RCU pointer when updates prevented
545 * Return the value of the specified RCU-protected pointer, but omit
546 * the READ_ONCE(). This is useful in cases where update-side locks
552 * This function is only for update-side use. Using this function
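
An update-side sketch, again assuming the hypothetical foo_lock excludes all changes to global_foo:

	struct foo *p;

	spin_lock(&foo_lock);
	p = rcu_dereference_protected(global_foo,
				      lockdep_is_held(&foo_lock));
	if (p)
		p->a++;		/* safe: foo_lock excludes concurrent updates */
	spin_unlock(&foo_lock);
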
561 * rcu_dereference() - fetch RCU-protected pointer for dereferencing
569 * rcu_dereference_bh() - fetch an RCU-bh-protected pointer for dereferencing
577 * rcu_dereference_sched() - fetch RCU-sched-protected pointer for dereferencing
585 * rcu_pointer_handoff() - Hand off a pointer from RCU to other mechanism
597 * if (!atomic_inc_not_zero(&p->refcnt))
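
Filled out, the hand-off pattern looks roughly like this sketch (gp and refcnt are illustrative): take a reference inside the reader, then hand the pointer from RCU protection over to reference-count protection:

	rcu_read_lock();
	p = rcu_dereference(gp);
	if (p && !atomic_inc_not_zero(&p->refcnt))
		p = NULL;			/* object already going away */
	else if (p)
		p = rcu_pointer_handoff(p);	/* now protected by refcnt */
	rcu_read_unlock();
	/* If non-NULL, p may now be used beyond the critical section. */
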
607 * rcu_read_lock() - mark the beginning of an RCU read-side critical section
610 * are within RCU read-side critical sections, then the
613 * on one CPU while other CPUs are within RCU read-side critical
618 * with new RCU read-side critical sections. One way that this can happen
620 * read-side critical section, (2) CPU 1 invokes call_rcu() to register
621 * an RCU callback, (3) CPU 0 exits the RCU read-side critical section,
622 * (4) CPU 2 enters an RCU read-side critical section, (5) the RCU
623 * callback is invoked. This is legal, because the RCU read-side critical
629 * RCU read-side critical sections may be nested. Any deferred actions
630 * will be deferred until the outermost RCU read-side critical section
635 * read-side critical section that would block in a !PREEMPTION kernel.
638 * In non-preemptible RCU implementations (pure TREE_RCU and TINY_RCU),
639 * it is illegal to block while in an RCU read-side critical section.
641 * kernel builds, RCU read-side critical sections may be preempted,
643 * implementations in real-time (with -rt patchset) kernel builds, RCU
644 * read-side critical sections may be preempted and they may also block, but
659 * a bug -- this property is what provides RCU's performance benefits.
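
A minimal reader to pair with the publication sketch above (same assumed names; note that nothing in the critical section blocks):

	int read_foo_a(void)
	{
		struct foo *p;
		int ret = -1;

		rcu_read_lock();
		p = rcu_dereference(global_foo);
		if (p)
			ret = p->a;	/* no sleeping before rcu_read_unlock() */
		rcu_read_unlock();
		return ret;
	}
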
667 * rcu_read_unlock() - marks the end of an RCU read-side critical section.
673 * priority-inheritance spinlocks. This means that deadlock could result
679 * that preemption never happens within any RCU read-side critical
686 * at any time, a somewhat more future-proofed approach is to make sure
687 * that preemption never happens within any RCU read-side critical
690 * acquires irq-disabled locks.
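
One sketched way to honor that rule is to keep the whole critical section preemption-free, so the outermost rcu_read_unlock() can never need the scheduler's locks:

	preempt_disable();
	rcu_read_lock();
	/* ... reader whose rcu_read_unlock() may run with irqs disabled ... */
	rcu_read_unlock();	/* reader was never preempted, so never boosted */
	preempt_enable();
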
709 * rcu_read_lock_bh() - mark the beginning of an RCU-bh critical section
713 * an RCU read-side critical section.
730 * rcu_read_unlock_bh() - marks the end of a softirq-only RCU critical section
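
A sketch of the bottom-half flavor; rcu_dereference_bh() is the matching accessor:

	rcu_read_lock_bh();
	p = rcu_dereference_bh(global_foo);
	/* ... softirq-excluded read-side work ... */
	rcu_read_unlock_bh();
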
744 * rcu_read_lock_sched() - mark the beginning of an RCU-sched critical section
747 * Read-side critical sections can also be introduced by anything else
772 * rcu_read_unlock_sched() - marks the end of an RCU-classic critical section
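
And the sched flavor, which relies on preemption being disabled; rcu_dereference_sched() pairs with it:

	rcu_read_lock_sched();
	p = rcu_dereference_sched(global_foo);
	/* ... preemption-disabled read-side work ... */
	rcu_read_unlock_sched();
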
793 * RCU_INIT_POINTER() - initialize an RCU protected pointer
797 * Initialize an RCU-protected pointer in special cases where readers
807 * a. You have not made *any* reader-visible changes to
815 * result in impossible-to-diagnose memory corruption. As in the structures
817 * see pre-initialized values of the referenced data structure. So
820 * If you are creating an RCU-protected linked structure that is accessed
821 * by a single external-to-structure RCU-protected pointer, then you may
822 * use RCU_INIT_POINTER() to initialize the internal RCU-protected
824 * external-to-structure pointer *after* you have completely initialized
825 * the reader-accessible portions of the linked structure.
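
A sketch of the not-yet-visible case described above (the next field is invented for illustration): internal pointers may use RCU_INIT_POINTER(), while the final external publication keeps rcu_assign_pointer() and its barrier:

	struct foo *p = kzalloc(sizeof(*p), GFP_KERNEL);

	if (!p)
		return -ENOMEM;
	RCU_INIT_POINTER(p->next, NULL);	/* p not yet reader-visible */
	p->a = 1;
	rcu_assign_pointer(global_foo, p);	/* publication needs the barrier */
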
837 * RCU_POINTER_INITIALIZER() - statically initialize an RCU protected pointer
841 * GCC-style initialization for an RCU-protected pointer in a structure field.
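
Note that the macro expands to the entire designated initializer, ".p = ..." included, so a sketched static initialization looks like this (names invented):

	static struct foo default_foo;

	static struct bar {
		struct foo __rcu *foo;
	} bar_instance = {
		RCU_POINTER_INITIALIZER(foo, &default_foo),
	};
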
853 * Helper macro for kfree_rcu() to prevent argument-expansion eyestrain.
862 * kfree_rcu() - kfree an object after a grace period.
869 * high-latency rcu_barrier() function at module-unload time.
874 * Because the functions are not allowed in the low-order 4096 bytes of
876 * If the offset is larger than 4095 bytes, a compile-time error will
892 __kvfree_rcu(&((___p)->rhf), offsetof(typeof(*(ptr)), rhf)); \
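
A sketched use, with the rcu_head embedded near the start of the structure so its offset stays below 4096 bytes:

	struct foo {
		int a;
		struct rcu_head rcu;	/* offset must be < 4096 bytes */
	};

	static void remove_foo(struct foo *p)
	{
		/* p must already be unlinked from all RCU-visible structure. */
		kfree_rcu(p, rcu);	/* freed only after a grace period */
	}
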
896 * kvfree_rcu() - kvfree an object after a grace period.
899 * based on whether an object is head-less or not. If it
908 * When it comes to the head-less variant, only one argument
916 * Please note that the head-less way of freeing is permitted to
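
Both sketched forms, assuming p was kmalloc()'d or vmalloc()'d and is already unreachable by readers:

	/* Two-argument form: uses the embedded rcu_head and never sleeps. */
	kvfree_rcu(p, rcu);

	/* Head-less form: no rcu_head needed, but the call may sleep. */
	kvfree_rcu(p);
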
935 * Place this after a lock-acquisition primitive to guarantee that
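
This fragment describes smp_mb__after_unlock_lock(); a usage sketch that upgrades an UNLOCK+LOCK sequence to a full barrier:

	spin_unlock(&old_lock);
	spin_lock(&new_lock);
	smp_mb__after_unlock_lock();	/* UNLOCK+LOCK now a full barrier */
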
950 * rcu_head_init - Initialize rcu_head for rcu_head_after_call_rcu()
961 rhp->func = (rcu_callback_t)~0L;	/* in rcu_head_init() */
965 * rcu_head_after_call_rcu() - Has this rcu_head been passed to call_rcu()?
974 * in an RCU read-side critical section that includes a read-side fetch
980 rcu_callback_t func = READ_ONCE(rhp->func);	/* in rcu_head_after_call_rcu() */
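
A sketched debug pattern built on this pair: initialize the rcu_head once at allocation time, then later check (under the RCU protection the note above calls for) whether it has already been queued with a given callback:

	static void foo_reclaim(struct rcu_head *rhp)
	{
		kfree(container_of(rhp, struct foo, rcu));
	}

	/* At allocation time: */
	rcu_head_init(&p->rcu);

	/* Later, e.g. to catch double queuing (debug only): */
	WARN_ON_ONCE(rcu_head_after_call_rcu(&p->rcu, foo_reclaim));
	call_rcu(&p->rcu, foo_reclaim);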