Lines Matching refs:synchronize_rcu

101       14   synchronize_rcu();
105 Because the ``synchronize_rcu()`` on line 14 waits for all pre-existing
110 started after the ``synchronize_rcu()`` started, and must therefore also
124 | block ``synchronize_rcu()``!!! |
131 | Second, even when using ``synchronize_rcu()``, the other update-side |
165 24 synchronize_rcu();
169 28 synchronize_rcu();
174 the ``synchronize_rcu()`` in ``start_recovery()`` to guarantee that
181 | Why is the ``synchronize_rcu()`` on line 28 needed? |
191 critical section must not contain calls to ``synchronize_rcu()``.
194 ``synchronize_rcu()``.
409 13 synchronize_rcu();
487 before ``synchronize_rcu()`` starts is guaranteed to execute a full
489 section ends and the time that ``synchronize_rcu()`` returns. Without
494 ``synchronize_rcu()`` returns is guaranteed to execute a full memory
495 barrier between the time that ``synchronize_rcu()`` begins and the
500 #. If the task invoking ``synchronize_rcu()`` remains on a given CPU,
502 during the execution of ``synchronize_rcu()``. This guarantee ensures
505 #. If the task invoking ``synchronize_rcu()`` migrates among a group of
508 execution of ``synchronize_rcu()``. This guarantee also ensures that
511 thread executing the ``synchronize_rcu()`` migrates in the meantime.
519 | given instance of ``synchronize_rcu()``? |
524 | section starts before a given instance of ``synchronize_rcu()``, then |
526 | In other words, a given instance of ``synchronize_rcu()`` can avoid |
528 | prove that ``synchronize_rcu()`` started first. |
562 | #. CPU 0: ``synchronize_rcu()`` starts. |
566 | #. CPU 0: ``synchronize_rcu()`` returns. |
577 | #. CPU 0: ``synchronize_rcu()`` starts. |
581 | #. CPU 0: ``synchronize_rcu()`` returns. |
658 before invoking ``synchronize_rcu()``, however, this inconvenience can
700 ``synchronize_rcu()``. To see this, consider the following pair of
787 It might be tempting to assume that after ``synchronize_rcu()``
790 ``synchronize_rcu()`` starts, and ``synchronize_rcu()`` is under no
796 | Suppose that ``synchronize_rcu()`` did wait until *all* readers had |
802 | For no time at all. Even if ``synchronize_rcu()`` were to wait until |
804 | ``synchronize_rcu()`` completed. Therefore, the code following |
805 | ``synchronize_rcu()`` can *never* rely on there being no readers. |
833 12 synchronize_rcu();
876 12 synchronize_rcu();
883 19 synchronize_rcu();
934 12 synchronize_rcu();
949 27 synchronize_rcu();
1212 The ``synchronize_rcu()`` grace-period-wait primitive is optimized for
1216 ``synchronize_rcu()`` are required to use batching optimizations so that
1221 of ``synchronize_rcu()``, thus amortizing the per-invocation overhead
1226 In some cases, the multi-millisecond ``synchronize_rcu()`` latencies are
1244 be used in place of ``synchronize_rcu()`` as follows:
1283 neither ``synchronize_rcu()`` nor ``synchronize_rcu_expedited()`` would
1346 and ``kfree_rcu()``, but not ``synchronize_rcu()``. This was due to the
1348 places that needed something like ``synchronize_rcu()`` simply
1765 Perhaps surprisingly, ``synchronize_rcu()`` and
1768 disabled. This means that the call ``synchronize_rcu()`` (or friends)
1773 boot trick fails for ``synchronize_rcu()`` (as well as for
1776 which means that a subsequent ``synchronize_rcu()`` really does have to
1778 Unfortunately, ``synchronize_rcu()`` can't do this until all of its
1914 | ``synchronize_rcu()`` and ``rcu_barrier()``. If latency is a concern, |
1932 grace-period operations such as ``synchronize_rcu()`` and
2285 optimizations for ``synchronize_rcu()``, ``call_rcu()``,
2584 ``synchronize_rcu()`` would guarantee that execution reached the
2604 ``synchronize_rcu()``, and ``rcu_barrier()``, respectively. In
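
Most of the matches above point at the same update-side idiom: unpublish a pointer, wait for a grace period with ``synchronize_rcu()``, and only then reclaim the memory. A minimal sketch of that idiom follows; the ``struct foo``, ``gp``, ``gp_lock``, and ``remove_gp_synchronous()`` names are illustrative assumptions in the style of the quoted listings, not verbatim copies of them::

    #include <linux/rcupdate.h>
    #include <linux/slab.h>
    #include <linux/spinlock.h>

    struct foo {
            int a;
    };

    static struct foo __rcu *gp;
    static DEFINE_SPINLOCK(gp_lock);

    /* Unpublish the item referenced by gp, wait for pre-existing
     * readers, then free it.  Returns false if there was nothing
     * to remove.  (Hypothetical example, not from the matched lines.)
     */
    bool remove_gp_synchronous(void)
    {
            struct foo *p;

            spin_lock(&gp_lock);
            p = rcu_dereference_protected(gp, lockdep_is_held(&gp_lock));
            if (!p) {
                    spin_unlock(&gp_lock);
                    return false;
            }
            rcu_assign_pointer(gp, NULL); /* New readers can no longer find p. */
            spin_unlock(&gp_lock);
            synchronize_rcu();            /* Wait only for pre-existing readers. */
            kfree(p);                     /* Safe: no reader still holds a reference to p. */
            return true;
    }

As several of the matched lines stress, readers that begin after ``synchronize_rcu()`` has started may still be running when it returns, so the code after the grace-period wait may rely on those readers no longer being able to reach ``p``, but never on there being no readers at all.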