RCU and Unloadable Modules
RCU (read-copy update) is a synchronization mechanism that can be thought
of as a replacement for reader-writer locking (among other things), but with
very low-overhead readers that are immune to deadlock, priority inversion,
and unbounded latency. RCU read-side critical sections are delimited
by rcu_read_lock() and rcu_read_unlock(), which, in non-CONFIG_PREEMPT
kernels, generate absolutely no code.

This means that RCU updaters cannot block concurrent readers, and must
therefore leave old versions of the data structure in place until all
pre-existing readers have finished. These old versions are needed because
such readers might still hold references to them. Waiting for readers is
rather expensive, and RCU is thus best suited for read-mostly situations.
There is a synchronize_rcu() primitive that blocks until all
pre-existing readers have completed. An updater wishing to delete an
element p from a linked list might do the following, while holding an
appropriate update-side lock:

	list_del_rcu(p);
	synchronize_rcu();
	kfree(p);

But the above code cannot be used in IRQ context -- the call_rcu()
primitive must be used instead. This primitive takes a pointer to an
rcu_head struct placed within the RCU-protected data structure and
a pointer to a callback function that is invoked after a grace period
to free the structure:

	list_del_rcu(p);
	call_rcu(&p->rcu, p_callback);
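The callback passed to call_rcu() receives a pointer to the embedded
rcu_head, not to the enclosing structure, so it typically uses
container_of() to recover the full structure before freeing it. The
following is a minimal userspace sketch of that pattern; struct pstruct,
the fake_call_rcu_and_report() wrapper, and the last_freed_data
bookkeeping are invented for illustration, and a real kernel callback
would of course run only after a grace period:

```c
/* Userspace sketch, not kernel code: shows how an RCU callback recovers
 * the enclosing structure from its embedded rcu_head via container_of(). */
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

struct rcu_head {
	void (*func)(struct rcu_head *rp);
};

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct pstruct {
	int data;
	struct rcu_head rcu;	/* embedded, as in RCU-protected structs */
};

static int last_freed_data;	/* records what the callback freed */

/* The callback sees only the rcu_head pointer; container_of() walks
 * back to the start of the enclosing pstruct so it can be freed. */
static void p_callback(struct rcu_head *rp)
{
	struct pstruct *p = container_of(rp, struct pstruct, rcu);

	last_freed_data = p->data;
	free(p);
}

/* Stand-in for call_rcu(): the kernel would defer the callback until
 * after a grace period; here we simply invoke it immediately. */
static int fake_call_rcu_and_report(int value)
{
	struct pstruct *p = malloc(sizeof(*p));

	p->data = value;
	p_callback(&p->rcu);
	return last_freed_data;
}
```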
Unloading Modules That Use call_rcu()
-------------------------------------
But what if p_callback is defined in an unloadable module?

If we unload the module while some RCU callbacks are pending,
the CPUs executing these callbacks are going to be severely
disappointed when they are later invoked, as fancifully depicted at
http://lwn.net/images/ns/kernel/rcu-drop.jpg.
We could try placing a synchronize_rcu() in the module-exit code path,
but this is not sufficient: although synchronize_rcu() waits for a grace
period to elapse, it does not wait for the callbacks to complete.
One might be tempted to try several back-to-back synchronize_rcu()
calls, but this is still not guaranteed to work. If there is a very
heavy RCU-callback load, then some of the callbacks might be deferred
in order to allow other processing to proceed.
rcu_barrier()
-------------

We instead need the rcu_barrier() primitive. Rather than waiting for a
grace period to elapse, rcu_barrier() waits for all outstanding RCU
callbacks to complete.
Pseudo-code using rcu_barrier() is as follows:

   1. Prevent any new RCU callbacks from being posted.
   2. Execute rcu_barrier().
   3. Allow the module to be unloaded.
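The three steps above can be modeled with a toy, single-threaded
userspace sketch; the counters and function names here are invented, and
in a real module step 1 is module-specific teardown while step 2 is a
call to rcu_barrier():

```c
/* Toy model of the unload protocol: callbacks are counted rather than
 * queued, and "waiting" completes immediately because there is only
 * one thread. */
#include <assert.h>

static int unloading;	/* step 1: set once no new callbacks are allowed */
static int pending;	/* callbacks posted but not yet invoked */
static int completed;	/* callbacks that have run */

/* Stand-in for call_rcu(): refuses new callbacks once unload begins. */
static int post_callback(void)
{
	if (unloading)
		return -1;	/* module exit already in progress */
	pending++;
	return 0;
}

/* Stand-in for rcu_barrier(): runs every pending callback. */
static void barrier(void)
{
	completed += pending;
	pending = 0;
}

/* The unload sequence: 1) block new callbacks, 2) wait for the rest,
 * 3) report whether unloading is now safe (no callbacks outstanding). */
static int safe_to_unload(void)
{
	unloading = 1;
	barrier();
	return pending == 0;
}
```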
Note that the flavor of rcu_barrier() must match the flavor of
call_rcu() that posted the callbacks in question. If your
module uses multiple flavors of call_rcu(), then it must also use multiple
flavors of rcu_barrier() when unloading that module. For example, a
module using both call_rcu() and call_srcu() must invoke both
rcu_barrier() and srcu_barrier() (the latter on each srcu_struct
involved) before unloading.
The rcutorture module makes use of rcu_barrier() in its exit function,
rcu_torture_cleanup(), as follows (heavily excerpted):

  6	fullstop = 1;
	...
 53	rcu_barrier();
	...
 55	rcu_torture_stats_print(); /* -After- the stats thread is stopped! */
	...
 57	if (cur_ops->cleanup != NULL)
 58		cur_ops->cleanup();
Line 6 sets a global variable that prevents any RCU callbacks from
re-posting themselves. This will not be necessary in most cases, since
RCU callbacks rarely include calls to call_rcu(). However, the rcutorture
module is an exception to this rule, and therefore needs to set this
global variable.
Lines 7-50 stop all the kernel tasks associated with the rcutorture
module. Therefore, once execution reaches line 53, no more rcutorture
RCU callbacks will be posted. The rcu_barrier() on line 53 then waits
for any pre-existing callbacks to complete.
Then lines 55-62 print status and do operation-specific cleanup, and
then return, permitting the module-unload operation to be completed.
Your module might have additional complications. For example, if your
module invokes call_rcu() from timers, you will need to first cancel all
the timers, and only then invoke rcu_barrier() to wait for any
already-posted callbacks to complete.
Of course, if your module uses call_rcu(), you will need to invoke
rcu_barrier() before unloading. Similarly, if your module uses
call_srcu(), you will need to invoke srcu_barrier() before unloading,
and on the same srcu_struct structure. If your module uses call_rcu()
-and- call_srcu(), then you will need to invoke rcu_barrier() -and-
srcu_barrier().
Implementing rcu_barrier()
--------------------------
Dipankar Sarma's rcu_barrier() implementation makes use of the fact
that RCU callbacks are never reordered once queued on one of the per-CPU
queues. His implementation queues an RCU callback on each of the per-CPU
callback queues, and then waits until they have all started executing, at
which point all earlier RCU callbacks are guaranteed to have completed.
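That FIFO argument can be demonstrated with a toy, single-threaded
sketch. The two-queue setup and all names below are invented for
illustration; the real implementation uses per-CPU queues drained by the
grace-period machinery, an atomic counter, and a completion. Because
each queue is FIFO, once the barrier callback posted at the tail of
every queue has run, every callback posted earlier must already have run:

```c
/* Toy model of the rcu_barrier() counting trick: post one barrier
 * callback at the tail of each per-CPU queue; when the last barrier
 * callback runs, all earlier callbacks have completed. */
#include <assert.h>

#define NCPUS 2
#define QLEN  8

typedef void (*rcu_cb_t)(void);

static rcu_cb_t queue[NCPUS][QLEN];	/* per-CPU FIFO callback queues */
static int qtail[NCPUS];

static int ordinary_done;	/* ordinary callbacks that have run */
static int barrier_count;	/* barrier callbacks still pending */

static void ordinary_cb(void) { ordinary_done++; }
static void barrier_cb(void)  { barrier_count--; }

static void enqueue(int cpu, rcu_cb_t cb)
{
	queue[cpu][qtail[cpu]++] = cb;
}

/* Drain every queue in FIFO order, as grace periods eventually would. */
static void run_all_queues(void)
{
	for (int cpu = 0; cpu < NCPUS; cpu++)
		for (int i = 0; i < qtail[cpu]; i++)
			queue[cpu][i]();
}

/* Returns the number of ordinary callbacks that had completed once all
 * barrier callbacks ran, or -1 if some barrier callback never ran. */
static int fake_rcu_barrier_demo(void)
{
	enqueue(0, ordinary_cb);
	enqueue(1, ordinary_cb);
	enqueue(1, ordinary_cb);

	barrier_count = NCPUS;
	for (int cpu = 0; cpu < NCPUS; cpu++)
		enqueue(cpu, barrier_cb);	/* one per CPU, after the rest */

	run_all_queues();
	return barrier_count == 0 ? ordinary_done : -1;
}
```

FIFO ordering is what makes a single counter sufficient: no
per-callback tracking is needed, only one barrier callback per queue.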
 1 static void rcu_barrier_func(void *notused)
 2 {
 3	int cpu = smp_processor_id();
 4	struct rcu_data *rdp = &per_cpu(rcu_data, cpu);
 5	struct rcu_head *head;
 6
 7	head = &rdp->barrier;
 8	atomic_inc(&rcu_barrier_cpu_count);
 9	call_rcu(head, rcu_barrier_callback);
10 }
Lines 3 and 4 locate RCU's internal per-CPU rcu_data structure,
which contains the struct rcu_head needed for the later call to
call_rcu().
The current rcu_barrier() implementation is more complex, due to the need
to avoid disturbing idle CPUs (especially on battery-powered systems)
and the need to minimally disturb non-idle CPUs in real-time systems.
rcu_barrier() Summary
---------------------
The rcu_barrier() primitive waits for all pre-existing RCU callbacks
to complete. If you are using RCU from an unloadable module, you need
to use rcu_barrier() so that your module may be safely unloaded.
Answers to Quick Quizzes
------------------------
Quick Quiz #1: Is there any other situation where rcu_barrier()
might be required?

Answer: Interestingly enough, rcu_barrier() was not originally
implemented for module unloading. Nikita Danilov was using
RCU in a filesystem, which resulted in a similar situation at
filesystem-unmount time. Dipankar Sarma coded up rcu_barrier()
in response, so that Nikita could invoke it during the
filesystem-unmount process.
Much later, yours truly hit the RCU module-unload problem when
implementing rcutorture, and found that rcu_barrier() solves this
problem as well.
on_each_cpu() disables preemption across its call to smp_call_function(),
preventing the local CPU from context switching and
causing this latter to spin until the cross-CPU invocation of
rcu_barrier_func() has completed. This by itself would prevent
a grace period from completing on non-CONFIG_PREEMPT kernels,
since each CPU must undergo a context switch (or other quiescent
state) before the grace period can complete.
Currently, -rt implementations of RCU keep but a single global
queue for RCU callbacks, and thus do not suffer from this
problem. However, when the -rt RCU eventually does have per-CPU
callback queues, things will have to change.