======================================
NO_HZ: Reducing Scheduling-Clock Ticks
======================================


This document describes Kconfig options and boot parameters that can
reduce the number of scheduling-clock interrupts, thereby improving energy
efficiency and reducing OS jitter.  Reducing OS jitter is important for
some types of computationally intensive high-performance computing (HPC)
applications and for real-time applications.

There are three main ways of managing scheduling-clock interrupts
(also known as "scheduling-clock ticks" or simply "ticks"):

1.	Never omit scheduling-clock ticks (CONFIG_HZ_PERIODIC=y or
	CONFIG_NO_HZ=n for older kernels).  You normally will -not-
	want to choose this option.

2.	Omit scheduling-clock ticks on idle CPUs (CONFIG_NO_HZ_IDLE=y or
	CONFIG_NO_HZ=y for older kernels).  This is the most common
	approach, and should be the default.

3.	Omit scheduling-clock ticks on CPUs that are either idle or that
	have only one runnable task (CONFIG_NO_HZ_FULL=y).  Unless you
	are running realtime applications or certain types of HPC
	workloads, you will normally -not- want this option.

These three cases are described in the following three sections, followed
by a fourth section on RCU-specific considerations, a fifth section
discussing testing, and a sixth and final section listing known issues.


Never Omit Scheduling-Clock Ticks
=================================

Very old versions of Linux from the 1990s and the very early 2000s
are incapable of omitting scheduling-clock ticks.  It turns out that
there are some situations where this old-school approach is still the
right approach, for example, in heavy workloads with lots of tasks
that use short bursts of CPU, where there are very frequent idle
periods, but where these idle periods are also quite short (tens or
hundreds of microseconds).  For these types of workloads, scheduling-clock
interrupts will normally be delivered anyway because there
will frequently be multiple runnable tasks per CPU.  In these cases,
attempting to turn off the scheduling-clock interrupt will have no effect
other than increasing the overhead of switching to and from idle and
transitioning between user and kernel execution.

This mode of operation can be selected using CONFIG_HZ_PERIODIC=y (or
CONFIG_NO_HZ=n for older kernels).

However, if you are instead running a light workload with long idle
periods, failing to omit scheduling-clock interrupts will result in
excessive power consumption.  This is especially bad on battery-powered
devices, where it results in extremely short battery lifetimes.  If you
are running light workloads, you should therefore read the following
section.

In addition, if you are running either a real-time workload or an HPC
workload with short iterations, the scheduling-clock interrupts can
degrade your application's performance.  If this describes your workload,
you should read the following two sections.


Omit Scheduling-Clock Ticks For Idle CPUs
=========================================

If a CPU is idle, there is little point in sending it a scheduling-clock
interrupt.  After all, the primary purpose of a scheduling-clock interrupt
is to force a busy CPU to shift its attention among multiple duties,
and an idle CPU has no duties to shift its attention among.

The CONFIG_NO_HZ_IDLE=y Kconfig option causes the kernel to avoid sending
scheduling-clock interrupts to idle CPUs, which is critically important
both to battery-powered devices and to highly virtualized mainframes.
A battery-powered device running a CONFIG_HZ_PERIODIC=y kernel would
drain its battery very quickly, easily 2-3 times as fast as would the
same device running a CONFIG_NO_HZ_IDLE=y kernel.  A mainframe running
1,500 OS instances might find that half of its CPU time was consumed by
unnecessary scheduling-clock interrupts.  In these situations, there
is strong motivation to avoid sending scheduling-clock interrupts to
idle CPUs.  That said, dyntick-idle mode is not free:

1.	It increases the number of instructions executed on the path
	to and from the idle loop.

2.	On many architectures, dyntick-idle mode also increases the
	number of expensive clock-reprogramming operations.

Therefore, systems with aggressive real-time response constraints often
run CONFIG_HZ_PERIODIC=y kernels (or CONFIG_NO_HZ=n for older kernels)
in order to avoid degrading from-idle transition latencies.

An idle CPU that is not receiving scheduling-clock interrupts is said to
be "dyntick-idle", "in dyntick-idle mode", "in nohz mode", or "running
tickless".  The remainder of this document will use "dyntick-idle mode".

There is also a boot parameter "nohz=" that can be used to disable
dyntick-idle mode in CONFIG_NO_HZ_IDLE=y kernels by specifying "nohz=off".
By default, CONFIG_NO_HZ_IDLE=y kernels boot with "nohz=on", enabling
dyntick-idle mode.
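For example, the effective "nohz=" setting can be recovered from the
kernel command line with a few lines of code (an illustrative sketch, not
kernel code; on a running system the command line can be read from
/proc/cmdline, and last-option-wins handling is assumed here):

```python
def nohz_setting(cmdline):
    """Return the value of the last "nohz=" option on a kernel command
    line, or "on" (the CONFIG_NO_HZ_IDLE=y default) if none was given."""
    setting = "on"
    for option in cmdline.split():
        if option.startswith("nohz="):
            setting = option[len("nohz="):]
    return setting

# A hypothetical command line that disables dyntick-idle mode:
print(nohz_setting("root=/dev/sda1 ro nohz=off"))  # off
```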


Omit Scheduling-Clock Ticks For CPUs With Only One Runnable Task
================================================================

If a CPU has only one runnable task, there is little point in sending it
a scheduling-clock interrupt because there is no other task to switch to.
Note that omitting scheduling-clock ticks for CPUs with only one runnable
task implies also omitting them for idle CPUs.

The CONFIG_NO_HZ_FULL=y Kconfig option causes the kernel to avoid
sending scheduling-clock interrupts to CPUs with a single runnable task,
and such CPUs are said to be "adaptive-ticks CPUs".  This is important
for applications with aggressive real-time response constraints because
it allows them to improve their worst-case response times by the maximum
duration of a scheduling-clock interrupt.  It is also important for
computationally intensive short-iteration workloads:  If any CPU is
delayed during a given iteration, all the other CPUs will be forced to
wait idle while the delayed CPU finishes.  Thus, the delay is multiplied
by one less than the number of CPUs.  In these situations, there is
again strong motivation to avoid sending scheduling-clock interrupts.
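The cost of such a delay can be sketched with a small calculation (the
delay and CPU count below are hypothetical, chosen only for illustration):

```python
def wasted_cpu_time_us(delay_us, ncpus):
    """CPU time lost across one barrier-synchronized iteration when a
    single CPU is delayed: every other CPU idles for the full delay."""
    return delay_us * (ncpus - 1)

# A 100-microsecond delay on one CPU of a 64-CPU job idles the other
# 63 CPUs, wasting 6300 microseconds of CPU time in that iteration.
print(wasted_cpu_time_us(100, 64))  # 6300
```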

By default, no CPU will be an adaptive-ticks CPU.  The "nohz_full="
boot parameter specifies the adaptive-ticks CPUs.  For example,
"nohz_full=1,6-8" says that CPUs 1, 6, 7, and 8 are to be adaptive-ticks
CPUs.  Note that you are prohibited from marking all of the CPUs as
adaptive-tick CPUs:  At least one non-adaptive-tick CPU must remain
online to handle timekeeping tasks in order to ensure that system
calls like gettimeofday() return accurate values on adaptive-tick CPUs.
(This is not an issue for CONFIG_NO_HZ_IDLE=y because there are no running
user processes to observe slight drifts in clock rate.)  Therefore, the
boot CPU is prohibited from entering adaptive-ticks mode.  Specifying a
"nohz_full=" mask that includes the boot CPU will result in a boot-time
error message, and the boot CPU will be removed from the mask.  Note that
this means that your system must have at least two CPUs in order for
CONFIG_NO_HZ_FULL=y to do anything for you.
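The "nohz_full=" list format described above can be sketched in a few
lines (an illustrative parser, not the kernel's implementation; it assumes
the boot CPU is CPU 0, which is the common case):

```python
def parse_cpu_list(spec):
    """Expand a kernel-style CPU list such as "1,6-8" into a set of CPUs."""
    cpus = set()
    for part in spec.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            cpus.update(range(int(lo), int(hi) + 1))
        else:
            cpus.add(int(part))
    return cpus

def effective_nohz_full(spec, boot_cpu=0):
    """Mimic the boot-time fixup: the boot CPU is dropped from the mask."""
    return parse_cpu_list(spec) - {boot_cpu}

print(sorted(parse_cpu_list("1,6-8")))     # [1, 6, 7, 8]
print(sorted(effective_nohz_full("0-3")))  # [1, 2, 3]
```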

Finally, adaptive-ticks CPUs must have their RCU callbacks offloaded.
This is covered in the "RCU Implications" section below.

Normally, a CPU remains in adaptive-ticks mode as long as possible.
In particular, transitioning to kernel mode does not automatically change
the mode.  Instead, the CPU will exit adaptive-ticks mode only if needed,
for example, if that CPU enqueues an RCU callback.

Just as with dyntick-idle mode, the benefits of adaptive-tick mode do
not come for free:

1.	CONFIG_NO_HZ_FULL selects CONFIG_NO_HZ_COMMON, so you cannot run
	adaptive ticks without also running dyntick idle.  This dependency
	extends down into the implementation, so that all of the costs
	of CONFIG_NO_HZ_IDLE are also incurred by CONFIG_NO_HZ_FULL.

2.	The user/kernel transitions are slightly more expensive due
	to the need to inform kernel subsystems (such as RCU) about
	the change in mode.

3.	POSIX CPU timers prevent CPUs from entering adaptive-tick mode.
	Real-time applications needing to take actions based on CPU time
	consumption need to use other means of doing so.

4.	If there are more perf events pending than the hardware can
	accommodate, they are normally round-robined so as to collect
	all of them over time.  Adaptive-tick mode may prevent this
	round-robining from happening.  This will likely be fixed by
	preventing CPUs with large numbers of perf events pending from
	entering adaptive-tick mode.

5.	Scheduler statistics for adaptive-tick CPUs may be computed
	slightly differently than those for non-adaptive-tick CPUs.
	This might in turn perturb load-balancing of real-time tasks.

Although improvements are expected over time, adaptive ticks is quite
useful for many types of real-time and compute-intensive applications.
However, the drawbacks listed above mean that adaptive ticks should not
(yet) be enabled by default.


RCU Implications
================

There are situations in which idle CPUs cannot be permitted to
enter either dyntick-idle mode or adaptive-tick mode, the most
common being when that CPU has RCU callbacks pending.

The CONFIG_RCU_FAST_NO_HZ=y Kconfig option may be used to cause such CPUs
to enter dyntick-idle mode or adaptive-tick mode anyway.  In this case,
a timer will awaken these CPUs every four jiffies in order to ensure
that the RCU callbacks are processed in a timely fashion.

Another approach is to offload RCU callback processing to "rcuo" kthreads
using the CONFIG_RCU_NOCB_CPU=y Kconfig option.  The specific CPUs to
offload may be selected using the "rcu_nocbs=" kernel boot parameter,
which takes a comma-separated list of CPUs and CPU ranges, for example,
"1,3-5" selects CPUs 1, 3, 4, and 5.

The offloaded CPUs will never queue RCU callbacks, and therefore RCU
never prevents offloaded CPUs from entering either dyntick-idle mode
or adaptive-tick mode.  That said, note that it is up to userspace to
pin the "rcuo" kthreads to specific CPUs if desired.  Otherwise, the
scheduler will decide where to run them, which might or might not be
where you want them to run.
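One way to do such pinning is to generate taskset invocations for the
"rcuo" kthreads (a sketch; the PIDs and kthread names below are
hypothetical, and on a real system you would discover them with something
like "ps -e -o pid,comm | grep rcuo" and run the commands as root):

```python
def pin_commands(kthreads, housekeeping_cpu):
    """Given {pid: comm} for candidate kthreads, produce the taskset
    commands that would pin each rcuo kthread to the housekeeping CPU."""
    return ["taskset -cp %d %d" % (housekeeping_cpu, pid)
            for pid, comm in sorted(kthreads.items())
            if comm.startswith("rcuo")]

# Hypothetical PIDs and names, for illustration only.
for cmd in pin_commands({12: "rcuos/1", 13: "rcuos/2", 99: "ksoftirqd/0"}, 0):
    print(cmd)
# taskset -cp 0 12
# taskset -cp 0 13
```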


Testing
=======

So you enable all the OS-jitter features described in this document,
but do not see any change in your workload's behavior.  Is this because
your workload isn't affected that much by OS jitter, or is it because
something else is in the way?  This section helps answer this question
by providing a simple OS-jitter test suite, which is available on branch
master of the following git archive:

git://git.kernel.org/pub/scm/linux/kernel/git/frederic/dynticks-testing.git

Clone this archive and follow the instructions in the README file.
This test procedure will produce a trace that will allow you to evaluate
whether or not you have succeeded in removing OS jitter from your system.
If this trace shows that you have removed OS jitter as much as is
possible, then you can conclude that your workload is not all that
sensitive to OS jitter.

Note: this test requires that your system have at least two CPUs.
We do not currently have a good way to remove OS jitter from single-CPU
systems.


Known Issues
============

*	Dyntick-idle slows transitions to and from idle slightly.
	In practice, this has not been a problem except for the most
	aggressive real-time workloads, which have the option of disabling
	dyntick-idle mode, an option that most of them take.  However,
	some workloads will no doubt want to use adaptive ticks to
	eliminate scheduling-clock interrupt latencies.  Here are some
	options for these workloads:

	a.	Use PM QoS from userspace to inform the kernel of your
		latency requirements (preferred).

	b.	On x86 systems, use the "idle=mwait" boot parameter.

	c.	On x86 systems, use the "intel_idle.max_cstate=" boot
		parameter to limit the maximum C-state depth.

	d.	On x86 systems, use the "idle=poll" boot parameter.
		However, please note that use of this parameter can cause
		your CPU to overheat, which may cause thermal throttling
		to degrade your latencies -- and that this degradation can
		be even worse than that of dyntick-idle.  Furthermore,
		this parameter effectively disables Turbo Mode on Intel
		CPUs, which can significantly reduce maximum performance.

*	Adaptive-ticks slows user/kernel transitions slightly.
	This is not expected to be a problem for computationally intensive
	workloads, which have few such transitions.  Careful benchmarking
	will be required to determine whether or not other workloads
	are significantly affected by this effect.

*	Adaptive-ticks does not do anything unless there is only one
	runnable task for a given CPU, even though there are a number
	of other situations where the scheduling-clock tick is not
	needed.  To give but one example, consider a CPU that has one
	runnable high-priority SCHED_FIFO task and an arbitrary number
	of low-priority SCHED_OTHER tasks.  In this case, the CPU is
	required to run the SCHED_FIFO task until it either blocks or
	some other higher-priority task awakens on (or is assigned to)
	this CPU, so there is no point in sending a scheduling-clock
	interrupt to this CPU.	However, the current implementation
	nevertheless sends scheduling-clock interrupts to CPUs having a
	single runnable SCHED_FIFO task and multiple runnable SCHED_OTHER
	tasks, even though these interrupts are unnecessary.

	And even when there are multiple runnable tasks on a given CPU,
	there is little point in interrupting that CPU until the current
	running task's timeslice expires, which is almost always way
	longer than the time of the next scheduling-clock interrupt.

	Better handling of these sorts of situations is future work.

*	A reboot is required to reconfigure both adaptive idle and RCU
	callback offloading.  Runtime reconfiguration could be provided
	if needed, however, due to the complexity of reconfiguring RCU at
	runtime, there would need to be an earthshakingly good reason.
	Especially given that you have the straightforward option of
	simply offloading RCU callbacks from all CPUs and pinning them
	where you want them whenever you want them pinned.

*	Additional configuration is required to deal with other sources
	of OS jitter, including interrupts and system-utility tasks
	and processes.  This configuration normally involves binding
	interrupts and tasks to particular CPUs.

*	Some sources of OS jitter can currently be eliminated only by
	constraining the workload.  For example, the only way to eliminate
	OS jitter due to global TLB shootdowns is to avoid the unmapping
	operations (such as kernel module unload operations) that
	result in these shootdowns.  For another example, page faults
	and TLB misses can be reduced (and in some cases eliminated) by
	using huge pages and by constraining the amount of memory used
	by the application.  Pre-faulting the working set can also be
	helpful, especially when combined with the mlock() and mlockall()
	system calls.

*	Unless all CPUs are idle, at least one CPU must keep the
	scheduling-clock interrupt going in order to support accurate
	timekeeping.

*	If there might potentially be some adaptive-ticks CPUs, there
	will be at least one CPU keeping the scheduling-clock interrupt
	going, even if all CPUs are otherwise idle.

	Better handling of this situation is ongoing work.

*	Some process-handling operations still require the occasional
	scheduling-clock tick.	These operations include calculating CPU
	load, maintaining sched average, computing CFS entity vruntime,
	computing avenrun, and carrying out load balancing.  They are
	currently accommodated by a scheduling-clock tick every second
	or so.	Ongoing work will eliminate the need even for these
	infrequent scheduling-clock ticks.