=================================================================
CPU Scheduler implementation hints for architecture specific code
=================================================================

	Nick Piggin, 2005
Context switch
==============
1. Runqueue locking
By default, the switch_to arch function is called with the runqueue
locked. This is usually not a problem unless switch_to may need to
take the runqueue lock. This is usually due to a wake up operation in
the context switch. See arch/ia64/include/asm/switch_to.h for an example.

To request the scheduler call switch_to with the runqueue unlocked,
you must `#define __ARCH_WANT_UNLOCKED_CTXSW` in a header file
(typically the one where switch_to is defined).

Unlocked context switches introduce only a very minor performance
penalty to the core scheduler implementation in the CONFIG_SMP case.
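
For example, an architecture whose switch_to performs wakeups could
request unlocked context switches from the header that defines
switch_to (the path below is only illustrative, not a real arch)::

	/* arch/<myarch>/include/asm/switch_to.h (hypothetical) */
	#define __ARCH_WANT_UNLOCKED_CTXSW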

CPU idle
========
Your cpu_idle routines need to obey the following rules:

1. Preemption should be disabled over idle routines. It should only
   be enabled to call schedule(), then disabled again.

2. need_resched/TIF_NEED_RESCHED is only ever set, and will never
   be cleared until the running task has called schedule(). Idle
   threads need only ever query need_resched, and may never set or
   clear it.

3. When cpu_idle finds (need_resched() == 'true'), it should call
   schedule(). It should not call schedule() otherwise.

4. The only time interrupts need to be disabled when checking
   need_resched is if we are about to sleep the processor until
   the next interrupt (this doesn't provide any protection of
   need_resched; it prevents losing an interrupt):

	4a. Common problem with this type of sleep appears to be::

	        local_irq_disable();
	        if (!need_resched()) {
	                local_irq_enable();
	                *** resched interrupt arrives here ***
	                __asm__("sleep until next interrupt");
	        }
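
	One way to close this window (a sketch, not code to copy: on
	x86, safe_halt() executes "sti; hlt", which re-enables
	interrupts and halts atomically, so a resched interrupt
	arriving at any point wakes the halt) is to re-test with
	interrupts still disabled and sleep atomically::

	        local_irq_disable();
	        if (!need_resched())
	                safe_halt();    /* sleeps with irqs enabled atomically */
	        else
	                local_irq_enable();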

5. TIF_POLLING_NRFLAG can be set by idle routines that do not
   need an interrupt to wake them up when need_resched goes high.
   In other words, they must be periodically polling need_resched,
   although it may be reasonable to do some background work or enter
   a low CPU priority.

      - 5a. If TIF_POLLING_NRFLAG is set, and we do decide to enter
	an interrupt sleep, it needs to be cleared then a memory
	barrier issued (followed by a test of need_resched with
	interrupts disabled, as explained in 4).

arch/x86/kernel/process.c has examples of both polling and
sleeping idle functions.


Possible arch/ problems
=======================

Possible arch problems I found (and either tried to fix or didn't):

ia64 - is safe_halt call racy vs interrupts? (does it sleep?) (See #4a)

sh64 - Is sleeping racy vs interrupts? (See #4a)

sparc - IRQs on at this point(?), change local_irq_save to _disable.
      - TODO: needs secondary CPUs to disable preempt (See #1)