===========================
Hardware Spinlock Framework
===========================

Introduction
============

Hardware spinlock modules provide hardware assistance for synchronization
and mutual exclusion between heterogeneous processors and those not operating
under a single, shared operating system.

For example, OMAP4 has dual Cortex-A9, dual Cortex-M3 and a C64x+ DSP,
each of which is running a different Operating System (the master, A9,
is usually running Linux and the slave processors, the M3 and the DSP,
are running some flavor of RTOS).

A generic hwspinlock framework allows platform-independent drivers to use
the hwspinlock device in order to access data structures that are shared
between remote processors, that otherwise have no alternative mechanism
to accomplish synchronization and mutual exclusion operations.

This is necessary, for example, for Inter-processor communications:
on OMAP4, cpu-intensive multimedia tasks are offloaded by the host to the
remote M3 and/or C64x+ slave processors (by an IPC subsystem called Syslink).

To achieve fast message-based communications, a minimal kernel support
is needed to deliver messages arriving from a remote processor to the
appropriate user process.

This communication is based on simple data structures that are shared between
the remote processors; access to them is synchronized using the hwspinlock
module (a remote processor directly places new messages in this shared data
structure).

A common hwspinlock interface makes it possible to have generic,
platform-independent drivers.
User API
========

::

  struct hwspinlock *hwspin_lock_request(void);

Dynamically assign an hwspinlock and return its address, or NULL
in case an unused hwspinlock isn't available. Users of this
API will usually want to communicate the lock's id to the remote core
before it can be used to achieve synchronization.

Should be called from a process context (might sleep).

::

  struct hwspinlock *hwspin_lock_request_specific(unsigned int id);

Assign a specific hwspinlock id and return its address, or NULL
if that hwspinlock is already in use. Usually board code will
be calling this function in order to reserve specific hwspinlock
ids for predefined purposes.

Should be called from a process context (might sleep).

::

  int of_hwspin_lock_get_id(struct device_node *np, int index);

Retrieve the global lock id for an OF phandle-based specific lock.
This function provides a means for DT users of a hwspinlock module
to get the global lock id of a specific hwspinlock, so that it can
be requested using the normal hwspin_lock_request_specific() API.

The function returns a lock id number on success, -EPROBE_DEFER if
the hwspinlock device is not yet registered with the core, or other
error values.

Should be called from a process context (might sleep).
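
As a sketch only (the np device node, phandle index and error handling below
are illustrative, not taken from a real binding), a DT user would typically
combine this with hwspin_lock_request_specific()::

	int id;
	struct hwspinlock *hwlock;

	/* resolve the lock referenced by the node's hwlocks phandle */
	id = of_hwspin_lock_get_id(np, 0);
	if (id < 0)
		return id;	/* may be -EPROBE_DEFER */

	hwlock = hwspin_lock_request_specific(id);
	if (!hwlock)
		...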

::

  int hwspin_lock_free(struct hwspinlock *hwlock);

Free a previously-assigned hwspinlock; returns 0 on success, or an
appropriate error code on failure (e.g. -EINVAL if the hwspinlock
is already free).

Should be called from a process context (might sleep).

::

  int hwspin_lock_timeout(struct hwspinlock *hwlock, unsigned int timeout);

Lock a previously-assigned hwspinlock with a timeout limit (specified in
msecs). If the hwspinlock is already taken, the function will busy loop
waiting for it to be released, but give up when the timeout elapses.
Upon a successful return from this function, preemption is disabled so
the caller must not sleep, and is advised to release the hwspinlock as
soon as possible, in order to minimize remote cores polling on the
hardware interconnect.

Returns 0 when successful and an appropriate error code otherwise (most
notably -ETIMEDOUT if the hwspinlock is still busy after timeout msecs).
The function will never sleep.

::

  int hwspin_lock_timeout_irq(struct hwspinlock *hwlock, unsigned int timeout);

Lock a previously-assigned hwspinlock with a timeout limit (specified in
msecs). If the hwspinlock is already taken, the function will busy loop
waiting for it to be released, but give up when the timeout elapses.
Upon a successful return from this function, preemption and the local
interrupts are disabled, so the caller must not sleep, and is advised to
release the hwspinlock as soon as possible.

Returns 0 when successful and an appropriate error code otherwise (most
notably -ETIMEDOUT if the hwspinlock is still busy after timeout msecs).
The function will never sleep.

::

  int hwspin_lock_timeout_irqsave(struct hwspinlock *hwlock, unsigned int to,
				  unsigned long *flags);

Lock a previously-assigned hwspinlock with a timeout limit (specified in
msecs). If the hwspinlock is already taken, the function will busy loop
waiting for it to be released, but give up when the timeout elapses.
Upon a successful return from this function, preemption is disabled,
local interrupts are disabled and their previous state is saved at the
given flags placeholder. The caller must not sleep, and is advised to
release the hwspinlock as soon as possible.

Returns 0 when successful and an appropriate error code otherwise (most
notably -ETIMEDOUT if the hwspinlock is still busy after timeout msecs).

The function will never sleep.
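
A minimal usage sketch of the irqsave variant (the 100 msecs timeout and
error handling here are illustrative)::

	unsigned long flags;
	int ret;

	/* spin for up to 100 msecs; on success, interrupts are saved in flags */
	ret = hwspin_lock_timeout_irqsave(hwlock, 100, &flags);
	if (ret)
		...

	/* critical section - do NOT sleep here */

	hwspin_unlock_irqrestore(hwlock, &flags);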

::

  int hwspin_lock_timeout_raw(struct hwspinlock *hwlock, unsigned int timeout);

Lock a previously-assigned hwspinlock with a timeout limit (specified in
msecs). If the hwspinlock is already taken, the function will busy loop
waiting for it to be released, but give up when the timeout elapses.

Caution: the caller must serialize calls that take the hardware lock with a
mutex or spinlock to avoid deadlocks; in return, the caller may perform
time-consuming or sleepable operations while holding the hardware lock.

Returns 0 when successful and an appropriate error code otherwise (most
notably -ETIMEDOUT if the hwspinlock is still busy after timeout msecs).

The function will never sleep.
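
One way to honor the caution above is to wrap the raw variant in a local
mutex (the mutex name and timeout here are illustrative, not mandated by the
framework)::

	static DEFINE_MUTEX(my_hwlock_mutex);	/* serializes local callers */

	mutex_lock(&my_hwlock_mutex);
	ret = hwspin_lock_timeout_raw(hwlock, 100);
	if (!ret) {
		/* sleepable operations are allowed here */
		...
		hwspin_unlock_raw(hwlock);
	}
	mutex_unlock(&my_hwlock_mutex);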

::

  int hwspin_lock_timeout_in_atomic(struct hwspinlock *hwlock, unsigned int to);

Lock a previously-assigned hwspinlock with a timeout limit (specified in
msecs). If the hwspinlock is already taken, the function will busy loop
waiting for it to be released, but give up when the timeout elapses.

This function shall be called only from an atomic context and the timeout
value shall not exceed a few msecs.

Returns 0 when successful and an appropriate error code otherwise (most
notably -ETIMEDOUT if the hwspinlock is still busy after timeout msecs).

The function will never sleep.

::

  int hwspin_trylock(struct hwspinlock *hwlock);

Attempt to lock a previously-assigned hwspinlock, but immediately fail if
it is already taken.

Upon a successful return from this function, preemption is disabled so
the caller must not sleep, and is advised to release the hwspinlock as soon
as possible, in order to minimize remote cores polling on the hardware
interconnect.

Returns 0 on success and an appropriate error code otherwise (most
notably -EBUSY if the hwspinlock was already taken).
The function will never sleep.

::

  int hwspin_trylock_irq(struct hwspinlock *hwlock);

Attempt to lock a previously-assigned hwspinlock, but immediately fail if
it is already taken.

Upon a successful return from this function, preemption and the local
interrupts are disabled so the caller must not sleep, and is advised to
release the hwspinlock as soon as possible.

Returns 0 on success and an appropriate error code otherwise (most
notably -EBUSY if the hwspinlock was already taken).

The function will never sleep.

::

  int hwspin_trylock_irqsave(struct hwspinlock *hwlock, unsigned long *flags);

Attempt to lock a previously-assigned hwspinlock, but immediately fail if
it is already taken.

Upon a successful return from this function, preemption is disabled,
the local interrupts are disabled and their previous state is saved
at the given flags placeholder. The caller must not sleep, and is advised
to release the hwspinlock as soon as possible.

Returns 0 on success and an appropriate error code otherwise (most
notably -EBUSY if the hwspinlock was already taken).
The function will never sleep.

::

  int hwspin_trylock_raw(struct hwspinlock *hwlock);

Attempt to lock a previously-assigned hwspinlock, but immediately fail if
it is already taken.

Caution: the caller must serialize calls that take the hardware lock with a
mutex or spinlock to avoid deadlocks; in return, the caller may perform
time-consuming or sleepable operations while holding the hardware lock.

Returns 0 on success and an appropriate error code otherwise (most
notably -EBUSY if the hwspinlock was already taken).
The function will never sleep.

::

  int hwspin_trylock_in_atomic(struct hwspinlock *hwlock);

Attempt to lock a previously-assigned hwspinlock, but immediately fail if
it is already taken.

This function shall be called only from an atomic context.

Returns 0 on success and an appropriate error code otherwise (most
notably -EBUSY if the hwspinlock was already taken).
The function will never sleep.

::

  void hwspin_unlock(struct hwspinlock *hwlock);

Unlock a previously-locked hwspinlock. Always succeeds, and can be called
from any context (the function never sleeps).

.. note::

  code should **never** unlock an hwspinlock which is already unlocked
  (there is no protection against this).

::

  void hwspin_unlock_irq(struct hwspinlock *hwlock);

Unlock a previously-locked hwspinlock and enable local interrupts.

The caller should **never** unlock an hwspinlock which is already unlocked.
Doing so is considered a bug (there is no protection against this).
Upon a successful return from this function, preemption and local
interrupts are enabled. This function will never sleep.

::

  void
  hwspin_unlock_irqrestore(struct hwspinlock *hwlock, unsigned long *flags);

Unlock a previously-locked hwspinlock.

The caller should **never** unlock an hwspinlock which is already unlocked.
Doing so is considered a bug (there is no protection against this).
Upon a successful return from this function, preemption is reenabled,
and the state of the local interrupts is restored to the state saved at
the given flags. This function will never sleep.

::

  void hwspin_unlock_raw(struct hwspinlock *hwlock);

Unlock a previously-locked hwspinlock.

The caller should **never** unlock an hwspinlock which is already unlocked.
Doing so is considered a bug (there is no protection against this).
This function will never sleep.

::

  void hwspin_unlock_in_atomic(struct hwspinlock *hwlock);

Unlock a previously-locked hwspinlock.

The caller should **never** unlock an hwspinlock which is already unlocked.
Doing so is considered a bug (there is no protection against this).
This function will never sleep.

::

  int hwspin_lock_get_id(struct hwspinlock *hwlock);

Retrieve id number of a given hwspinlock. This is needed when an
hwspinlock is dynamically assigned: before it can be used to achieve
mutual exclusion with a remote cpu, the id number should be communicated
to the remote task with which we want to synchronize.

Returns the hwspinlock id number, or -EINVAL if hwlock is null.

Typical usage
=============

::

	#include <linux/hwspinlock.h>
	#include <linux/err.h>

	int hwspinlock_example1(void)
	{
		struct hwspinlock *hwlock;
		int ret, id;

		/* dynamically assign a hwspinlock */
		hwlock = hwspin_lock_request();
		if (!hwlock)
			...

		id = hwspin_lock_get_id(hwlock);
		/* probably need to communicate id to a remote processor now */

		/* take the lock, spin for 1 sec if it's already taken */
		ret = hwspin_lock_timeout(hwlock, 1000);
		if (ret)
			...

		/*
		 * we took the lock, do our thing now, but do NOT sleep
		 */

		/* release the lock */
		hwspin_unlock(hwlock);

		/* free the lock */
		ret = hwspin_lock_free(hwlock);
		if (ret)
			...

		return ret;
	}

	int hwspinlock_example2(void)
	{
		struct hwspinlock *hwlock;
		int ret;

		/*
		 * assign a specific hwspinlock id - this should be called early
		 * by board init code.
		 */
		hwlock = hwspin_lock_request_specific(PREDEFINED_LOCK_ID);
		if (!hwlock)
			...

		/* try to take it, but don't spin on it */
		ret = hwspin_trylock(hwlock);
		if (ret) {
			pr_info("lock is already taken\n");
			return -EBUSY;
		}

		/*
		 * we took the lock, do our thing now, but do NOT sleep
		 */

		/* release the lock */
		hwspin_unlock(hwlock);

		/* free the lock */
		ret = hwspin_lock_free(hwlock);
		if (ret)
			...

		return ret;
	}

API for implementors
====================

::

  int hwspin_lock_register(struct hwspinlock_device *bank, struct device *dev,
		const struct hwspinlock_ops *ops, int base_id, int num_locks);

To be called from the underlying platform-specific implementation, in
order to register a new hwspinlock device (which is usually a bank of
numerous locks). Should be called from a process context (this function
might sleep).

Returns 0 on success, or appropriate error code on failure.

::

  int hwspin_lock_unregister(struct hwspinlock_device *bank);

To be called from the underlying vendor-specific implementation, in order
to unregister an hwspinlock device (which is usually a bank of numerous
locks).

Should be called from a process context (this function might sleep).

Returns 0 on success, or an appropriate error code on failure (e.g. -EBUSY
if the hwspinlock is still in use).

Important structs
=================

struct hwspinlock_device is a device which usually contains a bank
of hardware locks. It is registered by the underlying hwspinlock
implementation using the hwspin_lock_register() API.

::

	/**
	 * struct hwspinlock_device - a device which usually spans numerous hwspinlocks
	 * @dev: underlying device, will be used to invoke runtime PM api
	 * @ops: platform-specific hwspinlock handlers
	 * @base_id: id index of the first lock in this device
	 * @num_locks: number of locks in this device
	 * @lock: dynamically allocated array of 'struct hwspinlock'
	 */
	struct hwspinlock_device {
		struct device *dev;
		const struct hwspinlock_ops *ops;
		int base_id;
		int num_locks;
		struct hwspinlock lock[0];
	};

struct hwspinlock_device contains an array of hwspinlock structs, each
of which represents a single hardware lock::

	/**
	 * struct hwspinlock - this struct represents a single hwspinlock instance
	 * @bank: the hwspinlock_device structure which owns this lock
	 * @lock: initialized and used by hwspinlock core
	 * @priv: private data, owned by the underlying platform-specific hwspinlock drv
	 */
	struct hwspinlock {
		struct hwspinlock_device *bank;
		spinlock_t lock;
		void *priv;
	};

When registering a bank of locks, the hwspinlock driver only needs to
set the priv members of the locks. The rest of the members are set and
initialized by the hwspinlock core itself.

Implementation callbacks
========================

There are three possible callbacks defined in 'struct hwspinlock_ops'::

	struct hwspinlock_ops {
		int (*trylock)(struct hwspinlock *lock);
		void (*unlock)(struct hwspinlock *lock);
		void (*relax)(struct hwspinlock *lock);
	};

The first two callbacks are mandatory:

The ->trylock() callback should make a single attempt to take the lock, and
return 0 on failure and 1 on success. This callback may **not** sleep.

The ->unlock() callback releases the lock. It always succeeds, and it, too,
may **not** sleep.

The ->relax() callback is optional. It is called by hwspinlock core while
spinning on a lock, and can be used by the underlying implementation to force
a delay between two successive invocations of ->trylock(). It may **not** sleep.
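
As an illustration only, a hypothetical driver with a one-register-per-lock
MMIO layout (where reading 0 acquires the lock and writing 0 releases it;
the register semantics and all names here are invented for this sketch)
might implement the mandatory callbacks like this::

	static int my_hwspinlock_trylock(struct hwspinlock *lock)
	{
		void __iomem *addr = lock->priv;

		/* reading 0 means we took the lock; nonzero means it is busy */
		return readl(addr) == 0;
	}

	static void my_hwspinlock_unlock(struct hwspinlock *lock)
	{
		void __iomem *addr = lock->priv;

		writel(0, addr);
	}

	static const struct hwspinlock_ops my_hwspinlock_ops = {
		.trylock	= my_hwspinlock_trylock,
		.unlock		= my_hwspinlock_unlock,
	};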