
	JFFS2 LOCKING DOCUMENTATION
	---------------------------

This document attempts to describe the existing locking rules for
JFFS2. It is not expected to remain perfectly up to date, but ought to
be fairly close.


	alloc_sem
	---------

The alloc_sem is a per-filesystem mutex, used primarily to ensure
contiguous allocation of space on the medium. It is automatically
obtained during space allocations (jffs2_reserve_space()) and freed
upon write completion (jffs2_complete_reservation()). Note that
the garbage collector will obtain this mutex right at the beginning of
jffs2_garbage_collect_pass() and release it at the end, thereby
preventing any other write activity on the file system during a
garbage collect pass.

When writing new nodes, the alloc_sem must be held until the new nodes
have been properly linked into the data structures for the inode to
which they belong. This is for the benefit of NAND flash - adding new
nodes to an inode may obsolete old ones, and by holding the alloc_sem
until this happens we ensure that any data in the write-buffer at that
time are part of the new node, not just something that was written
afterwards. Hence, we can ensure the newly-obsoleted nodes don't
actually get erased until the write-buffer has been flushed to the
medium.
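
As a rough sketch of that lifecycle (argument lists are elided and
write_and_link_new_node() is a hypothetical stand-in for the real
node-writing helpers, so treat this as an illustration of the locking
only):

	ret = jffs2_reserve_space(c, /* minsize, &len, ... */);	/* takes c->alloc_sem */
	if (ret)
		return ret;

	mutex_lock(&f->sem);
	/*
	 * Write the node and link it into the inode's lists. Because
	 * alloc_sem is still held, any data sitting in the write-buffer
	 * at this point belongs to the new node.
	 */
	write_and_link_new_node(c, f);		/* hypothetical helper */
	mutex_unlock(&f->sem);

	jffs2_complete_reservation(c);		/* drops c->alloc_sem */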

With the introduction of NAND flash support and the write-buffer,
the alloc_sem is also used to protect the wbuf-related members of the
jffs2_sb_info structure. Atomically reading the wbuf_len member to see
if the wbuf is currently holding any data is permitted, though.

Ordering constraints: See f->sem.


	File Mutex f->sem
	-----------------

This is the JFFS2-internal equivalent of the inode mutex i->i_sem.
It protects the contents of the jffs2_inode_info private inode data,
including the linked list of node fragments (but see the notes below on
erase_completion_lock), etc.

The reason that the i_sem itself isn't used for this purpose is to
avoid deadlocks with garbage collection -- the VFS will lock the i_sem
before calling a function which may need to allocate space. The
allocation may trigger garbage collection, which may need to move a
node belonging to the inode which was locked in the first place by the
VFS. If the garbage collection code were to attempt to lock the i_sem
of the inode from which it's garbage-collecting a physical node, this
would lead to deadlock, unless we played games with unlocking the i_sem
before calling the space allocation functions.

Instead of playing such games, we just have an extra internal
mutex, which is obtained by the garbage collection code and also
by the normal file system code _after_ allocation of space.

Ordering constraints:

	1. Never attempt to allocate space or lock alloc_sem with
	   any f->sem held (see the sketch below).
	2. Never attempt to lock two file mutexes in one thread.
	   No ordering rules have been made for doing so.
	3. Never lock a page cache page with f->sem held.
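
To illustrate rule 1 (hypothetical code, not taken from the tree): the
order must be alloc_sem first, then f->sem, because the garbage
collector already holds alloc_sem when it locks f->sem.

	/* Forbidden -- violates rule 1 and can deadlock against GC: */
	mutex_lock(&f->sem);
	ret = jffs2_reserve_space(c, /* ... */);	/* may wait for a GC pass
							   which is itself waiting
							   for this same f->sem */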


	erase_completion_lock spinlock
	------------------------------

This is used to serialise access to the eraseblock lists, to the
per-eraseblock lists of physical jffs2_raw_node_ref structures, and
(NB) the per-inode list of physical nodes. The latter is a special
case - see below.

As the MTD API no longer permits erase-completion callback functions
to be called from bottom-half (timer) context (on the basis that nobody
ever actually implemented such a thing), it's now sufficient to use
a simple spin_lock() rather than spin_lock_bh().

Note that the per-inode list of physical nodes (f->nodes) is a special
case. Any changes to _valid_ nodes (i.e. ->flash_offset & 1 == 0) in
the list are protected by the file mutex f->sem. But the erase code
may remove _obsolete_ nodes from the list while holding only the
erase_completion_lock. So you can walk the list only while holding the
erase_completion_lock, and can drop the lock temporarily mid-walk as
long as the pointer you're holding is to a _valid_ node, not an
obsolete one.
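
A sketch of that rule (field names follow the description above, the
next_in_ino link field is assumed, and the sleeping work in the middle
is hypothetical):

	struct jffs2_raw_node_ref *ref;

	spin_lock(&c->erase_completion_lock);
	for (ref = f->nodes; ref; ref = ref->next_in_ino) {
		if (ref->flash_offset & 1)	/* obsolete: may be freed
						   if we drop the lock */
			continue;

		/* 'ref' is valid, so it is safe to drop the lock here... */
		spin_unlock(&c->erase_completion_lock);
		/* ...do something which may sleep... */
		spin_lock(&c->erase_completion_lock);
		/* 'ref' is still there, but only read ref->next_in_ino
		   now that the lock is held again. */
	}
	spin_unlock(&c->erase_completion_lock);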

The erase_completion_lock is also used to protect the c->gc_task
pointer when the garbage collection thread exits. The code to kill the
GC thread locks it, sends the signal, then unlocks it - while the GC
thread itself locks it, zeroes c->gc_task, then unlocks on the exit path.
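
Sketched, both sides of that handshake look roughly like this (the
exact signal and exit call are from memory and may differ):

	/* Killer side: */
	spin_lock(&c->erase_completion_lock);
	if (c->gc_task)
		send_sig(SIGKILL, c->gc_task, 1);
	spin_unlock(&c->erase_completion_lock);

	/* GC thread, on its exit path: */
	spin_lock(&c->erase_completion_lock);
	c->gc_task = NULL;
	spin_unlock(&c->erase_completion_lock);
	/* ...then signal completion and exit... */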


	inocache_lock spinlock
	----------------------

This spinlock protects the hashed list (c->inocache_list) of the
in-core jffs2_inode_cache objects (each inode in JFFS2 has its
corresponding jffs2_inode_cache object). So, the inocache_lock
has to be locked while walking the c->inocache_list hash buckets.

This spinlock also covers allocation of new inode numbers, which is
currently just 'c->highest_ino++', but might one day get more complicated
if we need to deal with wrapping after 4 billion inode numbers are used.

Note, the f->sem guarantees that the corresponding jffs2_inode_cache
will not be removed. So, it is allowed to access it without locking
the inocache_lock spinlock.

Ordering constraints:

	If both erase_completion_lock and inocache_lock are needed, the
	c->erase_completion_lock has to be acquired first.
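
A minimal sketch of that nesting:

	/* Correct order when both locks are needed: */
	spin_lock(&c->erase_completion_lock);
	spin_lock(&c->inocache_lock);
	/* ...e.g. look up a jffs2_inode_cache while examining node refs... */
	spin_unlock(&c->inocache_lock);
	spin_unlock(&c->erase_completion_lock);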


	erase_free_sem
	--------------

This mutex is only used by the erase code which frees obsolete node
references and by the jffs2_garbage_collect_deletion_dirent() function.
On NAND flash, the latter function must read _obsolete_ nodes to
determine whether the 'deletion dirent' under consideration can be
discarded or whether it is still required to show that an inode has
been unlinked. Because reading from the flash may sleep, the
erase_completion_lock cannot be held, so an alternative, more
heavyweight lock was required to prevent the erase code from freeing
the jffs2_raw_node_ref structures in question while the garbage
collection code is looking at them.
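
The shape of that exclusion, sketched (purely illustrative; the real
reader and erase-side code are more involved, and the freeing helper
shown here is hypothetical):

	/* Reader (jffs2_garbage_collect_deletion_dirent()-like): */
	mutex_lock(&c->erase_free_sem);
	/* may sleep: read the obsolete node back from the flash */
	jffs2_flash_read(c, ref_offset(raw), /* len, &retlen, buf */);
	mutex_unlock(&c->erase_free_sem);

	/* Erase code: frees obsolete jffs2_raw_node_ref structures only
	 * while holding the same mutex, so 'raw' cannot disappear under
	 * the reader above. */
	mutex_lock(&c->erase_free_sem);
	free_obsolete_refs(c, jeb);		/* hypothetical helper */
	mutex_unlock(&c->erase_free_sem);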

Suggestions for alternative solutions to this problem would be welcomed.


	wbuf_sem
	--------

This read/write semaphore protects against concurrent access to the
write-behind buffer ('wbuf') used for flash chips where we must write
in blocks. It protects both the contents of the wbuf and the metadata
which indicates which flash region (if any) is currently covered by
the buffer.

Ordering constraints:
	Lock wbuf_sem last, after the alloc_sem and/or f->sem.
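
So a write path which ends up touching the wbuf nests its locks like
this (sketch only; in practice the wbuf_sem is taken inside the
lower-level flash write helpers rather than by hand):

	jffs2_reserve_space(c, /* ... */);	/* c->alloc_sem */
	mutex_lock(&f->sem);

	down_write(&c->wbuf_sem);		/* always last */
	/* ...copy node data into c->wbuf, update wbuf_ofs / wbuf_len... */
	up_write(&c->wbuf_sem);

	mutex_unlock(&f->sem);
	jffs2_complete_reservation(c);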


	c->xattr_sem
	------------

This read/write semaphore protects against concurrent access to the
xattr-related objects, which include state in the superblock and ic->xref.
On read-only paths the write semaphore is more exclusion than is needed;
holding the read semaphore is enough. But the write semaphore must be
held when creating, updating or deleting any xattr-related object.

Once the xattr_sem is released, there is no guarantee that those objects
still exist. Thus, code which discovers under the read semaphore that an
update is needed often has to retry. For example, do_jffs2_getxattr()
first scans the xref and xdatum while holding the read semaphore, but if
the name/value pair has to be loaded from the medium it releases the read
semaphore and retries the whole process holding the write semaphore.
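
That retry pattern, sketched in pseudo-C (do_jffs2_getxattr() is real,
but the lookup and 'value loaded' helpers here are hypothetical
placeholders):

	down_read(&c->xattr_sem);
	xd = find_xattr_datum(ic, xprefix, xname);	/* hypothetical */
	if (xd && !xattr_value_loaded(xd)) {		/* hypothetical */
		/* Loading from the medium needs the write semaphore, so
		 * drop the read lock and redo the lookup under it; the
		 * xref/xdatum may have vanished in between. */
		up_read(&c->xattr_sem);
		down_write(&c->xattr_sem);
		/* ...repeat the lookup, load the name/value pair, copy out... */
		up_write(&c->xattr_sem);
		return ret;
	}
	/* Value already cached: copy it out under the read lock. */
	up_read(&c->xattr_sem);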

Ordering constraints:
	Lock xattr_sem last, after the alloc_sem.