/*
 * Copyright (c) International Business Machines Corp., 2006
 *
 * SPDX-License-Identifier:	GPL-2.0+
 *
 * Authors: Artem Bityutskiy (Битюцкий Артём), Thomas Gleixner
 */

/*
 * UBI wear-leveling sub-system.
 *
 * This sub-system is responsible for wear-leveling. It works in terms of
 * physical eraseblocks and erase counters and knows nothing about logical
 * eraseblocks, volumes, etc. From this sub-system's perspective all physical
 * eraseblocks are of two types - used and free. Used physical eraseblocks are
 * those that were "get" by the 'ubi_wl_get_peb()' function, and free physical
 * eraseblocks are those that were put by the 'ubi_wl_put_peb()' function.
 *
 * Physical eraseblocks returned by 'ubi_wl_get_peb()' have only the erase
 * counter header. The rest of the physical eraseblock contains only %0xFF bytes.
 *
 * When physical eraseblocks are returned to the WL sub-system by means of the
 * 'ubi_wl_put_peb()' function, they are scheduled for erasure. The erasure is
 * done asynchronously in context of the per-UBI device background thread,
 * which is also managed by the WL sub-system.
 *
 * The wear-leveling is ensured by means of moving the contents of used
 * physical eraseblocks with low erase counter to free physical eraseblocks
 * with high erase counter.
 *
 * If the WL sub-system fails to erase a physical eraseblock, it marks it as
 * bad.
 *
 * This sub-system is also responsible for scrubbing. If a bit-flip is detected
 * in a physical eraseblock, it has to be moved. Technically this is the same
 * as moving it for wear-leveling reasons.
 *
 * As it was said, for the UBI sub-system all physical eraseblocks are either
 * "free" or "used". Free eraseblocks are kept in the @wl->free RB-tree, while
 * used eraseblocks are kept in @wl->used, @wl->erroneous, or @wl->scrub
 * RB-trees, as well as (temporarily) in the @wl->pq queue.
 *
 * When the WL sub-system returns a physical eraseblock, the physical
 * eraseblock is protected from being moved for some "time". For this reason,
 * the physical eraseblock is not directly moved from the @wl->free tree to the
 * @wl->used tree. There is a protection queue in between where this
 * physical eraseblock is temporarily stored (@wl->pq).
 *
 * All this protection stuff is needed because:
 *  o we don't want to move physical eraseblocks just after we have given them
 *    to the user; instead, we first want to let users fill them up with data;
 *
 *  o there is a chance that the user will put the physical eraseblock very
 *    soon, so it makes sense not to move it for some time, but wait.
 *
 * Physical eraseblocks stay protected only for limited time. But the "time" is
 * measured in erase cycles in this case. This is implemented with help of the
 * protection queue. Eraseblocks are put to the tail of this queue when they
 * are returned by the 'ubi_wl_get_peb()', and eraseblocks are removed from the
 * head of the queue on each erase operation (for any eraseblock). So the
 * length of the queue defines how many (global) erase cycles PEBs are protected.
 *
 * To put it differently, each physical eraseblock has 2 main states: free and
 * used. The former state corresponds to the @wl->free tree. The latter state
 * is split up into several sub-states:
 * o the WL movement is allowed (@wl->used tree);
 * o the WL movement is disallowed (@wl->erroneous) because the PEB is
 *   erroneous - e.g., there was a read error;
 * o the WL movement is temporarily prohibited (@wl->pq queue);
 * o scrubbing is needed (@wl->scrub tree).
 *
 * Depending on the sub-state, wear-leveling entries of the used physical
 * eraseblocks may be kept in one of those structures.
 *
 * Note, in this implementation, we keep a small in-RAM object for each physical
 * eraseblock. This is surely not a scalable solution. But it appears to be good
 * enough for moderately large flashes and it is simple. In future, one may
 * re-work this sub-system and make it more scalable.
 *
 * At the moment this sub-system does not utilize the sequence number, which
 * was introduced relatively recently. But it would be wise to do this because
 * the sequence number of a logical eraseblock characterizes how old it is. For
 * example, when we move a PEB with low erase counter, and we need to pick the
 * target PEB, we pick a PEB with the highest EC if our PEB is "old" and we
 * pick a target PEB with an average EC if our PEB is not very "old". This is
 * room for future re-works of the WL sub-system.
 */
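
/*
 * Illustrative summary only (not part of the driver logic, just a restatement
 * of the comment above): the PEB life cycle, with the functions from this
 * file that trigger each transition:
 *
 *	@wl->free  --ubi_wl_get_peb()-->           @wl->pq (protected)
 *	@wl->pq    --UBI_PROT_QUEUE_LEN erases-->  @wl->used
 *	@wl->used  --ubi_wl_put_peb()-->           erase work --> @wl->free
 *	@wl->used  --bit-flip detected-->          @wl->scrub --> move + erase
 */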

#ifndef __UBOOT__
#include <linux/slab.h>
#include <linux/crc32.h>
#include <linux/freezer.h>
#include <linux/kthread.h>
#else
#include <ubi_uboot.h>
#endif

#include "ubi.h"
#include "wl.h"

/* Number of physical eraseblocks reserved for wear-leveling purposes */
#define WL_RESERVED_PEBS 1

/*
 * Maximum difference between two erase counters. If this threshold is
 * exceeded, the WL sub-system starts moving data from used physical
 * eraseblocks with low erase counter to free physical eraseblocks with high
 * erase counter.
 */
#define UBI_WL_THRESHOLD CONFIG_MTD_UBI_WL_THRESHOLD

/*
 * When a physical eraseblock is moved, the WL sub-system has to pick the target
 * physical eraseblock to move to. The simplest way would be just to pick the
 * one with the highest erase counter. But in certain workloads this could lead
 * to an unlimited wear of one or a few physical eraseblocks. Indeed, imagine a
 * situation when the picked physical eraseblock is constantly erased after the
 * data is written to it. So, we have a constant which limits the highest erase
 * counter of the free physical eraseblock to pick. Namely, the WL sub-system
 * does not pick eraseblocks with erase counter greater than the lowest erase
 * counter plus %WL_FREE_MAX_DIFF.
 */
#define WL_FREE_MAX_DIFF (2*UBI_WL_THRESHOLD)

/*
 * Maximum number of consecutive background thread failures which is enough to
 * switch to read-only mode.
 */
#define WL_MAX_FAILURES 32

static int self_check_ec(struct ubi_device *ubi, int pnum, int ec);
static int self_check_in_wl_tree(const struct ubi_device *ubi,
				 struct ubi_wl_entry *e, struct rb_root *root);
static int self_check_in_pq(const struct ubi_device *ubi,
			    struct ubi_wl_entry *e);

/**
 * wl_tree_add - add a wear-leveling entry to a WL RB-tree.
 * @e: the wear-leveling entry to add
 * @root: the root of the tree
 *
 * Note, we use (erase counter, physical eraseblock number) pairs as keys in
 * the @ubi->used and @ubi->free RB-trees.
 */
static void wl_tree_add(struct ubi_wl_entry *e, struct rb_root *root)
{
	struct rb_node **p, *parent = NULL;

	p = &root->rb_node;
	while (*p) {
		struct ubi_wl_entry *e1;

		parent = *p;
		e1 = rb_entry(parent, struct ubi_wl_entry, u.rb);

		if (e->ec < e1->ec)
			p = &(*p)->rb_left;
		else if (e->ec > e1->ec)
			p = &(*p)->rb_right;
		else {
			ubi_assert(e->pnum != e1->pnum);
			if (e->pnum < e1->pnum)
				p = &(*p)->rb_left;
			else
				p = &(*p)->rb_right;
		}
	}

	rb_link_node(&e->u.rb, parent, p);
	rb_insert_color(&e->u.rb, root);
}

/**
 * wl_entry_destroy - destroy a wear-leveling entry.
 * @ubi: UBI device description object
 * @e: the wear-leveling entry to destroy
 *
 * This function destroys a wear leveling entry and removes
 * the reference from the lookup table.
 */
static void wl_entry_destroy(struct ubi_device *ubi, struct ubi_wl_entry *e)
{
	ubi->lookuptbl[e->pnum] = NULL;
	kmem_cache_free(ubi_wl_entry_slab, e);
}

/**
 * do_work - do one pending work.
 * @ubi: UBI device description object
 *
 * This function returns zero in case of success and a negative error code in
 * case of failure.
 */
#ifndef __UBOOT__
static int do_work(struct ubi_device *ubi)
#else
int do_work(struct ubi_device *ubi)
#endif
{
	int err;
	struct ubi_work *wrk;

	cond_resched();

	/*
	 * @ubi->work_sem is used to synchronize with the workers. Workers take
	 * it in read mode, so many of them may be doing works at a time. But
	 * the queue flush code has to be sure the whole queue of works is
	 * done, and it takes the mutex in write mode.
	 */
	down_read(&ubi->work_sem);
	spin_lock(&ubi->wl_lock);
	if (list_empty(&ubi->works)) {
		spin_unlock(&ubi->wl_lock);
		up_read(&ubi->work_sem);
		return 0;
	}

	wrk = list_entry(ubi->works.next, struct ubi_work, list);
	list_del(&wrk->list);
	ubi->works_count -= 1;
	ubi_assert(ubi->works_count >= 0);
	spin_unlock(&ubi->wl_lock);

	/*
	 * Call the worker function. Do not touch the work structure
	 * after this call as it will have been freed or reused by that
	 * time by the worker function.
	 */
	err = wrk->func(ubi, wrk, 0);
	if (err)
		ubi_err(ubi, "work failed with error code %d", err);
	up_read(&ubi->work_sem);

	return err;
}

/**
 * in_wl_tree - check if wear-leveling entry is present in a WL RB-tree.
 * @e: the wear-leveling entry to check
 * @root: the root of the tree
 *
 * This function returns non-zero if @e is in the @root RB-tree and zero if it
 * is not.
 */
static int in_wl_tree(struct ubi_wl_entry *e, struct rb_root *root)
{
	struct rb_node *p;

	p = root->rb_node;
	while (p) {
		struct ubi_wl_entry *e1;

		e1 = rb_entry(p, struct ubi_wl_entry, u.rb);

		if (e->pnum == e1->pnum) {
			ubi_assert(e == e1);
			return 1;
		}

		if (e->ec < e1->ec)
			p = p->rb_left;
		else if (e->ec > e1->ec)
			p = p->rb_right;
		else {
			ubi_assert(e->pnum != e1->pnum);
			if (e->pnum < e1->pnum)
				p = p->rb_left;
			else
				p = p->rb_right;
		}
	}

	return 0;
}

/**
 * prot_queue_add - add physical eraseblock to the protection queue.
 * @ubi: UBI device description object
 * @e: the physical eraseblock to add
 *
 * This function adds @e to the tail of the protection queue @ubi->pq, where
 * @e will stay for %UBI_PROT_QUEUE_LEN erase operations and will be
 * temporarily protected from the wear-leveling worker. Note, @wl->lock has to
 * be locked.
 */
static void prot_queue_add(struct ubi_device *ubi, struct ubi_wl_entry *e)
{
	int pq_tail = ubi->pq_head - 1;

	if (pq_tail < 0)
		pq_tail = UBI_PROT_QUEUE_LEN - 1;
	ubi_assert(pq_tail >= 0 && pq_tail < UBI_PROT_QUEUE_LEN);
	list_add_tail(&e->u.list, &ubi->pq[pq_tail]);
	dbg_wl("added PEB %d EC %d to the protection queue", e->pnum, e->ec);
}

/**
 * find_wl_entry - find wear-leveling entry closest to a certain erase counter.
 * @ubi: UBI device description object
 * @root: the RB-tree where to look for
 * @diff: maximum possible difference from the smallest erase counter
 *
 * This function looks for a wear leveling entry with erase counter closest to
 * min + @diff, where min is the smallest erase counter.
 */
static struct ubi_wl_entry *find_wl_entry(struct ubi_device *ubi,
					  struct rb_root *root, int diff)
{
	struct rb_node *p;
	struct ubi_wl_entry *e, *prev_e = NULL;
	int max;

	e = rb_entry(rb_first(root), struct ubi_wl_entry, u.rb);
	max = e->ec + diff;

	p = root->rb_node;
	while (p) {
		struct ubi_wl_entry *e1;

		e1 = rb_entry(p, struct ubi_wl_entry, u.rb);
		if (e1->ec >= max)
			p = p->rb_left;
		else {
			p = p->rb_right;
			prev_e = e;
			e = e1;
		}
	}

	/* If no fastmap has been written and this WL entry can be used
	 * as anchor PEB, hold it back and return the second best WL entry
	 * such that fastmap can use the anchor PEB later. */
	if (prev_e && !ubi->fm_disabled &&
	    !ubi->fm && e->pnum < UBI_FM_MAX_START)
		return prev_e;

	return e;
}
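
/*
 * Worked example for the search above (illustrative only, the numbers are
 * made up and ignore the fastmap anchor special case): with free PEBs whose
 * erase counters are {10, 12, 40, 5000} and @diff = WL_FREE_MAX_DIFF/2,
 * find_wl_entry() computes max = 10 + diff and returns the entry with the
 * largest EC below that bound - the EC 40 PEB for a threshold of 4096 (the
 * usual CONFIG_MTD_UBI_WL_THRESHOLD default) - so a grossly over-erased PEB
 * is not picked as the migration target over and over again.
 */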

/**
 * find_mean_wl_entry - find wear-leveling entry with medium erase counter.
 * @ubi: UBI device description object
 * @root: the RB-tree where to look for
 *
 * This function looks for a wear leveling entry with medium erase counter,
 * but not greater than or equal to the lowest erase counter plus
 * %WL_FREE_MAX_DIFF/2.
 */
static struct ubi_wl_entry *find_mean_wl_entry(struct ubi_device *ubi,
					       struct rb_root *root)
{
	struct ubi_wl_entry *e, *first, *last;

	first = rb_entry(rb_first(root), struct ubi_wl_entry, u.rb);
	last = rb_entry(rb_last(root), struct ubi_wl_entry, u.rb);

	if (last->ec - first->ec < WL_FREE_MAX_DIFF) {
		e = rb_entry(root->rb_node, struct ubi_wl_entry, u.rb);

		/* If no fastmap has been written and this WL entry can be used
		 * as anchor PEB, hold it back and return the second best
		 * WL entry such that fastmap can use the anchor PEB later. */
		e = may_reserve_for_fm(ubi, e, root);
	} else
		e = find_wl_entry(ubi, root, WL_FREE_MAX_DIFF/2);

	return e;
}

/**
 * wl_get_wle - get a mean wl entry to be used by ubi_wl_get_peb() or
 * refill_wl_user_pool().
 * @ubi: UBI device description object
 *
 * This function returns a wear leveling entry in case of success and
 * NULL in case of failure.
 */
static struct ubi_wl_entry *wl_get_wle(struct ubi_device *ubi)
{
	struct ubi_wl_entry *e;

	e = find_mean_wl_entry(ubi, &ubi->free);
	if (!e) {
		ubi_err(ubi, "no free eraseblocks");
		return NULL;
	}

	self_check_in_wl_tree(ubi, e, &ubi->free);

	/*
	 * Move the physical eraseblock to the protection queue where it will
	 * be protected from being moved for some time.
	 */
	rb_erase(&e->u.rb, &ubi->free);
	ubi->free_count--;
	dbg_wl("PEB %d EC %d", e->pnum, e->ec);

	return e;
}

/**
 * prot_queue_del - remove a physical eraseblock from the protection queue.
 * @ubi: UBI device description object
 * @pnum: the physical eraseblock to remove
 *
 * This function deletes PEB @pnum from the protection queue and returns zero
 * in case of success and %-ENODEV if the PEB was not found.
 */
static int prot_queue_del(struct ubi_device *ubi, int pnum)
{
	struct ubi_wl_entry *e;

	e = ubi->lookuptbl[pnum];
	if (!e)
		return -ENODEV;

	if (self_check_in_pq(ubi, e))
		return -ENODEV;

	list_del(&e->u.list);
	dbg_wl("deleted PEB %d from the protection queue", e->pnum);
	return 0;
}

/**
 * sync_erase - synchronously erase a physical eraseblock.
 * @ubi: UBI device description object
 * @e: the physical eraseblock to erase
 * @torture: if the physical eraseblock has to be tortured
 *
 * This function returns zero in case of success and a negative error code in
 * case of failure.
 */
static int sync_erase(struct ubi_device *ubi, struct ubi_wl_entry *e,
		      int torture)
{
	int err;
	struct ubi_ec_hdr *ec_hdr;
	unsigned long long ec = e->ec;

	dbg_wl("erase PEB %d, old EC %llu", e->pnum, ec);

	err = self_check_ec(ubi, e->pnum, e->ec);
	if (err)
		return -EINVAL;

	ec_hdr = kzalloc(ubi->ec_hdr_alsize, GFP_NOFS);
	if (!ec_hdr)
		return -ENOMEM;

	err = ubi_io_sync_erase(ubi, e->pnum, torture);
	if (err < 0)
		goto out_free;

	ec += err;
	if (ec > UBI_MAX_ERASECOUNTER) {
		/*
		 * Erase counter overflow. Upgrade UBI and use 64-bit
		 * erase counters internally.
		 */
		ubi_err(ubi, "erase counter overflow at PEB %d, EC %llu",
			e->pnum, ec);
		err = -EINVAL;
		goto out_free;
	}

	dbg_wl("erased PEB %d, new EC %llu", e->pnum, ec);

	ec_hdr->ec = cpu_to_be64(ec);

	err = ubi_io_write_ec_hdr(ubi, e->pnum, ec_hdr);
	if (err)
		goto out_free;

	e->ec = ec;
	spin_lock(&ubi->wl_lock);
	if (e->ec > ubi->max_ec)
		ubi->max_ec = e->ec;
	spin_unlock(&ubi->wl_lock);

out_free:
	kfree(ec_hdr);
	return err;
}

/**
 * serve_prot_queue - check if it is time to stop protecting PEBs.
 * @ubi: UBI device description object
 *
 * This function is called after each erase operation and removes PEBs from the
 * tail of the protection queue. These PEBs have been protected for long enough
 * and should be moved to the used tree.
 */
static void serve_prot_queue(struct ubi_device *ubi)
{
	struct ubi_wl_entry *e, *tmp;
	int count;

	/*
	 * There may be several protected physical eraseblocks to remove,
	 * process them all.
	 */
repeat:
	count = 0;
	spin_lock(&ubi->wl_lock);
	list_for_each_entry_safe(e, tmp, &ubi->pq[ubi->pq_head], u.list) {
		dbg_wl("PEB %d EC %d protection over, move to used tree",
		       e->pnum, e->ec);

		list_del(&e->u.list);
		wl_tree_add(e, &ubi->used);
		if (count++ > 32) {
			/*
			 * Let's be nice and avoid holding the spinlock for
			 * too long.
			 */
			spin_unlock(&ubi->wl_lock);
			cond_resched();
			goto repeat;
		}
	}

	ubi->pq_head += 1;
	if (ubi->pq_head == UBI_PROT_QUEUE_LEN)
		ubi->pq_head = 0;
	ubi_assert(ubi->pq_head >= 0 && ubi->pq_head < UBI_PROT_QUEUE_LEN);
	spin_unlock(&ubi->wl_lock);
}
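
/*
 * Illustrative note on the queue arithmetic (not functional code): @ubi->pq
 * is a circular array of UBI_PROT_QUEUE_LEN list heads. prot_queue_add()
 * links new PEBs at index @pq_head - 1 (wrapping to UBI_PROT_QUEUE_LEN - 1),
 * and serve_prot_queue() drains index @pq_head before advancing it, so every
 * PEB sits out UBI_PROT_QUEUE_LEN erase operations. The explicit wrap-around
 * used in both functions is equivalent to the modulo form:
 *
 *	tail = (pq_head + UBI_PROT_QUEUE_LEN - 1) % UBI_PROT_QUEUE_LEN;
 *	pq_head = (pq_head + 1) % UBI_PROT_QUEUE_LEN;
 */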

/**
 * __schedule_ubi_work - schedule a work.
 * @ubi: UBI device description object
 * @wrk: the work to schedule
 *
 * This function adds a work defined by @wrk to the tail of the pending works
 * list. Can only be used if ubi->work_sem is already held in read mode!
 */
static void __schedule_ubi_work(struct ubi_device *ubi, struct ubi_work *wrk)
{
	spin_lock(&ubi->wl_lock);
	list_add_tail(&wrk->list, &ubi->works);
	ubi_assert(ubi->works_count >= 0);
	ubi->works_count += 1;
#ifndef __UBOOT__
	if (ubi->thread_enabled && !ubi_dbg_is_bgt_disabled(ubi))
		wake_up_process(ubi->bgt_thread);
#else
	int err;
	/*
	 * U-Boot special: We have no bgt_thread in U-Boot!
	 * So just call do_work() here directly.
	 */
	err = do_work(ubi);
	if (err) {
		ubi_err(ubi, "%s: work failed with error code %d",
			ubi->bgt_name, err);
	}
#endif
	spin_unlock(&ubi->wl_lock);
}

/**
 * schedule_ubi_work - schedule a work.
 * @ubi: UBI device description object
 * @wrk: the work to schedule
 *
 * This function adds a work defined by @wrk to the tail of the pending works
 * list.
 */
static void schedule_ubi_work(struct ubi_device *ubi, struct ubi_work *wrk)
{
	down_read(&ubi->work_sem);
	__schedule_ubi_work(ubi, wrk);
	up_read(&ubi->work_sem);
}

static int erase_worker(struct ubi_device *ubi, struct ubi_work *wl_wrk,
			int shutdown);

/**
 * schedule_erase - schedule an erase work.
 * @ubi: UBI device description object
 * @e: the WL entry of the physical eraseblock to erase
 * @vol_id: the volume ID that last used this PEB
 * @lnum: the last used logical eraseblock number for the PEB
 * @torture: if the physical eraseblock has to be tortured
 *
 * This function returns zero in case of success and a %-ENOMEM in case of
 * failure.
 */
static int schedule_erase(struct ubi_device *ubi, struct ubi_wl_entry *e,
			  int vol_id, int lnum, int torture)
{
	struct ubi_work *wl_wrk;

	ubi_assert(e);

	dbg_wl("schedule erasure of PEB %d, EC %d, torture %d",
	       e->pnum, e->ec, torture);

	wl_wrk = kmalloc(sizeof(struct ubi_work), GFP_NOFS);
	if (!wl_wrk)
		return -ENOMEM;

	wl_wrk->func = &erase_worker;
	wl_wrk->e = e;
	wl_wrk->vol_id = vol_id;
	wl_wrk->lnum = lnum;
	wl_wrk->torture = torture;

	schedule_ubi_work(ubi, wl_wrk);
	return 0;
}

/**
 * do_sync_erase - run the erase worker synchronously.
 * @ubi: UBI device description object
 * @e: the WL entry of the physical eraseblock to erase
 * @vol_id: the volume ID that last used this PEB
 * @lnum: the last used logical eraseblock number for the PEB
 * @torture: if the physical eraseblock has to be tortured
 *
 */
static int do_sync_erase(struct ubi_device *ubi, struct ubi_wl_entry *e,
			 int vol_id, int lnum, int torture)
{
	struct ubi_work *wl_wrk;

	dbg_wl("sync erase of PEB %i", e->pnum);

	wl_wrk = kmalloc(sizeof(struct ubi_work), GFP_NOFS);
	if (!wl_wrk)
		return -ENOMEM;

	wl_wrk->e = e;
	wl_wrk->vol_id = vol_id;
	wl_wrk->lnum = lnum;
	wl_wrk->torture = torture;

	return erase_worker(ubi, wl_wrk, 0);
}

/**
 * wear_leveling_worker - wear-leveling worker function.
 * @ubi: UBI device description object
 * @wrk: the work object
 * @shutdown: non-zero if the worker has to free memory and exit
 *            because the WL-subsystem is shutting down
 *
 * This function copies a more worn out physical eraseblock to a less worn out
 * one. Returns zero in case of success and a negative error code in case of
 * failure.
 */
static int wear_leveling_worker(struct ubi_device *ubi, struct ubi_work *wrk,
				int shutdown)
{
	int err, scrubbing = 0, torture = 0, protect = 0, erroneous = 0;
	int vol_id = -1, lnum = -1;
#ifdef CONFIG_MTD_UBI_FASTMAP
	int anchor = wrk->anchor;
#endif
	struct ubi_wl_entry *e1, *e2;
	struct ubi_vid_hdr *vid_hdr;

	kfree(wrk);
	if (shutdown)
		return 0;

	vid_hdr = ubi_zalloc_vid_hdr(ubi, GFP_NOFS);
	if (!vid_hdr)
		return -ENOMEM;

	mutex_lock(&ubi->move_mutex);
	spin_lock(&ubi->wl_lock);
	ubi_assert(!ubi->move_from && !ubi->move_to);
	ubi_assert(!ubi->move_to_put);

	if (!ubi->free.rb_node ||
	    (!ubi->used.rb_node && !ubi->scrub.rb_node)) {
		/*
		 * No free physical eraseblocks? Well, they must be waiting in
		 * the queue to be erased. Cancel movement - it will be
		 * triggered again when a free physical eraseblock appears.
		 *
		 * No used physical eraseblocks? They must be temporarily
		 * protected from being moved. They will be moved to the
		 * @ubi->used tree later and the wear-leveling will be
		 * triggered again.
		 */
		dbg_wl("cancel WL, a list is empty: free %d, used %d",
		       !ubi->free.rb_node, !ubi->used.rb_node);
		goto out_cancel;
	}

#ifdef CONFIG_MTD_UBI_FASTMAP
	/* Check whether we need to produce an anchor PEB */
	if (!anchor)
		anchor = !anchor_pebs_avalible(&ubi->free);

	if (anchor) {
		e1 = find_anchor_wl_entry(&ubi->used);
		if (!e1)
			goto out_cancel;
		e2 = get_peb_for_wl(ubi);
		if (!e2)
			goto out_cancel;

		self_check_in_wl_tree(ubi, e1, &ubi->used);
		rb_erase(&e1->u.rb, &ubi->used);
		dbg_wl("anchor-move PEB %d to PEB %d", e1->pnum, e2->pnum);
	} else if (!ubi->scrub.rb_node) {
#else
	if (!ubi->scrub.rb_node) {
#endif
		/*
		 * Now pick the least worn-out used physical eraseblock and a
		 * highly worn-out free physical eraseblock. If the erase
		 * counters differ much enough, start wear-leveling.
		 */
		e1 = rb_entry(rb_first(&ubi->used), struct ubi_wl_entry, u.rb);
		e2 = get_peb_for_wl(ubi);
		if (!e2)
			goto out_cancel;

		if (!(e2->ec - e1->ec >= UBI_WL_THRESHOLD)) {
			dbg_wl("no WL needed: min used EC %d, max free EC %d",
			       e1->ec, e2->ec);

			/* Give the unused PEB back */
			wl_tree_add(e2, &ubi->free);
			ubi->free_count++;
			goto out_cancel;
		}
		self_check_in_wl_tree(ubi, e1, &ubi->used);
		rb_erase(&e1->u.rb, &ubi->used);
		dbg_wl("move PEB %d EC %d to PEB %d EC %d",
		       e1->pnum, e1->ec, e2->pnum, e2->ec);
	} else {
		/* Perform scrubbing */
		scrubbing = 1;
		e1 = rb_entry(rb_first(&ubi->scrub), struct ubi_wl_entry, u.rb);
		e2 = get_peb_for_wl(ubi);
		if (!e2)
			goto out_cancel;

		self_check_in_wl_tree(ubi, e1, &ubi->scrub);
		rb_erase(&e1->u.rb, &ubi->scrub);
		dbg_wl("scrub PEB %d to PEB %d", e1->pnum, e2->pnum);
	}

	ubi->move_from = e1;
	ubi->move_to = e2;
	spin_unlock(&ubi->wl_lock);

	/*
	 * Now we are going to copy physical eraseblock @e1->pnum to @e2->pnum.
	 * We so far do not know which logical eraseblock our physical
	 * eraseblock (@e1) belongs to. We have to read the volume identifier
	 * header first.
	 *
	 * Note, we are protected from this PEB being unmapped and erased. The
	 * 'ubi_wl_put_peb()' would wait for moving to be finished if the PEB
	 * which is being moved was unmapped.
	 */

	err = ubi_io_read_vid_hdr(ubi, e1->pnum, vid_hdr, 0);
	if (err && err != UBI_IO_BITFLIPS) {
		if (err == UBI_IO_FF) {
			/*
			 * We are trying to move PEB without a VID header. UBI
			 * always writes VID headers shortly after the PEB was
			 * given, so we have a situation when it has not yet
			 * had a chance to write it, because it was preempted.
			 * So add this PEB to the protection queue so far,
			 * because presumably more data will be written there
			 * (including the missing VID header), and then we'll
			 * move it.
			 */
			dbg_wl("PEB %d has no VID header", e1->pnum);
			protect = 1;
			goto out_not_moved;
		} else if (err == UBI_IO_FF_BITFLIPS) {
			/*
			 * The same situation as %UBI_IO_FF, but bit-flips were
			 * detected. It is better to schedule this PEB for
			 * scrubbing.
			 */
			dbg_wl("PEB %d has no VID header but has bit-flips",
			       e1->pnum);
			scrubbing = 1;
			goto out_not_moved;
		}

		ubi_err(ubi, "error %d while reading VID header from PEB %d",
			err, e1->pnum);
		goto out_error;
	}

	vol_id = be32_to_cpu(vid_hdr->vol_id);
	lnum = be32_to_cpu(vid_hdr->lnum);

	err = ubi_eba_copy_leb(ubi, e1->pnum, e2->pnum, vid_hdr);
	if (err) {
		if (err == MOVE_CANCEL_RACE) {
			/*
			 * The LEB has not been moved because the volume is
			 * being deleted or the PEB has been put meanwhile. We
			 * should prevent this PEB from being selected for
			 * wear-leveling movement again, so put it to the
			 * protection queue.
			 */
			protect = 1;
			goto out_not_moved;
		}
		if (err == MOVE_RETRY) {
			scrubbing = 1;
			goto out_not_moved;
		}
		if (err == MOVE_TARGET_BITFLIPS || err == MOVE_TARGET_WR_ERR ||
		    err == MOVE_TARGET_RD_ERR) {
			/*
			 * Target PEB had bit-flips or write error - torture it.
			 */
			torture = 1;
			goto out_not_moved;
		}

		if (err == MOVE_SOURCE_RD_ERR) {
			/*
			 * An error happened while reading the source PEB. Do
			 * not switch to R/O mode in this case, and give the
			 * upper layers a possibility to recover from this,
			 * e.g. by unmapping corresponding LEB. Instead, just
			 * put this PEB to the @ubi->erroneous list to prevent
			 * UBI from trying to move it over and over again.
			 */
			if (ubi->erroneous_peb_count > ubi->max_erroneous) {
				ubi_err(ubi, "too many erroneous eraseblocks (%d)",
					ubi->erroneous_peb_count);
				goto out_error;
			}
			erroneous = 1;
			goto out_not_moved;
		}

		if (err < 0)
			goto out_error;

		ubi_assert(0);
	}

	/* The PEB has been successfully moved */
	if (scrubbing)
		ubi_msg(ubi, "scrubbed PEB %d (LEB %d:%d), data moved to PEB %d",
			e1->pnum, vol_id, lnum, e2->pnum);
	ubi_free_vid_hdr(ubi, vid_hdr);

	spin_lock(&ubi->wl_lock);
	if (!ubi->move_to_put) {
		wl_tree_add(e2, &ubi->used);
		e2 = NULL;
	}
	ubi->move_from = ubi->move_to = NULL;
	ubi->move_to_put = ubi->wl_scheduled = 0;
	spin_unlock(&ubi->wl_lock);

	err = do_sync_erase(ubi, e1, vol_id, lnum, 0);
	if (err) {
		if (e2)
			wl_entry_destroy(ubi, e2);
		goto out_ro;
	}

	if (e2) {
		/*
		 * Well, the target PEB was put meanwhile, schedule it for
		 * erasure.
		 */
		dbg_wl("PEB %d (LEB %d:%d) was put meanwhile, erase",
		       e2->pnum, vol_id, lnum);
		err = do_sync_erase(ubi, e2, vol_id, lnum, 0);
		if (err)
			goto out_ro;
	}

	dbg_wl("done");
	mutex_unlock(&ubi->move_mutex);
	return 0;

	/*
	 * For some reason the LEB was not moved, might be an error, might be
	 * something else. @e1 was not changed, so return it back. @e2 might
	 * have been changed, schedule it for erasure.
	 */
out_not_moved:
	if (vol_id != -1)
		dbg_wl("cancel moving PEB %d (LEB %d:%d) to PEB %d (%d)",
		       e1->pnum, vol_id, lnum, e2->pnum, err);
	else
		dbg_wl("cancel moving PEB %d to PEB %d (%d)",
		       e1->pnum, e2->pnum, err);
	spin_lock(&ubi->wl_lock);
	if (protect)
		prot_queue_add(ubi, e1);
	else if (erroneous) {
		wl_tree_add(e1, &ubi->erroneous);
		ubi->erroneous_peb_count += 1;
	} else if (scrubbing)
		wl_tree_add(e1, &ubi->scrub);
	else
		wl_tree_add(e1, &ubi->used);
	ubi_assert(!ubi->move_to_put);
	ubi->move_from = ubi->move_to = NULL;
	ubi->wl_scheduled = 0;
	spin_unlock(&ubi->wl_lock);

	ubi_free_vid_hdr(ubi, vid_hdr);
	err = do_sync_erase(ubi, e2, vol_id, lnum, torture);
	if (err)
		goto out_ro;

	mutex_unlock(&ubi->move_mutex);
	return 0;

out_error:
	if (vol_id != -1)
		ubi_err(ubi, "error %d while moving PEB %d to PEB %d",
			err, e1->pnum, e2->pnum);
	else
		ubi_err(ubi, "error %d while moving PEB %d (LEB %d:%d) to PEB %d",
			err, e1->pnum, vol_id, lnum, e2->pnum);
	spin_lock(&ubi->wl_lock);
	ubi->move_from = ubi->move_to = NULL;
	ubi->move_to_put = ubi->wl_scheduled = 0;
	spin_unlock(&ubi->wl_lock);

	ubi_free_vid_hdr(ubi, vid_hdr);
	wl_entry_destroy(ubi, e1);
	wl_entry_destroy(ubi, e2);

out_ro:
	ubi_ro_mode(ubi);
	mutex_unlock(&ubi->move_mutex);
	ubi_assert(err != 0);
	return err < 0 ? err : -EIO;

out_cancel:
	ubi->wl_scheduled = 0;
	spin_unlock(&ubi->wl_lock);
	mutex_unlock(&ubi->move_mutex);
	ubi_free_vid_hdr(ubi, vid_hdr);
	return 0;
}

/**
 * ensure_wear_leveling - schedule wear-leveling if it is needed.
 * @ubi: UBI device description object
 * @nested: set to non-zero if this function is called from UBI worker
 *
 * This function checks if it is time to start wear-leveling and schedules it
 * if yes. This function returns zero in case of success and a negative error
 * code in case of failure.
 */
static int ensure_wear_leveling(struct ubi_device *ubi, int nested)
{
	int err = 0;
	struct ubi_wl_entry *e1;
	struct ubi_wl_entry *e2;
	struct ubi_work *wrk;

	spin_lock(&ubi->wl_lock);
	if (ubi->wl_scheduled)
		/* Wear-leveling is already in the work queue */
		goto out_unlock;

	/*
	 * If the ubi->scrub tree is not empty, scrubbing is needed, and the
	 * WL worker has to be scheduled anyway.
	 */
	if (!ubi->scrub.rb_node) {
		if (!ubi->used.rb_node || !ubi->free.rb_node)
			/* No physical eraseblocks - no deal */
			goto out_unlock;

		/*
		 * We schedule wear-leveling only if the difference between the
		 * lowest erase counter of used physical eraseblocks and a high
		 * erase counter of free physical eraseblocks is greater than
		 * %UBI_WL_THRESHOLD.
		 */
		e1 = rb_entry(rb_first(&ubi->used), struct ubi_wl_entry, u.rb);
		e2 = find_wl_entry(ubi, &ubi->free, WL_FREE_MAX_DIFF);

		if (!(e2->ec - e1->ec >= UBI_WL_THRESHOLD))
			goto out_unlock;
		dbg_wl("schedule wear-leveling");
	} else
		dbg_wl("schedule scrubbing");

	ubi->wl_scheduled = 1;
	spin_unlock(&ubi->wl_lock);

	wrk = kmalloc(sizeof(struct ubi_work), GFP_NOFS);
	if (!wrk) {
		err = -ENOMEM;
		goto out_cancel;
	}

	wrk->anchor = 0;
	wrk->func = &wear_leveling_worker;
	if (nested)
		__schedule_ubi_work(ubi, wrk);
	else
		schedule_ubi_work(ubi, wrk);
	return err;

out_cancel:
	spin_lock(&ubi->wl_lock);
	ubi->wl_scheduled = 0;
out_unlock:
	spin_unlock(&ubi->wl_lock);
	return err;
}
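
/*
 * Worked example for the threshold check above (illustrative, numbers made
 * up): with UBI_WL_THRESHOLD at 4096 (a common CONFIG_MTD_UBI_WL_THRESHOLD
 * default), a least-worn used PEB with EC 100 and a free candidate picked by
 * find_wl_entry() with EC 3000 gives a difference of 2900, so no work is
 * scheduled; once the free candidate's EC reaches 4196 or more, the
 * wear-leveling worker is queued.
 */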

/**
 * erase_worker - physical eraseblock erase worker function.
 * @ubi: UBI device description object
 * @wl_wrk: the work object
 * @shutdown: non-zero if the worker has to free memory and exit
 *            because the WL sub-system is shutting down
 *
 * This function erases a physical eraseblock and performs torture testing if
 * needed. It also takes care of marking the physical eraseblock bad if
 * needed. Returns zero in case of success and a negative error code in case of
 * failure.
 */
static int erase_worker(struct ubi_device *ubi, struct ubi_work *wl_wrk,
			int shutdown)
{
	struct ubi_wl_entry *e = wl_wrk->e;
	int pnum = e->pnum;
	int vol_id = wl_wrk->vol_id;
	int lnum = wl_wrk->lnum;
	int err, available_consumed = 0;

	if (shutdown) {
		dbg_wl("cancel erasure of PEB %d EC %d", pnum, e->ec);
		kfree(wl_wrk);
		wl_entry_destroy(ubi, e);
		return 0;
	}

	dbg_wl("erase PEB %d EC %d LEB %d:%d",
	       pnum, e->ec, wl_wrk->vol_id, wl_wrk->lnum);

	err = sync_erase(ubi, e, wl_wrk->torture);
	if (!err) {
		/* Fine, we've erased it successfully */
		kfree(wl_wrk);

		spin_lock(&ubi->wl_lock);
		wl_tree_add(e, &ubi->free);
		ubi->free_count++;
		spin_unlock(&ubi->wl_lock);

		/*
		 * One more erase operation has happened, take care about
		 * protected physical eraseblocks.
1070c91a719dSKyungmin Park */
1071ff94bc40SHeiko Schocher serve_prot_queue(ubi);
1072c91a719dSKyungmin Park 
1073c91a719dSKyungmin Park /* And take care of wear-leveling */
1074ff94bc40SHeiko Schocher err = ensure_wear_leveling(ubi, 1);
1075c91a719dSKyungmin Park return err;
1076c91a719dSKyungmin Park }
1077c91a719dSKyungmin Park 
10780195a7bbSHeiko Schocher ubi_err(ubi, "failed to erase PEB %d, error %d", pnum, err);
1079c91a719dSKyungmin Park kfree(wl_wrk);
1080c91a719dSKyungmin Park 
1081c91a719dSKyungmin Park if (err == -EINTR || err == -ENOMEM || err == -EAGAIN ||
1082c91a719dSKyungmin Park err == -EBUSY) {
1083c91a719dSKyungmin Park int err1;
1084c91a719dSKyungmin Park 
1085c91a719dSKyungmin Park /* Re-schedule the LEB for erasure */
1086ff94bc40SHeiko Schocher err1 = schedule_erase(ubi, e, vol_id, lnum, 0);
1087c91a719dSKyungmin Park if (err1) {
1088c91a719dSKyungmin Park err = err1;
1089c91a719dSKyungmin Park goto out_ro;
1090c91a719dSKyungmin Park }
1091c91a719dSKyungmin Park return err;
1092ff94bc40SHeiko Schocher }
1093ff94bc40SHeiko Schocher 
10940195a7bbSHeiko Schocher wl_entry_destroy(ubi, e);
1095ff94bc40SHeiko Schocher if (err != -EIO)
1096c91a719dSKyungmin Park /*
1097c91a719dSKyungmin Park * If this is not %-EIO, we have no idea what to do. Scheduling
1098c91a719dSKyungmin Park * this physical eraseblock for erasure again would cause
1099ff94bc40SHeiko Schocher * errors again and again. Well, let's switch to R/O mode.
1100c91a719dSKyungmin Park */
1101c91a719dSKyungmin Park goto out_ro;
1102c91a719dSKyungmin Park 
1103c91a719dSKyungmin Park /* It is %-EIO, the PEB went bad */
1104c91a719dSKyungmin Park 
1105c91a719dSKyungmin Park if (!ubi->bad_allowed) {
11060195a7bbSHeiko Schocher ubi_err(ubi, "bad physical eraseblock %d detected", pnum);
1107c91a719dSKyungmin Park goto out_ro;
1108c91a719dSKyungmin Park }
1109c91a719dSKyungmin Park 
1110c91a719dSKyungmin Park spin_lock(&ubi->volumes_lock);
1111c91a719dSKyungmin Park if (ubi->beb_rsvd_pebs == 0) {
1112ff94bc40SHeiko Schocher if (ubi->avail_pebs == 0) {
1113c91a719dSKyungmin Park spin_unlock(&ubi->volumes_lock);
11140195a7bbSHeiko Schocher ubi_err(ubi, "no reserved/available physical eraseblocks");
1115c91a719dSKyungmin Park goto out_ro;
1116c91a719dSKyungmin Park }
1117ff94bc40SHeiko Schocher ubi->avail_pebs -= 1;
1118ff94bc40SHeiko Schocher available_consumed = 1;
1119ff94bc40SHeiko Schocher }
1120c91a719dSKyungmin Park spin_unlock(&ubi->volumes_lock);
1121c91a719dSKyungmin Park 
11220195a7bbSHeiko Schocher ubi_msg(ubi, "mark PEB %d as bad", pnum);
1123c91a719dSKyungmin Park err = ubi_io_mark_bad(ubi, pnum);
1124c91a719dSKyungmin Park if (err)
1125c91a719dSKyungmin Park goto out_ro;
1126c91a719dSKyungmin Park 
1127c91a719dSKyungmin Park spin_lock(&ubi->volumes_lock);
1128ff94bc40SHeiko Schocher if (ubi->beb_rsvd_pebs > 0) {
1129ff94bc40SHeiko Schocher if (available_consumed) {
1130ff94bc40SHeiko Schocher /*
1131ff94bc40SHeiko Schocher * The number of reserved PEBs increased since we last
1132ff94bc40SHeiko Schocher * checked.
1133ff94bc40SHeiko Schocher */ 1134ff94bc40SHeiko Schocher ubi->avail_pebs += 1; 1135ff94bc40SHeiko Schocher available_consumed = 0; 1136ff94bc40SHeiko Schocher } 1137c91a719dSKyungmin Park ubi->beb_rsvd_pebs -= 1; 1138ff94bc40SHeiko Schocher } 1139c91a719dSKyungmin Park ubi->bad_peb_count += 1; 1140c91a719dSKyungmin Park ubi->good_peb_count -= 1; 1141c91a719dSKyungmin Park ubi_calculate_reserved(ubi); 1142ff94bc40SHeiko Schocher if (available_consumed) 11430195a7bbSHeiko Schocher ubi_warn(ubi, "no PEBs in the reserved pool, used an available PEB"); 1144ff94bc40SHeiko Schocher else if (ubi->beb_rsvd_pebs) 11450195a7bbSHeiko Schocher ubi_msg(ubi, "%d PEBs left in the reserve", 11460195a7bbSHeiko Schocher ubi->beb_rsvd_pebs); 1147ff94bc40SHeiko Schocher else 11480195a7bbSHeiko Schocher ubi_warn(ubi, "last PEB from the reserve was used"); 1149c91a719dSKyungmin Park spin_unlock(&ubi->volumes_lock); 1150c91a719dSKyungmin Park 1151c91a719dSKyungmin Park return err; 1152c91a719dSKyungmin Park 1153c91a719dSKyungmin Park out_ro: 1154ff94bc40SHeiko Schocher if (available_consumed) { 1155ff94bc40SHeiko Schocher spin_lock(&ubi->volumes_lock); 1156ff94bc40SHeiko Schocher ubi->avail_pebs += 1; 1157ff94bc40SHeiko Schocher spin_unlock(&ubi->volumes_lock); 1158ff94bc40SHeiko Schocher } 1159c91a719dSKyungmin Park ubi_ro_mode(ubi); 1160c91a719dSKyungmin Park return err; 1161c91a719dSKyungmin Park } 1162c91a719dSKyungmin Park 1163c91a719dSKyungmin Park /** 1164ff94bc40SHeiko Schocher * ubi_wl_put_peb - return a PEB to the wear-leveling sub-system. 1165c91a719dSKyungmin Park * @ubi: UBI device description object 1166ff94bc40SHeiko Schocher * @vol_id: the volume ID that last used this PEB 1167ff94bc40SHeiko Schocher * @lnum: the last used logical eraseblock number for the PEB 1168c91a719dSKyungmin Park * @pnum: physical eraseblock to return 1169c91a719dSKyungmin Park * @torture: if this physical eraseblock has to be tortured 1170c91a719dSKyungmin Park * 1171c91a719dSKyungmin Park * This function is called to return physical eraseblock @pnum to the pool of 1172c91a719dSKyungmin Park * free physical eraseblocks. The @torture flag has to be set if an I/O error 1173c91a719dSKyungmin Park * occurred to this @pnum and it has to be tested. This function returns zero 1174c91a719dSKyungmin Park * in case of success, and a negative error code in case of failure. 1175c91a719dSKyungmin Park */ 1176ff94bc40SHeiko Schocher int ubi_wl_put_peb(struct ubi_device *ubi, int vol_id, int lnum, 1177ff94bc40SHeiko Schocher int pnum, int torture) 1178c91a719dSKyungmin Park { 1179c91a719dSKyungmin Park int err; 1180c91a719dSKyungmin Park struct ubi_wl_entry *e; 1181c91a719dSKyungmin Park 1182c91a719dSKyungmin Park dbg_wl("PEB %d", pnum); 1183c91a719dSKyungmin Park ubi_assert(pnum >= 0); 1184c91a719dSKyungmin Park ubi_assert(pnum < ubi->peb_count); 1185c91a719dSKyungmin Park 11860195a7bbSHeiko Schocher down_read(&ubi->fm_protect); 11870195a7bbSHeiko Schocher 1188c91a719dSKyungmin Park retry: 1189c91a719dSKyungmin Park spin_lock(&ubi->wl_lock); 1190c91a719dSKyungmin Park e = ubi->lookuptbl[pnum]; 1191c91a719dSKyungmin Park if (e == ubi->move_from) { 1192c91a719dSKyungmin Park /* 1193c91a719dSKyungmin Park * User is putting the physical eraseblock which was selected to 1194c91a719dSKyungmin Park * be moved. It will be scheduled for erasure in the 1195c91a719dSKyungmin Park * wear-leveling worker. 
1196c91a719dSKyungmin Park */ 1197c91a719dSKyungmin Park dbg_wl("PEB %d is being moved, wait", pnum); 1198c91a719dSKyungmin Park spin_unlock(&ubi->wl_lock); 1199c91a719dSKyungmin Park 1200c91a719dSKyungmin Park /* Wait for the WL worker by taking the @ubi->move_mutex */ 1201c91a719dSKyungmin Park mutex_lock(&ubi->move_mutex); 1202c91a719dSKyungmin Park mutex_unlock(&ubi->move_mutex); 1203c91a719dSKyungmin Park goto retry; 1204c91a719dSKyungmin Park } else if (e == ubi->move_to) { 1205c91a719dSKyungmin Park /* 1206c91a719dSKyungmin Park * User is putting the physical eraseblock which was selected 1207c91a719dSKyungmin Park * as the target the data is moved to. It may happen if the EBA 1208ff94bc40SHeiko Schocher * sub-system already re-mapped the LEB in 'ubi_eba_copy_leb()' 1209ff94bc40SHeiko Schocher * but the WL sub-system has not put the PEB to the "used" tree 1210ff94bc40SHeiko Schocher * yet, but it is about to do this. So we just set a flag which 1211ff94bc40SHeiko Schocher * will tell the WL worker that the PEB is not needed anymore 1212ff94bc40SHeiko Schocher * and should be scheduled for erasure. 1213c91a719dSKyungmin Park */ 1214c91a719dSKyungmin Park dbg_wl("PEB %d is the target of data moving", pnum); 1215c91a719dSKyungmin Park ubi_assert(!ubi->move_to_put); 1216c91a719dSKyungmin Park ubi->move_to_put = 1; 1217c91a719dSKyungmin Park spin_unlock(&ubi->wl_lock); 12180195a7bbSHeiko Schocher up_read(&ubi->fm_protect); 1219c91a719dSKyungmin Park return 0; 1220c91a719dSKyungmin Park } else { 1221c91a719dSKyungmin Park if (in_wl_tree(e, &ubi->used)) { 1222ff94bc40SHeiko Schocher self_check_in_wl_tree(ubi, e, &ubi->used); 1223ff94bc40SHeiko Schocher rb_erase(&e->u.rb, &ubi->used); 1224c91a719dSKyungmin Park } else if (in_wl_tree(e, &ubi->scrub)) { 1225ff94bc40SHeiko Schocher self_check_in_wl_tree(ubi, e, &ubi->scrub); 1226ff94bc40SHeiko Schocher rb_erase(&e->u.rb, &ubi->scrub); 1227ff94bc40SHeiko Schocher } else if (in_wl_tree(e, &ubi->erroneous)) { 1228ff94bc40SHeiko Schocher self_check_in_wl_tree(ubi, e, &ubi->erroneous); 1229ff94bc40SHeiko Schocher rb_erase(&e->u.rb, &ubi->erroneous); 1230ff94bc40SHeiko Schocher ubi->erroneous_peb_count -= 1; 1231ff94bc40SHeiko Schocher ubi_assert(ubi->erroneous_peb_count >= 0); 1232ff94bc40SHeiko Schocher /* Erroneous PEBs should be tortured */ 1233ff94bc40SHeiko Schocher torture = 1; 1234c91a719dSKyungmin Park } else { 1235ff94bc40SHeiko Schocher err = prot_queue_del(ubi, e->pnum); 1236c91a719dSKyungmin Park if (err) { 12370195a7bbSHeiko Schocher ubi_err(ubi, "PEB %d not found", pnum); 1238c91a719dSKyungmin Park ubi_ro_mode(ubi); 1239c91a719dSKyungmin Park spin_unlock(&ubi->wl_lock); 12400195a7bbSHeiko Schocher up_read(&ubi->fm_protect); 1241c91a719dSKyungmin Park return err; 1242c91a719dSKyungmin Park } 1243c91a719dSKyungmin Park } 1244c91a719dSKyungmin Park } 1245c91a719dSKyungmin Park spin_unlock(&ubi->wl_lock); 1246c91a719dSKyungmin Park 1247ff94bc40SHeiko Schocher err = schedule_erase(ubi, e, vol_id, lnum, torture); 1248c91a719dSKyungmin Park if (err) { 1249c91a719dSKyungmin Park spin_lock(&ubi->wl_lock); 1250c91a719dSKyungmin Park wl_tree_add(e, &ubi->used); 1251c91a719dSKyungmin Park spin_unlock(&ubi->wl_lock); 1252c91a719dSKyungmin Park } 1253c91a719dSKyungmin Park 12540195a7bbSHeiko Schocher up_read(&ubi->fm_protect); 1255c91a719dSKyungmin Park return err; 1256c91a719dSKyungmin Park } 1257c91a719dSKyungmin Park 1258c91a719dSKyungmin Park /** 1259c91a719dSKyungmin Park * ubi_wl_scrub_peb - schedule a physical eraseblock for scrubbing. 
1260c91a719dSKyungmin Park * @ubi: UBI device description object 1261c91a719dSKyungmin Park * @pnum: the physical eraseblock to schedule 1262c91a719dSKyungmin Park * 1263c91a719dSKyungmin Park * If a bit-flip in a physical eraseblock is detected, this physical eraseblock 1264c91a719dSKyungmin Park * needs scrubbing. This function schedules a physical eraseblock for 1265c91a719dSKyungmin Park * scrubbing which is done in background. This function returns zero in case of 1266c91a719dSKyungmin Park * success and a negative error code in case of failure. 1267c91a719dSKyungmin Park */ 1268c91a719dSKyungmin Park int ubi_wl_scrub_peb(struct ubi_device *ubi, int pnum) 1269c91a719dSKyungmin Park { 1270c91a719dSKyungmin Park struct ubi_wl_entry *e; 1271c91a719dSKyungmin Park 12720195a7bbSHeiko Schocher ubi_msg(ubi, "schedule PEB %d for scrubbing", pnum); 1273c91a719dSKyungmin Park 1274c91a719dSKyungmin Park retry: 1275c91a719dSKyungmin Park spin_lock(&ubi->wl_lock); 1276c91a719dSKyungmin Park e = ubi->lookuptbl[pnum]; 1277ff94bc40SHeiko Schocher if (e == ubi->move_from || in_wl_tree(e, &ubi->scrub) || 1278ff94bc40SHeiko Schocher in_wl_tree(e, &ubi->erroneous)) { 1279c91a719dSKyungmin Park spin_unlock(&ubi->wl_lock); 1280c91a719dSKyungmin Park return 0; 1281c91a719dSKyungmin Park } 1282c91a719dSKyungmin Park 1283c91a719dSKyungmin Park if (e == ubi->move_to) { 1284c91a719dSKyungmin Park /* 1285c91a719dSKyungmin Park * This physical eraseblock was used to move data to. The data 1286c91a719dSKyungmin Park * was moved but the PEB was not yet inserted to the proper 1287c91a719dSKyungmin Park * tree. We should just wait a little and let the WL worker 1288c91a719dSKyungmin Park * proceed. 1289c91a719dSKyungmin Park */ 1290c91a719dSKyungmin Park spin_unlock(&ubi->wl_lock); 1291c91a719dSKyungmin Park dbg_wl("the PEB %d is not in proper tree, retry", pnum); 1292c91a719dSKyungmin Park yield(); 1293c91a719dSKyungmin Park goto retry; 1294c91a719dSKyungmin Park } 1295c91a719dSKyungmin Park 1296c91a719dSKyungmin Park if (in_wl_tree(e, &ubi->used)) { 1297ff94bc40SHeiko Schocher self_check_in_wl_tree(ubi, e, &ubi->used); 1298ff94bc40SHeiko Schocher rb_erase(&e->u.rb, &ubi->used); 1299c91a719dSKyungmin Park } else { 1300c91a719dSKyungmin Park int err; 1301c91a719dSKyungmin Park 1302ff94bc40SHeiko Schocher err = prot_queue_del(ubi, e->pnum); 1303c91a719dSKyungmin Park if (err) { 13040195a7bbSHeiko Schocher ubi_err(ubi, "PEB %d not found", pnum); 1305c91a719dSKyungmin Park ubi_ro_mode(ubi); 1306c91a719dSKyungmin Park spin_unlock(&ubi->wl_lock); 1307c91a719dSKyungmin Park return err; 1308c91a719dSKyungmin Park } 1309c91a719dSKyungmin Park } 1310c91a719dSKyungmin Park 1311c91a719dSKyungmin Park wl_tree_add(e, &ubi->scrub); 1312c91a719dSKyungmin Park spin_unlock(&ubi->wl_lock); 1313c91a719dSKyungmin Park 1314c91a719dSKyungmin Park /* 1315c91a719dSKyungmin Park * Technically scrubbing is the same as wear-leveling, so it is done 1316c91a719dSKyungmin Park * by the WL worker. 1317c91a719dSKyungmin Park */ 1318ff94bc40SHeiko Schocher return ensure_wear_leveling(ubi, 0); 1319c91a719dSKyungmin Park } 1320c91a719dSKyungmin Park 1321c91a719dSKyungmin Park /** 1322c91a719dSKyungmin Park * ubi_wl_flush - flush all pending works. 
1323c91a719dSKyungmin Park * @ubi: UBI device description object 1324ff94bc40SHeiko Schocher * @vol_id: the volume id to flush for 1325ff94bc40SHeiko Schocher * @lnum: the logical eraseblock number to flush for 1326c91a719dSKyungmin Park * 1327ff94bc40SHeiko Schocher * This function executes all pending works for a particular volume id / 1328ff94bc40SHeiko Schocher * logical eraseblock number pair. If either value is set to %UBI_ALL, then it 1329ff94bc40SHeiko Schocher * acts as a wildcard for all of the corresponding volume numbers or logical 1330ff94bc40SHeiko Schocher * eraseblock numbers. It returns zero in case of success and a negative error 1331ff94bc40SHeiko Schocher * code in case of failure. 1332c91a719dSKyungmin Park */ 1333ff94bc40SHeiko Schocher int ubi_wl_flush(struct ubi_device *ubi, int vol_id, int lnum) 1334c91a719dSKyungmin Park { 1335ff94bc40SHeiko Schocher int err = 0; 1336ff94bc40SHeiko Schocher int found = 1; 1337c91a719dSKyungmin Park 1338c91a719dSKyungmin Park /* 1339ff94bc40SHeiko Schocher * Erase while the pending works queue is not empty, but not more than 1340c91a719dSKyungmin Park * the number of currently pending works. 1341c91a719dSKyungmin Park */ 1342ff94bc40SHeiko Schocher dbg_wl("flush pending work for LEB %d:%d (%d pending works)", 1343ff94bc40SHeiko Schocher vol_id, lnum, ubi->works_count); 1344ff94bc40SHeiko Schocher 1345ff94bc40SHeiko Schocher while (found) { 13460195a7bbSHeiko Schocher struct ubi_work *wrk, *tmp; 1347ff94bc40SHeiko Schocher found = 0; 1348ff94bc40SHeiko Schocher 1349ff94bc40SHeiko Schocher down_read(&ubi->work_sem); 1350ff94bc40SHeiko Schocher spin_lock(&ubi->wl_lock); 13510195a7bbSHeiko Schocher list_for_each_entry_safe(wrk, tmp, &ubi->works, list) { 1352ff94bc40SHeiko Schocher if ((vol_id == UBI_ALL || wrk->vol_id == vol_id) && 1353ff94bc40SHeiko Schocher (lnum == UBI_ALL || wrk->lnum == lnum)) { 1354ff94bc40SHeiko Schocher list_del(&wrk->list); 1355ff94bc40SHeiko Schocher ubi->works_count -= 1; 1356ff94bc40SHeiko Schocher ubi_assert(ubi->works_count >= 0); 1357ff94bc40SHeiko Schocher spin_unlock(&ubi->wl_lock); 1358ff94bc40SHeiko Schocher 1359ff94bc40SHeiko Schocher err = wrk->func(ubi, wrk, 0); 1360ff94bc40SHeiko Schocher if (err) { 1361ff94bc40SHeiko Schocher up_read(&ubi->work_sem); 1362c91a719dSKyungmin Park return err; 1363c91a719dSKyungmin Park } 1364c91a719dSKyungmin Park 1365ff94bc40SHeiko Schocher spin_lock(&ubi->wl_lock); 1366ff94bc40SHeiko Schocher found = 1; 1367ff94bc40SHeiko Schocher break; 1368ff94bc40SHeiko Schocher } 1369ff94bc40SHeiko Schocher } 1370ff94bc40SHeiko Schocher spin_unlock(&ubi->wl_lock); 1371ff94bc40SHeiko Schocher up_read(&ubi->work_sem); 1372ff94bc40SHeiko Schocher } 1373ff94bc40SHeiko Schocher 1374c91a719dSKyungmin Park /* 1375c91a719dSKyungmin Park * Make sure all the works which have been done in parallel are 1376c91a719dSKyungmin Park * finished. 1377c91a719dSKyungmin Park */ 1378c91a719dSKyungmin Park down_write(&ubi->work_sem); 1379c91a719dSKyungmin Park up_write(&ubi->work_sem); 1380c91a719dSKyungmin Park 1381c91a719dSKyungmin Park return err; 1382c91a719dSKyungmin Park } 1383c91a719dSKyungmin Park 1384c91a719dSKyungmin Park /** 1385c91a719dSKyungmin Park * tree_destroy - destroy an RB-tree. 
13860195a7bbSHeiko Schocher * @ubi: UBI device description object 1387c91a719dSKyungmin Park * @root: the root of the tree to destroy 1388c91a719dSKyungmin Park */ 13890195a7bbSHeiko Schocher static void tree_destroy(struct ubi_device *ubi, struct rb_root *root) 1390c91a719dSKyungmin Park { 1391c91a719dSKyungmin Park struct rb_node *rb; 1392c91a719dSKyungmin Park struct ubi_wl_entry *e; 1393c91a719dSKyungmin Park 1394c91a719dSKyungmin Park rb = root->rb_node; 1395c91a719dSKyungmin Park while (rb) { 1396c91a719dSKyungmin Park if (rb->rb_left) 1397c91a719dSKyungmin Park rb = rb->rb_left; 1398c91a719dSKyungmin Park else if (rb->rb_right) 1399c91a719dSKyungmin Park rb = rb->rb_right; 1400c91a719dSKyungmin Park else { 1401ff94bc40SHeiko Schocher e = rb_entry(rb, struct ubi_wl_entry, u.rb); 1402c91a719dSKyungmin Park 1403c91a719dSKyungmin Park rb = rb_parent(rb); 1404c91a719dSKyungmin Park if (rb) { 1405ff94bc40SHeiko Schocher if (rb->rb_left == &e->u.rb) 1406c91a719dSKyungmin Park rb->rb_left = NULL; 1407c91a719dSKyungmin Park else 1408c91a719dSKyungmin Park rb->rb_right = NULL; 1409c91a719dSKyungmin Park } 1410c91a719dSKyungmin Park 14110195a7bbSHeiko Schocher wl_entry_destroy(ubi, e); 1412c91a719dSKyungmin Park } 1413c91a719dSKyungmin Park } 1414c91a719dSKyungmin Park } 1415c91a719dSKyungmin Park 1416c91a719dSKyungmin Park /** 1417c91a719dSKyungmin Park * ubi_thread - UBI background thread. 1418c91a719dSKyungmin Park * @u: the UBI device description object pointer 1419c91a719dSKyungmin Park */ 1420c91a719dSKyungmin Park int ubi_thread(void *u) 1421c91a719dSKyungmin Park { 1422c91a719dSKyungmin Park int failures = 0; 1423c91a719dSKyungmin Park struct ubi_device *ubi = u; 1424c91a719dSKyungmin Park 14250195a7bbSHeiko Schocher ubi_msg(ubi, "background thread \"%s\" started, PID %d", 1426c91a719dSKyungmin Park ubi->bgt_name, task_pid_nr(current)); 1427c91a719dSKyungmin Park 1428c91a719dSKyungmin Park set_freezable(); 1429c91a719dSKyungmin Park for (;;) { 1430c91a719dSKyungmin Park int err; 1431c91a719dSKyungmin Park 1432c91a719dSKyungmin Park if (kthread_should_stop()) 1433c91a719dSKyungmin Park break; 1434c91a719dSKyungmin Park 1435c91a719dSKyungmin Park if (try_to_freeze()) 1436c91a719dSKyungmin Park continue; 1437c91a719dSKyungmin Park 1438c91a719dSKyungmin Park spin_lock(&ubi->wl_lock); 1439c91a719dSKyungmin Park if (list_empty(&ubi->works) || ubi->ro_mode || 1440ff94bc40SHeiko Schocher !ubi->thread_enabled || ubi_dbg_is_bgt_disabled(ubi)) { 1441c91a719dSKyungmin Park set_current_state(TASK_INTERRUPTIBLE); 1442c91a719dSKyungmin Park spin_unlock(&ubi->wl_lock); 1443c91a719dSKyungmin Park schedule(); 1444c91a719dSKyungmin Park continue; 1445c91a719dSKyungmin Park } 1446c91a719dSKyungmin Park spin_unlock(&ubi->wl_lock); 1447c91a719dSKyungmin Park 1448c91a719dSKyungmin Park err = do_work(ubi); 1449c91a719dSKyungmin Park if (err) { 14500195a7bbSHeiko Schocher ubi_err(ubi, "%s: work failed with error code %d", 1451c91a719dSKyungmin Park ubi->bgt_name, err); 1452c91a719dSKyungmin Park if (failures++ > WL_MAX_FAILURES) { 1453c91a719dSKyungmin Park /* 1454c91a719dSKyungmin Park * Too many failures, disable the thread and 1455c91a719dSKyungmin Park * switch to read-only mode. 
1456c91a719dSKyungmin Park */ 14570195a7bbSHeiko Schocher ubi_msg(ubi, "%s: %d consecutive failures", 1458c91a719dSKyungmin Park ubi->bgt_name, WL_MAX_FAILURES); 1459c91a719dSKyungmin Park ubi_ro_mode(ubi); 1460ff94bc40SHeiko Schocher ubi->thread_enabled = 0; 1461ff94bc40SHeiko Schocher continue; 1462c91a719dSKyungmin Park } 1463c91a719dSKyungmin Park } else 1464c91a719dSKyungmin Park failures = 0; 1465c91a719dSKyungmin Park 1466c91a719dSKyungmin Park cond_resched(); 1467c91a719dSKyungmin Park } 1468c91a719dSKyungmin Park 1469c91a719dSKyungmin Park dbg_wl("background thread \"%s\" is killed", ubi->bgt_name); 1470c91a719dSKyungmin Park return 0; 1471c91a719dSKyungmin Park } 1472c91a719dSKyungmin Park 1473c91a719dSKyungmin Park /** 14740195a7bbSHeiko Schocher * shutdown_work - shutdown all pending works. 1475c91a719dSKyungmin Park * @ubi: UBI device description object 1476c91a719dSKyungmin Park */ 14770195a7bbSHeiko Schocher static void shutdown_work(struct ubi_device *ubi) 1478c91a719dSKyungmin Park { 14790195a7bbSHeiko Schocher #ifdef CONFIG_MTD_UBI_FASTMAP 14800195a7bbSHeiko Schocher #ifndef __UBOOT__ 14810195a7bbSHeiko Schocher flush_work(&ubi->fm_work); 14820195a7bbSHeiko Schocher #else 14830195a7bbSHeiko Schocher /* in U-Boot, we have all work done */ 14840195a7bbSHeiko Schocher #endif 14850195a7bbSHeiko Schocher #endif 1486c91a719dSKyungmin Park while (!list_empty(&ubi->works)) { 1487c91a719dSKyungmin Park struct ubi_work *wrk; 1488c91a719dSKyungmin Park 1489c91a719dSKyungmin Park wrk = list_entry(ubi->works.next, struct ubi_work, list); 1490c91a719dSKyungmin Park list_del(&wrk->list); 1491c91a719dSKyungmin Park wrk->func(ubi, wrk, 1); 1492c91a719dSKyungmin Park ubi->works_count -= 1; 1493c91a719dSKyungmin Park ubi_assert(ubi->works_count >= 0); 1494c91a719dSKyungmin Park } 1495c91a719dSKyungmin Park } 1496c91a719dSKyungmin Park 1497c91a719dSKyungmin Park /** 1498ff94bc40SHeiko Schocher * ubi_wl_init - initialize the WL sub-system using attaching information. 1499c91a719dSKyungmin Park * @ubi: UBI device description object 1500ff94bc40SHeiko Schocher * @ai: attaching information 1501c91a719dSKyungmin Park * 1502c91a719dSKyungmin Park * This function returns zero in case of success, and a negative error code in 1503c91a719dSKyungmin Park * case of failure. 
1504c91a719dSKyungmin Park */ 1505ff94bc40SHeiko Schocher int ubi_wl_init(struct ubi_device *ubi, struct ubi_attach_info *ai) 1506c91a719dSKyungmin Park { 1507ff94bc40SHeiko Schocher int err, i, reserved_pebs, found_pebs = 0; 1508c91a719dSKyungmin Park struct rb_node *rb1, *rb2; 1509ff94bc40SHeiko Schocher struct ubi_ainf_volume *av; 1510ff94bc40SHeiko Schocher struct ubi_ainf_peb *aeb, *tmp; 1511c91a719dSKyungmin Park struct ubi_wl_entry *e; 1512c91a719dSKyungmin Park 1513ff94bc40SHeiko Schocher ubi->used = ubi->erroneous = ubi->free = ubi->scrub = RB_ROOT; 1514c91a719dSKyungmin Park spin_lock_init(&ubi->wl_lock); 1515c91a719dSKyungmin Park mutex_init(&ubi->move_mutex); 1516c91a719dSKyungmin Park init_rwsem(&ubi->work_sem); 1517ff94bc40SHeiko Schocher ubi->max_ec = ai->max_ec; 1518c91a719dSKyungmin Park INIT_LIST_HEAD(&ubi->works); 1519c91a719dSKyungmin Park 1520c91a719dSKyungmin Park sprintf(ubi->bgt_name, UBI_BGT_NAME_PATTERN, ubi->ubi_num); 1521c91a719dSKyungmin Park 1522c91a719dSKyungmin Park err = -ENOMEM; 1523c91a719dSKyungmin Park ubi->lookuptbl = kzalloc(ubi->peb_count * sizeof(void *), GFP_KERNEL); 1524c91a719dSKyungmin Park if (!ubi->lookuptbl) 1525c91a719dSKyungmin Park return err; 1526c91a719dSKyungmin Park 1527ff94bc40SHeiko Schocher for (i = 0; i < UBI_PROT_QUEUE_LEN; i++) 1528ff94bc40SHeiko Schocher INIT_LIST_HEAD(&ubi->pq[i]); 1529ff94bc40SHeiko Schocher ubi->pq_head = 0; 1530ff94bc40SHeiko Schocher 1531*68fc4490SHeiko Schocher ubi->free_count = 0; 1532ff94bc40SHeiko Schocher list_for_each_entry_safe(aeb, tmp, &ai->erase, u.list) { 1533c91a719dSKyungmin Park cond_resched(); 1534c91a719dSKyungmin Park 1535c91a719dSKyungmin Park e = kmem_cache_alloc(ubi_wl_entry_slab, GFP_KERNEL); 1536c91a719dSKyungmin Park if (!e) 1537c91a719dSKyungmin Park goto out_free; 1538c91a719dSKyungmin Park 1539ff94bc40SHeiko Schocher e->pnum = aeb->pnum; 1540ff94bc40SHeiko Schocher e->ec = aeb->ec; 1541c91a719dSKyungmin Park ubi->lookuptbl[e->pnum] = e; 1542ff94bc40SHeiko Schocher if (schedule_erase(ubi, e, aeb->vol_id, aeb->lnum, 0)) { 15430195a7bbSHeiko Schocher wl_entry_destroy(ubi, e); 1544c91a719dSKyungmin Park goto out_free; 1545c91a719dSKyungmin Park } 1546ff94bc40SHeiko Schocher 1547ff94bc40SHeiko Schocher found_pebs++; 1548c91a719dSKyungmin Park } 1549c91a719dSKyungmin Park 1550ff94bc40SHeiko Schocher list_for_each_entry(aeb, &ai->free, u.list) { 1551c91a719dSKyungmin Park cond_resched(); 1552c91a719dSKyungmin Park 1553c91a719dSKyungmin Park e = kmem_cache_alloc(ubi_wl_entry_slab, GFP_KERNEL); 1554c91a719dSKyungmin Park if (!e) 1555c91a719dSKyungmin Park goto out_free; 1556c91a719dSKyungmin Park 1557ff94bc40SHeiko Schocher e->pnum = aeb->pnum; 1558ff94bc40SHeiko Schocher e->ec = aeb->ec; 1559c91a719dSKyungmin Park ubi_assert(e->ec >= 0); 1560ff94bc40SHeiko Schocher 1561c91a719dSKyungmin Park wl_tree_add(e, &ubi->free); 1562ff94bc40SHeiko Schocher ubi->free_count++; 1563ff94bc40SHeiko Schocher 1564c91a719dSKyungmin Park ubi->lookuptbl[e->pnum] = e; 1565ff94bc40SHeiko Schocher 1566ff94bc40SHeiko Schocher found_pebs++; 1567c91a719dSKyungmin Park } 1568c91a719dSKyungmin Park 1569ff94bc40SHeiko Schocher ubi_rb_for_each_entry(rb1, av, &ai->volumes, rb) { 1570ff94bc40SHeiko Schocher ubi_rb_for_each_entry(rb2, aeb, &av->root, u.rb) { 1571c91a719dSKyungmin Park cond_resched(); 1572c91a719dSKyungmin Park 1573c91a719dSKyungmin Park e = kmem_cache_alloc(ubi_wl_entry_slab, GFP_KERNEL); 1574c91a719dSKyungmin Park if (!e) 1575c91a719dSKyungmin Park goto out_free; 1576c91a719dSKyungmin Park 
1577ff94bc40SHeiko Schocher e->pnum = aeb->pnum;
1578ff94bc40SHeiko Schocher e->ec = aeb->ec;
1579c91a719dSKyungmin Park ubi->lookuptbl[e->pnum] = e;
1580c91a719dSKyungmin Park 
1581ff94bc40SHeiko Schocher if (!aeb->scrub) {
1582c91a719dSKyungmin Park dbg_wl("add PEB %d EC %d to the used tree",
1583c91a719dSKyungmin Park e->pnum, e->ec);
1584c91a719dSKyungmin Park wl_tree_add(e, &ubi->used);
1585c91a719dSKyungmin Park } else {
1586c91a719dSKyungmin Park dbg_wl("add PEB %d EC %d to the scrub tree",
1587c91a719dSKyungmin Park e->pnum, e->ec);
1588c91a719dSKyungmin Park wl_tree_add(e, &ubi->scrub);
1589c91a719dSKyungmin Park }
1590ff94bc40SHeiko Schocher 
1591ff94bc40SHeiko Schocher found_pebs++;
1592c91a719dSKyungmin Park }
1593c91a719dSKyungmin Park }
1594c91a719dSKyungmin Park 
1595ff94bc40SHeiko Schocher dbg_wl("found %i PEBs", found_pebs);
1596ff94bc40SHeiko Schocher 
15970195a7bbSHeiko Schocher if (ubi->fm) {
15980195a7bbSHeiko Schocher ubi_assert(ubi->good_peb_count ==
1599ff94bc40SHeiko Schocher found_pebs + ubi->fm->used_blocks);
16000195a7bbSHeiko Schocher 
16010195a7bbSHeiko Schocher for (i = 0; i < ubi->fm->used_blocks; i++) {
16020195a7bbSHeiko Schocher e = ubi->fm->e[i];
16030195a7bbSHeiko Schocher ubi->lookuptbl[e->pnum] = e;
16040195a7bbSHeiko Schocher }
16050195a7bbSHeiko Schocher }
1606ff94bc40SHeiko Schocher else
1607ff94bc40SHeiko Schocher ubi_assert(ubi->good_peb_count == found_pebs);
1608ff94bc40SHeiko Schocher 
1609ff94bc40SHeiko Schocher reserved_pebs = WL_RESERVED_PEBS;
16100195a7bbSHeiko Schocher ubi_fastmap_init(ubi, &reserved_pebs);
1611ff94bc40SHeiko Schocher 
1612ff94bc40SHeiko Schocher if (ubi->avail_pebs < reserved_pebs) {
16130195a7bbSHeiko Schocher ubi_err(ubi, "not enough physical eraseblocks (%d, need %d)",
1614ff94bc40SHeiko Schocher ubi->avail_pebs, reserved_pebs);
1615ff94bc40SHeiko Schocher if (ubi->corr_peb_count)
16160195a7bbSHeiko Schocher ubi_err(ubi, "%d PEBs are corrupted and not used",
1617ff94bc40SHeiko Schocher ubi->corr_peb_count);
1618c91a719dSKyungmin Park goto out_free;
1619c91a719dSKyungmin Park }
1620ff94bc40SHeiko Schocher ubi->avail_pebs -= reserved_pebs;
1621ff94bc40SHeiko Schocher ubi->rsvd_pebs += reserved_pebs;
1622c91a719dSKyungmin Park 
1623c91a719dSKyungmin Park /* Schedule wear-leveling if needed */
1624ff94bc40SHeiko Schocher err = ensure_wear_leveling(ubi, 0);
1625c91a719dSKyungmin Park if (err)
1626c91a719dSKyungmin Park goto out_free;
1627c91a719dSKyungmin Park 
1628c91a719dSKyungmin Park return 0;
1629c91a719dSKyungmin Park 
1630c91a719dSKyungmin Park out_free:
16310195a7bbSHeiko Schocher shutdown_work(ubi);
16320195a7bbSHeiko Schocher tree_destroy(ubi, &ubi->used);
16330195a7bbSHeiko Schocher tree_destroy(ubi, &ubi->free);
16340195a7bbSHeiko Schocher tree_destroy(ubi, &ubi->scrub);
1635c91a719dSKyungmin Park kfree(ubi->lookuptbl);
1636c91a719dSKyungmin Park return err;
1637c91a719dSKyungmin Park }
1638c91a719dSKyungmin Park 
1639c91a719dSKyungmin Park /**
1640ff94bc40SHeiko Schocher * protection_queue_destroy - destroy the protection queue.
1641c91a719dSKyungmin Park * @ubi: UBI device description object 1642c91a719dSKyungmin Park */ 1643ff94bc40SHeiko Schocher static void protection_queue_destroy(struct ubi_device *ubi) 1644c91a719dSKyungmin Park { 1645ff94bc40SHeiko Schocher int i; 1646ff94bc40SHeiko Schocher struct ubi_wl_entry *e, *tmp; 1647c91a719dSKyungmin Park 1648ff94bc40SHeiko Schocher for (i = 0; i < UBI_PROT_QUEUE_LEN; ++i) { 1649ff94bc40SHeiko Schocher list_for_each_entry_safe(e, tmp, &ubi->pq[i], u.list) { 1650ff94bc40SHeiko Schocher list_del(&e->u.list); 16510195a7bbSHeiko Schocher wl_entry_destroy(ubi, e); 1652c91a719dSKyungmin Park } 1653c91a719dSKyungmin Park } 1654c91a719dSKyungmin Park } 1655c91a719dSKyungmin Park 1656c91a719dSKyungmin Park /** 1657ff94bc40SHeiko Schocher * ubi_wl_close - close the wear-leveling sub-system. 1658c91a719dSKyungmin Park * @ubi: UBI device description object 1659c91a719dSKyungmin Park */ 1660c91a719dSKyungmin Park void ubi_wl_close(struct ubi_device *ubi) 1661c91a719dSKyungmin Park { 1662ff94bc40SHeiko Schocher dbg_wl("close the WL sub-system"); 16630195a7bbSHeiko Schocher ubi_fastmap_close(ubi); 16640195a7bbSHeiko Schocher shutdown_work(ubi); 1665ff94bc40SHeiko Schocher protection_queue_destroy(ubi); 16660195a7bbSHeiko Schocher tree_destroy(ubi, &ubi->used); 16670195a7bbSHeiko Schocher tree_destroy(ubi, &ubi->erroneous); 16680195a7bbSHeiko Schocher tree_destroy(ubi, &ubi->free); 16690195a7bbSHeiko Schocher tree_destroy(ubi, &ubi->scrub); 1670c91a719dSKyungmin Park kfree(ubi->lookuptbl); 1671c91a719dSKyungmin Park } 1672c91a719dSKyungmin Park 1673c91a719dSKyungmin Park /** 1674ff94bc40SHeiko Schocher * self_check_ec - make sure that the erase counter of a PEB is correct. 1675c91a719dSKyungmin Park * @ubi: UBI device description object 1676c91a719dSKyungmin Park * @pnum: the physical eraseblock number to check 1677c91a719dSKyungmin Park * @ec: the erase counter to check 1678c91a719dSKyungmin Park * 1679c91a719dSKyungmin Park * This function returns zero if the erase counter of physical eraseblock @pnum 1680ff94bc40SHeiko Schocher * is equivalent to @ec, and a negative error code if not or if an error 1681c91a719dSKyungmin Park * occurred. 
1682c91a719dSKyungmin Park */ 1683ff94bc40SHeiko Schocher static int self_check_ec(struct ubi_device *ubi, int pnum, int ec) 1684c91a719dSKyungmin Park { 1685c91a719dSKyungmin Park int err; 1686c91a719dSKyungmin Park long long read_ec; 1687c91a719dSKyungmin Park struct ubi_ec_hdr *ec_hdr; 1688c91a719dSKyungmin Park 1689ff94bc40SHeiko Schocher if (!ubi_dbg_chk_gen(ubi)) 1690ff94bc40SHeiko Schocher return 0; 1691ff94bc40SHeiko Schocher 1692c91a719dSKyungmin Park ec_hdr = kzalloc(ubi->ec_hdr_alsize, GFP_NOFS); 1693c91a719dSKyungmin Park if (!ec_hdr) 1694c91a719dSKyungmin Park return -ENOMEM; 1695c91a719dSKyungmin Park 1696c91a719dSKyungmin Park err = ubi_io_read_ec_hdr(ubi, pnum, ec_hdr, 0); 1697c91a719dSKyungmin Park if (err && err != UBI_IO_BITFLIPS) { 1698c91a719dSKyungmin Park /* The header does not have to exist */ 1699c91a719dSKyungmin Park err = 0; 1700c91a719dSKyungmin Park goto out_free; 1701c91a719dSKyungmin Park } 1702c91a719dSKyungmin Park 1703c91a719dSKyungmin Park read_ec = be64_to_cpu(ec_hdr->ec); 1704ff94bc40SHeiko Schocher if (ec != read_ec && read_ec - ec > 1) { 17050195a7bbSHeiko Schocher ubi_err(ubi, "self-check failed for PEB %d", pnum); 17060195a7bbSHeiko Schocher ubi_err(ubi, "read EC is %lld, should be %d", read_ec, ec); 1707ff94bc40SHeiko Schocher dump_stack(); 1708c91a719dSKyungmin Park err = 1; 1709c91a719dSKyungmin Park } else 1710c91a719dSKyungmin Park err = 0; 1711c91a719dSKyungmin Park 1712c91a719dSKyungmin Park out_free: 1713c91a719dSKyungmin Park kfree(ec_hdr); 1714c91a719dSKyungmin Park return err; 1715c91a719dSKyungmin Park } 1716c91a719dSKyungmin Park 1717c91a719dSKyungmin Park /** 1718ff94bc40SHeiko Schocher * self_check_in_wl_tree - check that wear-leveling entry is in WL RB-tree. 1719ff94bc40SHeiko Schocher * @ubi: UBI device description object 1720c91a719dSKyungmin Park * @e: the wear-leveling entry to check 1721c91a719dSKyungmin Park * @root: the root of the tree 1722c91a719dSKyungmin Park * 1723ff94bc40SHeiko Schocher * This function returns zero if @e is in the @root RB-tree and %-EINVAL if it 1724c91a719dSKyungmin Park * is not. 1725c91a719dSKyungmin Park */ 1726ff94bc40SHeiko Schocher static int self_check_in_wl_tree(const struct ubi_device *ubi, 1727ff94bc40SHeiko Schocher struct ubi_wl_entry *e, struct rb_root *root) 1728c91a719dSKyungmin Park { 1729ff94bc40SHeiko Schocher if (!ubi_dbg_chk_gen(ubi)) 1730ff94bc40SHeiko Schocher return 0; 1731ff94bc40SHeiko Schocher 1732c91a719dSKyungmin Park if (in_wl_tree(e, root)) 1733c91a719dSKyungmin Park return 0; 1734c91a719dSKyungmin Park 17350195a7bbSHeiko Schocher ubi_err(ubi, "self-check failed for PEB %d, EC %d, RB-tree %p ", 1736c91a719dSKyungmin Park e->pnum, e->ec, root); 1737ff94bc40SHeiko Schocher dump_stack(); 1738ff94bc40SHeiko Schocher return -EINVAL; 1739c91a719dSKyungmin Park } 1740c91a719dSKyungmin Park 1741ff94bc40SHeiko Schocher /** 1742ff94bc40SHeiko Schocher * self_check_in_pq - check if wear-leveling entry is in the protection 1743ff94bc40SHeiko Schocher * queue. 1744ff94bc40SHeiko Schocher * @ubi: UBI device description object 1745ff94bc40SHeiko Schocher * @e: the wear-leveling entry to check 1746ff94bc40SHeiko Schocher * 1747ff94bc40SHeiko Schocher * This function returns zero if @e is in @ubi->pq and %-EINVAL if it is not. 
1748ff94bc40SHeiko Schocher */ 1749ff94bc40SHeiko Schocher static int self_check_in_pq(const struct ubi_device *ubi, 1750ff94bc40SHeiko Schocher struct ubi_wl_entry *e) 1751ff94bc40SHeiko Schocher { 1752ff94bc40SHeiko Schocher struct ubi_wl_entry *p; 1753ff94bc40SHeiko Schocher int i; 1754ff94bc40SHeiko Schocher 1755ff94bc40SHeiko Schocher if (!ubi_dbg_chk_gen(ubi)) 1756ff94bc40SHeiko Schocher return 0; 1757ff94bc40SHeiko Schocher 1758ff94bc40SHeiko Schocher for (i = 0; i < UBI_PROT_QUEUE_LEN; ++i) 1759ff94bc40SHeiko Schocher list_for_each_entry(p, &ubi->pq[i], u.list) 1760ff94bc40SHeiko Schocher if (p == e) 1761ff94bc40SHeiko Schocher return 0; 1762ff94bc40SHeiko Schocher 17630195a7bbSHeiko Schocher ubi_err(ubi, "self-check failed for PEB %d, EC %d, Protect queue", 1764ff94bc40SHeiko Schocher e->pnum, e->ec); 1765ff94bc40SHeiko Schocher dump_stack(); 1766ff94bc40SHeiko Schocher return -EINVAL; 1767ff94bc40SHeiko Schocher } 17680195a7bbSHeiko Schocher #ifndef CONFIG_MTD_UBI_FASTMAP 17690195a7bbSHeiko Schocher static struct ubi_wl_entry *get_peb_for_wl(struct ubi_device *ubi) 17700195a7bbSHeiko Schocher { 17710195a7bbSHeiko Schocher struct ubi_wl_entry *e; 17720195a7bbSHeiko Schocher 17730195a7bbSHeiko Schocher e = find_wl_entry(ubi, &ubi->free, WL_FREE_MAX_DIFF); 17740195a7bbSHeiko Schocher self_check_in_wl_tree(ubi, e, &ubi->free); 17750195a7bbSHeiko Schocher ubi->free_count--; 17760195a7bbSHeiko Schocher ubi_assert(ubi->free_count >= 0); 17770195a7bbSHeiko Schocher rb_erase(&e->u.rb, &ubi->free); 17780195a7bbSHeiko Schocher 17790195a7bbSHeiko Schocher return e; 17800195a7bbSHeiko Schocher } 17810195a7bbSHeiko Schocher 17820195a7bbSHeiko Schocher /** 17830195a7bbSHeiko Schocher * produce_free_peb - produce a free physical eraseblock. 17840195a7bbSHeiko Schocher * @ubi: UBI device description object 17850195a7bbSHeiko Schocher * 17860195a7bbSHeiko Schocher * This function tries to make a free PEB by means of synchronous execution of 17870195a7bbSHeiko Schocher * pending works. This may be needed if, for example the background thread is 17880195a7bbSHeiko Schocher * disabled. Returns zero in case of success and a negative error code in case 17890195a7bbSHeiko Schocher * of failure. 17900195a7bbSHeiko Schocher */ 17910195a7bbSHeiko Schocher static int produce_free_peb(struct ubi_device *ubi) 17920195a7bbSHeiko Schocher { 17930195a7bbSHeiko Schocher int err; 17940195a7bbSHeiko Schocher 17950195a7bbSHeiko Schocher while (!ubi->free.rb_node && ubi->works_count) { 17960195a7bbSHeiko Schocher spin_unlock(&ubi->wl_lock); 17970195a7bbSHeiko Schocher 17980195a7bbSHeiko Schocher dbg_wl("do one work synchronously"); 17990195a7bbSHeiko Schocher err = do_work(ubi); 18000195a7bbSHeiko Schocher 18010195a7bbSHeiko Schocher spin_lock(&ubi->wl_lock); 18020195a7bbSHeiko Schocher if (err) 18030195a7bbSHeiko Schocher return err; 18040195a7bbSHeiko Schocher } 18050195a7bbSHeiko Schocher 18060195a7bbSHeiko Schocher return 0; 18070195a7bbSHeiko Schocher } 18080195a7bbSHeiko Schocher 18090195a7bbSHeiko Schocher /** 18100195a7bbSHeiko Schocher * ubi_wl_get_peb - get a physical eraseblock. 18110195a7bbSHeiko Schocher * @ubi: UBI device description object 18120195a7bbSHeiko Schocher * 18130195a7bbSHeiko Schocher * This function returns a physical eraseblock in case of success and a 18140195a7bbSHeiko Schocher * negative error code in case of failure. 18150195a7bbSHeiko Schocher * Returns with ubi->fm_eba_sem held in read mode! 
18160195a7bbSHeiko Schocher */ 18170195a7bbSHeiko Schocher int ubi_wl_get_peb(struct ubi_device *ubi) 18180195a7bbSHeiko Schocher { 18190195a7bbSHeiko Schocher int err; 18200195a7bbSHeiko Schocher struct ubi_wl_entry *e; 18210195a7bbSHeiko Schocher 18220195a7bbSHeiko Schocher retry: 18230195a7bbSHeiko Schocher down_read(&ubi->fm_eba_sem); 18240195a7bbSHeiko Schocher spin_lock(&ubi->wl_lock); 18250195a7bbSHeiko Schocher if (!ubi->free.rb_node) { 18260195a7bbSHeiko Schocher if (ubi->works_count == 0) { 18270195a7bbSHeiko Schocher ubi_err(ubi, "no free eraseblocks"); 18280195a7bbSHeiko Schocher ubi_assert(list_empty(&ubi->works)); 18290195a7bbSHeiko Schocher spin_unlock(&ubi->wl_lock); 18300195a7bbSHeiko Schocher return -ENOSPC; 18310195a7bbSHeiko Schocher } 18320195a7bbSHeiko Schocher 18330195a7bbSHeiko Schocher err = produce_free_peb(ubi); 18340195a7bbSHeiko Schocher if (err < 0) { 18350195a7bbSHeiko Schocher spin_unlock(&ubi->wl_lock); 18360195a7bbSHeiko Schocher return err; 18370195a7bbSHeiko Schocher } 18380195a7bbSHeiko Schocher spin_unlock(&ubi->wl_lock); 18390195a7bbSHeiko Schocher up_read(&ubi->fm_eba_sem); 18400195a7bbSHeiko Schocher goto retry; 18410195a7bbSHeiko Schocher 18420195a7bbSHeiko Schocher } 18430195a7bbSHeiko Schocher e = wl_get_wle(ubi); 18440195a7bbSHeiko Schocher prot_queue_add(ubi, e); 18450195a7bbSHeiko Schocher spin_unlock(&ubi->wl_lock); 18460195a7bbSHeiko Schocher 18470195a7bbSHeiko Schocher err = ubi_self_check_all_ff(ubi, e->pnum, ubi->vid_hdr_aloffset, 18480195a7bbSHeiko Schocher ubi->peb_size - ubi->vid_hdr_aloffset); 18490195a7bbSHeiko Schocher if (err) { 18500195a7bbSHeiko Schocher ubi_err(ubi, "new PEB %d does not contain all 0xFF bytes", e->pnum); 18510195a7bbSHeiko Schocher return err; 18520195a7bbSHeiko Schocher } 18530195a7bbSHeiko Schocher 18540195a7bbSHeiko Schocher return e->pnum; 18550195a7bbSHeiko Schocher } 18560195a7bbSHeiko Schocher #else 18570195a7bbSHeiko Schocher #include "fastmap-wl.c" 18580195a7bbSHeiko Schocher #endif 1859
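The sketch below is illustrative only and is not part of the WL sub-system; the real callers live in the EBA sub-system (eba.c). It shows the intended flow through the interface documented above: get a free PEB with 'ubi_wl_get_peb()' (which returns with @ubi->fm_eba_sem held for reading), drop that semaphore once the PEB has been written, hand the PEB back with 'ubi_wl_put_peb()' when the LEB is unmapped, and synchronously execute the scheduled erase work with 'ubi_wl_flush()'. The helper name and its exact arguments are assumptions made for illustration.

#if 0	/* illustrative sketch, not compiled into the driver */
static int wl_usage_sketch(struct ubi_device *ubi, int vol_id, int lnum)
{
	int pnum, err;

	/* Get a free PEB; on success @ubi->fm_eba_sem is held for reading */
	pnum = ubi_wl_get_peb(ubi);
	if (pnum < 0)
		return pnum;

	/* ... the caller would write the VID header and data to @pnum ... */
	up_read(&ubi->fm_eba_sem);

	/*
	 * When the LEB is unmapped or re-mapped, return the PEB to the WL
	 * sub-system; it is scheduled for erasure by the background thread.
	 * @torture is zero because no I/O error was seen on this PEB.
	 */
	err = ubi_wl_put_peb(ubi, vol_id, lnum, pnum, 0);
	if (err)
		return err;

	/* Synchronously execute the erase work scheduled for this LEB */
	return ubi_wl_flush(ubi, vol_id, lnum);
}
#endif

A bit-flip reported by the I/O path would instead be handled with 'ubi_wl_scrub_peb(ubi, pnum)', which moves the entry to the @wl->scrub tree and lets the wear-leveling worker migrate the data.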