Lines Matching full:fault
39 * Returns 0 if mmiotrace is disabled, or if the fault is not
117 * If it was an exec (instruction fetch) fault on an NX page, then in is_prefetch()
118 * do not ignore the fault: in is_prefetch()
202 * Handle a fault on the vmalloc or module mapping area
213 * unhandled page-fault when they are accessed.
424 * The OS sees this as a page fault with the upper 32bits of RIP cleared.
458 * We catch this in the page fault handler because these addresses
544 pr_alert("BUG: unable to handle page fault for address: %px\n", in show_fault_oops()
567 * contributory exception from user code and gets a page fault in show_fault_oops()
568 * during delivery, the page fault can be delivered as though in show_fault_oops()
652 /* Are we prepared to handle this kernel fault? */ in no_context()
655 * Any interrupt that takes a fault gets the fixup. This makes in no_context()
656 * the below recursive fault logic only applies to faults from in no_context()
683 * Stack overflow? During boot, we can fault near the initial in no_context()
694 * double-fault even before we get this far, in which case in no_context()
695 * we're fine: the double-fault handler will deal with it. in no_context()
698 * and then double-fault, though, because we're likely to in no_context()
705 : "D" ("kernel stack overflow (page fault)"), in no_context()
715 * Valid to do another page fault here, because if this fault in no_context()
730 * Buggy firmware could access regions which might page fault, try to in no_context()
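The no_context() hits above revolve around two mechanisms: exception-table fixups for faults taken from kernel code, and a stack-overflow heuristic near the task stack. A minimal sketch of the proximity check those comments describe, assuming the usual task-stack layout and THREAD_SIZE from the kernel headers (the helper name here is hypothetical, not a function in fault.c):

    #include <linux/sched.h>

    /*
     * Hypothetical helper illustrating the heuristic: a faulting address
     * within one page of either end of the task's stack is treated as a
     * kernel stack overflow rather than an ordinary vmalloc fault.
     * Unsigned wraparound turns each comparison into a one-sided range check.
     */
    static bool near_stack_guard(struct task_struct *tsk, unsigned long address)
    {
            unsigned long stack = (unsigned long)tsk->stack;

            return (stack - 1 - address < PAGE_SIZE) ||           /* just below the stack */
                   (address - (stack + THREAD_SIZE) < PAGE_SIZE); /* just above the stack */
    }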
813 * Valid to do another page fault here because this one came in __bad_area_nosemaphore()
906 * A protection key fault means that the PKRU value did not allow in bad_area_access_error()
913 * fault and that there was a VMA once we got in the fault in bad_area_access_error()
921 * 5. T1 : enters fault handler, takes mmap_lock, etc... in bad_area_access_error()
935 vm_fault_t fault) in do_sigbus() argument
943 /* User-space => ok to do another page fault: */ in do_sigbus()
950 if (fault & (VM_FAULT_HWPOISON|VM_FAULT_HWPOISON_LARGE)) { in do_sigbus()
955 "MCE: Killing %s:%d due to hardware memory corruption fault at %lx\n", in do_sigbus()
957 if (fault & VM_FAULT_HWPOISON_LARGE) in do_sigbus()
958 lsb = hstate_index_to_shift(VM_FAULT_GET_HINDEX(fault)); in do_sigbus()
959 if (fault & VM_FAULT_HWPOISON) in do_sigbus()
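Pieced together, the do_sigbus() fragments show how a memory-failure fault becomes a SIGBUS carrying an address-granularity hint. A sketch consistent with the lines above (force_sig_mceerr() and BUS_MCEERR_AR are the standard kernel interfaces for this):

    /*
     * lsb encodes the granularity of the poisoned region: the huge-page
     * shift for VM_FAULT_HWPOISON_LARGE, one base page otherwise, so user
     * space knows how much memory around the address is gone.
     */
    unsigned int lsb = 0;

    if (fault & VM_FAULT_HWPOISON_LARGE)
            lsb = hstate_index_to_shift(VM_FAULT_GET_HINDEX(fault));
    if (fault & VM_FAULT_HWPOISON)
            lsb = PAGE_SHIFT;

    force_sig_mceerr(BUS_MCEERR_AR, (void __user *)address, lsb);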
970 unsigned long address, vm_fault_t fault) in mm_fault_error() argument
977 if (fault & VM_FAULT_OOM) { in mm_fault_error()
987 * userspace (which will retry the fault, or kill us if we got in mm_fault_error()
992 if (fault & (VM_FAULT_SIGBUS|VM_FAULT_HWPOISON| in mm_fault_error()
994 do_sigbus(regs, error_code, address, fault); in mm_fault_error()
995 else if (fault & VM_FAULT_SIGSEGV) in mm_fault_error()
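The mm_fault_error() hits outline a dispatch on the vm_fault_t bits returned by the fault handler. Read together, they suggest roughly this shape (a sketch, not the verbatim function body):

    if (fault & VM_FAULT_OOM) {
            /*
             * User-mode OOM defers to the OOM killer, which will retry
             * the fault or kill the task; kernel-mode OOM is handled on
             * the no_context() path instead.
             */
            pagefault_out_of_memory();
    } else if (fault & (VM_FAULT_SIGBUS | VM_FAULT_HWPOISON |
                        VM_FAULT_HWPOISON_LARGE)) {
            do_sigbus(regs, error_code, address, fault);
    } else if (fault & VM_FAULT_SIGSEGV) {
            bad_area_nosemaphore(regs, error_code, address);
    }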
1014 * Handle a spurious fault caused by a stale TLB entry.
1029 * Returns non-zero if a spurious fault was handled, zero otherwise.
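The spurious-fault comment describes recovering from a stale TLB entry: if the current page tables already permit the access, the fault can be dismissed after flushing. A minimal permission check in that spirit, close to (but not necessarily identical to) the helper these lines come from:

    /*
     * A fault is spurious if the PTE already grants every permission the
     * faulting access needed; then only the TLB entry was stale.
     */
    static int spurious_fault_check(unsigned long error_code, pte_t *pte)
    {
            if ((error_code & X86_PF_WRITE) && !pte_write(*pte))
                    return 0;
            if ((error_code & X86_PF_INSTR) && !pte_exec(*pte))
                    return 0;

            return 1;
    }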
1112 * a follow-up action to resolve the fault, like a COW. in access_error()
1175 * We can fault-in kernel-space virtual memory on-demand. The in do_kern_addr_fault()
1184 * fault is not any of the following: in do_kern_addr_fault()
1185 * 1. A fault on a PTE with a reserved bit set. in do_kern_addr_fault()
1186 * 2. A fault caused by a user-mode access. (Do not demand- in do_kern_addr_fault()
1187 * fault kernel memory due to user-mode accesses). in do_kern_addr_fault()
1188 * 3. A fault caused by a page-level protection violation. in do_kern_addr_fault()
1189 * (A demand fault would be on a non-present page which in do_kern_addr_fault()
1204 /* Was the fault spurious, caused by lazy TLB invalidation? */ in do_kern_addr_fault()
1215 * and handling kernel code that can fault, like get_user(). in do_kern_addr_fault()
1218 * fault we could otherwise deadlock: in do_kern_addr_fault()
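Taken together, the do_kern_addr_fault() excerpts describe a filter: demand-fault the vmalloc/module area only when none of the three listed conditions holds, then fall back to the spurious-fault check and the notifier path for faultable kernel code like get_user(). Conditions 1-3 map onto three bits of the hardware error code; in kernels of this vintage the test would look roughly like (hw_error_code named as in the excerpts):

    /*
     * Reserved-bit faults, user-mode faults and protection-violation
     * faults must never be satisfied by demand-faulting kernel memory.
     */
    if (!(hw_error_code & (X86_PF_RSVD | X86_PF_USER | X86_PF_PROT))) {
            if (vmalloc_fault(address) >= 0)
                    return;
    }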
1233 vm_fault_t fault; in do_user_addr_fault() local
1267 * in a region with pagefaults disabled then we must not take the fault in do_user_addr_fault()
1276 * vmalloc fault has been handled. in do_user_addr_fault()
1279 * potential system fault or CPU buglet: in do_user_addr_fault()
1315 * Do not try to do a speculative page fault if the fault was due to in do_user_addr_fault()
1319 fault = handle_speculative_fault(mm, address, flags, &vma, regs); in do_user_addr_fault()
1320 if (fault != VM_FAULT_RETRY) in do_user_addr_fault()
1327 * tables. But, an erroneous kernel fault occurring outside one of in do_user_addr_fault()
1329 * to validate the fault against the address space. in do_user_addr_fault()
1339 * Fault from code in kernel from in do_user_addr_fault()
1384 * If for any reason at all we couldn't handle the fault, in do_user_addr_fault()
1386 * the fault. Since we never set FAULT_FLAG_RETRY_NOWAIT, if in do_user_addr_fault()
1391 * repeat the page fault later with a VM_FAULT_NOPAGE retval in do_user_addr_fault()
1396 fault = handle_mm_fault(vma, address, flags, regs); in do_user_addr_fault()
1399 if (fault_signal_pending(fault, regs)) { in do_user_addr_fault()
1411 if (unlikely((fault & VM_FAULT_RETRY) && in do_user_addr_fault()
1427 if (unlikely(fault & VM_FAULT_ERROR)) { in do_user_addr_fault()
1428 mm_fault_error(regs, hw_error_code, address, fault); in do_user_addr_fault()
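The do_user_addr_fault() hits around handle_mm_fault() sketch the standard retry protocol: bail out early on a pending fatal signal, loop once more on VM_FAULT_RETRY, and hand terminal errors to mm_fault_error(). In outline (a sketch that assumes the usual retry: label earlier in the function):

    fault = handle_mm_fault(vma, address, flags, regs);

    if (fault_signal_pending(fault, regs)) {
            /* A fatal signal interrupted the fault; fix up kernel mode. */
            if (!user_mode(regs))
                    no_context(regs, hw_error_code, address, SIGBUS, BUS_ADRERR);
            return;
    }

    if (unlikely((fault & VM_FAULT_RETRY) && (flags & FAULT_FLAG_ALLOW_RETRY))) {
            flags |= FAULT_FLAG_TRIED;
            goto retry;     /* mmap_lock was dropped; take the fault again */
    }

    mmap_read_unlock(mm);
    if (unlikely(fault & VM_FAULT_ERROR)) {
            mm_fault_error(regs, hw_error_code, address, fault);
            return;
    }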
1458 /* Was the fault on kernel-controlled part of the address space? */ in handle_page_fault()
1464 * User address page fault handling might have reenabled in handle_page_fault()
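The handle_page_fault() fragment at 1458 refers to the top-level split between the two halves quoted throughout this listing; in the kernels these excerpts appear to come from, the routing itself is a short test on fault_in_kernel_space():

    if (unlikely(fault_in_kernel_space(address)))
            do_kern_addr_fault(regs, error_code, address);
    else
            do_user_addr_fault(regs, error_code, address);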
1483 * (asynchronous page fault mechanism). The event happens when a in DEFINE_IDTENTRY_RAW_ERRORCODE()
1508 * be invoked because a kernel fault on a user space address might in DEFINE_IDTENTRY_RAW_ERRORCODE()
1511 * In case the fault hit an RCU idle region the conditional entry in DEFINE_IDTENTRY_RAW_ERRORCODE()