Lines Matching refs:kprobe

37 (also called return probes).  A kprobe can be inserted on virtually
64 When a kprobe is registered, Kprobes makes a copy of the probed
71 associated with the kprobe, passing the handler the addresses of the
72 kprobe struct and the saved registers.
81 "post_handler," if any, that is associated with the kprobe.
110 When you call register_kretprobe(), Kprobes establishes a kprobe at
115 At boot time, Kprobes registers a kprobe at the trampoline.
148 field of the kretprobe struct. Whenever the kprobe placed by kretprobe at the
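The kretprobe lines above (trampoline registration, the kprobe placed at the probed function's entry) can be illustrated with a minimal kernel-module sketch. This is kernel-space code, not runnable in user space; the probed symbol `kernel_clone` and `maxactive` value are illustrative choices, not taken from the listing:

```c
/* Sketch: a kretprobe module. The probed symbol (kernel_clone) is an
 * illustrative assumption; pick any non-inlined kernel function. */
#include <linux/kprobes.h>
#include <linux/module.h>
#include <linux/ptrace.h>

static int ret_handler(struct kretprobe_instance *ri, struct pt_regs *regs)
{
	/* Runs when the probed function returns, via the trampoline. */
	pr_info("kernel_clone returned %lu\n", regs_return_value(regs));
	return 0;
}

static struct kretprobe rp = {
	.handler         = ret_handler,
	.kp.symbol_name  = "kernel_clone",
	.maxactive       = 20,	/* concurrent instances to track */
};

static int __init rp_init(void)
{
	return register_kretprobe(&rp);
}

static void __exit rp_exit(void)
{
	unregister_kretprobe(&rp);
}

module_init(rp_init);
module_exit(rp_exit);
MODULE_LICENSE("GPL");
```

When `register_kretprobe()` succeeds, Kprobes has planted an ordinary kprobe at `kernel_clone`'s entry; each hit swaps the return address for the trampoline so `ret_handler` fires on return.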
183 Kprobes inserts an ordinary, breakpoint-based kprobe at the specified
236 If the kprobe can be optimized, Kprobes enqueues the kprobe to an
237 optimizing list, and kicks the kprobe-optimizer workqueue to optimize
252 of kprobe optimization supports only kernels with CONFIG_PREEMPT=n [4]_.
261 When an optimized kprobe is unregistered, disabled, or blocked by
262 another kprobe, it will be unoptimized. If this happens before
263 the optimization is complete, the kprobe is just dequeued from the
279 The jump optimization changes the kprobe's pre_handler behavior.
286 - Specify an empty function for the kprobe's post_handler.
340 kprobe address resolution code.
363 int register_kprobe(struct kprobe *kp);
376 1. With the introduction of the "symbol_name" field to struct kprobe,
385 2. Use the "offset" field of struct kprobe if the offset into the symbol
389 3. Specify either the kprobe "symbol_name" OR the "addr". If both are
390 specified, kprobe registration will fail with -EINVAL.
393 does not validate if the kprobe.addr is at an instruction boundary.
402 int pre_handler(struct kprobe *p, struct pt_regs *regs);
404 Called with p pointing to the kprobe associated with the breakpoint,
412 void post_handler(struct kprobe *p, struct pt_regs *regs,
422 int fault_handler(struct kprobe *p, struct pt_regs *regs, int trapnr);
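Pulling the fragments above together (the `register_kprobe()` prototype, the `symbol_name`/`offset`/`addr` rules, and the `pre_handler`/`post_handler` signatures), a minimal kprobe module looks roughly like the following sketch. The probed symbol `kernel_clone` is an illustrative assumption:

```c
/* Sketch: a minimal kprobe module (kernel-space; not runnable in user
 * space). Symbol name is an illustrative choice. */
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/kprobes.h>

static struct kprobe kp = {
	/* Set symbol_name OR addr, never both: registration returns
	 * -EINVAL if both are specified. */
	.symbol_name = "kernel_clone",
};

static int handler_pre(struct kprobe *p, struct pt_regs *regs)
{
	/* p points to the kprobe associated with the breakpoint;
	 * regs holds the saved registers at the probe point. */
	pr_info("pre_handler: hit at %p\n", p->addr);
	return 0;	/* 0: let Kprobes single-step the copied insn */
}

static void handler_post(struct kprobe *p, struct pt_regs *regs,
			 unsigned long flags)
{
	pr_info("post_handler: done at %p\n", p->addr);
}

static int __init kp_init(void)
{
	kp.pre_handler  = handler_pre;
	kp.post_handler = handler_post;
	return register_kprobe(&kp);
}

static void __exit kp_exit(void)
{
	unregister_kprobe(&kp);
}

module_init(kp_init);
module_exit(kp_exit);
MODULE_LICENSE("GPL");
```

Note that, per the fragments above, installing a `post_handler` also prevents jump optimization of this probe; leave it NULL if you only need the pre-handler and want the probe to be optimizable.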
452 regs is as described for kprobe.pre_handler. ri points to the
474 void unregister_kprobe(struct kprobe *kp);
491 int register_kprobes(struct kprobe **kps, int num);
513 void unregister_kprobes(struct kprobe **kps, int num);
531 int disable_kprobe(struct kprobe *kp);
543 int enable_kprobe(struct kprobe *kp);
554 So if you install a kprobe with a post_handler, at an optimized
582 handlers won't be run in that instance, and the kprobe.nmissed member
593 kretprobe handlers and optimized kprobe handlers run without interrupt
643 of the kprobe, because the bytes in DCR are replaced by
657 On a typical CPU in use in 2005, a kprobe hit takes 0.5 to 1.0
661 hit typically takes 50-75% longer than a kprobe hit.
662 When you have a return probe set on a function, adding a kprobe at
667 k = kprobe; r = return probe; kr = kprobe + return probe
682 Typically, an optimized kprobe hit takes 0.07 to 0.1 microseconds to
685 k = unoptimized kprobe, b = boosted (single-step skipped), o = optimized kprobe,
733 - Use ftrace dynamic events (kprobe event) with perf-probe.
759 The second column identifies the type of probe (k - kprobe and r - kretprobe)
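As a hedged illustration of those last two fragments (it assumes root, debugfs mounted at `/sys/kernel/debug`, and a perf build with probe support; the addresses and symbols shown are examples, not output from any particular machine), the installed-probe listing and a perf-probe session look roughly like:

```shell
# List registered probes; the second column marks the type:
# k = kprobe, r = kretprobe
cat /sys/kernel/debug/kprobes/list
# c015d71a  k  vfs_read+0x0
# c03dedc5  r  tcp_v4_rcv+0x0

# The ftrace dynamic-event route via perf-probe:
perf probe --add vfs_read              # creates probe:vfs_read
perf record -e probe:vfs_read -aR sleep 1
perf probe --del vfs_read
```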