Message ID | 20180828201421.157735-2-jannh@google.com (mailing list archive)
---|---
State | New, archived
Series | x86: BUG() on #GP / kernel #PF in uaccess
On Tue, 28 Aug 2018 22:14:15 +0200 Jann Horn <jannh@google.com> wrote:

> This is an extension of commit b506a9d08bae ("x86: code clarification patch
> to Kprobes arch code"). As that commit explains, even though
> kprobe_running() can't be called with preemption enabled, you don't have to
> disable preemption - if preemption is on, you can't be in a kprobe.
>
> Also, use X86_TRAP_PF instead of 14.
>
> Signed-off-by: Jann Horn <jannh@google.com>
> ---
> v3:
>  - avoid unnecessary branch on return value and split up the checks
>    (Borislav Petkov)
>
>  arch/x86/mm/fault.c | 24 +++++++++++++-----------
>  1 file changed, 13 insertions(+), 11 deletions(-)
>
> diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
> index b9123c497e0a..bcdaae1d5bf5 100644
> --- a/arch/x86/mm/fault.c
> +++ b/arch/x86/mm/fault.c
> @@ -44,17 +44,19 @@ kmmio_fault(struct pt_regs *regs, unsigned long addr)
>
>  static nokprobe_inline int kprobes_fault(struct pt_regs *regs)
>  {
> -	int ret = 0;
> -
> -	/* kprobe_running() needs smp_processor_id() */
> -	if (kprobes_built_in() && !user_mode(regs)) {
> -		preempt_disable();
> -		if (kprobe_running() && kprobe_fault_handler(regs, 14))
> -			ret = 1;
> -		preempt_enable();
> -	}
> -
> -	return ret;
> +	if (!kprobes_built_in())
> +		return 0;
> +	if (user_mode(regs))
> +		return 0;
> +	/*
> +	 * To be potentially processing a kprobe fault and to be allowed to call
> +	 * kprobe_running(), we have to be non-preemptible.

Good catch!

Acked-by: Masami Hiramatsu <mhiramat@kernel.org>

Thanks!

> +	 */
> +	if (preemptible())
> +		return 0;
> +	if (!kprobe_running())
> +		return 0;
> +	return kprobe_fault_handler(regs, X86_TRAP_PF);
>  }
>
>  /*
> --
> 2.19.0.rc0.228.g281dcd1b4d0-goog
>
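[Editor's note] For readers who don't want to build the kernel tree, below is a minimal, self-contained userspace C sketch of the two shapes of the function being compared in this patch: the old flag-plus-nested-if form versus the new chain of early returns, with the magic trap number replaced by a named constant. Every identifier here (feature_built_in(), called_from_user_mode(), context_is_preemptible(), handler_is_running(), handler_fixup(), DEMO_TRAP_PF) is invented for illustration only and does not exist in the kernel; this is not the actual fault.c code, just the control-flow pattern.

#include <stdbool.h>
#include <stdio.h>

#define DEMO_TRAP_PF 14	/* named stand-in for the magic number, as X86_TRAP_PF is for 14 */

/* Stub predicates standing in for the real checks; values are arbitrary. */
static bool feature_built_in(void)        { return true;  }
static bool called_from_user_mode(void)   { return false; }
static bool context_is_preemptible(void)  { return false; }
static bool handler_is_running(void)      { return true;  }
static int  handler_fixup(int trapnr)     { return trapnr == DEMO_TRAP_PF; }

/* Before: one nested block, result carried through a flag variable. */
static int check_fault_old(int trapnr)
{
	int ret = 0;

	if (feature_built_in() && !called_from_user_mode()) {
		if (handler_is_running() && handler_fixup(trapnr))
			ret = 1;
	}
	return ret;
}

/* After: each precondition becomes an early return; no flag, no extra
 * branch on the return value. */
static int check_fault_new(int trapnr)
{
	if (!feature_built_in())
		return 0;
	if (called_from_user_mode())
		return 0;
	/* If this context could be preempted, it cannot be inside the handler,
	 * so there is no need to disable preemption just to ask. */
	if (context_is_preemptible())
		return 0;
	if (!handler_is_running())
		return 0;
	return handler_fixup(trapnr);
}

int main(void)
{
	printf("old: %d, new: %d\n",
	       check_fault_old(DEMO_TRAP_PF), check_fault_new(DEMO_TRAP_PF));
	return 0;
}

The early-return form is what the v3 changelog refers to as splitting up the checks and avoiding the extra branch on the return value: each test either disqualifies the fault immediately or falls through, and the final handler result is returned directly instead of being laundered through a flag.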