Message ID | 20240319212013.1046779-1-andrii@kernel.org (mailing list archive) |
---|---|
State | Accepted |
Commit | a8497506cd2c0fc90a64f6f5d2744a0ddb2c81eb |
Delegated to: | BPF |
Series | [bpf-next] bpf: avoid get_kernel_nofault() to fetch kprobe entry IP |
On Tue, 19 Mar 2024 14:20:13 -0700
Andrii Nakryiko <andrii@kernel.org> wrote:

> get_kernel_nofault() (or, rather, the underlying copy_from_kernel_nofault())
> is not free, and it does pop up in performance profiles when kprobes are
> heavily utilized with the CONFIG_X86_KERNEL_IBT=y config.
>
> Let's avoid using it if we know that reading 4 bytes at fentry_ip - 4 can't
> cross a page boundary. We do that by masking the lowest 12 bits and checking
> whether they are >= 4, in which case we can do a direct memory read.
>
> Another benefit (and actually what prompted a closer look at this part of
> the code) is that an LBR record is now (typically) not wasted on the
> copy_from_kernel_nofault() call and code, which helps tools like retsnoop
> that grab LBR records from inside BPF code in kretprobes.

Hmm, it may be better to have this function on the kprobe side and store a
flag indicating that such an architecture-dependent offset was added. That
would be more natural.

Thanks!

> Cc: Masami Hiramatsu <mhiramat@kernel.org>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
> ---
>  kernel/trace/bpf_trace.c | 12 +++++++++---
>  1 file changed, 9 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> index 0a5c4efc73c3..f81adabda38c 100644
> --- a/kernel/trace/bpf_trace.c
> +++ b/kernel/trace/bpf_trace.c
> @@ -1053,9 +1053,15 @@ static unsigned long get_entry_ip(unsigned long fentry_ip)
>  {
>  	u32 instr;
>
> -	/* Being extra safe in here in case entry ip is on the page-edge. */
> -	if (get_kernel_nofault(instr, (u32 *) fentry_ip - 1))
> -		return fentry_ip;
> +	/* We want to be extra safe in case entry ip is on the page edge,
> +	 * but otherwise we need to avoid get_kernel_nofault()'s overhead.
> +	 */
> +	if ((fentry_ip & ~PAGE_MASK) < ENDBR_INSN_SIZE) {
> +		if (get_kernel_nofault(instr, (u32 *)(fentry_ip - ENDBR_INSN_SIZE)))
> +			return fentry_ip;
> +	} else {
> +		instr = *(u32 *)(fentry_ip - ENDBR_INSN_SIZE);
> +	}
>  	if (is_endbr(instr))
>  		fentry_ip -= ENDBR_INSN_SIZE;
>  	return fentry_ip;
> --
> 2.43.0
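For readers who want to see the mask trick in isolation: the patch skips get_kernel_nofault() whenever the 4-byte read ending at fentry_ip cannot leave fentry_ip's page. Below is a minimal userspace sketch of that check; the PAGE_SIZE value, the sample addresses, and the function names are illustrative assumptions, not kernel code:

```c
#include <stdio.h>

#define PAGE_SIZE	4096UL		/* assumed x86-64 page size */
#define PAGE_MASK	(~(PAGE_SIZE - 1))
#define ENDBR_INSN_SIZE	4		/* endbr64 is 4 bytes long */

/*
 * A 4-byte read ending just below addr stays within addr's page iff
 * the page offset of addr is at least 4. If the offset is 0..3, the
 * read would begin on the previous page, which may be unmapped, so
 * the safe get_kernel_nofault() path must be used instead.
 */
static int unsafe_to_read_directly(unsigned long addr)
{
	return (addr & ~PAGE_MASK) < ENDBR_INSN_SIZE;
}

int main(void)
{
	/* offset 0: the 4 bytes at addr - 4 live on the previous page */
	printf("%d\n", unsafe_to_read_directly(0x1000UL));	/* prints 1 */
	/* offset 0x40: addr - 4 is on the same page, direct read is fine */
	printf("%d\n", unsafe_to_read_directly(0x1040UL));	/* prints 0 */
	return 0;
}
```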
On Wed, Mar 20, 2024 at 12:47:42PM +0900, Masami Hiramatsu wrote:
> On Tue, 19 Mar 2024 14:20:13 -0700
> Andrii Nakryiko <andrii@kernel.org> wrote:
>
> > get_kernel_nofault() (or, rather, the underlying copy_from_kernel_nofault())
> > is not free, and it does pop up in performance profiles when kprobes are
> > heavily utilized with the CONFIG_X86_KERNEL_IBT=y config.
> >
> > [...]

I think this is a nice improvement.

Acked-by: Jiri Olsa <jolsa@kernel.org>

> Hmm, it may be better to have this function on the kprobe side and store a
> flag indicating that such an architecture-dependent offset was added. That
> would be more natural.

I like the idea of a new flag saying the address was adjusted for endbr.

kprobe adjusts the address in arch_adjust_kprobe_addr(); the flag could
easily be added there, and then we'd adjust the address in get_entry_ip()
accordingly.

jirka

> Thanks!
>
> > [...]
>
> --
> Masami Hiramatsu (Google) <mhiramat@kernel.org>
On Wed, Mar 20, 2024 at 1:34 AM Jiri Olsa <olsajiri@gmail.com> wrote:
>
> On Wed, Mar 20, 2024 at 12:47:42PM +0900, Masami Hiramatsu wrote:
> > On Tue, 19 Mar 2024 14:20:13 -0700
> > Andrii Nakryiko <andrii@kernel.org> wrote:
> >
> > > [...]
>
> I think this is a nice improvement.
>
> Acked-by: Jiri Olsa <jolsa@kernel.org>

Masami, are you ok if we land this rather straightforward fix in the
bpf-next tree for now, and then you or someone a bit more familiar with
ftrace/kprobe internals can generalize this in a cleaner way?

> > Hmm, it may be better to have this function on the kprobe side and
> > store a flag indicating that such an architecture-dependent offset
> > was added. That would be more natural.
>
> I like the idea of a new flag saying the address was adjusted for endbr.

Instead of a flag, can the kprobe low-level infrastructure just provide
the "effective fentry ip" without any flags, so that the BPF side of
things doesn't have to care?

> kprobe adjusts the address in arch_adjust_kprobe_addr(); the flag could
> easily be added there, and then we'd adjust the address in
> get_entry_ip() accordingly.
>
> jirka
>
> > [...]
>
> --
> Masami Hiramatsu (Google) <mhiramat@kernel.org>
On Wed, 20 Mar 2024 10:46:54 -0700
Andrii Nakryiko <andrii.nakryiko@gmail.com> wrote:

> On Wed, Mar 20, 2024 at 1:34 AM Jiri Olsa <olsajiri@gmail.com> wrote:
> >
> > On Wed, Mar 20, 2024 at 12:47:42PM +0900, Masami Hiramatsu wrote:
> > > On Tue, 19 Mar 2024 14:20:13 -0700
> > > Andrii Nakryiko <andrii@kernel.org> wrote:
> > >
> > > > [...]
> >
> > I think this is a nice improvement.
> >
> > Acked-by: Jiri Olsa <jolsa@kernel.org>
>
> Masami, are you ok if we land this rather straightforward fix in the
> bpf-next tree for now, and then you or someone a bit more familiar with
> ftrace/kprobe internals can generalize this in a cleaner way?

I'm OK with this change as a short-term fix. As far as I can see, the
kprobe-side change may involve more kprobe-internal changes, so

Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>

> > > Hmm, it may be better to have this function on the kprobe side and
> > > store a flag indicating that such an architecture-dependent offset
> > > was added. That would be more natural.
> >
> > I like the idea of a new flag saying the address was adjusted for endbr.
>
> Instead of a flag, can the kprobe low-level infrastructure just provide
> the "effective fentry ip" without any flags, so that the BPF side of
> things doesn't have to care?

It's possible, but it would be a bit BPF-specific and not a good fit for
kprobe itself. I think we can add it in trace_kprobe instead of kprobe,
which can be accessed from struct kprobe *kp.

Thank you,

> > kprobe adjusts the address in arch_adjust_kprobe_addr(); the flag could
> > easily be added there, and then we'd adjust the address in
> > get_entry_ip() accordingly.
> >
> > jirka
> >
> > > [...]

--
Masami Hiramatsu (Google) <mhiramat@kernel.org>
On Wed, Mar 20, 2024 at 4:46 PM Masami Hiramatsu <mhiramat@kernel.org> wrote:
>
> On Wed, 20 Mar 2024 10:46:54 -0700
> Andrii Nakryiko <andrii.nakryiko@gmail.com> wrote:
>
> > [...]
> >
> > Masami, are you ok if we land this rather straightforward fix in the
> > bpf-next tree for now, and then you or someone a bit more familiar with
> > ftrace/kprobe internals can generalize this in a cleaner way?
>
> I'm OK with this change as a short-term fix. As far as I can see, the
> kprobe-side change may involve more kprobe-internal changes, so
>
> Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>

Great, thank you!

> > Instead of a flag, can the kprobe low-level infrastructure just provide
> > the "effective fentry ip" without any flags, so that the BPF side of
> > things doesn't have to care?
>
> It's possible, but it would be a bit BPF-specific and not a good fit for
> kprobe itself. I think we can add it in trace_kprobe instead of kprobe,
> which can be accessed from struct kprobe *kp.

Sure. If it can be just an "endbr64 offset" instead of a true/false flag,
that would help avoid extra conditionals in the hot path (which waste LBR
records in some modes, and those records are important in some applications).

> Thank you,
>
> > [...]
>
> --
> Masami Hiramatsu (Google) <mhiramat@kernel.org>
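To make the follow-up idea concrete, here is a hedged sketch of the offset-based variant the thread converges on. Neither the entry_adjust field nor these names exist in the kernel; everything below is a hypothetical illustration of the direction being discussed, not the eventual implementation:

```c
/*
 * Hypothetical follow-up (not in the kernel): if the attach path
 * recorded how many bytes arch_adjust_kprobe_addr() skipped when it
 * placed the probe past an endbr64 (4 on IBT kernels, 0 otherwise),
 * the per-hit path would need no memory read and no branch at all,
 * so it would no longer waste an LBR record either.
 */
struct trace_kprobe_sketch {
	/* ... existing trace_kprobe fields ... */
	unsigned int entry_adjust;	/* 0 or ENDBR_INSN_SIZE, set once at attach */
};

static unsigned long get_entry_ip_sketch(const struct trace_kprobe_sketch *tk,
					 unsigned long fentry_ip)
{
	/* unconditional subtract: branch-free on the hot path */
	return fentry_ip - tk->entry_adjust;
}
```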
Hello:

This patch was applied to bpf/bpf-next.git (master)
by Daniel Borkmann <daniel@iogearbox.net>:

On Tue, 19 Mar 2024 14:20:13 -0700 you wrote:
> get_kernel_nofault() (or, rather, the underlying copy_from_kernel_nofault())
> is not free, and it does pop up in performance profiles when kprobes are
> heavily utilized with the CONFIG_X86_KERNEL_IBT=y config.
>
> Let's avoid using it if we know that reading 4 bytes at fentry_ip - 4 can't
> cross a page boundary. We do that by masking the lowest 12 bits and checking
> whether they are >= 4, in which case we can do a direct memory read.
>
> [...]

Here is the summary with links:
  - [bpf-next] bpf: avoid get_kernel_nofault() to fetch kprobe entry IP
    https://git.kernel.org/bpf/bpf-next/c/a8497506cd2c

You are awesome, thank you!
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 0a5c4efc73c3..f81adabda38c 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -1053,9 +1053,15 @@ static unsigned long get_entry_ip(unsigned long fentry_ip)
 {
 	u32 instr;
 
-	/* Being extra safe in here in case entry ip is on the page-edge. */
-	if (get_kernel_nofault(instr, (u32 *) fentry_ip - 1))
-		return fentry_ip;
+	/* We want to be extra safe in case entry ip is on the page edge,
+	 * but otherwise we need to avoid get_kernel_nofault()'s overhead.
+	 */
+	if ((fentry_ip & ~PAGE_MASK) < ENDBR_INSN_SIZE) {
+		if (get_kernel_nofault(instr, (u32 *)(fentry_ip - ENDBR_INSN_SIZE)))
+			return fentry_ip;
+	} else {
+		instr = *(u32 *)(fentry_ip - ENDBR_INSN_SIZE);
+	}
 	if (is_endbr(instr))
 		fentry_ip -= ENDBR_INSN_SIZE;
 	return fentry_ip;
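For context on the is_endbr() check the function still relies on: on x86-64 the endbr64 instruction encodes as the bytes f3 0f 1e fa, which read as a little-endian u32 give 0xfa1e0ff3. Below is a simplified userspace model of that comparison; the real helper lives in the kernel's x86 IBT headers and is more involved (it also accepts the poisoned encoding the kernel writes over unused ENDBRs), so treat this as a sketch under those assumptions:

```c
#include <stdint.h>
#include <string.h>

#define ENDBR64_OPCODE	0xfa1e0ff3U	/* bytes f3 0f 1e fa, little-endian */

/*
 * Simplified model of is_endbr(): load the 4 bytes at the candidate
 * location (via memcpy to avoid an unaligned dereference) and compare
 * against the endbr64 encoding. get_entry_ip() applies this test to
 * the 4 bytes just before fentry_ip to decide whether to report the
 * symbol address (fentry_ip - 4) or fentry_ip itself.
 */
static int looks_like_endbr64(const void *code)
{
	uint32_t insn;

	memcpy(&insn, code, sizeof(insn));
	return insn == ENDBR64_OPCODE;
}
```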