
[bpf-next] bpf: avoid get_kernel_nofault() to fetch kprobe entry IP

Message ID 20240319212013.1046779-1-andrii@kernel.org (mailing list archive)
State Accepted
Commit a8497506cd2c0fc90a64f6f5d2744a0ddb2c81eb
Delegated to: BPF
Series [bpf-next] bpf: avoid get_kernel_nofault() to fetch kprobe entry IP

Checks

Context Check Description
bpf/vmtest-bpf-next-PR success PR summary
netdev/series_format success Single patches do not need cover letters
netdev/tree_selection success Clearly marked for bpf-next
netdev/ynl success Generated files up to date; no warnings/errors; no diff in generated;
netdev/fixes_present success Fixes tag not required for -next series
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 956 this patch: 956
netdev/build_tools success No tools touched, skip
netdev/cc_maintainers warning 12 maintainers not CCed: haoluo@google.com linux-trace-kernel@vger.kernel.org john.fastabend@gmail.com rostedt@goodmis.org eddyz87@gmail.com sdf@google.com song@kernel.org kpsingh@kernel.org yonghong.song@linux.dev martin.lau@linux.dev jolsa@kernel.org mathieu.desnoyers@efficios.com
netdev/build_clang success Errors and warnings before: 957 this patch: 957
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/deprecated_api success None detected
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn success Errors and warnings before: 973 this patch: 973
netdev/checkpatch warning WARNING: line length of 84 exceeds 80 columns
netdev/build_clang_rust success No Rust files in patch. Skipping build
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0
bpf/vmtest-bpf-next-VM_Test-36 success Logs for x86_64-llvm-18 / build-release / build for x86_64 with llvm-18 and -O2 optimization
bpf/vmtest-bpf-next-VM_Test-21 success Logs for x86_64-gcc / test (test_maps, false, 360) / test_maps on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-26 success Logs for x86_64-gcc / test (test_verifier, false, 360) / test_verifier on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-30 success Logs for x86_64-llvm-17 / test (test_maps, false, 360) / test_maps on x86_64 with llvm-17
bpf/vmtest-bpf-next-VM_Test-29 success Logs for x86_64-llvm-17 / build-release / build for x86_64 with llvm-17 and -O2 optimization
bpf/vmtest-bpf-next-VM_Test-33 success Logs for x86_64-llvm-17 / test (test_verifier, false, 360) / test_verifier on x86_64 with llvm-17
bpf/vmtest-bpf-next-VM_Test-37 success Logs for x86_64-llvm-18 / test (test_maps, false, 360) / test_maps on x86_64 with llvm-18
bpf/vmtest-bpf-next-VM_Test-41 success Logs for x86_64-llvm-18 / test (test_verifier, false, 360) / test_verifier on x86_64 with llvm-18
bpf/vmtest-bpf-next-VM_Test-23 success Logs for x86_64-gcc / test (test_progs_no_alu32, false, 360) / test_progs_no_alu32 on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-27 success Logs for x86_64-gcc / veristat / veristat on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-22 success Logs for x86_64-gcc / test (test_progs, false, 360) / test_progs on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-24 success Logs for x86_64-gcc / test (test_progs_no_alu32_parallel, true, 30) / test_progs_no_alu32_parallel on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-25 success Logs for x86_64-gcc / test (test_progs_parallel, true, 30) / test_progs_parallel on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-31 success Logs for x86_64-llvm-17 / test (test_progs, false, 360) / test_progs on x86_64 with llvm-17
bpf/vmtest-bpf-next-VM_Test-32 success Logs for x86_64-llvm-17 / test (test_progs_no_alu32, false, 360) / test_progs_no_alu32 on x86_64 with llvm-17
bpf/vmtest-bpf-next-VM_Test-39 success Logs for x86_64-llvm-18 / test (test_progs_cpuv4, false, 360) / test_progs_cpuv4 on x86_64 with llvm-18
bpf/vmtest-bpf-next-VM_Test-38 success Logs for x86_64-llvm-18 / test (test_progs, false, 360) / test_progs on x86_64 with llvm-18
bpf/vmtest-bpf-next-VM_Test-40 success Logs for x86_64-llvm-18 / test (test_progs_no_alu32, false, 360) / test_progs_no_alu32 on x86_64 with llvm-18
bpf/vmtest-bpf-next-VM_Test-14 success Logs for s390x-gcc / test (test_progs, false, 360) / test_progs on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-3 success Logs for Validate matrix.py
bpf/vmtest-bpf-next-VM_Test-2 success Logs for Unittests
bpf/vmtest-bpf-next-VM_Test-1 success Logs for ShellCheck
bpf/vmtest-bpf-next-VM_Test-0 success Logs for Lint
bpf/vmtest-bpf-next-VM_Test-5 success Logs for aarch64-gcc / build-release
bpf/vmtest-bpf-next-VM_Test-4 success Logs for aarch64-gcc / build / build for aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-12 success Logs for s390x-gcc / build-release
bpf/vmtest-bpf-next-VM_Test-13 success Logs for set-matrix
bpf/vmtest-bpf-next-VM_Test-10 success Logs for aarch64-gcc / veristat
bpf/vmtest-bpf-next-VM_Test-15 success Logs for x86_64-gcc / build-release
bpf/vmtest-bpf-next-VM_Test-8 success Logs for aarch64-gcc / test (test_progs_no_alu32, false, 360) / test_progs_no_alu32 on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-6 success Logs for aarch64-gcc / test (test_maps, false, 360) / test_maps on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-7 success Logs for aarch64-gcc / test (test_progs, false, 360) / test_progs on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-9 success Logs for aarch64-gcc / test (test_verifier, false, 360) / test_verifier on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-11 success Logs for s390x-gcc / build / build for s390x with gcc
bpf/vmtest-bpf-next-VM_Test-17 success Logs for s390x-gcc / veristat
bpf/vmtest-bpf-next-VM_Test-16 success Logs for s390x-gcc / test (test_verifier, false, 360) / test_verifier on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-19 success Logs for x86_64-gcc / build / build for x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-18 success Logs for set-matrix
bpf/vmtest-bpf-next-VM_Test-20 success Logs for x86_64-gcc / build-release
bpf/vmtest-bpf-next-VM_Test-42 success Logs for x86_64-llvm-18 / veristat
bpf/vmtest-bpf-next-VM_Test-34 success Logs for x86_64-llvm-17 / veristat
bpf/vmtest-bpf-next-VM_Test-28 success Logs for x86_64-llvm-17 / build / build for x86_64 with llvm-17
bpf/vmtest-bpf-next-VM_Test-35 success Logs for x86_64-llvm-18 / build / build for x86_64 with llvm-18

Commit Message

Andrii Nakryiko March 19, 2024, 9:20 p.m. UTC
get_kernel_nofault() (or, rather, the underlying copy_from_kernel_nofault())
is not free, and it does pop up in performance profiles when kprobes
are heavily utilized with CONFIG_X86_KERNEL_IBT=y.

Let's avoid using it if we know that fentry_ip - 4 can't cross a page
boundary. We do that by masking the lowest 12 bits and checking whether
they are >= 4, in which case we can do a direct memory read.

Another benefit (and actually what prompted a closer look at this part
of the code) is that an LBR record is now (typically) not wasted on the
copy_from_kernel_nofault() call and code, which helps tools like
retsnoop that grab LBR records from inside BPF code in kretprobes.

Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
 kernel/trace/bpf_trace.c | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)
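
As a sanity check of the page-boundary argument in the commit message, here is
a minimal, illustrative userspace program (not part of the patch). The 4 KiB
page size and the 4-byte ENDBR_INSN_SIZE are assumptions matching x86-64 with
CONFIG_X86_KERNEL_IBT=y; it brute-forces the invariant that the fast path
relies on, namely that when the low 12 bits of fentry_ip are >= 4, the 4-byte
read at fentry_ip - 4 cannot cross a page boundary.

/* Illustrative only: not kernel code. */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE	4096UL
#define PAGE_MASK	(~(PAGE_SIZE - 1))
#define ENDBR_INSN_SIZE	4UL	/* assumed: x86-64 ENDBR is 4 bytes */

int main(void)
{
	uintptr_t ip;

	for (ip = PAGE_SIZE; ip < 4 * PAGE_SIZE; ip++) {
		/* condition the patch uses to pick the direct read */
		int fast_path = (ip & ~PAGE_MASK) >= ENDBR_INSN_SIZE;
		uintptr_t first = ip - ENDBR_INSN_SIZE;	/* first byte read */
		uintptr_t last = ip - 1;		/* last byte read */
		int same_page = (first & PAGE_MASK) == (last & PAGE_MASK);

		/* the direct read is only taken when it stays on one page */
		if (fast_path)
			assert(same_page);
	}
	printf("invariant holds\n");
	return 0;
}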

Comments

Masami Hiramatsu (Google) March 20, 2024, 3:47 a.m. UTC | #1
On Tue, 19 Mar 2024 14:20:13 -0700
Andrii Nakryiko <andrii@kernel.org> wrote:

> get_kernel_nofault() (or, rather, underlying copy_from_kernel_nofault())
> is not free and it does pop up in performance profiles when
> kprobes are heavily utilized with CONFIG_X86_KERNEL_IBT=y config.
> 
> Let's avoid using it if we know that fentry_ip - 4 can't cross page
> boundary. We do that by masking lowest 12 bits and checking if they are
> >= 4, in which case we can do direct memory read.
> 
> Another benefit (and actually what caused a closer look at this part of
> code) is that now LBR record is (typically) not wasted on
> copy_from_kernel_nofault() call and code, which helps tools like
> retsnoop that grab LBR records from inside BPF code in kretprobes.

Hmm, it may be better to have this function on the kprobe side and
store a flag indicating that such an architecture-dependent offset was
added. That would be more natural.

Thanks!

> 
> Cc: Masami Hiramatsu <mhiramat@kernel.org>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
> ---
>  kernel/trace/bpf_trace.c | 12 +++++++++---
>  1 file changed, 9 insertions(+), 3 deletions(-)
> 
> diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> index 0a5c4efc73c3..f81adabda38c 100644
> --- a/kernel/trace/bpf_trace.c
> +++ b/kernel/trace/bpf_trace.c
> @@ -1053,9 +1053,15 @@ static unsigned long get_entry_ip(unsigned long fentry_ip)
>  {
>  	u32 instr;
>  
> -	/* Being extra safe in here in case entry ip is on the page-edge. */
> -	if (get_kernel_nofault(instr, (u32 *) fentry_ip - 1))
> -		return fentry_ip;
> +	/* We want to be extra safe in case entry ip is on the page edge,
> +	 * but otherwise we need to avoid get_kernel_nofault()'s overhead.
> +	 */
> +	if ((fentry_ip & ~PAGE_MASK) < ENDBR_INSN_SIZE) {
> +		if (get_kernel_nofault(instr, (u32 *)(fentry_ip - ENDBR_INSN_SIZE)))
> +			return fentry_ip;
> +	} else {
> +		instr = *(u32 *)(fentry_ip - ENDBR_INSN_SIZE);
> +	}
>  	if (is_endbr(instr))
>  		fentry_ip -= ENDBR_INSN_SIZE;
>  	return fentry_ip;
> -- 
> 2.43.0
>
Jiri Olsa March 20, 2024, 8:34 a.m. UTC | #2
On Wed, Mar 20, 2024 at 12:47:42PM +0900, Masami Hiramatsu wrote:
> On Tue, 19 Mar 2024 14:20:13 -0700
> Andrii Nakryiko <andrii@kernel.org> wrote:
> 
> > get_kernel_nofault() (or, rather, underlying copy_from_kernel_nofault())
> > is not free and it does pop up in performance profiles when
> > kprobes are heavily utilized with CONFIG_X86_KERNEL_IBT=y config.
> > 
> > Let's avoid using it if we know that fentry_ip - 4 can't cross page
> > boundary. We do that by masking lowest 12 bits and checking if they are
> > >= 4, in which case we can do direct memory read.
> > 
> > Another benefit (and actually what caused a closer look at this part of
> > code) is that now LBR record is (typically) not wasted on
> > copy_from_kernel_nofault() call and code, which helps tools like
> > retsnoop that grab LBR records from inside BPF code in kretprobes.

I think this is a nice improvement

Acked-by: Jiri Olsa <jolsa@kernel.org>

> 
> Hmm, we may better to have this function in kprobe side and
> store a flag which such architecture dependent offset is added.
> That is more natural.

I like the idea of a new flag saying the address was adjusted for endbr

kprobes adjust the address in arch_adjust_kprobe_addr(); the flag could
easily be added in there, and then we'd adjust the address in
get_entry_ip() accordingly

jirka
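
A rough sketch of the direction discussed above -- this is not existing kernel
code, and the per-probe field mentioned is hypothetical. The only assumption
taken from current code is that x86's arch_adjust_kprobe_addr() already
inspects the instruction at the probed address when resolving it, so the ENDBR
adjustment could be computed once at registration time and reused on every hit:

/* Hypothetical sketch, kernel-context C: compute the adjustment once,
 * where the probe address is resolved, instead of re-reading the
 * instruction in the probe hot path.
 */
static u8 probe_endbr_adjust(unsigned long faddr)
{
	u32 instr;

	/* the faulting-safe read is fine here: this runs once, at probe
	 * registration time, not on every hit
	 */
	if (get_kernel_nofault(instr, (u32 *)faddr))
		return 0;
	return is_endbr(instr) ? ENDBR_INSN_SIZE : 0;
}

The result would then live in a per-probe field (set once, at registration
time) that the BPF side could consume directly.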

> 
> Thanks!
> 
> > 
> > Cc: Masami Hiramatsu <mhiramat@kernel.org>
> > Cc: Peter Zijlstra <peterz@infradead.org>
> > Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
> > ---
> >  kernel/trace/bpf_trace.c | 12 +++++++++---
> >  1 file changed, 9 insertions(+), 3 deletions(-)
> > 
> > diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> > index 0a5c4efc73c3..f81adabda38c 100644
> > --- a/kernel/trace/bpf_trace.c
> > +++ b/kernel/trace/bpf_trace.c
> > @@ -1053,9 +1053,15 @@ static unsigned long get_entry_ip(unsigned long fentry_ip)
> >  {
> >  	u32 instr;
> >  
> > -	/* Being extra safe in here in case entry ip is on the page-edge. */
> > -	if (get_kernel_nofault(instr, (u32 *) fentry_ip - 1))
> > -		return fentry_ip;
> > +	/* We want to be extra safe in case entry ip is on the page edge,
> > +	 * but otherwise we need to avoid get_kernel_nofault()'s overhead.
> > +	 */
> > +	if ((fentry_ip & ~PAGE_MASK) < ENDBR_INSN_SIZE) {
> > +		if (get_kernel_nofault(instr, (u32 *)(fentry_ip - ENDBR_INSN_SIZE)))
> > +			return fentry_ip;
> > +	} else {
> > +		instr = *(u32 *)(fentry_ip - ENDBR_INSN_SIZE);
> > +	}
> >  	if (is_endbr(instr))
> >  		fentry_ip -= ENDBR_INSN_SIZE;
> >  	return fentry_ip;
> > -- 
> > 2.43.0
> > 
> 
> 
> -- 
> Masami Hiramatsu (Google) <mhiramat@kernel.org>
>
Andrii Nakryiko March 20, 2024, 5:46 p.m. UTC | #3
On Wed, Mar 20, 2024 at 1:34 AM Jiri Olsa <olsajiri@gmail.com> wrote:
>
> On Wed, Mar 20, 2024 at 12:47:42PM +0900, Masami Hiramatsu wrote:
> > On Tue, 19 Mar 2024 14:20:13 -0700
> > Andrii Nakryiko <andrii@kernel.org> wrote:
> >
> > > get_kernel_nofault() (or, rather, underlying copy_from_kernel_nofault())
> > > is not free and it does pop up in performance profiles when
> > > kprobes are heavily utilized with CONFIG_X86_KERNEL_IBT=y config.
> > >
> > > Let's avoid using it if we know that fentry_ip - 4 can't cross page
> > > boundary. We do that by masking lowest 12 bits and checking if they are
> > > >= 4, in which case we can do direct memory read.
> > >
> > > Another benefit (and actually what caused a closer look at this part of
> > > code) is that now LBR record is (typically) not wasted on
> > > copy_from_kernel_nofault() call and code, which helps tools like
> > > retsnoop that grab LBR records from inside BPF code in kretprobes.
>
> I think this is nice improvement
>
> Acked-by: Jiri Olsa <jolsa@kernel.org>
>

Masami, are you ok if we land this rather straightforward fix in the
bpf-next tree for now, and then you or someone a bit more familiar
with ftrace/kprobe internals can rework it in a more generic way?

> >
> > Hmm, we may better to have this function in kprobe side and
> > store a flag which such architecture dependent offset is added.
> > That is more natural.
>
> I like the idea of new flag saying the address was adjusted for endbr
>

instead of a flag, can the kprobe low-level infrastructure just provide
the "effective fentry ip" without any flags, so that the BPF side of
things doesn't have to care?

> kprobe adjust the address in arch_adjust_kprobe_addr, it could be
> easily added in there and then we'd adjust the address in get_entry_ip
> accordingly
>
> jirka
>
> >
> > Thanks!
> >
> > >
> > > Cc: Masami Hiramatsu <mhiramat@kernel.org>
> > > Cc: Peter Zijlstra <peterz@infradead.org>
> > > Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
> > > ---
> > >  kernel/trace/bpf_trace.c | 12 +++++++++---
> > >  1 file changed, 9 insertions(+), 3 deletions(-)
> > >
> > > diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> > > index 0a5c4efc73c3..f81adabda38c 100644
> > > --- a/kernel/trace/bpf_trace.c
> > > +++ b/kernel/trace/bpf_trace.c
> > > @@ -1053,9 +1053,15 @@ static unsigned long get_entry_ip(unsigned long fentry_ip)
> > >  {
> > >     u32 instr;
> > >
> > > -   /* Being extra safe in here in case entry ip is on the page-edge. */
> > > -   if (get_kernel_nofault(instr, (u32 *) fentry_ip - 1))
> > > -           return fentry_ip;
> > > +   /* We want to be extra safe in case entry ip is on the page edge,
> > > +    * but otherwise we need to avoid get_kernel_nofault()'s overhead.
> > > +    */
> > > +   if ((fentry_ip & ~PAGE_MASK) < ENDBR_INSN_SIZE) {
> > > +           if (get_kernel_nofault(instr, (u32 *)(fentry_ip - ENDBR_INSN_SIZE)))
> > > +                   return fentry_ip;
> > > +   } else {
> > > +           instr = *(u32 *)(fentry_ip - ENDBR_INSN_SIZE);
> > > +   }
> > >     if (is_endbr(instr))
> > >             fentry_ip -= ENDBR_INSN_SIZE;
> > >     return fentry_ip;
> > > --
> > > 2.43.0
> > >
> >
> >
> > --
> > Masami Hiramatsu (Google) <mhiramat@kernel.org>
> >
Masami Hiramatsu (Google) March 20, 2024, 11:46 p.m. UTC | #4
On Wed, 20 Mar 2024 10:46:54 -0700
Andrii Nakryiko <andrii.nakryiko@gmail.com> wrote:

> On Wed, Mar 20, 2024 at 1:34 AM Jiri Olsa <olsajiri@gmail.com> wrote:
> >
> > On Wed, Mar 20, 2024 at 12:47:42PM +0900, Masami Hiramatsu wrote:
> > > On Tue, 19 Mar 2024 14:20:13 -0700
> > > Andrii Nakryiko <andrii@kernel.org> wrote:
> > >
> > > > get_kernel_nofault() (or, rather, underlying copy_from_kernel_nofault())
> > > > is not free and it does pop up in performance profiles when
> > > > kprobes are heavily utilized with CONFIG_X86_KERNEL_IBT=y config.
> > > >
> > > > Let's avoid using it if we know that fentry_ip - 4 can't cross page
> > > > boundary. We do that by masking lowest 12 bits and checking if they are
> > > > >= 4, in which case we can do direct memory read.
> > > >
> > > > Another benefit (and actually what caused a closer look at this part of
> > > > code) is that now LBR record is (typically) not wasted on
> > > > copy_from_kernel_nofault() call and code, which helps tools like
> > > > retsnoop that grab LBR records from inside BPF code in kretprobes.
> >
> > I think this is nice improvement
> >
> > Acked-by: Jiri Olsa <jolsa@kernel.org>
> >
> 
> Masami, are you ok if we land this rather straightforward fix in
> bpf-next tree for now, and then you or someone a bit more familiar
> with ftrace/kprobe internals can generalize this in a more generic
> way?

I'm OK with this change as a short-term fix. As far as I can see, the
kprobe-side change may involve more kprobe-internal changes, so
Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>

> 
> > >
> > > Hmm, we may better to have this function in kprobe side and
> > > store a flag which such architecture dependent offset is added.
> > > That is more natural.
> >
> > I like the idea of new flag saying the address was adjusted for endbr
> >
> 
> instead of a flag, can kprobe low-level infrastructure just provide
> "effective fentry ip" without any flags, so that BPF side of things
> don't have to care?

It's possible. But that would be a bit BPF-specific and wouldn't really
fit kprobe itself. I think we can add it to trace_kprobe instead of
kprobe, which can be reached from the struct kprobe *kp.

Thank you,

> 
> > kprobe adjust the address in arch_adjust_kprobe_addr, it could be
> > easily added in there and then we'd adjust the address in get_entry_ip
> > accordingly
> >
> > jirka
> >
> > >
> > > Thanks!
> > >
> > > >
> > > > Cc: Masami Hiramatsu <mhiramat@kernel.org>
> > > > Cc: Peter Zijlstra <peterz@infradead.org>
> > > > Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
> > > > ---
> > > >  kernel/trace/bpf_trace.c | 12 +++++++++---
> > > >  1 file changed, 9 insertions(+), 3 deletions(-)
> > > >
> > > > diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> > > > index 0a5c4efc73c3..f81adabda38c 100644
> > > > --- a/kernel/trace/bpf_trace.c
> > > > +++ b/kernel/trace/bpf_trace.c
> > > > @@ -1053,9 +1053,15 @@ static unsigned long get_entry_ip(unsigned long fentry_ip)
> > > >  {
> > > >     u32 instr;
> > > >
> > > > -   /* Being extra safe in here in case entry ip is on the page-edge. */
> > > > -   if (get_kernel_nofault(instr, (u32 *) fentry_ip - 1))
> > > > -           return fentry_ip;
> > > > +   /* We want to be extra safe in case entry ip is on the page edge,
> > > > +    * but otherwise we need to avoid get_kernel_nofault()'s overhead.
> > > > +    */
> > > > +   if ((fentry_ip & ~PAGE_MASK) < ENDBR_INSN_SIZE) {
> > > > +           if (get_kernel_nofault(instr, (u32 *)(fentry_ip - ENDBR_INSN_SIZE)))
> > > > +                   return fentry_ip;
> > > > +   } else {
> > > > +           instr = *(u32 *)(fentry_ip - ENDBR_INSN_SIZE);
> > > > +   }
> > > >     if (is_endbr(instr))
> > > >             fentry_ip -= ENDBR_INSN_SIZE;
> > > >     return fentry_ip;
> > > > --
> > > > 2.43.0
> > > >
> > >
> > >
> > > --
> > > Masami Hiramatsu (Google) <mhiramat@kernel.org>
> > >
Andrii Nakryiko March 21, 2024, 4:16 p.m. UTC | #5
On Wed, Mar 20, 2024 at 4:46 PM Masami Hiramatsu <mhiramat@kernel.org> wrote:
>
> On Wed, 20 Mar 2024 10:46:54 -0700
> Andrii Nakryiko <andrii.nakryiko@gmail.com> wrote:
>
> > On Wed, Mar 20, 2024 at 1:34 AM Jiri Olsa <olsajiri@gmail.com> wrote:
> > >
> > > On Wed, Mar 20, 2024 at 12:47:42PM +0900, Masami Hiramatsu wrote:
> > > > On Tue, 19 Mar 2024 14:20:13 -0700
> > > > Andrii Nakryiko <andrii@kernel.org> wrote:
> > > >
> > > > > get_kernel_nofault() (or, rather, underlying copy_from_kernel_nofault())
> > > > > is not free and it does pop up in performance profiles when
> > > > > kprobes are heavily utilized with CONFIG_X86_KERNEL_IBT=y config.
> > > > >
> > > > > Let's avoid using it if we know that fentry_ip - 4 can't cross page
> > > > > boundary. We do that by masking lowest 12 bits and checking if they are
> > > > > >= 4, in which case we can do direct memory read.
> > > > >
> > > > > Another benefit (and actually what caused a closer look at this part of
> > > > > code) is that now LBR record is (typically) not wasted on
> > > > > copy_from_kernel_nofault() call and code, which helps tools like
> > > > > retsnoop that grab LBR records from inside BPF code in kretprobes.
> > >
> > > I think this is nice improvement
> > >
> > > Acked-by: Jiri Olsa <jolsa@kernel.org>
> > >
> >
> > Masami, are you ok if we land this rather straightforward fix in
> > bpf-next tree for now, and then you or someone a bit more familiar
> > with ftrace/kprobe internals can generalize this in a more generic
> > way?
>
> I'm OK for this change for short term fix. As far as I can see, the
> kprobe-side change may involve more kprobe internal changes, so
>
> Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
>

Great, thank you!

> >
> > > >
> > > > Hmm, we may better to have this function in kprobe side and
> > > > store a flag which such architecture dependent offset is added.
> > > > That is more natural.
> > >
> > > I like the idea of new flag saying the address was adjusted for endbr
> > >
> >
> > instead of a flag, can kprobe low-level infrastructure just provide
> > "effective fentry ip" without any flags, so that BPF side of things
> > don't have to care?
>
> It's possible. But it is a bit only for BPF and not fit to kprobe
> itself. I think we can add it in trace_kprobe instead of kprobe,
> which can be accessed from struct kprobe *kp.

sure, if it can be just an "endbr64 offset" instead of a true/false
flag, that would help avoid extra conditionals in the hot path (those
conditionals waste LBR records in some modes, and those records are
important in some applications)
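
As a rough sketch of that "offset instead of flag" idea (the endbr_adjust
field is hypothetical, not existing code; the container_of() pattern is
roughly how trace_kprobe.c already recovers its trace_kprobe from the struct
kprobe that fired), the per-hit path would become a branch-free subtraction
with no instruction read at all:

static unsigned long get_entry_ip_sketch(struct kprobe *kp,
					 unsigned long fentry_ip)
{
	/* recover the wrapping trace_kprobe from the kprobe that fired */
	struct trace_kprobe *tk = container_of(kp, struct trace_kprobe, rp.kp);

	/* endbr_adjust is hypothetical: 0 or ENDBR_INSN_SIZE, filled in
	 * once at registration time; no is_endbr() read and no branch
	 */
	return fentry_ip - tk->endbr_adjust;
}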

>
> Thank you,
>
> >
> > > kprobe adjust the address in arch_adjust_kprobe_addr, it could be
> > > easily added in there and then we'd adjust the address in get_entry_ip
> > > accordingly
> > >
> > > jirka
> > >
> > > >
> > > > Thanks!
> > > >
> > > > >
> > > > > Cc: Masami Hiramatsu <mhiramat@kernel.org>
> > > > > Cc: Peter Zijlstra <peterz@infradead.org>
> > > > > Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
> > > > > ---
> > > > >  kernel/trace/bpf_trace.c | 12 +++++++++---
> > > > >  1 file changed, 9 insertions(+), 3 deletions(-)
> > > > >
> > > > > diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> > > > > index 0a5c4efc73c3..f81adabda38c 100644
> > > > > --- a/kernel/trace/bpf_trace.c
> > > > > +++ b/kernel/trace/bpf_trace.c
> > > > > @@ -1053,9 +1053,15 @@ static unsigned long get_entry_ip(unsigned long fentry_ip)
> > > > >  {
> > > > >     u32 instr;
> > > > >
> > > > > -   /* Being extra safe in here in case entry ip is on the page-edge. */
> > > > > -   if (get_kernel_nofault(instr, (u32 *) fentry_ip - 1))
> > > > > -           return fentry_ip;
> > > > > +   /* We want to be extra safe in case entry ip is on the page edge,
> > > > > +    * but otherwise we need to avoid get_kernel_nofault()'s overhead.
> > > > > +    */
> > > > > +   if ((fentry_ip & ~PAGE_MASK) < ENDBR_INSN_SIZE) {
> > > > > +           if (get_kernel_nofault(instr, (u32 *)(fentry_ip - ENDBR_INSN_SIZE)))
> > > > > +                   return fentry_ip;
> > > > > +   } else {
> > > > > +           instr = *(u32 *)(fentry_ip - ENDBR_INSN_SIZE);
> > > > > +   }
> > > > >     if (is_endbr(instr))
> > > > >             fentry_ip -= ENDBR_INSN_SIZE;
> > > > >     return fentry_ip;
> > > > > --
> > > > > 2.43.0
> > > > >
> > > >
> > > >
> > > > --
> > > > Masami Hiramatsu (Google) <mhiramat@kernel.org>
> > > >
>
>
> --
> Masami Hiramatsu (Google) <mhiramat@kernel.org>
patchwork-bot+netdevbpf@kernel.org March 25, 2024, 4:10 p.m. UTC | #6
Hello:

This patch was applied to bpf/bpf-next.git (master)
by Daniel Borkmann <daniel@iogearbox.net>:

On Tue, 19 Mar 2024 14:20:13 -0700 you wrote:
> get_kernel_nofault() (or, rather, underlying copy_from_kernel_nofault())
> is not free and it does pop up in performance profiles when
> kprobes are heavily utilized with CONFIG_X86_KERNEL_IBT=y config.
> 
> Let's avoid using it if we know that fentry_ip - 4 can't cross page
> boundary. We do that by masking lowest 12 bits and checking if they are
> >= 4, in which case we can do direct memory read.
> 
> [...]

Here is the summary with links:
  - [bpf-next] bpf: avoid get_kernel_nofault() to fetch kprobe entry IP
    https://git.kernel.org/bpf/bpf-next/c/a8497506cd2c

You are awesome, thank you!

Patch

diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 0a5c4efc73c3..f81adabda38c 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -1053,9 +1053,15 @@  static unsigned long get_entry_ip(unsigned long fentry_ip)
 {
 	u32 instr;
 
-	/* Being extra safe in here in case entry ip is on the page-edge. */
-	if (get_kernel_nofault(instr, (u32 *) fentry_ip - 1))
-		return fentry_ip;
+	/* We want to be extra safe in case entry ip is on the page edge,
+	 * but otherwise we need to avoid get_kernel_nofault()'s overhead.
+	 */
+	if ((fentry_ip & ~PAGE_MASK) < ENDBR_INSN_SIZE) {
+		if (get_kernel_nofault(instr, (u32 *)(fentry_ip - ENDBR_INSN_SIZE)))
+			return fentry_ip;
+	} else {
+		instr = *(u32 *)(fentry_ip - ENDBR_INSN_SIZE);
+	}
 	if (is_endbr(instr))
 		fentry_ip -= ENDBR_INSN_SIZE;
 	return fentry_ip;