diff mbox series

[bpf-next] bpf: Use fake pt_regs when doing bpf syscall tracepoint tracing

Message ID 20240910034306.3122378-1-yonghong.song@linux.dev (mailing list archive)
State Superseded
Delegated to: BPF
Headers show
Series [bpf-next] bpf: Use fake pt_regs when doing bpf syscall tracepoint tracing | expand

Checks

Context Check Description
bpf/vmtest-bpf-next-PR success PR summary
bpf/vmtest-bpf-next-VM_Test-0 success Logs for Lint
bpf/vmtest-bpf-next-VM_Test-3 success Logs for Validate matrix.py
bpf/vmtest-bpf-next-VM_Test-4 success Logs for aarch64-gcc / build / build for aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-2 success Logs for Unittests
bpf/vmtest-bpf-next-VM_Test-1 success Logs for ShellCheck
bpf/vmtest-bpf-next-VM_Test-5 success Logs for aarch64-gcc / build-release
bpf/vmtest-bpf-next-VM_Test-10 success Logs for aarch64-gcc / veristat
bpf/vmtest-bpf-next-VM_Test-11 success Logs for s390x-gcc / build / build for s390x with gcc
bpf/vmtest-bpf-next-VM_Test-12 success Logs for s390x-gcc / build-release
bpf/vmtest-bpf-next-VM_Test-15 success Logs for s390x-gcc / test (test_verifier, false, 360) / test_verifier on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-19 success Logs for x86_64-gcc / build-release
bpf/vmtest-bpf-next-VM_Test-17 success Logs for set-matrix
bpf/vmtest-bpf-next-VM_Test-18 success Logs for x86_64-gcc / build / build for x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-16 success Logs for s390x-gcc / veristat
bpf/vmtest-bpf-next-VM_Test-27 success Logs for x86_64-llvm-17 / build / build for x86_64 with llvm-17
bpf/vmtest-bpf-next-VM_Test-25 success Logs for x86_64-gcc / test (test_verifier, false, 360) / test_verifier on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-28 success Logs for x86_64-llvm-17 / build-release / build for x86_64 with llvm-17-O2
bpf/vmtest-bpf-next-VM_Test-32 success Logs for x86_64-llvm-17 / test (test_verifier, false, 360) / test_verifier on x86_64 with llvm-17
bpf/vmtest-bpf-next-VM_Test-34 success Logs for x86_64-llvm-18 / build / build for x86_64 with llvm-18
bpf/vmtest-bpf-next-VM_Test-33 success Logs for x86_64-llvm-17 / veristat
bpf/vmtest-bpf-next-VM_Test-35 success Logs for x86_64-llvm-18 / build-release / build for x86_64 with llvm-18-O2
bpf/vmtest-bpf-next-VM_Test-41 success Logs for x86_64-llvm-18 / veristat
bpf/vmtest-bpf-next-VM_Test-6 success Logs for aarch64-gcc / test (test_maps, false, 360) / test_maps on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-7 success Logs for aarch64-gcc / test (test_progs, false, 360) / test_progs on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-8 success Logs for aarch64-gcc / test (test_progs_no_alu32, false, 360) / test_progs_no_alu32 on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-20 success Logs for x86_64-gcc / test (test_maps, false, 360) / test_maps on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-9 success Logs for aarch64-gcc / test (test_verifier, false, 360) / test_verifier on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-21 success Logs for x86_64-gcc / test (test_progs, false, 360) / test_progs on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-23 success Logs for x86_64-gcc / test (test_progs_no_alu32_parallel, true, 30) / test_progs_no_alu32_parallel on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-22 success Logs for x86_64-gcc / test (test_progs_no_alu32, false, 360) / test_progs_no_alu32 on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-24 success Logs for x86_64-gcc / test (test_progs_parallel, true, 30) / test_progs_parallel on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-26 success Logs for x86_64-gcc / veristat / veristat on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-29 success Logs for x86_64-llvm-17 / test (test_maps, false, 360) / test_maps on x86_64 with llvm-17
bpf/vmtest-bpf-next-VM_Test-30 success Logs for x86_64-llvm-17 / test (test_progs, false, 360) / test_progs on x86_64 with llvm-17
bpf/vmtest-bpf-next-VM_Test-36 success Logs for x86_64-llvm-18 / test (test_maps, false, 360) / test_maps on x86_64 with llvm-18
bpf/vmtest-bpf-next-VM_Test-37 success Logs for x86_64-llvm-18 / test (test_progs, false, 360) / test_progs on x86_64 with llvm-18
bpf/vmtest-bpf-next-VM_Test-38 success Logs for x86_64-llvm-18 / test (test_progs_cpuv4, false, 360) / test_progs_cpuv4 on x86_64 with llvm-18
bpf/vmtest-bpf-next-VM_Test-39 success Logs for x86_64-llvm-18 / test (test_progs_no_alu32, false, 360) / test_progs_no_alu32 on x86_64 with llvm-18
bpf/vmtest-bpf-next-VM_Test-40 success Logs for x86_64-llvm-18 / test (test_verifier, false, 360) / test_verifier on x86_64 with llvm-18
bpf/vmtest-bpf-next-VM_Test-31 success Logs for x86_64-llvm-17 / test (test_progs_no_alu32, false, 360) / test_progs_no_alu32 on x86_64 with llvm-17
bpf/vmtest-bpf-next-VM_Test-13 success Logs for s390x-gcc / test (test_progs, false, 360) / test_progs on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-14 success Logs for s390x-gcc / test (test_progs_no_alu32, false, 360) / test_progs_no_alu32 on s390x with gcc
netdev/tree_selection success Clearly marked for bpf-next
netdev/apply success Patch already applied to bpf-next-0

Commit Message

Yonghong Song Sept. 10, 2024, 3:43 a.m. UTC
Salvatore Benedetto reported an issue where, when doing syscall tracepoint
tracing, the kernel stack is empty. For example, with the following
command line
  bpftrace -e 'tracepoint:syscalls:sys_enter_read { print("Kernel Stack\n"); print(kstack()); }'
the output will be
===
  Kernel Stack
===

Further analysis shows that the pt_regs used for bpf syscall tracepoint
tracing is the one constructed during the user->kernel transition.
The call stack looks like
  perf_syscall_enter+0x88/0x7c0
  trace_sys_enter+0x41/0x80
  syscall_trace_enter+0x100/0x160
  do_syscall_64+0x38/0xf0
  entry_SYSCALL_64_after_hwframe+0x76/0x7e

The ip address stored in pt_regs is a user-space address, hence no kernel
stack is printed.

To fix the issue, we need to use a kernel address in pt_regs.
In the kernel repo, there are already a few cases like this. For example,
in kernel/trace/bpf_trace.c, several perf_fetch_caller_regs(fake_regs_ptr)
instances are used to supply the ip address or to use the ip address to
construct a call stack.

The patch follows the above examples by using a fake pt_regs.
The pt_regs is stored on the local stack since syscall tracepoint
tracing runs in process context, and there is no possibility that
different concurrent syscall tracepoint tracings could interfere with
each other. This is similar to a perf_fetch_caller_regs() use case in
kernel/trace/trace_event_perf.c, in function perf_ftrace_function_call(),
where a local pt_regs is used.

With this patch, for the above bpftrace script, I got the following output:
===
  Kernel Stack

        syscall_trace_enter+407
        syscall_trace_enter+407
        do_syscall_64+74
        entry_SYSCALL_64_after_hwframe+75
===

Reported-by: Salvatore Benedetto <salvabenedetto@meta.com>
Suggested-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
---
 kernel/trace/trace_syscalls.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

Comments

Andrii Nakryiko Sept. 10, 2024, 5:34 a.m. UTC | #1
On Mon, Sep 9, 2024 at 8:43 PM Yonghong Song <yonghong.song@linux.dev> wrote:
>
> [...]
>

Note, we need to solve the same for perf_call_bpf_exit().

pw-bot: cr

> diff --git a/kernel/trace/trace_syscalls.c b/kernel/trace/trace_syscalls.c
> index 9c581d6da843..063f51952d49 100644
> --- a/kernel/trace/trace_syscalls.c
> +++ b/kernel/trace/trace_syscalls.c
> @@ -559,12 +559,15 @@ static int perf_call_bpf_enter(struct trace_event_call *call, struct pt_regs *re

let's also drop the struct pt_regs * argument from
perf_call_bpf_{enter,exit}(); it is not actually used anymore

>                 int syscall_nr;
>                 unsigned long args[SYSCALL_DEFINE_MAXARGS];
>         } __aligned(8) param;
> +       struct pt_regs fake_regs;
>         int i;
>
>         BUILD_BUG_ON(sizeof(param.ent) < sizeof(void *));
>
>         /* bpf prog requires 'regs' to be the first member in the ctx (a.k.a. &param) */
> -       *(struct pt_regs **)&param = regs;
> +       memset(&fake_regs, 0, sizeof(fake_regs));

sizeof(struct pt_regs) == 168 on x86-64, and on arm64 it's a whopping
336 bytes, so these memset(0) calls are not free for sure.

But we don't need to do this unnecessary work all the time.

I was initially going to suggest using get_bpf_raw_tp_regs() from
kernel/trace/bpf_trace.c to get a temporary pt_regs that is already
memset(0) and using it to initialize these minimal "fake regs".

But it turns out we don't even need that. Note
perf_trace_buf_alloc(): it has a `struct pt_regs **` second argument,
and if you pass a valid pointer there, it will return a "fake regs"
struct to be used. We already use that functionality in
perf_trace_##call in include/trace/perf.h (i.e., non-syscall
tracepoints), so this seems to be a perfect fit.

> +       perf_fetch_caller_regs(&fake_regs);
> +       *(struct pt_regs **)&param = &fake_regs;
>         param.syscall_nr = rec->nr;
>         for (i = 0; i < sys_data->nb_args; i++)
>                 param.args[i] = rec->args[i];
> --
> 2.43.5
>
Andrii Nakryiko Sept. 10, 2024, 5:42 a.m. UTC | #2
On Mon, Sep 9, 2024 at 10:34 PM Andrii Nakryiko
<andrii.nakryiko@gmail.com> wrote:
>
> On Mon, Sep 9, 2024 at 8:43 PM Yonghong Song <yonghong.song@linux.dev> wrote:
> >
> > [...]
>
> Note, we need to solve the same for perf_call_bpf_exit().
>
> pw-bot: cr
>

BTW, we lived with this bug for years, so I suggest basing your fix on
top of bpf-next/master, not bpf/master, which will give people a bit of
time to validate that the fix works as expected and doesn't produce
any undesirable side effects before it makes it into the final
Linux release.

> > [...]
Yonghong Song Sept. 10, 2024, 3:23 p.m. UTC | #3
On 9/9/24 10:34 PM, Andrii Nakryiko wrote:
> On Mon, Sep 9, 2024 at 8:43 PM Yonghong Song <yonghong.song@linux.dev> wrote:
>> [...]
> Note, we need to solve the same for perf_call_bpf_exit().

Sorry, missed this one! Will add in the next revision.

>
> pw-bot: cr
>
>> diff --git a/kernel/trace/trace_syscalls.c b/kernel/trace/trace_syscalls.c
>> index 9c581d6da843..063f51952d49 100644
>> --- a/kernel/trace/trace_syscalls.c
>> +++ b/kernel/trace/trace_syscalls.c
>> @@ -559,12 +559,15 @@ static int perf_call_bpf_enter(struct trace_event_call *call, struct pt_regs *re
> let's also drop the struct pt_regs * argument from
> perf_call_bpf_{enter,exit}(); it is not actually used anymore

Ack.

>
>>                  int syscall_nr;
>>                  unsigned long args[SYSCALL_DEFINE_MAXARGS];
>>          } __aligned(8) param;
>> +       struct pt_regs fake_regs;
>>          int i;
>>
>>          BUILD_BUG_ON(sizeof(param.ent) < sizeof(void *));
>>
>>          /* bpf prog requires 'regs' to be the first member in the ctx (a.k.a. &param) */
>> -       *(struct pt_regs **)&param = regs;
>> +       memset(&fake_regs, 0, sizeof(fake_regs));
> sizeof(struct pt_regs) == 168 on x86-64, and on arm64 it's a whopping
> 336 bytes, so these memset(0) calls are not free for sure.

I calculated the size on x86-64 and felt it might be acceptable,
but I was not aware that arm64 has a much larger size like 336 bytes.
Indeed, 336 bytes on the stack is quite large.

>
> But we don't need to do this unnecessary work all the time.
>
> I initially was going to suggest to use get_bpf_raw_tp_regs() from
> kernel/trace/bpf_trace.c to get a temporary pt_regs that was already
> memset(0) and used to initialize these minimal "fake regs".
>
> But, it turns out we don't need to do even that. Note
> perf_trace_buf_alloc(), it has `struct pt_regs **` second argument,
> and if you pass a valid pointer there, it will return "fake regs"
> struct to be used. We already use that functionality in
> perf_trace_##call in include/trace/perf.h (i.e., non-syscall
> tracepoints), so this seems to be a perfect fit.

I double-checked, and the perf_call_bpf_enter() call is in atomic
process context (preemption disabled), so we are safe to use
perf_trace_buf_alloc(), which uses per-CPU variables.

>
>> [...]
Yonghong Song Sept. 10, 2024, 3:25 p.m. UTC | #4
On 9/9/24 10:42 PM, Andrii Nakryiko wrote:
> On Mon, Sep 9, 2024 at 10:34 PM Andrii Nakryiko
> <andrii.nakryiko@gmail.com> wrote:
>> On Mon, Sep 9, 2024 at 8:43 PM Yonghong Song <yonghong.song@linux.dev> wrote:
>>> [...]
>> Note, we need to solve the same for perf_call_bpf_exit().
>>
>> pw-bot: cr
>>
> BTW, we lived with this bug for years, so I suggest basing your fix on
> top of bpf-next/master, not bpf/master, which will give people a bit of
> time to validate that the fix works as expected and doesn't produce
> any undesirable side effects, before this makes it into the final
> Linux release.

Yes, I did. Note that I indeed used 'bpf-next' in the subject above.

>
>>> [...]
Andrii Nakryiko Sept. 10, 2024, 4:50 p.m. UTC | #5
On Tue, Sep 10, 2024 at 8:25 AM Yonghong Song <yonghong.song@linux.dev> wrote:
>
>
> On 9/9/24 10:42 PM, Andrii Nakryiko wrote:
> > On Mon, Sep 9, 2024 at 10:34 PM Andrii Nakryiko
> > <andrii.nakryiko@gmail.com> wrote:
> >> On Mon, Sep 9, 2024 at 8:43 PM Yonghong Song <yonghong.song@linux.dev> wrote:
> >>> [...]
> >> Note, we need to solve the same for perf_call_bpf_exit().
> >>
> >> pw-bot: cr
> >>
> > BTW, we lived with this bug for years, so I suggest basing your fix on
> > top of bpf-next/master, not bpf/master, which will give people a bit of
> > time to validate that the fix works as expected and doesn't produce
> > any undesirable side effects, before this makes it into the final
> > Linux release.
>
> Yes, I did. See I indeed use 'bpf-next' in subject above.

Huh, strange; I actually tried to apply your patch to bpf-next/master
and it didn't apply cleanly. It did apply to bpf/master, though, which
is why I assumed you had based it off of bpf/master.

>
> >
> >>> [...]
Yonghong Song Sept. 10, 2024, 6:22 p.m. UTC | #6
On 9/10/24 9:50 AM, Andrii Nakryiko wrote:
> On Tue, Sep 10, 2024 at 8:25 AM Yonghong Song <yonghong.song@linux.dev> wrote:
>>
>> On 9/9/24 10:42 PM, Andrii Nakryiko wrote:
>>> On Mon, Sep 9, 2024 at 10:34 PM Andrii Nakryiko
>>> <andrii.nakryiko@gmail.com> wrote:
>>>> On Mon, Sep 9, 2024 at 8:43 PM Yonghong Song <yonghong.song@linux.dev> wrote:
>>>>> Salvatore Benedetto reported an issue that when doing syscall tracepoint
>>>>> tracing the kernel stack is empty. For example, using the following
>>>>> command line
>>>>>     bpftrace -e 'tracepoint:syscalls:sys_enter_read { print("Kernel Stack\n"); print(kstack()); }'
>>>>> the output will be
>>>>> ===
>>>>>     Kernel Stack
>>>>> ===
>>>>>
>>>>> Further analysis shows that pt_regs used for bpf syscall tracepoint
>>>>> tracing is from the one constructed during user->kernel transition.
>>>>> The call stack looks like
>>>>>     perf_syscall_enter+0x88/0x7c0
>>>>>     trace_sys_enter+0x41/0x80
>>>>>     syscall_trace_enter+0x100/0x160
>>>>>     do_syscall_64+0x38/0xf0
>>>>>     entry_SYSCALL_64_after_hwframe+0x76/0x7e
>>>>>
> >>>>> The ip address stored in pt_regs is from user space, hence no kernel
> >>>>> stack is printed.
> >>>>>
> >>>>> To fix the issue, we need pt_regs to hold a kernel address.
> >>>>> In the kernel repo, there are already a few cases like this. For example,
> >>>>> in kernel/trace/bpf_trace.c, several perf_fetch_caller_regs(fake_regs_ptr)
> >>>>> calls are used to supply the ip address or to construct the call stack
> >>>>> from it.
>>>>>
>>>>> The patch follows the above example by using a fake pt_regs.
>>>>> The pt_regs is stored in local stack since the syscall tracepoint
> >>>>> tracing is in process context and there is no possibility that
> >>>>> different concurrent syscall tracepoint tracings could interfere with each
>>>>> other. This is similar to a perf_fetch_caller_regs() use case in
>>>>> kernel/trace/trace_event_perf.c with function perf_ftrace_function_call()
>>>>> where a local pt_regs is used.
>>>>>
>>>>> With this patch, for the above bpftrace script, I got the following output
>>>>> ===
>>>>>     Kernel Stack
>>>>>
>>>>>           syscall_trace_enter+407
>>>>>           syscall_trace_enter+407
>>>>>           do_syscall_64+74
>>>>>           entry_SYSCALL_64_after_hwframe+75
>>>>> ===
>>>>>
>>>>> Reported-by: Salvatore Benedetto <salvabenedetto@meta.com>
>>>>> Suggested-by: Andrii Nakryiko <andrii@kernel.org>
>>>>> Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
>>>>> ---
>>>>>    kernel/trace/trace_syscalls.c | 5 ++++-
>>>>>    1 file changed, 4 insertions(+), 1 deletion(-)
>>>>>
>>>> Note, we need to solve the same for perf_call_bpf_exit().
>>>>
>>>> pw-bot: cr
>>>>
>>> BTW, we lived with this bug for years, so I suggest basing your fix on
> >>> top of bpf-next/master, not bpf/master, which will give people a bit of
>>> time to validate that the fix works as expected and doesn't produce
>>> any undesirable side effects, before this makes it into the final
>>> Linux release.
> >> Yes, I did. Note that I indeed used 'bpf-next' in the subject above.
> Huh, strange, I actually tried to apply your patch to bpf-next/master
> and it didn't apply cleanly. It did apply to bpf/master, though, which
> is why I assumed you based it off of bpf/master.

Interesting. The following is my git history:

7b71206057440d9559ecb9cd02d891f46927b272 (HEAD -> trace_syscall) bpf: Use fake pt_regs when doing bpf syscall tracepoint tracing
41d0c4677feee1ea063e0f2c2af72dc953b1f1cc (origin/master, origin/HEAD, master) libbpf: Fix some typos in comments
72d8508ecd3b081dba03ec00930c6b07c1ad55d3 MAINTAINERS: BPF ARC JIT: Update my e-mail address
bee109b7b3e50739b88252a219fa07ecd78ad628 bpf: Fix error message on kfunc arg type mismatch
...

Not sure what is going on ...
   

>
>>>>> diff --git a/kernel/trace/trace_syscalls.c b/kernel/trace/trace_syscalls.c
>>>>> index 9c581d6da843..063f51952d49 100644
>>>>> --- a/kernel/trace/trace_syscalls.c
>>>>> +++ b/kernel/trace/trace_syscalls.c
>>>>> @@ -559,12 +559,15 @@ static int perf_call_bpf_enter(struct trace_event_call *call, struct pt_regs *re
> >>>> let's also drop the struct pt_regs * argument to
> >>>> perf_call_bpf_{enter,exit}(); it is not actually used anymore
>>>>
>>>>>                   int syscall_nr;
>>>>>                   unsigned long args[SYSCALL_DEFINE_MAXARGS];
>>>>>           } __aligned(8) param;
>>>>> +       struct pt_regs fake_regs;
>>>>>           int i;
>>>>>
>>>>>           BUILD_BUG_ON(sizeof(param.ent) < sizeof(void *));
>>>>>
>>>>>           /* bpf prog requires 'regs' to be the first member in the ctx (a.k.a. &param) */
>>>>> -       *(struct pt_regs **)&param = regs;
>>>>> +       memset(&fake_regs, 0, sizeof(fake_regs));
>>>> sizeof(struct pt_regs) == 168 on x86-64, and on arm64 it's a whopping
>>>> 336 bytes, so these memset(0) calls are not free for sure.
>>>>
>>>> But we don't need to do this unnecessary work all the time.
>>>>
>>>> I initially was going to suggest to use get_bpf_raw_tp_regs() from
>>>> kernel/trace/bpf_trace.c to get a temporary pt_regs that was already
>>>> memset(0) and used to initialize these minimal "fake regs".
>>>>
> >>>> But it turns out we don't even need to do that. Note
> >>>> perf_trace_buf_alloc(): it takes a `struct pt_regs **` second argument,
> >>>> and if you pass a valid pointer there, it will return a "fake regs"
>>>> struct to be used. We already use that functionality in
>>>> perf_trace_##call in include/trace/perf.h (i.e., non-syscall
>>>> tracepoints), so this seems to be a perfect fit.
>>>>
>>>>> +       perf_fetch_caller_regs(&fake_regs);
>>>>> +       *(struct pt_regs **)&param = &fake_regs;
>>>>>           param.syscall_nr = rec->nr;
>>>>>           for (i = 0; i < sys_data->nb_args; i++)
>>>>>                   param.args[i] = rec->args[i];
>>>>> --
>>>>> 2.43.5
>>>>>
Andrii Nakryiko Sept. 10, 2024, 6:25 p.m. UTC | #7
On Tue, Sep 10, 2024 at 11:22 AM Yonghong Song <yonghong.song@linux.dev> wrote:
>
>
> On 9/10/24 9:50 AM, Andrii Nakryiko wrote:
> > On Tue, Sep 10, 2024 at 8:25 AM Yonghong Song <yonghong.song@linux.dev> wrote:
> >>
> >> On 9/9/24 10:42 PM, Andrii Nakryiko wrote:
> >>> On Mon, Sep 9, 2024 at 10:34 PM Andrii Nakryiko
> >>> <andrii.nakryiko@gmail.com> wrote:
> >>>> On Mon, Sep 9, 2024 at 8:43 PM Yonghong Song <yonghong.song@linux.dev> wrote:
> >>>>> Salvatore Benedetto reported an issue where, when doing syscall tracepoint
> >>>>> tracing, the kernel stack is empty. For example, using the following
> >>>>> command line
> >>>>>     bpftrace -e 'tracepoint:syscalls:sys_enter_read { print("Kernel Stack\n"); print(kstack()); }'
> >>>>> the output will be
> >>>>> ===
> >>>>>     Kernel Stack
> >>>>> ===
> >>>>>
> >>>>> Further analysis shows that the pt_regs used for bpf syscall tracepoint
> >>>>> tracing is the one constructed during the user->kernel transition.
> >>>>> The call stack looks like
> >>>>>     perf_syscall_enter+0x88/0x7c0
> >>>>>     trace_sys_enter+0x41/0x80
> >>>>>     syscall_trace_enter+0x100/0x160
> >>>>>     do_syscall_64+0x38/0xf0
> >>>>>     entry_SYSCALL_64_after_hwframe+0x76/0x7e
> >>>>>
> >>>>> The ip address stored in pt_regs is from user space, hence no kernel
> >>>>> stack is printed.
> >>>>>
> >>>>> To fix the issue, we need pt_regs to hold a kernel address.
> >>>>> In the kernel repo, there are already a few cases like this. For example,
> >>>>> in kernel/trace/bpf_trace.c, several perf_fetch_caller_regs(fake_regs_ptr)
> >>>>> calls are used to supply the ip address or to construct the call stack
> >>>>> from it.
> >>>>>
> >>>>> The patch follows the above example by using a fake pt_regs.
> >>>>> The pt_regs is stored in local stack since the syscall tracepoint
> >>>>> tracing is in process context and there is no possibility that
> >>>>> different concurrent syscall tracepoint tracings could interfere with each
> >>>>> other. This is similar to a perf_fetch_caller_regs() use case in
> >>>>> kernel/trace/trace_event_perf.c with function perf_ftrace_function_call()
> >>>>> where a local pt_regs is used.
> >>>>>
> >>>>> With this patch, for the above bpftrace script, I got the following output
> >>>>> ===
> >>>>>     Kernel Stack
> >>>>>
> >>>>>           syscall_trace_enter+407
> >>>>>           syscall_trace_enter+407
> >>>>>           do_syscall_64+74
> >>>>>           entry_SYSCALL_64_after_hwframe+75
> >>>>> ===
> >>>>>
> >>>>> Reported-by: Salvatore Benedetto <salvabenedetto@meta.com>
> >>>>> Suggested-by: Andrii Nakryiko <andrii@kernel.org>
> >>>>> Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
> >>>>> ---
> >>>>>    kernel/trace/trace_syscalls.c | 5 ++++-
> >>>>>    1 file changed, 4 insertions(+), 1 deletion(-)
> >>>>>
> >>>> Note, we need to solve the same for perf_call_bpf_exit().
> >>>>
> >>>> pw-bot: cr
> >>>>
> >>> BTW, we lived with this bug for years, so I suggest basing your fix on
> >>> top of bpf-next/master, not bpf/master, which will give people a bit of
> >>> time to validate that the fix works as expected and doesn't produce
> >>> any undesirable side effects, before this makes it into the final
> >>> Linux release.
> >> Yes, I did. Note that I indeed used 'bpf-next' in the subject above.
> > Huh, strange, I actually tried to apply your patch to bpf-next/master
> > and it didn't apply cleanly. It did apply to bpf/master, though, which
> > is why I assumed you based it off of bpf/master.
>
> Interesting. The following is my git history:
>
> 7b71206057440d9559ecb9cd02d891f46927b272 (HEAD -> trace_syscall) bpf: Use fake pt_regs when doing bpf syscall tracepoint tracing
> 41d0c4677feee1ea063e0f2c2af72dc953b1f1cc (origin/master, origin/HEAD, master) libbpf: Fix some typos in comments
> 72d8508ecd3b081dba03ec00930c6b07c1ad55d3 MAINTAINERS: BPF ARC JIT: Update my e-mail address
> bee109b7b3e50739b88252a219fa07ecd78ad628 bpf: Fix error message on kfunc arg type mismatch
> ...
>
> Not sure what is going on ...
>

Doesn't matter, maybe my local bpf-next/master branch was screwed up.
Just send a v2 when you are ready and I'll try again :)

>
> >
> >>>>> diff --git a/kernel/trace/trace_syscalls.c b/kernel/trace/trace_syscalls.c
> >>>>> index 9c581d6da843..063f51952d49 100644
> >>>>> --- a/kernel/trace/trace_syscalls.c
> >>>>> +++ b/kernel/trace/trace_syscalls.c
> >>>>> @@ -559,12 +559,15 @@ static int perf_call_bpf_enter(struct trace_event_call *call, struct pt_regs *re
> >>>> let's also drop the struct pt_regs * argument to
> >>>> perf_call_bpf_{enter,exit}(); it is not actually used anymore
> >>>>
> >>>>>                   int syscall_nr;
> >>>>>                   unsigned long args[SYSCALL_DEFINE_MAXARGS];
> >>>>>           } __aligned(8) param;
> >>>>> +       struct pt_regs fake_regs;
> >>>>>           int i;
> >>>>>
> >>>>>           BUILD_BUG_ON(sizeof(param.ent) < sizeof(void *));
> >>>>>
> >>>>>           /* bpf prog requires 'regs' to be the first member in the ctx (a.k.a. &param) */
> >>>>> -       *(struct pt_regs **)&param = regs;
> >>>>> +       memset(&fake_regs, 0, sizeof(fake_regs));
> >>>> sizeof(struct pt_regs) == 168 on x86-64, and on arm64 it's a whopping
> >>>> 336 bytes, so these memset(0) calls are not free for sure.
> >>>>
> >>>> But we don't need to do this unnecessary work all the time.
> >>>>
> >>>> I initially was going to suggest to use get_bpf_raw_tp_regs() from
> >>>> kernel/trace/bpf_trace.c to get a temporary pt_regs that was already
> >>>> memset(0) and used to initialize these minimal "fake regs".
> >>>>
> >>>> But it turns out we don't even need to do that. Note
> >>>> perf_trace_buf_alloc(): it takes a `struct pt_regs **` second argument,
> >>>> and if you pass a valid pointer there, it will return a "fake regs"
> >>>> struct to be used. We already use that functionality in
> >>>> perf_trace_##call in include/trace/perf.h (i.e., non-syscall
> >>>> tracepoints), so this seems to be a perfect fit.
> >>>>
> >>>>> +       perf_fetch_caller_regs(&fake_regs);
> >>>>> +       *(struct pt_regs **)&param = &fake_regs;
> >>>>>           param.syscall_nr = rec->nr;
> >>>>>           for (i = 0; i < sys_data->nb_args; i++)
> >>>>>                   param.args[i] = rec->args[i];
> >>>>> --
> >>>>> 2.43.5
> >>>>>
diff mbox series

Patch

diff --git a/kernel/trace/trace_syscalls.c b/kernel/trace/trace_syscalls.c
index 9c581d6da843..063f51952d49 100644
--- a/kernel/trace/trace_syscalls.c
+++ b/kernel/trace/trace_syscalls.c
@@ -559,12 +559,15 @@  static int perf_call_bpf_enter(struct trace_event_call *call, struct pt_regs *re
 		int syscall_nr;
 		unsigned long args[SYSCALL_DEFINE_MAXARGS];
 	} __aligned(8) param;
+	struct pt_regs fake_regs;
 	int i;
 
 	BUILD_BUG_ON(sizeof(param.ent) < sizeof(void *));
 
 	/* bpf prog requires 'regs' to be the first member in the ctx (a.k.a. &param) */
-	*(struct pt_regs **)&param = regs;
+	memset(&fake_regs, 0, sizeof(fake_regs));
+	perf_fetch_caller_regs(&fake_regs);
+	*(struct pt_regs **)&param = &fake_regs;
 	param.syscall_nr = rec->nr;
 	for (i = 0; i < sys_data->nb_args; i++)
 		param.args[i] = rec->args[i];