[3/3] x86/kprobes: Fix arch_check_optimized_kprobe check within optimized_kprobe range

Message ID 20230215115430.236046-4-yangjihong1@huawei.com (mailing list archive)
State Superseded
Series kprobes: Fix issues related to optkprobe

Commit Message

Yang Jihong Feb. 15, 2023, 11:54 a.m. UTC
When arch_prepare_optimized_kprobe calculates the jump destination address,
it copies the original instructions from the already jmp-optimized kprobe
(see __recover_optprobed_insn) and computes the destination based on the
length of those original instructions.

arch_check_optimized_kprobe does not check KPROBE_FLAG_OPTIMIZED when
checking whether a jmp-optimized kprobe exists.
As a result, setup_detour_execution may jump into a range that has been
overwritten by the jump destination address, resulting in an invalid opcode error.
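
The check in question is the loop in arch_check_optimized_kprobe (quoted
from the diff below, with annotations added):

    for (i = 1; i < op->optinsn.size; i++) {
            p = get_kprobe(op->kp.addr + i);
            /*
             * A disabled-but-still-optimized kprobe passes this test even
             * though its jump is still live in the text (delayed
             * unoptimization), so the overlap goes undetected.
             */
            if (p && !kprobe_disabled(p))
                    return -EEXIST;
    }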

For example, assume that we register two kprobes at <func+9> and <func+11>
in function "func".
The original code of "func" is as follows:

   0xffffffff816cb5e9 <+9>:     push   %r12
   0xffffffff816cb5eb <+11>:    xor    %r12d,%r12d
   0xffffffff816cb5ee <+14>:    test   %rdi,%rdi
   0xffffffff816cb5f1 <+17>:    setne  %r12b
   0xffffffff816cb5f5 <+21>:    push   %rbp

1. Register the kprobe for <func+11>; call it kp1, with corresponding optimized_kprobe op1.
   After the optimization, the "func" code changes to:

   0xffffffff816cc079 <+9>:     push   %r12
   0xffffffff816cc07b <+11>:    jmp    0xffffffffa0210000
   0xffffffff816cc080 <+16>:    incl   0xf(%rcx)
   0xffffffff816cc083 <+19>:    xchg   %eax,%ebp
   0xffffffff816cc084 <+20>:    (bad)
   0xffffffff816cc085 <+21>:    push   %rbp

Now op1->flags == KPROBE_FLAG_OPTIMIZED.

2. Register the kprobe for <func+9>; call it kp2, with corresponding optimized_kprobe op2.

register_kprobe(kp2)
  register_aggr_kprobe
    alloc_aggr_kprobe
      __prepare_optimized_kprobe
        arch_prepare_optimized_kprobe
          __recover_optprobed_insn    // copy original bytes from kp1->optinsn.copied_insn,
                                      // jump address = <func+14>
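
For context, here is a rough sketch of that recovery step. It is abridged
and simplified from __recover_optprobed_insn in arch/x86/kernel/kprobes/opt.c
(the names follow the kernel source, but the body is not verbatim and offset
handling within the copied bytes is omitted):

    /* Sketch: recover the original bytes at 'addr' when a jmp-optimized
     * kprobe has already clobbered the live text around it. */
    kprobe_opcode_t *recover_sketch(kprobe_opcode_t *buf, unsigned long addr)
    {
            struct optimized_kprobe *op;
            struct kprobe *kp;
            int i;

            /* Search back over the jump width for an optimized kprobe
             * whose detour jump covers 'addr'. */
            for (i = 0; i < JMP32_INSN_SIZE; i++) {
                    kp = get_kprobe((void *)(addr - i));
                    if (!kp || !kprobe_optimized(kp))
                            continue;
                    op = container_of(kp, struct optimized_kprobe, kp);
                    /* Use the bytes saved at optimization time, not the
                     * live text, which now holds op's jump. */
                    memcpy(buf, op->optinsn.copied_insn, DISP32_SIZE);
                    return buf;
            }
            return (kprobe_opcode_t *)addr;  /* text is intact, use as-is */
    }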

3. Disable kp1:

disable_kprobe(kp1)
  __disable_kprobe
    ...
    if (p == orig_p || aggr_kprobe_disabled(orig_p)) {
      ret = disarm_kprobe(orig_p, true)       // add op1 to unoptimizing_list, not yet unoptimized
      orig_p->flags |= KPROBE_FLAG_DISABLED;  // op1->flags == KPROBE_FLAG_OPTIMIZED | KPROBE_FLAG_DISABLED
    ...
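
Note that the undo is only queued here. A rough expansion of the disarm path
(abridged from kernel/kprobes.c, not verbatim):

disarm_kprobe(op1, true)
  __disarm_kprobe
    unoptimize_kprobe(op1)
      list_add(&op1->list, &unoptimizing_list)  // undo is queued ...
      kick_kprobe_optimizer()                   // ... and applied later, asynchronously

So op1's jump at <func+11> stays live in the text, and op1 keeps
KPROBE_FLAG_OPTIMIZED.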

4. Unregister kp2:
__unregister_kprobe_top
  ...
  if (!kprobe_disabled(ap) && !kprobes_all_disarmed) {
    optimize_kprobe(op)
      ...
      if (arch_check_optimized_kprobe(op) < 0) // because op1 has KPROBE_FLAG_DISABLED, this does not return
        return;
      p->kp.flags |= KPROBE_FLAG_OPTIMIZED;   //  now op2 has KPROBE_FLAG_OPTIMIZED
  }

"func" code now is:

   0xffffffff816cc079 <+9>:     int3
   0xffffffff816cc07a <+10>:    push   %rsp
   0xffffffff816cc07b <+11>:    jmp    0xffffffffa0210000
   0xffffffff816cc080 <+16>:    incl   0xf(%rcx)
   0xffffffff816cc083 <+19>:    xchg   %eax,%ebp
   0xffffffff816cc084 <+20>:    (bad)
   0xffffffff816cc085 <+21>:    push   %rbp

5. If "func" is called, the int3 handler calls setup_detour_execution:

  if (p->flags & KPROBE_FLAG_OPTIMIZED) {
    ...
    regs->ip = (unsigned long)op->optinsn.insn + TMPL_END_IDX;
    ...
  }

The code at the destination address is:

   0xffffffffa021072c:  push   %r12
   0xffffffffa021072e:  xor    %r12d,%r12d
   0xffffffffa0210731:  jmp    0xffffffff816cb5ee <func+14>

However, <func+14> is not a valid instruction start address. As a result, an invalid opcode error occurs.

Signed-off-by: Yang Jihong <yangjihong1@huawei.com>
---
 arch/x86/kernel/kprobes/opt.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

Comments

Masami Hiramatsu (Google) Feb. 15, 2023, 3:48 p.m. UTC | #1
On Wed, 15 Feb 2023 19:54:30 +0800
Yang Jihong <yangjihong1@huawei.com> wrote:

> When arch_prepare_optimized_kprobe calculates the jump destination address,
> it copies the original instructions from the already jmp-optimized kprobe
> (see __recover_optprobed_insn) and computes the destination based on the
> length of those original instructions.
> 
> arch_check_optimized_kprobe does not check KPROBE_FLAG_OPTIMIZED when
> checking whether a jmp-optimized kprobe exists.
> As a result, setup_detour_execution may jump into a range that has been
> overwritten by the jump destination address, resulting in an invalid opcode error.

OK, good catch!! I missed the "delayed unoptimization" case here too.

> 
> [...]
> 
> However, <func+14> is not a valid instruction start address. As a result, an invalid opcode error occurs.

OK, it has been introduced by the same commit as the previous one (delayed unoptimization).

> 
> [...]
> 
> diff --git a/arch/x86/kernel/kprobes/opt.c b/arch/x86/kernel/kprobes/opt.c
> index 3718d6863555..e6d9bd038401 100644
> --- a/arch/x86/kernel/kprobes/opt.c
> +++ b/arch/x86/kernel/kprobes/opt.c
> @@ -353,7 +353,7 @@ int arch_check_optimized_kprobe(struct optimized_kprobe *op)
>  
>  	for (i = 1; i < op->optinsn.size; i++) {
>  		p = get_kprobe(op->kp.addr + i);
> -		if (p && !kprobe_disabled(p))
> +		if (p && (!kprobe_disabled(p) || kprobe_optimized(p)))

Hmm, can you rewrite this with kprobe_disarmed() instead of kprobe_disabled()?
Since this is checking whether any other kprobes are "armed" on the address
that will be replaced by the jump, it is natural to use the "disarmed" check.
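
IOW, the check would presumably become something like:

    if (p && !kprobe_disarmed(p))
            return -EEXIST;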

Thank you,


>  			return -EEXIST;
>  	}
>  
> -- 
> 2.30.GIT
>
Yang Jihong Feb. 16, 2023, 2:56 a.m. UTC | #2
Hello Masami,

On 2023/2/15 23:48, Masami Hiramatsu (Google) wrote:
> On Wed, 15 Feb 2023 19:54:30 +0800
> Yang Jihong <yangjihong1@huawei.com> wrote:
> 
> [...]
> 
> OK, it has been introduced by the same commit as the previous one (delayed unoptimization).
> 

OK, will add "Fixes: f66c0447cca1 ("kprobes: Set unoptimized flag after
unoptimizing code")" in the next version.

In addition, "Cc: stable@vger.kernel.org" is required, same as for the
previous patch.
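
That is, the added tags would be:

  Fixes: f66c0447cca1 ("kprobes: Set unoptimized flag after unoptimizing code")
  Cc: stable@vger.kernel.org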

>> [...]
>> @@ -353,7 +353,7 @@ int arch_check_optimized_kprobe(struct optimized_kprobe *op)
>>   
>>   	for (i = 1; i < op->optinsn.size; i++) {
>>   		p = get_kprobe(op->kp.addr + i);
>> -		if (p && !kprobe_disabled(p))
>> +		if (p && (!kprobe_disabled(p) || kprobe_optimized(p)))
> 
> Hmm, can you rewrite this with kprobe_disarmed() instead of kprobe_disabled()?
> Since this is checking whether any other kprobes are "armed" on the address
> that will be replaced by the jump, it is natural to use the "disarmed" check.
> 

Yes, it is better to change it to use "kprobe_disarmed"; will modify in the
next version.

Thanks,
Yang

Patch

diff --git a/arch/x86/kernel/kprobes/opt.c b/arch/x86/kernel/kprobes/opt.c
index 3718d6863555..e6d9bd038401 100644
--- a/arch/x86/kernel/kprobes/opt.c
+++ b/arch/x86/kernel/kprobes/opt.c
@@ -353,7 +353,7 @@ int arch_check_optimized_kprobe(struct optimized_kprobe *op)
 
 	for (i = 1; i < op->optinsn.size; i++) {
 		p = get_kprobe(op->kp.addr + i);
-		if (p && !kprobe_disabled(p))
+		if (p && (!kprobe_disabled(p) || kprobe_optimized(p)))
 			return -EEXIST;
 	}