
[v2,3/3] x86/kprobes: Boost more instructions from grp2/3/4/5

Message ID 20240204031300.830475-4-jinghao7@illinois.edu (mailing list archive)
State Accepted
Commit 290eb13f1a657313177789159a6d1786187cf168
Delegated to: Masami Hiramatsu
Series x86/kprobes: add exception opcode detector and boost more opcodes

Commit Message

Jinghao Jia Feb. 4, 2024, 3:13 a.m. UTC
With the instruction decoder, we are now able to decode and recognize
instructions with opcode extensions. There are more instructions in
these groups that can be boosted:

Group 2: ROL, ROR, RCL, RCR, SHL/SAL, SHR, SAR
Group 3: TEST, NOT, NEG, MUL, IMUL, DIV, IDIV
Group 4: INC, DEC (byte operation)
Group 5: INC, DEC (word/doubleword/quadword operation)

These instructions were not boosted previously because these groups contain
reserved opcodes, e.g., group 2 with ModR/M.nnn == 110 is unmapped. As a
result, kprobes attached to them require two int3 traps, since being
non-boostable also prevents jump-optimization.

Some simple tests on QEMU show that after boosting and jump-optimization
a single kprobe on these instructions with an empty pre-handler runs 10x
faster (~1000 cycles vs. ~100 cycles).

Since these instructions are mostly ALU operations and do not touch
special registers like RIP, let's boost them so that we get the
performance benefit.

Signed-off-by: Jinghao Jia <jinghao7@illinois.edu>
---
 arch/x86/kernel/kprobes/core.c | 23 +++++++++++++++++------
 1 file changed, 17 insertions(+), 6 deletions(-)
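
For reference, the "nnn" field mentioned in the commit message is the reg field
of the ModR/M byte (bits 5:3), which is what the kernel's X86_MODRM_REG() macro
extracts; within each opcode group it selects the actual operation. Below is a
small, self-contained user-space sketch (not part of the patch) that mirrors
the new group 2/3/4/5 decisions so they can be tried outside the kernel. The
example opcode/ModR/M bytes in main() are illustrative only, and the case-range
syntax is the same GNU extension used in the kernel code.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Same extraction as the kernel's X86_MODRM_REG(): ModR/M is mod[7:6] reg[5:3] rm[2:0] */
#define MODRM_REG(modrm)	(((modrm) & 0x38) >> 3)

/* Mirrors the Grp2/3/4/5 cases added to can_boost(); everything else returns false here. */
static bool grp_insn_boostable(uint8_t opcode, uint8_t modrm)
{
	uint8_t nnn = MODRM_REG(modrm);

	switch (opcode) {
	case 0xc0 ... 0xc1:	/* Grp2 */
	case 0xd0 ... 0xd3:	/* Grp2 */
		return nnn != 6;			/* nnn == 110 is reserved on Intel */
	case 0xf6 ... 0xf7:	/* Grp3 */
		return nnn != 1;			/* nnn == 001 is reserved on Intel */
	case 0xfe:		/* Grp4 */
		return nnn == 0 || nnn == 1;		/* INC, DEC */
	case 0xff:		/* Grp5 */
		return nnn == 0 || nnn == 1 || nnn == 4; /* INC, DEC, indirect JMP */
	default:
		return false;	/* not a Grp2/3/4/5 opcode; this sketch does not decide */
	}
}

int main(void)
{
	/* shr $0xc,%rax encodes as c1 e8 0c: ModR/M 0xe8 has reg == 101 (SHR) -> boostable */
	printf("c1 /5 -> %d\n", grp_insn_boostable(0xc1, 0xe8));
	/* f6 with reg == 001 is reserved on Intel -> not boostable */
	printf("f6 /1 -> %d\n", grp_insn_boostable(0xf6, 0x08));
	return 0;
}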

Comments

Masami Hiramatsu (Google) Feb. 4, 2024, 12:09 p.m. UTC | #1
On Sat,  3 Feb 2024 21:13:00 -0600
Jinghao Jia <jinghao7@illinois.edu> wrote:

> With the instruction decoder, we are now able to decode and recognize
> instructions with opcode extensions. There are more instructions in
> these groups that can be boosted:
> 
> Group 2: ROL, ROR, RCL, RCR, SHL/SAL, SHR, SAR
> Group 3: TEST, NOT, NEG, MUL, IMUL, DIV, IDIV
> Group 4: INC, DEC (byte operation)
> Group 5: INC, DEC (word/doubleword/quadword operation)
> 
> These instructions were not boosted previously because these groups contain
> reserved opcodes, e.g., group 2 with ModR/M.nnn == 110 is unmapped. As a
> result, kprobes attached to them require two int3 traps, since being
> non-boostable also prevents jump-optimization.
> 
> Some simple tests on QEMU show that after boosting and jump-optimization
> a single kprobe on these instructions with an empty pre-handler runs 10x
> faster (~1000 cycles vs. ~100 cycles).
> 
> Since these instructions are mostly ALU operations and do not touch
> special registers like RIP, let's boost them so that we get the
> performance benefit.
> 

This looks good to me. And can you check how many instructions in the
vmlinux will be covered by this change typically?

Thank you,

> Signed-off-by: Jinghao Jia <jinghao7@illinois.edu>
> ---
>  arch/x86/kernel/kprobes/core.c | 23 +++++++++++++++++------
>  1 file changed, 17 insertions(+), 6 deletions(-)
> 
> diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
> index 7a08d6a486c8..530f6d4b34f4 100644
> --- a/arch/x86/kernel/kprobes/core.c
> +++ b/arch/x86/kernel/kprobes/core.c
> @@ -169,22 +169,33 @@ bool can_boost(struct insn *insn, void *addr)
>  	case 0x62:		/* bound */
>  	case 0x70 ... 0x7f:	/* Conditional jumps */
>  	case 0x9a:		/* Call far */
> -	case 0xc0 ... 0xc1:	/* Grp2 */
>  	case 0xcc ... 0xce:	/* software exceptions */
> -	case 0xd0 ... 0xd3:	/* Grp2 */
>  	case 0xd6:		/* (UD) */
>  	case 0xd8 ... 0xdf:	/* ESC */
>  	case 0xe0 ... 0xe3:	/* LOOP*, JCXZ */
>  	case 0xe8 ... 0xe9:	/* near Call, JMP */
>  	case 0xeb:		/* Short JMP */
>  	case 0xf0 ... 0xf4:	/* LOCK/REP, HLT */
> -	case 0xf6 ... 0xf7:	/* Grp3 */
> -	case 0xfe:		/* Grp4 */
>  		/* ... are not boostable */
>  		return false;
> +	case 0xc0 ... 0xc1:	/* Grp2 */
> +	case 0xd0 ... 0xd3:	/* Grp2 */
> +		/*
> +		 * AMD uses nnn == 110 as SHL/SAL, but Intel makes it reserved.
> +		 */
> +		return X86_MODRM_REG(insn->modrm.bytes[0]) != 0b110;
> +	case 0xf6 ... 0xf7:	/* Grp3 */
> +		/* AMD uses nnn == 001 as TEST, but Intel makes it reserved. */
> +		return X86_MODRM_REG(insn->modrm.bytes[0]) != 0b001;
> +	case 0xfe:		/* Grp4 */
> +		/* Only INC and DEC are boostable */
> +		return X86_MODRM_REG(insn->modrm.bytes[0]) == 0b000 ||
> +		       X86_MODRM_REG(insn->modrm.bytes[0]) == 0b001;
>  	case 0xff:		/* Grp5 */
> -		/* Only indirect jmp is boostable */
> -		return X86_MODRM_REG(insn->modrm.bytes[0]) == 4;
> +		/* Only INC, DEC, and indirect JMP are boostable */
> +		return X86_MODRM_REG(insn->modrm.bytes[0]) == 0b000 ||
> +		       X86_MODRM_REG(insn->modrm.bytes[0]) == 0b001 ||
> +		       X86_MODRM_REG(insn->modrm.bytes[0]) == 0b100;
>  	default:
>  		return true;
>  	}
> -- 
> 2.43.0
>
Jinghao Jia Feb. 5, 2024, 4:39 a.m. UTC | #2
On 2/4/24 06:09, Masami Hiramatsu (Google) wrote:
> On Sat,  3 Feb 2024 21:13:00 -0600
> Jinghao Jia <jinghao7@illinois.edu> wrote:
> 
>> With the instruction decoder, we are now able to decode and recognize
>> instructions with opcode extensions. There are more instructions in
>> these groups that can be boosted:
>>
>> Group 2: ROL, ROR, RCL, RCR, SHL/SAL, SHR, SAR
>> Group 3: TEST, NOT, NEG, MUL, IMUL, DIV, IDIV
>> Group 4: INC, DEC (byte operation)
>> Group 5: INC, DEC (word/doubleword/quadword operation)
>>
>> These instructions were not boosted previously because these groups contain
>> reserved opcodes, e.g., group 2 with ModR/M.nnn == 110 is unmapped. As a
>> result, kprobes attached to them require two int3 traps, since being
>> non-boostable also prevents jump-optimization.
>>
>> Some simple tests on QEMU show that after boosting and jump-optimization
>> a single kprobe on these instructions with an empty pre-handler runs 10x
>> faster (~1000 cycles vs. ~100 cycles).
>>
>> Since these instructions are mostly ALU operations and do not touch
>> special registers like RIP, let's boost them so that we get the
>> performance benefit.
>>
> 
> This looks good to me. And can you check how many instructions in the
> vmlinux will be covered by this change typically?
> 

I collected the stats from the LLVM CodeGen backend on kernel version 6.7.3
using Gentoo's dist-kernel config (with a mod2yesconfig to make modules
built-in), and here are the numbers of Grp 2/3/4/5 instructions that are newly
covered by this patch:

Kernel total # of insns:    28552017    (from objdump)
Grp2 insns:                 286249      (from LLVM)
Grp3 insns:                 286556      (from LLVM)
Grp4 insns:                 5832        (from LLVM)
Grp5 insns:                 146314      (from LLVM)

Note that using LLVM means we miss the stats from inline assembly and
assembly source files.

--Jinghao

> Thank you,
> 
>> Signed-off-by: Jinghao Jia <jinghao7@illinois.edu>
>> ---
>>  arch/x86/kernel/kprobes/core.c | 23 +++++++++++++++++------
>>  1 file changed, 17 insertions(+), 6 deletions(-)
>>
>> diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
>> index 7a08d6a486c8..530f6d4b34f4 100644
>> --- a/arch/x86/kernel/kprobes/core.c
>> +++ b/arch/x86/kernel/kprobes/core.c
>> @@ -169,22 +169,33 @@ bool can_boost(struct insn *insn, void *addr)
>>  	case 0x62:		/* bound */
>>  	case 0x70 ... 0x7f:	/* Conditional jumps */
>>  	case 0x9a:		/* Call far */
>> -	case 0xc0 ... 0xc1:	/* Grp2 */
>>  	case 0xcc ... 0xce:	/* software exceptions */
>> -	case 0xd0 ... 0xd3:	/* Grp2 */
>>  	case 0xd6:		/* (UD) */
>>  	case 0xd8 ... 0xdf:	/* ESC */
>>  	case 0xe0 ... 0xe3:	/* LOOP*, JCXZ */
>>  	case 0xe8 ... 0xe9:	/* near Call, JMP */
>>  	case 0xeb:		/* Short JMP */
>>  	case 0xf0 ... 0xf4:	/* LOCK/REP, HLT */
>> -	case 0xf6 ... 0xf7:	/* Grp3 */
>> -	case 0xfe:		/* Grp4 */
>>  		/* ... are not boostable */
>>  		return false;
>> +	case 0xc0 ... 0xc1:	/* Grp2 */
>> +	case 0xd0 ... 0xd3:	/* Grp2 */
>> +		/*
>> +		 * AMD uses nnn == 110 as SHL/SAL, but Intel makes it reserved.
>> +		 */
>> +		return X86_MODRM_REG(insn->modrm.bytes[0]) != 0b110;
>> +	case 0xf6 ... 0xf7:	/* Grp3 */
>> +		/* AMD uses nnn == 001 as TEST, but Intel makes it reserved. */
>> +		return X86_MODRM_REG(insn->modrm.bytes[0]) != 0b001;
>> +	case 0xfe:		/* Grp4 */
>> +		/* Only INC and DEC are boostable */
>> +		return X86_MODRM_REG(insn->modrm.bytes[0]) == 0b000 ||
>> +		       X86_MODRM_REG(insn->modrm.bytes[0]) == 0b001;
>>  	case 0xff:		/* Grp5 */
>> -		/* Only indirect jmp is boostable */
>> -		return X86_MODRM_REG(insn->modrm.bytes[0]) == 4;
>> +		/* Only INC, DEC, and indirect JMP are boostable */
>> +		return X86_MODRM_REG(insn->modrm.bytes[0]) == 0b000 ||
>> +		       X86_MODRM_REG(insn->modrm.bytes[0]) == 0b001 ||
>> +		       X86_MODRM_REG(insn->modrm.bytes[0]) == 0b100;
>>  	default:
>>  		return true;
>>  	}
>> -- 
>> 2.43.0
>>
> 
>
Masami Hiramatsu (Google) Feb. 6, 2024, 11:40 p.m. UTC | #3
On Sun, 4 Feb 2024 22:39:32 -0600
Jinghao Jia <jinghao7@illinois.edu> wrote:

> On 2/4/24 06:09, Masami Hiramatsu (Google) wrote:
> > On Sat,  3 Feb 2024 21:13:00 -0600
> > Jinghao Jia <jinghao7@illinois.edu> wrote:
> > 
> >> With the instruction decoder, we are now able to decode and recognize
> >> instructions with opcode extensions. There are more instructions in
> >> these groups that can be boosted:
> >>
> >> Group 2: ROL, ROR, RCL, RCR, SHL/SAL, SHR, SAR
> >> Group 3: TEST, NOT, NEG, MUL, IMUL, DIV, IDIV
> >> Group 4: INC, DEC (byte operation)
> >> Group 5: INC, DEC (word/doubleword/quadword operation)
> >>
> >> These instructions were not boosted previously because these groups contain
> >> reserved opcodes, e.g., group 2 with ModR/M.nnn == 110 is unmapped. As a
> >> result, kprobes attached to them require two int3 traps, since being
> >> non-boostable also prevents jump-optimization.
> >>
> >> Some simple tests on QEMU show that after boosting and jump-optimization
> >> a single kprobe on these instructions with an empty pre-handler runs 10x
> >> faster (~1000 cycles vs. ~100 cycles).
> >>
> >> Since these instructions are mostly ALU operations and do not touch
> >> special registers like RIP, let's boost them so that we get the
> >> performance benefit.
> >>
> > 
> > This looks good to me. And can you check how many instructions in the
> > vmlinux will be covered by this change typically?
> > 
> 
> I collected the stats from the LLVM CodeGen backend on kernel version 6.7.3
> using Gentoo's dist-kernel config (with a mod2yesconfig to make modules
> built-in), and here are the numbers of Grp 2/3/4/5 instructions that are newly
> covered by this patch:
> 
> Kernel total # of insns:    28552017    (from objdump)
> Grp2 insns:                 286249      (from LLVM)
> Grp3 insns:                 286556      (from LLVM)
> Grp4 insns:                 5832        (from LLVM)
> Grp5 insns:                 146314      (from LLVM)
> 
> Note that using LLVM means we miss the stats from inline assembly and
> assembly source files.

Thanks for checking! So it increases the coverage by ~2.5% :)

Thank you,
 

> 
> --Jinghao
> 
> > Thank you,
> > 
> >> Signed-off-by: Jinghao Jia <jinghao7@illinois.edu>
> >> ---
> >>  arch/x86/kernel/kprobes/core.c | 23 +++++++++++++++++------
> >>  1 file changed, 17 insertions(+), 6 deletions(-)
> >>
> >> diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
> >> index 7a08d6a486c8..530f6d4b34f4 100644
> >> --- a/arch/x86/kernel/kprobes/core.c
> >> +++ b/arch/x86/kernel/kprobes/core.c
> >> @@ -169,22 +169,33 @@ bool can_boost(struct insn *insn, void *addr)
> >>  	case 0x62:		/* bound */
> >>  	case 0x70 ... 0x7f:	/* Conditional jumps */
> >>  	case 0x9a:		/* Call far */
> >> -	case 0xc0 ... 0xc1:	/* Grp2 */
> >>  	case 0xcc ... 0xce:	/* software exceptions */
> >> -	case 0xd0 ... 0xd3:	/* Grp2 */
> >>  	case 0xd6:		/* (UD) */
> >>  	case 0xd8 ... 0xdf:	/* ESC */
> >>  	case 0xe0 ... 0xe3:	/* LOOP*, JCXZ */
> >>  	case 0xe8 ... 0xe9:	/* near Call, JMP */
> >>  	case 0xeb:		/* Short JMP */
> >>  	case 0xf0 ... 0xf4:	/* LOCK/REP, HLT */
> >> -	case 0xf6 ... 0xf7:	/* Grp3 */
> >> -	case 0xfe:		/* Grp4 */
> >>  		/* ... are not boostable */
> >>  		return false;
> >> +	case 0xc0 ... 0xc1:	/* Grp2 */
> >> +	case 0xd0 ... 0xd3:	/* Grp2 */
> >> +		/*
> >> +		 * AMD uses nnn == 110 as SHL/SAL, but Intel makes it reserved.
> >> +		 */
> >> +		return X86_MODRM_REG(insn->modrm.bytes[0]) != 0b110;
> >> +	case 0xf6 ... 0xf7:	/* Grp3 */
> >> +		/* AMD uses nnn == 001 as TEST, but Intel makes it reserved. */
> >> +		return X86_MODRM_REG(insn->modrm.bytes[0]) != 0b001;
> >> +	case 0xfe:		/* Grp4 */
> >> +		/* Only INC and DEC are boostable */
> >> +		return X86_MODRM_REG(insn->modrm.bytes[0]) == 0b000 ||
> >> +		       X86_MODRM_REG(insn->modrm.bytes[0]) == 0b001;
> >>  	case 0xff:		/* Grp5 */
> >> -		/* Only indirect jmp is boostable */
> >> -		return X86_MODRM_REG(insn->modrm.bytes[0]) == 4;
> >> +		/* Only INC, DEC, and indirect JMP are boostable */
> >> +		return X86_MODRM_REG(insn->modrm.bytes[0]) == 0b000 ||
> >> +		       X86_MODRM_REG(insn->modrm.bytes[0]) == 0b001 ||
> >> +		       X86_MODRM_REG(insn->modrm.bytes[0]) == 0b100;
> >>  	default:
> >>  		return true;
> >>  	}
> >> -- 
> >> 2.43.0
> >>
> > 
> >
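
As a quick sanity check, the ~2.5% figure above follows directly from the
per-group counts in the thread; a trivial C check, with the numbers copied from
the reply above and nothing else assumed:

#include <stdio.h>

int main(void)
{
	/* Per-group counts and total instruction count reported in the thread */
	const unsigned long grp2 = 286249, grp3 = 286556, grp4 = 5832, grp5 = 146314;
	const unsigned long total = 28552017;
	const unsigned long newly_boostable = grp2 + grp3 + grp4 + grp5;

	/* Prints "724951 / 28552017 = 2.54%" */
	printf("%lu / %lu = %.2f%%\n", newly_boostable, total,
	       100.0 * newly_boostable / total);
	return 0;
}

This prints 724951 / 28552017 = 2.54%, matching the rough estimate above.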

Patch

diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
index 7a08d6a486c8..530f6d4b34f4 100644
--- a/arch/x86/kernel/kprobes/core.c
+++ b/arch/x86/kernel/kprobes/core.c
@@ -169,22 +169,33 @@  bool can_boost(struct insn *insn, void *addr)
 	case 0x62:		/* bound */
 	case 0x70 ... 0x7f:	/* Conditional jumps */
 	case 0x9a:		/* Call far */
-	case 0xc0 ... 0xc1:	/* Grp2 */
 	case 0xcc ... 0xce:	/* software exceptions */
-	case 0xd0 ... 0xd3:	/* Grp2 */
 	case 0xd6:		/* (UD) */
 	case 0xd8 ... 0xdf:	/* ESC */
 	case 0xe0 ... 0xe3:	/* LOOP*, JCXZ */
 	case 0xe8 ... 0xe9:	/* near Call, JMP */
 	case 0xeb:		/* Short JMP */
 	case 0xf0 ... 0xf4:	/* LOCK/REP, HLT */
-	case 0xf6 ... 0xf7:	/* Grp3 */
-	case 0xfe:		/* Grp4 */
 		/* ... are not boostable */
 		return false;
+	case 0xc0 ... 0xc1:	/* Grp2 */
+	case 0xd0 ... 0xd3:	/* Grp2 */
+		/*
+		 * AMD uses nnn == 110 as SHL/SAL, but Intel makes it reserved.
+		 */
+		return X86_MODRM_REG(insn->modrm.bytes[0]) != 0b110;
+	case 0xf6 ... 0xf7:	/* Grp3 */
+		/* AMD uses nnn == 001 as TEST, but Intel makes it reserved. */
+		return X86_MODRM_REG(insn->modrm.bytes[0]) != 0b001;
+	case 0xfe:		/* Grp4 */
+		/* Only INC and DEC are boostable */
+		return X86_MODRM_REG(insn->modrm.bytes[0]) == 0b000 ||
+		       X86_MODRM_REG(insn->modrm.bytes[0]) == 0b001;
 	case 0xff:		/* Grp5 */
-		/* Only indirect jmp is boostable */
-		return X86_MODRM_REG(insn->modrm.bytes[0]) == 4;
+		/* Only INC, DEC, and indirect JMP are boostable */
+		return X86_MODRM_REG(insn->modrm.bytes[0]) == 0b000 ||
+		       X86_MODRM_REG(insn->modrm.bytes[0]) == 0b001 ||
+		       X86_MODRM_REG(insn->modrm.bytes[0]) == 0b100;
 	default:
 		return true;
 	}
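
The timing test described in the commit message (a single kprobe with an empty
pre-handler) can be reproduced with a small out-of-tree module along the lines
of the sketch below. The symbol name and offset are placeholders rather than
values taken from the patch or the thread: a real test would pick, via
disassembly, an offset whose instruction actually falls into Grp2/3/4/5, and
would add its own cycle measurement around the probed code.

// SPDX-License-Identifier: GPL-2.0
/* Minimal empty-pre-handler kprobe module (sketch, not part of the patch). */
#include <linux/module.h>
#include <linux/kprobes.h>

static int empty_pre_handler(struct kprobe *p, struct pt_regs *regs)
{
	return 0;	/* do nothing: only the probe overhead is of interest */
}

static struct kprobe kp = {
	/* Placeholders: choose a symbol+offset whose instruction is in Grp2/3/4/5 */
	.symbol_name	= "kernel_clone",
	.offset		= 0x10,
	.pre_handler	= empty_pre_handler,
};

static int __init boost_probe_init(void)
{
	int ret = register_kprobe(&kp);

	if (ret < 0) {
		pr_err("register_kprobe failed: %d\n", ret);
		return ret;
	}
	pr_info("kprobe registered at %s+0x%x\n", kp.symbol_name, kp.offset);
	return 0;
}

static void __exit boost_probe_exit(void)
{
	unregister_kprobe(&kp);
}

module_init(boost_probe_init);
module_exit(boost_probe_exit);
MODULE_LICENSE("GPL");

After insertion, /sys/kernel/debug/kprobes/list should show the probe, marked
[OPTIMIZED] once jump-optimization has kicked in (the optimizer runs lazily).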