diff mbox series

arch/riscv: kprobes: implement optprobes

Message ID 20220831041014.1295054-1-chenguokai17@mails.ucas.ac.cn (mailing list archive)
State New, archived
Headers show
Series arch/riscv: kprobes: implement optprobes | expand

Commit Message

Xim Aug. 31, 2022, 4:10 a.m. UTC
This patch adds jump optimization support for RISC-V.

This patch replaces ebreak instructions used by normal kprobes with an
auipc+jalr instruction pair, at the aim of suppressing the probe-hit
overhead.

All known optprobe-capable RISC architectures have been using a single
jump or branch instructions while this patch chooses not. RISC-V has a
quite limited jump range (4KB or 2MB) for both its branch and jump
instructions, which prevent optimizations from supporting probes that
spread all over the kernel.

Auipc-jalr instruction pair is introduced with a much wider jump range
(4GB), where auipc loads the upper 12 bits to a free register and jalr
appends the lower 20 bits to form a 32 bit immediate. Note that returning
from probe handler requires another free register. As kprobes can appear
almost anywhere inside the kernel, the free register should be found in a
generic way, not depending on calling convension or any other regulations.

The algorithm for finding the free register is inspired by the regiter
renaming in modern processors. From the perspective of register renaming, a
register could be represented as two different registers if two neighbour
instructions both write to it but no one ever reads. Extending this fact,
a register is considered to be free if there is no read before its next
write in the execution flow. We are free to change its value without
interfering normal execution.

Static analysis shows that 51% instructions of the kernel (default config)
is capable of being replaced i.e. two free registers can be found at both
the start and end of replaced instruction pairs while the replaced
instructions can be directly executed.

Signed-off-by: Chen Guokai <chenguokai17@mails.ucas.ac.cn>
Signed-off-by: Liao Chang <liaochang1@huawei.com>
---
 arch/riscv/Kconfig                        |   1 +
 arch/riscv/include/asm/ftrace.h           |   2 +-
 arch/riscv/include/asm/kprobes.h          |  28 ++
 arch/riscv/kernel/probes/Makefile         |   1 +
 arch/riscv/kernel/probes/opt.c            | 483 ++++++++++++++++++++++
 arch/riscv/kernel/probes/opt_trampoline.S | 133 ++++++
 arch/riscv/kernel/probes/simulate-insn.h  |   9 +
 7 files changed, 656 insertions(+), 1 deletion(-)
 create mode 100644 arch/riscv/kernel/probes/opt.c
 create mode 100644 arch/riscv/kernel/probes/opt_trampoline.S

Comments

Conor Dooley Aug. 31, 2022, 7:24 a.m. UTC | #1
Hey Chen,

FYI there is a build warning with this patch:
arch/riscv/kernel/probes/opt.c:34:27: warning: no previous prototype for 'can_kprobe_direct_exec' [-Wmissing-prototypes]
    34 | enum probe_insn __kprobes can_kprobe_direct_exec(kprobe_opcode_t *addr)

Also, if you run scripts/checkpatch.pl --strict, it will have a
few complaints about code style for you too. Other than that, I
have a few comments for you below:

On 31/08/2022 05:10, Chen Guokai wrote:
> [You don't often get email from chenguokai17@mails.ucas.ac.cn. Learn why this is important at https://aka.ms/LearnAboutSenderIdentification ]
> 
> EXTERNAL EMAIL: Do not click links or open attachments unless you know the content is safe
> 
> This patch adds jump optimization support for RISC-V.

s/This patch adds/Add

> 
> This patch replaces ebreak instructions used by normal kprobes with an

s/This patch replaces/Replace

> auipc+jalr instruction pair, at the aim of suppressing the probe-hit
> overhead.
> 
> All known optprobe-capable RISC architectures have been using a single
> jump or branch instructions while this patch chooses not. RISC-V has a
> quite limited jump range (4KB or 2MB) for both its branch and jump
> instructions, which prevent optimizations from supporting probes that
> spread all over the kernel.
> 
> Auipc-jalr instruction pair is introduced with a much wider jump range
> (4GB), where auipc loads the upper 12 bits to a free register and jalr
> appends the lower 20 bits to form a 32 bit immediate. Note that returning
> from probe handler requires another free register. As kprobes can appear
> almost anywhere inside the kernel, the free register should be found in a
> generic way, not depending on calling convension or any other regulations.

convention

> 
> The algorithm for finding the free register is inspired by the regiter

register

> renaming in modern processors. From the perspective of register renaming, a
> register could be represented as two different registers if two neighbour
> instructions both write to it but no one ever reads. Extending this fact,
> a register is considered to be free if there is no read before its next
> write in the execution flow. We are free to change its value without
> interfering normal execution.
> 
> Static analysis shows that 51% instructions of the kernel (default config)
> is capable of being replaced i.e. two free registers can be found at both
> the start and end of replaced instruction pairs while the replaced
> instructions can be directly executed.
> 
> Signed-off-by: Chen Guokai <chenguokai17@mails.ucas.ac.cn>
> Signed-off-by: Liao Chang <liaochang1@huawei.com>

What does Liao have to do with this patch?

> ---
>   arch/riscv/Kconfig                        |   1 +
>   arch/riscv/include/asm/ftrace.h           |   2 +-
>   arch/riscv/include/asm/kprobes.h          |  28 ++
>   arch/riscv/kernel/probes/Makefile         |   1 +
>   arch/riscv/kernel/probes/opt.c            | 483 ++++++++++++++++++++++
>   arch/riscv/kernel/probes/opt_trampoline.S | 133 ++++++
>   arch/riscv/kernel/probes/simulate-insn.h  |   9 +
>   7 files changed, 656 insertions(+), 1 deletion(-)
>   create mode 100644 arch/riscv/kernel/probes/opt.c
>   create mode 100644 arch/riscv/kernel/probes/opt_trampoline.S
> 
> diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
> index d557cc502..a54e50de2 100644
> --- a/arch/riscv/Kconfig
> +++ b/arch/riscv/Kconfig
> @@ -97,6 +97,7 @@ config RISCV
>          select HAVE_KPROBES if !XIP_KERNEL
>          select HAVE_KPROBES_ON_FTRACE if !XIP_KERNEL
>          select HAVE_KRETPROBES if !XIP_KERNEL
> +       select HAVE_OPTPROBES if !XIP_KERNEL && !CONFIG_RISCV_ISA_C
>          select HAVE_MOVE_PMD
>          select HAVE_MOVE_PUD
>          select HAVE_PCI
> diff --git a/arch/riscv/include/asm/ftrace.h b/arch/riscv/include/asm/ftrace.h
> index 04dad3380..8b17a4c66 100644
> --- a/arch/riscv/include/asm/ftrace.h
> +++ b/arch/riscv/include/asm/ftrace.h
> @@ -35,7 +35,7 @@ struct dyn_arch_ftrace {
>   };
>   #endif
> 
> -#ifdef CONFIG_DYNAMIC_FTRACE
> +#if defined(CONFIG_DYNAMIC_FTRACE) || defined(CONFIG_OPTPROBES)
>   /*
>    * A general call in RISC-V is a pair of insts:
>    * 1) auipc: setting high-20 pc-related bits to ra register
> diff --git a/arch/riscv/include/asm/kprobes.h b/arch/riscv/include/asm/kprobes.h
> index 217ef89f2..6c5e10709 100644
> --- a/arch/riscv/include/asm/kprobes.h
> +++ b/arch/riscv/include/asm/kprobes.h
> @@ -43,5 +43,33 @@ bool kprobe_single_step_handler(struct pt_regs *regs);
>   void __kretprobe_trampoline(void);
>   void __kprobes *trampoline_probe_handler(struct pt_regs *regs);
> 
> +#ifdef CONFIG_OPTPROBES
> +
> +#define MAX_OPTIMIZED_LENGTH   8
> +
> +/* optinsn template addresses */
> +extern __visible kprobe_opcode_t optprobe_template_entry[];
> +extern __visible kprobe_opcode_t optprobe_template_val[];
> +extern __visible kprobe_opcode_t optprobe_template_call[];
> +extern __visible kprobe_opcode_t optprobe_template_store_epc[];
> +extern __visible kprobe_opcode_t optprobe_template_end[];
> +extern __visible kprobe_opcode_t optprobe_template_sub_sp[];
> +extern __visible kprobe_opcode_t optprobe_template_add_sp[];
> +extern __visible kprobe_opcode_t optprobe_template_restore_begin[];
> +extern __visible kprobe_opcode_t optprobe_template_restore_orig_insn[];
> +extern __visible kprobe_opcode_t optprobe_template_restore_end[];
> +
> +#define MAX_OPTINSN_SIZE                               \
> +               ((unsigned long)optprobe_template_end - \
> +                (unsigned long)optprobe_template_entry)
> +
> +#define MAX_COPIED_INSN 2
> +struct arch_optimized_insn {
> +               kprobe_opcode_t copied_insn[MAX_COPIED_INSN];
> +                       /* detour code buffer */
> +                       kprobe_opcode_t *insn;
> +};
> +#define RVI_INST_SIZE 4
> +#endif /* CONFIG_OPTPROBES */
>   #endif /* CONFIG_KPROBES */
>   #endif /* _ASM_RISCV_KPROBES_H */
> diff --git a/arch/riscv/kernel/probes/Makefile b/arch/riscv/kernel/probes/Makefile
> index 7f0840dcc..6255b4600 100644
> --- a/arch/riscv/kernel/probes/Makefile
> +++ b/arch/riscv/kernel/probes/Makefile
> @@ -3,4 +3,5 @@ obj-$(CONFIG_KPROBES)           += kprobes.o decode-insn.o simulate-insn.o
>   obj-$(CONFIG_KPROBES)          += kprobes_trampoline.o
>   obj-$(CONFIG_KPROBES_ON_FTRACE)        += ftrace.o
>   obj-$(CONFIG_UPROBES)          += uprobes.o decode-insn.o simulate-insn.o
> +obj-$(CONFIG_OPTPROBES)                += opt.o opt_trampoline.o
>   CFLAGS_REMOVE_simulate-insn.o = $(CC_FLAGS_FTRACE)
> diff --git a/arch/riscv/kernel/probes/opt.c b/arch/riscv/kernel/probes/opt.c
> new file mode 100644
> index 000000000..b9bcf6e12
> --- /dev/null
> +++ b/arch/riscv/kernel/probes/opt.c
> @@ -0,0 +1,483 @@
> +// SPDX-License-Identifier: GPL-2.0-or-later
> +/*
> + *  Kernel Probes Jump Optimization (Optprobes)
> + *
> + * Copyright (C) IBM Corporation, 2002, 2004
> + * Copyright (C) Hitachi Ltd., 2012
> + * Copyright (C) Huawei Inc., 2014
> + * Copyright (C) 2022 Huawei Technologies Co., Ltd
> + * Copyright (C) Guokai Chen, 2022

Should this not be your University here?

> + * Author: Guokai Chen chenguokai17@mails.ucas.ac.cn
> + */
> +
> +#include <linux/kprobes.h>
> +#include <linux/jump_label.h>
> +#include <linux/extable.h>
> +#include <linux/stop_machine.h>
> +#include <linux/moduleloader.h>
> +#include <linux/kprobes.h>
> +#include <linux/cacheflush.h>
> +/* for patch_text */
> +#include <asm/ftrace.h>
> +#include <asm/patch.h>
> +#include "simulate-insn.h"
> +#include "decode-insn.h"
> +
> +
> +#define JUMP_SIZE 8
> +
> +/*
> + * If the probed instruction doesn't use PC and is not system or fence
> + * we can copy it into template and have it executed directly without
> + * simulation or emulation.
> + */
> +enum probe_insn __kprobes can_kprobe_direct_exec(kprobe_opcode_t *addr)
> +{
> +       /*
> +        * instructions that use PC
> +        * branch jump auipc
> +        * instructions that belongs to system or fence
> +        * ebreak ecall fence.i

Please use the full columns available to you for comments.

> +        */
> +       kprobe_opcode_t inst = *addr;
> +
> +       RISCV_INSN_REJECTED(system, inst);
> +       RISCV_INSN_REJECTED(fence, inst);
> +       RISCV_INSN_REJECTED(branch, inst);
> +       RISCV_INSN_REJECTED(jal, inst);
> +       RISCV_INSN_REJECTED(jalr, inst);
> +       RISCV_INSN_REJECTED(auipc, inst);
> +       return INSN_GOOD_NO_SLOT;
> +}
> +
> +#define TMPL_VAL_IDX \
> +       ((kprobe_opcode_t *)optprobe_template_val - \
> +        (kprobe_opcode_t *)optprobe_template_entry)
> +#define TMPL_CALL_IDX \
> +       ((kprobe_opcode_t *)optprobe_template_call - \
> +        (kprobe_opcode_t *)optprobe_template_entry)
> +#define TMPL_STORE_EPC_IDX \
> +       ((kprobe_opcode_t *)optprobe_template_store_epc - \
> +        (kprobe_opcode_t *)optprobe_template_entry)
> +#define TMPL_END_IDX \
> +       ((kprobe_opcode_t *)optprobe_template_end - \
> +        (kprobe_opcode_t *)optprobe_template_entry)
> +#define TMPL_ADD_SP \
> +       ((kprobe_opcode_t *)optprobe_template_add_sp - \
> +        (kprobe_opcode_t *)optprobe_template_entry)
> +#define TMPL_SUB_SP \
> +       ((kprobe_opcode_t *)optprobe_template_sub_sp - \
> +        (kprobe_opcode_t *)optprobe_template_entry)
> +#define TMPL_RESTORE_BEGIN \
> +       ((kprobe_opcode_t *)optprobe_template_restore_begin - \
> +        (kprobe_opcode_t *)optprobe_template_entry)
> +#define TMPL_RESTORE_ORIGN_INSN \
> +       ((kprobe_opcode_t *)optprobe_template_restore_orig_insn - \
> +        (kprobe_opcode_t *)optprobe_template_entry)
> +#define TMPL_RESTORE_RET \
> +       ((kprobe_opcode_t *)optprobe_template_ret - \
> +        (kprobe_opcode_t *)optprobe_template_entry)
> +#define TMPL_RESTORE_END \
> +       ((kprobe_opcode_t *)optprobe_template_restore_end - \
> +        (kprobe_opcode_t *)optprobe_template_entry)
> +
> +#define FREE_SEARCH_DEPTH 32
> +
> +/*
> + * RISC-V can always optimize an instruction if not null
> + */
> +int arch_prepared_optinsn(struct arch_optimized_insn *optinsn)
> +{
> +       return optinsn->insn != NULL;
> +}
> +
> +/*
> + * In RISC-V ISA, jal has a quite limited jump range
> + * To achive adequate range, auipc+jalr is utilized
> + * It requires a replacement of two instructions
> + * thus next instruction should be examined

Please use the full columns available to you for comments.

> + */
> +int arch_check_optimized_kprobe(struct optimized_kprobe *op)
> +{
> +       struct kprobe *p;
> +
> +       p = get_kprobe(op->kp.addr + 4);

Where does this 4 come from?

> +       if (p && !kprobe_disabled(p))
> +               return -EEXIST;
> +
> +       return 0;
> +}
> +
> +/*
> + * In RISC-V ISA, auipc+jalr requires a free register
> + * Inspired by register renaming in OoO processor,
> + * we search backwards to find such a register that:
> + * not previously used as a source register &&
> + * is used as a destination register &&
> + * before any branch/jump instruction

Ditto re comment width.

> + */
> +static int
> +__arch_find_free_register(kprobe_opcode_t *addr, int use_orig,
> +                         kprobe_opcode_t orig)
> +{
> +       int i, rs1, rs2, rd;
> +       kprobe_opcode_t inst;
> +       int rs_mask = 0;
> +
> +       for (i = 0; i < FREE_SEARCH_DEPTH; i++) {
> +               if (i == 0 && use_orig)
> +                       inst = orig;
> +               else
> +                       inst = *(kprobe_opcode_t *) (addr + i);
> +               /*
> +                * Detailed handling:
> +                * jalr/branch/system: must have reached the end, no result
> +                * jal: if not chosen as result, must have reached the end
> +                * arithmetic/load/store: record their rs
> +                * jal/arithmetic/load: if proper rd found, return result
> +                * others (float point/vector): ignore
> +                */
> +               if (riscv_insn_is_branch(inst) || riscv_insn_is_jalr(inst)
> +                       || riscv_insn_is_system(inst)) {
> +                       return 0;
> +               }
> +               /* instructions that has rs1 */
> +               if (riscv_insn_is_arith_ri(inst) || riscv_insn_is_arith_rr(inst)
> +                       || riscv_insn_is_load(inst) || riscv_insn_is_store(inst)
> +                       || riscv_insn_is_amo(inst)) {
> +                       rs1 = (inst & 0xF8000) >> 15;
> +                       rs_mask |= 1 << rs1;
> +               }
> +               /* instructions that has rs2 */
> +               if (riscv_insn_is_arith_rr(inst) || riscv_insn_is_store(inst)
> +                       || riscv_insn_is_amo(inst)) {
> +                       rs2 = (inst & 0x1F00000) >> 20;
> +                       rs_mask |= 1 << rs2;
> +               }
> +               /* instructions that has rd */
> +               if (riscv_insn_is_lui(inst) || riscv_insn_is_jal(inst)
> +                       || riscv_insn_is_load(inst) || riscv_insn_is_arith_ri(inst)
> +                       || riscv_insn_is_arith_rr(inst) || riscv_insn_is_amo(inst)) {
> +                       rd = (inst & 0xF80) >> 7;
> +                       if (rd != 0 && (rs_mask & (1 << rd)) == 0)
> +                               return rd;
> +                       if (riscv_insn_is_jal(inst))
> +                               return 0;
> +               }
> +       }
> +       return 0;
> +}
> +
> +/*
> + * If two free registers can be found at the beginning of both
> + * the start and the end of replaced code, it can be optimized
> + * Also, in-function jumps need to be checked to make sure that
> + * there is no jump to the second instruction to be replaced
> + */
> +
> +#define branch_imm(opcode) \
> +       (((((opcode) >>  8) & 0xf) <<  1) | \
> +        ((((opcode) >> 25) & 0x3f) <<  5) | \
> +        ((((opcode) >>  7) & 0x1) << 11) | \
> +        ((((opcode) >> 31) & 0x1) << 12))

All the numbers in here are quite meaningless to me.
Could you please use defines here?

> +
> +#define branch_offset(opcode) \
> +       sign_extend32((branch_imm(opcode)), 12)
> +
> +#define jal_imm(opcode) \
> +       ((((opcode >> 21) & 0x3ff) << 1) | \
> +        (((opcode >> 20) & 0x1) << 11) | \
> +        (((opcode >> 31) & 0x1) << 20))
> +#define jal_offset(opcode) \
> +       sign_extend32(jal_imm(opcode), 20)
> +
> +static int can_optimize(unsigned long paddr, kprobe_opcode_t orig)
> +{
> +       unsigned long addr, size = 0, offset = 0, target;
> +       s32 imm;
> +       kprobe_opcode_t inst;
> +
> +       if (!kallsyms_lookup_size_offset(paddr, &size, &offset))
> +               return 0;
> +
> +       addr = paddr - offset;
> +
> +       /* if there are not enough space for our kprobe, skip */
> +       if (addr + size <= paddr + MAX_OPTIMIZED_LENGTH)
> +               return 0;
> +
> +       while (addr < paddr - offset + size) {
> +               /* Check from the start until the end */
> +
> +               inst = *(kprobe_opcode_t *)addr;
> +               /* branch and jal is capable of determing target before execution */
> +               if (riscv_insn_is_branch(inst)) {
> +                       imm = branch_offset(inst);
> +                       target = addr + imm;
> +                       if (target == paddr + RVI_INST_SIZE)
> +                               return 0;
> +               } else if (riscv_insn_is_jal(inst)) {
> +                       imm = jal_offset(inst);
> +                       target = addr + imm;
> +                       if (target == paddr + RVI_INST_SIZE)
> +                               return 0;
> +               }
> +               /* RVI is always 4 byte long */
> +               addr += 4;
> +       }
> +
> +       if (can_kprobe_direct_exec((kprobe_opcode_t *)(paddr + 4)) != INSN_GOOD_NO_SLOT)
> +               return 0;
> +
> +       /* only valid when we find two free registers */
> +       return __arch_find_free_register((kprobe_opcode_t *) paddr, 1, orig)
> +               && __arch_find_free_register((kprobe_opcode_t *) (paddr + JUMP_SIZE), 0, 0);
> +}
> +
> +/* Free optimized instruction slot */
> +static void
> +__arch_remove_optimized_kprobe(struct optimized_kprobe *op, int dirty)
> +{
> +       if (op->optinsn.insn) {
> +               free_optinsn_slot(op->optinsn.insn, dirty);
> +               op->optinsn.insn = NULL;
> +       }
> +}
> +
> +extern void kprobe_handler(struct pt_regs *regs);
> +
> +static void
> +optimized_callback(struct optimized_kprobe *op, struct pt_regs *regs)
> +{
> +       unsigned long flags;
> +       struct kprobe_ctlblk *kcb;
> +
> +       /* Save skipped registers */
> +       regs->epc = (unsigned long)op->kp.addr;
> +       regs->orig_a0 = ~0UL;
> +
> +       local_irq_save(flags);
> +       kcb = get_kprobe_ctlblk();
> +
> +       if (kprobe_running()) {
> +               kprobes_inc_nmissed_count(&op->kp);
> +       } else {
> +               __this_cpu_write(current_kprobe, &op->kp);
> +               kcb->kprobe_status = KPROBE_HIT_ACTIVE;
> +               opt_pre_handler(&op->kp, regs);
> +               __this_cpu_write(current_kprobe, NULL);
> +       }
> +
> +       local_irq_restore(flags);
> +}
> +
> +NOKPROBE_SYMBOL(optimized_callback)
> +static inline kprobe_opcode_t
> +__arch_patch_rd(kprobe_opcode_t inst, unsigned long val)
> +{
> +       inst &= 0xfffff07fUL;

It'd be nice if these were defines too, so that it was clear to
the untrained eye what's going on here.

> +       inst |= val << 7;
> +       return inst;
> +}
> +
> +static inline kprobe_opcode_t
> +__arch_patch_rs1(kprobe_opcode_t inst, unsigned long val)
> +{
> +       inst &= 0xfff07fffUL;
> +       inst |= val << 15;
> +       return inst;
> +}
> +
> +static inline kprobe_opcode_t __arch_patch_rs2(kprobe_opcode_t inst,
> +                                                  unsigned long val)
> +{
> +       inst &= 0xfe0fffffUL;
> +       inst |= val << 20;
> +       return inst;
> +}
> +
> +int
> +arch_prepare_optimized_kprobe(struct optimized_kprobe *op, struct kprobe *orig)
> +{
> +       kprobe_opcode_t *code, *detour_slot, *detour_ret_addr;
> +       long rel_chk;
> +       unsigned long val;
> +
> +       /* not aligned address */
> +       #ifdef CONFIG_RISCV_ISA_C

Please use IS_ENABLED() here if you can.

> +       return -ERANGE;
> +       #endif
> +
> +       if (!can_optimize((unsigned long)orig->addr, orig->opcode))
> +               return -EILSEQ;
> +
> +       code = kzalloc(MAX_OPTINSN_SIZE, GFP_KERNEL);
> +       detour_slot = get_optinsn_slot();
> +
> +       if (!code || !detour_slot) {
> +               kfree(code);
> +               if (detour_slot)
> +                       free_optinsn_slot(detour_slot, 0);
> +               return -ENOMEM;
> +       }
> +
> +       /*
> +        * Verify if the address gap is within 4GB range, because this uses
> +        * a auipc+jalr pair.
> +        */
> +       rel_chk = (long)detour_slot - (long)orig->addr + 8;
> +       if (abs(rel_chk) > 0x7fffffff) {

GENMASK please.

> +               /*
> +                * Different from x86, we free code buf directly instead of
> +                * calling __arch_remove_optimized_kprobe() because
> +                * we have not fill any field in op.
> +                */
> +               kfree(code);
> +               free_optinsn_slot(detour_slot, 0);
> +               return -ERANGE;
> +       }
> +
> +       /* Copy arch-dep-instance from template. */
> +       memcpy(code, (unsigned long *)optprobe_template_entry,
> +                  TMPL_END_IDX * sizeof(kprobe_opcode_t));
> +
> +       /* Set probe information */
> +       val = (unsigned long)op;
> +       *(unsigned long *)(&code[TMPL_VAL_IDX]) = val;
> +
> +       /* Set probe function call */
> +       val = (unsigned long)optimized_callback;
> +       *(unsigned long *)(&code[TMPL_CALL_IDX]) = val;

What is the benefit of using val here? I think the comments
are also pointing out the obvious here, no?

> +
> +       /* Adjust epc register */

The comments here mainly just say what you're doing & not why
it should be done.

> +       val = __arch_find_free_register(orig->addr, 1, orig->opcode);
> +       /*
> +        * patch rs2 of optprobe_template_store_epc
> +        * after patch, optprobe_template_store_epc will be
> +        * REG_S free_register, PT_EPC(sp)
> +        */
> +       code[TMPL_STORE_EPC_IDX] =
> +               __arch_patch_rs2(code[TMPL_STORE_EPC_IDX], val);
> +
> +       /* Adjust return temp register */
> +       val =
> +               __arch_find_free_register(orig->addr +
> +                                         JUMP_SIZE / sizeof(kprobe_opcode_t), 0,
> +                                         0);
> +       /*
> +        * patch of optprobe_template_restore_end
> +        * patch:
> +        *   rd and imm of auipc
> +        *   rs1 and imm of jalr
> +        * after patch:
> +        *   auipc free_register, %hi(return_address)
> +        *   jalr x0, %lo(return_address)(free_register)
> +        *
> +        */
> +
> +       detour_ret_addr = &(detour_slot[optprobe_template_restore_end - optprobe_template_entry]);
> +
> +       make_call(detour_ret_addr, (orig->addr + JUMP_SIZE / sizeof(kprobe_opcode_t)),
> +                       (code + TMPL_RESTORE_END));
> +       code[TMPL_RESTORE_END] = __arch_patch_rd(code[TMPL_RESTORE_END], val);
> +       code[TMPL_RESTORE_END + 1] =
> +               __arch_patch_rs1(code[TMPL_RESTORE_END + 1], val);
> +       code[TMPL_RESTORE_END + 1] = __arch_patch_rd(code[TMPL_RESTORE_END + 1], 0);
> +
> +       /* Copy insn and have it executed during restore */
> +
> +       code[TMPL_RESTORE_ORIGN_INSN] = orig->opcode;
> +       code[TMPL_RESTORE_ORIGN_INSN + 1] =
> +               *(kprobe_opcode_t *) (orig->addr + 1);
> +
> +       if (patch_text_nosync(detour_slot, code, MAX_OPTINSN_SIZE)) {
> +               free_optinsn_slot(detour_slot, 0);
> +               kfree(code);
> +               return -EPERM;
> +       }
> +
> +       kfree(code);
> +       /* Set op->optinsn.insn means prepared. */
> +       op->optinsn.insn = detour_slot;
> +       return 0;
> +}
> +
> +void __kprobes arch_optimize_kprobes(struct list_head *oplist)
> +{
> +       struct optimized_kprobe *op, *tmp;
> +       kprobe_opcode_t val;
> +
> +       list_for_each_entry_safe(op, tmp, oplist, list) {
> +               kprobe_opcode_t insn[2];
> +
> +               WARN_ON(kprobe_disabled(&op->kp));
> +
> +               /*
> +                * Backup instructions which will be replaced
> +                * by jump address
> +                */
> +               memcpy(op->optinsn.copied_insn, op->kp.addr, JUMP_SIZE);
> +               op->optinsn.copied_insn[0] = op->kp.opcode;
> +
> +               make_call(op->kp.addr, op->optinsn.insn, insn);
> +
> +               // patch insn jalr to have rd as free register
> +               val = (op->optinsn.insn[2] & 0x1F00000) >> 20;

Again, could you use some defines to make this more understandable
to mere mortals like me? ;)

> +
> +               insn[0] = __arch_patch_rd(insn[0], val);
> +
> +               insn[1] = __arch_patch_rd(insn[1], val);
> +               insn[1] = __arch_patch_rs1(insn[1], val);
> +
> +               /*
> +                * Similar to __arch_disarm_kprobe, operations which
> +                * removing breakpoints must be wrapped by stop_machine
> +                * to avoid racing.
> +                */
> +               WARN_ON(patch_text_nosync(op->kp.addr, insn, JUMP_SIZE));
> +
> +               list_del_init(&op->list);
> +       }
> +}
> +
> +static int arch_disarm_kprobe_opt(void *vop)
> +{
> +       struct optimized_kprobe *op = (struct optimized_kprobe *)vop;
> +
> +       patch_text_nosync(op->kp.addr, op->optinsn.copied_insn, JUMP_SIZE);
> +       arch_arm_kprobe(&op->kp);
> +       return 0;
> +}
> +
> +void arch_unoptimize_kprobe(struct optimized_kprobe *op)
> +{
> +       arch_disarm_kprobe_opt((void *)op);
> +}
> +
> +/*
> + * Recover original instructions and breakpoints from relative jumps.
> + * Caller must call with locking kprobe_mutex.
> + */
> +void arch_unoptimize_kprobes(struct list_head *oplist,
> +                                struct list_head *done_list)
> +{
> +       struct optimized_kprobe *op, *tmp;
> +
> +       list_for_each_entry_safe(op, tmp, oplist, list) {
> +               arch_unoptimize_kprobe(op);
> +               list_move(&op->list, done_list);
> +       }
> +}
> +
> +int arch_within_optimized_kprobe(struct optimized_kprobe *op,
> +                                kprobe_opcode_t *addr)
> +{
> +       return (op->kp.addr <= addr &&
> +               op->kp.addr + (JUMP_SIZE / sizeof(kprobe_opcode_t)) > addr);
> +
> +}
> +
> +void arch_remove_optimized_kprobe(struct optimized_kprobe *op)
> +{
> +       __arch_remove_optimized_kprobe(op, 1);
> +}
> diff --git a/arch/riscv/kernel/probes/opt_trampoline.S b/arch/riscv/kernel/probes/opt_trampoline.S

Thanks,
Conor.
Liao, Chang Aug. 31, 2022, 7:49 a.m. UTC | #2
在 2022/8/31 15:24, Conor.Dooley@microchip.com 写道:
> Hey Chen,
> 
> FYI there is a build warning with this patch:
> arch/riscv/kernel/probes/opt.c:34:27: warning: no previous prototype for 'can_kprobe_direct_exec' [-Wmissing-prototypes]
>     34 | enum probe_insn __kprobes can_kprobe_direct_exec(kprobe_opcode_t *addr)
> 
> Also, if you run scripts/checkpatch.pl --strict, it will have a
> few complaints about code style for you too. Other than that, I
> have a few comments for you below:
> 
> On 31/08/2022 05:10, Chen Guokai wrote:
>> [You don't often get email from chenguokai17@mails.ucas.ac.cn. Learn why this is important at https://aka.ms/LearnAboutSenderIdentification ]
>>
>> EXTERNAL EMAIL: Do not click links or open attachments unless you know the content is safe
>>
>> This patch adds jump optimization support for RISC-V.
> 
> s/This patch adds/Add
> 
>>
>> This patch replaces ebreak instructions used by normal kprobes with an
> 
> s/This patch replaces/Replace
> 
>> auipc+jalr instruction pair, at the aim of suppressing the probe-hit
>> overhead.
>>
>> All known optprobe-capable RISC architectures have been using a single
>> jump or branch instructions while this patch chooses not. RISC-V has a
>> quite limited jump range (4KB or 2MB) for both its branch and jump
>> instructions, which prevent optimizations from supporting probes that
>> spread all over the kernel.
>>
>> Auipc-jalr instruction pair is introduced with a much wider jump range
>> (4GB), where auipc loads the upper 12 bits to a free register and jalr
>> appends the lower 20 bits to form a 32 bit immediate. Note that returning
>> from probe handler requires another free register. As kprobes can appear
>> almost anywhere inside the kernel, the free register should be found in a
>> generic way, not depending on calling convension or any other regulations.
> 
> convention
> 
>>
>> The algorithm for finding the free register is inspired by the regiter
> 
> register
> 
>> renaming in modern processors. From the perspective of register renaming, a
>> register could be represented as two different registers if two neighbour
>> instructions both write to it but no one ever reads. Extending this fact,
>> a register is considered to be free if there is no read before its next
>> write in the execution flow. We are free to change its value without
>> interfering normal execution.
>>
>> Static analysis shows that 51% instructions of the kernel (default config)
>> is capable of being replaced i.e. two free registers can be found at both
>> the start and end of replaced instruction pairs while the replaced
>> instructions can be directly executed.
>>
>> Signed-off-by: Chen Guokai <chenguokai17@mails.ucas.ac.cn>
>> Signed-off-by: Liao Chang <liaochang1@huawei.com>
> 
> What does Liao have to do with this patch?I just provide some suggestion to Chen Guokai during development ;)
please remove my info from Signed-off-by tag.

> 
>> ---
>>   arch/riscv/Kconfig                        |   1 +
>>   arch/riscv/include/asm/ftrace.h           |   2 +-
>>   arch/riscv/include/asm/kprobes.h          |  28 ++
>>   arch/riscv/kernel/probes/Makefile         |   1 +
>>   arch/riscv/kernel/probes/opt.c            | 483 ++++++++++++++++++++++
>>   arch/riscv/kernel/probes/opt_trampoline.S | 133 ++++++
>>   arch/riscv/kernel/probes/simulate-insn.h  |   9 +
>>   7 files changed, 656 insertions(+), 1 deletion(-)
>>   create mode 100644 arch/riscv/kernel/probes/opt.c
>>   create mode 100644 arch/riscv/kernel/probes/opt_trampoline.S
>>
>> diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
>> index d557cc502..a54e50de2 100644
>> --- a/arch/riscv/Kconfig
>> +++ b/arch/riscv/Kconfig
>> @@ -97,6 +97,7 @@ config RISCV
>>          select HAVE_KPROBES if !XIP_KERNEL
>>          select HAVE_KPROBES_ON_FTRACE if !XIP_KERNEL
>>          select HAVE_KRETPROBES if !XIP_KERNEL
>> +       select HAVE_OPTPROBES if !XIP_KERNEL && !CONFIG_RISCV_ISA_C
>>          select HAVE_MOVE_PMD
>>          select HAVE_MOVE_PUD
>>          select HAVE_PCI
>> diff --git a/arch/riscv/include/asm/ftrace.h b/arch/riscv/include/asm/ftrace.h
>> index 04dad3380..8b17a4c66 100644
>> --- a/arch/riscv/include/asm/ftrace.h
>> +++ b/arch/riscv/include/asm/ftrace.h
>> @@ -35,7 +35,7 @@ struct dyn_arch_ftrace {
>>   };
>>   #endif
>>
>> -#ifdef CONFIG_DYNAMIC_FTRACE
>> +#if defined(CONFIG_DYNAMIC_FTRACE) || defined(CONFIG_OPTPROBES)
>>   /*
>>    * A general call in RISC-V is a pair of insts:
>>    * 1) auipc: setting high-20 pc-related bits to ra register
>> diff --git a/arch/riscv/include/asm/kprobes.h b/arch/riscv/include/asm/kprobes.h
>> index 217ef89f2..6c5e10709 100644
>> --- a/arch/riscv/include/asm/kprobes.h
>> +++ b/arch/riscv/include/asm/kprobes.h
>> @@ -43,5 +43,33 @@ bool kprobe_single_step_handler(struct pt_regs *regs);
>>   void __kretprobe_trampoline(void);
>>   void __kprobes *trampoline_probe_handler(struct pt_regs *regs);
>>
>> +#ifdef CONFIG_OPTPROBES
>> +
>> +#define MAX_OPTIMIZED_LENGTH   8
>> +
>> +/* optinsn template addresses */
>> +extern __visible kprobe_opcode_t optprobe_template_entry[];
>> +extern __visible kprobe_opcode_t optprobe_template_val[];
>> +extern __visible kprobe_opcode_t optprobe_template_call[];
>> +extern __visible kprobe_opcode_t optprobe_template_store_epc[];
>> +extern __visible kprobe_opcode_t optprobe_template_end[];
>> +extern __visible kprobe_opcode_t optprobe_template_sub_sp[];
>> +extern __visible kprobe_opcode_t optprobe_template_add_sp[];
>> +extern __visible kprobe_opcode_t optprobe_template_restore_begin[];
>> +extern __visible kprobe_opcode_t optprobe_template_restore_orig_insn[];
>> +extern __visible kprobe_opcode_t optprobe_template_restore_end[];
>> +
>> +#define MAX_OPTINSN_SIZE                               \
>> +               ((unsigned long)optprobe_template_end - \
>> +                (unsigned long)optprobe_template_entry)
>> +
>> +#define MAX_COPIED_INSN 2
>> +struct arch_optimized_insn {
>> +               kprobe_opcode_t copied_insn[MAX_COPIED_INSN];
>> +                       /* detour code buffer */
>> +                       kprobe_opcode_t *insn;
>> +};
>> +#define RVI_INST_SIZE 4
>> +#endif /* CONFIG_OPTPROBES */
>>   #endif /* CONFIG_KPROBES */
>>   #endif /* _ASM_RISCV_KPROBES_H */
>> diff --git a/arch/riscv/kernel/probes/Makefile b/arch/riscv/kernel/probes/Makefile
>> index 7f0840dcc..6255b4600 100644
>> --- a/arch/riscv/kernel/probes/Makefile
>> +++ b/arch/riscv/kernel/probes/Makefile
>> @@ -3,4 +3,5 @@ obj-$(CONFIG_KPROBES)           += kprobes.o decode-insn.o simulate-insn.o
>>   obj-$(CONFIG_KPROBES)          += kprobes_trampoline.o
>>   obj-$(CONFIG_KPROBES_ON_FTRACE)        += ftrace.o
>>   obj-$(CONFIG_UPROBES)          += uprobes.o decode-insn.o simulate-insn.o
>> +obj-$(CONFIG_OPTPROBES)                += opt.o opt_trampoline.o
>>   CFLAGS_REMOVE_simulate-insn.o = $(CC_FLAGS_FTRACE)
>> diff --git a/arch/riscv/kernel/probes/opt.c b/arch/riscv/kernel/probes/opt.c
>> new file mode 100644
>> index 000000000..b9bcf6e12
>> --- /dev/null
>> +++ b/arch/riscv/kernel/probes/opt.c
>> @@ -0,0 +1,483 @@
>> +// SPDX-License-Identifier: GPL-2.0-or-later
>> +/*
>> + *  Kernel Probes Jump Optimization (Optprobes)
>> + *
>> + * Copyright (C) IBM Corporation, 2002, 2004
>> + * Copyright (C) Hitachi Ltd., 2012
>> + * Copyright (C) Huawei Inc., 2014
>> + * Copyright (C) 2022 Huawei Technologies Co., Ltd
>> + * Copyright (C) Guokai Chen, 2022
> 
> Should this not be your University here?
> 
>> + * Author: Guokai Chen chenguokai17@mails.ucas.ac.cn
>> + */
>> +
>> +#include <linux/kprobes.h>
>> +#include <linux/jump_label.h>
>> +#include <linux/extable.h>
>> +#include <linux/stop_machine.h>
>> +#include <linux/moduleloader.h>
>> +#include <linux/kprobes.h>
>> +#include <linux/cacheflush.h>
>> +/* for patch_text */
>> +#include <asm/ftrace.h>
>> +#include <asm/patch.h>
>> +#include "simulate-insn.h"
>> +#include "decode-insn.h"
>> +
>> +
>> +#define JUMP_SIZE 8
>> +
>> +/*
>> + * If the probed instruction doesn't use PC and is not system or fence
>> + * we can copy it into template and have it executed directly without
>> + * simulation or emulation.
>> + */
>> +enum probe_insn __kprobes can_kprobe_direct_exec(kprobe_opcode_t *addr)
>> +{
>> +       /*
>> +        * instructions that use PC
>> +        * branch jump auipc
>> +        * instructions that belongs to system or fence
>> +        * ebreak ecall fence.i
> 
> Please use the full columns available to you for comments.
> 
>> +        */
>> +       kprobe_opcode_t inst = *addr;
>> +
>> +       RISCV_INSN_REJECTED(system, inst);
>> +       RISCV_INSN_REJECTED(fence, inst);
>> +       RISCV_INSN_REJECTED(branch, inst);
>> +       RISCV_INSN_REJECTED(jal, inst);
>> +       RISCV_INSN_REJECTED(jalr, inst);
>> +       RISCV_INSN_REJECTED(auipc, inst);
>> +       return INSN_GOOD_NO_SLOT;
>> +}
>> +
>> +#define TMPL_VAL_IDX \
>> +       ((kprobe_opcode_t *)optprobe_template_val - \
>> +        (kprobe_opcode_t *)optprobe_template_entry)
>> +#define TMPL_CALL_IDX \
>> +       ((kprobe_opcode_t *)optprobe_template_call - \
>> +        (kprobe_opcode_t *)optprobe_template_entry)
>> +#define TMPL_STORE_EPC_IDX \
>> +       ((kprobe_opcode_t *)optprobe_template_store_epc - \
>> +        (kprobe_opcode_t *)optprobe_template_entry)
>> +#define TMPL_END_IDX \
>> +       ((kprobe_opcode_t *)optprobe_template_end - \
>> +        (kprobe_opcode_t *)optprobe_template_entry)
>> +#define TMPL_ADD_SP \
>> +       ((kprobe_opcode_t *)optprobe_template_add_sp - \
>> +        (kprobe_opcode_t *)optprobe_template_entry)
>> +#define TMPL_SUB_SP \
>> +       ((kprobe_opcode_t *)optprobe_template_sub_sp - \
>> +        (kprobe_opcode_t *)optprobe_template_entry)
>> +#define TMPL_RESTORE_BEGIN \
>> +       ((kprobe_opcode_t *)optprobe_template_restore_begin - \
>> +        (kprobe_opcode_t *)optprobe_template_entry)
>> +#define TMPL_RESTORE_ORIGN_INSN \
>> +       ((kprobe_opcode_t *)optprobe_template_restore_orig_insn - \
>> +        (kprobe_opcode_t *)optprobe_template_entry)
>> +#define TMPL_RESTORE_RET \
>> +       ((kprobe_opcode_t *)optprobe_template_ret - \
>> +        (kprobe_opcode_t *)optprobe_template_entry)
>> +#define TMPL_RESTORE_END \
>> +       ((kprobe_opcode_t *)optprobe_template_restore_end - \
>> +        (kprobe_opcode_t *)optprobe_template_entry)
>> +
>> +#define FREE_SEARCH_DEPTH 32
>> +
>> +/*
>> + * RISC-V can always optimize an instruction if not null
>> + */
>> +int arch_prepared_optinsn(struct arch_optimized_insn *optinsn)
>> +{
>> +       return optinsn->insn != NULL;
>> +}
>> +
>> +/*
>> + * In RISC-V ISA, jal has a quite limited jump range
>> + * To achive adequate range, auipc+jalr is utilized
>> + * It requires a replacement of two instructions
>> + * thus next instruction should be examined
> 
> Please use the full columns available to you for comments.
> 
>> + */
>> +int arch_check_optimized_kprobe(struct optimized_kprobe *op)
>> +{
>> +       struct kprobe *p;
>> +
>> +       p = get_kprobe(op->kp.addr + 4);
> 
> Where does this 4 come from?
> 
>> +       if (p && !kprobe_disabled(p))
>> +               return -EEXIST;
>> +
>> +       return 0;
>> +}
>> +
>> +/*
>> + * In RISC-V ISA, auipc+jalr requires a free register
>> + * Inspired by register renaming in OoO processor,
>> + * we search backwards to find such a register that:
>> + * not previously used as a source register &&
>> + * is used as a destination register &&
>> + * before any branch/jump instruction
> 
> Ditto re comment width.
> 
>> + */
>> +static int
>> +__arch_find_free_register(kprobe_opcode_t *addr, int use_orig,
>> +                         kprobe_opcode_t orig)
>> +{
>> +       int i, rs1, rs2, rd;
>> +       kprobe_opcode_t inst;
>> +       int rs_mask = 0;
>> +
>> +       for (i = 0; i < FREE_SEARCH_DEPTH; i++) {
>> +               if (i == 0 && use_orig)
>> +                       inst = orig;
>> +               else
>> +                       inst = *(kprobe_opcode_t *) (addr + i);
>> +               /*
>> +                * Detailed handling:
>> +                * jalr/branch/system: must have reached the end, no result
>> +                * jal: if not chosen as result, must have reached the end
>> +                * arithmetic/load/store: record their rs
>> +                * jal/arithmetic/load: if proper rd found, return result
>> +                * others (float point/vector): ignore
>> +                */
>> +               if (riscv_insn_is_branch(inst) || riscv_insn_is_jalr(inst)
>> +                       || riscv_insn_is_system(inst)) {
>> +                       return 0;
>> +               }
>> +               /* instructions that has rs1 */
>> +               if (riscv_insn_is_arith_ri(inst) || riscv_insn_is_arith_rr(inst)
>> +                       || riscv_insn_is_load(inst) || riscv_insn_is_store(inst)
>> +                       || riscv_insn_is_amo(inst)) {
>> +                       rs1 = (inst & 0xF8000) >> 15;
>> +                       rs_mask |= 1 << rs1;
>> +               }
>> +               /* instructions that has rs2 */
>> +               if (riscv_insn_is_arith_rr(inst) || riscv_insn_is_store(inst)
>> +                       || riscv_insn_is_amo(inst)) {
>> +                       rs2 = (inst & 0x1F00000) >> 20;
>> +                       rs_mask |= 1 << rs2;
>> +               }
>> +               /* instructions that has rd */
>> +               if (riscv_insn_is_lui(inst) || riscv_insn_is_jal(inst)
>> +                       || riscv_insn_is_load(inst) || riscv_insn_is_arith_ri(inst)
>> +                       || riscv_insn_is_arith_rr(inst) || riscv_insn_is_amo(inst)) {
>> +                       rd = (inst & 0xF80) >> 7;
>> +                       if (rd != 0 && (rs_mask & (1 << rd)) == 0)
>> +                               return rd;
>> +                       if (riscv_insn_is_jal(inst))
>> +                               return 0;
>> +               }
>> +       }
>> +       return 0;
>> +}
>> +
>> +/*
>> + * If two free registers can be found at the beginning of both
>> + * the start and the end of replaced code, it can be optimized
>> + * Also, in-function jumps need to be checked to make sure that
>> + * there is no jump to the second instruction to be replaced
>> + */
>> +
>> +#define branch_imm(opcode) \
>> +       (((((opcode) >>  8) & 0xf) <<  1) | \
>> +        ((((opcode) >> 25) & 0x3f) <<  5) | \
>> +        ((((opcode) >>  7) & 0x1) << 11) | \
>> +        ((((opcode) >> 31) & 0x1) << 12))
> 
> All the numbers in here are quite meaningless to me.
> Could you please use defines here?
> 
>> +
>> +#define branch_offset(opcode) \
>> +       sign_extend32((branch_imm(opcode)), 12)
>> +
>> +#define jal_imm(opcode) \
>> +       ((((opcode >> 21) & 0x3ff) << 1) | \
>> +        (((opcode >> 20) & 0x1) << 11) | \
>> +        (((opcode >> 31) & 0x1) << 20))
>> +#define jal_offset(opcode) \
>> +       sign_extend32(jal_imm(opcode), 20)
>> +
>> +static int can_optimize(unsigned long paddr, kprobe_opcode_t orig)
>> +{
>> +       unsigned long addr, size = 0, offset = 0, target;
>> +       s32 imm;
>> +       kprobe_opcode_t inst;
>> +
>> +       if (!kallsyms_lookup_size_offset(paddr, &size, &offset))
>> +               return 0;
>> +
>> +       addr = paddr - offset;
>> +
>> +       /* if there are not enough space for our kprobe, skip */
>> +       if (addr + size <= paddr + MAX_OPTIMIZED_LENGTH)
>> +               return 0;
>> +
>> +       while (addr < paddr - offset + size) {
>> +               /* Check from the start until the end */
>> +
>> +               inst = *(kprobe_opcode_t *)addr;
>> +               /* branch and jal is capable of determing target before execution */
>> +               if (riscv_insn_is_branch(inst)) {
>> +                       imm = branch_offset(inst);
>> +                       target = addr + imm;
>> +                       if (target == paddr + RVI_INST_SIZE)
>> +                               return 0;
>> +               } else if (riscv_insn_is_jal(inst)) {
>> +                       imm = jal_offset(inst);
>> +                       target = addr + imm;
>> +                       if (target == paddr + RVI_INST_SIZE)
>> +                               return 0;
>> +               }
>> +               /* RVI is always 4 byte long */
>> +               addr += 4;
>> +       }
>> +
>> +       if (can_kprobe_direct_exec((kprobe_opcode_t *)(paddr + 4)) != INSN_GOOD_NO_SLOT)
>> +               return 0;
>> +
>> +       /* only valid when we find two free registers */
>> +       return __arch_find_free_register((kprobe_opcode_t *) paddr, 1, orig)
>> +               && __arch_find_free_register((kprobe_opcode_t *) (paddr + JUMP_SIZE), 0, 0);
>> +}
>> +
>> +/* Free optimized instruction slot */
>> +static void
>> +__arch_remove_optimized_kprobe(struct optimized_kprobe *op, int dirty)
>> +{
>> +       if (op->optinsn.insn) {
>> +               free_optinsn_slot(op->optinsn.insn, dirty);
>> +               op->optinsn.insn = NULL;
>> +       }
>> +}
>> +
>> +extern void kprobe_handler(struct pt_regs *regs);
>> +
>> +static void
>> +optimized_callback(struct optimized_kprobe *op, struct pt_regs *regs)
>> +{
>> +       unsigned long flags;
>> +       struct kprobe_ctlblk *kcb;
>> +
>> +       /* Save skipped registers */
>> +       regs->epc = (unsigned long)op->kp.addr;
>> +       regs->orig_a0 = ~0UL;
>> +
>> +       local_irq_save(flags);
>> +       kcb = get_kprobe_ctlblk();
>> +
>> +       if (kprobe_running()) {
>> +               kprobes_inc_nmissed_count(&op->kp);
>> +       } else {
>> +               __this_cpu_write(current_kprobe, &op->kp);
>> +               kcb->kprobe_status = KPROBE_HIT_ACTIVE;
>> +               opt_pre_handler(&op->kp, regs);
>> +               __this_cpu_write(current_kprobe, NULL);
>> +       }
>> +
>> +       local_irq_restore(flags);
>> +}
>> +
>> +NOKPROBE_SYMBOL(optimized_callback)
>> +static inline kprobe_opcode_t
>> +__arch_patch_rd(kprobe_opcode_t inst, unsigned long val)
>> +{
>> +       inst &= 0xfffff07fUL;
> 
> It'd be nice if these were defines too, so that it was clear to
> the untrained eye what's going on here.
> 
>> +       inst |= val << 7;
>> +       return inst;
>> +}
>> +
>> +static inline kprobe_opcode_t
>> +__arch_patch_rs1(kprobe_opcode_t inst, unsigned long val)
>> +{
>> +       inst &= 0xfff07fffUL;
>> +       inst |= val << 15;
>> +       return inst;
>> +}
>> +
>> +static inline kprobe_opcode_t __arch_patch_rs2(kprobe_opcode_t inst,
>> +                                                  unsigned long val)
>> +{
>> +       inst &= 0xfe0fffffUL;
>> +       inst |= val << 20;
>> +       return inst;
>> +}
>> +
>> +int
>> +arch_prepare_optimized_kprobe(struct optimized_kprobe *op, struct kprobe *orig)
>> +{
>> +       kprobe_opcode_t *code, *detour_slot, *detour_ret_addr;
>> +       long rel_chk;
>> +       unsigned long val;
>> +
>> +       /* not aligned address */
>> +       #ifdef CONFIG_RISCV_ISA_C
> 
> Please use IS_ENABLED() here if you can.
> 
>> +       return -ERANGE;
>> +       #endif
>> +
>> +       if (!can_optimize((unsigned long)orig->addr, orig->opcode))
>> +               return -EILSEQ;
>> +
>> +       code = kzalloc(MAX_OPTINSN_SIZE, GFP_KERNEL);
>> +       detour_slot = get_optinsn_slot();
>> +
>> +       if (!code || !detour_slot) {
>> +               kfree(code);
>> +               if (detour_slot)
>> +                       free_optinsn_slot(detour_slot, 0);
>> +               return -ENOMEM;
>> +       }
>> +
>> +       /*
>> +        * Verify if the address gap is within 4GB range, because this uses
>> +        * a auipc+jalr pair.
>> +        */
>> +       rel_chk = (long)detour_slot - (long)orig->addr + 8;
>> +       if (abs(rel_chk) > 0x7fffffff) {
> 
> GENMASK please.
> 
>> +               /*
>> +                * Different from x86, we free code buf directly instead of
>> +                * calling __arch_remove_optimized_kprobe() because
>> +                * we have not fill any field in op.
>> +                */
>> +               kfree(code);
>> +               free_optinsn_slot(detour_slot, 0);
>> +               return -ERANGE;
>> +       }
>> +
>> +       /* Copy arch-dep-instance from template. */
>> +       memcpy(code, (unsigned long *)optprobe_template_entry,
>> +                  TMPL_END_IDX * sizeof(kprobe_opcode_t));
>> +
>> +       /* Set probe information */
>> +       val = (unsigned long)op;
>> +       *(unsigned long *)(&code[TMPL_VAL_IDX]) = val;
>> +
>> +       /* Set probe function call */
>> +       val = (unsigned long)optimized_callback;
>> +       *(unsigned long *)(&code[TMPL_CALL_IDX]) = val;
> 
> What is the benefit of using val here? I think the comments
> are also pointing out the obvious here, no?
> 
>> +
>> +       /* Adjust epc register */
> 
> The comments here mainly just say what you're doing & not why
> it should be done.
> 
>> +       val = __arch_find_free_register(orig->addr, 1, orig->opcode);
>> +       /*
>> +        * patch rs2 of optprobe_template_store_epc
>> +        * after patch, optprobe_template_store_epc will be
>> +        * REG_S free_register, PT_EPC(sp)
>> +        */
>> +       code[TMPL_STORE_EPC_IDX] =
>> +               __arch_patch_rs2(code[TMPL_STORE_EPC_IDX], val);
>> +
>> +       /* Adjust return temp register */
>> +       val =
>> +               __arch_find_free_register(orig->addr +
>> +                                         JUMP_SIZE / sizeof(kprobe_opcode_t), 0,
>> +                                         0);
>> +       /*
>> +        * patch of optprobe_template_restore_end
>> +        * patch:
>> +        *   rd and imm of auipc
>> +        *   rs1 and imm of jalr
>> +        * after patch:
>> +        *   auipc free_register, %hi(return_address)
>> +        *   jalr x0, %lo(return_address)(free_register)
>> +        *
>> +        */
>> +
>> +       detour_ret_addr = &(detour_slot[optprobe_template_restore_end - optprobe_template_entry]);
>> +
>> +       make_call(detour_ret_addr, (orig->addr + JUMP_SIZE / sizeof(kprobe_opcode_t)),
>> +                       (code + TMPL_RESTORE_END));
>> +       code[TMPL_RESTORE_END] = __arch_patch_rd(code[TMPL_RESTORE_END], val);
>> +       code[TMPL_RESTORE_END + 1] =
>> +               __arch_patch_rs1(code[TMPL_RESTORE_END + 1], val);
>> +       code[TMPL_RESTORE_END + 1] = __arch_patch_rd(code[TMPL_RESTORE_END + 1], 0);
>> +
>> +       /* Copy insn and have it executed during restore */
>> +
>> +       code[TMPL_RESTORE_ORIGN_INSN] = orig->opcode;
>> +       code[TMPL_RESTORE_ORIGN_INSN + 1] =
>> +               *(kprobe_opcode_t *) (orig->addr + 1);
>> +
>> +       if (patch_text_nosync(detour_slot, code, MAX_OPTINSN_SIZE)) {
>> +               free_optinsn_slot(detour_slot, 0);
>> +               kfree(code);
>> +               return -EPERM;
>> +       }
>> +
>> +       kfree(code);
>> +       /* Set op->optinsn.insn means prepared. */
>> +       op->optinsn.insn = detour_slot;
>> +       return 0;
>> +}
>> +
>> +void __kprobes arch_optimize_kprobes(struct list_head *oplist)
>> +{
>> +       struct optimized_kprobe *op, *tmp;
>> +       kprobe_opcode_t val;
>> +
>> +       list_for_each_entry_safe(op, tmp, oplist, list) {
>> +               kprobe_opcode_t insn[2];
>> +
>> +               WARN_ON(kprobe_disabled(&op->kp));
>> +
>> +               /*
>> +                * Backup instructions which will be replaced
>> +                * by jump address
>> +                */
>> +               memcpy(op->optinsn.copied_insn, op->kp.addr, JUMP_SIZE);
>> +               op->optinsn.copied_insn[0] = op->kp.opcode;
>> +
>> +               make_call(op->kp.addr, op->optinsn.insn, insn);
>> +
>> +               // patch insn jalr to have rd as free register
>> +               val = (op->optinsn.insn[2] & 0x1F00000) >> 20;
> 
> Again, could you use some defines to make this more understandable
> to mere mortals like me? ;)
> 
>> +
>> +               insn[0] = __arch_patch_rd(insn[0], val);
>> +
>> +               insn[1] = __arch_patch_rd(insn[1], val);
>> +               insn[1] = __arch_patch_rs1(insn[1], val);
>> +
>> +               /*
>> +                * Similar to __arch_disarm_kprobe, operations which
>> +                * removing breakpoints must be wrapped by stop_machine
>> +                * to avoid racing.
>> +                */
>> +               WARN_ON(patch_text_nosync(op->kp.addr, insn, JUMP_SIZE));
>> +
>> +               list_del_init(&op->list);
>> +       }
>> +}
>> +
>> +static int arch_disarm_kprobe_opt(void *vop)
>> +{
>> +       struct optimized_kprobe *op = (struct optimized_kprobe *)vop;
>> +
>> +       patch_text_nosync(op->kp.addr, op->optinsn.copied_insn, JUMP_SIZE);
>> +       arch_arm_kprobe(&op->kp);
>> +       return 0;
>> +}
>> +
>> +void arch_unoptimize_kprobe(struct optimized_kprobe *op)
>> +{
>> +       arch_disarm_kprobe_opt((void *)op);
>> +}
>> +
>> +/*
>> + * Recover original instructions and breakpoints from relative jumps.
>> + * Caller must call with locking kprobe_mutex.
>> + */
>> +void arch_unoptimize_kprobes(struct list_head *oplist,
>> +                                struct list_head *done_list)
>> +{
>> +       struct optimized_kprobe *op, *tmp;
>> +
>> +       list_for_each_entry_safe(op, tmp, oplist, list) {
>> +               arch_unoptimize_kprobe(op);
>> +               list_move(&op->list, done_list);
>> +       }
>> +}
>> +
>> +int arch_within_optimized_kprobe(struct optimized_kprobe *op,
>> +                                kprobe_opcode_t *addr)
>> +{
>> +       return (op->kp.addr <= addr &&
>> +               op->kp.addr + (JUMP_SIZE / sizeof(kprobe_opcode_t)) > addr);
>> +
>> +}
>> +
>> +void arch_remove_optimized_kprobe(struct optimized_kprobe *op)
>> +{
>> +       __arch_remove_optimized_kprobe(op, 1);
>> +}
>> diff --git a/arch/riscv/kernel/probes/opt_trampoline.S b/arch/riscv/kernel/probes/opt_trampoline.S
> 
> Thanks,
> Conor.
> 
> 
>
Conor Dooley Aug. 31, 2022, 7:51 a.m. UTC | #3
On 31/08/2022 08:24, Conor Dooley - M52691 wrote:
> Hey Chen,
> 
> FYI there is a build warning with this patch:
> arch/riscv/kernel/probes/opt.c:34:27: warning: no previous prototype for 'can_kprobe_direct_exec' [-Wmissing-prototypes]
>     34 | enum probe_insn __kprobes can_kprobe_direct_exec(kprobe_opcode_t *addr)
> 
> Also, if you run scripts/checkpatch.pl --strict, it will have a
> few complaints about code style for you too. Other than that, I
> have a few comments for you below:
> 
> On 31/08/2022 05:10, Chen Guokai wrote:

> [PATCH] arch/riscv: kprobes: implement optprobes

One more nitpick thing, could you make this either "riscv"
or "RISC-V" not "arch/riscv"?

Thanks :)
Conor Dooley Aug. 31, 2022, 7:52 a.m. UTC | #4
On 31/08/2022 08:49, liaochang (A) wrote:
> EXTERNAL EMAIL: Do not click links or open attachments unless you know the content is safe
> 
> 在 2022/8/31 15:24, Conor.Dooley@microchip.com 写道:
>> Hey Chen,
>>
>> FYI there is a build warning with this patch:
>> arch/riscv/kernel/probes/opt.c:34:27: warning: no previous prototype for 'can_kprobe_direct_exec' [-Wmissing-prototypes]
>>      34 | enum probe_insn __kprobes can_kprobe_direct_exec(kprobe_opcode_t *addr)
>>
>> Also, if you run scripts/checkpatch.pl --strict, it will have a
>> few complaints about code style for you too. Other than that, I
>> have a few comments for you below:
>>
>> On 31/08/2022 05:10, Chen Guokai wrote:
>>> [You don't often get email from chenguokai17@mails.ucas.ac.cn. Learn why this is important at https://aka.ms/LearnAboutSenderIdentification ]
>>>
>>> EXTERNAL EMAIL: Do not click links or open attachments unless you know the content is safe
>>>
>>> This patch adds jump optimization support for RISC-V.
>>
>> s/This patch adds/Add
>>
>>>
>>> This patch replaces ebreak instructions used by normal kprobes with an
>>
>> s/This patch replaces/Replace
>>
>>> auipc+jalr instruction pair, at the aim of suppressing the probe-hit
>>> overhead.
>>>
>>> All known optprobe-capable RISC architectures have been using a single
>>> jump or branch instructions while this patch chooses not. RISC-V has a
>>> quite limited jump range (4KB or 2MB) for both its branch and jump
>>> instructions, which prevent optimizations from supporting probes that
>>> spread all over the kernel.
>>>
>>> Auipc-jalr instruction pair is introduced with a much wider jump range
>>> (4GB), where auipc loads the upper 12 bits to a free register and jalr
>>> appends the lower 20 bits to form a 32 bit immediate. Note that returning
>>> from probe handler requires another free register. As kprobes can appear
>>> almost anywhere inside the kernel, the free register should be found in a
>>> generic way, not depending on calling convension or any other regulations.
>>
>> convention
>>
>>>
>>> The algorithm for finding the free register is inspired by the regiter
>>
>> register
>>
>>> renaming in modern processors. From the perspective of register renaming, a
>>> register could be represented as two different registers if two neighbour
>>> instructions both write to it but no one ever reads. Extending this fact,
>>> a register is considered to be free if there is no read before its next
>>> write in the execution flow. We are free to change its value without
>>> interfering normal execution.
>>>
>>> Static analysis shows that 51% instructions of the kernel (default config)
>>> is capable of being replaced i.e. two free registers can be found at both
>>> the start and end of replaced instruction pairs while the replaced
>>> instructions can be directly executed.
>>>
>>> Signed-off-by: Chen Guokai <chenguokai17@mails.ucas.ac.cn>
>>> Signed-off-by: Liao Chang <liaochang1@huawei.com>
>>
>> What does Liao have to do with this patch?
> I just provide some suggestion to Chen Guokai during development ;)
> please remove my info from Signed-off-by tag.

Does that mean that the "copyright 2022 Huawei" is also not accurate?
Conor Dooley Aug. 31, 2022, 8:15 a.m. UTC | #5
On 31/08/2022 08:48, Xim wrote:

> EXTERNAL EMAIL: Do not click links or open attachments unless you know the content is safe
> Hi Conor,
> 
> Thanks for your review! I will correct addressed issues in the next version. I have some explanations for others.

FYI, this mail arrived as html so will not get processed by the
mailing lists.
Xim Aug. 31, 2022, 8:15 a.m. UTC | #6
Hi Conor,

Thanks for your review! I will correct addressed issues in the next version. I have some explanations for others.


&gt; -----原始邮件-----
&gt; 发件人: Conor.Dooley@microchip.com
&gt; 发送时间: 2022-08-31 15:24:08 (星期三)
&gt; 收件人: chenguokai17@mails.ucas.ac.cn, paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu, rostedt@goodmis.org, mingo@redhat.com, sfr@canb.auug.org.au
&gt; 抄送: linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, liaochang1@huawei.com
&gt; 主题: Re: [PATCH] arch/riscv: kprobes: implement optprobes
&gt; 
&gt; Hey Chen,
&gt; 
&gt; FYI there is a build warning with this patch:
&gt; arch/riscv/kernel/probes/opt.c:34:27: warning: no previous prototype for 'can_kprobe_direct_exec' [-Wmissing-prototypes]
&gt;     34 | enum probe_insn __kprobes can_kprobe_direct_exec(kprobe_opcode_t *addr)
&gt; 
&gt; Also, if you run scripts/checkpatch.pl --strict, it will have a
&gt; few complaints about code style for you too. Other than that, I
&gt; have a few comments for you below:
&gt; 
&gt; On 31/08/2022 05:10, Chen Guokai wrote:
&gt; &gt; [You don't often get email from chenguokai17@mails.ucas.ac.cn. Learn why this is important at https://aka.ms/LearnAboutSenderIdentification ]
&gt; &gt; 
&gt; &gt; EXTERNAL EMAIL: Do not click links or open attachments unless you know the content is safe
&gt; &gt; 
&gt; &gt; This patch adds jump optimization support for RISC-V.
&gt; 
&gt; s/This patch adds/Add
&gt; 
&gt; &gt; 
&gt; &gt; This patch replaces ebreak instructions used by normal kprobes with an
&gt; 
&gt; s/This patch replaces/Replace
&gt; 
&gt; &gt; auipc+jalr instruction pair, at the aim of suppressing the probe-hit
&gt; &gt; overhead.
&gt; &gt; 
&gt; &gt; All known optprobe-capable RISC architectures have been using a single
&gt; &gt; jump or branch instructions while this patch chooses not. RISC-V has a
&gt; &gt; quite limited jump range (4KB or 2MB) for both its branch and jump
&gt; &gt; instructions, which prevent optimizations from supporting probes that
&gt; &gt; spread all over the kernel.
&gt; &gt; 
&gt; &gt; Auipc-jalr instruction pair is introduced with a much wider jump range
&gt; &gt; (4GB), where auipc loads the upper 12 bits to a free register and jalr
&gt; &gt; appends the lower 20 bits to form a 32 bit immediate. Note that returning
&gt; &gt; from probe handler requires another free register. As kprobes can appear
&gt; &gt; almost anywhere inside the kernel, the free register should be found in a
&gt; &gt; generic way, not depending on calling convension or any other regulations.
&gt; 
&gt; convention
&gt; 
&gt; &gt; 
&gt; &gt; The algorithm for finding the free register is inspired by the regiter
&gt; 
&gt; register
&gt; 
&gt; &gt; renaming in modern processors. From the perspective of register renaming, a
&gt; &gt; register could be represented as two different registers if two neighbour
&gt; &gt; instructions both write to it but no one ever reads. Extending this fact,
&gt; &gt; a register is considered to be free if there is no read before its next
&gt; &gt; write in the execution flow. We are free to change its value without
&gt; &gt; interfering normal execution.
&gt; &gt; 
&gt; &gt; Static analysis shows that 51% instructions of the kernel (default config)
&gt; &gt; is capable of being replaced i.e. two free registers can be found at both
&gt; &gt; the start and end of replaced instruction pairs while the replaced
&gt; &gt; instructions can be directly executed.
&gt; &gt; 
&gt; &gt; Signed-off-by: Chen Guokai <chenguokai17@mails.ucas.ac.cn>
&gt; &gt; Signed-off-by: Liao Chang <liaochang1@huawei.com>
&gt; 
&gt; What does Liao have to do with this patch?

Liao is my mentor in OSPP 2022 hold by ISCAS, CAS. He has been taking
an active part in the design and review process of this patch.

&gt; 
&gt; &gt; ---
&gt; &gt;   arch/riscv/Kconfig                        |   1 +
&gt; &gt;   arch/riscv/include/asm/ftrace.h           |   2 +-
&gt; &gt;   arch/riscv/include/asm/kprobes.h          |  28 ++
&gt; &gt;   arch/riscv/kernel/probes/Makefile         |   1 +
&gt; &gt;   arch/riscv/kernel/probes/opt.c            | 483 ++++++++++++++++++++++
&gt; &gt;   arch/riscv/kernel/probes/opt_trampoline.S | 133 ++++++
&gt; &gt;   arch/riscv/kernel/probes/simulate-insn.h  |   9 +
&gt; &gt;   7 files changed, 656 insertions(+), 1 deletion(-)
&gt; &gt;   create mode 100644 arch/riscv/kernel/probes/opt.c
&gt; &gt;   create mode 100644 arch/riscv/kernel/probes/opt_trampoline.S
&gt; &gt; 
&gt; &gt; diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
&gt; &gt; index d557cc502..a54e50de2 100644
&gt; &gt; --- a/arch/riscv/Kconfig
&gt; &gt; +++ b/arch/riscv/Kconfig
&gt; &gt; @@ -97,6 +97,7 @@ config RISCV
&gt; &gt;          select HAVE_KPROBES if !XIP_KERNEL
&gt; &gt;          select HAVE_KPROBES_ON_FTRACE if !XIP_KERNEL
&gt; &gt;          select HAVE_KRETPROBES if !XIP_KERNEL
&gt; &gt; +       select HAVE_OPTPROBES if !XIP_KERNEL &amp;&amp; !CONFIG_RISCV_ISA_C
&gt; &gt;          select HAVE_MOVE_PMD
&gt; &gt;          select HAVE_MOVE_PUD
&gt; &gt;          select HAVE_PCI
&gt; &gt; diff --git a/arch/riscv/include/asm/ftrace.h b/arch/riscv/include/asm/ftrace.h
&gt; &gt; index 04dad3380..8b17a4c66 100644
&gt; &gt; --- a/arch/riscv/include/asm/ftrace.h
&gt; &gt; +++ b/arch/riscv/include/asm/ftrace.h
&gt; &gt; @@ -35,7 +35,7 @@ struct dyn_arch_ftrace {
&gt; &gt;   };
&gt; &gt;   #endif
&gt; &gt; 
&gt; &gt; -#ifdef CONFIG_DYNAMIC_FTRACE
&gt; &gt; +#if defined(CONFIG_DYNAMIC_FTRACE) || defined(CONFIG_OPTPROBES)
&gt; &gt;   /*
&gt; &gt;    * A general call in RISC-V is a pair of insts:
&gt; &gt;    * 1) auipc: setting high-20 pc-related bits to ra register
&gt; &gt; diff --git a/arch/riscv/include/asm/kprobes.h b/arch/riscv/include/asm/kprobes.h
&gt; &gt; index 217ef89f2..6c5e10709 100644
&gt; &gt; --- a/arch/riscv/include/asm/kprobes.h
&gt; &gt; +++ b/arch/riscv/include/asm/kprobes.h
&gt; &gt; @@ -43,5 +43,33 @@ bool kprobe_single_step_handler(struct pt_regs *regs);
&gt; &gt;   void __kretprobe_trampoline(void);
&gt; &gt;   void __kprobes *trampoline_probe_handler(struct pt_regs *regs);
&gt; &gt; 
&gt; &gt; +#ifdef CONFIG_OPTPROBES
&gt; &gt; +
&gt; &gt; +#define MAX_OPTIMIZED_LENGTH   8
&gt; &gt; +
&gt; &gt; +/* optinsn template addresses */
&gt; &gt; +extern __visible kprobe_opcode_t optprobe_template_entry[];
&gt; &gt; +extern __visible kprobe_opcode_t optprobe_template_val[];
&gt; &gt; +extern __visible kprobe_opcode_t optprobe_template_call[];
&gt; &gt; +extern __visible kprobe_opcode_t optprobe_template_store_epc[];
&gt; &gt; +extern __visible kprobe_opcode_t optprobe_template_end[];
&gt; &gt; +extern __visible kprobe_opcode_t optprobe_template_sub_sp[];
&gt; &gt; +extern __visible kprobe_opcode_t optprobe_template_add_sp[];
&gt; &gt; +extern __visible kprobe_opcode_t optprobe_template_restore_begin[];
&gt; &gt; +extern __visible kprobe_opcode_t optprobe_template_restore_orig_insn[];
&gt; &gt; +extern __visible kprobe_opcode_t optprobe_template_restore_end[];
&gt; &gt; +
&gt; &gt; +#define MAX_OPTINSN_SIZE                               \
&gt; &gt; +               ((unsigned long)optprobe_template_end - \
&gt; &gt; +                (unsigned long)optprobe_template_entry)
&gt; &gt; +
&gt; &gt; +#define MAX_COPIED_INSN 2
&gt; &gt; +struct arch_optimized_insn {
&gt; &gt; +               kprobe_opcode_t copied_insn[MAX_COPIED_INSN];
&gt; &gt; +                       /* detour code buffer */
&gt; &gt; +                       kprobe_opcode_t *insn;
&gt; &gt; +};
&gt; &gt; +#define RVI_INST_SIZE 4
&gt; &gt; +#endif /* CONFIG_OPTPROBES */
&gt; &gt;   #endif /* CONFIG_KPROBES */
&gt; &gt;   #endif /* _ASM_RISCV_KPROBES_H */
&gt; &gt; diff --git a/arch/riscv/kernel/probes/Makefile b/arch/riscv/kernel/probes/Makefile
&gt; &gt; index 7f0840dcc..6255b4600 100644
&gt; &gt; --- a/arch/riscv/kernel/probes/Makefile
&gt; &gt; +++ b/arch/riscv/kernel/probes/Makefile
&gt; &gt; @@ -3,4 +3,5 @@ obj-$(CONFIG_KPROBES)           += kprobes.o decode-insn.o simulate-insn.o
&gt; &gt;   obj-$(CONFIG_KPROBES)          += kprobes_trampoline.o
&gt; &gt;   obj-$(CONFIG_KPROBES_ON_FTRACE)        += ftrace.o
&gt; &gt;   obj-$(CONFIG_UPROBES)          += uprobes.o decode-insn.o simulate-insn.o
&gt; &gt; +obj-$(CONFIG_OPTPROBES)                += opt.o opt_trampoline.o
&gt; &gt;   CFLAGS_REMOVE_simulate-insn.o = $(CC_FLAGS_FTRACE)
&gt; &gt; diff --git a/arch/riscv/kernel/probes/opt.c b/arch/riscv/kernel/probes/opt.c
&gt; &gt; new file mode 100644
&gt; &gt; index 000000000..b9bcf6e12
&gt; &gt; --- /dev/null
&gt; &gt; +++ b/arch/riscv/kernel/probes/opt.c
&gt; &gt; @@ -0,0 +1,483 @@
&gt; &gt; +// SPDX-License-Identifier: GPL-2.0-or-later
&gt; &gt; +/*
&gt; &gt; + *  Kernel Probes Jump Optimization (Optprobes)
&gt; &gt; + *
&gt; &gt; + * Copyright (C) IBM Corporation, 2002, 2004
&gt; &gt; + * Copyright (C) Hitachi Ltd., 2012
&gt; &gt; + * Copyright (C) Huawei Inc., 2014
&gt; &gt; + * Copyright (C) 2022 Huawei Technologies Co., Ltd
&gt; &gt; + * Copyright (C) Guokai Chen, 2022
&gt; 
&gt; Should this not be your University here?

My university does not involve in this work, sorry for any confusion.

&gt; 
&gt; &gt; + * Author: Guokai Chen chenguokai17@mails.ucas.ac.cn
&gt; &gt; + */
&gt; &gt; +
&gt; &gt; +#include <linux kprobes.h="">
&gt; &gt; +#include <linux jump_label.h="">
&gt; &gt; +#include <linux extable.h="">
&gt; &gt; +#include <linux stop_machine.h="">
&gt; &gt; +#include <linux moduleloader.h="">
&gt; &gt; +#include <linux kprobes.h="">
&gt; &gt; +#include <linux cacheflush.h="">
&gt; &gt; +/* for patch_text */
&gt; &gt; +#include <asm ftrace.h="">
&gt; &gt; +#include <asm patch.h="">
&gt; &gt; +#include "simulate-insn.h"
&gt; &gt; +#include "decode-insn.h"
&gt; &gt; +
&gt; &gt; +
&gt; &gt; +#define JUMP_SIZE 8
&gt; &gt; +
&gt; &gt; +/*
&gt; &gt; + * If the probed instruction doesn't use PC and is not system or fence
&gt; &gt; + * we can copy it into template and have it executed directly without
&gt; &gt; + * simulation or emulation.
&gt; &gt; + */
&gt; &gt; +enum probe_insn __kprobes can_kprobe_direct_exec(kprobe_opcode_t *addr)
&gt; &gt; +{
&gt; &gt; +       /*
&gt; &gt; +        * instructions that use PC
&gt; &gt; +        * branch jump auipc
&gt; &gt; +        * instructions that belongs to system or fence
&gt; &gt; +        * ebreak ecall fence.i
&gt; 
&gt; Please use the full columns available to you for comments.
&gt; 
&gt; &gt; +        */
&gt; &gt; +       kprobe_opcode_t inst = *addr;
&gt; &gt; +
&gt; &gt; +       RISCV_INSN_REJECTED(system, inst);
&gt; &gt; +       RISCV_INSN_REJECTED(fence, inst);
&gt; &gt; +       RISCV_INSN_REJECTED(branch, inst);
&gt; &gt; +       RISCV_INSN_REJECTED(jal, inst);
&gt; &gt; +       RISCV_INSN_REJECTED(jalr, inst);
&gt; &gt; +       RISCV_INSN_REJECTED(auipc, inst);
&gt; &gt; +       return INSN_GOOD_NO_SLOT;
&gt; &gt; +}
&gt; &gt; +
&gt; &gt; +#define TMPL_VAL_IDX \
&gt; &gt; +       ((kprobe_opcode_t *)optprobe_template_val - \
&gt; &gt; +        (kprobe_opcode_t *)optprobe_template_entry)
&gt; &gt; +#define TMPL_CALL_IDX \
&gt; &gt; +       ((kprobe_opcode_t *)optprobe_template_call - \
&gt; &gt; +        (kprobe_opcode_t *)optprobe_template_entry)
&gt; &gt; +#define TMPL_STORE_EPC_IDX \
&gt; &gt; +       ((kprobe_opcode_t *)optprobe_template_store_epc - \
&gt; &gt; +        (kprobe_opcode_t *)optprobe_template_entry)
&gt; &gt; +#define TMPL_END_IDX \
&gt; &gt; +       ((kprobe_opcode_t *)optprobe_template_end - \
&gt; &gt; +        (kprobe_opcode_t *)optprobe_template_entry)
&gt; &gt; +#define TMPL_ADD_SP \
&gt; &gt; +       ((kprobe_opcode_t *)optprobe_template_add_sp - \
&gt; &gt; +        (kprobe_opcode_t *)optprobe_template_entry)
&gt; &gt; +#define TMPL_SUB_SP \
&gt; &gt; +       ((kprobe_opcode_t *)optprobe_template_sub_sp - \
&gt; &gt; +        (kprobe_opcode_t *)optprobe_template_entry)
&gt; &gt; +#define TMPL_RESTORE_BEGIN \
&gt; &gt; +       ((kprobe_opcode_t *)optprobe_template_restore_begin - \
&gt; &gt; +        (kprobe_opcode_t *)optprobe_template_entry)
&gt; &gt; +#define TMPL_RESTORE_ORIGN_INSN \
&gt; &gt; +       ((kprobe_opcode_t *)optprobe_template_restore_orig_insn - \
&gt; &gt; +        (kprobe_opcode_t *)optprobe_template_entry)
&gt; &gt; +#define TMPL_RESTORE_RET \
&gt; &gt; +       ((kprobe_opcode_t *)optprobe_template_ret - \
&gt; &gt; +        (kprobe_opcode_t *)optprobe_template_entry)
&gt; &gt; +#define TMPL_RESTORE_END \
&gt; &gt; +       ((kprobe_opcode_t *)optprobe_template_restore_end - \
&gt; &gt; +        (kprobe_opcode_t *)optprobe_template_entry)
&gt; &gt; +
&gt; &gt; +#define FREE_SEARCH_DEPTH 32
&gt; &gt; +
&gt; &gt; +/*
&gt; &gt; + * RISC-V can always optimize an instruction if not null
&gt; &gt; + */
&gt; &gt; +int arch_prepared_optinsn(struct arch_optimized_insn *optinsn)
&gt; &gt; +{
&gt; &gt; +       return optinsn-&gt;insn != NULL;
&gt; &gt; +}
&gt; &gt; +
&gt; &gt; +/*
&gt; &gt; + * In RISC-V ISA, jal has a quite limited jump range
&gt; &gt; + * To achive adequate range, auipc+jalr is utilized
&gt; &gt; + * It requires a replacement of two instructions
&gt; &gt; + * thus next instruction should be examined
&gt; 
&gt; Please use the full columns available to you for comments.
&gt; 
&gt; &gt; + */
&gt; &gt; +int arch_check_optimized_kprobe(struct optimized_kprobe *op)
&gt; &gt; +{
&gt; &gt; +       struct kprobe *p;
&gt; &gt; +
&gt; &gt; +       p = get_kprobe(op-&gt;kp.addr + 4);
&gt; 
&gt; Where does this 4 come from?
&gt; 
&gt; &gt; +       if (p &amp;&amp; !kprobe_disabled(p))
&gt; &gt; +               return -EEXIST;
&gt; &gt; +
&gt; &gt; +       return 0;
&gt; &gt; +}
&gt; &gt; +
&gt; &gt; +/*
&gt; &gt; + * In RISC-V ISA, auipc+jalr requires a free register
&gt; &gt; + * Inspired by register renaming in OoO processor,
&gt; &gt; + * we search backwards to find such a register that:
&gt; &gt; + * not previously used as a source register &amp;&amp;
&gt; &gt; + * is used as a destination register &amp;&amp;
&gt; &gt; + * before any branch/jump instruction
&gt; 
&gt; Ditto re comment width.
&gt; 
&gt; &gt; + */
&gt; &gt; +static int
&gt; &gt; +__arch_find_free_register(kprobe_opcode_t *addr, int use_orig,
&gt; &gt; +                         kprobe_opcode_t orig)
&gt; &gt; +{
&gt; &gt; +       int i, rs1, rs2, rd;
&gt; &gt; +       kprobe_opcode_t inst;
&gt; &gt; +       int rs_mask = 0;
&gt; &gt; +
&gt; &gt; +       for (i = 0; i &lt; FREE_SEARCH_DEPTH; i++) {
&gt; &gt; +               if (i == 0 &amp;&amp; use_orig)
&gt; &gt; +                       inst = orig;
&gt; &gt; +               else
&gt; &gt; +                       inst = *(kprobe_opcode_t *) (addr + i);
&gt; &gt; +               /*
&gt; &gt; +                * Detailed handling:
&gt; &gt; +                * jalr/branch/system: must have reached the end, no result
&gt; &gt; +                * jal: if not chosen as result, must have reached the end
&gt; &gt; +                * arithmetic/load/store: record their rs
&gt; &gt; +                * jal/arithmetic/load: if proper rd found, return result
&gt; &gt; +                * others (float point/vector): ignore
&gt; &gt; +                */
&gt; &gt; +               if (riscv_insn_is_branch(inst) || riscv_insn_is_jalr(inst)
&gt; &gt; +                       || riscv_insn_is_system(inst)) {
&gt; &gt; +                       return 0;
&gt; &gt; +               }
&gt; &gt; +               /* instructions that has rs1 */
&gt; &gt; +               if (riscv_insn_is_arith_ri(inst) || riscv_insn_is_arith_rr(inst)
&gt; &gt; +                       || riscv_insn_is_load(inst) || riscv_insn_is_store(inst)
&gt; &gt; +                       || riscv_insn_is_amo(inst)) {
&gt; &gt; +                       rs1 = (inst &amp; 0xF8000) &gt;&gt; 15;
&gt; &gt; +                       rs_mask |= 1 &lt;&lt; rs1;
&gt; &gt; +               }
&gt; &gt; +               /* instructions that has rs2 */
&gt; &gt; +               if (riscv_insn_is_arith_rr(inst) || riscv_insn_is_store(inst)
&gt; &gt; +                       || riscv_insn_is_amo(inst)) {
&gt; &gt; +                       rs2 = (inst &amp; 0x1F00000) &gt;&gt; 20;
&gt; &gt; +                       rs_mask |= 1 &lt;&lt; rs2;
&gt; &gt; +               }
&gt; &gt; +               /* instructions that has rd */
&gt; &gt; +               if (riscv_insn_is_lui(inst) || riscv_insn_is_jal(inst)
&gt; &gt; +                       || riscv_insn_is_load(inst) || riscv_insn_is_arith_ri(inst)
&gt; &gt; +                       || riscv_insn_is_arith_rr(inst) || riscv_insn_is_amo(inst)) {
&gt; &gt; +                       rd = (inst &amp; 0xF80) &gt;&gt; 7;
&gt; &gt; +                       if (rd != 0 &amp;&amp; (rs_mask &amp; (1 &lt;&lt; rd)) == 0)
&gt; &gt; +                               return rd;
&gt; &gt; +                       if (riscv_insn_is_jal(inst))
&gt; &gt; +                               return 0;
&gt; &gt; +               }
&gt; &gt; +       }
&gt; &gt; +       return 0;
&gt; &gt; +}
&gt; &gt; +
&gt; &gt; +/*
&gt; &gt; + * If two free registers can be found at the beginning of both
&gt; &gt; + * the start and the end of replaced code, it can be optimized
&gt; &gt; + * Also, in-function jumps need to be checked to make sure that
&gt; &gt; + * there is no jump to the second instruction to be replaced
&gt; &gt; + */
&gt; &gt; +
&gt; &gt; +#define branch_imm(opcode) \
&gt; &gt; +       (((((opcode) &gt;&gt;  8) &amp; 0xf) &lt;&lt;  1) | \
&gt; &gt; +        ((((opcode) &gt;&gt; 25) &amp; 0x3f) &lt;&lt;  5) | \
&gt; &gt; +        ((((opcode) &gt;&gt;  7) &amp; 0x1) &lt;&lt; 11) | \
&gt; &gt; +        ((((opcode) &gt;&gt; 31) &amp; 0x1) &lt;&lt; 12))
&gt; 
&gt; All the numbers in here are quite meaningless to me.
&gt; Could you please use defines here?

This code is borrowed from arch/riscv/kernel/probes/simulate-insn.c
It should have been moved to a shared header, thanks for your reminder.
As for this particular code, it extracts immediate from branch instructions.
The encoding of RISC-V ISA requires this magics :-(

&gt; 
&gt; &gt; +
&gt; &gt; +#define branch_offset(opcode) \
&gt; &gt; +       sign_extend32((branch_imm(opcode)), 12)
&gt; &gt; +
&gt; &gt; +#define jal_imm(opcode) \
&gt; &gt; +       ((((opcode &gt;&gt; 21) &amp; 0x3ff) &lt;&lt; 1) | \
&gt; &gt; +        (((opcode &gt;&gt; 20) &amp; 0x1) &lt;&lt; 11) | \
&gt; &gt; +        (((opcode &gt;&gt; 31) &amp; 0x1) &lt;&lt; 20))
&gt; &gt; +#define jal_offset(opcode) \
&gt; &gt; +       sign_extend32(jal_imm(opcode), 20)
&gt; &gt; +
&gt; &gt; +static int can_optimize(unsigned long paddr, kprobe_opcode_t orig)
&gt; &gt; +{
&gt; &gt; +       unsigned long addr, size = 0, offset = 0, target;
&gt; &gt; +       s32 imm;
&gt; &gt; +       kprobe_opcode_t inst;
&gt; &gt; +
&gt; &gt; +       if (!kallsyms_lookup_size_offset(paddr, &amp;size, &amp;offset))
&gt; &gt; +               return 0;
&gt; &gt; +
&gt; &gt; +       addr = paddr - offset;
&gt; &gt; +
&gt; &gt; +       /* if there are not enough space for our kprobe, skip */
&gt; &gt; +       if (addr + size &lt;= paddr + MAX_OPTIMIZED_LENGTH)
&gt; &gt; +               return 0;
&gt; &gt; +
&gt; &gt; +       while (addr &lt; paddr - offset + size) {
&gt; &gt; +               /* Check from the start until the end */
&gt; &gt; +
&gt; &gt; +               inst = *(kprobe_opcode_t *)addr;
&gt; &gt; +               /* branch and jal is capable of determing target before execution */
&gt; &gt; +               if (riscv_insn_is_branch(inst)) {
&gt; &gt; +                       imm = branch_offset(inst);
&gt; &gt; +                       target = addr + imm;
&gt; &gt; +                       if (target == paddr + RVI_INST_SIZE)
&gt; &gt; +                               return 0;
&gt; &gt; +               } else if (riscv_insn_is_jal(inst)) {
&gt; &gt; +                       imm = jal_offset(inst);
&gt; &gt; +                       target = addr + imm;
&gt; &gt; +                       if (target == paddr + RVI_INST_SIZE)
&gt; &gt; +                               return 0;
&gt; &gt; +               }
&gt; &gt; +               /* RVI is always 4 byte long */
&gt; &gt; +               addr += 4;
&gt; &gt; +       }
&gt; &gt; +
&gt; &gt; +       if (can_kprobe_direct_exec((kprobe_opcode_t *)(paddr + 4)) != INSN_GOOD_NO_SLOT)
&gt; &gt; +               return 0;
&gt; &gt; +
&gt; &gt; +       /* only valid when we find two free registers */
&gt; &gt; +       return __arch_find_free_register((kprobe_opcode_t *) paddr, 1, orig)
&gt; &gt; +               &amp;&amp; __arch_find_free_register((kprobe_opcode_t *) (paddr + JUMP_SIZE), 0, 0);
&gt; &gt; +}
&gt; &gt; +
&gt; &gt; +/* Free optimized instruction slot */
&gt; &gt; +static void
&gt; &gt; +__arch_remove_optimized_kprobe(struct optimized_kprobe *op, int dirty)
&gt; &gt; +{
&gt; &gt; +       if (op-&gt;optinsn.insn) {
&gt; &gt; +               free_optinsn_slot(op-&gt;optinsn.insn, dirty);
&gt; &gt; +               op-&gt;optinsn.insn = NULL;
&gt; &gt; +       }
&gt; &gt; +}
&gt; &gt; +
&gt; &gt; +extern void kprobe_handler(struct pt_regs *regs);
&gt; &gt; +
&gt; &gt; +static void
&gt; &gt; +optimized_callback(struct optimized_kprobe *op, struct pt_regs *regs)
&gt; &gt; +{
&gt; &gt; +       unsigned long flags;
&gt; &gt; +       struct kprobe_ctlblk *kcb;
&gt; &gt; +
&gt; &gt; +       /* Save skipped registers */
&gt; &gt; +       regs-&gt;epc = (unsigned long)op-&gt;kp.addr;
&gt; &gt; +       regs-&gt;orig_a0 = ~0UL;
&gt; &gt; +
&gt; &gt; +       local_irq_save(flags);
&gt; &gt; +       kcb = get_kprobe_ctlblk();
&gt; &gt; +
&gt; &gt; +       if (kprobe_running()) {
&gt; &gt; +               kprobes_inc_nmissed_count(&amp;op-&gt;kp);
&gt; &gt; +       } else {
&gt; &gt; +               __this_cpu_write(current_kprobe, &amp;op-&gt;kp);
&gt; &gt; +               kcb-&gt;kprobe_status = KPROBE_HIT_ACTIVE;
&gt; &gt; +               opt_pre_handler(&amp;op-&gt;kp, regs);
&gt; &gt; +               __this_cpu_write(current_kprobe, NULL);
&gt; &gt; +       }
&gt; &gt; +
&gt; &gt; +       local_irq_restore(flags);
&gt; &gt; +}
&gt; &gt; +
&gt; &gt; +NOKPROBE_SYMBOL(optimized_callback)
&gt; &gt; +static inline kprobe_opcode_t
&gt; &gt; +__arch_patch_rd(kprobe_opcode_t inst, unsigned long val)
&gt; &gt; +{
&gt; &gt; +       inst &amp;= 0xfffff07fUL;
&gt; 
&gt; It'd be nice if these were defines too, so that it was clear to
&gt; the untrained eye what's going on here.
&gt; 
&gt; &gt; +       inst |= val &lt;&lt; 7;
&gt; &gt; +       return inst;
&gt; &gt; +}
&gt; &gt; +
&gt; &gt; +static inline kprobe_opcode_t
&gt; &gt; +__arch_patch_rs1(kprobe_opcode_t inst, unsigned long val)
&gt; &gt; +{
&gt; &gt; +       inst &amp;= 0xfff07fffUL;
&gt; &gt; +       inst |= val &lt;&lt; 15;
&gt; &gt; +       return inst;
&gt; &gt; +}
&gt; &gt; +
&gt; &gt; +static inline kprobe_opcode_t __arch_patch_rs2(kprobe_opcode_t inst,
&gt; &gt; +                                                  unsigned long val)
&gt; &gt; +{
&gt; &gt; +       inst &amp;= 0xfe0fffffUL;
&gt; &gt; +       inst |= val &lt;&lt; 20;
&gt; &gt; +       return inst;
&gt; &gt; +}
&gt; &gt; +
&gt; &gt; +int
&gt; &gt; +arch_prepare_optimized_kprobe(struct optimized_kprobe *op, struct kprobe *orig)
&gt; &gt; +{
&gt; &gt; +       kprobe_opcode_t *code, *detour_slot, *detour_ret_addr;
&gt; &gt; +       long rel_chk;
&gt; &gt; +       unsigned long val;
&gt; &gt; +
&gt; &gt; +       /* not aligned address */
&gt; &gt; +       #ifdef CONFIG_RISCV_ISA_C
&gt; 
&gt; Please use IS_ENABLED() here if you can.
&gt; 
&gt; &gt; +       return -ERANGE;
&gt; &gt; +       #endif
&gt; &gt; +
&gt; &gt; +       if (!can_optimize((unsigned long)orig-&gt;addr, orig-&gt;opcode))
&gt; &gt; +               return -EILSEQ;
&gt; &gt; +
&gt; &gt; +       code = kzalloc(MAX_OPTINSN_SIZE, GFP_KERNEL);
&gt; &gt; +       detour_slot = get_optinsn_slot();
&gt; &gt; +
&gt; &gt; +       if (!code || !detour_slot) {
&gt; &gt; +               kfree(code);
&gt; &gt; +               if (detour_slot)
&gt; &gt; +                       free_optinsn_slot(detour_slot, 0);
&gt; &gt; +               return -ENOMEM;
&gt; &gt; +       }
&gt; &gt; +
&gt; &gt; +       /*
&gt; &gt; +        * Verify if the address gap is within 4GB range, because this uses
&gt; &gt; +        * a auipc+jalr pair.
&gt; &gt; +        */
&gt; &gt; +       rel_chk = (long)detour_slot - (long)orig-&gt;addr + 8;
&gt; &gt; +       if (abs(rel_chk) &gt; 0x7fffffff) {
&gt; 
&gt; GENMASK please.
&gt; 
&gt; &gt; +               /*
&gt; &gt; +                * Different from x86, we free code buf directly instead of
&gt; &gt; +                * calling __arch_remove_optimized_kprobe() because
&gt; &gt; +                * we have not fill any field in op.
&gt; &gt; +                */
&gt; &gt; +               kfree(code);
&gt; &gt; +               free_optinsn_slot(detour_slot, 0);
&gt; &gt; +               return -ERANGE;
&gt; &gt; +       }
&gt; &gt; +
&gt; &gt; +       /* Copy arch-dep-instance from template. */
&gt; &gt; +       memcpy(code, (unsigned long *)optprobe_template_entry,
&gt; &gt; +                  TMPL_END_IDX * sizeof(kprobe_opcode_t));
&gt; &gt; +
&gt; &gt; +       /* Set probe information */
&gt; &gt; +       val = (unsigned long)op;
&gt; &gt; +       *(unsigned long *)(&amp;code[TMPL_VAL_IDX]) = val;
&gt; &gt; +
&gt; &gt; +       /* Set probe function call */
&gt; &gt; +       val = (unsigned long)optimized_callback;
&gt; &gt; +       *(unsigned long *)(&amp;code[TMPL_CALL_IDX]) = val;
&gt; 
&gt; What is the benefit of using val here? I think the comments
&gt; are also pointing out the obvious here, no?
&gt; 
&gt; &gt; +
&gt; &gt; +       /* Adjust epc register */
&gt; 
&gt; The comments here mainly just say what you're doing &amp; not why
&gt; it should be done.
&gt; 
&gt; &gt; +       val = __arch_find_free_register(orig-&gt;addr, 1, orig-&gt;opcode);
&gt; &gt; +       /*
&gt; &gt; +        * patch rs2 of optprobe_template_store_epc
&gt; &gt; +        * after patch, optprobe_template_store_epc will be
&gt; &gt; +        * REG_S free_register, PT_EPC(sp)
&gt; &gt; +        */
&gt; &gt; +       code[TMPL_STORE_EPC_IDX] =
&gt; &gt; +               __arch_patch_rs2(code[TMPL_STORE_EPC_IDX], val);
&gt; &gt; +
&gt; &gt; +       /* Adjust return temp register */
&gt; &gt; +       val =
&gt; &gt; +               __arch_find_free_register(orig-&gt;addr +
&gt; &gt; +                                         JUMP_SIZE / sizeof(kprobe_opcode_t), 0,
&gt; &gt; +                                         0);
&gt; &gt; +       /*
&gt; &gt; +        * patch of optprobe_template_restore_end
&gt; &gt; +        * patch:
&gt; &gt; +        *   rd and imm of auipc
&gt; &gt; +        *   rs1 and imm of jalr
&gt; &gt; +        * after patch:
&gt; &gt; +        *   auipc free_register, %hi(return_address)
&gt; &gt; +        *   jalr x0, %lo(return_address)(free_register)
&gt; &gt; +        *
&gt; &gt; +        */
&gt; &gt; +
&gt; &gt; +       detour_ret_addr = &amp;(detour_slot[optprobe_template_restore_end - optprobe_template_entry]);
&gt; &gt; +
&gt; &gt; +       make_call(detour_ret_addr, (orig-&gt;addr + JUMP_SIZE / sizeof(kprobe_opcode_t)),
&gt; &gt; +                       (code + TMPL_RESTORE_END));
&gt; &gt; +       code[TMPL_RESTORE_END] = __arch_patch_rd(code[TMPL_RESTORE_END], val);
&gt; &gt; +       code[TMPL_RESTORE_END + 1] =
&gt; &gt; +               __arch_patch_rs1(code[TMPL_RESTORE_END + 1], val);
&gt; &gt; +       code[TMPL_RESTORE_END + 1] = __arch_patch_rd(code[TMPL_RESTORE_END + 1], 0);
&gt; &gt; +
&gt; &gt; +       /* Copy insn and have it executed during restore */
&gt; &gt; +
&gt; &gt; +       code[TMPL_RESTORE_ORIGN_INSN] = orig-&gt;opcode;
&gt; &gt; +       code[TMPL_RESTORE_ORIGN_INSN + 1] =
&gt; &gt; +               *(kprobe_opcode_t *) (orig-&gt;addr + 1);
&gt; &gt; +
&gt; &gt; +       if (patch_text_nosync(detour_slot, code, MAX_OPTINSN_SIZE)) {
&gt; &gt; +               free_optinsn_slot(detour_slot, 0);
&gt; &gt; +               kfree(code);
&gt; &gt; +               return -EPERM;
&gt; &gt; +       }
&gt; &gt; +
&gt; &gt; +       kfree(code);
&gt; &gt; +       /* Set op-&gt;optinsn.insn means prepared. */
&gt; &gt; +       op-&gt;optinsn.insn = detour_slot;
&gt; &gt; +       return 0;
&gt; &gt; +}
&gt; &gt; +
&gt; &gt; +void __kprobes arch_optimize_kprobes(struct list_head *oplist)
&gt; &gt; +{
&gt; &gt; +       struct optimized_kprobe *op, *tmp;
&gt; &gt; +       kprobe_opcode_t val;
&gt; &gt; +
&gt; &gt; +       list_for_each_entry_safe(op, tmp, oplist, list) {
&gt; &gt; +               kprobe_opcode_t insn[2];
&gt; &gt; +
&gt; &gt; +               WARN_ON(kprobe_disabled(&amp;op-&gt;kp));
&gt; &gt; +
&gt; &gt; +               /*
&gt; &gt; +                * Backup instructions which will be replaced
&gt; &gt; +                * by jump address
&gt; &gt; +                */
&gt; &gt; +               memcpy(op-&gt;optinsn.copied_insn, op-&gt;kp.addr, JUMP_SIZE);
&gt; &gt; +               op-&gt;optinsn.copied_insn[0] = op-&gt;kp.opcode;
&gt; &gt; +
&gt; &gt; +               make_call(op-&gt;kp.addr, op-&gt;optinsn.insn, insn);
&gt; &gt; +
&gt; &gt; +               // patch insn jalr to have rd as free register
&gt; &gt; +               val = (op-&gt;optinsn.insn[2] &amp; 0x1F00000) &gt;&gt; 20;
&gt; 
&gt; Again, could you use some defines to make this more understandable
&gt; to mere mortals like me? ;)
&gt; 
&gt; &gt; +
&gt; &gt; +               insn[0] = __arch_patch_rd(insn[0], val);
&gt; &gt; +
&gt; &gt; +               insn[1] = __arch_patch_rd(insn[1], val);
&gt; &gt; +               insn[1] = __arch_patch_rs1(insn[1], val);
&gt; &gt; +
&gt; &gt; +               /*
&gt; &gt; +                * Similar to __arch_disarm_kprobe, operations which
&gt; &gt; +                * removing breakpoints must be wrapped by stop_machine
&gt; &gt; +                * to avoid racing.
&gt; &gt; +                */
&gt; &gt; +               WARN_ON(patch_text_nosync(op-&gt;kp.addr, insn, JUMP_SIZE));
&gt; &gt; +
&gt; &gt; +               list_del_init(&amp;op-&gt;list);
&gt; &gt; +       }
&gt; &gt; +}
&gt; &gt; +
&gt; &gt; +static int arch_disarm_kprobe_opt(void *vop)
&gt; &gt; +{
&gt; &gt; +       struct optimized_kprobe *op = (struct optimized_kprobe *)vop;
&gt; &gt; +
&gt; &gt; +       patch_text_nosync(op-&gt;kp.addr, op-&gt;optinsn.copied_insn, JUMP_SIZE);
&gt; &gt; +       arch_arm_kprobe(&amp;op-&gt;kp);
&gt; &gt; +       return 0;
&gt; &gt; +}
&gt; &gt; +
&gt; &gt; +void arch_unoptimize_kprobe(struct optimized_kprobe *op)
&gt; &gt; +{
&gt; &gt; +       arch_disarm_kprobe_opt((void *)op);
&gt; &gt; +}
&gt; &gt; +
&gt; &gt; +/*
&gt; &gt; + * Recover original instructions and breakpoints from relative jumps.
&gt; &gt; + * Caller must call with locking kprobe_mutex.
&gt; &gt; + */
&gt; &gt; +void arch_unoptimize_kprobes(struct list_head *oplist,
&gt; &gt; +                                struct list_head *done_list)
&gt; &gt; +{
&gt; &gt; +       struct optimized_kprobe *op, *tmp;
&gt; &gt; +
&gt; &gt; +       list_for_each_entry_safe(op, tmp, oplist, list) {
&gt; &gt; +               arch_unoptimize_kprobe(op);
&gt; &gt; +               list_move(&amp;op-&gt;list, done_list);
&gt; &gt; +       }
&gt; &gt; +}
&gt; &gt; +
&gt; &gt; +int arch_within_optimized_kprobe(struct optimized_kprobe *op,
&gt; &gt; +                                kprobe_opcode_t *addr)
&gt; &gt; +{
&gt; &gt; +       return (op-&gt;kp.addr &lt;= addr &amp;&amp;
&gt; &gt; +               op-&gt;kp.addr + (JUMP_SIZE / sizeof(kprobe_opcode_t)) &gt; addr);
&gt; &gt; +
&gt; &gt; +}
&gt; &gt; +
&gt; &gt; +void arch_remove_optimized_kprobe(struct optimized_kprobe *op)
&gt; &gt; +{
&gt; &gt; +       __arch_remove_optimized_kprobe(op, 1);
&gt; &gt; +}
&gt; &gt; diff --git a/arch/riscv/kernel/probes/opt_trampoline.S b/arch/riscv/kernel/probes/opt_trampoline.S
&gt; 
&gt; Thanks,
&gt; Conor.
&gt; 
&gt; 
&gt; 
</asm></asm></linux></linux></linux></linux></linux></linux></linux></liaochang1@huawei.com></chenguokai17@mails.ucas.ac.cn>
Xim Aug. 31, 2022, 8:25 a.m. UTC | #7
Hi Conor,

Thanks for your review! I will correct addressed issues in the next version. I have some explanations for others.
Sorry for previous format issues.

> 2022年8月31日 15:24,Conor.Dooley@microchip.com 写道:
> 
> Hey Chen,
> 
> FYI there is a build warning with this patch:
> arch/riscv/kernel/probes/opt.c:34:27: warning: no previous prototype for 'can_kprobe_direct_exec' [-Wmissing-prototypes]
>    34 | enum probe_insn __kprobes can_kprobe_direct_exec(kprobe_opcode_t *addr)
> 
> Also, if you run scripts/checkpatch.pl --strict, it will have a
> few complaints about code style for you too. Other than that, I
> have a few comments for you below:
> 
> On 31/08/2022 05:10, Chen Guokai wrote:
>> [You don't often get email from chenguokai17@mails.ucas.ac.cn. Learn why this is important at https://aka.ms/LearnAboutSenderIdentification ]
>> 
>> EXTERNAL EMAIL: Do not click links or open attachments unless you know the content is safe
>> 
>> This patch adds jump optimization support for RISC-V.
> 
> s/This patch adds/Add
> 
>> 
>> This patch replaces ebreak instructions used by normal kprobes with an
> 
> s/This patch replaces/Replace
> 
>> auipc+jalr instruction pair, at the aim of suppressing the probe-hit
>> overhead.
>> 
>> All known optprobe-capable RISC architectures have been using a single
>> jump or branch instructions while this patch chooses not. RISC-V has a
>> quite limited jump range (4KB or 2MB) for both its branch and jump
>> instructions, which prevent optimizations from supporting probes that
>> spread all over the kernel.
>> 
>> Auipc-jalr instruction pair is introduced with a much wider jump range
>> (4GB), where auipc loads the upper 12 bits to a free register and jalr
>> appends the lower 20 bits to form a 32 bit immediate. Note that returning
>> from probe handler requires another free register. As kprobes can appear
>> almost anywhere inside the kernel, the free register should be found in a
>> generic way, not depending on calling convension or any other regulations.
> 
> convention
> 
>> 
>> The algorithm for finding the free register is inspired by the regiter
> 
> register
> 
>> renaming in modern processors. From the perspective of register renaming, a
>> register could be represented as two different registers if two neighbour
>> instructions both write to it but no one ever reads. Extending this fact,
>> a register is considered to be free if there is no read before its next
>> write in the execution flow. We are free to change its value without
>> interfering normal execution.
>> 
>> Static analysis shows that 51% instructions of the kernel (default config)
>> is capable of being replaced i.e. two free registers can be found at both
>> the start and end of replaced instruction pairs while the replaced
>> instructions can be directly executed.
>> 
>> Signed-off-by: Chen Guokai <chenguokai17@mails.ucas.ac.cn>
>> Signed-off-by: Liao Chang <liaochang1@huawei.com>
> 
> What does Liao have to do with this patch?

Liao is my mentor in OSPP 2022 hold by ISCAS, CAS. He has been taking
an active part in the design and review process of this patch.
P.S. In the future patch version, Huawei related copyright/author info will be discarded.

> 
>> ---
>>  arch/riscv/Kconfig                        |   1 +
>>  arch/riscv/include/asm/ftrace.h           |   2 +-
>>  arch/riscv/include/asm/kprobes.h          |  28 ++
>>  arch/riscv/kernel/probes/Makefile         |   1 +
>>  arch/riscv/kernel/probes/opt.c            | 483 ++++++++++++++++++++++
>>  arch/riscv/kernel/probes/opt_trampoline.S | 133 ++++++
>>  arch/riscv/kernel/probes/simulate-insn.h  |   9 +
>>  7 files changed, 656 insertions(+), 1 deletion(-)
>>  create mode 100644 arch/riscv/kernel/probes/opt.c
>>  create mode 100644 arch/riscv/kernel/probes/opt_trampoline.S
>> 
>> diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
>> index d557cc502..a54e50de2 100644
>> --- a/arch/riscv/Kconfig
>> +++ b/arch/riscv/Kconfig
>> @@ -97,6 +97,7 @@ config RISCV
>>         select HAVE_KPROBES if !XIP_KERNEL
>>         select HAVE_KPROBES_ON_FTRACE if !XIP_KERNEL
>>         select HAVE_KRETPROBES if !XIP_KERNEL
>> +       select HAVE_OPTPROBES if !XIP_KERNEL && !CONFIG_RISCV_ISA_C
>>         select HAVE_MOVE_PMD
>>         select HAVE_MOVE_PUD
>>         select HAVE_PCI
>> diff --git a/arch/riscv/include/asm/ftrace.h b/arch/riscv/include/asm/ftrace.h
>> index 04dad3380..8b17a4c66 100644
>> --- a/arch/riscv/include/asm/ftrace.h
>> +++ b/arch/riscv/include/asm/ftrace.h
>> @@ -35,7 +35,7 @@ struct dyn_arch_ftrace {
>>  };
>>  #endif
>> 
>> -#ifdef CONFIG_DYNAMIC_FTRACE
>> +#if defined(CONFIG_DYNAMIC_FTRACE) || defined(CONFIG_OPTPROBES)
>>  /*
>>   * A general call in RISC-V is a pair of insts:
>>   * 1) auipc: setting high-20 pc-related bits to ra register
>> diff --git a/arch/riscv/include/asm/kprobes.h b/arch/riscv/include/asm/kprobes.h
>> index 217ef89f2..6c5e10709 100644
>> --- a/arch/riscv/include/asm/kprobes.h
>> +++ b/arch/riscv/include/asm/kprobes.h
>> @@ -43,5 +43,33 @@ bool kprobe_single_step_handler(struct pt_regs *regs);
>>  void __kretprobe_trampoline(void);
>>  void __kprobes *trampoline_probe_handler(struct pt_regs *regs);
>> 
>> +#ifdef CONFIG_OPTPROBES
>> +
>> +#define MAX_OPTIMIZED_LENGTH   8
>> +
>> +/* optinsn template addresses */
>> +extern __visible kprobe_opcode_t optprobe_template_entry[];
>> +extern __visible kprobe_opcode_t optprobe_template_val[];
>> +extern __visible kprobe_opcode_t optprobe_template_call[];
>> +extern __visible kprobe_opcode_t optprobe_template_store_epc[];
>> +extern __visible kprobe_opcode_t optprobe_template_end[];
>> +extern __visible kprobe_opcode_t optprobe_template_sub_sp[];
>> +extern __visible kprobe_opcode_t optprobe_template_add_sp[];
>> +extern __visible kprobe_opcode_t optprobe_template_restore_begin[];
>> +extern __visible kprobe_opcode_t optprobe_template_restore_orig_insn[];
>> +extern __visible kprobe_opcode_t optprobe_template_restore_end[];
>> +
>> +#define MAX_OPTINSN_SIZE                               \
>> +               ((unsigned long)optprobe_template_end - \
>> +                (unsigned long)optprobe_template_entry)
>> +
>> +#define MAX_COPIED_INSN 2
>> +struct arch_optimized_insn {
>> +               kprobe_opcode_t copied_insn[MAX_COPIED_INSN];
>> +                       /* detour code buffer */
>> +                       kprobe_opcode_t *insn;
>> +};
>> +#define RVI_INST_SIZE 4
>> +#endif /* CONFIG_OPTPROBES */
>>  #endif /* CONFIG_KPROBES */
>>  #endif /* _ASM_RISCV_KPROBES_H */
>> diff --git a/arch/riscv/kernel/probes/Makefile b/arch/riscv/kernel/probes/Makefile
>> index 7f0840dcc..6255b4600 100644
>> --- a/arch/riscv/kernel/probes/Makefile
>> +++ b/arch/riscv/kernel/probes/Makefile
>> @@ -3,4 +3,5 @@ obj-$(CONFIG_KPROBES)           += kprobes.o decode-insn.o simulate-insn.o
>>  obj-$(CONFIG_KPROBES)          += kprobes_trampoline.o
>>  obj-$(CONFIG_KPROBES_ON_FTRACE)        += ftrace.o
>>  obj-$(CONFIG_UPROBES)          += uprobes.o decode-insn.o simulate-insn.o
>> +obj-$(CONFIG_OPTPROBES)                += opt.o opt_trampoline.o
>>  CFLAGS_REMOVE_simulate-insn.o = $(CC_FLAGS_FTRACE)
>> diff --git a/arch/riscv/kernel/probes/opt.c b/arch/riscv/kernel/probes/opt.c
>> new file mode 100644
>> index 000000000..b9bcf6e12
>> --- /dev/null
>> +++ b/arch/riscv/kernel/probes/opt.c
>> @@ -0,0 +1,483 @@
>> +// SPDX-License-Identifier: GPL-2.0-or-later
>> +/*
>> + *  Kernel Probes Jump Optimization (Optprobes)
>> + *
>> + * Copyright (C) IBM Corporation, 2002, 2004
>> + * Copyright (C) Hitachi Ltd., 2012
>> + * Copyright (C) Huawei Inc., 2014
>> + * Copyright (C) 2022 Huawei Technologies Co., Ltd
>> + * Copyright (C) Guokai Chen, 2022
> 
> Should this not be your University here?

My university does not involve in this work, sorry for any confusion.

> 
>> + * Author: Guokai Chen chenguokai17@mails.ucas.ac.cn
>> + */
>> +
>> +#include <linux/kprobes.h>
>> +#include <linux/jump_label.h>
>> +#include <linux/extable.h>
>> +#include <linux/stop_machine.h>
>> +#include <linux/moduleloader.h>
>> +#include <linux/kprobes.h>
>> +#include <linux/cacheflush.h>
>> +/* for patch_text */
>> +#include <asm/ftrace.h>
>> +#include <asm/patch.h>
>> +#include "simulate-insn.h"
>> +#include "decode-insn.h"
>> +
>> +
>> +#define JUMP_SIZE 8
>> +
>> +/*
>> + * If the probed instruction doesn't use PC and is not system or fence
>> + * we can copy it into template and have it executed directly without
>> + * simulation or emulation.
>> + */
>> +enum probe_insn __kprobes can_kprobe_direct_exec(kprobe_opcode_t *addr)
>> +{
>> +       /*
>> +        * instructions that use PC
>> +        * branch jump auipc
>> +        * instructions that belongs to system or fence
>> +        * ebreak ecall fence.i
> 
> Please use the full columns available to you for comments.
> 
>> +        */
>> +       kprobe_opcode_t inst = *addr;
>> +
>> +       RISCV_INSN_REJECTED(system, inst);
>> +       RISCV_INSN_REJECTED(fence, inst);
>> +       RISCV_INSN_REJECTED(branch, inst);
>> +       RISCV_INSN_REJECTED(jal, inst);
>> +       RISCV_INSN_REJECTED(jalr, inst);
>> +       RISCV_INSN_REJECTED(auipc, inst);
>> +       return INSN_GOOD_NO_SLOT;
>> +}
>> +
>> +#define TMPL_VAL_IDX \
>> +       ((kprobe_opcode_t *)optprobe_template_val - \
>> +        (kprobe_opcode_t *)optprobe_template_entry)
>> +#define TMPL_CALL_IDX \
>> +       ((kprobe_opcode_t *)optprobe_template_call - \
>> +        (kprobe_opcode_t *)optprobe_template_entry)
>> +#define TMPL_STORE_EPC_IDX \
>> +       ((kprobe_opcode_t *)optprobe_template_store_epc - \
>> +        (kprobe_opcode_t *)optprobe_template_entry)
>> +#define TMPL_END_IDX \
>> +       ((kprobe_opcode_t *)optprobe_template_end - \
>> +        (kprobe_opcode_t *)optprobe_template_entry)
>> +#define TMPL_ADD_SP \
>> +       ((kprobe_opcode_t *)optprobe_template_add_sp - \
>> +        (kprobe_opcode_t *)optprobe_template_entry)
>> +#define TMPL_SUB_SP \
>> +       ((kprobe_opcode_t *)optprobe_template_sub_sp - \
>> +        (kprobe_opcode_t *)optprobe_template_entry)
>> +#define TMPL_RESTORE_BEGIN \
>> +       ((kprobe_opcode_t *)optprobe_template_restore_begin - \
>> +        (kprobe_opcode_t *)optprobe_template_entry)
>> +#define TMPL_RESTORE_ORIGN_INSN \
>> +       ((kprobe_opcode_t *)optprobe_template_restore_orig_insn - \
>> +        (kprobe_opcode_t *)optprobe_template_entry)
>> +#define TMPL_RESTORE_RET \
>> +       ((kprobe_opcode_t *)optprobe_template_ret - \
>> +        (kprobe_opcode_t *)optprobe_template_entry)
>> +#define TMPL_RESTORE_END \
>> +       ((kprobe_opcode_t *)optprobe_template_restore_end - \
>> +        (kprobe_opcode_t *)optprobe_template_entry)
>> +
>> +#define FREE_SEARCH_DEPTH 32
>> +
>> +/*
>> + * RISC-V can always optimize an instruction if not null
>> + */
>> +int arch_prepared_optinsn(struct arch_optimized_insn *optinsn)
>> +{
>> +       return optinsn->insn != NULL;
>> +}
>> +
>> +/*
>> + * In RISC-V ISA, jal has a quite limited jump range
>> + * To achive adequate range, auipc+jalr is utilized
>> + * It requires a replacement of two instructions
>> + * thus next instruction should be examined
> 
> Please use the full columns available to you for comments.
> 
>> + */
>> +int arch_check_optimized_kprobe(struct optimized_kprobe *op)
>> +{
>> +       struct kprobe *p;
>> +
>> +       p = get_kprobe(op->kp.addr + 4);
> 
> Where does this 4 come from?
> 
>> +       if (p && !kprobe_disabled(p))
>> +               return -EEXIST;
>> +
>> +       return 0;
>> +}
>> +
>> +/*
>> + * In RISC-V ISA, auipc+jalr requires a free register
>> + * Inspired by register renaming in OoO processor,
>> + * we search backwards to find such a register that:
>> + * not previously used as a source register &&
>> + * is used as a destination register &&
>> + * before any branch/jump instruction
> 
> Ditto re comment width.
> 
>> + */
>> +static int
>> +__arch_find_free_register(kprobe_opcode_t *addr, int use_orig,
>> +                         kprobe_opcode_t orig)
>> +{
>> +       int i, rs1, rs2, rd;
>> +       kprobe_opcode_t inst;
>> +       int rs_mask = 0;
>> +
>> +       for (i = 0; i < FREE_SEARCH_DEPTH; i++) {
>> +               if (i == 0 && use_orig)
>> +                       inst = orig;
>> +               else
>> +                       inst = *(kprobe_opcode_t *) (addr + i);
>> +               /*
>> +                * Detailed handling:
>> +                * jalr/branch/system: must have reached the end, no result
>> +                * jal: if not chosen as result, must have reached the end
>> +                * arithmetic/load/store: record their rs
>> +                * jal/arithmetic/load: if proper rd found, return result
>> +                * others (float point/vector): ignore
>> +                */
>> +               if (riscv_insn_is_branch(inst) || riscv_insn_is_jalr(inst)
>> +                       || riscv_insn_is_system(inst)) {
>> +                       return 0;
>> +               }
>> +               /* instructions that has rs1 */
>> +               if (riscv_insn_is_arith_ri(inst) || riscv_insn_is_arith_rr(inst)
>> +                       || riscv_insn_is_load(inst) || riscv_insn_is_store(inst)
>> +                       || riscv_insn_is_amo(inst)) {
>> +                       rs1 = (inst & 0xF8000) >> 15;
>> +                       rs_mask |= 1 << rs1;
>> +               }
>> +               /* instructions that has rs2 */
>> +               if (riscv_insn_is_arith_rr(inst) || riscv_insn_is_store(inst)
>> +                       || riscv_insn_is_amo(inst)) {
>> +                       rs2 = (inst & 0x1F00000) >> 20;
>> +                       rs_mask |= 1 << rs2;
>> +               }
>> +               /* instructions that has rd */
>> +               if (riscv_insn_is_lui(inst) || riscv_insn_is_jal(inst)
>> +                       || riscv_insn_is_load(inst) || riscv_insn_is_arith_ri(inst)
>> +                       || riscv_insn_is_arith_rr(inst) || riscv_insn_is_amo(inst)) {
>> +                       rd = (inst & 0xF80) >> 7;
>> +                       if (rd != 0 && (rs_mask & (1 << rd)) == 0)
>> +                               return rd;
>> +                       if (riscv_insn_is_jal(inst))
>> +                               return 0;
>> +               }
>> +       }
>> +       return 0;
>> +}
>> +
>> +/*
>> + * If two free registers can be found at the beginning of both
>> + * the start and the end of replaced code, it can be optimized
>> + * Also, in-function jumps need to be checked to make sure that
>> + * there is no jump to the second instruction to be replaced
>> + */
>> +
>> +#define branch_imm(opcode) \
>> +       (((((opcode) >>  8) & 0xf) <<  1) | \
>> +        ((((opcode) >> 25) & 0x3f) <<  5) | \
>> +        ((((opcode) >>  7) & 0x1) << 11) | \
>> +        ((((opcode) >> 31) & 0x1) << 12))
> 
> All the numbers in here are quite meaningless to me.
> Could you please use defines here?

This code is borrowed from arch/riscv/kernel/probes/simulate-insn.c
It should have been moved to a shared header, thanks for your reminder.
As for this particular code, it extracts immediate from branch instructions.
The encoding of RISC-V ISA requires this magics :-(

> 
>> +
>> +#define branch_offset(opcode) \
>> +       sign_extend32((branch_imm(opcode)), 12)
>> +
>> +#define jal_imm(opcode) \
>> +       ((((opcode >> 21) & 0x3ff) << 1) | \
>> +        (((opcode >> 20) & 0x1) << 11) | \
>> +        (((opcode >> 31) & 0x1) << 20))
>> +#define jal_offset(opcode) \
>> +       sign_extend32(jal_imm(opcode), 20)
>> +
>> +static int can_optimize(unsigned long paddr, kprobe_opcode_t orig)
>> +{
>> +       unsigned long addr, size = 0, offset = 0, target;
>> +       s32 imm;
>> +       kprobe_opcode_t inst;
>> +
>> +       if (!kallsyms_lookup_size_offset(paddr, &size, &offset))
>> +               return 0;
>> +
>> +       addr = paddr - offset;
>> +
>> +       /* if there are not enough space for our kprobe, skip */
>> +       if (addr + size <= paddr + MAX_OPTIMIZED_LENGTH)
>> +               return 0;
>> +
>> +       while (addr < paddr - offset + size) {
>> +               /* Check from the start until the end */
>> +
>> +               inst = *(kprobe_opcode_t *)addr;
>> +               /* branch and jal is capable of determing target before execution */
>> +               if (riscv_insn_is_branch(inst)) {
>> +                       imm = branch_offset(inst);
>> +                       target = addr + imm;
>> +                       if (target == paddr + RVI_INST_SIZE)
>> +                               return 0;
>> +               } else if (riscv_insn_is_jal(inst)) {
>> +                       imm = jal_offset(inst);
>> +                       target = addr + imm;
>> +                       if (target == paddr + RVI_INST_SIZE)
>> +                               return 0;
>> +               }
>> +               /* RVI is always 4 byte long */
>> +               addr += 4;
>> +       }
>> +
>> +       if (can_kprobe_direct_exec((kprobe_opcode_t *)(paddr + 4)) != INSN_GOOD_NO_SLOT)
>> +               return 0;
>> +
>> +       /* only valid when we find two free registers */
>> +       return __arch_find_free_register((kprobe_opcode_t *) paddr, 1, orig)
>> +               && __arch_find_free_register((kprobe_opcode_t *) (paddr + JUMP_SIZE), 0, 0);
>> +}
>> +
>> +/* Free optimized instruction slot */
>> +static void
>> +__arch_remove_optimized_kprobe(struct optimized_kprobe *op, int dirty)
>> +{
>> +       if (op->optinsn.insn) {
>> +               free_optinsn_slot(op->optinsn.insn, dirty);
>> +               op->optinsn.insn = NULL;
>> +       }
>> +}
>> +
>> +extern void kprobe_handler(struct pt_regs *regs);
>> +
>> +static void
>> +optimized_callback(struct optimized_kprobe *op, struct pt_regs *regs)
>> +{
>> +       unsigned long flags;
>> +       struct kprobe_ctlblk *kcb;
>> +
>> +       /* Save skipped registers */
>> +       regs->epc = (unsigned long)op->kp.addr;
>> +       regs->orig_a0 = ~0UL;
>> +
>> +       local_irq_save(flags);
>> +       kcb = get_kprobe_ctlblk();
>> +
>> +       if (kprobe_running()) {
>> +               kprobes_inc_nmissed_count(&op->kp);
>> +       } else {
>> +               __this_cpu_write(current_kprobe, &op->kp);
>> +               kcb->kprobe_status = KPROBE_HIT_ACTIVE;
>> +               opt_pre_handler(&op->kp, regs);
>> +               __this_cpu_write(current_kprobe, NULL);
>> +       }
>> +
>> +       local_irq_restore(flags);
>> +}
>> +
>> +NOKPROBE_SYMBOL(optimized_callback)
>> +static inline kprobe_opcode_t
>> +__arch_patch_rd(kprobe_opcode_t inst, unsigned long val)
>> +{
>> +       inst &= 0xfffff07fUL;
> 
> It'd be nice if these were defines too, so that it was clear to
> the untrained eye what's going on here.
> 
>> +       inst |= val << 7;
>> +       return inst;
>> +}
>> +
>> +static inline kprobe_opcode_t
>> +__arch_patch_rs1(kprobe_opcode_t inst, unsigned long val)
>> +{
>> +       inst &= 0xfff07fffUL;
>> +       inst |= val << 15;
>> +       return inst;
>> +}
>> +
>> +static inline kprobe_opcode_t __arch_patch_rs2(kprobe_opcode_t inst,
>> +                                                  unsigned long val)
>> +{
>> +       inst &= 0xfe0fffffUL;
>> +       inst |= val << 20;
>> +       return inst;
>> +}
>> +
>> +int
>> +arch_prepare_optimized_kprobe(struct optimized_kprobe *op, struct kprobe *orig)
>> +{
>> +       kprobe_opcode_t *code, *detour_slot, *detour_ret_addr;
>> +       long rel_chk;
>> +       unsigned long val;
>> +
>> +       /* not aligned address */
>> +       #ifdef CONFIG_RISCV_ISA_C
> 
> Please use IS_ENABLED() here if you can.
> 
>> +       return -ERANGE;
>> +       #endif
>> +
>> +       if (!can_optimize((unsigned long)orig->addr, orig->opcode))
>> +               return -EILSEQ;
>> +
>> +       code = kzalloc(MAX_OPTINSN_SIZE, GFP_KERNEL);
>> +       detour_slot = get_optinsn_slot();
>> +
>> +       if (!code || !detour_slot) {
>> +               kfree(code);
>> +               if (detour_slot)
>> +                       free_optinsn_slot(detour_slot, 0);
>> +               return -ENOMEM;
>> +       }
>> +
>> +       /*
>> +        * Verify if the address gap is within 4GB range, because this uses
>> +        * a auipc+jalr pair.
>> +        */
>> +       rel_chk = (long)detour_slot - (long)orig->addr + 8;
>> +       if (abs(rel_chk) > 0x7fffffff) {
> 
> GENMASK please.
> 
>> +               /*
>> +                * Different from x86, we free code buf directly instead of
>> +                * calling __arch_remove_optimized_kprobe() because
>> +                * we have not fill any field in op.
>> +                */
>> +               kfree(code);
>> +               free_optinsn_slot(detour_slot, 0);
>> +               return -ERANGE;
>> +       }
>> +
>> +       /* Copy arch-dep-instance from template. */
>> +       memcpy(code, (unsigned long *)optprobe_template_entry,
>> +                  TMPL_END_IDX * sizeof(kprobe_opcode_t));
>> +
>> +       /* Set probe information */
>> +       val = (unsigned long)op;
>> +       *(unsigned long *)(&code[TMPL_VAL_IDX]) = val;
>> +
>> +       /* Set probe function call */
>> +       val = (unsigned long)optimized_callback;
>> +       *(unsigned long *)(&code[TMPL_CALL_IDX]) = val;
> 
> What is the benefit of using val here? I think the comments
> are also pointing out the obvious here, no?
> 
>> +
>> +       /* Adjust epc register */
> 
> The comments here mainly just say what you're doing & not why
> it should be done.
> 
>> +       val = __arch_find_free_register(orig->addr, 1, orig->opcode);
>> +       /*
>> +        * patch rs2 of optprobe_template_store_epc
>> +        * after patch, optprobe_template_store_epc will be
>> +        * REG_S free_register, PT_EPC(sp)
>> +        */
>> +       code[TMPL_STORE_EPC_IDX] =
>> +               __arch_patch_rs2(code[TMPL_STORE_EPC_IDX], val);
>> +
>> +       /* Adjust return temp register */
>> +       val =
>> +               __arch_find_free_register(orig->addr +
>> +                                         JUMP_SIZE / sizeof(kprobe_opcode_t), 0,
>> +                                         0);
>> +       /*
>> +        * patch of optprobe_template_restore_end
>> +        * patch:
>> +        *   rd and imm of auipc
>> +        *   rs1 and imm of jalr
>> +        * after patch:
>> +        *   auipc free_register, %hi(return_address)
>> +        *   jalr x0, %lo(return_address)(free_register)
>> +        *
>> +        */
>> +
>> +       detour_ret_addr = &(detour_slot[optprobe_template_restore_end - optprobe_template_entry]);
>> +
>> +       make_call(detour_ret_addr, (orig->addr + JUMP_SIZE / sizeof(kprobe_opcode_t)),
>> +                       (code + TMPL_RESTORE_END));
>> +       code[TMPL_RESTORE_END] = __arch_patch_rd(code[TMPL_RESTORE_END], val);
>> +       code[TMPL_RESTORE_END + 1] =
>> +               __arch_patch_rs1(code[TMPL_RESTORE_END + 1], val);
>> +       code[TMPL_RESTORE_END + 1] = __arch_patch_rd(code[TMPL_RESTORE_END + 1], 0);
>> +
>> +       /* Copy insn and have it executed during restore */
>> +
>> +       code[TMPL_RESTORE_ORIGN_INSN] = orig->opcode;
>> +       code[TMPL_RESTORE_ORIGN_INSN + 1] =
>> +               *(kprobe_opcode_t *) (orig->addr + 1);
>> +
>> +       if (patch_text_nosync(detour_slot, code, MAX_OPTINSN_SIZE)) {
>> +               free_optinsn_slot(detour_slot, 0);
>> +               kfree(code);
>> +               return -EPERM;
>> +       }
>> +
>> +       kfree(code);
>> +       /* Set op->optinsn.insn means prepared. */
>> +       op->optinsn.insn = detour_slot;
>> +       return 0;
>> +}
>> +
>> +void __kprobes arch_optimize_kprobes(struct list_head *oplist)
>> +{
>> +       struct optimized_kprobe *op, *tmp;
>> +       kprobe_opcode_t val;
>> +
>> +       list_for_each_entry_safe(op, tmp, oplist, list) {
>> +               kprobe_opcode_t insn[2];
>> +
>> +               WARN_ON(kprobe_disabled(&op->kp));
>> +
>> +               /*
>> +                * Backup instructions which will be replaced
>> +                * by jump address
>> +                */
>> +               memcpy(op->optinsn.copied_insn, op->kp.addr, JUMP_SIZE);
>> +               op->optinsn.copied_insn[0] = op->kp.opcode;
>> +
>> +               make_call(op->kp.addr, op->optinsn.insn, insn);
>> +
>> +               // patch insn jalr to have rd as free register
>> +               val = (op->optinsn.insn[2] & 0x1F00000) >> 20;
> 
> Again, could you use some defines to make this more understandable
> to mere mortals like me? ;)
> 
>> +
>> +               insn[0] = __arch_patch_rd(insn[0], val);
>> +
>> +               insn[1] = __arch_patch_rd(insn[1], val);
>> +               insn[1] = __arch_patch_rs1(insn[1], val);
>> +
>> +               /*
>> +                * Similar to __arch_disarm_kprobe, operations which
>> +                * removing breakpoints must be wrapped by stop_machine
>> +                * to avoid racing.
>> +                */
>> +               WARN_ON(patch_text_nosync(op->kp.addr, insn, JUMP_SIZE));
>> +
>> +               list_del_init(&op->list);
>> +       }
>> +}
>> +
>> +static int arch_disarm_kprobe_opt(void *vop)
>> +{
>> +       struct optimized_kprobe *op = (struct optimized_kprobe *)vop;
>> +
>> +       patch_text_nosync(op->kp.addr, op->optinsn.copied_insn, JUMP_SIZE);
>> +       arch_arm_kprobe(&op->kp);
>> +       return 0;
>> +}
>> +
>> +void arch_unoptimize_kprobe(struct optimized_kprobe *op)
>> +{
>> +       arch_disarm_kprobe_opt((void *)op);
>> +}
>> +
>> +/*
>> + * Recover original instructions and breakpoints from relative jumps.
>> + * Caller must call with locking kprobe_mutex.
>> + */
>> +void arch_unoptimize_kprobes(struct list_head *oplist,
>> +                                struct list_head *done_list)
>> +{
>> +       struct optimized_kprobe *op, *tmp;
>> +
>> +       list_for_each_entry_safe(op, tmp, oplist, list) {
>> +               arch_unoptimize_kprobe(op);
>> +               list_move(&op->list, done_list);
>> +       }
>> +}
>> +
>> +int arch_within_optimized_kprobe(struct optimized_kprobe *op,
>> +                                kprobe_opcode_t *addr)
>> +{
>> +       return (op->kp.addr <= addr &&
>> +               op->kp.addr + (JUMP_SIZE / sizeof(kprobe_opcode_t)) > addr);
>> +
>> +}
>> +
>> +void arch_remove_optimized_kprobe(struct optimized_kprobe *op)
>> +{
>> +       __arch_remove_optimized_kprobe(op, 1);
>> +}
>> diff --git a/arch/riscv/kernel/probes/opt_trampoline.S b/arch/riscv/kernel/probes/opt_trampoline.S
> 
> Thanks,
> Conor.
Liao, Chang Aug. 31, 2022, 8:28 a.m. UTC | #8
在 2022/8/31 15:52, Conor.Dooley@microchip.com 写道:
> On 31/08/2022 08:49, liaochang (A) wrote:
>> EXTERNAL EMAIL: Do not click links or open attachments unless you know the content is safe
>>
>> 在 2022/8/31 15:24, Conor.Dooley@microchip.com 写道:
>>> Hey Chen,
>>>
>>> FYI there is a build warning with this patch:
>>> arch/riscv/kernel/probes/opt.c:34:27: warning: no previous prototype for 'can_kprobe_direct_exec' [-Wmissing-prototypes]
>>>      34 | enum probe_insn __kprobes can_kprobe_direct_exec(kprobe_opcode_t *addr)
>>>
>>> Also, if you run scripts/checkpatch.pl --strict, it will have a
>>> few complaints about code style for you too. Other than that, I
>>> have a few comments for you below:
>>>
>>> On 31/08/2022 05:10, Chen Guokai wrote:
>>>> [You don't often get email from chenguokai17@mails.ucas.ac.cn. Learn why this is important at https://aka.ms/LearnAboutSenderIdentification ]
>>>>
>>>> EXTERNAL EMAIL: Do not click links or open attachments unless you know the content is safe
>>>>
>>>> This patch adds jump optimization support for RISC-V.
>>>
>>> s/This patch adds/Add
>>>
>>>>
>>>> This patch replaces ebreak instructions used by normal kprobes with an
>>>
>>> s/This patch replaces/Replace
>>>
>>>> auipc+jalr instruction pair, at the aim of suppressing the probe-hit
>>>> overhead.
>>>>
>>>> All known optprobe-capable RISC architectures have been using a single
>>>> jump or branch instructions while this patch chooses not. RISC-V has a
>>>> quite limited jump range (4KB or 2MB) for both its branch and jump
>>>> instructions, which prevent optimizations from supporting probes that
>>>> spread all over the kernel.
>>>>
>>>> Auipc-jalr instruction pair is introduced with a much wider jump range
>>>> (4GB), where auipc loads the upper 12 bits to a free register and jalr
>>>> appends the lower 20 bits to form a 32 bit immediate. Note that returning
>>>> from probe handler requires another free register. As kprobes can appear
>>>> almost anywhere inside the kernel, the free register should be found in a
>>>> generic way, not depending on calling convension or any other regulations.
>>>
>>> convention
>>>
>>>>
>>>> The algorithm for finding the free register is inspired by the regiter
>>>
>>> register
>>>
>>>> renaming in modern processors. From the perspective of register renaming, a
>>>> register could be represented as two different registers if two neighbour
>>>> instructions both write to it but no one ever reads. Extending this fact,
>>>> a register is considered to be free if there is no read before its next
>>>> write in the execution flow. We are free to change its value without
>>>> interfering normal execution.
>>>>
>>>> Static analysis shows that 51% instructions of the kernel (default config)
>>>> is capable of being replaced i.e. two free registers can be found at both
>>>> the start and end of replaced instruction pairs while the replaced
>>>> instructions can be directly executed.
>>>>
>>>> Signed-off-by: Chen Guokai <chenguokai17@mails.ucas.ac.cn>
>>>> Signed-off-by: Liao Chang <liaochang1@huawei.com>
>>>
>>> What does Liao have to do with this patch?
>> I just provide some suggestion to Chen Guokai during development ;)
>> please remove my info from Signed-off-by tag.
> 
> Does that mean that the "copyright 2022 Huawei" is also not accurate?
Inaccurate, please remove "copyright 2022 Huawei",thanks for checking.

>   
> 
>
Conor Dooley Aug. 31, 2022, 8:34 a.m. UTC | #9
On 31/08/2022 09:25, Xim wrote:
> EXTERNAL EMAIL: Do not click links or open attachments unless you know the content is safe
> 
> Hi Conor,
> 
> Thanks for your review! I will correct addressed issues in the next version.
> I have some explanations for others.
> Sorry for previous format issues.
> 
>> 2022年8月31日 15:24,Conor.Dooley@microchip.com 写道:
>>

>>>
>>> Signed-off-by: Chen Guokai <chenguokai17@mails.ucas.ac.cn>
>>> Signed-off-by: Liao Chang <liaochang1@huawei.com>
>>
>> What does Liao have to do with this patch?
> 
> Liao is my mentor in OSPP 2022 hold by ISCAS, CAS. He has been taking
> an active part in the design and review process of this patch.
> P.S. In the future patch version, Huawei related copyright/author info will be discarded.
> 
>>

>>> --- /dev/null
>>> +++ b/arch/riscv/kernel/probes/opt.c
>>> @@ -0,0 +1,483 @@
>>> +// SPDX-License-Identifier: GPL-2.0-or-later
>>> +/*
>>> + *  Kernel Probes Jump Optimization (Optprobes)
>>> + *
>>> + * Copyright (C) IBM Corporation, 2002, 2004
>>> + * Copyright (C) Hitachi Ltd., 2012
>>> + * Copyright (C) Huawei Inc., 2014
>>> + * Copyright (C) 2022 Huawei Technologies Co., Ltd
>>> + * Copyright (C) Guokai Chen, 2022
>>
>> Should this not be your University here?
> 
> My university does not involve in this work, sorry for any confusion.

Ah Apologies - I think  got confused between ISCAS and UCAS!
diff mbox series

Patch

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index d557cc502..a54e50de2 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -97,6 +97,7 @@  config RISCV
 	select HAVE_KPROBES if !XIP_KERNEL
 	select HAVE_KPROBES_ON_FTRACE if !XIP_KERNEL
 	select HAVE_KRETPROBES if !XIP_KERNEL
+	select HAVE_OPTPROBES if !XIP_KERNEL && !CONFIG_RISCV_ISA_C
 	select HAVE_MOVE_PMD
 	select HAVE_MOVE_PUD
 	select HAVE_PCI
diff --git a/arch/riscv/include/asm/ftrace.h b/arch/riscv/include/asm/ftrace.h
index 04dad3380..8b17a4c66 100644
--- a/arch/riscv/include/asm/ftrace.h
+++ b/arch/riscv/include/asm/ftrace.h
@@ -35,7 +35,7 @@  struct dyn_arch_ftrace {
 };
 #endif
 
-#ifdef CONFIG_DYNAMIC_FTRACE
+#if defined(CONFIG_DYNAMIC_FTRACE) || defined(CONFIG_OPTPROBES)
 /*
  * A general call in RISC-V is a pair of insts:
  * 1) auipc: setting high-20 pc-related bits to ra register
diff --git a/arch/riscv/include/asm/kprobes.h b/arch/riscv/include/asm/kprobes.h
index 217ef89f2..6c5e10709 100644
--- a/arch/riscv/include/asm/kprobes.h
+++ b/arch/riscv/include/asm/kprobes.h
@@ -43,5 +43,33 @@  bool kprobe_single_step_handler(struct pt_regs *regs);
 void __kretprobe_trampoline(void);
 void __kprobes *trampoline_probe_handler(struct pt_regs *regs);
 
+#ifdef CONFIG_OPTPROBES
+
+#define MAX_OPTIMIZED_LENGTH	8
+
+/* optinsn template addresses */
+extern __visible kprobe_opcode_t optprobe_template_entry[];
+extern __visible kprobe_opcode_t optprobe_template_val[];
+extern __visible kprobe_opcode_t optprobe_template_call[];
+extern __visible kprobe_opcode_t optprobe_template_store_epc[];
+extern __visible kprobe_opcode_t optprobe_template_end[];
+extern __visible kprobe_opcode_t optprobe_template_sub_sp[];
+extern __visible kprobe_opcode_t optprobe_template_add_sp[];
+extern __visible kprobe_opcode_t optprobe_template_restore_begin[];
+extern __visible kprobe_opcode_t optprobe_template_restore_orig_insn[];
+extern __visible kprobe_opcode_t optprobe_template_restore_end[];
+
+#define MAX_OPTINSN_SIZE				\
+		((unsigned long)optprobe_template_end -	\
+		 (unsigned long)optprobe_template_entry)
+
+#define MAX_COPIED_INSN 2
+struct arch_optimized_insn {
+		kprobe_opcode_t copied_insn[MAX_COPIED_INSN];
+			/* detour code buffer */
+			kprobe_opcode_t *insn;
+};
+#define RVI_INST_SIZE 4
+#endif /* CONFIG_OPTPROBES */
 #endif /* CONFIG_KPROBES */
 #endif /* _ASM_RISCV_KPROBES_H */
diff --git a/arch/riscv/kernel/probes/Makefile b/arch/riscv/kernel/probes/Makefile
index 7f0840dcc..6255b4600 100644
--- a/arch/riscv/kernel/probes/Makefile
+++ b/arch/riscv/kernel/probes/Makefile
@@ -3,4 +3,5 @@  obj-$(CONFIG_KPROBES)		+= kprobes.o decode-insn.o simulate-insn.o
 obj-$(CONFIG_KPROBES)		+= kprobes_trampoline.o
 obj-$(CONFIG_KPROBES_ON_FTRACE)	+= ftrace.o
 obj-$(CONFIG_UPROBES)		+= uprobes.o decode-insn.o simulate-insn.o
+obj-$(CONFIG_OPTPROBES)		+= opt.o opt_trampoline.o
 CFLAGS_REMOVE_simulate-insn.o = $(CC_FLAGS_FTRACE)
diff --git a/arch/riscv/kernel/probes/opt.c b/arch/riscv/kernel/probes/opt.c
new file mode 100644
index 000000000..b9bcf6e12
--- /dev/null
+++ b/arch/riscv/kernel/probes/opt.c
@@ -0,0 +1,483 @@ 
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ *  Kernel Probes Jump Optimization (Optprobes)
+ *
+ * Copyright (C) IBM Corporation, 2002, 2004
+ * Copyright (C) Hitachi Ltd., 2012
+ * Copyright (C) Huawei Inc., 2014
+ * Copyright (C) 2022 Huawei Technologies Co., Ltd
+ * Copyright (C) Guokai Chen, 2022
+ * Author: Guokai Chen chenguokai17@mails.ucas.ac.cn
+ */
+
+#include <linux/kprobes.h>
+#include <linux/jump_label.h>
+#include <linux/extable.h>
+#include <linux/stop_machine.h>
+#include <linux/moduleloader.h>
+#include <linux/kprobes.h>
+#include <linux/cacheflush.h>
+/* for patch_text */
+#include <asm/ftrace.h>
+#include <asm/patch.h>
+#include "simulate-insn.h"
+#include "decode-insn.h"
+
+
+#define JUMP_SIZE 8
+
+/*
+ * If the probed instruction doesn't use PC and is not system or fence
+ * we can copy it into template and have it executed directly without
+ * simulation or emulation.
+ */
+enum probe_insn __kprobes can_kprobe_direct_exec(kprobe_opcode_t *addr)
+{
+	/*
+	 * instructions that use PC
+	 * branch jump auipc
+	 * instructions that belongs to system or fence
+	 * ebreak ecall fence.i
+	 */
+	kprobe_opcode_t inst = *addr;
+
+	RISCV_INSN_REJECTED(system, inst);
+	RISCV_INSN_REJECTED(fence, inst);
+	RISCV_INSN_REJECTED(branch, inst);
+	RISCV_INSN_REJECTED(jal, inst);
+	RISCV_INSN_REJECTED(jalr, inst);
+	RISCV_INSN_REJECTED(auipc, inst);
+	return INSN_GOOD_NO_SLOT;
+}
+
+#define TMPL_VAL_IDX \
+	((kprobe_opcode_t *)optprobe_template_val - \
+	 (kprobe_opcode_t *)optprobe_template_entry)
+#define TMPL_CALL_IDX \
+	((kprobe_opcode_t *)optprobe_template_call - \
+	 (kprobe_opcode_t *)optprobe_template_entry)
+#define TMPL_STORE_EPC_IDX \
+	((kprobe_opcode_t *)optprobe_template_store_epc - \
+	 (kprobe_opcode_t *)optprobe_template_entry)
+#define TMPL_END_IDX \
+	((kprobe_opcode_t *)optprobe_template_end - \
+	 (kprobe_opcode_t *)optprobe_template_entry)
+#define TMPL_ADD_SP \
+	((kprobe_opcode_t *)optprobe_template_add_sp - \
+	 (kprobe_opcode_t *)optprobe_template_entry)
+#define TMPL_SUB_SP \
+	((kprobe_opcode_t *)optprobe_template_sub_sp - \
+	 (kprobe_opcode_t *)optprobe_template_entry)
+#define TMPL_RESTORE_BEGIN \
+	((kprobe_opcode_t *)optprobe_template_restore_begin - \
+	 (kprobe_opcode_t *)optprobe_template_entry)
+#define TMPL_RESTORE_ORIGN_INSN \
+	((kprobe_opcode_t *)optprobe_template_restore_orig_insn - \
+	 (kprobe_opcode_t *)optprobe_template_entry)
+#define TMPL_RESTORE_RET \
+	((kprobe_opcode_t *)optprobe_template_ret - \
+	 (kprobe_opcode_t *)optprobe_template_entry)
+#define TMPL_RESTORE_END \
+	((kprobe_opcode_t *)optprobe_template_restore_end - \
+	 (kprobe_opcode_t *)optprobe_template_entry)
+
+#define FREE_SEARCH_DEPTH 32
+
+/*
+ * RISC-V can always optimize an instruction if not null
+ */
+int arch_prepared_optinsn(struct arch_optimized_insn *optinsn)
+{
+	return optinsn->insn != NULL;
+}
+
+/*
+ * In RISC-V ISA, jal has a quite limited jump range
+ * To achive adequate range, auipc+jalr is utilized
+ * It requires a replacement of two instructions
+ * thus next instruction should be examined
+ */
+int arch_check_optimized_kprobe(struct optimized_kprobe *op)
+{
+	struct kprobe *p;
+
+	p = get_kprobe(op->kp.addr + 4);
+	if (p && !kprobe_disabled(p))
+		return -EEXIST;
+
+	return 0;
+}
+
+/*
+ * In RISC-V ISA, auipc+jalr requires a free register
+ * Inspired by register renaming in OoO processor,
+ * we search backwards to find such a register that:
+ * not previously used as a source register &&
+ * is used as a destination register &&
+ * before any branch/jump instruction
+ */
+static int
+__arch_find_free_register(kprobe_opcode_t *addr, int use_orig,
+			  kprobe_opcode_t orig)
+{
+	int i, rs1, rs2, rd;
+	kprobe_opcode_t inst;
+	int rs_mask = 0;
+
+	for (i = 0; i < FREE_SEARCH_DEPTH; i++) {
+		if (i == 0 && use_orig)
+			inst = orig;
+		else
+			inst = *(kprobe_opcode_t *) (addr + i);
+		/*
+		 * Detailed handling:
+		 * jalr/branch/system: must have reached the end, no result
+		 * jal: if not chosen as result, must have reached the end
+		 * arithmetic/load/store: record their rs
+		 * jal/arithmetic/load: if proper rd found, return result
+		 * others (float point/vector): ignore
+		 */
+		if (riscv_insn_is_branch(inst) || riscv_insn_is_jalr(inst)
+			|| riscv_insn_is_system(inst)) {
+			return 0;
+		}
+		/* instructions that has rs1 */
+		if (riscv_insn_is_arith_ri(inst) || riscv_insn_is_arith_rr(inst)
+			|| riscv_insn_is_load(inst) || riscv_insn_is_store(inst)
+			|| riscv_insn_is_amo(inst)) {
+			rs1 = (inst & 0xF8000) >> 15;
+			rs_mask |= 1 << rs1;
+		}
+		/* instructions that has rs2 */
+		if (riscv_insn_is_arith_rr(inst) || riscv_insn_is_store(inst)
+			|| riscv_insn_is_amo(inst)) {
+			rs2 = (inst & 0x1F00000) >> 20;
+			rs_mask |= 1 << rs2;
+		}
+		/* instructions that has rd */
+		if (riscv_insn_is_lui(inst) || riscv_insn_is_jal(inst)
+			|| riscv_insn_is_load(inst) || riscv_insn_is_arith_ri(inst)
+			|| riscv_insn_is_arith_rr(inst) || riscv_insn_is_amo(inst)) {
+			rd = (inst & 0xF80) >> 7;
+			if (rd != 0 && (rs_mask & (1 << rd)) == 0)
+				return rd;
+			if (riscv_insn_is_jal(inst))
+				return 0;
+		}
+	}
+	return 0;
+}
+
+/*
+ * If two free registers can be found at the beginning of both
+ * the start and the end of replaced code, it can be optimized
+ * Also, in-function jumps need to be checked to make sure that
+ * there is no jump to the second instruction to be replaced
+ */
+
+#define branch_imm(opcode) \
+	(((((opcode) >>  8) & 0xf) <<  1) | \
+	 ((((opcode) >> 25) & 0x3f) <<  5) | \
+	 ((((opcode) >>  7) & 0x1) << 11) | \
+	 ((((opcode) >> 31) & 0x1) << 12))
+
+#define branch_offset(opcode) \
+	sign_extend32((branch_imm(opcode)), 12)
+
+#define jal_imm(opcode) \
+	((((opcode >> 21) & 0x3ff) << 1) | \
+	 (((opcode >> 20) & 0x1) << 11) | \
+	 (((opcode >> 31) & 0x1) << 20))
+#define jal_offset(opcode) \
+	sign_extend32(jal_imm(opcode), 20)
+
+static int can_optimize(unsigned long paddr, kprobe_opcode_t orig)
+{
+	unsigned long addr, size = 0, offset = 0, target;
+	s32 imm;
+	kprobe_opcode_t inst;
+
+	if (!kallsyms_lookup_size_offset(paddr, &size, &offset))
+		return 0;
+
+	addr = paddr - offset;
+
+	/* if there are not enough space for our kprobe, skip */
+	if (addr + size <= paddr + MAX_OPTIMIZED_LENGTH)
+		return 0;
+
+	while (addr < paddr - offset + size) {
+		/* Check from the start until the end */
+
+		inst = *(kprobe_opcode_t *)addr;
+		/* branch and jal is capable of determing target before execution */
+		if (riscv_insn_is_branch(inst)) {
+			imm = branch_offset(inst);
+			target = addr + imm;
+			if (target == paddr + RVI_INST_SIZE)
+				return 0;
+		} else if (riscv_insn_is_jal(inst)) {
+			imm = jal_offset(inst);
+			target = addr + imm;
+			if (target == paddr + RVI_INST_SIZE)
+				return 0;
+		}
+		/* RVI is always 4 byte long */
+		addr += 4;
+	}
+
+	if (can_kprobe_direct_exec((kprobe_opcode_t *)(paddr + 4)) != INSN_GOOD_NO_SLOT)
+		return 0;
+
+	/* only valid when we find two free registers */
+	return __arch_find_free_register((kprobe_opcode_t *) paddr, 1, orig)
+		&& __arch_find_free_register((kprobe_opcode_t *) (paddr + JUMP_SIZE), 0, 0);
+}
+
+/* Free optimized instruction slot */
+static void
+__arch_remove_optimized_kprobe(struct optimized_kprobe *op, int dirty)
+{
+	if (op->optinsn.insn) {
+		free_optinsn_slot(op->optinsn.insn, dirty);
+		op->optinsn.insn = NULL;
+	}
+}
+
+extern void kprobe_handler(struct pt_regs *regs);
+
+static void
+optimized_callback(struct optimized_kprobe *op, struct pt_regs *regs)
+{
+	unsigned long flags;
+	struct kprobe_ctlblk *kcb;
+
+	/* Save skipped registers */
+	regs->epc = (unsigned long)op->kp.addr;
+	regs->orig_a0 = ~0UL;
+
+	local_irq_save(flags);
+	kcb = get_kprobe_ctlblk();
+
+	if (kprobe_running()) {
+		kprobes_inc_nmissed_count(&op->kp);
+	} else {
+		__this_cpu_write(current_kprobe, &op->kp);
+		kcb->kprobe_status = KPROBE_HIT_ACTIVE;
+		opt_pre_handler(&op->kp, regs);
+		__this_cpu_write(current_kprobe, NULL);
+	}
+
+	local_irq_restore(flags);
+}
+
+NOKPROBE_SYMBOL(optimized_callback)
+static inline kprobe_opcode_t
+__arch_patch_rd(kprobe_opcode_t inst, unsigned long val)
+{
+	inst &= 0xfffff07fUL;
+	inst |= val << 7;
+	return inst;
+}
+
+static inline kprobe_opcode_t
+__arch_patch_rs1(kprobe_opcode_t inst, unsigned long val)
+{
+	inst &= 0xfff07fffUL;
+	inst |= val << 15;
+	return inst;
+}
+
+static inline kprobe_opcode_t __arch_patch_rs2(kprobe_opcode_t inst,
+						   unsigned long val)
+{
+	inst &= 0xfe0fffffUL;
+	inst |= val << 20;
+	return inst;
+}
+
+int
+arch_prepare_optimized_kprobe(struct optimized_kprobe *op, struct kprobe *orig)
+{
+	kprobe_opcode_t *code, *detour_slot, *detour_ret_addr;
+	long rel_chk;
+	unsigned long val;
+
+	/* not aligned address */
+	#ifdef CONFIG_RISCV_ISA_C
+	return -ERANGE;
+	#endif
+
+	if (!can_optimize((unsigned long)orig->addr, orig->opcode))
+		return -EILSEQ;
+
+	code = kzalloc(MAX_OPTINSN_SIZE, GFP_KERNEL);
+	detour_slot = get_optinsn_slot();
+
+	if (!code || !detour_slot) {
+		kfree(code);
+		if (detour_slot)
+			free_optinsn_slot(detour_slot, 0);
+		return -ENOMEM;
+	}
+
+	/*
+	 * Verify if the address gap is within 4GB range, because this uses
+	 * a auipc+jalr pair.
+	 */
+	rel_chk = (long)detour_slot - (long)orig->addr + 8;
+	if (abs(rel_chk) > 0x7fffffff) {
+		/*
+		 * Different from x86, we free code buf directly instead of
+		 * calling __arch_remove_optimized_kprobe() because
+		 * we have not fill any field in op.
+		 */
+		kfree(code);
+		free_optinsn_slot(detour_slot, 0);
+		return -ERANGE;
+	}
+
+	/* Copy arch-dep-instance from template. */
+	memcpy(code, (unsigned long *)optprobe_template_entry,
+		   TMPL_END_IDX * sizeof(kprobe_opcode_t));
+
+	/* Set probe information */
+	val = (unsigned long)op;
+	*(unsigned long *)(&code[TMPL_VAL_IDX]) = val;
+
+	/* Set probe function call */
+	val = (unsigned long)optimized_callback;
+	*(unsigned long *)(&code[TMPL_CALL_IDX]) = val;
+
+	/* Adjust epc register */
+	val = __arch_find_free_register(orig->addr, 1, orig->opcode);
+	/*
+	 * patch rs2 of optprobe_template_store_epc
+	 * after patch, optprobe_template_store_epc will be
+	 * REG_S free_register, PT_EPC(sp)
+	 */
+	code[TMPL_STORE_EPC_IDX] =
+		__arch_patch_rs2(code[TMPL_STORE_EPC_IDX], val);
+
+	/* Adjust return temp register */
+	val =
+		__arch_find_free_register(orig->addr +
+					  JUMP_SIZE / sizeof(kprobe_opcode_t), 0,
+					  0);
+	/*
+	 * patch of optprobe_template_restore_end
+	 * patch:
+	 *   rd and imm of auipc
+	 *   rs1 and imm of jalr
+	 * after patch:
+	 *   auipc free_register, %hi(return_address)
+	 *   jalr x0, %lo(return_address)(free_register)
+	 *
+	 */
+
+	detour_ret_addr = &(detour_slot[optprobe_template_restore_end - optprobe_template_entry]);
+
+	make_call(detour_ret_addr, (orig->addr + JUMP_SIZE / sizeof(kprobe_opcode_t)),
+			(code + TMPL_RESTORE_END));
+	code[TMPL_RESTORE_END] = __arch_patch_rd(code[TMPL_RESTORE_END], val);
+	code[TMPL_RESTORE_END + 1] =
+		__arch_patch_rs1(code[TMPL_RESTORE_END + 1], val);
+	code[TMPL_RESTORE_END + 1] = __arch_patch_rd(code[TMPL_RESTORE_END + 1], 0);
+
+	/* Copy insn and have it executed during restore */
+
+	code[TMPL_RESTORE_ORIGN_INSN] = orig->opcode;
+	code[TMPL_RESTORE_ORIGN_INSN + 1] =
+		*(kprobe_opcode_t *) (orig->addr + 1);
+
+	if (patch_text_nosync(detour_slot, code, MAX_OPTINSN_SIZE)) {
+		free_optinsn_slot(detour_slot, 0);
+		kfree(code);
+		return -EPERM;
+	}
+
+	kfree(code);
+	/* Set op->optinsn.insn means prepared. */
+	op->optinsn.insn = detour_slot;
+	return 0;
+}
+
+void __kprobes arch_optimize_kprobes(struct list_head *oplist)
+{
+	struct optimized_kprobe *op, *tmp;
+	kprobe_opcode_t val;
+
+	list_for_each_entry_safe(op, tmp, oplist, list) {
+		kprobe_opcode_t insn[2];
+
+		WARN_ON(kprobe_disabled(&op->kp));
+
+		/*
+		 * Backup instructions which will be replaced
+		 * by jump address
+		 */
+		memcpy(op->optinsn.copied_insn, op->kp.addr, JUMP_SIZE);
+		op->optinsn.copied_insn[0] = op->kp.opcode;
+
+		make_call(op->kp.addr, op->optinsn.insn, insn);
+
+		// patch insn jalr to have rd as free register
+		val = (op->optinsn.insn[2] & 0x1F00000) >> 20;
+
+		insn[0] = __arch_patch_rd(insn[0], val);
+
+		insn[1] = __arch_patch_rd(insn[1], val);
+		insn[1] = __arch_patch_rs1(insn[1], val);
+
+		/*
+		 * Similar to __arch_disarm_kprobe, operations which
+		 * removing breakpoints must be wrapped by stop_machine
+		 * to avoid racing.
+		 */
+		WARN_ON(patch_text_nosync(op->kp.addr, insn, JUMP_SIZE));
+
+		list_del_init(&op->list);
+	}
+}
+
+static int arch_disarm_kprobe_opt(void *vop)
+{
+	struct optimized_kprobe *op = (struct optimized_kprobe *)vop;
+
+	patch_text_nosync(op->kp.addr, op->optinsn.copied_insn, JUMP_SIZE);
+	arch_arm_kprobe(&op->kp);
+	return 0;
+}
+
+void arch_unoptimize_kprobe(struct optimized_kprobe *op)
+{
+	arch_disarm_kprobe_opt((void *)op);
+}
+
+/*
+ * Recover original instructions and breakpoints from relative jumps.
+ * Caller must call with locking kprobe_mutex.
+ */
+void arch_unoptimize_kprobes(struct list_head *oplist,
+				 struct list_head *done_list)
+{
+	struct optimized_kprobe *op, *tmp;
+
+	list_for_each_entry_safe(op, tmp, oplist, list) {
+		arch_unoptimize_kprobe(op);
+		list_move(&op->list, done_list);
+	}
+}
+
+int arch_within_optimized_kprobe(struct optimized_kprobe *op,
+				 kprobe_opcode_t *addr)
+{
+	return (op->kp.addr <= addr &&
+		op->kp.addr + (JUMP_SIZE / sizeof(kprobe_opcode_t)) > addr);
+
+}
+
+void arch_remove_optimized_kprobe(struct optimized_kprobe *op)
+{
+	__arch_remove_optimized_kprobe(op, 1);
+}
diff --git a/arch/riscv/kernel/probes/opt_trampoline.S b/arch/riscv/kernel/probes/opt_trampoline.S
new file mode 100644
index 000000000..7c522c5d1
--- /dev/null
+++ b/arch/riscv/kernel/probes/opt_trampoline.S
@@ -0,0 +1,133 @@ 
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ * Copyright (C) 2022 Huawei Technologies Co., Ltd
+ * Copyright (C) 2022 Guokai Chen
+ */
+
+#include <linux/init.h>
+#include <linux/linkage.h>
+
+#include <asm/asm.h>
+#include <asm/csr.h>
+#include <asm/unistd.h>
+#include <asm/thread_info.h>
+#include <asm/asm-offsets.h>
+
+#ifdef CONFIG_OPTPROBES
+
+ENTRY(optprobe_template_entry)
+ENTRY(optprobe_template_sub_sp)
+
+	REG_S sp, (-(PT_SIZE_ON_STACK) + PT_SP)(sp)
+	addi sp, sp, -(PT_SIZE_ON_STACK)
+ENTRY(optprobe_template_store_epc)
+	REG_S ra, PT_EPC(sp)
+	REG_S ra, PT_RA(sp)
+	REG_S gp, PT_GP(sp)
+	REG_S tp, PT_TP(sp)
+	REG_S t0, PT_T0(sp)
+	REG_S t1, PT_T1(sp)
+	REG_S t2, PT_T2(sp)
+	REG_S s0, PT_S0(sp)
+	REG_S s1, PT_S1(sp)
+	REG_S a0, PT_A0(sp)
+	REG_S a1, PT_A1(sp)
+	REG_S a2, PT_A2(sp)
+	REG_S a3, PT_A3(sp)
+	REG_S a4, PT_A4(sp)
+	REG_S a5, PT_A5(sp)
+	REG_S a6, PT_A6(sp)
+	REG_S a7, PT_A7(sp)
+	REG_S s2, PT_S2(sp)
+	REG_S s3, PT_S3(sp)
+	REG_S s4, PT_S4(sp)
+	REG_S s5, PT_S5(sp)
+	REG_S s6, PT_S6(sp)
+	REG_S s7, PT_S7(sp)
+	REG_S s8, PT_S8(sp)
+	REG_S s9, PT_S9(sp)
+	REG_S s10, PT_S10(sp)
+	REG_S s11, PT_S11(sp)
+	REG_S t3, PT_T3(sp)
+	REG_S t4, PT_T4(sp)
+	REG_S t5, PT_T5(sp)
+	REG_S t6, PT_T6(sp)
+	csrr t0, sstatus
+	csrr t1, stval
+	csrr t2, scause
+	REG_S t0, PT_STATUS(sp)
+	REG_S t1, PT_BADADDR(sp)
+	REG_S t2, PT_CAUSE(sp)
+ENTRY(optprobe_template_add_sp)
+	move a1, sp
+	lla a0, 1f
+	REG_L a0, 0(a0)
+	REG_L a2, 2f
+	jalr 0(a2)
+ENTRY(optprobe_template_restore_begin)
+	REG_L t0, PT_STATUS(sp)
+	REG_L t1, PT_BADADDR(sp)
+	REG_L t2, PT_CAUSE(sp)
+	csrw sstatus, t0
+	csrw stval, t1
+	csrw scause, t2
+	REG_L ra, PT_RA(sp)
+	REG_L gp, PT_GP(sp)
+	REG_L tp, PT_TP(sp)
+	REG_L t0, PT_T0(sp)
+	REG_L t1, PT_T1(sp)
+	REG_L t2, PT_T2(sp)
+	REG_L s0, PT_S0(sp)
+	REG_L s1, PT_S1(sp)
+	REG_L a0, PT_A0(sp)
+	REG_L a1, PT_A1(sp)
+	REG_L a2, PT_A2(sp)
+	REG_L a3, PT_A3(sp)
+	REG_L a4, PT_A4(sp)
+	REG_L a5, PT_A5(sp)
+	REG_L a6, PT_A6(sp)
+	REG_L a7, PT_A7(sp)
+	REG_L s2, PT_S2(sp)
+	REG_L s3, PT_S3(sp)
+	REG_L s4, PT_S4(sp)
+	REG_L s5, PT_S5(sp)
+	REG_L s6, PT_S6(sp)
+	REG_L s7, PT_S7(sp)
+	REG_L s8, PT_S8(sp)
+	REG_L s9, PT_S9(sp)
+	REG_L s10, PT_S10(sp)
+	REG_L s11, PT_S11(sp)
+	REG_L t3, PT_T3(sp)
+	REG_L t4, PT_T4(sp)
+	REG_L t5, PT_T5(sp)
+	REG_L t6, PT_T6(sp)
+	addi sp, sp, PT_SIZE_ON_STACK
+ENTRY(optprobe_template_restore_orig_insn)
+	nop
+	nop
+ENTRY(optprobe_template_restore_end)
+ret_to_normal:
+	auipc ra, 0
+	jalr x0, 0(ra)
+ENTRY(optprobe_template_val)
+1:
+	.dword 0
+ENTRY(optprobe_template_call)
+2:
+	.dword 0
+	.dword 0
+ENTRY(optprobe_template_end)
+END(optprobe_template_end)
+END(optprobe_template_call)
+END(optprobe_template_val)
+END(optprobe_template_restore_end)
+END(optprobe_template_restore_orig_insn)
+END(optprobe_template_restore_begin)
+END(optprobe_template_add_sp)
+END(optprobe_template_store_epc)
+END(optprobe_template_sub_sp)
+END(optprobe_template_entry)
+
+#endif /* CONFIG_OPTPROBES */
diff --git a/arch/riscv/kernel/probes/simulate-insn.h b/arch/riscv/kernel/probes/simulate-insn.h
index cb6ff7dcc..826ad3a4a 100644
--- a/arch/riscv/kernel/probes/simulate-insn.h
+++ b/arch/riscv/kernel/probes/simulate-insn.h
@@ -44,4 +44,13 @@  __RISCV_INSN_FUNCS(branch,	0x7f, 0x63);
 __RISCV_INSN_FUNCS(jal,		0x7f, 0x6f);
 __RISCV_INSN_FUNCS(jalr,	0x707f, 0x67);
 
+/* 0111011 && 0110011 */
+__RISCV_INSN_FUNCS(arith_rr, 0x77, 0x33);
+/* 0011011 && 0010011 */
+__RISCV_INSN_FUNCS(arith_ri, 0x77, 0x13);
+__RISCV_INSN_FUNCS(lui, 0x7f, 0x37);
+__RISCV_INSN_FUNCS(load, 0x7f, 0x03);
+__RISCV_INSN_FUNCS(store, 0x7f, 0x23);
+__RISCV_INSN_FUNCS(amo, 0x7f, 0x2f);
+
 #endif /* _RISCV_KERNEL_PROBES_SIMULATE_INSN_H */