Message ID | 20220913033454.104519-3-liaochang1@huawei.com (mailing list archive) |
---|---|
State | New, archived |
Series | kprobe: Optimize the performance of patching ss |
diff --git a/arch/csky/kernel/probes/kprobes.c b/arch/csky/kernel/probes/kprobes.c
index 3c6e5c725d81..4feb5ce16264 100644
--- a/arch/csky/kernel/probes/kprobes.c
+++ b/arch/csky/kernel/probes/kprobes.c
@@ -57,7 +57,11 @@ static void __kprobes arch_prepare_ss_slot(struct kprobe *p)
 
 	p->ainsn.api.restore = (unsigned long)p->addr + offset;
 
-	patch_text(p->ainsn.api.insn, p->opcode);
+	memcpy(p->ainsn.api.insn, &p->opcode, offset);
+	dcache_wb_range((unsigned long)p->ainsn.api.insn,
+			(unsigned long)p->ainsn.api.insn + offset);
+	icache_inv_range((unsigned long)p->ainsn.api.insn,
+			 (unsigned long)p->ainsn.api.insn + offset);
 }
 
 static void __kprobes arch_prepare_simulate(struct kprobe *p)
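For reference, the new write path is a plain copy followed by explicit cache maintenance on the slot. Below is a minimal sketch of that sequence as a standalone helper; the helper name write_ss_slot() and the include lines are illustrative assumptions, while memcpy(), dcache_wb_range() and icache_inv_range() are exactly the calls used in the hunk above.

#include <linux/kprobes.h>
#include <linux/string.h>
#include <asm/cacheflush.h>	/* assumed header for csky dcache_wb_range()/icache_inv_range() */

/*
 * Hypothetical helper mirroring the sequence in arch_prepare_ss_slot():
 * copy the probed instruction into the private single-step slot, then
 * make the new bytes visible to the instruction fetch path.
 */
static void write_ss_slot(struct kprobe *p, unsigned long len)
{
	unsigned long start = (unsigned long)p->ainsn.api.insn;

	/* Plain store: the slot is private until the kprobe is armed. */
	memcpy(p->ainsn.api.insn, &p->opcode, len);

	/* Write the new bytes back from the D-cache to memory... */
	dcache_wb_range(start, start + len);

	/* ...then drop any stale lines from the I-cache before execution. */
	icache_inv_range(start, start + len);
}

The write-back has to precede the invalidate: if the I-cache were invalidated first, a refetch could still pull in the old bytes before the D-cache contents reach memory.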
The single-step slot is not used until the kprobe is enabled, so no race condition can occur on it under SMP; hence it is safe to patch the ss slot without stopping the machine.

Signed-off-by: Liao Chang <liaochang1@huawei.com>
---
 arch/csky/kernel/probes/kprobes.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)
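For contrast, the patch_text() path being dropped here goes through stop_machine(), which parks every online CPU for each instruction update. A rough sketch of that general pattern is shown below under stated assumptions: struct insn_patch, insn_patch_cb() and insn_patch_text() are illustrative names, not the csky implementation, and real ports may additionally need per-CPU I-cache maintenance. It is this all-CPU rendezvous that the memcpy()-based path avoids for a slot that is not yet live.

#include <linux/stop_machine.h>
#include <linux/string.h>
#include <asm/cacheflush.h>

/* Illustrative payload for a stop_machine()-based text patch. */
struct insn_patch {
	void	*addr;
	u32	opcode;
	size_t	len;
};

/* Runs on one CPU while every other online CPU is parked. */
static int insn_patch_cb(void *data)
{
	struct insn_patch *ip = data;
	unsigned long start = (unsigned long)ip->addr;

	memcpy(ip->addr, &ip->opcode, ip->len);
	dcache_wb_range(start, start + ip->len);
	icache_inv_range(start, start + ip->len);

	return 0;
}

/* Every call pays a full machine stop, even for a private ss slot. */
static int insn_patch_text(void *addr, u32 opcode, size_t len)
{
	struct insn_patch ip = { .addr = addr, .opcode = opcode, .len = len };

	/* NULL cpumask: run the callback on one CPU, hold the rest. */
	return stop_machine(insn_patch_cb, &ip, NULL);
}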