Message ID: 20220512063017.57412-1-kunyu@nfschina.com (mailing list archive)
State:      New, archived
Series:     kernel: Ftrace seems to have functions to improve performance through optimization
On Thu, 12 May 2022 14:30:17 +0800
Li kunyu <kunyu@nfschina.com> wrote:

> At present, it is found that two functions could be optimized, and the
> performance may be improved.

"may be improved"? Have any numbers? This changes a very slow path. I do
not think it is worth it for the "optimized".

Also it's a weak function. An arch may be added that wants to return a
value.

> 
> Signed-off-by: Li kunyu <kunyu@nfschina.com>
> ---

>  void ftrace_modify_all_code(int command)
> @@ -2804,12 +2802,7 @@ void __weak arch_ftrace_update_code(int command)
>  
>  static void ftrace_run_update_code(int command)
>  {
> -	int ret;
> -
> -	ret = ftrace_arch_code_modify_prepare();
> -	FTRACE_WARN_ON(ret);

Currently no arch returns anything but zero, but this was added in case
something wrong did happen, and the FTRACE_WARN_ON() is more than just a
WARN_ON(). It will also disable ftrace.

Now, I'm not totally against this change, but not for the rationale in
the change log. That is, there is no optimization here. But as a
standard clean up with something like "There is currently no version of
ftrace_arch_code_modify_prepare() that returns anything but zero, so the
check is not needed" is a more appropriate reason for this change.

-- Steve

> -	if (ret)
> -		return;
> +	ftrace_arch_code_modify_prepare();
> 
>  	/*
>  	 * By default we use stop_machine() to modify the code.
> @@ -2819,8 +2812,7 @@ static void ftrace_run_update_code(int command)
>  	 */
>  	arch_ftrace_update_code(command);
> 
> -	ret = ftrace_arch_code_modify_post_process();
> -	FTRACE_WARN_ON(ret);
> +	ftrace_arch_code_modify_post_process();
>  }
> 
>  static void ftrace_run_modify_code(struct ftrace_ops *ops, int command,
Thank you for being willing to reply to me. My English is limited, so I may not have described this correctly. My thought was that since these functions are unlikely to be modified in the near future, the change might bring some performance advantage. I could only submit the patch as a way of asking. Thanks.
On Thu, 12 May 2022 22:45:32 +0800
Li kunyu <kunyu@nfschina.com> wrote:

> I'm glad you're willing to reply to me.
> My current English level is limited, and I may not describe it correctly.
> I consider that these functions may not be modified in a short time,
> which may bring advantages in function performance. I could only submit
> it by inquiry.

The performance improvement is negligible, and not worth the churn
(changes to the code). And if that is the reason for the change, it is
not worth adding it.

Now if the reason for the change is to clean up the code to remove
something that is not used, then it may be something to add if it is not
too intrusive.

-- Steve
Thank you very much. May I revise the description of this patch and submit it again?
diff --git a/arch/arm/kernel/ftrace.c b/arch/arm/kernel/ftrace.c
index 83cc068586bc..a0b6d1e3812f 100644
--- a/arch/arm/kernel/ftrace.c
+++ b/arch/arm/kernel/ftrace.c
@@ -79,16 +79,14 @@ static unsigned long __ref adjust_address(struct dyn_ftrace *rec,
 	return (unsigned long)&ftrace_regs_caller_from_init;
 }
 
-int ftrace_arch_code_modify_prepare(void)
+void ftrace_arch_code_modify_prepare(void)
 {
-	return 0;
 }
 
-int ftrace_arch_code_modify_post_process(void)
+void ftrace_arch_code_modify_post_process(void)
 {
 	/* Make sure any TLB misses during machine stop are cleared. */
 	flush_tlb_all();
-	return 0;
 }
 
 static unsigned long ftrace_call_replace(unsigned long pc, unsigned long addr,
diff --git a/arch/riscv/kernel/ftrace.c b/arch/riscv/kernel/ftrace.c
index 4716f4cdc038..2086f6585773 100644
--- a/arch/riscv/kernel/ftrace.c
+++ b/arch/riscv/kernel/ftrace.c
@@ -12,16 +12,14 @@
 #include <asm/patch.h>
 
 #ifdef CONFIG_DYNAMIC_FTRACE
-int ftrace_arch_code_modify_prepare(void) __acquires(&text_mutex)
+void ftrace_arch_code_modify_prepare(void) __acquires(&text_mutex)
 {
 	mutex_lock(&text_mutex);
-	return 0;
 }
 
-int ftrace_arch_code_modify_post_process(void) __releases(&text_mutex)
+void ftrace_arch_code_modify_post_process(void) __releases(&text_mutex)
 {
 	mutex_unlock(&text_mutex);
-	return 0;
 }
 
 static int ftrace_check_current_call(unsigned long hook_pos,
diff --git a/arch/s390/kernel/ftrace.c b/arch/s390/kernel/ftrace.c
index 1852d46babb1..416b5a94353d 100644
--- a/arch/s390/kernel/ftrace.c
+++ b/arch/s390/kernel/ftrace.c
@@ -225,14 +225,13 @@ void arch_ftrace_update_code(int command)
 	ftrace_modify_all_code(command);
 }
 
-int ftrace_arch_code_modify_post_process(void)
+void ftrace_arch_code_modify_post_process(void)
 {
 	/*
 	 * Flush any pre-fetched instructions on all
 	 * CPUs to make the new code visible.
 	 */
 	text_poke_sync_lock();
-	return 0;
 }
 
 #ifdef CONFIG_MODULES
diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c
index 1e31c7d21597..73d2719ed12c 100644
--- a/arch/x86/kernel/ftrace.c
+++ b/arch/x86/kernel/ftrace.c
@@ -37,7 +37,7 @@
 
 static int ftrace_poke_late = 0;
 
-int ftrace_arch_code_modify_prepare(void)
+void ftrace_arch_code_modify_prepare(void)
     __acquires(&text_mutex)
 {
 	/*
@@ -47,10 +47,9 @@ int ftrace_arch_code_modify_prepare(void)
 	 */
 	mutex_lock(&text_mutex);
 	ftrace_poke_late = 1;
-	return 0;
 }
 
-int ftrace_arch_code_modify_post_process(void)
+void ftrace_arch_code_modify_post_process(void)
     __releases(&text_mutex)
 {
 	/*
@@ -61,7 +60,6 @@ int ftrace_arch_code_modify_post_process(void)
 	text_poke_finish();
 	ftrace_poke_late = 0;
 	mutex_unlock(&text_mutex);
-	return 0;
 }
 
 static const char *ftrace_nop_replace(void)
diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
index 4816b7e11047..a5f74f6e7e4e 100644
--- a/include/linux/ftrace.h
+++ b/include/linux/ftrace.h
@@ -449,8 +449,8 @@ static inline void stack_tracer_enable(void) { }
 
 #ifdef CONFIG_DYNAMIC_FTRACE
 
-int ftrace_arch_code_modify_prepare(void);
-int ftrace_arch_code_modify_post_process(void);
+void ftrace_arch_code_modify_prepare(void);
+void ftrace_arch_code_modify_post_process(void);
 
 enum ftrace_bug_type {
 	FTRACE_BUG_UNKNOWN,
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 4f1d2f5e7263..35a899f136fe 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -2707,18 +2707,16 @@ ftrace_nop_initialize(struct module *mod, struct dyn_ftrace *rec)
  * archs can override this function if they must do something
  * before the modifying code is performed.
  */
-int __weak ftrace_arch_code_modify_prepare(void)
+void __weak ftrace_arch_code_modify_prepare(void)
 {
-	return 0;
 }
 
 /*
  * archs can override this function if they must do something
  * after the modifying code is performed.
  */
-int __weak ftrace_arch_code_modify_post_process(void)
+void __weak ftrace_arch_code_modify_post_process(void)
 {
-	return 0;
 }
 
 void ftrace_modify_all_code(int command)
@@ -2804,12 +2802,7 @@ void __weak arch_ftrace_update_code(int command)
 
 static void ftrace_run_update_code(int command)
 {
-	int ret;
-
-	ret = ftrace_arch_code_modify_prepare();
-	FTRACE_WARN_ON(ret);
-	if (ret)
-		return;
+	ftrace_arch_code_modify_prepare();
 
 	/*
 	 * By default we use stop_machine() to modify the code.
@@ -2819,8 +2812,7 @@ static void ftrace_run_update_code(int command)
 	 */
 	arch_ftrace_update_code(command);
 
-	ret = ftrace_arch_code_modify_post_process();
-	FTRACE_WARN_ON(ret);
+	ftrace_arch_code_modify_post_process();
 }
 
 static void ftrace_run_modify_code(struct ftrace_ops *ops, int command,
At present, it is found that two functions could be optimized, and the
performance may be improved.

Signed-off-by: Li kunyu <kunyu@nfschina.com>
---
 arch/arm/kernel/ftrace.c   |  6 ++----
 arch/riscv/kernel/ftrace.c |  6 ++----
 arch/s390/kernel/ftrace.c  |  3 +--
 arch/x86/kernel/ftrace.c   |  6 ++----
 include/linux/ftrace.h     |  4 ++--
 kernel/trace/ftrace.c      | 16 ++++------------
 6 files changed, 13 insertions(+), 28 deletions(-)