Message ID | 20240417032830.1764690-1-zhengyejian1@huawei.com (mailing list archive)
---|---
State | Superseded
Series | [v3] ftrace: Fix possible use-after-free issue in ftrace_location()
On Wed, 17 Apr 2024 11:28:30 +0800
Zheng Yejian <zhengyejian1@huawei.com> wrote:

> diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
> index da1710499698..e05d3e3dc06a 100644
> --- a/kernel/trace/ftrace.c
> +++ b/kernel/trace/ftrace.c
> @@ -1581,7 +1581,7 @@ static struct dyn_ftrace *lookup_rec(unsigned long start, unsigned long end)
> }
>
> /**
> - * ftrace_location_range - return the first address of a traced location
> + * ftrace_location_range_rcu - return the first address of a traced location

kerneldoc comments are for external functions. You need to move this down
to ftrace_location_range() as here you are commenting a local static function.

But I have to ask, why did you create this static function anyway? There's
only one user of it (the ftrace_location_range()). Why didn't you just
simply add the rcu locking there?

unsigned long ftrace_location_range(unsigned long start, unsigned long end)
{
	struct dyn_ftrace *rec;
	unsigned long ip = 0;

	rcu_read_lock();
	rec = lookup_rec(start, end);
	if (rec)
		ip = rec->ip;
	rcu_read_unlock();

	return ip;
}

-- Steve

> * if it touches the given ip range
> * @start: start of range to search.
> * @end: end of range to search (inclusive). @end points to the last byte
> @@ -1592,7 +1592,7 @@ static struct dyn_ftrace *lookup_rec(unsigned long start, unsigned long end)
> * that is either a NOP or call to the function tracer. It checks the ftrace
> * internal tables to determine if the address belongs or not.
> */
> -unsigned long ftrace_location_range(unsigned long start, unsigned long end)
> +static unsigned long ftrace_location_range_rcu(unsigned long start, unsigned long end)
> {
> 	struct dyn_ftrace *rec;
>
> @@ -1603,6 +1603,16 @@ unsigned long ftrace_location_range(unsigned long start, unsigned long end)
> 	return 0;
> }
>
> +unsigned long ftrace_location_range(unsigned long start, unsigned long end)
> +{
> +	unsigned long loc;
> +
> +	rcu_read_lock();
> +	loc = ftrace_location_range_rcu(start, end);
> +	rcu_read_unlock();
> +	return loc;
> +}
On 2024/5/3 05:07, Steven Rostedt wrote:
> On Wed, 17 Apr 2024 11:28:30 +0800
> Zheng Yejian <zhengyejian1@huawei.com> wrote:
>
>> diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
>> index da1710499698..e05d3e3dc06a 100644
>> --- a/kernel/trace/ftrace.c
>> +++ b/kernel/trace/ftrace.c
>> @@ -1581,7 +1581,7 @@ static struct dyn_ftrace *lookup_rec(unsigned long start, unsigned long end)
>> }
>>
>> /**
>> - * ftrace_location_range - return the first address of a traced location
>> + * ftrace_location_range_rcu - return the first address of a traced location
>
> kerneldoc comments are for external functions. You need to move this down
> to ftrace_location_range() as here you are commenting a local static function.

I'll do it in v4.

> But I have to ask, why did you create this static function anyway? There's
> only one user of it (the ftrace_location_range()). Why didn't you just
> simply add the rcu locking there?

Yes, the only-one-user function looks ugly. When I first thought that
ftrace_location_range() needs a lock, I just did it like that, no special
reason.

> unsigned long ftrace_location_range(unsigned long start, unsigned long end)
> {
> 	struct dyn_ftrace *rec;
> 	unsigned long ip = 0;
>
> 	rcu_read_lock();
> 	rec = lookup_rec(start, end);
> 	if (rec)
> 		ip = rec->ip;
> 	rcu_read_unlock();
>
> 	return ip;
> }
>
> -- Steve
>
>> * if it touches the given ip range
>> * @start: start of range to search.
>> * @end: end of range to search (inclusive). @end points to the last byte
>> @@ -1592,7 +1592,7 @@ static struct dyn_ftrace *lookup_rec(unsigned long start, unsigned long end)
>> * that is either a NOP or call to the function tracer. It checks the ftrace
>> * internal tables to determine if the address belongs or not.
>> */
>> -unsigned long ftrace_location_range(unsigned long start, unsigned long end)
>> +static unsigned long ftrace_location_range_rcu(unsigned long start, unsigned long end)
>> {
>> 	struct dyn_ftrace *rec;
>>
>> @@ -1603,6 +1603,16 @@ unsigned long ftrace_location_range(unsigned long start, unsigned long end)
>> 	return 0;
>> }
>>
>> +unsigned long ftrace_location_range(unsigned long start, unsigned long end)
>> +{
>> +	unsigned long loc;
>> +
>> +	rcu_read_lock();
>> +	loc = ftrace_location_range_rcu(start, end);
>> +	rcu_read_unlock();
>> +	return loc;
>> +}
>
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index da1710499698..e05d3e3dc06a 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -1581,7 +1581,7 @@ static struct dyn_ftrace *lookup_rec(unsigned long start, unsigned long end)
 }
 
 /**
- * ftrace_location_range - return the first address of a traced location
+ * ftrace_location_range_rcu - return the first address of a traced location
  * if it touches the given ip range
  * @start: start of range to search.
  * @end: end of range to search (inclusive). @end points to the last byte
@@ -1592,7 +1592,7 @@ static struct dyn_ftrace *lookup_rec(unsigned long start, unsigned long end)
  * that is either a NOP or call to the function tracer. It checks the ftrace
  * internal tables to determine if the address belongs or not.
  */
-unsigned long ftrace_location_range(unsigned long start, unsigned long end)
+static unsigned long ftrace_location_range_rcu(unsigned long start, unsigned long end)
 {
 	struct dyn_ftrace *rec;
 
@@ -1603,6 +1603,16 @@ unsigned long ftrace_location_range(unsigned long start, unsigned long end)
 	return 0;
 }
 
+unsigned long ftrace_location_range(unsigned long start, unsigned long end)
+{
+	unsigned long loc;
+
+	rcu_read_lock();
+	loc = ftrace_location_range_rcu(start, end);
+	rcu_read_unlock();
+	return loc;
+}
+
 /**
  * ftrace_location - return the ftrace location
  * @ip: the instruction pointer to check
@@ -1614,25 +1624,22 @@ unsigned long ftrace_location_range(unsigned long start, unsigned long end)
  */
 unsigned long ftrace_location(unsigned long ip)
 {
-	struct dyn_ftrace *rec;
+	unsigned long loc;
 	unsigned long offset;
 	unsigned long size;
 
-	rec = lookup_rec(ip, ip);
-	if (!rec) {
+	loc = ftrace_location_range(ip, ip);
+	if (!loc) {
 		if (!kallsyms_lookup_size_offset(ip, &size, &offset))
 			goto out;
 
 		/* map sym+0 to __fentry__ */
 		if (!offset)
-			rec = lookup_rec(ip, ip + size - 1);
+			loc = ftrace_location_range(ip, ip + size - 1);
 	}
-
-	if (rec)
-		return rec->ip;
-
 out:
-	return 0;
+	return loc;
 }
 
 /**
@@ -6596,6 +6603,8 @@ static int ftrace_process_locs(struct module *mod,
 	/* We should have used all pages unless we skipped some */
 	if (pg_unuse) {
 		WARN_ON(!skipped);
+		/* Need to synchronize with ftrace_location_range() */
+		synchronize_rcu();
 		ftrace_free_pages(pg_unuse);
 	}
 	return ret;
@@ -6809,6 +6818,9 @@ void ftrace_release_mod(struct module *mod)
  out_unlock:
 	mutex_unlock(&ftrace_lock);
 
+	/* Need to synchronize with ftrace_location_range() */
+	if (tmp_page)
+		synchronize_rcu();
+
 	for (pg = tmp_page; pg; pg = tmp_page) {
 
 		/* Needs to be called outside of ftrace_lock */
@@ -7142,6 +7154,7 @@ void ftrace_free_mem(struct module *mod, void *start_ptr, void *end_ptr)
 	unsigned long start = (unsigned long)(start_ptr);
 	unsigned long end = (unsigned long)(end_ptr);
 	struct ftrace_page **last_pg = &ftrace_pages_start;
+	struct ftrace_page *tmp_page = NULL;
 	struct ftrace_page *pg;
 	struct dyn_ftrace *rec;
 	struct dyn_ftrace key;
@@ -7183,12 +7196,8 @@ void ftrace_free_mem(struct module *mod, void *start_ptr, void *end_ptr)
 			ftrace_update_tot_cnt--;
 		if (!pg->index) {
 			*last_pg = pg->next;
-			if (pg->records) {
-				free_pages((unsigned long)pg->records, pg->order);
-				ftrace_number_of_pages -= 1 << pg->order;
-			}
-			ftrace_number_of_groups--;
-			kfree(pg);
+			pg->next = tmp_page;
+			tmp_page = pg;
 			pg = container_of(last_pg, struct ftrace_page, next);
 			if (!(*last_pg))
 				ftrace_pages = pg;
@@ -7205,6 +7214,11 @@ void ftrace_free_mem(struct module *mod, void *start_ptr, void *end_ptr)
 		clear_func_from_hashes(func);
 		kfree(func);
 	}
+	/* Need to synchronize with ftrace_location_range() */
+	if (tmp_page) {
+		synchronize_rcu();
+		ftrace_free_pages(tmp_page);
+	}
 }
 
 void __init ftrace_free_init_mem(void)
KASAN reports a bug:

  BUG: KASAN: use-after-free in ftrace_location+0x90/0x120
  Read of size 8 at addr ffff888141d40010 by task insmod/424
  CPU: 8 PID: 424 Comm: insmod Tainted: G W 6.9.0-rc2+
  [...]
  Call Trace:
   <TASK>
   dump_stack_lvl+0x68/0xa0
   print_report+0xcf/0x610
   kasan_report+0xb5/0xe0
   ftrace_location+0x90/0x120
   register_kprobe+0x14b/0xa40
   kprobe_init+0x2d/0xff0 [kprobe_example]
   do_one_initcall+0x8f/0x2d0
   do_init_module+0x13a/0x3c0
   load_module+0x3082/0x33d0
   init_module_from_file+0xd2/0x130
   __x64_sys_finit_module+0x306/0x440
   do_syscall_64+0x68/0x140
   entry_SYSCALL_64_after_hwframe+0x71/0x79

The root cause is that, in lookup_rec(), the ftrace record of some address
is being searched in the ftrace pages of some module, while at the same
time those ftrace pages are being freed in ftrace_release_mod() because
the corresponding module is being deleted:

           CPU1                       |      CPU2
  register_kprobes() {                | delete_module() {
    check_kprobe_address_safe() {     |
      arch_check_ftrace_location() {  |
        ftrace_location() {           |
          lookup_rec()  // USE!       |   ftrace_release_mod()  // Free!

To fix this issue:
  1. Hold the RCU read lock while accessing ftrace pages in
     ftrace_location_range();
  2. Use ftrace_location_range() instead of lookup_rec() in
     ftrace_location();
  3. Call synchronize_rcu() before freeing any ftrace pages, in
     ftrace_process_locs()/ftrace_release_mod()/ftrace_free_mem().

Fixes: ae6aa16fdc16 ("kprobes: introduce ftrace based optimization")
Suggested-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Zheng Yejian <zhengyejian1@huawei.com>
---
 kernel/trace/ftrace.c | 46 ++++++++++++++++++++++++++++---------------
 1 file changed, 30 insertions(+), 16 deletions(-)

v3:
 - Complete the commit description and add Suggested-by tag
 - Add comments around where synchronize_rcu() is called

v2:
 - Link: https://lore.kernel.org/all/20240416112459.1444612-1-zhengyejian1@huawei.com/
 - Use RCU lock instead of holding ftrace_lock as suggested by Steve.
 - Link: https://lore.kernel.org/all/20240410112823.1d084c8f@gandalf.local.home/

v1:
 - Link: https://lore.kernel.org/all/20240401125543.1282845-1-zhengyejian1@huawei.com/