| Message ID | 20220526141912.794c2786@gandalf.local.home |
|---|---|
| State | Superseded |
| Series | [v4] ftrace: Add FTRACE_MCOUNT_MAX_OFFSET to avoid adding weak function |
| Context | Check | Description |
|---|---|---|
| netdev/tree_selection | success | Not a local patch |
On Thu, 26 May 2022 14:19:12 -0400 Steven Rostedt <rostedt@goodmis.org>
(by way of Steven Rostedt <rostedt@goodmis.org>) wrote:

> +++ b/kernel/trace/ftrace.c
> @@ -3654,6 +3654,31 @@ static void add_trampoline_func(struct seq_file *m, struct ftrace_ops *ops,
>  	seq_printf(m, " ->%pS", ptr);
>  }
> 
> +#ifdef FTRACE_MCOUNT_MAX_OFFSET
> +static int print_rec(struct seq_file *m, unsigned long ip)
> +{
> +	unsigned long offset;
> +	char str[KSYM_SYMBOL_LEN];
> +	char *modname;
> +	const char *ret;
> +
> +	ret = kallsyms_lookup(ip, NULL, &offset, &modname, str);
> +	if (!ret || offset > FTRACE_MCOUNT_MAX_OFFSET)
> +		return -1;

Unfortunately, I can't just skip printing these functions, because doing so breaks trace-cmd (libtracefs specifically). trace-cmd can filter with full regular expressions (regex(3)), and does so by searching available_filter_functions. It collects an index of the functions to enable, then passes that into set_ftrace_filter.

As a speed up, set_ftrace_filter allows you to pass an index, defined by the line in available_filter_functions, that uses array indexing into the ftrace table to enable/disable functions for tracing. Skipping entries breaks the indexing, because the index is a 1-to-1 pairing with the lines of available_filter_functions.

To solve this, instead of printing nothing, I have this:

	ret = kallsyms_lookup(ip, NULL, &offset, &modname, str);
	/* Weak functions can cause invalid addresses */
	if (!ret || offset > FTRACE_MCOUNT_MAX_OFFSET) {
		snprintf(str, KSYM_SYMBOL_LEN, "%s_%ld",
			 FTRACE_INVALID_FUNCTION, offset);
	}

Where:

	#define FTRACE_INVALID_FUNCTION		"__ftrace_invalid_address__"

When doing this, the available_filter_functions file has 546 invalid entries, 14 of which are for the kvm module, probably to deal with the differences between Intel and AMD. When a function is read as invalid, the rec->flags get the DISABLED flag set, which will keep it from being enabled in the future.
Of course, one can just enter numbers without reading any of the files, and that will allow them to be set. It won't do anything bad; it would just act like it does today.

Does anyone have any issues with this approach (with __ftrace_invalid_address__%d inserted into available_filter_functions)?

-- Steve

> +
> +	seq_puts(m, str);
> +	if (modname)
> +		seq_printf(m, " [%s]", modname);
> +	return 0;
> +}
> +#else
> +static int print_rec(struct seq_file *m, unsigned long ip)
> +{
> +	seq_printf(m, "%ps", (void *)ip);
> +	return 0;
> +}
> +#endif
> +
On Fri, May 27, 2022 at 08:30:43AM -0400, Steven Rostedt wrote:
> On Thu, 26 May 2022 14:19:12 -0400
> Steven Rostedt <rostedt@goodmis.org> (by way of Steven Rostedt
> <rostedt@goodmis.org>) wrote:
> 
> > +++ b/kernel/trace/ftrace.c
> > @@ -3654,6 +3654,31 @@ static void add_trampoline_func(struct seq_file *m, struct ftrace_ops *ops,
> >  	seq_printf(m, " ->%pS", ptr);
> >  }
> > 
> > +#ifdef FTRACE_MCOUNT_MAX_OFFSET
> > +static int print_rec(struct seq_file *m, unsigned long ip)
> > +{
> > +	unsigned long offset;
> > +	char str[KSYM_SYMBOL_LEN];
> > +	char *modname;
> > +	const char *ret;
> > +
> > +	ret = kallsyms_lookup(ip, NULL, &offset, &modname, str);
> > +	if (!ret || offset > FTRACE_MCOUNT_MAX_OFFSET)
> > +		return -1;
> 
> Unfortunately, I can't just skip printing these functions. The reason is
> because it breaks trace-cmd (libtracefs specifically). As trace-cmd can
> filter with full regular expressions (regex(3)), and does so by searching
> the available_filter_functions. It collects an index of functions to
> enabled, then passes that into set_ftrace_filter.
> 
> As a speed up, set_ftrace_filter allows you to pass an index, defined by the
> line in available_filter_functions, into it that uses array indexing into
> the ftrace table to enable/disable functions for tracing. By skipping
> entries, it breaks the indexing, because the index is a 1 to 1 paring of
> the lines of available_filter_functions.

In what order does available_filter_functions print the symbols?

The pending FGKASLR patches randomize kallsyms order, and anything that prints symbols in address order will be a security leak.
On Sat, 28 May 2022 13:41:41 +0200 Peter Zijlstra <peterz@infradead.org> wrote:

> In what order does available_filter_functions print the symbols?
> 
> The pending FGKASLR patches randomize kallsyms order and anything that
> prints symbols in address order will be a security leak.

Yes, it is sorted, but tracefs is by default root-accessible only. An admin can change the owner of it via normal chmod/chown permissions, but they get to keep the security pieces if they do.

There are other things in tracefs that can pose security issues if unprivileged users are allowed to read them, which is why the default permission of its files is rw-r-----. Thus, I'm not worried about it. It's also why the security paranoid can always lock down tracing, which will completely disable tracefs and access to all its files.

-- Steve
diff --git a/arch/x86/include/asm/ftrace.h b/arch/x86/include/asm/ftrace.h
index 024d9797646e..4c9bfdf4dae7 100644
--- a/arch/x86/include/asm/ftrace.h
+++ b/arch/x86/include/asm/ftrace.h
@@ -9,6 +9,14 @@
 # define MCOUNT_ADDR		((unsigned long)(__fentry__))
 #define MCOUNT_INSN_SIZE	5 /* sizeof mcount call */
 
+#include <asm/ibt.h>
+
+/* Ignore unused weak functions which will have non zero offsets */
+#ifdef CONFIG_HAVE_FENTRY
+/* Add offset for endbr64 if IBT enabled */
+# define FTRACE_MCOUNT_MAX_OFFSET	ENDBR_INSN_SIZE
+#endif
+
 #ifdef CONFIG_DYNAMIC_FTRACE
 #define ARCH_SUPPORTS_FTRACE_OPS 1
 #endif
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index d653ef4febc5..f20704a44875 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -3654,6 +3654,31 @@ static void add_trampoline_func(struct seq_file *m, struct ftrace_ops *ops,
 	seq_printf(m, " ->%pS", ptr);
 }
 
+#ifdef FTRACE_MCOUNT_MAX_OFFSET
+static int print_rec(struct seq_file *m, unsigned long ip)
+{
+	unsigned long offset;
+	char str[KSYM_SYMBOL_LEN];
+	char *modname;
+	const char *ret;
+
+	ret = kallsyms_lookup(ip, NULL, &offset, &modname, str);
+	if (!ret || offset > FTRACE_MCOUNT_MAX_OFFSET)
+		return -1;
+
+	seq_puts(m, str);
+	if (modname)
+		seq_printf(m, " [%s]", modname);
+	return 0;
+}
+#else
+static int print_rec(struct seq_file *m, unsigned long ip)
+{
+	seq_printf(m, "%ps", (void *)ip);
+	return 0;
+}
+#endif
+
 static int t_show(struct seq_file *m, void *v)
 {
 	struct ftrace_iterator *iter = m->private;
@@ -3678,7 +3703,9 @@ static int t_show(struct seq_file *m, void *v)
 	if (!rec)
 		return 0;
 
-	seq_printf(m, "%ps", (void *)rec->ip);
+	if (print_rec(m, rec->ip))
+		return 0;
+
 	if (iter->flags & FTRACE_ITER_ENABLED) {
 		struct ftrace_ops *ops;
 
@@ -3996,6 +4023,24 @@ add_rec_by_index(struct ftrace_hash *hash, struct ftrace_glob *func_g,
 	return 0;
 }
 
+#ifdef FTRACE_MCOUNT_MAX_OFFSET
+static int lookup_ip(unsigned long ip, char **modname, char *str)
+{
+	unsigned long offset;
+
+	kallsyms_lookup(ip, NULL, &offset, modname, str);
+	if (offset > FTRACE_MCOUNT_MAX_OFFSET)
+		return -1;
+	return 0;
+}
+#else
+static int lookup_ip(unsigned long ip, char **modname, char *str)
+{
+	kallsyms_lookup(ip, NULL, NULL, modname, str);
+	return 0;
+}
+#endif
+
 static int
 ftrace_match_record(struct dyn_ftrace *rec, struct ftrace_glob *func_g,
 		    struct ftrace_glob *mod_g, int exclude_mod)
@@ -4003,7 +4048,8 @@ ftrace_match_record(struct dyn_ftrace *rec, struct ftrace_glob *func_g,
 	char str[KSYM_SYMBOL_LEN];
 	char *modname;
 
-	kallsyms_lookup(rec->ip, NULL, NULL, &modname, str);
+	if (lookup_ip(rec->ip, &modname, str))
+		return 0;
 
 	if (mod_g) {
 		int mod_matches = (modname) ? ftrace_match(modname, mod_g) : 0;