Message ID | 5f8081030f6f5d5af56c93fff95f0a7fadde04ad.1684120990.git.zegao@tencent.com (mailing list archive)
---|---
State | Accepted |
Delegated to: | Masami Hiramatsu |
Series | Make fprobe + rethook immune to recursion
On Mon, 15 May 2023 11:26:40 +0800 Ze Gao <zegao2021@gmail.com> wrote:

> fprobe_handler and fprobe_kprobe_handler have guarded ftrace recursion
> detection but fprobe_exit_handler has not, which can introduce
> recursive calls if the fprobe exit callback calls any traceable
> functions. Checking in fprobe_handler or fprobe_kprobe_handler
> is not enough and misses this case.

Good catch! Yes, this can fix such a recursive call case, because if we
put a fprobe on the exit of func(), the recursive call happens as below:

func() {
} => rethook
  => fprobe_exit_handler()
   => fp->exit_handler() {
        func() {
        } => rethook
          => fprobe_exit_handler()
           => fp->exit_handler() {
                func() {
                } => rethook
                  ...

Note that this should not happen with fprobe-based events because all the
code (except for tests) under kernel/trace/ is marked notrace
automatically. kretprobe avoids this by setting itself to current_kprobe,
so the other kprobes recursively called from the rethook will be skipped.

> So add a recursion guard the same way as fprobe_handler, and also
> mark fprobe_exit_handler notrace. Since the ftrace recursion check does
> not employ ips, use entry_ip and entry_parent_ip here, the same as
> fprobe_handler.

Looks good to me.
Fixes: 5b0ab78998e3 ("fprobe: Add exit_handler support")
Cc: stable@vger.kernel.org
Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>

> Signed-off-by: Ze Gao <zegao@tencent.com>
> ---
>  kernel/trace/fprobe.c | 15 ++++++++++++++-
>  1 file changed, 14 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/trace/fprobe.c b/kernel/trace/fprobe.c
> index ad9a36c87ad9..cf982d4ab142 100644
> --- a/kernel/trace/fprobe.c
> +++ b/kernel/trace/fprobe.c
> @@ -17,6 +17,7 @@
>  struct fprobe_rethook_node {
>  	struct rethook_node node;
>  	unsigned long entry_ip;
> +	unsigned long entry_parent_ip;
>  	char data[];
>  };
>
> @@ -39,6 +40,7 @@ static inline notrace void __fprobe_handler(unsigned long ip, unsigned long
>  		}
>  		fpr = container_of(rh, struct fprobe_rethook_node, node);
>  		fpr->entry_ip = ip;
> +		fpr->entry_parent_ip = parent_ip;
>  		if (fp->entry_data_size)
>  			entry_data = fpr->data;
>  	}
>
> @@ -109,19 +111,30 @@ static void notrace fprobe_kprobe_handler(unsigned long ip, unsigned long parent
>  	ftrace_test_recursion_unlock(bit);
>  }
>
> -static void fprobe_exit_handler(struct rethook_node *rh, void *data,
> +static void notrace fprobe_exit_handler(struct rethook_node *rh, void *data,
>  				struct pt_regs *regs)
>  {
>  	struct fprobe *fp = (struct fprobe *)data;
>  	struct fprobe_rethook_node *fpr;
> +	int bit;
>
>  	if (!fp || fprobe_disabled(fp))
>  		return;
>
>  	fpr = container_of(rh, struct fprobe_rethook_node, node);
>
> +	/* we need to assure no calls to traceable functions in-between the
> +	 * end of fprobe_handler and the beginning of fprobe_exit_handler.
> +	 */
> +	bit = ftrace_test_recursion_trylock(fpr->entry_ip, fpr->entry_parent_ip);
> +	if (bit < 0) {
> +		fp->nmissed++;
> +		return;
> +	}
> +
>  	fp->exit_handler(fp, fpr->entry_ip, regs,
> 			 fp->entry_data_size ? (void *)fpr->data : NULL);
> +	ftrace_test_recursion_unlock(bit);
>  }
>  NOKPROBE_SYMBOL(fprobe_exit_handler);
>
> --
> 2.40.1
>
diff --git a/kernel/trace/fprobe.c b/kernel/trace/fprobe.c
index ad9a36c87ad9..cf982d4ab142 100644
--- a/kernel/trace/fprobe.c
+++ b/kernel/trace/fprobe.c
@@ -17,6 +17,7 @@
 struct fprobe_rethook_node {
 	struct rethook_node node;
 	unsigned long entry_ip;
+	unsigned long entry_parent_ip;
 	char data[];
 };

@@ -39,6 +40,7 @@ static inline notrace void __fprobe_handler(unsigned long ip, unsigned long
 		}
 		fpr = container_of(rh, struct fprobe_rethook_node, node);
 		fpr->entry_ip = ip;
+		fpr->entry_parent_ip = parent_ip;
 		if (fp->entry_data_size)
 			entry_data = fpr->data;
 	}

@@ -109,19 +111,30 @@ static void notrace fprobe_kprobe_handler(unsigned long ip, unsigned long parent
 	ftrace_test_recursion_unlock(bit);
 }

-static void fprobe_exit_handler(struct rethook_node *rh, void *data,
+static void notrace fprobe_exit_handler(struct rethook_node *rh, void *data,
 				struct pt_regs *regs)
 {
 	struct fprobe *fp = (struct fprobe *)data;
 	struct fprobe_rethook_node *fpr;
+	int bit;

 	if (!fp || fprobe_disabled(fp))
 		return;

 	fpr = container_of(rh, struct fprobe_rethook_node, node);

+	/* we need to assure no calls to traceable functions in-between the
+	 * end of fprobe_handler and the beginning of fprobe_exit_handler.
+	 */
+	bit = ftrace_test_recursion_trylock(fpr->entry_ip, fpr->entry_parent_ip);
+	if (bit < 0) {
+		fp->nmissed++;
+		return;
+	}
+
 	fp->exit_handler(fp, fpr->entry_ip, regs,
			 fp->entry_data_size ? (void *)fpr->data : NULL);
+	ftrace_test_recursion_unlock(bit);
 }
 NOKPROBE_SYMBOL(fprobe_exit_handler);
fprobe_handler and fprobe_kprobe_handler have guarded ftrace recursion
detection, but fprobe_exit_handler has not, which can introduce
recursive calls if the fprobe exit callback calls any traceable
functions. Checking in fprobe_handler or fprobe_kprobe_handler
is not enough and misses this case.

So add a recursion guard the same way as fprobe_handler, and also
mark fprobe_exit_handler notrace. Since the ftrace recursion check does
not employ ips, use entry_ip and entry_parent_ip here, the same as
fprobe_handler.

Signed-off-by: Ze Gao <zegao@tencent.com>
---
 kernel/trace/fprobe.c | 15 ++++++++++++++-
 1 file changed, 14 insertions(+), 1 deletion(-)