From patchwork Wed Aug 28 14:41:48 2024
X-Patchwork-Submitter: Mathieu Desnoyers
X-Patchwork-Id: 13781391
From: Mathieu Desnoyers
To: Steven Rostedt, Masami Hiramatsu
Cc: linux-kernel@vger.kernel.org, Mathieu Desnoyers, Peter Zijlstra,
    Alexei Starovoitov, Yonghong Song,
    "Paul E. McKenney", Ingo Molnar, Arnaldo Carvalho de Melo,
    Mark Rutland, Alexander Shishkin, Namhyung Kim, bpf@vger.kernel.org,
    Joel Fernandes, linux-trace-kernel@vger.kernel.org, Michael Jeanson
Subject: [PATCH v6 1/5] tracing: Introduce faultable tracepoints
Date: Wed, 28 Aug 2024 10:41:48 -0400
Message-Id: <20240828144153.829582-2-mathieu.desnoyers@efficios.com>
In-Reply-To: <20240828144153.829582-1-mathieu.desnoyers@efficios.com>
References: <20240828144153.829582-1-mathieu.desnoyers@efficios.com>

When invoked from system call enter/exit instrumentation, accessing
user-space data is a common use-case for tracers. However, tracepoints
currently disable preemption around the iteration over the registered
tracepoint probes and the invocation of the probe callbacks, which
prevents tracers from handling page faults.

Extend the tracepoint and trace event APIs to allow defining a faultable
tracepoint which invokes its callback with preemption enabled. Also
extend the tracepoint API to allow tracers to request that specific
probes be connected to those faultable tracepoints. When the
TRACEPOINT_MAY_FAULT flag is provided on registration, the probe
callback is called with preemption enabled and is allowed to take page
faults. Faultable probes can only be registered on faultable
tracepoints, and non-faultable probes on non-faultable tracepoints.

The Tasks Trace RCU mechanism is used to synchronize read-side
marshalling of the registered probes with respect to faultable probe
unregistration and teardown.

Link: https://lore.kernel.org/lkml/20231002202531.3160-1-mathieu.desnoyers@efficios.com/
Co-developed-by: Michael Jeanson
Signed-off-by: Mathieu Desnoyers
Signed-off-by: Michael Jeanson
Reviewed-by: Masami Hiramatsu (Google)
Cc: Steven Rostedt
Cc: Masami Hiramatsu
Cc: Peter Zijlstra
Cc: Alexei Starovoitov
Cc: Yonghong Song
Cc: Paul E. McKenney
Cc: Ingo Molnar
Cc: Arnaldo Carvalho de Melo
Cc: Mark Rutland
Cc: Alexander Shishkin
Cc: Namhyung Kim
Cc: bpf@vger.kernel.org
Cc: Joel Fernandes
---
Changes since v1:
- Clean up the __DO_TRACE() implementation.
- Rename "sleepable tracepoints" to "faultable tracepoints", MAYSLEEP to
  MAYFAULT, and use might_fault() rather than might_sleep(), to properly
  convey that the tracepoints are meant to be able to take a page fault,
  which requires being able to sleep *and* to hold the mmap_sem.

Changes since v2:
- Rename MAYFAULT to MAY_FAULT.
- Rebased on 6.5.5.
- Introduce the MAY_EXIST tracepoint flag.

Changes since v3:
- Rebased on 6.6.2.

Changes since v4:
- Rebased on 6.9.6.
- Simplify the flag check in tracepoint_probe_register_prio_flags().
- Update the MAY_EXIST flag description.

Changes since v5:
- Rebased on v6.11-rc5.
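For illustration only (not part of the patch), here is a minimal sketch of how
a tracer could use the API introduced here. The function names
my_sys_enter_probe() and my_tracer_init() are made up, and sys_enter only
becomes a faultable tracepoint once patch 5/5 of this series is applied; until
then the registration below would fail with -EINVAL because the flags do not
match the tracepoint.

#include <linux/init.h>
#include <linux/sched.h>
#include <linux/unistd.h>
#include <linux/tracepoint.h>
#include <linux/uaccess.h>
#include <asm/syscall.h>
#include <trace/events/syscalls.h>

/* The probe runs with preemption enabled, so faulting in user memory is allowed. */
static void my_sys_enter_probe(void *data, struct pt_regs *regs, long id)
{
	unsigned long args[6];
	char path[128];

	if (id != __NR_openat)
		return;
	syscall_get_arguments(current, regs, args);
	/* Fault in the pathname argument of openat(2) if it is not resident. */
	if (strncpy_from_user(path, (const char __user *)args[1], sizeof(path)) < 0)
		return;
	/* ... hand "path" off to the tracer back-end ... */
}

static int __init my_tracer_init(void)
{
	/* Returns -EINVAL if the MAY_FAULT flag does not match the tracepoint. */
	return register_trace_prio_flags_sys_enter(my_sys_enter_probe, NULL,
						   TRACEPOINT_DEFAULT_PRIO,
						   TRACEPOINT_MAY_FAULT);
}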
--- include/linux/tracepoint-defs.h | 14 ++++++ include/linux/tracepoint.h | 88 +++++++++++++++++++++++---------- include/trace/define_trace.h | 7 +++ include/trace/trace_events.h | 6 +++ init/Kconfig | 1 + kernel/trace/bpf_trace.c | 4 +- kernel/trace/trace_fprobe.c | 5 +- kernel/tracepoint.c | 65 ++++++++++++++---------- 8 files changed, 136 insertions(+), 54 deletions(-) diff --git a/include/linux/tracepoint-defs.h b/include/linux/tracepoint-defs.h index 4dc4955f0fbf..94e39c86b49f 100644 --- a/include/linux/tracepoint-defs.h +++ b/include/linux/tracepoint-defs.h @@ -29,6 +29,19 @@ struct tracepoint_func { int prio; }; +/** + * enum tracepoint_flags - Tracepoint flags + * @TRACEPOINT_MAY_EXIST: On registration, don't warn if the tracepoint is + * already registered. + * @TRACEPOINT_MAY_FAULT: The tracepoint probe callback will be called with + * preemption enabled, and is allowed to take page + * faults. + */ +enum tracepoint_flags { + TRACEPOINT_MAY_EXIST = (1 << 0), + TRACEPOINT_MAY_FAULT = (1 << 1), +}; + struct tracepoint { const char *name; /* Tracepoint name */ struct static_key key; @@ -39,6 +52,7 @@ struct tracepoint { int (*regfunc)(void); void (*unregfunc)(void); struct tracepoint_func __rcu *funcs; + unsigned int flags; }; #ifdef CONFIG_HAVE_ARCH_PREL32_RELOCATIONS diff --git a/include/linux/tracepoint.h b/include/linux/tracepoint.h index 6be396bb4297..7ae5496a800c 100644 --- a/include/linux/tracepoint.h +++ b/include/linux/tracepoint.h @@ -17,6 +17,7 @@ #include #include #include +#include #include #include @@ -40,17 +41,10 @@ extern int tracepoint_probe_register_prio(struct tracepoint *tp, void *probe, void *data, int prio); extern int -tracepoint_probe_register_prio_may_exist(struct tracepoint *tp, void *probe, void *data, - int prio); +tracepoint_probe_register_prio_flags(struct tracepoint *tp, void *probe, void *data, + int prio, unsigned int flags); extern int tracepoint_probe_unregister(struct tracepoint *tp, void *probe, void *data); -static inline int -tracepoint_probe_register_may_exist(struct tracepoint *tp, void *probe, - void *data) -{ - return tracepoint_probe_register_prio_may_exist(tp, probe, data, - TRACEPOINT_DEFAULT_PRIO); -} extern void for_each_kernel_tracepoint(void (*fct)(struct tracepoint *tp, void *priv), void *priv); @@ -89,6 +83,7 @@ int unregister_tracepoint_module_notifier(struct notifier_block *nb) #ifdef CONFIG_TRACEPOINTS static inline void tracepoint_synchronize_unregister(void) { + synchronize_rcu_tasks_trace(); synchronize_srcu(&tracepoint_srcu); synchronize_rcu(); } @@ -191,9 +186,10 @@ static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p) * it_func[0] is never NULL because there is at least one element in the array * when the array itself is non NULL. 
*/ -#define __DO_TRACE(name, args, cond, rcuidle) \ +#define __DO_TRACE(name, args, cond, rcuidle, tp_flags) \ do { \ int __maybe_unused __idx = 0; \ + bool mayfault = (tp_flags) & TRACEPOINT_MAY_FAULT; \ \ if (!(cond)) \ return; \ @@ -202,8 +198,12 @@ static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p) "Bad RCU usage for tracepoint")) \ return; \ \ - /* keep srcu and sched-rcu usage consistent */ \ - preempt_disable_notrace(); \ + if (mayfault) { \ + rcu_read_lock_trace(); \ + } else { \ + /* keep srcu and sched-rcu usage consistent */ \ + preempt_disable_notrace(); \ + } \ \ /* \ * For rcuidle callers, use srcu since sched-rcu \ @@ -221,20 +221,23 @@ static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p) srcu_read_unlock_notrace(&tracepoint_srcu, __idx);\ } \ \ - preempt_enable_notrace(); \ + if (mayfault) \ + rcu_read_unlock_trace(); \ + else \ + preempt_enable_notrace(); \ } while (0) #ifndef MODULE -#define __DECLARE_TRACE_RCU(name, proto, args, cond) \ +#define __DECLARE_TRACE_RCU(name, proto, args, cond, tp_flags) \ static inline void trace_##name##_rcuidle(proto) \ { \ if (static_key_false(&__tracepoint_##name.key)) \ __DO_TRACE(name, \ TP_ARGS(args), \ - TP_CONDITION(cond), 1); \ + TP_CONDITION(cond), 1, tp_flags); \ } #else -#define __DECLARE_TRACE_RCU(name, proto, args, cond) +#define __DECLARE_TRACE_RCU(name, proto, args, cond, tp_flags) #endif /* @@ -248,7 +251,7 @@ static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p) * site if it is not watching, as it will need to be active when the * tracepoint is enabled. */ -#define __DECLARE_TRACE(name, proto, args, cond, data_proto) \ +#define __DECLARE_TRACE(name, proto, args, cond, data_proto, tp_flags) \ extern int __traceiter_##name(data_proto); \ DECLARE_STATIC_CALL(tp_func_##name, __traceiter_##name); \ extern struct tracepoint __tracepoint_##name; \ @@ -257,14 +260,16 @@ static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p) if (static_key_false(&__tracepoint_##name.key)) \ __DO_TRACE(name, \ TP_ARGS(args), \ - TP_CONDITION(cond), 0); \ + TP_CONDITION(cond), 0, tp_flags); \ if (IS_ENABLED(CONFIG_LOCKDEP) && (cond)) { \ WARN_ONCE(!rcu_is_watching(), \ "RCU not watching for tracepoint"); \ } \ + if ((tp_flags) & TRACEPOINT_MAY_FAULT) \ + might_fault(); \ } \ __DECLARE_TRACE_RCU(name, PARAMS(proto), PARAMS(args), \ - PARAMS(cond)) \ + PARAMS(cond), tp_flags) \ static inline int \ register_trace_##name(void (*probe)(data_proto), void *data) \ { \ @@ -279,6 +284,13 @@ static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p) (void *)probe, data, prio); \ } \ static inline int \ + register_trace_prio_flags_##name(void (*probe)(data_proto), void *data, \ + int prio, unsigned int flags) \ + { \ + return tracepoint_probe_register_prio_flags(&__tracepoint_##name, \ + (void *)probe, data, prio, flags); \ + } \ + static inline int \ unregister_trace_##name(void (*probe)(data_proto), void *data) \ { \ return tracepoint_probe_unregister(&__tracepoint_##name,\ @@ -299,7 +311,7 @@ static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p) * structures, so we create an array of pointers that will be used for iteration * on the tracepoints. 
*/ -#define DEFINE_TRACE_FN(_name, _reg, _unreg, proto, args) \ +#define DEFINE_TRACE_FN_FLAGS(_name, _reg, _unreg, proto, args, tp_flags) \ static const char __tpstrtab_##_name[] \ __section("__tracepoints_strings") = #_name; \ extern struct static_call_key STATIC_CALL_KEY(tp_func_##_name); \ @@ -315,7 +327,9 @@ static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p) .probestub = &__probestub_##_name, \ .regfunc = _reg, \ .unregfunc = _unreg, \ - .funcs = NULL }; \ + .funcs = NULL, \ + .flags = (tp_flags), \ + }; \ __TRACEPOINT_ENTRY(_name); \ int __traceiter_##_name(void *__data, proto) \ { \ @@ -338,8 +352,11 @@ static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p) } \ DEFINE_STATIC_CALL(tp_func_##_name, __traceiter_##_name); +#define DEFINE_TRACE_FN(_name, _reg, _unreg, proto, args) \ + DEFINE_TRACE_FN_FLAGS(_name, _reg, _unreg, PARAMS(proto), PARAMS(args), 0) + #define DEFINE_TRACE(name, proto, args) \ - DEFINE_TRACE_FN(name, NULL, NULL, PARAMS(proto), PARAMS(args)); + DEFINE_TRACE_FN(name, NULL, NULL, PARAMS(proto), PARAMS(args)) #define EXPORT_TRACEPOINT_SYMBOL_GPL(name) \ EXPORT_SYMBOL_GPL(__tracepoint_##name); \ @@ -352,7 +369,7 @@ static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p) #else /* !TRACEPOINTS_ENABLED */ -#define __DECLARE_TRACE(name, proto, args, cond, data_proto) \ +#define __DECLARE_TRACE(name, proto, args, cond, data_proto, tp_flags) \ static inline void trace_##name(proto) \ { } \ static inline void trace_##name##_rcuidle(proto) \ @@ -364,6 +381,18 @@ static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p) return -ENOSYS; \ } \ static inline int \ + register_trace_prio_##name(void (*probe)(data_proto), \ + void *data, int prio) \ + { \ + return -ENOSYS; \ + } \ + static inline int \ + register_trace_prio_flags_##name(void (*probe)(data_proto), \ + void *data, int prio, unsigned int flags) \ + { \ + return -ENOSYS; \ + } \ + static inline int \ unregister_trace_##name(void (*probe)(data_proto), \ void *data) \ { \ @@ -378,6 +407,7 @@ static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p) return false; \ } +#define DEFINE_TRACE_FN_FLAGS(name, reg, unreg, proto, args, tp_flags) #define DEFINE_TRACE_FN(name, reg, unreg, proto, args) #define DEFINE_TRACE(name, proto, args) #define EXPORT_TRACEPOINT_SYMBOL_GPL(name) @@ -432,12 +462,17 @@ static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p) #define DECLARE_TRACE(name, proto, args) \ __DECLARE_TRACE(name, PARAMS(proto), PARAMS(args), \ cpu_online(raw_smp_processor_id()), \ - PARAMS(void *__data, proto)) + PARAMS(void *__data, proto), 0) + +#define DECLARE_TRACE_MAY_FAULT(name, proto, args) \ + __DECLARE_TRACE(name, PARAMS(proto), PARAMS(args), \ + cpu_online(raw_smp_processor_id()), \ + PARAMS(void *__data, proto), TRACEPOINT_MAY_FAULT) #define DECLARE_TRACE_CONDITION(name, proto, args, cond) \ __DECLARE_TRACE(name, PARAMS(proto), PARAMS(args), \ cpu_online(raw_smp_processor_id()) && (PARAMS(cond)), \ - PARAMS(void *__data, proto)) + PARAMS(void *__data, proto), 0) #define TRACE_EVENT_FLAGS(event, flag) @@ -568,6 +603,9 @@ static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p) #define TRACE_EVENT_FN(name, proto, args, struct, \ assign, print, reg, unreg) \ DECLARE_TRACE(name, PARAMS(proto), PARAMS(args)) +#define TRACE_EVENT_FN_MAY_FAULT(name, proto, args, struct, \ + assign, print, reg, unreg) \ + DECLARE_TRACE_MAY_FAULT(name, PARAMS(proto), PARAMS(args)) #define 
TRACE_EVENT_FN_COND(name, proto, args, cond, struct, \ assign, print, reg, unreg) \ DECLARE_TRACE_CONDITION(name, PARAMS(proto), \ diff --git a/include/trace/define_trace.h b/include/trace/define_trace.h index 00723935dcc7..1b8ca143724a 100644 --- a/include/trace/define_trace.h +++ b/include/trace/define_trace.h @@ -41,6 +41,12 @@ assign, print, reg, unreg) \ DEFINE_TRACE_FN(name, reg, unreg, PARAMS(proto), PARAMS(args)) +#undef TRACE_EVENT_FN_MAY_FAULT +#define TRACE_EVENT_FN_MAY_FAULT(name, proto, args, tstruct, \ + assign, print, reg, unreg) \ + DEFINE_TRACE_FN_FLAGS(name, reg, unreg, PARAMS(proto), \ + PARAMS(args), TRACEPOINT_MAY_FAULT) + #undef TRACE_EVENT_FN_COND #define TRACE_EVENT_FN_COND(name, proto, args, cond, tstruct, \ assign, print, reg, unreg) \ @@ -106,6 +112,7 @@ #undef TRACE_EVENT #undef TRACE_EVENT_FN +#undef TRACE_EVENT_FN_MAY_FAULT #undef TRACE_EVENT_FN_COND #undef TRACE_EVENT_CONDITION #undef TRACE_EVENT_NOP diff --git a/include/trace/trace_events.h b/include/trace/trace_events.h index c2f9cabf154d..df590eea8ae4 100644 --- a/include/trace/trace_events.h +++ b/include/trace/trace_events.h @@ -77,6 +77,12 @@ TRACE_EVENT(name, PARAMS(proto), PARAMS(args), \ PARAMS(tstruct), PARAMS(assign), PARAMS(print)) \ +#undef TRACE_EVENT_FN_MAY_FAULT +#define TRACE_EVENT_FN_MAY_FAULT(name, proto, args, tstruct, \ + assign, print, reg, unreg) \ + TRACE_EVENT(name, PARAMS(proto), PARAMS(args), \ + PARAMS(tstruct), PARAMS(assign), PARAMS(print)) \ + #undef TRACE_EVENT_FN_COND #define TRACE_EVENT_FN_COND(name, proto, args, cond, tstruct, \ assign, print, reg, unreg) \ diff --git a/init/Kconfig b/init/Kconfig index 5783a0b87517..72e13ee73c43 100644 --- a/init/Kconfig +++ b/init/Kconfig @@ -1936,6 +1936,7 @@ config BINDGEN_VERSION_TEXT # config TRACEPOINTS bool + select TASKS_TRACE_RCU source "kernel/Kconfig.kexec" diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c index cd098846e251..c77eb80cbd7f 100644 --- a/kernel/trace/bpf_trace.c +++ b/kernel/trace/bpf_trace.c @@ -2471,7 +2471,9 @@ int bpf_probe_register(struct bpf_raw_event_map *btp, struct bpf_raw_tp_link *li if (prog->aux->max_tp_access > btp->writable_size) return -EINVAL; - return tracepoint_probe_register_may_exist(tp, (void *)btp->bpf_func, link); + return tracepoint_probe_register_prio_flags(tp, (void *)btp->bpf_func, + link, TRACEPOINT_DEFAULT_PRIO, + TRACEPOINT_MAY_EXIST); } int bpf_probe_unregister(struct bpf_raw_event_map *btp, struct bpf_raw_tp_link *link) diff --git a/kernel/trace/trace_fprobe.c b/kernel/trace/trace_fprobe.c index 62e6a8f4aae9..f4f77dfed565 100644 --- a/kernel/trace/trace_fprobe.c +++ b/kernel/trace/trace_fprobe.c @@ -705,8 +705,9 @@ static int __register_trace_fprobe(struct trace_fprobe *tf) * At first, put __probestub_##TP function on the tracepoint * and put a fprobe on the stub function. */ - ret = tracepoint_probe_register_prio_may_exist(tpoint, - tpoint->probestub, NULL, 0); + ret = tracepoint_probe_register_prio_flags(tpoint, + tpoint->probestub, NULL, 0, + TRACEPOINT_MAY_EXIST); if (ret < 0) return ret; return register_fprobe_ips(&tf->fp, &ip, 1); diff --git a/kernel/tracepoint.c b/kernel/tracepoint.c index 8d1507dd0724..cc5e71383c4d 100644 --- a/kernel/tracepoint.c +++ b/kernel/tracepoint.c @@ -111,11 +111,16 @@ static inline void *allocate_probes(int count) return p == NULL ? 
NULL : p->probes; } -static void srcu_free_old_probes(struct rcu_head *head) +static void rcu_tasks_trace_free_old_probes(struct rcu_head *head) { kfree(container_of(head, struct tp_probes, rcu)); } +static void srcu_free_old_probes(struct rcu_head *head) +{ + call_rcu_tasks_trace(head, rcu_tasks_trace_free_old_probes); +} + static void rcu_free_old_probes(struct rcu_head *head) { call_srcu(&tracepoint_srcu, head, srcu_free_old_probes); @@ -136,7 +141,7 @@ static __init int release_early_probes(void) return 0; } -/* SRCU is initialized at core_initcall */ +/* SRCU and Tasks Trace RCU are initialized at core_initcall */ postcore_initcall(release_early_probes); static inline void release_probes(struct tracepoint_func *old) @@ -146,8 +151,9 @@ static inline void release_probes(struct tracepoint_func *old) struct tp_probes, probes[0]); /* - * We can't free probes if SRCU is not initialized yet. - * Postpone the freeing till after SRCU is initialized. + * We can't free probes if SRCU and Tasks Trace RCU are not + * initialized yet. Postpone the freeing till after both are + * initialized. */ if (unlikely(!ok_to_free_tracepoints)) { tp_probes->rcu.next = early_probes; @@ -156,10 +162,9 @@ static inline void release_probes(struct tracepoint_func *old) } /* - * Tracepoint probes are protected by both sched RCU and SRCU, - * by calling the SRCU callback in the sched RCU callback we - * cover both cases. So let us chain the SRCU and sched RCU - * callbacks to wait for both grace periods. + * Tracepoint probes are protected by sched RCU, SRCU and + * Tasks Trace RCU by chaining the callbacks we cover all three + * cases and wait for all three grace periods. */ call_rcu(&tp_probes->rcu, rcu_free_old_probes); } @@ -460,30 +465,45 @@ static int tracepoint_remove_func(struct tracepoint *tp, } /** - * tracepoint_probe_register_prio_may_exist - Connect a probe to a tracepoint with priority + * tracepoint_probe_register_prio_flags - Connect a probe to a tracepoint with priority and flags * @tp: tracepoint * @probe: probe handler * @data: tracepoint data * @prio: priority of this function over other registered functions + * @flags: tracepoint flags argument (enum tracepoint_flags bits) * - * Same as tracepoint_probe_register_prio() except that it will not warn - * if the tracepoint is already registered. + * Returns 0 if ok, error value on error. + * Note: if @tp is within a module, the caller is responsible for + * unregistering the probe before the module is gone. This can be + * performed either with a tracepoint module going notifier, or from + * within module exit functions. */ -int tracepoint_probe_register_prio_may_exist(struct tracepoint *tp, void *probe, - void *data, int prio) +int tracepoint_probe_register_prio_flags(struct tracepoint *tp, void *probe, + void *data, int prio, unsigned int flags) { struct tracepoint_func tp_func; int ret; + /* + * For a probe to be registered to a tracepoint they must share the + * same MAY_FAULT flag value. + */ + if ((tp->flags & TRACEPOINT_MAY_FAULT) != (flags & TRACEPOINT_MAY_FAULT)) + return -EINVAL; + mutex_lock(&tracepoints_mutex); tp_func.func = probe; tp_func.data = data; tp_func.prio = prio; - ret = tracepoint_add_func(tp, &tp_func, prio, false); + /* + * When the MAY_EXIST flag is set, don't warn if the tracepoint is + * already registered. 
+ */ + ret = tracepoint_add_func(tp, &tp_func, prio, flags & TRACEPOINT_MAY_EXIST); mutex_unlock(&tracepoints_mutex); return ret; } -EXPORT_SYMBOL_GPL(tracepoint_probe_register_prio_may_exist); +EXPORT_SYMBOL_GPL(tracepoint_probe_register_prio_flags); /** * tracepoint_probe_register_prio - Connect a probe to a tracepoint with priority @@ -501,16 +521,7 @@ EXPORT_SYMBOL_GPL(tracepoint_probe_register_prio_may_exist); int tracepoint_probe_register_prio(struct tracepoint *tp, void *probe, void *data, int prio) { - struct tracepoint_func tp_func; - int ret; - - mutex_lock(&tracepoints_mutex); - tp_func.func = probe; - tp_func.data = data; - tp_func.prio = prio; - ret = tracepoint_add_func(tp, &tp_func, prio, true); - mutex_unlock(&tracepoints_mutex); - return ret; + return tracepoint_probe_register_prio_flags(tp, probe, data, prio, 0); } EXPORT_SYMBOL_GPL(tracepoint_probe_register_prio); @@ -520,6 +531,8 @@ EXPORT_SYMBOL_GPL(tracepoint_probe_register_prio); * @probe: probe handler * @data: tracepoint data * + * Non-faultable probes can only be registered on non-faultable tracepoints. + * * Returns 0 if ok, error value on error. * Note: if @tp is within a module, the caller is responsible for * unregistering the probe before the module is gone. This can be @@ -528,7 +541,7 @@ EXPORT_SYMBOL_GPL(tracepoint_probe_register_prio); */ int tracepoint_probe_register(struct tracepoint *tp, void *probe, void *data) { - return tracepoint_probe_register_prio(tp, probe, data, TRACEPOINT_DEFAULT_PRIO); + return tracepoint_probe_register_prio_flags(tp, probe, data, TRACEPOINT_DEFAULT_PRIO, 0); } EXPORT_SYMBOL_GPL(tracepoint_probe_register); From patchwork Wed Aug 28 14:41:49 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mathieu Desnoyers X-Patchwork-Id: 13781390 Received: from smtpout.efficios.com (smtpout.efficios.com [167.114.26.122]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id A1B66189505; Wed, 28 Aug 2024 14:42:29 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=167.114.26.122 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724856151; cv=none; b=q+J3rh8hM6CY7zfwlBZupmTbl0ljlwb8GmTO0RREkKcTR1Vai9iiqmDf3UT4CLyj8ix87GCkO33jurqo6FAyE75BPvBq1ql+kBNLqwdgcLTjCj9BAsi7UZQn1EdY7nV5kDo1onq+t8EAgahP2l1Uu1p3/CJDz3N3ytu9FMPxQ4c= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724856151; c=relaxed/simple; bh=nIWHmNh21LNhWalKnly2O+Moa6z7+iGW/VJoYGdBemo=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=J10qpkuIrfG7uj+5tHV6doSVmGBwuQoCtY4B/xhGEAw+p/1G9iYpRYZD/Uo0uz2xmleZsAtE+B74eb+tmeT6CgM/txjp8dvE2etT2w6gIAa+E6cG8fv77uH1ZpLyMbQKbLha/j4hKUCzVW9/YvZZ0p+osaPW7C0Iu/qm5li3/3w= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=efficios.com; spf=pass smtp.mailfrom=efficios.com; dkim=pass (2048-bit key) header.d=efficios.com header.i=@efficios.com header.b=BjqE+mIo; arc=none smtp.client-ip=167.114.26.122 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=efficios.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=efficios.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=efficios.com header.i=@efficios.com header.b="BjqE+mIo" DKIM-Signature: v=1; a=rsa-sha256; 
c=relaxed/simple; d=efficios.com; s=smtpout1; t=1724856148; bh=nIWHmNh21LNhWalKnly2O+Moa6z7+iGW/VJoYGdBemo=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=BjqE+mIoEc4ip4cS/evjhYBVM4GGy6zzaJP1kiEe3xU5Qdhf6mBtWzV8sla/KmNYE FaI6PWstwqActMdVP7zhaaLYvUwJ9w0+MdsvyBZ8gNL82A/E1AfUT1xwLi5r5Cvsxx PRhvPaEjrSrx/z83hpxUYsFxtyqUo50kNVEoOtmy7VrhdA7haKI7GPOS7yfLJZBhQj eH1UHMFeY/6dUW/f0j3fcWVl6bSNgj+bstL+BmVPLAeTS0Jtbh5h3niPTMdFBFyLyg zH3naY9o5oaDzmG9jlarYx2NSBpz+FVZgftdF549VTB8BVl1r61PtPUn4lDTznysyw A11XZ7LcPIwVA== Received: from thinkos.internal.efficios.com (96-127-217-162.qc.cable.ebox.net [96.127.217.162]) by smtpout.efficios.com (Postfix) with ESMTPSA id 4Wv6ZX32wRz1JFR; Wed, 28 Aug 2024 10:42:28 -0400 (EDT) From: Mathieu Desnoyers To: Steven Rostedt , Masami Hiramatsu Cc: linux-kernel@vger.kernel.org, Mathieu Desnoyers , Peter Zijlstra , Alexei Starovoitov , Yonghong Song , "Paul E . McKenney" , Ingo Molnar , Arnaldo Carvalho de Melo , Mark Rutland , Alexander Shishkin , Namhyung Kim , bpf@vger.kernel.org, Joel Fernandes , linux-trace-kernel@vger.kernel.org, Michael Jeanson Subject: [PATCH v6 2/5] tracing/ftrace: Add support for faultable tracepoints Date: Wed, 28 Aug 2024 10:41:49 -0400 Message-Id: <20240828144153.829582-3-mathieu.desnoyers@efficios.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20240828144153.829582-1-mathieu.desnoyers@efficios.com> References: <20240828144153.829582-1-mathieu.desnoyers@efficios.com> Precedence: bulk X-Mailing-List: linux-trace-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 In preparation for converting system call enter/exit instrumentation into faultable tracepoints, make sure that ftrace can handle registering to such tracepoints by explicitly disabling preemption within the ftrace tracepoint probes to respect the current expectations within ftrace ring buffer code. This change does not yet allow ftrace to take page faults per se within its probe, but allows its existing probes to connect to faultable tracepoints. Link: https://lore.kernel.org/lkml/20231002202531.3160-1-mathieu.desnoyers@efficios.com/ Co-developed-by: Michael Jeanson Signed-off-by: Michael Jeanson Signed-off-by: Mathieu Desnoyers Reviewed-by: Masami Hiramatsu (Google) Cc: Steven Rostedt Cc: Masami Hiramatsu Cc: Peter Zijlstra Cc: Alexei Starovoitov Cc: Yonghong Song Cc: Paul E. McKenney Cc: Ingo Molnar Cc: Arnaldo Carvalho de Melo Cc: Mark Rutland Cc: Alexander Shishkin Cc: Namhyung Kim Cc: bpf@vger.kernel.org Cc: Joel Fernandes --- Changes since v4: - Use DEFINE_INACTIVE_GUARD. - Add brackets to multiline 'if' statements. Changes since v5: - Pass the TRACEPOINT_MAY_FAULT flag directly to tracepoint_probe_register_prio_flags. 
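The DEFINE_INACTIVE_GUARD()/activate_guard() helpers mentioned in the changelog
come from a companion cleanup.h change and are not shown in this series.
Open-coded, the conditional preemption handling that the generated trace event
probe needs for a MAY_FAULT event looks roughly like the sketch below (function
name and body are illustrative only):

/*
 * For a non-MAY_FAULT event, __DO_TRACE() already runs the probe with
 * preemption disabled. For a MAY_FAULT event, the probe is entered with
 * preemption enabled and must disable it itself around the ring-buffer
 * accesses, which still rely on preemption being off.
 */
static notrace void example_trace_event_probe(void *__data /* , proto */)
{
	const unsigned int tp_flags = TRACEPOINT_MAY_FAULT;	/* from the event class */

	if (tp_flags & TRACEPOINT_MAY_FAULT) {
		might_fault();
		preempt_disable_notrace();
	}

	/* ... trace_event_buffer_reserve(), assign fields, trace_event_buffer_commit() ... */

	if (tp_flags & TRACEPOINT_MAY_FAULT)
		preempt_enable_notrace();
}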
--- include/trace/trace_events.h | 64 ++++++++++++++++++++++++++++++++++-- kernel/trace/trace_events.c | 16 +++++---- 2 files changed, 71 insertions(+), 9 deletions(-) diff --git a/include/trace/trace_events.h b/include/trace/trace_events.h index df590eea8ae4..c887f7b6fbe9 100644 --- a/include/trace/trace_events.h +++ b/include/trace/trace_events.h @@ -45,6 +45,16 @@ PARAMS(print)); \ DEFINE_EVENT(name, name, PARAMS(proto), PARAMS(args)); +#undef TRACE_EVENT_MAY_FAULT +#define TRACE_EVENT_MAY_FAULT(name, proto, args, tstruct, assign, print) \ + DECLARE_EVENT_CLASS_MAY_FAULT(name, \ + PARAMS(proto), \ + PARAMS(args), \ + PARAMS(tstruct), \ + PARAMS(assign), \ + PARAMS(print)); \ + DEFINE_EVENT(name, name, PARAMS(proto), PARAMS(args)); + #include "stages/stage1_struct_define.h" #undef DECLARE_EVENT_CLASS @@ -57,6 +67,11 @@ \ static struct trace_event_class event_class_##name; +#undef DECLARE_EVENT_CLASS_MAY_FAULT +#define DECLARE_EVENT_CLASS_MAY_FAULT(name, proto, args, tstruct, assign, print) \ + DECLARE_EVENT_CLASS(name, PARAMS(proto), PARAMS(args), \ + PARAMS(tstruct), PARAMS(assign), PARAMS(print)) + #undef DEFINE_EVENT #define DEFINE_EVENT(template, name, proto, args) \ static struct trace_event_call __used \ @@ -80,7 +95,7 @@ #undef TRACE_EVENT_FN_MAY_FAULT #define TRACE_EVENT_FN_MAY_FAULT(name, proto, args, tstruct, \ assign, print, reg, unreg) \ - TRACE_EVENT(name, PARAMS(proto), PARAMS(args), \ + TRACE_EVENT_MAY_FAULT(name, PARAMS(proto), PARAMS(args), \ PARAMS(tstruct), PARAMS(assign), PARAMS(print)) \ #undef TRACE_EVENT_FN_COND @@ -123,6 +138,11 @@ tstruct; \ }; +#undef DECLARE_EVENT_CLASS_MAY_FAULT +#define DECLARE_EVENT_CLASS_MAY_FAULT(call, proto, args, tstruct, assign, print) \ + DECLARE_EVENT_CLASS(call, PARAMS(proto), PARAMS(args), \ + PARAMS(tstruct), PARAMS(assign), PARAMS(print)) + #undef DEFINE_EVENT #define DEFINE_EVENT(template, name, proto, args) @@ -214,6 +234,11 @@ static struct trace_event_functions trace_event_type_funcs_##call = { \ .trace = trace_raw_output_##call, \ }; +#undef DECLARE_EVENT_CLASS_MAY_FAULT +#define DECLARE_EVENT_CLASS_MAY_FAULT(call, proto, args, tstruct, assign, print) \ + DECLARE_EVENT_CLASS(call, PARAMS(proto), PARAMS(args), \ + PARAMS(tstruct), PARAMS(assign), PARAMS(print)) + #undef DEFINE_EVENT_PRINT #define DEFINE_EVENT_PRINT(template, call, proto, args, print) \ static notrace enum print_line_t \ @@ -250,6 +275,11 @@ static struct trace_event_fields trace_event_fields_##call[] = { \ tstruct \ {} }; +#undef DECLARE_EVENT_CLASS_MAY_FAULT +#define DECLARE_EVENT_CLASS_MAY_FAULT(call, proto, args, tstruct, assign, print) \ + DECLARE_EVENT_CLASS(call, PARAMS(proto), PARAMS(args), \ + PARAMS(tstruct), PARAMS(assign), PARAMS(print)) + #undef DEFINE_EVENT_PRINT #define DEFINE_EVENT_PRINT(template, name, proto, args, print) @@ -271,6 +301,11 @@ static inline notrace int trace_event_get_offsets_##call( \ return __data_size; \ } +#undef DECLARE_EVENT_CLASS_MAY_FAULT +#define DECLARE_EVENT_CLASS_MAY_FAULT(call, proto, args, tstruct, assign, print) \ + DECLARE_EVENT_CLASS(call, PARAMS(proto), PARAMS(args), \ + PARAMS(tstruct), PARAMS(assign), PARAMS(print)) + #include TRACE_INCLUDE(TRACE_INCLUDE_FILE) /* @@ -380,8 +415,8 @@ static inline notrace int trace_event_get_offsets_##call( \ #include "stages/stage6_event_callback.h" -#undef DECLARE_EVENT_CLASS -#define DECLARE_EVENT_CLASS(call, proto, args, tstruct, assign, print) \ +#undef _DECLARE_EVENT_CLASS +#define _DECLARE_EVENT_CLASS(call, proto, args, tstruct, assign, print, tp_flags) \ \ static 
notrace void \ trace_event_raw_event_##call(void *__data, proto) \ @@ -392,6 +427,13 @@ trace_event_raw_event_##call(void *__data, proto) \ struct trace_event_raw_##call *entry; \ int __data_size; \ \ + DEFINE_INACTIVE_GUARD(preempt_notrace, trace_event_guard); \ + \ + if ((tp_flags) & TRACEPOINT_MAY_FAULT) { \ + might_fault(); \ + activate_guard(preempt_notrace, trace_event_guard)(); \ + } \ + \ if (trace_trigger_soft_disabled(trace_file)) \ return; \ \ @@ -409,6 +451,17 @@ trace_event_raw_event_##call(void *__data, proto) \ \ trace_event_buffer_commit(&fbuffer); \ } + +#undef DECLARE_EVENT_CLASS +#define DECLARE_EVENT_CLASS(call, proto, args, tstruct, assign, print) \ + _DECLARE_EVENT_CLASS(call, PARAMS(proto), PARAMS(args), \ + PARAMS(tstruct), PARAMS(assign), PARAMS(print), 0) + +#undef DECLARE_EVENT_CLASS_MAY_FAULT +#define DECLARE_EVENT_CLASS_MAY_FAULT(call, proto, args, tstruct, assign, print) \ + _DECLARE_EVENT_CLASS(call, PARAMS(proto), PARAMS(args), \ + PARAMS(tstruct), PARAMS(assign), PARAMS(print), TRACEPOINT_MAY_FAULT) + /* * The ftrace_test_probe is compiled out, it is only here as a build time check * to make sure that if the tracepoint handling changes, the ftrace probe will @@ -440,6 +493,11 @@ static struct trace_event_class __used __refdata event_class_##call = { \ _TRACE_PERF_INIT(call) \ }; +#undef DECLARE_EVENT_CLASS_MAY_FAULT +#define DECLARE_EVENT_CLASS_MAY_FAULT(call, proto, args, tstruct, assign, print) \ + DECLARE_EVENT_CLASS(call, PARAMS(proto), PARAMS(args), \ + PARAMS(tstruct), PARAMS(assign), PARAMS(print)) + #undef DEFINE_EVENT #define DEFINE_EVENT(template, call, proto, args) \ \ diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c index 7266ec2a4eea..5722e426143d 100644 --- a/kernel/trace/trace_events.c +++ b/kernel/trace/trace_events.c @@ -532,9 +532,11 @@ int trace_event_reg(struct trace_event_call *call, WARN_ON(!(call->flags & TRACE_EVENT_FL_TRACEPOINT)); switch (type) { case TRACE_REG_REGISTER: - return tracepoint_probe_register(call->tp, - call->class->probe, - file); + return tracepoint_probe_register_prio_flags(call->tp, + call->class->probe, + file, + TRACEPOINT_DEFAULT_PRIO, + call->tp->flags & TRACEPOINT_MAY_FAULT); case TRACE_REG_UNREGISTER: tracepoint_probe_unregister(call->tp, call->class->probe, @@ -543,9 +545,11 @@ int trace_event_reg(struct trace_event_call *call, #ifdef CONFIG_PERF_EVENTS case TRACE_REG_PERF_REGISTER: - return tracepoint_probe_register(call->tp, - call->class->perf_probe, - call); + return tracepoint_probe_register_prio_flags(call->tp, + call->class->perf_probe, + call, + TRACEPOINT_DEFAULT_PRIO, + call->tp->flags & TRACEPOINT_MAY_FAULT); case TRACE_REG_PERF_UNREGISTER: tracepoint_probe_unregister(call->tp, call->class->perf_probe, From patchwork Wed Aug 28 14:41:50 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mathieu Desnoyers X-Patchwork-Id: 13781392 Received: from smtpout.efficios.com (smtpout.efficios.com [167.114.26.122]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 2B40516CD07; Wed, 28 Aug 2024 14:42:29 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=167.114.26.122 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724856151; cv=none; 
b=Qe8ZThuAuV9rKZN40TeQIDu1TmFCKHO1+0dHDYZFPbDO9CStEl9M57fw8atwEQpXHp445vECJ+9TlZfM+dZuAlwTLALBXIMYJu9PuYUfxvTMlFVJvbHryYVw18dcHKmLUWbvewBZ9KF6DfrRYf0tHN4U3SDyEZJyvqGZ/+h8hIU= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724856151; c=relaxed/simple; bh=qYu25IF2DRRkLXFYP/BOuUl3vp8/MRIdwdEWvHyJh6s=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=gXYTcyYUQ2t2t7KD6wvaB20ionbYgu3S3hh0D/zb+OGaVgmRMpE0NIbgZP4kYeLtVPSc1CJE63/tYXTb7Ya7Hi/osPtAcqRA4MAxq21VLVONEQI+NriRswolio1xaSLVnpuDOIdmbDFfXiNpXmNqp5RcbrXpx/3TY/Do3jvkThc= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=efficios.com; spf=pass smtp.mailfrom=efficios.com; dkim=pass (2048-bit key) header.d=efficios.com header.i=@efficios.com header.b=lrRma/CK; arc=none smtp.client-ip=167.114.26.122 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=efficios.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=efficios.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=efficios.com header.i=@efficios.com header.b="lrRma/CK" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=efficios.com; s=smtpout1; t=1724856149; bh=qYu25IF2DRRkLXFYP/BOuUl3vp8/MRIdwdEWvHyJh6s=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=lrRma/CKvO390sPevW2+NTdMxreftMK0TzesJbq9nl10EFjq4gPDvEllRQgViGOLB Mi8trhYJsLf0Ce7CNU9cv1vFDgdbgsbi46eRiGSlarPjmxoJMazoSiNSpWgVNaNvCZ JTVhwB4Ca6LS5D4TomCZo2QJRrBglXcEjqBjnko8YUybx7EhtvP54dpJhSbIJI0JPi yZ0DiiBdfn31uWNkc5KSIWxe4Jpy3iudFVfD2jldmNddmc95VxMPS8kiwHLNbzPZ79 jGRLH9wjHDAdFIj1xZ8sqn4oEQR4nC3PDagN92GQo7WHzEimp0v2y3n1/I1aCDZZfQ 2Oq5nHMPJRWRg== Received: from thinkos.internal.efficios.com (96-127-217-162.qc.cable.ebox.net [96.127.217.162]) by smtpout.efficios.com (Postfix) with ESMTPSA id 4Wv6ZX6KVzz1JFS; Wed, 28 Aug 2024 10:42:28 -0400 (EDT) From: Mathieu Desnoyers To: Steven Rostedt , Masami Hiramatsu Cc: linux-kernel@vger.kernel.org, Mathieu Desnoyers , Peter Zijlstra , Alexei Starovoitov , Yonghong Song , "Paul E . McKenney" , Ingo Molnar , Arnaldo Carvalho de Melo , Mark Rutland , Alexander Shishkin , Namhyung Kim , bpf@vger.kernel.org, Joel Fernandes , linux-trace-kernel@vger.kernel.org, Michael Jeanson Subject: [PATCH v6 3/5] tracing/bpf-trace: Add support for faultable tracepoints Date: Wed, 28 Aug 2024 10:41:50 -0400 Message-Id: <20240828144153.829582-4-mathieu.desnoyers@efficios.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20240828144153.829582-1-mathieu.desnoyers@efficios.com> References: <20240828144153.829582-1-mathieu.desnoyers@efficios.com> Precedence: bulk X-Mailing-List: linux-trace-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 In preparation for converting system call enter/exit instrumentation into faultable tracepoints, make sure that bpf can handle registering to such tracepoints by explicitly disabling preemption within the bpf tracepoint probes to respect the current expectations within bpf tracing code. This change does not yet allow bpf to take page faults per se within its probe, but allows its existing probes to connect to faultable tracepoints. 
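Concretely, for a MAY_FAULT event such as sys_enter, the generated BPF probe
behaves roughly like the open-coded sketch below (illustrative; the actual
macro in the diff further down uses the DEFINE_INACTIVE_GUARD()/activate_guard()
helpers rather than explicit preempt calls):

/* Approximate shape of __bpf_trace_sys_enter() for a MAY_FAULT event. */
static notrace void __bpf_trace_sys_enter(void *__data, struct pt_regs *regs, long id)
{
	/* Reached with preemption enabled on a faultable tracepoint. */
	might_fault();
	/* bpf_trace_run*() and the attached programs still expect preemption off. */
	preempt_disable_notrace();
	bpf_trace_run2(__data, (u64)(unsigned long)regs, (u64)id);
	preempt_enable_notrace();
}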
Link: https://lore.kernel.org/lkml/20231002202531.3160-1-mathieu.desnoyers@efficios.com/ Co-developed-by: Michael Jeanson Signed-off-by: Mathieu Desnoyers Signed-off-by: Michael Jeanson Reviewed-by: Masami Hiramatsu (Google) Cc: Steven Rostedt Cc: Masami Hiramatsu Cc: Peter Zijlstra Cc: Alexei Starovoitov Cc: Yonghong Song Cc: Paul E. McKenney Cc: Ingo Molnar Cc: Arnaldo Carvalho de Melo Cc: Mark Rutland Cc: Alexander Shishkin Cc: Namhyung Kim Cc: bpf@vger.kernel.org Cc: Joel Fernandes --- Changes since v4: - Use DEFINE_INACTIVE_GUARD. - Add brackets to multiline 'if' statements. Changes since v5: - Rebased on v6.11-rc5. - Pass the TRACEPOINT_MAY_FAULT flag directly to tracepoint_probe_register_prio_flags. --- include/trace/bpf_probe.h | 21 ++++++++++++++++----- kernel/trace/bpf_trace.c | 2 +- 2 files changed, 17 insertions(+), 6 deletions(-) diff --git a/include/trace/bpf_probe.h b/include/trace/bpf_probe.h index a2ea11cc912e..cc96dd1e7c3d 100644 --- a/include/trace/bpf_probe.h +++ b/include/trace/bpf_probe.h @@ -42,16 +42,27 @@ /* tracepoints with more than 12 arguments will hit build error */ #define CAST_TO_U64(...) CONCATENATE(__CAST, COUNT_ARGS(__VA_ARGS__))(__VA_ARGS__) -#define __BPF_DECLARE_TRACE(call, proto, args) \ +#define __BPF_DECLARE_TRACE(call, proto, args, tp_flags) \ static notrace void \ __bpf_trace_##call(void *__data, proto) \ { \ - CONCATENATE(bpf_trace_run, COUNT_ARGS(args))(__data, CAST_TO_U64(args)); \ + DEFINE_INACTIVE_GUARD(preempt_notrace, bpf_trace_guard); \ + \ + if ((tp_flags) & TRACEPOINT_MAY_FAULT) { \ + might_fault(); \ + activate_guard(preempt_notrace, bpf_trace_guard)(); \ + } \ + \ + CONCATENATE(bpf_trace_run, COUNT_ARGS(args))(__data, CAST_TO_U64(args)); \ } #undef DECLARE_EVENT_CLASS #define DECLARE_EVENT_CLASS(call, proto, args, tstruct, assign, print) \ - __BPF_DECLARE_TRACE(call, PARAMS(proto), PARAMS(args)) + __BPF_DECLARE_TRACE(call, PARAMS(proto), PARAMS(args), 0) + +#undef DECLARE_EVENT_CLASS_MAY_FAULT +#define DECLARE_EVENT_CLASS_MAY_FAULT(call, proto, args, tstruct, assign, print) \ + __BPF_DECLARE_TRACE(call, PARAMS(proto), PARAMS(args), TRACEPOINT_MAY_FAULT) /* * This part is compiled out, it is only here as a build time check @@ -105,13 +116,13 @@ static inline void bpf_test_buffer_##call(void) \ #undef DECLARE_TRACE #define DECLARE_TRACE(call, proto, args) \ - __BPF_DECLARE_TRACE(call, PARAMS(proto), PARAMS(args)) \ + __BPF_DECLARE_TRACE(call, PARAMS(proto), PARAMS(args), 0) \ __DEFINE_EVENT(call, call, PARAMS(proto), PARAMS(args), 0) #undef DECLARE_TRACE_WRITABLE #define DECLARE_TRACE_WRITABLE(call, proto, args, size) \ __CHECK_WRITABLE_BUF_SIZE(call, PARAMS(proto), PARAMS(args), size) \ - __BPF_DECLARE_TRACE(call, PARAMS(proto), PARAMS(args)) \ + __BPF_DECLARE_TRACE(call, PARAMS(proto), PARAMS(args), 0) \ __DEFINE_EVENT(call, call, PARAMS(proto), PARAMS(args), size) #include TRACE_INCLUDE(TRACE_INCLUDE_FILE) diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c index c77eb80cbd7f..ed07283d505b 100644 --- a/kernel/trace/bpf_trace.c +++ b/kernel/trace/bpf_trace.c @@ -2473,7 +2473,7 @@ int bpf_probe_register(struct bpf_raw_event_map *btp, struct bpf_raw_tp_link *li return tracepoint_probe_register_prio_flags(tp, (void *)btp->bpf_func, link, TRACEPOINT_DEFAULT_PRIO, - TRACEPOINT_MAY_EXIST); + TRACEPOINT_MAY_EXIST | (tp->flags & TRACEPOINT_MAY_FAULT)); } int bpf_probe_unregister(struct bpf_raw_event_map *btp, struct bpf_raw_tp_link *link) From patchwork Wed Aug 28 14:41:51 2024 Content-Type: text/plain; charset="utf-8" 
MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mathieu Desnoyers X-Patchwork-Id: 13781393 Received: from smtpout.efficios.com (smtpout.efficios.com [167.114.26.122]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id A8EB319AD4F; Wed, 28 Aug 2024 14:42:30 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=167.114.26.122 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724856152; cv=none; b=qP1d/C0aAGadWyXQJUGk9b46ozs7aWaaLAhAQ5CskuYUQkUFybhoictb1LJJaX0pq5z3ViMco6npIF4LNexaZLhDY844f7ssro8I4kkL6kIcc6n6WeNTL/Za1sYkMl8OfEsBM5N2x/okgAxTBs9tgd0jxBOi37GjAj1O1ZRpM2Q= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724856152; c=relaxed/simple; bh=149pMzz+KOT6+tFcYaXWQmFtg/yrorXY4zzPknm/1HE=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=rnBnx9Mbdi3Dz/Fx+CGlRFiqnKCPkJliPRgkIJzP1aczXmvV3/H3jcVMGPikyvV84CmZXZKlR1/4CuXWWl8zc7jNv7ITAUy64mlnvXUeMZe8JX6jv8yFKhplAuN5bwrK1CIV210NsvRKeNWi5waB+uTH1xluZpN/oVZK8t9bK6I= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=efficios.com; spf=pass smtp.mailfrom=efficios.com; dkim=pass (2048-bit key) header.d=efficios.com header.i=@efficios.com header.b=wbV0pGII; arc=none smtp.client-ip=167.114.26.122 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=efficios.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=efficios.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=efficios.com header.i=@efficios.com header.b="wbV0pGII" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=efficios.com; s=smtpout1; t=1724856149; bh=149pMzz+KOT6+tFcYaXWQmFtg/yrorXY4zzPknm/1HE=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=wbV0pGII5I7qz0otuIFcz3nTR8o1MzQGb/BAdLduX/0kHxGI3+p6bsFDn/FkZrjrG m+HKZ+WhxOtuL/l0Mu0wI60nKh/Yn2DHaYxxcZSPkXAsMzzWWJO4T0pIdclSEKUdNC S+zc1Im0vSVlVhtXjT/4egm7y8ko/SehrnU3VNo9JQrFcSapFQqibo5XBcMC4mOlVP d0z11zZY1CmFLHjWShdCF/anaX9KPmsdFy9+R1h0isBiv1ZRNOIYWN4geGHuCvrojo AcMx+ah2aE0lDBngq1N8dGacKnHHXzg55SHCWem6Mp9N1PgdwUQFlygtD3tZHuva8b YEj3eHA9A132Q== Received: from thinkos.internal.efficios.com (96-127-217-162.qc.cable.ebox.net [96.127.217.162]) by smtpout.efficios.com (Postfix) with ESMTPSA id 4Wv6ZY2lhWz1JFT; Wed, 28 Aug 2024 10:42:29 -0400 (EDT) From: Mathieu Desnoyers To: Steven Rostedt , Masami Hiramatsu Cc: linux-kernel@vger.kernel.org, Mathieu Desnoyers , Peter Zijlstra , Alexei Starovoitov , Yonghong Song , "Paul E . 
McKenney" , Ingo Molnar , Arnaldo Carvalho de Melo , Mark Rutland , Alexander Shishkin , Namhyung Kim , bpf@vger.kernel.org, Joel Fernandes , linux-trace-kernel@vger.kernel.org, Michael Jeanson Subject: [PATCH v6 4/5] tracing/perf: Add support for faultable tracepoints Date: Wed, 28 Aug 2024 10:41:51 -0400 Message-Id: <20240828144153.829582-5-mathieu.desnoyers@efficios.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20240828144153.829582-1-mathieu.desnoyers@efficios.com> References: <20240828144153.829582-1-mathieu.desnoyers@efficios.com> Precedence: bulk X-Mailing-List: linux-trace-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 In preparation for converting system call enter/exit instrumentation into faultable tracepoints, make sure that perf can handle registering to such tracepoints by explicitly disabling preemption within the perf tracepoint probes to respect the current expectations within perf ring buffer code. This change does not yet allow perf to take page faults per se within its probe, but allows its existing probes to connect to faultable tracepoints. Link: https://lore.kernel.org/lkml/20231002202531.3160-1-mathieu.desnoyers@efficios.com/ Co-developed-by: Michael Jeanson Signed-off-by: Mathieu Desnoyers Signed-off-by: Michael Jeanson Reviewed-by: Masami Hiramatsu (Google) Cc: Steven Rostedt Cc: Masami Hiramatsu Cc: Peter Zijlstra Cc: Alexei Starovoitov Cc: Yonghong Song Cc: Paul E. McKenney Cc: Ingo Molnar Cc: Arnaldo Carvalho de Melo Cc: Mark Rutland Cc: Alexander Shishkin Cc: Namhyung Kim Cc: bpf@vger.kernel.org Cc: Joel Fernandes --- Changes since v4: - Use DEFINE_INACTIVE_GUARD. --- include/trace/perf.h | 22 ++++++++++++++++++++-- 1 file changed, 20 insertions(+), 2 deletions(-) diff --git a/include/trace/perf.h b/include/trace/perf.h index 2c11181c82e0..161e1655b953 100644 --- a/include/trace/perf.h +++ b/include/trace/perf.h @@ -12,8 +12,8 @@ #undef __perf_task #define __perf_task(t) (__task = (t)) -#undef DECLARE_EVENT_CLASS -#define DECLARE_EVENT_CLASS(call, proto, args, tstruct, assign, print) \ +#undef _DECLARE_EVENT_CLASS +#define _DECLARE_EVENT_CLASS(call, proto, args, tstruct, assign, print, tp_flags) \ static notrace void \ perf_trace_##call(void *__data, proto) \ { \ @@ -28,6 +28,13 @@ perf_trace_##call(void *__data, proto) \ int __data_size; \ int rctx; \ \ + DEFINE_INACTIVE_GUARD(preempt_notrace, trace_event_guard); \ + \ + if ((tp_flags) & TRACEPOINT_MAY_FAULT) { \ + might_fault(); \ + activate_guard(preempt_notrace, trace_event_guard)(); \ + } \ + \ __data_size = trace_event_get_offsets_##call(&__data_offsets, args); \ \ head = this_cpu_ptr(event_call->perf_events); \ @@ -55,6 +62,17 @@ perf_trace_##call(void *__data, proto) \ head, __task); \ } +#undef DECLARE_EVENT_CLASS +#define DECLARE_EVENT_CLASS(call, proto, args, tstruct, assign, print) \ + _DECLARE_EVENT_CLASS(call, PARAMS(proto), PARAMS(args), \ + PARAMS(tstruct), PARAMS(assign), PARAMS(print), 0) + +#undef DECLARE_EVENT_CLASS_MAY_FAULT +#define DECLARE_EVENT_CLASS_MAY_FAULT(call, proto, args, tstruct, assign, print) \ + _DECLARE_EVENT_CLASS(call, PARAMS(proto), PARAMS(args), \ + PARAMS(tstruct), PARAMS(assign), PARAMS(print), \ + TRACEPOINT_MAY_FAULT) + /* * This part is compiled out, it is only here as a build time check * to make sure that if the tracepoint handling changes, the From patchwork Wed Aug 28 14:41:52 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mathieu Desnoyers 
X-Patchwork-Id: 13781394 Received: from smtpout.efficios.com (smtpout.efficios.com [167.114.26.122]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 643CB1A2548; Wed, 28 Aug 2024 14:42:31 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=167.114.26.122 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724856154; cv=none; b=NqnGdm5bGgl24Ag02nVkpKpt3P/4X1CaPhBoUyJygraSw0hsPTx18zi3Xa/OBJr86vtqcu2ujdLZ64gh1P1oRBUNB8cky4fuxME8dwEpfPJk3hSrN0QkMKQC7pApGZH4AgVAd5bfIso+yn/XiL0LioG6+Cv0jleCDrA75v2bDcE= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724856154; c=relaxed/simple; bh=cQBpL+pu8rPQAQ6DXfp1ZyC9FAqqvXyCZd6oDYKg2MM=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=CtGezAW6nrD9Y+uIgDh9iBk0+dLy2KJNHYIyXUtQMvJqgtpOqP18USP6Zq6DSifc2KMS2xnhY/8TNJ/I9s74Z4MAarHFsQf4UNkjmMRFA67LKSCtz78h6NF0hUWkPc1Wr6NARmSaH2G5mrTnbh+dz5oopWrzNxq4xvbah4l91q0= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=efficios.com; spf=pass smtp.mailfrom=efficios.com; dkim=pass (2048-bit key) header.d=efficios.com header.i=@efficios.com header.b=qlqYUGFc; arc=none smtp.client-ip=167.114.26.122 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=efficios.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=efficios.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=efficios.com header.i=@efficios.com header.b="qlqYUGFc" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=efficios.com; s=smtpout1; t=1724856150; bh=cQBpL+pu8rPQAQ6DXfp1ZyC9FAqqvXyCZd6oDYKg2MM=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=qlqYUGFcp8QcHwKPnmP+i2C8vgLZ/61hFePQTq4/8i2knKK+zYXO7GuY+rk4qRlt/ 38ycS64IYYRRWyDxLma8PsiXNbPDYMlbkK3Lhy90m32WcEvKQTFEKZZQAOd6NcKkPU xRX3OmhLU0lXIr+DgOsObVZXADXohW9CTkY7AxZHMePAyi/QkMxOv5gm8RMuqpKoOc 4JWeZU7nt2xIRM8IjddaCmEDwsdpZONeG0yVlVJcp3XFwZkCTMUx8Ysy0AO4LQayk/ zMLNfYmV82oiOS2BDi+cAWjczjrrBBZ/1B4op3bxiDzC+9hQWEhiwc8zx2Kgjx5PsK DHgG7E+7hT2Gg== Received: from thinkos.internal.efficios.com (96-127-217-162.qc.cable.ebox.net [96.127.217.162]) by smtpout.efficios.com (Postfix) with ESMTPSA id 4Wv6ZY61dHz1JFV; Wed, 28 Aug 2024 10:42:29 -0400 (EDT) From: Mathieu Desnoyers To: Steven Rostedt , Masami Hiramatsu Cc: linux-kernel@vger.kernel.org, Mathieu Desnoyers , Peter Zijlstra , Alexei Starovoitov , Yonghong Song , "Paul E . McKenney" , Ingo Molnar , Arnaldo Carvalho de Melo , Mark Rutland , Alexander Shishkin , Namhyung Kim , bpf@vger.kernel.org, Joel Fernandes , linux-trace-kernel@vger.kernel.org, Michael Jeanson Subject: [PATCH v6 5/5] tracing: Convert sys_enter/exit to faultable tracepoints Date: Wed, 28 Aug 2024 10:41:52 -0400 Message-Id: <20240828144153.829582-6-mathieu.desnoyers@efficios.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20240828144153.829582-1-mathieu.desnoyers@efficios.com> References: <20240828144153.829582-1-mathieu.desnoyers@efficios.com> Precedence: bulk X-Mailing-List: linux-trace-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Convert the definition of the system call enter/exit tracepoints to faultable tracepoints now that all upstream tracers handle it. This allows tracers to fault-in userspace system call arguments such as path strings within their probe callbacks. 
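Condensed, the pattern applied throughout the diff below is: each syscall probe
now disables preemption itself (since the tracepoint no longer does), and
registration passes the MAY_FAULT flag matching the converted tracepoint.
A sketch, with the probe body elided:

static void ftrace_syscall_enter(void *data, struct pt_regs *regs, long id)
{
	/*
	 * MAY_FAULT probe: called with preemption enabled, but the ring
	 * buffer and per-cpu data still require preemption to be disabled.
	 */
	guard(preempt_notrace)();

	/* ... existing syscall-entry recording ... */
}

	/* registration, as done in reg_event_syscall_enter(): */
	ret = register_trace_prio_flags_sys_enter(ftrace_syscall_enter, tr,
						  TRACEPOINT_DEFAULT_PRIO,
						  TRACEPOINT_MAY_FAULT);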
Link: https://lore.kernel.org/lkml/20231002202531.3160-1-mathieu.desnoyers@efficios.com/ Co-developed-by: Michael Jeanson Signed-off-by: Mathieu Desnoyers Signed-off-by: Michael Jeanson Reviewed-by: Masami Hiramatsu (Google) Cc: Steven Rostedt Cc: Masami Hiramatsu Cc: Peter Zijlstra Cc: Alexei Starovoitov Cc: Yonghong Song Cc: Paul E. McKenney Cc: Ingo Molnar Cc: Arnaldo Carvalho de Melo Cc: Mark Rutland Cc: Alexander Shishkin Cc: Namhyung Kim Cc: bpf@vger.kernel.org Cc: Joel Fernandes --- Since v4: - Use 'guard(preempt_notrace)'. - Add brackets to multiline 'if' statements. --- include/trace/events/syscalls.h | 4 +-- kernel/trace/trace_syscalls.c | 52 ++++++++++++++++++++++++++++----- 2 files changed, 46 insertions(+), 10 deletions(-) diff --git a/include/trace/events/syscalls.h b/include/trace/events/syscalls.h index b6e0cbc2c71f..dc30e3004818 100644 --- a/include/trace/events/syscalls.h +++ b/include/trace/events/syscalls.h @@ -15,7 +15,7 @@ #ifdef CONFIG_HAVE_SYSCALL_TRACEPOINTS -TRACE_EVENT_FN(sys_enter, +TRACE_EVENT_FN_MAY_FAULT(sys_enter, TP_PROTO(struct pt_regs *regs, long id), @@ -41,7 +41,7 @@ TRACE_EVENT_FN(sys_enter, TRACE_EVENT_FLAGS(sys_enter, TRACE_EVENT_FL_CAP_ANY) -TRACE_EVENT_FN(sys_exit, +TRACE_EVENT_FN_MAY_FAULT(sys_exit, TP_PROTO(struct pt_regs *regs, long ret), diff --git a/kernel/trace/trace_syscalls.c b/kernel/trace/trace_syscalls.c index 9c581d6da843..314666d663b6 100644 --- a/kernel/trace/trace_syscalls.c +++ b/kernel/trace/trace_syscalls.c @@ -299,6 +299,12 @@ static void ftrace_syscall_enter(void *data, struct pt_regs *regs, long id) int syscall_nr; int size; + /* + * Probe called with preemption enabled (may_fault), but ring buffer and + * per-cpu data require preemption to be disabled. + */ + guard(preempt_notrace)(); + syscall_nr = trace_get_syscall_nr(current, regs); if (syscall_nr < 0 || syscall_nr >= NR_syscalls) return; @@ -338,6 +344,12 @@ static void ftrace_syscall_exit(void *data, struct pt_regs *regs, long ret) struct trace_event_buffer fbuffer; int syscall_nr; + /* + * Probe called with preemption enabled (may_fault), but ring buffer and + * per-cpu data require preemption to be disabled. 
+ */ + guard(preempt_notrace)(); + syscall_nr = trace_get_syscall_nr(current, regs); if (syscall_nr < 0 || syscall_nr >= NR_syscalls) return; @@ -376,8 +388,11 @@ static int reg_event_syscall_enter(struct trace_event_file *file, if (WARN_ON_ONCE(num < 0 || num >= NR_syscalls)) return -ENOSYS; mutex_lock(&syscall_trace_lock); - if (!tr->sys_refcount_enter) - ret = register_trace_sys_enter(ftrace_syscall_enter, tr); + if (!tr->sys_refcount_enter) { + ret = register_trace_prio_flags_sys_enter(ftrace_syscall_enter, tr, + TRACEPOINT_DEFAULT_PRIO, + TRACEPOINT_MAY_FAULT); + } if (!ret) { rcu_assign_pointer(tr->enter_syscall_files[num], file); tr->sys_refcount_enter++; @@ -414,8 +429,11 @@ static int reg_event_syscall_exit(struct trace_event_file *file, if (WARN_ON_ONCE(num < 0 || num >= NR_syscalls)) return -ENOSYS; mutex_lock(&syscall_trace_lock); - if (!tr->sys_refcount_exit) - ret = register_trace_sys_exit(ftrace_syscall_exit, tr); + if (!tr->sys_refcount_exit) { + ret = register_trace_prio_flags_sys_exit(ftrace_syscall_exit, tr, + TRACEPOINT_DEFAULT_PRIO, + TRACEPOINT_MAY_FAULT); + } if (!ret) { rcu_assign_pointer(tr->exit_syscall_files[num], file); tr->sys_refcount_exit++; @@ -582,6 +600,12 @@ static void perf_syscall_enter(void *ignore, struct pt_regs *regs, long id) int rctx; int size; + /* + * Probe called with preemption enabled (may_fault), but ring buffer and + * per-cpu data require preemption to be disabled. + */ + guard(preempt_notrace)(); + syscall_nr = trace_get_syscall_nr(current, regs); if (syscall_nr < 0 || syscall_nr >= NR_syscalls) return; @@ -630,8 +654,11 @@ static int perf_sysenter_enable(struct trace_event_call *call) num = ((struct syscall_metadata *)call->data)->syscall_nr; mutex_lock(&syscall_trace_lock); - if (!sys_perf_refcount_enter) - ret = register_trace_sys_enter(perf_syscall_enter, NULL); + if (!sys_perf_refcount_enter) { + ret = register_trace_prio_flags_sys_enter(perf_syscall_enter, NULL, + TRACEPOINT_DEFAULT_PRIO, + TRACEPOINT_MAY_FAULT); + } if (ret) { pr_info("event trace: Could not activate syscall entry trace point"); } else { @@ -682,6 +709,12 @@ static void perf_syscall_exit(void *ignore, struct pt_regs *regs, long ret) int rctx; int size; + /* + * Probe called with preemption enabled (may_fault), but ring buffer and + * per-cpu data require preemption to be disabled. + */ + guard(preempt_notrace)(); + syscall_nr = trace_get_syscall_nr(current, regs); if (syscall_nr < 0 || syscall_nr >= NR_syscalls) return; @@ -727,8 +760,11 @@ static int perf_sysexit_enable(struct trace_event_call *call) num = ((struct syscall_metadata *)call->data)->syscall_nr; mutex_lock(&syscall_trace_lock); - if (!sys_perf_refcount_exit) - ret = register_trace_sys_exit(perf_syscall_exit, NULL); + if (!sys_perf_refcount_exit) { + ret = register_trace_prio_flags_sys_exit(perf_syscall_exit, NULL, + TRACEPOINT_DEFAULT_PRIO, + TRACEPOINT_MAY_FAULT); + } if (ret) { pr_info("event trace: Could not activate syscall exit trace point"); } else {