From: Mathieu Desnoyers
To: Steven Rostedt, Masami Hiramatsu
Cc: linux-kernel@vger.kernel.org, Mathieu Desnoyers, Peter Zijlstra,
    Alexei Starovoitov, Yonghong Song, Paul E. McKenney, Ingo Molnar,
    Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin,
    Namhyung Kim, bpf@vger.kernel.org, Joel Fernandes,
    linux-trace-kernel@vger.kernel.org, Michael Jeanson
McKenney" , Ingo Molnar , Arnaldo Carvalho de Melo , Mark Rutland , Alexander Shishkin , Namhyung Kim , bpf@vger.kernel.org, Joel Fernandes , linux-trace-kernel@vger.kernel.org, Michael Jeanson Subject: [PATCH v6 3/5] tracing/bpf-trace: Add support for faultable tracepoints Date: Wed, 28 Aug 2024 10:41:50 -0400 Message-Id: <20240828144153.829582-4-mathieu.desnoyers@efficios.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20240828144153.829582-1-mathieu.desnoyers@efficios.com> References: <20240828144153.829582-1-mathieu.desnoyers@efficios.com> Precedence: bulk X-Mailing-List: linux-trace-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 In preparation for converting system call enter/exit instrumentation into faultable tracepoints, make sure that bpf can handle registering to such tracepoints by explicitly disabling preemption within the bpf tracepoint probes to respect the current expectations within bpf tracing code. This change does not yet allow bpf to take page faults per se within its probe, but allows its existing probes to connect to faultable tracepoints. Link: https://lore.kernel.org/lkml/20231002202531.3160-1-mathieu.desnoyers@efficios.com/ Co-developed-by: Michael Jeanson Signed-off-by: Mathieu Desnoyers Signed-off-by: Michael Jeanson Reviewed-by: Masami Hiramatsu (Google) Cc: Steven Rostedt Cc: Masami Hiramatsu Cc: Peter Zijlstra Cc: Alexei Starovoitov Cc: Yonghong Song Cc: Paul E. McKenney Cc: Ingo Molnar Cc: Arnaldo Carvalho de Melo Cc: Mark Rutland Cc: Alexander Shishkin Cc: Namhyung Kim Cc: bpf@vger.kernel.org Cc: Joel Fernandes --- Changes since v4: - Use DEFINE_INACTIVE_GUARD. - Add brackets to multiline 'if' statements. Changes since v5: - Rebased on v6.11-rc5. - Pass the TRACEPOINT_MAY_FAULT flag directly to tracepoint_probe_register_prio_flags. --- include/trace/bpf_probe.h | 21 ++++++++++++++++----- kernel/trace/bpf_trace.c | 2 +- 2 files changed, 17 insertions(+), 6 deletions(-) diff --git a/include/trace/bpf_probe.h b/include/trace/bpf_probe.h index a2ea11cc912e..cc96dd1e7c3d 100644 --- a/include/trace/bpf_probe.h +++ b/include/trace/bpf_probe.h @@ -42,16 +42,27 @@ /* tracepoints with more than 12 arguments will hit build error */ #define CAST_TO_U64(...) 
diff --git a/include/trace/bpf_probe.h b/include/trace/bpf_probe.h
index a2ea11cc912e..cc96dd1e7c3d 100644
--- a/include/trace/bpf_probe.h
+++ b/include/trace/bpf_probe.h
@@ -42,16 +42,27 @@
 /* tracepoints with more than 12 arguments will hit build error */
 #define CAST_TO_U64(...) CONCATENATE(__CAST, COUNT_ARGS(__VA_ARGS__))(__VA_ARGS__)
 
-#define __BPF_DECLARE_TRACE(call, proto, args)				\
+#define __BPF_DECLARE_TRACE(call, proto, args, tp_flags)		\
 static notrace void							\
 __bpf_trace_##call(void *__data, proto)					\
 {									\
-	CONCATENATE(bpf_trace_run, COUNT_ARGS(args))(__data, CAST_TO_U64(args));	\
+	DEFINE_INACTIVE_GUARD(preempt_notrace, bpf_trace_guard);	\
+									\
+	if ((tp_flags) & TRACEPOINT_MAY_FAULT) {			\
+		might_fault();						\
+		activate_guard(preempt_notrace, bpf_trace_guard)();	\
+	}								\
+									\
+	CONCATENATE(bpf_trace_run, COUNT_ARGS(args))(__data, CAST_TO_U64(args));	\
 }
 
 #undef DECLARE_EVENT_CLASS
 #define DECLARE_EVENT_CLASS(call, proto, args, tstruct, assign, print)	\
-	__BPF_DECLARE_TRACE(call, PARAMS(proto), PARAMS(args))
+	__BPF_DECLARE_TRACE(call, PARAMS(proto), PARAMS(args), 0)
+
+#undef DECLARE_EVENT_CLASS_MAY_FAULT
+#define DECLARE_EVENT_CLASS_MAY_FAULT(call, proto, args, tstruct, assign, print) \
+	__BPF_DECLARE_TRACE(call, PARAMS(proto), PARAMS(args), TRACEPOINT_MAY_FAULT)
 
 /*
  * This part is compiled out, it is only here as a build time check
@@ -105,13 +116,13 @@ static inline void bpf_test_buffer_##call(void)				\
 
 #undef DECLARE_TRACE
 #define DECLARE_TRACE(call, proto, args)				\
-	__BPF_DECLARE_TRACE(call, PARAMS(proto), PARAMS(args))		\
+	__BPF_DECLARE_TRACE(call, PARAMS(proto), PARAMS(args), 0)	\
 	__DEFINE_EVENT(call, call, PARAMS(proto), PARAMS(args), 0)
 
 #undef DECLARE_TRACE_WRITABLE
 #define DECLARE_TRACE_WRITABLE(call, proto, args, size) \
 	__CHECK_WRITABLE_BUF_SIZE(call, PARAMS(proto), PARAMS(args), size) \
-	__BPF_DECLARE_TRACE(call, PARAMS(proto), PARAMS(args)) \
+	__BPF_DECLARE_TRACE(call, PARAMS(proto), PARAMS(args), 0) \
 	__DEFINE_EVENT(call, call, PARAMS(proto), PARAMS(args), size)
 
 #include TRACE_INCLUDE(TRACE_INCLUDE_FILE)
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index c77eb80cbd7f..ed07283d505b 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -2473,7 +2473,7 @@ int bpf_probe_register(struct bpf_raw_event_map *btp, struct bpf_raw_tp_link *li
 
 	return tracepoint_probe_register_prio_flags(tp, (void *)btp->bpf_func,
 						    link, TRACEPOINT_DEFAULT_PRIO,
-						    TRACEPOINT_MAY_EXIST);
+						    TRACEPOINT_MAY_EXIST | (tp->flags & TRACEPOINT_MAY_FAULT));
 }
 
 int bpf_probe_unregister(struct bpf_raw_event_map *btp, struct bpf_raw_tp_link *link)
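Also for illustration only: the bpf_trace.c hunk above just propagates
the faultable property of the tracepoint into the flags passed to
tracepoint_probe_register_prio_flags(). A hypothetical helper spelling
out that flag computation (the helper name is made up; tp->flags and
TRACEPOINT_MAY_FAULT come from earlier patches in this series):

static unsigned int bpf_probe_reg_flags(const struct tracepoint *tp)
{
	/*
	 * TRACEPOINT_MAY_FAULT is a single flag bit: the bitwise AND
	 * passes it through for faultable tracepoints and yields 0 for
	 * regular ones, so no conditional is needed and existing
	 * tracepoints register exactly as before.
	 */
	return TRACEPOINT_MAY_EXIST | (tp->flags & TRACEPOINT_MAY_FAULT);
}

Passing the flag at registration time is what lets existing bpf probes
connect to faultable tracepoints, per the commit message above; the
preemption handling itself lives entirely in the generated
__bpf_trace_##call() wrappers.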