From patchwork Thu Feb 18 22:21:22 2021
X-Patchwork-Submitter: Michael Jeanson
X-Patchwork-Id: 12094415
X-Patchwork-Delegate: bpf@iogearbox.net
From: Michael Jeanson
To: linux-kernel@vger.kernel.org
Cc: Michael Jeanson, Mathieu Desnoyers, Steven Rostedt, Peter Zijlstra,
    Alexei Starovoitov, Yonghong Song,
McKenney" , Ingo Molnar , Arnaldo Carvalho de Melo , Mark Rutland , Alexander Shishkin , Jiri Olsa , Namhyung Kim , bpf@vger.kernel.org, Joel Fernandes Subject: [RFC PATCH 3/6] tracing: bpf-trace: add support for faultable tracepoints Date: Thu, 18 Feb 2021 17:21:22 -0500 Message-Id: <20210218222125.46565-4-mjeanson@efficios.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210218222125.46565-1-mjeanson@efficios.com> References: <20210218222125.46565-1-mjeanson@efficios.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net X-Patchwork-State: RFC In preparation for converting system call enter/exit instrumentation into faultable tracepoints, make sure that bpf can handle registering to such tracepoints by explicitly disabling preemption within the bpf tracepoint probes to respect the current expectations within bpf tracing code. This change does not yet allow bpf to take page faults per se within its probe, but allows its existing probes to connect to faultable tracepoints. Co-developed-by: Mathieu Desnoyers Signed-off-by: Mathieu Desnoyers Signed-off-by: Michael Jeanson Cc: Steven Rostedt (VMware) Cc: Peter Zijlstra Cc: Alexei Starovoitov Cc: Yonghong Song Cc: Paul E. McKenney Cc: Ingo Molnar Cc: Arnaldo Carvalho de Melo Cc: Mark Rutland Cc: Alexander Shishkin Cc: Jiri Olsa Cc: Namhyung Kim Cc: bpf@vger.kernel.org Cc: Joel Fernandes --- include/trace/bpf_probe.h | 23 +++++++++++++++++++++-- kernel/trace/bpf_trace.c | 5 ++++- 2 files changed, 25 insertions(+), 3 deletions(-) diff --git a/include/trace/bpf_probe.h b/include/trace/bpf_probe.h index cd74bffed5c6..1fc3afc49f37 100644 --- a/include/trace/bpf_probe.h +++ b/include/trace/bpf_probe.h @@ -55,15 +55,34 @@ /* tracepoints with more than 12 arguments will hit build error */ #define CAST_TO_U64(...) 
diff --git a/include/trace/bpf_probe.h b/include/trace/bpf_probe.h
index cd74bffed5c6..1fc3afc49f37 100644
--- a/include/trace/bpf_probe.h
+++ b/include/trace/bpf_probe.h
@@ -55,15 +55,34 @@
 /* tracepoints with more than 12 arguments will hit build error */
 #define CAST_TO_U64(...) CONCATENATE(__CAST, COUNT_ARGS(__VA_ARGS__))(__VA_ARGS__)
 
-#undef DECLARE_EVENT_CLASS
-#define DECLARE_EVENT_CLASS(call, proto, args, tstruct, assign, print) \
+#undef _DECLARE_EVENT_CLASS
+#define _DECLARE_EVENT_CLASS(call, proto, args, tstruct, assign, print, tp_flags) \
 static notrace void \
 __bpf_trace_##call(void *__data, proto) \
 { \
 	struct bpf_prog *prog = __data; \
+	\
+	if ((tp_flags) & TRACEPOINT_MAYFAULT) \
+		preempt_disable_notrace(); \
+	\
 	CONCATENATE(bpf_trace_run, COUNT_ARGS(args))(prog, CAST_TO_U64(args)); \
+	\
+	if ((tp_flags) & TRACEPOINT_MAYFAULT) \
+		preempt_enable_notrace(); \
 }
 
+#undef DECLARE_EVENT_CLASS
+#define DECLARE_EVENT_CLASS(call, proto, args, tstruct, assign, print) \
+	_DECLARE_EVENT_CLASS(call, PARAMS(proto), PARAMS(args), \
+			     PARAMS(tstruct), PARAMS(assign), PARAMS(print), 0)
+
+#undef DECLARE_EVENT_CLASS_MAYFAULT
+#define DECLARE_EVENT_CLASS_MAYFAULT(call, proto, args, tstruct, \
+				     assign, print) \
+	_DECLARE_EVENT_CLASS(call, PARAMS(proto), PARAMS(args), \
+			     PARAMS(tstruct), PARAMS(assign), PARAMS(print), \
+			     TRACEPOINT_MAYFAULT)
+
 /*
  * This part is compiled out, it is only here as a build time check
  * to make sure that if the tracepoint handling changes, the
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 0dde84b9d29f..eeeb3dafb01e 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -2117,7 +2117,10 @@ static int __bpf_probe_register(struct bpf_raw_event_map *btp, struct bpf_prog *
 	if (prog->aux->max_tp_access > btp->writable_size)
 		return -EINVAL;
 
-	return tracepoint_probe_register(tp, (void *)btp->bpf_func, prog);
+	if (tp->flags & TRACEPOINT_MAYFAULT)
+		return tracepoint_probe_register_mayfault(tp, (void *)btp->bpf_func, prog);
+	else
+		return tracepoint_probe_register(tp, (void *)btp->bpf_func, prog);
 }
 
 int bpf_probe_register(struct bpf_raw_event_map *btp, struct bpf_prog *prog)
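
Note (not part of the patch): for completeness, the sketch below shows how an
event class might opt into the new behaviour. It is a hypothetical example;
the class name and fields are invented, and the actual conversion of the
syscall enter/exit events is presumably handled later in this series.

/*
 * Hypothetical example only -- the class name and fields are invented.
 * Declaring a class with DECLARE_EVENT_CLASS_MAYFAULT marks its
 * tracepoint as TRACEPOINT_MAYFAULT, so __bpf_probe_register() above
 * uses tracepoint_probe_register_mayfault() and the generated bpf probe
 * wraps bpf_trace_run*() in preempt_disable_notrace()/preempt_enable_notrace().
 */
DECLARE_EVENT_CLASS_MAYFAULT(foo_class,

	TP_PROTO(long id),

	TP_ARGS(id),

	TP_STRUCT__entry(
		__field(long, id)
	),

	TP_fast_assign(
		__entry->id = id;
	),

	TP_printk("id=%ld", __entry->id)
);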