From patchwork Thu Feb 18 22:21:24 2021
From: Michael Jeanson <mjeanson@efficios.com>
To: linux-kernel@vger.kernel.org
Cc: Michael Jeanson, Mathieu Desnoyers, Steven Rostedt, Peter Zijlstra,
    Alexei Starovoitov, Yonghong Song,
McKenney" , Ingo Molnar , Arnaldo Carvalho de Melo , Mark Rutland , Alexander Shishkin , Jiri Olsa , Namhyung Kim , bpf@vger.kernel.org, Joel Fernandes Subject: [RFC PATCH 5/6] tracing: convert sys_enter/exit to faultable tracepoints Date: Thu, 18 Feb 2021 17:21:24 -0500 Message-Id: <20210218222125.46565-6-mjeanson@efficios.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210218222125.46565-1-mjeanson@efficios.com> References: <20210218222125.46565-1-mjeanson@efficios.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-State: RFC Convert the definition of the system call enter/exit tracepoints to faultable tracepoints now that all upstream tracers handle it. Co-developed-by: Mathieu Desnoyers Signed-off-by: Mathieu Desnoyers Signed-off-by: Michael Jeanson Cc: Steven Rostedt (VMware) Cc: Peter Zijlstra Cc: Alexei Starovoitov Cc: Yonghong Song Cc: Paul E. McKenney Cc: Ingo Molnar Cc: Arnaldo Carvalho de Melo Cc: Mark Rutland Cc: Alexander Shishkin Cc: Jiri Olsa Cc: Namhyung Kim Cc: bpf@vger.kernel.org Cc: Joel Fernandes --- include/trace/events/syscalls.h | 4 +- kernel/trace/trace_syscalls.c | 84 +++++++++++++++++++++++---------- 2 files changed, 60 insertions(+), 28 deletions(-) diff --git a/include/trace/events/syscalls.h b/include/trace/events/syscalls.h index b6e0cbc2c71f..2bd2d94563a2 100644 --- a/include/trace/events/syscalls.h +++ b/include/trace/events/syscalls.h @@ -15,7 +15,7 @@ #ifdef CONFIG_HAVE_SYSCALL_TRACEPOINTS -TRACE_EVENT_FN(sys_enter, +TRACE_EVENT_FN_MAYFAULT(sys_enter, TP_PROTO(struct pt_regs *regs, long id), @@ -41,7 +41,7 @@ TRACE_EVENT_FN(sys_enter, TRACE_EVENT_FLAGS(sys_enter, TRACE_EVENT_FL_CAP_ANY) -TRACE_EVENT_FN(sys_exit, +TRACE_EVENT_FN_MAYFAULT(sys_exit, TP_PROTO(struct pt_regs *regs, long ret), diff --git a/kernel/trace/trace_syscalls.c b/kernel/trace/trace_syscalls.c index d85a2f0f316b..4ca9190e26b2 100644 --- a/kernel/trace/trace_syscalls.c +++ b/kernel/trace/trace_syscalls.c @@ -304,21 +304,27 @@ static void ftrace_syscall_enter(void *data, struct pt_regs *regs, long id) int syscall_nr; int size; + /* + * Probe called with preemption enabled (mayfault), but ring buffer and + * per-cpu data require preemption to be disabled. 
+	 */
+	preempt_disable_notrace();
+
 	syscall_nr = trace_get_syscall_nr(current, regs);
 	if (syscall_nr < 0 || syscall_nr >= NR_syscalls)
-		return;
+		goto end;
 
 	/* Here we're inside tp handler's rcu_read_lock_sched (__DO_TRACE) */
 	trace_file = rcu_dereference_sched(tr->enter_syscall_files[syscall_nr]);
 	if (!trace_file)
-		return;
+		goto end;
 
 	if (trace_trigger_soft_disabled(trace_file))
-		return;
+		goto end;
 
 	sys_data = syscall_nr_to_meta(syscall_nr);
 	if (!sys_data)
-		return;
+		goto end;
 
 	size = sizeof(*entry) + sizeof(unsigned long) * sys_data->nb_args;
 
@@ -329,7 +335,7 @@ static void ftrace_syscall_enter(void *data, struct pt_regs *regs, long id)
 	event = trace_buffer_lock_reserve(buffer,
 			sys_data->enter_event->event.type, size, irq_flags, pc);
 	if (!event)
-		return;
+		goto end;
 
 	entry = ring_buffer_event_data(event);
 	entry->nr = syscall_nr;
@@ -338,6 +344,8 @@ static void ftrace_syscall_enter(void *data, struct pt_regs *regs, long id)
 
 	event_trigger_unlock_commit(trace_file, buffer, event, entry,
 				    irq_flags, pc);
+end:
+	preempt_enable_notrace();
 }
 
 static void ftrace_syscall_exit(void *data, struct pt_regs *regs, long ret)
@@ -352,21 +360,27 @@ static void ftrace_syscall_exit(void *data, struct pt_regs *regs, long ret)
 	int pc;
 	int syscall_nr;
 
+	/*
+	 * Probe called with preemption enabled (mayfault), but ring buffer and
+	 * per-cpu data require preemption to be disabled.
+	 */
+	preempt_disable_notrace();
+
 	syscall_nr = trace_get_syscall_nr(current, regs);
 	if (syscall_nr < 0 || syscall_nr >= NR_syscalls)
-		return;
+		goto end;
 
 	/* Here we're inside tp handler's rcu_read_lock_sched (__DO_TRACE()) */
 	trace_file = rcu_dereference_sched(tr->exit_syscall_files[syscall_nr]);
 	if (!trace_file)
-		return;
+		goto end;
 
 	if (trace_trigger_soft_disabled(trace_file))
-		return;
+		goto end;
 
 	sys_data = syscall_nr_to_meta(syscall_nr);
 	if (!sys_data)
-		return;
+		goto end;
 
 	local_save_flags(irq_flags);
 	pc = preempt_count();
@@ -376,7 +390,7 @@ static void ftrace_syscall_exit(void *data, struct pt_regs *regs, long ret)
 			sys_data->exit_event->event.type, sizeof(*entry),
 			irq_flags, pc);
 	if (!event)
-		return;
+		goto end;
 
 	entry = ring_buffer_event_data(event);
 	entry->nr = syscall_nr;
@@ -384,6 +398,8 @@ static void ftrace_syscall_exit(void *data, struct pt_regs *regs, long ret)
 
 	event_trigger_unlock_commit(trace_file, buffer, event, entry,
 				    irq_flags, pc);
+end:
+	preempt_enable_notrace();
 }
 
 static int reg_event_syscall_enter(struct trace_event_file *file,
@@ -398,7 +414,7 @@ static int reg_event_syscall_enter(struct trace_event_file *file,
 		return -ENOSYS;
 	mutex_lock(&syscall_trace_lock);
 	if (!tr->sys_refcount_enter)
-		ret = register_trace_sys_enter(ftrace_syscall_enter, tr);
+		ret = register_trace_mayfault_sys_enter(ftrace_syscall_enter, tr);
 	if (!ret) {
 		rcu_assign_pointer(tr->enter_syscall_files[num], file);
 		tr->sys_refcount_enter++;
@@ -436,7 +452,7 @@ static int reg_event_syscall_exit(struct trace_event_file *file,
 		return -ENOSYS;
 	mutex_lock(&syscall_trace_lock);
 	if (!tr->sys_refcount_exit)
-		ret = register_trace_sys_exit(ftrace_syscall_exit, tr);
+		ret = register_trace_mayfault_sys_exit(ftrace_syscall_exit, tr);
 	if (!ret) {
 		rcu_assign_pointer(tr->exit_syscall_files[num], file);
 		tr->sys_refcount_exit++;
@@ -600,20 +616,26 @@ static void perf_syscall_enter(void *ignore, struct pt_regs *regs, long id)
 	int rctx;
 	int size;
 
+	/*
+	 * Probe called with preemption enabled (mayfault), but ring buffer and
+	 * per-cpu data require preemption to be disabled.
+	 */
+	preempt_disable_notrace();
+
 	syscall_nr = trace_get_syscall_nr(current, regs);
 	if (syscall_nr < 0 || syscall_nr >= NR_syscalls)
-		return;
+		goto end;
 
 	if (!test_bit(syscall_nr, enabled_perf_enter_syscalls))
-		return;
+		goto end;
 
 	sys_data = syscall_nr_to_meta(syscall_nr);
 	if (!sys_data)
-		return;
+		goto end;
 
 	head = this_cpu_ptr(sys_data->enter_event->perf_events);
 	valid_prog_array = bpf_prog_array_valid(sys_data->enter_event);
 	if (!valid_prog_array && hlist_empty(head))
-		return;
+		goto end;
 
 	/* get the size after alignment with the u32 buffer size field */
 	size = sizeof(unsigned long) * sys_data->nb_args + sizeof(*rec);
@@ -622,7 +644,7 @@ static void perf_syscall_enter(void *ignore, struct pt_regs *regs, long id)
 
 	rec = perf_trace_buf_alloc(size, NULL, &rctx);
 	if (!rec)
-		return;
+		goto end;
 
 	rec->nr = syscall_nr;
 	syscall_get_arguments(current, regs, args);
@@ -632,12 +654,14 @@ static void perf_syscall_enter(void *ignore, struct pt_regs *regs, long id)
 	      !perf_call_bpf_enter(sys_data->enter_event, regs, sys_data, rec)) ||
 	    hlist_empty(head)) {
 		perf_swevent_put_recursion_context(rctx);
-		return;
+		goto end;
 	}
 
 	perf_trace_buf_submit(rec, size, rctx,
 			      sys_data->enter_event->event.type, 1, regs,
 			      head, NULL);
+end:
+	preempt_enable_notrace();
 }
 
 static int perf_sysenter_enable(struct trace_event_call *call)
@@ -649,7 +673,7 @@ static int perf_sysenter_enable(struct trace_event_call *call)
 
 	mutex_lock(&syscall_trace_lock);
 	if (!sys_perf_refcount_enter)
-		ret = register_trace_sys_enter(perf_syscall_enter, NULL);
+		ret = register_trace_mayfault_sys_enter(perf_syscall_enter, NULL);
 	if (ret) {
 		pr_info("event trace: Could not activate syscall entry trace point");
 	} else {
@@ -699,20 +723,26 @@ static void perf_syscall_exit(void *ignore, struct pt_regs *regs, long ret)
 	int rctx;
 	int size;
 
+	/*
+	 * Probe called with preemption enabled (mayfault), but ring buffer and
+	 * per-cpu data require preemption to be disabled.
+	 */
+	preempt_disable_notrace();
+
 	syscall_nr = trace_get_syscall_nr(current, regs);
 	if (syscall_nr < 0 || syscall_nr >= NR_syscalls)
-		return;
+		goto end;
 
 	if (!test_bit(syscall_nr, enabled_perf_exit_syscalls))
-		return;
+		goto end;
 
 	sys_data = syscall_nr_to_meta(syscall_nr);
 	if (!sys_data)
-		return;
+		goto end;
 
 	head = this_cpu_ptr(sys_data->exit_event->perf_events);
 	valid_prog_array = bpf_prog_array_valid(sys_data->exit_event);
 	if (!valid_prog_array && hlist_empty(head))
-		return;
+		goto end;
 
 	/* We can probably do that at build time */
 	size = ALIGN(sizeof(*rec) + sizeof(u32), sizeof(u64));
@@ -720,7 +750,7 @@ static void perf_syscall_exit(void *ignore, struct pt_regs *regs, long ret)
 
 	rec = perf_trace_buf_alloc(size, NULL, &rctx);
 	if (!rec)
-		return;
+		goto end;
 
 	rec->nr = syscall_nr;
 	rec->ret = syscall_get_return_value(current, regs);
@@ -729,11 +759,13 @@ static void perf_syscall_exit(void *ignore, struct pt_regs *regs, long ret)
 	      !perf_call_bpf_exit(sys_data->exit_event, regs, rec)) ||
 	    hlist_empty(head)) {
 		perf_swevent_put_recursion_context(rctx);
-		return;
+		goto end;
 	}
 
 	perf_trace_buf_submit(rec, size, rctx, sys_data->exit_event->event.type,
 			      1, regs, head, NULL);
+end:
+	preempt_enable_notrace();
 }
 
 static int perf_sysexit_enable(struct trace_event_call *call)
@@ -745,7 +777,7 @@ static int perf_sysexit_enable(struct trace_event_call *call)
 
 	mutex_lock(&syscall_trace_lock);
 	if (!sys_perf_refcount_exit)
-		ret = register_trace_sys_exit(perf_syscall_exit, NULL);
+		ret = register_trace_mayfault_sys_exit(perf_syscall_exit, NULL);
 	if (ret) {
 		pr_info("event trace: Could not activate syscall exit trace point");
 	} else {
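
For illustration only, not part of the patch: the point of the conversion is
that a probe attached to sys_enter/sys_exit may now take a page fault (for
instance while copying syscall arguments from user space), provided it
disables preemption itself around any per-CPU or ring-buffer work. A minimal
sketch of such a probe follows. The name my_sys_enter_probe and the "record"
step are hypothetical; syscall_get_arguments(), strncpy_from_user() and the
preempt_*_notrace() helpers are existing kernel APIs, and
register_trace_mayfault_sys_enter() is the registration variant introduced
earlier in this series.

#include <linux/uaccess.h>
#include <linux/preempt.h>
#include <linux/sched.h>
#include <linux/ptrace.h>
#include <asm/syscall.h>

/* Hypothetical mayfault probe; signature matches the sys_enter tracepoint. */
static void my_sys_enter_probe(void *data, struct pt_regs *regs, long id)
{
	unsigned long args[6];
	char buf[64];
	long len;

	syscall_get_arguments(current, regs, args);

	/*
	 * Faultable section: preemption is still enabled here, so copying
	 * from user space may fault and block. This assumes args[1] is a
	 * user pointer (true for e.g. openat(2)'s pathname).
	 */
	len = strncpy_from_user(buf, (const char __user *)args[1],
				sizeof(buf) - 1);
	if (len < 0)
		len = 0;
	buf[len] = '\0';

	/*
	 * Non-faultable section: per-CPU data and ring-buffer reservation
	 * require preemption to be disabled, mirroring the pattern this
	 * patch adds to ftrace_syscall_enter()/perf_syscall_enter().
	 */
	preempt_disable_notrace();
	/* ... reserve buffer space and record buf here ... */
	preempt_enable_notrace();
}

Such a probe would be attached with
register_trace_mayfault_sys_enter(my_sys_enter_probe, NULL) and detached with
the matching unregister call, mirroring the ftrace and perf registration
changes above.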