From patchwork Thu Oct 10 14:25:45 2024
X-Patchwork-Submitter: Steven Rostedt
X-Patchwork-Id: 13830289
Message-ID: <20241010142550.818761626@goodmis.org>
User-Agent: quilt/0.68
Date: Thu, 10 Oct 2024 10:25:45 -0400
From: Steven Rostedt
To: linux-kernel@vger.kernel.org
Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton,
    Michael Jeanson, Peter Zijlstra, Alexei Starovoitov, Yonghong Song,
    "Paul E. McKenney", Ingo Molnar, Arnaldo Carvalho de Melo,
    Alexander Shishkin, Namhyung Kim, Andrii Nakryiko,
    bpf@vger.kernel.org, Joel Fernandes, Andrii Nakryiko
Subject: [for-next][PATCH 08/10] tracing/bpf: Add might_fault check to syscall probes
References: <20241010142537.255433162@goodmis.org>

From: Mathieu Desnoyers

Add a might_fault() check to validate that the bpf sys_enter/sys_exit
probe callbacks are indeed called from a context where page faults can
be handled.

Cc: Michael Jeanson
Cc: Masami Hiramatsu
Cc: Peter Zijlstra
Cc: Alexei Starovoitov
Cc: Yonghong Song
Cc: Paul E. McKenney
Cc: Ingo Molnar
Cc: Arnaldo Carvalho de Melo
Cc: Mark Rutland
Cc: Alexander Shishkin
Cc: Namhyung Kim
Cc: Andrii Nakryiko
Cc: bpf@vger.kernel.org
Cc: Joel Fernandes
Link: https://lore.kernel.org/20241009010718.2050182-9-mathieu.desnoyers@efficios.com
Acked-by: Andrii Nakryiko
Tested-by: Andrii Nakryiko # BPF parts
Signed-off-by: Mathieu Desnoyers
Signed-off-by: Steven Rostedt (Google)
---
 include/trace/bpf_probe.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/include/trace/bpf_probe.h b/include/trace/bpf_probe.h
index fec97c93e1c9..183fa2aa2935 100644
--- a/include/trace/bpf_probe.h
+++ b/include/trace/bpf_probe.h
@@ -57,6 +57,7 @@ __bpf_trace_##call(void *__data, proto)			\
 static notrace void							\
 __bpf_trace_##call(void *__data, proto)				\
 {									\
+	might_fault();							\
 	preempt_disable_notrace();					\
 	CONCATENATE(bpf_trace_run, COUNT_ARGS(args))(__data, CAST_TO_U64(args));	\
 	preempt_enable_notrace();					\
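
[Editor's note: for readers unfamiliar with the macro, here is a rough
sketch of what __bpf_trace_##call expands to after this patch, for a
hypothetical one-argument event "foo". The event name and argument are
illustrative, not from the patch; bpf_trace_run1() is the real helper
the CONCATENATE() expands to for one argument.]

	/*
	 * Illustrative expansion only. Under CONFIG_DEBUG_ATOMIC_SLEEP
	 * (or CONFIG_PROVE_LOCKING), might_fault() warns if this probe
	 * is invoked from a context that cannot handle a page fault,
	 * e.g. with preemption or interrupts already disabled.
	 */
	static notrace void
	__bpf_trace_foo(void *__data, unsigned long arg1)
	{
		might_fault();			/* assert: faults are handleable here */
		preempt_disable_notrace();	/* BPF program runs non-preemptible  */
		bpf_trace_run1(__data, (u64)arg1);
		preempt_enable_notrace();
	}

[The check is placed before preempt_disable_notrace() on purpose: it
validates the context the tracepoint caller provides, not the
non-preemptible section the probe itself creates around the BPF run.]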