From patchwork Mon Apr 17 15:47:32 2023
From: Yafang Shao
To: ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org, kafai@fb.com,
    songliubraving@fb.com, yhs@fb.com, john.fastabend@gmail.com,
    kpsingh@kernel.org, sdf@google.com, haoluo@google.com, jolsa@kernel.org,
    rostedt@goodmis.org, mhiramat@kernel.org
Cc: bpf@vger.kernel.org, linux-trace-kernel@vger.kernel.org,
    linux-kernel@vger.kernel.org, Yafang Shao
Subject: [PATCH bpf-next 1/6]
 bpf: Add __rcu_read_{lock,unlock} into btf id deny list
Date: Mon, 17 Apr 2023 15:47:32 +0000
Message-Id: <20230417154737.12740-2-laoar.shao@gmail.com>
In-Reply-To: <20230417154737.12740-1-laoar.shao@gmail.com>
References: <20230417154737.12740-1-laoar.shao@gmail.com>

The tracing recursion prevention mechanism itself must be protected by RCU,
which leaves __rcu_read_{lock,unlock} unprotected by that mechanism. If we
trace them, recursion will occur. Add them to the BTF ID deny list.

Signed-off-by: Yafang Shao
---
 kernel/bpf/verifier.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 5dae11e..83fb94f 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -18645,6 +18645,10 @@ int bpf_check_attach_target(struct bpf_verifier_log *log,
 BTF_ID(func, preempt_count_add)
 BTF_ID(func, preempt_count_sub)
 #endif
+#ifdef CONFIG_PREEMPT_RCU
+BTF_ID(func, __rcu_read_lock)
+BTF_ID(func, __rcu_read_unlock)
+#endif
 BTF_SET_END(btf_id_deny)
 
 static bool can_be_sleepable(struct bpf_prog *prog)
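
For illustration, a minimal sketch (not part of this series; the program name
is hypothetical) of the kind of attachment this deny-list entry is meant to
reject. Once __rcu_read_lock is in btf_id_deny, loading such a program should
fail at attach-target check time:

	#include "vmlinux.h"
	#include <bpf/bpf_helpers.h>
	#include <bpf/bpf_tracing.h>

	/* Tracing __rcu_read_lock would recurse: the BPF trampoline's own
	 * rcu_read_lock() call would re-enter this program. */
	SEC("fentry/__rcu_read_lock")
	int BPF_PROG(trace_rcu_lock)
	{
		return 0;
	}

	char LICENSE[] SEC("license") = "GPL";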
From patchwork Mon Apr 17 15:47:33 2023
From: Yafang Shao
Subject: [PATCH bpf-next 2/6] tracing: Add generic test_recursion_try_acquire()
Date: Mon, 17 Apr 2023 15:47:33 +0000
Message-Id: <20230417154737.12740-3-laoar.shao@gmail.com>
In-Reply-To: <20230417154737.12740-1-laoar.shao@gmail.com>
References: <20230417154737.12740-1-laoar.shao@gmail.com>

From: "Steven Rostedt (Google)"

The ftrace_test_recursion_trylock() also disables preemption. This is not
required, but was a clean up, as every place that called it also disabled
preemption, and tightly coupling the two appeared to make the code simpler.

But the recursion protection can be used for other purposes that do not
require disabling preemption. As the recursion bits are attached to the
task_struct, the protection follows the task, so there is no need to disable
preemption.

Add test_recursion_try_acquire/release() functions to be used generically,
and separate them from being associated with ftrace. The new names also drop
"lock", as no locking happens. Keeping "lock" in the ftrace version's name is
still preferable, as it at least signals that preemption is being disabled
(hence, "locking the CPU").

Link: https://lore.kernel.org/linux-trace-kernel/20230321020103.13494-1-laoar.shao@gmail.com/

Acked-by: Yafang Shao
Signed-off-by: Steven Rostedt (Google)
Signed-off-by: Yafang Shao
Reviewed-by: Masami Hiramatsu (Google)
---
 include/linux/trace_recursion.h | 47 ++++++++++++++++++++++++++++++-----------
 kernel/trace/ftrace.c           |  2 ++
 2 files changed, 37 insertions(+), 12 deletions(-)

diff --git a/include/linux/trace_recursion.h b/include/linux/trace_recursion.h
index d48cd92..80de2ee 100644
--- a/include/linux/trace_recursion.h
+++ b/include/linux/trace_recursion.h
@@ -150,9 +150,6 @@ static __always_inline int trace_get_context_bit(void)
 # define trace_warn_on_no_rcu(ip)	false
 #endif
 
-/*
- * Preemption is promised to be disabled when return bit >= 0.
- */
 static __always_inline int trace_test_and_set_recursion(unsigned long ip, unsigned long pip,
 							int start)
 {
@@ -182,18 +179,11 @@ static __always_inline int trace_test_and_set_recursion(unsigned long ip, unsign
 	val |= 1 << bit;
 	current->trace_recursion = val;
 	barrier();
-
-	preempt_disable_notrace();
-
 	return bit;
 }
 
-/*
- * Preemption will be enabled (if it was previously enabled).
- */
 static __always_inline void trace_clear_recursion(int bit)
 {
-	preempt_enable_notrace();
 	barrier();
 	trace_recursion_clear(bit);
 }
@@ -205,12 +195,18 @@ static __always_inline void trace_clear_recursion(int bit)
  *  tracing recursed in the same context (normal vs interrupt),
  *
  * Returns: -1 if a recursion happened.
- *           >= 0 if no recursion.
+ *           >= 0 if no recursion and preemption will be disabled.
  */
 static __always_inline int ftrace_test_recursion_trylock(unsigned long ip,
 							 unsigned long parent_ip)
 {
-	return trace_test_and_set_recursion(ip, parent_ip, TRACE_FTRACE_START);
+	int bit;
+
+	bit = trace_test_and_set_recursion(ip, parent_ip, TRACE_FTRACE_START);
+	if (unlikely(bit < 0))
+		return bit;
+	preempt_disable_notrace();
+	return bit;
 }
 
 /**
@@ -221,6 +217,33 @@ static __always_inline int ftrace_test_recursion_trylock(unsigned long ip,
  */
 static __always_inline void ftrace_test_recursion_unlock(int bit)
 {
+	preempt_enable_notrace();
+	trace_clear_recursion(bit);
+}
+
+/**
+ * test_recursion_try_acquire - tests for recursion in same context
+ *
+ * This will detect recursion of a function.
+ *
+ * Returns: -1 if a recursion happened.
+ *           >= 0 if no recursion
+ */
+static __always_inline int test_recursion_try_acquire(unsigned long ip,
+						      unsigned long parent_ip)
+{
+	return trace_test_and_set_recursion(ip, parent_ip, TRACE_FTRACE_START);
+}
+
+/**
+ * test_recursion_release - called after a success of test_recursion_try_acquire()
+ * @bit: The return of a successful test_recursion_try_acquire()
+ *
+ * This releases the recursion lock taken by a non-negative return call
+ * by test_recursion_try_acquire().
+ */
+static __always_inline void test_recursion_release(int bit)
+{
 	trace_clear_recursion(bit);
 }
 
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index c67bcc8..8ad3ab4 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -7647,6 +7647,7 @@ void ftrace_reset_array_ops(struct trace_array *tr)
 	if (bit < 0)
 		return;
 
+	preempt_disable();
 	do_for_each_ftrace_op(op, ftrace_ops_list) {
 		/* Stub functions don't need to be called nor tested */
 		if (op->flags & FTRACE_OPS_FL_STUB)
@@ -7668,6 +7669,7 @@ void ftrace_reset_array_ops(struct trace_array *tr)
 		}
 	} while_for_each_ftrace_op(op);
 out:
+	preempt_enable();
 	trace_clear_recursion(bit);
 }
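
For illustration, a minimal sketch of how a caller outside ftrace might use
the new pair (the surrounding callback is hypothetical; only the acquire and
release calls come from this patch):

	#include <linux/trace_recursion.h>

	static void my_notrace_callback(unsigned long ip, unsigned long parent_ip)
	{
		int bit;

		/* Returns -1 if this task is already running the callback in
		 * the same context (normal/softirq/irq/NMI), >= 0 otherwise. */
		bit = test_recursion_try_acquire(ip, parent_ip);
		if (bit < 0)
			return;

		/* Work that must not recurse. Preemption stays enabled; the
		 * recursion bits live in current->trace_recursion, so the
		 * protection follows the task across CPUs. */

		test_recursion_release(bit);
	}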
From patchwork Mon Apr 17 15:47:34 2023
From: Yafang Shao
Subject: [PATCH bpf-next 3/6] tracing: Add the comment for allowing one single
 recursion in process context
Date: Mon, 17 Apr 2023 15:47:34 +0000
Message-Id: <20230417154737.12740-4-laoar.shao@gmail.com>
In-Reply-To: <20230417154737.12740-1-laoar.shao@gmail.com>
References: <20230417154737.12740-1-laoar.shao@gmail.com>

After TRACE_CTX_TRANSITION is applied, one single recursion is allowed in
the process context. Below is an example:

  SEC("fentry/htab_map_delete_elem")
  int BPF_PROG(on_delete, struct bpf_map *map)
  {
  	pass2++;
  	bpf_map_delete_elem(&hash2, &key);
  	return 0;
  }

In this test case, recursion is only detected at the second nested
bpf_map_delete_elem() call, illustrated as follows:

  SEC("fentry/htab_map_delete_elem")
    pass2++;                                 <<<< becomes 1 after this operation
    bpf_map_delete_elem(&hash2, &key);
      SEC("fentry/htab_map_delete_elem")     <<<< no recursion
        pass2++;                             <<<< becomes 2 after this operation
        bpf_map_delete_elem(&hash2, &key);
          SEC("fentry/htab_map_delete_elem") <<<< RECURSION

We'd better explain this behavior explicitly in the code.

Signed-off-by: Yafang Shao
Cc: Steven Rostedt
---
 include/linux/trace_recursion.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/include/linux/trace_recursion.h b/include/linux/trace_recursion.h
index 80de2ee..445a055 100644
--- a/include/linux/trace_recursion.h
+++ b/include/linux/trace_recursion.h
@@ -168,6 +168,8 @@ static __always_inline int trace_test_and_set_recursion(unsigned long ip, unsign
 	 * will think a recursion occurred, and the event will be dropped.
 	 * Let a single instance happen via the TRANSITION_BIT to
 	 * not drop those events.
+	 * After this rule is applied, one single recursion is allowed in
+	 * the process context.
 	 */
 	bit = TRACE_CTX_TRANSITION + start;
 	if (val & (1 << bit)) {
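
For reference, a simplified paraphrase of why exactly one extra entry per
context is tolerated (this is a sketch of the logic in
trace_test_and_set_recursion(), not the exact kernel code; 'val' caches
current->trace_recursion):

	static int recursion_check(unsigned long val, int start)
	{
		/* One bit per context: normal, softirq, irq, NMI. */
		int bit = trace_get_context_bit() + start;

		if (val & (1UL << bit)) {
			/* First re-entry in the same context: borrow the
			 * transition bit so events raised while switching
			 * contexts are not dropped. */
			bit = TRACE_CTX_TRANSITION + start;
			if (val & (1UL << bit))
				return -1;	/* second re-entry: recursion */
		}
		return bit;
	}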
From patchwork Mon Apr 17 15:47:35 2023
From: Yafang Shao
Subject: [PATCH bpf-next 4/6] selftests/bpf: Allow one single recursion in
 fentry recursion test
Date: Mon, 17 Apr 2023 15:47:35 +0000
Message-Id: <20230417154737.12740-5-laoar.shao@gmail.com>
In-Reply-To: <20230417154737.12740-1-laoar.shao@gmail.com>
References: <20230417154737.12740-1-laoar.shao@gmail.com>

This is a preparation for replacing prog->active with
test_recursion_try_acquire/release(), under which one single recursion in the
process context is allowed. The behavior will be as follows:

  SEC("fentry/htab_map_delete_elem")
    pass2++;                                 <<<< becomes 1 after this operation
    bpf_map_delete_elem(&hash2, &key);
      SEC("fentry/htab_map_delete_elem")     <<<< no recursion
        pass2++;                             <<<< becomes 2 after this operation
        bpf_map_delete_elem(&hash2, &key);
          SEC("fentry/htab_map_delete_elem") <<<< RECURSION

Hence the selftest needs to be changed to allow it. For backward
compatibility, both the old value and the new value are accepted, so a new
helper ASSERT_IN_ARRAY() is introduced.

Signed-off-by: Yafang Shao
---
 tools/testing/selftests/bpf/prog_tests/recursion.c |  7 +++++--
 tools/testing/selftests/bpf/test_progs.h           | 19 +++++++++++++++++++
 2 files changed, 24 insertions(+), 2 deletions(-)

diff --git a/tools/testing/selftests/bpf/prog_tests/recursion.c b/tools/testing/selftests/bpf/prog_tests/recursion.c
index 23552d3..dfbed2e 100644
--- a/tools/testing/selftests/bpf/prog_tests/recursion.c
+++ b/tools/testing/selftests/bpf/prog_tests/recursion.c
@@ -8,6 +8,7 @@ void test_recursion(void)
 	struct bpf_prog_info prog_info = {};
 	__u32 prog_info_len = sizeof(prog_info);
 	struct recursion *skel;
+	int expected[2];
 	int key = 0;
 	int err;
 
@@ -27,9 +28,11 @@ void test_recursion(void)
 	ASSERT_EQ(skel->bss->pass2, 0, "pass2 == 0");
 	bpf_map_delete_elem(bpf_map__fd(skel->maps.hash2), &key);
-	ASSERT_EQ(skel->bss->pass2, 1, "pass2 == 1");
+	expected[1] = 2;
+	ASSERT_IN_ARRAY(skel->bss->pass2, expected, "pass2 in [0 2]");
 	bpf_map_delete_elem(bpf_map__fd(skel->maps.hash2), &key);
-	ASSERT_EQ(skel->bss->pass2, 2, "pass2 == 2");
+	expected[1] = 4;
+	ASSERT_IN_ARRAY(skel->bss->pass2, expected, "pass2 in [0 4]");
 
 	err = bpf_prog_get_info_by_fd(bpf_program__fd(skel->progs.on_delete),
 				      &prog_info, &prog_info_len);
diff --git a/tools/testing/selftests/bpf/test_progs.h b/tools/testing/selftests/bpf/test_progs.h
index 10ba432..79e96cc 100644
--- a/tools/testing/selftests/bpf/test_progs.h
+++ b/tools/testing/selftests/bpf/test_progs.h
@@ -245,6 +245,25 @@ struct msg {
 	___ok;								\
 })
 
+#define ASSERT_IN_ARRAY(actual, expected, name) ({			\
+	static int duration;						\
+	typeof(actual) ___act = (actual);				\
+	typeof((expected)[0]) * ___exp = (expected);			\
+	bool ___ok = false;						\
+	int i;								\
+									\
+	for (i = 0; i < ARRAY_SIZE(expected); i++) {			\
+		if (___act == ___exp[i]) {				\
+			___ok = true;					\
+			break;						\
+		}							\
+	}								\
+	CHECK(!___ok, (name),						\
+	      "unexpected %s: actual %lld not in array\n",		\
+	      (name), (long long)(___act));				\
+	___ok;								\
+})
+
 #define ASSERT_NEQ(actual, expected, name) ({ \
 	static int duration = 0;	\
 	typeof(actual) ___act = (actual);	\
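
For illustration, a minimal usage sketch of the new helper (the wrapper
function is hypothetical and assumes the test_progs.h environment where CHECK
and ARRAY_SIZE are available). Note that the second argument must be a real
array, not a pointer, because the macro relies on ARRAY_SIZE(expected):

	static void check_pass2(struct recursion *skel)
	{
		/* Accept either the pre-series or the post-series count. */
		int expected[2] = { 1, 2 };

		ASSERT_IN_ARRAY(skel->bss->pass2, expected, "pass2");
	}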
From patchwork Mon Apr 17 15:47:36 2023
From: Yafang Shao
Subject: [PATCH bpf-next 5/6] bpf: Improve tracing recursion prevention
 mechanism
Date: Mon, 17 Apr 2023 15:47:36 +0000
Message-Id: <20230417154737.12740-6-laoar.shao@gmail.com>
In-Reply-To: <20230417154737.12740-1-laoar.shao@gmail.com>
References: <20230417154737.12740-1-laoar.shao@gmail.com>
Currently we use prog->active to prevent tracing recursion, but it has some
downsides:

- It can't distinguish different contexts. That is, if a process context is
  interrupted by an irq context and the irq context runs the same code path,
  it is treated as recursion. For example,

    normal: this_cpu_inc_return(*(prog->active)) == 1 <- OK
       irq: this_cpu_inc_return(*(prog->active)) == 1 <- FAIL!
            [ considered as recursion ]

- It has to maintain a percpu area. A percpu area is allocated for each prog
  when the prog is loaded and freed when the prog is destroyed.

Replace it with the generic tracing recursion prevention mechanism, which
distinguishes contexts properly. In the above example, the irq context is no
longer treated as recursion:

    normal: test_recursion_try_acquire() <- OK
   softirq: test_recursion_try_acquire() <- OK
       irq: test_recursion_try_acquire() <- OK

Note that currently one single recursion in the process context is allowed
due to the TRACE_CTX_TRANSITION workaround, which can be fixed in the future.
That is, the following behavior is expected for now:

    normal: test_recursion_try_acquire() <- OK
            [ recursion happens ]          <- one single recursion is allowed
            test_recursion_try_acquire() <- OK
            [ recursion happens ]
            test_recursion_try_acquire() <- RECURSION!

Signed-off-by: Yafang Shao
---
 include/linux/bpf.h      |  2 +-
 kernel/bpf/core.c        | 10 ----------
 kernel/bpf/trampoline.c  | 44 +++++++++++++++++++++++++++++++++-----------
 kernel/trace/bpf_trace.c | 12 +++++++-----
 4 files changed, 41 insertions(+), 27 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 18b592f..c42ff90 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1467,7 +1467,6 @@ struct bpf_prog {
 	u32			jited_len;	/* Size of jited insns in bytes */
 	u8			tag[BPF_TAG_SIZE];
 	struct bpf_prog_stats __percpu *stats;
-	int __percpu		*active;
 	unsigned int		(*bpf_func)(const void *ctx,
 					    const struct bpf_insn *insn);
 	struct bpf_prog_aux	*aux;		/* Auxiliary fields */
@@ -1813,6 +1812,7 @@ struct bpf_tramp_run_ctx {
 	struct bpf_run_ctx run_ctx;
 	u64 bpf_cookie;
 	struct bpf_run_ctx *saved_run_ctx;
+	int recursion_bit;
 };
 
 static inline struct bpf_run_ctx *bpf_set_run_ctx(struct bpf_run_ctx *new_ctx)
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 7421487..0942ab2 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -103,12 +103,6 @@ struct bpf_prog *bpf_prog_alloc_no_stats(unsigned int size, gfp_t gfp_extra_flag
 		vfree(fp);
 		return NULL;
 	}
-	fp->active = alloc_percpu_gfp(int, bpf_memcg_flags(GFP_KERNEL | gfp_extra_flags));
-	if (!fp->active) {
-		vfree(fp);
-		kfree(aux);
-		return NULL;
-	}
 
 	fp->pages = size / PAGE_SIZE;
 	fp->aux = aux;
@@ -138,7 +132,6 @@ struct bpf_prog *bpf_prog_alloc(unsigned int size, gfp_t gfp_extra_flags)
 	prog->stats = alloc_percpu_gfp(struct bpf_prog_stats, gfp_flags);
 	if (!prog->stats) {
-		free_percpu(prog->active);
 		kfree(prog->aux);
 		vfree(prog);
 		return NULL;
@@ -256,7 +249,6 @@ struct bpf_prog *bpf_prog_realloc(struct bpf_prog *fp_old, unsigned int size,
 	 */
 	fp_old->aux = NULL;
 	fp_old->stats = NULL;
-	fp_old->active = NULL;
 
 	__bpf_prog_free(fp_old);
 }
@@ -272,7 +264,6 @@ void __bpf_prog_free(struct bpf_prog *fp)
 		kfree(fp->aux);
 	}
 	free_percpu(fp->stats);
-	free_percpu(fp->active);
 	vfree(fp);
 }
@@ -1385,7 +1376,6 @@ static void bpf_prog_clone_free(struct bpf_prog *fp)
 	 */
 	fp->aux = NULL;
 	fp->stats = NULL;
-	fp->active = NULL;
 	__bpf_prog_free(fp);
 }
diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
index f61d513..3df39a5
--- a/kernel/bpf/trampoline.c
+++ b/kernel/bpf/trampoline.c
@@ -842,15 +842,21 @@ static __always_inline u64 notrace bpf_prog_start_time(void)
 static u64 notrace __bpf_prog_enter_recur(struct bpf_prog *prog, struct bpf_tramp_run_ctx *run_ctx)
 	__acquires(RCU)
 {
-	rcu_read_lock();
-	migrate_disable();
-
-	run_ctx->saved_run_ctx = bpf_set_run_ctx(&run_ctx->run_ctx);
+	int bit;
 
-	if (unlikely(this_cpu_inc_return(*(prog->active)) != 1)) {
+	rcu_read_lock();
+	bit = test_recursion_try_acquire(_THIS_IP_, _RET_IP_);
+	run_ctx->recursion_bit = bit;
+	if (bit < 0) {
+		preempt_disable_notrace();
 		bpf_prog_inc_misses_counter(prog);
+		preempt_enable_notrace();
 		return 0;
 	}
+
+	migrate_disable();
+
+	run_ctx->saved_run_ctx = bpf_set_run_ctx(&run_ctx->run_ctx);
 
 	return bpf_prog_start_time();
 }
@@ -880,11 +886,16 @@ static void notrace __bpf_prog_exit_recur(struct bpf_prog *prog, u64 start,
 					  struct bpf_tramp_run_ctx *run_ctx)
 	__releases(RCU)
 {
+	if (run_ctx->recursion_bit < 0)
+		goto out;
+
 	bpf_reset_run_ctx(run_ctx->saved_run_ctx);
 
 	update_prog_stats(prog, start);
-	this_cpu_dec(*(prog->active));
 	migrate_enable();
+	test_recursion_release(run_ctx->recursion_bit);
+
+out:
 	rcu_read_unlock();
 }
@@ -916,15 +927,21 @@ static void notrace __bpf_prog_exit_lsm_cgroup(struct bpf_prog *prog, u64 start,
 u64 notrace __bpf_prog_enter_sleepable_recur(struct bpf_prog *prog,
 					     struct bpf_tramp_run_ctx *run_ctx)
 {
-	rcu_read_lock_trace();
-	migrate_disable();
-	might_fault();
+	int bit;
 
-	if (unlikely(this_cpu_inc_return(*(prog->active)) != 1)) {
+	rcu_read_lock_trace();
+	bit = test_recursion_try_acquire(_THIS_IP_, _RET_IP_);
+	run_ctx->recursion_bit = bit;
+	if (bit < 0) {
+		preempt_disable_notrace();
 		bpf_prog_inc_misses_counter(prog);
+		preempt_enable_notrace();
 		return 0;
 	}
 
+	migrate_disable();
+	might_fault();
+
 	run_ctx->saved_run_ctx = bpf_set_run_ctx(&run_ctx->run_ctx);
 
 	return bpf_prog_start_time();
@@ -933,11 +950,16 @@ u64 notrace __bpf_prog_enter_sleepable_recur(struct bpf_prog *prog,
 void notrace __bpf_prog_exit_sleepable_recur(struct bpf_prog *prog, u64 start,
 					     struct bpf_tramp_run_ctx *run_ctx)
 {
+	if (run_ctx->recursion_bit < 0)
+		goto out;
+
 	bpf_reset_run_ctx(run_ctx->saved_run_ctx);
 
 	update_prog_stats(prog, start);
-	this_cpu_dec(*(prog->active));
 	migrate_enable();
+	test_recursion_release(run_ctx->recursion_bit);
+
+out:
 	rcu_read_unlock_trace();
 }
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index bcf91bc..bb9a4c9 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -2250,16 +2250,18 @@ void bpf_put_raw_tracepoint(struct bpf_raw_event_map *btp)
 
 static __always_inline void __bpf_trace_run(struct bpf_prog *prog, u64 *args)
 {
-	cant_sleep();
-	if (unlikely(this_cpu_inc_return(*(prog->active)) != 1)) {
+	int bit;
+
+	bit = test_recursion_try_acquire(_THIS_IP_, _RET_IP_);
+	if (bit < 0) {
 		bpf_prog_inc_misses_counter(prog);
-		goto out;
+		return;
 	}
 
+	cant_sleep();
 	rcu_read_lock();
 	(void) bpf_prog_run(prog, args);
 	rcu_read_unlock();
-out:
-	this_cpu_dec(*(prog->active));
+	test_recursion_release(bit);
 }
 
 #define UNPACK(...)			__VA_ARGS__
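
When test_recursion_try_acquire() reports a recursion, the trampoline only
bumps the program's miss counter. For illustration, a minimal user-space
sketch (the wrapper function and prog_fd are hypothetical) that reads the
counter back; bpf_prog_get_info_by_fd() and the recursion_misses field are
the same interfaces the recursion selftest above already uses:

	#include <stdio.h>
	#include <string.h>
	#include <bpf/bpf.h>

	/* Print how often the program was skipped because the recursion
	 * protection fired (bpf_prog_inc_misses_counter() in the kernel). */
	static void print_recursion_misses(int prog_fd)
	{
		struct bpf_prog_info info;
		__u32 len = sizeof(info);

		memset(&info, 0, sizeof(info));
		if (bpf_prog_get_info_by_fd(prog_fd, &info, &len))
			return;
		printf("recursion_misses: %llu\n",
		       (unsigned long long)info.recursion_misses);
	}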
From patchwork Mon Apr 17 15:47:37 2023
From: Yafang Shao
Subject: [PATCH bpf-next 6/6] bpf: Remove some denied functions from the btf
 id deny list
Date: Mon, 17 Apr 2023 15:47:37 +0000
Message-Id: <20230417154737.12740-7-laoar.shao@gmail.com>
In-Reply-To: <20230417154737.12740-1-laoar.shao@gmail.com>
References: <20230417154737.12740-1-laoar.shao@gmail.com>

With the generic tracing recursion prevention mechanism applied, it is safe
to trace migrate_{disable,enable} and preempt_count_{sub,add}, so we can
remove them from the deny list. However, we can't remove
rcu_read_unlock_strict and __rcu_read_{lock,unlock}, because they are used
by rcu_read_unlock() and rcu_read_lock(), which the mechanism itself relies
on.

Signed-off-by: Yafang Shao
---
 kernel/bpf/verifier.c | 8 --------
 1 file changed, 8 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 83fb94f..40f6b2c 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -18634,17 +18634,9 @@ int bpf_check_attach_target(struct bpf_verifier_log *log,
 
 BTF_SET_START(btf_id_deny)
 BTF_ID_UNUSED
-#ifdef CONFIG_SMP
-BTF_ID(func, migrate_disable)
-BTF_ID(func, migrate_enable)
-#endif
 #if !defined CONFIG_PREEMPT_RCU && !defined CONFIG_TINY_RCU
 BTF_ID(func, rcu_read_unlock_strict)
 #endif
-#if defined(CONFIG_DEBUG_PREEMPT) || defined(CONFIG_TRACE_PREEMPT_TOGGLE)
-BTF_ID(func, preempt_count_add)
-BTF_ID(func, preempt_count_sub)
-#endif
 #ifdef CONFIG_PREEMPT_RCU
 BTF_ID(func, __rcu_read_lock)
 BTF_ID(func, __rcu_read_unlock)
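
For illustration, a minimal sketch (not part of this series; program and
counter names are hypothetical) of an attachment that becomes possible once
these entries are removed, assuming the whole series is applied:

	#include "vmlinux.h"
	#include <bpf/bpf_helpers.h>
	#include <bpf/bpf_tracing.h>

	__u64 migrate_disable_cnt;

	/* Previously rejected by btf_id_deny; with the generic recursion
	 * protection, tracing migrate_disable() no longer self-recurses. */
	SEC("fentry/migrate_disable")
	int BPF_PROG(count_migrate_disable)
	{
		__sync_fetch_and_add(&migrate_disable_cnt, 1);
		return 0;
	}

	char LICENSE[] SEC("license") = "GPL";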