From patchwork Sat Jul 22 07:47:44 2023
X-Patchwork-Submitter: Arnd Bergmann
X-Patchwork-Id: 13322852
X-Patchwork-Delegate: bpf@iogearbox.net
From: Arnd Bergmann
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Hou Tao
Cc: Arnd Bergmann, Martin KaFai Lau, Song Liu, Yonghong Song,
 John Fastabend, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
 Kumar Kartikeya Dwivedi, bpf@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH] bpf: force inc_active()/dec_active() to be inline functions
Date: Sat, 22 Jul 2023 09:47:44 +0200
Message-Id: <20230722074753.568696-1-arnd@kernel.org>
X-Mailer: git-send-email 2.39.2
X-Mailing-List: bpf@vger.kernel.org

From: Arnd Bergmann

Splitting these out into separate helper functions means that we
actually pass an uninitialized variable into another function call
if dec_active() happens to not be inlined, and CONFIG_PREEMPT_RT
is disabled:

kernel/bpf/memalloc.c: In function 'add_obj_to_free_list':
kernel/bpf/memalloc.c:200:9: error: 'flags' is used uninitialized [-Werror=uninitialized]
  200 |         dec_active(c, flags);

Mark the two functions as __always_inline so gcc can see through
this regardless of optimization level and other options that may
otherwise prevent it.

Fixes: 18e027b1c7c6d ("bpf: Factor out inc/dec of active flag into helpers.")
Signed-off-by: Arnd Bergmann
---
 kernel/bpf/memalloc.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/bpf/memalloc.c b/kernel/bpf/memalloc.c
index 51d6389e5152e..23906325298da 100644
--- a/kernel/bpf/memalloc.c
+++ b/kernel/bpf/memalloc.c
@@ -165,7 +165,7 @@ static struct mem_cgroup *get_memcg(const struct bpf_mem_cache *c)
 #endif
 }
 
-static void inc_active(struct bpf_mem_cache *c, unsigned long *flags)
+static __always_inline void inc_active(struct bpf_mem_cache *c, unsigned long *flags)
 {
 	if (IS_ENABLED(CONFIG_PREEMPT_RT))
 		/* In RT irq_work runs in per-cpu kthread, so disable
@@ -183,7 +183,7 @@ static void inc_active(struct bpf_mem_cache *c, unsigned long *flags)
 	WARN_ON_ONCE(local_inc_return(&c->active) != 1);
 }
 
-static void dec_active(struct bpf_mem_cache *c, unsigned long flags)
+static __always_inline void dec_active(struct bpf_mem_cache *c, unsigned long flags)
 {
 	local_dec(&c->active);
 	if (IS_ENABLED(CONFIG_PREEMPT_RT))
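
---
For readers not familiar with the calling pattern: the sketch below is
not part of the patch and not the actual kernel code, only a simplified
user-space illustration of why the warning appears. PREEMPT_RT,
save_flags() and restore_flags() are hypothetical stand-ins for
IS_ENABLED(CONFIG_PREEMPT_RT), local_irq_save() and local_irq_restore();
only the overall shape matches add_obj_to_free_list() in
kernel/bpf/memalloc.c.

#include <stdio.h>

#define PREEMPT_RT 0			/* the !CONFIG_PREEMPT_RT case */

/* Hypothetical stand-ins for local_irq_save()/local_irq_restore(). */
static unsigned long save_flags(void) { return 1; }
static void restore_flags(unsigned long f) { printf("restore %lu\n", f); }

/* Without __always_inline these may become real out-of-line calls. */
static void inc_active(unsigned long *flags)
{
	if (PREEMPT_RT)
		*flags = save_flags();	/* the only write to *flags */
}

static void dec_active(unsigned long flags)
{
	if (PREEMPT_RT)
		restore_flags(flags);	/* the only read of flags */
}

void add_obj_to_free_list(void)
{
	unsigned long flags;	/* never written when PREEMPT_RT == 0 */

	inc_active(&flags);
	dec_active(flags);	/* pass-by-value use of a possibly
				 * uninitialized variable */
}

int main(void)
{
	add_obj_to_free_list();
	return 0;
}

Whether a given gcc version warns on this stand-alone sketch depends on
its optimization and inlining decisions; in the kernel build the warning
only shows up when gcc chooses not to inline dec_active(), which is
exactly the case that __always_inline rules out.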