From patchwork Mon Mar 21 09:22:37 2022
X-Patchwork-Submitter: Sebastian Andrzej Siewior
X-Patchwork-Id: 12787015
X-Patchwork-Delegate: kuba@kernel.org
Date: Mon, 21 Mar 2022 10:22:37 +0100
From: Sebastian Andrzej Siewior
To: Jakub Kicinski
Cc: "Jason A. Donenfeld", Netdev, "David S. Miller", Eric Dumazet,
    Thomas Gleixner, Peter Zijlstra, Toke Høiland-Jørgensen
Subject: [PATCH net-next] net: Revert the softirq will run annotation in
 ____napi_schedule().
Message-ID: 
MIME-Version: 1.0
Content-Disposition: inline
X-Mailing-List: netdev@vger.kernel.org

The lockdep annotation lockdep_assert_softirq_will_run() expects that
either hard or soft interrupts are disabled because both guarantee that
the "raised" soft-interrupts will be processed once the context is left.

This triggers in flush_smp_call_function_from_idle(), but in this case
it explicitly calls do_softirq() if softirqs are pending.

Revert the "softirq will run" annotation in ____napi_schedule() and
move the check back to __netif_rx() as it was. Keep the IRQ-off assert
in ____napi_schedule() because this is always required.

Fixes: fbd9a2ceba5c7 ("net: Add lockdep asserts to ____napi_schedule().")
Signed-off-by: Sebastian Andrzej Siewior
Reviewed-by: Jason A. Donenfeld
---
 include/linux/lockdep.h | 7 -------
 net/core/dev.c          | 3 +--
 2 files changed, 1 insertion(+), 9 deletions(-)

diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index 0cc65d2167015..467b94257105e 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -329,12 +329,6 @@ extern void lock_unpin_lock(struct lockdep_map *lock, struct pin_cookie);
 #define lockdep_assert_none_held_once()		\
 	lockdep_assert_once(!current->lockdep_depth)
 
-/*
- * Ensure that softirq is handled within the callchain and not delayed and
- * handled by chance.
- */
-#define lockdep_assert_softirq_will_run()	\
-	lockdep_assert_once(hardirq_count() | softirq_count())
-
 #define lockdep_recursing(tsk)	((tsk)->lockdep_recursion)
 
@@ -420,7 +414,6 @@ extern int lockdep_is_held(const void *);
 #define lockdep_assert_held_read(l)		do { (void)(l); } while (0)
 #define lockdep_assert_held_once(l)		do { (void)(l); } while (0)
 #define lockdep_assert_none_held_once()	do { } while (0)
-#define lockdep_assert_softirq_will_run()	do { } while (0)
 
 #define lockdep_recursing(tsk)	(0)
 
diff --git a/net/core/dev.c b/net/core/dev.c
index 8e0cc5f2020d3..8a5109479dbe2 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -4277,7 +4277,6 @@ static inline void ____napi_schedule(struct softnet_data *sd,
 {
 	struct task_struct *thread;
 
-	lockdep_assert_softirq_will_run();
 	lockdep_assert_irqs_disabled();
 
 	if (test_bit(NAPI_STATE_THREADED, &napi->state)) {
@@ -4887,7 +4886,7 @@ int __netif_rx(struct sk_buff *skb)
 {
 	int ret;
 
-	lockdep_assert_softirq_will_run();
+	lockdep_assert_once(hardirq_count() | softirq_count());
 
 	trace_netif_rx_entry(skb);
 
 	ret = netif_rx_internal(skb);