From patchwork Fri May 3 18:25:05 2024
X-Patchwork-Submitter: Sebastian Andrzej Siewior
X-Patchwork-Id: 13653311
X-Patchwork-Delegate: kuba@kernel.org
From: Sebastian Andrzej Siewior
To: linux-kernel@vger.kernel.org, netdev@vger.kernel.org
Miller" , Boqun Feng , Daniel Borkmann , Eric Dumazet , Frederic Weisbecker , Ingo Molnar , Jakub Kicinski , Paolo Abeni , Peter Zijlstra , Thomas Gleixner , Waiman Long , Will Deacon , Sebastian Andrzej Siewior Subject: [PATCH net-next 01/15] locking/local_lock: Introduce guard definition for local_lock. Date: Fri, 3 May 2024 20:25:05 +0200 Message-ID: <20240503182957.1042122-2-bigeasy@linutronix.de> In-Reply-To: <20240503182957.1042122-1-bigeasy@linutronix.de> References: <20240503182957.1042122-1-bigeasy@linutronix.de> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org Introduce lock guard definition for local_lock_t. There are no users yet. Signed-off-by: Sebastian Andrzej Siewior --- include/linux/local_lock.h | 11 +++++++++++ 1 file changed, 11 insertions(+) diff --git a/include/linux/local_lock.h b/include/linux/local_lock.h index e55010fa73296..82366a37f4474 100644 --- a/include/linux/local_lock.h +++ b/include/linux/local_lock.h @@ -51,4 +51,15 @@ #define local_unlock_irqrestore(lock, flags) \ __local_unlock_irqrestore(lock, flags) +DEFINE_GUARD(local_lock, local_lock_t __percpu*, + local_lock(_T), + local_unlock(_T)) +DEFINE_GUARD(local_lock_irq, local_lock_t __percpu*, + local_lock_irq(_T), + local_unlock_irq(_T)) +DEFINE_LOCK_GUARD_1(local_lock_irqsave, local_lock_t __percpu, + local_lock_irqsave(_T->lock, _T->flags), + local_unlock_irqrestore(_T->lock, _T->flags), + unsigned long flags) + #endif From patchwork Fri May 3 18:25:06 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sebastian Andrzej Siewior X-Patchwork-Id: 13653314 X-Patchwork-Delegate: kuba@kernel.org Received: from galois.linutronix.de (Galois.linutronix.de [193.142.43.55]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 49802158864; Fri, 3 May 2024 18:30:08 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=193.142.43.55 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1714761010; cv=none; b=P0H0UbZjfxvwt3DjIz/Y4I3JOr/JTHRncMPhk7Eat4yGXiuR8flMSPNlAQD8TrBVG53QoXimI4rToNiswQhKKTMR+iISLDeuyBlMi8k37hEim5+5inAzt1bxMrKr+FgczFEJyHAwoKungbr93JU4oJ+jt2c5JPP74d/8HSJ/nNw= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1714761010; c=relaxed/simple; bh=MZehckFQCZ1u/y7PloNAo3YtUp4b7HlyP3SBCD07p70=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=gApDYJ29j3jGWaZVbG/5cYmMUhTvM92s9SDwNXh68+odupimxvkEe4L9bcRphguhH4siLc2ZeOmXCOKSdp+P0ujhnsjBuiMEqWQZvnFE+syP2IK5g6EgKL7/JoMC8c+CSJqwJIgYdrFw8NzdotJCIMemINGPHlxHFXQ8o//eksE= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linutronix.de; spf=pass smtp.mailfrom=linutronix.de; dkim=pass (2048-bit key) header.d=linutronix.de header.i=@linutronix.de header.b=vtZGc5L9; dkim=permerror (0-bit key) header.d=linutronix.de header.i=@linutronix.de header.b=P10KOxGy; arc=none smtp.client-ip=193.142.43.55 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linutronix.de Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linutronix.de Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=linutronix.de header.i=@linutronix.de header.b="vtZGc5L9"; dkim=permerror 
To: linux-kernel@vger.kernel.org, netdev@vger.kernel.org
Cc: "David S. Miller", Boqun Feng, Daniel Borkmann, Eric Dumazet,
 Frederic Weisbecker, Ingo Molnar, Jakub Kicinski, Paolo Abeni,
 Peter Zijlstra, Thomas Gleixner, Waiman Long, Will Deacon,
 Sebastian Andrzej Siewior
Subject: [PATCH net-next 02/15] locking/local_lock: Add local nested BH locking infrastructure.
Date: Fri, 3 May 2024 20:25:06 +0200
Message-ID: <20240503182957.1042122-3-bigeasy@linutronix.de>
In-Reply-To: <20240503182957.1042122-1-bigeasy@linutronix.de>
References: <20240503182957.1042122-1-bigeasy@linutronix.de>

Add local_lock_nested_bh() locking. It is based on local_lock_t and
the naming follows the preempt_disable_nested() example.

For !PREEMPT_RT + !LOCKDEP it is a per-CPU annotation for locking
assumptions based on local_bh_disable(). The macro is optimized away
during compilation.

For !PREEMPT_RT + LOCKDEP the local_lock_nested_bh() is reduced to the
usual lock-acquire plus lockdep_assert_in_softirq() - ensuring that BH
is disabled.

For PREEMPT_RT local_lock_nested_bh() acquires the specified per-CPU
lock. It does not disable CPU migration because it relies on
local_bh_disable() disabling CPU migration. With LOCKDEP it performs
the usual lockdep checks as with !PREEMPT_RT. Due to include hell the
softirq check has been moved to spinlock.c.

The intention is to use this locking in places where locking of a
per-CPU variable relies on BH being disabled. Instead of treating
disabled bottom halves as a big per-CPU lock, PREEMPT_RT can use this
to reduce the locking scope to what actually needs protecting. A side
effect is that it also documents the protection scope of the per-CPU
variables.
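To make the intended use concrete, here is a minimal sketch (not part
of this patch; the structure and function names are made up for
illustration) of a per-CPU variable that today relies solely on
local_bh_disable() and gains an explicit nested-BH lock:

	struct frag_cache {
		local_lock_t	bh_lock;	/* no-op on !PREEMPT_RT && !LOCKDEP */
		unsigned int	count;
	};

	static DEFINE_PER_CPU(struct frag_cache, frag_cache) = {
		.bh_lock = INIT_LOCAL_LOCK(bh_lock),
	};

	static void frag_cache_inc(void)
	{
		struct frag_cache *fc;

		local_bh_disable();
		/* Document and, on PREEMPT_RT, enforce the BH-based protection. */
		local_lock_nested_bh(&frag_cache.bh_lock);
		fc = this_cpu_ptr(&frag_cache);
		fc->count++;
		local_unlock_nested_bh(&frag_cache.bh_lock);
		local_bh_enable();
	}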
Signed-off-by: Sebastian Andrzej Siewior --- include/linux/local_lock.h | 10 ++++++++++ include/linux/local_lock_internal.h | 31 +++++++++++++++++++++++++++++ include/linux/lockdep.h | 3 +++ kernel/locking/spinlock.c | 8 ++++++++ 4 files changed, 52 insertions(+) diff --git a/include/linux/local_lock.h b/include/linux/local_lock.h index 82366a37f4474..091dc0b6bdfb9 100644 --- a/include/linux/local_lock.h +++ b/include/linux/local_lock.h @@ -62,4 +62,14 @@ DEFINE_LOCK_GUARD_1(local_lock_irqsave, local_lock_t __percpu, local_unlock_irqrestore(_T->lock, _T->flags), unsigned long flags) +#define local_lock_nested_bh(_lock) \ + __local_lock_nested_bh(_lock) + +#define local_unlock_nested_bh(_lock) \ + __local_unlock_nested_bh(_lock) + +DEFINE_GUARD(local_lock_nested_bh, local_lock_t __percpu*, + local_lock_nested_bh(_T), + local_unlock_nested_bh(_T)) + #endif diff --git a/include/linux/local_lock_internal.h b/include/linux/local_lock_internal.h index 975e33b793a77..8dd71fbbb6d2b 100644 --- a/include/linux/local_lock_internal.h +++ b/include/linux/local_lock_internal.h @@ -62,6 +62,17 @@ do { \ local_lock_debug_init(lock); \ } while (0) +#define __spinlock_nested_bh_init(lock) \ +do { \ + static struct lock_class_key __key; \ + \ + debug_check_no_locks_freed((void *)lock, sizeof(*lock));\ + lockdep_init_map_type(&(lock)->dep_map, #lock, &__key, \ + 0, LD_WAIT_CONFIG, LD_WAIT_INV, \ + LD_LOCK_NORMAL); \ + local_lock_debug_init(lock); \ +} while (0) + #define __local_lock(lock) \ do { \ preempt_disable(); \ @@ -98,6 +109,15 @@ do { \ local_irq_restore(flags); \ } while (0) +#define __local_lock_nested_bh(lock) \ + do { \ + lockdep_assert_in_softirq(); \ + local_lock_acquire(this_cpu_ptr(lock)); \ + } while (0) + +#define __local_unlock_nested_bh(lock) \ + local_lock_release(this_cpu_ptr(lock)) + #else /* !CONFIG_PREEMPT_RT */ /* @@ -138,4 +158,15 @@ typedef spinlock_t local_lock_t; #define __local_unlock_irqrestore(lock, flags) __local_unlock(lock) +#define __local_lock_nested_bh(lock) \ +do { \ + lockdep_assert_in_softirq_func(); \ + spin_lock(this_cpu_ptr(lock)); \ +} while (0) + +#define __local_unlock_nested_bh(lock) \ +do { \ + spin_unlock(this_cpu_ptr((lock))); \ +} while (0) + #endif /* CONFIG_PREEMPT_RT */ diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h index 08b0d1d9d78b7..3f5a551579cc9 100644 --- a/include/linux/lockdep.h +++ b/include/linux/lockdep.h @@ -600,6 +600,8 @@ do { \ (!in_softirq() || in_irq() || in_nmi())); \ } while (0) +extern void lockdep_assert_in_softirq_func(void); + #else # define might_lock(lock) do { } while (0) # define might_lock_read(lock) do { } while (0) @@ -613,6 +615,7 @@ do { \ # define lockdep_assert_preemption_enabled() do { } while (0) # define lockdep_assert_preemption_disabled() do { } while (0) # define lockdep_assert_in_softirq() do { } while (0) +# define lockdep_assert_in_softirq_func() do { } while (0) #endif #ifdef CONFIG_PROVE_RAW_LOCK_NESTING diff --git a/kernel/locking/spinlock.c b/kernel/locking/spinlock.c index 8475a0794f8c5..438c6086d540e 100644 --- a/kernel/locking/spinlock.c +++ b/kernel/locking/spinlock.c @@ -413,3 +413,11 @@ notrace int in_lock_functions(unsigned long addr) && addr < (unsigned long)__lock_text_end; } EXPORT_SYMBOL(in_lock_functions); + +#if defined(CONFIG_PROVE_LOCKING) && defined(CONFIG_PREEMPT_RT) +void notrace lockdep_assert_in_softirq_func(void) +{ + lockdep_assert_in_softirq(); +} +EXPORT_SYMBOL(lockdep_assert_in_softirq_func); +#endif From patchwork Fri May 3 18:25:07 2024 Content-Type: text/plain; 
charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sebastian Andrzej Siewior X-Patchwork-Id: 13653313 X-Patchwork-Delegate: kuba@kernel.org Received: from galois.linutronix.de (Galois.linutronix.de [193.142.43.55]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 49842158866; Fri, 3 May 2024 18:30:08 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=193.142.43.55 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1714761009; cv=none; b=j5BVZdj+kmbpp8LHHl2sOBV6hYaQxrfNDC3hWtB58IGarfO1DMlf0UzY/e6udXI99a/RIKDy4mXFXX7oh0tY0b1NPEc/FUkPUNjjMbHzcT5t51z7GSONOQBq097DxV+NqdaiZ6TlY1Y0hUHUOfTBXrNPD34ammkmuzWSM0n+W2o= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1714761009; c=relaxed/simple; bh=yrTjq6I9F4tvJDSchi3W1D6EoEX8ecSy8/O3MSsUTBc=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=bMMZzzMfzipNaHYyjzNxEN/1rAj4aXCIi+2IqAtYJogBgNT9Tzduc0hrFnXUHwvoT/CEFJHwA/y+UIvAnDsrZl8u24gqDDmc6uBt5Bcj7LrXlgnxZ43gzcT8M49p7DC/4Rad6V0oLrJQGOSfNN0ATBfIfLMFszKkAVbhjf03cc0= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linutronix.de; spf=pass smtp.mailfrom=linutronix.de; dkim=pass (2048-bit key) header.d=linutronix.de header.i=@linutronix.de header.b=D6tJjx30; dkim=permerror (0-bit key) header.d=linutronix.de header.i=@linutronix.de header.b=2g1qQerk; arc=none smtp.client-ip=193.142.43.55 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linutronix.de Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linutronix.de Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=linutronix.de header.i=@linutronix.de header.b="D6tJjx30"; dkim=permerror (0-bit key) header.d=linutronix.de header.i=@linutronix.de header.b="2g1qQerk" From: Sebastian Andrzej Siewior DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1714761006; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=oD7RGOip8UU5tOqDcDr5U4zAaPpFJ3hsGwy+4VIdr1c=; b=D6tJjx30NRzffTkKAqPkisdJQCPnRO4mJh3Wcb/ErQjw30+FSPEpOoHD5b9aJeVOzO4Ijz 34EGfDKaQvlvEoAesT03UJNeWrddtPGmIfjfkcKbeY7632hel+KGyVfxnhyYP9Vy/p+4gj E0rqS/uBmoNjUoKp7mYqZB5fpXmi6pOObeBy9qFfnAuI7h/nJL73HXmF6ytBdDk3sm78XN Fh5vqiM234mcYgDWnVtNhALoHdtV2ZyFQKBf2SDrtQAZOqfEFH2yNXLhh1dly7xSJn8MFr sStx5oJ2+0rntuhxbyizL8U55kC4wnDkYiz05njvAOBAEUY21nV3qvgPlic4jg== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1714761006; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=oD7RGOip8UU5tOqDcDr5U4zAaPpFJ3hsGwy+4VIdr1c=; b=2g1qQerkfskhwQlDA9yoByrlSmRNXtWLAnixjwn0fom7Ie4H8qdCoHPCPs5RcxAu0ovqS8 pex8BGioCbm+0KAw== To: linux-kernel@vger.kernel.org, netdev@vger.kernel.org Cc: "David S. 
Miller" , Boqun Feng , Daniel Borkmann , Eric Dumazet , Frederic Weisbecker , Ingo Molnar , Jakub Kicinski , Paolo Abeni , Peter Zijlstra , Thomas Gleixner , Waiman Long , Will Deacon , Sebastian Andrzej Siewior Subject: [PATCH net-next 03/15] net: Use __napi_alloc_frag_align() instead of open coding it. Date: Fri, 3 May 2024 20:25:07 +0200 Message-ID: <20240503182957.1042122-4-bigeasy@linutronix.de> In-Reply-To: <20240503182957.1042122-1-bigeasy@linutronix.de> References: <20240503182957.1042122-1-bigeasy@linutronix.de> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org The else condition within __netdev_alloc_frag_align() is an open coded __napi_alloc_frag_align(). Use __napi_alloc_frag_align() instead of open coding it. Move fragsz assignment before page_frag_alloc_align() invocation because __napi_alloc_frag_align() also contains this statement. Signed-off-by: Sebastian Andrzej Siewior --- net/core/skbuff.c | 8 ++------ 1 file changed, 2 insertions(+), 6 deletions(-) diff --git a/net/core/skbuff.c b/net/core/skbuff.c index 28cd640a6ea97..c4479d5721a2a 100644 --- a/net/core/skbuff.c +++ b/net/core/skbuff.c @@ -318,19 +318,15 @@ void *__netdev_alloc_frag_align(unsigned int fragsz, unsigned int align_mask) { void *data; - fragsz = SKB_DATA_ALIGN(fragsz); if (in_hardirq() || irqs_disabled()) { struct page_frag_cache *nc = this_cpu_ptr(&netdev_alloc_cache); + fragsz = SKB_DATA_ALIGN(fragsz); data = __page_frag_alloc_align(nc, fragsz, GFP_ATOMIC, align_mask); } else { - struct napi_alloc_cache *nc; - local_bh_disable(); - nc = this_cpu_ptr(&napi_alloc_cache); - data = __page_frag_alloc_align(&nc->page, fragsz, GFP_ATOMIC, - align_mask); + data = __napi_alloc_frag_align(fragsz, align_mask); local_bh_enable(); } return data; From patchwork Fri May 3 18:25:08 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sebastian Andrzej Siewior X-Patchwork-Id: 13653315 X-Patchwork-Delegate: kuba@kernel.org Received: from galois.linutronix.de (Galois.linutronix.de [193.142.43.55]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 0AD1E158219; Fri, 3 May 2024 18:30:08 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=193.142.43.55 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1714761011; cv=none; b=Qmnp1pZmH2YjS1G8u5107LY0th8JUAn15+P/9AM5md2eiIYPiGGDLTl+v4qmi5tWRIany33JkZTESyPHxGquwzPPGWCjqHE+A0e0PHFBQsLjATSP+VFpwVidVmXuoWmzKbmoB9fUkvVrKDfCWP6TuCrRGS+V7Wb7RIPVXpoe/RY= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1714761011; c=relaxed/simple; bh=6F+WmYxkkmptqKxlRwOLSf3Q3FyJx8XzwEyn4NvBr2U=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=vEC+6pSK/QwD9RQ6Ilq6GInYa4yun8sLV2tMc4vY/l1cwGdJN+N2kg5klNCW2oTiXXog+RghyJcddFgqR391tuRMesax4OQVCJQlaJvNOUfuBeyj2wfqTuhGmVKjUKgGdIW+0D3XWQqckHabyi2lAwe83Mb+bJ5Tg4Jcj9+qqCg= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linutronix.de; spf=pass smtp.mailfrom=linutronix.de; dkim=pass (2048-bit key) header.d=linutronix.de header.i=@linutronix.de header.b=PduJYgXL; dkim=permerror (0-bit key) header.d=linutronix.de header.i=@linutronix.de header.b=sAykS50v; arc=none smtp.client-ip=193.142.43.55 Authentication-Results: 
To: linux-kernel@vger.kernel.org, netdev@vger.kernel.org
Cc: "David S. Miller", Boqun Feng, Daniel Borkmann, Eric Dumazet,
 Frederic Weisbecker, Ingo Molnar, Jakub Kicinski, Paolo Abeni,
 Peter Zijlstra, Thomas Gleixner, Waiman Long, Will Deacon,
 Sebastian Andrzej Siewior
Subject: [PATCH net-next 04/15] net: Use nested-BH locking for napi_alloc_cache.
Date: Fri, 3 May 2024 20:25:08 +0200
Message-ID: <20240503182957.1042122-5-bigeasy@linutronix.de>
In-Reply-To: <20240503182957.1042122-1-bigeasy@linutronix.de>
References: <20240503182957.1042122-1-bigeasy@linutronix.de>

napi_alloc_cache is a per-CPU variable and relies on disabled BH for
its locking. Without per-CPU locking in local_bh_disable() on
PREEMPT_RT this data structure requires explicit locking.

Add a local_lock_t to the data structure and use
local_lock_nested_bh() for locking. This change adds only lockdep
coverage and does not alter the functional behaviour for !PREEMPT_RT.
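As a rough sketch of the pattern this and the following patches apply,
using the scope-based guard() form made available by the earlier
patches in the series (the names below are hypothetical, not the
actual skbuff code):

	struct pkt_cache {
		local_lock_t	bh_lock;
		unsigned int	count;
		void		*slots[8];
	};

	static DEFINE_PER_CPU(struct pkt_cache, pkt_cache) = {
		.bh_lock = INIT_LOCAL_LOCK(bh_lock),
	};

	/* Caller runs with BH disabled, e.g. from NAPI poll context. */
	static void *pkt_cache_get(void)
	{
		struct pkt_cache *pc = this_cpu_ptr(&pkt_cache);

		/* Scoped acquire; dropped automatically when the scope ends. */
		guard(local_lock_nested_bh)(&pkt_cache.bh_lock);
		if (!pc->count)
			return NULL;
		return pc->slots[--pc->count];
	}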
Signed-off-by: Sebastian Andrzej Siewior --- net/core/skbuff.c | 17 ++++++++++++++--- 1 file changed, 14 insertions(+), 3 deletions(-) diff --git a/net/core/skbuff.c b/net/core/skbuff.c index c4479d5721a2a..c8b40e6237057 100644 --- a/net/core/skbuff.c +++ b/net/core/skbuff.c @@ -277,6 +277,7 @@ static void *page_frag_alloc_1k(struct page_frag_1k *nc, gfp_t gfp_mask) #endif struct napi_alloc_cache { + local_lock_t bh_lock; struct page_frag_cache page; struct page_frag_1k page_small; unsigned int skb_count; @@ -284,7 +285,9 @@ struct napi_alloc_cache { }; static DEFINE_PER_CPU(struct page_frag_cache, netdev_alloc_cache); -static DEFINE_PER_CPU(struct napi_alloc_cache, napi_alloc_cache); +static DEFINE_PER_CPU(struct napi_alloc_cache, napi_alloc_cache) = { + .bh_lock = INIT_LOCAL_LOCK(bh_lock), +}; /* Double check that napi_get_frags() allocates skbs with * skb->head being backed by slab, not a page fragment. @@ -308,6 +311,7 @@ void *__napi_alloc_frag_align(unsigned int fragsz, unsigned int align_mask) struct napi_alloc_cache *nc = this_cpu_ptr(&napi_alloc_cache); fragsz = SKB_DATA_ALIGN(fragsz); + guard(local_lock_nested_bh)(&napi_alloc_cache.bh_lock); return __page_frag_alloc_align(&nc->page, fragsz, GFP_ATOMIC, align_mask); @@ -338,6 +342,7 @@ static struct sk_buff *napi_skb_cache_get(void) struct napi_alloc_cache *nc = this_cpu_ptr(&napi_alloc_cache); struct sk_buff *skb; + guard(local_lock_nested_bh)(&napi_alloc_cache.bh_lock); if (unlikely(!nc->skb_count)) { nc->skb_count = kmem_cache_alloc_bulk(net_hotdata.skbuff_cache, GFP_ATOMIC, @@ -740,9 +745,13 @@ struct sk_buff *__netdev_alloc_skb(struct net_device *dev, unsigned int len, pfmemalloc = nc->pfmemalloc; } else { local_bh_disable(); + local_lock_nested_bh(&napi_alloc_cache.bh_lock); + nc = this_cpu_ptr(&napi_alloc_cache.page); data = page_frag_alloc(nc, len, gfp_mask); pfmemalloc = nc->pfmemalloc; + + local_unlock_nested_bh(&napi_alloc_cache.bh_lock); local_bh_enable(); } @@ -806,11 +815,11 @@ struct sk_buff *napi_alloc_skb(struct napi_struct *napi, unsigned int len) goto skb_success; } - nc = this_cpu_ptr(&napi_alloc_cache); - if (sk_memalloc_socks()) gfp_mask |= __GFP_MEMALLOC; + local_lock_nested_bh(&napi_alloc_cache.bh_lock); + nc = this_cpu_ptr(&napi_alloc_cache); if (NAPI_HAS_SMALL_PAGE_FRAG && len <= SKB_WITH_OVERHEAD(1024)) { /* we are artificially inflating the allocation size, but * that is not as bad as it may look like, as: @@ -832,6 +841,7 @@ struct sk_buff *napi_alloc_skb(struct napi_struct *napi, unsigned int len) data = page_frag_alloc(&nc->page, len, gfp_mask); pfmemalloc = nc->page.pfmemalloc; } + local_unlock_nested_bh(&napi_alloc_cache.bh_lock); if (unlikely(!data)) return NULL; @@ -1393,6 +1403,7 @@ static void napi_skb_cache_put(struct sk_buff *skb) if (!kasan_mempool_poison_object(skb)) return; + guard(local_lock_nested_bh)(&napi_alloc_cache.bh_lock); nc->skb_cache[nc->skb_count++] = skb; if (unlikely(nc->skb_count == NAPI_SKB_CACHE_SIZE)) { From patchwork Fri May 3 18:25:09 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sebastian Andrzej Siewior X-Patchwork-Id: 13653317 X-Patchwork-Delegate: kuba@kernel.org Received: from galois.linutronix.de (Galois.linutronix.de [193.142.43.55]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 73673158D71; Fri, 3 May 2024 18:30:10 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; 
From: Sebastian Andrzej Siewior
To: linux-kernel@vger.kernel.org, netdev@vger.kernel.org
Cc: "David S. Miller", Boqun Feng, Daniel Borkmann, Eric Dumazet,
 Frederic Weisbecker, Ingo Molnar, Jakub Kicinski, Paolo Abeni,
 Peter Zijlstra, Thomas Gleixner, Waiman Long, Will Deacon,
 Sebastian Andrzej Siewior, David Ahern
Subject: [PATCH net-next 05/15] net/tcp_sigpool: Use nested-BH locking for sigpool_scratch.
Date: Fri, 3 May 2024 20:25:09 +0200
Message-ID: <20240503182957.1042122-6-bigeasy@linutronix.de>
In-Reply-To: <20240503182957.1042122-1-bigeasy@linutronix.de>
References: <20240503182957.1042122-1-bigeasy@linutronix.de>

sigpool_scratch is a per-CPU variable and relies on disabled BH for its locking.
Without per-CPU locking in local_bh_disable() on PREEMPT_RT this data structure requires explicit locking. Make a struct with a pad member (original sigpool_scratch) and a local_lock_t and use local_lock_nested_bh() for locking. This change adds only lockdep coverage and does not alter the functional behaviour for !PREEMPT_RT. Cc: David Ahern Signed-off-by: Sebastian Andrzej Siewior --- net/ipv4/tcp_sigpool.c | 17 +++++++++++++---- 1 file changed, 13 insertions(+), 4 deletions(-) diff --git a/net/ipv4/tcp_sigpool.c b/net/ipv4/tcp_sigpool.c index 8512cb09ebc09..d8a4f192873a2 100644 --- a/net/ipv4/tcp_sigpool.c +++ b/net/ipv4/tcp_sigpool.c @@ -10,7 +10,14 @@ #include static size_t __scratch_size; -static DEFINE_PER_CPU(void __rcu *, sigpool_scratch); +struct sigpool_scratch { + local_lock_t bh_lock; + void __rcu *pad; +}; + +static DEFINE_PER_CPU(struct sigpool_scratch, sigpool_scratch) = { + .bh_lock = INIT_LOCAL_LOCK(bh_lock), +}; struct sigpool_entry { struct crypto_ahash *hash; @@ -72,7 +79,7 @@ static int sigpool_reserve_scratch(size_t size) break; } - old_scratch = rcu_replace_pointer(per_cpu(sigpool_scratch, cpu), + old_scratch = rcu_replace_pointer(per_cpu(sigpool_scratch.pad, cpu), scratch, lockdep_is_held(&cpool_mutex)); if (!cpu_online(cpu) || !old_scratch) { kfree(old_scratch); @@ -93,7 +100,7 @@ static void sigpool_scratch_free(void) int cpu; for_each_possible_cpu(cpu) - kfree(rcu_replace_pointer(per_cpu(sigpool_scratch, cpu), + kfree(rcu_replace_pointer(per_cpu(sigpool_scratch.pad, cpu), NULL, lockdep_is_held(&cpool_mutex))); __scratch_size = 0; } @@ -277,7 +284,8 @@ int tcp_sigpool_start(unsigned int id, struct tcp_sigpool *c) __cond_acquires(RC /* Pairs with tcp_sigpool_reserve_scratch(), scratch area is * valid (allocated) until tcp_sigpool_end(). 
*/ - c->scratch = rcu_dereference_bh(*this_cpu_ptr(&sigpool_scratch)); + local_lock_nested_bh(&sigpool_scratch.bh_lock); + c->scratch = rcu_dereference_bh(*this_cpu_ptr(&sigpool_scratch.pad)); return 0; } EXPORT_SYMBOL_GPL(tcp_sigpool_start); @@ -286,6 +294,7 @@ void tcp_sigpool_end(struct tcp_sigpool *c) __releases(RCU_BH) { struct crypto_ahash *hash = crypto_ahash_reqtfm(c->req); + local_unlock_nested_bh(&sigpool_scratch.bh_lock); rcu_read_unlock_bh(); ahash_request_free(c->req); crypto_free_ahash(hash); From patchwork Fri May 3 18:25:10 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sebastian Andrzej Siewior X-Patchwork-Id: 13653316 X-Patchwork-Delegate: kuba@kernel.org Received: from galois.linutronix.de (Galois.linutronix.de [193.142.43.55]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 73953158D75; Fri, 3 May 2024 18:30:10 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=193.142.43.55 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1714761012; cv=none; b=CxYf9dQmjuXznOFg3GmfU4N0H+71sx/5HUe8KqpS23zNXtlUgfDJYBpXHzUN6hXsVTlNYAwUs2G65QCRQoMhrLh1N+5AsgHgum6DIAQNLswHGiUmGN8G/jJMWUSZf5dp6XxlxKcYy3gK2/wODS1Yz3Ihi3OqHAHsOaXkioAspLY= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1714761012; c=relaxed/simple; bh=bqzM87LOMftzliWhYJsIqE05aE0+TuySDGTAO0goYyE=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=C4ULaM+vN9awlWBqfhA7ZjWW6b0zl1GcDV+tJUXw3QWVuEtNj+LuwqD4EHecuegWZIOVtOSCvTuyOsDetAegw5aMqjZjxkgHeB65b6cegpDpRla1FMubEiLZ1704+bRRr9QhiBLujetljIg9m2XHLQQGZK7Ccj+Mr9X3MX/DIeA= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linutronix.de; spf=pass smtp.mailfrom=linutronix.de; dkim=pass (2048-bit key) header.d=linutronix.de header.i=@linutronix.de header.b=TLGXYmS4; dkim=permerror (0-bit key) header.d=linutronix.de header.i=@linutronix.de header.b=GeoN2E19; arc=none smtp.client-ip=193.142.43.55 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linutronix.de Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linutronix.de Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=linutronix.de header.i=@linutronix.de header.b="TLGXYmS4"; dkim=permerror (0-bit key) header.d=linutronix.de header.i=@linutronix.de header.b="GeoN2E19" From: Sebastian Andrzej Siewior DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1714761007; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=FPrvqok0v8KL/h2cmtllHWTg95fRWT2uDug0W0NGbSA=; b=TLGXYmS48uR4l4t3jkM1UgRDY0eN8bLVpPnQdFRsk4bcX1vI+nZgMdfP95YMZwpCx8BpSF NBp0LmwpcOdW6r7ICS3jlOkpSLPzjqd3li9RhZR5m6HG30zE3YQTwJtlvEU4u2rRe81rN7 PonqaDtlSB1XUeNRMjD7PC2U+lQCjP3nyKvMwCZ7YgJYhzUhGl03A03FGn4OcMb+XAg4ao DTF78Y7Zmx6nkOBalSCiLcqwBOMRMKHcJdXbK0Bhfy2HgcBrkD2J75hHGvblgiJLr3C0jA AbfeXX2hV4rUJMMSoN5dWraesoYzqJ53X6nbQ65d3NDX8SQvICty7qQ6c+oXcQ== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1714761007; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: 
content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=FPrvqok0v8KL/h2cmtllHWTg95fRWT2uDug0W0NGbSA=; b=GeoN2E19t/3CDZcmhQJ+KroiNoiN2qyawYUHPKOTA344mWUzaqYyTV3M0NYHF4aNUZvxpb b4cKcXx8zFwV59Ag== To: linux-kernel@vger.kernel.org, netdev@vger.kernel.org Cc: "David S. Miller" , Boqun Feng , Daniel Borkmann , Eric Dumazet , Frederic Weisbecker , Ingo Molnar , Jakub Kicinski , Paolo Abeni , Peter Zijlstra , Thomas Gleixner , Waiman Long , Will Deacon , Sebastian Andrzej Siewior , David Ahern Subject: [PATCH net-next 06/15] net/ipv4: Use nested-BH locking for ipv4_tcp_sk. Date: Fri, 3 May 2024 20:25:10 +0200 Message-ID: <20240503182957.1042122-7-bigeasy@linutronix.de> In-Reply-To: <20240503182957.1042122-1-bigeasy@linutronix.de> References: <20240503182957.1042122-1-bigeasy@linutronix.de> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org ipv4_tcp_sk is a per-CPU variable and relies on disabled BH for its locking. Without per-CPU locking in local_bh_disable() on PREEMPT_RT this data structure requires explicit locking. Make a struct with a sock member (original ipv4_tcp_sk) and a local_lock_t and use local_lock_nested_bh() for locking. This change adds only lockdep coverage and does not alter the functional behaviour for !PREEMPT_RT. Cc: David Ahern Signed-off-by: Sebastian Andrzej Siewior --- include/net/sock.h | 5 +++++ net/ipv4/tcp_ipv4.c | 15 +++++++++++---- 2 files changed, 16 insertions(+), 4 deletions(-) diff --git a/include/net/sock.h b/include/net/sock.h index 0450494a1766a..8380898d71267 100644 --- a/include/net/sock.h +++ b/include/net/sock.h @@ -544,6 +544,11 @@ struct sock { netns_tracker ns_tracker; }; +struct sock_bh_locked { + struct sock *sock; + local_lock_t bh_lock; +}; + enum sk_pacing { SK_PACING_NONE = 0, SK_PACING_NEEDED = 1, diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c index 0427deca3e0eb..eefeff2d2f2b1 100644 --- a/net/ipv4/tcp_ipv4.c +++ b/net/ipv4/tcp_ipv4.c @@ -93,7 +93,9 @@ static int tcp_v4_md5_hash_hdr(char *md5_hash, const struct tcp_md5sig_key *key, struct inet_hashinfo tcp_hashinfo; EXPORT_SYMBOL(tcp_hashinfo); -static DEFINE_PER_CPU(struct sock *, ipv4_tcp_sk); +static DEFINE_PER_CPU(struct sock_bh_locked, ipv4_tcp_sk) = { + .bh_lock = INIT_LOCAL_LOCK(bh_lock), +}; static u32 tcp_v4_init_seq(const struct sk_buff *skb) { @@ -879,7 +881,9 @@ static void tcp_v4_send_reset(const struct sock *sk, struct sk_buff *skb, arg.tos = ip_hdr(skb)->tos; arg.uid = sock_net_uid(net, sk && sk_fullsock(sk) ? sk : NULL); local_bh_disable(); - ctl_sk = this_cpu_read(ipv4_tcp_sk); + local_lock_nested_bh(&ipv4_tcp_sk.bh_lock); + ctl_sk = this_cpu_read(ipv4_tcp_sk.sock); + sock_net_set(ctl_sk, net); if (sk) { ctl_sk->sk_mark = (sk->sk_state == TCP_TIME_WAIT) ? @@ -904,6 +908,7 @@ static void tcp_v4_send_reset(const struct sock *sk, struct sk_buff *skb, sock_net_set(ctl_sk, &init_net); __TCP_INC_STATS(net, TCP_MIB_OUTSEGS); __TCP_INC_STATS(net, TCP_MIB_OUTRSTS); + local_unlock_nested_bh(&ipv4_tcp_sk.bh_lock); local_bh_enable(); #ifdef CONFIG_TCP_MD5SIG @@ -999,7 +1004,8 @@ static void tcp_v4_send_ack(const struct sock *sk, arg.tos = tos; arg.uid = sock_net_uid(net, sk_fullsock(sk) ? sk : NULL); local_bh_disable(); - ctl_sk = this_cpu_read(ipv4_tcp_sk); + local_lock_nested_bh(&ipv4_tcp_sk.bh_lock); + ctl_sk = this_cpu_read(ipv4_tcp_sk.sock); sock_net_set(ctl_sk, net); ctl_sk->sk_mark = (sk->sk_state == TCP_TIME_WAIT) ? 
inet_twsk(sk)->tw_mark : READ_ONCE(sk->sk_mark); @@ -1014,6 +1020,7 @@ static void tcp_v4_send_ack(const struct sock *sk, sock_net_set(ctl_sk, &init_net); __TCP_INC_STATS(net, TCP_MIB_OUTSEGS); + local_unlock_nested_bh(&ipv4_tcp_sk.bh_lock); local_bh_enable(); } @@ -3620,7 +3627,7 @@ void __init tcp_v4_init(void) */ inet_sk(sk)->pmtudisc = IP_PMTUDISC_DO; - per_cpu(ipv4_tcp_sk, cpu) = sk; + per_cpu(ipv4_tcp_sk.sock, cpu) = sk; } if (register_pernet_subsys(&tcp_sk_ops)) panic("Failed to create the TCP control socket.\n"); From patchwork Fri May 3 18:25:11 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sebastian Andrzej Siewior X-Patchwork-Id: 13653321 X-Patchwork-Delegate: kuba@kernel.org Received: from galois.linutronix.de (Galois.linutronix.de [193.142.43.55]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 78782158D78; Fri, 3 May 2024 18:30:10 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=193.142.43.55 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1714761013; cv=none; b=iaP/Xa1BWk/XHNjKJa1zqhn6GbekcrucVvjnEYhJnHNhrf5e83fdg7L5dIYBAqBvaxtmdoo6sa06qoMzeblsRBF219vPm2pNaPoXN8osxW/aVlRb2c10cUW2O+cQdrJ2tf2AhywoFtrKR/pPctn/Fyn1PlMthck/a1W8h3/qo2Y= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1714761013; c=relaxed/simple; bh=sk4AU8OgknjRSmTeIM8d2vACmrBkH/YqK9W4PPXihlc=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=Yu1BjmxvLeZJcnWa1CZwXOReanZYiwsQQHa7VrHx/FgYkmKISfU8eKWn+m39lNS/Ud5Hra1PI1PVQH8/Y9caX7IEYaRjJddFs1AmB0cTC3OllYOhuyfh/Y8AXGRdWbeHQzma03W625aXesGovXBZR8H5JHKmZ9GH/nBeKDs0uEw= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linutronix.de; spf=pass smtp.mailfrom=linutronix.de; dkim=pass (2048-bit key) header.d=linutronix.de header.i=@linutronix.de header.b=3N9uVA/r; dkim=permerror (0-bit key) header.d=linutronix.de header.i=@linutronix.de header.b=d+bl86la; arc=none smtp.client-ip=193.142.43.55 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linutronix.de Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linutronix.de Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=linutronix.de header.i=@linutronix.de header.b="3N9uVA/r"; dkim=permerror (0-bit key) header.d=linutronix.de header.i=@linutronix.de header.b="d+bl86la" From: Sebastian Andrzej Siewior DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1714761008; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=H7YehgQsfcS62wHoW0lT0hKawzgXS3v0pYsqGden+Fw=; b=3N9uVA/rSGGKnseYOg5jHK7lOkOnRpchV6uLIiU1Fg2aTTU9QiTO1Ov7OWI7xDZOMg9W63 hOInMkJK8RY8bUFyQACrzgRLeBXvvDUVvBYEoGl7go6XcTyKxBNSnLwEEjwaxd/MAcRszz o6xoG1SmE21FxRLKK2K983zSNGuP6Nj/YKrGVesu5hP8rPl2T8upS5JZYKVyGLQ0Oy++o/ iaJmFrHYQAqXz7LIjrpkutVOSlqyrCH7mg/cdFHZmuB4gOq6JiIirHkCwSCXUcNyJAxqdw VYhSP5JSZBYXzSJfI8IB5zb9JeocJnIUHQ/yWuNPC4vN4plH6jAdmQrTgWNW4Q== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1714761008; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: 
to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=H7YehgQsfcS62wHoW0lT0hKawzgXS3v0pYsqGden+Fw=; b=d+bl86laZw1AIF8zVC0bWXExGb/Id1llWZZysFEGJZ4g6pXGAn9IwUByl+TPwKOWD3+Mrr 4vibp7BD7BGdxbCg== To: linux-kernel@vger.kernel.org, netdev@vger.kernel.org Cc: "David S. Miller" , Boqun Feng , Daniel Borkmann , Eric Dumazet , Frederic Weisbecker , Ingo Molnar , Jakub Kicinski , Paolo Abeni , Peter Zijlstra , Thomas Gleixner , Waiman Long , Will Deacon , Sebastian Andrzej Siewior , Florian Westphal , Jozsef Kadlecsik , Nikolay Aleksandrov , Pablo Neira Ayuso , Roopa Prabhu , bridge@lists.linux.dev, coreteam@netfilter.org, netfilter-devel@vger.kernel.org Subject: [PATCH net-next 07/15] netfilter: br_netfilter: Use nested-BH locking for brnf_frag_data_storage. Date: Fri, 3 May 2024 20:25:11 +0200 Message-ID: <20240503182957.1042122-8-bigeasy@linutronix.de> In-Reply-To: <20240503182957.1042122-1-bigeasy@linutronix.de> References: <20240503182957.1042122-1-bigeasy@linutronix.de> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org brnf_frag_data_storage is a per-CPU variable and relies on disabled BH for its locking. Without per-CPU locking in local_bh_disable() on PREEMPT_RT this data structure requires explicit locking. Add a local_lock_t to the data structure and use local_lock_nested_bh() for locking. This change adds only lockdep coverage and does not alter the functional behaviour for !PREEMPT_RT. Cc: Florian Westphal Cc: Jozsef Kadlecsik Cc: Nikolay Aleksandrov Cc: Pablo Neira Ayuso Cc: Roopa Prabhu Cc: bridge@lists.linux.dev Cc: coreteam@netfilter.org Cc: netfilter-devel@vger.kernel.org Signed-off-by: Sebastian Andrzej Siewior --- net/bridge/br_netfilter_hooks.c | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --git a/net/bridge/br_netfilter_hooks.c b/net/bridge/br_netfilter_hooks.c index 7948a9e7542c4..baacd80716046 100644 --- a/net/bridge/br_netfilter_hooks.c +++ b/net/bridge/br_netfilter_hooks.c @@ -137,6 +137,7 @@ static inline bool is_pppoe_ipv6(const struct sk_buff *skb, #define NF_BRIDGE_MAX_MAC_HEADER_LENGTH (PPPOE_SES_HLEN + ETH_HLEN) struct brnf_frag_data { + local_lock_t bh_lock; char mac[NF_BRIDGE_MAX_MAC_HEADER_LENGTH]; u8 encap_size; u8 size; @@ -144,7 +145,9 @@ struct brnf_frag_data { __be16 vlan_proto; }; -static DEFINE_PER_CPU(struct brnf_frag_data, brnf_frag_data_storage); +static DEFINE_PER_CPU(struct brnf_frag_data, brnf_frag_data_storage) = { + .bh_lock = INIT_LOCAL_LOCK(bh_lock), +}; static void nf_bridge_info_free(struct sk_buff *skb) { @@ -882,6 +885,7 @@ static int br_nf_dev_queue_xmit(struct net *net, struct sock *sk, struct sk_buff IPCB(skb)->frag_max_size = nf_bridge->frag_max_size; + guard(local_lock_nested_bh)(&brnf_frag_data_storage.bh_lock); data = this_cpu_ptr(&brnf_frag_data_storage); if (skb_vlan_tag_present(skb)) { @@ -909,6 +913,7 @@ static int br_nf_dev_queue_xmit(struct net *net, struct sock *sk, struct sk_buff IP6CB(skb)->frag_max_size = nf_bridge->frag_max_size; + guard(local_lock_nested_bh)(&brnf_frag_data_storage.bh_lock); data = this_cpu_ptr(&brnf_frag_data_storage); data->encap_size = nf_bridge_encap_header_len(skb); data->size = ETH_HLEN + data->encap_size; From patchwork Fri May 3 18:25:12 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sebastian Andrzej Siewior X-Patchwork-Id: 
13653318
X-Patchwork-Delegate: kuba@kernel.org
From: Sebastian Andrzej Siewior
To: linux-kernel@vger.kernel.org, netdev@vger.kernel.org
Cc: "David S.
Miller" , Boqun Feng , Daniel Borkmann , Eric Dumazet , Frederic Weisbecker , Ingo Molnar , Jakub Kicinski , Paolo Abeni , Peter Zijlstra , Thomas Gleixner , Waiman Long , Will Deacon , Sebastian Andrzej Siewior , Ben Segall , Daniel Bristot de Oliveira , Dietmar Eggemann , Juri Lelli , Mel Gorman , Steven Rostedt , Valentin Schneider , Vincent Guittot Subject: [PATCH net-next 08/15] net: softnet_data: Make xmit.recursion per task. Date: Fri, 3 May 2024 20:25:12 +0200 Message-ID: <20240503182957.1042122-9-bigeasy@linutronix.de> In-Reply-To: <20240503182957.1042122-1-bigeasy@linutronix.de> References: <20240503182957.1042122-1-bigeasy@linutronix.de> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org Softirq is preemptible on PREEMPT_RT. Without a per-CPU lock in local_bh_disable() there is no guarantee that only one device is transmitting at a time. With preemption and multiple senders it is possible that the per-CPU recursion counter gets incremented by different threads and exceeds XMIT_RECURSION_LIMIT leading to a false positive recursion alert. Instead of adding a lock to protect the per-CPU variable it is simpler to make the counter per-task. Sending and receiving skbs happens always in thread context anyway. Having a lock to protected the per-CPU counter would block/ serialize two sending threads needlessly. It would also require a recursive lock to ensure that the owner can increment the counter further. Make the recursion counter a task_struct member on PREEMPT_RT. Cc: Ben Segall Cc: Daniel Bristot de Oliveira Cc: Dietmar Eggemann Cc: Juri Lelli Cc: Mel Gorman Cc: Steven Rostedt Cc: Valentin Schneider Cc: Vincent Guittot Signed-off-by: Sebastian Andrzej Siewior --- include/linux/netdevice.h | 11 +++++++++++ include/linux/sched.h | 4 +++- net/core/dev.h | 20 ++++++++++++++++++++ 3 files changed, 34 insertions(+), 1 deletion(-) diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h index 41853424b41d7..c551ec235f9af 100644 --- a/include/linux/netdevice.h +++ b/include/linux/netdevice.h @@ -3222,7 +3222,9 @@ struct softnet_data { #endif /* written and read only by owning cpu: */ struct { +#ifndef CONFIG_PREEMPT_RT u16 recursion; +#endif u8 more; #ifdef CONFIG_NET_EGRESS u8 skip_txqueue; @@ -3255,10 +3257,19 @@ struct softnet_data { DECLARE_PER_CPU_ALIGNED(struct softnet_data, softnet_data); +#ifdef CONFIG_PREEMPT_RT +static inline int dev_recursion_level(void) +{ + return current->net_xmit_recursion; +} + +#else + static inline int dev_recursion_level(void) { return this_cpu_read(softnet_data.xmit.recursion); } +#endif void __netif_schedule(struct Qdisc *q); void netif_schedule_queue(struct netdev_queue *txq); diff --git a/include/linux/sched.h b/include/linux/sched.h index 3c2abbc587b49..6779d3b8f2578 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -969,7 +969,9 @@ struct task_struct { /* delay due to memory thrashing */ unsigned in_thrashing:1; #endif - +#ifdef CONFIG_PREEMPT_RT + u8 net_xmit_recursion; +#endif unsigned long atomic_flags; /* Flags requiring atomic access. 
*/ struct restart_block restart_block; diff --git a/net/core/dev.h b/net/core/dev.h index b7b518bc2be55..2f96d63053ad0 100644 --- a/net/core/dev.h +++ b/net/core/dev.h @@ -150,6 +150,25 @@ struct napi_struct *napi_by_id(unsigned int napi_id); void kick_defer_list_purge(struct softnet_data *sd, unsigned int cpu); #define XMIT_RECURSION_LIMIT 8 + +#ifdef CONFIG_PREEMPT_RT +static inline bool dev_xmit_recursion(void) +{ + return unlikely(current->net_xmit_recursion > XMIT_RECURSION_LIMIT); +} + +static inline void dev_xmit_recursion_inc(void) +{ + current->net_xmit_recursion++; +} + +static inline void dev_xmit_recursion_dec(void) +{ + current->net_xmit_recursion--; +} + +#else + static inline bool dev_xmit_recursion(void) { return unlikely(__this_cpu_read(softnet_data.xmit.recursion) > @@ -165,5 +184,6 @@ static inline void dev_xmit_recursion_dec(void) { __this_cpu_dec(softnet_data.xmit.recursion); } +#endif #endif From patchwork Fri May 3 18:25:13 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sebastian Andrzej Siewior X-Patchwork-Id: 13653320 X-Patchwork-Delegate: kuba@kernel.org Received: from galois.linutronix.de (Galois.linutronix.de [193.142.43.55]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 4AC64158D9E; Fri, 3 May 2024 18:30:11 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=193.142.43.55 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1714761013; cv=none; b=Mg4auU8O5tCutnYKZNioF7KRgi4kjYQKXkv8C2LCCRU69Ql9oENhEEXbZKHAe/DE83/1At2thHzTvnAAWnhF6rmYWCQPNKpV8fifM+8nySXfK6hVRPTeD/C+gmBI3kK3LheQIPs/lhpWnt0RHShcJVo8DU6NO6PdTNth9A5hCiQ= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1714761013; c=relaxed/simple; bh=UzbC5O8aCltMC3s6U4W6tN76Z+89jwJ0gJI2hpTmCdo=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=WGPGxAUlrKY5fcFRgtZO/rooh5+2w+aO2G7BiPG6LEL+fSrvDlcPVEh/mnYxmnrEKS9Q4v9EooDEgTacQY9TtegLP6dXNxybAGgVz1qDe70JFFmIjqrBoKGJYFPFZxx54riNOjDCVncZXS3MvWnnGgZNbId3Quo+J+2Ut4PblzU= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linutronix.de; spf=pass smtp.mailfrom=linutronix.de; dkim=pass (2048-bit key) header.d=linutronix.de header.i=@linutronix.de header.b=MEbi+rzv; dkim=permerror (0-bit key) header.d=linutronix.de header.i=@linutronix.de header.b=uZ5djL0M; arc=none smtp.client-ip=193.142.43.55 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linutronix.de Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linutronix.de Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=linutronix.de header.i=@linutronix.de header.b="MEbi+rzv"; dkim=permerror (0-bit key) header.d=linutronix.de header.i=@linutronix.de header.b="uZ5djL0M" From: Sebastian Andrzej Siewior DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1714761009; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=dCLsUIZH7feV/C68ZxWORyFQ7jaG1YvJe8ZcR90vaGY=; b=MEbi+rzv/jTvhg5/6ADVT9kct1jhketAJapwlhuXEIN1s2bqWqlNZdj0zJkZB6xSIq+Y4R WbcY1KbvM/NCbyNBaptw0stkwfzbZqfjEGQsIO74/hoX9rnY033pkyKTpGrlpyN3V1woh0 
To: linux-kernel@vger.kernel.org, netdev@vger.kernel.org
Cc: "David S. Miller", Boqun Feng, Daniel Borkmann, Eric Dumazet,
 Frederic Weisbecker, Ingo Molnar, Jakub Kicinski, Paolo Abeni,
 Peter Zijlstra, Thomas Gleixner, Waiman Long, Will Deacon,
 Sebastian Andrzej Siewior
Subject: [PATCH net-next 09/15] dev: Remove PREEMPT_RT ifdefs from backlog_lock.*().
Date: Fri, 3 May 2024 20:25:13 +0200
Message-ID: <20240503182957.1042122-10-bigeasy@linutronix.de>
In-Reply-To: <20240503182957.1042122-1-bigeasy@linutronix.de>
References: <20240503182957.1042122-1-bigeasy@linutronix.de>

The backlog_napi locking (previously RPS) relies on explicit locking
if either RPS or backlog NAPI is enabled. If both are disabled then
locking was achieved by disabling interrupts except on PREEMPT_RT.
PREEMPT_RT was excluded because the needed synchronisation was already
provided by local_bh_disable().

Since the introduction of backlog NAPI and making it mandatory for
PREEMPT_RT, the ifdef within backlog_lock.*() is obsolete and can be
removed. Remove the ifdefs in backlog_lock.*().
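After this change the helpers keep their conditional structure; only
the PREEMPT_RT exclusion in the else branch goes away. Roughly (a
sketch condensed from the hunks below):

	static inline void backlog_lock_irq_disable(struct softnet_data *sd)
	{
		/* RPS or threaded backlog NAPI: the queue spinlock provides
		 * the synchronisation. On PREEMPT_RT backlog NAPI is
		 * mandatory, so this branch is always taken there.
		 */
		if (IS_ENABLED(CONFIG_RPS) || use_backlog_threads())
			spin_lock_irq(&sd->input_pkt_queue.lock);
		else
			/* Neither enabled: disabling interrupts suffices. */
			local_irq_disable();
	}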
Signed-off-by: Sebastian Andrzej Siewior --- net/core/dev.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/net/core/dev.c b/net/core/dev.c index e02d2363347e2..cf7b452ce0d74 100644 --- a/net/core/dev.c +++ b/net/core/dev.c @@ -230,7 +230,7 @@ static inline void backlog_lock_irq_save(struct softnet_data *sd, { if (IS_ENABLED(CONFIG_RPS) || use_backlog_threads()) spin_lock_irqsave(&sd->input_pkt_queue.lock, *flags); - else if (!IS_ENABLED(CONFIG_PREEMPT_RT)) + else local_irq_save(*flags); } @@ -238,7 +238,7 @@ static inline void backlog_lock_irq_disable(struct softnet_data *sd) { if (IS_ENABLED(CONFIG_RPS) || use_backlog_threads()) spin_lock_irq(&sd->input_pkt_queue.lock); - else if (!IS_ENABLED(CONFIG_PREEMPT_RT)) + else local_irq_disable(); } @@ -247,7 +247,7 @@ static inline void backlog_unlock_irq_restore(struct softnet_data *sd, { if (IS_ENABLED(CONFIG_RPS) || use_backlog_threads()) spin_unlock_irqrestore(&sd->input_pkt_queue.lock, *flags); - else if (!IS_ENABLED(CONFIG_PREEMPT_RT)) + else local_irq_restore(*flags); } @@ -255,7 +255,7 @@ static inline void backlog_unlock_irq_enable(struct softnet_data *sd) { if (IS_ENABLED(CONFIG_RPS) || use_backlog_threads()) spin_unlock_irq(&sd->input_pkt_queue.lock); - else if (!IS_ENABLED(CONFIG_PREEMPT_RT)) + else local_irq_enable(); } From patchwork Fri May 3 18:25:14 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sebastian Andrzej Siewior X-Patchwork-Id: 13653319 X-Patchwork-Delegate: kuba@kernel.org Received: from galois.linutronix.de (Galois.linutronix.de [193.142.43.55]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 4B138158D9F; Fri, 3 May 2024 18:30:11 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=193.142.43.55 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1714761013; cv=none; b=aNyzyNVA6ox3iW+J1UubStnm/VQjX2dyZxvv5WQNl+7H//kqTwP0mx5PR7qfHef1b8DWEOrhmUb+RVrsb4mQ+WH72qkF/pr8l0eEPFV6UtGWM3MrIjZLqMMBIxtITROVoFF2KTraNhO+xGsB21if2tiwJL1rsr8tsYbT1+hgNe8= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1714761013; c=relaxed/simple; bh=owGjXj1gRcQHvfXuD018vQxaCTN0J8efK7YMuWyF8As=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=t4jySER6s4bXxOkHn9mfm4cGEdeNQAGwMlR/zch3slWJzSPTf9ij2pTHUYTLCNWKrI3VtMIEqnUfAveBrhyT5fhRXyN3PxdQ0O7QSV+TOhoh+tv9e7bRg6g0baQPjUvxn7cKAbLujsSDZ9MUihLlrq6AJ0of9/BRB4sS6FQaYQg= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linutronix.de; spf=pass smtp.mailfrom=linutronix.de; dkim=pass (2048-bit key) header.d=linutronix.de header.i=@linutronix.de header.b=b44alsuR; dkim=permerror (0-bit key) header.d=linutronix.de header.i=@linutronix.de header.b=YeNEJOBT; arc=none smtp.client-ip=193.142.43.55 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linutronix.de Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linutronix.de Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=linutronix.de header.i=@linutronix.de header.b="b44alsuR"; dkim=permerror (0-bit key) header.d=linutronix.de header.i=@linutronix.de header.b="YeNEJOBT" From: Sebastian Andrzej Siewior DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1714761009; 
h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=bsXcW5uZG6cWaMVlyAXK8eicxATBh3uvSrRgqWotKdM=; b=b44alsuR2QcrVlf59gZX0/PyvBQ3IOqwVn1z1csODKySqTQ0CI5M251JLg46rxDWQyJTC0 k0fAKLRn+wdULW7GKRDTEWPs8Dp3ms99AZXnkK9nTrrw9w1ZjOl8b8aHFj9JK3jfAHHTcC dKePv+zpQ23u8ntg/qV+kibZr6QYgBQZeEadjTOManbb4/1Lk5HBTR2+40P3aumaJYoqxW pEAvQWxS4yHQujjzCB8bfgPEhfr180KvF2EyV9BiMZiJkWTo6X0A5HhywlCadh85k+o4DS QQpa0A9owaUc6Ce/di1SpwDdGng9EfjFxBi5UlvlIzs2c9VGIa7wpYuHKr4G/w== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1714761009; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=bsXcW5uZG6cWaMVlyAXK8eicxATBh3uvSrRgqWotKdM=; b=YeNEJOBTMY+L6UBwB4PdTxNfxklNmtz+hgddyru9gz8MnCWeQkdHeZY3YfDE99OCC2sn+k ZCWDz75clMOBIoBg== To: linux-kernel@vger.kernel.org, netdev@vger.kernel.org Cc: "David S. Miller" , Boqun Feng , Daniel Borkmann , Eric Dumazet , Frederic Weisbecker , Ingo Molnar , Jakub Kicinski , Paolo Abeni , Peter Zijlstra , Thomas Gleixner , Waiman Long , Will Deacon , Sebastian Andrzej Siewior Subject: [PATCH net-next 10/15] dev: Use nested-BH locking for softnet_data.process_queue. Date: Fri, 3 May 2024 20:25:14 +0200 Message-ID: <20240503182957.1042122-11-bigeasy@linutronix.de> In-Reply-To: <20240503182957.1042122-1-bigeasy@linutronix.de> References: <20240503182957.1042122-1-bigeasy@linutronix.de> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org softnet_data::process_queue is a per-CPU variable and relies on disabled BH for its locking. Without per-CPU locking in local_bh_disable() on PREEMPT_RT this data structure requires explicit locking. softnet_data::input_queue_head can be updated lockless. This is fine because this value is only update CPU local by the local backlog_napi thread. Add a local_lock_t to softnet_data and use local_lock_nested_bh() for locking of process_queue. This change adds only lockdep coverage and does not alter the functional behaviour for !PREEMPT_RT. Signed-off-by: Sebastian Andrzej Siewior --- include/linux/netdevice.h | 1 + net/core/dev.c | 12 +++++++++++- 2 files changed, 12 insertions(+), 1 deletion(-) diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h index c551ec235f9af..9d19e6ace7cb7 100644 --- a/include/linux/netdevice.h +++ b/include/linux/netdevice.h @@ -3199,6 +3199,7 @@ static inline bool dev_has_header(const struct net_device *dev) struct softnet_data { struct list_head poll_list; struct sk_buff_head process_queue; + local_lock_t process_queue_bh_lock; /* stats */ unsigned int processed; diff --git a/net/core/dev.c b/net/core/dev.c index cf7b452ce0d74..1503883ce15a4 100644 --- a/net/core/dev.c +++ b/net/core/dev.c @@ -450,7 +450,9 @@ static RAW_NOTIFIER_HEAD(netdev_chain); * queue in the local softnet handler. */ -DEFINE_PER_CPU_ALIGNED(struct softnet_data, softnet_data); +DEFINE_PER_CPU_ALIGNED(struct softnet_data, softnet_data) = { + .process_queue_bh_lock = INIT_LOCAL_LOCK(process_queue_bh_lock), +}; EXPORT_PER_CPU_SYMBOL(softnet_data); /* Page_pool has a lockless array/stack to alloc/recycle pages. 
@@ -5935,6 +5937,7 @@ static void flush_backlog(struct work_struct *work) } backlog_unlock_irq_enable(sd); + local_lock_nested_bh(&softnet_data.process_queue_bh_lock); skb_queue_walk_safe(&sd->process_queue, skb, tmp) { if (skb->dev->reg_state == NETREG_UNREGISTERING) { __skb_unlink(skb, &sd->process_queue); @@ -5942,6 +5945,7 @@ static void flush_backlog(struct work_struct *work) rps_input_queue_head_incr(sd); } } + local_unlock_nested_bh(&softnet_data.process_queue_bh_lock); local_bh_enable(); } @@ -6063,7 +6067,9 @@ static int process_backlog(struct napi_struct *napi, int quota) while (again) { struct sk_buff *skb; + local_lock_nested_bh(&softnet_data.process_queue_bh_lock); while ((skb = __skb_dequeue(&sd->process_queue))) { + local_unlock_nested_bh(&softnet_data.process_queue_bh_lock); rcu_read_lock(); __netif_receive_skb(skb); rcu_read_unlock(); @@ -6072,7 +6078,9 @@ static int process_backlog(struct napi_struct *napi, int quota) return work; } + local_lock_nested_bh(&softnet_data.process_queue_bh_lock); } + local_unlock_nested_bh(&softnet_data.process_queue_bh_lock); backlog_lock_irq_disable(sd); if (skb_queue_empty(&sd->input_pkt_queue)) { @@ -6087,8 +6095,10 @@ static int process_backlog(struct napi_struct *napi, int quota) napi->state &= NAPIF_STATE_THREADED; again = false; } else { + local_lock_nested_bh(&softnet_data.process_queue_bh_lock); skb_queue_splice_tail_init(&sd->input_pkt_queue, &sd->process_queue); + local_unlock_nested_bh(&softnet_data.process_queue_bh_lock); } backlog_unlock_irq_enable(sd); } From patchwork Fri May 3 18:25:15 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sebastian Andrzej Siewior X-Patchwork-Id: 13653303 X-Patchwork-Delegate: kuba@kernel.org Received: from galois.linutronix.de (Galois.linutronix.de [193.142.43.55]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 81B23158D8B; Fri, 3 May 2024 18:30:11 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=193.142.43.55 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1714761013; cv=none; b=IGGyVzpqArLeJ7wJtsAbRreuZHqP4ioh/HLPew+v6G0ASQsm8BsdavcOaQj2zYk/7OYJP5Nto+UJXhd1ThQr3lArr4iFokpcyX/6HV9VVFmj5JTDRIqgt+CbBgUdOliICVxVZMJVoPY7AzWiymalZV5Lk0zq3yl7zTdSfkqUCNo= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1714761013; c=relaxed/simple; bh=LVVqnyIbiCKyUaRvfyLa95e+IMRUFI5GejbEI60O2Mg=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=BxO88jCPoAthvv9rC1oiQ/f3s6SYN1kY9u7pVdws6G6L83/0lTwD2bK6neJcnoiq/eMs7dlJvWeotjk2pluDHeGCJAq7Cj33bTo/CLoL0cgFYKvas76lH5yVIzd2js4qVbzUoXDgSIcSnrmuC15FmvB+iu9+fDzxrZm5SOrCW5A= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linutronix.de; spf=pass smtp.mailfrom=linutronix.de; dkim=pass (2048-bit key) header.d=linutronix.de header.i=@linutronix.de header.b=frSEDg5/; dkim=permerror (0-bit key) header.d=linutronix.de header.i=@linutronix.de header.b=rIyYwDGY; arc=none smtp.client-ip=193.142.43.55 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linutronix.de Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linutronix.de Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=linutronix.de header.i=@linutronix.de header.b="frSEDg5/"; 
From: Sebastian Andrzej Siewior
To: linux-kernel@vger.kernel.org, netdev@vger.kernel.org
Cc: "David S. Miller" , Boqun Feng , Daniel Borkmann , Eric Dumazet , Frederic Weisbecker , Ingo Molnar , Jakub Kicinski , Paolo Abeni , Peter Zijlstra , Thomas Gleixner , Waiman Long , Will Deacon , Sebastian Andrzej Siewior , Alexei Starovoitov , Andrii Nakryiko , David Ahern , Hao Luo , Jiri Olsa , John Fastabend , KP Singh , Martin KaFai Lau , Song Liu , Stanislav Fomichev , Yonghong Song , bpf@vger.kernel.org
Subject: [PATCH net-next 11/15] lwt: Don't disable migration prior to invoking BPF.
Date: Fri, 3 May 2024 20:25:15 +0200
Message-ID: <20240503182957.1042122-12-bigeasy@linutronix.de>
In-Reply-To: <20240503182957.1042122-1-bigeasy@linutronix.de>
References: <20240503182957.1042122-1-bigeasy@linutronix.de>

There is no need to explicitly disable migration if bottom halves are
also disabled: disabling BH implies disabling migration. Remove
migrate_disable() and rely solely on disabling BH to remain on the same
CPU.

Cc: bpf@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior
---
 net/core/lwt_bpf.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/net/core/lwt_bpf.c b/net/core/lwt_bpf.c
index 4a0797f0a154b..a94943681e5aa 100644
--- a/net/core/lwt_bpf.c
+++ b/net/core/lwt_bpf.c
@@ -40,10 +40,9 @@ static int run_lwt_bpf(struct sk_buff *skb, struct bpf_lwt_prog *lwt,
 {
 	int ret;
 
-	/* Migration disable and BH disable are needed to protect per-cpu
-	 * redirect_info between BPF prog and skb_do_redirect().
+	/* Disabling BH is needed to protect per-CPU bpf_redirect_info between
+	 * BPF prog and skb_do_redirect().
*/ - migrate_disable(); local_bh_disable(); bpf_compute_data_pointers(skb); ret = bpf_prog_run_save_cb(lwt->prog, skb); @@ -78,7 +77,6 @@ static int run_lwt_bpf(struct sk_buff *skb, struct bpf_lwt_prog *lwt, } local_bh_enable(); - migrate_enable(); return ret; } From patchwork Fri May 3 18:25:16 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sebastian Andrzej Siewior X-Patchwork-Id: 13653304 X-Patchwork-Delegate: kuba@kernel.org Received: from galois.linutronix.de (Galois.linutronix.de [193.142.43.55]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 1565F158DCF; Fri, 3 May 2024 18:30:11 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=193.142.43.55 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1714761014; cv=none; b=GzCSE773WKe0ioQK/DDky1+n5PlTKtFQ6hzsmVXnW6FkQfPNjDioDdvpcGuoaIOnI0f2QIiaBANM0F3jaglyTl3L2lmvW0DeOfM4ArBEGfUpWJl1SrBcfPH4FSQgDLORbPIcGPxwk0i6vxb5W8t1AIbPCxYpKPbyaxjCNtga/Lc= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1714761014; c=relaxed/simple; bh=UqQsOsIdw/NCKbm3B4SBRLQVDSrIs6N1hAaimaDm2mA=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=BZHrCIEi1iFjtmZN5H1JjFQQmiJ+4ef2HqMybfdrhG34u3QsFpku6EO2+jTbUcaoZWFY5SeBkHq7fHwTzm4B221P7ff0V8Vw9zf/LBmNYL+08BU65ZsVZvZoL9OWDQjdNvthbza4dwzy49BkduNgLgrsTm7LcvzpUJQyBK9H+1w= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linutronix.de; spf=pass smtp.mailfrom=linutronix.de; dkim=pass (2048-bit key) header.d=linutronix.de header.i=@linutronix.de header.b=E2ipGnj5; dkim=permerror (0-bit key) header.d=linutronix.de header.i=@linutronix.de header.b=+TB01zxg; arc=none smtp.client-ip=193.142.43.55 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linutronix.de Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linutronix.de Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=linutronix.de header.i=@linutronix.de header.b="E2ipGnj5"; dkim=permerror (0-bit key) header.d=linutronix.de header.i=@linutronix.de header.b="+TB01zxg" From: Sebastian Andrzej Siewior DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1714761010; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=YksF+17eSWR7vO0o/oV0GSivG8QwNSGUcEcBHmVrZ1U=; b=E2ipGnj5QpriW9bNN52d1bOV1ezdOaNezIQ3rkNiAX6Cj+OzFV4GfpJLUt/PybwJUlHW5m 16AKEISPEOMxyXCCJn6aAzg1iOlV5bcOqDjyuAG91cfFFZ7ffiSA6Txm7YeAvAyP4YQc4T 5Yc7gH0hY6brkzR5v9M2b1b/P+ylmxvr1RZxEhaEp3NZpb0vZMtJATUxlndD4dl0X5T00/ upLjblvWr3k9tdqVPvV+McYfYvvnOmwPf5IlNQwpsCYKnoC6pIfer/pdvh7sSu6P/xJuvj lcux546jb9qdGrHb30D4Qr8KrwWv3Ej0UXXQfwkCDCigBOcA/ALVrKerrd5YwQ== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1714761010; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=YksF+17eSWR7vO0o/oV0GSivG8QwNSGUcEcBHmVrZ1U=; b=+TB01zxgLIwJw4fXJFZQyTTmI+KKhVg38XpxJZm0HyYPSaQaHVRq/V7cfShB9fYruSR69b T01uDT+BzqcNIQCg== To: 
 linux-kernel@vger.kernel.org, netdev@vger.kernel.org
Cc: "David S. Miller" , Boqun Feng , Daniel Borkmann , Eric Dumazet , Frederic Weisbecker , Ingo Molnar , Jakub Kicinski , Paolo Abeni , Peter Zijlstra , Thomas Gleixner , Waiman Long , Will Deacon , Sebastian Andrzej Siewior , Alexei Starovoitov , Andrii Nakryiko , David Ahern , Hao Luo , Jiri Olsa , John Fastabend , KP Singh , Martin KaFai Lau , Song Liu , Stanislav Fomichev , Yonghong Song , bpf@vger.kernel.org
Subject: [PATCH net-next 12/15] seg6: Use nested-BH locking for seg6_bpf_srh_states.
Date: Fri, 3 May 2024 20:25:16 +0200
Message-ID: <20240503182957.1042122-13-bigeasy@linutronix.de>
In-Reply-To: <20240503182957.1042122-1-bigeasy@linutronix.de>
References: <20240503182957.1042122-1-bigeasy@linutronix.de>

The access to seg6_bpf_srh_states is protected by disabling preemption.
Based on the code, the entry point is input_action_end_bpf() and every
other function that accesses seg6_bpf_srh_states (the bpf_lwt_seg6_*()
helper functions) is expected to be called from within
input_action_end_bpf().

input_action_end_bpf() accesses seg6_bpf_srh_states first at the top of
the function and only then disables preemption. This looks wrong because
if preemption needs to be disabled as part of the locking mechanism then
the variable should not be accessed beforehand.

Looking at how it is exercised via test_lwt_seg6local.sh,
input_action_end_bpf() is always invoked from softirq context. If that is
always the case then the preempt_disable() statement is superfluous. If
it is not always invoked from softirq then disabling only preemption is
not sufficient.

Replace the preempt_disable() statement with nested-BH locking. This is
not an equivalent replacement as it assumes that the invocation of
input_action_end_bpf() always occurs in softirq context and thus the
preempt_disable() is superfluous.

Add a local_lock_t to the data structure and use local_lock_nested_bh()
for locking. Add lockdep_assert_held() to ensure the lock is held while
the per-CPU variable is referenced in the helper functions.
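The general shape of such a conversion looks like this (an editorial sketch
with made-up names, not a hunk from this patch; it assumes the
local_lock_nested_bh() primitive introduced earlier in this series): the
per-CPU structure gains a local_lock_t initialised with INIT_LOCAL_LOCK(),
the entry point takes it while BH is already disabled, and the helpers
assert with lockdep_assert_held() that they run under it.

	struct example_pcpu_state {
		local_lock_t bh_lock;
		int value;
	};

	static DEFINE_PER_CPU(struct example_pcpu_state, example_pcpu_state) = {
		.bh_lock = INIT_LOCAL_LOCK(bh_lock),
	};

	/* Helper: must be called with the nested-BH lock held. */
	static void example_helper(void)
	{
		struct example_pcpu_state *state = this_cpu_ptr(&example_pcpu_state);

		lockdep_assert_held(&state->bh_lock);
		state->value++;
	}

	/* Entry point: runs in softirq context (BH already disabled). */
	static void example_entry_point(void)
	{
		local_lock_nested_bh(&example_pcpu_state.bh_lock);
		example_helper();
		local_unlock_nested_bh(&example_pcpu_state.bh_lock);
	}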
Cc: Alexei Starovoitov Cc: Andrii Nakryiko Cc: David Ahern Cc: Hao Luo Cc: Jiri Olsa Cc: John Fastabend Cc: KP Singh Cc: Martin KaFai Lau Cc: Song Liu Cc: Stanislav Fomichev Cc: Yonghong Song Cc: bpf@vger.kernel.org Signed-off-by: Sebastian Andrzej Siewior --- include/net/seg6_local.h | 1 + net/core/filter.c | 3 +++ net/ipv6/seg6_local.c | 22 ++++++++++++++-------- 3 files changed, 18 insertions(+), 8 deletions(-) diff --git a/include/net/seg6_local.h b/include/net/seg6_local.h index 3fab9dec2ec45..888c1ce6f5272 100644 --- a/include/net/seg6_local.h +++ b/include/net/seg6_local.h @@ -19,6 +19,7 @@ extern int seg6_lookup_nexthop(struct sk_buff *skb, struct in6_addr *nhaddr, extern bool seg6_bpf_has_valid_srh(struct sk_buff *skb); struct seg6_bpf_srh_state { + local_lock_t bh_lock; struct ipv6_sr_hdr *srh; u16 hdrlen; bool valid; diff --git a/net/core/filter.c b/net/core/filter.c index 2510464692af0..cfe8ea59fd9db 100644 --- a/net/core/filter.c +++ b/net/core/filter.c @@ -6450,6 +6450,7 @@ BPF_CALL_4(bpf_lwt_seg6_store_bytes, struct sk_buff *, skb, u32, offset, void *srh_tlvs, *srh_end, *ptr; int srhoff = 0; + lockdep_assert_held(&srh_state->bh_lock); if (srh == NULL) return -EINVAL; @@ -6506,6 +6507,7 @@ BPF_CALL_4(bpf_lwt_seg6_action, struct sk_buff *, skb, int hdroff = 0; int err; + lockdep_assert_held(&srh_state->bh_lock); switch (action) { case SEG6_LOCAL_ACTION_END_X: if (!seg6_bpf_has_valid_srh(skb)) @@ -6582,6 +6584,7 @@ BPF_CALL_3(bpf_lwt_seg6_adjust_srh, struct sk_buff *, skb, u32, offset, int srhoff = 0; int ret; + lockdep_assert_held(&srh_state->bh_lock); if (unlikely(srh == NULL)) return -EINVAL; diff --git a/net/ipv6/seg6_local.c b/net/ipv6/seg6_local.c index 24e2b4b494cb0..c4828c6620f07 100644 --- a/net/ipv6/seg6_local.c +++ b/net/ipv6/seg6_local.c @@ -1380,7 +1380,9 @@ static int input_action_end_b6_encap(struct sk_buff *skb, return err; } -DEFINE_PER_CPU(struct seg6_bpf_srh_state, seg6_bpf_srh_states); +DEFINE_PER_CPU(struct seg6_bpf_srh_state, seg6_bpf_srh_states) = { + .bh_lock = INIT_LOCAL_LOCK(bh_lock), +}; bool seg6_bpf_has_valid_srh(struct sk_buff *skb) { @@ -1388,6 +1390,7 @@ bool seg6_bpf_has_valid_srh(struct sk_buff *skb) this_cpu_ptr(&seg6_bpf_srh_states); struct ipv6_sr_hdr *srh = srh_state->srh; + lockdep_assert_held(&srh_state->bh_lock); if (unlikely(srh == NULL)) return false; @@ -1408,8 +1411,7 @@ bool seg6_bpf_has_valid_srh(struct sk_buff *skb) static int input_action_end_bpf(struct sk_buff *skb, struct seg6_local_lwt *slwt) { - struct seg6_bpf_srh_state *srh_state = - this_cpu_ptr(&seg6_bpf_srh_states); + struct seg6_bpf_srh_state *srh_state; struct ipv6_sr_hdr *srh; int ret; @@ -1420,10 +1422,14 @@ static int input_action_end_bpf(struct sk_buff *skb, } advance_nextseg(srh, &ipv6_hdr(skb)->daddr); - /* preempt_disable is needed to protect the per-CPU buffer srh_state, - * which is also accessed by the bpf_lwt_seg6_* helpers + /* The access to the per-CPU buffer srh_state is protected by running + * always in softirq context (with disabled BH). On PREEMPT_RT the + * required locking is provided by the following local_lock_nested_bh() + * statement. It is also accessed by the bpf_lwt_seg6_* helpers via + * bpf_prog_run_save_cb(). 
*/ - preempt_disable(); + local_lock_nested_bh(&seg6_bpf_srh_states.bh_lock); + srh_state = this_cpu_ptr(&seg6_bpf_srh_states); srh_state->srh = srh; srh_state->hdrlen = srh->hdrlen << 3; srh_state->valid = true; @@ -1446,15 +1452,15 @@ static int input_action_end_bpf(struct sk_buff *skb, if (srh_state->srh && !seg6_bpf_has_valid_srh(skb)) goto drop; + local_unlock_nested_bh(&seg6_bpf_srh_states.bh_lock); - preempt_enable(); if (ret != BPF_REDIRECT) seg6_lookup_nexthop(skb, NULL, 0); return dst_input(skb); drop: - preempt_enable(); + local_unlock_nested_bh(&seg6_bpf_srh_states.bh_lock); kfree_skb(skb); return -EINVAL; } From patchwork Fri May 3 18:25:17 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sebastian Andrzej Siewior X-Patchwork-Id: 13653305 X-Patchwork-Delegate: kuba@kernel.org Received: from galois.linutronix.de (Galois.linutronix.de [193.142.43.55]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id CD63C1591F1; Fri, 3 May 2024 18:30:12 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=193.142.43.55 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1714761014; cv=none; b=I+mFn6yyVje1Bguv9xqS2xgzMADEFWtS/MEafynthM4AEIIxJTB3z60z8aOyoxWRrnorsOMBr9N3B00e6rAvZ9sTg4NaaXXAehG7oPuWox1I4gV2HyblIndKk6yFd6u6KEBgMRUBD3DMCpR1mdFxzSkr1DtbcN59rcX5jVQmFxw= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1714761014; c=relaxed/simple; bh=6fVCWFsuEBR7nJV+h+rv294+QgH0Wy9inghRo5IE69k=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=jPQ37TuQFkOM/Y5JDnR5HNrjrR8pRmyt2G0TUjYAOMf/mofmu2iCvhsJ9ulWxuc+hkPUKnbv29m/KiGcL9N4wALC+qsPWfCj529+Wx2QWFSmhNl6h6emidcAU0Qba7fl8/h1Q3bCq4WcoqPjWX1kHvAcM3fn902ip6Uycr49vxY= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linutronix.de; spf=pass smtp.mailfrom=linutronix.de; dkim=pass (2048-bit key) header.d=linutronix.de header.i=@linutronix.de header.b=hpW97fX4; dkim=permerror (0-bit key) header.d=linutronix.de header.i=@linutronix.de header.b=Ssmiv9hm; arc=none smtp.client-ip=193.142.43.55 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linutronix.de Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linutronix.de Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=linutronix.de header.i=@linutronix.de header.b="hpW97fX4"; dkim=permerror (0-bit key) header.d=linutronix.de header.i=@linutronix.de header.b="Ssmiv9hm" From: Sebastian Andrzej Siewior DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1714761010; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=0ybrNKVMPkZ4Ap31+Up4JCU5gLeDUDgUGDfpmjWAnCM=; b=hpW97fX4evbEt0tWQbOvOP61Uj5ksTD/64vN6UYwBwECNGgT89P7IQWlYezsnZ67Vymtsq tcpqYqRCHoHti+ZTXiAH6um/7o3j4E/Q9+4tZofYYGvCT+PXrmdizM0DXsQGjV9mRlASuv 6j1l6KxjNZfuZGxEcX4NgQX6tMfzfW5WW+0vLvDyLkEWmxuhuz9HxvQE8OdBFniSeZN1DN WGs3YnW8oe0vS+tt4hpZnFTNgWIgkbd4oamLnQm/kv5tTEZVjtTb7chdKRotAY+k0RKpiX UavJJ4P/QnD4G9qBVkAYCpc2hdpToj+O1lgSnoBEqGQD2QKyYMVuoVtFv6lwkA== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; 
t=1714761010; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=0ybrNKVMPkZ4Ap31+Up4JCU5gLeDUDgUGDfpmjWAnCM=; b=Ssmiv9hmiUSp1iHRdezmkdqkkK3COLazKJDAi/W/DvYC0fZ3hOoM1UGn1Om40iRt7c5xvm RlcAOAQLtBV9HVCQ== To: linux-kernel@vger.kernel.org, netdev@vger.kernel.org Cc: "David S. Miller" , Boqun Feng , Daniel Borkmann , Eric Dumazet , Frederic Weisbecker , Ingo Molnar , Jakub Kicinski , Paolo Abeni , Peter Zijlstra , Thomas Gleixner , Waiman Long , Will Deacon , Sebastian Andrzej Siewior , Alexei Starovoitov , Andrii Nakryiko , Hao Luo , Jiri Olsa , John Fastabend , KP Singh , Martin KaFai Lau , Song Liu , Stanislav Fomichev , Yonghong Song , bpf@vger.kernel.org Subject: [PATCH net-next 13/15] net: Use nested-BH locking for bpf_scratchpad. Date: Fri, 3 May 2024 20:25:17 +0200 Message-ID: <20240503182957.1042122-14-bigeasy@linutronix.de> In-Reply-To: <20240503182957.1042122-1-bigeasy@linutronix.de> References: <20240503182957.1042122-1-bigeasy@linutronix.de> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org bpf_scratchpad is a per-CPU variable and relies on disabled BH for its locking. Without per-CPU locking in local_bh_disable() on PREEMPT_RT this data structure requires explicit locking. Add a local_lock_t to the data structure and use local_lock_nested_bh() for locking. This change adds only lockdep coverage and does not alter the functional behaviour for !PREEMPT_RT. Cc: Alexei Starovoitov Cc: Andrii Nakryiko Cc: Hao Luo Cc: Jiri Olsa Cc: John Fastabend Cc: KP Singh Cc: Martin KaFai Lau Cc: Song Liu Cc: Stanislav Fomichev Cc: Yonghong Song Cc: bpf@vger.kernel.org Signed-off-by: Sebastian Andrzej Siewior --- net/core/filter.c | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/net/core/filter.c b/net/core/filter.c index cfe8ea59fd9db..e95b235a1e4f4 100644 --- a/net/core/filter.c +++ b/net/core/filter.c @@ -1658,9 +1658,12 @@ struct bpf_scratchpad { __be32 diff[MAX_BPF_STACK / sizeof(__be32)]; u8 buff[MAX_BPF_STACK]; }; + local_lock_t bh_lock; }; -static DEFINE_PER_CPU(struct bpf_scratchpad, bpf_sp); +static DEFINE_PER_CPU(struct bpf_scratchpad, bpf_sp) = { + .bh_lock = INIT_LOCAL_LOCK(bh_lock), +}; static inline int __bpf_try_make_writable(struct sk_buff *skb, unsigned int write_len) @@ -2029,6 +2032,7 @@ BPF_CALL_5(bpf_csum_diff, __be32 *, from, u32, from_size, diff_size > sizeof(sp->diff))) return -EINVAL; + guard(local_lock_nested_bh)(&bpf_sp.bh_lock); for (i = 0; i < from_size / sizeof(__be32); i++, j++) sp->diff[j] = ~from[i]; for (i = 0; i < to_size / sizeof(__be32); i++, j++) From patchwork Fri May 3 18:25:18 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Sebastian Andrzej Siewior X-Patchwork-Id: 13653306 X-Patchwork-Delegate: kuba@kernel.org Received: from galois.linutronix.de (Galois.linutronix.de [193.142.43.55]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 2345515357D; Fri, 3 May 2024 18:30:12 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=193.142.43.55 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1714761016; cv=none; 
From: Sebastian Andrzej Siewior
To: linux-kernel@vger.kernel.org, netdev@vger.kernel.org
Cc: "David S. Miller" , Boqun Feng , Daniel Borkmann , Eric Dumazet , Frederic Weisbecker , Ingo Molnar , Jakub Kicinski , Paolo Abeni , Peter Zijlstra , Thomas Gleixner , Waiman Long , Will Deacon , Sebastian Andrzej Siewior , Alexei Starovoitov , Andrii Nakryiko , Eduard Zingerman , Hao Luo , Jesper Dangaard Brouer , Jiri Olsa , John Fastabend , KP Singh , Martin KaFai Lau , Song Liu , Stanislav Fomichev , Toke Høiland-Jørgensen , Yonghong Song , bpf@vger.kernel.org
Subject: [PATCH net-next 14/15] net: Reference bpf_redirect_info via task_struct on PREEMPT_RT.
Date: Fri, 3 May 2024 20:25:18 +0200
Message-ID: <20240503182957.1042122-15-bigeasy@linutronix.de>
In-Reply-To: <20240503182957.1042122-1-bigeasy@linutronix.de>
References: <20240503182957.1042122-1-bigeasy@linutronix.de>

The XDP redirect process is two-staged:
- bpf_prog_run_xdp() is invoked to run an eBPF program which inspects the
  packet and makes decisions. While doing that, the per-CPU variable
  bpf_redirect_info is used.
- Afterwards xdp_do_redirect() is invoked and accesses bpf_redirect_info;
  it may also access other per-CPU variables like xskmap_flush_list.

At the very end of the NAPI callback, xdp_do_flush() is invoked which
does not access bpf_redirect_info but will touch the individual per-CPU
lists.

The per-CPU variables are only used in the NAPI callback, hence disabling
bottom halves is the only protection mechanism. Users from preemptible
context (like cpu_map_kthread_run()) explicitly disable bottom halves for
protection reasons. Without locking in local_bh_disable() on PREEMPT_RT
this data structure requires explicit locking.

PREEMPT_RT has forced-threaded interrupts enabled and every NAPI callback
runs in a thread. If each thread has its own data structure then locking
can be avoided.

Create a struct bpf_net_context which contains struct bpf_redirect_info.
Define the variable on the stack, use bpf_net_ctx_set() to save a pointer
to it and use the __free() annotation to automatically reset the pointer
once the function returns. bpf_net_ctx_set() may nest: for instance a
function can be used from within NET_RX_SOFTIRQ/net_rx_action, which uses
bpf_net_ctx_set(), and from NET_TX_SOFTIRQ, which does not. Therefore
only the first invocation updates the pointer. Use bpf_net_ctx_get_ri()
as a wrapper to retrieve the current struct bpf_redirect_info.

On PREEMPT_RT the pointer to bpf_net_context is saved in the task's
task_struct. On non-PREEMPT_RT builds the pointer is saved in a per-CPU
variable (which is always NODE-local memory). Always using the
bpf_net_context approach has the advantage that there are almost no
differences between the PREEMPT_RT and non-PREEMPT_RT builds.
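The intended calling convention looks roughly like this (an editorial sketch,
the function name is invented; the real call sites are in the diff below): an
on-stack bpf_net_context is registered for the BH-disabled section in which
BPF programs may run, and code that previously read the per-CPU
bpf_redirect_info now goes through bpf_net_ctx_get_ri().

	/* Hypothetical NAPI-like section; mirrors the pattern used in the diff. */
	static void example_bh_section(void)
	{
		struct bpf_net_context __bpf_net_ctx, *bpf_net_ctx;

		local_bh_disable();
		bpf_net_ctx = bpf_net_ctx_set(&__bpf_net_ctx);

		/* ... run XDP/TC programs; they fetch the redirect state via
		 * bpf_net_ctx_get_ri() instead of this_cpu_ptr(&bpf_redirect_info).
		 */

		bpf_net_ctx_clear(bpf_net_ctx);
		local_bh_enable();
	}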
Cc: Alexei Starovoitov Cc: Andrii Nakryiko Cc: Eduard Zingerman Cc: Hao Luo Cc: Jesper Dangaard Brouer Cc: Jiri Olsa Cc: John Fastabend Cc: KP Singh Cc: Martin KaFai Lau Cc: Song Liu Cc: Stanislav Fomichev Cc: Toke Høiland-Jørgensen Cc: Yonghong Song Cc: bpf@vger.kernel.org Signed-off-by: Sebastian Andrzej Siewior --- include/linux/filter.h | 96 ++++++++++++++++++++++++++++++++++++++--- include/linux/sched.h | 5 +++ kernel/bpf/cpumap.c | 3 ++ kernel/fork.c | 3 ++ net/bpf/test_run.c | 11 ++++- net/core/dev.c | 19 +++++++- net/core/filter.c | 98 ++++++++++++++++++++++++++---------------- net/core/lwt_bpf.c | 3 ++ 8 files changed, 191 insertions(+), 47 deletions(-) diff --git a/include/linux/filter.h b/include/linux/filter.h index d5fea03cb6e61..bdd69bd81df45 100644 --- a/include/linux/filter.h +++ b/include/linux/filter.h @@ -744,7 +744,83 @@ struct bpf_redirect_info { struct bpf_nh_params nh; }; -DECLARE_PER_CPU(struct bpf_redirect_info, bpf_redirect_info); +struct bpf_net_context { + struct bpf_redirect_info ri; +}; + +#ifndef CONFIG_PREEMPT_RT +DECLARE_PER_CPU(struct bpf_net_context *, bpf_net_context); + +static inline struct bpf_net_context *bpf_net_ctx_set(struct bpf_net_context *bpf_net_ctx) +{ + struct bpf_net_context *ctx; + + ctx = this_cpu_read(bpf_net_context); + if (ctx != NULL) + return NULL; + this_cpu_write(bpf_net_context, bpf_net_ctx); + return bpf_net_ctx; +} + +static inline void bpf_net_ctx_clear(struct bpf_net_context *bpf_net_ctx) +{ + struct bpf_net_context *ctx; + + ctx = this_cpu_read(bpf_net_context); + if (ctx != bpf_net_ctx) + return; + this_cpu_write(bpf_net_context, NULL); +} + +static inline struct bpf_net_context *bpf_net_ctx_get(void) +{ + struct bpf_net_context *bpf_net_ctx = this_cpu_read(bpf_net_context); + + WARN_ON_ONCE(!bpf_net_ctx); + return bpf_net_ctx; +} + +#else + +static inline struct bpf_net_context *bpf_net_ctx_set(struct bpf_net_context *bpf_net_ctx) +{ + struct task_struct *tsk = current; + + if (tsk->bpf_net_context != NULL) + return NULL; + tsk->bpf_net_context = bpf_net_ctx; + return bpf_net_ctx; +} + +static inline void bpf_net_ctx_clear(struct bpf_net_context *bpf_net_ctx) +{ + struct task_struct *tsk = current; + + if (tsk->bpf_net_context != bpf_net_ctx) + return; + tsk->bpf_net_context = NULL; +} + +static inline struct bpf_net_context *bpf_net_ctx_get(void) +{ + struct bpf_net_context *bpf_net_ctx = current->bpf_net_context; + + WARN_ON_ONCE(!bpf_net_ctx); + return bpf_net_ctx; +} + +#endif + +static inline struct bpf_redirect_info *bpf_net_ctx_get_ri(void) +{ + struct bpf_net_context *bpf_net_ctx = bpf_net_ctx_get(); + + if (!bpf_net_ctx) + return NULL; + return &bpf_net_ctx->ri; +} + +DEFINE_FREE(bpf_net_ctx_clear, struct bpf_net_context *, if (_T) bpf_net_ctx_clear(_T)); /* flags for bpf_redirect_info kern_flags */ #define BPF_RI_F_RF_NO_DIRECT BIT(0) /* no napi_direct on return_frame */ @@ -1021,23 +1097,27 @@ void bpf_clear_redirect_map(struct bpf_map *map); static inline bool xdp_return_frame_no_direct(void) { - struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info); + struct bpf_redirect_info *ri = bpf_net_ctx_get_ri(); + if (!ri) + return false; return ri->kern_flags & BPF_RI_F_RF_NO_DIRECT; } static inline void xdp_set_return_frame_no_direct(void) { - struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info); + struct bpf_redirect_info *ri = bpf_net_ctx_get_ri(); - ri->kern_flags |= BPF_RI_F_RF_NO_DIRECT; + if (ri) + ri->kern_flags |= BPF_RI_F_RF_NO_DIRECT; } static inline void 
xdp_clear_return_frame_no_direct(void) { - struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info); + struct bpf_redirect_info *ri = bpf_net_ctx_get_ri(); - ri->kern_flags &= ~BPF_RI_F_RF_NO_DIRECT; + if (ri) + ri->kern_flags &= ~BPF_RI_F_RF_NO_DIRECT; } static inline int xdp_ok_fwd_dev(const struct net_device *fwd, @@ -1591,9 +1671,11 @@ static __always_inline long __bpf_xdp_redirect_map(struct bpf_map *map, u64 inde u64 flags, const u64 flag_mask, void *lookup_elem(struct bpf_map *map, u32 key)) { - struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info); + struct bpf_redirect_info *ri = bpf_net_ctx_get_ri(); const u64 action_mask = XDP_ABORTED | XDP_DROP | XDP_PASS | XDP_TX; + if (!ri) + return XDP_ABORTED; /* Lower bits of the flags are used as return code on lookup failure */ if (unlikely(flags & ~(action_mask | flag_mask))) return XDP_ABORTED; diff --git a/include/linux/sched.h b/include/linux/sched.h index 6779d3b8f2578..26324fb0e532d 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -53,6 +53,7 @@ struct bio_list; struct blk_plug; struct bpf_local_storage; struct bpf_run_ctx; +struct bpf_net_context; struct capture_control; struct cfs_rq; struct fs_struct; @@ -1504,6 +1505,10 @@ struct task_struct { /* Used for BPF run context */ struct bpf_run_ctx *bpf_ctx; #endif +#ifdef CONFIG_PREEMPT_RT + /* Used by BPF for per-TASK xdp storage */ + struct bpf_net_context *bpf_net_context; +#endif #ifdef CONFIG_GCC_PLUGIN_STACKLEAK unsigned long lowest_stack; diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c index a8e34416e960f..66974bd027109 100644 --- a/kernel/bpf/cpumap.c +++ b/kernel/bpf/cpumap.c @@ -240,12 +240,14 @@ static int cpu_map_bpf_prog_run(struct bpf_cpu_map_entry *rcpu, void **frames, int xdp_n, struct xdp_cpumap_stats *stats, struct list_head *list) { + struct bpf_net_context __bpf_net_ctx, *bpf_net_ctx; int nframes; if (!rcpu->prog) return xdp_n; rcu_read_lock_bh(); + bpf_net_ctx = bpf_net_ctx_set(&__bpf_net_ctx); nframes = cpu_map_bpf_prog_run_xdp(rcpu, frames, xdp_n, stats); @@ -255,6 +257,7 @@ static int cpu_map_bpf_prog_run(struct bpf_cpu_map_entry *rcpu, void **frames, if (unlikely(!list_empty(list))) cpu_map_bpf_prog_run_skb(rcpu, list, stats); + bpf_net_ctx_clear(bpf_net_ctx); rcu_read_unlock_bh(); /* resched point, may call do_softirq() */ return nframes; diff --git a/kernel/fork.c b/kernel/fork.c index aebb3e6c96dc6..82c16c22d960c 100644 --- a/kernel/fork.c +++ b/kernel/fork.c @@ -2355,6 +2355,9 @@ __latent_entropy struct task_struct *copy_process( RCU_INIT_POINTER(p->bpf_storage, NULL); p->bpf_ctx = NULL; #endif +#ifdef CONFIG_PREEMPT_RT + p->bpf_net_context = NULL; +#endif /* Perform scheduler related setup. Assign this task to a CPU. 
*/ retval = sched_fork(clone_flags, p); diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c index f6aad4ed2ab2f..600cc8e428c1a 100644 --- a/net/bpf/test_run.c +++ b/net/bpf/test_run.c @@ -283,9 +283,10 @@ static int xdp_recv_frames(struct xdp_frame **frames, int nframes, static int xdp_test_run_batch(struct xdp_test_data *xdp, struct bpf_prog *prog, u32 repeat) { - struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info); + struct bpf_net_context __bpf_net_ctx, *bpf_net_ctx; int err = 0, act, ret, i, nframes = 0, batch_sz; struct xdp_frame **frames = xdp->frames; + struct bpf_redirect_info *ri; struct xdp_page_head *head; struct xdp_frame *frm; bool redirect = false; @@ -295,6 +296,8 @@ static int xdp_test_run_batch(struct xdp_test_data *xdp, struct bpf_prog *prog, batch_sz = min_t(u32, repeat, xdp->batch_size); local_bh_disable(); + bpf_net_ctx = bpf_net_ctx_set(&__bpf_net_ctx); + ri = bpf_net_ctx_get_ri(); xdp_set_return_frame_no_direct(); for (i = 0; i < batch_sz; i++) { @@ -359,6 +362,7 @@ static int xdp_test_run_batch(struct xdp_test_data *xdp, struct bpf_prog *prog, } xdp_clear_return_frame_no_direct(); + bpf_net_ctx_clear(bpf_net_ctx); local_bh_enable(); return err; } @@ -394,6 +398,7 @@ static int bpf_test_run_xdp_live(struct bpf_prog *prog, struct xdp_buff *ctx, static int bpf_test_run(struct bpf_prog *prog, void *ctx, u32 repeat, u32 *retval, u32 *time, bool xdp) { + struct bpf_net_context __bpf_net_ctx, *bpf_net_ctx; struct bpf_prog_array_item item = {.prog = prog}; struct bpf_run_ctx *old_ctx; struct bpf_cg_run_ctx run_ctx; @@ -419,10 +424,14 @@ static int bpf_test_run(struct bpf_prog *prog, void *ctx, u32 repeat, do { run_ctx.prog_item = &item; local_bh_disable(); + bpf_net_ctx = bpf_net_ctx_set(&__bpf_net_ctx); + if (xdp) *retval = bpf_prog_run_xdp(prog, ctx); else *retval = bpf_prog_run(prog, ctx); + + bpf_net_ctx_clear(bpf_net_ctx); local_bh_enable(); } while (bpf_test_timer_continue(&t, 1, repeat, &ret, time)); bpf_reset_run_ctx(old_ctx); diff --git a/net/core/dev.c b/net/core/dev.c index 1503883ce15a4..26e524544942d 100644 --- a/net/core/dev.c +++ b/net/core/dev.c @@ -4031,11 +4031,15 @@ sch_handle_ingress(struct sk_buff *skb, struct packet_type **pt_prev, int *ret, struct net_device *orig_dev, bool *another) { struct bpf_mprog_entry *entry = rcu_dereference_bh(skb->dev->tcx_ingress); + struct bpf_net_context *bpf_net_ctx __free(bpf_net_ctx_clear) = NULL; enum skb_drop_reason drop_reason = SKB_DROP_REASON_TC_INGRESS; + struct bpf_net_context __bpf_net_ctx; int sch_ret; if (!entry) return skb; + + bpf_net_ctx = bpf_net_ctx_set(&__bpf_net_ctx); if (*pt_prev) { *ret = deliver_skb(skb, *pt_prev, orig_dev); *pt_prev = NULL; @@ -4086,13 +4090,17 @@ sch_handle_ingress(struct sk_buff *skb, struct packet_type **pt_prev, int *ret, static __always_inline struct sk_buff * sch_handle_egress(struct sk_buff *skb, int *ret, struct net_device *dev) { + struct bpf_net_context *bpf_net_ctx __free(bpf_net_ctx_clear) = NULL; struct bpf_mprog_entry *entry = rcu_dereference_bh(dev->tcx_egress); enum skb_drop_reason drop_reason = SKB_DROP_REASON_TC_EGRESS; + struct bpf_net_context __bpf_net_ctx; int sch_ret; if (!entry) return skb; + bpf_net_ctx = bpf_net_ctx_set(&__bpf_net_ctx); + /* qdisc_skb_cb(skb)->pkt_len & tcx_set_ingress() was * already set by the caller. 
*/ @@ -6357,13 +6365,15 @@ static void __napi_busy_loop(unsigned int napi_id, bool (*loop_end)(void *, unsigned long), void *loop_end_arg, unsigned flags, u16 budget) { + struct bpf_net_context *bpf_net_ctx __free(bpf_net_ctx_clear) = NULL; unsigned long start_time = loop_end ? busy_loop_current_time() : 0; int (*napi_poll)(struct napi_struct *napi, int budget); + struct bpf_net_context __bpf_net_ctx; void *have_poll_lock = NULL; struct napi_struct *napi; WARN_ON_ONCE(!rcu_read_lock_held()); - + bpf_net_ctx = bpf_net_ctx_set(&__bpf_net_ctx); restart: napi_poll = NULL; @@ -6834,6 +6844,7 @@ static int napi_thread_wait(struct napi_struct *napi) static void napi_threaded_poll_loop(struct napi_struct *napi) { + struct bpf_net_context __bpf_net_ctx, *bpf_net_ctx; struct softnet_data *sd; unsigned long last_qs = jiffies; @@ -6842,6 +6853,8 @@ static void napi_threaded_poll_loop(struct napi_struct *napi) void *have; local_bh_disable(); + bpf_net_ctx = bpf_net_ctx_set(&__bpf_net_ctx); + sd = this_cpu_ptr(&softnet_data); sd->in_napi_threaded_poll = true; @@ -6857,6 +6870,7 @@ static void napi_threaded_poll_loop(struct napi_struct *napi) net_rps_action_and_irq_enable(sd); } skb_defer_free_flush(sd); + bpf_net_ctx_clear(bpf_net_ctx); local_bh_enable(); if (!repoll) @@ -6879,13 +6893,16 @@ static int napi_threaded_poll(void *data) static __latent_entropy void net_rx_action(struct softirq_action *h) { + struct bpf_net_context *bpf_net_ctx __free(bpf_net_ctx_clear); struct softnet_data *sd = this_cpu_ptr(&softnet_data); unsigned long time_limit = jiffies + usecs_to_jiffies(READ_ONCE(net_hotdata.netdev_budget_usecs)); int budget = READ_ONCE(net_hotdata.netdev_budget); + struct bpf_net_context __bpf_net_ctx; LIST_HEAD(list); LIST_HEAD(repoll); + bpf_net_ctx = bpf_net_ctx_set(&__bpf_net_ctx); start: sd->in_net_rx_action = true; local_irq_disable(); diff --git a/net/core/filter.c b/net/core/filter.c index e95b235a1e4f4..90afa393d0648 100644 --- a/net/core/filter.c +++ b/net/core/filter.c @@ -2475,8 +2475,10 @@ static const struct bpf_func_proto bpf_clone_redirect_proto = { .arg3_type = ARG_ANYTHING, }; -DEFINE_PER_CPU(struct bpf_redirect_info, bpf_redirect_info); -EXPORT_PER_CPU_SYMBOL_GPL(bpf_redirect_info); +#ifndef CONFIG_PREEMPT_RT +DEFINE_PER_CPU(struct bpf_net_context *, bpf_net_context); +EXPORT_PER_CPU_SYMBOL_GPL(bpf_net_context); +#endif static struct net_device *skb_get_peer_dev(struct net_device *dev) { @@ -2490,11 +2492,15 @@ static struct net_device *skb_get_peer_dev(struct net_device *dev) int skb_do_redirect(struct sk_buff *skb) { - struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info); struct net *net = dev_net(skb->dev); + struct bpf_redirect_info *ri; struct net_device *dev; - u32 flags = ri->flags; + u32 flags; + ri = bpf_net_ctx_get_ri(); + if (!ri) + goto out_drop; + flags = ri->flags; dev = dev_get_by_index_rcu(net, ri->tgt_index); ri->tgt_index = 0; ri->flags = 0; @@ -2523,9 +2529,9 @@ int skb_do_redirect(struct sk_buff *skb) BPF_CALL_2(bpf_redirect, u32, ifindex, u64, flags) { - struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info); + struct bpf_redirect_info *ri = bpf_net_ctx_get_ri(); - if (unlikely(flags & (~(BPF_F_INGRESS) | BPF_F_REDIRECT_INTERNAL))) + if (unlikely((flags & (~(BPF_F_INGRESS) | BPF_F_REDIRECT_INTERNAL)) || !ri)) return TC_ACT_SHOT; ri->flags = flags; @@ -2544,9 +2550,9 @@ static const struct bpf_func_proto bpf_redirect_proto = { BPF_CALL_2(bpf_redirect_peer, u32, ifindex, u64, flags) { - struct bpf_redirect_info *ri = 
this_cpu_ptr(&bpf_redirect_info); + struct bpf_redirect_info *ri = bpf_net_ctx_get_ri(); - if (unlikely(flags)) + if (unlikely(flags || !ri)) return TC_ACT_SHOT; ri->flags = BPF_F_PEER; @@ -2566,9 +2572,9 @@ static const struct bpf_func_proto bpf_redirect_peer_proto = { BPF_CALL_4(bpf_redirect_neigh, u32, ifindex, struct bpf_redir_neigh *, params, int, plen, u64, flags) { - struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info); + struct bpf_redirect_info *ri = bpf_net_ctx_get_ri(); - if (unlikely((plen && plen < sizeof(*params)) || flags)) + if (unlikely((plen && plen < sizeof(*params)) || flags || !ri)) return TC_ACT_SHOT; ri->flags = BPF_F_NEIGH | (plen ? BPF_F_NEXTHOP : 0); @@ -4294,19 +4300,17 @@ void xdp_do_check_flushed(struct napi_struct *napi) void bpf_clear_redirect_map(struct bpf_map *map) { - struct bpf_redirect_info *ri; - int cpu; - - for_each_possible_cpu(cpu) { - ri = per_cpu_ptr(&bpf_redirect_info, cpu); - /* Avoid polluting remote cacheline due to writes if - * not needed. Once we pass this test, we need the - * cmpxchg() to make sure it hasn't been changed in - * the meantime by remote CPU. - */ - if (unlikely(READ_ONCE(ri->map) == map)) - cmpxchg(&ri->map, map, NULL); - } + /* ri->map is assigned in __bpf_xdp_redirect_map() from within a eBPF + * program/ during NAPI callback. It is used during + * xdp_do_generic_redirect_map()/ __xdp_do_redirect_frame() from the + * redirect callback afterwards. ri->map is cleared after usage. + * The path has no explicit RCU read section but the local_bh_disable() + * is also a RCU read section which makes the complete softirq callback + * RCU protected. This in turn makes ri->map RCU protocted and it is + * sufficient to wait a grace period to ensure that no "ri->map == map" + * exist. dev_map_free() removes the map from the list and then + * invokes synchronize_rcu() after calling this function. + */ } DEFINE_STATIC_KEY_FALSE(bpf_master_redirect_enabled_key); @@ -4315,11 +4319,14 @@ EXPORT_SYMBOL_GPL(bpf_master_redirect_enabled_key); u32 xdp_master_redirect(struct xdp_buff *xdp) { struct net_device *master, *slave; - struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info); + struct bpf_redirect_info *ri; master = netdev_master_upper_dev_get_rcu(xdp->rxq->dev); slave = master->netdev_ops->ndo_xdp_get_xmit_slave(master, xdp); if (slave && slave != xdp->rxq->dev) { + ri = bpf_net_ctx_get_ri(); + if (!ri) + return XDP_ABORTED; /* The target device is different from the receiving device, so * redirect it to the new device. 
* Using XDP_REDIRECT gets the correct behaviour from XDP enabled @@ -4432,10 +4439,12 @@ static __always_inline int __xdp_do_redirect_frame(struct bpf_redirect_info *ri, int xdp_do_redirect(struct net_device *dev, struct xdp_buff *xdp, struct bpf_prog *xdp_prog) { - struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info); - enum bpf_map_type map_type = ri->map_type; + struct bpf_redirect_info *ri = bpf_net_ctx_get_ri(); - if (map_type == BPF_MAP_TYPE_XSKMAP) + if (!ri) + return -EINVAL; + + if (ri->map_type == BPF_MAP_TYPE_XSKMAP) return __xdp_do_redirect_xsk(ri, dev, xdp, xdp_prog); return __xdp_do_redirect_frame(ri, dev, xdp_convert_buff_to_frame(xdp), @@ -4446,10 +4455,12 @@ EXPORT_SYMBOL_GPL(xdp_do_redirect); int xdp_do_redirect_frame(struct net_device *dev, struct xdp_buff *xdp, struct xdp_frame *xdpf, struct bpf_prog *xdp_prog) { - struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info); - enum bpf_map_type map_type = ri->map_type; + struct bpf_redirect_info *ri = bpf_net_ctx_get_ri(); - if (map_type == BPF_MAP_TYPE_XSKMAP) + if (!ri) + return -EINVAL; + + if (ri->map_type == BPF_MAP_TYPE_XSKMAP) return __xdp_do_redirect_xsk(ri, dev, xdp, xdp_prog); return __xdp_do_redirect_frame(ri, dev, xdpf, xdp_prog); @@ -4463,10 +4474,13 @@ static int xdp_do_generic_redirect_map(struct net_device *dev, enum bpf_map_type map_type, u32 map_id, u32 flags) { - struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info); + struct bpf_redirect_info *ri = bpf_net_ctx_get_ri(); struct bpf_map *map; int err; + if (!ri) + return -EINVAL; + switch (map_type) { case BPF_MAP_TYPE_DEVMAP: fallthrough; @@ -4517,13 +4531,21 @@ static int xdp_do_generic_redirect_map(struct net_device *dev, int xdp_do_generic_redirect(struct net_device *dev, struct sk_buff *skb, struct xdp_buff *xdp, struct bpf_prog *xdp_prog) { - struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info); - enum bpf_map_type map_type = ri->map_type; - void *fwd = ri->tgt_value; - u32 map_id = ri->map_id; - u32 flags = ri->flags; + struct bpf_redirect_info *ri = bpf_net_ctx_get_ri(); + enum bpf_map_type map_type; + u32 map_id; + void *fwd; + u32 flags; int err; + if (!ri) + return -EINVAL; + + map_type = ri->map_type; + fwd = ri->tgt_value; + map_id = ri->map_id; + flags = ri->flags; + ri->map_id = 0; /* Valid map id idr range: [1,INT_MAX[ */ ri->flags = 0; ri->map_type = BPF_MAP_TYPE_UNSPEC; @@ -4553,9 +4575,9 @@ int xdp_do_generic_redirect(struct net_device *dev, struct sk_buff *skb, BPF_CALL_2(bpf_xdp_redirect, u32, ifindex, u64, flags) { - struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info); + struct bpf_redirect_info *ri = bpf_net_ctx_get_ri(); - if (unlikely(flags)) + if (unlikely(flags || !ri)) return XDP_ABORTED; /* NB! Map type UNSPEC and map_id == INT_MAX (never generated diff --git a/net/core/lwt_bpf.c b/net/core/lwt_bpf.c index a94943681e5aa..afb05f58b64c5 100644 --- a/net/core/lwt_bpf.c +++ b/net/core/lwt_bpf.c @@ -38,12 +38,14 @@ static inline struct bpf_lwt *bpf_lwt_lwtunnel(struct lwtunnel_state *lwt) static int run_lwt_bpf(struct sk_buff *skb, struct bpf_lwt_prog *lwt, struct dst_entry *dst, bool can_redirect) { + struct bpf_net_context __bpf_net_ctx, *bpf_net_ctx; int ret; /* Disabling BH is needed to protect per-CPU bpf_redirect_info between * BPF prog and skb_do_redirect(). 
*/ local_bh_disable(); + bpf_net_ctx = bpf_net_ctx_set(&__bpf_net_ctx); bpf_compute_data_pointers(skb); ret = bpf_prog_run_save_cb(lwt->prog, skb); @@ -76,6 +78,7 @@ static int run_lwt_bpf(struct sk_buff *skb, struct bpf_lwt_prog *lwt, break; } + bpf_net_ctx_clear(bpf_net_ctx); local_bh_enable(); return ret;

From patchwork Fri May 3 18:25:19 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: Sebastian Andrzej Siewior
X-Patchwork-Id: 13653307
X-Patchwork-Delegate: kuba@kernel.org
From: Sebastian Andrzej Siewior
To: linux-kernel@vger.kernel.org, netdev@vger.kernel.org
Cc: "David S. Miller" , Boqun Feng , Daniel Borkmann , Eric Dumazet , Frederic Weisbecker , Ingo Molnar , Jakub Kicinski , Paolo Abeni , Peter Zijlstra , Thomas Gleixner , Waiman Long , Will Deacon , Sebastian Andrzej Siewior , Björn Töpel , Alexei Starovoitov , Andrii Nakryiko , Eduard Zingerman , Hao Luo , Jesper Dangaard Brouer , Jiri Olsa , John Fastabend , Jonathan Lemon , KP Singh , Maciej Fijalkowski , Magnus Karlsson , Martin KaFai Lau , Song Liu , Stanislav Fomichev , Toke Høiland-Jørgensen , Yonghong Song , bpf@vger.kernel.org
Subject: [PATCH net-next 15/15] net: Move per-CPU flush-lists to bpf_net_context on PREEMPT_RT.
Date: Fri, 3 May 2024 20:25:19 +0200
Message-ID: <20240503182957.1042122-16-bigeasy@linutronix.de>
In-Reply-To: <20240503182957.1042122-1-bigeasy@linutronix.de>
References: <20240503182957.1042122-1-bigeasy@linutronix.de>
Precedence: bulk
X-Mailing-List: bpf@vger.kernel.org

The per-CPU flush lists are accessed from within the NAPI callback (xdp_do_flush() for instance) and are subject to the same problem as struct bpf_redirect_info.

Add the lists cpu_map_flush_list, dev_map_flush_list and xskmap_map_flush_list to struct bpf_net_context and add wrappers for accessing them.

Cc: "Björn Töpel"
Cc: Alexei Starovoitov
Cc: Andrii Nakryiko
Cc: Eduard Zingerman
Cc: Hao Luo
Cc: Jesper Dangaard Brouer
Cc: Jiri Olsa
Cc: John Fastabend
Cc: Jonathan Lemon
Cc: KP Singh
Cc: Maciej Fijalkowski
Cc: Magnus Karlsson
Cc: Martin KaFai Lau
Cc: Song Liu
Cc: Stanislav Fomichev
Cc: Toke Høiland-Jørgensen
Cc: Yonghong Song
Cc: bpf@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior
---
 include/linux/filter.h | 38 ++++++++++++++++++++++++++++++++++++++
 kernel/bpf/cpumap.c | 24 ++++++++----------------
 kernel/bpf/devmap.c | 16 ++++++++--------
 net/xdp/xsk.c | 19 +++++++++++--------
 4 files changed, 65 insertions(+), 32 deletions(-)

diff --git a/include/linux/filter.h b/include/linux/filter.h index bdd69bd81df45..68401d84e2050 100644 --- a/include/linux/filter.h +++ b/include/linux/filter.h @@ -746,6 +746,9 @@ struct bpf_redirect_info { struct bpf_net_context { struct bpf_redirect_info ri; + struct list_head cpu_map_flush_list; + struct list_head dev_map_flush_list; + struct list_head xskmap_map_flush_list; }; #ifndef CONFIG_PREEMPT_RT @@ -758,6 +761,10 @@ static inline struct bpf_net_context *bpf_net_ctx_set(struct bpf_net_context *bp ctx = this_cpu_read(bpf_net_context); if (ctx != NULL) return NULL; + INIT_LIST_HEAD(&bpf_net_ctx->cpu_map_flush_list); + INIT_LIST_HEAD(&bpf_net_ctx->dev_map_flush_list); + INIT_LIST_HEAD(&bpf_net_ctx->xskmap_map_flush_list); + this_cpu_write(bpf_net_context, bpf_net_ctx); return bpf_net_ctx; } @@ -788,6 +795,10 @@ static inline struct bpf_net_context *bpf_net_ctx_set(struct bpf_net_context *bp if (tsk->bpf_net_context != NULL) return NULL; + INIT_LIST_HEAD(&bpf_net_ctx->cpu_map_flush_list); + INIT_LIST_HEAD(&bpf_net_ctx->dev_map_flush_list); + INIT_LIST_HEAD(&bpf_net_ctx->xskmap_map_flush_list); + tsk->bpf_net_context = bpf_net_ctx; return bpf_net_ctx; } @@ -820,6 +831,33 @@ static inline struct bpf_redirect_info *bpf_net_ctx_get_ri(void) return &bpf_net_ctx->ri; }
+static inline struct list_head *bpf_net_ctx_get_cpu_map_flush_list(void) +{ + struct bpf_net_context *bpf_net_ctx = bpf_net_ctx_get(); + + if (!bpf_net_ctx) + return NULL; + return &bpf_net_ctx->cpu_map_flush_list; +} + +static inline struct list_head *bpf_net_ctx_get_dev_flush_list(void) +{ + struct bpf_net_context *bpf_net_ctx = bpf_net_ctx_get(); + + if (!bpf_net_ctx) + return NULL; + return &bpf_net_ctx->dev_map_flush_list; +} + +static inline struct list_head *bpf_net_ctx_get_xskmap_flush_list(void) +{ + struct bpf_net_context *bpf_net_ctx = bpf_net_ctx_get(); + + if (!bpf_net_ctx) + return NULL; + return &bpf_net_ctx->xskmap_map_flush_list; +} + DEFINE_FREE(bpf_net_ctx_clear, struct bpf_net_context *, if (_T) bpf_net_ctx_clear(_T)); /* flags for bpf_redirect_info kern_flags */ diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c index 66974bd027109..0d18ffc93dcab 100644 --- a/kernel/bpf/cpumap.c +++ b/kernel/bpf/cpumap.c @@ -79,8 +79,6 @@ struct bpf_cpu_map { struct bpf_cpu_map_entry __rcu **cpu_map; }; -static DEFINE_PER_CPU(struct list_head, cpu_map_flush_list); - static struct bpf_map *cpu_map_alloc(union bpf_attr *attr) { u32 value_size = attr->value_size; @@ -709,7 +707,7 @@ static void bq_flush_to_queue(struct xdp_bulk_queue *bq) */ static void bq_enqueue(struct bpf_cpu_map_entry *rcpu, struct xdp_frame *xdpf) { - struct list_head *flush_list = this_cpu_ptr(&cpu_map_flush_list); + struct list_head *flush_list = bpf_net_ctx_get_cpu_map_flush_list(); struct xdp_bulk_queue *bq = this_cpu_ptr(rcpu->bulkq); if (unlikely(bq->count == CPU_MAP_BULK_SIZE)) @@ -761,9 +759,12 @@ int cpu_map_generic_redirect(struct bpf_cpu_map_entry *rcpu, void __cpu_map_flush(void) { - struct list_head *flush_list = this_cpu_ptr(&cpu_map_flush_list); + struct list_head *flush_list = bpf_net_ctx_get_cpu_map_flush_list(); struct xdp_bulk_queue *bq, *tmp; + if (!flush_list) + return; + list_for_each_entry_safe(bq, tmp, flush_list, flush_node) { bq_flush_to_queue(bq); @@ -775,20 +776,11 @@ void __cpu_map_flush(void) #ifdef CONFIG_DEBUG_NET bool cpu_map_check_flush(void) { - if (list_empty(this_cpu_ptr(&cpu_map_flush_list))) + struct list_head *flush_list = bpf_net_ctx_get_cpu_map_flush_list(); + + if (!flush_list || list_empty(bpf_net_ctx_get_cpu_map_flush_list())) return false; __cpu_map_flush(); return true; } #endif - -static int __init cpu_map_init(void) -{ - int cpu; - - for_each_possible_cpu(cpu) - INIT_LIST_HEAD(&per_cpu(cpu_map_flush_list, cpu)); - return 0; -} - -subsys_initcall(cpu_map_init); diff --git a/kernel/bpf/devmap.c b/kernel/bpf/devmap.c index 4e2cdbb5629f2..03533e45399a0 100644 --- a/kernel/bpf/devmap.c +++ b/kernel/bpf/devmap.c @@ -83,7 +83,6 @@ struct bpf_dtab { u32 n_buckets; }; -static DEFINE_PER_CPU(struct list_head, dev_flush_list); static DEFINE_SPINLOCK(dev_map_lock); static LIST_HEAD(dev_map_list); @@ -408,9 +407,12 @@ static void bq_xmit_all(struct xdp_dev_bulk_queue *bq, u32 flags) */ void __dev_flush(void) { - struct list_head *flush_list = this_cpu_ptr(&dev_flush_list); + struct list_head *flush_list = bpf_net_ctx_get_dev_flush_list(); struct xdp_dev_bulk_queue *bq, *tmp; + if (!flush_list) + return; + list_for_each_entry_safe(bq, tmp, flush_list, flush_node) { bq_xmit_all(bq, XDP_XMIT_FLUSH); bq->dev_rx = NULL; @@ -422,7 +424,9 @@ void __dev_flush(void) #ifdef CONFIG_DEBUG_NET bool dev_check_flush(void) { - if (list_empty(this_cpu_ptr(&dev_flush_list))) + struct list_head *flush_list = bpf_net_ctx_get_dev_flush_list(); + + if (!flush_list || 
list_empty(bpf_net_ctx_get_dev_flush_list())) return false; __dev_flush(); return true; @@ -453,7 +457,7 @@ static void *__dev_map_lookup_elem(struct bpf_map *map, u32 key) static void bq_enqueue(struct net_device *dev, struct xdp_frame *xdpf, struct net_device *dev_rx, struct bpf_prog *xdp_prog) { - struct list_head *flush_list = this_cpu_ptr(&dev_flush_list); + struct list_head *flush_list = bpf_net_ctx_get_dev_flush_list(); struct xdp_dev_bulk_queue *bq = this_cpu_ptr(dev->xdp_bulkq); if (unlikely(bq->count == DEV_MAP_BULK_SIZE)) @@ -1156,15 +1160,11 @@ static struct notifier_block dev_map_notifier = { static int __init dev_map_init(void) { - int cpu; - /* Assure tracepoint shadow struct _bpf_dtab_netdev is in sync */ BUILD_BUG_ON(offsetof(struct bpf_dtab_netdev, dev) != offsetof(struct _bpf_dtab_netdev, dev)); register_netdevice_notifier(&dev_map_notifier); - for_each_possible_cpu(cpu) - INIT_LIST_HEAD(&per_cpu(dev_flush_list, cpu)); return 0; } diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c index 727aa20be4bde..0ac5c80eef6bf 100644 --- a/net/xdp/xsk.c +++ b/net/xdp/xsk.c @@ -35,8 +35,6 @@ #define TX_BATCH_SIZE 32 #define MAX_PER_SOCKET_BUDGET (TX_BATCH_SIZE) -static DEFINE_PER_CPU(struct list_head, xskmap_flush_list); - void xsk_set_rx_need_wakeup(struct xsk_buff_pool *pool) { if (pool->cached_need_wakeup & XDP_WAKEUP_RX) @@ -375,9 +373,12 @@ static int xsk_rcv(struct xdp_sock *xs, struct xdp_buff *xdp) int __xsk_map_redirect(struct xdp_sock *xs, struct xdp_buff *xdp) { - struct list_head *flush_list = this_cpu_ptr(&xskmap_flush_list); + struct list_head *flush_list = bpf_net_ctx_get_xskmap_flush_list(); int err; + if (!flush_list) + return -EINVAL; + err = xsk_rcv(xs, xdp); if (err) return err; @@ -390,9 +391,11 @@ int __xsk_map_redirect(struct xdp_sock *xs, struct xdp_buff *xdp) void __xsk_map_flush(void) { - struct list_head *flush_list = this_cpu_ptr(&xskmap_flush_list); + struct list_head *flush_list = bpf_net_ctx_get_xskmap_flush_list(); struct xdp_sock *xs, *tmp; + if (!flush_list) + return; list_for_each_entry_safe(xs, tmp, flush_list, flush_node) { xsk_flush(xs); __list_del_clearprev(&xs->flush_node); @@ -402,7 +405,9 @@ void __xsk_map_flush(void) #ifdef CONFIG_DEBUG_NET bool xsk_map_check_flush(void) { - if (list_empty(this_cpu_ptr(&xskmap_flush_list))) + struct list_head *flush_list = bpf_net_ctx_get_xskmap_flush_list(); + + if (!flush_list || list_empty(flush_list)) return false; __xsk_map_flush(); return true; @@ -1775,7 +1780,7 @@ static struct pernet_operations xsk_net_ops = { static int __init xsk_init(void) { - int err, cpu; + int err; err = proto_register(&xsk_proto, 0 /* no slab */); if (err) @@ -1793,8 +1798,6 @@ static int __init xsk_init(void) if (err) goto out_pernet; - for_each_possible_cpu(cpu) - INIT_LIST_HEAD(&per_cpu(xskmap_flush_list, cpu)); return 0; out_pernet:
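
To make the new calling convention concrete, below is a minimal, illustrative sketch (not a hunk from this series) of a code path that runs an XDP program and may redirect. It mirrors the run_lwt_bpf() conversion earlier in the series: the on-stack bpf_net_context is registered with bpf_net_ctx_set() while BH is disabled and cleared again before BH is re-enabled, so that bpf_net_ctx_get_ri() and the three flush-list accessors added here find a registered context instead of returning NULL. The function example_run_xdp_prog() and its surrounding driver context are assumptions for illustration only.

/*
 * Illustrative sketch only -- not part of the posted series.
 * example_run_xdp_prog() is a hypothetical caller; all other identifiers
 * come from the series or from existing kernel APIs.
 */
#include <linux/filter.h>
#include <net/xdp.h>

static u32 example_run_xdp_prog(struct bpf_prog *prog, struct xdp_buff *xdp,
				struct net_device *dev)
{
	struct bpf_net_context __bpf_net_ctx, *bpf_net_ctx;
	u32 act;

	local_bh_disable();
	/* Register the on-stack context; redirect info and the three
	 * flush lists now live here instead of in per-CPU data.
	 */
	bpf_net_ctx = bpf_net_ctx_set(&__bpf_net_ctx);

	act = bpf_prog_run_xdp(prog, xdp);
	if (act == XDP_REDIRECT && xdp_do_redirect(dev, xdp, prog) < 0)
		act = XDP_ABORTED;

	/* Drain the dev/cpu/xsk bulk queues linked on the context's lists. */
	xdp_do_flush();

	bpf_net_ctx_clear(bpf_net_ctx);
	local_bh_enable();

	return act;
}

Because the accessors fall back to NULL when no context is registered, helpers such as __dev_flush(), __cpu_map_flush() and __xsk_map_flush() simply return early outside such a bracket, which is exactly the behaviour the hunks above add.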