[net-next,01/18] net: page_pool: Don't recycle into cache on PREEMPT_RT.

Message ID 20250309144653.825351-2-bigeasy@linutronix.de (mailing list archive)
State New
Delegated to: Netdev Maintainers
Series net: Cover more per-CPU storage with local nested BH locking.

Commit Message

Sebastian Andrzej Siewior March 9, 2025, 2:46 p.m. UTC
With preemptible softirq and no per-CPU locking in local_bh_disable() on
PREEMPT_RT, the consumer can be preempted while an skb is being returned.

Avoid the race by disabling recycling into the cache on PREEMPT_RT.

Cc: Jesper Dangaard Brouer <hawk@kernel.org>
Cc: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 net/core/page_pool.c | 4 ++++
 1 file changed, 4 insertions(+)

Patch

diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index f5e908c9e7ad8..1d669d4382729 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -800,6 +800,10 @@ static bool page_pool_napi_local(const struct page_pool *pool)
 	const struct napi_struct *napi;
 	u32 cpuid;
 
+	/* On PREEMPT_RT the softirq can be preempted by the consumer */
+	if (IS_ENABLED(CONFIG_PREEMPT_RT))
+		return false;
+
 	if (unlikely(!in_softirq()))
 		return false;
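
For readers outside the thread, here is a minimal, kernel-style sketch of
the reasoning behind the guard. The struct and helper below
(recycle_cache, allow_direct_recycle(), owner_cpu) are hypothetical
stand-ins modeled on the shape of page_pool_napi_local() after this
patch; they are illustration only, not the in-tree code.

#include <linux/compiler.h>
#include <linux/kconfig.h>
#include <linux/preempt.h>
#include <linux/smp.h>
#include <linux/types.h>

/* Stand-in for the lockless per-CPU recycle cache (pool->alloc):
 * updates rely on BH context providing exclusion on this CPU.
 */
struct recycle_cache {
	void *slot[64];
	u32 count;		/* unlocked; safe only without preemption */
};

static bool allow_direct_recycle(u32 owner_cpu)
{
	/* On PREEMPT_RT a BH-disabled section is preemptible and
	 * local_bh_disable() takes no per-CPU lock, so the consumer
	 * can run on this CPU and observe the cache mid-update.
	 * Refuse the lockless fast path entirely.
	 */
	if (IS_ENABLED(CONFIG_PREEMPT_RT))
		return false;

	/* Outside softirq context the cache is not ours to touch. */
	if (unlikely(!in_softirq()))
		return false;

	/* Only the CPU owning the cache may skip the lock. */
	return smp_processor_id() == owner_cpu;
}

When the fast path is refused, recycling falls back to the pool's locked
ptr_ring path, so correctness is preserved at the cost of the per-CPU
shortcut.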