Message ID | 20250414160754.503321-2-bigeasy@linutronix.de (mailing list archive) |
---|---|
State | New |
Delegated to: | Netdev Maintainers |
Series | net: Cover more per-CPU storage with local nested BH locking. |
With preemptible softirq and no per-CPU locking in local_bh_disable() on
PREEMPT_RT, the consumer can be preempted while an skb is returned.

Avoid the race by disabling recycling into the cache on PREEMPT_RT.

Cc: Jesper Dangaard Brouer <hawk@kernel.org>
Cc: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 net/core/page_pool.c | 4 ++++
 1 file changed, 4 insertions(+)

```diff
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 7745ad924ae2d..ba8803c2c0b20 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -805,6 +805,10 @@ static bool page_pool_napi_local(const struct page_pool *pool)
 	const struct napi_struct *napi;
 	u32 cpuid;
 
+	/* On PREEMPT_RT the softirq can be preempted by the consumer */
+	if (IS_ENABLED(CONFIG_PREEMPT_RT))
+		return false;
+
 	if (unlikely(!in_softirq()))
 		return false;
```