| Message ID | 20220204201259.1095226-2-bigeasy@linutronix.de (mailing list archive) |
| --- | --- |
| State | Superseded |
| Delegated to | Netdev Maintainers |
| Series | net: dev: PREEMPT_RT fixups. |
Sebastian Andrzej Siewior <bigeasy@linutronix.de> writes:

> The preempt_disable() section was introduced in commit
> cece1945bffcf ("net: disable preemption before call smp_processor_id()")
>
> and was added in case this function is invoked from preemptible
> context, and because get_cpu() has been added later on.
>
> The get_cpu() usage was added in commit
> b0e28f1effd1d ("net: netif_rx() must disable preemption")
>
> because ip_dev_loopback_xmit() invoked netif_rx() with preemption
> enabled, causing a warning in smp_processor_id(). The function
> netif_rx() should only be invoked from an interrupt context, which
> implies disabled preemption. The commit
> e30b38c298b55 ("ip: Fix ip_dev_loopback_xmit()")
>
> addressed this and replaced netif_rx() with netif_rx_ni() in
> ip_dev_loopback_xmit().
>
> Based on the discussion on the list, the former patch (b0e28f1effd1d)
> should not have been applied, only the latter (e30b38c298b55).
>
> Remove get_cpu() and preempt_disable() since the function is supposed
> to be invoked from a context with stable per-CPU pointers. Bottom
> halves have to be disabled at this point because the function may
> raise softirqs which need to be processed.
>
> Link: https://lkml.kernel.org/r/20100415.013347.98375530.davem@davemloft.net
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> Reviewed-by: Eric Dumazet <edumazet@google.com>

Reviewed-by: Toke Høiland-Jørgensen <toke@redhat.com>
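The calling convention the message describes can be sketched as follows. This is a hypothetical illustration, not code from this series: `demo_isr()` and `demo_build_skb()` are made-up names, and the point is only where each netif_rx() variant is appropriate.

```c
/* Hypothetical illustration of the calling contexts described above. */

/* Hard-IRQ context: preemption is implicitly disabled, so the plain
 * netif_rx() is safe here and per-CPU data stays stable.
 */
static irqreturn_t demo_isr(int irq, void *dev_id)
{
	struct sk_buff *skb = demo_build_skb(dev_id);	/* hypothetical helper */

	netif_rx(skb);
	return IRQ_HANDLED;
}

/* Process context (as in ip_dev_loopback_xmit() after e30b38c298b55):
 * use netif_rx_ni(), the variant that is safe to call with preemption
 * enabled and that makes sure raised softirqs get processed.
 */
static void demo_loopback_xmit(struct sk_buff *skb)
{
	netif_rx_ni(skb);
}
```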
```diff
diff --git a/net/core/dev.c b/net/core/dev.c
index 1baab07820f65..0d13340ed4054 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -4796,7 +4796,6 @@ static int netif_rx_internal(struct sk_buff *skb)
 		struct rps_dev_flow voidflow, *rflow = &voidflow;
 		int cpu;
 
-		preempt_disable();
 		rcu_read_lock();
 
 		cpu = get_rps_cpu(skb->dev, skb, &rflow);
@@ -4806,14 +4805,12 @@ static int netif_rx_internal(struct sk_buff *skb)
 		ret = enqueue_to_backlog(skb, cpu, &rflow->last_qtail);
 
 		rcu_read_unlock();
-		preempt_enable();
 	} else
 #endif
 	{
 		unsigned int qtail;
 
-		ret = enqueue_to_backlog(skb, get_cpu(), &qtail);
-		put_cpu();
+		ret = enqueue_to_backlog(skb, smp_processor_id(), &qtail);
 	}
 	return ret;
 }
```
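With the hunks applied, the non-RPS path reduces to roughly the following. This is a sketch reconstructed from the diff above with surrounding context elided; it relies on the caller having bottom halves disabled, as the commit message requires.

```c
/* Sketch of netif_rx_internal()'s non-RPS path after this patch
 * (reconstructed from the diff; not a complete function).
 * BHs are disabled by the caller, so smp_processor_id() is stable
 * without an explicit get_cpu()/put_cpu() pair.
 */
{
	unsigned int qtail;

	ret = enqueue_to_backlog(skb, smp_processor_id(), &qtail);
}
```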