Message ID | 20240416074232.23525-2-kerneljasonxing@gmail.com (mailing list archive) |
---|---|
State | Superseded |
Delegated to: | Netdev Maintainers |
Headers | show |
Series | locklessly protect left members in struct rps_dev_flow | expand |
On Tue, Apr 16, 2024 at 3:42 PM Jason Xing <kerneljasonxing@gmail.com> wrote:
>
> From: Jason Xing <kernelxing@tencent.com>
>
> Only one remaining place needs to be protected locklessly. This patch
> handles it.
>
> Signed-off-by: Jason Xing <kernelxing@tencent.com>
> ---
>  net/core/dev.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/net/core/dev.c b/net/core/dev.c
> index 854a3a28a8d8..cd97eeae8218 100644
> --- a/net/core/dev.c
> +++ b/net/core/dev.c
> @@ -4501,7 +4501,7 @@ set_rps_cpu(struct net_device *dev, struct sk_buff *skb,
>  	struct netdev_rx_queue *rxqueue;
>  	struct rps_dev_flow_table *flow_table;
>  	struct rps_dev_flow *old_rflow;
> -	u32 flow_id;
> +	u32 flow_id, head;
>  	u16 rxq_index;
>  	int rc;
>
> @@ -4529,8 +4529,8 @@ set_rps_cpu(struct net_device *dev, struct sk_buff *skb,
>  		old_rflow->filter = RPS_NO_FILTER;
>  out:
>  #endif
> -	rflow->last_qtail =
> -		READ_ONCE(per_cpu(softnet_data, next_cpu).input_queue_head);
> +	head = READ_ONCE(per_cpu(softnet_data, next_cpu).input_queue_head);
> +	rps_input_queue_tail_save(rflow->last_qtail, head);

I made a mistake here: I should pass &rflow->last_qtail instead. Will
update it.

> }
>
> 	rflow->cpu = next_cpu;
> --
> 2.37.3
>