| Message ID | 20230606074115.3789733-3-edumazet@google.com (mailing list archive) |
|---|---|
| State | Accepted |
| Commit | 5c3b74a92aa285a3df722bf6329ba7ccf70346d6 |
| Delegated to: | Netdev Maintainers |
| Series | rfs: annotate lockless accesses |
On Tue, Jun 06, 2023 at 07:41:15AM +0000, Eric Dumazet wrote:
> Add READ_ONCE()/WRITE_ONCE() on accesses to the sock flow table.
>
> This also prevents a (smart?) compiler from removing the condition in:
>
>	if (table->ents[index] != newval)
>		table->ents[index] = newval;
>
> We need the condition to avoid dirtying a shared cache line.
>
> Fixes: fec5e652e58f ("rfs: Receive Flow Steering")
> Signed-off-by: Eric Dumazet <edumazet@google.com>

Reviewed-by: Simon Horman <simon.horman@corigine.com>
From: Eric Dumazet <edumazet@google.com>
Date: Tue, 6 Jun 2023 07:41:15 +0000
> Add READ_ONCE()/WRITE_ONCE() on accesses to the sock flow table.
>
> This also prevents a (smart?) compiler from removing the condition in:
>
>	if (table->ents[index] != newval)
>		table->ents[index] = newval;
>
> We need the condition to avoid dirtying a shared cache line.
>
> Fixes: fec5e652e58f ("rfs: Receive Flow Steering")
> Signed-off-by: Eric Dumazet <edumazet@google.com>

Reviewed-by: Kuniyuki Iwashima <kuniyu@amazon.com>

> ---
>  include/linux/netdevice.h | 7 +++++--
>  net/core/dev.c            | 6 ++++--
>  2 files changed, 9 insertions(+), 4 deletions(-)
>
> diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
> index 08fbd4622ccf731daaee34ad99773d6dc2e82fa6..e6f22b7403d014a2cf4d81d931109a594ce1398e 100644
> --- a/include/linux/netdevice.h
> +++ b/include/linux/netdevice.h
> @@ -768,8 +768,11 @@ static inline void rps_record_sock_flow(struct rps_sock_flow_table *table,
> 		/* We only give a hint, preemption can change CPU under us */
> 		val |= raw_smp_processor_id();
>
> -		if (table->ents[index] != val)
> -			table->ents[index] = val;
> +		/* The following WRITE_ONCE() is paired with the READ_ONCE()
> +		 * here, and another one in get_rps_cpu().
> +		 */
> +		if (READ_ONCE(table->ents[index]) != val)
> +			WRITE_ONCE(table->ents[index], val);
> 	}
> }
>
> diff --git a/net/core/dev.c b/net/core/dev.c
> index b3c13e0419356b943e90b1f46dd7e035c6ec1a9c..1495f8aff288e944c8cab21297f244a6fcde752f 100644
> --- a/net/core/dev.c
> +++ b/net/core/dev.c
> @@ -4471,8 +4471,10 @@ static int get_rps_cpu(struct net_device *dev, struct sk_buff *skb,
> 		u32 next_cpu;
> 		u32 ident;
>
> -		/* First check into global flow table if there is a match */
> -		ident = sock_flow_table->ents[hash & sock_flow_table->mask];
> +		/* First check into global flow table if there is a match.
> +		 * This READ_ONCE() pairs with WRITE_ONCE() from rps_record_sock_flow().
> +		 */
> +		ident = READ_ONCE(sock_flow_table->ents[hash & sock_flow_table->mask]);
> 		if ((ident ^ hash) & ~rps_cpu_mask)
> 			goto try_rps;
>
> --
> 2.41.0.rc0.172.g3f132b7071-goog
diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index 08fbd4622ccf731daaee34ad99773d6dc2e82fa6..e6f22b7403d014a2cf4d81d931109a594ce1398e 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -768,8 +768,11 @@ static inline void rps_record_sock_flow(struct rps_sock_flow_table *table,
 		/* We only give a hint, preemption can change CPU under us */
 		val |= raw_smp_processor_id();
 
-		if (table->ents[index] != val)
-			table->ents[index] = val;
+		/* The following WRITE_ONCE() is paired with the READ_ONCE()
+		 * here, and another one in get_rps_cpu().
+		 */
+		if (READ_ONCE(table->ents[index]) != val)
+			WRITE_ONCE(table->ents[index], val);
 	}
 }
 
diff --git a/net/core/dev.c b/net/core/dev.c
index b3c13e0419356b943e90b1f46dd7e035c6ec1a9c..1495f8aff288e944c8cab21297f244a6fcde752f 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -4471,8 +4471,10 @@ static int get_rps_cpu(struct net_device *dev, struct sk_buff *skb,
 		u32 next_cpu;
 		u32 ident;
 
-		/* First check into global flow table if there is a match */
-		ident = sock_flow_table->ents[hash & sock_flow_table->mask];
+		/* First check into global flow table if there is a match.
+		 * This READ_ONCE() pairs with WRITE_ONCE() from rps_record_sock_flow().
+		 */
+		ident = READ_ONCE(sock_flow_table->ents[hash & sock_flow_table->mask]);
 		if ((ident ^ hash) & ~rps_cpu_mask)
 			goto try_rps;
Add READ_ONCE()/WRITE_ONCE() on accesses to the sock flow table.

This also prevents a (smart?) compiler from removing the condition in:

	if (table->ents[index] != newval)
		table->ents[index] = newval;

We need the condition to avoid dirtying a shared cache line.

Fixes: fec5e652e58f ("rfs: Receive Flow Steering")
Signed-off-by: Eric Dumazet <edumazet@google.com>
---
 include/linux/netdevice.h | 7 +++++--
 net/core/dev.c            | 6 ++++--
 2 files changed, 9 insertions(+), 4 deletions(-)