
[v2,net,2/2] rfs: annotate lockless accesses to RFS sock flow table

Message ID 20230606074115.3789733-3-edumazet@google.com (mailing list archive)
State Accepted
Commit 5c3b74a92aa285a3df722bf6329ba7ccf70346d6
Delegated to: Netdev Maintainers
Series rfs: annotate lockless accesses

Checks

Context Check Description
netdev/series_format success Posting correctly formatted
netdev/tree_selection success Clearly marked for net, async
netdev/fixes_present success Fixes tag present in non-next series
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 4193 this patch: 4193
netdev/cc_maintainers fail 1 blamed authors not CCed: therbert@google.com; 1 maintainers not CCed: therbert@google.com
netdev/build_clang success Errors and warnings before: 921 this patch: 921
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/deprecated_api success None detected
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success Fixes tag looks correct
netdev/build_allmodconfig_warn success Errors and warnings before: 4413 this patch: 4413
netdev/checkpatch warning WARNING: line length of 87 exceeds 80 columns WARNING: line length of 88 exceeds 80 columns
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0

Commit Message

Eric Dumazet June 6, 2023, 7:41 a.m. UTC
Add READ_ONCE()/WRITE_ONCE() on accesses to the sock flow table.

This also prevents a (smart?) compiler from removing the condition in:

if (table->ents[index] != newval)
        table->ents[index] = newval;

We need the condition to avoid dirtying a shared cache line.

Fixes: fec5e652e58f ("rfs: Receive Flow Steering")
Signed-off-by: Eric Dumazet <edumazet@google.com>
---
 include/linux/netdevice.h | 7 +++++--
 net/core/dev.c            | 6 ++++--
 2 files changed, 9 insertions(+), 4 deletions(-)
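
For readers less familiar with the idiom, here is a minimal userspace sketch of the pattern the patch annotates. This is not the kernel code: read_once()/write_once() are simplified stand-ins for the kernel's READ_ONCE()/WRITE_ONCE() macros, and the table size and hash/CPU encoding are made up for the example. The volatile accesses keep the compiler from eliding the comparison, while the comparison itself keeps repeat recordings of the same flow from dirtying the shared cache line.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Simplified stand-ins for the kernel's READ_ONCE()/WRITE_ONCE(): the
 * volatile accesses stop the compiler from merging, tearing or (as the
 * changelog notes) removing the comparison below.
 */
#define read_once(x)      (*(volatile typeof(x) *)&(x))
#define write_once(x, v)  (*(volatile typeof(x) *)&(x) = (v))

#define TABLE_SIZE 256   /* illustrative; the real table is sized at runtime */
#define CPU_BITS   0xffu /* illustrative stand-in for rps_cpu_mask */

static uint32_t ents[TABLE_SIZE]; /* shared flow table, one u32 per slot */

/* Record "the flow with this hash was last processed on this CPU".
 * The conditional store is the point: when the slot already holds the
 * desired value, no write is issued and the cache line stays clean.
 */
static void record_flow(uint32_t hash, uint32_t cpu)
{
	uint32_t index = hash % TABLE_SIZE;
	uint32_t val = (hash & ~CPU_BITS) | (cpu & CPU_BITS);

	if (read_once(ents[index]) != val)
		write_once(ents[index], val);
}

int main(void)
{
	record_flow(0xdeadbeef, 3);
	record_flow(0xdeadbeef, 3); /* second call compares and skips the store */
	printf("ents[%" PRIu32 "] = 0x%" PRIx32 "\n",
	       (uint32_t)(0xdeadbeefu % TABLE_SIZE),
	       ents[0xdeadbeefu % TABLE_SIZE]);
	return 0;
}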

Comments

Simon Horman June 6, 2023, 9:40 a.m. UTC | #1
On Tue, Jun 06, 2023 at 07:41:15AM +0000, Eric Dumazet wrote:
> Add READ_ONCE()/WRITE_ONCE() on accesses to the sock flow table.
> 
> This also prevents a (smart?) compiler from removing the condition in:
> 
> if (table->ents[index] != newval)
>         table->ents[index] = newval;
> 
> We need the condition to avoid dirtying a shared cache line.
> 
> Fixes: fec5e652e58f ("rfs: Receive Flow Steering")
> Signed-off-by: Eric Dumazet <edumazet@google.com>

Reviewed-by: Simon Horman <simon.horman@corigine.com>
Kuniyuki Iwashima June 6, 2023, 5:42 p.m. UTC | #2
From: Eric Dumazet <edumazet@google.com>
Date: Tue,  6 Jun 2023 07:41:15 +0000
> Add READ_ONCE()/WRITE_ONCE() on accesses to the sock flow table.
> 
> This also prevents a (smart?) compiler from removing the condition in:
> 
> if (table->ents[index] != newval)
>         table->ents[index] = newval;
> 
> We need the condition to avoid dirtying a shared cache line.
> 
> Fixes: fec5e652e58f ("rfs: Receive Flow Steering")
> Signed-off-by: Eric Dumazet <edumazet@google.com>

Reviewed-by: Kuniyuki Iwashima <kuniyu@amazon.com>


> ---
>  include/linux/netdevice.h | 7 +++++--
>  net/core/dev.c            | 6 ++++--
>  2 files changed, 9 insertions(+), 4 deletions(-)
> 
> diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
> index 08fbd4622ccf731daaee34ad99773d6dc2e82fa6..e6f22b7403d014a2cf4d81d931109a594ce1398e 100644
> --- a/include/linux/netdevice.h
> +++ b/include/linux/netdevice.h
> @@ -768,8 +768,11 @@ static inline void rps_record_sock_flow(struct rps_sock_flow_table *table,
>  		/* We only give a hint, preemption can change CPU under us */
>  		val |= raw_smp_processor_id();
>  
> -		if (table->ents[index] != val)
> -			table->ents[index] = val;
> +		/* The following WRITE_ONCE() is paired with the READ_ONCE()
> +		 * here, and another one in get_rps_cpu().
> +		 */
> +		if (READ_ONCE(table->ents[index]) != val)
> +			WRITE_ONCE(table->ents[index], val);
>  	}
>  }
>  
> diff --git a/net/core/dev.c b/net/core/dev.c
> index b3c13e0419356b943e90b1f46dd7e035c6ec1a9c..1495f8aff288e944c8cab21297f244a6fcde752f 100644
> --- a/net/core/dev.c
> +++ b/net/core/dev.c
> @@ -4471,8 +4471,10 @@ static int get_rps_cpu(struct net_device *dev, struct sk_buff *skb,
>  		u32 next_cpu;
>  		u32 ident;
>  
> -		/* First check into global flow table if there is a match */
> -		ident = sock_flow_table->ents[hash & sock_flow_table->mask];
> +		/* First check into global flow table if there is a match.
> +		 * This READ_ONCE() pairs with WRITE_ONCE() from rps_record_sock_flow().
> +		 */
> +		ident = READ_ONCE(sock_flow_table->ents[hash & sock_flow_table->mask]);
>  		if ((ident ^ hash) & ~rps_cpu_mask)
>  			goto try_rps;
>  
> -- 
> 2.41.0.rc0.172.g3f132b7071-goog

Patch

diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index 08fbd4622ccf731daaee34ad99773d6dc2e82fa6..e6f22b7403d014a2cf4d81d931109a594ce1398e 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -768,8 +768,11 @@  static inline void rps_record_sock_flow(struct rps_sock_flow_table *table,
 		/* We only give a hint, preemption can change CPU under us */
 		val |= raw_smp_processor_id();
 
-		if (table->ents[index] != val)
-			table->ents[index] = val;
+		/* The following WRITE_ONCE() is paired with the READ_ONCE()
+		 * here, and another one in get_rps_cpu().
+		 */
+		if (READ_ONCE(table->ents[index]) != val)
+			WRITE_ONCE(table->ents[index], val);
 	}
 }
 
diff --git a/net/core/dev.c b/net/core/dev.c
index b3c13e0419356b943e90b1f46dd7e035c6ec1a9c..1495f8aff288e944c8cab21297f244a6fcde752f 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -4471,8 +4471,10 @@  static int get_rps_cpu(struct net_device *dev, struct sk_buff *skb,
 		u32 next_cpu;
 		u32 ident;
 
-		/* First check into global flow table if there is a match */
-		ident = sock_flow_table->ents[hash & sock_flow_table->mask];
+		/* First check into global flow table if there is a match.
+		 * This READ_ONCE() pairs with WRITE_ONCE() from rps_record_sock_flow().
+		 */
+		ident = READ_ONCE(sock_flow_table->ents[hash & sock_flow_table->mask]);
 		if ((ident ^ hash) & ~rps_cpu_mask)
 			goto try_rps;
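
For context, here is a matching sketch of the lookup side that the second hunk annotates. Again a userspace illustration, not get_rps_cpu() itself: CPU_BITS stands in for rps_cpu_mask and the 8-bit split is assumed for the example. The single annotated read takes one snapshot of the slot; if the upper hash bits do not match the packet hash, the entry belongs to another flow and is ignored, so a stale or concurrently updated value at worst makes the lookup fall back.

#include <stdint.h>

#define CPU_BITS 0xffu /* illustrative stand-in for rps_cpu_mask */

/* One annotated read of the shared slot (kernel: READ_ONCE()). */
#define read_once(x) (*(volatile typeof(x) *)&(x))

/* Return the recorded CPU for this hash, or -1 when the slot was written
 * for a different flow (upper hash bits disagree) and must be ignored.
 * Pairs with the conditional write on the recording side.
 */
int lookup_flow_cpu(const uint32_t *ents, uint32_t table_mask, uint32_t hash)
{
	uint32_t ident = read_once(ents[hash & table_mask]);

	if ((ident ^ hash) & ~CPU_BITS)
		return -1;

	return (int)(ident & CPU_BITS);
}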