
[net,v2] rds: introduce acquire/release ordering in acquire/release_in_xmit()

Message ID ZfQUxnNTO9AJmzwc@libra05 (mailing list archive)
State Not Applicable

Commit Message

Yewon Choi March 15, 2024, 9:28 a.m. UTC
acquire/release_in_xmit() work as a bit lock in rds_send_xmit(), so they
are expected to provide acquire/release memory ordering semantics.
However, test_and_set_bit()/clear_bit() don't imply such semantics, and
the smp_mb__after_atomic() that follows clear_bit() does not guarantee
release ordering either (the memory barrier would have to be placed
before clear_bit()).

Instead, use test_and_set_bit_lock()/clear_bit_unlock(), which provide
the required acquire/release semantics.

Fixes: 0f4b1c7e89e6 ("rds: fix rds_send_xmit() serialization")
Fixes: 1f9ecd7eacfd ("RDS: Pass rds_conn_path to rds_send_xmit()")
Signed-off-by: Yewon Choi <woni9911@gmail.com>
---
Changes in v1 -> v2:
- Added missing Fixes tags

 net/rds/send.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

Comments

Michal Kubiak March 15, 2024, 1:32 p.m. UTC | #1
On Fri, Mar 15, 2024 at 06:28:38PM +0900, Yewon Choi wrote:
> acquire/release_in_xmit() work as a bit lock in rds_send_xmit(), so they
> are expected to provide acquire/release memory ordering semantics.
> However, test_and_set_bit()/clear_bit() don't imply such semantics, and
> the smp_mb__after_atomic() that follows clear_bit() does not guarantee
> release ordering either (the memory barrier would have to be placed
> before clear_bit()).
> 
> Instead, use test_and_set_bit_lock()/clear_bit_unlock(), which provide
> the required acquire/release semantics.
> 
> Fixes: 0f4b1c7e89e6 ("rds: fix rds_send_xmit() serialization")
> Fixes: 1f9ecd7eacfd ("RDS: Pass rds_conn_path to rds_send_xmit()")
> Signed-off-by: Yewon Choi <woni9911@gmail.com>
> ---
> Changes in v1 -> v2:
> - Added missing Fixes tags
> 
>  net/rds/send.c | 5 ++---
>  1 file changed, 2 insertions(+), 3 deletions(-)
> 
> diff --git a/net/rds/send.c b/net/rds/send.c
> index 5e57a1581dc6..8f38009721b7 100644
> --- a/net/rds/send.c
> +++ b/net/rds/send.c
> @@ -103,13 +103,12 @@ EXPORT_SYMBOL_GPL(rds_send_path_reset);
>  
>  static int acquire_in_xmit(struct rds_conn_path *cp)
>  {
> -	return test_and_set_bit(RDS_IN_XMIT, &cp->cp_flags) == 0;
> +	return test_and_set_bit_lock(RDS_IN_XMIT, &cp->cp_flags) == 0;
>  }
>  
>  static void release_in_xmit(struct rds_conn_path *cp)
>  {
> -	clear_bit(RDS_IN_XMIT, &cp->cp_flags);
> -	smp_mb__after_atomic();
> +	clear_bit_unlock(RDS_IN_XMIT, &cp->cp_flags);
>  	/*
>  	 * We don't use wait_on_bit()/wake_up_bit() because our waking is in a
>  	 * hot path and finding waiters is very rare.  We don't want to walk
> -- 
> 2.43.0
> 

LGTM

Thanks,
Reviewed-by: Michal Kubiak <michal.kubiak@intel.com>
patchwork-bot+netdevbpf@kernel.org March 19, 2024, 11:30 a.m. UTC | #2
Hello:

This patch was applied to netdev/net.git (main)
by Paolo Abeni <pabeni@redhat.com>:

On Fri, 15 Mar 2024 18:28:38 +0900 you wrote:
> acquire/release_in_xmit() work as a bit lock in rds_send_xmit(), so they
> are expected to provide acquire/release memory ordering semantics.
> However, test_and_set_bit()/clear_bit() don't imply such semantics, and
> the smp_mb__after_atomic() that follows clear_bit() does not guarantee
> release ordering either (the memory barrier would have to be placed
> before clear_bit()).
> 
> Instead, use test_and_set_bit_lock()/clear_bit_unlock(), which provide
> the required acquire/release semantics.
> 
> [...]

Here is the summary with links:
  - [net,v2] rds: introduce acquire/release ordering in acquire/release_in_xmit()
    https://git.kernel.org/netdev/net/c/1422f28826d2

You are awesome, thank you!

Patch

diff --git a/net/rds/send.c b/net/rds/send.c
index 5e57a1581dc6..8f38009721b7 100644
--- a/net/rds/send.c
+++ b/net/rds/send.c
@@ -103,13 +103,12 @@ EXPORT_SYMBOL_GPL(rds_send_path_reset);
 
 static int acquire_in_xmit(struct rds_conn_path *cp)
 {
-	return test_and_set_bit(RDS_IN_XMIT, &cp->cp_flags) == 0;
+	return test_and_set_bit_lock(RDS_IN_XMIT, &cp->cp_flags) == 0;
 }
 
 static void release_in_xmit(struct rds_conn_path *cp)
 {
-	clear_bit(RDS_IN_XMIT, &cp->cp_flags);
-	smp_mb__after_atomic();
+	clear_bit_unlock(RDS_IN_XMIT, &cp->cp_flags);
 	/*
 	 * We don't use wait_on_bit()/wake_up_bit() because our waking is in a
 	 * hot path and finding waiters is very rare.  We don't want to walk