
[net] rds: introduce acquire/release ordering in acquire/release_in_xmit()

Message ID ZfLdv5DZvBg0wajJ@libra05 (mailing list archive)
State Superseded
Delegated to: Netdev Maintainers
Series [net] rds: introduce acquire/release ordering in acquire/release_in_xmit()

Checks

Context Check Description
netdev/series_format success Single patches do not need cover letters
netdev/tree_selection success Clearly marked for net
netdev/ynl success Generated files up to date; no warnings/errors; no diff in generated;
netdev/fixes_present fail Series targets non-next tree, but doesn't contain any Fixes tags
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 943 this patch: 943
netdev/build_tools success No tools touched, skip
netdev/cc_maintainers success CCed 7 of 7 maintainers
netdev/build_clang success Errors and warnings before: 956 this patch: 956
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/deprecated_api success None detected
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn success Errors and warnings before: 960 this patch: 960
netdev/checkpatch success total: 0 errors, 0 warnings, 0 checks, 15 lines checked
netdev/build_clang_rust success No Rust files in patch. Skipping build
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0
netdev/contest success net-next-2024-03-14--12-00 (tests: 908)

Commit Message

Yewon Choi March 14, 2024, 11:21 a.m. UTC
acquire/release_in_xmit() work as a bit lock in rds_send_xmit(), so
they are expected to provide acquire/release memory ordering
semantics. However, test_and_set_bit() and clear_bit() don't imply
such ordering; on top of that, the smp_mb__after_atomic() that
follows clear_bit() does not guarantee release ordering (the memory
barrier would have to be placed before clear_bit(), not after it).

Instead, use test_and_set_bit_lock()/clear_bit_unlock() here, which
provide the required acquire/release semantics.

Signed-off-by: Yewon Choi <woni9911@gmail.com>
---
 net/rds/send.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)
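
For reference, a minimal sketch of the before/after idiom (kernel-style
C; it assumes the struct rds_conn_path and RDS_IN_XMIT declarations
from net/rds/rds.h, and everything else in the file is elided):

#include <linux/bitops.h>
#include "rds.h"

/* Before: clear_bit() is an unordered RMW operation, and the
 * smp_mb__after_atomic() that follows it sits after the clearing
 * store, so nothing orders the critical section's accesses before
 * the bit becomes visible as clear.
 */
static void release_in_xmit_old(struct rds_conn_path *cp)
{
	clear_bit(RDS_IN_XMIT, &cp->cp_flags);
	smp_mb__after_atomic();		/* too late for release ordering */
}

/* After: the _lock/_unlock bitop variants carry the lock semantics
 * explicitly.  A successful test_and_set_bit_lock() has ACQUIRE
 * semantics, and clear_bit_unlock() has RELEASE semantics, i.e. all
 * accesses inside the critical section complete before the bit is
 * observed clear by the next acquirer.
 */
static int acquire_in_xmit_new(struct rds_conn_path *cp)
{
	return test_and_set_bit_lock(RDS_IN_XMIT, &cp->cp_flags) == 0;
}

static void release_in_xmit_new(struct rds_conn_path *cp)
{
	clear_bit_unlock(RDS_IN_XMIT, &cp->cp_flags);
}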

Comments

Michal Kubiak March 14, 2024, 11:51 a.m. UTC | #1
On Thu, Mar 14, 2024 at 08:21:35PM +0900, Yewon Choi wrote:
> acquire/release_in_xmit() work as a bit lock in rds_send_xmit(), so
> they are expected to provide acquire/release memory ordering
> semantics. However, test_and_set_bit() and clear_bit() don't imply
> such ordering; on top of that, the smp_mb__after_atomic() that
> follows clear_bit() does not guarantee release ordering (the memory
> barrier would have to be placed before clear_bit(), not after it).
> 
> Instead, use test_and_set_bit_lock()/clear_bit_unlock() here, which
> provide the required acquire/release semantics.
> 
> Signed-off-by: Yewon Choi <woni9911@gmail.com>

Missing "Fixes" tag for the patch addressed to the "net" tree.

Thanks,
Michal
Allison Henderson March 14, 2024, 10:37 p.m. UTC | #2
On Thu, 2024-03-14 at 12:51 +0100, Michal Kubiak wrote:
> On Thu, Mar 14, 2024 at 08:21:35PM +0900, Yewon Choi wrote:
> > acquire/release_in_xmit() work as a bit lock in rds_send_xmit(),
> > so they are expected to provide acquire/release memory ordering
> > semantics. However, test_and_set_bit() and clear_bit() don't
> > imply such ordering; on top of that, the smp_mb__after_atomic()
> > that follows clear_bit() does not guarantee release ordering (the
> > memory barrier would have to be placed before clear_bit(), not
> > after it).
> > 
> > Instead, use test_and_set_bit_lock()/clear_bit_unlock() here,
> > which provide the required acquire/release semantics.
> > 
> > Signed-off-by: Yewon Choi <woni9911@gmail.com>
> 
> Missing "Fixes" tag for the patch addressed to the "net" tree.
> 
> Thanks,
> Michal

Yes, I think it needs:

Fixes: 1f9ecd7eacfd ("RDS: Pass rds_conn_path to rds_send_xmit()")

Since that is the last patch to modify the affected code.  Other than
that I think the patch looks good.  With the tag fixed, you can add my
rvb:

Reviewed-by: Allison Henderson <allison.henderson@oracle.com>
>
Yewon Choi March 15, 2024, 9:24 a.m. UTC | #3
On Thu, Mar 14, 2024 at 10:37:29PM +0000, Allison Henderson wrote:
> On Thu, 2024-03-14 at 12:51 +0100, Michal Kubiak wrote:
> > On Thu, Mar 14, 2024 at 08:21:35PM +0900, Yewon Choi wrote:
> > > acquire/release_in_xmit() work as a bit lock in rds_send_xmit(),
> > > so they are expected to provide acquire/release memory ordering
> > > semantics. However, test_and_set_bit() and clear_bit() don't
> > > imply such ordering; on top of that, the smp_mb__after_atomic()
> > > that follows clear_bit() does not guarantee release ordering
> > > (the memory barrier would have to be placed before clear_bit(),
> > > not after it).
> > > 
> > > Instead, use test_and_set_bit_lock()/clear_bit_unlock() here,
> > > which provide the required acquire/release semantics.
> > > 
> > > Signed-off-by: Yewon Choi <woni9911@gmail.com>
> > 
> > Missing "Fixes" tag for the patch addressed to the "net" tree.
> >

Sorry for the mistake, I'll correct it and send a v2 patch.

> > Thanks,
> > Michal
> 
> Yes, I think it needs:
> 
> Fixes: 1f9ecd7eacfd ("RDS: Pass rds_conn_path to rds_send_xmit()")
>
> Since that is the last patch to modify the affected code.  Other than
> that I think the patch looks good.  With the tag fixed, you can add my
> rvb:
> 

Also, the test_and_set_bit()/clear_bit() pair was first introduced in
commit 0f4b1c7e89e6, so I think this tag can be added, too:

Fixes: 0f4b1c7e89e6 ("rds: fix rds_send_xmit() serialization")

> Reviewed-by: Allison Henderson <allison.henderson@oracle.com>
> > 
> 

Thank you for the review.

Sincerely,
Yewon Choi

Patch

diff --git a/net/rds/send.c b/net/rds/send.c
index 5e57a1581dc6..8f38009721b7 100644
--- a/net/rds/send.c
+++ b/net/rds/send.c
@@ -103,13 +103,12 @@  EXPORT_SYMBOL_GPL(rds_send_path_reset);
 
 static int acquire_in_xmit(struct rds_conn_path *cp)
 {
-	return test_and_set_bit(RDS_IN_XMIT, &cp->cp_flags) == 0;
+	return test_and_set_bit_lock(RDS_IN_XMIT, &cp->cp_flags) == 0;
 }
 
 static void release_in_xmit(struct rds_conn_path *cp)
 {
-	clear_bit(RDS_IN_XMIT, &cp->cp_flags);
-	smp_mb__after_atomic();
+	clear_bit_unlock(RDS_IN_XMIT, &cp->cp_flags);
 	/*
 	 * We don't use wait_on_bit()/wake_up_bit() because our waking is in a
 	 * hot path and finding waiters is very rare.  We don't want to walk
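
To see why the ordering matters on the caller side, here is a
simplified, hypothetical sketch of how the bit is used as a lock
(modeled loosely on rds_send_xmit(); xmit_example() and the stand-in
body are invented for illustration):

static int xmit_example(struct rds_conn_path *cp)
{
	if (!acquire_in_xmit(cp))
		return -ENXIO;	/* another context is already in xmit */

	/*
	 * Critical section: the real send-queue work of
	 * rds_send_xmit() goes here.  clear_bit_unlock() in
	 * release_in_xmit() guarantees these accesses complete before
	 * RDS_IN_XMIT is observed clear, so whoever wins
	 * acquire_in_xmit() next sees a consistent state.  The old
	 * clear_bit() + smp_mb__after_atomic() placed the barrier
	 * after the clearing store, which gives no such guarantee.
	 */
	cp->cp_send_gen++;	/* stand-in for the real send work */

	release_in_xmit(cp);
	return 0;
}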