riscv: fix test_and_{set,clear}_bit ordering documentation

Message ID 20250311-riscv-fix-test-and-set-bit-comment-v1-1-8d2598e1e43b@iencinas.com
State New
Series riscv: fix test_and_{set,clear}_bit ordering documentation

Checks

Context Check Description
bjorn/pre-ci_am success Success
bjorn/build-rv32-defconfig success build-rv32-defconfig
bjorn/build-rv64-clang-allmodconfig success build-rv64-clang-allmodconfig
bjorn/build-rv64-gcc-allmodconfig success build-rv64-gcc-allmodconfig
bjorn/build-rv64-nommu-k210-defconfig success build-rv64-nommu-k210-defconfig
bjorn/build-rv64-nommu-k210-virt success build-rv64-nommu-k210-virt
bjorn/checkpatch success checkpatch
bjorn/dtb-warn-rv64 success dtb-warn-rv64
bjorn/header-inline success header-inline
bjorn/kdoc success kdoc
bjorn/module-param success module-param
bjorn/verify-fixes success verify-fixes
bjorn/verify-signedoff success verify-signedoff

Commit Message

Ignacio Encinas March 11, 2025, 5:20 p.m. UTC
test_and_{set,clear}_bit are fully ordered as specified in
Documentation/atomic_bitops.txt. Fix incorrect comment stating otherwise.

Note that the implementation is correct since commit
9347ce54cd69 ("RISC-V: __test_and_op_bit_ord should be strongly ordered")
was introduced.

Signed-off-by: Ignacio Encinas <ignacio@iencinas.com>
---
This seems to be a leftover comment from the initial implementation
which assumed these operations were relaxed.

Documentation/atomic_bitops.txt states:

  [...]
  RMW atomic operations with return value:
  
    test_and_{set,clear,change}_bit()
    test_and_set_bit_lock()
  [...]

   - RMW operations that have a return value are fully ordered.

Similar comments can be found in
include/asm-generic/bitops/instrumented-atomic.h,
include/linux/atomic/atomic-long.h, etc...
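
For illustration, here is a minimal sketch (not part of this patch; every
identifier in it is made up) of the kind of producer/consumer hand-off that
relies on the full ordering documented above:

  #include <linux/bitops.h>
  #include <linux/errno.h>

  static unsigned long data_ready_flags;
  static int shared_data;

  #define DATA_READY	0

  static void producer(int value)
  {
          shared_data = value;

          /*
           * Fully ordered RMW (implied full memory barrier): the store to
           * shared_data is visible before the bit can be observed as set,
           * with no explicit smp_mb__before_atomic() needed. A plain
           * set_bit() would not give that guarantee.
           */
          test_and_set_bit(DATA_READY, &data_ready_flags);
  }

  static int consumer(void)
  {
          /*
           * The full barrier on this side orders the read of shared_data
           * after the bit test, so a successful claim always sees the
           * producer's store.
           */
          if (!test_and_clear_bit(DATA_READY, &data_ready_flags))
                  return -EAGAIN;

          return shared_data;
  }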
---
 arch/riscv/include/asm/bitops.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)


---
base-commit: 2014c95afecee3e76ca4a56956a936e23283f05b
change-id: 20250311-riscv-fix-test-and-set-bit-comment-aa9081a27d61

Best regards,
Ignacio Encinas <ignacio@iencinas.com>

Comments

Yury Norov March 12, 2025, 11:38 p.m. UTC | #1
On Tue, Mar 11, 2025 at 06:20:22PM +0100, Ignacio Encinas wrote:
> test_and_{set,clear}_bit are fully ordered as specified in
> Documentation/atomic_bitops.txt. Fix incorrect comment stating otherwise.
> 
> Note that the implementation is correct since commit
> 9347ce54cd69 ("RISC-V: __test_and_op_bit_ord should be strongly ordered")
> was introduced.
> 
> Signed-off-by: Ignacio Encinas <ignacio@iencinas.com>

Applied in bitmap-for-next.

Thanks,
Yury

> ---
> This seems to be a leftover comment from the initial implementation
> which assumed these operations were relaxed.
> 
> Documentation/atomic_bitops.txt states:
> 
>   [...]
>   RMW atomic operations with return value:
>   
>     test_and_{set,clear,change}_bit()
>     test_and_set_bit_lock()
>   [...]
> 
>    - RMW operations that have a return value are fully ordered.
> 
> Similar comments can be found in
> include/asm-generic/bitops/instrumented-atomic.h,
> include/linux/atomic/atomic-long.h, etc...
> ---
>  arch/riscv/include/asm/bitops.h | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/riscv/include/asm/bitops.h b/arch/riscv/include/asm/bitops.h
> index c6bd3d8354a96b4e7bbef0e98a201da412301b57..49a0f48d93df5be4d38fe25b437378467e4ca433 100644
> --- a/arch/riscv/include/asm/bitops.h
> +++ b/arch/riscv/include/asm/bitops.h
> @@ -226,7 +226,7 @@ static __always_inline int variable_fls(unsigned int x)
>   * @nr: Bit to set
>   * @addr: Address to count from
>   *
> - * This operation may be reordered on other architectures than x86.
> + * This is an atomic fully-ordered operation (implied full memory barrier).
>   */
>  static __always_inline int arch_test_and_set_bit(int nr, volatile unsigned long *addr)
>  {
> @@ -238,7 +238,7 @@ static __always_inline int arch_test_and_set_bit(int nr, volatile unsigned long
>   * @nr: Bit to clear
>   * @addr: Address to count from
>   *
> - * This operation can be reordered on other architectures other than x86.
> + * This is an atomic fully-ordered operation (implied full memory barrier).
>   */
>  static __always_inline int arch_test_and_clear_bit(int nr, volatile unsigned long *addr)
>  {
> 
> ---
> base-commit: 2014c95afecee3e76ca4a56956a936e23283f05b
> change-id: 20250311-riscv-fix-test-and-set-bit-comment-aa9081a27d61
> 
> Best regards,
> -- 
> Ignacio Encinas <ignacio@iencinas.com>

Patch

diff --git a/arch/riscv/include/asm/bitops.h b/arch/riscv/include/asm/bitops.h
index c6bd3d8354a96b4e7bbef0e98a201da412301b57..49a0f48d93df5be4d38fe25b437378467e4ca433 100644
--- a/arch/riscv/include/asm/bitops.h
+++ b/arch/riscv/include/asm/bitops.h
@@ -226,7 +226,7 @@  static __always_inline int variable_fls(unsigned int x)
  * @nr: Bit to set
  * @addr: Address to count from
  *
- * This operation may be reordered on other architectures than x86.
+ * This is an atomic fully-ordered operation (implied full memory barrier).
  */
 static __always_inline int arch_test_and_set_bit(int nr, volatile unsigned long *addr)
 {
@@ -238,7 +238,7 @@  static __always_inline int arch_test_and_set_bit(int nr, volatile unsigned long
  * @nr: Bit to clear
  * @addr: Address to count from
  *
- * This operation can be reordered on other architectures other than x86.
+ * This is an atomic fully-ordered operation (implied full memory barrier).
  */
 static __always_inline int arch_test_and_clear_bit(int nr, volatile unsigned long *addr)
 {
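
For reference, a simplified sketch of what a fully-ordered test-and-set can
look like on RV64. This is illustrative only, not the kernel's actual
__test_and_op_bit_ord macro, and the function name is made up; the point is
that an AMO carrying both the .aq and .rl bits is what provides the full
barrier the updated comments describe, as opposed to a relaxed or
acquire-only AMO:

  /* Illustrative RV64 sketch of a fully-ordered test-and-set. */
  static inline int sketch_test_and_set_bit(int nr, volatile unsigned long *addr)
  {
          unsigned long mask = 1UL << (nr % 64);
          volatile unsigned long *word = addr + nr / 64;
          unsigned long old;

          __asm__ __volatile__ (
                  /* old = *word; *word |= mask; fully ordered via .aqrl */
                  "amoor.d.aqrl %0, %2, %1"
                  : "=r" (old), "+A" (*word)
                  : "r" (mask)
                  : "memory");

          return (old & mask) != 0;
  }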