
[v11,2/9] xen: introduce generic non-atomic test_*bit()

Message ID 79c3c31f0032a79c25d0a458b6091904457c8939.1716547693.git.oleksii.kurochko@gmail.com (mailing list archive)
State Superseded
Series Enable build of full Xen for RISC-V

Commit Message

Oleksii Kurochko May 24, 2024, 11:08 a.m. UTC
The following generic functions were introduced:
* test_bit
* generic__test_and_set_bit
* generic__test_and_clear_bit
* generic__test_and_change_bit

These functions and macros can be useful for architectures
that don't have corresponding arch-specific instructions.

Also, the patch introduces the following generics which are
used by the functions mentioned above:
* BITOP_BITS_PER_WORD
* BITOP_MASK
* BITOP_WORD
* BITOP_TYPE
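
For illustration, the definitions as introduced by this patch in xen/include/xen/bitops.h,
together with generic_test_bit() which uses them (bit 37, for example, maps to word
37 / 32 == 1 and mask 1u << 5):

    #define BITOP_BITS_PER_WORD 32
    typedef uint32_t bitop_uint_t;

    #define BITOP_MASK(nr)  ((bitop_uint_t)1 << ((nr) % BITOP_BITS_PER_WORD))

    #define BITOP_WORD(nr)  ((nr) / BITOP_BITS_PER_WORD)

    /* e.g. nr == 37: BITOP_WORD(37) == 1, BITOP_MASK(37) == 1u << 5 */
    static always_inline bool generic_test_bit(int nr, const volatile void *addr)
    {
        bitop_uint_t mask = BITOP_MASK(nr);
        const volatile bitop_uint_t *p =
            (const volatile bitop_uint_t *)addr + BITOP_WORD(nr);

        return (*p & mask);
    }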

BITS_PER_BYTE was moved to xen/bitops.h as BITS_PER_BYTE is the same
across architectures.

The following approach was chosen for generic*() and arch*() bit
operation functions:
If the bit operation function that is going to be generic starts
with the prefix "__", then the corresponding generic/arch function
will also contain the "__" prefix. For example:
 * test_bit() will be defined using arch_test_bit() and
   generic_test_bit().
 * __test_and_set_bit() will be defined using
   arch__test_and_set_bit() and generic__test_and_set_bit().
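
For illustration, the v11 wrapper for __test_and_set_bit() (as quoted in the review
below) dispatches to the arch-specific or generic helper like this:

    static always_inline bool
    __test_and_set_bit(int nr, volatile void *addr)
    {
    #ifndef arch__test_and_set_bit
    #define arch__test_and_set_bit generic__test_and_set_bit
    #endif

        return arch__test_and_set_bit(nr, addr);
    }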

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
 Reviewed-by: Jan Beulich jbeulich@suse.com? Jan gave his R-by for the previous
 version of the patch, but some changes were made, so I wasn't sure if I could
 use the R-by in this patch version.


 The current patch can be merged without waiting for Andrew's patch series as
 it was rebased on top of staging.
---
Changes in V11:
 - fix indentation in the generic_test_bit() function.
 - move definition of BITS_PER_BYTE from <arch>/bitops.h to xen/bitops.h
 - drop the changes in arm64/livepatch.c.
 - update the comments on top of functions: generic__test_and_set_bit(),
   generic__test_and_clear_bit(), generic__test_and_change_bit(),
   generic_test_bit().
 - Update footer after Signed-off section.
 - Rebase the patch on top of the staging branch, so it can be merged once the
   necessary approvals are given.
 - Update the commit message.
---
Changes in V10:
 - update the commit message ( re-order paragraphs and add an explanation of the "__" prefix usage in
   bit operation function names )
 - add parentheses around the whole expression of the bitop_bad_size() macro.
 - move the bitop_bad_size() macro above asm/bitops.h as it is not arch-specific anymore and there is
   no need to override it.
 - drop the check_bitop_size() macro and use "if ( bitop_bad_size(addr) ) __bitop_bad_size();" directly
   where it is needed.
 - in <xen/bitops.h> use 'int' as the first parameter for __test_and_*(), generic__test_and_*() to be
   consistent with how the mentioned functions were declared in the original per-arch functions.
 - add 'const' to the p variable in generic_test_bit().
 - move the definitions of BITOP_BITS_PER_WORD and bitop_uint_t to xen/bitops.h as we no longer allow
   arch overrides of these definitions.
---
Changes in V9:
  - move up xen/bitops.h in ppc/asm/page.h.
  - update the definition of arch_check_bitop_size
    and drop the corresponding macro from x86/asm/bitops.h.
  - drop parentheses in generic__test_and_set_bit() for definition of
    local variable p.
  - fix indentation inside #ifndef BITOP_TYPE...#endif
  - update the commit message.
---
 Changes in V8:
  - drop __pure for the function which uses volatile.
  - drop unnecessary () in generic__test_and_change_bit() for addr casting.
  - update prototype of generic_test_bit() and test_bit(): now it returns bool
    instead of int.
  - update generic_test_bit() to use BITOP_MASK().
  - Deal with fls{l} changes: they should be in the patch which introduces generic fls{l}.
  - add a footer after Signed-off-by explaining the dependency on an uncommitted patch.
  - abstract bitop_size().
  - move BITOP_TYPE define to <xen/types.h>.
---
 Changes in V7:
  - move everything to xen/bitops.h to follow the same approach for all generic
    bit ops.
  - put together BITOP_BITS_PER_WORD and bitops_uint_t.
  - make BITOP_MASK more generic.
  - drop #ifdef ... #endif around BITOP_MASK, BITOP_WORD as they are generic
    enough.
  - drop "_" for generic__{test_and_set_bit,...}().
  - drop " != 0" for functions which return bool.
  - add volatile during the cast for generic__{...}().
  - update the commit message.
  - update arch related code to follow the proposed generic approach.
---
 Changes in V6:
  - Nothing changed ( only rebase )
---
 Changes in V5:
   - new patch
---
 xen/arch/arm/include/asm/bitops.h |  69 -----------
 xen/arch/ppc/include/asm/bitops.h |  54 ---------
 xen/arch/ppc/include/asm/page.h   |   2 +-
 xen/arch/ppc/mm-radix.c           |   2 +-
 xen/arch/x86/include/asm/bitops.h |  31 ++---
 xen/include/xen/bitops.h          | 185 ++++++++++++++++++++++++++++++
 6 files changed, 196 insertions(+), 147 deletions(-)

Comments

Jan Beulich May 28, 2024, 6:20 a.m. UTC | #1
On 24.05.2024 13:08, Oleksii Kurochko wrote:
> The following generic functions were introduced:
> * test_bit
> * generic__test_and_set_bit
> * generic__test_and_clear_bit
> * generic__test_and_change_bit
> 
> These functions and macros can be useful for architectures
> that don't have corresponding arch-specific instructions.
> 
> Also, the patch introduces the following generics which are
> used by the functions mentioned above:
> * BITOP_BITS_PER_WORD
> * BITOP_MASK
> * BITOP_WORD
> * BITOP_TYPE
> 
> BITS_PER_BYTE was moved to xen/bitops.h as BITS_PER_BYTE is the same
> across architectures.
> 
> The following approach was chosen for generic*() and arch*() bit
> operation functions:
> If the bit operation function that is going to be generic starts
> with the prefix "__", then the corresponding generic/arch function
> will also contain the "__" prefix. For example:
>  * test_bit() will be defined using arch_test_bit() and
>    generic_test_bit().
>  * __test_and_set_bit() will be defined using
>    arch__test_and_set_bit() and generic__test_and_set_bit().
> 
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> ---
>  Reviewed-by: Jan Beulich jbeulich@suse.com? Jan gave his R-by for the previous
>  version of the patch, but some changes were done, so I wasn't sure if I could
>  use the R-by in this patch version.

That really depends on the nature of the changes. Seeing the pretty
long list below, I think it was appropriate to drop the R-b.

> ---
> Changes in V11:
>  - fix identation in generic_test_bit() function.
>  - move definition of BITS_PER_BYTE from <arch>/bitops.h to xen/bitops.h

I realize you were asked to do this, but I'm not overly happy with it.
BITS_PER_BYTE is far more similar to BITS_PER_LONG than to
BITOP_BITS_PER_WORD. Plus in any event that change is entirely unrelated
here.

>  - drop the changes in arm64/livepatch.c.
>  - update the the comments on top of functions: generic__test_and_set_bit(),
>    generic__test_and_clear_bit(),  generic__test_and_change_bit(),
>    generic_test_bit().
>  - Update footer after Signed-off section.
>  - Rebase the patch on top of staging branch, so it can be merged when necessary
>    approves will be given.
>  - Update the commit message.
> ---
>  xen/arch/arm/include/asm/bitops.h |  69 -----------
>  xen/arch/ppc/include/asm/bitops.h |  54 ---------
>  xen/arch/ppc/include/asm/page.h   |   2 +-
>  xen/arch/ppc/mm-radix.c           |   2 +-
>  xen/arch/x86/include/asm/bitops.h |  31 ++---
>  xen/include/xen/bitops.h          | 185 ++++++++++++++++++++++++++++++
>  6 files changed, 196 insertions(+), 147 deletions(-)
> 
> --- a/xen/include/xen/bitops.h
> +++ b/xen/include/xen/bitops.h
> @@ -103,8 +103,193 @@ static inline int generic_flsl(unsigned long x)
>   * Include this here because some architectures need generic_ffs/fls in
>   * scope
>   */
> +
> +#define BITOP_BITS_PER_WORD 32
> +typedef uint32_t bitop_uint_t;
> +
> +#define BITOP_MASK(nr)  ((bitop_uint_t)1 << ((nr) % BITOP_BITS_PER_WORD))
> +
> +#define BITOP_WORD(nr)  ((nr) / BITOP_BITS_PER_WORD)
> +
> +#define BITS_PER_BYTE 8

This, if to be moved at all, very clearly doesn't belong here. As much
as it's unrelated to this change, it's also unrelated to bitops as a
whole.

> +extern void __bitop_bad_size(void);
> +
> +#define bitop_bad_size(addr) (sizeof(*(addr)) < sizeof(bitop_uint_t))
> +
> +/* --------------------- Please tidy above here --------------------- */
> +
>  #include <asm/bitops.h>
>  
> +/**
> + * generic__test_and_set_bit - Set a bit and return its old value
> + * @nr: Bit to set
> + * @addr: Address to count from
> + *
> + * This operation is non-atomic and can be reordered.
> + * If two examples of this operation race, one can appear to succeed

Why "examples"? Do you mean "instances"? It's further unclear what
"succeed" and "fail" here mean. The function after all has two
purposes: Checking and setting the specified bit. Therefore I think
you mean "succeed in updating the bit in memory", yet then it's
still unclear what the "appear" here means: The caller has no
indication of success/failure. Therefore I think "appear to" also
wants dropping.

> + * but actually fail.  You must protect multiple accesses with a lock.

That's not "must". An often better alternative is to use the atomic
forms instead. "Multiple" is imprecise, too: "Potentially racy" or
some such would come closer.

Also did Julien(?) really ask that these comments be reproduced on
both the functions supposed to be used throughout the codebase _and_
the generic_*() ones (being merely internal helpers/fallbacks)?

> +/**
> + * generic_test_bit - Determine whether a bit is set
> + * @nr: bit number to test
> + * @addr: Address to start counting from
> + *
> + * This operation is non-atomic and can be reordered.
> + * If two examples of this operation race, one can appear to succeed
> + * but actually fail.  You must protect multiple accesses with a lock.
> + */

You got carried away updating comments - there's no raciness for
simple test_bit(). As is also expressed by its name not having those
double underscores that the others have.

> +static always_inline bool generic_test_bit(int nr, const volatile void *addr)
> +{
> +    bitop_uint_t mask = BITOP_MASK(nr);
> +    const volatile bitop_uint_t *p =
> +        (const volatile bitop_uint_t *)addr + BITOP_WORD(nr);
> +
> +    return (*p & mask);
> +}
> +
> +/**
> + * __test_and_set_bit - Set a bit and return its old value
> + * @nr: Bit to set
> + * @addr: Address to count from
> + *
> + * This operation is non-atomic and can be reordered.
> + * If two examples of this operation race, one can appear to succeed
> + * but actually fail.  You must protect multiple accesses with a lock.
> + */
> +static always_inline bool
> +__test_and_set_bit(int nr, volatile void *addr)
> +{
> +#ifndef arch__test_and_set_bit
> +#define arch__test_and_set_bit generic__test_and_set_bit
> +#endif
> +
> +    return arch__test_and_set_bit(nr, addr);
> +}

See Julien's comments on the earlier version as well as what Andrew has
now done for ffs()/fls(), avoiding the largely pointless fallback
#define.

Jan
Oleksii Kurochko May 28, 2024, 8:37 a.m. UTC | #2
On Tue, 2024-05-28 at 08:20 +0200, Jan Beulich wrote:
> On 24.05.2024 13:08, Oleksii Kurochko wrote:
> > The following generic functions were introduced:
> > * test_bit
> > * generic__test_and_set_bit
> > * generic__test_and_clear_bit
> > * generic__test_and_change_bit
> > 
> > These functions and macros can be useful for architectures
> > that don't have corresponding arch-specific instructions.
> > 
> > Also, the patch introduces the following generics which are
> > used by the functions mentioned above:
> > * BITOP_BITS_PER_WORD
> > * BITOP_MASK
> > * BITOP_WORD
> > * BITOP_TYPE
> > 
> > BITS_PER_BYTE was moved to xen/bitops.h as BITS_PER_BYTE is the
> > same
> > across architectures.
> > 
> > The following approach was chosen for generic*() and arch*() bit
> > operation functions:
> > If the bit operation function that is going to be generic starts
> > with the prefix "__", then the corresponding generic/arch function
> > will also contain the "__" prefix. For example:
> >  * test_bit() will be defined using arch_test_bit() and
> >    generic_test_bit().
> >  * __test_and_set_bit() will be defined using
> >    arch__test_and_set_bit() and generic__test_and_set_bit().
> > 
> > Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> > ---
> >  Reviewed-by: Jan Beulich jbeulich@suse.com? Jan gave his R-by for
> > the previous
> >  version of the patch, but some changes were done, so I wasn't sure
> > if I could
> >  use the R-by in this patch version.
> 
> That really depends on the nature of the changes. Seeing the pretty
> long list below, I think it was appropriate to drop the R-b.
> 
> > ---
> > Changes in V11:
> >  - fix identation in generic_test_bit() function.
> >  - move definition of BITS_PER_BYTE from <arch>/bitops.h to
> > xen/bitops.h
> 
> I realize you were asked to do this, but I'm not overly happy with
> it.
> BITS_PER_BYTE is far more similar to BITS_PER_LONG than to
> BITOP_BITS_PER_WORD. Plus in any event that change is entirely
> unrelated
> here.
So where should it be introduced then? x86 introduces it in config.h,
Arm in asm/bitops.h.
I am okay with reverting this change and making it a separate patch.

> 
> >  - drop the changes in arm64/livepatch.c.
> >  - update the the comments on top of functions:
> > generic__test_and_set_bit(),
> >    generic__test_and_clear_bit(),  generic__test_and_change_bit(),
> >    generic_test_bit().
> >  - Update footer after Signed-off section.
> >  - Rebase the patch on top of staging branch, so it can be merged
> > when necessary
> >    approves will be given.
> >  - Update the commit message.
> > ---
> >  xen/arch/arm/include/asm/bitops.h |  69 -----------
> >  xen/arch/ppc/include/asm/bitops.h |  54 ---------
> >  xen/arch/ppc/include/asm/page.h   |   2 +-
> >  xen/arch/ppc/mm-radix.c           |   2 +-
> >  xen/arch/x86/include/asm/bitops.h |  31 ++---
> >  xen/include/xen/bitops.h          | 185
> > ++++++++++++++++++++++++++++++
> >  6 files changed, 196 insertions(+), 147 deletions(-)
> > 
> > --- a/xen/include/xen/bitops.h
> > +++ b/xen/include/xen/bitops.h
> > @@ -103,8 +103,193 @@ static inline int generic_flsl(unsigned long
> > x)
> >   * Include this here because some architectures need
> > generic_ffs/fls in
> >   * scope
> >   */
> > +
> > +#define BITOP_BITS_PER_WORD 32
> > +typedef uint32_t bitop_uint_t;
> > +
> > +#define BITOP_MASK(nr)  ((bitop_uint_t)1 << ((nr) %
> > BITOP_BITS_PER_WORD))
> > +
> > +#define BITOP_WORD(nr)  ((nr) / BITOP_BITS_PER_WORD)
> > +
> > +#define BITS_PER_BYTE 8
> 
> This, if to be moved at all, very clearly doesn't belong here. As
> much
> as it's unrelated to this change, it's also unrelated to bitops as a
> whole.
> 
> > +extern void __bitop_bad_size(void);
> > +
> > +#define bitop_bad_size(addr) (sizeof(*(addr)) <
> > sizeof(bitop_uint_t))
> > +
> > +/* --------------------- Please tidy above here ------------------
> > --- */
> > +
> >  #include <asm/bitops.h>
> >  
> > +/**
> > + * generic__test_and_set_bit - Set a bit and return its old value
> > + * @nr: Bit to set
> > + * @addr: Address to count from
> > + *
> > + * This operation is non-atomic and can be reordered.
> > + * If two examples of this operation race, one can appear to
> > succeed
> 
> Why "examples"? Do you mean "instances"? It's further unclear what
> "succeed" and "fail" here mean. The function after all has two
> purposes: Checking and setting the specified bit. Therefore I think
> you mean "succeed in updating the bit in memory", yet then it's
> still unclear what the "appear" here means: The caller has no
> indication of success/failure. Therefore I think "appear to" also
> wants dropping.
> 
> > + * but actually fail.  You must protect multiple accesses with a
> > lock.
> 
> That's not "must". An often better alternative is to use the atomic
> forms instead. "Multiple" is imprecise, too: "Potentially racy" or
> some such would come closer.
I think we can go only with:
 " This operation is non-atomic and can be reordered."
and drop:
 " If two examples of this operation race, one can appear to ... "

> 
> Also did Julien(?) really ask that these comments be reproduced on
> both the functions supposed to be used throughout the codebase _and_
> the generic_*() ones (being merely internal helpers/fallbacks)?
At least, that is how I understood it.

> 
> > +/**
> > + * generic_test_bit - Determine whether a bit is set
> > + * @nr: bit number to test
> > + * @addr: Address to start counting from
> > + *
> > + * This operation is non-atomic and can be reordered.
> > + * If two examples of this operation race, one can appear to
> > succeed
> > + * but actually fail.  You must protect multiple accesses with a
> > lock.
> > + */
> 
> You got carried away updating comments - there's no raciness for
> simple test_bit(). As is also expressed by its name not having those
> double underscores that the others have.
Then it is true for every function in this header. Based on the naming
one can conclude whether it is atomic/non-atomic and whether it can be
reordered. So the comments don't seem very useful.

~ Oleksii

> 
> > +static always_inline bool generic_test_bit(int nr, const volatile
> > void *addr)
> > +{
> > +    bitop_uint_t mask = BITOP_MASK(nr);
> > +    const volatile bitop_uint_t *p =
> > +        (const volatile bitop_uint_t *)addr + BITOP_WORD(nr);
> > +
> > +    return (*p & mask);
> > +}
> > +
> > +/**
> > + * __test_and_set_bit - Set a bit and return its old value
> > + * @nr: Bit to set
> > + * @addr: Address to count from
> > + *
> > + * This operation is non-atomic and can be reordered.
> > + * If two examples of this operation race, one can appear to
> > succeed
> > + * but actually fail.  You must protect multiple accesses with a
> > lock.
> > + */
> > +static always_inline bool
> > +__test_and_set_bit(int nr, volatile void *addr)
> > +{
> > +#ifndef arch__test_and_set_bit
> > +#define arch__test_and_set_bit generic__test_and_set_bit
> > +#endif
> > +
> > +    return arch__test_and_set_bit(nr, addr);
> > +}
> 
> See Julien's comments on the earlier version as well as what Andrew
> has
> now done for ffs()/fls(), avoiding the largely pointless fallback
> #define.
> 
> Jan
Julien Grall May 28, 2024, 8:53 a.m. UTC | #3
Hi Oleksii,

On 28/05/2024 09:37, Oleksii K. wrote:
> On Tue, 2024-05-28 at 08:20 +0200, Jan Beulich wrote:
>> On 24.05.2024 13:08, Oleksii Kurochko wrote:
>>> The following generic functions were introduced:
>>> * test_bit
>>> * generic__test_and_set_bit
>>> * generic__test_and_clear_bit
>>> * generic__test_and_change_bit
>>>
>>> These functions and macros can be useful for architectures
>>> that don't have corresponding arch-specific instructions.
>>>
>>> Also, the patch introduces the following generics which are
>>> used by the functions mentioned above:
>>> * BITOP_BITS_PER_WORD
>>> * BITOP_MASK
>>> * BITOP_WORD
>>> * BITOP_TYPE
>>>
>>> BITS_PER_BYTE was moved to xen/bitops.h as BITS_PER_BYTE is the
>>> same
>>> across architectures.
>>>
>>> The following approach was chosen for generic*() and arch*() bit
>>> operation functions:
>>> If the bit operation function that is going to be generic starts
>>> with the prefix "__", then the corresponding generic/arch function
>>> will also contain the "__" prefix. For example:
>>>   * test_bit() will be defined using arch_test_bit() and
>>>     generic_test_bit().
>>>   * __test_and_set_bit() will be defined using
>>>     arch__test_and_set_bit() and generic__test_and_set_bit().
>>>
>>> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
>>> ---
>>>   Reviewed-by: Jan Beulich jbeulich@suse.com? Jan gave his R-by for
>>> the previous
>>>   version of the patch, but some changes were done, so I wasn't sure
>>> if I could
>>>   use the R-by in this patch version.
>>
>> That really depends on the nature of the changes. Seeing the pretty
>> long list below, I think it was appropriate to drop the R-b.
>>
>>> ---
>>> Changes in V11:
>>>   - fix identation in generic_test_bit() function.
>>>   - move definition of BITS_PER_BYTE from <arch>/bitops.h to
>>> xen/bitops.h
>>
>> I realize you were asked to do this, but I'm not overly happy with
>> it.
>> BITS_PER_BYTE is far more similar to BITS_PER_LONG than to
>> BITOP_BITS_PER_WORD. Plus in any event that change is entirely
>> unrelated
>> here.
> So where then it should be introduced? x86 introduces that in config.h,
> Arm in asm/bitops.h.

I would be fine if it is defined in config.h for Arm.

> I am okay to revert this change and make a separate patch.

[...]


>> Also did Julien(?) really ask that these comments be reproduced on
>> both the functions supposed to be used throughout the codebase _and_
>> the generic_*() ones (being merely internal helpers/fallbacks)?
> At least, I understood that in this way.

I suggested duplicating it. But I also proposed an alternative: to move the
comment:

"I think this comment should be duplicated (or moved to) on top of"

I don't have a strong preference which one is used.

> 
>>
>>> +/**
>>> + * generic_test_bit - Determine whether a bit is set
>>> + * @nr: bit number to test
>>> + * @addr: Address to start counting from
>>> + *
>>> + * This operation is non-atomic and can be reordered.
>>> + * If two examples of this operation race, one can appear to
>>> succeed
>>> + * but actually fail.  You must protect multiple accesses with a
>>> lock.
>>> + */
>>
>> You got carried away updating comments - there's no raciness for
>> simple test_bit(). As is also expressed by its name not having those
>> double underscores that the others have.
> Then it is true for every function in this header. Based on the naming
> the conclusion can be done if it is atomic/npn-atomic and can/can't be
> reordered. 

So let me start with this: my only request is to keep the existing
comments as you move them. It looks like there were none for test_bit() before.

 > So the comments aren't seem very useful.

The comments *are* useful. We had several instances where non-Arm folks 
thought all the operations were atomic on Arm as well. And the usual 
suggestion is to add barriers in the Arm version which is a no-go.

Cheers,
Jan Beulich May 28, 2024, 9:39 a.m. UTC | #4
On 28.05.2024 10:37, Oleksii K. wrote:
> On Tue, 2024-05-28 at 08:20 +0200, Jan Beulich wrote:
>> On 24.05.2024 13:08, Oleksii Kurochko wrote:
>>> +/**
>>> + * generic_test_bit - Determine whether a bit is set
>>> + * @nr: bit number to test
>>> + * @addr: Address to start counting from
>>> + *
>>> + * This operation is non-atomic and can be reordered.
>>> + * If two examples of this operation race, one can appear to
>>> succeed
>>> + * but actually fail.  You must protect multiple accesses with a
>>> lock.
>>> + */
>>
>> You got carried away updating comments - there's no raciness for
>> simple test_bit(). As is also expressed by its name not having those
>> double underscores that the others have.
> Then it is true for every function in this header. Based on the naming
> the conclusion can be done if it is atomic/npn-atomic and can/can't be
> reordered. So the comments aren't seem very useful.

I disagree - test_bit() is different, in not being a read-modify-write
operation.

Jan
Oleksii Kurochko May 29, 2024, 7:50 a.m. UTC | #5
On Tue, 2024-05-28 at 09:53 +0100, Julien Grall wrote:
> > > > +/**
> > > > + * generic_test_bit - Determine whether a bit is set
> > > > + * @nr: bit number to test
> > > > + * @addr: Address to start counting from
> > > > + *
> > > > + * This operation is non-atomic and can be reordered.
> > > > + * If two examples of this operation race, one can appear to
> > > > succeed
> > > > + * but actually fail.  You must protect multiple accesses with
> > > > a
> > > > lock.
> > > > + */
> > > 
> > > You got carried away updating comments - there's no raciness for
> > > simple test_bit(). As is also expressed by its name not having
> > > those
> > > double underscores that the others have.
> > Then it is true for every function in this header. Based on the
> > naming
> > the conclusion can be done if it is atomic/npn-atomic and can/can't
> > be
> > reordered. 
> 
> So let me start with that my only request is to keep the existing 
> comments as you move it. It looks like there were none of test_bit()
> before.
Just to clarify that I understand correctly.

Do we need any comment above the generic_*() functions? Given that they
are implemented in a generic way, they will always be "non-atomic and can
be reordered".

Do you find the following comment useful?

" * If two examples of this operation race, one can appear to succeed
 * but actually fail.  You must protect multiple accesses with a lock."

It seems to me that it can be dropped, as basically "non-atomic and can be
reordered." already means that.

~ Oleksii
Jan Beulich May 29, 2024, 8:36 a.m. UTC | #6
On 29.05.2024 09:50, Oleksii K. wrote:
> On Tue, 2024-05-28 at 09:53 +0100, Julien Grall wrote:
>>>>> +/**
>>>>> + * generic_test_bit - Determine whether a bit is set
>>>>> + * @nr: bit number to test
>>>>> + * @addr: Address to start counting from
>>>>> + *
>>>>> + * This operation is non-atomic and can be reordered.
>>>>> + * If two examples of this operation race, one can appear to
>>>>> succeed
>>>>> + * but actually fail.  You must protect multiple accesses with
>>>>> a
>>>>> lock.
>>>>> + */
>>>>
>>>> You got carried away updating comments - there's no raciness for
>>>> simple test_bit(). As is also expressed by its name not having
>>>> those
>>>> double underscores that the others have.
>>> Then it is true for every function in this header. Based on the
>>> naming
>>> the conclusion can be done if it is atomic/npn-atomic and can/can't
>>> be
>>> reordered. 
>>
>> So let me start with that my only request is to keep the existing 
>> comments as you move it. It looks like there were none of test_bit()
>> before.
> Just to clarify that I understand correctly.
> 
> Do we need any comment above functions generic_*()? Based on that they
> are implemented in generic way they will be always "non-atomic and can
> be reordered.".

I indicated before that I think reproducing the same comments __test_and_*
already have also for generic_* isn't overly useful. If someone insisted
on them being there as well, I could live with that, though.

> Do you find the following comment useful?
> 
> " * If two examples of this operation race, one can appear to succeed
>  * but actually fail.  You must protect multiple accesses with a lock."
> 
> It seems to me that it can dropped as basically "non-atomic and can be
> reordered." means that.

I agree, or else - as indicated before - the wording would need to further
change. Yet iirc you've added that in response to a comment from Julien,
so you'll primarily want his input as to the presence of something along
these lines.

Jan
Julien Grall May 29, 2024, 9:59 a.m. UTC | #7
Hi,

On 29/05/2024 09:36, Jan Beulich wrote:
> On 29.05.2024 09:50, Oleksii K. wrote:
>> On Tue, 2024-05-28 at 09:53 +0100, Julien Grall wrote:
>>>>>> +/**
>>>>>> + * generic_test_bit - Determine whether a bit is set
>>>>>> + * @nr: bit number to test
>>>>>> + * @addr: Address to start counting from
>>>>>> + *
>>>>>> + * This operation is non-atomic and can be reordered.
>>>>>> + * If two examples of this operation race, one can appear to
>>>>>> succeed
>>>>>> + * but actually fail.  You must protect multiple accesses with
>>>>>> a
>>>>>> lock.
>>>>>> + */
>>>>>
>>>>> You got carried away updating comments - there's no raciness for
>>>>> simple test_bit(). As is also expressed by its name not having
>>>>> those
>>>>> double underscores that the others have.
>>>> Then it is true for every function in this header. Based on the
>>>> naming
>>>> the conclusion can be done if it is atomic/npn-atomic and can/can't
>>>> be
>>>> reordered.
>>>
>>> So let me start with that my only request is to keep the existing
>>> comments as you move it. It looks like there were none of test_bit()
>>> before.
>> Just to clarify that I understand correctly.
>>
>> Do we need any comment above functions generic_*()? Based on that they
>> are implemented in generic way they will be always "non-atomic and can
>> be reordered.".
> 
> I indicated before that I think reproducing the same comments __test_and_*
> already have also for generic_* isn't overly useful. If someone insisted
> on them being there as well, I could live with that, though.

Would you be ok if the comment is only on top of the __test_and_* 
version? (So no comments on top of the generic_*)

> 
>> Do you find the following comment useful?
>>
>> " * If two examples of this operation race, one can appear to succeed
>>   * but actually fail.  You must protect multiple accesses with a lock."
>>
>> It seems to me that it can dropped as basically "non-atomic and can be
>> reordered." means that.
> 
> I agree, or else - as indicated before - the wording would need to further
> change. Yet iirc you've added that in response to a comment from Julien,
> so you'll primarily want his input as to the presence of something along
> these lines.

I didn't realise this was an existing comment. I think the suggestion is 
a little bit odd because you could use the atomic version of the helper.

Looking at Linux, the second sentence was dropped. But not the first 
one. I would suggest to do the same. IOW keep:

"
If two examples of this operation race, one can appear to succeed but 
actually fail.
"
Jan Beulich May 29, 2024, 10:06 a.m. UTC | #8
On 29.05.2024 11:59, Julien Grall wrote:
> Hi,
> 
> On 29/05/2024 09:36, Jan Beulich wrote:
>> On 29.05.2024 09:50, Oleksii K. wrote:
>>> On Tue, 2024-05-28 at 09:53 +0100, Julien Grall wrote:
>>>>>>> +/**
>>>>>>> + * generic_test_bit - Determine whether a bit is set
>>>>>>> + * @nr: bit number to test
>>>>>>> + * @addr: Address to start counting from
>>>>>>> + *
>>>>>>> + * This operation is non-atomic and can be reordered.
>>>>>>> + * If two examples of this operation race, one can appear to
>>>>>>> succeed
>>>>>>> + * but actually fail.  You must protect multiple accesses with
>>>>>>> a
>>>>>>> lock.
>>>>>>> + */
>>>>>>
>>>>>> You got carried away updating comments - there's no raciness for
>>>>>> simple test_bit(). As is also expressed by its name not having
>>>>>> those
>>>>>> double underscores that the others have.
>>>>> Then it is true for every function in this header. Based on the
>>>>> naming
>>>>> the conclusion can be done if it is atomic/npn-atomic and can/can't
>>>>> be
>>>>> reordered.
>>>>
>>>> So let me start with that my only request is to keep the existing
>>>> comments as you move it. It looks like there were none of test_bit()
>>>> before.
>>> Just to clarify that I understand correctly.
>>>
>>> Do we need any comment above functions generic_*()? Based on that they
>>> are implemented in generic way they will be always "non-atomic and can
>>> be reordered.".
>>
>> I indicated before that I think reproducing the same comments __test_and_*
>> already have also for generic_* isn't overly useful. If someone insisted
>> on them being there as well, I could live with that, though.
> 
> Would you be ok if the comment is only on top of the __test_and_* 
> version? (So no comments on top of the generic_*)

That's my preferred variant, actually. The alternative I would also be
okay-ish with is to have the comments also ahead of generic_*.

>>> Do you find the following comment useful?
>>>
>>> " * If two examples of this operation race, one can appear to succeed
>>>   * but actually fail.  You must protect multiple accesses with a lock."
>>>
>>> It seems to me that it can dropped as basically "non-atomic and can be
>>> reordered." means that.
>>
>> I agree, or else - as indicated before - the wording would need to further
>> change. Yet iirc you've added that in response to a comment from Julien,
>> so you'll primarily want his input as to the presence of something along
>> these lines.
> 
> I didn't realise this was an existing comment. I think the suggestion is 
> a little bit odd because you could use the atomic version of the helper.
> 
> Looking at Linux, the second sentence was dropped. But not the first 
> one. I would suggest to do the same. IOW keep:
> 
> "
> If two examples of this operation race, one can appear to succeed but 
> actually fail.
> "

As indicated, I'm okay with that being retained, but only in a form that
actually makes sense. I've explained before (to Oleksii) what I consider
wrong in this way of wording things.

Jan
Oleksii Kurochko May 29, 2024, 2:58 p.m. UTC | #9
On Wed, 2024-05-29 at 12:06 +0200, Jan Beulich wrote:
> On 29.05.2024 11:59, Julien Grall wrote:
> > Hi,
> > 
> > On 29/05/2024 09:36, Jan Beulich wrote:
> > > On 29.05.2024 09:50, Oleksii K. wrote:
> > > > On Tue, 2024-05-28 at 09:53 +0100, Julien Grall wrote:
> > > > > > > > +/**
> > > > > > > > + * generic_test_bit - Determine whether a bit is set
> > > > > > > > + * @nr: bit number to test
> > > > > > > > + * @addr: Address to start counting from
> > > > > > > > + *
> > > > > > > > + * This operation is non-atomic and can be reordered.
> > > > > > > > + * If two examples of this operation race, one can
> > > > > > > > appear to
> > > > > > > > succeed
> > > > > > > > + * but actually fail.  You must protect multiple
> > > > > > > > accesses with
> > > > > > > > a
> > > > > > > > lock.
> > > > > > > > + */
> > > > > > > 
> > > > > > > You got carried away updating comments - there's no
> > > > > > > raciness for
> > > > > > > simple test_bit(). As is also expressed by its name not
> > > > > > > having
> > > > > > > those
> > > > > > > double underscores that the others have.
> > > > > > Then it is true for every function in this header. Based on
> > > > > > the
> > > > > > naming
> > > > > > the conclusion can be done if it is atomic/npn-atomic and
> > > > > > can/can't
> > > > > > be
> > > > > > reordered.
> > > > > 
> > > > > So let me start with that my only request is to keep the
> > > > > existing
> > > > > comments as you move it. It looks like there were none of
> > > > > test_bit()
> > > > > before.
> > > > Just to clarify that I understand correctly.
> > > > 
> > > > Do we need any comment above functions generic_*()? Based on
> > > > that they
> > > > are implemented in generic way they will be always "non-atomic
> > > > and can
> > > > be reordered.".
> > > 
> > > I indicated before that I think reproducing the same comments
> > > __test_and_*
> > > already have also for generic_* isn't overly useful. If someone
> > > insisted
> > > on them being there as well, I could live with that, though.
> > 
> > Would you be ok if the comment is only on top of the __test_and_* 
> > version? (So no comments on top of the generic_*)
> 
> That's my preferred variant, actually. The alternative I would also
> be
> okay-ish with is to have the comments also ahead of generic_*.
> 
> > > > Do you find the following comment useful?
> > > > 
> > > > " * If two examples of this operation race, one can appear to
> > > > succeed
> > > >   * but actually fail.  You must protect multiple accesses with
> > > > a lock."
> > > > 
> > > > It seems to me that it can dropped as basically "non-atomic and
> > > > can be
> > > > reordered." means that.
> > > 
> > > I agree, or else - as indicated before - the wording would need
> > > to further
> > > change. Yet iirc you've added that in response to a comment from
> > > Julien,
> > > so you'll primarily want his input as to the presence of
> > > something along
> > > these lines.
> > 
> > I didn't realise this was an existing comment. I think the
> > suggestion is 
> > a little bit odd because you could use the atomic version of the
> > helper.
> > 
> > Looking at Linux, the second sentence was dropped. But not the
> > first 
> > one. I would suggest to do the same. IOW keep:
> > 
> > "
> > If two examples of this operation race, one can appear to succeed
> > but 
> > actually fail.
> > "
> 
> As indicated, I'm okay with that being retained, but only in a form
> that
> actually makes sense. I've explained before (to Oleksii) what I
> consider
> wrong in this way of wording things.

Would it be suitable to rephrase it in the following way:
     * This operation is non-atomic and can be reordered.
   - * If two examples of this operation race, one can appear to succeed
   - * but actually fail.  You must protect multiple accesses with a lock.
   + * If two instances of this operation race, one may succeed in updating
   + * the bit in memory, but actually fail. It should be protected from
   + * potentially racy behavior.
     */
    static always_inline bool
    __test_and_set_bit(int nr, volatile void *addr)
   @@ -228,8 +191,9 @@ __test_and_set_bit(int nr, volatile void *addr)
     * @addr: Address to count from
     *
     * This operation is non-atomic and can be reordered.
   - * If two examples of this operation race, one can appear to succeed
   - * but actually fail.  You must protect multiple accesses with a lock.
   + * If two instances of this operation race, one may succeed in clearing
   + * the bit in memory, but actually fail. It should be protected from
   + * potentially racy behavior.
     */
    static always_inline bool
    __test_and_clear_bit(int nr, volatile void *addr)
   @@ -251,8 +215,9 @@ __test_and_clear_bit(int nr, volatile void *addr)
     * @addr: Address to count from
     *
     * This operation is non-atomic and can be reordered.
   - * If two examples of this operation race, one can appear to succeed
   - * but actually fail.  You must protect multiple accesses with a lock.
   + * If two instances of this operation race, one may succeed in changing
   + * the bit in memory, but actually fail. It should be protected from
   + * potentially racy behavior.
     */
    static always_inline bool
    __test_and_change_bit(int nr, volatile void *addr)
   @@ -274,8 +239,9 @@ __test_and_change_bit(int nr, volatile void *addr)
     * @addr: Address to start counting from
     *
     * This operation is non-atomic and can be reordered.
   - * If two examples of this operation race, one can appear to succeed
   - * but actually fail.  You must protect multiple accesses with a lock.
   + * If two instances of this operation race, one may succeed in testing
   + * the bit in memory, but actually fail. It should be protected from
   + * potentially racy behavior.
     */
    static always_inline bool test_bit(int nr, const volatile void *addr)

~ Oleksii
> 
> Jan
Jan Beulich May 29, 2024, 3:22 p.m. UTC | #10
On 29.05.2024 16:58, Oleksii K. wrote:
> On Wed, 2024-05-29 at 12:06 +0200, Jan Beulich wrote:
>> On 29.05.2024 11:59, Julien Grall wrote:
>>> I didn't realise this was an existing comment. I think the
>>> suggestion is 
>>> a little bit odd because you could use the atomic version of the
>>> helper.
>>>
>>> Looking at Linux, the second sentence was dropped. But not the
>>> first 
>>> one. I would suggest to do the same. IOW keep:
>>>
>>> "
>>> If two examples of this operation race, one can appear to succeed
>>> but 
>>> actually fail.
>>> "
>>
>> As indicated, I'm okay with that being retained, but only in a form
>> that
>> actually makes sense. I've explained before (to Oleksii) what I
>> consider
>> wrong in this way of wording things.
> 
> Would it be suitable to rephrase it in the following way:
>      * This operation is non-atomic and can be reordered.
>    - * If two examples of this operation race, one can appear to
>    succeed
>    - * but actually fail.  You must protect multiple accesses with a
>    lock.
>    + * If two instances of this operation race, one may succeed in
>    updating
>    + * the bit in memory, but actually fail. It should be protected
>    from
>    + * potentially racy behavior.
>      */
>     static always_inline bool
>     __test_and_set_bit(int nr, volatile void *addr)

You've lost the "appear to" ahead of "succeed". Yet even with the adjustment
I still don't see what the "appear to succeed" really is supposed to mean
here. The issue (imo) isn't with success or failure, but with the write of
one CPU potentially being immediately overwritten by another CPU, not
observing the bit change that the first CPU did. IOW both will succeed (or
appear to succeed), but one update may end up being lost. Maybe "..., both
may update memory with their view of the new value, not taking into account
the update the respectively other one did"? Or "..., both will appear to
succeed, but one of the updates may be lost"?

Jan
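
(For illustration only, not part of the patch: a minimal user-space sketch of the
lost-update interleaving described above, assuming two CPUs whose non-atomic
test-and-set operations on the same 32-bit word interleave.)

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t word = 0;

        /* Both CPUs load the word before either has stored its update. */
        uint32_t cpu0_old = word;        /* CPU0 reads 0x0, sees bit 0 clear */
        uint32_t cpu1_old = word;        /* CPU1 reads 0x0, sees bit 1 clear */

        word = cpu0_old | (1u << 0);     /* CPU0 stores 0x1 */
        word = cpu1_old | (1u << 1);     /* CPU1 stores 0x2 - CPU0's update is lost */

        /* Both updates "succeeded", yet the final value is 0x2 instead of 0x3. */
        printf("final word = %#x\n", word);

        return 0;
    }
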
Oleksii Kurochko May 29, 2024, 4:29 p.m. UTC | #11
On Wed, 2024-05-29 at 17:22 +0200, Jan Beulich wrote:
> On 29.05.2024 16:58, Oleksii K. wrote:
> > On Wed, 2024-05-29 at 12:06 +0200, Jan Beulich wrote:
> > > On 29.05.2024 11:59, Julien Grall wrote:
> > > > I didn't realise this was an existing comment. I think the
> > > > suggestion is 
> > > > a little bit odd because you could use the atomic version of
> > > > the
> > > > helper.
> > > > 
> > > > Looking at Linux, the second sentence was dropped. But not the
> > > > first 
> > > > one. I would suggest to do the same. IOW keep:
> > > > 
> > > > "
> > > > If two examples of this operation race, one can appear to
> > > > succeed
> > > > but 
> > > > actually fail.
> > > > "
> > > 
> > > As indicated, I'm okay with that being retained, but only in a
> > > form
> > > that
> > > actually makes sense. I've explained before (to Oleksii) what I
> > > consider
> > > wrong in this way of wording things.
> > 
> > Would it be suitable to rephrase it in the following way:
> >      * This operation is non-atomic and can be reordered.
> >    - * If two examples of this operation race, one can appear to
> >    succeed
> >    - * but actually fail.  You must protect multiple accesses with
> > a
> >    lock.
> >    + * If two instances of this operation race, one may succeed in
> >    updating
> >    + * the bit in memory, but actually fail. It should be protected
> >    from
> >    + * potentially racy behavior.
> >      */
> >     static always_inline bool
> >     __test_and_set_bit(int nr, volatile void *addr)
> 
> You've lost the "appear to" ahead of "succeed". Yet even with the
> adjustment
> I still don't see what the "appear to succeed" really is supposed to
> mean
> here. The issue (imo) isn't with success or failure, but with the
> write of
> one CPU potentially being immediately overwritten by another CPU, not
> observing the bit change that the first CPU did. IOW both will
> succeed (or
> appear to succeed), but one update may end up being lost. Maybe "...,
> both
> may update memory with their view of the new value, not taking into
> account
> the update the respectively other one did"? Or "..., both will appear
> to
> succeed, but one of the updates may be lost"?
For me, the first one is clearer.

Julien, would that be okay for you?

~ Oleksii
Oleksii Kurochko May 29, 2024, 4:36 p.m. UTC | #12
On Wed, 2024-05-29 at 18:29 +0200, Oleksii K. wrote:
> On Wed, 2024-05-29 at 17:22 +0200, Jan Beulich wrote:
> > On 29.05.2024 16:58, Oleksii K. wrote:
> > > On Wed, 2024-05-29 at 12:06 +0200, Jan Beulich wrote:
> > > > On 29.05.2024 11:59, Julien Grall wrote:
> > > > > I didn't realise this was an existing comment. I think the
> > > > > suggestion is 
> > > > > a little bit odd because you could use the atomic version of
> > > > > the
> > > > > helper.
> > > > > 
> > > > > Looking at Linux, the second sentence was dropped. But not
> > > > > the
> > > > > first 
> > > > > one. I would suggest to do the same. IOW keep:
> > > > > 
> > > > > "
> > > > > If two examples of this operation race, one can appear to
> > > > > succeed
> > > > > but 
> > > > > actually fail.
> > > > > "
> > > > 
> > > > As indicated, I'm okay with that being retained, but only in a
> > > > form
> > > > that
> > > > actually makes sense. I've explained before (to Oleksii) what I
> > > > consider
> > > > wrong in this way of wording things.
> > > 
> > > Would it be suitable to rephrase it in the following way:
> > >      * This operation is non-atomic and can be reordered.
> > >    - * If two examples of this operation race, one can appear to
> > >    succeed
> > >    - * but actually fail.  You must protect multiple accesses
> > > with
> > > a
> > >    lock.
> > >    + * If two instances of this operation race, one may succeed
> > > in
> > >    updating
> > >    + * the bit in memory, but actually fail. It should be
> > > protected
> > >    from
> > >    + * potentially racy behavior.
> > >      */
> > >     static always_inline bool
> > >     __test_and_set_bit(int nr, volatile void *addr)
> > 
> > You've lost the "appear to" ahead of "succeed". Yet even with the
> > adjustment
> > I still don't see what the "appear to succeed" really is supposed
> > to
> > mean
> > here. The issue (imo) isn't with success or failure, but with the
> > write of
> > one CPU potentially being immediately overwritten by another CPU,
> > not
> > observing the bit change that the first CPU did. IOW both will
> > succeed (or
> > appear to succeed), but one update may end up being lost. Maybe
> > "...,
> > both
> > may update memory with their view of the new value, not taking into
> > account
> > the update the respectively other one did"? Or "..., both will
> > appear
> > to
> > succeed, but one of the updates may be lost"?
> For me, the first one is clearer.
Is this part of the comment then needed for test_bit(), given that it is
only doing a read?

> Julien, would that be okay for you?

~ Oleksii
Julien Grall May 29, 2024, 5:15 p.m. UTC | #13
Hi,

On 29/05/2024 17:36, Oleksii K. wrote:
> On Wed, 2024-05-29 at 18:29 +0200, Oleksii K. wrote:
>> On Wed, 2024-05-29 at 17:22 +0200, Jan Beulich wrote:
>>> On 29.05.2024 16:58, Oleksii K. wrote:
>>>> On Wed, 2024-05-29 at 12:06 +0200, Jan Beulich wrote:
>>>>> On 29.05.2024 11:59, Julien Grall wrote:
>>>>>> I didn't realise this was an existing comment. I think the
>>>>>> suggestion is
>>>>>> a little bit odd because you could use the atomic version of
>>>>>> the
>>>>>> helper.
>>>>>>
>>>>>> Looking at Linux, the second sentence was dropped. But not
>>>>>> the
>>>>>> first
>>>>>> one. I would suggest to do the same. IOW keep:
>>>>>>
>>>>>> "
>>>>>> If two examples of this operation race, one can appear to
>>>>>> succeed
>>>>>> but
>>>>>> actually fail.
>>>>>> "
>>>>>
>>>>> As indicated, I'm okay with that being retained, but only in a
>>>>> form
>>>>> that
>>>>> actually makes sense. I've explained before (to Oleksii) what I
>>>>> consider
>>>>> wrong in this way of wording things.
>>>>
>>>> Would it be suitable to rephrase it in the following way:
>>>>       * This operation is non-atomic and can be reordered.
>>>>     - * If two examples of this operation race, one can appear to
>>>>     succeed
>>>>     - * but actually fail.  You must protect multiple accesses
>>>> with
>>>> a
>>>>     lock.
>>>>     + * If two instances of this operation race, one may succeed
>>>> in
>>>>     updating
>>>>     + * the bit in memory, but actually fail. It should be
>>>> protected
>>>>     from
>>>>     + * potentially racy behavior.
>>>>       */
>>>>      static always_inline bool
>>>>      __test_and_set_bit(int nr, volatile void *addr)
>>>
>>> You've lost the "appear to" ahead of "succeed". Yet even with the
>>> adjustment
>>> I still don't see what the "appear to succeed" really is supposed
>>> to
>>> mean
>>> here. The issue (imo) isn't with success or failure, but with the
>>> write of
>>> one CPU potentially being immediately overwritten by another CPU,
>>> not
>>> observing the bit change that the first CPU did. IOW both will
>>> succeed (or
>>> appear to succeed), but one update may end up being lost. Maybe
>>> "...,
>>> both
>>> may update memory with their view of the new value, not taking into
>>> account
>>> the update the respectively other one did"? Or "..., both will
>>> appear
>>> to
>>> succeed, but one of the updates may be lost"?
>> For me, the first one is clearer.

I actually find the second better because it explicitly spells out the
issue. I can live with the first one though.

> If then this part of the comment is needed for test_bit() as it is
> doing only reading?

I don't think so. As Jan said, test_bit() is not a read-modify-write 
operation.

Cheers,

Patch

diff --git a/xen/arch/arm/include/asm/bitops.h b/xen/arch/arm/include/asm/bitops.h
index ab030b6cb0..af38fbffdd 100644
--- a/xen/arch/arm/include/asm/bitops.h
+++ b/xen/arch/arm/include/asm/bitops.h
@@ -22,11 +22,6 @@ 
 #define __set_bit(n,p)            set_bit(n,p)
 #define __clear_bit(n,p)          clear_bit(n,p)
 
-#define BITOP_BITS_PER_WORD     32
-#define BITOP_MASK(nr)          (1UL << ((nr) % BITOP_BITS_PER_WORD))
-#define BITOP_WORD(nr)          ((nr) / BITOP_BITS_PER_WORD)
-#define BITS_PER_BYTE           8
-
 #define ADDR (*(volatile int *) addr)
 #define CONST_ADDR (*(const volatile int *) addr)
 
@@ -76,70 +71,6 @@  bool test_and_change_bit_timeout(int nr, volatile void *p,
 bool clear_mask16_timeout(uint16_t mask, volatile void *p,
                           unsigned int max_try);
 
-/**
- * __test_and_set_bit - Set a bit and return its old value
- * @nr: Bit to set
- * @addr: Address to count from
- *
- * This operation is non-atomic and can be reordered.
- * If two examples of this operation race, one can appear to succeed
- * but actually fail.  You must protect multiple accesses with a lock.
- */
-static inline int __test_and_set_bit(int nr, volatile void *addr)
-{
-        unsigned int mask = BITOP_MASK(nr);
-        volatile unsigned int *p =
-                ((volatile unsigned int *)addr) + BITOP_WORD(nr);
-        unsigned int old = *p;
-
-        *p = old | mask;
-        return (old & mask) != 0;
-}
-
-/**
- * __test_and_clear_bit - Clear a bit and return its old value
- * @nr: Bit to clear
- * @addr: Address to count from
- *
- * This operation is non-atomic and can be reordered.
- * If two examples of this operation race, one can appear to succeed
- * but actually fail.  You must protect multiple accesses with a lock.
- */
-static inline int __test_and_clear_bit(int nr, volatile void *addr)
-{
-        unsigned int mask = BITOP_MASK(nr);
-        volatile unsigned int *p =
-                ((volatile unsigned int *)addr) + BITOP_WORD(nr);
-        unsigned int old = *p;
-
-        *p = old & ~mask;
-        return (old & mask) != 0;
-}
-
-/* WARNING: non atomic and it can be reordered! */
-static inline int __test_and_change_bit(int nr,
-                                            volatile void *addr)
-{
-        unsigned int mask = BITOP_MASK(nr);
-        volatile unsigned int *p =
-                ((volatile unsigned int *)addr) + BITOP_WORD(nr);
-        unsigned int old = *p;
-
-        *p = old ^ mask;
-        return (old & mask) != 0;
-}
-
-/**
- * test_bit - Determine whether a bit is set
- * @nr: bit number to test
- * @addr: Address to start counting from
- */
-static inline int test_bit(int nr, const volatile void *addr)
-{
-        const volatile unsigned int *p = (const volatile unsigned int *)addr;
-        return 1UL & (p[BITOP_WORD(nr)] >> (nr & (BITOP_BITS_PER_WORD-1)));
-}
-
 /*
  * On ARMv5 and above those functions can be implemented around
  * the clz instruction for much better code efficiency.
diff --git a/xen/arch/ppc/include/asm/bitops.h b/xen/arch/ppc/include/asm/bitops.h
index bea655796d..71bbb5f16a 100644
--- a/xen/arch/ppc/include/asm/bitops.h
+++ b/xen/arch/ppc/include/asm/bitops.h
@@ -15,11 +15,6 @@ 
 #define __set_bit(n, p)         set_bit(n, p)
 #define __clear_bit(n, p)       clear_bit(n, p)
 
-#define BITOP_BITS_PER_WORD     32
-#define BITOP_MASK(nr)          (1U << ((nr) % BITOP_BITS_PER_WORD))
-#define BITOP_WORD(nr)          ((nr) / BITOP_BITS_PER_WORD)
-#define BITS_PER_BYTE           8
-
 /* PPC bit number conversion */
 #define PPC_BITLSHIFT(be)    (BITS_PER_LONG - 1 - (be))
 #define PPC_BIT(bit)         (1UL << PPC_BITLSHIFT(bit))
@@ -69,17 +64,6 @@  static inline void clear_bit(int nr, volatile void *addr)
     clear_bits(BITOP_MASK(nr), (volatile unsigned int *)addr + BITOP_WORD(nr));
 }
 
-/**
- * test_bit - Determine whether a bit is set
- * @nr: bit number to test
- * @addr: Address to start counting from
- */
-static inline int test_bit(int nr, const volatile void *addr)
-{
-    const volatile unsigned int *p = addr;
-    return 1 & (p[BITOP_WORD(nr)] >> (nr & (BITOP_BITS_PER_WORD - 1)));
-}
-
 static inline unsigned int test_and_clear_bits(
     unsigned int mask,
     volatile unsigned int *p)
@@ -133,44 +117,6 @@  static inline int test_and_set_bit(unsigned int nr, volatile void *addr)
         (volatile unsigned int *)addr + BITOP_WORD(nr)) != 0;
 }
 
-/**
- * __test_and_set_bit - Set a bit and return its old value
- * @nr: Bit to set
- * @addr: Address to count from
- *
- * This operation is non-atomic and can be reordered.
- * If two examples of this operation race, one can appear to succeed
- * but actually fail.  You must protect multiple accesses with a lock.
- */
-static inline int __test_and_set_bit(int nr, volatile void *addr)
-{
-    unsigned int mask = BITOP_MASK(nr);
-    volatile unsigned int *p = (volatile unsigned int *)addr + BITOP_WORD(nr);
-    unsigned int old = *p;
-
-    *p = old | mask;
-    return (old & mask) != 0;
-}
-
-/**
- * __test_and_clear_bit - Clear a bit and return its old value
- * @nr: Bit to clear
- * @addr: Address to count from
- *
- * This operation is non-atomic and can be reordered.
- * If two examples of this operation race, one can appear to succeed
- * but actually fail.  You must protect multiple accesses with a lock.
- */
-static inline int __test_and_clear_bit(int nr, volatile void *addr)
-{
-    unsigned int mask = BITOP_MASK(nr);
-    volatile unsigned int *p = (volatile unsigned int *)addr + BITOP_WORD(nr);
-    unsigned int old = *p;
-
-    *p = old & ~mask;
-    return (old & mask) != 0;
-}
-
 #define flsl(x) generic_flsl(x)
 #define fls(x) generic_fls(x)
 #define ffs(x) ({ unsigned int t_ = (x); fls(t_ & -t_); })
diff --git a/xen/arch/ppc/include/asm/page.h b/xen/arch/ppc/include/asm/page.h
index 890e285051..6d4cd2611c 100644
--- a/xen/arch/ppc/include/asm/page.h
+++ b/xen/arch/ppc/include/asm/page.h
@@ -2,9 +2,9 @@ 
 #ifndef _ASM_PPC_PAGE_H
 #define _ASM_PPC_PAGE_H
 
+#include <xen/bitops.h>
 #include <xen/types.h>
 
-#include <asm/bitops.h>
 #include <asm/byteorder.h>
 
 #define PDE_VALID     PPC_BIT(0)
diff --git a/xen/arch/ppc/mm-radix.c b/xen/arch/ppc/mm-radix.c
index ab5a10695c..9055730997 100644
--- a/xen/arch/ppc/mm-radix.c
+++ b/xen/arch/ppc/mm-radix.c
@@ -1,11 +1,11 @@ 
 /* SPDX-License-Identifier: GPL-2.0-or-later */
+#include <xen/bitops.h>
 #include <xen/init.h>
 #include <xen/kernel.h>
 #include <xen/mm.h>
 #include <xen/types.h>
 #include <xen/lib.h>
 
-#include <asm/bitops.h>
 #include <asm/byteorder.h>
 #include <asm/early_printk.h>
 #include <asm/page.h>
diff --git a/xen/arch/x86/include/asm/bitops.h b/xen/arch/x86/include/asm/bitops.h
index 5a71afbc89..c4e7a06155 100644
--- a/xen/arch/x86/include/asm/bitops.h
+++ b/xen/arch/x86/include/asm/bitops.h
@@ -19,9 +19,6 @@ 
 #define ADDR (*(volatile int *) addr)
 #define CONST_ADDR (*(const volatile int *) addr)
 
-extern void __bitop_bad_size(void);
-#define bitop_bad_size(addr) (sizeof(*(addr)) < 4)
-
 /**
  * set_bit - Atomically set a bit in memory
  * @nr: the bit to set
@@ -175,7 +172,7 @@  static inline int test_and_set_bit(int nr, volatile void *addr)
 })
 
 /**
- * __test_and_set_bit - Set a bit and return its old value
+ * arch__test_and_set_bit - Set a bit and return its old value
  * @nr: Bit to set
  * @addr: Address to count from
  *
@@ -183,7 +180,7 @@  static inline int test_and_set_bit(int nr, volatile void *addr)
  * If two examples of this operation race, one can appear to succeed
  * but actually fail.  You must protect multiple accesses with a lock.
  */
-static inline int __test_and_set_bit(int nr, void *addr)
+static inline int arch__test_and_set_bit(int nr, volatile void *addr)
 {
     int oldbit;
 
@@ -194,10 +191,7 @@  static inline int __test_and_set_bit(int nr, void *addr)
 
     return oldbit;
 }
-#define __test_and_set_bit(nr, addr) ({                 \
-    if ( bitop_bad_size(addr) ) __bitop_bad_size();     \
-    __test_and_set_bit(nr, addr);                       \
-})
+#define arch__test_and_set_bit arch__test_and_set_bit
 
 /**
  * test_and_clear_bit - Clear a bit and return its old value
@@ -224,7 +218,7 @@  static inline int test_and_clear_bit(int nr, volatile void *addr)
 })
 
 /**
- * __test_and_clear_bit - Clear a bit and return its old value
+ * arch__test_and_clear_bit - Clear a bit and return its old value
 * @nr: Bit to clear
  * @addr: Address to count from
  *
@@ -232,7 +226,7 @@  static inline int test_and_clear_bit(int nr, volatile void *addr)
  * If two examples of this operation race, one can appear to succeed
  * but actually fail.  You must protect multiple accesses with a lock.
  */
-static inline int __test_and_clear_bit(int nr, void *addr)
+static inline int arch__test_and_clear_bit(int nr, volatile void *addr)
 {
     int oldbit;
 
@@ -243,13 +237,10 @@  static inline int __test_and_clear_bit(int nr, void *addr)
 
     return oldbit;
 }
-#define __test_and_clear_bit(nr, addr) ({               \
-    if ( bitop_bad_size(addr) ) __bitop_bad_size();     \
-    __test_and_clear_bit(nr, addr);                     \
-})
+#define arch__test_and_clear_bit arch__test_and_clear_bit
 
 /* WARNING: non atomic and it can be reordered! */
-static inline int __test_and_change_bit(int nr, void *addr)
+static inline int arch__test_and_change_bit(int nr, volatile void *addr)
 {
     int oldbit;
 
@@ -260,10 +251,7 @@  static inline int __test_and_change_bit(int nr, void *addr)
 
     return oldbit;
 }
-#define __test_and_change_bit(nr, addr) ({              \
-    if ( bitop_bad_size(addr) ) __bitop_bad_size();     \
-    __test_and_change_bit(nr, addr);                    \
-})
+#define arch__test_and_change_bit arch__test_and_change_bit
 
 /**
  * test_and_change_bit - Change a bit and return its new value
@@ -307,8 +295,7 @@  static inline int variable_test_bit(int nr, const volatile void *addr)
     return oldbit;
 }
 
-#define test_bit(nr, addr) ({                           \
-    if ( bitop_bad_size(addr) ) __bitop_bad_size();     \
+#define arch_test_bit(nr, addr) ({                      \
     __builtin_constant_p(nr) ?                          \
         constant_test_bit(nr, addr) :                   \
         variable_test_bit(nr, addr);                    \
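[ Illustration, not part of the applied diff: the x86 hunk above renames the non-atomic helpers to arch__test_and_*() and marks each with a self-referencing #define, while the xen/bitops.h hunk below falls back to generic__*() whenever such a marker is absent and wraps every caller-facing name in a bitop_bad_size() check.  A minimal sketch of how that size check is meant to turn misuse into a build failure; the checked_test() macro, the callers and the variables are hypothetical stand-ins. ]

  /* Sketch of the bitop_bad_size() guard used by the wrappers below:
   * __bitop_bad_size() is declared but never defined.  For a correctly
   * sized operand the size check is constant-false and the call is
   * optimised away; for a too-narrow operand the call remains and the
   * build fails with an undefined reference. */
  #include <stdint.h>

  extern void __bitop_bad_size(void);
  #define bitop_bad_size(addr) (sizeof(*(addr)) < sizeof(uint32_t))

  /* Stand-in for the real test_bit()-style wrapper macros
   * (GNU C statement expression, as used in the patch). */
  #define checked_test(nr, addr) ({                       \
      if ( bitop_bad_size(addr) ) __bitop_bad_size();     \
      0;                                                  \
  })

  static uint32_t flags;   /* wide enough: check folds away      */
  static uint8_t  tiny;    /* too narrow: leaves a link error    */

  int ok(void)  { return checked_test(3, &flags); }
  int bad(void) { return checked_test(3, &tiny);  }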
diff --git a/xen/include/xen/bitops.h b/xen/include/xen/bitops.h
index e3c5a4ccf3..9f89232ba6 100644
--- a/xen/include/xen/bitops.h
+++ b/xen/include/xen/bitops.h
@@ -103,8 +103,193 @@  static inline int generic_flsl(unsigned long x)
  * Include this here because some architectures need generic_ffs/fls in
  * scope
  */
+
+#define BITOP_BITS_PER_WORD 32
+typedef uint32_t bitop_uint_t;
+
+#define BITOP_MASK(nr)  ((bitop_uint_t)1 << ((nr) % BITOP_BITS_PER_WORD))
+
+#define BITOP_WORD(nr)  ((nr) / BITOP_BITS_PER_WORD)
+
+#define BITS_PER_BYTE 8
+
+extern void __bitop_bad_size(void);
+
+#define bitop_bad_size(addr) (sizeof(*(addr)) < sizeof(bitop_uint_t))
+
+/* --------------------- Please tidy above here --------------------- */
+
 #include <asm/bitops.h>
 
+/**
+ * generic__test_and_set_bit - Set a bit and return its old value
+ * @nr: Bit to set
+ * @addr: Address to count from
+ *
+ * This operation is non-atomic and can be reordered.
+ * If two examples of this operation race, one can appear to succeed
+ * but actually fail.  You must protect multiple accesses with a lock.
+ */
+static always_inline bool
+generic__test_and_set_bit(int nr, volatile void *addr)
+{
+    bitop_uint_t mask = BITOP_MASK(nr);
+    volatile bitop_uint_t *p = (volatile bitop_uint_t *)addr + BITOP_WORD(nr);
+    bitop_uint_t old = *p;
+
+    *p = old | mask;
+    return (old & mask);
+}
+
+/**
+ * generic__test_and_clear_bit - Clear a bit and return its old value
+ * @nr: Bit to clear
+ * @addr: Address to count from
+ *
+ * This operation is non-atomic and can be reordered.
+ * If two examples of this operation race, one can appear to succeed
+ * but actually fail.  You must protect multiple accesses with a lock.
+ */
+static always_inline bool
+generic__test_and_clear_bit(int nr, volatile void *addr)
+{
+    bitop_uint_t mask = BITOP_MASK(nr);
+    volatile bitop_uint_t *p = (volatile bitop_uint_t *)addr + BITOP_WORD(nr);
+    bitop_uint_t old = *p;
+
+    *p = old & ~mask;
+    return (old & mask);
+}
+
+/**
+ * generic__test_and_change_bit - Change a bit and return its old value
+ * @nr: Bit to change
+ * @addr: Address to count from
+ *
+ * This operation is non-atomic and can be reordered.
+ * If two examples of this operation race, one can appear to succeed
+ * but actually fail.  You must protect multiple accesses with a lock.
+ */
+static always_inline bool
+generic__test_and_change_bit(int nr, volatile void *addr)
+{
+    bitop_uint_t mask = BITOP_MASK(nr);
+    volatile bitop_uint_t *p = (volatile bitop_uint_t *)addr + BITOP_WORD(nr);
+    bitop_uint_t old = *p;
+
+    *p = old ^ mask;
+    return (old & mask);
+}
+
+/**
+ * generic_test_bit - Determine whether a bit is set
+ * @nr: bit number to test
+ * @addr: Address to start counting from
+ *
+ * This operation is non-atomic and can be reordered.
+ * If the bit is being modified concurrently, the returned value may
+ * already be stale.  You must protect multiple accesses with a lock.
+ */
+static always_inline bool generic_test_bit(int nr, const volatile void *addr)
+{
+    bitop_uint_t mask = BITOP_MASK(nr);
+    const volatile bitop_uint_t *p =
+        (const volatile bitop_uint_t *)addr + BITOP_WORD(nr);
+
+    return (*p & mask);
+}
+
+/**
+ * __test_and_set_bit - Set a bit and return its old value
+ * @nr: Bit to set
+ * @addr: Address to count from
+ *
+ * This operation is non-atomic and can be reordered.
+ * If two examples of this operation race, one can appear to succeed
+ * but actually fail.  You must protect multiple accesses with a lock.
+ */
+static always_inline bool
+__test_and_set_bit(int nr, volatile void *addr)
+{
+#ifndef arch__test_and_set_bit
+#define arch__test_and_set_bit generic__test_and_set_bit
+#endif
+
+    return arch__test_and_set_bit(nr, addr);
+}
+#define __test_and_set_bit(nr, addr) ({             \
+    if ( bitop_bad_size(addr) ) __bitop_bad_size(); \
+    __test_and_set_bit(nr, addr);                   \
+})
+
+/**
+ * __test_and_clear_bit - Clear a bit and return its old value
+ * @nr: Bit to clear
+ * @addr: Address to count from
+ *
+ * This operation is non-atomic and can be reordered.
+ * If two examples of this operation race, one can appear to succeed
+ * but actually fail.  You must protect multiple accesses with a lock.
+ */
+static always_inline bool
+__test_and_clear_bit(int nr, volatile void *addr)
+{
+#ifndef arch__test_and_clear_bit
+#define arch__test_and_clear_bit generic__test_and_clear_bit
+#endif
+
+    return arch__test_and_clear_bit(nr, addr);
+}
+#define __test_and_clear_bit(nr, addr) ({           \
+    if ( bitop_bad_size(addr) ) __bitop_bad_size(); \
+    __test_and_clear_bit(nr, addr);                 \
+})
+
+/**
+ * __test_and_change_bit - Change a bit and return its old value
+ * @nr: Bit to change
+ * @addr: Address to count from
+ *
+ * This operation is non-atomic and can be reordered.
+ * If two examples of this operation race, one can appear to succeed
+ * but actually fail.  You must protect multiple accesses with a lock.
+ */
+static always_inline bool
+__test_and_change_bit(int nr, volatile void *addr)
+{
+#ifndef arch__test_and_change_bit
+#define arch__test_and_change_bit generic__test_and_change_bit
+#endif
+
+    return arch__test_and_change_bit(nr, addr);
+}
+#define __test_and_change_bit(nr, addr) ({              \
+    if ( bitop_bad_size(addr) ) __bitop_bad_size();     \
+    __test_and_change_bit(nr, addr);                    \
+})
+
+/**
+ * test_bit - Determine whether a bit is set
+ * @nr: bit number to test
+ * @addr: Address to start counting from
+ *
+ * This operation is non-atomic and can be reordered.
+ * If the bit is being modified concurrently, the returned value may
+ * already be stale.  You must protect multiple accesses with a lock.
+ */
+static always_inline bool test_bit(int nr, const volatile void *addr)
+{
+#ifndef arch_test_bit
+#define arch_test_bit generic_test_bit
+#endif
+
+    return arch_test_bit(nr, addr);
+}
+#define test_bit(nr, addr) ({                           \
+    if ( bitop_bad_size(addr) ) __bitop_bad_size();     \
+    test_bit(nr, addr);                                 \
+})
+
 #ifndef find_next_bit
 /**
  * find_next_bit - find the next set bit in a memory region