
[XEN,v2] xen/Arm: Enforce alignment check for atomic read/write

Message ID 20221104162355.23369-1-ayan.kumar.halder@amd.com (mailing list archive)
State New, archived
Series [XEN,v2] xen/Arm: Enforce alignment check for atomic read/write

Commit Message

Ayan Kumar Halder Nov. 4, 2022, 4:23 p.m. UTC
From: Ayan Kumar Halder <ayankuma@amd.com>

Refer ARM DDI 0487I.a ID081822, B2.2.1
"Requirements for single-copy atomicity

- A read that is generated by a load instruction that loads a single
general-purpose register and is aligned to the size of the read in the
instruction is single-copy atomic.

-A write that is generated by a store instruction that stores a single
general-purpose register and is aligned to the size of the write in the
instruction is single-copy atomic"

On AArch32, the alignment check is enabled at boot time by setting HSCTLR.A bit.
("HSCTLR, Hyp System Control Register").
However in AArch64, alignment check is not enabled at boot time.

Thus, one needs to check for alignment when performing atomic operations.

Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
Reviewed-by: Michal Orzel <michal.orzel@amd.com
---

Changes from v1:
1. Referred to the latest Arm Architecture Reference Manual in the commit
message.

 xen/arch/arm/include/asm/atomic.h | 2 ++
 1 file changed, 2 insertions(+)
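
For illustration, a hypothetical sketch (not part of the patch) of the kind of
access the new assertion is meant to catch; it assumes Xen's <xen/types.h> and
<asm/atomic.h> are available and that read_atomic() expands to a call to
read_atomic_size() as in this header:

    #include <xen/types.h>
    #include <asm/atomic.h>

    static uint8_t buf[16];

    uint32_t misaligned_read_demo(void)
    {
        /*
         * &buf[1] is not 4-byte aligned. With this patch, a CONFIG_DEBUG=y
         * build trips ASSERT(IS_ALIGNED((vaddr_t)p, 4)) in read_atomic_size();
         * a production build silently performs a potentially non-atomic,
         * unaligned load.
         */
        return read_atomic((uint32_t *)&buf[1]);
    }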

Comments

Bertrand Marquis Nov. 4, 2022, 4:32 p.m. UTC | #1
Hi Ayan,

> On 4 Nov 2022, at 16:23, Ayan Kumar Halder <ayan.kumar.halder@amd.com> wrote:
> 
> From: Ayan Kumar Halder <ayankuma@amd.com>
> 
> Refer ARM DDI 0487I.a ID081822, B2.2.1
> "Requirements for single-copy atomicity
> 
> - A read that is generated by a load instruction that loads a single
> general-purpose register and is aligned to the size of the read in the
> instruction is single-copy atomic.
> 
> -A write that is generated by a store instruction that stores a single
> general-purpose register and is aligned to the size of the write in the
> instruction is single-copy atomic"
> 
> On AArch32, the alignment check is enabled at boot time by setting HSCTLR.A bit.
> ("HSCTLR, Hyp System Control Register").
> However in AArch64, alignment check is not enabled at boot time.
> 
> Thus, one needs to check for alignment when performing atomic operations.
> 
> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
> Reviewed-by: Michal Orzel <michal.orzel@amd.com

Seems like the R-B is missing a >

With that fixed (can be done on commit):
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Cheers
Bertrand

> ---
> 
> Changes from :-
> v1 - 1. Referred to the latest Arm Architecture Reference Manual in the commit
> message.
> 
> xen/arch/arm/include/asm/atomic.h | 2 ++
> 1 file changed, 2 insertions(+)
> 
> diff --git a/xen/arch/arm/include/asm/atomic.h b/xen/arch/arm/include/asm/atomic.h
> index 1f60c28b1b..64314d59b3 100644
> --- a/xen/arch/arm/include/asm/atomic.h
> +++ b/xen/arch/arm/include/asm/atomic.h
> @@ -78,6 +78,7 @@ static always_inline void read_atomic_size(const volatile void *p,
>                                            void *res,
>                                            unsigned int size)
> {
> +    ASSERT(IS_ALIGNED((vaddr_t)p, size));
>     switch ( size )
>     {
>     case 1:
> @@ -102,6 +103,7 @@ static always_inline void write_atomic_size(volatile void *p,
>                                             void *val,
>                                             unsigned int size)
> {
> +    ASSERT(IS_ALIGNED((vaddr_t)p, size));
>     switch ( size )
>     {
>     case 1:
> -- 
> 2.17.1
>
Julien Grall Nov. 6, 2022, 5:54 p.m. UTC | #2
Hi Ayan,

To me the title and the explanation below suggest...

On 04/11/2022 16:23, Ayan Kumar Halder wrote:
> From: Ayan Kumar Halder <ayankuma@amd.com>
> 
> Refer ARM DDI 0487I.a ID081822, B2.2.1
> "Requirements for single-copy atomicity
> 
> - A read that is generated by a load instruction that loads a single
> general-purpose register and is aligned to the size of the read in the
> instruction is single-copy atomic.
> 
> -A write that is generated by a store instruction that stores a single
> general-purpose register and is aligned to the size of the write in the
> instruction is single-copy atomic"
> 
> On AArch32, the alignment check is enabled at boot time by setting HSCTLR.A bit.
> ("HSCTLR, Hyp System Control Register").
> However in AArch64, alignment check is not enabled at boot time.

... you want to enable the alignment check on AArch64 always. However,
this is not possible because memcpy() uses unaligned accesses.

I think the commit message/title should clarify that the check is *only*
done during debug builds. IOW, there is no enforcement in production builds.

The alternative would be to use a BUG_ON(), but that might add too much
overhead.
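
A minimal sketch of what that alternative could look like, purely
illustrative (the helper name is made up and nothing like this was posted):

    /* BUG_ON() keeps the check in production builds, at the cost of
     * evaluating it on every access; ASSERT() compiles away with
     * CONFIG_DEBUG=n. */
    static inline void check_atomic_alignment(const volatile void *p,
                                              unsigned int size)
    {
        BUG_ON(!IS_ALIGNED((vaddr_t)p, size));
    }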

Cheers,

> 
> Thus, one needs to check for alignment when performing atomic operations.
> 
> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
> Reviewed-by: Michal Orzel <michal.orzel@amd.com
> ---
> 
> Changes from :-
> v1 - 1. Referred to the latest Arm Architecture Reference Manual in the commit
> message.
> 
>   xen/arch/arm/include/asm/atomic.h | 2 ++
>   1 file changed, 2 insertions(+)
> 
> diff --git a/xen/arch/arm/include/asm/atomic.h b/xen/arch/arm/include/asm/atomic.h
> index 1f60c28b1b..64314d59b3 100644
> --- a/xen/arch/arm/include/asm/atomic.h
> +++ b/xen/arch/arm/include/asm/atomic.h
> @@ -78,6 +78,7 @@ static always_inline void read_atomic_size(const volatile void *p,
>                                              void *res,
>                                              unsigned int size)
>   {
> +    ASSERT(IS_ALIGNED((vaddr_t)p, size));
>       switch ( size )
>       {
>       case 1:
> @@ -102,6 +103,7 @@ static always_inline void write_atomic_size(volatile void *p,
>                                               void *val,
>                                               unsigned int size)
>   {
> +    ASSERT(IS_ALIGNED((vaddr_t)p, size));
>       switch ( size )
>       {
>       case 1:
Ayan Kumar Halder Nov. 7, 2022, 10:36 a.m. UTC | #3
On 06/11/2022 17:54, Julien Grall wrote:
> Hi Ayan,

Hi Julien,

I need some clarification.

>
> To me the title and the explaination below suggests...
>
> On 04/11/2022 16:23, Ayan Kumar Halder wrote:
>> From: Ayan Kumar Halder <ayankuma@amd.com>
>>
>> Refer ARM DDI 0487I.a ID081822, B2.2.1
>> "Requirements for single-copy atomicity
>>
>> - A read that is generated by a load instruction that loads a single
>> general-purpose register and is aligned to the size of the read in the
>> instruction is single-copy atomic.
>>
>> -A write that is generated by a store instruction that stores a single
>> general-purpose register and is aligned to the size of the write in the
>> instruction is single-copy atomic"
>>
>> On AArch32, the alignment check is enabled at boot time by setting 
>> HSCTLR.A bit.
>> ("HSCTLR, Hyp System Control Register").
>> However in AArch64, alignment check is not enabled at boot time.
>
> ... you want to enable the alignment check on AArch64 always. 

I want to enable alignment check *only* for atomic access.

Maybe I should remove this line --> "However in AArch64, alignment
check is not enabled at boot time.".

> However, this is not possible to do because memcpy() is using 
> unaligned access.
This is a non-atomic access. So the commit does not apply here.
>
> I think the commit message/title should clarify that the check is 
> *only* done during debug build. IOW, there are no enforcement in 
> producation build.

AFAICS read_atomic()/write_atomic() are used in non-debug builds
(i.e. CONFIG_DEBUG=n) as well.

For example: vgic_v3_distr_mmio_read() --> vgic_fetch_irouter() -->
read_atomic(). There is no check for CONFIG_DEBUG.
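
As a purely hypothetical illustration of such a call path (the real vgic code
differs, and the structure below is made up), an 8-byte read through
read_atomic() is compiled into both build types; only a CONFIG_DEBUG=y build
gains the alignment ASSERT:

    #include <xen/types.h>
    #include <asm/atomic.h>

    struct demo_rank {
        uint64_t irouter[32];       /* 64-bit registers read atomically */
    };

    static uint64_t demo_fetch_irouter(struct demo_rank *rank, unsigned int n)
    {
        /* The ASSERT(IS_ALIGNED(...)) added by this patch only exists in
         * debug builds; in production the read is emitted without checks. */
        return read_atomic(&rank->irouter[n]);
    }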

- Ayan

>
> The alternative would be to use a BUG_ON() but that might be too high 
> overhead.
>
> Cheers,
>
>>
>> Thus, one needs to check for alignment when performing atomic 
>> operations.
>>
>> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
>> Reviewed-by: Michal Orzel <michal.orzel@amd.com
>> ---
>>
>> Changes from :-
>> v1 - 1. Referred to the latest Arm Architecture Reference Manual in 
>> the commit
>> message.
>>
>>   xen/arch/arm/include/asm/atomic.h | 2 ++
>>   1 file changed, 2 insertions(+)
>>
>> diff --git a/xen/arch/arm/include/asm/atomic.h 
>> b/xen/arch/arm/include/asm/atomic.h
>> index 1f60c28b1b..64314d59b3 100644
>> --- a/xen/arch/arm/include/asm/atomic.h
>> +++ b/xen/arch/arm/include/asm/atomic.h
>> @@ -78,6 +78,7 @@ static always_inline void read_atomic_size(const 
>> volatile void *p,
>>                                              void *res,
>>                                              unsigned int size)
>>   {
>> +    ASSERT(IS_ALIGNED((vaddr_t)p, size));
>>       switch ( size )
>>       {
>>       case 1:
>> @@ -102,6 +103,7 @@ static always_inline void 
>> write_atomic_size(volatile void *p,
>>                                               void *val,
>>                                               unsigned int size)
>>   {
>> +    ASSERT(IS_ALIGNED((vaddr_t)p, size));
>>       switch ( size )
>>       {
>>       case 1:
>
Julien Grall Nov. 7, 2022, 10:44 a.m. UTC | #4
Hi Ayan,

On 07/11/2022 10:36, Ayan Kumar Halder wrote:
> 
> On 06/11/2022 17:54, Julien Grall wrote:
>> Hi Ayan,
> 
> Hi Julien,
> 
> I need some clarification.
> 
>>
>> To me the title and the explaination below suggests...
>>
>> On 04/11/2022 16:23, Ayan Kumar Halder wrote:
>>> From: Ayan Kumar Halder <ayankuma@amd.com>
>>>
>>> Refer ARM DDI 0487I.a ID081822, B2.2.1
>>> "Requirements for single-copy atomicity
>>>
>>> - A read that is generated by a load instruction that loads a single
>>> general-purpose register and is aligned to the size of the read in the
>>> instruction is single-copy atomic.
>>>
>>> -A write that is generated by a store instruction that stores a single
>>> general-purpose register and is aligned to the size of the write in the
>>> instruction is single-copy atomic"
>>>
>>> On AArch32, the alignment check is enabled at boot time by setting 
>>> HSCTLR.A bit.
>>> ("HSCTLR, Hyp System Control Register").
>>> However in AArch64, alignment check is not enabled at boot time.
>>
>> ... you want to enable the alignment check on AArch64 always. 
> 
> I want to enable alignment check *only* for atomic access.
> 
> May be I should remove this line --> "However in AArch64, alignment 
> check is not enabled at boot time.".
> 
>> However, this is not possible to do because memcpy() is using 
>> unaligned access.
> This is a non atomic access. So the commit does not apply here.

Right, but your commit message refers to the alignment check on arm32.
You wrote enough to make someone wonder, but not enough to explain why
we can't enable the alignment check on arm64.

>>
>> I think the commit message/title should clarify that the check is 
>> *only* done during debug build. IOW, there are no enforcement in 
>> producation build.
> 
> AFAICS read_atomic()/write_atomic() is enabled during non debug builds 
> (ie CONFIG_DEBUG=n) as well.

My point was that ASSERT() is a NOP in production builds. So
effectively the enforcement happens only in debug builds.

IOW, unless you test exhaustively with a debug build, you may never 
notice that the access was not atomic.
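
Roughly what that means in practice, as a simplified sketch of the usual
pattern (not Xen's exact ASSERT() definition, which differs in detail):

    void demo_assert_failed(const char *cond);    /* e.g. print and crash  */

    #ifndef NDEBUG                                /* debug build           */
    #define DEMO_ASSERT(p) \
        do { if ( !(p) ) demo_assert_failed(#p); } while ( 0 )
    #else                                         /* production build      */
    #define DEMO_ASSERT(p) \
        do { if ( 0 && (p) ) {} } while ( 0 )     /* compiles to nothing   */
    #endif

The production variant keeps the expression syntax-checked but never
evaluates it, so a misaligned pointer is only ever reported by a debug build.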

Cheers,
Ayan Kumar Halder Nov. 7, 2022, 12:49 p.m. UTC | #5
On 07/11/2022 10:44, Julien Grall wrote:
> Hi Ayan,
Hi Julien,
>
> On 07/11/2022 10:36, Ayan Kumar Halder wrote:
>>
>> On 06/11/2022 17:54, Julien Grall wrote:
>>> Hi Ayan,
>>
>> Hi Julien,
>>
>> I need some clarification.
>>
>>>
>>> To me the title and the explaination below suggests...
>>>
>>> On 04/11/2022 16:23, Ayan Kumar Halder wrote:
>>>> From: Ayan Kumar Halder <ayankuma@amd.com>
>>>>
>>>> Refer ARM DDI 0487I.a ID081822, B2.2.1
>>>> "Requirements for single-copy atomicity
>>>>
>>>> - A read that is generated by a load instruction that loads a single
>>>> general-purpose register and is aligned to the size of the read in the
>>>> instruction is single-copy atomic.
>>>>
>>>> -A write that is generated by a store instruction that stores a single
>>>> general-purpose register and is aligned to the size of the write in 
>>>> the
>>>> instruction is single-copy atomic"
>>>>
>>>> On AArch32, the alignment check is enabled at boot time by setting 
>>>> HSCTLR.A bit.
>>>> ("HSCTLR, Hyp System Control Register").
>>>> However in AArch64, alignment check is not enabled at boot time.
>>>
>>> ... you want to enable the alignment check on AArch64 always. 
>>
>> I want to enable alignment check *only* for atomic access.
>>
>> May be I should remove this line --> "However in AArch64, alignment 
>> check is not enabled at boot time.".
>>
>>> However, this is not possible to do because memcpy() is using 
>>> unaligned access.
>> This is a non atomic access. So the commit does not apply here.
>
> Right, but your commit message refers to the alignment check on arm32. 
> You wrote too much for someone to wonder but not enough to explain why 
> we can't enable the alignment check on arm64.
>
>>>
>>> I think the commit message/title should clarify that the check is 
>>> *only* done during debug build. IOW, there are no enforcement in 
>>> producation build.
>>
>> AFAICS read_atomic()/write_atomic() is enabled during non debug 
>> builds (ie CONFIG_DEBUG=n) as well.
>
> My point was that ASSERT() is a NOP in production build. So you 
> effectively the enforcement happens only in debug build.
>
> IOW, unless you test exhaustively with a debug build, you may never 
> notice that the access was not atomic.

This makes sense.

Does the following commit message look better?

xen/Arm: Enforce alignment check for atomic read/write

Refer ARM DDI 0487I.a ID081822, B2.2.1
"Requirements for single-copy atomicity

- A read that is generated by a load instruction that loads a single
general-purpose register and is aligned to the size of the read in the
instruction is single-copy atomic.

-A write that is generated by a store instruction that stores a single
general-purpose register and is aligned to the size of the write in the
instruction is single-copy atomic"

Thus, one needs to check for alignment when performing atomic operations.
However, as ASSERT()s are disabled in production builds, one needs to
run a debug build to catch any unaligned access during atomic operations.
Enforcing alignment checks in production builds has quite a high
overhead.

On AArch32, the alignment check is enabled at boot time by setting the
HSCTLR.A bit ("HSCTLR, Hyp System Control Register").
However, on AArch64, memcpy()/memset() may be used on 64-bit unaligned
addresses.
Thus, one does not wish to enable the alignment check at boot time.

Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
Reviewed-by: Michal Orzel <michal.orzel@amd.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

I think I can keep the R-b as there is no code change?

- Ayan

>
> Cheers,
>
Julien Grall Nov. 7, 2022, 6:06 p.m. UTC | #6
Hi Ayan,

On 07/11/2022 12:49, Ayan Kumar Halder wrote:
> 
> On 07/11/2022 10:44, Julien Grall wrote:
>> Hi Ayan,
> Hi Julien,
>>
>> On 07/11/2022 10:36, Ayan Kumar Halder wrote:
>>>
>>> On 06/11/2022 17:54, Julien Grall wrote:
>>>> Hi Ayan,
>>>
>>> Hi Julien,
>>>
>>> I need some clarification.
>>>
>>>>
>>>> To me the title and the explaination below suggests...
>>>>
>>>> On 04/11/2022 16:23, Ayan Kumar Halder wrote:
>>>>> From: Ayan Kumar Halder <ayankuma@amd.com>
>>>>>
>>>>> Refer ARM DDI 0487I.a ID081822, B2.2.1
>>>>> "Requirements for single-copy atomicity
>>>>>
>>>>> - A read that is generated by a load instruction that loads a single
>>>>> general-purpose register and is aligned to the size of the read in the
>>>>> instruction is single-copy atomic.
>>>>>
>>>>> -A write that is generated by a store instruction that stores a single
>>>>> general-purpose register and is aligned to the size of the write in 
>>>>> the
>>>>> instruction is single-copy atomic"
>>>>>
>>>>> On AArch32, the alignment check is enabled at boot time by setting 
>>>>> HSCTLR.A bit.
>>>>> ("HSCTLR, Hyp System Control Register").
>>>>> However in AArch64, alignment check is not enabled at boot time.
>>>>
>>>> ... you want to enable the alignment check on AArch64 always. 
>>>
>>> I want to enable alignment check *only* for atomic access.
>>>
>>> May be I should remove this line --> "However in AArch64, alignment 
>>> check is not enabled at boot time.".
>>>
>>>> However, this is not possible to do because memcpy() is using 
>>>> unaligned access.
>>> This is a non atomic access. So the commit does not apply here.
>>
>> Right, but your commit message refers to the alignment check on arm32. 
>> You wrote too much for someone to wonder but not enough to explain why 
>> we can't enable the alignment check on arm64.
>>
>>>>
>>>> I think the commit message/title should clarify that the check is 
>>>> *only* done during debug build. IOW, there are no enforcement in 
>>>> producation build.
>>>
>>> AFAICS read_atomic()/write_atomic() is enabled during non debug 
>>> builds (ie CONFIG_DEBUG=n) as well.
>>
>> My point was that ASSERT() is a NOP in production build. So you 
>> effectively the enforcement happens only in debug build.
>>
>> IOW, unless you test exhaustively with a debug build, you may never 
>> notice that the access was not atomic.
> 
> This makes sense.
> 
> Does the following commit message look better ?
> 
> xen/Arm: Enforce alignment check for atomic read/write

title:

xen/arm: Enforce alignment check in debug build for {read, write}_atomic

> 
> Refer ARM DDI 0487I.a ID081822, B2.2.1
> "Requirements for single-copy atomicity
> 
> - A read that is generated by a load instruction that loads a single
> general-purpose register and is aligned to the size of the read in the
> instruction is single-copy atomic.
> 
> -A write that is generated by a store instruction that stores a single
> general-purpose register and is aligned to the size of the write in the
> instruction is single-copy atomic"
> 
> Thus, one needs to check for alignment when performing atomic operations.
> However, as ASSERT() are disabled in production builds, so one needs to

This seems to be a bit out of context because you don't really explain 
that ASSERT() would be used. Also...

> run the debug builds to catch any unaligned access during atomic 
> operations.
> Enforcing alignment checks during production build has quite a high 
> overhead.
> 
> On AArch32, the alignment check is enabled at boot time by setting 
> HSCTLR.A bit.
> ("HSCTLR, Hyp System Control Register").
> However, on AArch64, memcpy()/memset() may be used on 64bit unaligned 
> addresses.
> Thus, one does not wish to enable alignment check at boot time.

... to me this paragraph should come first because it explains why we
can't check in production. So how about the following commit message:

"
xen/arm: Enforce alignment check in debug build for {read, write}_atomic

Xen provides helpers to atomically read/write memory (see {read,
write}_atomic()). Those helpers can only work if the address is aligned
to the size of the access (see B2.2.1 ARM DDI 0487I.a).

On Arm32, the alignment is already enforced by the processor because the
HSCTLR.A bit is set (it enforces alignment for every access). For Arm64,
this bit is not set because memcpy()/memset() can use unaligned accesses
for performance reasons (the implementation is taken from the Cortex
library).

To avoid any overhead in production builds, the alignment will only be
checked using an ASSERT. Note that it might be possible to do it in
production builds using the acquire/exclusive versions of load/store. But
this is left to a follow-up (if wanted).
"

While trying to find a justification for the debug version, I was
wondering whether we could actually use the acquire or exclusive
version. I am not entirely sure about the overhead.
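
For reference, a sketch of what an acquire-based read could look like; the
name and inline asm below are assumptions, not existing Xen code. On AArch64,
load-acquire/store-release and exclusive accesses must be naturally aligned
even with SCTLR_EL2.A clear, so a misaligned pointer would fault in production
builds as well; the extra ordering implied by LDAR is where the overhead
concern comes from:

    #include <xen/types.h>

    static inline uint32_t read_u32_acquire(const volatile uint32_t *p)
    {
        uint32_t val;

        /* LDAR requires natural alignment, so this faults on a misaligned
         * address regardless of the A bit; it also orders the access. */
        asm volatile ( "ldar %w0, [%1]" : "=r" (val) : "r" (p) : "memory" );

        return val;
    }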

> 
> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
> Reviewed-by: Michal Orzel <michal.orzel@amd.com>
> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
> 
> I think I can keep R-b as there is no code change ?

My signed-off-by will need to be added for the commit message I proposed
above. So I would like Bertrand/Michal to confirm they are happy with it
(I don't usually add my reviewed-by/acked-by to a patch where my
signed-off-by is added).

Cheers,
Michal Orzel Nov. 8, 2022, 7:26 a.m. UTC | #7
Hi Julien,

On 07/11/2022 19:06, Julien Grall wrote:
> 
> 
> Hi Ayan,
> 
> On 07/11/2022 12:49, Ayan Kumar Halder wrote:
>>
>> On 07/11/2022 10:44, Julien Grall wrote:
>>> Hi Ayan,
>> Hi Julien,
>>>
>>> On 07/11/2022 10:36, Ayan Kumar Halder wrote:
>>>>
>>>> On 06/11/2022 17:54, Julien Grall wrote:
>>>>> Hi Ayan,
>>>>
>>>> Hi Julien,
>>>>
>>>> I need some clarification.
>>>>
>>>>>
>>>>> To me the title and the explaination below suggests...
>>>>>
>>>>> On 04/11/2022 16:23, Ayan Kumar Halder wrote:
>>>>>> From: Ayan Kumar Halder <ayankuma@amd.com>
>>>>>>
>>>>>> Refer ARM DDI 0487I.a ID081822, B2.2.1
>>>>>> "Requirements for single-copy atomicity
>>>>>>
>>>>>> - A read that is generated by a load instruction that loads a single
>>>>>> general-purpose register and is aligned to the size of the read in the
>>>>>> instruction is single-copy atomic.
>>>>>>
>>>>>> -A write that is generated by a store instruction that stores a single
>>>>>> general-purpose register and is aligned to the size of the write in
>>>>>> the
>>>>>> instruction is single-copy atomic"
>>>>>>
>>>>>> On AArch32, the alignment check is enabled at boot time by setting
>>>>>> HSCTLR.A bit.
>>>>>> ("HSCTLR, Hyp System Control Register").
>>>>>> However in AArch64, alignment check is not enabled at boot time.
>>>>>
>>>>> ... you want to enable the alignment check on AArch64 always.
>>>>
>>>> I want to enable alignment check *only* for atomic access.
>>>>
>>>> May be I should remove this line --> "However in AArch64, alignment
>>>> check is not enabled at boot time.".
>>>>
>>>>> However, this is not possible to do because memcpy() is using
>>>>> unaligned access.
>>>> This is a non atomic access. So the commit does not apply here.
>>>
>>> Right, but your commit message refers to the alignment check on arm32.
>>> You wrote too much for someone to wonder but not enough to explain why
>>> we can't enable the alignment check on arm64.
>>>
>>>>>
>>>>> I think the commit message/title should clarify that the check is
>>>>> *only* done during debug build. IOW, there are no enforcement in
>>>>> producation build.
>>>>
>>>> AFAICS read_atomic()/write_atomic() is enabled during non debug
>>>> builds (ie CONFIG_DEBUG=n) as well.
>>>
>>> My point was that ASSERT() is a NOP in production build. So you
>>> effectively the enforcement happens only in debug build.
>>>
>>> IOW, unless you test exhaustively with a debug build, you may never
>>> notice that the access was not atomic.
>>
>> This makes sense.
>>
>> Does the following commit message look better ?
>>
>> xen/Arm: Enforce alignment check for atomic read/write
> 
> title:
> 
> xen/arm: Enforce alignment check in debug build for {read, write}_atomic
> 
>>
>> Refer ARM DDI 0487I.a ID081822, B2.2.1
>> "Requirements for single-copy atomicity
>>
>> - A read that is generated by a load instruction that loads a single
>> general-purpose register and is aligned to the size of the read in the
>> instruction is single-copy atomic.
>>
>> -A write that is generated by a store instruction that stores a single
>> general-purpose register and is aligned to the size of the write in the
>> instruction is single-copy atomic"
>>
>> Thus, one needs to check for alignment when performing atomic operations.
>> However, as ASSERT() are disabled in production builds, so one needs to
> 
> This seems to be a bit out of context because you don't really explain
> that ASSERT() would be used. Also...
> 
>> run the debug builds to catch any unaligned access during atomic
>> operations.
>> Enforcing alignment checks during production build has quite a high
>> overhead.
>>
>> On AArch32, the alignment check is enabled at boot time by setting
>> HSCTLR.A bit.
>> ("HSCTLR, Hyp System Control Register").
>> However, on AArch64, memcpy()/memset() may be used on 64bit unaligned
>> addresses.
>> Thus, one does not wish to enable alignment check at boot time.
> 
> ... to me this paragraph should be first because this explained why we
> can't check in production. So how about the following commit message:
> 
> "
> xen/arm: Enforce alignment check in debug build for {read, write}_atomic
> 
> Xen provides helper to atomically read/write memory (see {read,
> write}_atomic()). Those helpers can only work if the address is aligned
> to the size of the access (see B2.2.1 ARM DDI 08476I.a).
> 
> On Arm32, the alignment is already enforced by the processor because
> HSCTLR.A bit is set (it enforce alignment for every access). For Arm64,
> this bit is not set because memcpy()/memset() can use unaligned access
> for performance reason (the implementation is taken from the Cortex
> library).
> 
> To avoid any overhead in production build, the alignment will only be
> checked using an ASSERT. Note that it might be possible to do it in
> production build using the acquire/exclusive version of load/store. But
> this is left to a follow-up (if wanted).
> "
This reads very well.

> 
> While trying to find a justification for the debug version. I was
> wondering whether we could actually use the acquire or exclusive
> version. I am not entirely sure about the overhead.
> 
>>
>> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
>> Reviewed-by: Michal Orzel <michal.orzel@amd.com>
>> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
>>
>> I think I can keep R-b as there is no code change ?
> 
> My signed-off-by will need to be added for the commit message I proposed
> above. So I would like Bertrand/Michal to confirm they are happy with it
> (I don't usually add my reviewed-by/acked-by for patch where my
> signed-off-by is added).
> 
You can keep my Rb, and Bertrand or Stefano can ack it, so that we avoid
a patch being acked by one of its authors.

> Cheers,
> 
> --
> Julien Grall

~Michal
Bertrand Marquis Nov. 8, 2022, 8:34 a.m. UTC | #8
Hi,

> On 8 Nov 2022, at 07:26, Michal Orzel <michal.orzel@amd.com> wrote:
> 
> Hi Julien,
> 
> On 07/11/2022 19:06, Julien Grall wrote:
>> 
>> 
>> Hi Ayan,
>> 
>> On 07/11/2022 12:49, Ayan Kumar Halder wrote:
>>> 
>>> On 07/11/2022 10:44, Julien Grall wrote:
>>>> Hi Ayan,
>>> Hi Julien,
>>>> 
>>>> On 07/11/2022 10:36, Ayan Kumar Halder wrote:
>>>>> 
>>>>> On 06/11/2022 17:54, Julien Grall wrote:
>>>>>> Hi Ayan,
>>>>> 
>>>>> Hi Julien,
>>>>> 
>>>>> I need some clarification.
>>>>> 
>>>>>> 
>>>>>> To me the title and the explaination below suggests...
>>>>>> 
>>>>>> On 04/11/2022 16:23, Ayan Kumar Halder wrote:
>>>>>>> From: Ayan Kumar Halder <ayankuma@amd.com>
>>>>>>> 
>>>>>>> Refer ARM DDI 0487I.a ID081822, B2.2.1
>>>>>>> "Requirements for single-copy atomicity
>>>>>>> 
>>>>>>> - A read that is generated by a load instruction that loads a single
>>>>>>> general-purpose register and is aligned to the size of the read in the
>>>>>>> instruction is single-copy atomic.
>>>>>>> 
>>>>>>> -A write that is generated by a store instruction that stores a single
>>>>>>> general-purpose register and is aligned to the size of the write in
>>>>>>> the
>>>>>>> instruction is single-copy atomic"
>>>>>>> 
>>>>>>> On AArch32, the alignment check is enabled at boot time by setting
>>>>>>> HSCTLR.A bit.
>>>>>>> ("HSCTLR, Hyp System Control Register").
>>>>>>> However in AArch64, alignment check is not enabled at boot time.
>>>>>> 
>>>>>> ... you want to enable the alignment check on AArch64 always.
>>>>> 
>>>>> I want to enable alignment check *only* for atomic access.
>>>>> 
>>>>> May be I should remove this line --> "However in AArch64, alignment
>>>>> check is not enabled at boot time.".
>>>>> 
>>>>>> However, this is not possible to do because memcpy() is using
>>>>>> unaligned access.
>>>>> This is a non atomic access. So the commit does not apply here.
>>>> 
>>>> Right, but your commit message refers to the alignment check on arm32.
>>>> You wrote too much for someone to wonder but not enough to explain why
>>>> we can't enable the alignment check on arm64.
>>>> 
>>>>>> 
>>>>>> I think the commit message/title should clarify that the check is
>>>>>> *only* done during debug build. IOW, there are no enforcement in
>>>>>> producation build.
>>>>> 
>>>>> AFAICS read_atomic()/write_atomic() is enabled during non debug
>>>>> builds (ie CONFIG_DEBUG=n) as well.
>>>> 
>>>> My point was that ASSERT() is a NOP in production build. So you
>>>> effectively the enforcement happens only in debug build.
>>>> 
>>>> IOW, unless you test exhaustively with a debug build, you may never
>>>> notice that the access was not atomic.
>>> 
>>> This makes sense.
>>> 
>>> Does the following commit message look better ?
>>> 
>>> xen/Arm: Enforce alignment check for atomic read/write
>> 
>> title:
>> 
>> xen/arm: Enforce alignment check in debug build for {read, write}_atomic
>> 
>>> 
>>> Refer ARM DDI 0487I.a ID081822, B2.2.1
>>> "Requirements for single-copy atomicity
>>> 
>>> - A read that is generated by a load instruction that loads a single
>>> general-purpose register and is aligned to the size of the read in the
>>> instruction is single-copy atomic.
>>> 
>>> -A write that is generated by a store instruction that stores a single
>>> general-purpose register and is aligned to the size of the write in the
>>> instruction is single-copy atomic"
>>> 
>>> Thus, one needs to check for alignment when performing atomic operations.
>>> However, as ASSERT() are disabled in production builds, so one needs to
>> 
>> This seems to be a bit out of context because you don't really explain
>> that ASSERT() would be used. Also...
>> 
>>> run the debug builds to catch any unaligned access during atomic
>>> operations.
>>> Enforcing alignment checks during production build has quite a high
>>> overhead.
>>> 
>>> On AArch32, the alignment check is enabled at boot time by setting
>>> HSCTLR.A bit.
>>> ("HSCTLR, Hyp System Control Register").
>>> However, on AArch64, memcpy()/memset() may be used on 64bit unaligned
>>> addresses.
>>> Thus, one does not wish to enable alignment check at boot time.
>> 
>> ... to me this paragraph should be first because this explained why we
>> can't check in production. So how about the following commit message:
>> 
>> "
>> xen/arm: Enforce alignment check in debug build for {read, write}_atomic
>> 
>> Xen provides helper to atomically read/write memory (see {read,
>> write}_atomic()). Those helpers can only work if the address is aligned
>> to the size of the access (see B2.2.1 ARM DDI 08476I.a).
>> 
>> On Arm32, the alignment is already enforced by the processor because
>> HSCTLR.A bit is set (it enforce alignment for every access). For Arm64,
>> this bit is not set because memcpy()/memset() can use unaligned access
>> for performance reason (the implementation is taken from the Cortex
>> library).
>> 
>> To avoid any overhead in production build, the alignment will only be
>> checked using an ASSERT. Note that it might be possible to do it in
>> production build using the acquire/exclusive version of load/store. But
>> this is left to a follow-up (if wanted).
>> "
> This reads very well.
> 
>> 
>> While trying to find a justification for the debug version. I was
>> wondering whether we could actually use the acquire or exclusive
>> version. I am not entirely sure about the overhead.
>> 
>>> 
>>> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
>>> Reviewed-by: Michal Orzel <michal.orzel@amd.com>
>>> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
>>> 
>>> I think I can keep R-b as there is no code change ?
>> 
>> My signed-off-by will need to be added for the commit message I proposed
>> above. So I would like Bertrand/Michal to confirm they are happy with it
>> (I don't usually add my reviewed-by/acked-by for patch where my
>> signed-off-by is added).
>> 
> You can keep my Rb and Bertrand or Stefano can ack it, so that we can avoid
> acking a patch by one of the authors.

I will check and ack the v3 once out.

Cheers
Bertrand

> 
>> Cheers,
>> 
>> --
>> Julien Grall
> 
> ~Michal
Ayan Kumar Halder Nov. 8, 2022, 9:48 a.m. UTC | #9
On 08/11/2022 08:34, Bertrand Marquis wrote:
> Hi,
Hi Julien/Bertrand/Michal,
>
>> On 8 Nov 2022, at 07:26, Michal Orzel <michal.orzel@amd.com> wrote:
>>
>> Hi Julien,
>>
>> On 07/11/2022 19:06, Julien Grall wrote:
>>>
>>> Hi Ayan,
>>>
>>> On 07/11/2022 12:49, Ayan Kumar Halder wrote:
>>>> On 07/11/2022 10:44, Julien Grall wrote:
>>>>> Hi Ayan,
>>>> Hi Julien,
>>>>> On 07/11/2022 10:36, Ayan Kumar Halder wrote:
>>>>>> On 06/11/2022 17:54, Julien Grall wrote:
>>>>>>> Hi Ayan,
>>>>>> Hi Julien,
>>>>>>
>>>>>> I need some clarification.
>>>>>>
>>>>>>> To me the title and the explaination below suggests...
>>>>>>>
>>>>>>> On 04/11/2022 16:23, Ayan Kumar Halder wrote:
>>>>>>>> From: Ayan Kumar Halder <ayankuma@amd.com>
>>>>>>>>
>>>>>>>> Refer ARM DDI 0487I.a ID081822, B2.2.1
>>>>>>>> "Requirements for single-copy atomicity
>>>>>>>>
>>>>>>>> - A read that is generated by a load instruction that loads a single
>>>>>>>> general-purpose register and is aligned to the size of the read in the
>>>>>>>> instruction is single-copy atomic.
>>>>>>>>
>>>>>>>> -A write that is generated by a store instruction that stores a single
>>>>>>>> general-purpose register and is aligned to the size of the write in
>>>>>>>> the
>>>>>>>> instruction is single-copy atomic"
>>>>>>>>
>>>>>>>> On AArch32, the alignment check is enabled at boot time by setting
>>>>>>>> HSCTLR.A bit.
>>>>>>>> ("HSCTLR, Hyp System Control Register").
>>>>>>>> However in AArch64, alignment check is not enabled at boot time.
>>>>>>> ... you want to enable the alignment check on AArch64 always.
>>>>>> I want to enable alignment check *only* for atomic access.
>>>>>>
>>>>>> May be I should remove this line --> "However in AArch64, alignment
>>>>>> check is not enabled at boot time.".
>>>>>>
>>>>>>> However, this is not possible to do because memcpy() is using
>>>>>>> unaligned access.
>>>>>> This is a non atomic access. So the commit does not apply here.
>>>>> Right, but your commit message refers to the alignment check on arm32.
>>>>> You wrote too much for someone to wonder but not enough to explain why
>>>>> we can't enable the alignment check on arm64.
>>>>>
>>>>>>> I think the commit message/title should clarify that the check is
>>>>>>> *only* done during debug build. IOW, there are no enforcement in
>>>>>>> producation build.
>>>>>> AFAICS read_atomic()/write_atomic() is enabled during non debug
>>>>>> builds (ie CONFIG_DEBUG=n) as well.
>>>>> My point was that ASSERT() is a NOP in production build. So you
>>>>> effectively the enforcement happens only in debug build.
>>>>>
>>>>> IOW, unless you test exhaustively with a debug build, you may never
>>>>> notice that the access was not atomic.
>>>> This makes sense.
>>>>
>>>> Does the following commit message look better ?
>>>>
>>>> xen/Arm: Enforce alignment check for atomic read/write
>>> title:
>>>
>>> xen/arm: Enforce alignment check in debug build for {read, write}_atomic
>>>
>>>> Refer ARM DDI 0487I.a ID081822, B2.2.1
>>>> "Requirements for single-copy atomicity
>>>>
>>>> - A read that is generated by a load instruction that loads a single
>>>> general-purpose register and is aligned to the size of the read in the
>>>> instruction is single-copy atomic.
>>>>
>>>> -A write that is generated by a store instruction that stores a single
>>>> general-purpose register and is aligned to the size of the write in the
>>>> instruction is single-copy atomic"
>>>>
>>>> Thus, one needs to check for alignment when performing atomic operations.
>>>> However, as ASSERT() are disabled in production builds, so one needs to
>>> This seems to be a bit out of context because you don't really explain
>>> that ASSERT() would be used. Also...
>>>
>>>> run the debug builds to catch any unaligned access during atomic
>>>> operations.
>>>> Enforcing alignment checks during production build has quite a high
>>>> overhead.
>>>>
>>>> On AArch32, the alignment check is enabled at boot time by setting
>>>> HSCTLR.A bit.
>>>> ("HSCTLR, Hyp System Control Register").
>>>> However, on AArch64, memcpy()/memset() may be used on 64bit unaligned
>>>> addresses.
>>>> Thus, one does not wish to enable alignment check at boot time.
>>> ... to me this paragraph should be first because this explained why we
>>> can't check in production. So how about the following commit message:
>>>
>>> "
>>> xen/arm: Enforce alignment check in debug build for {read, write}_atomic
>>>
>>> Xen provides helper to atomically read/write memory (see {read,
>>> write}_atomic()). Those helpers can only work if the address is aligned
>>> to the size of the access (see B2.2.1 ARM DDI 08476I.a).
>>>
>>> On Arm32, the alignment is already enforced by the processor because
>>> HSCTLR.A bit is set (it enforce alignment for every access). For Arm64,
>>> this bit is not set because memcpy()/memset() can use unaligned access
>>> for performance reason (the implementation is taken from the Cortex
>>> library).
>>>
>>> To avoid any overhead in production build, the alignment will only be
>>> checked using an ASSERT. Note that it might be possible to do it in
>>> production build using the acquire/exclusive version of load/store. But
>>> this is left to a follow-up (if wanted).
>>> "
>> This reads very well.
>>
>>> While trying to find a justification for the debug version. I was
>>> wondering whether we could actually use the acquire or exclusive
>>> version. I am not entirely sure about the overhead.
>>>
>>>> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
>>>> Reviewed-by: Michal Orzel <michal.orzel@amd.com>
>>>> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
>>>>
>>>> I think I can keep R-b as there is no code change ?
>>> My signed-off-by will need to be added for the commit message I proposed
>>> above. So I would like Bertrand/Michal to confirm they are happy with it
>>> (I don't usually add my reviewed-by/acked-by for patch where my
>>> signed-off-by is added).
>>>
>> You can keep my Rb and Bertrand or Stefano can ack it, so that we can avoid
>> acking a patch by one of the authors.
> I will check and ack the v3 once out.

Many thanks for this.

I have sent out "[XEN v3] xen/arm: Enforce alignment check in debug 
build for {read, write}_atomic"

- Ayan

>
> Cheers
> Bertrand
>
>>> Cheers,
>>>
>>> --
>>> Julien Grall
>> ~Michal

Patch

diff --git a/xen/arch/arm/include/asm/atomic.h b/xen/arch/arm/include/asm/atomic.h
index 1f60c28b1b..64314d59b3 100644
--- a/xen/arch/arm/include/asm/atomic.h
+++ b/xen/arch/arm/include/asm/atomic.h
@@ -78,6 +78,7 @@  static always_inline void read_atomic_size(const volatile void *p,
                                            void *res,
                                            unsigned int size)
 {
+    ASSERT(IS_ALIGNED((vaddr_t)p, size));
     switch ( size )
     {
     case 1:
@@ -102,6 +103,7 @@  static always_inline void write_atomic_size(volatile void *p,
                                             void *val,
                                             unsigned int size)
 {
+    ASSERT(IS_ALIGNED((vaddr_t)p, size));
     switch ( size )
     {
     case 1:
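
For readers unfamiliar with the helper used above, the alignment test boils
down to a power-of-two mask check; a stand-alone equivalent is shown below
(Xen's own IS_ALIGNED() may differ in detail):

    /* True if addr is a multiple of size; valid for power-of-two sizes,
     * which covers the access sizes handled by {read,write}_atomic_size(). */
    #define DEMO_IS_ALIGNED(addr, size)  (((addr) & ((size) - 1)) == 0)

    /* e.g. DEMO_IS_ALIGNED(0x1004, 4) is true, DEMO_IS_ALIGNED(0x1006, 4) is false */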