
[XEN,v3] xen/arm: Enforce alignment check in debug build for {read, write}_atomic

Message ID 20221108094503.40253-1-ayan.kumar.halder@amd.com (mailing list archive)
State New, archived
Series [XEN,v3] xen/arm: Enforce alignment check in debug build for {read, write}_atomic

Commit Message

Ayan Kumar Halder Nov. 8, 2022, 9:45 a.m. UTC
From: Ayan Kumar Halder <ayankuma@amd.com>

Xen provides helpers to atomically read/write memory (see {read,
write}_atomic()). Those helpers can only work if the address is aligned
to the size of the access (see B2.2.1 ARM DDI 0487I.a).

On Arm32, the alignment is already enforced by the processor because
the HSCTLR.A bit is set (it enforces alignment for every access). For
Arm64, this bit is not set because memcpy()/memset() can use unaligned
accesses for performance reasons (the implementation is taken from the
Cortex library).

To avoid any overhead in production builds, the alignment will only be
checked using an ASSERT. Note that it might be possible to do it in
production builds using the acquire/exclusive versions of load/store.
But this is left to a follow-up (if wanted).

Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
Signed-off-by: Julien Grall <julien@xen.org>
Reviewed-by: Michal Orzel <michal.orzel@amd.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
---

Changes from :-
v1 - 1. Referred to the latest Arm Architecture Reference Manual in the commit
message.

v2 - 1. Updated commit message to specify the reason for using ASSERT().
2. Added Julien's SoB.

 xen/arch/arm/include/asm/atomic.h | 2 ++
 1 file changed, 2 insertions(+)
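
For reference, the added check boils down to a power-of-two mask test that
only exists in debug builds (ASSERT() is compiled out otherwise). A minimal
stand-alone sketch of the idea, assuming the usual IS_ALIGNED() definition
rather than quoting the Xen headers verbatim:

    /* True when addr is a multiple of size (size must be a power of two). */
    #define IS_ALIGNED(addr, size)  (((addr) & ((size) - 1)) == 0)

    /*
     * Example: a 4-byte access at 0x1002 fails the test (0x1002 & 3 == 2),
     * while the same access at 0x1004 passes, so the ASSERT only fires for
     * genuinely misaligned callers.
     */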

Comments

Bertrand Marquis Nov. 8, 2022, 11:22 a.m. UTC | #1
Hi Ayan,

> On 8 Nov 2022, at 09:45, Ayan Kumar Halder <ayan.kumar.halder@amd.com> wrote:
> 
> From: Ayan Kumar Halder <ayankuma@amd.com>
> 
> Xen provides helpers to atomically read/write memory (see {read,
> write}_atomic()). Those helpers can only work if the address is aligned
> to the size of the access (see B2.2.1 ARM DDI 0487I.a).
> 
> On Arm32, the alignment is already enforced by the processor because
> the HSCTLR.A bit is set (it enforces alignment for every access). For
> Arm64, this bit is not set because memcpy()/memset() can use unaligned
> accesses for performance reasons (the implementation is taken from the
> Cortex library).
> 
> To avoid any overhead in production builds, the alignment will only be
> checked using an ASSERT. Note that it might be possible to do it in
> production builds using the acquire/exclusive versions of load/store.
> But this is left to a follow-up (if wanted).
> 
> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
> Signed-off-by: Julien Grall <julien@xen.org>
> Reviewed-by: Michal Orzel <michal.orzel@amd.com>
> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

I confirm my Reviewed-by.

Side note: You should actually have removed it :-)

Cheers
Bertrand

> ---
> 
> Changes from :-
> v1 - 1. Referred to the latest Arm Architecture Reference Manual in the commit
> message.
> 
> v2 - 1. Updated commit message to specify the reason for using ASSERT().
> 2. Added Julien's SoB.
> 
> xen/arch/arm/include/asm/atomic.h | 2 ++
> 1 file changed, 2 insertions(+)
> 
> diff --git a/xen/arch/arm/include/asm/atomic.h b/xen/arch/arm/include/asm/atomic.h
> index 1f60c28b1b..64314d59b3 100644
> --- a/xen/arch/arm/include/asm/atomic.h
> +++ b/xen/arch/arm/include/asm/atomic.h
> @@ -78,6 +78,7 @@ static always_inline void read_atomic_size(const volatile void *p,
>                                            void *res,
>                                            unsigned int size)
> {
> +    ASSERT(IS_ALIGNED((vaddr_t)p, size));
>     switch ( size )
>     {
>     case 1:
> @@ -102,6 +103,7 @@ static always_inline void write_atomic_size(volatile void *p,
>                                             void *val,
>                                             unsigned int size)
> {
> +    ASSERT(IS_ALIGNED((vaddr_t)p, size));
>     switch ( size )
>     {
>     case 1:
> -- 
> 2.17.1
>
Stefano Stabellini Nov. 10, 2022, midnight UTC | #2
On Tue, 8 Nov 2022, Ayan Kumar Halder wrote:
> From: Ayan Kumar Halder <ayankuma@amd.com>
> 
> Xen provides helpers to atomically read/write memory (see {read,
> write}_atomic()). Those helpers can only work if the address is aligned
> to the size of the access (see B2.2.1 ARM DDI 0487I.a).
> 
> On Arm32, the alignment is already enforced by the processor because
> the HSCTLR.A bit is set (it enforces alignment for every access). For
> Arm64, this bit is not set because memcpy()/memset() can use unaligned
> accesses for performance reasons (the implementation is taken from the
> Cortex library).
> 
> To avoid any overhead in production builds, the alignment will only be
> checked using an ASSERT. Note that it might be possible to do it in
> production builds using the acquire/exclusive versions of load/store.
> But this is left to a follow-up (if wanted).
> 
> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
> Signed-off-by: Julien Grall <julien@xen.org>
> Reviewed-by: Michal Orzel <michal.orzel@amd.com>
> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
> 
> Changes from :-
> v1 - 1. Referred to the latest Arm Architecture Reference Manual in the commit
> message.
> 
> v2 - 1. Updated commit message to specify the reason for using ASSERT().
> 2. Added Julien's SoB.
> 
>  xen/arch/arm/include/asm/atomic.h | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/xen/arch/arm/include/asm/atomic.h b/xen/arch/arm/include/asm/atomic.h
> index 1f60c28b1b..64314d59b3 100644
> --- a/xen/arch/arm/include/asm/atomic.h
> +++ b/xen/arch/arm/include/asm/atomic.h
> @@ -78,6 +78,7 @@ static always_inline void read_atomic_size(const volatile void *p,
>                                             void *res,
>                                             unsigned int size)
>  {
> +    ASSERT(IS_ALIGNED((vaddr_t)p, size));
>      switch ( size )
>      {
>      case 1:
> @@ -102,6 +103,7 @@ static always_inline void write_atomic_size(volatile void *p,
>                                              void *val,
>                                              unsigned int size)
>  {
> +    ASSERT(IS_ALIGNED((vaddr_t)p, size));
>      switch ( size )
>      {
>      case 1:
> -- 
> 2.17.1
>
Julien Grall Nov. 10, 2022, 7:03 p.m. UTC | #3
Hi,

On 10/11/2022 00:00, Stefano Stabellini wrote:
> On Tue, 8 Nov 2022, Ayan Kumar Halder wrote:
>> From: Ayan Kumar Halder <ayankuma@amd.com>
>>
>> Xen provides helpers to atomically read/write memory (see {read,
>> write}_atomic()). Those helpers can only work if the address is aligned
>> to the size of the access (see B2.2.1 ARM DDI 0487I.a).
>>
>> On Arm32, the alignment is already enforced by the processor because
>> the HSCTLR.A bit is set (it enforces alignment for every access). For
>> Arm64, this bit is not set because memcpy()/memset() can use unaligned
>> accesses for performance reasons (the implementation is taken from the
>> Cortex library).
>>
>> To avoid any overhead in production builds, the alignment will only be
>> checked using an ASSERT. Note that it might be possible to do it in
>> production builds using the acquire/exclusive versions of load/store.
>> But this is left to a follow-up (if wanted).
>>
>> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
>> Signed-off-by: Julien Grall <julien@xen.org>
>> Reviewed-by: Michal Orzel <michal.orzel@amd.com>
>> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
> 
> Acked-by: Stefano Stabellini <sstabellini@kernel.org>

I have pushed this patch to a branch for-next/4.18 on my public repo. I
will apply the patch to staging once the tree has re-opened.

Cheers,

Patch

diff --git a/xen/arch/arm/include/asm/atomic.h b/xen/arch/arm/include/asm/atomic.h
index 1f60c28b1b..64314d59b3 100644
--- a/xen/arch/arm/include/asm/atomic.h
+++ b/xen/arch/arm/include/asm/atomic.h
@@ -78,6 +78,7 @@  static always_inline void read_atomic_size(const volatile void *p,
                                            void *res,
                                            unsigned int size)
 {
+    ASSERT(IS_ALIGNED((vaddr_t)p, size));
     switch ( size )
     {
     case 1:
@@ -102,6 +103,7 @@  static always_inline void write_atomic_size(volatile void *p,
                                             void *val,
                                             unsigned int size)
 {
+    ASSERT(IS_ALIGNED((vaddr_t)p, size));
     switch ( size )
     {
     case 1:
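
As the commit message notes, a production-build check could instead rely on
the acquire/exclusive forms of load/store, which on Arm64 fault on misaligned
addresses even though SCTLR_EL2.A is left clear. A hypothetical sketch of
that idea for a 32-bit read (illustration only, not part of this patch):

    /*
     * Hypothetical helper: LDAR generates an alignment fault on a
     * misaligned address regardless of the SCTLR_EL2.A setting.
     */
    static inline uint32_t read_u32_acquire(const volatile uint32_t *p)
    {
        uint32_t val;

        asm volatile("ldar %w0, [%1]" : "=r" (val) : "r" (p) : "memory");

        return val;
    }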