crypto: memneq: avoid implicit unaligned accesses

Message ID: 20220119093109.1567314-1-ardb@kernel.org
State: Accepted
Delegated to: Herbert Xu
Series: crypto: memneq: avoid implicit unaligned accesses

Commit Message

Ard Biesheuvel Jan. 19, 2022, 9:31 a.m. UTC
The C standard does not support dereferencing pointers that are not
aligned with respect to the pointed-to type, and doing so is technically
undefined behavior, even if the underlying hardware supports it.

This means that conditionally dereferencing such pointers based on
whether CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS=y is not the right thing
to do, and actually results in alignment faults on ARM, which are fixed
up on a slow path. Instead, we should use the unaligned accessors in
such cases: on architectures that don't care about alignment, they will
result in identical codegen whereas, e.g., codegen on ARM will avoid
doubleword loads and stores but use ordinary ones, which are able to
tolerate misalignment.

Link: https://lore.kernel.org/linux-crypto/CAHk-=wiKkdYLY0bv+nXrcJz3NH9mAqPAafX7PpW5EwVtxsEu7Q@mail.gmail.com/
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 crypto/memneq.c | 22 +++++++++++++++-------
 1 file changed, 15 insertions(+), 7 deletions(-)
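
The distinction the patch relies on is worth spelling out: the cast form promises alignment to the compiler, while the accessor form does not. Below is a minimal userspace sketch of the idea (load_ulong() is an illustrative stand-in, not the kernel's actual get_unaligned() implementation): copy through memcpy() into an aligned temporary instead of casting and dereferencing. Compilers fold the memcpy() into a single load where the hardware tolerates misalignment, and into byte loads elsewhere, so the access is well defined either way.

#include <stdio.h>
#include <string.h>

static unsigned long load_ulong(const void *p)
{
	unsigned long v;

	memcpy(&v, p, sizeof(v));	/* no misaligned dereference */
	return v;
}

int main(void)
{
	unsigned char buf[sizeof(unsigned long) + 1] = { 0 };

	buf[1] = 0xff;

	/*
	 * buf + 1 is almost certainly not long-aligned, so the cast in
	 * *(unsigned long *)(buf + 1) would be undefined behavior; the
	 * memcpy()-based load is always well defined.
	 */
	printf("%lx\n", load_ulong(buf + 1));
	return 0;
}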

Comments

Arnd Bergmann Jan. 19, 2022, 10:17 a.m. UTC | #1
On Wed, Jan 19, 2022 at 10:31 AM Ard Biesheuvel <ardb@kernel.org> wrote:
> [...]

Reviewed-by: Arnd Bergmann <arnd@arndb.de>
Eric Biggers Jan. 21, 2022, 7:51 p.m. UTC | #2
On Wed, Jan 19, 2022 at 10:31:09AM +0100, Ard Biesheuvel wrote:
> [...]

Reviewed-by: Eric Biggers <ebiggers@google.com>

- Eric
Herbert Xu Jan. 28, 2022, 6:27 a.m. UTC | #3
On Wed, Jan 19, 2022 at 10:31:09AM +0100, Ard Biesheuvel wrote:
> [...]

Patch applied.  Thanks.
Patch

diff --git a/crypto/memneq.c b/crypto/memneq.c
index afed1bd16aee..fb11608b1ec1 100644
--- a/crypto/memneq.c
+++ b/crypto/memneq.c
@@ -60,6 +60,7 @@ 
  */
 
 #include <crypto/algapi.h>
+#include <asm/unaligned.h>
 
 #ifndef __HAVE_ARCH_CRYPTO_MEMNEQ
 
@@ -71,7 +72,8 @@  __crypto_memneq_generic(const void *a, const void *b, size_t size)
 
 #if defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)
 	while (size >= sizeof(unsigned long)) {
-		neq |= *(unsigned long *)a ^ *(unsigned long *)b;
+		neq |= get_unaligned((unsigned long *)a) ^
+		       get_unaligned((unsigned long *)b);
 		OPTIMIZER_HIDE_VAR(neq);
 		a += sizeof(unsigned long);
 		b += sizeof(unsigned long);
@@ -95,18 +97,24 @@  static inline unsigned long __crypto_memneq_16(const void *a, const void *b)
 
 #ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
 	if (sizeof(unsigned long) == 8) {
-		neq |= *(unsigned long *)(a)   ^ *(unsigned long *)(b);
+		neq |= get_unaligned((unsigned long *)a) ^
+		       get_unaligned((unsigned long *)b);
 		OPTIMIZER_HIDE_VAR(neq);
-		neq |= *(unsigned long *)(a+8) ^ *(unsigned long *)(b+8);
+		neq |= get_unaligned((unsigned long *)(a + 8)) ^
+		       get_unaligned((unsigned long *)(b + 8));
 		OPTIMIZER_HIDE_VAR(neq);
 	} else if (sizeof(unsigned int) == 4) {
-		neq |= *(unsigned int *)(a)    ^ *(unsigned int *)(b);
+		neq |= get_unaligned((unsigned int *)a) ^
+		       get_unaligned((unsigned int *)b);
 		OPTIMIZER_HIDE_VAR(neq);
-		neq |= *(unsigned int *)(a+4)  ^ *(unsigned int *)(b+4);
+		neq |= get_unaligned((unsigned int *)(a + 4)) ^
+		       get_unaligned((unsigned int *)(b + 4));
 		OPTIMIZER_HIDE_VAR(neq);
-		neq |= *(unsigned int *)(a+8)  ^ *(unsigned int *)(b+8);
+		neq |= get_unaligned((unsigned int *)(a + 8)) ^
+		       get_unaligned((unsigned int *)(b + 8));
 		OPTIMIZER_HIDE_VAR(neq);
-		neq |= *(unsigned int *)(a+12) ^ *(unsigned int *)(b+12);
+		neq |= get_unaligned((unsigned int *)(a + 12)) ^
+		       get_unaligned((unsigned int *)(b + 12));
 		OPTIMIZER_HIDE_VAR(neq);
 	} else
 #endif /* CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS */
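
For context, a hypothetical caller of crypto_memneq() looks like the sketch below (check_tag() and its error handling are illustrative, not taken from the kernel tree). crypto_memneq() returns nonzero if the buffers differ, and the OPTIMIZER_HIDE_VAR() calls in the code above keep the compiler from turning the comparison back into an early-exit memcmp(), which is what makes it safe for comparing secrets such as authentication tags.

#include <crypto/algapi.h>
#include <linux/errno.h>
#include <linux/types.h>

/* Verify a received authentication tag in constant time. */
static int check_tag(const u8 *expected, const u8 *received, size_t len)
{
	if (crypto_memneq(expected, received, len))
		return -EBADMSG;	/* tags differ */

	return 0;
}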