
[for-next/fixes] arm64/mm: Fix false-positive !virt_addr_valid() for kernel image

Message ID 90667b2b7f773308318261f96ebefd1a67133c4c.1732464395.git.lukas@wunner.de (mailing list archive)
State New

Commit Message

Lukas Wunner Nov. 24, 2024, 4:15 p.m. UTC
Zorro reports a false-positive BUG_ON() when running crypto selftests on
boot:  Since commit 1e562deacecc ("crypto: rsassa-pkcs1 - Migrate to
sig_alg backend"), test_sig_one() invokes an RSA verify operation with a
test vector in the kernel's .rodata section.  The test vector is passed
to sg_set_buf(), which performs a virt_addr_valid() check.

On arm64, virt_addr_valid() returns false for kernel image addresses
such as this one, even though they're valid virtual addresses.
x86 returns true for kernel image addresses, so the BUG_ON() does not
occur there.  In fact, x86 has been doing so for 16 years, i.e. since
commit af5c2bd16ac2 ("x86: fix virt_addr_valid() with
CONFIG_DEBUG_VIRTUAL=y, v2").

Do the same on arm64 to avoid the false-positive BUG_ON() and to achieve
consistent virt_addr_valid() behavior across arches.

Silence a WARN splat in __virt_to_phys() which occurs once the BUG_ON()
is avoided.

The is_kernel_address() helper introduced herein cannot be put directly
in the virt_addr_valid() macro:  It has to be part of the kernel proper
so that it has visibility of the _text and _end symbols (referenced
through KERNEL_START and KERNEL_END).  These symbols are not exported,
so modules expanding the virt_addr_valid() macro could not access them.
For almost all invocations of virt_addr_valid(), __is_lm_address()
returns true, so jumping to the is_kernel_address() helper hardly ever
occurs and its performance impact is thus negligible.

Likewise, calling is_kernel_address() from the functions in physaddr.c
ought to be fine as they depend on CONFIG_DEBUG_VIRTUAL=y, which is
explicitly described as "costly" in the Kconfig help text.  (And this
doesn't add much cost really.)

Abridged stack trace:

  kernel BUG at include/linux/scatterlist.h:187!
  sg_init_one()
  rsassa_pkcs1_verify()
  test_sig_one()
  alg_test_sig()
  alg_test()
  cryptomgr_test()

Fixes: 1e562deacecc ("crypto: rsassa-pkcs1 - Migrate to sig_alg backend")
Reported-by: Zorro Lang <zlang@redhat.com>
Closes: https://lore.kernel.org/r/20241122045106.tzhvm2wrqvttub6k@dell-per750-06-vm-08.rhts.eng.pek2.redhat.com/
Signed-off-by: Lukas Wunner <lukas@wunner.de>
---
Just from looking at the code it seems arm's virt_addr_valid() returns
true for kernel image addresses, so apparently arm64 is the odd man out.

Note that this fix would have obviated the need for commit c02e7c5c6da8
("arm64/mm: use lm_alias() with addresses passed to memblock_free()").

 arch/arm64/include/asm/memory.h | 6 +++++-
 arch/arm64/mm/init.c            | 7 +++++++
 arch/arm64/mm/physaddr.c        | 6 +++---
 3 files changed, 15 insertions(+), 4 deletions(-)

Comments

Ard Biesheuvel Nov. 24, 2024, 4:38 p.m. UTC | #1
On Sun, 24 Nov 2024 at 17:16, Lukas Wunner <lukas@wunner.de> wrote:
>
> [commit message quoted in full; snipped]
> Just from looking at the code it seems arm's virt_addr_valid() returns
> true for kernel image addresses, so apparently arm64 is the odd man out.
>

That is because ARM maps the kernel in the linear map, whereas arm64
maps the kernel in the vmalloc space.

vmalloc addresses cannot be used for DMA, which is why
virt_addr_valid() rejects them. On arm64, the same applies to the
kernel image, as well as the vmap'ed stack.

> Note that this fix would have obviated the need for commit c02e7c5c6da8
> ("arm64/mm: use lm_alias() with addresses passed to memblock_free()").
>

Your 'fix' will break other stuff: virt_addr_valid() is used, e.g., to
decide whether __pa() may be used on the input VA. __pa() applies a
fixed translation to the input rather than walking the page tables to
obtain the physical address.


> [patch snipped]
Ard Biesheuvel Nov. 24, 2024, 5:13 p.m. UTC | #2
On Sun, 24 Nov 2024 at 17:38, Ard Biesheuvel <ardb@kernel.org> wrote:
>
> [earlier quoting snipped]
> Your 'fix' will break other stuff: virt_addr_valid() is used, e.g., to
> decide whether __pa() may be used on the input VA. __pa() applies a
> fixed translation to the input rather than walking the page tables to
> obtain the physical address.
>

Apologies, I replied a little too quickly.

__pa() on arm64 permits linear addresses and kernel image addresses,
and so the kernel image is treated differently from the vmap'ed stack
and other vmalloc addresses.

However, that doesn't mean doing DMA from the kernel image is a great
idea. Allocations in the linear map are rounded up to cacheline size
to ensure that they are safe for non-coherent DMA, but this does not
apply to the kernel image. .rodata should still be safe in this
regard, but the general idea of allowing kernel image addresses in
places where DMA'able virtual addresses are expected is something we
should consider with care.

Patch

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index b9b9929..bb83315 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -416,9 +416,13 @@  static inline unsigned long virt_to_pfn(const void *kaddr)
 })
 #endif /* CONFIG_DEBUG_VIRTUAL */
 
+bool is_kernel_address(unsigned long x);
+
 #define virt_addr_valid(addr)	({					\
 	__typeof__(addr) __addr = __tag_reset(addr);			\
-	__is_lm_address(__addr) && pfn_is_map_memory(virt_to_pfn(__addr));	\
+	(__is_lm_address(__addr) ||					\
+	 is_kernel_address((unsigned long)__addr)) &&			\
+	pfn_is_map_memory(virt_to_pfn(__addr));				\
 })
 
 void dump_mem_limit(void);
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index d21f67d..2e8a00f 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -156,6 +156,13 @@  static void __init zone_sizes_init(void)
 	free_area_init(max_zone_pfns);
 }
 
+bool is_kernel_address(unsigned long x)
+{
+	return x >= (unsigned long) KERNEL_START &&
+	       x <= (unsigned long) KERNEL_END;
+}
+EXPORT_SYMBOL(is_kernel_address);
+
 int pfn_is_map_memory(unsigned long pfn)
 {
 	phys_addr_t addr = PFN_PHYS(pfn);
diff --git a/arch/arm64/mm/physaddr.c b/arch/arm64/mm/physaddr.c
index cde44c1..2d6755b 100644
--- a/arch/arm64/mm/physaddr.c
+++ b/arch/arm64/mm/physaddr.c
@@ -9,7 +9,8 @@ 
 
 phys_addr_t __virt_to_phys(unsigned long x)
 {
-	WARN(!__is_lm_address(__tag_reset(x)),
+	WARN(!__is_lm_address(__tag_reset(x)) &&
+	     !is_kernel_address(__tag_reset(x)),
 	     "virt_to_phys used for non-linear address: %pK (%pS)\n",
 	      (void *)x,
 	      (void *)x);
@@ -24,8 +25,7 @@  phys_addr_t __phys_addr_symbol(unsigned long x)
 	 * This is bounds checking against the kernel image only.
 	 * __pa_symbol should only be used on kernel symbol addresses.
 	 */
-	VIRTUAL_BUG_ON(x < (unsigned long) KERNEL_START ||
-		       x > (unsigned long) KERNEL_END);
+	VIRTUAL_BUG_ON(!is_kernel_address(x));
 	return __pa_symbol_nodebug(x);
 }
 EXPORT_SYMBOL(__phys_addr_symbol);