
[v2] arm64/mm: use lm_alias() with addresses passed to memblock_free()

Message ID: 20240905152935.4156469-1-joey.gouly@arm.com
State: New

Commit Message

Joey Gouly Sept. 5, 2024, 3:29 p.m. UTC
The pointer argument to memblock_free() needs to be a linear map address, but
in free_initmem() we pass __init_begin/__init_end, which are kernel image
addresses.

This results in a warning at boot when the kernel is built with
CONFIG_DEBUG_VIRTUAL=y:

    virt_to_phys used for non-linear address: ffff800081270000 (set_reset_devices+0x0/0x10)
    WARNING: CPU: 0 PID: 1 at arch/arm64/mm/physaddr.c:12 __virt_to_phys+0x54/0x70
    Modules linked in:
    CPU: 0 UID: 0 PID: 1 Comm: swapper/0 Not tainted 6.11.0-rc6-next-20240905 #5810 b1ebb0ad06653f35ce875413d5afad24668df3f3
    Hardware name: FVP Base RevC (DT)
    pstate: 2161402005 (nZCv daif +PAN -UAO -TCO +DIT -SSBS BTYPE=--)
    pc : __virt_to_phys+0x54/0x70
    lr : __virt_to_phys+0x54/0x70
    sp : ffff80008169be20
    ...
    Call trace:
     __virt_to_phys+0x54/0x70
     memblock_free+0x18/0x30
     free_initmem+0x3c/0x9c
     kernel_init+0x30/0x1cc
     ret_from_fork+0x10/0x20
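
The check that fires is the CONFIG_DEBUG_VIRTUAL variant of __virt_to_phys()
in arch/arm64/mm/physaddr.c. Paraphrased (details vary between kernel
versions), it warns whenever the virtual address is not a linear map address:

    /* Paraphrase of arch/arm64/mm/physaddr.c, CONFIG_DEBUG_VIRTUAL=y;
     * not verbatim. */
    phys_addr_t __virt_to_phys(unsigned long x)
    {
            /* __init_begin is a kernel image address, so this fires. */
            WARN(!__is_lm_address(__tag_reset(x)),
                 "virt_to_phys used for non-linear address: %pK (%pS)\n",
                 (void *)x, (void *)x);

            return __virt_to_phys_nodebug(x);
    }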

Fix this by having free_initmem() convert the pointers via lm_alias().

Fixes: 1db9716d4487 ("arm64/mm: Delete __init region from memblock.reserved")
Signed-off-by: Joey Gouly <joey.gouly@arm.com>
Suggested-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Rong Qianfeng <rongqianfeng@vivo.com>
---
 arch/arm64/mm/init.c | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)
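
For reference, lm_alias() converts a kernel image (symbol) address into its
linear map alias by way of the physical address. Its definition in
include/linux/mm.h is essentially the following (a sketch; check your tree
for the exact form):

    #ifndef lm_alias
    #define lm_alias(x)     __va(__pa_symbol(x))
    #endif

On arm64 the kernel image and the linear map are two separate virtual
mappings of the same physical memory, so a symbol such as __init_begin has
two virtual addresses; only the linear map one may be passed to
memblock_free() and virt_to_phys().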

Comments

Rong Qianfeng Sept. 6, 2024, 6:47 a.m. UTC | #1
Hi Joey,

On 2024/9/5 23:29, Joey Gouly wrote:
> [...]
> @@ -411,14 +411,16 @@ void __init mem_init(void)
>
>   void free_initmem(void)
>   {
> -       unsigned long aligned_begin = ALIGN_DOWN((u64)__init_begin, PAGE_SIZE);
> -       unsigned long aligned_end = ALIGN((u64)__init_end, PAGE_SIZE);
> +       void *lm_init_begin = lm_alias(__init_begin);
> +       void *lm_init_end = lm_alias(__init_end);
> +
> +       WARN_ON(!IS_ALIGNED((unsigned long)lm_init_begin, PAGE_SIZE));
> +       WARN_ON(!IS_ALIGNED((unsigned long)lm_init_end, PAGE_SIZE));
>
>          /* Delete __init region from memblock.reserved. */
> -       memblock_free((void *)aligned_begin, aligned_end - aligned_begin);
> +       memblock_free(lm_init_begin, lm_init_end - lm_init_begin);
Thank you for discovering and fixing this hidden problem; I will pay more
attention to this in the future. Thank you again.
Best Regards,
Qianfeng
Will Deacon Sept. 6, 2024, 12:29 p.m. UTC | #2
On Thu, 05 Sep 2024 16:29:35 +0100, Joey Gouly wrote:
> The pointer argument to memblock_free() needs to be a linear map address, but
> in free_initmem() we pass __init_begin/__init_end, which are kernel image
> addresses.
> 
> [...]

Applied to arm64 (for-next/mm), thanks!

[1/1] arm64/mm: use lm_alias() with addresses passed to memblock_free()
      https://git.kernel.org/arm64/c/c02e7c5c6da8

Cheers,

Patch

diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 0bde16aadc83..27a32ff15412 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -411,14 +411,16 @@  void __init mem_init(void)
 
 void free_initmem(void)
 {
-	unsigned long aligned_begin = ALIGN_DOWN((u64)__init_begin, PAGE_SIZE);
-	unsigned long aligned_end = ALIGN((u64)__init_end, PAGE_SIZE);
+	void *lm_init_begin = lm_alias(__init_begin);
+	void *lm_init_end = lm_alias(__init_end);
+
+	WARN_ON(!IS_ALIGNED((unsigned long)lm_init_begin, PAGE_SIZE));
+	WARN_ON(!IS_ALIGNED((unsigned long)lm_init_end, PAGE_SIZE));
 
 	/* Delete __init region from memblock.reserved. */
-	memblock_free((void *)aligned_begin, aligned_end - aligned_begin);
+	memblock_free(lm_init_begin, lm_init_end - lm_init_begin);
 
-	free_reserved_area(lm_alias(__init_begin),
-			   lm_alias(__init_end),
+	free_reserved_area(lm_init_begin, lm_init_end,
 			   POISON_FREE_INITMEM, "unused kernel");
 	/*
 	 * Unmap the __init region but leave the VM area in place. This
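
The two aliases are easy to inspect with a throwaway initcall. This is a
hypothetical debugging sketch, not part of the patch; show_init_aliases() is
an invented name:

    #include <linux/init.h>
    #include <linux/mm.h>           /* lm_alias() */
    #include <linux/printk.h>
    #include <asm/sections.h>       /* __init_begin */

    /* Print both virtual addresses of __init_begin to show why
     * lm_alias() is needed before memblock_free()/virt_to_phys(). */
    static int __init show_init_aliases(void)
    {
            pr_info("__init_begin image address:    %px\n", __init_begin);
            pr_info("__init_begin linear map alias: %px\n",
                    lm_alias(__init_begin));
            return 0;
    }
    early_initcall(show_init_aliases);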