
[v1] arm64/mm: use lm_alias() with addresses passed to memblock_free()

Message ID 20240905130115.4135490-1-joey.gouly@arm.com (mailing list archive)
State New, archived
Series [v1] arm64/mm: use lm_alias() with addresses passed to memblock_free()

Commit Message

Joey Gouly Sept. 5, 2024, 1:01 p.m. UTC
__init_begin and __init_end are kernel image addresses; use lm_alias() to
convert them to linear mapping addresses.
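
For reference, lm_alias() is defined in include/linux/mm.h as a small wrapper
that goes via the physical address of a kernel-image symbol to reach its
linear-map alias (a sketch; the exact form may vary slightly between kernel
versions):

/* kernel-image symbol -> physical address -> linear map address */
#define lm_alias(x)	__va(__pa_symbol(x))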

This fixes the following splat at boot time (seen with CONFIG_DEBUG_VIRTUAL=y):

[    1.574253] virt_to_phys used for non-linear address: ffff800081270000 (set_reset_devices+0x0/0x10)
[    1.574272] WARNING: CPU: 0 PID: 1 at arch/arm64/mm/physaddr.c:12 __virt_to_phys+0x54/0x70
[    1.574291] Modules linked in:
[    1.574300] CPU: 0 UID: 0 PID: 1 Comm: swapper/0 Not tainted 6.11.0-rc6-next-20240905 #5810 b1ebb0ad06653f35ce875413d5afad24668df3f3
[    1.574316] Hardware name: FVP Base RevC (DT)
[    1.574324] pstate: 61402005 (nZCv daif +PAN -UAO -TCO +DIT -SSBS BTYPE=--)
[    1.574338] pc : __virt_to_phys+0x54/0x70
[    1.574351] lr : __virt_to_phys+0x54/0x70
[    1.574365] sp : ffff80008169be20
[...]
[    1.574568] Call trace:
[    1.574574]  __virt_to_phys+0x54/0x70
[    1.574588]  memblock_free+0x18/0x30
[    1.574599]  free_initmem+0x3c/0x9c
[    1.574610]  kernel_init+0x30/0x1cc
[    1.574624]  ret_from_fork+0x10/0x20
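
For context, the splat fires because memblock_free() converts its pointer
argument to a physical address via __pa(), which under CONFIG_DEBUG_VIRTUAL=y
goes through __virt_to_phys() and warns for non-linear-map addresses. A sketch
of the wrapper in mm/memblock.c, quoted from memory (details may differ by
kernel version):

void __init_memblock memblock_free(void *ptr, size_t size)
{
	if (ptr)
		memblock_phys_free(__pa(ptr), size);	/* __pa() expects a linear map address */
}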

Fixes: 1db9716d4487 ("arm64/mm: Delete __init region from memblock.reserved")
Signed-off-by: Joey Gouly <joey.gouly@arm.com>
Suggested-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Rong Qianfeng <rongqianfeng@vivo.com>
---

Saw this when trying out next-20240905.

 arch/arm64/mm/init.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

Comments

Mark Rutland Sept. 5, 2024, 2:50 p.m. UTC | #1
On Thu, Sep 05, 2024 at 02:01:15PM +0100, Joey Gouly wrote:
> __init_begin and __init_end are kernel image addresses; use lm_alias() to
> convert them to linear mapping addresses.
> 
> This fixes the following splat at boot time (seen with CONFIG_DEBUG_VIRTUAL=y):
> 
> [    1.574253] virt_to_phys used for non-linear address: ffff800081270000 (set_reset_devices+0x0/0x10)
> [    1.574272] WARNING: CPU: 0 PID: 1 at arch/arm64/mm/physaddr.c:12 __virt_to_phys+0x54/0x70
> [    1.574291] Modules linked in:
> [    1.574300] CPU: 0 UID: 0 PID: 1 Comm: swapper/0 Not tainted 6.11.0-rc6-next-20240905 #5810 b1ebb0ad06653f35ce875413d5afad24668df3f3
> [    1.574316] Hardware name: FVP Base RevC (DT)
> [    1.574324] pstate: 61402005 (nZCv daif +PAN -UAO -TCO +DIT -SSBS BTYPE=--)
> [    1.574338] pc : __virt_to_phys+0x54/0x70
> [    1.574351] lr : __virt_to_phys+0x54/0x70
> [    1.574365] sp : ffff80008169be20
> [...]
> [    1.574568] Call trace:
> [    1.574574]  __virt_to_phys+0x54/0x70
> [    1.574588]  memblock_free+0x18/0x30
> [    1.574599]  free_initmem+0x3c/0x9c
> [    1.574610]  kernel_init+0x30/0x1cc
> [    1.574624]  ret_from_fork+0x10/0x20

It'd be good to introduce the problem first, and we can trim the
timestamps, e.g.

| The pointer argument to memblock_free() needs to be a linear map
| address, but in mem_init() we pass __init_begin, which is a kernel
| image address. This results in warnings when building with
| CONFIG_DEBUG_VIRTUAL=y, e.g.
|
|    virt_to_phys used for non-linear address: ffff800081270000 (set_reset_devices+0x0/0x10)
|    WARNING: CPU: 0 PID: 1 at arch/arm64/mm/physaddr.c:12 __virt_to_phys+0x54/0x70
|    Modules linked in:
|    CPU: 0 UID: 0 PID: 1 Comm: swapper/0 Not tainted 6.11.0-rc6-next-20240905 #5810 b1ebb0ad06653f35ce875413d5afad24668df3f3
|    Hardware name: FVP Base RevC (DT)
|    pstate: 61402005 (nZCv daif +PAN -UAO -TCO +DIT -SSBS BTYPE=--)
|    pc : __virt_to_phys+0x54/0x70
|    lr : __virt_to_phys+0x54/0x70
|    sp : ffff80008169be20
|    ...
|    Call trace:
|     __virt_to_phys+0x54/0x70
|     memblock_free+0x18/0x30
|     free_initmem+0x3c/0x9c
|     kernel_init+0x30/0x1cc
|     ret_from_fork+0x10/0x20
|
| Fix this by having mem_init() convert the pointers via lm_alias().

> Fixes: 1db9716d4487 ("arm64/mm: Delete __init region from memblock.reserved")
> Signed-off-by: Joey Gouly <joey.gouly@arm.com>
> Suggested-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Will Deacon <will@kernel.org>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Rong Qianfeng <rongqianfeng@vivo.com>
> ---
> 
> Saw this when trying out next-20240905.

IIUC the commit this fixes is queued up in the arm64 for-next/mm branch,
so I'd expect the fix to be queued in the arm64 tree.

>  arch/arm64/mm/init.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> index 0bde16aadc83..f24095bd7e7d 100644
> --- a/arch/arm64/mm/init.c
> +++ b/arch/arm64/mm/init.c
> @@ -411,8 +411,8 @@ void __init mem_init(void)
>  
>  void free_initmem(void)
>  {
> -	unsigned long aligned_begin = ALIGN_DOWN((u64)__init_begin, PAGE_SIZE);
> -	unsigned long aligned_end = ALIGN((u64)__init_end, PAGE_SIZE);
> +	unsigned long aligned_begin = ALIGN_DOWN((u64)lm_alias(__init_begin), PAGE_SIZE);
> +	unsigned long aligned_end = ALIGN((u64)lm_alias(__init_end), PAGE_SIZE);
>  
>  	/* Delete __init region from memblock.reserved. */
>  	memblock_free((void *)aligned_begin, aligned_end - aligned_begin);

Hold on, if __init_begin and __init_end aren't aligned already, this
memblock_free() isn't safe, as we're also freeing the bytes before/after. Those
ALIGN_DOWN() and ALIGN() are bogus and only serve to mask a bug.
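
To make the over-free concern concrete, here is a minimal userspace sketch
(hypothetical addresses and simplified macro definitions, purely illustrative,
not kernel code) showing how rounding an unaligned region outwards releases
bytes on both sides of it:

#include <stdio.h>

#define PAGE_SIZE		4096UL
#define ALIGN_DOWN(x, a)	((x) & ~((a) - 1))
#define ALIGN(x, a)		(((x) + (a) - 1) & ~((a) - 1))

int main(void)
{
	unsigned long begin = 0x81270200UL;	/* pretend __init_begin, unaligned */
	unsigned long end   = 0x812a0e00UL;	/* pretend __init_end, unaligned */

	/* Both print 512: bytes outside [begin, end) that would get freed. */
	printf("extra bytes freed before begin: %lu\n",
	       begin - ALIGN_DOWN(begin, PAGE_SIZE));
	printf("extra bytes freed after end:    %lu\n",
	       ALIGN(end, PAGE_SIZE) - end);
	return 0;
}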

Per arch/arm64/kernel/vmlinux.lds.S those are aligned to SEGMENT_ALIGN, which
is a multiple of PAGE_SIZE.

I reckon we should have something like:

void free_initmem(void)
{
	void *lm_init_begin = lm_alias(__init_begin);
	void *lm_init_end = lm_alias(__init_end);

	WARN_ON(!IS_ALIGNED((unsigned long)lm_init_begin, PAGE_SIZE));
	WARN_ON(!IS_ALIGNED((unsigned long)lm_init_end, PAGE_SIZE));

	/* Delete __init region from memblock.reserved. */
	memblock_free(lm_init_begin, lm_init_end - lm_init_begin);

	free_reserved_area(lm_init_begin, lm_init_end,
			   POISON_FREE_INITMEM, "unused kernel");

	...
}

... which works for me atop arm64/for-next/mm, building defconfig +
DEBUG_VIRTUAL with GCC 14.1.0.

Are you happy to spin a v2 to that effect?

Mark.

Patch

diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 0bde16aadc83..f24095bd7e7d 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -411,8 +411,8 @@  void __init mem_init(void)
 
 void free_initmem(void)
 {
-	unsigned long aligned_begin = ALIGN_DOWN((u64)__init_begin, PAGE_SIZE);
-	unsigned long aligned_end = ALIGN((u64)__init_end, PAGE_SIZE);
+	unsigned long aligned_begin = ALIGN_DOWN((u64)lm_alias(__init_begin), PAGE_SIZE);
+	unsigned long aligned_end = ALIGN((u64)lm_alias(__init_end), PAGE_SIZE);
 
 	/* Delete __init region from memblock.reserved. */
 	memblock_free((void *)aligned_begin, aligned_end - aligned_begin);