Message ID | 20201119175556.18681-1-catalin.marinas@arm.com (mailing list archive)
---|---
State | New, archived
Series | arm64: Force NO_BLOCK_MAPPINGS if crashkernel reservation is required
Hi Catalin, Nicolas,

On 19/11/2020 17:55, Catalin Marinas wrote:
> mem_init() currently relies on knowing the boundaries of the crashkernel
> reservation to map such region with page granularity for later
> unmapping via set_memory_valid(..., 0). If the crashkernel reservation
> is deferred, such boundaries are not known when the linear mapping is
> created. Simply parse the command line for "crashkernel" and, if found,
> create the linear map with NO_BLOCK_MAPPINGS.

Looks good to me!

Acked-by: James Morse <james.morse@arm.com>

Thanks,

James
On Thu, 2020-11-19 at 17:55 +0000, Catalin Marinas wrote:
> mem_init() currently relies on knowing the boundaries of the crashkernel
> reservation to map such region with page granularity for later
> unmapping via set_memory_valid(..., 0). If the crashkernel reservation
> is deferred, such boundaries are not known when the linear mapping is
> created. Simply parse the command line for "crashkernel" and, if found,
> create the linear map with NO_BLOCK_MAPPINGS.
>
> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
> Cc: James Morse <james.morse@arm.com>
> Cc: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
> ---

Reviewed-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>

On RPi4:

Tested-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>

Regards,
Nicolas
On 11/19/20 11:25 PM, Catalin Marinas wrote:
> mem_init() currently relies on knowing the boundaries of the crashkernel
> reservation to map such region with page granularity for later
> unmapping via set_memory_valid(..., 0). If the crashkernel reservation
> is deferred, such boundaries are not known when the linear mapping is
> created. Simply parse the command line for "crashkernel" and, if found,
> create the linear map with NO_BLOCK_MAPPINGS.
>
> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
> Cc: James Morse <james.morse@arm.com>
> Cc: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
> ---
>
> Following the online (and offline) discussion with James and Nicolas,
> this aims to avoid issues with moving the reserve_crashkernel() call to
> after the memory map has been created (the ZONE_DMA patches).
>
> https://lore.kernel.org/r/e60d643e-4879-3fc3-737d-2c145332a6d7@arm.com
>
>  arch/arm64/mm/mmu.c | 37 ++++++++++++++++---------------------
>  1 file changed, 16 insertions(+), 21 deletions(-)
>
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index d7fe72ee678a..dd214157a026 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -469,6 +469,21 @@ void __init mark_linear_text_alias_ro(void)
> 		PAGE_KERNEL_RO);
>  }
>
> +static bool crash_mem_map __initdata;
> +
> +static int __init enable_crash_mem_map(char *arg)
> +{
> +	/*
> +	 * Proper parameter parsing is done by reserve_crashkernel(). We only
> +	 * need to know if the linear map has to avoid block mappings so that
> +	 * the crashkernel reservations can be unmapped later.
> +	 */
> +	crash_mem_map = true;
> +
> +	return 0;
> +}
> +early_param("crashkernel", enable_crash_mem_map);

Should the crashkernel cmdline parameter not get parsed enough, just to
ensure it is at least a valid one? Otherwise an invalid cmdline request
can prevent the block mapping. In that case, the kernel will have neither
a crash kernel nor the block mapping.
On Fri, Nov 20, 2020 at 09:25:22AM +0530, Anshuman Khandual wrote:
> On 11/19/20 11:25 PM, Catalin Marinas wrote:
> > mem_init() currently relies on knowing the boundaries of the crashkernel
> > reservation to map such region with page granularity for later
> > unmapping via set_memory_valid(..., 0). If the crashkernel reservation
> > is deferred, such boundaries are not known when the linear mapping is
> > created. Simply parse the command line for "crashkernel" and, if found,
> > create the linear map with NO_BLOCK_MAPPINGS.
> >
> > Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
> > Cc: James Morse <james.morse@arm.com>
> > Cc: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
> > ---
> >
> > Following the online (and offline) discussion with James and Nicolas,
> > this aims to avoid issues with moving the reserve_crashkernel() call to
> > after the memory map has been created (the ZONE_DMA patches).
> >
> > https://lore.kernel.org/r/e60d643e-4879-3fc3-737d-2c145332a6d7@arm.com
> >
> >  arch/arm64/mm/mmu.c | 37 ++++++++++++++++---------------------
> >  1 file changed, 16 insertions(+), 21 deletions(-)
> >
> > diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> > index d7fe72ee678a..dd214157a026 100644
> > --- a/arch/arm64/mm/mmu.c
> > +++ b/arch/arm64/mm/mmu.c
> > @@ -469,6 +469,21 @@ void __init mark_linear_text_alias_ro(void)
> > 		PAGE_KERNEL_RO);
> >  }
> >
> > +static bool crash_mem_map __initdata;
> > +
> > +static int __init enable_crash_mem_map(char *arg)
> > +{
> > +	/*
> > +	 * Proper parameter parsing is done by reserve_crashkernel(). We only
> > +	 * need to know if the linear map has to avoid block mappings so that
> > +	 * the crashkernel reservations can be unmapped later.
> > +	 */
> > +	crash_mem_map = true;
> > +
> > +	return 0;
> > +}
> > +early_param("crashkernel", enable_crash_mem_map);
>
> Should the crashkernel cmdline parameter not get parsed enough, just to
> ensure it is at least a valid one? Otherwise an invalid cmdline request
> can prevent the block mapping. In that case, the kernel will have neither
> a crash kernel nor the block mapping.

Does it actually matter? If people pass random strings on the command
line, they should expect some side-effects. That's about the intention
to have a crashkernel. We can't fully validate the crashkernel parameter
until we know the zones layout.

Also note that with defconfig, we don't get block mappings anyway
(rodata_full).
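To make the trade-off concrete: a hedged, userspace-only sketch of what "parsing just enough to validate" a `crashkernel=<size>[@<offset>]` string could look like. This is an illustration, not the kernel's code — the real validation happens later in parse_crashkernel()/reserve_crashkernel(), and the function names here (parse_size, crashkernel_arg_valid) are hypothetical.

```c
#include <stdlib.h>
#include <string.h>

/*
 * Parse a size with the usual K/M/G suffixes, in the spirit of the
 * kernel's memparse(). Sets *endp to the first unconsumed character.
 */
static unsigned long long parse_size(const char *s, const char **endp)
{
	char *end;
	unsigned long long val = strtoull(s, &end, 0);

	switch (*end) {
	case 'G': case 'g': val <<= 10; /* fall through */
	case 'M': case 'm': val <<= 10; /* fall through */
	case 'K': case 'k': val <<= 10; end++; break;
	default: break;
	}
	*endp = end;
	return val;
}

/*
 * Return 1 if the argument is syntactically plausible ("512M",
 * "256M@0x40000000", ...), 0 otherwise. Note this still cannot check
 * whether the request fits the zone layout, which is the reason full
 * validation is deferred in the patch under discussion.
 */
int crashkernel_arg_valid(const char *arg)
{
	const char *p = arg;
	unsigned long long size = parse_size(p, &p);

	if (p == arg || size == 0)
		return 0;		/* no digits, or a zero-sized request */
	if (*p == '@') {		/* optional "@offset" part */
		const char *q = ++p;
		parse_size(q, &p);
		if (p == q)
			return 0;
	}
	return *p == '\0';
}
```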
On 11/20/20 2:53 PM, Catalin Marinas wrote:
> On Fri, Nov 20, 2020 at 09:25:22AM +0530, Anshuman Khandual wrote:
>> On 11/19/20 11:25 PM, Catalin Marinas wrote:
>>> mem_init() currently relies on knowing the boundaries of the crashkernel
>>> reservation to map such region with page granularity for later
>>> unmapping via set_memory_valid(..., 0). If the crashkernel reservation
>>> is deferred, such boundaries are not known when the linear mapping is
>>> created. Simply parse the command line for "crashkernel" and, if found,
>>> create the linear map with NO_BLOCK_MAPPINGS.
>>>
>>> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
>>> Cc: James Morse <james.morse@arm.com>
>>> Cc: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
>>> ---
>>>
>>> Following the online (and offline) discussion with James and Nicolas,
>>> this aims to avoid issues with moving the reserve_crashkernel() call to
>>> after the memory map has been created (the ZONE_DMA patches).
>>>
>>> https://lore.kernel.org/r/e60d643e-4879-3fc3-737d-2c145332a6d7@arm.com
>>>
>>>  arch/arm64/mm/mmu.c | 37 ++++++++++++++++---------------------
>>>  1 file changed, 16 insertions(+), 21 deletions(-)
>>>
>>> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
>>> index d7fe72ee678a..dd214157a026 100644
>>> --- a/arch/arm64/mm/mmu.c
>>> +++ b/arch/arm64/mm/mmu.c
>>> @@ -469,6 +469,21 @@ void __init mark_linear_text_alias_ro(void)
>>> 		PAGE_KERNEL_RO);
>>>  }
>>>
>>> +static bool crash_mem_map __initdata;
>>> +
>>> +static int __init enable_crash_mem_map(char *arg)
>>> +{
>>> +	/*
>>> +	 * Proper parameter parsing is done by reserve_crashkernel(). We only
>>> +	 * need to know if the linear map has to avoid block mappings so that
>>> +	 * the crashkernel reservations can be unmapped later.
>>> +	 */
>>> +	crash_mem_map = true;
>>> +
>>> +	return 0;
>>> +}
>>> +early_param("crashkernel", enable_crash_mem_map);
>>
>> Should the crashkernel cmdline parameter not get parsed enough, just to
>> ensure it is at least a valid one? Otherwise an invalid cmdline request
>> can prevent the block mapping. In that case, the kernel will have neither
>> a crash kernel nor the block mapping.
>
> Does it actually matter? If people pass random strings on the command
> line, they should expect some side-effects. That's about the intention
> to have a crashkernel. We can't fully validate the crashkernel parameter
> until we know the zones layout.

Makes sense.

> Also note that with defconfig, we don't get block mappings anyway
> (rodata_full).

Sure, seems alright then.
On Thu, 19 Nov 2020 17:55:56 +0000, Catalin Marinas wrote:
> mem_init() currently relies on knowing the boundaries of the crashkernel
> reservation to map such region with page granularity for later
> unmapping via set_memory_valid(..., 0). If the crashkernel reservation
> is deferred, such boundaries are not known when the linear mapping is
> created. Simply parse the command line for "crashkernel" and, if found,
> create the linear map with NO_BLOCK_MAPPINGS.

Applied to arm64 (for-next/zone-dma-default-32-bit), thanks for the review/test!

[1/1] arm64: Force NO_BLOCK_MAPPINGS if crashkernel reservation is required
      https://git.kernel.org/arm64/c/2687275a5843
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index d7fe72ee678a..dd214157a026 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -469,6 +469,21 @@ void __init mark_linear_text_alias_ro(void)
 			PAGE_KERNEL_RO);
 }
 
+static bool crash_mem_map __initdata;
+
+static int __init enable_crash_mem_map(char *arg)
+{
+	/*
+	 * Proper parameter parsing is done by reserve_crashkernel(). We only
+	 * need to know if the linear map has to avoid block mappings so that
+	 * the crashkernel reservations can be unmapped later.
+	 */
+	crash_mem_map = true;
+
+	return 0;
+}
+early_param("crashkernel", enable_crash_mem_map);
+
 static void __init map_mem(pgd_t *pgdp)
 {
 	phys_addr_t kernel_start = __pa_symbol(_stext);
@@ -477,7 +492,7 @@ static void __init map_mem(pgd_t *pgdp)
 	int flags = 0;
 	u64 i;
 
-	if (rodata_full || debug_pagealloc_enabled())
+	if (rodata_full || crash_mem_map || debug_pagealloc_enabled())
 		flags = NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
 
 	/*
@@ -487,11 +502,6 @@ static void __init map_mem(pgd_t *pgdp)
 	 * the following for-loop
 	 */
 	memblock_mark_nomap(kernel_start, kernel_end - kernel_start);
-#ifdef CONFIG_KEXEC_CORE
-	if (crashk_res.end)
-		memblock_mark_nomap(crashk_res.start,
-				    resource_size(&crashk_res));
-#endif
 
 	/* map all the memory banks */
 	for_each_mem_range(i, &start, &end) {
@@ -518,21 +528,6 @@ static void __init map_mem(pgd_t *pgdp)
 	__map_memblock(pgdp, kernel_start, kernel_end,
 		       PAGE_KERNEL, NO_CONT_MAPPINGS);
 	memblock_clear_nomap(kernel_start, kernel_end - kernel_start);
-
-#ifdef CONFIG_KEXEC_CORE
-	/*
-	 * Use page-level mappings here so that we can shrink the region
-	 * in page granularity and put back unused memory to buddy system
-	 * through /sys/kernel/kexec_crash_size interface.
-	 */
-	if (crashk_res.end) {
-		__map_memblock(pgdp, crashk_res.start, crashk_res.end + 1,
-			       PAGE_KERNEL,
-			       NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS);
-		memblock_clear_nomap(crashk_res.start,
-				     resource_size(&crashk_res));
-	}
-#endif
 }
 
 void mark_rodata_ro(void)
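The early_param() mechanism the patch relies on can be sketched in plain userspace C: a (name, handler) table is matched against each boot argument, and a matching handler runs long before full parameter parsing. This is a simplified illustration of the pattern only — the types and function names (param_fn, parse_early_options) are stand-ins, not the kernel's real implementation.

```c
#include <stddef.h>
#include <string.h>

typedef int (*param_fn)(const char *arg);

int crash_mem_map;	/* mirrors the patch's crash_mem_map flag */

static int enable_crash_mem_map(const char *arg)
{
	/* Only the parameter's presence matters at this point. */
	(void)arg;
	crash_mem_map = 1;
	return 0;
}

static const struct {
	const char *name;
	param_fn handler;
} early_params[] = {
	{ "crashkernel", enable_crash_mem_map },
};

/* Run the handler for each "name=value" token that matches the table. */
void parse_early_options(char *cmdline)
{
	for (char *tok = strtok(cmdline, " "); tok; tok = strtok(NULL, " ")) {
		char *eq = strchr(tok, '=');
		size_t len = eq ? (size_t)(eq - tok) : strlen(tok);

		for (size_t i = 0;
		     i < sizeof(early_params) / sizeof(early_params[0]); i++) {
			if (len == strlen(early_params[i].name) &&
			    !strncmp(tok, early_params[i].name, len))
				early_params[i].handler(eq ? eq + 1 : "");
		}
	}
}
```

This is why even a malformed "crashkernel=foo" sets the flag in the patch above: the handler fires on the parameter name alone, before any value is validated.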
mem_init() currently relies on knowing the boundaries of the crashkernel
reservation to map such region with page granularity for later
unmapping via set_memory_valid(..., 0). If the crashkernel reservation
is deferred, such boundaries are not known when the linear mapping is
created. Simply parse the command line for "crashkernel" and, if found,
create the linear map with NO_BLOCK_MAPPINGS.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
---

Following the online (and offline) discussion with James and Nicolas,
this aims to avoid issues with moving the reserve_crashkernel() call to
after the memory map has been created (the ZONE_DMA patches).

https://lore.kernel.org/r/e60d643e-4879-3fc3-737d-2c145332a6d7@arm.com

 arch/arm64/mm/mmu.c | 37 ++++++++++++++++---------------------
 1 file changed, 16 insertions(+), 21 deletions(-)