| Message ID | 20230316131711.1284451-4-alexghiti@rivosinc.com |
|---|---|
| State | New |
| Series | riscv: Use PUD/P4D/PGD pages for the linear mapping |
On Thu, Mar 16, 2023 at 02:17:10PM +0100, Alexandre Ghiti wrote:
> In order to isolate the kernel text mapping and the crash kernel
> region, we used some sort of hack to isolate those ranges, which consisted
> of marking them as not mappable with memblock_mark_nomap.
>
> Simply use the newly introduced memblock_isolate_memory function, which does
> exactly the same thing but does not uselessly mark the region as not mappable.

But that's not what this patch does -- it's also adding special-case code
for kexec and, honestly, I'm struggling to see why this is improving
anything.

Can we leave the code like it is, or is there something else going on?

Will

> Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
> ---
>  arch/arm64/mm/mmu.c | 25 ++++++++++++++++---------
>  1 file changed, 16 insertions(+), 9 deletions(-)
>
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index 6f9d8898a025..387c2a065a09 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -548,19 +548,18 @@ static void __init map_mem(pgd_t *pgdp)
>
>          /*
>           * Take care not to create a writable alias for the
> -         * read-only text and rodata sections of the kernel image.
> -         * So temporarily mark them as NOMAP to skip mappings in
> -         * the following for-loop
> +         * read-only text and rodata sections of the kernel image so isolate
> +         * those regions and map them after the for loop.
>           */
> -        memblock_mark_nomap(kernel_start, kernel_end - kernel_start);
> +        memblock_isolate_memory(kernel_start, kernel_end - kernel_start);
>
>  #ifdef CONFIG_KEXEC_CORE
>          if (crash_mem_map) {
>                  if (defer_reserve_crashkernel())
>                          flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
>                  else if (crashk_res.end)
> -                        memblock_mark_nomap(crashk_res.start,
> -                                            resource_size(&crashk_res));
> +                        memblock_isolate_memory(crashk_res.start,
> +                                                resource_size(&crashk_res));
>          }
>  #endif
>
> @@ -568,6 +567,17 @@ static void __init map_mem(pgd_t *pgdp)
>          for_each_mem_range(i, &start, &end) {
>                  if (start >= end)
>                          break;
> +
> +                if (start == kernel_start)
> +                        continue;
> +
> +#ifdef CONFIG_KEXEC_CORE
> +                if (start == crashk_res.start &&
> +                    crash_mem_map && !defer_reserve_crashkernel() &&
> +                    crashk_res.end)
> +                        continue;
> +#endif
> +
>                  /*
>                   * The linear map must allow allocation tags reading/writing
>                   * if MTE is present. Otherwise, it has the same attributes as
> @@ -589,7 +599,6 @@ static void __init map_mem(pgd_t *pgdp)
>           */
>          __map_memblock(pgdp, kernel_start, kernel_end,
>                         PAGE_KERNEL, NO_CONT_MAPPINGS);
> -        memblock_clear_nomap(kernel_start, kernel_end - kernel_start);
>
>          /*
>           * Use page-level mappings here so that we can shrink the region
> @@ -603,8 +612,6 @@ static void __init map_mem(pgd_t *pgdp)
>                                         crashk_res.end + 1,
>                                         PAGE_KERNEL,
>                                         NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS);
> -                memblock_clear_nomap(crashk_res.start,
> -                                     resource_size(&crashk_res));
>          }
>  }
>  #endif
> --
> 2.37.2
>
Hi Will,

On 3/24/23 16:21, Will Deacon wrote:
> On Thu, Mar 16, 2023 at 02:17:10PM +0100, Alexandre Ghiti wrote:
>> In order to isolate the kernel text mapping and the crash kernel
>> region, we used some sort of hack to isolate those ranges, which consisted
>> of marking them as not mappable with memblock_mark_nomap.
>>
>> Simply use the newly introduced memblock_isolate_memory function, which does
>> exactly the same thing but does not uselessly mark the region as not mappable.
> But that's not what this patch does -- it's also adding special-case code
> for kexec and, honestly, I'm struggling to see why this is improving
> anything.
>
> Can we leave the code like it is, or is there something else going on?

Yes, the next version won't touch arm64 at all since I'll actually remove
this new API.

Thanks for your time,

Alex

>
> Will
>
>> Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
>> ---
>> [...]
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 6f9d8898a025..387c2a065a09 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -548,19 +548,18 @@ static void __init map_mem(pgd_t *pgdp)
 
         /*
          * Take care not to create a writable alias for the
-         * read-only text and rodata sections of the kernel image.
-         * So temporarily mark them as NOMAP to skip mappings in
-         * the following for-loop
+         * read-only text and rodata sections of the kernel image so isolate
+         * those regions and map them after the for loop.
          */
-        memblock_mark_nomap(kernel_start, kernel_end - kernel_start);
+        memblock_isolate_memory(kernel_start, kernel_end - kernel_start);
 
 #ifdef CONFIG_KEXEC_CORE
         if (crash_mem_map) {
                 if (defer_reserve_crashkernel())
                         flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
                 else if (crashk_res.end)
-                        memblock_mark_nomap(crashk_res.start,
-                                            resource_size(&crashk_res));
+                        memblock_isolate_memory(crashk_res.start,
+                                                resource_size(&crashk_res));
         }
 #endif
 
@@ -568,6 +567,17 @@ static void __init map_mem(pgd_t *pgdp)
         for_each_mem_range(i, &start, &end) {
                 if (start >= end)
                         break;
+
+                if (start == kernel_start)
+                        continue;
+
+#ifdef CONFIG_KEXEC_CORE
+                if (start == crashk_res.start &&
+                    crash_mem_map && !defer_reserve_crashkernel() &&
+                    crashk_res.end)
+                        continue;
+#endif
+
                 /*
                  * The linear map must allow allocation tags reading/writing
                  * if MTE is present. Otherwise, it has the same attributes as
@@ -589,7 +599,6 @@ static void __init map_mem(pgd_t *pgdp)
          */
         __map_memblock(pgdp, kernel_start, kernel_end,
                        PAGE_KERNEL, NO_CONT_MAPPINGS);
-        memblock_clear_nomap(kernel_start, kernel_end - kernel_start);
 
         /*
          * Use page-level mappings here so that we can shrink the region
@@ -603,8 +612,6 @@ static void __init map_mem(pgd_t *pgdp)
                                        crashk_res.end + 1,
                                        PAGE_KERNEL,
                                        NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS);
-                memblock_clear_nomap(crashk_res.start,
-                                     resource_size(&crashk_res));
         }
 }
 #endif
In order to isolate the kernel text mapping and the crash kernel
region, we used some sort of hack to isolate those ranges, which consisted
of marking them as not mappable with memblock_mark_nomap.

Simply use the newly introduced memblock_isolate_memory function, which does
exactly the same thing but does not uselessly mark the region as not mappable.

Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
---
 arch/arm64/mm/mmu.c | 25 ++++++++++++++++---------
 1 file changed, 16 insertions(+), 9 deletions(-)
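For anyone skimming the thread, the sketch below contrasts the two approaches under discussion. It is an illustration only, not the actual arch/arm64/mm/mmu.c code: the helper names are invented, and memblock_isolate_memory() is the function proposed earlier in this series (its (base, size) signature is assumed from the hunks above), not an existing memblock API. The point Will raises falls out of the iterator behaviour: memblock_mark_nomap() makes for_each_mem_range() skip a region outright, whereas merely isolating it only splits it into its own memblock entry, which the loop still sees and therefore has to skip by hand.

```c
/*
 * Illustration only -- not the actual arch/arm64/mm/mmu.c code.  The
 * helper names are invented, and memblock_isolate_memory()'s (base, size)
 * signature is assumed from the hunks above (it is proposed by this
 * series, not an existing memblock API).
 */

/* Current scheme: a NOMAP region is simply not returned by the iterator. */
static void __init map_mem_with_nomap(pgd_t *pgdp, phys_addr_t kernel_start,
                                      phys_addr_t kernel_end)
{
        phys_addr_t start, end;
        u64 i;

        memblock_mark_nomap(kernel_start, kernel_end - kernel_start);

        for_each_mem_range(i, &start, &end)     /* kernel range skipped here */
                __map_memblock(pgdp, start, end, PAGE_KERNEL, 0);

        /* Map the kernel alias explicitly, then drop the temporary flag. */
        __map_memblock(pgdp, kernel_start, kernel_end,
                       PAGE_KERNEL, NO_CONT_MAPPINGS);
        memblock_clear_nomap(kernel_start, kernel_end - kernel_start);
}

/*
 * Proposed scheme: the range is only split into its own memblock entry,
 * so the iterator still returns it and the loop must skip it explicitly.
 */
static void __init map_mem_with_isolate(pgd_t *pgdp, phys_addr_t kernel_start,
                                        phys_addr_t kernel_end)
{
        phys_addr_t start, end;
        u64 i;

        memblock_isolate_memory(kernel_start, kernel_end - kernel_start);

        for_each_mem_range(i, &start, &end) {
                if (start == kernel_start)      /* explicit special case */
                        continue;
                __map_memblock(pgdp, start, end, PAGE_KERNEL, 0);
        }

        __map_memblock(pgdp, kernel_start, kernel_end,
                       PAGE_KERNEL, NO_CONT_MAPPINGS);
}
```

That explicit skip (and its kexec counterpart in the patch) is the special-casing questioned in the review, and the reply above confirms the next revision drops the arm64 change altogether.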