Message ID | 4cea3a03fb0a9f52dbd6b62ec21209abf14fb7bf.1728585512.git.ritesh.list@gmail.com |
---|---|
State | New |
Series | cma: powerpc fadump fixes |
On 11/10/24 12:53 pm, Ritesh Harjani (IBM) wrote:
> This patch refactors all CMA related initialization and alignment code
> to within fadump_cma_init() which gets called in the end. This also means
> that we keep [reserve_dump_area_start, boot_memory_size] page aligned
> during fadump_reserve_mem(). Then later in fadump_cma_init() we extract the
> aligned chunk and provide it to CMA. This inherently also fixes an issue in
> the current code where the reserve_dump_area_start is not aligned
> when the physical memory can have holes and the suitable chunk starts at
> an unaligned boundary.
>
> After this we should be able to call fadump_cma_init() independently
> later in setup_arch() where pageblock_order is non-zero.
>
> Suggested-by: Sourabh Jain <sourabhjain@linux.ibm.com>
> Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
> ---
>  arch/powerpc/kernel/fadump.c | 34 ++++++++++++++++++++++------------
>  1 file changed, 22 insertions(+), 12 deletions(-)
>
> diff --git a/arch/powerpc/kernel/fadump.c b/arch/powerpc/kernel/fadump.c
> index 162327d66982..ffaec625b7a8 100644
> --- a/arch/powerpc/kernel/fadump.c
> +++ b/arch/powerpc/kernel/fadump.c
> @@ -80,7 +80,7 @@ static struct cma *fadump_cma;
>   */
>  static void __init fadump_cma_init(void)
>  {
> -	unsigned long long base, size;
> +	unsigned long long base, size, end;
>  	int rc;
>
>  	if (!fw_dump.fadump_supported || !fw_dump.fadump_enabled ||
> @@ -92,8 +92,24 @@ static void __init fadump_cma_init(void)
>  	if (fw_dump.nocma || !fw_dump.boot_memory_size)
>  		return;
>
> +	/*
> +	 * [base, end) should be reserved during early init in
> +	 * fadump_reserve_mem(). No need to check this here as
> +	 * cma_init_reserved_mem() already checks for overlap.
> +	 * Here we give the aligned chunk of this reserved memory to CMA.
> +	 */
>  	base = fw_dump.reserve_dump_area_start;
>  	size = fw_dump.boot_memory_size;
> +	end = base + size;
> +
> +	base = ALIGN(base, CMA_MIN_ALIGNMENT_BYTES);
> +	end = ALIGN_DOWN(end, CMA_MIN_ALIGNMENT_BYTES);
> +	size = end - base;
> +
> +	if (end <= base) {
> +		pr_warn("%s: Too less memory to give to CMA\n", __func__);
> +		return;
> +	}
>
>  	rc = cma_init_reserved_mem(base, size, 0, "fadump_cma", &fadump_cma);
>  	if (rc) {
> @@ -116,11 +132,12 @@ static void __init fadump_cma_init(void)
>  	/*
>  	 * So we now have successfully initialized cma area for fadump.
>  	 */
> -	pr_info("Initialized 0x%lx bytes cma area at %ldMB from 0x%lx "
> +	pr_info("Initialized [0x%llx, %luMB] cma area from [0x%lx, %luMB] "
>  		"bytes of memory reserved for firmware-assisted dump\n",
> -		cma_get_size(fadump_cma),
> -		(unsigned long)cma_get_base(fadump_cma) >> 20,
> -		fw_dump.reserve_dump_area_size);
> +		cma_get_base(fadump_cma), cma_get_size(fadump_cma) >> 20,
> +		fw_dump.reserve_dump_area_start,
> +		fw_dump.boot_memory_size >> 20);

The changes look good. Thanks for looking into it.

For patches 2, 3 & 4:

Acked-by: Hari Bathini <hbathini@linux.ibm.com>

> +	return;
>  }
>  #else
>  static void __init fadump_cma_init(void) { }
> @@ -553,13 +570,6 @@ int __init fadump_reserve_mem(void)
>  	if (!fw_dump.dump_active) {
>  		fw_dump.boot_memory_size =
>  			PAGE_ALIGN(fadump_calculate_reserve_size());
> -#ifdef CONFIG_CMA
> -		if (!fw_dump.nocma) {
> -			fw_dump.boot_memory_size =
> -				ALIGN(fw_dump.boot_memory_size,
> -				      CMA_MIN_ALIGNMENT_BYTES);
> -		}
> -#endif
>
>  		bootmem_min = fw_dump.ops->fadump_get_bootmem_min();
>  		if (fw_dump.boot_memory_size < bootmem_min) {
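To make the alignment carving reviewed above concrete, here is a minimal userspace sketch. It is not the fadump code itself: ALIGN() and ALIGN_DOWN() are re-implemented to mirror the kernel macros for power-of-two alignments, and the 16 MiB alignment and the addresses are hypothetical stand-ins for CMA_MIN_ALIGNMENT_BYTES (which depends on pageblock_order) and for a reserve_dump_area_start that lands on a non-CMA-aligned boundary after a memory hole.

/*
 * Userspace illustration only, not kernel code. The macros mirror the
 * kernel's ALIGN()/ALIGN_DOWN() for power-of-two alignments, and the
 * 16 MiB value stands in for CMA_MIN_ALIGNMENT_BYTES.
 */
#include <stdint.h>
#include <stdio.h>

#define ALIGN_DOWN(x, a)  ((x) & ~((uint64_t)(a) - 1))
#define ALIGN(x, a)       ALIGN_DOWN((x) + (a) - 1, (a))

int main(void)
{
        const uint64_t cma_align = 16ULL << 20;   /* assumed 16 MiB CMA alignment */
        uint64_t base = 0x0ffe0000ULL;            /* hypothetical page-aligned, CMA-unaligned start */
        uint64_t size = 1024ULL << 20;            /* hypothetical 1 GiB boot memory size */
        uint64_t end  = base + size;

        /* Carve the largest CMA-aligned chunk out of [base, end). */
        uint64_t cma_base = ALIGN(base, cma_align);
        uint64_t cma_end  = ALIGN_DOWN(end, cma_align);

        if (cma_end <= cma_base) {
                puts("reserved region too small to give anything to CMA");
                return 0;
        }

        printf("reserved: [%#llx, %#llx)\n",
               (unsigned long long)base, (unsigned long long)end);
        printf("cma:      [%#llx, %#llx) (%llu MiB)\n",
               (unsigned long long)cma_base, (unsigned long long)cma_end,
               (unsigned long long)((cma_end - cma_base) >> 20));
        return 0;
}

With these made-up numbers, the page-aligned reservation [0x0ffe0000, 0x4ffe0000) yields the CMA range [0x10000000, 0x4f000000), i.e. 1008 MiB; the unaligned head and tail remain reserved for fadump but are simply not handed to CMA.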
diff --git a/arch/powerpc/kernel/fadump.c b/arch/powerpc/kernel/fadump.c
index 162327d66982..ffaec625b7a8 100644
--- a/arch/powerpc/kernel/fadump.c
+++ b/arch/powerpc/kernel/fadump.c
@@ -80,7 +80,7 @@ static struct cma *fadump_cma;
  */
 static void __init fadump_cma_init(void)
 {
-	unsigned long long base, size;
+	unsigned long long base, size, end;
 	int rc;
 
 	if (!fw_dump.fadump_supported || !fw_dump.fadump_enabled ||
@@ -92,8 +92,24 @@ static void __init fadump_cma_init(void)
 	if (fw_dump.nocma || !fw_dump.boot_memory_size)
 		return;
 
+	/*
+	 * [base, end) should be reserved during early init in
+	 * fadump_reserve_mem(). No need to check this here as
+	 * cma_init_reserved_mem() already checks for overlap.
+	 * Here we give the aligned chunk of this reserved memory to CMA.
+	 */
 	base = fw_dump.reserve_dump_area_start;
 	size = fw_dump.boot_memory_size;
+	end = base + size;
+
+	base = ALIGN(base, CMA_MIN_ALIGNMENT_BYTES);
+	end = ALIGN_DOWN(end, CMA_MIN_ALIGNMENT_BYTES);
+	size = end - base;
+
+	if (end <= base) {
+		pr_warn("%s: Too less memory to give to CMA\n", __func__);
+		return;
+	}
 
 	rc = cma_init_reserved_mem(base, size, 0, "fadump_cma", &fadump_cma);
 	if (rc) {
@@ -116,11 +132,12 @@ static void __init fadump_cma_init(void)
 	/*
 	 * So we now have successfully initialized cma area for fadump.
 	 */
-	pr_info("Initialized 0x%lx bytes cma area at %ldMB from 0x%lx "
+	pr_info("Initialized [0x%llx, %luMB] cma area from [0x%lx, %luMB] "
 		"bytes of memory reserved for firmware-assisted dump\n",
-		cma_get_size(fadump_cma),
-		(unsigned long)cma_get_base(fadump_cma) >> 20,
-		fw_dump.reserve_dump_area_size);
+		cma_get_base(fadump_cma), cma_get_size(fadump_cma) >> 20,
+		fw_dump.reserve_dump_area_start,
+		fw_dump.boot_memory_size >> 20);
+	return;
 }
 #else
 static void __init fadump_cma_init(void) { }
@@ -553,13 +570,6 @@ int __init fadump_reserve_mem(void)
 	if (!fw_dump.dump_active) {
 		fw_dump.boot_memory_size =
 			PAGE_ALIGN(fadump_calculate_reserve_size());
-#ifdef CONFIG_CMA
-		if (!fw_dump.nocma) {
-			fw_dump.boot_memory_size =
-				ALIGN(fw_dump.boot_memory_size,
-				      CMA_MIN_ALIGNMENT_BYTES);
-		}
-#endif
 
 		bootmem_min = fw_dump.ops->fadump_get_bootmem_min();
 		if (fw_dump.boot_memory_size < bootmem_min) {
This patch refactors all CMA related initialization and alignment code to
within fadump_cma_init() which gets called in the end. This also means that
we keep [reserve_dump_area_start, boot_memory_size] page aligned during
fadump_reserve_mem(). Then later in fadump_cma_init() we extract the aligned
chunk and provide it to CMA. This inherently also fixes an issue in the
current code where the reserve_dump_area_start is not aligned when the
physical memory can have holes and the suitable chunk starts at an unaligned
boundary.

After this we should be able to call fadump_cma_init() independently later in
setup_arch() where pageblock_order is non-zero.

Suggested-by: Sourabh Jain <sourabhjain@linux.ibm.com>
Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
---
 arch/powerpc/kernel/fadump.c | 34 ++++++++++++++++++++++------------
 1 file changed, 22 insertions(+), 12 deletions(-)
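As a worked example of the new end <= base bail-out (using assumed values, since the real CMA_MIN_ALIGNMENT_BYTES depends on pageblock_order): with a 16 MiB CMA alignment, a reserved region starting at 0x2fe00000 with an 8 MiB size has end = 0x30600000; ALIGN(0x2fe00000, 16 MiB) = 0x30000000 and ALIGN_DOWN(0x30600000, 16 MiB) = 0x30000000, so the aligned end does not exceed the aligned base, and fadump_cma_init() warns and returns instead of calling cma_init_reserved_mem() with a bogus range.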