Message ID | 8579f887412720bd6f2fbce513c1c9904772ead4.1728585512.git.ritesh.list@gmail.com (mailing list archive) |
---|---|
State | New |
Series | cma: powerpc fadump fixes |
On 11.10.24 09:23, Ritesh Harjani (IBM) wrote:
> cma_init_reserved_mem() checks base and size alignment with
> CMA_MIN_ALIGNMENT_BYTES. However, some users might call this during
> early boot when pageblock_order is 0. That means if base and size does
> not have pageblock_order alignment, it can cause functional failures
> during cma activate area.
>
> So let's enforce pageblock_order to be non-zero during
> cma_init_reserved_mem().
>
> Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
> ---
>  mm/cma.c | 9 +++++++++
>  1 file changed, 9 insertions(+)
>
> diff --git a/mm/cma.c b/mm/cma.c
> index 3e9724716bad..36d753e7a0bf 100644
> --- a/mm/cma.c
> +++ b/mm/cma.c
> @@ -182,6 +182,15 @@ int __init cma_init_reserved_mem(phys_addr_t base, phys_addr_t size,
>  	if (!size || !memblock_is_region_reserved(base, size))
>  		return -EINVAL;
>
> +	/*
> +	 * CMA uses CMA_MIN_ALIGNMENT_BYTES as alignment requirement which
> +	 * needs pageblock_order to be initialized. Let's enforce it.
> +	 */
> +	if (!pageblock_order) {
> +		pr_err("pageblock_order not yet initialized. Called during early boot?\n");
> +		return -EINVAL;
> +	}
> +
>  	/* ensure minimal alignment required by mm core */
>  	if (!IS_ALIGNED(base | size, CMA_MIN_ALIGNMENT_BYTES))
>  		return -EINVAL;

Acked-by: David Hildenbrand <david@redhat.com>
diff --git a/mm/cma.c b/mm/cma.c
index 3e9724716bad..36d753e7a0bf 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -182,6 +182,15 @@ int __init cma_init_reserved_mem(phys_addr_t base, phys_addr_t size,
 	if (!size || !memblock_is_region_reserved(base, size))
 		return -EINVAL;
 
+	/*
+	 * CMA uses CMA_MIN_ALIGNMENT_BYTES as alignment requirement which
+	 * needs pageblock_order to be initialized. Let's enforce it.
+	 */
+	if (!pageblock_order) {
+		pr_err("pageblock_order not yet initialized. Called during early boot?\n");
+		return -EINVAL;
+	}
+
 	/* ensure minimal alignment required by mm core */
 	if (!IS_ALIGNED(base | size, CMA_MIN_ALIGNMENT_BYTES))
 		return -EINVAL;
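For context on why the existing IS_ALIGNED() test is not enough on its own: CMA_MIN_ALIGNMENT_BYTES is derived from pageblock_order (PAGE_SIZE times pageblock_nr_pages, i.e. 1 << pageblock_order pages), so while pageblock_order is still 0 the check degenerates to plain page alignment. The standalone userspace sketch below reproduces that arithmetic; the 4K PAGE_SIZE and post-init pageblock_order of 9 are illustrative values, not taken from the patch.

	/*
	 * Standalone illustration (not kernel code) of the vacuous check.
	 * With pageblock_order == 0, CMA_MIN_ALIGNMENT_BYTES collapses to
	 * PAGE_SIZE, so a region that violates the real (post-init)
	 * alignment requirement still passes.
	 */
	#include <stdio.h>
	#include <stdbool.h>

	#define PAGE_SIZE 4096UL

	static unsigned long pageblock_order;	/* 0 during early boot */

	#define CMA_MIN_ALIGNMENT_BYTES (PAGE_SIZE * (1UL << pageblock_order))
	#define IS_ALIGNED(x, a) (((x) & ((a) - 1)) == 0)

	static bool cma_alignment_ok(unsigned long base, unsigned long size)
	{
		return IS_ALIGNED(base | size, CMA_MIN_ALIGNMENT_BYTES);
	}

	int main(void)
	{
		/* 4K-aligned but not 2M-aligned */
		unsigned long base = 0x101000, size = 0x200000;

		pageblock_order = 0;	/* early boot: check passes */
		printf("early boot: ok=%d (align=%lu)\n",
		       cma_alignment_ok(base, size), CMA_MIN_ALIGNMENT_BYTES);

		pageblock_order = 9;	/* after init: same region fails */
		printf("after init: ok=%d (align=%lu)\n",
		       cma_alignment_ok(base, size), CMA_MIN_ALIGNMENT_BYTES);
		return 0;
	}

A region accepted during early boot can therefore end up misaligned with respect to the real pageblock granularity, which is exactly the failure the new guard turns into an immediate -EINVAL.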
cma_init_reserved_mem() checks base and size alignment against
CMA_MIN_ALIGNMENT_BYTES. However, some users might call this during
early boot when pageblock_order is still 0. In that case, if base and
size are not pageblock_order aligned, it can cause functional failures
later, when the CMA area is activated in cma_activate_area().

So let's enforce pageblock_order to be non-zero during
cma_init_reserved_mem().

Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
---
 mm/cma.c | 9 +++++++++
 1 file changed, 9 insertions(+)
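With the guard in place, an early-boot caller now gets a clean -EINVAL instead of silently registering a region whose alignment was never really validated. A hypothetical caller might look like the sketch below; my_early_cma_reserve(), the "my_cma" name, and the retry policy are invented for illustration, while cma_init_reserved_mem() and its signature are the real kernel API.

	#include <linux/cma.h>
	#include <linux/init.h>
	#include <linux/printk.h>

	static struct cma *my_cma_area;	/* hypothetical CMA area */

	static int __init my_early_cma_reserve(phys_addr_t base,
					       phys_addr_t size)
	{
		int rc;

		/*
		 * If this runs before pageblock_order is initialized,
		 * cma_init_reserved_mem() now fails with -EINVAL rather
		 * than accepting a possibly misaligned region.
		 */
		rc = cma_init_reserved_mem(base, size, 0, "my_cma",
					   &my_cma_area);
		if (rc)
			pr_warn("my_cma: registration failed (%d); defer until after mm init\n",
				rc);
		return rc;
	}

Such a caller would be expected to retry (or move) its reservation to a point after pageblock_order is set up, rather than treat the error as fatal.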