
[RFC,v2,0/4] cma: powerpc fadump fixes

Message ID cover.1728585512.git.ritesh.list@gmail.com (mailing list archive)

Message

Ritesh Harjani (IBM) Oct. 11, 2024, 7:23 a.m. UTC
Please find the v2 of cma related powerpc fadump fixes.

Patch-1 is a change in mm/cma.c to make sure we return an error if someone uses
cma_init_reserved_mem() before pageblock_order is initialized.

I think it's best if Patch-1 goes via the mm tree, and since the rest of the
changes are powerpc fadump fixes, they should go via the powerpc tree. Right?

v1 -> v2:
=========
1. Addressed David's review comment to call fadump_cma_init() after
   pageblock_order is initialized. Also added a check to catch callers of
   cma_init_reserved_mem() before pageblock_order is initialized.

[v1]: https://lore.kernel.org/linuxppc-dev/c1e66d3e69c8d90988c02b84c79db5d9dd93f053.1728386179.git.ritesh.list@gmail.com/

Ritesh Harjani (IBM) (4):
  cma: Enforce non-zero pageblock_order during cma_init_reserved_mem()
  fadump: Refactor and prepare fadump_cma_init for late init
  fadump: Reserve page-aligned boot_memory_size during fadump_reserve_mem
  fadump: Move fadump_cma_init to setup_arch() after initmem_init()

 arch/powerpc/include/asm/fadump.h  |  7 ++++
 arch/powerpc/kernel/fadump.c       | 55 +++++++++++++++---------------
 arch/powerpc/kernel/setup-common.c |  6 ++--
 mm/cma.c                           |  9 +++++
 4 files changed, 48 insertions(+), 29 deletions(-)

--
2.46.0

Comments

Michael Ellerman Oct. 11, 2024, 10:17 a.m. UTC | #1
"Ritesh Harjani (IBM)" <ritesh.list@gmail.com> writes:
> Please find the v2 of cma related powerpc fadump fixes.
>
> Patch-1 is a change in mm/cma.c to make sure we return an error if someone uses
> cma_init_reserved_mem() before pageblock_order is initialized.
>
> I think it's best if Patch-1 goes via the mm tree, and since the rest of the
> changes are powerpc fadump fixes, they should go via the powerpc tree. Right?

Yes I think that will work.

Because there's no actual dependency on patch 1, correct?

Let's see if the mm folks are happy with the approach, and if so you
should send patch 1 on its own, and patches 2-4 as a separate series.

Then I can take the series (2-4) as fixes, and patch 1 can go via the mm
tree (probably in next, not as a fix).

cheers

David Hildenbrand Oct. 11, 2024, 10:25 a.m. UTC | #2
On 11.10.24 12:17, Michael Ellerman wrote:
> "Ritesh Harjani (IBM)" <ritesh.list@gmail.com> writes:
>> Please find the v2 of cma related powerpc fadump fixes.
>>
>> Patch-1 is a change in mm/cma.c to make sure we return an error if someone uses
>> cma_init_reserved_mem() before pageblock_order is initialized.
>>
>> I think it's best if Patch-1 goes via the mm tree, and since the rest of the
>> changes are powerpc fadump fixes, they should go via the powerpc tree. Right?
> 
> Yes I think that will work.
> 
> Because there's no actual dependency on patch 1, correct?
> 
> Let's see if the mm folks are happy with the approach, and if so you
> should send patch 1 on its own, and patches 2-4 as a separate series.

Makes sense to me.
Ritesh Harjani (IBM) Oct. 11, 2024, 11 a.m. UTC | #3
Michael Ellerman <mpe@ellerman.id.au> writes:

> "Ritesh Harjani (IBM)" <ritesh.list@gmail.com> writes:
>> Please find the v2 of cma related powerpc fadump fixes.
>>
>> Patch-1 is a change in mm/cma.c to make sure we return an error if someone uses
>> cma_init_reserved_mem() before pageblock_order is initialized.
>>
>> I think it's best if Patch-1 goes via the mm tree, and since the rest of the
>> changes are powerpc fadump fixes, they should go via the powerpc tree. Right?
>
> Yes I think that will work.
>
> Because there's no actual dependency on patch 1, correct?

There is no dependency, yes.

>
> Let's see if the mm folks are happy with the approach, and if so you
> should send patch 1 on its own, and patches 2-4 as a separate series.
>
> Then I can take the series (2-4) as fixes, and patch 1 can go via the mm
> tree (probably in next, not as a fix).
>

Sure. Since David has acked patch-1, let me split this into two series as
you suggested above and re-send both separately, so that each can be picked
up in its respective tree.

Will do that shortly. Thanks!

-ritesh

