
[2/3] arm64: mm: reserve hugetlb CMA after numa_init

Message ID 20200603024231.61748-3-song.bao.hua@hisilicon.com (mailing list archive)
State New, archived
Series support per-numa CMA for ARM server

Commit Message

Song Bao Hua (Barry Song) June 3, 2020, 2:42 a.m. UTC
hugetlb_cma_reserve() is called at the wrong place: numa_init has not been
done yet, so all reserved memory will be located on node 0.

Cc: Roman Gushchin <guro@fb.com>
Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
---
 arch/arm64/mm/init.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)
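
For context, a minimal sketch of what hugetlb_cma_reserve() does with the
NUMA topology, loosely modeled on the version cf11e85fc08c introduced in
mm/hugetlb.c (names like hugetlb_cma_size, hugetlb_cma[] and
cma_declare_contiguous_nid() follow that commit, but the error handling and
warnings are simplified, so read it as a sketch rather than the verbatim
code). The reservation is split across the online nodes, which only produces
a per-node layout if memblock already knows which ranges belong to which node:

void __init hugetlb_cma_reserve(int order)
{
	unsigned long size, reserved = 0;
	unsigned long per_node;
	int nid;

	if (!hugetlb_cma_size)
		return;

	/* e.g. a 3 GB request on 4 nodes becomes 1 GB on each of 3 nodes */
	per_node = DIV_ROUND_UP(hugetlb_cma_size, nr_online_nodes);

	for_each_node_state(nid, N_ONLINE) {
		size = min(per_node, hugetlb_cma_size - reserved);
		size = round_up(size, PAGE_SIZE << order);

		/*
		 * Ask memblock for a range on node 'nid'. Before
		 * numa_init() has run, memblock regions carry no node
		 * ids, so every area silently lands on node 0; that is
		 * the bug this patch fixes by moving the call site.
		 */
		if (cma_declare_contiguous_nid(0, size, 0, PAGE_SIZE << order,
					       0, false, "hugetlb",
					       &hugetlb_cma[nid], nid))
			continue;

		reserved += size;
		if (reserved >= hugetlb_cma_size)
			break;
	}
}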

Comments

Roman Gushchin June 3, 2020, 3:22 a.m. UTC | #1
On Wed, Jun 03, 2020 at 02:42:30PM +1200, Barry Song wrote:
> hugetlb_cma_reserve() is called at the wrong place: numa_init has not been
> done yet, so all reserved memory will be located on node 0.
> 
> Cc: Roman Gushchin <guro@fb.com>
> Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>

Acked-by: Roman Gushchin <guro@fb.com>

Thanks!

> ---
>  arch/arm64/mm/init.c | 10 +++++-----
>  1 file changed, 5 insertions(+), 5 deletions(-)
> 
> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> index e42727e3568e..8f0e70ebb49d 100644
> --- a/arch/arm64/mm/init.c
> +++ b/arch/arm64/mm/init.c
> @@ -458,11 +458,6 @@ void __init arm64_memblock_init(void)
>  	high_memory = __va(memblock_end_of_DRAM() - 1) + 1;
>  
>  	dma_contiguous_reserve(arm64_dma32_phys_limit);
> -
> -#ifdef CONFIG_ARM64_4K_PAGES
> -	hugetlb_cma_reserve(PUD_SHIFT - PAGE_SHIFT);
> -#endif
> -
>  }
>  
>  void __init bootmem_init(void)
> @@ -478,6 +473,11 @@ void __init bootmem_init(void)
>  	min_low_pfn = min;
>  
>  	arm64_numa_init();
> +
> +#ifdef CONFIG_ARM64_4K_PAGES
> +	hugetlb_cma_reserve(PUD_SHIFT - PAGE_SHIFT);
> +#endif
> +
>  	/*
>  	 * Sparsemem tries to allocate bootmem in memory_present(), so must be
>  	 * done after the fixed reservations.
> -- 
> 2.23.0
> 
>
Matthias Brugger June 7, 2020, 8:14 p.m. UTC | #2
On 03/06/2020 05:22, Roman Gushchin wrote:
> On Wed, Jun 03, 2020 at 02:42:30PM +1200, Barry Song wrote:
>> hugetlb_cma_reserve() is called at the wrong place: numa_init has not been
>> done yet, so all reserved memory will be located on node 0.
>>
>> Cc: Roman Gushchin <guro@fb.com>
>> Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
> 
> Acked-by: Roman Gushchin <guro@fb.com>
> 

When did this break, or has it been broken since the beginning?
In any case, could you provide a "Fixes" tag for it, so that it can easily be
backported to older releases.

Regards,
Matthias

> Thanks!
> 
>> ---
>>  arch/arm64/mm/init.c | 10 +++++-----
>>  1 file changed, 5 insertions(+), 5 deletions(-)
>>
>> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
>> index e42727e3568e..8f0e70ebb49d 100644
>> --- a/arch/arm64/mm/init.c
>> +++ b/arch/arm64/mm/init.c
>> @@ -458,11 +458,6 @@ void __init arm64_memblock_init(void)
>>  	high_memory = __va(memblock_end_of_DRAM() - 1) + 1;
>>  
>>  	dma_contiguous_reserve(arm64_dma32_phys_limit);
>> -
>> -#ifdef CONFIG_ARM64_4K_PAGES
>> -	hugetlb_cma_reserve(PUD_SHIFT - PAGE_SHIFT);
>> -#endif
>> -
>>  }
>>  
>>  void __init bootmem_init(void)
>> @@ -478,6 +473,11 @@ void __init bootmem_init(void)
>>  	min_low_pfn = min;
>>  
>>  	arm64_numa_init();
>> +
>> +#ifdef CONFIG_ARM64_4K_PAGES
>> +	hugetlb_cma_reserve(PUD_SHIFT - PAGE_SHIFT);
>> +#endif
>> +
>>  	/*
>>  	 * Sparsemem tries to allocate bootmem in memory_present(), so must be
>>  	 * done after the fixed reservations.
>> -- 
>> 2.23.0
>>
>>
> 
Song Bao Hua (Barry Song) June 8, 2020, 12:50 a.m. UTC | #3
> -----Original Message-----
> From: Matthias Brugger [mailto:matthias.bgg@gmail.com]
> Sent: Monday, June 8, 2020 8:15 AM
> To: Roman Gushchin <guro@fb.com>; Song Bao Hua (Barry Song)
> <song.bao.hua@hisilicon.com>
> Cc: catalin.marinas@arm.com; John Garry <john.garry@huawei.com>;
> linux-kernel@vger.kernel.org; Linuxarm <linuxarm@huawei.com>;
> iommu@lists.linux-foundation.org; Zengtao (B) <prime.zeng@hisilicon.com>;
> Jonathan Cameron <jonathan.cameron@huawei.com>;
> robin.murphy@arm.com; hch@lst.de; linux-arm-kernel@lists.infradead.org;
> m.szyprowski@samsung.com
> Subject: Re: [PATCH 2/3] arm64: mm: reserve hugetlb CMA after numa_init
> 
> 
> 
> On 03/06/2020 05:22, Roman Gushchin wrote:
> > On Wed, Jun 03, 2020 at 02:42:30PM +1200, Barry Song wrote:
> >> hugetlb_cma_reserve() is called at the wrong place: numa_init has not been
> >> done yet, so all reserved memory will be located on node 0.
> >>
> >> Cc: Roman Gushchin <guro@fb.com>
> >> Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
> >
> > Acked-by: Roman Gushchin <guro@fb.com>
> >
> 
> When did this break, or has it been broken since the beginning?
> In any case, could you provide a "Fixes" tag for it, so that it can easily be
> backported to older releases.

I guess it was broken from the very beginning.
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=cf11e85fc08cc

Fixes: cf11e85fc08c ("mm: hugetlb: optionally allocate gigantic hugepages using cma")

Do you think it would be better for me to send a v2 of this patch separately with this tag, taking it out of my original per-numa CMA patch set?
Please let me know what you suggest.

Best Regards
Barry

> 
> Regards,
> Matthias
> 
> > Thanks!
> >
> >> ---
> >>  arch/arm64/mm/init.c | 10 +++++-----
> >>  1 file changed, 5 insertions(+), 5 deletions(-)
> >>
> >> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> >> index e42727e3568e..8f0e70ebb49d 100644
> >> --- a/arch/arm64/mm/init.c
> >> +++ b/arch/arm64/mm/init.c
> >> @@ -458,11 +458,6 @@ void __init arm64_memblock_init(void)
> >>  	high_memory = __va(memblock_end_of_DRAM() - 1) + 1;
> >>
> >>  	dma_contiguous_reserve(arm64_dma32_phys_limit);
> >> -
> >> -#ifdef CONFIG_ARM64_4K_PAGES
> >> -	hugetlb_cma_reserve(PUD_SHIFT - PAGE_SHIFT);
> >> -#endif
> >> -
> >>  }
> >>
> >>  void __init bootmem_init(void)
> >> @@ -478,6 +473,11 @@ void __init bootmem_init(void)
> >>  	min_low_pfn = min;
> >>
> >>  	arm64_numa_init();
> >> +
> >> +#ifdef CONFIG_ARM64_4K_PAGES
> >> +	hugetlb_cma_reserve(PUD_SHIFT - PAGE_SHIFT);
> >> +#endif
> >> +
> >>  	/*
> >>  	 * Sparsemem tries to allocate bootmem in memory_present(), so must be
> >>  	 * done after the fixed reservations.
> >> --
> >> 2.23.0
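
A note on the Fixes: tag format discussed above: the kernel's
submitting-patches documentation recommends generating the tag with git so
the abbreviated hash and subject line match the upstream commit. This is
generic git usage, not a command taken from this thread:

    git log -1 --format='Fixes: %h ("%s")' cf11e85fc08c

With core.abbrev set to 12, %h yields the conventional 12-character hash.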
Matthias Brugger June 9, 2020, 3:33 p.m. UTC | #4
On 08/06/2020 02:50, Song Bao Hua (Barry Song) wrote:
> 
> 
>> -----Original Message-----
>> From: Matthias Brugger [mailto:matthias.bgg@gmail.com]
>> Sent: Monday, June 8, 2020 8:15 AM
>> To: Roman Gushchin <guro@fb.com>; Song Bao Hua (Barry Song)
>> <song.bao.hua@hisilicon.com>
>> Cc: catalin.marinas@arm.com; John Garry <john.garry@huawei.com>;
>> linux-kernel@vger.kernel.org; Linuxarm <linuxarm@huawei.com>;
>> iommu@lists.linux-foundation.org; Zengtao (B) <prime.zeng@hisilicon.com>;
>> Jonathan Cameron <jonathan.cameron@huawei.com>;
>> robin.murphy@arm.com; hch@lst.de; linux-arm-kernel@lists.infradead.org;
>> m.szyprowski@samsung.com
>> Subject: Re: [PATCH 2/3] arm64: mm: reserve hugetlb CMA after numa_init
>>
>>
>>
>> On 03/06/2020 05:22, Roman Gushchin wrote:
>>> On Wed, Jun 03, 2020 at 02:42:30PM +1200, Barry Song wrote:
>>>> hugetlb_cma_reserve() is called at the wrong place: numa_init has not been
>>>> done yet, so all reserved memory will be located on node 0.
>>>>
>>>> Cc: Roman Gushchin <guro@fb.com>
>>>> Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
>>>
>>> Acked-by: Roman Gushchin <guro@fb.com>
>>>
>>
>> When did this break, or has it been broken since the beginning?
>> In any case, could you provide a "Fixes" tag for it, so that it can easily be
>> backported to older releases.
> 
> I guess it was broken from the very beginning.
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=cf11e85fc08cc
> 
> Fixes: cf11e85fc08c ("mm: hugetlb: optionally allocate gigantic hugepages using cma")
> 
> Do you think it would be better for me to send a v2 of this patch separately with this tag, taking it out of my original per-numa CMA patch set?
> Please let me know what you suggest.
> 

I'm not the maintainer, but I think it could help get the patch accepted
earlier while you address the rest of the series.

Regards,
Matthias

Patch

diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index e42727e3568e..8f0e70ebb49d 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -458,11 +458,6 @@ void __init arm64_memblock_init(void)
 	high_memory = __va(memblock_end_of_DRAM() - 1) + 1;
 
 	dma_contiguous_reserve(arm64_dma32_phys_limit);
-
-#ifdef CONFIG_ARM64_4K_PAGES
-	hugetlb_cma_reserve(PUD_SHIFT - PAGE_SHIFT);
-#endif
-
 }
 
 void __init bootmem_init(void)
@@ -478,6 +473,11 @@ void __init bootmem_init(void)
 	min_low_pfn = min;
 
 	arm64_numa_init();
+
+#ifdef CONFIG_ARM64_4K_PAGES
+	hugetlb_cma_reserve(PUD_SHIFT - PAGE_SHIFT);
+#endif
+
 	/*
 	 * Sparsemem tries to allocate bootmem in memory_present(), so must be
 	 * done after the fixed reservations.
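
To see why moving the call works, here is the boot ordering involved,
abridged (a sketch of the arch/arm64 flow around v5.7; surrounding calls in
setup_arch() are elided, so take the exact sequence as an assumption rather
than verbatim kernel code):

/*
 * setup_arch()
 *   arm64_memblock_init()    old call site: memblock is up, but its
 *                            regions carry no node ids yet, so the
 *                            per-node loop in hugetlb_cma_reserve()
 *                            placed every CMA area on node 0
 *   paging_init()
 *   ...
 *   bootmem_init()
 *     arm64_numa_init()      memblock regions receive their node ids
 *     hugetlb_cma_reserve()  new call site: per-node reservations now
 *                            land on the intended nodes
 *     (sparsemem and zone setup follow, after the fixed reservations)
 */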