Message ID: 1592893296-22040-1-git-send-email-anshuman.khandual@arm.com (mailing list archive)
State:      New, archived
Series:     arm64/hugetlb: Reserve CMA areas for gigantic pages on 16K and 64K configs
Hi Anshuman,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on v5.8-rc2]
[also build test ERROR on next-20200623]
[cannot apply to arm64/for-next/core]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Anshuman-Khandual/arm64-hugetlb-Reserve-CMA-areas-for-gigantic-pages-on-16K-and-64K-configs/20200623-142507
base:   48778464bb7d346b47157d21ffde2af6b2d39110
config: arm64-allnoconfig (attached as .config)
compiler: aarch64-linux-gcc (GCC) 9.3.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross ARCH=arm64

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

   arch/arm64/mm/init.c: In function 'bootmem_init':
>> arch/arm64/mm/init.c:428:2: error: implicit declaration of function 'arm64_hugetlb_cma_reserve'; did you mean 'hugetlb_cma_reserve'? [-Werror=implicit-function-declaration]
     428 |  arm64_hugetlb_cma_reserve();
         |  ^~~~~~~~~~~~~~~~~~~~~~~~~
         |  hugetlb_cma_reserve
   cc1: some warnings being treated as errors

vim +428 arch/arm64/mm/init.c

   408	
   409	void __init bootmem_init(void)
   410	{
   411		unsigned long min, max;
   412	
   413		min = PFN_UP(memblock_start_of_DRAM());
   414		max = PFN_DOWN(memblock_end_of_DRAM());
   415	
   416		early_memtest(min << PAGE_SHIFT, max << PAGE_SHIFT);
   417	
   418		max_pfn = max_low_pfn = max;
   419		min_low_pfn = min;
   420	
   421		arm64_numa_init();
   422	
   423		/*
   424		 * must be done after arm64_numa_init() which calls numa_init() to
   425		 * initialize node_online_map that gets used in hugetlb_cma_reserve()
   426		 * while allocating required CMA size across online nodes.
   427		 */
 > 428		arm64_hugetlb_cma_reserve();
   429	
   430		/*
   431		 * Sparsemem tries to allocate bootmem in memory_present(), so must be
   432		 * done after the fixed reservations.
   433		 */
   434		memblocks_present();
   435	
   436		sparse_init();
   437		zone_sizes_init(min, max);
   438	
   439		memblock_dump_all();
   440	}
   441	

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org
On 06/23/2020 02:54 PM, kernel test robot wrote:
>    423		/*
>    424		 * must be done after arm64_numa_init() which calls numa_init() to
>    425		 * initialize node_online_map that gets used in hugetlb_cma_reserve()
>    426		 * while allocating required CMA size across online nodes.
>    427		 */
>  > 428		arm64_hugetlb_cma_reserve();

Wrapping this call site with CONFIG_HUGETLB_PAGE solves the problem.
On 2020-06-23 13:48, Anshuman Khandual wrote:
> On 06/23/2020 02:54 PM, kernel test robot wrote:
>>    423		/*
>>    424		 * must be done after arm64_numa_init() which calls numa_init() to
>>    425		 * initialize node_online_map that gets used in hugetlb_cma_reserve()
>>    426		 * while allocating required CMA size across online nodes.
>>    427		 */
>>  > 428		arm64_hugetlb_cma_reserve();
>
> Wrapping this call site with CONFIG_HUGETLB_PAGE solves the problem.

...although it might be nicer to include asm/hugetlb.h directly so that
you can pick up the stub definition reliably.

Robin.
On 06/23/2020 10:10 PM, Robin Murphy wrote:
> On 2020-06-23 13:48, Anshuman Khandual wrote:
>> On 06/23/2020 02:54 PM, kernel test robot wrote:
>>>    427		 */
>>>  > 428		arm64_hugetlb_cma_reserve();
>>
>> Wrapping this call site with CONFIG_HUGETLB_PAGE solves the problem.
>
> ...although it might be nicer to include asm/hugetlb.h directly so that
> you can pick up the stub definition reliably.

Including <asm/hugetlb.h> directly does not solve the problem, and
<linux/hugetlb.h> is no better. arm64_hugetlb_cma_reserve() needs
protection with respect to both CMA and HUGETLB_PAGE. I had dropped the
HUGETLB_PAGE guard on the assumption that it was already taken care of,
since the stub itself lives in <asm/hugetlb.h>, but that turns out not
to be true.
On 2020-06-24 01:17, Anshuman Khandual wrote:
> On 06/23/2020 10:10 PM, Robin Murphy wrote:
>> ...although it might be nicer to include asm/hugetlb.h directly so
>> that you can pick up the stub definition reliably.
>
> Including <asm/hugetlb.h> directly does not solve the problem and
> <linux/hugetlb.h> is no better. arm64_hugetlb_cma_reserve() needs
> protection wrt both CMA and HUGETLB_PAGE. Dropped HUGETLB_PAGE
> assuming it should have been taken care as the stub itself was in
> <asm/hugetlb.h>, which turns out to be not true.

Sure, I wasn't suggesting that the implementation of the header itself
wouldn't need tweaking - the point I was trying to get at is that it's
preferable to have *either* a stub definition in an always-reachable
header, *or* inline #ifdefs around the caller. Mixing both, such that
there are 3 or 4 possible combinations, just isn't nice to maintain.

Robin.
On 06/24/2020 03:15 PM, Robin Murphy wrote:
> On 2020-06-24 01:17, Anshuman Khandual wrote:
>> Including <asm/hugetlb.h> directly does not solve the problem and
>> <linux/hugetlb.h> is no better. arm64_hugetlb_cma_reserve() needs
>> protection wrt both CMA and HUGETLB_PAGE.
>
> Sure, I wasn't suggesting that the implementation of the header itself
> wouldn't need tweaking - the point I was trying to get at is that it's
> preferable to have *either* a stub definition in an always-reachable
> header, *or* inline #ifdefs around the caller. Mixing both such that
> there are 3 or 4 possible combinations just isn't nice to maintain.

Makes sense. Which one of these would be preferred instead?

1. Call site is protected with HUGETLB_PAGE and CMA
2. Definition is protected with CMA and HUGETLB_PAGE in
   arch/arm64/mm/hugetlbpage.c
3. Declaration is in arch/arm64/include/asm/hugetlb.h (without any #ifdef)

OR

1. Definition is protected with CMA and HUGETLB_PAGE in
   arch/arm64/mm/hugetlbpage.c
2. Stub definition and declaration with #ifdefs (HUGETLB_PAGE and CMA)
   in arch/arm64/mm/init.c
3. Call site is without any #ifdef
Hi Anshuman,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on v5.8-rc2]
[also build test ERROR on next-20200626]
[cannot apply to arm64/for-next/core]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Anshuman-Khandual/arm64-hugetlb-Reserve-CMA-areas-for-gigantic-pages-on-16K-and-64K-configs/20200623-142507
base:   48778464bb7d346b47157d21ffde2af6b2d39110
config: arm64-randconfig-r025-20200624 (attached as .config)
compiler: clang version 11.0.0 (https://github.com/llvm/llvm-project 6e11ed52057ffc39941cb2de6d93cae522db4782)
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # install arm64 cross compiling tool for clang build
        # apt-get install binutils-aarch64-linux-gnu
        # save the attached .config to linux build tree
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross ARCH=arm64

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

>> arch/arm64/mm/init.c:428:2: error: implicit declaration of function 'arm64_hugetlb_cma_reserve' [-Werror,-Wimplicit-function-declaration]
           arm64_hugetlb_cma_reserve();
           ^
   arch/arm64/mm/init.c:428:2: note: did you mean 'hugetlb_cma_reserve'?
   include/linux/hugetlb.h:915:27: note: 'hugetlb_cma_reserve' declared here
   static inline __init void hugetlb_cma_reserve(int order)
                             ^
   1 error generated.

vim +/arm64_hugetlb_cma_reserve +428 arch/arm64/mm/init.c

   408	
   409	void __init bootmem_init(void)
   410	{
   411		unsigned long min, max;
   412	
   413		min = PFN_UP(memblock_start_of_DRAM());
   414		max = PFN_DOWN(memblock_end_of_DRAM());
   415	
   416		early_memtest(min << PAGE_SHIFT, max << PAGE_SHIFT);
   417	
   418		max_pfn = max_low_pfn = max;
   419		min_low_pfn = min;
   420	
   421		arm64_numa_init();
   422	
   423		/*
   424		 * must be done after arm64_numa_init() which calls numa_init() to
   425		 * initialize node_online_map that gets used in hugetlb_cma_reserve()
   426		 * while allocating required CMA size across online nodes.
   427		 */
 > 428		arm64_hugetlb_cma_reserve();
   429	
   430		/*
   431		 * Sparsemem tries to allocate bootmem in memory_present(), so must be
   432		 * done after the fixed reservations.
   433		 */
   434		memblocks_present();
   435	
   436		sparse_init();
   437		zone_sizes_init(min, max);
   438	
   439		memblock_dump_all();
   440	}
   441	

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org
diff --git a/arch/arm64/include/asm/hugetlb.h b/arch/arm64/include/asm/hugetlb.h
index 94ba0c5..8eea0e0 100644
--- a/arch/arm64/include/asm/hugetlb.h
+++ b/arch/arm64/include/asm/hugetlb.h
@@ -17,6 +17,14 @@
 extern bool arch_hugetlb_migration_supported(struct hstate *h);
 #endif
 
+#ifdef CONFIG_CMA
+void arm64_hugetlb_cma_reserve(void);
+#else
+static inline void arm64_hugetlb_cma_reserve(void)
+{
+}
+#endif
+
 static inline void arch_clear_hugepage_flags(struct page *page)
 {
 	clear_bit(PG_dcache_clean, &page->flags);
diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
index 0a52ce4..ea7fb48 100644
--- a/arch/arm64/mm/hugetlbpage.c
+++ b/arch/arm64/mm/hugetlbpage.c
@@ -19,6 +19,44 @@
 #include <asm/tlbflush.h>
 #include <asm/pgalloc.h>
 
+/*
+ * HugeTLB Support Matrix
+ *
+ * ---------------------------------------------------
+ * | Page Size | CONT PTE |  PMD  | CONT PMD |  PUD  |
+ * ---------------------------------------------------
+ * |     4K    |   64K    |   2M  |    32M   |   1G  |
+ * |    16K    |    2M    |  32M  |     1G   |       |
+ * |    64K    |    2M    | 512M  |    16G   |       |
+ * ---------------------------------------------------
+ */
+
+/*
+ * Reserve CMA areas for the largest supported gigantic
+ * huge page when requested. Any other smaller gigantic
+ * huge pages could still be served from those areas.
+ */
+#ifdef CONFIG_CMA
+void __init arm64_hugetlb_cma_reserve(void)
+{
+	int order;
+
+#ifdef CONFIG_ARM64_4K_PAGES
+	order = PUD_SHIFT - PAGE_SHIFT;
+#else
+	order = CONT_PMD_SHIFT + PMD_SHIFT - PAGE_SHIFT;
+#endif
+	/*
+	 * HugeTLB CMA reservation is required for gigantic
+	 * huge pages which could not be allocated via the
+	 * page allocator. Just warn if there is any change
+	 * breaking this assumption.
+	 */
+	WARN_ON(order <= MAX_ORDER);
+	hugetlb_cma_reserve(order);
+}
+#endif /* CONFIG_CMA */
+
 #ifdef CONFIG_ARCH_ENABLE_HUGEPAGE_MIGRATION
 bool arch_hugetlb_migration_supported(struct hstate *h)
 {
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 1e93cfc..fabf8b0 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -425,9 +425,7 @@ void __init bootmem_init(void)
 	 * initialize node_online_map that gets used in hugetlb_cma_reserve()
 	 * while allocating required CMA size across online nodes.
 	 */
-#ifdef CONFIG_ARM64_4K_PAGES
-	hugetlb_cma_reserve(PUD_SHIFT - PAGE_SHIFT);
-#endif
+	arm64_hugetlb_cma_reserve();
 
 	/*
 	 * Sparsemem tries to allocate bootmem in memory_present(), so must be
Currently the 'hugetlb_cma=' command line argument does not create a CMA
area on ARM64_16K_PAGES and ARM64_64K_PAGES based platforms. Instead, it
just ends up with the following warning message, because
hugetlb_cma_reserve() never gets called for these huge page sizes.

[   64.255669] hugetlb_cma: the option isn't supported by current arch

This enables CMA area reservation on ARM64_16K_PAGES and ARM64_64K_PAGES
configs by defining a unified arm64_hugetlb_cma_reserve() that is wrapped
in CONFIG_CMA.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Barry Song <song.bao.hua@hisilicon.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
Applies on 5.8-rc2.

 arch/arm64/include/asm/hugetlb.h |  8 ++++++++
 arch/arm64/mm/hugetlbpage.c      | 38 ++++++++++++++++++++++++++++++++++++++
 arch/arm64/mm/init.c             |  4 +---
 3 files changed, 47 insertions(+), 3 deletions(-)