[v2,0/6] mm/memblock: Skip prep and initialization of struct pages freed later by HVO

Message ID: 20230730151606.2871391-1-usama.arif@bytedance.com

Usama Arif July 30, 2023, 3:16 p.m. UTC
If the region is for gigantic hugepages and if HVO is enabled, then those
struct pages which will be freed later by HVO don't need to be prepared and
initialized. This can save significant time when a large number of hugepages
are allocated at boot time.

For a 1G hugepage, this series avoids initialization and preparation of
262144 - 64 = 262080 struct pages per hugepage.
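The arithmetic can be checked directly. A sketch, assuming 4 KiB base pages and a 64-byte struct page (typical on x86_64), with HVO keeping a single vmemmap page per gigantic hugepage:

```python
# Assumptions (typical x86_64): 4 KiB base pages, 64-byte struct page.
HUGEPAGE_SIZE = 1 << 30       # 1 GiB
PAGE_SIZE = 1 << 12           # 4 KiB
STRUCT_PAGE_SIZE = 64         # bytes

# One struct page per base page inside the hugepage.
total = HUGEPAGE_SIZE // PAGE_SIZE        # 262144

# HVO keeps one vmemmap page per hugepage; the struct pages
# stored in that single page must still be initialized.
kept = PAGE_SIZE // STRUCT_PAGE_SIZE      # 64

print(total - kept)                       # 262080 struct pages skipped
```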

When tested on a 512G system (which can allocate at most 500 1G hugepages),
the kexec boot time to running init, with HVO and DEFERRED_STRUCT_PAGE_INIT
enabled, is 3.9 seconds without this patch series and 1.2 seconds with it.
This is approximately a 70% reduction in boot time and will significantly
reduce server downtime when using a large number of gigantic pages.
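The quoted reduction follows from the two measured times:

```python
# Measured kexec boot times from the 512G test above, in seconds.
before, after = 3.9, 1.2

reduction = (before - after) / before
print(f"{reduction:.0%}")   # 69%, quoted above as "approximately 70%"
```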

Thanks,
Usama

[v1->v2]:
- (Mike Rapoport) Code quality improvements (function names, arguments,
comments).

[RFC->v1]:
- (Mike Rapoport) Change from passing hugepage_size in
memblock_alloc_try_nid_raw for skipping struct page initialization to
using MEMBLOCK_RSRV_NOINIT flag



Usama Arif (6):
  mm: hugetlb: Skip prep of tail pages when HVO is enabled
  mm: hugetlb_vmemmap: Use nid of the head page to reallocate it
  memblock: pass memblock_type to memblock_setclr_flag
  memblock: introduce MEMBLOCK_RSRV_NOINIT flag
  mm: move allocation of gigantic hstates to the start of mm_core_init
  mm: hugetlb: Skip initialization of struct pages freed later by HVO

 include/linux/memblock.h |  9 +++++
 mm/hugetlb.c             | 71 +++++++++++++++++++++++++---------------
 mm/hugetlb_vmemmap.c     |  6 ++--
 mm/hugetlb_vmemmap.h     | 18 +++++++---
 mm/internal.h            |  9 +++++
 mm/memblock.c            | 45 +++++++++++++++++--------
 mm/mm_init.c             |  6 ++++
 7 files changed, 118 insertions(+), 46 deletions(-)

Comments

Usama Arif July 30, 2023, 10:28 p.m. UTC | #1
On 30/07/2023 16:16, Usama Arif wrote:
> If the region is for gigantic hugepages and if HVO is enabled, then those
> struct pages which will be freed later by HVO don't need to be prepared and
> initialized. This can save significant time when a large number of hugepages
> are allocated at boot time.
> 
> For a 1G hugepage, this series avoids initialization and preparation of
> 262144 - 64 = 262080 struct pages per hugepage.
> 
> When tested on a 512G system (which can allocate max 500 1G hugepages), the
> kexec-boot time with HVO and DEFERRED_STRUCT_PAGE_INIT enabled without this
> patchseries to running init is 3.9 seconds. With this patch it is 1.2 seconds.
> This represents an approximately 70% reduction in boot time and will
> significantly reduce server downtime when using a large number of
> gigantic pages.
> 
> Thanks,
> Usama
> 

There were build errors reported by kernel-bot when
CONFIG_HUGETLBFS/CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP is disabled, due to
patches 5 and 6; they should be fixed by the diff below. I will wait for
review and include it in the next revision, as it's a trivial diff.

diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index 3fff6f611c19..285b59b71203 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -38,6 +38,8 @@ static inline unsigned int hugetlb_vmemmap_optimizable_size(const struct hstate
 		return 0;
 	return size > 0 ? size : 0;
 }
+
+extern bool vmemmap_optimize_enabled;
 #else
 static inline int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head)
 {
@@ -58,6 +60,8 @@ static inline bool vmemmap_should_optimize(const struct hstate *h, const struct
 	return false;
 }
 
+static bool vmemmap_optimize_enabled = false;
+
 #endif /* CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP */
 
 static inline bool hugetlb_vmemmap_optimizable(const struct hstate *h)
@@ -65,6 +69,4 @@ static inline bool hugetlb_vmemmap_optimizable(const struct hstate *h)
 	return hugetlb_vmemmap_optimizable_size(h) != 0;
 }
 
-extern bool vmemmap_optimize_enabled;
-
 #endif /* _LINUX_HUGETLB_VMEMMAP_H */
diff --git a/mm/internal.h b/mm/internal.h
index 692bb1136a39..c3321afa36cb 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1106,7 +1106,7 @@ struct vma_prepare {
 #ifdef CONFIG_HUGETLBFS
 void __init hugetlb_hstate_alloc_gigantic_pages(void);
 #else
-static inline void __init hugetlb_hstate_alloc_gigantic_pages(void);
+static inline void __init hugetlb_hstate_alloc_gigantic_pages(void)
 {
 }
 #endif /* CONFIG_HUGETLBFS */

