[v20,0/9] Free some vmemmap pages of HugeTLB page

Message ID 20210415084005.25049-1-songmuchun@bytedance.com (mailing list archive)
Series Free some vmemmap pages of HugeTLB page

Message

Muchun Song April 15, 2021, 8:39 a.m. UTC
Hi,

Since Mike's patches (make hugetlb put_page safe for all calling contexts[1])
have been applied to next-20210412, we can move forward on this patch series
now.

This patch series frees some vmemmap pages (struct page structures)
associated with each HugeTLB page when the page is preallocated, in order to
save memory.

In order to reduce the difficulty of code review, starting with this version
we disable PMD/huge page mapping of vmemmap when this feature is enabled.
This eliminates a large amount of complex page table manipulation code. Once
this patch series is solid, we can add the vmemmap page table manipulation
code back in the future.
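
For example, assuming the command line syntax added by patch 7 of this series
(an illustrative sketch of usage, not a quote of the final documentation):

   hugetlb_free_vmemmap=on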

The struct page structures (page structs) are used to describe a physical
page frame. By default, there is a one-to-one mapping from a page frame to
its corresponding page struct.
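
With SPARSEMEM_VMEMMAP (the configuration this series builds on), that
mapping is plain pointer arithmetic into a virtually contiguous array of page
structs. A minimal sketch of the generic helpers (cf.
include/asm-generic/memory_model.h):

   /* vmemmap points at the virtually contiguous array of page structs,
    * so converting between a pfn and its page struct is just indexing. */
   #define __pfn_to_page(pfn)      (vmemmap + (pfn))
   #define __page_to_pfn(page)     (unsigned long)((page) - vmemmap)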

HugeTLB pages consist of multiple base pages and are supported by many
architectures. See hugetlbpage.rst in the Documentation directory for more
details. On the x86 architecture, HugeTLB pages of size 2MB and 1GB are
currently supported. Since the base page size on x86 is 4KB, a 2MB HugeTLB
page consists of 512 base pages and a 1GB HugeTLB page consists of 262144
base pages. For each base page, there is a corresponding page struct.

Within the HugeTLB subsystem, only the first 4 page structs are used to
contain unique information about a HugeTLB page. HUGETLB_CGROUP_MIN_ORDER
provides this upper limit. The only 'useful' information in the remaining
page structs is the compound_head field, and this field is the same for all
tail pages.
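
For reference, a sketch of where that limit comes from (based on
include/linux/hugetlb_cgroup.h at the time of this series):

   /* A HugeTLB page must span at least 2^HUGETLB_CGROUP_MIN_ORDER = 4
    * page structs, because the hugetlb cgroup state is stored in the
    * page[2] and page[3] tail page structs. */
   #define HUGETLB_CGROUP_MIN_ORDER        2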

By removing redundant page structs for HugeTLB pages, memory can be returned
to the buddy allocator for other uses.

When the system boots up, every 2MB HugeTLB page has 512 struct page structs,
which occupy 8 pages (sizeof(struct page) * 512 / PAGE_SIZE).
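
Spelled out, assuming the usual x86-64 values of sizeof(struct page) == 64
and PAGE_SIZE == 4096:

   2MB / 4KB      = 512 page structs per 2MB HugeTLB page
   512 * 64 bytes = 32KB of vmemmap
   32KB / 4KB     = 8 vmemmap pages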

    HugeTLB                  struct pages(8 pages)         page frame(8 pages)
 +-----------+ ---virt_to_page---> +-----------+   mapping to   +-----------+
 |           |                     |     0     | -------------> |     0     |
 |           |                     +-----------+                +-----------+
 |           |                     |     1     | -------------> |     1     |
 |           |                     +-----------+                +-----------+
 |           |                     |     2     | -------------> |     2     |
 |           |                     +-----------+                +-----------+
 |           |                     |     3     | -------------> |     3     |
 |           |                     +-----------+                +-----------+
 |           |                     |     4     | -------------> |     4     |
 |    2MB    |                     +-----------+                +-----------+
 |           |                     |     5     | -------------> |     5     |
 |           |                     +-----------+                +-----------+
 |           |                     |     6     | -------------> |     6     |
 |           |                     +-----------+                +-----------+
 |           |                     |     7     | -------------> |     7     |
 |           |                     +-----------+                +-----------+
 |           |
 |           |
 |           |
 +-----------+

The value of page->compound_head is the same for all tail pages. The first
page of page structs (page 0) associated with the HugeTLB page contains the 4
page structs necessary to describe the HugeTLB page. The only use of the
remaining pages of page structs (page 1 to page 7) is to point to
page->compound_head.
Therefore, we can remap pages 2 to 7 to page 1. Only 2 pages of page structs
will be used for each HugeTLB page. This will allow us to free the remaining
6 pages to the buddy allocator.
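
A condensed sketch of the freeing-side PTE callback (modeled on
vmemmap_remap_free()/vmemmap_remap_pte() from this series; illustrative, not
the final code):

   static void vmemmap_remap_pte(pte_t *pte, unsigned long addr,
                                 struct vmemmap_remap_walk *walk)
   {
           /*
            * Remap the tail page to the page which @reuse_addr is mapped
            * to (read-only, since all tail page structs are identical)
            * and queue the old page for freeing to the buddy allocator.
            */
           pgprot_t pgprot = PAGE_KERNEL_RO;
           pte_t entry = mk_pte(walk->reuse_page, pgprot);
           struct page *page = pte_page(*pte);

           list_add(&page->lru, walk->vmemmap_pages);
           set_pte_at(&init_mm, addr, pte, entry);
   }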

Here is how things look after remapping.

    HugeTLB                  struct pages(8 pages)         page frame(8 pages)
 +-----------+ ---virt_to_page---> +-----------+   mapping to   +-----------+
 |           |                     |     0     | -------------> |     0     |
 |           |                     +-----------+                +-----------+
 |           |                     |     1     | -------------> |     1     |
 |           |                     +-----------+                +-----------+
 |           |                     |     2     | ----------------^ ^ ^ ^ ^ ^
 |           |                     +-----------+                   | | | | |
 |           |                     |     3     | ------------------+ | | | |
 |           |                     +-----------+                     | | | |
 |           |                     |     4     | --------------------+ | | |
 |    2MB    |                     +-----------+                       | | |
 |           |                     |     5     | ----------------------+ | |
 |           |                     +-----------+                         | |
 |           |                     |     6     | ------------------------+ |
 |           |                     +-----------+                           |
 |           |                     |     7     | --------------------------+
 |           |                     +-----------+
 |           |
 |           |
 |           |
 +-----------+

When a HugeTLB page is freed to the buddy system, we must allocate 6 pages
for the vmemmap and restore the previous mapping relationship.

Apart from the 2MB HugeTLB page, we also have the 1GB HugeTLB page. It is
similar to the 2MB HugeTLB page, and the same approach can be used to free
its vmemmap pages.

In this case, for a 1GB HugeTLB page, we can save 4094 pages. This is a very
substantial gain. On our servers, we run SPDK/QEMU applications that use
1024GB of HugeTLB pages. With this feature enabled, we can save ~16GB (1GB
hugepages) / ~12GB (2MB hugepages) of memory.
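
The arithmetic behind those numbers (4KB base pages assumed):

   1024GB / 2MB = 524288 hugepages, 524288 * 6 pages * 4KB     = 12GB
   1024GB / 1GB =   1024 hugepages,   1024 * 4094 pages * 4KB ~= 16GB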

Because the vmemmap page tables are reconstructed on the freeing/allocating
path, some overhead is added. Here is an overhead analysis.

1) Allocating 10240 2MB HugeTLB pages.

   a) With this patch series applied:
   # time echo 10240 > /proc/sys/vm/nr_hugepages

   real     0m0.166s
   user     0m0.000s
   sys      0m0.166s

   # bpftrace -e 'kprobe:alloc_fresh_huge_page { @start[tid] = nsecs; }
     kretprobe:alloc_fresh_huge_page /@start[tid]/ { @latency = hist(nsecs -
     @start[tid]); delete(@start[tid]); }'
   Attaching 2 probes...

   @latency:
   [8K, 16K)           5476 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
   [16K, 32K)          4760 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@       |
   [32K, 64K)             4 |                                                    |

   b) Without this patch series:
   # time echo 10240 > /proc/sys/vm/nr_hugepages

   real     0m0.067s
   user     0m0.000s
   sys      0m0.067s

   # bpftrace -e 'kprobe:alloc_fresh_huge_page { @start[tid] = nsecs; }
     kretprobe:alloc_fresh_huge_page /@start[tid]/ { @latency = hist(nsecs -
     @start[tid]); delete(@start[tid]); }'
   Attaching 2 probes...

   @latency:
   [4K, 8K)           10147 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
   [8K, 16K)             93 |                                                    |

   Summary: allocation with this feature is about 2x slower than before.

2) Freeing 10240 2MB HugeTLB pages.

   a) With this patch series applied:
   # time echo 0 > /proc/sys/vm/nr_hugepages

   real     0m0.213s
   user     0m0.000s
   sys      0m0.213s

   # bpftrace -e 'kprobe:free_pool_huge_page { @start[tid] = nsecs; }
     kretprobe:free_pool_huge_page /@start[tid]/ { @latency = hist(nsecs -
     @start[tid]); delete(@start[tid]); }'
   Attaching 2 probes...

   @latency:
   [8K, 16K)              6 |                                                    |
   [16K, 32K)         10227 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
   [32K, 64K)             7 |                                                    |

   b) Without this patch series:
   # time echo 0 > /proc/sys/vm/nr_hugepages

   real     0m0.081s
   user     0m0.000s
   sys      0m0.081s

   # bpftrace -e 'kprobe:free_pool_huge_page { @start[tid] = nsecs; }
     kretprobe:free_pool_huge_page /@start[tid]/ { @latency = hist(nsecs -
     @start[tid]); delete(@start[tid]); }'
   Attaching 2 probes...

   @latency:
   [4K, 8K)            6805 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
   [8K, 16K)           3427 |@@@@@@@@@@@@@@@@@@@@@@@@@@                          |
   [16K, 32K)             8 |                                                    |

   Summary: the overhead of __free_hugepage is about 2-3x higher than before.

Although the overhead has increased, the overhead is not significant. Like Mike
said, "However, remember that the majority of use cases create HugeTLB pages at
or shortly after boot time and add them to the pool. So, additional overhead is
at pool creation time. There is no change to 'normal run time' operations of
getting a page from or returning a page to the pool (think page fault/unmap)".

Despite the overhead, and in addition to the memory gains from this series,
the following data was obtained by Joao Martins. Many thanks for his effort.

There is an additional benefit: page (un)pinners will see an improvement,
which Joao presumes is because there are fewer memmap pages, so the tail/head
pages stay in the cache more often.

Out of the box Joao saw (when comparing linux-next against linux-next + this series)
with gup_test and pinning a 16G HugeTLB file (with 1G pages):

	get_user_pages(): ~32k -> ~9k
	unpin_user_pages(): ~75k -> ~70k

Usually any tight loop fetching compound_head(), or reading tail page data
(e.g. compound_head), benefits a lot. There were some unpinning
inefficiencies Joao was fixing[2]; with that fix added, the improvement is
even larger:

	unpin_user_pages(): ~27k -> ~3.8k

[1] https://lore.kernel.org/linux-mm/20210409205254.242291-1-mike.kravetz@oracle.com/
[2] https://lore.kernel.org/linux-mm/20210204202500.26474-1-joao.m.martins@oracle.com/

Todo:
  - Free all of the tail vmemmap pages
    Currently, for a 2MB HugeTLB page, we only free 6 vmemmap pages, but we
    could actually free 7. In that case, 8 of the 512 struct page structures
    would have the PG_head flag set, so compound_head() would need a slight
    adjustment to return the real head struct page when its parameter is a
    tail struct page with the PG_head flag set (see the sketch after this
    list).

    In order to make the code evolution route clearer, this feature can be a
    separate patch after this patchset is solid.

  - Support for other architectures (e.g. aarch64).
  - Enable PMD/huge page mapping of vmemmap even when this feature is enabled.
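
A sketch of the compound_head() adjustment mentioned in the first Todo item
(the helper name and checks are hypothetical, and any real version would need
to keep the hot path cheap):

   /*
    * Sketch only: if all 7 tail vmemmap pages were freed, the page
    * structs at indexes 64, 128, ..., 448 would alias the head and thus
    * carry a bogus PG_head. The page struct following a *real* head is
    * always a tail whose compound_head points back at the head, so
    * peeking at page[1] tells the two cases apart (and also returns the
    * correct result when @page really is the head).
    */
   static inline const struct page *page_fixed_fake_head(const struct page *page)
   {
           if (test_bit(PG_head, &page->flags)) {
                   unsigned long head = READ_ONCE(page[1].compound_head);

                   if (head & 1)   /* bit 0 set: page[1] is a tail page */
                           return (const struct page *)(head - 1);
           }
           return page;
   }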

Changelog in v19 -> v20:
  - Rebase to next-20210412.
  - Introduce workqueue to defer freeing HugeTLB pages.
  - Remove all tags (Reviewed-by or Tested-by) from patch 6.
  - Disable memmap_on_memory when hugetlb_free_vmemmap enabled (patch 8).

Changelog in v18 -> v19:
  - Fix compiler error on sparc arch. Thanks Stephen.
  - Move the patch "gather discrete indexes of tail page" ahead of "free the
    vmemmap pages associated with each HugeTLB page".
  - Remove some BUG_ON from patch #4.
  - Update patch #6 changelog.
  - Update Documentation/admin-guide/mm/memory-hotplug.rst.
  - Drop the patch of "optimize the code with the help of the compiler".
  - Update Documentation/admin-guide/kernel-parameters.txt in patch #7.
  - Trim update_and_free_page.

  Thanks to Michal, Oscar and Mike's review and suggestions.

Changelog in v17 -> v18:
  - Add complete copyright to bootmem_info.c (Suggested by Balbir).
  - Fix some issues (in patch #4) suggested by Mike.

  Thanks to Balbir and Mike's review. Also thanks to Chen Huang and
  Bodeddula Balasubramaniam's testing.

Changelog in v16 -> v17:
  - Fix issues suggested by Mike and Oscar.
  - Update commit log suggested by Michal.

  Thanks to Mike, David H and Michal's suggestions and review.

Changelog in v15 -> v16:
  - Use GFP_KERNEL to allocate vmemmap pages.

  Thanks to Mike, David H and Michal's suggestions.

Changelog in v14 -> v15:
  - Fix some issues suggested by Oscar. Thanks to Oscar.
  - Add the numbers Joao Martins measured to the cover letter. Thanks for his effort.

Changelog in v13 -> v14:
  - Refuse to free the HugeTLB page when the system is under memory pressure.
  - Use GFP_ATOMIC to allocate vmemmap pages instead of GFP_KERNEL.
  - Rebase to linux-next 20210202.
  - Fix and add some comments for vmemmap_remap_free().

  Thanks to Oscar, Mike, David H and David R's suggestions and review.

Changelog in v12 -> v13:
  - Remove VM_WARN_ON_PAGE macro.
  - Add more comments in vmemmap_pte_range() and vmemmap_remap_free().

  Thanks to Oscar and Mike's suggestions and review.

Changelog in v11 -> v12:
  - Move VM_WARN_ON_PAGE to a separate patch.
  - Call __free_hugepage() with hugetlb_lock (See patch #5.) to serialize
    with dissolve_free_huge_page(). It is to prepare for patch #9.
  - Introduce PageHugeInflight. See patch #9.

Changelog in v10 -> v11:
  - Fix compiler error when !CONFIG_HUGETLB_PAGE_FREE_VMEMMAP.
  - Rework some comments and commit changes.
  - Rework vmemmap_remap_free() to 3 parameters.

  Thanks to Oscar and Mike's suggestions and review.

Changelog in v9 -> v10:
  - Fix a bug in patch #11. Thanks to Oscar for pointing that out.
  - Rework some commit log or comments. Thanks Mike and Oscar for the suggestions.
  - Drop VMEMMAP_TAIL_PAGE_REUSE in the patch #3.

  Thank you very much Mike and Oscar for reviewing the code.

Changelog in v8 -> v9:
  - Rework some code. Many thanks to Oscar.
  - Put all the non-hugetlb vmemmap functions under sparsemem-vmemmap.c.

Changelog in v7 -> v8:
  - Adjust the order of patches.

  Many thanks to David and Oscar. Your suggestions are very valuable.

Changelog in v6 -> v7:
  - Rebase to linux-next 20201130
  - Do not use basepage mapping for vmemmap when this feature is disabled.
  - Rework some patches:
    [PATCH v6 08/16] mm/hugetlb: Free the vmemmap pages associated with each hugetlb page
    [PATCH v6 10/16] mm/hugetlb: Allocate the vmemmap pages associated with each hugetlb page

  Thanks to Oscar and Barry.

Changelog in v5 -> v6:
  - Disable PMD/huge page mapping of vmemmap if this feature was enabled.
  - Simplify the first version code.

Changelog in v4 -> v5:
  - Rework some comments and code in [PATCH v4 04/21] and [PATCH v4 05/21].

  Thanks to Mike and Oscar's suggestions.

Changelog in v3 -> v4:
  - Move all the vmemmap functions to hugetlb_vmemmap.c.
  - Make CONFIG_HUGETLB_PAGE_FREE_VMEMMAP default to y; if we want to disable
    this feature, we do so via a boot/kernel command line parameter.
  - Remove vmemmap_pgtable_{init, deposit, withdraw}() helper functions.
  - Initialize page table lock for vmemmap through core_initcall mechanism.

  Thanks for Mike and Oscar's suggestions.

Changelog in v2 -> v3:
  - Rename some helper functions. Thanks Mike.
  - Rework some code. Thanks Mike and Oscar.
  - Remap the tail vmemmap page with PAGE_KERNEL_RO instead of PAGE_KERNEL.
    Thanks Matthew.
  - Add some overhead analysis in the cover letter.
  - Use the vmemmap pmd table lock instead of a hugetlb-specific global lock.

Changelog in v1 -> v2:
  - Fix: do not call dissolve_compound_page in alloc_huge_page_vmemmap().
  - Fix some typo and code style problems.
  - Remove unused handle_vmemmap_fault().
  - Merge some commits to one commit suggested by Mike.

Muchun Song (9):
  mm: memory_hotplug: factor out bootmem core functions to
    bootmem_info.c
  mm: hugetlb: introduce a new config HUGETLB_PAGE_FREE_VMEMMAP
  mm: hugetlb: gather discrete indexes of tail page
  mm: hugetlb: free the vmemmap pages associated with each HugeTLB page
  mm: hugetlb: defer freeing of HugeTLB pages
  mm: hugetlb: alloc the vmemmap pages associated with each HugeTLB page
  mm: hugetlb: add a kernel parameter hugetlb_free_vmemmap
  mm: memory_hotplug: disable memmap_on_memory when hugetlb_free_vmemmap
    enabled
  mm: hugetlb: introduce nr_free_vmemmap_pages in the struct hstate

 Documentation/admin-guide/kernel-parameters.txt |  21 ++
 Documentation/admin-guide/mm/hugetlbpage.rst    |  11 +
 Documentation/admin-guide/mm/memory-hotplug.rst |  13 ++
 arch/sparc/mm/init_64.c                         |   1 +
 arch/x86/mm/init_64.c                           |  13 +-
 drivers/acpi/acpi_memhotplug.c                  |   1 +
 fs/Kconfig                                      |   5 +
 include/linux/bootmem_info.h                    |  66 ++++++
 include/linux/hugetlb.h                         |  46 +++-
 include/linux/hugetlb_cgroup.h                  |  19 +-
 include/linux/memory_hotplug.h                  |  27 ---
 include/linux/mm.h                              |   5 +
 mm/Makefile                                     |   2 +
 mm/bootmem_info.c                               | 127 ++++++++++
 mm/hugetlb.c                                    | 157 +++++++++++--
 mm/hugetlb_vmemmap.c                            | 297 ++++++++++++++++++++++++
 mm/hugetlb_vmemmap.h                            |  45 ++++
 mm/memory_hotplug.c                             | 134 ++---------
 mm/sparse-vmemmap.c                             | 267 +++++++++++++++++++++
 mm/sparse.c                                     |   1 +
 20 files changed, 1078 insertions(+), 180 deletions(-)
 create mode 100644 include/linux/bootmem_info.h
 create mode 100644 mm/bootmem_info.c
 create mode 100644 mm/hugetlb_vmemmap.c
 create mode 100644 mm/hugetlb_vmemmap.h

Comments

Mike Kravetz April 19, 2021, 11:19 p.m. UTC | #1
On 4/15/21 1:40 AM, Muchun Song wrote:
> When we free a HugeTLB page to the buddy allocator, we need to allocate
> the vmemmap pages associated with it. However, we may not be able to
> allocate the vmemmap pages when the system is under memory pressure. In
> this case, we just refuse to free the HugeTLB page. This changes behavior
> in some corner cases as listed below:
> 
>  1) Failing to free a huge page triggered by the user (decrease nr_pages).
> 
>     User needs to try again later.
> 
>  2) Failing to free a surplus huge page when freed by the application.
> 
>     Try again later when freeing a huge page next time.
> 
>  3) Failing to dissolve a free huge page on ZONE_MOVABLE via
>     offline_pages().
> 
>     This can happen when we have plenty of ZONE_MOVABLE memory, but
>     not enough kernel memory to allocate vmemmap pages.  We may even
>     be able to migrate huge page contents, but will not be able to
>     dissolve the source huge page.  This will prevent an offline
>     operation and is unfortunate as memory offlining is expected to
>     succeed on movable zones.  Users that depend on memory hotplug
>     to succeed for movable zones should carefully consider whether the
>     memory savings gained from this feature are worth the risk of
>     possibly not being able to offline memory in certain situations.
> 
>  4) Failing to dissolve a huge page on CMA/ZONE_MOVABLE via
>     alloc_contig_range() - once we have that handling in place. Mainly
>     affects CMA and virtio-mem.
> 
>     Similar to 3). virtio-mem will handle migration errors gracefully.
>     CMA might be able to fallback on other free areas within the CMA
>     region.
> 
> Vmemmap pages are allocated from the page freeing context. In order for
> those allocations to be not disruptive (e.g. trigger oom killer)
> __GFP_NORETRY is used. hugetlb_lock is dropped for the allocation
> because a non sleeping allocation would be too fragile and it could fail
> too easily under memory pressure. GFP_ATOMIC or other modes to access
> memory reserves are not used because we want to prevent consuming
> reserves under heavy hugetlb freeing.
> 
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> ---
>  Documentation/admin-guide/mm/hugetlbpage.rst    |  8 +++
>  Documentation/admin-guide/mm/memory-hotplug.rst | 13 ++++
>  include/linux/hugetlb.h                         |  3 +
>  include/linux/mm.h                              |  2 +
>  mm/hugetlb.c                                    | 85 ++++++++++++++++++++-----
>  mm/hugetlb_vmemmap.c                            | 34 ++++++++++
>  mm/hugetlb_vmemmap.h                            |  6 ++
>  mm/sparse-vmemmap.c                             | 75 +++++++++++++++++++++-
>  8 files changed, 210 insertions(+), 16 deletions(-)
> 
> diff --git a/Documentation/admin-guide/mm/hugetlbpage.rst b/Documentation/admin-guide/mm/hugetlbpage.rst
> index f7b1c7462991..6988895d09a8 100644
> --- a/Documentation/admin-guide/mm/hugetlbpage.rst
> +++ b/Documentation/admin-guide/mm/hugetlbpage.rst
> @@ -60,6 +60,10 @@ HugePages_Surp
>          the pool above the value in ``/proc/sys/vm/nr_hugepages``. The
>          maximum number of surplus huge pages is controlled by
>          ``/proc/sys/vm/nr_overcommit_hugepages``.
> +	Note: When the feature of freeing unused vmemmap pages associated
> +	with each hugetlb page is enabled, the number of surplus huge pages
> +	may be temporarily larger than the maximum number of surplus huge
> +	pages when the system is under memory pressure.
>  Hugepagesize
>  	is the default hugepage size (in Kb).
>  Hugetlb
> @@ -80,6 +84,10 @@ returned to the huge page pool when freed by a task.  A user with root
>  privileges can dynamically allocate more or free some persistent huge pages
>  by increasing or decreasing the value of ``nr_hugepages``.
>  
> +Note: When the feature of freeing unused vmemmap pages associated with each
> +hugetlb page is enabled, we can fail to free the huge pages triggered by
> +the user when the system is under memory pressure.  Please try again later.
> +
>  Pages that are used as huge pages are reserved inside the kernel and cannot
>  be used for other purposes.  Huge pages cannot be swapped out under
>  memory pressure.
> diff --git a/Documentation/admin-guide/mm/memory-hotplug.rst b/Documentation/admin-guide/mm/memory-hotplug.rst
> index 05d51d2d8beb..c6bae2d77160 100644
> --- a/Documentation/admin-guide/mm/memory-hotplug.rst
> +++ b/Documentation/admin-guide/mm/memory-hotplug.rst
> @@ -357,6 +357,19 @@ creates ZONE_MOVABLE as following.
>     Unfortunately, there is no information to show which memory block belongs
>     to ZONE_MOVABLE. This is TBD.
>  
> +   Memory offlining can fail when dissolving a free huge page on ZONE_MOVABLE
> +   and the feature of freeing unused vmemmap pages associated with each hugetlb
> +   page is enabled.
> +
> +   This can happen when we have plenty of ZONE_MOVABLE memory, but not enough
> +   kernel memory to allocate vmemmap pages.  We may even be able to migrate
> +   huge page contents, but will not be able to dissolve the source huge page.
> +   This will prevent an offline operation and is unfortunate as memory offlining
> +   is expected to succeed on movable zones.  Users that depend on memory hotplug
> +   to succeed for movable zones should carefully consider whether the memory
> +   savings gained from this feature are worth the risk of possibly not being
> +   able to offline memory in certain situations.
> +
>  .. note::
>     Techniques that rely on long-term pinnings of memory (especially, RDMA and
>     vfio) are fundamentally problematic with ZONE_MOVABLE and, therefore, memory
> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> index 0abed7e766b8..6e970a7d3480 100644
> --- a/include/linux/hugetlb.h
> +++ b/include/linux/hugetlb.h
> @@ -525,6 +525,7 @@ unsigned long hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
>   *	code knows it has only reference.  All other examinations and
>   *	modifications require hugetlb_lock.
>   * HPG_freed - Set when page is on the free lists.
> + * HPG_vmemmap_optimized - Set when the vmemmap pages of the page are freed.
>   *	Synchronization: hugetlb_lock held for examination and modification.

I like the per-page flag.  In previous versions of the series, you just
checked the free_vmemmap_pages_per_hpage() to determine if vmemmap
should be allocated.  Is there any change in functionality that makes it
necessary to set the flag in each page, or is it mostly for flexibility
going forward?

>   */
>  enum hugetlb_page_flags {
> @@ -532,6 +533,7 @@ enum hugetlb_page_flags {
>  	HPG_migratable,
>  	HPG_temporary,
>  	HPG_freed,
> +	HPG_vmemmap_optimized,
>  	__NR_HPAGEFLAGS,
>  };
>  
> @@ -577,6 +579,7 @@ HPAGEFLAG(RestoreReserve, restore_reserve)
>  HPAGEFLAG(Migratable, migratable)
>  HPAGEFLAG(Temporary, temporary)
>  HPAGEFLAG(Freed, freed)
> +HPAGEFLAG(VmemmapOptimized, vmemmap_optimized)
>  
>  #ifdef CONFIG_HUGETLB_PAGE
>  
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index a4d160ddb749..d0854828bb9c 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -3048,6 +3048,8 @@ static inline void print_vma_addr(char *prefix, unsigned long rip)
>  
>  void vmemmap_remap_free(unsigned long start, unsigned long end,
>  			unsigned long reuse);
> +int vmemmap_remap_alloc(unsigned long start, unsigned long end,
> +			unsigned long reuse, gfp_t gfp_mask);
>  
>  void *sparse_buffer_alloc(unsigned long size);
>  struct page * __populate_section_memmap(unsigned long pfn,
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index eeb8f5480170..1c37f0098e00 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1376,6 +1376,34 @@ static void remove_hugetlb_page(struct hstate *h, struct page *page,
>  	h->nr_huge_pages_node[nid]--;
>  }
>  
> +static void add_hugetlb_page(struct hstate *h, struct page *page,
> +			     bool adjust_surplus)
> +{

We need to be a bit careful with hugepage specific flags that may be
set.  The routine remove_hugetlb_page which is called for 'page' before
this routine will not clear any of the hugepage specific flags.  If the
calling path goes through free_huge_page, most but not all flags are
cleared.

We had a discussion about clearing the page->private field in Oscar's
series.  In the case of 'new' pages we can assume page->private is
cleared, but perhaps we should not make that assumption here.  Since we
hope to rarely call this routine, it might be safer to do something
like:

	set_page_private(page, 0);
	SetHPageVmemmapOptimized(page);

> +	int nid = page_to_nid(page);
> +
> +	lockdep_assert_held(&hugetlb_lock);
> +
> +	INIT_LIST_HEAD(&page->lru);
> +	h->nr_huge_pages++;
> +	h->nr_huge_pages_node[nid]++;
> +
> +	if (adjust_surplus) {
> +		h->surplus_huge_pages++;
> +		h->surplus_huge_pages_node[nid]++;
> +	}
> +
> +	set_compound_page_dtor(page, HUGETLB_PAGE_DTOR);
> +
> +	/*
> +	 * The refcount can possibly be increased by memory-failure or
> +	 * soft_offline handlers.
> +	 */
> +	if (likely(put_page_testzero(page))) {

In the existing code there is no such test.  Is the need for the test
because of something introduced in the new code?  Or, should this test
be in the existing code?

Sorry, I am not seeing why this is needed.

> +		arch_clear_hugepage_flags(page);
> +		enqueue_huge_page(h, page);
> +	}
> +}
> +
>  static void __update_and_free_page(struct hstate *h, struct page *page)
>  {
>  	int i;
> @@ -1384,6 +1412,18 @@ static void __update_and_free_page(struct hstate *h, struct page *page)
>  	if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
>  		return;
>  
> +	if (alloc_huge_page_vmemmap(h, page)) {
> +		spin_lock_irq(&hugetlb_lock);
> +		/*
> +		 * If we cannot allocate vmemmap pages, just refuse to free the
> +		 * page and put the page back on the hugetlb free list and treat
> +		 * as a surplus page.
> +		 */
> +		add_hugetlb_page(h, page, true);
> +		spin_unlock_irq(&hugetlb_lock);
> +		return;
> +	}
> +
>  	for (i = 0; i < pages_per_huge_page(h);
>  	     i++, subpage = mem_map_next(subpage, page, i)) {
>  		subpage->flags &= ~(1 << PG_locked | 1 << PG_error |
> @@ -1444,7 +1484,7 @@ static inline void flush_free_hpage_work(struct hstate *h)
>  static void update_and_free_page(struct hstate *h, struct page *page,
>  				 bool atomic)
>  {
> -	if (!free_vmemmap_pages_per_hpage(h) || !atomic) {
> +	if (!HPageVmemmapOptimized(page) || !atomic) {
>  		__update_and_free_page(h, page);
>  		return;
>  	}

When update_and_free_pages_bulk was added it was done to avoid
lock/unlock cycles with each page.  At the time, I thought about the
addition of code to allocate vmmemmap, and the possibility that those
allocations could fail.  I thought it might make sense to perhaps
process the pages one at a time so that we could quit at the first
allocation failure.  After more thought, I think it is best to leave the
code to do bulk operations as you have done above.  Why?
- Just because one allocation fails does not mean the next will fail.
  It is possible the allocations could be from different nodes/zones.
- We will still need to put the requested number of pages into surplus
  state.

I am not suggesting you change anything.  Just wanted to share my
thoughts in case someone thought otherwise.

> @@ -1790,10 +1830,14 @@ static struct page *remove_pool_huge_page(struct hstate *h,
>   * nothing for in-use hugepages and non-hugepages.
>   * This function returns values like below:
>   *
> - *  -EBUSY: failed to dissolved free hugepages or the hugepage is in-use
> - *          (allocated or reserved.)
> - *       0: successfully dissolved free hugepages or the page is not a
> - *          hugepage (considered as already dissolved)
> + *  -ENOMEM: failed to allocate vmemmap pages to free the freed hugepages
> + *           when the system is under memory pressure and the feature of
> + *           freeing unused vmemmap pages associated with each hugetlb page
> + *           is enabled.
> + *  -EBUSY:  failed to dissolve free hugepages or the hugepage is in-use
> + *           (allocated or reserved.)
> + *       0:  successfully dissolved free hugepages or the page is not a
> + *           hugepage (considered as already dissolved)
>   */
>  int dissolve_free_huge_page(struct page *page)
>  {
> @@ -1835,19 +1879,30 @@ int dissolve_free_huge_page(struct page *page)
>  			goto retry;
>  		}
>  
> -		/*
> -		 * Move PageHWPoison flag from head page to the raw error page,
> -		 * which makes any subpages rather than the error page reusable.
> -		 */
> -		if (PageHWPoison(head) && page != head) {
> -			SetPageHWPoison(page);
> -			ClearPageHWPoison(head);
> -		}
>  		remove_hugetlb_page(h, page, false);
>  		h->max_huge_pages--;
>  		spin_unlock_irq(&hugetlb_lock);
> -		update_and_free_page(h, head, false);
> -		return 0;
> +
> +		rc = alloc_huge_page_vmemmap(h, page);
> +		if (!rc) {
> +			/*
> +			 * Move PageHWPoison flag from head page to the raw
> +			 * error page, which makes any subpages rather than
> +			 * the error page reusable.
> +			 */
> +			if (PageHWPoison(head) && page != head) {
> +				SetPageHWPoison(page);
> +				ClearPageHWPoison(head);
> +			}
> +			update_and_free_page(h, head, false);
> +		} else {
> +			spin_lock_irq(&hugetlb_lock);
> +			add_hugetlb_page(h, page, false);
> +			h->max_huge_pages++;
> +			spin_unlock_irq(&hugetlb_lock);
> +		}
> +
> +		return rc;
>  	}
>  out:
>  	spin_unlock_irq(&hugetlb_lock);

Changes in the files below have not changed in any significant way
since the previous version.  The code looks good to me, but I would
like to see if there are comments from others.

Thanks,
Mike Kravetz
Muchun Song April 20, 2021, 8:46 a.m. UTC | #2
On Tue, Apr 20, 2021 at 7:20 AM Mike Kravetz <mike.kravetz@oracle.com> wrote:
>
> On 4/15/21 1:40 AM, Muchun Song wrote:
> > When we free a HugeTLB page to the buddy allocator, we need to allocate
> > the vmemmap pages associated with it. However, we may not be able to
> > allocate the vmemmap pages when the system is under memory pressure. In
> > this case, we just refuse to free the HugeTLB page. This changes behavior
> > in some corner cases as listed below:
> >
> >  1) Failing to free a huge page triggered by the user (decrease nr_pages).
> >
> >     User needs to try again later.
> >
> >  2) Failing to free a surplus huge page when freed by the application.
> >
> >     Try again later when freeing a huge page next time.
> >
> >  3) Failing to dissolve a free huge page on ZONE_MOVABLE via
> >     offline_pages().
> >
> >     This can happen when we have plenty of ZONE_MOVABLE memory, but
> >     not enough kernel memory to allocate vmemmap pages.  We may even
> >     be able to migrate huge page contents, but will not be able to
> >     dissolve the source huge page.  This will prevent an offline
> >     operation and is unfortunate as memory offlining is expected to
> >     succeed on movable zones.  Users that depend on memory hotplug
> >     to succeed for movable zones should carefully consider whether the
> >     memory savings gained from this feature are worth the risk of
> >     possibly not being able to offline memory in certain situations.
> >
> >  4) Failing to dissolve a huge page on CMA/ZONE_MOVABLE via
> >     alloc_contig_range() - once we have that handling in place. Mainly
> >     affects CMA and virtio-mem.
> >
> >     Similar to 3). virtio-mem will handle migration errors gracefully.
> >     CMA might be able to fallback on other free areas within the CMA
> >     region.
> >
> > Vmemmap pages are allocated from the page freeing context. In order for
> > those allocations to be not disruptive (e.g. trigger oom killer)
> > __GFP_NORETRY is used. hugetlb_lock is dropped for the allocation
> > because a non sleeping allocation would be too fragile and it could fail
> > too easily under memory pressure. GFP_ATOMIC or other modes to access
> > memory reserves are not used because we want to prevent consuming
> > reserves under heavy hugetlb freeing.
> >
> > Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> > ---
> >  Documentation/admin-guide/mm/hugetlbpage.rst    |  8 +++
> >  Documentation/admin-guide/mm/memory-hotplug.rst | 13 ++++
> >  include/linux/hugetlb.h                         |  3 +
> >  include/linux/mm.h                              |  2 +
> >  mm/hugetlb.c                                    | 85 ++++++++++++++++++++-----
> >  mm/hugetlb_vmemmap.c                            | 34 ++++++++++
> >  mm/hugetlb_vmemmap.h                            |  6 ++
> >  mm/sparse-vmemmap.c                             | 75 +++++++++++++++++++++-
> >  8 files changed, 210 insertions(+), 16 deletions(-)
> >
> > diff --git a/Documentation/admin-guide/mm/hugetlbpage.rst b/Documentation/admin-guide/mm/hugetlbpage.rst
> > index f7b1c7462991..6988895d09a8 100644
> > --- a/Documentation/admin-guide/mm/hugetlbpage.rst
> > +++ b/Documentation/admin-guide/mm/hugetlbpage.rst
> > @@ -60,6 +60,10 @@ HugePages_Surp
> >          the pool above the value in ``/proc/sys/vm/nr_hugepages``. The
> >          maximum number of surplus huge pages is controlled by
> >          ``/proc/sys/vm/nr_overcommit_hugepages``.
> > +     Note: When the feature of freeing unused vmemmap pages associated
> > +     with each hugetlb page is enabled, the number of surplus huge pages
> > +     may be temporarily larger than the maximum number of surplus huge
> > +     pages when the system is under memory pressure.
> >  Hugepagesize
> >       is the default hugepage size (in Kb).
> >  Hugetlb
> > @@ -80,6 +84,10 @@ returned to the huge page pool when freed by a task.  A user with root
> >  privileges can dynamically allocate more or free some persistent huge pages
> >  by increasing or decreasing the value of ``nr_hugepages``.
> >
> > +Note: When the feature of freeing unused vmemmap pages associated with each
> > +hugetlb page is enabled, we can fail to free the huge pages triggered by
> > +the user when the system is under memory pressure.  Please try again later.
> > +
> >  Pages that are used as huge pages are reserved inside the kernel and cannot
> >  be used for other purposes.  Huge pages cannot be swapped out under
> >  memory pressure.
> > diff --git a/Documentation/admin-guide/mm/memory-hotplug.rst b/Documentation/admin-guide/mm/memory-hotplug.rst
> > index 05d51d2d8beb..c6bae2d77160 100644
> > --- a/Documentation/admin-guide/mm/memory-hotplug.rst
> > +++ b/Documentation/admin-guide/mm/memory-hotplug.rst
> > @@ -357,6 +357,19 @@ creates ZONE_MOVABLE as following.
> >     Unfortunately, there is no information to show which memory block belongs
> >     to ZONE_MOVABLE. This is TBD.
> >
> > +   Memory offlining can fail when dissolving a free huge page on ZONE_MOVABLE
> > +   and the feature of freeing unused vmemmap pages associated with each hugetlb
> > +   page is enabled.
> > +
> > +   This can happen when we have plenty of ZONE_MOVABLE memory, but not enough
> > +   kernel memory to allocate vmemmap pages.  We may even be able to migrate
> > +   huge page contents, but will not be able to dissolve the source huge page.
> > +   This will prevent an offline operation and is unfortunate as memory offlining
> > +   is expected to succeed on movable zones.  Users that depend on memory hotplug
> > +   to succeed for movable zones should carefully consider whether the memory
> > +   savings gained from this feature are worth the risk of possibly not being
> > +   able to offline memory in certain situations.
> > +
> >  .. note::
> >     Techniques that rely on long-term pinnings of memory (especially, RDMA and
> >     vfio) are fundamentally problematic with ZONE_MOVABLE and, therefore, memory
> > diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> > index 0abed7e766b8..6e970a7d3480 100644
> > --- a/include/linux/hugetlb.h
> > +++ b/include/linux/hugetlb.h
> > @@ -525,6 +525,7 @@ unsigned long hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
> >   *   code knows it has only reference.  All other examinations and
> >   *   modifications require hugetlb_lock.
> >   * HPG_freed - Set when page is on the free lists.
> > + * HPG_vmemmap_optimized - Set when the vmemmap pages of the page are freed.
> >   *   Synchronization: hugetlb_lock held for examination and modification.
>
> I like the per-page flag.  In previous versions of the series, you just
> checked the free_vmemmap_pages_per_hpage() to determine if vmemmap
> should be allocated.  Is there any change in functionality that makes it
> necessary to set the flag in each page, or is it mostly for flexibility
> going forward?

Actually, only the routine of dissolving the page cares whether
the page is on the buddy free list when update_and_free_page
returns. But we cannot change the return type of the
update_and_free_page (e.g. change return type from 'void' to 'int').
Why? If the hugepage is freed through a kworker, we cannot
know the return value when update_and_free_page returns.
So adding a return value seems odd.

In the dissolving routine, we can allocate vmemmap pages first;
if that succeeds, then we can make sure that
update_and_free_page will successfully free the page. So I need
some way to mark the page as not needing vmemmap pages
to be allocated.

On the surface, we seem to have a straightforward method
to do this.

Add a new parameter 'alloc_vmemmap' to update_and_free_page() to
indicate that the caller has already allocated the vmemmap pages, so
update_and_free_page() does not need to allocate them. Just like below:

   void update_and_free_page(struct hstate *h, struct page *page, bool atomic,
           bool alloc_vmemmap)
   {
       if (alloc_vmemmap)
           // allocate vmemmap pages
   }

But if the page is freed through a kworker, how do we pass
'alloc_vmemmap' to the kworker? We can embed this
information into the per-page flag instead. So if we introduce
HPG_vmemmap_optimized, the alloc_vmemmap parameter
is no longer necessary.

So it seems that introducing HPG_vmemmap_optimized is
a good choice.

>
> >   */
> >  enum hugetlb_page_flags {
> > @@ -532,6 +533,7 @@ enum hugetlb_page_flags {
> >       HPG_migratable,
> >       HPG_temporary,
> >       HPG_freed,
> > +     HPG_vmemmap_optimized,
> >       __NR_HPAGEFLAGS,
> >  };
> >
> > @@ -577,6 +579,7 @@ HPAGEFLAG(RestoreReserve, restore_reserve)
> >  HPAGEFLAG(Migratable, migratable)
> >  HPAGEFLAG(Temporary, temporary)
> >  HPAGEFLAG(Freed, freed)
> > +HPAGEFLAG(VmemmapOptimized, vmemmap_optimized)
> >
> >  #ifdef CONFIG_HUGETLB_PAGE
> >
> > diff --git a/include/linux/mm.h b/include/linux/mm.h
> > index a4d160ddb749..d0854828bb9c 100644
> > --- a/include/linux/mm.h
> > +++ b/include/linux/mm.h
> > @@ -3048,6 +3048,8 @@ static inline void print_vma_addr(char *prefix, unsigned long rip)
> >
> >  void vmemmap_remap_free(unsigned long start, unsigned long end,
> >                       unsigned long reuse);
> > +int vmemmap_remap_alloc(unsigned long start, unsigned long end,
> > +                     unsigned long reuse, gfp_t gfp_mask);
> >
> >  void *sparse_buffer_alloc(unsigned long size);
> >  struct page * __populate_section_memmap(unsigned long pfn,
> > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > index eeb8f5480170..1c37f0098e00 100644
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -1376,6 +1376,34 @@ static void remove_hugetlb_page(struct hstate *h, struct page *page,
> >       h->nr_huge_pages_node[nid]--;
> >  }
> >
> > +static void add_hugetlb_page(struct hstate *h, struct page *page,
> > +                          bool adjust_surplus)
> > +{
>
> We need to be a bit careful with hugepage specific flags that may be
> set.  The routine remove_hugetlb_page which is called for 'page' before
> this routine will not clear any of the hugepage specific flags.  If the
> calling path goes through free_huge_page, most but not all flags are
> cleared.
>
> We had a discussion about clearing the page->private field in Oscar's
> series.  In the case of 'new' pages we can assume page->private is
> cleared, but perhaps we should not make that assumption here.  Since we
> hope to rarely call this routine, it might be safer to do something
> like:
>
>         set_page_private(page, 0);
>         SetHPageVmemmapOptimized(page);

Agree. Thanks for your reminder. I will fix this.

>
> > +     int nid = page_to_nid(page);
> > +
> > +     lockdep_assert_held(&hugetlb_lock);
> > +
> > +     INIT_LIST_HEAD(&page->lru);
> > +     h->nr_huge_pages++;
> > +     h->nr_huge_pages_node[nid]++;
> > +
> > +     if (adjust_surplus) {
> > +             h->surplus_huge_pages++;
> > +             h->surplus_huge_pages_node[nid]++;
> > +     }
> > +
> > +     set_compound_page_dtor(page, HUGETLB_PAGE_DTOR);
> > +
> > +     /*
> > +      * The refcount can possibly be increased by memory-failure or
> > +      * soft_offline handlers.
> > +      */
> > +     if (likely(put_page_testzero(page))) {
>
> In the existing code there is no such test.  Is the need for the test
> because of something introduced in the new code?

No.

> Or, should this test be in the existing code?

Yes. gather_surplus_pages should be fixed. I can fix it
in a separate patch.

The possible bad scenario:

CPU0:                           CPU1:
                                set_compound_page_dtor(HUGETLB_PAGE_DTOR);
memory_failure_hugetlb
  get_hwpoison_page
    __get_hwpoison_page
      get_page_unless_zero
                                put_page_testzero()

  put_page(page)


More details and discussion can refer to:

https://lore.kernel.org/linux-doc/CAMZfGtVRSBkKe=tKAKLY8dp_hywotq3xL+EJZNjXuSKt3HK3bQ@mail.gmail.com/

>
> Sorry, I am not seeing why this is needed.
>
> > +             arch_clear_hugepage_flags(page);
> > +             enqueue_huge_page(h, page);
> > +     }
> > +}
> > +
> >  static void __update_and_free_page(struct hstate *h, struct page *page)
> >  {
> >       int i;
> > @@ -1384,6 +1412,18 @@ static void __update_and_free_page(struct hstate *h, struct page *page)
> >       if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
> >               return;
> >
> > +     if (alloc_huge_page_vmemmap(h, page)) {
> > +             spin_lock_irq(&hugetlb_lock);
> > +             /*
> > +              * If we cannot allocate vmemmap pages, just refuse to free the
> > +              * page and put the page back on the hugetlb free list and treat
> > +              * as a surplus page.
> > +              */
> > +             add_hugetlb_page(h, page, true);
> > +             spin_unlock_irq(&hugetlb_lock);
> > +             return;
> > +     }
> > +
> >       for (i = 0; i < pages_per_huge_page(h);
> >            i++, subpage = mem_map_next(subpage, page, i)) {
> >               subpage->flags &= ~(1 << PG_locked | 1 << PG_error |
> > @@ -1444,7 +1484,7 @@ static inline void flush_free_hpage_work(struct hstate *h)
> >  static void update_and_free_page(struct hstate *h, struct page *page,
> >                                bool atomic)
> >  {
> > -     if (!free_vmemmap_pages_per_hpage(h) || !atomic) {
> > +     if (!HPageVmemmapOptimized(page) || !atomic) {
> >               __update_and_free_page(h, page);
> >               return;
> >       }
>
> When update_and_free_pages_bulk was added it was done to avoid
> lock/unlock cycles with each page.  At the time, I thought about the
> addition of code to allocate vmemmap, and the possibility that those
> allocations could fail.  I thought it might make sense to perhaps
> process the pages one at a time so that we could quit at the first
> allocation failure.  After more thought, I think it is best to leave the
> code to do bulk operations as you have done above.  Why?
> - Just because one allocation fails does not mean the next will fail.
>   It is possible the allocations could be from different nodes/zones.
> - We will still need to put the requested number of pages into surplus
>   state.
>
> I am not suggesting you change anything.  Just wanted to share my
> thoughts in case someone thought otherwise.
>
> > @@ -1790,10 +1830,14 @@ static struct page *remove_pool_huge_page(struct hstate *h,
> >   * nothing for in-use hugepages and non-hugepages.
> >   * This function returns values like below:
> >   *
> > - *  -EBUSY: failed to dissolved free hugepages or the hugepage is in-use
> > - *          (allocated or reserved.)
> > - *       0: successfully dissolved free hugepages or the page is not a
> > - *          hugepage (considered as already dissolved)
> > + *  -ENOMEM: failed to allocate vmemmap pages to free the freed hugepages
> > + *           when the system is under memory pressure and the feature of
> > + *           freeing unused vmemmap pages associated with each hugetlb page
> > + *           is enabled.
> > + *  -EBUSY:  failed to dissolve free hugepages or the hugepage is in-use
> > + *           (allocated or reserved.)
> > + *       0:  successfully dissolved free hugepages or the page is not a
> > + *           hugepage (considered as already dissolved)
> >   */
> >  int dissolve_free_huge_page(struct page *page)
> >  {
> > @@ -1835,19 +1879,30 @@ int dissolve_free_huge_page(struct page *page)
> >                       goto retry;
> >               }
> >
> > -             /*
> > -              * Move PageHWPoison flag from head page to the raw error page,
> > -              * which makes any subpages rather than the error page reusable.
> > -              */
> > -             if (PageHWPoison(head) && page != head) {
> > -                     SetPageHWPoison(page);
> > -                     ClearPageHWPoison(head);
> > -             }
> >               remove_hugetlb_page(h, page, false);
> >               h->max_huge_pages--;
> >               spin_unlock_irq(&hugetlb_lock);
> > -             update_and_free_page(h, head, false);
> > -             return 0;
> > +
> > +             rc = alloc_huge_page_vmemmap(h, page);
> > +             if (!rc) {
> > +                     /*
> > +                      * Move PageHWPoison flag from head page to the raw
> > +                      * error page, which makes any subpages rather than
> > +                      * the error page reusable.
> > +                      */
> > +                     if (PageHWPoison(head) && page != head) {
> > +                             SetPageHWPoison(page);
> > +                             ClearPageHWPoison(head);
> > +                     }
> > +                     update_and_free_page(h, head, false);
> > +             } else {
> > +                     spin_lock_irq(&hugetlb_lock);
> > +                     add_hugetlb_page(h, page, false);
> > +                     h->max_huge_pages++;
> > +                     spin_unlock_irq(&hugetlb_lock);
> > +             }
> > +
> > +             return rc;
> >       }
> >  out:
> >       spin_unlock_irq(&hugetlb_lock);
>
> Changes in the files below have not changed in any significant way
> since the previous version.  The code looks good to me, but I would
> like to see if there are comments from others.

Thanks for your review. :-)

>
> Thanks,
> --
> Mike Kravetz
>
> > diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
> > index cb28c5b6c9ff..a897c7778246 100644
> > --- a/mm/hugetlb_vmemmap.c
> > +++ b/mm/hugetlb_vmemmap.c
> > @@ -185,6 +185,38 @@ static inline unsigned long free_vmemmap_pages_size_per_hpage(struct hstate *h)
> >       return (unsigned long)free_vmemmap_pages_per_hpage(h) << PAGE_SHIFT;
> >  }
> >
> > +/*
> > + * Previously discarded vmemmap pages will be allocated and remapped
> > + * after this function returns zero.
> > + */
> > +int alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
> > +{
> > +     int ret;
> > +     unsigned long vmemmap_addr = (unsigned long)head;
> > +     unsigned long vmemmap_end, vmemmap_reuse;
> > +
> > +     if (!HPageVmemmapOptimized(head))
> > +             return 0;
> > +
> > +     vmemmap_addr += RESERVE_VMEMMAP_SIZE;
> > +     vmemmap_end = vmemmap_addr + free_vmemmap_pages_size_per_hpage(h);
> > +     vmemmap_reuse = vmemmap_addr - PAGE_SIZE;
> > +     /*
> > +      * The pages which the vmemmap virtual address range [@vmemmap_addr,
> > +      * @vmemmap_end) are mapped to are freed to the buddy allocator, and
> > +      * the range is mapped to the page which @vmemmap_reuse is mapped to.
> > +      * When a HugeTLB page is freed to the buddy allocator, previously
> > +      * discarded vmemmap pages must be allocated and remapped.
> > +      */
> > +     ret = vmemmap_remap_alloc(vmemmap_addr, vmemmap_end, vmemmap_reuse,
> > +                               GFP_KERNEL | __GFP_NORETRY | __GFP_THISNODE);
> > +
> > +     if (!ret)
> > +             ClearHPageVmemmapOptimized(head);
> > +
> > +     return ret;
> > +}
> > +
> >  void free_huge_page_vmemmap(struct hstate *h, struct page *head)
> >  {
> >       unsigned long vmemmap_addr = (unsigned long)head;
> > @@ -203,4 +235,6 @@ void free_huge_page_vmemmap(struct hstate *h, struct page *head)
> >        * which the range [@vmemmap_addr, @vmemmap_end] is mapped to.
> >        */
> >       vmemmap_remap_free(vmemmap_addr, vmemmap_end, vmemmap_reuse);
> > +
> > +     SetHPageVmemmapOptimized(head);
> >  }
> > diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
> > index 01f8637adbe0..a37771b0b82a 100644
> > --- a/mm/hugetlb_vmemmap.h
> > +++ b/mm/hugetlb_vmemmap.h
> > @@ -11,6 +11,7 @@
> >  #include <linux/hugetlb.h>
> >
> >  #ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
> > +int alloc_huge_page_vmemmap(struct hstate *h, struct page *head);
> >  void free_huge_page_vmemmap(struct hstate *h, struct page *head);
> >
> >  /*
> > @@ -25,6 +26,11 @@ static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
> >       return 0;
> >  }
> >  #else
> > +static inline int alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
> > +{
> > +     return 0;
> > +}
> > +
> >  static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head)
> >  {
> >  }
> > diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
> > index 7d40b5bd7046..693de0aec7a8 100644
> > --- a/mm/sparse-vmemmap.c
> > +++ b/mm/sparse-vmemmap.c
> > @@ -40,7 +40,8 @@
> >   * @remap_pte:               called for each lowest-level entry (PTE).
> >   * @reuse_page:              the page which is reused for the tail vmemmap pages.
> >   * @reuse_addr:              the virtual address of the @reuse_page page.
> > - * @vmemmap_pages:   the list head of the vmemmap pages that can be freed.
> > + * @vmemmap_pages:   the list head of the vmemmap pages that can be freed
> > + *                   or is mapped from.
> >   */
> >  struct vmemmap_remap_walk {
> >       void (*remap_pte)(pte_t *pte, unsigned long addr,
> > @@ -224,6 +225,78 @@ void vmemmap_remap_free(unsigned long start, unsigned long end,
> >       free_vmemmap_page_list(&vmemmap_pages);
> >  }
> >
> > +static void vmemmap_restore_pte(pte_t *pte, unsigned long addr,
> > +                             struct vmemmap_remap_walk *walk)
> > +{
> > +     pgprot_t pgprot = PAGE_KERNEL;
> > +     struct page *page;
> > +     void *to;
> > +
> > +     BUG_ON(pte_page(*pte) != walk->reuse_page);
> > +
> > +     page = list_first_entry(walk->vmemmap_pages, struct page, lru);
> > +     list_del(&page->lru);
> > +     to = page_to_virt(page);
> > +     copy_page(to, (void *)walk->reuse_addr);
> > +
> > +     set_pte_at(&init_mm, addr, pte, mk_pte(page, pgprot));
> > +}
> > +
> > +static int alloc_vmemmap_page_list(unsigned long start, unsigned long end,
> > +                                gfp_t gfp_mask, struct list_head *list)
> > +{
> > +     unsigned long nr_pages = (end - start) >> PAGE_SHIFT;
> > +     int nid = page_to_nid((struct page *)start);
> > +     struct page *page, *next;
> > +
> > +     while (nr_pages--) {
> > +             page = alloc_pages_node(nid, gfp_mask, 0);
> > +             if (!page)
> > +                     goto out;
> > +             list_add_tail(&page->lru, list);
> > +     }
> > +
> > +     return 0;
> > +out:
> > +     list_for_each_entry_safe(page, next, list, lru)
> > +             __free_pages(page, 0);
> > +     return -ENOMEM;
> > +}
> > +
> > +/**
> > + * vmemmap_remap_alloc - remap the vmemmap virtual address range [@start, end)
> > + *                    to the page which is from the @vmemmap_pages
> > + *                    respectively.
> > + * @start:   start address of the vmemmap virtual address range that we want
> > + *           to remap.
> > + * @end:     end address of the vmemmap virtual address range that we want to
> > + *           remap.
> > + * @reuse:   reuse address.
> > + * @gfp_mask:        GFP flag for allocating vmemmap pages.
> > + */
> > +int vmemmap_remap_alloc(unsigned long start, unsigned long end,
> > +                     unsigned long reuse, gfp_t gfp_mask)
> > +{
> > +     LIST_HEAD(vmemmap_pages);
> > +     struct vmemmap_remap_walk walk = {
> > +             .remap_pte      = vmemmap_restore_pte,
> > +             .reuse_addr     = reuse,
> > +             .vmemmap_pages  = &vmemmap_pages,
> > +     };
> > +
> > +     /* See the comment in the vmemmap_remap_free(). */
> > +     BUG_ON(start - reuse != PAGE_SIZE);
> > +
> > +     might_sleep_if(gfpflags_allow_blocking(gfp_mask));
> > +
> > +     if (alloc_vmemmap_page_list(start, end, gfp_mask, &vmemmap_pages))
> > +             return -ENOMEM;
> > +
> > +     vmemmap_remap_range(reuse, end, &walk);
> > +
> > +     return 0;
> > +}
> > +
> >  /*
> >   * Allocate a block of memory to be used to back the virtual memory map
> >   * or to back the page tables that are used to create the mapping.
> >
Mike Kravetz April 20, 2021, 5:48 p.m. UTC | #3
On 4/20/21 1:46 AM, Muchun Song wrote:
> On Tue, Apr 20, 2021 at 7:20 AM Mike Kravetz <mike.kravetz@oracle.com> wrote:
>>
>> On 4/15/21 1:40 AM, Muchun Song wrote:
>>> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
>>> index 0abed7e766b8..6e970a7d3480 100644
>>> --- a/include/linux/hugetlb.h
>>> +++ b/include/linux/hugetlb.h
>>> @@ -525,6 +525,7 @@ unsigned long hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
>>>   *   code knows it has only reference.  All other examinations and
>>>   *   modifications require hugetlb_lock.
>>>   * HPG_freed - Set when page is on the free lists.
>>> + * HPG_vmemmap_optimized - Set when the vmemmap pages of the page are freed.
>>>   *   Synchronization: hugetlb_lock held for examination and modification.
>>
>> I like the per-page flag.  In previous versions of the series, you just
>> checked the free_vmemmap_pages_per_hpage() to determine if vmemmap
>> should be allocated.  Is there any change in functionality that makes it
>> necessary to set the flag in each page, or is it mostly for flexibility
>> going forward?
> 
> Actually, only the routine that dissolves the page cares whether
> the page is on the buddy free list when update_and_free_page
> returns. But we cannot change the return type of
> update_and_free_page (e.g. from 'void' to 'int').
> Why? If the hugepage is freed through a kworker, we cannot
> know the return value when update_and_free_page returns.
> So adding a return value seems odd.
> 
> In the dissolving routine, we can allocate the vmemmap pages
> first; if that succeeds, we can be sure that
> update_and_free_page will successfully free the page. So I need
> some way to mark a page whose vmemmap pages do not need
> to be allocated.
> 
> On the surface, we seem to have a straightforward method
> to do this.
> 
> Add a new parameter 'alloc_vmemmap' to update_and_free_page() to
> indicate that the caller has already allocated the vmemmap pages,
> so update_and_free_page() does not need to allocate them. Just
> like below.
> 
>    void update_and_free_page(struct hstate *h, struct page *page, bool atomic,
>            bool alloc_vmemmap)
>    {
>        if (alloc_vmemmap)
>            // allocate vmemmap pages
>    }
> 
> But if the page is freed through a kworker, how do we pass
> 'alloc_vmemmap' to the kworker? We can embed this
> information in a per-page flag instead. So once we introduce
> HPG_vmemmap_optimized, the 'alloc_vmemmap' parameter is
> no longer necessary.
> 
> So it seems that introducing HPG_vmemmap_optimized is
> a good choice.

Thanks for the explanation!

Agree that the flag is a good choice.  How about adding a comment like
this above the alloc_huge_page_vmemmap call in dissolve_free_huge_page?

/*
 * Normally update_and_free_page will allocate required vmemmap before
 * freeing the page.  update_and_free_page will fail to free the page
 * if it cannot allocate required vmemmap.  We need to adjust
 * max_huge_pages if the page is not freed.  Attempt to allocate
 * vmemmap here so that we can take appropriate action on failure.
 */
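
To make the placement concrete, here is an untested sketch of how I
picture that part of the dissolve_free_huge_page path (argument lists
abbreviated; counter adjustments and locking elided):

        int rc;

        rc = alloc_huge_page_vmemmap(h, head);
        if (!rc) {
                /* vmemmap is in place, so the free below cannot fail */
                update_and_free_page(h, head);
        } else {
                /*
                 * The page was not freed: put it back on the hugetlb
                 * lists and adjust max_huge_pages accordingly.
                 */
                add_hugetlb_page(h, head, false);
                rc = -ENOMEM;
        }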

...
>>> +static void add_hugetlb_page(struct hstate *h, struct page *page,
>>> +                          bool adjust_surplus)
>>> +{
>>
>> We need to be a bit careful with hugepage specific flags that may be
>> set.  The routine remove_hugetlb_page, which is called for 'page' before
>> this routine, will not clear any of the hugepage specific flags.  If the
>> calling path goes through free_huge_page, most but not all flags are
>> cleared.
>>
>> We had a discussion about clearing the page->private field in Oscar's
>> series.  In the case of 'new' pages we can assume page->private is
>> cleared, but perhaps we should not make that assumption here.  Since we
>> hope to rarely call this routine, it might be safer to do something
>> like:
>>
>>         set_page_private(page, 0);
>>         SetHPageVmemmapOptimized(page);
> 
> Agree. Thanks for your reminder. I will fix this.
> 
>>
>>> +     int nid = page_to_nid(page);
>>> +
>>> +     lockdep_assert_held(&hugetlb_lock);
>>> +
>>> +     INIT_LIST_HEAD(&page->lru);
>>> +     h->nr_huge_pages++;
>>> +     h->nr_huge_pages_node[nid]++;
>>> +
>>> +     if (adjust_surplus) {
>>> +             h->surplus_huge_pages++;
>>> +             h->surplus_huge_pages_node[nid]++;
>>> +     }
>>> +
>>> +     set_compound_page_dtor(page, HUGETLB_PAGE_DTOR);
>>> +
>>> +     /*
>>> +      * The refcount can possibly be increased by memory-failure or
>>> +      * soft_offline handlers.
>>> +      */
>>> +     if (likely(put_page_testzero(page))) {
>>
>> In the existing code there is no such test.  Is the need for the test
>> because of something introduced in the new code?
> 
> No.
> 
>> Or, should this test be in the existing code?
> 
> Yes. gather_surplus_pages should be fixed. I can fix it
> in a separate patch.
> 
> The possible bad scenario:
> 
> CPU0:                           CPU1:
>                                 set_compound_page_dtor(HUGETLB_PAGE_DTOR);
> memory_failure_hugetlb
>   get_hwpoison_page
>     __get_hwpoison_page
>       get_page_unless_zero
>                                 put_page_testzero()
> 
>   put_page(page)
> 
> 
> More details and discussion can refer to:
> 
> https://lore.kernel.org/linux-doc/CAMZfGtVRSBkKe=tKAKLY8dp_hywotq3xL+EJZNjXuSKt3HK3bQ@mail.gmail.com/
> 

Thank you!  I did not remember that discussion.

It would be helpful to add a separate patch for gather_surplus_pages.
Otherwise, we have the VM_BUG_ON there and not in add_hugetlb_page.
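
For the gather_surplus_pages patch, I would expect something along
these lines where the surplus pages are moved to the free lists
(untested sketch; the surrounding loop is omitted):

        int zeroed;

        /*
         * The refcount can transiently be raised by the memory-failure
         * or soft_offline handlers via get_page_unless_zero(), so drop
         * our reference with put_page_testzero() instead of assuming
         * it is the only one.
         */
        zeroed = put_page_testzero(page);
        VM_BUG_ON_PAGE(!zeroed, page);
        enqueue_huge_page(h, page);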
Muchun Song April 21, 2021, 3:42 a.m. UTC | #4
On Wed, Apr 21, 2021 at 1:48 AM Mike Kravetz <mike.kravetz@oracle.com> wrote:
> [... quote of the earlier discussion trimmed ...]
>
> Agree that the flag is a good choice.  How about adding a comment like
> this above the alloc_huge_page_vmemmap call in dissolve_free_huge_page?
>
> /*
>  * Normally update_and_free_page will allocate required vmemmap before
>  * freeing the page.  update_and_free_page will fail to free the page
>  * if it cannot allocate required vmemmap.  We need to adjust
>  * max_huge_pages if the page is not freed.  Attempt to allocate
>  * vmemmap here so that we can take appropriate action on failure.
>  */

Thanks. I will add this comment.

> [... quote trimmed ...]
>
> It would be helpful to add a separate patch for gather_surplus_pages.
> Otherwise, we have the VM_BUG_ON there and not in add_hugetlb_page.
>

Agree. Will do.

> --
> Mike Kravetz