
[v3] mm/hugetlb_vmemmap: remap head page to newly allocated page

Message ID 20221109200623.96867-1-joao.m.martins@oracle.com (mailing list archive)
State New
Series [v3] mm/hugetlb_vmemmap: remap head page to newly allocated page

Commit Message

Joao Martins Nov. 9, 2022, 8:06 p.m. UTC
Today with `hugetlb_free_vmemmap=on` the struct page memory that is freed
back to the page allocator works as follows: for a 2M hugetlb page it will
reuse the first 4K vmemmap page to remap the remaining 7 vmemmap pages,
and for a 1G hugetlb page it will remap the remaining 4095 vmemmap pages.
Essentially, that means it breaks the first 4K of a potentially contiguous
chunk of memory of 32K (for 2M hugetlb pages) or 16M (for 1G hugetlb
pages). For this reason the memory that is freed back to the page
allocator cannot be used by hugetlb to allocate huge pages of the same
size, but only of a smaller huge page size:

Trying to assign a 64G node to hugetlb (on a 128G 2-node guest, each node
having 64G):

* Before allocation:
Free pages count per migrate type at order       0      1      2      3      4      5      6      7      8      9     10
...
Node    0, zone   Normal, type      Movable    340    100     32     15      1      2      0      0      0      1  15558

$ echo 32768 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
$ cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
 31987

* After:

Node    0, zone   Normal, type      Movable  30893  32006  31515      7      0      0      0      0      0      0      0

Notice how the memory freed back is put into the 4K / 8K / 16K page
pools, and that a total of 31987 hugepages (63974M) gets allocated.

To fix this behaviour, rather than remapping the second vmemmap page
(thus breaking the contiguous block of memory backing the struct pages),
repopulate the first vmemmap page with a new one. We allocate and copy
from the currently mapped vmemmap page, and then remap it later on.
The same algorithm works if there's a pre-initialized walk::reuse_page;
the head page doesn't need to be skipped, and instead we remap it when
the @addr being changed is the @reuse_addr.
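
For illustration, here's an annotated sketch of the resulting
vmemmap_remap_pte() flow (the authoritative version is the hunk in the
patch below; the extra comments are only explanatory):

static void vmemmap_remap_pte(pte_t *pte, unsigned long addr,
			      struct vmemmap_remap_walk *walk)
{
	pgprot_t pgprot = PAGE_KERNEL_RO;
	struct page *page = pte_page(*pte);
	pte_t entry;

	/* Remapping the head page requires r/w */
	if (unlikely(addr == walk->reuse_addr)) {
		pgprot = PAGE_KERNEL;
		/* The new head page must not be freed later on */
		list_del(&walk->reuse_page->lru);
	}

	entry = mk_pte(walk->reuse_page, pgprot);
	/* The page previously backing this range can now be freed */
	list_add_tail(&page->lru, walk->vmemmap_pages);
	set_pte_at(&init_mm, addr, pte, entry);
}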

The new head page is allocated in vmemmap_remap_free() given that on
restore no functional change is needed. Note that, because right now one
hugepage is remapped at a time, only one free 4K page at a time is needed
to remap the head page. Should it fail to allocate said new page, it
reuses the one that's already mapped, just like before. As a result, for
every 64G of contiguous hugepages it can give back 1G more of contiguous
memory, while needing in total 128M of new 4K pages (for 2M hugetlb) or
256K (for 1G hugetlb).
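
As a sanity check on those numbers, a small standalone userspace snippet
(assuming 4K base pages and a 64-byte struct page; purely illustrative,
not part of the patch):

#include <stdio.h>

int main(void)
{
	unsigned long base = 4096, sz_struct_page = 64;

	/* vmemmap pages per hugepage: 8 for 2M, 4096 for 1G */
	unsigned long vmemmap_2m = (2UL << 20) / base * sz_struct_page / base;
	unsigned long vmemmap_1g = (1UL << 30) / base * sz_struct_page / base;
	/*
	 * struct page memory behind 64G of 2M hugepages: 1G, now given
	 * back as fully contiguous 32K chunks instead of broken 28K ones
	 */
	unsigned long contig = (64UL << 30) / (2UL << 20) * vmemmap_2m * base;
	/* new 4K head pages needed to cover 64G worth of hugepages */
	unsigned long new_2m = (64UL << 30) / (2UL << 20) * base;
	unsigned long new_1g = (64UL << 30) / (1UL << 30) * base;

	printf("%lu %lu %luG %luM %luK\n", vmemmap_2m, vmemmap_1g,
	       contig >> 30, new_2m >> 20, new_1g >> 10);
	return 0;
}

It prints "8 4096 1G 128M 256K", matching the figures above.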

After the changes, trying to assign a 64G node to hugetlb (on a 128G
2-node guest, each node with 64G):

* Before allocation
Free pages count per migrate type at order       0      1      2      3      4      5      6      7      8      9     10
...
Node    0, zone   Normal, type      Movable      1      1      1      0      0      1      0      0      1      1  15564

$ echo 32768  > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
$ cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
32394

* After:

Node    0, zone   Normal, type      Movable      0     50     97    108     96     81     70     46     18      0      0

In the example above, 407 more hugetlb 2M pages are allocated, i.e. 814M
out of the 32394 (64788M) allocated. So the memory freed back is indeed
being reused by hugetlb, and there's no massive amount of order-0..order-2
pages left accumulated unused.

Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
---
Changes since v2:
 Comments from Muchun:
 * Delete the comment above the tlb flush
 * Move the head vmemmap page copy into vmemmap_remap_free()
 * Add and del the new head page to the vmemmap_pages (to be freed
   in case of error)
 * Move the remap of the head like the tail pages in vmemmap_remap_pte(),
   special casing only when addr == reuse_addr
 * Remove the PAGE_SIZE alignment check as the code has the assumption
   that start/end are page-aligned (and VM_BUG_ON otherwise).
 * Adjust the commit message taking the above changes into account.
---
 mm/hugetlb_vmemmap.c | 34 +++++++++++++++++++++++++++-------
 1 file changed, 27 insertions(+), 7 deletions(-)

Comments

Muchun Song Nov. 10, 2022, 3:28 a.m. UTC | #1
> On Nov 10, 2022, at 04:06, Joao Martins <joao.m.martins@oracle.com> wrote:
> 
> [...]
> Signed-off-by: Joao Martins <joao.m.martins@oracle.com>

Thanks.

Reviewed-by: Muchun Song <songmuchun@bytedance.com>

A nit below.

> [...]
> 
> diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
> index 7898c2c75e35..f562b3f46410 100644
> --- a/mm/hugetlb_vmemmap.c
> +++ b/mm/hugetlb_vmemmap.c
> @@ -203,12 +203,7 @@ static int vmemmap_remap_range(unsigned long start, unsigned long end,
> return ret;
> } while (pgd++, addr = next, addr != end);
> 
> - /*
> - * We only change the mapping of the vmemmap virtual address range
> - * [@start + PAGE_SIZE, end), so we only need to flush the TLB which
> - * belongs to the range.
> - */
> - flush_tlb_kernel_range(start + PAGE_SIZE, end);
> + flush_tlb_kernel_range(start, end);
> 
> return 0;
> }
> @@ -244,9 +239,16 @@ static void vmemmap_remap_pte(pte_t *pte, unsigned long addr,
> * to the tail pages.
> */
> pgprot_t pgprot = PAGE_KERNEL_RO;
> - pte_t entry = mk_pte(walk->reuse_page, pgprot);
> struct page *page = pte_page(*pte);
> + pte_t entry;
> 
> + /* Remapping the head page requires r/w */
> + if (unlikely(addr == walk->reuse_addr)) {
> + pgprot = PAGE_KERNEL;
> + list_del(&walk->reuse_page->lru);

Maybe smp_wmb() should be inserted here to make sure the copied data is visible
before set_pte_at() like the commit 939de63d35dde45 does.
Joao Martins Nov. 10, 2022, 10:10 a.m. UTC | #2
On 10/11/2022 03:28, Muchun Song wrote:
>> On Nov 10, 2022, at 04:06, Joao Martins <joao.m.martins@oracle.com> wrote:
>>
>> [...]
>> Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
> 
> Thanks.
> 
> Reviewed-by: Muchun Song <songmuchun@bytedance.com>
> 
> A nit below.
> 
Thanks

>> [...]
>> @@ -244,9 +239,16 @@ static void vmemmap_remap_pte(pte_t *pte, unsigned long addr,
>> * to the tail pages.
>> */
>> pgprot_t pgprot = PAGE_KERNEL_RO;
>> - pte_t entry = mk_pte(walk->reuse_page, pgprot);
>> struct page *page = pte_page(*pte);
>> + pte_t entry;
>>
>> + /* Remapping the head page requires r/w */
>> + if (unlikely(addr == walk->reuse_addr)) {
>> + pgprot = PAGE_KERNEL;
>> + list_del(&walk->reuse_page->lru);
> 
> Maybe smp_wmb() should be inserted here to make sure the copied data is visible
> before set_pte_at() like the commit 939de63d35dde45 does.
> 

I've added the barrier, plus a comment above it since it's not immediately
obvious where the copy takes place. See the snippet below for what I added
in v4:

diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index f562b3f46410..45e93a545dd7 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -246,6 +246,13 @@ static void vmemmap_remap_pte(pte_t *pte, unsigned long addr,
        if (unlikely(addr == walk->reuse_addr)) {
                pgprot = PAGE_KERNEL;
                list_del(&walk->reuse_page->lru);
+
+               /*
+                * Makes sure that preceding stores to the page contents from
+                * vmemmap_remap_free() become visible before the set_pte_at()
+                * write.
+                */
+               smp_wmb();
        }

        entry = mk_pte(walk->reuse_page, pgprot);
Muchun Song Nov. 10, 2022, 11:25 a.m. UTC | #3
> On Nov 10, 2022, at 18:10, Joao Martins <joao.m.martins@oracle.com> wrote:
> 
> On 10/11/2022 03:28, Muchun Song wrote:
>>> [...]
>>> + /* Remapping the head page requires r/w */
>>> + if (unlikely(addr == walk->reuse_addr)) {
>>> + pgprot = PAGE_KERNEL;
>>> + list_del(&walk->reuse_page->lru);
>> 
>> Maybe smp_wmb() should be inserted here to make sure the copied data is visible
>> before set_pte_at() like the commit 939de63d35dde45 does.
>> 
> 
> I've added the barrier and comment above the barrier as the copy is not
> immediately obvious where it takes place. See below snip as to what I added
> in v4:
> 
> diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
> index f562b3f46410..45e93a545dd7 100644
> --- a/mm/hugetlb_vmemmap.c
> +++ b/mm/hugetlb_vmemmap.c
> @@ -246,6 +246,13 @@ static void vmemmap_remap_pte(pte_t *pte, unsigned long addr,
>        if (unlikely(addr == walk->reuse_addr)) {
>                pgprot = PAGE_KERNEL;
>                list_del(&walk->reuse_page->lru);
> +
> +               /*
> +                * Makes sure that preceding stores to the page contents from
> +                * vmemmap_remap_free() become visible before the set_pte_at()
> +                * write.
> +                */
> +               smp_wmb();
>        }
> 
>        entry = mk_pte(walk->reuse_page, pgprot);

Makes sense to me.

Patch

diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 7898c2c75e35..f562b3f46410 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -203,12 +203,7 @@  static int vmemmap_remap_range(unsigned long start, unsigned long end,
 			return ret;
 	} while (pgd++, addr = next, addr != end);
 
-	/*
-	 * We only change the mapping of the vmemmap virtual address range
-	 * [@start + PAGE_SIZE, end), so we only need to flush the TLB which
-	 * belongs to the range.
-	 */
-	flush_tlb_kernel_range(start + PAGE_SIZE, end);
+	flush_tlb_kernel_range(start, end);
 
 	return 0;
 }
@@ -244,9 +239,16 @@  static void vmemmap_remap_pte(pte_t *pte, unsigned long addr,
 	 * to the tail pages.
 	 */
 	pgprot_t pgprot = PAGE_KERNEL_RO;
-	pte_t entry = mk_pte(walk->reuse_page, pgprot);
 	struct page *page = pte_page(*pte);
+	pte_t entry;
 
+	/* Remapping the head page requires r/w */
+	if (unlikely(addr == walk->reuse_addr)) {
+		pgprot = PAGE_KERNEL;
+		list_del(&walk->reuse_page->lru);
+	}
+
+	entry = mk_pte(walk->reuse_page, pgprot);
 	list_add_tail(&page->lru, walk->vmemmap_pages);
 	set_pte_at(&init_mm, addr, pte, entry);
 }
@@ -315,6 +317,24 @@  static int vmemmap_remap_free(unsigned long start, unsigned long end,
 		.reuse_addr	= reuse,
 		.vmemmap_pages	= &vmemmap_pages,
 	};
+	int nid = page_to_nid((struct page *)start);
+	gfp_t gfp_mask = GFP_KERNEL | __GFP_THISNODE | __GFP_NORETRY |
+			__GFP_NOWARN;
+
+	/*
+	 * Allocate a new head vmemmap page to avoid breaking a contiguous
+	 * block of struct page memory when freeing it back to page allocator
+	 * in free_vmemmap_page_list(). This will allow the likely contiguous
+	 * struct page backing memory to be kept contiguous, allowing for
+	 * more allocations of hugepages. Fall back to the currently
+	 * mapped head page should it fail to allocate a new one.
+	 */
+	walk.reuse_page = alloc_pages_node(nid, gfp_mask, 0);
+	if (walk.reuse_page) {
+		copy_page(page_to_virt(walk.reuse_page),
+			  (void *)walk.reuse_addr);
+		list_add(&walk.reuse_page->lru, &vmemmap_pages);
+	}
 
 	/*
 	 * In order to make remapping routine most efficient for the huge pages,