
[0/7] Free user PTE page table pages

Message ID 20210718043034.76431-1-zhengqi.arch@bytedance.com (mailing list archive)

Message

Qi Zheng July 18, 2021, 4:30 a.m. UTC
Hi,

This patch series aims to free user PTE page table pages when all PTE entries
are empty.

The story begins with some malloc libraries (e.g. jemalloc or tcmalloc) that
usually reserve large amounts of virtual address space with mmap() and do not
unmap it; they use madvise(MADV_DONTNEED) to release physical memory when they
want. But madvise() does not free the page tables, so a process that touches
an enormous virtual address space can accumulate a large number of page tables.

The following numbers are a memory usage snapshot of one process that we
actually observed on our server:

	VIRT:  55t
	RES:   590g
	VmPTE: 110g

As we can see, the PTE page tables take 110g while RES is 590g. In theory,
the process only needs about 1.2g of PTE page tables to map that physical
memory: with 4K pages on x86-64, each 4K PTE page table page maps 2M, so
590g / 2M is roughly 300k PTE pages, i.e. about 1.2g. The PTE page tables
occupy so much memory because madvise(MADV_DONTNEED) only clears the PTE
entries and frees the physical memory; it does not free the PTE page table
pages themselves. So we can free those empty PTE page tables to save memory.
In the above case, we can save about 108g (best case). And the larger the
difference between VIRT and RES, the more memory we save.

In this patch series, we add a pte_refcount field to the struct page of a
PTE page table page to track how many users the PTE page table has. Similar
to the page refcount mechanism, a user of a PTE page table must hold a
refcount on it before accessing it. The PTE page table page is freed when
the last refcount is dropped.
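
To illustrate the idea, the following is a minimal sketch of how such a
per-table refcount can be used; the helper names, the pte_refcount field
layout and free_pte_table() are simplified assumptions made here, not the
exact code in the patches:

	#include <linux/atomic.h>
	#include <linux/mm_types.h>

	/* Sketch only: names, field layout and locking are simplified. */
	static inline bool pte_table_try_get(struct page *pte_page)
	{
		/* Fails once the table is already being torn down. */
		return atomic_inc_not_zero(&pte_page->pte_refcount);
	}

	static inline void pte_table_put(struct mm_struct *mm, struct page *pte_page)
	{
		/* Last user gone: the (now empty) PTE page table can be freed. */
		if (atomic_dec_and_test(&pte_page->pte_refcount))
			free_pte_table(mm, pte_page);	/* hypothetical helper */
	}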

Testing:

The following code snippet can show the effect of optimization:

	mmap 50G
	while (1) {
		for (; i < 1024 * 25; i++) {
			touch 2M memory
			madvise MADV_DONTNEED 2M
		}
	}
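
For reference, a plain C version of this snippet might look roughly as
follows (my reconstruction with no error handling, assuming 4K base pages;
not the exact reproducer):

	#include <string.h>
	#include <sys/mman.h>

	int main(void)
	{
		size_t chunk = 2UL << 20;			/* 2M */
		size_t total = 1024UL * 25 * chunk;		/* 50G */
		char *p = mmap(NULL, total, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

		for (;;) {
			for (size_t i = 0; i < 1024 * 25; i++) {
				memset(p + i * chunk, 1, chunk);		/* touch 2M */
				madvise(p + i * chunk, chunk, MADV_DONTNEED);	/* free 2M */
			}
		}
	}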

As we can see, the memory usage of VmPTE is reduced:

			before		                after
VIRT		       50.0 GB			      50.0 GB
RES		        3.1 MB			       3.6 MB
VmPTE		     102640 kB			       248 kB

I have also tested stability with LTP[1] for several weeks and have not
seen any crash so far.

Page fault performance can be affected by the allocation/freeing of PTE
page table pages. The following are the results of a micro-benchmark[2]:

root@~# perf stat -e page-faults --repeat 5 ./multi-fault $threads:

threads         before (pf/min)                     after (pf/min)
    1                32,085,255                         31,880,833 (-0.64%)
    8               101,674,967                        100,588,311 (-1.17%)
   16               113,207,000                        112,801,832 (-0.36%)

(The "pfn/min" means how many page faults in one minute.)

Page fault performance is ~1% slower than before.
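
(For reference, the following is only a rough illustration of what such a
multi-threaded page-fault micro-benchmark does; it is not the actual
multi-fault source, which can be found at [2].)

	#include <pthread.h>
	#include <stdlib.h>
	#include <sys/mman.h>
	#include <unistd.h>

	#define REGION	(1UL << 30)	/* 1G of anonymous memory per thread */

	static void *fault_loop(void *arg)
	{
		char *p = mmap(NULL, REGION, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

		for (;;) {
			for (size_t off = 0; off < REGION; off += 4096)
				p[off] = 1;			/* fault one page in */
			madvise(p, REGION, MADV_DONTNEED);	/* and drop it again */
		}
		return NULL;
	}

	int main(int argc, char **argv)	/* build with -pthread */
	{
		int threads = argc > 1 ? atoi(argv[1]) : 1;
		pthread_t tid;

		while (threads--)
			pthread_create(&tid, NULL, fault_loop, NULL);
		pause();	/* perf stat counts page-faults until interrupted */
		return 0;
	}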

This series is based on next-20210708.

Patch 1 is a bug fix.
Patches 2-4 are code simplifications.
Patch 5 frees user PTE page table pages dynamically.
Patch 6 defers freeing PTE page tables for a grace period.
Patch 7 uses mmu_gather to free PTE page tables.

Comments and suggestions are welcome.

Thanks,
Qi.

[1] https://github.com/linux-test-project/ltp
[2] https://lore.kernel.org/patchwork/comment/296794/

Qi Zheng (7):
	mm: fix the deadlock in finish_fault()
	mm: introduce pte_install() helper
	mm: remove redundant smp_wmb()
	mm: rework the parameter of lock_page_or_retry()
	mm: free user PTE page table pages
	mm: defer freeing PTE page table for a grace period
	mm: use mmu_gather to free PTE page table

 Documentation/vm/split_page_table_lock.rst |   2 +-
 arch/arm/mm/pgd.c                          |   2 +-
 arch/arm64/mm/hugetlbpage.c                |   4 +-
 arch/ia64/mm/hugetlbpage.c                 |   2 +-
 arch/parisc/mm/hugetlbpage.c               |   2 +-
 arch/powerpc/mm/hugetlbpage.c              |   2 +-
 arch/s390/mm/gmap.c                        |   8 +-
 arch/s390/mm/pgtable.c                     |   6 +-
 arch/sh/mm/hugetlbpage.c                   |   2 +-
 arch/sparc/mm/hugetlbpage.c                |   2 +-
 arch/x86/Kconfig                           |   2 +-
 arch/x86/kernel/tboot.c                    |   2 +-
 fs/proc/task_mmu.c                         |  23 ++-
 fs/userfaultfd.c                           |   2 +
 include/linux/mm.h                         |  12 +-
 include/linux/mm_types.h                   |   8 +-
 include/linux/pagemap.h                    |   8 +-
 include/linux/pgtable.h                    |   3 +-
 include/linux/pte_ref.h                    | 241 +++++++++++++++++++++++++
 include/linux/rmap.h                       |   3 +
 kernel/events/uprobes.c                    |   3 +
 mm/Kconfig                                 |   4 +
 mm/Makefile                                |   3 +-
 mm/debug_vm_pgtable.c                      |   3 +-
 mm/filemap.c                               |  56 +++---
 mm/gup.c                                   |  10 +-
 mm/hmm.c                                   |   4 +
 mm/internal.h                              |   2 +
 mm/khugepaged.c                            |  10 ++
 mm/ksm.c                                   |   4 +
 mm/madvise.c                               |  20 ++-
 mm/memcontrol.c                            |  11 +-
 mm/memory.c                                | 279 +++++++++++++++++++----------
 mm/mempolicy.c                             |   5 +-
 mm/migrate.c                               |  21 ++-
 mm/mincore.c                               |   6 +-
 mm/mlock.c                                 |   1 +
 mm/mmu_gather.c                            |  40 ++---
 mm/mprotect.c                              |  10 +-
 mm/mremap.c                                |  12 +-
 mm/page_vma_mapped.c                       |   4 +
 mm/pagewalk.c                              |  19 +-
 mm/pgtable-generic.c                       |   2 +
 mm/pte_ref.c                               | 146 +++++++++++++++
 mm/rmap.c                                  |  13 +-
 mm/sparse-vmemmap.c                        |   2 +-
 mm/swapfile.c                              |   6 +-
 mm/userfaultfd.c                           |  15 +-
 48 files changed, 825 insertions(+), 222 deletions(-)
 create mode 100644 include/linux/pte_ref.h
 create mode 100644 mm/pte_ref.c

Comments

David Hildenbrand July 19, 2021, 7:34 a.m. UTC | #1
On 18.07.21 06:30, Qi Zheng wrote:
> Hi,
> 
> This patch series aims to free user PTE page table pages when all PTE entries
> are empty.
> 
> The beginning of this story is that some malloc libraries(e.g. jemalloc or
> tcmalloc) usually allocate the amount of VAs by mmap() and do not unmap those VAs.
> They will use madvise(MADV_DONTNEED) to free physical memory if they want.
> But the page tables do not be freed by madvise(), so it can produce many
> page tables when the process touches an enormous virtual address space.

... did you see that I am actually looking into this?

https://lkml.kernel.org/r/bae8b967-c206-819d-774c-f57b94c4b362@redhat.com

and have already spent a significant amount of time on it as part of my 
research, which is *really* unfortunate and makes me quite frustrated at 
the beginning of the week already ...

Ripping out page tables is quite difficult, as we have to stop all page 
table walkers from touching them, including fast_gup, rmap and page 
faults. This usually involves taking the mmap lock in write mode. My 
approach does page table reclaim asynchronously from another thread and 
does not rely on reference counts.
David Hildenbrand July 19, 2021, 11:28 a.m. UTC | #2
On 19.07.21 09:34, David Hildenbrand wrote:
> On 18.07.21 06:30, Qi Zheng wrote:
>> Hi,
>>
>> This patch series aims to free user PTE page table pages when all PTE entries
>> are empty.
>>
>> The beginning of this story is that some malloc libraries(e.g. jemalloc or
>> tcmalloc) usually allocate the amount of VAs by mmap() and do not unmap those VAs.
>> They will use madvise(MADV_DONTNEED) to free physical memory if they want.
>> But the page tables do not be freed by madvise(), so it can produce many
>> page tables when the process touches an enormous virtual address space.
> 
> ... did you see that I am actually looking into this?
> 
> https://lkml.kernel.org/r/bae8b967-c206-819d-774c-f57b94c4b362@redhat.com
> 
> and have already spent a significant time on it as part of my research,
> which is *really* unfortunate and makes me quite frustrated at the
> beginning of the week alreadty ...
> 
> Ripping out page tables is quite difficult, as we have to stop all page
> table walkers from touching it, including the fast_gup, rmap and page
> faults. This usually involves taking the mmap lock in write. My approach
> does page table reclaim asynchronously from another thread and do not
> rely on reference counts.

FWIW, I had a quick peek and I like the simplistic approach using 
reference counting, although it seems to come with a price. By hooking in 
via pte_alloc_get_map_lock() instead of pte_alloc_map_lock(), we can 
handle quite a few cases easily.
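
For illustration, a call site converted to the refcount-aware helper could 
look roughly like this; the exact names and signatures are assumptions, 
not taken from the series:

	/* Sketch of a converted caller; exact signatures are assumptions. */
	pte = pte_alloc_get_map_lock(mm, pmd, addr, &ptl);  /* also grabs a pte_refcount */
	if (!pte)
		return VM_FAULT_OOM;
	/* ... operate on the PTE entry under ptl ... */
	pte_unmap_unlock(pte, ptl);
	pte_put(mm, pmd, addr);	/* drop the ref; an empty table may now be freed */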

There are cases where we might immediately see a reuse after discarding 
memory (especially with virtio-balloon free page reporting), in which 
case it is suboptimal to discard the page table immediately instead of 
waiting a bit for a reuse. However, the performance impact seems to be 
comparatively small.

I do wonder if the 1% overhead you're seeing is actually because of 
allocating/freeing or because of the reference count handling on some 
hot paths.

I'm primarily looking into asynchronous reclaim, because it somewhat 
makes sense to only reclaim (+ pay a cost) when there is really a need 
to reclaim memory -- similar to our shrinker infrastructure.
Muchun Song July 19, 2021, 12:42 p.m. UTC | #3
On Mon, Jul 19, 2021 at 7:28 PM David Hildenbrand <david@redhat.com> wrote:
>
> On 19.07.21 09:34, David Hildenbrand wrote:
> > On 18.07.21 06:30, Qi Zheng wrote:
> >> Hi,
> >>
> >> This patch series aims to free user PTE page table pages when all PTE entries
> >> are empty.
> >>
> >> The beginning of this story is that some malloc libraries(e.g. jemalloc or
> >> tcmalloc) usually allocate the amount of VAs by mmap() and do not unmap those VAs.
> >> They will use madvise(MADV_DONTNEED) to free physical memory if they want.
> >> But the page tables do not be freed by madvise(), so it can produce many
> >> page tables when the process touches an enormous virtual address space.
> >
> > ... did you see that I am actually looking into this?
> >
> > https://lkml.kernel.org/r/bae8b967-c206-819d-774c-f57b94c4b362@redhat.com
> >
> > and have already spent a significant time on it as part of my research,
> > which is *really* unfortunate and makes me quite frustrated at the
> > beginning of the week alreadty ...
> >
> > Ripping out page tables is quite difficult, as we have to stop all page
> > table walkers from touching it, including the fast_gup, rmap and page
> > faults. This usually involves taking the mmap lock in write. My approach
> > does page table reclaim asynchronously from another thread and do not
> > rely on reference counts.
>

Hi David,

> FWIW, I had a quick peek and I like the simplistic approach using
> reference counting, although it seems to come with a price. By hooking
> using pte_alloc_get_map_lock() instead of pte_alloc_map_lock, we can
> handle quite some cases easily.

Totally agree.

>
> There are cases where we might immediately see a reuse after discarding
> memory (especially, with virtio-balloon free page reporting), in which
> case it's suboptimal to immediately discard instead of waiting a bit if
> there is a reuse. However, the performance impact seems to be
> comparatively small.
>
> I do wonder if the 1% overhead you're seeing is actually because of
> allcoating/freeing or because of the reference count handling on some
> hot paths.

Qi Zheng has compared the results collected with the "perf top" command.
The LRU lock is more contended with this patchset applied. I think the
reason is that this patchset frees more pages (including PTE page table
pages). We do not see any overhead caused by the reference count handling.

Thanks,

Muchun

>
> I'm primarily looking into asynchronous reclaim, because it somewhat
> makes sense to only reclaim (+ pay a cost) when there is really need to
> reclaim memory -- similar to our shrinker infrastructure.
>
> --
> Thanks,
>
> David / dhildenb
>
Muchun Song July 19, 2021, 1:30 p.m. UTC | #4
On Mon, Jul 19, 2021 at 8:42 PM Muchun Song <songmuchun@bytedance.com> wrote:
>
> On Mon, Jul 19, 2021 at 7:28 PM David Hildenbrand <david@redhat.com> wrote:
> >
> > On 19.07.21 09:34, David Hildenbrand wrote:
> > > On 18.07.21 06:30, Qi Zheng wrote:
> > >> Hi,
> > >>
> > >> This patch series aims to free user PTE page table pages when all PTE entries
> > >> are empty.
> > >>
> > >> The beginning of this story is that some malloc libraries(e.g. jemalloc or
> > >> tcmalloc) usually allocate the amount of VAs by mmap() and do not unmap those VAs.
> > >> They will use madvise(MADV_DONTNEED) to free physical memory if they want.
> > >> But the page tables do not be freed by madvise(), so it can produce many
> > >> page tables when the process touches an enormous virtual address space.
> > >
> > > ... did you see that I am actually looking into this?
> > >
> > > https://lkml.kernel.org/r/bae8b967-c206-819d-774c-f57b94c4b362@redhat.com
> > >
> > > and have already spent a significant time on it as part of my research,
> > > which is *really* unfortunate and makes me quite frustrated at the
> > > beginning of the week alreadty ...
> > >
> > > Ripping out page tables is quite difficult, as we have to stop all page
> > > table walkers from touching it, including the fast_gup, rmap and page
> > > faults. This usually involves taking the mmap lock in write. My approach
> > > does page table reclaim asynchronously from another thread and do not
> > > rely on reference counts.
> >
>
> Hi David,
>
> > FWIW, I had a quick peek and I like the simplistic approach using
> > reference counting, although it seems to come with a price. By hooking
> > using pte_alloc_get_map_lock() instead of pte_alloc_map_lock, we can
> > handle quite some cases easily.
>
> Totally agree.
>
> >
> > There are cases where we might immediately see a reuse after discarding
> > memory (especially, with virtio-balloon free page reporting), in which
> > case it's suboptimal to immediately discard instead of waiting a bit if
> > there is a reuse. However, the performance impact seems to be
> > comparatively small.
> >
> > I do wonder if the 1% overhead you're seeing is actually because of
> > allcoating/freeing or because of the reference count handling on some
> > hot paths.
>
> Qi Zheng has compared the results collected by using the "perf top"
> command. The LRU lock is more contended with this patchset applied.
> I think the reason is that this patchset will free more pages (including
> PTE page table pages). We don't see the overhead caused by reference
> count handling.

Sorry for the confusion, I was wrong. PTE page table pages are not added
to the LRU list, so it cannot be the LRU lock. What we actually see is
that _raw_spin_unlock_irqrestore is hotter than before; I guess it is the
zone lock.

>
> Thanks,
>
> Muchun
>
> >
> > I'm primarily looking into asynchronous reclaim, because it somewhat
> > makes sense to only reclaim (+ pay a cost) when there is really need to
> > reclaim memory -- similar to our shrinker infrastructure.
> >
> > --
> > Thanks,
> >
> > David / dhildenb
> >
Qi Zheng July 20, 2021, 4 a.m. UTC | #5
On 7/19/21 7:28 PM, David Hildenbrand wrote:
> On 19.07.21 09:34, David Hildenbrand wrote:
>> On 18.07.21 06:30, Qi Zheng wrote:
>>> Hi,
>>>
>>> This patch series aims to free user PTE page table pages when all PTE 
>>> entries
>>> are empty.
>>>
>>> The beginning of this story is that some malloc libraries(e.g. 
>>> jemalloc or
>>> tcmalloc) usually allocate the amount of VAs by mmap() and do not 
>>> unmap those VAs.
>>> They will use madvise(MADV_DONTNEED) to free physical memory if they 
>>> want.
>>> But the page tables do not be freed by madvise(), so it can produce many
>>> page tables when the process touches an enormous virtual address space.
>>
>> ... did you see that I am actually looking into this?
>>
>> https://lkml.kernel.org/r/bae8b967-c206-819d-774c-f57b94c4b362@redhat.com
>>
>> and have already spent a significant time on it as part of my research,
>> which is *really* unfortunate and makes me quite frustrated at the
>> beginning of the week alreadty ...
>>
>> Ripping out page tables is quite difficult, as we have to stop all page
>> table walkers from touching it, including the fast_gup, rmap and page
>> faults. This usually involves taking the mmap lock in write. My approach
>> does page table reclaim asynchronously from another thread and do not
>> rely on reference counts.
> 
> FWIW, I had a quick peek and I like the simplistic approach using 
> reference counting, although it seems to come with a price. By hooking 
> using pte_alloc_get_map_lock() instead of pte_alloc_map_lock, we can 
> handle quite some cases easily.
> 
> There are cases where we might immediately see a reuse after discarding 
> memory (especially, with virtio-balloon free page reporting), in which 
> case it's suboptimal to immediately discard instead of waiting a bit if 
> there is a reuse. However, the performance impact seems to be 
> comparatively small.

Good point, maybe we can wait a bit in free_pte_table() in the added
optimization patch if the frequency of immediate reuse is high.

> 
> I do wonder if the 1% overhead you're seeing is actually because of 
> allcoating/freeing or because of the reference count handling on some 
> hot paths.
> 
> I'm primarily looking into asynchronous reclaim, because it somewhat 
> makes sense to only reclaim (+ pay a cost) when there is really need to 
> reclaim memory -- similar to our shrinker infrastructure.
>