[v4,2/3] mm: changes to split_huge_page() to free zero filled tail pages

Message ID ff9fd618be7b9cb48d36e3635c89a2fe0b7fca65.1666150565.git.alexlzhu@fb.com (mailing list archive)
State: New
Series: THP Shrinker

Commit Message

Alex Zhu (Kernel) Oct. 19, 2022, 3:42 a.m. UTC
From: Alexander Zhu <alexlzhu@fb.com>

Currently, when /sys/kernel/mm/transparent_hugepage/enabled=always is set
there are a large number of transparent hugepages that are almost entirely
zero filled.  This is mentioned in a number of previous patchsets
including:
https://lore.kernel.org/all/20210731063938.1391602-1-yuzhao@google.com/
https://lore.kernel.org/all/
1635422215-99394-1-git-send-email-ningzhang@linux.alibaba.com/

Currently, split_huge_page() does not have a way to identify zero filled
pages within the THP. Thus these zero pages get remapped and continue to
create memory waste. In this patch, we identify and free tail pages that
are zero filled in split_huge_page(). In this way, we avoid mapping these
pages back into page table entries and can free up unused memory within
THPs. This is based off the previously mentioned patchset by Yu Zhao.
However, we chose to free anonymous zero tail pages whenever they are
encountered instead of only on reclaim or migration.

We also add self tests to verify the RssAnon value to make sure zero
pages are not remapped except in the case of userfaultfd. In the case
of userfaultfd we remap to the shared zero page, similar to what is
done by KSM.

Signed-off-by: Alexander Zhu <alexlzhu@fb.com>
---
v1 to v2
-Modified the split_huge_page selftest based on more recent changes.

RFC to v1

-Added support for mapping to the read-only zero page when splitting a THP registered with userfaultfd. Also added a selftest to verify that this works.
-Only trigger the unmap_clean/zap in split_huge_page on anonymous THPs. We cannot zap zero pages for file THPs.

 include/linux/rmap.h                          |   2 +-
 include/linux/vm_event_item.h                 |   3 +
 mm/huge_memory.c                              |  45 ++++++-
 mm/migrate.c                                  |  73 +++++++++--
 mm/migrate_device.c                           |   4 +-
 mm/vmstat.c                                   |   3 +
 .../selftests/vm/split_huge_page_test.c       | 115 +++++++++++++++++-
 tools/testing/selftests/vm/vm_util.c          |  23 ++++
 tools/testing/selftests/vm/vm_util.h          |   3 +
 9 files changed, 256 insertions(+), 15 deletions(-)
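
For reference, the three counters added by this patch (thp_split_free, thp_split_unmap, thp_split_remap_readonly_zero_page) show up in /proc/vmstat alongside the existing thp_split_* events. On a kernel with the series applied, one quick way to watch the zero-page freeing in action is:

        # shows the pre-existing thp_split_* counters plus the three new ones
        grep thp_split /proc/vmstat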

Comments

Yu Zhao Oct. 19, 2022, 5:12 a.m. UTC | #1
On Tue, Oct 18, 2022 at 9:42 PM <alexlzhu@fb.com> wrote:
>
> From: Alexander Zhu <alexlzhu@fb.com>
>
> Currently, when /sys/kernel/mm/transparent_hugepage/enabled=always is set
> there are a large number of transparent hugepages that are almost entirely
> zero filled.  This is mentioned in a number of previous patchsets
> including:
> https://lore.kernel.org/all/20210731063938.1391602-1-yuzhao@google.com/
> https://lore.kernel.org/all/
> 1635422215-99394-1-git-send-email-ningzhang@linux.alibaba.com/
>
> Currently, split_huge_page() does not have a way to identify zero filled
> pages within the THP. Thus these zero pages get remapped and continue to
> create memory waste. In this patch, we identify and free tail pages that
> are zero filled in split_huge_page(). In this way, we avoid mapping these
> pages back into page table entries and can free up unused memory within
> THPs. This is based off the previously mentioned patchset by Yu Zhao.

Hi Alex,

Generally the process [1] to follow is that you keep my patches
separate from yours, rather than squash them into one, e.g., [2].

[1] https://www.kernel.org/doc/html/latest/process/submitting-patches.html
[2] https://lore.kernel.org/linux-mm/cover.1665568707.git.christophe.leroy@csgroup.eu/

Also it's a courtesy to cc Ning, since his approach is (very) similar
to yours. Naturally he would wonder if you are reinventing the wheel,
so you'd have to address it in your cover letter.

> However, we chose to free anonymous zero tail pages whenever they are
> encountered instead of only on reclaim or migration.

What are cases that are not on reclaim or migration?

As I've explained off the mailing list, it's likely a bug if you
really have one. And I don't think you do. I'm currently under the
impression that you have a slab shrinker, and slab shrinkers are on
the reclaim path.

Thanks.
Alex Zhu (Kernel) Oct. 19, 2022, 6:48 p.m. UTC | #2
On Oct 18, 2022, at 10:12 PM, Yu Zhao <yuzhao@google.com> wrote:

On Tue, Oct 18, 2022 at 9:42 PM <alexlzhu@fb.com> wrote:

From: Alexander Zhu <alexlzhu@fb.com>

Currently, when /sys/kernel/mm/transparent_hugepage/enabled=always is set
there are a large number of transparent hugepages that are almost entirely
zero filled.  This is mentioned in a number of previous patchsets
including:
https://lore.kernel.org/all/20210731063938.1391602-1-yuzhao@google.com/
https://lore.kernel.org/all/
1635422215-99394-1-git-send-email-ningzhang@linux.alibaba.com/

Currently, split_huge_page() does not have a way to identify zero filled
pages within the THP. Thus these zero pages get remapped and continue to
create memory waste. In this patch, we identify and free tail pages that
are zero filled in split_huge_page(). In this way, we avoid mapping these
pages back into page table entries and can free up unused memory within
THPs. This is based off the previously mentioned patchset by Yu Zhao.

Hi Alex,

Generally the process [1] to follow is that you keep my patches
separate from yours, rather than squash them into one, e.g., [2].

[1] https://www.kernel.org/doc/html/latest/process/submitting-patches.html
[2] https://lore.kernel.org/linux-mm/cover.1665568707.git.christophe.leroy@csgroup.eu/

Also it's a courtesy to cc Ning, since his approach is (very) similar
to yours. Naturally he would wonder if you are reinventing the wheel,
so you'd have to address it in your cover letter.

Sorry about that. Will cc Ning as well in future iterations. I will split out the second patch into a few patches as well.

This patchset differs from Ning's RFC in that we make use of list_lru and a shrinker, as discussed previously:
https://lore.kernel.org/linux-mm/CAOUHufYeuMN9As58BVwMKSN6viOZKReXNeCBgGeeL6ToWGsEKw@mail.gmail.com/

The approach is different, but we are fundamentally still cleaning up underutilized THPs (contain a large number of zero pages).
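
For illustration only, a list_lru-backed shrinker of the kind being described usually pairs count_objects/scan_objects over the list_lru. This is a rough sketch, not the code from patch 3/3 of this series; huge_zero_lru, thp_underutilized_isolate() and the other names are placeholders:

static struct list_lru huge_zero_lru;	/* THPs queued for underutilization checks */

/* Hypothetical isolate callback: would try split_huge_page() on each entry. */
static enum lru_status thp_underutilized_isolate(struct list_head *item,
		struct list_lru_one *list, spinlock_t *lock, void *cb_arg);

static unsigned long thp_underutilized_count(struct shrinker *shrink,
					     struct shrink_control *sc)
{
	/* Report how many THPs sit on this node/memcg's list. */
	return list_lru_shrink_count(&huge_zero_lru, sc);
}

static unsigned long thp_underutilized_scan(struct shrinker *shrink,
					    struct shrink_control *sc)
{
	/* Walk up to sc->nr_to_scan entries and let the callback split them. */
	return list_lru_shrink_walk(&huge_zero_lru, sc,
				    thp_underutilized_isolate, NULL);
}

static struct shrinker thp_underutilized_shrinker = {
	.count_objects	= thp_underutilized_count,
	.scan_objects	= thp_underutilized_scan,
	.seeks		= DEFAULT_SEEKS,
	.flags		= SHRINKER_NUMA_AWARE | SHRINKER_MEMCG_AWARE,
};

As Yu notes in the quoted text below, such a shrinker is still invoked from the reclaim path regardless of what kind of objects sit on the list.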


However, we chose to free anonymous zero tail pages whenever they are
encountered instead of only on reclaim or migration.

What are cases that are not on reclaim or migration?

It would be any case where split_huge_page is called on anonymous memory. split_huge_page is also called from KSM and madvise. It can also be called from debugfs, which is what the self test relies on. We thought this implementation would be more generic. As far as I can tell there is no reason to keep zero pages around in anonymous THPs that have been split.

We also handled remapping to a shared zero page on userfaultfd in a previous iteration. That is the only use case I am aware of where we do not want to zap the zero pages.

As I've explained off the mailing list, it's likely a bug if you
really have one. And I don't think you do. I'm currently under the
impression that you have a slab shrinker, and slab shrinkers are on
the reclaim path.

Thanks.

This shrinker is not only for slabs. It’s for all anonymous THPs in physical memory. That’s why we needed to add list_lru_add_page and list_lru_delete_page as well, as list_lru_add/delete assumes slab objects.
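
For readers unfamiliar with the slab assumption mentioned above: list_lru_add() (mm/list_lru.c, around v6.1) locates the node, and on memcg-aware lists the memcg, from the address of the list_head itself, roughly:

	/* simplified excerpt, not the series code */
	int nid = page_to_nid(virt_to_page(item));	/* assumes a slab/kmalloc object */

A list_head embedded in a THP's struct page (as with page_deferred_list()) is not such an object, which is why the series adds page-aware variants that take the page explicitly.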
kernel test robot Oct. 20, 2022, 9:57 p.m. UTC | #3
Hi,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on shuah-kselftest/next]
[also build test ERROR on linus/master v6.1-rc1 next-20221020]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/alexlzhu-fb-com/THP-Shrinker/20221019-114447
base:   https://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest.git next
patch link:    https://lore.kernel.org/r/ff9fd618be7b9cb48d36e3635c89a2fe0b7fca65.1666150565.git.alexlzhu%40fb.com
patch subject: [PATCH v4 2/3] mm: changes to split_huge_page() to free zero filled tail pages
config: parisc-defconfig
compiler: hppa-linux-gcc (GCC) 12.1.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/intel-lab-lkp/linux/commit/40ddae5e98a86e8f5e168cac6056d2353ec40c1f
        git remote add linux-review https://github.com/intel-lab-lkp/linux
        git fetch --no-tags linux-review alexlzhu-fb-com/THP-Shrinker/20221019-114447
        git checkout 40ddae5e98a86e8f5e168cac6056d2353ec40c1f
        # save the config file
        mkdir build_dir && cp config build_dir/.config
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-12.1.0 make.cross W=1 O=build_dir ARCH=parisc SHELL=/bin/bash

If you fix the issue, kindly add following tag where applicable
| Reported-by: kernel test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

   mm/migrate.c: In function 'try_to_unmap_clean':
   mm/migrate.c:206:32: error: 'THP_SPLIT_REMAP_READONLY_ZERO_PAGE' undeclared (first use in this function)
     206 |                 count_vm_event(THP_SPLIT_REMAP_READONLY_ZERO_PAGE);
         |                                ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   mm/migrate.c:206:32: note: each undeclared identifier is reported only once for each function it appears in
>> mm/migrate.c:211:24: error: 'THP_SPLIT_UNMAP' undeclared (first use in this function)
     211 |         count_vm_event(THP_SPLIT_UNMAP);
         |                        ^~~~~~~~~~~~~~~


vim +/THP_SPLIT_UNMAP +211 mm/migrate.c

   171	
   172	static bool try_to_unmap_clean(struct page_vma_mapped_walk *pvmw, struct page *page)
   173	{
   174		void *addr;
   175		bool dirty;
   176		pte_t newpte;
   177	
   178		VM_BUG_ON_PAGE(PageCompound(page), page);
   179		VM_BUG_ON_PAGE(!PageAnon(page), page);
   180		VM_BUG_ON_PAGE(!PageLocked(page), page);
   181		VM_BUG_ON_PAGE(pte_present(*pvmw->pte), page);
   182	
   183		if (PageMlocked(page) || (pvmw->vma->vm_flags & VM_LOCKED))
   184			return false;
   185	
   186		/*
   187		 * The pmd entry mapping the old thp was flushed and the pte mapping
   188		 * this subpage has been non present. Therefore, this subpage is
   189		 * inaccessible. We don't need to remap it if it contains only zeros.
   190		 */
   191		addr = kmap_local_page(page);
   192		dirty = memchr_inv(addr, 0, PAGE_SIZE);
   193		kunmap_local(addr);
   194	
   195		if (dirty)
   196			return false;
   197	
   198		pte_clear_not_present_full(pvmw->vma->vm_mm, pvmw->address, pvmw->pte, false);
   199	
   200		if (userfaultfd_armed(pvmw->vma)) {
   201			newpte = pte_mkspecial(pfn_pte(page_to_pfn(ZERO_PAGE(pvmw->address)),
   202						       pvmw->vma->vm_page_prot));
   203			ptep_clear_flush(pvmw->vma, pvmw->address, pvmw->pte);
   204			set_pte_at(pvmw->vma->vm_mm, pvmw->address, pvmw->pte, newpte);
   205			dec_mm_counter(pvmw->vma->vm_mm, MM_ANONPAGES);
   206			count_vm_event(THP_SPLIT_REMAP_READONLY_ZERO_PAGE);
   207			return true;
   208		}
   209	
   210		dec_mm_counter(pvmw->vma->vm_mm, mm_counter(page));
 > 211		count_vm_event(THP_SPLIT_UNMAP);
   212		return true;
   213	}
   214
Ning Zhang Oct. 25, 2022, 6:21 a.m. UTC | #4
On 2022/10/20 02:48, Alex Zhu (Kernel) wrote:
>
>
>> On Oct 18, 2022, at 10:12 PM, Yu Zhao <yuzhao@google.com> wrote:
>>
>> On Tue, Oct 18, 2022 at 9:42 PM <alexlzhu@fb.com> wrote:
>>>
>>> From: Alexander Zhu <alexlzhu@fb.com>
>>>
>>> Currently, when /sys/kernel/mm/transparent_hugepage/enabled=always 
>>> is set
>>> there are a large number of transparent hugepages that are almost 
>>> entirely
>>> zero filled.  This is mentioned in a number of previous patchsets
>>> including:
>>> https://lore.kernel.org/all/20210731063938.1391602-1-yuzhao@google.com/
>>> https://lore.kernel.org/all/
>>> 1635422215-99394-1-git-send-email-ningzhang@linux.alibaba.com/
>>>
>>> Currently, split_huge_page() does not have a way to identify zero filled
>>> pages within the THP. Thus these zero pages get remapped and continue to
>>> create memory waste. In this patch, we identify and free tail pages that
>>> are zero filled in split_huge_page(). In this way, we avoid mapping 
>>> these
>>> pages back into page table entries and can free up unused memory within
>>> THPs. This is based off the previously mentioned patchset by Yu Zhao.
>>
>> Hi Alex,
>>
>> Generally the process [1] to follow is that you keep my patches
>> separate from yours, rather than squash them into one, e.g., [2].
>>
>> [1]https://www.kernel.org/doc/html/latest/process/submitting-patches.html
>> [2]https://lore.kernel.org/linux-mm/cover.1665568707.git.christophe.leroy@csgroup.eu/
>>
>> Also it's a courtesy to cc Ning, since his approach is (very) similar
>> to yours. Naturally he would wonder if you are reinventing the wheel,
>> so you'd have to address it in your cover letter.
>
> Sorry about that. Will cc Ning as well in future iterations. I will 
> split out the second patch into a few patches as well.
>
> This patchset differs from Ning's RFC in that we make use of list_lru 
> and a shrinker, as discussed previously:
> https://lore.kernel.org/linux-mm/CAOUHufYeuMN9As58BVwMKSN6viOZKReXNeCBgGeeL6ToWGsEKw@mail.gmail.com/
>
> The approach is different, but we are fundamentally still cleaning up 
> underutilized THPs (contain a large number of zero pages).
>
I have used a shrinker in a previous version (see
https://gitee.com/anolis/cloud-kernel/commit/62f8852885cc7f23063886d36fd36d94b48d3982).

But the shrinker has a problem in that it can't control the split count
accurately. For example, I only want to split two THPs to avoid OOM, but
the shrinker may split many THPs.

>>
>>> However, we chose to free anonymous zero tail pages whenever they are
>>> encountered instead of only on reclaim or migration.
>>
>> What are cases that are not on reclaim or migration?
>
> It would be any case where split_huge_page is called on anonymous 
> memory. split_huge_page is also called from KSM and madvise. It can 
> also be called from debugfs, which is what the self test relies on. We 
> thought this implementation would be more generic. As far as I can 
> tell there is no reason to keep zero pages around in anonymous THPs 
> that have been split.
>
> We also handled remapping to a shared zero page on userfaultfd in a 
> previous iteration. That is the only use case I am aware of where we 
> do not want to zap the zero pages.
>>
>> As I've explained off the mailing list, it's likely a bug if you
>> really have one. And I don't think you do. I'm currently under the
>> impression that you have a slab shrinker, and slab shrinkers are on
>> the reclaim path.
>>
>> Thanks.
>
> This shrinker is not only for slabs. It’s for all anonymous THPs in 
> physical memory. That’s why we needed to add list_lru_add_page and 
> list_lru_delete_page as well, as list_lru_add/delete assumes slab 
> objects.
>
>
Alex Zhu (Kernel) Oct. 26, 2022, 7:43 p.m. UTC | #5
On Oct 24, 2022, at 11:21 PM, Ning Zhang <ningzhang@linux.alibaba.com> wrote:

On 2022/10/20 02:48, Alex Zhu (Kernel) wrote:


On Oct 18, 2022, at 10:12 PM, Yu Zhao <yuzhao@google.com> wrote:

On Tue, Oct 18, 2022 at 9:42 PM <alexlzhu@fb.com> wrote:

From: Alexander Zhu <alexlzhu@fb.com>

Currently, when /sys/kernel/mm/transparent_hugepage/enabled=always is set
there are a large number of transparent hugepages that are almost entirely
zero filled.  This is mentioned in a number of previous patchsets
including:
https://lore.kernel.org/all/20210731063938.1391602-1-yuzhao@google.com/
https://lore.kernel.org/all/
1635422215-99394-1-git-send-email-ningzhang@linux.alibaba.com/

Currently, split_huge_page() does not have a way to identify zero filled
pages within the THP. Thus these zero pages get remapped and continue to
create memory waste. In this patch, we identify and free tail pages that
are zero filled in split_huge_page(). In this way, we avoid mapping these
pages back into page table entries and can free up unused memory within
THPs. This is based off the previously mentioned patchset by Yu Zhao.

Hi Alex,

Generally the process [1] to follow is that you keep my patches
separate from yours, rather than squash them into one, e.g., [2].

[1] https://www.kernel.org/doc/html/latest/process/submitting-patches.html
[2] https://lore.kernel.org/linux-mm/cover.1665568707.git.christophe.leroy@csgroup.eu/

Also it's a courtesy to cc Ning, since his approach is (very) similar
to yours. Naturally he would wonder if you are reinventing the wheel,
so you'd have to address it in your cover letter.

Sorry about that. Will cc Ning as well in future iterations. I will split out the second patch into a few patches as well.

This patchset differs from Ning's RFC in that we make use of list_lru and a shrinker, as discussed previously:
https://lore.kernel.org/linux-mm/CAOUHufYeuMN9As58BVwMKSN6viOZKReXNeCBgGeeL6ToWGsEKw@mail.gmail.com/

The approach is different, but we are fundamentally still cleaning up underutilized THPs (contain a large number of zero pages).


I have used a shrinker in a previous version (see https://gitee.com/anolis/cloud-kernel/commit/62f8852885cc7f23063886d36fd36d94b48d3982).

But the shrinker has a problem in that it can't control the split count accurately. For example, I only want to split two THPs to avoid OOM, but the shrinker may split many THPs.

I was not able to open the link, but what kind of algorithm did you use to determine the split number? We are currently looking into how we can control the number of THPs split based on memory waste in THPs.


However, we chose to free anonymous zero tail pages whenever they are
encountered instead of only on reclaim or migration.

What are cases that are not on reclaim or migration?

It would be any case where split_huge_page is called on anonymous memory. split_huge_page is also called from KSM and madvise. It can also be called from debugfs, which is what the self test relies on. We thought this implementation would be more generic. As far as I can tell there is no reason to keep zero pages around in anonymous THPs that have been split.

We also handled remapping to a shared zero page on userfaultfd in a previous iteration. That is the only use case I am aware of where we do not want to zap the zero pages.

As I've explained off the mailing list, it's likely a bug if you
really have one. And I don't think you do. I'm currently under the
impression that you have a slab shrinker, and slab shrinkers are on
the reclaim path.

Thanks.

This shrinker is not only for slabs. It’s for all anonymous THPs in physical memory. That’s why we needed to add list_lru_add_page and list_lru_delete_page as well, as list_lru_add/delete assumes slab objects.

Patch

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index bd3504d11b15..3f83bbcf1333 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -428,7 +428,7 @@  int folio_mkclean(struct folio *);
 int pfn_mkclean_range(unsigned long pfn, unsigned long nr_pages, pgoff_t pgoff,
 		      struct vm_area_struct *vma);
 
-void remove_migration_ptes(struct folio *src, struct folio *dst, bool locked);
+void remove_migration_ptes(struct folio *src, struct folio *dst, bool locked, bool unmap_clean);
 
 int page_mapped_in_vma(struct page *page, struct vm_area_struct *vma);
 
diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h
index 3518dba1e02f..3618b10ddec9 100644
--- a/include/linux/vm_event_item.h
+++ b/include/linux/vm_event_item.h
@@ -111,6 +111,9 @@  enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT,
 #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
 		THP_SPLIT_PUD,
 #endif
+		THP_SPLIT_FREE,
+		THP_SPLIT_UNMAP,
+		THP_SPLIT_REMAP_READONLY_ZERO_PAGE,
 		THP_ZERO_PAGE_ALLOC,
 		THP_ZERO_PAGE_ALLOC_FAILED,
 		THP_SWPOUT,
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 1cc4a5f4791e..f68a353e0adf 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2373,7 +2373,7 @@  static void unmap_folio(struct folio *folio)
 		try_to_unmap(folio, ttu_flags | TTU_IGNORE_MLOCK);
 }
 
-static void remap_page(struct folio *folio, unsigned long nr)
+static void remap_page(struct folio *folio, unsigned long nr, bool unmap_clean)
 {
 	int i = 0;
 
@@ -2381,7 +2381,7 @@  static void remap_page(struct folio *folio, unsigned long nr)
 	if (!folio_test_anon(folio))
 		return;
 	for (;;) {
-		remove_migration_ptes(folio, folio, true);
+		remove_migration_ptes(folio, folio, true, unmap_clean);
 		i += folio_nr_pages(folio);
 		if (i >= nr)
 			break;
@@ -2496,6 +2496,8 @@  static void __split_huge_page(struct page *page, struct list_head *list,
 	struct address_space *swap_cache = NULL;
 	unsigned long offset = 0;
 	unsigned int nr = thp_nr_pages(head);
+	LIST_HEAD(pages_to_free);
+	int nr_pages_to_free = 0;
 	int i;
 
 	/* complete memcg works before add pages to LRU */
@@ -2558,7 +2560,7 @@  static void __split_huge_page(struct page *page, struct list_head *list,
 	}
 	local_irq_enable();
 
-	remap_page(folio, nr);
+	remap_page(folio, nr, PageAnon(head));
 
 	if (PageSwapCache(head)) {
 		swp_entry_t entry = { .val = page_private(head) };
@@ -2572,6 +2574,34 @@  static void __split_huge_page(struct page *page, struct list_head *list,
 			continue;
 		unlock_page(subpage);
 
+		/*
+		 * If a tail page has only two references left, one inherited
+		 * from the isolation of its head and the other from
+		 * lru_add_page_tail() which we are about to drop, it means this
+		 * tail page was concurrently zapped. Then we can safely free it
+		 * and save page reclaim or migration the trouble of trying it.
+		 */
+		if (list && page_ref_freeze(subpage, 2)) {
+			VM_BUG_ON_PAGE(PageLRU(subpage), subpage);
+			VM_BUG_ON_PAGE(PageCompound(subpage), subpage);
+			VM_BUG_ON_PAGE(page_mapped(subpage), subpage);
+
+			ClearPageActive(subpage);
+			ClearPageUnevictable(subpage);
+			list_move(&subpage->lru, &pages_to_free);
+			nr_pages_to_free++;
+			continue;
+		}
+
+		/*
+		 * If a tail page has only one reference left, it will be freed
+		 * by the call to free_page_and_swap_cache below. Since zero
+		 * subpages are no longer remapped, there will only be one
+		 * reference left in cases outside of reclaim or migration.
+		 */
+		if (page_ref_count(subpage) == 1)
+			nr_pages_to_free++;
+
 		/*
 		 * Subpages may be freed if there wasn't any mapping
 		 * like if add_to_swap() is running on a lru page that
@@ -2581,6 +2611,13 @@  static void __split_huge_page(struct page *page, struct list_head *list,
 		 */
 		free_page_and_swap_cache(subpage);
 	}
+
+	if (!nr_pages_to_free)
+		return;
+
+	mem_cgroup_uncharge_list(&pages_to_free);
+	free_unref_page_list(&pages_to_free);
+	count_vm_events(THP_SPLIT_FREE, nr_pages_to_free);
 }
 
 /* Racy check whether the huge page can be split */
@@ -2752,7 +2789,7 @@  int split_huge_page_to_list(struct page *page, struct list_head *list)
 		if (mapping)
 			xas_unlock(&xas);
 		local_irq_enable();
-		remap_page(folio, folio_nr_pages(folio));
+		remap_page(folio, folio_nr_pages(folio), false);
 		ret = -EBUSY;
 	}
 
diff --git a/mm/migrate.c b/mm/migrate.c
index 1379e1912772..bc96a084d925 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -30,6 +30,7 @@ 
 #include <linux/writeback.h>
 #include <linux/mempolicy.h>
 #include <linux/vmalloc.h>
+#include <linux/vm_event_item.h>
 #include <linux/security.h>
 #include <linux/backing-dev.h>
 #include <linux/compaction.h>
@@ -168,13 +169,62 @@  void putback_movable_pages(struct list_head *l)
 	}
 }
 
+static bool try_to_unmap_clean(struct page_vma_mapped_walk *pvmw, struct page *page)
+{
+	void *addr;
+	bool dirty;
+	pte_t newpte;
+
+	VM_BUG_ON_PAGE(PageCompound(page), page);
+	VM_BUG_ON_PAGE(!PageAnon(page), page);
+	VM_BUG_ON_PAGE(!PageLocked(page), page);
+	VM_BUG_ON_PAGE(pte_present(*pvmw->pte), page);
+
+	if (PageMlocked(page) || (pvmw->vma->vm_flags & VM_LOCKED))
+		return false;
+
+	/*
+	 * The pmd entry mapping the old thp was flushed and the pte mapping
+	 * this subpage has been non present. Therefore, this subpage is
+	 * inaccessible. We don't need to remap it if it contains only zeros.
+	 */
+	addr = kmap_local_page(page);
+	dirty = memchr_inv(addr, 0, PAGE_SIZE);
+	kunmap_local(addr);
+
+	if (dirty)
+		return false;
+
+	pte_clear_not_present_full(pvmw->vma->vm_mm, pvmw->address, pvmw->pte, false);
+
+	if (userfaultfd_armed(pvmw->vma)) {
+		newpte = pte_mkspecial(pfn_pte(page_to_pfn(ZERO_PAGE(pvmw->address)),
+					       pvmw->vma->vm_page_prot));
+		ptep_clear_flush(pvmw->vma, pvmw->address, pvmw->pte);
+		set_pte_at(pvmw->vma->vm_mm, pvmw->address, pvmw->pte, newpte);
+		dec_mm_counter(pvmw->vma->vm_mm, MM_ANONPAGES);
+		count_vm_event(THP_SPLIT_REMAP_READONLY_ZERO_PAGE);
+		return true;
+	}
+
+	dec_mm_counter(pvmw->vma->vm_mm, mm_counter(page));
+	count_vm_event(THP_SPLIT_UNMAP);
+	return true;
+}
+
+struct rmap_walk_arg {
+	struct folio *folio;
+	bool unmap_clean;
+};
+
 /*
  * Restore a potential migration pte to a working pte entry
  */
 static bool remove_migration_pte(struct folio *folio,
-		struct vm_area_struct *vma, unsigned long addr, void *old)
+		struct vm_area_struct *vma, unsigned long addr, void *arg)
 {
-	DEFINE_FOLIO_VMA_WALK(pvmw, old, vma, addr, PVMW_SYNC | PVMW_MIGRATION);
+	struct rmap_walk_arg *rmap_walk_arg = arg;
+	DEFINE_FOLIO_VMA_WALK(pvmw, rmap_walk_arg->folio, vma, addr, PVMW_SYNC | PVMW_MIGRATION);
 
 	while (page_vma_mapped_walk(&pvmw)) {
 		rmap_t rmap_flags = RMAP_NONE;
@@ -197,6 +247,8 @@  static bool remove_migration_pte(struct folio *folio,
 			continue;
 		}
 #endif
+		if (rmap_walk_arg->unmap_clean && try_to_unmap_clean(&pvmw, new))
+			continue;
 
 		folio_get(folio);
 		pte = mk_pte(new, READ_ONCE(vma->vm_page_prot));
@@ -272,13 +324,20 @@  static bool remove_migration_pte(struct folio *folio,
  * Get rid of all migration entries and replace them by
  * references to the indicated page.
  */
-void remove_migration_ptes(struct folio *src, struct folio *dst, bool locked)
+void remove_migration_ptes(struct folio *src, struct folio *dst, bool locked, bool unmap_clean)
 {
+	struct rmap_walk_arg rmap_walk_arg = {
+		.folio = src,
+		.unmap_clean = unmap_clean,
+	};
+
 	struct rmap_walk_control rwc = {
 		.rmap_one = remove_migration_pte,
-		.arg = src,
+		.arg = &rmap_walk_arg,
 	};
 
+	VM_BUG_ON_FOLIO(unmap_clean && src != dst, src);
+
 	if (locked)
 		rmap_walk_locked(dst, &rwc);
 	else
@@ -872,7 +931,7 @@  static int writeout(struct address_space *mapping, struct folio *folio)
 	 * At this point we know that the migration attempt cannot
 	 * be successful.
 	 */
-	remove_migration_ptes(folio, folio, false);
+	remove_migration_ptes(folio, folio, false, false);
 
 	rc = mapping->a_ops->writepage(&folio->page, &wbc);
 
@@ -1128,7 +1187,7 @@  static int __unmap_and_move(struct folio *src, struct folio *dst,
 
 	if (page_was_mapped)
 		remove_migration_ptes(src,
-			rc == MIGRATEPAGE_SUCCESS ? dst : src, false);
+			rc == MIGRATEPAGE_SUCCESS ? dst : src, false, false);
 
 out_unlock_both:
 	folio_unlock(dst);
@@ -1338,7 +1397,7 @@  static int unmap_and_move_huge_page(new_page_t get_new_page,
 
 	if (page_was_mapped)
 		remove_migration_ptes(src,
-			rc == MIGRATEPAGE_SUCCESS ? dst : src, false);
+			rc == MIGRATEPAGE_SUCCESS ? dst : src, false, false);
 
 unlock_put_anon:
 	folio_unlock(dst);
diff --git a/mm/migrate_device.c b/mm/migrate_device.c
index 6fa682eef7a0..6508a083d7fd 100644
--- a/mm/migrate_device.c
+++ b/mm/migrate_device.c
@@ -421,7 +421,7 @@  static unsigned long migrate_device_unmap(unsigned long *src_pfns,
 			continue;
 
 		folio = page_folio(page);
-		remove_migration_ptes(folio, folio, false);
+		remove_migration_ptes(folio, folio, false, false);
 
 		src_pfns[i] = 0;
 		folio_unlock(folio);
@@ -847,7 +847,7 @@  void migrate_device_finalize(unsigned long *src_pfns,
 
 		src = page_folio(page);
 		dst = page_folio(newpage);
-		remove_migration_ptes(src, dst, false);
+		remove_migration_ptes(src, dst, false, false);
 		folio_unlock(src);
 
 		if (is_zone_device_page(page))
diff --git a/mm/vmstat.c b/mm/vmstat.c
index b2371d745e00..3d802eb6754d 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1359,6 +1359,9 @@  const char * const vmstat_text[] = {
 #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
 	"thp_split_pud",
 #endif
+	"thp_split_free",
+	"thp_split_unmap",
+	"thp_split_remap_readonly_zero_page",
 	"thp_zero_page_alloc",
 	"thp_zero_page_alloc_failed",
 	"thp_swpout",
diff --git a/tools/testing/selftests/vm/split_huge_page_test.c b/tools/testing/selftests/vm/split_huge_page_test.c
index 76e1c36dd9e5..42f0e79a4508 100644
--- a/tools/testing/selftests/vm/split_huge_page_test.c
+++ b/tools/testing/selftests/vm/split_huge_page_test.c
@@ -16,6 +16,9 @@ 
 #include <sys/mount.h>
 #include <malloc.h>
 #include <stdbool.h>
+#include <sys/syscall.h> /* Definition of SYS_* constants */
+#include <linux/userfaultfd.h>
+#include <sys/ioctl.h>
 #include "vm_util.h"
 
 uint64_t pagesize;
@@ -88,6 +91,115 @@  static void write_debugfs(const char *fmt, ...)
 	}
 }
 
+static char *allocate_zero_filled_hugepage(size_t len)
+{
+	char *result;
+	size_t i;
+
+	result = memalign(pmd_pagesize, len);
+	if (!result) {
+		printf("Fail to allocate memory\n");
+		exit(EXIT_FAILURE);
+	}
+
+	madvise(result, len, MADV_HUGEPAGE);
+
+	for (i = 0; i < len; i++)
+		result[i] = (char)0;
+
+	return result;
+}
+
+static void verify_rss_anon_split_huge_page_all_zeroes(char *one_page, int nr_hpages, size_t len)
+{
+	uint64_t rss_anon_before, rss_anon_after;
+	size_t i;
+
+	if (!check_huge_anon(one_page, 4, pmd_pagesize)) {
+		printf("No THP is allocated\n");
+		exit(EXIT_FAILURE);
+	}
+
+	rss_anon_before = rss_anon();
+	if (!rss_anon_before) {
+		printf("No RssAnon is allocated before split\n");
+		exit(EXIT_FAILURE);
+	}
+
+	/* split all THPs */
+	write_debugfs(PID_FMT, getpid(), (uint64_t)one_page,
+		      (uint64_t)one_page + len);
+
+	for (i = 0; i < len; i++)
+		if (one_page[i] != (char)0) {
+			printf("%ld byte corrupted\n", i);
+			exit(EXIT_FAILURE);
+		}
+
+	if (!check_huge_anon(one_page, 0, pmd_pagesize)) {
+		printf("Still AnonHugePages not split\n");
+		exit(EXIT_FAILURE);
+	}
+
+	rss_anon_after = rss_anon();
+	if (rss_anon_after >= rss_anon_before) {
+		printf("Incorrect RssAnon value. Before: %ld After: %ld\n",
+		       rss_anon_before, rss_anon_after);
+		exit(EXIT_FAILURE);
+	}
+}
+
+void split_pmd_zero_pages(void)
+{
+	char *one_page;
+	int nr_hpages = 4;
+	size_t len = nr_hpages * pmd_pagesize;
+
+	one_page = allocate_zero_filled_hugepage(len);
+	verify_rss_anon_split_huge_page_all_zeroes(one_page, nr_hpages, len);
+	printf("Split zero filled huge pages successful\n");
+	free(one_page);
+}
+
+void split_pmd_zero_pages_uffd(void)
+{
+	char *one_page;
+	int nr_hpages = 4;
+	size_t len = nr_hpages * pmd_pagesize;
+	long uffd; /* userfaultfd file descriptor */
+	struct uffdio_api uffdio_api;
+	struct uffdio_register uffdio_register;
+
+	/* Create and enable userfaultfd object. */
+
+	uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);
+	if (uffd == -1) {
+		perror("userfaultfd");
+		exit(1);
+	}
+
+	uffdio_api.api = UFFD_API;
+	uffdio_api.features = 0;
+	if (ioctl(uffd, UFFDIO_API, &uffdio_api) == -1) {
+		perror("ioctl-UFFDIO_API");
+		exit(1);
+	}
+
+	one_page = allocate_zero_filled_hugepage(len);
+
+	uffdio_register.range.start = (unsigned long)one_page;
+	uffdio_register.range.len = len;
+	uffdio_register.mode = UFFDIO_REGISTER_MODE_WP;
+	if (ioctl(uffd, UFFDIO_REGISTER, &uffdio_register) == -1) {
+		perror("ioctl-UFFDIO_REGISTER");
+		exit(1);
+	}
+
+	verify_rss_anon_split_huge_page_all_zeroes(one_page, nr_hpages, len);
+	printf("Split zero filled huge pages with uffd successful\n");
+	free(one_page);
+}
+
 void split_pmd_thp(void)
 {
 	char *one_page;
@@ -121,7 +233,6 @@  void split_pmd_thp(void)
 			exit(EXIT_FAILURE);
 		}
 
-
 	if (check_huge_anon(one_page, 0, pmd_pagesize)) {
 		printf("Still AnonHugePages not split\n");
 		exit(EXIT_FAILURE);
@@ -301,6 +412,8 @@  int main(int argc, char **argv)
 	pageshift = ffs(pagesize) - 1;
 	pmd_pagesize = read_pmd_pagesize();
 
+	split_pmd_zero_pages();
+	split_pmd_zero_pages_uffd();
 	split_pmd_thp();
 	split_pte_mapped_thp();
 	split_file_backed_thp();
diff --git a/tools/testing/selftests/vm/vm_util.c b/tools/testing/selftests/vm/vm_util.c
index f11f8adda521..72f3edc64aaf 100644
--- a/tools/testing/selftests/vm/vm_util.c
+++ b/tools/testing/selftests/vm/vm_util.c
@@ -6,6 +6,7 @@ 
 
 #define PMD_SIZE_FILE_PATH "/sys/kernel/mm/transparent_hugepage/hpage_pmd_size"
 #define SMAP_FILE_PATH "/proc/self/smaps"
+#define STATUS_FILE_PATH "/proc/self/status"
 #define MAX_LINE_LENGTH 500
 
 uint64_t pagemap_get_entry(int fd, char *start)
@@ -72,6 +73,28 @@  uint64_t read_pmd_pagesize(void)
 	return strtoul(buf, NULL, 10);
 }
 
+uint64_t rss_anon(void)
+{
+	uint64_t rss_anon = 0;
+	int ret;
+	FILE *fp;
+	char buffer[MAX_LINE_LENGTH];
+
+	fp = fopen(STATUS_FILE_PATH, "r");
+	if (!fp)
+		ksft_exit_fail_msg("%s: Failed to open file %s\n", __func__, STATUS_FILE_PATH);
+
+	if (!check_for_pattern(fp, "RssAnon:", buffer, sizeof(buffer)))
+		goto err_out;
+
+	if (sscanf(buffer, "RssAnon:%10ld kB", &rss_anon) != 1)
+		ksft_exit_fail_msg("Reading status error\n");
+
+err_out:
+	fclose(fp);
+	return rss_anon;
+}
+
 bool __check_huge(void *addr, char *pattern, int nr_hpages,
 		  uint64_t hpage_size)
 {
diff --git a/tools/testing/selftests/vm/vm_util.h b/tools/testing/selftests/vm/vm_util.h
index 5c35de454e08..dd1885f66097 100644
--- a/tools/testing/selftests/vm/vm_util.h
+++ b/tools/testing/selftests/vm/vm_util.h
@@ -1,12 +1,15 @@ 
 /* SPDX-License-Identifier: GPL-2.0 */
 #include <stdint.h>
 #include <stdbool.h>
+#include <stddef.h>
+#include <stdio.h>
 
 uint64_t pagemap_get_entry(int fd, char *start);
 bool pagemap_is_softdirty(int fd, char *start);
 void clear_softdirty(void);
 bool check_for_pattern(FILE *fp, const char *pattern, char *buf, size_t len);
 uint64_t read_pmd_pagesize(void);
+uint64_t rss_anon(void);
 bool check_huge_anon(void *addr, int nr_hpages, uint64_t hpage_size);
 bool check_huge_file(void *addr, int nr_hpages, uint64_t hpage_size);
 bool check_huge_shmem(void *addr, int nr_hpages, uint64_t hpage_size);
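
As a practical note, the new zero-page cases run as part of the existing split_huge_page_test binary. A rough way to exercise them on a kernel built with this series (the test drives the debugfs split_huge_pages interface, so it normally needs root and THP enabled):

        make -C tools/testing/selftests/vm
        sudo ./tools/testing/selftests/vm/split_huge_page_test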