
[v2] mm: replace is_zero_pfn with is_huge_zero_pmd for thp

Message ID 20191108192629.201556-1-yuzhao@google.com (mailing list archive)
State New, archived

Commit Message

Yu Zhao Nov. 8, 2019, 7:26 p.m. UTC
For a hugely mapped thp, we use is_huge_zero_pmd() to check whether it
is the zero page or not.

We do fill the ptes with my_zero_pfn() when we split a zero thp pmd, so
is_zero_pfn() is the right check at the pte level. But that is not the
case in vm_normal_page_pmd(): there the pmd is still huge --
pmd_trans_huge_lock() makes sure of it -- and the huge zero page has its
own pfn, which is_zero_pfn() never matches.
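
For context, here is a simplified sketch of the two checks involved. This is
not the exact kernel code (the definitions vary by architecture and release);
it only illustrates why the pte-level test cannot match a huge zero pmd:

/* pte level: compares against the pfn of the single 4K zero page */
static inline int is_zero_pfn(unsigned long pfn)
{
	extern unsigned long zero_pfn;

	return pfn == zero_pfn;
}

/* pmd level: checks for the separately allocated huge zero page */
static inline bool is_huge_zero_pmd(pmd_t pmd)
{
	return is_huge_zero_page(pmd_page(pmd));
}

Because the huge zero page is its own allocation, its pfn never equals
zero_pfn, so is_zero_pfn(pmd_pfn(pmd)) cannot recognize it.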

This is a trivial fix for /proc/pid/numa_maps, and AFAIK nobody has
complained about it.

Gerald Schaefer asked:
> Maybe the description could also mention the symptom of this bug?
> I would assume that it affects anon/dirty accounting in gather_pte_stats(),
> for huge mappings, if zero page mappings are not correctly recognized.

I came across this while I was looking at the code, so I'm not aware of
any symptom.
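
To make the potential symptom concrete: the numa_maps walker in
fs/proc/task_mmu.c reaches vm_normal_page_pmd() through a helper roughly
like the following (a simplified sketch, not the exact upstream code --
the real helper also checks the page's node state):

static struct page *can_gather_numa_stats_pmd(pmd_t pmd,
					      struct vm_area_struct *vma,
					      unsigned long addr)
{
	struct page *page;

	if (!pmd_present(pmd))
		return NULL;

	/* NULL for special mappings, which should include the zero page */
	page = vm_normal_page_pmd(vma, addr, pmd);
	if (!page || PageReserved(page))
		return NULL;

	return page;
}

With the old is_zero_pfn() check, a huge zero pmd is not filtered out here
and its page is handed to gather_stats() as if it were a normal page; with
is_huge_zero_pmd() it is skipped, matching how the 4K zero page is already
handled at the pte level.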

Signed-off-by: Yu Zhao <yuzhao@google.com>
---
 mm/memory.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

Patch

diff --git a/mm/memory.c b/mm/memory.c
index b1ca51a079f2..cf209f84ce4a 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -654,7 +654,7 @@  struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
 
 	if (pmd_devmap(pmd))
 		return NULL;
-	if (is_zero_pfn(pfn))
+	if (is_huge_zero_pmd(pmd))
 		return NULL;
 	if (unlikely(pfn > highest_memmap_pfn))
 		return NULL;
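
The pte-level situation the commit message refers to comes from the zero-thp
split path. A hedged sketch, modeled on __split_huge_zero_page_pmd() in
mm/huge_memory.c (simplified, with a hypothetical name; not the exact
upstream code):

static void split_zero_pmd_sketch(struct vm_area_struct *vma,
				  unsigned long haddr, pmd_t *pmd)
{
	struct mm_struct *mm = vma->vm_mm;
	pgtable_t pgtable;
	pmd_t _pmd;
	int i;

	/* Replace the huge zero pmd with a page table of zero ptes. */
	pgtable = pgtable_trans_huge_withdraw(mm, pmd);
	pmd_populate(mm, &_pmd, pgtable);

	for (i = 0; i < HPAGE_PMD_NR; i++, haddr += PAGE_SIZE) {
		pte_t *pte, entry;

		/* Each pte maps the 4K zero page, so is_zero_pfn() works. */
		entry = pfn_pte(my_zero_pfn(haddr), vma->vm_page_prot);
		entry = pte_mkspecial(entry);
		pte = pte_offset_map(&_pmd, haddr);
		set_pte_at(mm, haddr, pte, entry);
		pte_unmap(pte);
	}
	smp_wmb(); /* make the ptes visible before the pmd */
	pmd_populate(mm, pmd, pgtable);
}

After such a split, is_zero_pfn() is the right test for each pte; before it,
only is_huge_zero_pmd() can identify the mapping, which is what the one-line
change above relies on.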