[RFC,v2,4/5] mm/autonuma: call .numa_protect() when page is protected for NUMA migrate

Message ID 20230810090048.26184-1-yan.y.zhao@intel.com (mailing list archive)
State New, archived
Series Reduce NUMA balance caused TLB-shootdowns in a VM

Commit Message

Yan Zhao Aug. 10, 2023, 9 a.m. UTC
Call the mmu notifier's .numa_protect() callback in change_pmd_range() when
a page is known to be protected by PROT_NONE for NUMA migration purposes.

Signed-off-by: Yan Zhao <yan.y.zhao@intel.com>
---
 mm/huge_memory.c | 1 +
 mm/mprotect.c    | 1 +
 2 files changed, 2 insertions(+)
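
The .numa_protect() hook and its wrapper are introduced in the previous patch
of this series (not shown on this page). As a rough sketch of what the two
call sites in this patch invoke, assuming the wrapper follows the pattern of
the existing __mmu_notifier_*() helpers in mm/mmu_notifier.c (modulo the usual
mm_has_notifiers() fast path) and that the callback takes
(notifier, mm, start, end):

void mmu_notifier_numa_protect(struct mm_struct *mm,
			       unsigned long start, unsigned long end)
{
	struct mmu_notifier *mn;
	int id;

	/* Assumes the file-local SRCU domain used by mm/mmu_notifier.c. */
	id = srcu_read_lock(&srcu);
	hlist_for_each_entry_rcu(mn, &mm->notifier_subscriptions->list,
				 hlist, srcu_read_lock_held(&srcu))
		if (mn->ops->numa_protect)
			mn->ops->numa_protect(mn, mm, start, end);
	srcu_read_unlock(&srcu, id);
}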

Comments

Nadav Amit Aug. 11, 2023, 6:52 p.m. UTC | #1
> On Aug 10, 2023, at 2:00 AM, Yan Zhao <yan.y.zhao@intel.com> wrote:
> 
> Call the mmu notifier's .numa_protect() callback in change_pmd_range() when
> a page is known to be protected by PROT_NONE for NUMA migration purposes.

Consider squashing with the previous patch. It’s better to see the user
(caller) with the new functionality.

It would be useful to describe the expected course of action that the
numa_protect callback should take.
Yan Zhao Aug. 14, 2023, 7:52 a.m. UTC | #2
On Fri, Aug 11, 2023 at 11:52:53AM -0700, Nadav Amit wrote:
> 
> > On Aug 10, 2023, at 2:00 AM, Yan Zhao <yan.y.zhao@intel.com> wrote:
> > 
> > Call the mmu notifier's .numa_protect() callback in change_pmd_range() when
> > a page is known to be protected by PROT_NONE for NUMA migration purposes.
> 
> Consider squashing with the previous patch. It’s better to see the user
> (caller) with the new functionality.
> 
> It would be useful to describe the expected course of action that the
> numa_protect callback should take.
Thanks! I'll do it this way when I prepare patches in the future :)
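
On Nadav's second point, the expected course of action: a plausible consumer
(every name below is illustrative and not part of this series; the callback
signature of (notifier, mm, start, end) is assumed from the call sites in this
patch) would record the protected range and skip the secondary-MMU
invalidation, since a PROT_NONE NUMA hint leaves the backing page in place
until migration actually happens. That deferral is where the TLB-shootdown
reduction in the series title comes from.

#include <linux/minmax.h>
#include <linux/mmu_notifier.h>
#include <linux/spinlock.h>

struct demo_notifier {
	struct mmu_notifier mn;
	unsigned long deferred_start;
	unsigned long deferred_end;
	spinlock_t lock;
};

static void demo_numa_protect(struct mmu_notifier *mn, struct mm_struct *mm,
			      unsigned long start, unsigned long end)
{
	struct demo_notifier *d = container_of(mn, struct demo_notifier, mn);

	/*
	 * The range was made PROT_NONE only as a NUMA-balancing hint; the
	 * backing pages are unchanged. Widen the deferred window instead of
	 * invalidating the secondary MMU (and shooting down its TLBs) now.
	 */
	spin_lock(&d->lock);
	d->deferred_start = min(d->deferred_start, start);
	d->deferred_end = max(d->deferred_end, end);
	spin_unlock(&d->lock);
}

static const struct mmu_notifier_ops demo_mmu_ops = {
	.numa_protect = demo_numa_protect,
};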

Patch

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index a71cf686e3b2..8ae56507da12 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1892,6 +1892,7 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
 		    !toptier)
 			xchg_page_access_time(page, jiffies_to_msecs(jiffies));
+		mmu_notifier_numa_protect(vma->vm_mm, addr, addr + PMD_SIZE);
 	}
 	/*
 	 * In case prot_numa, we are under mmap_read_lock(mm). It's critical
diff --git a/mm/mprotect.c b/mm/mprotect.c
index a1f63df34b86..c401814b2992 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -164,6 +164,7 @@ static long change_pte_range(struct mmu_gather *tlb,
 				    !toptier)
 					xchg_page_access_time(page,
 						jiffies_to_msecs(jiffies));
+				mmu_notifier_numa_protect(vma->vm_mm, addr, addr + PAGE_SIZE);
 			}
 
 			oldpte = ptep_modify_prot_start(vma, addr, pte);
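
For completeness, a consumer like the sketch in the comments above would
attach through the existing registration API; these two hunks only add the
notification call. A minimal, hypothetical usage:

static struct demo_notifier demo = {
	.mn.ops		= &demo_mmu_ops,
	.deferred_start	= ULONG_MAX,
};

static int demo_attach(struct mm_struct *mm)
{
	spin_lock_init(&demo.lock);
	/*
	 * Standard mmu_notifier registration. Once attached,
	 * .numa_protect() fires from the call sites above whenever NUMA
	 * balancing protects a PTE (a PAGE_SIZE range) or a huge PMD
	 * (a PMD_SIZE range) in this mm.
	 */
	return mmu_notifier_register(&demo.mn, mm);
}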