diff mbox series

[01/16] mm/huge_memory: use flush_pmd_tlb_range in move_huge_pmd

Message ID 20220622170627.19786-2-linmiaohe@huawei.com (mailing list archive)
State New
Series A few cleanup patches for huge_memory

Commit Message

Miaohe Lin June 22, 2022, 5:06 p.m. UTC
Architectures with special requirements for evicting THP-backing TLB entries
can implement flush_pmd_tlb_range(). Even otherwise, it can help optimize TLB
flushing in the THP case. Use flush_pmd_tlb_range() in move_huge_pmd() to take
advantage of this.

Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
 mm/huge_memory.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

Comments

Muchun Song June 23, 2022, 6:30 a.m. UTC | #1
On Thu, Jun 23, 2022 at 01:06:12AM +0800, Miaohe Lin wrote:
> ARCHes with special requirements for evicting THP backing TLB entries can
> implement flush_pmd_tlb_range. Otherwise also, it can help optimize TLB
> flush in THP regime. Using flush_pmd_tlb_range to take advantage of this
> in move_huge_pmd.
> 
> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>

LGTM.

Reviewed-by: Muchun Song <songmuchun@bytedance.com>
Zach O'Keefe June 24, 2022, 6:32 p.m. UTC | #2
On 23 Jun 14:30, Muchun Song wrote:
> On Thu, Jun 23, 2022 at 01:06:12AM +0800, Miaohe Lin wrote:
> > ARCHes with special requirements for evicting THP backing TLB entries can
> > implement flush_pmd_tlb_range. Otherwise also, it can help optimize TLB
> > flush in THP regime. Using flush_pmd_tlb_range to take advantage of this
> > in move_huge_pmd.
> > 
> > Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
> 
> LGTM.
> 
> Reviewed-by: Muchun Song <songmuchun@bytedance.com>

Reviewed-by: Zach O'Keefe <zokeefe@google.com>

Patch

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index af0751a79c19..fd6da053a13e 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1746,7 +1746,7 @@ bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr,
 		pmd = move_soft_dirty_pmd(pmd);
 		set_pmd_at(mm, new_addr, new_pmd, pmd);
 		if (force_flush)
-			flush_tlb_range(vma, old_addr, old_addr + PMD_SIZE);
+			flush_pmd_tlb_range(vma, old_addr, old_addr + PMD_SIZE);
 		if (new_ptl != old_ptl)
 			spin_unlock(new_ptl);
 		spin_unlock(old_ptl);