
[v2,1/2] mm: hugetlb: use flush_hugetlb_tlb_range() in move_hugetlb_page_tables()

Message ID: 20230801023145.17026-2-wangkefeng.wang@huawei.com
Series: mm: hugetlb: fix mremap tlb flush

Commit Message

Kefeng Wang Aug. 1, 2023, 2:31 a.m. UTC
Architectures may need to do special things when flushing the hugepage
TLB, so use the more applicable flush_hugetlb_tlb_range() instead of
flush_tlb_range().

Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Acked-by: Muchun Song <songmuchun@bytedance.com>
Fixes: 550a7d60bd5e ("mm, hugepages: add mremap() support for hugepage backed vma")
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/hugetlb.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
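
For reference, flush_hugetlb_tlb_range() falls back to a plain
flush_tlb_range() on architectures that do not provide their own
version, so this change is a no-op for them. A minimal sketch of the
generic fallback, as defined in include/linux/hugetlb.h (paraphrased
here, not part of this patch):

	#ifndef flush_hugetlb_tlb_range
	#define flush_hugetlb_tlb_range(vma, addr, end)	flush_tlb_range(vma, addr, end)
	#endif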

Patch

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 64a3239b6407..ac876bfba340 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5281,9 +5281,9 @@ int move_hugetlb_page_tables(struct vm_area_struct *vma,
 	}
 
 	if (shared_pmd)
-		flush_tlb_range(vma, range.start, range.end);
+		flush_hugetlb_tlb_range(vma, range.start, range.end);
 	else
-		flush_tlb_range(vma, old_end - len, old_end);
+		flush_hugetlb_tlb_range(vma, old_end - len, old_end);
 	mmu_notifier_invalidate_range_end(&range);
 	i_mmap_unlock_write(mapping);
 	hugetlb_vma_unlock_write(vma);
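
An architecture that wants hugepage-aware flushing can then override
the hook, typically flushing with a stride of the huge page size rather
than PAGE_SIZE. A rough, hypothetical sketch of what such an override
could look like (my_arch_flush_tlb_range() is an illustrative
placeholder, not a real kernel API; huge_page_size() and hstate_vma()
are existing hugetlb helpers):

	#include <linux/hugetlb.h>

	/*
	 * Hypothetical arch override: flush the range using the VMA's
	 * hugepage size as the stride, so one TLB invalidation covers
	 * each huge page instead of every base page in the range.
	 */
	static inline void flush_hugetlb_tlb_range(struct vm_area_struct *vma,
						   unsigned long start,
						   unsigned long end)
	{
		unsigned long stride = huge_page_size(hstate_vma(vma));

		/* Assumed arch primitive that accepts a stride hint. */
		my_arch_flush_tlb_range(vma, start, end, stride);
	}
	#define flush_hugetlb_tlb_range flush_hugetlb_tlb_range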