[v5,2/3] mm/khugepaged: Fix GUP-fast interaction by sending IPI

Message ID 20221129154730.2274278-2-jannh@google.com (mailing list archive)
State New
Series [v5,1/3] mm/khugepaged: Take the right locks for page table retraction

Commit Message

Jann Horn Nov. 29, 2022, 3:47 p.m. UTC
The khugepaged paths that remove page tables have to be careful to
synchronize against the lockless_pages_from_mm() path, which traverses
page tables while only being protected by disabled IRQs.
lockless_pages_from_mm() must not:

 1. interpret the contents of freed memory as page tables (and once a
    page table has been deposited, it can be freed)
 2. interpret the contents of deposited page tables as PTEs, since some
    architectures will store non-PTE data inside deposited page tables
    (see radix__pgtable_trans_huge_deposit())
 3. create new page references from PTEs after the containing page
    table has been detached and:
    3a. __collapse_huge_page_isolate() has checked the page refcount
    3b. the page table has been reused at another virtual address and
        populated with new PTEs

("new page references" here refer to stable references returned to the
caller; speculative references that are dropped on an error path are
fine)
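
(To make the constraint concrete, here is a simplified sketch of the
lockless walk; the real code lives in gup_pmd_range()/gup_pte_range()
in mm/gup.c and differs in detail:

	local_irq_save(flags);
	pmd = READ_ONCE(*pmdp);	/* table may be detached right after this */
	if (pmd_present(pmd) && !pmd_trans_huge(pmd)) {
		pte_t *ptep = pte_offset_map(&pmd, addr);
		pte_t pte = ptep_get_lockless(ptep);	/* may hit a reused table */
		/* ... try_grab_folio() on the page behind pte ... */
	}
	local_irq_restore(flags);

Nothing but the disabled IRQs pins the page table between the pmd read
and the pte read, which is why the rules above matter.)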

commit 70cbc3cc78a99 ("mm: gup: fix the fast GUP race against THP
collapse") addressed issue 3 by making the lockless_pages_from_mm()
fastpath recheck the pmd_t to ensure that the page table was not
removed by khugepaged in between (under the assumption that the page
table is not repeatedly moving back and forth between two addresses,
with one PTE repeatedly being populated with the same value).
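
(For reference, the recheck that commit added to gup_pte_range() looks
roughly like this; treat it as a paraphrase rather than the literal
hunk:

	if (unlikely(pmd_val(pmd) != pmd_val(*pmdp))) {
		gup_put_folio(folio, 1, flags);
		goto pte_unmap;
	}

i.e. the stable reference is only kept if the pmd entry still points at
the same page table after the folio reference has been taken.)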

But to address issues 1 and 2, we need to send IPIs before
freeing/reusing page tables. By doing that, issue 3 is also
automatically addressed, so the fix from commit 70cbc3cc78a99 ("mm: gup:
fix the fast GUP race against THP collapse") becomes redundant.

We can ensure that the necessary IPI is sent by calling
tlb_remove_table_sync_one() because, as noted in mm/gup.c, under
configurations that define CONFIG_HAVE_FAST_GUP, there are two possible
cases:

 1. CONFIG_MMU_GATHER_RCU_TABLE_FREE is set, causing
    tlb_remove_table_sync_one() to send an IPI to synchronize with
    lockless_pages_from_mm().
 2. CONFIG_MMU_GATHER_RCU_TABLE_FREE is unset, indicating that all
    TLB flushes are already guaranteed to send IPIs.
    tlb_remove_table_sync_one() will do nothing, but we've already
    run pmdp_collapse_flush(), which did a TLB flush, which must have
    involved IPIs.
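
(Concretely, case 1 comes down to the following in mm/mmu_gather.c,
shown here with the comment paraphrased:

	static void tlb_remove_table_smp_sync(void *arg)
	{
		/* Simply deliver the interrupt */
	}

	void tlb_remove_table_sync_one(void)
	{
		/*
		 * Not an RCU grace period: it only waits until every CPU
		 * has taken the empty IPI, which suffices for software
		 * page-table walkers that rely on IRQ disabling, such as
		 * lockless_pages_from_mm().
		 */
		smp_call_function(tlb_remove_table_smp_sync, NULL, 1);
	}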

Cc: stable@kernel.org
Fixes: ba76149f47d8 ("thp: khugepaged")
Reviewed-by: Yang Shi <shy828301@gmail.com>
Acked-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Jann Horn <jannh@google.com>
---

Notes:
    v4:
     - added ack from David Hildenbrand
     - made commit message more verbose
    v5:
     - added reviewed-by from Yang Shi
     - rewrote commit message based on feedback from Yang Shi

 include/asm-generic/tlb.h | 4 ++++
 mm/khugepaged.c           | 2 ++
 mm/mmu_gather.c           | 4 +---
 3 files changed, 7 insertions(+), 3 deletions(-)

Comments

Yang Shi Nov. 29, 2022, 5:28 p.m. UTC | #1
On Tue, Nov 29, 2022 at 7:47 AM Jann Horn <jannh@google.com> wrote:
> [...]

Thanks, Jann. Looks good to me.


Patch

diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index 492dce43236ea..cab7cfebf40bd 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -222,12 +222,16 @@ extern void tlb_remove_table(struct mmu_gather *tlb, void *table);
 #define tlb_needs_table_invalidate() (true)
 #endif
 
+void tlb_remove_table_sync_one(void);
+
 #else
 
 #ifdef tlb_needs_table_invalidate
 #error tlb_needs_table_invalidate() requires MMU_GATHER_RCU_TABLE_FREE
 #endif
 
+static inline void tlb_remove_table_sync_one(void) { }
+
 #endif /* CONFIG_MMU_GATHER_RCU_TABLE_FREE */
 
 
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 674b111a24fa7..c3d3ce596bff7 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1057,6 +1057,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
 	_pmd = pmdp_collapse_flush(vma, address, pmd);
 	spin_unlock(pmd_ptl);
 	mmu_notifier_invalidate_range_end(&range);
+	tlb_remove_table_sync_one();
 
 	spin_lock(pte_ptl);
 	result =  __collapse_huge_page_isolate(vma, address, pte, cc,
@@ -1415,6 +1416,7 @@ static void collapse_and_free_pmd(struct mm_struct *mm, struct vm_area_struct *v
 		lockdep_assert_held_write(&vma->anon_vma->root->rwsem);
 
 	pmd = pmdp_collapse_flush(vma, addr, pmdp);
+	tlb_remove_table_sync_one();
 	mm_dec_nr_ptes(mm);
 	page_table_check_pte_clear_range(mm, addr, pmd);
 	pte_free(mm, pmd_pgtable(pmd));
diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index add4244e5790d..3a2c3f8cad2fe 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -153,7 +153,7 @@ static void tlb_remove_table_smp_sync(void *arg)
 	/* Simply deliver the interrupt */
 }
 
-static void tlb_remove_table_sync_one(void)
+void tlb_remove_table_sync_one(void)
 {
 	/*
 	 * This isn't an RCU grace period and hence the page-tables cannot be
@@ -177,8 +177,6 @@ static void tlb_remove_table_free(struct mmu_table_batch *batch)
 
 #else /* !CONFIG_MMU_GATHER_RCU_TABLE_FREE */
 
-static void tlb_remove_table_sync_one(void) { }
-
 static void tlb_remove_table_free(struct mmu_table_batch *batch)
 {
 	__tlb_remove_table_free(batch);
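
(Net effect for callers, as a hedged sketch rather than literal kernel
code: a path that detaches a page table can now do

	pmd = pmdp_collapse_flush(vma, addr, pmdp);	/* clear PMD + flush TLB */
	tlb_remove_table_sync_one();	/* IPI, or no-op where the flush already IPIs */
	pte_free(mm, pmd_pgtable(pmd));	/* only now is the table safe to free/reuse */

unconditionally: the stub added to asm-generic/tlb.h makes the middle
call a no-op exactly on the !CONFIG_MMU_GATHER_RCU_TABLE_FREE
configurations where pmdp_collapse_flush() already sent IPIs.)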