
mm: Fix PTE_AF handling in fault path on architectures with HW AF support

Message ID 20240710000942.623704-1-rtummala@nvidia.com (mailing list archive)
State New
Series mm: Fix PTE_AF handling in fault path on architectures with HW AF support

Commit Message

Ram Tummala July 10, 2024, 12:09 a.m. UTC
Commit 3bd786f76de2 ("mm: convert do_set_pte() to set_pte_range()")
replaced do_set_pte() with set_pte_range() and that introduced a regression
in the following faulting path of non-anonymous vmas on CPUs with HW AF
support.

handle_pte_fault()
  do_pte_missing()
    do_fault()
      do_read_fault() || do_cow_fault() || do_shared_fault()
        finish_fault()
          set_pte_range()

The polarity of the prefault calculation is incorrect. This leads to prefault
being incorrectly set for the faulting address. The following check then
incorrectly clears the PTE_AF bit instead of setting it, and the access
faults again on the same address due to the missing PTE_AF bit.

    if (prefault && arch_wants_old_prefaulted_pte())
        entry = pte_mkold(entry);

On a subsequent fault on the same address, the faulting path will see a
non-NULL vmf->pte and, instead of reaching the do_pte_missing() path, PTE_AF
will be correctly set in handle_pte_fault() itself.
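
For reference, the refault is resolved by the access-flag update near the end
of handle_pte_fault(); a simplified sketch (paraphrased from mm/memory.c,
helper names as in recent kernels, not verbatim):

    /* On the refault the PTE already exists, so the generic handler only
     * marks it young (and possibly dirty) instead of going through
     * do_pte_missing() again. */
    entry = pte_mkyoung(entry);
    if (ptep_set_access_flags(vmf->vma, vmf->address, vmf->pte, entry,
                              vmf->flags & FAULT_FLAG_WRITE))
        update_mmu_cache_range(vmf, vmf->vma, vmf->address, vmf->pte, 1);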

This bug causes a performance degradation in the fault handling path because
of the unnecessary double fault.
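
To illustrate the polarity issue outside the kernel, here is a minimal
userspace sketch (the in_range() stand-in and the example addresses are
assumptions for the demo, not the kernel's actual helper):

    #include <stdbool.h>
    #include <stdio.h>

    #define PAGE_SIZE 4096UL

    /* Stand-in with the same intent as the kernel's in_range() helper. */
    static bool in_range(unsigned long val, unsigned long start,
                         unsigned long len)
    {
        return val - start < len;
    }

    int main(void)
    {
        unsigned long addr = 0x1000;       /* first address covered by this batch */
        unsigned long fault_addr = 0x3000; /* the address that actually faulted */
        unsigned long nr = 4;              /* number of PTEs installed in the batch */

        /* Buggy polarity: the batch containing the faulting address is
         * treated as prefault, so its PTEs are made old and the faulting
         * address immediately faults again. */
        bool prefault_buggy = in_range(fault_addr, addr, nr * PAGE_SIZE);

        /* Fixed polarity: a batch that covers the faulting address is not
         * a prefault batch, so its PTEs stay young. */
        bool prefault_fixed = !in_range(fault_addr, addr, nr * PAGE_SIZE);

        printf("buggy prefault=%d, fixed prefault=%d\n",
               prefault_buggy, prefault_fixed);
        return 0;
    }

Running it prints "buggy prefault=1, fixed prefault=0", matching the
double-fault scenario described above.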

Cc: stable@vger.kernel.org
Fixes: 3bd786f76de2 ("mm: convert do_set_pte() to set_pte_range()")
Signed-off-by: Ram Tummala <rtummala@nvidia.com>
---
 mm/memory.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

Comments

Matthew Wilcox July 10, 2024, 12:13 a.m. UTC | #1
On Tue, Jul 09, 2024 at 05:09:42PM -0700, Ram Tummala wrote:
> Commit 3bd786f76de2 ("mm: convert do_set_pte() to set_pte_range()")
> replaced do_set_pte() with set_pte_range() and that introduced a regression
> in the following faulting path of non-anonymous vmas on CPUs with HW AF

At no point in this do you say what "AF" stands for.
Alistair Popple July 10, 2024, 1:02 a.m. UTC | #2
Matthew Wilcox <willy@infradead.org> writes:

> On Tue, Jul 09, 2024 at 05:09:42PM -0700, Ram Tummala wrote:
>> Commit 3bd786f76de2 ("mm: convert do_set_pte() to set_pte_range()")
>> replaced do_set_pte() with set_pte_range() and that introduced a regression
>> in the following faulting path of non-anonymous vmas on CPUs with HW AF
>
> At no point in this do you say what "AF" stands for.

It stands for "Access Flag", but that is specific to ARM64. As the fix
is in generic, architecture-independent code it would be better to use
that terminology (i.e. old/young).
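
For context, a simplified sketch of how the generic old/young helpers map to
the hardware Access Flag on arm64 (paraphrased from
arch/arm64/include/asm/pgtable.h, not verbatim):

    static inline pte_t pte_mkold(pte_t pte)
    {
        return clear_pte_bit(pte, __pgprot(PTE_AF)); /* clear the HW Access Flag */
    }

    static inline pte_t pte_mkyoung(pte_t pte)
    {
        return set_pte_bit(pte, __pgprot(PTE_AF));   /* set the HW Access Flag */
    }

So pte_mkold() on a prefaulted entry is what clears PTE_AF in the scenario
described in the commit message.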
Yin Fengwei July 10, 2024, 1:08 a.m. UTC | #3
On 7/10/2024 8:09 AM, Ram Tummala wrote:
> The polarity of prefault calculation is incorrect. This leads to prefault
> being incorrectly set for the faulting address. The following if check will
> incorrectly clear the PTE_AF bit instead of setting it and the access will
> fault again on the same address due to the missing PTE_AF bit.
> 
>      if (prefault && arch_wants_old_prefaulted_pte())
>          entry = pte_mkold(entry);

I have the same confusion as Matthew about PTE_AF.

But I think this is a good catch, as the old code was:
         bool prefault = vmf->address != addr;
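
For a single page the two forms agree; a minimal kernel-context sketch of the
equivalence (fragment only, assuming page-aligned addresses):

    /* With nr == 1 the corrected check reduces to the old single-page test:
     *   !in_range(vmf->address, addr, 1 * PAGE_SIZE) == (vmf->address != addr)
     */
    bool prefault_old = vmf->address != addr;
    bool prefault_new = !in_range(vmf->address, addr, nr * PAGE_SIZE);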

Sorry for the issue introduced by me. And

Reviewed-by: Yin Fengwei <fengwei.yin@intel.com>


Regards
Yin, Fengwei
David Hildenbrand July 10, 2024, 4:07 a.m. UTC | #4
On 10.07.24 02:09, Ram Tummala wrote:
> Commit 3bd786f76de2 ("mm: convert do_set_pte() to set_pte_range()")
> replaced do_set_pte() with set_pte_range() and that introduced a regression
> in the following faulting path of non-anonymous vmas on CPUs with HW AF
> support.
> 
> handle_pte_fault()
>    do_pte_missing()
>      do_fault()
>        do_read_fault() || do_cow_fault() || do_shared_fault()
>          finish_fault()
>            set_pte_range()
> 
> The polarity of prefault calculation is incorrect. This leads to prefault
> being incorrectly set for the faulting address. The following if check will
> incorrectly clear the PTE_AF bit instead of setting it and the access will
> fault again on the same address due to the missing PTE_AF bit.
> 
>      if (prefault && arch_wants_old_prefaulted_pte())
>          entry = pte_mkold(entry);
> 
> On a subsequent fault on the same address, the faulting path will see a non
> NULL vmf->pte and instead of reaching the do_pte_missing() path, PTE_AF
> will be correctly set in handle_pte_fault() itself.
> 
> Due to this bug, performance degradation in the fault handling path will be
> observed due to unnecessary double faulting.
> 
> Cc: stable@vger.kernel.org
> Fixes: 3bd786f76de2 ("mm: convert do_set_pte() to set_pte_range()")
> Signed-off-by: Ram Tummala <rtummala@nvidia.com>
> ---

Acked-by: David Hildenbrand <david@redhat.com>

Patch

diff --git a/mm/memory.c b/mm/memory.c
index 0a769f34bbb2..03263034a040 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4781,7 +4781,7 @@  void set_pte_range(struct vm_fault *vmf, struct folio *folio,
 {
 	struct vm_area_struct *vma = vmf->vma;
 	bool write = vmf->flags & FAULT_FLAG_WRITE;
-	bool prefault = in_range(vmf->address, addr, nr * PAGE_SIZE);
+	bool prefault = !in_range(vmf->address, addr, nr * PAGE_SIZE);
 	pte_t entry;
 
 	flush_icache_pages(vma, page, nr);