
arm64: mm: pass original fault address to handle_mm_fault() in PER_VMA_LOCK block

Message ID 20230524131305.2808-1-jszhang@kernel.org (mailing list archive)
State New, archived
Series arm64: mm: pass original fault address to handle_mm_fault() in PER_VMA_LOCK block

Commit Message

Jisheng Zhang May 24, 2023, 1:13 p.m. UTC
When reading the arm64 PER_VMA_LOCK support code, I found a difference
between arm64 and other architectures in how handle_mm_fault() is called
during VMA lock-based page fault handling: the fault address is masked
before being passed to handle_mm_fault(). This also differs from the
mmap_lock-based handling. I think we need to pass the original fault
address to handle_mm_fault(), as was done in
commit 84c5e23edecd ("arm64: mm: Pass original fault address to
handle_mm_fault()").

Following the code path further, the "masked" fault address causes a
mismatch between the fault address reported by the perf sw major/minor
page fault events and the one reported by the perf sw page fault event:

do_page_fault
  perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, ..., addr)   // orig addr
  handle_mm_fault
    mm_account_fault
      perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, ...) // masked addr

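As an illustration only (not part of the patch), here is a minimal
userspace sketch of the masking in question, assuming 4 KiB pages; the
fault address value is hypothetical:

#include <stdio.h>

#define PAGE_SHIFT	12			/* assumption: 4 KiB pages */
#define PAGE_SIZE	(1UL << PAGE_SHIFT)
#define PAGE_MASK	(~(PAGE_SIZE - 1))

int main(void)
{
	/* Hypothetical faulting virtual address, for illustration only. */
	unsigned long addr = 0xffff80001234abcdUL;

	/*
	 * Before this patch, the PER_VMA_LOCK path handed the masked value
	 * to handle_mm_fault(), so the MAJ/MIN perf events carried the
	 * page-aligned address while PERF_COUNT_SW_PAGE_FAULTS carried the
	 * original one.
	 */
	printf("original: %#lx\n", addr);		/* 0xffff80001234abcd */
	printf("masked:   %#lx\n", addr & PAGE_MASK);	/* 0xffff80001234a000 */

	return 0;
}
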
Fixes: cd7f176aea5f ("arm64/mm: try VMA lock-based page fault handling first")
Signed-off-by: Jisheng Zhang <jszhang@kernel.org>
---
 arch/arm64/mm/fault.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

Comments

Catalin Marinas May 25, 2023, 5:03 p.m. UTC | #1
On Wed, May 24, 2023 at 09:13:05PM +0800, Jisheng Zhang wrote:
> When reading the arm64 PER_VMA_LOCK support code, I found a difference
> between arm64 and other architectures in how handle_mm_fault() is called
> during VMA lock-based page fault handling: the fault address is masked
> before being passed to handle_mm_fault(). This also differs from the
> mmap_lock-based handling. I think we need to pass the original fault
> address to handle_mm_fault(), as was done in commit 84c5e23edecd
> ("arm64: mm: Pass original fault address to handle_mm_fault()").
> 
> Following the code path further, the "masked" fault address causes a
> mismatch between the fault address reported by the perf sw major/minor
> page fault events and the one reported by the perf sw page fault event:
> 
> do_page_fault
>   perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, ..., addr)   // orig addr
>   handle_mm_fault
>     mm_account_fault
>       perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, ...) // masked addr
> 
> Fixes: cd7f176aea5f ("arm64/mm: try VMA lock-based page fault handling first")
> Signed-off-by: Jisheng Zhang <jszhang@kernel.org>

Acked-by: Catalin Marinas <catalin.marinas@arm.com>

Will Deacon June 2, 2023, 12:33 p.m. UTC | #2
On Wed, 24 May 2023 21:13:05 +0800, Jisheng Zhang wrote:
> When reading the arm64 PER_VMA_LOCK support code, I found a difference
> between arm64 and other architectures in how handle_mm_fault() is called
> during VMA lock-based page fault handling: the fault address is masked
> before being passed to handle_mm_fault(). This also differs from the
> mmap_lock-based handling. I think we need to pass the original fault
> address to handle_mm_fault(), as was done in commit 84c5e23edecd
> ("arm64: mm: Pass original fault address to handle_mm_fault()").
> 
> [...]

Applied to arm64 (for-next/fixes), thanks!

[1/1] arm64: mm: pass original fault address to handle_mm_fault() in PER_VMA_LOCK block
      https://git.kernel.org/arm64/c/0e2aba694866

Cheers,

Patch

diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index cb21ccd7940d..6045a5117ac1 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -600,8 +600,7 @@  static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
 		vma_end_read(vma);
 		goto lock_mmap;
 	}
-	fault = handle_mm_fault(vma, addr & PAGE_MASK,
-				mm_flags | FAULT_FLAG_VMA_LOCK, regs);
+	fault = handle_mm_fault(vma, addr, mm_flags | FAULT_FLAG_VMA_LOCK, regs);
 	vma_end_read(vma);
 
 	if (!(fault & VM_FAULT_RETRY)) {