arm64: mm: pass original fault address to handle_mm_fault() in PER_VMA_LOCK block

Message ID 20230524131238.2791-1-jszhang@kernel.org (mailing list archive)
State New, archived
Series arm64: mm: pass original fault address to handle_mm_fault() in PER_VMA_LOCK block

Commit Message

Jisheng Zhang May 24, 2023, 1:12 p.m. UTC
When reading arm64's PER_VMA_LOCK support code, I noticed a
difference between arm64 and the other architectures in how
handle_mm_fault() is called during VMA lock-based page fault handling:
the fault address is masked before being passed to handle_mm_fault().
This also differs from the mmap_lock-based handling. I think we need
to pass the original fault address to handle_mm_fault(), as we did in
commit 84c5e23edecd ("arm64: mm: Pass original fault address to
handle_mm_fault()").

Following the code path further, the "masked" fault address causes a
mismatch between the address reported by the perf sw major/minor page
fault events and the one reported by the perf page fault event:

do_page_fault
  perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, ..., addr)   // orig addr
  handle_mm_fault
    mm_account_fault
      perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, ...) // masked addr

Fixes: cd7f176aea5f ("arm64/mm: try VMA lock-based page fault handling first")
Signed-off-by: Jisheng Zhang <jszhang@kernel.org>
---
 arch/arm64/mm/fault.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)
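
For illustration, a minimal userspace sketch of what the masking does,
assuming 4 KiB pages (PAGE_SHIFT = 12); the address value below is
hypothetical:

#include <stdio.h>

#define PAGE_SHIFT	12
#define PAGE_MASK	(~((1UL << PAGE_SHIFT) - 1))

int main(void)
{
	/* hypothetical fault address; only the low 12 bits differ */
	unsigned long addr = 0xffff00c0deadbeefUL;

	printf("orig   addr: 0x%lx\n", addr);			/* ...beef */
	printf("masked addr: 0x%lx\n", addr & PAGE_MASK);	/* ...b000 */
	return 0;
}

Masking clears the page offset, so the two perf events above end up
attributing the same fault to two different addresses within one page.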

Comments

Jisheng Zhang May 24, 2023, 1:26 p.m. UTC | #1
On Wed, May 24, 2023 at 09:12:38PM +0800, Jisheng Zhang wrote:
> When reading arm64's PER_VMA_LOCK support code, I noticed a
> difference between arm64 and the other architectures in how
> handle_mm_fault() is called during VMA lock-based page fault handling:
> the fault address is masked before being passed to handle_mm_fault().
> This also differs from the mmap_lock-based handling. I think we need
> to pass the original fault address to handle_mm_fault(), as we did in
> commit 84c5e23edecd ("arm64: mm: Pass original fault address to
> handle_mm_fault()").
> 
> Following the code path further, the "masked" fault address causes a
> mismatch between the address reported by the perf sw major/minor page
> fault events and the one reported by the perf page fault event:

Oops, sorry, please ignore this one. I pressed Ctrl-C to interrupt
git send-email, but it was sent out anyway ;)

Instead, let's focus on
https://lore.kernel.org/linux-arm-kernel/20230524131305.2808-1-jszhang@kernel.org/T/#u

The two patches are the same; I just added Suren to the CC list.

> 
> do_page_fault
>   perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, ..., addr)   // orig addr
>   handle_mm_fault
>     mm_account_fault
>       perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, ...) // masked addr
> 
> Fixes: cd7f176aea5f ("arm64/mm: try VMA lock-based page fault handling first")
> Signed-off-by: Jisheng Zhang <jszhang@kernel.org>
> ---
>  arch/arm64/mm/fault.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
> index cb21ccd7940d..6045a5117ac1 100644
> --- a/arch/arm64/mm/fault.c
> +++ b/arch/arm64/mm/fault.c
> @@ -600,8 +600,7 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
>  		vma_end_read(vma);
>  		goto lock_mmap;
>  	}
> -	fault = handle_mm_fault(vma, addr & PAGE_MASK,
> -				mm_flags | FAULT_FLAG_VMA_LOCK, regs);
> +	fault = handle_mm_fault(vma, addr, mm_flags | FAULT_FLAG_VMA_LOCK, regs);
>  	vma_end_read(vma);
>  
>  	if (!(fault & VM_FAULT_RETRY)) {
> -- 
> 2.40.1
> 
Suren Baghdasaryan May 24, 2023, 2:39 p.m. UTC | #2
On Wed, May 24, 2023 at 6:38 AM Jisheng Zhang <jszhang@kernel.org> wrote:
>
> On Wed, May 24, 2023 at 09:12:38PM +0800, Jisheng Zhang wrote:
> > When reading arm64's PER_VMA_LOCK support code, I noticed a
> > difference between arm64 and the other architectures in how
> > handle_mm_fault() is called during VMA lock-based page fault handling:
> > the fault address is masked before being passed to handle_mm_fault().
> > This also differs from the mmap_lock-based handling. I think we need
> > to pass the original fault address to handle_mm_fault(), as we did in
> > commit 84c5e23edecd ("arm64: mm: Pass original fault address to
> > handle_mm_fault()").

Thanks for noticing. I'm not sure how this masking leaked into my
patch; I don't think I wrote it before 84c5e23edecd was merged in
June 2021. Anyway, your assessment looks correct to me.

> >
> > Following the code path further, the "masked" fault address causes a
> > mismatch between the address reported by the perf sw major/minor page
> > fault events and the one reported by the perf page fault event:
>
> Oops, sorry, please ignore this one. I pressed Ctrl-C to interrupt
> git send-email, but it was sent out anyway ;)
>
> Instead, let's focus on
> https://lore.kernel.org/linux-arm-kernel/20230524131305.2808-1-jszhang@kernel.org/T/#u
>
> The two patches are the same; I just added Suren to the CC list.
>
> >
> > do_page_fault
> >   perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, ..., addr)   // orig addr
> >   handle_mm_fault
> >     mm_account_fault
> >       perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, ...) // masked addr
> >
> > Fixes: cd7f176aea5f ("arm64/mm: try VMA lock-based page fault handling first")
> > Signed-off-by: Jisheng Zhang <jszhang@kernel.org>

Reviewed-by: Suren Baghdasaryan <surenb@google.com>

> > ---
> >  arch/arm64/mm/fault.c | 3 +--
> >  1 file changed, 1 insertion(+), 2 deletions(-)
> >
> > diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
> > index cb21ccd7940d..6045a5117ac1 100644
> > --- a/arch/arm64/mm/fault.c
> > +++ b/arch/arm64/mm/fault.c
> > @@ -600,8 +600,7 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
> >               vma_end_read(vma);
> >               goto lock_mmap;
> >       }
> > -     fault = handle_mm_fault(vma, addr & PAGE_MASK,
> > -                             mm_flags | FAULT_FLAG_VMA_LOCK, regs);
> > +     fault = handle_mm_fault(vma, addr, mm_flags | FAULT_FLAG_VMA_LOCK, regs);
> >       vma_end_read(vma);
> >
> >       if (!(fault & VM_FAULT_RETRY)) {
> > --
> > 2.40.1
> >
Anshuman Khandual May 25, 2023, 7 a.m. UTC | #3
On 5/24/23 18:42, Jisheng Zhang wrote:
> When reading arm64's PER_VMA_LOCK support code, I noticed a
> difference between arm64 and the other architectures in how
> handle_mm_fault() is called during VMA lock-based page fault handling:
> the fault address is masked before being passed to handle_mm_fault().
> This also differs from the mmap_lock-based handling. I think we need
> to pass the original fault address to handle_mm_fault(), as we did in
> commit 84c5e23edecd ("arm64: mm: Pass original fault address to
> handle_mm_fault()").
> 
> Following the code path further, the "masked" fault address causes a
> mismatch between the address reported by the perf sw major/minor page
> fault events and the one reported by the perf page fault event:
> 
> do_page_fault
>   perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, ..., addr)   // orig addr
>   handle_mm_fault
>     mm_account_fault
>       perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, ...) // masked addr
> 
> Fixes: cd7f176aea5f ("arm64/mm: try VMA lock-based page fault handling first")
> Signed-off-by: Jisheng Zhang <jszhang@kernel.org>

LGTM

Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>

> ---
>  arch/arm64/mm/fault.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
> index cb21ccd7940d..6045a5117ac1 100644
> --- a/arch/arm64/mm/fault.c
> +++ b/arch/arm64/mm/fault.c
> @@ -600,8 +600,7 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
>  		vma_end_read(vma);
>  		goto lock_mmap;
>  	}
> -	fault = handle_mm_fault(vma, addr & PAGE_MASK,
> -				mm_flags | FAULT_FLAG_VMA_LOCK, regs);
> +	fault = handle_mm_fault(vma, addr, mm_flags | FAULT_FLAG_VMA_LOCK, regs);
>  	vma_end_read(vma);
>  
>  	if (!(fault & VM_FAULT_RETRY)) {

Patch

diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index cb21ccd7940d..6045a5117ac1 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -600,8 +600,7 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
 		vma_end_read(vma);
 		goto lock_mmap;
 	}
-	fault = handle_mm_fault(vma, addr & PAGE_MASK,
-				mm_flags | FAULT_FLAG_VMA_LOCK, regs);
+	fault = handle_mm_fault(vma, addr, mm_flags | FAULT_FLAG_VMA_LOCK, regs);
 	vma_end_read(vma);
 
 	if (!(fault & VM_FAULT_RETRY)) {
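
To see the mismatch end to end, here is a toy userspace model of the
call chain from the commit message. record_event() and account_fault()
are stand-ins for perf_sw_event() and mm_account_fault(), not kernel
APIs, and the fault address is made up:

#include <stdio.h>

#define PAGE_SHIFT	12
#define PAGE_MASK	(~((1UL << PAGE_SHIFT) - 1))

/* stand-in for perf_sw_event(): print the address the event records */
static void record_event(const char *event, unsigned long addr)
{
	printf("%-29s addr=0x%lx\n", event, addr);
}

/* stand-in for mm_account_fault(): reports the address it was handed */
static void account_fault(unsigned long addr)
{
	record_event("PERF_COUNT_SW_PAGE_FAULTS_MAJ", addr);
}

int main(void)
{
	unsigned long addr = 0xffff00c0deadbeefUL;	/* made-up fault address */

	/* do_page_fault() reports the original address... */
	record_event("PERF_COUNT_SW_PAGE_FAULTS", addr);

	/* ...but the pre-patch VMA-lock path masked it before accounting */
	account_fault(addr & PAGE_MASK);		/* 0x...b000: mismatch */

	/* with this patch applied, the addresses agree again */
	account_fault(addr);				/* 0x...beef: match */
	return 0;
}

The events fire either way; the patch only changes which address the
major/minor accounting attributes the fault to.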