
[06/26] mm/csky: Use general page fault accounting

Message ID 20200619160538.8641-7-peterx@redhat.com (mailing list archive)
State New, archived
Series mm: Page fault accounting cleanups

Commit Message

Peter Xu June 19, 2020, 4:05 p.m. UTC
Use the general page fault accounting by passing regs into handle_mm_fault().
It naturally solves the issue of multiple page fault accounting when a page
fault retry happens.

CC: Guo Ren <guoren@kernel.org>
CC: linux-csky@vger.kernel.org
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 arch/csky/mm/fault.c | 12 +-----------
 1 file changed, 1 insertion(+), 11 deletions(-)
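
For context, the "general page fault accounting" referred to in the commit
message is the common helper that patch 1 of this series adds to mm/memory.c,
called from handle_mm_fault() itself.  An abridged sketch (see patch 1 for
the authoritative version):

static inline void mm_account_fault(struct pt_regs *regs, unsigned long address,
				    unsigned int flags, vm_fault_t ret)
{
	bool major;

	/* Failed and retried attempts are not counted; only the final,
	 * completed attempt is. */
	if (ret & (VM_FAULT_RETRY | VM_FAULT_ERROR))
		return;

	/* A fault that needed at least one retry counts as major. */
	major = (ret & VM_FAULT_MAJOR) || (flags & FAULT_FLAG_TRIED);

	if (major)
		current->maj_flt++;
	else
		current->min_flt++;

	/* Perf events are only emitted when the caller passes regs. */
	if (!regs)
		return;

	if (major)
		perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1, regs, address);
	else
		perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1, regs, address);
}

Because retried attempts bail out early, a fault is accounted exactly once no
matter how many times an architecture's handler loops, which is what makes
the per-arch accounting removed below redundant.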

Comments

Guo Ren June 20, 2020, 1:44 a.m. UTC | #1
On Sat, Jun 20, 2020 at 12:05 AM Peter Xu <peterx@redhat.com> wrote:
>
> Use the general page fault accounting by passing regs into handle_mm_fault().
> It naturally solves the issue of multiple page fault accounting when a page
> fault retry happens.
>
> CC: Guo Ren <guoren@kernel.org>
> CC: linux-csky@vger.kernel.org
> Signed-off-by: Peter Xu <peterx@redhat.com>
> ---
>  arch/csky/mm/fault.c | 12 +-----------
>  1 file changed, 1 insertion(+), 11 deletions(-)
>
> diff --git a/arch/csky/mm/fault.c b/arch/csky/mm/fault.c
> index b14f97d3cb15..a3e0aa3ebb79 100644
> --- a/arch/csky/mm/fault.c
> +++ b/arch/csky/mm/fault.c
> @@ -151,7 +151,7 @@ asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long write,
>          * the fault.
>          */
>         fault = handle_mm_fault(vma, address, write ? FAULT_FLAG_WRITE : 0,
> -                               NULL);
> +                               regs);
What's your kernel version? (does the 4th arg exist?)
        /*
         * If for any reason at all we couldn't handle the fault,
         * make sure we exit gracefully rather than endlessly redo
         * the fault.
         */
        fault = handle_mm_fault(vma, address, write ? FAULT_FLAG_WRITE : 0);
        if (unlikely(fault & VM_FAULT_ERROR)) {
Peter Xu June 20, 2020, 4:08 p.m. UTC | #2
On Sat, Jun 20, 2020 at 09:44:31AM +0800, Guo Ren wrote:
> On Sat, Jun 20, 2020 at 12:05 AM Peter Xu <peterx@redhat.com> wrote:
> > [...]
> >         fault = handle_mm_fault(vma, address, write ? FAULT_FLAG_WRITE : 0,
> > -                               NULL);
> > +                               regs);
> What's your kernel version? (does the 4th arg exist?)
>         /*
>          * If for any reason at all we couldn't handle the fault,
>          * make sure we exit gracefully rather than endlessly redo
>          * the fault.
>          */
>         fault = handle_mm_fault(vma, address, write ? FAULT_FLAG_WRITE : 0);
>         if (unlikely(fault & VM_FAULT_ERROR)) {

Hi, Guo,

Sorry to be unclear.  This patch is based on patch 1 in the same series:

https://lore.kernel.org/lkml/20200619160538.8641-2-peterx@redhat.com/

Thanks,
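
For reference, patch 1 changes the handle_mm_fault() prototype roughly as
follows (a sketch; the linked patch is authoritative):

/* Before the series (three arguments): */
vm_fault_t handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
			   unsigned int flags);

/* After patch 1 (regs added so accounting can happen in one place;
 * callers that must skip perf accounting can pass NULL): */
vm_fault_t handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
			   unsigned int flags, struct pt_regs *regs);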

Patch

diff --git a/arch/csky/mm/fault.c b/arch/csky/mm/fault.c
index b14f97d3cb15..a3e0aa3ebb79 100644
--- a/arch/csky/mm/fault.c
+++ b/arch/csky/mm/fault.c
@@ -151,7 +151,7 @@ asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long write,
 	 * the fault.
 	 */
 	fault = handle_mm_fault(vma, address, write ? FAULT_FLAG_WRITE : 0,
-				NULL);
+				regs);
 	if (unlikely(fault & VM_FAULT_ERROR)) {
 		if (fault & VM_FAULT_OOM)
 			goto out_of_memory;
@@ -161,16 +161,6 @@ asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long write,
 			goto bad_area;
 		BUG();
 	}
-	if (fault & VM_FAULT_MAJOR) {
-		tsk->maj_flt++;
-		perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1, regs,
-			      address);
-	} else {
-		tsk->min_flt++;
-		perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1, regs,
-			      address);
-	}
-
 	up_read(&mm->mmap_sem);
 	return;
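
The hunk removed above is the per-architecture accounting this series
centralizes.  To illustrate the double-accounting problem the commit message
mentions: csky's handler shown here does not set FAULT_FLAG_ALLOW_RETRY, but
on architectures that do, a hypothetical (illustration-only) handler with
accounting placed before the retry check counts one fault once per attempt:

retry:
	fault = handle_mm_fault(vma, address, flags, NULL);

	/* Buggy pattern: a fault that retries N times is counted N+1
	 * times, because this runs on every attempt. */
	if (fault & VM_FAULT_MAJOR)
		tsk->maj_flt++;
	else
		tsk->min_flt++;

	if (fault & VM_FAULT_RETRY) {
		flags |= FAULT_FLAG_TRIED;
		goto retry;
	}

Passing regs and letting handle_mm_fault() account only the final attempt
removes that class of bug from every architecture at once.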