From patchwork Fri Jun 19 16:05:13 2020
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 11614495
From: Peter Xu
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Gerald Schaefer, peterx@redhat.com, Andrew Morton, Andrea Arcangeli,
    Will Deacon, Michael Ellerman, Linus Torvalds
Subject: [PATCH 01/26] mm: Do page fault accounting in handle_mm_fault
Date: Fri, 19 Jun 2020 12:05:13 -0400
Message-Id: <20200619160538.8641-2-peterx@redhat.com>
In-Reply-To: <20200619160538.8641-1-peterx@redhat.com>
References: <20200619160538.8641-1-peterx@redhat.com>

This is a preparation patch to move page fault accounting into the general
code in handle_mm_fault().  This includes both the per-task flt_maj/flt_min
counters and the major/minor page fault perf events.  To do this, the
pt_regs pointer is passed into handle_mm_fault().

PERF_COUNT_SW_PAGE_FAULTS should still be kept in per-arch page fault
handlers.

So far, all the pt_regs pointers passed into handle_mm_fault() are NULL,
which means this patch should have no intended functional change.
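For orientation, a minimal sketch (not part of the patch; the helper names
are hypothetical) of the two calling conventions the new argument creates,
assuming the handle_mm_fault() signature introduced below: an arch fault
handler passes its pt_regs so the common code does the accounting, while
remote/GUP-style callers pass NULL and skip it.

    /* Illustrative sketch only -- simplified callers, not taken from the patch. */

    /* An arch page fault handler: regs describes the faulting context, so
     * handle_mm_fault() bumps current->maj_flt/min_flt and emits the
     * PERF_COUNT_SW_PAGE_FAULTS_[MAJ|MIN] events once the fault completes. */
    static vm_fault_t arch_style_fault(struct vm_area_struct *vma,
                                       unsigned long address,
                                       unsigned int flags,
                                       struct pt_regs *regs)
    {
            return handle_mm_fault(vma, address, flags, regs);
    }

    /* A GUP-style caller: no faulting user context to charge, so NULL is
     * passed and the common accounting is skipped entirely. */
    static vm_fault_t gup_style_fault(struct vm_area_struct *vma,
                                      unsigned long address,
                                      unsigned int flags)
    {
            return handle_mm_fault(vma, address, flags, NULL);
    }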
Suggested-by: Linus Torvalds
Signed-off-by: Peter Xu
---
 arch/alpha/mm/fault.c         |  2 +-
 arch/arc/mm/fault.c           |  2 +-
 arch/arm/mm/fault.c           |  2 +-
 arch/arm64/mm/fault.c         |  2 +-
 arch/csky/mm/fault.c          |  3 +-
 arch/hexagon/mm/vm_fault.c    |  2 +-
 arch/ia64/mm/fault.c          |  2 +-
 arch/m68k/mm/fault.c          |  2 +-
 arch/microblaze/mm/fault.c    |  2 +-
 arch/mips/mm/fault.c          |  2 +-
 arch/nds32/mm/fault.c         |  2 +-
 arch/nios2/mm/fault.c         |  2 +-
 arch/openrisc/mm/fault.c      |  2 +-
 arch/parisc/mm/fault.c        |  2 +-
 arch/powerpc/mm/copro_fault.c |  2 +-
 arch/powerpc/mm/fault.c       |  2 +-
 arch/riscv/mm/fault.c         |  2 +-
 arch/s390/mm/fault.c          |  2 +-
 arch/sh/mm/fault.c            |  2 +-
 arch/sparc/mm/fault_32.c      |  4 +--
 arch/sparc/mm/fault_64.c      |  2 +-
 arch/um/kernel/trap.c         |  2 +-
 arch/unicore32/mm/fault.c     |  2 +-
 arch/x86/mm/fault.c           |  2 +-
 arch/xtensa/mm/fault.c        |  2 +-
 drivers/iommu/amd_iommu_v2.c  |  2 +-
 drivers/iommu/intel-svm.c     |  2 +-
 include/linux/mm.h            |  7 ++--
 mm/gup.c                      |  4 +--
 mm/hmm.c                      |  3 +-
 mm/ksm.c                      |  3 +-
 mm/memory.c                   | 66 ++++++++++++++++++++++++++++++++++-
 32 files changed, 105 insertions(+), 35 deletions(-)

diff --git a/arch/alpha/mm/fault.c b/arch/alpha/mm/fault.c
index c2d7b6d7bac7..82e72f24486e 100644
--- a/arch/alpha/mm/fault.c
+++ b/arch/alpha/mm/fault.c
@@ -148,7 +148,7 @@ do_page_fault(unsigned long address, unsigned long mmcsr,
 	/* If for any reason at all we couldn't handle the fault,
 	   make sure we exit gracefully rather than endlessly redo
 	   the fault.  */
-	fault = handle_mm_fault(vma, address, flags);
+	fault = handle_mm_fault(vma, address, flags, NULL);
 	if (fault_signal_pending(fault, regs))
 		return;

diff --git a/arch/arc/mm/fault.c b/arch/arc/mm/fault.c
index 92b339c7adba..34380139e7a2 100644
--- a/arch/arc/mm/fault.c
+++ b/arch/arc/mm/fault.c
@@ -131,7 +131,7 @@ void do_page_fault(unsigned long address, struct pt_regs *regs)
 		goto bad_area;
 	}
-	fault = handle_mm_fault(vma, address, flags);
+	fault = handle_mm_fault(vma, address, flags, NULL);
 	/* Quick path to respond to signals */
 	if (fault_signal_pending(fault, regs)) {

diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c
index 2dd5c41cbb8d..0d6be0f4f27c 100644
--- a/arch/arm/mm/fault.c
+++ b/arch/arm/mm/fault.c
@@ -223,7 +223,7 @@ __do_page_fault(struct mm_struct *mm, unsigned long addr, unsigned int fsr,
 		goto out;
 	}
-	return handle_mm_fault(vma, addr & PAGE_MASK, flags);
+	return handle_mm_fault(vma, addr & PAGE_MASK, flags, NULL);
 check_stack:
 	/* Don't allow expansion below FIRST_USER_ADDRESS */

diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index c9cedc0432d2..5f6607b951b8 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -422,7 +422,7 @@ static vm_fault_t __do_page_fault(struct mm_struct *mm, unsigned long addr,
 	 */
 	if (!(vma->vm_flags & vm_flags))
 		return VM_FAULT_BADACCESS;
-	return handle_mm_fault(vma, addr & PAGE_MASK, mm_flags);
+	return handle_mm_fault(vma, addr & PAGE_MASK, mm_flags, NULL);
 }
 static bool is_el0_instruction_abort(unsigned int esr)

diff --git a/arch/csky/mm/fault.c b/arch/csky/mm/fault.c
index 4e6dc68f3258..b14f97d3cb15 100644
--- a/arch/csky/mm/fault.c
+++ b/arch/csky/mm/fault.c
@@ -150,7 +150,8 @@ asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long write,
 	 * make sure we exit gracefully rather than endlessly redo
 	 * the fault.
 	 */
-	fault = handle_mm_fault(vma, address, write ? FAULT_FLAG_WRITE : 0);
+	fault = handle_mm_fault(vma, address, write ? FAULT_FLAG_WRITE : 0,
+				NULL);
 	if (unlikely(fault & VM_FAULT_ERROR)) {
 		if (fault & VM_FAULT_OOM)
 			goto out_of_memory;

diff --git a/arch/hexagon/mm/vm_fault.c b/arch/hexagon/mm/vm_fault.c
index 72334b26317a..f04cd0a6d905 100644
--- a/arch/hexagon/mm/vm_fault.c
+++ b/arch/hexagon/mm/vm_fault.c
@@ -89,7 +89,7 @@ void do_page_fault(unsigned long address, long cause, struct pt_regs *regs)
 		break;
 	}
-	fault = handle_mm_fault(vma, address, flags);
+	fault = handle_mm_fault(vma, address, flags, NULL);
 	if (fault_signal_pending(fault, regs))
 		return;

diff --git a/arch/ia64/mm/fault.c b/arch/ia64/mm/fault.c
index 30d0c1fca99e..caa93e083c9d 100644
--- a/arch/ia64/mm/fault.c
+++ b/arch/ia64/mm/fault.c
@@ -139,7 +139,7 @@ ia64_do_page_fault (unsigned long address, unsigned long isr, struct pt_regs *re
 	 * sure we exit gracefully rather than endlessly redo the
 	 * fault.
 	 */
-	fault = handle_mm_fault(vma, address, flags);
+	fault = handle_mm_fault(vma, address, flags, NULL);
 	if (fault_signal_pending(fault, regs))
 		return;

diff --git a/arch/m68k/mm/fault.c b/arch/m68k/mm/fault.c
index 3bfb5c8ac3c7..2db38dfbc00c 100644
--- a/arch/m68k/mm/fault.c
+++ b/arch/m68k/mm/fault.c
@@ -135,7 +135,7 @@ int do_page_fault(struct pt_regs *regs, unsigned long address,
 	 * the fault.
 	 */
-	fault = handle_mm_fault(vma, address, flags);
+	fault = handle_mm_fault(vma, address, flags, NULL);
 	pr_debug("handle_mm_fault returns %x\n", fault);
 	if (fault_signal_pending(fault, regs))

diff --git a/arch/microblaze/mm/fault.c b/arch/microblaze/mm/fault.c
index 3248141f8ed5..9abfa5224386 100644
--- a/arch/microblaze/mm/fault.c
+++ b/arch/microblaze/mm/fault.c
@@ -215,7 +215,7 @@ void do_page_fault(struct pt_regs *regs, unsigned long address,
 	 * make sure we exit gracefully rather than endlessly redo
 	 * the fault.
 	 */
-	fault = handle_mm_fault(vma, address, flags);
+	fault = handle_mm_fault(vma, address, flags, NULL);
 	if (fault_signal_pending(fault, regs))
 		return;

diff --git a/arch/mips/mm/fault.c b/arch/mips/mm/fault.c
index f8d62cd83b36..31c2afb8f8a5 100644
--- a/arch/mips/mm/fault.c
+++ b/arch/mips/mm/fault.c
@@ -152,7 +152,7 @@ static void __kprobes __do_page_fault(struct pt_regs *regs, unsigned long write,
 	 * make sure we exit gracefully rather than endlessly redo
 	 * the fault.
 	 */
-	fault = handle_mm_fault(vma, address, flags);
+	fault = handle_mm_fault(vma, address, flags, NULL);
 	if (fault_signal_pending(fault, regs))
 		return;

diff --git a/arch/nds32/mm/fault.c b/arch/nds32/mm/fault.c
index f331e533edc2..22527129025c 100644
--- a/arch/nds32/mm/fault.c
+++ b/arch/nds32/mm/fault.c
@@ -207,7 +207,7 @@ void do_page_fault(unsigned long entry, unsigned long addr,
 	 * the fault.
 	 */
-	fault = handle_mm_fault(vma, addr, flags);
+	fault = handle_mm_fault(vma, addr, flags, NULL);
 	/*
 	 * If we need to retry but a fatal signal is pending, handle the

diff --git a/arch/nios2/mm/fault.c b/arch/nios2/mm/fault.c
index ec9d8a9c426f..88abf297c759 100644
--- a/arch/nios2/mm/fault.c
+++ b/arch/nios2/mm/fault.c
@@ -131,7 +131,7 @@ asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long cause,
 	 * make sure we exit gracefully rather than endlessly redo
 	 * the fault.
 	 */
-	fault = handle_mm_fault(vma, address, flags);
+	fault = handle_mm_fault(vma, address, flags, NULL);
 	if (fault_signal_pending(fault, regs))
 		return;

diff --git a/arch/openrisc/mm/fault.c b/arch/openrisc/mm/fault.c
index 8af1cc78c4fb..45aedc572361 100644
--- a/arch/openrisc/mm/fault.c
+++ b/arch/openrisc/mm/fault.c
@@ -159,7 +159,7 @@ asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long address,
 	 * the fault.
 	 */
-	fault = handle_mm_fault(vma, address, flags);
+	fault = handle_mm_fault(vma, address, flags, NULL);
 	if (fault_signal_pending(fault, regs))
 		return;

diff --git a/arch/parisc/mm/fault.c b/arch/parisc/mm/fault.c
index 86e8c848f3d7..c10908ea8803 100644
--- a/arch/parisc/mm/fault.c
+++ b/arch/parisc/mm/fault.c
@@ -302,7 +302,7 @@ void do_page_fault(struct pt_regs *regs, unsigned long code,
 	 * fault.
 	 */
-	fault = handle_mm_fault(vma, address, flags);
+	fault = handle_mm_fault(vma, address, flags, NULL);
 	if (fault_signal_pending(fault, regs))
 		return;

diff --git a/arch/powerpc/mm/copro_fault.c b/arch/powerpc/mm/copro_fault.c
index beb060b96632..c0478bef1f14 100644
--- a/arch/powerpc/mm/copro_fault.c
+++ b/arch/powerpc/mm/copro_fault.c
@@ -64,7 +64,7 @@ int copro_handle_mm_fault(struct mm_struct *mm, unsigned long ea,
 	}
 	ret = 0;
-	*flt = handle_mm_fault(vma, ea, is_write ? FAULT_FLAG_WRITE : 0);
+	*flt = handle_mm_fault(vma, ea, is_write ? FAULT_FLAG_WRITE : 0, NULL);
 	if (unlikely(*flt & VM_FAULT_ERROR)) {
 		if (*flt & VM_FAULT_OOM) {
 			ret = -ENOMEM;

diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index 84af6c8eecf7..992b10c3761c 100644
--- a/arch/powerpc/mm/fault.c
+++ b/arch/powerpc/mm/fault.c
@@ -563,7 +563,7 @@ static int __do_page_fault(struct pt_regs *regs, unsigned long address,
 	 * make sure we exit gracefully rather than endlessly redo
 	 * the fault.
 	 */
-	fault = handle_mm_fault(vma, address, flags);
+	fault = handle_mm_fault(vma, address, flags, NULL);
 #ifdef CONFIG_PPC_MEM_KEYS
 	/*

diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c
index be84e32adc4c..677ee1bb11ac 100644
--- a/arch/riscv/mm/fault.c
+++ b/arch/riscv/mm/fault.c
@@ -110,7 +110,7 @@ asmlinkage void do_page_fault(struct pt_regs *regs)
 	 * make sure we exit gracefully rather than endlessly redo
 	 * the fault.
 	 */
-	fault = handle_mm_fault(vma, addr, flags);
+	fault = handle_mm_fault(vma, addr, flags, NULL);
 	/*
 	 * If we need to retry but a fatal signal is pending, handle the

diff --git a/arch/s390/mm/fault.c b/arch/s390/mm/fault.c
index dedc28be27ab..ab6d7eedcfab 100644
--- a/arch/s390/mm/fault.c
+++ b/arch/s390/mm/fault.c
@@ -479,7 +479,7 @@ static inline vm_fault_t do_exception(struct pt_regs *regs, int access)
 	 * make sure we exit gracefully rather than endlessly redo
 	 * the fault.
 	 */
-	fault = handle_mm_fault(vma, address, flags);
+	fault = handle_mm_fault(vma, address, flags, NULL);
 	if (fault_signal_pending(fault, regs)) {
 		fault = VM_FAULT_SIGNAL;
 		if (flags & FAULT_FLAG_RETRY_NOWAIT)

diff --git a/arch/sh/mm/fault.c b/arch/sh/mm/fault.c
index 5f23d7907597..a4e670a9c9b3 100644
--- a/arch/sh/mm/fault.c
+++ b/arch/sh/mm/fault.c
@@ -464,7 +464,7 @@ asmlinkage void __kprobes do_page_fault(struct pt_regs *regs,
 	 * make sure we exit gracefully rather than endlessly redo
 	 * the fault.
 	 */
-	fault = handle_mm_fault(vma, address, flags);
+	fault = handle_mm_fault(vma, address, flags, NULL);
 	if (unlikely(fault & (VM_FAULT_RETRY | VM_FAULT_ERROR)))
 		if (mm_fault_error(regs, error_code, address, fault))

diff --git a/arch/sparc/mm/fault_32.c b/arch/sparc/mm/fault_32.c
index f6e0e601f857..61524d284706 100644
--- a/arch/sparc/mm/fault_32.c
+++ b/arch/sparc/mm/fault_32.c
@@ -235,7 +235,7 @@ asmlinkage void do_sparc_fault(struct pt_regs *regs, int text_fault, int write,
 	 * make sure we exit gracefully rather than endlessly redo
 	 * the fault.
 	 */
-	fault = handle_mm_fault(vma, address, flags);
+	fault = handle_mm_fault(vma, address, flags, NULL);
 	if (fault_signal_pending(fault, regs))
 		return;
@@ -411,7 +411,7 @@ static void force_user_fault(unsigned long address, int write)
 		if (!(vma->vm_flags & (VM_READ | VM_EXEC)))
 			goto bad_area;
 	}
-	switch (handle_mm_fault(vma, address, flags)) {
+	switch (handle_mm_fault(vma, address, flags, NULL)) {
 	case VM_FAULT_SIGBUS:
 	case VM_FAULT_OOM:
 		goto do_sigbus;

diff --git a/arch/sparc/mm/fault_64.c b/arch/sparc/mm/fault_64.c
index c0c0dd471b6b..6b702a0a8155 100644
--- a/arch/sparc/mm/fault_64.c
+++ b/arch/sparc/mm/fault_64.c
@@ -423,7 +423,7 @@ asmlinkage void __kprobes do_sparc64_fault(struct pt_regs *regs)
 		goto bad_area;
 	}
-	fault = handle_mm_fault(vma, address, flags);
+	fault = handle_mm_fault(vma, address, flags, NULL);
 	if (fault_signal_pending(fault, regs))
 		goto exit_exception;

diff --git a/arch/um/kernel/trap.c b/arch/um/kernel/trap.c
index 8f18cf56b3dd..32cc8f59322b 100644
--- a/arch/um/kernel/trap.c
+++ b/arch/um/kernel/trap.c
@@ -75,7 +75,7 @@ int handle_page_fault(unsigned long address, unsigned long ip,
 	do {
 		vm_fault_t fault;
-		fault = handle_mm_fault(vma, address, flags);
+		fault = handle_mm_fault(vma, address, flags, NULL);
 		if ((fault & VM_FAULT_RETRY) && fatal_signal_pending(current))
 			goto out_nosemaphore;

diff --git a/arch/unicore32/mm/fault.c b/arch/unicore32/mm/fault.c
index 3022104aa613..847ff24fcc2a 100644
--- a/arch/unicore32/mm/fault.c
+++ b/arch/unicore32/mm/fault.c
@@ -186,7 +186,7 @@ static vm_fault_t __do_pf(struct mm_struct *mm, unsigned long addr,
 	 * If for any reason at all we couldn't handle the fault, make
 	 * sure we exit gracefully rather than endlessly redo the fault.
 	 */
-	fault = handle_mm_fault(vma, addr & PAGE_MASK, flags);
+	fault = handle_mm_fault(vma, addr & PAGE_MASK, flags, NULL);
 	return fault;
 check_stack:

diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index a51df516b87b..3e27ed85af06 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -1461,7 +1461,7 @@ void do_user_addr_fault(struct pt_regs *regs,
 	 * userland). The return to userland is identified whenever
 	 * FAULT_FLAG_USER|FAULT_FLAG_KILLABLE are both set in flags.
 	 */
-	fault = handle_mm_fault(vma, address, flags);
+	fault = handle_mm_fault(vma, address, flags, NULL);
 	major |= fault & VM_FAULT_MAJOR;
 	/* Quick path to respond to signals */

diff --git a/arch/xtensa/mm/fault.c b/arch/xtensa/mm/fault.c
index e7172bd53ced..722ef3c98d60 100644
--- a/arch/xtensa/mm/fault.c
+++ b/arch/xtensa/mm/fault.c
@@ -108,7 +108,7 @@ void do_page_fault(struct pt_regs *regs)
 	 * make sure we exit gracefully rather than endlessly redo
 	 * the fault.
 	 */
-	fault = handle_mm_fault(vma, address, flags);
+	fault = handle_mm_fault(vma, address, flags, NULL);
 	if (fault_signal_pending(fault, regs))
 		return;

diff --git a/drivers/iommu/amd_iommu_v2.c b/drivers/iommu/amd_iommu_v2.c
index d6d85debd01b..66042b816943 100644
--- a/drivers/iommu/amd_iommu_v2.c
+++ b/drivers/iommu/amd_iommu_v2.c
@@ -497,7 +497,7 @@ static void do_fault(struct work_struct *work)
 	if (access_error(vma, fault))
 		goto out;
-	ret = handle_mm_fault(vma, address, flags);
+	ret = handle_mm_fault(vma, address, flags, NULL);
 out:
 	up_read(&mm->mmap_sem);

diff --git a/drivers/iommu/intel-svm.c b/drivers/iommu/intel-svm.c
index 2998418f0a38..c9cb5e5b6c34 100644
--- a/drivers/iommu/intel-svm.c
+++ b/drivers/iommu/intel-svm.c
@@ -629,7 +629,7 @@ static irqreturn_t prq_event_thread(int irq, void *d)
 		goto invalid;
 	ret = handle_mm_fault(vma, address,
-			      req->wr_req ? FAULT_FLAG_WRITE : 0);
+			      req->wr_req ? FAULT_FLAG_WRITE : 0, NULL);
 	if (ret & VM_FAULT_ERROR)
 		goto invalid;

diff --git a/include/linux/mm.h b/include/linux/mm.h
index f3fe7371855c..46bee4044ac1 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -36,6 +36,7 @@ struct file_ra_state;
 struct user_struct;
 struct writeback_control;
 struct bdi_writeback;
+struct pt_regs;
 void init_mm_internals(void);
@@ -1652,7 +1653,8 @@ int invalidate_inode_page(struct page *page);
 #ifdef CONFIG_MMU
 extern vm_fault_t handle_mm_fault(struct vm_area_struct *vma,
-			unsigned long address, unsigned int flags);
+			unsigned long address, unsigned int flags,
+			struct pt_regs *regs);
 extern int fixup_user_fault(struct task_struct *tsk, struct mm_struct *mm,
 			    unsigned long address, unsigned int fault_flags,
 			    bool *unlocked);
@@ -1662,7 +1664,8 @@ void unmap_mapping_range(struct address_space *mapping,
 		loff_t const holebegin, loff_t const holelen, int even_cows);
 #else
 static inline vm_fault_t handle_mm_fault(struct vm_area_struct *vma,
-		unsigned long address, unsigned int flags)
+		unsigned long address, unsigned int flags,
+		struct pt_regs *regs)
 {
 	/* should never happen if there's no MMU */
 	BUG();

diff --git a/mm/gup.c b/mm/gup.c
index 87a6a59fe667..1a48c639ea49 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -876,7 +876,7 @@ static int faultin_page(struct task_struct *tsk, struct vm_area_struct *vma,
 		fault_flags |= FAULT_FLAG_TRIED;
 	}
-	ret = handle_mm_fault(vma, address, fault_flags);
+	ret = handle_mm_fault(vma, address, fault_flags, NULL);
 	if (ret & VM_FAULT_ERROR) {
 		int err = vm_fault_to_errno(ret, *flags);
@@ -1222,7 +1222,7 @@ int fixup_user_fault(struct task_struct *tsk, struct mm_struct *mm,
 	    fatal_signal_pending(current))
 		return -EINTR;
-	ret = handle_mm_fault(vma, address, fault_flags);
+	ret = handle_mm_fault(vma, address, fault_flags, NULL);
 	major |= ret & VM_FAULT_MAJOR;
 	if (ret & VM_FAULT_ERROR) {
 		int err = vm_fault_to_errno(ret, 0);

diff --git a/mm/hmm.c b/mm/hmm.c
index 280585833adf..5fca59a1f6e9 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -90,7 +90,8 @@ static int hmm_vma_fault(unsigned long addr, unsigned long end,
 	}
 	for (; addr < end; addr += PAGE_SIZE)
-		if (handle_mm_fault(vma, addr, fault_flags) & VM_FAULT_ERROR)
+		if (handle_mm_fault(vma, addr, fault_flags, NULL) &
+		    VM_FAULT_ERROR)
 			return -EFAULT;
 	return -EBUSY;
 }

diff --git a/mm/ksm.c b/mm/ksm.c
index 281c00129a2e..2e2b02abcc0f 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -480,7 +480,8 @@ static int break_ksm(struct vm_area_struct *vma, unsigned long addr)
 			break;
 		if (PageKsm(page))
 			ret = handle_mm_fault(vma, addr,
-					FAULT_FLAG_WRITE | FAULT_FLAG_REMOTE);
+					FAULT_FLAG_WRITE | FAULT_FLAG_REMOTE,
+					NULL);
 		else
 			ret = VM_FAULT_WRITE;
 		put_page(page);

diff --git a/mm/memory.c b/mm/memory.c
index f703fe8c8346..23c738b3756e 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -71,6 +71,8 @@
 #include
 #include
 #include
+#include
+#include
 #include
@@ -4345,6 +4347,36 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
 	return handle_pte_fault(&vmf);
 }
+/**
+ * mm_account_fault - Do page fault accountings
+ * @regs: the pt_regs struct pointer. When set to NULL, will skip accounting
+ * @address: faulted address.
+ * @major: whether this is a major fault.
+ *
+ * This will take care of most of the page fault accountings. It should only
+ * be called when a page fault is completed. For example, VM_FAULT_RETRY means
+ * the fault needs to be retried again later, so it should not contribute to
+ * the accounting.
+ *
+ * The accounting will also include the PERF_COUNT_SW_PAGE_FAULTS_[MAJ|MIN]
+ * perf counter updates. Note: the handling of PERF_COUNT_SW_PAGE_FAULTS
+ * should still be in per-arch page fault handlers at the entry of page fault.
+ */
+static inline void mm_account_fault(struct pt_regs *regs,
+				    unsigned long address, bool major)
+{
+	if (!regs)
+		return;
+
+	if (major) {
+		current->maj_flt++;
+		perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1, regs, address);
+	} else {
+		current->min_flt++;
+		perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1, regs, address);
+	}
+}
+
 /*
  * By the time we get here, we already hold the mm semaphore
  *
@@ -4352,7 +4384,7 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
  * return value.  See filemap_fault() and __lock_page_or_retry().
  */
 vm_fault_t handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
-		unsigned int flags)
+		unsigned int flags, struct pt_regs *regs)
 {
 	vm_fault_t ret;
@@ -4393,6 +4425,38 @@ vm_fault_t handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
 		mem_cgroup_oom_synchronize(false);
 	}
+	if (ret & VM_FAULT_RETRY)
+		return ret;
+
+	/*
+	 * Do accounting in the common code, to avoid unnecessary
+	 * architecture differences or duplicated code.
+	 *
+	 * We arbitrarily make the rules be:
+	 *
+	 *  - faults that never even got here (because the address
+	 *    wasn't valid). That includes arch_vma_access_permitted()
+	 *    failing above.
+	 *
+	 *    So this is expressly not a "this many hardware page
+	 *    faults" counter. Use the hw profiling for that.
+	 *
+	 *  - incomplete faults (ie RETRY) do not count (see above).
+	 *    They will only count once completed.
+	 *
+	 *  - the fault counts as a "major" fault when the final
+	 *    successful fault is VM_FAULT_MAJOR, or if it was a
+	 *    retry (which implies that we couldn't handle it
+	 *    immediately previously).
+	 *
+	 *  - if the fault is done for GUP, regs wil be NULL and
+	 *    no accounting will be done (but you _could_ pass in
+	 *    your own regs and it would be accounted to the thread
+	 *    doing the fault, not to the target!)
+	 */
+	mm_account_fault(regs, address, (ret & VM_FAULT_MAJOR) ||
+				(flags & FAULT_FLAG_TRIED));
+
 	return ret;
 }
 EXPORT_SYMBOL_GPL(handle_mm_fault);

From patchwork Fri Jun 19 16:05:14 2020
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 11614493
From: Peter Xu
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Gerald Schaefer, peterx@redhat.com, Andrew Morton, Andrea Arcangeli,
    Will Deacon, Michael Ellerman, Linus Torvalds, Richard Henderson,
    Ivan Kokshaysky, Matt Turner, linux-alpha@vger.kernel.org
Subject: [PATCH 02/26] mm/alpha: Use general page fault accounting
Date: Fri, 19 Jun 2020 12:05:14 -0400
Message-Id: <20200619160538.8641-3-peterx@redhat.com>
In-Reply-To: <20200619160538.8641-1-peterx@redhat.com>
References: <20200619160538.8641-1-peterx@redhat.com>

Use the general page fault accounting by passing regs into
handle_mm_fault().  Add the missing PERF_COUNT_SW_PAGE_FAULTS perf event
too.  Note that the other two perf events (PERF_COUNT_SW_PAGE_FAULTS_[MAJ|MIN])
are now done in handle_mm_fault().
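As a rough picture of the converted flow (illustrative only, loosely
modelled on the alpha handler in the diff below; the function name and the
elided checks are not from the patch), the per-arch handler keeps the plain
PERF_COUNT_SW_PAGE_FAULTS event at its entry, before any retry, while
handle_mm_fault() now does the major/minor accounting:

    /* Sketch under the assumptions above -- not the literal alpha code. */
    static void arch_page_fault(unsigned long address, unsigned int flags,
                                struct pt_regs *regs)
    {
            struct mm_struct *mm = current->mm;
            struct vm_area_struct *vma;
            vm_fault_t fault;

            /* Counted once per trap, before the retry loop. */
            perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
    retry:
            down_read(&mm->mmap_sem);
            vma = find_vma(mm, address);
            /* ... vma and access checks elided ... */

            /* maj_flt/min_flt and the MAJ/MIN perf events are accounted
             * inside handle_mm_fault() now that regs is passed down. */
            fault = handle_mm_fault(vma, address, flags, regs);

            if (fault & VM_FAULT_RETRY) {
                    /* mmap_sem was already dropped in __lock_page_or_retry() */
                    flags |= FAULT_FLAG_TRIED;
                    goto retry;
            }
            up_read(&mm->mmap_sem);
    }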
CC: Richard Henderson
CC: Ivan Kokshaysky
CC: Matt Turner
CC: linux-alpha@vger.kernel.org
Signed-off-by: Peter Xu
---
 arch/alpha/mm/fault.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/arch/alpha/mm/fault.c b/arch/alpha/mm/fault.c
index 82e72f24486e..2e325af081bc 100644
--- a/arch/alpha/mm/fault.c
+++ b/arch/alpha/mm/fault.c
@@ -25,6 +25,7 @@
 #include
 #include
 #include
+#include
 extern void die_if_kernel(char *,struct pt_regs *,long, unsigned long *);
@@ -116,6 +117,7 @@ do_page_fault(unsigned long address, unsigned long mmcsr,
 #endif
 	if (user_mode(regs))
 		flags |= FAULT_FLAG_USER;
+	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
 retry:
 	down_read(&mm->mmap_sem);
 	vma = find_vma(mm, address);
@@ -148,7 +150,7 @@ do_page_fault(unsigned long address, unsigned long mmcsr,
 	/* If for any reason at all we couldn't handle the fault,
 	   make sure we exit gracefully rather than endlessly redo
 	   the fault.  */
-	fault = handle_mm_fault(vma, address, flags, NULL);
+	fault = handle_mm_fault(vma, address, flags, regs);
 	if (fault_signal_pending(fault, regs))
 		return;
@@ -164,10 +166,6 @@ do_page_fault(unsigned long address, unsigned long mmcsr,
 	}
 	if (flags & FAULT_FLAG_ALLOW_RETRY) {
-		if (fault & VM_FAULT_MAJOR)
-			current->maj_flt++;
-		else
-			current->min_flt++;
 		if (fault & VM_FAULT_RETRY) {
 			flags |= FAULT_FLAG_TRIED;

From patchwork Fri Jun 19 16:05:15 2020
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 11614513
From: Peter Xu
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Gerald Schaefer, peterx@redhat.com, Andrew Morton, Andrea Arcangeli,
    Will Deacon, Michael Ellerman, Linus Torvalds, Vineet Gupta,
    linux-snps-arc@lists.infradead.org
Subject: [PATCH 03/26] mm/arc: Use general page fault accounting
Date: Fri, 19 Jun 2020 12:05:15 -0400
Message-Id: <20200619160538.8641-4-peterx@redhat.com>
In-Reply-To: <20200619160538.8641-1-peterx@redhat.com>
References: <20200619160538.8641-1-peterx@redhat.com>
Use the general page fault accounting by passing regs into
handle_mm_fault().  It naturally solves the issue of multiple page fault
accounting when a page fault is retried.

Fix the PERF_COUNT_SW_PAGE_FAULTS perf event manually for page fault
retries, by moving it before taking mmap_sem.

CC: Vineet Gupta
CC: linux-snps-arc@lists.infradead.org
Signed-off-by: Peter Xu
---
 arch/arc/mm/fault.c | 18 +++---------------
 1 file changed, 3 insertions(+), 15 deletions(-)

diff --git a/arch/arc/mm/fault.c b/arch/arc/mm/fault.c
index 34380139e7a2..68e6849cf086 100644
--- a/arch/arc/mm/fault.c
+++ b/arch/arc/mm/fault.c
@@ -106,6 +106,7 @@ void do_page_fault(unsigned long address, struct pt_regs *regs)
 	if (write)
 		flags |= FAULT_FLAG_WRITE;
+	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
 retry:
 	down_read(&mm->mmap_sem);
@@ -131,7 +132,7 @@ void do_page_fault(unsigned long address, struct pt_regs *regs)
 		goto bad_area;
 	}
-	fault = handle_mm_fault(vma, address, flags, NULL);
+	fault = handle_mm_fault(vma, address, flags, regs);
 	/* Quick path to respond to signals */
 	if (fault_signal_pending(fault, regs)) {
@@ -156,22 +157,9 @@ void do_page_fault(unsigned long address, struct pt_regs *regs)
 	 * Major/minor page fault accounting
 	 * (in case of retry we only land here once)
 	 */
-	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
-
-	if (likely(!(fault & VM_FAULT_ERROR))) {
-		if (fault & VM_FAULT_MAJOR) {
-			tsk->maj_flt++;
-			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1,
-				      regs, address);
-		} else {
-			tsk->min_flt++;
-			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1,
-				      regs, address);
-		}
-
+	if (likely(!(fault & VM_FAULT_ERROR)))
 		/* Normal return path: fault Handled Gracefully */
 		return;
-	}
 	if (!user_mode(regs))
 		goto no_context;

From patchwork Fri Jun 19 16:05:16 2020
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 11614499
From: Peter Xu
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Gerald Schaefer, peterx@redhat.com, Andrew Morton, Andrea Arcangeli,
    Will Deacon, Michael Ellerman, Linus Torvalds, Russell King,
    linux-arm-kernel@lists.infradead.org
Subject: [PATCH 04/26] mm/arm: Use general page fault accounting
Date: Fri, 19 Jun 2020 12:05:16 -0400
Message-Id: <20200619160538.8641-5-peterx@redhat.com>
In-Reply-To: <20200619160538.8641-1-peterx@redhat.com>
References: <20200619160538.8641-1-peterx@redhat.com>

Use the general page fault accounting by passing regs into
handle_mm_fault().  It naturally solves the issue of multiple page fault
accounting when a page fault is retried.  To do this, we need to pass the
pt_regs pointer into __do_page_fault().

Fix the PERF_COUNT_SW_PAGE_FAULTS perf event manually for page fault
retries, by moving it before taking mmap_sem.

CC: Russell King
CC: Will Deacon
CC: linux-arm-kernel@lists.infradead.org
Signed-off-by: Peter Xu
---
 arch/arm/mm/fault.c | 25 ++++++-------------------
 1 file changed, 6 insertions(+), 19 deletions(-)

diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c
index 0d6be0f4f27c..8530befee012 100644
--- a/arch/arm/mm/fault.c
+++ b/arch/arm/mm/fault.c
@@ -201,7 +201,8 @@ static inline bool access_error(unsigned int fsr, struct vm_area_struct *vma)
 static vm_fault_t __kprobes
 __do_page_fault(struct mm_struct *mm, unsigned long addr, unsigned int fsr,
-		unsigned int flags, struct task_struct *tsk)
+		unsigned int flags, struct task_struct *tsk,
+		struct pt_regs *regs)
 {
 	struct vm_area_struct *vma;
 	vm_fault_t fault;
@@ -223,7 +224,7 @@ __do_page_fault(struct mm_struct *mm, unsigned long addr, unsigned int fsr,
 		goto out;
 	}
-	return handle_mm_fault(vma, addr & PAGE_MASK, flags, NULL);
+	return handle_mm_fault(vma, addr & PAGE_MASK, flags, regs);
 check_stack:
 	/* Don't allow expansion below FIRST_USER_ADDRESS */
@@ -265,6 +266,8 @@ do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
 	if ((fsr & FSR_WRITE) && !(fsr & FSR_CM))
 		flags |= FAULT_FLAG_WRITE;
+	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, addr);
+
 	/*
 	 * As per x86, we may deadlock here. However, since the kernel only
 	 * validly references user space from well defined areas of the code,
@@ -289,7 +292,7 @@ do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
 #endif
 	}
-	fault = __do_page_fault(mm, addr, fsr, flags, tsk);
+	fault = __do_page_fault(mm, addr, fsr, flags, tsk, regs);
 	/* If we need to retry but a fatal signal is pending, handle the
 	 * signal first. We do not need to release the mmap_sem because
@@ -301,23 +304,7 @@ do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
 		return 0;
 	}
-	/*
-	 * Major/minor page fault accounting is only done on the
-	 * initial attempt. If we go through a retry, it is extremely
-	 * likely that the page will be found in page cache at that point.
-	 */
-
-	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, addr);
 	if (!(fault & VM_FAULT_ERROR) && flags & FAULT_FLAG_ALLOW_RETRY) {
-		if (fault & VM_FAULT_MAJOR) {
-			tsk->maj_flt++;
-			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1,
-					regs, addr);
-		} else {
-			tsk->min_flt++;
-			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1,
-					regs, addr);
-		}
 		if (fault & VM_FAULT_RETRY) {
 			flags |= FAULT_FLAG_TRIED;
 			goto retry;

From patchwork Fri Jun 19 16:05:17 2020
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 11614497
From: Peter Xu
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Gerald Schaefer, peterx@redhat.com, Andrew Morton, Andrea Arcangeli,
    Will Deacon, Michael Ellerman, Linus Torvalds, Catalin Marinas,
    linux-arm-kernel@lists.infradead.org
Subject: [PATCH 05/26] mm/arm64: Use general page fault accounting
Date: Fri, 19 Jun 2020 12:05:17 -0400
Message-Id: <20200619160538.8641-6-peterx@redhat.com>
In-Reply-To: <20200619160538.8641-1-peterx@redhat.com>
References: <20200619160538.8641-1-peterx@redhat.com>

Use the general page fault accounting by passing regs into
handle_mm_fault().  It naturally solves the issue of multiple page fault
accounting when a page fault is retried.  To do this, we pass the pt_regs
pointer into __do_page_fault().
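To make the shape of the change concrete, here is a condensed sketch (not
the literal arm64 code; the stack-expansion path and some checks are
elided) of what __do_page_fault() reduces to once regs is threaded through
and the local VM_FAULT_MAJOR bookkeeping goes away:

    /* Condensed sketch of the converted helper, for illustration only. */
    static vm_fault_t __do_page_fault(struct mm_struct *mm, unsigned long addr,
                                      unsigned int mm_flags, unsigned long vm_flags,
                                      struct pt_regs *regs)
    {
            struct vm_area_struct *vma = find_vma(mm, addr);

            if (unlikely(!vma))
                    return VM_FAULT_BADMAP;
            if (!(vma->vm_flags & vm_flags))
                    return VM_FAULT_BADACCESS;

            /* No local "major" tracking any more: passing regs down is all
             * that is left for accounting; the common code updates the
             * maj_flt/min_flt counters and perf events. */
            return handle_mm_fault(vma, addr & PAGE_MASK, mm_flags, regs);
    }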
CC: Catalin Marinas
CC: Will Deacon
CC: linux-arm-kernel@lists.infradead.org
Signed-off-by: Peter Xu
---
 arch/arm64/mm/fault.c | 29 ++++++-----------------------
 1 file changed, 6 insertions(+), 23 deletions(-)

diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 5f6607b951b8..09b206521559 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -398,7 +398,8 @@ static void do_bad_area(unsigned long addr, unsigned int esr, struct pt_regs *re
 #define VM_FAULT_BADACCESS	0x020000
 static vm_fault_t __do_page_fault(struct mm_struct *mm, unsigned long addr,
-			   unsigned int mm_flags, unsigned long vm_flags)
+			   unsigned int mm_flags, unsigned long vm_flags,
+			   struct pt_regs *regs)
 {
 	struct vm_area_struct *vma = find_vma(mm, addr);
@@ -422,7 +423,7 @@ static vm_fault_t __do_page_fault(struct mm_struct *mm, unsigned long addr,
 	 */
 	if (!(vma->vm_flags & vm_flags))
 		return VM_FAULT_BADACCESS;
-	return handle_mm_fault(vma, addr & PAGE_MASK, mm_flags, NULL);
+	return handle_mm_fault(vma, addr & PAGE_MASK, mm_flags, regs);
 }
 static bool is_el0_instruction_abort(unsigned int esr)
@@ -444,7 +445,7 @@ static int __kprobes do_page_fault(unsigned long addr, unsigned int esr,
 {
 	const struct fault_info *inf;
 	struct mm_struct *mm = current->mm;
-	vm_fault_t fault, major = 0;
+	vm_fault_t fault;
 	unsigned long vm_flags = VM_ACCESS_FLAGS;
 	unsigned int mm_flags = FAULT_FLAG_DEFAULT;
@@ -510,8 +511,7 @@ static int __kprobes do_page_fault(unsigned long addr, unsigned int esr,
 #endif
 	}
-	fault = __do_page_fault(mm, addr, mm_flags, vm_flags);
-	major |= fault & VM_FAULT_MAJOR;
+	fault = __do_page_fault(mm, addr, mm_flags, vm_flags, regs);
 	/* Quick path to respond to signals */
 	if (fault_signal_pending(fault, regs)) {
@@ -532,25 +532,8 @@ static int __kprobes do_page_fault(unsigned long addr, unsigned int esr,
 	 * Handle the "normal" (no error) case first.
 	 */
 	if (likely(!(fault & (VM_FAULT_ERROR | VM_FAULT_BADMAP |
-			      VM_FAULT_BADACCESS)))) {
-		/*
-		 * Major/minor page fault accounting is only done
-		 * once. If we go through a retry, it is extremely
-		 * likely that the page will be found in page cache at
-		 * that point.
- */ - if (major) { - current->maj_flt++; - perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1, regs, - addr); - } else { - current->min_flt++; - perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1, regs, - addr); - } - + VM_FAULT_BADACCESS)))) return 0; - } /* * If we are in kernel mode at this point, we have no context to From patchwork Fri Jun 19 16:05:18 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Xu X-Patchwork-Id: 11614507 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 34782912 for ; Fri, 19 Jun 2020 16:08:17 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id EC1182158C for ; Fri, 19 Jun 2020 16:08:16 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="awbdOBXE" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org EC1182158C Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=redhat.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 3A02D8D00BA; Fri, 19 Jun 2020 12:08:16 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 350898D00AD; Fri, 19 Jun 2020 12:08:16 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 265568D00BA; Fri, 19 Jun 2020 12:08:16 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0022.hostedemail.com [216.40.44.22]) by kanga.kvack.org (Postfix) with ESMTP id 0BA1F8D00AD for ; Fri, 19 Jun 2020 12:08:16 -0400 (EDT) Received: from smtpin26.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id A3710824556B for ; Fri, 19 Jun 2020 16:08:15 +0000 (UTC) X-FDA: 76946443350.26.berry01_290f1e726e1a Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin26.hostedemail.com (Postfix) with ESMTP id B923018074312 for ; Fri, 19 Jun 2020 16:05:54 +0000 (UTC) X-Spam-Summary: 1,0,0,,d41d8cd98f00b204,peterx@redhat.com,,RULES_HIT:30003:30054,0,RBL:205.139.110.61:@redhat.com:.lbl8.mailshell.net-66.10.201.10 62.18.0.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:ft,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:23,LUA_SUMMARY:none X-HE-Tag: berry01_290f1e726e1a X-Filterd-Recvd-Size: 4897 Received: from us-smtp-delivery-1.mimecast.com (us-smtp-1.mimecast.com [205.139.110.61]) by imf35.hostedemail.com (Postfix) with ESMTP for ; Fri, 19 Jun 2020 16:05:54 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1592582753; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=nYKuiXa6JqSlFakb02F9N/z/khfKW7nv0ZKU3o10xCg=; b=awbdOBXE5BlzdHkjXyMjsSIUz9K69oRKmmMSElM33wRi8DBW1sdbgYjm8jr/ekKTCj7Xjk nZDh0jftu/ZWhVzatIMmnXclq3k7VbGQIUS3AmoP3AqV1Z8r6JFmlU5WRhTaZpaskvUgsh AbdRCPDVs2ZDAnipqDmpnEKn/4LIvXM= Received: from 
From: Peter Xu
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Gerald Schaefer, peterx@redhat.com, Andrew Morton, Andrea Arcangeli, Will Deacon, Michael Ellerman, Linus Torvalds, Guo Ren, linux-csky@vger.kernel.org
Subject: [PATCH 06/26] mm/csky: Use general page fault accounting
Date: Fri, 19 Jun 2020 12:05:18 -0400
Message-Id: <20200619160538.8641-7-peterx@redhat.com>
In-Reply-To: <20200619160538.8641-1-peterx@redhat.com>
References: <20200619160538.8641-1-peterx@redhat.com>

Use the general page fault accounting by passing regs into
handle_mm_fault().  It naturally solves the issue of counting a page fault
multiple times when the fault is retried.

CC: Guo Ren
CC: linux-csky@vger.kernel.org
Signed-off-by: Peter Xu
---
 arch/csky/mm/fault.c | 12 +-----------
 1 file changed, 1 insertion(+), 11 deletions(-)

diff --git a/arch/csky/mm/fault.c b/arch/csky/mm/fault.c
index b14f97d3cb15..a3e0aa3ebb79 100644
--- a/arch/csky/mm/fault.c
+++ b/arch/csky/mm/fault.c
@@ -151,7 +151,7 @@ asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long write,
 	 * the fault.
 	 */
 	fault = handle_mm_fault(vma, address, write ? FAULT_FLAG_WRITE : 0,
-				NULL);
+				regs);
 	if (unlikely(fault & VM_FAULT_ERROR)) {
 		if (fault & VM_FAULT_OOM)
 			goto out_of_memory;
@@ -161,16 +161,6 @@ asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long write,
 			goto bad_area;
 		BUG();
 	}
-	if (fault & VM_FAULT_MAJOR) {
-		tsk->maj_flt++;
-		perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1, regs,
-			      address);
-	} else {
-		tsk->min_flt++;
-		perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1, regs,
-			      address);
-	}
-
 	up_read(&mm->mmap_sem);
 	return;

From patchwork Fri Jun 19 16:05:19 2020
From: Peter Xu
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Gerald Schaefer, peterx@redhat.com, Andrew Morton, Andrea Arcangeli, Will Deacon, Michael Ellerman, Linus Torvalds, Brian Cain, linux-hexagon@vger.kernel.org
Subject: [PATCH 07/26] mm/hexagon: Use general page fault accounting
Date: Fri, 19 Jun 2020 12:05:19 -0400
Message-Id: <20200619160538.8641-8-peterx@redhat.com>
In-Reply-To: <20200619160538.8641-1-peterx@redhat.com>
References: <20200619160538.8641-1-peterx@redhat.com>

Use the general page fault accounting by passing regs into
handle_mm_fault().  It naturally solves the issue of counting a page fault
multiple times when the fault is retried.

Add the missing PERF_COUNT_SW_PAGE_FAULTS perf event too.  Note that the
other two perf events (PERF_COUNT_SW_PAGE_FAULTS_[MAJ|MIN]) are handled in
handle_mm_fault().
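(In other words, the architecture handler now only emits the overall
PERF_COUNT_SW_PAGE_FAULTS event itself and leaves the major/minor split to
the core code.  The sketch below shows that division of labour in a
simplified, schematic handler; it is an illustration with assumed
parameters, not the hexagon code itself.)

    /* Schematic arch-side fault handler: names and parameters are assumed. */
    static vm_fault_t sketch_arch_fault(struct pt_regs *regs, unsigned long address,
                                        struct vm_area_struct *vma, unsigned int flags)
    {
            /* Counted once per exception, independent of any later retry. */
            perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);

            /*
             * Passing regs lets the core code bump maj_flt/min_flt and emit
             * PERF_COUNT_SW_PAGE_FAULTS_[MAJ|MIN] exactly once per fault.
             */
            return handle_mm_fault(vma, address, flags, regs);
    }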
CC: Brian Cain CC: linux-hexagon@vger.kernel.org Signed-off-by: Peter Xu --- arch/hexagon/mm/vm_fault.c | 9 ++++----- 1 file changed, 4 insertions(+), 5 deletions(-) diff --git a/arch/hexagon/mm/vm_fault.c b/arch/hexagon/mm/vm_fault.c index f04cd0a6d905..1b1802f30862 100644 --- a/arch/hexagon/mm/vm_fault.c +++ b/arch/hexagon/mm/vm_fault.c @@ -19,6 +19,7 @@ #include #include #include +#include /* * Decode of hardware exception sends us to one of several @@ -54,6 +55,8 @@ void do_page_fault(unsigned long address, long cause, struct pt_regs *regs) if (user_mode(regs)) flags |= FAULT_FLAG_USER; + + perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address); retry: down_read(&mm->mmap_sem); vma = find_vma(mm, address); @@ -89,7 +92,7 @@ void do_page_fault(unsigned long address, long cause, struct pt_regs *regs) break; } - fault = handle_mm_fault(vma, address, flags, NULL); + fault = handle_mm_fault(vma, address, flags, regs); if (fault_signal_pending(fault, regs)) return; @@ -97,10 +100,6 @@ void do_page_fault(unsigned long address, long cause, struct pt_regs *regs) /* The most common case -- we are done. */ if (likely(!(fault & VM_FAULT_ERROR))) { if (flags & FAULT_FLAG_ALLOW_RETRY) { - if (fault & VM_FAULT_MAJOR) - current->maj_flt++; - else - current->min_flt++; if (fault & VM_FAULT_RETRY) { flags |= FAULT_FLAG_TRIED; goto retry; From patchwork Fri Jun 19 16:05:20 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Xu X-Patchwork-Id: 11614515 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 6DEF4912 for ; Fri, 19 Jun 2020 16:09:39 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 3B1E321532 for ; Fri, 19 Jun 2020 16:09:39 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="JrVGhmK4" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 3B1E321532 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=redhat.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 76F888D00BE; Fri, 19 Jun 2020 12:09:38 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 720568D00AD; Fri, 19 Jun 2020 12:09:38 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 635BB8D00BE; Fri, 19 Jun 2020 12:09:38 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0059.hostedemail.com [216.40.44.59]) by kanga.kvack.org (Postfix) with ESMTP id 4BF618D00AD for ; Fri, 19 Jun 2020 12:09:38 -0400 (EDT) Received: from smtpin15.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id 0B50C181AC9C6 for ; Fri, 19 Jun 2020 16:09:38 +0000 (UTC) X-FDA: 76946446836.15.coal31_3813b8e26e1a Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin15.hostedemail.com (Postfix) with ESMTP id 7146018037FC3 for ; Fri, 19 Jun 2020 16:05:58 +0000 (UTC) X-Spam-Summary: 
From: Peter Xu
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Gerald Schaefer, peterx@redhat.com, Andrew Morton, Andrea Arcangeli, Will Deacon, Michael Ellerman, Linus Torvalds
Subject: [PATCH 08/26] mm/ia64: Use general page fault accounting
Date: Fri, 19 Jun 2020 12:05:20 -0400
Message-Id: <20200619160538.8641-9-peterx@redhat.com>
In-Reply-To: <20200619160538.8641-1-peterx@redhat.com>
References: <20200619160538.8641-1-peterx@redhat.com>
Use the general page fault accounting by passing regs into
handle_mm_fault().  It naturally solves the issue of counting a page fault
multiple times when the fault is retried.

Add the missing PERF_COUNT_SW_PAGE_FAULTS perf event too.  Note that the
other two perf events (PERF_COUNT_SW_PAGE_FAULTS_[MAJ|MIN]) are handled in
handle_mm_fault().

Signed-off-by: Peter Xu
---
 arch/ia64/mm/fault.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/arch/ia64/mm/fault.c b/arch/ia64/mm/fault.c
index caa93e083c9d..613255e947a8 100644
--- a/arch/ia64/mm/fault.c
+++ b/arch/ia64/mm/fault.c
@@ -14,6 +14,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -101,6 +102,8 @@ ia64_do_page_fault (unsigned long address, unsigned long isr, struct pt_regs *re
 		flags |= FAULT_FLAG_USER;
 	if (mask & VM_WRITE)
 		flags |= FAULT_FLAG_WRITE;
+
+	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
 retry:
 	down_read(&mm->mmap_sem);
@@ -139,7 +142,7 @@ ia64_do_page_fault (unsigned long address, unsigned long isr, struct pt_regs *re
 	 * sure we exit gracefully rather than endlessly redo the
 	 * fault.
 	 */
-	fault = handle_mm_fault(vma, address, flags, NULL);
+	fault = handle_mm_fault(vma, address, flags, regs);
 	if (fault_signal_pending(fault, regs))
 		return;
@@ -162,10 +165,6 @@ ia64_do_page_fault (unsigned long address, unsigned long isr, struct pt_regs *re
 	}
 	if (flags & FAULT_FLAG_ALLOW_RETRY) {
-		if (fault & VM_FAULT_MAJOR)
-			current->maj_flt++;
-		else
-			current->min_flt++;
 		if (fault & VM_FAULT_RETRY) {
 			flags |= FAULT_FLAG_TRIED;

From patchwork Fri Jun 19 16:05:21 2020
From: Peter Xu
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Gerald Schaefer, peterx@redhat.com, Andrew Morton, Andrea Arcangeli, Will Deacon, Michael Ellerman, Linus Torvalds, Geert Uytterhoeven, linux-m68k@lists.linux-m68k.org
Subject: [PATCH 09/26] mm/m68k: Use general page fault accounting
Date: Fri, 19 Jun 2020 12:05:21 -0400
Message-Id: <20200619160538.8641-10-peterx@redhat.com>
In-Reply-To: <20200619160538.8641-1-peterx@redhat.com>
References: <20200619160538.8641-1-peterx@redhat.com>
Use the general page fault accounting by passing regs into
handle_mm_fault().  It naturally solves the issue of counting a page fault
multiple times when the fault is retried.

Add the missing PERF_COUNT_SW_PAGE_FAULTS perf event too.  Note that the
other two perf events (PERF_COUNT_SW_PAGE_FAULTS_[MAJ|MIN]) are handled in
handle_mm_fault().

CC: Geert Uytterhoeven
CC: linux-m68k@lists.linux-m68k.org
Signed-off-by: Peter Xu
---
 arch/m68k/mm/fault.c | 14 ++++----------
 1 file changed, 4 insertions(+), 10 deletions(-)

diff --git a/arch/m68k/mm/fault.c b/arch/m68k/mm/fault.c
index 2db38dfbc00c..983054d209bc 100644
--- a/arch/m68k/mm/fault.c
+++ b/arch/m68k/mm/fault.c
@@ -12,6 +12,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -85,6 +86,8 @@ int do_page_fault(struct pt_regs *regs, unsigned long address,
 	if (user_mode(regs))
 		flags |= FAULT_FLAG_USER;
+
+	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
 retry:
 	down_read(&mm->mmap_sem);
@@ -135,7 +138,7 @@ int do_page_fault(struct pt_regs *regs, unsigned long address,
 	 * the fault.
 	 */
-	fault = handle_mm_fault(vma, address, flags, NULL);
+	fault = handle_mm_fault(vma, address, flags, regs);
 	pr_debug("handle_mm_fault returns %x\n", fault);
 	if (fault_signal_pending(fault, regs))
@@ -151,16 +154,7 @@ int do_page_fault(struct pt_regs *regs, unsigned long address,
 		BUG();
 	}
-	/*
-	 * Major/minor page fault accounting is only done on the
-	 * initial attempt. If we go through a retry, it is extremely
-	 * likely that the page will be found in page cache at that point.
-	 */
 	if (flags & FAULT_FLAG_ALLOW_RETRY) {
-		if (fault & VM_FAULT_MAJOR)
-			current->maj_flt++;
-		else
-			current->min_flt++;
 		if (fault & VM_FAULT_RETRY) {
 			flags |= FAULT_FLAG_TRIED;

From patchwork Fri Jun 19 16:05:22 2020
From: Peter Xu
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Gerald Schaefer, peterx@redhat.com, Andrew Morton, Andrea Arcangeli, Will Deacon, Michael Ellerman, Linus Torvalds, Michal Simek
Subject: [PATCH 10/26] mm/microblaze: Use general page fault accounting
Date: Fri, 19 Jun 2020 12:05:22 -0400
Message-Id: <20200619160538.8641-11-peterx@redhat.com>
In-Reply-To: <20200619160538.8641-1-peterx@redhat.com>
References: <20200619160538.8641-1-peterx@redhat.com>

Use the general page fault accounting by passing regs into
handle_mm_fault().  It naturally solves the issue of counting a page fault
multiple times when the fault is retried.

Add the missing PERF_COUNT_SW_PAGE_FAULTS perf event too.  Note that the
other two perf events (PERF_COUNT_SW_PAGE_FAULTS_[MAJ|MIN]) are handled in
handle_mm_fault().

CC: Michal Simek
Signed-off-by: Peter Xu
---
 arch/microblaze/mm/fault.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/arch/microblaze/mm/fault.c b/arch/microblaze/mm/fault.c
index 9abfa5224386..3d58dbd227cd 100644
--- a/arch/microblaze/mm/fault.c
+++ b/arch/microblaze/mm/fault.c
@@ -28,6 +28,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -122,6 +123,8 @@ void do_page_fault(struct pt_regs *regs, unsigned long address,
 	if (user_mode(regs))
 		flags |= FAULT_FLAG_USER;
+	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
+
 	/* When running in the kernel we expect faults to occur only to
 	 * addresses in user space. All other faults represent errors in the
 	 * kernel and should generate an OOPS. Unfortunately, in the case of an
@@ -215,7 +218,7 @@ void do_page_fault(struct pt_regs *regs, unsigned long address,
 	 * make sure we exit gracefully rather than endlessly redo
 	 * the fault.
 	 */
-	fault = handle_mm_fault(vma, address, flags, NULL);
+	fault = handle_mm_fault(vma, address, flags, regs);
 	if (fault_signal_pending(fault, regs))
 		return;
@@ -231,10 +234,6 @@ void do_page_fault(struct pt_regs *regs, unsigned long address,
 	}
 	if (flags & FAULT_FLAG_ALLOW_RETRY) {
-		if (unlikely(fault & VM_FAULT_MAJOR))
-			current->maj_flt++;
-		else
-			current->min_flt++;
 		if (fault & VM_FAULT_RETRY) {
 			flags |= FAULT_FLAG_TRIED;

From patchwork Fri Jun 19 16:05:23 2020
From: Peter Xu
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Gerald Schaefer, peterx@redhat.com, Andrew Morton, Andrea Arcangeli, Will Deacon, Michael Ellerman, Linus Torvalds, Thomas Bogendoerfer, linux-mips@vger.kernel.org
Subject: [PATCH 11/26] mm/mips: Use general page fault accounting
Date: Fri, 19 Jun 2020 12:05:23 -0400
Message-Id: <20200619160538.8641-12-peterx@redhat.com>
In-Reply-To: <20200619160538.8641-1-peterx@redhat.com>
References: <20200619160538.8641-1-peterx@redhat.com>

Use the general page fault accounting by passing regs into
handle_mm_fault().  It naturally solves the issue of counting a page fault
multiple times when the fault is retried.

Fix the PERF_COUNT_SW_PAGE_FAULTS perf event for page fault retries by
moving it before mmap_sem is taken, so the event is counted once per fault
rather than once per attempt.
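(The effect of that move is sketched below in simplified form; this is an
illustration of the resulting control flow with assumed locals, not the
verbatim mips code.  With the event emitted before the retry label, a fault
that comes back with VM_FAULT_RETRY is still counted only once.)

	/* Simplified sketch of the control flow after this patch. */
	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
retry:
	down_read(&mm->mmap_sem);
	vma = find_vma(mm, address);
	/* ... access checks elided ... */
	fault = handle_mm_fault(vma, address, flags, regs);
	if ((flags & FAULT_FLAG_ALLOW_RETRY) && (fault & VM_FAULT_RETRY)) {
		/* mmap_sem was already dropped by the fault path */
		flags |= FAULT_FLAG_TRIED;
		goto retry;	/* no second PERF_COUNT_SW_PAGE_FAULTS here */
	}
	up_read(&mm->mmap_sem);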
CC: Thomas Bogendoerfer CC: linux-mips@vger.kernel.org Signed-off-by: Peter Xu Acked-by: Thomas Bogendoerfer --- arch/mips/mm/fault.c | 14 +++----------- 1 file changed, 3 insertions(+), 11 deletions(-) diff --git a/arch/mips/mm/fault.c b/arch/mips/mm/fault.c index 31c2afb8f8a5..750a4978a12b 100644 --- a/arch/mips/mm/fault.c +++ b/arch/mips/mm/fault.c @@ -96,6 +96,8 @@ static void __kprobes __do_page_fault(struct pt_regs *regs, unsigned long write, if (user_mode(regs)) flags |= FAULT_FLAG_USER; + + perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address); retry: down_read(&mm->mmap_sem); vma = find_vma(mm, address); @@ -152,12 +154,11 @@ static void __kprobes __do_page_fault(struct pt_regs *regs, unsigned long write, * make sure we exit gracefully rather than endlessly redo * the fault. */ - fault = handle_mm_fault(vma, address, flags, NULL); + fault = handle_mm_fault(vma, address, flags, regs); if (fault_signal_pending(fault, regs)) return; - perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address); if (unlikely(fault & VM_FAULT_ERROR)) { if (fault & VM_FAULT_OOM) goto out_of_memory; @@ -168,15 +169,6 @@ static void __kprobes __do_page_fault(struct pt_regs *regs, unsigned long write, BUG(); } if (flags & FAULT_FLAG_ALLOW_RETRY) { - if (fault & VM_FAULT_MAJOR) { - perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1, - regs, address); - tsk->maj_flt++; - } else { - perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1, - regs, address); - tsk->min_flt++; - } if (fault & VM_FAULT_RETRY) { flags |= FAULT_FLAG_TRIED; From patchwork Fri Jun 19 16:05:24 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Xu X-Patchwork-Id: 11614503 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 64D7014B7 for ; Fri, 19 Jun 2020 16:07:44 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 2DE5621532 for ; Fri, 19 Jun 2020 16:07:44 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="b7XNNxL7" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 2DE5621532 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=redhat.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 3EED88D00B8; Fri, 19 Jun 2020 12:07:43 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 39E628D00AD; Fri, 19 Jun 2020 12:07:43 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 28CFD8D00B8; Fri, 19 Jun 2020 12:07:43 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0176.hostedemail.com [216.40.44.176]) by kanga.kvack.org (Postfix) with ESMTP id 0C1548D00AD for ; Fri, 19 Jun 2020 12:07:43 -0400 (EDT) Received: from smtpin09.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id C2002181AC9C6 for ; Fri, 19 Jun 2020 16:07:42 +0000 (UTC) X-FDA: 76946441964.09.grip57_621324826e1a Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin09.hostedemail.com 
From: Peter Xu
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Gerald Schaefer, peterx@redhat.com, Andrew Morton, Andrea Arcangeli, Will Deacon, Michael Ellerman, Linus Torvalds, Nick Hu, Greentime Hu, Vincent Chen
Subject: [PATCH 12/26] mm/nds32: Use general page fault accounting
Date: Fri, 19 Jun 2020 12:05:24 -0400
Message-Id: <20200619160538.8641-13-peterx@redhat.com>
In-Reply-To: <20200619160538.8641-1-peterx@redhat.com>
References: <20200619160538.8641-1-peterx@redhat.com>
Use the general page fault accounting by passing regs into
handle_mm_fault().  It naturally solves the issue of counting a page fault
multiple times when the fault is retried.

Fix the PERF_COUNT_SW_PAGE_FAULTS perf event for page fault retries by
moving it before mmap_sem is taken, so the event is counted once per fault
rather than once per attempt.

CC: Nick Hu
CC: Greentime Hu
CC: Vincent Chen
Signed-off-by: Peter Xu
---
 arch/nds32/mm/fault.c | 19 +++----------------
 1 file changed, 3 insertions(+), 16 deletions(-)

diff --git a/arch/nds32/mm/fault.c b/arch/nds32/mm/fault.c
index 22527129025c..e7344440623c 100644
--- a/arch/nds32/mm/fault.c
+++ b/arch/nds32/mm/fault.c
@@ -122,6 +122,8 @@ void do_page_fault(unsigned long entry, unsigned long addr,
 	if (unlikely(faulthandler_disabled() || !mm))
 		goto no_context;
+	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, addr);
+
 	/*
 	 * As per x86, we may deadlock here. However, since the kernel only
 	 * validly references user space from well defined areas of the code,
@@ -207,7 +209,7 @@ void do_page_fault(unsigned long entry, unsigned long addr,
 	 * the fault.
 	 */
-	fault = handle_mm_fault(vma, addr, flags, NULL);
+	fault = handle_mm_fault(vma, addr, flags, regs);
 	/*
 	 * If we need to retry but a fatal signal is pending, handle the
@@ -229,22 +231,7 @@ void do_page_fault(unsigned long entry, unsigned long addr,
 			goto bad_area;
 	}
-	/*
-	 * Major/minor page fault accounting is only done on the initial
-	 * attempt. If we go through a retry, it is extremely likely that the
-	 * page will be found in page cache at that point.
-	 */
-	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, addr);
 	if (flags & FAULT_FLAG_ALLOW_RETRY) {
-		if (fault & VM_FAULT_MAJOR) {
-			tsk->maj_flt++;
-			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ,
-				      1, regs, addr);
-		} else {
-			tsk->min_flt++;
-			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN,
-				      1, regs, addr);
-		}
 		if (fault & VM_FAULT_RETRY) {
 			flags |= FAULT_FLAG_TRIED;

From patchwork Fri Jun 19 16:12:46 2020
From: Peter Xu
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Peter Xu, Gerald Schaefer, Andrew Morton, Andrea Arcangeli,
    Will Deacon, Michael Ellerman, Linus Torvalds, Ley Foon Tan
Subject: [PATCH 13/26] mm/nios2: Use general page fault accounting
Date: Fri, 19 Jun 2020 12:12:46 -0400
Message-Id: <20200619161246.9347-1-peterx@redhat.com>
In-Reply-To: <20200619160538.8641-1-peterx@redhat.com>
References: <20200619160538.8641-1-peterx@redhat.com>

Use the general page fault accounting by passing regs into
handle_mm_fault().  It naturally solves the issue of multiple page fault
accounting when page fault retries happen.

Add the missing PERF_COUNT_SW_PAGE_FAULTS perf event too.  Note that the
other two perf events (PERF_COUNT_SW_PAGE_FAULTS_[MAJ|MIN]) are done in
handle_mm_fault().

CC: Ley Foon Tan
Signed-off-by: Peter Xu
---
 arch/nios2/mm/fault.c | 14 ++++----------
 1 file changed, 4 insertions(+), 10 deletions(-)

diff --git a/arch/nios2/mm/fault.c b/arch/nios2/mm/fault.c
index 88abf297c759..823e7d0a9e97 100644
--- a/arch/nios2/mm/fault.c
+++ b/arch/nios2/mm/fault.c
@@ -24,6 +24,7 @@
 #include
 #include
 #include
+#include <linux/perf_event.h>

 #include
 #include
@@ -83,6 +84,8 @@ asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long cause,
 	if (user_mode(regs))
 		flags |= FAULT_FLAG_USER;

+	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
+
 	if (!down_read_trylock(&mm->mmap_sem)) {
 		if (!user_mode(regs) && !search_exception_tables(regs->ea))
 			goto bad_area_nosemaphore;
@@ -131,7 +134,7 @@ asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long cause,
 	 * make sure we exit gracefully rather than endlessly redo
 	 * the fault.
 	 */
-	fault = handle_mm_fault(vma, address, flags, NULL);
+	fault = handle_mm_fault(vma, address, flags, regs);

 	if (fault_signal_pending(fault, regs))
 		return;
@@ -146,16 +149,7 @@ asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long cause,
 			BUG();
 	}

-	/*
-	 * Major/minor page fault accounting is only done on the
-	 * initial attempt. If we go through a retry, it is extremely
-	 * likely that the page will be found in page cache at that point.
-	 */
 	if (flags & FAULT_FLAG_ALLOW_RETRY) {
-		if (fault & VM_FAULT_MAJOR)
-			current->maj_flt++;
-		else
-			current->min_flt++;
 		if (fault & VM_FAULT_RETRY) {
 			flags |= FAULT_FLAG_TRIED;
From: Peter Xu
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Peter Xu, Gerald Schaefer, Andrew Morton, Andrea Arcangeli,
    Will Deacon, Michael Ellerman, Linus Torvalds, Jonas Bonn,
    Stefan Kristiansson, Stafford Horne, openrisc@lists.librecores.org
Subject: [PATCH 14/26] mm/openrisc: Use general page fault accounting
Date: Fri, 19 Jun 2020 12:12:50 -0400
Message-Id: <20200619161250.9443-1-peterx@redhat.com>
In-Reply-To: <20200619160538.8641-1-peterx@redhat.com>
References: <20200619160538.8641-1-peterx@redhat.com>

Use the general page fault accounting by passing regs into
handle_mm_fault().  It naturally solves the issue of multiple page fault
accounting when page fault retries happen.

Add the missing PERF_COUNT_SW_PAGE_FAULTS perf event too.  Note that the
other two perf events (PERF_COUNT_SW_PAGE_FAULTS_[MAJ|MIN]) are done in
handle_mm_fault().

CC: Jonas Bonn
CC: Stefan Kristiansson
CC: Stafford Horne
CC: openrisc@lists.librecores.org
Acked-by: Stafford Horne
Signed-off-by: Peter Xu
---
 arch/openrisc/mm/fault.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/arch/openrisc/mm/fault.c b/arch/openrisc/mm/fault.c
index 45aedc572361..5255d73ce180 100644
--- a/arch/openrisc/mm/fault.c
+++ b/arch/openrisc/mm/fault.c
@@ -15,6 +15,7 @@
 #include
 #include
 #include
+#include <linux/perf_event.h>

 #include
 #include
@@ -103,6 +104,8 @@ asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long address,
 	if (in_interrupt() || !mm)
 		goto no_context;

+	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
+
 retry:
 	down_read(&mm->mmap_sem);
 	vma = find_vma(mm, address);
@@ -159,7 +162,7 @@ asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long address,
 	 * the fault.
 	 */

-	fault = handle_mm_fault(vma, address, flags, NULL);
+	fault = handle_mm_fault(vma, address, flags, regs);

 	if (fault_signal_pending(fault, regs))
 		return;
@@ -176,10 +179,6 @@ asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long address,

 	if (flags & FAULT_FLAG_ALLOW_RETRY) {
 		/*RGD modeled on Cris */
-		if (fault & VM_FAULT_MAJOR)
-			tsk->maj_flt++;
-		else
-			tsk->min_flt++;
 		if (fault & VM_FAULT_RETRY) {
 			flags |= FAULT_FLAG_TRIED;
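The nios2 and openrisc messages above (and the parisc one below) point out
that PERF_COUNT_SW_PAGE_FAULTS_[MAJ|MIN] are now emitted from
handle_mm_fault() itself.  As a rough illustration of what that generic
accounting amounts to, a helper along the following lines would be
sufficient; account_mm_fault_sketch() is a hypothetical name and this is
not the actual mm/memory.c implementation:

/* Hypothetical helper; the real accounting lives in mm/memory.c. */
static void account_mm_fault_sketch(struct pt_regs *regs,
				    unsigned long address, vm_fault_t fault)
{
	/* No regs means the caller did not ask for accounting (e.g. gup). */
	if (!regs)
		return;

	/* Only account attempts that actually completed the fault. */
	if (fault & (VM_FAULT_RETRY | VM_FAULT_ERROR))
		return;

	if (fault & VM_FAULT_MAJOR) {
		current->maj_flt++;
		perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1, regs, address);
	} else {
		current->min_flt++;
		perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1, regs, address);
	}
}

Because such a helper bails out when the attempt did not complete and when
regs is NULL, a fault that goes through a retry is charged once, against the
attempt that finally resolves it.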
From: Peter Xu
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Peter Xu, Gerald Schaefer, Andrew Morton, Andrea Arcangeli,
    Will Deacon, Michael Ellerman, Linus Torvalds, "James E. J. Bottomley",
    Helge Deller, linux-parisc@vger.kernel.org
Subject: [PATCH 15/26] mm/parisc: Use general page fault accounting
Date: Fri, 19 Jun 2020 12:13:18 -0400
Message-Id: <20200619161318.9492-1-peterx@redhat.com>
In-Reply-To: <20200619160538.8641-1-peterx@redhat.com>
References: <20200619160538.8641-1-peterx@redhat.com>

Use the general page fault accounting by passing regs into
handle_mm_fault().  It naturally solves the issue of multiple page fault
accounting when page fault retries happen.

Add the missing PERF_COUNT_SW_PAGE_FAULTS perf event too.  Note that the
other two perf events (PERF_COUNT_SW_PAGE_FAULTS_[MAJ|MIN]) are done in
handle_mm_fault().

CC: James E.J. Bottomley
CC: Helge Deller
CC: linux-parisc@vger.kernel.org
Signed-off-by: Peter Xu
---
 arch/parisc/mm/fault.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/arch/parisc/mm/fault.c b/arch/parisc/mm/fault.c
index c10908ea8803..65661e22678e 100644
--- a/arch/parisc/mm/fault.c
+++ b/arch/parisc/mm/fault.c
@@ -18,6 +18,7 @@
 #include
 #include
 #include
+#include <linux/perf_event.h>

 #include
@@ -281,6 +282,7 @@ void do_page_fault(struct pt_regs *regs, unsigned long code,
 	acc_type = parisc_acctyp(code, regs->iir);
 	if (acc_type & VM_WRITE)
 		flags |= FAULT_FLAG_WRITE;
+	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
 retry:
 	down_read(&mm->mmap_sem);
 	vma = find_vma_prev(mm, address, &prev_vma);
@@ -302,7 +304,7 @@ void do_page_fault(struct pt_regs *regs, unsigned long code,
 	 * fault.
 	 */
-	fault = handle_mm_fault(vma, address, flags, NULL);
+	fault = handle_mm_fault(vma, address, flags, regs);

 	if (fault_signal_pending(fault, regs))
 		return;
@@ -323,10 +325,6 @@ void do_page_fault(struct pt_regs *regs, unsigned long code,
 			BUG();
 	}
 	if (flags & FAULT_FLAG_ALLOW_RETRY) {
-		if (fault & VM_FAULT_MAJOR)
-			current->maj_flt++;
-		else
-			current->min_flt++;
 		if (fault & VM_FAULT_RETRY) {
 			/*
 			 * No need to up_read(&mm->mmap_sem) as we would
From: Peter Xu
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Peter Xu, Gerald Schaefer, Andrew Morton, Andrea Arcangeli,
    Will Deacon, Michael Ellerman, Linus Torvalds, Benjamin Herrenschmidt,
    Paul Mackerras, linuxppc-dev@lists.ozlabs.org
Subject: [PATCH 16/26] mm/powerpc: Use general page fault accounting
Date: Fri, 19 Jun 2020 12:13:27 -0400
Message-Id: <20200619161327.9564-1-peterx@redhat.com>
In-Reply-To: <20200619160538.8641-1-peterx@redhat.com>
References: <20200619160538.8641-1-peterx@redhat.com>

Use the general page fault accounting by passing regs into
handle_mm_fault().

CC: Michael Ellerman
CC: Benjamin Herrenschmidt
CC: Paul Mackerras
CC: linuxppc-dev@lists.ozlabs.org
Signed-off-by: Peter Xu
---
 arch/powerpc/mm/fault.c | 11 +++--------
 1 file changed, 3 insertions(+), 8 deletions(-)

diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index 992b10c3761c..e325d13efaf5 100644
--- a/arch/powerpc/mm/fault.c
+++ b/arch/powerpc/mm/fault.c
@@ -563,7 +563,7 @@ static int __do_page_fault(struct pt_regs *regs, unsigned long address,
 	 * make sure we exit gracefully rather than endlessly redo
 	 * the fault.
 	 */
-	fault = handle_mm_fault(vma, address, flags, NULL);
+	fault = handle_mm_fault(vma, address, flags, regs);

 #ifdef CONFIG_PPC_MEM_KEYS
 	/*
@@ -604,14 +604,9 @@ static int __do_page_fault(struct pt_regs *regs, unsigned long address,
 	/*
 	 * Major/minor page fault accounting.
 	 */
-	if (major) {
-		current->maj_flt++;
-		perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1, regs, address);
+	if (major)
 		cmo_account_page_fault();
-	} else {
-		current->min_flt++;
-		perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1, regs, address);
-	}
+
 	return 0;
 }
 NOKPROBE_SYMBOL(__do_page_fault);
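The powerpc conversion is slightly different from the others in this batch:
no PERF_COUNT_SW_PAGE_FAULTS call is added here, presumably because the
powerpc handler already emits that event elsewhere, and the only
arch-specific accounting left behind is the cmo_account_page_fault() hook on
major faults.  Per the hunk above, the accounting epilogue of
__do_page_fault() reduces to:

	/*
	 * Resulting epilogue after this patch: the task counters and perf
	 * events are handled by handle_mm_fault() now that regs is passed
	 * down, and only the arch-specific CMO hook remains for major faults.
	 */
	if (major)
		cmo_account_page_fault();

	return 0;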
From: Peter Xu
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Peter Xu, Gerald Schaefer, Andrew Morton, Andrea Arcangeli,
    Will Deacon, Michael Ellerman, Linus Torvalds, Paul Walmsley,
    Palmer Dabbelt, Albert Ou, linux-riscv@lists.infradead.org
Subject: [PATCH 17/26] mm/riscv: Use general page fault accounting
Date: Fri, 19 Jun 2020 12:13:31 -0400
Message-Id: <20200619161332.9614-1-peterx@redhat.com>
In-Reply-To: <20200619160538.8641-1-peterx@redhat.com>
References: <20200619160538.8641-1-peterx@redhat.com>

Use the general page fault accounting by passing regs into
handle_mm_fault().  It naturally solves the issue of multiple page fault
accounting when page fault retries happen.

CC: Paul Walmsley
CC: Palmer Dabbelt
CC: Albert Ou
CC: linux-riscv@lists.infradead.org
Signed-off-by: Peter Xu
---
 arch/riscv/mm/fault.c | 16 +---------------
 1 file changed, 1 insertion(+), 15 deletions(-)

diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c
index 677ee1bb11ac..e796ba02b572 100644
--- a/arch/riscv/mm/fault.c
+++ b/arch/riscv/mm/fault.c
@@ -110,7 +110,7 @@ asmlinkage void do_page_fault(struct pt_regs *regs)
 	 * make sure we exit gracefully rather than endlessly redo
 	 * the fault.
 	 */
-	fault = handle_mm_fault(vma, addr, flags, NULL);
+	fault = handle_mm_fault(vma, addr, flags, regs);

 	/*
 	 * If we need to retry but a fatal signal is pending, handle the
@@ -128,21 +128,7 @@ asmlinkage void do_page_fault(struct pt_regs *regs)
 			BUG();
 	}

-	/*
-	 * Major/minor page fault accounting is only done on the
-	 * initial attempt. If we go through a retry, it is extremely
-	 * likely that the page will be found in page cache at that point.
-	 */
 	if (flags & FAULT_FLAG_ALLOW_RETRY) {
-		if (fault & VM_FAULT_MAJOR) {
-			tsk->maj_flt++;
-			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ,
-				      1, regs, addr);
-		} else {
-			tsk->min_flt++;
-			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN,
-				      1, regs, addr);
-		}
 		if (fault & VM_FAULT_RETRY) {
 			flags |= FAULT_FLAG_TRIED;
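The "multiple page fault accounting" that the riscv and s390 messages refer
to is the retry case: with FAULT_FLAG_ALLOW_RETRY, a single hardware fault
can invoke handle_mm_fault() more than once, and the removed per-architecture
code ran its accounting after every attempt.  A schematic of that old
pattern, paraphrased for illustration rather than taken from any one handler
verbatim:

	/*
	 * Old per-architecture pattern (schematic): the counters were bumped
	 * after every handle_mm_fault() attempt, so an attempt that returned
	 * VM_FAULT_RETRY could already be charged to min_flt/maj_flt, and the
	 * attempt after the retry could be charged again.
	 */
retry:
	fault = handle_mm_fault(vma, addr, flags, NULL);

	if (flags & FAULT_FLAG_ALLOW_RETRY) {
		if (fault & VM_FAULT_MAJOR)
			tsk->maj_flt++;		/* may run once per attempt */
		else
			tsk->min_flt++;
		if (fault & VM_FAULT_RETRY) {
			flags |= FAULT_FLAG_TRIED;
			goto retry;		/* second pass repeats the accounting */
		}
	}

With the accounting moved behind handle_mm_fault(), only the attempt that
actually completes the fault is counted.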
From: Peter Xu
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Peter Xu, Gerald Schaefer, Andrew Morton, Andrea Arcangeli,
    Will Deacon, Michael Ellerman, Linus Torvalds, Heiko Carstens,
    Vasily Gorbik, Christian Borntraeger, linux-s390@vger.kernel.org
Subject: [PATCH 18/26] mm/s390: Use general page fault accounting
Date: Fri, 19 Jun 2020 12:13:35 -0400
Message-Id: <20200619161335.9664-1-peterx@redhat.com>
In-Reply-To: <20200619160538.8641-1-peterx@redhat.com>
References: <20200619160538.8641-1-peterx@redhat.com>

Use the general page fault accounting by passing regs into
handle_mm_fault().  It naturally solves the issue of multiple page fault
accounting when page fault retries happen.

CC: Heiko Carstens
CC: Vasily Gorbik
CC: Christian Borntraeger
CC: linux-s390@vger.kernel.org
Signed-off-by: Peter Xu
---
 arch/s390/mm/fault.c | 16 +---------------
 1 file changed, 1 insertion(+), 15 deletions(-)

diff --git a/arch/s390/mm/fault.c b/arch/s390/mm/fault.c
index ab6d7eedcfab..4d62ca7d3e09 100644
--- a/arch/s390/mm/fault.c
+++ b/arch/s390/mm/fault.c
@@ -479,7 +479,7 @@ static inline vm_fault_t do_exception(struct pt_regs *regs, int access)
 	 * make sure we exit gracefully rather than endlessly redo
 	 * the fault.
 	 */
-	fault = handle_mm_fault(vma, address, flags, NULL);
+	fault = handle_mm_fault(vma, address, flags, regs);
 	if (fault_signal_pending(fault, regs)) {
 		fault = VM_FAULT_SIGNAL;
 		if (flags & FAULT_FLAG_RETRY_NOWAIT)
@@ -489,21 +489,7 @@ static inline vm_fault_t do_exception(struct pt_regs *regs, int access)
 	if (unlikely(fault & VM_FAULT_ERROR))
 		goto out_up;

-	/*
-	 * Major/minor page fault accounting is only done on the
-	 * initial attempt. If we go through a retry, it is extremely
-	 * likely that the page will be found in page cache at that point.
-	 */
 	if (flags & FAULT_FLAG_ALLOW_RETRY) {
-		if (fault & VM_FAULT_MAJOR) {
-			tsk->maj_flt++;
-			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1,
-				      regs, address);
-		} else {
-			tsk->min_flt++;
-			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1,
-				      regs, address);
-		}
 		if (fault & VM_FAULT_RETRY) {
 			if (IS_ENABLED(CONFIG_PGSTE) && gmap &&
 			    (flags & FAULT_FLAG_RETRY_NOWAIT)) {
From: Peter Xu
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Peter Xu, Gerald Schaefer, Andrew Morton, Andrea Arcangeli,
    Will Deacon, Michael Ellerman, Linus Torvalds, Yoshinori Sato,
    Rich Felker, linux-sh@vger.kernel.org
Subject: [PATCH 19/26] mm/sh: Use general page fault accounting
Date: Fri, 19 Jun 2020 12:13:38 -0400
Message-Id: <20200619161338.9714-1-peterx@redhat.com>
In-Reply-To: <20200619160538.8641-1-peterx@redhat.com>
References: <20200619160538.8641-1-peterx@redhat.com>

Use the general page fault accounting by passing regs into
handle_mm_fault().  It naturally solves the issue of multiple page fault
accounting when page fault retries happen.

CC: Yoshinori Sato
CC: Rich Felker
CC: linux-sh@vger.kernel.org
Signed-off-by: Peter Xu
---
 arch/sh/mm/fault.c | 11 +----------
 1 file changed, 1 insertion(+), 10 deletions(-)

diff --git a/arch/sh/mm/fault.c b/arch/sh/mm/fault.c
index a4e670a9c9b3..ba6f7ed570e5 100644
--- a/arch/sh/mm/fault.c
+++ b/arch/sh/mm/fault.c
@@ -464,22 +464,13 @@ asmlinkage void __kprobes do_page_fault(struct pt_regs *regs,
 	 * make sure we exit gracefully rather than endlessly redo
 	 * the fault.
 	 */
-	fault = handle_mm_fault(vma, address, flags, NULL);
+	fault = handle_mm_fault(vma, address, flags, regs);

 	if (unlikely(fault & (VM_FAULT_RETRY | VM_FAULT_ERROR)))
 		if (mm_fault_error(regs, error_code, address, fault))
 			return;

 	if (flags & FAULT_FLAG_ALLOW_RETRY) {
-		if (fault & VM_FAULT_MAJOR) {
-			tsk->maj_flt++;
-			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1,
-				      regs, address);
-		} else {
-			tsk->min_flt++;
-			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1,
-				      regs, address);
-		}
 		if (fault & VM_FAULT_RETRY) {
 			flags |= FAULT_FLAG_TRIED;
From: Peter Xu
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Peter Xu, Gerald Schaefer, Andrew Morton, Andrea Arcangeli,
    Will Deacon, Michael Ellerman, Linus Torvalds, "David S. Miller",
    sparclinux@vger.kernel.org
Subject: [PATCH 20/26] mm/sparc32: Use general page fault accounting
Date: Fri, 19 Jun 2020 12:13:41 -0400
Message-Id: <20200619161341.9762-1-peterx@redhat.com>
In-Reply-To: <20200619160538.8641-1-peterx@redhat.com>
References: <20200619160538.8641-1-peterx@redhat.com>

Use the general page fault accounting by passing regs into
handle_mm_fault().  It naturally solves the issue of multiple page fault
accounting when page fault retries happen.

CC: David S. Miller
CC: sparclinux@vger.kernel.org
Signed-off-by: Peter Xu
---
 arch/sparc/mm/fault_32.c | 11 +----------
 1 file changed, 1 insertion(+), 10 deletions(-)

diff --git a/arch/sparc/mm/fault_32.c b/arch/sparc/mm/fault_32.c
index 61524d284706..542bf034962f 100644
--- a/arch/sparc/mm/fault_32.c
+++ b/arch/sparc/mm/fault_32.c
@@ -235,7 +235,7 @@ asmlinkage void do_sparc_fault(struct pt_regs *regs, int text_fault, int write,
 	 * make sure we exit gracefully rather than endlessly redo
 	 * the fault.
 	 */
-	fault = handle_mm_fault(vma, address, flags, NULL);
+	fault = handle_mm_fault(vma, address, flags, regs);

 	if (fault_signal_pending(fault, regs))
 		return;
@@ -251,15 +251,6 @@ asmlinkage void do_sparc_fault(struct pt_regs *regs, int text_fault, int write,
 	}

 	if (flags & FAULT_FLAG_ALLOW_RETRY) {
-		if (fault & VM_FAULT_MAJOR) {
-			current->maj_flt++;
-			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ,
-				      1, regs, address);
-		} else {
-			current->min_flt++;
-			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN,
-				      1, regs, address);
-		}
 		if (fault & VM_FAULT_RETRY) {
 			flags |= FAULT_FLAG_TRIED;
Received: from forelay.hostedemail.com (smtprelay0156.hostedemail.com [216.40.44.156]) by kanga.kvack.org (Postfix) with ESMTP id 1D2698D00AD for ; Fri, 19 Jun 2020 12:14:54 -0400 (EDT) Received: from smtpin02.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id D21592D009 for ; Fri, 19 Jun 2020 16:14:53 +0000 (UTC) X-FDA: 76946460066.02.woman80_5c12c0526e1a Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin02.hostedemail.com (Postfix) with ESMTP id B6CE63000087A654 for ; Fri, 19 Jun 2020 16:13:54 +0000 (UTC) X-Spam-Summary: 1,0,0,,d41d8cd98f00b204,peterx@redhat.com,,RULES_HIT:30003:30054,0,RBL:207.211.31.120:@redhat.com:.lbl8.mailshell.net-62.18.0.100 66.10.201.10,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:ft,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:24,LUA_SUMMARY:none X-HE-Tag: woman80_5c12c0526e1a X-Filterd-Recvd-Size: 5083 Received: from us-smtp-1.mimecast.com (us-smtp-delivery-1.mimecast.com [207.211.31.120]) by imf14.hostedemail.com (Postfix) with ESMTP for ; Fri, 19 Jun 2020 16:13:54 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1592583233; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=hHOoOMvKCdAD0ZPNXEYJ7zwb9/Vhd5GoKeFrCVrHadg=; b=Dl6l1099rx/eTzBXxmwCWJQrDdTzgU6x6L9FQQJUgE3jOD5Uk/eW0jt6ZPcDePb3eIVW7C lwIf/Nf6UD+AYQ0KTOykmiqbMuGqLvVjkEIDKvqfpIwX7EVN/ehVKG2Qx5KWD4q1nvOMX+ 0qhVcgXRxaWrwRgYxA7eIuLY5YdqiPQ= Received: from mail-qv1-f71.google.com (mail-qv1-f71.google.com [209.85.219.71]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-39-GxDAKYClOKqbmjVFS1ctrw-1; Fri, 19 Jun 2020 12:13:50 -0400 X-MC-Unique: GxDAKYClOKqbmjVFS1ctrw-1 Received: by mail-qv1-f71.google.com with SMTP id z7so7126662qve.0 for ; Fri, 19 Jun 2020 09:13:50 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=hHOoOMvKCdAD0ZPNXEYJ7zwb9/Vhd5GoKeFrCVrHadg=; b=ecgCMBLNzRSlM5UaolBZP3a5M73UA6akhK29BpUd9zVqWqwXRKAwpy43z1F9Vr01aO rOmlArRPX1hHHsK6sfxFz8z9QPSjwmWsBdJYx2SHQp3JpECn3Wd2v7XVm3jHQKSAmhuz vXJqGaK3C6nwS8oe6DwUSAoJ57Oq9fCkIdk2QM7HdyiisyTrSKsK7NvWEKYP5uLsc80g qorHuebmkUJJMmQkVhX4DiX2b/LglXmzI2t/V0mpRnq8CXsjOK1l8WU3aBtw2UDsSldG vq6XIdKzVleOORYrRl6gPGZd4I5PtUxcx9ghaz2mBtQjhpg81fDzuWSF+j+JdqyLjz0M 1pGg== X-Gm-Message-State: AOAM532N8XwyHEpgHoa3VmC5Q9eBbS1sGGT3J/jCH0Ju4BWT7Z77QZWb 9fk9wy6teyi3wcmXZiZVjO1OylH3IdXhvo/euVXvXqadJcycAKboyNko8a6YjHRSIHKsdpOegti Qky1EM2ODSpc= X-Received: by 2002:a37:9cb:: with SMTP id 194mr4168762qkj.456.1592583230421; Fri, 19 Jun 2020 09:13:50 -0700 (PDT) X-Google-Smtp-Source: ABdhPJxlsFSGYwLSHrNQCrtseIMJR1uot4X4VtcHhU1f6HFiKSJmKiiV4hUEBJBylfQ6YO+vR6eJOw== X-Received: by 2002:a37:9cb:: with SMTP id 194mr4168749qkj.456.1592583230187; Fri, 19 Jun 2020 09:13:50 -0700 (PDT) Received: from xz-x1.redhat.com ([2607:9880:19c0:32::2]) by smtp.gmail.com with ESMTPSA id v184sm1283720qki.12.2020.06.19.09.13.48 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 19 Jun 2020 09:13:49 -0700 (PDT) From: Peter Xu To: linux-kernel@vger.kernel.org, linux-mm@kvack.org Cc: Peter Xu , Gerald Schaefer , Andrew Morton , Andrea Arcangeli , Will Deacon , 
 Michael Ellerman , Linus Torvalds , "David S . Miller" , sparclinux@vger.kernel.org
Subject: [PATCH 21/26] mm/sparc64: Use general page fault accounting
Date: Fri, 19 Jun 2020 12:13:48 -0400
Message-Id: <20200619161348.9811-1-peterx@redhat.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200619160538.8641-1-peterx@redhat.com>
References: <20200619160538.8641-1-peterx@redhat.com>
MIME-Version: 1.0

Use the general page fault accounting by passing regs into
handle_mm_fault().  It naturally solves the issue of duplicated page
fault accounting when a page fault is retried.

CC: David S. Miller 
CC: sparclinux@vger.kernel.org
Signed-off-by: Peter Xu 
---
 arch/sparc/mm/fault_64.c | 11 +----------
 1 file changed, 1 insertion(+), 10 deletions(-)

diff --git a/arch/sparc/mm/fault_64.c b/arch/sparc/mm/fault_64.c
index 6b702a0a8155..fe8854d447ed 100644
--- a/arch/sparc/mm/fault_64.c
+++ b/arch/sparc/mm/fault_64.c
@@ -423,7 +423,7 @@ asmlinkage void __kprobes do_sparc64_fault(struct pt_regs *regs)
			goto bad_area;
	}

-	fault = handle_mm_fault(vma, address, flags, NULL);
+	fault = handle_mm_fault(vma, address, flags, regs);

	if (fault_signal_pending(fault, regs))
		goto exit_exception;
@@ -439,15 +439,6 @@ asmlinkage void __kprobes do_sparc64_fault(struct pt_regs *regs)
	}

	if (flags & FAULT_FLAG_ALLOW_RETRY) {
-		if (fault & VM_FAULT_MAJOR) {
-			current->maj_flt++;
-			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ,
-				      1, regs, address);
-		} else {
-			current->min_flt++;
-			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN,
-				      1, regs, address);
-		}
		if (fault & VM_FAULT_RETRY) {
			flags |= FAULT_FLAG_TRIED;
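
The "duplicated accounting" that this and the previous patch fix comes from
where the old counters sat: right after handle_mm_fault() on every pass
through the handler.  Condensed from the sparc hunks above (vma lookup,
locking and error paths omitted; this is the pre-series flow being removed):

retry:
	fault = handle_mm_fault(vma, address, flags, NULL);

	if (flags & FAULT_FLAG_ALLOW_RETRY) {
		if (fault & VM_FAULT_MAJOR)
			current->maj_flt++;	/* runs on attempt #1 ... */
		else
			current->min_flt++;
		if (fault & VM_FAULT_RETRY) {
			flags |= FAULT_FLAG_TRIED;
			goto retry;		/* ... and again after the retry */
		}
	}

With regs passed down, handle_mm_fault() performs the equivalent
bookkeeping once per fault instead, so a retried fault is no longer
counted twice.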
From patchwork Fri Jun 19 16:13:51 2020
X-Patchwork-Submitter: Peter Xu 
X-Patchwork-Id: 11614545
From: Peter Xu 
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Peter Xu , Gerald Schaefer , Andrew Morton , Andrea Arcangeli , Will
 Deacon , Michael Ellerman , Linus Torvalds , Guan Xuetao
Subject: [PATCH 22/26] mm/unicore32: Use general page fault accounting
Date: Fri, 19 Jun 2020 12:13:51 -0400
Message-Id: <20200619161351.9859-1-peterx@redhat.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200619160538.8641-1-peterx@redhat.com>
References: <20200619160538.8641-1-peterx@redhat.com>
MIME-Version: 1.0

Use the general page fault accounting by passing regs into
handle_mm_fault().  It naturally solves the issue of duplicated page
fault accounting when a page fault is retried.

Add the missing PERF_COUNT_SW_PAGE_FAULTS perf event too.  Note that the
other two perf events (PERF_COUNT_SW_PAGE_FAULTS_[MAJ|MIN]) are now
emitted from handle_mm_fault().

CC: Guan Xuetao 
Signed-off-by: Peter Xu 
---
 arch/unicore32/mm/fault.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/arch/unicore32/mm/fault.c b/arch/unicore32/mm/fault.c
index 847ff24fcc2a..b272a389d977 100644
--- a/arch/unicore32/mm/fault.c
+++ b/arch/unicore32/mm/fault.c
@@ -16,6 +16,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -160,7 +161,8 @@ static inline bool access_error(unsigned int fsr, struct vm_area_struct *vma)
 }

 static vm_fault_t __do_pf(struct mm_struct *mm, unsigned long addr,
-		unsigned int fsr, unsigned int flags, struct task_struct *tsk)
+		unsigned int fsr, unsigned int flags,
+		struct task_struct *tsk, struct pt_regs *regs)
 {
	struct vm_area_struct *vma;
	vm_fault_t fault;
@@ -186,7 +188,7 @@ static vm_fault_t __do_pf(struct mm_struct *mm, unsigned long addr,
	 * If for any reason at all we couldn't handle the fault, make
	 * sure we exit gracefully rather than endlessly redo the fault.
	 */
-	fault = handle_mm_fault(vma, addr & PAGE_MASK, flags, NULL);
+	fault = handle_mm_fault(vma, addr & PAGE_MASK, flags, regs);
	return fault;

 check_stack:
@@ -219,6 +221,8 @@ static int do_pf(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
	if (!(fsr ^ 0x12))
		flags |= FAULT_FLAG_WRITE;

+	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, addr);
+
	/*
	 * As per x86, we may deadlock here. However, since the kernel only
	 * validly references user space from well defined areas of the code,
@@ -244,7 +248,7 @@ static int do_pf(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
 #endif
	}

-	fault = __do_pf(mm, addr, fsr, flags, tsk);
+	fault = __do_pf(mm, addr, fsr, flags, tsk, regs);

	/* If we need to retry but a fatal signal is pending, handle the
	 * signal first. We do not need to release the mmap_sem because
@@ -254,10 +258,6 @@ static int do_pf(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
		return 0;

	if (!(fault & VM_FAULT_ERROR) && (flags & FAULT_FLAG_ALLOW_RETRY)) {
-		if (fault & VM_FAULT_MAJOR)
-			tsk->maj_flt++;
-		else
-			tsk->min_flt++;
		if (fault & VM_FAULT_RETRY) {
			flags |= FAULT_FLAG_TRIED;
			goto retry;
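
The net effect on unicore32 is that the three software events are emitted
from two different places: the arch handler fires the total-faults event
on entry, while the major/minor split is decided in the core once the
fault's outcome is known.  A condensed outline (not the literal do_pf()
body, just the placement it converges on):

static int do_pf(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
{
	/* Counted once per trap, before the fault is even attempted. */
	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, addr);

	/* ... vma lookup, flag setup and the retry loop elided ... */

	/*
	 * __do_pf() -> handle_mm_fault(vma, addr & PAGE_MASK, flags, regs):
	 * the core bumps current->maj_flt or current->min_flt and emits
	 * PERF_COUNT_SW_PAGE_FAULTS_MAJ or _MIN exactly once per fault.
	 */
	return 0;
}

These are the same counters that "perf stat -e page-faults,minor-faults,major-faults"
and getrusage()'s ru_minflt/ru_majflt report, which is why the series cares
about counting each fault exactly once.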
From patchwork Fri Jun 19 16:13:54 2020
X-Patchwork-Submitter: Peter Xu 
X-Patchwork-Id: 11614551
From: Peter Xu 
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Peter Xu , Gerald Schaefer , Andrew Morton , Andrea Arcangeli , Will Deacon , Michael Ellerman , Linus Torvalds , Dave Hansen , Andy Lutomirski , Peter Zijlstra , Thomas Gleixner , Ingo Molnar , Borislav Petkov , x86@kernel.org, "H . Peter Anvin" 
Subject: [PATCH 23/26] mm/x86: Use general page fault accounting
Date: Fri, 19 Jun 2020 12:13:54 -0400
Message-Id: <20200619161354.9907-1-peterx@redhat.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200619160538.8641-1-peterx@redhat.com>
References: <20200619160538.8641-1-peterx@redhat.com>
MIME-Version: 1.0

Use the general page fault accounting by passing regs into
handle_mm_fault().

CC: Dave Hansen 
CC: Andy Lutomirski 
CC: Peter Zijlstra 
CC: Thomas Gleixner 
CC: Ingo Molnar 
CC: Borislav Petkov 
CC: x86@kernel.org
CC: H. Peter Anvin 
Signed-off-by: Peter Xu 
---
 arch/x86/mm/fault.c | 17 ++---------------
 1 file changed, 2 insertions(+), 15 deletions(-)

diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index 3e27ed85af06..4604755a303d 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -1309,7 +1309,7 @@ void do_user_addr_fault(struct pt_regs *regs,
	struct vm_area_struct *vma;
	struct task_struct *tsk;
	struct mm_struct *mm;
-	vm_fault_t fault, major = 0;
+	vm_fault_t fault;
	unsigned int flags = FAULT_FLAG_DEFAULT;

	tsk = current;
@@ -1461,8 +1461,7 @@ void do_user_addr_fault(struct pt_regs *regs,
	 * userland). The return to userland is identified whenever
	 * FAULT_FLAG_USER|FAULT_FLAG_KILLABLE are both set in flags.
	 */
-	fault = handle_mm_fault(vma, address, flags, NULL);
-	major |= fault & VM_FAULT_MAJOR;
+	fault = handle_mm_fault(vma, address, flags, regs);

	/* Quick path to respond to signals */
	if (fault_signal_pending(fault, regs)) {
@@ -1489,18 +1488,6 @@ void do_user_addr_fault(struct pt_regs *regs,
		return;
	}

-	/*
-	 * Major/minor page fault accounting. If any of the events
-	 * returned VM_FAULT_MAJOR, we account it as a major fault.
-	 */
-	if (major) {
-		tsk->maj_flt++;
-		perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1, regs, address);
-	} else {
-		tsk->min_flt++;
-		perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1, regs, address);
-	}
-
	check_v8086_mode(regs, address, tsk);
 }
 NOKPROBE_SYMBOL(do_user_addr_fault);
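
x86 used to OR VM_FAULT_MAJOR into a local "major" across retries; with
that gone, the equivalent decision happens in the core.  Based on the
mm/memory.c hunk visible later in this series, it boils down to the
following (the helper name below is illustrative, not something the patch
adds):

/*
 * The fault counts as major if the final attempt returned VM_FAULT_MAJOR,
 * or if any retry was needed at all (FAULT_FLAG_TRIED).  This approximates
 * the old x86 "major |= fault & VM_FAULT_MAJOR" accumulation, since a
 * fault that had to be retried has normally waited for I/O.
 */
static inline bool fault_was_major(vm_fault_t ret, unsigned int flags)
{
	return (ret & VM_FAULT_MAJOR) || (flags & FAULT_FLAG_TRIED);
}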
From patchwork Fri Jun 19 16:13:58 2020
X-Patchwork-Submitter: Peter Xu 
X-Patchwork-Id: 11614549
From: Peter Xu 
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Peter Xu , Gerald Schaefer , Andrew Morton , Andrea Arcangeli , Will Deacon , Michael Ellerman , Linus Torvalds , Chris Zankel , Max Filippov , linux-xtensa@linux-xtensa.org
Subject: [PATCH 24/26] mm/xtensa: Use general page fault accounting
Date: Fri, 19 Jun 2020 12:13:58 -0400
Message-Id: <20200619161358.9956-1-peterx@redhat.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200619160538.8641-1-peterx@redhat.com>
References: <20200619160538.8641-1-peterx@redhat.com>
MIME-Version: 1.0

Use the general page fault accounting by passing regs into
handle_mm_fault().  It naturally solves the issue of duplicated page
fault accounting when a page fault is retried.

Remove the PERF_COUNT_SW_PAGE_FAULTS_[MAJ|MIN] perf events because they
are now also emitted from handle_mm_fault().

Move the PERF_COUNT_SW_PAGE_FAULTS event up, to before taking mmap_sem
for the fault, so that it matches the rest of the archs.
CC: Chris Zankel 
CC: Max Filippov 
CC: linux-xtensa@linux-xtensa.org
Acked-by: Max Filippov 
Signed-off-by: Peter Xu 
---
 arch/xtensa/mm/fault.c | 15 ++++-----------
 1 file changed, 4 insertions(+), 11 deletions(-)

diff --git a/arch/xtensa/mm/fault.c b/arch/xtensa/mm/fault.c
index 722ef3c98d60..9ef7331e37f8 100644
--- a/arch/xtensa/mm/fault.c
+++ b/arch/xtensa/mm/fault.c
@@ -73,6 +73,9 @@ void do_page_fault(struct pt_regs *regs)

	if (user_mode(regs))
		flags |= FAULT_FLAG_USER;
+
+	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
+
 retry:
	down_read(&mm->mmap_sem);
	vma = find_vma(mm, address);
@@ -108,7 +111,7 @@ void do_page_fault(struct pt_regs *regs)
	 * make sure we exit gracefully rather than endlessly redo
	 * the fault.
	 */
-	fault = handle_mm_fault(vma, address, flags, NULL);
+	fault = handle_mm_fault(vma, address, flags, regs);

	if (fault_signal_pending(fault, regs))
		return;
@@ -123,10 +126,6 @@ void do_page_fault(struct pt_regs *regs)
		BUG();
	}
	if (flags & FAULT_FLAG_ALLOW_RETRY) {
-		if (fault & VM_FAULT_MAJOR)
-			current->maj_flt++;
-		else
-			current->min_flt++;
		if (fault & VM_FAULT_RETRY) {
			flags |= FAULT_FLAG_TRIED;

@@ -140,12 +139,6 @@ void do_page_fault(struct pt_regs *regs)
	}

	up_read(&mm->mmap_sem);
-	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
-	if (flags & VM_FAULT_MAJOR)
-		perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1, regs, address);
-	else
-		perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1, regs, address);
-
	return;

	/* Something tried to access memory that isn't in our memory map..
From patchwork Fri Jun 19 16:14:02 2020
X-Patchwork-Submitter: Peter Xu 
X-Patchwork-Id: 11614547
From: Peter Xu 
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Peter Xu , Gerald Schaefer , Andrew Morton , Andrea Arcangeli , Will Deacon , Michael Ellerman , Linus Torvalds 
Subject: [PATCH 25/26] mm: Clean up the last pieces of page fault accountings
Date: Fri, 19 Jun 2020 12:14:02 -0400
Message-Id: <20200619161402.10004-1-peterx@redhat.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200619160538.8641-1-peterx@redhat.com>
References: <20200619160538.8641-1-peterx@redhat.com>
MIME-Version: 1.0
Here are the last pieces of page fault accounting that were still done
outside handle_mm_fault(), where we still have regs==NULL when calling
handle_mm_fault():

   arch/powerpc/mm/copro_fault.c:  copro_handle_mm_fault
   arch/sparc/mm/fault_32.c:       force_user_fault
   arch/um/kernel/trap.c:          handle_page_fault
   mm/gup.c:                       faultin_page
                                   fixup_user_fault
   mm/hmm.c:                       hmm_vma_fault
   mm/ksm.c:                       break_ksm

Some of them have the issue of duplicated accounting for page fault
retries.  Some of them didn't do the accounting at all.

This patch cleans all these up by letting handle_mm_fault() do the
per-task page fault accounting even if regs==NULL (though we'll still
skip the perf event accounting).  With that, we can safely remove all
the outliers now.

There's another functional change in that we now account the page faults
to the caller of gup, rather than to the task_struct that was passed into
the gup code.  More information on this can be found at [1].

After this patch, the things below should never be touched again outside
handle_mm_fault():

  - task_struct.[maj|min]_flt
  - PERF_COUNT_SW_PAGE_FAULTS_[MAJ|MIN]

[1] https://lore.kernel.org/lkml/CAHk-=wj_V2Tps2QrMn20_W0OJF9xqNh52XSGA42s-ZJ8Y+GyKw@mail.gmail.com/

Signed-off-by: Peter Xu 
---
 arch/powerpc/mm/copro_fault.c |  5 -----
 arch/um/kernel/trap.c         |  4 ----
 mm/gup.c                      | 13 -------------
 mm/memory.c                   | 20 ++++++++++++--------
 4 files changed, 12 insertions(+), 30 deletions(-)

diff --git a/arch/powerpc/mm/copro_fault.c b/arch/powerpc/mm/copro_fault.c
index c0478bef1f14..2e59be1a9359 100644
--- a/arch/powerpc/mm/copro_fault.c
+++ b/arch/powerpc/mm/copro_fault.c
@@ -76,11 +76,6 @@ int copro_handle_mm_fault(struct mm_struct *mm, unsigned long ea,
		BUG();
	}

-	if (*flt & VM_FAULT_MAJOR)
-		current->maj_flt++;
-	else
-		current->min_flt++;
-
 out_unlock:
	up_read(&mm->mmap_sem);
	return ret;
diff --git a/arch/um/kernel/trap.c b/arch/um/kernel/trap.c
index 32cc8f59322b..c881831de357 100644
--- a/arch/um/kernel/trap.c
+++ b/arch/um/kernel/trap.c
@@ -92,10 +92,6 @@ int handle_page_fault(unsigned long address, unsigned long ip,
			BUG();
		}
		if (flags & FAULT_FLAG_ALLOW_RETRY) {
-			if (fault & VM_FAULT_MAJOR)
-				current->maj_flt++;
-			else
-				current->min_flt++;
			if (fault & VM_FAULT_RETRY) {
				flags |= FAULT_FLAG_TRIED;
diff --git a/mm/gup.c b/mm/gup.c
index 1a48c639ea49..17b4d0c45a6b 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -885,13 +885,6 @@ static int faultin_page(struct task_struct *tsk, struct vm_area_struct *vma,
		BUG();
	}

-	if (tsk) {
-		if (ret & VM_FAULT_MAJOR)
-			tsk->maj_flt++;
-		else
-			tsk->min_flt++;
-	}
-
	if (ret & VM_FAULT_RETRY) {
		if (locked && !(fault_flags & FAULT_FLAG_RETRY_NOWAIT))
			*locked = 0;
@@ -1239,12 +1232,6 @@ int fixup_user_fault(struct task_struct *tsk, struct mm_struct *mm,
		goto retry;
	}

-	if (tsk) {
-		if (major)
-			tsk->maj_flt++;
-		else
-			tsk->min_flt++;
-	}
	return 0;
 }
 EXPORT_SYMBOL_GPL(fixup_user_fault);
diff --git a/mm/memory.c b/mm/memory.c
index 23c738b3756e..59a2989231fa 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4350,6 +4350,8 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
 /**
  * mm_account_fault - Do page fault accountings
  * @regs: the pt_regs struct pointer.  When set to NULL, will skip accounting
+ *        of perf event counters, but we'll still do the per-task accounting
+ *        to the task who triggered this page fault.
  * @address: faulted address.
  * @major: whether this is a major fault.
  *
@@ -4365,16 +4367,18 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
 static inline void mm_account_fault(struct pt_regs *regs,
				    unsigned long address, bool major)
 {
+	if (major)
+		current->maj_flt++;
+	else
+		current->min_flt++;
+
	if (!regs)
		return;

-	if (major) {
-		current->maj_flt++;
+	if (major)
		perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1, regs, address);
-	} else {
-		current->min_flt++;
+	else
		perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1, regs, address);
-	}
 }

 /*
@@ -4450,9 +4454,9 @@ vm_fault_t handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
	 *    immediately previously).
	 *
	 *  - if the fault is done for GUP, regs wil be NULL and
-	 *    no accounting will be done (but you _could_ pass in
-	 *    your own regs and it would be accounted to the thread
-	 *    doing the fault, not to the target!)
+	 *    we only do the accounting for the per thread fault
+	 *    counters who triggered the fault, and we skip the
+	 *    perf event updates.
	 */
	mm_account_fault(regs, address, (ret & VM_FAULT_MAJOR) ||
					(flags & FAULT_FLAG_TRIED));
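
One practical consequence of the regs==NULL case above: a kernel path that
faults in user memory on behalf of another mm now gets the fault charged
to its own task counters, and no perf event is emitted.  A sketch of such
a caller (the function below is illustrative, not part of the patch; it
uses the post-series fixup_user_fault() signature):

/*
 * Illustrative only: fault in one user page of "mm" from kernel context.
 * With this series the fault is accounted to current->min_flt (or
 * maj_flt), i.e. to the caller, because handle_mm_fault() is reached
 * with regs == NULL; the PERF_COUNT_SW_PAGE_FAULTS_* events are skipped.
 */
static int prefault_one_page(struct mm_struct *mm, unsigned long addr)
{
	bool unlocked = false;
	int ret;

	down_read(&mm->mmap_sem);
	ret = fixup_user_fault(mm, addr, FAULT_FLAG_WRITE, &unlocked);
	up_read(&mm->mmap_sem);

	return ret;
}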
From patchwork Fri Jun 19 16:14:05 2020
X-Patchwork-Submitter: Peter Xu 
X-Patchwork-Id: 11614561
From: Peter Xu 
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Peter Xu , Gerald Schaefer , Andrew Morton , Andrea Arcangeli , Will Deacon , Michael Ellerman , Linus Torvalds 
Subject: [PATCH 26/26] mm/gup: Remove task_struct pointer for all gup code
Date: Fri, 19 Jun 2020 12:14:05 -0400
Message-Id: <20200619161405.10052-1-peterx@redhat.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200619160538.8641-1-peterx@redhat.com>
References: <20200619160538.8641-1-peterx@redhat.com>
MIME-Version: 1.0
X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000,
version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: After the cleanup of page fault accounting, gup does not need to pass task_struct around any more. Remove that parameter in the whole gup stack. Signed-off-by: Peter Xu --- arch/arc/kernel/process.c | 2 +- arch/s390/kvm/interrupt.c | 2 +- arch/s390/kvm/kvm-s390.c | 2 +- arch/s390/kvm/priv.c | 8 +- arch/s390/mm/gmap.c | 4 +- drivers/gpu/drm/i915/gem/i915_gem_userptr.c | 2 +- drivers/infiniband/core/umem_odp.c | 2 +- drivers/vfio/vfio_iommu_type1.c | 2 +- fs/exec.c | 2 +- include/linux/mm.h | 9 +-- kernel/events/uprobes.c | 6 +- kernel/futex.c | 2 +- mm/gup.c | 90 +++++++++------------ mm/memory.c | 2 +- mm/process_vm_access.c | 2 +- security/tomoyo/domain.c | 2 +- virt/kvm/async_pf.c | 2 +- virt/kvm/kvm_main.c | 2 +- 18 files changed, 63 insertions(+), 80 deletions(-) diff --git a/arch/arc/kernel/process.c b/arch/arc/kernel/process.c index 315528f04bc1..2aad79ffc7f8 100644 --- a/arch/arc/kernel/process.c +++ b/arch/arc/kernel/process.c @@ -91,7 +91,7 @@ SYSCALL_DEFINE3(arc_usr_cmpxchg, int *, uaddr, int, expected, int, new) goto fail; down_read(¤t->mm->mmap_sem); - ret = fixup_user_fault(current, current->mm, (unsigned long) uaddr, + ret = fixup_user_fault(current->mm, (unsigned long) uaddr, FAULT_FLAG_WRITE, NULL); up_read(¤t->mm->mmap_sem); diff --git a/arch/s390/kvm/interrupt.c b/arch/s390/kvm/interrupt.c index bfb481134994..7f4c5895aabd 100644 --- a/arch/s390/kvm/interrupt.c +++ b/arch/s390/kvm/interrupt.c @@ -2768,7 +2768,7 @@ static struct page *get_map_page(struct kvm *kvm, u64 uaddr) struct page *page = NULL; down_read(&kvm->mm->mmap_sem); - get_user_pages_remote(NULL, kvm->mm, uaddr, 1, FOLL_WRITE, + get_user_pages_remote(kvm->mm, uaddr, 1, FOLL_WRITE, &page, NULL, NULL); up_read(&kvm->mm->mmap_sem); return page; diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c index d05bb040fd42..12fa299986f8 100644 --- a/arch/s390/kvm/kvm-s390.c +++ b/arch/s390/kvm/kvm-s390.c @@ -1892,7 +1892,7 @@ static long kvm_s390_set_skeys(struct kvm *kvm, struct kvm_s390_skeys *args) r = set_guest_storage_key(current->mm, hva, keys[i], 0); if (r) { - r = fixup_user_fault(current, current->mm, hva, + r = fixup_user_fault(current->mm, hva, FAULT_FLAG_WRITE, &unlocked); if (r) break; diff --git a/arch/s390/kvm/priv.c b/arch/s390/kvm/priv.c index 893893642415..45b7d5df72d7 100644 --- a/arch/s390/kvm/priv.c +++ b/arch/s390/kvm/priv.c @@ -274,7 +274,7 @@ static int handle_iske(struct kvm_vcpu *vcpu) rc = get_guest_storage_key(current->mm, vmaddr, &key); if (rc) { - rc = fixup_user_fault(current, current->mm, vmaddr, + rc = fixup_user_fault(current->mm, vmaddr, FAULT_FLAG_WRITE, &unlocked); if (!rc) { up_read(¤t->mm->mmap_sem); @@ -320,7 +320,7 @@ static int handle_rrbe(struct kvm_vcpu *vcpu) down_read(¤t->mm->mmap_sem); rc = reset_guest_reference_bit(current->mm, vmaddr); if (rc < 0) { - rc = fixup_user_fault(current, current->mm, vmaddr, + rc = fixup_user_fault(current->mm, vmaddr, FAULT_FLAG_WRITE, &unlocked); if (!rc) { up_read(¤t->mm->mmap_sem); @@ -391,7 +391,7 @@ static int handle_sske(struct kvm_vcpu *vcpu) m3 & SSKE_MC); if (rc < 0) { - rc = fixup_user_fault(current, current->mm, vmaddr, + rc = fixup_user_fault(current->mm, vmaddr, FAULT_FLAG_WRITE, &unlocked); rc = !rc ? 
-EAGAIN : rc; } @@ -1095,7 +1095,7 @@ static int handle_pfmf(struct kvm_vcpu *vcpu) rc = cond_set_guest_storage_key(current->mm, vmaddr, key, NULL, nq, mr, mc); if (rc < 0) { - rc = fixup_user_fault(current, current->mm, vmaddr, + rc = fixup_user_fault(current->mm, vmaddr, FAULT_FLAG_WRITE, &unlocked); rc = !rc ? -EAGAIN : rc; } diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c index 1a95d8809cc3..0faf4f5f3fd4 100644 --- a/arch/s390/mm/gmap.c +++ b/arch/s390/mm/gmap.c @@ -649,7 +649,7 @@ int gmap_fault(struct gmap *gmap, unsigned long gaddr, rc = vmaddr; goto out_up; } - if (fixup_user_fault(current, gmap->mm, vmaddr, fault_flags, + if (fixup_user_fault(gmap->mm, vmaddr, fault_flags, &unlocked)) { rc = -EFAULT; goto out_up; @@ -879,7 +879,7 @@ static int gmap_pte_op_fixup(struct gmap *gmap, unsigned long gaddr, BUG_ON(gmap_is_shadow(gmap)); fault_flags = (prot == PROT_WRITE) ? FAULT_FLAG_WRITE : 0; - if (fixup_user_fault(current, mm, vmaddr, fault_flags, &unlocked)) + if (fixup_user_fault(mm, vmaddr, fault_flags, &unlocked)) return -EFAULT; if (unlocked) /* lost mmap_sem, caller has to retry __gmap_translate */ diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c index 7ffd7afeb7a5..e87fa79c18d5 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c @@ -472,7 +472,7 @@ __i915_gem_userptr_get_pages_worker(struct work_struct *_work) locked = 1; } ret = get_user_pages_remote - (work->task, mm, + (mm, obj->userptr.ptr + pinned * PAGE_SIZE, npages - pinned, flags, diff --git a/drivers/infiniband/core/umem_odp.c b/drivers/infiniband/core/umem_odp.c index 3b1e627d9a8d..73b1a01b7339 100644 --- a/drivers/infiniband/core/umem_odp.c +++ b/drivers/infiniband/core/umem_odp.c @@ -437,7 +437,7 @@ int ib_umem_odp_map_dma_pages(struct ib_umem_odp *umem_odp, u64 user_virt, * complex (and doesn't gain us much performance in most use * cases). */ - npages = get_user_pages_remote(owning_process, owning_mm, + npages = get_user_pages_remote(owning_mm, user_virt, gup_num_pages, flags, local_page_list, NULL, NULL); up_read(&owning_mm->mmap_sem); diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c index cc1d64765ce7..d77b34d6ee19 100644 --- a/drivers/vfio/vfio_iommu_type1.c +++ b/drivers/vfio/vfio_iommu_type1.c @@ -329,7 +329,7 @@ static int vaddr_get_pfn(struct mm_struct *mm, unsigned long vaddr, flags |= FOLL_WRITE; down_read(&mm->mmap_sem); - ret = pin_user_pages_remote(NULL, mm, vaddr, 1, flags | FOLL_LONGTERM, + ret = pin_user_pages_remote(mm, vaddr, 1, flags | FOLL_LONGTERM, page, NULL, NULL); if (ret == 1) { *pfn = page_to_pfn(page[0]); diff --git a/fs/exec.c b/fs/exec.c index 2c465119affc..f3f87911f3d0 100644 --- a/fs/exec.c +++ b/fs/exec.c @@ -213,7 +213,7 @@ static struct page *get_arg_page(struct linux_binprm *bprm, unsigned long pos, * We are doing an exec(). 'current' is the process * doing the exec and bprm->mm is the new process's mm. 
*/ - ret = get_user_pages_remote(current, bprm->mm, pos, 1, gup_flags, + ret = get_user_pages_remote(bprm->mm, pos, 1, gup_flags, &page, NULL, NULL); if (ret <= 0) return NULL; diff --git a/include/linux/mm.h b/include/linux/mm.h index 46bee4044ac1..5e347ffb049f 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -1655,7 +1655,7 @@ int invalidate_inode_page(struct page *page); extern vm_fault_t handle_mm_fault(struct vm_area_struct *vma, unsigned long address, unsigned int flags, struct pt_regs *regs); -extern int fixup_user_fault(struct task_struct *tsk, struct mm_struct *mm, +extern int fixup_user_fault(struct mm_struct *mm, unsigned long address, unsigned int fault_flags, bool *unlocked); void unmap_mapping_pages(struct address_space *mapping, @@ -1671,8 +1671,7 @@ static inline vm_fault_t handle_mm_fault(struct vm_area_struct *vma, BUG(); return VM_FAULT_SIGBUS; } -static inline int fixup_user_fault(struct task_struct *tsk, - struct mm_struct *mm, unsigned long address, +static inline int fixup_user_fault(struct mm_struct *mm, unsigned long address, unsigned int fault_flags, bool *unlocked) { /* should never happen if there's no MMU */ @@ -1698,11 +1697,11 @@ extern int access_remote_vm(struct mm_struct *mm, unsigned long addr, extern int __access_remote_vm(struct task_struct *tsk, struct mm_struct *mm, unsigned long addr, void *buf, int len, unsigned int gup_flags); -long get_user_pages_remote(struct task_struct *tsk, struct mm_struct *mm, +long get_user_pages_remote(struct mm_struct *mm, unsigned long start, unsigned long nr_pages, unsigned int gup_flags, struct page **pages, struct vm_area_struct **vmas, int *locked); -long pin_user_pages_remote(struct task_struct *tsk, struct mm_struct *mm, +long pin_user_pages_remote(struct mm_struct *mm, unsigned long start, unsigned long nr_pages, unsigned int gup_flags, struct page **pages, struct vm_area_struct **vmas, int *locked); diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c index ece7e13f6e4a..b7c9ad7e7d54 100644 --- a/kernel/events/uprobes.c +++ b/kernel/events/uprobes.c @@ -382,7 +382,7 @@ __update_ref_ctr(struct mm_struct *mm, unsigned long vaddr, short d) if (!vaddr || !d) return -EINVAL; - ret = get_user_pages_remote(NULL, mm, vaddr, 1, + ret = get_user_pages_remote(mm, vaddr, 1, FOLL_WRITE, &page, &vma, NULL); if (unlikely(ret <= 0)) { /* @@ -483,7 +483,7 @@ int uprobe_write_opcode(struct arch_uprobe *auprobe, struct mm_struct *mm, if (is_register) gup_flags |= FOLL_SPLIT_PMD; /* Read the page with vaddr into memory */ - ret = get_user_pages_remote(NULL, mm, vaddr, 1, gup_flags, + ret = get_user_pages_remote(mm, vaddr, 1, gup_flags, &old_page, &vma, NULL); if (ret <= 0) return ret; @@ -2027,7 +2027,7 @@ static int is_trap_at_addr(struct mm_struct *mm, unsigned long vaddr) * but we treat this as a 'remote' access since it is * essentially a kernel access to the memory. 
 	 */
-	result = get_user_pages_remote(NULL, mm, vaddr, 1, FOLL_FORCE, &page,
+	result = get_user_pages_remote(mm, vaddr, 1, FOLL_FORCE, &page,
 			NULL, NULL);
 	if (result < 0)
 		return result;
diff --git a/kernel/futex.c b/kernel/futex.c
index b59532862bc0..1466b4322491 100644
--- a/kernel/futex.c
+++ b/kernel/futex.c
@@ -696,7 +696,7 @@ static int fault_in_user_writeable(u32 __user *uaddr)
 	int ret;
 
 	down_read(&mm->mmap_sem);
-	ret = fixup_user_fault(current, mm, (unsigned long)uaddr,
+	ret = fixup_user_fault(mm, (unsigned long)uaddr,
 			       FAULT_FLAG_WRITE, NULL);
 	up_read(&mm->mmap_sem);
 
diff --git a/mm/gup.c b/mm/gup.c
index 17b4d0c45a6b..b8eb02673c10 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -851,7 +851,7 @@ static int get_gate_page(struct mm_struct *mm, unsigned long address,
  * does not include FOLL_NOWAIT, the mmap_sem may be released. If it
  * is, *@locked will be set to 0 and -EBUSY returned.
  */
-static int faultin_page(struct task_struct *tsk, struct vm_area_struct *vma,
+static int faultin_page(struct vm_area_struct *vma,
 		unsigned long address, unsigned int *flags, int *locked)
 {
 	unsigned int fault_flags = 0;
@@ -954,7 +954,6 @@ static int check_vma_flags(struct vm_area_struct *vma, unsigned long gup_flags)
 
 /**
  * __get_user_pages() - pin user pages in memory
- * @tsk:	task_struct of target task
  * @mm:		mm_struct of target mm
  * @start:	starting user address
  * @nr_pages:	number of pages from start to pin
@@ -1012,7 +1011,7 @@ static int check_vma_flags(struct vm_area_struct *vma, unsigned long gup_flags)
  * instead of __get_user_pages. __get_user_pages should be used only if
  * you need some special @gup_flags.
  */
-static long __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
+static long __get_user_pages(struct mm_struct *mm,
 		unsigned long start, unsigned long nr_pages,
 		unsigned int gup_flags, struct page **pages,
 		struct vm_area_struct **vmas, int *locked)
@@ -1088,8 +1087,7 @@ static long __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
 
 		page = follow_page_mask(vma, start, foll_flags, &ctx);
 		if (!page) {
-			ret = faultin_page(tsk, vma, start, &foll_flags,
-					   locked);
+			ret = faultin_page(vma, start, &foll_flags, locked);
 			switch (ret) {
 			case 0:
 				goto retry;
@@ -1163,8 +1161,6 @@ static bool vma_permits_fault(struct vm_area_struct *vma,
 
 /*
  * fixup_user_fault() - manually resolve a user page fault
- * @tsk:	the task_struct to use for page fault accounting, or
- *		NULL if faults are not to be recorded.
  * @mm:		mm_struct of target mm
  * @address:	user address
  * @fault_flags:flags to pass down to handle_mm_fault()
@@ -1191,7 +1187,7 @@ static bool vma_permits_fault(struct vm_area_struct *vma,
  * This function will not return with an unlocked mmap_sem. So it has not the
  * same semantics wrt the @mm->mmap_sem as does filemap_fault().
  */
-int fixup_user_fault(struct task_struct *tsk, struct mm_struct *mm,
+int fixup_user_fault(struct mm_struct *mm,
 		     unsigned long address, unsigned int fault_flags,
 		     bool *unlocked)
 {
@@ -1236,8 +1232,7 @@ int fixup_user_fault(struct task_struct *tsk, struct mm_struct *mm,
 }
 EXPORT_SYMBOL_GPL(fixup_user_fault);
 
-static __always_inline long __get_user_pages_locked(struct task_struct *tsk,
-						struct mm_struct *mm,
+static __always_inline long __get_user_pages_locked(struct mm_struct *mm,
 						unsigned long start,
 						unsigned long nr_pages,
 						struct page **pages,
@@ -1270,7 +1265,7 @@ static __always_inline long __get_user_pages_locked(struct task_struct *tsk,
 	pages_done = 0;
 	lock_dropped = false;
 	for (;;) {
-		ret = __get_user_pages(tsk, mm, start, nr_pages, flags, pages,
+		ret = __get_user_pages(mm, start, nr_pages, flags, pages,
 				       vmas, locked);
 		if (!locked)
 			/* VM_FAULT_RETRY couldn't trigger, bypass */
@@ -1330,7 +1325,7 @@ static __always_inline long __get_user_pages_locked(struct task_struct *tsk,
 		}
 
 		*locked = 1;
-		ret = __get_user_pages(tsk, mm, start, 1, flags | FOLL_TRIED,
+		ret = __get_user_pages(mm, start, 1, flags | FOLL_TRIED,
 				       pages, NULL, locked);
 		if (!*locked) {
 			/* Continue to retry until we succeeded */
@@ -1416,7 +1411,7 @@ long populate_vma_page_range(struct vm_area_struct *vma,
 	 * We made sure addr is within a VMA, so the following will
 	 * not result in a stack expansion that recurses back here.
 	 */
-	return __get_user_pages(current, mm, start, nr_pages, gup_flags,
+	return __get_user_pages(mm, start, nr_pages, gup_flags,
 				NULL, NULL, locked);
 }
 
@@ -1500,7 +1495,7 @@ struct page *get_dump_page(unsigned long addr)
 	struct vm_area_struct *vma;
 	struct page *page;
 
-	if (__get_user_pages(current, current->mm, addr, 1,
+	if (__get_user_pages(current->mm, addr, 1,
 			     FOLL_FORCE | FOLL_DUMP | FOLL_GET, &page, &vma,
 			     NULL) < 1)
 		return NULL;
@@ -1509,8 +1504,7 @@ struct page *get_dump_page(unsigned long addr)
 }
 #endif /* CONFIG_ELF_CORE */
 #else /* CONFIG_MMU */
-static long __get_user_pages_locked(struct task_struct *tsk,
-		struct mm_struct *mm, unsigned long start,
+static long __get_user_pages_locked(struct mm_struct *mm, unsigned long start,
 		unsigned long nr_pages, struct page **pages,
 		struct vm_area_struct **vmas, int *locked,
 		unsigned int foll_flags)
@@ -1626,8 +1620,7 @@ static struct page *new_non_cma_page(struct page *page, unsigned long private)
 	return __alloc_pages_node(nid, gfp_mask, 0);
 }
 
-static long check_and_migrate_cma_pages(struct task_struct *tsk,
-					struct mm_struct *mm,
+static long check_and_migrate_cma_pages(struct mm_struct *mm,
 					unsigned long start,
 					unsigned long nr_pages,
 					struct page **pages,
@@ -1701,7 +1694,7 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk,
 		 * again migrating any new CMA pages which we failed to isolate
 		 * earlier.
 		 */
-		ret = __get_user_pages_locked(tsk, mm, start, nr_pages,
+		ret = __get_user_pages_locked(mm, start, nr_pages,
 						   pages, vmas, NULL,
 						   gup_flags);
 
@@ -1715,8 +1708,7 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk,
 	return ret;
 }
 #else
-static long check_and_migrate_cma_pages(struct task_struct *tsk,
-					struct mm_struct *mm,
+static long check_and_migrate_cma_pages(struct mm_struct *mm,
 					unsigned long start,
 					unsigned long nr_pages,
 					struct page **pages,
@@ -1731,8 +1723,7 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk,
  * __gup_longterm_locked() is a wrapper for __get_user_pages_locked which
  * allows us to process the FOLL_LONGTERM flag.
  */
-static long __gup_longterm_locked(struct task_struct *tsk,
-				  struct mm_struct *mm,
+static long __gup_longterm_locked(struct mm_struct *mm,
 				  unsigned long start,
 				  unsigned long nr_pages,
 				  struct page **pages,
@@ -1757,7 +1748,7 @@ static long __gup_longterm_locked(struct task_struct *tsk,
 		flags = memalloc_nocma_save();
 	}
 
-	rc = __get_user_pages_locked(tsk, mm, start, nr_pages, pages,
+	rc = __get_user_pages_locked(mm, start, nr_pages, pages,
 				     vmas_tmp, NULL, gup_flags);
 
 	if (gup_flags & FOLL_LONGTERM) {
@@ -1772,7 +1763,7 @@ static long __gup_longterm_locked(struct task_struct *tsk,
 			goto out;
 		}
 
-		rc = check_and_migrate_cma_pages(tsk, mm, start, rc, pages,
+		rc = check_and_migrate_cma_pages(mm, start, rc, pages,
 						 vmas_tmp, gup_flags);
 	}
 
@@ -1782,22 +1773,20 @@ static long __gup_longterm_locked(struct task_struct *tsk,
 	return rc;
 }
 #else /* !CONFIG_FS_DAX && !CONFIG_CMA */
-static __always_inline long __gup_longterm_locked(struct task_struct *tsk,
-						  struct mm_struct *mm,
+static __always_inline long __gup_longterm_locked(struct mm_struct *mm,
 						  unsigned long start,
 						  unsigned long nr_pages,
 						  struct page **pages,
 						  struct vm_area_struct **vmas,
 						  unsigned int flags)
 {
-	return __get_user_pages_locked(tsk, mm, start, nr_pages, pages, vmas,
+	return __get_user_pages_locked(mm, start, nr_pages, pages, vmas,
 				       NULL, flags);
 }
 #endif /* CONFIG_FS_DAX || CONFIG_CMA */
 
 #ifdef CONFIG_MMU
-static long __get_user_pages_remote(struct task_struct *tsk,
-				    struct mm_struct *mm,
+static long __get_user_pages_remote(struct mm_struct *mm,
 				    unsigned long start, unsigned long nr_pages,
 				    unsigned int gup_flags, struct page **pages,
 				    struct vm_area_struct **vmas, int *locked)
@@ -1816,20 +1805,18 @@ static long __get_user_pages_remote(struct task_struct *tsk,
 		 * This will check the vmas (even if our vmas arg is NULL)
 		 * and return -ENOTSUPP if DAX isn't allowed in this case:
 		 */
-		return __gup_longterm_locked(tsk, mm, start, nr_pages, pages,
+		return __gup_longterm_locked(mm, start, nr_pages, pages,
 					     vmas, gup_flags | FOLL_TOUCH |
 					     FOLL_REMOTE);
 	}
 
-	return __get_user_pages_locked(tsk, mm, start, nr_pages, pages, vmas,
+	return __get_user_pages_locked(mm, start, nr_pages, pages, vmas,
 				       locked,
 				       gup_flags | FOLL_TOUCH | FOLL_REMOTE);
 }
 
 /*
  * get_user_pages_remote() - pin user pages in memory
- * @tsk:	the task_struct to use for page fault accounting, or
- *		NULL if faults are not to be recorded.
  * @mm:		mm_struct of target mm
  * @start:	starting user address
  * @nr_pages:	number of pages from start to pin
@@ -1888,7 +1875,7 @@ static long __get_user_pages_remote(struct task_struct *tsk,
  * should use get_user_pages because it cannot pass
  * FAULT_FLAG_ALLOW_RETRY to handle_mm_fault.
  */
-long get_user_pages_remote(struct task_struct *tsk, struct mm_struct *mm,
+long get_user_pages_remote(struct mm_struct *mm,
 		unsigned long start, unsigned long nr_pages,
 		unsigned int gup_flags, struct page **pages,
 		struct vm_area_struct **vmas, int *locked)
@@ -1900,13 +1887,13 @@ long get_user_pages_remote(struct task_struct *tsk, struct mm_struct *mm,
 	if (WARN_ON_ONCE(gup_flags & FOLL_PIN))
 		return -EINVAL;
 
-	return __get_user_pages_remote(tsk, mm, start, nr_pages, gup_flags,
+	return __get_user_pages_remote(mm, start, nr_pages, gup_flags,
 				       pages, vmas, locked);
 }
 EXPORT_SYMBOL(get_user_pages_remote);
 
 #else /* CONFIG_MMU */
-long get_user_pages_remote(struct task_struct *tsk, struct mm_struct *mm,
+long get_user_pages_remote(struct mm_struct *mm,
 			   unsigned long start, unsigned long nr_pages,
 			   unsigned int gup_flags, struct page **pages,
 			   struct vm_area_struct **vmas, int *locked)
@@ -1914,8 +1901,7 @@ long get_user_pages_remote(struct task_struct *tsk, struct mm_struct *mm,
 	return 0;
 }
 
-static long __get_user_pages_remote(struct task_struct *tsk,
-				    struct mm_struct *mm,
+static long __get_user_pages_remote(struct mm_struct *mm,
 				    unsigned long start, unsigned long nr_pages,
 				    unsigned int gup_flags, struct page **pages,
 				    struct vm_area_struct **vmas, int *locked)
@@ -1942,7 +1928,7 @@ long get_user_pages(unsigned long start, unsigned long nr_pages,
 	if (WARN_ON_ONCE(gup_flags & FOLL_PIN))
 		return -EINVAL;
 
-	return __gup_longterm_locked(current, current->mm, start, nr_pages,
+	return __gup_longterm_locked(current->mm, start, nr_pages,
 				     pages, vmas, gup_flags | FOLL_TOUCH);
 }
 EXPORT_SYMBOL(get_user_pages);
@@ -1956,7 +1942,7 @@ EXPORT_SYMBOL(get_user_pages);
 *
 *      down_read(&mm->mmap_sem);
 *      do_something()
- *      get_user_pages(tsk, mm, ..., pages, NULL);
+ *      get_user_pages(mm, ..., pages, NULL);
 *      up_read(&mm->mmap_sem);
 *
 *  to:
@@ -1964,7 +1950,7 @@ EXPORT_SYMBOL(get_user_pages);
 *      int locked = 1;
 *      down_read(&mm->mmap_sem);
 *      do_something()
- *      get_user_pages_locked(tsk, mm, ..., pages, &locked);
+ *      get_user_pages_locked(mm, ..., pages, &locked);
 *      if (locked)
 *          up_read(&mm->mmap_sem);
 */
@@ -1981,7 +1967,7 @@ long get_user_pages_locked(unsigned long start, unsigned long nr_pages,
 	if (WARN_ON_ONCE(gup_flags & FOLL_LONGTERM))
 		return -EINVAL;
 
-	return __get_user_pages_locked(current, current->mm, start, nr_pages,
+	return __get_user_pages_locked(current->mm, start, nr_pages,
 				       pages, NULL, locked,
 				       gup_flags | FOLL_TOUCH);
 }
@@ -1991,12 +1977,12 @@ EXPORT_SYMBOL(get_user_pages_locked);
 * get_user_pages_unlocked() is suitable to replace the form:
 *
 *      down_read(&mm->mmap_sem);
- *      get_user_pages(tsk, mm, ..., pages, NULL);
+ *      get_user_pages(mm, ..., pages, NULL);
 *      up_read(&mm->mmap_sem);
 *
 *  with:
 *
- *      get_user_pages_unlocked(tsk, mm, ..., pages);
+ *      get_user_pages_unlocked(mm, ..., pages);
 *
 * It is functionally equivalent to get_user_pages_fast so
 * get_user_pages_fast should be used instead if specific gup_flags
@@ -2019,7 +2005,7 @@ long get_user_pages_unlocked(unsigned long start, unsigned long nr_pages,
 		return -EINVAL;
 
 	down_read(&mm->mmap_sem);
-	ret = __get_user_pages_locked(current, mm, start, nr_pages, pages, NULL,
+	ret = __get_user_pages_locked(mm, start, nr_pages, pages, NULL,
 				      &locked, gup_flags | FOLL_TOUCH);
 	if (locked)
 		up_read(&mm->mmap_sem);
@@ -2720,7 +2706,7 @@ static int __gup_longterm_unlocked(unsigned long start, int nr_pages,
 	 */
 	if (gup_flags & FOLL_LONGTERM) {
 		down_read(&current->mm->mmap_sem);
-		ret = __gup_longterm_locked(current, current->mm,
+		ret = __gup_longterm_locked(current->mm,
 					    start, nr_pages,
 					    pages, NULL, gup_flags);
 		up_read(&current->mm->mmap_sem);
@@ -2850,10 +2836,8 @@ int pin_user_pages_fast(unsigned long start, int nr_pages,
 EXPORT_SYMBOL_GPL(pin_user_pages_fast);
 
 /**
- * pin_user_pages_remote() - pin pages of a remote process (task != current)
+ * pin_user_pages_remote() - pin pages of a remote process
  *
- * @tsk:	the task_struct to use for page fault accounting, or
- *		NULL if faults are not to be recorded.
  * @mm:		mm_struct of target mm
  * @start:	starting user address
  * @nr_pages:	number of pages from start to pin
@@ -2877,7 +2861,7 @@ EXPORT_SYMBOL_GPL(pin_user_pages_fast);
 * This is intended for Case 1 (DIO) in Documentation/vm/pin_user_pages.rst. It
 * is NOT intended for Case 2 (RDMA: long-term pins).
 */
-long pin_user_pages_remote(struct task_struct *tsk, struct mm_struct *mm,
+long pin_user_pages_remote(struct mm_struct *mm,
 			   unsigned long start, unsigned long nr_pages,
 			   unsigned int gup_flags, struct page **pages,
 			   struct vm_area_struct **vmas, int *locked)
@@ -2887,7 +2871,7 @@ long pin_user_pages_remote(struct task_struct *tsk, struct mm_struct *mm,
 		return -EINVAL;
 
 	gup_flags |= FOLL_PIN;
-	return __get_user_pages_remote(tsk, mm, start, nr_pages, gup_flags,
+	return __get_user_pages_remote(mm, start, nr_pages, gup_flags,
 				       pages, vmas, locked);
 }
 EXPORT_SYMBOL(pin_user_pages_remote);
@@ -2922,7 +2906,7 @@ long pin_user_pages(unsigned long start, unsigned long nr_pages,
 		return -EINVAL;
 
 	gup_flags |= FOLL_PIN;
-	return __gup_longterm_locked(current, current->mm, start, nr_pages,
+	return __gup_longterm_locked(current->mm, start, nr_pages,
 				     pages, vmas, gup_flags);
 }
 EXPORT_SYMBOL(pin_user_pages);
diff --git a/mm/memory.c b/mm/memory.c
index 59a2989231fa..5af912cabe9a 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4742,7 +4742,7 @@ int __access_remote_vm(struct task_struct *tsk, struct mm_struct *mm,
 		void *maddr;
 		struct page *page = NULL;
 
-		ret = get_user_pages_remote(tsk, mm, addr, 1,
+		ret = get_user_pages_remote(mm, addr, 1,
 				gup_flags, &page, &vma, NULL);
 		if (ret <= 0) {
 #ifndef CONFIG_HAVE_IOREMAP_PROT
diff --git a/mm/process_vm_access.c b/mm/process_vm_access.c
index 74e957e302fe..5523464d0ab5 100644
--- a/mm/process_vm_access.c
+++ b/mm/process_vm_access.c
@@ -105,7 +105,7 @@ static int process_vm_rw_single_vec(unsigned long addr,
 		 * current/current->mm
 		 */
 		down_read(&mm->mmap_sem);
-		pinned_pages = pin_user_pages_remote(task, mm, pa, pinned_pages,
+		pinned_pages = pin_user_pages_remote(mm, pa, pinned_pages,
 						     flags, process_pages,
 						     NULL, &locked);
 		if (locked)
diff --git a/security/tomoyo/domain.c b/security/tomoyo/domain.c
index 7869d6a9980b..afe5e68ede77 100644
--- a/security/tomoyo/domain.c
+++ b/security/tomoyo/domain.c
@@ -914,7 +914,7 @@ bool tomoyo_dump_page(struct linux_binprm *bprm, unsigned long pos,
 	 * (represented by bprm). 'current' is the process doing
 	 * the execve().
 	 */
-	if (get_user_pages_remote(current, bprm->mm, pos, 1,
+	if (get_user_pages_remote(bprm->mm, pos, 1,
 				  FOLL_FORCE, &page, NULL, NULL) <= 0)
 		return false;
 #else
diff --git a/virt/kvm/async_pf.c b/virt/kvm/async_pf.c
index 15e5b037f92d..73098e18baaf 100644
--- a/virt/kvm/async_pf.c
+++ b/virt/kvm/async_pf.c
@@ -60,7 +60,7 @@ static void async_pf_execute(struct work_struct *work)
 	 * access remotely.
 	 */
 	down_read(&mm->mmap_sem);
-	get_user_pages_remote(NULL, mm, addr, 1, FOLL_WRITE, NULL, NULL,
+	get_user_pages_remote(mm, addr, 1, FOLL_WRITE, NULL, NULL,
 			&locked);
 	if (locked)
 		up_read(&mm->mmap_sem);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 731c1e517716..3e1b2ec4ec96 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1829,7 +1829,7 @@ static int hva_to_pfn_remapped(struct vm_area_struct *vma,
 		 * not call the fault handler, so do it here.
 		 */
 		bool unlocked = false;
-		r = fixup_user_fault(current, current->mm, addr,
+		r = fixup_user_fault(current->mm, addr,
				     (write_fault ? FAULT_FLAG_WRITE : 0),
				     &unlocked);
 		if (unlocked)
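
As a quick illustration of the new calling convention (a hypothetical caller written for this cover note, not part of the patch; the helper name example_read_remote_page() is made up), a converted call site looks roughly like this -- only the mm is passed, page fault accounting no longer needs a task_struct, and the caller still holds mmap_sem across the call as before:

	/* Illustrative sketch only -- not part of this series. */
	#include <linux/mm.h>
	#include <linux/rwsem.h>

	static int example_read_remote_page(struct mm_struct *mm,
					    unsigned long addr,
					    struct page **pagep)
	{
		long ret;

		down_read(&mm->mmap_sem);
		/*
		 * Old form was:
		 *   get_user_pages_remote(tsk, mm, addr, 1, FOLL_FORCE,
		 *                         pagep, NULL, NULL);
		 * With this series the task_struct argument is dropped.
		 */
		ret = get_user_pages_remote(mm, addr, 1, FOLL_FORCE,
					    pagep, NULL, NULL);
		up_read(&mm->mmap_sem);

		/* One page requested: treat a short pin as -EFAULT. */
		if (ret < 0)
			return ret;
		return ret == 1 ? 0 : -EFAULT;
	}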