From patchwork Tue Jul 7 22:49:57 2020
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 11650383
From: Peter Xu <peterx@redhat.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Gerald Schaefer, Linus Torvalds, peterx@redhat.com, Andrew Morton, Will Deacon, Andrea Arcangeli, David Rientjes, John Hubbard, Michael Ellerman
Subject: [PATCH v5 01/25] mm: Do page fault accounting in handle_mm_fault
Date: Tue, 7 Jul 2020 18:49:57 -0400
Message-Id: <20200707225021.200906-2-peterx@redhat.com>
In-Reply-To: <20200707225021.200906-1-peterx@redhat.com>
References: <20200707225021.200906-1-peterx@redhat.com>

This is a preparation patch to move page fault accounting into the generic
code in handle_mm_fault(). This includes both the per-task maj_flt/min_flt
counters and the major/minor page fault perf events. To do this, a pt_regs
pointer is passed into handle_mm_fault().

PERF_COUNT_SW_PAGE_FAULTS should still be kept in the per-arch page fault
handlers.

So far, every pt_regs pointer passed into handle_mm_fault() is NULL, so this
patch should have no intended functional change.
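[Editor's note: to illustrate the calling convention this series moves to, here is a
minimal sketch of an architecture page fault handler after conversion. It is not part
of the patch; the function name my_arch_fault and the surrounding details are
hypothetical, while handle_mm_fault(), perf_sw_event(), the FAULT_FLAG_* and
VM_FAULT_* symbols, and the lock helpers are the real interfaces used throughout
the series.]

	/*
	 * Illustrative sketch only -- not part of this patch.  Shape of an arch
	 * fault handler after conversion: the handler keeps the plain
	 * PERF_COUNT_SW_PAGE_FAULTS event itself and passes its pt_regs into
	 * handle_mm_fault(), which now does the maj_flt/min_flt and
	 * PERF_COUNT_SW_PAGE_FAULTS_[MAJ|MIN] accounting.
	 */
	static void my_arch_fault(struct pt_regs *regs, unsigned long address,
				  bool is_write)
	{
		struct mm_struct *mm = current->mm;
		unsigned int flags = FAULT_FLAG_DEFAULT;
		struct vm_area_struct *vma;
		vm_fault_t fault;

		if (user_mode(regs))
			flags |= FAULT_FLAG_USER;
		if (is_write)
			flags |= FAULT_FLAG_WRITE;

		/* Per-arch handlers keep this event (see commit log above). */
		perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);

	retry:
		mmap_read_lock(mm);
		vma = find_vma(mm, address);
		/* ... access checks and VM_FAULT_ERROR handling elided ... */

		/* Pass regs: major/minor accounting now happens inside. */
		fault = handle_mm_fault(vma, address, flags, regs);

		if (fault_signal_pending(fault, regs))
			return;

		if (fault & VM_FAULT_RETRY) {
			/* mmap lock was already dropped by the fault path. */
			flags |= FAULT_FLAG_TRIED;
			goto retry;
		}

		mmap_read_unlock(mm);
	}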
Suggested-by: Linus Torvalds Signed-off-by: Peter Xu --- arch/alpha/mm/fault.c | 2 +- arch/arc/mm/fault.c | 2 +- arch/arm/mm/fault.c | 2 +- arch/arm64/mm/fault.c | 2 +- arch/csky/mm/fault.c | 3 +- arch/hexagon/mm/vm_fault.c | 2 +- arch/ia64/mm/fault.c | 2 +- arch/m68k/mm/fault.c | 2 +- arch/microblaze/mm/fault.c | 2 +- arch/mips/mm/fault.c | 2 +- arch/nds32/mm/fault.c | 2 +- arch/nios2/mm/fault.c | 2 +- arch/openrisc/mm/fault.c | 2 +- arch/parisc/mm/fault.c | 2 +- arch/powerpc/mm/copro_fault.c | 2 +- arch/powerpc/mm/fault.c | 2 +- arch/riscv/mm/fault.c | 2 +- arch/s390/mm/fault.c | 2 +- arch/sh/mm/fault.c | 2 +- arch/sparc/mm/fault_32.c | 4 +-- arch/sparc/mm/fault_64.c | 2 +- arch/um/kernel/trap.c | 2 +- arch/x86/mm/fault.c | 2 +- arch/xtensa/mm/fault.c | 2 +- drivers/iommu/amd/iommu_v2.c | 2 +- drivers/iommu/intel/svm.c | 3 +- include/linux/mm.h | 7 ++-- mm/gup.c | 4 +-- mm/hmm.c | 3 +- mm/ksm.c | 3 +- mm/memory.c | 64 ++++++++++++++++++++++++++++++++++- 31 files changed, 103 insertions(+), 34 deletions(-) diff --git a/arch/alpha/mm/fault.c b/arch/alpha/mm/fault.c index c2303a8c2b9f..1983e43a5e2f 100644 --- a/arch/alpha/mm/fault.c +++ b/arch/alpha/mm/fault.c @@ -148,7 +148,7 @@ do_page_fault(unsigned long address, unsigned long mmcsr, /* If for any reason at all we couldn't handle the fault, make sure we exit gracefully rather than endlessly redo the fault. */ - fault = handle_mm_fault(vma, address, flags); + fault = handle_mm_fault(vma, address, flags, NULL); if (fault_signal_pending(fault, regs)) return; diff --git a/arch/arc/mm/fault.c b/arch/arc/mm/fault.c index 7287c793d1c9..587dea524e6b 100644 --- a/arch/arc/mm/fault.c +++ b/arch/arc/mm/fault.c @@ -130,7 +130,7 @@ void do_page_fault(unsigned long address, struct pt_regs *regs) goto bad_area; } - fault = handle_mm_fault(vma, address, flags); + fault = handle_mm_fault(vma, address, flags, NULL); /* Quick path to respond to signals */ if (fault_signal_pending(fault, regs)) { diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c index c6550eddfce1..01a8e0f8fef7 100644 --- a/arch/arm/mm/fault.c +++ b/arch/arm/mm/fault.c @@ -224,7 +224,7 @@ __do_page_fault(struct mm_struct *mm, unsigned long addr, unsigned int fsr, goto out; } - return handle_mm_fault(vma, addr & PAGE_MASK, flags); + return handle_mm_fault(vma, addr & PAGE_MASK, flags, NULL); check_stack: /* Don't allow expansion below FIRST_USER_ADDRESS */ diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c index 5e832b3387f1..f885940035ce 100644 --- a/arch/arm64/mm/fault.c +++ b/arch/arm64/mm/fault.c @@ -428,7 +428,7 @@ static vm_fault_t __do_page_fault(struct mm_struct *mm, unsigned long addr, */ if (!(vma->vm_flags & vm_flags)) return VM_FAULT_BADACCESS; - return handle_mm_fault(vma, addr & PAGE_MASK, mm_flags); + return handle_mm_fault(vma, addr & PAGE_MASK, mm_flags, NULL); } static bool is_el0_instruction_abort(unsigned int esr) diff --git a/arch/csky/mm/fault.c b/arch/csky/mm/fault.c index 0b9cbf2cf6a9..7137e2e8dc57 100644 --- a/arch/csky/mm/fault.c +++ b/arch/csky/mm/fault.c @@ -150,7 +150,8 @@ asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long write, * make sure we exit gracefully rather than endlessly redo * the fault. */ - fault = handle_mm_fault(vma, address, write ? FAULT_FLAG_WRITE : 0); + fault = handle_mm_fault(vma, address, write ? 
FAULT_FLAG_WRITE : 0, + NULL); if (unlikely(fault & VM_FAULT_ERROR)) { if (fault & VM_FAULT_OOM) goto out_of_memory; diff --git a/arch/hexagon/mm/vm_fault.c b/arch/hexagon/mm/vm_fault.c index cd3808f96b93..f12f330e7946 100644 --- a/arch/hexagon/mm/vm_fault.c +++ b/arch/hexagon/mm/vm_fault.c @@ -88,7 +88,7 @@ void do_page_fault(unsigned long address, long cause, struct pt_regs *regs) break; } - fault = handle_mm_fault(vma, address, flags); + fault = handle_mm_fault(vma, address, flags, NULL); if (fault_signal_pending(fault, regs)) return; diff --git a/arch/ia64/mm/fault.c b/arch/ia64/mm/fault.c index 3a4dec334cc5..abf2808f9b4b 100644 --- a/arch/ia64/mm/fault.c +++ b/arch/ia64/mm/fault.c @@ -143,7 +143,7 @@ ia64_do_page_fault (unsigned long address, unsigned long isr, struct pt_regs *re * sure we exit gracefully rather than endlessly redo the * fault. */ - fault = handle_mm_fault(vma, address, flags); + fault = handle_mm_fault(vma, address, flags, NULL); if (fault_signal_pending(fault, regs)) return; diff --git a/arch/m68k/mm/fault.c b/arch/m68k/mm/fault.c index 508abb63da67..08b35a318ebe 100644 --- a/arch/m68k/mm/fault.c +++ b/arch/m68k/mm/fault.c @@ -134,7 +134,7 @@ int do_page_fault(struct pt_regs *regs, unsigned long address, * the fault. */ - fault = handle_mm_fault(vma, address, flags); + fault = handle_mm_fault(vma, address, flags, NULL); pr_debug("handle_mm_fault returns %x\n", fault); if (fault_signal_pending(fault, regs)) diff --git a/arch/microblaze/mm/fault.c b/arch/microblaze/mm/fault.c index a2bfe587b491..1a3d4c4ca28b 100644 --- a/arch/microblaze/mm/fault.c +++ b/arch/microblaze/mm/fault.c @@ -214,7 +214,7 @@ void do_page_fault(struct pt_regs *regs, unsigned long address, * make sure we exit gracefully rather than endlessly redo * the fault. */ - fault = handle_mm_fault(vma, address, flags); + fault = handle_mm_fault(vma, address, flags, NULL); if (fault_signal_pending(fault, regs)) return; diff --git a/arch/mips/mm/fault.c b/arch/mips/mm/fault.c index 01b168a90434..b1db39784db9 100644 --- a/arch/mips/mm/fault.c +++ b/arch/mips/mm/fault.c @@ -152,7 +152,7 @@ static void __kprobes __do_page_fault(struct pt_regs *regs, unsigned long write, * make sure we exit gracefully rather than endlessly redo * the fault. */ - fault = handle_mm_fault(vma, address, flags); + fault = handle_mm_fault(vma, address, flags, NULL); if (fault_signal_pending(fault, regs)) return; diff --git a/arch/nds32/mm/fault.c b/arch/nds32/mm/fault.c index 8fb73f6401a0..d0ecc8fb5b23 100644 --- a/arch/nds32/mm/fault.c +++ b/arch/nds32/mm/fault.c @@ -206,7 +206,7 @@ void do_page_fault(unsigned long entry, unsigned long addr, * the fault. */ - fault = handle_mm_fault(vma, addr, flags); + fault = handle_mm_fault(vma, addr, flags, NULL); /* * If we need to retry but a fatal signal is pending, handle the diff --git a/arch/nios2/mm/fault.c b/arch/nios2/mm/fault.c index 4112ef0e247e..86beb9a2698e 100644 --- a/arch/nios2/mm/fault.c +++ b/arch/nios2/mm/fault.c @@ -131,7 +131,7 @@ asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long cause, * make sure we exit gracefully rather than endlessly redo * the fault. 
*/ - fault = handle_mm_fault(vma, address, flags); + fault = handle_mm_fault(vma, address, flags, NULL); if (fault_signal_pending(fault, regs)) return; diff --git a/arch/openrisc/mm/fault.c b/arch/openrisc/mm/fault.c index d2224ccca294..3daa491d1edb 100644 --- a/arch/openrisc/mm/fault.c +++ b/arch/openrisc/mm/fault.c @@ -159,7 +159,7 @@ asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long address, * the fault. */ - fault = handle_mm_fault(vma, address, flags); + fault = handle_mm_fault(vma, address, flags, NULL); if (fault_signal_pending(fault, regs)) return; diff --git a/arch/parisc/mm/fault.c b/arch/parisc/mm/fault.c index 66ac0719bd49..e32d06928c24 100644 --- a/arch/parisc/mm/fault.c +++ b/arch/parisc/mm/fault.c @@ -302,7 +302,7 @@ void do_page_fault(struct pt_regs *regs, unsigned long code, * fault. */ - fault = handle_mm_fault(vma, address, flags); + fault = handle_mm_fault(vma, address, flags, NULL); if (fault_signal_pending(fault, regs)) return; diff --git a/arch/powerpc/mm/copro_fault.c b/arch/powerpc/mm/copro_fault.c index b83abbead4a2..2d0276abe0a6 100644 --- a/arch/powerpc/mm/copro_fault.c +++ b/arch/powerpc/mm/copro_fault.c @@ -64,7 +64,7 @@ int copro_handle_mm_fault(struct mm_struct *mm, unsigned long ea, } ret = 0; - *flt = handle_mm_fault(vma, ea, is_write ? FAULT_FLAG_WRITE : 0); + *flt = handle_mm_fault(vma, ea, is_write ? FAULT_FLAG_WRITE : 0, NULL); if (unlikely(*flt & VM_FAULT_ERROR)) { if (*flt & VM_FAULT_OOM) { ret = -ENOMEM; diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c index 641fc5f3d7dd..25dee001d8e1 100644 --- a/arch/powerpc/mm/fault.c +++ b/arch/powerpc/mm/fault.c @@ -607,7 +607,7 @@ static int __do_page_fault(struct pt_regs *regs, unsigned long address, * make sure we exit gracefully rather than endlessly redo * the fault. */ - fault = handle_mm_fault(vma, address, flags); + fault = handle_mm_fault(vma, address, flags, NULL); major |= fault & VM_FAULT_MAJOR; diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c index 5873835a3e6b..30c1124d0fb6 100644 --- a/arch/riscv/mm/fault.c +++ b/arch/riscv/mm/fault.c @@ -109,7 +109,7 @@ asmlinkage void do_page_fault(struct pt_regs *regs) * make sure we exit gracefully rather than endlessly redo * the fault. */ - fault = handle_mm_fault(vma, addr, flags); + fault = handle_mm_fault(vma, addr, flags, NULL); /* * If we need to retry but a fatal signal is pending, handle the diff --git a/arch/s390/mm/fault.c b/arch/s390/mm/fault.c index d53c2e2ea1fd..fc14df0b4d6e 100644 --- a/arch/s390/mm/fault.c +++ b/arch/s390/mm/fault.c @@ -478,7 +478,7 @@ static inline vm_fault_t do_exception(struct pt_regs *regs, int access) * make sure we exit gracefully rather than endlessly redo * the fault. */ - fault = handle_mm_fault(vma, address, flags); + fault = handle_mm_fault(vma, address, flags, NULL); if (fault_signal_pending(fault, regs)) { fault = VM_FAULT_SIGNAL; if (flags & FAULT_FLAG_RETRY_NOWAIT) diff --git a/arch/sh/mm/fault.c b/arch/sh/mm/fault.c index fbe1f2fe9a8c..3c0a11827f7e 100644 --- a/arch/sh/mm/fault.c +++ b/arch/sh/mm/fault.c @@ -482,7 +482,7 @@ asmlinkage void __kprobes do_page_fault(struct pt_regs *regs, * make sure we exit gracefully rather than endlessly redo * the fault. 
*/ - fault = handle_mm_fault(vma, address, flags); + fault = handle_mm_fault(vma, address, flags, NULL); if (unlikely(fault & (VM_FAULT_RETRY | VM_FAULT_ERROR))) if (mm_fault_error(regs, error_code, address, fault)) diff --git a/arch/sparc/mm/fault_32.c b/arch/sparc/mm/fault_32.c index cfef656eda0f..06af03db4417 100644 --- a/arch/sparc/mm/fault_32.c +++ b/arch/sparc/mm/fault_32.c @@ -234,7 +234,7 @@ asmlinkage void do_sparc_fault(struct pt_regs *regs, int text_fault, int write, * make sure we exit gracefully rather than endlessly redo * the fault. */ - fault = handle_mm_fault(vma, address, flags); + fault = handle_mm_fault(vma, address, flags, NULL); if (fault_signal_pending(fault, regs)) return; @@ -410,7 +410,7 @@ static void force_user_fault(unsigned long address, int write) if (!(vma->vm_flags & (VM_READ | VM_EXEC))) goto bad_area; } - switch (handle_mm_fault(vma, address, flags)) { + switch (handle_mm_fault(vma, address, flags, NULL)) { case VM_FAULT_SIGBUS: case VM_FAULT_OOM: goto do_sigbus; diff --git a/arch/sparc/mm/fault_64.c b/arch/sparc/mm/fault_64.c index a3806614e4dc..9ebee14ee893 100644 --- a/arch/sparc/mm/fault_64.c +++ b/arch/sparc/mm/fault_64.c @@ -422,7 +422,7 @@ asmlinkage void __kprobes do_sparc64_fault(struct pt_regs *regs) goto bad_area; } - fault = handle_mm_fault(vma, address, flags); + fault = handle_mm_fault(vma, address, flags, NULL); if (fault_signal_pending(fault, regs)) goto exit_exception; diff --git a/arch/um/kernel/trap.c b/arch/um/kernel/trap.c index 2b3afa354a90..8d9870d76da1 100644 --- a/arch/um/kernel/trap.c +++ b/arch/um/kernel/trap.c @@ -71,7 +71,7 @@ int handle_page_fault(unsigned long address, unsigned long ip, do { vm_fault_t fault; - fault = handle_mm_fault(vma, address, flags); + fault = handle_mm_fault(vma, address, flags, NULL); if ((fault & VM_FAULT_RETRY) && fatal_signal_pending(current)) goto out_nosemaphore; diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c index 02536b04d9f3..0adbff41adec 100644 --- a/arch/x86/mm/fault.c +++ b/arch/x86/mm/fault.c @@ -1291,7 +1291,7 @@ void do_user_addr_fault(struct pt_regs *regs, * userland). The return to userland is identified whenever * FAULT_FLAG_USER|FAULT_FLAG_KILLABLE are both set in flags. */ - fault = handle_mm_fault(vma, address, flags); + fault = handle_mm_fault(vma, address, flags, NULL); major |= fault & VM_FAULT_MAJOR; /* Quick path to respond to signals */ diff --git a/arch/xtensa/mm/fault.c b/arch/xtensa/mm/fault.c index c128dcc7c85b..e72c8c1359a6 100644 --- a/arch/xtensa/mm/fault.c +++ b/arch/xtensa/mm/fault.c @@ -107,7 +107,7 @@ void do_page_fault(struct pt_regs *regs) * make sure we exit gracefully rather than endlessly redo * the fault. 
*/ - fault = handle_mm_fault(vma, address, flags); + fault = handle_mm_fault(vma, address, flags, NULL); if (fault_signal_pending(fault, regs)) return; diff --git a/drivers/iommu/amd/iommu_v2.c b/drivers/iommu/amd/iommu_v2.c index e4b025c5637c..c259108ab6dd 100644 --- a/drivers/iommu/amd/iommu_v2.c +++ b/drivers/iommu/amd/iommu_v2.c @@ -495,7 +495,7 @@ static void do_fault(struct work_struct *work) if (access_error(vma, fault)) goto out; - ret = handle_mm_fault(vma, address, flags); + ret = handle_mm_fault(vma, address, flags, NULL); out: mmap_read_unlock(mm); diff --git a/drivers/iommu/intel/svm.c b/drivers/iommu/intel/svm.c index 6c87c807a0ab..5ae59a6ad681 100644 --- a/drivers/iommu/intel/svm.c +++ b/drivers/iommu/intel/svm.c @@ -872,7 +872,8 @@ static irqreturn_t prq_event_thread(int irq, void *d) goto invalid; ret = handle_mm_fault(vma, address, - req->wr_req ? FAULT_FLAG_WRITE : 0); + req->wr_req ? FAULT_FLAG_WRITE : 0, + NULL); if (ret & VM_FAULT_ERROR) goto invalid; diff --git a/include/linux/mm.h b/include/linux/mm.h index 809cbbf98fbc..33f8236a68a2 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -39,6 +39,7 @@ struct file_ra_state; struct user_struct; struct writeback_control; struct bdi_writeback; +struct pt_regs; void init_mm_internals(void); @@ -1659,7 +1660,8 @@ int invalidate_inode_page(struct page *page); #ifdef CONFIG_MMU extern vm_fault_t handle_mm_fault(struct vm_area_struct *vma, - unsigned long address, unsigned int flags); + unsigned long address, unsigned int flags, + struct pt_regs *regs); extern int fixup_user_fault(struct task_struct *tsk, struct mm_struct *mm, unsigned long address, unsigned int fault_flags, bool *unlocked); @@ -1669,7 +1671,8 @@ void unmap_mapping_range(struct address_space *mapping, loff_t const holebegin, loff_t const holelen, int even_cows); #else static inline vm_fault_t handle_mm_fault(struct vm_area_struct *vma, - unsigned long address, unsigned int flags) + unsigned long address, unsigned int flags, + struct pt_regs *regs) { /* should never happen if there's no MMU */ BUG(); diff --git a/mm/gup.c b/mm/gup.c index 6ec1807cd2a7..80fd1610d43e 100644 --- a/mm/gup.c +++ b/mm/gup.c @@ -884,7 +884,7 @@ static int faultin_page(struct task_struct *tsk, struct vm_area_struct *vma, fault_flags |= FAULT_FLAG_TRIED; } - ret = handle_mm_fault(vma, address, fault_flags); + ret = handle_mm_fault(vma, address, fault_flags, NULL); if (ret & VM_FAULT_ERROR) { int err = vm_fault_to_errno(ret, *flags); @@ -1238,7 +1238,7 @@ int fixup_user_fault(struct task_struct *tsk, struct mm_struct *mm, fatal_signal_pending(current)) return -EINTR; - ret = handle_mm_fault(vma, address, fault_flags); + ret = handle_mm_fault(vma, address, fault_flags, NULL); major |= ret & VM_FAULT_MAJOR; if (ret & VM_FAULT_ERROR) { int err = vm_fault_to_errno(ret, 0); diff --git a/mm/hmm.c b/mm/hmm.c index e9a545751108..0be32b8a47be 100644 --- a/mm/hmm.c +++ b/mm/hmm.c @@ -75,7 +75,8 @@ static int hmm_vma_fault(unsigned long addr, unsigned long end, } for (; addr < end; addr += PAGE_SIZE) - if (handle_mm_fault(vma, addr, fault_flags) & VM_FAULT_ERROR) + if (handle_mm_fault(vma, addr, fault_flags, NULL) & + VM_FAULT_ERROR) return -EFAULT; return -EBUSY; } diff --git a/mm/ksm.c b/mm/ksm.c index 5fb176d497ea..90a625b02a1d 100644 --- a/mm/ksm.c +++ b/mm/ksm.c @@ -480,7 +480,8 @@ static int break_ksm(struct vm_area_struct *vma, unsigned long addr) break; if (PageKsm(page)) ret = handle_mm_fault(vma, addr, - FAULT_FLAG_WRITE | FAULT_FLAG_REMOTE); + FAULT_FLAG_WRITE | 
FAULT_FLAG_REMOTE, + NULL); else ret = VM_FAULT_WRITE; put_page(page); diff --git a/mm/memory.c b/mm/memory.c index 072c72d88471..bb7ba127661a 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -71,6 +71,8 @@ #include #include #include +#include +#include #include @@ -4360,6 +4362,64 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma, return handle_pte_fault(&vmf); } +/** + * mm_account_fault - Do page fault accountings + * + * @regs: the pt_regs struct pointer. When set to NULL, will skip accounting + * of perf event counters, but we'll still do the per-task accounting to + * the task who triggered this page fault. + * @address: the faulted address. + * @flags: the fault flags. + * @ret: the fault retcode. + * + * This will take care of most of the page fault accountings. Meanwhile, it + * will also include the PERF_COUNT_SW_PAGE_FAULTS_[MAJ|MIN] perf counter + * updates. However note that the handling of PERF_COUNT_SW_PAGE_FAULTS should + * still be in per-arch page fault handlers at the entry of page fault. + */ +static inline void mm_account_fault(struct pt_regs *regs, + unsigned long address, unsigned int flags, + vm_fault_t ret) +{ + bool major; + + /* + * We don't do accounting for some specific faults: + * + * - Unsuccessful faults (e.g. when the address wasn't valid). That + * includes arch_vma_access_permitted() failing before reaching here. + * So this is not a "this many hardware page faults" counter. We + * should use the hw profiling for that. + * + * - Incomplete faults (VM_FAULT_RETRY). They will only be counted + * once they're completed. + */ + if (ret & (VM_FAULT_ERROR | VM_FAULT_RETRY)) + return; + + /* + * We define the fault as a major fault when the final successful fault + * is VM_FAULT_MAJOR, or if it retried (which implies that we couldn't + * handle it immediately previously). + */ + major = (ret & VM_FAULT_MAJOR) || (flags & FAULT_FLAG_TRIED); + + /* + * If the fault is done for GUP, regs will be NULL, and we will skip + * the fault accounting. + */ + if (!regs) + return; + + if (major) { + current->maj_flt++; + perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1, regs, address); + } else { + current->min_flt++; + perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1, regs, address); + } +} + /* * By the time we get here, we already hold the mm semaphore * @@ -4367,7 +4427,7 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma, * return value. See filemap_fault() and __lock_page_or_retry(). 
  */
 vm_fault_t handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
-		unsigned int flags)
+		unsigned int flags, struct pt_regs *regs)
 {
 	vm_fault_t ret;
 
@@ -4408,6 +4468,8 @@ vm_fault_t handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
 		mem_cgroup_oom_synchronize(false);
 	}
 
+	mm_account_fault(regs, address, flags, ret);
+
 	return ret;
 }
 EXPORT_SYMBOL_GPL(handle_mm_fault);

From patchwork Tue Jul 7 22:49:58 2020
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 11650385
From: Peter Xu <peterx@redhat.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Gerald Schaefer, Linus Torvalds, peterx@redhat.com, Andrew Morton, Will Deacon, Andrea Arcangeli, David Rientjes, John Hubbard, Michael Ellerman, Richard Henderson, Ivan Kokshaysky, Matt Turner, linux-alpha@vger.kernel.org
Subject: [PATCH v5 02/25] mm/alpha: Use general page fault accounting
Date: Tue, 7 Jul 2020 18:49:58 -0400
Message-Id: <20200707225021.200906-3-peterx@redhat.com>
In-Reply-To: <20200707225021.200906-1-peterx@redhat.com>
References: <20200707225021.200906-1-peterx@redhat.com>

Use the general page fault accounting by passing regs into
handle_mm_fault(). Add the missing PERF_COUNT_SW_PAGE_FAULTS perf event too.
Note that the other two perf events (PERF_COUNT_SW_PAGE_FAULTS_[MAJ|MIN]) are
now handled in handle_mm_fault().
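[Editor's note: for contrast with the arch conversions in this and the following
patches, here is a hedged sketch of the other side of the new API: a GUP-style,
kernel-internal caller that has no user register state. fault_in_one_page is a
made-up helper name; handle_mm_fault() and the flags/retcodes are the real
interfaces from patch 01.]

	/*
	 * Illustrative sketch only -- not part of the series.  Kernel-internal
	 * fault paths (mm/gup.c, mm/hmm.c, the IOMMU drivers) have no user
	 * pt_regs to report, so they pass NULL as the new argument; per
	 * mm_account_fault() in patch 01, such faults are then not reported to
	 * the PERF_COUNT_SW_PAGE_FAULTS_[MAJ|MIN] software events.
	 */
	static int fault_in_one_page(struct vm_area_struct *vma,
				     unsigned long addr, bool write)
	{
		vm_fault_t ret;

		/* No user register context here: pass NULL, as mm/gup.c does. */
		ret = handle_mm_fault(vma, addr, write ? FAULT_FLAG_WRITE : 0, NULL);

		return (ret & VM_FAULT_ERROR) ? -EFAULT : 0;
	}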
CC: Richard Henderson
CC: Ivan Kokshaysky
CC: Matt Turner
CC: linux-alpha@vger.kernel.org
Signed-off-by: Peter Xu
---
 arch/alpha/mm/fault.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/arch/alpha/mm/fault.c b/arch/alpha/mm/fault.c
index 1983e43a5e2f..09172f017efc 100644
--- a/arch/alpha/mm/fault.c
+++ b/arch/alpha/mm/fault.c
@@ -25,6 +25,7 @@
 #include
 #include
 #include
+#include
 
 extern void die_if_kernel(char *,struct pt_regs *,long, unsigned long *);
 
@@ -116,6 +117,7 @@ do_page_fault(unsigned long address, unsigned long mmcsr,
 #endif
 	if (user_mode(regs))
 		flags |= FAULT_FLAG_USER;
+	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
 retry:
 	mmap_read_lock(mm);
 	vma = find_vma(mm, address);
@@ -148,7 +150,7 @@ do_page_fault(unsigned long address, unsigned long mmcsr,
 	/* If for any reason at all we couldn't handle the fault,
 	   make sure we exit gracefully rather than endlessly redo
 	   the fault.  */
-	fault = handle_mm_fault(vma, address, flags, NULL);
+	fault = handle_mm_fault(vma, address, flags, regs);
 
 	if (fault_signal_pending(fault, regs))
 		return;
@@ -164,10 +166,6 @@ do_page_fault(unsigned long address, unsigned long mmcsr,
 	}
 
 	if (flags & FAULT_FLAG_ALLOW_RETRY) {
-		if (fault & VM_FAULT_MAJOR)
-			current->maj_flt++;
-		else
-			current->min_flt++;
 		if (fault & VM_FAULT_RETRY) {
 			flags |= FAULT_FLAG_TRIED;

From patchwork Tue Jul 7 22:49:59 2020
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 11650387
From: Peter Xu <peterx@redhat.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Gerald Schaefer, Linus Torvalds, peterx@redhat.com, Andrew Morton, Will Deacon, Andrea Arcangeli, David Rientjes, John Hubbard, Michael Ellerman, Vineet Gupta, linux-snps-arc@lists.infradead.org
Subject: [PATCH v5 03/25] mm/arc: Use general page fault accounting
Date: Tue, 7 Jul 2020 18:49:59 -0400
Message-Id: <20200707225021.200906-4-peterx@redhat.com>
In-Reply-To: <20200707225021.200906-1-peterx@redhat.com>
References: <20200707225021.200906-1-peterx@redhat.com>
Use the general page fault accounting by passing regs into
handle_mm_fault(). It naturally solves the issue of a fault being accounted
multiple times when a page fault retry happens. Manually fix the
PERF_COUNT_SW_PAGE_FAULTS perf event for page fault retries by moving it
before mmap_sem is taken.

CC: Vineet Gupta
CC: linux-snps-arc@lists.infradead.org
Signed-off-by: Peter Xu
---
 arch/arc/mm/fault.c | 18 +++---------------
 1 file changed, 3 insertions(+), 15 deletions(-)

diff --git a/arch/arc/mm/fault.c b/arch/arc/mm/fault.c
index 587dea524e6b..f5657cb68e4f 100644
--- a/arch/arc/mm/fault.c
+++ b/arch/arc/mm/fault.c
@@ -105,6 +105,7 @@ void do_page_fault(unsigned long address, struct pt_regs *regs)
 	if (write)
 		flags |= FAULT_FLAG_WRITE;
 
+	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
 retry:
 	mmap_read_lock(mm);
 
@@ -130,7 +131,7 @@ void do_page_fault(unsigned long address, struct pt_regs *regs)
 		goto bad_area;
 	}
 
-	fault = handle_mm_fault(vma, address, flags, NULL);
+	fault = handle_mm_fault(vma, address, flags, regs);
 
 	/* Quick path to respond to signals */
 	if (fault_signal_pending(fault, regs)) {
@@ -155,22 +156,9 @@ void do_page_fault(unsigned long address, struct pt_regs *regs)
 	 * Major/minor page fault accounting
 	 * (in case of retry we only land here once)
 	 */
-	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
-
-	if (likely(!(fault & VM_FAULT_ERROR))) {
-		if (fault & VM_FAULT_MAJOR) {
-			tsk->maj_flt++;
-			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1,
-				      regs, address);
-		} else {
-			tsk->min_flt++;
-			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1,
-				      regs, address);
-		}
-
+	if (likely(!(fault & VM_FAULT_ERROR)))
 		/* Normal return path: fault Handled Gracefully */
 		return;
-	}
 
 	if (!user_mode(regs))
 		goto no_context;

From patchwork Tue Jul 7 22:50:00 2020
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 11650391
From: Peter Xu <peterx@redhat.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Gerald Schaefer, Linus Torvalds, peterx@redhat.com, Andrew Morton, Will Deacon, Andrea Arcangeli, David Rientjes, John Hubbard, Michael Ellerman, Russell King, linux-arm-kernel@lists.infradead.org
Subject: [PATCH v5 04/25] mm/arm: Use general page fault accounting
Date: Tue, 7 Jul 2020 18:50:00 -0400
Message-Id: <20200707225021.200906-5-peterx@redhat.com>
In-Reply-To: <20200707225021.200906-1-peterx@redhat.com>
References: <20200707225021.200906-1-peterx@redhat.com>

Use the general page fault accounting by passing regs into
handle_mm_fault(). It naturally solves the issue of a fault being accounted
multiple times when a page fault retry happens. To do this, we need to pass
the pt_regs pointer into __do_page_fault().

Manually fix the PERF_COUNT_SW_PAGE_FAULTS perf event for page fault retries
by moving it before mmap_sem is taken.

CC: Russell King
CC: Will Deacon
CC: linux-arm-kernel@lists.infradead.org
Signed-off-by: Peter Xu
---
 arch/arm/mm/fault.c | 25 ++++++-------------------
 1 file changed, 6 insertions(+), 19 deletions(-)

diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c
index 01a8e0f8fef7..efa402025031 100644
--- a/arch/arm/mm/fault.c
+++ b/arch/arm/mm/fault.c
@@ -202,7 +202,8 @@ static inline bool access_error(unsigned int fsr, struct vm_area_struct *vma)
 
 static vm_fault_t __kprobes
 __do_page_fault(struct mm_struct *mm, unsigned long addr, unsigned int fsr,
-		unsigned int flags, struct task_struct *tsk)
+		unsigned int flags, struct task_struct *tsk,
+		struct pt_regs *regs)
 {
 	struct vm_area_struct *vma;
 	vm_fault_t fault;
@@ -224,7 +225,7 @@ __do_page_fault(struct mm_struct *mm, unsigned long addr, unsigned int fsr,
 		goto out;
 	}
 
-	return handle_mm_fault(vma, addr & PAGE_MASK, flags, NULL);
+	return handle_mm_fault(vma, addr & PAGE_MASK, flags, regs);
 
 check_stack:
 	/* Don't allow expansion below FIRST_USER_ADDRESS */
@@ -266,6 +267,8 @@ do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
 	if ((fsr & FSR_WRITE) && !(fsr & FSR_CM))
 		flags |= FAULT_FLAG_WRITE;
 
+	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, addr);
+
 	/*
 	 * As per x86, we may deadlock here. However, since the kernel only
 	 * validly references user space from well defined areas of the code,
@@ -290,7 +293,7 @@ do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
 #endif
 	}
 
-	fault = __do_page_fault(mm, addr, fsr, flags, tsk);
+	fault = __do_page_fault(mm, addr, fsr, flags, tsk, regs);
 
 	/* If we need to retry but a fatal signal is pending, handle the
 	 * signal first. We do not need to release the mmap_lock because
@@ -302,23 +305,7 @@ do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
 		return 0;
 	}
 
-	/*
-	 * Major/minor page fault accounting is only done on the
-	 * initial attempt. If we go through a retry, it is extremely
-	 * likely that the page will be found in page cache at that point.
-	 */
-
-	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, addr);
 	if (!(fault & VM_FAULT_ERROR) && flags & FAULT_FLAG_ALLOW_RETRY) {
-		if (fault & VM_FAULT_MAJOR) {
-			tsk->maj_flt++;
-			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1,
-					regs, addr);
-		} else {
-			tsk->min_flt++;
-			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1,
-					regs, addr);
-		}
 		if (fault & VM_FAULT_RETRY) {
 			flags |= FAULT_FLAG_TRIED;
 			goto retry;

From patchwork Tue Jul 7 22:50:01 2020
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 11650389
From: Peter Xu <peterx@redhat.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Gerald Schaefer, Linus Torvalds, peterx@redhat.com, Andrew Morton, Will Deacon, Andrea Arcangeli, David Rientjes, John Hubbard, Michael Ellerman, Catalin Marinas, linux-arm-kernel@lists.infradead.org
Subject: [PATCH v5 05/25] mm/arm64: Use general page fault accounting
Date: Tue, 7 Jul 2020 18:50:01 -0400
Message-Id: <20200707225021.200906-6-peterx@redhat.com>
In-Reply-To: <20200707225021.200906-1-peterx@redhat.com>
References: <20200707225021.200906-1-peterx@redhat.com>

Use the general page fault accounting by passing regs into
handle_mm_fault(). It naturally solves the issue of a fault being accounted
multiple times when a page fault retry happens. To do this, we pass the
pt_regs pointer into __do_page_fault().
CC: Catalin Marinas
CC: Will Deacon
CC: linux-arm-kernel@lists.infradead.org
Acked-by: Will Deacon
Signed-off-by: Peter Xu
---
 arch/arm64/mm/fault.c | 29 ++++++-----------------------
 1 file changed, 6 insertions(+), 23 deletions(-)

diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index f885940035ce..a3bd189602df 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -404,7 +404,8 @@ static void do_bad_area(unsigned long addr, unsigned int esr, struct pt_regs *re
 #define VM_FAULT_BADACCESS	0x020000
 
 static vm_fault_t __do_page_fault(struct mm_struct *mm, unsigned long addr,
-			   unsigned int mm_flags, unsigned long vm_flags)
+			   unsigned int mm_flags, unsigned long vm_flags,
+			   struct pt_regs *regs)
 {
 	struct vm_area_struct *vma = find_vma(mm, addr);
 
@@ -428,7 +429,7 @@ static vm_fault_t __do_page_fault(struct mm_struct *mm, unsigned long addr,
 	 */
 	if (!(vma->vm_flags & vm_flags))
 		return VM_FAULT_BADACCESS;
-	return handle_mm_fault(vma, addr & PAGE_MASK, mm_flags, NULL);
+	return handle_mm_fault(vma, addr & PAGE_MASK, mm_flags, regs);
 }
 
 static bool is_el0_instruction_abort(unsigned int esr)
@@ -450,7 +451,7 @@ static int __kprobes do_page_fault(unsigned long addr, unsigned int esr,
 {
 	const struct fault_info *inf;
 	struct mm_struct *mm = current->mm;
-	vm_fault_t fault, major = 0;
+	vm_fault_t fault;
 	unsigned long vm_flags = VM_ACCESS_FLAGS;
 	unsigned int mm_flags = FAULT_FLAG_DEFAULT;
 
@@ -516,8 +517,7 @@ static int __kprobes do_page_fault(unsigned long addr, unsigned int esr,
 #endif
 	}
 
-	fault = __do_page_fault(mm, addr, mm_flags, vm_flags);
-	major |= fault & VM_FAULT_MAJOR;
+	fault = __do_page_fault(mm, addr, mm_flags, vm_flags, regs);
 
 	/* Quick path to respond to signals */
 	if (fault_signal_pending(fault, regs)) {
@@ -538,25 +538,8 @@ static int __kprobes do_page_fault(unsigned long addr, unsigned int esr,
 	 * Handle the "normal" (no error) case first.
 	 */
 	if (likely(!(fault & (VM_FAULT_ERROR | VM_FAULT_BADMAP |
-			      VM_FAULT_BADACCESS)))) {
-		/*
-		 * Major/minor page fault accounting is only done
-		 * once. If we go through a retry, it is extremely
-		 * likely that the page will be found in page cache at
-		 * that point.
-		 */
-		if (major) {
-			current->maj_flt++;
-			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1, regs,
-				      addr);
-		} else {
-			current->min_flt++;
-			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1, regs,
-				      addr);
-		}
-
+			      VM_FAULT_BADACCESS))))
 		return 0;
-	}
 
 	/*
 	 * If we are in kernel mode at this point, we have no context to

From patchwork Tue Jul 7 22:50:02 2020
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 11650395
b=BJSEXJu9uzdeH8zUAHHHmcPRP/1F2zt/RwgVQiRMxt6cEJZMIwmfxKGar/97DRFus7xnmX 6OAEPeo2C1Q4e9jEhslRWI/V6a57X6JOqGP4FaLAJnGQ89tnuEy0VbqAEzoP4yZNwJx4ay ms1MLzPYsHGICiAzOBSXOZ1z1IhpVMQ= Received: from mail-qk1-f198.google.com (mail-qk1-f198.google.com [209.85.222.198]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-372-4dTwQpLSODKxQ-YNNT8n2A-1; Tue, 07 Jul 2020 18:50:34 -0400 X-MC-Unique: 4dTwQpLSODKxQ-YNNT8n2A-1 Received: by mail-qk1-f198.google.com with SMTP id i145so10757469qke.2 for ; Tue, 07 Jul 2020 15:50:34 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=Jxvjf7Q+v/mOPCYZqZl27IogKSwgytmGoijHqe+1Kyo=; b=bjndU17mmSDBMVOe6RN1Zwu5C6RSeq2dEdl+9emQiurTvmX5ekkhCzyUhw0+MqSPMB xrh37e8feWcdXT+abm58/3HHzbT89xbl9RelXXv2Xj741kODStr4vOfvEHYeLmOs1xvy m8F62QoOUIfdydU8p9461dbULe2GB9sVCOfiCLybX2I1YLTzDQpIPJiMQmUKNlqJTU/s bmFGQ472rofrf7b68f9kFIMhEdSQSJhAaJd4QDAfTLOSwOoin9qk0R+G/ayKyIYls4PM XSqohLdooG81RLMHN2wfYkxaSzXyLMz71jSvtVerUz3q2jzjkzolJuXrzqSFg8Z06rXR UV6w== X-Gm-Message-State: AOAM53270RWqROlagWJ4HYl5s+AQJy/XGFVhSkRrK/dc2mM9PhnhCSFK sVeMUU85mrIKL7EulECL4KoIgeBRveU8OGy8xYM9jwNP+L4aMvasumRZCLlpRYe1C0wY2dAn+X/ ls3el0VSCDZ8= X-Received: by 2002:ac8:16b2:: with SMTP id r47mr57607215qtj.273.1594162234487; Tue, 07 Jul 2020 15:50:34 -0700 (PDT) X-Google-Smtp-Source: ABdhPJzEUX4FOONaw2ZO8mmmd3zmBYLqC03cLLhe3ytu7deLkE6Bv0g/Xef2K2OGzTk+bX724w5ibw== X-Received: by 2002:ac8:16b2:: with SMTP id r47mr57607196qtj.273.1594162234214; Tue, 07 Jul 2020 15:50:34 -0700 (PDT) Received: from xz-x1.redhat.com ([2607:9880:19c0:32::2]) by smtp.gmail.com with ESMTPSA id j16sm26267642qtp.92.2020.07.07.15.50.32 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 07 Jul 2020 15:50:33 -0700 (PDT) From: Peter Xu To: linux-kernel@vger.kernel.org, linux-mm@kvack.org Cc: Gerald Schaefer , Linus Torvalds , peterx@redhat.com, Andrew Morton , Will Deacon , Andrea Arcangeli , David Rientjes , John Hubbard , Michael Ellerman , Guo Ren , linux-csky@vger.kernel.org Subject: [PATCH v5 06/25] mm/csky: Use general page fault accounting Date: Tue, 7 Jul 2020 18:50:02 -0400 Message-Id: <20200707225021.200906-7-peterx@redhat.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200707225021.200906-1-peterx@redhat.com> References: <20200707225021.200906-1-peterx@redhat.com> MIME-Version: 1.0 Authentication-Results: relay.mimecast.com; auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=peterx@redhat.com X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: redhat.com X-Rspamd-Queue-Id: 3385037608 X-Spamd-Result: default: False [0.00 / 100.00] X-Rspamd-Server: rspam02 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Use the general page fault accounting by passing regs into handle_mm_fault(). It naturally solve the issue of multiple page fault accounting when page fault retry happened. CC: Guo Ren CC: linux-csky@vger.kernel.org Acked-by: Guo Ren Signed-off-by: Peter Xu --- arch/csky/mm/fault.c | 12 +----------- 1 file changed, 1 insertion(+), 11 deletions(-) diff --git a/arch/csky/mm/fault.c b/arch/csky/mm/fault.c index 7137e2e8dc57..c3f580714ee4 100644 --- a/arch/csky/mm/fault.c +++ b/arch/csky/mm/fault.c @@ -151,7 +151,7 @@ asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long write, * the fault. */ fault = handle_mm_fault(vma, address, write ? 
FAULT_FLAG_WRITE : 0, - NULL); + regs); if (unlikely(fault & VM_FAULT_ERROR)) { if (fault & VM_FAULT_OOM) goto out_of_memory; @@ -161,16 +161,6 @@ asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long write, goto bad_area; BUG(); } - if (fault & VM_FAULT_MAJOR) { - tsk->maj_flt++; - perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1, regs, - address); - } else { - tsk->min_flt++; - perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1, regs, - address); - } - mmap_read_unlock(mm); return; From patchwork Tue Jul 7 22:50:03 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Xu X-Patchwork-Id: 11650393 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id C891F739 for ; Tue, 7 Jul 2020 22:50:46 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 94F4A207BB for ; Tue, 7 Jul 2020 22:50:46 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="izoeIlcM" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 94F4A207BB Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=redhat.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 909366B00AD; Tue, 7 Jul 2020 18:50:39 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 86D2F6B00AF; Tue, 7 Jul 2020 18:50:39 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 6BF326B00B0; Tue, 7 Jul 2020 18:50:39 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0118.hostedemail.com [216.40.44.118]) by kanga.kvack.org (Postfix) with ESMTP id 3BA776B00AD for ; Tue, 7 Jul 2020 18:50:39 -0400 (EDT) Received: from smtpin28.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id 06475180AD806 for ; Tue, 7 Jul 2020 22:50:39 +0000 (UTC) X-FDA: 77012775798.28.grip92_5c16e7226eb8 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin28.hostedemail.com (Postfix) with ESMTP id DD02E6D68 for ; Tue, 7 Jul 2020 22:50:38 +0000 (UTC) X-Spam-Summary: 1,0,0,,d41d8cd98f00b204,peterx@redhat.com,,RULES_HIT:30003:30054,0,RBL:205.139.110.120:@redhat.com:.lbl8.mailshell.net-66.10.201.10 62.18.0.100;04yrsi9bppx5t8a1u8xn7xswkwtp7yp59g6nhg4ytjim73beu9kijo618fbn7ug.r77ft7ixpupdzc7sy94dtuyf7jyy5nayy83537hi16nawdqw8n5igqsg6jdryi1.1-lbl8.mailshell.net-223.238.255.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:ft,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:26,LUA_SUMMARY:none X-HE-Tag: grip92_5c16e7226eb8 X-Filterd-Recvd-Size: 5809 Received: from us-smtp-1.mimecast.com (us-smtp-delivery-1.mimecast.com [205.139.110.120]) by imf37.hostedemail.com (Postfix) with ESMTP for ; Tue, 7 Jul 2020 22:50:38 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1594162238; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: 
From: Peter Xu
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Gerald Schaefer, Linus Torvalds, peterx@redhat.com, Andrew Morton, Will Deacon, Andrea Arcangeli, David Rientjes, John Hubbard, Michael Ellerman, Brian Cain, linux-hexagon@vger.kernel.org
Subject: [PATCH v5 07/25] mm/hexagon: Use general page fault accounting
Date: Tue, 7 Jul 2020 18:50:03 -0400
Message-Id: <20200707225021.200906-8-peterx@redhat.com>
In-Reply-To: <20200707225021.200906-1-peterx@redhat.com>
References: <20200707225021.200906-1-peterx@redhat.com>

Use the general page fault accounting by passing regs into handle_mm_fault(). It naturally solves the issue of multiple page fault accounting when a page fault is retried. Add the missing PERF_COUNT_SW_PAGE_FAULTS perf event too. Note that the other two perf events (PERF_COUNT_SW_PAGE_FAULTS_[MAJ|MIN]) are now done in handle_mm_fault().
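As a rough guide to what a converted handler ends up looking like, below is a condensed sketch modelled on the hexagon diff that follows. It is not the literal arch code (VMA validity checks, error paths and signal delivery are trimmed); the two points it illustrates are that PERF_COUNT_SW_PAGE_FAULTS fires once, before the retry loop, and that regs rather than NULL is handed to handle_mm_fault():

static void sketch_do_page_fault(unsigned long address, struct pt_regs *regs)
{
	struct mm_struct *mm = current->mm;
	unsigned int flags = FAULT_FLAG_DEFAULT;
	struct vm_area_struct *vma;
	vm_fault_t fault;

	if (user_mode(regs))
		flags |= FAULT_FLAG_USER;

	/* One fault == one event, no matter how many retries follow. */
	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
retry:
	mmap_read_lock(mm);
	vma = find_vma(mm, address);
	if (!vma || vma->vm_start > address) {
		mmap_read_unlock(mm);
		return;		/* the real handler raises a signal here */
	}

	/* regs instead of NULL: maj/min accounting now happens inside. */
	fault = handle_mm_fault(vma, address, flags, regs);
	if (fault_signal_pending(fault, regs))
		return;

	if (fault & VM_FAULT_RETRY) {
		flags |= FAULT_FLAG_TRIED;
		goto retry;	/* no second round of accounting */
	}

	mmap_read_unlock(mm);
}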
CC: Brian Cain CC: linux-hexagon@vger.kernel.org Acked-by: Brian Cain Signed-off-by: Peter Xu --- arch/hexagon/mm/vm_fault.c | 9 ++++----- 1 file changed, 4 insertions(+), 5 deletions(-) diff --git a/arch/hexagon/mm/vm_fault.c b/arch/hexagon/mm/vm_fault.c index f12f330e7946..ef32c5a84ff3 100644 --- a/arch/hexagon/mm/vm_fault.c +++ b/arch/hexagon/mm/vm_fault.c @@ -18,6 +18,7 @@ #include #include #include +#include /* * Decode of hardware exception sends us to one of several @@ -53,6 +54,8 @@ void do_page_fault(unsigned long address, long cause, struct pt_regs *regs) if (user_mode(regs)) flags |= FAULT_FLAG_USER; + + perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address); retry: mmap_read_lock(mm); vma = find_vma(mm, address); @@ -88,7 +91,7 @@ void do_page_fault(unsigned long address, long cause, struct pt_regs *regs) break; } - fault = handle_mm_fault(vma, address, flags, NULL); + fault = handle_mm_fault(vma, address, flags, regs); if (fault_signal_pending(fault, regs)) return; @@ -96,10 +99,6 @@ void do_page_fault(unsigned long address, long cause, struct pt_regs *regs) /* The most common case -- we are done. */ if (likely(!(fault & VM_FAULT_ERROR))) { if (flags & FAULT_FLAG_ALLOW_RETRY) { - if (fault & VM_FAULT_MAJOR) - current->maj_flt++; - else - current->min_flt++; if (fault & VM_FAULT_RETRY) { flags |= FAULT_FLAG_TRIED; goto retry; From patchwork Tue Jul 7 22:50:04 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Xu X-Patchwork-Id: 11650397 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 22A3313B6 for ; Tue, 7 Jul 2020 22:50:51 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id DA86820738 for ; Tue, 7 Jul 2020 22:50:50 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="YTUVrizS" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org DA86820738 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=redhat.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 163C76B00B1; Tue, 7 Jul 2020 18:50:41 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 0F8466B00B3; Tue, 7 Jul 2020 18:50:41 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id E34766B00B4; Tue, 7 Jul 2020 18:50:40 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0084.hostedemail.com [216.40.44.84]) by kanga.kvack.org (Postfix) with ESMTP id CA0816B00B1 for ; Tue, 7 Jul 2020 18:50:40 -0400 (EDT) Received: from smtpin04.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id 8EA84181AEF09 for ; Tue, 7 Jul 2020 22:50:40 +0000 (UTC) X-FDA: 77012775840.04.beef82_04163e426eb8 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin04.hostedemail.com (Postfix) with ESMTP id 696AE8006983 for ; Tue, 7 Jul 2020 22:50:40 +0000 (UTC) X-Spam-Summary: 
1,0,0,,d41d8cd98f00b204,peterx@redhat.com,,RULES_HIT:30003:30054,0,RBL:205.139.110.120:@redhat.com:.lbl8.mailshell.net-62.18.0.100 66.10.201.10;04yf3kizuzygejian6sbp1td4ybn5yca413oxcgo8uwsq94iobk1spdzqrq7hoc.s3ezs9hg1qd3nm7be1skz1td1bxwb5sxtgdo6hmk1yc7aq3jq96ryp3o9owym7t.6-lbl8.mailshell.net-223.238.255.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:ft,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:23,LUA_SUMMARY:none X-HE-Tag: beef82_04163e426eb8 X-Filterd-Recvd-Size: 5547 Received: from us-smtp-1.mimecast.com (us-smtp-delivery-1.mimecast.com [205.139.110.120]) by imf30.hostedemail.com (Postfix) with ESMTP for ; Tue, 7 Jul 2020 22:50:39 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1594162239; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=u4u5+MpkCsisdwspgoCKpfclb3FqsVFfOPYmJEG8Y9I=; b=YTUVrizSrDWzwQJAbzBeou3dGzF0cy4UTAHiZ0mImEqFgDq7+ShiXyJ0L4MDaBBBiNvn3Z Gdb05xs7kPvDtDB/uOMQgsLTvcoQuotFSTgs/SuH8/T5/0fulGBkTTnamBncJPZavBzXp5 3K1XG0Jszl1Yy1pJnVg4lizuojz+tIQ= Received: from mail-qk1-f199.google.com (mail-qk1-f199.google.com [209.85.222.199]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-414-aOBv6AGoOryRH5XI2bQgAg-1; Tue, 07 Jul 2020 18:50:38 -0400 X-MC-Unique: aOBv6AGoOryRH5XI2bQgAg-1 Received: by mail-qk1-f199.google.com with SMTP id o26so29626061qko.7 for ; Tue, 07 Jul 2020 15:50:37 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=u4u5+MpkCsisdwspgoCKpfclb3FqsVFfOPYmJEG8Y9I=; b=H3mrltNNoZy9lWqVnWvwc4/L02/G6uqEIYKZDq7LEQ4Bx98q0ivMgo7jQEl46kj7Yu KbGLZnO8apPMPF1QYtxe0bo90PXrVNnrKdCJeeAiHMB/y7BceS7E3vprVTs2j5ckWgpN LUAKCoaRM1BR8Pgyrwhb1+Qayw4jZMvwOcS58razHBMQDGo2qALl3i+Vwkq2YQUFA2F3 2yFOdI5Yd5Ih8LoWvX5QDcyM2XnaL2ABCaUwNwS62C9YnRIQrLCW0wxZoYJhwY6jao7p Tvu86XzAiN6rNsQJsGJ8tfWCpH61ak93iDrBjQHCeBK1EUzalTkOAktTmgJmdPKjgQw3 p61Q== X-Gm-Message-State: AOAM531sPYdHqA7ljPz9DgTGcdO3NfcLlYlveNDVaAWdF4dkdbLtVfi8 AKwlOFce7P7JsOlJ5HbnavoBS0lHMigWdzy9Am+/mycKhFnspz8VNFqB/+5ZxoYKKncu6TlNYkI Eb2PsBcjdGZE= X-Received: by 2002:a37:b141:: with SMTP id a62mr48027924qkf.201.1594162237586; Tue, 07 Jul 2020 15:50:37 -0700 (PDT) X-Google-Smtp-Source: ABdhPJwCDX4X568D3uYvvAIuwtoK14UDjY6KFaGDB2k0o/5J1j74ZdyFrKj0t3T0AAWQ2lBwZRcwWw== X-Received: by 2002:a37:b141:: with SMTP id a62mr48027910qkf.201.1594162237320; Tue, 07 Jul 2020 15:50:37 -0700 (PDT) Received: from xz-x1.redhat.com ([2607:9880:19c0:32::2]) by smtp.gmail.com with ESMTPSA id j16sm26267642qtp.92.2020.07.07.15.50.35 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 07 Jul 2020 15:50:36 -0700 (PDT) From: Peter Xu To: linux-kernel@vger.kernel.org, linux-mm@kvack.org Cc: Gerald Schaefer , Linus Torvalds , peterx@redhat.com, Andrew Morton , Will Deacon , Andrea Arcangeli , David Rientjes , John Hubbard , Michael Ellerman Subject: [PATCH v5 08/25] mm/ia64: Use general page fault accounting Date: Tue, 7 Jul 2020 18:50:04 -0400 Message-Id: <20200707225021.200906-9-peterx@redhat.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200707225021.200906-1-peterx@redhat.com> References: <20200707225021.200906-1-peterx@redhat.com> MIME-Version: 1.0 Authentication-Results: relay.mimecast.com; 
auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=peterx@redhat.com X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: redhat.com X-Rspamd-Queue-Id: 696AE8006983 X-Spamd-Result: default: False [0.00 / 100.00] X-Rspamd-Server: rspam01 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Use the general page fault accounting by passing regs into handle_mm_fault(). It naturally solve the issue of multiple page fault accounting when page fault retry happened. Add the missing PERF_COUNT_SW_PAGE_FAULTS perf events too. Note, the other two perf events (PERF_COUNT_SW_PAGE_FAULTS_[MAJ|MIN]) were done in handle_mm_fault(). Signed-off-by: Peter Xu --- arch/ia64/mm/fault.c | 9 ++++----- 1 file changed, 4 insertions(+), 5 deletions(-) diff --git a/arch/ia64/mm/fault.c b/arch/ia64/mm/fault.c index abf2808f9b4b..cd9766d2b6e0 100644 --- a/arch/ia64/mm/fault.c +++ b/arch/ia64/mm/fault.c @@ -14,6 +14,7 @@ #include #include #include +#include #include #include @@ -105,6 +106,8 @@ ia64_do_page_fault (unsigned long address, unsigned long isr, struct pt_regs *re flags |= FAULT_FLAG_USER; if (mask & VM_WRITE) flags |= FAULT_FLAG_WRITE; + + perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address); retry: mmap_read_lock(mm); @@ -143,7 +146,7 @@ ia64_do_page_fault (unsigned long address, unsigned long isr, struct pt_regs *re * sure we exit gracefully rather than endlessly redo the * fault. */ - fault = handle_mm_fault(vma, address, flags, NULL); + fault = handle_mm_fault(vma, address, flags, regs); if (fault_signal_pending(fault, regs)) return; @@ -166,10 +169,6 @@ ia64_do_page_fault (unsigned long address, unsigned long isr, struct pt_regs *re } if (flags & FAULT_FLAG_ALLOW_RETRY) { - if (fault & VM_FAULT_MAJOR) - current->maj_flt++; - else - current->min_flt++; if (fault & VM_FAULT_RETRY) { flags |= FAULT_FLAG_TRIED; From patchwork Tue Jul 7 22:50:05 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Xu X-Patchwork-Id: 11650401 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 30276739 for ; Tue, 7 Jul 2020 22:50:55 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id F0DAF20738 for ; Tue, 7 Jul 2020 22:50:54 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="YIV8lRrA" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org F0DAF20738 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=redhat.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 2EE096B00B5; Tue, 7 Jul 2020 18:50:45 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 206BD6B00B7; Tue, 7 Jul 2020 18:50:45 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id ECF716B00B8; Tue, 7 Jul 2020 18:50:44 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0138.hostedemail.com [216.40.44.138]) by kanga.kvack.org (Postfix) with ESMTP id 
D01306B00B5 for ; Tue, 7 Jul 2020 18:50:44 -0400 (EDT) Received: from smtpin21.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id 930F78248D52 for ; Tue, 7 Jul 2020 22:50:44 +0000 (UTC) X-FDA: 77012776008.21.month25_001126626eb8 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin21.hostedemail.com (Postfix) with ESMTP id 6FED1180442C2 for ; Tue, 7 Jul 2020 22:50:44 +0000 (UTC) X-Spam-Summary: 1,0,0,,d41d8cd98f00b204,peterx@redhat.com,,RULES_HIT:30003:30054,0,RBL:207.211.31.120:@redhat.com:.lbl8.mailshell.net-62.18.0.100 66.10.201.10;04yg7b3riun55xc86fa4ebz5xm3tkop6b8rdzbhtgygg6hiqggmxnt8yi6ubzqi.us4egy3prcjmnp138988p31917xjjr51cpxpxk6363jbatqwxoaioejxk4de3t9.6-lbl8.mailshell.net-223.238.255.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:ft,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:25,LUA_SUMMARY:none X-HE-Tag: month25_001126626eb8 X-Filterd-Recvd-Size: 5836 Received: from us-smtp-1.mimecast.com (us-smtp-delivery-1.mimecast.com [207.211.31.120]) by imf18.hostedemail.com (Postfix) with ESMTP for ; Tue, 7 Jul 2020 22:50:43 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1594162243; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=8f5TTLJDyENYlbFehaNnbtlaq4/fY2SMNvjmVOggAGU=; b=YIV8lRrAxbwW6WjJ++ggcXwlAS0CnbckWgXhzbrSek0JukDj6uBHq7V+f8UKBfigNe1l7q JiAFl2SCcgKA5Y4jzfxVrbNhZab5MXTkHSBs97KC6qo08Y6/nMVhb+TZGCJLR5MQNMW//K 2816oWQW0QhTALBiGGp/jDppXOL8moo= Received: from mail-qv1-f70.google.com (mail-qv1-f70.google.com [209.85.219.70]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-259-ODuzfHgDM7eoI0ATGnvQBw-1; Tue, 07 Jul 2020 18:50:39 -0400 X-MC-Unique: ODuzfHgDM7eoI0ATGnvQBw-1 Received: by mail-qv1-f70.google.com with SMTP id m18so14962174qvt.8 for ; Tue, 07 Jul 2020 15:50:39 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=8f5TTLJDyENYlbFehaNnbtlaq4/fY2SMNvjmVOggAGU=; b=WQ46MdwwZK+oPJ8XwO0LiQ/Yy59oqJQAjPKRKtnO+Qq+jsdEl5RxuwMxInP7i8PzgD jtX3G3G3oqwYnn1obOFFruzq33K4crNPXH36jbV0tIkUF0GTzMkMb+L6+l8UBEGrjlft MMbES+Z+JQ2eDkSj91pfmFitObDlcXyl4cF8A2Gf0bJvXwLT/Zj8EL1ArwREFXFE7i4j WUsvT2z7JOPtMduNoblqGs9vtVwXWIEOv6T67e7CTNEAbJllCSpM7DLT9Z2rSPTOQu7w uv7ltc7fQQC12Qls9dxM0kIfzoG/nyq9YV7BHmsRHhL0poCWPiZauAjJMxSpiVFOEw1i 7UsQ== X-Gm-Message-State: AOAM531Bg23TReJMROIE45jljNOJzGzfiTpXlCg3+e933V/p6VM9zD63 MNsIS85o3nXx7AGPhpNkKEoCoV0Q9564DGlCXW3wWsaffzEE5nz64ByDrVMxYA1GcWt4RPXCrxs eSftayh2pTOU= X-Received: by 2002:a05:6214:949:: with SMTP id dn9mr51711871qvb.116.1594162239229; Tue, 07 Jul 2020 15:50:39 -0700 (PDT) X-Google-Smtp-Source: ABdhPJxYnZ/sBwn9kw3lX9BrDVhoNgmUHTzKr+RCjgXjVMxWi/R7wstbktVvXocMev9XoXXGqSRu1Q== X-Received: by 2002:a05:6214:949:: with SMTP id dn9mr51711848qvb.116.1594162238945; Tue, 07 Jul 2020 15:50:38 -0700 (PDT) Received: from xz-x1.redhat.com ([2607:9880:19c0:32::2]) by smtp.gmail.com with ESMTPSA id j16sm26267642qtp.92.2020.07.07.15.50.37 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 07 Jul 2020 15:50:38 -0700 (PDT) From: Peter Xu To: linux-kernel@vger.kernel.org, linux-mm@kvack.org Cc: Gerald Schaefer , Linus Torvalds , 
peterx@redhat.com, Andrew Morton , Will Deacon , Andrea Arcangeli , David Rientjes , John Hubbard , Michael Ellerman , Geert Uytterhoeven , linux-m68k@lists.linux-m68k.org Subject: [PATCH v5 09/25] mm/m68k: Use general page fault accounting Date: Tue, 7 Jul 2020 18:50:05 -0400 Message-Id: <20200707225021.200906-10-peterx@redhat.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200707225021.200906-1-peterx@redhat.com> References: <20200707225021.200906-1-peterx@redhat.com> MIME-Version: 1.0 Authentication-Results: relay.mimecast.com; auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=peterx@redhat.com X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: redhat.com X-Rspamd-Queue-Id: 6FED1180442C2 X-Spamd-Result: default: False [0.00 / 100.00] X-Rspamd-Server: rspam04 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Use the general page fault accounting by passing regs into handle_mm_fault(). It naturally solve the issue of multiple page fault accounting when page fault retry happened. Add the missing PERF_COUNT_SW_PAGE_FAULTS perf events too. Note, the other two perf events (PERF_COUNT_SW_PAGE_FAULTS_[MAJ|MIN]) were done in handle_mm_fault(). CC: Geert Uytterhoeven CC: linux-m68k@lists.linux-m68k.org Signed-off-by: Peter Xu --- arch/m68k/mm/fault.c | 14 ++++---------- 1 file changed, 4 insertions(+), 10 deletions(-) diff --git a/arch/m68k/mm/fault.c b/arch/m68k/mm/fault.c index 08b35a318ebe..795f483b1050 100644 --- a/arch/m68k/mm/fault.c +++ b/arch/m68k/mm/fault.c @@ -12,6 +12,7 @@ #include #include #include +#include #include #include @@ -84,6 +85,8 @@ int do_page_fault(struct pt_regs *regs, unsigned long address, if (user_mode(regs)) flags |= FAULT_FLAG_USER; + + perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address); retry: mmap_read_lock(mm); @@ -134,7 +137,7 @@ int do_page_fault(struct pt_regs *regs, unsigned long address, * the fault. */ - fault = handle_mm_fault(vma, address, flags, NULL); + fault = handle_mm_fault(vma, address, flags, regs); pr_debug("handle_mm_fault returns %x\n", fault); if (fault_signal_pending(fault, regs)) @@ -150,16 +153,7 @@ int do_page_fault(struct pt_regs *regs, unsigned long address, BUG(); } - /* - * Major/minor page fault accounting is only done on the - * initial attempt. If we go through a retry, it is extremely - * likely that the page will be found in page cache at that point. 
- */ if (flags & FAULT_FLAG_ALLOW_RETRY) { - if (fault & VM_FAULT_MAJOR) - current->maj_flt++; - else - current->min_flt++; if (fault & VM_FAULT_RETRY) { flags |= FAULT_FLAG_TRIED; From patchwork Tue Jul 7 22:50:06 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Xu X-Patchwork-Id: 11650399 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 2C5C613B6 for ; Tue, 7 Jul 2020 22:50:53 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id E478520738 for ; Tue, 7 Jul 2020 22:50:52 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="TsdoNLji" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org E478520738 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=redhat.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 7800F6B00B3; Tue, 7 Jul 2020 18:50:44 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 709936B00B5; Tue, 7 Jul 2020 18:50:44 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 560F16B00B6; Tue, 7 Jul 2020 18:50:44 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0037.hostedemail.com [216.40.44.37]) by kanga.kvack.org (Postfix) with ESMTP id 2EFEF6B00B3 for ; Tue, 7 Jul 2020 18:50:44 -0400 (EDT) Received: from smtpin24.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id EF95F180AD806 for ; Tue, 7 Jul 2020 22:50:43 +0000 (UTC) X-FDA: 77012775966.24.month97_51059c926eb8 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin24.hostedemail.com (Postfix) with ESMTP id C59B51A4A5 for ; Tue, 7 Jul 2020 22:50:43 +0000 (UTC) X-Spam-Summary: 1,0,0,,d41d8cd98f00b204,peterx@redhat.com,,RULES_HIT:30003:30051:30054,0,RBL:205.139.110.120:@redhat.com:.lbl8.mailshell.net-62.18.0.100 66.10.201.10;04yr1xskcia89s4w7ortekkpx9tdioc63mrauohxj1sar98sj4umfu7hdjpsuen.r8durm3mqh5n8jf3gwf55oxui5qetjoy5hwnms5fjuxfwmxi6cqsmrh19ma4q5a.s-lbl8.mailshell.net-223.238.255.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:ft,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:157,LUA_SUMMARY:none X-HE-Tag: month97_51059c926eb8 X-Filterd-Recvd-Size: 5754 Received: from us-smtp-1.mimecast.com (us-smtp-delivery-1.mimecast.com [205.139.110.120]) by imf28.hostedemail.com (Postfix) with ESMTP for ; Tue, 7 Jul 2020 22:50:43 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1594162242; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=ntPp8bHOPIXhq64ZYjMBP9tKw1Ahb/2+VdQMPS01kXM=; b=TsdoNLjiNUdU6RQcMUXSGOPyWtFSIb1rZFkOlK0S2Xj1pxat/r3X3jE36c4OQxJS1KhbuZ 2TFzVrqJqVM4+xDiXPJCe26yVK6zpUZB7MSyntpYRA9TPXiDu0gtADD4D431SbO4ScuwyO C9VHJq9kv1d6rD6vCyDzsoAgz+0T+Lo= 
From: Peter Xu
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Gerald Schaefer, Linus Torvalds, peterx@redhat.com, Andrew Morton, Will Deacon, Andrea Arcangeli, David Rientjes, John Hubbard, Michael Ellerman, Michal Simek
Subject: [PATCH v5 10/25] mm/microblaze: Use general page fault accounting
Date: Tue, 7 Jul 2020 18:50:06 -0400
Message-Id: <20200707225021.200906-11-peterx@redhat.com>
In-Reply-To: <20200707225021.200906-1-peterx@redhat.com>
References: <20200707225021.200906-1-peterx@redhat.com>

Use the general page fault accounting by passing regs into handle_mm_fault(). It naturally solves the issue of multiple page fault accounting when a page fault is retried. Add the missing PERF_COUNT_SW_PAGE_FAULTS perf event too. Note that the other two perf events (PERF_COUNT_SW_PAGE_FAULTS_[MAJ|MIN]) are now done in handle_mm_fault().
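To make the "multiple accounting" problem concrete, here is the old per-arch pattern that patches like this one remove. The function name is invented for the example and the mmap locking around the retry is omitted; the control flow is what matters: the accounting block sits inside the retry loop, so a fault that comes back with VM_FAULT_RETRY is counted on the first attempt and again after the retry:

static vm_fault_t old_style_fault(struct vm_area_struct *vma,
				  unsigned long address, unsigned int flags,
				  struct pt_regs *regs)
{
	vm_fault_t fault;

retry:
	fault = handle_mm_fault(vma, address, flags, NULL);

	if (flags & FAULT_FLAG_ALLOW_RETRY) {
		/* This block ran on every attempt... */
		if (fault & VM_FAULT_MAJOR) {
			current->maj_flt++;
			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1,
				      regs, address);
		} else {
			current->min_flt++;
			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1,
				      regs, address);
		}
		if (fault & VM_FAULT_RETRY) {
			flags |= FAULT_FLAG_TRIED;
			/* ...so a retried fault was counted more than once. */
			goto retry;
		}
	}
	return fault;
}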
CC: Michal Simek Signed-off-by: Peter Xu --- arch/microblaze/mm/fault.c | 9 ++++----- 1 file changed, 4 insertions(+), 5 deletions(-) diff --git a/arch/microblaze/mm/fault.c b/arch/microblaze/mm/fault.c index 1a3d4c4ca28b..b3fed2cecf84 100644 --- a/arch/microblaze/mm/fault.c +++ b/arch/microblaze/mm/fault.c @@ -28,6 +28,7 @@ #include #include #include +#include #include #include @@ -121,6 +122,8 @@ void do_page_fault(struct pt_regs *regs, unsigned long address, if (user_mode(regs)) flags |= FAULT_FLAG_USER; + perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address); + /* When running in the kernel we expect faults to occur only to * addresses in user space. All other faults represent errors in the * kernel and should generate an OOPS. Unfortunately, in the case of an @@ -214,7 +217,7 @@ void do_page_fault(struct pt_regs *regs, unsigned long address, * make sure we exit gracefully rather than endlessly redo * the fault. */ - fault = handle_mm_fault(vma, address, flags, NULL); + fault = handle_mm_fault(vma, address, flags, regs); if (fault_signal_pending(fault, regs)) return; @@ -230,10 +233,6 @@ void do_page_fault(struct pt_regs *regs, unsigned long address, } if (flags & FAULT_FLAG_ALLOW_RETRY) { - if (unlikely(fault & VM_FAULT_MAJOR)) - current->maj_flt++; - else - current->min_flt++; if (fault & VM_FAULT_RETRY) { flags |= FAULT_FLAG_TRIED; From patchwork Tue Jul 7 22:50:07 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Xu X-Patchwork-Id: 11650407 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id D9F81739 for ; Tue, 7 Jul 2020 22:50:58 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id A5BE02075B for ; Tue, 7 Jul 2020 22:50:58 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="LlrHonly" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org A5BE02075B Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=redhat.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 3FE326B00B9; Tue, 7 Jul 2020 18:50:48 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 3D3DC6B00BB; Tue, 7 Jul 2020 18:50:48 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 227DC6B00BC; Tue, 7 Jul 2020 18:50:48 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0114.hostedemail.com [216.40.44.114]) by kanga.kvack.org (Postfix) with ESMTP id 036476B00B9 for ; Tue, 7 Jul 2020 18:50:47 -0400 (EDT) Received: from smtpin21.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id B8BA61EE6 for ; Tue, 7 Jul 2020 22:50:47 +0000 (UTC) X-FDA: 77012776134.21.stage29_0e16eba26eb8 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin21.hostedemail.com (Postfix) with ESMTP id 90AF5180442C0 for ; Tue, 7 Jul 2020 22:50:47 +0000 (UTC) X-Spam-Summary: 
1,0,0,,d41d8cd98f00b204,peterx@redhat.com,,RULES_HIT:30003:30054,0,RBL:205.139.110.61:@redhat.com:.lbl8.mailshell.net-62.18.0.100 66.10.201.10;04ygcwjjphq7gdg83e5tm1gk5h1dfopp1wwo89r7kek8ai6ttpgfqp1xjttr84s.9pmttyqtohgqgzc8i1w3ypapw38h3y6rctdg4sbahr8nr7na1xth3cdoncpws5g.r-lbl8.mailshell.net-223.238.255.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:ft,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:24,LUA_SUMMARY:none X-HE-Tag: stage29_0e16eba26eb8 X-Filterd-Recvd-Size: 5873 Received: from us-smtp-delivery-1.mimecast.com (us-smtp-2.mimecast.com [205.139.110.61]) by imf33.hostedemail.com (Postfix) with ESMTP for ; Tue, 7 Jul 2020 22:50:46 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1594162246; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=DEmzww7H5zywGqZA9zT7LUnhxIyJ8MI0ZWzVs6Wigj0=; b=LlrHonly0npK1rTAbZFhkUD3yxnRwRRtamqjRgxwGpPV5BMdV3T03uWtKBp2gZpTMUjkRi rXfM25SE86J76E07MLPrTTkAuEGlFl3OkAyEX2TIlmXxCTEfCT/0l3yhmu2v7vV7SiUA5B pTZE+Mz3ijGlkb4VAeaUwMWJef/Tngc= Received: from mail-qk1-f199.google.com (mail-qk1-f199.google.com [209.85.222.199]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-224-YVwTqfA4OH2a7S21Vz4Ezg-1; Tue, 07 Jul 2020 18:50:42 -0400 X-MC-Unique: YVwTqfA4OH2a7S21Vz4Ezg-1 Received: by mail-qk1-f199.google.com with SMTP id f79so23562292qke.9 for ; Tue, 07 Jul 2020 15:50:42 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=DEmzww7H5zywGqZA9zT7LUnhxIyJ8MI0ZWzVs6Wigj0=; b=Br5/rbtOs4sh338EQg1D1JM+5tBzN32x1yjzCR7bXwRTiGUHkwUvlW3wGBerrzP6bO ssv+GyGe10giJSbRGPFn70KMq4xLrxQxia20JBolO/AZVMxSIsJDBPwaJNS2B+f9ysTu /DM1whD18cYi0jg0l2Tv5AC/RnNNqcZ+SQggD84Grp3hNQmiQTUH5oLxof3c7+KCxC+h 0tg6U6xtEC8G4VXW3qwWETjVqRbZ1SDDNMPR/ObzQ2uzXXPiFwLfMJx5Gg4NIazjoOl7 0C+wz9CWrDWmbV1Q03VrZ7FYd9dgUt7WTEh50CiYAhrAI7AcgJtVkhW7v6i4Q4oY4QmO hUyQ== X-Gm-Message-State: AOAM532glAEleqzgRr5l28ozOAp0qd1ABttCv3xC4J4fMP0cK6V6+21i 6EtQpF6n19bmOVbUo1ZAezArKVCNg4TfaGL9anuvvQKN/olozIia+qBubcvaUlftZME39YLbFlg CyFNmsTlQD9o= X-Received: by 2002:a05:620a:6c9:: with SMTP id 9mr52692039qky.271.1594162242429; Tue, 07 Jul 2020 15:50:42 -0700 (PDT) X-Google-Smtp-Source: ABdhPJy1qTJEBcfEtMVTs2oSUGxcvlRB7kFM/hzR/mKCkF4/EjMMmKG7vtyho7cDDf2sOJlHl1+CRg== X-Received: by 2002:a05:620a:6c9:: with SMTP id 9mr52692017qky.271.1594162242180; Tue, 07 Jul 2020 15:50:42 -0700 (PDT) Received: from xz-x1.redhat.com ([2607:9880:19c0:32::2]) by smtp.gmail.com with ESMTPSA id j16sm26267642qtp.92.2020.07.07.15.50.40 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 07 Jul 2020 15:50:41 -0700 (PDT) From: Peter Xu To: linux-kernel@vger.kernel.org, linux-mm@kvack.org Cc: Gerald Schaefer , Linus Torvalds , peterx@redhat.com, Andrew Morton , Will Deacon , Andrea Arcangeli , David Rientjes , John Hubbard , Michael Ellerman , Thomas Bogendoerfer , linux-mips@vger.kernel.org Subject: [PATCH v5 11/25] mm/mips: Use general page fault accounting Date: Tue, 7 Jul 2020 18:50:07 -0400 Message-Id: <20200707225021.200906-12-peterx@redhat.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200707225021.200906-1-peterx@redhat.com> References: <20200707225021.200906-1-peterx@redhat.com> MIME-Version: 
1.0 Authentication-Results: relay.mimecast.com; auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=peterx@redhat.com X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: redhat.com X-Rspamd-Queue-Id: 90AF5180442C0 X-Spamd-Result: default: False [0.00 / 100.00] X-Rspamd-Server: rspam05 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Use the general page fault accounting by passing regs into handle_mm_fault(). It naturally solve the issue of multiple page fault accounting when page fault retry happened. Fix PERF_COUNT_SW_PAGE_FAULTS perf event manually for page fault retries, by moving it before taking mmap_sem. CC: Thomas Bogendoerfer CC: linux-mips@vger.kernel.org Acked-by: Thomas Bogendoerfer Signed-off-by: Peter Xu --- arch/mips/mm/fault.c | 14 +++----------- 1 file changed, 3 insertions(+), 11 deletions(-) diff --git a/arch/mips/mm/fault.c b/arch/mips/mm/fault.c index b1db39784db9..7c871b14e74a 100644 --- a/arch/mips/mm/fault.c +++ b/arch/mips/mm/fault.c @@ -96,6 +96,8 @@ static void __kprobes __do_page_fault(struct pt_regs *regs, unsigned long write, if (user_mode(regs)) flags |= FAULT_FLAG_USER; + + perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address); retry: mmap_read_lock(mm); vma = find_vma(mm, address); @@ -152,12 +154,11 @@ static void __kprobes __do_page_fault(struct pt_regs *regs, unsigned long write, * make sure we exit gracefully rather than endlessly redo * the fault. */ - fault = handle_mm_fault(vma, address, flags, NULL); + fault = handle_mm_fault(vma, address, flags, regs); if (fault_signal_pending(fault, regs)) return; - perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address); if (unlikely(fault & VM_FAULT_ERROR)) { if (fault & VM_FAULT_OOM) goto out_of_memory; @@ -168,15 +169,6 @@ static void __kprobes __do_page_fault(struct pt_regs *regs, unsigned long write, BUG(); } if (flags & FAULT_FLAG_ALLOW_RETRY) { - if (fault & VM_FAULT_MAJOR) { - perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1, - regs, address); - tsk->maj_flt++; - } else { - perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1, - regs, address); - tsk->min_flt++; - } if (fault & VM_FAULT_RETRY) { flags |= FAULT_FLAG_TRIED; From patchwork Tue Jul 7 22:50:08 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Xu X-Patchwork-Id: 11650405 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 0B0BB739 for ; Tue, 7 Jul 2020 22:50:57 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id CBE132075B for ; Tue, 7 Jul 2020 22:50:56 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="TAs3d0lf" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org CBE132075B Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=redhat.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 9076F6B00B7; Tue, 7 Jul 2020 18:50:47 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 8932B6B00B9; Tue, 7 Jul 2020 18:50:47 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: 
int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 692C86B00BA; Tue, 7 Jul 2020 18:50:47 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0101.hostedemail.com [216.40.44.101]) by kanga.kvack.org (Postfix) with ESMTP id 506B46B00B7 for ; Tue, 7 Jul 2020 18:50:47 -0400 (EDT) Received: from smtpin25.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id 1BD68180AD806 for ; Tue, 7 Jul 2020 22:50:47 +0000 (UTC) X-FDA: 77012776134.25.hall56_1f0e7a826eb8 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin25.hostedemail.com (Postfix) with ESMTP id D906B1804E3A8 for ; Tue, 7 Jul 2020 22:50:46 +0000 (UTC) X-Spam-Summary: 1,0,0,,d41d8cd98f00b204,peterx@redhat.com,,RULES_HIT:30012:30051:30054:30090,0,RBL:207.211.31.81:@redhat.com:.lbl8.mailshell.net-66.10.201.10 62.18.0.100;04ygj1zk1m3r4zk5ym5mrgpzczk1fyprthxih1a6acuhh1dong6af5f9xckn1ye.p3hi5aixhfh3yf9em4fh6xdn5sfr951y3ie8ofo36cme1ohjgtfada83z3t7jm7.k-lbl8.mailshell.net-223.238.255.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:ft,MSBL:0,DNSBL:none,Custom_rules:0:0:0,LFtime:24,LUA_SUMMARY:none X-HE-Tag: hall56_1f0e7a826eb8 X-Filterd-Recvd-Size: 6029 Received: from us-smtp-delivery-1.mimecast.com (us-smtp-2.mimecast.com [207.211.31.81]) by imf31.hostedemail.com (Postfix) with ESMTP for ; Tue, 7 Jul 2020 22:50:46 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1594162245; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=/WN5suCL4vjaI1eJdzhaIBmykla0j2zqu4dhyFqEqPg=; b=TAs3d0lfBg/rKkGBPN42dd2Z+0ejwl5/iIhJiKajdPfUjK4DwGfKhEIjto0vACglOyUU+o lNStu/uUYl7ejFcoiEFpJqhWC4CA06mDTCl6hnSg5DDRKCVJxqsFkddExhhHVIq4U+n2Fn GzVHrhjLFUUCfjSn9y6b7wthPd4VZJQ= Received: from mail-qk1-f198.google.com (mail-qk1-f198.google.com [209.85.222.198]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-143-4ikJUOmXMjSwEdoixrkUVg-1; Tue, 07 Jul 2020 18:50:44 -0400 X-MC-Unique: 4ikJUOmXMjSwEdoixrkUVg-1 Received: by mail-qk1-f198.google.com with SMTP id 204so29653255qki.20 for ; Tue, 07 Jul 2020 15:50:44 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=/WN5suCL4vjaI1eJdzhaIBmykla0j2zqu4dhyFqEqPg=; b=Xu+SQNCKkn4zIL1M09Xpsq8CDmbkf/QhfFFubZn+CCbrQl5+nvsYIqOBsypnqqmDC5 Vtpn7RJw4Dc/jr6SYdzAIdVmxQlWVO2a9DOEokkx5gH4FzEy9cY/ti/MnJ+pmuiaXTgy gGUZ/tR65nZQnvjGhj+KT7ZiENxgYh5e1MzyIhbf75oh8ygqWVLvdGVZpTGLj3WXVbs1 ai77Kl+dAVHVTHpA27DSPmJdV2p8Dz9kF6/qmc/wFhX4IzG1JOMX9WuFCsUkrxnwmsYU pSZk0ZCgO6CzOmVt5fwjI7hrTcwYO73LYV+xySxlfV+aMBT9yTxvS6Owp5EuZE4yJaIl oUDw== X-Gm-Message-State: AOAM533ubUBoB8OcllznlDMfrfzRGZBFuVdILY/LX0ZoNcqQVVpI4ae6 uov0P4WEq68PQAfmp5gEmLKeABN+jaU2xaa1kyGZb9PxVpj2EcdlWUPXF2r9GLsRqORo0pQq8HJ DzyhVq91E2WI= X-Received: by 2002:ad4:5912:: with SMTP id ez18mr44655480qvb.24.1594162243936; Tue, 07 Jul 2020 15:50:43 -0700 (PDT) X-Google-Smtp-Source: ABdhPJyCsUVFm5+NKahCqT/VXt5L+qHNyxIcGiPL88zMsimW5b6qIOMZxSStxnN8FyeLMPOOGmbLRQ== X-Received: by 2002:ad4:5912:: with SMTP id ez18mr44655458qvb.24.1594162243712; Tue, 07 Jul 2020 15:50:43 -0700 
From: Peter Xu
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Gerald Schaefer, Linus Torvalds, peterx@redhat.com, Andrew Morton,
    Will Deacon, Andrea Arcangeli, David Rientjes, John Hubbard,
    Michael Ellerman, Nick Hu, Greentime Hu, Vincent Chen
Subject: [PATCH v5 12/25] mm/nds32: Use general page fault accounting
Date: Tue, 7 Jul 2020 18:50:08 -0400
Message-Id: <20200707225021.200906-13-peterx@redhat.com>
In-Reply-To: <20200707225021.200906-1-peterx@redhat.com>
References: <20200707225021.200906-1-peterx@redhat.com>

Use the general page fault accounting by passing regs into
handle_mm_fault().  It naturally solves the issue of a page fault being
accounted more than once when the fault is retried.  Fix the
PERF_COUNT_SW_PAGE_FAULTS perf event manually for page fault retries by
moving it before taking mmap_sem.

CC: Nick Hu
CC: Greentime Hu
CC: Vincent Chen
Acked-by: Greentime Hu
Signed-off-by: Peter Xu
---
 arch/nds32/mm/fault.c | 19 +++----------------
 1 file changed, 3 insertions(+), 16 deletions(-)

diff --git a/arch/nds32/mm/fault.c b/arch/nds32/mm/fault.c
index d0ecc8fb5b23..f02524eb6d56 100644
--- a/arch/nds32/mm/fault.c
+++ b/arch/nds32/mm/fault.c
@@ -121,6 +121,8 @@ void do_page_fault(unsigned long entry, unsigned long addr,
 	if (unlikely(faulthandler_disabled() || !mm))
 		goto no_context;

+	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, addr);
+
 	/*
 	 * As per x86, we may deadlock here. However, since the kernel only
 	 * validly references user space from well defined areas of the code,
@@ -206,7 +208,7 @@ void do_page_fault(unsigned long entry, unsigned long addr,
 	 * the fault.
 	 */

-	fault = handle_mm_fault(vma, addr, flags, NULL);
+	fault = handle_mm_fault(vma, addr, flags, regs);

 	/*
 	 * If we need to retry but a fatal signal is pending, handle the
@@ -228,22 +230,7 @@ void do_page_fault(unsigned long entry, unsigned long addr,
 			goto bad_area;
 	}

-	/*
-	 * Major/minor page fault accounting is only done on the initial
-	 * attempt. If we go through a retry, it is extremely likely that the
-	 * page will be found in page cache at that point.
-	 */
-	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, addr);
 	if (flags & FAULT_FLAG_ALLOW_RETRY) {
-		if (fault & VM_FAULT_MAJOR) {
-			tsk->maj_flt++;
-			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ,
-				      1, regs, addr);
-		} else {
-			tsk->min_flt++;
-			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN,
-				      1, regs, addr);
-		}
 		if (fault & VM_FAULT_RETRY) {
 			flags |= FAULT_FLAG_TRIED;

From patchwork Tue Jul 7 22:50:09 2020
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 11650409
From: Peter Xu
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Gerald Schaefer, Linus Torvalds, peterx@redhat.com, Andrew Morton,
    Will Deacon, Andrea Arcangeli, David Rientjes, John Hubbard,
    Michael Ellerman, Ley Foon Tan
Subject: [PATCH v5 13/25] mm/nios2: Use general page fault accounting
Date: Tue, 7 Jul 2020 18:50:09 -0400
Message-Id: <20200707225021.200906-14-peterx@redhat.com>
In-Reply-To: <20200707225021.200906-1-peterx@redhat.com>
References: <20200707225021.200906-1-peterx@redhat.com>

Use the general page fault accounting by passing regs into
handle_mm_fault().  It naturally solves the issue of a page fault being
accounted more than once when the fault is retried.  Add the missing
PERF_COUNT_SW_PAGE_FAULTS perf event too.  Note that the other two perf
events (PERF_COUNT_SW_PAGE_FAULTS_[MAJ|MIN]) are now handled in
handle_mm_fault().
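For reference, the accounting that handle_mm_fault() performs once regs is
passed in can be summarized by the simplified sketch below.  This is an
illustration of the idea only, not the exact helper added by the series
(the real code lives in mm/memory.c; the helper name here is made up and
the exact set of skipped cases may differ):

#include <linux/mm.h>
#include <linux/perf_event.h>
#include <linux/sched.h>

/* Simplified sketch of the generic per-fault accounting (illustrative). */
static void account_mm_fault_sketch(struct pt_regs *regs,
                                    unsigned long address,
                                    unsigned int flags, vm_fault_t ret)
{
        bool major;

        /* A NULL regs means "don't account", which keeps the old behaviour. */
        if (!regs)
                return;

        /* Skip failed faults and not-yet-completed (to-be-retried) faults. */
        if (ret & (VM_FAULT_ERROR | VM_FAULT_RETRY))
                return;

        /* A fault that needed a retry counts as major once it completes. */
        major = (ret & VM_FAULT_MAJOR) || (flags & FAULT_FLAG_TRIED);

        if (major) {
                current->maj_flt++;
                perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1, regs, address);
        } else {
                current->min_flt++;
                perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1, regs, address);
        }
}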
CC: Ley Foon Tan
Signed-off-by: Peter Xu
---
 arch/nios2/mm/fault.c | 14 ++++----------
 1 file changed, 4 insertions(+), 10 deletions(-)

diff --git a/arch/nios2/mm/fault.c b/arch/nios2/mm/fault.c
index 86beb9a2698e..9476feecf512 100644
--- a/arch/nios2/mm/fault.c
+++ b/arch/nios2/mm/fault.c
@@ -24,6 +24,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -83,6 +84,8 @@ asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long cause,
 	if (user_mode(regs))
 		flags |= FAULT_FLAG_USER;

+	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
+
 	if (!mmap_read_trylock(mm)) {
 		if (!user_mode(regs) && !search_exception_tables(regs->ea))
 			goto bad_area_nosemaphore;
@@ -131,7 +134,7 @@ asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long cause,
 	 * make sure we exit gracefully rather than endlessly redo
 	 * the fault.
 	 */
-	fault = handle_mm_fault(vma, address, flags, NULL);
+	fault = handle_mm_fault(vma, address, flags, regs);

 	if (fault_signal_pending(fault, regs))
 		return;
@@ -146,16 +149,7 @@ asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long cause,
 			BUG();
 	}

-	/*
-	 * Major/minor page fault accounting is only done on the
-	 * initial attempt. If we go through a retry, it is extremely
-	 * likely that the page will be found in page cache at that point.
-	 */
 	if (flags & FAULT_FLAG_ALLOW_RETRY) {
-		if (fault & VM_FAULT_MAJOR)
-			current->maj_flt++;
-		else
-			current->min_flt++;
 		if (fault & VM_FAULT_RETRY) {
 			flags |= FAULT_FLAG_TRIED;

From patchwork Tue Jul 7 22:50:10 2020
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 11650415
From: Peter Xu
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Gerald Schaefer, Linus Torvalds, peterx@redhat.com, Andrew Morton,
    Will Deacon, Andrea Arcangeli, David Rientjes, John Hubbard,
    Michael Ellerman, Jonas Bonn, Stefan Kristiansson, Stafford Horne,
    openrisc@lists.librecores.org
Subject: [PATCH v5 14/25] mm/openrisc: Use general page fault accounting
Date: Tue, 7 Jul 2020 18:50:10 -0400
Message-Id: <20200707225021.200906-15-peterx@redhat.com>
In-Reply-To: <20200707225021.200906-1-peterx@redhat.com>
References: <20200707225021.200906-1-peterx@redhat.com>

Use the general page fault accounting by passing regs into
handle_mm_fault().  It naturally solves the issue of a page fault being
accounted more than once when the fault is retried.  Add the missing
PERF_COUNT_SW_PAGE_FAULTS perf event too.  Note that the other two perf
events (PERF_COUNT_SW_PAGE_FAULTS_[MAJ|MIN]) are now handled in
handle_mm_fault().

CC: Jonas Bonn
CC: Stefan Kristiansson
CC: Stafford Horne
CC: openrisc@lists.librecores.org
Acked-by: Stafford Horne
Signed-off-by: Peter Xu
---
 arch/openrisc/mm/fault.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/arch/openrisc/mm/fault.c b/arch/openrisc/mm/fault.c
index 3daa491d1edb..ca97d9baab51 100644
--- a/arch/openrisc/mm/fault.c
+++ b/arch/openrisc/mm/fault.c
@@ -15,6 +15,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -103,6 +104,8 @@ asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long address,
 	if (in_interrupt() || !mm)
 		goto no_context;

+	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
+
 retry:
 	mmap_read_lock(mm);
 	vma = find_vma(mm, address);
@@ -159,7 +162,7 @@ asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long address,
 	 * the fault.
 	 */

-	fault = handle_mm_fault(vma, address, flags, NULL);
+	fault = handle_mm_fault(vma, address, flags, regs);

 	if (fault_signal_pending(fault, regs))
 		return;
@@ -176,10 +179,6 @@ asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long address,

 	if (flags & FAULT_FLAG_ALLOW_RETRY) {
 		/*RGD modeled on Cris */
-		if (fault & VM_FAULT_MAJOR)
-			tsk->maj_flt++;
-		else
-			tsk->min_flt++;
 		if (fault & VM_FAULT_RETRY) {
 			flags |= FAULT_FLAG_TRIED;

From patchwork Tue Jul 7 22:50:11 2020
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 11650411
From: Peter Xu
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Gerald Schaefer, Linus Torvalds, peterx@redhat.com, Andrew Morton,
    Will Deacon, Andrea Arcangeli, David Rientjes, John Hubbard,
    Michael Ellerman, "James E. J. Bottomley", Helge Deller,
    linux-parisc@vger.kernel.org
Subject: [PATCH v5 15/25] mm/parisc: Use general page fault accounting
Date: Tue, 7 Jul 2020 18:50:11 -0400
Message-Id: <20200707225021.200906-16-peterx@redhat.com>
In-Reply-To: <20200707225021.200906-1-peterx@redhat.com>
References: <20200707225021.200906-1-peterx@redhat.com>

Use the general page fault accounting by passing regs into
handle_mm_fault().  It naturally solves the issue of a page fault being
accounted more than once when the fault is retried.  Add the missing
PERF_COUNT_SW_PAGE_FAULTS perf event too.  Note that the other two perf
events (PERF_COUNT_SW_PAGE_FAULTS_[MAJ|MIN]) are now handled in
handle_mm_fault().

CC: James E.J. Bottomley
CC: Helge Deller
CC: linux-parisc@vger.kernel.org
Signed-off-by: Peter Xu
---
 arch/parisc/mm/fault.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/arch/parisc/mm/fault.c b/arch/parisc/mm/fault.c
index e32d06928c24..4bfe2da9fbe3 100644
--- a/arch/parisc/mm/fault.c
+++ b/arch/parisc/mm/fault.c
@@ -18,6 +18,7 @@
 #include
 #include
 #include
+#include
 #include
@@ -281,6 +282,7 @@ void do_page_fault(struct pt_regs *regs, unsigned long code,
 	acc_type = parisc_acctyp(code, regs->iir);
 	if (acc_type & VM_WRITE)
 		flags |= FAULT_FLAG_WRITE;
+	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
 retry:
 	mmap_read_lock(mm);
 	vma = find_vma_prev(mm, address, &prev_vma);
@@ -302,7 +304,7 @@ void do_page_fault(struct pt_regs *regs, unsigned long code,
 	 * fault.
 	 */

-	fault = handle_mm_fault(vma, address, flags, NULL);
+	fault = handle_mm_fault(vma, address, flags, regs);

 	if (fault_signal_pending(fault, regs))
 		return;
@@ -323,10 +325,6 @@ void do_page_fault(struct pt_regs *regs, unsigned long code,
 			BUG();
 	}
 	if (flags & FAULT_FLAG_ALLOW_RETRY) {
-		if (fault & VM_FAULT_MAJOR)
-			current->maj_flt++;
-		else
-			current->min_flt++;
 		if (fault & VM_FAULT_RETRY) {
 			/*
 			 * No need to mmap_read_unlock(mm) as we would

From patchwork Tue Jul 7 22:50:12 2020
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 11650413
From: Peter Xu
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Gerald Schaefer, Linus Torvalds, peterx@redhat.com, Andrew Morton,
    Will Deacon, Andrea Arcangeli, David Rientjes, John Hubbard,
    Michael Ellerman, Benjamin Herrenschmidt, Paul Mackerras,
    linuxppc-dev@lists.ozlabs.org
Subject: [PATCH v5 16/25] mm/powerpc: Use general page fault accounting
Date: Tue, 7 Jul 2020 18:50:12 -0400
Message-Id: <20200707225021.200906-17-peterx@redhat.com>
In-Reply-To: <20200707225021.200906-1-peterx@redhat.com>
References: <20200707225021.200906-1-peterx@redhat.com>

Use the general page fault accounting by passing regs into
handle_mm_fault().
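Across the architectures converted by this series the change has the same
shape.  The condensed sketch below only illustrates that shape; it is not
taken from any particular architecture, the function name is made up, and
the access checks and error paths that every real handler has are elided:

#include <linux/mm.h>
#include <linux/perf_event.h>
#include <linux/sched.h>

/* Illustrative only: the common shape of an arch fault handler
 * after this series.
 */
void example_do_page_fault(struct pt_regs *regs, unsigned long address)
{
        struct mm_struct *mm = current->mm;
        unsigned int flags = FAULT_FLAG_DEFAULT;
        struct vm_area_struct *vma;
        vm_fault_t fault;

        /* Counted once per fault, no matter how many times we retry. */
        perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);

retry:
        mmap_read_lock(mm);
        vma = find_vma(mm, address);

        /*
         * Passing regs lets the core code bump maj_flt/min_flt and emit
         * PERF_COUNT_SW_PAGE_FAULTS_{MAJ,MIN}; the arch code no longer does.
         */
        fault = handle_mm_fault(vma, address, flags, regs);

        if (fault & VM_FAULT_RETRY) {
                /* The mmap lock was already dropped by the core code. */
                flags |= FAULT_FLAG_TRIED;
                goto retry;
        }

        mmap_read_unlock(mm);
}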
CC: Michael Ellerman
CC: Benjamin Herrenschmidt
CC: Paul Mackerras
CC: linuxppc-dev@lists.ozlabs.org
Acked-by: Michael Ellerman
Signed-off-by: Peter Xu
---
 arch/powerpc/mm/fault.c | 11 +++--------
 1 file changed, 3 insertions(+), 8 deletions(-)

diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index 25dee001d8e1..00259e9b452d 100644
--- a/arch/powerpc/mm/fault.c
+++ b/arch/powerpc/mm/fault.c
@@ -607,7 +607,7 @@ static int __do_page_fault(struct pt_regs *regs, unsigned long address,
 	 * make sure we exit gracefully rather than endlessly redo
 	 * the fault.
 	 */
-	fault = handle_mm_fault(vma, address, flags, NULL);
+	fault = handle_mm_fault(vma, address, flags, regs);

 	major |= fault & VM_FAULT_MAJOR;

@@ -633,14 +633,9 @@ static int __do_page_fault(struct pt_regs *regs, unsigned long address,
 	/*
 	 * Major/minor page fault accounting.
 	 */
-	if (major) {
-		current->maj_flt++;
-		perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1, regs, address);
+	if (major)
 		cmo_account_page_fault();
-	} else {
-		current->min_flt++;
-		perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1, regs, address);
-	}
+
 	return 0;
 }
 NOKPROBE_SYMBOL(__do_page_fault);

From patchwork Tue Jul 7 22:50:13 2020
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 11650417
From: Peter Xu
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Gerald Schaefer, Linus Torvalds, peterx@redhat.com, Andrew Morton,
    Will Deacon, Andrea Arcangeli, David Rientjes, John Hubbard,
    Michael Ellerman, Paul Walmsley, Palmer Dabbelt, Albert Ou,
    linux-riscv@lists.infradead.org, Pekka Enberg
Subject: [PATCH v5 17/25] mm/riscv: Use general page fault accounting
Date: Tue, 7 Jul 2020 18:50:13 -0400
Message-Id: <20200707225021.200906-18-peterx@redhat.com>
In-Reply-To: <20200707225021.200906-1-peterx@redhat.com>
References: <20200707225021.200906-1-peterx@redhat.com>

Use the general page fault accounting by passing regs into
handle_mm_fault().  It naturally solves the issue of a page fault being
accounted more than once when the fault is retried.

CC: Paul Walmsley
CC: Palmer Dabbelt
CC: Albert Ou
CC: linux-riscv@lists.infradead.org
Reviewed-by: Pekka Enberg
Signed-off-by: Peter Xu
---
 arch/riscv/mm/fault.c | 16 +---------------
 1 file changed, 1 insertion(+), 15 deletions(-)

diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c
index 30c1124d0fb6..716d64e36f83 100644
--- a/arch/riscv/mm/fault.c
+++ b/arch/riscv/mm/fault.c
@@ -109,7 +109,7 @@ asmlinkage void do_page_fault(struct pt_regs *regs)
 	 * make sure we exit gracefully rather than endlessly redo
 	 * the fault.
 	 */
-	fault = handle_mm_fault(vma, addr, flags, NULL);
+	fault = handle_mm_fault(vma, addr, flags, regs);

 	/*
 	 * If we need to retry but a fatal signal is pending, handle the
@@ -127,21 +127,7 @@ asmlinkage void do_page_fault(struct pt_regs *regs)
 		BUG();
 	}

-	/*
-	 * Major/minor page fault accounting is only done on the
-	 * initial attempt. If we go through a retry, it is extremely
-	 * likely that the page will be found in page cache at that point.
-	 */
 	if (flags & FAULT_FLAG_ALLOW_RETRY) {
-		if (fault & VM_FAULT_MAJOR) {
-			tsk->maj_flt++;
-			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ,
-				      1, regs, addr);
-		} else {
-			tsk->min_flt++;
-			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN,
-				      1, regs, addr);
-		}
 		if (fault & VM_FAULT_RETRY) {
 			flags |= FAULT_FLAG_TRIED;

From patchwork Tue Jul 7 22:50:14 2020
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 11650421
From: Peter Xu
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Gerald Schaefer, Linus Torvalds, peterx@redhat.com, Andrew Morton,
    Will Deacon, Andrea Arcangeli, David Rientjes, John Hubbard,
    Michael Ellerman, Heiko Carstens, Vasily Gorbik,
    Christian Borntraeger, linux-s390@vger.kernel.org
Subject: [PATCH v5 18/25] mm/s390: Use general page fault accounting
Date: Tue, 7 Jul 2020 18:50:14 -0400
Message-Id: <20200707225021.200906-19-peterx@redhat.com>
In-Reply-To: <20200707225021.200906-1-peterx@redhat.com>
References: <20200707225021.200906-1-peterx@redhat.com>

Use the general page fault accounting by passing regs into
handle_mm_fault().  It naturally solves the issue of a page fault being
accounted more than once when the fault is retried.

CC: Heiko Carstens
CC: Vasily Gorbik
CC: Christian Borntraeger
CC: linux-s390@vger.kernel.org
Reviewed-by: Gerald Schaefer
Acked-by: Gerald Schaefer
Signed-off-by: Peter Xu
---
 arch/s390/mm/fault.c | 16 +---------------
 1 file changed, 1 insertion(+), 15 deletions(-)

diff --git a/arch/s390/mm/fault.c b/arch/s390/mm/fault.c
index fc14df0b4d6e..9aa201df2e94 100644
--- a/arch/s390/mm/fault.c
+++ b/arch/s390/mm/fault.c
@@ -478,7 +478,7 @@ static inline vm_fault_t do_exception(struct pt_regs *regs, int access)
 	 * make sure we exit gracefully rather than endlessly redo
 	 * the fault.
 	 */
-	fault = handle_mm_fault(vma, address, flags, NULL);
+	fault = handle_mm_fault(vma, address, flags, regs);
 	if (fault_signal_pending(fault, regs)) {
 		fault = VM_FAULT_SIGNAL;
 		if (flags & FAULT_FLAG_RETRY_NOWAIT)
@@ -488,21 +488,7 @@ static inline vm_fault_t do_exception(struct pt_regs *regs, int access)
 	if (unlikely(fault & VM_FAULT_ERROR))
 		goto out_up;

-	/*
-	 * Major/minor page fault accounting is only done on the
-	 * initial attempt. If we go through a retry, it is extremely
-	 * likely that the page will be found in page cache at that point.
-	 */
 	if (flags & FAULT_FLAG_ALLOW_RETRY) {
-		if (fault & VM_FAULT_MAJOR) {
-			tsk->maj_flt++;
-			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1,
-				      regs, address);
-		} else {
-			tsk->min_flt++;
-			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1,
-				      regs, address);
-		}
 		if (fault & VM_FAULT_RETRY) {
 			if (IS_ENABLED(CONFIG_PGSTE) && gmap &&
 			    (flags & FAULT_FLAG_RETRY_NOWAIT)) {

From patchwork Tue Jul 7 22:50:15 2020
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 11650419
From: Peter Xu
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Gerald Schaefer, Linus Torvalds, peterx@redhat.com, Andrew Morton,
    Will Deacon, Andrea Arcangeli, David Rientjes, John Hubbard,
    Michael Ellerman, Yoshinori Sato, Rich Felker, linux-sh@vger.kernel.org
Subject: [PATCH v5 19/25] mm/sh: Use general page fault accounting
Date: Tue, 7 Jul 2020 18:50:15 -0400
Message-Id: <20200707225021.200906-20-peterx@redhat.com>
In-Reply-To: <20200707225021.200906-1-peterx@redhat.com>
References: <20200707225021.200906-1-peterx@redhat.com>

Use the general page fault accounting by passing regs into
handle_mm_fault().  It naturally solves the issue of a page fault being
accounted more than once when the fault is retried.
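All of these conversions target the same updated core interface, which is
approximately the following (see include/linux/mm.h in the series for the
authoritative declaration):

vm_fault_t handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
                           unsigned int flags, struct pt_regs *regs);

When regs is NULL the core code skips the accounting entirely, which is
what keeps the preparation patch that switched every caller to NULL a no-op
until each architecture is converted.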
CC: Yoshinori Sato CC: Rich Felker CC: linux-sh@vger.kernel.org Signed-off-by: Peter Xu --- arch/sh/mm/fault.c | 11 +---------- 1 file changed, 1 insertion(+), 10 deletions(-) diff --git a/arch/sh/mm/fault.c b/arch/sh/mm/fault.c index 3c0a11827f7e..482668a2f6d3 100644 --- a/arch/sh/mm/fault.c +++ b/arch/sh/mm/fault.c @@ -482,22 +482,13 @@ asmlinkage void __kprobes do_page_fault(struct pt_regs *regs, * make sure we exit gracefully rather than endlessly redo * the fault. */ - fault = handle_mm_fault(vma, address, flags, NULL); + fault = handle_mm_fault(vma, address, flags, regs); if (unlikely(fault & (VM_FAULT_RETRY | VM_FAULT_ERROR))) if (mm_fault_error(regs, error_code, address, fault)) return; if (flags & FAULT_FLAG_ALLOW_RETRY) { - if (fault & VM_FAULT_MAJOR) { - tsk->maj_flt++; - perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1, - regs, address); - } else { - tsk->min_flt++; - perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1, - regs, address); - } if (fault & VM_FAULT_RETRY) { flags |= FAULT_FLAG_TRIED; From patchwork Tue Jul 7 22:50:16 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Xu X-Patchwork-Id: 11650423 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 52E1C13B6 for ; Tue, 7 Jul 2020 22:51:16 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 1FE142075B for ; Tue, 7 Jul 2020 22:51:16 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="dJBA4k6o" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 1FE142075B Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=redhat.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id B9FDA6B00C8; Tue, 7 Jul 2020 18:51:02 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id B04756B00CB; Tue, 7 Jul 2020 18:51:02 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 8DEC66B00CC; Tue, 7 Jul 2020 18:51:02 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0242.hostedemail.com [216.40.44.242]) by kanga.kvack.org (Postfix) with ESMTP id 6F6A46B00C8 for ; Tue, 7 Jul 2020 18:51:02 -0400 (EDT) Received: from smtpin17.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id 380DE8248D52 for ; Tue, 7 Jul 2020 22:51:02 +0000 (UTC) X-FDA: 77012776764.17.help51_0b0941926eb8 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin17.hostedemail.com (Postfix) with ESMTP id 10655180D0181 for ; Tue, 7 Jul 2020 22:51:02 +0000 (UTC) X-Spam-Summary: 1,0,0,,d41d8cd98f00b204,peterx@redhat.com,,RULES_HIT:30003:30054,0,RBL:205.139.110.61:@redhat.com:.lbl8.mailshell.net-66.10.201.10 62.18.0.100;04y8y18f9czxjxym4w1mfd5wwpdoxoc9r5wezdik1ksbe6f6xzgkxfesuaczioe.961wz1wfq35wbdsohykzncqwu3o6ib6s1txygiyrnmoj4ttwcbq5ocrqqrqrum5.k-lbl8.mailshell.net-223.238.255.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not 
bulk,SPF:ft,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:25,LUA_SUMMARY:none X-HE-Tag: help51_0b0941926eb8 X-Filterd-Recvd-Size: 5292 Received: from us-smtp-delivery-1.mimecast.com (us-smtp-2.mimecast.com [205.139.110.61]) by imf29.hostedemail.com (Postfix) with ESMTP for ; Tue, 7 Jul 2020 22:51:01 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1594162261; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=goLnSgg43tg59jYNEWM6oaQtVeQ81wyjekL4b1iRhCw=; b=dJBA4k6oMiYtfRR4S+px6s3Ig8U2bTiS/pteC0zwRJTjIdv7GAMNOB/FBPpEsnQLlVALHs OCaD6qZaPfE/jO29gHy/Hz8GHef33QUflJ7M2nt3PzD7UwLlfmZqSZTfOLf6qeTlqZQu7/ clhunGngxCmFp+NY526ZkxmisjubCYY= Received: from mail-qt1-f198.google.com (mail-qt1-f198.google.com [209.85.160.198]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-198-KQ5-NrNSO0qR5r8ELs0O6g-1; Tue, 07 Jul 2020 18:50:57 -0400 X-MC-Unique: KQ5-NrNSO0qR5r8ELs0O6g-1 Received: by mail-qt1-f198.google.com with SMTP id i5so31876868qtw.3 for ; Tue, 07 Jul 2020 15:50:57 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=goLnSgg43tg59jYNEWM6oaQtVeQ81wyjekL4b1iRhCw=; b=rpGKI6TiZ3rP0GwZw8/sHdKmkpjoNkbMJ1C3JnWVXnBJquMp6mCucRu6ufLsMKfez4 20tckq3y030oWXguOTvssqWWygtNN/vX+rfpf4g8/PjSV99lHi0GDlb11478uR2OktHy heEFK0VQ0OcnWIMTg49WlUX7k23RQtMww39FjP/g6EJKGqp31amOlMd9FYwi73jOQxll GUP/eWbTPgoHyNnBc2fJdE2fjwDhrfzuMUk95Ymn914yvoIgbIG+p+a0YEr6aL9tDv5H 8fw4lZL2rnxkfWc0i1/NYfQkSlmKt6KLOkgryNBoE7Xhy3ArE53TEX0miKt55wY6BQsP /1tA== X-Gm-Message-State: AOAM530XCE73hhUBzuoLCrqdB+uM7KwkUXdOVPiP8a72oZo3kLoOfEqI KlyoaC2bq8ZO7wZJ13AxfpHRoNlHjH4KI/Zz/R1YmbIz0V6NLEzqeJxShNtO0aJXiQJxFA010iL eVWOJcoSBm9Y= X-Received: by 2002:a37:c246:: with SMTP id j6mr51752085qkm.444.1594162257356; Tue, 07 Jul 2020 15:50:57 -0700 (PDT) X-Google-Smtp-Source: ABdhPJy/jy6TEpSsbY6d9HXsHHh/wU6lHsQhKJI3A8F2vFrC2VOLnGIMx4NbmWJ0GFeoTMgVhdK5UA== X-Received: by 2002:a37:c246:: with SMTP id j6mr51752065qkm.444.1594162257119; Tue, 07 Jul 2020 15:50:57 -0700 (PDT) Received: from xz-x1.redhat.com ([2607:9880:19c0:32::2]) by smtp.gmail.com with ESMTPSA id j16sm26267642qtp.92.2020.07.07.15.50.55 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 07 Jul 2020 15:50:56 -0700 (PDT) From: Peter Xu To: linux-kernel@vger.kernel.org, linux-mm@kvack.org Cc: Gerald Schaefer , Linus Torvalds , peterx@redhat.com, Andrew Morton , Will Deacon , Andrea Arcangeli , David Rientjes , John Hubbard , Michael Ellerman , "David S . 
Miller" , sparclinux@vger.kernel.org Subject: [PATCH v5 20/25] mm/sparc32: Use general page fault accounting Date: Tue, 7 Jul 2020 18:50:16 -0400 Message-Id: <20200707225021.200906-21-peterx@redhat.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200707225021.200906-1-peterx@redhat.com> References: <20200707225021.200906-1-peterx@redhat.com> MIME-Version: 1.0 Authentication-Results: relay.mimecast.com; auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=peterx@redhat.com X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: redhat.com X-Rspamd-Queue-Id: 10655180D0181 X-Spamd-Result: default: False [0.00 / 100.00] X-Rspamd-Server: rspam01 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Use the general page fault accounting by passing regs into handle_mm_fault(). It naturally solve the issue of multiple page fault accounting when page fault retry happened. CC: David S. Miller CC: sparclinux@vger.kernel.org Acked-by: David S. Miller Signed-off-by: Peter Xu --- arch/sparc/mm/fault_32.c | 11 +---------- 1 file changed, 1 insertion(+), 10 deletions(-) diff --git a/arch/sparc/mm/fault_32.c b/arch/sparc/mm/fault_32.c index 06af03db4417..8071bfd72349 100644 --- a/arch/sparc/mm/fault_32.c +++ b/arch/sparc/mm/fault_32.c @@ -234,7 +234,7 @@ asmlinkage void do_sparc_fault(struct pt_regs *regs, int text_fault, int write, * make sure we exit gracefully rather than endlessly redo * the fault. */ - fault = handle_mm_fault(vma, address, flags, NULL); + fault = handle_mm_fault(vma, address, flags, regs); if (fault_signal_pending(fault, regs)) return; @@ -250,15 +250,6 @@ asmlinkage void do_sparc_fault(struct pt_regs *regs, int text_fault, int write, } if (flags & FAULT_FLAG_ALLOW_RETRY) { - if (fault & VM_FAULT_MAJOR) { - current->maj_flt++; - perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, - 1, regs, address); - } else { - current->min_flt++; - perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, - 1, regs, address); - } if (fault & VM_FAULT_RETRY) { flags |= FAULT_FLAG_TRIED; From patchwork Tue Jul 7 22:50:17 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Xu X-Patchwork-Id: 11650425 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 6A0E213B6 for ; Tue, 7 Jul 2020 22:51:18 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 371CB2075B for ; Tue, 7 Jul 2020 22:51:18 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="XZDMqBT1" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 371CB2075B Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=redhat.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 7AE306B00CB; Tue, 7 Jul 2020 18:51:04 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 70BB56B00CD; Tue, 7 Jul 2020 18:51:04 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 499036B00CF; Tue, 7 Jul 2020 18:51:04 -0400 (EDT) X-Original-To: linux-mm@kvack.org 
From: Peter Xu To: linux-kernel@vger.kernel.org, linux-mm@kvack.org Cc: Gerald Schaefer , Linus Torvalds , peterx@redhat.com, Andrew Morton , Will Deacon , Andrea Arcangeli , David Rientjes , John Hubbard , Michael Ellerman , "David S . Miller" , sparclinux@vger.kernel.org Subject: [PATCH v5 21/25] mm/sparc64: Use general page fault accounting Date: Tue, 7 Jul 2020 18:50:17 -0400 Message-Id: <20200707225021.200906-22-peterx@redhat.com> In-Reply-To: <20200707225021.200906-1-peterx@redhat.com> References: <20200707225021.200906-1-peterx@redhat.com>

Use the general page fault accounting by passing regs into handle_mm_fault(). It naturally solves the issue of page faults being accounted multiple times when a page fault is retried.

CC: David S. Miller CC: sparclinux@vger.kernel.org Acked-by: David S. Miller Signed-off-by: Peter Xu --- arch/sparc/mm/fault_64.c | 11 +---------- 1 file changed, 1 insertion(+), 10 deletions(-) diff --git a/arch/sparc/mm/fault_64.c b/arch/sparc/mm/fault_64.c index 9ebee14ee893..0a6bcc85fba7 100644 --- a/arch/sparc/mm/fault_64.c +++ b/arch/sparc/mm/fault_64.c @@ -422,7 +422,7 @@ asmlinkage void __kprobes do_sparc64_fault(struct pt_regs *regs) goto bad_area; } - fault = handle_mm_fault(vma, address, flags, NULL); + fault = handle_mm_fault(vma, address, flags, regs); if (fault_signal_pending(fault, regs)) goto exit_exception; @@ -438,15 +438,6 @@ asmlinkage void __kprobes do_sparc64_fault(struct pt_regs *regs) } if (flags & FAULT_FLAG_ALLOW_RETRY) { - if (fault & VM_FAULT_MAJOR) { - current->maj_flt++; - perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, - 1, regs, address); - } else { - current->min_flt++; - perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, - 1, regs, address); - } if (fault & VM_FAULT_RETRY) { flags |= FAULT_FLAG_TRIED;
From patchwork Tue Jul 7 22:50:18 2020 X-Patchwork-Submitter: Peter Xu X-Patchwork-Id: 11650427
From: Peter Xu To: linux-kernel@vger.kernel.org, linux-mm@kvack.org Cc: Gerald Schaefer , Linus Torvalds , peterx@redhat.com, Andrew Morton , Will Deacon , Andrea Arcangeli , David Rientjes , John Hubbard , Michael Ellerman , Dave Hansen , Andy Lutomirski , Peter Zijlstra , Thomas Gleixner , Ingo Molnar , Borislav Petkov , x86@kernel.org, "H . Peter Anvin" Subject: [PATCH v5 22/25] mm/x86: Use general page fault accounting Date: Tue, 7 Jul 2020 18:50:18 -0400 Message-Id: <20200707225021.200906-23-peterx@redhat.com> In-Reply-To: <20200707225021.200906-1-peterx@redhat.com> References: <20200707225021.200906-1-peterx@redhat.com>

Use the general page fault accounting by passing regs into handle_mm_fault().

CC: Dave Hansen CC: Andy Lutomirski CC: Peter Zijlstra CC: Thomas Gleixner CC: Ingo Molnar CC: Borislav Petkov CC: x86@kernel.org CC: H. Peter Anvin Signed-off-by: Peter Xu --- arch/x86/mm/fault.c | 17 ++--------------- 1 file changed, 2 insertions(+), 15 deletions(-) diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c index 0adbff41adec..471cfd162b30 100644 --- a/arch/x86/mm/fault.c +++ b/arch/x86/mm/fault.c @@ -1139,7 +1139,7 @@ void do_user_addr_fault(struct pt_regs *regs, struct vm_area_struct *vma; struct task_struct *tsk; struct mm_struct *mm; - vm_fault_t fault, major = 0; + vm_fault_t fault; unsigned int flags = FAULT_FLAG_DEFAULT; tsk = current; @@ -1291,8 +1291,7 @@ void do_user_addr_fault(struct pt_regs *regs, * userland). The return to userland is identified whenever * FAULT_FLAG_USER|FAULT_FLAG_KILLABLE are both set in flags. */ - fault = handle_mm_fault(vma, address, flags, NULL); - major |= fault & VM_FAULT_MAJOR; + fault = handle_mm_fault(vma, address, flags, regs); /* Quick path to respond to signals */ if (fault_signal_pending(fault, regs)) { @@ -1319,18 +1318,6 @@ void do_user_addr_fault(struct pt_regs *regs, return; } - /* - * Major/minor page fault accounting. If any of the events - * returned VM_FAULT_MAJOR, we account it as a major fault.
- */ - if (major) { - tsk->maj_flt++; - perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1, regs, address); - } else { - tsk->min_flt++; - perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1, regs, address); - } - check_v8086_mode(regs, address, tsk); } NOKPROBE_SYMBOL(do_user_addr_fault);
From patchwork Tue Jul 7 22:50:19 2020 X-Patchwork-Submitter: Peter Xu X-Patchwork-Id: 11650429
From: Peter Xu To: linux-kernel@vger.kernel.org, linux-mm@kvack.org Cc: Gerald Schaefer , Linus Torvalds , peterx@redhat.com, Andrew Morton , Will Deacon , Andrea Arcangeli , David Rientjes , John Hubbard , Michael Ellerman , Chris Zankel , Max Filippov , linux-xtensa@linux-xtensa.org Subject: [PATCH v5 23/25] mm/xtensa: Use general page fault accounting Date: Tue, 7 Jul 2020 18:50:19 -0400 Message-Id: <20200707225021.200906-24-peterx@redhat.com> In-Reply-To: <20200707225021.200906-1-peterx@redhat.com> References: <20200707225021.200906-1-peterx@redhat.com>

Use the general page fault accounting by passing regs into handle_mm_fault(). It naturally solves the issue of page faults being accounted multiple times when a page fault is retried. Remove the PERF_COUNT_SW_PAGE_FAULTS_[MAJ|MIN] perf events because they are now also issued in handle_mm_fault(). Move the PERF_COUNT_SW_PAGE_FAULTS event up, before taking mmap_sem for the fault, so that it matches the rest of the archs.
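To make the division of labor concrete, here is a rough sketch of the shape each converted handler ends up with (schematic code only, not the exact xtensa function; locals, access checks and error paths are omitted): the only accounting left in arch code is the plain PERF_COUNT_SW_PAGE_FAULTS event, issued once per hardware fault before the retry loop, while the maj/min counters and the MAJ/MIN perf events are updated inside handle_mm_fault().

	/* one software event per hardware fault, taken before any retry */
	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
retry:
	mmap_read_lock(mm);
	vma = find_vma(mm, address);
	/* ... vma and access checks elided ... */
	fault = handle_mm_fault(vma, address, flags, regs);
	if (fault_signal_pending(fault, regs))
		return;
	if ((flags & FAULT_FLAG_ALLOW_RETRY) && (fault & VM_FAULT_RETRY)) {
		/* mmap_lock was already dropped for us on retry */
		flags |= FAULT_FLAG_TRIED;
		goto retry;
	}
	mmap_read_unlock(mm);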
CC: Chris Zankel CC: Max Filippov CC: linux-xtensa@linux-xtensa.org Acked-by: Max Filippov Signed-off-by: Peter Xu --- arch/xtensa/mm/fault.c | 15 ++++----------- 1 file changed, 4 insertions(+), 11 deletions(-) diff --git a/arch/xtensa/mm/fault.c b/arch/xtensa/mm/fault.c index e72c8c1359a6..7666408ce12a 100644 --- a/arch/xtensa/mm/fault.c +++ b/arch/xtensa/mm/fault.c @@ -72,6 +72,9 @@ void do_page_fault(struct pt_regs *regs) if (user_mode(regs)) flags |= FAULT_FLAG_USER; + + perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address); + retry: mmap_read_lock(mm); vma = find_vma(mm, address); @@ -107,7 +110,7 @@ void do_page_fault(struct pt_regs *regs) * make sure we exit gracefully rather than endlessly redo * the fault. */ - fault = handle_mm_fault(vma, address, flags, NULL); + fault = handle_mm_fault(vma, address, flags, regs); if (fault_signal_pending(fault, regs)) return; @@ -122,10 +125,6 @@ void do_page_fault(struct pt_regs *regs) BUG(); } if (flags & FAULT_FLAG_ALLOW_RETRY) { - if (fault & VM_FAULT_MAJOR) - current->maj_flt++; - else - current->min_flt++; if (fault & VM_FAULT_RETRY) { flags |= FAULT_FLAG_TRIED; @@ -139,12 +138,6 @@ void do_page_fault(struct pt_regs *regs) } mmap_read_unlock(mm); - perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address); - if (flags & VM_FAULT_MAJOR) - perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1, regs, address); - else - perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1, regs, address); - return; /* Something tried to access memory that isn't in our memory map..
From patchwork Tue Jul 7 22:50:20 2020 X-Patchwork-Submitter: Peter Xu X-Patchwork-Id: 11650433
From: Peter Xu To: linux-kernel@vger.kernel.org, linux-mm@kvack.org Cc: Gerald Schaefer , Linus Torvalds , peterx@redhat.com, Andrew Morton , Will Deacon , Andrea Arcangeli , David Rientjes , John Hubbard , Michael Ellerman Subject: [PATCH v5 24/25] mm: Clean up the last pieces of page fault accountings Date: Tue, 7 Jul 2020 18:50:20 -0400 Message-Id: <20200707225021.200906-25-peterx@redhat.com> In-Reply-To: <20200707225021.200906-1-peterx@redhat.com> References: <20200707225021.200906-1-peterx@redhat.com>
Here are the last pieces of page fault accounting that were still done outside handle_mm_fault(), where we still have regs==NULL when calling handle_mm_fault():

  arch/powerpc/mm/copro_fault.c:  copro_handle_mm_fault
  arch/sparc/mm/fault_32.c:       force_user_fault
  arch/um/kernel/trap.c:          handle_page_fault
  mm/gup.c:                       faultin_page
                                  fixup_user_fault
  mm/hmm.c:                       hmm_vma_fault
  mm/ksm.c:                       break_ksm

Some of them have the issue of duplicated accounting for page fault retries. Some of them didn't do the accounting at all. This patch cleans all of these up by letting handle_mm_fault() do the per-task page fault accounting even if regs==NULL (though we'll still skip the perf event accounting). With that, we can safely remove all the outliers now. There's another functional change: we now account the page faults to the caller of gup, rather than to the task_struct that was passed into the gup code. More information on this can be found at [1]. After this patch, the items below should never be touched again outside handle_mm_fault():

  - task_struct.[maj|min]_flt
  - PERF_COUNT_SW_PAGE_FAULTS_[MAJ|MIN]

[1] https://lore.kernel.org/lkml/CAHk-=wj_V2Tps2QrMn20_W0OJF9xqNh52XSGA42s-ZJ8Y+GyKw@mail.gmail.com/

Signed-off-by: Peter Xu --- arch/powerpc/mm/copro_fault.c | 5 ----- arch/um/kernel/trap.c | 4 ---- mm/gup.c | 13 ------------- mm/memory.c | 17 ++++++++++------- 4 files changed, 10 insertions(+), 29 deletions(-) diff --git a/arch/powerpc/mm/copro_fault.c b/arch/powerpc/mm/copro_fault.c index 2d0276abe0a6..8acd00178956 100644 --- a/arch/powerpc/mm/copro_fault.c +++ b/arch/powerpc/mm/copro_fault.c @@ -76,11 +76,6 @@ int copro_handle_mm_fault(struct mm_struct *mm, unsigned long ea, BUG(); } - if (*flt & VM_FAULT_MAJOR) - current->maj_flt++; - else - current->min_flt++; - out_unlock: mmap_read_unlock(mm); return ret; diff --git a/arch/um/kernel/trap.c b/arch/um/kernel/trap.c index 8d9870d76da1..ad12f78bda7e 100644 --- a/arch/um/kernel/trap.c +++ b/arch/um/kernel/trap.c @@ -88,10 +88,6 @@ int handle_page_fault(unsigned long address, unsigned long ip, BUG(); } if (flags & FAULT_FLAG_ALLOW_RETRY) { - if (fault & VM_FAULT_MAJOR) - current->maj_flt++; - else - current->min_flt++; if (fault & VM_FAULT_RETRY) { flags |= FAULT_FLAG_TRIED; diff --git a/mm/gup.c b/mm/gup.c index 80fd1610d43e..71e1d501a1d3 100644 --- a/mm/gup.c +++ b/mm/gup.c @@ -893,13 +893,6 @@ static int faultin_page(struct task_struct *tsk, struct vm_area_struct *vma, BUG(); } - if (tsk) { - if (ret & VM_FAULT_MAJOR) - tsk->maj_flt++; - else - tsk->min_flt++; - } - if (ret & VM_FAULT_RETRY) { if (locked && !(fault_flags & FAULT_FLAG_RETRY_NOWAIT)) *locked = 0; @@ -1255,12 +1248,6 @@ int fixup_user_fault(struct task_struct *tsk, struct mm_struct *mm, goto retry; } - if (tsk) { - if (major) - tsk->maj_flt++; - else - tsk->min_flt++; - } return 0; } EXPORT_SYMBOL_GPL(fixup_user_fault); diff --git a/mm/memory.c b/mm/memory.c index bb7ba127661a..ad5eca9dd1ed 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -4404,20 +4404,23 @@ static inline void mm_account_fault(struct pt_regs *regs, */ major =
(ret & VM_FAULT_MAJOR) || (flags & FAULT_FLAG_TRIED); + if (major) + current->maj_flt++; + else + current->min_flt++; + /* - * If the fault is done for GUP, regs will be NULL, and we will skip - * the fault accounting. + * If the fault is done for GUP, regs will be NULL. We only do the + * accounting for the per thread fault counters who triggered the + * fault, and we skip the perf event updates. */ if (!regs) return; - if (major) { - current->maj_flt++; + if (major) perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1, regs, address); - } else { - current->min_flt++; + else perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1, regs, address); - } } /*
From patchwork Tue Jul 7 22:50:21 2020 X-Patchwork-Submitter: Peter Xu X-Patchwork-Id: 11650431
From: Peter Xu To: linux-kernel@vger.kernel.org, linux-mm@kvack.org Cc: Gerald Schaefer , Linus Torvalds , peterx@redhat.com, Andrew Morton , Will Deacon , Andrea Arcangeli , David Rientjes , John Hubbard , Michael Ellerman Subject: [PATCH v5 25/25] mm/gup: Remove task_struct pointer for all gup code Date: Tue, 7 Jul 2020 18:50:21 -0400 Message-Id: <20200707225021.200906-26-peterx@redhat.com> In-Reply-To: <20200707225021.200906-1-peterx@redhat.com> References: <20200707225021.200906-1-peterx@redhat.com>

After the cleanup of page fault accounting, gup does not need to pass task_struct around any more. Remove that parameter in the whole gup stack.
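As a quick illustration (a hedged sketch mirroring the futex and exec call sites converted below, not a new API), callers simply drop the task_struct argument; the fault accounting is charged to current inside handle_mm_fault():

	/* Before this series: a task_struct was threaded through gup. */
	ret = fixup_user_fault(current, mm, (unsigned long)uaddr,
			       FAULT_FLAG_WRITE, NULL);
	ret = get_user_pages_remote(current, bprm->mm, pos, 1, gup_flags,
				    &page, NULL, NULL);

	/* After: only the mm is needed. */
	ret = fixup_user_fault(mm, (unsigned long)uaddr,
			       FAULT_FLAG_WRITE, NULL);
	ret = get_user_pages_remote(bprm->mm, pos, 1, gup_flags,
				    &page, NULL, NULL);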
Reviewed-by: John Hubbard Signed-off-by: Peter Xu --- arch/arc/kernel/process.c | 2 +- arch/s390/kvm/interrupt.c | 2 +- arch/s390/kvm/kvm-s390.c | 2 +- arch/s390/kvm/priv.c | 8 +- arch/s390/mm/gmap.c | 4 +- drivers/gpu/drm/i915/gem/i915_gem_userptr.c | 2 +- drivers/infiniband/core/umem_odp.c | 2 +- drivers/vfio/vfio_iommu_type1.c | 4 +- fs/exec.c | 2 +- include/linux/mm.h | 9 +- kernel/events/uprobes.c | 6 +- kernel/futex.c | 2 +- mm/gup.c | 101 ++++++++------------ mm/memory.c | 2 +- mm/process_vm_access.c | 2 +- security/tomoyo/domain.c | 2 +- virt/kvm/async_pf.c | 2 +- virt/kvm/kvm_main.c | 2 +- 18 files changed, 69 insertions(+), 87 deletions(-) diff --git a/arch/arc/kernel/process.c b/arch/arc/kernel/process.c index 105420c23c8b..a1d2eea66bba 100644 --- a/arch/arc/kernel/process.c +++ b/arch/arc/kernel/process.c @@ -91,7 +91,7 @@ SYSCALL_DEFINE3(arc_usr_cmpxchg, int *, uaddr, int, expected, int, new) goto fail; mmap_read_lock(current->mm); - ret = fixup_user_fault(current, current->mm, (unsigned long) uaddr, + ret = fixup_user_fault(current->mm, (unsigned long) uaddr, FAULT_FLAG_WRITE, NULL); mmap_read_unlock(current->mm); diff --git a/arch/s390/kvm/interrupt.c b/arch/s390/kvm/interrupt.c index 1608fd99bbee..2f177298c663 100644 --- a/arch/s390/kvm/interrupt.c +++ b/arch/s390/kvm/interrupt.c @@ -2768,7 +2768,7 @@ static struct page *get_map_page(struct kvm *kvm, u64 uaddr) struct page *page = NULL; mmap_read_lock(kvm->mm); - get_user_pages_remote(NULL, kvm->mm, uaddr, 1, FOLL_WRITE, + get_user_pages_remote(kvm->mm, uaddr, 1, FOLL_WRITE, &page, NULL, NULL); mmap_read_unlock(kvm->mm); return page; diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c index 08e6cf6cb454..f78921bc11b3 100644 --- a/arch/s390/kvm/kvm-s390.c +++ b/arch/s390/kvm/kvm-s390.c @@ -1892,7 +1892,7 @@ static long kvm_s390_set_skeys(struct kvm *kvm, struct kvm_s390_skeys *args) r = set_guest_storage_key(current->mm, hva, keys[i], 0); if (r) { - r = fixup_user_fault(current, current->mm, hva, + r = fixup_user_fault(current->mm, hva, FAULT_FLAG_WRITE, &unlocked); if (r) break; diff --git a/arch/s390/kvm/priv.c b/arch/s390/kvm/priv.c index 2f721a923b54..cd74989ce0b0 100644 --- a/arch/s390/kvm/priv.c +++ b/arch/s390/kvm/priv.c @@ -273,7 +273,7 @@ static int handle_iske(struct kvm_vcpu *vcpu) rc = get_guest_storage_key(current->mm, vmaddr, &key); if (rc) { - rc = fixup_user_fault(current, current->mm, vmaddr, + rc = fixup_user_fault(current->mm, vmaddr, FAULT_FLAG_WRITE, &unlocked); if (!rc) { mmap_read_unlock(current->mm); @@ -319,7 +319,7 @@ static int handle_rrbe(struct kvm_vcpu *vcpu) mmap_read_lock(current->mm); rc = reset_guest_reference_bit(current->mm, vmaddr); if (rc < 0) { - rc = fixup_user_fault(current, current->mm, vmaddr, + rc = fixup_user_fault(current->mm, vmaddr, FAULT_FLAG_WRITE, &unlocked); if (!rc) { mmap_read_unlock(current->mm); @@ -390,7 +390,7 @@ static int handle_sske(struct kvm_vcpu *vcpu) m3 & SSKE_MC); if (rc < 0) { - rc = fixup_user_fault(current, current->mm, vmaddr, + rc = fixup_user_fault(current->mm, vmaddr, FAULT_FLAG_WRITE, &unlocked); rc = !rc ? -EAGAIN : rc; } @@ -1094,7 +1094,7 @@ static int handle_pfmf(struct kvm_vcpu *vcpu) rc = cond_set_guest_storage_key(current->mm, vmaddr, key, NULL, nq, mr, mc); if (rc < 0) { - rc = fixup_user_fault(current, current->mm, vmaddr, + rc = fixup_user_fault(current->mm, vmaddr, FAULT_FLAG_WRITE, &unlocked); rc = !rc ? 
-EAGAIN : rc; } diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c index 190357ff86b3..8747487c50a8 100644 --- a/arch/s390/mm/gmap.c +++ b/arch/s390/mm/gmap.c @@ -649,7 +649,7 @@ int gmap_fault(struct gmap *gmap, unsigned long gaddr, rc = vmaddr; goto out_up; } - if (fixup_user_fault(current, gmap->mm, vmaddr, fault_flags, + if (fixup_user_fault(gmap->mm, vmaddr, fault_flags, &unlocked)) { rc = -EFAULT; goto out_up; @@ -879,7 +879,7 @@ static int gmap_pte_op_fixup(struct gmap *gmap, unsigned long gaddr, BUG_ON(gmap_is_shadow(gmap)); fault_flags = (prot == PROT_WRITE) ? FAULT_FLAG_WRITE : 0; - if (fixup_user_fault(current, mm, vmaddr, fault_flags, &unlocked)) + if (fixup_user_fault(mm, vmaddr, fault_flags, &unlocked)) return -EFAULT; if (unlocked) /* lost mmap_lock, caller has to retry __gmap_translate */ diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c index e946032b13e4..2c2bf24140c9 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c @@ -469,7 +469,7 @@ __i915_gem_userptr_get_pages_worker(struct work_struct *_work) locked = 1; } ret = pin_user_pages_remote - (work->task, mm, + (mm, obj->userptr.ptr + pinned * PAGE_SIZE, npages - pinned, flags, diff --git a/drivers/infiniband/core/umem_odp.c b/drivers/infiniband/core/umem_odp.c index 5e32f61a2fe4..cc6b4befde7c 100644 --- a/drivers/infiniband/core/umem_odp.c +++ b/drivers/infiniband/core/umem_odp.c @@ -439,7 +439,7 @@ int ib_umem_odp_map_dma_pages(struct ib_umem_odp *umem_odp, u64 user_virt, * complex (and doesn't gain us much performance in most use * cases). */ - npages = get_user_pages_remote(owning_process, owning_mm, + npages = get_user_pages_remote(owning_mm, user_virt, gup_num_pages, flags, local_page_list, NULL, NULL); mmap_read_unlock(owning_mm); diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c index 5e556ac9102a..9d41105bfd01 100644 --- a/drivers/vfio/vfio_iommu_type1.c +++ b/drivers/vfio/vfio_iommu_type1.c @@ -425,7 +425,7 @@ static int follow_fault_pfn(struct vm_area_struct *vma, struct mm_struct *mm, if (ret) { bool unlocked = false; - ret = fixup_user_fault(NULL, mm, vaddr, + ret = fixup_user_fault(mm, vaddr, FAULT_FLAG_REMOTE | (write_fault ? FAULT_FLAG_WRITE : 0), &unlocked); @@ -453,7 +453,7 @@ static int vaddr_get_pfn(struct mm_struct *mm, unsigned long vaddr, flags |= FOLL_WRITE; mmap_read_lock(mm); - ret = pin_user_pages_remote(NULL, mm, vaddr, 1, flags | FOLL_LONGTERM, + ret = pin_user_pages_remote(mm, vaddr, 1, flags | FOLL_LONGTERM, page, NULL, NULL); if (ret == 1) { *pfn = page_to_pfn(page[0]); diff --git a/fs/exec.c b/fs/exec.c index 7b7cbb180785..3cf806de5710 100644 --- a/fs/exec.c +++ b/fs/exec.c @@ -217,7 +217,7 @@ static struct page *get_arg_page(struct linux_binprm *bprm, unsigned long pos, * We are doing an exec(). 'current' is the process * doing the exec and bprm->mm is the new process's mm. 
*/ - ret = get_user_pages_remote(current, bprm->mm, pos, 1, gup_flags, + ret = get_user_pages_remote(bprm->mm, pos, 1, gup_flags, &page, NULL, NULL); if (ret <= 0) return NULL; diff --git a/include/linux/mm.h b/include/linux/mm.h index 33f8236a68a2..678ea25625d7 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -1662,7 +1662,7 @@ int invalidate_inode_page(struct page *page); extern vm_fault_t handle_mm_fault(struct vm_area_struct *vma, unsigned long address, unsigned int flags, struct pt_regs *regs); -extern int fixup_user_fault(struct task_struct *tsk, struct mm_struct *mm, +extern int fixup_user_fault(struct mm_struct *mm, unsigned long address, unsigned int fault_flags, bool *unlocked); void unmap_mapping_pages(struct address_space *mapping, @@ -1678,8 +1678,7 @@ static inline vm_fault_t handle_mm_fault(struct vm_area_struct *vma, BUG(); return VM_FAULT_SIGBUS; } -static inline int fixup_user_fault(struct task_struct *tsk, - struct mm_struct *mm, unsigned long address, +static inline int fixup_user_fault(struct mm_struct *mm, unsigned long address, unsigned int fault_flags, bool *unlocked) { /* should never happen if there's no MMU */ @@ -1705,11 +1704,11 @@ extern int access_remote_vm(struct mm_struct *mm, unsigned long addr, extern int __access_remote_vm(struct task_struct *tsk, struct mm_struct *mm, unsigned long addr, void *buf, int len, unsigned int gup_flags); -long get_user_pages_remote(struct task_struct *tsk, struct mm_struct *mm, +long get_user_pages_remote(struct mm_struct *mm, unsigned long start, unsigned long nr_pages, unsigned int gup_flags, struct page **pages, struct vm_area_struct **vmas, int *locked); -long pin_user_pages_remote(struct task_struct *tsk, struct mm_struct *mm, +long pin_user_pages_remote(struct mm_struct *mm, unsigned long start, unsigned long nr_pages, unsigned int gup_flags, struct page **pages, struct vm_area_struct **vmas, int *locked); diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c index e84eb52b646b..f500204eb70d 100644 --- a/kernel/events/uprobes.c +++ b/kernel/events/uprobes.c @@ -376,7 +376,7 @@ __update_ref_ctr(struct mm_struct *mm, unsigned long vaddr, short d) if (!vaddr || !d) return -EINVAL; - ret = get_user_pages_remote(NULL, mm, vaddr, 1, + ret = get_user_pages_remote(mm, vaddr, 1, FOLL_WRITE, &page, &vma, NULL); if (unlikely(ret <= 0)) { /* @@ -477,7 +477,7 @@ int uprobe_write_opcode(struct arch_uprobe *auprobe, struct mm_struct *mm, if (is_register) gup_flags |= FOLL_SPLIT_PMD; /* Read the page with vaddr into memory */ - ret = get_user_pages_remote(NULL, mm, vaddr, 1, gup_flags, + ret = get_user_pages_remote(mm, vaddr, 1, gup_flags, &old_page, &vma, NULL); if (ret <= 0) return ret; @@ -2029,7 +2029,7 @@ static int is_trap_at_addr(struct mm_struct *mm, unsigned long vaddr) * but we treat this as a 'remote' access since it is * essentially a kernel access to the memory. 
*/ - result = get_user_pages_remote(NULL, mm, vaddr, 1, FOLL_FORCE, &page, + result = get_user_pages_remote(mm, vaddr, 1, FOLL_FORCE, &page, NULL, NULL); if (result < 0) return result; diff --git a/kernel/futex.c b/kernel/futex.c index 05e88562de68..d024fcef62e8 100644 --- a/kernel/futex.c +++ b/kernel/futex.c @@ -699,7 +699,7 @@ static int fault_in_user_writeable(u32 __user *uaddr) int ret; mmap_read_lock(mm); - ret = fixup_user_fault(current, mm, (unsigned long)uaddr, + ret = fixup_user_fault(mm, (unsigned long)uaddr, FAULT_FLAG_WRITE, NULL); mmap_read_unlock(mm); diff --git a/mm/gup.c b/mm/gup.c index 71e1d501a1d3..c4ec86ff67e4 100644 --- a/mm/gup.c +++ b/mm/gup.c @@ -859,7 +859,7 @@ static int get_gate_page(struct mm_struct *mm, unsigned long address, * does not include FOLL_NOWAIT, the mmap_lock may be released. If it * is, *@locked will be set to 0 and -EBUSY returned. */ -static int faultin_page(struct task_struct *tsk, struct vm_area_struct *vma, +static int faultin_page(struct vm_area_struct *vma, unsigned long address, unsigned int *flags, int *locked) { unsigned int fault_flags = 0; @@ -962,7 +962,6 @@ static int check_vma_flags(struct vm_area_struct *vma, unsigned long gup_flags) /** * __get_user_pages() - pin user pages in memory - * @tsk: task_struct of target task * @mm: mm_struct of target mm * @start: starting user address * @nr_pages: number of pages from start to pin @@ -1021,7 +1020,7 @@ static int check_vma_flags(struct vm_area_struct *vma, unsigned long gup_flags) * instead of __get_user_pages. __get_user_pages should be used only if * you need some special @gup_flags. */ -static long __get_user_pages(struct task_struct *tsk, struct mm_struct *mm, +static long __get_user_pages(struct mm_struct *mm, unsigned long start, unsigned long nr_pages, unsigned int gup_flags, struct page **pages, struct vm_area_struct **vmas, int *locked) @@ -1103,8 +1102,7 @@ static long __get_user_pages(struct task_struct *tsk, struct mm_struct *mm, page = follow_page_mask(vma, start, foll_flags, &ctx); if (!page) { - ret = faultin_page(tsk, vma, start, &foll_flags, - locked); + ret = faultin_page(vma, start, &foll_flags, locked); switch (ret) { case 0: goto retry; @@ -1178,8 +1176,6 @@ static bool vma_permits_fault(struct vm_area_struct *vma, /** * fixup_user_fault() - manually resolve a user page fault - * @tsk: the task_struct to use for page fault accounting, or - * NULL if faults are not to be recorded. * @mm: mm_struct of target mm * @address: user address * @fault_flags:flags to pass down to handle_mm_fault() @@ -1207,7 +1203,7 @@ static bool vma_permits_fault(struct vm_area_struct *vma, * This function will not return with an unlocked mmap_lock. So it has not the * same semantics wrt the @mm->mmap_lock as does filemap_fault(). 
*/ -int fixup_user_fault(struct task_struct *tsk, struct mm_struct *mm, +int fixup_user_fault(struct mm_struct *mm, unsigned long address, unsigned int fault_flags, bool *unlocked) { @@ -1256,8 +1252,7 @@ EXPORT_SYMBOL_GPL(fixup_user_fault); * Please note that this function, unlike __get_user_pages will not * return 0 for nr_pages > 0 without FOLL_NOWAIT */ -static __always_inline long __get_user_pages_locked(struct task_struct *tsk, - struct mm_struct *mm, +static __always_inline long __get_user_pages_locked(struct mm_struct *mm, unsigned long start, unsigned long nr_pages, struct page **pages, @@ -1290,7 +1285,7 @@ static __always_inline long __get_user_pages_locked(struct task_struct *tsk, pages_done = 0; lock_dropped = false; for (;;) { - ret = __get_user_pages(tsk, mm, start, nr_pages, flags, pages, + ret = __get_user_pages(mm, start, nr_pages, flags, pages, vmas, locked); if (!locked) /* VM_FAULT_RETRY couldn't trigger, bypass */ @@ -1350,7 +1345,7 @@ static __always_inline long __get_user_pages_locked(struct task_struct *tsk, } *locked = 1; - ret = __get_user_pages(tsk, mm, start, 1, flags | FOLL_TRIED, + ret = __get_user_pages(mm, start, 1, flags | FOLL_TRIED, pages, NULL, locked); if (!*locked) { /* Continue to retry until we succeeded */ @@ -1436,7 +1431,7 @@ long populate_vma_page_range(struct vm_area_struct *vma, * We made sure addr is within a VMA, so the following will * not result in a stack expansion that recurses back here. */ - return __get_user_pages(current, mm, start, nr_pages, gup_flags, + return __get_user_pages(mm, start, nr_pages, gup_flags, NULL, NULL, locked); } @@ -1520,7 +1515,7 @@ struct page *get_dump_page(unsigned long addr) struct vm_area_struct *vma; struct page *page; - if (__get_user_pages(current, current->mm, addr, 1, + if (__get_user_pages(current->mm, addr, 1, FOLL_FORCE | FOLL_DUMP | FOLL_GET, &page, &vma, NULL) < 1) return NULL; @@ -1529,8 +1524,7 @@ struct page *get_dump_page(unsigned long addr) } #endif /* CONFIG_ELF_CORE */ #else /* CONFIG_MMU */ -static long __get_user_pages_locked(struct task_struct *tsk, - struct mm_struct *mm, unsigned long start, +static long __get_user_pages_locked(struct mm_struct *mm, unsigned long start, unsigned long nr_pages, struct page **pages, struct vm_area_struct **vmas, int *locked, unsigned int foll_flags) @@ -1606,8 +1600,7 @@ static struct page *alloc_migration_target_non_cma(struct page *page, unsigned l return alloc_migration_target(page, (unsigned long)&mtc); } -static long check_and_migrate_cma_pages(struct task_struct *tsk, - struct mm_struct *mm, +static long check_and_migrate_cma_pages(struct mm_struct *mm, unsigned long start, unsigned long nr_pages, struct page **pages, @@ -1681,7 +1674,7 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk, * again migrating any new CMA pages which we failed to isolate * earlier. */ - ret = __get_user_pages_locked(tsk, mm, start, nr_pages, + ret = __get_user_pages_locked(mm, start, nr_pages, pages, vmas, NULL, gup_flags); @@ -1695,8 +1688,7 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk, return ret; } #else -static long check_and_migrate_cma_pages(struct task_struct *tsk, - struct mm_struct *mm, +static long check_and_migrate_cma_pages(struct mm_struct *mm, unsigned long start, unsigned long nr_pages, struct page **pages, @@ -1711,8 +1703,7 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk, * __gup_longterm_locked() is a wrapper for __get_user_pages_locked which * allows us to process the FOLL_LONGTERM flag. 
*/ -static long __gup_longterm_locked(struct task_struct *tsk, - struct mm_struct *mm, +static long __gup_longterm_locked(struct mm_struct *mm, unsigned long start, unsigned long nr_pages, struct page **pages, @@ -1737,7 +1728,7 @@ static long __gup_longterm_locked(struct task_struct *tsk, flags = memalloc_nocma_save(); } - rc = __get_user_pages_locked(tsk, mm, start, nr_pages, pages, + rc = __get_user_pages_locked(mm, start, nr_pages, pages, vmas_tmp, NULL, gup_flags); if (gup_flags & FOLL_LONGTERM) { @@ -1752,7 +1743,7 @@ static long __gup_longterm_locked(struct task_struct *tsk, goto out; } - rc = check_and_migrate_cma_pages(tsk, mm, start, rc, pages, + rc = check_and_migrate_cma_pages(mm, start, rc, pages, vmas_tmp, gup_flags); } @@ -1762,22 +1753,20 @@ static long __gup_longterm_locked(struct task_struct *tsk, return rc; } #else /* !CONFIG_FS_DAX && !CONFIG_CMA */ -static __always_inline long __gup_longterm_locked(struct task_struct *tsk, - struct mm_struct *mm, +static __always_inline long __gup_longterm_locked(struct mm_struct *mm, unsigned long start, unsigned long nr_pages, struct page **pages, struct vm_area_struct **vmas, unsigned int flags) { - return __get_user_pages_locked(tsk, mm, start, nr_pages, pages, vmas, + return __get_user_pages_locked(mm, start, nr_pages, pages, vmas, NULL, flags); } #endif /* CONFIG_FS_DAX || CONFIG_CMA */ #ifdef CONFIG_MMU -static long __get_user_pages_remote(struct task_struct *tsk, - struct mm_struct *mm, +static long __get_user_pages_remote(struct mm_struct *mm, unsigned long start, unsigned long nr_pages, unsigned int gup_flags, struct page **pages, struct vm_area_struct **vmas, int *locked) @@ -1796,20 +1785,18 @@ static long __get_user_pages_remote(struct task_struct *tsk, * This will check the vmas (even if our vmas arg is NULL) * and return -ENOTSUPP if DAX isn't allowed in this case: */ - return __gup_longterm_locked(tsk, mm, start, nr_pages, pages, + return __gup_longterm_locked(mm, start, nr_pages, pages, vmas, gup_flags | FOLL_TOUCH | FOLL_REMOTE); } - return __get_user_pages_locked(tsk, mm, start, nr_pages, pages, vmas, + return __get_user_pages_locked(mm, start, nr_pages, pages, vmas, locked, gup_flags | FOLL_TOUCH | FOLL_REMOTE); } /** * get_user_pages_remote() - pin user pages in memory - * @tsk: the task_struct to use for page fault accounting, or - * NULL if faults are not to be recorded. * @mm: mm_struct of target mm * @start: starting user address * @nr_pages: number of pages from start to pin @@ -1868,7 +1855,7 @@ static long __get_user_pages_remote(struct task_struct *tsk, * should use get_user_pages_remote because it cannot pass * FAULT_FLAG_ALLOW_RETRY to handle_mm_fault. 
*/ -long get_user_pages_remote(struct task_struct *tsk, struct mm_struct *mm, +long get_user_pages_remote(struct mm_struct *mm, unsigned long start, unsigned long nr_pages, unsigned int gup_flags, struct page **pages, struct vm_area_struct **vmas, int *locked) @@ -1880,13 +1867,13 @@ long get_user_pages_remote(struct task_struct *tsk, struct mm_struct *mm, if (WARN_ON_ONCE(gup_flags & FOLL_PIN)) return -EINVAL; - return __get_user_pages_remote(tsk, mm, start, nr_pages, gup_flags, + return __get_user_pages_remote(mm, start, nr_pages, gup_flags, pages, vmas, locked); } EXPORT_SYMBOL(get_user_pages_remote); #else /* CONFIG_MMU */ -long get_user_pages_remote(struct task_struct *tsk, struct mm_struct *mm, +long get_user_pages_remote(struct mm_struct *mm, unsigned long start, unsigned long nr_pages, unsigned int gup_flags, struct page **pages, struct vm_area_struct **vmas, int *locked) @@ -1894,8 +1881,7 @@ long get_user_pages_remote(struct task_struct *tsk, struct mm_struct *mm, return 0; } -static long __get_user_pages_remote(struct task_struct *tsk, - struct mm_struct *mm, +static long __get_user_pages_remote(struct mm_struct *mm, unsigned long start, unsigned long nr_pages, unsigned int gup_flags, struct page **pages, struct vm_area_struct **vmas, int *locked) @@ -1915,11 +1901,10 @@ static long __get_user_pages_remote(struct task_struct *tsk, * @vmas: array of pointers to vmas corresponding to each page. * Or NULL if the caller does not require them. * - * This is the same as get_user_pages_remote(), just with a - * less-flexible calling convention where we assume that the task - * and mm being operated on are the current task's and don't allow - * passing of a locked parameter. We also obviously don't pass - * FOLL_REMOTE in here. + * This is the same as get_user_pages_remote(), just with a less-flexible + * calling convention where we assume that the mm being operated on belongs to + * the current task, and doesn't allow passing of a locked parameter. We also + * obviously don't pass FOLL_REMOTE in here. 
 */
 long get_user_pages(unsigned long start, unsigned long nr_pages,
 		unsigned int gup_flags, struct page **pages,
@@ -1932,7 +1917,7 @@ long get_user_pages(unsigned long start, unsigned long nr_pages,
 	if (WARN_ON_ONCE(gup_flags & FOLL_PIN))
 		return -EINVAL;
 
-	return __gup_longterm_locked(current, current->mm, start, nr_pages,
+	return __gup_longterm_locked(current->mm, start, nr_pages,
 				     pages, vmas, gup_flags | FOLL_TOUCH);
 }
 EXPORT_SYMBOL(get_user_pages);
@@ -1942,7 +1927,7 @@ EXPORT_SYMBOL(get_user_pages);
  *
  *      mmap_read_lock(mm);
  *      do_something()
- *      get_user_pages(tsk, mm, ..., pages, NULL);
+ *      get_user_pages(mm, ..., pages, NULL);
  *      mmap_read_unlock(mm);
  *
  *  to:
@@ -1950,7 +1935,7 @@ EXPORT_SYMBOL(get_user_pages);
  *      int locked = 1;
  *      mmap_read_lock(mm);
  *      do_something()
- *      get_user_pages_locked(tsk, mm, ..., pages, &locked);
+ *      get_user_pages_locked(mm, ..., pages, &locked);
  *      if (locked)
 *          mmap_read_unlock(mm);
  *
@@ -1988,7 +1973,7 @@ long get_user_pages_locked(unsigned long start, unsigned long nr_pages,
 	if (WARN_ON_ONCE(gup_flags & FOLL_PIN))
 		return -EINVAL;
 
-	return __get_user_pages_locked(current, current->mm, start, nr_pages,
+	return __get_user_pages_locked(current->mm, start, nr_pages,
 				       pages, NULL, locked,
 				       gup_flags | FOLL_TOUCH);
 }
@@ -1998,12 +1983,12 @@ EXPORT_SYMBOL(get_user_pages_locked);
  * get_user_pages_unlocked() is suitable to replace the form:
  *
  *      mmap_read_lock(mm);
- *      get_user_pages(tsk, mm, ..., pages, NULL);
+ *      get_user_pages(mm, ..., pages, NULL);
  *      mmap_read_unlock(mm);
  *
  *  with:
 *
- *      get_user_pages_unlocked(tsk, mm, ..., pages);
+ *      get_user_pages_unlocked(mm, ..., pages);
 *
  * It is functionally equivalent to get_user_pages_fast so
  * get_user_pages_fast should be used instead if specific gup_flags
@@ -2026,7 +2011,7 @@ long get_user_pages_unlocked(unsigned long start, unsigned long nr_pages,
 		return -EINVAL;
 
 	mmap_read_lock(mm);
-	ret = __get_user_pages_locked(current, mm, start, nr_pages, pages, NULL,
+	ret = __get_user_pages_locked(mm, start, nr_pages, pages, NULL,
 				      &locked, gup_flags | FOLL_TOUCH);
 	if (locked)
 		mmap_read_unlock(mm);
@@ -2671,7 +2656,7 @@ static int __gup_longterm_unlocked(unsigned long start, int nr_pages,
 	 */
 	if (gup_flags & FOLL_LONGTERM) {
 		mmap_read_lock(current->mm);
-		ret = __gup_longterm_locked(current, current->mm,
+		ret = __gup_longterm_locked(current->mm,
 					    start, nr_pages,
 					    pages, NULL, gup_flags);
 		mmap_read_unlock(current->mm);
@@ -2914,10 +2899,8 @@ int pin_user_pages_fast_only(unsigned long start, int nr_pages,
 EXPORT_SYMBOL_GPL(pin_user_pages_fast_only);
 
 /**
- * pin_user_pages_remote() - pin pages of a remote process (task != current)
+ * pin_user_pages_remote() - pin pages of a remote process
  *
- * @tsk:	the task_struct to use for page fault accounting, or
- *		NULL if faults are not to be recorded.
  * @mm:		mm_struct of target mm
  * @start:	starting user address
  * @nr_pages:	number of pages from start to pin
@@ -2938,7 +2921,7 @@ EXPORT_SYMBOL_GPL(pin_user_pages_fast_only);
 * FOLL_PIN means that the pages must be released via unpin_user_page(). Please
 * see Documentation/core-api/pin_user_pages.rst for details.
 */
-long pin_user_pages_remote(struct task_struct *tsk, struct mm_struct *mm,
+long pin_user_pages_remote(struct mm_struct *mm,
 			   unsigned long start, unsigned long nr_pages,
 			   unsigned int gup_flags, struct page **pages,
 			   struct vm_area_struct **vmas, int *locked)
@@ -2948,7 +2931,7 @@ long pin_user_pages_remote(struct task_struct *tsk, struct mm_struct *mm,
 		return -EINVAL;
 
 	gup_flags |= FOLL_PIN;
-	return __get_user_pages_remote(tsk, mm, start, nr_pages, gup_flags,
+	return __get_user_pages_remote(mm, start, nr_pages, gup_flags,
 				       pages, vmas, locked);
 }
 EXPORT_SYMBOL(pin_user_pages_remote);
@@ -2980,7 +2963,7 @@ long pin_user_pages(unsigned long start, unsigned long nr_pages,
 		return -EINVAL;
 
 	gup_flags |= FOLL_PIN;
-	return __gup_longterm_locked(current, current->mm, start, nr_pages,
+	return __gup_longterm_locked(current->mm, start, nr_pages,
 				     pages, vmas, gup_flags);
 }
 EXPORT_SYMBOL(pin_user_pages);
@@ -3025,7 +3008,7 @@ long pin_user_pages_locked(unsigned long start, unsigned long nr_pages,
 		return -EINVAL;
 
 	gup_flags |= FOLL_PIN;
-	return __get_user_pages_locked(current, current->mm, start, nr_pages,
+	return __get_user_pages_locked(current->mm, start, nr_pages,
 				       pages, NULL, locked,
 				       gup_flags | FOLL_TOUCH);
 }
diff --git a/mm/memory.c b/mm/memory.c
index ad5eca9dd1ed..c8cfc19d17f1 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4746,7 +4746,7 @@ int __access_remote_vm(struct task_struct *tsk, struct mm_struct *mm,
 		void *maddr;
 		struct page *page = NULL;
 
-		ret = get_user_pages_remote(tsk, mm, addr, 1,
+		ret = get_user_pages_remote(mm, addr, 1,
 				gup_flags, &page, &vma, NULL);
 		if (ret <= 0) {
 #ifndef CONFIG_HAVE_IOREMAP_PROT
diff --git a/mm/process_vm_access.c b/mm/process_vm_access.c
index cc85ce81914a..29c052099aff 100644
--- a/mm/process_vm_access.c
+++ b/mm/process_vm_access.c
@@ -105,7 +105,7 @@ static int process_vm_rw_single_vec(unsigned long addr,
 		 * current/current->mm
 		 */
 		mmap_read_lock(mm);
-		pinned_pages = pin_user_pages_remote(task, mm, pa, pinned_pages,
+		pinned_pages = pin_user_pages_remote(mm, pa, pinned_pages,
 						     flags, process_pages,
 						     NULL, &locked);
 		if (locked)
diff --git a/security/tomoyo/domain.c b/security/tomoyo/domain.c
index 7869d6a9980b..afe5e68ede77 100644
--- a/security/tomoyo/domain.c
+++ b/security/tomoyo/domain.c
@@ -914,7 +914,7 @@ bool tomoyo_dump_page(struct linux_binprm *bprm, unsigned long pos,
 	 * (represented by bprm). 'current' is the process doing
 	 * the execve().
 	 */
-	if (get_user_pages_remote(current, bprm->mm, pos, 1,
+	if (get_user_pages_remote(bprm->mm, pos, 1,
 				  FOLL_FORCE, &page, NULL, NULL) <= 0)
 		return false;
 #else
diff --git a/virt/kvm/async_pf.c b/virt/kvm/async_pf.c
index 45799606bb3e..0939ed377688 100644
--- a/virt/kvm/async_pf.c
+++ b/virt/kvm/async_pf.c
@@ -61,7 +61,7 @@ static void async_pf_execute(struct work_struct *work)
 	 * access remotely.
 	 */
 	mmap_read_lock(mm);
-	get_user_pages_remote(NULL, mm, addr, 1, FOLL_WRITE, NULL, NULL,
+	get_user_pages_remote(mm, addr, 1, FOLL_WRITE, NULL, NULL,
 			      &locked);
 	if (locked)
 		mmap_read_unlock(mm);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 0a68c9d3d3ab..e684b9b74483 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1830,7 +1830,7 @@ static int hva_to_pfn_remapped(struct vm_area_struct *vma,
 	 * not call the fault handler, so do it here.
 	 */
 	bool unlocked = false;
-	r = fixup_user_fault(current, current->mm, addr,
+	r = fixup_user_fault(current->mm, addr,
 			     (write_fault ? FAULT_FLAG_WRITE : 0),
 			     &unlocked);
 	if (unlocked)