From patchwork Tue Jun 30 20:45:04 2020
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 11634761
From: Peter Xu
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Andrew Morton, peterx@redhat.com, John Hubbard, Michael Ellerman, Gerald Schaefer, Andrea Arcangeli, Linus Torvalds, Will Deacon
Subject: [PATCH v4 01/26] mm: Do page fault accounting in handle_mm_fault
Date: Tue, 30 Jun 2020 16:45:04 -0400
Message-Id: <20200630204504.38516-1-peterx@redhat.com>

This is a preparation patch to move page fault accounting into the
general code in handle_mm_fault().  This includes both the per-task
maj_flt/min_flt counters and the major/minor page fault perf events.
To do this, the pt_regs pointer is passed into handle_mm_fault().

PERF_COUNT_SW_PAGE_FAULTS should still be kept in per-arch page fault
handlers.

So far, all the pt_regs pointers passed into handle_mm_fault() are NULL,
which means this patch should have no intended functional change.
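As an illustration (a condensed sketch of the new calling convention, not
part of the patch text itself): architecture page fault handlers forward
their pt_regs so the common code can do the accounting, while in-kernel
callers that are not servicing a hardware fault keep passing NULL and get
no accounting:

	/* arch/<arch>/mm/fault.c -- converted by later patches in the series */
	fault = handle_mm_fault(vma, address, flags, regs);

	/* mm/gup.c, mm/ksm.c, iommu drivers, ...: no pt_regs to attribute the
	 * fault to, so pass NULL and the accounting is skipped (this patch) */
	ret = handle_mm_fault(vma, address, fault_flags, NULL);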
Suggested-by: Linus Torvalds Signed-off-by: Peter Xu --- arch/alpha/mm/fault.c | 2 +- arch/arc/mm/fault.c | 2 +- arch/arm/mm/fault.c | 2 +- arch/arm64/mm/fault.c | 2 +- arch/csky/mm/fault.c | 3 +- arch/hexagon/mm/vm_fault.c | 2 +- arch/ia64/mm/fault.c | 2 +- arch/m68k/mm/fault.c | 2 +- arch/microblaze/mm/fault.c | 2 +- arch/mips/mm/fault.c | 2 +- arch/nds32/mm/fault.c | 2 +- arch/nios2/mm/fault.c | 2 +- arch/openrisc/mm/fault.c | 2 +- arch/parisc/mm/fault.c | 2 +- arch/powerpc/mm/copro_fault.c | 2 +- arch/powerpc/mm/fault.c | 2 +- arch/riscv/mm/fault.c | 2 +- arch/s390/mm/fault.c | 2 +- arch/sh/mm/fault.c | 2 +- arch/sparc/mm/fault_32.c | 4 +-- arch/sparc/mm/fault_64.c | 2 +- arch/um/kernel/trap.c | 2 +- arch/unicore32/mm/fault.c | 2 +- arch/x86/mm/fault.c | 2 +- arch/xtensa/mm/fault.c | 2 +- drivers/iommu/amd/iommu_v2.c | 2 +- drivers/iommu/intel/svm.c | 3 +- include/linux/mm.h | 7 ++-- mm/gup.c | 4 +-- mm/hmm.c | 3 +- mm/ksm.c | 3 +- mm/memory.c | 62 ++++++++++++++++++++++++++++++++++- 32 files changed, 102 insertions(+), 35 deletions(-) diff --git a/arch/alpha/mm/fault.c b/arch/alpha/mm/fault.c index c2303a8c2b9f..1983e43a5e2f 100644 --- a/arch/alpha/mm/fault.c +++ b/arch/alpha/mm/fault.c @@ -148,7 +148,7 @@ do_page_fault(unsigned long address, unsigned long mmcsr, /* If for any reason at all we couldn't handle the fault, make sure we exit gracefully rather than endlessly redo the fault. */ - fault = handle_mm_fault(vma, address, flags); + fault = handle_mm_fault(vma, address, flags, NULL); if (fault_signal_pending(fault, regs)) return; diff --git a/arch/arc/mm/fault.c b/arch/arc/mm/fault.c index 72f5405a7ec5..1b178dc147fd 100644 --- a/arch/arc/mm/fault.c +++ b/arch/arc/mm/fault.c @@ -131,7 +131,7 @@ void do_page_fault(unsigned long address, struct pt_regs *regs) goto bad_area; } - fault = handle_mm_fault(vma, address, flags); + fault = handle_mm_fault(vma, address, flags, NULL); /* Quick path to respond to signals */ if (fault_signal_pending(fault, regs)) { diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c index c6550eddfce1..01a8e0f8fef7 100644 --- a/arch/arm/mm/fault.c +++ b/arch/arm/mm/fault.c @@ -224,7 +224,7 @@ __do_page_fault(struct mm_struct *mm, unsigned long addr, unsigned int fsr, goto out; } - return handle_mm_fault(vma, addr & PAGE_MASK, flags); + return handle_mm_fault(vma, addr & PAGE_MASK, flags, NULL); check_stack: /* Don't allow expansion below FIRST_USER_ADDRESS */ diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c index 8afb238ff335..be29f4076fe3 100644 --- a/arch/arm64/mm/fault.c +++ b/arch/arm64/mm/fault.c @@ -428,7 +428,7 @@ static vm_fault_t __do_page_fault(struct mm_struct *mm, unsigned long addr, */ if (!(vma->vm_flags & vm_flags)) return VM_FAULT_BADACCESS; - return handle_mm_fault(vma, addr & PAGE_MASK, mm_flags); + return handle_mm_fault(vma, addr & PAGE_MASK, mm_flags, NULL); } static bool is_el0_instruction_abort(unsigned int esr) diff --git a/arch/csky/mm/fault.c b/arch/csky/mm/fault.c index 0b9cbf2cf6a9..7137e2e8dc57 100644 --- a/arch/csky/mm/fault.c +++ b/arch/csky/mm/fault.c @@ -150,7 +150,8 @@ asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long write, * make sure we exit gracefully rather than endlessly redo * the fault. */ - fault = handle_mm_fault(vma, address, write ? FAULT_FLAG_WRITE : 0); + fault = handle_mm_fault(vma, address, write ? 
FAULT_FLAG_WRITE : 0, + NULL); if (unlikely(fault & VM_FAULT_ERROR)) { if (fault & VM_FAULT_OOM) goto out_of_memory; diff --git a/arch/hexagon/mm/vm_fault.c b/arch/hexagon/mm/vm_fault.c index cd3808f96b93..f12f330e7946 100644 --- a/arch/hexagon/mm/vm_fault.c +++ b/arch/hexagon/mm/vm_fault.c @@ -88,7 +88,7 @@ void do_page_fault(unsigned long address, long cause, struct pt_regs *regs) break; } - fault = handle_mm_fault(vma, address, flags); + fault = handle_mm_fault(vma, address, flags, NULL); if (fault_signal_pending(fault, regs)) return; diff --git a/arch/ia64/mm/fault.c b/arch/ia64/mm/fault.c index 3a4dec334cc5..abf2808f9b4b 100644 --- a/arch/ia64/mm/fault.c +++ b/arch/ia64/mm/fault.c @@ -143,7 +143,7 @@ ia64_do_page_fault (unsigned long address, unsigned long isr, struct pt_regs *re * sure we exit gracefully rather than endlessly redo the * fault. */ - fault = handle_mm_fault(vma, address, flags); + fault = handle_mm_fault(vma, address, flags, NULL); if (fault_signal_pending(fault, regs)) return; diff --git a/arch/m68k/mm/fault.c b/arch/m68k/mm/fault.c index a94a814ad6ad..738fff2a16f4 100644 --- a/arch/m68k/mm/fault.c +++ b/arch/m68k/mm/fault.c @@ -135,7 +135,7 @@ int do_page_fault(struct pt_regs *regs, unsigned long address, * the fault. */ - fault = handle_mm_fault(vma, address, flags); + fault = handle_mm_fault(vma, address, flags, NULL); pr_debug("handle_mm_fault returns %x\n", fault); if (fault_signal_pending(fault, regs)) diff --git a/arch/microblaze/mm/fault.c b/arch/microblaze/mm/fault.c index a2bfe587b491..1a3d4c4ca28b 100644 --- a/arch/microblaze/mm/fault.c +++ b/arch/microblaze/mm/fault.c @@ -214,7 +214,7 @@ void do_page_fault(struct pt_regs *regs, unsigned long address, * make sure we exit gracefully rather than endlessly redo * the fault. */ - fault = handle_mm_fault(vma, address, flags); + fault = handle_mm_fault(vma, address, flags, NULL); if (fault_signal_pending(fault, regs)) return; diff --git a/arch/mips/mm/fault.c b/arch/mips/mm/fault.c index 01b168a90434..b1db39784db9 100644 --- a/arch/mips/mm/fault.c +++ b/arch/mips/mm/fault.c @@ -152,7 +152,7 @@ static void __kprobes __do_page_fault(struct pt_regs *regs, unsigned long write, * make sure we exit gracefully rather than endlessly redo * the fault. */ - fault = handle_mm_fault(vma, address, flags); + fault = handle_mm_fault(vma, address, flags, NULL); if (fault_signal_pending(fault, regs)) return; diff --git a/arch/nds32/mm/fault.c b/arch/nds32/mm/fault.c index 8fb73f6401a0..d0ecc8fb5b23 100644 --- a/arch/nds32/mm/fault.c +++ b/arch/nds32/mm/fault.c @@ -206,7 +206,7 @@ void do_page_fault(unsigned long entry, unsigned long addr, * the fault. */ - fault = handle_mm_fault(vma, addr, flags); + fault = handle_mm_fault(vma, addr, flags, NULL); /* * If we need to retry but a fatal signal is pending, handle the diff --git a/arch/nios2/mm/fault.c b/arch/nios2/mm/fault.c index 4112ef0e247e..86beb9a2698e 100644 --- a/arch/nios2/mm/fault.c +++ b/arch/nios2/mm/fault.c @@ -131,7 +131,7 @@ asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long cause, * make sure we exit gracefully rather than endlessly redo * the fault. 
*/ - fault = handle_mm_fault(vma, address, flags); + fault = handle_mm_fault(vma, address, flags, NULL); if (fault_signal_pending(fault, regs)) return; diff --git a/arch/openrisc/mm/fault.c b/arch/openrisc/mm/fault.c index d2224ccca294..3daa491d1edb 100644 --- a/arch/openrisc/mm/fault.c +++ b/arch/openrisc/mm/fault.c @@ -159,7 +159,7 @@ asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long address, * the fault. */ - fault = handle_mm_fault(vma, address, flags); + fault = handle_mm_fault(vma, address, flags, NULL); if (fault_signal_pending(fault, regs)) return; diff --git a/arch/parisc/mm/fault.c b/arch/parisc/mm/fault.c index 66ac0719bd49..e32d06928c24 100644 --- a/arch/parisc/mm/fault.c +++ b/arch/parisc/mm/fault.c @@ -302,7 +302,7 @@ void do_page_fault(struct pt_regs *regs, unsigned long code, * fault. */ - fault = handle_mm_fault(vma, address, flags); + fault = handle_mm_fault(vma, address, flags, NULL); if (fault_signal_pending(fault, regs)) return; diff --git a/arch/powerpc/mm/copro_fault.c b/arch/powerpc/mm/copro_fault.c index b83abbead4a2..2d0276abe0a6 100644 --- a/arch/powerpc/mm/copro_fault.c +++ b/arch/powerpc/mm/copro_fault.c @@ -64,7 +64,7 @@ int copro_handle_mm_fault(struct mm_struct *mm, unsigned long ea, } ret = 0; - *flt = handle_mm_fault(vma, ea, is_write ? FAULT_FLAG_WRITE : 0); + *flt = handle_mm_fault(vma, ea, is_write ? FAULT_FLAG_WRITE : 0, NULL); if (unlikely(*flt & VM_FAULT_ERROR)) { if (*flt & VM_FAULT_OOM) { ret = -ENOMEM; diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c index 641fc5f3d7dd..25dee001d8e1 100644 --- a/arch/powerpc/mm/fault.c +++ b/arch/powerpc/mm/fault.c @@ -607,7 +607,7 @@ static int __do_page_fault(struct pt_regs *regs, unsigned long address, * make sure we exit gracefully rather than endlessly redo * the fault. */ - fault = handle_mm_fault(vma, address, flags); + fault = handle_mm_fault(vma, address, flags, NULL); major |= fault & VM_FAULT_MAJOR; diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c index ae7b7fe24658..50a952a68433 100644 --- a/arch/riscv/mm/fault.c +++ b/arch/riscv/mm/fault.c @@ -110,7 +110,7 @@ asmlinkage void do_page_fault(struct pt_regs *regs) * make sure we exit gracefully rather than endlessly redo * the fault. */ - fault = handle_mm_fault(vma, addr, flags); + fault = handle_mm_fault(vma, addr, flags, NULL); /* * If we need to retry but a fatal signal is pending, handle the diff --git a/arch/s390/mm/fault.c b/arch/s390/mm/fault.c index d53c2e2ea1fd..fc14df0b4d6e 100644 --- a/arch/s390/mm/fault.c +++ b/arch/s390/mm/fault.c @@ -478,7 +478,7 @@ static inline vm_fault_t do_exception(struct pt_regs *regs, int access) * make sure we exit gracefully rather than endlessly redo * the fault. */ - fault = handle_mm_fault(vma, address, flags); + fault = handle_mm_fault(vma, address, flags, NULL); if (fault_signal_pending(fault, regs)) { fault = VM_FAULT_SIGNAL; if (flags & FAULT_FLAG_RETRY_NOWAIT) diff --git a/arch/sh/mm/fault.c b/arch/sh/mm/fault.c index fbe1f2fe9a8c..3c0a11827f7e 100644 --- a/arch/sh/mm/fault.c +++ b/arch/sh/mm/fault.c @@ -482,7 +482,7 @@ asmlinkage void __kprobes do_page_fault(struct pt_regs *regs, * make sure we exit gracefully rather than endlessly redo * the fault. 
*/ - fault = handle_mm_fault(vma, address, flags); + fault = handle_mm_fault(vma, address, flags, NULL); if (unlikely(fault & (VM_FAULT_RETRY | VM_FAULT_ERROR))) if (mm_fault_error(regs, error_code, address, fault)) diff --git a/arch/sparc/mm/fault_32.c b/arch/sparc/mm/fault_32.c index cfef656eda0f..06af03db4417 100644 --- a/arch/sparc/mm/fault_32.c +++ b/arch/sparc/mm/fault_32.c @@ -234,7 +234,7 @@ asmlinkage void do_sparc_fault(struct pt_regs *regs, int text_fault, int write, * make sure we exit gracefully rather than endlessly redo * the fault. */ - fault = handle_mm_fault(vma, address, flags); + fault = handle_mm_fault(vma, address, flags, NULL); if (fault_signal_pending(fault, regs)) return; @@ -410,7 +410,7 @@ static void force_user_fault(unsigned long address, int write) if (!(vma->vm_flags & (VM_READ | VM_EXEC))) goto bad_area; } - switch (handle_mm_fault(vma, address, flags)) { + switch (handle_mm_fault(vma, address, flags, NULL)) { case VM_FAULT_SIGBUS: case VM_FAULT_OOM: goto do_sigbus; diff --git a/arch/sparc/mm/fault_64.c b/arch/sparc/mm/fault_64.c index a3806614e4dc..9ebee14ee893 100644 --- a/arch/sparc/mm/fault_64.c +++ b/arch/sparc/mm/fault_64.c @@ -422,7 +422,7 @@ asmlinkage void __kprobes do_sparc64_fault(struct pt_regs *regs) goto bad_area; } - fault = handle_mm_fault(vma, address, flags); + fault = handle_mm_fault(vma, address, flags, NULL); if (fault_signal_pending(fault, regs)) goto exit_exception; diff --git a/arch/um/kernel/trap.c b/arch/um/kernel/trap.c index 2b3afa354a90..8d9870d76da1 100644 --- a/arch/um/kernel/trap.c +++ b/arch/um/kernel/trap.c @@ -71,7 +71,7 @@ int handle_page_fault(unsigned long address, unsigned long ip, do { vm_fault_t fault; - fault = handle_mm_fault(vma, address, flags); + fault = handle_mm_fault(vma, address, flags, NULL); if ((fault & VM_FAULT_RETRY) && fatal_signal_pending(current)) goto out_nosemaphore; diff --git a/arch/unicore32/mm/fault.c b/arch/unicore32/mm/fault.c index 7654bddde133..9b4539d8d669 100644 --- a/arch/unicore32/mm/fault.c +++ b/arch/unicore32/mm/fault.c @@ -185,7 +185,7 @@ static vm_fault_t __do_pf(struct mm_struct *mm, unsigned long addr, * If for any reason at all we couldn't handle the fault, make * sure we exit gracefully rather than endlessly redo the fault. */ - fault = handle_mm_fault(vma, addr & PAGE_MASK, flags); + fault = handle_mm_fault(vma, addr & PAGE_MASK, flags, NULL); return fault; check_stack: diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c index 1ead568c0101..fe3ca00eb121 100644 --- a/arch/x86/mm/fault.c +++ b/arch/x86/mm/fault.c @@ -1292,7 +1292,7 @@ void do_user_addr_fault(struct pt_regs *regs, * userland). The return to userland is identified whenever * FAULT_FLAG_USER|FAULT_FLAG_KILLABLE are both set in flags. */ - fault = handle_mm_fault(vma, address, flags); + fault = handle_mm_fault(vma, address, flags, NULL); major |= fault & VM_FAULT_MAJOR; /* Quick path to respond to signals */ diff --git a/arch/xtensa/mm/fault.c b/arch/xtensa/mm/fault.c index c4decc73fd86..6942de45f078 100644 --- a/arch/xtensa/mm/fault.c +++ b/arch/xtensa/mm/fault.c @@ -108,7 +108,7 @@ void do_page_fault(struct pt_regs *regs) * make sure we exit gracefully rather than endlessly redo * the fault. 
*/ - fault = handle_mm_fault(vma, address, flags); + fault = handle_mm_fault(vma, address, flags, NULL); if (fault_signal_pending(fault, regs)) return; diff --git a/drivers/iommu/amd/iommu_v2.c b/drivers/iommu/amd/iommu_v2.c index e4b025c5637c..c259108ab6dd 100644 --- a/drivers/iommu/amd/iommu_v2.c +++ b/drivers/iommu/amd/iommu_v2.c @@ -495,7 +495,7 @@ static void do_fault(struct work_struct *work) if (access_error(vma, fault)) goto out; - ret = handle_mm_fault(vma, address, flags); + ret = handle_mm_fault(vma, address, flags, NULL); out: mmap_read_unlock(mm); diff --git a/drivers/iommu/intel/svm.c b/drivers/iommu/intel/svm.c index 6c87c807a0ab..5ae59a6ad681 100644 --- a/drivers/iommu/intel/svm.c +++ b/drivers/iommu/intel/svm.c @@ -872,7 +872,8 @@ static irqreturn_t prq_event_thread(int irq, void *d) goto invalid; ret = handle_mm_fault(vma, address, - req->wr_req ? FAULT_FLAG_WRITE : 0); + req->wr_req ? FAULT_FLAG_WRITE : 0, + NULL); if (ret & VM_FAULT_ERROR) goto invalid; diff --git a/include/linux/mm.h b/include/linux/mm.h index f6a0c302dc76..ebc173dddad5 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -38,6 +38,7 @@ struct file_ra_state; struct user_struct; struct writeback_control; struct bdi_writeback; +struct pt_regs; void init_mm_internals(void); @@ -1651,7 +1652,8 @@ int invalidate_inode_page(struct page *page); #ifdef CONFIG_MMU extern vm_fault_t handle_mm_fault(struct vm_area_struct *vma, - unsigned long address, unsigned int flags); + unsigned long address, unsigned int flags, + struct pt_regs *regs); extern int fixup_user_fault(struct task_struct *tsk, struct mm_struct *mm, unsigned long address, unsigned int fault_flags, bool *unlocked); @@ -1661,7 +1663,8 @@ void unmap_mapping_range(struct address_space *mapping, loff_t const holebegin, loff_t const holelen, int even_cows); #else static inline vm_fault_t handle_mm_fault(struct vm_area_struct *vma, - unsigned long address, unsigned int flags) + unsigned long address, unsigned int flags, + struct pt_regs *regs) { /* should never happen if there's no MMU */ BUG(); diff --git a/mm/gup.c b/mm/gup.c index f6124e38c965..53ad15629014 100644 --- a/mm/gup.c +++ b/mm/gup.c @@ -884,7 +884,7 @@ static int faultin_page(struct task_struct *tsk, struct vm_area_struct *vma, fault_flags |= FAULT_FLAG_TRIED; } - ret = handle_mm_fault(vma, address, fault_flags); + ret = handle_mm_fault(vma, address, fault_flags, NULL); if (ret & VM_FAULT_ERROR) { int err = vm_fault_to_errno(ret, *flags); @@ -1238,7 +1238,7 @@ int fixup_user_fault(struct task_struct *tsk, struct mm_struct *mm, fatal_signal_pending(current)) return -EINTR; - ret = handle_mm_fault(vma, address, fault_flags); + ret = handle_mm_fault(vma, address, fault_flags, NULL); major |= ret & VM_FAULT_MAJOR; if (ret & VM_FAULT_ERROR) { int err = vm_fault_to_errno(ret, 0); diff --git a/mm/hmm.c b/mm/hmm.c index e9a545751108..0be32b8a47be 100644 --- a/mm/hmm.c +++ b/mm/hmm.c @@ -75,7 +75,8 @@ static int hmm_vma_fault(unsigned long addr, unsigned long end, } for (; addr < end; addr += PAGE_SIZE) - if (handle_mm_fault(vma, addr, fault_flags) & VM_FAULT_ERROR) + if (handle_mm_fault(vma, addr, fault_flags, NULL) & + VM_FAULT_ERROR) return -EFAULT; return -EBUSY; } diff --git a/mm/ksm.c b/mm/ksm.c index 5fb176d497ea..90a625b02a1d 100644 --- a/mm/ksm.c +++ b/mm/ksm.c @@ -480,7 +480,8 @@ static int break_ksm(struct vm_area_struct *vma, unsigned long addr) break; if (PageKsm(page)) ret = handle_mm_fault(vma, addr, - FAULT_FLAG_WRITE | FAULT_FLAG_REMOTE); + FAULT_FLAG_WRITE | 
FAULT_FLAG_REMOTE, + NULL); else ret = VM_FAULT_WRITE; put_page(page);
diff --git a/mm/memory.c b/mm/memory.c
index 17a3df0f3994..e594d5cdcaa0 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -71,6 +71,8 @@
 #include
 #include
 #include
+#include
+#include
 #include
@@ -4360,6 +4362,36 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
 	return handle_pte_fault(&vmf);
 }
 
+/**
+ * mm_account_fault - Do page fault accountings
+ * @regs: the pt_regs struct pointer.  When set to NULL, will skip accounting
+ * @address: faulted address.
+ * @major: whether this is a major fault.
+ *
+ * This will take care of most of the page fault accountings.  It should only
+ * be called when a page fault is completed.  For example, VM_FAULT_RETRY means
+ * the fault needs to be retried again later, so it should not contribute to
+ * the accounting.
+ *
+ * The accounting will also include the PERF_COUNT_SW_PAGE_FAULTS_[MAJ|MIN]
+ * perf counter updates.  Note: the handling of PERF_COUNT_SW_PAGE_FAULTS
+ * should still be in per-arch page fault handlers at the entry of page fault.
+ */
+static inline void mm_account_fault(struct pt_regs *regs,
+				    unsigned long address, bool major)
+{
+	if (!regs)
+		return;
+
+	if (major) {
+		current->maj_flt++;
+		perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1, regs, address);
+	} else {
+		current->min_flt++;
+		perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1, regs, address);
+	}
+}
+
 /*
  * By the time we get here, we already hold the mm semaphore
  *
@@ -4367,7 +4399,7 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
  * return value.  See filemap_fault() and __lock_page_or_retry().
  */
 vm_fault_t handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
-		unsigned int flags)
+		unsigned int flags, struct pt_regs *regs)
 {
 	vm_fault_t ret;
 
@@ -4408,6 +4440,34 @@ vm_fault_t handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
 		mem_cgroup_oom_synchronize(false);
 	}
 
+	if (ret & (VM_FAULT_RETRY | VM_FAULT_ERROR))
+		return ret;
+
+	/*
+	 * Do accounting in the common code, to avoid unnecessary
+	 * architecture differences or duplicated code.
+	 *
+	 * We arbitrarily make the rules be:
+	 *
+	 *  - Unsuccessful faults do not count (e.g. when the address wasn't
+	 *    valid).  That includes arch_vma_access_permitted() failing above.
+	 *
+	 *    So this is expressly not a "this many hardware page faults"
+	 *    counter.  Use the hw profiling for that.
+	 *
+	 *  - Incomplete faults do not count (e.g. RETRY).  They will only
+	 *    count once completed.
+	 *
+	 *  - The fault counts as a "major" fault when the final successful
+	 *    fault is VM_FAULT_MAJOR, or if it was a retry (which implies that
+	 *    we couldn't handle it immediately previously).
+	 *
+	 *  - If the fault is done for GUP, regs will be NULL and no accounting
+	 *    will be done.
+	 */
+	mm_account_fault(regs, address, (ret & VM_FAULT_MAJOR) ||
+				(flags & FAULT_FLAG_TRIED));
+
 	return ret;
 }
 EXPORT_SYMBOL_GPL(handle_mm_fault);

From patchwork Tue Jun 30 20:45:06 2020
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 11634753
From: Peter Xu
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Andrew Morton, peterx@redhat.com, John Hubbard, Michael Ellerman, Gerald Schaefer, Andrea Arcangeli, Linus Torvalds, Will Deacon, Richard Henderson, Ivan Kokshaysky, Matt Turner, linux-alpha@vger.kernel.org
Subject: [PATCH v4 02/26] mm/alpha: Use general page fault accounting
Date: Tue, 30 Jun 2020 16:45:06 -0400
Message-Id: <20200630204506.38567-1-peterx@redhat.com>

Use the general page fault accounting by passing regs into
handle_mm_fault().  Add the missing PERF_COUNT_SW_PAGE_FAULTS perf event
too.  Note that the other two perf events
(PERF_COUNT_SW_PAGE_FAULTS_[MAJ|MIN]) are now handled in handle_mm_fault().
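Condensed from the diff below (a sketch, not a verbatim excerpt), the alpha
fault path after this patch becomes roughly:

	if (user_mode(regs))
		flags |= FAULT_FLAG_USER;
	/* counted once per fault, even if we loop back to "retry" below */
	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
retry:
	mmap_read_lock(mm);
	...
	/* passing regs (instead of NULL) lets the core code update
	 * maj_flt/min_flt and the PERF_COUNT_SW_PAGE_FAULTS_[MAJ|MIN] events */
	fault = handle_mm_fault(vma, address, flags, regs);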
CC: Richard Henderson
CC: Ivan Kokshaysky
CC: Matt Turner
CC: linux-alpha@vger.kernel.org
Signed-off-by: Peter Xu
---
 arch/alpha/mm/fault.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/arch/alpha/mm/fault.c b/arch/alpha/mm/fault.c
index 1983e43a5e2f..09172f017efc 100644
--- a/arch/alpha/mm/fault.c
+++ b/arch/alpha/mm/fault.c
@@ -25,6 +25,7 @@
 #include
 #include
 #include
+#include
 
 extern void die_if_kernel(char *,struct pt_regs *,long, unsigned long *);
 
@@ -116,6 +117,7 @@ do_page_fault(unsigned long address, unsigned long mmcsr,
 #endif
 	if (user_mode(regs))
 		flags |= FAULT_FLAG_USER;
+	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
 retry:
 	mmap_read_lock(mm);
 	vma = find_vma(mm, address);
@@ -148,7 +150,7 @@ do_page_fault(unsigned long address, unsigned long mmcsr,
 	/* If for any reason at all we couldn't handle the fault,
 	   make sure we exit gracefully rather than endlessly redo
 	   the fault.  */
-	fault = handle_mm_fault(vma, address, flags, NULL);
+	fault = handle_mm_fault(vma, address, flags, regs);
 
 	if (fault_signal_pending(fault, regs))
 		return;
 
@@ -164,10 +166,6 @@ do_page_fault(unsigned long address, unsigned long mmcsr,
 	}
 
 	if (flags & FAULT_FLAG_ALLOW_RETRY) {
-		if (fault & VM_FAULT_MAJOR)
-			current->maj_flt++;
-		else
-			current->min_flt++;
 		if (fault & VM_FAULT_RETRY) {
 			flags |= FAULT_FLAG_TRIED;

From patchwork Tue Jun 30 20:45:09 2020
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 11634755
From: Peter Xu
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Andrew Morton, peterx@redhat.com, John Hubbard, Michael Ellerman, Gerald Schaefer, Andrea Arcangeli, Linus Torvalds, Will Deacon, Vineet Gupta, linux-snps-arc@lists.infradead.org
Subject: [PATCH v4 03/26] mm/arc: Use general page fault accounting
Date: Tue, 30 Jun 2020 16:45:09 -0400
Message-Id: <20200630204509.38615-1-peterx@redhat.com>
Use the general page fault accounting by passing regs into
handle_mm_fault().  It naturally solves the issue of multiple page fault
accounting when a page fault is retried.

Fix the PERF_COUNT_SW_PAGE_FAULTS perf event manually for page fault
retries, by moving it before taking mmap_sem.

CC: Vineet Gupta
CC: linux-snps-arc@lists.infradead.org
Signed-off-by: Peter Xu
---
 arch/arc/mm/fault.c | 18 +++---------------
 1 file changed, 3 insertions(+), 15 deletions(-)

diff --git a/arch/arc/mm/fault.c b/arch/arc/mm/fault.c
index 1b178dc147fd..5601dec319b5 100644
--- a/arch/arc/mm/fault.c
+++ b/arch/arc/mm/fault.c
@@ -106,6 +106,7 @@ void do_page_fault(unsigned long address, struct pt_regs *regs)
 	if (write)
 		flags |= FAULT_FLAG_WRITE;
 
+	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
 retry:
 	mmap_read_lock(mm);
 
@@ -131,7 +132,7 @@ void do_page_fault(unsigned long address, struct pt_regs *regs)
 		goto bad_area;
 	}
 
-	fault = handle_mm_fault(vma, address, flags, NULL);
+	fault = handle_mm_fault(vma, address, flags, regs);
 
 	/* Quick path to respond to signals */
 	if (fault_signal_pending(fault, regs)) {
@@ -156,22 +157,9 @@ void do_page_fault(unsigned long address, struct pt_regs *regs)
 	 * Major/minor page fault accounting
 	 * (in case of retry we only land here once)
 	 */
-	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
-
-	if (likely(!(fault & VM_FAULT_ERROR))) {
-		if (fault & VM_FAULT_MAJOR) {
-			tsk->maj_flt++;
-			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1,
-				      regs, address);
-		} else {
-			tsk->min_flt++;
-			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1,
-				      regs, address);
-		}
-
+	if (likely(!(fault & VM_FAULT_ERROR)))
 		/* Normal return path: fault Handled Gracefully */
 		return;
-	}
 
 	if (!user_mode(regs))
 		goto no_context;

From patchwork Tue Jun 30 20:45:11 2020
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 11634757
From: Peter Xu
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Andrew Morton, peterx@redhat.com, John Hubbard, Michael Ellerman, Gerald Schaefer, Andrea Arcangeli, Linus Torvalds, Will Deacon, Russell King, linux-arm-kernel@lists.infradead.org
Subject: [PATCH v4 04/26] mm/arm: Use general page fault accounting
Date: Tue, 30 Jun 2020 16:45:11 -0400
Message-Id: <20200630204511.38663-1-peterx@redhat.com>

Use the general page fault accounting by passing regs into
handle_mm_fault().  It naturally solves the issue of multiple page fault
accounting when a page fault is retried.  To do this, we need to pass the
pt_regs pointer into __do_page_fault().

Fix the PERF_COUNT_SW_PAGE_FAULTS perf event manually for page fault
retries, by moving it before taking mmap_sem.

CC: Russell King
CC: Will Deacon
CC: linux-arm-kernel@lists.infradead.org
Signed-off-by: Peter Xu
---
 arch/arm/mm/fault.c | 25 ++++++-------------------
 1 file changed, 6 insertions(+), 19 deletions(-)

diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c
index 01a8e0f8fef7..efa402025031 100644
--- a/arch/arm/mm/fault.c
+++ b/arch/arm/mm/fault.c
@@ -202,7 +202,8 @@ static inline bool access_error(unsigned int fsr, struct vm_area_struct *vma)
 
 static vm_fault_t __kprobes
 __do_page_fault(struct mm_struct *mm, unsigned long addr, unsigned int fsr,
-		unsigned int flags, struct task_struct *tsk)
+		unsigned int flags, struct task_struct *tsk,
+		struct pt_regs *regs)
 {
 	struct vm_area_struct *vma;
 	vm_fault_t fault;
@@ -224,7 +225,7 @@ __do_page_fault(struct mm_struct *mm, unsigned long addr, unsigned int fsr,
 		goto out;
 	}
 
-	return handle_mm_fault(vma, addr & PAGE_MASK, flags, NULL);
+	return handle_mm_fault(vma, addr & PAGE_MASK, flags, regs);
 
 check_stack:
 	/* Don't allow expansion below FIRST_USER_ADDRESS */
@@ -266,6 +267,8 @@ do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
 	if ((fsr & FSR_WRITE) && !(fsr & FSR_CM))
 		flags |= FAULT_FLAG_WRITE;
 
+	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, addr);
+
 	/*
 	 * As per x86, we may deadlock here.  However, since the kernel only
 	 * validly references user space from well defined areas of the code,
@@ -290,7 +293,7 @@ do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
 #endif
 	}
 
-	fault = __do_page_fault(mm, addr, fsr, flags, tsk);
+	fault = __do_page_fault(mm, addr, fsr, flags, tsk, regs);
 
 	/* If we need to retry but a fatal signal is pending, handle the
 	 * signal first. We do not need to release the mmap_lock because
@@ -302,23 +305,7 @@ do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
 		return 0;
 	}
 
-	/*
-	 * Major/minor page fault accounting is only done on the
-	 * initial attempt. If we go through a retry, it is extremely
-	 * likely that the page will be found in page cache at that point.
-	 */
-
-	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, addr);
 	if (!(fault & VM_FAULT_ERROR) && flags & FAULT_FLAG_ALLOW_RETRY) {
-		if (fault & VM_FAULT_MAJOR) {
-			tsk->maj_flt++;
-			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1,
-					regs, addr);
-		} else {
-			tsk->min_flt++;
-			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1,
-					regs, addr);
-		}
 		if (fault & VM_FAULT_RETRY) {
 			flags |= FAULT_FLAG_TRIED;
 			goto retry;

From patchwork Tue Jun 30 20:45:14 2020
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 11634759
From: Peter Xu
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Andrew Morton, peterx@redhat.com, John Hubbard, Michael Ellerman, Gerald Schaefer, Andrea Arcangeli, Linus Torvalds, Will Deacon, Catalin Marinas, linux-arm-kernel@lists.infradead.org
Subject: [PATCH v4 05/26] mm/arm64: Use general page fault accounting
Date: Tue, 30 Jun 2020 16:45:14 -0400
Message-Id: <20200630204514.38711-1-peterx@redhat.com>

Use the general page fault accounting by passing regs into
handle_mm_fault().  It naturally solves the issue of multiple page fault
accounting when a page fault is retried.  To do this, we pass the pt_regs
pointer into __do_page_fault().
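In outline (paraphrased from the diff that follows, not a literal excerpt),
the arm64 change is:

	static vm_fault_t __do_page_fault(struct mm_struct *mm, unsigned long addr,
					  unsigned int mm_flags, unsigned long vm_flags,
					  struct pt_regs *regs)
	{
		...
		return handle_mm_fault(vma, addr & PAGE_MASK, mm_flags, regs);
	}

	/* do_page_fault() no longer tracks "major" itself nor updates
	 * maj_flt/min_flt and the MAJ/MIN perf events; the core mm code now
	 * does that accounting. */
	fault = __do_page_fault(mm, addr, mm_flags, vm_flags, regs);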
CC: Catalin Marinas
CC: Will Deacon
CC: linux-arm-kernel@lists.infradead.org
Signed-off-by: Peter Xu
Acked-by: Will Deacon
---
 arch/arm64/mm/fault.c | 29 ++++++-----------------------
 1 file changed, 6 insertions(+), 23 deletions(-)

diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index be29f4076fe3..f07333e86c2f 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -404,7 +404,8 @@ static void do_bad_area(unsigned long addr, unsigned int esr, struct pt_regs *re
 #define VM_FAULT_BADACCESS	0x020000
 
 static vm_fault_t __do_page_fault(struct mm_struct *mm, unsigned long addr,
-			   unsigned int mm_flags, unsigned long vm_flags)
+			   unsigned int mm_flags, unsigned long vm_flags,
+			   struct pt_regs *regs)
 {
 	struct vm_area_struct *vma = find_vma(mm, addr);
 
@@ -428,7 +429,7 @@ static vm_fault_t __do_page_fault(struct mm_struct *mm, unsigned long addr,
 	 */
 	if (!(vma->vm_flags & vm_flags))
 		return VM_FAULT_BADACCESS;
-	return handle_mm_fault(vma, addr & PAGE_MASK, mm_flags, NULL);
+	return handle_mm_fault(vma, addr & PAGE_MASK, mm_flags, regs);
 }
 
 static bool is_el0_instruction_abort(unsigned int esr)
@@ -450,7 +451,7 @@ static int __kprobes do_page_fault(unsigned long addr, unsigned int esr,
 {
 	const struct fault_info *inf;
 	struct mm_struct *mm = current->mm;
-	vm_fault_t fault, major = 0;
+	vm_fault_t fault;
 	unsigned long vm_flags = VM_ACCESS_FLAGS;
 	unsigned int mm_flags = FAULT_FLAG_DEFAULT;
 
@@ -516,8 +517,7 @@ static int __kprobes do_page_fault(unsigned long addr, unsigned int esr,
 #endif
 	}
 
-	fault = __do_page_fault(mm, addr, mm_flags, vm_flags);
-	major |= fault & VM_FAULT_MAJOR;
+	fault = __do_page_fault(mm, addr, mm_flags, vm_flags, regs);
 
 	/* Quick path to respond to signals */
 	if (fault_signal_pending(fault, regs)) {
@@ -538,25 +538,8 @@ static int __kprobes do_page_fault(unsigned long addr, unsigned int esr,
 	 * Handle the "normal" (no error) case first.
 	 */
 	if (likely(!(fault & (VM_FAULT_ERROR | VM_FAULT_BADMAP |
-			      VM_FAULT_BADACCESS)))) {
-		/*
-		 * Major/minor page fault accounting is only done
-		 * once. If we go through a retry, it is extremely
-		 * likely that the page will be found in page cache at
-		 * that point.
-		 */
-		if (major) {
-			current->maj_flt++;
-			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1, regs,
-				      addr);
-		} else {
-			current->min_flt++;
-			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1, regs,
-				      addr);
-		}
-
+			      VM_FAULT_BADACCESS))))
 		return 0;
-	}
 
 	/*
 	 * If we are in kernel mode at this point, we have no context to
From patchwork Tue Jun 30 20:45:17 2020
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 11634805
From: Peter Xu
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Andrew Morton, peterx@redhat.com, John Hubbard, Michael Ellerman, Gerald Schaefer, Andrea Arcangeli, Linus Torvalds, Will Deacon, Guo Ren, linux-csky@vger.kernel.org
Subject: [PATCH v4 06/26] mm/csky: Use general page fault accounting
Date: Tue, 30 Jun 2020 16:45:17 -0400
Message-Id: <20200630204517.38760-1-peterx@redhat.com>

Use the general page fault accounting by passing regs into
handle_mm_fault().  This naturally solves the problem of a fault being
accounted more than once when the page fault is retried.

CC: Guo Ren
CC: linux-csky@vger.kernel.org
Signed-off-by: Peter Xu
Acked-by: Guo Ren
---
 arch/csky/mm/fault.c | 12 +-----------
 1 file changed, 1 insertion(+), 11 deletions(-)

diff --git a/arch/csky/mm/fault.c b/arch/csky/mm/fault.c
index 7137e2e8dc57..c3f580714ee4 100644
--- a/arch/csky/mm/fault.c
+++ b/arch/csky/mm/fault.c
@@ -151,7 +151,7 @@ asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long write,
 	 * the fault.
 	 */
 	fault = handle_mm_fault(vma, address, write ? FAULT_FLAG_WRITE : 0,
-				NULL);
+				regs);
 	if (unlikely(fault & VM_FAULT_ERROR)) {
 		if (fault & VM_FAULT_OOM)
 			goto out_of_memory;
@@ -161,16 +161,6 @@ asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long write,
 			goto bad_area;
 		BUG();
 	}
-	if (fault & VM_FAULT_MAJOR) {
-		tsk->maj_flt++;
-		perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1, regs,
-			      address);
-	} else {
-		tsk->min_flt++;
-		perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1, regs,
-			      address);
-	}
-
 	mmap_read_unlock(mm);
 	return;
From patchwork Tue Jun 30 20:45:19 2020
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 11634763
From: Peter Xu
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Andrew Morton, peterx@redhat.com, John Hubbard, Michael Ellerman, Gerald Schaefer, Andrea Arcangeli, Linus Torvalds, Will Deacon, Brian Cain, linux-hexagon@vger.kernel.org
Subject: [PATCH v4 07/26] mm/hexagon: Use general page fault accounting
Date: Tue, 30 Jun 2020 16:45:19 -0400
Message-Id: <20200630204519.38809-1-peterx@redhat.com>

Use the general page fault accounting by passing regs into
handle_mm_fault().  This naturally solves the problem of a fault being
accounted more than once when the page fault is retried.

Also add the missing PERF_COUNT_SW_PAGE_FAULTS perf event.  Note that the
other two perf events (PERF_COUNT_SW_PAGE_FAULTS_[MAJ|MIN]) are now
handled in handle_mm_fault().
CC: Brian Cain
CC: linux-hexagon@vger.kernel.org
Signed-off-by: Peter Xu
Acked-by: Brian Cain
---
 arch/hexagon/mm/vm_fault.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/arch/hexagon/mm/vm_fault.c b/arch/hexagon/mm/vm_fault.c
index f12f330e7946..ef32c5a84ff3 100644
--- a/arch/hexagon/mm/vm_fault.c
+++ b/arch/hexagon/mm/vm_fault.c
@@ -18,6 +18,7 @@
 #include
 #include
 #include
+#include
 
 /*
  * Decode of hardware exception sends us to one of several
@@ -53,6 +54,8 @@ void do_page_fault(unsigned long address, long cause, struct pt_regs *regs)
 
 	if (user_mode(regs))
 		flags |= FAULT_FLAG_USER;
+
+	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
 retry:
 	mmap_read_lock(mm);
 	vma = find_vma(mm, address);
@@ -88,7 +91,7 @@ void do_page_fault(unsigned long address, long cause, struct pt_regs *regs)
 		break;
 	}
 
-	fault = handle_mm_fault(vma, address, flags, NULL);
+	fault = handle_mm_fault(vma, address, flags, regs);
 
 	if (fault_signal_pending(fault, regs))
 		return;
@@ -96,10 +99,6 @@ void do_page_fault(unsigned long address, long cause, struct pt_regs *regs)
 	/* The most common case -- we are done. */
 	if (likely(!(fault & VM_FAULT_ERROR))) {
 		if (flags & FAULT_FLAG_ALLOW_RETRY) {
-			if (fault & VM_FAULT_MAJOR)
-				current->maj_flt++;
-			else
-				current->min_flt++;
 			if (fault & VM_FAULT_RETRY) {
 				flags |= FAULT_FLAG_TRIED;
 				goto retry;

From patchwork Tue Jun 30 20:45:22 2020
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 11634811
From: Peter Xu
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Andrew Morton, peterx@redhat.com, John Hubbard, Michael Ellerman, Gerald Schaefer, Andrea Arcangeli, Linus Torvalds, Will Deacon
Subject: [PATCH v4 08/26] mm/ia64: Use general page fault accounting
Date: Tue, 30 Jun 2020 16:45:22 -0400
Message-Id: <20200630204522.38857-1-peterx@redhat.com>

Use the general page fault accounting by passing regs into
handle_mm_fault().  This naturally solves the problem of a fault being
accounted more than once when the page fault is retried.

Also add the missing PERF_COUNT_SW_PAGE_FAULTS perf event.  Note that the
other two perf events (PERF_COUNT_SW_PAGE_FAULTS_[MAJ|MIN]) are now
handled in handle_mm_fault().

Signed-off-by: Peter Xu
---
 arch/ia64/mm/fault.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/arch/ia64/mm/fault.c b/arch/ia64/mm/fault.c
index abf2808f9b4b..cd9766d2b6e0 100644
--- a/arch/ia64/mm/fault.c
+++ b/arch/ia64/mm/fault.c
@@ -14,6 +14,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -105,6 +106,8 @@ ia64_do_page_fault (unsigned long address, unsigned long isr, struct pt_regs *re
 		flags |= FAULT_FLAG_USER;
 	if (mask & VM_WRITE)
 		flags |= FAULT_FLAG_WRITE;
+
+	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
 retry:
 	mmap_read_lock(mm);
 
@@ -143,7 +146,7 @@ ia64_do_page_fault (unsigned long address, unsigned long isr, struct pt_regs *re
 	 * sure we exit gracefully rather than endlessly redo the
 	 * fault.
 	 */
-	fault = handle_mm_fault(vma, address, flags, NULL);
+	fault = handle_mm_fault(vma, address, flags, regs);
 
 	if (fault_signal_pending(fault, regs))
 		return;
@@ -166,10 +169,6 @@ ia64_do_page_fault (unsigned long address, unsigned long isr, struct pt_regs *re
 	}
 
 	if (flags & FAULT_FLAG_ALLOW_RETRY) {
-		if (fault & VM_FAULT_MAJOR)
-			current->maj_flt++;
-		else
-			current->min_flt++;
 		if (fault & VM_FAULT_RETRY) {
 			flags |= FAULT_FLAG_TRIED;
From patchwork Tue Jun 30 20:45:25 2020
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 11634765
From: Peter Xu
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Andrew Morton, peterx@redhat.com, John Hubbard, Michael Ellerman, Gerald Schaefer, Andrea Arcangeli, Linus Torvalds, Will Deacon, Geert Uytterhoeven, linux-m68k@lists.linux-m68k.org
Subject: [PATCH v4 09/26] mm/m68k: Use general page fault accounting
Date: Tue, 30 Jun 2020 16:45:25 -0400
Message-Id: <20200630204525.38906-1-peterx@redhat.com>

Use the general page fault accounting by passing regs into
handle_mm_fault().  This naturally solves the problem of a fault being
accounted more than once when the page fault is retried.

Also add the missing PERF_COUNT_SW_PAGE_FAULTS perf event.  Note that the
other two perf events (PERF_COUNT_SW_PAGE_FAULTS_[MAJ|MIN]) are now
handled in handle_mm_fault().

CC: Geert Uytterhoeven
CC: linux-m68k@lists.linux-m68k.org
Signed-off-by: Peter Xu
---
 arch/m68k/mm/fault.c | 14 ++++----------
 1 file changed, 4 insertions(+), 10 deletions(-)

diff --git a/arch/m68k/mm/fault.c b/arch/m68k/mm/fault.c
index 738fff2a16f4..d9c22e24d585 100644
--- a/arch/m68k/mm/fault.c
+++ b/arch/m68k/mm/fault.c
@@ -12,6 +12,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -85,6 +86,8 @@ int do_page_fault(struct pt_regs *regs, unsigned long address,
 
 	if (user_mode(regs))
 		flags |= FAULT_FLAG_USER;
+
+	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
 retry:
 	mmap_read_lock(mm);
 
@@ -135,7 +138,7 @@ int do_page_fault(struct pt_regs *regs, unsigned long address,
 	 * the fault.
 	 */
 
-	fault = handle_mm_fault(vma, address, flags, NULL);
+	fault = handle_mm_fault(vma, address, flags, regs);
 	pr_debug("handle_mm_fault returns %x\n", fault);
 
 	if (fault_signal_pending(fault, regs))
@@ -151,16 +154,7 @@ int do_page_fault(struct pt_regs *regs, unsigned long address,
 		BUG();
 	}
 
-	/*
-	 * Major/minor page fault accounting is only done on the
-	 * initial attempt. If we go through a retry, it is extremely
-	 * likely that the page will be found in page cache at that point.
-	 */
 	if (flags & FAULT_FLAG_ALLOW_RETRY) {
-		if (fault & VM_FAULT_MAJOR)
-			current->maj_flt++;
-		else
-			current->min_flt++;
 		if (fault & VM_FAULT_RETRY) {
 			flags |= FAULT_FLAG_TRIED;
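For reference, the maj_flt/min_flt counters being consolidated by this
series are the ones user space ultimately observes through getrusage(),
wait4(), /proc/<pid>/stat and perf's page-fault events.  A small,
self-contained user-space program (illustrative only, not part of the
series) that watches these counters looks like this; with the accounting
fixed, a fault that the kernel internally retries still shows up exactly
once:

/* Print this process's minor/major page fault counts. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/resource.h>

int main(void)
{
	struct rusage ru;
	size_t len = 64 << 20;		/* 64 MB */
	char *buf = malloc(len);

	if (!buf)
		return 1;
	memset(buf, 0, len);		/* touch the pages: minor faults */

	if (getrusage(RUSAGE_SELF, &ru))
		return 1;
	printf("minor faults: %ld\n", ru.ru_minflt);
	printf("major faults: %ld\n", ru.ru_majflt);
	free(buf);
	return 0;
}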
From patchwork Tue Jun 30 20:45:27 2020
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 11634767
From: Peter Xu
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Andrew Morton, peterx@redhat.com, John Hubbard, Michael Ellerman, Gerald Schaefer, Andrea Arcangeli, Linus Torvalds, Will Deacon, Michal Simek
Subject: [PATCH v4 10/26] mm/microblaze: Use general page fault accounting
Date: Tue, 30 Jun 2020 16:45:27 -0400
Message-Id: <20200630204527.38955-1-peterx@redhat.com>

Use the general page fault accounting by passing regs into
handle_mm_fault().  This naturally solves the problem of a fault being
accounted more than once when the page fault is retried.

Also add the missing PERF_COUNT_SW_PAGE_FAULTS perf event.  Note that the
other two perf events (PERF_COUNT_SW_PAGE_FAULTS_[MAJ|MIN]) are now
handled in handle_mm_fault().

CC: Michal Simek
Signed-off-by: Peter Xu
---
 arch/microblaze/mm/fault.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/arch/microblaze/mm/fault.c b/arch/microblaze/mm/fault.c
index 1a3d4c4ca28b..b3fed2cecf84 100644
--- a/arch/microblaze/mm/fault.c
+++ b/arch/microblaze/mm/fault.c
@@ -28,6 +28,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -121,6 +122,8 @@ void do_page_fault(struct pt_regs *regs, unsigned long address,
 	if (user_mode(regs))
 		flags |= FAULT_FLAG_USER;
 
+	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
+
 	/* When running in the kernel we expect faults to occur only to
 	 * addresses in user space.  All other faults represent errors in the
	 * kernel and should generate an OOPS.  Unfortunately, in the case of an
@@ -214,7 +217,7 @@ void do_page_fault(struct pt_regs *regs, unsigned long address,
 	 * make sure we exit gracefully rather than endlessly redo
 	 * the fault.
 	 */
-	fault = handle_mm_fault(vma, address, flags, NULL);
+	fault = handle_mm_fault(vma, address, flags, regs);
 
 	if (fault_signal_pending(fault, regs))
 		return;
@@ -230,10 +233,6 @@ void do_page_fault(struct pt_regs *regs, unsigned long address,
 	}
 
 	if (flags & FAULT_FLAG_ALLOW_RETRY) {
-		if (unlikely(fault & VM_FAULT_MAJOR))
-			current->maj_flt++;
-		else
-			current->min_flt++;
 		if (fault & VM_FAULT_RETRY) {
 			flags |= FAULT_FLAG_TRIED;
From patchwork Tue Jun 30 20:45:30 2020
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 11634769
From: Peter Xu
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Andrew Morton, peterx@redhat.com, John Hubbard, Michael Ellerman, Gerald Schaefer, Andrea Arcangeli, Linus Torvalds, Will Deacon, Thomas Bogendoerfer, linux-mips@vger.kernel.org
Subject: [PATCH v4 11/26] mm/mips: Use general page fault accounting
Date: Tue, 30 Jun 2020 16:45:30 -0400
Message-Id: <20200630204530.39003-1-peterx@redhat.com>

Use the general page fault accounting by passing regs into
handle_mm_fault().  This naturally solves the problem of a fault being
accounted more than once when the page fault is retried.

Also fix the PERF_COUNT_SW_PAGE_FAULTS perf event for page fault retries
by moving it before the mmap lock is taken, so that it is counted only
once per fault.
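The reason the event has to move is that the handler can loop back to its
retry label after handle_mm_fault() returns VM_FAULT_RETRY, so anything
counted after that call may run twice for the same fault.  Schematically
(an illustrative skeleton, not the exact MIPS code):

	/* Counted exactly once per fault, before the lock/retry loop. */
	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
retry:
	mmap_read_lock(mm);
	/* ... find and validate the VMA ... */
	fault = handle_mm_fault(vma, address, flags, regs);
	if (fault & VM_FAULT_RETRY) {
		/* The core already dropped mmap_lock; try once more. */
		flags |= FAULT_FLAG_TRIED;
		goto retry;
	}

Counting after handle_mm_fault(), as the old code did, would emit the
event once per attempt instead of once per fault.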
CC: Thomas Bogendoerfer
CC: linux-mips@vger.kernel.org
Acked-by: Thomas Bogendoerfer
Signed-off-by: Peter Xu
---
 arch/mips/mm/fault.c | 14 +++-----------
 1 file changed, 3 insertions(+), 11 deletions(-)

diff --git a/arch/mips/mm/fault.c b/arch/mips/mm/fault.c
index b1db39784db9..7c871b14e74a 100644
--- a/arch/mips/mm/fault.c
+++ b/arch/mips/mm/fault.c
@@ -96,6 +96,8 @@ static void __kprobes __do_page_fault(struct pt_regs *regs, unsigned long write,
 
 	if (user_mode(regs))
 		flags |= FAULT_FLAG_USER;
+
+	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
 retry:
 	mmap_read_lock(mm);
 	vma = find_vma(mm, address);
@@ -152,12 +154,11 @@ static void __kprobes __do_page_fault(struct pt_regs *regs, unsigned long write,
 	 * make sure we exit gracefully rather than endlessly redo
 	 * the fault.
 	 */
-	fault = handle_mm_fault(vma, address, flags, NULL);
+	fault = handle_mm_fault(vma, address, flags, regs);
 
 	if (fault_signal_pending(fault, regs))
 		return;
 
-	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
 	if (unlikely(fault & VM_FAULT_ERROR)) {
 		if (fault & VM_FAULT_OOM)
 			goto out_of_memory;
@@ -168,15 +169,6 @@ static void __kprobes __do_page_fault(struct pt_regs *regs, unsigned long write,
 		BUG();
 	}
 
 	if (flags & FAULT_FLAG_ALLOW_RETRY) {
-		if (fault & VM_FAULT_MAJOR) {
-			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1,
-				      regs, address);
-			tsk->maj_flt++;
-		} else {
-			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1,
-				      regs, address);
-			tsk->min_flt++;
-		}
 		if (fault & VM_FAULT_RETRY) {
 			flags |= FAULT_FLAG_TRIED;
From patchwork Tue Jun 30 20:45:33 2020
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 11634771
From: Peter Xu
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Andrew Morton, peterx@redhat.com, John Hubbard, Michael Ellerman, Gerald Schaefer, Andrea Arcangeli, Linus Torvalds, Will Deacon, Nick Hu, Greentime Hu, Vincent Chen
Subject: [PATCH v4 12/26] mm/nds32: Use general page fault accounting
Date: Tue, 30 Jun 2020 16:45:33 -0400
Message-Id: <20200630204533.39053-1-peterx@redhat.com>

Use the general page fault accounting by passing regs into
handle_mm_fault().  This naturally solves the problem of a fault being
accounted more than once when the page fault is retried.

Also fix the PERF_COUNT_SW_PAGE_FAULTS perf event for page fault retries
by moving it before the mmap lock is taken, so that it is counted only
once per fault.

CC: Nick Hu
CC: Greentime Hu
CC: Vincent Chen
Acked-by: Greentime Hu
Signed-off-by: Peter Xu
---
 arch/nds32/mm/fault.c | 19 +++----------------
 1 file changed, 3 insertions(+), 16 deletions(-)

diff --git a/arch/nds32/mm/fault.c b/arch/nds32/mm/fault.c
index d0ecc8fb5b23..f02524eb6d56 100644
--- a/arch/nds32/mm/fault.c
+++ b/arch/nds32/mm/fault.c
@@ -121,6 +121,8 @@ void do_page_fault(unsigned long entry, unsigned long addr,
 	if (unlikely(faulthandler_disabled() || !mm))
 		goto no_context;
 
+	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, addr);
+
 	/*
 	 * As per x86, we may deadlock here. However, since the kernel only
 	 * validly references user space from well defined areas of the code,
@@ -206,7 +208,7 @@ void do_page_fault(unsigned long entry, unsigned long addr,
 	 * the fault.
 	 */
 
-	fault = handle_mm_fault(vma, addr, flags, NULL);
+	fault = handle_mm_fault(vma, addr, flags, regs);
 
 	/*
 	 * If we need to retry but a fatal signal is pending, handle the
@@ -228,22 +230,7 @@ void do_page_fault(unsigned long entry, unsigned long addr,
 			goto bad_area;
 	}
 
-	/*
-	 * Major/minor page fault accounting is only done on the initial
-	 * attempt. If we go through a retry, it is extremely likely that the
-	 * page will be found in page cache at that point.
-	 */
-	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, addr);
 	if (flags & FAULT_FLAG_ALLOW_RETRY) {
-		if (fault & VM_FAULT_MAJOR) {
-			tsk->maj_flt++;
-			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ,
-				      1, regs, addr);
-		} else {
-			tsk->min_flt++;
-			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN,
-				      1, regs, addr);
-		}
 		if (fault & VM_FAULT_RETRY) {
 			flags |= FAULT_FLAG_TRIED;
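Since several of these patches shuffle where PERF_COUNT_SW_PAGE_FAULTS and
its MAJ/MIN variants are emitted, a user-space counter is a handy way to
sanity-check the result.  The sketch below is plain perf_event_open() usage
rather than anything from this series; it counts minor faults for the
calling thread, and with the accounting consolidated each fault should be
reported once regardless of kernel-internal retries:

/* Count PERF_COUNT_SW_PAGE_FAULTS_MIN for the calling thread. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

int main(void)
{
	struct perf_event_attr attr;
	long long count;
	int fd;

	memset(&attr, 0, sizeof(attr));
	attr.type = PERF_TYPE_SOFTWARE;
	attr.size = sizeof(attr);
	attr.config = PERF_COUNT_SW_PAGE_FAULTS_MIN;
	attr.disabled = 1;

	/* Measure the calling thread, on any CPU. */
	fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
	if (fd < 0)
		return 1;

	ioctl(fd, PERF_EVENT_IOC_RESET, 0);
	ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);

	/* Touch some freshly allocated memory to generate minor faults. */
	volatile char *buf = malloc(16 << 20);
	for (size_t i = 0; i < (16 << 20); i += 4096)
		buf[i] = 1;

	ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);
	if (read(fd, &count, sizeof(count)) != sizeof(count))
		return 1;
	printf("minor page faults: %lld\n", count);
	close(fd);
	return 0;
}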
From patchwork Tue Jun 30 20:45:35 2020
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 11634773
From: Peter Xu
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Andrew Morton, peterx@redhat.com, John Hubbard, Michael Ellerman, Gerald Schaefer, Andrea Arcangeli, Linus Torvalds, Will Deacon, Ley Foon Tan
Subject: [PATCH v4 13/26] mm/nios2: Use general page fault accounting
Date: Tue, 30 Jun 2020 16:45:35 -0400
Message-Id: <20200630204535.39101-1-peterx@redhat.com>

Use the general page fault accounting by passing regs into
handle_mm_fault().  This naturally solves the problem of a fault being
accounted more than once when the page fault is retried.

Also add the missing PERF_COUNT_SW_PAGE_FAULTS perf event.  Note that the
other two perf events (PERF_COUNT_SW_PAGE_FAULTS_[MAJ|MIN]) are now
handled in handle_mm_fault().
CC: Ley Foon Tan
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 arch/nios2/mm/fault.c | 14 ++++----------
 1 file changed, 4 insertions(+), 10 deletions(-)

diff --git a/arch/nios2/mm/fault.c b/arch/nios2/mm/fault.c
index 86beb9a2698e..9476feecf512 100644
--- a/arch/nios2/mm/fault.c
+++ b/arch/nios2/mm/fault.c
@@ -24,6 +24,7 @@
 #include
 #include
 #include
+#include <linux/perf_event.h>
 #include
 #include
@@ -83,6 +84,8 @@ asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long cause,
 	if (user_mode(regs))
 		flags |= FAULT_FLAG_USER;
+	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
+
 	if (!mmap_read_trylock(mm)) {
 		if (!user_mode(regs) && !search_exception_tables(regs->ea))
 			goto bad_area_nosemaphore;
@@ -131,7 +134,7 @@ asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long cause,
 	 * make sure we exit gracefully rather than endlessly redo
 	 * the fault.
 	 */
-	fault = handle_mm_fault(vma, address, flags, NULL);
+	fault = handle_mm_fault(vma, address, flags, regs);
 	if (fault_signal_pending(fault, regs))
 		return;
@@ -146,16 +149,7 @@ asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long cause,
 			BUG();
 	}
-	/*
-	 * Major/minor page fault accounting is only done on the
-	 * initial attempt. If we go through a retry, it is extremely
-	 * likely that the page will be found in page cache at that point.
-	 */
 	if (flags & FAULT_FLAG_ALLOW_RETRY) {
-		if (fault & VM_FAULT_MAJOR)
-			current->maj_flt++;
-		else
-			current->min_flt++;
 		if (fault & VM_FAULT_RETRY) {
 			flags |= FAULT_FLAG_TRIED;

From patchwork Tue Jun 30 20:45:38 2020
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 11634775
From: Peter Xu <peterx@redhat.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Andrew Morton, peterx@redhat.com, John Hubbard, Michael Ellerman,
 Gerald Schaefer, Andrea Arcangeli, Linus Torvalds, Will Deacon,
 Jonas Bonn, Stefan Kristiansson, Stafford Horne, openrisc@lists.librecores.org
Subject: [PATCH v4 14/26] mm/openrisc: Use general page fault accounting
Date: Tue, 30 Jun 2020 16:45:38 -0400
Message-Id: <20200630204538.39149-1-peterx@redhat.com>
Use the general page fault accounting by passing regs into
handle_mm_fault().  It naturally solves the problem of a page fault
being accounted more than once when it is retried.  Also add the
missing PERF_COUNT_SW_PAGE_FAULTS perf event.  Note that the other two
perf events (PERF_COUNT_SW_PAGE_FAULTS_[MAJ|MIN]) are now handled
inside handle_mm_fault().

CC: Jonas Bonn
CC: Stefan Kristiansson
CC: Stafford Horne
CC: openrisc@lists.librecores.org
Acked-by: Stafford Horne
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 arch/openrisc/mm/fault.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/arch/openrisc/mm/fault.c b/arch/openrisc/mm/fault.c
index 3daa491d1edb..ca97d9baab51 100644
--- a/arch/openrisc/mm/fault.c
+++ b/arch/openrisc/mm/fault.c
@@ -15,6 +15,7 @@
 #include
 #include
 #include
+#include <linux/perf_event.h>
 #include
 #include
@@ -103,6 +104,8 @@ asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long address,
 	if (in_interrupt() || !mm)
 		goto no_context;
+	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
+
 retry:
 	mmap_read_lock(mm);
 	vma = find_vma(mm, address);
@@ -159,7 +162,7 @@ asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long address,
 	 * the fault.
 	 */
-	fault = handle_mm_fault(vma, address, flags, NULL);
+	fault = handle_mm_fault(vma, address, flags, regs);
 	if (fault_signal_pending(fault, regs))
 		return;
@@ -176,10 +179,6 @@ asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long address,
 	if (flags & FAULT_FLAG_ALLOW_RETRY) {
 		/*RGD modeled on Cris */
-		if (fault & VM_FAULT_MAJOR)
-			tsk->maj_flt++;
-		else
-			tsk->min_flt++;
 		if (fault & VM_FAULT_RETRY) {
 			flags |= FAULT_FLAG_TRIED;

From patchwork Tue Jun 30 20:45:40 2020
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 11634813
From: Peter Xu <peterx@redhat.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Andrew Morton, peterx@redhat.com, John Hubbard, Michael Ellerman,
 Gerald Schaefer, Andrea Arcangeli, Linus Torvalds, Will Deacon,
 "James E. J. Bottomley", Helge Deller, linux-parisc@vger.kernel.org
Subject: [PATCH v4 15/26] mm/parisc: Use general page fault accounting
Date: Tue, 30 Jun 2020 16:45:40 -0400
Message-Id: <20200630204540.39197-1-peterx@redhat.com>

Use the general page fault accounting by passing regs into
handle_mm_fault().  It naturally solves the problem of a page fault
being accounted more than once when it is retried.  Also add the
missing PERF_COUNT_SW_PAGE_FAULTS perf event.  Note that the other two
perf events (PERF_COUNT_SW_PAGE_FAULTS_[MAJ|MIN]) are now handled
inside handle_mm_fault().

CC: James E.J. Bottomley
CC: Helge Deller
CC: linux-parisc@vger.kernel.org
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 arch/parisc/mm/fault.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/arch/parisc/mm/fault.c b/arch/parisc/mm/fault.c
index e32d06928c24..4bfe2da9fbe3 100644
--- a/arch/parisc/mm/fault.c
+++ b/arch/parisc/mm/fault.c
@@ -18,6 +18,7 @@
 #include
 #include
 #include
+#include <linux/perf_event.h>
 #include
@@ -281,6 +282,7 @@ void do_page_fault(struct pt_regs *regs, unsigned long code,
 	acc_type = parisc_acctyp(code, regs->iir);
 	if (acc_type & VM_WRITE)
 		flags |= FAULT_FLAG_WRITE;
+	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
 retry:
 	mmap_read_lock(mm);
 	vma = find_vma_prev(mm, address, &prev_vma);
@@ -302,7 +304,7 @@ void do_page_fault(struct pt_regs *regs, unsigned long code,
 	 * fault.
 	 */
-	fault = handle_mm_fault(vma, address, flags, NULL);
+	fault = handle_mm_fault(vma, address, flags, regs);
 	if (fault_signal_pending(fault, regs))
 		return;
@@ -323,10 +325,6 @@ void do_page_fault(struct pt_regs *regs, unsigned long code,
 		BUG();
 	}
 	if (flags & FAULT_FLAG_ALLOW_RETRY) {
-		if (fault & VM_FAULT_MAJOR)
-			current->maj_flt++;
-		else
-			current->min_flt++;
 		if (fault & VM_FAULT_RETRY) {
 			/*
 			 * No need to mmap_read_unlock(mm) as we would

From patchwork Tue Jun 30 20:45:43 2020
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 11634777
From: Peter Xu <peterx@redhat.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Andrew Morton, peterx@redhat.com, John Hubbard, Michael Ellerman,
 Gerald Schaefer, Andrea Arcangeli, Linus Torvalds, Will Deacon,
 Benjamin Herrenschmidt, Paul Mackerras, linuxppc-dev@lists.ozlabs.org
Subject: [PATCH v4 16/26] mm/powerpc: Use general page fault accounting
Date: Tue, 30 Jun 2020 16:45:43 -0400
Message-Id: <20200630204543.39245-1-peterx@redhat.com>

Use the general page fault accounting by passing regs into
handle_mm_fault().

CC: Michael Ellerman
CC: Benjamin Herrenschmidt
CC: Paul Mackerras
CC: linuxppc-dev@lists.ozlabs.org
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 arch/powerpc/mm/fault.c | 11 +++--------
 1 file changed, 3 insertions(+), 8 deletions(-)

diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index 25dee001d8e1..00259e9b452d 100644
--- a/arch/powerpc/mm/fault.c
+++ b/arch/powerpc/mm/fault.c
@@ -607,7 +607,7 @@ static int __do_page_fault(struct pt_regs *regs, unsigned long address,
 	 * make sure we exit gracefully rather than endlessly redo
 	 * the fault.
 	 */
-	fault = handle_mm_fault(vma, address, flags, NULL);
+	fault = handle_mm_fault(vma, address, flags, regs);
 	major |= fault & VM_FAULT_MAJOR;
@@ -633,14 +633,9 @@ static int __do_page_fault(struct pt_regs *regs, unsigned long address,
 	/*
 	 * Major/minor page fault accounting.
 	 */
-	if (major) {
-		current->maj_flt++;
-		perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1, regs, address);
+	if (major)
 		cmo_account_page_fault();
-	} else {
-		current->min_flt++;
-		perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1, regs, address);
-	}
+
 	return 0;
 }
 NOKPROBE_SYMBOL(__do_page_fault);

From patchwork Tue Jun 30 20:45:45 2020
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 11634779
From: Peter Xu <peterx@redhat.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Andrew Morton, peterx@redhat.com, John Hubbard, Michael Ellerman,
 Gerald Schaefer, Andrea Arcangeli, Linus Torvalds, Will Deacon,
 Paul Walmsley, Palmer Dabbelt, Albert Ou, linux-riscv@lists.infradead.org
Subject: [PATCH v4 17/26] mm/riscv: Use general page fault accounting
Date: Tue, 30 Jun 2020 16:45:45 -0400
Message-Id: <20200630204545.39293-1-peterx@redhat.com>

Use the general page fault accounting by passing regs into
handle_mm_fault().  It naturally solves the problem of a page fault
being accounted more than once when it is retried.
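Since several patches in this series exist mainly to keep
PERF_COUNT_SW_PAGE_FAULTS and its MAJ/MIN variants from being
miscounted, it is worth remembering how those counters are consumed.
A minimal user-space reader via perf_event_open(2) — an illustrative
sketch only, not part of this series — could look like:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

/* Count software page faults taken by this process. */
int main(void)
{
	struct perf_event_attr attr;
	long long count;
	int fd;

	memset(&attr, 0, sizeof(attr));
	attr.type = PERF_TYPE_SOFTWARE;
	attr.size = sizeof(attr);
	attr.config = PERF_COUNT_SW_PAGE_FAULTS;
	attr.disabled = 1;

	/* pid = 0 (self), cpu = -1 (any), no group, no flags */
	fd = syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0);
	if (fd < 0) {
		perror("perf_event_open");
		return 1;
	}

	ioctl(fd, PERF_EVENT_IOC_RESET, 0);
	ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);

	/* Touch some freshly allocated memory so a few faults occur. */
	volatile char *p = malloc(1 << 20);
	for (size_t i = 0; i < (1 << 20); i += 4096)
		p[i] = 1;

	ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);
	read(fd, &count, sizeof(count));
	printf("page faults: %lld\n", count);
	close(fd);
	return 0;
}

Running something like this before and after the series is a quick way
to sanity-check that retried faults are no longer counted twice.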
CC: Paul Walmsley
CC: Palmer Dabbelt
CC: Albert Ou
CC: linux-riscv@lists.infradead.org
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 arch/riscv/mm/fault.c | 16 +---------------
 1 file changed, 1 insertion(+), 15 deletions(-)

diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c
index 50a952a68433..e72cec09d55b 100644
--- a/arch/riscv/mm/fault.c
+++ b/arch/riscv/mm/fault.c
@@ -110,7 +110,7 @@ asmlinkage void do_page_fault(struct pt_regs *regs)
 	 * make sure we exit gracefully rather than endlessly redo
 	 * the fault.
 	 */
-	fault = handle_mm_fault(vma, addr, flags, NULL);
+	fault = handle_mm_fault(vma, addr, flags, regs);
 	/*
 	 * If we need to retry but a fatal signal is pending, handle the
@@ -128,21 +128,7 @@ asmlinkage void do_page_fault(struct pt_regs *regs)
 		BUG();
 	}
-	/*
-	 * Major/minor page fault accounting is only done on the
-	 * initial attempt. If we go through a retry, it is extremely
-	 * likely that the page will be found in page cache at that point.
-	 */
 	if (flags & FAULT_FLAG_ALLOW_RETRY) {
-		if (fault & VM_FAULT_MAJOR) {
-			tsk->maj_flt++;
-			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ,
-				      1, regs, addr);
-		} else {
-			tsk->min_flt++;
-			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN,
-				      1, regs, addr);
-		}
 		if (fault & VM_FAULT_RETRY) {
 			flags |= FAULT_FLAG_TRIED;

From patchwork Tue Jun 30 20:45:48 2020
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 11634781
From: Peter Xu <peterx@redhat.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Andrew Morton, peterx@redhat.com, John Hubbard, Michael Ellerman,
 Gerald Schaefer, Andrea Arcangeli, Linus Torvalds, Will Deacon,
 Heiko Carstens, Vasily Gorbik, Christian Borntraeger, linux-s390@vger.kernel.org
Subject: [PATCH v4 18/26] mm/s390: Use general page fault accounting
Date: Tue, 30 Jun 2020 16:45:48 -0400
Message-Id: <20200630204548.39342-1-peterx@redhat.com>
Use the general page fault accounting by passing regs into
handle_mm_fault().  It naturally solves the problem of a page fault
being accounted more than once when it is retried.

CC: Heiko Carstens
CC: Vasily Gorbik
CC: Christian Borntraeger
CC: linux-s390@vger.kernel.org
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Gerald Schaefer
Acked-by: Gerald Schaefer
---
 arch/s390/mm/fault.c | 16 +---------------
 1 file changed, 1 insertion(+), 15 deletions(-)

diff --git a/arch/s390/mm/fault.c b/arch/s390/mm/fault.c
index fc14df0b4d6e..9aa201df2e94 100644
--- a/arch/s390/mm/fault.c
+++ b/arch/s390/mm/fault.c
@@ -478,7 +478,7 @@ static inline vm_fault_t do_exception(struct pt_regs *regs, int access)
 	 * make sure we exit gracefully rather than endlessly redo
 	 * the fault.
 	 */
-	fault = handle_mm_fault(vma, address, flags, NULL);
+	fault = handle_mm_fault(vma, address, flags, regs);
 	if (fault_signal_pending(fault, regs)) {
 		fault = VM_FAULT_SIGNAL;
 		if (flags & FAULT_FLAG_RETRY_NOWAIT)
@@ -488,21 +488,7 @@ static inline vm_fault_t do_exception(struct pt_regs *regs, int access)
 	if (unlikely(fault & VM_FAULT_ERROR))
 		goto out_up;
-	/*
-	 * Major/minor page fault accounting is only done on the
-	 * initial attempt. If we go through a retry, it is extremely
-	 * likely that the page will be found in page cache at that point.
-	 */
 	if (flags & FAULT_FLAG_ALLOW_RETRY) {
-		if (fault & VM_FAULT_MAJOR) {
-			tsk->maj_flt++;
-			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1,
-				      regs, address);
-		} else {
-			tsk->min_flt++;
-			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1,
-				      regs, address);
-		}
 		if (fault & VM_FAULT_RETRY) {
 			if (IS_ENABLED(CONFIG_PGSTE) && gmap &&
 			    (flags & FAULT_FLAG_RETRY_NOWAIT)) {

From patchwork Tue Jun 30 20:45:51 2020
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 11634783
From: Peter Xu <peterx@redhat.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Andrew Morton, peterx@redhat.com, John Hubbard, Michael Ellerman,
 Gerald Schaefer, Andrea Arcangeli, Linus Torvalds, Will Deacon,
 Yoshinori Sato, Rich Felker, linux-sh@vger.kernel.org
Subject: [PATCH v4 19/26] mm/sh: Use general page fault accounting
Date: Tue, 30 Jun 2020 16:45:51 -0400
Message-Id: <20200630204551.39391-1-peterx@redhat.com>

Use the general page fault accounting by passing regs into
handle_mm_fault().  It naturally solves the problem of a page fault
being accounted more than once when it is retried.

CC: Yoshinori Sato
CC: Rich Felker
CC: linux-sh@vger.kernel.org
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 arch/sh/mm/fault.c | 11 +----------
 1 file changed, 1 insertion(+), 10 deletions(-)

diff --git a/arch/sh/mm/fault.c b/arch/sh/mm/fault.c
index 3c0a11827f7e..482668a2f6d3 100644
--- a/arch/sh/mm/fault.c
+++ b/arch/sh/mm/fault.c
@@ -482,22 +482,13 @@ asmlinkage void __kprobes do_page_fault(struct pt_regs *regs,
 	 * make sure we exit gracefully rather than endlessly redo
 	 * the fault.
 	 */
-	fault = handle_mm_fault(vma, address, flags, NULL);
+	fault = handle_mm_fault(vma, address, flags, regs);
 	if (unlikely(fault & (VM_FAULT_RETRY | VM_FAULT_ERROR)))
 		if (mm_fault_error(regs, error_code, address, fault))
 			return;
 	if (flags & FAULT_FLAG_ALLOW_RETRY) {
-		if (fault & VM_FAULT_MAJOR) {
-			tsk->maj_flt++;
-			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1,
-				      regs, address);
-		} else {
-			tsk->min_flt++;
-			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1,
-				      regs, address);
-		}
 		if (fault & VM_FAULT_RETRY) {
 			flags |= FAULT_FLAG_TRIED;

From patchwork Tue Jun 30 20:45:53 2020
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 11634807
Miller" , sparclinux@vger.kernel.org Subject: [PATCH v4 20/26] mm/sparc32: Use general page fault accounting Date: Tue, 30 Jun 2020 16:45:53 -0400 Message-Id: <20200630204553.39442-1-peterx@redhat.com> X-Mailer: git-send-email 2.26.2 MIME-Version: 1.0 Authentication-Results: relay.mimecast.com; auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=peterx@redhat.com X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: redhat.com X-Rspamd-Queue-Id: 7489018038E60 X-Spamd-Result: default: False [0.00 / 100.00] X-Rspamd-Server: rspam01 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Use the general page fault accounting by passing regs into handle_mm_fault(). It naturally solve the issue of multiple page fault accounting when page fault retry happened. CC: David S. Miller CC: sparclinux@vger.kernel.org Signed-off-by: Peter Xu Acked-by: David S. Miller --- arch/sparc/mm/fault_32.c | 11 +---------- 1 file changed, 1 insertion(+), 10 deletions(-) diff --git a/arch/sparc/mm/fault_32.c b/arch/sparc/mm/fault_32.c index 06af03db4417..8071bfd72349 100644 --- a/arch/sparc/mm/fault_32.c +++ b/arch/sparc/mm/fault_32.c @@ -234,7 +234,7 @@ asmlinkage void do_sparc_fault(struct pt_regs *regs, int text_fault, int write, * make sure we exit gracefully rather than endlessly redo * the fault. */ - fault = handle_mm_fault(vma, address, flags, NULL); + fault = handle_mm_fault(vma, address, flags, regs); if (fault_signal_pending(fault, regs)) return; @@ -250,15 +250,6 @@ asmlinkage void do_sparc_fault(struct pt_regs *regs, int text_fault, int write, } if (flags & FAULT_FLAG_ALLOW_RETRY) { - if (fault & VM_FAULT_MAJOR) { - current->maj_flt++; - perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, - 1, regs, address); - } else { - current->min_flt++; - perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, - 1, regs, address); - } if (fault & VM_FAULT_RETRY) { flags |= FAULT_FLAG_TRIED; From patchwork Tue Jun 30 20:45:56 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Xu X-Patchwork-Id: 11634785 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 9CE7B739 for ; Tue, 30 Jun 2020 20:46:08 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 6922B2077D for ; Tue, 30 Jun 2020 20:46:08 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="gg0gPidi" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 6922B2077D Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=redhat.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 692FF6B006E; Tue, 30 Jun 2020 16:46:07 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 669176B0071; Tue, 30 Jun 2020 16:46:07 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 582126B0072; Tue, 30 Jun 2020 16:46:07 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0090.hostedemail.com 
, Michael Ellerman , Gerald Schaefer , Andrea Arcangeli , Linus Torvalds , Will Deacon , "David S . Miller" , sparclinux@vger.kernel.org Subject: [PATCH v4 21/26] mm/sparc64: Use general page fault accounting Date: Tue, 30 Jun 2020 16:45:56 -0400 Message-Id: <20200630204556.39491-1-peterx@redhat.com> X-Mailer: git-send-email 2.26.2 MIME-Version: 1.0 Use the general page fault accounting by passing regs into handle_mm_fault(). It naturally solves the issue of a page fault being accounted multiple times when the fault is retried. CC: David S. Miller CC: sparclinux@vger.kernel.org Signed-off-by: Peter Xu Acked-by: David S. Miller --- arch/sparc/mm/fault_64.c | 11 +---------- 1 file changed, 1 insertion(+), 10 deletions(-) diff --git a/arch/sparc/mm/fault_64.c b/arch/sparc/mm/fault_64.c index 9ebee14ee893..0a6bcc85fba7 100644 --- a/arch/sparc/mm/fault_64.c +++ b/arch/sparc/mm/fault_64.c @@ -422,7 +422,7 @@ asmlinkage void __kprobes do_sparc64_fault(struct pt_regs *regs) goto bad_area; } - fault = handle_mm_fault(vma, address, flags, NULL); + fault = handle_mm_fault(vma, address, flags, regs); if (fault_signal_pending(fault, regs)) goto exit_exception; @@ -438,15 +438,6 @@ asmlinkage void __kprobes do_sparc64_fault(struct pt_regs *regs) } if (flags & FAULT_FLAG_ALLOW_RETRY) { - if (fault & VM_FAULT_MAJOR) { - current->maj_flt++; - perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, - 1, regs, address); - } else { - current->min_flt++; - perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, - 1, regs, address); - } if (fault & VM_FAULT_RETRY) { flags |= FAULT_FLAG_TRIED; From patchwork Tue Jun 30 20:45:59 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Xu X-Patchwork-Id: 11634789
From: Peter Xu To: linux-mm@kvack.org, linux-kernel@vger.kernel.org Cc: Andrew
Morton , peterx@redhat.com, John Hubbard , Michael Ellerman , Gerald Schaefer , Andrea Arcangeli , Linus Torvalds , Will Deacon , Guan Xuetao Subject: [PATCH v4 22/26] mm/unicore32: Use general page fault accounting Date: Tue, 30 Jun 2020 16:45:59 -0400 Message-Id: <20200630204559.39539-1-peterx@redhat.com> X-Mailer: git-send-email 2.26.2 MIME-Version: 1.0 Use the general page fault accounting by passing regs into handle_mm_fault(). It naturally solves the issue of a page fault being accounted multiple times when the fault is retried. Add the missing PERF_COUNT_SW_PAGE_FAULTS perf event too. Note, the other two perf events (PERF_COUNT_SW_PAGE_FAULTS_[MAJ|MIN]) were done in handle_mm_fault(). CC: Guan Xuetao Signed-off-by: Peter Xu --- arch/unicore32/mm/fault.c | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/arch/unicore32/mm/fault.c b/arch/unicore32/mm/fault.c index 9b4539d8d669..69bf99bcd8fd 100644 --- a/arch/unicore32/mm/fault.c +++ b/arch/unicore32/mm/fault.c @@ -16,6 +16,7 @@ #include #include #include +#include #include @@ -159,7 +160,8 @@ static inline bool access_error(unsigned int fsr, struct vm_area_struct *vma) } static vm_fault_t __do_pf(struct mm_struct *mm, unsigned long addr, - unsigned int fsr, unsigned int flags, struct task_struct *tsk) + unsigned int fsr, unsigned int flags, + struct task_struct *tsk, struct pt_regs *regs) { struct vm_area_struct *vma; vm_fault_t fault; @@ -185,7 +187,7 @@ static vm_fault_t __do_pf(struct mm_struct *mm, unsigned long addr, * If for any reason at all we couldn't handle the fault, make * sure we exit gracefully rather than endlessly redo the fault. */ - fault = handle_mm_fault(vma, addr & PAGE_MASK, flags, NULL); + fault = handle_mm_fault(vma, addr & PAGE_MASK, flags, regs); return fault; check_stack: @@ -218,6 +220,8 @@ static int do_pf(unsigned long addr, unsigned int fsr, struct pt_regs *regs) if (!(fsr ^ 0x12)) flags |= FAULT_FLAG_WRITE; + perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, addr); + /* * As per x86, we may deadlock here. However, since the kernel only * validly references user space from well defined areas of the code, @@ -243,7 +247,7 @@ static int do_pf(unsigned long addr, unsigned int fsr, struct pt_regs *regs) #endif } - fault = __do_pf(mm, addr, fsr, flags, tsk); + fault = __do_pf(mm, addr, fsr, flags, tsk, regs); /* If we need to retry but a fatal signal is pending, handle the * signal first.
We do not need to release the mmap_lock because @@ -253,10 +257,6 @@ static int do_pf(unsigned long addr, unsigned int fsr, struct pt_regs *regs) return 0; if (!(fault & VM_FAULT_ERROR) && (flags & FAULT_FLAG_ALLOW_RETRY)) { - if (fault & VM_FAULT_MAJOR) - tsk->maj_flt++; - else - tsk->min_flt++; if (fault & VM_FAULT_RETRY) { flags |= FAULT_FLAG_TRIED; goto retry; From patchwork Tue Jun 30 20:46:01 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Xu X-Patchwork-Id: 11634787
From: Peter Xu To: linux-mm@kvack.org, linux-kernel@vger.kernel.org Cc: Andrew Morton , peterx@redhat.com, John Hubbard , Michael Ellerman , Gerald Schaefer , Andrea Arcangeli , Linus Torvalds , Will Deacon , Dave Hansen , Andy Lutomirski , Peter Zijlstra , Thomas Gleixner , Ingo Molnar , Borislav Petkov , x86@kernel.org, "H . Peter Anvin" Subject: [PATCH v4 23/26] mm/x86: Use general page fault accounting Date: Tue, 30 Jun 2020 16:46:01 -0400 Message-Id: <20200630204601.39591-1-peterx@redhat.com> X-Mailer: git-send-email 2.26.2 MIME-Version: 1.0 Use the general page fault accounting by passing regs into handle_mm_fault(). CC: Dave Hansen CC: Andy Lutomirski CC: Peter Zijlstra CC: Thomas Gleixner CC: Ingo Molnar CC: Borislav Petkov CC: x86@kernel.org CC: H.
Peter Anvin Signed-off-by: Peter Xu --- arch/x86/mm/fault.c | 17 ++--------------- 1 file changed, 2 insertions(+), 15 deletions(-) diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c index fe3ca00eb121..9ac80bb87781 100644 --- a/arch/x86/mm/fault.c +++ b/arch/x86/mm/fault.c @@ -1140,7 +1140,7 @@ void do_user_addr_fault(struct pt_regs *regs, struct vm_area_struct *vma; struct task_struct *tsk; struct mm_struct *mm; - vm_fault_t fault, major = 0; + vm_fault_t fault; unsigned int flags = FAULT_FLAG_DEFAULT; tsk = current; @@ -1292,8 +1292,7 @@ void do_user_addr_fault(struct pt_regs *regs, * userland). The return to userland is identified whenever * FAULT_FLAG_USER|FAULT_FLAG_KILLABLE are both set in flags. */ - fault = handle_mm_fault(vma, address, flags, NULL); - major |= fault & VM_FAULT_MAJOR; + fault = handle_mm_fault(vma, address, flags, regs); /* Quick path to respond to signals */ if (fault_signal_pending(fault, regs)) { @@ -1320,18 +1319,6 @@ void do_user_addr_fault(struct pt_regs *regs, return; } - /* - * Major/minor page fault accounting. If any of the events - * returned VM_FAULT_MAJOR, we account it as a major fault. - */ - if (major) { - tsk->maj_flt++; - perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1, regs, address); - } else { - tsk->min_flt++; - perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1, regs, address); - } - check_v8086_mode(regs, address, tsk); } NOKPROBE_SYMBOL(do_user_addr_fault); From patchwork Tue Jun 30 20:46:04 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Xu X-Patchwork-Id: 11634791
From: Peter Xu To: linux-mm@kvack.org, linux-kernel@vger.kernel.org Cc: Andrew Morton , peterx@redhat.com, John Hubbard , Michael Ellerman , Gerald Schaefer , Andrea Arcangeli , Linus Torvalds , Will Deacon , Chris Zankel , Max Filippov , linux-xtensa@linux-xtensa.org Subject: [PATCH v4 24/26] mm/xtensa: Use general page fault accounting Date: Tue, 30 Jun 2020 16:46:04 -0400 Message-Id: <20200630204604.39640-1-peterx@redhat.com> X-Mailer: git-send-email 2.26.2 MIME-Version: 1.0
Use the general page fault accounting by passing regs into handle_mm_fault(). It naturally solves the issue of a page fault being accounted multiple times when the fault is retried. Remove the PERF_COUNT_SW_PAGE_FAULTS_[MAJ|MIN] perf events because this is now also done in handle_mm_fault(). Move the PERF_COUNT_SW_PAGE_FAULTS event higher, before taking mmap_sem for the fault, so that it matches the rest of the archs. CC: Chris Zankel CC: Max Filippov CC: linux-xtensa@linux-xtensa.org Acked-by: Max Filippov Signed-off-by: Peter Xu --- arch/xtensa/mm/fault.c | 15 ++++----------- 1 file changed, 4 insertions(+), 11 deletions(-) diff --git a/arch/xtensa/mm/fault.c b/arch/xtensa/mm/fault.c index 6942de45f078..a05b53a22810 100644 --- a/arch/xtensa/mm/fault.c +++ b/arch/xtensa/mm/fault.c @@ -73,6 +73,9 @@ void do_page_fault(struct pt_regs *regs) if (user_mode(regs)) flags |= FAULT_FLAG_USER; + + perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address); + retry: mmap_read_lock(mm); vma = find_vma(mm, address); @@ -108,7 +111,7 @@ void do_page_fault(struct pt_regs *regs) * make sure we exit gracefully rather than endlessly redo * the fault. */ - fault = handle_mm_fault(vma, address, flags, NULL); + fault = handle_mm_fault(vma, address, flags, regs); if (fault_signal_pending(fault, regs)) return; @@ -123,10 +126,6 @@ void do_page_fault(struct pt_regs *regs) BUG(); } if (flags & FAULT_FLAG_ALLOW_RETRY) { - if (fault & VM_FAULT_MAJOR) - current->maj_flt++; - else - current->min_flt++; if (fault & VM_FAULT_RETRY) { flags |= FAULT_FLAG_TRIED; @@ -140,12 +139,6 @@ void do_page_fault(struct pt_regs *regs) } mmap_read_unlock(mm); - perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address); - if (flags & VM_FAULT_MAJOR) - perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1, regs, address); - else - perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1, regs, address); - return; /* Something tried to access memory that isn't in our memory map..
From patchwork Tue Jun 30 20:46:07 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Xu X-Patchwork-Id: 11634793
From: Peter Xu To: linux-mm@kvack.org, linux-kernel@vger.kernel.org Cc: Andrew Morton , peterx@redhat.com, John Hubbard , Michael Ellerman , Gerald Schaefer , Andrea Arcangeli , Linus Torvalds , Will Deacon Subject: [PATCH v4 25/26] mm: Clean up the last pieces of page fault accountings Date: Tue, 30 Jun 2020 16:46:07 -0400 Message-Id: <20200630204607.39688-1-peterx@redhat.com> X-Mailer: git-send-email 2.26.2 MIME-Version: 1.0 Here're the last pieces of page fault accounting that were still done outside handle_mm_fault(), where we still have regs==NULL when calling handle_mm_fault(): arch/powerpc/mm/copro_fault.c: copro_handle_mm_fault arch/sparc/mm/fault_32.c: force_user_fault arch/um/kernel/trap.c: handle_page_fault mm/gup.c: faultin_page fixup_user_fault mm/hmm.c: hmm_vma_fault mm/ksm.c: break_ksm Some of them have the issue of duplicated accounting for page fault retries. Some of them didn't do the accounting at all. This patch cleans all these up by letting handle_mm_fault() do the per-task page fault accounting even if regs==NULL (though we'll still skip the perf event accounting). With that, we can safely remove all the outliers now. There's another functional change in that we now account the page faults to the caller of gup, rather than to the task_struct that was passed into the gup code. More information on this can be found at [1].
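For context, the per-task counters being consolidated here are the same ones user space reads back through getrusage() (ru_minflt/ru_majflt) and /proc/<pid>/stat. A minimal userspace sketch (not part of this series) that triggers minor faults and prints the per-task deltas could look like this:

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/resource.h>

int main(void)
{
	const size_t len = 64 << 20;	/* 64 MiB of anonymous memory */
	struct rusage before, after;
	char *buf;

	buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED)
		return 1;

	getrusage(RUSAGE_SELF, &before);
	memset(buf, 0x5a, len);		/* roughly one minor fault per touched page */
	getrusage(RUSAGE_SELF, &after);

	printf("minor: %ld major: %ld\n",
	       after.ru_minflt - before.ru_minflt,
	       after.ru_majflt - before.ru_majflt);
	munmap(buf, len);
	return 0;
}

Counting each fault exactly once, regardless of how many times it is retried, is what the mm_account_fault() consolidation below guarantees.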
After this patch, below things should never be touched again outside handle_mm_fault(): - task_struct.[maj|min]_flt - PERF_COUNT_SW_PAGE_FAULTS_[MAJ|MIN] [1] https://lore.kernel.org/lkml/CAHk-=wj_V2Tps2QrMn20_W0OJF9xqNh52XSGA42s-ZJ8Y+GyKw@mail.gmail.com/ Signed-off-by: Peter Xu --- arch/powerpc/mm/copro_fault.c | 5 ----- arch/um/kernel/trap.c | 4 ---- mm/gup.c | 13 ------------- mm/memory.c | 19 ++++++++++++------- 4 files changed, 12 insertions(+), 29 deletions(-) diff --git a/arch/powerpc/mm/copro_fault.c b/arch/powerpc/mm/copro_fault.c index 2d0276abe0a6..8acd00178956 100644 --- a/arch/powerpc/mm/copro_fault.c +++ b/arch/powerpc/mm/copro_fault.c @@ -76,11 +76,6 @@ int copro_handle_mm_fault(struct mm_struct *mm, unsigned long ea, BUG(); } - if (*flt & VM_FAULT_MAJOR) - current->maj_flt++; - else - current->min_flt++; - out_unlock: mmap_read_unlock(mm); return ret; diff --git a/arch/um/kernel/trap.c b/arch/um/kernel/trap.c index 8d9870d76da1..ad12f78bda7e 100644 --- a/arch/um/kernel/trap.c +++ b/arch/um/kernel/trap.c @@ -88,10 +88,6 @@ int handle_page_fault(unsigned long address, unsigned long ip, BUG(); } if (flags & FAULT_FLAG_ALLOW_RETRY) { - if (fault & VM_FAULT_MAJOR) - current->maj_flt++; - else - current->min_flt++; if (fault & VM_FAULT_RETRY) { flags |= FAULT_FLAG_TRIED; diff --git a/mm/gup.c b/mm/gup.c index 53ad15629014..89b18c407ad2 100644 --- a/mm/gup.c +++ b/mm/gup.c @@ -893,13 +893,6 @@ static int faultin_page(struct task_struct *tsk, struct vm_area_struct *vma, BUG(); } - if (tsk) { - if (ret & VM_FAULT_MAJOR) - tsk->maj_flt++; - else - tsk->min_flt++; - } - if (ret & VM_FAULT_RETRY) { if (locked && !(fault_flags & FAULT_FLAG_RETRY_NOWAIT)) *locked = 0; @@ -1255,12 +1248,6 @@ int fixup_user_fault(struct task_struct *tsk, struct mm_struct *mm, goto retry; } - if (tsk) { - if (major) - tsk->maj_flt++; - else - tsk->min_flt++; - } return 0; } EXPORT_SYMBOL_GPL(fixup_user_fault); diff --git a/mm/memory.c b/mm/memory.c index e594d5cdcaa0..00d96ae9464c 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -4365,6 +4365,8 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma, /** * mm_account_fault - Do page fault accountings * @regs: the pt_regs struct pointer. When set to NULL, will skip accounting + * of perf event counters, but we'll still do the per-task accounting to + * the task who triggered this page fault. * @address: faulted address. * @major: whether this is a major fault. * @@ -4380,16 +4382,18 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma, static inline void mm_account_fault(struct pt_regs *regs, unsigned long address, bool major) { + if (major) + current->maj_flt++; + else + current->min_flt++; + if (!regs) return; - if (major) { - current->maj_flt++; + if (major) perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1, regs, address); - } else { - current->min_flt++; + else perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1, regs, address); - } } /* @@ -4462,8 +4466,9 @@ vm_fault_t handle_mm_fault(struct vm_area_struct *vma, unsigned long address, * fault is VM_FAULT_MAJOR, or if it was a retry (which implies that * we couldn't handle it immediately previously). * - * - If the fault is done for GUP, regs will be NULL and no accounting - * will be done. + * - If the fault is done for GUP, regs will be NULL and we only do + * the accounting for the per thread fault counters who triggered + * the fault, and we skip the perf event updates. 
*/ mm_account_fault(regs, address, (ret & VM_FAULT_MAJOR) || (flags & FAULT_FLAG_TRIED)); From patchwork Tue Jun 30 20:46:09 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Xu X-Patchwork-Id: 11634795
From: Peter Xu To: linux-mm@kvack.org, linux-kernel@vger.kernel.org Cc: Andrew Morton , peterx@redhat.com, John Hubbard , Michael Ellerman , Gerald Schaefer , Andrea Arcangeli , Linus Torvalds , Will Deacon Subject: [PATCH v4 26/26] mm/gup: Remove task_struct pointer for all gup code Date: Tue, 30 Jun 2020 16:46:09 -0400 Message-Id: <20200630204609.39736-1-peterx@redhat.com> X-Mailer: git-send-email 2.26.2 MIME-Version: 1.0 After the cleanup of page fault accounting, gup does not need to pass task_struct around any more. Remove that parameter in the whole gup stack.
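To illustrate the resulting calling convention, here is a hypothetical driver-style helper (not code from this patch): a remote pin now takes only the target mm, and any fault taken while resolving it is charged to current, the caller, per the previous patch.

#include <linux/mm.h>
#include <linux/sched/mm.h>

/* Hypothetical helper: pin one user page of @mm at @uaddr for writing. */
static int demo_pin_one_page(struct mm_struct *mm, unsigned long uaddr,
			     struct page **page)
{
	long ret;

	mmap_read_lock(mm);
	/* Before this series: pin_user_pages_remote(NULL, mm, uaddr, 1, ...) */
	ret = pin_user_pages_remote(mm, uaddr, 1, FOLL_WRITE,
				    page, NULL, NULL);
	mmap_read_unlock(mm);

	if (ret != 1)
		return ret < 0 ? ret : -EFAULT;

	/* ... use *page ..., then drop the pin */
	unpin_user_page(*page);
	return 0;
}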
Signed-off-by: Peter Xu --- arch/arc/kernel/process.c | 2 +- arch/s390/kvm/interrupt.c | 2 +- arch/s390/kvm/kvm-s390.c | 2 +- arch/s390/kvm/priv.c | 8 +- arch/s390/mm/gmap.c | 4 +- drivers/gpu/drm/i915/gem/i915_gem_userptr.c | 2 +- drivers/infiniband/core/umem_odp.c | 2 +- drivers/vfio/vfio_iommu_type1.c | 4 +- fs/exec.c | 2 +- include/linux/mm.h | 9 +- kernel/events/uprobes.c | 6 +- kernel/futex.c | 2 +- mm/gup.c | 92 +++++++++------------ mm/memory.c | 2 +- mm/process_vm_access.c | 2 +- security/tomoyo/domain.c | 2 +- virt/kvm/async_pf.c | 2 +- virt/kvm/kvm_main.c | 2 +- 18 files changed, 65 insertions(+), 82 deletions(-) diff --git a/arch/arc/kernel/process.c b/arch/arc/kernel/process.c index 8c8e5172fecd..1ef6b78ff9c7 100644 --- a/arch/arc/kernel/process.c +++ b/arch/arc/kernel/process.c @@ -91,7 +91,7 @@ SYSCALL_DEFINE3(arc_usr_cmpxchg, int *, uaddr, int, expected, int, new) goto fail; mmap_read_lock(current->mm); - ret = fixup_user_fault(current, current->mm, (unsigned long) uaddr, + ret = fixup_user_fault(current->mm, (unsigned long) uaddr, FAULT_FLAG_WRITE, NULL); mmap_read_unlock(current->mm); diff --git a/arch/s390/kvm/interrupt.c b/arch/s390/kvm/interrupt.c index 1608fd99bbee..2f177298c663 100644 --- a/arch/s390/kvm/interrupt.c +++ b/arch/s390/kvm/interrupt.c @@ -2768,7 +2768,7 @@ static struct page *get_map_page(struct kvm *kvm, u64 uaddr) struct page *page = NULL; mmap_read_lock(kvm->mm); - get_user_pages_remote(NULL, kvm->mm, uaddr, 1, FOLL_WRITE, + get_user_pages_remote(kvm->mm, uaddr, 1, FOLL_WRITE, &page, NULL, NULL); mmap_read_unlock(kvm->mm); return page; diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c index 08e6cf6cb454..f78921bc11b3 100644 --- a/arch/s390/kvm/kvm-s390.c +++ b/arch/s390/kvm/kvm-s390.c @@ -1892,7 +1892,7 @@ static long kvm_s390_set_skeys(struct kvm *kvm, struct kvm_s390_skeys *args) r = set_guest_storage_key(current->mm, hva, keys[i], 0); if (r) { - r = fixup_user_fault(current, current->mm, hva, + r = fixup_user_fault(current->mm, hva, FAULT_FLAG_WRITE, &unlocked); if (r) break; diff --git a/arch/s390/kvm/priv.c b/arch/s390/kvm/priv.c index 96ae368aa0a2..0fd94e86a28d 100644 --- a/arch/s390/kvm/priv.c +++ b/arch/s390/kvm/priv.c @@ -274,7 +274,7 @@ static int handle_iske(struct kvm_vcpu *vcpu) rc = get_guest_storage_key(current->mm, vmaddr, &key); if (rc) { - rc = fixup_user_fault(current, current->mm, vmaddr, + rc = fixup_user_fault(current->mm, vmaddr, FAULT_FLAG_WRITE, &unlocked); if (!rc) { mmap_read_unlock(current->mm); @@ -320,7 +320,7 @@ static int handle_rrbe(struct kvm_vcpu *vcpu) mmap_read_lock(current->mm); rc = reset_guest_reference_bit(current->mm, vmaddr); if (rc < 0) { - rc = fixup_user_fault(current, current->mm, vmaddr, + rc = fixup_user_fault(current->mm, vmaddr, FAULT_FLAG_WRITE, &unlocked); if (!rc) { mmap_read_unlock(current->mm); @@ -391,7 +391,7 @@ static int handle_sske(struct kvm_vcpu *vcpu) m3 & SSKE_MC); if (rc < 0) { - rc = fixup_user_fault(current, current->mm, vmaddr, + rc = fixup_user_fault(current->mm, vmaddr, FAULT_FLAG_WRITE, &unlocked); rc = !rc ? -EAGAIN : rc; } @@ -1095,7 +1095,7 @@ static int handle_pfmf(struct kvm_vcpu *vcpu) rc = cond_set_guest_storage_key(current->mm, vmaddr, key, NULL, nq, mr, mc); if (rc < 0) { - rc = fixup_user_fault(current, current->mm, vmaddr, + rc = fixup_user_fault(current->mm, vmaddr, FAULT_FLAG_WRITE, &unlocked); rc = !rc ? 
-EAGAIN : rc; } diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c index 190357ff86b3..8747487c50a8 100644 --- a/arch/s390/mm/gmap.c +++ b/arch/s390/mm/gmap.c @@ -649,7 +649,7 @@ int gmap_fault(struct gmap *gmap, unsigned long gaddr, rc = vmaddr; goto out_up; } - if (fixup_user_fault(current, gmap->mm, vmaddr, fault_flags, + if (fixup_user_fault(gmap->mm, vmaddr, fault_flags, &unlocked)) { rc = -EFAULT; goto out_up; @@ -879,7 +879,7 @@ static int gmap_pte_op_fixup(struct gmap *gmap, unsigned long gaddr, BUG_ON(gmap_is_shadow(gmap)); fault_flags = (prot == PROT_WRITE) ? FAULT_FLAG_WRITE : 0; - if (fixup_user_fault(current, mm, vmaddr, fault_flags, &unlocked)) + if (fixup_user_fault(mm, vmaddr, fault_flags, &unlocked)) return -EFAULT; if (unlocked) /* lost mmap_lock, caller has to retry __gmap_translate */ diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c index 9c53eb883400..4ce66baaa17f 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c @@ -472,7 +472,7 @@ __i915_gem_userptr_get_pages_worker(struct work_struct *_work) locked = 1; } ret = pin_user_pages_remote - (work->task, mm, + (mm, obj->userptr.ptr + pinned * PAGE_SIZE, npages - pinned, flags, diff --git a/drivers/infiniband/core/umem_odp.c b/drivers/infiniband/core/umem_odp.c index 5e32f61a2fe4..cc6b4befde7c 100644 --- a/drivers/infiniband/core/umem_odp.c +++ b/drivers/infiniband/core/umem_odp.c @@ -439,7 +439,7 @@ int ib_umem_odp_map_dma_pages(struct ib_umem_odp *umem_odp, u64 user_virt, * complex (and doesn't gain us much performance in most use * cases). */ - npages = get_user_pages_remote(owning_process, owning_mm, + npages = get_user_pages_remote(owning_mm, user_virt, gup_num_pages, flags, local_page_list, NULL, NULL); mmap_read_unlock(owning_mm); diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c index 5e556ac9102a..9d41105bfd01 100644 --- a/drivers/vfio/vfio_iommu_type1.c +++ b/drivers/vfio/vfio_iommu_type1.c @@ -425,7 +425,7 @@ static int follow_fault_pfn(struct vm_area_struct *vma, struct mm_struct *mm, if (ret) { bool unlocked = false; - ret = fixup_user_fault(NULL, mm, vaddr, + ret = fixup_user_fault(mm, vaddr, FAULT_FLAG_REMOTE | (write_fault ? FAULT_FLAG_WRITE : 0), &unlocked); @@ -453,7 +453,7 @@ static int vaddr_get_pfn(struct mm_struct *mm, unsigned long vaddr, flags |= FOLL_WRITE; mmap_read_lock(mm); - ret = pin_user_pages_remote(NULL, mm, vaddr, 1, flags | FOLL_LONGTERM, + ret = pin_user_pages_remote(mm, vaddr, 1, flags | FOLL_LONGTERM, page, NULL, NULL); if (ret == 1) { *pfn = page_to_pfn(page[0]); diff --git a/fs/exec.c b/fs/exec.c index 7b7cbb180785..3cf806de5710 100644 --- a/fs/exec.c +++ b/fs/exec.c @@ -217,7 +217,7 @@ static struct page *get_arg_page(struct linux_binprm *bprm, unsigned long pos, * We are doing an exec(). 'current' is the process * doing the exec and bprm->mm is the new process's mm. 
*/ - ret = get_user_pages_remote(current, bprm->mm, pos, 1, gup_flags, + ret = get_user_pages_remote(bprm->mm, pos, 1, gup_flags, &page, NULL, NULL); if (ret <= 0) return NULL; diff --git a/include/linux/mm.h b/include/linux/mm.h index ebc173dddad5..6da813301497 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -1654,7 +1654,7 @@ int invalidate_inode_page(struct page *page); extern vm_fault_t handle_mm_fault(struct vm_area_struct *vma, unsigned long address, unsigned int flags, struct pt_regs *regs); -extern int fixup_user_fault(struct task_struct *tsk, struct mm_struct *mm, +extern int fixup_user_fault(struct mm_struct *mm, unsigned long address, unsigned int fault_flags, bool *unlocked); void unmap_mapping_pages(struct address_space *mapping, @@ -1670,8 +1670,7 @@ static inline vm_fault_t handle_mm_fault(struct vm_area_struct *vma, BUG(); return VM_FAULT_SIGBUS; } -static inline int fixup_user_fault(struct task_struct *tsk, - struct mm_struct *mm, unsigned long address, +static inline int fixup_user_fault(struct mm_struct *mm, unsigned long address, unsigned int fault_flags, bool *unlocked) { /* should never happen if there's no MMU */ @@ -1697,11 +1696,11 @@ extern int access_remote_vm(struct mm_struct *mm, unsigned long addr, extern int __access_remote_vm(struct task_struct *tsk, struct mm_struct *mm, unsigned long addr, void *buf, int len, unsigned int gup_flags); -long get_user_pages_remote(struct task_struct *tsk, struct mm_struct *mm, +long get_user_pages_remote(struct mm_struct *mm, unsigned long start, unsigned long nr_pages, unsigned int gup_flags, struct page **pages, struct vm_area_struct **vmas, int *locked); -long pin_user_pages_remote(struct task_struct *tsk, struct mm_struct *mm, +long pin_user_pages_remote(struct mm_struct *mm, unsigned long start, unsigned long nr_pages, unsigned int gup_flags, struct page **pages, struct vm_area_struct **vmas, int *locked); diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c index e84eb52b646b..f500204eb70d 100644 --- a/kernel/events/uprobes.c +++ b/kernel/events/uprobes.c @@ -376,7 +376,7 @@ __update_ref_ctr(struct mm_struct *mm, unsigned long vaddr, short d) if (!vaddr || !d) return -EINVAL; - ret = get_user_pages_remote(NULL, mm, vaddr, 1, + ret = get_user_pages_remote(mm, vaddr, 1, FOLL_WRITE, &page, &vma, NULL); if (unlikely(ret <= 0)) { /* @@ -477,7 +477,7 @@ int uprobe_write_opcode(struct arch_uprobe *auprobe, struct mm_struct *mm, if (is_register) gup_flags |= FOLL_SPLIT_PMD; /* Read the page with vaddr into memory */ - ret = get_user_pages_remote(NULL, mm, vaddr, 1, gup_flags, + ret = get_user_pages_remote(mm, vaddr, 1, gup_flags, &old_page, &vma, NULL); if (ret <= 0) return ret; @@ -2029,7 +2029,7 @@ static int is_trap_at_addr(struct mm_struct *mm, unsigned long vaddr) * but we treat this as a 'remote' access since it is * essentially a kernel access to the memory. 
 	 */
-	result = get_user_pages_remote(NULL, mm, vaddr, 1, FOLL_FORCE, &page,
+	result = get_user_pages_remote(mm, vaddr, 1, FOLL_FORCE, &page,
 			NULL, NULL);
 	if (result < 0)
 		return result;
diff --git a/kernel/futex.c b/kernel/futex.c
index 05e88562de68..d024fcef62e8 100644
--- a/kernel/futex.c
+++ b/kernel/futex.c
@@ -699,7 +699,7 @@ static int fault_in_user_writeable(u32 __user *uaddr)
 	int ret;
 
 	mmap_read_lock(mm);
-	ret = fixup_user_fault(current, mm, (unsigned long)uaddr,
+	ret = fixup_user_fault(mm, (unsigned long)uaddr,
 			       FAULT_FLAG_WRITE, NULL);
 	mmap_read_unlock(mm);
 
diff --git a/mm/gup.c b/mm/gup.c
index 89b18c407ad2..8ddc48022d74 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -859,7 +859,7 @@ static int get_gate_page(struct mm_struct *mm, unsigned long address,
  * does not include FOLL_NOWAIT, the mmap_lock may be released. If it
  * is, *@locked will be set to 0 and -EBUSY returned.
  */
-static int faultin_page(struct task_struct *tsk, struct vm_area_struct *vma,
+static int faultin_page(struct vm_area_struct *vma,
 		unsigned long address, unsigned int *flags, int *locked)
 {
 	unsigned int fault_flags = 0;
@@ -962,7 +962,6 @@ static int check_vma_flags(struct vm_area_struct *vma, unsigned long gup_flags)
 
 /**
  * __get_user_pages() - pin user pages in memory
- * @tsk:	task_struct of target task
  * @mm:		mm_struct of target mm
  * @start:	starting user address
  * @nr_pages:	number of pages from start to pin
@@ -1021,7 +1020,7 @@ static int check_vma_flags(struct vm_area_struct *vma, unsigned long gup_flags)
  * instead of __get_user_pages. __get_user_pages should be used only if
  * you need some special @gup_flags.
  */
-static long __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
+static long __get_user_pages(struct mm_struct *mm,
 		unsigned long start, unsigned long nr_pages,
 		unsigned int gup_flags, struct page **pages,
 		struct vm_area_struct **vmas, int *locked)
@@ -1103,8 +1102,7 @@ static long __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
 
 		page = follow_page_mask(vma, start, foll_flags, &ctx);
 		if (!page) {
-			ret = faultin_page(tsk, vma, start, &foll_flags,
-					   locked);
+			ret = faultin_page(vma, start, &foll_flags, locked);
 			switch (ret) {
 			case 0:
 				goto retry;
@@ -1178,8 +1176,6 @@ static bool vma_permits_fault(struct vm_area_struct *vma,
 
 /**
  * fixup_user_fault() - manually resolve a user page fault
- * @tsk:	the task_struct to use for page fault accounting, or
- *		NULL if faults are not to be recorded.
  * @mm:		mm_struct of target mm
  * @address:	user address
  * @fault_flags:flags to pass down to handle_mm_fault()
@@ -1207,7 +1203,7 @@ static bool vma_permits_fault(struct vm_area_struct *vma,
  * This function will not return with an unlocked mmap_lock. So it has not the
  * same semantics wrt the @mm->mmap_lock as does filemap_fault().
  */
-int fixup_user_fault(struct task_struct *tsk, struct mm_struct *mm,
+int fixup_user_fault(struct mm_struct *mm,
 		     unsigned long address, unsigned int fault_flags,
 		     bool *unlocked)
 {
@@ -1256,8 +1252,7 @@ EXPORT_SYMBOL_GPL(fixup_user_fault);
  * Please note that this function, unlike __get_user_pages will not
  * return 0 for nr_pages > 0 without FOLL_NOWAIT
  */
-static __always_inline long __get_user_pages_locked(struct task_struct *tsk,
-						struct mm_struct *mm,
+static __always_inline long __get_user_pages_locked(struct mm_struct *mm,
 						unsigned long start,
 						unsigned long nr_pages,
 						struct page **pages,
@@ -1290,7 +1285,7 @@ static __always_inline long __get_user_pages_locked(struct task_struct *tsk,
 	pages_done = 0;
 	lock_dropped = false;
 	for (;;) {
-		ret = __get_user_pages(tsk, mm, start, nr_pages, flags, pages,
+		ret = __get_user_pages(mm, start, nr_pages, flags, pages,
 				       vmas, locked);
 		if (!locked)
 			/* VM_FAULT_RETRY couldn't trigger, bypass */
@@ -1350,7 +1345,7 @@ static __always_inline long __get_user_pages_locked(struct task_struct *tsk,
 		}
 
 		*locked = 1;
-		ret = __get_user_pages(tsk, mm, start, 1, flags | FOLL_TRIED,
+		ret = __get_user_pages(mm, start, 1, flags | FOLL_TRIED,
 				       pages, NULL, locked);
 		if (!*locked) {
 			/* Continue to retry until we succeeded */
@@ -1436,7 +1431,7 @@ long populate_vma_page_range(struct vm_area_struct *vma,
 	 * We made sure addr is within a VMA, so the following will
 	 * not result in a stack expansion that recurses back here.
 	 */
-	return __get_user_pages(current, mm, start, nr_pages, gup_flags,
+	return __get_user_pages(mm, start, nr_pages, gup_flags,
 				NULL, NULL, locked);
 }
 
@@ -1520,7 +1515,7 @@ struct page *get_dump_page(unsigned long addr)
 	struct vm_area_struct *vma;
 	struct page *page;
 
-	if (__get_user_pages(current, current->mm, addr, 1,
+	if (__get_user_pages(current->mm, addr, 1,
 			     FOLL_FORCE | FOLL_DUMP | FOLL_GET, &page, &vma,
 			     NULL) < 1)
 		return NULL;
@@ -1529,8 +1524,7 @@ struct page *get_dump_page(unsigned long addr)
 }
 #endif /* CONFIG_ELF_CORE */
 #else /* CONFIG_MMU */
-static long __get_user_pages_locked(struct task_struct *tsk,
-		struct mm_struct *mm, unsigned long start,
+static long __get_user_pages_locked(struct mm_struct *mm, unsigned long start,
 		unsigned long nr_pages, struct page **pages,
 		struct vm_area_struct **vmas, int *locked,
 		unsigned int foll_flags)
@@ -1606,8 +1600,7 @@ static struct page *alloc_migration_target_non_cma(struct page *page, unsigned l
 	return alloc_migration_target(page, (unsigned long)&mtc);
 }
 
-static long check_and_migrate_cma_pages(struct task_struct *tsk,
-					struct mm_struct *mm,
+static long check_and_migrate_cma_pages(struct mm_struct *mm,
 					unsigned long start,
 					unsigned long nr_pages,
 					struct page **pages,
@@ -1681,7 +1674,7 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk,
 		 * again migrating any new CMA pages which we failed to isolate
 		 * earlier.
 		 */
-		ret = __get_user_pages_locked(tsk, mm, start, nr_pages,
+		ret = __get_user_pages_locked(mm, start, nr_pages,
 						   pages, vmas, NULL,
 						   gup_flags);
 
@@ -1695,8 +1688,7 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk,
 	return ret;
 }
 #else
-static long check_and_migrate_cma_pages(struct task_struct *tsk,
-					struct mm_struct *mm,
+static long check_and_migrate_cma_pages(struct mm_struct *mm,
 					unsigned long start,
 					unsigned long nr_pages,
 					struct page **pages,
@@ -1711,8 +1703,7 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk,
  * __gup_longterm_locked() is a wrapper for __get_user_pages_locked which
  * allows us to process the FOLL_LONGTERM flag.
  */
-static long __gup_longterm_locked(struct task_struct *tsk,
-				  struct mm_struct *mm,
+static long __gup_longterm_locked(struct mm_struct *mm,
 				  unsigned long start,
 				  unsigned long nr_pages,
 				  struct page **pages,
@@ -1737,7 +1728,7 @@ static long __gup_longterm_locked(struct task_struct *tsk,
 		flags = memalloc_nocma_save();
 	}
 
-	rc = __get_user_pages_locked(tsk, mm, start, nr_pages, pages,
+	rc = __get_user_pages_locked(mm, start, nr_pages, pages,
 				     vmas_tmp, NULL, gup_flags);
 
 	if (gup_flags & FOLL_LONGTERM) {
@@ -1752,7 +1743,7 @@ static long __gup_longterm_locked(struct task_struct *tsk,
 			goto out;
 		}
 
-		rc = check_and_migrate_cma_pages(tsk, mm, start, rc, pages,
+		rc = check_and_migrate_cma_pages(mm, start, rc, pages,
 						 vmas_tmp, gup_flags);
 	}
 
@@ -1762,22 +1753,20 @@ static long __gup_longterm_locked(struct task_struct *tsk,
 	return rc;
 }
 #else /* !CONFIG_FS_DAX && !CONFIG_CMA */
-static __always_inline long __gup_longterm_locked(struct task_struct *tsk,
-						  struct mm_struct *mm,
+static __always_inline long __gup_longterm_locked(struct mm_struct *mm,
 						  unsigned long start,
 						  unsigned long nr_pages,
 						  struct page **pages,
 						  struct vm_area_struct **vmas,
 						  unsigned int flags)
 {
-	return __get_user_pages_locked(tsk, mm, start, nr_pages, pages, vmas,
+	return __get_user_pages_locked(mm, start, nr_pages, pages, vmas,
 				       NULL, flags);
 }
 #endif /* CONFIG_FS_DAX || CONFIG_CMA */
 
 #ifdef CONFIG_MMU
-static long __get_user_pages_remote(struct task_struct *tsk,
-				    struct mm_struct *mm,
+static long __get_user_pages_remote(struct mm_struct *mm,
 				    unsigned long start, unsigned long nr_pages,
 				    unsigned int gup_flags, struct page **pages,
 				    struct vm_area_struct **vmas, int *locked)
@@ -1796,20 +1785,18 @@ static long __get_user_pages_remote(struct task_struct *tsk,
 		 * This will check the vmas (even if our vmas arg is NULL)
 		 * and return -ENOTSUPP if DAX isn't allowed in this case:
 		 */
-		return __gup_longterm_locked(tsk, mm, start, nr_pages, pages,
+		return __gup_longterm_locked(mm, start, nr_pages, pages,
 					     vmas, gup_flags | FOLL_TOUCH |
 					     FOLL_REMOTE);
 	}
 
-	return __get_user_pages_locked(tsk, mm, start, nr_pages, pages, vmas,
+	return __get_user_pages_locked(mm, start, nr_pages, pages, vmas,
 				       locked,
 				       gup_flags | FOLL_TOUCH | FOLL_REMOTE);
 }
 
 /**
  * get_user_pages_remote() - pin user pages in memory
- * @tsk:	the task_struct to use for page fault accounting, or
- *		NULL if faults are not to be recorded.
  * @mm:		mm_struct of target mm
  * @start:	starting user address
  * @nr_pages:	number of pages from start to pin
@@ -1868,7 +1855,7 @@ static long __get_user_pages_remote(struct task_struct *tsk,
  * should use get_user_pages_remote because it cannot pass
  * FAULT_FLAG_ALLOW_RETRY to handle_mm_fault.
  */
-long get_user_pages_remote(struct task_struct *tsk, struct mm_struct *mm,
+long get_user_pages_remote(struct mm_struct *mm,
 		unsigned long start, unsigned long nr_pages,
 		unsigned int gup_flags, struct page **pages,
 		struct vm_area_struct **vmas, int *locked)
@@ -1880,13 +1867,13 @@ long get_user_pages_remote(struct task_struct *tsk, struct mm_struct *mm,
 	if (WARN_ON_ONCE(gup_flags & FOLL_PIN))
 		return -EINVAL;
 
-	return __get_user_pages_remote(tsk, mm, start, nr_pages, gup_flags,
+	return __get_user_pages_remote(mm, start, nr_pages, gup_flags,
 				       pages, vmas, locked);
 }
 EXPORT_SYMBOL(get_user_pages_remote);
 
 #else /* CONFIG_MMU */
-long get_user_pages_remote(struct task_struct *tsk, struct mm_struct *mm,
+long get_user_pages_remote(struct mm_struct *mm,
 			   unsigned long start, unsigned long nr_pages,
 			   unsigned int gup_flags, struct page **pages,
 			   struct vm_area_struct **vmas, int *locked)
@@ -1894,8 +1881,7 @@ long get_user_pages_remote(struct task_struct *tsk, struct mm_struct *mm,
 	return 0;
 }
 
-static long __get_user_pages_remote(struct task_struct *tsk,
-				    struct mm_struct *mm,
+static long __get_user_pages_remote(struct mm_struct *mm,
 				    unsigned long start, unsigned long nr_pages,
 				    unsigned int gup_flags, struct page **pages,
 				    struct vm_area_struct **vmas, int *locked)
@@ -1932,7 +1918,7 @@ long get_user_pages(unsigned long start, unsigned long nr_pages,
 	if (WARN_ON_ONCE(gup_flags & FOLL_PIN))
 		return -EINVAL;
 
-	return __gup_longterm_locked(current, current->mm, start, nr_pages,
+	return __gup_longterm_locked(current->mm, start, nr_pages,
 				     pages, vmas, gup_flags | FOLL_TOUCH);
 }
 EXPORT_SYMBOL(get_user_pages);
@@ -1942,7 +1928,7 @@ EXPORT_SYMBOL(get_user_pages);
  *
  *      mmap_read_lock(mm);
  *      do_something()
- *      get_user_pages(tsk, mm, ..., pages, NULL);
+ *      get_user_pages(mm, ..., pages, NULL);
  *      mmap_read_unlock(mm);
 *
 *  to:
@@ -1950,7 +1936,7 @@ EXPORT_SYMBOL(get_user_pages);
  *      int locked = 1;
  *      mmap_read_lock(mm);
  *      do_something()
- *      get_user_pages_locked(tsk, mm, ..., pages, &locked);
+ *      get_user_pages_locked(mm, ..., pages, &locked);
  *      if (locked)
  *          mmap_read_unlock(mm);
 *
@@ -1988,7 +1974,7 @@ long get_user_pages_locked(unsigned long start, unsigned long nr_pages,
 	if (WARN_ON_ONCE(gup_flags & FOLL_PIN))
 		return -EINVAL;
 
-	return __get_user_pages_locked(current, current->mm, start, nr_pages,
+	return __get_user_pages_locked(current->mm, start, nr_pages,
 				       pages, NULL, locked,
 				       gup_flags | FOLL_TOUCH);
 }
@@ -1998,12 +1984,12 @@ EXPORT_SYMBOL(get_user_pages_locked);
  * get_user_pages_unlocked() is suitable to replace the form:
  *
  *      mmap_read_lock(mm);
- *      get_user_pages(tsk, mm, ..., pages, NULL);
+ *      get_user_pages(mm, ..., pages, NULL);
  *      mmap_read_unlock(mm);
 *
 *  with:
 *
- *      get_user_pages_unlocked(tsk, mm, ..., pages);
+ *      get_user_pages_unlocked(mm, ..., pages);
 *
 * It is functionally equivalent to get_user_pages_fast so
 * get_user_pages_fast should be used instead if specific gup_flags
@@ -2026,7 +2012,7 @@ long get_user_pages_unlocked(unsigned long start, unsigned long nr_pages,
 		return -EINVAL;
 
 	mmap_read_lock(mm);
-	ret = __get_user_pages_locked(current, mm, start, nr_pages, pages, NULL,
+	ret = __get_user_pages_locked(mm, start, nr_pages, pages, NULL,
 				      &locked, gup_flags | FOLL_TOUCH);
 	if (locked)
 		mmap_read_unlock(mm);
@@ -2671,7 +2657,7 @@ static int __gup_longterm_unlocked(unsigned long start, int nr_pages,
 	 */
 	if (gup_flags & FOLL_LONGTERM) {
 		mmap_read_lock(current->mm);
-		ret = __gup_longterm_locked(current, current->mm,
+		ret = __gup_longterm_locked(current->mm,
 					    start, nr_pages,
 					    pages, NULL, gup_flags);
 		mmap_read_unlock(current->mm);
@@ -2914,10 +2900,8 @@ int pin_user_pages_fast_only(unsigned long start, int nr_pages,
 EXPORT_SYMBOL_GPL(pin_user_pages_fast_only);
 
 /**
- * pin_user_pages_remote() - pin pages of a remote process (task != current)
+ * pin_user_pages_remote() - pin pages of a remote process
  *
- * @tsk:	the task_struct to use for page fault accounting, or
- *		NULL if faults are not to be recorded.
  * @mm:		mm_struct of target mm
  * @start:	starting user address
  * @nr_pages:	number of pages from start to pin
@@ -2938,7 +2922,7 @@ EXPORT_SYMBOL_GPL(pin_user_pages_fast_only);
  * FOLL_PIN means that the pages must be released via unpin_user_page(). Please
  * see Documentation/core-api/pin_user_pages.rst for details.
 */
-long pin_user_pages_remote(struct task_struct *tsk, struct mm_struct *mm,
+long pin_user_pages_remote(struct mm_struct *mm,
 			   unsigned long start, unsigned long nr_pages,
 			   unsigned int gup_flags, struct page **pages,
 			   struct vm_area_struct **vmas, int *locked)
@@ -2948,7 +2932,7 @@ long pin_user_pages_remote(struct task_struct *tsk, struct mm_struct *mm,
 		return -EINVAL;
 
 	gup_flags |= FOLL_PIN;
-	return __get_user_pages_remote(tsk, mm, start, nr_pages, gup_flags,
+	return __get_user_pages_remote(mm, start, nr_pages, gup_flags,
 				       pages, vmas, locked);
 }
 EXPORT_SYMBOL(pin_user_pages_remote);
@@ -2980,7 +2964,7 @@ long pin_user_pages(unsigned long start, unsigned long nr_pages,
 		return -EINVAL;
 
 	gup_flags |= FOLL_PIN;
-	return __gup_longterm_locked(current, current->mm, start, nr_pages,
+	return __gup_longterm_locked(current->mm, start, nr_pages,
 				     pages, vmas, gup_flags);
 }
 EXPORT_SYMBOL(pin_user_pages);
@@ -3025,7 +3009,7 @@ long pin_user_pages_locked(unsigned long start, unsigned long nr_pages,
 		return -EINVAL;
 
 	gup_flags |= FOLL_PIN;
-	return __get_user_pages_locked(current, current->mm, start, nr_pages,
+	return __get_user_pages_locked(current->mm, start, nr_pages,
 				       pages, NULL, locked,
 				       gup_flags | FOLL_TOUCH);
 }
diff --git a/mm/memory.c b/mm/memory.c
index 00d96ae9464c..65296a546f02 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4746,7 +4746,7 @@ int __access_remote_vm(struct task_struct *tsk, struct mm_struct *mm,
 		void *maddr;
 		struct page *page = NULL;
 
-		ret = get_user_pages_remote(tsk, mm, addr, 1,
+		ret = get_user_pages_remote(mm, addr, 1,
 				gup_flags, &page, &vma, NULL);
 		if (ret <= 0) {
 #ifndef CONFIG_HAVE_IOREMAP_PROT
diff --git a/mm/process_vm_access.c b/mm/process_vm_access.c
index cc85ce81914a..29c052099aff 100644
--- a/mm/process_vm_access.c
+++ b/mm/process_vm_access.c
@@ -105,7 +105,7 @@ static int process_vm_rw_single_vec(unsigned long addr,
 		 * current/current->mm
 		 */
 		mmap_read_lock(mm);
-		pinned_pages = pin_user_pages_remote(task, mm, pa, pinned_pages,
+		pinned_pages = pin_user_pages_remote(mm, pa, pinned_pages,
 						     flags, process_pages,
 						     NULL, &locked);
 		if (locked)
diff --git a/security/tomoyo/domain.c b/security/tomoyo/domain.c
index 7869d6a9980b..afe5e68ede77 100644
--- a/security/tomoyo/domain.c
+++ b/security/tomoyo/domain.c
@@ -914,7 +914,7 @@ bool tomoyo_dump_page(struct linux_binprm *bprm, unsigned long pos,
 	 * (represented by bprm). 'current' is the process doing
 	 * the execve().
 	 */
-	if (get_user_pages_remote(current, bprm->mm, pos, 1,
+	if (get_user_pages_remote(bprm->mm, pos, 1,
 				  FOLL_FORCE, &page, NULL, NULL) <= 0)
 		return false;
 #else
diff --git a/virt/kvm/async_pf.c b/virt/kvm/async_pf.c
index 45799606bb3e..0939ed377688 100644
--- a/virt/kvm/async_pf.c
+++ b/virt/kvm/async_pf.c
@@ -61,7 +61,7 @@ static void async_pf_execute(struct work_struct *work)
 	 * access remotely.
 	 */
 	mmap_read_lock(mm);
-	get_user_pages_remote(NULL, mm, addr, 1, FOLL_WRITE, NULL, NULL,
+	get_user_pages_remote(mm, addr, 1, FOLL_WRITE, NULL, NULL,
 			&locked);
 	if (locked)
 		mmap_read_unlock(mm);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index a852af5c3214..45a0a1e6fde8 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1830,7 +1830,7 @@ static int hva_to_pfn_remapped(struct vm_area_struct *vma,
 		 * not call the fault handler, so do it here.
 		 */
 		bool unlocked = false;
-		r = fixup_user_fault(current, current->mm, addr,
+		r = fixup_user_fault(current->mm, addr,
 				     (write_fault ? FAULT_FLAG_WRITE : 0),
 				     &unlocked);
 		if (unlocked)