From patchwork Fri Jun 26 22:31:09 2020
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 11628983
From: Peter Xu <peterx@redhat.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Andrew Morton, Linus Torvalds, Gerald Schaefer, Andrea Arcangeli, Will Deacon, peterx@redhat.com, Michael Ellerman, Catalin Marinas, linux-arm-kernel@lists.infradead.org
Subject: [PATCH 05/26] mm/arm64: Use general page fault accounting
Date: Fri, 26 Jun 2020 18:31:09 -0400
Message-Id: <20200626223130.199227-6-peterx@redhat.com>
In-Reply-To: <20200626223130.199227-1-peterx@redhat.com>
References: <20200626223130.199227-1-peterx@redhat.com>

Use the general
page fault accounting by passing regs into handle_mm_fault(). It naturally
solves the issue of multiple page fault accounting when a page fault is
retried.

To do this, we pass the pt_regs pointer into __do_page_fault().

CC: Catalin Marinas
CC: Will Deacon
CC: linux-arm-kernel@lists.infradead.org
Signed-off-by: Peter Xu
---
 arch/arm64/mm/fault.c | 29 ++++++-----------------------
 1 file changed, 6 insertions(+), 23 deletions(-)

diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 5f6607b951b8..09b206521559 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -398,7 +398,8 @@ static void do_bad_area(unsigned long addr, unsigned int esr, struct pt_regs *re
 #define VM_FAULT_BADACCESS	0x020000
 
 static vm_fault_t __do_page_fault(struct mm_struct *mm, unsigned long addr,
-				  unsigned int mm_flags, unsigned long vm_flags)
+				  unsigned int mm_flags, unsigned long vm_flags,
+				  struct pt_regs *regs)
 {
 	struct vm_area_struct *vma = find_vma(mm, addr);
 
@@ -422,7 +423,7 @@ static vm_fault_t __do_page_fault(struct mm_struct *mm, unsigned long addr,
 	 */
 	if (!(vma->vm_flags & vm_flags))
 		return VM_FAULT_BADACCESS;
-	return handle_mm_fault(vma, addr & PAGE_MASK, mm_flags, NULL);
+	return handle_mm_fault(vma, addr & PAGE_MASK, mm_flags, regs);
 }
 
 static bool is_el0_instruction_abort(unsigned int esr)
@@ -444,7 +445,7 @@ static int __kprobes do_page_fault(unsigned long addr, unsigned int esr,
 {
 	const struct fault_info *inf;
 	struct mm_struct *mm = current->mm;
-	vm_fault_t fault, major = 0;
+	vm_fault_t fault;
 	unsigned long vm_flags = VM_ACCESS_FLAGS;
 	unsigned int mm_flags = FAULT_FLAG_DEFAULT;
 
@@ -510,8 +511,7 @@ static int __kprobes do_page_fault(unsigned long addr, unsigned int esr,
 #endif
 	}
 
-	fault = __do_page_fault(mm, addr, mm_flags, vm_flags);
-	major |= fault & VM_FAULT_MAJOR;
+	fault = __do_page_fault(mm, addr, mm_flags, vm_flags, regs);
 
 	/* Quick path to respond to signals */
 	if (fault_signal_pending(fault, regs)) {
@@ -532,25 +532,8 @@ static int __kprobes do_page_fault(unsigned long addr, unsigned int esr,
 	 * Handle the "normal" (no error) case first.
 	 */
 	if (likely(!(fault & (VM_FAULT_ERROR | VM_FAULT_BADMAP |
-			      VM_FAULT_BADACCESS)))) {
-		/*
-		 * Major/minor page fault accounting is only done
-		 * once. If we go through a retry, it is extremely
-		 * likely that the page will be found in page cache at
-		 * that point.
-		 */
-		if (major) {
-			current->maj_flt++;
-			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1, regs,
-				      addr);
-		} else {
-			current->min_flt++;
-			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1, regs,
-				      addr);
-		}
-
+			      VM_FAULT_BADACCESS))))
 		return 0;
-	}
 
 	/*
	 * If we are in kernel mode at this point, we have no context to
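
The behavioural change the commit message describes can be sketched as follows. This is a simplified userspace model (hypothetical names, not the actual kernel code): accounting lives in one common place, is skipped while the fault is going to be retried, and is skipped entirely when the caller passes a NULL regs pointer, so each fault is counted at most once no matter how many retries happen.

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-ins for kernel types and fault result bits. */
#define VM_FAULT_RETRY 0x01u
#define VM_FAULT_MAJOR 0x02u

struct pt_regs { int dummy; };
struct task { unsigned long maj_flt, min_flt; };

static struct task current_task;

/* Accounting done once, in the common handler: skipped while the
 * fault will be retried, and skipped when regs is NULL (a caller
 * that does its own accounting, as arch code did before). */
static void account_fault(struct pt_regs *regs, unsigned int fault)
{
	if (!regs || (fault & VM_FAULT_RETRY))
		return;
	if (fault & VM_FAULT_MAJOR)
		current_task.maj_flt++;
	else
		current_task.min_flt++;
}

/* Model of handle_mm_fault(): 'outcome' simulates what the real
 * fault handling would have returned for this attempt. */
static unsigned int handle_mm_fault_sim(unsigned int outcome,
					struct pt_regs *regs)
{
	account_fault(regs, outcome);
	return outcome;
}
```

With this shape, a major fault that goes through one retry is counted exactly once: the first attempt returns `VM_FAULT_RETRY | VM_FAULT_MAJOR` and is not accounted, the second returns `VM_FAULT_MAJOR` and bumps `maj_flt` to 1, which is why the per-architecture `major` bookkeeping in the diff above can be deleted.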