From patchwork Fri Jun 26 22:31:08 2020
X-Patchwork-Submitter: Peter Xu <peterx@redhat.com>
X-Patchwork-Id: 11628985
From: Peter Xu <peterx@redhat.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Andrew Morton, Linus Torvalds, Gerald Schaefer, Andrea Arcangeli,
    Will Deacon, peterx@redhat.com, Michael Ellerman, Russell King,
    linux-arm-kernel@lists.infradead.org
Subject: [PATCH 04/26] mm/arm: Use general page fault accounting
Date: Fri, 26 Jun 2020 18:31:08 -0400
Message-Id: <20200626223130.199227-5-peterx@redhat.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200626223130.199227-1-peterx@redhat.com>
References: <20200626223130.199227-1-peterx@redhat.com>
MIME-Version: 1.0

Use the general page fault accounting by passing regs into
handle_mm_fault(). This naturally solves the problem of a single fault
being accounted multiple times when the page fault is retried. To do
this, pass the pt_regs pointer down into __do_page_fault().

Fix the PERF_COUNT_SW_PAGE_FAULTS perf event manually for page fault
retries by moving it to before mmap_sem is taken, so that it fires once
per fault rather than once per attempt.
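For reference, the retry-aware accounting that handle_mm_fault() performs
once it is handed a non-NULL regs looks roughly like the sketch below (a
simplified rendering of the mm_account_fault() helper introduced earlier
in this series; abbreviated, not a literal excerpt):

	static inline void mm_account_fault(struct pt_regs *regs,
					    unsigned long address,
					    unsigned int flags, vm_fault_t ret)
	{
		bool major;

		/* Archs still doing their own accounting pass regs == NULL. */
		if (!regs)
			return;

		/* Don't account errors, or attempts that will be retried. */
		if (ret & (VM_FAULT_ERROR | VM_FAULT_RETRY))
			return;

		/* A retried fault that completes now counts as one fault. */
		major = (ret & VM_FAULT_MAJOR) || (flags & FAULT_FLAG_TRIED);

		if (major) {
			current->maj_flt++;
			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1,
				      regs, address);
		} else {
			current->min_flt++;
			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1,
				      regs, address);
		}
	}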
CC: Russell King
CC: Will Deacon
CC: linux-arm-kernel@lists.infradead.org
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 arch/arm/mm/fault.c | 25 ++++++-------------------
 1 file changed, 6 insertions(+), 19 deletions(-)

diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c
index 0d6be0f4f27c..8530befee012 100644
--- a/arch/arm/mm/fault.c
+++ b/arch/arm/mm/fault.c
@@ -201,7 +201,8 @@ static inline bool access_error(unsigned int fsr, struct vm_area_struct *vma)
 
 static vm_fault_t __kprobes
 __do_page_fault(struct mm_struct *mm, unsigned long addr, unsigned int fsr,
-		unsigned int flags, struct task_struct *tsk)
+		unsigned int flags, struct task_struct *tsk,
+		struct pt_regs *regs)
 {
 	struct vm_area_struct *vma;
 	vm_fault_t fault;
@@ -223,7 +224,7 @@ __do_page_fault(struct mm_struct *mm, unsigned long addr, unsigned int fsr,
 		goto out;
 	}
 
-	return handle_mm_fault(vma, addr & PAGE_MASK, flags, NULL);
+	return handle_mm_fault(vma, addr & PAGE_MASK, flags, regs);
 
 check_stack:
 	/* Don't allow expansion below FIRST_USER_ADDRESS */
@@ -265,6 +266,8 @@ do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
 	if ((fsr & FSR_WRITE) && !(fsr & FSR_CM))
 		flags |= FAULT_FLAG_WRITE;
 
+	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, addr);
+
 	/*
 	 * As per x86, we may deadlock here.  However, since the kernel only
 	 * validly references user space from well defined areas of the code,
@@ -289,7 +292,7 @@ do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
 #endif
 	}
 
-	fault = __do_page_fault(mm, addr, fsr, flags, tsk);
+	fault = __do_page_fault(mm, addr, fsr, flags, tsk, regs);
 
 	/* If we need to retry but a fatal signal is pending, handle the
 	 * signal first. We do not need to release the mmap_sem because
@@ -301,23 +304,7 @@ do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
 		return 0;
 	}
 
-	/*
-	 * Major/minor page fault accounting is only done on the
-	 * initial attempt. If we go through a retry, it is extremely
-	 * likely that the page will be found in page cache at that point.
-	 */
-
-	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, addr);
 	if (!(fault & VM_FAULT_ERROR) && flags & FAULT_FLAG_ALLOW_RETRY) {
-		if (fault & VM_FAULT_MAJOR) {
-			tsk->maj_flt++;
-			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1,
-					regs, addr);
-		} else {
-			tsk->min_flt++;
-			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1,
-					regs, addr);
-		}
 		if (fault & VM_FAULT_RETRY) {
 			flags |= FAULT_FLAG_TRIED;
 			goto retry;
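After this change, the accounting-relevant flow of arm's do_page_fault()
boils down to roughly the following (a condensed sketch with the VMA
checks, signal handling and error paths elided, not the literal function
body):

	/* Counted once per hardware fault, before any retry can happen. */
	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, addr);
retry:
	down_read(&mm->mmap_sem);
	fault = __do_page_fault(mm, addr, fsr, flags, tsk, regs);
	/*
	 * maj_flt/min_flt and the MAJ/MIN perf events are now bumped
	 * inside handle_mm_fault(), on the attempt that completes.
	 */
	if (!(fault & VM_FAULT_ERROR) && (flags & FAULT_FLAG_ALLOW_RETRY) &&
	    (fault & VM_FAULT_RETRY)) {
		/* handle_mm_fault() dropped mmap_sem before returning RETRY. */
		flags |= FAULT_FLAG_TRIED;
		goto retry;	/* no second PERF_COUNT_SW_PAGE_FAULTS here */
	}
	up_read(&mm->mmap_sem);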