From patchwork Fri Jan 28 13:10:04 2022
X-Patchwork-Submitter: Michel Lespinasse
X-Patchwork-Id: 12728527
From: Michel Lespinasse
To: Linux-MM, linux-kernel@vger.kernel.org, Andrew Morton
Cc: kernel-team@fb.com, Laurent Dufour, Jerome Glisse, Peter Zijlstra,
    Michal Hocko, Vlastimil Babka, Davidlohr Bueso, Matthew Wilcox,
    Liam Howlett, Rik van Riel, Paul McKenney, Song Liu,
    Suren Baghdasaryan, Minchan Kim, Joel Fernandes, David Rientjes,
    Axel Rasmussen, Andy Lutomirski, Michel Lespinasse
Subject: [PATCH v2 33/35] arm64/mm: attempt speculative mm faults first
Date: Fri, 28 Jan 2022 05:10:04 -0800
Message-Id: <20220128131006.67712-34-michel@lespinasse.org>
In-Reply-To: <20220128131006.67712-1-michel@lespinasse.org>
References: <20220128131006.67712-1-michel@lespinasse.org>
Attempt speculative mm fault handling first, and fall back to the
existing (non-speculative) code if that fails.

This follows the approach of the x86 speculative fault handling code,
with some minor arch differences such as the way the VM_FAULT_BADACCESS
case is handled.

Signed-off-by: Michel Lespinasse
---
 arch/arm64/mm/fault.c | 62 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 62 insertions(+)

diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 77341b160aca..2598795f4e70 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -25,6 +25,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 
@@ -524,6 +525,11 @@ static int __kprobes do_page_fault(unsigned long far, unsigned int esr,
 	unsigned long vm_flags;
 	unsigned int mm_flags = FAULT_FLAG_DEFAULT;
 	unsigned long addr = untagged_addr(far);
+#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
+	struct vm_area_struct *vma;
+	struct vm_area_struct pvma;
+	unsigned long seq;
+#endif
 
 	if (kprobe_page_fault(regs, esr))
 		return 0;
@@ -574,6 +580,59 @@ static int __kprobes do_page_fault(unsigned long far, unsigned int esr,
 
 	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, addr);
 
+#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
+	/*
+	 * No need to try speculative faults for kernel or
+	 * single threaded user space.
+	 */
+	if (!(mm_flags & FAULT_FLAG_USER) || atomic_read(&mm->mm_users) == 1)
+		goto no_spf;
+
+	count_vm_event(SPF_ATTEMPT);
+	seq = mmap_seq_read_start(mm);
+	if (seq & 1) {
+		count_vm_spf_event(SPF_ABORT_ODD);
+		goto spf_abort;
+	}
+	rcu_read_lock();
+	vma = __find_vma(mm, addr);
+	if (!vma || vma->vm_start > addr) {
+		rcu_read_unlock();
+		count_vm_spf_event(SPF_ABORT_UNMAPPED);
+		goto spf_abort;
+	}
+	if (!vma_is_anonymous(vma)) {
+		rcu_read_unlock();
+		count_vm_spf_event(SPF_ABORT_NO_SPECULATE);
+		goto spf_abort;
+	}
+	pvma = *vma;
+	rcu_read_unlock();
+	if (!mmap_seq_read_check(mm, seq, SPF_ABORT_VMA_COPY))
+		goto spf_abort;
+	vma = &pvma;
+	if (!(vma->vm_flags & vm_flags)) {
+		count_vm_spf_event(SPF_ABORT_ACCESS_ERROR);
+		goto spf_abort;
+	}
+	fault = do_handle_mm_fault(vma, addr & PAGE_MASK,
+			mm_flags | FAULT_FLAG_SPECULATIVE, seq, regs);
+
+	/* Quick path to respond to signals */
+	if (fault_signal_pending(fault, regs)) {
+		if (!user_mode(regs))
+			goto no_context;
+		return 0;
+	}
+	if (!(fault & VM_FAULT_RETRY))
+		goto done;
+
+spf_abort:
+	count_vm_event(SPF_ABORT);
+no_spf:
+
+#endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
+
 	/*
 	 * As per x86, we may deadlock here. However, since the kernel only
 	 * validly references user space from well defined areas of the code,
@@ -612,6 +671,9 @@ static int __kprobes do_page_fault(unsigned long far, unsigned int esr,
 		goto retry;
 	}
 	mmap_read_unlock(mm);
+#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
+done:
+#endif
 
 	/*
 	 * Handle the "normal" (no error) case first.
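
For readers less familiar with this series, the new code above amounts to a
seqcount-style "speculate first, take the lock only on failure" scheme: sample
the mmap sequence count, look up and copy the vma without holding mmap_lock,
re-check the count, and fall back to the conventional locked path if anything
changed. The sketch below is a minimal userspace analogue of that control
flow, assuming nothing beyond pthreads and C11 atomics; every name in it
(fake_mm, fault_speculative, fault_locked) is invented for illustration and
does not correspond to the kernel helpers such as mmap_seq_read_start() or
mmap_seq_read_check() used by the patch.

/*
 * Illustrative userspace analogue of the speculative-fault pattern
 * (not kernel code; all names are made up for this example).
 * Read a sequence count, do the lookup without the lock, then
 * re-check the count; if a writer was active or the count moved,
 * fall back to the locked path.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct fake_mm {
	pthread_mutex_t lock;	/* stands in for mmap_lock */
	atomic_ulong seq;	/* even: stable, odd: writer in progress */
	long mapping;		/* stands in for the vma tree */
};

/* Speculative path: no lock taken; returns false if we must fall back. */
static bool fault_speculative(struct fake_mm *mm, long *out)
{
	unsigned long seq = atomic_load(&mm->seq);

	if (seq & 1)				/* writer active: abort */
		return false;
	*out = mm->mapping;			/* unlocked, speculative read */
	return atomic_load(&mm->seq) == seq;	/* nothing changed meanwhile? */
}

/* Fallback path: classic read under the lock. */
static long fault_locked(struct fake_mm *mm)
{
	long val;

	pthread_mutex_lock(&mm->lock);
	val = mm->mapping;
	pthread_mutex_unlock(&mm->lock);
	return val;
}

int main(void)
{
	struct fake_mm mm = {
		.lock = PTHREAD_MUTEX_INITIALIZER,
		.seq = 0,
		.mapping = 42,
	};
	long val;

	/* Try the speculative path first, fall back if it aborts. */
	if (!fault_speculative(&mm, &val))
		val = fault_locked(&mm);
	printf("resolved mapping: %ld\n", val);
	return 0;
}

The patch follows the same shape: the speculative block bails out to
spf_abort/no_spf whenever the sequence count or the vma cannot be trusted,
and the pre-existing mmap_lock path below it remains the fallback.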