From patchwork Mon Aug 21 12:30:47 2023
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 13359385
From: Kefeng Wang
To: Andrew Morton
Cc: Russell King, Catalin Marinas, Will Deacon, Huacai Chen, WANG Xuerui, Michael Ellerman, Nicholas Piggin, Christophe Leroy, Paul Walmsley, Palmer Dabbelt, Albert Ou, Alexander Gordeev, Gerald Schaefer, Heiko Carstens, Vasily Gorbik, Christian Borntraeger, Sven Schnelle, Dave Hansen, Andy Lutomirski, Peter Zijlstra, Thomas Gleixner, Ingo Molnar, Borislav Petkov, H. Peter Anvin, Kefeng Wang
Subject: [PATCH rfc v2 01/10] mm: add a generic VMA lock-based page fault handler
Date: Mon, 21 Aug 2023 20:30:47 +0800
Message-ID: <20230821123056.2109942-2-wangkefeng.wang@huawei.com>
In-Reply-To: <20230821123056.2109942-1-wangkefeng.wang@huawei.com>
References: <20230821123056.2109942-1-wangkefeng.wang@huawei.com>

ARCH_SUPPORTS_PER_VMA_LOCK is being enabled by more and more architectures, e.g. x86, arm64, powerpc, s390 and riscv, and their implementations are very similar, which results in duplicated code. Add a generic VMA lock-based page fault handler, try_vma_locked_page_fault(), to eliminate the duplication and to make it easier to support per-VMA locking on new architectures.

Since different architectures check whether a vma is accessible in different ways, the struct pt_regs pointer, the page fault error code and the required vma flags are added to struct vm_fault, so each architecture's page fault code can reuse struct vm_fault to record them and perform its own access check.

Signed-off-by: Kefeng Wang
---
 include/linux/mm.h       | 17 +++++++++++++++++
 include/linux/mm_types.h |  2 ++
 mm/memory.c              | 39 +++++++++++++++++++++++++++++++++++++++
 3 files changed, 58 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 3f764e84e567..22a6f4c56ff3 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -512,9 +512,12 @@ struct vm_fault {
 		pgoff_t pgoff;			/* Logical page offset based on vma */
 		unsigned long address;		/* Faulting virtual address - masked */
 		unsigned long real_address;	/* Faulting virtual address - unmasked */
+		unsigned long fault_code;	/* Faulting error code during page fault */
+		struct pt_regs *regs;		/* The registers stored during page fault */
 	};
 	enum fault_flag flags;		/* FAULT_FLAG_xxx flags
 					 * XXX: should really be 'const' */
+	vm_flags_t vm_flags;		/* VMA flags to be used for access checking */
 	pmd_t *pmd;			/* Pointer to pmd entry matching
 					 * the 'address' */
 	pud_t *pud;			/* Pointer to pud entry matching
@@ -774,6 +777,9 @@ static inline void assert_fault_locked(struct vm_fault *vmf)
 struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
 					  unsigned long address);
 
+bool arch_vma_access_error(struct vm_area_struct *vma, struct vm_fault *vmf);
+vm_fault_t try_vma_locked_page_fault(struct vm_fault *vmf);
+
 #else /* CONFIG_PER_VMA_LOCK */
 
 static inline bool vma_start_read(struct vm_area_struct *vma)
@@ -801,6 +807,17 @@ static inline void assert_fault_locked(struct vm_fault *vmf)
 	mmap_assert_locked(vmf->vma->vm_mm);
 }
 
+static inline struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
+		unsigned long address)
+{
+	return NULL;
+}
+
+static inline vm_fault_t try_vma_locked_page_fault(struct vm_fault *vmf)
+{
+	return VM_FAULT_NONE;
+}
+
 #endif /* CONFIG_PER_VMA_LOCK */
 
 extern const struct vm_operations_struct vma_dummy_vm_ops;
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index f5ba5b0bc836..702820cea3f9 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -1119,6 +1119,7 @@ typedef __bitwise unsigned int vm_fault_t;
  * fault. Used to decide whether a process gets delivered SIGBUS or
  * just gets major/minor fault counters bumped up.
  *
+ * @VM_FAULT_NONE:		Special case, not starting to handle fault
  * @VM_FAULT_OOM:		Out Of Memory
  * @VM_FAULT_SIGBUS:		Bad access
  * @VM_FAULT_MAJOR:		Page read from storage
@@ -1139,6 +1140,7 @@ typedef __bitwise unsigned int vm_fault_t;
  *
  */
 enum vm_fault_reason {
+	VM_FAULT_NONE           = (__force vm_fault_t)0x000000,
 	VM_FAULT_OOM            = (__force vm_fault_t)0x000001,
 	VM_FAULT_SIGBUS         = (__force vm_fault_t)0x000002,
 	VM_FAULT_MAJOR          = (__force vm_fault_t)0x000004,
diff --git a/mm/memory.c b/mm/memory.c
index 3b4aaa0d2fff..60fe35db5134 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5510,6 +5510,45 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
 	count_vm_vma_lock_event(VMA_LOCK_ABORT);
 	return NULL;
 }
+
+#ifdef CONFIG_PER_VMA_LOCK
+bool __weak arch_vma_access_error(struct vm_area_struct *vma, struct vm_fault *vmf)
+{
+	return (vma->vm_flags & vmf->vm_flags) == 0;
+}
+#endif
+
+vm_fault_t try_vma_locked_page_fault(struct vm_fault *vmf)
+{
+	vm_fault_t fault = VM_FAULT_NONE;
+	struct vm_area_struct *vma;
+
+	if (!(vmf->flags & FAULT_FLAG_USER))
+		return fault;
+
+	vma = lock_vma_under_rcu(current->mm, vmf->real_address);
+	if (!vma)
+		return fault;
+
+	if (arch_vma_access_error(vma, vmf)) {
+		vma_end_read(vma);
+		return fault;
+	}
+
+	fault = handle_mm_fault(vma, vmf->real_address,
+				vmf->flags | FAULT_FLAG_VMA_LOCK, vmf->regs);
+
+	if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
+		vma_end_read(vma);
+
+	if (fault & VM_FAULT_RETRY)
+		count_vm_vma_lock_event(VMA_LOCK_RETRY);
+	else
+		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
+
+	return fault;
+}
+
 #endif /* CONFIG_PER_VMA_LOCK */
 
 #ifndef __PAGETABLE_P4D_FOLDED
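The __weak default above only tests the vm_flags mask that the arch handler placed in vmf->vm_flags; architectures with a richer hardware error code override it instead. For example, the x86 conversion later in this series (patch 03) forwards to its existing access_error() helper, and the riscv conversion (patch 06) does the same with its fault cause:

	#ifdef CONFIG_PER_VMA_LOCK
	bool arch_vma_access_error(struct vm_area_struct *vma, struct vm_fault *vmf)
	{
		return access_error(vmf->fault_code, vma);
	}
	#endif

Using a weak default keeps the common case (a plain vm_flags check) in mm/memory.c while letting an architecture hook in its own permission and pkey checks without adding a new Kconfig knob.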
1qY5DZ-00E6kv-33; Mon, 21 Aug 2023 13:45:17 +0000 Received: from szxga01-in.huawei.com ([45.249.212.187]) by bombadil.infradead.org with esmtps (Exim 4.96 #2 (Red Hat Linux)) id 1qY440-00Dyre-0m; Mon, 21 Aug 2023 12:31:22 +0000 Received: from dggpemm100001.china.huawei.com (unknown [172.30.72.55]) by szxga01-in.huawei.com (SkyGuard) with ESMTP id 4RTsF110wXztSVv; Mon, 21 Aug 2023 20:27:33 +0800 (CST) Received: from localhost.localdomain (10.175.112.125) by dggpemm100001.china.huawei.com (7.185.36.93) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.31; Mon, 21 Aug 2023 20:31:14 +0800 From: Kefeng Wang To: Andrew Morton , CC: , , Russell King , Catalin Marinas , Will Deacon , Huacai Chen , WANG Xuerui , Michael Ellerman , Nicholas Piggin , Christophe Leroy , Paul Walmsley , Palmer Dabbelt , Albert Ou , Alexander Gordeev , Gerald Schaefer , Heiko Carstens , Vasily Gorbik , Christian Borntraeger , Sven Schnelle , Dave Hansen , Andy Lutomirski , Peter Zijlstra , Thomas Gleixner , Ingo Molnar , Borislav Petkov , , "H . Peter Anvin" , , , , , , , Kefeng Wang Subject: [PATCH rfc v2 02/10] arm64: mm: use try_vma_locked_page_fault() Date: Mon, 21 Aug 2023 20:30:48 +0800 Message-ID: <20230821123056.2109942-3-wangkefeng.wang@huawei.com> X-Mailer: git-send-email 2.27.0 In-Reply-To: <20230821123056.2109942-1-wangkefeng.wang@huawei.com> References: <20230821123056.2109942-1-wangkefeng.wang@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.175.112.125] X-ClientProxiedBy: dggems705-chm.china.huawei.com (10.3.19.182) To dggpemm100001.china.huawei.com (7.185.36.93) X-CFilter-Loop: Reflected X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230821_053120_729323_D6C424A1 X-CRM114-Status: GOOD ( 16.75 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Use new try_vma_locked_page_fault() helper to simplify code, also pass struct vmf to __do_page_fault() directly instead of each independent variable. No functional change intended. Signed-off-by: Kefeng Wang --- arch/arm64/mm/fault.c | 60 ++++++++++++++++--------------------------- 1 file changed, 22 insertions(+), 38 deletions(-) diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c index 2e5d1e238af9..2b7a1e610b3e 100644 --- a/arch/arm64/mm/fault.c +++ b/arch/arm64/mm/fault.c @@ -498,9 +498,8 @@ static void do_bad_area(unsigned long far, unsigned long esr, #define VM_FAULT_BADACCESS ((__force vm_fault_t)0x020000) static vm_fault_t __do_page_fault(struct mm_struct *mm, - struct vm_area_struct *vma, unsigned long addr, - unsigned int mm_flags, unsigned long vm_flags, - struct pt_regs *regs) + struct vm_area_struct *vma, + struct vm_fault *vmf) { /* * Ok, we have a good vm_area for this memory access, so we can handle @@ -508,9 +507,9 @@ static vm_fault_t __do_page_fault(struct mm_struct *mm, * Check that the permissions on the VMA allow for the fault which * occurred. 
*/ - if (!(vma->vm_flags & vm_flags)) + if (!(vma->vm_flags & vmf->vm_flags)) return VM_FAULT_BADACCESS; - return handle_mm_fault(vma, addr, mm_flags, regs); + return handle_mm_fault(vma, vmf->real_address, vmf->flags, vmf->regs); } static bool is_el0_instruction_abort(unsigned long esr) @@ -533,10 +532,12 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr, const struct fault_info *inf; struct mm_struct *mm = current->mm; vm_fault_t fault; - unsigned long vm_flags; - unsigned int mm_flags = FAULT_FLAG_DEFAULT; unsigned long addr = untagged_addr(far); struct vm_area_struct *vma; + struct vm_fault vmf = { + .real_address = addr, + .flags = FAULT_FLAG_DEFAULT, + }; if (kprobe_page_fault(regs, esr)) return 0; @@ -549,7 +550,7 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr, goto no_context; if (user_mode(regs)) - mm_flags |= FAULT_FLAG_USER; + vmf.flags |= FAULT_FLAG_USER; /* * vm_flags tells us what bits we must have in vma->vm_flags @@ -559,20 +560,20 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr, */ if (is_el0_instruction_abort(esr)) { /* It was exec fault */ - vm_flags = VM_EXEC; - mm_flags |= FAULT_FLAG_INSTRUCTION; + vmf.vm_flags = VM_EXEC; + vmf.flags |= FAULT_FLAG_INSTRUCTION; } else if (is_write_abort(esr)) { /* It was write fault */ - vm_flags = VM_WRITE; - mm_flags |= FAULT_FLAG_WRITE; + vmf.vm_flags = VM_WRITE; + vmf.flags |= FAULT_FLAG_WRITE; } else { /* It was read fault */ - vm_flags = VM_READ; + vmf.vm_flags = VM_READ; /* Write implies read */ - vm_flags |= VM_WRITE; + vmf.vm_flags |= VM_WRITE; /* If EPAN is absent then exec implies read */ if (!cpus_have_const_cap(ARM64_HAS_EPAN)) - vm_flags |= VM_EXEC; + vmf.vm_flags |= VM_EXEC; } if (is_ttbr0_addr(addr) && is_el1_permission_fault(addr, esr, regs)) { @@ -587,26 +588,11 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr, perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, addr); - if (!(mm_flags & FAULT_FLAG_USER)) - goto lock_mmap; - - vma = lock_vma_under_rcu(mm, addr); - if (!vma) - goto lock_mmap; - - if (!(vma->vm_flags & vm_flags)) { - vma_end_read(vma); - goto lock_mmap; - } - fault = handle_mm_fault(vma, addr, mm_flags | FAULT_FLAG_VMA_LOCK, regs); - if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED))) - vma_end_read(vma); - - if (!(fault & VM_FAULT_RETRY)) { - count_vm_vma_lock_event(VMA_LOCK_SUCCESS); + fault = try_vma_locked_page_fault(&vmf); + if (fault == VM_FAULT_NONE) + goto retry; + if (!(fault & VM_FAULT_RETRY)) goto done; - } - count_vm_vma_lock_event(VMA_LOCK_RETRY); /* Quick path to respond to signals */ if (fault_signal_pending(fault, regs)) { @@ -614,8 +600,6 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr, goto no_context; return 0; } -lock_mmap: - retry: vma = lock_mm_and_find_vma(mm, addr, regs); if (unlikely(!vma)) { @@ -623,7 +607,7 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr, goto done; } - fault = __do_page_fault(mm, vma, addr, mm_flags, vm_flags, regs); + fault = __do_page_fault(mm, vma, &vmf); /* Quick path to respond to signals */ if (fault_signal_pending(fault, regs)) { @@ -637,7 +621,7 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr, return 0; if (fault & VM_FAULT_RETRY) { - mm_flags |= FAULT_FLAG_TRIED; + vmf.flags |= FAULT_FLAG_TRIED; goto retry; } mmap_read_unlock(mm); From patchwork Mon Aug 21 12:30:49 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit 
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 13359381
From: Kefeng Wang
To: Andrew Morton
Cc: Russell King, Catalin Marinas, Will Deacon, Huacai Chen, WANG Xuerui, Michael Ellerman, Nicholas Piggin, Christophe Leroy, Paul Walmsley, Palmer Dabbelt, Albert Ou, Alexander Gordeev, Gerald Schaefer, Heiko Carstens, Vasily Gorbik, Christian Borntraeger, Sven Schnelle, Dave Hansen, Andy Lutomirski, Peter Zijlstra, Thomas Gleixner, Ingo Molnar, Borislav Petkov, "H . 
Peter Anvin" , , , , , , , Kefeng Wang Subject: [PATCH rfc v2 03/10] x86: mm: use try_vma_locked_page_fault() Date: Mon, 21 Aug 2023 20:30:49 +0800 Message-ID: <20230821123056.2109942-4-wangkefeng.wang@huawei.com> X-Mailer: git-send-email 2.27.0 In-Reply-To: <20230821123056.2109942-1-wangkefeng.wang@huawei.com> References: <20230821123056.2109942-1-wangkefeng.wang@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.175.112.125] X-ClientProxiedBy: dggems705-chm.china.huawei.com (10.3.19.182) To dggpemm100001.china.huawei.com (7.185.36.93) X-CFilter-Loop: Reflected X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230821_053120_806331_ECD8FBEF X-CRM114-Status: GOOD ( 16.00 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Use new try_vma_locked_page_fault() helper to simplify code. No functional change intended. Signed-off-by: Kefeng Wang --- arch/x86/mm/fault.c | 55 +++++++++++++++++++-------------------------- 1 file changed, 23 insertions(+), 32 deletions(-) diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c index ab778eac1952..3edc9edc0b28 100644 --- a/arch/x86/mm/fault.c +++ b/arch/x86/mm/fault.c @@ -1227,6 +1227,13 @@ do_kern_addr_fault(struct pt_regs *regs, unsigned long hw_error_code, } NOKPROBE_SYMBOL(do_kern_addr_fault); +#ifdef CONFIG_PER_VMA_LOCK +bool arch_vma_access_error(struct vm_area_struct *vma, struct vm_fault *vmf) +{ + return access_error(vmf->fault_code, vma); +} +#endif + /* * Handle faults in the user portion of the address space. Nothing in here * should check X86_PF_USER without a specific justification: for almost @@ -1241,13 +1248,13 @@ void do_user_addr_fault(struct pt_regs *regs, unsigned long address) { struct vm_area_struct *vma; - struct task_struct *tsk; - struct mm_struct *mm; + struct mm_struct *mm = current->mm; vm_fault_t fault; - unsigned int flags = FAULT_FLAG_DEFAULT; - - tsk = current; - mm = tsk->mm; + struct vm_fault vmf = { + .real_address = address, + .fault_code = error_code, + .flags = FAULT_FLAG_DEFAULT + }; if (unlikely((error_code & (X86_PF_USER | X86_PF_INSTR)) == X86_PF_INSTR)) { /* @@ -1311,7 +1318,7 @@ void do_user_addr_fault(struct pt_regs *regs, */ if (user_mode(regs)) { local_irq_enable(); - flags |= FAULT_FLAG_USER; + vmf.flags |= FAULT_FLAG_USER; } else { if (regs->flags & X86_EFLAGS_IF) local_irq_enable(); @@ -1326,11 +1333,11 @@ void do_user_addr_fault(struct pt_regs *regs, * maybe_mkwrite() can create a proper shadow stack PTE. 
*/ if (error_code & X86_PF_SHSTK) - flags |= FAULT_FLAG_WRITE; + vmf.flags |= FAULT_FLAG_WRITE; if (error_code & X86_PF_WRITE) - flags |= FAULT_FLAG_WRITE; + vmf.flags |= FAULT_FLAG_WRITE; if (error_code & X86_PF_INSTR) - flags |= FAULT_FLAG_INSTRUCTION; + vmf.flags |= FAULT_FLAG_INSTRUCTION; #ifdef CONFIG_X86_64 /* @@ -1350,26 +1357,11 @@ void do_user_addr_fault(struct pt_regs *regs, } #endif - if (!(flags & FAULT_FLAG_USER)) - goto lock_mmap; - - vma = lock_vma_under_rcu(mm, address); - if (!vma) - goto lock_mmap; - - if (unlikely(access_error(error_code, vma))) { - vma_end_read(vma); - goto lock_mmap; - } - fault = handle_mm_fault(vma, address, flags | FAULT_FLAG_VMA_LOCK, regs); - if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED))) - vma_end_read(vma); - - if (!(fault & VM_FAULT_RETRY)) { - count_vm_vma_lock_event(VMA_LOCK_SUCCESS); + fault = try_vma_locked_page_fault(&vmf); + if (fault == VM_FAULT_NONE) + goto retry; + if (!(fault & VM_FAULT_RETRY)) goto done; - } - count_vm_vma_lock_event(VMA_LOCK_RETRY); /* Quick path to respond to signals */ if (fault_signal_pending(fault, regs)) { @@ -1379,7 +1371,6 @@ void do_user_addr_fault(struct pt_regs *regs, ARCH_DEFAULT_PKEY); return; } -lock_mmap: retry: vma = lock_mm_and_find_vma(mm, address, regs); @@ -1410,7 +1401,7 @@ void do_user_addr_fault(struct pt_regs *regs, * userland). The return to userland is identified whenever * FAULT_FLAG_USER|FAULT_FLAG_KILLABLE are both set in flags. */ - fault = handle_mm_fault(vma, address, flags, regs); + fault = handle_mm_fault(vma, address, vmf.flags, regs); if (fault_signal_pending(fault, regs)) { /* @@ -1434,7 +1425,7 @@ void do_user_addr_fault(struct pt_regs *regs, * that we made any progress. Handle this case first. */ if (unlikely(fault & VM_FAULT_RETRY)) { - flags |= FAULT_FLAG_TRIED; + vmf.flags |= FAULT_FLAG_TRIED; goto retry; } From patchwork Mon Aug 21 12:30:50 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kefeng Wang X-Patchwork-Id: 13359382 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 1CB7CEE4996 for ; Mon, 21 Aug 2023 12:31:57 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-ID:Date:Subject:CC:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=yo0vyhNKjq5VjfkD1eBB4M9sY4ibM+VPXf6t+vwhOAY=; b=hnWV9rjs2089nC Lwz+BPQi9BesfV8nn1B5s5PErE/Wzd96rk5JNEWFu/uDwDjNve6gfsIx7YVsYStdkPwyk6J1mTU2m 1SaLDII7SHzjf2bmdZhZbq0xkuLwEIDV9ThBv8AdRQcykKXuHQl+8z/6Hv7Agvc63xAG5s51ujFl7 tSpGb2tN6QdtQNISezr3o1AawEjDEqJ7twwGVDdzHbrJvnoOjJK0Mh+DMuM8h1yiRJTFfWyQJ1vQx APZaZT3ftumXV7w/jsT+cFnT06mqmfCF3zf2pArDXXMZIdFS7Kt5OyshXfrnKWlHsx6y1f2+E+nC4 3ipNHgbJXcNoxVji5OoA==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1qY44K-00DzAC-0s; Mon, 21 Aug 2023 12:31:40 +0000 Received: from szxga08-in.huawei.com ([45.249.212.255]) by 
bombadil.infradead.org with esmtps (Exim 4.96 #2 (Red Hat Linux)) id 1qY441-00Dyrn-0w; Mon, 21 Aug 2023 12:31:24 +0000 Received: from dggpemm100001.china.huawei.com (unknown [172.30.72.56]) by szxga08-in.huawei.com (SkyGuard) with ESMTP id 4RTsHg0Sf6z1L9Pp; Mon, 21 Aug 2023 20:29:51 +0800 (CST) Received: from localhost.localdomain (10.175.112.125) by dggpemm100001.china.huawei.com (7.185.36.93) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.31; Mon, 21 Aug 2023 20:31:16 +0800 From: Kefeng Wang To: Andrew Morton , CC: , , Russell King , Catalin Marinas , Will Deacon , Huacai Chen , WANG Xuerui , Michael Ellerman , Nicholas Piggin , Christophe Leroy , Paul Walmsley , Palmer Dabbelt , Albert Ou , Alexander Gordeev , Gerald Schaefer , Heiko Carstens , Vasily Gorbik , Christian Borntraeger , Sven Schnelle , Dave Hansen , Andy Lutomirski , Peter Zijlstra , Thomas Gleixner , Ingo Molnar , Borislav Petkov , , "H . Peter Anvin" , , , , , , , Kefeng Wang Subject: [PATCH rfc v2 04/10] s390: mm: use try_vma_locked_page_fault() Date: Mon, 21 Aug 2023 20:30:50 +0800 Message-ID: <20230821123056.2109942-5-wangkefeng.wang@huawei.com> X-Mailer: git-send-email 2.27.0 In-Reply-To: <20230821123056.2109942-1-wangkefeng.wang@huawei.com> References: <20230821123056.2109942-1-wangkefeng.wang@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.175.112.125] X-ClientProxiedBy: dggems705-chm.china.huawei.com (10.3.19.182) To dggpemm100001.china.huawei.com (7.185.36.93) X-CFilter-Loop: Reflected X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230821_053121_874430_AF0E4475 X-CRM114-Status: GOOD ( 15.81 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Use new try_vma_locked_page_fault() helper to simplify code. No functional change intended. Signed-off-by: Kefeng Wang --- arch/s390/mm/fault.c | 66 ++++++++++++++++++-------------------------- 1 file changed, 27 insertions(+), 39 deletions(-) diff --git a/arch/s390/mm/fault.c b/arch/s390/mm/fault.c index 099c4824dd8a..fbbdebde6ea7 100644 --- a/arch/s390/mm/fault.c +++ b/arch/s390/mm/fault.c @@ -357,16 +357,18 @@ static noinline void do_fault_error(struct pt_regs *regs, vm_fault_t fault) static inline vm_fault_t do_exception(struct pt_regs *regs, int access) { struct gmap *gmap; - struct task_struct *tsk; - struct mm_struct *mm; struct vm_area_struct *vma; enum fault_type type; - unsigned long address; - unsigned int flags; + struct mm_struct *mm = current->mm; + unsigned long address = get_fault_address(regs); vm_fault_t fault; bool is_write; + struct vm_fault vmf = { + .real_address = address, + .flags = FAULT_FLAG_DEFAULT, + .vm_flags = access, + }; - tsk = current; /* * The instruction that caused the program check has * been nullified. Don't signal single step via SIGTRAP. 
@@ -376,8 +378,6 @@ static inline vm_fault_t do_exception(struct pt_regs *regs, int access) if (kprobe_page_fault(regs, 14)) return 0; - mm = tsk->mm; - address = get_fault_address(regs); is_write = fault_is_write(regs); /* @@ -398,45 +398,33 @@ static inline vm_fault_t do_exception(struct pt_regs *regs, int access) } perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address); - flags = FAULT_FLAG_DEFAULT; if (user_mode(regs)) - flags |= FAULT_FLAG_USER; + vmf.flags |= FAULT_FLAG_USER; if (is_write) - access = VM_WRITE; - if (access == VM_WRITE) - flags |= FAULT_FLAG_WRITE; - if (!(flags & FAULT_FLAG_USER)) - goto lock_mmap; - vma = lock_vma_under_rcu(mm, address); - if (!vma) - goto lock_mmap; - if (!(vma->vm_flags & access)) { - vma_end_read(vma); - goto lock_mmap; - } - fault = handle_mm_fault(vma, address, flags | FAULT_FLAG_VMA_LOCK, regs); - if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED))) - vma_end_read(vma); - if (!(fault & VM_FAULT_RETRY)) { - count_vm_vma_lock_event(VMA_LOCK_SUCCESS); - if (likely(!(fault & VM_FAULT_ERROR))) - fault = 0; + vmf.vm_flags = VM_WRITE; + if (vmf.vm_flags == VM_WRITE) + vmf.flags |= FAULT_FLAG_WRITE; + + fault = try_vma_locked_page_fault(&vmf); + if (fault == VM_FAULT_NONE) + goto lock_mm; + if (!(fault & VM_FAULT_RETRY)) goto out; - } - count_vm_vma_lock_event(VMA_LOCK_RETRY); + /* Quick path to respond to signals */ if (fault_signal_pending(fault, regs)) { fault = VM_FAULT_SIGNAL; goto out; } -lock_mmap: + +lock_mm: mmap_read_lock(mm); gmap = NULL; if (IS_ENABLED(CONFIG_PGSTE) && type == GMAP_FAULT) { gmap = (struct gmap *) S390_lowcore.gmap; current->thread.gmap_addr = address; - current->thread.gmap_write_flag = !!(flags & FAULT_FLAG_WRITE); + current->thread.gmap_write_flag = !!(vmf.flags & FAULT_FLAG_WRITE); current->thread.gmap_int_code = regs->int_code & 0xffff; address = __gmap_translate(gmap, address); if (address == -EFAULT) { @@ -444,7 +432,7 @@ static inline vm_fault_t do_exception(struct pt_regs *regs, int access) goto out_up; } if (gmap->pfault_enabled) - flags |= FAULT_FLAG_RETRY_NOWAIT; + vmf.flags |= FAULT_FLAG_RETRY_NOWAIT; } retry: @@ -466,7 +454,7 @@ static inline vm_fault_t do_exception(struct pt_regs *regs, int access) * we can handle it.. */ fault = VM_FAULT_BADACCESS; - if (unlikely(!(vma->vm_flags & access))) + if (unlikely(!(vma->vm_flags & vmf.vm_flags))) goto out_up; /* @@ -474,10 +462,10 @@ static inline vm_fault_t do_exception(struct pt_regs *regs, int access) * make sure we exit gracefully rather than endlessly redo * the fault. 
*/ - fault = handle_mm_fault(vma, address, flags, regs); + fault = handle_mm_fault(vma, address, vmf.flags, regs); if (fault_signal_pending(fault, regs)) { fault = VM_FAULT_SIGNAL; - if (flags & FAULT_FLAG_RETRY_NOWAIT) + if (vmf.flags & FAULT_FLAG_RETRY_NOWAIT) goto out_up; goto out; } @@ -497,7 +485,7 @@ static inline vm_fault_t do_exception(struct pt_regs *regs, int access) if (fault & VM_FAULT_RETRY) { if (IS_ENABLED(CONFIG_PGSTE) && gmap && - (flags & FAULT_FLAG_RETRY_NOWAIT)) { + (vmf.flags & FAULT_FLAG_RETRY_NOWAIT)) { /* * FAULT_FLAG_RETRY_NOWAIT has been set, mmap_lock has * not been released @@ -506,8 +494,8 @@ static inline vm_fault_t do_exception(struct pt_regs *regs, int access) fault = VM_FAULT_PFAULT; goto out_up; } - flags &= ~FAULT_FLAG_RETRY_NOWAIT; - flags |= FAULT_FLAG_TRIED; + vmf.flags &= ~FAULT_FLAG_RETRY_NOWAIT; + vmf.flags |= FAULT_FLAG_TRIED; mmap_read_lock(mm); goto retry; } From patchwork Mon Aug 21 12:30:51 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kefeng Wang X-Patchwork-Id: 13359384 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 5A655EE49A6 for ; Mon, 21 Aug 2023 12:32:10 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-ID:Date:Subject:CC:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=FSaqq0unrwmLJJBLaJRmxz/63iATtRRxGo9qpOvZqOY=; b=C2Pe81Cg4BdLeO YD/X+B5zGPRyPM9t6ldvO7cw9PhBJdkd5Kb0kUHcazraTcW/rhDzbwvaLP9HCyGVKxcPNvGeAh67p MrE18F7qQcfggRUsAjvGlaKhpYs2Fr3SlvzsROVrlv2i7Qt+kMRrQnD4PungB8Pe0dYXqWCQxV9PD yXB1qajf6K6Owp7SsT3cUCjq75QBdJ/e3u4eqZ/zpZAW4p5s13Jc84QOoPE7ceQJLWnijOqKVP8WM /XrSkj+2S31weo10XxTV9Fz6eNC2nPCIjY29W7ovSwnbCKzZ4W8Wl7JhSDfOg5W5JdpkOfZ6R9oPU fci3I8g61B4go8XOA0Ag==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1qY44T-00DzG1-0a; Mon, 21 Aug 2023 12:31:49 +0000 Received: from szxga02-in.huawei.com ([45.249.212.188]) by bombadil.infradead.org with esmtps (Exim 4.96 #2 (Red Hat Linux)) id 1qY442-00Dys4-2x; Mon, 21 Aug 2023 12:31:28 +0000 Received: from dggpemm100001.china.huawei.com (unknown [172.30.72.53]) by szxga02-in.huawei.com (SkyGuard) with ESMTP id 4RTsFF3Rj6zNnTN; Mon, 21 Aug 2023 20:27:45 +0800 (CST) Received: from localhost.localdomain (10.175.112.125) by dggpemm100001.china.huawei.com (7.185.36.93) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.31; Mon, 21 Aug 2023 20:31:17 +0800 From: Kefeng Wang To: Andrew Morton , CC: , , Russell King , Catalin Marinas , Will Deacon , Huacai Chen , WANG Xuerui , Michael Ellerman , Nicholas Piggin , Christophe Leroy , Paul Walmsley , Palmer Dabbelt , Albert Ou , Alexander Gordeev , Gerald Schaefer , Heiko Carstens , Vasily Gorbik , Christian Borntraeger , Sven Schnelle , Dave Hansen , Andy Lutomirski , Peter Zijlstra , Thomas Gleixner , Ingo Molnar , Borislav 
Petkov , , "H . Peter Anvin" , , , , , , , Kefeng Wang Subject: [PATCH rfc v2 05/10] powerpc: mm: use try_vma_locked_page_fault() Date: Mon, 21 Aug 2023 20:30:51 +0800 Message-ID: <20230821123056.2109942-6-wangkefeng.wang@huawei.com> X-Mailer: git-send-email 2.27.0 In-Reply-To: <20230821123056.2109942-1-wangkefeng.wang@huawei.com> References: <20230821123056.2109942-1-wangkefeng.wang@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.175.112.125] X-ClientProxiedBy: dggems705-chm.china.huawei.com (10.3.19.182) To dggpemm100001.china.huawei.com (7.185.36.93) X-CFilter-Loop: Reflected X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230821_053123_526355_352F1653 X-CRM114-Status: GOOD ( 16.64 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Use new try_vma_locked_page_fault() helper to simplify code. No functional change intended. Signed-off-by: Kefeng Wang --- arch/powerpc/mm/fault.c | 66 ++++++++++++++++++++--------------------- 1 file changed, 32 insertions(+), 34 deletions(-) diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c index b1723094d464..52f9546e020e 100644 --- a/arch/powerpc/mm/fault.c +++ b/arch/powerpc/mm/fault.c @@ -391,6 +391,22 @@ static int page_fault_is_bad(unsigned long err) #define page_fault_is_bad(__err) ((__err) & DSISR_BAD_FAULT_32S) #endif +#ifdef CONFIG_PER_VMA_LOCK +bool arch_vma_access_error(struct vm_area_struct *vma, struct vm_fault *vmf) +{ + int is_exec = TRAP(vmf->regs) == INTERRUPT_INST_STORAGE; + int is_write = page_fault_is_write(vmf->fault_code); + + if (unlikely(access_pkey_error(is_write, is_exec, + (vmf->fault_code & DSISR_KEYFAULT), vma))) + return true; + + if (unlikely(access_error(is_write, is_exec, vma))) + return true; + return false; +} +#endif + /* * For 600- and 800-family processors, the error_code parameter is DSISR * for a data fault, SRR1 for an instruction fault. 
@@ -407,12 +423,18 @@ static int ___do_page_fault(struct pt_regs *regs, unsigned long address, { struct vm_area_struct * vma; struct mm_struct *mm = current->mm; - unsigned int flags = FAULT_FLAG_DEFAULT; int is_exec = TRAP(regs) == INTERRUPT_INST_STORAGE; int is_user = user_mode(regs); int is_write = page_fault_is_write(error_code); vm_fault_t fault, major = 0; bool kprobe_fault = kprobe_page_fault(regs, 11); + struct vm_fault vmf = { + .real_address = address, + .fault_code = error_code, + .regs = regs, + .flags = FAULT_FLAG_DEFAULT, + }; + if (unlikely(debugger_fault_handler(regs) || kprobe_fault)) return 0; @@ -463,45 +485,21 @@ static int ___do_page_fault(struct pt_regs *regs, unsigned long address, * mmap_lock held */ if (is_user) - flags |= FAULT_FLAG_USER; + vmf.flags |= FAULT_FLAG_USER; if (is_write) - flags |= FAULT_FLAG_WRITE; + vmf.flags |= FAULT_FLAG_WRITE; if (is_exec) - flags |= FAULT_FLAG_INSTRUCTION; + vmf.flags |= FAULT_FLAG_INSTRUCTION; - if (!(flags & FAULT_FLAG_USER)) - goto lock_mmap; - - vma = lock_vma_under_rcu(mm, address); - if (!vma) - goto lock_mmap; - - if (unlikely(access_pkey_error(is_write, is_exec, - (error_code & DSISR_KEYFAULT), vma))) { - vma_end_read(vma); - goto lock_mmap; - } - - if (unlikely(access_error(is_write, is_exec, vma))) { - vma_end_read(vma); - goto lock_mmap; - } - - fault = handle_mm_fault(vma, address, flags | FAULT_FLAG_VMA_LOCK, regs); - if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED))) - vma_end_read(vma); - - if (!(fault & VM_FAULT_RETRY)) { - count_vm_vma_lock_event(VMA_LOCK_SUCCESS); + fault = try_vma_locked_page_fault(&vmf); + if (fault == VM_FAULT_NONE) + goto retry; + if (!(fault & VM_FAULT_RETRY)) goto done; - } - count_vm_vma_lock_event(VMA_LOCK_RETRY); if (fault_signal_pending(fault, regs)) return user_mode(regs) ? 0 : SIGBUS; -lock_mmap: - /* When running in the kernel we expect faults to occur only to * addresses in user space. All other faults represent errors in the * kernel and should generate an OOPS. Unfortunately, in the case of an @@ -528,7 +526,7 @@ static int ___do_page_fault(struct pt_regs *regs, unsigned long address, * make sure we exit gracefully rather than endlessly redo * the fault. */ - fault = handle_mm_fault(vma, address, flags, regs); + fault = handle_mm_fault(vma, address, vmf.flags, regs); major |= fault & VM_FAULT_MAJOR; @@ -544,7 +542,7 @@ static int ___do_page_fault(struct pt_regs *regs, unsigned long address, * case. 
*/ if (unlikely(fault & VM_FAULT_RETRY)) { - flags |= FAULT_FLAG_TRIED; + vmf.flags |= FAULT_FLAG_TRIED; goto retry; } From patchwork Mon Aug 21 12:30:52 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kefeng Wang X-Patchwork-Id: 13359383 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 8880DEE49AA for ; Mon, 21 Aug 2023 12:32:04 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-ID:Date:Subject:CC:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=nftgRGLOZheKK5wxn+RQs+eDScNd16wQHgryfaLMZOY=; b=BR8HFoIrwp+fsi z4xztDHHaOszh7n28PX0HCSoTmC+Mia8mqVjJPEUaFVSa6LG620P5UfpHkSBnWsWtkOYsFSsBolAB mJPUVLDGjRtmouWvIEMcy50bgwc5aSw+xouZC7Xejen6e98uDuGbrGKEDD3iuoGGgkHZ7F+GyxjYf /spZcXTabnYvPfLGHTEN71FnjEnGJiaO5lPbIoj4GxY2fqegXFK0+/hyhDWZXValbGJVtC9gB4PxY Qz1nViIMTPyKIfQjze9Ix3nlERnq3bCkS0VqKZyXOnhSRXN792A3VOI64F8DqewtjldHG/gHnj4Bz b/8byFp9NnAMEZPXpmwA==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1qY44K-00DzAW-2c; Mon, 21 Aug 2023 12:31:40 +0000 Received: from szxga02-in.huawei.com ([45.249.212.188]) by bombadil.infradead.org with esmtps (Exim 4.96 #2 (Red Hat Linux)) id 1qY442-00Dysr-30; Mon, 21 Aug 2023 12:31:26 +0000 Received: from dggpemm100001.china.huawei.com (unknown [172.30.72.53]) by szxga02-in.huawei.com (SkyGuard) with ESMTP id 4RTsFG5XSpzNnTc; Mon, 21 Aug 2023 20:27:46 +0800 (CST) Received: from localhost.localdomain (10.175.112.125) by dggpemm100001.china.huawei.com (7.185.36.93) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.31; Mon, 21 Aug 2023 20:31:19 +0800 From: Kefeng Wang To: Andrew Morton , CC: , , Russell King , Catalin Marinas , Will Deacon , Huacai Chen , WANG Xuerui , Michael Ellerman , Nicholas Piggin , Christophe Leroy , Paul Walmsley , Palmer Dabbelt , Albert Ou , Alexander Gordeev , Gerald Schaefer , Heiko Carstens , Vasily Gorbik , Christian Borntraeger , Sven Schnelle , Dave Hansen , Andy Lutomirski , Peter Zijlstra , Thomas Gleixner , Ingo Molnar , Borislav Petkov , , "H . 
Peter Anvin" , , , , , , , Kefeng Wang Subject: [PATCH rfc v2 06/10] riscv: mm: use try_vma_locked_page_fault() Date: Mon, 21 Aug 2023 20:30:52 +0800 Message-ID: <20230821123056.2109942-7-wangkefeng.wang@huawei.com> X-Mailer: git-send-email 2.27.0 In-Reply-To: <20230821123056.2109942-1-wangkefeng.wang@huawei.com> References: <20230821123056.2109942-1-wangkefeng.wang@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.175.112.125] X-ClientProxiedBy: dggems705-chm.china.huawei.com (10.3.19.182) To dggpemm100001.china.huawei.com (7.185.36.93) X-CFilter-Loop: Reflected X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230821_053123_423436_0F979E3E X-CRM114-Status: GOOD ( 15.09 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Use new try_vma_locked_page_fault() helper to simplify code. No functional change intended. Signed-off-by: Kefeng Wang --- arch/riscv/mm/fault.c | 58 ++++++++++++++++++------------------------- 1 file changed, 24 insertions(+), 34 deletions(-) diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c index 6115d7514972..b46129b636f2 100644 --- a/arch/riscv/mm/fault.c +++ b/arch/riscv/mm/fault.c @@ -215,6 +215,13 @@ static inline bool access_error(unsigned long cause, struct vm_area_struct *vma) return false; } +#ifdef CONFIG_PER_VMA_LOCK +bool arch_vma_access_error(struct vm_area_struct *vma, struct vm_fault *vmf) +{ + return access_error(vmf->fault_code, vma); +} +#endif + /* * This routine handles page faults. It determines the address and the * problem, and then passes it off to one of the appropriate routines. 
@@ -223,17 +230,16 @@ void handle_page_fault(struct pt_regs *regs) { struct task_struct *tsk; struct vm_area_struct *vma; - struct mm_struct *mm; - unsigned long addr, cause; - unsigned int flags = FAULT_FLAG_DEFAULT; + struct mm_struct *mm = current->mm; + unsigned long addr = regs->badaddr; + unsigned long cause = regs->cause; int code = SEGV_MAPERR; vm_fault_t fault; - - cause = regs->cause; - addr = regs->badaddr; - - tsk = current; - mm = tsk->mm; + struct vm_fault vmf = { + .real_address = addr, + .fault_code = cause, + .flags = FAULT_FLAG_DEFAULT, + }; if (kprobe_page_fault(regs, cause)) return; @@ -268,7 +274,7 @@ void handle_page_fault(struct pt_regs *regs) } if (user_mode(regs)) - flags |= FAULT_FLAG_USER; + vmf.flags |= FAULT_FLAG_USER; if (!user_mode(regs) && addr < TASK_SIZE && unlikely(!(regs->status & SR_SUM))) { if (fixup_exception(regs)) @@ -280,37 +286,21 @@ void handle_page_fault(struct pt_regs *regs) perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, addr); if (cause == EXC_STORE_PAGE_FAULT) - flags |= FAULT_FLAG_WRITE; + vmf.flags |= FAULT_FLAG_WRITE; else if (cause == EXC_INST_PAGE_FAULT) - flags |= FAULT_FLAG_INSTRUCTION; - if (!(flags & FAULT_FLAG_USER)) - goto lock_mmap; - - vma = lock_vma_under_rcu(mm, addr); - if (!vma) - goto lock_mmap; + vmf.flags |= FAULT_FLAG_INSTRUCTION; - if (unlikely(access_error(cause, vma))) { - vma_end_read(vma); - goto lock_mmap; - } - - fault = handle_mm_fault(vma, addr, flags | FAULT_FLAG_VMA_LOCK, regs); - if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED))) - vma_end_read(vma); - - if (!(fault & VM_FAULT_RETRY)) { - count_vm_vma_lock_event(VMA_LOCK_SUCCESS); + fault = try_vma_locked_page_fault(&vmf); + if (fault == VM_FAULT_NONE) + goto retry; + if (!(fault & VM_FAULT_RETRY)) goto done; - } - count_vm_vma_lock_event(VMA_LOCK_RETRY); if (fault_signal_pending(fault, regs)) { if (!user_mode(regs)) no_context(regs, addr); return; } -lock_mmap: retry: vma = lock_mm_and_find_vma(mm, addr, regs); @@ -337,7 +327,7 @@ void handle_page_fault(struct pt_regs *regs) * make sure we exit gracefully rather than endlessly redo * the fault. 
*/ - fault = handle_mm_fault(vma, addr, flags, regs); + fault = handle_mm_fault(vma, addr, vmf.flags, regs); /* * If we need to retry but a fatal signal is pending, handle the @@ -355,7 +345,7 @@ void handle_page_fault(struct pt_regs *regs) return; if (unlikely(fault & VM_FAULT_RETRY)) { - flags |= FAULT_FLAG_TRIED; + vmf.flags |= FAULT_FLAG_TRIED; /* * No need to mmap_read_unlock(mm) as we would From patchwork Mon Aug 21 12:30:53 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kefeng Wang X-Patchwork-Id: 13359386 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id DF29DEE49A6 for ; Mon, 21 Aug 2023 12:32:24 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-ID:Date:Subject:CC:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=qMmVehq3c7nLM5dT6Su2nGfsrVD2VrtCetjpfqq1uRM=; b=0B86CkhGOR8/J9 waG81BSgBqL0VWrlaBJnhV7awy3RLcbh28q8f7U7OdCew9Q16sjwOEmvjybOs1a2Kvf6RNxVt38hU zNczlpm0y2kkHoLauKNVXUm/csxOOfTVJ62bHCgRS7YK22Y7lDqsZux9egWvMfdlzZO+cpema9G4A myA7wNOj3rIbDyZI67tBQ4ahwCJhpuedssXbsu3R9LSTj3LQ0txSSF+SCJHHtLj7nhejkDb6KVYSv lO1tRpSpAEUhADRnaGdZf8wgIdanh+eeAM+3ChqKItijlk6r/QLM3JjAxhAsen6tQaYsJDIjhXBAi 1dSV9tiRZ2N+lA5xcvVQ==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1qY44i-00DzQg-1v; Mon, 21 Aug 2023 12:32:04 +0000 Received: from szxga02-in.huawei.com ([45.249.212.188]) by bombadil.infradead.org with esmtps (Exim 4.96 #2 (Red Hat Linux)) id 1qY444-00DytR-0K; Mon, 21 Aug 2023 12:31:30 +0000 Received: from dggpemm100001.china.huawei.com (unknown [172.30.72.56]) by szxga02-in.huawei.com (SkyGuard) with ESMTP id 4RTsGr51ZwzVks7; Mon, 21 Aug 2023 20:29:08 +0800 (CST) Received: from localhost.localdomain (10.175.112.125) by dggpemm100001.china.huawei.com (7.185.36.93) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.31; Mon, 21 Aug 2023 20:31:20 +0800 From: Kefeng Wang To: Andrew Morton , CC: , , Russell King , Catalin Marinas , Will Deacon , Huacai Chen , WANG Xuerui , Michael Ellerman , Nicholas Piggin , Christophe Leroy , Paul Walmsley , Palmer Dabbelt , Albert Ou , Alexander Gordeev , Gerald Schaefer , Heiko Carstens , Vasily Gorbik , Christian Borntraeger , Sven Schnelle , Dave Hansen , Andy Lutomirski , Peter Zijlstra , Thomas Gleixner , Ingo Molnar , Borislav Petkov , , "H . 
Peter Anvin" , , , , , , , Kefeng Wang Subject: [PATCH rfc v2 07/10] ARM: mm: try VMA lock-based page fault handling first Date: Mon, 21 Aug 2023 20:30:53 +0800 Message-ID: <20230821123056.2109942-8-wangkefeng.wang@huawei.com> X-Mailer: git-send-email 2.27.0 In-Reply-To: <20230821123056.2109942-1-wangkefeng.wang@huawei.com> References: <20230821123056.2109942-1-wangkefeng.wang@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.175.112.125] X-ClientProxiedBy: dggems705-chm.china.huawei.com (10.3.19.182) To dggpemm100001.china.huawei.com (7.185.36.93) X-CFilter-Loop: Reflected X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230821_053124_679498_5CCF4F39 X-CRM114-Status: GOOD ( 16.95 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Attempt VMA lock-based page fault handling first, and fall back to the existing mmap_lock-based handling if that fails. Signed-off-by: Kefeng Wang --- arch/arm/Kconfig | 1 + arch/arm/mm/fault.c | 35 +++++++++++++++++++++++++---------- 2 files changed, 26 insertions(+), 10 deletions(-) diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig index 1a6a6eb48a15..8b6d4507ccee 100644 --- a/arch/arm/Kconfig +++ b/arch/arm/Kconfig @@ -34,6 +34,7 @@ config ARM select ARCH_OPTIONAL_KERNEL_RWX_DEFAULT if CPU_V7 select ARCH_SUPPORTS_ATOMIC_RMW select ARCH_SUPPORTS_HUGETLBFS if ARM_LPAE + select ARCH_SUPPORTS_PER_VMA_LOCK select ARCH_USE_BUILTIN_BSWAP select ARCH_USE_CMPXCHG_LOCKREF select ARCH_USE_MEMTEST diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c index fef62e4a9edd..d53bb028899a 100644 --- a/arch/arm/mm/fault.c +++ b/arch/arm/mm/fault.c @@ -242,8 +242,11 @@ do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs) struct vm_area_struct *vma; int sig, code; vm_fault_t fault; - unsigned int flags = FAULT_FLAG_DEFAULT; - unsigned long vm_flags = VM_ACCESS_FLAGS; + struct vm_fault vmf = { + .real_address = addr, + .flags = FAULT_FLAG_DEFAULT, + .vm_flags = VM_ACCESS_FLAGS, + }; if (kprobe_page_fault(regs, fsr)) return 0; @@ -261,15 +264,15 @@ do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs) goto no_context; if (user_mode(regs)) - flags |= FAULT_FLAG_USER; + vmf.flags |= FAULT_FLAG_USER; if (is_write_fault(fsr)) { - flags |= FAULT_FLAG_WRITE; - vm_flags = VM_WRITE; + vmf.flags |= FAULT_FLAG_WRITE; + vmf.vm_flags = VM_WRITE; } if (fsr & FSR_LNX_PF) { - vm_flags = VM_EXEC; + vmf.vm_flags = VM_EXEC; if (is_permission_fault(fsr) && !user_mode(regs)) die_kernel_fault("execution of memory", @@ -278,6 +281,18 @@ do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs) perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, addr); + fault = try_vma_locked_page_fault(&vmf); + if (fault == VM_FAULT_NONE) + goto retry; + if (!(fault & VM_FAULT_RETRY)) + goto done; + + if (fault_signal_pending(fault, regs)) { + if (!user_mode(regs)) + goto no_context; + return 0; + } + retry: vma = lock_mm_and_find_vma(mm, addr, regs); if (unlikely(!vma)) { @@ -289,10 +304,10 @@ do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs) * ok, we have a good vm_area for this memory access, check the * permissions on the VMA allow for the fault which occurred. 
*/ - if (!(vma->vm_flags & vm_flags)) + if (!(vma->vm_flags & vmf.vm_flags)) fault = VM_FAULT_BADACCESS; else - fault = handle_mm_fault(vma, addr & PAGE_MASK, flags, regs); + fault = handle_mm_fault(vma, addr & PAGE_MASK, vmf.flags, regs); /* If we need to retry but a fatal signal is pending, handle the * signal first. We do not need to release the mmap_lock because @@ -310,13 +325,13 @@ do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs) if (!(fault & VM_FAULT_ERROR)) { if (fault & VM_FAULT_RETRY) { - flags |= FAULT_FLAG_TRIED; + vmf.flags |= FAULT_FLAG_TRIED; goto retry; } } mmap_read_unlock(mm); - +done: /* * Handle the "normal" case first - VM_FAULT_MAJOR */ From patchwork Mon Aug 21 12:30:54 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kefeng Wang X-Patchwork-Id: 13359400 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id CB834EE4996 for ; Mon, 21 Aug 2023 12:32:41 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-ID:Date:Subject:CC:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=k+ozIzfYMderP7N9YUQ+nDCGj0HG3MQSwwzrZu7+K6Y=; b=BLGSqXAaYb4hvh DyP8Vep5izM+/vLOb9qiXbgZ4U4MUidhX9EWpZD1b1cQ7DLPiEJT04VCq9Oy0nv3ntOypY+ptCPJm 2BPEQMClU1wfLfjMcEOgRxK4X8atFFIKCKiYr2VBAgBuFfDb+h3blW3DaXVO63IC5mLRtcQXJKSyL 758JsVNuVDD0i7Rd4/F3Yh0GzXj8Qt2ZnjNOCAbxO/xKUiUpJFZJg2Hf/jKaITjP05IGb4AxxAuJn w6mBU/EesNhjCNkioA0QRo0UQpEf9FC6KNWWB/Guka37cH93w5mWdglhdWgDuwQlpFEsSNMUILRZ6 rPsciQFtmu6hpHw6i9dg==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1qY44y-00Dzd7-2A; Mon, 21 Aug 2023 12:32:20 +0000 Received: from desiato.infradead.org ([2001:8b0:10b:1:d65d:64ff:fe57:4e05]) by bombadil.infradead.org with esmtps (Exim 4.96 #2 (Red Hat Linux)) id 1qY44V-00DzH4-1J; Mon, 21 Aug 2023 12:31:51 +0000 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=desiato.20200630; h=Content-Type:Content-Transfer-Encoding :MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:CC:To:From: Sender:Reply-To:Content-ID:Content-Description; bh=0BPre1mji4iNkocWD89whAC0rMq3U6GyUcT+h0WZYyY=; b=LdnCfJ8Xtky1ZpNQF3xKRxY0aB zzHm67AGzb3dc2INiylHNmB8RnKsRw+fFCG8oWfxC6PCUQh06HiEeQZhbTIWnMYNnIfeAQXwTRVpG Oj9GbBHCtEPFZpoij54tSuY682CRtdFEgctZ6BIlNR4tYAYgvA/Qt5Ih0AsAbbo7sY365UvUdIYhA pSusoOYeDLTSl0cw5LBnAY2fQ7LSTgUKOZNOn17/0DKD0DhwddDop/MHz2ltGptFjCLePvzv0Htnc yclHCcL9qcrNCUi2Vm+/6RgPAU/2905th/g7QOREmhtgiGqjme83IBPaJ/PVvPAw4HhnsJ7OAsEGZ 93BXXRFg==; Received: from szxga02-in.huawei.com ([45.249.212.188]) by desiato.infradead.org with esmtps (Exim 4.96 #2 (Red Hat Linux)) id 1qY44L-001VC8-1Y; Mon, 21 Aug 2023 12:31:50 +0000 Received: from dggpemm100001.china.huawei.com (unknown [172.30.72.55]) by szxga02-in.huawei.com (SkyGuard) with ESMTP id 4RTsGs73pWzVks8; Mon, 21 Aug 2023 20:29:09 +0800 (CST) Received: from 
localhost.localdomain (10.175.112.125) by dggpemm100001.china.huawei.com (7.185.36.93) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.31; Mon, 21 Aug 2023 20:31:21 +0800 From: Kefeng Wang To: Andrew Morton , CC: , , Russell King , Catalin Marinas , Will Deacon , Huacai Chen , WANG Xuerui , Michael Ellerman , Nicholas Piggin , Christophe Leroy , Paul Walmsley , Palmer Dabbelt , Albert Ou , Alexander Gordeev , Gerald Schaefer , Heiko Carstens , Vasily Gorbik , Christian Borntraeger , Sven Schnelle , Dave Hansen , Andy Lutomirski , Peter Zijlstra , Thomas Gleixner , Ingo Molnar , Borislav Petkov , , "H . Peter Anvin" , , , , , , , Kefeng Wang Subject: [PATCH rfc v2 08/10] loongarch: mm: cleanup __do_page_fault() Date: Mon, 21 Aug 2023 20:30:54 +0800 Message-ID: <20230821123056.2109942-9-wangkefeng.wang@huawei.com> X-Mailer: git-send-email 2.27.0 In-Reply-To: <20230821123056.2109942-1-wangkefeng.wang@huawei.com> References: <20230821123056.2109942-1-wangkefeng.wang@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.175.112.125] X-ClientProxiedBy: dggems705-chm.china.huawei.com (10.3.19.182) To dggpemm100001.china.huawei.com (7.185.36.93) X-CFilter-Loop: Reflected X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230821_133144_059265_4F48EE6D X-CRM114-Status: GOOD ( 14.57 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Cleanup __do_page_fault() by reuse bad_area_nosemaphore and bad_area label. Signed-off-by: Kefeng Wang --- arch/loongarch/mm/fault.c | 48 +++++++++++++-------------------------- 1 file changed, 16 insertions(+), 32 deletions(-) diff --git a/arch/loongarch/mm/fault.c b/arch/loongarch/mm/fault.c index e6376e3dce86..5d4c742c4bc5 100644 --- a/arch/loongarch/mm/fault.c +++ b/arch/loongarch/mm/fault.c @@ -157,18 +157,15 @@ static void __kprobes __do_page_fault(struct pt_regs *regs, if (!user_mode(regs)) no_context(regs, write, address); else - do_sigsegv(regs, write, address, si_code); - return; + goto bad_area_nosemaphore; } /* * If we're in an interrupt or have no user * context, we must not take the fault.. */ - if (faulthandler_disabled() || !mm) { - do_sigsegv(regs, write, address, si_code); - return; - } + if (faulthandler_disabled() || !mm) + goto bad_area_nosemaphore; if (user_mode(regs)) flags |= FAULT_FLAG_USER; @@ -178,23 +175,7 @@ static void __kprobes __do_page_fault(struct pt_regs *regs, vma = lock_mm_and_find_vma(mm, address, regs); if (unlikely(!vma)) goto bad_area_nosemaphore; - goto good_area; - -/* - * Something tried to access memory that isn't in our memory map.. - * Fix it, but check if it's kernel or user first.. - */ -bad_area: - mmap_read_unlock(mm); -bad_area_nosemaphore: - do_sigsegv(regs, write, address, si_code); - return; -/* - * Ok, we have a good vm_area for this memory access, so - * we can handle it.. 
- */
-good_area:
 
 	si_code = SEGV_ACCERR;
 
 	if (write) {
@@ -235,22 +216,25 @@ static void __kprobes __do_page_fault(struct pt_regs *regs,
 		 */
 		goto retry;
 	}
+
+	mmap_read_unlock(mm);
+
 	if (unlikely(fault & VM_FAULT_ERROR)) {
-		mmap_read_unlock(mm);
-		if (fault & VM_FAULT_OOM) {
+		if (fault & VM_FAULT_OOM)
 			do_out_of_memory(regs, write, address);
-			return;
-		} else if (fault & VM_FAULT_SIGSEGV) {
-			do_sigsegv(regs, write, address, si_code);
-			return;
-		} else if (fault & (VM_FAULT_SIGBUS|VM_FAULT_HWPOISON|VM_FAULT_HWPOISON_LARGE)) {
+		else if (fault & VM_FAULT_SIGSEGV)
+			goto bad_area_nosemaphore;
+		else if (fault & (VM_FAULT_SIGBUS|VM_FAULT_HWPOISON|VM_FAULT_HWPOISON_LARGE))
 			do_sigbus(regs, write, address, si_code);
-			return;
-		}
-		BUG();
+		else
+			BUG();
 	}
+	return;
+
+bad_area:
 	mmap_read_unlock(mm);
+bad_area_nosemaphore:
+	do_sigsegv(regs, write, address, si_code);
 }
 
 asmlinkage void __kprobes do_page_fault(struct pt_regs *regs,
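
As a reading aid, the error-handling tail of __do_page_fault() after this cleanup looks roughly like the fragment below. It is reconstructed from the hunks above; whitespace and surrounding context are approximate, and it is not standalone code.

	mmap_read_unlock(mm);

	if (unlikely(fault & VM_FAULT_ERROR)) {
		if (fault & VM_FAULT_OOM)
			do_out_of_memory(regs, write, address);
		else if (fault & VM_FAULT_SIGSEGV)
			goto bad_area_nosemaphore;	/* already unlocked above */
		else if (fault & (VM_FAULT_SIGBUS | VM_FAULT_HWPOISON |
				  VM_FAULT_HWPOISON_LARGE))
			do_sigbus(regs, write, address, si_code);
		else
			BUG();
	}
	return;

bad_area:
	mmap_read_unlock(mm);
bad_area_nosemaphore:
	do_sigsegv(regs, write, address, si_code);
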
From patchwork Mon Aug 21 12:30:55 2023
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 13359401
From: Kefeng Wang <wangkefeng.wang@huawei.com>
Subject: [PATCH rfc v2 09/10] loongarch: mm: add access_error() helper
Date: Mon, 21 Aug 2023 20:30:55 +0800
Message-ID: <20230821123056.2109942-10-wangkefeng.wang@huawei.com>
In-Reply-To: <20230821123056.2109942-1-wangkefeng.wang@huawei.com>
References: <20230821123056.2109942-1-wangkefeng.wang@huawei.com>

Add an access_error() helper to check whether a vma is accessible. It
will be used by __do_page_fault() and, later, by the VMA lock-based
page fault handling.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 arch/loongarch/mm/fault.c | 30 ++++++++++++++++++++----------
 1 file changed, 20 insertions(+), 10 deletions(-)

diff --git a/arch/loongarch/mm/fault.c b/arch/loongarch/mm/fault.c
index 5d4c742c4bc5..2a45e9f3a485 100644
--- a/arch/loongarch/mm/fault.c
+++ b/arch/loongarch/mm/fault.c
@@ -126,6 +126,22 @@ static void __kprobes do_sigsegv(struct pt_regs *regs,
 	force_sig_fault(SIGSEGV, si_code, (void __user *)address);
 }
 
+static inline bool access_error(unsigned int flags, struct pt_regs *regs,
+				unsigned long addr, struct vm_area_struct *vma)
+{
+	if (flags & FAULT_FLAG_WRITE) {
+		if (!(vma->vm_flags & VM_WRITE))
+			return true;
+	} else {
+		if (!(vma->vm_flags & VM_READ) && addr != exception_era(regs))
+			return true;
+		if (!(vma->vm_flags & VM_EXEC) && addr == exception_era(regs))
+			return true;
+	}
+
+	return false;
+}
+
 /*
  * This routine handles page faults.  It determines the address,
  * and the problem, and then passes it off to one of the appropriate
@@ -169,6 +185,8 @@ static void __kprobes __do_page_fault(struct pt_regs *regs,
 
 	if (user_mode(regs))
 		flags |= FAULT_FLAG_USER;
+	if (write)
+		flags |= FAULT_FLAG_WRITE;
 
 	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
 retry:
@@ -178,16 +196,8 @@ static void __kprobes __do_page_fault(struct pt_regs *regs,
 
 	si_code = SEGV_ACCERR;
 
-	if (write) {
-		flags |= FAULT_FLAG_WRITE;
-		if (!(vma->vm_flags & VM_WRITE))
-			goto bad_area;
-	} else {
-		if (!(vma->vm_flags & VM_READ) && address != exception_era(regs))
-			goto bad_area;
-		if (!(vma->vm_flags & VM_EXEC) && address == exception_era(regs))
-			goto bad_area;
-	}
+	if (access_error(flags, regs, address, vma))
+		goto bad_area;
 
 	/*
 	 * If for any reason at all we couldn't handle the fault,
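
Spelled out, the helper requires VM_WRITE for a write fault, VM_READ for a data read, and VM_EXEC for an instruction fetch, using exception_era(regs), the LoongArch exception return address, to tell a fetch (faulting address equals the PC) from a read. A minimal sketch of how the caller consumes it, mirroring the hunk above:

	/*
	 * write fault                        -> VMA needs VM_WRITE
	 * data read (address != era)         -> VMA needs VM_READ
	 * instruction fetch (address == era) -> VMA needs VM_EXEC
	 */
	if (access_error(flags, regs, address, vma))
		goto bad_area;	/* reported to userspace as SEGV_ACCERR */
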
From patchwork Mon Aug 21 12:30:56 2023
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 13359387
From: Kefeng Wang <wangkefeng.wang@huawei.com>
Subject: [PATCH rfc v2 10/10] loongarch: mm: try VMA lock-based page fault handling first
Date: Mon, 21 Aug 2023 20:30:56 +0800
Message-ID: <20230821123056.2109942-11-wangkefeng.wang@huawei.com>
In-Reply-To: <20230821123056.2109942-1-wangkefeng.wang@huawei.com>
References: <20230821123056.2109942-1-wangkefeng.wang@huawei.com>

Attempt VMA lock-based page fault handling first, and fall back to the
existing mmap_lock-based handling if that fails.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 arch/loongarch/Kconfig    |  1 +
 arch/loongarch/mm/fault.c | 37 +++++++++++++++++++++++++++++------
 2 files changed, 32 insertions(+), 6 deletions(-)

diff --git a/arch/loongarch/Kconfig b/arch/loongarch/Kconfig
index 2b27b18a63af..6b821f621920 100644
--- a/arch/loongarch/Kconfig
+++ b/arch/loongarch/Kconfig
@@ -56,6 +56,7 @@ config LOONGARCH
 	select ARCH_SUPPORTS_LTO_CLANG
 	select ARCH_SUPPORTS_LTO_CLANG_THIN
 	select ARCH_SUPPORTS_NUMA_BALANCING
+	select ARCH_SUPPORTS_PER_VMA_LOCK
 	select ARCH_USE_BUILTIN_BSWAP
 	select ARCH_USE_CMPXCHG_LOCKREF
 	select ARCH_USE_QUEUED_RWLOCKS
diff --git a/arch/loongarch/mm/fault.c b/arch/loongarch/mm/fault.c
index 2a45e9f3a485..f7ac3a14bb06 100644
--- a/arch/loongarch/mm/fault.c
+++ b/arch/loongarch/mm/fault.c
@@ -142,6 +142,13 @@ static inline bool access_error(unsigned int flags, struct pt_regs *regs,
 	return false;
 }
 
+#ifdef CONFIG_PER_VMA_LOCK
+bool arch_vma_access_error(struct vm_area_struct *vma, struct vm_fault *vmf)
+{
+	return access_error(vmf->flags, vmf->regs, vmf->real_address, vma);
+}
+#endif
+
 /*
  * This routine handles page faults.  It determines the address,
  * and the problem, and then passes it off to one of the appropriate
@@ -151,11 +158,15 @@ static void __kprobes __do_page_fault(struct pt_regs *regs,
 	unsigned long write, unsigned long address)
 {
 	int si_code = SEGV_MAPERR;
-	unsigned int flags = FAULT_FLAG_DEFAULT;
 	struct task_struct *tsk = current;
 	struct mm_struct *mm = tsk->mm;
 	struct vm_area_struct *vma = NULL;
 	vm_fault_t fault;
+	struct vm_fault vmf = {
+		.real_address = address,
+		.regs = regs,
+		.flags = FAULT_FLAG_DEFAULT,
+	};
 
 	if (kprobe_page_fault(regs, current->thread.trap_nr))
 		return;
@@ -184,11 +195,24 @@ static void __kprobes __do_page_fault(struct pt_regs *regs,
 		goto bad_area_nosemaphore;
 
 	if (user_mode(regs))
-		flags |= FAULT_FLAG_USER;
+		vmf.flags |= FAULT_FLAG_USER;
 	if (write)
-		flags |= FAULT_FLAG_WRITE;
+		vmf.flags |= FAULT_FLAG_WRITE;
 
 	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
+
+	fault = try_vma_locked_page_fault(&vmf);
+	if (fault == VM_FAULT_NONE)
+		goto retry;
+	if (!(fault & VM_FAULT_RETRY))
+		goto done;
+
+	if (fault_signal_pending(fault, regs)) {
+		if (!user_mode(regs))
+			no_context(regs, write, address);
+		return;
+	}
+
 retry:
 	vma = lock_mm_and_find_vma(mm, address, regs);
 	if (unlikely(!vma))
@@ -196,7 +220,7 @@ static void __kprobes __do_page_fault(struct pt_regs *regs,
 
 	si_code = SEGV_ACCERR;
 
-	if (access_error(flags, regs, address, vma))
+	if (access_error(vmf.flags, regs, address, vma))
 		goto bad_area;
 
 	/*
@@ -204,7 +228,7 @@ static void __kprobes __do_page_fault(struct pt_regs *regs,
 	 * make sure we exit gracefully rather than endlessly redo
 	 * the fault.
 	 */
-	fault = handle_mm_fault(vma, address, flags, regs);
+	fault = handle_mm_fault(vma, address, vmf.flags, regs);
 
 	if (fault_signal_pending(fault, regs)) {
 		if (!user_mode(regs))
@@ -217,7 +241,7 @@ static void __kprobes __do_page_fault(struct pt_regs *regs,
 		return;
 
 	if (unlikely(fault & VM_FAULT_RETRY)) {
-		flags |= FAULT_FLAG_TRIED;
+		vmf.flags |= FAULT_FLAG_TRIED;
 
 		/*
 		 * No need to mmap_read_unlock(mm) as we would
@@ -229,6 +253,7 @@ static void __kprobes __do_page_fault(struct pt_regs *regs,
 
 	mmap_read_unlock(mm);
 
+done:
 	if (unlikely(fault & VM_FAULT_ERROR)) {
 		if (fault & VM_FAULT_OOM)
 			do_out_of_memory(regs, write, address);