From patchwork Mon Aug 21 12:30:47 2023
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 13359394
From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton
Cc: Russell King, Catalin Marinas, Will Deacon, Huacai Chen, WANG Xuerui,
 Michael Ellerman, Nicholas Piggin, Christophe Leroy, Paul Walmsley,
 Palmer Dabbelt, Albert Ou, Alexander Gordeev, Gerald Schaefer,
 Heiko Carstens, Vasily Gorbik, Christian Borntraeger, Sven Schnelle,
 Dave Hansen, Andy Lutomirski, Peter Zijlstra, Thomas Gleixner,
 Ingo Molnar, Borislav Petkov, "H. Peter Anvin", Kefeng Wang
Subject: [PATCH rfc v2 01/10] mm: add a generic VMA lock-based page fault handler
Date: Mon, 21 Aug 2023 20:30:47 +0800
Message-ID: <20230821123056.2109942-2-wangkefeng.wang@huawei.com>
In-Reply-To: <20230821123056.2109942-1-wangkefeng.wang@huawei.com>
References: <20230821123056.2109942-1-wangkefeng.wang@huawei.com>

ARCH_SUPPORTS_PER_VMA_LOCK is enabled by more and more architectures
(e.g. x86, arm64, powerpc, s390 and riscv), and their implementations
are very similar, which results in duplicated code. Add a generic VMA
lock-based page fault handler, try_vma_locked_page_fault(), to
eliminate the duplication; it also makes it easy to support per-VMA
locks on new architectures. Since different architectures check in
different ways whether a vma is accessible, add the struct pt_regs
pointer, the page fault error code and the required vma flags to
struct vm_fault, so each architecture's page fault code can reuse
struct vm_fault to record the fault context and perform its own
access check.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 include/linux/mm.h       | 17 +++++++++++++++++
 include/linux/mm_types.h |  2 ++
 mm/memory.c              | 39 +++++++++++++++++++++++++++++++++++++++
 3 files changed, 58 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 3f764e84e567..22a6f4c56ff3 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -512,9 +512,12 @@ struct vm_fault {
 		pgoff_t pgoff;			/* Logical page offset based on vma */
 		unsigned long address;		/* Faulting virtual address - masked */
 		unsigned long real_address;	/* Faulting virtual address - unmasked */
+		unsigned long fault_code;	/* Faulting error code during page fault */
+		struct pt_regs *regs;		/* The registers stored during page fault */
 	};
 	enum fault_flag flags;		/* FAULT_FLAG_xxx flags
 					 * XXX: should really be 'const' */
+	vm_flags_t vm_flags;		/* VMA flags to be used for access checking */
 	pmd_t *pmd;			/* Pointer to pmd entry matching
 					 * the 'address' */
 	pud_t *pud;			/* Pointer to pud entry matching
@@ -774,6 +777,9 @@ static inline void assert_fault_locked(struct vm_fault *vmf)
 struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
 					  unsigned long address);
 
+bool arch_vma_access_error(struct vm_area_struct *vma, struct vm_fault *vmf);
+vm_fault_t try_vma_locked_page_fault(struct vm_fault *vmf);
+
 #else /* CONFIG_PER_VMA_LOCK */
 
 static inline bool vma_start_read(struct vm_area_struct *vma)
@@ -801,6 +807,17 @@ static inline void assert_fault_locked(struct vm_fault *vmf)
 	mmap_assert_locked(vmf->vma->vm_mm);
 }
 
+static inline struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
+		unsigned long address)
+{
+	return NULL;
+}
+
+static inline vm_fault_t try_vma_locked_page_fault(struct vm_fault *vmf)
+{
+	return VM_FAULT_NONE;
+}
+
 #endif /* CONFIG_PER_VMA_LOCK */
 
 extern const struct vm_operations_struct vma_dummy_vm_ops;
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index f5ba5b0bc836..702820cea3f9 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -1119,6 +1119,7 @@ typedef __bitwise unsigned int vm_fault_t;
  * fault. Used to decide whether a process gets delivered SIGBUS or
  * just gets major/minor fault counters bumped up.
  *
+ * @VM_FAULT_NONE:		Special case, not starting to handle fault
  * @VM_FAULT_OOM:		Out Of Memory
  * @VM_FAULT_SIGBUS:		Bad access
  * @VM_FAULT_MAJOR:		Page read from storage
@@ -1139,6 +1140,7 @@ typedef __bitwise unsigned int vm_fault_t;
  *
  */
 enum vm_fault_reason {
+	VM_FAULT_NONE           = (__force vm_fault_t)0x000000,
 	VM_FAULT_OOM            = (__force vm_fault_t)0x000001,
 	VM_FAULT_SIGBUS         = (__force vm_fault_t)0x000002,
 	VM_FAULT_MAJOR          = (__force vm_fault_t)0x000004,
diff --git a/mm/memory.c b/mm/memory.c
index 3b4aaa0d2fff..60fe35db5134 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5510,6 +5510,45 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
 	count_vm_vma_lock_event(VMA_LOCK_ABORT);
 	return NULL;
 }
+
+#ifdef CONFIG_PER_VMA_LOCK
+bool __weak arch_vma_access_error(struct vm_area_struct *vma, struct vm_fault *vmf)
+{
+	return (vma->vm_flags & vmf->vm_flags) == 0;
+}
+#endif
+
+vm_fault_t try_vma_locked_page_fault(struct vm_fault *vmf)
+{
+	vm_fault_t fault = VM_FAULT_NONE;
+	struct vm_area_struct *vma;
+
+	if (!(vmf->flags & FAULT_FLAG_USER))
+		return fault;
+
+	vma = lock_vma_under_rcu(current->mm, vmf->real_address);
+	if (!vma)
+		return fault;
+
+	if (arch_vma_access_error(vma, vmf)) {
+		vma_end_read(vma);
+		return fault;
+	}
+
+	fault = handle_mm_fault(vma, vmf->real_address,
+				vmf->flags | FAULT_FLAG_VMA_LOCK, vmf->regs);
+
+	if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
+		vma_end_read(vma);
+
+	if (fault & VM_FAULT_RETRY)
+		count_vm_vma_lock_event(VMA_LOCK_RETRY);
+	else
+		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
+
+	return fault;
+}
+
 #endif /* CONFIG_PER_VMA_LOCK */
 
 #ifndef __PAGETABLE_P4D_FOLDED
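The helper gives every architecture the same fast path: fill in a struct
vm_fault, call try_vma_locked_page_fault(), and fall back to the mmap_lock
slow path only when the helper returns VM_FAULT_NONE (path not attempted)
or VM_FAULT_RETRY. The sketch below shows the intended call-site shape; it
is not part of the patch, arch_do_page_fault() is a hypothetical entry
point, and the elided labels mirror what patches 02-07 do per architecture.

	/* Sketch only: a hypothetical arch fault entry using the new helper. */
	static void arch_do_page_fault(struct pt_regs *regs, unsigned long addr,
				       unsigned long error_code)
	{
		vm_fault_t fault;
		struct vm_fault vmf = {
			.real_address = addr,		/* used for the VMA lookup */
			.fault_code = error_code,	/* read by arch_vma_access_error() */
			.regs = regs,			/* forwarded to handle_mm_fault() */
			.flags = FAULT_FLAG_DEFAULT,
		};

		if (user_mode(regs))
			vmf.flags |= FAULT_FLAG_USER;	/* helper only runs for user faults */

		fault = try_vma_locked_page_fault(&vmf);
		if (fault == VM_FAULT_NONE)
			goto lock_mmap;			/* lockless path was not attempted */
		if (!(fault & VM_FAULT_RETRY))
			goto done;			/* handled under the VMA lock alone */
		/* signal checks, then fall through to the slow path */
	lock_mmap:
		/* ... classic mmap_lock-based handling ... */
	done:
		;
	}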
From patchwork Mon Aug 21 12:30:48 2023
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 13359388
From: Kefeng Wang <wangkefeng.wang@huawei.com>
Subject: [PATCH rfc v2 02/10] arm64: mm: use try_vma_locked_page_fault()
Date: Mon, 21 Aug 2023 20:30:48 +0800
Message-ID: <20230821123056.2109942-3-wangkefeng.wang@huawei.com>
In-Reply-To: <20230821123056.2109942-1-wangkefeng.wang@huawei.com>
References: <20230821123056.2109942-1-wangkefeng.wang@huawei.com>

Use the new try_vma_locked_page_fault() helper to simplify the code,
and pass a struct vm_fault to __do_page_fault() directly instead of
passing each variable independently. No functional change intended.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 arch/arm64/mm/fault.c | 60 ++++++++++++++++---------------------------
 1 file changed, 22 insertions(+), 38 deletions(-)

diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 2e5d1e238af9..2b7a1e610b3e 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -498,9 +498,8 @@ static void do_bad_area(unsigned long far, unsigned long esr,
 #define VM_FAULT_BADACCESS	((__force vm_fault_t)0x020000)
 
 static vm_fault_t __do_page_fault(struct mm_struct *mm,
-				  struct vm_area_struct *vma, unsigned long addr,
-				  unsigned int mm_flags, unsigned long vm_flags,
-				  struct pt_regs *regs)
+				  struct vm_area_struct *vma,
+				  struct vm_fault *vmf)
 {
 	/*
 	 * Ok, we have a good vm_area for this memory access, so we can handle
@@ -508,9 +507,9 @@ static vm_fault_t __do_page_fault(struct mm_struct *mm,
 	 * Check that the permissions on the VMA allow for the fault which
 	 * occurred.
 	 */
-	if (!(vma->vm_flags & vm_flags))
+	if (!(vma->vm_flags & vmf->vm_flags))
 		return VM_FAULT_BADACCESS;
-	return handle_mm_fault(vma, addr, mm_flags, regs);
+	return handle_mm_fault(vma, vmf->real_address, vmf->flags, vmf->regs);
 }
 
 static bool is_el0_instruction_abort(unsigned long esr)
@@ -533,10 +532,12 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
 	const struct fault_info *inf;
 	struct mm_struct *mm = current->mm;
 	vm_fault_t fault;
-	unsigned long vm_flags;
-	unsigned int mm_flags = FAULT_FLAG_DEFAULT;
 	unsigned long addr = untagged_addr(far);
 	struct vm_area_struct *vma;
+	struct vm_fault vmf = {
+		.real_address = addr,
+		.flags = FAULT_FLAG_DEFAULT,
+	};
 
 	if (kprobe_page_fault(regs, esr))
 		return 0;
@@ -549,7 +550,7 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
 		goto no_context;
 
 	if (user_mode(regs))
-		mm_flags |= FAULT_FLAG_USER;
+		vmf.flags |= FAULT_FLAG_USER;
 
 	/*
 	 * vm_flags tells us what bits we must have in vma->vm_flags
@@ -559,20 +560,20 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
 	 */
 	if (is_el0_instruction_abort(esr)) {
 		/* It was exec fault */
-		vm_flags = VM_EXEC;
-		mm_flags |= FAULT_FLAG_INSTRUCTION;
+		vmf.vm_flags = VM_EXEC;
+		vmf.flags |= FAULT_FLAG_INSTRUCTION;
 	} else if (is_write_abort(esr)) {
 		/* It was write fault */
-		vm_flags = VM_WRITE;
-		mm_flags |= FAULT_FLAG_WRITE;
+		vmf.vm_flags = VM_WRITE;
+		vmf.flags |= FAULT_FLAG_WRITE;
 	} else {
 		/* It was read fault */
-		vm_flags = VM_READ;
+		vmf.vm_flags = VM_READ;
 		/* Write implies read */
-		vm_flags |= VM_WRITE;
+		vmf.vm_flags |= VM_WRITE;
 		/* If EPAN is absent then exec implies read */
 		if (!cpus_have_const_cap(ARM64_HAS_EPAN))
-			vm_flags |= VM_EXEC;
+			vmf.vm_flags |= VM_EXEC;
 	}
 
 	if (is_ttbr0_addr(addr) && is_el1_permission_fault(addr, esr, regs)) {
@@ -587,26 +588,11 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
 
 	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, addr);
 
-	if (!(mm_flags & FAULT_FLAG_USER))
-		goto lock_mmap;
-
-	vma = lock_vma_under_rcu(mm, addr);
-	if (!vma)
-		goto lock_mmap;
-
-	if (!(vma->vm_flags & vm_flags)) {
-		vma_end_read(vma);
-		goto lock_mmap;
-	}
-	fault = handle_mm_fault(vma, addr, mm_flags | FAULT_FLAG_VMA_LOCK, regs);
-	if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
-		vma_end_read(vma);
-
-	if (!(fault & VM_FAULT_RETRY)) {
-		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
+	fault = try_vma_locked_page_fault(&vmf);
+	if (fault == VM_FAULT_NONE)
+		goto retry;
+	if (!(fault & VM_FAULT_RETRY))
 		goto done;
-	}
-	count_vm_vma_lock_event(VMA_LOCK_RETRY);
 
 	/* Quick path to respond to signals */
 	if (fault_signal_pending(fault, regs)) {
@@ -614,8 +600,6 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
 			goto no_context;
 		return 0;
 	}
-lock_mmap:
-
 retry:
 	vma = lock_mm_and_find_vma(mm, addr, regs);
 	if (unlikely(!vma)) {
@@ -623,7 +607,7 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
 		goto done;
 	}
 
-	fault = __do_page_fault(mm, vma, addr, mm_flags, vm_flags, regs);
+	fault = __do_page_fault(mm, vma, &vmf);
 
 	/* Quick path to respond to signals */
 	if (fault_signal_pending(fault, regs)) {
@@ -637,7 +621,7 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
 		return 0;
 
 	if (fault & VM_FAULT_RETRY) {
-		mm_flags |= FAULT_FLAG_TRIED;
+		vmf.flags |= FAULT_FLAG_TRIED;
 		goto retry;
 	}
 	mmap_read_unlock(mm);
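The fault == VM_FAULT_NONE comparison above is what makes the single call
site work: vm_fault_t is a bit mask and the new sentinel is its only
all-zero value, so any outcome actually produced by handle_mm_fault() has
at least one bit set. A compact sketch of the three cases a converted
handler distinguishes (illustrative, not kernel code; the slow path
additionally has to check for pending signals before retrying):

	/* Sketch: the three possible outcomes of try_vma_locked_page_fault(). */
	static bool vma_lock_fastpath_done(struct vm_fault *vmf)
	{
		vm_fault_t fault = try_vma_locked_page_fault(vmf);

		if (fault == VM_FAULT_NONE)	/* 0x0: not attempted (kernel fault,
						 * no VMA, or access check failed) */
			return false;
		if (fault & VM_FAULT_RETRY)	/* attempted, redo under mmap_lock */
			return false;
		return true;			/* done, possibly with an error bit set */
	}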
From patchwork Mon Aug 21 12:30:49 2023
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 13359389
From: Kefeng Wang <wangkefeng.wang@huawei.com>
Subject: [PATCH rfc v2 03/10] x86: mm: use try_vma_locked_page_fault()
Date: Mon, 21 Aug 2023 20:30:49 +0800
Message-ID: <20230821123056.2109942-4-wangkefeng.wang@huawei.com>
In-Reply-To: <20230821123056.2109942-1-wangkefeng.wang@huawei.com>
References: <20230821123056.2109942-1-wangkefeng.wang@huawei.com>

Use the new try_vma_locked_page_fault() helper to simplify the code.
No functional change intended.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 arch/x86/mm/fault.c | 55 +++++++++++++++++++--------------------------
 1 file changed, 23 insertions(+), 32 deletions(-)

diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index ab778eac1952..3edc9edc0b28 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -1227,6 +1227,13 @@ do_kern_addr_fault(struct pt_regs *regs, unsigned long hw_error_code,
 }
 NOKPROBE_SYMBOL(do_kern_addr_fault);
 
+#ifdef CONFIG_PER_VMA_LOCK
+bool arch_vma_access_error(struct vm_area_struct *vma, struct vm_fault *vmf)
+{
+	return access_error(vmf->fault_code, vma);
+}
+#endif
+
 /*
  * Handle faults in the user portion of the address space.  Nothing in here
  * should check X86_PF_USER without a specific justification: for almost
@@ -1241,13 +1248,13 @@ void do_user_addr_fault(struct pt_regs *regs,
			unsigned long address)
 {
 	struct vm_area_struct *vma;
-	struct task_struct *tsk;
-	struct mm_struct *mm;
+	struct mm_struct *mm = current->mm;
 	vm_fault_t fault;
-	unsigned int flags = FAULT_FLAG_DEFAULT;
-
-	tsk = current;
-	mm = tsk->mm;
+	struct vm_fault vmf = {
+		.real_address = address,
+		.fault_code = error_code,
+		.flags = FAULT_FLAG_DEFAULT
+	};
 
 	if (unlikely((error_code & (X86_PF_USER | X86_PF_INSTR)) == X86_PF_INSTR)) {
 		/*
@@ -1311,7 +1318,7 @@ void do_user_addr_fault(struct pt_regs *regs,
 	 */
 	if (user_mode(regs)) {
 		local_irq_enable();
-		flags |= FAULT_FLAG_USER;
+		vmf.flags |= FAULT_FLAG_USER;
 	} else {
 		if (regs->flags & X86_EFLAGS_IF)
 			local_irq_enable();
@@ -1326,11 +1333,11 @@ void do_user_addr_fault(struct pt_regs *regs,
 	 * maybe_mkwrite() can create a proper shadow stack PTE.
 	 */
 	if (error_code & X86_PF_SHSTK)
-		flags |= FAULT_FLAG_WRITE;
+		vmf.flags |= FAULT_FLAG_WRITE;
 	if (error_code & X86_PF_WRITE)
-		flags |= FAULT_FLAG_WRITE;
+		vmf.flags |= FAULT_FLAG_WRITE;
 	if (error_code & X86_PF_INSTR)
-		flags |= FAULT_FLAG_INSTRUCTION;
+		vmf.flags |= FAULT_FLAG_INSTRUCTION;
 
 #ifdef CONFIG_X86_64
 	/*
@@ -1350,26 +1357,11 @@ void do_user_addr_fault(struct pt_regs *regs,
 	}
 #endif
 
-	if (!(flags & FAULT_FLAG_USER))
-		goto lock_mmap;
-
-	vma = lock_vma_under_rcu(mm, address);
-	if (!vma)
-		goto lock_mmap;
-
-	if (unlikely(access_error(error_code, vma))) {
-		vma_end_read(vma);
-		goto lock_mmap;
-	}
-	fault = handle_mm_fault(vma, address, flags | FAULT_FLAG_VMA_LOCK, regs);
-	if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
-		vma_end_read(vma);
-
-	if (!(fault & VM_FAULT_RETRY)) {
-		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
+	fault = try_vma_locked_page_fault(&vmf);
+	if (fault == VM_FAULT_NONE)
+		goto retry;
+	if (!(fault & VM_FAULT_RETRY))
 		goto done;
-	}
-	count_vm_vma_lock_event(VMA_LOCK_RETRY);
 
 	/* Quick path to respond to signals */
 	if (fault_signal_pending(fault, regs)) {
@@ -1379,7 +1371,6 @@ void do_user_addr_fault(struct pt_regs *regs,
 				      ARCH_DEFAULT_PKEY);
 		return;
 	}
-lock_mmap:
 
 retry:
 	vma = lock_mm_and_find_vma(mm, address, regs);
@@ -1410,7 +1401,7 @@ void do_user_addr_fault(struct pt_regs *regs,
 	 * userland). The return to userland is identified whenever
 	 * FAULT_FLAG_USER|FAULT_FLAG_KILLABLE are both set in flags.
	 */
-	fault = handle_mm_fault(vma, address, flags, regs);
+	fault = handle_mm_fault(vma, address, vmf.flags, regs);
 
 	if (fault_signal_pending(fault, regs)) {
 		/*
@@ -1434,7 +1425,7 @@ void do_user_addr_fault(struct pt_regs *regs,
 	 * that we made any progress. Handle this case first.
	 */
 	if (unlikely(fault & VM_FAULT_RETRY)) {
-		flags |= FAULT_FLAG_TRIED;
+		vmf.flags |= FAULT_FLAG_TRIED;
 		goto retry;
 	}
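x86 can supply its own check because the generic arch_vma_access_error()
in mm/memory.c is declared __weak: a non-weak definition in arch code
replaces it at link time, and architectures content with the default
vma->vm_flags test simply define nothing. A userspace toy showing the
same linker behaviour (an analogy for illustration, not the kernel build):

	/* default.c -- plays the role of mm/memory.c */
	int __attribute__((weak)) arch_hook(void)
	{
		return 0;	/* weak: used only if no strong definition exists */
	}

	/* override.c -- plays the role of arch/x86/mm/fault.c */
	int arch_hook(void)
	{
		return 1;	/* strong: silently wins over the weak default */
	}

	/* main.c -- link all three objects together; prints 1 */
	#include <stdio.h>
	int arch_hook(void);
	int main(void) { printf("%d\n", arch_hook()); return 0; }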
From patchwork Mon Aug 21 12:30:50 2023
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 13359391
From: Kefeng Wang <wangkefeng.wang@huawei.com>
Subject: [PATCH rfc v2 04/10] s390: mm: use try_vma_locked_page_fault()
Date: Mon, 21 Aug 2023 20:30:50 +0800
Message-ID: <20230821123056.2109942-5-wangkefeng.wang@huawei.com>
In-Reply-To: <20230821123056.2109942-1-wangkefeng.wang@huawei.com>
References: <20230821123056.2109942-1-wangkefeng.wang@huawei.com>

Use the new try_vma_locked_page_fault() helper to simplify the code.
No functional change intended.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 arch/s390/mm/fault.c | 66 ++++++++++++++++++--------------------------
 1 file changed, 27 insertions(+), 39 deletions(-)

diff --git a/arch/s390/mm/fault.c b/arch/s390/mm/fault.c
index 099c4824dd8a..fbbdebde6ea7 100644
--- a/arch/s390/mm/fault.c
+++ b/arch/s390/mm/fault.c
@@ -357,16 +357,18 @@ static noinline void do_fault_error(struct pt_regs *regs, vm_fault_t fault)
 static inline vm_fault_t do_exception(struct pt_regs *regs, int access)
 {
 	struct gmap *gmap;
-	struct task_struct *tsk;
-	struct mm_struct *mm;
 	struct vm_area_struct *vma;
 	enum fault_type type;
-	unsigned long address;
-	unsigned int flags;
+	struct mm_struct *mm = current->mm;
+	unsigned long address = get_fault_address(regs);
 	vm_fault_t fault;
 	bool is_write;
+	struct vm_fault vmf = {
+		.real_address = address,
+		.flags = FAULT_FLAG_DEFAULT,
+		.vm_flags = access,
+	};
 
-	tsk = current;
 	/*
 	 * The instruction that caused the program check has
 	 * been nullified. Don't signal single step via SIGTRAP.
	 */
@@ -376,8 +378,6 @@ static inline vm_fault_t do_exception(struct pt_regs *regs, int access)
 	if (kprobe_page_fault(regs, 14))
 		return 0;
 
-	mm = tsk->mm;
-	address = get_fault_address(regs);
 	is_write = fault_is_write(regs);
 
 	/*
@@ -398,45 +398,33 @@ static inline vm_fault_t do_exception(struct pt_regs *regs, int access)
 	}
 
 	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
-	flags = FAULT_FLAG_DEFAULT;
 	if (user_mode(regs))
-		flags |= FAULT_FLAG_USER;
+		vmf.flags |= FAULT_FLAG_USER;
 	if (is_write)
-		access = VM_WRITE;
-	if (access == VM_WRITE)
-		flags |= FAULT_FLAG_WRITE;
-	if (!(flags & FAULT_FLAG_USER))
-		goto lock_mmap;
-	vma = lock_vma_under_rcu(mm, address);
-	if (!vma)
-		goto lock_mmap;
-	if (!(vma->vm_flags & access)) {
-		vma_end_read(vma);
-		goto lock_mmap;
-	}
-	fault = handle_mm_fault(vma, address, flags | FAULT_FLAG_VMA_LOCK, regs);
-	if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
-		vma_end_read(vma);
-	if (!(fault & VM_FAULT_RETRY)) {
-		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
-		if (likely(!(fault & VM_FAULT_ERROR)))
-			fault = 0;
+		vmf.vm_flags = VM_WRITE;
+	if (vmf.vm_flags == VM_WRITE)
+		vmf.flags |= FAULT_FLAG_WRITE;
+
+	fault = try_vma_locked_page_fault(&vmf);
+	if (fault == VM_FAULT_NONE)
+		goto lock_mm;
+	if (!(fault & VM_FAULT_RETRY))
 		goto out;
-	}
-	count_vm_vma_lock_event(VMA_LOCK_RETRY);
+
 	/* Quick path to respond to signals */
 	if (fault_signal_pending(fault, regs)) {
 		fault = VM_FAULT_SIGNAL;
 		goto out;
 	}
-lock_mmap:
+
+lock_mm:
 	mmap_read_lock(mm);
 
 	gmap = NULL;
 	if (IS_ENABLED(CONFIG_PGSTE) && type == GMAP_FAULT) {
 		gmap = (struct gmap *) S390_lowcore.gmap;
 		current->thread.gmap_addr = address;
-		current->thread.gmap_write_flag = !!(flags & FAULT_FLAG_WRITE);
+		current->thread.gmap_write_flag = !!(vmf.flags & FAULT_FLAG_WRITE);
 		current->thread.gmap_int_code = regs->int_code & 0xffff;
 		address = __gmap_translate(gmap, address);
 		if (address == -EFAULT) {
@@ -444,7 +432,7 @@ static inline vm_fault_t do_exception(struct pt_regs *regs, int access)
 			goto out_up;
 		}
 		if (gmap->pfault_enabled)
-			flags |= FAULT_FLAG_RETRY_NOWAIT;
+			vmf.flags |= FAULT_FLAG_RETRY_NOWAIT;
 	}
 retry:
@@ -466,7 +454,7 @@ static inline vm_fault_t do_exception(struct pt_regs *regs, int access)
 	 * we can handle it..
	 */
 	fault = VM_FAULT_BADACCESS;
-	if (unlikely(!(vma->vm_flags & access)))
+	if (unlikely(!(vma->vm_flags & vmf.vm_flags)))
 		goto out_up;
 
 	/*
@@ -474,10 +462,10 @@ static inline vm_fault_t do_exception(struct pt_regs *regs, int access)
 	 * make sure we exit gracefully rather than endlessly redo
 	 * the fault.
	 */
-	fault = handle_mm_fault(vma, address, flags, regs);
+	fault = handle_mm_fault(vma, address, vmf.flags, regs);
 	if (fault_signal_pending(fault, regs)) {
 		fault = VM_FAULT_SIGNAL;
-		if (flags & FAULT_FLAG_RETRY_NOWAIT)
+		if (vmf.flags & FAULT_FLAG_RETRY_NOWAIT)
 			goto out_up;
 		goto out;
 	}
@@ -497,7 +485,7 @@ static inline vm_fault_t do_exception(struct pt_regs *regs, int access)
 	if (fault & VM_FAULT_RETRY) {
 		if (IS_ENABLED(CONFIG_PGSTE) && gmap &&
-			(flags & FAULT_FLAG_RETRY_NOWAIT)) {
+			(vmf.flags & FAULT_FLAG_RETRY_NOWAIT)) {
 			/*
 			 * FAULT_FLAG_RETRY_NOWAIT has been set, mmap_lock has
 			 * not been released
@@ -506,8 +494,8 @@ static inline vm_fault_t do_exception(struct pt_regs *regs, int access)
 			fault = VM_FAULT_PFAULT;
 			goto out_up;
 		}
-		flags &= ~FAULT_FLAG_RETRY_NOWAIT;
-		flags |= FAULT_FLAG_TRIED;
+		vmf.flags &= ~FAULT_FLAG_RETRY_NOWAIT;
+		vmf.flags |= FAULT_FLAG_TRIED;
 		mmap_read_lock(mm);
 		goto retry;
 	}
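s390 is the one conversion that needs no arch_vma_access_error() override
at all: seeding .vm_flags from the 'access' argument (narrowed to VM_WRITE
for write faults) makes the weak default in mm/memory.c,
(vma->vm_flags & vmf->vm_flags) == 0, exactly the old inline
!(vma->vm_flags & access) test. A standalone demonstration of that
predicate (toy types; the flag values mirror include/linux/mm.h):

	#include <stdbool.h>
	#include <stdio.h>

	#define VM_READ		0x00000001UL
	#define VM_WRITE	0x00000002UL

	struct toy_vma { unsigned long vm_flags; };

	/* same predicate as the __weak arch_vma_access_error() */
	static bool access_error(const struct toy_vma *vma, unsigned long want)
	{
		return (vma->vm_flags & want) == 0;
	}

	int main(void)
	{
		struct toy_vma ro = { .vm_flags = VM_READ };

		/* write fault on a read-only vma: rejected, fall back (prints 1) */
		printf("%d\n", access_error(&ro, VM_WRITE));
		/* read fault on the same vma: allowed on the fast path (prints 0) */
		printf("%d\n", access_error(&ro, VM_READ));
		return 0;
	}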
From patchwork Mon Aug 21 12:30:51 2023
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 13359393
From: Kefeng Wang <wangkefeng.wang@huawei.com>
Subject: [PATCH rfc v2 05/10] powerpc: mm: use try_vma_locked_page_fault()
Date: Mon, 21 Aug 2023 20:30:51 +0800
Message-ID: <20230821123056.2109942-6-wangkefeng.wang@huawei.com>
In-Reply-To: <20230821123056.2109942-1-wangkefeng.wang@huawei.com>
References: <20230821123056.2109942-1-wangkefeng.wang@huawei.com>

Use the new try_vma_locked_page_fault() helper to simplify the code.
No functional change intended.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 arch/powerpc/mm/fault.c | 66 ++++++++++++++++++++---------------------
 1 file changed, 32 insertions(+), 34 deletions(-)

diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index b1723094d464..52f9546e020e 100644
--- a/arch/powerpc/mm/fault.c
+++ b/arch/powerpc/mm/fault.c
@@ -391,6 +391,22 @@ static int page_fault_is_bad(unsigned long err)
 #define page_fault_is_bad(__err)	((__err) & DSISR_BAD_FAULT_32S)
 #endif
 
+#ifdef CONFIG_PER_VMA_LOCK
+bool arch_vma_access_error(struct vm_area_struct *vma, struct vm_fault *vmf)
+{
+	int is_exec = TRAP(vmf->regs) == INTERRUPT_INST_STORAGE;
+	int is_write = page_fault_is_write(vmf->fault_code);
+
+	if (unlikely(access_pkey_error(is_write, is_exec,
+				       (vmf->fault_code & DSISR_KEYFAULT), vma)))
+		return true;
+
+	if (unlikely(access_error(is_write, is_exec, vma)))
+		return true;
+	return false;
+}
+#endif
+
 /*
  * For 600- and 800-family processors, the error_code parameter is DSISR
  * for a data fault, SRR1 for an instruction fault.
@@ -407,12 +423,18 @@ static int ___do_page_fault(struct pt_regs *regs, unsigned long address,
 {
 	struct vm_area_struct * vma;
 	struct mm_struct *mm = current->mm;
-	unsigned int flags = FAULT_FLAG_DEFAULT;
 	int is_exec = TRAP(regs) == INTERRUPT_INST_STORAGE;
 	int is_user = user_mode(regs);
 	int is_write = page_fault_is_write(error_code);
 	vm_fault_t fault, major = 0;
 	bool kprobe_fault = kprobe_page_fault(regs, 11);
+	struct vm_fault vmf = {
+		.real_address = address,
+		.fault_code = error_code,
+		.regs = regs,
+		.flags = FAULT_FLAG_DEFAULT,
+	};
 
 	if (unlikely(debugger_fault_handler(regs) || kprobe_fault))
 		return 0;
@@ -463,45 +485,21 @@ static int ___do_page_fault(struct pt_regs *regs, unsigned long address,
 	 * mmap_lock held
	 */
 	if (is_user)
-		flags |= FAULT_FLAG_USER;
+		vmf.flags |= FAULT_FLAG_USER;
 	if (is_write)
-		flags |= FAULT_FLAG_WRITE;
+		vmf.flags |= FAULT_FLAG_WRITE;
 	if (is_exec)
-		flags |= FAULT_FLAG_INSTRUCTION;
+		vmf.flags |= FAULT_FLAG_INSTRUCTION;
 
-	if (!(flags & FAULT_FLAG_USER))
-		goto lock_mmap;
-
-	vma = lock_vma_under_rcu(mm, address);
-	if (!vma)
-		goto lock_mmap;
-
-	if (unlikely(access_pkey_error(is_write, is_exec,
-				       (error_code & DSISR_KEYFAULT), vma))) {
-		vma_end_read(vma);
-		goto lock_mmap;
-	}
-
-	if (unlikely(access_error(is_write, is_exec, vma))) {
-		vma_end_read(vma);
-		goto lock_mmap;
-	}
-
-	fault = handle_mm_fault(vma, address, flags | FAULT_FLAG_VMA_LOCK, regs);
-	if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
-		vma_end_read(vma);
-
-	if (!(fault & VM_FAULT_RETRY)) {
-		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
+	fault = try_vma_locked_page_fault(&vmf);
+	if (fault == VM_FAULT_NONE)
+		goto retry;
+	if (!(fault & VM_FAULT_RETRY))
 		goto done;
-	}
-	count_vm_vma_lock_event(VMA_LOCK_RETRY);
 
 	if (fault_signal_pending(fault, regs))
 		return user_mode(regs) ? 0 : SIGBUS;
 
-lock_mmap:
-
 	/* When running in the kernel we expect faults to occur only to
 	 * addresses in user space.  All other faults represent errors in the
 	 * kernel and should generate an OOPS.  Unfortunately, in the case of an
@@ -528,7 +526,7 @@ static int ___do_page_fault(struct pt_regs *regs, unsigned long address,
 	 * make sure we exit gracefully rather than endlessly redo
 	 * the fault.
	 */
-	fault = handle_mm_fault(vma, address, flags, regs);
+	fault = handle_mm_fault(vma, address, vmf.flags, regs);
 
 	major |= fault & VM_FAULT_MAJOR;
 
@@ -544,7 +542,7 @@ static int ___do_page_fault(struct pt_regs *regs, unsigned long address,
 	 * case.
	 */
 	if (unlikely(fault & VM_FAULT_RETRY)) {
-		flags |= FAULT_FLAG_TRIED;
+		vmf.flags |= FAULT_FLAG_TRIED;
 		goto retry;
 	}
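powerpc shows the general shape of a non-trivial hook: decode the
architecture's fault syndrome from vmf->fault_code (and, here, also the
trap type from vmf->regs), then veto the lockless path for anything the
plain vm_flags mask cannot express, such as protection-key faults. A
condensed template, where the arch_* helpers are hypothetical stand-ins
for access_pkey_error()/access_error():

	/*
	 * Template, not a real architecture. Returning true sends the fault
	 * to the mmap_lock slow path, which re-runs the same checks and
	 * picks the correct signal and si_code.
	 */
	bool arch_vma_access_error(struct vm_area_struct *vma, struct vm_fault *vmf)
	{
		bool is_exec  = arch_fault_is_exec(vmf->regs);		/* hypothetical */
		bool is_write = arch_fault_is_write(vmf->fault_code);	/* hypothetical */

		if (arch_key_fault(vmf->fault_code, vma))		/* hypothetical */
			return true;	/* key faults can't be judged by vm_flags */

		return !(vma->vm_flags &
			 (is_exec ? VM_EXEC : (is_write ? VM_WRITE : VM_READ)));
	}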
From patchwork Mon Aug 21 12:30:52 2023
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 13359392
From: Kefeng Wang <wangkefeng.wang@huawei.com>
Subject: [PATCH rfc v2 06/10] riscv: mm: use try_vma_locked_page_fault()
Date: Mon, 21 Aug 2023 20:30:52 +0800
Message-ID: <20230821123056.2109942-7-wangkefeng.wang@huawei.com>
In-Reply-To: <20230821123056.2109942-1-wangkefeng.wang@huawei.com>
References: <20230821123056.2109942-1-wangkefeng.wang@huawei.com>

Use the new try_vma_locked_page_fault() helper to simplify the code.
No functional change intended.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 arch/riscv/mm/fault.c | 58 ++++++++++++++++++-------------------------
 1 file changed, 24 insertions(+), 34 deletions(-)

diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c
index 6115d7514972..b46129b636f2 100644
--- a/arch/riscv/mm/fault.c
+++ b/arch/riscv/mm/fault.c
@@ -215,6 +215,13 @@ static inline bool access_error(unsigned long cause, struct vm_area_struct *vma)
 	return false;
 }
 
+#ifdef CONFIG_PER_VMA_LOCK
+bool arch_vma_access_error(struct vm_area_struct *vma, struct vm_fault *vmf)
+{
+	return access_error(vmf->fault_code, vma);
+}
+#endif
+
 /*
  * This routine handles page faults.  It determines the address and the
  * problem, and then passes it off to one of the appropriate routines.
 */
@@ -223,17 +230,16 @@ void handle_page_fault(struct pt_regs *regs)
 {
 	struct task_struct *tsk;
 	struct vm_area_struct *vma;
-	struct mm_struct *mm;
-	unsigned long addr, cause;
-	unsigned int flags = FAULT_FLAG_DEFAULT;
+	struct mm_struct *mm = current->mm;
+	unsigned long addr = regs->badaddr;
+	unsigned long cause = regs->cause;
 	int code = SEGV_MAPERR;
 	vm_fault_t fault;
-
-	cause = regs->cause;
-	addr = regs->badaddr;
-
-	tsk = current;
-	mm = tsk->mm;
+	struct vm_fault vmf = {
+		.real_address = addr,
+		.fault_code = cause,
+		.flags = FAULT_FLAG_DEFAULT,
+	};
 
 	if (kprobe_page_fault(regs, cause))
 		return;
@@ -268,7 +274,7 @@ void handle_page_fault(struct pt_regs *regs)
 	}
 
 	if (user_mode(regs))
-		flags |= FAULT_FLAG_USER;
+		vmf.flags |= FAULT_FLAG_USER;
 
 	if (!user_mode(regs) && addr < TASK_SIZE && unlikely(!(regs->status & SR_SUM))) {
 		if (fixup_exception(regs))
@@ -280,37 +286,21 @@ void handle_page_fault(struct pt_regs *regs)
 	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, addr);
 
 	if (cause == EXC_STORE_PAGE_FAULT)
-		flags |= FAULT_FLAG_WRITE;
+		vmf.flags |= FAULT_FLAG_WRITE;
 	else if (cause == EXC_INST_PAGE_FAULT)
-		flags |= FAULT_FLAG_INSTRUCTION;
-	if (!(flags & FAULT_FLAG_USER))
-		goto lock_mmap;
-
-	vma = lock_vma_under_rcu(mm, addr);
-	if (!vma)
-		goto lock_mmap;
+		vmf.flags |= FAULT_FLAG_INSTRUCTION;
 
-	if (unlikely(access_error(cause, vma))) {
-		vma_end_read(vma);
-		goto lock_mmap;
-	}
-
-	fault = handle_mm_fault(vma, addr, flags | FAULT_FLAG_VMA_LOCK, regs);
-	if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
-		vma_end_read(vma);
-
-	if (!(fault & VM_FAULT_RETRY)) {
-		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
+	fault = try_vma_locked_page_fault(&vmf);
+	if (fault == VM_FAULT_NONE)
+		goto retry;
+	if (!(fault & VM_FAULT_RETRY))
 		goto done;
-	}
-	count_vm_vma_lock_event(VMA_LOCK_RETRY);
 
 	if (fault_signal_pending(fault, regs)) {
 		if (!user_mode(regs))
 			no_context(regs, addr);
 		return;
 	}
-lock_mmap:
 
 retry:
 	vma = lock_mm_and_find_vma(mm, addr, regs);
@@ -337,7 +327,7 @@ void handle_page_fault(struct pt_regs *regs)
 	 * make sure we exit gracefully rather than endlessly redo
 	 * the fault.
	 */
-	fault = handle_mm_fault(vma, addr, flags, regs);
+	fault = handle_mm_fault(vma, addr, vmf.flags, regs);
 
 	/*
 	 * If we need to retry but a fatal signal is pending, handle the
@@ -355,7 +345,7 @@ void handle_page_fault(struct pt_regs *regs)
 		return;
 
 	if (unlikely(fault & VM_FAULT_RETRY)) {
-		flags |= FAULT_FLAG_TRIED;
+		vmf.flags |= FAULT_FLAG_TRIED;
 
 		/*
 		 * No need to mmap_read_unlock(mm) as we would
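riscv simply stores the raw scause value in vmf.fault_code, so its
existing access_error(cause, vma) helper is reused unchanged; the same
field carries x86's error_code and powerpc's DSISR earlier in the series,
which is why struct vm_fault declares it as a plain unsigned long. A
sketch of the cause-to-flags mapping the handler performs (the cause
numbers follow the RISC-V privileged spec; the flag values here are
illustrative, not the kernel's):

	#define EXC_INST_PAGE_FAULT	12	/* instruction page fault */
	#define EXC_STORE_PAGE_FAULT	15	/* store/AMO page fault */

	#define FAULT_FLAG_WRITE	(1UL << 0)	/* illustrative value */
	#define FAULT_FLAG_INSTRUCTION	(1UL << 8)	/* illustrative value */

	static unsigned long fault_flags_from_cause(unsigned long cause,
						    unsigned long flags)
	{
		if (cause == EXC_STORE_PAGE_FAULT)
			flags |= FAULT_FLAG_WRITE;
		else if (cause == EXC_INST_PAGE_FAULT)
			flags |= FAULT_FLAG_INSTRUCTION;
		/* loads (cause 13) set neither bit */
		return flags;
	}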
From patchwork Mon Aug 21 12:30:53 2023
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 13359395
From: Kefeng Wang <wangkefeng.wang@huawei.com>
Subject: [PATCH rfc v2 07/10] ARM: mm: try VMA lock-based page fault handling first
Date: Mon, 21 Aug 2023 20:30:53 +0800
Message-ID: <20230821123056.2109942-8-wangkefeng.wang@huawei.com>
In-Reply-To: <20230821123056.2109942-1-wangkefeng.wang@huawei.com>
References: <20230821123056.2109942-1-wangkefeng.wang@huawei.com>

Attempt VMA lock-based page fault handling first, and fall back to the
existing mmap_lock-based handling if that fails.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 arch/arm/Kconfig    |  1 +
 arch/arm/mm/fault.c | 35 +++++++++++++++++++++++++----------
 2 files changed, 26 insertions(+), 10 deletions(-)

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 1a6a6eb48a15..8b6d4507ccee 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -34,6 +34,7 @@ config ARM
 	select ARCH_OPTIONAL_KERNEL_RWX_DEFAULT if CPU_V7
 	select ARCH_SUPPORTS_ATOMIC_RMW
 	select ARCH_SUPPORTS_HUGETLBFS if ARM_LPAE
+	select ARCH_SUPPORTS_PER_VMA_LOCK
 	select ARCH_USE_BUILTIN_BSWAP
 	select ARCH_USE_CMPXCHG_LOCKREF
 	select ARCH_USE_MEMTEST
diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c
index fef62e4a9edd..d53bb028899a 100644
--- a/arch/arm/mm/fault.c
+++ b/arch/arm/mm/fault.c
@@ -242,8 +242,11 @@ do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
 	struct vm_area_struct *vma;
 	int sig, code;
 	vm_fault_t fault;
-	unsigned int flags = FAULT_FLAG_DEFAULT;
-	unsigned long vm_flags = VM_ACCESS_FLAGS;
+	struct vm_fault vmf = {
+		.real_address = addr,
+		.flags = FAULT_FLAG_DEFAULT,
+		.vm_flags = VM_ACCESS_FLAGS,
+	};
 
 	if (kprobe_page_fault(regs, fsr))
 		return 0;
@@ -261,15 +264,15 @@ do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
 		goto no_context;
 
 	if (user_mode(regs))
-		flags |= FAULT_FLAG_USER;
+		vmf.flags |= FAULT_FLAG_USER;
 
 	if (is_write_fault(fsr)) {
-		flags |= FAULT_FLAG_WRITE;
-		vm_flags = VM_WRITE;
+		vmf.flags |= FAULT_FLAG_WRITE;
+		vmf.vm_flags = VM_WRITE;
 	}
 
 	if (fsr & FSR_LNX_PF) {
-		vm_flags = VM_EXEC;
+		vmf.vm_flags = VM_EXEC;
 
 		if (is_permission_fault(fsr) && !user_mode(regs))
 			die_kernel_fault("execution of memory",
@@ -278,6 +281,18 @@ do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
 
 	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, addr);
 
+	fault = try_vma_locked_page_fault(&vmf);
+	if (fault == VM_FAULT_NONE)
+		goto retry;
+	if (!(fault & VM_FAULT_RETRY))
+		goto done;
+
+	if (fault_signal_pending(fault, regs)) {
+		if (!user_mode(regs))
+			goto no_context;
+		return 0;
+	}
+
 retry:
 	vma = lock_mm_and_find_vma(mm, addr, regs);
 	if (unlikely(!vma)) {
@@ -289,10 +304,10 @@ do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
 	 * ok, we have a good vm_area for this memory access, check the
 	 * permissions on the VMA allow for the fault which occurred.
	 */
-	if (!(vma->vm_flags & vm_flags))
+	if (!(vma->vm_flags & vmf.vm_flags))
 		fault = VM_FAULT_BADACCESS;
 	else
-		fault = handle_mm_fault(vma, addr & PAGE_MASK, flags, regs);
+		fault = handle_mm_fault(vma, addr & PAGE_MASK, vmf.flags, regs);
 
 	/* If we need to retry but a fatal signal is pending, handle the
 	 * signal first. We do not need to release the mmap_lock because
@@ -310,13 +325,13 @@ do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
 
 	if (!(fault & VM_FAULT_ERROR)) {
 		if (fault & VM_FAULT_RETRY) {
-			flags |= FAULT_FLAG_TRIED;
+			vmf.flags |= FAULT_FLAG_TRIED;
 			goto retry;
 		}
 	}
 
 	mmap_read_unlock(mm);
-
+done:
 	/*
 	 * Handle the "normal" case first - VM_FAULT_MAJOR
	 */
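Selecting ARCH_SUPPORTS_PER_VMA_LOCK is the whole opt-in: the generic
option is derived from it, roughly as below in mm/Kconfig at the time of
this series (quoted from memory, so treat it as approximate):

	config PER_VMA_LOCK
		def_bool y
		depends on ARCH_SUPPORTS_PER_VMA_LOCK && MMU && SMP

With that select in place, CONFIG_PER_VMA_LOCK turns on automatically for
SMP ARM builds with an MMU, and try_vma_locked_page_fault() stops being
the VM_FAULT_NONE stub from the !CONFIG_PER_VMA_LOCK branch of patch 01.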
localhost.localdomain (10.175.112.125) by dggpemm100001.china.huawei.com (7.185.36.93) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.31; Mon, 21 Aug 2023 20:31:21 +0800 From: Kefeng Wang To: Andrew Morton , CC: , , Russell King , Catalin Marinas , Will Deacon , Huacai Chen , WANG Xuerui , Michael Ellerman , Nicholas Piggin , Christophe Leroy , Paul Walmsley , Palmer Dabbelt , Albert Ou , Alexander Gordeev , Gerald Schaefer , Heiko Carstens , Vasily Gorbik , Christian Borntraeger , Sven Schnelle , Dave Hansen , Andy Lutomirski , Peter Zijlstra , Thomas Gleixner , Ingo Molnar , Borislav Petkov , , "H . Peter Anvin" , , , , , , , Kefeng Wang Subject: [PATCH rfc v2 08/10] loongarch: mm: cleanup __do_page_fault() Date: Mon, 21 Aug 2023 20:30:54 +0800 Message-ID: <20230821123056.2109942-9-wangkefeng.wang@huawei.com> X-Mailer: git-send-email 2.27.0 In-Reply-To: <20230821123056.2109942-1-wangkefeng.wang@huawei.com> References: <20230821123056.2109942-1-wangkefeng.wang@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.175.112.125] X-ClientProxiedBy: dggems705-chm.china.huawei.com (10.3.19.182) To dggpemm100001.china.huawei.com (7.185.36.93) X-CFilter-Loop: Reflected X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230821_133144_059265_4F48EE6D X-CRM114-Status: GOOD ( 14.57 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org Cleanup __do_page_fault() by reuse bad_area_nosemaphore and bad_area label. Signed-off-by: Kefeng Wang --- arch/loongarch/mm/fault.c | 48 +++++++++++++-------------------------- 1 file changed, 16 insertions(+), 32 deletions(-) diff --git a/arch/loongarch/mm/fault.c b/arch/loongarch/mm/fault.c index e6376e3dce86..5d4c742c4bc5 100644 --- a/arch/loongarch/mm/fault.c +++ b/arch/loongarch/mm/fault.c @@ -157,18 +157,15 @@ static void __kprobes __do_page_fault(struct pt_regs *regs, if (!user_mode(regs)) no_context(regs, write, address); else - do_sigsegv(regs, write, address, si_code); - return; + goto bad_area_nosemaphore; } /* * If we're in an interrupt or have no user * context, we must not take the fault.. */ - if (faulthandler_disabled() || !mm) { - do_sigsegv(regs, write, address, si_code); - return; - } + if (faulthandler_disabled() || !mm) + goto bad_area_nosemaphore; if (user_mode(regs)) flags |= FAULT_FLAG_USER; @@ -178,23 +175,7 @@ static void __kprobes __do_page_fault(struct pt_regs *regs, vma = lock_mm_and_find_vma(mm, address, regs); if (unlikely(!vma)) goto bad_area_nosemaphore; - goto good_area; - -/* - * Something tried to access memory that isn't in our memory map.. - * Fix it, but check if it's kernel or user first.. - */ -bad_area: - mmap_read_unlock(mm); -bad_area_nosemaphore: - do_sigsegv(regs, write, address, si_code); - return; -/* - * Ok, we have a good vm_area for this memory access, so - * we can handle it.. 
@@ -235,22 +216,25 @@ static void __kprobes __do_page_fault(struct pt_regs *regs,
 		 */
 		goto retry;
 	}
+
+	mmap_read_unlock(mm);
+
 	if (unlikely(fault & VM_FAULT_ERROR)) {
-		mmap_read_unlock(mm);
-		if (fault & VM_FAULT_OOM) {
+		if (fault & VM_FAULT_OOM)
 			do_out_of_memory(regs, write, address);
-			return;
-		} else if (fault & VM_FAULT_SIGSEGV) {
-			do_sigsegv(regs, write, address, si_code);
-			return;
-		} else if (fault & (VM_FAULT_SIGBUS|VM_FAULT_HWPOISON|VM_FAULT_HWPOISON_LARGE)) {
+		else if (fault & VM_FAULT_SIGSEGV)
+			goto bad_area_nosemaphore;
+		else if (fault & (VM_FAULT_SIGBUS|VM_FAULT_HWPOISON|VM_FAULT_HWPOISON_LARGE))
 			do_sigbus(regs, write, address, si_code);
-			return;
-		}
-		BUG();
+		else
+			BUG();
 	}
+	return;
+
+bad_area:
 	mmap_read_unlock(mm);
+bad_area_nosemaphore:
+	do_sigsegv(regs, write, address, si_code);
 }
 
 asmlinkage void __kprobes do_page_fault(struct pt_regs *regs,
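A note on the idiom this cleanup converges on: exactly one error exit owns mmap_read_unlock() and falls through into the one that does not, so every failure path ends in a single do_sigsegv() call. A minimal sketch, assuming the label names from the diff above; the rest of the body is invented for illustration:

/* Sketch of the two-label error-path idiom; illustrative only. */
static void sketch_fault(struct mm_struct *mm, struct pt_regs *regs,
			 unsigned long write, unsigned long address)
{
	int si_code = SEGV_MAPERR;
	struct vm_area_struct *vma;

	if (faulthandler_disabled() || !mm)
		goto bad_area_nosemaphore;	/* mmap_lock never taken */

	vma = lock_mm_and_find_vma(mm, address, regs);
	if (unlikely(!vma))
		goto bad_area_nosemaphore;	/* helper already dropped the lock */

	si_code = SEGV_ACCERR;
	if (!(vma->vm_flags & VM_READ))
		goto bad_area;			/* mmap_lock still held here */

	/* ... handle_mm_fault() and retry logic ... */

	mmap_read_unlock(mm);
	return;

bad_area:
	mmap_read_unlock(mm);			/* drop the lock, then fall through */
bad_area_nosemaphore:
	do_sigsegv(regs, write, address, si_code);
}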
From patchwork Mon Aug 21 12:30:55 2023
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 13359398
From: Kefeng Wang
Subject: [PATCH rfc v2 09/10] loongarch: mm: add access_error() helper
Date: Mon, 21 Aug 2023 20:30:55 +0800
Message-ID: <20230821123056.2109942-10-wangkefeng.wang@huawei.com>
In-Reply-To: <20230821123056.2109942-1-wangkefeng.wang@huawei.com>
References: <20230821123056.2109942-1-wangkefeng.wang@huawei.com>

Add access_error() to check whether a vma is accessible. It will be
used by __do_page_fault() now and by the VMA lock-based page fault
handling later.

Signed-off-by: Kefeng Wang
---
 arch/loongarch/mm/fault.c | 30 ++++++++++++++++++++----------
 1 file changed, 20 insertions(+), 10 deletions(-)

diff --git a/arch/loongarch/mm/fault.c b/arch/loongarch/mm/fault.c
index 5d4c742c4bc5..2a45e9f3a485 100644
--- a/arch/loongarch/mm/fault.c
+++ b/arch/loongarch/mm/fault.c
@@ -126,6 +126,22 @@ static void __kprobes do_sigsegv(struct pt_regs *regs,
 	force_sig_fault(SIGSEGV, si_code, (void __user *)address);
 }
 
+static inline bool access_error(unsigned int flags, struct pt_regs *regs,
+				unsigned long addr, struct vm_area_struct *vma)
+{
+	if (flags & FAULT_FLAG_WRITE) {
+		if (!(vma->vm_flags & VM_WRITE))
+			return true;
+	} else {
+		if (!(vma->vm_flags & VM_READ) && addr != exception_era(regs))
+			return true;
+		if (!(vma->vm_flags & VM_EXEC) && addr == exception_era(regs))
+			return true;
+	}
+
+	return false;
+}
+
 /*
  * This routine handles page faults.  It determines the address,
  * and the problem, and then passes it off to one of the appropriate
@@ -169,6 +185,8 @@ static void __kprobes __do_page_fault(struct pt_regs *regs,
 
 	if (user_mode(regs))
 		flags |= FAULT_FLAG_USER;
+	if (write)
+		flags |= FAULT_FLAG_WRITE;
 
 	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
 retry:
@@ -178,16 +196,8 @@ static void __kprobes __do_page_fault(struct pt_regs *regs,
 
 	si_code = SEGV_ACCERR;
 
-	if (write) {
-		flags |= FAULT_FLAG_WRITE;
-		if (!(vma->vm_flags & VM_WRITE))
-			goto bad_area;
-	} else {
-		if (!(vma->vm_flags & VM_READ) && address != exception_era(regs))
-			goto bad_area;
-		if (!(vma->vm_flags & VM_EXEC) && address == exception_era(regs))
-			goto bad_area;
-	}
+	if (access_error(flags, regs, address, vma))
+		goto bad_area;
 
 	/*
 	 * If for any reason at all we couldn't handle the fault,
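One subtlety in access_error() worth spelling out: exception_era(regs) is the address of the faulting instruction, so a fault whose address equals it is an instruction fetch and requires VM_EXEC, while any other non-write fault is a data load and requires VM_READ. A reader's condensation of that decision table; required_vm_flags() is invented for this sketch:

/* Illustrative condensation of access_error()'s decision table. */
static vm_flags_t required_vm_flags(unsigned int flags, unsigned long addr,
				    unsigned long era)
{
	if (flags & FAULT_FLAG_WRITE)
		return VM_WRITE;	/* store: the VMA must be writable */
	if (addr == era)
		return VM_EXEC;		/* fetch of the faulting instruction */
	return VM_READ;			/* ordinary data load */
}

Keeping the check in one helper is what lets the next patch reuse it verbatim for the VMA-locked path via arch_vma_access_error().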
Peter Anvin" , , , , , , , Kefeng Wang Subject: [PATCH rfc v2 10/10] loongarch: mm: try VMA lock-based page fault handling first Date: Mon, 21 Aug 2023 20:30:56 +0800 Message-ID: <20230821123056.2109942-11-wangkefeng.wang@huawei.com> X-Mailer: git-send-email 2.27.0 In-Reply-To: <20230821123056.2109942-1-wangkefeng.wang@huawei.com> References: <20230821123056.2109942-1-wangkefeng.wang@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.175.112.125] X-ClientProxiedBy: dggems705-chm.china.huawei.com (10.3.19.182) To dggpemm100001.china.huawei.com (7.185.36.93) X-CFilter-Loop: Reflected X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230821_053128_393953_13A20FA7 X-CRM114-Status: GOOD ( 16.05 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org Attempt VMA lock-based page fault handling first, and fall back to the existing mmap_lock-based handling if that fails. Signed-off-by: Kefeng Wang --- arch/loongarch/Kconfig | 1 + arch/loongarch/mm/fault.c | 37 +++++++++++++++++++++++++++++++------ 2 files changed, 32 insertions(+), 6 deletions(-) diff --git a/arch/loongarch/Kconfig b/arch/loongarch/Kconfig index 2b27b18a63af..6b821f621920 100644 --- a/arch/loongarch/Kconfig +++ b/arch/loongarch/Kconfig @@ -56,6 +56,7 @@ config LOONGARCH select ARCH_SUPPORTS_LTO_CLANG select ARCH_SUPPORTS_LTO_CLANG_THIN select ARCH_SUPPORTS_NUMA_BALANCING + select ARCH_SUPPORTS_PER_VMA_LOCK select ARCH_USE_BUILTIN_BSWAP select ARCH_USE_CMPXCHG_LOCKREF select ARCH_USE_QUEUED_RWLOCKS diff --git a/arch/loongarch/mm/fault.c b/arch/loongarch/mm/fault.c index 2a45e9f3a485..f7ac3a14bb06 100644 --- a/arch/loongarch/mm/fault.c +++ b/arch/loongarch/mm/fault.c @@ -142,6 +142,13 @@ static inline bool access_error(unsigned int flags, struct pt_regs *regs, return false; } +#ifdef CONFIG_PER_VMA_LOCK +bool arch_vma_access_error(struct vm_area_struct *vma, struct vm_fault *vmf) +{ + return access_error(vmf->flags, vmf->regs, vmf->real_address, vma); +} +#endif + /* * This routine handles page faults. 
  * This routine handles page faults.  It determines the address,
  * and the problem, and then passes it off to one of the appropriate
@@ -151,11 +158,15 @@ static void __kprobes __do_page_fault(struct pt_regs *regs,
 			unsigned long write, unsigned long address)
 {
 	int si_code = SEGV_MAPERR;
-	unsigned int flags = FAULT_FLAG_DEFAULT;
 	struct task_struct *tsk = current;
 	struct mm_struct *mm = tsk->mm;
 	struct vm_area_struct *vma = NULL;
 	vm_fault_t fault;
+	struct vm_fault vmf = {
+		.real_address = address,
+		.regs = regs,
+		.flags = FAULT_FLAG_DEFAULT,
+	};
 
 	if (kprobe_page_fault(regs, current->thread.trap_nr))
 		return;
@@ -184,11 +195,24 @@ static void __kprobes __do_page_fault(struct pt_regs *regs,
 		goto bad_area_nosemaphore;
 
 	if (user_mode(regs))
-		flags |= FAULT_FLAG_USER;
+		vmf.flags |= FAULT_FLAG_USER;
 	if (write)
-		flags |= FAULT_FLAG_WRITE;
+		vmf.flags |= FAULT_FLAG_WRITE;
 
 	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
+
+	fault = try_vma_locked_page_fault(&vmf);
+	if (fault == VM_FAULT_NONE)
+		goto retry;
+	if (!(fault & VM_FAULT_RETRY))
+		goto done;
+
+	if (fault_signal_pending(fault, regs)) {
+		if (!user_mode(regs))
+			no_context(regs, write, address);
+		return;
+	}
+
 retry:
 	vma = lock_mm_and_find_vma(mm, address, regs);
 	if (unlikely(!vma))
@@ -196,7 +220,7 @@ static void __kprobes __do_page_fault(struct pt_regs *regs,
 
 	si_code = SEGV_ACCERR;
 
-	if (access_error(flags, regs, address, vma))
+	if (access_error(vmf.flags, regs, address, vma))
 		goto bad_area;
 
 	/*
@@ -204,7 +228,7 @@ static void __kprobes __do_page_fault(struct pt_regs *regs,
 	 * make sure we exit gracefully rather than endlessly redo
 	 * the fault.
	 */
-	fault = handle_mm_fault(vma, address, flags, regs);
+	fault = handle_mm_fault(vma, address, vmf.flags, regs);
 
 	if (fault_signal_pending(fault, regs)) {
 		if (!user_mode(regs))
@@ -217,7 +241,7 @@ static void __kprobes __do_page_fault(struct pt_regs *regs,
 		return;
 
 	if (unlikely(fault & VM_FAULT_RETRY)) {
-		flags |= FAULT_FLAG_TRIED;
+		vmf.flags |= FAULT_FLAG_TRIED;
 
 		/*
 		 * No need to mmap_read_unlock(mm) as we would
@@ -229,6 +253,7 @@ static void __kprobes __do_page_fault(struct pt_regs *regs,
 
 	mmap_read_unlock(mm);
 
+done:
 	if (unlikely(fault & VM_FAULT_ERROR)) {
 		if (fault & VM_FAULT_OOM)
 			do_out_of_memory(regs, write, address);