From patchwork Mon Aug 21 12:30:47 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Kefeng Wang <wangkefeng.wang@huawei.com>
X-Patchwork-Id: 13359370
From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton
Cc: Russell King, Catalin Marinas, Will Deacon, Huacai Chen,
 WANG Xuerui, Michael Ellerman, Nicholas Piggin, Christophe Leroy,
 Paul Walmsley, Palmer Dabbelt, Albert Ou, Alexander Gordeev,
 Gerald Schaefer, Heiko Carstens, Vasily Gorbik, Christian Borntraeger,
 Sven Schnelle, Dave Hansen, Andy Lutomirski, Peter Zijlstra,
 Thomas Gleixner, Ingo Molnar, Borislav Petkov, "H. Peter Anvin",
 Kefeng Wang
Subject: [PATCH rfc v2 01/10] mm: add a generic VMA lock-based page fault handler
Date: Mon, 21 Aug 2023 20:30:47 +0800
Message-ID: <20230821123056.2109942-2-wangkefeng.wang@huawei.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20230821123056.2109942-1-wangkefeng.wang@huawei.com>
References: <20230821123056.2109942-1-wangkefeng.wang@huawei.com>

ARCH_SUPPORTS_PER_VMA_LOCK is being enabled by more and more architectures
(e.g. x86, arm64, powerpc, s390 and riscv), and their implementations are
very similar, which results in duplicated code. Add a generic VMA lock-based
page fault handler, try_vma_locked_page_fault(), to eliminate the
duplication; this also makes it easy to support per-VMA locking on new
architectures.

Since architectures differ in how they check whether a vma is accessible,
add the struct pt_regs pointer, the page fault error code and the required
vma flags to struct vm_fault. Each architecture's page fault code can then
reuse struct vm_fault to record them and perform its own access check.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 include/linux/mm.h       | 17 +++++++++++++++++
 include/linux/mm_types.h |  2 ++
 mm/memory.c              | 39 +++++++++++++++++++++++++++++++++++++++
 3 files changed, 58 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 3f764e84e567..22a6f4c56ff3 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -512,9 +512,12 @@ struct vm_fault {
 		pgoff_t pgoff;			/* Logical page offset based on vma */
 		unsigned long address;		/* Faulting virtual address - masked */
 		unsigned long real_address;	/* Faulting virtual address - unmasked */
+		unsigned long fault_code;	/* Faulting error code during page fault */
+		struct pt_regs *regs;		/* The registers stored during page fault */
 	};
 	enum fault_flag flags;		/* FAULT_FLAG_xxx flags
 					 * XXX: should really be 'const' */
+	vm_flags_t vm_flags;		/* VMA flags to be used for access checking */
 	pmd_t *pmd;			/* Pointer to pmd entry matching
 					 * the 'address' */
 	pud_t *pud;			/* Pointer to pud entry matching
@@ -774,6 +777,9 @@ static inline void assert_fault_locked(struct vm_fault *vmf)
 struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
 					  unsigned long address);
 
+bool arch_vma_access_error(struct vm_area_struct *vma, struct vm_fault *vmf);
+vm_fault_t try_vma_locked_page_fault(struct vm_fault *vmf);
+
 #else /* CONFIG_PER_VMA_LOCK */
 
 static inline bool vma_start_read(struct vm_area_struct *vma)
@@ -801,6 +807,17 @@ static inline void assert_fault_locked(struct vm_fault *vmf)
 	mmap_assert_locked(vmf->vma->vm_mm);
 }
 
+static inline struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
+							unsigned long address)
+{
+	return NULL;
+}
+
+static inline vm_fault_t try_vma_locked_page_fault(struct vm_fault *vmf)
+{
+	return VM_FAULT_NONE;
+}
+
 #endif /* CONFIG_PER_VMA_LOCK */
 
 extern const struct vm_operations_struct vma_dummy_vm_ops;
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index f5ba5b0bc836..702820cea3f9 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -1119,6 +1119,7 @@ typedef __bitwise unsigned int vm_fault_t;
  * fault. Used to decide whether a process gets delivered SIGBUS or
  * just gets major/minor fault counters bumped up.
  *
+ * @VM_FAULT_NONE: Special case, not starting to handle fault
  * @VM_FAULT_OOM: Out Of Memory
  * @VM_FAULT_SIGBUS: Bad access
  * @VM_FAULT_MAJOR: Page read from storage
@@ -1139,6 +1140,7 @@ typedef __bitwise unsigned int vm_fault_t;
  *
  */
 enum vm_fault_reason {
+	VM_FAULT_NONE           = (__force vm_fault_t)0x000000,
 	VM_FAULT_OOM            = (__force vm_fault_t)0x000001,
 	VM_FAULT_SIGBUS         = (__force vm_fault_t)0x000002,
 	VM_FAULT_MAJOR          = (__force vm_fault_t)0x000004,
diff --git a/mm/memory.c b/mm/memory.c
index 3b4aaa0d2fff..60fe35db5134 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5510,6 +5510,45 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
 	count_vm_vma_lock_event(VMA_LOCK_ABORT);
 	return NULL;
 }
+
+#ifdef CONFIG_PER_VMA_LOCK
+bool __weak arch_vma_access_error(struct vm_area_struct *vma, struct vm_fault *vmf)
+{
+	return (vma->vm_flags & vmf->vm_flags) == 0;
+}
+#endif
+
+vm_fault_t try_vma_locked_page_fault(struct vm_fault *vmf)
+{
+	vm_fault_t fault = VM_FAULT_NONE;
+	struct vm_area_struct *vma;
+
+	if (!(vmf->flags & FAULT_FLAG_USER))
+		return fault;
+
+	vma = lock_vma_under_rcu(current->mm, vmf->real_address);
+	if (!vma)
+		return fault;
+
+	if (arch_vma_access_error(vma, vmf)) {
+		vma_end_read(vma);
+		return fault;
+	}
+
+	fault = handle_mm_fault(vma, vmf->real_address,
+				vmf->flags | FAULT_FLAG_VMA_LOCK, vmf->regs);
+
+	if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
+		vma_end_read(vma);
+
+	if (fault & VM_FAULT_RETRY)
+		count_vm_vma_lock_event(VMA_LOCK_RETRY);
+	else
+		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
+
+	return fault;
+}
+
 #endif /* CONFIG_PER_VMA_LOCK */
 
 #ifndef __PAGETABLE_P4D_FOLDED

From patchwork Mon Aug 21 12:30:48 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Kefeng Wang <wangkefeng.wang@huawei.com>
X-Patchwork-Id: 13359372
From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton
Subject: [PATCH rfc v2 02/10] arm64: mm: use try_vma_locked_page_fault()
Date: Mon, 21 Aug 2023 20:30:48 +0800
Message-ID: <20230821123056.2109942-3-wangkefeng.wang@huawei.com>
In-Reply-To: <20230821123056.2109942-1-wangkefeng.wang@huawei.com>
References: <20230821123056.2109942-1-wangkefeng.wang@huawei.com>

Use the new try_vma_locked_page_fault() helper to simplify the code, and
pass struct vm_fault to __do_page_fault() directly instead of each variable
individually. No functional change intended.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 arch/arm64/mm/fault.c | 60 ++++++++++++++++---------------------------
 1 file changed, 22 insertions(+), 38 deletions(-)

diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 2e5d1e238af9..2b7a1e610b3e 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -498,9 +498,8 @@ static void do_bad_area(unsigned long far, unsigned long esr,
 #define VM_FAULT_BADACCESS	((__force vm_fault_t)0x020000)
 
 static vm_fault_t __do_page_fault(struct mm_struct *mm,
-				  struct vm_area_struct *vma, unsigned long addr,
-				  unsigned int mm_flags, unsigned long vm_flags,
-				  struct pt_regs *regs)
+				  struct vm_area_struct *vma,
+				  struct vm_fault *vmf)
 {
 	/*
 	 * Ok, we have a good vm_area for this memory access, so we can handle
@@ -508,9 +507,9 @@ static vm_fault_t __do_page_fault(struct mm_struct *mm,
 	 * Check that the permissions on the VMA allow for the fault which
 	 * occurred.
 	 */
-	if (!(vma->vm_flags & vm_flags))
+	if (!(vma->vm_flags & vmf->vm_flags))
 		return VM_FAULT_BADACCESS;
-	return handle_mm_fault(vma, addr, mm_flags, regs);
+	return handle_mm_fault(vma, vmf->real_address, vmf->flags, vmf->regs);
 }
 
 static bool is_el0_instruction_abort(unsigned long esr)
@@ -533,10 +532,12 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
 	const struct fault_info *inf;
 	struct mm_struct *mm = current->mm;
 	vm_fault_t fault;
-	unsigned long vm_flags;
-	unsigned int mm_flags = FAULT_FLAG_DEFAULT;
 	unsigned long addr = untagged_addr(far);
 	struct vm_area_struct *vma;
+	struct vm_fault vmf = {
+		.real_address = addr,
+		.flags = FAULT_FLAG_DEFAULT,
+	};
 
 	if (kprobe_page_fault(regs, esr))
 		return 0;
@@ -549,7 +550,7 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
 		goto no_context;
 
 	if (user_mode(regs))
-		mm_flags |= FAULT_FLAG_USER;
+		vmf.flags |= FAULT_FLAG_USER;
 
 	/*
 	 * vm_flags tells us what bits we must have in vma->vm_flags
@@ -559,20 +560,20 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
 	 */
 	if (is_el0_instruction_abort(esr)) {
 		/* It was exec fault */
-		vm_flags = VM_EXEC;
-		mm_flags |= FAULT_FLAG_INSTRUCTION;
+		vmf.vm_flags = VM_EXEC;
+		vmf.flags |= FAULT_FLAG_INSTRUCTION;
 	} else if (is_write_abort(esr)) {
 		/* It was write fault */
-		vm_flags = VM_WRITE;
-		mm_flags |= FAULT_FLAG_WRITE;
+		vmf.vm_flags = VM_WRITE;
+		vmf.flags |= FAULT_FLAG_WRITE;
 	} else {
 		/* It was read fault */
-		vm_flags = VM_READ;
+		vmf.vm_flags = VM_READ;
 		/* Write implies read */
-		vm_flags |= VM_WRITE;
+		vmf.vm_flags |= VM_WRITE;
 		/* If EPAN is absent then exec implies read */
 		if (!cpus_have_const_cap(ARM64_HAS_EPAN))
-			vm_flags |= VM_EXEC;
+			vmf.vm_flags |= VM_EXEC;
 	}
 
 	if (is_ttbr0_addr(addr) && is_el1_permission_fault(addr, esr, regs)) {
@@ -587,26 +588,11 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
 
 	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, addr);
 
-	if (!(mm_flags & FAULT_FLAG_USER))
-		goto lock_mmap;
-
-	vma = lock_vma_under_rcu(mm, addr);
-	if (!vma)
-		goto lock_mmap;
-
-	if (!(vma->vm_flags & vm_flags)) {
-		vma_end_read(vma);
-		goto lock_mmap;
-	}
-	fault = handle_mm_fault(vma, addr, mm_flags | FAULT_FLAG_VMA_LOCK, regs);
-	if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
-		vma_end_read(vma);
-
-	if (!(fault & VM_FAULT_RETRY)) {
-		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
+	fault = try_vma_locked_page_fault(&vmf);
+	if (fault == VM_FAULT_NONE)
+		goto retry;
+	if (!(fault & VM_FAULT_RETRY))
 		goto done;
-	}
-	count_vm_vma_lock_event(VMA_LOCK_RETRY);
 
 	/* Quick path to respond to signals */
 	if (fault_signal_pending(fault, regs)) {
@@ -614,8 +600,6 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
 			goto no_context;
 		return 0;
 	}
-lock_mmap:
-
 retry:
 	vma = lock_mm_and_find_vma(mm, addr, regs);
 	if (unlikely(!vma)) {
@@ -623,7 +607,7 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
 		goto done;
 	}
 
-	fault = __do_page_fault(mm, vma, addr, mm_flags, vm_flags, regs);
+	fault = __do_page_fault(mm, vma, &vmf);
 
 	/* Quick path to respond to signals */
 	if (fault_signal_pending(fault, regs)) {
@@ -637,7 +621,7 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
 		return 0;
 
 	if (fault & VM_FAULT_RETRY) {
-		mm_flags |= FAULT_FLAG_TRIED;
+		vmf.flags |= FAULT_FLAG_TRIED;
 		goto retry;
 	}
 	mmap_read_unlock(mm);

From patchwork Mon Aug 21 12:30:49 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Kefeng Wang <wangkefeng.wang@huawei.com>
X-Patchwork-Id: 13359369
From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton
Subject: [PATCH rfc v2 03/10] x86: mm: use try_vma_locked_page_fault()
Date: Mon, 21 Aug 2023 20:30:49 +0800
Message-ID: <20230821123056.2109942-4-wangkefeng.wang@huawei.com>
In-Reply-To: <20230821123056.2109942-1-wangkefeng.wang@huawei.com>
References: <20230821123056.2109942-1-wangkefeng.wang@huawei.com>

Use the new try_vma_locked_page_fault() helper to simplify the code.
No functional change intended.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 arch/x86/mm/fault.c | 55 +++++++++++++++++++--------------------------
 1 file changed, 23 insertions(+), 32 deletions(-)

diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index ab778eac1952..3edc9edc0b28 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -1227,6 +1227,13 @@ do_kern_addr_fault(struct pt_regs *regs, unsigned long hw_error_code,
 }
 NOKPROBE_SYMBOL(do_kern_addr_fault);
 
+#ifdef CONFIG_PER_VMA_LOCK
+bool arch_vma_access_error(struct vm_area_struct *vma, struct vm_fault *vmf)
+{
+	return access_error(vmf->fault_code, vma);
+}
+#endif
+
 /*
  * Handle faults in the user portion of the address space.  Nothing in here
  * should check X86_PF_USER without a specific justification: for almost
@@ -1241,13 +1248,13 @@ void do_user_addr_fault(struct pt_regs *regs,
 			unsigned long address)
 {
 	struct vm_area_struct *vma;
-	struct task_struct *tsk;
-	struct mm_struct *mm;
+	struct mm_struct *mm = current->mm;
 	vm_fault_t fault;
-	unsigned int flags = FAULT_FLAG_DEFAULT;
-
-	tsk = current;
-	mm = tsk->mm;
+	struct vm_fault vmf = {
+		.real_address = address,
+		.fault_code = error_code,
+		.flags = FAULT_FLAG_DEFAULT
+	};
 
 	if (unlikely((error_code & (X86_PF_USER | X86_PF_INSTR)) == X86_PF_INSTR)) {
 		/*
@@ -1311,7 +1318,7 @@ void do_user_addr_fault(struct pt_regs *regs,
 	 */
 	if (user_mode(regs)) {
 		local_irq_enable();
-		flags |= FAULT_FLAG_USER;
+		vmf.flags |= FAULT_FLAG_USER;
 	} else {
 		if (regs->flags & X86_EFLAGS_IF)
 			local_irq_enable();
@@ -1326,11 +1333,11 @@ void do_user_addr_fault(struct pt_regs *regs,
 	 * maybe_mkwrite() can create a proper shadow stack PTE.
 	 */
 	if (error_code & X86_PF_SHSTK)
-		flags |= FAULT_FLAG_WRITE;
+		vmf.flags |= FAULT_FLAG_WRITE;
 	if (error_code & X86_PF_WRITE)
-		flags |= FAULT_FLAG_WRITE;
+		vmf.flags |= FAULT_FLAG_WRITE;
 	if (error_code & X86_PF_INSTR)
-		flags |= FAULT_FLAG_INSTRUCTION;
+		vmf.flags |= FAULT_FLAG_INSTRUCTION;
 
 #ifdef CONFIG_X86_64
 	/*
@@ -1350,26 +1357,11 @@ void do_user_addr_fault(struct pt_regs *regs,
 	}
 #endif
 
-	if (!(flags & FAULT_FLAG_USER))
-		goto lock_mmap;
-
-	vma = lock_vma_under_rcu(mm, address);
-	if (!vma)
-		goto lock_mmap;
-
-	if (unlikely(access_error(error_code, vma))) {
-		vma_end_read(vma);
-		goto lock_mmap;
-	}
-	fault = handle_mm_fault(vma, address, flags | FAULT_FLAG_VMA_LOCK, regs);
-	if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
-		vma_end_read(vma);
-
-	if (!(fault & VM_FAULT_RETRY)) {
-		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
+	fault = try_vma_locked_page_fault(&vmf);
+	if (fault == VM_FAULT_NONE)
+		goto retry;
+	if (!(fault & VM_FAULT_RETRY))
 		goto done;
-	}
-	count_vm_vma_lock_event(VMA_LOCK_RETRY);
 
 	/* Quick path to respond to signals */
 	if (fault_signal_pending(fault, regs)) {
@@ -1379,7 +1371,6 @@ void do_user_addr_fault(struct pt_regs *regs,
 				      ARCH_DEFAULT_PKEY);
 		return;
 	}
-lock_mmap:
 
 retry:
 	vma = lock_mm_and_find_vma(mm, address, regs);
@@ -1410,7 +1401,7 @@ void do_user_addr_fault(struct pt_regs *regs,
 	 * userland). The return to userland is identified whenever
 	 * FAULT_FLAG_USER|FAULT_FLAG_KILLABLE are both set in flags.
 	 */
-	fault = handle_mm_fault(vma, address, flags, regs);
+	fault = handle_mm_fault(vma, address, vmf.flags, regs);
 
 	if (fault_signal_pending(fault, regs)) {
 		/*
@@ -1434,7 +1425,7 @@ void do_user_addr_fault(struct pt_regs *regs,
 	 * that we made any progress. Handle this case first.
 	 */
 	if (unlikely(fault & VM_FAULT_RETRY)) {
-		flags |= FAULT_FLAG_TRIED;
+		vmf.flags |= FAULT_FLAG_TRIED;
 		goto retry;
 	}

From patchwork Mon Aug 21 12:30:50 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Kefeng Wang <wangkefeng.wang@huawei.com>
X-Patchwork-Id: 13359371
From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton
Subject: [PATCH rfc v2 04/10] s390: mm: use try_vma_locked_page_fault()
Date: Mon, 21 Aug 2023 20:30:50 +0800
Message-ID: <20230821123056.2109942-5-wangkefeng.wang@huawei.com>
In-Reply-To: <20230821123056.2109942-1-wangkefeng.wang@huawei.com>
References: <20230821123056.2109942-1-wangkefeng.wang@huawei.com>

Use the new try_vma_locked_page_fault() helper to simplify the code.
No functional change intended.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 arch/s390/mm/fault.c | 66 ++++++++++++++++++--------------------------
 1 file changed, 27 insertions(+), 39 deletions(-)

diff --git a/arch/s390/mm/fault.c b/arch/s390/mm/fault.c
index 099c4824dd8a..fbbdebde6ea7 100644
--- a/arch/s390/mm/fault.c
+++ b/arch/s390/mm/fault.c
@@ -357,16 +357,18 @@ static noinline void do_fault_error(struct pt_regs *regs, vm_fault_t fault)
 static inline vm_fault_t do_exception(struct pt_regs *regs, int access)
 {
 	struct gmap *gmap;
-	struct task_struct *tsk;
-	struct mm_struct *mm;
 	struct vm_area_struct *vma;
 	enum fault_type type;
-	unsigned long address;
-	unsigned int flags;
+	struct mm_struct *mm = current->mm;
+	unsigned long address = get_fault_address(regs);
 	vm_fault_t fault;
 	bool is_write;
+	struct vm_fault vmf = {
+		.real_address = address,
+		.flags = FAULT_FLAG_DEFAULT,
+		.vm_flags = access,
+	};
 
-	tsk = current;
 	/*
 	 * The instruction that caused the program check has
 	 * been nullified. Don't signal single step via SIGTRAP.
@@ -376,8 +378,6 @@ static inline vm_fault_t do_exception(struct pt_regs *regs, int access)
 	if (kprobe_page_fault(regs, 14))
 		return 0;
 
-	mm = tsk->mm;
-	address = get_fault_address(regs);
 	is_write = fault_is_write(regs);
 
 	/*
@@ -398,45 +398,33 @@ static inline vm_fault_t do_exception(struct pt_regs *regs, int access)
 	}
 
 	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
-	flags = FAULT_FLAG_DEFAULT;
 	if (user_mode(regs))
-		flags |= FAULT_FLAG_USER;
+		vmf.flags |= FAULT_FLAG_USER;
 	if (is_write)
-		access = VM_WRITE;
-	if (access == VM_WRITE)
-		flags |= FAULT_FLAG_WRITE;
-	if (!(flags & FAULT_FLAG_USER))
-		goto lock_mmap;
-	vma = lock_vma_under_rcu(mm, address);
-	if (!vma)
-		goto lock_mmap;
-	if (!(vma->vm_flags & access)) {
-		vma_end_read(vma);
-		goto lock_mmap;
-	}
-	fault = handle_mm_fault(vma, address, flags | FAULT_FLAG_VMA_LOCK, regs);
-	if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
-		vma_end_read(vma);
-	if (!(fault & VM_FAULT_RETRY)) {
-		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
-		if (likely(!(fault & VM_FAULT_ERROR)))
-			fault = 0;
+		vmf.vm_flags = VM_WRITE;
+	if (vmf.vm_flags == VM_WRITE)
+		vmf.flags |= FAULT_FLAG_WRITE;
+
+	fault = try_vma_locked_page_fault(&vmf);
+	if (fault == VM_FAULT_NONE)
+		goto lock_mm;
+	if (!(fault & VM_FAULT_RETRY))
 		goto out;
-	}
-	count_vm_vma_lock_event(VMA_LOCK_RETRY);
+
 	/* Quick path to respond to signals */
 	if (fault_signal_pending(fault, regs)) {
 		fault = VM_FAULT_SIGNAL;
 		goto out;
 	}
-lock_mmap:
+
+lock_mm:
 	mmap_read_lock(mm);
 
 	gmap = NULL;
 	if (IS_ENABLED(CONFIG_PGSTE) && type == GMAP_FAULT) {
 		gmap = (struct gmap *) S390_lowcore.gmap;
 		current->thread.gmap_addr = address;
-		current->thread.gmap_write_flag = !!(flags & FAULT_FLAG_WRITE);
+		current->thread.gmap_write_flag = !!(vmf.flags & FAULT_FLAG_WRITE);
 		current->thread.gmap_int_code = regs->int_code & 0xffff;
 		address = __gmap_translate(gmap, address);
 		if (address == -EFAULT) {
@@ -444,7 +432,7 @@ static inline vm_fault_t do_exception(struct pt_regs *regs, int access)
 			goto out_up;
 		}
 		if (gmap->pfault_enabled)
-			flags |= FAULT_FLAG_RETRY_NOWAIT;
+			vmf.flags |= FAULT_FLAG_RETRY_NOWAIT;
 	}
 
 retry:
@@ -466,7 +454,7 @@ static inline vm_fault_t do_exception(struct pt_regs *regs, int access)
 	 * we can handle it..
 	 */
 	fault = VM_FAULT_BADACCESS;
-	if (unlikely(!(vma->vm_flags & access)))
+	if (unlikely(!(vma->vm_flags & vmf.vm_flags)))
 		goto out_up;
 
 	/*
@@ -474,10 +462,10 @@ static inline vm_fault_t do_exception(struct pt_regs *regs, int access)
 	 * make sure we exit gracefully rather than endlessly redo
 	 * the fault.
 	 */
-	fault = handle_mm_fault(vma, address, flags, regs);
+	fault = handle_mm_fault(vma, address, vmf.flags, regs);
 	if (fault_signal_pending(fault, regs)) {
 		fault = VM_FAULT_SIGNAL;
-		if (flags & FAULT_FLAG_RETRY_NOWAIT)
+		if (vmf.flags & FAULT_FLAG_RETRY_NOWAIT)
 			goto out_up;
 		goto out;
 	}
@@ -497,7 +485,7 @@ static inline vm_fault_t do_exception(struct pt_regs *regs, int access)
 	if (fault & VM_FAULT_RETRY) {
 		if (IS_ENABLED(CONFIG_PGSTE) && gmap &&
-			(flags & FAULT_FLAG_RETRY_NOWAIT)) {
+			(vmf.flags & FAULT_FLAG_RETRY_NOWAIT)) {
 			/*
 			 * FAULT_FLAG_RETRY_NOWAIT has been set, mmap_lock has
 			 * not been released
 			 */
@@ -506,8 +494,8 @@ static inline vm_fault_t do_exception(struct pt_regs *regs, int access)
 			fault = VM_FAULT_PFAULT;
 			goto out_up;
 		}
-		flags &= ~FAULT_FLAG_RETRY_NOWAIT;
-		flags |= FAULT_FLAG_TRIED;
+		vmf.flags &= ~FAULT_FLAG_RETRY_NOWAIT;
+		vmf.flags |= FAULT_FLAG_TRIED;
 		mmap_read_lock(mm);
 		goto retry;
 	}
From patchwork Mon Aug 21 12:30:51 2023
From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton
Subject: [PATCH rfc v2 05/10] powerpc: mm: use try_vma_locked_page_fault()
Date: Mon, 21 Aug 2023 20:30:51 +0800
Message-ID: <20230821123056.2109942-6-wangkefeng.wang@huawei.com>
In-Reply-To: <20230821123056.2109942-1-wangkefeng.wang@huawei.com>
References: <20230821123056.2109942-1-wangkefeng.wang@huawei.com>
Use the new try_vma_locked_page_fault() helper to simplify the code.
No functional change intended.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 arch/powerpc/mm/fault.c | 66 ++++++++++++++++++++---------------------
 1 file changed, 32 insertions(+), 34 deletions(-)

diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index b1723094d464..52f9546e020e 100644
--- a/arch/powerpc/mm/fault.c
+++ b/arch/powerpc/mm/fault.c
@@ -391,6 +391,22 @@ static int page_fault_is_bad(unsigned long err)
 #define page_fault_is_bad(__err)	((__err) & DSISR_BAD_FAULT_32S)
 #endif
 
+#ifdef CONFIG_PER_VMA_LOCK
+bool arch_vma_access_error(struct vm_area_struct *vma, struct vm_fault *vmf)
+{
+	int is_exec = TRAP(vmf->regs) == INTERRUPT_INST_STORAGE;
+	int is_write = page_fault_is_write(vmf->fault_code);
+
+	if (unlikely(access_pkey_error(is_write, is_exec,
+				       (vmf->fault_code & DSISR_KEYFAULT), vma)))
+		return true;
+
+	if (unlikely(access_error(is_write, is_exec, vma)))
+		return true;
+	return false;
+}
+#endif
+
 /*
  * For 600- and 800-family processors, the error_code parameter is DSISR
  * for a data fault, SRR1 for an instruction fault.
@@ -407,12 +423,18 @@ static int ___do_page_fault(struct pt_regs *regs, unsigned long address,
 {
 	struct vm_area_struct * vma;
 	struct mm_struct *mm = current->mm;
-	unsigned int flags = FAULT_FLAG_DEFAULT;
 	int is_exec = TRAP(regs) == INTERRUPT_INST_STORAGE;
 	int is_user = user_mode(regs);
 	int is_write = page_fault_is_write(error_code);
 	vm_fault_t fault, major = 0;
 	bool kprobe_fault = kprobe_page_fault(regs, 11);
+	struct vm_fault vmf = {
+		.real_address = address,
+		.fault_code = error_code,
+		.regs = regs,
+		.flags = FAULT_FLAG_DEFAULT,
+	};
+
 	if (unlikely(debugger_fault_handler(regs) || kprobe_fault))
 		return 0;
@@ -463,45 +485,21 @@ static int ___do_page_fault(struct pt_regs *regs, unsigned long address,
 	 * mmap_lock held
 	 */
 	if (is_user)
-		flags |= FAULT_FLAG_USER;
+		vmf.flags |= FAULT_FLAG_USER;
 	if (is_write)
-		flags |= FAULT_FLAG_WRITE;
+		vmf.flags |= FAULT_FLAG_WRITE;
 	if (is_exec)
-		flags |= FAULT_FLAG_INSTRUCTION;
+		vmf.flags |= FAULT_FLAG_INSTRUCTION;
 
-	if (!(flags & FAULT_FLAG_USER))
-		goto lock_mmap;
-
-	vma = lock_vma_under_rcu(mm, address);
-	if (!vma)
-		goto lock_mmap;
-
-	if (unlikely(access_pkey_error(is_write, is_exec,
-				       (error_code & DSISR_KEYFAULT), vma))) {
-		vma_end_read(vma);
-		goto lock_mmap;
-	}
-
-	if (unlikely(access_error(is_write, is_exec, vma))) {
-		vma_end_read(vma);
-		goto lock_mmap;
-	}
-
-	fault = handle_mm_fault(vma, address, flags | FAULT_FLAG_VMA_LOCK, regs);
-	if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
-		vma_end_read(vma);
-
-	if (!(fault & VM_FAULT_RETRY)) {
-		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
+	fault = try_vma_locked_page_fault(&vmf);
+	if (fault == VM_FAULT_NONE)
+		goto retry;
+	if (!(fault & VM_FAULT_RETRY))
 		goto done;
-	}
-	count_vm_vma_lock_event(VMA_LOCK_RETRY);
 
 	if (fault_signal_pending(fault, regs))
 		return user_mode(regs) ? 0 : SIGBUS;
 
-lock_mmap:
-
 	/* When running in the kernel we expect faults to occur only to
 	 * addresses in user space.  All other faults represent errors in the
 	 * kernel and should generate an OOPS.  Unfortunately, in the case of an
@@ -528,7 +526,7 @@ static int ___do_page_fault(struct pt_regs *regs, unsigned long address,
 	 * make sure we exit gracefully rather than endlessly redo
 	 * the fault.
 	 */
-	fault = handle_mm_fault(vma, address, flags, regs);
+	fault = handle_mm_fault(vma, address, vmf.flags, regs);
 
 	major |= fault & VM_FAULT_MAJOR;
 
@@ -544,7 +542,7 @@ static int ___do_page_fault(struct pt_regs *regs, unsigned long address,
 	 * case.
 	 */
 	if (unlikely(fault & VM_FAULT_RETRY)) {
-		flags |= FAULT_FLAG_TRIED;
+		vmf.flags |= FAULT_FLAG_TRIED;
 		goto retry;
 	}
From patchwork Mon Aug 21 12:30:52 2023
From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton
Subject: [PATCH rfc v2 06/10] riscv: mm: use try_vma_locked_page_fault()
Date: Mon, 21 Aug 2023 20:30:52 +0800
Message-ID: <20230821123056.2109942-7-wangkefeng.wang@huawei.com>
In-Reply-To: <20230821123056.2109942-1-wangkefeng.wang@huawei.com>
References: <20230821123056.2109942-1-wangkefeng.wang@huawei.com>

Use the new try_vma_locked_page_fault() helper to simplify the code.
No functional change intended.
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 arch/riscv/mm/fault.c | 58 ++++++++++++++++++-------------------------
 1 file changed, 24 insertions(+), 34 deletions(-)

diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c
index 6115d7514972..b46129b636f2 100644
--- a/arch/riscv/mm/fault.c
+++ b/arch/riscv/mm/fault.c
@@ -215,6 +215,13 @@ static inline bool access_error(unsigned long cause, struct vm_area_struct *vma)
 	return false;
 }
 
+#ifdef CONFIG_PER_VMA_LOCK
+bool arch_vma_access_error(struct vm_area_struct *vma, struct vm_fault *vmf)
+{
+	return access_error(vmf->fault_code, vma);
+}
+#endif
+
 /*
  * This routine handles page faults.  It determines the address and the
  * problem, and then passes it off to one of the appropriate routines.
@@ -223,17 +230,16 @@ void handle_page_fault(struct pt_regs *regs)
 {
 	struct task_struct *tsk;
 	struct vm_area_struct *vma;
-	struct mm_struct *mm;
-	unsigned long addr, cause;
-	unsigned int flags = FAULT_FLAG_DEFAULT;
+	struct mm_struct *mm = current->mm;
+	unsigned long addr = regs->badaddr;
+	unsigned long cause = regs->cause;
 	int code = SEGV_MAPERR;
 	vm_fault_t fault;
-
-	cause = regs->cause;
-	addr = regs->badaddr;
-
-	tsk = current;
-	mm = tsk->mm;
+	struct vm_fault vmf = {
+		.real_address = addr,
+		.fault_code = cause,
+		.flags = FAULT_FLAG_DEFAULT,
+	};
 
 	if (kprobe_page_fault(regs, cause))
 		return;
@@ -268,7 +274,7 @@ void handle_page_fault(struct pt_regs *regs)
 	}
 
 	if (user_mode(regs))
-		flags |= FAULT_FLAG_USER;
+		vmf.flags |= FAULT_FLAG_USER;
 
 	if (!user_mode(regs) && addr < TASK_SIZE &&
 	    unlikely(!(regs->status & SR_SUM))) {
 		if (fixup_exception(regs))
@@ -280,37 +286,21 @@ void handle_page_fault(struct pt_regs *regs)
 	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, addr);
 
 	if (cause == EXC_STORE_PAGE_FAULT)
-		flags |= FAULT_FLAG_WRITE;
+		vmf.flags |= FAULT_FLAG_WRITE;
 	else if (cause == EXC_INST_PAGE_FAULT)
-		flags |= FAULT_FLAG_INSTRUCTION;
-	if (!(flags & FAULT_FLAG_USER))
-		goto lock_mmap;
-
-	vma = lock_vma_under_rcu(mm, addr);
-	if (!vma)
-		goto lock_mmap;
+		vmf.flags |= FAULT_FLAG_INSTRUCTION;
 
-	if (unlikely(access_error(cause, vma))) {
-		vma_end_read(vma);
-		goto lock_mmap;
-	}
-
-	fault = handle_mm_fault(vma, addr, flags | FAULT_FLAG_VMA_LOCK, regs);
-	if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
-		vma_end_read(vma);
-
-	if (!(fault & VM_FAULT_RETRY)) {
-		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
+	fault = try_vma_locked_page_fault(&vmf);
+	if (fault == VM_FAULT_NONE)
+		goto retry;
+	if (!(fault & VM_FAULT_RETRY))
 		goto done;
-	}
-	count_vm_vma_lock_event(VMA_LOCK_RETRY);
 
 	if (fault_signal_pending(fault, regs)) {
 		if (!user_mode(regs))
 			no_context(regs, addr);
 		return;
 	}
-lock_mmap:
 
 retry:
 	vma = lock_mm_and_find_vma(mm, addr, regs);
@@ -337,7 +327,7 @@ void handle_page_fault(struct pt_regs *regs)
 	 * make sure we exit gracefully rather than endlessly redo
 	 * the fault.
 	 */
-	fault = handle_mm_fault(vma, addr, flags, regs);
+	fault = handle_mm_fault(vma, addr, vmf.flags, regs);
 
 	/*
 	 * If we need to retry but a fatal signal is pending, handle the
@@ -355,7 +345,7 @@ void handle_page_fault(struct pt_regs *regs)
 		return;
 
 	if (unlikely(fault & VM_FAULT_RETRY)) {
-		flags |= FAULT_FLAG_TRIED;
+		vmf.flags |= FAULT_FLAG_TRIED;
 
 		/*
 		 * No need to mmap_read_unlock(mm) as we would
From patchwork Mon Aug 21 12:30:53 2023
From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton
Subject: [PATCH rfc v2 07/10] ARM: mm: try VMA lock-based page fault handling first
Date: Mon, 21 Aug 2023 20:30:53 +0800
Message-ID: <20230821123056.2109942-8-wangkefeng.wang@huawei.com>
In-Reply-To: <20230821123056.2109942-1-wangkefeng.wang@huawei.com>
References: <20230821123056.2109942-1-wangkefeng.wang@huawei.com>
Attempt VMA lock-based page fault handling first, and fall back to the
existing mmap_lock-based handling if that fails.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 arch/arm/Kconfig    |  1 +
 arch/arm/mm/fault.c | 35 +++++++++++++++++++++++++----------
 2 files changed, 26 insertions(+), 10 deletions(-)

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 1a6a6eb48a15..8b6d4507ccee 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -34,6 +34,7 @@ config ARM
 	select ARCH_OPTIONAL_KERNEL_RWX_DEFAULT if CPU_V7
 	select ARCH_SUPPORTS_ATOMIC_RMW
 	select ARCH_SUPPORTS_HUGETLBFS if ARM_LPAE
+	select ARCH_SUPPORTS_PER_VMA_LOCK
 	select ARCH_USE_BUILTIN_BSWAP
 	select ARCH_USE_CMPXCHG_LOCKREF
 	select ARCH_USE_MEMTEST
diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c
index fef62e4a9edd..d53bb028899a 100644
--- a/arch/arm/mm/fault.c
+++ b/arch/arm/mm/fault.c
@@ -242,8 +242,11 @@ do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
 	struct vm_area_struct *vma;
 	int sig, code;
 	vm_fault_t fault;
-	unsigned int flags = FAULT_FLAG_DEFAULT;
-	unsigned long vm_flags = VM_ACCESS_FLAGS;
+	struct vm_fault vmf = {
+		.real_address = addr,
+		.flags = FAULT_FLAG_DEFAULT,
+		.vm_flags = VM_ACCESS_FLAGS,
+	};
 
 	if (kprobe_page_fault(regs, fsr))
 		return 0;
@@ -261,15 +264,15 @@ do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
 		goto no_context;
 
 	if (user_mode(regs))
-		flags |= FAULT_FLAG_USER;
+		vmf.flags |= FAULT_FLAG_USER;
 
 	if (is_write_fault(fsr)) {
-		flags |= FAULT_FLAG_WRITE;
-		vm_flags = VM_WRITE;
+		vmf.flags |= FAULT_FLAG_WRITE;
+		vmf.vm_flags = VM_WRITE;
 	}
 
 	if (fsr & FSR_LNX_PF) {
-		vm_flags = VM_EXEC;
+		vmf.vm_flags = VM_EXEC;
 
 		if (is_permission_fault(fsr) && !user_mode(regs))
 			die_kernel_fault("execution of memory",
@@ -278,6 +281,18 @@ do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
 	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, addr);
 
+	fault = try_vma_locked_page_fault(&vmf);
+	if (fault == VM_FAULT_NONE)
+		goto retry;
+	if (!(fault & VM_FAULT_RETRY))
+		goto done;
+
+	if (fault_signal_pending(fault, regs)) {
+		if (!user_mode(regs))
+			goto no_context;
+		return 0;
+	}
+
 retry:
 	vma = lock_mm_and_find_vma(mm, addr, regs);
 	if (unlikely(!vma)) {
@@ -289,10 +304,10 @@ do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
 	 * ok, we have a good vm_area for this memory access, check the
 	 * permissions on the VMA allow for the fault which occurred.
 	 */
-	if (!(vma->vm_flags & vm_flags))
+	if (!(vma->vm_flags & vmf.vm_flags))
 		fault = VM_FAULT_BADACCESS;
 	else
-		fault = handle_mm_fault(vma, addr & PAGE_MASK, flags, regs);
+		fault = handle_mm_fault(vma, addr & PAGE_MASK, vmf.flags, regs);
 
 	/* If we need to retry but a fatal signal is pending, handle the
 	 * signal first. We do not need to release the mmap_lock because
@@ -310,13 +325,13 @@ do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
 
 	if (!(fault & VM_FAULT_ERROR)) {
 		if (fault & VM_FAULT_RETRY) {
-			flags |= FAULT_FLAG_TRIED;
+			vmf.flags |= FAULT_FLAG_TRIED;
 			goto retry;
 		}
 	}
 
 	mmap_read_unlock(mm);
-
+done:
 	/*
 	 * Handle the "normal" case first - VM_FAULT_MAJOR
 	 */

From patchwork Mon Aug 21 12:30:54 2023
From: Kefeng Wang
Subject: [PATCH rfc v2 08/10] loongarch: mm: cleanup __do_page_fault()
Date: Mon, 21 Aug 2023 20:30:54 +0800
Message-ID: <20230821123056.2109942-9-wangkefeng.wang@huawei.com>
In-Reply-To: <20230821123056.2109942-1-wangkefeng.wang@huawei.com>
References: <20230821123056.2109942-1-wangkefeng.wang@huawei.com>
Clean up __do_page_fault() by reusing the bad_area_nosemaphore and
bad_area labels.
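The shape of this cleanup is easier to see outside the kernel. Below is a minimal standalone C sketch of the same pattern (shared `bad_area`/`bad_area_nosemaphore` exit labels instead of repeated signal-and-return sequences); `fake_unlock()` and `fake_sigsegv()` are illustrative stand-ins, not kernel APIs:

```c
#include <stdbool.h>

static int unlock_calls;
static int sigsegv_calls;

static void fake_unlock(void)  { unlock_calls++; }  /* stands in for mmap_read_unlock() */
static void fake_sigsegv(void) { sigsegv_calls++; } /* stands in for do_sigsegv() */

/*
 * One exit path instead of several copies of "signal and return":
 * errors found while the lock is not held jump to bad_area_nosemaphore;
 * errors found while it is held jump to bad_area so the lock is dropped
 * before the signal is raised.
 */
static void handle_fault(bool vma_found, bool access_ok)
{
	if (!vma_found)
		goto bad_area_nosemaphore;	/* lookup failed: no lock held */

	if (!access_ok)
		goto bad_area;			/* lock held: unlock, then signal */

	fake_unlock();				/* success path */
	return;

bad_area:
	fake_unlock();
bad_area_nosemaphore:
	fake_sigsegv();
}
```

The point of funnelling every error through one unlock site and one signalling site is that later patches can add new error exits without duplicating the cleanup.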
Signed-off-by: Kefeng Wang
---
 arch/loongarch/mm/fault.c | 48 +++++++++++++--------------------------
 1 file changed, 16 insertions(+), 32 deletions(-)

diff --git a/arch/loongarch/mm/fault.c b/arch/loongarch/mm/fault.c
index e6376e3dce86..5d4c742c4bc5 100644
--- a/arch/loongarch/mm/fault.c
+++ b/arch/loongarch/mm/fault.c
@@ -157,18 +157,15 @@ static void __kprobes __do_page_fault(struct pt_regs *regs,
 		if (!user_mode(regs))
 			no_context(regs, write, address);
 		else
-			do_sigsegv(regs, write, address, si_code);
-		return;
+			goto bad_area_nosemaphore;
 	}
 
 	/*
 	 * If we're in an interrupt or have no user
 	 * context, we must not take the fault..
 	 */
-	if (faulthandler_disabled() || !mm) {
-		do_sigsegv(regs, write, address, si_code);
-		return;
-	}
+	if (faulthandler_disabled() || !mm)
+		goto bad_area_nosemaphore;
 
 	if (user_mode(regs))
 		flags |= FAULT_FLAG_USER;
@@ -178,23 +175,7 @@ static void __kprobes __do_page_fault(struct pt_regs *regs,
 	vma = lock_mm_and_find_vma(mm, address, regs);
 	if (unlikely(!vma))
 		goto bad_area_nosemaphore;
-	goto good_area;
-
-/*
- * Something tried to access memory that isn't in our memory map..
- * Fix it, but check if it's kernel or user first..
- */
-bad_area:
-	mmap_read_unlock(mm);
-bad_area_nosemaphore:
-	do_sigsegv(regs, write, address, si_code);
-	return;
 
-/*
- * Ok, we have a good vm_area for this memory access, so
- * we can handle it..
- */
-good_area:
 	si_code = SEGV_ACCERR;
 
 	if (write) {
@@ -235,22 +216,25 @@ static void __kprobes __do_page_fault(struct pt_regs *regs,
 		 */
 		goto retry;
 	}
+
+	mmap_read_unlock(mm);
+
 	if (unlikely(fault & VM_FAULT_ERROR)) {
-		mmap_read_unlock(mm);
-		if (fault & VM_FAULT_OOM) {
+		if (fault & VM_FAULT_OOM)
 			do_out_of_memory(regs, write, address);
-			return;
-		} else if (fault & VM_FAULT_SIGSEGV) {
-			do_sigsegv(regs, write, address, si_code);
-			return;
-		} else if (fault & (VM_FAULT_SIGBUS|VM_FAULT_HWPOISON|VM_FAULT_HWPOISON_LARGE)) {
+		else if (fault & VM_FAULT_SIGSEGV)
+			goto bad_area_nosemaphore;
+		else if (fault & (VM_FAULT_SIGBUS|VM_FAULT_HWPOISON|VM_FAULT_HWPOISON_LARGE))
 			do_sigbus(regs, write, address, si_code);
-			return;
-		}
-		BUG();
+		else
+			BUG();
 	}
+
+	return;
+
+bad_area:
 	mmap_read_unlock(mm);
+bad_area_nosemaphore:
+	do_sigsegv(regs, write, address, si_code);
 }
 
 asmlinkage void __kprobes do_page_fault(struct pt_regs *regs,

From patchwork Mon Aug 21 12:30:55 2023
From: Kefeng Wang
Subject: [PATCH rfc v2 09/10] loongarch: mm: add access_error() helper
Date: Mon, 21 Aug 2023 20:30:55 +0800
Message-ID: <20230821123056.2109942-10-wangkefeng.wang@huawei.com>
In-Reply-To: <20230821123056.2109942-1-wangkefeng.wang@huawei.com>
References: <20230821123056.2109942-1-wangkefeng.wang@huawei.com>
Add an access_error() helper to check whether a vma is accessible for
the given fault; it will be used in __do_page_fault() and, later, in
the VMA lock-based page fault path.
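As a rough illustration of the helper's decision table, here is a standalone C sketch using plain bitmasks; the VM_*/FAULT_FLAG_WRITE values and the `era` parameter (standing in for `exception_era(regs)`, the faulting instruction's address) are illustrative, not the kernel's definitions:

```c
#include <stdbool.h>

/* illustrative permission bits, mirroring the kernel's VM_* flags */
#define VM_READ   0x1u
#define VM_WRITE  0x2u
#define VM_EXEC   0x4u

#define FAULT_FLAG_WRITE 0x1u

/*
 * Same shape as the patch's access_error(): a write fault needs VM_WRITE;
 * otherwise a fault at the faulting instruction's own address (addr == era)
 * is an instruction fetch and needs VM_EXEC, while any other read needs
 * VM_READ.  Returns true when the access is NOT permitted.
 */
static bool access_error(unsigned int flags, unsigned long era,
			 unsigned long addr, unsigned int vm_flags)
{
	if (flags & FAULT_FLAG_WRITE) {
		if (!(vm_flags & VM_WRITE))
			return true;
	} else {
		if (!(vm_flags & VM_READ) && addr != era)
			return true;
		if (!(vm_flags & VM_EXEC) && addr == era)
			return true;
	}
	return false;
}
```

Factoring the check out this way is what lets the final patch reuse the exact same predicate from the per-VMA-lock path via arch_vma_access_error().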
Signed-off-by: Kefeng Wang
---
 arch/loongarch/mm/fault.c | 30 ++++++++++++++++++++----------
 1 file changed, 20 insertions(+), 10 deletions(-)

diff --git a/arch/loongarch/mm/fault.c b/arch/loongarch/mm/fault.c
index 5d4c742c4bc5..2a45e9f3a485 100644
--- a/arch/loongarch/mm/fault.c
+++ b/arch/loongarch/mm/fault.c
@@ -126,6 +126,22 @@ static void __kprobes do_sigsegv(struct pt_regs *regs,
 	force_sig_fault(SIGSEGV, si_code, (void __user *)address);
 }
 
+static inline bool access_error(unsigned int flags, struct pt_regs *regs,
+				unsigned long addr, struct vm_area_struct *vma)
+{
+	if (flags & FAULT_FLAG_WRITE) {
+		if (!(vma->vm_flags & VM_WRITE))
+			return true;
+	} else {
+		if (!(vma->vm_flags & VM_READ) && addr != exception_era(regs))
+			return true;
+		if (!(vma->vm_flags & VM_EXEC) && addr == exception_era(regs))
+			return true;
+	}
+
+	return false;
+}
+
 /*
  * This routine handles page faults.  It determines the address,
  * and the problem, and then passes it off to one of the appropriate
@@ -169,6 +185,8 @@ static void __kprobes __do_page_fault(struct pt_regs *regs,
 
 	if (user_mode(regs))
 		flags |= FAULT_FLAG_USER;
+	if (write)
+		flags |= FAULT_FLAG_WRITE;
 
 	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
 retry:
@@ -178,16 +196,8 @@ static void __kprobes __do_page_fault(struct pt_regs *regs,
 
 	si_code = SEGV_ACCERR;
 
-	if (write) {
-		flags |= FAULT_FLAG_WRITE;
-		if (!(vma->vm_flags & VM_WRITE))
-			goto bad_area;
-	} else {
-		if (!(vma->vm_flags & VM_READ) && address != exception_era(regs))
-			goto bad_area;
-		if (!(vma->vm_flags & VM_EXEC) && address == exception_era(regs))
-			goto bad_area;
-	}
+	if (access_error(flags, regs, address, vma))
+		goto bad_area;
 
 	/*
 	 * If for any reason at all we couldn't handle the fault,

From patchwork Mon Aug 21 12:30:56 2023
From: Kefeng Wang
Subject: [PATCH rfc v2 10/10] loongarch: mm: try VMA lock-based page fault handling first
Date: Mon, 21 Aug 2023 20:30:56 +0800
Message-ID: <20230821123056.2109942-11-wangkefeng.wang@huawei.com>
In-Reply-To: <20230821123056.2109942-1-wangkefeng.wang@huawei.com>
References: <20230821123056.2109942-1-wangkefeng.wang@huawei.com>
Attempt VMA lock-based page fault handling first, and fall back to the
existing mmap_lock-based handling if that fails.

Signed-off-by: Kefeng Wang
---
 arch/loongarch/Kconfig    |  1 +
 arch/loongarch/mm/fault.c | 37 +++++++++++++++++++++++++++++++------
 2 files changed, 32 insertions(+), 6 deletions(-)

diff --git a/arch/loongarch/Kconfig b/arch/loongarch/Kconfig
index 2b27b18a63af..6b821f621920 100644
--- a/arch/loongarch/Kconfig
+++ b/arch/loongarch/Kconfig
@@ -56,6 +56,7 @@ config LOONGARCH
 	select ARCH_SUPPORTS_LTO_CLANG
 	select ARCH_SUPPORTS_LTO_CLANG_THIN
 	select ARCH_SUPPORTS_NUMA_BALANCING
+	select ARCH_SUPPORTS_PER_VMA_LOCK
 	select ARCH_USE_BUILTIN_BSWAP
 	select ARCH_USE_CMPXCHG_LOCKREF
 	select ARCH_USE_QUEUED_RWLOCKS
diff --git a/arch/loongarch/mm/fault.c b/arch/loongarch/mm/fault.c
index 2a45e9f3a485..f7ac3a14bb06 100644
--- a/arch/loongarch/mm/fault.c
+++ b/arch/loongarch/mm/fault.c
@@ -142,6 +142,13 @@ static inline bool access_error(unsigned int flags, struct pt_regs *regs,
 	return false;
 }
 
+#ifdef CONFIG_PER_VMA_LOCK
+bool arch_vma_access_error(struct vm_area_struct *vma, struct vm_fault *vmf)
+{
+	return access_error(vmf->flags, vmf->regs, vmf->real_address, vma);
+}
+#endif
+
 /*
  * This routine handles page faults.  It determines the address,
  * and the problem, and then passes it off to one of the appropriate
@@ -151,11 +158,15 @@ static void __kprobes __do_page_fault(struct pt_regs *regs,
 			unsigned long write, unsigned long address)
 {
 	int si_code = SEGV_MAPERR;
-	unsigned int flags = FAULT_FLAG_DEFAULT;
 	struct task_struct *tsk = current;
 	struct mm_struct *mm = tsk->mm;
 	struct vm_area_struct *vma = NULL;
 	vm_fault_t fault;
+	struct vm_fault vmf = {
+		.real_address = address,
+		.regs = regs,
+		.flags = FAULT_FLAG_DEFAULT,
+	};
 
 	if (kprobe_page_fault(regs, current->thread.trap_nr))
 		return;
@@ -184,11 +195,24 @@ static void __kprobes __do_page_fault(struct pt_regs *regs,
 		goto bad_area_nosemaphore;
 
 	if (user_mode(regs))
-		flags |= FAULT_FLAG_USER;
+		vmf.flags |= FAULT_FLAG_USER;
 	if (write)
-		flags |= FAULT_FLAG_WRITE;
+		vmf.flags |= FAULT_FLAG_WRITE;
 
 	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
+
+	fault = try_vma_locked_page_fault(&vmf);
+	if (fault == VM_FAULT_NONE)
+		goto retry;
+	if (!(fault & VM_FAULT_RETRY))
+		goto done;
+
+	if (fault_signal_pending(fault, regs)) {
+		if (!user_mode(regs))
+			no_context(regs, write, address);
+		return;
+	}
+
 retry:
 	vma = lock_mm_and_find_vma(mm, address, regs);
 	if (unlikely(!vma))
@@ -196,7 +220,7 @@ static void __kprobes __do_page_fault(struct pt_regs *regs,
 
 	si_code = SEGV_ACCERR;
 
-	if (access_error(flags, regs, address, vma))
+	if (access_error(vmf.flags, regs, address, vma))
 		goto bad_area;
 
 	/*
@@ -204,7 +228,7 @@ static void __kprobes __do_page_fault(struct pt_regs *regs,
 	 * make sure we exit gracefully rather than endlessly redo
 	 * the fault.
 	 */
-	fault = handle_mm_fault(vma, address, flags, regs);
+	fault = handle_mm_fault(vma, address, vmf.flags, regs);
 
 	if (fault_signal_pending(fault, regs)) {
 		if (!user_mode(regs))
@@ -217,7 +241,7 @@ static void __kprobes __do_page_fault(struct pt_regs *regs,
 		return;
 
 	if (unlikely(fault & VM_FAULT_RETRY)) {
-		flags |= FAULT_FLAG_TRIED;
+		vmf.flags |= FAULT_FLAG_TRIED;
 
 		/*
 		 * No need to mmap_read_unlock(mm) as we would
@@ -229,6 +253,7 @@ static void __kprobes __do_page_fault(struct pt_regs *regs,
 
 	mmap_read_unlock(mm);
 
+done:
 	if (unlikely(fault & VM_FAULT_ERROR)) {
 		if (fault & VM_FAULT_OOM)
 			do_out_of_memory(regs, write, address);
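Taken together, the arch patches in this series converge on one shape: try the per-VMA-lock fast path first, and fall back to the mmap_lock slow path when it returns VM_FAULT_NONE (not attempted) or VM_FAULT_RETRY. A minimal standalone C model of that ordering follows; the VM_FAULT_* values and both handler stubs are illustrative, not the kernel's:

```c
#include <stdbool.h>

typedef unsigned int vm_fault_t;

/* illustrative result bits, mirroring how the patches test vm_fault_t */
#define VM_FAULT_NONE     0x0u  /* locked path not attempted: take slow path */
#define VM_FAULT_RETRY    0x1u  /* locked path bailed out: retry under mmap_lock */
#define VM_FAULT_COMPLETE 0x2u  /* illustrative "handled" bit */

/* stub standing in for try_vma_locked_page_fault() */
static vm_fault_t try_locked(bool supported, bool resolved)
{
	if (!supported)
		return VM_FAULT_NONE;
	return resolved ? VM_FAULT_COMPLETE : VM_FAULT_RETRY;
}

/* stub standing in for the mmap_lock-based handle_mm_fault() path */
static vm_fault_t slow_path(void) { return VM_FAULT_COMPLETE; }

/*
 * Mirrors the patched __do_page_fault() ordering: only when the locked
 * attempt neither returned NONE nor asked for a retry do we skip the
 * slow path (the "goto done" case); NONE and RETRY both funnel into
 * the "retry:" slow path.
 */
static vm_fault_t page_fault(bool supported, bool resolved, int *slow_calls)
{
	vm_fault_t f = try_locked(supported, resolved);

	if (f != VM_FAULT_NONE && !(f & VM_FAULT_RETRY))
		return f;		/* "goto done" */

	(*slow_calls)++;		/* "retry:" mmap_lock path */
	return slow_path();
}
```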