From patchwork Tue Apr 2 07:51:36 2024
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 13613530
From: Kefeng Wang
Cc: Russell King, Catalin Marinas, Will Deacon, Michael Ellerman,
    Nicholas Piggin, Christophe Leroy, Paul Walmsley, Palmer Dabbelt,
    Albert Ou, Alexander Gordeev, Gerald Schaefer, Dave Hansen,
    Andy Lutomirski, Peter Zijlstra, Kefeng Wang
Subject: [PATCH 1/7] arm64: mm: cleanup __do_page_fault()
Date: Tue, 2 Apr 2024 15:51:36 +0800
Message-ID: <20240402075142.196265-2-wangkefeng.wang@huawei.com>
In-Reply-To: <20240402075142.196265-1-wangkefeng.wang@huawei.com>
References: <20240402075142.196265-1-wangkefeng.wang@huawei.com>
__do_page_fault() only checks vma->vm_flags and calls handle_mm_fault(), and
it is only called by do_page_fault(), so squash it into do_page_fault() to
clean up the code.

Signed-off-by: Kefeng Wang
Reviewed-by: Suren Baghdasaryan
Reviewed-by: Catalin Marinas
---
 arch/arm64/mm/fault.c | 27 +++++++--------------------
 1 file changed, 7 insertions(+), 20 deletions(-)

diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 8251e2fea9c7..9bb9f395351a 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -486,25 +486,6 @@ static void do_bad_area(unsigned long far, unsigned long esr,
 	}
 }
 
-#define VM_FAULT_BADMAP		((__force vm_fault_t)0x010000)
-#define VM_FAULT_BADACCESS	((__force vm_fault_t)0x020000)
-
-static vm_fault_t __do_page_fault(struct mm_struct *mm,
-				  struct vm_area_struct *vma, unsigned long addr,
-				  unsigned int mm_flags, unsigned long vm_flags,
-				  struct pt_regs *regs)
-{
-	/*
-	 * Ok, we have a good vm_area for this memory access, so we can handle
-	 * it.
-	 * Check that the permissions on the VMA allow for the fault which
-	 * occurred.
-	 */
-	if (!(vma->vm_flags & vm_flags))
-		return VM_FAULT_BADACCESS;
-	return handle_mm_fault(vma, addr, mm_flags, regs);
-}
-
 static bool is_el0_instruction_abort(unsigned long esr)
 {
 	return ESR_ELx_EC(esr) == ESR_ELx_EC_IABT_LOW;
@@ -519,6 +500,9 @@ static bool is_write_abort(unsigned long esr)
 	return (esr & ESR_ELx_WNR) && !(esr & ESR_ELx_CM);
 }
 
+#define VM_FAULT_BADMAP		((__force vm_fault_t)0x010000)
+#define VM_FAULT_BADACCESS	((__force vm_fault_t)0x020000)
+
 static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
 				   struct pt_regs *regs)
 {
@@ -617,7 +601,10 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
 		goto done;
 	}
 
-	fault = __do_page_fault(mm, vma, addr, mm_flags, vm_flags, regs);
+	if (!(vma->vm_flags & vm_flags))
+		fault = VM_FAULT_BADACCESS;
+	else
+		fault = handle_mm_fault(vma, addr, mm_flags, regs);
 
 	/* Quick path to respond to signals */
 	if (fault_signal_pending(fault, regs)) {
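For reference, the VM_FAULT_BADACCESS case that the inlined check above
classifies shows up in userspace as SIGSEGV with si_code SEGV_ACCERR. The
small standalone program below is illustrative only and is not part of the
patch (the names and the 4096-byte mapping are ours); it triggers that case
by writing to a read-only mapping:

#define _GNU_SOURCE
#include <setjmp.h>
#include <signal.h>
#include <stdio.h>
#include <sys/mman.h>

static sigjmp_buf env;
static volatile sig_atomic_t seen_code;

static void segv_handler(int sig, siginfo_t *info, void *uctx)
{
        (void)sig;
        (void)uctx;
        seen_code = info->si_code;      /* SEGV_ACCERR: VMA exists, permission denied */
        siglongjmp(env, 1);
}

int main(void)
{
        struct sigaction sa = { 0 };

        sa.sa_sigaction = segv_handler;
        sa.sa_flags = SA_SIGINFO;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGSEGV, &sa, NULL);

        /* Readable but not writable: the VMA is there, but a write does not
         * match vma->vm_flags, so the kernel reports an access error. */
        volatile char *p = mmap(NULL, 4096, PROT_READ,
                                MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED)
                return 1;

        if (sigsetjmp(env, 1) == 0)
                *p = 1;                 /* faults here */

        printf("si_code=%d (%s)\n", (int)seen_code,
               seen_code == SEGV_ACCERR ? "SEGV_ACCERR" : "other");
        return 0;
}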
From patchwork Tue Apr 2 07:51:37 2024
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 13613526
From: Kefeng Wang
Subject: [PATCH 2/7] arm64: mm: accelerate pagefault when VM_FAULT_BADACCESS
Date: Tue, 2 Apr 2024 15:51:37 +0800
Message-ID: <20240402075142.196265-3-wangkefeng.wang@huawei.com>
In-Reply-To: <20240402075142.196265-1-wangkefeng.wang@huawei.com>

The vm_flags of the vma are already checked under the per-VMA lock. If the
access is bad, directly set fault to VM_FAULT_BADACCESS and handle the
error; there is no need to fall back to lock_mm_and_find_vma() and check
vm_flags again. This reduces the latency of lmbench
'lat_sig -P 1 prot lat_sig' by 34%.
Signed-off-by: Kefeng Wang
Reviewed-by: Suren Baghdasaryan
Reviewed-by: Catalin Marinas
---
 arch/arm64/mm/fault.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 9bb9f395351a..405f9aa831bd 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -572,7 +572,9 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
 
 	if (!(vma->vm_flags & vm_flags)) {
 		vma_end_read(vma);
-		goto lock_mmap;
+		fault = VM_FAULT_BADACCESS;
+		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
+		goto done;
 	}
 	fault = handle_mm_fault(vma, addr, mm_flags | FAULT_FLAG_VMA_LOCK, regs);
 	if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
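The 34% figure comes from lmbench's lat_sig. Below is a rough standalone
approximation of 'lat_sig prot' for illustration only; it is not the lmbench
source, and the iteration count and names are arbitrary choices of ours. It
times the write-fault/SIGSEGV round trip that this patch shortens for the
bad-access case:

#define _GNU_SOURCE
#include <setjmp.h>
#include <signal.h>
#include <stdio.h>
#include <sys/mman.h>
#include <time.h>

#define LOOPS 100000                    /* arbitrary iteration count */

static sigjmp_buf env;

static void segv_handler(int sig, siginfo_t *info, void *uctx)
{
        (void)sig;
        (void)info;
        (void)uctx;
        siglongjmp(env, 1);             /* unwind back into the timing loop */
}

int main(void)
{
        struct sigaction sa = { 0 };
        struct timespec t0, t1;
        volatile char *p;
        long i;

        sa.sa_sigaction = segv_handler;
        sa.sa_flags = SA_SIGINFO;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGSEGV, &sa, NULL);

        p = mmap(NULL, 4096, PROT_READ, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED)
                return 1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (i = 0; i < LOOPS; i++) {
                if (sigsetjmp(env, 1) == 0)
                        *p = 1;         /* write to a read-only page: one protection fault */
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
        printf("avg protection-fault + SIGSEGV latency: %.0f ns\n", ns / LOOPS);
        return 0;
}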
From patchwork Tue Apr 2 07:51:38 2024
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 13613525
From: Kefeng Wang
Subject: [PATCH 3/7] arm: mm: accelerate pagefault when VM_FAULT_BADACCESS
Date: Tue, 2 Apr 2024 15:51:38 +0800
Message-ID: <20240402075142.196265-4-wangkefeng.wang@huawei.com>
In-Reply-To: <20240402075142.196265-1-wangkefeng.wang@huawei.com>

The vm_flags of the vma are already checked under the per-VMA lock. If the
access is bad, directly set fault to VM_FAULT_BADACCESS and handle the
error; there is no need to fall back to lock_mm_and_find_vma() and check
vm_flags again.

Signed-off-by: Kefeng Wang
Reviewed-by: Suren Baghdasaryan
---
 arch/arm/mm/fault.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c
index 439dc6a26bb9..5c4b417e24f9 100644
--- a/arch/arm/mm/fault.c
+++ b/arch/arm/mm/fault.c
@@ -294,7 +294,9 @@ do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
 
 	if (!(vma->vm_flags & vm_flags)) {
 		vma_end_read(vma);
-		goto lock_mmap;
+		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
+		fault = VM_FAULT_BADACCESS;
+		goto bad_area;
 	}
 	fault = handle_mm_fault(vma, addr, flags | FAULT_FLAG_VMA_LOCK, regs);
 	if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
From patchwork Tue Apr 2 07:51:39 2024
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 13613527
From: Kefeng Wang
Subject: [PATCH 4/7] powerpc: mm: accelerate pagefault when badaccess
Date: Tue, 2 Apr 2024 15:51:39 +0800
Message-ID: <20240402075142.196265-5-wangkefeng.wang@huawei.com>
In-Reply-To: <20240402075142.196265-1-wangkefeng.wang@huawei.com>

The vm_flags of the vma are already checked under the per-VMA lock. If the
access is bad, directly handle the error and return; there is no need to
fall back to lock_mm_and_find_vma() and check vm_flags again.

Signed-off-by: Kefeng Wang
---
 arch/powerpc/mm/fault.c | 33 ++++++++++++++++++++-------------
 1 file changed, 20 insertions(+), 13 deletions(-)

diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index 53335ae21a40..215690452495 100644
--- a/arch/powerpc/mm/fault.c
+++ b/arch/powerpc/mm/fault.c
@@ -71,23 +71,26 @@ static noinline int bad_area_nosemaphore(struct pt_regs *regs, unsigned long add
 	return __bad_area_nosemaphore(regs, address, SEGV_MAPERR);
 }
 
-static int __bad_area(struct pt_regs *regs, unsigned long address, int si_code)
+static int __bad_area(struct pt_regs *regs, unsigned long address, int si_code,
+		      struct mm_struct *mm, struct vm_area_struct *vma)
 {
-	struct mm_struct *mm = current->mm;
 
 	/*
 	 * Something tried to access memory that isn't in our memory map..
 	 * Fix it, but check if it's kernel or user first..
 	 */
-	mmap_read_unlock(mm);
+	if (mm)
+		mmap_read_unlock(mm);
+	else
+		vma_end_read(vma);
 
 	return __bad_area_nosemaphore(regs, address, si_code);
 }
 
 static noinline int bad_access_pkey(struct pt_regs *regs, unsigned long address,
+				    struct mm_struct *mm,
 				    struct vm_area_struct *vma)
 {
-	struct mm_struct *mm = current->mm;
 	int pkey;
 
 	/*
@@ -109,7 +112,10 @@ static noinline int bad_access_pkey(struct pt_regs *regs, unsigned long address,
 	 */
 	pkey = vma_pkey(vma);
 
-	mmap_read_unlock(mm);
+	if (mm)
+		mmap_read_unlock(mm);
+	else
+		vma_end_read(vma);
 
 	/*
 	 * If we are in kernel mode, bail out with a SEGV, this will
@@ -124,9 +130,10 @@ static noinline int bad_access_pkey(struct pt_regs *regs, unsigned long address,
 	return 0;
 }
 
-static noinline int bad_access(struct pt_regs *regs, unsigned long address)
+static noinline int bad_access(struct pt_regs *regs, unsigned long address,
+			       struct mm_struct *mm, struct vm_area_struct *vma)
 {
-	return __bad_area(regs, address, SEGV_ACCERR);
+	return __bad_area(regs, address, SEGV_ACCERR, mm, vma);
 }
 
 static int do_sigbus(struct pt_regs *regs, unsigned long address,
@@ -479,13 +486,13 @@ static int ___do_page_fault(struct pt_regs *regs, unsigned long address,
 
 	if (unlikely(access_pkey_error(is_write, is_exec,
 				       (error_code & DSISR_KEYFAULT), vma))) {
-		vma_end_read(vma);
-		goto lock_mmap;
+		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
+		return bad_access_pkey(regs, address, NULL, vma);
 	}
 
 	if (unlikely(access_error(is_write, is_exec, vma))) {
-		vma_end_read(vma);
-		goto lock_mmap;
+		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
+		return bad_access(regs, address, NULL, vma);
 	}
 
 	fault = handle_mm_fault(vma, address, flags | FAULT_FLAG_VMA_LOCK, regs);
@@ -521,10 +528,10 @@ static int ___do_page_fault(struct pt_regs *regs, unsigned long address,
 
 	if (unlikely(access_pkey_error(is_write, is_exec,
 				       (error_code & DSISR_KEYFAULT), vma)))
-		return bad_access_pkey(regs, address, vma);
+		return bad_access_pkey(regs, address, mm, vma);
 
 	if (unlikely(access_error(is_write, is_exec, vma)))
-		return bad_access(regs, address);
+		return bad_access(regs, address, mm, vma);
 
 	/*
 	 * If for any reason at all we couldn't handle the fault,
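The powerpc helpers (like the x86 ones later in this series) now take an mm
pointer and treat NULL as "the caller held only the per-VMA lock". The toy
sketch below uses userspace stand-ins so it compiles on its own; none of the
types, stubs, or the fault_unlock_demo() name are real kernel code, they
only illustrate the convention:

#include <stdio.h>
#include <stddef.h>

struct mm_struct { int unused; };               /* stand-in, not the kernel type */
struct vm_area_struct { int unused; };          /* stand-in, not the kernel type */

static void mmap_read_unlock(struct mm_struct *mm)
{
        (void)mm;
        puts("drop mmap read lock");
}

static void vma_end_read(struct vm_area_struct *vma)
{
        (void)vma;
        puts("drop per-VMA read lock");
}

/* Same shape as __bad_area()/bad_access_pkey() after this patch: a NULL mm
 * tells the helper that the caller was on the per-VMA-lock fast path. */
static void fault_unlock_demo(struct mm_struct *mm, struct vm_area_struct *vma)
{
        if (mm)
                mmap_read_unlock(mm);
        else
                vma_end_read(vma);
}

int main(void)
{
        struct mm_struct mm;
        struct vm_area_struct vma;

        fault_unlock_demo(&mm, &vma);   /* classic mmap_lock path */
        fault_unlock_demo(NULL, &vma);  /* per-VMA lock path */
        return 0;
}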
From patchwork Tue Apr 2 07:51:40 2024
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 13613529
From: Kefeng Wang
Subject: [PATCH 5/7] riscv: mm: accelerate pagefault when badaccess
Date: Tue, 2 Apr 2024 15:51:40 +0800
Message-ID: <20240402075142.196265-6-wangkefeng.wang@huawei.com>
In-Reply-To: <20240402075142.196265-1-wangkefeng.wang@huawei.com>

The vm_flags of the vma are already checked under the per-VMA lock. If the
access is bad, directly handle the error and return; there is no need to
fall back to lock_mm_and_find_vma() and check vm_flags again.
Signed-off-by: Kefeng Wang
Reviewed-by: Suren Baghdasaryan
---
 arch/riscv/mm/fault.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c
index 3ba1d4dde5dd..b3fcf7d67efb 100644
--- a/arch/riscv/mm/fault.c
+++ b/arch/riscv/mm/fault.c
@@ -292,7 +292,10 @@ void handle_page_fault(struct pt_regs *regs)
 
 	if (unlikely(access_error(cause, vma))) {
 		vma_end_read(vma);
-		goto lock_mmap;
+		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
+		tsk->thread.bad_cause = SEGV_ACCERR;
+		bad_area_nosemaphore(regs, code, addr);
+		return;
 	}
 
 	fault = handle_mm_fault(vma, addr, flags | FAULT_FLAG_VMA_LOCK, regs);
From patchwork Tue Apr 2 07:51:41 2024
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 13613532
From: Kefeng Wang
Subject: [PATCH 6/7] s390: mm: accelerate pagefault when badaccess
Date: Tue, 2 Apr 2024 15:51:41 +0800
Message-ID: <20240402075142.196265-7-wangkefeng.wang@huawei.com>
In-Reply-To: <20240402075142.196265-1-wangkefeng.wang@huawei.com>

The vm_flags of the vma are already checked under the per-VMA lock. If the
access is bad, directly handle the error and return; there is no need to
fall back to lock_mm_and_find_vma() and check vm_flags again.

Signed-off-by: Kefeng Wang
---
 arch/s390/mm/fault.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/s390/mm/fault.c b/arch/s390/mm/fault.c
index c421dd44ffbe..162ca2576fd4 100644
--- a/arch/s390/mm/fault.c
+++ b/arch/s390/mm/fault.c
@@ -325,7 +325,8 @@ static void do_exception(struct pt_regs *regs, int access)
 		goto lock_mmap;
 	if (!(vma->vm_flags & access)) {
 		vma_end_read(vma);
-		goto lock_mmap;
+		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
+		return handle_fault_error_nolock(regs, SEGV_ACCERR);
 	}
 	fault = handle_mm_fault(vma, address, flags | FAULT_FLAG_VMA_LOCK, regs);
 	if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
From patchwork Tue Apr 2 07:51:42 2024
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 13613531
From: Kefeng Wang
Subject: [PATCH 7/7] x86: mm: accelerate pagefault when badaccess
Date: Tue, 2 Apr 2024 15:51:42 +0800
Message-ID: <20240402075142.196265-8-wangkefeng.wang@huawei.com>
In-Reply-To: <20240402075142.196265-1-wangkefeng.wang@huawei.com>

The vm_flags of the vma are already checked under the per-VMA lock. If the
access is bad, directly handle the error and return; there is no need to
fall back to lock_mm_and_find_vma() and check vm_flags again.
Signed-off-by: Kefeng Wang
Reviewed-by: Suren Baghdasaryan
---
 arch/x86/mm/fault.c | 23 ++++++++++++++---------
 1 file changed, 14 insertions(+), 9 deletions(-)

diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index a4cc20d0036d..67b18adc75dd 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -866,14 +866,17 @@ bad_area_nosemaphore(struct pt_regs *regs, unsigned long error_code,
 
 static void
 __bad_area(struct pt_regs *regs, unsigned long error_code,
-	   unsigned long address, u32 pkey, int si_code)
+	   unsigned long address, struct mm_struct *mm,
+	   struct vm_area_struct *vma, u32 pkey, int si_code)
 {
-	struct mm_struct *mm = current->mm;
 	/*
 	 * Something tried to access memory that isn't in our memory map..
 	 * Fix it, but check if it's kernel or user first..
 	 */
-	mmap_read_unlock(mm);
+	if (mm)
+		mmap_read_unlock(mm);
+	else
+		vma_end_read(vma);
 
 	__bad_area_nosemaphore(regs, error_code, address, pkey, si_code);
 }
@@ -897,7 +900,8 @@ static inline bool bad_area_access_from_pkeys(unsigned long error_code,
 
 static noinline void
 bad_area_access_error(struct pt_regs *regs, unsigned long error_code,
-		      unsigned long address, struct vm_area_struct *vma)
+		      unsigned long address, struct mm_struct *mm,
+		      struct vm_area_struct *vma)
 {
 	/*
 	 * This OSPKE check is not strictly necessary at runtime.
@@ -927,9 +931,9 @@ bad_area_access_error(struct pt_regs *regs, unsigned long error_code,
 		 */
 		u32 pkey = vma_pkey(vma);
 
-		__bad_area(regs, error_code, address, pkey, SEGV_PKUERR);
+		__bad_area(regs, error_code, address, mm, vma, pkey, SEGV_PKUERR);
 	} else {
-		__bad_area(regs, error_code, address, 0, SEGV_ACCERR);
+		__bad_area(regs, error_code, address, mm, vma, 0, SEGV_ACCERR);
 	}
 }
 
@@ -1357,8 +1361,9 @@ void do_user_addr_fault(struct pt_regs *regs,
 		goto lock_mmap;
 
 	if (unlikely(access_error(error_code, vma))) {
-		vma_end_read(vma);
-		goto lock_mmap;
+		bad_area_access_error(regs, error_code, address, NULL, vma);
+		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
+		return;
 	}
 	fault = handle_mm_fault(vma, address, flags | FAULT_FLAG_VMA_LOCK, regs);
 	if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
@@ -1394,7 +1399,7 @@ void do_user_addr_fault(struct pt_regs *regs,
 	 * we can handle it..
 	 */
 	if (unlikely(access_error(error_code, vma))) {
-		bad_area_access_error(regs, error_code, address, vma);
+		bad_area_access_error(regs, error_code, address, mm, vma);
 		return;
 	}
 
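On the per-VMA-lock path the x86 patch still distinguishes protection-key
faults (SEGV_PKUERR) from plain permission faults (SEGV_ACCERR) via
bad_area_access_error(). The userspace check below is illustrative only and
not part of the patch; it assumes a kernel and CPU with protection-key
support (e.g. x86 PKU), and all names in it are ours:

#define _GNU_SOURCE
#include <setjmp.h>
#include <signal.h>
#include <stdio.h>
#include <sys/mman.h>

static sigjmp_buf env;
static volatile sig_atomic_t seen_code;

static void segv_handler(int sig, siginfo_t *info, void *uctx)
{
        (void)sig;
        (void)uctx;
        seen_code = info->si_code;
        siglongjmp(env, 1);
}

int main(void)
{
        struct sigaction sa = { 0 };

        sa.sa_sigaction = segv_handler;
        sa.sa_flags = SA_SIGINFO;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGSEGV, &sa, NULL);

        int pkey = pkey_alloc(0, 0);
        if (pkey < 0) {
                perror("pkey_alloc (no protection-key support?)");
                return 1;
        }

        volatile char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED)
                return 1;

        pkey_mprotect((void *)p, 4096, PROT_READ | PROT_WRITE, pkey);
        pkey_set(pkey, PKEY_DISABLE_ACCESS);    /* deny all access via this pkey */

        if (sigsetjmp(env, 1) == 0)
                (void)*p;                       /* faults: the pkey forbids the read */

        printf("si_code=%d (%s)\n", (int)seen_code,
               seen_code == SEGV_PKUERR ? "SEGV_PKUERR" : "other");
        return 0;
}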