From patchwork Tue Apr 2 07:51:36 2024
From: Kefeng Wang
Subject: [PATCH 1/7] arm64: mm: cleanup __do_page_fault()
Date: Tue, 2 Apr 2024 15:51:36 +0800
Message-ID: <20240402075142.196265-2-wangkefeng.wang@huawei.com>
In-Reply-To: <20240402075142.196265-1-wangkefeng.wang@huawei.com>
__do_page_fault() only checks vma->vm_flags and calls handle_mm_fault(), and
it is only called by do_page_fault(), so squash it into do_page_fault() to
clean up the code.

Signed-off-by: Kefeng Wang
Reviewed-by: Suren Baghdasaryan
Reviewed-by: Catalin Marinas
---
 arch/arm64/mm/fault.c | 27 +++++++-------------------
 1 file changed, 7 insertions(+), 20 deletions(-)

diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 8251e2fea9c7..9bb9f395351a 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -486,25 +486,6 @@ static void do_bad_area(unsigned long far, unsigned long esr,
 	}
 }
 
-#define VM_FAULT_BADMAP		((__force vm_fault_t)0x010000)
-#define VM_FAULT_BADACCESS	((__force vm_fault_t)0x020000)
-
-static vm_fault_t __do_page_fault(struct mm_struct *mm,
-				  struct vm_area_struct *vma, unsigned long addr,
-				  unsigned int mm_flags, unsigned long vm_flags,
-				  struct pt_regs *regs)
-{
-	/*
-	 * Ok, we have a good vm_area for this memory access, so we can handle
-	 * it.
-	 * Check that the permissions on the VMA allow for the fault which
-	 * occurred.
-	 */
-	if (!(vma->vm_flags & vm_flags))
-		return VM_FAULT_BADACCESS;
-	return handle_mm_fault(vma, addr, mm_flags, regs);
-}
-
 static bool is_el0_instruction_abort(unsigned long esr)
 {
 	return ESR_ELx_EC(esr) == ESR_ELx_EC_IABT_LOW;
@@ -519,6 +500,9 @@ static bool is_write_abort(unsigned long esr)
 	return (esr & ESR_ELx_WNR) && !(esr & ESR_ELx_CM);
 }
 
+#define VM_FAULT_BADMAP		((__force vm_fault_t)0x010000)
+#define VM_FAULT_BADACCESS	((__force vm_fault_t)0x020000)
+
 static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
 				   struct pt_regs *regs)
 {
@@ -617,7 +601,10 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
 		goto done;
 	}
 
-	fault = __do_page_fault(mm, vma, addr, mm_flags, vm_flags, regs);
+	if (!(vma->vm_flags & vm_flags))
+		fault = VM_FAULT_BADACCESS;
+	else
+		fault = handle_mm_fault(vma, addr, mm_flags, regs);
 
 	/* Quick path to respond to signals */
 	if (fault_signal_pending(fault, regs)) {
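The "bad access" case that VM_FAULT_BADACCESS covers is the one where a VMA
exists at the faulting address but its permissions forbid the access;
userspace sees it as SIGSEGV with si_code SEGV_ACCERR. A minimal,
illustrative userspace program (not part of the patch, just a sketch of how
to exercise exactly this path):

/* Illustrative only: provoke a permission ("bad access") fault. */
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

static void segv_handler(int sig, siginfo_t *info, void *uc)
{
	/* SEGV_ACCERR: the mapping exists but its permissions forbid it. */
	const char *msg = (info->si_code == SEGV_ACCERR) ?
		"SIGSEGV: SEGV_ACCERR (bad access)\n" :
		"SIGSEGV: other si_code\n";
	(void)sig; (void)uc;
	(void)write(STDOUT_FILENO, msg, strlen(msg));
	_exit(0);
}

int main(void)
{
	struct sigaction sa = { .sa_sigaction = segv_handler,
				.sa_flags = SA_SIGINFO };
	char *p;

	sigaction(SIGSEGV, &sa, NULL);

	/* Read-only anonymous mapping: a store to it is a permission fault. */
	p = mmap(NULL, 4096, PROT_READ, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	p[0] = 1;	/* faults: the VMA exists, but lacks write permission */
	return 0;
}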
From patchwork Tue Apr 2 07:51:37 2024
From: Kefeng Wang
Subject: [PATCH 2/7] arm64: mm: accelerate pagefault when VM_FAULT_BADACCESS
Date: Tue, 2 Apr 2024 15:51:37 +0800
Message-ID: <20240402075142.196265-3-wangkefeng.wang@huawei.com>
In-Reply-To: <20240402075142.196265-1-wangkefeng.wang@huawei.com>

The vma->vm_flags are already checked under the per-VMA lock; if the access
is bad, directly set fault to VM_FAULT_BADACCESS and handle the error, with
no need to retry via lock_mm_and_find_vma() and check vm_flags again. The
latency of lmbench 'lat_sig -P 1 prot lat_sig' is reduced by 34%.
Signed-off-by: Kefeng Wang
Reviewed-by: Suren Baghdasaryan
Reviewed-by: Catalin Marinas
---
 arch/arm64/mm/fault.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 9bb9f395351a..405f9aa831bd 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -572,7 +572,9 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
 
 	if (!(vma->vm_flags & vm_flags)) {
 		vma_end_read(vma);
-		goto lock_mmap;
+		fault = VM_FAULT_BADACCESS;
+		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
+		goto done;
 	}
 	fault = handle_mm_fault(vma, addr, mm_flags | FAULT_FLAG_VMA_LOCK, regs);
 	if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
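For context, lmbench's 'lat_sig prot' case measures the round trip of a
protection fault through the kernel fault path into a userspace SIGSEGV
handler, which is why skipping the mmap_lock fallback shows up directly in
that number. A rough, self-contained stand-in for that measurement (a sketch
only, not lmbench itself; absolute numbers will differ and also include the
mprotect() cost):

/* Sketch of a protection-fault latency loop, in the spirit of lat_sig prot. */
#include <signal.h>
#include <stdio.h>
#include <sys/mman.h>
#include <time.h>
#include <unistd.h>

#define PAGE_SZ 4096
#define ITERS   100000L

static char *page;

static void handler(int sig, siginfo_t *info, void *uc)
{
	(void)sig; (void)info; (void)uc;
	/* Make the faulting write succeed so the loop can continue. */
	mprotect(page, PAGE_SZ, PROT_READ | PROT_WRITE);
}

int main(void)
{
	struct sigaction sa = { .sa_sigaction = handler, .sa_flags = SA_SIGINFO };
	struct timespec t0, t1;
	long i;

	sigaction(SIGSEGV, &sa, NULL);
	page = mmap(NULL, PAGE_SZ, PROT_READ,
		    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (page == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (i = 0; i < ITERS; i++) {
		mprotect(page, PAGE_SZ, PROT_READ);	/* re-arm the fault */
		page[0] = 1;				/* protection fault */
	}
	clock_gettime(CLOCK_MONOTONIC, &t1);

	printf("%.0f ns per protection fault (incl. mprotect)\n",
	       ((t1.tv_sec - t0.tv_sec) * 1e9 +
		(t1.tv_nsec - t0.tv_nsec)) / ITERS);
	return 0;
}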
From patchwork Tue Apr 2 07:51:38 2024
From: Kefeng Wang
Subject: [PATCH 3/7] arm: mm: accelerate pagefault when VM_FAULT_BADACCESS
Date: Tue, 2 Apr 2024 15:51:38 +0800
Message-ID: <20240402075142.196265-4-wangkefeng.wang@huawei.com>
In-Reply-To: <20240402075142.196265-1-wangkefeng.wang@huawei.com>

The vma->vm_flags are already checked under the per-VMA lock; if the access
is bad, directly set fault to VM_FAULT_BADACCESS and handle the error, with
no need to retry via lock_mm_and_find_vma() and check vm_flags again.

Signed-off-by: Kefeng Wang
Reviewed-by: Suren Baghdasaryan
---
 arch/arm/mm/fault.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c
index 439dc6a26bb9..5c4b417e24f9 100644
--- a/arch/arm/mm/fault.c
+++ b/arch/arm/mm/fault.c
@@ -294,7 +294,9 @@ do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
 
 	if (!(vma->vm_flags & vm_flags)) {
 		vma_end_read(vma);
-		goto lock_mmap;
+		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
+		fault = VM_FAULT_BADACCESS;
+		goto bad_area;
 	}
 	fault = handle_mm_fault(vma, addr, flags | FAULT_FLAG_VMA_LOCK, regs);
 	if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
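Both the arm and arm64 fast paths hinge on the same !(vma->vm_flags & vm_flags)
test, where vm_flags holds the permission this particular fault requires. A
simplified, self-contained sketch of that relationship (stand-in constants and
a hypothetical helper, not the kernel's code; the real handlers derive the mask
from the fault status/ESR bits and widen it, e.g. write implies read):

#include <stdbool.h>
#include <stdio.h>

/* Stand-ins for the kernel's vm_flags permission bits (illustrative). */
#define VM_READ		0x1UL
#define VM_WRITE	0x2UL
#define VM_EXEC		0x4UL

/*
 * The permission a fault needs: exec for instruction fetches, write for
 * stores, read for loads. A VMA lacking this bit is a "bad access"
 * (simplified; the real handlers widen the read case).
 */
static unsigned long required_vm_flags(bool is_exec, bool is_write)
{
	if (is_exec)
		return VM_EXEC;
	if (is_write)
		return VM_WRITE;
	return VM_READ;
}

int main(void)
{
	unsigned long vma_flags = VM_READ | VM_EXEC;	/* e.g. a text mapping */

	/* A store to a read/exec-only mapping is rejected as a bad access. */
	printf("write allowed: %s\n",
	       (vma_flags & required_vm_flags(false, true)) ? "yes" : "no");
	return 0;
}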
From patchwork Tue Apr 2 07:51:39 2024
From: Kefeng Wang
Subject: [PATCH 4/7] powerpc: mm: accelerate pagefault when badaccess
Date: Tue, 2 Apr 2024 15:51:39 +0800
Message-ID: <20240402075142.196265-5-wangkefeng.wang@huawei.com>
In-Reply-To: <20240402075142.196265-1-wangkefeng.wang@huawei.com>

The vma->vm_flags are already checked under the per-VMA lock; if the access
is bad, directly handle the error and return, with no need to retry via
lock_mm_and_find_vma() and check vm_flags again.

Signed-off-by: Kefeng Wang
---
 arch/powerpc/mm/fault.c | 33 ++++++++++++++++++++-------------
 1 file changed, 20 insertions(+), 13 deletions(-)

diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index 53335ae21a40..215690452495 100644
--- a/arch/powerpc/mm/fault.c
+++ b/arch/powerpc/mm/fault.c
@@ -71,23 +71,26 @@ static noinline int bad_area_nosemaphore(struct pt_regs *regs, unsigned long add
 	return __bad_area_nosemaphore(regs, address, SEGV_MAPERR);
 }
 
-static int __bad_area(struct pt_regs *regs, unsigned long address, int si_code)
+static int __bad_area(struct pt_regs *regs, unsigned long address, int si_code,
+		      struct mm_struct *mm, struct vm_area_struct *vma)
 {
-	struct mm_struct *mm = current->mm;
 
 	/*
 	 * Something tried to access memory that isn't in our memory map..
 	 * Fix it, but check if it's kernel or user first..
 	 */
-	mmap_read_unlock(mm);
+	if (mm)
+		mmap_read_unlock(mm);
+	else
+		vma_end_read(vma);
 
 	return __bad_area_nosemaphore(regs, address, si_code);
 }
 
 static noinline int bad_access_pkey(struct pt_regs *regs, unsigned long address,
+				    struct mm_struct *mm,
 				    struct vm_area_struct *vma)
 {
-	struct mm_struct *mm = current->mm;
 	int pkey;
 
 	/*
@@ -109,7 +112,10 @@ static noinline int bad_access_pkey(struct pt_regs *regs, unsigned long address,
 	 */
 	pkey = vma_pkey(vma);
 
-	mmap_read_unlock(mm);
+	if (mm)
+		mmap_read_unlock(mm);
+	else
+		vma_end_read(vma);
 
 	/*
 	 * If we are in kernel mode, bail out with a SEGV, this will
@@ -124,9 +130,10 @@ static noinline int bad_access_pkey(struct pt_regs *regs, unsigned long address,
 	return 0;
 }
 
-static noinline int bad_access(struct pt_regs *regs, unsigned long address)
+static noinline int bad_access(struct pt_regs *regs, unsigned long address,
+			       struct mm_struct *mm, struct vm_area_struct *vma)
 {
-	return __bad_area(regs, address, SEGV_ACCERR);
+	return __bad_area(regs, address, SEGV_ACCERR, mm, vma);
 }
 
 static int do_sigbus(struct pt_regs *regs, unsigned long address,
@@ -479,13 +486,13 @@ static int ___do_page_fault(struct pt_regs *regs, unsigned long address,
 
 	if (unlikely(access_pkey_error(is_write, is_exec,
 				       (error_code & DSISR_KEYFAULT), vma))) {
-		vma_end_read(vma);
-		goto lock_mmap;
+		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
+		return bad_access_pkey(regs, address, NULL, vma);
 	}
 
 	if (unlikely(access_error(is_write, is_exec, vma))) {
-		vma_end_read(vma);
-		goto lock_mmap;
+		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
+		return bad_access(regs, address, NULL, vma);
 	}
 
 	fault = handle_mm_fault(vma, address, flags | FAULT_FLAG_VMA_LOCK, regs);
@@ -521,10 +528,10 @@ static int ___do_page_fault(struct pt_regs *regs, unsigned long address,
 
 	if (unlikely(access_pkey_error(is_write, is_exec,
 				       (error_code & DSISR_KEYFAULT), vma)))
-		return bad_access_pkey(regs, address, vma);
+		return bad_access_pkey(regs, address, mm, vma);
 
 	if (unlikely(access_error(is_write, is_exec, vma)))
-		return bad_access(regs, address);
+		return bad_access(regs, address, mm, vma);
 
 	/*
	 * If for any reason at all we couldn't handle the fault,
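The pkey variant handled above (bad_access_pkey(), reported to userspace as
SEGV_PKUERR) can be triggered from userspace with memory protection keys. An
illustrative program, assuming a CPU and kernel with protection-key support
and glibc's pkey_alloc()/pkey_mprotect() wrappers; on systems without pkey
support the pkey_alloc() call simply fails:

/* Illustrative only: provoke a protection-key fault (SEGV_PKUERR). */
#define _GNU_SOURCE
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

static void handler(int sig, siginfo_t *info, void *uc)
{
	const char *msg = (info->si_code == SEGV_PKUERR) ?
		"SIGSEGV: SEGV_PKUERR (pkey denied the access)\n" :
		"SIGSEGV: other si_code\n";
	(void)sig; (void)uc;
	(void)write(STDOUT_FILENO, msg, strlen(msg));
	_exit(0);
}

int main(void)
{
	struct sigaction sa = { .sa_sigaction = handler, .sa_flags = SA_SIGINFO };
	char *p;
	int pkey;

	sigaction(SIGSEGV, &sa, NULL);

	p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
		 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* Allocate a key whose rights deny all access, and tag the page. */
	pkey = pkey_alloc(0, PKEY_DISABLE_ACCESS);
	if (pkey < 0 || pkey_mprotect(p, 4096, PROT_READ | PROT_WRITE, pkey)) {
		perror("pkey");	/* no protection-key support */
		return 1;
	}

	p[0] = 1;	/* page permissions allow it, but the pkey does not */
	return 0;
}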
From patchwork Tue Apr 2 07:51:40 2024
From: Kefeng Wang
Subject: [PATCH 5/7] riscv: mm: accelerate pagefault when badaccess
Date: Tue, 2 Apr 2024 15:51:40 +0800
Message-ID: <20240402075142.196265-6-wangkefeng.wang@huawei.com>
In-Reply-To: <20240402075142.196265-1-wangkefeng.wang@huawei.com>

The vma->vm_flags are already checked under the per-VMA lock; if the access
is bad, directly handle the error and return, with no need to retry via
lock_mm_and_find_vma() and check vm_flags again.
Signed-off-by: Kefeng Wang
Reviewed-by: Suren Baghdasaryan
---
 arch/riscv/mm/fault.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c
index 3ba1d4dde5dd..b3fcf7d67efb 100644
--- a/arch/riscv/mm/fault.c
+++ b/arch/riscv/mm/fault.c
@@ -292,7 +292,10 @@ void handle_page_fault(struct pt_regs *regs)
 
 	if (unlikely(access_error(cause, vma))) {
 		vma_end_read(vma);
-		goto lock_mmap;
+		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
+		tsk->thread.bad_cause = SEGV_ACCERR;
+		bad_area_nosemaphore(regs, code, addr);
+		return;
 	}
 
 	fault = handle_mm_fault(vma, addr, flags | FAULT_FLAG_VMA_LOCK, regs);
From patchwork Tue Apr 2 07:51:41 2024
From: Kefeng Wang
Subject: [PATCH 6/7] s390: mm: accelerate pagefault when badaccess
Date: Tue, 2 Apr 2024 15:51:41 +0800
Message-ID: <20240402075142.196265-7-wangkefeng.wang@huawei.com>
In-Reply-To: <20240402075142.196265-1-wangkefeng.wang@huawei.com>

The vma->vm_flags are already checked under the per-VMA lock; if the access
is bad, directly handle the error and return, with no need to retry via
lock_mm_and_find_vma() and check vm_flags again.

Signed-off-by: Kefeng Wang
---
 arch/s390/mm/fault.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/s390/mm/fault.c b/arch/s390/mm/fault.c
index c421dd44ffbe..162ca2576fd4 100644
--- a/arch/s390/mm/fault.c
+++ b/arch/s390/mm/fault.c
@@ -325,7 +325,8 @@ static void do_exception(struct pt_regs *regs, int access)
 		goto lock_mmap;
 	if (!(vma->vm_flags & access)) {
 		vma_end_read(vma);
-		goto lock_mmap;
+		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
+		return handle_fault_error_nolock(regs, SEGV_ACCERR);
 	}
 	fault = handle_mm_fault(vma, address, flags | FAULT_FLAG_VMA_LOCK, regs);
 	if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
From patchwork Tue Apr 2 07:51:42 2024
From: Kefeng Wang
Subject: [PATCH 7/7] x86: mm: accelerate pagefault when badaccess
Date: Tue, 2 Apr 2024 15:51:42 +0800
Message-ID: <20240402075142.196265-8-wangkefeng.wang@huawei.com>
In-Reply-To: <20240402075142.196265-1-wangkefeng.wang@huawei.com>

The vma->vm_flags are already checked under the per-VMA lock; if the access
is bad, directly handle the error and return, with no need to retry via
lock_mm_and_find_vma() and check vm_flags again.
Signed-off-by: Kefeng Wang
Reviewed-by: Suren Baghdasaryan
---
 arch/x86/mm/fault.c | 23 ++++++++++++++---------
 1 file changed, 14 insertions(+), 9 deletions(-)

diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index a4cc20d0036d..67b18adc75dd 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -866,14 +866,17 @@ bad_area_nosemaphore(struct pt_regs *regs, unsigned long error_code,
 
 static void
 __bad_area(struct pt_regs *regs, unsigned long error_code,
-	   unsigned long address, u32 pkey, int si_code)
+	   unsigned long address, struct mm_struct *mm,
+	   struct vm_area_struct *vma, u32 pkey, int si_code)
 {
-	struct mm_struct *mm = current->mm;
 	/*
 	 * Something tried to access memory that isn't in our memory map..
 	 * Fix it, but check if it's kernel or user first..
 	 */
-	mmap_read_unlock(mm);
+	if (mm)
+		mmap_read_unlock(mm);
+	else
+		vma_end_read(vma);
 
 	__bad_area_nosemaphore(regs, error_code, address, pkey, si_code);
 }
@@ -897,7 +900,8 @@ static inline bool bad_area_access_from_pkeys(unsigned long error_code,
 
 static noinline void
 bad_area_access_error(struct pt_regs *regs, unsigned long error_code,
-		      unsigned long address, struct vm_area_struct *vma)
+		      unsigned long address, struct mm_struct *mm,
+		      struct vm_area_struct *vma)
 {
 	/*
 	 * This OSPKE check is not strictly necessary at runtime.
@@ -927,9 +931,9 @@ bad_area_access_error(struct pt_regs *regs, unsigned long error_code,
 		 */
 		u32 pkey = vma_pkey(vma);
 
-		__bad_area(regs, error_code, address, pkey, SEGV_PKUERR);
+		__bad_area(regs, error_code, address, mm, vma, pkey, SEGV_PKUERR);
 	} else {
-		__bad_area(regs, error_code, address, 0, SEGV_ACCERR);
+		__bad_area(regs, error_code, address, mm, vma, 0, SEGV_ACCERR);
 	}
 }
 
@@ -1357,8 +1361,9 @@ void do_user_addr_fault(struct pt_regs *regs,
 		goto lock_mmap;
 
 	if (unlikely(access_error(error_code, vma))) {
-		vma_end_read(vma);
-		goto lock_mmap;
+		bad_area_access_error(regs, error_code, address, NULL, vma);
+		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
+		return;
 	}
 	fault = handle_mm_fault(vma, address, flags | FAULT_FLAG_VMA_LOCK, regs);
 	if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
@@ -1394,7 +1399,7 @@ void do_user_addr_fault(struct pt_regs *regs,
 	 * we can handle it..
 	 */
 	if (unlikely(access_error(error_code, vma))) {
-		bad_area_access_error(regs, error_code, address, vma);
+		bad_area_access_error(regs, error_code, address, mm, vma);
 		return;
 	}
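Whether the per-VMA-lock fast paths are actually being taken can be
cross-checked through the per-VMA lock counters, assuming the kernel is built
with CONFIG_PER_VMA_LOCK_STATS; the vma_lock_* counter names below are an
assumption and are absent without that option. A small reader, as a sketch:

/* Print the per-VMA lock counters from /proc/vmstat, if present. */
#include <stdio.h>
#include <string.h>

int main(void)
{
	char line[256];
	FILE *f = fopen("/proc/vmstat", "r");

	if (!f) {
		perror("/proc/vmstat");
		return 1;
	}
	/* e.g. vma_lock_success, vma_lock_abort, vma_lock_retry, vma_lock_miss */
	while (fgets(line, sizeof(line), f))
		if (!strncmp(line, "vma_lock_", 9))
			fputs(line, stdout);
	fclose(f);
	return 0;
}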