From patchwork Wed Apr 6 09:13:05 2022
X-Patchwork-Submitter: Tong Tiangen
X-Patchwork-Id: 12802723
From: Tong Tiangen
To: Andrew Morton, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
    Dave Hansen, Catalin Marinas, Will Deacon, Alexander Viro,
    "H. Peter Anvin"
CC: Tong Tiangen
Subject: [RFC PATCH -next V2 1/7] x86: fix copy_mc_to_user compile error
Date: Wed, 6 Apr 2022 09:13:05 +0000
Message-ID: <20220406091311.3354723-2-tongtiangen@huawei.com>
In-Reply-To: <20220406091311.3354723-1-tongtiangen@huawei.com>

A subsequent patch in this series adds a generic copy_mc_to_user() fallback
to include/linux/uaccess.h. x86 already provides its own implementation, so
it must declare that fact (by defining the copy_mc_to_user macro) to avoid
a redefinition and the resulting compile error.
Signed-off-by: Tong Tiangen
---
 arch/x86/include/asm/uaccess.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
index f78e2b3501a1..e18c5f098025 100644
--- a/arch/x86/include/asm/uaccess.h
+++ b/arch/x86/include/asm/uaccess.h
@@ -415,6 +415,7 @@ copy_mc_to_kernel(void *to, const void *from, unsigned len);
 
 unsigned long __must_check
 copy_mc_to_user(void *to, const void *from, unsigned len);
+#define copy_mc_to_user copy_mc_to_user
 #endif

From patchwork Wed Apr 6 09:13:06 2022
X-Patchwork-Submitter: Tong Tiangen
X-Patchwork-Id: 12802724
From: Tong Tiangen
Subject: [RFC PATCH -next V2 2/7] arm64: fix page_address return value in copy_highpage
Date: Wed, 6 Apr 2022 09:13:06 +0000
Message-ID: <20220406091311.3354723-3-tongtiangen@huawei.com>
In-Reply-To: <20220406091311.3354723-1-tongtiangen@huawei.com>

page_address() returns a void *, not a struct page *; use the correct type
for the local variables in copy_highpage().
Signed-off-by: Tong Tiangen
Acked-by: Mark Rutland
---
 arch/arm64/mm/copypage.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c
index b5447e53cd73..0dea80bf6de4 100644
--- a/arch/arm64/mm/copypage.c
+++ b/arch/arm64/mm/copypage.c
@@ -16,8 +16,8 @@
 
 void copy_highpage(struct page *to, struct page *from)
 {
-	struct page *kto = page_address(to);
-	struct page *kfrom = page_address(from);
+	void *kto = page_address(to);
+	void *kfrom = page_address(from);
 
 	copy_page(kto, kfrom);

From patchwork Wed Apr 6 09:13:07 2022
X-Patchwork-Submitter: Tong Tiangen
X-Patchwork-Id: 12802725
From: Tong Tiangen
Subject: [RFC PATCH -next V2 3/7] arm64: add support for machine check error safe
Date: Wed, 6 Apr 2022 09:13:07 +0000
Message-ID: <20220406091311.3354723-4-tongtiangen@huawei.com>
In-Reply-To: <20220406091311.3354723-1-tongtiangen@huawei.com>

When the arm64 kernel consumes a hardware memory error (do_sea()), the
current handling is to panic unconditionally. That is not always optimal:
in some cases the page being accessed in the kernel is a user page (e.g.
via copy_from_user()/get_user()), and killing the user process and
isolating the faulty page is a better choice than a kernel panic.
Consistent with PPC/x86, this is implemented under CONFIG_ARCH_HAS_COPY_MC.

This patch only enables the machine-check-safe framework: it adds an
exception fixup before the kernel panic in do_sea(), limited to hardware
memory errors consumed in kernel mode but triggered by user-mode
processes. If the fixup succeeds, there is no need to panic. It also adds
the _asm_extable_mc macro, used to create extable entries that support
this fixup.
Signed-off-by: Tong Tiangen
---
 arch/arm64/Kconfig                   |  1 +
 arch/arm64/include/asm/asm-extable.h | 13 ++++++++++++
 arch/arm64/include/asm/esr.h         |  5 +++++
 arch/arm64/include/asm/extable.h     |  2 +-
 arch/arm64/kernel/probes/kprobes.c   |  2 +-
 arch/arm64/mm/extable.c              | 20 ++++++++++++++++++-
 arch/arm64/mm/fault.c                | 30 +++++++++++++++++++++++++++-
 include/linux/uaccess.h              |  8 ++++++++
 8 files changed, 77 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index d9325dd95eba..012e38309955 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -19,6 +19,7 @@ config ARM64
 	select ARCH_ENABLE_SPLIT_PMD_PTLOCK if PGTABLE_LEVELS > 2
 	select ARCH_ENABLE_THP_MIGRATION if TRANSPARENT_HUGEPAGE
 	select ARCH_HAS_CACHE_LINE_SIZE
+	select ARCH_HAS_COPY_MC if ACPI_APEI_GHES
 	select ARCH_HAS_CURRENT_STACK_POINTER
 	select ARCH_HAS_DEBUG_VIRTUAL
 	select ARCH_HAS_DEBUG_VM_PGTABLE
diff --git a/arch/arm64/include/asm/asm-extable.h b/arch/arm64/include/asm/asm-extable.h
index c39f2437e08e..74d1db74fd86 100644
--- a/arch/arm64/include/asm/asm-extable.h
+++ b/arch/arm64/include/asm/asm-extable.h
@@ -8,6 +8,11 @@
 #define EX_TYPE_UACCESS_ERR_ZERO	3
 #define EX_TYPE_LOAD_UNALIGNED_ZEROPAD	4
 
+/* _MC indicates that we can fix up from machine check errors */
+#define EX_TYPE_FIXUP_MC		5
+
+#define IS_EX_TYPE_MC(type)	(type == EX_TYPE_FIXUP_MC)
+
 #ifdef __ASSEMBLY__
 
 #define __ASM_EXTABLE_RAW(insn, fixup, type, data)	\
@@ -27,6 +32,14 @@
 	__ASM_EXTABLE_RAW(\insn, \fixup, EX_TYPE_FIXUP, 0)
 	.endm
 
+/*
+ * Create an exception table entry for `insn`, which will branch to `fixup`
+ * when an unhandled fault (including an SEA fault) is taken.
+ */
+	.macro		_asm_extable_mc, insn, fixup
+	__ASM_EXTABLE_RAW(\insn, \fixup, EX_TYPE_FIXUP_MC, 0)
+	.endm
+
 /*
  * Create an exception table entry for `insn` if `fixup` is provided. Otherwise
  * do nothing.
diff --git a/arch/arm64/include/asm/esr.h b/arch/arm64/include/asm/esr.h
index d52a0b269ee8..11fcfc002654 100644
--- a/arch/arm64/include/asm/esr.h
+++ b/arch/arm64/include/asm/esr.h
@@ -330,6 +330,11 @@
 #ifndef __ASSEMBLY__
 #include
 
+static inline bool esr_is_sea(u32 esr)
+{
+	return (esr & ESR_ELx_FSC) == ESR_ELx_FSC_EXTABT;
+}
+
 static inline bool esr_is_data_abort(u32 esr)
 {
 	const u32 ec = ESR_ELx_EC(esr);
diff --git a/arch/arm64/include/asm/extable.h b/arch/arm64/include/asm/extable.h
index 72b0e71cc3de..f7835b0f473b 100644
--- a/arch/arm64/include/asm/extable.h
+++ b/arch/arm64/include/asm/extable.h
@@ -45,5 +45,5 @@ bool ex_handler_bpf(const struct exception_table_entry *ex,
 }
 #endif /* !CONFIG_BPF_JIT */
 
-bool fixup_exception(struct pt_regs *regs);
+bool fixup_exception(struct pt_regs *regs, unsigned int esr);
 #endif
diff --git a/arch/arm64/kernel/probes/kprobes.c b/arch/arm64/kernel/probes/kprobes.c
index d9dfa82c1f18..16a069e8eec3 100644
--- a/arch/arm64/kernel/probes/kprobes.c
+++ b/arch/arm64/kernel/probes/kprobes.c
@@ -285,7 +285,7 @@ int __kprobes kprobe_fault_handler(struct pt_regs *regs, unsigned int fsr)
 		 * In case the user-specified fault handler returned
 		 * zero, try to fix up.
 		 */
-		if (fixup_exception(regs))
+		if (fixup_exception(regs, fsr))
 			return 1;
 	}
 	return 0;
diff --git a/arch/arm64/mm/extable.c b/arch/arm64/mm/extable.c
index 489455309695..f1134c88e849 100644
--- a/arch/arm64/mm/extable.c
+++ b/arch/arm64/mm/extable.c
@@ -9,6 +9,7 @@
 #include
 #include
+#include
 
 static inline unsigned long
 get_ex_fixup(const struct exception_table_entry *ex)
@@ -23,6 +24,18 @@ static bool ex_handler_fixup(const struct exception_table_entry *ex,
 	return true;
 }
 
+static bool ex_handler_fixup_mc(const struct exception_table_entry *ex,
+				struct pt_regs *regs, unsigned int esr)
+{
+	if (esr_is_sea(esr))
+		regs->regs[0] = 0;
+	else
+		regs->regs[0] = 1;
+
+	regs->pc = get_ex_fixup(ex);
+	return true;
+}
+
 static bool ex_handler_uaccess_err_zero(const struct exception_table_entry *ex,
 					struct pt_regs *regs)
 {
@@ -63,7 +76,7 @@ ex_handler_load_unaligned_zeropad(const struct exception_table_entry *ex,
 	return true;
 }
 
-bool fixup_exception(struct pt_regs *regs)
+bool fixup_exception(struct pt_regs *regs, unsigned int esr)
 {
 	const struct exception_table_entry *ex;
 
@@ -71,9 +84,14 @@
 	if (!ex)
 		return false;
 
+	if (esr_is_sea(esr) && !IS_EX_TYPE_MC(ex->type))
+		return false;
+
 	switch (ex->type) {
 	case EX_TYPE_FIXUP:
 		return ex_handler_fixup(ex, regs);
+	case EX_TYPE_FIXUP_MC:
+		return ex_handler_fixup_mc(ex, regs, esr);
 	case EX_TYPE_BPF:
 		return ex_handler_bpf(ex, regs);
 	case EX_TYPE_UACCESS_ERR_ZERO:
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 77341b160aca..ffdfab2fdd60 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -361,7 +361,7 @@ static void __do_kernel_fault(unsigned long addr, unsigned int esr,
 	 * Are we prepared to handle this kernel fault?
 	 * We are almost certainly not prepared to handle instruction faults.
 	 */
-	if (!is_el1_instruction_abort(esr) && fixup_exception(regs))
+	if (!is_el1_instruction_abort(esr) && fixup_exception(regs, esr))
 		return;
 
 	if (WARN_RATELIMIT(is_spurious_el1_translation_fault(addr, esr, regs),
@@ -695,6 +695,30 @@ static int do_bad(unsigned long far, unsigned int esr, struct pt_regs *regs)
 	return 1; /* "fault" */
 }
 
+static bool arm64_process_kernel_sea(unsigned long addr, unsigned int esr,
+				     struct pt_regs *regs, int sig, int code)
+{
+	if (!IS_ENABLED(CONFIG_ARCH_HAS_COPY_MC))
+		return false;
+
+	if (user_mode(regs) || !current->mm)
+		return false;
+
+	if (apei_claim_sea(regs) < 0)
+		return false;
+
+	current->thread.fault_address = 0;
+	current->thread.fault_code = esr;
+
+	if (!fixup_exception(regs, esr))
+		return false;
+
+	arm64_force_sig_fault(sig, code, addr,
+		"Uncorrected hardware memory error in kernel-access\n");
+
+	return true;
+}
+
 static int do_sea(unsigned long far, unsigned int esr, struct pt_regs *regs)
 {
 	const struct fault_info *inf;
@@ -720,6 +744,10 @@ static int do_sea(unsigned long far, unsigned int esr, struct pt_regs *regs)
 		 */
 		siaddr = untagged_addr(far);
 	}
+
+	if (arm64_process_kernel_sea(siaddr, esr, regs, inf->sig, inf->code))
+		return 0;
+
 	arm64_notify_die(inf->name, regs, inf->sig, inf->code, siaddr, esr);
 
 	return 0;
diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
index 546179418ffa..dd952aeecdc1 100644
--- a/include/linux/uaccess.h
+++ b/include/linux/uaccess.h
@@ -174,6 +174,14 @@ copy_mc_to_kernel(void *dst, const void *src, size_t cnt)
 }
 #endif
 
+#ifndef copy_mc_to_user
+static inline unsigned long __must_check
+copy_mc_to_user(void *dst, const void *src, size_t cnt)
+{
+	return raw_copy_to_user(dst, src, cnt);
+}
+#endif
+
 static __always_inline void pagefault_disabled_inc(void)
 {
 	current->pagefault_disabled++;

From patchwork Wed Apr 6 09:13:08 2022
X-Patchwork-Submitter: Tong Tiangen
X-Patchwork-Id: 12802726
From: Tong Tiangen
Subject: [RFC PATCH -next V2 4/7] arm64: add copy_from_user to machine check safe
Date: Wed, 6 Apr 2022 09:13:08 +0000
Message-ID: <20220406091311.3354723-5-tongtiangen@huawei.com>
In-Reply-To: <20220406091311.3354723-1-tongtiangen@huawei.com>

Make the copy_from_user() path machine-check safe. The data being copied
here is user data, so when a hardware memory error is consumed it is
sufficient to kill the user process and isolate the faulty page; a kernel
panic is not necessary.
Signed-off-by: Tong Tiangen
---
 arch/arm64/include/asm/asm-uaccess.h | 16 ++++++++++++++++
 arch/arm64/lib/copy_from_user.S      | 11 ++++++-----
 2 files changed, 22 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/include/asm/asm-uaccess.h b/arch/arm64/include/asm/asm-uaccess.h
index 0557af834e03..f31c8978e1af 100644
--- a/arch/arm64/include/asm/asm-uaccess.h
+++ b/arch/arm64/include/asm/asm-uaccess.h
@@ -92,4 +92,20 @@ alternative_else_nop_endif
 
 		_asm_extable	8888b, \l;
 	.endm
+
+	.macro user_ldp_mc l, reg1, reg2, addr, post_inc
+8888:		ldtr	\reg1, [\addr];
+8889:		ldtr	\reg2, [\addr, #8];
+		add	\addr, \addr, \post_inc;
+
+		_asm_extable_mc	8888b, \l;
+		_asm_extable_mc	8889b, \l;
+	.endm
+
+	.macro user_ldst_mc l, inst, reg, addr, post_inc
+8888:		\inst	\reg, [\addr];
+		add	\addr, \addr, \post_inc;
+
+		_asm_extable_mc	8888b, \l;
+	.endm
 #endif
diff --git a/arch/arm64/lib/copy_from_user.S b/arch/arm64/lib/copy_from_user.S
index 34e317907524..d9d7c5291871 100644
--- a/arch/arm64/lib/copy_from_user.S
+++ b/arch/arm64/lib/copy_from_user.S
@@ -21,7 +21,7 @@
  */
 
 	.macro ldrb1 reg, ptr, val
-	user_ldst 9998f, ldtrb, \reg, \ptr, \val
+	user_ldst_mc 9998f, ldtrb, \reg, \ptr, \val
 	.endm
 
 	.macro strb1 reg, ptr, val
@@ -29,7 +29,7 @@
 	.endm
 
 	.macro ldrh1 reg, ptr, val
-	user_ldst 9997f, ldtrh, \reg, \ptr, \val
+	user_ldst_mc 9997f, ldtrh, \reg, \ptr, \val
 	.endm
 
 	.macro strh1 reg, ptr, val
@@ -37,7 +37,7 @@
 	.endm
 
 	.macro ldr1 reg, ptr, val
-	user_ldst 9997f, ldtr, \reg, \ptr, \val
+	user_ldst_mc 9997f, ldtr, \reg, \ptr, \val
 	.endm
 
 	.macro str1 reg, ptr, val
@@ -45,7 +45,7 @@
 	.endm
 
 	.macro ldp1 reg1, reg2, ptr, val
-	user_ldp 9997f, \reg1, \reg2, \ptr, \val
+	user_ldp_mc 9997f, \reg1, \reg2, \ptr, \val
 	.endm
 
 	.macro stp1 reg1, reg2, ptr, val
@@ -62,7 +62,8 @@ SYM_FUNC_START(__arch_copy_from_user)
 	ret
 
 	// Exception fixups
-9997:	cmp	dst, dstin
+9997:	cbz	x0, 9998f	// Check machine check exception
+	cmp	dst, dstin
 	b.ne	9998f
 	// Before being absolutely sure we couldn't copy anything, try harder
 USER(9998f, ldtrb tmp1w, [srcin])

From patchwork Wed Apr 6 09:13:09 2022
X-Patchwork-Submitter: Tong Tiangen
X-Patchwork-Id: 12802727
From: Tong Tiangen
Subject: [RFC PATCH -next V2 5/7] arm64: add get_user to machine check safe
Date: Wed, 6 Apr 2022 09:13:09 +0000
Message-ID: <20220406091311.3354723-6-tongtiangen@huawei.com>
In-Reply-To: <20220406091311.3354723-1-tongtiangen@huawei.com>

Make get_user() machine-check safe. The handling of
EX_TYPE_UACCESS_ERR_ZERO and the new EX_TYPE_UACCESS_ERR_ZERO_MC is the
same: both return -EFAULT.
Signed-off-by: Tong Tiangen --- arch/arm64/include/asm/asm-extable.h | 14 +++++++++++++- arch/arm64/include/asm/uaccess.h | 2 +- arch/arm64/mm/extable.c | 1 + 3 files changed, 15 insertions(+), 2 deletions(-) diff --git a/arch/arm64/include/asm/asm-extable.h b/arch/arm64/include/asm/asm-extable.h index 74d1db74fd86..bfc2d224cbae 100644 --- a/arch/arm64/include/asm/asm-extable.h +++ b/arch/arm64/include/asm/asm-extable.h @@ -10,8 +10,11 @@ /* _MC indicates that can fixup from machine check errors */ #define EX_TYPE_FIXUP_MC 5 +#define EX_TYPE_UACCESS_ERR_ZERO_MC 6 -#define IS_EX_TYPE_MC(type) (type == EX_TYPE_FIXUP_MC) +#define IS_EX_TYPE_MC(type) \ + (type == EX_TYPE_FIXUP_MC || \ + type == EX_TYPE_UACCESS_ERR_ZERO_MC) #ifdef __ASSEMBLY__ @@ -77,6 +80,15 @@ #define EX_DATA_REG(reg, gpr) \ "((.L__gpr_num_" #gpr ") << " __stringify(EX_DATA_REG_##reg##_SHIFT) ")" +#define _ASM_EXTABLE_UACCESS_ERR_ZERO_MC(insn, fixup, err, zero) \ + __DEFINE_ASM_GPR_NUMS \ + __ASM_EXTABLE_RAW(#insn, #fixup, \ + __stringify(EX_TYPE_UACCESS_ERR_ZERO_MC), \ + "(" \ + EX_DATA_REG(ERR, err) " | " \ + EX_DATA_REG(ZERO, zero) \ + ")") + #define _ASM_EXTABLE_UACCESS_ERR_ZERO(insn, fixup, err, zero) \ __DEFINE_ASM_GPR_NUMS \ __ASM_EXTABLE_RAW(#insn, #fixup, \ diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h index e8dce0cc5eaa..24b662407fbd 100644 --- a/arch/arm64/include/asm/uaccess.h +++ b/arch/arm64/include/asm/uaccess.h @@ -236,7 +236,7 @@ static inline void __user *__uaccess_mask_ptr(const void __user *ptr) asm volatile( \ "1: " load " " reg "1, [%2]\n" \ "2:\n" \ - _ASM_EXTABLE_UACCESS_ERR_ZERO(1b, 2b, %w0, %w1) \ + _ASM_EXTABLE_UACCESS_ERR_ZERO_MC(1b, 2b, %w0, %w1) \ : "+r" (err), "=&r" (x) \ : "r" (addr)) diff --git a/arch/arm64/mm/extable.c b/arch/arm64/mm/extable.c index f1134c88e849..7c05f8d2bce0 100644 --- a/arch/arm64/mm/extable.c +++ b/arch/arm64/mm/extable.c @@ -95,6 +95,7 @@ bool fixup_exception(struct pt_regs *regs, unsigned int esr) case 
EX_TYPE_BPF: return ex_handler_bpf(ex, regs); case EX_TYPE_UACCESS_ERR_ZERO: + case EX_TYPE_UACCESS_ERR_ZERO_MC: return ex_handler_uaccess_err_zero(ex, regs); case EX_TYPE_LOAD_UNALIGNED_ZEROPAD: return ex_handler_load_unaligned_zeropad(ex, regs); From patchwork Wed Apr 6 09:13:10 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tong Tiangen X-Patchwork-Id: 12802729 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 9C366C433F5 for ; Wed, 6 Apr 2022 08:57:18 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-ID:Date:Subject:CC:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=rIkA2086sHUTxgGOmnZQigS8EuDnBtO+jmzS5gndv1w=; b=Vi0XTNBjDYUPvF AAenWBN/fooiU+1qyQwaeF4FF+uHRu/mG9FfjYMUZI+Hptw/RxNd0mAPg5ZHEfvVWyR1kmWBVFZtf nFiPVCSPFQbX5bq+Ur1iCNLiZYoNFJlb9gGnF/uAgWcaYWaqB5tHhEqpbBZMhAfCtXDRycy/3k6Cg q65Zb2q3ULVn0LysqaUEjNMvubG5M53dfDyg7DemDF7kNWMewHt2lgyeRYRXDY8HvyRPInfgZYcoZ wY/O2hPASVPH4PDyV/d8A8Vg/9+dGhf4k9eJXA+ZIp9brCEttN/3wkEF5S5t4gA4AswkOWBQEHKiT uOSvN2FlpJd7DSlHlkaQ==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1nc1SS-004qxF-VI; Wed, 06 Apr 2022 08:56:09 +0000 Received: from szxga08-in.huawei.com ([45.249.212.255]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 
From: Tong Tiangen
To: Andrew Morton, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, Catalin Marinas, Will Deacon, Alexander Viro, "H. Peter Anvin"
CC: Tong Tiangen
Subject: [RFC PATCH -next V2 6/7] arm64: add cow to machine check safe
Date: Wed, 6 Apr 2022 09:13:10 +0000
Message-ID: <20220406091311.3354723-7-tongtiangen@huawei.com>

In copy-on-write (CoW) processing, the data of the user
process is copied. When a machine check error is encountered during the copy, killing the user process and isolating the user page with hardware memory errors is a more reasonable choice than a kernel panic. copy_page_mc() in copy_page_mc.S largely borrows from copy_page() in copy_page.S; the main difference is that copy_page_mc() adds extable entries to support machine check safe handling. Signed-off-by: Tong Tiangen --- arch/arm64/include/asm/page.h | 10 ++++ arch/arm64/lib/Makefile | 2 + arch/arm64/lib/copy_page_mc.S | 98 +++++++++++++++++++++++++++++++++++ arch/arm64/mm/copypage.c | 36 ++++++++++--- include/linux/highmem.h | 8 +++ mm/memory.c | 2 +- 6 files changed, 149 insertions(+), 7 deletions(-) create mode 100644 arch/arm64/lib/copy_page_mc.S diff --git a/arch/arm64/include/asm/page.h b/arch/arm64/include/asm/page.h index 993a27ea6f54..832571a7dddb 100644 --- a/arch/arm64/include/asm/page.h +++ b/arch/arm64/include/asm/page.h @@ -29,6 +29,16 @@ void copy_user_highpage(struct page *to, struct page *from, void copy_highpage(struct page *to, struct page *from); #define __HAVE_ARCH_COPY_HIGHPAGE +#ifdef CONFIG_ARCH_HAS_COPY_MC +extern void copy_page_mc(void *to, const void *from); +void copy_highpage_mc(struct page *to, struct page *from); +#define __HAVE_ARCH_COPY_HIGHPAGE_MC + +void copy_user_highpage_mc(struct page *to, struct page *from, + unsigned long vaddr, struct vm_area_struct *vma); +#define __HAVE_ARCH_COPY_USER_HIGHPAGE_MC +#endif + struct page *alloc_zeroed_user_highpage_movable(struct vm_area_struct *vma, unsigned long vaddr); #define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE_MOVABLE diff --git a/arch/arm64/lib/Makefile b/arch/arm64/lib/Makefile index 29490be2546b..29c578414b12 100644 --- a/arch/arm64/lib/Makefile +++ b/arch/arm64/lib/Makefile @@ -22,3 +22,5 @@ obj-$(CONFIG_FUNCTION_ERROR_INJECTION) += error-inject.o obj-$(CONFIG_ARM64_MTE) += mte.o obj-$(CONFIG_KASAN_SW_TAGS) += kasan_sw_tags.o + +obj-$(CONFIG_ARCH_HAS_CPY_MC) += copy_page_mc.o diff --git
a/arch/arm64/lib/copy_page_mc.S b/arch/arm64/lib/copy_page_mc.S new file mode 100644 index 000000000000..cbf56e661efe --- /dev/null +++ b/arch/arm64/lib/copy_page_mc.S @@ -0,0 +1,98 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2012 ARM Ltd. + */ + +#include +#include +#include +#include +#include +#include + +/* + * Copy a page from src to dest (both are page aligned) with machine check + * + * Parameters: + * x0 - dest + * x1 - src + */ +SYM_FUNC_START(__pi_copy_page_mc) +alternative_if ARM64_HAS_NO_HW_PREFETCH + // Prefetch three cache lines ahead. + prfm pldl1strm, [x1, #128] + prfm pldl1strm, [x1, #256] + prfm pldl1strm, [x1, #384] +alternative_else_nop_endif + +100: ldp x2, x3, [x1] +101: ldp x4, x5, [x1, #16] +102: ldp x6, x7, [x1, #32] +103: ldp x8, x9, [x1, #48] +104: ldp x10, x11, [x1, #64] +105: ldp x12, x13, [x1, #80] +106: ldp x14, x15, [x1, #96] +107: ldp x16, x17, [x1, #112] + + add x0, x0, #256 + add x1, x1, #128 +1: + tst x0, #(PAGE_SIZE - 1) + +alternative_if ARM64_HAS_NO_HW_PREFETCH + prfm pldl1strm, [x1, #384] +alternative_else_nop_endif + + stnp x2, x3, [x0, #-256] +200: ldp x2, x3, [x1] + stnp x4, x5, [x0, #16 - 256] +201: ldp x4, x5, [x1, #16] + stnp x6, x7, [x0, #32 - 256] +202: ldp x6, x7, [x1, #32] + stnp x8, x9, [x0, #48 - 256] +203: ldp x8, x9, [x1, #48] + stnp x10, x11, [x0, #64 - 256] +204: ldp x10, x11, [x1, #64] + stnp x12, x13, [x0, #80 - 256] +205: ldp x12, x13, [x1, #80] + stnp x14, x15, [x0, #96 - 256] +206: ldp x14, x15, [x1, #96] + stnp x16, x17, [x0, #112 - 256] +207: ldp x16, x17, [x1, #112] + + add x0, x0, #128 + add x1, x1, #128 + + b.ne 1b + + stnp x2, x3, [x0, #-256] + stnp x4, x5, [x0, #16 - 256] + stnp x6, x7, [x0, #32 - 256] + stnp x8, x9, [x0, #48 - 256] + stnp x10, x11, [x0, #64 - 256] + stnp x12, x13, [x0, #80 - 256] + stnp x14, x15, [x0, #96 - 256] + stnp x16, x17, [x0, #112 - 256] + +300: ret + +_asm_extable_mc 100b, 300b +_asm_extable_mc 101b, 300b +_asm_extable_mc 102b, 300b 
+_asm_extable_mc 103b, 300b +_asm_extable_mc 104b, 300b +_asm_extable_mc 105b, 300b +_asm_extable_mc 106b, 300b +_asm_extable_mc 107b, 300b +_asm_extable_mc 200b, 300b +_asm_extable_mc 201b, 300b +_asm_extable_mc 202b, 300b +_asm_extable_mc 203b, 300b +_asm_extable_mc 204b, 300b +_asm_extable_mc 205b, 300b +_asm_extable_mc 206b, 300b +_asm_extable_mc 207b, 300b + +SYM_FUNC_END(__pi_copy_page_mc) +SYM_FUNC_ALIAS(copy_page_mc, __pi_copy_page_mc) +EXPORT_SYMBOL(copy_page_mc) diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c index 0dea80bf6de4..0f28edfcb234 100644 --- a/arch/arm64/mm/copypage.c +++ b/arch/arm64/mm/copypage.c @@ -14,13 +14,8 @@ #include #include -void copy_highpage(struct page *to, struct page *from) +static void do_mte(struct page *to, struct page *from, void *kto, void *kfrom) { - void *kto = page_address(to); - void *kfrom = page_address(from); - - copy_page(kto, kfrom); - if (system_supports_mte() && test_bit(PG_mte_tagged, &from->flags)) { set_bit(PG_mte_tagged, &to->flags); page_kasan_tag_reset(to); @@ -35,6 +30,15 @@ void copy_highpage(struct page *to, struct page *from) mte_copy_page_tags(kto, kfrom); } } + +void copy_highpage(struct page *to, struct page *from) +{ + void *kto = page_address(to); + void *kfrom = page_address(from); + + copy_page(kto, kfrom); + do_mte(to, from, kto, kfrom); +} EXPORT_SYMBOL(copy_highpage); void copy_user_highpage(struct page *to, struct page *from, @@ -44,3 +48,23 @@ void copy_user_highpage(struct page *to, struct page *from, flush_dcache_page(to); } EXPORT_SYMBOL_GPL(copy_user_highpage); + +#ifdef CONFIG_ARCH_HAS_COPY_MC +void copy_highpage_mc(struct page *to, struct page *from) +{ + void *kto = page_address(to); + void *kfrom = page_address(from); + + copy_page_mc(kto, kfrom); + do_mte(to, from, kto, kfrom); +} +EXPORT_SYMBOL(copy_highpage_mc); + +void copy_user_highpage_mc(struct page *to, struct page *from, + unsigned long vaddr, struct vm_area_struct *vma) +{ + copy_highpage_mc(to, from); + 
flush_dcache_page(to); +} +EXPORT_SYMBOL_GPL(copy_user_highpage_mc); +#endif diff --git a/include/linux/highmem.h b/include/linux/highmem.h index 39bb9b47fa9c..a9dbf331b038 100644 --- a/include/linux/highmem.h +++ b/include/linux/highmem.h @@ -283,6 +283,10 @@ static inline void copy_user_highpage(struct page *to, struct page *from, #endif +#ifndef __HAVE_ARCH_COPY_USER_HIGHPAGE_MC +#define copy_user_highpage_mc copy_user_highpage +#endif + #ifndef __HAVE_ARCH_COPY_HIGHPAGE static inline void copy_highpage(struct page *to, struct page *from) @@ -298,6 +302,10 @@ static inline void copy_highpage(struct page *to, struct page *from) #endif +#ifndef __HAVE_ARCH_COPY_HIGHPAGE_MC +#define copy_highpage_mc copy_highpage +#endif + static inline void memcpy_page(struct page *dst_page, size_t dst_off, struct page *src_page, size_t src_off, size_t len) diff --git a/mm/memory.c b/mm/memory.c index 76e3af9639d9..d5f62234152d 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -2767,7 +2767,7 @@ static inline bool cow_user_page(struct page *dst, struct page *src, unsigned long addr = vmf->address; if (likely(src)) { - copy_user_highpage(dst, src, addr, vma); + copy_user_highpage_mc(dst, src, addr, vma); return true; } From patchwork Wed Apr 6 09:13:11 2022
From: Tong Tiangen
To: Andrew Morton, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, Catalin Marinas, Will Deacon, Alexander Viro, "H. Peter Anvin"
CC: Tong Tiangen
Subject: [RFC PATCH -next V2 7/7] arm64: add pagecache reading to machine check safe
Date: Wed, 6 Apr 2022 09:13:11 +0000
Message-ID: <20220406091311.3354723-8-tongtiangen@huawei.com>

When a user process reads a file, the data is cached in the pagecache and belongs to the user process. When a machine check error is encountered during pagecache reading, killing the user process and isolating the user page with hardware memory errors is a more reasonable choice than a kernel panic. __arch_copy_mc_to_user() in copy_to_user_mc.S largely borrows from __arch_copy_to_user() in copy_to_user.S; the main difference is that __arch_copy_mc_to_user() adds extable entries to support machine check safe handling. In _copy_page_to_iter(), machine check safety is only considered for ITER_IOVEC, which is used by pagecache reading.
Signed-off-by: Tong Tiangen --- arch/arm64/include/asm/uaccess.h | 15 ++++++ arch/arm64/lib/Makefile | 2 +- arch/arm64/lib/copy_to_user_mc.S | 78 +++++++++++++++++++++++++++++ include/linux/uio.h | 9 +++- lib/iov_iter.c | 85 +++++++++++++++++++++++++------- 5 files changed, 170 insertions(+), 19 deletions(-) create mode 100644 arch/arm64/lib/copy_to_user_mc.S diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h index 24b662407fbd..f0d5e811165a 100644 --- a/arch/arm64/include/asm/uaccess.h +++ b/arch/arm64/include/asm/uaccess.h @@ -448,6 +448,21 @@ extern long strncpy_from_user(char *dest, const char __user *src, long count); extern __must_check long strnlen_user(const char __user *str, long n); +#ifdef CONFIG_ARCH_HAS_COPY_MC +extern unsigned long __must_check __arch_copy_mc_to_user(void __user *to, + const void *from, unsigned long n); +static inline unsigned long __must_check +copy_mc_to_user(void __user *to, const void *from, unsigned long n) +{ + uaccess_ttbr0_enable(); + n = __arch_copy_mc_to_user(__uaccess_mask_ptr(to), from, n); + uaccess_ttbr0_disable(); + + return n; +} +#define copy_mc_to_user copy_mc_to_user +#endif + #ifdef CONFIG_ARCH_HAS_UACCESS_FLUSHCACHE struct page; void memcpy_page_flushcache(char *to, struct page *page, size_t offset, size_t len); diff --git a/arch/arm64/lib/Makefile b/arch/arm64/lib/Makefile index 29c578414b12..9b3571227fb4 100644 --- a/arch/arm64/lib/Makefile +++ b/arch/arm64/lib/Makefile @@ -23,4 +23,4 @@ obj-$(CONFIG_ARM64_MTE) += mte.o obj-$(CONFIG_KASAN_SW_TAGS) += kasan_sw_tags.o -obj-$(CONFIG_ARCH_HAS_CPY_MC) += copy_page_mc.o +obj-$(CONFIG_ARCH_HAS_COPY_MC) += copy_page_mc.o copy_to_user_mc.o diff --git a/arch/arm64/lib/copy_to_user_mc.S b/arch/arm64/lib/copy_to_user_mc.S new file mode 100644 index 000000000000..9d228ff15446 --- /dev/null +++ b/arch/arm64/lib/copy_to_user_mc.S @@ -0,0 +1,78 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2012 ARM Ltd. 
+ */ + +#include + +#include +#include +#include + +/* + * Copy to user space from a kernel buffer (alignment handled by the hardware) + * + * Parameters: + * x0 - to + * x1 - from + * x2 - n + * Returns: + * x0 - bytes not copied + */ + .macro ldrb1 reg, ptr, val + 1000: ldrb \reg, [\ptr], \val; + _asm_extable_mc 1000b, 9998f; + .endm + + .macro strb1 reg, ptr, val + user_ldst_mc 9998f, sttrb, \reg, \ptr, \val + .endm + + .macro ldrh1 reg, ptr, val + 1001: ldrh \reg, [\ptr], \val; + _asm_extable_mc 1001b, 9998f; + .endm + + .macro strh1 reg, ptr, val + user_ldst_mc 9997f, sttrh, \reg, \ptr, \val + .endm + + .macro ldr1 reg, ptr, val + 1002: ldr \reg, [\ptr], \val; + _asm_extable_mc 1002b, 9998f; + .endm + + .macro str1 reg, ptr, val + user_ldst_mc 9997f, sttr, \reg, \ptr, \val + .endm + + .macro ldp1 reg1, reg2, ptr, val + 1003: ldp \reg1, \reg2, [\ptr], \val; + _asm_extable_mc 1003b, 9998f; + .endm + + .macro stp1 reg1, reg2, ptr, val + user_stp 9997f, \reg1, \reg2, \ptr, \val + .endm + +end .req x5 +srcin .req x15 +SYM_FUNC_START(__arch_copy_mc_to_user) + add end, x0, x2 + mov srcin, x1 +#include "copy_template.S" + mov x0, #0 + ret + + // Exception fixups +9997: cbz x0, 9998f // Check machine check exception + cmp dst, dstin + b.ne 9998f + // Before being absolutely sure we couldn't copy anything, try harder + ldrb tmp1w, [srcin] +USER(9998f, sttrb tmp1w, [dst]) + add dst, dst, #1 +9998: sub x0, end, dst // bytes not copied + ret +SYM_FUNC_END(__arch_copy_mc_to_user) +EXPORT_SYMBOL(__arch_copy_mc_to_user) diff --git a/include/linux/uio.h b/include/linux/uio.h index 739285fe5a2f..539d9ee9b032 100644 --- a/include/linux/uio.h +++ b/include/linux/uio.h @@ -147,10 +147,17 @@ size_t _copy_to_iter(const void *addr, size_t bytes, struct iov_iter *i); size_t _copy_from_iter(void *addr, size_t bytes, struct iov_iter *i); size_t _copy_from_iter_nocache(void *addr, size_t bytes, struct iov_iter *i); +#ifdef CONFIG_ARCH_HAS_COPY_MC +size_t copy_mc_page_to_iter(struct page 
*page, size_t offset, size_t bytes, + struct iov_iter *i); +#else +#define copy_mc_page_to_iter copy_page_to_iter +#endif + static inline size_t copy_folio_to_iter(struct folio *folio, size_t offset, size_t bytes, struct iov_iter *i) { - return copy_page_to_iter(&folio->page, offset, bytes, i); + return copy_mc_page_to_iter(&folio->page, offset, bytes, i); } static __always_inline __must_check diff --git a/lib/iov_iter.c b/lib/iov_iter.c index 6dd5330f7a99..2c5f3bb6391d 100644 --- a/lib/iov_iter.c +++ b/lib/iov_iter.c @@ -157,6 +157,19 @@ static int copyout(void __user *to, const void *from, size_t n) return n; } +#ifdef CONFIG_ARCH_HAS_COPY_MC +static int copyout_mc(void __user *to, const void *from, size_t n) +{ + if (access_ok(to, n)) { + instrument_copy_to_user(to, from, n); + n = copy_mc_to_user((__force void *) to, from, n); + } + return n; +} +#else +#define copyout_mc copyout +#endif + static int copyin(void *to, const void __user *from, size_t n) { if (should_fail_usercopy()) @@ -169,7 +182,7 @@ static int copyin(void *to, const void __user *from, size_t n) } static size_t copy_page_to_iter_iovec(struct page *page, size_t offset, size_t bytes, - struct iov_iter *i) + struct iov_iter *i, bool mc_safe) { size_t skip, copy, left, wanted; const struct iovec *iov; @@ -194,7 +207,10 @@ static size_t copy_page_to_iter_iovec(struct page *page, size_t offset, size_t b from = kaddr + offset; /* first chunk, usually the only one */ - left = copyout(buf, from, copy); + if (mc_safe) + left = copyout_mc(buf, from, copy); + else + left = copyout(buf, from, copy); copy -= left; skip += copy; from += copy; @@ -204,7 +220,10 @@ static size_t copy_page_to_iter_iovec(struct page *page, size_t offset, size_t b iov++; buf = iov->iov_base; copy = min(bytes, iov->iov_len); - left = copyout(buf, from, copy); + if (mc_safe) + left = copyout_mc(buf, from, copy); + else + left = copyout(buf, from, copy); copy -= left; skip = copy; from += copy; @@ -223,7 +242,10 @@ static size_t 
copy_page_to_iter_iovec(struct page *page, size_t offset, size_t b kaddr = kmap(page); from = kaddr + offset; - left = copyout(buf, from, copy); + if (mc_safe) + left = copyout_mc(buf, from, copy); + else + left = copyout(buf, from, copy); copy -= left; skip += copy; from += copy; @@ -232,7 +254,10 @@ static size_t copy_page_to_iter_iovec(struct page *page, size_t offset, size_t b iov++; buf = iov->iov_base; copy = min(bytes, iov->iov_len); - left = copyout(buf, from, copy); + if (mc_safe) + left = copyout_mc(buf, from, copy); + else + left = copyout(buf, from, copy); copy -= left; skip = copy; from += copy; @@ -674,15 +699,6 @@ size_t _copy_to_iter(const void *addr, size_t bytes, struct iov_iter *i) EXPORT_SYMBOL(_copy_to_iter); #ifdef CONFIG_ARCH_HAS_COPY_MC -static int copyout_mc(void __user *to, const void *from, size_t n) -{ - if (access_ok(to, n)) { - instrument_copy_to_user(to, from, n); - n = copy_mc_to_user((__force void *) to, from, n); - } - return n; -} - static size_t copy_mc_pipe_to_iter(const void *addr, size_t bytes, struct iov_iter *i) { @@ -846,10 +862,10 @@ static inline bool page_copy_sane(struct page *page, size_t offset, size_t n) } static size_t __copy_page_to_iter(struct page *page, size_t offset, size_t bytes, - struct iov_iter *i) + struct iov_iter *i, bool mc_safe) { if (likely(iter_is_iovec(i))) - return copy_page_to_iter_iovec(page, offset, bytes, i); + return copy_page_to_iter_iovec(page, offset, bytes, i, mc_safe); if (iov_iter_is_bvec(i) || iov_iter_is_kvec(i) || iov_iter_is_xarray(i)) { void *kaddr = kmap_local_page(page); size_t wanted = _copy_to_iter(kaddr + offset, bytes, i); @@ -878,7 +894,7 @@ size_t copy_page_to_iter(struct page *page, size_t offset, size_t bytes, offset %= PAGE_SIZE; while (1) { size_t n = __copy_page_to_iter(page, offset, - min(bytes, (size_t)PAGE_SIZE - offset), i); + min(bytes, (size_t)PAGE_SIZE - offset), i, false); res += n; bytes -= n; if (!bytes || !n) @@ -893,6 +909,41 @@ size_t 
copy_page_to_iter(struct page *page, size_t offset, size_t bytes, struct iov_iter *i) } EXPORT_SYMBOL(copy_page_to_iter); +#ifdef CONFIG_ARCH_HAS_COPY_MC +/** + * copy_mc_page_to_iter - copy page to iter with source memory error exception handling. + * + * filemap_read() deploys this for pagecache reading; the main difference from + * the typical copy_page_to_iter() is that it calls __copy_page_to_iter() with + * mc_safe set to true. + * + * Return: number of bytes copied (may be %0) + */ +size_t copy_mc_page_to_iter(struct page *page, size_t offset, size_t bytes, + struct iov_iter *i) +{ + size_t res = 0; + + if (unlikely(!page_copy_sane(page, offset, bytes))) + return 0; + page += offset / PAGE_SIZE; // first subpage + offset %= PAGE_SIZE; + while (1) { + size_t n = __copy_page_to_iter(page, offset, + min(bytes, (size_t)PAGE_SIZE - offset), i, true); + res += n; + bytes -= n; + if (!bytes || !n) + break; + offset += n; + if (offset == PAGE_SIZE) { + page++; + offset = 0; + } + } + return res; +} +#endif + size_t copy_page_from_iter(struct page *page, size_t offset, size_t bytes, struct iov_iter *i) {