From patchwork Wed Apr 6 09:13:05 2022
X-Patchwork-Submitter: Tong Tiangen
X-Patchwork-Id: 12802717
From: Tong Tiangen
To: Andrew Morton, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
    Dave Hansen, Catalin Marinas, Will Deacon, Alexander Viro, H. Peter Anvin
Cc: Tong Tiangen
Subject: [RFC PATCH -next V2 1/7] x86: fix copy_mc_to_user compile error
Date: Wed, 6 Apr 2022 09:13:05 +0000
Message-ID: <20220406091311.3354723-2-tongtiangen@huawei.com>
In-Reply-To: <20220406091311.3354723-1-tongtiangen@huawei.com>
References: <20220406091311.3354723-1-tongtiangen@huawei.com>

A following patch in this series adds a generic copy_mc_to_user() fallback
to include/linux/uaccess.h. x86 already provides its own implementation,
so it must declare that (by defining the copy_mc_to_user symbol) to avoid
a compile error.
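For reference, the generic fallback added by patch 3/7 of this series to
include/linux/uaccess.h is compiled only when the symbol is not already
defined, which is why the definition below is needed; a minimal sketch of
that pattern, reproduced from the later patch for context only:

	#ifndef copy_mc_to_user
	/*
	 * Generic fallback for architectures without a machine-check-safe
	 * user copy: behave exactly like a plain uaccess copy.
	 */
	static inline unsigned long __must_check
	copy_mc_to_user(void *dst, const void *src, size_t cnt)
	{
		return raw_copy_to_user(dst, src, cnt);
	}
	#endif

With the "#define copy_mc_to_user copy_mc_to_user" added by the hunk
below, this #ifndef block is skipped and the x86 declaration is used
instead.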
Signed-off-by: Tong Tiangen
---
 arch/x86/include/asm/uaccess.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
index f78e2b3501a1..e18c5f098025 100644
--- a/arch/x86/include/asm/uaccess.h
+++ b/arch/x86/include/asm/uaccess.h
@@ -415,6 +415,7 @@ copy_mc_to_kernel(void *to, const void *from, unsigned len);
 
 unsigned long __must_check
 copy_mc_to_user(void *to, const void *from, unsigned len);
+#define copy_mc_to_user copy_mc_to_user
 #endif
 
 /*

From patchwork Wed Apr 6 09:13:06 2022
X-Patchwork-Submitter: Tong Tiangen
X-Patchwork-Id: 12802718
From: Tong Tiangen
To: Andrew Morton, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
    Dave Hansen, Catalin Marinas, Will Deacon, Alexander Viro, H. Peter Anvin
Cc: Tong Tiangen
Subject: [RFC PATCH -next V2 2/7] arm64: fix page_address return value in copy_highpage
Date: Wed, 6 Apr 2022 09:13:06 +0000
Message-ID: <20220406091311.3354723-3-tongtiangen@huawei.com>
In-Reply-To: <20220406091311.3354723-1-tongtiangen@huawei.com>
References: <20220406091311.3354723-1-tongtiangen@huawei.com>

page_address() returns a void pointer, so the local variables in
copy_highpage() should be void *, not struct page *. Fix their types.

Signed-off-by: Tong Tiangen
Acked-by: Mark Rutland
---
 arch/arm64/mm/copypage.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c
index b5447e53cd73..0dea80bf6de4 100644
--- a/arch/arm64/mm/copypage.c
+++ b/arch/arm64/mm/copypage.c
@@ -16,8 +16,8 @@
 
 void copy_highpage(struct page *to, struct page *from)
 {
-	struct page *kto = page_address(to);
-	struct page *kfrom = page_address(from);
+	void *kto = page_address(to);
+	void *kfrom = page_address(from);
 
 	copy_page(kto, kfrom);

From patchwork Wed Apr 6 09:13:07 2022
X-Patchwork-Submitter: Tong Tiangen
X-Patchwork-Id: 12802719
From: Tong Tiangen
To: Andrew Morton, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
    Dave Hansen, Catalin Marinas, Will Deacon, Alexander Viro, H. Peter Anvin
Cc: Tong Tiangen
Subject: [RFC PATCH -next V2 3/7] arm64: add support for machine check error safe
Date: Wed, 6 Apr 2022 09:13:07 +0000
Message-ID: <20220406091311.3354723-4-tongtiangen@huawei.com>
In-Reply-To: <20220406091311.3354723-1-tongtiangen@huawei.com>
References: <20220406091311.3354723-1-tongtiangen@huawei.com>

In the arm64 kernel's hardware memory error handling (do_sea()), the
current behaviour is to panic whenever the error is consumed in the
kernel. That is not optimal: in some cases the page being accessed from
kernel mode is a user page (for example via copy_from_user()/get_user()),
and killing the user process and isolating the page with the hardware
memory error is a better choice than a panic. Consistent with PPC/x86,
this is implemented under CONFIG_ARCH_HAS_COPY_MC.

This patch only enables the machine-check-safe framework: it adds an
exception fixup attempt before the kernel panic in do_sea(), and limits
it to hardware memory errors consumed in kernel mode on behalf of
user-mode processes. If the fixup succeeds, there is no need to panic.

Also add the _asm_extable_mc macro, used to create the exception table
entries that drive this fixup.
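As an illustration of what the fixup framework provides to its users, the
sketch below shows a hypothetical caller (not part of this series) of the
generic copy_mc_to_user() fallback introduced by this patch; like
copy_to_user(), it returns the number of bytes not copied, so a consumed
memory error surfaces as an ordinary -EFAULT instead of a panic:

	/* Illustrative only: example_push_to_user() is a hypothetical helper. */
	static int example_push_to_user(void __user *ubuf, const void *kbuf,
					size_t len)
	{
		unsigned long left = copy_mc_to_user((__force void *)ubuf, kbuf, len);

		return left ? -EFAULT : 0;
	}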
Signed-off-by: Tong Tiangen
---
 arch/arm64/Kconfig                   |  1 +
 arch/arm64/include/asm/asm-extable.h | 13 ++++++++++++
 arch/arm64/include/asm/esr.h         |  5 +++++
 arch/arm64/include/asm/extable.h     |  2 +-
 arch/arm64/kernel/probes/kprobes.c   |  2 +-
 arch/arm64/mm/extable.c              | 20 ++++++++++++++++++-
 arch/arm64/mm/fault.c                | 30 +++++++++++++++++++++++++++-
 include/linux/uaccess.h              |  8 ++++++++
 8 files changed, 77 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index d9325dd95eba..012e38309955 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -19,6 +19,7 @@ config ARM64
 	select ARCH_ENABLE_SPLIT_PMD_PTLOCK if PGTABLE_LEVELS > 2
 	select ARCH_ENABLE_THP_MIGRATION if TRANSPARENT_HUGEPAGE
 	select ARCH_HAS_CACHE_LINE_SIZE
+	select ARCH_HAS_COPY_MC if ACPI_APEI_GHES
 	select ARCH_HAS_CURRENT_STACK_POINTER
 	select ARCH_HAS_DEBUG_VIRTUAL
 	select ARCH_HAS_DEBUG_VM_PGTABLE
diff --git a/arch/arm64/include/asm/asm-extable.h b/arch/arm64/include/asm/asm-extable.h
index c39f2437e08e..74d1db74fd86 100644
--- a/arch/arm64/include/asm/asm-extable.h
+++ b/arch/arm64/include/asm/asm-extable.h
@@ -8,6 +8,11 @@
 #define EX_TYPE_UACCESS_ERR_ZERO	3
 #define EX_TYPE_LOAD_UNALIGNED_ZEROPAD	4
 
+/* _MC indicates that can fixup from machine check errors */
+#define EX_TYPE_FIXUP_MC		5
+
+#define IS_EX_TYPE_MC(type)	(type == EX_TYPE_FIXUP_MC)
+
 #ifdef __ASSEMBLY__
 
 #define __ASM_EXTABLE_RAW(insn, fixup, type, data)	\
@@ -27,6 +32,14 @@
 	__ASM_EXTABLE_RAW(\insn, \fixup, EX_TYPE_FIXUP, 0)
 	.endm
 
+/*
+ * Create an exception table entry for `insn`, which will branch to `fixup`
+ * when an unhandled fault(include sea fault) is taken.
+ */
+	.macro		_asm_extable_mc, insn, fixup
+	__ASM_EXTABLE_RAW(\insn, \fixup, EX_TYPE_FIXUP_MC, 0)
+	.endm
+
 /*
  * Create an exception table entry for `insn` if `fixup` is provided. Otherwise
  * do nothing.
diff --git a/arch/arm64/include/asm/esr.h b/arch/arm64/include/asm/esr.h
index d52a0b269ee8..11fcfc002654 100644
--- a/arch/arm64/include/asm/esr.h
+++ b/arch/arm64/include/asm/esr.h
@@ -330,6 +330,11 @@
 #ifndef __ASSEMBLY__
 #include
 
+static inline bool esr_is_sea(u32 esr)
+{
+	return (esr & ESR_ELx_FSC) == ESR_ELx_FSC_EXTABT;
+}
+
 static inline bool esr_is_data_abort(u32 esr)
 {
 	const u32 ec = ESR_ELx_EC(esr);
diff --git a/arch/arm64/include/asm/extable.h b/arch/arm64/include/asm/extable.h
index 72b0e71cc3de..f7835b0f473b 100644
--- a/arch/arm64/include/asm/extable.h
+++ b/arch/arm64/include/asm/extable.h
@@ -45,5 +45,5 @@ bool ex_handler_bpf(const struct exception_table_entry *ex,
 }
 #endif /* !CONFIG_BPF_JIT */
 
-bool fixup_exception(struct pt_regs *regs);
+bool fixup_exception(struct pt_regs *regs, unsigned int esr);
 #endif
diff --git a/arch/arm64/kernel/probes/kprobes.c b/arch/arm64/kernel/probes/kprobes.c
index d9dfa82c1f18..16a069e8eec3 100644
--- a/arch/arm64/kernel/probes/kprobes.c
+++ b/arch/arm64/kernel/probes/kprobes.c
@@ -285,7 +285,7 @@ int __kprobes kprobe_fault_handler(struct pt_regs *regs, unsigned int fsr)
 		 * In case the user-specified fault handler returned
 		 * zero, try to fix up.
 		 */
-		if (fixup_exception(regs))
+		if (fixup_exception(regs, fsr))
 			return 1;
 	}
 	return 0;
diff --git a/arch/arm64/mm/extable.c b/arch/arm64/mm/extable.c
index 489455309695..f1134c88e849 100644
--- a/arch/arm64/mm/extable.c
+++ b/arch/arm64/mm/extable.c
@@ -9,6 +9,7 @@
 #include
 #include
+#include
 
 static inline unsigned long
 get_ex_fixup(const struct exception_table_entry *ex)
@@ -23,6 +24,18 @@ static bool ex_handler_fixup(const struct exception_table_entry *ex,
 	return true;
 }
 
+static bool ex_handler_fixup_mc(const struct exception_table_entry *ex,
+				struct pt_regs *regs, unsigned int esr)
+{
+	if (esr_is_sea(esr))
+		regs->regs[0] = 0;
+	else
+		regs->regs[0] = 1;
+
+	regs->pc = get_ex_fixup(ex);
+	return true;
+}
+
 static bool ex_handler_uaccess_err_zero(const struct exception_table_entry *ex,
 					struct pt_regs *regs)
 {
@@ -63,7 +76,7 @@ ex_handler_load_unaligned_zeropad(const struct exception_table_entry *ex,
 	return true;
 }
 
-bool fixup_exception(struct pt_regs *regs)
+bool fixup_exception(struct pt_regs *regs, unsigned int esr)
 {
 	const struct exception_table_entry *ex;
 
@@ -71,9 +84,14 @@ bool fixup_exception(struct pt_regs *regs)
 	if (!ex)
 		return false;
 
+	if (esr_is_sea(esr) && !IS_EX_TYPE_MC(ex->type))
+		return false;
+
 	switch (ex->type) {
 	case EX_TYPE_FIXUP:
 		return ex_handler_fixup(ex, regs);
+	case EX_TYPE_FIXUP_MC:
+		return ex_handler_fixup_mc(ex, regs, esr);
 	case EX_TYPE_BPF:
 		return ex_handler_bpf(ex, regs);
 	case EX_TYPE_UACCESS_ERR_ZERO:
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 77341b160aca..ffdfab2fdd60 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -361,7 +361,7 @@ static void __do_kernel_fault(unsigned long addr, unsigned int esr,
 	 * Are we prepared to handle this kernel fault?
 	 * We are almost certainly not prepared to handle instruction faults.
 	 */
-	if (!is_el1_instruction_abort(esr) && fixup_exception(regs))
+	if (!is_el1_instruction_abort(esr) && fixup_exception(regs, esr))
 		return;
 
 	if (WARN_RATELIMIT(is_spurious_el1_translation_fault(addr, esr, regs),
@@ -695,6 +695,30 @@ static int do_bad(unsigned long far, unsigned int esr, struct pt_regs *regs)
 	return 1; /* "fault" */
 }
 
+static bool arm64_process_kernel_sea(unsigned long addr, unsigned int esr,
+				     struct pt_regs *regs, int sig, int code)
+{
+	if (!IS_ENABLED(CONFIG_ARCH_HAS_COPY_MC))
+		return false;
+
+	if (user_mode(regs) || !current->mm)
+		return false;
+
+	if (apei_claim_sea(regs) < 0)
+		return false;
+
+	current->thread.fault_address = 0;
+	current->thread.fault_code = esr;
+
+	if (!fixup_exception(regs, esr))
+		return false;
+
+	arm64_force_sig_fault(sig, code, addr,
+		"Uncorrected hardware memory error in kernel-access\n");
+
+	return true;
+}
+
 static int do_sea(unsigned long far, unsigned int esr, struct pt_regs *regs)
 {
 	const struct fault_info *inf;
@@ -720,6 +744,10 @@ static int do_sea(unsigned long far, unsigned int esr, struct pt_regs *regs)
 		 */
 		siaddr = untagged_addr(far);
 	}
+
+	if (arm64_process_kernel_sea(siaddr, esr, regs, inf->sig, inf->code))
+		return 0;
+
 	arm64_notify_die(inf->name, regs, inf->sig, inf->code, siaddr, esr);
 
 	return 0;
diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
index 546179418ffa..dd952aeecdc1 100644
--- a/include/linux/uaccess.h
+++ b/include/linux/uaccess.h
@@ -174,6 +174,14 @@ copy_mc_to_kernel(void *dst, const void *src, size_t cnt)
 }
 #endif
 
+#ifndef copy_mc_to_user
+static inline unsigned long __must_check
+copy_mc_to_user(void *dst, const void *src, size_t cnt)
+{
+	return raw_copy_to_user(dst, src, cnt);
+}
+#endif
+
 static __always_inline void pagefault_disabled_inc(void)
 {
 	current->pagefault_disabled++;

From patchwork Wed Apr 6 09:13:08 2022
X-Patchwork-Submitter: Tong Tiangen
X-Patchwork-Id: 12802720
From: Tong Tiangen
To: Andrew Morton, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
    Dave Hansen, Catalin Marinas, Will Deacon, Alexander Viro, H. Peter Anvin
Cc: Tong Tiangen
Subject: [RFC PATCH -next V2 4/7] arm64: add copy_from_user to machine check safe
Date: Wed, 6 Apr 2022 09:13:08 +0000
Message-ID: <20220406091311.3354723-5-tongtiangen@huawei.com>
In-Reply-To: <20220406091311.3354723-1-tongtiangen@huawei.com>
References: <20220406091311.3354723-1-tongtiangen@huawei.com>

Add the copy_from_user() scenario to the machine-check-safe handling. The
data being copied is user data, so when a hardware memory error is
consumed during the copy it is enough to kill the user process and
isolate the faulty page; a kernel panic is not necessary.
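From the point of view of a call site nothing changes; the sketch below is
a hypothetical example (not part of this series) of the usual pattern that
now also covers consumed memory errors: the copy returns a non-zero
"bytes not copied" count, the caller fails with -EFAULT, and do_sea()
sends the offending task a fatal signal instead of panicking the machine:

	/* Illustrative only: example_set_value() is a hypothetical helper. */
	static long example_set_value(const int __user *ubuf)
	{
		int val;

		if (copy_from_user(&val, ubuf, sizeof(val)))
			return -EFAULT;

		return val;
	}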
Signed-off-by: Tong Tiangen
---
 arch/arm64/include/asm/asm-uaccess.h | 16 ++++++++++++++++
 arch/arm64/lib/copy_from_user.S      | 11 ++++++-----
 2 files changed, 22 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/include/asm/asm-uaccess.h b/arch/arm64/include/asm/asm-uaccess.h
index 0557af834e03..f31c8978e1af 100644
--- a/arch/arm64/include/asm/asm-uaccess.h
+++ b/arch/arm64/include/asm/asm-uaccess.h
@@ -92,4 +92,20 @@ alternative_else_nop_endif
 
 		_asm_extable	8888b,\l;
 	.endm
+
+	.macro user_ldp_mc l, reg1, reg2, addr, post_inc
+8888:		ldtr	\reg1, [\addr];
+8889:		ldtr	\reg2, [\addr, #8];
+		add	\addr, \addr, \post_inc;
+
+		_asm_extable_mc	8888b, \l;
+		_asm_extable_mc	8889b, \l;
+	.endm
+
+	.macro user_ldst_mc l, inst, reg, addr, post_inc
+8888:		\inst	\reg, [\addr];
+		add	\addr, \addr, \post_inc;
+
+		_asm_extable_mc	8888b, \l;
+	.endm
 #endif
diff --git a/arch/arm64/lib/copy_from_user.S b/arch/arm64/lib/copy_from_user.S
index 34e317907524..d9d7c5291871 100644
--- a/arch/arm64/lib/copy_from_user.S
+++ b/arch/arm64/lib/copy_from_user.S
@@ -21,7 +21,7 @@
  */
 	.macro ldrb1 reg, ptr, val
-	user_ldst 9998f, ldtrb, \reg, \ptr, \val
+	user_ldst_mc 9998f, ldtrb, \reg, \ptr, \val
 	.endm
 
 	.macro strb1 reg, ptr, val
@@ -29,7 +29,7 @@
 	.endm
 
 	.macro ldrh1 reg, ptr, val
-	user_ldst 9997f, ldtrh, \reg, \ptr, \val
+	user_ldst_mc 9997f, ldtrh, \reg, \ptr, \val
 	.endm
 
 	.macro strh1 reg, ptr, val
@@ -37,7 +37,7 @@
 	.endm
 
 	.macro ldr1 reg, ptr, val
-	user_ldst 9997f, ldtr, \reg, \ptr, \val
+	user_ldst_mc 9997f, ldtr, \reg, \ptr, \val
 	.endm
 
 	.macro str1 reg, ptr, val
@@ -45,7 +45,7 @@
 	.endm
 
 	.macro ldp1 reg1, reg2, ptr, val
-	user_ldp 9997f, \reg1, \reg2, \ptr, \val
+	user_ldp_mc 9997f, \reg1, \reg2, \ptr, \val
 	.endm
 
 	.macro stp1 reg1, reg2, ptr, val
@@ -62,7 +62,8 @@ SYM_FUNC_START(__arch_copy_from_user)
 	ret
 
 	// Exception fixups
-9997:	cmp	dst, dstin
+9997:	cbz	x0, 9998f	// Check machine check exception
+	cmp	dst, dstin
 	b.ne	9998f
 	// Before being absolutely sure we couldn't copy anything, try harder
 USER(9998f, ldtrb tmp1w, [srcin])

From patchwork Wed Apr 6 09:13:09 2022
X-Patchwork-Submitter: Tong Tiangen
X-Patchwork-Id: 12802721
From: Tong Tiangen
To: Andrew Morton, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
    Dave Hansen, Catalin Marinas, Will Deacon, Alexander Viro, H. Peter Anvin
Cc: Tong Tiangen
Subject: [RFC PATCH -next V2 5/7] arm64: add get_user to machine check safe
Date: Wed, 6 Apr 2022 09:13:09 +0000
Message-ID: <20220406091311.3354723-6-tongtiangen@huawei.com>
In-Reply-To: <20220406091311.3354723-1-tongtiangen@huawei.com>
References: <20220406091311.3354723-1-tongtiangen@huawei.com>

Add the get_user() scenario to the machine-check-safe handling. The
processing of EX_TYPE_UACCESS_ERR_ZERO and the new
EX_TYPE_UACCESS_ERR_ZERO_MC is the same: both return -EFAULT.
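The caller-visible contract of get_user() is unchanged; a hypothetical
example (not part of this series) of the -EFAULT path that both extable
types now share, whether the access faults or consumes a memory error:

	/* Illustrative only: example_fetch_flag() is a hypothetical helper. */
	static int example_fetch_flag(const unsigned int __user *uptr)
	{
		unsigned int val;

		if (get_user(val, uptr))
			return -EFAULT;

		return val ? 1 : 0;
	}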
Signed-off-by: Tong Tiangen
---
 arch/arm64/include/asm/asm-extable.h | 14 +++++++++++++-
 arch/arm64/include/asm/uaccess.h     |  2 +-
 arch/arm64/mm/extable.c              |  1 +
 3 files changed, 15 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/asm-extable.h b/arch/arm64/include/asm/asm-extable.h
index 74d1db74fd86..bfc2d224cbae 100644
--- a/arch/arm64/include/asm/asm-extable.h
+++ b/arch/arm64/include/asm/asm-extable.h
@@ -10,8 +10,11 @@
 
 /* _MC indicates that can fixup from machine check errors */
 #define EX_TYPE_FIXUP_MC		5
+#define EX_TYPE_UACCESS_ERR_ZERO_MC	6
 
-#define IS_EX_TYPE_MC(type)	(type == EX_TYPE_FIXUP_MC)
+#define IS_EX_TYPE_MC(type)				\
+	(type == EX_TYPE_FIXUP_MC ||			\
+	 type == EX_TYPE_UACCESS_ERR_ZERO_MC)
 
 #ifdef __ASSEMBLY__
 
@@ -77,6 +80,15 @@
 #define EX_DATA_REG(reg, gpr)						\
 	"((.L__gpr_num_" #gpr ") << " __stringify(EX_DATA_REG_##reg##_SHIFT) ")"
 
+#define _ASM_EXTABLE_UACCESS_ERR_ZERO_MC(insn, fixup, err, zero)	\
+	__DEFINE_ASM_GPR_NUMS						\
+	__ASM_EXTABLE_RAW(#insn, #fixup,				\
+			  __stringify(EX_TYPE_UACCESS_ERR_ZERO_MC),	\
+			  "("						\
+			    EX_DATA_REG(ERR, err) " | "			\
+			    EX_DATA_REG(ZERO, zero)			\
+			  ")")
+
 #define _ASM_EXTABLE_UACCESS_ERR_ZERO(insn, fixup, err, zero)		\
 	__DEFINE_ASM_GPR_NUMS						\
 	__ASM_EXTABLE_RAW(#insn, #fixup,				\
diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index e8dce0cc5eaa..24b662407fbd 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -236,7 +236,7 @@ static inline void __user *__uaccess_mask_ptr(const void __user *ptr)
 	asm volatile(							\
 	"1:	" load "	" reg "1, [%2]\n"			\
 	"2:\n"								\
-	_ASM_EXTABLE_UACCESS_ERR_ZERO(1b, 2b, %w0, %w1)			\
+	_ASM_EXTABLE_UACCESS_ERR_ZERO_MC(1b, 2b, %w0, %w1)		\
 	: "+r" (err), "=&r" (x)						\
 	: "r" (addr))
diff --git a/arch/arm64/mm/extable.c b/arch/arm64/mm/extable.c
index f1134c88e849..7c05f8d2bce0 100644
--- a/arch/arm64/mm/extable.c
+++ b/arch/arm64/mm/extable.c
@@ -95,6 +95,7 @@ bool fixup_exception(struct pt_regs *regs, unsigned int esr)
 	case EX_TYPE_BPF:
 		return ex_handler_bpf(ex, regs);
 	case EX_TYPE_UACCESS_ERR_ZERO:
+	case EX_TYPE_UACCESS_ERR_ZERO_MC:
 		return ex_handler_uaccess_err_zero(ex, regs);
 	case EX_TYPE_LOAD_UNALIGNED_ZEROPAD:
 		return ex_handler_load_unaligned_zeropad(ex, regs);

From patchwork Wed Apr 6 09:13:10 2022
X-Patchwork-Submitter: Tong Tiangen
X-Patchwork-Id: 12802722
From: Tong Tiangen
To: Andrew Morton, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
    Dave Hansen, Catalin Marinas, Will Deacon, Alexander Viro, H. Peter Anvin
Cc: Tong Tiangen
Subject: [RFC PATCH -next V2 6/7] arm64: add cow to machine check safe
Date: Wed, 6 Apr 2022 09:13:10 +0000
Message-ID: <20220406091311.3354723-7-tongtiangen@huawei.com>
In-Reply-To: <20220406091311.3354723-1-tongtiangen@huawei.com>
References: <20220406091311.3354723-1-tongtiangen@huawei.com>

During copy-on-write (COW) processing, the data of the user process is
copied. When a machine check error is encountered during the copy,
killing the user process and isolating the page with the hardware memory
error is a more reasonable choice than a kernel panic.

copy_page_mc() in copy_page_mc.S is largely borrowed from copy_page() in
copy_page.S; the main difference is that copy_page_mc() adds the extable
entries needed to be machine check safe.
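The wiring into generic code follows the usual arch-override pattern; the
include/linux/highmem.h hunk in the diff below provides fallbacks so that
architectures without machine-check-safe page copies keep using the
ordinary helpers, roughly:

	/*
	 * Generic fallback (see the highmem.h hunk in this patch): only
	 * architectures that define __HAVE_ARCH_COPY_USER_HIGHPAGE_MC, as
	 * arm64 does here, supply their own machine-check-safe variant.
	 */
	#ifndef __HAVE_ARCH_COPY_USER_HIGHPAGE_MC
	#define copy_user_highpage_mc copy_user_highpage
	#endif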
Signed-off-by: Tong Tiangen --- arch/arm64/include/asm/page.h | 10 ++++ arch/arm64/lib/Makefile | 2 + arch/arm64/lib/copy_page_mc.S | 98 +++++++++++++++++++++++++++++++++++ arch/arm64/mm/copypage.c | 36 ++++++++++--- include/linux/highmem.h | 8 +++ mm/memory.c | 2 +- 6 files changed, 149 insertions(+), 7 deletions(-) create mode 100644 arch/arm64/lib/copy_page_mc.S diff --git a/arch/arm64/include/asm/page.h b/arch/arm64/include/asm/page.h index 993a27ea6f54..832571a7dddb 100644 --- a/arch/arm64/include/asm/page.h +++ b/arch/arm64/include/asm/page.h @@ -29,6 +29,16 @@ void copy_user_highpage(struct page *to, struct page *from, void copy_highpage(struct page *to, struct page *from); #define __HAVE_ARCH_COPY_HIGHPAGE +#ifdef CONFIG_ARCH_HAS_COPY_MC +extern void copy_page_mc(void *to, const void *from); +void copy_highpage_mc(struct page *to, struct page *from); +#define __HAVE_ARCH_COPY_HIGHPAGE_MC + +void copy_user_highpage_mc(struct page *to, struct page *from, + unsigned long vaddr, struct vm_area_struct *vma); +#define __HAVE_ARCH_COPY_USER_HIGHPAGE_MC +#endif + struct page *alloc_zeroed_user_highpage_movable(struct vm_area_struct *vma, unsigned long vaddr); #define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE_MOVABLE diff --git a/arch/arm64/lib/Makefile b/arch/arm64/lib/Makefile index 29490be2546b..29c578414b12 100644 --- a/arch/arm64/lib/Makefile +++ b/arch/arm64/lib/Makefile @@ -22,3 +22,5 @@ obj-$(CONFIG_FUNCTION_ERROR_INJECTION) += error-inject.o obj-$(CONFIG_ARM64_MTE) += mte.o obj-$(CONFIG_KASAN_SW_TAGS) += kasan_sw_tags.o + +obj-$(CONFIG_ARCH_HAS_CPY_MC) += copy_page_mc.o diff --git a/arch/arm64/lib/copy_page_mc.S b/arch/arm64/lib/copy_page_mc.S new file mode 100644 index 000000000000..cbf56e661efe --- /dev/null +++ b/arch/arm64/lib/copy_page_mc.S @@ -0,0 +1,98 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2012 ARM Ltd. + */ + +#include +#include +#include +#include +#include +#include + +/* + * Copy a page from src to dest (both are page aligned) with machine check + * + * Parameters: + * x0 - dest + * x1 - src + */ +SYM_FUNC_START(__pi_copy_page_mc) +alternative_if ARM64_HAS_NO_HW_PREFETCH + // Prefetch three cache lines ahead. 
+ prfm pldl1strm, [x1, #128] + prfm pldl1strm, [x1, #256] + prfm pldl1strm, [x1, #384] +alternative_else_nop_endif + +100: ldp x2, x3, [x1] +101: ldp x4, x5, [x1, #16] +102: ldp x6, x7, [x1, #32] +103: ldp x8, x9, [x1, #48] +104: ldp x10, x11, [x1, #64] +105: ldp x12, x13, [x1, #80] +106: ldp x14, x15, [x1, #96] +107: ldp x16, x17, [x1, #112] + + add x0, x0, #256 + add x1, x1, #128 +1: + tst x0, #(PAGE_SIZE - 1) + +alternative_if ARM64_HAS_NO_HW_PREFETCH + prfm pldl1strm, [x1, #384] +alternative_else_nop_endif + + stnp x2, x3, [x0, #-256] +200: ldp x2, x3, [x1] + stnp x4, x5, [x0, #16 - 256] +201: ldp x4, x5, [x1, #16] + stnp x6, x7, [x0, #32 - 256] +202: ldp x6, x7, [x1, #32] + stnp x8, x9, [x0, #48 - 256] +203: ldp x8, x9, [x1, #48] + stnp x10, x11, [x0, #64 - 256] +204: ldp x10, x11, [x1, #64] + stnp x12, x13, [x0, #80 - 256] +205: ldp x12, x13, [x1, #80] + stnp x14, x15, [x0, #96 - 256] +206: ldp x14, x15, [x1, #96] + stnp x16, x17, [x0, #112 - 256] +207: ldp x16, x17, [x1, #112] + + add x0, x0, #128 + add x1, x1, #128 + + b.ne 1b + + stnp x2, x3, [x0, #-256] + stnp x4, x5, [x0, #16 - 256] + stnp x6, x7, [x0, #32 - 256] + stnp x8, x9, [x0, #48 - 256] + stnp x10, x11, [x0, #64 - 256] + stnp x12, x13, [x0, #80 - 256] + stnp x14, x15, [x0, #96 - 256] + stnp x16, x17, [x0, #112 - 256] + +300: ret + +_asm_extable_mc 100b, 300b +_asm_extable_mc 101b, 300b +_asm_extable_mc 102b, 300b +_asm_extable_mc 103b, 300b +_asm_extable_mc 104b, 300b +_asm_extable_mc 105b, 300b +_asm_extable_mc 106b, 300b +_asm_extable_mc 107b, 300b +_asm_extable_mc 200b, 300b +_asm_extable_mc 201b, 300b +_asm_extable_mc 202b, 300b +_asm_extable_mc 203b, 300b +_asm_extable_mc 204b, 300b +_asm_extable_mc 205b, 300b +_asm_extable_mc 206b, 300b +_asm_extable_mc 207b, 300b + +SYM_FUNC_END(__pi_copy_page_mc) +SYM_FUNC_ALIAS(copy_page_mc, __pi_copy_page_mc) +EXPORT_SYMBOL(copy_page_mc) diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c index 0dea80bf6de4..0f28edfcb234 100644 --- a/arch/arm64/mm/copypage.c +++ b/arch/arm64/mm/copypage.c @@ -14,13 +14,8 @@ #include #include -void copy_highpage(struct page *to, struct page *from) +static void do_mte(struct page *to, struct page *from, void *kto, void *kfrom) { - void *kto = page_address(to); - void *kfrom = page_address(from); - - copy_page(kto, kfrom); - if (system_supports_mte() && test_bit(PG_mte_tagged, &from->flags)) { set_bit(PG_mte_tagged, &to->flags); page_kasan_tag_reset(to); @@ -35,6 +30,15 @@ void copy_highpage(struct page *to, struct page *from) mte_copy_page_tags(kto, kfrom); } } + +void copy_highpage(struct page *to, struct page *from) +{ + void *kto = page_address(to); + void *kfrom = page_address(from); + + copy_page(kto, kfrom); + do_mte(to, from, kto, kfrom); +} EXPORT_SYMBOL(copy_highpage); void copy_user_highpage(struct page *to, struct page *from, @@ -44,3 +48,23 @@ void copy_user_highpage(struct page *to, struct page *from, flush_dcache_page(to); } EXPORT_SYMBOL_GPL(copy_user_highpage); + +#ifdef CONFIG_ARCH_HAS_COPY_MC +void copy_highpage_mc(struct page *to, struct page *from) +{ + void *kto = page_address(to); + void *kfrom = page_address(from); + + copy_page_mc(kto, kfrom); + do_mte(to, from, kto, kfrom); +} +EXPORT_SYMBOL(copy_highpage_mc); + +void copy_user_highpage_mc(struct page *to, struct page *from, + unsigned long vaddr, struct vm_area_struct *vma) +{ + copy_highpage_mc(to, from); + flush_dcache_page(to); +} +EXPORT_SYMBOL_GPL(copy_user_highpage_mc); +#endif diff --git a/include/linux/highmem.h b/include/linux/highmem.h index 
39bb9b47fa9c..a9dbf331b038 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -283,6 +283,10 @@ static inline void copy_user_highpage(struct page *to, struct page *from,
 
 #endif
 
+#ifndef __HAVE_ARCH_COPY_USER_HIGHPAGE_MC
+#define copy_user_highpage_mc copy_user_highpage
+#endif
+
 #ifndef __HAVE_ARCH_COPY_HIGHPAGE
 
 static inline void copy_highpage(struct page *to, struct page *from)
@@ -298,6 +302,10 @@ static inline void copy_highpage(struct page *to, struct page *from)
 
 #endif
 
+#ifndef __HAVE_ARCH_COPY_HIGHPAGE_MC
+#define cop_highpage_mc copy_highpage
+#endif
+
 static inline void memcpy_page(struct page *dst_page, size_t dst_off,
 			       struct page *src_page, size_t src_off,
 			       size_t len)
diff --git a/mm/memory.c b/mm/memory.c
index 76e3af9639d9..d5f62234152d 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2767,7 +2767,7 @@ static inline bool cow_user_page(struct page *dst, struct page *src,
 	unsigned long addr = vmf->address;
 
 	if (likely(src)) {
-		copy_user_highpage(dst, src, addr, vma);
+		copy_user_highpage_mc(dst, src, addr, vma);
 		return true;
 	}

From patchwork Wed Apr 6 09:13:11 2022
X-Patchwork-Submitter: Tong Tiangen
X-Patchwork-Id: 12802735
Peter Anvin" CC: , , , Tong Tiangen Subject: [RFC PATCH -next V2 7/7] arm64: add pagecache reading to machine check safe Date: Wed, 6 Apr 2022 09:13:11 +0000 Message-ID: <20220406091311.3354723-8-tongtiangen@huawei.com> X-Mailer: git-send-email 2.18.0.huawei.25 In-Reply-To: <20220406091311.3354723-1-tongtiangen@huawei.com> References: <20220406091311.3354723-1-tongtiangen@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.175.112.125] X-ClientProxiedBy: dggems706-chm.china.huawei.com (10.3.19.183) To kwepemm600017.china.huawei.com (7.193.23.234) X-CFilter-Loop: Reflected X-Rspamd-Server: rspam06 X-Rspamd-Queue-Id: 7F34040017 X-Rspam-User: Authentication-Results: imf07.hostedemail.com; dkim=none; dmarc=pass (policy=quarantine) header.from=huawei.com; spf=pass (imf07.hostedemail.com: domain of tongtiangen@huawei.com designates 45.249.212.188 as permitted sender) smtp.mailfrom=tongtiangen@huawei.com X-Stat-Signature: i5cjgw8g4b8ak9ge5hzgznngb1xwhkio X-HE-Tag: 1649235278-335953 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: When user process reading file, the data is cached in pagecache and the data belongs to the user process, When machine check error is encountered during pagecache reading, killing the user process and isolate the user page with hardware memory errors is a more reasonable choice than kernel panic. The __arch_copy_mc_to_user() in copy_to_user_mc.S is largely borrows from __arch_copy_to_user() in copy_to_user.S and the main difference is __arch_copy_mc_to_user() add the extable entry to support machine check safe. In _copy_page_to_iter(), machine check safe only considered ITER_IOVEC which is used by pagecache reading. Signed-off-by: Tong Tiangen --- arch/arm64/include/asm/uaccess.h | 15 ++++++ arch/arm64/lib/Makefile | 2 +- arch/arm64/lib/copy_to_user_mc.S | 78 +++++++++++++++++++++++++++++ include/linux/uio.h | 9 +++- lib/iov_iter.c | 85 +++++++++++++++++++++++++------- 5 files changed, 170 insertions(+), 19 deletions(-) create mode 100644 arch/arm64/lib/copy_to_user_mc.S diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h index 24b662407fbd..f0d5e811165a 100644 --- a/arch/arm64/include/asm/uaccess.h +++ b/arch/arm64/include/asm/uaccess.h @@ -448,6 +448,21 @@ extern long strncpy_from_user(char *dest, const char __user *src, long count); extern __must_check long strnlen_user(const char __user *str, long n); +#ifdef CONFIG_ARCH_HAS_COPY_MC +extern unsigned long __must_check __arch_copy_mc_to_user(void __user *to, + const void *from, unsigned long n); +static inline unsigned long __must_check +copy_mc_to_user(void __user *to, const void *from, unsigned long n) +{ + uaccess_ttbr0_enable(); + n = __arch_copy_mc_to_user(__uaccess_mask_ptr(to), from, n); + uaccess_ttbr0_disable(); + + return n; +} +#define copy_mc_to_user copy_mc_to_user +#endif + #ifdef CONFIG_ARCH_HAS_UACCESS_FLUSHCACHE struct page; void memcpy_page_flushcache(char *to, struct page *page, size_t offset, size_t len); diff --git a/arch/arm64/lib/Makefile b/arch/arm64/lib/Makefile index 29c578414b12..9b3571227fb4 100644 --- a/arch/arm64/lib/Makefile +++ b/arch/arm64/lib/Makefile @@ -23,4 +23,4 @@ obj-$(CONFIG_ARM64_MTE) += mte.o obj-$(CONFIG_KASAN_SW_TAGS) += kasan_sw_tags.o -obj-$(CONFIG_ARCH_HAS_CPY_MC) += copy_page_mc.o +obj-$(CONFIG_ARCH_HAS_COPY_MC) += copy_page_mc.o copy_to_user_mc.o diff --git a/arch/arm64/lib/copy_to_user_mc.S b/arch/arm64/lib/copy_to_user_mc.S new 
file mode 100644 index 000000000000..9d228ff15446 --- /dev/null +++ b/arch/arm64/lib/copy_to_user_mc.S @@ -0,0 +1,78 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2012 ARM Ltd. + */ + +#include + +#include +#include +#include + +/* + * Copy to user space from a kernel buffer (alignment handled by the hardware) + * + * Parameters: + * x0 - to + * x1 - from + * x2 - n + * Returns: + * x0 - bytes not copied + */ + .macro ldrb1 reg, ptr, val + 1000: ldrb \reg, [\ptr], \val; + _asm_extable_mc 1000b, 9998f; + .endm + + .macro strb1 reg, ptr, val + user_ldst_mc 9998f, sttrb, \reg, \ptr, \val + .endm + + .macro ldrh1 reg, ptr, val + 1001: ldrh \reg, [\ptr], \val; + _asm_extable_mc 1001b, 9998f; + .endm + + .macro strh1 reg, ptr, val + user_ldst_mc 9997f, sttrh, \reg, \ptr, \val + .endm + + .macro ldr1 reg, ptr, val + 1002: ldr \reg, [\ptr], \val; + _asm_extable_mc 1002b, 9998f; + .endm + + .macro str1 reg, ptr, val + user_ldst_mc 9997f, sttr, \reg, \ptr, \val + .endm + + .macro ldp1 reg1, reg2, ptr, val + 1003: ldp \reg1, \reg2, [\ptr], \val; + _asm_extable_mc 1003b, 9998f; + .endm + + .macro stp1 reg1, reg2, ptr, val + user_stp 9997f, \reg1, \reg2, \ptr, \val + .endm + +end .req x5 +srcin .req x15 +SYM_FUNC_START(__arch_copy_mc_to_user) + add end, x0, x2 + mov srcin, x1 +#include "copy_template.S" + mov x0, #0 + ret + + // Exception fixups +9997: cbz x0, 9998f // Check machine check exception + cmp dst, dstin + b.ne 9998f + // Before being absolutely sure we couldn't copy anything, try harder + ldrb tmp1w, [srcin] +USER(9998f, sttrb tmp1w, [dst]) + add dst, dst, #1 +9998: sub x0, end, dst // bytes not copied + ret +SYM_FUNC_END(__arch_copy_mc_to_user) +EXPORT_SYMBOL(__arch_copy_mc_to_user) diff --git a/include/linux/uio.h b/include/linux/uio.h index 739285fe5a2f..539d9ee9b032 100644 --- a/include/linux/uio.h +++ b/include/linux/uio.h @@ -147,10 +147,17 @@ size_t _copy_to_iter(const void *addr, size_t bytes, struct iov_iter *i); size_t _copy_from_iter(void *addr, size_t bytes, struct iov_iter *i); size_t _copy_from_iter_nocache(void *addr, size_t bytes, struct iov_iter *i); +#ifdef CONFIG_ARCH_HAS_COPY_MC +size_t copy_mc_page_to_iter(struct page *page, size_t offset, size_t bytes, + struct iov_iter *i); +#else +#define copy_mc_page_to_iter copy_page_to_iter +#endif + static inline size_t copy_folio_to_iter(struct folio *folio, size_t offset, size_t bytes, struct iov_iter *i) { - return copy_page_to_iter(&folio->page, offset, bytes, i); + return copy_mc_page_to_iter(&folio->page, offset, bytes, i); } static __always_inline __must_check diff --git a/lib/iov_iter.c b/lib/iov_iter.c index 6dd5330f7a99..2c5f3bb6391d 100644 --- a/lib/iov_iter.c +++ b/lib/iov_iter.c @@ -157,6 +157,19 @@ static int copyout(void __user *to, const void *from, size_t n) return n; } +#ifdef CONFIG_ARCH_HAS_COPY_MC +static int copyout_mc(void __user *to, const void *from, size_t n) +{ + if (access_ok(to, n)) { + instrument_copy_to_user(to, from, n); + n = copy_mc_to_user((__force void *) to, from, n); + } + return n; +} +#else +#define copyout_mc copyout +#endif + static int copyin(void *to, const void __user *from, size_t n) { if (should_fail_usercopy()) @@ -169,7 +182,7 @@ static int copyin(void *to, const void __user *from, size_t n) } static size_t copy_page_to_iter_iovec(struct page *page, size_t offset, size_t bytes, - struct iov_iter *i) + struct iov_iter *i, bool mc_safe) { size_t skip, copy, left, wanted; const struct iovec *iov; @@ -194,7 +207,10 @@ static size_t copy_page_to_iter_iovec(struct 
page *page, size_t offset, size_t b from = kaddr + offset; /* first chunk, usually the only one */ - left = copyout(buf, from, copy); + if (mc_safe) + left = copyout_mc(buf, from, copy); + else + left = copyout(buf, from, copy); copy -= left; skip += copy; from += copy; @@ -204,7 +220,10 @@ static size_t copy_page_to_iter_iovec(struct page *page, size_t offset, size_t b iov++; buf = iov->iov_base; copy = min(bytes, iov->iov_len); - left = copyout(buf, from, copy); + if (mc_safe) + left = copyout_mc(buf, from, copy); + else + left = copyout(buf, from, copy); copy -= left; skip = copy; from += copy; @@ -223,7 +242,10 @@ static size_t copy_page_to_iter_iovec(struct page *page, size_t offset, size_t b kaddr = kmap(page); from = kaddr + offset; - left = copyout(buf, from, copy); + if (mc_safe) + left = copyout_mc(buf, from, copy); + else + left = copyout(buf, from, copy); copy -= left; skip += copy; from += copy; @@ -232,7 +254,10 @@ static size_t copy_page_to_iter_iovec(struct page *page, size_t offset, size_t b iov++; buf = iov->iov_base; copy = min(bytes, iov->iov_len); - left = copyout(buf, from, copy); + if (mc_safe) + left = copyout_mc(buf, from, copy); + else + left = copyout(buf, from, copy); copy -= left; skip = copy; from += copy; @@ -674,15 +699,6 @@ size_t _copy_to_iter(const void *addr, size_t bytes, struct iov_iter *i) EXPORT_SYMBOL(_copy_to_iter); #ifdef CONFIG_ARCH_HAS_COPY_MC -static int copyout_mc(void __user *to, const void *from, size_t n) -{ - if (access_ok(to, n)) { - instrument_copy_to_user(to, from, n); - n = copy_mc_to_user((__force void *) to, from, n); - } - return n; -} - static size_t copy_mc_pipe_to_iter(const void *addr, size_t bytes, struct iov_iter *i) { @@ -846,10 +862,10 @@ static inline bool page_copy_sane(struct page *page, size_t offset, size_t n) } static size_t __copy_page_to_iter(struct page *page, size_t offset, size_t bytes, - struct iov_iter *i) + struct iov_iter *i, bool mc_safe) { if (likely(iter_is_iovec(i))) - return copy_page_to_iter_iovec(page, offset, bytes, i); + return copy_page_to_iter_iovec(page, offset, bytes, i, mc_safe); if (iov_iter_is_bvec(i) || iov_iter_is_kvec(i) || iov_iter_is_xarray(i)) { void *kaddr = kmap_local_page(page); size_t wanted = _copy_to_iter(kaddr + offset, bytes, i); @@ -878,7 +894,7 @@ size_t copy_page_to_iter(struct page *page, size_t offset, size_t bytes, offset %= PAGE_SIZE; while (1) { size_t n = __copy_page_to_iter(page, offset, - min(bytes, (size_t)PAGE_SIZE - offset), i); + min(bytes, (size_t)PAGE_SIZE - offset), i, false); res += n; bytes -= n; if (!bytes || !n) @@ -893,6 +909,41 @@ size_t copy_page_to_iter(struct page *page, size_t offset, size_t bytes, } EXPORT_SYMBOL(copy_page_to_iter); +#ifdef CONFIG_ARCH_HAS_COPY_MC +/** + * copy_mc_page_to_iter - copy page to iter with source memory error exception handling. + * + * The filemap_read deploys this for pagecache reading and the main differences between + * this and typical copy_page_to_iter() is call __copy_page_to_iter with mc_safe true. 
+ * + * Return: number of bytes copied (may be %0) + */ +size_t copy_mc_page_to_iter(struct page *page, size_t offset, size_t bytes, + struct iov_iter *i) +{ + size_t res = 0; + + if (unlikely(!page_copy_sane(page, offset, bytes))) + return 0; + page += offset / PAGE_SIZE; // first subpage + offset %= PAGE_SIZE; + while (1) { + size_t n = __copy_page_to_iter(page, offset, + min(bytes, (size_t)PAGE_SIZE - offset), i, true); + res += n; + bytes -= n; + if (!bytes || !n) + break; + offset += n; + if (offset == PAGE_SIZE) { + page++; + offset = 0; + } + } + return res; +} +#endif + size_t copy_page_from_iter(struct page *page, size_t offset, size_t bytes, struct iov_iter *i) {
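For completeness, the new helper keeps the calling convention of
copy_page_to_iter(); a minimal, hypothetical usage sketch (the caller
below is not part of this series):

	/* Illustrative only: example_copy_cached_page() is a hypothetical helper. */
	static size_t example_copy_cached_page(struct page *page, size_t offset,
					       size_t len, struct iov_iter *iter)
	{
		/*
		 * Returns the number of bytes copied; the count may be short if a
		 * hardware memory error is consumed while copying to an ITER_IOVEC
		 * target. Without CONFIG_ARCH_HAS_COPY_MC this is simply
		 * copy_page_to_iter().
		 */
		return copy_mc_page_to_iter(page, offset, len, iter);
	}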