From patchwork Wed Apr 20 03:04:12 2022
From: Tong Tiangen <tongtiangen@huawei.com>
To: Mark Rutland, James Morse, Andrew Morton, Thomas Gleixner, Ingo Molnar,
 Borislav Petkov, Robin Murphy, Dave Hansen, Catalin Marinas, Will Deacon,
 Alexander Viro, Michael Ellerman, Benjamin Herrenschmidt, Paul Mackerras,
 "H. Peter Anvin"
Cc: Kefeng Wang, Xie XiuQi, Guohanjun, Tong Tiangen
Subject: [PATCH -next v4 1/7] x86, powerpc: fix function define in copy_mc_to_user
Date: Wed, 20 Apr 2022 03:04:12 +0000
Message-ID: <20220420030418.3189040-2-tongtiangen@huawei.com>
In-Reply-To: <20220420030418.3189040-1-tongtiangen@huawei.com>
References: <20220420030418.3189040-1-tongtiangen@huawei.com>

x86 and powerpc each have their own implementation of copy_mc_to_user(),
but neither uses #define to declare it. This can cause problems: if
another architecture selects CONFIG_ARCH_HAS_COPY_MC but a generic
fallback for copy_mc_to_user() is wanted outside the architectures, the
code added to include/linux/uaccess.h would look like this:

#ifndef copy_mc_to_user
static inline unsigned long __must_check
copy_mc_to_user(void *dst, const void *src, size_t cnt)
{
	...
}
#endif

Without the #define, this generic definition conflicts with the x86 and
powerpc implementations and causes compilation errors.

Fixes: ec6347bb4339 ("x86, powerpc: Rename memcpy_mcsafe() to copy_mc_to_{user, kernel}()")
Signed-off-by: Tong Tiangen
Acked-by: Michael Ellerman (powerpc)
---
 arch/powerpc/include/asm/uaccess.h | 1 +
 arch/x86/include/asm/uaccess.h     | 1 +
 2 files changed, 2 insertions(+)

diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
index 9b82b38ff867..58dbe8e2e318 100644
--- a/arch/powerpc/include/asm/uaccess.h
+++ b/arch/powerpc/include/asm/uaccess.h
@@ -358,6 +358,7 @@ copy_mc_to_user(void __user *to, const void *from, unsigned long n)
 
 	return n;
 }
+#define copy_mc_to_user copy_mc_to_user
 #endif
 
 extern long __copy_from_user_flushcache(void *dst, const void __user *src,

diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
index f78e2b3501a1..e18c5f098025 100644
--- a/arch/x86/include/asm/uaccess.h
+++ b/arch/x86/include/asm/uaccess.h
@@ -415,6 +415,7 @@ copy_mc_to_kernel(void *to, const void *from, unsigned len);
 
 unsigned long __must_check
 copy_mc_to_user(void *to, const void *from, unsigned len);
+#define copy_mc_to_user copy_mc_to_user
 #endif
 
 /*
From patchwork Wed Apr 20 03:04:13 2022
From: Tong Tiangen <tongtiangen@huawei.com>
Subject: [PATCH -next v4 2/7] arm64: fix types in copy_highpage()
Date: Wed, 20 Apr 2022 03:04:13 +0000
Message-ID: <20220420030418.3189040-3-tongtiangen@huawei.com>
In-Reply-To: <20220420030418.3189040-1-tongtiangen@huawei.com>
References: <20220420030418.3189040-1-tongtiangen@huawei.com>

In copy_highpage() the `kto` and `kfrom` local variables are pointers to
struct page, but these are used to hold arbitrary pointers to kernel
memory. Each call to page_address() returns a void pointer to memory
associated with the relevant page, and copy_page() expects void pointers
to this memory.

This inconsistency was introduced in commit 2563776b41c3 ("arm64: mte:
Tags-aware copy_{user_,}highpage() implementations") and while this
doesn't appear to be harmful in practice, it is clearly wrong.

Correct this by making `kto` and `kfrom` void pointers.
Fixes: 2563776b41c3 ("arm64: mte: Tags-aware copy_{user_,}highpage() implementations")
Signed-off-by: Tong Tiangen
Acked-by: Mark Rutland
Reviewed-by: Kefeng Wang
---
 arch/arm64/mm/copypage.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c
index b5447e53cd73..0dea80bf6de4 100644
--- a/arch/arm64/mm/copypage.c
+++ b/arch/arm64/mm/copypage.c
@@ -16,8 +16,8 @@
 void copy_highpage(struct page *to, struct page *from)
 {
-	struct page *kto = page_address(to);
-	struct page *kfrom = page_address(from);
+	void *kto = page_address(to);
+	void *kfrom = page_address(from);
 
 	copy_page(kto, kfrom);
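Why was the wrong type harmless in practice? In C, `void *` converts
implicitly to and from any object pointer type, so the misdeclared
`struct page *` round-trips through both calls without a cast or a
warning. A standalone sketch of the effect, using hypothetical stand-ins
for page_address() and copy_page() rather than the kernel functions:

#include <string.h>

struct page { int dummy; };

/* stand-in for page_address(): returns the page's kernel mapping */
static void *fake_page_address(struct page *p) { return p; }

/* stand-in for copy_page(): takes void pointers to the mappings */
static void fake_copy_page(void *to, const void *from)
{
	memcpy(to, from, sizeof(struct page));
}

static void copy_demo(struct page *to, struct page *from)
{
	/* Wrong but compiles silently: void * -> struct page * implicitly... */
	struct page *kto = fake_page_address(to);
	struct page *kfrom = fake_page_address(from);

	/* ...and struct page * -> void * implicitly on the way back in. */
	fake_copy_page(kto, kfrom);
}

int main(void)
{
	struct page a = { 1 }, b = { 0 };

	copy_demo(&b, &a);
	return b.dummy - 1;	/* 0: the copy worked despite the types */
}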
Peter Anvin" CC: , , , , Kefeng Wang , Xie XiuQi , Guohanjun , Tong Tiangen Subject: [PATCH -next v4 3/7] arm64: add support for machine check error safe Date: Wed, 20 Apr 2022 03:04:14 +0000 Message-ID: <20220420030418.3189040-4-tongtiangen@huawei.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20220420030418.3189040-1-tongtiangen@huawei.com> References: <20220420030418.3189040-1-tongtiangen@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.175.112.125] X-ClientProxiedBy: dggems706-chm.china.huawei.com (10.3.19.183) To kwepemm600017.china.huawei.com (7.193.23.234) X-CFilter-Loop: Reflected X-Rspamd-Server: rspam09 X-Rspamd-Queue-Id: AA75B4000A X-Rspam-User: Authentication-Results: imf12.hostedemail.com; dkim=none; dmarc=pass (policy=quarantine) header.from=huawei.com; spf=pass (imf12.hostedemail.com: domain of tongtiangen@huawei.com designates 45.249.212.189 as permitted sender) smtp.mailfrom=tongtiangen@huawei.com X-Stat-Signature: qxc4afwdn1aeetqx9uykkip8jwbdt177 X-HE-Tag: 1650422738-437499 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: During the processing of arm64 kernel hardware memory errors(do_sea()), if the errors is consumed in the kernel, the current processing is panic. However, it is not optimal. Take uaccess for example, if the uaccess operation fails due to memory error, only the user process will be affected, kill the user process and isolate the user page with hardware memory errors is a better choice. This patch only enable machine error check framework, it add exception fixup before kernel panic in do_sea() and only limit the consumption of hardware memory errors in kernel mode triggered by user mode processes. If fixup successful, panic can be avoided. Consistent with PPC/x86, it is implemented by CONFIG_ARCH_HAS_COPY_MC. Also add copy_mc_to_user() in include/linux/uaccess.h, this helper is called when CONFIG_ARCH_HAS_COPOY_MC is open. 
Signed-off-by: Tong Tiangen
---
 arch/arm64/Kconfig               |  1 +
 arch/arm64/include/asm/extable.h |  1 +
 arch/arm64/mm/extable.c          | 17 +++++++++++++++++
 arch/arm64/mm/fault.c            | 27 ++++++++++++++++++++++++++-
 include/linux/uaccess.h          |  9 +++++++++
 5 files changed, 54 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index d9325dd95eba..012e38309955 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -19,6 +19,7 @@ config ARM64
 	select ARCH_ENABLE_SPLIT_PMD_PTLOCK if PGTABLE_LEVELS > 2
 	select ARCH_ENABLE_THP_MIGRATION if TRANSPARENT_HUGEPAGE
 	select ARCH_HAS_CACHE_LINE_SIZE
+	select ARCH_HAS_COPY_MC if ACPI_APEI_GHES
 	select ARCH_HAS_CURRENT_STACK_POINTER
 	select ARCH_HAS_DEBUG_VIRTUAL
 	select ARCH_HAS_DEBUG_VM_PGTABLE

diff --git a/arch/arm64/include/asm/extable.h b/arch/arm64/include/asm/extable.h
index 72b0e71cc3de..f80ebd0addfd 100644
--- a/arch/arm64/include/asm/extable.h
+++ b/arch/arm64/include/asm/extable.h
@@ -46,4 +46,5 @@ bool ex_handler_bpf(const struct exception_table_entry *ex,
 #endif /* !CONFIG_BPF_JIT */
 
 bool fixup_exception(struct pt_regs *regs);
+bool fixup_exception_mc(struct pt_regs *regs);
 #endif

diff --git a/arch/arm64/mm/extable.c b/arch/arm64/mm/extable.c
index 489455309695..4f0083a550d4 100644
--- a/arch/arm64/mm/extable.c
+++ b/arch/arm64/mm/extable.c
@@ -9,6 +9,7 @@
 
 #include
 #include
+#include
 
 static inline unsigned long
 get_ex_fixup(const struct exception_table_entry *ex)
@@ -84,3 +85,19 @@ bool fixup_exception(struct pt_regs *regs)
 
 	BUG();
 }
+
+bool fixup_exception_mc(struct pt_regs *regs)
+{
+	const struct exception_table_entry *ex;
+
+	ex = search_exception_tables(instruction_pointer(regs));
+	if (!ex)
+		return false;
+
+	/*
+	 * This is not complete; more machine-check-safe extable types can
+	 * be processed here.
+	 */
+
+	return false;
+}

diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 77341b160aca..a9e6fb1999d1 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -695,6 +695,29 @@ static int do_bad(unsigned long far, unsigned int esr, struct pt_regs *regs)
 	return 1; /* "fault" */
 }
 
+static bool arm64_do_kernel_sea(unsigned long addr, unsigned int esr,
+				struct pt_regs *regs, int sig, int code)
+{
+	if (!IS_ENABLED(CONFIG_ARCH_HAS_COPY_MC))
+		return false;
+
+	if (user_mode(regs) || !current->mm)
+		return false;
+
+	if (apei_claim_sea(regs) < 0)
+		return false;
+
+	if (!fixup_exception_mc(regs))
+		return false;
+
+	set_thread_esr(0, esr);
+
+	arm64_force_sig_fault(sig, code, addr,
+		"Uncorrected hardware memory error in kernel-access\n");
+
+	return true;
+}
+
 static int do_sea(unsigned long far, unsigned int esr, struct pt_regs *regs)
 {
 	const struct fault_info *inf;
@@ -720,7 +743,9 @@ static int do_sea(unsigned long far, unsigned int esr, struct pt_regs *regs)
 		 */
 		siaddr = untagged_addr(far);
 	}
-	arm64_notify_die(inf->name, regs, inf->sig, inf->code, siaddr, esr);
+
+	if (!arm64_do_kernel_sea(siaddr, esr, regs, inf->sig, inf->code))
+		arm64_notify_die(inf->name, regs, inf->sig, inf->code, siaddr, esr);
 
 	return 0;
 }

diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
index 546179418ffa..884661b29c17 100644
--- a/include/linux/uaccess.h
+++ b/include/linux/uaccess.h
@@ -174,6 +174,15 @@ copy_mc_to_kernel(void *dst, const void *src, size_t cnt)
 }
 #endif
 
+#ifndef copy_mc_to_user
+static inline unsigned long __must_check
+copy_mc_to_user(void *dst, const void *src, size_t cnt)
+{
+	check_object_size(src, cnt, true);
+	return raw_copy_to_user(dst, src, cnt);
+}
+#endif
+
 static __always_inline void pagefault_disabled_inc(void)
 {
 	current->pagefault_disabled++;
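Like copy_to_user(), copy_mc_to_user() returns the number of bytes not
copied, so a caller treats any non-zero return as a short copy; the only
behavioral difference on CONFIG_ARCH_HAS_COPY_MC architectures is that a
consumed memory error now also takes this path instead of panicking. A
hedged usage sketch (hypothetical helper, not part of this series):

#include <linux/uaccess.h>
#include <linux/errno.h>

/*
 * Hypothetical helper: push a kernel buffer out to userspace.  If the
 * source memory has an uncorrected hardware error, the machine check
 * fixup makes copy_mc_to_user() return the bytes remaining rather
 * than letting the kernel panic.
 */
static long push_to_user(void __user *ubuf, const void *kbuf, size_t len)
{
	unsigned long rem = copy_mc_to_user((__force void *)ubuf, kbuf, len);

	if (rem)
		return -EFAULT;	/* short copy: page fault or memory error */
	return 0;
}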
From patchwork Wed Apr 20 03:04:15 2022
From: Tong Tiangen <tongtiangen@huawei.com>
Subject: [PATCH -next v4 4/7] arm64: add copy_{to, from}_user to machine check safe
Date: Wed, 20 Apr 2022 03:04:15 +0000
Message-ID: <20220420030418.3189040-5-tongtiangen@huawei.com>
In-Reply-To: <20220420030418.3189040-1-tongtiangen@huawei.com>
References: <20220420030418.3189040-1-tongtiangen@huawei.com>

Make copy_{to, from}_user() machine check safe. If a copy fails due to a
hardware memory error, only the relevant process is affected, so killing
the user process and isolating the faulty user page is a more reasonable
choice than a kernel panic.

Add a new extable type, EX_TYPE_UACCESS_MC, which can be used for
uaccess instructions that can be recovered from hardware memory errors.
In copy_xxx_user, the x16 register is used to carry the fixup type for
entries that use EX_TYPE_UACCESS_MC.
Signed-off-by: Tong Tiangen
---
 arch/arm64/include/asm/asm-extable.h | 14 ++++++++++++++
 arch/arm64/include/asm/asm-uaccess.h | 15 ++++++++++-----
 arch/arm64/lib/copy_from_user.S      | 18 +++++++++++-------
 arch/arm64/lib/copy_to_user.S        | 18 +++++++++++-------
 arch/arm64/mm/extable.c              | 18 ++++++++++++++----
 5 files changed, 60 insertions(+), 23 deletions(-)

diff --git a/arch/arm64/include/asm/asm-extable.h b/arch/arm64/include/asm/asm-extable.h
index c39f2437e08e..75b2c00e9523 100644
--- a/arch/arm64/include/asm/asm-extable.h
+++ b/arch/arm64/include/asm/asm-extable.h
@@ -2,12 +2,18 @@
 #ifndef __ASM_ASM_EXTABLE_H
 #define __ASM_ASM_EXTABLE_H
 
+#define FIXUP_TYPE_NORMAL		0
+#define FIXUP_TYPE_MC			1
+
 #define EX_TYPE_NONE			0
 #define EX_TYPE_FIXUP			1
 #define EX_TYPE_BPF			2
 #define EX_TYPE_UACCESS_ERR_ZERO	3
 #define EX_TYPE_LOAD_UNALIGNED_ZEROPAD	4
 
+/* _MC indicates that the entry can fix up machine check errors */
+#define EX_TYPE_UACCESS_MC		5
+
 #ifdef __ASSEMBLY__
 
 #define __ASM_EXTABLE_RAW(insn, fixup, type, data)	\
@@ -27,6 +33,14 @@
 	__ASM_EXTABLE_RAW(\insn, \fixup, EX_TYPE_FIXUP, 0)
 	.endm
 
+/*
+ * Create an exception table entry for `insn`, which will branch to `fixup`
+ * when an unhandled fault (including an SEA fault) is taken.
+ */
+	.macro		_asm_extable_uaccess_mc, insn, fixup
+	__ASM_EXTABLE_RAW(\insn, \fixup, EX_TYPE_UACCESS_MC, 0)
+	.endm
+
 /*
  * Create an exception table entry for `insn` if `fixup` is provided. Otherwise
  * do nothing.

diff --git a/arch/arm64/include/asm/asm-uaccess.h b/arch/arm64/include/asm/asm-uaccess.h
index 0557af834e03..6c23c138e1fc 100644
--- a/arch/arm64/include/asm/asm-uaccess.h
+++ b/arch/arm64/include/asm/asm-uaccess.h
@@ -63,6 +63,11 @@ alternative_else_nop_endif
 9999:	x;					\
 	_asm_extable	9999b, l
 
+
+#define USER_MC(l, x...)			\
+9999:	x;					\
+	_asm_extable_uaccess_mc	9999b, l
+
 /*
  * Generate the assembly for LDTR/STTR with exception table entries.
  * This is complicated as there is no post-increment or pair versions of the
@@ -73,8 +78,8 @@ alternative_else_nop_endif
 8889:		ldtr	\reg2, [\addr, #8];
 		add	\addr, \addr, \post_inc;
 
-		_asm_extable	8888b, \l;
-		_asm_extable	8889b, \l;
+		_asm_extable_uaccess_mc	8888b, \l;
+		_asm_extable_uaccess_mc	8889b, \l;
 	.endm
 
 	.macro user_stp l, reg1, reg2, addr, post_inc
@@ -82,14 +87,14 @@ alternative_else_nop_endif
 8889:		sttr	\reg2, [\addr, #8];
 		add	\addr, \addr, \post_inc;
 
-		_asm_extable	8888b,\l;
-		_asm_extable	8889b,\l;
+		_asm_extable_uaccess_mc	8888b,\l;
+		_asm_extable_uaccess_mc	8889b,\l;
 	.endm
 
 	.macro user_ldst l, inst, reg, addr, post_inc
 8888:		\inst		\reg, [\addr];
 		add		\addr, \addr, \post_inc;
 
-		_asm_extable	8888b,\l;
+		_asm_extable_uaccess_mc	8888b, \l;
 	.endm
 #endif

diff --git a/arch/arm64/lib/copy_from_user.S b/arch/arm64/lib/copy_from_user.S
index 34e317907524..480cc5ac0a8d 100644
--- a/arch/arm64/lib/copy_from_user.S
+++ b/arch/arm64/lib/copy_from_user.S
@@ -25,7 +25,7 @@
 	.endm
 
 	.macro strb1 reg, ptr, val
-	strb \reg, [\ptr], \val
+	USER_MC(9998f, strb \reg, [\ptr], \val)
 	.endm
 
 	.macro ldrh1 reg, ptr, val
@@ -33,7 +33,7 @@
 	.endm
 
 	.macro strh1 reg, ptr, val
-	strh \reg, [\ptr], \val
+	USER_MC(9998f, strh \reg, [\ptr], \val)
 	.endm
 
 	.macro ldr1 reg, ptr, val
@@ -41,7 +41,7 @@
 	.endm
 
 	.macro str1 reg, ptr, val
-	str \reg, [\ptr], \val
+	USER_MC(9998f, str \reg, [\ptr], \val)
 	.endm
 
 	.macro ldp1 reg1, reg2, ptr, val
@@ -49,11 +49,12 @@
 	.endm
 
 	.macro stp1 reg1, reg2, ptr, val
-	stp \reg1, \reg2, [\ptr], \val
+	USER_MC(9998f, stp \reg1, \reg2, [\ptr], \val)
 	.endm
 
-end	.req	x5
-srcin	.req	x15
+end		.req	x5
+srcin		.req	x15
+fixup_type	.req	x16
 SYM_FUNC_START(__arch_copy_from_user)
 	add	end, x0, x2
 	mov	srcin, x1
@@ -62,7 +63,10 @@ SYM_FUNC_START(__arch_copy_from_user)
 	ret
 
 	// Exception fixups
-9997:	cmp	dst, dstin
+	// x16: fixup type written by ex_handler_uaccess_mc
+9997:	cmp	fixup_type, #FIXUP_TYPE_MC
+	b.eq	9998f
+	cmp	dst, dstin
 	b.ne	9998f
 	// Before being absolutely sure we couldn't copy anything, try harder
 	USER(9998f, ldtrb tmp1w, [srcin])

diff --git a/arch/arm64/lib/copy_to_user.S b/arch/arm64/lib/copy_to_user.S
index 802231772608..021a7d27b3a4 100644
--- a/arch/arm64/lib/copy_to_user.S
+++ b/arch/arm64/lib/copy_to_user.S
@@ -20,7 +20,7 @@
  *	x0 - bytes not copied
  */
 	.macro ldrb1 reg, ptr, val
-	ldrb  \reg, [\ptr], \val
+	USER_MC(9998f, ldrb  \reg, [\ptr], \val)
 	.endm
 
 	.macro strb1 reg, ptr, val
@@ -28,7 +28,7 @@
 	.endm
 
 	.macro ldrh1 reg, ptr, val
-	ldrh  \reg, [\ptr], \val
+	USER_MC(9998f, ldrh  \reg, [\ptr], \val)
 	.endm
 
 	.macro strh1 reg, ptr, val
@@ -36,7 +36,7 @@
 	.endm
 
 	.macro ldr1 reg, ptr, val
-	ldr \reg, [\ptr], \val
+	USER_MC(9998f, ldr \reg, [\ptr], \val)
 	.endm
 
 	.macro str1 reg, ptr, val
@@ -44,15 +44,16 @@
 	.endm
 
 	.macro ldp1 reg1, reg2, ptr, val
-	ldp \reg1, \reg2, [\ptr], \val
+	USER_MC(9998f, ldp \reg1, \reg2, [\ptr], \val)
 	.endm
 
 	.macro stp1 reg1, reg2, ptr, val
 	user_stp 9997f, \reg1, \reg2, \ptr, \val
 	.endm
 
-end	.req	x5
-srcin	.req	x15
+end		.req	x5
+srcin		.req	x15
+fixup_type	.req	x16
 SYM_FUNC_START(__arch_copy_to_user)
 	add	end, x0, x2
 	mov	srcin, x1
@@ -61,7 +62,10 @@ SYM_FUNC_START(__arch_copy_to_user)
 	ret
 
 	// Exception fixups
-9997:	cmp	dst, dstin
+	// x16: fixup type written by ex_handler_uaccess_mc
+9997:	cmp	fixup_type, #FIXUP_TYPE_MC
+	b.eq	9998f
+	cmp	dst, dstin
 	b.ne	9998f
 	// Before being absolutely sure we couldn't copy anything, try harder
 	ldrb	tmp1w, [srcin]
diff --git a/arch/arm64/mm/extable.c b/arch/arm64/mm/extable.c
index 4f0083a550d4..525876c3ebf4 100644
--- a/arch/arm64/mm/extable.c
+++ b/arch/arm64/mm/extable.c
@@ -24,6 +24,14 @@ static bool ex_handler_fixup(const struct exception_table_entry *ex,
 	return true;
 }
 
+static bool ex_handler_uaccess_type(const struct exception_table_entry *ex,
+				    struct pt_regs *regs,
+				    unsigned long fixup_type)
+{
+	regs->regs[16] = fixup_type;
+	return ex_handler_fixup(ex, regs);
+}
+
 static bool ex_handler_uaccess_err_zero(const struct exception_table_entry *ex,
 					struct pt_regs *regs)
 {
@@ -75,6 +83,8 @@ bool fixup_exception(struct pt_regs *regs)
 	switch (ex->type) {
 	case EX_TYPE_FIXUP:
 		return ex_handler_fixup(ex, regs);
+	case EX_TYPE_UACCESS_MC:
+		return ex_handler_uaccess_type(ex, regs, FIXUP_TYPE_NORMAL);
 	case EX_TYPE_BPF:
 		return ex_handler_bpf(ex, regs);
 	case EX_TYPE_UACCESS_ERR_ZERO:
@@ -94,10 +104,10 @@ bool fixup_exception_mc(struct pt_regs *regs)
 	if (!ex)
 		return false;
 
-	/*
-	 * This is not complete; more machine-check-safe extable types can
-	 * be processed here.
-	 */
+	switch (ex->type) {
+	case EX_TYPE_UACCESS_MC:
+		return ex_handler_uaccess_type(ex, regs, FIXUP_TYPE_MC);
+	}
 
 	return false;
 }
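The x16 convention ties the two halves together: the exception handler
stores a FIXUP_TYPE_* value into pt_regs before redirecting the PC, and
the fixup code in copy_{from,to}_user reads it to decide whether the
"try harder" single-byte retry is safe. A rough C model of that flow
(simplified pt_regs and hypothetical names apart from the FIXUP_TYPE_*
constants):

#include <stdbool.h>

#define FIXUP_TYPE_NORMAL	0
#define FIXUP_TYPE_MC		1

/* simplified stand-in for struct pt_regs */
struct fake_regs {
	unsigned long regs[31];		/* x0..x30 */
	unsigned long pc;
};

/* models ex_handler_uaccess_type(): record *why* we are fixing up */
static bool handle_uaccess_fault(struct fake_regs *regs,
				 unsigned long fixup_addr,
				 unsigned long fixup_type)
{
	regs->regs[16] = fixup_type;	/* fixup code reads this as x16 */
	regs->pc = fixup_addr;		/* resume at the 9997 fixup label */
	return true;
}

/*
 * Models the branch at the 9997 label in __arch_copy_{from,to}_user:
 * the byte-by-byte "try harder" retry is only safe for ordinary page
 * faults; after a machine check it would touch poisoned memory again.
 */
static bool may_try_harder(const struct fake_regs *regs)
{
	return regs->regs[16] != FIXUP_TYPE_MC;
}

int main(void)
{
	struct fake_regs regs = { { 0 }, 0 };

	handle_uaccess_fault(&regs, 0x9997, FIXUP_TYPE_MC);
	return may_try_harder(&regs);	/* 0: no retry after machine check */
}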
Peter Anvin" CC: , , , , Kefeng Wang , Xie XiuQi , Guohanjun , Tong Tiangen Subject: [PATCH -next v4 5/7] arm64: mte: Clean up user tag accessors Date: Wed, 20 Apr 2022 03:04:16 +0000 Message-ID: <20220420030418.3189040-6-tongtiangen@huawei.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20220420030418.3189040-1-tongtiangen@huawei.com> References: <20220420030418.3189040-1-tongtiangen@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.175.112.125] X-ClientProxiedBy: dggems706-chm.china.huawei.com (10.3.19.183) To kwepemm600017.china.huawei.com (7.193.23.234) X-CFilter-Loop: Reflected X-Rspamd-Server: rspam09 X-Rspamd-Queue-Id: 3AD70C0016 X-Rspam-User: Authentication-Results: imf10.hostedemail.com; dkim=none; dmarc=pass (policy=quarantine) header.from=huawei.com; spf=pass (imf10.hostedemail.com: domain of tongtiangen@huawei.com designates 45.249.212.188 as permitted sender) smtp.mailfrom=tongtiangen@huawei.com X-Stat-Signature: efgwmcqo1gmcb83wa5c14o318sr9f9f1 X-HE-Tag: 1650422742-8365 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Robin Murphy Invoking user_ldst to explicitly add a post-increment of 0 is silly. Just use a normal USER() annotation and save the redundant instruction. Signed-off-by: Robin Murphy Reviewed-by: Tong Tiangen --- arch/arm64/lib/mte.S | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/arch/arm64/lib/mte.S b/arch/arm64/lib/mte.S index 8590af3c98c0..eeb9e45bcce8 100644 --- a/arch/arm64/lib/mte.S +++ b/arch/arm64/lib/mte.S @@ -93,7 +93,7 @@ SYM_FUNC_START(mte_copy_tags_from_user) mov x3, x1 cbz x2, 2f 1: - user_ldst 2f, ldtrb, w4, x1, 0 +USER(2f, ldtrb w4, [x1]) lsl x4, x4, #MTE_TAG_SHIFT stg x4, [x0], #MTE_GRANULE_SIZE add x1, x1, #1 @@ -120,7 +120,7 @@ SYM_FUNC_START(mte_copy_tags_to_user) 1: ldg x4, [x1] ubfx x4, x4, #MTE_TAG_SHIFT, #MTE_TAG_SIZE - user_ldst 2f, sttrb, w4, x0, 0 +USER(2f, sttrb w4, [x0]) add x0, x0, #1 add x1, x1, #MTE_GRANULE_SIZE subs x2, x2, #1 From patchwork Wed Apr 20 03:04:17 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tong Tiangen X-Patchwork-Id: 12819662 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 52E34C433FE for ; Wed, 20 Apr 2022 02:45:48 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id E12AC6B007D; Tue, 19 Apr 2022 22:45:45 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id DC2EE6B007E; Tue, 19 Apr 2022 22:45:45 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id C3CEE6B0080; Tue, 19 Apr 2022 22:45:45 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.hostedemail.com [64.99.140.25]) by kanga.kvack.org (Postfix) with ESMTP id B23876B007D for ; Tue, 19 Apr 2022 22:45:45 -0400 (EDT) Received: from smtpin10.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay10.hostedemail.com (Postfix) with ESMTP id 8D0012810 for ; Wed, 20 Apr 2022 02:45:45 +0000 (UTC) X-FDA: 79375717050.10.80DCE98 Received: from szxga01-in.huawei.com (szxga01-in.huawei.com [45.249.212.187]) by imf30.hostedemail.com (Postfix) with ESMTP id 263C880016 for ; Wed, 20 Apr 2022 02:45:42 +0000 (UTC) Received: from kwepemi500004.china.huawei.com 
From patchwork Wed Apr 20 03:04:17 2022
From: Tong Tiangen <tongtiangen@huawei.com>
Subject: [PATCH -next v4 6/7] arm64: add {get, put}_user to machine check safe
Date: Wed, 20 Apr 2022 03:04:17 +0000
Message-ID: <20220420030418.3189040-7-tongtiangen@huawei.com>
In-Reply-To: <20220420030418.3189040-1-tongtiangen@huawei.com>
References: <20220420030418.3189040-1-tongtiangen@huawei.com>

Make {get, put}_user() machine check safe. If a get/put fails due to a
hardware memory error, only the relevant process is affected, so killing
the user process and isolating the faulty user page is a more reasonable
choice than a kernel panic.

Add a new extable type, EX_TYPE_UACCESS_MC_ERR_ZERO, which can be used
for uaccess instructions that can be recovered from hardware memory
errors. The difference from EX_TYPE_UACCESS_MC is that this type
additionally encodes two target registers: one that receives the error
code and one that needs to be zeroed.
Signed-off-by: Tong Tiangen
---
 arch/arm64/include/asm/asm-extable.h | 14 ++++++++++++++
 arch/arm64/include/asm/uaccess.h     |  4 ++--
 arch/arm64/mm/extable.c              |  4 ++++
 3 files changed, 20 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/asm-extable.h b/arch/arm64/include/asm/asm-extable.h
index 75b2c00e9523..80410899a9ad 100644
--- a/arch/arm64/include/asm/asm-extable.h
+++ b/arch/arm64/include/asm/asm-extable.h
@@ -13,6 +13,7 @@
 
 /* _MC indicates that the entry can fix up machine check errors */
 #define EX_TYPE_UACCESS_MC		5
+#define EX_TYPE_UACCESS_MC_ERR_ZERO	6
 
 #ifdef __ASSEMBLY__
 
@@ -78,6 +79,15 @@
 #define EX_DATA_REG(reg, gpr)						\
 	"((.L__gpr_num_" #gpr ") << " __stringify(EX_DATA_REG_##reg##_SHIFT) ")"
 
+#define _ASM_EXTABLE_UACCESS_MC_ERR_ZERO(insn, fixup, err, zero)	\
+	__DEFINE_ASM_GPR_NUMS						\
+	__ASM_EXTABLE_RAW(#insn, #fixup,				\
+			  __stringify(EX_TYPE_UACCESS_MC_ERR_ZERO),	\
+			  "("						\
+			    EX_DATA_REG(ERR, err) " | "			\
+			    EX_DATA_REG(ZERO, zero)			\
+			  ")")
+
 #define _ASM_EXTABLE_UACCESS_ERR_ZERO(insn, fixup, err, zero)		\
 	__DEFINE_ASM_GPR_NUMS						\
 	__ASM_EXTABLE_RAW(#insn, #fixup,				\
@@ -90,6 +100,10 @@
 #define _ASM_EXTABLE_UACCESS_ERR(insn, fixup, err)			\
 	_ASM_EXTABLE_UACCESS_ERR_ZERO(insn, fixup, err, wzr)
 
+#define _ASM_EXTABLE_UACCESS_MC_ERR(insn, fixup, err)			\
+	_ASM_EXTABLE_UACCESS_MC_ERR_ZERO(insn, fixup, err, wzr)
+
 #define EX_DATA_REG_DATA_SHIFT	0
 #define EX_DATA_REG_DATA	GENMASK(4, 0)
 #define EX_DATA_REG_ADDR_SHIFT	5

diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index e8dce0cc5eaa..e41b47df48b0 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -236,7 +236,7 @@ static inline void __user *__uaccess_mask_ptr(const void __user *ptr)
 	asm volatile(							\
 	"1:	" load "	" reg "1, [%2]\n"			\
 	"2:\n"								\
-	_ASM_EXTABLE_UACCESS_ERR_ZERO(1b, 2b, %w0, %w1)			\
+	_ASM_EXTABLE_UACCESS_MC_ERR_ZERO(1b, 2b, %w0, %w1)		\
 	: "+r" (err), "=&r" (x)						\
 	: "r" (addr))
 
@@ -325,7 +325,7 @@ do {									\
 	asm volatile(							\
 	"1:	" store "	" reg "1, [%2]\n"			\
 	"2:\n"								\
-	_ASM_EXTABLE_UACCESS_ERR(1b, 2b, %w0)				\
+	_ASM_EXTABLE_UACCESS_MC_ERR(1b, 2b, %w0)			\
 	: "+r" (err)							\
 	: "r" (x), "r" (addr))
 
diff --git a/arch/arm64/mm/extable.c b/arch/arm64/mm/extable.c
index 525876c3ebf4..1023ccdb2f89 100644
--- a/arch/arm64/mm/extable.c
+++ b/arch/arm64/mm/extable.c
@@ -88,6 +88,7 @@ bool fixup_exception(struct pt_regs *regs)
 	case EX_TYPE_BPF:
 		return ex_handler_bpf(ex, regs);
 	case EX_TYPE_UACCESS_ERR_ZERO:
+	case EX_TYPE_UACCESS_MC_ERR_ZERO:
 		return ex_handler_uaccess_err_zero(ex, regs);
 	case EX_TYPE_LOAD_UNALIGNED_ZEROPAD:
 		return ex_handler_load_unaligned_zeropad(ex, regs);
@@ -107,6 +108,9 @@ bool fixup_exception_mc(struct pt_regs *regs)
 	switch (ex->type) {
 	case EX_TYPE_UACCESS_MC:
 		return ex_handler_uaccess_type(ex, regs, FIXUP_TYPE_MC);
+	case EX_TYPE_UACCESS_MC_ERR_ZERO:
+		return ex_handler_uaccess_err_zero(ex, regs);
+
 	}
 
 	return false;
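For callers, nothing changes: get_user()/put_user() keep their contract
of returning -EFAULT and zeroing the destination on failure; the new
extable type simply extends that contract to cover consumed hardware
memory errors. A hedged sketch of the caller-visible behavior
(hypothetical helper, not from this series):

#include <linux/uaccess.h>
#include <linux/errno.h>
#include <linux/types.h>

/*
 * Hypothetical helper: read a u32 from userspace.  Whether the load
 * hits an unmapped page or consumes a hardware memory error, the
 * EX_TYPE_UACCESS_MC_ERR_ZERO fixup makes get_user() return -EFAULT
 * with `val` zeroed, instead of panicking the kernel.
 */
static int read_user_u32(const u32 __user *uptr, u32 *out)
{
	u32 val;

	if (get_user(val, uptr))
		return -EFAULT;		/* val is zeroed on failure */

	*out = val;
	return 0;
}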
From patchwork Wed Apr 20 03:04:18 2022
From: Tong Tiangen <tongtiangen@huawei.com>
Subject: [PATCH -next v4 7/7] arm64: add cow to machine check safe
Date: Wed, 20 Apr 2022 03:04:18 +0000
Message-ID: <20220420030418.3189040-8-tongtiangen@huawei.com>
In-Reply-To: <20220420030418.3189040-1-tongtiangen@huawei.com>
References: <20220420030418.3189040-1-tongtiangen@huawei.com>

In COW (copy-on-write) processing, the data of the user process is
copied. When a hardware memory error is encountered during the copy,
only the relevant process is affected, so killing the user process and
isolating the faulty user page is a more reasonable choice than a
kernel panic.

Add a new helper, copy_page_mc(), which provides a machine-check-safe
page copy implementation. At present it is only used for COW, but its
scope can be expanded later: as long as the consequences of a page copy
failure are not fatal (e.g. only a user process is affected), this
helper can be used.
copy_page_mc() in copy_page_mc.S largely borrows from copy_page() in
copy_page.S; the main difference is that copy_page_mc() adds an extable
entry for every load/store instruction to support machine check safety.
It is left unoptimized largely to keep the patch simple; if needed,
those optimizations can be folded in later.

Add a new extable type, EX_TYPE_COPY_PAGE_MC, which is used in
copy_page_mc(). This type is only processed in fixup_exception_mc():
copy_page_mc() is consistent with copy_page() except that machine check
safety is considered, and copy_page() does not need exception fixup at
all.

Signed-off-by: Tong Tiangen
---
 arch/arm64/include/asm/asm-extable.h |  5 ++
 arch/arm64/include/asm/page.h        | 10 ++++
 arch/arm64/lib/Makefile              |  2 +
 arch/arm64/lib/copy_page_mc.S        | 86 ++++++++++++++++++++++++++++
 arch/arm64/mm/copypage.c             | 36 ++++++++++--
 arch/arm64/mm/extable.c              |  2 +
 include/linux/highmem.h              |  8 +++
 mm/memory.c                          |  2 +-
 8 files changed, 144 insertions(+), 7 deletions(-)
 create mode 100644 arch/arm64/lib/copy_page_mc.S

diff --git a/arch/arm64/include/asm/asm-extable.h b/arch/arm64/include/asm/asm-extable.h
index 80410899a9ad..74c056ddae15 100644
--- a/arch/arm64/include/asm/asm-extable.h
+++ b/arch/arm64/include/asm/asm-extable.h
@@ -14,6 +14,7 @@
 /* _MC indicates that the entry can fix up machine check errors */
 #define EX_TYPE_UACCESS_MC		5
 #define EX_TYPE_UACCESS_MC_ERR_ZERO	6
+#define EX_TYPE_COPY_PAGE_MC		7
 
 #ifdef __ASSEMBLY__
 
@@ -42,6 +43,10 @@
 	__ASM_EXTABLE_RAW(\insn, \fixup, EX_TYPE_UACCESS_MC, 0)
 	.endm
 
+	.macro		_asm_extable_copy_page_mc, insn, fixup
+	__ASM_EXTABLE_RAW(\insn, \fixup, EX_TYPE_COPY_PAGE_MC, 0)
+	.endm
+
 /*
  * Create an exception table entry for `insn` if `fixup` is provided. Otherwise
  * do nothing.

diff --git a/arch/arm64/include/asm/page.h b/arch/arm64/include/asm/page.h
index 993a27ea6f54..832571a7dddb 100644
--- a/arch/arm64/include/asm/page.h
+++ b/arch/arm64/include/asm/page.h
@@ -29,6 +29,16 @@ void copy_user_highpage(struct page *to, struct page *from,
 void copy_highpage(struct page *to, struct page *from);
 #define __HAVE_ARCH_COPY_HIGHPAGE
 
+#ifdef CONFIG_ARCH_HAS_COPY_MC
+extern void copy_page_mc(void *to, const void *from);
+void copy_highpage_mc(struct page *to, struct page *from);
+#define __HAVE_ARCH_COPY_HIGHPAGE_MC
+
+void copy_user_highpage_mc(struct page *to, struct page *from,
+		unsigned long vaddr, struct vm_area_struct *vma);
+#define __HAVE_ARCH_COPY_USER_HIGHPAGE_MC
+#endif
+
 struct page *alloc_zeroed_user_highpage_movable(struct vm_area_struct *vma,
 						unsigned long vaddr);
 #define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE_MOVABLE

diff --git a/arch/arm64/lib/Makefile b/arch/arm64/lib/Makefile
index 29490be2546b..0d9f292ef68a 100644
--- a/arch/arm64/lib/Makefile
+++ b/arch/arm64/lib/Makefile
@@ -15,6 +15,8 @@ endif
 
 lib-$(CONFIG_ARCH_HAS_UACCESS_FLUSHCACHE) += uaccess_flushcache.o
 
+lib-$(CONFIG_ARCH_HAS_COPY_MC) += copy_page_mc.o
+
 obj-$(CONFIG_CRC32) += crc32.o
 
 obj-$(CONFIG_FUNCTION_ERROR_INJECTION) += error-inject.o

diff --git a/arch/arm64/lib/copy_page_mc.S b/arch/arm64/lib/copy_page_mc.S
new file mode 100644
index 000000000000..655161363dcf
--- /dev/null
+++ b/arch/arm64/lib/copy_page_mc.S
@@ -0,0 +1,86 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2012 ARM Ltd.
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#define CPY_MC(l, x...)			\
+9999:	x;				\
+	_asm_extable_copy_page_mc 9999b, l
+
+/*
+ * Copy a page from src to dest (both are page aligned) with machine check
+ *
+ * Parameters:
+ *	x0 - dest
+ *	x1 - src
+ */
+SYM_FUNC_START(__pi_copy_page_mc)
+alternative_if ARM64_HAS_NO_HW_PREFETCH
+	// Prefetch three cache lines ahead.
+	prfm	pldl1strm, [x1, #128]
+	prfm	pldl1strm, [x1, #256]
+	prfm	pldl1strm, [x1, #384]
+alternative_else_nop_endif
+
+CPY_MC(9998f, ldp x2, x3, [x1])
+CPY_MC(9998f, ldp x4, x5, [x1, #16])
+CPY_MC(9998f, ldp x6, x7, [x1, #32])
+CPY_MC(9998f, ldp x8, x9, [x1, #48])
+CPY_MC(9998f, ldp x10, x11, [x1, #64])
+CPY_MC(9998f, ldp x12, x13, [x1, #80])
+CPY_MC(9998f, ldp x14, x15, [x1, #96])
+CPY_MC(9998f, ldp x16, x17, [x1, #112])
+
+	add	x0, x0, #256
+	add	x1, x1, #128
+1:
+	tst	x0, #(PAGE_SIZE - 1)
+
+alternative_if ARM64_HAS_NO_HW_PREFETCH
+	prfm	pldl1strm, [x1, #384]
+alternative_else_nop_endif
+
+CPY_MC(9998f, stnp x2, x3, [x0, #-256])
+CPY_MC(9998f, ldp x2, x3, [x1])
+CPY_MC(9998f, stnp x4, x5, [x0, #16 - 256])
+CPY_MC(9998f, ldp x4, x5, [x1, #16])
+CPY_MC(9998f, stnp x6, x7, [x0, #32 - 256])
+CPY_MC(9998f, ldp x6, x7, [x1, #32])
+CPY_MC(9998f, stnp x8, x9, [x0, #48 - 256])
+CPY_MC(9998f, ldp x8, x9, [x1, #48])
+CPY_MC(9998f, stnp x10, x11, [x0, #64 - 256])
+CPY_MC(9998f, ldp x10, x11, [x1, #64])
+CPY_MC(9998f, stnp x12, x13, [x0, #80 - 256])
+CPY_MC(9998f, ldp x12, x13, [x1, #80])
+CPY_MC(9998f, stnp x14, x15, [x0, #96 - 256])
+CPY_MC(9998f, ldp x14, x15, [x1, #96])
+CPY_MC(9998f, stnp x16, x17, [x0, #112 - 256])
+CPY_MC(9998f, ldp x16, x17, [x1, #112])
+
+	add	x0, x0, #128
+	add	x1, x1, #128
+
+	b.ne	1b
+
+CPY_MC(9998f, stnp x2, x3, [x0, #-256])
+CPY_MC(9998f, stnp x4, x5, [x0, #16 - 256])
+CPY_MC(9998f, stnp x6, x7, [x0, #32 - 256])
+CPY_MC(9998f, stnp x8, x9, [x0, #48 - 256])
+CPY_MC(9998f, stnp x10, x11, [x0, #64 - 256])
+CPY_MC(9998f, stnp x12, x13, [x0, #80 - 256])
+CPY_MC(9998f, stnp x14, x15, [x0, #96 - 256])
+CPY_MC(9998f, stnp x16, x17, [x0, #112 - 256])
+
+9998:	ret
+
+SYM_FUNC_END(__pi_copy_page_mc)
+SYM_FUNC_ALIAS(copy_page_mc, __pi_copy_page_mc)
+EXPORT_SYMBOL(copy_page_mc)

diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c
index 0dea80bf6de4..0f28edfcb234 100644
--- a/arch/arm64/mm/copypage.c
+++ b/arch/arm64/mm/copypage.c
@@ -14,13 +14,8 @@
 #include
 #include
 
-void copy_highpage(struct page *to, struct page *from)
+static void do_mte(struct page *to, struct page *from, void *kto, void *kfrom)
 {
-	void *kto = page_address(to);
-	void *kfrom = page_address(from);
-
-	copy_page(kto, kfrom);
-
 	if (system_supports_mte() && test_bit(PG_mte_tagged, &from->flags)) {
 		set_bit(PG_mte_tagged, &to->flags);
 		page_kasan_tag_reset(to);
@@ -35,6 +30,15 @@
 		mte_copy_page_tags(kto, kfrom);
 	}
 }
+
+void copy_highpage(struct page *to, struct page *from)
+{
+	void *kto = page_address(to);
+	void *kfrom = page_address(from);
+
+	copy_page(kto, kfrom);
+	do_mte(to, from, kto, kfrom);
+}
 EXPORT_SYMBOL(copy_highpage);
 
 void copy_user_highpage(struct page *to, struct page *from,
@@ -44,3 +48,23 @@
 	flush_dcache_page(to);
 }
 EXPORT_SYMBOL_GPL(copy_user_highpage);
+
+#ifdef CONFIG_ARCH_HAS_COPY_MC
+void copy_highpage_mc(struct page *to, struct page *from)
+{
+	void *kto = page_address(to);
+	void *kfrom = page_address(from);
+
+	copy_page_mc(kto, kfrom);
+	do_mte(to, from, kto, kfrom);
+}
+EXPORT_SYMBOL(copy_highpage_mc);
+
+void copy_user_highpage_mc(struct page *to, struct page *from,
+		unsigned long vaddr, struct vm_area_struct *vma)
+{
+	copy_highpage_mc(to, from);
+	flush_dcache_page(to);
+}
+EXPORT_SYMBOL_GPL(copy_user_highpage_mc);
+#endif

diff --git a/arch/arm64/mm/extable.c b/arch/arm64/mm/extable.c
index 1023ccdb2f89..4c882d36dd64 100644
--- a/arch/arm64/mm/extable.c
+++ b/arch/arm64/mm/extable.c
@@ -110,6 +110,8 @@ bool fixup_exception_mc(struct pt_regs *regs)
 		return ex_handler_uaccess_type(ex, regs, FIXUP_TYPE_MC);
 	case EX_TYPE_UACCESS_MC_ERR_ZERO:
 		return ex_handler_uaccess_err_zero(ex, regs);
+	case EX_TYPE_COPY_PAGE_MC:
+		return ex_handler_fixup(ex, regs);
 	}
 
 	return false;

diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index 39bb9b47fa9c..a9dbf331b038 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -283,6 +283,10 @@ static inline void copy_user_highpage(struct page *to, struct page *from,
 
 #endif
 
+#ifndef __HAVE_ARCH_COPY_USER_HIGHPAGE_MC
+#define copy_user_highpage_mc copy_user_highpage
+#endif
+
 #ifndef __HAVE_ARCH_COPY_HIGHPAGE
 
 static inline void copy_highpage(struct page *to, struct page *from)
@@ -298,6 +302,10 @@
 
 #endif
 
+#ifndef __HAVE_ARCH_COPY_HIGHPAGE_MC
+#define copy_highpage_mc copy_highpage
+#endif
+
 static inline void memcpy_page(struct page *dst_page, size_t dst_off,
 			       struct page *src_page, size_t src_off,
 			       size_t len)

diff --git a/mm/memory.c b/mm/memory.c
index 76e3af9639d9..d5f62234152d 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2767,7 +2767,7 @@ static inline bool cow_user_page(struct page *dst, struct page *src,
 	unsigned long addr = vmf->address;
 
 	if (likely(src)) {
-		copy_user_highpage(dst, src, addr, vma);
+		copy_user_highpage_mc(dst, src, addr, vma);
 		return true;
 	}
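The highmem.h fallbacks mean generic code such as cow_user_page() can
call copy_user_highpage_mc()/copy_highpage_mc() unconditionally: on
arm64 with CONFIG_ARCH_HAS_COPY_MC they resolve to the machine-check-safe
versions, everywhere else they alias the plain copies. A standalone
model of that selection, using hypothetical demo_* names (compile with
-DHAVE_MC_SAFE_ARCH to take the "arch" path):

#include <stdio.h>

/* "arch" side: present only when the arch has a machine-check-safe copy */
#ifdef HAVE_MC_SAFE_ARCH
static void arch_copy_mc(void *to, const void *from)
{
	(void)to; (void)from;
	puts("machine-check-safe copy (extable-protected)");
}
#define demo_copy_mc arch_copy_mc
#define __HAVE_DEMO_COPY_MC
#endif

/* "generic" side: alias the plain copy when no override exists */
static void demo_copy(void *to, const void *from)
{
	(void)to; (void)from;
	puts("plain copy (a memory error here would be fatal)");
}

#ifndef __HAVE_DEMO_COPY_MC
#define demo_copy_mc demo_copy
#endif

int main(void)
{
	char dst[8], src[8] = "cow";

	/* generic code (like cow_user_page()) calls the _mc name directly */
	demo_copy_mc(dst, src);
	return 0;
}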