From patchwork Tue May 28 08:59:10 2024
X-Patchwork-Submitter: Tong Tiangen
X-Patchwork-Id: 13676346
From: Tong Tiangen
To: Mark Rutland, Catalin Marinas, Will Deacon, Andrew Morton, James Morse,
    Robin Murphy, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino,
    Michael Ellerman, Nicholas Piggin, Andrey Ryabinin, Alexander Potapenko,
    Christophe Leroy, Aneesh Kumar K.V, "Naveen N. Rao", Thomas Gleixner,
    Ingo Molnar, Borislav Petkov, Dave Hansen, "H. Peter Anvin"
Cc: Tong Tiangen, Guohanjun
Subject: [PATCH v12 1/6] uaccess: add generic fallback version of copy_mc_to_user()
Date: Tue, 28 May 2024 16:59:10 +0800
Message-ID: <20240528085915.1955987-2-tongtiangen@huawei.com>
In-Reply-To: <20240528085915.1955987-1-tongtiangen@huawei.com>
References: <20240528085915.1955987-1-tongtiangen@huawei.com>

x86 and powerpc each have their own implementation of copy_mc_to_user().
Add a generic fallback in include/linux/uaccess.h to prepare for other
architectures to enable CONFIG_ARCH_HAS_COPY_MC.

Signed-off-by: Tong Tiangen
Acked-by: Michael Ellerman
Reviewed-by: Mauro Carvalho Chehab
Reviewed-by: Jonathan Cameron
---
 arch/powerpc/include/asm/uaccess.h | 1 +
 arch/x86/include/asm/uaccess.h     | 1 +
 include/linux/uaccess.h            | 8 ++++++++
 3 files changed, 10 insertions(+)

diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
index de10437fd206..df42e6ad647f 100644
--- a/arch/powerpc/include/asm/uaccess.h
+++ b/arch/powerpc/include/asm/uaccess.h
@@ -381,6 +381,7 @@ copy_mc_to_user(void __user *to, const void *from, unsigned long n)
 
 	return n;
 }
+#define copy_mc_to_user copy_mc_to_user
 #endif
 
 extern long __copy_from_user_flushcache(void *dst, const void __user *src,

diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
index 0f9bab92a43d..309f2439327e 100644
--- a/arch/x86/include/asm/uaccess.h
+++ b/arch/x86/include/asm/uaccess.h
@@ -497,6 +497,7 @@ copy_mc_to_kernel(void *to, const void *from, unsigned len);
 
 unsigned long __must_check
 copy_mc_to_user(void __user *to, const void *from, unsigned len);
+#define copy_mc_to_user copy_mc_to_user
 #endif
 
 /*

diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
index 3064314f4832..0dfa9241b6ee 100644
--- a/include/linux/uaccess.h
+++ b/include/linux/uaccess.h
@@ -205,6 +205,14 @@ copy_mc_to_kernel(void *dst, const void *src, size_t cnt)
 }
 #endif
 
+#ifndef copy_mc_to_user
+static inline unsigned long __must_check
+copy_mc_to_user(void *dst, const void *src, size_t cnt)
+{
+	return copy_to_user(dst, src, cnt);
+}
+#endif
+
 static __always_inline void pagefault_disabled_inc(void)
 {
 	current->pagefault_disabled++;
From patchwork Tue May 28 08:59:11 2024
X-Patchwork-Submitter: Tong Tiangen
X-Patchwork-Id: 13676347
From: Tong Tiangen
Subject: [PATCH v12 2/6] arm64: add support for ARCH_HAS_COPY_MC
Date: Tue, 28 May 2024 16:59:11 +0800
Message-ID: <20240528085915.1955987-3-tongtiangen@huawei.com>
In-Reply-To: <20240528085915.1955987-1-tongtiangen@huawei.com>
References: <20240528085915.1955987-1-tongtiangen@huawei.com>

When the arm64 kernel handles a hardware memory error reported as a
synchronous notification (do_sea()), the current behaviour is to panic if
the error is consumed within the kernel. That is not optimal. Take
copy_from/to_user() as an example: if an ld* instruction triggers a memory
error, only the associated process is affected, even though the access
happens in kernel mode. Killing the user process and isolating the corrupt
page is a better choice.

A new fixup type, EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE, is added to identify
instructions that can recover from memory errors triggered by accesses to
kernel memory.

Signed-off-by: Tong Tiangen
---
 arch/arm64/Kconfig                   |  1 +
 arch/arm64/include/asm/asm-extable.h | 31 +++++++++++++++++++++++-----
 arch/arm64/include/asm/asm-uaccess.h |  4 ++++
 arch/arm64/include/asm/extable.h     |  1 +
 arch/arm64/lib/copy_to_user.S        | 10 ++++-----
 arch/arm64/mm/extable.c              | 19 +++++++++++++++++
 arch/arm64/mm/fault.c                | 27 +++++++++++++++++-------
 7 files changed, 75 insertions(+), 18 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 5d91259ee7b5..13ca06ddf3dd 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -20,6 +20,7 @@ config ARM64
 	select ARCH_ENABLE_SPLIT_PMD_PTLOCK if PGTABLE_LEVELS > 2
 	select ARCH_ENABLE_THP_MIGRATION if TRANSPARENT_HUGEPAGE
 	select ARCH_HAS_CACHE_LINE_SIZE
+	select ARCH_HAS_COPY_MC if ACPI_APEI_GHES
 	select ARCH_HAS_CURRENT_STACK_POINTER
 	select ARCH_HAS_DEBUG_VIRTUAL
 	select ARCH_HAS_DEBUG_VM_PGTABLE

diff --git a/arch/arm64/include/asm/asm-extable.h b/arch/arm64/include/asm/asm-extable.h
index 980d1dd8e1a3..9c0664fe1eb1 100644
--- a/arch/arm64/include/asm/asm-extable.h
+++ b/arch/arm64/include/asm/asm-extable.h
@@ -5,11 +5,13 @@
 #include
 #include
 
-#define EX_TYPE_NONE			0
-#define EX_TYPE_BPF			1
-#define EX_TYPE_UACCESS_ERR_ZERO	2
-#define EX_TYPE_KACCESS_ERR_ZERO	3
-#define EX_TYPE_LOAD_UNALIGNED_ZEROPAD	4
+#define EX_TYPE_NONE				0
+#define EX_TYPE_BPF				1
+#define EX_TYPE_UACCESS_ERR_ZERO		2
+#define EX_TYPE_KACCESS_ERR_ZERO		3
+#define EX_TYPE_LOAD_UNALIGNED_ZEROPAD		4
+/* kernel access memory error safe */
+#define EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE	5
 
 /* Data fields for EX_TYPE_UACCESS_ERR_ZERO */
 #define EX_DATA_REG_ERR_SHIFT	0
@@ -51,6 +53,17 @@
 #define _ASM_EXTABLE_UACCESS(insn, fixup)				\
 	_ASM_EXTABLE_UACCESS_ERR_ZERO(insn, fixup, wzr, wzr)
 
+#define _ASM_EXTABLE_KACCESS_ERR_ZERO_ME_SAFE(insn, fixup, err, zero)	\
+	__ASM_EXTABLE_RAW(insn, fixup,					\
+			  EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE,		\
+			  (						\
+			    EX_DATA_REG(ERR, err) |			\
+			    EX_DATA_REG(ZERO, zero)			\
+			  ))
+
+#define _ASM_EXTABLE_KACCESS_ME_SAFE(insn, fixup)			\
+	_ASM_EXTABLE_KACCESS_ERR_ZERO_ME_SAFE(insn, fixup, wzr, wzr)
+
 /*
  * Create an exception table entry for uaccess `insn`, which will branch to `fixup`
  * when an unhandled fault is taken.
@@ -69,6 +82,14 @@
 	.endif
 	.endm
 
+/*
+ * Create an exception table entry for kaccess me(memory error) safe `insn`, which
+ * will branch to `fixup` when an unhandled fault is taken.
+ */
+	.macro		_asm_extable_kaccess_me_safe, insn, fixup
+	_ASM_EXTABLE_KACCESS_ME_SAFE(\insn, \fixup)
+	.endm
+
 #else /* __ASSEMBLY__ */
 
 #include

diff --git a/arch/arm64/include/asm/asm-uaccess.h b/arch/arm64/include/asm/asm-uaccess.h
index 5b6efe8abeeb..7bbebfa5b710 100644
--- a/arch/arm64/include/asm/asm-uaccess.h
+++ b/arch/arm64/include/asm/asm-uaccess.h
@@ -57,6 +57,10 @@ alternative_else_nop_endif
 	.endm
 #endif
 
+#define KERNEL_ME_SAFE(l, x...)				\
+9999:	x;						\
+	_asm_extable_kaccess_me_safe	9999b, l
+
 #define USER(l, x...)					\
 9999:	x;						\
 	_asm_extable_uaccess	9999b, l

diff --git a/arch/arm64/include/asm/extable.h b/arch/arm64/include/asm/extable.h
index 72b0e71cc3de..bc49443bc502 100644
--- a/arch/arm64/include/asm/extable.h
+++ b/arch/arm64/include/asm/extable.h
@@ -46,4 +46,5 @@ bool ex_handler_bpf(const struct exception_table_entry *ex,
 #endif /* !CONFIG_BPF_JIT */
 
 bool fixup_exception(struct pt_regs *regs);
+bool fixup_exception_me(struct pt_regs *regs);
 #endif

diff --git a/arch/arm64/lib/copy_to_user.S b/arch/arm64/lib/copy_to_user.S
index 802231772608..2ac716c0d6d8 100644
--- a/arch/arm64/lib/copy_to_user.S
+++ b/arch/arm64/lib/copy_to_user.S
@@ -20,7 +20,7 @@
  *	x0 - bytes not copied
  */
 	.macro ldrb1 reg, ptr, val
-	ldrb  \reg, [\ptr], \val
+	KERNEL_ME_SAFE(9998f, ldrb \reg, [\ptr], \val)
 	.endm
 
 	.macro strb1 reg, ptr, val
@@ -28,7 +28,7 @@
 	.endm
 
 	.macro ldrh1 reg, ptr, val
-	ldrh  \reg, [\ptr], \val
+	KERNEL_ME_SAFE(9998f, ldrh \reg, [\ptr], \val)
 	.endm
 
 	.macro strh1 reg, ptr, val
@@ -36,7 +36,7 @@
 	.endm
 
 	.macro ldr1 reg, ptr, val
-	ldr \reg, [\ptr], \val
+	KERNEL_ME_SAFE(9998f, ldr \reg, [\ptr], \val)
 	.endm
 
 	.macro str1 reg, ptr, val
@@ -44,7 +44,7 @@
 	.endm
 
 	.macro ldp1 reg1, reg2, ptr, val
-	ldp \reg1, \reg2, [\ptr], \val
+	KERNEL_ME_SAFE(9998f, ldp \reg1, \reg2, [\ptr], \val)
 	.endm
 
 	.macro stp1 reg1, reg2, ptr, val
@@ -64,7 +64,7 @@ SYM_FUNC_START(__arch_copy_to_user)
 9997:	cmp	dst, dstin
 	b.ne	9998f
 	// Before being absolutely sure we couldn't copy anything, try harder
-	ldrb	tmp1w, [srcin]
+KERNEL_ME_SAFE(9998f, ldrb tmp1w, [srcin])
 USER(9998f, sttrb tmp1w, [dst])
 	add	dst, dst, #1
 9998:	sub	x0, end, dst			// bytes not copied

diff --git a/arch/arm64/mm/extable.c b/arch/arm64/mm/extable.c
index 228d681a8715..8c690ae61944 100644
--- a/arch/arm64/mm/extable.c
+++ b/arch/arm64/mm/extable.c
@@ -72,7 +72,26 @@ bool fixup_exception(struct pt_regs *regs)
 		return ex_handler_uaccess_err_zero(ex, regs);
 	case EX_TYPE_LOAD_UNALIGNED_ZEROPAD:
 		return ex_handler_load_unaligned_zeropad(ex, regs);
+	case EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE:
+		return false;
 	}
 
 	BUG();
 }
+
+bool fixup_exception_me(struct pt_regs *regs)
+{
+	const struct exception_table_entry *ex;
+
+	ex = search_exception_tables(instruction_pointer(regs));
+	if (!ex)
+		return false;
+
+	switch (ex->type) {
+	case EX_TYPE_UACCESS_ERR_ZERO:
+	case EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE:
+		return ex_handler_uaccess_err_zero(ex, regs);
+	}
+
+	return false;
+}

diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 451ba7cbd5ad..2dc65f99d389 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -708,21 +708,32 @@ static int do_bad(unsigned long far, unsigned long esr, struct pt_regs *regs)
 	return 1; /* "fault" */
 }
 
+/*
+ * APEI claimed this as a firmware-first notification.
+ * Some processing deferred to task_work before ret_to_user().
+ */
+static bool do_apei_claim_sea(struct pt_regs *regs)
+{
+	if (user_mode(regs)) {
+		if (!apei_claim_sea(regs))
+			return true;
+	} else if (IS_ENABLED(CONFIG_ARCH_HAS_COPY_MC)) {
+		if (fixup_exception_me(regs) && !apei_claim_sea(regs))
+			return true;
+	}
+
+	return false;
+}
+
 static int do_sea(unsigned long far, unsigned long esr, struct pt_regs *regs)
 {
 	const struct fault_info *inf;
 	unsigned long siaddr;
 
-	inf = esr_to_fault_info(esr);
-
-	if (user_mode(regs) && apei_claim_sea(regs) == 0) {
-		/*
-		 * APEI claimed this as a firmware-first notification.
-		 * Some processing deferred to task_work before ret_to_user().
-		 */
+	if (do_apei_claim_sea(regs))
 		return 0;
-	}
 
+	inf = esr_to_fault_info(esr);
 	if (esr & ESR_ELx_FnV) {
 		siaddr = 0;
 	} else {
Peter Anvin" CC: , , , , Tong Tiangen , , Guohanjun Subject: [PATCH v12 3/6] mm/hwpoison: return -EFAULT when copy fail in copy_mc_[user]_highpage() Date: Tue, 28 May 2024 16:59:12 +0800 Message-ID: <20240528085915.1955987-4-tongtiangen@huawei.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20240528085915.1955987-1-tongtiangen@huawei.com> References: <20240528085915.1955987-1-tongtiangen@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.175.112.125] X-ClientProxiedBy: dggems705-chm.china.huawei.com (10.3.19.182) To kwepemm600017.china.huawei.com (7.193.23.234) X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20240528_015929_735047_0A8EEEC8 X-CRM114-Status: GOOD ( 17.08 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org If hardware errors are encountered during page copying, returning the bytes not copied is not meaningful, and the caller cannot do any processing on the remaining data. Returning -EFAULT is more reasonable, which represents a hardware error encountered during the copying. Signed-off-by: Tong Tiangen Reviewed-by: Jonathan Cameron --- include/linux/highmem.h | 8 ++++---- mm/khugepaged.c | 4 ++-- 2 files changed, 6 insertions(+), 6 deletions(-) diff --git a/include/linux/highmem.h b/include/linux/highmem.h index 00341b56d291..64a567d5ad6f 100644 --- a/include/linux/highmem.h +++ b/include/linux/highmem.h @@ -335,8 +335,8 @@ static inline void copy_highpage(struct page *to, struct page *from) /* * If architecture supports machine check exception handling, define the * #MC versions of copy_user_highpage and copy_highpage. They copy a memory - * page with #MC in source page (@from) handled, and return the number - * of bytes not copied if there was a #MC, otherwise 0 for success. + * page with #MC in source page (@from) handled, and return -EFAULT if there + * was a #MC, otherwise 0 for success. */ static inline int copy_mc_user_highpage(struct page *to, struct page *from, unsigned long vaddr, struct vm_area_struct *vma) @@ -352,7 +352,7 @@ static inline int copy_mc_user_highpage(struct page *to, struct page *from, kunmap_local(vto); kunmap_local(vfrom); - return ret; + return ret ? -EFAULT : 0; } static inline int copy_mc_highpage(struct page *to, struct page *from) @@ -368,7 +368,7 @@ static inline int copy_mc_highpage(struct page *to, struct page *from) kunmap_local(vto); kunmap_local(vfrom); - return ret; + return ret ? 
-EFAULT : 0; } #else static inline int copy_mc_user_highpage(struct page *to, struct page *from, diff --git a/mm/khugepaged.c b/mm/khugepaged.c index 774a97e6e2da..cce838e85967 100644 --- a/mm/khugepaged.c +++ b/mm/khugepaged.c @@ -798,7 +798,7 @@ static int __collapse_huge_page_copy(pte_t *pte, struct folio *folio, continue; } src_page = pte_page(pteval); - if (copy_mc_user_highpage(page, src_page, src_addr, vma) > 0) { + if (copy_mc_user_highpage(page, src_page, src_addr, vma)) { result = SCAN_COPY_MC; break; } @@ -2042,7 +2042,7 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr, index++; dst++; } - if (copy_mc_highpage(dst, folio_page(folio, 0)) > 0) { + if (copy_mc_highpage(dst, folio_page(folio, 0))) { result = SCAN_COPY_MC; goto rollback; } From patchwork Tue May 28 08:59:13 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tong Tiangen X-Patchwork-Id: 13676350 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 3AE34C25B78 for ; Tue, 28 May 2024 09:00:19 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-ID:Date:Subject:CC:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=lQ9JsWvJq/LuCrQ8iRY8C37KaJR1ysztbcbYXwDzbjk=; b=wSZHUbUzsPxVKq gzMB4jbtbBR1cQ0/An7NkMFuBjCZTkfB7jcO7upi/pA8AUyi9T7zi10NVJ07JbnPNaj43YguaHahu +hmx/PqdoFBYyd6re08Sc4pyh2QOo+MJlZT6+J+GoM2OIbNMjYffqW8QdIObyzTuDw3cXQwUssV2q 4CDtVRe6Br2pRM9qu0iPCcwL6d+JPF3Gm7rn6ERjoJWN/EYex0wh5pc+fZmNyFF1jmRLxWe0Ds4qp DnOOjvmB+m9P5xTznV8DysAyZ6Xblcg5K3fJjXnIp4RKVto9NRqIuAdyXdYGubSX3NfopVzqm/0on GJ47lid2IqAKw+Df/DvA==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.97.1 #2 (Red Hat Linux)) id 1sBsgi-0000000HZkY-3bMy; Tue, 28 May 2024 09:00:08 +0000 Received: from szxga07-in.huawei.com ([45.249.212.35]) by bombadil.infradead.org with esmtps (Exim 4.97.1 #2 (Red Hat Linux)) id 1sBsg5-0000000HZM5-1H8q for linux-arm-kernel@lists.infradead.org; Tue, 28 May 2024 08:59:31 +0000 Received: from mail.maildlp.com (unknown [172.19.88.214]) by szxga07-in.huawei.com (SkyGuard) with ESMTP id 4VpRF371t1z1S5xH; Tue, 28 May 2024 16:55:51 +0800 (CST) Received: from kwepemm600017.china.huawei.com (unknown [7.193.23.234]) by mail.maildlp.com (Postfix) with ESMTPS id 83FFC1A016C; Tue, 28 May 2024 16:59:26 +0800 (CST) Received: from localhost.localdomain (10.175.112.125) by kwepemm600017.china.huawei.com (7.193.23.234) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.1.2507.39; Tue, 28 May 2024 16:59:24 +0800 From: Tong Tiangen To: Mark Rutland , Catalin Marinas , Will Deacon , Andrew Morton , James Morse , Robin Murphy , Andrey Konovalov , Dmitry Vyukov , Vincenzo Frascino , Michael Ellerman , Nicholas Piggin , Andrey Ryabinin , Alexander Potapenko , Christophe Leroy , Aneesh Kumar K.V , "Naveen N. 
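For callers the practical effect is that the result is a plain error code rather than a residual byte count. A minimal sketch of the intended call-site pattern (illustrative only, loosely mirroring the khugepaged hunks above):

#include <linux/highmem.h>
#include <linux/errno.h>

/* Illustrative helper: copy one page with machine-check handling. */
static int example_copy_one_page(struct page *dst, struct page *src,
				 unsigned long addr, struct vm_area_struct *vma)
{
	int err = copy_mc_user_highpage(dst, src, addr, vma);

	/* err is now 0 or -EFAULT; there is no partial count to interpret. */
	if (err)
		return err;	/* abort and let memory_failure() deal with src */

	return 0;
}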
Rao" , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Dave Hansen , , "H. Peter Anvin" CC: , , , , Tong Tiangen , , Guohanjun Subject: [PATCH v12 4/6] arm64: support copy_mc_[user]_highpage() Date: Tue, 28 May 2024 16:59:13 +0800 Message-ID: <20240528085915.1955987-5-tongtiangen@huawei.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20240528085915.1955987-1-tongtiangen@huawei.com> References: <20240528085915.1955987-1-tongtiangen@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.175.112.125] X-ClientProxiedBy: dggems705-chm.china.huawei.com (10.3.19.182) To kwepemm600017.china.huawei.com (7.193.23.234) X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20240528_015929_711039_E84BA2A8 X-CRM114-Status: GOOD ( 26.88 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Currently, many scenarios that can tolerate memory errors when copying page have been supported in the kernel[1~5], all of which are implemented by copy_mc_[user]_highpage(). arm64 should also support this mechanism. Due to mte, arm64 needs to have its own copy_mc_[user]_highpage() architecture implementation, macros __HAVE_ARCH_COPY_MC_HIGHPAGE and __HAVE_ARCH_COPY_MC_USER_HIGHPAGE have been added to control it. Add new helper copy_mc_page() which provide a page copy implementation with hardware memory error safe. The code logic of copy_mc_page() is the same as copy_page(), the main difference is that the ldp insn of copy_mc_page() contains the fixup type EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE, therefore, the main logic is extracted to copy_page_template.S. 
[1] commit d302c2398ba2 ("mm, hwpoison: when copy-on-write hits poison, take page offline") [2] commit 1cb9dc4b475c ("mm: hwpoison: support recovery from HugePage copy-on-write faults") [3] commit 6b970599e807 ("mm: hwpoison: support recovery from ksm_might_need_to_copy()") [4] commit 98c76c9f1ef7 ("mm/khugepaged: recover from poisoned anonymous memory") [5] commit 12904d953364 ("mm/khugepaged: recover from poisoned file-backed memory") Signed-off-by: Tong Tiangen --- arch/arm64/include/asm/mte.h | 9 +++++ arch/arm64/include/asm/page.h | 10 ++++++ arch/arm64/lib/Makefile | 2 ++ arch/arm64/lib/copy_mc_page.S | 35 ++++++++++++++++++ arch/arm64/lib/copy_page.S | 50 +++----------------------- arch/arm64/lib/copy_page_template.S | 56 +++++++++++++++++++++++++++++ arch/arm64/lib/mte.S | 29 +++++++++++++++ arch/arm64/mm/copypage.c | 45 +++++++++++++++++++++++ include/linux/highmem.h | 8 +++++ 9 files changed, 199 insertions(+), 45 deletions(-) create mode 100644 arch/arm64/lib/copy_mc_page.S create mode 100644 arch/arm64/lib/copy_page_template.S diff --git a/arch/arm64/include/asm/mte.h b/arch/arm64/include/asm/mte.h index 91fbd5c8a391..dc68337c2623 100644 --- a/arch/arm64/include/asm/mte.h +++ b/arch/arm64/include/asm/mte.h @@ -92,6 +92,11 @@ static inline bool try_page_mte_tagging(struct page *page) void mte_zero_clear_page_tags(void *addr); void mte_sync_tags(pte_t pte, unsigned int nr_pages); void mte_copy_page_tags(void *kto, const void *kfrom); + +#ifdef CONFIG_ARCH_HAS_COPY_MC +int mte_copy_mc_page_tags(void *kto, const void *kfrom); +#endif + void mte_thread_init_user(void); void mte_thread_switch(struct task_struct *next); void mte_cpu_setup(void); @@ -128,6 +133,10 @@ static inline void mte_sync_tags(pte_t pte, unsigned int nr_pages) static inline void mte_copy_page_tags(void *kto, const void *kfrom) { } +static inline int mte_copy_mc_page_tags(void *kto, const void *kfrom) +{ + return 0; +} static inline void mte_thread_init_user(void) { } diff --git a/arch/arm64/include/asm/page.h b/arch/arm64/include/asm/page.h index 2312e6ee595f..304cc86b8a10 100644 --- a/arch/arm64/include/asm/page.h +++ b/arch/arm64/include/asm/page.h @@ -29,6 +29,16 @@ void copy_user_highpage(struct page *to, struct page *from, void copy_highpage(struct page *to, struct page *from); #define __HAVE_ARCH_COPY_HIGHPAGE +#ifdef CONFIG_ARCH_HAS_COPY_MC +int copy_mc_page(void *to, const void *from); +int copy_mc_highpage(struct page *to, struct page *from); +#define __HAVE_ARCH_COPY_MC_HIGHPAGE + +int copy_mc_user_highpage(struct page *to, struct page *from, + unsigned long vaddr, struct vm_area_struct *vma); +#define __HAVE_ARCH_COPY_MC_USER_HIGHPAGE +#endif + struct folio *vma_alloc_zeroed_movable_folio(struct vm_area_struct *vma, unsigned long vaddr); #define vma_alloc_zeroed_movable_folio vma_alloc_zeroed_movable_folio diff --git a/arch/arm64/lib/Makefile b/arch/arm64/lib/Makefile index 13e6a2829116..65ec3d24d32d 100644 --- a/arch/arm64/lib/Makefile +++ b/arch/arm64/lib/Makefile @@ -13,6 +13,8 @@ endif lib-$(CONFIG_ARCH_HAS_UACCESS_FLUSHCACHE) += uaccess_flushcache.o +lib-$(CONFIG_ARCH_HAS_COPY_MC) += copy_mc_page.o + obj-$(CONFIG_CRC32) += crc32.o obj-$(CONFIG_FUNCTION_ERROR_INJECTION) += error-inject.o diff --git a/arch/arm64/lib/copy_mc_page.S b/arch/arm64/lib/copy_mc_page.S new file mode 100644 index 000000000000..7e633a6aa89f --- /dev/null +++ b/arch/arm64/lib/copy_mc_page.S @@ -0,0 +1,35 @@ +#include +#include +#include +#include +#include +#include +#include +#include + +/* + * Copy a page from src to dest 
(both are page aligned) with memory error safe + * + * Parameters: + * x0 - dest + * x1 - src + * Returns: + * x0 - Return 0 if copy success, or -EFAULT if anything goes wrong + * while copying. + */ + .macro ldp1 reg1, reg2, ptr, val + KERNEL_ME_SAFE(9998f, ldp \reg1, \reg2, [\ptr, \val]) + .endm + +SYM_FUNC_START(__pi_copy_mc_page) +#include "copy_page_template.S" + + mov x0, #0 + ret + +9998: mov x0, #-EFAULT + ret + +SYM_FUNC_END(__pi_copy_mc_page) +SYM_FUNC_ALIAS(copy_mc_page, __pi_copy_mc_page) +EXPORT_SYMBOL(copy_mc_page) diff --git a/arch/arm64/lib/copy_page.S b/arch/arm64/lib/copy_page.S index 6a56d7cf309d..5499f507bb75 100644 --- a/arch/arm64/lib/copy_page.S +++ b/arch/arm64/lib/copy_page.S @@ -17,52 +17,12 @@ * x0 - dest * x1 - src */ -SYM_FUNC_START(__pi_copy_page) - ldp x2, x3, [x1] - ldp x4, x5, [x1, #16] - ldp x6, x7, [x1, #32] - ldp x8, x9, [x1, #48] - ldp x10, x11, [x1, #64] - ldp x12, x13, [x1, #80] - ldp x14, x15, [x1, #96] - ldp x16, x17, [x1, #112] - - add x0, x0, #256 - add x1, x1, #128 -1: - tst x0, #(PAGE_SIZE - 1) - - stnp x2, x3, [x0, #-256] - ldp x2, x3, [x1] - stnp x4, x5, [x0, #16 - 256] - ldp x4, x5, [x1, #16] - stnp x6, x7, [x0, #32 - 256] - ldp x6, x7, [x1, #32] - stnp x8, x9, [x0, #48 - 256] - ldp x8, x9, [x1, #48] - stnp x10, x11, [x0, #64 - 256] - ldp x10, x11, [x1, #64] - stnp x12, x13, [x0, #80 - 256] - ldp x12, x13, [x1, #80] - stnp x14, x15, [x0, #96 - 256] - ldp x14, x15, [x1, #96] - stnp x16, x17, [x0, #112 - 256] - ldp x16, x17, [x1, #112] - - add x0, x0, #128 - add x1, x1, #128 - - b.ne 1b - - stnp x2, x3, [x0, #-256] - stnp x4, x5, [x0, #16 - 256] - stnp x6, x7, [x0, #32 - 256] - stnp x8, x9, [x0, #48 - 256] - stnp x10, x11, [x0, #64 - 256] - stnp x12, x13, [x0, #80 - 256] - stnp x14, x15, [x0, #96 - 256] - stnp x16, x17, [x0, #112 - 256] + .macro ldp1 reg1, reg2, ptr, val + ldp \reg1, \reg2, [\ptr, \val] + .endm +SYM_FUNC_START(__pi_copy_page) +#include "copy_page_template.S" ret SYM_FUNC_END(__pi_copy_page) SYM_FUNC_ALIAS(copy_page, __pi_copy_page) diff --git a/arch/arm64/lib/copy_page_template.S b/arch/arm64/lib/copy_page_template.S new file mode 100644 index 000000000000..b3ddec2c7a27 --- /dev/null +++ b/arch/arm64/lib/copy_page_template.S @@ -0,0 +1,56 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2012 ARM Ltd. 
+ */ + +/* + * Copy a page from src to dest (both are page aligned) + * + * Parameters: + * x0 - dest + * x1 - src + */ + ldp1 x2, x3, x1, #0 + ldp1 x4, x5, x1, #16 + ldp1 x6, x7, x1, #32 + ldp1 x8, x9, x1, #48 + ldp1 x10, x11, x1, #64 + ldp1 x12, x13, x1, #80 + ldp1 x14, x15, x1, #96 + ldp1 x16, x17, x1, #112 + + add x0, x0, #256 + add x1, x1, #128 +1: + tst x0, #(PAGE_SIZE - 1) + + stnp x2, x3, [x0, #-256] + ldp1 x2, x3, x1, #0 + stnp x4, x5, [x0, #16 - 256] + ldp1 x4, x5, x1, #16 + stnp x6, x7, [x0, #32 - 256] + ldp1 x6, x7, x1, #32 + stnp x8, x9, [x0, #48 - 256] + ldp1 x8, x9, x1, #48 + stnp x10, x11, [x0, #64 - 256] + ldp1 x10, x11, x1, #64 + stnp x12, x13, [x0, #80 - 256] + ldp1 x12, x13, x1, #80 + stnp x14, x15, [x0, #96 - 256] + ldp1 x14, x15, x1, #96 + stnp x16, x17, [x0, #112 - 256] + ldp1 x16, x17, x1, #112 + + add x0, x0, #128 + add x1, x1, #128 + + b.ne 1b + + stnp x2, x3, [x0, #-256] + stnp x4, x5, [x0, #16 - 256] + stnp x6, x7, [x0, #32 - 256] + stnp x8, x9, [x0, #48 - 256] + stnp x10, x11, [x0, #64 - 256] + stnp x12, x13, [x0, #80 - 256] + stnp x14, x15, [x0, #96 - 256] + stnp x16, x17, [x0, #112 - 256] diff --git a/arch/arm64/lib/mte.S b/arch/arm64/lib/mte.S index 5018ac03b6bf..50ef24318281 100644 --- a/arch/arm64/lib/mte.S +++ b/arch/arm64/lib/mte.S @@ -80,6 +80,35 @@ SYM_FUNC_START(mte_copy_page_tags) ret SYM_FUNC_END(mte_copy_page_tags) +#ifdef CONFIG_ARCH_HAS_COPY_MC +/* + * Copy the tags from the source page to the destination one wiht machine check safe + * x0 - address of the destination page + * x1 - address of the source page + * Returns: + * x0 - Return 0 if copy success, or + * -EFAULT if anything goes wrong while copying. + */ +SYM_FUNC_START(mte_copy_mc_page_tags) + mov x2, x0 + mov x3, x1 + multitag_transfer_size x5, x6 +1: +KERNEL_ME_SAFE(2f, ldgm x4, [x3]) + stgm x4, [x2] + add x2, x2, x5 + add x3, x3, x5 + tst x2, #(PAGE_SIZE - 1) + b.ne 1b + + mov x0, #0 + ret + +2: mov x0, #-EFAULT + ret +SYM_FUNC_END(mte_copy_mc_page_tags) +#endif + /* * Read tags from a user buffer (one tag per byte) and set the corresponding * tags at the given kernel address. Used by PTRACE_POKEMTETAGS. diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c index a7bb20055ce0..ff0d9ceea2a4 100644 --- a/arch/arm64/mm/copypage.c +++ b/arch/arm64/mm/copypage.c @@ -40,3 +40,48 @@ void copy_user_highpage(struct page *to, struct page *from, flush_dcache_page(to); } EXPORT_SYMBOL_GPL(copy_user_highpage); + +#ifdef CONFIG_ARCH_HAS_COPY_MC +/* + * Return -EFAULT if anything goes wrong while copying page or mte. 
+ */
+int copy_mc_highpage(struct page *to, struct page *from)
+{
+	void *kto = page_address(to);
+	void *kfrom = page_address(from);
+	int ret;
+
+	ret = copy_mc_page(kto, kfrom);
+	if (ret)
+		return -EFAULT;
+
+	if (kasan_hw_tags_enabled())
+		page_kasan_tag_reset(to);
+
+	if (system_supports_mte() && page_mte_tagged(from)) {
+		/* It's a new page, shouldn't have been tagged yet */
+		WARN_ON_ONCE(!try_page_mte_tagging(to));
+		ret = mte_copy_mc_page_tags(kto, kfrom);
+		if (ret)
+			return -EFAULT;
+
+		set_page_mte_tagged(to);
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL(copy_mc_highpage);
+
+int copy_mc_user_highpage(struct page *to, struct page *from,
+			unsigned long vaddr, struct vm_area_struct *vma)
+{
+	int ret;
+
+	ret = copy_mc_highpage(to, from);
+	if (!ret)
+		flush_dcache_page(to);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(copy_mc_user_highpage);
+#endif

diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index 64a567d5ad6f..7a8ddb2b67e6 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -332,6 +332,7 @@ static inline void copy_highpage(struct page *to, struct page *from)
 #endif
 
 #ifdef copy_mc_to_kernel
+#ifndef __HAVE_ARCH_COPY_MC_USER_HIGHPAGE
 /*
  * If architecture supports machine check exception handling, define the
  * #MC versions of copy_user_highpage and copy_highpage. They copy a memory
@@ -354,7 +355,9 @@ static inline int copy_mc_user_highpage(struct page *to, struct page *from,
 
 	return ret ? -EFAULT : 0;
 }
+#endif
 
+#ifndef __HAVE_ARCH_COPY_MC_HIGHPAGE
 static inline int copy_mc_highpage(struct page *to, struct page *from)
 {
 	unsigned long ret;
@@ -370,20 +373,25 @@ static inline int copy_mc_highpage(struct page *to, struct page *from)
 
 	return ret ? -EFAULT : 0;
 }
+#endif
 #else
+#ifndef __HAVE_ARCH_COPY_MC_USER_HIGHPAGE
 static inline int copy_mc_user_highpage(struct page *to, struct page *from,
 				unsigned long vaddr, struct vm_area_struct *vma)
 {
 	copy_user_highpage(to, from, vaddr, vma);
 	return 0;
 }
+#endif
 
+#ifndef __HAVE_ARCH_COPY_MC_HIGHPAGE
 static inline int copy_mc_highpage(struct page *to, struct page *from)
 {
 	copy_highpage(to, from);
 	return 0;
 }
 #endif
+#endif
 
 static inline void memcpy_page(struct page *dst_page, size_t dst_off,
 			       struct page *src_page, size_t src_off,

From patchwork Tue May 28 08:59:14 2024
X-Patchwork-Submitter: Tong Tiangen
X-Patchwork-Id: 13676351
From: Tong Tiangen
Subject: [PATCH v12 5/6] arm64: introduce copy_mc_to_kernel() implementation
Date: Tue, 28 May 2024 16:59:14 +0800
Message-ID: <20240528085915.1955987-6-tongtiangen@huawei.com>
In-Reply-To: <20240528085915.1955987-1-tongtiangen@huawei.com>
References: <20240528085915.1955987-1-tongtiangen@huawei.com>

The copy_mc_to_kernel() helper is a memory copy implementation that handles
source exceptions. It can be used in memory copy scenarios that tolerate
hardware memory errors (e.g. pmem_read()/dax_copy_to_iter()).

Currently only x86 and powerpc support this helper; now that arm64 supports
ARCH_HAS_COPY_MC, introduce an arm64 copy_mc_to_kernel() implementation.

Also add memcpy_mc() for memory copies that handle source exceptions.
Because no general-purpose register is free to hold "bytes not copied" in
memcpy(), memcpy_mc() is modelled on the implementation of copy_from_user().
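As a rough sketch of the kind of caller this helper targets (modelled loosely on a pmem-style read path; the names below are illustrative and not taken from drivers/nvdimm):

#include <linux/uaccess.h>
#include <linux/errno.h>

/* Read from device-backed memory that may contain poison into a kernel buffer. */
static int example_media_read(void *dst, const void *media_addr, size_t len)
{
	unsigned long rem = copy_mc_to_kernel(dst, media_addr, len);

	/*
	 * A non-zero return means a machine check was consumed while reading
	 * the source; report a media error rather than letting the kernel panic.
	 */
	return rem ? -EIO : 0;
}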
Signed-off-by: Tong Tiangen --- arch/arm64/include/asm/string.h | 5 +++ arch/arm64/include/asm/uaccess.h | 18 ++++++++ arch/arm64/lib/Makefile | 2 +- arch/arm64/lib/memcpy_mc.S | 73 ++++++++++++++++++++++++++++++++ mm/kasan/shadow.c | 12 ++++++ 5 files changed, 109 insertions(+), 1 deletion(-) create mode 100644 arch/arm64/lib/memcpy_mc.S diff --git a/arch/arm64/include/asm/string.h b/arch/arm64/include/asm/string.h index 3a3264ff47b9..23eca4fb24fa 100644 --- a/arch/arm64/include/asm/string.h +++ b/arch/arm64/include/asm/string.h @@ -35,6 +35,10 @@ extern void *memchr(const void *, int, __kernel_size_t); extern void *memcpy(void *, const void *, __kernel_size_t); extern void *__memcpy(void *, const void *, __kernel_size_t); +#define __HAVE_ARCH_MEMCPY_MC +extern int memcpy_mc(void *, const void *, __kernel_size_t); +extern int __memcpy_mc(void *, const void *, __kernel_size_t); + #define __HAVE_ARCH_MEMMOVE extern void *memmove(void *, const void *, __kernel_size_t); extern void *__memmove(void *, const void *, __kernel_size_t); @@ -57,6 +61,7 @@ void memcpy_flushcache(void *dst, const void *src, size_t cnt); */ #define memcpy(dst, src, len) __memcpy(dst, src, len) +#define memcpy_mc(dst, src, len) __memcpy_mc(dst, src, len) #define memmove(dst, src, len) __memmove(dst, src, len) #define memset(s, c, n) __memset(s, c, n) diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h index 14be5000c5a0..07c1aeaeb094 100644 --- a/arch/arm64/include/asm/uaccess.h +++ b/arch/arm64/include/asm/uaccess.h @@ -425,4 +425,22 @@ static inline size_t probe_subpage_writeable(const char __user *uaddr, #endif /* CONFIG_ARCH_HAS_SUBPAGE_FAULTS */ +#ifdef CONFIG_ARCH_HAS_COPY_MC +/** + * copy_mc_to_kernel - memory copy that handles source exceptions + * + * @to: destination address + * @from: source address + * @size: number of bytes to copy + * + * Return 0 for success, or bytes not copied. + */ +static inline unsigned long __must_check +copy_mc_to_kernel(void *to, const void *from, unsigned long size) +{ + return memcpy_mc(to, from, size); +} +#define copy_mc_to_kernel copy_mc_to_kernel +#endif + #endif /* __ASM_UACCESS_H */ diff --git a/arch/arm64/lib/Makefile b/arch/arm64/lib/Makefile index 65ec3d24d32d..0d2fc251fbae 100644 --- a/arch/arm64/lib/Makefile +++ b/arch/arm64/lib/Makefile @@ -13,7 +13,7 @@ endif lib-$(CONFIG_ARCH_HAS_UACCESS_FLUSHCACHE) += uaccess_flushcache.o -lib-$(CONFIG_ARCH_HAS_COPY_MC) += copy_mc_page.o +lib-$(CONFIG_ARCH_HAS_COPY_MC) += copy_mc_page.o memcpy_mc.o obj-$(CONFIG_CRC32) += crc32.o diff --git a/arch/arm64/lib/memcpy_mc.S b/arch/arm64/lib/memcpy_mc.S new file mode 100644 index 000000000000..1798090eba06 --- /dev/null +++ b/arch/arm64/lib/memcpy_mc.S @@ -0,0 +1,73 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2013 ARM Ltd. + * Copyright (C) 2013 Linaro. 
+ *
+ * This code is based on glibc cortex strings work originally authored by Linaro
+ * be found @
+ *
+ * http://bazaar.launchpad.net/~linaro-toolchain-dev/cortex-strings/trunk/
+ * files/head:/src/aarch64/
+ */
+
+#include
+#include
+#include
+#include
+
+/*
+ * Copy a buffer from src to dest (alignment handled by the hardware)
+ *
+ * Parameters:
+ *	x0 - dest
+ *	x1 - src
+ *	x2 - n
+ * Returns:
+ *	x0 - dest
+ */
+	.macro ldrb1 reg, ptr, val
+	KERNEL_ME_SAFE(9997f, ldrb \reg, [\ptr], \val)
+	.endm
+
+	.macro strb1 reg, ptr, val
+	strb \reg, [\ptr], \val
+	.endm
+
+	.macro ldrh1 reg, ptr, val
+	KERNEL_ME_SAFE(9997f, ldrh \reg, [\ptr], \val)
+	.endm
+
+	.macro strh1 reg, ptr, val
+	strh \reg, [\ptr], \val
+	.endm
+
+	.macro ldr1 reg, ptr, val
+	KERNEL_ME_SAFE(9997f, ldr \reg, [\ptr], \val)
+	.endm
+
+	.macro str1 reg, ptr, val
+	str \reg, [\ptr], \val
+	.endm
+
+	.macro ldp1 reg1, reg2, ptr, val
+	KERNEL_ME_SAFE(9997f, ldp \reg1, \reg2, [\ptr], \val)
+	.endm
+
+	.macro stp1 reg1, reg2, ptr, val
+	stp \reg1, \reg2, [\ptr], \val
+	.endm
+
+end	.req	x5
+SYM_FUNC_START(__memcpy_mc)
+	add	end, x0, x2
+#include "copy_template.S"
+	mov	x0, #0			// Nothing to copy
+	ret
+
+	// Exception fixups
+9997:	sub	x0, end, dst		// bytes not copied
+	ret
+SYM_FUNC_END(__memcpy_mc)
+EXPORT_SYMBOL(__memcpy_mc)
+SYM_FUNC_ALIAS_WEAK(memcpy_mc, __memcpy_mc)
+EXPORT_SYMBOL(memcpy_mc)

diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
index d6210ca48dda..e23632391eac 100644
--- a/mm/kasan/shadow.c
+++ b/mm/kasan/shadow.c
@@ -79,6 +79,18 @@ void *memcpy(void *dest, const void *src, size_t len)
 }
 #endif
 
+#ifdef __HAVE_ARCH_MEMCPY_MC
+#undef memcpy_mc
+int memcpy_mc(void *dest, const void *src, size_t len)
+{
+	if (!kasan_check_range(src, len, false, _RET_IP_) ||
+	    !kasan_check_range(dest, len, true, _RET_IP_))
+		return (int)len;
+
+	return __memcpy_mc(dest, src, len);
+}
+#endif
+
 void *__asan_memset(void *addr, int c, ssize_t len)
 {
 	if (!kasan_check_range(addr, len, true, _RET_IP_))

From patchwork Tue May 28 08:59:15 2024
X-Patchwork-Submitter: Tong Tiangen
X-Patchwork-Id: 13676352
From: Tong Tiangen
Subject: [PATCH v12 6/6] arm64: send SIGBUS to user process for SEA exception
Date: Tue, 28 May 2024 16:59:15 +0800
Message-ID: <20240528085915.1955987-7-tongtiangen@huawei.com>
In-Reply-To: <20240528085915.1955987-1-tongtiangen@huawei.com>
References: <20240528085915.1955987-1-tongtiangen@huawei.com>

For an SEA exception, the kernel needs to take action to recover from the
memory error, such as isolating the poisoned page and killing the failing
thread; this is done in memory_failure(). During our testing, the failing
thread could not be killed due to issue [1]. Here, I temporarily work
around that issue by sending the signal to the user process in do_sea().
After [1] is merged, this patch can be rolled back; otherwise the SIGBUS
would be sent twice.

[1] https://lore.kernel.org/lkml/20240204080144.7977-1-xueshuai@linux.alibaba.com/

Signed-off-by: Tong Tiangen
---
 arch/arm64/mm/fault.c | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 2dc65f99d389..37d7e74d9aee 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -730,9 +730,6 @@ static int do_sea(unsigned long far, unsigned long esr, struct pt_regs *regs)
 	const struct fault_info *inf;
 	unsigned long siaddr;
 
-	if (do_apei_claim_sea(regs))
-		return 0;
-
 	inf = esr_to_fault_info(esr);
 	if (esr & ESR_ELx_FnV) {
 		siaddr = 0;
@@ -744,6 +741,17 @@ static int do_sea(unsigned long far, unsigned long esr, struct pt_regs *regs)
 		 */
 		siaddr = untagged_addr(far);
 	}
+
+	if (do_apei_claim_sea(regs)) {
+		if (current->mm) {
+			set_thread_esr(0, esr);
+			arm64_force_sig_fault(inf->sig, inf->code, siaddr,
+				"Uncorrected memory error on access \
+				to poison memory\n");
+		}
+		return 0;
+	}
+
 	arm64_notify_die(inf->name, regs, inf->sig, inf->code, siaddr, esr);
 
 	return 0;