From patchwork Mon Dec 18 08:23:58 2023
X-Patchwork-Submitter: Tong Tiangen
X-Patchwork-Id: 13496363
From: Tong Tiangen
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
	"H. Peter Anvin", Tony Luck, Andy Lutomirski, Peter Zijlstra,
	Andrew Morton, Naoya Horiguchi
CC: Tong Tiangen, Guohanjun
Subject: [PATCH -next v3 1/3] x86/mce: remove redundant fixup type EX_TYPE_COPY
Date: Mon, 18 Dec 2023 16:23:58 +0800
Message-ID: <20231218082400.2694698-2-tongtiangen@huawei.com>
In-Reply-To: <20231218082400.2694698-1-tongtiangen@huawei.com>
References: <20231218082400.2694698-1-tongtiangen@huawei.com>
X-Mailing-List: linux-edac@vger.kernel.org

Commit 034ff37d3407 ("x86: rewrite '__copy_user_nocache' function") switched
the rewritten __copy_user_nocache() from EX_TYPE_COPY to EX_TYPE_UACCESS
fixups. This does not break the MC-safe copy in __copy_user_nocache(), but it
leaves EX_TYPE_COPY without any remaining users. Therefore, remove the
definition of EX_TYPE_COPY.
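For context, the rewritten __copy_user_nocache() now tags each faulting
user-access instruction with an EX_TYPE_UACCESS fixup via _ASM_EXTABLE_UA().
A minimal sketch of that pattern in C, modeled on the kernel's put_user
helpers (the helper name and labels below are hypothetical, not code from
this patch):

#include <linux/types.h>
#include <linux/errno.h>
#include <asm/asm.h>		/* _ASM_EXTABLE_UA() */

static __always_inline int sketch_put_user_long(unsigned long val,
						unsigned long __user *uaddr)
{
	/* "1:" is the instruction that may fault; the exception table entry
	 * (type EX_TYPE_UACCESS) redirects a fault to the efault label
	 * instead of oopsing. */
	asm goto("1:	mov %0, %1\n"
		 _ASM_EXTABLE_UA(1b, %l2)
		 : /* no outputs */
		 : "r" (val), "m" (*uaddr)
		 : /* no clobbers */
		 : efault);
	return 0;
efault:
	return -EFAULT;
}

With the EX_TYPE_COPY handler gone, a machine check hitting such an
instruction is classified through the remaining EX_TYPE_UACCESS path in
error_context().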
Signed-off-by: Tong Tiangen
---
 arch/x86/include/asm/asm.h                 |  3 ---
 arch/x86/include/asm/extable_fixup_types.h | 23 +++++++++++------------
 arch/x86/kernel/cpu/mce/severity.c         |  1 -
 arch/x86/mm/extable.c                      |  9 ---------
 4 files changed, 11 insertions(+), 25 deletions(-)

diff --git a/arch/x86/include/asm/asm.h b/arch/x86/include/asm/asm.h
index fbcfec4dc4cc..692409ea0c37 100644
--- a/arch/x86/include/asm/asm.h
+++ b/arch/x86/include/asm/asm.h
@@ -215,9 +215,6 @@ register unsigned long current_stack_pointer asm(_ASM_SP);
 #define _ASM_EXTABLE_UA(from, to)				\
 	_ASM_EXTABLE_TYPE(from, to, EX_TYPE_UACCESS)
 
-#define _ASM_EXTABLE_CPY(from, to)				\
-	_ASM_EXTABLE_TYPE(from, to, EX_TYPE_COPY)
-
 #define _ASM_EXTABLE_FAULT(from, to)				\
 	_ASM_EXTABLE_TYPE(from, to, EX_TYPE_FAULT)
 
diff --git a/arch/x86/include/asm/extable_fixup_types.h b/arch/x86/include/asm/extable_fixup_types.h
index 991e31cfde94..6126af55b85b 100644
--- a/arch/x86/include/asm/extable_fixup_types.h
+++ b/arch/x86/include/asm/extable_fixup_types.h
@@ -36,18 +36,17 @@
 #define	EX_TYPE_DEFAULT			1
 #define	EX_TYPE_FAULT			2
 #define	EX_TYPE_UACCESS			3
-#define	EX_TYPE_COPY			4
-#define	EX_TYPE_CLEAR_FS		5
-#define	EX_TYPE_FPU_RESTORE		6
-#define	EX_TYPE_BPF			7
-#define	EX_TYPE_WRMSR			8
-#define	EX_TYPE_RDMSR			9
-#define	EX_TYPE_WRMSR_SAFE		10 /* reg := -EIO */
-#define	EX_TYPE_RDMSR_SAFE		11 /* reg := -EIO */
-#define	EX_TYPE_WRMSR_IN_MCE		12
-#define	EX_TYPE_RDMSR_IN_MCE		13
-#define	EX_TYPE_DEFAULT_MCE_SAFE	14
-#define	EX_TYPE_FAULT_MCE_SAFE		15
+#define	EX_TYPE_CLEAR_FS		4
+#define	EX_TYPE_FPU_RESTORE		5
+#define	EX_TYPE_BPF			6
+#define	EX_TYPE_WRMSR			7
+#define	EX_TYPE_RDMSR			8
+#define	EX_TYPE_WRMSR_SAFE		9 /* reg := -EIO */
+#define	EX_TYPE_RDMSR_SAFE		10 /* reg := -EIO */
+#define	EX_TYPE_WRMSR_IN_MCE		11
+#define	EX_TYPE_RDMSR_IN_MCE		12
+#define	EX_TYPE_DEFAULT_MCE_SAFE	13
+#define	EX_TYPE_FAULT_MCE_SAFE		14
 
 #define	EX_TYPE_POP_REG			16 /* sp += sizeof(long) */
 #define EX_TYPE_POP_ZERO		(EX_TYPE_POP_REG | EX_DATA_IMM(0))
diff --git a/arch/x86/kernel/cpu/mce/severity.c b/arch/x86/kernel/cpu/mce/severity.c
index c4477162c07d..bca780fa5e57 100644
--- a/arch/x86/kernel/cpu/mce/severity.c
+++ b/arch/x86/kernel/cpu/mce/severity.c
@@ -290,7 +290,6 @@ static noinstr int error_context(struct mce *m, struct pt_regs *regs)
 
 	switch (fixup_type) {
 	case EX_TYPE_UACCESS:
-	case EX_TYPE_COPY:
 		if (!copy_user)
 			return IN_KERNEL;
 		m->kflags |= MCE_IN_KERNEL_COPYIN;
diff --git a/arch/x86/mm/extable.c b/arch/x86/mm/extable.c
index 271dcb2deabc..2354c0156e51 100644
--- a/arch/x86/mm/extable.c
+++ b/arch/x86/mm/extable.c
@@ -163,13 +163,6 @@ static bool ex_handler_uaccess(const struct exception_table_entry *fixup,
 	return ex_handler_default(fixup, regs);
 }
 
-static bool ex_handler_copy(const struct exception_table_entry *fixup,
-			    struct pt_regs *regs, int trapnr)
-{
-	WARN_ONCE(trapnr == X86_TRAP_GP, "General protection fault in user access. Non-canonical address?");
-	return ex_handler_fault(fixup, regs, trapnr);
-}
-
 static bool ex_handler_msr(const struct exception_table_entry *fixup,
 			   struct pt_regs *regs, bool wrmsr, bool safe, int reg)
 {
@@ -267,8 +260,6 @@ int fixup_exception(struct pt_regs *regs, int trapnr, unsigned long error_code,
 		return ex_handler_fault(e, regs, trapnr);
 	case EX_TYPE_UACCESS:
 		return ex_handler_uaccess(e, regs, trapnr, fault_addr);
-	case EX_TYPE_COPY:
-		return ex_handler_copy(e, regs, trapnr);
 	case EX_TYPE_CLEAR_FS:
 		return ex_handler_clear_fs(e, regs);
 	case EX_TYPE_FPU_RESTORE:

From patchwork Mon Dec 18 08:23:59 2023
X-Patchwork-Submitter: Tong Tiangen
X-Patchwork-Id: 13496360
From: Tong Tiangen
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
	"H. Peter Anvin", Tony Luck, Andy Lutomirski, Peter Zijlstra,
	Andrew Morton, Naoya Horiguchi
CC: Tong Tiangen, Guohanjun
Subject: [PATCH -next v3 2/3] x86/mce: rename MCE_IN_KERNEL_COPYIN to MCE_IN_KERNEL_COPY_MC
Date: Mon, 18 Dec 2023 16:23:59 +0800
Message-ID: <20231218082400.2694698-3-tongtiangen@huawei.com>
In-Reply-To: <20231218082400.2694698-1-tongtiangen@huawei.com>
References: <20231218082400.2694698-1-tongtiangen@huawei.com>
X-Mailing-List: linux-edac@vger.kernel.org

In x86 MCE processing, the macro MCE_IN_KERNEL_COPYIN marks machine checks
that hit while copying data from user space. do_machine_check() uses this
flag to isolate the poisoned page via memory_failure(). There is nothing
wrong with that, but the flag can be put to wider use: some kernel memory
copy paths are also MC-safe, using copy_mc_to_kernel() or
copy_mc_user_highpage(), and in those scenarios poisoned pages need to be
isolated too, so a macro similar to MCE_IN_KERNEL_COPYIN is required.
Therefore, rename MCE_IN_KERNEL_COPYIN to MCE_IN_KERNEL_COPY_MC, so that the
name covers both user-to-kernel and kernel-to-kernel MC-safe copies.
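To illustrate the kernel-to-kernel case the new name is meant to cover, here
is a minimal sketch of a caller using copy_mc_to_kernel(); the helper name
and error-handling policy are made up for this example, only the
copy_mc_to_kernel() return convention (bytes left uncopied) is taken from the
real API:

#include <linux/uaccess.h>	/* copy_mc_to_kernel() */
#include <linux/errno.h>

/* Hypothetical helper: machine-check-safe copy between two kernel
 * buffers, reporting poison to the caller. */
static int sketch_copy_record_mc(void *dst, const void *src, size_t len)
{
	unsigned long uncopied;

	/* Returns the number of bytes NOT copied; non-zero means the
	 * source was poisoned and the copy stopped short. */
	uncopied = copy_mc_to_kernel(dst, src, len);
	if (uncopied)
		return -EHWPOISON;

	return 0;
}

Such a copy is fixed up through an MC-safe exception table entry, and with
this rename the same MCE_IN_KERNEL_COPY_MC flag can describe it.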
Signed-off-by: Tong Tiangen
---
 arch/x86/include/asm/mce.h         | 8 ++++----
 arch/x86/kernel/cpu/mce/core.c     | 2 +-
 arch/x86/kernel/cpu/mce/severity.c | 2 +-
 3 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/x86/include/asm/mce.h b/arch/x86/include/asm/mce.h
index de3118305838..cb628ab2f32f 100644
--- a/arch/x86/include/asm/mce.h
+++ b/arch/x86/include/asm/mce.h
@@ -151,11 +151,11 @@
 
 /*
  * Indicates an MCE that happened in kernel space while copying data
- * from user. In this case fixup_exception() gets the kernel to the
- * error exit for the copy function. Machine check handler can then
- * treat it like a fault taken in user mode.
+ * from user or kernel. In this case fixup_exception() gets the kernel
+ * to the error exit for the copy function. Machine check handler can
+ * then treat it like a fault taken in user or kernel mode.
  */
-#define MCE_IN_KERNEL_COPYIN	BIT_ULL(7)
+#define MCE_IN_KERNEL_COPY_MC	BIT_ULL(7)
 
 /*
  * This structure contains all data related to the MCE log.  Also
diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
index faef7ceed746..dbea0c395c56 100644
--- a/arch/x86/kernel/cpu/mce/core.c
+++ b/arch/x86/kernel/cpu/mce/core.c
@@ -1597,7 +1597,7 @@ noinstr void do_machine_check(struct pt_regs *regs)
 			mce_panic("Failed kernel mode recovery", &m, msg);
 	}
 
-	if (m.kflags & MCE_IN_KERNEL_COPYIN)
+	if (m.kflags & MCE_IN_KERNEL_COPY_MC)
 		queue_task_work(&m, msg, kill_me_never);
 }
 
diff --git a/arch/x86/kernel/cpu/mce/severity.c b/arch/x86/kernel/cpu/mce/severity.c
index bca780fa5e57..df67a7a13034 100644
--- a/arch/x86/kernel/cpu/mce/severity.c
+++ b/arch/x86/kernel/cpu/mce/severity.c
@@ -292,7 +292,7 @@ static noinstr int error_context(struct mce *m, struct pt_regs *regs)
 	case EX_TYPE_UACCESS:
 		if (!copy_user)
 			return IN_KERNEL;
-		m->kflags |= MCE_IN_KERNEL_COPYIN;
+		m->kflags |= MCE_IN_KERNEL_COPY_MC;
 		fallthrough;
 
 	case EX_TYPE_FAULT_MCE_SAFE:

From patchwork Mon Dec 18 08:24:00 2023
X-Patchwork-Submitter: Tong Tiangen
X-Patchwork-Id: 13496362
From: Tong Tiangen
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
	"H. Peter Anvin", Tony Luck, Andy Lutomirski, Peter Zijlstra,
	Andrew Morton, Naoya Horiguchi
CC: Tong Tiangen, Guohanjun
Subject: [PATCH -next v3 3/3] x86/mce: set MCE_IN_KERNEL_COPY_MC for DEFAULT_MCE_SAFE exception
Date: Mon, 18 Dec 2023 16:24:00 +0800
Message-ID: <20231218082400.2694698-4-tongtiangen@huawei.com>
In-Reply-To: <20231218082400.2694698-1-tongtiangen@huawei.com>
References: <20231218082400.2694698-1-tongtiangen@huawei.com>
X-Mailing-List: linux-edac@vger.kernel.org

From: Kefeng Wang

If an MCE happens in kernel space and the kernel can recover, mce.kflags
MCE_IN_KERNEL_RECOV is set in error_context() and the MCE is then handled
in do_machine_check(). But without MCE_IN_KERNEL_COPY_MC, although the
kernel won't panic, the corrupted page is not isolated and a new consumer
may hit it again, which is not what we expect.

To avoid this, several hwpoison recovery paths [1][2][3] call
memory_failure_queue() to cope with such unhandled corrupted pages. There
are also other existing MC-safe copy scenarios, e.g. nvdimm, dm-writecache
and dax, which do not isolate corrupted pages at all. The better fix is to
set MCE_IN_KERNEL_COPY_MC for MC-safe copies and let the core
do_machine_check() isolate the corrupted page, instead of doing it in each
caller one by one.

EX_TYPE_FAULT_MCE_SAFE is used for the FPU and its logic is left untouched
here; only the handling of EX_TYPE_DEFAULT_MCE_SAFE, which is used in the
scenarios described above, is changed.

[1] commit d302c2398ba2 ("mm, hwpoison: when copy-on-write hits poison, take page offline")
[2] commit 1cb9dc4b475c ("mm: hwpoison: support recovery from HugePage copy-on-write faults")
[3] commit 6b970599e807 ("mm: hwpoison: support recovery from ksm_might_need_to_copy()")

Reviewed-by: Naoya Horiguchi
Reviewed-by: Tony Luck
Signed-off-by: Kefeng Wang
Signed-off-by: Tong Tiangen
---
 arch/x86/kernel/cpu/mce/severity.c |  4 ++--
 mm/ksm.c                           |  1 -
 mm/memory.c                        | 12 +++---------
 3 files changed, 5 insertions(+), 12 deletions(-)

diff --git a/arch/x86/kernel/cpu/mce/severity.c b/arch/x86/kernel/cpu/mce/severity.c
index df67a7a13034..b4b1d028cbb3 100644
--- a/arch/x86/kernel/cpu/mce/severity.c
+++ b/arch/x86/kernel/cpu/mce/severity.c
@@ -292,11 +292,11 @@ static noinstr int error_context(struct mce *m, struct pt_regs *regs)
 	case EX_TYPE_UACCESS:
 		if (!copy_user)
 			return IN_KERNEL;
+		fallthrough;
+	case EX_TYPE_DEFAULT_MCE_SAFE:
 		m->kflags |= MCE_IN_KERNEL_COPY_MC;
 		fallthrough;
 
-	case EX_TYPE_FAULT_MCE_SAFE:
-	case EX_TYPE_DEFAULT_MCE_SAFE:
 		m->kflags |= MCE_IN_KERNEL_RECOV;
 		return IN_KERNEL_RECOV;
 
diff --git a/mm/ksm.c b/mm/ksm.c
index ae05fb438ac5..01e3a7ef1b9d 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -3075,7 +3075,6 @@ struct folio *ksm_might_need_to_copy(struct folio *folio,
 		if (copy_mc_user_highpage(folio_page(new_folio, 0), page,
 								addr, vma)) {
 			folio_put(new_folio);
-			memory_failure_queue(folio_pfn(folio), 0);
 			return ERR_PTR(-EHWPOISON);
 		}
 		folio_set_dirty(new_folio);
diff --git a/mm/memory.c b/mm/memory.c
index 809746555827..9f0d875b1d3f 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2836,10 +2836,8 @@ static inline int __wp_page_copy_user(struct page *dst, struct page *src,
 	unsigned long addr = vmf->address;
 
 	if (likely(src)) {
-		if (copy_mc_user_highpage(dst, src, addr, vma)) {
-			memory_failure_queue(page_to_pfn(src), 0);
+		if (copy_mc_user_highpage(dst, src, addr, vma))
 			return -EHWPOISON;
-		}
 		return 0;
 	}
 
@@ -6168,10 +6166,8 @@ static int copy_user_gigantic_page(struct folio *dst, struct folio *src,
 			cond_resched();
 
 			if (copy_mc_user_highpage(dst_page, src_page,
-						  addr + i*PAGE_SIZE, vma)) {
-				memory_failure_queue(page_to_pfn(src_page), 0);
+						  addr + i*PAGE_SIZE, vma))
 				return -EHWPOISON;
-			}
 		}
 	return 0;
 }
@@ -6187,10 +6183,8 @@ static int copy_subpage(unsigned long addr, int idx, void *arg)
 	struct copy_subpage_arg *copy_arg = arg;
 
 	if (copy_mc_user_highpage(copy_arg->dst + idx, copy_arg->src + idx,
-				  addr, copy_arg->vma)) {
-		memory_failure_queue(page_to_pfn(copy_arg->src + idx), 0);
+				  addr, copy_arg->vma))
 		return -EHWPOISON;
-	}
 	return 0;
 }
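Taken together, the mm/ hunks above converge on a single caller pattern. A
hedged sketch of what such a call site looks like after this series (the
wrapper name is made up; copy_mc_user_highpage() is the real helper): the
caller only reports -EHWPOISON, and isolation of the poisoned source page is
left to do_machine_check() via MCE_IN_KERNEL_COPY_MC.

#include <linux/highmem.h>	/* copy_mc_user_highpage() */
#include <linux/mm.h>

/* Hypothetical call site, mirroring the pattern the hunks above converge
 * on: no per-caller memory_failure_queue() anymore. */
static int sketch_cow_copy_page(struct page *dst, struct page *src,
				unsigned long addr, struct vm_area_struct *vma)
{
	/* A non-zero return means the source page is poisoned; the MCE core
	 * queues it for isolation, so just report the error here. */
	if (copy_mc_user_highpage(dst, src, addr, vma))
		return -EHWPOISON;

	return 0;
}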