From patchwork Wed Apr 6 09:13:10 2022
X-Patchwork-Submitter: Tong Tiangen
X-Patchwork-Id: 12802722
From: Tong Tiangen <tongtiangen@huawei.com>
To: Andrew Morton, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, Catalin Marinas, Will Deacon, Alexander Viro,
	"H. Peter Anvin"
Peter Anvin" CC: , , , Tong Tiangen Subject: [RFC PATCH -next V2 6/7] arm64: add cow to machine check safe Date: Wed, 6 Apr 2022 09:13:10 +0000 Message-ID: <20220406091311.3354723-7-tongtiangen@huawei.com> X-Mailer: git-send-email 2.18.0.huawei.25 In-Reply-To: <20220406091311.3354723-1-tongtiangen@huawei.com> References: <20220406091311.3354723-1-tongtiangen@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.175.112.125] X-ClientProxiedBy: dggems706-chm.china.huawei.com (10.3.19.183) To kwepemm600017.china.huawei.com (7.193.23.234) X-CFilter-Loop: Reflected X-Rspamd-Server: rspam06 X-Rspamd-Queue-Id: 54A47100002 X-Rspam-User: Authentication-Results: imf05.hostedemail.com; dkim=none; dmarc=pass (policy=quarantine) header.from=huawei.com; spf=pass (imf05.hostedemail.com: domain of tongtiangen@huawei.com designates 45.249.212.255 as permitted sender) smtp.mailfrom=tongtiangen@huawei.com X-Stat-Signature: 53zw45fis8ee51q1je3hyw3rofi5pny6 X-HE-Tag: 1649235277-841299 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: In the cow(copy on write) processing, the data of the user process is copied, when machine check error is encountered during copy, killing the user process and isolate the user page with hardware memory errors is a more reasonable choice than kernel panic. The copy_page_mc() in copy_page_mc.S is largely borrows from copy_page() in copy_page.S and the main difference is copy_page_mc() add the extable entry to support machine check safe. Signed-off-by: Tong Tiangen --- arch/arm64/include/asm/page.h | 10 ++++ arch/arm64/lib/Makefile | 2 + arch/arm64/lib/copy_page_mc.S | 98 +++++++++++++++++++++++++++++++++++ arch/arm64/mm/copypage.c | 36 ++++++++++--- include/linux/highmem.h | 8 +++ mm/memory.c | 2 +- 6 files changed, 149 insertions(+), 7 deletions(-) create mode 100644 arch/arm64/lib/copy_page_mc.S diff --git a/arch/arm64/include/asm/page.h b/arch/arm64/include/asm/page.h index 993a27ea6f54..832571a7dddb 100644 --- a/arch/arm64/include/asm/page.h +++ b/arch/arm64/include/asm/page.h @@ -29,6 +29,16 @@ void copy_user_highpage(struct page *to, struct page *from, void copy_highpage(struct page *to, struct page *from); #define __HAVE_ARCH_COPY_HIGHPAGE +#ifdef CONFIG_ARCH_HAS_COPY_MC +extern void copy_page_mc(void *to, const void *from); +void copy_highpage_mc(struct page *to, struct page *from); +#define __HAVE_ARCH_COPY_HIGHPAGE_MC + +void copy_user_highpage_mc(struct page *to, struct page *from, + unsigned long vaddr, struct vm_area_struct *vma); +#define __HAVE_ARCH_COPY_USER_HIGHPAGE_MC +#endif + struct page *alloc_zeroed_user_highpage_movable(struct vm_area_struct *vma, unsigned long vaddr); #define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE_MOVABLE diff --git a/arch/arm64/lib/Makefile b/arch/arm64/lib/Makefile index 29490be2546b..29c578414b12 100644 --- a/arch/arm64/lib/Makefile +++ b/arch/arm64/lib/Makefile @@ -22,3 +22,5 @@ obj-$(CONFIG_FUNCTION_ERROR_INJECTION) += error-inject.o obj-$(CONFIG_ARM64_MTE) += mte.o obj-$(CONFIG_KASAN_SW_TAGS) += kasan_sw_tags.o + +obj-$(CONFIG_ARCH_HAS_CPY_MC) += copy_page_mc.o diff --git a/arch/arm64/lib/copy_page_mc.S b/arch/arm64/lib/copy_page_mc.S new file mode 100644 index 000000000000..cbf56e661efe --- /dev/null +++ b/arch/arm64/lib/copy_page_mc.S @@ -0,0 +1,98 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2012 ARM Ltd. 
 arch/arm64/include/asm/page.h | 10 ++++
 arch/arm64/lib/Makefile       |  2 +
 arch/arm64/lib/copy_page_mc.S | 98 +++++++++++++++++++++++++++++++++++
 arch/arm64/mm/copypage.c      | 36 ++++++++++---
 include/linux/highmem.h       |  8 +++
 mm/memory.c                   |  2 +-
 6 files changed, 149 insertions(+), 7 deletions(-)
 create mode 100644 arch/arm64/lib/copy_page_mc.S

diff --git a/arch/arm64/include/asm/page.h b/arch/arm64/include/asm/page.h
index 993a27ea6f54..832571a7dddb 100644
--- a/arch/arm64/include/asm/page.h
+++ b/arch/arm64/include/asm/page.h
@@ -29,6 +29,16 @@ void copy_user_highpage(struct page *to, struct page *from,
 void copy_highpage(struct page *to, struct page *from);
 #define __HAVE_ARCH_COPY_HIGHPAGE
 
+#ifdef CONFIG_ARCH_HAS_COPY_MC
+extern void copy_page_mc(void *to, const void *from);
+void copy_highpage_mc(struct page *to, struct page *from);
+#define __HAVE_ARCH_COPY_HIGHPAGE_MC
+
+void copy_user_highpage_mc(struct page *to, struct page *from,
+			unsigned long vaddr, struct vm_area_struct *vma);
+#define __HAVE_ARCH_COPY_USER_HIGHPAGE_MC
+#endif
+
 struct page *alloc_zeroed_user_highpage_movable(struct vm_area_struct *vma,
 						unsigned long vaddr);
 #define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE_MOVABLE
diff --git a/arch/arm64/lib/Makefile b/arch/arm64/lib/Makefile
index 29490be2546b..29c578414b12 100644
--- a/arch/arm64/lib/Makefile
+++ b/arch/arm64/lib/Makefile
@@ -22,3 +22,5 @@ obj-$(CONFIG_FUNCTION_ERROR_INJECTION) += error-inject.o
 obj-$(CONFIG_ARM64_MTE) += mte.o
 
 obj-$(CONFIG_KASAN_SW_TAGS) += kasan_sw_tags.o
+
+obj-$(CONFIG_ARCH_HAS_COPY_MC) += copy_page_mc.o
diff --git a/arch/arm64/lib/copy_page_mc.S b/arch/arm64/lib/copy_page_mc.S
new file mode 100644
index 000000000000..cbf56e661efe
--- /dev/null
+++ b/arch/arm64/lib/copy_page_mc.S
@@ -0,0 +1,98 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2012 ARM Ltd.
+ */
+
+#include <linux/linkage.h>
+#include <linux/const.h>
+#include <asm/assembler.h>
+#include <asm/page.h>
+#include <asm/cpufeature.h>
+#include <asm/alternative.h>
+
+/*
+ * Copy a page from src to dest (both are page aligned) with machine check
+ *
+ * Parameters:
+ *	x0 - dest
+ *	x1 - src
+ */
+SYM_FUNC_START(__pi_copy_page_mc)
+alternative_if ARM64_HAS_NO_HW_PREFETCH
+	// Prefetch three cache lines ahead.
+	prfm	pldl1strm, [x1, #128]
+	prfm	pldl1strm, [x1, #256]
+	prfm	pldl1strm, [x1, #384]
+alternative_else_nop_endif
+
+100:	ldp	x2, x3, [x1]
+101:	ldp	x4, x5, [x1, #16]
+102:	ldp	x6, x7, [x1, #32]
+103:	ldp	x8, x9, [x1, #48]
+104:	ldp	x10, x11, [x1, #64]
+105:	ldp	x12, x13, [x1, #80]
+106:	ldp	x14, x15, [x1, #96]
+107:	ldp	x16, x17, [x1, #112]
+
+	add	x0, x0, #256
+	add	x1, x1, #128
+1:
+	tst	x0, #(PAGE_SIZE - 1)
+
+alternative_if ARM64_HAS_NO_HW_PREFETCH
+	prfm	pldl1strm, [x1, #384]
+alternative_else_nop_endif
+
+	stnp	x2, x3, [x0, #-256]
+200:	ldp	x2, x3, [x1]
+	stnp	x4, x5, [x0, #16 - 256]
+201:	ldp	x4, x5, [x1, #16]
+	stnp	x6, x7, [x0, #32 - 256]
+202:	ldp	x6, x7, [x1, #32]
+	stnp	x8, x9, [x0, #48 - 256]
+203:	ldp	x8, x9, [x1, #48]
+	stnp	x10, x11, [x0, #64 - 256]
+204:	ldp	x10, x11, [x1, #64]
+	stnp	x12, x13, [x0, #80 - 256]
+205:	ldp	x12, x13, [x1, #80]
+	stnp	x14, x15, [x0, #96 - 256]
+206:	ldp	x14, x15, [x1, #96]
+	stnp	x16, x17, [x0, #112 - 256]
+207:	ldp	x16, x17, [x1, #112]
+
+	add	x0, x0, #128
+	add	x1, x1, #128
+
+	b.ne	1b
+
+	stnp	x2, x3, [x0, #-256]
+	stnp	x4, x5, [x0, #16 - 256]
+	stnp	x6, x7, [x0, #32 - 256]
+	stnp	x8, x9, [x0, #48 - 256]
+	stnp	x10, x11, [x0, #64 - 256]
+	stnp	x12, x13, [x0, #80 - 256]
+	stnp	x14, x15, [x0, #96 - 256]
+	stnp	x16, x17, [x0, #112 - 256]
+
+300:	ret
+
+_asm_extable_mc 100b, 300b
+_asm_extable_mc 101b, 300b
+_asm_extable_mc 102b, 300b
+_asm_extable_mc 103b, 300b
+_asm_extable_mc 104b, 300b
+_asm_extable_mc 105b, 300b
+_asm_extable_mc 106b, 300b
+_asm_extable_mc 107b, 300b
+_asm_extable_mc 200b, 300b
+_asm_extable_mc 201b, 300b
+_asm_extable_mc 202b, 300b
+_asm_extable_mc 203b, 300b
+_asm_extable_mc 204b, 300b
+_asm_extable_mc 205b, 300b
+_asm_extable_mc 206b, 300b
+_asm_extable_mc 207b, 300b
+
+SYM_FUNC_END(__pi_copy_page_mc)
+SYM_FUNC_ALIAS(copy_page_mc, __pi_copy_page_mc)
+EXPORT_SYMBOL(copy_page_mc)
diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c
index 0dea80bf6de4..0f28edfcb234 100644
--- a/arch/arm64/mm/copypage.c
+++ b/arch/arm64/mm/copypage.c
@@ -14,13 +14,8 @@
 #include <asm/cpufeature.h>
 #include <asm/mte.h>
 
-void copy_highpage(struct page *to, struct page *from)
+static void do_mte(struct page *to, struct page *from, void *kto, void *kfrom)
 {
-	void *kto = page_address(to);
-	void *kfrom = page_address(from);
-
-	copy_page(kto, kfrom);
-
 	if (system_supports_mte() && test_bit(PG_mte_tagged, &from->flags)) {
 		set_bit(PG_mte_tagged, &to->flags);
 		page_kasan_tag_reset(to);
@@ -35,6 +30,15 @@ void copy_highpage(struct page *to, struct page *from)
 		mte_copy_page_tags(kto, kfrom);
 	}
 }
+
+void copy_highpage(struct page *to, struct page *from)
+{
+	void *kto = page_address(to);
+	void *kfrom = page_address(from);
+
+	copy_page(kto, kfrom);
+	do_mte(to, from, kto, kfrom);
+}
 EXPORT_SYMBOL(copy_highpage);
 
 void copy_user_highpage(struct page *to, struct page *from,
@@ -44,3 +48,23 @@ void copy_user_highpage(struct page *to, struct page *from,
 	flush_dcache_page(to);
 }
 EXPORT_SYMBOL_GPL(copy_user_highpage);
+
+#ifdef CONFIG_ARCH_HAS_COPY_MC
+void copy_highpage_mc(struct page *to, struct page *from)
+{
+	void *kto = page_address(to);
+	void *kfrom = page_address(from);
+
+	copy_page_mc(kto, kfrom);
+	do_mte(to, from, kto, kfrom);
+}
+EXPORT_SYMBOL(copy_highpage_mc);
+
+void copy_user_highpage_mc(struct page *to, struct page *from,
+			unsigned long vaddr, struct vm_area_struct *vma)
+{
+	copy_highpage_mc(to, from);
+	flush_dcache_page(to);
+}
+EXPORT_SYMBOL_GPL(copy_user_highpage_mc);
+#endif
diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index 39bb9b47fa9c..a9dbf331b038 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -283,6 +283,10 @@ static inline void copy_user_highpage(struct page *to, struct page *from,
 
 #endif
 
+#ifndef __HAVE_ARCH_COPY_USER_HIGHPAGE_MC
+#define copy_user_highpage_mc copy_user_highpage
+#endif
+
 #ifndef __HAVE_ARCH_COPY_HIGHPAGE
 
 static inline void copy_highpage(struct page *to, struct page *from)
@@ -298,6 +302,10 @@ static inline void copy_highpage(struct page *to, struct page *from)
 
 #endif
 
+#ifndef __HAVE_ARCH_COPY_HIGHPAGE_MC
+#define copy_highpage_mc copy_highpage
+#endif
+
 static inline void memcpy_page(struct page *dst_page, size_t dst_off,
 			       struct page *src_page, size_t src_off,
 			       size_t len)
diff --git a/mm/memory.c b/mm/memory.c
index 76e3af9639d9..d5f62234152d 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2767,7 +2767,7 @@ static inline bool cow_user_page(struct page *dst, struct page *src,
 	unsigned long addr = vmf->address;
 
 	if (likely(src)) {
-		copy_user_highpage(dst, src, addr, vma);
+		copy_user_highpage_mc(dst, src, addr, vma);
 		return true;
 	}