From patchwork Wed Feb 7 13:22:00 2024
X-Patchwork-Submitter: Tong Tiangen
X-Patchwork-Id: 13548487
From: Tong Tiangen
To: Mark Rutland, Catalin Marinas, Will Deacon, Andrew Morton, James Morse,
    Robin Murphy, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino,
    Michael Ellerman, Nicholas Piggin, Andrey Ryabinin, Alexander Potapenko,
    Christophe Leroy, Aneesh Kumar K.V, "Naveen N. Rao", Thomas Gleixner,
    Ingo Molnar, Borislav Petkov, Dave Hansen, "H. Peter Anvin"
CC: Tong Tiangen, Guohanjun
Subject: [PATCH v11 1/5] uaccess: add generic fallback version of copy_mc_to_user()
Date: Wed, 7 Feb 2024 21:22:00 +0800
Message-ID: <20240207132204.1720444-2-tongtiangen@huawei.com>
In-Reply-To: <20240207132204.1720444-1-tongtiangen@huawei.com>
References: <20240207132204.1720444-1-tongtiangen@huawei.com>

x86/powerpc has its own implementation of copy_mc_to_user(). Add a
generic fallback in include/linux/uaccess.h to prepare for other
architectures to enable CONFIG_ARCH_HAS_COPY_MC.

Signed-off-by: Tong Tiangen
Acked-by: Michael Ellerman
---
 arch/powerpc/include/asm/uaccess.h | 1 +
 arch/x86/include/asm/uaccess.h     | 1 +
 include/linux/uaccess.h            | 9 +++++++++
 3 files changed, 11 insertions(+)

diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
index f1f9890f50d3..4bfd1e6f0702 100644
--- a/arch/powerpc/include/asm/uaccess.h
+++ b/arch/powerpc/include/asm/uaccess.h
@@ -381,6 +381,7 @@ copy_mc_to_user(void __user *to, const void *from, unsigned long n)
 	return n;
 }
+#define copy_mc_to_user copy_mc_to_user
 #endif
 
 extern long __copy_from_user_flushcache(void *dst, const void __user *src,
diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
index 5c367c1290c3..fd56282ee9a8 100644
--- a/arch/x86/include/asm/uaccess.h
+++ b/arch/x86/include/asm/uaccess.h
@@ -497,6 +497,7 @@ copy_mc_to_kernel(void *to, const void *from, unsigned len);
 
 unsigned long __must_check
 copy_mc_to_user(void __user *to, const void *from, unsigned len);
+#define copy_mc_to_user copy_mc_to_user
 #endif
 
 /*
diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
index 3064314f4832..550287c92990 100644
--- a/include/linux/uaccess.h
+++ b/include/linux/uaccess.h
@@ -205,6 +205,15 @@ copy_mc_to_kernel(void *dst, const void *src, size_t cnt)
 }
 #endif
 
+#ifndef copy_mc_to_user
+static inline unsigned long __must_check
+copy_mc_to_user(void *dst, const void *src, size_t cnt)
+{
+	check_object_size(src, cnt, true);
+	return raw_copy_to_user(dst, src, cnt);
+}
+#endif
+
 static __always_inline void pagefault_disabled_inc(void)
 {
 	current->pagefault_disabled++;
From patchwork Wed Feb 7 13:22:01 2024
X-Patchwork-Submitter: Tong Tiangen
X-Patchwork-Id: 13548488
From: Tong Tiangen
Subject: [PATCH v11 2/5] arm64: add support for ARCH_HAS_COPY_MC
Date: Wed, 7 Feb 2024 21:22:01 +0800
Message-ID: <20240207132204.1720444-3-tongtiangen@huawei.com>
In-Reply-To: <20240207132204.1720444-1-tongtiangen@huawei.com>
References: <20240207132204.1720444-1-tongtiangen@huawei.com>

For the arm64 kernel, when it processes a hardware memory error delivered
as a synchronous notification (do_sea()), the current handling is to
panic if the error is consumed within the kernel. However, this is not
optimal. Take copy_from/to_user for example: if an ld* instruction
triggers a memory error, even in kernel mode, only the associated process
is affected. Killing the user process and isolating the corrupt page is a
better choice.

A new fixup type, EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE, is added to identify
instructions that can recover from memory errors triggered by access to
kernel memory.

Signed-off-by: Tong Tiangen
---
 arch/arm64/Kconfig                   |  1 +
 arch/arm64/include/asm/asm-extable.h | 31 +++++++++++++++++++++++-----
 arch/arm64/include/asm/asm-uaccess.h |  4 ++++
 arch/arm64/include/asm/extable.h     |  1 +
 arch/arm64/lib/copy_to_user.S        | 10 ++++-----
 arch/arm64/mm/extable.c              | 19 +++++++++++++++++
 arch/arm64/mm/fault.c                | 27 +++++++++++++++++-------
 7 files changed, 75 insertions(+), 18 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 96fb363d2f52..72b651c461d5 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -20,6 +20,7 @@ config ARM64
 	select ARCH_ENABLE_SPLIT_PMD_PTLOCK if PGTABLE_LEVELS > 2
 	select ARCH_ENABLE_THP_MIGRATION if TRANSPARENT_HUGEPAGE
 	select ARCH_HAS_CACHE_LINE_SIZE
+	select ARCH_HAS_COPY_MC if ACPI_APEI_GHES
 	select ARCH_HAS_CURRENT_STACK_POINTER
 	select ARCH_HAS_DEBUG_VIRTUAL
 	select ARCH_HAS_DEBUG_VM_PGTABLE
diff --git a/arch/arm64/include/asm/asm-extable.h b/arch/arm64/include/asm/asm-extable.h
index 980d1dd8e1a3..9c0664fe1eb1 100644
--- a/arch/arm64/include/asm/asm-extable.h
+++ b/arch/arm64/include/asm/asm-extable.h
@@ -5,11 +5,13 @@
 #include
 #include
 
-#define EX_TYPE_NONE			0
-#define EX_TYPE_BPF			1
-#define EX_TYPE_UACCESS_ERR_ZERO	2
-#define EX_TYPE_KACCESS_ERR_ZERO	3
-#define EX_TYPE_LOAD_UNALIGNED_ZEROPAD	4
+#define EX_TYPE_NONE				0
+#define EX_TYPE_BPF				1
+#define EX_TYPE_UACCESS_ERR_ZERO		2
+#define EX_TYPE_KACCESS_ERR_ZERO		3
+#define EX_TYPE_LOAD_UNALIGNED_ZEROPAD		4
+/* kernel access memory error safe */
+#define EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE	5
 
 /* Data fields for EX_TYPE_UACCESS_ERR_ZERO */
 #define EX_DATA_REG_ERR_SHIFT	0
@@ -51,6 +53,17 @@
 #define _ASM_EXTABLE_UACCESS(insn, fixup)			\
 	_ASM_EXTABLE_UACCESS_ERR_ZERO(insn, fixup, wzr, wzr)
 
+#define _ASM_EXTABLE_KACCESS_ERR_ZERO_ME_SAFE(insn, fixup, err, zero)	\
+	__ASM_EXTABLE_RAW(insn, fixup,					\
+			  EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE,		\
+			  (						\
+			    EX_DATA_REG(ERR, err) |			\
+			    EX_DATA_REG(ZERO, zero)			\
+			  ))
+
+#define _ASM_EXTABLE_KACCESS_ME_SAFE(insn, fixup)			\
+	_ASM_EXTABLE_KACCESS_ERR_ZERO_ME_SAFE(insn, fixup, wzr, wzr)
+
 /*
  * Create an exception table entry for uaccess `insn`, which will branch to `fixup`
  * when an unhandled fault is taken.
@@ -69,6 +82,14 @@
 	.endif
 	.endm
 
+/*
+ * Create an exception table entry for kaccess me(memory error) safe `insn`, which
+ * will branch to `fixup` when an unhandled fault is taken.
+ */
+	.macro		_asm_extable_kaccess_me_safe, insn, fixup
+	_ASM_EXTABLE_KACCESS_ME_SAFE(\insn, \fixup)
+	.endm
+
 #else /* __ASSEMBLY__ */
 
 #include
diff --git a/arch/arm64/include/asm/asm-uaccess.h b/arch/arm64/include/asm/asm-uaccess.h
index 5b6efe8abeeb..7bbebfa5b710 100644
--- a/arch/arm64/include/asm/asm-uaccess.h
+++ b/arch/arm64/include/asm/asm-uaccess.h
@@ -57,6 +57,10 @@ alternative_else_nop_endif
 	.endm
 #endif
 
+#define KERNEL_ME_SAFE(l, x...)				\
+9999:	x;						\
+	_asm_extable_kaccess_me_safe	9999b, l
+
 #define USER(l, x...)					\
 9999:	x;						\
 	_asm_extable_uaccess	9999b, l
diff --git a/arch/arm64/include/asm/extable.h b/arch/arm64/include/asm/extable.h
index 72b0e71cc3de..bc49443bc502 100644
--- a/arch/arm64/include/asm/extable.h
+++ b/arch/arm64/include/asm/extable.h
@@ -46,4 +46,5 @@ bool ex_handler_bpf(const struct exception_table_entry *ex,
 #endif /* !CONFIG_BPF_JIT */
 
 bool fixup_exception(struct pt_regs *regs);
+bool fixup_exception_me(struct pt_regs *regs);
 #endif
diff --git a/arch/arm64/lib/copy_to_user.S b/arch/arm64/lib/copy_to_user.S
index 802231772608..2ac716c0d6d8 100644
--- a/arch/arm64/lib/copy_to_user.S
+++ b/arch/arm64/lib/copy_to_user.S
@@ -20,7 +20,7 @@
  *	x0 - bytes not copied
  */
 	.macro ldrb1 reg, ptr, val
-	ldrb \reg, [\ptr], \val
+	KERNEL_ME_SAFE(9998f, ldrb \reg, [\ptr], \val)
 	.endm
 
 	.macro strb1 reg, ptr, val
@@ -28,7 +28,7 @@
 	.endm
 
 	.macro ldrh1 reg, ptr, val
-	ldrh \reg, [\ptr], \val
+	KERNEL_ME_SAFE(9998f, ldrh \reg, [\ptr], \val)
 	.endm
 
 	.macro strh1 reg, ptr, val
@@ -36,7 +36,7 @@
 	.endm
 
 	.macro ldr1 reg, ptr, val
-	ldr \reg, [\ptr], \val
+	KERNEL_ME_SAFE(9998f, ldr \reg, [\ptr], \val)
 	.endm
 
 	.macro str1 reg, ptr, val
@@ -44,7 +44,7 @@
 	.endm
 
 	.macro ldp1 reg1, reg2, ptr, val
-	ldp \reg1, \reg2, [\ptr], \val
+	KERNEL_ME_SAFE(9998f, ldp \reg1, \reg2, [\ptr], \val)
 	.endm
 
 	.macro stp1 reg1, reg2, ptr, val
@@ -64,7 +64,7 @@ SYM_FUNC_START(__arch_copy_to_user)
9997:	cmp	dst, dstin
 	b.ne	9998f
 	// Before being absolutely sure we couldn't copy anything, try harder
-	ldrb	tmp1w, [srcin]
+KERNEL_ME_SAFE(9998f, ldrb tmp1w, [srcin])
 	USER(9998f, sttrb tmp1w, [dst])
 	add	dst, dst, #1
9998:	sub	x0, end, dst			// bytes not copied
diff --git a/arch/arm64/mm/extable.c b/arch/arm64/mm/extable.c
index 228d681a8715..8c690ae61944 100644
--- a/arch/arm64/mm/extable.c
+++ b/arch/arm64/mm/extable.c
@@ -72,7 +72,26 @@ bool fixup_exception(struct pt_regs *regs)
 		return ex_handler_uaccess_err_zero(ex, regs);
 	case EX_TYPE_LOAD_UNALIGNED_ZEROPAD:
 		return ex_handler_load_unaligned_zeropad(ex, regs);
+	case EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE:
+		return false;
 	}
 
 	BUG();
 }
+
+bool fixup_exception_me(struct pt_regs *regs)
+{
+	const struct exception_table_entry *ex;
+
+	ex = search_exception_tables(instruction_pointer(regs));
+	if (!ex)
+		return false;
+
+	switch (ex->type) {
+	case EX_TYPE_UACCESS_ERR_ZERO:
+	case EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE:
+		return ex_handler_uaccess_err_zero(ex, regs);
+	}
+
+	return false;
+}
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 13189322a38f..78f9d5ce83bb 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -802,21 +802,32 @@ static int do_bad(unsigned long far, unsigned long esr, struct pt_regs *regs)
 	return 1; /* "fault" */
 }
 
+/*
+ * APEI claimed this as a firmware-first notification.
+ * Some processing deferred to task_work before ret_to_user().
+ */
+static bool do_apei_claim_sea(struct pt_regs *regs)
+{
+	if (user_mode(regs)) {
+		if (!apei_claim_sea(regs))
+			return true;
+	} else if (IS_ENABLED(CONFIG_ARCH_HAS_COPY_MC)) {
+		if (fixup_exception_me(regs) && !apei_claim_sea(regs))
+			return true;
+	}
+
+	return false;
+}
+
 static int do_sea(unsigned long far, unsigned long esr, struct pt_regs *regs)
 {
 	const struct fault_info *inf;
 	unsigned long siaddr;
 
-	inf = esr_to_fault_info(esr);
-
-	if (user_mode(regs) && apei_claim_sea(regs) == 0) {
-		/*
-		 * APEI claimed this as a firmware-first notification.
-		 * Some processing deferred to task_work before ret_to_user().
-		 */
+	if (do_apei_claim_sea(regs))
 		return 0;
-	}
 
+	inf = esr_to_fault_info(esr);
 	if (esr & ESR_ELx_FnV) {
 		siaddr = 0;
 	} else {
From patchwork Wed Feb 7 13:22:02 2024
X-Patchwork-Submitter: Tong Tiangen
X-Patchwork-Id: 13548489
From: Tong Tiangen
Subject: [PATCH v11 3/5] mm/hwpoison: return -EFAULT when copy fail in copy_mc_[user]_highpage()
Date: Wed, 7 Feb 2024 21:22:02 +0800
Message-ID: <20240207132204.1720444-4-tongtiangen@huawei.com>
In-Reply-To: <20240207132204.1720444-1-tongtiangen@huawei.com>
References: <20240207132204.1720444-1-tongtiangen@huawei.com>
If hardware errors are encountered during page copying, returning the
number of bytes not copied is not meaningful: the caller cannot do any
processing on the remaining data. Returning -EFAULT is more reasonable;
it represents a hardware error encountered during the copy.

Signed-off-by: Tong Tiangen
---
 include/linux/highmem.h | 8 ++++----
 mm/khugepaged.c         | 4 ++--
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index 451c1dff0e87..c5ca1a1fc4f5 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -335,8 +335,8 @@ static inline void copy_highpage(struct page *to, struct page *from)
 /*
  * If architecture supports machine check exception handling, define the
  * #MC versions of copy_user_highpage and copy_highpage. They copy a memory
- * page with #MC in source page (@from) handled, and return the number
- * of bytes not copied if there was a #MC, otherwise 0 for success.
+ * page with #MC in source page (@from) handled, and return -EFAULT if there
+ * was a #MC, otherwise 0 for success.
  */
 static inline int copy_mc_user_highpage(struct page *to, struct page *from,
					unsigned long vaddr, struct vm_area_struct *vma)
@@ -352,7 +352,7 @@ static inline int copy_mc_user_highpage(struct page *to, struct page *from,
 	kunmap_local(vto);
 	kunmap_local(vfrom);
 
-	return ret;
+	return ret ? -EFAULT : 0;
 }
 
 static inline int copy_mc_highpage(struct page *to, struct page *from)
@@ -368,7 +368,7 @@ static inline int copy_mc_highpage(struct page *to, struct page *from)
 	kunmap_local(vto);
 	kunmap_local(vfrom);
 
-	return ret;
+	return ret ? -EFAULT : 0;
 }
 #else
 static inline int copy_mc_user_highpage(struct page *to, struct page *from,
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index fe43fbc44525..d0f40c42f620 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -797,7 +797,7 @@ static int __collapse_huge_page_copy(pte_t *pte,
 			continue;
 		}
 		src_page = pte_page(pteval);
-		if (copy_mc_user_highpage(page, src_page, _address, vma) > 0) {
+		if (copy_mc_user_highpage(page, src_page, _address, vma)) {
 			result = SCAN_COPY_MC;
 			break;
 		}
@@ -2053,7 +2053,7 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 				clear_highpage(hpage + (index % HPAGE_PMD_NR));
 				index++;
 			}
-			if (copy_mc_highpage(hpage + (page->index % HPAGE_PMD_NR), page) > 0) {
+			if (copy_mc_highpage(hpage + (page->index % HPAGE_PMD_NR), page)) {
 				result = SCAN_COPY_MC;
 				goto rollback;
 			}
ESMTP id F3B5CC0D02 for ; Wed, 7 Feb 2024 13:22:30 +0000 (UTC) X-FDA: 81765072102.18.534DFCB Received: from szxga04-in.huawei.com (szxga04-in.huawei.com [45.249.212.190]) by imf17.hostedemail.com (Postfix) with ESMTP id 64E2340007 for ; Wed, 7 Feb 2024 13:22:27 +0000 (UTC) Authentication-Results: imf17.hostedemail.com; dkim=none; spf=pass (imf17.hostedemail.com: domain of tongtiangen@huawei.com designates 45.249.212.190 as permitted sender) smtp.mailfrom=tongtiangen@huawei.com; dmarc=pass (policy=quarantine) header.from=huawei.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1707312149; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=LO4GATX313XxIRamWeU7GRgzrmEGrMa8jdJMhtcmJmY=; b=d6l3QLckVkQir3l9/hWziyMcez+2ZRnGtnydEB4066IXm0e7sgNKYP61pjFqTg7evV6Afb t0z6YbIz9aqO5/+oF/+4c1c3MfJ7Vh3iD7CJYIj4GSBj4akL9Zzf79Ea6xpKbtj4p5Uxzd YGgwNpKxJDfsVDu3Tkh2HXTC/nht2VQ= ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1707312149; a=rsa-sha256; cv=none; b=i6dmzzdUIAoCryxF9tcHNHQUOVVy2NBSRF7hFcEG7DGjEyB3oFxr9SgaXasAdTDtLwkgMF B9j6i+CyIsU1Mj8qehRYnL9cfZ4yE6ImQ2dFD4SF4Nru6FJ7kqoKNLM9E3oN05o7Y6U/gn LiJXaPGtH+dQX1YIMIMOXpjE3/FjUoA= ARC-Authentication-Results: i=1; imf17.hostedemail.com; dkim=none; spf=pass (imf17.hostedemail.com: domain of tongtiangen@huawei.com designates 45.249.212.190 as permitted sender) smtp.mailfrom=tongtiangen@huawei.com; dmarc=pass (policy=quarantine) header.from=huawei.com Received: from mail.maildlp.com (unknown [172.19.163.44]) by szxga04-in.huawei.com (SkyGuard) with ESMTP id 4TVLNW1bzBz1xnHb; Wed, 7 Feb 2024 21:21:15 +0800 (CST) Received: from kwepemm600017.china.huawei.com (unknown [7.193.23.234]) by mail.maildlp.com (Postfix) with ESMTPS id D32F214025A; Wed, 7 Feb 2024 21:22:22 +0800 (CST) Received: from 
localhost.localdomain (10.175.112.125) by kwepemm600017.china.huawei.com (7.193.23.234) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.35; Wed, 7 Feb 2024 21:22:20 +0800 From: Tong Tiangen To: Mark Rutland , Catalin Marinas , Will Deacon , Andrew Morton , James Morse , Robin Murphy , Andrey Konovalov , Dmitry Vyukov , Vincenzo Frascino , Michael Ellerman , Nicholas Piggin , Andrey Ryabinin , Alexander Potapenko , Christophe Leroy , Aneesh Kumar K.V , "Naveen N. Rao" , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Dave Hansen , , "H. Peter Anvin" CC: , , , , Tong Tiangen , , Guohanjun Subject: [PATCH v11 4/5] arm64: support copy_mc_[user]_highpage() Date: Wed, 7 Feb 2024 21:22:03 +0800 Message-ID: <20240207132204.1720444-5-tongtiangen@huawei.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20240207132204.1720444-1-tongtiangen@huawei.com> References: <20240207132204.1720444-1-tongtiangen@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.175.112.125] X-ClientProxiedBy: dggems704-chm.china.huawei.com (10.3.19.181) To kwepemm600017.china.huawei.com (7.193.23.234) X-Rspamd-Queue-Id: 64E2340007 X-Rspam-User: X-Rspamd-Server: rspam11 X-Stat-Signature: fazpbsgmt9xf1cengxoopjmdo5a8we9o X-HE-Tag: 1707312147-23659 X-HE-Meta: 
Currently, many scenarios that can tolerate memory errors while copying a page are supported in the kernel [1~5], all of which are implemented by copy_mc_[user]_highpage(). arm64 should also support this mechanism.

Due to MTE, arm64 needs its own architecture implementation of copy_mc_[user]_highpage(); the macros __HAVE_ARCH_COPY_MC_HIGHPAGE and __HAVE_ARCH_COPY_MC_USER_HIGHPAGE have been added to control it.

Add a new helper, copy_mc_page(), which provides a hardware memory error safe page copy implementation.
The code logic of copy_mc_page() is the same as copy_page(); the main difference is that the ldp instructions in copy_mc_page() carry the fixup type EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE. The shared logic is therefore extracted into copy_page_template.S.

[1] commit d302c2398ba2 ("mm, hwpoison: when copy-on-write hits poison, take page offline")
[2] commit 1cb9dc4b475c ("mm: hwpoison: support recovery from HugePage copy-on-write faults")
[3] commit 6b970599e807 ("mm: hwpoison: support recovery from ksm_might_need_to_copy()")
[4] commit 98c76c9f1ef7 ("mm/khugepaged: recover from poisoned anonymous memory")
[5] commit 12904d953364 ("mm/khugepaged: recover from poisoned file-backed memory")

Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
---
 arch/arm64/include/asm/mte.h        |  9 +++++
 arch/arm64/include/asm/page.h       | 10 ++++++
 arch/arm64/lib/Makefile             |  2 ++
 arch/arm64/lib/copy_mc_page.S       | 37 +++++++++++++++++++
 arch/arm64/lib/copy_page.S          | 50 +++-----------------------
 arch/arm64/lib/copy_page_template.S | 56 +++++++++++++++++++++++++++++
 arch/arm64/lib/mte.S                | 29 +++++++++++++++
 arch/arm64/mm/copypage.c            | 45 +++++++++++++++++++++++
 include/linux/highmem.h             |  8 +++++
 9 files changed, 201 insertions(+), 45 deletions(-)
 create mode 100644 arch/arm64/lib/copy_mc_page.S
 create mode 100644 arch/arm64/lib/copy_page_template.S

diff --git a/arch/arm64/include/asm/mte.h b/arch/arm64/include/asm/mte.h
index 91fbd5c8a391..dc68337c2623 100644
--- a/arch/arm64/include/asm/mte.h
+++ b/arch/arm64/include/asm/mte.h
@@ -92,6 +92,11 @@ static inline bool try_page_mte_tagging(struct page *page)
 void mte_zero_clear_page_tags(void *addr);
 void mte_sync_tags(pte_t pte, unsigned int nr_pages);
 void mte_copy_page_tags(void *kto, const void *kfrom);
+
+#ifdef CONFIG_ARCH_HAS_COPY_MC
+int mte_copy_mc_page_tags(void *kto, const void *kfrom);
+#endif
+
 void mte_thread_init_user(void);
 void mte_thread_switch(struct task_struct *next);
 void mte_cpu_setup(void);
@@ -128,6 +133,10 @@ static inline void mte_sync_tags(pte_t pte, unsigned int nr_pages)
 static inline void mte_copy_page_tags(void *kto, const void *kfrom)
 {
 }
+static inline int mte_copy_mc_page_tags(void *kto, const void *kfrom)
+{
+	return 0;
+}
 static inline void mte_thread_init_user(void)
 {
 }
diff --git a/arch/arm64/include/asm/page.h b/arch/arm64/include/asm/page.h
index 2312e6ee595f..304cc86b8a10 100644
--- a/arch/arm64/include/asm/page.h
+++ b/arch/arm64/include/asm/page.h
@@ -29,6 +29,16 @@ void copy_user_highpage(struct page *to, struct page *from,
 void copy_highpage(struct page *to, struct page *from);
 #define __HAVE_ARCH_COPY_HIGHPAGE
 
+#ifdef CONFIG_ARCH_HAS_COPY_MC
+int copy_mc_page(void *to, const void *from);
+int copy_mc_highpage(struct page *to, struct page *from);
+#define __HAVE_ARCH_COPY_MC_HIGHPAGE
+
+int copy_mc_user_highpage(struct page *to, struct page *from,
+		unsigned long vaddr, struct vm_area_struct *vma);
+#define __HAVE_ARCH_COPY_MC_USER_HIGHPAGE
+#endif
+
 struct folio *vma_alloc_zeroed_movable_folio(struct vm_area_struct *vma,
 						unsigned long vaddr);
 #define vma_alloc_zeroed_movable_folio vma_alloc_zeroed_movable_folio
diff --git a/arch/arm64/lib/Makefile b/arch/arm64/lib/Makefile
index 29490be2546b..a2fd865b816d 100644
--- a/arch/arm64/lib/Makefile
+++ b/arch/arm64/lib/Makefile
@@ -15,6 +15,8 @@ endif
 
 lib-$(CONFIG_ARCH_HAS_UACCESS_FLUSHCACHE) += uaccess_flushcache.o
 
+lib-$(CONFIG_ARCH_HAS_COPY_MC) += copy_mc_page.o
+
 obj-$(CONFIG_CRC32) += crc32.o
 
 obj-$(CONFIG_FUNCTION_ERROR_INJECTION) += error-inject.o
diff --git a/arch/arm64/lib/copy_mc_page.S b/arch/arm64/lib/copy_mc_page.S
new file mode 100644
index 000000000000..1e5fe6952869
--- /dev/null
+++ b/arch/arm64/lib/copy_mc_page.S
@@ -0,0 +1,37 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+/*
+ * Copy a page from src to dest (both are page aligned) with memory error safe
+ *
+ * Parameters:
+ *	x0 - dest
+ *	x1 - src
+ * Returns:
+ *	x0 - Return 0 if copy success, or -EFAULT if anything goes wrong
+ *	     while copying.
+ */
+	.macro ldp1 reg1, reg2, ptr, val
+	KERNEL_ME_SAFE(9998f, ldp \reg1, \reg2, [\ptr, \val])
+	.endm
+
+SYM_FUNC_START(__pi_copy_mc_page)
+#include "copy_page_template.S"
+
+	mov x0, #0
+	ret
+
+9998:	mov x0, #-EFAULT
+	ret
+
+SYM_FUNC_END(__pi_copy_mc_page)
+SYM_FUNC_ALIAS(copy_mc_page, __pi_copy_mc_page)
+EXPORT_SYMBOL(copy_mc_page)
diff --git a/arch/arm64/lib/copy_page.S b/arch/arm64/lib/copy_page.S
index 6a56d7cf309d..5499f507bb75 100644
--- a/arch/arm64/lib/copy_page.S
+++ b/arch/arm64/lib/copy_page.S
@@ -17,52 +17,12 @@
  *	x0 - dest
  *	x1 - src
  */
-SYM_FUNC_START(__pi_copy_page)
-	ldp	x2, x3, [x1]
-	ldp	x4, x5, [x1, #16]
-	ldp	x6, x7, [x1, #32]
-	ldp	x8, x9, [x1, #48]
-	ldp	x10, x11, [x1, #64]
-	ldp	x12, x13, [x1, #80]
-	ldp	x14, x15, [x1, #96]
-	ldp	x16, x17, [x1, #112]
-
-	add	x0, x0, #256
-	add	x1, x1, #128
-1:
-	tst	x0, #(PAGE_SIZE - 1)
-
-	stnp	x2, x3, [x0, #-256]
-	ldp	x2, x3, [x1]
-	stnp	x4, x5, [x0, #16 - 256]
-	ldp	x4, x5, [x1, #16]
-	stnp	x6, x7, [x0, #32 - 256]
-	ldp	x6, x7, [x1, #32]
-	stnp	x8, x9, [x0, #48 - 256]
-	ldp	x8, x9, [x1, #48]
-	stnp	x10, x11, [x0, #64 - 256]
-	ldp	x10, x11, [x1, #64]
-	stnp	x12, x13, [x0, #80 - 256]
-	ldp	x12, x13, [x1, #80]
-	stnp	x14, x15, [x0, #96 - 256]
-	ldp	x14, x15, [x1, #96]
-	stnp	x16, x17, [x0, #112 - 256]
-	ldp	x16, x17, [x1, #112]
-
-	add	x0, x0, #128
-	add	x1, x1, #128
-
-	b.ne	1b
-
-	stnp	x2, x3, [x0, #-256]
-	stnp	x4, x5, [x0, #16 - 256]
-	stnp	x6, x7, [x0, #32 - 256]
-	stnp	x8, x9, [x0, #48 - 256]
-	stnp	x10, x11, [x0, #64 - 256]
-	stnp	x12, x13, [x0, #80 - 256]
-	stnp	x14, x15, [x0, #96 - 256]
-	stnp	x16, x17, [x0, #112 - 256]
+	.macro ldp1 reg1, reg2, ptr, val
+	ldp	\reg1, \reg2, [\ptr, \val]
+	.endm
 
+SYM_FUNC_START(__pi_copy_page)
+#include "copy_page_template.S"
 	ret
 SYM_FUNC_END(__pi_copy_page)
 SYM_FUNC_ALIAS(copy_page, __pi_copy_page)
diff --git a/arch/arm64/lib/copy_page_template.S b/arch/arm64/lib/copy_page_template.S
new file mode 100644
index 000000000000..b3ddec2c7a27
--- /dev/null
+++ b/arch/arm64/lib/copy_page_template.S
@@ -0,0 +1,56 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2012 ARM Ltd.
+ */
+
+/*
+ * Copy a page from src to dest (both are page aligned)
+ *
+ * Parameters:
+ *	x0 - dest
+ *	x1 - src
+ */
+	ldp1	x2, x3, x1, #0
+	ldp1	x4, x5, x1, #16
+	ldp1	x6, x7, x1, #32
+	ldp1	x8, x9, x1, #48
+	ldp1	x10, x11, x1, #64
+	ldp1	x12, x13, x1, #80
+	ldp1	x14, x15, x1, #96
+	ldp1	x16, x17, x1, #112
+
+	add	x0, x0, #256
+	add	x1, x1, #128
+1:
+	tst	x0, #(PAGE_SIZE - 1)
+
+	stnp	x2, x3, [x0, #-256]
+	ldp1	x2, x3, x1, #0
+	stnp	x4, x5, [x0, #16 - 256]
+	ldp1	x4, x5, x1, #16
+	stnp	x6, x7, [x0, #32 - 256]
+	ldp1	x6, x7, x1, #32
+	stnp	x8, x9, [x0, #48 - 256]
+	ldp1	x8, x9, x1, #48
+	stnp	x10, x11, [x0, #64 - 256]
+	ldp1	x10, x11, x1, #64
+	stnp	x12, x13, [x0, #80 - 256]
+	ldp1	x12, x13, x1, #80
+	stnp	x14, x15, [x0, #96 - 256]
+	ldp1	x14, x15, x1, #96
+	stnp	x16, x17, [x0, #112 - 256]
+	ldp1	x16, x17, x1, #112
+
+	add	x0, x0, #128
+	add	x1, x1, #128
+
+	b.ne	1b
+
+	stnp	x2, x3, [x0, #-256]
+	stnp	x4, x5, [x0, #16 - 256]
+	stnp	x6, x7, [x0, #32 - 256]
+	stnp	x8, x9, [x0, #48 - 256]
+	stnp	x10, x11, [x0, #64 - 256]
+	stnp	x12, x13, [x0, #80 - 256]
+	stnp	x14, x15, [x0, #96 - 256]
+	stnp	x16, x17, [x0, #112 - 256]
diff --git a/arch/arm64/lib/mte.S b/arch/arm64/lib/mte.S
index 5018ac03b6bf..50ef24318281 100644
--- a/arch/arm64/lib/mte.S
+++ b/arch/arm64/lib/mte.S
@@ -80,6 +80,35 @@ SYM_FUNC_START(mte_copy_page_tags)
 	ret
 SYM_FUNC_END(mte_copy_page_tags)
 
+#ifdef CONFIG_ARCH_HAS_COPY_MC
+/*
+ * Copy the tags from the source page to the destination one with machine check safe
+ * x0 - address of the destination page
+ * x1 - address of the source page
+ * Returns:
+ *	x0 - Return 0 if copy success, or
+ *	     -EFAULT if anything goes wrong while copying.
+ */
+SYM_FUNC_START(mte_copy_mc_page_tags)
+	mov	x2, x0
+	mov	x3, x1
+	multitag_transfer_size x5, x6
+1:
+KERNEL_ME_SAFE(2f, ldgm x4, [x3])
+	stgm	x4, [x2]
+	add	x2, x2, x5
+	add	x3, x3, x5
+	tst	x2, #(PAGE_SIZE - 1)
+	b.ne	1b
+
+	mov	x0, #0
+	ret
+
+2:	mov	x0, #-EFAULT
+	ret
+SYM_FUNC_END(mte_copy_mc_page_tags)
+#endif
+
 /*
  * Read tags from a user buffer (one tag per byte) and set the corresponding
  * tags at the given kernel address. Used by PTRACE_POKEMTETAGS.
diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c
index a7bb20055ce0..ff0d9ceea2a4 100644
--- a/arch/arm64/mm/copypage.c
+++ b/arch/arm64/mm/copypage.c
@@ -40,3 +40,48 @@ void copy_user_highpage(struct page *to, struct page *from,
 	flush_dcache_page(to);
 }
 EXPORT_SYMBOL_GPL(copy_user_highpage);
+
+#ifdef CONFIG_ARCH_HAS_COPY_MC
+/*
+ * Return -EFAULT if anything goes wrong while copying page or mte.
+ */
+int copy_mc_highpage(struct page *to, struct page *from)
+{
+	void *kto = page_address(to);
+	void *kfrom = page_address(from);
+	int ret;
+
+	ret = copy_mc_page(kto, kfrom);
+	if (ret)
+		return -EFAULT;
+
+	if (kasan_hw_tags_enabled())
+		page_kasan_tag_reset(to);
+
+	if (system_supports_mte() && page_mte_tagged(from)) {
+		/* It's a new page, shouldn't have been tagged yet */
+		WARN_ON_ONCE(!try_page_mte_tagging(to));
+		ret = mte_copy_mc_page_tags(kto, kfrom);
+		if (ret)
+			return -EFAULT;
+
+		set_page_mte_tagged(to);
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL(copy_mc_highpage);
+
+int copy_mc_user_highpage(struct page *to, struct page *from,
+		unsigned long vaddr, struct vm_area_struct *vma)
+{
+	int ret;
+
+	ret = copy_mc_highpage(to, from);
+	if (!ret)
+		flush_dcache_page(to);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(copy_mc_user_highpage);
+#endif
diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index c5ca1a1fc4f5..a42470ca42f2 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -332,6 +332,7 @@ static inline void copy_highpage(struct page *to, struct page *from)
 #endif
 
 #ifdef copy_mc_to_kernel
+#ifndef __HAVE_ARCH_COPY_MC_USER_HIGHPAGE
 /*
  * If architecture supports machine check exception handling, define the
  * #MC versions of copy_user_highpage and copy_highpage. They copy a memory
@@ -354,7 +355,9 @@ static inline int copy_mc_user_highpage(struct page *to, struct page *from,
 
 	return ret ? -EFAULT : 0;
 }
+#endif
 
+#ifndef __HAVE_ARCH_COPY_MC_HIGHPAGE
 static inline int copy_mc_highpage(struct page *to, struct page *from)
 {
 	unsigned long ret;
@@ -370,20 +373,25 @@ static inline int copy_mc_highpage(struct page *to, struct page *from)
 
 	return ret ? -EFAULT : 0;
 }
+#endif
 #else
+#ifndef __HAVE_ARCH_COPY_MC_USER_HIGHPAGE
 static inline int copy_mc_user_highpage(struct page *to, struct page *from,
 				unsigned long vaddr, struct vm_area_struct *vma)
 {
 	copy_user_highpage(to, from, vaddr, vma);
 	return 0;
 }
+#endif
 
+#ifndef __HAVE_ARCH_COPY_MC_HIGHPAGE
 static inline int copy_mc_highpage(struct page *to, struct page *from)
 {
 	copy_highpage(to, from);
 	return 0;
 }
 #endif
+#endif
 
 static inline void memcpy_page(struct page *dst_page, size_t dst_off,
 			       struct page *src_page, size_t src_off,

From patchwork Wed Feb 7 13:22:04 2024
X-Patchwork-Submitter: Tong Tiangen
X-Patchwork-Id: 13548490
From: Tong Tiangen <tongtiangen@huawei.com>
To: Mark Rutland, Catalin Marinas, Will Deacon, Andrew Morton, James Morse, Robin Murphy, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino, Michael Ellerman, Nicholas Piggin, Andrey Ryabinin, Alexander Potapenko, Christophe Leroy, Aneesh Kumar K.V, "Naveen N. Rao", Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, "H. Peter Anvin"
CC: Tong Tiangen, Guohanjun
Subject: [PATCH v11 5/5] arm64: send SIGBUS to user process for SEA exception
Date: Wed, 7 Feb 2024 21:22:04 +0800
Message-ID: <20240207132204.1720444-6-tongtiangen@huawei.com>
In-Reply-To: <20240207132204.1720444-1-tongtiangen@huawei.com>
References: <20240207132204.1720444-1-tongtiangen@huawei.com>
For an SEA exception, the kernel needs to take some action to recover from the memory error, such as isolating the poisoned page and killing the failing thread; both are done in memory_failure(). During testing, the failing thread could not be killed due to issue [1]. Here, I temporarily work around this issue by sending signals to user processes (those with none of PF_KTHREAD | PF_IO_WORKER | PF_WQ_WORKER | PF_USER_WORKER set) in do_sea(). After [1] is merged, this patch can be rolled back; otherwise the SIGBUS would be sent repeatedly.
[1] https://lore.kernel.org/lkml/20240204080144.7977-1-xueshuai@linux.alibaba.com/

Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
---
 arch/arm64/mm/fault.c | 16 +++++++++++++---
 1 file changed, 13 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 78f9d5ce83bb..a27bb2de1a7c 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -824,9 +824,6 @@ static int do_sea(unsigned long far, unsigned long esr, struct pt_regs *regs)
 	const struct fault_info *inf;
 	unsigned long siaddr;
 
-	if (do_apei_claim_sea(regs))
-		return 0;
-
 	inf = esr_to_fault_info(esr);
 	if (esr & ESR_ELx_FnV) {
 		siaddr = 0;
@@ -838,6 +835,19 @@ static int do_sea(unsigned long far, unsigned long esr, struct pt_regs *regs)
 		 */
 		siaddr = untagged_addr(far);
 	}
+
+	if (do_apei_claim_sea(regs)) {
+		if (!(current->flags & (PF_KTHREAD |
+					PF_USER_WORKER |
+					PF_WQ_WORKER |
+					PF_IO_WORKER))) {
+			set_thread_esr(0, esr);
+			arm64_force_sig_fault(inf->sig, inf->code, siaddr,
+					"Uncorrected memory error on access to poison memory\n");
+		}
+		return 0;
+	}
+
 	arm64_notify_die(inf->name, regs, inf->sig, inf->code, siaddr, esr);
 
 	return 0;