From patchwork Mon Jan 29 13:46:47 2024
X-Patchwork-Submitter: Tong Tiangen
X-Patchwork-Id: 13535673
From: Tong Tiangen
To: Catalin Marinas, Will Deacon, Mark Rutland, James Morse, Robin Murphy,
    Andrey Ryabinin, Alexander Potapenko, Alexander Viro, Andrey Konovalov,
    Dmitry Vyukov, Vincenzo Frascino, Andrew Morton, Michael Ellerman,
    Nicholas Piggin, Christophe Leroy, Aneesh Kumar K.V, "Naveen N. Rao",
    Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
    "H. Peter Anvin"
Cc: Tong Tiangen, Guohanjun
Subject: [PATCH v10 1/6] uaccess: add generic fallback version of copy_mc_to_user()
Date: Mon, 29 Jan 2024 21:46:47 +0800
Message-ID: <20240129134652.4004931-2-tongtiangen@huawei.com>
In-Reply-To: <20240129134652.4004931-1-tongtiangen@huawei.com>
References: <20240129134652.4004931-1-tongtiangen@huawei.com>

x86 and powerpc each have their own implementation of copy_mc_to_user().
Add a generic fallback in include/linux/uaccess.h to prepare for other
architectures to enable CONFIG_ARCH_HAS_COPY_MC.
Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
Acked-by: Michael Ellerman
---
 arch/powerpc/include/asm/uaccess.h | 1 +
 arch/x86/include/asm/uaccess.h     | 1 +
 include/linux/uaccess.h            | 9 +++++++++
 3 files changed, 11 insertions(+)

diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
index f1f9890f50d3..4bfd1e6f0702 100644
--- a/arch/powerpc/include/asm/uaccess.h
+++ b/arch/powerpc/include/asm/uaccess.h
@@ -381,6 +381,7 @@ copy_mc_to_user(void __user *to, const void *from, unsigned long n)
 
 	return n;
 }
+#define copy_mc_to_user copy_mc_to_user
 #endif
 
 extern long __copy_from_user_flushcache(void *dst, const void __user *src,
diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
index 5c367c1290c3..fd56282ee9a8 100644
--- a/arch/x86/include/asm/uaccess.h
+++ b/arch/x86/include/asm/uaccess.h
@@ -497,6 +497,7 @@ copy_mc_to_kernel(void *to, const void *from, unsigned len);
 
 unsigned long __must_check
 copy_mc_to_user(void __user *to, const void *from, unsigned len);
+#define copy_mc_to_user copy_mc_to_user
 #endif
 
 /*
diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
index 3064314f4832..550287c92990 100644
--- a/include/linux/uaccess.h
+++ b/include/linux/uaccess.h
@@ -205,6 +205,15 @@ copy_mc_to_kernel(void *dst, const void *src, size_t cnt)
 }
 #endif
 
+#ifndef copy_mc_to_user
+static inline unsigned long __must_check
+copy_mc_to_user(void *dst, const void *src, size_t cnt)
+{
+	check_object_size(src, cnt, true);
+	return raw_copy_to_user(dst, src, cnt);
+}
+#endif
+
 static __always_inline void pagefault_disabled_inc(void)
 {
 	current->pagefault_disabled++;
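[Editor's usage sketch, not part of the series: with the generic fallback in
place, generic code can call copy_mc_to_user() unconditionally. The helper
below is hypothetical; it illustrates the bytes-not-copied contract, which
the fallback shares with raw_copy_to_user(). The generic fallback is not
machine-check safe itself; recovery only happens where an architecture
overrides the helper.]

/*
 * Hypothetical caller, for illustration only: copy a kernel buffer to
 * userspace, tolerating a machine check on architectures that override
 * copy_mc_to_user(), and degrading to raw_copy_to_user() elsewhere.
 */
#include <linux/uaccess.h>

static ssize_t demo_copy_out(void __user *ubuf, const void *kbuf, size_t len)
{
	unsigned long rem = copy_mc_to_user(ubuf, kbuf, len); /* bytes not copied */

	if (rem == len)
		return -EFAULT;		/* nothing was copied */
	return len - rem;		/* short copy on fault or #MC */
}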
From patchwork Mon Jan 29 13:46:48 2024
X-Patchwork-Submitter: Tong Tiangen
X-Patchwork-Id: 13535674
From: Tong Tiangen
To: Catalin Marinas, Will Deacon, Mark Rutland, James Morse, Robin Murphy,
    Andrey Ryabinin, Alexander Potapenko, Alexander Viro, Andrey Konovalov,
    Dmitry Vyukov, Vincenzo Frascino, Andrew Morton, Michael Ellerman,
    Nicholas Piggin, Christophe Leroy, Aneesh Kumar K.V, "Naveen N. Rao",
    Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
    "H. Peter Anvin"
Cc: Tong Tiangen, Guohanjun
Subject: [PATCH v10 2/6] arm64: add support for machine check error safe
Date: Mon, 29 Jan 2024 21:46:48 +0800
Message-ID: <20240129134652.4004931-3-tongtiangen@huawei.com>
In-Reply-To: <20240129134652.4004931-1-tongtiangen@huawei.com>
References: <20240129134652.4004931-1-tongtiangen@huawei.com>

When the arm64 kernel processes a hardware memory error delivered as a
synchronous external abort notification (do_sea()) and the error is
consumed within the kernel, the current behaviour is to panic. That is
not optimal: take uaccess for example; if a uaccess operation fails due
to a memory error, only the user process is affected. Killing the user
process and isolating the corrupt page is a better choice.

This patch only enables the machine check error safe framework and adds
an exception fixup before the kernel panic in do_sea().
Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
---
 arch/arm64/Kconfig               |  1 +
 arch/arm64/include/asm/extable.h |  1 +
 arch/arm64/mm/extable.c          | 16 ++++++++++++++++
 arch/arm64/mm/fault.c            | 29 ++++++++++++++++++++++++++++-
 4 files changed, 46 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index aa7c1d435139..2cc34b5e7abb 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -20,6 +20,7 @@ config ARM64
 	select ARCH_ENABLE_SPLIT_PMD_PTLOCK if PGTABLE_LEVELS > 2
 	select ARCH_ENABLE_THP_MIGRATION if TRANSPARENT_HUGEPAGE
 	select ARCH_HAS_CACHE_LINE_SIZE
+	select ARCH_HAS_COPY_MC if ACPI_APEI_GHES
 	select ARCH_HAS_CURRENT_STACK_POINTER
 	select ARCH_HAS_DEBUG_VIRTUAL
 	select ARCH_HAS_DEBUG_VM_PGTABLE
diff --git a/arch/arm64/include/asm/extable.h b/arch/arm64/include/asm/extable.h
index 72b0e71cc3de..f80ebd0addfd 100644
--- a/arch/arm64/include/asm/extable.h
+++ b/arch/arm64/include/asm/extable.h
@@ -46,4 +46,5 @@ bool ex_handler_bpf(const struct exception_table_entry *ex,
 #endif /* !CONFIG_BPF_JIT */
 
 bool fixup_exception(struct pt_regs *regs);
+bool fixup_exception_mc(struct pt_regs *regs);
 #endif
diff --git a/arch/arm64/mm/extable.c b/arch/arm64/mm/extable.c
index 228d681a8715..478e639f8680 100644
--- a/arch/arm64/mm/extable.c
+++ b/arch/arm64/mm/extable.c
@@ -76,3 +76,19 @@ bool fixup_exception(struct pt_regs *regs)
 
 	BUG();
 }
+
+bool fixup_exception_mc(struct pt_regs *regs)
+{
+	const struct exception_table_entry *ex;
+
+	ex = search_exception_tables(instruction_pointer(regs));
+	if (!ex)
+		return false;
+
+	/*
+	 * This is not complete. More machine check safe extable types can
+	 * be processed here.
+	 */
+
+	return false;
+}
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 55f6455a8284..312932dc100b 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -730,6 +730,31 @@ static int do_bad(unsigned long far, unsigned long esr, struct pt_regs *regs)
 	return 1; /* "fault" */
 }
 
+static bool arm64_do_kernel_sea(unsigned long addr, unsigned int esr,
+				struct pt_regs *regs, int sig, int code)
+{
+	if (!IS_ENABLED(CONFIG_ARCH_HAS_COPY_MC))
+		return false;
+
+	if (user_mode(regs))
+		return false;
+
+	if (apei_claim_sea(regs) < 0)
+		return false;
+
+	if (!fixup_exception_mc(regs))
+		return false;
+
+	if (current->flags & PF_KTHREAD)
+		return true;
+
+	set_thread_esr(0, esr);
+	arm64_force_sig_fault(sig, code, addr,
+		"Uncorrected memory error on access to user memory\n");
+
+	return true;
+}
+
 static int do_sea(unsigned long far, unsigned long esr, struct pt_regs *regs)
 {
 	const struct fault_info *inf;
@@ -755,7 +780,9 @@ static int do_sea(unsigned long far, unsigned long esr, struct pt_regs *regs)
 		 */
 		siaddr = untagged_addr(far);
 	}
-	arm64_notify_die(inf->name, regs, inf->sig, inf->code, siaddr, esr);
+
+	if (!arm64_do_kernel_sea(siaddr, esr, regs, inf->sig, inf->code))
+		arm64_notify_die(inf->name, regs, inf->sig, inf->code, siaddr, esr);
 
 	return 0;
 }
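[Editor's illustration, not part of the series: the user-visible effect is
that a task whose kernel-side access consumed an uncorrected memory error is
signalled (typically SIGBUS for an external abort, per the fault_info table)
rather than the whole machine panicking. A hypothetical userspace program
that observes this:]

#include <signal.h>
#include <stdlib.h>
#include <unistd.h>

/* Minimal SIGBUS observer: exit cleanly when the kernel reports an
 * uncorrected memory error against this process.
 */
static void on_sigbus(int sig, siginfo_t *si, void *ctx)
{
	static const char msg[] = "uncorrected memory error, exiting\n";

	(void)sig; (void)si; (void)ctx;
	write(STDERR_FILENO, msg, sizeof(msg) - 1);	/* async-signal-safe */
	_exit(EXIT_FAILURE);
}

int main(void)
{
	struct sigaction sa = { 0 };

	sa.sa_sigaction = on_sigbus;
	sa.sa_flags = SA_SIGINFO;
	sigaction(SIGBUS, &sa, NULL);
	/* ... run the workload; only this process dies on a consumed error ... */
	return 0;
}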
From patchwork Mon Jan 29 13:46:49 2024
X-Patchwork-Submitter: Tong Tiangen
X-Patchwork-Id: 13535676
From: Tong Tiangen
To: Catalin Marinas, Will Deacon, Mark Rutland, James Morse, Robin Murphy,
    Andrey Ryabinin, Alexander Potapenko, Alexander Viro, Andrey Konovalov,
    Dmitry Vyukov, Vincenzo Frascino, Andrew Morton, Michael Ellerman,
    Nicholas Piggin, Christophe Leroy, Aneesh Kumar K.V, "Naveen N. Rao",
    Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
    "H. Peter Anvin"
Cc: Tong Tiangen, Guohanjun
Subject: [PATCH v10 3/6] arm64: add uaccess to machine check safe
Date: Mon, 29 Jan 2024 21:46:49 +0800
Message-ID: <20240129134652.4004931-4-tongtiangen@huawei.com>
In-Reply-To: <20240129134652.4004931-1-tongtiangen@huawei.com>
References: <20240129134652.4004931-1-tongtiangen@huawei.com>

If a user process's memory access fails due to a hardware memory error,
only the relevant process is affected, so it is more reasonable to kill
the user process and isolate the corrupt page than to panic the kernel.
Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
---
 arch/arm64/lib/copy_from_user.S | 10 +++++-----
 arch/arm64/lib/copy_to_user.S   | 10 +++++-----
 arch/arm64/mm/extable.c         |  8 ++++----
 3 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/arch/arm64/lib/copy_from_user.S b/arch/arm64/lib/copy_from_user.S
index 34e317907524..1bf676e9201d 100644
--- a/arch/arm64/lib/copy_from_user.S
+++ b/arch/arm64/lib/copy_from_user.S
@@ -25,7 +25,7 @@
 	.endm
 
 	.macro strb1 reg, ptr, val
-	strb \reg, [\ptr], \val
+	USER(9998f, strb \reg, [\ptr], \val)
 	.endm
 
 	.macro ldrh1 reg, ptr, val
@@ -33,7 +33,7 @@
 	.endm
 
 	.macro strh1 reg, ptr, val
-	strh \reg, [\ptr], \val
+	USER(9998f, strh \reg, [\ptr], \val)
 	.endm
 
 	.macro ldr1 reg, ptr, val
@@ -41,7 +41,7 @@
 	.endm
 
 	.macro str1 reg, ptr, val
-	str \reg, [\ptr], \val
+	USER(9998f, str \reg, [\ptr], \val)
 	.endm
 
 	.macro ldp1 reg1, reg2, ptr, val
@@ -49,7 +49,7 @@
 	.endm
 
 	.macro stp1 reg1, reg2, ptr, val
-	stp \reg1, \reg2, [\ptr], \val
+	USER(9998f, stp \reg1, \reg2, [\ptr], \val)
 	.endm
 
 end	.req	x5
@@ -66,7 +66,7 @@ SYM_FUNC_START(__arch_copy_from_user)
 	b.ne	9998f
 	// Before being absolutely sure we couldn't copy anything, try harder
 	USER(9998f, ldtrb tmp1w, [srcin])
-	strb	tmp1w, [dst], #1
+USER(9998f, strb tmp1w, [dst], #1)
 9998:	sub	x0, end, dst			// bytes not copied
 	ret
 SYM_FUNC_END(__arch_copy_from_user)
diff --git a/arch/arm64/lib/copy_to_user.S b/arch/arm64/lib/copy_to_user.S
index 802231772608..cc031bd87455 100644
--- a/arch/arm64/lib/copy_to_user.S
+++ b/arch/arm64/lib/copy_to_user.S
@@ -20,7 +20,7 @@
  *	x0 - bytes not copied
  */
 	.macro ldrb1 reg, ptr, val
-	ldrb \reg, [\ptr], \val
+	USER(9998f, ldrb \reg, [\ptr], \val)
 	.endm
 
 	.macro strb1 reg, ptr, val
@@ -28,7 +28,7 @@
 	.endm
 
 	.macro ldrh1 reg, ptr, val
-	ldrh \reg, [\ptr], \val
+	USER(9998f, ldrh \reg, [\ptr], \val)
 	.endm
 
 	.macro strh1 reg, ptr, val
@@ -36,7 +36,7 @@
 	.endm
 
 	.macro ldr1 reg, ptr, val
-	ldr \reg, [\ptr], \val
+	USER(9998f, ldr \reg, [\ptr], \val)
 	.endm
 
 	.macro str1 reg, ptr, val
@@ -44,7 +44,7 @@
 	.endm
 
 	.macro ldp1 reg1, reg2, ptr, val
-	ldp \reg1, \reg2, [\ptr], \val
+	USER(9998f, ldp \reg1, \reg2, [\ptr], \val)
 	.endm
 
 	.macro stp1 reg1, reg2, ptr, val
@@ -64,7 +64,7 @@ SYM_FUNC_START(__arch_copy_to_user)
 9997:	cmp	dst, dstin
 	b.ne	9998f
 	// Before being absolutely sure we couldn't copy anything, try harder
-	ldrb	tmp1w, [srcin]
+USER(9998f, ldrb tmp1w, [srcin])
 	USER(9998f, sttrb tmp1w, [dst])
 	add	dst, dst, #1
 9998:	sub	x0, end, dst			// bytes not copied
diff --git a/arch/arm64/mm/extable.c b/arch/arm64/mm/extable.c
index 478e639f8680..28ec35e3d210 100644
--- a/arch/arm64/mm/extable.c
+++ b/arch/arm64/mm/extable.c
@@ -85,10 +85,10 @@ bool fixup_exception_mc(struct pt_regs *regs)
 	if (!ex)
 		return false;
 
-	/*
-	 * This is not complete. More machine check safe extable types can
-	 * be processed here.
-	 */
+	switch (ex->type) {
+	case EX_TYPE_UACCESS_ERR_ZERO:
+		return ex_handler_uaccess_err_zero(ex, regs);
+	}
 
 	return false;
 }
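[Editor's illustration, not part of the series: the kernel-side loads in
__arch_copy_to_user() now carry extable fixups, so a consumed memory error
on, say, a poisoned page-cache page during read() ends the copy early
instead of panicking. A hypothetical helper showing how a caller sees it:]

#include <linux/uaccess.h>

/* With machine-check-safe uaccess, a #MC on the kernel source behaves
 * like a fault on the user side: copy_to_user() returns the number of
 * bytes it could not copy.
 */
static ssize_t demo_read_page(void __user *ubuf, const void *kpage, size_t len)
{
	unsigned long rem = copy_to_user(ubuf, kpage, len);

	if (rem == len)
		return -EFAULT;	/* nothing copied: bad user buffer, or #MC at byte 0 */
	return len - rem;	/* partial copy; caller may retry or report an error */
}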
From patchwork Mon Jan 29 13:46:50 2024
X-Patchwork-Submitter: Tong Tiangen
X-Patchwork-Id: 13535675
From: Tong Tiangen
To: Catalin Marinas, Will Deacon, Mark Rutland, James Morse, Robin Murphy,
    Andrey Ryabinin, Alexander Potapenko, Alexander Viro, Andrey Konovalov,
    Dmitry Vyukov, Vincenzo Frascino, Andrew Morton, Michael Ellerman,
    Nicholas Piggin, Christophe Leroy, Aneesh Kumar K.V, "Naveen N. Rao",
    Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
    "H. Peter Anvin"
Cc: Tong Tiangen, Guohanjun
Subject: [PATCH v10 4/6] mm/hwpoison: return -EFAULT when copy fail in copy_mc_[user]_highpage()
Date: Mon, 29 Jan 2024 21:46:50 +0800
Message-ID: <20240129134652.4004931-5-tongtiangen@huawei.com>
In-Reply-To: <20240129134652.4004931-1-tongtiangen@huawei.com>
References: <20240129134652.4004931-1-tongtiangen@huawei.com>

If hardware errors are encountered during page copying, returning the
number of bytes not copied is not meaningful: the caller cannot do any
processing on the remaining data anyway. Returning -EFAULT is more
reasonable; it represents a hardware error encountered during the copy.

Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
---
 include/linux/highmem.h | 8 ++++----
 mm/khugepaged.c         | 4 ++--
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index 451c1dff0e87..c5ca1a1fc4f5 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -335,8 +335,8 @@ static inline void copy_highpage(struct page *to, struct page *from)
 /*
  * If architecture supports machine check exception handling, define the
  * #MC versions of copy_user_highpage and copy_highpage. They copy a memory
- * page with #MC in source page (@from) handled, and return the number
- * of bytes not copied if there was a #MC, otherwise 0 for success.
+ * page with #MC in source page (@from) handled, and return -EFAULT if there
+ * was a #MC, otherwise 0 for success.
  */
 static inline int copy_mc_user_highpage(struct page *to, struct page *from,
 					unsigned long vaddr, struct vm_area_struct *vma)
@@ -352,7 +352,7 @@ static inline int copy_mc_user_highpage(struct page *to, struct page *from,
 	kunmap_local(vto);
 	kunmap_local(vfrom);
 
-	return ret;
+	return ret ? -EFAULT : 0;
 }
 
 static inline int copy_mc_highpage(struct page *to, struct page *from)
@@ -368,7 +368,7 @@ static inline int copy_mc_highpage(struct page *to, struct page *from)
 	kunmap_local(vto);
 	kunmap_local(vfrom);
 
-	return ret;
+	return ret ? -EFAULT : 0;
 }
 #else
 static inline int copy_mc_user_highpage(struct page *to, struct page *from,
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 2b219acb528e..ba6743a54c86 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -797,7 +797,7 @@ static int __collapse_huge_page_copy(pte_t *pte,
 			continue;
 		}
 		src_page = pte_page(pteval);
-		if (copy_mc_user_highpage(page, src_page, _address, vma) > 0) {
+		if (copy_mc_user_highpage(page, src_page, _address, vma)) {
 			result = SCAN_COPY_MC;
 			break;
 		}
@@ -2053,7 +2053,7 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 				clear_highpage(hpage + (index % HPAGE_PMD_NR));
 				index++;
 			}
-			if (copy_mc_highpage(hpage + (page->index % HPAGE_PMD_NR), page) > 0) {
+			if (copy_mc_highpage(hpage + (page->index % HPAGE_PMD_NR), page)) {
 				result = SCAN_COPY_MC;
 				goto rollback;
 			}
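[Editor's illustration, not part of the series: the khugepaged hunks above
are the in-tree callers; the hypothetical wrapper below restates the new
contract, where any non-zero return means a hardware error, so callers test
truthiness instead of "> 0".]

#include <linux/highmem.h>
#include <linux/mm.h>

/* Post-patch calling convention: copy_mc_user_highpage() returns 0 on
 * success or -EFAULT on a #MC, never a byte count.
 */
static int demo_copy_page_mc(struct page *dst, struct page *src,
			     unsigned long addr, struct vm_area_struct *vma)
{
	int err = copy_mc_user_highpage(dst, src, addr, vma);

	if (err)	/* -EFAULT: source page consumed a hardware error */
		return err;
	return 0;
}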
From patchwork Mon Jan 29 13:46:51 2024
X-Patchwork-Submitter: Tong Tiangen
X-Patchwork-Id: 13535677
From: Tong Tiangen
To: Catalin Marinas, Will Deacon, Mark Rutland, James Morse, Robin Murphy,
    Andrey Ryabinin, Alexander Potapenko, Alexander Viro, Andrey Konovalov,
    Dmitry Vyukov, Vincenzo Frascino, Andrew Morton, Michael Ellerman,
    Nicholas Piggin, Christophe Leroy, Aneesh Kumar K.V, "Naveen N. Rao",
    Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
    "H. Peter Anvin"
Cc: Tong Tiangen, Guohanjun
Subject: [PATCH v10 5/6] arm64: support copy_mc_[user]_highpage()
Date: Mon, 29 Jan 2024 21:46:51 +0800
Message-ID: <20240129134652.4004931-6-tongtiangen@huawei.com>
In-Reply-To: <20240129134652.4004931-1-tongtiangen@huawei.com>
References: <20240129134652.4004931-1-tongtiangen@huawei.com>

Currently, many scenarios that can tolerate memory errors when copying
a page are supported in the kernel [1][2][3], all of which are
implemented by copy_mc_[user]_highpage(). arm64 should also support
this mechanism.

Due to MTE, arm64 needs its own architecture implementation of
copy_mc_[user]_highpage(); the macros __HAVE_ARCH_COPY_MC_HIGHPAGE and
__HAVE_ARCH_COPY_MC_USER_HIGHPAGE have been added to control this.

Add a new helper copy_mc_page() which provides a machine check safe
page copy implementation. copy_mc_page() in copy_mc_page.S borrows
largely from copy_page() in copy_page.S; the main difference is that
copy_mc_page() adds an extable entry for every load/store instruction
to support machine check safety.

Add a new extable type EX_TYPE_COPY_MC_PAGE_ERR_ZERO which is used in
copy_mc_page().

[1] a873dfe1032a ("mm, hwpoison: try to recover from copy-on write faults")
[2] 5f2500b93cc9 ("mm/khugepaged: recover from poisoned anonymous memory")
[3] 6b970599e807 ("mm: hwpoison: support recovery from ksm_might_need_to_copy()")

Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
---
 arch/arm64/include/asm/asm-extable.h | 15 ++++++
 arch/arm64/include/asm/assembler.h   |  4 ++
 arch/arm64/include/asm/mte.h         |  5 ++
 arch/arm64/include/asm/page.h        | 10 ++++
 arch/arm64/lib/Makefile              |  2 +
 arch/arm64/lib/copy_mc_page.S        | 78 ++++++++++++++++++++++++++++
 arch/arm64/lib/mte.S                 | 27 ++++++++++
 arch/arm64/mm/copypage.c             | 66 ++++++++++++++++++++---
 arch/arm64/mm/extable.c              |  7 +--
 include/linux/highmem.h              |  8 +++
 10 files changed, 213 insertions(+), 9 deletions(-)
 create mode 100644 arch/arm64/lib/copy_mc_page.S

diff --git a/arch/arm64/include/asm/asm-extable.h b/arch/arm64/include/asm/asm-extable.h
index 980d1dd8e1a3..819044fefbe7 100644
--- a/arch/arm64/include/asm/asm-extable.h
+++ b/arch/arm64/include/asm/asm-extable.h
@@ -10,6 +10,7 @@
 #define EX_TYPE_UACCESS_ERR_ZERO	2
 #define EX_TYPE_KACCESS_ERR_ZERO	3
 #define EX_TYPE_LOAD_UNALIGNED_ZEROPAD	4
+#define EX_TYPE_COPY_MC_PAGE_ERR_ZERO	5
 
 /* Data fields for EX_TYPE_UACCESS_ERR_ZERO */
 #define EX_DATA_REG_ERR_SHIFT	0
@@ -51,6 +52,16 @@
 #define _ASM_EXTABLE_UACCESS(insn, fixup)			\
 	_ASM_EXTABLE_UACCESS_ERR_ZERO(insn, fixup, wzr, wzr)
 
+#define _ASM_EXTABLE_COPY_MC_PAGE_ERR_ZERO(insn, fixup, err, zero)	\
+	__ASM_EXTABLE_RAW(insn, fixup,					\
+			  EX_TYPE_COPY_MC_PAGE_ERR_ZERO,		\
+			  (						\
+			    EX_DATA_REG(ERR, err) |			\
+			    EX_DATA_REG(ZERO, zero)			\
+			  ))
+
+#define _ASM_EXTABLE_COPY_MC_PAGE(insn, fixup)			\
+	_ASM_EXTABLE_COPY_MC_PAGE_ERR_ZERO(insn, fixup, wzr, wzr)
 /*
  * Create an exception table entry for uaccess `insn`, which will branch to `fixup`
  * when an unhandled fault is taken.
@@ -59,6 +70,10 @@
 	_ASM_EXTABLE_UACCESS(\insn, \fixup)
 	.endm
 
+	.macro _asm_extable_copy_mc_page, insn, fixup
+	_ASM_EXTABLE_COPY_MC_PAGE(\insn, \fixup)
+	.endm
+
 /*
  * Create an exception table entry for `insn` if `fixup` is provided. Otherwise
  * do nothing.
diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index 513787e43329..e1d8ce155878 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -154,6 +154,10 @@ lr	.req	x30		// link register
 #define CPU_LE(code...) code
 #endif
 
+#define CPY_MC(l, x...)				\
+9999:	x;					\
+	_asm_extable_copy_mc_page 9999b, l
+
 /*
  * Define a macro that constructs a 64-bit value by concatenating two
  * 32-bit registers. Note that on big endian systems the order of the
diff --git a/arch/arm64/include/asm/mte.h b/arch/arm64/include/asm/mte.h
index 91fbd5c8a391..9cdded082dd4 100644
--- a/arch/arm64/include/asm/mte.h
+++ b/arch/arm64/include/asm/mte.h
@@ -92,6 +92,7 @@ static inline bool try_page_mte_tagging(struct page *page)
 void mte_zero_clear_page_tags(void *addr);
 void mte_sync_tags(pte_t pte, unsigned int nr_pages);
 void mte_copy_page_tags(void *kto, const void *kfrom);
+int mte_copy_mc_page_tags(void *kto, const void *kfrom);
 void mte_thread_init_user(void);
 void mte_thread_switch(struct task_struct *next);
 void mte_cpu_setup(void);
@@ -128,6 +129,10 @@ static inline void mte_sync_tags(pte_t pte, unsigned int nr_pages)
 static inline void mte_copy_page_tags(void *kto, const void *kfrom)
 {
 }
+static inline int mte_copy_mc_page_tags(void *kto, const void *kfrom)
+{
+	return 0;
+}
 static inline void mte_thread_init_user(void)
 {
 }
diff --git a/arch/arm64/include/asm/page.h b/arch/arm64/include/asm/page.h
index 2312e6ee595f..304cc86b8a10 100644
--- a/arch/arm64/include/asm/page.h
+++ b/arch/arm64/include/asm/page.h
@@ -29,6 +29,16 @@ void copy_user_highpage(struct page *to, struct page *from,
 void copy_highpage(struct page *to, struct page *from);
 #define __HAVE_ARCH_COPY_HIGHPAGE
 
+#ifdef CONFIG_ARCH_HAS_COPY_MC
+int copy_mc_page(void *to, const void *from);
+int copy_mc_highpage(struct page *to, struct page *from);
+#define __HAVE_ARCH_COPY_MC_HIGHPAGE
+
+int copy_mc_user_highpage(struct page *to, struct page *from,
+		unsigned long vaddr, struct vm_area_struct *vma);
+#define __HAVE_ARCH_COPY_MC_USER_HIGHPAGE
+#endif
+
 struct folio *vma_alloc_zeroed_movable_folio(struct vm_area_struct *vma,
 						unsigned long vaddr);
 #define vma_alloc_zeroed_movable_folio vma_alloc_zeroed_movable_folio
diff --git a/arch/arm64/lib/Makefile b/arch/arm64/lib/Makefile
index 29490be2546b..a2fd865b816d 100644
--- a/arch/arm64/lib/Makefile
+++ b/arch/arm64/lib/Makefile
@@ -15,6 +15,8 @@ endif
 
 lib-$(CONFIG_ARCH_HAS_UACCESS_FLUSHCACHE) += uaccess_flushcache.o
 
+lib-$(CONFIG_ARCH_HAS_COPY_MC) += copy_mc_page.o
+
 obj-$(CONFIG_CRC32) += crc32.o
 
 obj-$(CONFIG_FUNCTION_ERROR_INJECTION) += error-inject.o
diff --git a/arch/arm64/lib/copy_mc_page.S b/arch/arm64/lib/copy_mc_page.S
new file mode 100644
index 000000000000..524534d26d86
--- /dev/null
+++ b/arch/arm64/lib/copy_mc_page.S
@@ -0,0 +1,78 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2012 ARM Ltd.
+ */
+
+#include <linux/linkage.h>
+#include <linux/const.h>
+#include <asm/assembler.h>
+#include <asm/page.h>
+#include <asm/cpufeature.h>
+#include <asm/alternative.h>
+#include <asm/asm-extable.h>
+
+/*
+ * Copy a page from src to dest (both are page aligned) with machine check
+ *
+ * Parameters:
+ *	x0 - dest
+ *	x1 - src
+ * Returns:
+ *	x0 - Return 0 if copy success, or -EFAULT if anything goes wrong
+ *	     while copying.
+ */
+SYM_FUNC_START(__pi_copy_mc_page)
+CPY_MC(9998f, ldp x2, x3, [x1])
+CPY_MC(9998f, ldp x4, x5, [x1, #16])
+CPY_MC(9998f, ldp x6, x7, [x1, #32])
+CPY_MC(9998f, ldp x8, x9, [x1, #48])
+CPY_MC(9998f, ldp x10, x11, [x1, #64])
+CPY_MC(9998f, ldp x12, x13, [x1, #80])
+CPY_MC(9998f, ldp x14, x15, [x1, #96])
+CPY_MC(9998f, ldp x16, x17, [x1, #112])
+
+	add	x0, x0, #256
+	add	x1, x1, #128
+1:
+	tst	x0, #(PAGE_SIZE - 1)
+
+CPY_MC(9998f, stnp x2, x3, [x0, #-256])
+CPY_MC(9998f, ldp x2, x3, [x1])
+CPY_MC(9998f, stnp x4, x5, [x0, #16 - 256])
+CPY_MC(9998f, ldp x4, x5, [x1, #16])
+CPY_MC(9998f, stnp x6, x7, [x0, #32 - 256])
+CPY_MC(9998f, ldp x6, x7, [x1, #32])
+CPY_MC(9998f, stnp x8, x9, [x0, #48 - 256])
+CPY_MC(9998f, ldp x8, x9, [x1, #48])
+CPY_MC(9998f, stnp x10, x11, [x0, #64 - 256])
+CPY_MC(9998f, ldp x10, x11, [x1, #64])
+CPY_MC(9998f, stnp x12, x13, [x0, #80 - 256])
+CPY_MC(9998f, ldp x12, x13, [x1, #80])
+CPY_MC(9998f, stnp x14, x15, [x0, #96 - 256])
+CPY_MC(9998f, ldp x14, x15, [x1, #96])
+CPY_MC(9998f, stnp x16, x17, [x0, #112 - 256])
+CPY_MC(9998f, ldp x16, x17, [x1, #112])
+
+	add	x0, x0, #128
+	add	x1, x1, #128
+
+	b.ne	1b
+
+CPY_MC(9998f, stnp x2, x3, [x0, #-256])
+CPY_MC(9998f, stnp x4, x5, [x0, #16 - 256])
+CPY_MC(9998f, stnp x6, x7, [x0, #32 - 256])
+CPY_MC(9998f, stnp x8, x9, [x0, #48 - 256])
+CPY_MC(9998f, stnp x10, x11, [x0, #64 - 256])
+CPY_MC(9998f, stnp x12, x13, [x0, #80 - 256])
+CPY_MC(9998f, stnp x14, x15, [x0, #96 - 256])
+CPY_MC(9998f, stnp x16, x17, [x0, #112 - 256])
+
+	mov	x0, #0
+	ret
+
+9998:	mov	x0, #-EFAULT
+	ret
+
+SYM_FUNC_END(__pi_copy_mc_page)
+SYM_FUNC_ALIAS(copy_mc_page, __pi_copy_mc_page)
+EXPORT_SYMBOL(copy_mc_page)
diff --git a/arch/arm64/lib/mte.S b/arch/arm64/lib/mte.S
index 5018ac03b6bf..2b748e83f6cf 100644
--- a/arch/arm64/lib/mte.S
+++ b/arch/arm64/lib/mte.S
@@ -80,6 +80,33 @@ SYM_FUNC_START(mte_copy_page_tags)
 	ret
 SYM_FUNC_END(mte_copy_page_tags)
 
+/*
+ * Copy the tags from the source page to the destination one with machine check safe
+ * x0 - address of the destination page
+ * x1 - address of the source page
+ * Returns:
+ *	x0 - Return 0 if copy success, or
+ *	     -EFAULT if anything goes wrong while copying.
+ */
+SYM_FUNC_START(mte_copy_mc_page_tags)
+	mov	x2, x0
+	mov	x3, x1
+	multitag_transfer_size x5, x6
+1:
+CPY_MC(2f, ldgm x4, [x3])
+CPY_MC(2f, stgm x4, [x2])
+	add	x2, x2, x5
+	add	x3, x3, x5
+	tst	x2, #(PAGE_SIZE - 1)
+	b.ne	1b
+
+	mov	x0, #0
+	ret
+
+2:	mov	x0, #-EFAULT
+	ret
+SYM_FUNC_END(mte_copy_mc_page_tags)
+
 /*
  * Read tags from a user buffer (one tag per byte) and set the corresponding
  * tags at the given kernel address. Used by PTRACE_POKEMTETAGS.
diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c
index a7bb20055ce0..9765e40cde6c 100644
--- a/arch/arm64/mm/copypage.c
+++ b/arch/arm64/mm/copypage.c
@@ -14,6 +14,25 @@
 #include <asm/cpufeature.h>
 #include <asm/mte.h>
 
+static int do_mte(struct page *to, struct page *from, void *kto, void *kfrom, bool mc)
+{
+	int ret = 0;
+
+	if (system_supports_mte() && page_mte_tagged(from)) {
+		/* It's a new page, shouldn't have been tagged yet */
+		WARN_ON_ONCE(!try_page_mte_tagging(to));
+		if (mc)
+			ret = mte_copy_mc_page_tags(kto, kfrom);
+		else
+			mte_copy_page_tags(kto, kfrom);
+
+		if (!ret)
+			set_page_mte_tagged(to);
+	}
+
+	return ret;
+}
+
 void copy_highpage(struct page *to, struct page *from)
 {
 	void *kto = page_address(to);
@@ -24,12 +43,7 @@ void copy_highpage(struct page *to, struct page *from)
 	if (kasan_hw_tags_enabled())
 		page_kasan_tag_reset(to);
 
-	if (system_supports_mte() && page_mte_tagged(from)) {
-		/* It's a new page, shouldn't have been tagged yet */
-		WARN_ON_ONCE(!try_page_mte_tagging(to));
-		mte_copy_page_tags(kto, kfrom);
-		set_page_mte_tagged(to);
-	}
+	do_mte(to, from, kto, kfrom, false);
 }
 EXPORT_SYMBOL(copy_highpage);
 
@@ -40,3 +54,43 @@ void copy_user_highpage(struct page *to, struct page *from,
 	flush_dcache_page(to);
 }
 EXPORT_SYMBOL_GPL(copy_user_highpage);
+
+#ifdef CONFIG_ARCH_HAS_COPY_MC
+/*
+ * Return -EFAULT if anything goes wrong while copying page or mte.
+ */
+int copy_mc_highpage(struct page *to, struct page *from)
+{
+	void *kto = page_address(to);
+	void *kfrom = page_address(from);
+	int ret;
+
+	ret = copy_mc_page(kto, kfrom);
+	if (ret)
+		return -EFAULT;
+
+	if (kasan_hw_tags_enabled())
+		page_kasan_tag_reset(to);
+
+	ret = do_mte(to, from, kto, kfrom, true);
+	if (ret)
+		return -EFAULT;
+
+	return 0;
+}
+EXPORT_SYMBOL(copy_mc_highpage);
+
+int copy_mc_user_highpage(struct page *to, struct page *from,
+		unsigned long vaddr, struct vm_area_struct *vma)
+{
+	int ret;
+
+	ret = copy_mc_highpage(to, from);
+
+	if (!ret)
+		flush_dcache_page(to);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(copy_mc_user_highpage);
+#endif
diff --git a/arch/arm64/mm/extable.c b/arch/arm64/mm/extable.c
index 28ec35e3d210..bdc81518d207 100644
--- a/arch/arm64/mm/extable.c
+++ b/arch/arm64/mm/extable.c
@@ -16,7 +16,7 @@ get_ex_fixup(const struct exception_table_entry *ex)
 	return ((unsigned long)&ex->fixup + ex->fixup);
 }
 
-static bool ex_handler_uaccess_err_zero(const struct exception_table_entry *ex,
+static bool ex_handler_fixup_err_zero(const struct exception_table_entry *ex,
 				     struct pt_regs *regs)
 {
 	int reg_err = FIELD_GET(EX_DATA_REG_ERR, ex->data);
@@ -69,7 +69,7 @@ bool fixup_exception(struct pt_regs *regs)
 		return ex_handler_bpf(ex, regs);
 	case EX_TYPE_UACCESS_ERR_ZERO:
 	case EX_TYPE_KACCESS_ERR_ZERO:
-		return ex_handler_uaccess_err_zero(ex, regs);
+		return ex_handler_fixup_err_zero(ex, regs);
 	case EX_TYPE_LOAD_UNALIGNED_ZEROPAD:
 		return ex_handler_load_unaligned_zeropad(ex, regs);
 	}
@@ -87,7 +87,8 @@ bool fixup_exception_mc(struct pt_regs *regs)
 
 	switch (ex->type) {
 	case EX_TYPE_UACCESS_ERR_ZERO:
-		return ex_handler_uaccess_err_zero(ex, regs);
+	case EX_TYPE_COPY_MC_PAGE_ERR_ZERO:
+		return ex_handler_fixup_err_zero(ex, regs);
 	}
 
 	return false;
diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index c5ca1a1fc4f5..a42470ca42f2 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -332,6 +332,7 @@ static inline void copy_highpage(struct page *to, struct page *from)
 #endif
 
 #ifdef copy_mc_to_kernel
+#ifndef __HAVE_ARCH_COPY_MC_USER_HIGHPAGE
 /*
  * If architecture supports machine check exception handling, define the
  * #MC versions of copy_user_highpage and copy_highpage. They copy a memory
@@ -354,7 +355,9 @@ static inline int copy_mc_user_highpage(struct page *to, struct page *from,
 
 	return ret ? -EFAULT : 0;
 }
+#endif
 
+#ifndef __HAVE_ARCH_COPY_MC_HIGHPAGE
 static inline int copy_mc_highpage(struct page *to, struct page *from)
 {
 	unsigned long ret;
@@ -370,20 +373,25 @@ static inline int copy_mc_highpage(struct page *to, struct page *from)
 
 	return ret ? -EFAULT : 0;
 }
+#endif
 
 #else
+#ifndef __HAVE_ARCH_COPY_MC_USER_HIGHPAGE
 static inline int copy_mc_user_highpage(struct page *to, struct page *from,
 		unsigned long vaddr, struct vm_area_struct *vma)
 {
 	copy_user_highpage(to, from, vaddr, vma);
 	return 0;
 }
+#endif
 
+#ifndef __HAVE_ARCH_COPY_MC_HIGHPAGE
 static inline int copy_mc_highpage(struct page *to, struct page *from)
 {
 	copy_highpage(to, from);
 	return 0;
 }
 #endif
+#endif
 
 static inline void memcpy_page(struct page *dst_page, size_t dst_off,
 			       struct page *src_page, size_t src_off,
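[Editor's illustration, not part of the series: on arm64 the override copies
page data and then MTE tags, and a machine check in either step surfaces as
-EFAULT. The helper below is hypothetical, in the style of the COW/khugepaged
users; the allocation policy is illustrative only.]

#include <linux/gfp.h>
#include <linux/highmem.h>
#include <linux/mm.h>

/* Recovery-aware page duplication: if the source page is poisoned, the
 * copy reports -EFAULT and the caller can discard the destination and
 * isolate the source instead of panicking.
 */
static struct page *demo_dup_user_page(struct page *src, unsigned long vaddr,
				       struct vm_area_struct *vma)
{
	struct page *dst = alloc_page(GFP_HIGHUSER_MOVABLE);

	if (!dst)
		return NULL;

	if (copy_mc_user_highpage(dst, src, vaddr, vma)) {
		__free_page(dst);	/* #MC while copying data or tags */
		return NULL;
	}

	return dst;
}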
content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=CCAT8rx4o8lYXQE9KbI4VDb/mpKWJwivRgZ4wGraeu8=; b=vmaGqmllZIOX7yCisrQwbbvonhWxNlK5li0F1riNonLhwl6oOGjnaFU7G4uxuP9ikNg4S3 5GTl8DTz7asI2fHLfORrDCWZYcwrCmiZZxZTUc3/W26iK9Ufzf/LobFSEUhTkvvZ2wQuVq JR6ju8zvClHn/Ru58jyhyZj0LIY6tog= Received: from mail.maildlp.com (unknown [172.19.162.112]) by szxga04-in.huawei.com (SkyGuard) with ESMTP id 4TNqMP4rlkz1xmlr; Mon, 29 Jan 2024 21:46:09 +0800 (CST) Received: from kwepemm600017.china.huawei.com (unknown [7.193.23.234]) by mail.maildlp.com (Postfix) with ESMTPS id 957621404DB; Mon, 29 Jan 2024 21:47:07 +0800 (CST) Received: from localhost.localdomain (10.175.112.125) by kwepemm600017.china.huawei.com (7.193.23.234) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.35; Mon, 29 Jan 2024 21:47:05 +0800 From: Tong Tiangen To: Catalin Marinas , Will Deacon , Mark Rutland , James Morse , Robin Murphy , Andrey Ryabinin , Alexander Potapenko , Alexander Viro , Andrey Konovalov , Dmitry Vyukov , Vincenzo Frascino , Andrew Morton , Michael Ellerman , Nicholas Piggin , Christophe Leroy , Aneesh Kumar K.V , "Naveen N. Rao" , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Dave Hansen , , "H. Peter Anvin" CC: , , , , , Tong Tiangen , , Guohanjun Subject: [PATCH v10 6/6] arm64: introduce copy_mc_to_kernel() implementation Date: Mon, 29 Jan 2024 21:46:52 +0800 Message-ID: <20240129134652.4004931-7-tongtiangen@huawei.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20240129134652.4004931-1-tongtiangen@huawei.com> References: <20240129134652.4004931-1-tongtiangen@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.175.112.125] X-ClientProxiedBy: dggems701-chm.china.huawei.com (10.3.19.178) To kwepemm600017.china.huawei.com (7.193.23.234) X-Rspam-User: X-Rspamd-Server: rspam06 X-Rspamd-Queue-Id: 5F1EB140017 X-Stat-Signature: sn1o86fbyaisrxn117u6metfy57gca8y X-HE-Tag: 1706536030-737451 X-HE-Meta: U2FsdGVkX19nEGzU3ymOQaKq3mizi/Ljer0aCBTuULwoabvVKME9J6Y+WptIPXWJqMk0JP+znNN3kIUxbGXwE2C0odoV60Op8ygHYn0gIIWds8m1s73XEAIxxsq4ZESwr9vGKDfJIxg/S+QeEPGZPuRjhx/UD8YeNr1raTpxAzVshQecmDT+CX5hcaSNlyUwy2RKyRs348oU3sVNbKYxFZLFdkbTAdQBBVkmpwKDFCNg+m2CaFhABilAb2quPvb/yZtwk2M65yiEIGUjniQ5OLum8IIVA66AzSdXkiFVrf6Q0gYKglBNCaZrkANaWM4CU2+POdWnB934fT7sH6adxpK6Cs3mVAlIQlEqn0ctBBjBkM+PbidaodWfXQIt4pB1QTLVzlP9px30Qlyog8Qb435wstxHc50Lu+AeMoSyxT0OsziQH+LV8M7lcJ1XAW00KrEqpFI4XugvD5n/fqcC2v6ecZWBIEI7I/URLHLeLY5evjWymjlN2545rU8V552+C2eMo8PGcsmwW/LpVLEefxUJCt9BJWaiXFIjdzRONTRImUjq+Ucer7wl39P83BUiRkncdcVWv7YdUMNX3xhcvVRaxAoHh/hPdT436HJntozbGzJRwgYMarp3GuIHWN13Aso6VLk/jtwge89HlC6kFnMKuvPz0/o3abdgzXe4W72dRQ4jgV+pVAfkFkGK4I4y9U8AQ2UvL/faToZt7GJNhoiRcIVEWEKHlnAhKqs9Q6oqVpqAW/uOxQeqIyb/qr19q7kSFJlW4g9fx1e5PeXdzGbNDanIyWDLIcmmkBZV8dPVo1PtxwvQbbxmyaR1fGiXVYhF0OFZy9e7kQThwSX9Qkn+szE2yTRAtA8GeLHsP9Ufo0IbYpElj2u+aLH6lf0Fp+EV/nWscbl+6h0vZSGuAe0a0joCpT/w9X6HxQz0c6E7e3zPBvywTsb14x/U2D5ws4yFtNBzlRPiW79YsLK qLT0opTl aMsV0iCeMY1MGfGClDEKIX4SVo/Cdq4QsQ+I4y+S4NY3VA0Emhk76CVNVMXt0aN1bY4IQ5oY3SmSKM5yFyZv149clPYRoO8RvvIG+hQBUmOFUpVpdp5drNL+IdcGkOlS3kiw3RNKnTVFTNu6KJIyq9CLPy0umtpuyoGIAf+tWmq4H9D6eJF90SjGdVjTGcxiYchJ8yhrkhyiHeapzUG1CRkMHqJ6i3xM5rGgKXjh1+6If0z4GjeI4N56YLMqWhpchqSg/v/ikR7nCljRydnVtJFeUMeEkGaiSIdklMqh3K8vUgvo= X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: The copy_mc_to_kernel() helper is memory copy 
---
 arch/arm64/include/asm/string.h  |   5 +
 arch/arm64/include/asm/uaccess.h |  21 +++
 arch/arm64/lib/Makefile          |   2 +-
 arch/arm64/lib/memcpy_mc.S       | 257 +++++++++++++++++++++++++++++++
 mm/kasan/shadow.c                |  12 ++
 5 files changed, 296 insertions(+), 1 deletion(-)
 create mode 100644 arch/arm64/lib/memcpy_mc.S

diff --git a/arch/arm64/include/asm/string.h b/arch/arm64/include/asm/string.h
index 3a3264ff47b9..995b63c26e99 100644
--- a/arch/arm64/include/asm/string.h
+++ b/arch/arm64/include/asm/string.h
@@ -35,6 +35,10 @@ extern void *memchr(const void *, int, __kernel_size_t);
 extern void *memcpy(void *, const void *, __kernel_size_t);
 extern void *__memcpy(void *, const void *, __kernel_size_t);
 
+#define __HAVE_ARCH_MEMCPY_MC
+extern int memcpy_mcs(void *, const void *, __kernel_size_t);
+extern int __memcpy_mcs(void *, const void *, __kernel_size_t);
+
 #define __HAVE_ARCH_MEMMOVE
 extern void *memmove(void *, const void *, __kernel_size_t);
 extern void *__memmove(void *, const void *, __kernel_size_t);
@@ -57,6 +61,7 @@ void memcpy_flushcache(void *dst, const void *src, size_t cnt);
  */
 
 #define memcpy(dst, src, len) __memcpy(dst, src, len)
+#define memcpy_mcs(dst, src, len) __memcpy_mcs(dst, src, len)
 #define memmove(dst, src, len) __memmove(dst, src, len)
 #define memset(s, c, n) __memset(s, c, n)

diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index 14be5000c5a0..61e28ef2112a 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -425,4 +425,25 @@ static inline size_t probe_subpage_writeable(const char __user *uaddr,
 
 #endif /* CONFIG_ARCH_HAS_SUBPAGE_FAULTS */
 
+#ifdef CONFIG_ARCH_HAS_COPY_MC
+/**
+ * copy_mc_to_kernel - memory copy that handles source exceptions
+ *
+ * @to: destination address
+ * @from: source address
+ * @size: number of bytes to copy
+ *
+ * Return 0 for success, or @size if there was an exception.
+ */
+static inline unsigned long __must_check
+copy_mc_to_kernel(void *to, const void *from, unsigned long size)
+{
+	int ret;
+
+	ret = memcpy_mcs(to, from, size);
+	return (ret == -EFAULT) ? size : 0;
+}
+#define copy_mc_to_kernel copy_mc_to_kernel
+#endif
+
 #endif /* __ASM_UACCESS_H */
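
[Editorial note: because memcpy_mcs() reports only 0 or -EFAULT, the wrapper above is all-or-nothing: on a fault it returns the full @size, claiming no progress. A short sketch of what callers can and cannot learn from it; illustrative code, not from the patch:]

static bool mc_copy_completed(void *dst, const void *src, size_t len)
{
	unsigned long left = copy_mc_to_kernel(dst, src, len);

	/* left is always 0 or len here; partial progress is never reported. */
	return left == 0;
}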
diff --git a/arch/arm64/lib/Makefile b/arch/arm64/lib/Makefile
index a2fd865b816d..899d6ae9698c 100644
--- a/arch/arm64/lib/Makefile
+++ b/arch/arm64/lib/Makefile
@@ -3,7 +3,7 @@ lib-y		:= clear_user.o delay.o copy_from_user.o	\
 		   copy_to_user.o copy_page.o			\
 		   clear_page.o csum.o insn.o memchr.o memcpy.o	\
 		   memset.o memcmp.o strcmp.o strncmp.o strlen.o	\
-		   strnlen.o strchr.o strrchr.o tishift.o
+		   strnlen.o strchr.o strrchr.o tishift.o memcpy_mc.o
 
 ifeq ($(CONFIG_KERNEL_MODE_NEON), y)
 obj-$(CONFIG_XOR_BLOCKS) += xor-neon.o
diff --git a/arch/arm64/lib/memcpy_mc.S b/arch/arm64/lib/memcpy_mc.S
new file mode 100644
index 000000000000..7076b500d154
--- /dev/null
+++ b/arch/arm64/lib/memcpy_mc.S
@@ -0,0 +1,257 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (c) 2012-2021, Arm Limited.
+ *
+ * Adapted from the original at:
+ * https://github.com/ARM-software/optimized-routines/blob/afd6244a1f8d9229/string/aarch64/memcpy.S
+ */
+
+#include <linux/linkage.h>
+#include <asm/assembler.h>
+
+/* Assumptions:
+ *
+ * ARMv8-a, AArch64, unaligned accesses.
+ *
+ */
+
+#define L(label) .L ## label
+
+#define dstin	x0
+#define src	x1
+#define count	x2
+#define dst	x3
+#define srcend	x4
+#define dstend	x5
+#define A_l	x6
+#define A_lw	w6
+#define A_h	x7
+#define B_l	x8
+#define B_lw	w8
+#define B_h	x9
+#define C_l	x10
+#define C_lw	w10
+#define C_h	x11
+#define D_l	x12
+#define D_h	x13
+#define E_l	x14
+#define E_h	x15
+#define F_l	x16
+#define F_h	x17
+#define G_l	count
+#define G_h	dst
+#define H_l	src
+#define H_h	srcend
+#define tmp1	x14
+
+/* This implementation handles overlaps and supports both memcpy and memmove
+   from a single entry point. It uses unaligned accesses and branchless
+   sequences to keep the code small, simple and improve performance.
+
+   Copies are split into 3 main cases: small copies of up to 32 bytes, medium
+   copies of up to 128 bytes, and large copies. The overhead of the overlap
+   check is negligible since it is only required for large copies.
+
+   Large copies use a software pipelined loop processing 64 bytes per iteration.
+   The destination pointer is 16-byte aligned to minimize unaligned accesses.
+   The loop tail is handled by always copying 64 bytes from the end.
+*/
+
+SYM_FUNC_START(__pi_memcpy_mcs)
+	add	srcend, src, count
+	add	dstend, dstin, count
+	cmp	count, 128
+	b.hi	L(copy_long)
+	cmp	count, 32
+	b.hi	L(copy32_128)
+
+	/* Small copies: 0..32 bytes.  */
+	cmp	count, 16
+	b.lo	L(copy16)
+	CPY_MC(9998f, ldp A_l, A_h, [src])
+	CPY_MC(9998f, ldp D_l, D_h, [srcend, -16])
+	CPY_MC(9998f, stp A_l, A_h, [dstin])
+	CPY_MC(9998f, stp D_l, D_h, [dstend, -16])
+	mov	x0, #0
+	ret
+
+	/* Copy 8-15 bytes.  */
+L(copy16):
+	tbz	count, 3, L(copy8)
+	CPY_MC(9998f, ldr A_l, [src])
+	CPY_MC(9998f, ldr A_h, [srcend, -8])
+	CPY_MC(9998f, str A_l, [dstin])
+	CPY_MC(9998f, str A_h, [dstend, -8])
+	mov	x0, #0
+	ret
+
+	.p2align 3
+	/* Copy 4-7 bytes.  */
+L(copy8):
+	tbz	count, 2, L(copy4)
+	CPY_MC(9998f, ldr A_lw, [src])
+	CPY_MC(9998f, ldr B_lw, [srcend, -4])
+	CPY_MC(9998f, str A_lw, [dstin])
+	CPY_MC(9998f, str B_lw, [dstend, -4])
+	mov	x0, #0
+	ret
+
+	/* Copy 0..3 bytes using a branchless sequence.  */
+L(copy4):
+	cbz	count, L(copy0)
+	lsr	tmp1, count, 1
+	CPY_MC(9998f, ldrb A_lw, [src])
+	CPY_MC(9998f, ldrb C_lw, [srcend, -1])
+	CPY_MC(9998f, ldrb B_lw, [src, tmp1])
+	CPY_MC(9998f, strb A_lw, [dstin])
+	CPY_MC(9998f, strb B_lw, [dstin, tmp1])
+	CPY_MC(9998f, strb C_lw, [dstend, -1])
+L(copy0):
+	mov	x0, #0
+	ret
+
+	.p2align 4
+	/* Medium copies: 33..128 bytes.  */
+L(copy32_128):
+	CPY_MC(9998f, ldp A_l, A_h, [src])
+	CPY_MC(9998f, ldp B_l, B_h, [src, 16])
+	CPY_MC(9998f, ldp C_l, C_h, [srcend, -32])
+	CPY_MC(9998f, ldp D_l, D_h, [srcend, -16])
+	cmp	count, 64
+	b.hi	L(copy128)
+	CPY_MC(9998f, stp A_l, A_h, [dstin])
+	CPY_MC(9998f, stp B_l, B_h, [dstin, 16])
+	CPY_MC(9998f, stp C_l, C_h, [dstend, -32])
+	CPY_MC(9998f, stp D_l, D_h, [dstend, -16])
+	mov	x0, #0
+	ret
+
+	.p2align 4
+	/* Copy 65..128 bytes.  */
+L(copy128):
+	CPY_MC(9998f, ldp E_l, E_h, [src, 32])
+	CPY_MC(9998f, ldp F_l, F_h, [src, 48])
+	cmp	count, 96
+	b.ls	L(copy96)
+	CPY_MC(9998f, ldp G_l, G_h, [srcend, -64])
+	CPY_MC(9998f, ldp H_l, H_h, [srcend, -48])
+	CPY_MC(9998f, stp G_l, G_h, [dstend, -64])
+	CPY_MC(9998f, stp H_l, H_h, [dstend, -48])
+L(copy96):
+	CPY_MC(9998f, stp A_l, A_h, [dstin])
+	CPY_MC(9998f, stp B_l, B_h, [dstin, 16])
+	CPY_MC(9998f, stp E_l, E_h, [dstin, 32])
+	CPY_MC(9998f, stp F_l, F_h, [dstin, 48])
+	CPY_MC(9998f, stp C_l, C_h, [dstend, -32])
+	CPY_MC(9998f, stp D_l, D_h, [dstend, -16])
+	mov	x0, #0
+	ret
+
+	.p2align 4
+	/* Copy more than 128 bytes.  */
+L(copy_long):
+	/* Use backwards copy if there is an overlap.  */
+	sub	tmp1, dstin, src
+	cbz	tmp1, L(copy0)
+	cmp	tmp1, count
+	b.lo	L(copy_long_backwards)
+
+	/* Copy 16 bytes and then align dst to 16-byte alignment.  */
+
+	CPY_MC(9998f, ldp D_l, D_h, [src])
+	and	tmp1, dstin, 15
+	bic	dst, dstin, 15
+	sub	src, src, tmp1
+	add	count, count, tmp1	/* Count is now 16 too large.  */
+	CPY_MC(9998f, ldp A_l, A_h, [src, 16])
+	CPY_MC(9998f, stp D_l, D_h, [dstin])
+	CPY_MC(9998f, ldp B_l, B_h, [src, 32])
+	CPY_MC(9998f, ldp C_l, C_h, [src, 48])
+	CPY_MC(9998f, ldp D_l, D_h, [src, 64]!)
+	subs	count, count, 128 + 16	/* Test and readjust count.  */
+	b.ls	L(copy64_from_end)
+
+L(loop64):
+	CPY_MC(9998f, stp A_l, A_h, [dst, 16])
+	CPY_MC(9998f, ldp A_l, A_h, [src, 16])
+	CPY_MC(9998f, stp B_l, B_h, [dst, 32])
+	CPY_MC(9998f, ldp B_l, B_h, [src, 32])
+	CPY_MC(9998f, stp C_l, C_h, [dst, 48])
+	CPY_MC(9998f, ldp C_l, C_h, [src, 48])
+	CPY_MC(9998f, stp D_l, D_h, [dst, 64]!)
+	CPY_MC(9998f, ldp D_l, D_h, [src, 64]!)
+	subs	count, count, 64
+	b.hi	L(loop64)
+
+	/* Write the last iteration and copy 64 bytes from the end.  */
+L(copy64_from_end):
+	CPY_MC(9998f, ldp E_l, E_h, [srcend, -64])
+	CPY_MC(9998f, stp A_l, A_h, [dst, 16])
+	CPY_MC(9998f, ldp A_l, A_h, [srcend, -48])
+	CPY_MC(9998f, stp B_l, B_h, [dst, 32])
+	CPY_MC(9998f, ldp B_l, B_h, [srcend, -32])
+	CPY_MC(9998f, stp C_l, C_h, [dst, 48])
+	CPY_MC(9998f, ldp C_l, C_h, [srcend, -16])
+	CPY_MC(9998f, stp D_l, D_h, [dst, 64])
+	CPY_MC(9998f, stp E_l, E_h, [dstend, -64])
+	CPY_MC(9998f, stp A_l, A_h, [dstend, -48])
+	CPY_MC(9998f, stp B_l, B_h, [dstend, -32])
+	CPY_MC(9998f, stp C_l, C_h, [dstend, -16])
+	mov	x0, #0
+	ret
+
+	.p2align 4
+
+	/* Large backwards copy for overlapping copies.
+	   Copy 16 bytes and then align dst to 16-byte alignment.  */
+L(copy_long_backwards):
+	CPY_MC(9998f, ldp D_l, D_h, [srcend, -16])
+	and	tmp1, dstend, 15
+	sub	srcend, srcend, tmp1
+	sub	count, count, tmp1
+	CPY_MC(9998f, ldp A_l, A_h, [srcend, -16])
+	CPY_MC(9998f, stp D_l, D_h, [dstend, -16])
+	CPY_MC(9998f, ldp B_l, B_h, [srcend, -32])
+	CPY_MC(9998f, ldp C_l, C_h, [srcend, -48])
+	CPY_MC(9998f, ldp D_l, D_h, [srcend, -64]!)
+	sub	dstend, dstend, tmp1
+	subs	count, count, 128
+	b.ls	L(copy64_from_start)
+
+L(loop64_backwards):
+	CPY_MC(9998f, stp A_l, A_h, [dstend, -16])
+	CPY_MC(9998f, ldp A_l, A_h, [srcend, -16])
+	CPY_MC(9998f, stp B_l, B_h, [dstend, -32])
+	CPY_MC(9998f, ldp B_l, B_h, [srcend, -32])
+	CPY_MC(9998f, stp C_l, C_h, [dstend, -48])
+	CPY_MC(9998f, ldp C_l, C_h, [srcend, -48])
+	CPY_MC(9998f, stp D_l, D_h, [dstend, -64]!)
+	CPY_MC(9998f, ldp D_l, D_h, [srcend, -64]!)
+	subs	count, count, 64
+	b.hi	L(loop64_backwards)
+
+	/* Write the last iteration and copy 64 bytes from the start.  */
+L(copy64_from_start):
+	CPY_MC(9998f, ldp G_l, G_h, [src, 48])
+	CPY_MC(9998f, stp A_l, A_h, [dstend, -16])
+	CPY_MC(9998f, ldp A_l, A_h, [src, 32])
+	CPY_MC(9998f, stp B_l, B_h, [dstend, -32])
+	CPY_MC(9998f, ldp B_l, B_h, [src, 16])
+	CPY_MC(9998f, stp C_l, C_h, [dstend, -48])
+	CPY_MC(9998f, ldp C_l, C_h, [src])
+	CPY_MC(9998f, stp D_l, D_h, [dstend, -64])
+	CPY_MC(9998f, stp G_l, G_h, [dstin, 48])
+	CPY_MC(9998f, stp A_l, A_h, [dstin, 32])
+	CPY_MC(9998f, stp B_l, B_h, [dstin, 16])
+	CPY_MC(9998f, stp C_l, C_h, [dstin])
+	mov	x0, #0
+	ret
+
+9998:	mov	x0, #-EFAULT
+	ret
+SYM_FUNC_END(__pi_memcpy_mcs)
+
+SYM_FUNC_ALIAS(__memcpy_mcs, __pi_memcpy_mcs)
+EXPORT_SYMBOL(__memcpy_mcs)
+
+SYM_FUNC_ALIAS_WEAK(memcpy_mcs, __memcpy_mcs)
+EXPORT_SYMBOL(memcpy_mcs)
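
[Editorial note: the SYM_FUNC_ALIAS/EXPORT_SYMBOL lines above expose memcpy_mcs() to modules. A hypothetical smoke test for the non-faulting path follows; it is not part of the patch, and exercising the fault path would require RAS error-injection tooling (e.g. ACPI EINJ):]

#include <linux/init.h>
#include <linux/module.h>
#include <linux/printk.h>
#include <linux/string.h>

static int __init memcpy_mcs_smoke_init(void)
{
	static char src[256], dst[256];
	int i, ret;

	for (i = 0; i < (int)sizeof(src); i++)
		src[i] = (char)i;

	ret = memcpy_mcs(dst, src, sizeof(dst));	/* expect 0: no RAS error consumed */
	if (ret || memcmp(dst, src, sizeof(dst)))
		pr_err("memcpy_mcs smoke test failed (ret=%d)\n", ret);
	else
		pr_info("memcpy_mcs smoke test passed\n");
	return 0;
}
module_init(memcpy_mcs_smoke_init);
MODULE_LICENSE("GPL");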
diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
index 9ef84f31833f..e6519fd329b2 100644
--- a/mm/kasan/shadow.c
+++ b/mm/kasan/shadow.c
@@ -79,6 +79,18 @@ void *memcpy(void *dest, const void *src, size_t len)
 }
 #endif
 
+#ifdef __HAVE_ARCH_MEMCPY_MC
+#undef memcpy_mcs
+int memcpy_mcs(void *dest, const void *src, size_t len)
+{
+	if (!kasan_check_range(src, len, false, _RET_IP_) ||
+	    !kasan_check_range(dest, len, true, _RET_IP_))
+		return -EFAULT;
+
+	return __memcpy_mcs(dest, src, len);
+}
+#endif
+
 void *__asan_memset(void *addr, int c, ssize_t len)
 {
 	if (!kasan_check_range(addr, len, true, _RET_IP_))