From patchwork Wed Feb 7 13:21:59 2024
X-Patchwork-Submitter: Tong Tiangen
X-Patchwork-Id: 13548486
From: Tong Tiangen <tongtiangen@huawei.com>
To: Mark Rutland, Catalin Marinas, Will Deacon, Andrew Morton, James Morse,
 Robin Murphy, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino,
 Michael Ellerman, Nicholas Piggin, Andrey Ryabinin, Alexander Potapenko,
 Christophe Leroy, Aneesh Kumar K.V, "Naveen N. Rao", Thomas Gleixner,
 Ingo Molnar, Borislav Petkov, Dave Hansen, "H. Peter Anvin"
CC: Tong Tiangen, Guohanjun
Subject: [PATCH v11 0/5] arm64: add ARCH_HAS_COPY_MC support
Date: Wed, 7 Feb 2024 21:21:59 +0800
Message-ID: <20240207132204.1720444-1-tongtiangen@huawei.com>
With the increase of memory capacity and density, the probability of memory
errors also increases. The increasing size and density of server RAM in data
centers and clouds has led to more uncorrectable memory errors. There are
now more and more scenarios that can tolerate memory errors, such as
CoW[1,2], KSM copy[3], coredump copy[4], khugepaged[5,6], uaccess copy[7],
etc.

This patchset introduces a new processing framework on arm64 that enables
arm64 to support error recovery in the above scenarios; more scenarios can
be added on top of it in the future.

On arm64, memory errors are handled in do_sea(), which distinguishes two
cases:
1. If the memory error was consumed in user mode, the user process is
   killed and the faulty page is isolated.
2. If the memory error was consumed in kernel mode, the kernel panics.

For case 2, an undifferentiated panic is not always the optimal choice;
some situations can be handled better. For example, if a uaccess fails due
to a memory error, only the user process is affected: killing the user
process and isolating the poisoned user page is a better choice than
panicking.

[1] commit d302c2398ba2 ("mm, hwpoison: when copy-on-write hits poison, take page offline")
[2] commit 1cb9dc4b475c ("mm: hwpoison: support recovery from HugePage copy-on-write faults")
[3] commit 6b970599e807 ("mm: hwpoison: support recovery from ksm_might_need_to_copy()")
[4] commit 245f09226893 ("mm: hwpoison: coredump: support recovery from dump_user_range()")
[5] commit 98c76c9f1ef7 ("mm/khugepaged: recover from poisoned anonymous memory")
[6] commit 12904d953364 ("mm/khugepaged: recover from poisoned file-backed memory")
[7] commit 278b917f8cb9 ("x86/mce: Add _ASM_EXTABLE_CPY for copy user access")

------------------
Test results:
1.
copy_page() and copy_mc_page(): basic function tests pass, and the
   disassembled contents remain the same before and after the refactor.
2. copy_to/from_user(): accessing a kernel NULL pointer raises a
   translation fault, dumps an error message, and then die()s; test passes.
3. Tested the following scenarios: copy_from_user(), get_user(), COW.
   Before this patchset: triggering a hardware memory error causes a panic.
   After this patchset: triggering a hardware memory error does not panic.
   Testing steps:
   step 1: start a user process.
   step 2: poison (einj) one of the user process's pages.
   step 3: the user process accesses the poisoned page in kernel mode and
           triggers an SEA.
   step 4: the kernel does not panic; only the user process is killed and
           the poisoned page is isolated. (Before this patchset, the
           kernel panics in do_sea().)

------------------
Since V10:
According to Mark's suggestions:
1. Merge V10's patch2 and patch3 into V11's patch2.
2. Patch2(V11): use a new fixup_type for ld* in copy_to_user(), fixing
   fatal issues (NULL kernel pointer access) that were previously fixed up
   incorrectly.
3. Patch2(V11): refactor the logic of do_sea().
4. Patch4(V11): remove duplicate assembly logic and remove do_mte().
Besides:
1. Patch2(V11): remove the st* instructions' fixup; st* generally does not
   trigger memory errors.
2. Split part of the logic of patch2(V11) into patch5(V11); for details,
   see patch5(V11)'s commit message.
3. Remove patch6(V10) "arm64: introduce copy_mc_to_kernel()
   implementation". During the rework, some problems were found that
   cannot be solved in a short period; that patch will be resent once they
   are solved.
4. Add test results to this cover letter.
5. Modify the patchset title: do not use "machine check" and remove
   "-next".

Since V9:
1. Rebase to the latest kernel version 6.8-rc2.
2. Add patch 6/6 to support copy_mc_to_kernel().

Since V8:
1. Rebase to the latest kernel version and fix typos in some of the
   patches.
2. Following Catalin's suggestion, I attempted to change the return value
   of copy_mc_[user]_highpage() to bytes not copied.
During the rework I found that it would be more reasonable to return
   -EFAULT when a copy error occurs (see the newly added patch 4). On
   arm64, the implementation of copy_mc_[user]_highpage() needs to
   consider MTE: in the scenario where the data copy succeeds but the MTE
   tag copy fails, returning bytes not copied is also not reasonable.
3. Given the recent addition of machine-check-safe support for multiple
   scenarios, update the commit message of patch 5 (patch 4 in V8).

Since V7:
There are now patches supporting recovery from poison consumption in the
CoW scenario[1]. Supporting the CoW scenario on arm64 therefore only
requires modifying the relevant code under arch/.
[1] https://lore.kernel.org/lkml/20221031201029.102123-1-tony.luck@intel.com/

Since V6:
Resend the patches of V6 that were not merged into the mainline.

Since V5:
1. Add patches 2/3 to add uaccess assembly helpers.
2. Optimize the implementation logic of arm64_do_kernel_sea() in patch8.
3. Remove the kernel access fixup in patch9.
All suggestions are from Mark.

Since V4:
1. According to Michael's suggestion, add patch5.
2. According to Mark's suggestion, do some restructuring of the arm64
   extable, then base a new adaptation of the machine-check-safe support
   on it.
3. According to Mark's suggestion, support machine check safe in do_mte()
   in the CoW scenario.
4. Two patches of V4 have been merged into -next, so V5 does not resend
   them.

Since V3:
1. According to Robin's suggestion, directly modify user_ldst and
   user_ldp in asm-uaccess.h and modify mte.S.
2. Add a new macro USER_MC in asm-uaccess.h, used in copy_from_user.S and
   copy_to_user.S.
3. According to Robin's suggestion, use a macro in copy_page_mc.S to
   simplify the code.
4. According to KeFeng's suggestion, modify the powerpc code in patch1.
5. According to KeFeng's suggestion, modify mm/extable.c plus some code
   optimizations.

Since V2:
1.
According to Mark's suggestion, all uaccess can now recover from memory
   errors.
2. The pagecache-reading scenario is also supported as part of uaccess
   (copy_to_user()), and the code duplication problem is solved. Thanks to
   Robin for the suggestion.
3. According to Mark's suggestion, update the commit message of patch 2/5.
4. According to Borislav's suggestion, update the commit message of patch
   1/5.

Since V1:
1. Consistent with PPC/x86, use CONFIG_ARCH_HAS_COPY_MC instead of
   ARM64_UCE_KERNEL_RECOVERY.
2. Add two new scenarios: CoW and pagecache reading.
3. Fix two small bugs (the first two patches).
V1 is here:
https://lore.kernel.org/lkml/20220323033705.3966643-1-tongtiangen@huawei.com/

Tong Tiangen (5):
  uaccess: add generic fallback version of copy_mc_to_user()
  arm64: add support for ARCH_HAS_COPY_MC
  mm/hwpoison: return -EFAULT when copy fail in copy_mc_[user]_highpage()
  arm64: support copy_mc_[user]_highpage()
  arm64: send SIGBUS to user process for SEA exception

 arch/arm64/Kconfig                   |  1 +
 arch/arm64/include/asm/asm-extable.h | 31 ++++++++++++---
 arch/arm64/include/asm/asm-uaccess.h |  4 ++
 arch/arm64/include/asm/extable.h     |  1 +
 arch/arm64/include/asm/mte.h         |  9 +++++
 arch/arm64/include/asm/page.h        | 10 +++++
 arch/arm64/lib/Makefile              |  2 +
 arch/arm64/lib/copy_mc_page.S        | 37 ++++++++++++++++++
 arch/arm64/lib/copy_page.S           | 50 +++---------------------
 arch/arm64/lib/copy_page_template.S  | 56 ++++++++++++++++++++++++++++
 arch/arm64/lib/copy_to_user.S        | 10 ++---
 arch/arm64/lib/mte.S                 | 29 ++++++++++++++
 arch/arm64/mm/copypage.c             | 45 ++++++++++++++++++++++
 arch/arm64/mm/extable.c              | 19 ++++++++++
 arch/arm64/mm/fault.c                | 39 ++++++++++++++-----
 arch/powerpc/include/asm/uaccess.h   |  1 +
 arch/x86/include/asm/uaccess.h       |  1 +
 include/linux/highmem.h              | 16 ++++++--
 include/linux/uaccess.h              |  9 +++++
 mm/khugepaged.c                      |  4 +-
 20 files changed, 304 insertions(+), 70 deletions(-)
 create mode 100644 arch/arm64/lib/copy_mc_page.S
 create mode 100644 arch/arm64/lib/copy_page_template.S