From patchwork Mon Dec 9 02:42:55 2024
X-Patchwork-Submitter: Tong Tiangen
X-Patchwork-Id: 13898760
From: Tong Tiangen
To: Mark Rutland, Jonathan Cameron, Mauro Carvalho Chehab, Catalin Marinas,
    Will Deacon, Andrew Morton, James Morse, Robin Murphy, Andrey Konovalov,
    Dmitry Vyukov, Vincenzo Frascino, Michael Ellerman, Nicholas Piggin,
    Andrey Ryabinin, Alexander Potapenko, Christophe Leroy, Aneesh Kumar K.V,
    "Naveen N. Rao", Thomas Gleixner, Ingo Molnar, Borislav Petkov,
    Dave Hansen,
    "H. Peter Anvin", Madhavan Srinivasan
Cc: Tong Tiangen, Guohanjun
Subject: [PATCH v13 3/5] mm/hwpoison: return -EFAULT when copy fails in copy_mc_[user]_highpage()
Date: Mon, 9 Dec 2024 10:42:55 +0800
Message-ID: <20241209024257.3618492-4-tongtiangen@huawei.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20241209024257.3618492-1-tongtiangen@huawei.com>
References: <20241209024257.3618492-1-tongtiangen@huawei.com>

Currently, copy_mc_[user]_highpage() returns zero on success or, on
failure, the number of bytes that were not copied. Tracking the number of
bytes not copied works fine for x86 and PPC, but it is hard to do the same
on arm64 because copy_page() (lib/copy_page.S) has no caller-saved
register available to hold a "bytes not copied" count, and the upcoming
copy_mc_page() will run into the same problem.

Since the callers of copy_mc_[user]_highpage() cannot do anything with the
partially copied data (the source page has hardware errors) and only check
whether the copy succeeded, make the interface more generic: return
-EFAULT when the copy fails and zero on success.

Signed-off-by: Tong Tiangen
Reviewed-by: Jonathan Cameron
Reviewed-by: Mauro Carvalho Chehab
---
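A minimal sketch of how callers are expected to consume the new return
convention (illustrative only, not part of this patch; the wrapper name is
made up, while copy_mc_user_highpage() and its arguments are as in the
hunks below):

/*
 * Illustrative caller: the result is treated strictly as pass/fail,
 * never as a "bytes not copied" count.
 */
static int copy_src_page_checked(struct page *dst, struct page *src,
				 unsigned long addr,
				 struct vm_area_struct *vma)
{
	int ret = copy_mc_user_highpage(dst, src, addr, vma);

	/* With this patch, ret is either 0 (success) or -EFAULT (#MC hit). */
	if (ret)
		return ret;	/* e.g. translated to SCAN_COPY_MC in khugepaged */

	return 0;
}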

 include/linux/highmem.h | 8 ++++----
 mm/khugepaged.c         | 4 ++--
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index 6e452bd8e7e3..0eb4b9b06837 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -329,8 +329,8 @@ static inline void copy_highpage(struct page *to, struct page *from)
 /*
  * If architecture supports machine check exception handling, define the
  * #MC versions of copy_user_highpage and copy_highpage. They copy a memory
- * page with #MC in source page (@from) handled, and return the number
- * of bytes not copied if there was a #MC, otherwise 0 for success.
+ * page with #MC in source page (@from) handled, and return -EFAULT if there
+ * was a #MC, otherwise 0 for success.
  */
 static inline int copy_mc_user_highpage(struct page *to, struct page *from,
 					unsigned long vaddr, struct vm_area_struct *vma)
@@ -349,7 +349,7 @@ static inline int copy_mc_user_highpage(struct page *to, struct page *from,
 	if (ret)
 		memory_failure_queue(page_to_pfn(from), 0);
 
-	return ret;
+	return ret ? -EFAULT : 0;
 }
 
 static inline int copy_mc_highpage(struct page *to, struct page *from)
@@ -368,7 +368,7 @@ static inline int copy_mc_highpage(struct page *to, struct page *from)
 	if (ret)
 		memory_failure_queue(page_to_pfn(from), 0);
 
-	return ret;
+	return ret ? -EFAULT : 0;
 }
 #else
 static inline int copy_mc_user_highpage(struct page *to, struct page *from,
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 6f8d46d107b4..c3cdc0155dcd 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -820,7 +820,7 @@ static int __collapse_huge_page_copy(pte_t *pte, struct folio *folio,
 			continue;
 		}
 		src_page = pte_page(pteval);
-		if (copy_mc_user_highpage(page, src_page, src_addr, vma) > 0) {
+		if (copy_mc_user_highpage(page, src_page, src_addr, vma)) {
 			result = SCAN_COPY_MC;
 			break;
 		}
@@ -2081,7 +2081,7 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 	}
 
 	for (i = 0; i < nr_pages; i++) {
-		if (copy_mc_highpage(dst, folio_page(folio, i)) > 0) {
+		if (copy_mc_highpage(dst, folio_page(folio, i))) {
 			result = SCAN_COPY_MC;
 			goto rollback;
 		}