From patchwork Wed Oct 20 21:07:50 2021
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 12573347
From: Yang Shi
To: naoya.horiguchi@nec.com, hughd@google.com,
    kirill.shutemov@linux.intel.com, willy@infradead.org,
    peterx@redhat.com, osalvador@suse.de, akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org,
    linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [v5 PATCH 1/6] mm: hwpoison: remove the unnecessary THP check
Date: Wed, 20 Oct 2021 14:07:50 -0700
Message-Id: <20211020210755.23964-2-shy828301@gmail.com>
In-Reply-To: <20211020210755.23964-1-shy828301@gmail.com>
References: <20211020210755.23964-1-shy828301@gmail.com>
X-Mailing-List: linux-fsdevel@vger.kernel.org

When handling a THP, hwpoison checked whether the THP was in the
allocation or free stage, since hwpoison could mistreat such a THP as
a hugetlb page.  After commit 415c64c1453a ("mm/memory-failure: split
thp earlier in memory error handling") that problem is fixed, so the
check is no longer needed.  Remove it.  The side effect of the removal
is that hwpoison may report "unsplit THP" instead of "unknown error"
for shmem THP, which does not seem like a big deal.

The next patch depends on this one: it fixes the bug that a shmem THP
with hwpoisoned subpage(s) can be wrongly PMD-mapped.  So this patch
needs to be backported to -stable as well.

Cc:
Acked-by: Naoya Horiguchi
Suggested-by: Naoya Horiguchi
Signed-off-by: Yang Shi
---
 mm/memory-failure.c | 14 --------------
 1 file changed, 14 deletions(-)

diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 3e6449f2102a..73f68699e7ab 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1147,20 +1147,6 @@ static int __get_hwpoison_page(struct page *page)
 	if (!HWPoisonHandlable(head))
 		return -EBUSY;
 
-	if (PageTransHuge(head)) {
-		/*
-		 * Non anonymous thp exists only in allocation/free time. We
-		 * can't handle such a case correctly, so let's give it up.
-		 * This should be better than triggering BUG_ON when kernel
-		 * tries to touch the "partially handled" page.
-		 */
-		if (!PageAnon(head)) {
-			pr_err("Memory failure: %#lx: non anonymous thp\n",
-			       page_to_pfn(page));
-			return 0;
-		}
-	}
-
 	if (get_page_unless_zero(head)) {
 		if (head == compound_head(page))
 			return 1;

From patchwork Wed Oct 20 21:07:51 2021
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 12573349
From: Yang Shi
To: naoya.horiguchi@nec.com, hughd@google.com,
    kirill.shutemov@linux.intel.com, willy@infradead.org,
    peterx@redhat.com, osalvador@suse.de, akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org,
    linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [v5 PATCH 2/6] mm: filemap: check if THP has hwpoisoned subpage for PMD page fault
Date: Wed, 20 Oct 2021 14:07:51 -0700
Message-Id: <20211020210755.23964-3-shy828301@gmail.com>
In-Reply-To: <20211020210755.23964-1-shy828301@gmail.com>
References: <20211020210755.23964-1-shy828301@gmail.com>
X-Mailing-List: linux-fsdevel@vger.kernel.org

When handling a shmem page fault, a THP with a corrupted subpage could
be PMD-mapped if certain conditions are satisfied, but the kernel is
supposed to send SIGBUS when an hwpoisoned page is about to be mapped.

There are two paths which may do the PMD map: fault around and the
regular fault path.  Before commit f9ce0be71d1f ("mm: Cleanup
faultaround and finish_fault() codepaths") the fault-around path was
even worse: the THP could be PMD-mapped as long as the VMA fit,
regardless of which subpage was accessed and which was corrupted.
After that commit the THP can still be PMD-mapped as long as the head
page is not corrupted.  In the regular fault path the THP can be
PMD-mapped as long as the corrupted subpage is not the one accessed
and the VMA fits.

This loophole could be closed by iterating over every subpage to check
whether any of them is hwpoisoned, but that is somewhat costly in the
page fault path.  So introduce a new page flag, PG_has_hwpoisoned,
stored on the first tail page.  It indicates that the THP has
hwpoisoned subpage(s).  It is set when any subpage of a THP is found
hwpoisoned by memory failure, after the refcount has been bumped
successfully, and it is cleared when the THP is freed or split.

The soft offline path doesn't need this, since the soft offline
handler just marks a subpage hwpoisoned once the subpage has been
migrated successfully (shmem THPs don't get split and then migrated
there at all so far).

Fixes: 800d8c63b2e9 ("shmem: add huge pages support")
Cc:
Reviewed-by: Naoya Horiguchi
Suggested-by: Kirill A. Shutemov
Signed-off-by: Yang Shi
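To illustrate the intended end behavior, here is an untested userspace
sketch.  It is an assumption-laden example, not part of the patch: it
assumes CAP_SYS_ADMIN (MADV_HWPOISON requires it), a 2MB PMD size, a
4K base page, shmem THP enabled, and that a THP is actually allocated
and survives the split attempt that memory failure performs.  The
invariant being exercised is that a refault never silently PMD-maps a
THP with a hwpoisoned subpage; touching the poisoned subpage must
raise SIGBUS.

#include <setjmp.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define THP_SIZE (2UL << 20)	/* assumed PMD size */
#define PAGESZ	 4096UL		/* assumed base page size */

static sigjmp_buf env;

static void on_sigbus(int sig)
{
	siglongjmp(env, 1);
}

int main(void)
{
	/* MAP_SHARED | MAP_ANONYMOUS memory is shmem backed. */
	char *p = mmap(NULL, THP_SIZE, PROT_READ | PROT_WRITE,
		       MAP_SHARED | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED)
		return 1;
	madvise(p, THP_SIZE, MADV_HUGEPAGE);
	memset(p, 1, THP_SIZE);		/* fault in, ideally as a THP */

	/* Hard-poison the second subpage (needs CAP_SYS_ADMIN). */
	if (madvise(p + PAGESZ, PAGESZ, MADV_HWPOISON)) {
		perror("MADV_HWPOISON");
		return 1;
	}

	/* Zap the page tables so the next access takes a fault. */
	madvise(p, THP_SIZE, MADV_DONTNEED);

	signal(SIGBUS, on_sigbus);
	if (sigsetjmp(env, 1) == 0) {
		*(volatile char *)p;		/* clean subpage: no SIGBUS */
		*(volatile char *)(p + PAGESZ);	/* poisoned subpage: SIGBUS */
		puts("BUG: no SIGBUS on the poisoned subpage");
	} else {
		puts("got expected SIGBUS");
	}
	return 0;
}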
---
 include/linux/page-flags.h | 23 +++++++++++++++++++++++
 mm/huge_memory.c           |  2 ++
 mm/memory-failure.c        | 14 ++++++++++++++
 mm/memory.c                |  9 +++++++++
 mm/page_alloc.c            |  4 +++-
 5 files changed, 51 insertions(+), 1 deletion(-)

diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index a558d67ee86f..fbfd3fad48f2 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -171,6 +171,15 @@ enum pageflags {
 	/* Compound pages. Stored in first tail page's flags */
 	PG_double_map = PG_workingset,
 
+#ifdef CONFIG_MEMORY_FAILURE
+	/*
+	 * Compound pages. Stored in first tail page's flags.
+	 * Indicates that at least one subpage is hwpoisoned in the
+	 * THP.
+	 */
+	PG_has_hwpoisoned = PG_mappedtodisk,
+#endif
+
 	/* non-lru isolated movable page */
 	PG_isolated = PG_reclaim,
 
@@ -668,6 +677,20 @@ PAGEFLAG_FALSE(DoubleMap)
 	TESTSCFLAG_FALSE(DoubleMap)
 #endif
 
+#if defined(CONFIG_MEMORY_FAILURE) && defined(CONFIG_TRANSPARENT_HUGEPAGE)
+/*
+ * PageHasHWPoisoned indicates that at least one subpage is hwpoisoned in the
+ * compound page.
+ *
+ * This flag is set by hwpoison handler.  Cleared by THP split or free page.
+ */
+PAGEFLAG(HasHWPoisoned, has_hwpoisoned, PF_SECOND)
+	TESTSCFLAG(HasHWPoisoned, has_hwpoisoned, PF_SECOND)
+#else
+PAGEFLAG_FALSE(HasHWPoisoned)
+	TESTSCFLAG_FALSE(HasHWPoisoned)
+#endif
+
 /*
  * Check if a page is currently marked HWPoisoned. Note that this check is
  * best effort only and inherently racy: there is no way to synchronize with

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 5e9ef0fc261e..0574b1613714 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2426,6 +2426,8 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 	/* lock lru list/PageCompound, ref frozen by page_ref_freeze */
 	lruvec = lock_page_lruvec(head);
 
+	ClearPageHasHWPoisoned(head);
+
 	for (i = nr - 1; i >= 1; i--) {
 		__split_huge_page_tail(head, i, lruvec, list);
 		/* Some pages can be beyond EOF: drop them from page cache */

diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 73f68699e7ab..bdbbb32211a5 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1694,6 +1694,20 @@ int memory_failure(unsigned long pfn, int flags)
 	}
 
 	if (PageTransHuge(hpage)) {
+		/*
+		 * The flag must be set after the refcount is bumped
+		 * otherwise it may race with THP split.
+		 * And the flag can't be set in get_hwpoison_page() since
+		 * it is called by soft offline too and it is just called
+		 * for !MF_COUNT_INCREASE.  So here seems to be the best
+		 * place.
+		 *
+		 * Don't need care about the above error handling paths for
+		 * get_hwpoison_page() since they handle either free page
+		 * or unhandlable page.  The refcount is bumped iff the
+		 * page is a valid handlable page.
+		 */
+		SetPageHasHWPoisoned(hpage);
 		if (try_to_split_thp_page(p, "Memory Failure") < 0) {
 			action_result(pfn, MF_MSG_UNSPLIT_THP, MF_IGNORED);
 			res = -EBUSY;

diff --git a/mm/memory.c b/mm/memory.c
index adf9b9ef8277..c52be6d6b605 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3906,6 +3906,15 @@ vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page)
 	if (compound_order(page) != HPAGE_PMD_ORDER)
 		return ret;
 
+	/*
+	 * Just backoff if any subpage of a THP is corrupted otherwise
+	 * the corrupted page may mapped by PMD silently to escape the
+	 * check.  This kind of THP just can be PTE mapped.  Access to
+	 * the corrupted subpage should trigger SIGBUS as expected.
+	 */
+	if (unlikely(PageHasHWPoisoned(page)))
+		return ret;
+
 	/*
 	 * Archs like ppc64 need additional space to store information
 	 * related to pte entry. Use the preallocated table for that.
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index b37435c274cf..7f37652f0287 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1312,8 +1312,10 @@ static __always_inline bool free_pages_prepare(struct page *page,
 
 	VM_BUG_ON_PAGE(compound && compound_order(page) != order, page);
 
-	if (compound)
+	if (compound) {
 		ClearPageDoubleMap(page);
+		ClearPageHasHWPoisoned(page);
+	}
 	for (i = 1; i < (1 << order); i++) {
 		if (compound)
 			bad += free_tail_pages_check(page, page + i);

From patchwork Wed Oct 20 21:07:52 2021
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 12573351
From: Yang Shi
To: naoya.horiguchi@nec.com, hughd@google.com,
    kirill.shutemov@linux.intel.com, willy@infradead.org,
    peterx@redhat.com, osalvador@suse.de, akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org,
    linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [v5 PATCH 3/6] mm: filemap: coding style cleanup for filemap_map_pmd()
Date: Wed, 20 Oct 2021 14:07:52 -0700
Message-Id: <20211020210755.23964-4-shy828301@gmail.com>
In-Reply-To: <20211020210755.23964-1-shy828301@gmail.com>
References: <20211020210755.23964-1-shy828301@gmail.com>
X-Mailing-List: linux-fsdevel@vger.kernel.org

A minor cleanup of the indentation in filemap_map_pmd().

Reviewed-by: Naoya Horiguchi
Signed-off-by: Yang Shi
---
 mm/filemap.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index dae481293b5d..2acc2b977f66 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3195,12 +3195,12 @@ static bool filemap_map_pmd(struct vm_fault *vmf, struct page *page)
 	}
 
 	if (pmd_none(*vmf->pmd) && PageTransHuge(page)) {
-	    vm_fault_t ret = do_set_pmd(vmf, page);
-	    if (!ret) {
-		    /* The page is mapped successfully, reference consumed. */
-		    unlock_page(page);
-		    return true;
-	    }
+		vm_fault_t ret = do_set_pmd(vmf, page);
+		if (!ret) {
+			/* The page is mapped successfully, reference consumed. */
+			unlock_page(page);
+			return true;
+		}
 	}
 
 	if (pmd_none(*vmf->pmd)) {

From patchwork Wed Oct 20 21:07:53 2021
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 12573353
From: Yang Shi
To: naoya.horiguchi@nec.com, hughd@google.com,
    kirill.shutemov@linux.intel.com, willy@infradead.org,
    peterx@redhat.com, osalvador@suse.de, akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org,
    linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [v5 PATCH 4/6] mm: hwpoison: refactor refcount check handling
Date: Wed, 20 Oct 2021 14:07:53 -0700
Message-Id: <20211020210755.23964-5-shy828301@gmail.com>
In-Reply-To: <20211020210755.23964-1-shy828301@gmail.com>
References: <20211020210755.23964-1-shy828301@gmail.com>
X-Mailing-List: linux-fsdevel@vger.kernel.org

Memory failure reports failure if, after the handler is done, the page
still has a pinned refcount beyond what hwpoison itself holds.  The
check is not actually necessary for all handlers, so move it into the
specific handlers.  This will make the following patch, which keeps
the shmem page in the page cache, easier.  In some cases an extra pin
is expected, for example when the page is dirty and in the swap cache.

Suggested-by: Naoya Horiguchi
Signed-off-by: Naoya Horiguchi
Signed-off-by: Yang Shi
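To illustrate the new calling convention, here is a made-up handler
(not part of this patch; the names come from the hunks below): each
->action() callback now receives the page_state, unlocks the page
itself, and applies the common refcount check, deciding per handler
whether one extra pin is expected.

static int me_example(struct page_state *ps, struct page *p)
{
	int ret = MF_RECOVERED;

	/* ... state-specific handling; the page must be unlocked here ... */
	unlock_page(p);

	/* This hypothetical state expects no extra pin on the page. */
	if (has_extra_refcount(ps, p, false))
		ret = MF_FAILED;

	return ret;
}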
---
 mm/memory-failure.c | 93 +++++++++++++++++++++++++++++++--------------
 1 file changed, 64 insertions(+), 29 deletions(-)

diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index bdbbb32211a5..aaeda93d26fb 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -806,12 +806,44 @@ static int truncate_error_page(struct page *p, unsigned long pfn,
 	return ret;
 }
 
+struct page_state {
+	unsigned long mask;
+	unsigned long res;
+	enum mf_action_page_type type;
+
+	/* Callback ->action() has to unlock the relevant page inside it. */
+	int (*action)(struct page_state *ps, struct page *p);
+};
+
+/*
+ * Return true if page is still referenced by others, otherwise return
+ * false.
+ *
+ * The extra_pins is true when one extra refcount is expected.
+ */
+static bool has_extra_refcount(struct page_state *ps, struct page *p,
+			       bool extra_pins)
+{
+	int count = page_count(p) - 1;
+
+	if (extra_pins)
+		count -= 1;
+
+	if (count > 0) {
+		pr_err("Memory failure: %#lx: %s still referenced by %d users\n",
+		       page_to_pfn(p), action_page_types[ps->type], count);
+		return true;
+	}
+
+	return false;
+}
+
 /*
  * Error hit kernel page.
  * Do nothing, try to be lucky and not touch this instead. For a few cases we
  * could be more sophisticated.
  */
-static int me_kernel(struct page *p, unsigned long pfn)
+static int me_kernel(struct page_state *ps, struct page *p)
 {
 	unlock_page(p);
 	return MF_IGNORED;
@@ -820,9 +852,9 @@ static int me_kernel(struct page *p, unsigned long pfn)
 /*
  * Page in unknown state. Do nothing.
  */
-static int me_unknown(struct page *p, unsigned long pfn)
+static int me_unknown(struct page_state *ps, struct page *p)
 {
-	pr_err("Memory failure: %#lx: Unknown page state\n", pfn);
+	pr_err("Memory failure: %#lx: Unknown page state\n", page_to_pfn(p));
 	unlock_page(p);
 	return MF_FAILED;
 }
@@ -830,7 +862,7 @@ static int me_unknown(struct page *p, unsigned long pfn)
 /*
  * Clean (or cleaned) page cache page.
  */
-static int me_pagecache_clean(struct page *p, unsigned long pfn)
+static int me_pagecache_clean(struct page_state *ps, struct page *p)
 {
 	int ret;
 	struct address_space *mapping;
@@ -867,9 +899,13 @@ static int me_pagecache_clean(struct page *p, unsigned long pfn)
 	 *
 	 * Open: to take i_rwsem or not for this? Right now we don't.
 	 */
-	ret = truncate_error_page(p, pfn, mapping);
+	ret = truncate_error_page(p, page_to_pfn(p), mapping);
 out:
 	unlock_page(p);
+
+	if (has_extra_refcount(ps, p, false))
+		ret = MF_FAILED;
+
 	return ret;
 }
 
@@ -878,7 +914,7 @@ static int me_pagecache_clean(struct page *p, unsigned long pfn)
  * Issues: when the error hit a hole page the error is not properly
  * propagated.
  */
-static int me_pagecache_dirty(struct page *p, unsigned long pfn)
+static int me_pagecache_dirty(struct page_state *ps, struct page *p)
 {
 	struct address_space *mapping = page_mapping(p);
 
@@ -922,7 +958,7 @@ static int me_pagecache_dirty(struct page *p, unsigned long pfn)
 		mapping_set_error(mapping, -EIO);
 	}
 
-	return me_pagecache_clean(p, pfn);
+	return me_pagecache_clean(ps, p);
 }
 
 /*
@@ -944,9 +980,10 @@ static int me_pagecache_dirty(struct page *p, unsigned long pfn)
  * Clean swap cache pages can be directly isolated. A later page fault will
  * bring in the known good data from disk.
  */
-static int me_swapcache_dirty(struct page *p, unsigned long pfn)
+static int me_swapcache_dirty(struct page_state *ps, struct page *p)
 {
 	int ret;
+	bool extra_pins = false;
 
 	ClearPageDirty(p);
 	/* Trigger EIO in shmem: */
@@ -954,10 +991,17 @@ static int me_swapcache_dirty(struct page *p, unsigned long pfn)
 
 	ret = delete_from_lru_cache(p) ? MF_FAILED : MF_DELAYED;
 	unlock_page(p);
+
+	if (ret == MF_DELAYED)
+		extra_pins = true;
+
+	if (has_extra_refcount(ps, p, extra_pins))
+		ret = MF_FAILED;
+
 	return ret;
 }
 
-static int me_swapcache_clean(struct page *p, unsigned long pfn)
+static int me_swapcache_clean(struct page_state *ps, struct page *p)
 {
 	int ret;
 
@@ -965,6 +1009,10 @@ static int me_swapcache_clean(struct page *p, unsigned long pfn)
 	ret = delete_from_lru_cache(p) ? MF_FAILED : MF_RECOVERED;
 	unlock_page(p);
+
+	if (has_extra_refcount(ps, p, false))
+		ret = MF_FAILED;
+
 	return ret;
 }
 
@@ -974,7 +1022,7 @@
  * - Error on hugepage is contained in hugepage unit (not in raw page unit.)
  *   To narrow down kill region to one page, we need to break up pmd.
  */
-static int me_huge_page(struct page *p, unsigned long pfn)
+static int me_huge_page(struct page_state *ps, struct page *p)
 {
 	int res;
 	struct page *hpage = compound_head(p);
@@ -985,7 +1033,7 @@ static int me_huge_page(struct page *p, unsigned long pfn)
 	mapping = page_mapping(hpage);
 	if (mapping) {
-		res = truncate_error_page(hpage, pfn, mapping);
+		res = truncate_error_page(hpage, page_to_pfn(p), mapping);
 		unlock_page(hpage);
 	} else {
 		res = MF_FAILED;
@@ -1003,6 +1051,9 @@ static int me_huge_page(struct page *p, unsigned long pfn)
 		}
 	}
 
+	if (has_extra_refcount(ps, p, false))
+		res = MF_FAILED;
+
 	return res;
 }
 
@@ -1028,14 +1079,7 @@ static int me_huge_page(struct page *p, unsigned long pfn)
 #define slab		(1UL << PG_slab)
 #define reserved	(1UL << PG_reserved)
 
-static struct page_state {
-	unsigned long mask;
-	unsigned long res;
-	enum mf_action_page_type type;
-
-	/* Callback ->action() has to unlock the relevant page inside it. */
-	int (*action)(struct page *p, unsigned long pfn);
-} error_states[] = {
+static struct page_state error_states[] = {
 	{ reserved,	reserved,	MF_MSG_KERNEL,	me_kernel },
 	/*
 	 * free pages are specially detected outside this table:
@@ -1095,19 +1139,10 @@ static int page_action(struct page_state *ps, struct page *p,
 			unsigned long pfn)
 {
 	int result;
-	int count;
 
 	/* page p should be unlocked after returning from ps->action(). */
-	result = ps->action(p, pfn);
+	result = ps->action(ps, p);
 
-	count = page_count(p) - 1;
-	if (ps->action == me_swapcache_dirty && result == MF_DELAYED)
-		count--;
-	if (count > 0) {
-		pr_err("Memory failure: %#lx: %s still referenced by %d users\n",
-		       pfn, action_page_types[ps->type], count);
-		result = MF_FAILED;
-	}
 	action_result(pfn, ps->type, result);
 
 	/* Could do more checks here if page looks ok */

From patchwork Wed Oct 20 21:07:54 2021
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 12573355
From: Yang Shi
To: naoya.horiguchi@nec.com, hughd@google.com,
    kirill.shutemov@linux.intel.com, willy@infradead.org,
    peterx@redhat.com, osalvador@suse.de, akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org,
    linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [v5 PATCH 5/6] mm: shmem: don't truncate page if memory failure happens
Date: Wed, 20 Oct 2021 14:07:54 -0700
Message-Id: <20211020210755.23964-6-shy828301@gmail.com>
In-Reply-To: <20211020210755.23964-1-shy828301@gmail.com>
References: <20211020210755.23964-1-shy828301@gmail.com>
X-Mailing-List: linux-fsdevel@vger.kernel.org

The current behavior of memory failure is to truncate the page cache
page regardless of whether it is dirty or clean.  If the page is
dirty, a later access will get the obsolete data from disk without
any notification to the user, which may cause silent data loss.  It
is even worse for shmem: since shmem is an in-memory filesystem,
truncating the page cache means discarding data blocks, and a later
read would return all zeroes.

The right approach is to keep the corrupted page in the page cache;
any later access returns an error for syscalls, or SIGBUS for a page
fault, until the file is truncated, hole-punched, or removed.  The
regular storage-backed filesystems are more complicated, so this
patch focuses on shmem.  This also unblocks support for soft
offlining shmem THPs.

Signed-off-by: Yang Shi
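As an illustration of the intended semantics, here is an untested
userspace sketch (assumptions: "/dev/shm/victim" is a placeholder
tmpfs path, and MADV_HWPOISON needs CAP_SYS_ADMIN plus
CONFIG_MEMORY_FAILURE=y).  Before this patch the poisoned page is
truncated, so the read silently returns zeroes from the hole; with
this patch the page stays in the page cache and read(2) fails with
EIO.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	char buf[4096];
	int fd = open("/dev/shm/victim", O_RDWR | O_CREAT, 0600);

	if (fd < 0)
		return 1;
	memset(buf, 1, sizeof(buf));
	if (write(fd, buf, sizeof(buf)) != sizeof(buf))
		return 1;

	char *p = mmap(NULL, sizeof(buf), PROT_READ, MAP_SHARED, fd, 0);
	if (p == MAP_FAILED)
		return 1;
	if (madvise(p, sizeof(buf), MADV_HWPOISON))	/* needs CAP_SYS_ADMIN */
		perror("MADV_HWPOISON");

	if (pread(fd, buf, sizeof(buf), 0) < 0)
		perror("pread");	/* expected with this patch: EIO */
	else
		puts("read succeeded (pre-patch behavior: zeroes)");
	return 0;
}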
---
 mm/memory-failure.c | 10 +++++++++-
 mm/shmem.c          | 38 +++++++++++++++++++++++++++++++++++---
 mm/userfaultfd.c    |  5 +++++
 3 files changed, 49 insertions(+), 4 deletions(-)

diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index aaeda93d26fb..3603a3acf7b3 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -57,6 +57,7 @@
 #include <linux/ratelimit.h>
 #include <linux/page-isolation.h>
 #include <linux/pagewalk.h>
+#include <linux/shmem_fs.h>
 #include "internal.h"
 #include "ras/ras_event.h"
 
@@ -866,6 +867,7 @@ static int me_pagecache_clean(struct page_state *ps, struct page *p)
 {
 	int ret;
 	struct address_space *mapping;
+	bool extra_pins;
 
 	delete_from_lru_cache(p);
 
@@ -894,6 +896,12 @@ static int me_pagecache_clean(struct page_state *ps, struct page *p)
 		goto out;
 	}
 
+	/*
+	 * The shmem page is kept in page cache instead of truncating
+	 * so is expected to have an extra refcount after error-handling.
+	 */
+	extra_pins = shmem_mapping(mapping);
+
 	/*
 	 * Truncation is a bit tricky. Enable it per file system for now.
 	 *
@@ -903,7 +911,7 @@ static int me_pagecache_clean(struct page_state *ps, struct page *p)
 out:
 	unlock_page(p);
 
-	if (has_extra_refcount(ps, p, false))
+	if (has_extra_refcount(ps, p, extra_pins))
 		ret = MF_FAILED;
 
 	return ret;

diff --git a/mm/shmem.c b/mm/shmem.c
index b5860f4a2738..89062ce85db8 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2456,6 +2456,7 @@ shmem_write_begin(struct file *file, struct address_space *mapping,
 	struct inode *inode = mapping->host;
 	struct shmem_inode_info *info = SHMEM_I(inode);
 	pgoff_t index = pos >> PAGE_SHIFT;
+	int ret = 0;
 
 	/* i_rwsem is held by caller */
 	if (unlikely(info->seals & (F_SEAL_GROW |
@@ -2466,7 +2467,15 @@ shmem_write_begin(struct file *file, struct address_space *mapping,
 			return -EPERM;
 	}
 
-	return shmem_getpage(inode, index, pagep, SGP_WRITE);
+	ret = shmem_getpage(inode, index, pagep, SGP_WRITE);
+
+	if (*pagep && PageHWPoison(*pagep)) {
+		unlock_page(*pagep);
+		put_page(*pagep);
+		ret = -EIO;
+	}
+
+	return ret;
 }
 
 static int
@@ -2553,6 +2562,12 @@ static ssize_t shmem_file_read_iter(struct kiocb *iocb, struct iov_iter *to)
 			if (sgp == SGP_CACHE)
 				set_page_dirty(page);
 			unlock_page(page);
+
+			if (PageHWPoison(page)) {
+				put_page(page);
+				error = -EIO;
+				break;
+			}
 		}
 
 		/*
@@ -3114,7 +3129,8 @@ static const char *shmem_get_link(struct dentry *dentry,
 		page = find_get_page(inode->i_mapping, 0);
 		if (!page)
 			return ERR_PTR(-ECHILD);
-		if (!PageUptodate(page)) {
+		if (PageHWPoison(page) ||
+		    !PageUptodate(page)) {
 			put_page(page);
 			return ERR_PTR(-ECHILD);
 		}
@@ -3122,6 +3138,11 @@ static const char *shmem_get_link(struct dentry *dentry,
 		error = shmem_getpage(inode, 0, &page, SGP_READ);
 		if (error)
 			return ERR_PTR(error);
+		if (page && PageHWPoison(page)) {
+			unlock_page(page);
+			put_page(page);
+			return ERR_PTR(-ECHILD);
+		}
 		unlock_page(page);
 	}
 	set_delayed_call(done, shmem_put_link, page);
@@ -3772,6 +3793,13 @@ static void shmem_destroy_inodecache(void)
 	kmem_cache_destroy(shmem_inode_cachep);
}
 
+/* Keep the page in page cache instead of truncating it */
+static int shmem_error_remove_page(struct address_space *mapping,
+				   struct page *page)
+{
+	return 0;
+}
+
 const struct address_space_operations shmem_aops = {
 	.writepage	= shmem_writepage,
 	.set_page_dirty	= __set_page_dirty_no_writeback,
@@ -3782,7 +3810,7 @@ const struct address_space_operations shmem_aops = {
 #ifdef CONFIG_MIGRATION
 	.migratepage	= migrate_page,
 #endif
-	.error_remove_page = generic_error_remove_page,
+	.error_remove_page = shmem_error_remove_page,
 };
 EXPORT_SYMBOL(shmem_aops);
@@ -4193,6 +4221,10 @@ struct page *shmem_read_mapping_page_gfp(struct address_space *mapping,
 		page = ERR_PTR(error);
 	else
 		unlock_page(page);
+
+	if (PageHWPoison(page))
+		page = ERR_PTR(-EIO);
+
 	return page;
 #else
 	/*

diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 7a9008415534..b688d5327177 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -233,6 +233,11 @@ static int mcontinue_atomic_pte(struct mm_struct *dst_mm,
 		goto out;
 	}
 
+	if (PageHWPoison(page)) {
+		ret = -EIO;
+		goto out_release;
+	}
+
 	ret = mfill_atomic_install_pte(dst_mm, dst_pmd, dst_vma, dst_addr,
 				       page, false, wp_copy);
 	if (ret)

From patchwork Wed Oct 20 21:07:55 2021
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 12573357
From: Yang Shi
To: naoya.horiguchi@nec.com, hughd@google.com,
    kirill.shutemov@linux.intel.com, willy@infradead.org,
    peterx@redhat.com, osalvador@suse.de, akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org,
    linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [v5 PATCH 6/6] mm: hwpoison: handle non-anonymous THP correctly
Date: Wed, 20 Oct 2021 14:07:55 -0700
Message-Id: <20211020210755.23964-7-shy828301@gmail.com>
In-Reply-To: <20211020210755.23964-1-shy828301@gmail.com>
References: <20211020210755.23964-1-shy828301@gmail.com>
X-Mailing-List: linux-fsdevel@vger.kernel.org

Currently hwpoison doesn't handle non-anonymous THPs, but since v4.8
THP support for tmpfs and the read-only file cache has been in place.
They can be offlined by splitting the THP, just like anonymous THPs.

Acked-by: Naoya Horiguchi
Signed-off-by: Yang Shi
---
 mm/memory-failure.c | 7 ++-----
 1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 3603a3acf7b3..bd697c64e973 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1443,14 +1443,11 @@ static int identify_page_state(unsigned long pfn, struct page *p,
 static int try_to_split_thp_page(struct page *page, const char *msg)
 {
 	lock_page(page);
-	if (!PageAnon(page) || unlikely(split_huge_page(page))) {
+	if (unlikely(split_huge_page(page))) {
 		unsigned long pfn = page_to_pfn(page);
 
 		unlock_page(page);
-		if (!PageAnon(page))
-			pr_info("%s: %#lx: non anonymous thp\n", msg, pfn);
-		else
-			pr_info("%s: %#lx: thp split failed\n", msg, pfn);
+		pr_info("%s: %#lx: thp split failed\n", msg, pfn);
 		put_page(page);
 		return -EBUSY;
 	}
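A hedged usage note, not part of the patch: with the !PageAnon() bail
out gone, a subpage of a file-backed (e.g. tmpfs) THP can be offlined
from userspace the same way as an anonymous one, for instance via
MADV_SOFT_OFFLINE (needs CAP_SYS_ADMIN and CONFIG_MEMORY_FAILURE=y);
the THP is split first, as with anonymous THP.

#include <sys/mman.h>

/* Hypothetical helper: soft-offline one base page of a mapped THP. */
static int soft_offline_subpage(void *addr)
{
	return madvise(addr, 4096, MADV_SOFT_OFFLINE);
}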