From patchwork Thu Sep 30 21:53:10 2021
X-Patchwork-Submitter: Yang Shi <shy828301@gmail.com>
X-Patchwork-Id: 12529375
From: Yang Shi <shy828301@gmail.com>
To: naoya.horiguchi@nec.com, hughd@google.com,
 kirill.shutemov@linux.intel.com, willy@infradead.org, peterx@redhat.com,
 osalvador@suse.de, akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
 linux-kernel@vger.kernel.org
Subject: [v3 PATCH 4/5] mm: shmem: don't truncate page if memory failure
 happens
Date: Thu, 30 Sep 2021 14:53:10 -0700
Message-Id: <20210930215311.240774-5-shy828301@gmail.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210930215311.240774-1-shy828301@gmail.com>
References: <20210930215311.240774-1-shy828301@gmail.com>

The current behavior of memory failure is to truncate the page cache
regardless of whether the page is dirty or clean. If the page is dirty,
a later access will get obsolete data from disk without any notification
to the user, which may cause silent data loss. It is even worse for
shmem: since shmem is an in-memory filesystem, truncating the page cache
means discarding the data blocks, and a later read returns all zeros.

The right approach is to keep the corrupted page in the page cache; any
later access then returns an error for syscalls, or SIGBUS for page
faults, until the file is truncated, hole punched, or removed. Regular
storage-backed filesystems would be more complicated to handle, so this
patch focuses on shmem. This also unblocks support for soft offlining
shmem THPs.

Signed-off-by: Yang Shi <shy828301@gmail.com>
---
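A minimal userspace sketch (not part of the patch) to exercise the
syscall-path behavior described above. It assumes CAP_SYS_ADMIN,
CONFIG_MEMORY_FAILURE, and a glibc that exposes memfd_create(); the
memfd name and buffer size are arbitrary. MADV_HWPOISON poisons one
shmem page, after which pread(2) on that range should fail with EIO
instead of silently returning zeroed data:

/* hwpoison-read.c: hypothetical test, not part of the patch */
#define _GNU_SOURCE
#include <errno.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
        long pagesz = sysconf(_SC_PAGESIZE);
        char buf[16];
        int fd = memfd_create("hwpoison-read", 0);      /* shmem-backed file */

        if (fd < 0 || ftruncate(fd, pagesz) < 0)
                return 1;

        char *p = mmap(NULL, pagesz, PROT_READ | PROT_WRITE, MAP_SHARED,
                       fd, 0);
        if (p == MAP_FAILED)
                return 1;
        p[0] = 'x';     /* instantiate the shmem page */

        /* Inject a memory failure on that page (needs CAP_SYS_ADMIN) */
        if (madvise(p, pagesz, MADV_HWPOISON) < 0) {
                perror("madvise(MADV_HWPOISON)");
                return 1;
        }

        /*
         * With the patch, the poisoned page stays in the page cache and
         * shmem_file_read_iter() reports the failure, instead of the
         * truncate leaving a hole that reads back as zeros.
         */
        if (pread(fd, buf, sizeof(buf), 0) < 0 && errno == EIO)
                puts("pread failed with EIO as expected");

        return 0;
}

Without the patch, the pread above is expected to succeed and return
zeros, since the poisoned page used to be truncated from the page cache.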
 mm/memory-failure.c | 10 +++++++++-
 mm/shmem.c          | 31 +++++++++++++++++++++++++++++--
 mm/userfaultfd.c    |  5 +++++
 3 files changed, 43 insertions(+), 3 deletions(-)

diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 562bcf335bd2..176883cd080f 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -57,6 +57,7 @@
 #include <linux/ratelimit.h>
 #include <linux/page-isolation.h>
 #include <linux/pagewalk.h>
+#include <linux/shmem_fs.h>
 #include "internal.h"
 #include "ras/ras_event.h"
 
@@ -866,6 +867,7 @@ static int me_pagecache_clean(struct page_state *ps, struct page *p)
 {
        int ret;
        struct address_space *mapping;
+       bool dec;
 
        delete_from_lru_cache(p);
 
@@ -894,6 +896,12 @@ static int me_pagecache_clean(struct page_state *ps, struct page *p)
                goto out;
        }
 
+       /*
+        * The shmem page is kept in the page cache instead of being
+        * truncated, so account for its page cache refcount here.
+        */
+       dec = shmem_mapping(mapping);
+
        /*
         * Truncation is a bit tricky. Enable it per file system for now.
         *
@@ -903,7 +911,7 @@ static int me_pagecache_clean(struct page_state *ps, struct page *p)
 out:
        unlock_page(p);
 
-       if (has_extra_refcount(ps, p, false))
+       if (has_extra_refcount(ps, p, dec))
                ret = MF_FAILED;
 
        return ret;
diff --git a/mm/shmem.c b/mm/shmem.c
index 88742953532c..75c36b6a405a 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2456,6 +2456,7 @@ shmem_write_begin(struct file *file, struct address_space *mapping,
        struct inode *inode = mapping->host;
        struct shmem_inode_info *info = SHMEM_I(inode);
        pgoff_t index = pos >> PAGE_SHIFT;
+       int ret = 0;
 
        /* i_rwsem is held by caller */
        if (unlikely(info->seals & (F_SEAL_GROW |
@@ -2466,7 +2467,17 @@ shmem_write_begin(struct file *file, struct address_space *mapping,
                return -EPERM;
        }
 
-       return shmem_getpage(inode, index, pagep, SGP_WRITE);
+       ret = shmem_getpage(inode, index, pagep, SGP_WRITE);
+
+       if (*pagep) {
+               if (PageHWPoison(*pagep)) {
+                       unlock_page(*pagep);
+                       put_page(*pagep);
+                       ret = -EIO;
+               }
+       }
+
+       return ret;
 }
 
 static int
@@ -2555,6 +2566,11 @@ static ssize_t shmem_file_read_iter(struct kiocb *iocb, struct iov_iter *to)
                        unlock_page(page);
                }
 
+               if (page && PageHWPoison(page)) {
+                       error = -EIO;
+                       break;
+               }
+
                /*
                 * We must evaluate after, since reads (unlike writes)
                 * are called without i_rwsem protection against truncate
@@ -3772,6 +3788,13 @@ static void shmem_destroy_inodecache(void)
        kmem_cache_destroy(shmem_inode_cachep);
 }
 
+/* Keep the page in the page cache instead of truncating it */
+static int shmem_error_remove_page(struct address_space *mapping,
+                                  struct page *page)
+{
+       return 0;
+}
+
 const struct address_space_operations shmem_aops = {
        .writepage      = shmem_writepage,
        .set_page_dirty = __set_page_dirty_no_writeback,
@@ -3782,7 +3805,7 @@ const struct address_space_operations shmem_aops = {
 #ifdef CONFIG_MIGRATION
        .migratepage    = migrate_page,
 #endif
-       .error_remove_page = generic_error_remove_page,
+       .error_remove_page = shmem_error_remove_page,
 };
 EXPORT_SYMBOL(shmem_aops);
 
@@ -4193,6 +4216,10 @@ struct page *shmem_read_mapping_page_gfp(struct address_space *mapping,
                page = ERR_PTR(error);
        else
                unlock_page(page);
+
+       if (PageHWPoison(page))
+               page = ERR_PTR(-EIO);
+
        return page;
 #else
        /*
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 7a9008415534..b688d5327177 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -233,6 +233,11 @@ static int mcontinue_atomic_pte(struct mm_struct *dst_mm,
                goto out;
        }
 
+       if (PageHWPoison(page)) {
+               ret = -EIO;
+               goto out_release;
+       }
+
        ret = mfill_atomic_install_pte(dst_mm, dst_pmd, dst_vma, dst_addr,
                                       page, false, wp_copy);
        if (ret)
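
For the fault side, a companion sketch with the same assumptions and
caveats as the read-path example above. memory_failure() replaces the
PTEs of tasks that had the page mapped with hwpoison swap entries, so
the next touch should raise SIGBUS (rather than faulting in a fresh
zeroed page, which is what truncation used to allow):

/* hwpoison-fault.c: hypothetical test, not part of the patch */
#define _GNU_SOURCE
#include <signal.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

static void on_sigbus(int sig, siginfo_t *info, void *uc)
{
        /* printf() is not async-signal-safe; fine for a throwaway test */
        printf("SIGBUS at %p, si_code=%d\n", info->si_addr, info->si_code);
        _exit(0);
}

int main(void)
{
        struct sigaction sa = {
                .sa_sigaction   = on_sigbus,
                .sa_flags       = SA_SIGINFO,
        };
        long pagesz = sysconf(_SC_PAGESIZE);
        int fd = memfd_create("hwpoison-fault", 0);

        if (fd < 0 || ftruncate(fd, pagesz) < 0)
                return 1;
        if (sigaction(SIGBUS, &sa, NULL) < 0)
                return 1;

        char *p = mmap(NULL, pagesz, PROT_READ | PROT_WRITE, MAP_SHARED,
                       fd, 0);
        if (p == MAP_FAILED)
                return 1;
        p[0] = 'x';

        if (madvise(p, pagesz, MADV_HWPOISON) < 0)
                return 1;

        *(volatile char *)p;    /* refault: expect SIGBUS, not zeroes */
        fprintf(stderr, "no SIGBUS delivered\n");
        return 1;
}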