Message ID: 20210930215311.240774-5-shy828301@gmail.com (mailing list archive)
State: New, archived
Series: Solve silent data loss caused by poisoned page cache (shmem/tmpfs)
On Thu, Sep 30, 2021 at 02:53:10PM -0700, Yang Shi wrote:
> The current behavior of memory failure is to truncate the page cache
> regardless of dirty or clean. If the page is dirty the later access
> will get the obsolete data from disk without any notification to the
> users. This may cause silent data loss. It is even worse for shmem
> since shmem is in-memory filesystem, truncating page cache means
> discarding data blocks. The later read would return all zero.
>
> The right approach is to keep the corrupted page in page cache, any
> later access would return error for syscalls or SIGBUS for page fault,
> until the file is truncated, hole punched or removed. The regular
> storage backed filesystems would be more complicated so this patch
> is focused on shmem. This also unblock the support for soft
> offlining shmem THP.
>
> Signed-off-by: Yang Shi <shy828301@gmail.com>
> ---
...
> @@ -894,6 +896,12 @@ static int me_pagecache_clean(struct page_state *ps, struct page *p)
> 		goto out;
> 	}
>
> +	/*
> +	 * The shmem page is kept in page cache instead of truncating
> +	 * so need decrement the refcount from page cache.
> +	 */

This comment seems confusing to me because no refcount is decremented here.
What the variable dec tries to do is to give the expected value of the
refcount of the error page after successful error handling, which differs
according to the page state before error handling, so dec adjusts it.

How about the below?

+	/*
+	 * The shmem page is kept in page cache instead of truncating
+	 * so is expected to have an extra refcount after error-handling.
+	 */

> +	dec = shmem_mapping(mapping);
> +
> 	/*
> 	 * Truncation is a bit tricky. Enable it per file system for now.
> 	 *
...
> @@ -2466,7 +2467,17 @@ shmem_write_begin(struct file *file, struct address_space *mapping,
> 		return -EPERM;
> 	}
>
> -	return shmem_getpage(inode, index, pagep, SGP_WRITE);
> +	ret = shmem_getpage(inode, index, pagep, SGP_WRITE);
> +
> +	if (*pagep) {
> +		if (PageHWPoison(*pagep)) {

Unless you plan to add some code in the near future, how about merging
these two if statements?

	if (*pagep && PageHWPoison(*pagep)) {

Thanks,
Naoya Horiguchi

> +			unlock_page(*pagep);
> +			put_page(*pagep);
> +			ret = -EIO;
> +		}
> +	}
> +
> +	return ret;
> }
>
> static int
On Fri, Oct 1, 2021 at 12:05 AM Naoya Horiguchi <naoya.horiguchi@linux.dev> wrote:
>
> On Thu, Sep 30, 2021 at 02:53:10PM -0700, Yang Shi wrote:
> > The current behavior of memory failure is to truncate the page cache
> > regardless of dirty or clean. [...]
> >
> > Signed-off-by: Yang Shi <shy828301@gmail.com>
> > ---
> ...
> > @@ -894,6 +896,12 @@ static int me_pagecache_clean(struct page_state *ps, struct page *p)
> > 		goto out;
> > 	}
> >
> > +	/*
> > +	 * The shmem page is kept in page cache instead of truncating
> > +	 * so need decrement the refcount from page cache.
> > +	 */
>
> This comment seems confusing to me because no refcount is decremented here.
> What the variable dec tries to do is to give the expected value of the
> refcount of the error page after successful error handling, which differs
> according to the page state before error handling, so dec adjusts it.
>
> How about the below?
>
> +	/*
> +	 * The shmem page is kept in page cache instead of truncating
> +	 * so is expected to have an extra refcount after error-handling.
> +	 */

Thanks for the suggestion, yes, it seems better.

> > +	dec = shmem_mapping(mapping);
> > +
> > 	/*
> > 	 * Truncation is a bit tricky. Enable it per file system for now.
> > 	 *
> ...
> > @@ -2466,7 +2467,17 @@ shmem_write_begin(struct file *file, struct address_space *mapping,
> > 		return -EPERM;
> > 	}
> >
> > -	return shmem_getpage(inode, index, pagep, SGP_WRITE);
> > +	ret = shmem_getpage(inode, index, pagep, SGP_WRITE);
> > +
> > +	if (*pagep) {
> > +		if (PageHWPoison(*pagep)) {
>
> Unless you plan to add some code in the near future, how about merging
> these two if statements?
>
> 	if (*pagep && PageHWPoison(*pagep)) {

Sure.

> Thanks,
> Naoya Horiguchi
>
> > +			unlock_page(*pagep);
> > +			put_page(*pagep);
> > +			ret = -EIO;
> > +		}
> > +	}
> > +
> > +	return ret;
> > }
> >
> > static int
On Thu, Sep 30, 2021 at 02:53:10PM -0700, Yang Shi wrote:
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 88742953532c..75c36b6a405a 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -2456,6 +2456,7 @@ shmem_write_begin(struct file *file, struct address_space *mapping,
> 	struct inode *inode = mapping->host;
> 	struct shmem_inode_info *info = SHMEM_I(inode);
> 	pgoff_t index = pos >> PAGE_SHIFT;
> +	int ret = 0;
>
> 	/* i_rwsem is held by caller */
> 	if (unlikely(info->seals & (F_SEAL_GROW |
> @@ -2466,7 +2467,17 @@ shmem_write_begin(struct file *file, struct address_space *mapping,
> 		return -EPERM;
> 	}
>
> -	return shmem_getpage(inode, index, pagep, SGP_WRITE);
> +	ret = shmem_getpage(inode, index, pagep, SGP_WRITE);
> +
> +	if (*pagep) {
> +		if (PageHWPoison(*pagep)) {
> +			unlock_page(*pagep);
> +			put_page(*pagep);
> +			ret = -EIO;
> +		}
> +	}
> +
> +	return ret;
> }
>
> static int
> @@ -2555,6 +2566,11 @@ static ssize_t shmem_file_read_iter(struct kiocb *iocb, struct iov_iter *to)
> 			unlock_page(page);
> 		}
>
> +		if (page && PageHWPoison(page)) {
> +			error = -EIO;
> +			break;
> +		}
> +
> 		/*
> 		 * We must evaluate after, since reads (unlike writes)
> 		 * are called without i_rwsem protection against truncate

[...]
> @@ -4193,6 +4216,10 @@ struct page *shmem_read_mapping_page_gfp(struct address_space *mapping,
> 		page = ERR_PTR(error);
> 	else
> 		unlock_page(page);
> +
> +	if (PageHWPoison(page))
> +		page = ERR_PTR(-EIO);
> +
> 	return page;
> #else
> 	/*
> diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
> index 7a9008415534..b688d5327177 100644
> --- a/mm/userfaultfd.c
> +++ b/mm/userfaultfd.c
> @@ -233,6 +233,11 @@ static int mcontinue_atomic_pte(struct mm_struct *dst_mm,
> 		goto out;
> 	}
>
> +	if (PageHWPoison(page)) {
> +		ret = -EIO;
> +		goto out_release;
> +	}
> +
> 	ret = mfill_atomic_install_pte(dst_mm, dst_pmd, dst_vma, dst_addr,
> 				       page, false, wp_copy);
> 	if (ret)
> --
> 2.26.2
>

These are shmem_getpage_gfp() call sites:

  shmem_getpage[151]                 return shmem_getpage_gfp(inode, index, pagep, sgp,
  shmem_fault[2112]                  err = shmem_getpage_gfp(inode, vmf->pgoff, &vmf->page, SGP_CACHE,
  shmem_read_mapping_page_gfp[4188]  error = shmem_getpage_gfp(inode, index, &page, SGP_CACHE,

These are further shmem_getpage() call sites:

  collapse_file[1735]        if (shmem_getpage(mapping->host, index, &page,
  shmem_undo_range[965]      shmem_getpage(inode, start - 1, &page, SGP_READ);
  shmem_undo_range[980]      shmem_getpage(inode, end, &page, SGP_READ);
  shmem_write_begin[2467]    return shmem_getpage(inode, index, pagep, SGP_WRITE);
  shmem_file_read_iter[2544] error = shmem_getpage(inode, index, &page, sgp);
  shmem_fallocate[2733]      error = shmem_getpage(inode, index, &page, SGP_FALLOC);
  shmem_symlink[3079]        error = shmem_getpage(inode, 0, &page, SGP_WRITE);
  shmem_get_link[3120]       error = shmem_getpage(inode, 0, &page, SGP_READ);
  mcontinue_atomic_pte[235]  ret = shmem_getpage(inode, pgoff, &page, SGP_READ);

Wondering whether this patch covered all of them.

This also reminded me: should we simply fail shmem_getpage_gfp() directly,
so all the above callers get a proper failure, rather than doing the
PageHWPoison() check everywhere?
On Mon, Oct 11, 2021 at 6:57 PM Peter Xu <peterx@redhat.com> wrote:
>
> On Thu, Sep 30, 2021 at 02:53:10PM -0700, Yang Shi wrote:
> > diff --git a/mm/shmem.c b/mm/shmem.c
> > index 88742953532c..75c36b6a405a 100644
> > [...]
> > [...]
>
> These are shmem_getpage_gfp() call sites:
> [...]
> These are further shmem_getpage() call sites:
> [...]
>
> Wondering whether this patch covered all of them.

No, it doesn't need to. Not all places care about a hwpoisoned page, for
example, truncate, hole punch, etc. Only the APIs which return the data
back to userspace or write it back to disk need to care about whether the
data is corrupted or not. This has been elaborated in the cover letter.

> This also reminded me that whether we should simply fail shmem_getpage_gfp()
> directly, then all above callers will get a proper failure, rather than we do
> PageHWPoison() check everywhere?

Actually I did a prototype of this approach by returning ERR_PTR(-EIO).
But then all the callers would have to check this return value even though
they don't care about hwpoisoned pages, since all the callers (not only
shmem, but also all other filesystems) just check whether the page is NULL
and never check whether it is an error pointer. That actually incurs more
changes, which sounds suboptimal IMHO. So I just treat hwpoison like the
other page flags, for example, Uptodate, and have callers check it when
necessary.

>
> --
> Peter Xu
>
On Tue, Oct 12, 2021 at 12:17:33PM -0700, Yang Shi wrote:
> On Mon, Oct 11, 2021 at 6:57 PM Peter Xu <peterx@redhat.com> wrote:
> > [...]
> >
> > Wondering whether this patch covered all of them.
>
> No, it doesn't need. Not all places care about hwpoison page, for
> example, truncate, hole punch, etc. Only the APIs which return the
> data back to userspace or write back to disk need care about if the
> data is corrupted or not. This has been elaborated in the cover
> letter.

I see, sorry I missed that. However I still have two entries in the above
list that I am unsure this patch covers (besides the fault path, truncate, ...):

  collapse_file[1735]   if (shmem_getpage(mapping->host, index, &page,
  shmem_get_link[3120]  error = shmem_getpage(inode, 0, &page, SGP_READ);

IIUC the 1st one is when we want to collapse a file THP; should we stop the
attempt if we see a hwpoisoned small page?

The 2nd one should be where we had a symlink shmem file and the 1st page,
which stores the link, got corrupted. Should we fail the get_link() then?
On Tue, Oct 12, 2021 at 3:27 PM Peter Xu <peterx@redhat.com> wrote:
>
> On Tue, Oct 12, 2021 at 12:17:33PM -0700, Yang Shi wrote:
> > [...]
>
> I see, sorry I missed that. However I still have two entries unsure in above
> list that this patch didn't cover (besides fault path, truncate, ...):
>
>   collapse_file[1735]   if (shmem_getpage(mapping->host, index, &page,
>   shmem_get_link[3120]  error = shmem_getpage(inode, 0, &page, SGP_READ);
>
> IIUC the 1st one is when we want to collapse a file thp, should we stop the
> attempt if we see a hwpoison small page?

The page refcount could stop the collapse of a hwpoisoned page. One could
argue that khugepaged could bail out earlier by checking the hwpoison flag,
but that is definitely not a must. So it relies on the refcount for now.

> The 2nd one should be where we had a symlink shmem file and the 1st page which
> stores the link got corrupted. Should we fail the get_link() then?

Thanks for catching this. Yeah, it seems this one was overlooked. I didn't
realize that reading a symlink needed to copy the first page. Will fix it
in the next version.

>
> --
> Peter Xu
>
On Tue, Oct 12, 2021 at 08:00:31PM -0700, Yang Shi wrote:
> The page refcount could stop collapsing hwpoison page. One could argue
> khugepaged could bail out earlier by checking hwpoison flag, but it is
> definitely not a must do. So it relies on refcount now.

I suppose you mean the page_ref_freeze() in collapse_file()? Yeah, that
seems to work too. Thanks,
On Tue, Oct 12, 2021 at 8:07 PM Peter Xu <peterx@redhat.com> wrote:
>
> On Tue, Oct 12, 2021 at 08:00:31PM -0700, Yang Shi wrote:
> > The page refcount could stop collapsing hwpoison page. One could argue
> > khugepaged could bail out earlier by checking hwpoison flag, but it is
> > definitely not a must do. So it relies on refcount now.
>
> I suppose you mean the page_ref_freeze() in collapse_file()? Yeah that seems
> to work too. Thanks,

Yes, exactly.

>
> --
> Peter Xu
>
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 562bcf335bd2..176883cd080f 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -57,6 +57,7 @@
 #include <linux/ratelimit.h>
 #include <linux/page-isolation.h>
 #include <linux/pagewalk.h>
+#include <linux/shmem_fs.h>
 #include "internal.h"
 #include "ras/ras_event.h"

@@ -866,6 +867,7 @@ static int me_pagecache_clean(struct page_state *ps, struct page *p)
 {
 	int ret;
 	struct address_space *mapping;
+	bool dec;

 	delete_from_lru_cache(p);

@@ -894,6 +896,12 @@ static int me_pagecache_clean(struct page_state *ps, struct page *p)
 		goto out;
 	}

+	/*
+	 * The shmem page is kept in page cache instead of truncating
+	 * so need decrement the refcount from page cache.
+	 */
+	dec = shmem_mapping(mapping);
+
 	/*
 	 * Truncation is a bit tricky. Enable it per file system for now.
 	 *
@@ -903,7 +911,7 @@ static int me_pagecache_clean(struct page_state *ps, struct page *p)
 out:
 	unlock_page(p);

-	if (has_extra_refcount(ps, p, false))
+	if (has_extra_refcount(ps, p, dec))
 		ret = MF_FAILED;

 	return ret;
diff --git a/mm/shmem.c b/mm/shmem.c
index 88742953532c..75c36b6a405a 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2456,6 +2456,7 @@ shmem_write_begin(struct file *file, struct address_space *mapping,
 	struct inode *inode = mapping->host;
 	struct shmem_inode_info *info = SHMEM_I(inode);
 	pgoff_t index = pos >> PAGE_SHIFT;
+	int ret = 0;

 	/* i_rwsem is held by caller */
 	if (unlikely(info->seals & (F_SEAL_GROW |
@@ -2466,7 +2467,17 @@ shmem_write_begin(struct file *file, struct address_space *mapping,
 		return -EPERM;
 	}

-	return shmem_getpage(inode, index, pagep, SGP_WRITE);
+	ret = shmem_getpage(inode, index, pagep, SGP_WRITE);
+
+	if (*pagep) {
+		if (PageHWPoison(*pagep)) {
+			unlock_page(*pagep);
+			put_page(*pagep);
+			ret = -EIO;
+		}
+	}
+
+	return ret;
 }

 static int
@@ -2555,6 +2566,11 @@ static ssize_t shmem_file_read_iter(struct kiocb *iocb, struct iov_iter *to)
 			unlock_page(page);
 		}

+		if (page && PageHWPoison(page)) {
+			error = -EIO;
+			break;
+		}
+
 		/*
 		 * We must evaluate after, since reads (unlike writes)
 		 * are called without i_rwsem protection against truncate
@@ -3772,6 +3788,13 @@ static void shmem_destroy_inodecache(void)
 	kmem_cache_destroy(shmem_inode_cachep);
 }

+/* Keep the page in page cache instead of truncating it */
+static int shmem_error_remove_page(struct address_space *mapping,
+				   struct page *page)
+{
+	return 0;
+}
+
 const struct address_space_operations shmem_aops = {
 	.writepage	= shmem_writepage,
 	.set_page_dirty	= __set_page_dirty_no_writeback,
@@ -3782,7 +3805,7 @@ const struct address_space_operations shmem_aops = {
 #ifdef CONFIG_MIGRATION
 	.migratepage	= migrate_page,
 #endif
-	.error_remove_page = generic_error_remove_page,
+	.error_remove_page = shmem_error_remove_page,
 };
 EXPORT_SYMBOL(shmem_aops);

@@ -4193,6 +4216,10 @@ struct page *shmem_read_mapping_page_gfp(struct address_space *mapping,
 		page = ERR_PTR(error);
 	else
 		unlock_page(page);
+
+	if (PageHWPoison(page))
+		page = ERR_PTR(-EIO);
+
 	return page;
 #else
 	/*
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 7a9008415534..b688d5327177 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -233,6 +233,11 @@ static int mcontinue_atomic_pte(struct mm_struct *dst_mm,
 		goto out;
 	}

+	if (PageHWPoison(page)) {
+		ret = -EIO;
+		goto out_release;
+	}
+
 	ret = mfill_atomic_install_pte(dst_mm, dst_pmd, dst_vma, dst_addr,
 				       page, false, wp_copy);
 	if (ret)
The current behavior of memory failure is to truncate the page cache
regardless of dirty or clean. If the page is dirty the later access
will get the obsolete data from disk without any notification to the
users. This may cause silent data loss. It is even worse for shmem,
since shmem is an in-memory filesystem: truncating the page cache means
discarding the data blocks, and a later read would return all zeros.

The right approach is to keep the corrupted page in the page cache; any
later access would return an error for syscalls or SIGBUS for page faults,
until the file is truncated, hole punched or removed. The regular
storage-backed filesystems would be more complicated, so this patch
is focused on shmem. This also unblocks the support for soft
offlining shmem THP.

Signed-off-by: Yang Shi <shy828301@gmail.com>
---
 mm/memory-failure.c | 10 +++++++++-
 mm/shmem.c          | 31 +++++++++++++++++++++++++++++--
 mm/userfaultfd.c    |  5 +++++
 3 files changed, 43 insertions(+), 3 deletions(-)