Message ID: 20230418061933.3282785-1-stevensd@google.com
State:      New
Series:     mm/shmem: Fix race in shmem_undo_range w/THP
On Tue, Apr 18, 2023 at 3:22 PM David Stevens <stevensd@chromium.org> wrote:
>
> From: David Stevens <stevensd@chromium.org>
>
> Split folios during the second loop of shmem_undo_range. It's not
> sufficient to only split folios when dealing with partial pages, since
> it's possible for a THP to be faulted in after that point. Calling
> truncate_inode_folio in that situation can result in throwing away data
> outside of the range being targeted.
>
> Fixes: b9a8a4195c7d ("truncate,shmem: Handle truncates that split large folios")
> Cc: stable@vger.kernel.org
> Signed-off-by: David Stevens <stevensd@chromium.org>
> ---
>  mm/shmem.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 9218c955f482..317cbeb0fb6b 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -1033,7 +1033,7 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
>  			}
>  			VM_BUG_ON_FOLIO(folio_test_writeback(folio),
>  					folio);
> -			truncate_inode_folio(mapping, folio);
> +			truncate_inode_partial_folio(folio, lstart, lend);

It was pointed out to me that truncate_inode_partial_folio only
sometimes frees the target pages. So this patch does fix the data loss,
but it ends up making partial hole punches on a THP not actually free
memory. I'll send out a v2 that properly calls truncate_inode_folio
after splitting a THP.

-David

>  		}
>  		folio_unlock(folio);
>  	}
> --
> 2.40.0.634.g4ca3ef3211-goog