
[37/48] truncate: Convert invalidate_inode_pages2_range() to use a folio

Message ID: 20211208042256.1923824-38-willy@infradead.org
State: New, archived
Series: Folios for 5.17

Commit Message

Matthew Wilcox (Oracle) Dec. 8, 2021, 4:22 a.m. UTC
If we're going to unmap a folio, we have to be sure to unmap the entire
folio, not just the part of it which lies after the search index.

We cannot yet remove the struct page from invalidate_inode_pages2_range()
because the entry in the pvec might be a shadow/dax/swap entry rather
than an actual page.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/truncate.c | 31 +++++++++++++++++--------------
 1 file changed, 17 insertions(+), 14 deletions(-)

Comments

Christoph Hellwig Dec. 23, 2021, 8:21 a.m. UTC | #1
On Wed, Dec 08, 2021 at 04:22:45AM +0000, Matthew Wilcox (Oracle) wrote:
> If we're going to unmap a folio, we have to be sure to unmap the entire
> folio, not just the part of it which lies after the search index.
> 
> We cannot yet remove the struct page from invalidate_inode_pages2_range()
> because the entry in the pvec might be a shadow/dax/swap entry rather
> than an actual page.
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>

Looks good,

Reviewed-by: Christoph Hellwig <hch@lst.de>

Patch

diff --git a/mm/truncate.c b/mm/truncate.c
index 0df420c1cf5b..ef6980b240e2 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -642,8 +642,9 @@  int invalidate_inode_pages2_range(struct address_space *mapping,
 	while (find_get_entries(mapping, index, end, &pvec, indices)) {
 		for (i = 0; i < pagevec_count(&pvec); i++) {
 			struct page *page = pvec.pages[i];
+			struct folio *folio;
 
-			/* We rely upon deletion not changing page->index */
+			/* We rely upon deletion not changing folio->index */
 			index = indices[i];
 
 			if (xa_is_value(page)) {
@@ -652,10 +653,11 @@  int invalidate_inode_pages2_range(struct address_space *mapping,
 					ret = -EBUSY;
 				continue;
 			}
+			folio = page_folio(page);
 
-			if (!did_range_unmap && page_mapped(page)) {
+			if (!did_range_unmap && folio_mapped(folio)) {
 				/*
-				 * If page is mapped, before taking its lock,
+				 * If folio is mapped, before taking its lock,
 				 * zap the rest of the file in one hit.
 				 */
 				unmap_mapping_pages(mapping, index,
@@ -663,26 +665,27 @@  int invalidate_inode_pages2_range(struct address_space *mapping,
 				did_range_unmap = 1;
 			}
 
-			lock_page(page);
-			WARN_ON(page_to_index(page) != index);
-			if (page->mapping != mapping) {
-				unlock_page(page);
+			folio_lock(folio);
+			VM_BUG_ON_FOLIO(!folio_contains(folio, index), folio);
+			if (folio->mapping != mapping) {
+				folio_unlock(folio);
 				continue;
 			}
-			wait_on_page_writeback(page);
+			folio_wait_writeback(folio);
 
-			if (page_mapped(page))
-				unmap_mapping_folio(page_folio(page));
-			BUG_ON(page_mapped(page));
+			if (folio_mapped(folio))
+				unmap_mapping_folio(folio);
+			BUG_ON(folio_mapped(folio));
 
-			ret2 = do_launder_page(mapping, page);
+			ret2 = do_launder_page(mapping, &folio->page);
 			if (ret2 == 0) {
-				if (!invalidate_complete_page2(mapping, page))
+				if (!invalidate_complete_page2(mapping,
+								&folio->page))
 					ret2 = -EBUSY;
 			}
 			if (ret2 < 0)
 				ret = ret2;
-			unlock_page(page);
+			folio_unlock(folio);
 		}
 		pagevec_remove_exceptionals(&pvec);
 		pagevec_release(&pvec);