[v2,28/46] mm/writeback: Add filemap_dirty_folio()

Message ID 20210622121551.3398730-29-willy@infradead.org
State New
Series Folio-enabling the page cache

Commit Message

Matthew Wilcox June 22, 2021, 12:15 p.m. UTC
Reimplement __set_page_dirty_nobuffers() as a wrapper around
filemap_dirty_folio().  This can use a cast to struct folio
because we know that the ->set_page_dirty address space op
is always called with a page pointer that happens to also be
a folio pointer.  Saves 7 bytes of kernel text.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/writeback.h |  1 +
 mm/page-writeback.c       | 64 ++++++++++++++++++++++-----------------
 2 files changed, 37 insertions(+), 28 deletions(-)

Comments

Christoph Hellwig June 23, 2021, 9:32 a.m. UTC | #1
On Tue, Jun 22, 2021 at 01:15:33PM +0100, Matthew Wilcox (Oracle) wrote:
> Reimplement __set_page_dirty_nobuffers() as a wrapper around
> filemap_dirty_folio().  This can use a cast to struct folio
> because we know that the ->set_page_dirty address space op
> is always called with a page pointer that happens to also be
> a folio pointer.  Saves 7 bytes of kernel text.

Modulo the cast comment from the last patch this looks fine:

Reviewed-by: Christoph Hellwig <hch@lst.de>
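[For context: the "cast comment" presumably refers to preferring the page_folio() helper, introduced earlier in the folio work, over an open-coded (struct folio *) cast. A minimal sketch of what the wrapper below would look like with that change; this is illustrative, not part of the posted patch:]

int __set_page_dirty_nobuffers(struct page *page)
{
	/* page_folio() resolves the containing folio rather than casting */
	return filemap_dirty_folio(page_mapping(page), page_folio(page));
}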

Patch

diff --git a/include/linux/writeback.h b/include/linux/writeback.h
index 8e5c5bb16e2d..aa372f6d2b55 100644
--- a/include/linux/writeback.h
+++ b/include/linux/writeback.h
@@ -398,6 +398,7 @@  void writeback_set_ratelimit(void);
 void tag_pages_for_writeback(struct address_space *mapping,
 			     pgoff_t start, pgoff_t end);
 
+bool filemap_dirty_folio(struct address_space *mapping, struct folio *folio);
 void account_page_redirty(struct page *page);
 
 void sb_mark_inode_writeback(struct inode *inode);
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index a7989870b171..64b989eff9f5 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -2484,39 +2484,47 @@  void __folio_mark_dirty(struct folio *folio, struct address_space *mapping,
 	xa_unlock_irqrestore(&mapping->i_pages, flags);
 }
 
-/*
- * For address_spaces which do not use buffers.  Just tag the page as dirty in
- * the xarray.
- *
- * This is also used when a single buffer is being dirtied: we want to set the
- * page dirty in that case, but not all the buffers.  This is a "bottom-up"
- * dirtying, whereas __set_page_dirty_buffers() is a "top-down" dirtying.
- *
- * The caller must ensure this doesn't race with truncation.  Most will simply
- * hold the page lock, but e.g. zap_pte_range() calls with the page mapped and
- * the pte lock held, which also locks out truncation.
+/**
+ * filemap_dirty_folio - Mark a folio dirty for filesystems which do not use buffer_heads.
+ * @mapping: Address space this folio belongs to.
+ * @folio: Folio to be marked as dirty.
+ *
+ * Filesystems which do not use buffer heads should call this function
+ * from their set_page_dirty address space operation.  It ignores the
+ * contents of folio_private(), so if the filesystem marks individual
+ * blocks as dirty, the filesystem should handle that itself.
+ *
+ * This is also sometimes used by filesystems which use buffer_heads when
+ * a single buffer is being dirtied: we want to set the folio dirty in
+ * that case, but not all the buffers.  This is a "bottom-up" dirtying,
+ * whereas __set_page_dirty_buffers() is a "top-down" dirtying.
+ *
+ * The caller must ensure this doesn't race with truncation.  Most will
+ * simply hold the folio lock, but e.g. zap_pte_range() calls with the
+ * folio mapped and the pte lock held, which also locks out truncation.
  */
-int __set_page_dirty_nobuffers(struct page *page)
+bool filemap_dirty_folio(struct address_space *mapping, struct folio *folio)
 {
-	lock_page_memcg(page);
-	if (!TestSetPageDirty(page)) {
-		struct address_space *mapping = page_mapping(page);
+	lock_folio_memcg(folio);
+	if (folio_test_set_dirty_flag(folio)) {
+		unlock_folio_memcg(folio);
+		return false;
+	}
 
-		if (!mapping) {
-			unlock_page_memcg(page);
-			return 1;
-		}
-		__set_page_dirty(page, mapping, !PagePrivate(page));
-		unlock_page_memcg(page);
+	__folio_mark_dirty(folio, mapping, !folio_private(folio));
+	unlock_folio_memcg(folio);
 
-		if (mapping->host) {
-			/* !PageAnon && !swapper_space */
-			__mark_inode_dirty(mapping->host, I_DIRTY_PAGES);
-		}
-		return 1;
+	if (mapping->host) {
+		/* !PageAnon && !swapper_space */
+		__mark_inode_dirty(mapping->host, I_DIRTY_PAGES);
 	}
-	unlock_page_memcg(page);
-	return 0;
+	return true;
+}
+EXPORT_SYMBOL(filemap_dirty_folio);
+
+int __set_page_dirty_nobuffers(struct page *page)
+{
+	return filemap_dirty_folio(page_mapping(page), (struct folio *)page);
 }
 EXPORT_SYMBOL(__set_page_dirty_nobuffers);
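
[Illustration only: per the kernel-doc above, a filesystem which does not use buffer_heads could route its ->set_page_dirty operation through the new helper roughly as follows. "myfs" and its aops table are hypothetical and not part of this patch:]

static int myfs_set_page_dirty(struct page *page)
{
	/*
	 * Tag the folio dirty in the xarray and dirty the inode;
	 * folio_private() is ignored, so per-block dirty state (if any)
	 * remains the filesystem's responsibility.
	 */
	return filemap_dirty_folio(page_mapping(page), page_folio(page));
}

static const struct address_space_operations myfs_aops = {
	.set_page_dirty	= myfs_set_page_dirty,
	/* other operations elided */
};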