Message ID | 20210715033704.692967-72-willy@infradead.org (mailing list archive) |
---|---|
State | New, archived |
Series | Memory folios |
Matthew Wilcox (Oracle) <willy@infradead.org> wrote:

> Reimplement __set_page_dirty_nobuffers() as a wrapper around
> filemap_dirty_folio().
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> Reviewed-by: Christoph Hellwig <hch@lst.de>

Reviewed-by: David Howells <dhowells@redhat.com>
On 7/15/21 5:35 AM, Matthew Wilcox (Oracle) wrote:
> Reimplement __set_page_dirty_nobuffers() as a wrapper around
> filemap_dirty_folio().

I assume it becomes obvious later why the new "mapping" parameter instead of
taking it from the folio, but maybe the changelog should say it here?

> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> Reviewed-by: Christoph Hellwig <hch@lst.de>

Acked-by: Vlastimil Babka <vbabka@suse.cz>
On Thu, Aug 12, 2021 at 06:07:05PM +0200, Vlastimil Babka wrote:
> On 7/15/21 5:35 AM, Matthew Wilcox (Oracle) wrote:
> > Reimplement __set_page_dirty_nobuffers() as a wrapper around
> > filemap_dirty_folio().
>
> I assume it becomes obvious later why the new "mapping" parameter instead of
> taking it from the folio, but maybe the changelog should say it here?

---
mm/writeback: Add filemap_dirty_folio()

Reimplement __set_page_dirty_nobuffers() as a wrapper around
filemap_dirty_folio().  Eventually folio_mark_dirty() will pass the
folio's mapping to the address space's ->dirty_folio() operation, so
add the parameter to filemap_dirty_folio() now.
---

Nobody seems quite sure whether it's possible to truncate (or otherwise
remove) a page from a file while it's being marked as dirty.  viz:

	int set_page_dirty(struct page *page)
	{
		struct address_space *mapping = page_mapping(page);

		if (likely(mapping)) {
			...
			return mapping->a_ops->set_page_dirty(page);
		}

so ->set_page_dirty can only be called if the page has a mapping
(obviously, otherwise we wouldn't know whose ->set_page_dirty to call).
But then in __set_page_dirty_nobuffers(), we check to see if the
mapping has become unset:

	if (!TestSetPageDirty(page)) {
		struct address_space *mapping = page_mapping(page);

		if (!mapping) {
			unlock_page_memcg(page);
			return 1;
		}

Confusingly, the comment to __set_page_dirty_nobuffers says:

 * The caller must ensure this doesn't race with truncation.  Most will simply
 * hold the page lock, but e.g. zap_pte_range() calls with the page mapped and
 * the pte lock held, which also locks out truncation.

I believe this is left-over from commit 2d6d7f982846 in 2015.

Anyway, passing mapping as a parameter is something we already do for
just about every other address_space operation, and we already called
page_mapping() to get it, so why make the callee call it again?  Not to
mention people get confused about whether to call page_mapping() or just
look at page->mapping.

Changing the ->set_page_dirty() operation to ->dirty_folio() is something
I've postponed until the 5.17/5.18 timeframe, but we might as well pass
the parameter to filemap_dirty_folio() now.
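To make the parameter question concrete, here is a condensed sketch of the dispatch path being described. It is not the kernel source: the function body is heavily trimmed, and the ->dirty_folio(mapping, folio) call shows the future operation Willy refers to rather than anything in this patch.

```c
/* Condensed sketch only -- not the actual mm/page-writeback.c code. */
bool folio_mark_dirty(struct folio *folio)
{
	struct address_space *mapping = folio_mapping(folio);

	if (likely(mapping)) {
		/*
		 * The mapping has already been looked up in order to find
		 * the right operation, so once ->set_page_dirty becomes
		 * ->dirty_folio() it can simply be passed down instead of
		 * being re-derived by the callee.
		 */
		return mapping->a_ops->dirty_folio(mapping, folio);
	}
	/* Anonymous/unmapped handling elided. */
	return false;
}
```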
```diff
diff --git a/include/linux/writeback.h b/include/linux/writeback.h
index 667e86cfbdcf..eda9cc778ef6 100644
--- a/include/linux/writeback.h
+++ b/include/linux/writeback.h
@@ -398,6 +398,7 @@ void writeback_set_ratelimit(void);
 void tag_pages_for_writeback(struct address_space *mapping,
 			     pgoff_t start, pgoff_t end);
 
+bool filemap_dirty_folio(struct address_space *mapping, struct folio *folio);
 void account_page_redirty(struct page *page);
 
 void sb_mark_inode_writeback(struct inode *inode);
diff --git a/mm/folio-compat.c b/mm/folio-compat.c
index 2c2b3917b5dc..dad962b920e5 100644
--- a/mm/folio-compat.c
+++ b/mm/folio-compat.c
@@ -83,3 +83,9 @@ bool set_page_dirty(struct page *page)
 	return folio_mark_dirty(page_folio(page));
 }
 EXPORT_SYMBOL(set_page_dirty);
+
+int __set_page_dirty_nobuffers(struct page *page)
+{
+	return filemap_dirty_folio(page_mapping(page), page_folio(page));
+}
+EXPORT_SYMBOL(__set_page_dirty_nobuffers);
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 2dc410b110ff..bd97c461d499 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -2488,41 +2488,43 @@ void __folio_mark_dirty(struct folio *folio, struct address_space *mapping,
 	xa_unlock_irqrestore(&mapping->i_pages, flags);
 }
 
-/*
- * For address_spaces which do not use buffers.  Just tag the page as dirty in
- * the xarray.
- *
- * This is also used when a single buffer is being dirtied: we want to set the
- * page dirty in that case, but not all the buffers.  This is a "bottom-up"
- * dirtying, whereas __set_page_dirty_buffers() is a "top-down" dirtying.
- *
- * The caller must ensure this doesn't race with truncation.  Most will simply
- * hold the page lock, but e.g. zap_pte_range() calls with the page mapped and
- * the pte lock held, which also locks out truncation.
+/**
+ * filemap_dirty_folio - Mark a folio dirty for filesystems which do not use buffer_heads.
+ * @mapping: Address space this folio belongs to.
+ * @folio: Folio to be marked as dirty.
+ *
+ * Filesystems which do not use buffer heads should call this function
+ * from their set_page_dirty address space operation.  It ignores the
+ * contents of folio_get_private(), so if the filesystem marks individual
+ * blocks as dirty, the filesystem should handle that itself.
+ *
+ * This is also sometimes used by filesystems which use buffer_heads when
+ * a single buffer is being dirtied: we want to set the folio dirty in
+ * that case, but not all the buffers.  This is a "bottom-up" dirtying,
+ * whereas __set_page_dirty_buffers() is a "top-down" dirtying.
+ *
+ * The caller must ensure this doesn't race with truncation.  Most will
+ * simply hold the folio lock, but e.g. zap_pte_range() calls with the
+ * folio mapped and the pte lock held, which also locks out truncation.
  */
-int __set_page_dirty_nobuffers(struct page *page)
+bool filemap_dirty_folio(struct address_space *mapping, struct folio *folio)
 {
-	lock_page_memcg(page);
-	if (!TestSetPageDirty(page)) {
-		struct address_space *mapping = page_mapping(page);
+	folio_memcg_lock(folio);
+	if (folio_test_set_dirty(folio)) {
+		folio_memcg_unlock(folio);
+		return false;
+	}
 
-		if (!mapping) {
-			unlock_page_memcg(page);
-			return 1;
-		}
-		__set_page_dirty(page, mapping, !PagePrivate(page));
-		unlock_page_memcg(page);
+	__folio_mark_dirty(folio, mapping, !folio_test_private(folio));
+	folio_memcg_unlock(folio);
 
-		if (mapping->host) {
-			/* !PageAnon && !swapper_space */
-			__mark_inode_dirty(mapping->host, I_DIRTY_PAGES);
-		}
-		return 1;
+	if (mapping->host) {
+		/* !PageAnon && !swapper_space */
+		__mark_inode_dirty(mapping->host, I_DIRTY_PAGES);
 	}
-	unlock_page_memcg(page);
-	return 0;
+	return true;
 }
-EXPORT_SYMBOL(__set_page_dirty_nobuffers);
+EXPORT_SYMBOL(filemap_dirty_folio);
 
 /*
  * Call this whenever redirtying a page, to de-account the dirty counters
```
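For context, this is roughly how a filesystem that does not use buffer_heads would sit on top of the new helper at this point in the series, where the address_space operation is still ->set_page_dirty. The example_* names are hypothetical and not part of the patch; the wrapper body mirrors the mm/folio-compat.c code added above.

```c
#include <linux/fs.h>
#include <linux/pagemap.h>
#include <linux/writeback.h>

/* Hypothetical filesystem glue, not part of the patch. */
static int example_set_page_dirty(struct page *page)
{
	/*
	 * set_page_dirty() already derived the mapping in order to find
	 * this operation; the generic helper takes it as an explicit
	 * parameter rather than calling page_mapping() again internally.
	 */
	return filemap_dirty_folio(page_mapping(page), page_folio(page));
}

static const struct address_space_operations example_aops = {
	.set_page_dirty	= example_set_page_dirty,
	/* other operations elided */
};
```

In practice, most such filesystems can simply keep pointing .set_page_dirty at __set_page_dirty_nobuffers, which this patch reimplements as exactly this kind of wrapper.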