Message ID: 20210622121551.3398730-28-willy@infradead.org (mailing list archive)
State: New, archived
Series: Folio-enabling the page cache
On Tue, Jun 22, 2021 at 01:15:32PM +0100, Matthew Wilcox (Oracle) wrote:
> Turn __set_page_dirty() into a wrapper around __folio_mark_dirty() (which
> can directly cast from page to folio because we know that set_page_dirty()
> calls filesystems with the head page). Convert account_page_dirtied()
> into folio_account_dirtied() and account the number of pages in the folio.

Is it really worth micro-optimizing a transitional function like that?
I'd rather eat the overhead of the compound_page() call over adding hacky
casts like this.
On Wed, Jun 23, 2021 at 11:27:12AM +0200, Christoph Hellwig wrote:
> On Tue, Jun 22, 2021 at 01:15:32PM +0100, Matthew Wilcox (Oracle) wrote:
> > Turn __set_page_dirty() into a wrapper around __folio_mark_dirty() (which
> > can directly cast from page to folio because we know that set_page_dirty()
> > calls filesystems with the head page). Convert account_page_dirtied()
> > into folio_account_dirtied() and account the number of pages in the folio.
>
> Is it really worth micro-optimizing a transitional function like that?
> I'd rather eat the overhead of the compound_page() call over adding hacky
> casts like this.

Fair enough.  There's only three calls to it and one of them goes away
this series.
On Thu, Jun 24, 2021 at 07:37:30PM +0100, Matthew Wilcox wrote:
> On Wed, Jun 23, 2021 at 11:27:12AM +0200, Christoph Hellwig wrote:
> > On Tue, Jun 22, 2021 at 01:15:32PM +0100, Matthew Wilcox (Oracle) wrote:
> > > Turn __set_page_dirty() into a wrapper around __folio_mark_dirty() (which
> > > can directly cast from page to folio because we know that set_page_dirty()
> > > calls filesystems with the head page). Convert account_page_dirtied()
> > > into folio_account_dirtied() and account the number of pages in the folio.
> >
> > Is it really worth micro-optimizing a transitional function like that?
> > I'd rather eat the overhead of the compound_page() call over adding hacky
> > casts like this.
>
> Fair enough.  There's only three calls to it and one of them goes away
> this series.

The other option would be a helper that asserts a page is not a tail
page and then does the cast, to document the assumptions.
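A userspace sketch of the checked-cast helper suggested here. All names here (PG_tail, page_folio_checked) and the stripped-down struct layouts are hypothetical stand-ins, not the kernel's; the kernel would use its real page flags and a VM_BUG_ON_PGFLAGS()-style assertion instead of assert():

```c
#include <assert.h>

#define PG_tail (1UL << 0)	/* stand-in for the real tail-page marker */

struct page {
	unsigned long flags;
};

struct folio {
	struct page page;	/* a folio overlays its head page */
};

static int PageTail(const struct page *page)
{
	return (page->flags & PG_tail) != 0;
}

/*
 * Document (and, in debug builds, enforce) the head-page assumption
 * behind the page-to-folio cast.
 */
static struct folio *page_folio_checked(struct page *page)
{
	assert(!PageTail(page));
	return (struct folio *)page;
}
```

The point of such a helper is purely documentary in production builds: the cast compiles to nothing, but the assertion names the invariant the caller is relying on.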
On Mon, Jun 28, 2021 at 07:03:26AM +0100, Christoph Hellwig wrote:
> On Thu, Jun 24, 2021 at 07:37:30PM +0100, Matthew Wilcox wrote:
> > On Wed, Jun 23, 2021 at 11:27:12AM +0200, Christoph Hellwig wrote:
> > > On Tue, Jun 22, 2021 at 01:15:32PM +0100, Matthew Wilcox (Oracle) wrote:
> > > > Turn __set_page_dirty() into a wrapper around __folio_mark_dirty() (which
> > > > can directly cast from page to folio because we know that set_page_dirty()
> > > > calls filesystems with the head page). Convert account_page_dirtied()
> > > > into folio_account_dirtied() and account the number of pages in the folio.
> > >
> > > Is it really worth micro-optimizing a transitional function like that?
> > > I'd rather eat the overhead of the compound_page() call over adding hacky
> > > casts like this.
> >
> > Fair enough.  There's only three calls to it and one of them goes away
> > this series.
>
> The other option would be a helper that asserts a page is not a tail
> page and then does the cast, to document the assumptions.

btw, every call to folio_flags() checks !PageTail:

	struct page *page = &folio->page;

	VM_BUG_ON_PGFLAGS(PageTail(page), page);

Now, that's not going to be turned on for regular builds, but it does
give us a _lot_ of runtime assertions that somebody hasn't cast a tail
page to a folio.
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 00693cb48b5d..f9f05724db3b 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1638,10 +1638,9 @@ void mem_cgroup_wb_stats(struct bdi_writeback *wb, unsigned long *pfilepages,
 void mem_cgroup_track_foreign_dirty_slowpath(struct folio *folio,
 					     struct bdi_writeback *wb);
 
-static inline void mem_cgroup_track_foreign_dirty(struct page *page,
+static inline void mem_cgroup_track_foreign_dirty(struct folio *folio,
 					struct bdi_writeback *wb)
 {
-	struct folio *folio = page_folio(page);
 
 	if (mem_cgroup_disabled())
 		return;
@@ -1666,7 +1665,7 @@ static inline void mem_cgroup_wb_stats(struct bdi_writeback *wb,
 {
 }
 
-static inline void mem_cgroup_track_foreign_dirty(struct page *page,
+static inline void mem_cgroup_track_foreign_dirty(struct folio *folio,
 					struct bdi_writeback *wb)
 {
 }
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 22d756d56404..e6a9756293aa 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -772,8 +772,13 @@ void end_page_writeback(struct page *page);
 void folio_end_writeback(struct folio *folio);
 void wait_for_stable_page(struct page *page);
 void folio_wait_stable(struct folio *folio);
+void __folio_mark_dirty(struct folio *folio, struct address_space *, int warn);
+static inline void __set_page_dirty(struct page *page,
+		struct address_space *mapping, int warn)
+{
+	__folio_mark_dirty((struct folio *)page, mapping, warn);
+}
 
-void __set_page_dirty(struct page *, struct address_space *, int warn);
 int __set_page_dirty_nobuffers(struct page *page);
 int __set_page_dirty_no_writeback(struct page *page);
 
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 73b937955cc1..a7989870b171 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -2417,29 +2417,30 @@ EXPORT_SYMBOL(__set_page_dirty_no_writeback);
  *
  * NOTE: This relies on being atomic wrt interrupts.
  */
-static void account_page_dirtied(struct page *page,
+static void folio_account_dirtied(struct folio *folio,
 		struct address_space *mapping)
 {
 	struct inode *inode = mapping->host;
 
-	trace_writeback_dirty_page(page, mapping);
+	trace_writeback_dirty_page(&folio->page, mapping);
 
 	if (mapping_can_writeback(mapping)) {
 		struct bdi_writeback *wb;
+		long nr = folio_nr_pages(folio);
 
-		inode_attach_wb(inode, page);
+		inode_attach_wb(inode, &folio->page);
 		wb = inode_to_wb(inode);
 
-		__inc_lruvec_page_state(page, NR_FILE_DIRTY);
-		__inc_zone_page_state(page, NR_ZONE_WRITE_PENDING);
-		__inc_node_page_state(page, NR_DIRTIED);
-		inc_wb_stat(wb, WB_RECLAIMABLE);
-		inc_wb_stat(wb, WB_DIRTIED);
-		task_io_account_write(PAGE_SIZE);
-		current->nr_dirtied++;
-		this_cpu_inc(bdp_ratelimits);
+		__lruvec_stat_mod_folio(folio, NR_FILE_DIRTY, nr);
+		__zone_stat_mod_folio(folio, NR_ZONE_WRITE_PENDING, nr);
+		__node_stat_mod_folio(folio, NR_DIRTIED, nr);
+		wb_stat_mod(wb, WB_RECLAIMABLE, nr);
+		wb_stat_mod(wb, WB_DIRTIED, nr);
+		task_io_account_write(nr * PAGE_SIZE);
+		current->nr_dirtied += nr;
+		__this_cpu_add(bdp_ratelimits, nr);
 
-		mem_cgroup_track_foreign_dirty(page, wb);
+		mem_cgroup_track_foreign_dirty(folio, wb);
 	}
 }
 
@@ -2460,24 +2461,24 @@ void account_page_cleaned(struct page *page, struct address_space *mapping,
 }
 
 /*
- * Mark the page dirty, and set it dirty in the page cache, and mark the inode
- * dirty.
+ * Mark the folio dirty, and set it dirty in the page cache, and mark
+ * the inode dirty.
  *
- * If warn is true, then emit a warning if the page is not uptodate and has
+ * If warn is true, then emit a warning if the folio is not uptodate and has
  * not been truncated.
  *
  * The caller must hold lock_page_memcg().
  */
-void __set_page_dirty(struct page *page, struct address_space *mapping,
+void __folio_mark_dirty(struct folio *folio, struct address_space *mapping,
 			int warn)
 {
	unsigned long flags;
 
	xa_lock_irqsave(&mapping->i_pages, flags);
-	if (page->mapping) {	/* Race with truncate? */
-		WARN_ON_ONCE(warn && !PageUptodate(page));
-		account_page_dirtied(page, mapping);
-		__xa_set_mark(&mapping->i_pages, page_index(page),
+	if (folio->mapping) {	/* Race with truncate? */
+		WARN_ON_ONCE(warn && !folio_uptodate(folio));
+		folio_account_dirtied(folio, mapping);
+		__xa_set_mark(&mapping->i_pages, folio_index(folio),
			PAGECACHE_TAG_DIRTY);
	}
	xa_unlock_irqrestore(&mapping->i_pages, flags);
Turn __set_page_dirty() into a wrapper around __folio_mark_dirty() (which
can directly cast from page to folio because we know that set_page_dirty()
calls filesystems with the head page). Convert account_page_dirtied()
into folio_account_dirtied() and account the number of pages in the folio.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/memcontrol.h |  5 ++---
 include/linux/pagemap.h    |  7 ++++++-
 mm/page-writeback.c        | 41 +++++++++++++++++++-------------------
 3 files changed, 29 insertions(+), 24 deletions(-)
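The cast the commit message describes is well-defined because struct folio begins with its head struct page, so a folio pointer and its head-page pointer share an address. A minimal userspace sketch of that wrapper pattern; the *_stub types and functions are illustrative stand-ins, not the kernel's real definitions:

```c
#include <assert.h>
#include <stddef.h>

/*
 * Stub types: the real structures carry many more fields, but the
 * property the wrapper relies on is only that a folio embeds its head
 * struct page as the first member, at offset zero.
 */
struct page {
	unsigned long flags;
	void *mapping;
};

struct folio {
	struct page page;
};

/* Stand-in for __folio_mark_dirty(): sets a pretend dirty bit. */
static void folio_mark_dirty_stub(struct folio *folio)
{
	folio->page.flags |= 1UL;
}

/* The wrapper pattern from the patch: cast the head page and delegate. */
static void set_page_dirty_stub(struct page *page)
{
	folio_mark_dirty_stub((struct folio *)page);
}
```

Because the cast is only valid for head pages, the series leans on the folio_flags() !PageTail check mentioned earlier in the thread to catch misuse in debug builds.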