Message ID | 20210719184001.1750630-7-willy@infradead.org (mailing list archive)
---|---
State | New
Series | Folio support in block + iomap layers
On Mon, Jul 19, 2021 at 07:39:50PM +0100, Matthew Wilcox (Oracle) wrote:
> This is an address_space operation, so its argument must remain as a
> struct page, but we can use a folio internally.

Looks good,

Reviewed-by: Christoph Hellwig <hch@lst.de>
On Mon, Jul 19, 2021 at 07:39:50PM +0100, Matthew Wilcox (Oracle) wrote:
> This is an address_space operation, so its argument must remain as a
> struct page, but we can use a folio internally.
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>

<rant>

/me curses at vger and fsdevel for not delivering this; if I have to
scrape lore to have reliable email, why don't we just use a webpage for
this? <grumble>

</rant>

The patch itself looks good though.

Reviewed-by: Darrick J. Wong <djwong@kernel.org>

--D

> ---
>  fs/iomap/buffered-io.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
> index 83eb5fdcbe05..715b25a1c1e6 100644
> --- a/fs/iomap/buffered-io.c
> +++ b/fs/iomap/buffered-io.c
> @@ -460,15 +460,15 @@ iomap_releasepage(struct page *page, gfp_t gfp_mask)
>  {
>  	struct folio *folio = page_folio(page);
>
> -	trace_iomap_releasepage(page->mapping->host, page_offset(page),
> -			PAGE_SIZE);
> +	trace_iomap_releasepage(folio->mapping->host, folio_pos(folio),
> +			folio_size(folio));
>
>  	/*
>  	 * mm accommodates an old ext3 case where clean pages might not have had
>  	 * the dirty bit cleared. Thus, it can send actual dirty pages to
>  	 * ->releasepage() via shrink_active_list(), skip those here.
>  	 */
> -	if (PageDirty(page) || PageWriteback(page))
> +	if (folio_test_dirty(folio) || folio_test_writeback(folio))
>  		return 0;
>  	iomap_page_release(folio);
>  	return 1;
> --
> 2.30.2
>
diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 83eb5fdcbe05..715b25a1c1e6 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -460,15 +460,15 @@ iomap_releasepage(struct page *page, gfp_t gfp_mask)
 {
 	struct folio *folio = page_folio(page);

-	trace_iomap_releasepage(page->mapping->host, page_offset(page),
-			PAGE_SIZE);
+	trace_iomap_releasepage(folio->mapping->host, folio_pos(folio),
+			folio_size(folio));

 	/*
 	 * mm accommodates an old ext3 case where clean pages might not have had
 	 * the dirty bit cleared. Thus, it can send actual dirty pages to
 	 * ->releasepage() via shrink_active_list(), skip those here.
 	 */
-	if (PageDirty(page) || PageWriteback(page))
+	if (folio_test_dirty(folio) || folio_test_writeback(folio))
 		return 0;
 	iomap_page_release(folio);
 	return 1;
This is an address_space operation, so its argument must remain as a
struct page, but we can use a folio internally.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/iomap/buffered-io.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
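
For readers following the conversion pattern the commit message describes,
this is roughly how iomap_releasepage() reads with the patch applied,
reconstructed from the hunk above. The non-static "int" return line is
assumed from mainline fs/iomap/buffered-io.c rather than shown in the hunk,
so treat this as an illustrative sketch, not a verbatim copy of the tree;
the added comments note which folio accessors replace the old page-based
calls.

int
iomap_releasepage(struct page *page, gfp_t gfp_mask)
{
	/* Derive the folio once; the aops prototype must keep struct page. */
	struct folio *folio = page_folio(page);

	/* folio_pos()/folio_size() replace page_offset()/PAGE_SIZE. */
	trace_iomap_releasepage(folio->mapping->host, folio_pos(folio),
			folio_size(folio));

	/*
	 * mm accommodates an old ext3 case where clean pages might not have had
	 * the dirty bit cleared. Thus, it can send actual dirty pages to
	 * ->releasepage() via shrink_active_list(), skip those here.
	 *
	 * folio_test_*() replace the PageDirty()/PageWriteback() flag tests.
	 */
	if (folio_test_dirty(folio) || folio_test_writeback(folio))
		return 0;
	iomap_page_release(folio);
	return 1;
}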