From patchwork Thu Dec 16 21:06:51 2021
From: "Matthew Wilcox (Oracle)"
To: "Darrick J. Wong"
Cc: "Matthew Wilcox (Oracle)", linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, Jens Axboe, Christoph Hellwig
Subject: [PATCH v3 01/25] block: Add bio_add_folio()
Date: Thu, 16 Dec 2021 21:06:51 +0000
Message-Id: <20211216210715.3801857-2-willy@infradead.org>
In-Reply-To: <20211216210715.3801857-1-willy@infradead.org>

This is a thin wrapper around bio_add_page().  The main advantage here
is the documentation that folios larger than 2GiB are not supported.
It's not currently possible to allocate folios that large, but if it
ever becomes possible, this function will fail gracefully instead of
doing I/O to the wrong bytes.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Jens Axboe
Reviewed-by: Christoph Hellwig
Reviewed-by: Darrick J. Wong
---
 block/bio.c         | 22 ++++++++++++++++++++++
 include/linux/bio.h |  3 ++-
 2 files changed, 24 insertions(+), 1 deletion(-)

diff --git a/block/bio.c b/block/bio.c
index 15ab0d6d1c06..4b3087e20d51 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -1033,6 +1033,28 @@ int bio_add_page(struct bio *bio, struct page *page,
 }
 EXPORT_SYMBOL(bio_add_page);
 
+/**
+ * bio_add_folio - Attempt to add part of a folio to a bio.
+ * @bio: BIO to add to.
+ * @folio: Folio to add.
+ * @len: How many bytes from the folio to add.
+ * @off: First byte in this folio to add.
+ *
+ * Filesystems that use folios can call this function instead of calling
+ * bio_add_page() for each page in the folio.  If @off is bigger than
+ * PAGE_SIZE, this function can create a bio_vec that starts in a page
+ * after the bv_page.  BIOs do not support folios that are 4GiB or larger.
+ *
+ * Return: Whether the addition was successful.
+ */
+bool bio_add_folio(struct bio *bio, struct folio *folio, size_t len,
+		   size_t off)
+{
+	if (len > UINT_MAX || off > UINT_MAX)
+		return false;
+	return bio_add_page(bio, &folio->page, len, off) > 0;
+}
+
 void __bio_release_pages(struct bio *bio, bool mark_dirty)
 {
 	struct bvec_iter_all iter_all;
diff --git a/include/linux/bio.h b/include/linux/bio.h
index fe6bdfbbef66..a783cac49978 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -409,7 +409,8 @@ extern void bio_uninit(struct bio *);
 extern void bio_reset(struct bio *);
 void bio_chain(struct bio *, struct bio *);
 
-extern int bio_add_page(struct bio *, struct page *, unsigned int,unsigned int);
+int bio_add_page(struct bio *, struct page *, unsigned len, unsigned off);
+bool bio_add_folio(struct bio *, struct folio *, size_t len, size_t off);
 extern int bio_add_pc_page(struct request_queue *, struct bio *, struct page *,
 			   unsigned int, unsigned int);
 int bio_add_zone_append_page(struct bio *bio, struct page *page,
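A usage sketch (not part of this patch): a filesystem read path that
previously looped bio_add_page() over each page of a compound page can
cover the whole folio with one call.  The bdev, sector, and completion
handler are assumed to be supplied by the caller, and bio_alloc() is
used with its two-argument signature from this era:

/* Sketch only: submit a read covering an entire folio. */
static void read_folio_example(struct block_device *bdev,
		struct folio *folio, sector_t sector, bio_end_io_t *end_io)
{
	struct bio *bio = bio_alloc(GFP_NOFS, 1);

	bio_set_dev(bio, bdev);
	bio->bi_iter.bi_sector = sector;
	bio->bi_opf = REQ_OP_READ;
	bio->bi_end_io = end_io;
	/* One call replaces a loop of bio_add_page() calls. */
	if (!bio_add_folio(bio, folio, folio_size(folio), 0))
		BUG();	/* can only fail for folios of 4GiB or more */
	submit_bio(bio);
}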
From patchwork Thu Dec 16 21:06:52 2021
From: "Matthew Wilcox (Oracle)"
To: "Darrick J. Wong"
Cc: "Matthew Wilcox (Oracle)", linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, Jens Axboe, Christoph Hellwig
Subject: [PATCH v3 02/25] block: Add bio_for_each_folio_all()
Date: Thu, 16 Dec 2021 21:06:52 +0000
Message-Id: <20211216210715.3801857-3-willy@infradead.org>
In-Reply-To: <20211216210715.3801857-1-willy@infradead.org>

Allow callers to iterate over each folio instead of each page.  The
bio need not have been constructed using folios originally.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Jens Axboe
Reviewed-by: Christoph Hellwig
Reviewed-by: Darrick J. Wong
---
 Documentation/core-api/kernel-api.rst |  1 +
 include/linux/bio.h                   | 53 ++++++++++++++++++++++++++-
 2 files changed, 53 insertions(+), 1 deletion(-)

diff --git a/Documentation/core-api/kernel-api.rst b/Documentation/core-api/kernel-api.rst
index 2e7186805148..7f0cb604b6ab 100644
--- a/Documentation/core-api/kernel-api.rst
+++ b/Documentation/core-api/kernel-api.rst
@@ -279,6 +279,7 @@ Accounting Framework
 Block Devices
 =============
 
+.. kernel-doc:: include/linux/bio.h
 .. kernel-doc:: block/blk-core.c
    :export:
 
diff --git a/include/linux/bio.h b/include/linux/bio.h
index a783cac49978..e3c9e8207f12 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -166,7 +166,7 @@ static inline void bio_advance(struct bio *bio, unsigned int nbytes)
  */
 #define bio_for_each_bvec_all(bvl, bio, i)		\
 	for (i = 0, bvl = bio_first_bvec_all(bio);	\
-	     i < (bio)->bi_vcnt; i++, bvl++)		\
+	     i < (bio)->bi_vcnt; i++, bvl++)
 
 #define bio_iter_last(bvec, iter) ((iter).bi_size == (bvec).bv_len)
 
@@ -260,6 +260,57 @@ static inline struct bio_vec *bio_last_bvec_all(struct bio *bio)
 	return &bio->bi_io_vec[bio->bi_vcnt - 1];
 }
 
+/**
+ * struct folio_iter - State for iterating all folios in a bio.
+ * @folio: The current folio we're iterating.  NULL after the last folio.
+ * @offset: The byte offset within the current folio.
+ * @length: The number of bytes in this iteration (will not cross folio
+ *	boundary).
+ */
+struct folio_iter {
+	struct folio *folio;
+	size_t offset;
+	size_t length;
+	/* private: for use by the iterator */
+	size_t _seg_count;
+	int _i;
+};
+
+static inline void bio_first_folio(struct folio_iter *fi, struct bio *bio,
+				   int i)
+{
+	struct bio_vec *bvec = bio_first_bvec_all(bio) + i;
+
+	fi->folio = page_folio(bvec->bv_page);
+	fi->offset = bvec->bv_offset +
+			PAGE_SIZE * (bvec->bv_page - &fi->folio->page);
+	fi->_seg_count = bvec->bv_len;
+	fi->length = min(folio_size(fi->folio) - fi->offset, fi->_seg_count);
+	fi->_i = i;
+}
+
+static inline void bio_next_folio(struct folio_iter *fi, struct bio *bio)
+{
+	fi->_seg_count -= fi->length;
+	if (fi->_seg_count) {
+		fi->folio = folio_next(fi->folio);
+		fi->offset = 0;
+		fi->length = min(folio_size(fi->folio), fi->_seg_count);
+	} else if (fi->_i + 1 < bio->bi_vcnt) {
+		bio_first_folio(fi, bio, fi->_i + 1);
+	} else {
+		fi->folio = NULL;
+	}
+}
+
+/**
+ * bio_for_each_folio_all - Iterate over each folio in a bio.
+ * @fi: struct folio_iter which is updated for each folio.
+ * @bio: struct bio to iterate over.
+ */
+#define bio_for_each_folio_all(fi, bio)				\
+	for (bio_first_folio(&fi, bio, 0); fi.folio; bio_next_folio(&fi, bio))
+
 enum bip_flags {
 	BIP_BLOCK_INTEGRITY	= 1 << 0, /* block layer owns integrity data */
 	BIP_MAPPED_INTEGRITY	= 1 << 1, /* ref tag has been remapped */
From patchwork Thu Dec 16 21:06:53 2021
From: "Matthew Wilcox (Oracle)"
To: "Darrick J. Wong"
Cc: "Matthew Wilcox (Oracle)", linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, Christoph Hellwig
Subject: [PATCH v3 03/25] fs/buffer: Convert __block_write_begin_int() to take a folio
Date: Thu, 16 Dec 2021 21:06:53 +0000
Message-Id: <20211216210715.3801857-4-willy@infradead.org>
In-Reply-To: <20211216210715.3801857-1-willy@infradead.org>

There are no plans to convert buffer_head infrastructure to use large
folios, but __block_write_begin_int() is called from iomap, and it's
more convenient and less error-prone if we pass in a folio from iomap.
It also saves almost 200 bytes of code by removing repeated calls to
compound_head().

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Reviewed-by: Darrick J. Wong
---
 fs/buffer.c            | 23 ++++++++++++-----------
 fs/internal.h          |  2 +-
 fs/iomap/buffered-io.c |  7 +++++--
 3 files changed, 18 insertions(+), 14 deletions(-)

diff --git a/fs/buffer.c b/fs/buffer.c
index 46bc589b7a03..8e112b6bd371 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -1969,34 +1969,34 @@ iomap_to_bh(struct inode *inode, sector_t block, struct buffer_head *bh,
 	}
 }
 
-int __block_write_begin_int(struct page *page, loff_t pos, unsigned len,
+int __block_write_begin_int(struct folio *folio, loff_t pos, unsigned len,
 		get_block_t *get_block, const struct iomap *iomap)
 {
 	unsigned from = pos & (PAGE_SIZE - 1);
 	unsigned to = from + len;
-	struct inode *inode = page->mapping->host;
+	struct inode *inode = folio->mapping->host;
 	unsigned block_start, block_end;
 	sector_t block;
 	int err = 0;
 	unsigned blocksize, bbits;
 	struct buffer_head *bh, *head, *wait[2], **wait_bh=wait;
 
-	BUG_ON(!PageLocked(page));
+	BUG_ON(!folio_test_locked(folio));
 	BUG_ON(from > PAGE_SIZE);
 	BUG_ON(to > PAGE_SIZE);
 	BUG_ON(from > to);
 
-	head = create_page_buffers(page, inode, 0);
+	head = create_page_buffers(&folio->page, inode, 0);
 	blocksize = head->b_size;
 	bbits = block_size_bits(blocksize);
 
-	block = (sector_t)page->index << (PAGE_SHIFT - bbits);
+	block = (sector_t)folio->index << (PAGE_SHIFT - bbits);
 
 	for(bh = head, block_start = 0; bh != head || !block_start;
 	    block++, block_start=block_end, bh = bh->b_this_page) {
 		block_end = block_start + blocksize;
 		if (block_end <= from || block_start >= to) {
-			if (PageUptodate(page)) {
+			if (folio_test_uptodate(folio)) {
 				if (!buffer_uptodate(bh))
 					set_buffer_uptodate(bh);
 			}
@@ -2016,20 +2016,20 @@ int __block_write_begin_int(struct page *page, loff_t pos, unsigned len,
 
 			if (buffer_new(bh)) {
 				clean_bdev_bh_alias(bh);
-				if (PageUptodate(page)) {
+				if (folio_test_uptodate(folio)) {
 					clear_buffer_new(bh);
 					set_buffer_uptodate(bh);
 					mark_buffer_dirty(bh);
 					continue;
 				}
 				if (block_end > to || block_start < from)
-					zero_user_segments(page,
+					folio_zero_segments(folio,
 						to, block_end,
 						block_start, from);
 				continue;
 			}
 		}
-		if (PageUptodate(page)) {
+		if (folio_test_uptodate(folio)) {
 			if (!buffer_uptodate(bh))
 				set_buffer_uptodate(bh);
 			continue;
@@ -2050,14 +2050,15 @@ int __block_write_begin_int(struct page *page, loff_t pos, unsigned len,
 			err = -EIO;
 	}
 	if (unlikely(err))
-		page_zero_new_buffers(page, from, to);
+		page_zero_new_buffers(&folio->page, from, to);
 	return err;
 }
 
 int __block_write_begin(struct page *page, loff_t pos, unsigned len,
 		get_block_t *get_block)
 {
-	return __block_write_begin_int(page, pos, len, get_block, NULL);
+	return __block_write_begin_int(page_folio(page), pos, len, get_block,
+				       NULL);
 }
 EXPORT_SYMBOL(__block_write_begin);
 
diff --git a/fs/internal.h b/fs/internal.h
index 7979ff8d168c..8590c973c2f4 100644
--- a/fs/internal.h
+++ b/fs/internal.h
@@ -37,7 +37,7 @@ static inline int emergency_thaw_bdev(struct super_block *sb)
 /*
  * buffer.c
  */
-int __block_write_begin_int(struct page *page, loff_t pos, unsigned len,
+int __block_write_begin_int(struct folio *folio, loff_t pos, unsigned len,
 		get_block_t *get_block, const struct iomap *iomap);
 
 /*
diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 71a36ae120ee..ecb65167715b 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -603,6 +603,7 @@ static int iomap_write_begin(const struct iomap_iter *iter, loff_t pos,
 	const struct iomap_page_ops *page_ops = iter->iomap.page_ops;
 	const struct iomap *srcmap = iomap_iter_srcmap(iter);
 	struct page *page;
+	struct folio *folio;
 	int status = 0;
 
 	BUG_ON(pos + len > iter->iomap.offset + iter->iomap.length);
@@ -624,11 +625,12 @@ static int iomap_write_begin(const struct iomap_iter *iter, loff_t pos,
 		status = -ENOMEM;
 		goto out_no_page;
 	}
+	folio = page_folio(page);
 
 	if (srcmap->type == IOMAP_INLINE)
 		status = iomap_write_begin_inline(iter, page);
 	else if (srcmap->flags & IOMAP_F_BUFFER_HEAD)
-		status = __block_write_begin_int(page, pos, len, NULL, srcmap);
+		status = __block_write_begin_int(folio, pos, len, NULL, srcmap);
 	else
 		status = __iomap_write_begin(iter, pos, len, page);
 
@@ -960,11 +962,12 @@ EXPORT_SYMBOL_GPL(iomap_truncate_page);
 static loff_t iomap_page_mkwrite_iter(struct iomap_iter *iter,
 		struct page *page)
 {
+	struct folio *folio = page_folio(page);
 	loff_t length = iomap_length(iter);
 	int ret;
 
 	if (iter->iomap.flags & IOMAP_F_BUFFER_HEAD) {
-		ret = __block_write_begin_int(page, iter->pos, length, NULL,
+		ret = __block_write_begin_int(folio, iter->pos, length, NULL,
 				&iter->iomap);
 		if (ret)
 			return ret;
From patchwork Thu Dec 16 21:06:54 2021
From: "Matthew Wilcox (Oracle)"
To: "Darrick J. Wong"
Cc: "Matthew Wilcox (Oracle)", linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, Christoph Hellwig
Subject: [PATCH v3 04/25] iomap: Convert to_iomap_page to take a folio
Date: Thu, 16 Dec 2021 21:06:54 +0000
Message-Id: <20211216210715.3801857-5-willy@infradead.org>
In-Reply-To: <20211216210715.3801857-1-willy@infradead.org>

The big comment about only using a head page can go away now that
it takes a folio argument.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Darrick J. Wong
Reviewed-by: Christoph Hellwig
---
 fs/iomap/buffered-io.c | 32 +++++++++++++++-----------------
 1 file changed, 15 insertions(+), 17 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index ecb65167715b..101d5b0754e9 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -22,8 +22,8 @@
 #include "../internal.h"
 
 /*
- * Structure allocated for each page or THP when block size < page size
- * to track sub-page uptodate status and I/O completions.
+ * Structure allocated for each folio when block size < folio size
+ * to track sub-folio uptodate status and I/O completions.
  */
 struct iomap_page {
 	atomic_t		read_bytes_pending;
@@ -32,17 +32,10 @@ struct iomap_page {
 	unsigned long		uptodate[];
 };
 
-static inline struct iomap_page *to_iomap_page(struct page *page)
+static inline struct iomap_page *to_iomap_page(struct folio *folio)
 {
-	/*
-	 * per-block data is stored in the head page.  Callers should
-	 * not be dealing with tail pages, and if they are, they can
-	 * call thp_head() first.
-	 */
-	VM_BUG_ON_PGFLAGS(PageTail(page), page);
-
-	if (page_has_private(page))
-		return (struct iomap_page *)page_private(page);
+	if (folio_test_private(folio))
+		return folio_get_private(folio);
 	return NULL;
 }
 
@@ -51,7 +44,8 @@ static struct bio_set iomap_ioend_bioset;
 static struct iomap_page *
 iomap_page_create(struct inode *inode, struct page *page)
 {
-	struct iomap_page *iop = to_iomap_page(page);
+	struct folio *folio = page_folio(page);
+	struct iomap_page *iop = to_iomap_page(folio);
 	unsigned int nr_blocks = i_blocks_per_page(inode, page);
 
 	if (iop || nr_blocks <= 1)
@@ -144,7 +138,8 @@ iomap_adjust_read_range(struct inode *inode, struct iomap_page *iop,
 static void
 iomap_iop_set_range_uptodate(struct page *page, unsigned off, unsigned len)
 {
-	struct iomap_page *iop = to_iomap_page(page);
+	struct folio *folio = page_folio(page);
+	struct iomap_page *iop = to_iomap_page(folio);
 	struct inode *inode = page->mapping->host;
 	unsigned first = off >> inode->i_blkbits;
 	unsigned last = (off + len - 1) >> inode->i_blkbits;
@@ -173,7 +168,8 @@ static void
 iomap_read_page_end_io(struct bio_vec *bvec, int error)
 {
 	struct page *page = bvec->bv_page;
-	struct iomap_page *iop = to_iomap_page(page);
+	struct folio *folio = page_folio(page);
+	struct iomap_page *iop = to_iomap_page(folio);
 
 	if (unlikely(error)) {
 		ClearPageUptodate(page);
@@ -438,7 +434,8 @@ int
 iomap_is_partially_uptodate(struct page *page, unsigned long from,
 		unsigned long count)
 {
-	struct iomap_page *iop = to_iomap_page(page);
+	struct folio *folio = page_folio(page);
+	struct iomap_page *iop = to_iomap_page(folio);
 	struct inode *inode = page->mapping->host;
 	unsigned len, first, last;
 	unsigned i;
@@ -1012,7 +1009,8 @@ static void
 iomap_finish_page_writeback(struct inode *inode, struct page *page,
 		int error, unsigned int len)
 {
-	struct iomap_page *iop = to_iomap_page(page);
+	struct folio *folio = page_folio(page);
+	struct iomap_page *iop = to_iomap_page(folio);
 
 	if (error) {
 		SetPageError(page);
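For context, the folio private-data helpers used above map one-for-one
onto the old page calls.  A hypothetical filesystem would pair them like
this sketch (the my_fs_* names are illustrative only):

/* Sketch only: attach, look up, and free per-folio state. */
struct my_fs_state {
	unsigned long uptodate[1];
};

static void my_fs_init_folio(struct folio *folio)
{
	struct my_fs_state *state = kzalloc(sizeof(*state),
			GFP_NOFS | __GFP_NOFAIL);

	folio_attach_private(folio, state);	/* sets the private flag */
}

static struct my_fs_state *my_fs_folio_state(struct folio *folio)
{
	if (folio_test_private(folio))
		return folio_get_private(folio);
	return NULL;
}

static void my_fs_release_folio(struct folio *folio)
{
	kfree(folio_detach_private(folio));	/* clears the private flag */
}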
From patchwork Thu Dec 16 21:06:55 2021
From: "Matthew Wilcox (Oracle)"
To: "Darrick J. Wong"
Cc: "Matthew Wilcox (Oracle)", linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, Christoph Hellwig
Subject: [PATCH v3 05/25] iomap: Convert iomap_page_create to take a folio
Date: Thu, 16 Dec 2021 21:06:55 +0000
Message-Id: <20211216210715.3801857-6-willy@infradead.org>
In-Reply-To: <20211216210715.3801857-1-willy@infradead.org>

This function already assumed it was being passed a head page, so
just formalise that.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Darrick J. Wong
Reviewed-by: Christoph Hellwig
---
 fs/iomap/buffered-io.c | 21 ++++++++++++---------
 1 file changed, 12 insertions(+), 9 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 101d5b0754e9..d7823610da5c 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -42,11 +42,10 @@ static inline struct iomap_page *to_iomap_page(struct folio *folio)
 static struct bio_set iomap_ioend_bioset;
 
 static struct iomap_page *
-iomap_page_create(struct inode *inode, struct page *page)
+iomap_page_create(struct inode *inode, struct folio *folio)
 {
-	struct folio *folio = page_folio(page);
 	struct iomap_page *iop = to_iomap_page(folio);
-	unsigned int nr_blocks = i_blocks_per_page(inode, page);
+	unsigned int nr_blocks = i_blocks_per_folio(inode, folio);
 
 	if (iop || nr_blocks <= 1)
 		return iop;
@@ -54,9 +53,9 @@ iomap_page_create(struct inode *inode, struct page *page)
 	iop = kzalloc(struct_size(iop, uptodate, BITS_TO_LONGS(nr_blocks)),
 			GFP_NOFS | __GFP_NOFAIL);
 	spin_lock_init(&iop->uptodate_lock);
-	if (PageUptodate(page))
+	if (folio_test_uptodate(folio))
 		bitmap_fill(iop->uptodate, nr_blocks);
-	attach_page_private(page, iop);
+	folio_attach_private(folio, iop);
 	return iop;
 }
 
@@ -213,6 +212,7 @@ struct iomap_readpage_ctx {
 static int iomap_read_inline_data(const struct iomap_iter *iter,
 		struct page *page)
 {
+	struct folio *folio = page_folio(page);
 	const struct iomap *iomap = iomap_iter_srcmap(iter);
 	size_t size = i_size_read(iter->inode) - iomap->offset;
 	size_t poff = offset_in_page(iomap->offset);
@@ -229,7 +229,7 @@ static int iomap_read_inline_data(const struct iomap_iter *iter,
 	if (WARN_ON_ONCE(size > iomap->length))
 		return -EIO;
 	if (poff > 0)
-		iomap_page_create(iter->inode, page);
+		iomap_page_create(iter->inode, folio);
 
 	addr = kmap_local_page(page) + poff;
 	memcpy(addr, iomap->inline_data, size);
@@ -256,6 +256,7 @@ static loff_t iomap_readpage_iter(const struct iomap_iter *iter,
 	loff_t pos = iter->pos + offset;
 	loff_t length = iomap_length(iter) - offset;
 	struct page *page = ctx->cur_page;
+	struct folio *folio = page_folio(page);
 	struct iomap_page *iop;
 	loff_t orig_pos = pos;
 	unsigned poff, plen;
@@ -265,7 +266,7 @@ static loff_t iomap_readpage_iter(const struct iomap_iter *iter,
 		return iomap_read_inline_data(iter, page);
 
 	/* zero post-eof blocks as the page may be mapped */
-	iop = iomap_page_create(iter->inode, page);
+	iop = iomap_page_create(iter->inode, folio);
 	iomap_adjust_read_range(iter->inode, iop, &pos, length, &poff, &plen);
 	if (plen == 0)
 		goto done;
@@ -547,8 +548,9 @@ iomap_read_page_sync(loff_t block_start, struct page *page, unsigned poff,
 static int __iomap_write_begin(const struct iomap_iter *iter, loff_t pos,
 		unsigned len, struct page *page)
 {
+	struct folio *folio = page_folio(page);
 	const struct iomap *srcmap = iomap_iter_srcmap(iter);
-	struct iomap_page *iop = iomap_page_create(iter->inode, page);
+	struct iomap_page *iop = iomap_page_create(iter->inode, folio);
 	loff_t block_size = i_blocksize(iter->inode);
 	loff_t block_start = round_down(pos, block_size);
 	loff_t block_end = round_up(pos + len, block_size);
@@ -1296,7 +1298,8 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc,
 		struct writeback_control *wbc, struct inode *inode,
 		struct page *page, u64 end_offset)
 {
-	struct iomap_page *iop = iomap_page_create(inode, page);
+	struct folio *folio = page_folio(page);
+	struct iomap_page *iop = iomap_page_create(inode, folio);
 	struct iomap_ioend *ioend, *next;
 	unsigned len = i_blocksize(inode);
 	u64 file_offset; /* file offset of page */
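For reference, i_blocks_per_folio() generalises i_blocks_per_page() in
the obvious way; conceptually it reduces to this sketch:

/* Sketch only: number of fs blocks covered by a folio. */
static unsigned int blocks_per_folio(struct inode *inode,
		struct folio *folio)
{
	return folio_size(folio) >> inode->i_blkbits;
}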
From patchwork Thu Dec 16 21:06:56 2021
From: "Matthew Wilcox (Oracle)"
To: "Darrick J. Wong"
Cc: "Matthew Wilcox (Oracle)", linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, Christoph Hellwig
Subject: [PATCH v3 06/25] iomap: Convert iomap_page_release to take a folio
Date: Thu, 16 Dec 2021 21:06:56 +0000
Message-Id: <20211216210715.3801857-7-willy@infradead.org>
In-Reply-To: <20211216210715.3801857-1-willy@infradead.org>

iomap_page_release() was also assuming that it was being passed a
head page.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Darrick J. Wong
Reviewed-by: Christoph Hellwig
---
 fs/iomap/buffered-io.c | 18 +++++++++++-------
 1 file changed, 11 insertions(+), 7 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index d7823610da5c..16604f605357 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -59,18 +59,18 @@ iomap_page_create(struct inode *inode, struct folio *folio)
 	return iop;
 }
 
-static void
-iomap_page_release(struct page *page)
+static void iomap_page_release(struct folio *folio)
 {
-	struct iomap_page *iop = detach_page_private(page);
-	unsigned int nr_blocks = i_blocks_per_page(page->mapping->host, page);
+	struct iomap_page *iop = folio_detach_private(folio);
+	struct inode *inode = folio->mapping->host;
+	unsigned int nr_blocks = i_blocks_per_folio(inode, folio);
 
 	if (!iop)
 		return;
 	WARN_ON_ONCE(atomic_read(&iop->read_bytes_pending));
 	WARN_ON_ONCE(atomic_read(&iop->write_bytes_pending));
 	WARN_ON_ONCE(bitmap_full(iop->uptodate, nr_blocks) !=
-			PageUptodate(page));
+			folio_test_uptodate(folio));
 	kfree(iop);
 }
 
@@ -462,6 +462,8 @@ EXPORT_SYMBOL_GPL(iomap_is_partially_uptodate);
 int
 iomap_releasepage(struct page *page, gfp_t gfp_mask)
 {
+	struct folio *folio = page_folio(page);
+
 	trace_iomap_releasepage(page->mapping->host, page_offset(page),
 			PAGE_SIZE);
 
@@ -472,7 +474,7 @@ iomap_releasepage(struct page *page, gfp_t gfp_mask)
 	 */
 	if (PageDirty(page) || PageWriteback(page))
 		return 0;
-	iomap_page_release(page);
+	iomap_page_release(folio);
 	return 1;
 }
 EXPORT_SYMBOL_GPL(iomap_releasepage);
@@ -480,6 +482,8 @@ EXPORT_SYMBOL_GPL(iomap_releasepage);
 void
 iomap_invalidatepage(struct page *page, unsigned int offset, unsigned int len)
 {
+	struct folio *folio = page_folio(page);
+
 	trace_iomap_invalidatepage(page->mapping->host, offset, len);
 
 	/*
@@ -489,7 +493,7 @@ iomap_invalidatepage(struct page *page, unsigned int offset, unsigned int len)
 	if (offset == 0 && len == PAGE_SIZE) {
 		WARN_ON_ONCE(PageWriteback(page));
 		cancel_dirty_page(page);
-		iomap_page_release(page);
+		iomap_page_release(folio);
 	}
 }
 EXPORT_SYMBOL_GPL(iomap_invalidatepage);
From patchwork Thu Dec 16 21:06:57 2021
From: "Matthew Wilcox (Oracle)"
To: "Darrick J. Wong"
Cc: "Matthew Wilcox (Oracle)", linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, Christoph Hellwig
Subject: [PATCH v3 07/25] iomap: Convert iomap_releasepage to use a folio
Date: Thu, 16 Dec 2021 21:06:57 +0000
Message-Id: <20211216210715.3801857-8-willy@infradead.org>
In-Reply-To: <20211216210715.3801857-1-willy@infradead.org>

This is an address_space operation, so its argument must remain as a
struct page, but we can use a folio internally.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Reviewed-by: Darrick J. Wong
---
 fs/iomap/buffered-io.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 16604f605357..b0192b148c9f 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -464,15 +464,15 @@ iomap_releasepage(struct page *page, gfp_t gfp_mask)
 {
 	struct folio *folio = page_folio(page);
 
-	trace_iomap_releasepage(page->mapping->host, page_offset(page),
-			PAGE_SIZE);
+	trace_iomap_releasepage(folio->mapping->host, folio_pos(folio),
+			folio_size(folio));
 
 	/*
 	 * mm accommodates an old ext3 case where clean pages might not have had
 	 * the dirty bit cleared.  Thus, it can send actual dirty pages to
 	 * ->releasepage() via shrink_active_list(); skip those here.
 	 */
-	if (PageDirty(page) || PageWriteback(page))
+	if (folio_test_dirty(folio) || folio_test_writeback(folio))
 		return 0;
 	iomap_page_release(folio);
 	return 1;
From patchwork Thu Dec 16 21:06:58 2021
From: "Matthew Wilcox (Oracle)"
To: "Darrick J. Wong"
Cc: "Matthew Wilcox (Oracle)", linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, Christoph Hellwig
Subject: [PATCH v3 08/25] iomap: Add iomap_invalidate_folio
Date: Thu, 16 Dec 2021 21:06:58 +0000
Message-Id: <20211216210715.3801857-9-willy@infradead.org>
In-Reply-To: <20211216210715.3801857-1-willy@infradead.org>

Keep iomap_invalidatepage around as a wrapper for use in
address_space operations.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Reviewed-by: Darrick J. Wong
---
 fs/iomap/buffered-io.c | 20 ++++++++++++--------
 include/linux/iomap.h  |  1 +
 2 files changed, 13 insertions(+), 8 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index b0192b148c9f..de7ce1909527 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -479,23 +479,27 @@ iomap_releasepage(struct page *page, gfp_t gfp_mask)
 }
 EXPORT_SYMBOL_GPL(iomap_releasepage);
 
-void
-iomap_invalidatepage(struct page *page, unsigned int offset, unsigned int len)
+void iomap_invalidate_folio(struct folio *folio, size_t offset, size_t len)
 {
-	struct folio *folio = page_folio(page);
-
-	trace_iomap_invalidatepage(page->mapping->host, offset, len);
+	trace_iomap_invalidatepage(folio->mapping->host, offset, len);
 
 	/*
 	 * If we're invalidating the entire page, clear the dirty state from it
 	 * and release it to avoid unnecessary buildup of the LRU.
 	 */
-	if (offset == 0 && len == PAGE_SIZE) {
-		WARN_ON_ONCE(PageWriteback(page));
-		cancel_dirty_page(page);
+	if (offset == 0 && len == folio_size(folio)) {
+		WARN_ON_ONCE(folio_test_writeback(folio));
+		folio_cancel_dirty(folio);
 		iomap_page_release(folio);
 	}
 }
+EXPORT_SYMBOL_GPL(iomap_invalidate_folio);
+
+void iomap_invalidatepage(struct page *page, unsigned int offset,
+		unsigned int len)
+{
+	iomap_invalidate_folio(page_folio(page), offset, len);
+}
 EXPORT_SYMBOL_GPL(iomap_invalidatepage);
 
 #ifdef CONFIG_MIGRATION
diff --git a/include/linux/iomap.h b/include/linux/iomap.h
index 6d1b08d0ae93..29491fb9c5ba 100644
--- a/include/linux/iomap.h
+++ b/include/linux/iomap.h
@@ -225,6 +225,7 @@ void iomap_readahead(struct readahead_control *, const struct iomap_ops *ops);
 int iomap_is_partially_uptodate(struct page *page, unsigned long from,
 		unsigned long count);
 int iomap_releasepage(struct page *page, gfp_t gfp_mask);
+void iomap_invalidate_folio(struct folio *folio, size_t offset, size_t len);
 void iomap_invalidatepage(struct page *page, unsigned int offset,
 		unsigned int len);
 #ifdef CONFIG_MIGRATION
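Until the aops methods themselves take folios, a filesystem keeps wiring
the page-based wrappers into its address_space_operations; a reduced,
xfs-like sketch (field set trimmed for brevity):

/* Sketch only: the wrappers preserve the existing aops signatures. */
static const struct address_space_operations example_aops = {
	.releasepage		= iomap_releasepage,
	.invalidatepage		= iomap_invalidatepage,
	.is_partially_uptodate	= iomap_is_partially_uptodate,
};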
From patchwork Thu Dec 16 21:06:59 2021
From: "Matthew Wilcox (Oracle)"
To: "Darrick J. Wong"
Cc: "Matthew Wilcox (Oracle)", linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, Christoph Hellwig
Subject: [PATCH v3 09/25] iomap: Pass the iomap_page into iomap_set_range_uptodate
Date: Thu, 16 Dec 2021 21:06:59 +0000
Message-Id: <20211216210715.3801857-10-willy@infradead.org>
In-Reply-To: <20211216210715.3801857-1-willy@infradead.org>

All but one caller already has the iomap_page, so we can avoid getting
it again.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Darrick J. Wong
Reviewed-by: Christoph Hellwig
---
 fs/iomap/buffered-io.c | 32 ++++++++++++++++++--------------
 1 file changed, 18 insertions(+), 14 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index de7ce1909527..856ef62b319e 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -134,11 +134,9 @@ iomap_adjust_read_range(struct inode *inode, struct iomap_page *iop,
 	*lenp = plen;
 }
 
-static void
-iomap_iop_set_range_uptodate(struct page *page, unsigned off, unsigned len)
+static void iomap_iop_set_range_uptodate(struct page *page,
+		struct iomap_page *iop, unsigned off, unsigned len)
 {
-	struct folio *folio = page_folio(page);
-	struct iomap_page *iop = to_iomap_page(folio);
 	struct inode *inode = page->mapping->host;
 	unsigned first = off >> inode->i_blkbits;
 	unsigned last = (off + len - 1) >> inode->i_blkbits;
@@ -151,14 +149,14 @@ iomap_iop_set_range_uptodate(struct page *page, unsigned off, unsigned len)
 	spin_unlock_irqrestore(&iop->uptodate_lock, flags);
 }
 
-static void
-iomap_set_range_uptodate(struct page *page, unsigned off, unsigned len)
+static void iomap_set_range_uptodate(struct page *page,
+		struct iomap_page *iop, unsigned off, unsigned len)
 {
 	if (PageError(page))
 		return;
 
-	if (page_has_private(page))
-		iomap_iop_set_range_uptodate(page, off, len);
+	if (iop)
+		iomap_iop_set_range_uptodate(page, iop, off, len);
 	else
 		SetPageUptodate(page);
 }
@@ -174,7 +172,8 @@ iomap_read_page_end_io(struct bio_vec *bvec, int error)
 		ClearPageUptodate(page);
 		SetPageError(page);
 	} else {
-		iomap_set_range_uptodate(page, bvec->bv_offset, bvec->bv_len);
+		iomap_set_range_uptodate(page, iop, bvec->bv_offset,
+				bvec->bv_len);
 	}
 
 	if (!iop || atomic_sub_and_test(bvec->bv_len, &iop->read_bytes_pending))
@@ -213,6 +212,7 @@ static int iomap_read_inline_data(const struct iomap_iter *iter,
 		struct page *page)
 {
 	struct folio *folio = page_folio(page);
+	struct iomap_page *iop;
 	const struct iomap *iomap = iomap_iter_srcmap(iter);
 	size_t size = i_size_read(iter->inode) - iomap->offset;
 	size_t poff = offset_in_page(iomap->offset);
@@ -229,13 +229,15 @@ static int iomap_read_inline_data(const struct iomap_iter *iter,
 	if (WARN_ON_ONCE(size > iomap->length))
 		return -EIO;
 	if (poff > 0)
-		iomap_page_create(iter->inode, folio);
+		iop = iomap_page_create(iter->inode, folio);
+	else
+		iop = to_iomap_page(folio);
 
 	addr = kmap_local_page(page) + poff;
 	memcpy(addr, iomap->inline_data, size);
 	memset(addr + size, 0, PAGE_SIZE - poff - size);
 	kunmap_local(addr);
-	iomap_set_range_uptodate(page, poff, PAGE_SIZE - poff);
+	iomap_set_range_uptodate(page, iop, poff, PAGE_SIZE - poff);
 	return 0;
 }
 
@@ -273,7 +275,7 @@ static loff_t iomap_readpage_iter(const struct iomap_iter *iter,
 
 	if (iomap_block_needs_zeroing(iter, pos)) {
 		zero_user(page, poff, plen);
-		iomap_set_range_uptodate(page, poff, plen);
+		iomap_set_range_uptodate(page, iop, poff, plen);
 		goto done;
 	}
 
@@ -589,7 +591,7 @@ static int __iomap_write_begin(const struct iomap_iter *iter, loff_t pos,
 			if (status)
 				return status;
 		}
-		iomap_set_range_uptodate(page, poff, plen);
+		iomap_set_range_uptodate(page, iop, poff, plen);
 	} while ((block_start += plen) < block_end);
 
 	return 0;
@@ -661,6 +663,8 @@ static int iomap_write_begin(const struct iomap_iter *iter, loff_t pos,
 static size_t __iomap_write_end(struct inode *inode, loff_t pos, size_t len,
 		size_t copied, struct page *page)
 {
+	struct folio *folio = page_folio(page);
+	struct iomap_page *iop = to_iomap_page(folio);
 	flush_dcache_page(page);
 
 	/*
@@ -676,7 +680,7 @@ static size_t __iomap_write_end(struct inode *inode, loff_t pos, size_t len,
 	 */
 	if (unlikely(copied < len && !PageUptodate(page)))
 		return 0;
-	iomap_set_range_uptodate(page, offset_in_page(pos), len);
+	iomap_set_range_uptodate(page, iop, offset_in_page(pos), len);
 	__set_page_dirty_nobuffers(page);
 	return copied;
 }
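The resulting calling pattern, in sketch form: look the iomap_page up
once per operation and hand it to every range update (this mirrors
iomap_finish_folio_read() in the next patch):

/* Sketch only: one lookup feeds all sub-range updates. */
static void example_mark_range(struct folio *folio, size_t off, size_t len)
{
	struct iomap_page *iop = to_iomap_page(folio);

	iomap_set_range_uptodate(&folio->page, iop, off, len);
}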
From patchwork Thu Dec 16 21:07:00 2021
From: "Matthew Wilcox (Oracle)"
To: "Darrick J. Wong"
Cc: "Matthew Wilcox (Oracle)", linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, Christoph Hellwig
Subject: [PATCH v3 10/25] iomap: Convert bio completions to use folios
Date: Thu, 16 Dec 2021 21:07:00 +0000
Message-Id: <20211216210715.3801857-11-willy@infradead.org>
In-Reply-To: <20211216210715.3801857-1-willy@infradead.org>

Use bio_for_each_folio_all() to iterate over each folio in the bio
instead of iterating over each page.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Darrick J. Wong
Reviewed-by: Christoph Hellwig
---
 fs/iomap/buffered-io.c | 50 ++++++++++++++++++------------------
 1 file changed, 21 insertions(+), 29 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 856ef62b319e..5730a3317407 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -161,34 +161,29 @@ static void iomap_set_range_uptodate(struct page *page,
 		SetPageUptodate(page);
 }
 
-static void
-iomap_read_page_end_io(struct bio_vec *bvec, int error)
+static void iomap_finish_folio_read(struct folio *folio, size_t offset,
+		size_t len, int error)
 {
-	struct page *page = bvec->bv_page;
-	struct folio *folio = page_folio(page);
 	struct iomap_page *iop = to_iomap_page(folio);
 
 	if (unlikely(error)) {
-		ClearPageUptodate(page);
-		SetPageError(page);
+		folio_clear_uptodate(folio);
+		folio_set_error(folio);
 	} else {
-		iomap_set_range_uptodate(page, iop, bvec->bv_offset,
-				bvec->bv_len);
+		iomap_set_range_uptodate(&folio->page, iop, offset, len);
 	}
 
-	if (!iop || atomic_sub_and_test(bvec->bv_len, &iop->read_bytes_pending))
-		unlock_page(page);
+	if (!iop || atomic_sub_and_test(len, &iop->read_bytes_pending))
+		folio_unlock(folio);
 }
 
-static void
-iomap_read_end_io(struct bio *bio)
+static void iomap_read_end_io(struct bio *bio)
 {
 	int error = blk_status_to_errno(bio->bi_status);
-	struct bio_vec *bvec;
-	struct bvec_iter_all iter_all;
+	struct folio_iter fi;
 
-	bio_for_each_segment_all(bvec, bio, iter_all)
-		iomap_read_page_end_io(bvec, error);
+	bio_for_each_folio_all(fi, bio)
+		iomap_finish_folio_read(fi.folio, fi.offset, fi.length, error);
 	bio_put(bio);
 }
 
@@ -1019,23 +1014,21 @@ vm_fault_t iomap_page_mkwrite(struct vm_fault *vmf, const struct iomap_ops *ops)
 }
 EXPORT_SYMBOL_GPL(iomap_page_mkwrite);
 
-static void
-iomap_finish_page_writeback(struct inode *inode, struct page *page,
-		int error, unsigned int len)
+static void iomap_finish_folio_write(struct inode *inode, struct folio *folio,
+		size_t len, int error)
 {
-	struct folio *folio = page_folio(page);
 	struct iomap_page *iop = to_iomap_page(folio);
 
 	if (error) {
-		SetPageError(page);
+		folio_set_error(folio);
 		mapping_set_error(inode->i_mapping, error);
 	}
 
-	WARN_ON_ONCE(i_blocks_per_page(inode, page) > 1 && !iop);
+	WARN_ON_ONCE(i_blocks_per_folio(inode, folio) > 1 && !iop);
 	WARN_ON_ONCE(iop && atomic_read(&iop->write_bytes_pending) <= 0);
 
 	if (!iop || atomic_sub_and_test(len, &iop->write_bytes_pending))
-		end_page_writeback(page);
+		folio_end_writeback(folio);
 }
 
 /*
@@ -1054,8 +1047,7 @@ iomap_finish_ioend(struct iomap_ioend *ioend, int error)
 	bool quiet = bio_flagged(bio, BIO_QUIET);
 
 	for (bio = &ioend->io_inline_bio; bio; bio = next) {
-		struct bio_vec *bv;
-		struct bvec_iter_all iter_all;
+		struct folio_iter fi;
 
 		/*
 		 * For the last bio, bi_private points to the ioend, so we
@@ -1066,10 +1058,10 @@ iomap_finish_ioend(struct iomap_ioend *ioend, int error)
 		else
 			next = bio->bi_private;
 
-		/* walk each page on bio, ending page IO on them */
-		bio_for_each_segment_all(bv, bio, iter_all)
-			iomap_finish_page_writeback(inode, bv->bv_page, error,
-					bv->bv_len);
+		/* walk all folios in bio, ending page IO on them */
+		bio_for_each_folio_all(fi, bio)
+			iomap_finish_folio_write(inode, fi.folio, fi.length,
+					error);
 		bio_put(bio);
 	}
 	/* The ioend has been freed by bio_put() */

From patchwork Thu Dec 16 21:07:01 2021
From: "Matthew Wilcox (Oracle)"
To: "Darrick J. Wong"
Cc: "Matthew Wilcox (Oracle)", linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, Christoph Hellwig
Subject: [PATCH v3 11/25] iomap: Use folio offsets instead of page offsets
Date: Thu, 16 Dec 2021 21:07:01 +0000
Message-Id: <20211216210715.3801857-12-willy@infradead.org>
In-Reply-To: <20211216210715.3801857-1-willy@infradead.org>

Pass a folio around instead of the page, and make sure the offset
is relative to the start of the folio instead of the start of a page.
Also use size_t for offset & length to make it clear that these are
byte counts, and to support >2GB folios in the future.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Darrick J. Wong
Reviewed-by: Christoph Hellwig
---
 fs/iomap/buffered-io.c | 78 ++++++++++++++++++++++---------------------
 1 file changed, 40 insertions(+), 38 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 5730a3317407..06ff80c05340 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -75,18 +75,18 @@ static void iomap_page_release(struct folio *folio)
 }
 
 /*
- * Calculate the range inside the page that we actually need to read.
+ * Calculate the range inside the folio that we actually need to read.
 */
-static void
-iomap_adjust_read_range(struct inode *inode, struct iomap_page *iop,
-		loff_t *pos, loff_t length, unsigned *offp, unsigned *lenp)
+static void iomap_adjust_read_range(struct inode *inode, struct folio *folio,
+		loff_t *pos, loff_t length, size_t *offp, size_t *lenp)
 {
+	struct iomap_page *iop = to_iomap_page(folio);
 	loff_t orig_pos = *pos;
 	loff_t isize = i_size_read(inode);
 	unsigned block_bits = inode->i_blkbits;
 	unsigned block_size = (1 << block_bits);
-	unsigned poff = offset_in_page(*pos);
-	unsigned plen = min_t(loff_t, PAGE_SIZE - poff, length);
+	size_t poff = offset_in_folio(folio, *pos);
+	size_t plen = min_t(loff_t, folio_size(folio) - poff, length);
 	unsigned first = poff >> block_bits;
 	unsigned last = (poff + plen - 1) >> block_bits;
 
@@ -124,7 +124,7 @@ iomap_adjust_read_range(struct inode *inode, struct iomap_page *iop,
 	 * page cache for blocks that are entirely outside of i_size.
 	 */
 	if (orig_pos <= isize && orig_pos + length > isize) {
-		unsigned end = offset_in_page(isize - 1) >> block_bits;
+		unsigned end = offset_in_folio(folio, isize - 1) >> block_bits;
 
 		if (first <= end && last > end)
 			plen -= (last - end) * block_size;
@@ -134,31 +134,31 @@ iomap_adjust_read_range(struct inode *inode, struct iomap_page *iop,
 	*lenp = plen;
 }
 
-static void iomap_iop_set_range_uptodate(struct page *page,
-		struct iomap_page *iop, unsigned off, unsigned len)
+static void iomap_iop_set_range_uptodate(struct folio *folio,
+		struct iomap_page *iop, size_t off, size_t len)
 {
-	struct inode *inode = page->mapping->host;
+	struct inode *inode = folio->mapping->host;
 	unsigned first = off >> inode->i_blkbits;
 	unsigned last = (off + len - 1) >> inode->i_blkbits;
 	unsigned long flags;
 
 	spin_lock_irqsave(&iop->uptodate_lock, flags);
 	bitmap_set(iop->uptodate, first, last - first + 1);
-	if (bitmap_full(iop->uptodate, i_blocks_per_page(inode, page)))
-		SetPageUptodate(page);
+	if (bitmap_full(iop->uptodate, i_blocks_per_folio(inode, folio)))
+		folio_mark_uptodate(folio);
 	spin_unlock_irqrestore(&iop->uptodate_lock, flags);
 }
 
-static void iomap_set_range_uptodate(struct page *page,
-		struct iomap_page *iop, unsigned off, unsigned len)
+static void iomap_set_range_uptodate(struct folio *folio,
+		struct iomap_page *iop, size_t off, size_t len)
 {
-	if (PageError(page))
+	if (folio_test_error(folio))
 		return;
 
 	if (iop)
-		iomap_iop_set_range_uptodate(page, iop, off, len);
+		iomap_iop_set_range_uptodate(folio, iop, off, len);
 	else
-		SetPageUptodate(page);
+		folio_mark_uptodate(folio);
 }
 
 static void iomap_finish_folio_read(struct folio *folio, size_t offset,
@@ -170,7 +170,7 @@ static void iomap_finish_folio_read(struct folio *folio, size_t offset,
 		folio_clear_uptodate(folio);
 		folio_set_error(folio);
 	} else {
-		iomap_set_range_uptodate(&folio->page, iop, offset, len);
+		iomap_set_range_uptodate(folio, iop, offset, len);
 	}
 
 	if (!iop || atomic_sub_and_test(len, &iop->read_bytes_pending))
@@ -211,6 +211,7 @@ static int iomap_read_inline_data(const struct iomap_iter *iter, const
struct iomap *iomap = iomap_iter_srcmap(iter); size_t size = i_size_read(iter->inode) - iomap->offset; size_t poff = offset_in_page(iomap->offset); + size_t offset = offset_in_folio(folio, iomap->offset); void *addr; if (PageUptodate(page)) @@ -223,7 +224,7 @@ static int iomap_read_inline_data(const struct iomap_iter *iter, return -EIO; if (WARN_ON_ONCE(size > iomap->length)) return -EIO; - if (poff > 0) + if (offset > 0) iop = iomap_page_create(iter->inode, folio); else iop = to_iomap_page(folio); @@ -232,7 +233,7 @@ static int iomap_read_inline_data(const struct iomap_iter *iter, memcpy(addr, iomap->inline_data, size); memset(addr + size, 0, PAGE_SIZE - poff - size); kunmap_local(addr); - iomap_set_range_uptodate(page, iop, poff, PAGE_SIZE - poff); + iomap_set_range_uptodate(folio, iop, offset, PAGE_SIZE - poff); return 0; } @@ -256,7 +257,7 @@ static loff_t iomap_readpage_iter(const struct iomap_iter *iter, struct folio *folio = page_folio(page); struct iomap_page *iop; loff_t orig_pos = pos; - unsigned poff, plen; + size_t poff, plen; sector_t sector; if (iomap->type == IOMAP_INLINE) @@ -264,13 +265,13 @@ static loff_t iomap_readpage_iter(const struct iomap_iter *iter, /* zero post-eof blocks as the page may be mapped */ iop = iomap_page_create(iter->inode, folio); - iomap_adjust_read_range(iter->inode, iop, &pos, length, &poff, &plen); + iomap_adjust_read_range(iter->inode, folio, &pos, length, &poff, &plen); if (plen == 0) goto done; if (iomap_block_needs_zeroing(iter, pos)) { - zero_user(page, poff, plen); - iomap_set_range_uptodate(page, iop, poff, plen); + folio_zero_range(folio, poff, plen); + iomap_set_range_uptodate(folio, iop, poff, plen); goto done; } @@ -281,7 +282,7 @@ static loff_t iomap_readpage_iter(const struct iomap_iter *iter, sector = iomap_sector(iomap, pos); if (!ctx->bio || bio_end_sector(ctx->bio) != sector || - bio_add_page(ctx->bio, page, plen, poff) != plen) { + !bio_add_folio(ctx->bio, folio, plen, poff)) { gfp_t gfp = mapping_gfp_constraint(page->mapping, GFP_KERNEL); gfp_t orig_gfp = gfp; unsigned int nr_vecs = DIV_ROUND_UP(length, PAGE_SIZE); @@ -305,8 +306,9 @@ static loff_t iomap_readpage_iter(const struct iomap_iter *iter, ctx->bio->bi_iter.bi_sector = sector; bio_set_dev(ctx->bio, iomap->bdev); ctx->bio->bi_end_io = iomap_read_end_io; - __bio_add_page(ctx->bio, page, plen, poff); + bio_add_folio(ctx->bio, folio, plen, poff); } + done: /* * Move the caller beyond our range so that it keeps making progress. 
@@ -535,9 +537,8 @@ iomap_write_failed(struct inode *inode, loff_t pos, unsigned len) truncate_pagecache_range(inode, max(pos, i_size), pos + len); } -static int -iomap_read_page_sync(loff_t block_start, struct page *page, unsigned poff, - unsigned plen, const struct iomap *iomap) +static int iomap_read_folio_sync(loff_t block_start, struct folio *folio, + size_t poff, size_t plen, const struct iomap *iomap) { struct bio_vec bvec; struct bio bio; @@ -546,7 +547,7 @@ iomap_read_page_sync(loff_t block_start, struct page *page, unsigned poff, bio.bi_opf = REQ_OP_READ; bio.bi_iter.bi_sector = iomap_sector(iomap, block_start); bio_set_dev(&bio, iomap->bdev); - __bio_add_page(&bio, page, plen, poff); + bio_add_folio(&bio, folio, plen, poff); return submit_bio_wait(&bio); } @@ -559,14 +560,15 @@ static int __iomap_write_begin(const struct iomap_iter *iter, loff_t pos, loff_t block_size = i_blocksize(iter->inode); loff_t block_start = round_down(pos, block_size); loff_t block_end = round_up(pos + len, block_size); - unsigned from = offset_in_page(pos), to = from + len, poff, plen; + size_t from = offset_in_folio(folio, pos), to = from + len; + size_t poff, plen; - if (PageUptodate(page)) + if (folio_test_uptodate(folio)) return 0; - ClearPageError(page); + folio_clear_error(folio); do { - iomap_adjust_read_range(iter->inode, iop, &block_start, + iomap_adjust_read_range(iter->inode, folio, &block_start, block_end - block_start, &poff, &plen); if (plen == 0) break; @@ -579,14 +581,14 @@ static int __iomap_write_begin(const struct iomap_iter *iter, loff_t pos, if (iomap_block_needs_zeroing(iter, block_start)) { if (WARN_ON_ONCE(iter->flags & IOMAP_UNSHARE)) return -EIO; - zero_user_segments(page, poff, from, to, poff + plen); + folio_zero_segments(folio, poff, from, to, poff + plen); } else { - int status = iomap_read_page_sync(block_start, page, + int status = iomap_read_folio_sync(block_start, folio, poff, plen, srcmap); if (status) return status; } - iomap_set_range_uptodate(page, iop, poff, plen); + iomap_set_range_uptodate(folio, iop, poff, plen); } while ((block_start += plen) < block_end); return 0; @@ -675,7 +677,7 @@ static size_t __iomap_write_end(struct inode *inode, loff_t pos, size_t len, */ if (unlikely(copied < len && !PageUptodate(page))) return 0; - iomap_set_range_uptodate(page, iop, offset_in_page(pos), len); + iomap_set_range_uptodate(folio, iop, offset_in_folio(folio, pos), len); __set_page_dirty_nobuffers(page); return copied; } From patchwork Thu Dec 16 21:07:02 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12682659 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1CF57C433F5 for ; Thu, 16 Dec 2021 21:07:57 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S241704AbhLPVHn (ORCPT ); Thu, 16 Dec 2021 16:07:43 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53494 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S241566AbhLPVHW (ORCPT ); Thu, 16 Dec 2021 16:07:22 -0500 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 03BB8C061574; Thu, 16 Dec 2021 13:07:22 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; 
c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=/GPw/t46DUAu+EQGlBQpIskJ8A1FQYCfjrZfF5LHiuc=; b=kgjXKLARbp5JVTJ4fihOCHpkaj Vnk/2HR/BXSnwlI3rPLY1gWEbHvNRjsQw1U0C3OCJEEO3ylwyY94GWkO5Yyx2hkUG9jhZAQfov8uW WArXqNTGRxQT/Hl+Kjdr0sLB5jxtOUM7T+H7om3wm45LgHWgDbEfxaLi9TpD79Q8X7r7BB3zcaasP azik2Y8DCZktlJM5bbc6y+zBoBCnMVv/1RTYZ8iA8KEZXPO0tr1MmFae0hRmZPdf8CQnJ5Q9U8p/3 sB4Qn1+XM1jCPbQz+O5EYDdx2+j/Ye4WvecXQlNd29Z9RLHfXiccbdtgzkAKxjvj0m8P7UTPgzRkj M8nub+ow==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1mxxyC-00Fx44-8S; Thu, 16 Dec 2021 21:07:20 +0000 From: "Matthew Wilcox (Oracle)" To: "Darrick J. Wong" Cc: "Matthew Wilcox (Oracle)" , linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, Christoph Hellwig Subject: [PATCH v3 12/25] iomap: Convert iomap_read_inline_data to take a folio Date: Thu, 16 Dec 2021 21:07:02 +0000 Message-Id: <20211216210715.3801857-13-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211216210715.3801857-1-willy@infradead.org> References: <20211216210715.3801857-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-xfs@vger.kernel.org We still only support up to a single page of inline data (at least, per call to iomap_read_inline_data()), but it can now be written into the middle of a folio in case we decide to allocate a 16KiB page for a file that's 8.1KiB in size. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Darrick J. Wong Reviewed-by: Christoph Hellwig --- fs/iomap/buffered-io.c | 17 +++++++++-------- 1 file changed, 9 insertions(+), 8 deletions(-) diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index 06ff80c05340..2ebea02780b8 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -197,16 +197,15 @@ struct iomap_readpage_ctx { /** * iomap_read_inline_data - copy inline data into the page cache * @iter: iteration structure - * @page: page to copy to + * @folio: folio to copy to * - * Copy the inline data in @iter into @page and zero out the rest of the page. + * Copy the inline data in @iter into @folio and zero out the rest of the folio. * Only a single IOMAP_INLINE extent is allowed at the end of each file. * Returns zero for success to complete the read, or the usual negative errno. 
*/ static int iomap_read_inline_data(const struct iomap_iter *iter, - struct page *page) + struct folio *folio) { - struct folio *folio = page_folio(page); struct iomap_page *iop; const struct iomap *iomap = iomap_iter_srcmap(iter); size_t size = i_size_read(iter->inode) - iomap->offset; @@ -214,7 +213,7 @@ static int iomap_read_inline_data(const struct iomap_iter *iter, size_t offset = offset_in_folio(folio, iomap->offset); void *addr; - if (PageUptodate(page)) + if (folio_test_uptodate(folio)) return 0; if (WARN_ON_ONCE(size > PAGE_SIZE - poff)) @@ -229,7 +228,7 @@ static int iomap_read_inline_data(const struct iomap_iter *iter, else iop = to_iomap_page(folio); - addr = kmap_local_page(page) + poff; + addr = kmap_local_folio(folio, offset); memcpy(addr, iomap->inline_data, size); memset(addr + size, 0, PAGE_SIZE - poff - size); kunmap_local(addr); @@ -261,7 +260,7 @@ static loff_t iomap_readpage_iter(const struct iomap_iter *iter, sector_t sector; if (iomap->type == IOMAP_INLINE) - return iomap_read_inline_data(iter, page); + return iomap_read_inline_data(iter, folio); /* zero post-eof blocks as the page may be mapped */ iop = iomap_page_create(iter->inode, folio); @@ -597,10 +596,12 @@ static int __iomap_write_begin(const struct iomap_iter *iter, loff_t pos, static int iomap_write_begin_inline(const struct iomap_iter *iter, struct page *page) { + struct folio *folio = page_folio(page); + /* needs more work for the tailpacking case; disable for now */ if (WARN_ON_ONCE(iomap_iter_srcmap(iter)->offset != 0)) return -EIO; - return iomap_read_inline_data(iter, page); + return iomap_read_inline_data(iter, folio); } static int iomap_write_begin(const struct iomap_iter *iter, loff_t pos, From patchwork Thu Dec 16 21:07:03 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12682651 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id E5F78C43219 for ; Thu, 16 Dec 2021 21:07:43 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S241696AbhLPVHm (ORCPT ); Thu, 16 Dec 2021 16:07:42 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53504 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S241567AbhLPVHW (ORCPT ); Thu, 16 Dec 2021 16:07:22 -0500 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 17AFFC061747; Thu, 16 Dec 2021 13:07:22 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=lIhC+u6Nk/VRoSD3IDEbgSu4q9cz/2FSHvcgz8mkQPg=; b=g6p/FtWQQQCkCi4goZMEnOUl0V 35vyTQnxUP5+9+3NY5r1E8n4dCppjXvVdQ30CkH/ZlAUxaTIBaTgpAJOXDdsUZgf+n8TXk3yplokg WE5zyBg4HRgAlUCBvJ23YKQSXogkStGhZdKBnjn4KNH8hfuQhpiCIKeDXtetQtnF4poUYtbjRikZ8 8VJ2I0ZdLfGjLJHlrJwt87tBrfiKwqcuR/QraPUDS+DMMlTw1CggFUHESTR7KfOM5wS/3cWiRdoBH L6/OsDaESosCCL1chaY8/Kqkjpoj8YJwgpKpTr0ZZeyg1uSgQ2Z9/jDh+FyCWXeyjurvXynpgWw3B ypfQGemA==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1mxxyC-00Fx4A-Cb; Thu, 16 Dec 2021 21:07:20 +0000 From: "Matthew 
Wilcox (Oracle)" To: "Darrick J. Wong" Cc: "Matthew Wilcox (Oracle)" , linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, Christoph Hellwig Subject: [PATCH v3 13/25] iomap: Convert readahead and readpage to use a folio Date: Thu, 16 Dec 2021 21:07:03 +0000 Message-Id: <20211216210715.3801857-14-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211216210715.3801857-1-willy@infradead.org> References: <20211216210715.3801857-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-xfs@vger.kernel.org Handle folios of arbitrary size instead of working in PAGE_SIZE units. readahead_folio() decreases the page refcount for you, so this is not quite a mechanical change. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Darrick J. Wong Reviewed-by: Christoph Hellwig --- fs/iomap/buffered-io.c | 53 +++++++++++++++++++++--------------------- 1 file changed, 26 insertions(+), 27 deletions(-) diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index 2ebea02780b8..ad89c20cb741 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -188,8 +188,8 @@ static void iomap_read_end_io(struct bio *bio) } struct iomap_readpage_ctx { - struct page *cur_page; - bool cur_page_in_bio; + struct folio *cur_folio; + bool cur_folio_in_bio; struct bio *bio; struct readahead_control *rac; }; @@ -252,8 +252,7 @@ static loff_t iomap_readpage_iter(const struct iomap_iter *iter, const struct iomap *iomap = &iter->iomap; loff_t pos = iter->pos + offset; loff_t length = iomap_length(iter) - offset; - struct page *page = ctx->cur_page; - struct folio *folio = page_folio(page); + struct folio *folio = ctx->cur_folio; struct iomap_page *iop; loff_t orig_pos = pos; size_t poff, plen; @@ -274,7 +273,7 @@ static loff_t iomap_readpage_iter(const struct iomap_iter *iter, goto done; } - ctx->cur_page_in_bio = true; + ctx->cur_folio_in_bio = true; if (iop) atomic_add(plen, &iop->read_bytes_pending); @@ -282,7 +281,7 @@ static loff_t iomap_readpage_iter(const struct iomap_iter *iter, if (!ctx->bio || bio_end_sector(ctx->bio) != sector || !bio_add_folio(ctx->bio, folio, plen, poff)) { - gfp_t gfp = mapping_gfp_constraint(page->mapping, GFP_KERNEL); + gfp_t gfp = mapping_gfp_constraint(folio->mapping, GFP_KERNEL); gfp_t orig_gfp = gfp; unsigned int nr_vecs = DIV_ROUND_UP(length, PAGE_SIZE); @@ -321,30 +320,31 @@ static loff_t iomap_readpage_iter(const struct iomap_iter *iter, int iomap_readpage(struct page *page, const struct iomap_ops *ops) { + struct folio *folio = page_folio(page); struct iomap_iter iter = { - .inode = page->mapping->host, - .pos = page_offset(page), - .len = PAGE_SIZE, + .inode = folio->mapping->host, + .pos = folio_pos(folio), + .len = folio_size(folio), }; struct iomap_readpage_ctx ctx = { - .cur_page = page, + .cur_folio = folio, }; int ret; - trace_iomap_readpage(page->mapping->host, 1); + trace_iomap_readpage(iter.inode, 1); while ((ret = iomap_iter(&iter, ops)) > 0) iter.processed = iomap_readpage_iter(&iter, &ctx, 0); if (ret < 0) - SetPageError(page); + folio_set_error(folio); if (ctx.bio) { submit_bio(ctx.bio); - WARN_ON_ONCE(!ctx.cur_page_in_bio); + WARN_ON_ONCE(!ctx.cur_folio_in_bio); } else { - WARN_ON_ONCE(ctx.cur_page_in_bio); - unlock_page(page); + WARN_ON_ONCE(ctx.cur_folio_in_bio); + folio_unlock(folio); } /* @@ -363,15 +363,15 @@ static loff_t iomap_readahead_iter(const struct iomap_iter *iter, loff_t done, ret; for (done = 0; done < length; done += ret) { - if (ctx->cur_page && 
offset_in_page(iter->pos + done) == 0) { - if (!ctx->cur_page_in_bio) - unlock_page(ctx->cur_page); - put_page(ctx->cur_page); - ctx->cur_page = NULL; + if (ctx->cur_folio && + offset_in_folio(ctx->cur_folio, iter->pos + done) == 0) { + if (!ctx->cur_folio_in_bio) + folio_unlock(ctx->cur_folio); + ctx->cur_folio = NULL; } - if (!ctx->cur_page) { - ctx->cur_page = readahead_page(ctx->rac); - ctx->cur_page_in_bio = false; + if (!ctx->cur_folio) { + ctx->cur_folio = readahead_folio(ctx->rac); + ctx->cur_folio_in_bio = false; } ret = iomap_readpage_iter(iter, ctx, done); if (ret <= 0) @@ -414,10 +414,9 @@ void iomap_readahead(struct readahead_control *rac, const struct iomap_ops *ops) if (ctx.bio) submit_bio(ctx.bio); - if (ctx.cur_page) { - if (!ctx.cur_page_in_bio) - unlock_page(ctx.cur_page); - put_page(ctx.cur_page); + if (ctx.cur_folio) { + if (!ctx.cur_folio_in_bio) + folio_unlock(ctx.cur_folio); } } EXPORT_SYMBOL_GPL(iomap_readahead); From patchwork Thu Dec 16 21:07:04 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12682649 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 23E5AC4332F for ; Thu, 16 Dec 2021 21:07:43 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S241694AbhLPVHl (ORCPT ); Thu, 16 Dec 2021 16:07:41 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53520 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S241589AbhLPVHY (ORCPT ); Thu, 16 Dec 2021 16:07:24 -0500 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id DD45BC061756; Thu, 16 Dec 2021 13:07:23 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=t+DGtRseGLddgzRTezSz/EKylPqS09NEepsXJi8iyNA=; b=NRYY4U2Cq4ZuwxLo6LGrUNVdmF C/jHDnjjthvgzsW9H8KZi17/73l7n0qAVxcwgPavN/vJ3bdhcLPCKelJ82nQe63NkxXY9OS8UWFIz QzUalCQ2nKIlmF0+MK/jEDW7VgU6MkhlDpn/KhJ7GOsmwlRTj17z4RIbKTVcWv1hiPp5TUCAKYbq/ 4tOonRnk6JTXqGIhhpXwqO+9gtjsiBDTcqW4ocMQ5khsQvgK/+y2OenfV65VWbg2XsipxQNAYp+Sa GUuDbh3Xi7UA1Nr5SJO+GYalPT47bxkyK5CRw/8O19G/pylgLXn6VJUHJzE6l/6P3dE0lieRNi3iy 4Sk9+hlw==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1mxxyC-00Fx4G-Gg; Thu, 16 Dec 2021 21:07:20 +0000 From: "Matthew Wilcox (Oracle)" To: "Darrick J. Wong" Cc: "Matthew Wilcox (Oracle)" , linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, Christoph Hellwig Subject: [PATCH v3 14/25] iomap: Convert iomap_page_mkwrite to use a folio Date: Thu, 16 Dec 2021 21:07:04 +0000 Message-Id: <20211216210715.3801857-15-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211216210715.3801857-1-willy@infradead.org> References: <20211216210715.3801857-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-xfs@vger.kernel.org If we write to any page in a folio, we have to mark the entire folio as dirty, and potentially COW the entire folio, because it'll all get written back as one unit. 
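
[Editor's note: to make the whole-folio dirtying concrete, here is a
minimal sketch of a ->page_mkwrite handler written against the folio
calls this patch uses. It is not part of the patch; the handler name
is invented, and the real code in the diff below maps errors through
block_page_mkwrite_return() instead of returning VM_FAULT_NOPAGE
directly.]

	static vm_fault_t example_page_mkwrite(struct vm_fault *vmf)
	{
		struct folio *folio = page_folio(vmf->page);
		struct inode *inode = file_inode(vmf->vma->vm_file);
		ssize_t len;

		folio_lock(folio);
		/*
		 * Drop the fault if the folio was truncated away under us;
		 * on success, len is how many bytes of the folio lie
		 * inside i_size.
		 */
		len = folio_mkwrite_check_truncate(folio, inode);
		if (len < 0) {
			folio_unlock(folio);
			return VM_FAULT_NOPAGE;
		}
		/*
		 * Userspace may have touched a single byte, but the unit
		 * of writeback is the folio, so the whole folio is dirtied.
		 */
		folio_mark_dirty(folio);
		folio_wait_stable(folio);
		return VM_FAULT_LOCKED;	/* folio is returned locked */
	}

The hunks below show the same shape inside iomap itself.
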
Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Darrick J. Wong Reviewed-by: Christoph Hellwig --- fs/iomap/buffered-io.c | 25 ++++++++++++------------- 1 file changed, 12 insertions(+), 13 deletions(-) diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index ad89c20cb741..8d7a67655b60 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -967,10 +967,9 @@ iomap_truncate_page(struct inode *inode, loff_t pos, bool *did_zero, } EXPORT_SYMBOL_GPL(iomap_truncate_page); -static loff_t iomap_page_mkwrite_iter(struct iomap_iter *iter, - struct page *page) +static loff_t iomap_folio_mkwrite_iter(struct iomap_iter *iter, + struct folio *folio) { - struct folio *folio = page_folio(page); loff_t length = iomap_length(iter); int ret; @@ -979,10 +978,10 @@ static loff_t iomap_page_mkwrite_iter(struct iomap_iter *iter, &iter->iomap); if (ret) return ret; - block_commit_write(page, 0, length); + block_commit_write(&folio->page, 0, length); } else { - WARN_ON_ONCE(!PageUptodate(page)); - set_page_dirty(page); + WARN_ON_ONCE(!folio_test_uptodate(folio)); + folio_mark_dirty(folio); } return length; @@ -994,24 +993,24 @@ vm_fault_t iomap_page_mkwrite(struct vm_fault *vmf, const struct iomap_ops *ops) .inode = file_inode(vmf->vma->vm_file), .flags = IOMAP_WRITE | IOMAP_FAULT, }; - struct page *page = vmf->page; + struct folio *folio = page_folio(vmf->page); ssize_t ret; - lock_page(page); - ret = page_mkwrite_check_truncate(page, iter.inode); + folio_lock(folio); + ret = folio_mkwrite_check_truncate(folio, iter.inode); if (ret < 0) goto out_unlock; - iter.pos = page_offset(page); + iter.pos = folio_pos(folio); iter.len = ret; while ((ret = iomap_iter(&iter, ops)) > 0) - iter.processed = iomap_page_mkwrite_iter(&iter, page); + iter.processed = iomap_folio_mkwrite_iter(&iter, folio); if (ret < 0) goto out_unlock; - wait_for_stable_page(page); + folio_wait_stable(folio); return VM_FAULT_LOCKED; out_unlock: - unlock_page(page); + folio_unlock(folio); return block_page_mkwrite_return(ret); } EXPORT_SYMBOL_GPL(iomap_page_mkwrite); From patchwork Thu Dec 16 21:07:05 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12682633 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1B16CC433EF for ; Thu, 16 Dec 2021 21:07:34 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S241610AbhLPVH1 (ORCPT ); Thu, 16 Dec 2021 16:07:27 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53500 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S241568AbhLPVHW (ORCPT ); Thu, 16 Dec 2021 16:07:22 -0500 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5E82AC06173E; Thu, 16 Dec 2021 13:07:22 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=ojl5PaoTezGTatzmMRn19yoajNZ3Q7oGrHkVaeCU+1s=; b=KrDp8ssOxyFOEkwsHG1jbhpE+m f3ox4dRdgAMy1a/FKw/GLEyZ5kCeNcyd2YGIqI4Hd8Gdi+R6X6Ly4fkFbl1iz1FTXzKilqyf7at8l 
o8Da2diwp4PDcOXMCYhXtAT2SgzjKFrjJWo606LDpG1eul3/EyDj4gaT5lAbeG7jQi5F2B97UBv1W qJncU7hUTseU3KMKxRklCknrcBcMYlWSL05EOh9kmd8Hs5nhFOE6gdwequd74YvQPEVUo7WZOg8Nm +SPjIeue68Exu9t68fgYWQhIllPPk8hNHCZJWwBAF6FCFRD4e1TWbmLQtHKsYaf55jdV60/cUmSkM +i6JJ4YQ==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1mxxyC-00Fx4M-Lz; Thu, 16 Dec 2021 21:07:20 +0000 From: "Matthew Wilcox (Oracle)" To: "Darrick J. Wong" Cc: "Matthew Wilcox (Oracle)" , linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH v3 15/25] iomap: Allow iomap_write_begin() to be called with the full length Date: Thu, 16 Dec 2021 21:07:05 +0000 Message-Id: <20211216210715.3801857-16-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211216210715.3801857-1-willy@infradead.org> References: <20211216210715.3801857-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-xfs@vger.kernel.org In the future, we want write_begin to know the entire length of the write so that it can choose to allocate large folios. Pass the full length in from __iomap_zero_iter() and limit it where necessary. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Darrick J. Wong --- fs/iomap/buffered-io.c | 9 ++++++++- 1 file changed, 8 insertions(+), 1 deletion(-) diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index 8d7a67655b60..b1ded5204d1c 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -619,6 +619,9 @@ static int iomap_write_begin(const struct iomap_iter *iter, loff_t pos, if (fatal_signal_pending(current)) return -EINTR; + if (!mapping_large_folio_support(iter->inode->i_mapping)) + len = min_t(size_t, len, PAGE_SIZE - offset_in_page(pos)); + if (page_ops && page_ops->page_prepare) { status = page_ops->page_prepare(iter->inode, pos, len); if (status) @@ -632,6 +635,8 @@ static int iomap_write_begin(const struct iomap_iter *iter, loff_t pos, goto out_no_page; } folio = page_folio(page); + if (pos + len > folio_pos(folio) + folio_size(folio)) + len = folio_pos(folio) + folio_size(folio) - pos; if (srcmap->type == IOMAP_INLINE) status = iomap_write_begin_inline(iter, page); @@ -891,11 +896,13 @@ static s64 __iomap_zero_iter(struct iomap_iter *iter, loff_t pos, u64 length) struct page *page; int status; unsigned offset = offset_in_page(pos); - unsigned bytes = min_t(u64, PAGE_SIZE - offset, length); + unsigned bytes = min_t(u64, UINT_MAX, length); status = iomap_write_begin(iter, pos, bytes, &page); if (status) return status; + if (bytes > PAGE_SIZE - offset) + bytes = PAGE_SIZE - offset; zero_user(page, offset, bytes); mark_page_accessed(page); From patchwork Thu Dec 16 21:07:06 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12682637 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2F04BC433EF for ; Thu, 16 Dec 2021 21:07:36 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S241622AbhLPVH3 (ORCPT ); Thu, 16 Dec 2021 16:07:29 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53494 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S241569AbhLPVHW (ORCPT ); Thu, 16 Dec 2021 16:07:22 -0500 Received: from casper.infradead.org 
(casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 790A6C06173F; Thu, 16 Dec 2021 13:07:22 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=9FeMlPUoktdp/YpxNBbRES3BvPNhzOqTg88y0XGGm68=; b=PkhhdeSsA2OZin8SLjjXrDzRF7 nSLifAZ3hOE929JAllJk0g48OfOOiC4oTSaxtMLmLAouw/drMeXXdHCbU5aYwdlDDjtUU6PEGqAhJ /vHZI3xl4xPjFICcsVg6v6pCw7vfgpJ/b/eNnjah2bN8mlkiC5HmIR4oAEpkrv85QMrxtZ3KORZrR M2TAtqX6f0DL6/ayOqNL0iWqs56rLLx6dbKdRjm1i9dGX+RVIK/KNMqNSQHWYHvWOqWXXQ017Yf7y aMieMLq7Cq9kh0QFIJuAA2VIuavYXOk15Dd6jANRM6ij1ZC8PDrm+NQSnTKr7jNoEaVNGKFW21qJ8 YKl+0/Ng==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1mxxyC-00Fx4S-QK; Thu, 16 Dec 2021 21:07:20 +0000 From: "Matthew Wilcox (Oracle)" To: "Darrick J. Wong" Cc: "Matthew Wilcox (Oracle)" , linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, Christoph Hellwig Subject: [PATCH v3 16/25] iomap: Convert __iomap_zero_iter to use a folio Date: Thu, 16 Dec 2021 21:07:06 +0000 Message-Id: <20211216210715.3801857-17-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211216210715.3801857-1-willy@infradead.org> References: <20211216210715.3801857-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-xfs@vger.kernel.org The zero iterator can work in folio-sized chunks instead of page-sized chunks. This will save a lot of page cache lookups if the file is cached in large folios. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig Reviewed-by: Darrick J. 
Wong --- fs/iomap/buffered-io.c | 14 +++++++++----- 1 file changed, 9 insertions(+), 5 deletions(-) diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index b1ded5204d1c..47cf558244f4 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -893,19 +893,23 @@ EXPORT_SYMBOL_GPL(iomap_file_unshare); static s64 __iomap_zero_iter(struct iomap_iter *iter, loff_t pos, u64 length) { + struct folio *folio; struct page *page; int status; - unsigned offset = offset_in_page(pos); + size_t offset; unsigned bytes = min_t(u64, UINT_MAX, length); status = iomap_write_begin(iter, pos, bytes, &page); if (status) return status; - if (bytes > PAGE_SIZE - offset) - bytes = PAGE_SIZE - offset; + folio = page_folio(page); + + offset = offset_in_folio(folio, pos); + if (bytes > folio_size(folio) - offset) + bytes = folio_size(folio) - offset; - zero_user(page, offset, bytes); - mark_page_accessed(page); + folio_zero_range(folio, offset, bytes); + folio_mark_accessed(folio); return iomap_write_end(iter, pos, bytes, bytes, page); } From patchwork Thu Dec 16 21:07:07 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12682647 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 25F90C433EF for ; Thu, 16 Dec 2021 21:07:42 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S241547AbhLPVHk (ORCPT ); Thu, 16 Dec 2021 16:07:40 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53502 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S241571AbhLPVHW (ORCPT ); Thu, 16 Dec 2021 16:07:22 -0500 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A7D23C061574; Thu, 16 Dec 2021 13:07:22 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=Bf4SidGUlAAz0s8jNkhJypflNUGRpNSR81KR7pNpUA8=; b=rG+24dcwn6Fh5H/cudXeDr4rwH mgDX0SHSnAwjQT+q8HQD0rXQHhy9R644XVaYvykXb/GFFQ0ABv6NeHvtjMPAvl+f/4OzWmaxDUcTC U2fE7bnFyWAzQOeuedR5LgFEM7Px2fuxAGa4GenRpiHJJ4rs7OVUbwgxHWBTXwMMS44S+MtszghmQ L97HsGIPuoGUDjuSDvm5oPHU69xMKpZj9yNsez299fAEx+ahczep/J1pvQMc32o524z4lTl8IbLE0 UJU0rEXV6K26vB9kfDRRpGFBo+a9MnQNvyY1Ob6/SCnlehanU6Vv9OWN0wiLN/L0dFMogC250vm/K uAcash9g==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1mxxyC-00Fx4b-Ua; Thu, 16 Dec 2021 21:07:21 +0000 From: "Matthew Wilcox (Oracle)" To: "Darrick J. 
Wong" Cc: "Matthew Wilcox (Oracle)" , linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, Christoph Hellwig Subject: [PATCH v3 17/25] iomap: Convert iomap_write_begin() and iomap_write_end() to folios Date: Thu, 16 Dec 2021 21:07:07 +0000 Message-Id: <20211216210715.3801857-18-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211216210715.3801857-1-willy@infradead.org> References: <20211216210715.3801857-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-xfs@vger.kernel.org These functions still only work in PAGE_SIZE chunks, but there are fewer conversions from tail to head pages as a result of this patch. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig Reviewed-by: Darrick J. Wong --- fs/iomap/buffered-io.c | 73 ++++++++++++++++++++---------------------- 1 file changed, 34 insertions(+), 39 deletions(-) diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index 47cf558244f4..975a4a6aeca0 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -550,9 +550,8 @@ static int iomap_read_folio_sync(loff_t block_start, struct folio *folio, } static int __iomap_write_begin(const struct iomap_iter *iter, loff_t pos, - unsigned len, struct page *page) + size_t len, struct folio *folio) { - struct folio *folio = page_folio(page); const struct iomap *srcmap = iomap_iter_srcmap(iter); struct iomap_page *iop = iomap_page_create(iter->inode, folio); loff_t block_size = i_blocksize(iter->inode); @@ -593,10 +592,8 @@ static int __iomap_write_begin(const struct iomap_iter *iter, loff_t pos, } static int iomap_write_begin_inline(const struct iomap_iter *iter, - struct page *page) + struct folio *folio) { - struct folio *folio = page_folio(page); - /* needs more work for the tailpacking case; disable for now */ if (WARN_ON_ONCE(iomap_iter_srcmap(iter)->offset != 0)) return -EIO; @@ -604,12 +601,12 @@ static int iomap_write_begin_inline(const struct iomap_iter *iter, } static int iomap_write_begin(const struct iomap_iter *iter, loff_t pos, - unsigned len, struct page **pagep) + size_t len, struct folio **foliop) { const struct iomap_page_ops *page_ops = iter->iomap.page_ops; const struct iomap *srcmap = iomap_iter_srcmap(iter); - struct page *page; struct folio *folio; + unsigned fgp = FGP_LOCK | FGP_WRITE | FGP_CREAT | FGP_STABLE | FGP_NOFS; int status = 0; BUG_ON(pos + len > iter->iomap.offset + iter->iomap.length); @@ -620,7 +617,7 @@ static int iomap_write_begin(const struct iomap_iter *iter, loff_t pos, return -EINTR; if (!mapping_large_folio_support(iter->inode->i_mapping)) - len = min_t(size_t, len, PAGE_SIZE - offset_in_page(pos)); + len = min(len, PAGE_SIZE - offset_in_page(pos)); if (page_ops && page_ops->page_prepare) { status = page_ops->page_prepare(iter->inode, pos, len); @@ -628,32 +625,31 @@ static int iomap_write_begin(const struct iomap_iter *iter, loff_t pos, return status; } - page = grab_cache_page_write_begin(iter->inode->i_mapping, - pos >> PAGE_SHIFT, AOP_FLAG_NOFS); - if (!page) { + folio = __filemap_get_folio(iter->inode->i_mapping, pos >> PAGE_SHIFT, + fgp, mapping_gfp_mask(iter->inode->i_mapping)); + if (!folio) { status = -ENOMEM; goto out_no_page; } - folio = page_folio(page); if (pos + len > folio_pos(folio) + folio_size(folio)) len = folio_pos(folio) + folio_size(folio) - pos; if (srcmap->type == IOMAP_INLINE) - status = iomap_write_begin_inline(iter, page); + status = iomap_write_begin_inline(iter, folio); else if (srcmap->flags & 
IOMAP_F_BUFFER_HEAD) status = __block_write_begin_int(folio, pos, len, NULL, srcmap); else - status = __iomap_write_begin(iter, pos, len, page); + status = __iomap_write_begin(iter, pos, len, folio); if (unlikely(status)) goto out_unlock; - *pagep = page; + *foliop = folio; return 0; out_unlock: - unlock_page(page); - put_page(page); + folio_unlock(folio); + folio_put(folio); iomap_write_failed(iter->inode, pos, len); out_no_page: @@ -663,11 +659,10 @@ static int iomap_write_begin(const struct iomap_iter *iter, loff_t pos, } static size_t __iomap_write_end(struct inode *inode, loff_t pos, size_t len, - size_t copied, struct page *page) + size_t copied, struct folio *folio) { - struct folio *folio = page_folio(page); struct iomap_page *iop = to_iomap_page(folio); - flush_dcache_page(page); + flush_dcache_folio(folio); /* * The blocks that were entirely written will now be uptodate, so we @@ -680,10 +675,10 @@ static size_t __iomap_write_end(struct inode *inode, loff_t pos, size_t len, * non-uptodate page as a zero-length write, and force the caller to * redo the whole thing. */ - if (unlikely(copied < len && !PageUptodate(page))) + if (unlikely(copied < len && !folio_test_uptodate(folio))) return 0; iomap_set_range_uptodate(folio, iop, offset_in_folio(folio, pos), len); - __set_page_dirty_nobuffers(page); + filemap_dirty_folio(inode->i_mapping, folio); return copied; } @@ -707,7 +702,7 @@ static size_t iomap_write_end_inline(const struct iomap_iter *iter, /* Returns the number of bytes copied. May be 0. Cannot be an errno. */ static size_t iomap_write_end(struct iomap_iter *iter, loff_t pos, size_t len, - size_t copied, struct page *page) + size_t copied, struct folio *folio) { const struct iomap_page_ops *page_ops = iter->iomap.page_ops; const struct iomap *srcmap = iomap_iter_srcmap(iter); @@ -715,12 +710,12 @@ static size_t iomap_write_end(struct iomap_iter *iter, loff_t pos, size_t len, size_t ret; if (srcmap->type == IOMAP_INLINE) { - ret = iomap_write_end_inline(iter, page, pos, copied); + ret = iomap_write_end_inline(iter, &folio->page, pos, copied); } else if (srcmap->flags & IOMAP_F_BUFFER_HEAD) { ret = block_write_end(NULL, iter->inode->i_mapping, pos, len, - copied, page, NULL); + copied, &folio->page, NULL); } else { - ret = __iomap_write_end(iter->inode, pos, len, copied, page); + ret = __iomap_write_end(iter->inode, pos, len, copied, folio); } /* @@ -732,13 +727,13 @@ static size_t iomap_write_end(struct iomap_iter *iter, loff_t pos, size_t len, i_size_write(iter->inode, pos + ret); iter->iomap.flags |= IOMAP_F_SIZE_CHANGED; } - unlock_page(page); + folio_unlock(folio); if (old_size < pos) pagecache_isize_extended(iter->inode, old_size, pos); if (page_ops && page_ops->page_done) - page_ops->page_done(iter->inode, pos, ret, page); - put_page(page); + page_ops->page_done(iter->inode, pos, ret, &folio->page); + folio_put(folio); if (ret < len) iomap_write_failed(iter->inode, pos, len); @@ -753,6 +748,7 @@ static loff_t iomap_write_iter(struct iomap_iter *iter, struct iov_iter *i) long status = 0; do { + struct folio *folio; struct page *page; unsigned long offset; /* Offset into pagecache page */ unsigned long bytes; /* Bytes to write to page */ @@ -776,16 +772,17 @@ static loff_t iomap_write_iter(struct iomap_iter *iter, struct iov_iter *i) break; } - status = iomap_write_begin(iter, pos, bytes, &page); + status = iomap_write_begin(iter, pos, bytes, &folio); if (unlikely(status)) break; + page = folio_file_page(folio, pos >> PAGE_SHIFT); if 
(mapping_writably_mapped(iter->inode->i_mapping)) flush_dcache_page(page); copied = copy_page_from_iter_atomic(page, offset, bytes, i); - status = iomap_write_end(iter, pos, bytes, copied, page); + status = iomap_write_end(iter, pos, bytes, copied, folio); if (unlikely(copied != status)) iov_iter_revert(i, copied - status); @@ -851,13 +848,13 @@ static loff_t iomap_unshare_iter(struct iomap_iter *iter) do { unsigned long offset = offset_in_page(pos); unsigned long bytes = min_t(loff_t, PAGE_SIZE - offset, length); - struct page *page; + struct folio *folio; - status = iomap_write_begin(iter, pos, bytes, &page); + status = iomap_write_begin(iter, pos, bytes, &folio); if (unlikely(status)) return status; - status = iomap_write_end(iter, pos, bytes, bytes, page); + status = iomap_write_end(iter, pos, bytes, bytes, folio); if (WARN_ON_ONCE(status == 0)) return -EIO; @@ -894,15 +891,13 @@ EXPORT_SYMBOL_GPL(iomap_file_unshare); static s64 __iomap_zero_iter(struct iomap_iter *iter, loff_t pos, u64 length) { struct folio *folio; - struct page *page; int status; size_t offset; - unsigned bytes = min_t(u64, UINT_MAX, length); + size_t bytes = min_t(u64, SIZE_MAX, length); - status = iomap_write_begin(iter, pos, bytes, &page); + status = iomap_write_begin(iter, pos, bytes, &folio); if (status) return status; - folio = page_folio(page); offset = offset_in_folio(folio, pos); if (bytes > folio_size(folio) - offset) @@ -911,7 +906,7 @@ static s64 __iomap_zero_iter(struct iomap_iter *iter, loff_t pos, u64 length) folio_zero_range(folio, offset, bytes); folio_mark_accessed(folio); - return iomap_write_end(iter, pos, bytes, bytes, page); + return iomap_write_end(iter, pos, bytes, bytes, folio); } static loff_t iomap_zero_iter(struct iomap_iter *iter, bool *did_zero) From patchwork Thu Dec 16 21:07:08 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12682635 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 04DDFC433F5 for ; Thu, 16 Dec 2021 21:07:36 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S241644AbhLPVHc (ORCPT ); Thu, 16 Dec 2021 16:07:32 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53504 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S241573AbhLPVHX (ORCPT ); Thu, 16 Dec 2021 16:07:23 -0500 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id BD216C061401; Thu, 16 Dec 2021 13:07:22 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=Oc7hLAdhalV5iGmAfu2+59peoMoCuxLvV2dLzDsyFaw=; b=U1SB07RUdLpqtMp7E6z/UgLf31 w7Qit5iwuxTtUrGWEBhsNXb3B36YxwQJwHptFKAjhUqbKOU5icI8ZI48dd2IvLZ0tbbdsV0Iz8B6J dJxZCV/mFBgKVGRz6TVTftX8e7pAZh4kceU5VrhjwJXnbXklS30N9gWhWYFiekRLZZtUG8Y1u4U48 GFEROHiXo5es3momAAPZv9mV/qMA07rnRDxB602slOL10QICydhAoF8LfsmgyGdUPj3fg3hmgQmIj Kbz0F+OTeKgI5hNKXXRKb1MbV11WlCBI7glIp1pd7rA3Y69BxfIy4mNzjNzIsuMtCLFWNR6VyNv1J cCgVydmQ==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 
1mxxyD-00Fx4h-2J; Thu, 16 Dec 2021 21:07:21 +0000 From: "Matthew Wilcox (Oracle)" To: "Darrick J. Wong" Cc: "Matthew Wilcox (Oracle)" , linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, Christoph Hellwig Subject: [PATCH v3 18/25] iomap: Convert iomap_write_end_inline to take a folio Date: Thu, 16 Dec 2021 21:07:08 +0000 Message-Id: <20211216210715.3801857-19-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211216210715.3801857-1-willy@infradead.org> References: <20211216210715.3801857-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-xfs@vger.kernel.org This conversion is only safe because iomap only supports writes to inline data which starts at the beginning of the file. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Darrick J. Wong Reviewed-by: Christoph Hellwig --- fs/iomap/buffered-io.c | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index 975a4a6aeca0..b2bf7f337e75 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -683,16 +683,16 @@ static size_t __iomap_write_end(struct inode *inode, loff_t pos, size_t len, } static size_t iomap_write_end_inline(const struct iomap_iter *iter, - struct page *page, loff_t pos, size_t copied) + struct folio *folio, loff_t pos, size_t copied) { const struct iomap *iomap = &iter->iomap; void *addr; - WARN_ON_ONCE(!PageUptodate(page)); + WARN_ON_ONCE(!folio_test_uptodate(folio)); BUG_ON(!iomap_inline_data_valid(iomap)); - flush_dcache_page(page); - addr = kmap_local_page(page) + pos; + flush_dcache_folio(folio); + addr = kmap_local_folio(folio, pos); memcpy(iomap_inline_data(iomap, pos), addr, copied); kunmap_local(addr); @@ -710,7 +710,7 @@ static size_t iomap_write_end(struct iomap_iter *iter, loff_t pos, size_t len, size_t ret; if (srcmap->type == IOMAP_INLINE) { - ret = iomap_write_end_inline(iter, &folio->page, pos, copied); + ret = iomap_write_end_inline(iter, folio, pos, copied); } else if (srcmap->flags & IOMAP_F_BUFFER_HEAD) { ret = block_write_end(NULL, iter->inode->i_mapping, pos, len, copied, &folio->page, NULL); From patchwork Thu Dec 16 21:07:09 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12682639 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 580F9C433F5 for ; Thu, 16 Dec 2021 21:07:38 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S241654AbhLPVHe (ORCPT ); Thu, 16 Dec 2021 16:07:34 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53494 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S241575AbhLPVHX (ORCPT ); Thu, 16 Dec 2021 16:07:23 -0500 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D3FE1C061746; Thu, 16 Dec 2021 13:07:22 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=hQSsL51YTSb5oR2h3nF49Qw5BAdDZkjKk1KgHRJJkQU=; b=r3O9q1D8ksLfGYxPCAeLaJ3lmC 
2iYViHJPUqut0vhZ7NSOTe4DvovYtPYKBXBtm/roCeeJgxCfKwX9dakvCDXM1kasLUXf9Tr+2glif +ppZs80QzCDVi2JQ2YOrOOsOq+/+a5hQ9Pr4wFZ6UX4UgJ9dSI0l5g4jGKle7kCtbLkGAkJ7k3LZp aHg2SCd80HnbGmtsqFmsnh6drePPQcA0GtRtBJGNsKucIhWZhpp7qr+rwNCc7hYipe1eU28V8a+CJ rI1LiBjOABGwsEF+kbaRigBgRTYsL3bNzX3F8tQWGctkQpdsRTDT4zr4FkVJ8A7caoxtfFANp1nNX Cb0kc5OA==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1mxxyD-00Fx4r-5u; Thu, 16 Dec 2021 21:07:21 +0000 From: "Matthew Wilcox (Oracle)" To: "Darrick J. Wong" Cc: "Matthew Wilcox (Oracle)" , linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, Christoph Hellwig Subject: [PATCH v3 19/25] iomap,xfs: Convert ->discard_page to ->discard_folio Date: Thu, 16 Dec 2021 21:07:09 +0000 Message-Id: <20211216210715.3801857-20-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211216210715.3801857-1-willy@infradead.org> References: <20211216210715.3801857-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-xfs@vger.kernel.org XFS has the only implementation of ->discard_page today, so convert it to use folios in the same patch as converting the API. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig Reviewed-by: Darrick J. Wong --- fs/iomap/buffered-io.c | 4 ++-- fs/xfs/xfs_aops.c | 24 ++++++++++++------------ include/linux/iomap.h | 2 +- 3 files changed, 15 insertions(+), 15 deletions(-) diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index b2bf7f337e75..ceb7fbe4e7c9 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -1360,8 +1360,8 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc, * won't be affected by I/O completion and we must unlock it * now. */ - if (wpc->ops->discard_page) - wpc->ops->discard_page(page, file_offset); + if (wpc->ops->discard_folio) + wpc->ops->discard_folio(folio, file_offset); if (!count) { ClearPageUptodate(page); unlock_page(page); diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c index c8c15c3c3147..4098a9875c5b 100644 --- a/fs/xfs/xfs_aops.c +++ b/fs/xfs/xfs_aops.c @@ -437,37 +437,37 @@ xfs_prepare_ioend( * see a ENOSPC in writeback). 
*/ static void -xfs_discard_page( - struct page *page, - loff_t fileoff) +xfs_discard_folio( + struct folio *folio, + loff_t pos) { - struct inode *inode = page->mapping->host; + struct inode *inode = folio->mapping->host; struct xfs_inode *ip = XFS_I(inode); struct xfs_mount *mp = ip->i_mount; - unsigned int pageoff = offset_in_page(fileoff); - xfs_fileoff_t start_fsb = XFS_B_TO_FSBT(mp, fileoff); - xfs_fileoff_t pageoff_fsb = XFS_B_TO_FSBT(mp, pageoff); + size_t offset = offset_in_folio(folio, pos); + xfs_fileoff_t start_fsb = XFS_B_TO_FSBT(mp, pos); + xfs_fileoff_t pageoff_fsb = XFS_B_TO_FSBT(mp, offset); int error; if (xfs_is_shutdown(mp)) goto out_invalidate; xfs_alert_ratelimited(mp, - "page discard on page "PTR_FMT", inode 0x%llx, offset %llu.", - page, ip->i_ino, fileoff); + "page discard on page "PTR_FMT", inode 0x%llx, pos %llu.", + folio, ip->i_ino, pos); error = xfs_bmap_punch_delalloc_range(ip, start_fsb, - i_blocks_per_page(inode, page) - pageoff_fsb); + i_blocks_per_folio(inode, folio) - pageoff_fsb); if (error && !xfs_is_shutdown(mp)) xfs_alert(mp, "page discard unable to remove delalloc mapping."); out_invalidate: - iomap_invalidatepage(page, pageoff, PAGE_SIZE - pageoff); + iomap_invalidate_folio(folio, offset, folio_size(folio) - offset); } static const struct iomap_writeback_ops xfs_writeback_ops = { .map_blocks = xfs_map_blocks, .prepare_ioend = xfs_prepare_ioend, - .discard_page = xfs_discard_page, + .discard_folio = xfs_discard_folio, }; STATIC int diff --git a/include/linux/iomap.h b/include/linux/iomap.h index 29491fb9c5ba..5ef5088dbbd8 100644 --- a/include/linux/iomap.h +++ b/include/linux/iomap.h @@ -285,7 +285,7 @@ struct iomap_writeback_ops { * Optional, allows the file system to discard state on a page where * we failed to submit any I/O. 
*/ - void (*discard_page)(struct page *page, loff_t fileoff); + void (*discard_folio)(struct folio *folio, loff_t pos); }; struct iomap_writepage_ctx { From patchwork Thu Dec 16 21:07:10 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12682645 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 588CAC433F5 for ; Thu, 16 Dec 2021 21:07:40 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S241663AbhLPVHh (ORCPT ); Thu, 16 Dec 2021 16:07:37 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53500 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S241577AbhLPVHX (ORCPT ); Thu, 16 Dec 2021 16:07:23 -0500 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 00A8AC06173E; Thu, 16 Dec 2021 13:07:23 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=o6Ral/RyPuui2KR6pkvTTtFUtEwCOaZkk4vnlc1Lvww=; b=ohqW8pHPacdsExZWfF9e8NGdaG PFbEOUlGkc+weiITf9RjPzgmx7mBFOX0iClwo0kkftK0hbSrK/3zwKZXc2HGXatOUNS4MTcMpysDm p7jduSwEQhFH2Leevtb+mgkB9LldOuXPaUp6n3Yo23hCq4zxICgeFCNwePsEJclcLCVrZiIhr+f7G DXj7S+0xtA2u7qLVj3XJzwXraJi0z3y/ThlUmkF1u2M8Ks607POVwZ8IH3wGTB22fQpu9ry/I5DoP zkpR8OgYA7IqBJVh8QvNzQFlkkai6qtR1bF/XxJmY5xE3o7pGhfEE1Fk6XV12KqEE4jT9L8xoS7EV ewHox/4g==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1mxxyD-00Fx51-AN; Thu, 16 Dec 2021 21:07:21 +0000 From: "Matthew Wilcox (Oracle)" To: "Darrick J. Wong" Cc: "Matthew Wilcox (Oracle)" , linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, Christoph Hellwig Subject: [PATCH v3 20/25] iomap: Simplify iomap_writepage_map() Date: Thu, 16 Dec 2021 21:07:10 +0000 Message-Id: <20211216210715.3801857-21-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211216210715.3801857-1-willy@infradead.org> References: <20211216210715.3801857-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-xfs@vger.kernel.org Rename end_offset to end_pos and file_offset to pos to match the rest of the file. Simplify the loop by calculating nblocks up front instead of each time around the loop. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig Reviewed-by: Darrick J. 
Wong --- fs/iomap/buffered-io.c | 21 ++++++++++----------- 1 file changed, 10 insertions(+), 11 deletions(-) diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index ceb7fbe4e7c9..20e7087aa75c 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -1307,37 +1307,36 @@ iomap_add_to_ioend(struct inode *inode, loff_t offset, struct page *page, static int iomap_writepage_map(struct iomap_writepage_ctx *wpc, struct writeback_control *wbc, struct inode *inode, - struct page *page, u64 end_offset) + struct page *page, u64 end_pos) { struct folio *folio = page_folio(page); struct iomap_page *iop = iomap_page_create(inode, folio); struct iomap_ioend *ioend, *next; unsigned len = i_blocksize(inode); - u64 file_offset; /* file offset of page */ + unsigned nblocks = i_blocks_per_folio(inode, folio); + u64 pos = folio_pos(folio); int error = 0, count = 0, i; LIST_HEAD(submit_list); WARN_ON_ONCE(iop && atomic_read(&iop->write_bytes_pending) != 0); /* - * Walk through the page to find areas to write back. If we run off the - * end of the current map or find the current map invalid, grab a new - * one. + * Walk through the folio to find areas to write back. If we + * run off the end of the current map or find the current map + * invalid, grab a new one. */ - for (i = 0, file_offset = page_offset(page); - i < (PAGE_SIZE >> inode->i_blkbits) && file_offset < end_offset; - i++, file_offset += len) { + for (i = 0; i < nblocks && pos < end_pos; i++, pos += len) { if (iop && !test_bit(i, iop->uptodate)) continue; - error = wpc->ops->map_blocks(wpc, inode, file_offset); + error = wpc->ops->map_blocks(wpc, inode, pos); if (error) break; if (WARN_ON_ONCE(wpc->iomap.type == IOMAP_INLINE)) continue; if (wpc->iomap.type == IOMAP_HOLE) continue; - iomap_add_to_ioend(inode, file_offset, page, iop, wpc, wbc, + iomap_add_to_ioend(inode, pos, page, iop, wpc, wbc, &submit_list); count++; } @@ -1361,7 +1360,7 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc, * now. 
From patchwork Thu Dec 16 21:07:11 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12682641
From: "Matthew Wilcox (Oracle)"
To: "Darrick J. Wong"
Cc: "Matthew Wilcox (Oracle)", linux-xfs@vger.kernel.org,
 linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
 Christoph Hellwig
Subject: [PATCH v3 21/25] iomap: Simplify iomap_do_writepage()
Date: Thu, 16 Dec 2021 21:07:11 +0000
Message-Id: <20211216210715.3801857-22-willy@infradead.org>
In-Reply-To: <20211216210715.3801857-1-willy@infradead.org>
X-Mailing-List: linux-xfs@vger.kernel.org

Rename end_offset to end_pos and offset_into_page to poff to match the
rest of the file.  Simplify the handling of the last page straddling
i_size by doing the EOF check based on the byte-granularity i_size
instead of converting to a pgoff prematurely.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Reviewed-by: Darrick J. Wong
---
 fs/iomap/buffered-io.c | 23 ++++++++++-------------
 1 file changed, 10 insertions(+), 13 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 20e7087aa75c..8bfdda8e5f1c 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -1408,9 +1408,7 @@ iomap_do_writepage(struct page *page, struct writeback_control *wbc, void *data)
 {
 	struct iomap_writepage_ctx *wpc = data;
 	struct inode *inode = page->mapping->host;
-	pgoff_t end_index;
-	u64 end_offset;
-	loff_t offset;
+	u64 end_pos, isize;

 	trace_iomap_writepage(inode, page_offset(page), PAGE_SIZE);

@@ -1441,11 +1439,9 @@ iomap_do_writepage(struct page *page, struct writeback_control *wbc, void *data)
 	 * |    desired writeback range    |      see else    |
 	 * ---------------------------------^------------------|
 	 */
-	offset = i_size_read(inode);
-	end_index = offset >> PAGE_SHIFT;
-	if (page->index < end_index)
-		end_offset = (loff_t)(page->index + 1) << PAGE_SHIFT;
-	else {
+	isize = i_size_read(inode);
+	end_pos = page_offset(page) + PAGE_SIZE;
+	if (end_pos > isize) {
 		/*
 		 * Check whether the page to write out is beyond or straddles
 		 * i_size or not.
@@ -1457,7 +1453,8 @@ iomap_do_writepage(struct page *page, struct writeback_control *wbc, void *data)
 		 * |				    |      Straddles     |
 		 * ---------------------------------^-----------|--------|
 		 */
-		unsigned offset_into_page = offset & (PAGE_SIZE - 1);
+		size_t poff = offset_in_page(isize);
+		pgoff_t end_index = isize >> PAGE_SHIFT;

 		/*
 		 * Skip the page if it's fully outside i_size, e.g. due to a
@@ -1477,7 +1474,7 @@ iomap_do_writepage(struct page *page, struct writeback_control *wbc, void *data)
 		 * offset is just equal to the EOF.
 		 */
 		if (page->index > end_index ||
-		    (page->index == end_index && offset_into_page == 0))
+		    (page->index == end_index && poff == 0))
 			goto redirty;

 		/*
 		 * The page straddles i_size.  It must be zeroed out on each
 		 * and every writepage invocation because it may be mmapped.
 		 * "A file is mapped in multiples of the page size.  For a file
 		 * that is not a multiple of the page size, the remaining
 		 * memory is zeroed when mapped, and writes to that region are
 		 * not written out to the file."
 		 */
-		zero_user_segment(page, offset_into_page, PAGE_SIZE);
+		zero_user_segment(page, poff, PAGE_SIZE);

 		/* Adjust the end_offset to the end of file */
-		end_offset = offset;
+		end_pos = isize;
 	}

-	return iomap_writepage_map(wpc, wbc, inode, page, end_offset);
+	return iomap_writepage_map(wpc, wbc, inode, page, end_pos);

 redirty:
 	redirty_page_for_writepage(wbc, page);
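The simplification is easiest to see in isolation: once end_pos and
i_size are both byte offsets, "does this page extend past EOF?" is a
single comparison with no intermediate page-index conversion.  A
standalone sketch, not the kernel function:

/*
 * Byte-granularity EOF check, illustrating the logic of the hunk
 * above.  page_offset() and PAGE_SIZE are the real kernel symbols;
 * the function itself is invented for illustration.
 */
static bool example_extends_past_eof(struct page *page, loff_t isize)
{
	u64 end_pos = page_offset(page) + PAGE_SIZE;

	return end_pos > isize;
}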
From patchwork Thu Dec 16 21:07:12 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12682661
From: "Matthew Wilcox (Oracle)"
To: "Darrick J. Wong"
Cc: "Matthew Wilcox (Oracle)", linux-xfs@vger.kernel.org,
 linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
 Christoph Hellwig
Subject: [PATCH v3 22/25] iomap: Convert iomap_add_to_ioend() to take a folio
Date: Thu, 16 Dec 2021 21:07:12 +0000
Message-Id: <20211216210715.3801857-23-willy@infradead.org>
In-Reply-To: <20211216210715.3801857-1-willy@infradead.org>
X-Mailing-List: linux-xfs@vger.kernel.org

We still iterate one block at a time, but now we call compound_head()
less often.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Reviewed-by: Darrick J. Wong
---
 fs/iomap/buffered-io.c | 70 ++++++++++++++++++++----------------------
 1 file changed, 34 insertions(+), 36 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 8bfdda8e5f1c..a36695db6f9d 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -1263,29 +1263,29 @@ iomap_can_add_to_ioend(struct iomap_writepage_ctx *wpc, loff_t offset,
  * first; otherwise finish off the current ioend and start another.
  */
 static void
-iomap_add_to_ioend(struct inode *inode, loff_t offset, struct page *page,
+iomap_add_to_ioend(struct inode *inode, loff_t pos, struct folio *folio,
 		struct iomap_page *iop, struct iomap_writepage_ctx *wpc,
 		struct writeback_control *wbc, struct list_head *iolist)
 {
-	sector_t sector = iomap_sector(&wpc->iomap, offset);
+	sector_t sector = iomap_sector(&wpc->iomap, pos);
 	unsigned len = i_blocksize(inode);
-	unsigned poff = offset & (PAGE_SIZE - 1);
+	size_t poff = offset_in_folio(folio, pos);

-	if (!wpc->ioend || !iomap_can_add_to_ioend(wpc, offset, sector)) {
+	if (!wpc->ioend || !iomap_can_add_to_ioend(wpc, pos, sector)) {
 		if (wpc->ioend)
 			list_add(&wpc->ioend->io_list, iolist);
-		wpc->ioend = iomap_alloc_ioend(inode, wpc, offset, sector, wbc);
+		wpc->ioend = iomap_alloc_ioend(inode, wpc, pos, sector, wbc);
 	}

-	if (bio_add_page(wpc->ioend->io_bio, page, len, poff) != len) {
+	if (!bio_add_folio(wpc->ioend->io_bio, folio, len, poff)) {
 		wpc->ioend->io_bio = iomap_chain_bio(wpc->ioend->io_bio);
-		__bio_add_page(wpc->ioend->io_bio, page, len, poff);
+		bio_add_folio(wpc->ioend->io_bio, folio, len, poff);
 	}

 	if (iop)
 		atomic_add(len, &iop->write_bytes_pending);
 	wpc->ioend->io_size += len;
-	wbc_account_cgroup_owner(wbc, page, len);
+	wbc_account_cgroup_owner(wbc, &folio->page, len);
 }

 /*
@@ -1307,9 +1307,8 @@ iomap_add_to_ioend(struct inode *inode, loff_t offset, struct page *page,
 static int
 iomap_writepage_map(struct iomap_writepage_ctx *wpc,
 		struct writeback_control *wbc, struct inode *inode,
-		struct page *page, u64 end_pos)
+		struct folio *folio, u64 end_pos)
 {
-	struct folio *folio = page_folio(page);
 	struct iomap_page *iop = iomap_page_create(inode, folio);
 	struct iomap_ioend *ioend, *next;
 	unsigned len = i_blocksize(inode);
@@ -1336,15 +1335,15 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc,
 			continue;
 		if (wpc->iomap.type == IOMAP_HOLE)
 			continue;
-		iomap_add_to_ioend(inode, pos, page, iop, wpc, wbc,
+		iomap_add_to_ioend(inode, pos, folio, iop, wpc, wbc,
 				&submit_list);
 		count++;
 	}

 	WARN_ON_ONCE(!wpc->ioend && !list_empty(&submit_list));
-	WARN_ON_ONCE(!PageLocked(page));
-	WARN_ON_ONCE(PageWriteback(page));
-	WARN_ON_ONCE(PageDirty(page));
+	WARN_ON_ONCE(!folio_test_locked(folio));
+	WARN_ON_ONCE(folio_test_writeback(folio));
+	WARN_ON_ONCE(folio_test_dirty(folio));

 	/*
 	 * We cannot cancel the ioend directly here on error.  We may have
@@ -1362,14 +1361,14 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc,
 		if (wpc->ops->discard_folio)
 			wpc->ops->discard_folio(folio, pos);
 		if (!count) {
-			ClearPageUptodate(page);
-			unlock_page(page);
+			folio_clear_uptodate(folio);
+			folio_unlock(folio);
 			goto done;
 		}
 	}

-	set_page_writeback(page);
-	unlock_page(page);
+	folio_start_writeback(folio);
+	folio_unlock(folio);

 	/*
 	 * Preserve the original error if there was one; catch
@@ -1390,9 +1389,9 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc,
 	 * with a partial page truncate on a sub-page block sized filesystem.
 	 */
 	if (!count)
-		end_page_writeback(page);
+		folio_end_writeback(folio);
 done:
-	mapping_set_error(page->mapping, error);
+	mapping_set_error(folio->mapping, error);
 	return error;
 }

@@ -1406,14 +1405,15 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc,
 static int
 iomap_do_writepage(struct page *page, struct writeback_control *wbc, void *data)
 {
+	struct folio *folio = page_folio(page);
 	struct iomap_writepage_ctx *wpc = data;
-	struct inode *inode = page->mapping->host;
+	struct inode *inode = folio->mapping->host;
 	u64 end_pos, isize;

-	trace_iomap_writepage(inode, page_offset(page), PAGE_SIZE);
+	trace_iomap_writepage(inode, folio_pos(folio), folio_size(folio));

 	/*
-	 * Refuse to write the page out if we're called from reclaim context.
+	 * Refuse to write the folio out if we're called from reclaim context.
 	 *
 	 * This avoids stack overflows when called from deeply used stacks in
 	 * random callers for direct reclaim or memcg reclaim.  We explicitly
@@ -1427,10 +1427,10 @@ iomap_do_writepage(struct page *page, struct writeback_control *wbc, void *data)
 		goto redirty;

 	/*
-	 * Is this page beyond the end of the file?
+	 * Is this folio beyond the end of the file?
 	 *
-	 * The page index is less than the end_index, adjust the end_offset
-	 * to the highest offset that this page should represent.
+	 * The folio index is less than the end_index, adjust the end_pos
+	 * to the highest offset that this folio should represent.
 	 * -----------------------------------------------------
 	 * |		file mapping		       | <EOF> |
 	 * -----------------------------------------------------
@@ -1440,7 +1440,7 @@ iomap_do_writepage(struct page *page, struct writeback_control *wbc, void *data)
 	 * ---------------------------------^------------------|
 	 */
 	isize = i_size_read(inode);
-	end_pos = page_offset(page) + PAGE_SIZE;
+	end_pos = folio_pos(folio) + folio_size(folio);
 	if (end_pos > isize) {
 		/*
 		 * Check whether the page to write out is beyond or straddles
 		 * i_size or not.
@@ -1453,7 +1453,7 @@ iomap_do_writepage(struct page *page, struct writeback_control *wbc, void *data)
 		 * |				    |      Straddles     |
 		 * ---------------------------------^-----------|--------|
 		 */
-		size_t poff = offset_in_page(isize);
+		size_t poff = offset_in_folio(folio, isize);
 		pgoff_t end_index = isize >> PAGE_SHIFT;

 		/*
@@ -1473,8 +1473,8 @@ iomap_do_writepage(struct page *page, struct writeback_control *wbc, void *data)
 		 * checking if the page is totally beyond i_size or if its
 		 * offset is just equal to the EOF.
 		 */
-		if (page->index > end_index ||
-		    (page->index == end_index && poff == 0))
+		if (folio->index > end_index ||
+		    (folio->index == end_index && poff == 0))
 			goto redirty;

 		/*
@@ -1485,17 +1485,15 @@ iomap_do_writepage(struct page *page, struct writeback_control *wbc, void *data)
 		 * memory is zeroed when mapped, and writes to that region are
 		 * not written out to the file."
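The bio handling above follows a simple retry pattern worth extracting
for clarity: attempt to append to the current bio, and if it is full,
chain a fresh one and append to that.  A sketch under the assumption
that example_chain_bio() stands in for the file-local iomap_chain_bio()
helper (not defined here); bio_add_folio() is the wrapper added earlier
in this series:

/*
 * Append one block's worth of a folio to a bio chain.  Illustration
 * of the caller pattern in the hunk above, not kernel code.
 */
static void example_append_block(struct bio **bio, struct folio *folio,
		size_t len, size_t poff)
{
	if (!bio_add_folio(*bio, folio, len, poff)) {
		*bio = example_chain_bio(*bio);
		/* Expected to succeed: the chained bio starts empty. */
		bio_add_folio(*bio, folio, len, poff);
	}
}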
 		 */
-		zero_user_segment(page, poff, PAGE_SIZE);
-
-		/* Adjust the end_offset to the end of file */
+		folio_zero_segment(folio, poff, folio_size(folio));
 		end_pos = isize;
 	}

-	return iomap_writepage_map(wpc, wbc, inode, page, end_pos);
+	return iomap_writepage_map(wpc, wbc, inode, folio, end_pos);

 redirty:
-	redirty_page_for_writepage(wbc, page);
-	unlock_page(page);
+	folio_redirty_for_writepage(wbc, folio);
+	folio_unlock(folio);
 	return 0;
 }

From patchwork Thu Dec 16 21:07:13 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12682665
From: "Matthew Wilcox (Oracle)"
To: "Darrick J. Wong"
Cc: "Matthew Wilcox (Oracle)", linux-xfs@vger.kernel.org,
 linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
 Christoph Hellwig
Subject: [PATCH v3 23/25] iomap: Convert iomap_migrate_page() to use folios
Date: Thu, 16 Dec 2021 21:07:13 +0000
Message-Id: <20211216210715.3801857-24-willy@infradead.org>
In-Reply-To: <20211216210715.3801857-1-willy@infradead.org>
X-Mailing-List: linux-xfs@vger.kernel.org

The arguments are still pages for now, but we can use folios internally
and cut out a lot of calls to compound_head().

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Reviewed-by: Darrick J. Wong
---
 fs/iomap/buffered-io.c | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index a36695db6f9d..b2f6e5991eb0 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -504,19 +504,21 @@ int
 iomap_migrate_page(struct address_space *mapping, struct page *newpage,
 		struct page *page, enum migrate_mode mode)
 {
+	struct folio *folio = page_folio(page);
+	struct folio *newfolio = page_folio(newpage);
 	int ret;

-	ret = migrate_page_move_mapping(mapping, newpage, page, 0);
+	ret = folio_migrate_mapping(mapping, newfolio, folio, 0);
 	if (ret != MIGRATEPAGE_SUCCESS)
 		return ret;

-	if (page_has_private(page))
-		attach_page_private(newpage, detach_page_private(page));
+	if (folio_test_private(folio))
+		folio_attach_private(newfolio, folio_detach_private(folio));

 	if (mode != MIGRATE_SYNC_NO_COPY)
-		migrate_page_copy(newpage, page);
+		folio_migrate_copy(newfolio, folio);
 	else
-		migrate_page_states(newpage, page);
+		folio_migrate_flags(newfolio, folio);
 	return MIGRATEPAGE_SUCCESS;
 }
 EXPORT_SYMBOL_GPL(iomap_migrate_page);
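The conversion idiom this patch applies is the same one used throughout
the series: resolve the page to its folio once at the top of the
function, then use folio_test_*() accessors, so compound_head() is not
recomputed by every PageFoo() call.  In isolation, and as a sketch only:

/*
 * One compound_head() at the top, folio operations thereafter.
 * The function itself is invented for illustration; page_folio()
 * and folio_test_private() are the real kernel helpers.
 */
static bool example_page_has_private(struct page *page)
{
	struct folio *folio = page_folio(page);

	return folio_test_private(folio);
}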
Wong" Cc: "Matthew Wilcox (Oracle)" , linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, Christoph Hellwig Subject: [PATCH v3 24/25] iomap: Support large folios in invalidatepage Date: Thu, 16 Dec 2021 21:07:14 +0000 Message-Id: <20211216210715.3801857-25-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211216210715.3801857-1-willy@infradead.org> References: <20211216210715.3801857-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-xfs@vger.kernel.org If we're punching a hole in a large folio, we need to remove the per-folio iomap data as the folio is about to be split and each page will need its own. If a dirty folio is only partially-uptodate, the iomap data contains the information about which blocks cannot be written back, so assert that a dirty folio is fully uptodate. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig Reviewed-by: Darrick J. Wong --- fs/iomap/buffered-io.c | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-) diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index b2f6e5991eb0..36ccb903db92 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -481,13 +481,18 @@ void iomap_invalidate_folio(struct folio *folio, size_t offset, size_t len) trace_iomap_invalidatepage(folio->mapping->host, offset, len); /* - * If we're invalidating the entire page, clear the dirty state from it - * and release it to avoid unnecessary buildup of the LRU. + * If we're invalidating the entire folio, clear the dirty state + * from it and release it to avoid unnecessary buildup of the LRU. */ if (offset == 0 && len == folio_size(folio)) { WARN_ON_ONCE(folio_test_writeback(folio)); folio_cancel_dirty(folio); iomap_page_release(folio); + } else if (folio_test_large(folio)) { + /* Must release the iop so the page can be split */ + WARN_ON_ONCE(!folio_test_uptodate(folio) && + folio_test_dirty(folio)); + iomap_page_release(folio); } } EXPORT_SYMBOL_GPL(iomap_invalidate_folio); From patchwork Thu Dec 16 21:07:15 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12682671 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5F1D5C4332F for ; Thu, 16 Dec 2021 21:08:22 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S241566AbhLPVIU (ORCPT ); Thu, 16 Dec 2021 16:08:20 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53518 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S241588AbhLPVHY (ORCPT ); Thu, 16 Dec 2021 16:07:24 -0500 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id DB40AC061751; Thu, 16 Dec 2021 13:07:23 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=C/aEEIUI8R1IaR4jIAxLPfTciBIfzrBXc1+qsDU9LWQ=; b=aBSQ2v/hS8CUP+5vmDxnEbfDm2 xNkZzKlEY251Gwp3szPLCYi1bdov1Nu8pYaqMxObQJoSWMp6K4j4MeBuhXIRue07oDRPP9wK6Y6lf 
From patchwork Thu Dec 16 21:07:15 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12682671
From: "Matthew Wilcox (Oracle)"
To: "Darrick J. Wong"
Cc: "Matthew Wilcox (Oracle)", linux-xfs@vger.kernel.org,
 linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
 Christoph Hellwig
Subject: [PATCH v3 25/25] xfs: Support large folios
Date: Thu, 16 Dec 2021 21:07:15 +0000
Message-Id: <20211216210715.3801857-26-willy@infradead.org>
In-Reply-To: <20211216210715.3801857-1-willy@infradead.org>
X-Mailing-List: linux-xfs@vger.kernel.org

Now that iomap has been converted, XFS is large folio safe.  Indicate
to the VFS that it can now create large folios for XFS.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Reviewed-by: Darrick J. Wong
Tested-by: Dave Chinner
Tested-by: Darrick J. Wong
---
 fs/xfs/xfs_icache.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c
index da4af2142a2b..cdc39f576ca1 100644
--- a/fs/xfs/xfs_icache.c
+++ b/fs/xfs/xfs_icache.c
@@ -87,6 +87,7 @@ xfs_inode_alloc(
 	/* VFS doesn't initialise i_mode or i_state! */
 	VFS_I(ip)->i_mode = 0;
 	VFS_I(ip)->i_state = 0;
+	mapping_set_large_folios(VFS_I(ip)->i_mapping);

 	XFS_STATS_INC(mp, vn_active);
 	ASSERT(atomic_read(&ip->i_pincount) == 0);
@@ -320,6 +321,7 @@ xfs_reinit_inode(
 	inode->i_rdev = dev;
 	inode->i_uid = uid;
 	inode->i_gid = gid;
+	mapping_set_large_folios(inode->i_mapping);
 	return error;
 }
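The opt-in is deliberately a one-line flag on the inode's mapping, set
wherever the VFS inode is (re)initialised.  How another filesystem might
do the same, mirroring the two XFS hunks above; example_fs_init_inode()
is hypothetical, while mapping_set_large_folios() is the real helper:

/*
 * Opt a mapping in to large folios at inode setup time.  Sketch of
 * the pattern the XFS patch follows, not code from any filesystem.
 */
static void example_fs_init_inode(struct inode *inode)
{
	mapping_set_large_folios(inode->i_mapping);
}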