From patchwork Mon Jul 19 18:39:45 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12386593
From: "Matthew Wilcox (Oracle)"
To: linux-fsdevel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-block@vger.kernel.org
Subject: [PATCH v15 01/17] block: Add bio_add_folio()
Date: Mon, 19 Jul 2021 19:39:45 +0100
Message-Id: <20210719184001.1750630-2-willy@infradead.org>
In-Reply-To: <20210719184001.1750630-1-willy@infradead.org>
References: <20210719184001.1750630-1-willy@infradead.org>
This is a thin wrapper around bio_add_page(). The main advantage here
is the documentation that the submitter can expect to see folios in the
completion handler, and that stupidly large folios are not supported.
It's not currently possible to allocate stupidly large folios, but if
it ever becomes possible, this function will fail gracefully instead of
doing I/O to the wrong bytes.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Darrick J. Wong
---
 block/bio.c         | 21 +++++++++++++++++++++
 include/linux/bio.h |  3 ++-
 2 files changed, 23 insertions(+), 1 deletion(-)

diff --git a/block/bio.c b/block/bio.c
index 1fab762e079b..c64e35548fb2 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -933,6 +933,27 @@ int bio_add_page(struct bio *bio, struct page *page,
 }
 EXPORT_SYMBOL(bio_add_page);
 
+/**
+ * bio_add_folio - Attempt to add part of a folio to a bio.
+ * @bio: Bio to add to.
+ * @folio: Folio to add.
+ * @len: How many bytes from the folio to add.
+ * @off: First byte in this folio to add.
+ *
+ * Always uses the head page of the folio in the bio. If a submitter only
+ * uses bio_add_folio(), it can count on never seeing tail pages in the
+ * completion routine. BIOs do not support folios that are 4GiB or larger.
+ *
+ * Return: The number of bytes from this folio added to the bio.
+ */
+size_t bio_add_folio(struct bio *bio, struct folio *folio, size_t len,
+		size_t off)
+{
+	if (len > UINT_MAX || off > UINT_MAX)
+		return 0;
+	return bio_add_page(bio, &folio->page, len, off);
+}
+
 void bio_release_pages(struct bio *bio, bool mark_dirty)
 {
 	struct bvec_iter_all iter_all;
diff --git a/include/linux/bio.h b/include/linux/bio.h
index 2203b686e1f0..ade93e2de6a1 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -462,7 +462,8 @@ extern void bio_uninit(struct bio *);
 extern void bio_reset(struct bio *);
 void bio_chain(struct bio *, struct bio *);
 
-extern int bio_add_page(struct bio *, struct page *, unsigned int,unsigned int);
+int bio_add_page(struct bio *, struct page *, unsigned len, unsigned off);
+size_t bio_add_folio(struct bio *, struct folio *, size_t len, size_t off);
 extern int bio_add_pc_page(struct request_queue *, struct bio *,
 		struct page *, unsigned int, unsigned int);
 int bio_add_zone_append_page(struct bio *bio, struct page *page,
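To make the calling convention above concrete, here is a minimal, hedged sketch of a submission path built on the new helper. It is not part of the patch: example_read_folio(), its parameters, and the GFP_NOFS/single-segment choices are illustrative assumptions; only bio_add_folio() itself comes from the change above, and the return-value check relies on its documented behaviour of returning the number of bytes added.

#include <linux/bio.h>

/*
 * Illustrative sketch only -- not part of this patch. Submit a read of
 * one folio. The caller, device and sector are assumed; error handling
 * is elided.
 */
static void example_read_folio(struct block_device *bdev, sector_t sector,
		struct folio *folio, bio_end_io_t *end_io)
{
	struct bio *bio = bio_alloc(GFP_NOFS, 1);

	bio_set_dev(bio, bdev);
	bio->bi_iter.bi_sector = sector;
	bio->bi_opf = REQ_OP_READ;
	bio->bi_end_io = end_io;

	/*
	 * bio_add_folio() returns the number of bytes added; 0 means the
	 * folio would not fit (e.g. a folio of 4GiB or more).
	 */
	if (bio_add_folio(bio, folio, folio_size(folio), 0) !=
	    folio_size(folio)) {
		bio_put(bio);
		return;
	}
	submit_bio(bio);
}

Pairing a submitter like this with the bio_for_each_folio_all() iterator added in the next patch is what lets the completion handler see whole folios rather than tail pages.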
From patchwork Mon Jul 19 18:39:46 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12386595
From: "Matthew Wilcox (Oracle)"
To: linux-fsdevel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-block@vger.kernel.org
Subject: [PATCH v15 02/17] block: Add bio_for_each_folio_all()
Date: Mon, 19 Jul 2021 19:39:46 +0100
Message-Id: <20210719184001.1750630-3-willy@infradead.org>
In-Reply-To: <20210719184001.1750630-1-willy@infradead.org>
References: <20210719184001.1750630-1-willy@infradead.org>

Allow callers to iterate over each folio instead of each page. The bio
need not have been constructed using folios originally.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Darrick J.
Wong Reported-by: kernel test robot Reported-by: kernel test robot Reported-by: kernel test robot --- Documentation/core-api/kernel-api.rst | 1 + include/linux/bio.h | 53 ++++++++++++++++++++++++++- 2 files changed, 53 insertions(+), 1 deletion(-) diff --git a/Documentation/core-api/kernel-api.rst b/Documentation/core-api/kernel-api.rst index 2a7444e3a4c2..b804605f705e 100644 --- a/Documentation/core-api/kernel-api.rst +++ b/Documentation/core-api/kernel-api.rst @@ -279,6 +279,7 @@ Accounting Framework Block Devices ============= +.. kernel-doc:: include/linux/bio.h .. kernel-doc:: block/blk-core.c :export: diff --git a/include/linux/bio.h b/include/linux/bio.h index ade93e2de6a1..a90a79ad7bd1 100644 --- a/include/linux/bio.h +++ b/include/linux/bio.h @@ -189,7 +189,7 @@ static inline void bio_advance_iter_single(const struct bio *bio, */ #define bio_for_each_bvec_all(bvl, bio, i) \ for (i = 0, bvl = bio_first_bvec_all(bio); \ - i < (bio)->bi_vcnt; i++, bvl++) \ + i < (bio)->bi_vcnt; i++, bvl++) #define bio_iter_last(bvec, iter) ((iter).bi_size == (bvec).bv_len) @@ -314,6 +314,57 @@ static inline struct bio_vec *bio_last_bvec_all(struct bio *bio) return &bio->bi_io_vec[bio->bi_vcnt - 1]; } +/** + * struct folio_iter - State for iterating all folios in a bio. + * @folio: The current folio we're iterating. NULL after the last folio. + * @offset: The byte offset within the current folio. + * @length: The number of bytes in this iteration (will not cross folio + * boundary). + */ +struct folio_iter { + struct folio *folio; + size_t offset; + size_t length; + /* private: for use by the iterator */ + size_t _seg_count; + int _i; +}; + +static inline +void bio_first_folio(struct folio_iter *fi, struct bio *bio, int i) +{ + struct bio_vec *bvec = bio_first_bvec_all(bio) + i; + + fi->folio = page_folio(bvec->bv_page); + fi->offset = bvec->bv_offset + + PAGE_SIZE * (bvec->bv_page - &fi->folio->page); + fi->_seg_count = bvec->bv_len; + fi->length = min(folio_size(fi->folio) - fi->offset, fi->_seg_count); + fi->_i = i; +} + +static inline void bio_next_folio(struct folio_iter *fi, struct bio *bio) +{ + fi->_seg_count -= fi->length; + if (fi->_seg_count) { + fi->folio = folio_next(fi->folio); + fi->offset = 0; + fi->length = min(folio_size(fi->folio), fi->_seg_count); + } else if (fi->_i + 1 < bio->bi_vcnt) { + bio_first_folio(fi, bio, fi->_i + 1); + } else { + fi->folio = NULL; + } +} + +/** + * bio_for_each_folio_all - Iterate over each folio in a bio. + * @fi: struct folio_iter which is updated for each folio. + * @bio: struct bio to iterate over. 
+ */ +#define bio_for_each_folio_all(fi, bio) \ + for (bio_first_folio(&fi, bio, 0); fi.folio; bio_next_folio(&fi, bio)) + enum bip_flags { BIP_BLOCK_INTEGRITY = 1 << 0, /* block layer owns integrity data */ BIP_MAPPED_INTEGRITY = 1 << 1, /* ref tag has been remapped */ From patchwork Mon Jul 19 18:39:47 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12386597 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0692EC07E95 for ; Mon, 19 Jul 2021 18:42:39 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 9BC4861029 for ; Mon, 19 Jul 2021 18:42:38 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 9BC4861029 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 43BE08D00F4; Mon, 19 Jul 2021 14:42:39 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 3F71F8D00EC; Mon, 19 Jul 2021 14:42:39 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 28C488D00F4; Mon, 19 Jul 2021 14:42:39 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0218.hostedemail.com [216.40.44.218]) by kanga.kvack.org (Postfix) with ESMTP id 0121B8D00EC for ; Mon, 19 Jul 2021 14:42:38 -0400 (EDT) Received: from smtpin22.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id 91C908248D52 for ; Mon, 19 Jul 2021 18:42:37 +0000 (UTC) X-FDA: 78380208354.22.3BEC062 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf04.hostedemail.com (Postfix) with ESMTP id EFFBE5000097 for ; Mon, 19 Jul 2021 18:42:36 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=EgqZeV+YXoPwC3VyWrO7ZM46UKPgEwS7pK1czp7B0BA=; b=tfu7I2yHFOHf8awsTxvo5f++kx dINVZ/N6uYDnoqiJVwhlku5qlhqjRTFCDBZ70qH8G/9lsTWtpvPTViL338LYYFuRf9gHI680GL458 b51zrdYW9KX8h+ZUW8NVEPLRA2I/gFhE9VSquXorCJ6lQPKpJcUAywEOOzScQx+GXqQtkrVFe4sjI yLBA/2XCSwp/O68IjtVFQTYfvAOyKbOVFTne3NiKWquuRxVD8lVip4/8b7xoTpWRZt0gqVEYZl6Ia EBfuOVsRAFcDb0aqmoWgIyb8OnIXI/cZieT6hTujGFYfuCw/XRxLJEDM9QdN3uHRrHzK8ZJzglpgB k57w7W0Q==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1m5YCr-007LWK-KZ; Mon, 19 Jul 2021 18:41:42 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-block@vger.kernel.org, "Darrick J . 
Wong" Subject: [PATCH v15 03/17] iomap: Convert to_iomap_page to take a folio Date: Mon, 19 Jul 2021 19:39:47 +0100 Message-Id: <20210719184001.1750630-4-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210719184001.1750630-1-willy@infradead.org> References: <20210719184001.1750630-1-willy@infradead.org> MIME-Version: 1.0 Authentication-Results: imf04.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=tfu7I2yH; spf=none (imf04.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-Stat-Signature: dtkmqw795k9e31njoam4cfafw69wh8ab X-Rspamd-Queue-Id: EFFBE5000097 X-Rspamd-Server: rspam01 X-HE-Tag: 1626720156-919233 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: The big comment about only using a head page can go away now that it takes a folio argument. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Darrick J. Wong Reviewed-by: Christoph Hellwig --- fs/iomap/buffered-io.c | 32 +++++++++++++++----------------- 1 file changed, 15 insertions(+), 17 deletions(-) diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index 87ccb3438bec..d4870a2691f9 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -22,8 +22,8 @@ #include "../internal.h" /* - * Structure allocated for each page or THP when block size < page size - * to track sub-page uptodate status and I/O completions. + * Structure allocated for each folio when block size < folio size + * to track sub-folio uptodate status and I/O completions. */ struct iomap_page { atomic_t read_bytes_pending; @@ -32,17 +32,10 @@ struct iomap_page { unsigned long uptodate[]; }; -static inline struct iomap_page *to_iomap_page(struct page *page) +static inline struct iomap_page *to_iomap_page(struct folio *folio) { - /* - * per-block data is stored in the head page. Callers should - * not be dealing with tail pages (and if they are, they can - * call thp_head() first. 
- */ - VM_BUG_ON_PGFLAGS(PageTail(page), page); - - if (page_has_private(page)) - return (struct iomap_page *)page_private(page); + if (folio_test_private(folio)) + return folio_get_private(folio); return NULL; } @@ -51,7 +44,8 @@ static struct bio_set iomap_ioend_bioset; static struct iomap_page * iomap_page_create(struct inode *inode, struct page *page) { - struct iomap_page *iop = to_iomap_page(page); + struct folio *folio = page_folio(page); + struct iomap_page *iop = to_iomap_page(folio); unsigned int nr_blocks = i_blocks_per_page(inode, page); if (iop || nr_blocks <= 1) @@ -144,7 +138,8 @@ iomap_adjust_read_range(struct inode *inode, struct iomap_page *iop, static void iomap_iop_set_range_uptodate(struct page *page, unsigned off, unsigned len) { - struct iomap_page *iop = to_iomap_page(page); + struct folio *folio = page_folio(page); + struct iomap_page *iop = to_iomap_page(folio); struct inode *inode = page->mapping->host; unsigned first = off >> inode->i_blkbits; unsigned last = (off + len - 1) >> inode->i_blkbits; @@ -173,7 +168,8 @@ static void iomap_read_page_end_io(struct bio_vec *bvec, int error) { struct page *page = bvec->bv_page; - struct iomap_page *iop = to_iomap_page(page); + struct folio *folio = page_folio(page); + struct iomap_page *iop = to_iomap_page(folio); if (unlikely(error)) { ClearPageUptodate(page); @@ -435,7 +431,8 @@ int iomap_is_partially_uptodate(struct page *page, unsigned long from, unsigned long count) { - struct iomap_page *iop = to_iomap_page(page); + struct folio *folio = page_folio(page); + struct iomap_page *iop = to_iomap_page(folio); struct inode *inode = page->mapping->host; unsigned len, first, last; unsigned i; @@ -1012,7 +1009,8 @@ static void iomap_finish_page_writeback(struct inode *inode, struct page *page, int error, unsigned int len) { - struct iomap_page *iop = to_iomap_page(page); + struct folio *folio = page_folio(page); + struct iomap_page *iop = to_iomap_page(folio); if (error) { SetPageError(page); From patchwork Mon Jul 19 18:39:48 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12386643 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 04A87C07E9B for ; Mon, 19 Jul 2021 19:10:36 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id A6C346109E for ; Mon, 19 Jul 2021 19:10:35 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org A6C346109E Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 54E186B018C; Mon, 19 Jul 2021 15:10:36 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 4FD048D00EC; Mon, 19 Jul 2021 15:10:36 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 3EC596B018E; Mon, 19 Jul 2021 15:10:36 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org 
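Before the next patch, a brief illustration of the pattern the to_iomap_page() conversion above establishes: callers that still hold a struct page resolve the folio first, so to_iomap_page() never sees a tail page. This is a hedged sketch rather than code from the series; example_has_sub_folio_state() is a hypothetical helper that would have to live in fs/iomap/buffered-io.c, where to_iomap_page() is defined.

/*
 * Illustrative sketch only, not from the series. A caller that still
 * has a struct page resolves the folio first, so to_iomap_page() never
 * sees a tail page.
 */
static bool example_has_sub_folio_state(struct page *page)
{
	struct folio *folio = page_folio(page);
	struct iomap_page *iop = to_iomap_page(folio);

	/*
	 * An iomap_page is only attached when block size < folio size,
	 * i.e. when sub-folio uptodate state and I/O completions are
	 * tracked separately.
	 */
	return iop != NULL;
}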
From: "Matthew Wilcox (Oracle)"
To: linux-fsdevel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-block@vger.kernel.org, "Darrick J . Wong"
Subject: [PATCH v15 04/17] iomap: Convert iomap_page_create to take a folio
Date: Mon, 19 Jul 2021 19:39:48 +0100
Message-Id: <20210719184001.1750630-5-willy@infradead.org>
In-Reply-To: <20210719184001.1750630-1-willy@infradead.org>
References: <20210719184001.1750630-1-willy@infradead.org>

This function already assumed it was being passed a head page, so just
formalise that.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Darrick J.
Wong Reviewed-by: Christoph Hellwig --- fs/iomap/buffered-io.c | 18 ++++++++++-------- 1 file changed, 10 insertions(+), 8 deletions(-) diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index d4870a2691f9..ec8bdc0c63df 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -42,11 +42,10 @@ static inline struct iomap_page *to_iomap_page(struct folio *folio) static struct bio_set iomap_ioend_bioset; static struct iomap_page * -iomap_page_create(struct inode *inode, struct page *page) +iomap_page_create(struct inode *inode, struct folio *folio) { - struct folio *folio = page_folio(page); struct iomap_page *iop = to_iomap_page(folio); - unsigned int nr_blocks = i_blocks_per_page(inode, page); + unsigned int nr_blocks = i_blocks_per_folio(inode, folio); if (iop || nr_blocks <= 1) return iop; @@ -54,9 +53,9 @@ iomap_page_create(struct inode *inode, struct page *page) iop = kzalloc(struct_size(iop, uptodate, BITS_TO_LONGS(nr_blocks)), GFP_NOFS | __GFP_NOFAIL); spin_lock_init(&iop->uptodate_lock); - if (PageUptodate(page)) + if (folio_test_uptodate(folio)) bitmap_fill(iop->uptodate, nr_blocks); - attach_page_private(page, iop); + folio_attach_private(folio, iop); return iop; } @@ -236,6 +235,7 @@ iomap_readpage_actor(struct inode *inode, loff_t pos, loff_t length, void *data, { struct iomap_readpage_ctx *ctx = data; struct page *page = ctx->cur_page; + struct folio *folio = page_folio(page); struct iomap_page *iop; bool same_page = false, is_contig = false; loff_t orig_pos = pos; @@ -249,7 +249,7 @@ iomap_readpage_actor(struct inode *inode, loff_t pos, loff_t length, void *data, } /* zero post-eof blocks as the page may be mapped */ - iop = iomap_page_create(inode, page); + iop = iomap_page_create(inode, folio); iomap_adjust_read_range(inode, iop, &pos, length, &poff, &plen); if (plen == 0) goto done; @@ -549,7 +549,8 @@ static int __iomap_write_begin(struct inode *inode, loff_t pos, unsigned len, int flags, struct page *page, struct iomap *srcmap) { - struct iomap_page *iop = iomap_page_create(inode, page); + struct folio *folio = page_folio(page); + struct iomap_page *iop = iomap_page_create(inode, folio); loff_t block_size = i_blocksize(inode); loff_t block_start = round_down(pos, block_size); loff_t block_end = round_up(pos + len, block_size); @@ -1303,7 +1304,8 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc, struct writeback_control *wbc, struct inode *inode, struct page *page, u64 end_offset) { - struct iomap_page *iop = iomap_page_create(inode, page); + struct folio *folio = page_folio(page); + struct iomap_page *iop = iomap_page_create(inode, folio); struct iomap_ioend *ioend, *next; unsigned len = i_blocksize(inode); u64 file_offset; /* file offset of page */ From patchwork Mon Jul 19 18:39:49 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12386599 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id E59E3C07E9B for ; Mon, 19 Jul 2021 18:44:27 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org 
[205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 847A561029 for ; Mon, 19 Jul 2021 18:44:27 +0000 (UTC)
From: "Matthew Wilcox (Oracle)"
To: linux-fsdevel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-block@vger.kernel.org, "Darrick J . Wong"
Subject: [PATCH v15 05/17] iomap: Convert iomap_page_release to take a folio
Date: Mon, 19 Jul 2021 19:39:49 +0100
Message-Id: <20210719184001.1750630-6-willy@infradead.org>
In-Reply-To: <20210719184001.1750630-1-willy@infradead.org>
References: <20210719184001.1750630-1-willy@infradead.org>

iomap_page_release() was also assuming that it was being passed a head
page.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Darrick J.
Wong Reviewed-by: Christoph Hellwig --- fs/iomap/buffered-io.c | 18 +++++++++++------- 1 file changed, 11 insertions(+), 7 deletions(-) diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index ec8bdc0c63df..83eb5fdcbe05 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -59,18 +59,18 @@ iomap_page_create(struct inode *inode, struct folio *folio) return iop; } -static void -iomap_page_release(struct page *page) +static void iomap_page_release(struct folio *folio) { - struct iomap_page *iop = detach_page_private(page); - unsigned int nr_blocks = i_blocks_per_page(page->mapping->host, page); + struct iomap_page *iop = folio_detach_private(folio); + unsigned int nr_blocks = i_blocks_per_folio(folio->mapping->host, + folio); if (!iop) return; WARN_ON_ONCE(atomic_read(&iop->read_bytes_pending)); WARN_ON_ONCE(atomic_read(&iop->write_bytes_pending)); WARN_ON_ONCE(bitmap_full(iop->uptodate, nr_blocks) != - PageUptodate(page)); + folio_test_uptodate(folio)); kfree(iop); } @@ -458,6 +458,8 @@ EXPORT_SYMBOL_GPL(iomap_is_partially_uptodate); int iomap_releasepage(struct page *page, gfp_t gfp_mask) { + struct folio *folio = page_folio(page); + trace_iomap_releasepage(page->mapping->host, page_offset(page), PAGE_SIZE); @@ -468,7 +470,7 @@ iomap_releasepage(struct page *page, gfp_t gfp_mask) */ if (PageDirty(page) || PageWriteback(page)) return 0; - iomap_page_release(page); + iomap_page_release(folio); return 1; } EXPORT_SYMBOL_GPL(iomap_releasepage); @@ -476,6 +478,8 @@ EXPORT_SYMBOL_GPL(iomap_releasepage); void iomap_invalidatepage(struct page *page, unsigned int offset, unsigned int len) { + struct folio *folio = page_folio(page); + trace_iomap_invalidatepage(page->mapping->host, offset, len); /* @@ -485,7 +489,7 @@ iomap_invalidatepage(struct page *page, unsigned int offset, unsigned int len) if (offset == 0 && len == PAGE_SIZE) { WARN_ON_ONCE(PageWriteback(page)); cancel_dirty_page(page); - iomap_page_release(page); + iomap_page_release(folio); } } EXPORT_SYMBOL_GPL(iomap_invalidatepage); From patchwork Mon Jul 19 18:39:50 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12386645 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5B594C07E95 for ; Mon, 19 Jul 2021 19:27:02 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id BD07E610F7 for ; Mon, 19 Jul 2021 19:27:00 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org BD07E610F7 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 4C1918D00F4; Mon, 19 Jul 2021 15:27:01 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 472408D00EC; Mon, 19 Jul 2021 15:27:01 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 338F28D00F4; Mon, 19 Jul 2021 15:27:01 
-0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0121.hostedemail.com [216.40.44.121]) by kanga.kvack.org (Postfix) with ESMTP id 1283F8D00EC for ; Mon, 19 Jul 2021 15:27:01 -0400 (EDT) Received: from smtpin01.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id 9B276184D8F89 for ; Mon, 19 Jul 2021 19:26:59 +0000 (UTC) X-FDA: 78380320158.01.150BD51 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf22.hostedemail.com (Postfix) with ESMTP id 555621917 for ; Mon, 19 Jul 2021 19:26:59 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=IcqJM902sWDHlXXt8KHS5wivXFgGLpkF/GJJNHr6Ztg=; b=YfcIIN675oN36zD8aZlB85W3OM B00FvdJea0x80Vd9OzW1h4EaeRnH1eeQ50ZgdbuAt+jU5ORHWYK3xIc9Fxd6lPtgrSFBhJ8jb3vbX X8HJ1URooNyjHvlx/5F0maLeMyd/ZARrDJCQMX1eDaxK6umdwoAbtyBbZtVc8cSCgZo8zWF/R7BfB MFAjxYPGsjBd2+7NjiIudxpWwcN8p1OHPagMdg7kRS4SmUiOylH2i/RGCSoPjirirILPfVn6w3ygz H5cI3ebTi+a35NDWbeFbrT8CY/Fb5aTPJvEg8vxFsT8sl2r/u0g0VpOhkmTT7Y1UAqNzF3DCKbWKX qI5Hl0Cg==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1m5YEq-007LaI-FS; Mon, 19 Jul 2021 18:43:48 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-block@vger.kernel.org Subject: [PATCH v15 06/17] iomap: Convert iomap_releasepage to use a folio Date: Mon, 19 Jul 2021 19:39:50 +0100 Message-Id: <20210719184001.1750630-7-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210719184001.1750630-1-willy@infradead.org> References: <20210719184001.1750630-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Server: rspam06 X-Rspamd-Queue-Id: 555621917 X-Stat-Signature: 1zmxj5j5epcr38ts9dw8i9bbkk7cuqmx Authentication-Results: imf22.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=YfcIIN67; spf=none (imf22.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-HE-Tag: 1626722819-26877 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: This is an address_space operation, so its argument must remain as a struct page, but we can use a folio internally. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig Reviewed-by: Darrick J. Wong --- fs/iomap/buffered-io.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index 83eb5fdcbe05..715b25a1c1e6 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -460,15 +460,15 @@ iomap_releasepage(struct page *page, gfp_t gfp_mask) { struct folio *folio = page_folio(page); - trace_iomap_releasepage(page->mapping->host, page_offset(page), - PAGE_SIZE); + trace_iomap_releasepage(folio->mapping->host, folio_pos(folio), + folio_size(folio)); /* * mm accommodates an old ext3 case where clean pages might not have had * the dirty bit cleared. Thus, it can send actual dirty pages to * ->releasepage() via shrink_active_list(), skip those here. 
*/ - if (PageDirty(page) || PageWriteback(page)) + if (folio_test_dirty(folio) || folio_test_writeback(folio)) return 0; iomap_page_release(folio); return 1; From patchwork Mon Jul 19 18:39:51 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12386601 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5AE25C07E95 for ; Mon, 19 Jul 2021 18:46:11 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id E4F69610F7 for ; Mon, 19 Jul 2021 18:46:09 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org E4F69610F7 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 87C1E8D00F6; Mon, 19 Jul 2021 14:46:10 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 82BE68D00EC; Mon, 19 Jul 2021 14:46:10 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 6F3838D00F6; Mon, 19 Jul 2021 14:46:10 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0189.hostedemail.com [216.40.44.189]) by kanga.kvack.org (Postfix) with ESMTP id 45F968D00EC for ; Mon, 19 Jul 2021 14:46:10 -0400 (EDT) Received: from smtpin23.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id C331C181A3493 for ; Mon, 19 Jul 2021 18:46:08 +0000 (UTC) X-FDA: 78380217216.23.393C17D Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf14.hostedemail.com (Postfix) with ESMTP id 1F748600198E for ; Mon, 19 Jul 2021 18:46:07 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=bTUmV80VA1p5trYWH6euekPKFgfy6gnhL3vAo4kaEiY=; b=L17h48Aff2X0T5+yXGvQ5EpPfL iYq+dwdWkGKu79sCL1s2a5kVYv5y9Jo/xg1JrAWzujTmhHtm3bjfUezXJfsl0qmfejabSzR7KkV6W p9zXTfS5TYtDr5K7MUr17qZEMU2mx+k8Z7SKFx3iWMCqt0ueEeDNKf/kiaL+yLM2zdN0QDkMKuZUL iZ+kOzaj6ufHI6tIzrxYkjpSxAfvPwSRJL2s1fc88C5ehSCMnYrfvdmusluisG/TfwyhzLz4rTUcQ 2q4ipDx0blp3Q5m+jmsgbnoTymVJMTzOqYLCh4leCHlgi7ADX0Ci/+XGqgQz/b44mpTLeBaooO2aw /S/4nwVA==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1m5YFc-007LeE-Td; Mon, 19 Jul 2021 18:44:40 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-block@vger.kernel.org Subject: [PATCH v15 07/17] iomap: Convert iomap_invalidatepage to use a folio Date: Mon, 19 Jul 2021 19:39:51 +0100 Message-Id: <20210719184001.1750630-8-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210719184001.1750630-1-willy@infradead.org> References: 
<20210719184001.1750630-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Server: rspam04 X-Rspamd-Queue-Id: 1F748600198E X-Stat-Signature: 99wjwh6mxz5jdwfg7cusj9egkf4b9zqd Authentication-Results: imf14.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=L17h48Af; dmarc=none; spf=none (imf14.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org X-HE-Tag: 1626720367-446517 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: This is an address_space operation, so its argument must remain as a struct page, but we can use a folio internally. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig Reviewed-by: Darrick J. Wong --- fs/iomap/buffered-io.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index 715b25a1c1e6..0d7b6ef4c5cc 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -480,15 +480,15 @@ iomap_invalidatepage(struct page *page, unsigned int offset, unsigned int len) { struct folio *folio = page_folio(page); - trace_iomap_invalidatepage(page->mapping->host, offset, len); + trace_iomap_invalidatepage(folio->mapping->host, offset, len); /* * If we are invalidating the entire page, clear the dirty state from it * and release it to avoid unnecessary buildup of the LRU. */ - if (offset == 0 && len == PAGE_SIZE) { - WARN_ON_ONCE(PageWriteback(page)); - cancel_dirty_page(page); + if (offset == 0 && len == folio_size(folio)) { + WARN_ON_ONCE(folio_test_writeback(folio)); + folio_cancel_dirty(folio); iomap_page_release(folio); } } From patchwork Mon Jul 19 18:39:52 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12386603 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 00624C07E95 for ; Mon, 19 Jul 2021 18:46:53 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 65A3E610D0 for ; Mon, 19 Jul 2021 18:46:52 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 65A3E610D0 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 0B27D8D00F7; Mon, 19 Jul 2021 14:46:53 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 08A118D00EC; Mon, 19 Jul 2021 14:46:53 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id E6CC28D00F7; Mon, 19 Jul 2021 14:46:52 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0017.hostedemail.com [216.40.44.17]) by kanga.kvack.org (Postfix) with ESMTP id C73A68D00EC for ; Mon, 19 Jul 2021 14:46:52 -0400 (EDT) Received: from smtpin12.hostedemail.com 
(10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id 60AD9202FF for ; Mon, 19 Jul 2021 18:46:51 +0000 (UTC)
From: "Matthew Wilcox (Oracle)"
To: linux-fsdevel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-block@vger.kernel.org, "Darrick J . Wong"
Subject: [PATCH v15 08/17] iomap: Pass the iomap_page into iomap_set_range_uptodate
Date: Mon, 19 Jul 2021 19:39:52 +0100
Message-Id: <20210719184001.1750630-9-willy@infradead.org>
In-Reply-To: <20210719184001.1750630-1-willy@infradead.org>
References: <20210719184001.1750630-1-willy@infradead.org>

All but one caller already has the iomap_page, and we can avoid getting
it again.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Darrick J.
Wong --- fs/iomap/buffered-io.c | 25 +++++++++++++------------ 1 file changed, 13 insertions(+), 12 deletions(-) diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index 0d7b6ef4c5cc..62d8ebcda429 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -134,11 +134,9 @@ iomap_adjust_read_range(struct inode *inode, struct iomap_page *iop, *lenp = plen; } -static void -iomap_iop_set_range_uptodate(struct page *page, unsigned off, unsigned len) +static void iomap_iop_set_range_uptodate(struct page *page, + struct iomap_page *iop, unsigned off, unsigned len) { - struct folio *folio = page_folio(page); - struct iomap_page *iop = to_iomap_page(folio); struct inode *inode = page->mapping->host; unsigned first = off >> inode->i_blkbits; unsigned last = (off + len - 1) >> inode->i_blkbits; @@ -151,14 +149,14 @@ iomap_iop_set_range_uptodate(struct page *page, unsigned off, unsigned len) spin_unlock_irqrestore(&iop->uptodate_lock, flags); } -static void -iomap_set_range_uptodate(struct page *page, unsigned off, unsigned len) +static void iomap_set_range_uptodate(struct page *page, + struct iomap_page *iop, unsigned off, unsigned len) { if (PageError(page)) return; - if (page_has_private(page)) - iomap_iop_set_range_uptodate(page, off, len); + if (iop) + iomap_iop_set_range_uptodate(page, iop, off, len); else SetPageUptodate(page); } @@ -174,7 +172,8 @@ iomap_read_page_end_io(struct bio_vec *bvec, int error) ClearPageUptodate(page); SetPageError(page); } else { - iomap_set_range_uptodate(page, bvec->bv_offset, bvec->bv_len); + iomap_set_range_uptodate(page, iop, bvec->bv_offset, + bvec->bv_len); } if (!iop || atomic_sub_and_test(bvec->bv_len, &iop->read_bytes_pending)) @@ -256,7 +255,7 @@ iomap_readpage_actor(struct inode *inode, loff_t pos, loff_t length, void *data, if (iomap_block_needs_zeroing(inode, iomap, pos)) { zero_user(page, poff, plen); - iomap_set_range_uptodate(page, poff, plen); + iomap_set_range_uptodate(page, iop, poff, plen); goto done; } @@ -585,7 +584,7 @@ __iomap_write_begin(struct inode *inode, loff_t pos, unsigned len, int flags, if (status) return status; } - iomap_set_range_uptodate(page, poff, plen); + iomap_set_range_uptodate(page, iop, poff, plen); } while ((block_start += plen) < block_end); return 0; @@ -647,6 +646,8 @@ iomap_write_begin(struct inode *inode, loff_t pos, unsigned len, unsigned flags, static size_t __iomap_write_end(struct inode *inode, loff_t pos, size_t len, size_t copied, struct page *page) { + struct folio *folio = page_folio(page); + struct iomap_page *iop = to_iomap_page(folio); flush_dcache_page(page); /* @@ -662,7 +663,7 @@ static size_t __iomap_write_end(struct inode *inode, loff_t pos, size_t len, */ if (unlikely(copied < len && !PageUptodate(page))) return 0; - iomap_set_range_uptodate(page, offset_in_page(pos), len); + iomap_set_range_uptodate(page, iop, offset_in_page(pos), len); __set_page_dirty_nobuffers(page); return copied; } From patchwork Mon Jul 19 18:39:53 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12386641 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from 
mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 60B54C07E95 for ; Mon, 19 Jul 2021 19:09:31 +0000 (UTC)
From: "Matthew Wilcox (Oracle)"
To: linux-fsdevel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-block@vger.kernel.org, "Darrick J .
Wong" Subject: [PATCH v15 09/17] iomap: Use folio offsets instead of page offsets Date: Mon, 19 Jul 2021 19:39:53 +0100 Message-Id: <20210719184001.1750630-10-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210719184001.1750630-1-willy@infradead.org> References: <20210719184001.1750630-1-willy@infradead.org> MIME-Version: 1.0 Authentication-Results: imf01.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=LhvRmpC3; spf=none (imf01.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-Rspamd-Server: rspam05 X-Stat-Signature: 5ykhcguccioq5zbmygn81hjuhnpy3p31 X-Rspamd-Queue-Id: 6CD7E5011FB6 X-HE-Tag: 1626721769-464992 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Pass a folio around instead of the page, and make sure the offset is relative to the start of the folio instead of the start of a page. Also use size_t for offset & length to make it clear that these are byte counts, and to support >2GB folios in the future. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Darrick J. Wong Reviewed-by: Christoph Hellwig --- fs/iomap/buffered-io.c | 83 ++++++++++++++++++++++-------------------- 1 file changed, 43 insertions(+), 40 deletions(-) diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index 62d8ebcda429..0f70bd185c2a 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -75,18 +75,18 @@ static void iomap_page_release(struct folio *folio) } /* - * Calculate the range inside the page that we actually need to read. + * Calculate the range inside the folio that we actually need to read. */ -static void -iomap_adjust_read_range(struct inode *inode, struct iomap_page *iop, - loff_t *pos, loff_t length, unsigned *offp, unsigned *lenp) +static void iomap_adjust_read_range(struct inode *inode, struct folio *folio, + loff_t *pos, loff_t length, size_t *offp, size_t *lenp) { + struct iomap_page *iop = to_iomap_page(folio); loff_t orig_pos = *pos; loff_t isize = i_size_read(inode); unsigned block_bits = inode->i_blkbits; unsigned block_size = (1 << block_bits); - unsigned poff = offset_in_page(*pos); - unsigned plen = min_t(loff_t, PAGE_SIZE - poff, length); + size_t poff = offset_in_folio(folio, *pos); + size_t plen = min_t(loff_t, folio_size(folio) - poff, length); unsigned first = poff >> block_bits; unsigned last = (poff + plen - 1) >> block_bits; @@ -124,7 +124,7 @@ iomap_adjust_read_range(struct inode *inode, struct iomap_page *iop, * page cache for blocks that are entirely outside of i_size. 
*/ if (orig_pos <= isize && orig_pos + length > isize) { - unsigned end = offset_in_page(isize - 1) >> block_bits; + unsigned end = offset_in_folio(folio, isize - 1) >> block_bits; if (first <= end && last > end) plen -= (last - end) * block_size; @@ -134,31 +134,31 @@ iomap_adjust_read_range(struct inode *inode, struct iomap_page *iop, *lenp = plen; } -static void iomap_iop_set_range_uptodate(struct page *page, - struct iomap_page *iop, unsigned off, unsigned len) +static void iomap_iop_set_range_uptodate(struct folio *folio, + struct iomap_page *iop, size_t off, size_t len) { - struct inode *inode = page->mapping->host; + struct inode *inode = folio->mapping->host; unsigned first = off >> inode->i_blkbits; unsigned last = (off + len - 1) >> inode->i_blkbits; unsigned long flags; spin_lock_irqsave(&iop->uptodate_lock, flags); bitmap_set(iop->uptodate, first, last - first + 1); - if (bitmap_full(iop->uptodate, i_blocks_per_page(inode, page))) - SetPageUptodate(page); + if (bitmap_full(iop->uptodate, i_blocks_per_folio(inode, folio))) + folio_mark_uptodate(folio); spin_unlock_irqrestore(&iop->uptodate_lock, flags); } -static void iomap_set_range_uptodate(struct page *page, - struct iomap_page *iop, unsigned off, unsigned len) +static void iomap_set_range_uptodate(struct folio *folio, + struct iomap_page *iop, size_t off, size_t len) { - if (PageError(page)) + if (folio_test_error(folio)) return; if (iop) - iomap_iop_set_range_uptodate(page, iop, off, len); + iomap_iop_set_range_uptodate(folio, iop, off, len); else - SetPageUptodate(page); + folio_mark_uptodate(folio); } static void @@ -169,15 +169,17 @@ iomap_read_page_end_io(struct bio_vec *bvec, int error) struct iomap_page *iop = to_iomap_page(folio); if (unlikely(error)) { - ClearPageUptodate(page); - SetPageError(page); + folio_clear_uptodate(folio); + folio_set_error(folio); } else { - iomap_set_range_uptodate(page, iop, bvec->bv_offset, - bvec->bv_len); + size_t off = (page - &folio->page) * PAGE_SIZE + + bvec->bv_offset; + + iomap_set_range_uptodate(folio, iop, off, bvec->bv_len); } if (!iop || atomic_sub_and_test(bvec->bv_len, &iop->read_bytes_pending)) - unlock_page(page); + folio_unlock(folio); } static void @@ -238,7 +240,7 @@ iomap_readpage_actor(struct inode *inode, loff_t pos, loff_t length, void *data, struct iomap_page *iop; bool same_page = false, is_contig = false; loff_t orig_pos = pos; - unsigned poff, plen; + size_t poff, plen; sector_t sector; if (iomap->type == IOMAP_INLINE) { @@ -249,13 +251,13 @@ iomap_readpage_actor(struct inode *inode, loff_t pos, loff_t length, void *data, /* zero post-eof blocks as the page may be mapped */ iop = iomap_page_create(inode, folio); - iomap_adjust_read_range(inode, iop, &pos, length, &poff, &plen); + iomap_adjust_read_range(inode, folio, &pos, length, &poff, &plen); if (plen == 0) goto done; if (iomap_block_needs_zeroing(inode, iomap, pos)) { - zero_user(page, poff, plen); - iomap_set_range_uptodate(page, iop, poff, plen); + zero_user(&folio->page, poff, plen); + iomap_set_range_uptodate(folio, iop, poff, plen); goto done; } @@ -266,7 +268,7 @@ iomap_readpage_actor(struct inode *inode, loff_t pos, loff_t length, void *data, /* Try to merge into a previous segment if we can */ sector = iomap_sector(iomap, pos); if (ctx->bio && bio_end_sector(ctx->bio) == sector) { - if (__bio_try_merge_page(ctx->bio, page, plen, poff, + if (__bio_try_merge_page(ctx->bio, &folio->page, plen, poff, &same_page)) goto done; is_contig = true; @@ -298,7 +300,7 @@ iomap_readpage_actor(struct inode *inode, 
loff_t pos, loff_t length, void *data, ctx->bio->bi_end_io = iomap_read_end_io; } - bio_add_page(ctx->bio, page, plen, poff); + bio_add_folio(ctx->bio, folio, plen, poff); done: /* * Move the caller beyond our range so that it keeps making progress. @@ -533,9 +535,8 @@ iomap_write_failed(struct inode *inode, loff_t pos, unsigned len) truncate_pagecache_range(inode, max(pos, i_size), pos + len); } -static int -iomap_read_page_sync(loff_t block_start, struct page *page, unsigned poff, - unsigned plen, struct iomap *iomap) +static int iomap_read_folio_sync(loff_t block_start, struct folio *folio, + size_t poff, size_t plen, struct iomap *iomap) { struct bio_vec bvec; struct bio bio; @@ -544,7 +545,7 @@ iomap_read_page_sync(loff_t block_start, struct page *page, unsigned poff, bio.bi_opf = REQ_OP_READ; bio.bi_iter.bi_sector = iomap_sector(iomap, block_start); bio_set_dev(&bio, iomap->bdev); - __bio_add_page(&bio, page, plen, poff); + bio_add_folio(&bio, folio, plen, poff); return submit_bio_wait(&bio); } @@ -557,14 +558,15 @@ __iomap_write_begin(struct inode *inode, loff_t pos, unsigned len, int flags, loff_t block_size = i_blocksize(inode); loff_t block_start = round_down(pos, block_size); loff_t block_end = round_up(pos + len, block_size); - unsigned from = offset_in_page(pos), to = from + len, poff, plen; + size_t from = offset_in_folio(folio, pos), to = from + len; + size_t poff, plen; - if (PageUptodate(page)) + if (folio_test_uptodate(folio)) return 0; - ClearPageError(page); + folio_clear_error(folio); do { - iomap_adjust_read_range(inode, iop, &block_start, + iomap_adjust_read_range(inode, folio, &block_start, block_end - block_start, &poff, &plen); if (plen == 0) break; @@ -577,14 +579,15 @@ __iomap_write_begin(struct inode *inode, loff_t pos, unsigned len, int flags, if (iomap_block_needs_zeroing(inode, srcmap, block_start)) { if (WARN_ON_ONCE(flags & IOMAP_WRITE_F_UNSHARE)) return -EIO; - zero_user_segments(page, poff, from, to, poff + plen); + zero_user_segments(&folio->page, poff, from, to, + poff + plen); } else { - int status = iomap_read_page_sync(block_start, page, + int status = iomap_read_folio_sync(block_start, folio, poff, plen, srcmap); if (status) return status; } - iomap_set_range_uptodate(page, iop, poff, plen); + iomap_set_range_uptodate(folio, iop, poff, plen); } while ((block_start += plen) < block_end); return 0; @@ -663,7 +666,7 @@ static size_t __iomap_write_end(struct inode *inode, loff_t pos, size_t len, */ if (unlikely(copied < len && !PageUptodate(page))) return 0; - iomap_set_range_uptodate(page, iop, offset_in_page(pos), len); + iomap_set_range_uptodate(folio, iop, offset_in_folio(folio, pos), len); __set_page_dirty_nobuffers(page); return copied; } From patchwork Mon Jul 19 18:39:54 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12386661 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 89B85C07E95 for ; Mon, 19 Jul 2021 19:29:52 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by 
mail.kernel.org (Postfix) with ESMTP id 2606A610D2 for ; Mon, 19 Jul 2021 19:29:52 +0000 (UTC)
From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-block@vger.kernel.org, "Darrick J . Wong" Subject: [PATCH v15 10/17] iomap: Convert bio completions to use folios Date: Mon, 19 Jul 2021 19:39:54 +0100 Message-Id: <20210719184001.1750630-11-willy@infradead.org> In-Reply-To: <20210719184001.1750630-1-willy@infradead.org> References: <20210719184001.1750630-1-willy@infradead.org>
Use bio_for_each_folio() to iterate over each folio in the bio instead of iterating over each page. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Darrick J.
Wong Reviewed-by: Christoph Hellwig --- fs/iomap/buffered-io.c | 46 +++++++++++++++++------------------------- 1 file changed, 18 insertions(+), 28 deletions(-) diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index 0f70bd185c2a..ee33a4e7b946 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -161,36 +161,29 @@ static void iomap_set_range_uptodate(struct folio *folio, folio_mark_uptodate(folio); } -static void -iomap_read_page_end_io(struct bio_vec *bvec, int error) +static void iomap_finish_folio_read(struct folio *folio, size_t offset, + size_t len, int error) { - struct page *page = bvec->bv_page; - struct folio *folio = page_folio(page); struct iomap_page *iop = to_iomap_page(folio); if (unlikely(error)) { folio_clear_uptodate(folio); folio_set_error(folio); } else { - size_t off = (page - &folio->page) * PAGE_SIZE + - bvec->bv_offset; - - iomap_set_range_uptodate(folio, iop, off, bvec->bv_len); + iomap_set_range_uptodate(folio, iop, offset, len); } - if (!iop || atomic_sub_and_test(bvec->bv_len, &iop->read_bytes_pending)) + if (!iop || atomic_sub_and_test(len, &iop->read_bytes_pending)) folio_unlock(folio); } -static void -iomap_read_end_io(struct bio *bio) +static void iomap_read_end_io(struct bio *bio) { int error = blk_status_to_errno(bio->bi_status); - struct bio_vec *bvec; - struct bvec_iter_all iter_all; + struct folio_iter fi; - bio_for_each_segment_all(bvec, bio, iter_all) - iomap_read_page_end_io(bvec, error); + bio_for_each_folio_all(fi, bio) + iomap_finish_folio_read(fi.folio, fi.offset, fi.length, error); bio_put(bio); } @@ -1014,23 +1007,21 @@ vm_fault_t iomap_page_mkwrite(struct vm_fault *vmf, const struct iomap_ops *ops) } EXPORT_SYMBOL_GPL(iomap_page_mkwrite); -static void -iomap_finish_page_writeback(struct inode *inode, struct page *page, - int error, unsigned int len) +static void iomap_finish_folio_write(struct inode *inode, struct folio *folio, + size_t len, int error) { - struct folio *folio = page_folio(page); struct iomap_page *iop = to_iomap_page(folio); if (error) { - SetPageError(page); + folio_set_error(folio); mapping_set_error(inode->i_mapping, -EIO); } - WARN_ON_ONCE(i_blocks_per_page(inode, page) > 1 && !iop); + WARN_ON_ONCE(i_blocks_per_folio(inode, folio) > 1 && !iop); WARN_ON_ONCE(iop && atomic_read(&iop->write_bytes_pending) <= 0); if (!iop || atomic_sub_and_test(len, &iop->write_bytes_pending)) - end_page_writeback(page); + folio_end_writeback(folio); } /* @@ -1049,8 +1040,7 @@ iomap_finish_ioend(struct iomap_ioend *ioend, int error) bool quiet = bio_flagged(bio, BIO_QUIET); for (bio = &ioend->io_inline_bio; bio; bio = next) { - struct bio_vec *bv; - struct bvec_iter_all iter_all; + struct folio_iter fi; /* * For the last bio, bi_private points to the ioend, so we @@ -1061,10 +1051,10 @@ iomap_finish_ioend(struct iomap_ioend *ioend, int error) else next = bio->bi_private; - /* walk each page on bio, ending page IO on them */ - bio_for_each_segment_all(bv, bio, iter_all) - iomap_finish_page_writeback(inode, bv->bv_page, error, - bv->bv_len); + /* walk all folios in bio, ending page IO on them */ + bio_for_each_folio_all(fi, bio) + iomap_finish_folio_write(inode, fi.folio, fi.length, + error); bio_put(bio); } /* The ioend has been freed by bio_put() */ From patchwork Mon Jul 19 18:39:55 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12386647 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on 
aws-us-west-2-korg-lkml-1.web.codeaurora.org
From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-block@vger.kernel.org, "Darrick J .
Wong" Subject: [PATCH v15 11/17] iomap: Convert readahead and readpage to use a folio Date: Mon, 19 Jul 2021 19:39:55 +0100 Message-Id: <20210719184001.1750630-12-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210719184001.1750630-1-willy@infradead.org> References: <20210719184001.1750630-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Server: rspam06 X-Rspamd-Queue-Id: 5050F6001E48 X-Stat-Signature: e55mjunhz3x597jbn8jftsep3mrptycz Authentication-Results: imf14.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=YUtfKtGG; spf=none (imf14.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-HE-Tag: 1626722852-817868 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Handle folios of arbitrary size instead of working in PAGE_SIZE units. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Darrick J. Wong --- fs/iomap/buffered-io.c | 61 +++++++++++++++++++++--------------------- 1 file changed, 30 insertions(+), 31 deletions(-) diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index ee33a4e7b946..dfb2eec520bd 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -188,8 +188,8 @@ static void iomap_read_end_io(struct bio *bio) } struct iomap_readpage_ctx { - struct page *cur_page; - bool cur_page_in_bio; + struct folio *cur_folio; + bool cur_folio_in_bio; struct bio *bio; struct readahead_control *rac; }; @@ -228,8 +228,7 @@ iomap_readpage_actor(struct inode *inode, loff_t pos, loff_t length, void *data, struct iomap *iomap, struct iomap *srcmap) { struct iomap_readpage_ctx *ctx = data; - struct page *page = ctx->cur_page; - struct folio *folio = page_folio(page); + struct folio *folio = ctx->cur_folio; struct iomap_page *iop; bool same_page = false, is_contig = false; loff_t orig_pos = pos; @@ -238,7 +237,7 @@ iomap_readpage_actor(struct inode *inode, loff_t pos, loff_t length, void *data, if (iomap->type == IOMAP_INLINE) { WARN_ON_ONCE(pos); - iomap_read_inline_data(inode, page, iomap); + iomap_read_inline_data(inode, &folio->page, iomap); return PAGE_SIZE; } @@ -254,7 +253,7 @@ iomap_readpage_actor(struct inode *inode, loff_t pos, loff_t length, void *data, goto done; } - ctx->cur_page_in_bio = true; + ctx->cur_folio_in_bio = true; if (iop) atomic_add(plen, &iop->read_bytes_pending); @@ -268,7 +267,7 @@ iomap_readpage_actor(struct inode *inode, loff_t pos, loff_t length, void *data, } if (!is_contig || bio_full(ctx->bio, plen)) { - gfp_t gfp = mapping_gfp_constraint(page->mapping, GFP_KERNEL); + gfp_t gfp = mapping_gfp_constraint(folio->mapping, GFP_KERNEL); gfp_t orig_gfp = gfp; unsigned int nr_vecs = DIV_ROUND_UP(length, PAGE_SIZE); @@ -307,30 +306,31 @@ iomap_readpage_actor(struct inode *inode, loff_t pos, loff_t length, void *data, int iomap_readpage(struct page *page, const struct iomap_ops *ops) { - struct iomap_readpage_ctx ctx = { .cur_page = page }; - struct inode *inode = page->mapping->host; - unsigned poff; + struct folio *folio = page_folio(page); + struct iomap_readpage_ctx ctx = { .cur_folio = folio }; + struct inode *inode = folio->mapping->host; + size_t poff; loff_t ret; + size_t len = folio_size(folio); - trace_iomap_readpage(page->mapping->host, 1); + trace_iomap_readpage(inode, 1); - for (poff = 0; poff < PAGE_SIZE; poff += ret) { - ret = iomap_apply(inode, page_offset(page) + poff, - PAGE_SIZE - 
poff, 0, ops, &ctx, - iomap_readpage_actor); + for (poff = 0; poff < len; poff += ret) { + ret = iomap_apply(inode, folio_pos(folio) + poff, len - poff, + 0, ops, &ctx, iomap_readpage_actor); if (ret <= 0) { WARN_ON_ONCE(ret == 0); - SetPageError(page); + folio_set_error(folio); break; } } if (ctx.bio) { submit_bio(ctx.bio); - WARN_ON_ONCE(!ctx.cur_page_in_bio); + WARN_ON_ONCE(!ctx.cur_folio_in_bio); } else { - WARN_ON_ONCE(ctx.cur_page_in_bio); - unlock_page(page); + WARN_ON_ONCE(ctx.cur_folio_in_bio); + folio_unlock(folio); } /* @@ -350,15 +350,15 @@ iomap_readahead_actor(struct inode *inode, loff_t pos, loff_t length, loff_t done, ret; for (done = 0; done < length; done += ret) { - if (ctx->cur_page && offset_in_page(pos + done) == 0) { - if (!ctx->cur_page_in_bio) - unlock_page(ctx->cur_page); - put_page(ctx->cur_page); - ctx->cur_page = NULL; + if (ctx->cur_folio && + offset_in_folio(ctx->cur_folio, pos + done) == 0) { + if (!ctx->cur_folio_in_bio) + folio_unlock(ctx->cur_folio); + ctx->cur_folio = NULL; } - if (!ctx->cur_page) { - ctx->cur_page = readahead_page(ctx->rac); - ctx->cur_page_in_bio = false; + if (!ctx->cur_folio) { + ctx->cur_folio = readahead_folio(ctx->rac); + ctx->cur_folio_in_bio = false; } ret = iomap_readpage_actor(inode, pos + done, length - done, ctx, iomap, srcmap); @@ -406,10 +406,9 @@ void iomap_readahead(struct readahead_control *rac, const struct iomap_ops *ops) if (ctx.bio) submit_bio(ctx.bio); - if (ctx.cur_page) { - if (!ctx.cur_page_in_bio) - unlock_page(ctx.cur_page); - put_page(ctx.cur_page); + if (ctx.cur_folio) { + if (!ctx.cur_folio_in_bio) + folio_unlock(ctx.cur_folio); } } EXPORT_SYMBOL_GPL(iomap_readahead); From patchwork Mon Jul 19 18:39:56 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12386637 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id A4341C07E95 for ; Mon, 19 Jul 2021 19:08:11 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 39D18610D0 for ; Mon, 19 Jul 2021 19:08:11 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 39D18610D0 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id DDC756B018C; Mon, 19 Jul 2021 15:08:11 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id D8C536B018D; Mon, 19 Jul 2021 15:08:11 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id C54226B018E; Mon, 19 Jul 2021 15:08:11 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0161.hostedemail.com [216.40.44.161]) by kanga.kvack.org (Postfix) with ESMTP id 9979A6B018C for ; Mon, 19 Jul 2021 15:08:11 -0400 (EDT) Received: from smtpin34.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id 
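For readers following the conversion in patch 11, here is a minimal editor's sketch of the loop shape it moves to: byte ranges derived from folio_size()/folio_pos() rather than PAGE_SIZE. It is not code from the patch; do_read_range() is a hypothetical stand-in for the iomap_apply()/actor machinery.

/*
 * Sketch only: walk a folio of arbitrary size in byte ranges.
 * do_read_range() is hypothetical; it stands in for iomap_apply()
 * driving iomap_readpage_actor() over [pos, pos + len).
 */
static void example_read_folio(struct inode *inode, struct folio *folio)
{
	size_t len = folio_size(folio);		/* may be larger than PAGE_SIZE */
	size_t poff;
	loff_t ret;

	for (poff = 0; poff < len; poff += ret) {
		/* pos is absolute in the file; poff is relative to the folio */
		ret = do_read_range(inode, folio_pos(folio) + poff, len - poff);
		if (ret <= 0)
			break;			/* error or nothing left to read */
	}
}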
From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-block@vger.kernel.org, "Darrick J . Wong" Subject: [PATCH v15 12/17] iomap: Convert iomap_page_mkwrite to use a folio Date: Mon, 19 Jul 2021 19:39:56 +0100 Message-Id: <20210719184001.1750630-13-willy@infradead.org> In-Reply-To: <20210719184001.1750630-1-willy@infradead.org> References: <20210719184001.1750630-1-willy@infradead.org>
If we write to any page in a folio, we have to mark the entire folio as dirty, and potentially COW the entire folio, because it'll all get written back as one unit. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Darrick J.
Wong Reviewed-by: Christoph Hellwig --- fs/iomap/buffered-io.c | 41 +++++++++++++++++++++-------------------- 1 file changed, 21 insertions(+), 20 deletions(-) diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index dfb2eec520bd..dd05db36e135 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -953,21 +953,22 @@ iomap_truncate_page(struct inode *inode, loff_t pos, bool *did_zero, } EXPORT_SYMBOL_GPL(iomap_truncate_page); -static loff_t -iomap_page_mkwrite_actor(struct inode *inode, loff_t pos, loff_t length, - void *data, struct iomap *iomap, struct iomap *srcmap) +static loff_t iomap_folio_mkwrite_actor(struct inode *inode, loff_t pos, + loff_t length, void *data, struct iomap *iomap, + struct iomap *srcmap) { - struct page *page = data; + struct folio *folio = data; int ret; if (iomap->flags & IOMAP_F_BUFFER_HEAD) { - ret = __block_write_begin_int(page, pos, length, NULL, iomap); + ret = __block_write_begin_int(&folio->page, pos, length, NULL, + iomap); if (ret) return ret; - block_commit_write(page, 0, length); + block_commit_write(&folio->page, 0, length); } else { - WARN_ON_ONCE(!PageUptodate(page)); - set_page_dirty(page); + WARN_ON_ONCE(!folio_test_uptodate(folio)); + folio_mark_dirty(folio); } return length; @@ -975,33 +976,33 @@ iomap_page_mkwrite_actor(struct inode *inode, loff_t pos, loff_t length, vm_fault_t iomap_page_mkwrite(struct vm_fault *vmf, const struct iomap_ops *ops) { - struct page *page = vmf->page; + struct folio *folio = page_folio(vmf->page); struct inode *inode = file_inode(vmf->vma->vm_file); - unsigned long length; - loff_t offset; + size_t length; + loff_t pos; ssize_t ret; - lock_page(page); - ret = page_mkwrite_check_truncate(page, inode); + folio_lock(folio); + ret = folio_mkwrite_check_truncate(folio, inode); if (ret < 0) goto out_unlock; length = ret; - offset = page_offset(page); + pos = folio_pos(folio); while (length > 0) { - ret = iomap_apply(inode, offset, length, - IOMAP_WRITE | IOMAP_FAULT, ops, page, - iomap_page_mkwrite_actor); + ret = iomap_apply(inode, pos, length, + IOMAP_WRITE | IOMAP_FAULT, ops, folio, + iomap_folio_mkwrite_actor); if (unlikely(ret <= 0)) goto out_unlock; - offset += ret; + pos += ret; length -= ret; } - wait_for_stable_page(page); + folio_wait_stable(folio); return VM_FAULT_LOCKED; out_unlock: - unlock_page(page); + folio_unlock(folio); return block_page_mkwrite_return(ret); } EXPORT_SYMBOL_GPL(iomap_page_mkwrite); From patchwork Mon Jul 19 18:39:57 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12386635 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 564EBC07E9B for ; Mon, 19 Jul 2021 19:07:47 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id E9014610CC for ; Mon, 19 Jul 2021 19:07:44 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org E9014610CC Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: 
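As an editor's illustration of the pattern patch 12 adopts (not code from the patch): a ->page_mkwrite() handler that locks, dirties and stabilises the whole folio, since the folio is written back as a single unit. fs_make_range_writable() is a hypothetical stand-in for the filesystem's own allocation/COW work; everything else is the generic folio API.

/*
 * Sketch only: folio-granular page_mkwrite().
 */
static vm_fault_t example_folio_mkwrite(struct vm_fault *vmf)
{
	struct folio *folio = page_folio(vmf->page);
	struct inode *inode = file_inode(vmf->vma->vm_file);
	ssize_t len;

	folio_lock(folio);
	len = folio_mkwrite_check_truncate(folio, inode);
	if (len < 0) {
		folio_unlock(folio);
		return block_page_mkwrite_return(len);
	}

	/* COW/allocate for the whole folio; it is written back as one unit */
	if (fs_make_range_writable(inode, folio_pos(folio), len) < 0) {
		folio_unlock(folio);
		return VM_FAULT_SIGBUS;
	}

	folio_mark_dirty(folio);
	folio_wait_stable(folio);	/* keep the folio stable while it is written */
	return VM_FAULT_LOCKED;		/* caller expects the folio still locked */
}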
From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-block@vger.kernel.org Subject: [PATCH v15 13/17] iomap: Convert iomap_write_begin and iomap_write_end to folios Date: Mon, 19 Jul 2021 19:39:57 +0100 Message-Id: <20210719184001.1750630-14-willy@infradead.org> In-Reply-To: <20210719184001.1750630-1-willy@infradead.org> References: <20210719184001.1750630-1-willy@infradead.org>
These functions still only work in PAGE_SIZE chunks, but there are fewer conversions from head to tail pages as a result of this patch. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig Reviewed-by: Darrick J.
Wong --- fs/iomap/buffered-io.c | 68 ++++++++++++++++++++++-------------------- 1 file changed, 36 insertions(+), 32 deletions(-) diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index dd05db36e135..4b02337009bc 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -543,9 +543,8 @@ static int iomap_read_folio_sync(loff_t block_start, struct folio *folio, static int __iomap_write_begin(struct inode *inode, loff_t pos, unsigned len, int flags, - struct page *page, struct iomap *srcmap) + struct folio *folio, struct iomap *srcmap) { - struct folio *folio = page_folio(page); struct iomap_page *iop = iomap_page_create(inode, folio); loff_t block_size = i_blocksize(inode); loff_t block_start = round_down(pos, block_size); @@ -585,12 +584,14 @@ __iomap_write_begin(struct inode *inode, loff_t pos, unsigned len, int flags, return 0; } -static int -iomap_write_begin(struct inode *inode, loff_t pos, unsigned len, unsigned flags, - struct page **pagep, struct iomap *iomap, struct iomap *srcmap) +static int iomap_write_begin(struct inode *inode, loff_t pos, size_t len, + unsigned flags, struct folio **foliop, struct iomap *iomap, + struct iomap *srcmap) { const struct iomap_page_ops *page_ops = iomap->page_ops; + struct folio *folio; struct page *page; + unsigned fgp = FGP_LOCK | FGP_WRITE | FGP_CREAT | FGP_STABLE | FGP_NOFS; int status = 0; BUG_ON(pos + len > iomap->offset + iomap->length); @@ -606,30 +607,31 @@ iomap_write_begin(struct inode *inode, loff_t pos, unsigned len, unsigned flags, return status; } - page = grab_cache_page_write_begin(inode->i_mapping, pos >> PAGE_SHIFT, - AOP_FLAG_NOFS); - if (!page) { + folio = __filemap_get_folio(inode->i_mapping, pos >> PAGE_SHIFT, fgp, + mapping_gfp_mask(inode->i_mapping)); + if (!folio) { status = -ENOMEM; goto out_no_page; } + page = folio_file_page(folio, pos >> PAGE_SHIFT); if (srcmap->type == IOMAP_INLINE) iomap_read_inline_data(inode, page, srcmap); else if (iomap->flags & IOMAP_F_BUFFER_HEAD) status = __block_write_begin_int(page, pos, len, NULL, srcmap); else - status = __iomap_write_begin(inode, pos, len, flags, page, + status = __iomap_write_begin(inode, pos, len, flags, folio, srcmap); if (unlikely(status)) goto out_unlock; - *pagep = page; + *foliop = folio; return 0; out_unlock: - unlock_page(page); - put_page(page); + folio_unlock(folio); + folio_put(folio); iomap_write_failed(inode, pos, len); out_no_page: @@ -639,11 +641,10 @@ iomap_write_begin(struct inode *inode, loff_t pos, unsigned len, unsigned flags, } static size_t __iomap_write_end(struct inode *inode, loff_t pos, size_t len, - size_t copied, struct page *page) + size_t copied, struct folio *folio) { - struct folio *folio = page_folio(page); struct iomap_page *iop = to_iomap_page(folio); - flush_dcache_page(page); + flush_dcache_folio(folio); /* * The blocks that were entirely written will now be uptodate, so we @@ -656,10 +657,10 @@ static size_t __iomap_write_end(struct inode *inode, loff_t pos, size_t len, * uptodate page as a zero-length write, and force the caller to redo * the whole thing. */ - if (unlikely(copied < len && !PageUptodate(page))) + if (unlikely(copied < len && !folio_test_uptodate(folio))) return 0; iomap_set_range_uptodate(folio, iop, offset_in_folio(folio, pos), len); - __set_page_dirty_nobuffers(page); + filemap_dirty_folio(inode->i_mapping, folio); return copied; } @@ -682,9 +683,10 @@ static size_t iomap_write_end_inline(struct inode *inode, struct page *page, /* Returns the number of bytes copied. May be 0. Cannot be an errno. 
*/ static size_t iomap_write_end(struct inode *inode, loff_t pos, size_t len, - size_t copied, struct page *page, struct iomap *iomap, + size_t copied, struct folio *folio, struct iomap *iomap, struct iomap *srcmap) { + struct page *page = folio_file_page(folio, pos >> PAGE_SHIFT); const struct iomap_page_ops *page_ops = iomap->page_ops; loff_t old_size = inode->i_size; size_t ret; @@ -695,7 +697,7 @@ static size_t iomap_write_end(struct inode *inode, loff_t pos, size_t len, ret = block_write_end(NULL, inode->i_mapping, pos, len, copied, page, NULL); } else { - ret = __iomap_write_end(inode, pos, len, copied, page); + ret = __iomap_write_end(inode, pos, len, copied, folio); } /* @@ -707,13 +709,13 @@ static size_t iomap_write_end(struct inode *inode, loff_t pos, size_t len, i_size_write(inode, pos + ret); iomap->flags |= IOMAP_F_SIZE_CHANGED; } - unlock_page(page); + folio_unlock(folio); if (old_size < pos) pagecache_isize_extended(inode, old_size, pos); if (page_ops && page_ops->page_done) page_ops->page_done(inode, pos, ret, page, iomap); - put_page(page); + folio_put(folio); if (ret < len) iomap_write_failed(inode, pos, len); @@ -729,6 +731,7 @@ iomap_write_actor(struct inode *inode, loff_t pos, loff_t length, void *data, ssize_t written = 0; do { + struct folio *folio; struct page *page; unsigned long offset; /* Offset into pagecache page */ unsigned long bytes; /* Bytes to write to page */ @@ -752,18 +755,19 @@ iomap_write_actor(struct inode *inode, loff_t pos, loff_t length, void *data, break; } - status = iomap_write_begin(inode, pos, bytes, 0, &page, iomap, + status = iomap_write_begin(inode, pos, bytes, 0, &folio, iomap, srcmap); if (unlikely(status)) break; + page = folio_file_page(folio, pos >> PAGE_SHIFT); if (mapping_writably_mapped(inode->i_mapping)) flush_dcache_page(page); copied = copy_page_from_iter_atomic(page, offset, bytes, i); - status = iomap_write_end(inode, pos, bytes, copied, page, iomap, - srcmap); + status = iomap_write_end(inode, pos, bytes, copied, folio, + iomap, srcmap); if (unlikely(copied != status)) iov_iter_revert(i, copied - status); @@ -827,14 +831,14 @@ iomap_unshare_actor(struct inode *inode, loff_t pos, loff_t length, void *data, do { unsigned long offset = offset_in_page(pos); unsigned long bytes = min_t(loff_t, PAGE_SIZE - offset, length); - struct page *page; + struct folio *folio; status = iomap_write_begin(inode, pos, bytes, - IOMAP_WRITE_F_UNSHARE, &page, iomap, srcmap); + IOMAP_WRITE_F_UNSHARE, &folio, iomap, srcmap); if (unlikely(status)) return status; - status = iomap_write_end(inode, pos, bytes, bytes, page, iomap, + status = iomap_write_end(inode, pos, bytes, bytes, folio, iomap, srcmap); if (WARN_ON_ONCE(status == 0)) return -EIO; @@ -873,19 +877,19 @@ EXPORT_SYMBOL_GPL(iomap_file_unshare); static s64 iomap_zero(struct inode *inode, loff_t pos, u64 length, struct iomap *iomap, struct iomap *srcmap) { - struct page *page; + struct folio *folio; int status; unsigned offset = offset_in_page(pos); unsigned bytes = min_t(u64, PAGE_SIZE - offset, length); - status = iomap_write_begin(inode, pos, bytes, 0, &page, iomap, srcmap); + status = iomap_write_begin(inode, pos, bytes, 0, &folio, iomap, srcmap); if (status) return status; - zero_user(page, offset, bytes); - mark_page_accessed(page); + zero_user(folio_file_page(folio, pos >> PAGE_SHIFT), offset, bytes); + folio_mark_accessed(folio); - return iomap_write_end(inode, pos, bytes, bytes, page, iomap, srcmap); + return iomap_write_end(inode, pos, bytes, bytes, folio, iomap, srcmap); } static 
loff_t iomap_zero_range_actor(struct inode *inode, loff_t pos,
From patchwork Mon Jul 19 18:39:58 2021 X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12386639
From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-block@vger.kernel.org, "Darrick J .
Wong" Subject: [PATCH v15 14/17] iomap: Convert iomap_read_inline_data to take a folio Date: Mon, 19 Jul 2021 19:39:58 +0100 Message-Id: <20210719184001.1750630-15-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210719184001.1750630-1-willy@infradead.org> References: <20210719184001.1750630-1-willy@infradead.org> MIME-Version: 1.0 Authentication-Results: imf07.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=syDtFN8A; spf=none (imf07.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-Stat-Signature: o7mnnjgdy85xtehzz48ugxw85dcrr43i X-Rspamd-Queue-Id: AAF59100A7DA X-Rspamd-Server: rspam01 X-HE-Tag: 1626721726-47562 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Inline data is restricted to being less than a page in size, so we don't need to handle multi-page folios. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Darrick J. Wong Reviewed-by: Christoph Hellwig --- fs/iomap/buffered-io.c | 20 ++++++++++---------- 1 file changed, 10 insertions(+), 10 deletions(-) diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index 4b02337009bc..23ee86aba9d6 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -194,25 +194,25 @@ struct iomap_readpage_ctx { struct readahead_control *rac; }; -static void -iomap_read_inline_data(struct inode *inode, struct page *page, +static void iomap_read_inline_data(struct inode *inode, struct folio *folio, struct iomap *iomap) { size_t size = i_size_read(inode); void *addr; - if (PageUptodate(page)) + if (folio_test_uptodate(folio)) return; - BUG_ON(page_has_private(page)); - BUG_ON(page->index); + BUG_ON(folio_test_private(folio)); + BUG_ON(folio->index); + BUG_ON(folio_multi(folio)); BUG_ON(size > PAGE_SIZE - offset_in_page(iomap->inline_data)); - addr = kmap_atomic(page); + addr = kmap_local_folio(folio, 0); memcpy(addr, iomap->inline_data, size); memset(addr + size, 0, PAGE_SIZE - size); - kunmap_atomic(addr); - SetPageUptodate(page); + kunmap_local(addr); + folio_mark_uptodate(folio); } static inline bool iomap_block_needs_zeroing(struct inode *inode, @@ -237,7 +237,7 @@ iomap_readpage_actor(struct inode *inode, loff_t pos, loff_t length, void *data, if (iomap->type == IOMAP_INLINE) { WARN_ON_ONCE(pos); - iomap_read_inline_data(inode, &folio->page, iomap); + iomap_read_inline_data(inode, folio, iomap); return PAGE_SIZE; } @@ -616,7 +616,7 @@ static int iomap_write_begin(struct inode *inode, loff_t pos, size_t len, page = folio_file_page(folio, pos >> PAGE_SHIFT); if (srcmap->type == IOMAP_INLINE) - iomap_read_inline_data(inode, page, srcmap); + iomap_read_inline_data(inode, folio, srcmap); else if (iomap->flags & IOMAP_F_BUFFER_HEAD) status = __block_write_begin_int(page, pos, len, NULL, srcmap); else From patchwork Mon Jul 19 18:39:59 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12386655 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-13.8 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,UNWANTED_LANGUAGE_BODY, USER_AGENT_GIT autolearn=ham autolearn_force=no 
From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-block@vger.kernel.org, "Darrick J .
Wong" Subject: [PATCH v15 15/17] iomap: Convert iomap_write_end_inline to take a folio Date: Mon, 19 Jul 2021 19:39:59 +0100 Message-Id: <20210719184001.1750630-16-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210719184001.1750630-1-willy@infradead.org> References: <20210719184001.1750630-1-willy@infradead.org> MIME-Version: 1.0 Authentication-Results: imf23.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=UeKoKZpA; spf=none (imf23.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-Rspamd-Server: rspam05 X-Stat-Signature: 7ts791hmacuq39sqrc133q45euz7izmi X-Rspamd-Queue-Id: BC5B09000735 X-HE-Tag: 1626722884-705513 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Inline data only occupies a single page, but using a folio means that we don't need to call compound_head() in PageUptodate(). Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Darrick J. Wong Reviewed-by: Christoph Hellwig --- fs/iomap/buffered-io.c | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index 23ee86aba9d6..f8ca307270e7 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -664,18 +664,18 @@ static size_t __iomap_write_end(struct inode *inode, loff_t pos, size_t len, return copied; } -static size_t iomap_write_end_inline(struct inode *inode, struct page *page, +static size_t iomap_write_end_inline(struct inode *inode, struct folio *folio, struct iomap *iomap, loff_t pos, size_t copied) { void *addr; - WARN_ON_ONCE(!PageUptodate(page)); + WARN_ON_ONCE(!folio_test_uptodate(folio)); BUG_ON(pos + copied > PAGE_SIZE - offset_in_page(iomap->inline_data)); - flush_dcache_page(page); - addr = kmap_atomic(page); + flush_dcache_folio(folio); + addr = kmap_local_folio(folio, 0); memcpy(iomap->inline_data + pos, addr + pos, copied); - kunmap_atomic(addr); + kunmap_local(addr); mark_inode_dirty(inode); return copied; @@ -692,7 +692,7 @@ static size_t iomap_write_end(struct inode *inode, loff_t pos, size_t len, size_t ret; if (srcmap->type == IOMAP_INLINE) { - ret = iomap_write_end_inline(inode, page, iomap, pos, copied); + ret = iomap_write_end_inline(inode, folio, iomap, pos, copied); } else if (srcmap->flags & IOMAP_F_BUFFER_HEAD) { ret = block_write_end(NULL, inode->i_mapping, pos, len, copied, page, NULL); From patchwork Mon Jul 19 18:40:00 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12386633 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1932BC07E95 for ; Mon, 19 Jul 2021 19:07:01 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id A15EF60720 for ; Mon, 19 Jul 2021 19:07:00 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org A15EF60720 
From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-block@vger.kernel.org Subject: [PATCH v15 16/17] iomap: Convert iomap_add_to_ioend to take a folio Date: Mon, 19 Jul 2021 19:40:00 +0100 Message-Id: <20210719184001.1750630-17-willy@infradead.org> In-Reply-To: <20210719184001.1750630-1-willy@infradead.org> References: <20210719184001.1750630-1-willy@infradead.org>
We still iterate one block at a time, but now we call compound_head() less often. Rename file_offset to pos to fit the rest of the file.
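Before the diff, an editor's sketch of how the ioend path below is expected to feed one block's worth of a folio into a bio once bio_add_folio() (patch 01) is in place. This is not code from the patch; alloc_chained_bio() is a hypothetical stand-in for the bio-chaining helper the file already has.

/*
 * Sketch only: add one block of a folio to the current bio, chaining a
 * new bio when the current one is full.  bio_add_folio() returns the
 * number of bytes added, or 0 if nothing could be added.
 */
static void example_add_block(struct bio **biop, struct folio *folio,
		size_t poff, size_t len)
{
	if (bio_add_folio(*biop, folio, len, poff))
		return;				/* fast path: it fit */

	*biop = alloc_chained_bio(*biop);	/* hypothetical chaining helper */
	bio_add_folio(*biop, folio, len, poff);	/* cannot fail on a fresh bio */
}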
 fs/iomap/buffered-io.c | 109 ++++++++++++++++++-----------------------
 1 file changed, 48 insertions(+), 61 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index f8ca307270e7..60d3b7af61d1 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -1253,36 +1253,29 @@ iomap_can_add_to_ioend(struct iomap_writepage_ctx *wpc, loff_t offset,
  * first, otherwise finish off the current ioend and start another.
  */
 static void
-iomap_add_to_ioend(struct inode *inode, loff_t offset, struct page *page,
+iomap_add_to_ioend(struct inode *inode, loff_t pos, struct folio *folio,
 		struct iomap_page *iop, struct iomap_writepage_ctx *wpc,
 		struct writeback_control *wbc, struct list_head *iolist)
 {
-	sector_t sector = iomap_sector(&wpc->iomap, offset);
+	sector_t sector = iomap_sector(&wpc->iomap, pos);
 	unsigned len = i_blocksize(inode);
-	unsigned poff = offset & (PAGE_SIZE - 1);
-	bool merged, same_page = false;
+	size_t poff = offset_in_folio(folio, pos);
 
-	if (!wpc->ioend || !iomap_can_add_to_ioend(wpc, offset, sector)) {
+	if (!wpc->ioend || !iomap_can_add_to_ioend(wpc, pos, sector)) {
 		if (wpc->ioend)
 			list_add(&wpc->ioend->io_list, iolist);
-		wpc->ioend = iomap_alloc_ioend(inode, wpc, offset, sector, wbc);
+		wpc->ioend = iomap_alloc_ioend(inode, wpc, pos, sector, wbc);
 	}
 
-	merged = __bio_try_merge_page(wpc->ioend->io_bio, page, len, poff,
-			&same_page);
 	if (iop)
 		atomic_add(len, &iop->write_bytes_pending);
-
-	if (!merged) {
-		if (bio_full(wpc->ioend->io_bio, len)) {
-			wpc->ioend->io_bio =
-				iomap_chain_bio(wpc->ioend->io_bio);
-		}
-		bio_add_page(wpc->ioend->io_bio, page, len, poff);
+	if (!bio_add_folio(wpc->ioend->io_bio, folio, len, poff)) {
+		wpc->ioend->io_bio = iomap_chain_bio(wpc->ioend->io_bio);
+		bio_add_folio(wpc->ioend->io_bio, folio, len, poff);
 	}
 
 	wpc->ioend->io_size += len;
-	wbc_account_cgroup_owner(wbc, page, len);
+	wbc_account_cgroup_owner(wbc, &folio->page, len);
 }
 
 /*
@@ -1304,45 +1297,43 @@ iomap_add_to_ioend(struct inode *inode, loff_t offset, struct page *page,
 static int
 iomap_writepage_map(struct iomap_writepage_ctx *wpc,
 		struct writeback_control *wbc, struct inode *inode,
-		struct page *page, u64 end_offset)
+		struct folio *folio, loff_t end_pos)
 {
-	struct folio *folio = page_folio(page);
 	struct iomap_page *iop = iomap_page_create(inode, folio);
 	struct iomap_ioend *ioend, *next;
 	unsigned len = i_blocksize(inode);
-	u64 file_offset; /* file offset of page */
+	unsigned nblocks = i_blocks_per_folio(inode, folio);
+	loff_t pos = folio_pos(folio);
 	int error = 0, count = 0, i;
 	LIST_HEAD(submit_list);
 
 	WARN_ON_ONCE(iop && atomic_read(&iop->write_bytes_pending) != 0);
 
 	/*
-	 * Walk through the page to find areas to write back. If we run off the
-	 * end of the current map or find the current map invalid, grab a new
-	 * one.
+	 * Walk through the folio to find areas to write back. If we
+	 * run off the end of the current map or find the current map
+	 * invalid, grab a new one.
 	 */
-	for (i = 0, file_offset = page_offset(page);
-	     i < (PAGE_SIZE >> inode->i_blkbits) && file_offset < end_offset;
-	     i++, file_offset += len) {
+	for (i = 0; i < nblocks && pos < end_pos; i++, pos += len) {
 		if (iop && !test_bit(i, iop->uptodate))
 			continue;
-		error = wpc->ops->map_blocks(wpc, inode, file_offset);
+		error = wpc->ops->map_blocks(wpc, inode, pos);
 		if (error)
 			break;
 		if (WARN_ON_ONCE(wpc->iomap.type == IOMAP_INLINE))
 			continue;
 		if (wpc->iomap.type == IOMAP_HOLE)
 			continue;
-		iomap_add_to_ioend(inode, file_offset, page, iop, wpc, wbc,
+		iomap_add_to_ioend(inode, pos, folio, iop, wpc, wbc,
 				&submit_list);
 		count++;
 	}
 
 	WARN_ON_ONCE(!wpc->ioend && !list_empty(&submit_list));
-	WARN_ON_ONCE(!PageLocked(page));
-	WARN_ON_ONCE(PageWriteback(page));
-	WARN_ON_ONCE(PageDirty(page));
+	WARN_ON_ONCE(!folio_test_locked(folio));
+	WARN_ON_ONCE(folio_test_writeback(folio));
+	WARN_ON_ONCE(folio_test_dirty(folio));
 
 	/*
 	 * We cannot cancel the ioend directly here on error. We may have
@@ -1358,16 +1349,16 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc,
 	 * now.
 	 */
 	if (wpc->ops->discard_page)
-		wpc->ops->discard_page(page, file_offset);
+		wpc->ops->discard_page(&folio->page, pos);
 	if (!count) {
-		ClearPageUptodate(page);
-		unlock_page(page);
+		folio_clear_uptodate(folio);
+		folio_unlock(folio);
 		goto done;
 	}
 	}
 
-	set_page_writeback(page);
-	unlock_page(page);
+	folio_start_writeback(folio);
+	folio_unlock(folio);
 
 	/*
 	 * Preserve the original error if there was one, otherwise catch
@@ -1388,9 +1379,9 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc,
 	 * with a partial page truncate on a sub-page block sized filesystem.
 	 */
 	if (!count)
-		end_page_writeback(page);
+		folio_end_writeback(folio);
 done:
-	mapping_set_error(page->mapping, error);
+	mapping_set_error(folio->mapping, error);
 	return error;
 }
 
@@ -1404,16 +1395,15 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc,
 static int
 iomap_do_writepage(struct page *page, struct writeback_control *wbc, void *data)
 {
+	struct folio *folio = page_folio(page);
 	struct iomap_writepage_ctx *wpc = data;
-	struct inode *inode = page->mapping->host;
-	pgoff_t end_index;
-	u64 end_offset;
-	loff_t offset;
+	struct inode *inode = folio->mapping->host;
+	loff_t end_pos, isize;
 
-	trace_iomap_writepage(inode, page_offset(page), PAGE_SIZE);
+	trace_iomap_writepage(inode, folio_pos(folio), folio_size(folio));
 
 	/*
-	 * Refuse to write the page out if we are called from reclaim context.
+	 * Refuse to write the folio out if we are called from reclaim context.
 	 *
 	 * This avoids stack overflows when called from deeply used stacks in
 	 * random callers for direct reclaim or memcg reclaim. We explicitly
@@ -1427,10 +1417,10 @@ iomap_do_writepage(struct page *page, struct writeback_control *wbc, void *data)
 		goto redirty;
 
 	/*
-	 * Is this page beyond the end of the file?
+	 * Is this folio beyond the end of the file?
 	 *
-	 * The page index is less than the end_index, adjust the end_offset
-	 * to the highest offset that this page should represent.
+	 * The folio index is less than the end_index, adjust the end_pos
+	 * to the highest offset that this folio should represent.
 	 * -----------------------------------------------------
 	 * |			file mapping		       |       |
 	 * -----------------------------------------------------
@@ -1439,11 +1429,9 @@ iomap_do_writepage(struct page *page, struct writeback_control *wbc, void *data)
 	 * | Page ... | Page N-1 |  ignored  |
 	 * |	desired writeback range      |	    see else   |
 	 * ---------------------------------^------------------|
 	 */
-	offset = i_size_read(inode);
-	end_index = offset >> PAGE_SHIFT;
-	if (page->index < end_index)
-		end_offset = (loff_t)(page->index + 1) << PAGE_SHIFT;
-	else {
+	isize = i_size_read(inode);
+	end_pos = folio_pos(folio) + folio_size(folio);
+	if (end_pos - 1 >= isize) {
 		/*
 		 * Check whether the page to write out is beyond or straddles
 		 * i_size or not.
@@ -1455,7 +1443,8 @@ iomap_do_writepage(struct page *page, struct writeback_control *wbc, void *data)
 		 * |		     |	    Straddles	   |
 		 * ---------------------------------^-----------|--------|
 		 */
-		unsigned offset_into_page = offset & (PAGE_SIZE - 1);
+		size_t poff = offset_in_folio(folio, isize);
+		pgoff_t end_index = isize >> PAGE_SHIFT;
 
 		/*
 		 * Skip the page if it is fully outside i_size, e.g. due to a
@@ -1474,8 +1463,8 @@ iomap_do_writepage(struct page *page, struct writeback_control *wbc, void *data)
 		 * if the page to write is totally beyond the i_size or if it's
 		 * offset is just equal to the EOF.
 		 */
-		if (page->index > end_index ||
-		    (page->index == end_index && offset_into_page == 0))
+		if (folio->index > end_index ||
+		    (folio->index == end_index && poff == 0))
 			goto redirty;
 
 		/*
@@ -1486,17 +1475,15 @@ iomap_do_writepage(struct page *page, struct writeback_control *wbc, void *data)
 		 * memory is zeroed when mapped, and writes to that region are
 		 * not written out to the file."
 		 */
-		zero_user_segment(page, offset_into_page, PAGE_SIZE);
-
-		/* Adjust the end_offset to the end of file */
-		end_offset = offset;
+		zero_user_segment(&folio->page, poff, folio_size(folio));
+		end_pos = isize;
 	}
 
-	return iomap_writepage_map(wpc, wbc, inode, page, end_offset);
+	return iomap_writepage_map(wpc, wbc, inode, folio, end_pos);
 
 redirty:
-	redirty_page_for_writepage(wbc, page);
-	unlock_page(page);
+	folio_redirty_for_writepage(wbc, folio);
+	folio_unlock(folio);
 	return 0;
 }

From patchwork Mon Jul 19 18:40:01 2021
From: "Matthew Wilcox (Oracle)"
To: linux-fsdevel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org, linux-block@vger.kernel.org
Subject: [PATCH v15 17/17] iomap: Convert iomap_migrate_page to use folios
Date: Mon, 19 Jul 2021 19:40:01 +0100
Message-Id: <20210719184001.1750630-18-willy@infradead.org>
In-Reply-To: <20210719184001.1750630-1-willy@infradead.org>
References: <20210719184001.1750630-1-willy@infradead.org>

The arguments are still pages for now, but we can use folios internally
and cut out a lot of calls to compound_head().

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Reported-by: kernel test robot
Reviewed-by: Darrick J. Wong
---
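[Archive note, not part of the patch: a minimal sketch of the conversion
pattern this diff applies, shown in isolation in case it is useful.  The
page-based entry point resolves both pages to folios once with page_folio(),
then works only with folio APIs so no further compound_head() calls are
needed.  example_move_private() is a hypothetical name, not a kernel
function.]

#include <linux/pagemap.h>

static void example_move_private(struct page *newpage, struct page *page)
{
	/* Resolve to folios once, up front. */
	struct folio *newfolio = page_folio(newpage);
	struct folio *folio = page_folio(page);

	/* The folio_* helpers never need to look up the head page again. */
	if (folio_test_private(folio))
		folio_attach_private(newfolio, folio_detach_private(folio));
}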
 fs/iomap/buffered-io.c | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 60d3b7af61d1..cf56b19fb101 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -492,19 +492,21 @@ int
 iomap_migrate_page(struct address_space *mapping, struct page *newpage,
 		struct page *page, enum migrate_mode mode)
 {
+	struct folio *folio = page_folio(page);
+	struct folio *newfolio = page_folio(newpage);
 	int ret;
 
-	ret = migrate_page_move_mapping(mapping, newpage, page, 0);
+	ret = folio_migrate_mapping(mapping, newfolio, folio, 0);
 	if (ret != MIGRATEPAGE_SUCCESS)
 		return ret;
 
-	if (page_has_private(page))
-		attach_page_private(newpage, detach_page_private(page));
+	if (folio_test_private(folio))
+		folio_attach_private(newfolio, folio_detach_private(folio));
 
 	if (mode != MIGRATE_SYNC_NO_COPY)
-		migrate_page_copy(newpage, page);
+		folio_migrate_copy(newfolio, folio);
 	else
-		migrate_page_states(newpage, page);
+		folio_migrate_flags(newfolio, folio);
 	return MIGRATEPAGE_SUCCESS;
 }
 EXPORT_SYMBOL_GPL(iomap_migrate_page);
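[Archive note, not part of the series: for context, iomap_migrate_page() is
exported so that iomap-based filesystems can point the migratepage hook of
their address_space_operations at it.  The structure name below is made up
for illustration; only the callbacks relevant here are shown.]

#include <linux/fs.h>
#include <linux/iomap.h>

static const struct address_space_operations example_iomap_aops = {
	.migratepage	= iomap_migrate_page,
	/* .readpage, .writepages and the other callbacks are omitted. */
};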