From patchwork Tue May 3 06:39:59 2022
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12835051
From: "Matthew Wilcox (Oracle)"
To: linux-fsdevel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", Damien Le Moal, Christoph Hellwig,
 "Darrick J. Wong"
Subject: [RFC PATCH 01/10] iomap: Pass struct iomap to iomap_alloc_ioend()
Date: Tue, 3 May 2022 07:39:59 +0100
Message-Id: <20220503064008.3682332-2-willy@infradead.org>
In-Reply-To: <20220503064008.3682332-1-willy@infradead.org>
References: <20220503064008.3682332-1-willy@infradead.org>

iomap_alloc_ioend() does not need the rest of iomap_writepage_ctx; only
the iomap contained in it.
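To illustrate the shape of this change outside the kernel, here is a
hypothetical userspace sketch (the fake_* names are invented for
illustration and are not the iomap API): a helper that only consumes the
mapping takes a pointer to just that mapping instead of the whole
writeback context.

/* Hypothetical userspace model of the narrowing -- not kernel code. */
#include <stdio.h>

struct fake_iomap {
	int type;			/* stands in for iomap->type */
	unsigned long long bdev;	/* stands in for iomap->bdev */
};

struct fake_writepage_ctx {
	struct fake_iomap iomap;
	void *ioend;			/* state the helper never touches */
};

/*
 * Before: the helper took the whole context.
 * After: it takes only the piece it actually reads.
 */
static void alloc_ioend(const struct fake_iomap *iomap)
{
	printf("ioend for bdev %llu, type %d\n", iomap->bdev, iomap->type);
}

int main(void)
{
	struct fake_writepage_ctx wpc = { .iomap = { .type = 1, .bdev = 8 } };

	alloc_ioend(&wpc.iomap);	/* caller passes only the mapping */
	return 0;
}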
Signed-off-by: Matthew Wilcox (Oracle) --- fs/iomap/buffered-io.c | 17 +++++++++-------- 1 file changed, 9 insertions(+), 8 deletions(-) diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index 6c51a75d0be6..03c7c97bc871 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -1222,15 +1222,15 @@ iomap_submit_ioend(struct iomap_writepage_ctx *wpc, struct iomap_ioend *ioend, return 0; } -static struct iomap_ioend * -iomap_alloc_ioend(struct inode *inode, struct iomap_writepage_ctx *wpc, - loff_t offset, sector_t sector, struct writeback_control *wbc) +static struct iomap_ioend *iomap_alloc_ioend(struct inode *inode, + struct iomap *iomap, loff_t offset, sector_t sector, + struct writeback_control *wbc) { struct iomap_ioend *ioend; struct bio *bio; bio = bio_alloc_bioset(GFP_NOFS, BIO_MAX_VECS, &iomap_ioend_bioset); - bio_set_dev(bio, wpc->iomap.bdev); + bio_set_dev(bio, iomap->bdev); bio->bi_iter.bi_sector = sector; bio->bi_opf = REQ_OP_WRITE | wbc_to_write_flags(wbc); bio->bi_write_hint = inode->i_write_hint; @@ -1238,8 +1238,8 @@ iomap_alloc_ioend(struct inode *inode, struct iomap_writepage_ctx *wpc, ioend = container_of(bio, struct iomap_ioend, io_inline_bio); INIT_LIST_HEAD(&ioend->io_list); - ioend->io_type = wpc->iomap.type; - ioend->io_flags = wpc->iomap.flags; + ioend->io_type = iomap->type; + ioend->io_flags = iomap->flags; ioend->io_inode = inode; ioend->io_size = 0; ioend->io_folios = 0; @@ -1305,14 +1305,15 @@ iomap_add_to_ioend(struct inode *inode, loff_t pos, struct folio *folio, struct iomap_page *iop, struct iomap_writepage_ctx *wpc, struct writeback_control *wbc, struct list_head *iolist) { - sector_t sector = iomap_sector(&wpc->iomap, pos); + struct iomap *iomap = &wpc->iomap; + sector_t sector = iomap_sector(iomap, pos); unsigned len = i_blocksize(inode); size_t poff = offset_in_folio(folio, pos); if (!wpc->ioend || !iomap_can_add_to_ioend(wpc, pos, sector)) { if (wpc->ioend) list_add(&wpc->ioend->io_list, iolist); - wpc->ioend = iomap_alloc_ioend(inode, wpc, pos, sector, wbc); + wpc->ioend = iomap_alloc_ioend(inode, iomap, pos, sector, wbc); } if (!bio_add_folio(wpc->ioend->io_bio, folio, len, poff)) { From patchwork Tue May 3 06:40:00 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12835060 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9CB0FC433EF for ; Tue, 3 May 2022 06:41:07 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230204AbiECGof (ORCPT ); Tue, 3 May 2022 02:44:35 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36680 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230229AbiECGoD (ORCPT ); Tue, 3 May 2022 02:44:03 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id BB227107 for ; Mon, 2 May 2022 23:40:31 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=Di4jPnOXCZQJlZKyMXkv6+DkGMbrUEgKEtDIWyIlRv8=; b=ruzsaEx+JFdWQFikwZoqTgX7qL 
EnLeF+iY+EVACejAM+U6ZmuUMCQTrojr99olY000rBSe1cHFQN5UL9K20/P+rgKBKruuOPKbGJ2LS gygUL3efkBysmjAQkkvUEu6uENoTfuudwSHfTRn3li/o5P904yvaneSEYjg1qqftjNGo32frMgYoZ K0lGzpP25LaLn5qHDtZ4wIKxE4S5busLLlanV3i+4sp/ELMcMzV7n9De0GZKwrZ3s41LdAvFqYQE/ Z5kTAc7wxC+Ylde9ZJOegTxkqQ0HshRCsjYschyvvggeqEeuWTkeNVt0UU+1V/TJ9cF0gQq+FH7n1 K4H06+GQ==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nlmCh-00FRxG-Ps; Tue, 03 May 2022 06:40:11 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , Damien Le Moal , Christoph Hellwig , "Darrick J . Wong" Subject: [RFC PATCH 02/10] iomap: Remove iomap_writepage_ctx from iomap_can_add_to_ioend() Date: Tue, 3 May 2022 07:40:00 +0100 Message-Id: <20220503064008.3682332-3-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220503064008.3682332-1-willy@infradead.org> References: <20220503064008.3682332-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org In preparation for using this function without an iomap_writepage_ctx, pass in the iomap and ioend. Also simplify iomap_add_to_ioend() by using the iomap & ioend directly. Signed-off-by: Matthew Wilcox (Oracle) --- fs/iomap/buffered-io.c | 35 ++++++++++++++++++----------------- 1 file changed, 18 insertions(+), 17 deletions(-) diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index 03c7c97bc871..c91259530ac1 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -1273,25 +1273,24 @@ iomap_chain_bio(struct bio *prev) return new; } -static bool -iomap_can_add_to_ioend(struct iomap_writepage_ctx *wpc, loff_t offset, - sector_t sector) +static bool iomap_can_add_to_ioend(struct iomap *iomap, + struct iomap_ioend *ioend, loff_t offset, sector_t sector) { - if ((wpc->iomap.flags & IOMAP_F_SHARED) != - (wpc->ioend->io_flags & IOMAP_F_SHARED)) + if ((iomap->flags & IOMAP_F_SHARED) != + (ioend->io_flags & IOMAP_F_SHARED)) return false; - if (wpc->iomap.type != wpc->ioend->io_type) + if (iomap->type != ioend->io_type) return false; - if (offset != wpc->ioend->io_offset + wpc->ioend->io_size) + if (offset != ioend->io_offset + ioend->io_size) return false; - if (sector != bio_end_sector(wpc->ioend->io_bio)) + if (sector != bio_end_sector(ioend->io_bio)) return false; /* * Limit ioend bio chain lengths to minimise IO completion latency. This * also prevents long tight loops ending page writeback on all the * folios in the ioend. 
*/ - if (wpc->ioend->io_folios >= IOEND_BATCH_SIZE) + if (ioend->io_folios >= IOEND_BATCH_SIZE) return false; return true; } @@ -1306,24 +1305,26 @@ iomap_add_to_ioend(struct inode *inode, loff_t pos, struct folio *folio, struct writeback_control *wbc, struct list_head *iolist) { struct iomap *iomap = &wpc->iomap; + struct iomap_ioend *ioend = wpc->ioend; sector_t sector = iomap_sector(iomap, pos); unsigned len = i_blocksize(inode); size_t poff = offset_in_folio(folio, pos); - if (!wpc->ioend || !iomap_can_add_to_ioend(wpc, pos, sector)) { - if (wpc->ioend) - list_add(&wpc->ioend->io_list, iolist); - wpc->ioend = iomap_alloc_ioend(inode, iomap, pos, sector, wbc); + if (!ioend || !iomap_can_add_to_ioend(iomap, ioend, pos, sector)) { + if (ioend) + list_add(&ioend->io_list, iolist); + ioend = iomap_alloc_ioend(inode, iomap, pos, sector, wbc); + wpc->ioend = ioend; } - if (!bio_add_folio(wpc->ioend->io_bio, folio, len, poff)) { - wpc->ioend->io_bio = iomap_chain_bio(wpc->ioend->io_bio); - bio_add_folio(wpc->ioend->io_bio, folio, len, poff); + if (!bio_add_folio(ioend->io_bio, folio, len, poff)) { + ioend->io_bio = iomap_chain_bio(ioend->io_bio); + bio_add_folio(ioend->io_bio, folio, len, poff); } if (iop) atomic_add(len, &iop->write_bytes_pending); - wpc->ioend->io_size += len; + ioend->io_size += len; wbc_account_cgroup_owner(wbc, &folio->page, len); } From patchwork Tue May 3 06:40:01 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12835062 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id E470BC433F5 for ; Tue, 3 May 2022 06:41:08 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230130AbiECGoi (ORCPT ); Tue, 3 May 2022 02:44:38 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36378 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230223AbiECGoA (ORCPT ); Tue, 3 May 2022 02:44:00 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C3ABA5F42 for ; Mon, 2 May 2022 23:40:28 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=W95F6vgOTyekoy37n4Q60E+qAvQTF3eBfM3YF+VAdao=; b=j1ed4yRMlO+WfFqLqSCx4FYtqW df+6rBcUCX0M+2HDlHi38qd+mRwGz2e5BD7T9RMsIWhXw5DarAVQM/3QIA7sIH4xpLKGi6wM6T2lw gt14xk9srvGcWa5da8zH7oFxXGNYctxIuOX75BMYRYPDjZroTu0m9/EgAShwU6GjhnDzX8OGCAUki 9H2bYoVeVeLysJY2MCMaiIniYI/QGlc7Y1mIAPtqd05tDeKUsHonSwePX6TkUrpipiTXjB/re/PwJ OLK4H23+6aNygZiafw00I2b41tMdt6/Px2YHVPMbA2DNuBBotr2yalT8iDU5JRM1mnAhV1jo8qUYk I6cGCszA==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nlmCh-00FRxI-S7; Tue, 03 May 2022 06:40:11 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , Damien Le Moal , Christoph Hellwig , "Darrick J . 
Wong" Subject: [RFC PATCH 03/10] iomap: Do not pass iomap_writepage_ctx to iomap_add_to_ioend() Date: Tue, 3 May 2022 07:40:01 +0100 Message-Id: <20220503064008.3682332-4-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220503064008.3682332-1-willy@infradead.org> References: <20220503064008.3682332-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org In preparation for calling iomap_add_to_ioend() without a writepage_ctx available, pass in the iomap and the (current) ioend, and return the current ioend. Signed-off-by: Matthew Wilcox (Oracle) --- fs/iomap/buffered-io.c | 14 ++++++-------- 1 file changed, 6 insertions(+), 8 deletions(-) diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index c91259530ac1..1bf361446267 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -1299,13 +1299,11 @@ static bool iomap_can_add_to_ioend(struct iomap *iomap, * Test to see if we have an existing ioend structure that we could append to * first; otherwise finish off the current ioend and start another. */ -static void -iomap_add_to_ioend(struct inode *inode, loff_t pos, struct folio *folio, - struct iomap_page *iop, struct iomap_writepage_ctx *wpc, +static struct iomap_ioend *iomap_add_to_ioend(struct inode *inode, + loff_t pos, struct folio *folio, struct iomap_page *iop, + struct iomap *iomap, struct iomap_ioend *ioend, struct writeback_control *wbc, struct list_head *iolist) { - struct iomap *iomap = &wpc->iomap; - struct iomap_ioend *ioend = wpc->ioend; sector_t sector = iomap_sector(iomap, pos); unsigned len = i_blocksize(inode); size_t poff = offset_in_folio(folio, pos); @@ -1314,7 +1312,6 @@ iomap_add_to_ioend(struct inode *inode, loff_t pos, struct folio *folio, if (ioend) list_add(&ioend->io_list, iolist); ioend = iomap_alloc_ioend(inode, iomap, pos, sector, wbc); - wpc->ioend = ioend; } if (!bio_add_folio(ioend->io_bio, folio, len, poff)) { @@ -1326,6 +1323,7 @@ iomap_add_to_ioend(struct inode *inode, loff_t pos, struct folio *folio, atomic_add(len, &iop->write_bytes_pending); ioend->io_size += len; wbc_account_cgroup_owner(wbc, &folio->page, len); + return ioend; } /* @@ -1375,8 +1373,8 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc, continue; if (wpc->iomap.type == IOMAP_HOLE) continue; - iomap_add_to_ioend(inode, pos, folio, iop, wpc, wbc, - &submit_list); + wpc->ioend = iomap_add_to_ioend(inode, pos, folio, iop, + &wpc->iomap, wpc->ioend, wbc, &submit_list); count++; } if (count) From patchwork Tue May 3 06:40:02 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12835059 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5F956C433EF for ; Tue, 3 May 2022 06:41:05 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230133AbiECGoe (ORCPT ); Tue, 3 May 2022 02:44:34 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35998 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230212AbiECGn4 (ORCPT ); Tue, 3 May 2022 02:43:56 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B7F8962E1 for ; Mon, 2 May 2022 23:40:25 -0700 (PDT) 
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=X9KRB5HztTBHbcbradeeuPmxnugGH4lxxu89NI0o210=; b=rGiY8fhSPHK5Alk9rcIIwam2G7 txjyqMhc3wyq7MILHu7L9jWBp3KuHb7BPMloqXzOywIUmn3AaGP8vDT+fZzB2P8OyLINCNBK6ho9D KrWcmONj//jPU0rFCZEvaWXp2KJU2JXCCTWuvg2+Tu5obugzr0zLdvIWR0qIF5Sznw93lBzPpjssN BRIMiPn1+/HhvwiiBu2wL7CsjW4Qs1dsYAz12cABHkxQaYCtWTIxy6SD4U/ap9RydiOS5xBZk+xWE gk+WHLqwjkLMOCD8nP/m1gonuEDUChndbuFGF3merodDbkgKtWrr37IqHO10oQwFUPIg4aSNE/SE5 pUysfEHQ==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nlmCh-00FRxK-UY; Tue, 03 May 2022 06:40:12 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , Damien Le Moal , Christoph Hellwig , "Darrick J . Wong" Subject: [RFC PATCH 04/10] iomap: Accept a NULL iomap_writepage_ctx in iomap_submit_ioend() Date: Tue, 3 May 2022 07:40:02 +0100 Message-Id: <20220503064008.3682332-5-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220503064008.3682332-1-willy@infradead.org> References: <20220503064008.3682332-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org Prepare for I/O without an iomap_writepage_ctx() by accepting a NULL wpc. Signed-off-by: Matthew Wilcox (Oracle) --- fs/iomap/buffered-io.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index 1bf361446267..85bcdb0dc66c 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -1204,7 +1204,7 @@ iomap_submit_ioend(struct iomap_writepage_ctx *wpc, struct iomap_ioend *ioend, ioend->io_bio->bi_private = ioend; ioend->io_bio->bi_end_io = iomap_writepage_end_bio; - if (wpc->ops->prepare_ioend) + if (wpc && wpc->ops->prepare_ioend) error = wpc->ops->prepare_ioend(ioend, error); if (error) { /* From patchwork Tue May 3 06:40:03 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12835058 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 22DD2C433F5 for ; Tue, 3 May 2022 06:41:04 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230158AbiECGoc (ORCPT ); Tue, 3 May 2022 02:44:32 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35736 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230204AbiECGnz (ORCPT ); Tue, 3 May 2022 02:43:55 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C5CEB5F42 for ; Mon, 2 May 2022 23:40:22 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=ZzrfwhFvZ4lKbA1Gqsb9Nd8jE6YDqzct+3TYHW354nk=; b=ta/NJP8Ql0eG6wgTZCPPRF6LlU IldbZQaj28YB9otiqpgmMy1Unbyax0nMaqWyk+lwq8uCJd1843AQL4YIWv93zHIKaVS9CA2gIJxZu 
nZMQwydk3l7wxqWUfZBgMYZd/dfXXtMvDy4YhGAOeIw2tFfKy8296/+EDgBDF+vsKp+NFC9MxAMj+ gLO26JOS9q/4f8ktSB+xdB7cHU9dtmZP/3rQUFYR8cGATZtayKJkimRDEUjIkvXCKoIOveV2qeaAN Z6K6IEVih6tfhLZ/wVgHhejzwrB8K9zhiuEoGEV6nxU00tLtG6SK6LZPX++wzKtE8ck51RYrJvtk2 Ot8bCo0w==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nlmCi-00FRxM-1y; Tue, 03 May 2022 06:40:12 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , Damien Le Moal , Christoph Hellwig , "Darrick J . Wong" Subject: [RFC PATCH 05/10] iomap: Allow a NULL writeback_control argument to iomap_alloc_ioend() Date: Tue, 3 May 2022 07:40:03 +0100 Message-Id: <20220503064008.3682332-6-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220503064008.3682332-1-willy@infradead.org> References: <20220503064008.3682332-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org When we're doing writethrough, we don't have a writeback_control to pass in. Signed-off-by: Matthew Wilcox (Oracle) --- fs/iomap/buffered-io.c | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index 85bcdb0dc66c..024e16fb95a8 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -1232,9 +1232,13 @@ static struct iomap_ioend *iomap_alloc_ioend(struct inode *inode, bio = bio_alloc_bioset(GFP_NOFS, BIO_MAX_VECS, &iomap_ioend_bioset); bio_set_dev(bio, iomap->bdev); bio->bi_iter.bi_sector = sector; - bio->bi_opf = REQ_OP_WRITE | wbc_to_write_flags(wbc); + bio->bi_opf = REQ_OP_WRITE; bio->bi_write_hint = inode->i_write_hint; - wbc_init_bio(wbc, bio); + + if (wbc) { + bio->bi_opf |= wbc_to_write_flags(wbc); + wbc_init_bio(wbc, bio); + } ioend = container_of(bio, struct iomap_ioend, io_inline_bio); INIT_LIST_HEAD(&ioend->io_list); From patchwork Tue May 3 06:40:04 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12835057 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C0B4EC433EF for ; Tue, 3 May 2022 06:41:02 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230086AbiECGob (ORCPT ); Tue, 3 May 2022 02:44:31 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35564 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230164AbiECGnv (ORCPT ); Tue, 3 May 2022 02:43:51 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C12DB25F4 for ; Mon, 2 May 2022 23:40:19 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=0dCe1WB6ORwMe0scEVZDq3u+o381aHETVcM7HD9aQ3Q=; b=tTXkD9zZL2vx56180ECQHTreHf htidH5xq0h/gl8qMDk7uqOJzkVlHvWZ569GcY+X9AxuALFWbQQk97LT1Z7lZRaW4aTtf1fyKg4rgq m7n98nEdg/2Zv3iHUs63os1CCiD6GcvRlh+mvC+wB/IIJLyZGFywcqaOAEZ4nN0trnpz00eYT1m/v 0OCmhC47K5A2EWwAepIerQfcWYVIoUGiJLxU3ZtTgoFbqQQQ53eZ34ewHJKti2B3EwR8ag/jH4pDV 
/Qb+uj9f+cItmomKls5IM50YQgP5LxPPd7UxRzAL06b2lMV+nb2a2/pUFVnhNOwL2PmjzVmTU1Dky z4ZD/orw==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nlmCi-00FRxO-4U; Tue, 03 May 2022 06:40:12 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , Damien Le Moal , Christoph Hellwig , "Darrick J . Wong" Subject: [RFC PATCH 06/10] iomap: Pass a length to iomap_add_to_ioend() Date: Tue, 3 May 2022 07:40:04 +0100 Message-Id: <20220503064008.3682332-7-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220503064008.3682332-1-willy@infradead.org> References: <20220503064008.3682332-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org Allow the caller to specify how much of the page to add to the ioend instead of assuming a single sector. Somebody should probably enhance iomap_writepage_map() to make one call per extent instead of one per block. Signed-off-by: Matthew Wilcox (Oracle) --- fs/iomap/buffered-io.c | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index 024e16fb95a8..5b69cea71f71 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -1304,12 +1304,12 @@ static bool iomap_can_add_to_ioend(struct iomap *iomap, * first; otherwise finish off the current ioend and start another. */ static struct iomap_ioend *iomap_add_to_ioend(struct inode *inode, - loff_t pos, struct folio *folio, struct iomap_page *iop, - struct iomap *iomap, struct iomap_ioend *ioend, - struct writeback_control *wbc, struct list_head *iolist) + loff_t pos, size_t len, struct folio *folio, + struct iomap_page *iop, struct iomap *iomap, + struct iomap_ioend *ioend, struct writeback_control *wbc, + struct list_head *iolist) { sector_t sector = iomap_sector(iomap, pos); - unsigned len = i_blocksize(inode); size_t poff = offset_in_folio(folio, pos); if (!ioend || !iomap_can_add_to_ioend(iomap, ioend, pos, sector)) { @@ -1377,7 +1377,7 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc, continue; if (wpc->iomap.type == IOMAP_HOLE) continue; - wpc->ioend = iomap_add_to_ioend(inode, pos, folio, iop, + wpc->ioend = iomap_add_to_ioend(inode, pos, len, folio, iop, &wpc->iomap, wpc->ioend, wbc, &submit_list); count++; } From patchwork Tue May 3 06:40:05 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12835053 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7B028C433EF for ; Tue, 3 May 2022 06:40:56 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229950AbiECGoZ (ORCPT ); Tue, 3 May 2022 02:44:25 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35254 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229503AbiECGnr (ORCPT ); Tue, 3 May 2022 02:43:47 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id DBE2F200 for ; Mon, 2 May 2022 23:40:13 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: 
References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=FPAgjc8cietwEYq/FZXzPxSlo/iLr66s0PJvUP9s09A=; b=Oh1zJK6EZuibg8bHSuHew0Z8Gn 8CoaXsYrLcT+oe2l155N8Ln1AwH38MAD2NSYyGeZ0tilT87oY6NkT9PyMhrfF6ts24K3Dy40j6w0T Xy6rFAd5CcyzcmZ5H4qAQjCjDDMkI2JIHzSO6L9Q9fIewQ8doYzkpx6rIxMYiPh27612p0EqxvPt0 rUGEpjmQap6kXdO/uAWRl754W9z8SOH7AGwgSZAb0cFeXO8CyoXBveZjNejXTwKv48JthOXlOsdDz 0DlbZD0+60VhvTvmdLGOPQIvFMqt/rGV7u1IKvEnWkblyRfsaZ6r8WzCupeeWqlxROwnQG2Zo2rqq 4zeo02ow==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nlmCi-00FRxQ-7H; Tue, 03 May 2022 06:40:12 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , Damien Le Moal , Christoph Hellwig , "Darrick J . Wong" Subject: [RFC PATCH 07/10] iomap: Reorder functions Date: Tue, 3 May 2022 07:40:05 +0100 Message-Id: <20220503064008.3682332-8-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220503064008.3682332-1-willy@infradead.org> References: <20220503064008.3682332-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org Move the ioend creation functions earlier in the file so write_end can create ioends without requiring forward declarations. Signed-off-by: Matthew Wilcox (Oracle) --- fs/iomap/buffered-io.c | 215 ++++++++++++++++++++--------------------- 1 file changed, 107 insertions(+), 108 deletions(-) diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index 5b69cea71f71..4aa2209fb003 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -558,6 +558,113 @@ static int iomap_read_folio_sync(loff_t block_start, struct folio *folio, return submit_bio_wait(&bio); } +static bool iomap_can_add_to_ioend(struct iomap *iomap, + struct iomap_ioend *ioend, loff_t offset, sector_t sector) +{ + if ((iomap->flags & IOMAP_F_SHARED) != + (ioend->io_flags & IOMAP_F_SHARED)) + return false; + if (iomap->type != ioend->io_type) + return false; + if (offset != ioend->io_offset + ioend->io_size) + return false; + if (sector != bio_end_sector(ioend->io_bio)) + return false; + /* + * Limit ioend bio chain lengths to minimise IO completion latency. This + * also prevents long tight loops ending page writeback on all the + * folios in the ioend. + */ + if (ioend->io_folios >= IOEND_BATCH_SIZE) + return false; + return true; +} + +static struct iomap_ioend *iomap_alloc_ioend(struct inode *inode, + struct iomap *iomap, loff_t offset, sector_t sector, + struct writeback_control *wbc) +{ + struct iomap_ioend *ioend; + struct bio *bio; + + bio = bio_alloc_bioset(GFP_NOFS, BIO_MAX_VECS, &iomap_ioend_bioset); + bio_set_dev(bio, iomap->bdev); + bio->bi_iter.bi_sector = sector; + bio->bi_opf = REQ_OP_WRITE; + bio->bi_write_hint = inode->i_write_hint; + + if (wbc) { + bio->bi_opf |= wbc_to_write_flags(wbc); + wbc_init_bio(wbc, bio); + } + + ioend = container_of(bio, struct iomap_ioend, io_inline_bio); + INIT_LIST_HEAD(&ioend->io_list); + ioend->io_type = iomap->type; + ioend->io_flags = iomap->flags; + ioend->io_inode = inode; + ioend->io_size = 0; + ioend->io_folios = 0; + ioend->io_offset = offset; + ioend->io_bio = bio; + ioend->io_sector = sector; + return ioend; +} + +/* + * Allocate a new bio, and chain the old bio to the new one. + * + * Note that we have to perform the chaining in this unintuitive order + * so that the bi_private linkage is set up in the right direction for the + * traversal in iomap_finish_ioend(). 
+ */ +static struct bio *iomap_chain_bio(struct bio *prev) +{ + struct bio *new; + + new = bio_alloc(GFP_NOFS, BIO_MAX_VECS); + bio_copy_dev(new, prev);/* also copies over blkcg information */ + new->bi_iter.bi_sector = bio_end_sector(prev); + new->bi_opf = prev->bi_opf; + new->bi_write_hint = prev->bi_write_hint; + + bio_chain(prev, new); + bio_get(prev); /* for iomap_finish_ioend */ + submit_bio(prev); + return new; +} + +/* + * Test to see if we have an existing ioend structure that we could append to + * first; otherwise finish off the current ioend and start another. + */ +static struct iomap_ioend *iomap_add_to_ioend(struct inode *inode, + loff_t pos, size_t len, struct folio *folio, + struct iomap_page *iop, struct iomap *iomap, + struct iomap_ioend *ioend, struct writeback_control *wbc, + struct list_head *iolist) +{ + sector_t sector = iomap_sector(iomap, pos); + size_t poff = offset_in_folio(folio, pos); + + if (!ioend || !iomap_can_add_to_ioend(iomap, ioend, pos, sector)) { + if (ioend) + list_add(&ioend->io_list, iolist); + ioend = iomap_alloc_ioend(inode, iomap, pos, sector, wbc); + } + + if (!bio_add_folio(ioend->io_bio, folio, len, poff)) { + ioend->io_bio = iomap_chain_bio(ioend->io_bio); + bio_add_folio(ioend->io_bio, folio, len, poff); + } + + if (iop) + atomic_add(len, &iop->write_bytes_pending); + ioend->io_size += len; + wbc_account_cgroup_owner(wbc, &folio->page, len); + return ioend; +} + static int __iomap_write_begin(const struct iomap_iter *iter, loff_t pos, size_t len, struct folio *folio) { @@ -1222,114 +1329,6 @@ iomap_submit_ioend(struct iomap_writepage_ctx *wpc, struct iomap_ioend *ioend, return 0; } -static struct iomap_ioend *iomap_alloc_ioend(struct inode *inode, - struct iomap *iomap, loff_t offset, sector_t sector, - struct writeback_control *wbc) -{ - struct iomap_ioend *ioend; - struct bio *bio; - - bio = bio_alloc_bioset(GFP_NOFS, BIO_MAX_VECS, &iomap_ioend_bioset); - bio_set_dev(bio, iomap->bdev); - bio->bi_iter.bi_sector = sector; - bio->bi_opf = REQ_OP_WRITE; - bio->bi_write_hint = inode->i_write_hint; - - if (wbc) { - bio->bi_opf |= wbc_to_write_flags(wbc); - wbc_init_bio(wbc, bio); - } - - ioend = container_of(bio, struct iomap_ioend, io_inline_bio); - INIT_LIST_HEAD(&ioend->io_list); - ioend->io_type = iomap->type; - ioend->io_flags = iomap->flags; - ioend->io_inode = inode; - ioend->io_size = 0; - ioend->io_folios = 0; - ioend->io_offset = offset; - ioend->io_bio = bio; - ioend->io_sector = sector; - return ioend; -} - -/* - * Allocate a new bio, and chain the old bio to the new one. - * - * Note that we have to perform the chaining in this unintuitive order - * so that the bi_private linkage is set up in the right direction for the - * traversal in iomap_finish_ioend(). 
- */ -static struct bio * -iomap_chain_bio(struct bio *prev) -{ - struct bio *new; - - new = bio_alloc(GFP_NOFS, BIO_MAX_VECS); - bio_copy_dev(new, prev);/* also copies over blkcg information */ - new->bi_iter.bi_sector = bio_end_sector(prev); - new->bi_opf = prev->bi_opf; - new->bi_write_hint = prev->bi_write_hint; - - bio_chain(prev, new); - bio_get(prev); /* for iomap_finish_ioend */ - submit_bio(prev); - return new; -} - -static bool iomap_can_add_to_ioend(struct iomap *iomap, - struct iomap_ioend *ioend, loff_t offset, sector_t sector) -{ - if ((iomap->flags & IOMAP_F_SHARED) != - (ioend->io_flags & IOMAP_F_SHARED)) - return false; - if (iomap->type != ioend->io_type) - return false; - if (offset != ioend->io_offset + ioend->io_size) - return false; - if (sector != bio_end_sector(ioend->io_bio)) - return false; - /* - * Limit ioend bio chain lengths to minimise IO completion latency. This - * also prevents long tight loops ending page writeback on all the - * folios in the ioend. - */ - if (ioend->io_folios >= IOEND_BATCH_SIZE) - return false; - return true; -} - -/* - * Test to see if we have an existing ioend structure that we could append to - * first; otherwise finish off the current ioend and start another. - */ -static struct iomap_ioend *iomap_add_to_ioend(struct inode *inode, - loff_t pos, size_t len, struct folio *folio, - struct iomap_page *iop, struct iomap *iomap, - struct iomap_ioend *ioend, struct writeback_control *wbc, - struct list_head *iolist) -{ - sector_t sector = iomap_sector(iomap, pos); - size_t poff = offset_in_folio(folio, pos); - - if (!ioend || !iomap_can_add_to_ioend(iomap, ioend, pos, sector)) { - if (ioend) - list_add(&ioend->io_list, iolist); - ioend = iomap_alloc_ioend(inode, iomap, pos, sector, wbc); - } - - if (!bio_add_folio(ioend->io_bio, folio, len, poff)) { - ioend->io_bio = iomap_chain_bio(ioend->io_bio); - bio_add_folio(ioend->io_bio, folio, len, poff); - } - - if (iop) - atomic_add(len, &iop->write_bytes_pending); - ioend->io_size += len; - wbc_account_cgroup_owner(wbc, &folio->page, len); - return ioend; -} - /* * We implement an immediate ioend submission policy here to avoid needing to * chain multiple ioends and hence nest mempool allocations which can violate From patchwork Tue May 3 06:40:06 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12835055 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id E3351C433F5 for ; Tue, 3 May 2022 06:40:59 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230216AbiECGo1 (ORCPT ); Tue, 3 May 2022 02:44:27 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35256 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229770AbiECGnr (ORCPT ); Tue, 3 May 2022 02:43:47 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0720A303 for ; Mon, 2 May 2022 23:40:14 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; 
bh=r+CYFeOXePy+J6U6EySURLhgv/r0S7wzDzjrfZfjEtw=; b=ksk8AzssEVTOarrRqN/Jy2Kc7Q 4eMqcIQQcyXmuOrrzxM6izXO509rlAd2cIE8SY2Amn9DLO+Uuu5AYuD++L6L0sNO3V0nV7ci6nVar m1BNqRvnnwi5o89pJoKUSZosvJmlPQKZnF51WkHsT8BCKxnQsXfN5tMxQ73c4G9e+F3zauyJyIzyg ZdF+KMRlHuac1ZU1WyyS2tGPPRMJQGWKixS/ELqhg5NP+p5QA9cIkht0suaZgVsU4nvMg3zGjDsUo I/2V6R75DlfuuBdRPuEJMBcn6anK2Ka4JYUJBtX+A9CeyZ2Qi+dEqQiK44gbUekWF9jK8HrS82And /wA4NilA==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nlmCi-00FRxc-Aw; Tue, 03 May 2022 06:40:12 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , Damien Le Moal , Christoph Hellwig , "Darrick J . Wong" Subject: [RFC PATCH 08/10] iomap: Reorder functions Date: Tue, 3 May 2022 07:40:06 +0100 Message-Id: <20220503064008.3682332-9-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220503064008.3682332-1-willy@infradead.org> References: <20220503064008.3682332-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org Move the ioend submission functions earlier in the file so write_iter can submit ioends without requiring forward declarations. Signed-off-by: Matthew Wilcox (Oracle) --- fs/iomap/buffered-io.c | 204 ++++++++++++++++++++--------------------- 1 file changed, 101 insertions(+), 103 deletions(-) diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index 4aa2209fb003..6c540390eec3 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -665,6 +665,107 @@ static struct iomap_ioend *iomap_add_to_ioend(struct inode *inode, return ioend; } +static void iomap_finish_folio_write(struct inode *inode, struct folio *folio, + size_t len, int error) +{ + struct iomap_page *iop = to_iomap_page(folio); + + if (error) { + folio_set_error(folio); + mapping_set_error(inode->i_mapping, error); + } + + WARN_ON_ONCE(i_blocks_per_folio(inode, folio) > 1 && !iop); + WARN_ON_ONCE(iop && atomic_read(&iop->write_bytes_pending) <= 0); + + if (!iop || atomic_sub_and_test(len, &iop->write_bytes_pending)) + folio_end_writeback(folio); +} + +/* + * We're now finished for good with this ioend structure. Update the page + * state, release holds on bios, and finally free up memory. Do not use the + * ioend after this. + */ +static u32 iomap_finish_ioend(struct iomap_ioend *ioend, int error) +{ + struct inode *inode = ioend->io_inode; + struct bio *bio = &ioend->io_inline_bio; + struct bio *last = ioend->io_bio, *next; + u64 start = bio->bi_iter.bi_sector; + loff_t offset = ioend->io_offset; + bool quiet = bio_flagged(bio, BIO_QUIET); + u32 folio_count = 0; + + for (bio = &ioend->io_inline_bio; bio; bio = next) { + struct folio_iter fi; + + /* + * For the last bio, bi_private points to the ioend, so we + * need to explicitly end the iteration here. 
+ */ + if (bio == last) + next = NULL; + else + next = bio->bi_private; + + /* walk all folios in bio, ending page IO on them */ + bio_for_each_folio_all(fi, bio) { + iomap_finish_folio_write(inode, fi.folio, fi.length, + error); + folio_count++; + } + bio_put(bio); + } + /* The ioend has been freed by bio_put() */ + + if (unlikely(error && !quiet)) { + printk_ratelimited(KERN_ERR +"%s: writeback error on inode %lu, offset %lld, sector %llu", + inode->i_sb->s_id, inode->i_ino, offset, start); + } + return folio_count; +} + +static void iomap_writepage_end_bio(struct bio *bio) +{ + struct iomap_ioend *ioend = bio->bi_private; + + iomap_finish_ioend(ioend, blk_status_to_errno(bio->bi_status)); +} + +/* + * Submit the final bio for an ioend. + * + * If @error is non-zero, it means that we have a situation where some part of + * the submission process has failed after we've marked pages for writeback + * and unlocked them. In this situation, we need to fail the bio instead of + * submitting it. This typically only happens on a filesystem shutdown. + */ +static int iomap_submit_ioend(struct iomap_writepage_ctx *wpc, + struct iomap_ioend *ioend, int error) +{ + ioend->io_bio->bi_private = ioend; + ioend->io_bio->bi_end_io = iomap_writepage_end_bio; + + if (wpc && wpc->ops->prepare_ioend) + error = wpc->ops->prepare_ioend(ioend, error); + if (error) { + /* + * If we're failing the IO now, just mark the ioend with an + * error and finish it. This will run IO completion immediately + * as there is only one reference to the ioend at this point in + * time. + */ + ioend->io_bio->bi_status = errno_to_blk_status(error); + bio_endio(ioend->io_bio); + return error; + } + + submit_bio(ioend->io_bio); + return 0; +} + static int __iomap_write_begin(const struct iomap_iter *iter, loff_t pos, size_t len, struct folio *folio) { @@ -1126,69 +1227,6 @@ vm_fault_t iomap_page_mkwrite(struct vm_fault *vmf, const struct iomap_ops *ops) } EXPORT_SYMBOL_GPL(iomap_page_mkwrite); -static void iomap_finish_folio_write(struct inode *inode, struct folio *folio, - size_t len, int error) -{ - struct iomap_page *iop = to_iomap_page(folio); - - if (error) { - folio_set_error(folio); - mapping_set_error(inode->i_mapping, error); - } - - WARN_ON_ONCE(i_blocks_per_folio(inode, folio) > 1 && !iop); - WARN_ON_ONCE(iop && atomic_read(&iop->write_bytes_pending) <= 0); - - if (!iop || atomic_sub_and_test(len, &iop->write_bytes_pending)) - folio_end_writeback(folio); -} - -/* - * We're now finished for good with this ioend structure. Update the page - * state, release holds on bios, and finally free up memory. Do not use the - * ioend after this. - */ -static u32 -iomap_finish_ioend(struct iomap_ioend *ioend, int error) -{ - struct inode *inode = ioend->io_inode; - struct bio *bio = &ioend->io_inline_bio; - struct bio *last = ioend->io_bio, *next; - u64 start = bio->bi_iter.bi_sector; - loff_t offset = ioend->io_offset; - bool quiet = bio_flagged(bio, BIO_QUIET); - u32 folio_count = 0; - - for (bio = &ioend->io_inline_bio; bio; bio = next) { - struct folio_iter fi; - - /* - * For the last bio, bi_private points to the ioend, so we - * need to explicitly end the iteration here. 
- */ - if (bio == last) - next = NULL; - else - next = bio->bi_private; - - /* walk all folios in bio, ending page IO on them */ - bio_for_each_folio_all(fi, bio) { - iomap_finish_folio_write(inode, fi.folio, fi.length, - error); - folio_count++; - } - bio_put(bio); - } - /* The ioend has been freed by bio_put() */ - - if (unlikely(error && !quiet)) { - printk_ratelimited(KERN_ERR -"%s: writeback error on inode %lu, offset %lld, sector %llu", - inode->i_sb->s_id, inode->i_ino, offset, start); - } - return folio_count; -} - /* * Ioend completion routine for merged bios. This can only be called from task * contexts as merged ioends can be of unbound length. Hence we have to break up @@ -1289,46 +1327,6 @@ iomap_sort_ioends(struct list_head *ioend_list) } EXPORT_SYMBOL_GPL(iomap_sort_ioends); -static void iomap_writepage_end_bio(struct bio *bio) -{ - struct iomap_ioend *ioend = bio->bi_private; - - iomap_finish_ioend(ioend, blk_status_to_errno(bio->bi_status)); -} - -/* - * Submit the final bio for an ioend. - * - * If @error is non-zero, it means that we have a situation where some part of - * the submission process has failed after we've marked pages for writeback - * and unlocked them. In this situation, we need to fail the bio instead of - * submitting it. This typically only happens on a filesystem shutdown. - */ -static int -iomap_submit_ioend(struct iomap_writepage_ctx *wpc, struct iomap_ioend *ioend, - int error) -{ - ioend->io_bio->bi_private = ioend; - ioend->io_bio->bi_end_io = iomap_writepage_end_bio; - - if (wpc && wpc->ops->prepare_ioend) - error = wpc->ops->prepare_ioend(ioend, error); - if (error) { - /* - * If we're failing the IO now, just mark the ioend with an - * error and finish it. This will run IO completion immediately - * as there is only one reference to the ioend at this point in - * time. 
- */ - ioend->io_bio->bi_status = errno_to_blk_status(error); - bio_endio(ioend->io_bio); - return error; - } - - submit_bio(ioend->io_bio); - return 0; -} - /* * We implement an immediate ioend submission policy here to avoid needing to * chain multiple ioends and hence nest mempool allocations which can violate From patchwork Tue May 3 06:40:07 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12835052 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id CA4EDC433F5 for ; Tue, 3 May 2022 06:40:55 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229911AbiECGoZ (ORCPT ); Tue, 3 May 2022 02:44:25 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35260 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229868AbiECGnr (ORCPT ); Tue, 3 May 2022 02:43:47 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 29C09306 for ; Mon, 2 May 2022 23:40:14 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=XCJ91jXIpCbq7tbi/tOEae2ltQHQZ4hxcr1LF5joPeU=; b=SFORG1Z6RmnvNAVglTUAQwU0Sw siYSCFV9+Rfi1QoEdPdeflyvd8JzgUoavCsQQ43qzwbprnNb6QjihWjFeRHnbQKsxgEA9DgvjVFYn 7JaXfN8tuiJ6qHL8qDfsqMiZAw8bpKxlP6cinoRHULAI/oKXdQZdMDcvtsjrQFTzM65nzjCg6SQY+ e/Bt/8kfreXPstssN73oUQbFx8X4L4TXpg0FWnu7c45jFwnfC/7T2naipr7lCpHtUjoaCM7EnI0Ig fhHe43HcxAx2gMvD07oEK9WtnxBpC1w8D+0+IxibqCezbwIphrRVP8SMaktsAJmrIC+FbpQgrq02y SZg7PE0A==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1nlmCi-00FRxi-EV; Tue, 03 May 2022 06:40:12 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , Damien Le Moal , Christoph Hellwig , "Darrick J . Wong" Subject: [RFC PATCH 09/10] iomap: Add writethrough for O_SYNC Date: Tue, 3 May 2022 07:40:07 +0100 Message-Id: <20220503064008.3682332-10-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220503064008.3682332-1-willy@infradead.org> References: <20220503064008.3682332-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org For O_SYNC writes, if the filesystem has already allocated blocks for the range, we can avoid marking the page as dirty and skip straight to marking the page as writeback. 
Signed-off-by: Matthew Wilcox (Oracle) --- fs/iomap/buffered-io.c | 74 ++++++++++++++++++++++++++++++++++++------ 1 file changed, 64 insertions(+), 10 deletions(-) diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index 6c540390eec3..5050adbd4bc8 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -531,6 +531,12 @@ iomap_migrate_page(struct address_space *mapping, struct page *newpage, EXPORT_SYMBOL_GPL(iomap_migrate_page); #endif /* CONFIG_MIGRATION */ +struct iomap_write_ctx { + struct iomap_ioend *ioend; + struct list_head iolist; + bool write_through; +}; + static void iomap_write_failed(struct inode *inode, loff_t pos, unsigned len) { @@ -875,8 +881,38 @@ static int iomap_write_begin(const struct iomap_iter *iter, loff_t pos, return status; } -static size_t __iomap_write_end(struct inode *inode, loff_t pos, size_t len, - size_t copied, struct folio *folio) +/* Returns true if we can skip dirtying the page */ +static bool iomap_write_through(struct iomap_write_ctx *iwc, + struct iomap *iomap, struct inode *inode, struct folio *folio, + loff_t pos, size_t len) +{ + unsigned int blksize = i_blocksize(inode); + + if (!iwc || !iwc->write_through) + return false; + if (folio_test_dirty(folio)) + return true; + if (folio_test_writeback(folio)) + return false; + + /* Can't allocate blocks here because we don't have ->prepare_ioend */ + if (iomap->type != IOMAP_MAPPED || iomap->type != IOMAP_UNWRITTEN || + iomap->flags & IOMAP_F_SHARED) + return false; + + len = round_up(pos + len - 1, blksize); + pos = round_down(pos, blksize); + len -= pos; + iwc->ioend = iomap_add_to_ioend(inode, pos, len, folio, + iomap_page_create(inode, folio), iomap, iwc->ioend, + NULL, &iwc->iolist); + folio_start_writeback(folio); + return true; +} + +static size_t __iomap_write_end(struct iomap_write_ctx *iwc, + struct iomap *iomap, struct inode *inode, loff_t pos, + size_t len, size_t copied, struct folio *folio) { struct iomap_page *iop = to_iomap_page(folio); flush_dcache_folio(folio); @@ -895,7 +931,8 @@ static size_t __iomap_write_end(struct inode *inode, loff_t pos, size_t len, if (unlikely(copied < len && !folio_test_uptodate(folio))) return 0; iomap_set_range_uptodate(folio, iop, offset_in_folio(folio, pos), len); - filemap_dirty_folio(inode->i_mapping, folio); + if (!iomap_write_through(iwc, iomap, inode, folio, pos, len)) + filemap_dirty_folio(inode->i_mapping, folio); return copied; } @@ -918,7 +955,8 @@ static size_t iomap_write_end_inline(const struct iomap_iter *iter, } /* Returns the number of bytes copied. May be 0. Cannot be an errno. 
*/ -static size_t iomap_write_end(struct iomap_iter *iter, loff_t pos, size_t len, +static size_t iomap_write_end(struct iomap_write_ctx *iwc, + struct iomap_iter *iter, loff_t pos, size_t len, size_t copied, struct folio *folio) { const struct iomap_page_ops *page_ops = iter->iomap.page_ops; @@ -932,7 +970,8 @@ static size_t iomap_write_end(struct iomap_iter *iter, loff_t pos, size_t len, ret = block_write_end(NULL, iter->inode->i_mapping, pos, len, copied, &folio->page, NULL); } else { - ret = __iomap_write_end(iter->inode, pos, len, copied, folio); + ret = __iomap_write_end(iwc, &iter->iomap, iter->inode, pos, + len, copied, folio); } /* @@ -957,7 +996,8 @@ static size_t iomap_write_end(struct iomap_iter *iter, loff_t pos, size_t len, return ret; } -static loff_t iomap_write_iter(struct iomap_iter *iter, struct iov_iter *i) +static loff_t iomap_write_iter(struct iomap_iter *iter, struct iov_iter *i, + struct iomap_write_ctx *iwc) { loff_t length = iomap_length(iter); loff_t pos = iter->pos; @@ -999,7 +1039,7 @@ static loff_t iomap_write_iter(struct iomap_iter *iter, struct iov_iter *i) copied = copy_page_from_iter_atomic(page, offset, bytes, i); - status = iomap_write_end(iter, pos, bytes, copied, folio); + status = iomap_write_end(iwc, iter, pos, bytes, copied, folio); if (unlikely(copied != status)) iov_iter_revert(i, copied - status); @@ -1036,10 +1076,24 @@ iomap_file_buffered_write(struct kiocb *iocb, struct iov_iter *i, .len = iov_iter_count(i), .flags = IOMAP_WRITE, }; + struct iomap_write_ctx iwc = { + .iolist = LIST_HEAD_INIT(iwc.iolist), + .write_through = iocb->ki_flags & IOCB_SYNC, + }; + struct iomap_ioend *ioend, *next; int ret; while ((ret = iomap_iter(&iter, ops)) > 0) - iter.processed = iomap_write_iter(&iter, i); + iter.processed = iomap_write_iter(&iter, i, &iwc); + + list_for_each_entry_safe(ioend, next, &iwc.iolist, io_list) { + list_del_init(&ioend->io_list); + ret = iomap_submit_ioend(NULL, ioend, ret); + } + + if (iwc.ioend) + ret = iomap_submit_ioend(NULL, iwc.ioend, ret); + if (iter.pos == iocb->ki_pos) return ret; return iter.pos - iocb->ki_pos; @@ -1071,7 +1125,7 @@ static loff_t iomap_unshare_iter(struct iomap_iter *iter) if (unlikely(status)) return status; - status = iomap_write_end(iter, pos, bytes, bytes, folio); + status = iomap_write_end(NULL, iter, pos, bytes, bytes, folio); if (WARN_ON_ONCE(status == 0)) return -EIO; @@ -1133,7 +1187,7 @@ static loff_t iomap_zero_iter(struct iomap_iter *iter, bool *did_zero) folio_zero_range(folio, offset, bytes); folio_mark_accessed(folio); - bytes = iomap_write_end(iter, pos, bytes, bytes, folio); + bytes = iomap_write_end(NULL, iter, pos, bytes, bytes, folio); if (WARN_ON_ONCE(bytes == 0)) return -EIO; From patchwork Tue May 3 06:40:08 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12835054 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 60439C433FE for ; Tue, 3 May 2022 06:40:57 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230179AbiECGo0 (ORCPT ); Tue, 3 May 2022 02:44:26 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35268 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229905AbiECGnr (ORCPT ); Tue, 3 May 2022 02:43:47 -0400 
From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , Damien Le Moal , Christoph Hellwig , "Darrick J . Wong" Subject: [RFC PATCH 10/10] remove write_through bool Date: Tue, 3 May 2022 07:40:08 +0100 Message-Id: <20220503064008.3682332-11-willy@infradead.org> In-Reply-To: <20220503064008.3682332-1-willy@infradead.org> References: <20220503064008.3682332-1-willy@infradead.org> Signed-off-by: Matthew Wilcox (Oracle) --- fs/iomap/buffered-io.c | 9 ++++----- 1 file changed, 4 insertions(+), 5 deletions(-) diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index 5050adbd4bc8..ec0189dc6747 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -534,7 +534,6 @@ EXPORT_SYMBOL_GPL(iomap_migrate_page); struct iomap_write_ctx { struct iomap_ioend *ioend; struct list_head iolist; - bool write_through; }; static void @@ -587,7 +586,7 @@ static bool iomap_can_add_to_ioend(struct iomap *iomap, } static struct iomap_ioend *iomap_alloc_ioend(struct inode *inode, - struct iomap *iomap, loff_t offset, sector_t sector, + const struct iomap *iomap, loff_t offset, sector_t sector, struct writeback_control *wbc) { struct iomap_ioend *ioend; @@ -888,7 +887,7 @@ static bool iomap_write_through(struct iomap_write_ctx *iwc, { unsigned int blksize = i_blocksize(inode); - if (!iwc || !iwc->write_through) + if (!iwc) return false; if (folio_test_dirty(folio)) return true; @@ -1078,13 +1077,13 @@ iomap_file_buffered_write(struct kiocb *iocb, struct iov_iter *i, }; struct iomap_write_ctx iwc = { .iolist = LIST_HEAD_INIT(iwc.iolist), - .write_through = iocb->ki_flags & IOCB_SYNC, }; + struct iomap_write_ctx *iwcp = iocb->ki_flags & IOCB_SYNC ? &iwc : NULL; struct iomap_ioend *ioend, *next; int ret; while ((ret = iomap_iter(&iter, ops)) > 0) - iter.processed = iomap_write_iter(&iter, i, &iwc); + iter.processed = iomap_write_iter(&iter, i, iwcp); list_for_each_entry_safe(ioend, next, &iwc.iolist, io_list) { list_del_init(&ioend->io_list);