From patchwork Wed Jun 28 15:31:22 2023
From: Christoph Hellwig
To: Chris Mason, Josef Bacik, David Sterba
Cc: Matthew Wilcox, linux-btrfs@vger.kernel.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH 01/23] btrfs: pass a flags argument to cow_file_range
Date: Wed, 28 Jun 2023 17:31:22 +0200
Message-Id: <20230628153144.22834-2-hch@lst.de>
In-Reply-To: <20230628153144.22834-1-hch@lst.de>
References: <20230628153144.22834-1-hch@lst.de>

Using an int as a bool for the unlock argument is not a very good way to
describe the behavior, and the next patch will have to add another
behavior modifier.  Switch to passing a flags argument instead, with an
initial CFR_KEEP_LOCKED flag that specifies that the pages should always
be kept locked.  This is the inverse of the old unlock argument, because
it is the exceptional behavior that requires a flag.
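A note for readers skimming the series: the int-to-flags conversion is a
common kernel refactoring pattern.  Below is a minimal, self-contained
userspace sketch of the calling convention before and after — the CFR_*
names mirror this patch, but the toy function and everything around it is
invented for illustration and is not btrfs code:

#include <stdio.h>

#define CFR_KEEP_LOCKED (1 << 0)   /* mirrors the flag added by this patch */
#define CFR_NOINLINE    (1 << 1)   /* added by the next patch in the series */

/* toy stand-in for cow_file_range(); only the flag handling is modeled */
static void toy_cow_file_range(unsigned int flags)
{
        /* the old "int unlock" argument, inverted: unlocking is the default */
        int do_unlock = !(flags & CFR_KEEP_LOCKED);

        printf("unlock=%d skip_inline=%d\n", do_unlock,
               !!(flags & CFR_NOINLINE));
}

int main(void)
{
        toy_cow_file_range(0);                  /* old unlock == 1 */
        toy_cow_file_range(CFR_KEEP_LOCKED);    /* old unlock == 0 */
        return 0;
}

The design point is that the common path passes 0 and only the exceptional
keep-locked behavior needs a flag bit, which is also why the flag is the
inverse of the old argument.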
Signed-off-by: Christoph Hellwig
Reviewed-by: Johannes Thumshirn
---
 fs/btrfs/inode.c | 51 ++++++++++++++++++++++--------------------------
 1 file changed, 23 insertions(+), 28 deletions(-)

diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index dbbb67293e345c..92a78940991fcb 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -124,11 +124,13 @@ static struct kmem_cache *btrfs_inode_cachep;
 
 static int btrfs_setsize(struct inode *inode, struct iattr *attr);
 static int btrfs_truncate(struct btrfs_inode *inode, bool skip_writeback);
+
+#define CFR_KEEP_LOCKED	(1 << 0)
 static noinline int cow_file_range(struct btrfs_inode *inode,
 				   struct page *locked_page,
 				   u64 start, u64 end, int *page_started,
-				   unsigned long *nr_written, int unlock,
-				   u64 *done_offset);
+				   unsigned long *nr_written, u64 *done_offset,
+				   u32 flags);
 static struct extent_map *create_io_em(struct btrfs_inode *inode, u64 start,
 				       u64 len, u64 orig_start, u64 block_start,
 				       u64 block_len, u64 orig_block_len,
@@ -1148,7 +1150,7 @@ static int submit_uncompressed_range(struct btrfs_inode *inode,
 	 * can directly submit them without interruption.
 	 */
 	ret = cow_file_range(inode, locked_page, start, end, &page_started,
-			     &nr_written, 0, NULL);
+			     &nr_written, NULL, CFR_KEEP_LOCKED);
 	/* Inline extent inserted, page gets unlocked and everything is done */
 	if (page_started)
 		return 0;
@@ -1362,25 +1364,18 @@ static u64 get_extent_allocation_hint(struct btrfs_inode *inode, u64 start,
  * locked_page is the page that writepage had locked already.  We use
  * it to make sure we don't do extra locks or unlocks.
  *
- * *page_started is set to one if we unlock locked_page and do everything
- * required to start IO on it.  It may be clean and already done with
- * IO when we return.
- *
- * When unlock == 1, we unlock the pages in successfully allocated regions.
- * When unlock == 0, we leave them locked for writing them out.
+ * When this function fails, it unlocks all pages except @locked_page.
 *
- * However, we unlock all the pages except @locked_page in case of failure.
+ * When this function successfully creates an inline extent, it sets page_started
+ * to 1 and unlocks all pages including locked_page and starts I/O on them.
+ * (In reality inline extents are limited to a single page, so locked_page is
+ * the only page handled anyway).
 *
- * In summary, page locking state will be as follow:
+ * When this function succeeds and creates a normal extent, the page locking
+ * status depends on the passed in flags:
 *
- * - page_started == 1 (return value)
- *   - All the pages are unlocked. IO is started.
- *   - Note that this can happen only on success
- * - unlock == 1
- *   - All the pages except @locked_page are unlocked in any case
- * - unlock == 0
- *   - On success, all the pages are locked for writing out them
- *   - On failure, all the pages except @locked_page are unlocked
+ * - If CFR_KEEP_LOCKED is set, all pages are kept locked.
+ * - Else all pages except for @locked_page are unlocked.
 *
 * When a failure happens in the second or later iteration of the
 * while-loop, the ordered extents created in previous iterations are kept
@@ -1391,8 +1386,8 @@ static u64 get_extent_allocation_hint(struct btrfs_inode *inode, u64 start,
 static noinline int cow_file_range(struct btrfs_inode *inode,
 				   struct page *locked_page,
 				   u64 start, u64 end, int *page_started,
-				   unsigned long *nr_written, int unlock,
-				   u64 *done_offset)
+				   unsigned long *nr_written, u64 *done_offset,
+				   u32 flags)
 {
 	struct btrfs_root *root = inode->root;
 	struct btrfs_fs_info *fs_info = root->fs_info;
@@ -1558,7 +1553,7 @@ static noinline int cow_file_range(struct btrfs_inode *inode,
 		 * Do set the Ordered (Private2) bit so we know this page was
 		 * properly setup for writepage.
 		 */
-		page_ops = unlock ? PAGE_UNLOCK : 0;
+		page_ops = (flags & CFR_KEEP_LOCKED) ? 0 : PAGE_UNLOCK;
 		page_ops |= PAGE_SET_ORDERED;
 
 		extent_clear_unlock_delalloc(inode, start, start + ram_size - 1,
@@ -1627,10 +1622,10 @@ static noinline int cow_file_range(struct btrfs_inode *inode,
 	 * EXTENT_DEFRAG | EXTENT_CLEAR_META_RESV are handled by the cleanup
 	 * function.
 	 *
-	 * However, in case of unlock == 0, we still need to unlock the pages
-	 * (except @locked_page) to ensure all the pages are unlocked.
+	 * However, in case of CFR_KEEP_LOCKED, we still need to unlock the
+	 * pages (except @locked_page) to ensure all the pages are unlocked.
 	 */
-	if (!unlock && orig_start < start) {
+	if ((flags & CFR_KEEP_LOCKED) && orig_start < start) {
 		if (!locked_page)
 			mapping_set_error(inode->vfs_inode.i_mapping, ret);
 		extent_clear_unlock_delalloc(inode, orig_start, start - 1,
@@ -1836,7 +1831,7 @@ static noinline int run_delalloc_zoned(struct btrfs_inode *inode,
 
 	while (start <= end) {
 		ret = cow_file_range(inode, locked_page, start, end, page_started,
-				     nr_written, 0, &done_offset);
+				     nr_written, &done_offset, CFR_KEEP_LOCKED);
 		if (ret && ret != -EAGAIN)
 			return ret;
@@ -1956,7 +1951,7 @@ static int fallback_to_cow(struct btrfs_inode *inode, struct page *locked_page,
 	}
 
 	return cow_file_range(inode, locked_page, start, end, page_started,
-			      nr_written, 1, NULL);
+			      nr_written, NULL, 0);
 }
 
 struct can_nocow_file_extent_args {
@@ -2433,7 +2428,7 @@ int btrfs_run_delalloc_range(struct btrfs_inode *inode, struct page *locked_page
 					   page_started, nr_written, wbc);
 	else
 		ret = cow_file_range(inode, locked_page, start, end,
-				     page_started, nr_written, 1, NULL);
 
+				     page_started, nr_written, NULL, 0);
 out:
 	ASSERT(ret <= 0);

From patchwork Wed Jun 28 15:31:23 2023
From: Christoph Hellwig
To: Chris Mason, Josef Bacik, David Sterba
Cc: Matthew Wilcox, linux-btrfs@vger.kernel.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH 02/23] btrfs: don't create inline extents in fallback_to_cow
Date: Wed, 28 Jun 2023 17:31:23 +0200
Message-Id: <20230628153144.22834-3-hch@lst.de>
In-Reply-To: <20230628153144.22834-1-hch@lst.de>
References: <20230628153144.22834-1-hch@lst.de>

For nodatacow files, run_delalloc_nocow can still fall back to COW
allocations when required and calls the fallback_to_cow helper for that.
For such an allocation we can have multiple ordered_extents for existing
extents that NOCOW overwrites and new allocations that fallback_to_cow
creates.  If one of the new extents is an inline extent, the writepages
code would have to avoid normal page writeback for it, as indicated by
the page_started return argument, which run_delalloc_nocow can't return.
Fix this by never creating inline extents from fallback_to_cow.

Signed-off-by: Christoph Hellwig
---
 fs/btrfs/inode.c | 34 ++++++++++++++++++++--------------
 1 file changed, 20 insertions(+), 14 deletions(-)

diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 92a78940991fcb..cddf54bc330c44 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -126,6 +126,7 @@ static int btrfs_setsize(struct inode *inode, struct iattr *attr);
 static int btrfs_truncate(struct btrfs_inode *inode, bool skip_writeback);
 
 #define CFR_KEEP_LOCKED	(1 << 0)
+#define CFR_NOINLINE	(1 << 1)
 static noinline int cow_file_range(struct btrfs_inode *inode,
 				   struct page *locked_page,
 				   u64 start, u64 end, int *page_started,
@@ -1426,7 +1427,8 @@ static noinline int cow_file_range(struct btrfs_inode *inode,
 	 * This means we can trigger inline extent even if we didn't want to.
 	 * So here we skip inline extent creation completely.
 	 */
-	if (start == 0 && fs_info->sectorsize == PAGE_SIZE) {
+	if (start == 0 && fs_info->sectorsize == PAGE_SIZE &&
+	    !(flags & CFR_NOINLINE)) {
 		u64 actual_end = min_t(u64, i_size_read(&inode->vfs_inode),
 				       end + 1);
 
@@ -1889,15 +1891,17 @@ static noinline int csum_exist_in_range(struct btrfs_fs_info *fs_info,
 }
 
 static int fallback_to_cow(struct btrfs_inode *inode, struct page *locked_page,
-			   const u64 start, const u64 end,
-			   int *page_started, unsigned long *nr_written)
+			   const u64 start, const u64 end)
 {
 	const bool is_space_ino = btrfs_is_free_space_inode(inode);
 	const bool is_reloc_ino = btrfs_is_data_reloc_root(inode->root);
 	const u64 range_bytes = end + 1 - start;
 	struct extent_io_tree *io_tree = &inode->io_tree;
+	int page_started = 0;
+	unsigned long nr_written;
 	u64 range_start = start;
 	u64 count;
+	int ret;
 
 	/*
 	 * If EXTENT_NORESERVE is set it means that when the buffered write was
@@ -1950,8 +1954,15 @@ static int fallback_to_cow(struct btrfs_inode *inode, struct page *locked_page,
 			       NULL);
 	}
 
-	return cow_file_range(inode, locked_page, start, end, page_started,
-			      nr_written, NULL, 0);
+	/*
+	 * Don't try to create inline extents, as a mix of inline extent that
+	 * is written out and unlocked directly and a normal nocow extent
+	 * doesn't work.
+	 */
+	ret = cow_file_range(inode, locked_page, start, end, &page_started,
+			     &nr_written, NULL, CFR_NOINLINE);
+	ASSERT(!page_started);
+	return ret;
 }
 
 struct can_nocow_file_extent_args {
@@ -2100,9 +2111,7 @@ static int can_nocow_file_extent(struct btrfs_path *path,
  */
 static noinline int run_delalloc_nocow(struct btrfs_inode *inode,
 				       struct page *locked_page,
-				       const u64 start, const u64 end,
-				       int *page_started,
-				       unsigned long *nr_written)
+				       const u64 start, const u64 end)
 {
 	struct btrfs_fs_info *fs_info = inode->root->fs_info;
 	struct btrfs_root *root = inode->root;
@@ -2270,8 +2279,7 @@ static noinline int run_delalloc_nocow(struct btrfs_inode *inode,
 			 */
 			if (cow_start != (u64)-1) {
 				ret = fallback_to_cow(inode, locked_page,
-						      cow_start, found_key.offset - 1,
-						      page_started, nr_written);
+						      cow_start, found_key.offset - 1);
 				if (ret)
 					goto error;
 				cow_start = (u64)-1;
@@ -2352,8 +2360,7 @@ static noinline int run_delalloc_nocow(struct btrfs_inode *inode,
 
 	if (cow_start != (u64)-1) {
 		cur_offset = end;
-		ret = fallback_to_cow(inode, locked_page, cow_start, end,
-				      page_started, nr_written);
+		ret = fallback_to_cow(inode, locked_page, cow_start, end);
 		if (ret)
 			goto error;
 	}
@@ -2412,8 +2419,7 @@ int btrfs_run_delalloc_range(struct btrfs_inode *inode, struct page *locked_page
 	 * preallocated inodes.
 	 */
 	ASSERT(!zoned || btrfs_is_data_reloc_root(inode->root));
-	ret = run_delalloc_nocow(inode, locked_page, start, end,
-				 page_started, nr_written);
+	ret = run_delalloc_nocow(inode, locked_page, start, end);
 	goto out;
 }

From patchwork Wed Jun 28 15:31:24 2023
From: Christoph Hellwig
To: Chris Mason, Josef Bacik, David Sterba
Cc: Matthew Wilcox, linux-btrfs@vger.kernel.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH 03/23] btrfs: split page locking out of __process_pages_contig
Date: Wed, 28 Jun 2023 17:31:24 +0200
Message-Id: <20230628153144.22834-4-hch@lst.de>
In-Reply-To: <20230628153144.22834-1-hch@lst.de>
References: <20230628153144.22834-1-hch@lst.de>

There is a lot of complexity in __process_pages_contig to deal with the
PAGE_LOCK case, which can return an error unlike all the other actions.

Open code the page iteration for page locking in lock_delalloc_pages and
remove all the now unused code from __process_pages_contig.
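For readers unfamiliar with the lock-and-revalidate pattern that the new
lock_delalloc_pages open codes: lock each page, re-check under the lock
that it is still dirty and still belongs to the mapping, and on any
failure unlock the already-locked prefix and return -EAGAIN.  A compilable
toy model of just that control flow (array indices stand in for page
offsets; none of the names below are kernel API):

#include <stdio.h>
#include <errno.h>

#define NR 8

static int locked[NR];

static int trylock_page(int i) { if (locked[i]) return -1; locked[i] = 1; return 0; }
static void unlock_page(int i) { locked[i] = 0; }
static int page_dirty(int i)   { return i != 5; /* pretend page 5 was cleaned */ }

/* model of lock_delalloc_pages(): lock [start, end], back out on failure */
static int lock_range(int start, int end)
{
        int i;

        for (i = start; i <= end; i++) {
                if (trylock_page(i))
                        goto backout;
                /* re-check under the lock, like the PageDirty/mapping test */
                if (!page_dirty(i)) {
                        unlock_page(i);
                        goto backout;
                }
        }
        return 0;

backout:
        /* unlock the already-locked prefix, like __unlock_for_delalloc() */
        while (--i >= start)
                unlock_page(i);
        return -EAGAIN;
}

int main(void)
{
        printf("lock_range(0, 3) = %d\n", lock_range(0, 3)); /* 0 */
        printf("lock_range(4, 7) = %d\n", lock_range(4, 7)); /* -EAGAIN */
        return 0;
}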
Signed-off-by: Christoph Hellwig
---
 fs/btrfs/extent_io.c | 149 +++++++++++++++++--------------------------
 fs/btrfs/extent_io.h |   1 -
 2 files changed, 59 insertions(+), 91 deletions(-)

diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index a91d5ad2798428..36c3ae947ae8e0 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -197,18 +197,9 @@ void extent_range_redirty_for_io(struct inode *inode, u64 start, u64 end)
 	}
 }
 
-/*
- * Process one page for __process_pages_contig().
- *
- * Return >0 if we hit @page == @locked_page.
- * Return 0 if we updated the page status.
- * Return -EGAIN if the we need to try again.
- * (For PAGE_LOCK case but got dirty page or page not belong to mapping)
- */
-static int process_one_page(struct btrfs_fs_info *fs_info,
-			    struct address_space *mapping,
-			    struct page *page, struct page *locked_page,
-			    unsigned long page_ops, u64 start, u64 end)
+static void process_one_page(struct btrfs_fs_info *fs_info,
+			     struct page *page, struct page *locked_page,
+			     unsigned long page_ops, u64 start, u64 end)
 {
 	u32 len;
 
@@ -224,29 +215,13 @@ static int process_one_page(struct btrfs_fs_info *fs_info,
 	if (page_ops & PAGE_END_WRITEBACK)
 		btrfs_page_clamp_clear_writeback(fs_info, page, start, len);
 
-	if (page == locked_page)
-		return 1;
-
-	if (page_ops & PAGE_LOCK) {
-		int ret;
-
-		ret = btrfs_page_start_writer_lock(fs_info, page, start, len);
-		if (ret)
-			return ret;
-		if (!PageDirty(page) || page->mapping != mapping) {
-			btrfs_page_end_writer_lock(fs_info, page, start, len);
-			return -EAGAIN;
-		}
-	}
-	if (page_ops & PAGE_UNLOCK)
+	if (page != locked_page && (page_ops & PAGE_UNLOCK))
 		btrfs_page_end_writer_lock(fs_info, page, start, len);
-	return 0;
 }
 
-static int __process_pages_contig(struct address_space *mapping,
-				  struct page *locked_page,
-				  u64 start, u64 end, unsigned long page_ops,
-				  u64 *processed_end)
+static void __process_pages_contig(struct address_space *mapping,
+				   struct page *locked_page, u64 start, u64 end,
+				   unsigned long page_ops)
 {
 	struct btrfs_fs_info *fs_info = btrfs_sb(mapping->host->i_sb);
 	pgoff_t start_index = start >> PAGE_SHIFT;
@@ -254,64 +229,24 @@ static int __process_pages_contig(struct address_space *mapping,
 	pgoff_t index = start_index;
 	unsigned long pages_processed = 0;
 	struct folio_batch fbatch;
-	int err = 0;
 	int i;
 
-	if (page_ops & PAGE_LOCK) {
-		ASSERT(page_ops == PAGE_LOCK);
-		ASSERT(processed_end && *processed_end == start);
-	}
-
 	folio_batch_init(&fbatch);
 	while (index <= end_index) {
 		int found_folios;
 
 		found_folios = filemap_get_folios_contig(mapping, &index,
 				end_index, &fbatch);
-
-		if (found_folios == 0) {
-			/*
-			 * Only if we're going to lock these pages, we can find
-			 * nothing at @index.
-			 */
-			ASSERT(page_ops & PAGE_LOCK);
-			err = -EAGAIN;
-			goto out;
-		}
-
 		for (i = 0; i < found_folios; i++) {
-			int process_ret;
 			struct folio *folio = fbatch.folios[i];
-			process_ret = process_one_page(fs_info, mapping,
-					&folio->page, locked_page, page_ops,
-					start, end);
-			if (process_ret < 0) {
-				err = -EAGAIN;
-				folio_batch_release(&fbatch);
-				goto out;
-			}
+
+			process_one_page(fs_info, &folio->page, locked_page,
+					 page_ops, start, end);
 			pages_processed += folio_nr_pages(folio);
 		}
 		folio_batch_release(&fbatch);
 		cond_resched();
 	}
-out:
-	if (err && processed_end) {
-		/*
-		 * Update @processed_end. I know this is awful since it has
-		 * two different return value patterns (inclusive vs exclusive).
-		 *
-		 * But the exclusive pattern is necessary if @start is 0, or we
-		 * underflow and check against processed_end won't work as
-		 * expected.
-		 */
-		if (pages_processed)
-			*processed_end = min(end,
-			((u64)(start_index + pages_processed) << PAGE_SHIFT) - 1);
-		else
-			*processed_end = start;
-	}
-	return err;
 }
 
 static noinline void __unlock_for_delalloc(struct inode *inode,
@@ -326,29 +261,63 @@ static noinline void __unlock_for_delalloc(struct inode *inode,
 		return;
 
 	__process_pages_contig(inode->i_mapping, locked_page, start, end,
-			       PAGE_UNLOCK, NULL);
+			       PAGE_UNLOCK);
 }
 
 static noinline int lock_delalloc_pages(struct inode *inode,
 					struct page *locked_page,
-					u64 delalloc_start,
-					u64 delalloc_end)
+					u64 start,
+					u64 end)
 {
-	unsigned long index = delalloc_start >> PAGE_SHIFT;
-	unsigned long end_index = delalloc_end >> PAGE_SHIFT;
-	u64 processed_end = delalloc_start;
-	int ret;
+	struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb);
+	struct address_space *mapping = inode->i_mapping;
+	pgoff_t start_index = start >> PAGE_SHIFT;
+	pgoff_t end_index = end >> PAGE_SHIFT;
+	pgoff_t index = start_index;
+	u64 processed_end = start;
+	struct folio_batch fbatch;
 
-	ASSERT(locked_page);
 	if (index == locked_page->index && index == end_index)
 		return 0;
 
-	ret = __process_pages_contig(inode->i_mapping, locked_page, delalloc_start,
-				     delalloc_end, PAGE_LOCK, &processed_end);
-	if (ret == -EAGAIN && processed_end > delalloc_start)
-		__unlock_for_delalloc(inode, locked_page, delalloc_start,
-				      processed_end);
-	return ret;
+	folio_batch_init(&fbatch);
+	while (index <= end_index) {
+		unsigned int found_folios, i;
+
+		found_folios = filemap_get_folios_contig(mapping, &index,
+				end_index, &fbatch);
+		if (found_folios == 0)
+			goto out;
+
+		for (i = 0; i < found_folios; i++) {
+			struct page *page = &fbatch.folios[i]->page;
+			u32 len = end + 1 - start;
+
+			if (page == locked_page)
+				continue;
+
+			if (btrfs_page_start_writer_lock(fs_info, page, start,
+							 len))
+				goto out;
+
+			if (!PageDirty(page) || page->mapping != mapping) {
+				btrfs_page_end_writer_lock(fs_info, page, start,
+							   len);
+				goto out;
+			}
+
+			processed_end = page_offset(page) + PAGE_SIZE - 1;
+		}
+		folio_batch_release(&fbatch);
+		cond_resched();
+	}
+
+	return 0;
+out:
+	folio_batch_release(&fbatch);
+	if (processed_end > start)
+		__unlock_for_delalloc(inode, locked_page, start, processed_end);
+	return -EAGAIN;
 }
 
 /*
@@ -467,7 +436,7 @@ void extent_clear_unlock_delalloc(struct btrfs_inode *inode, u64 start, u64 end,
 	clear_extent_bit(&inode->io_tree, start, end, clear_bits, NULL);
 
 	__process_pages_contig(inode->vfs_inode.i_mapping, locked_page,
-			       start, end, page_ops, NULL);
+			       start, end, page_ops);
 }
 
 static bool btrfs_verify_page(struct page *page, u64 start)
diff --git a/fs/btrfs/extent_io.h b/fs/btrfs/extent_io.h
index c5fae3a7d911bf..285754154fdc5c 100644
--- a/fs/btrfs/extent_io.h
+++ b/fs/btrfs/extent_io.h
@@ -40,7 +40,6 @@ enum {
 	ENUM_BIT(PAGE_START_WRITEBACK),
 	ENUM_BIT(PAGE_END_WRITEBACK),
 	ENUM_BIT(PAGE_SET_ORDERED),
-	ENUM_BIT(PAGE_LOCK),
 };
 
 /*

From patchwork Wed Jun 28 15:31:25 2023
From: Christoph Hellwig
To: Chris Mason, Josef Bacik, David Sterba
Cc: Matthew Wilcox, linux-btrfs@vger.kernel.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH 04/23] btrfs: remove btrfs_writepage_endio_finish_ordered
Date: Wed, 28 Jun 2023 17:31:25 +0200
Message-Id: <20230628153144.22834-5-hch@lst.de>
In-Reply-To: <20230628153144.22834-1-hch@lst.de>
References: <20230628153144.22834-1-hch@lst.de>

btrfs_writepage_endio_finish_ordered is a small wrapper around
btrfs_mark_ordered_io_finished that just changes the argument passing
slightly, and adds a tracepoint.

Move the tracepoint to btrfs_mark_ordered_io_finished, which means it now
also covers the error handling in btrfs_cleanup_ordered_extents, and
switch all callers to just call btrfs_mark_ordered_io_finished directly.
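Both this patch and the next one replace an inclusive-end calling
convention with an explicit length, a conversion that is a classic source
of off-by-one mistakes.  A tiny standalone reminder of the arithmetic used
throughout these patches (plain C, no kernel code; the concrete numbers
are just an example):

#include <assert.h>
#include <stdint.h>

int main(void)
{
        /* btrfs ranges are usually [start, end] with an inclusive end... */
        uint64_t start = 4096, end = 8191;

        /* ...while the page helpers want a byte count: len = end + 1 - start */
        uint32_t len = (uint32_t)(end + 1 - start);

        assert(len == 4096);
        /* and back again: end = start + len - 1 */
        assert(start + len - 1 == end);
        return 0;
}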
Signed-off-by: Christoph Hellwig
Reviewed-by: Johannes Thumshirn
---
 fs/btrfs/btrfs_inode.h  |  3 ---
 fs/btrfs/extent_io.c    | 17 ++++++++---------
 fs/btrfs/inode.c        |  9 ---------
 fs/btrfs/ordered-data.c |  4 ++++
 4 files changed, 12 insertions(+), 21 deletions(-)

diff --git a/fs/btrfs/btrfs_inode.h b/fs/btrfs/btrfs_inode.h
index d47a927b3504d6..90e60ad9db6200 100644
--- a/fs/btrfs/btrfs_inode.h
+++ b/fs/btrfs/btrfs_inode.h
@@ -501,9 +501,6 @@ int btrfs_run_delalloc_range(struct btrfs_inode *inode, struct page *locked_page
 		u64 start, u64 end, int *page_started, unsigned long *nr_written,
 		struct writeback_control *wbc);
 int btrfs_writepage_cow_fixup(struct page *page);
-void btrfs_writepage_endio_finish_ordered(struct btrfs_inode *inode,
-					  struct page *page, u64 start,
-					  u64 end, bool uptodate);
 int btrfs_encoded_io_compression_from_extent(struct btrfs_fs_info *fs_info,
 					     int compress_type);
 int btrfs_encoded_read_regular_fill_pages(struct btrfs_inode *inode,
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 36c3ae947ae8e0..af05237dc2f186 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -473,17 +473,15 @@ void end_extent_writepage(struct page *page, int err, u64 start, u64 end)
 	struct btrfs_inode *inode;
 	const bool uptodate = (err == 0);
 	int ret = 0;
+	u32 len = end + 1 - start;
 
+	ASSERT(end + 1 - start <= U32_MAX);
 	ASSERT(page && page->mapping);
 	inode = BTRFS_I(page->mapping->host);
-	btrfs_writepage_endio_finish_ordered(inode, page, start, end, uptodate);
+	btrfs_mark_ordered_io_finished(inode, page, start, len, uptodate);
 
 	if (!uptodate) {
 		const struct btrfs_fs_info *fs_info = inode->root->fs_info;
-		u32 len;
-
-		ASSERT(end + 1 - start <= U32_MAX);
-		len = end + 1 - start;
 
 		btrfs_page_clear_uptodate(fs_info, page, start, len);
 		ret = err < 0 ? err : -EIO;
@@ -1328,6 +1326,7 @@ static noinline_for_stack int __extent_writepage_io(struct btrfs_inode *inode,
 	bio_ctrl->end_io_func = end_bio_extent_writepage;
 
 	while (cur <= end) {
+		u32 len = end - cur + 1;
 		u64 disk_bytenr;
 		u64 em_end;
 		u64 dirty_range_start = cur;
@@ -1335,8 +1334,8 @@ static noinline_for_stack int __extent_writepage_io(struct btrfs_inode *inode,
 		u32 iosize;
 
 		if (cur >= i_size) {
-			btrfs_writepage_endio_finish_ordered(inode, page, cur,
-							     end, true);
+			btrfs_mark_ordered_io_finished(inode, page, cur, len,
+						       true);
 			/*
 			 * This range is beyond i_size, thus we don't need to
 			 * bother writing back.
@@ -1345,7 +1344,7 @@ static noinline_for_stack int __extent_writepage_io(struct btrfs_inode *inode,
 			 * writeback the sectors with subpage dirty bits,
 			 * causing writeback without ordered extent.
 			 */
-			btrfs_page_clear_dirty(fs_info, page, cur, end + 1 - cur);
+			btrfs_page_clear_dirty(fs_info, page, cur, len);
 			break;
 		}
@@ -1356,7 +1355,7 @@ static noinline_for_stack int __extent_writepage_io(struct btrfs_inode *inode,
 			continue;
 		}
 
-		em = btrfs_get_extent(inode, NULL, 0, cur, end - cur + 1);
+		em = btrfs_get_extent(inode, NULL, 0, cur, len);
 		if (IS_ERR(em)) {
 			ret = PTR_ERR_OR_ZERO(em);
 			goto out_error;
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index cddf54bc330c44..b158db44b268a6 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -3385,15 +3385,6 @@ int btrfs_finish_ordered_io(struct btrfs_ordered_extent *ordered)
 	return btrfs_finish_one_ordered(ordered);
 }
 
-void btrfs_writepage_endio_finish_ordered(struct btrfs_inode *inode,
-					  struct page *page, u64 start,
-					  u64 end, bool uptodate)
-{
-	trace_btrfs_writepage_end_io_hook(inode, start, end, uptodate);
-
-	btrfs_mark_ordered_io_finished(inode, page, start, end + 1 - start, uptodate);
-}
-
 /*
  * Verify the checksum for a single sector without any extra action that depend
  * on the type of I/O.
diff --git a/fs/btrfs/ordered-data.c b/fs/btrfs/ordered-data.c
index a629532283bc33..109e80ed25b669 100644
--- a/fs/btrfs/ordered-data.c
+++ b/fs/btrfs/ordered-data.c
@@ -410,6 +410,10 @@ void btrfs_mark_ordered_io_finished(struct btrfs_inode *inode,
 	unsigned long flags;
 	u64 cur = file_offset;
 
+	trace_btrfs_writepage_end_io_hook(inode, file_offset,
+					  file_offset + num_bytes - 1,
+					  uptodate);
+
 	spin_lock_irqsave(&tree->lock, flags);
 	while (cur < file_offset + num_bytes) {
 		u64 entry_end;

From patchwork Wed Jun 28 15:31:26 2023
From: Christoph Hellwig
To: Chris Mason, Josef Bacik, David Sterba
Cc: Matthew Wilcox, linux-btrfs@vger.kernel.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH 05/23] btrfs: remove end_extent_writepage
Date: Wed, 28 Jun 2023 17:31:26 +0200
Message-Id: <20230628153144.22834-6-hch@lst.de>
In-Reply-To: <20230628153144.22834-1-hch@lst.de>
References: <20230628153144.22834-1-hch@lst.de>

end_extent_writepage is a small helper that combines a call to
btrfs_mark_ordered_io_finished with conditional error-only calls to
btrfs_page_clear_uptodate and mapping_set_error, with a somewhat
unfortunate calling convention that passes an inclusive end instead of
the len expected by the underlying functions.

Remove end_extent_writepage and open code it in the 4 callers.  Of those,
two already are error-only and thus don't need the extra conditional, and
one already has the mapping_set_error, so a duplicate call can be
avoided.

Signed-off-by: Christoph Hellwig
---
 fs/btrfs/extent_io.c | 44 +++++++++++++++-----------------------------
 fs/btrfs/extent_io.h |  2 --
 fs/btrfs/inode.c     | 42 ++++++++++++++++++++++--------------------
 3 files changed, 37 insertions(+), 51 deletions(-)

diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index af05237dc2f186..5a4f5fc09a2354 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -466,29 +466,6 @@ static void end_page_read(struct page *page, bool uptodate, u64 start, u32 len)
 		btrfs_subpage_end_reader(fs_info, page, start, len);
 }
 
-/* lots and lots of room for performance fixes in the end_bio funcs */
-
-void end_extent_writepage(struct page *page, int err, u64 start, u64 end)
-{
-	struct btrfs_inode *inode;
-	const bool uptodate = (err == 0);
-	int ret = 0;
-	u32 len = end + 1 - start;
-
-	ASSERT(end + 1 - start <= U32_MAX);
-	ASSERT(page && page->mapping);
-	inode = BTRFS_I(page->mapping->host);
-	btrfs_mark_ordered_io_finished(inode, page, start, len, uptodate);
-
-	if (!uptodate) {
-		const struct btrfs_fs_info *fs_info = inode->root->fs_info;
-
-		btrfs_page_clear_uptodate(fs_info, page, start, len);
-		ret = err < 0 ? err : -EIO;
-		mapping_set_error(page->mapping, ret);
-	}
-}
-
 /*
  * after a writepage IO is done, we need to:
  * clear the uptodate bits on error
@@ -1431,7 +1408,6 @@ static int __extent_writepage(struct page *page, struct btrfs_bio_ctrl *bio_ctrl
 	struct folio *folio = page_folio(page);
 	struct inode *inode = page->mapping->host;
 	const u64 page_start = page_offset(page);
-	const u64 page_end = page_start + PAGE_SIZE - 1;
 	int ret;
 	int nr = 0;
 	size_t pg_offset;
@@ -1475,8 +1451,13 @@ static int __extent_writepage(struct page *page, struct btrfs_bio_ctrl *bio_ctrl
 		set_page_writeback(page);
 		end_page_writeback(page);
 	}
-	if (ret)
-		end_extent_writepage(page, ret, page_start, page_end);
+	if (ret) {
+		btrfs_mark_ordered_io_finished(BTRFS_I(inode), page, page_start,
+					       PAGE_SIZE, !ret);
+		btrfs_page_clear_uptodate(btrfs_sb(inode->i_sb), page,
+					  page_start, PAGE_SIZE);
+		mapping_set_error(page->mapping, ret);
+	}
 	unlock_page(page);
 	ASSERT(ret <= 0);
 	return ret;
@@ -2194,6 +2175,7 @@ int extent_write_locked_range(struct inode *inode, u64 start, u64 end,
 
 	while (cur <= end) {
 		u64 cur_end = min(round_down(cur, PAGE_SIZE) + PAGE_SIZE - 1, end);
+		u32 cur_len = cur_end + 1 - cur;
 		struct page *page;
 		int nr = 0;
 
@@ -2217,9 +2199,13 @@ int extent_write_locked_range(struct inode *inode, u64 start, u64 end,
 			set_page_writeback(page);
 			end_page_writeback(page);
 		}
-		if (ret)
-			end_extent_writepage(page, ret, cur, cur_end);
-		btrfs_page_unlock_writer(fs_info, page, cur, cur_end + 1 - cur);
+		if (ret) {
+			btrfs_mark_ordered_io_finished(BTRFS_I(inode), page,
+						       cur, cur_len, !ret);
+			btrfs_page_clear_uptodate(fs_info, page, cur, cur_len);
+			mapping_set_error(page->mapping, ret);
+		}
+		btrfs_page_unlock_writer(fs_info, page, cur, cur_len);
 		if (ret < 0) {
 			found_error = true;
 			first_error = ret;
diff --git a/fs/btrfs/extent_io.h b/fs/btrfs/extent_io.h
index 285754154fdc5c..8d11e17c0be9fa 100644
--- a/fs/btrfs/extent_io.h
+++ b/fs/btrfs/extent_io.h
@@ -276,8 +276,6 @@ void btrfs_clear_buffer_dirty(struct btrfs_trans_handle *trans,
 
 int btrfs_alloc_page_array(unsigned int nr_pages, struct page **page_array);
 
-void end_extent_writepage(struct page *page, int err, u64 start, u64 end);
-
 #ifdef CONFIG_BTRFS_FS_RUN_SANITY_TESTS
 bool find_lock_delalloc_range(struct inode *inode,
 			      struct page *locked_page, u64 *start,
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index b158db44b268a6..d746b0fe0f994b 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -426,11 +426,10 @@ static inline void btrfs_cleanup_ordered_extents(struct btrfs_inode *inode,
 
 	while (index <= end_index) {
 		/*
-		 * For locked page, we will call end_extent_writepage() on it
-		 * in run_delalloc_range() for the error handling.  That
-		 * end_extent_writepage() function will call
-		 * btrfs_mark_ordered_io_finished() to clear page Ordered and
-		 * run the ordered extent accounting.
+		 * For locked page, we will call
+		 * btrfs_mark_ordered_io_finished() on it in
+		 * run_delalloc_range() for the error handling, which will
+		 * clear page Ordered and run the ordered extent accounting.
 		 *
 		 * Here we can't just clear the Ordered bit, or
 		 * btrfs_mark_ordered_io_finished() would skip the accounting
@@ -1160,11 +1159,16 @@ static int submit_uncompressed_range(struct btrfs_inode *inode,
 		btrfs_cleanup_ordered_extents(inode, locked_page, start, end - start + 1);
 		if (locked_page) {
 			const u64 page_start = page_offset(locked_page);
-			const u64 page_end = page_start + PAGE_SIZE - 1;
 
 			set_page_writeback(locked_page);
 			end_page_writeback(locked_page);
-			end_extent_writepage(locked_page, ret, page_start, page_end);
+			btrfs_mark_ordered_io_finished(inode, locked_page,
+						       page_start, PAGE_SIZE,
+						       !ret);
+			btrfs_page_clear_uptodate(inode->root->fs_info,
+						  locked_page, page_start,
+						  PAGE_SIZE);
+			mapping_set_error(locked_page->mapping, ret);
 			unlock_page(locked_page);
 		}
 		return ret;
@@ -2841,23 +2845,19 @@ struct btrfs_writepage_fixup {
 
 static void btrfs_writepage_fixup_worker(struct btrfs_work *work)
 {
-	struct btrfs_writepage_fixup *fixup;
+	struct btrfs_writepage_fixup *fixup =
+		container_of(work, struct btrfs_writepage_fixup, work);
 	struct btrfs_ordered_extent *ordered;
 	struct extent_state *cached_state = NULL;
 	struct extent_changeset *data_reserved = NULL;
-	struct page *page;
-	struct btrfs_inode *inode;
-	u64 page_start;
-	u64 page_end;
+	struct page *page = fixup->page;
+	struct btrfs_inode *inode = fixup->inode;
+	struct btrfs_fs_info *fs_info = inode->root->fs_info;
+	u64 page_start = page_offset(page);
+	u64 page_end = page_offset(page) + PAGE_SIZE - 1;
 	int ret = 0;
 	bool free_delalloc_space = true;
 
-	fixup = container_of(work, struct btrfs_writepage_fixup, work);
-	page = fixup->page;
-	inode = fixup->inode;
-	page_start = page_offset(page);
-	page_end = page_offset(page) + PAGE_SIZE - 1;
-
 	/*
 	 * This is similar to page_mkwrite, we need to reserve the space before
 	 * we take the page lock.
@@ -2950,10 +2950,12 @@ static void btrfs_writepage_fixup_worker(struct btrfs_work *work)
 		 * to reflect the errors and clean the page.
 		 */
 		mapping_set_error(page->mapping, ret);
-		end_extent_writepage(page, ret, page_start, page_end);
+		btrfs_mark_ordered_io_finished(inode, page, page_start,
+					       PAGE_SIZE, !ret);
+		btrfs_page_clear_uptodate(fs_info, page, page_start, PAGE_SIZE);
 		clear_page_dirty_for_io(page);
 	}
-	btrfs_page_clear_checked(inode->root->fs_info, page, page_start, PAGE_SIZE);
+	btrfs_page_clear_checked(fs_info, page, page_start, PAGE_SIZE);
 	unlock_page(page);
 	put_page(page);
 	kfree(fixup);

From patchwork Wed Jun 28 15:31:27 2023
From: Christoph Hellwig
To: Chris Mason, Josef Bacik, David Sterba
Cc: Matthew Wilcox, linux-btrfs@vger.kernel.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH 06/23] btrfs: reduce debug spam from submit_compressed_extents
Date: Wed, 28 Jun 2023 17:31:27 +0200
Message-Id: <20230628153144.22834-7-hch@lst.de>
In-Reply-To: <20230628153144.22834-1-hch@lst.de>
References: <20230628153144.22834-1-hch@lst.de>

Move the printk that is supposed to help debug failures in
submit_one_async_extent into submit_one_async_extent itself, and make it
conditional on actually having an error condition instead of spamming
the log unconditionally.
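The pattern here — log in the callee, and only on error — is easy to model
outside the kernel.  A small compilable sketch; toy_debug is a stand-in
for btrfs_debug (which in reality is a per-filesystem, ratelimited macro),
and the failure injection is invented for the example:

#include <stdio.h>

/* stand-in for btrfs_debug(); the real one takes an fs_info argument */
#define toy_debug(...) fprintf(stderr, __VA_ARGS__)

static int submit_one(int id)
{
        return (id == 3) ? -5 /* pretend -EIO for extent 3 */ : 0;
}

int main(void)
{
        int id;

        for (id = 0; id < 5; id++) {
                int ret = submit_one(id);

                /* log in the callee, and only when something actually failed */
                if (ret)
                        toy_debug("async extent submission failed id=%d ret=%d\n",
                                  id, ret);
        }
        return 0;
}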
Signed-off-by: Christoph Hellwig
Reviewed-by: Johannes Thumshirn
---
 fs/btrfs/inode.c | 33 +++++++++++++--------------------
 1 file changed, 13 insertions(+), 20 deletions(-)

diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index d746b0fe0f994b..0f709f766b6a94 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -1181,11 +1181,11 @@ static int submit_uncompressed_range(struct btrfs_inode *inode,
 	return ret;
 }
 
-static int submit_one_async_extent(struct btrfs_inode *inode,
-				   struct async_chunk *async_chunk,
-				   struct async_extent *async_extent,
-				   u64 *alloc_hint)
+static void submit_one_async_extent(struct async_chunk *async_chunk,
+				    struct async_extent *async_extent,
+				    u64 *alloc_hint)
 {
+	struct btrfs_inode *inode = async_chunk->inode;
 	struct extent_io_tree *io_tree = &inode->io_tree;
 	struct btrfs_root *root = inode->root;
 	struct btrfs_fs_info *fs_info = root->fs_info;
@@ -1279,7 +1279,7 @@ static int submit_one_async_extent(struct btrfs_inode *inode,
 	if (async_chunk->blkcg_css)
 		kthread_associate_blkcg(NULL);
 	kfree(async_extent);
-	return ret;
+	return;
 
 out_free_reserve:
 	btrfs_dec_block_group_reservations(fs_info, ins.objectid);
@@ -1293,7 +1293,13 @@ static int submit_one_async_extent(struct btrfs_inode *inode,
 				     PAGE_UNLOCK | PAGE_START_WRITEBACK |
 				     PAGE_END_WRITEBACK);
 	free_async_extent_pages(async_extent);
-	goto done;
+	if (async_chunk->blkcg_css)
+		kthread_associate_blkcg(NULL);
+	btrfs_debug(fs_info,
+"async extent submission failed root=%lld inode=%llu start=%llu len=%llu ret=%d",
+		    root->root_key.objectid, btrfs_ino(inode), start,
+		    async_extent->ram_size, ret);
+	kfree(async_extent);
 }
 
 /*
@@ -1303,28 +1309,15 @@ static int submit_one_async_extent(struct btrfs_inode *inode,
  */
 static noinline void submit_compressed_extents(struct async_chunk *async_chunk)
 {
-	struct btrfs_inode *inode = async_chunk->inode;
-	struct btrfs_fs_info *fs_info = inode->root->fs_info;
 	struct async_extent *async_extent;
 	u64 alloc_hint = 0;
-	int ret = 0;
 
 	while (!list_empty(&async_chunk->extents)) {
-		u64 extent_start;
-		u64 ram_size;
-
 		async_extent = list_entry(async_chunk->extents.next,
 					  struct async_extent, list);
 		list_del(&async_extent->list);
-		extent_start = async_extent->start;
-		ram_size = async_extent->ram_size;
-		ret = submit_one_async_extent(inode, async_chunk, async_extent,
-					      &alloc_hint);
-		btrfs_debug(fs_info,
-"async extent submission failed root=%lld inode=%llu start=%llu len=%llu ret=%d",
-			    inode->root->root_key.objectid,
-			    btrfs_ino(inode), extent_start, ram_size, ret);
+		submit_one_async_extent(async_chunk, async_extent, &alloc_hint);
 	}
 }

From patchwork Wed Jun 28 15:31:28 2023
From: Christoph Hellwig
To: Chris Mason, Josef Bacik, David Sterba
Cc: Matthew Wilcox, linux-btrfs@vger.kernel.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH 07/23] btrfs: remove the return value from submit_uncompressed_range
Date: Wed, 28 Jun 2023 17:31:28 +0200
Message-Id: <20230628153144.22834-8-hch@lst.de>
In-Reply-To: <20230628153144.22834-1-hch@lst.de>
References: <20230628153144.22834-1-hch@lst.de>

The return value from submit_uncompressed_range is ignored, and that's
fine because the error reporting happens through the mapping and
ordered_extent.
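The "void return, errors via the mapping" convention referenced here can
be sketched in isolation: the writer records the first error on a shared
object, and a later check (in the kernel, filemap_check_errors() at fsync
time) observes it.  All names below are invented stand-ins, not the btrfs
implementation:

#include <stdio.h>

/* toy stand-in for the error tag that mapping_set_error() sets */
static int mapping_error;

static void toy_mapping_set_error(int err)
{
        if (err && !mapping_error)
                mapping_error = err;    /* keep the first error */
}

/* void return: a failure is recorded on the "mapping" instead */
static void toy_submit_range(int fail)
{
        int ret = fail ? -5 /* pretend -EIO */ : 0;

        if (ret)
                toy_mapping_set_error(ret);
}

int main(void)
{
        toy_submit_range(0);
        toy_submit_range(1);
        /* a later fsync-like check picks the error up from the mapping */
        printf("deferred error: %d\n", mapping_error);
        return 0;
}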
Signed-off-by: Christoph Hellwig Reviewed-by: Johannes Thumshirn --- fs/btrfs/inode.c | 15 +++++++-------- 1 file changed, 7 insertions(+), 8 deletions(-) diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c index 0f709f766b6a94..c6845b0591b77e 100644 --- a/fs/btrfs/inode.c +++ b/fs/btrfs/inode.c @@ -1126,9 +1126,9 @@ static void free_async_extent_pages(struct async_extent *async_extent) async_extent->pages = NULL; } -static int submit_uncompressed_range(struct btrfs_inode *inode, - struct async_extent *async_extent, - struct page *locked_page) +static void submit_uncompressed_range(struct btrfs_inode *inode, + struct async_extent *async_extent, + struct page *locked_page) { u64 start = async_extent->start; u64 end = async_extent->start + async_extent->ram_size - 1; @@ -1153,7 +1153,7 @@ static int submit_uncompressed_range(struct btrfs_inode *inode, &nr_written, NULL, CFR_KEEP_LOCKED); /* Inline extent inserted, page gets unlocked and everything is done */ if (page_started) - return 0; + return; if (ret < 0) { btrfs_cleanup_ordered_extents(inode, locked_page, start, end - start + 1); @@ -1171,14 +1171,13 @@ static int submit_uncompressed_range(struct btrfs_inode *inode, mapping_set_error(locked_page->mapping, ret); unlock_page(locked_page); } - return ret; + return; } /* All pages will be unlocked, including @locked_page */ wbc_attach_fdatawrite_inode(&wbc, &inode->vfs_inode); - ret = extent_write_locked_range(&inode->vfs_inode, start, end, &wbc); + extent_write_locked_range(&inode->vfs_inode, start, end, &wbc); wbc_detach_inode(&wbc); - return ret; } static void submit_one_async_extent(struct async_chunk *async_chunk, @@ -1215,7 +1214,7 @@ static void submit_one_async_extent(struct async_chunk *async_chunk, /* We have fall back to uncompressed write */ if (!async_extent->pages) { - ret = submit_uncompressed_range(inode, async_extent, locked_page); + submit_uncompressed_range(inode, async_extent, locked_page); goto done; } From patchwork Wed Jun 28 15:31:29 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 13295976 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id B17AEC001DB for ; Wed, 28 Jun 2023 15:32:35 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232294AbjF1Pcc (ORCPT ); Wed, 28 Jun 2023 11:32:32 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60850 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232218AbjF1PcR (ORCPT ); Wed, 28 Jun 2023 11:32:17 -0400 Received: from bombadil.infradead.org (bombadil.infradead.org [IPv6:2607:7c80:54:3::133]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5E8052979; Wed, 28 Jun 2023 08:32:16 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender :Reply-To:Content-Type:Content-ID:Content-Description; bh=CRVWOZ/K4h6govUCQeEsnFecA9gGv9ULTyS6OF67+D0=; b=2vEsS1XMlo1E0JwCfcfvNaQR/8 58MQ97e/KdjXSK3sqPoG6CfyjNsyBvc67/ju4mSkVIT5dtvmv/eiyyF3AAx3Hcj3VoWQkknBS+xx8 Tcx1kjsHeBYL1FnXSqBCw1QmhEbecEbRk49dgKBO4goEQ+f0aU2oQM2sXjjoel9mvNJv44JxL9fTD Fgf4ROcIivp51fC+y/kBmXH8MCtI8W+O1hDPio+srXngH7epQGxd8SUdfU4lVQwU2Z0P6gJvqLtBI 
jPpbkvbrKYTbhE3v92bfsW1ldEd/qBWMo8JqUlLaRsfVfKaUUFmr5bPKvt5EyseyZisYpvZ7aT9AT HfYMt0gg==; Received: from 2a02-8389-2341-5b80-39d3-4735-9a3c-88d8.cable.dynamic.v6.surfer.at ([2a02:8389:2341:5b80:39d3:4735:9a3c:88d8] helo=localhost) by bombadil.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux)) id 1qEX9R-00G06r-2b; Wed, 28 Jun 2023 15:32:14 +0000 From: Christoph Hellwig To: Chris Mason , Josef Bacik , David Sterba Cc: Matthew Wilcox , linux-btrfs@vger.kernel.org, linux-fsdevel@vger.kernel.org Subject: [PATCH 08/23] btrfs: remove the return value from extent_write_locked_range Date: Wed, 28 Jun 2023 17:31:29 +0200 Message-Id: <20230628153144.22834-9-hch@lst.de> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230628153144.22834-1-hch@lst.de> References: <20230628153144.22834-1-hch@lst.de> MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org. See http://www.infradead.org/rpr.html Precedence: bulk List-ID: X-Mailing-List: linux-btrfs@vger.kernel.org The return value from extent_write_locked_range is ignored, and that's fine because the error reporting happens through the mapping and ordered_extent. Signed-off-by: Christoph Hellwig Reviewed-by: Johannes Thumshirn --- fs/btrfs/extent_io.c | 13 +++---------- fs/btrfs/extent_io.h | 4 ++-- 2 files changed, 5 insertions(+), 12 deletions(-) diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c index 5a4f5fc09a2354..e32ec41bade681 100644 --- a/fs/btrfs/extent_io.c +++ b/fs/btrfs/extent_io.c @@ -2152,11 +2152,10 @@ static int extent_write_cache_pages(struct address_space *mapping, * already been ran (aka, ordered extent inserted) and all pages are still * locked. */ -int extent_write_locked_range(struct inode *inode, u64 start, u64 end, - struct writeback_control *wbc) +void extent_write_locked_range(struct inode *inode, u64 start, u64 end, + struct writeback_control *wbc) { bool found_error = false; - int first_error = 0; int ret = 0; struct address_space *mapping = inode->i_mapping; struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb); @@ -2206,20 +2205,14 @@ int extent_write_locked_range(struct inode *inode, u64 start, u64 end, mapping_set_error(page->mapping, ret); } btrfs_page_unlock_writer(fs_info, page, cur, cur_len); - if (ret < 0) { + if (ret < 0) found_error = true; - first_error = ret; - } next_page: put_page(page); cur = cur_end + 1; } submit_write_bio(&bio_ctrl, found_error ? 
ret : 0); - - if (found_error) - return first_error; - return ret; } int extent_writepages(struct address_space *mapping, diff --git a/fs/btrfs/extent_io.h b/fs/btrfs/extent_io.h index 8d11e17c0be9fa..0312022bbf4b7a 100644 --- a/fs/btrfs/extent_io.h +++ b/fs/btrfs/extent_io.h @@ -177,8 +177,8 @@ int try_release_extent_mapping(struct page *page, gfp_t mask); int try_release_extent_buffer(struct page *page); int btrfs_read_folio(struct file *file, struct folio *folio); -int extent_write_locked_range(struct inode *inode, u64 start, u64 end, - struct writeback_control *wbc); +void extent_write_locked_range(struct inode *inode, u64 start, u64 end, + struct writeback_control *wbc); int extent_writepages(struct address_space *mapping, struct writeback_control *wbc); int btree_write_cache_pages(struct address_space *mapping, From patchwork Wed Jun 28 15:31:30 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 13295979 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 94368EB64DA for ; Wed, 28 Jun 2023 15:32:43 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232332AbjF1Pcl (ORCPT ); Wed, 28 Jun 2023 11:32:41 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60882 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232248AbjF1Pc0 (ORCPT ); Wed, 28 Jun 2023 11:32:26 -0400 Received: from bombadil.infradead.org (bombadil.infradead.org [IPv6:2607:7c80:54:3::133]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 4A3DA2110; Wed, 28 Jun 2023 08:32:19 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender :Reply-To:Content-Type:Content-ID:Content-Description; bh=hg5WWy8kMqhkood0mgXihIPnhNAN0IuoJkNdpCiQHag=; b=aBxZ9BC6XmMlmbtTKhNTlD2aLb 6EMH3x+h89Vot2Bvlus8VlVExzxluejfUumibyBi+PBDvyRbZzCNWXYg9XAJ89epKXh/Hx8Acdhcg VpsUhHyv4sRwyMkqpt4+YOu6+0oeDYrRzBDekU3Ebz6lvjz98pOTS+FMGlSDG/x/QyAFeNAxewt6u dqzJ9rHkwzAGS+JCw25FHkyOlGnh2jSn4RPCtXLK1bcc9/F7lgv/szKEcUISGVnhQwJsHeEeDRGoB 8AkJjUMOhze5koCM1ELp29xWOeVwG+solbaq1lFLlGz0DyRK0of6HdED059NK5zf48arROhK8Dh7K IlU/fqaw==; Received: from 2a02-8389-2341-5b80-39d3-4735-9a3c-88d8.cable.dynamic.v6.surfer.at ([2a02:8389:2341:5b80:39d3:4735:9a3c:88d8] helo=localhost) by bombadil.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux)) id 1qEX9U-00G07B-3A; Wed, 28 Jun 2023 15:32:17 +0000 From: Christoph Hellwig To: Chris Mason , Josef Bacik , David Sterba Cc: Matthew Wilcox , linux-btrfs@vger.kernel.org, linux-fsdevel@vger.kernel.org Subject: [PATCH 09/23] btrfs: improve the delalloc_to_write calculation in writepage_delalloc Date: Wed, 28 Jun 2023 17:31:30 +0200 Message-Id: <20230628153144.22834-10-hch@lst.de> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230628153144.22834-1-hch@lst.de> References: <20230628153144.22834-1-hch@lst.de> MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org. See http://www.infradead.org/rpr.html Precedence: bulk List-ID: X-Mailing-List: linux-btrfs@vger.kernel.org Currently writepage_delalloc adds to delalloc_to_write in every loop operation. 
That is not only more work than doing it once after the loop, but can
also over-increment the counter due to rounding errors when a new loop
iteration starts with an offset into a page. Add a new page_start
variable instead of recalculating that value over and over, move the
delalloc_to_write calculation out of the loop, use the DIV_ROUND_UP
helper instead of open coding it, and remove the pointless found local
variable.

Signed-off-by: Christoph Hellwig
---
 fs/btrfs/extent_io.c | 29 ++++++++++++++---------------
 1 file changed, 14 insertions(+), 15 deletions(-)

diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index e32ec41bade681..aa2f88365ad05a 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -1164,8 +1164,10 @@ static inline void contiguous_readpages(struct page *pages[], int nr_pages,
 static noinline_for_stack int writepage_delalloc(struct btrfs_inode *inode,
 		struct page *page, struct writeback_control *wbc)
 {
-	const u64 page_end = page_offset(page) + PAGE_SIZE - 1;
-	u64 delalloc_start = page_offset(page);
+	const u64 page_start = page_offset(page);
+	const u64 page_end = page_start + PAGE_SIZE - 1;
+	u64 delalloc_start = page_start;
+	u64 delalloc_end = page_end;
 	u64 delalloc_to_write = 0;
 	/* How many pages are started by btrfs_run_delalloc_range() */
 	unsigned long nr_written = 0;
@@ -1173,13 +1175,9 @@ static noinline_for_stack int writepage_delalloc(struct btrfs_inode *inode,
 	int page_started = 0;

 	while (delalloc_start < page_end) {
-		u64 delalloc_end = page_end;
-		bool found;
-
-		found = find_lock_delalloc_range(&inode->vfs_inode, page,
-						 &delalloc_start,
-						 &delalloc_end);
-		if (!found) {
+		delalloc_end = page_end;
+		if (!find_lock_delalloc_range(&inode->vfs_inode, page,
+					      &delalloc_start, &delalloc_end)) {
 			delalloc_start = delalloc_end + 1;
 			continue;
 		}
@@ -1188,14 +1186,15 @@ static noinline_for_stack int writepage_delalloc(struct btrfs_inode *inode,
 		if (ret)
 			return ret;

-		/*
-		 * delalloc_end is already one less than the total length, so
-		 * we don't subtract one from PAGE_SIZE
-		 */
-		delalloc_to_write += (delalloc_end - delalloc_start +
-				      PAGE_SIZE) >> PAGE_SHIFT;
 		delalloc_start = delalloc_end + 1;
 	}
+
+	/*
+	 * delalloc_end is already one less than the total length, so
+	 * we don't subtract one from PAGE_SIZE
+	 */
+	delalloc_to_write +=
+		DIV_ROUND_UP(delalloc_end + 1 - page_start, PAGE_SIZE);
 	if (wbc->nr_to_write < delalloc_to_write) {
 		int thresh = 8192;

From patchwork Wed Jun 28 15:31:31 2023
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13295980
From: Christoph Hellwig
To: Chris Mason, Josef Bacik, David Sterba
Cc: Matthew Wilcox, linux-btrfs@vger.kernel.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH 10/23] btrfs: reduce the number of arguments to btrfs_run_delalloc_range
Date: Wed, 28 Jun 2023 17:31:31 +0200
Message-Id: <20230628153144.22834-11-hch@lst.de>
In-Reply-To: <20230628153144.22834-1-hch@lst.de>
References: <20230628153144.22834-1-hch@lst.de>

Instead of a separate page_started argument that tells the callers that
btrfs_run_delalloc_range already started writeback by itself, overload
the return value with a positive 1 in addition to 0 and a negative error
code to indicate that it has already started writeback, and remove the
nr_written argument, as the caller can calculate it directly based on
the range and in fact already does so for the case where writeback
wasn't started yet.
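For clarity, the resulting calling convention as a minimal sketch
(illustrative only, not part of the patch; the caller shown is a
simplified stand-in for the writepage_delalloc caller in the diff
below):

	/*
	 * btrfs_run_delalloc_range() now returns:
	 *   < 0   error, ordered extents were cleaned up by the callee
	 *   == 0  delalloc ranges were created, the caller writes the pages
	 *   == 1  writeback was already started and the pages unlocked,
	 *         the caller only adjusts wbc->nr_to_write
	 */
	ret = btrfs_run_delalloc_range(inode, page, delalloc_start,
				       delalloc_end, wbc);
	if (ret < 0)
		return ret;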
Signed-off-by: Christoph Hellwig --- fs/btrfs/btrfs_inode.h | 3 +- fs/btrfs/extent_io.c | 30 +++++++-------- fs/btrfs/inode.c | 87 ++++++++++++++---------------------------- 3 files changed, 44 insertions(+), 76 deletions(-) diff --git a/fs/btrfs/btrfs_inode.h b/fs/btrfs/btrfs_inode.h index 90e60ad9db6200..bda1fdbba666aa 100644 --- a/fs/btrfs/btrfs_inode.h +++ b/fs/btrfs/btrfs_inode.h @@ -498,8 +498,7 @@ int btrfs_prealloc_file_range_trans(struct inode *inode, u64 start, u64 num_bytes, u64 min_size, loff_t actual_len, u64 *alloc_hint); int btrfs_run_delalloc_range(struct btrfs_inode *inode, struct page *locked_page, - u64 start, u64 end, int *page_started, - unsigned long *nr_written, struct writeback_control *wbc); + u64 start, u64 end, struct writeback_control *wbc); int btrfs_writepage_cow_fixup(struct page *page); int btrfs_encoded_io_compression_from_extent(struct btrfs_fs_info *fs_info, int compress_type); diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c index aa2f88365ad05a..6befffd76e8808 100644 --- a/fs/btrfs/extent_io.c +++ b/fs/btrfs/extent_io.c @@ -1169,10 +1169,7 @@ static noinline_for_stack int writepage_delalloc(struct btrfs_inode *inode, u64 delalloc_start = page_start; u64 delalloc_end = page_end; u64 delalloc_to_write = 0; - /* How many pages are started by btrfs_run_delalloc_range() */ - unsigned long nr_written = 0; - int ret; - int page_started = 0; + int ret = 0; while (delalloc_start < page_end) { delalloc_end = page_end; @@ -1181,9 +1178,10 @@ static noinline_for_stack int writepage_delalloc(struct btrfs_inode *inode, delalloc_start = delalloc_end + 1; continue; } + ret = btrfs_run_delalloc_range(inode, page, delalloc_start, - delalloc_end, &page_started, &nr_written, wbc); - if (ret) + delalloc_end, wbc); + if (ret < 0) return ret; delalloc_start = delalloc_end + 1; @@ -1195,6 +1193,16 @@ static noinline_for_stack int writepage_delalloc(struct btrfs_inode *inode, */ delalloc_to_write += DIV_ROUND_UP(delalloc_end + 1 - page_start, PAGE_SIZE); + + /* + * If btrfs_run_dealloc_range() already started I/O and unlocked + * the pages, we just need to account for them here. + */ + if (ret == 1) { + wbc->nr_to_write -= delalloc_to_write; + return 1; + } + if (wbc->nr_to_write < delalloc_to_write) { int thresh = 8192; @@ -1204,16 +1212,6 @@ static noinline_for_stack int writepage_delalloc(struct btrfs_inode *inode, thresh); } - /* Did btrfs_run_dealloc_range() already unlock and start the IO? */ - if (page_started) { - /* - * We've unlocked the page, so we can't update the mapping's - * writeback index, just update nr_to_write. 
- */ - wbc->nr_to_write -= nr_written; - return 1; - } - return 0; } diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c index c6845b0591b77e..8185e95ad12a19 100644 --- a/fs/btrfs/inode.c +++ b/fs/btrfs/inode.c @@ -129,8 +129,7 @@ static int btrfs_truncate(struct btrfs_inode *inode, bool skip_writeback); #define CFR_NOINLINE (1 << 1) static noinline int cow_file_range(struct btrfs_inode *inode, struct page *locked_page, - u64 start, u64 end, int *page_started, - unsigned long *nr_written, u64 *done_offset, + u64 start, u64 end, u64 *done_offset, u32 flags); static struct extent_map *create_io_em(struct btrfs_inode *inode, u64 start, u64 len, u64 orig_start, u64 block_start, @@ -1132,8 +1131,6 @@ static void submit_uncompressed_range(struct btrfs_inode *inode, { u64 start = async_extent->start; u64 end = async_extent->start + async_extent->ram_size - 1; - unsigned long nr_written = 0; - int page_started = 0; int ret; struct writeback_control wbc = { .sync_mode = WB_SYNC_ALL, @@ -1149,10 +1146,10 @@ static void submit_uncompressed_range(struct btrfs_inode *inode, * Also we call cow_file_range() with @unlock_page == 0, so that we * can directly submit them without interruption. */ - ret = cow_file_range(inode, locked_page, start, end, &page_started, - &nr_written, NULL, CFR_KEEP_LOCKED); + ret = cow_file_range(inode, locked_page, start, end, NULL, + CFR_KEEP_LOCKED); /* Inline extent inserted, page gets unlocked and everything is done */ - if (page_started) + if (ret == 1) return; if (ret < 0) { @@ -1363,8 +1360,8 @@ static u64 get_extent_allocation_hint(struct btrfs_inode *inode, u64 start, * * When this function fails, it unlocks all pages except @locked_page. * - * When this function successfully creates an inline extent, it sets page_started - * to 1 and unlocks all pages including locked_page and starts I/O on them. + * When this function successfully creates an inline extent, it returns 1 and + * unlocks all pages including locked_page and starts I/O on them. * (In reality inline extents are limited to a single page, so locked_page is * the only page handled anyway). * @@ -1381,10 +1378,8 @@ static u64 get_extent_allocation_hint(struct btrfs_inode *inode, u64 start, * example. */ static noinline int cow_file_range(struct btrfs_inode *inode, - struct page *locked_page, - u64 start, u64 end, int *page_started, - unsigned long *nr_written, u64 *done_offset, - u32 flags) + struct page *locked_page, u64 start, u64 end, + u64 *done_offset, u32 flags) { struct btrfs_root *root = inode->root; struct btrfs_fs_info *fs_info = root->fs_info; @@ -1444,9 +1439,6 @@ static noinline int cow_file_range(struct btrfs_inode *inode, EXTENT_DELALLOC_NEW | EXTENT_DEFRAG | EXTENT_DO_ACCOUNTING, PAGE_UNLOCK | PAGE_START_WRITEBACK | PAGE_END_WRITEBACK); - *nr_written = *nr_written + - (end - start + PAGE_SIZE) / PAGE_SIZE; - *page_started = 1; /* * locked_page is locked by the caller of * writepage_delalloc(), not locked by @@ -1456,11 +1448,11 @@ static noinline int cow_file_range(struct btrfs_inode *inode, * as it doesn't have any subpage::writers recorded. * * Here we manually unlock the page, since the caller - * can't use page_started to determine if it's an - * inline extent or a compressed extent. + * can't determine if it's an inline extent or a + * compressed extent. 
*/ unlock_page(locked_page); - goto out; + return 1; } else if (ret < 0) { goto out_unlock; } @@ -1574,7 +1566,6 @@ static noinline int cow_file_range(struct btrfs_inode *inode, if (ret) goto out_unlock; } -out: return ret; out_drop_extent_cache: @@ -1725,10 +1716,8 @@ static noinline void async_cow_free(struct btrfs_work *work) } static bool run_delalloc_compressed(struct btrfs_inode *inode, - struct writeback_control *wbc, - struct page *locked_page, - u64 start, u64 end, int *page_started, - unsigned long *nr_written) + struct page *locked_page, u64 start, + u64 end, struct writeback_control *wbc) { struct btrfs_fs_info *fs_info = inode->root->fs_info; struct cgroup_subsys_state *blkcg_css = wbc_blkcg_css(wbc); @@ -1810,34 +1799,25 @@ static bool run_delalloc_compressed(struct btrfs_inode *inode, btrfs_queue_work(fs_info->delalloc_workers, &async_chunk[i].work); - *nr_written += nr_pages; start = cur_end + 1; } - *page_started = 1; return true; } static noinline int run_delalloc_zoned(struct btrfs_inode *inode, struct page *locked_page, u64 start, - u64 end, int *page_started, - unsigned long *nr_written, - struct writeback_control *wbc) + u64 end, struct writeback_control *wbc) { u64 done_offset = end; int ret; bool locked_page_done = false; while (start <= end) { - ret = cow_file_range(inode, locked_page, start, end, page_started, - nr_written, &done_offset, CFR_KEEP_LOCKED); + ret = cow_file_range(inode, locked_page, start, end, + &done_offset, CFR_KEEP_LOCKED); if (ret && ret != -EAGAIN) return ret; - if (*page_started) { - ASSERT(ret == 0); - return 0; - } - if (ret == 0) done_offset = end; @@ -1858,9 +1838,7 @@ static noinline int run_delalloc_zoned(struct btrfs_inode *inode, start = done_offset + 1; } - *page_started = 1; - - return 0; + return 1; } static noinline int csum_exist_in_range(struct btrfs_fs_info *fs_info, @@ -1893,8 +1871,6 @@ static int fallback_to_cow(struct btrfs_inode *inode, struct page *locked_page, const bool is_reloc_ino = btrfs_is_data_reloc_root(inode->root); const u64 range_bytes = end + 1 - start; struct extent_io_tree *io_tree = &inode->io_tree; - int page_started = 0; - unsigned long nr_written; u64 range_start = start; u64 count; int ret; @@ -1955,9 +1931,9 @@ static int fallback_to_cow(struct btrfs_inode *inode, struct page *locked_page, * is written out and unlocked directly and a normal nocow extent * doesn't work. */ - ret = cow_file_range(inode, locked_page, start, end, &page_started, - &nr_written, NULL, CFR_NOINLINE); - ASSERT(!page_started); + ret = cow_file_range(inode, locked_page, start, end, NULL, + CFR_NOINLINE); + ASSERT(ret != 1); return ret; } @@ -2393,15 +2369,14 @@ static bool should_nocow(struct btrfs_inode *inode, u64 start, u64 end) * being touched for the first time. */ int btrfs_run_delalloc_range(struct btrfs_inode *inode, struct page *locked_page, - u64 start, u64 end, int *page_started, unsigned long *nr_written, - struct writeback_control *wbc) + u64 start, u64 end, struct writeback_control *wbc) { - int ret = 0; const bool zoned = btrfs_is_zoned(inode->root->fs_info); + int ret; /* - * The range must cover part of the @locked_page, or the returned - * @page_started can confuse the caller. + * The range must cover part of the @locked_page, or a return of 1 + * can confuse the caller. 
*/ ASSERT(!(end <= page_offset(locked_page) || start >= page_offset(locked_page) + PAGE_SIZE)); @@ -2421,20 +2396,16 @@ int btrfs_run_delalloc_range(struct btrfs_inode *inode, struct page *locked_page if (btrfs_inode_can_compress(inode) && inode_need_compress(inode, start, end) && - run_delalloc_compressed(inode, wbc, locked_page, start, - end, page_started, nr_written)) - goto out; + run_delalloc_compressed(inode, locked_page, start, end, wbc)) + return 1; if (zoned) - ret = run_delalloc_zoned(inode, locked_page, start, end, - page_started, nr_written, wbc); + ret = run_delalloc_zoned(inode, locked_page, start, end, wbc); else - ret = cow_file_range(inode, locked_page, start, end, - page_started, nr_written, NULL, 0); + ret = cow_file_range(inode, locked_page, start, end, NULL, 0); out: - ASSERT(ret <= 0); - if (ret) + if (ret < 0) btrfs_cleanup_ordered_extents(inode, locked_page, start, end - start + 1); return ret; From patchwork Wed Jun 28 15:31:32 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 13295978 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 68137EB64DD for ; Wed, 28 Jun 2023 15:32:41 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232319AbjF1Pci (ORCPT ); Wed, 28 Jun 2023 11:32:38 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60892 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232255AbjF1Pc1 (ORCPT ); Wed, 28 Jun 2023 11:32:27 -0400 Received: from bombadil.infradead.org (bombadil.infradead.org [IPv6:2607:7c80:54:3::133]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D786D30C5; Wed, 28 Jun 2023 08:32:25 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender :Reply-To:Content-Type:Content-ID:Content-Description; bh=7ayhH8VZI9kxAFPFQzSgayXxw0cyNFJ82CM16lTSVPI=; b=Fc2PuFirqXY1uwY9mNYHYVZ5ct vSUgnRDBLtrjWA4htbVmZrSLMOyk/6FCW+YaW6B7LMl5vizPZy6dQQvahtpDl9QeqjzlZppaLaMzP pIL59Cld2oWu/mFgtL/OQEH8fgewCoOsgRG9VD0kkyhwyt6bclsA1JnjmJj5UUEEOhUO79hn//nZX SFwwEvaE/O9H6zh7qW4dJYdd6lrwKvujKvHMXQegIgFCSIvU1SBocFnPaAQMoUJBhnasCiNKZ1+mW a6dYdNStF7EMY58bLxU9Z8j1PlaPLTBHmSk57dcGLodjTsnDArRpuABuZXGsYjg7hvEtCMJI+pXC2 Jyz9+yQg==; Received: from 2a02-8389-2341-5b80-39d3-4735-9a3c-88d8.cable.dynamic.v6.surfer.at ([2a02:8389:2341:5b80:39d3:4735:9a3c:88d8] helo=localhost) by bombadil.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux)) id 1qEX9b-00G08W-24; Wed, 28 Jun 2023 15:32:23 +0000 From: Christoph Hellwig To: Chris Mason , Josef Bacik , David Sterba Cc: Matthew Wilcox , linux-btrfs@vger.kernel.org, linux-fsdevel@vger.kernel.org Subject: [PATCH 11/23] btrfs: clean up the check for uncompressed ranges in submit_one_async_extent Date: Wed, 28 Jun 2023 17:31:32 +0200 Message-Id: <20230628153144.22834-12-hch@lst.de> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230628153144.22834-1-hch@lst.de> References: <20230628153144.22834-1-hch@lst.de> MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org. 
See http://www.infradead.org/rpr.html Precedence: bulk List-ID: X-Mailing-List: linux-btrfs@vger.kernel.org Instead of checking for a NULL !pages and explaining this with a cryptic comment, just check the compression type for BTRFS_COMPRESS_NONE to make the check self-explanatory. Signed-off-by: Christoph Hellwig Reviewed-by: Johannes Thumshirn --- fs/btrfs/inode.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c index 8185e95ad12a19..6197b33fb0b23b 100644 --- a/fs/btrfs/inode.c +++ b/fs/btrfs/inode.c @@ -1209,8 +1209,7 @@ static void submit_one_async_extent(struct async_chunk *async_chunk, } lock_extent(io_tree, start, end, NULL); - /* We have fall back to uncompressed write */ - if (!async_extent->pages) { + if (async_extent->compress_type == BTRFS_COMPRESS_NONE) { submit_uncompressed_range(inode, async_extent, locked_page); goto done; } From patchwork Wed Jun 28 15:31:33 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 13295977 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3AFBBEB64DA for ; Wed, 28 Jun 2023 15:32:39 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232305AbjF1Pcg (ORCPT ); Wed, 28 Jun 2023 11:32:36 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60884 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231268AbjF1Pca (ORCPT ); Wed, 28 Jun 2023 11:32:30 -0400 Received: from bombadil.infradead.org (bombadil.infradead.org [IPv6:2607:7c80:54:3::133]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 31BAC268F; Wed, 28 Jun 2023 08:32:29 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender :Reply-To:Content-Type:Content-ID:Content-Description; bh=PJKul60FOrF0f6XpU9hW903HmzmLSrQrksohee9v4z0=; b=X6RDIoN45khiqil7iJ0SnHv1Aw w5x9uGpEx6EWZTE2nfb5niCwAg0r4uy6bOfJLle0uEstoRP/zxZ5Rhqw8Rmsoktg4SlRuPlx7MKMF hsbeQXS1Gr+9uZG5DNj8fmbNxNYd6Lraq4FarhpsubSB/FqJ28XedqTFP2tKG7LVOM1VtfU0AzkwZ 3j4l+fmvGibllJUFR/KOI0v760gzvfHfRXVPzmgIxlb7UKUexM2/SkE816KNWWzrVlGAUzzuoYVmh QDRr4iIdlMcGgERqgeKe5Eaw2xkZrHdWhIsgQuOUL+JHj6dmd6D1M0NZQXktRxKedPQHv0KvEvImz a7zruMoQ==; Received: from 2a02-8389-2341-5b80-39d3-4735-9a3c-88d8.cable.dynamic.v6.surfer.at ([2a02:8389:2341:5b80:39d3:4735:9a3c:88d8] helo=localhost) by bombadil.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux)) id 1qEX9e-00G08o-2p; Wed, 28 Jun 2023 15:32:27 +0000 From: Christoph Hellwig To: Chris Mason , Josef Bacik , David Sterba Cc: Matthew Wilcox , linux-btrfs@vger.kernel.org, linux-fsdevel@vger.kernel.org Subject: [PATCH 12/23] btrfs: don't clear async_chunk->inode in async_cow_start Date: Wed, 28 Jun 2023 17:31:33 +0200 Message-Id: <20230628153144.22834-13-hch@lst.de> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230628153144.22834-1-hch@lst.de> References: <20230628153144.22834-1-hch@lst.de> MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org. 
See http://www.infradead.org/rpr.html Precedence: bulk List-ID: X-Mailing-List: linux-btrfs@vger.kernel.org Now that the ->inode check isn't needed in submit_compressed_extents any more, there is no reason to clear the field early. Always keep the inode around until the work item is finished and remove the special casing, and the counting of compressed extents in compress_file_range. Signed-off-by: Christoph Hellwig --- fs/btrfs/inode.c | 23 +++++------------------ 1 file changed, 5 insertions(+), 18 deletions(-) diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c index 6197b33fb0b23b..f8fbcd359a304d 100644 --- a/fs/btrfs/inode.c +++ b/fs/btrfs/inode.c @@ -832,7 +832,7 @@ static inline void inode_should_defrag(struct btrfs_inode *inode, * are written in the same order that the flusher thread sent them * down. */ -static noinline int compress_file_range(struct async_chunk *async_chunk) +static noinline void compress_file_range(struct async_chunk *async_chunk) { struct btrfs_inode *inode = async_chunk->inode; struct btrfs_fs_info *fs_info = inode->root->fs_info; @@ -850,7 +850,6 @@ static noinline int compress_file_range(struct async_chunk *async_chunk) int i; int will_compress; int compress_type = fs_info->compress_type; - int compressed_extents = 0; int redirty = 0; inode_should_defrag(inode, start, end, end - start + 1, SZ_16K); @@ -1027,7 +1026,7 @@ static noinline int compress_file_range(struct async_chunk *async_chunk) } kfree(pages); } - return 0; + return; } } @@ -1046,8 +1045,6 @@ static noinline int compress_file_range(struct async_chunk *async_chunk) */ total_in = round_up(total_in, fs_info->sectorsize); if (total_compressed + blocksize <= total_in) { - compressed_extents++; - /* * The async work queues will take care of doing actual * allocation on disk for these compressed pages, and @@ -1063,7 +1060,7 @@ static noinline int compress_file_range(struct async_chunk *async_chunk) cond_resched(); goto again; } - return compressed_extents; + return; } } if (pages) { @@ -1104,9 +1101,6 @@ static noinline int compress_file_range(struct async_chunk *async_chunk) extent_range_redirty_for_io(&inode->vfs_inode, start, end); add_async_extent(async_chunk, start, end - start + 1, 0, NULL, 0, BTRFS_COMPRESS_NONE); - compressed_extents++; - - return compressed_extents; } static void free_async_extent_pages(struct async_extent *async_extent) @@ -1659,15 +1653,9 @@ static noinline int cow_file_range(struct btrfs_inode *inode, static noinline void async_cow_start(struct btrfs_work *work) { struct async_chunk *async_chunk; - int compressed_extents; async_chunk = container_of(work, struct async_chunk, work); - - compressed_extents = compress_file_range(async_chunk); - if (compressed_extents == 0) { - btrfs_add_delayed_iput(async_chunk->inode); - async_chunk->inode = NULL; - } + compress_file_range(async_chunk); } /* @@ -1704,8 +1692,7 @@ static noinline void async_cow_free(struct btrfs_work *work) struct async_cow *async_cow; async_chunk = container_of(work, struct async_chunk, work); - if (async_chunk->inode) - btrfs_add_delayed_iput(async_chunk->inode); + btrfs_add_delayed_iput(async_chunk->inode); if (async_chunk->blkcg_css) css_put(async_chunk->blkcg_css); From patchwork Wed Jun 28 15:31:34 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 13295981 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org 
(vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 62ED8C001B0 for ; Wed, 28 Jun 2023 15:32:51 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232362AbjF1Pct (ORCPT ); Wed, 28 Jun 2023 11:32:49 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60902 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232292AbjF1Pcc (ORCPT ); Wed, 28 Jun 2023 11:32:32 -0400 Received: from bombadil.infradead.org (bombadil.infradead.org [IPv6:2607:7c80:54:3::133]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E34802110; Wed, 28 Jun 2023 08:32:31 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender :Reply-To:Content-Type:Content-ID:Content-Description; bh=xJ1qPIJfB3vnqEcbxhABSkV4Ss9+HxmJh6KO2xy1RwM=; b=aToMyk5kp3G+oyaR05SyvG45RM YFwbeCyZajLKo8OxWD9LuMhDm3RX4hyuQONN2DTLG63hqDRxTb7x//SmiYIvbpqTVmOgU7N5YQhv5 6ohvtydKJfaf0dZ/fc4dQp/ZlFgadg0t6ETKYGRro6XDXo26eolUOKasOnYDPAER1PgiwytW8U2oO 0v5HjpDwJbwFcjt3YNZrtSrhs1NCXoikvJMsUhb8pUfCBf/zpCWb8blyrNRBl2Gtge58QXKDGwCCM t3aa21HFltrmVW4qRJhnSrcRkKS4cXuYw0/e5K0jNJLzDFEzCwyzZshVmcTaORwjI6v7Ctxz9CRDk EWBqzO8w==; Received: from 2a02-8389-2341-5b80-39d3-4735-9a3c-88d8.cable.dynamic.v6.surfer.at ([2a02:8389:2341:5b80:39d3:4735:9a3c:88d8] helo=localhost) by bombadil.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux)) id 1qEX9h-00G09M-1t; Wed, 28 Jun 2023 15:32:30 +0000 From: Christoph Hellwig To: Chris Mason , Josef Bacik , David Sterba Cc: Matthew Wilcox , linux-btrfs@vger.kernel.org, linux-fsdevel@vger.kernel.org Subject: [PATCH 13/23] btrfs: merge async_cow_start and compress_file_range Date: Wed, 28 Jun 2023 17:31:34 +0200 Message-Id: <20230628153144.22834-14-hch@lst.de> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230628153144.22834-1-hch@lst.de> References: <20230628153144.22834-1-hch@lst.de> MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org. See http://www.infradead.org/rpr.html Precedence: bulk List-ID: X-Mailing-List: linux-btrfs@vger.kernel.org There is no good reason to have the simple async_cow_start wrapper, merge the argument conversion into the main compress_file_range function. Signed-off-by: Christoph Hellwig Reviewed-by: Johannes Thumshirn --- fs/btrfs/inode.c | 43 ++++++++++++++++--------------------------- 1 file changed, 16 insertions(+), 27 deletions(-) diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c index f8fbcd359a304d..1e1d6584e1abaa 100644 --- a/fs/btrfs/inode.c +++ b/fs/btrfs/inode.c @@ -816,24 +816,22 @@ static inline void inode_should_defrag(struct btrfs_inode *inode, } /* - * we create compressed extents in two phases. The first - * phase compresses a range of pages that have already been - * locked (both pages and state bits are locked). + * Work queue call back to started compression on a file and pages. * - * This is done inside an ordered work queue, and the compression - * is spread across many cpus. The actual IO submission is step - * two, and the ordered work queue takes care of making sure that - * happens in the same order things were put onto the queue by - * writepages and friends. + * This is done inside an ordered work queue, and the compression is spread + * across many cpus. 
The actual IO submission is step two, and the ordered work + * queue takes care of making sure that happens in the same order things were + * put onto the queue by writepages and friends. * - * If this code finds it can't get good compression, it puts an - * entry onto the work queue to write the uncompressed bytes. This - * makes sure that both compressed inodes and uncompressed inodes - * are written in the same order that the flusher thread sent them - * down. + * If this code finds it can't get good compression, it puts an entry onto the + * work queue to write the uncompressed bytes. This makes sure that both + * compressed inodes and uncompressed inodes are written in the same order that + * the flusher thread sent them down. */ -static noinline void compress_file_range(struct async_chunk *async_chunk) +static void compress_file_range(struct btrfs_work *work) { + struct async_chunk *async_chunk = + container_of(work, struct async_chunk, work); struct btrfs_inode *inode = async_chunk->inode; struct btrfs_fs_info *fs_info = inode->root->fs_info; struct address_space *mapping = inode->vfs_inode.i_mapping; @@ -1648,18 +1646,9 @@ static noinline int cow_file_range(struct btrfs_inode *inode, } /* - * work queue call back to started compression on a file and pages - */ -static noinline void async_cow_start(struct btrfs_work *work) -{ - struct async_chunk *async_chunk; - - async_chunk = container_of(work, struct async_chunk, work); - compress_file_range(async_chunk); -} - -/* - * work queue call back to submit previously compressed pages + * Phase two of compressed writeback. This is the ordered portion of the code, + * which only gets called in the order the work was queued. We walk all the + * async extents created by compress_file_range and send them down to the disk. 
 */
 static noinline void async_cow_submit(struct btrfs_work *work)
 {
@@ -1777,7 +1766,7 @@ static bool run_delalloc_compressed(struct btrfs_inode *inode,
 			async_chunk[i].blkcg_css = NULL;
 		}

-		btrfs_init_work(&async_chunk[i].work, async_cow_start,
+		btrfs_init_work(&async_chunk[i].work, compress_file_range,
 				async_cow_submit, async_cow_free);

 		nr_pages = DIV_ROUND_UP(cur_end - start, PAGE_SIZE);

From patchwork Wed Jun 28 15:31:35 2023
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13295982
From: Christoph Hellwig
To: Chris Mason, Josef Bacik, David Sterba
Cc: Matthew Wilcox, linux-btrfs@vger.kernel.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH 14/23] btrfs: merge submit_compressed_extents and async_cow_submit
Date: Wed, 28 Jun 2023 17:31:35 +0200
Message-Id: <20230628153144.22834-15-hch@lst.de>
In-Reply-To: <20230628153144.22834-1-hch@lst.de>
References: <20230628153144.22834-1-hch@lst.de>

The code in submit_compressed_extents just loops over the async_extents,
and doesn't need to be conditional on an inode being present, as there
won't be any async_extent in the list if we created an inline extent.
Merge the two functions to simplify the logic.
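The key observation, as a sketch (this mirrors the loop in the diff
below rather than adding new code): draining an empty list is a no-op,
so the merged handler needs no NULL-inode special case for the
inline-extent path that queues no async_extents:

	while (!list_empty(&async_chunk->extents)) {
		/* never entered when compress_file_range queued nothing */
		async_extent = list_entry(async_chunk->extents.next,
					  struct async_extent, list);
		list_del(&async_extent->list);
		submit_one_async_extent(async_chunk, async_extent,
					&alloc_hint);
	}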
Signed-off-by: Christoph Hellwig --- fs/btrfs/inode.c | 39 ++++++++++----------------------------- 1 file changed, 10 insertions(+), 29 deletions(-) diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c index 1e1d6584e1abaa..09f8c6f2f4bf88 100644 --- a/fs/btrfs/inode.c +++ b/fs/btrfs/inode.c @@ -1289,25 +1289,6 @@ static void submit_one_async_extent(struct async_chunk *async_chunk, kfree(async_extent); } -/* - * Phase two of compressed writeback. This is the ordered portion of the code, - * which only gets called in the order the work was queued. We walk all the - * async extents created by compress_file_range and send them down to the disk. - */ -static noinline void submit_compressed_extents(struct async_chunk *async_chunk) -{ - struct async_extent *async_extent; - u64 alloc_hint = 0; - - while (!list_empty(&async_chunk->extents)) { - async_extent = list_entry(async_chunk->extents.next, - struct async_extent, list); - list_del(&async_extent->list); - - submit_one_async_extent(async_chunk, async_extent, &alloc_hint); - } -} - static u64 get_extent_allocation_hint(struct btrfs_inode *inode, u64 start, u64 num_bytes) { @@ -1650,24 +1631,24 @@ static noinline int cow_file_range(struct btrfs_inode *inode, * which only gets called in the order the work was queued. We walk all the * async extents created by compress_file_range and send them down to the disk. */ -static noinline void async_cow_submit(struct btrfs_work *work) +static noinline void submit_compressed_extents(struct btrfs_work *work) { struct async_chunk *async_chunk = container_of(work, struct async_chunk, work); struct btrfs_fs_info *fs_info = btrfs_work_owner(work); + struct async_extent *async_extent; unsigned long nr_pages; + u64 alloc_hint = 0; nr_pages = (async_chunk->end - async_chunk->start + PAGE_SIZE) >> PAGE_SHIFT; - /* - * ->inode could be NULL if async_chunk_start has failed to compress, - * in which case we don't have anything to submit, yet we need to - * always adjust ->async_delalloc_pages as its paired with the init - * happening in run_delalloc_compressed - */ - if (async_chunk->inode) - submit_compressed_extents(async_chunk); + while (!list_empty(&async_chunk->extents)) { + async_extent = list_entry(async_chunk->extents.next, + struct async_extent, list); + list_del(&async_extent->list); + submit_one_async_extent(async_chunk, async_extent, &alloc_hint); + } /* atomic_sub_return implies a barrier */ if (atomic_sub_return(nr_pages, &fs_info->async_delalloc_pages) < @@ -1767,7 +1748,7 @@ static bool run_delalloc_compressed(struct btrfs_inode *inode, } btrfs_init_work(&async_chunk[i].work, compress_file_range, - async_cow_submit, async_cow_free); + submit_compressed_extents, async_cow_free); nr_pages = DIV_ROUND_UP(cur_end - start, PAGE_SIZE); atomic_add(nr_pages, &fs_info->async_delalloc_pages); From patchwork Wed Jun 28 15:31:36 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 13295983 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C020AEB64D7 for ; Wed, 28 Jun 2023 15:32:56 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232396AbjF1Pcx (ORCPT ); Wed, 28 Jun 2023 11:32:53 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60914 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) 
by vger.kernel.org with ESMTP id S232322AbjF1Pck (ORCPT ); Wed, 28 Jun 2023 11:32:40 -0400 Received: from bombadil.infradead.org (bombadil.infradead.org [IPv6:2607:7c80:54:3::133]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C826E2690; Wed, 28 Jun 2023 08:32:38 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender :Reply-To:Content-Type:Content-ID:Content-Description; bh=D+Oa6Jx9+3YAgFpxJAjbhlYziAbEOo8V8qTVMG1/XKE=; b=IHYwHm+vjVHFYDPFr4A4s6eUol N5vxZKqfsc4SYiq4wJI9WsYERZwS4Qfgphc472y6V4ou7xyDdsuHuDPIu7agIQceg7q75CXaz8KLn YDjbPsH6MKWbf38NPKPM3EF7s0pwRkLu4lyICUbU4sQ3W1ANtHNi0Lz6muDPp5yWMoxOChd0NpIgZ tSwGMKi7TGuVkpQHIWAW7oBNuh2zl9XAB9usYuPtU/2r+IHH8QYi7CaKknxadyfyDGMPgUrZPIPNa G+1rvjqBVev/xg70H93lSlYu2nA1bwxh3rST08mIdNXuYsuEisYUnPflKmOVFin3gzKjfweJca4Fi cn9kKI2g==; Received: from 2a02-8389-2341-5b80-39d3-4735-9a3c-88d8.cable.dynamic.v6.surfer.at ([2a02:8389:2341:5b80:39d3:4735:9a3c:88d8] helo=localhost) by bombadil.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux)) id 1qEX9o-00G0AW-0x; Wed, 28 Jun 2023 15:32:36 +0000 From: Christoph Hellwig To: Chris Mason , Josef Bacik , David Sterba Cc: Matthew Wilcox , linux-btrfs@vger.kernel.org, linux-fsdevel@vger.kernel.org Subject: [PATCH 15/23] btrfs: streamline compress_file_range Date: Wed, 28 Jun 2023 17:31:36 +0200 Message-Id: <20230628153144.22834-16-hch@lst.de> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230628153144.22834-1-hch@lst.de> References: <20230628153144.22834-1-hch@lst.de> MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org. See http://www.infradead.org/rpr.html Precedence: bulk List-ID: X-Mailing-List: linux-btrfs@vger.kernel.org Reorder compress_file_range so that the main compression flow happens straight line and not in branches. To do this ensure that pages is always zeroed before a page allocation happens, which allows the cleanup_and_bail_uncompressed label to clean up the page allocations as needed. Signed-off-by: Christoph Hellwig --- fs/btrfs/inode.c | 165 +++++++++++++++++++++++------------------------ 1 file changed, 81 insertions(+), 84 deletions(-) diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c index 09f8c6f2f4bf88..e7c05d07ff50f8 100644 --- a/fs/btrfs/inode.c +++ b/fs/btrfs/inode.c @@ -841,10 +841,11 @@ static void compress_file_range(struct btrfs_work *work) u64 actual_end; u64 i_size; int ret = 0; - struct page **pages = NULL; + struct page **pages; unsigned long nr_pages; unsigned long total_compressed = 0; unsigned long total_in = 0; + unsigned int poff; int i; int will_compress; int compress_type = fs_info->compress_type; @@ -867,6 +868,7 @@ static void compress_file_range(struct btrfs_work *work) actual_end = min_t(u64, i_size, end + 1); again: will_compress = 0; + pages = NULL; nr_pages = (end >> PAGE_SHIFT) - (start >> PAGE_SHIFT) + 1; nr_pages = min_t(unsigned long, nr_pages, BTRFS_MAX_COMPRESSED_PAGES); @@ -910,66 +912,62 @@ static void compress_file_range(struct btrfs_work *work) ret = 0; /* - * we do compression for mount -o compress and when the - * inode has not been flagged as nocompress. This flag can - * change at any time if we discover bad compression ratios. + * We do compression for mount -o compress and when the inode has not + * been flagged as nocompress. This flag can change at any time if we + * discover bad compression ratios. 
*/ - if (inode_need_compress(inode, start, end)) { - WARN_ON(pages); - pages = kcalloc(nr_pages, sizeof(struct page *), GFP_NOFS); - if (!pages) { - /* just bail out to the uncompressed code */ - nr_pages = 0; - goto cont; - } + if (!inode_need_compress(inode, start, end)) + goto cont; - if (inode->defrag_compress) - compress_type = inode->defrag_compress; - else if (inode->prop_compress) - compress_type = inode->prop_compress; + pages = kcalloc(nr_pages, sizeof(struct page *), GFP_NOFS); + if (!pages) { + /* just bail out to the uncompressed code */ + nr_pages = 0; + goto cont; + } - /* - * we need to call clear_page_dirty_for_io on each - * page in the range. Otherwise applications with the file - * mmap'd can wander in and change the page contents while - * we are compressing them. - * - * If the compression fails for any reason, we set the pages - * dirty again later on. - * - * Note that the remaining part is redirtied, the start pointer - * has moved, the end is the original one. - */ - if (!redirty) { - extent_range_clear_dirty_for_io(&inode->vfs_inode, start, end); - redirty = 1; - } + if (inode->defrag_compress) + compress_type = inode->defrag_compress; + else if (inode->prop_compress) + compress_type = inode->prop_compress; + + /* + * We need to call clear_page_dirty_for_io on each page in the range. + * Otherwise applications with the file mmap'd can wander in and change + * the page contents while we are compressing them. + * + * If the compression fails for any reason, we set the pages dirty again + * later on. + * + * Note that the remaining part is redirtied, the start pointer has + * moved, the end is the original one. + */ + if (!redirty) { + extent_range_clear_dirty_for_io(&inode->vfs_inode, start, end); + redirty = 1; + } - /* Compression level is applied here and only here */ - ret = btrfs_compress_pages( - compress_type | (fs_info->compress_level << 4), - mapping, start, - pages, - &nr_pages, - &total_in, - &total_compressed); + /* Compression level is applied here and only here */ + ret = btrfs_compress_pages(compress_type | + (fs_info->compress_level << 4), + mapping, start, pages, &nr_pages, &total_in, + &total_compressed); + if (ret) + goto cont; - if (!ret) { - unsigned long offset = offset_in_page(total_compressed); - struct page *page = pages[nr_pages - 1]; + /* + * Zero the tail end of the last page, as we might be sending it down + * to disk. + */ + poff = offset_in_page(total_compressed); + if (poff) + memzero_page(pages[nr_pages - 1], poff, PAGE_SIZE - poff); + will_compress = 1; - /* zero the tail end of the last page, we might be - * sending it down to disk - */ - if (offset) - memzero_page(page, offset, PAGE_SIZE - offset); - will_compress = 1; - } - } cont: /* * Check cow_file_range() for why we don't even try to create inline - * extent for subpage case. + * extent for the subpage case. 
*/ if (start == 0 && fs_info->sectorsize == PAGE_SIZE) { /* lets try to make an inline extent */ @@ -1028,39 +1026,38 @@ static void compress_file_range(struct btrfs_work *work) } } - if (will_compress) { - /* - * we aren't doing an inline extent round the compressed size - * up to a block size boundary so the allocator does sane - * things - */ - total_compressed = ALIGN(total_compressed, blocksize); + if (!will_compress) + goto cleanup_and_bail_uncompressed; - /* - * one last check to make sure the compression is really a - * win, compare the page count read with the blocks on disk, - * compression must free at least one sector size - */ - total_in = round_up(total_in, fs_info->sectorsize); - if (total_compressed + blocksize <= total_in) { - /* - * The async work queues will take care of doing actual - * allocation on disk for these compressed pages, and - * will submit them to the elevator. - */ - add_async_extent(async_chunk, start, total_in, - total_compressed, pages, nr_pages, - compress_type); - - if (start + total_in < end) { - start += total_in; - pages = NULL; - cond_resched(); - goto again; - } - return; - } + /* + * We aren't doing an inline extent. Round the compressed size up to a + * block size boundary so the allocator does sane things. + */ + total_compressed = ALIGN(total_compressed, blocksize); + + /* + * One last check to make sure the compression is really a win, compare + * the page count read with the blocks on disk, compression must free at + * least one sector. + */ + total_in = round_up(total_in, fs_info->sectorsize); + if (total_compressed + blocksize > total_in) + goto cleanup_and_bail_uncompressed; + + /* + * The async work queues will take care of doing actual allocation on + * disk for these compressed pages, and will submit the bios. + */ + add_async_extent(async_chunk, start, total_in, total_compressed, pages, + nr_pages, compress_type); + if (start + total_in < end) { + start += total_in; + cond_resched(); + goto again; } + return; + +cleanup_and_bail_uncompressed: if (pages) { /* * the compression code ran but failed to make things smaller, @@ -1081,7 +1078,7 @@ static void compress_file_range(struct btrfs_work *work) inode->flags |= BTRFS_INODE_NOCOMPRESS; } } -cleanup_and_bail_uncompressed: + /* * No compression, but we still need to write the pages in the file * we've been given so far. 
redirty the locked page if it corresponds

From patchwork Wed Jun 28 15:31:37 2023
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13295984
From: Christoph Hellwig
To: Chris Mason, Josef Bacik, David Sterba
Cc: Matthew Wilcox, linux-btrfs@vger.kernel.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH 16/23] btrfs: further simplify the compress or not logic in compress_file_range
Date: Wed, 28 Jun 2023 17:31:37 +0200
Message-Id: <20230628153144.22834-17-hch@lst.de>
In-Reply-To: <20230628153144.22834-1-hch@lst.de>
References: <20230628153144.22834-1-hch@lst.de>

Currently the logic that decides whether or not to compress in
compress_file_range is a bit convoluted, because it tries to share the
code that creates inline extents between the compressible [1] path and
the bail-to-uncompressed path. But the latter isn't needed at all,
because cow_file_range as called by submit_uncompressed_range will
already create inline extents as needed, so there is no need for special
handling if we can live with the fact that it will be called a bit
later, in the ->ordered_func of the workqueue, instead of right now.

[1] There is undocumented logic that creates an uncompressed inline
extent outside of the do-not-compress logic if total_in is too small.
This logic isn't explained in comments or any commit log I could find,
so I've preserved it. Documentation explaining it would be appreciated
if anyone understands this code.
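Condensed, the inline extent decision this patch leaves behind looks
like the sketch below (not a verbatim excerpt; the trailing arguments of
the compressed call are abbreviated from the diff that follows):

	if (start == 0 && fs_info->sectorsize == PAGE_SIZE) {
		if (total_in < actual_end)
			/* range not fully compressed: uncompressed inline */
			ret = cow_file_range_inline(inode, actual_end, 0,
						    BTRFS_COMPRESS_NONE,
						    NULL, false);
		else
			/* whole range compressed: compressed inline */
			ret = cow_file_range_inline(inode, actual_end,
						    total_compressed,
						    compress_type, pages,
						    false);
	}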
Signed-off-by: Christoph Hellwig --- fs/btrfs/inode.c | 51 ++++++++++++++++-------------------------------- 1 file changed, 17 insertions(+), 34 deletions(-) diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c index e7c05d07ff50f8..560682a5d9d7aa 100644 --- a/fs/btrfs/inode.c +++ b/fs/btrfs/inode.c @@ -847,7 +847,6 @@ static void compress_file_range(struct btrfs_work *work) unsigned long total_in = 0; unsigned int poff; int i; - int will_compress; int compress_type = fs_info->compress_type; int redirty = 0; @@ -867,7 +866,6 @@ static void compress_file_range(struct btrfs_work *work) barrier(); actual_end = min_t(u64, i_size, end + 1); again: - will_compress = 0; pages = NULL; nr_pages = (end >> PAGE_SHIFT) - (start >> PAGE_SHIFT) + 1; nr_pages = min_t(unsigned long, nr_pages, BTRFS_MAX_COMPRESSED_PAGES); @@ -917,14 +915,12 @@ static void compress_file_range(struct btrfs_work *work) * discover bad compression ratios. */ if (!inode_need_compress(inode, start, end)) - goto cont; + goto cleanup_and_bail_uncompressed; pages = kcalloc(nr_pages, sizeof(struct page *), GFP_NOFS); - if (!pages) { + if (!pages) /* just bail out to the uncompressed code */ - nr_pages = 0; - goto cont; - } + goto cleanup_and_bail_uncompressed; if (inode->defrag_compress) compress_type = inode->defrag_compress; @@ -953,7 +949,7 @@ static void compress_file_range(struct btrfs_work *work) mapping, start, pages, &nr_pages, &total_in, &total_compressed); if (ret) - goto cont; + goto cleanup_and_bail_uncompressed; /* * Zero the tail end of the last page, as we might be sending it down @@ -962,24 +958,22 @@ static void compress_file_range(struct btrfs_work *work) poff = offset_in_page(total_compressed); if (poff) memzero_page(pages[nr_pages - 1], poff, PAGE_SIZE - poff); - will_compress = 1; -cont: /* + * Try to create an inline extent. + * + * If we didn't compress the entire range, try to create an uncompressed + * inline extent, else a compressed one. + * * Check cow_file_range() for why we don't even try to create inline * extent for the subpage case. */ if (start == 0 && fs_info->sectorsize == PAGE_SIZE) { - /* lets try to make an inline extent */ - if (ret || total_in < actual_end) { - /* we didn't compress the entire range, try - * to make an uncompressed inline extent. - */ - ret = cow_file_range_inline(inode, actual_end, - 0, BTRFS_COMPRESS_NONE, - NULL, false); + if (total_in < actual_end) { + ret = cow_file_range_inline(inode, actual_end, 0, + BTRFS_COMPRESS_NONE, NULL, + false); } else { - /* try making a compressed inline extent */ ret = cow_file_range_inline(inode, actual_end, total_compressed, compress_type, pages, @@ -1009,26 +1003,15 @@ static void compress_file_range(struct btrfs_work *work) PAGE_UNLOCK | PAGE_START_WRITEBACK | PAGE_END_WRITEBACK); - - /* - * Ensure we only free the compressed pages if we have - * them allocated, as we can still reach here with - * inode_need_compress() == false. - */ - if (pages) { - for (i = 0; i < nr_pages; i++) { - WARN_ON(pages[i]->mapping); - put_page(pages[i]); - } - kfree(pages); + for (i = 0; i < nr_pages; i++) { + WARN_ON(pages[i]->mapping); + put_page(pages[i]); } + kfree(pages); return; } } - if (!will_compress) - goto cleanup_and_bail_uncompressed; - /* * We aren't doing an inline extent. Round the compressed size up to a * block size boundary so the allocator does sane things. 
From patchwork Wed Jun 28 15:31:38 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 13295985 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id A9F54C001B0 for ; Wed, 28 Jun 2023 15:33:06 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232430AbjF1PdE (ORCPT ); Wed, 28 Jun 2023 11:33:04 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60984 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232381AbjF1Pcv (ORCPT ); Wed, 28 Jun 2023 11:32:51 -0400 Received: from bombadil.infradead.org (bombadil.infradead.org [IPv6:2607:7c80:54:3::133]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A3FCC2D72; Wed, 28 Jun 2023 08:32:44 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender :Reply-To:Content-Type:Content-ID:Content-Description; bh=t0IHZiCOY5VW7cxEG9l1eoC3QT227WuoADquDBph/4c=; b=RiKGY6P1bGKm3tCLfjrTvQudpk yHI/EcLjRO5QLRqtSe0TeOzdHOclhbkhqn7M6LPlLr7Y/Br/xJ4HTwxzqVbRiD0cKRY1M0bbdegM0 fv2CE6UnJ0B4g70v6oeBDzvbuM/saU+dDeIleVve/7JqwBD+3DKVkaGnihxUgleJ7FhgK+gfXDXLy Nr36LY/xkXFc/TkwQ26fvNe8lujBKdTFslbr5CfVbZdGnaRLXOfjmciWomdC/qH/GrLjIE+Mimpim GoKVDHyNQjBJD8Bx2VDFIlTiU0ReoUWPilycqJeeH6YO3OMtzRNQ24C/6e91z/hxLn+7KdOfTBNrh kyVnOb/g==; Received: from 2a02-8389-2341-5b80-39d3-4735-9a3c-88d8.cable.dynamic.v6.surfer.at ([2a02:8389:2341:5b80:39d3:4735:9a3c:88d8] helo=localhost) by bombadil.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux)) id 1qEX9u-00G0CW-0Y; Wed, 28 Jun 2023 15:32:42 +0000 From: Christoph Hellwig To: Chris Mason , Josef Bacik , David Sterba Cc: Matthew Wilcox , linux-btrfs@vger.kernel.org, linux-fsdevel@vger.kernel.org Subject: [PATCH 17/23] btrfs: use a separate label for the incompressible case in compress_file_range Date: Wed, 28 Jun 2023 17:31:38 +0200 Message-Id: <20230628153144.22834-18-hch@lst.de> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230628153144.22834-1-hch@lst.de> References: <20230628153144.22834-1-hch@lst.de> MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org. See http://www.infradead.org/rpr.html Precedence: bulk List-ID: X-Mailing-List: linux-btrfs@vger.kernel.org compress_file_range can fail to compress either because of resource or alignment constraints or because the data is incompressible. In the latter case the inode is marked so that compression isn't tried again. Currently that check is based on the condition that the pages array has been allocated which is rather cryptic. Use a separate label to clearly distinguish this case. 
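The two failure classes and their jump targets, as a condensed sketch
(not a verbatim excerpt of the function; elided parts are marked, and
only the control flow matters here):

	if (!inode_need_compress(inode, start, end))
		goto cleanup_and_bail_uncompressed; /* compression not tried */
	...
	ret = btrfs_compress_pages(...);
	if (ret)
		goto mark_incompressible; /* tried, data did not compress */
	...
mark_incompressible:
	/* remember the failure so we don't keep trying on this inode */
	if (!btrfs_test_opt(fs_info, FORCE_COMPRESS) && !inode->prop_compress)
		inode->flags |= BTRFS_INODE_NOCOMPRESS;
cleanup_and_bail_uncompressed:
	/* write the range out uncompressed */
	...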
Signed-off-by: Christoph Hellwig --- fs/btrfs/inode.c | 13 +++++-------- 1 file changed, 5 insertions(+), 8 deletions(-) diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c index 560682a5d9d7aa..00aabc088a9deb 100644 --- a/fs/btrfs/inode.c +++ b/fs/btrfs/inode.c @@ -949,7 +949,7 @@ static void compress_file_range(struct btrfs_work *work) mapping, start, pages, &nr_pages, &total_in, &total_compressed); if (ret) - goto cleanup_and_bail_uncompressed; + goto mark_incompressible; /* * Zero the tail end of the last page, as we might be sending it down @@ -1025,7 +1025,7 @@ static void compress_file_range(struct btrfs_work *work) */ total_in = round_up(total_in, fs_info->sectorsize); if (total_compressed + blocksize > total_in) - goto cleanup_and_bail_uncompressed; + goto mark_incompressible; /* * The async work queues will take care of doing actual allocation on @@ -1040,6 +1040,9 @@ static void compress_file_range(struct btrfs_work *work) } return; +mark_incompressible: + if (!btrfs_test_opt(fs_info, FORCE_COMPRESS) && !inode->prop_compress) + inode->flags |= BTRFS_INODE_NOCOMPRESS; cleanup_and_bail_uncompressed: if (pages) { /* @@ -1054,12 +1057,6 @@ static void compress_file_range(struct btrfs_work *work) pages = NULL; total_compressed = 0; nr_pages = 0; - - /* flag the file so we don't compress in the future */ - if (!btrfs_test_opt(fs_info, FORCE_COMPRESS) && - !(inode->prop_compress)) { - inode->flags |= BTRFS_INODE_NOCOMPRESS; - } } /* From patchwork Wed Jun 28 15:31:39 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 13295986 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 93FD9EB64DC for ; Wed, 28 Jun 2023 15:33:38 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232419AbjF1Pdg (ORCPT ); Wed, 28 Jun 2023 11:33:36 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60958 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232421AbjF1PdA (ORCPT ); Wed, 28 Jun 2023 11:33:00 -0400 Received: from bombadil.infradead.org (bombadil.infradead.org [IPv6:2607:7c80:54:3::133]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 69CD330C7; Wed, 28 Jun 2023 08:32:49 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender :Reply-To:Content-Type:Content-ID:Content-Description; bh=+7+4nsdFDbsRT1q2Uw8AAPCu0PUJ7daQq6Y87tECxWI=; b=2T/gp4mwuW67RaeBf5V/vXFlDv Pvb90WeIH3YAGSimQS7fDf17hIp289HXwiI2t8AnAHn3bM0dZfFVlmxzsqd7BJTC4h17mTzIkxnrI eeNsfr++uhQso3qBMDhRU0zl0bpXdGK5lgRvILAvH1EGzR7dbO3PoUi+siGkK4K2CPvV5+HMgl4XX WsBmv396BHuJSuJAyeCOVXOJSY6JJPlBANBrTKje7n0B08N+Rxt3Z+S1B7qO7rdcXEYKNePwvqFmw N4qUIpoasUFAdi4EwL8FEXXEu8rxOc+clH/9roN/caUzFRf8ClAPQxPRqyoCfBs++/HjLrkyryv/J IcNK3z4A==; Received: from 2a02-8389-2341-5b80-39d3-4735-9a3c-88d8.cable.dynamic.v6.surfer.at ([2a02:8389:2341:5b80:39d3:4735:9a3c:88d8] helo=localhost) by bombadil.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux)) id 1qEX9x-00G0Dg-1K; Wed, 28 Jun 2023 15:32:45 +0000 From: Christoph Hellwig To: Chris Mason , Josef Bacik , David Sterba Cc: Matthew Wilcox , linux-btrfs@vger.kernel.org, 
linux-fsdevel@vger.kernel.org Subject: [PATCH 18/23] btrfs: share the code to free the page array in compress_file_range Date: Wed, 28 Jun 2023 17:31:39 +0200 Message-Id: <20230628153144.22834-19-hch@lst.de> In-Reply-To: <20230628153144.22834-1-hch@lst.de> References: <20230628153144.22834-1-hch@lst.de>

compress_file_range has two code blocks to free the page array for the compressed data. Share the code using a goto label. Signed-off-by: Christoph Hellwig --- fs/btrfs/inode.c | 30 +++++++++--------------------- 1 file changed, 9 insertions(+), 21 deletions(-) diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c index 00aabc088a9deb..8f3a72f3f897a1 100644 --- a/fs/btrfs/inode.c +++ b/fs/btrfs/inode.c @@ -1003,12 +1003,7 @@ static void compress_file_range(struct btrfs_work *work) PAGE_UNLOCK | PAGE_START_WRITEBACK | PAGE_END_WRITEBACK); - for (i = 0; i < nr_pages; i++) { - WARN_ON(pages[i]->mapping); - put_page(pages[i]); - } - kfree(pages); - return; + goto free_pages; } } @@ -1044,21 +1039,6 @@ static void compress_file_range(struct btrfs_work *work) if (!btrfs_test_opt(fs_info, FORCE_COMPRESS) && !inode->prop_compress) inode->flags |= BTRFS_INODE_NOCOMPRESS; cleanup_and_bail_uncompressed: - if (pages) { - /* - * the compression code ran but failed to make things smaller, - * free any pages it allocated and our page pointer array - */ - for (i = 0; i < nr_pages; i++) { - WARN_ON(pages[i]->mapping); - put_page(pages[i]); - } - kfree(pages); - pages = NULL; - total_compressed = 0; - nr_pages = 0; - } - /* * No compression, but we still need to write the pages in the file * we've been given so far.
redirty the locked page if it corresponds @@ -1076,6 +1056,14 @@ static void compress_file_range(struct btrfs_work *work) extent_range_redirty_for_io(&inode->vfs_inode, start, end); add_async_extent(async_chunk, start, end - start + 1, 0, NULL, 0, BTRFS_COMPRESS_NONE); +free_pages: + if (pages) { + for (i = 0; i < nr_pages; i++) { + WARN_ON(pages[i]->mapping); + put_page(pages[i]); + } + kfree(pages); + } } static void free_async_extent_pages(struct async_extent *async_extent)

From patchwork Wed Jun 28 15:31:40 2023 X-Patchwork-Id: 13295990 From: Christoph Hellwig To: Chris Mason , Josef Bacik , David Sterba Cc: Matthew Wilcox , linux-btrfs@vger.kernel.org, linux-fsdevel@vger.kernel.org Subject: [PATCH 19/23] btrfs: don't redirty pages in compress_file_range Date: Wed, 28 Jun 2023 17:31:40 +0200 Message-Id: <20230628153144.22834-20-hch@lst.de> In-Reply-To: <20230628153144.22834-1-hch@lst.de> References: <20230628153144.22834-1-hch@lst.de>

compress_file_range needs to clear the dirty bit before handing off work to the compression worker threads to prevent processes coming in through mmap and changing the file contents while the compression is accessing the data (see commit 4adaa611020f ("Btrfs: fix race between mmap writes and compression")).
But when compress_file_range decides not to compress the data, it falls back to submit_uncompressed_range, which uses extent_write_locked_range to write the uncompressed data. extent_write_locked_range currently expects all pages to be marked dirty so that it can clear the dirty bit itself, and thus compress_file_range has to redirty the page range. Redirtying the page range is rather inefficient and also pointless, so instead pass a pages_dirty parameter to extent_write_locked_range and skip the redirty game entirely. Note that compress_file_range was even redirtying the locked_page twice, given that extent_range_redirty_for_io already redirties all pages in the range, which must include locked_page if there is one. Signed-off-by: Christoph Hellwig --- fs/btrfs/extent_io.c | 29 +++++------------------------ fs/btrfs/extent_io.h | 3 +-- fs/btrfs/inode.c | 43 +++++++++---------------------------------- 3 files changed, 15 insertions(+), 60 deletions(-) diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c index 6befffd76e8808..e74153c02d84c7 100644 --- a/fs/btrfs/extent_io.c +++ b/fs/btrfs/extent_io.c @@ -181,22 +181,6 @@ void extent_range_clear_dirty_for_io(struct inode *inode, u64 start, u64 end) } } -void extent_range_redirty_for_io(struct inode *inode, u64 start, u64 end) -{ - struct address_space *mapping = inode->i_mapping; - unsigned long index = start >> PAGE_SHIFT; - unsigned long end_index = end >> PAGE_SHIFT; - struct folio *folio; - - while (index <= end_index) { - folio = filemap_get_folio(mapping, index); - filemap_dirty_folio(mapping, folio); - folio_account_redirty(folio); - index += folio_nr_pages(folio); - folio_put(folio); - } -} - static void process_one_page(struct btrfs_fs_info *fs_info, struct page *page, struct page *locked_page, unsigned long page_ops, u64 start, u64 end) @@ -2150,7 +2134,7 @@ static int extent_write_cache_pages(struct address_space *mapping, * locked. */ void extent_write_locked_range(struct inode *inode, u64 start, u64 end, - struct writeback_control *wbc) + struct writeback_control *wbc, bool pages_dirty) { bool found_error = false; int ret = 0; @@ -2176,14 +2160,11 @@ void extent_write_locked_range(struct inode *inode, u64 start, u64 end, int nr = 0; page = find_get_page(mapping, cur >> PAGE_SHIFT); - /* - * All pages in the range are locked since - * btrfs_run_delalloc_range(), thus there is no way to clear - * the page dirty flag.
- */ ASSERT(PageLocked(page)); - ASSERT(PageDirty(page)); - clear_page_dirty_for_io(page); + if (pages_dirty) { + ASSERT(PageDirty(page)); + clear_page_dirty_for_io(page); + } ret = __extent_writepage_io(BTRFS_I(inode), page, &bio_ctrl, i_size, &nr); diff --git a/fs/btrfs/extent_io.h b/fs/btrfs/extent_io.h index 0312022bbf4b7a..2678906e87c506 100644 --- a/fs/btrfs/extent_io.h +++ b/fs/btrfs/extent_io.h @@ -178,7 +178,7 @@ int try_release_extent_buffer(struct page *page); int btrfs_read_folio(struct file *file, struct folio *folio); void extent_write_locked_range(struct inode *inode, u64 start, u64 end, - struct writeback_control *wbc); + struct writeback_control *wbc, bool pages_dirty); int extent_writepages(struct address_space *mapping, struct writeback_control *wbc); int btree_write_cache_pages(struct address_space *mapping, @@ -265,7 +265,6 @@ void set_extent_buffer_dirty(struct extent_buffer *eb); void set_extent_buffer_uptodate(struct extent_buffer *eb); void clear_extent_buffer_uptodate(struct extent_buffer *eb); void extent_range_clear_dirty_for_io(struct inode *inode, u64 start, u64 end); -void extent_range_redirty_for_io(struct inode *inode, u64 start, u64 end); void extent_clear_unlock_delalloc(struct btrfs_inode *inode, u64 start, u64 end, struct page *locked_page, u32 bits_to_clear, unsigned long page_ops); diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c index 8f3a72f3f897a1..556f63e8496ff8 100644 --- a/fs/btrfs/inode.c +++ b/fs/btrfs/inode.c @@ -848,10 +848,16 @@ static void compress_file_range(struct btrfs_work *work) unsigned int poff; int i; int compress_type = fs_info->compress_type; - int redirty = 0; inode_should_defrag(inode, start, end, end - start + 1, SZ_16K); + /* + * We need to call clear_page_dirty_for_io on each page in the range. + * Otherwise applications with the file mmap'd can wander in and change + * the page contents while we are compressing them. + */ + extent_range_clear_dirty_for_io(&inode->vfs_inode, start, end); + /* * We need to save i_size before now because it could change in between * us evaluating the size and assigning it. This is because we lock and @@ -927,22 +933,6 @@ static void compress_file_range(struct btrfs_work *work) else if (inode->prop_compress) compress_type = inode->prop_compress; - /* - * We need to call clear_page_dirty_for_io on each page in the range. - * Otherwise applications with the file mmap'd can wander in and change - * the page contents while we are compressing them. - * - * If the compression fails for any reason, we set the pages dirty again - * later on. - * - * Note that the remaining part is redirtied, the start pointer has - * moved, the end is the original one. - */ - if (!redirty) { - extent_range_clear_dirty_for_io(&inode->vfs_inode, start, end); - redirty = 1; - } - /* Compression level is applied here and only here */ ret = btrfs_compress_pages(compress_type | (fs_info->compress_level << 4), @@ -1039,21 +1029,6 @@ static void compress_file_range(struct btrfs_work *work) if (!btrfs_test_opt(fs_info, FORCE_COMPRESS) && !inode->prop_compress) inode->flags |= BTRFS_INODE_NOCOMPRESS; cleanup_and_bail_uncompressed: - /* - * No compression, but we still need to write the pages in the file - * we've been given so far. redirty the locked page if it corresponds - * to our extent and set things up for the async work queue to run - * cow_file_range to do the normal delalloc dance. 
- */ - if (async_chunk->locked_page && - (page_offset(async_chunk->locked_page) >= start && - page_offset(async_chunk->locked_page)) <= end) { - __set_page_dirty_nobuffers(async_chunk->locked_page); - /* unlocked later on in the async handlers */ - } - - if (redirty) - extent_range_redirty_for_io(&inode->vfs_inode, start, end); add_async_extent(async_chunk, start, end - start + 1, 0, NULL, 0, BTRFS_COMPRESS_NONE); free_pages: @@ -1130,7 +1105,7 @@ static void submit_uncompressed_range(struct btrfs_inode *inode, /* All pages will be unlocked, including @locked_page */ wbc_attach_fdatawrite_inode(&wbc, &inode->vfs_inode); - extent_write_locked_range(&inode->vfs_inode, start, end, &wbc); + extent_write_locked_range(&inode->vfs_inode, start, end, &wbc, false); wbc_detach_inode(&wbc); } @@ -1755,7 +1730,7 @@ static noinline int run_delalloc_zoned(struct btrfs_inode *inode, } locked_page_done = true; extent_write_locked_range(&inode->vfs_inode, start, done_offset, - wbc); + wbc, true); start = done_offset + 1; }

From patchwork Wed Jun 28 15:31:41 2023 X-Patchwork-Id: 13295987 From: Christoph Hellwig To: Chris Mason , Josef Bacik , David Sterba Cc: Matthew Wilcox , linux-btrfs@vger.kernel.org, linux-fsdevel@vger.kernel.org Subject: [PATCH 20/23] btrfs: refactor the zoned device handling in cow_file_range Date: Wed, 28 Jun 2023 17:31:41 +0200 Message-Id: <20230628153144.22834-21-hch@lst.de> In-Reply-To: <20230628153144.22834-1-hch@lst.de> References: <20230628153144.22834-1-hch@lst.de>
Handling of the done_offset argument to cow_file_range is a bit confusing, as it is not updated at all when the function succeeds, and the -EAGAIN status is used both for the case where we need to wait for a zone finish and the one where the allocation was partially successful. Change the calling convention so that done_offset is always updated, and 0 is returned if some allocation was successful (partial allocation can still only happen for zoned devices), and waiting for a zone finish is done internally in cow_file_range instead of in the caller. Also write a big fat comment explaining the logic. Signed-off-by: Christoph Hellwig --- fs/btrfs/inode.c | 58 ++++++++++++++++++++++---------------------- 1 file changed, 31 insertions(+), 27 deletions(-) diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c index 556f63e8496ff8..2a4b62398ee7a3 100644 --- a/fs/btrfs/inode.c +++ b/fs/btrfs/inode.c @@ -1364,7 +1364,8 @@ static noinline int cow_file_range(struct btrfs_inode *inode, * compressed extent. */ unlock_page(locked_page); - return 1; + ret = 1; + goto done; } else if (ret < 0) { goto out_unlock; } @@ -1395,6 +1396,31 @@ static noinline int cow_file_range(struct btrfs_inode *inode, ret = btrfs_reserve_extent(root, cur_alloc_size, cur_alloc_size, min_alloc_size, 0, alloc_hint, &ins, 1, 1); + if (ret == -EAGAIN) { + /* + * btrfs_reserve_extent only returns -EAGAIN for zoned + * file systems, which is an indication that there are + * no active zones to allocate from at the moment. + * + * If this is the first loop iteration, wait for at + * least one zone to finish before retrying the + * allocation. Otherwise ask the caller to write out + * the already allocated blocks before coming back to + * us, or return -ENOSPC if it can't handle retries. + */ + ASSERT(btrfs_is_zoned(fs_info)); + if (start == orig_start) { + wait_on_bit_io(&inode->root->fs_info->flags, + BTRFS_FS_NEED_ZONE_FINISH, + TASK_UNINTERRUPTIBLE); + continue; + } + if (done_offset) { + *done_offset = start - 1; + return 0; + } + ret = -ENOSPC; + } if (ret < 0) goto out_unlock; cur_alloc_size = ins.offset; @@ -1478,6 +1504,9 @@ static noinline int cow_file_range(struct btrfs_inode *inode, if (ret) goto out_unlock; } +done: + if (done_offset) + *done_offset = end; return ret; out_drop_extent_cache: @@ -1486,21 +1515,6 @@ static noinline int cow_file_range(struct btrfs_inode *inode, btrfs_dec_block_group_reservations(fs_info, ins.objectid); btrfs_free_reserved_extent(fs_info, ins.objectid, ins.offset, 1); out_unlock: - /* - * If done_offset is non-NULL and ret == -EAGAIN, we expect the - * caller to write out the successfully allocated region and retry. - */ - if (done_offset && ret == -EAGAIN) { - if (orig_start < start) - *done_offset = start - 1; - else - *done_offset = start; - return ret; - } else if (ret == -EAGAIN) { - /* Convert to -ENOSPC since the caller cannot retry.
*/ - ret = -ENOSPC; - } - /* * Now, we have three regions to clean up: * @@ -1711,19 +1725,9 @@ static noinline int run_delalloc_zoned(struct btrfs_inode *inode, while (start <= end) { ret = cow_file_range(inode, locked_page, start, end, &done_offset, CFR_KEEP_LOCKED); - if (ret && ret != -EAGAIN) + if (ret) return ret; - if (ret == 0) - done_offset = end; - - if (done_offset == start) { - wait_on_bit_io(&inode->root->fs_info->flags, - BTRFS_FS_NEED_ZONE_FINISH, - TASK_UNINTERRUPTIBLE); - continue; - } - if (!locked_page_done) { __set_page_dirty_nobuffers(locked_page); account_page_redirty(locked_page);

From patchwork Wed Jun 28 15:31:42 2023 X-Patchwork-Id: 13295991 From: Christoph Hellwig To: Chris Mason , Josef Bacik , David Sterba Cc: Matthew Wilcox , linux-btrfs@vger.kernel.org, linux-fsdevel@vger.kernel.org Subject: [PATCH 21/23] btrfs: don't redirty locked_page in run_delalloc_zoned Date: Wed, 28 Jun 2023 17:31:42 +0200 Message-Id: <20230628153144.22834-22-hch@lst.de> In-Reply-To: <20230628153144.22834-1-hch@lst.de> References: <20230628153144.22834-1-hch@lst.de>

extent_write_locked_range currently expects that either all or no pages are dirty when it is called. But run_delalloc_zoned is called directly in the writepages path and has the dirty bit cleared only for locked_page, the one page that extent_write_cache_pages currently operates on.
It currently works around this by redirtying locked_page, but that is a bit inefficient and cumbersome. Pass a locked_page argument to extent_write_locked_range so that clearing the dirty bit can be skipped on just that page. Signed-off-by: Christoph Hellwig --- fs/btrfs/extent_io.c | 7 ++++--- fs/btrfs/extent_io.h | 5 +++-- fs/btrfs/inode.c | 13 ++++--------- 3 files changed, 11 insertions(+), 14 deletions(-) diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c index e74153c02d84c7..efcac0b56b8252 100644 --- a/fs/btrfs/extent_io.c +++ b/fs/btrfs/extent_io.c @@ -2133,8 +2133,9 @@ static int extent_write_cache_pages(struct address_space *mapping, * already been ran (aka, ordered extent inserted) and all pages are still * locked. */ -void extent_write_locked_range(struct inode *inode, u64 start, u64 end, - struct writeback_control *wbc, bool pages_dirty) +void extent_write_locked_range(struct inode *inode, struct page *locked_page, + u64 start, u64 end, struct writeback_control *wbc, + bool pages_dirty) { bool found_error = false; int ret = 0; @@ -2161,7 +2162,7 @@ void extent_write_locked_range(struct inode *inode, u64 start, u64 end, page = find_get_page(mapping, cur >> PAGE_SHIFT); ASSERT(PageLocked(page)); - if (pages_dirty) { + if (pages_dirty && page != locked_page) { ASSERT(PageDirty(page)); clear_page_dirty_for_io(page); } diff --git a/fs/btrfs/extent_io.h b/fs/btrfs/extent_io.h index 2678906e87c506..c01f9c5ddc13c0 100644 --- a/fs/btrfs/extent_io.h +++ b/fs/btrfs/extent_io.h @@ -177,8 +177,9 @@ int try_release_extent_mapping(struct page *page, gfp_t mask); int try_release_extent_buffer(struct page *page); int btrfs_read_folio(struct file *file, struct folio *folio); -void extent_write_locked_range(struct inode *inode, u64 start, u64 end, - struct writeback_control *wbc, bool pages_dirty); +void extent_write_locked_range(struct inode *inode, struct page *locked_page, + u64 start, u64 end, struct writeback_control *wbc, + bool pages_dirty); int extent_writepages(struct address_space *mapping, struct writeback_control *wbc); int btree_write_cache_pages(struct address_space *mapping, diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c index 2a4b62398ee7a3..ae5166d33253a5 100644 --- a/fs/btrfs/inode.c +++ b/fs/btrfs/inode.c @@ -1105,7 +1105,8 @@ static void submit_uncompressed_range(struct btrfs_inode *inode, /* All pages will be unlocked, including @locked_page */ wbc_attach_fdatawrite_inode(&wbc, &inode->vfs_inode); - extent_write_locked_range(&inode->vfs_inode, start, end, &wbc, false); + extent_write_locked_range(&inode->vfs_inode, NULL, start, end, &wbc, + false); wbc_detach_inode(&wbc); } @@ -1720,7 +1721,6 @@ static noinline int run_delalloc_zoned(struct btrfs_inode *inode, { u64 done_offset = end; int ret; - bool locked_page_done = false; while (start <= end) { ret = cow_file_range(inode, locked_page, start, end, @@ -1728,13 +1728,8 @@ static noinline int run_delalloc_zoned(struct btrfs_inode *inode, if (ret) return ret; - if (!locked_page_done) { - __set_page_dirty_nobuffers(locked_page); - account_page_redirty(locked_page); - } - locked_page_done = true; - extent_write_locked_range(&inode->vfs_inode, start, done_offset, - wbc, true); + extent_write_locked_range(&inode->vfs_inode, locked_page, start, + done_offset, wbc, true); start = done_offset + 1; }

From patchwork Wed Jun 28 15:31:43 2023 X-Patchwork-Id: 13295989
From: Christoph Hellwig To: Chris Mason , Josef Bacik , David Sterba Cc: Matthew Wilcox , linux-btrfs@vger.kernel.org, linux-fsdevel@vger.kernel.org Subject: [PATCH 22/23] btrfs: fix zoned handling in submit_uncompressed_range Date: Wed, 28 Jun 2023 17:31:43 +0200 Message-Id: <20230628153144.22834-23-hch@lst.de> In-Reply-To: <20230628153144.22834-1-hch@lst.de> References: <20230628153144.22834-1-hch@lst.de>

For zoned file systems we need to use run_delalloc_zoned to submit writeback, as we need to write out partial allocations when running into zone active limits. submit_uncompressed_range currently always calls cow_file_range to allocate blocks and thus misses the active zone limits handling. Fix this by passing the pages_dirty argument to run_delalloc_zoned and always using it from submit_uncompressed_range, as it does the right thing for zoned and non-zoned file systems. To account for the fact that run_delalloc_zoned is now also used for non-zoned file systems, rename it to run_delalloc_cow and add a comment describing it.
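The shape of the resulting helper is easy to model in isolation. In this standalone sketch, alloc_range and writeback_range are invented stand-ins for cow_file_range and extent_write_locked_range; the point is the loop structure run_delalloc_cow shares between the zoned case (partial allocations) and the plain non-zoned case (one full allocation per iteration):

    #include <stdint.h>
    #include <stdio.h>

    /* Pretend allocator that may only satisfy part of the request, as a
     * zoned allocation hitting the active zone limit would. */
    static int alloc_range(uint64_t start, uint64_t end, uint64_t *done)
    {
        const uint64_t max_chunk = 64 * 1024;

        *done = (end - start + 1 > max_chunk) ? start + max_chunk - 1 : end;
        return 0;
    }

    static void writeback_range(uint64_t start, uint64_t end)
    {
        printf("writeback [%llu, %llu]\n",
               (unsigned long long)start, (unsigned long long)end);
    }

    /* Allocate, write back what was allocated, advance, repeat. */
    static int run_delalloc_cow_sketch(uint64_t start, uint64_t end)
    {
        uint64_t done;
        int ret;

        while (start <= end) {
            ret = alloc_range(start, end, &done);
            if (ret)
                return ret;
            writeback_range(start, done);
            start = done + 1;   /* partial progress is fine */
        }
        return 0;
    }

    int main(void)
    {
        return run_delalloc_cow_sketch(0, 256 * 1024 - 1);
    }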
Fixes: 42c011000963 ("btrfs: zoned: introduce dedicated data write path for zoned filesystems") Signed-off-by: Christoph Hellwig --- fs/btrfs/inode.c | 52 +++++++++++++++++++----------------------------- 1 file changed, 20 insertions(+), 32 deletions(-) diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c index ae5166d33253a5..2079bf48629b59 100644 --- a/fs/btrfs/inode.c +++ b/fs/btrfs/inode.c @@ -125,12 +125,10 @@ static struct kmem_cache *btrfs_inode_cachep; static int btrfs_setsize(struct inode *inode, struct iattr *attr); static int btrfs_truncate(struct btrfs_inode *inode, bool skip_writeback); -#define CFR_KEEP_LOCKED (1 << 0) -#define CFR_NOINLINE (1 << 1) -static noinline int cow_file_range(struct btrfs_inode *inode, - struct page *locked_page, - u64 start, u64 end, u64 *done_offset, - u32 flags); +static noinline int run_delalloc_cow(struct btrfs_inode *inode, + struct page *locked_page, u64 start, + u64 end, struct writeback_control *wbc, + bool pages_dirty); static struct extent_map *create_io_em(struct btrfs_inode *inode, u64 start, u64 len, u64 orig_start, u64 block_start, u64 block_len, u64 orig_block_len, @@ -1071,19 +1069,9 @@ static void submit_uncompressed_range(struct btrfs_inode *inode, .no_cgroup_owner = 1, }; - /* - * Call cow_file_range() to run the delalloc range directly, since we - * won't go to NOCOW or async path again. - * - * Also we call cow_file_range() with @unlock_page == 0, so that we - * can directly submit them without interruption. - */ - ret = cow_file_range(inode, locked_page, start, end, NULL, - CFR_KEEP_LOCKED); - /* Inline extent inserted, page gets unlocked and everything is done */ - if (ret == 1) - return; - + wbc_attach_fdatawrite_inode(&wbc, &inode->vfs_inode); + ret = run_delalloc_cow(inode, locked_page, start, end, &wbc, false); + wbc_detach_inode(&wbc); if (ret < 0) { btrfs_cleanup_ordered_extents(inode, locked_page, start, end - start + 1); if (locked_page) { @@ -1100,14 +1088,7 @@ static void submit_uncompressed_range(struct btrfs_inode *inode, mapping_set_error(locked_page->mapping, ret); unlock_page(locked_page); } - return; } - - /* All pages will be unlocked, including @locked_page */ - wbc_attach_fdatawrite_inode(&wbc, &inode->vfs_inode); - extent_write_locked_range(&inode->vfs_inode, NULL, start, end, &wbc, - false); - wbc_detach_inode(&wbc); } static void submit_one_async_extent(struct async_chunk *async_chunk, @@ -1290,6 +1271,8 @@ static u64 get_extent_allocation_hint(struct btrfs_inode *inode, u64 start, * btrfs_cleanup_ordered_extents(). See btrfs_run_delalloc_range() for * example. */ +#define CFR_KEEP_LOCKED (1 << 0) +#define CFR_NOINLINE (1 << 1) static noinline int cow_file_range(struct btrfs_inode *inode, struct page *locked_page, u64 start, u64 end, u64 *done_offset, u32 flags) @@ -1715,9 +1698,14 @@ static bool run_delalloc_compressed(struct btrfs_inode *inode, return true; } -static noinline int run_delalloc_zoned(struct btrfs_inode *inode, - struct page *locked_page, u64 start, - u64 end, struct writeback_control *wbc) +/* + * Run the delalloc range from start to end, and write back any dirty pages + * covered by the range. 
+ */ +static noinline int run_delalloc_cow(struct btrfs_inode *inode, + struct page *locked_page, u64 start, + u64 end, struct writeback_control *wbc, + bool pages_dirty) { u64 done_offset = end; int ret; @@ -1727,9 +1715,8 @@ static noinline int run_delalloc_zoned(struct btrfs_inode *inode, &done_offset, CFR_KEEP_LOCKED); if (ret) return ret; - extent_write_locked_range(&inode->vfs_inode, locked_page, start, - done_offset, wbc, true); + done_offset, wbc, pages_dirty); start = done_offset + 1; } @@ -2295,7 +2282,8 @@ int btrfs_run_delalloc_range(struct btrfs_inode *inode, struct page *locked_page return 1; if (zoned) - ret = run_delalloc_zoned(inode, locked_page, start, end, wbc); + ret = run_delalloc_cow(inode, locked_page, start, end, wbc, + true); else ret = cow_file_range(inode, locked_page, start, end, NULL, 0);

From patchwork Wed Jun 28 15:31:44 2023 X-Patchwork-Id: 13295988 From: Christoph Hellwig To: Chris Mason , Josef Bacik , David Sterba Cc: Matthew Wilcox , linux-btrfs@vger.kernel.org, linux-fsdevel@vger.kernel.org Subject: [PATCH 23/23] mm: remove folio_account_redirty Date: Wed, 28 Jun 2023 17:31:44 +0200 Message-Id: <20230628153144.22834-24-hch@lst.de> In-Reply-To: <20230628153144.22834-1-hch@lst.de> References: <20230628153144.22834-1-hch@lst.de>

Fold folio_account_redirty into folio_redirty_for_writepage now that all other users are gone; the only exception was the account_page_redirty wrapper, which was itself unused.
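Why the de-accounting has to survive the fold can be shown with two plain counters standing in for the kernel's dirtied and written statistics (a toy model, not the real NR_DIRTIED/NR_WRITTEN bookkeeping): without the decrement, a page that writeback declines once would be counted as dirtied twice but written once, and the dirty position control would slowly drift.

    #include <stdio.h>

    static long nr_dirtied; /* toy stand-in for NR_DIRTIED */
    static long nr_written; /* toy stand-in for NR_WRITTEN */

    static void dirty_page(void)  { nr_dirtied++; }
    static void write_page(void)  { nr_written++; }

    /* Decline to write a dirty page: dirty it again, undo the double count. */
    static void redirty_page(void)
    {
        dirty_page();
        nr_dirtied--;   /* the de-accounting the fold preserves */
    }

    int main(void)
    {
        dirty_page();   /* a user write dirties the page      */
        redirty_page(); /* writeback visits once but skips it */
        write_page();   /* a later pass writes it out         */
        printf("dirtied=%ld written=%ld\n", nr_dirtied, nr_written);
        return 0;       /* prints dirtied=1 written=1         */
    }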
Signed-off-by: Christoph Hellwig Reviewed-by: Matthew Wilcox (Oracle) --- include/linux/writeback.h | 5 ---- mm/page-writeback.c | 49 +++++++++++---------------------------- 2 files changed, 14 insertions(+), 40 deletions(-) diff --git a/include/linux/writeback.h b/include/linux/writeback.h index fba937999fbfd3..083387c00f0c8b 100644 --- a/include/linux/writeback.h +++ b/include/linux/writeback.h @@ -375,11 +375,6 @@ void tag_pages_for_writeback(struct address_space *mapping, pgoff_t start, pgoff_t end); bool filemap_dirty_folio(struct address_space *mapping, struct folio *folio); -void folio_account_redirty(struct folio *folio); -static inline void account_page_redirty(struct page *page) -{ - folio_account_redirty(page_folio(page)); -} bool folio_redirty_for_writepage(struct writeback_control *, struct folio *); bool redirty_page_for_writepage(struct writeback_control *, struct page *); diff --git a/mm/page-writeback.c b/mm/page-writeback.c index db794399900734..56074637ef4fe0 100644 --- a/mm/page-writeback.c +++ b/mm/page-writeback.c @@ -1193,7 +1193,7 @@ static void wb_update_write_bandwidth(struct bdi_writeback *wb, * write_bandwidth = --------------------------------------------------- * period * - * @written may have decreased due to folio_account_redirty(). + * @written may have decreased due to folio_redirty_for_writepage(). * Avoid underflowing @bw calculation. */ bw = written - min(written, wb->written_stamp); @@ -2709,37 +2709,6 @@ bool filemap_dirty_folio(struct address_space *mapping, struct folio *folio) } EXPORT_SYMBOL(filemap_dirty_folio); -/** - * folio_account_redirty - Manually account for redirtying a page. - * @folio: The folio which is being redirtied. - * - * Most filesystems should call folio_redirty_for_writepage() instead - * of this fuction. If your filesystem is doing writeback outside the - * context of a writeback_control(), it can call this when redirtying - * a folio, to de-account the dirty counters (NR_DIRTIED, WB_DIRTIED, - * tsk->nr_dirtied), so that they match the written counters (NR_WRITTEN, - * WB_WRITTEN) in long term. The mismatches will lead to systematic errors - * in balanced_dirty_ratelimit and the dirty pages position control. - */ -void folio_account_redirty(struct folio *folio) -{ - struct address_space *mapping = folio->mapping; - - if (mapping && mapping_can_writeback(mapping)) { - struct inode *inode = mapping->host; - struct bdi_writeback *wb; - struct wb_lock_cookie cookie = {}; - long nr = folio_nr_pages(folio); - - wb = unlocked_inode_to_wb_begin(inode, &cookie); - current->nr_dirtied -= nr; - node_stat_mod_folio(folio, NR_DIRTIED, -nr); - wb_stat_mod(wb, WB_DIRTIED, -nr); - unlocked_inode_to_wb_end(inode, &cookie); - } -} -EXPORT_SYMBOL(folio_account_redirty); - /** * folio_redirty_for_writepage - Decline to write a dirty folio. * @wbc: The writeback control. 
@@ -2755,13 +2724,23 @@ EXPORT_SYMBOL(folio_account_redirty); bool folio_redirty_for_writepage(struct writeback_control *wbc, struct folio *folio) { - bool ret; + struct address_space *mapping = folio->mapping; long nr = folio_nr_pages(folio); + bool ret; wbc->pages_skipped += nr; - ret = filemap_dirty_folio(folio->mapping, folio); - folio_account_redirty(folio); + ret = filemap_dirty_folio(mapping, folio); + if (mapping && mapping_can_writeback(mapping)) { + struct inode *inode = mapping->host; + struct bdi_writeback *wb; + struct wb_lock_cookie cookie = {}; + wb = unlocked_inode_to_wb_begin(inode, &cookie); + current->nr_dirtied -= nr; + node_stat_mod_folio(folio, NR_DIRTIED, -nr); + wb_stat_mod(wb, WB_DIRTIED, -nr); + unlocked_inode_to_wb_end(inode, &cookie); + } return ret; } EXPORT_SYMBOL(folio_redirty_for_writepage);
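Put together, the post-series folio_redirty_for_writepage boils down to three steps, modeled here in a standalone userspace sketch (toy types, no locking, and none of the wb_stat or unlocked_inode_to_wb plumbing from the real function):

    #include <stdbool.h>
    #include <stdio.h>

    struct folio { bool dirty; long nr_pages; };
    struct writeback_control { long pages_skipped; };

    static long nr_dirtied; /* models current->nr_dirtied and friends */

    static bool toy_filemap_dirty_folio(struct folio *folio)
    {
        if (folio->dirty)
            return false;
        folio->dirty = true;
        nr_dirtied += folio->nr_pages;
        return true;
    }

    static bool toy_redirty_for_writepage(struct writeback_control *wbc,
                                          struct folio *folio)
    {
        bool ret;

        wbc->pages_skipped += folio->nr_pages;  /* 1: record the skip */
        ret = toy_filemap_dirty_folio(folio);   /* 2: dirty the folio */
        nr_dirtied -= folio->nr_pages;          /* 3: de-account      */
        return ret;
    }

    int main(void)
    {
        struct writeback_control wbc = { .pages_skipped = 0 };
        struct folio folio = { .dirty = false, .nr_pages = 1 };

        toy_redirty_for_writepage(&wbc, &folio);
        printf("skipped=%ld dirtied=%ld dirty=%d\n",
               wbc.pages_skipped, nr_dirtied, folio.dirty);
        return 0;
    }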