From patchwork Mon Jul 24 04:18:30 2023
From: Naohiro Aota
To: linux-btrfs@vger.kernel.org
Cc: Naohiro Aota
Subject: [PATCH 1/8] btrfs: zoned: introduce block_group context for submit_eb_page()
Date: Mon, 24 Jul 2023 13:18:30 +0900

For metadata write out on the zoned mode, we call
btrfs_check_meta_write_pointer() to check if an extent buffer to be written
is aligned to the write pointer.

We currently look up the block group containing the extent buffer for every
extent buffer, which takes unnecessary effort as the extent buffers being
written are mostly contiguous.

Introduce "bg_context" to cache the block group we are currently working on.

Signed-off-by: Naohiro Aota
---
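[Editor's note, not part of the patch: a stand-alone sketch of the caching
pattern the patch applies, using made-up sketch_* types in place of the
btrfs structures. Reference counting (btrfs_get/put_block_group) is left
out; in the kernel the cached pointer holds a reference that
btree_write_cache_pages() drops at the end.]

#include <stddef.h>

struct sketch_block_group {
	unsigned long long start;
	unsigned long long length;
};

/* Stand-in for btrfs_lookup_block_group(); a real lookup walks an rbtree. */
static struct sketch_block_group *sketch_lookup(unsigned long long logical)
{
	static struct sketch_block_group bg = { .start = 0, .length = 1ULL << 30 };

	return (logical < bg.start + bg.length) ? &bg : NULL;
}

/*
 * Reuse the cached block group while consecutive extent buffers still fall
 * inside its [start, start + length) range; fall back to a fresh lookup
 * otherwise.
 */
static struct sketch_block_group *sketch_get_bg(struct sketch_block_group **cached,
						unsigned long long eb_start)
{
	struct sketch_block_group *bg = *cached;

	if (bg && (bg->start > eb_start || bg->start + bg->length <= eb_start))
		bg = NULL;			/* eb lies outside the cached group */

	if (!bg)
		bg = sketch_lookup(eb_start);	/* slow path */

	*cached = bg;
	return bg;
}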
 fs/btrfs/extent_io.c | 21 +++++++++++----------
 fs/btrfs/zoned.c     | 32 +++++++++++++++++++-------------
 2 files changed, 30 insertions(+), 23 deletions(-)

diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 91282aefcb77..c7a88d2b5555 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -1855,10 +1855,10 @@ static int submit_eb_subpage(struct page *page, struct writeback_control *wbc)
  * Return <0 for fatal error.
  */
 static int submit_eb_page(struct page *page, struct writeback_control *wbc,
-			  struct extent_buffer **eb_context)
+			  struct extent_buffer **eb_context,
+			  struct btrfs_block_group **bg_context)
 {
 	struct address_space *mapping = page->mapping;
-	struct btrfs_block_group *cache = NULL;
 	struct extent_buffer *eb;
 	int ret;
 
@@ -1894,7 +1894,7 @@ static int submit_eb_page(struct page *page, struct writeback_control *wbc,
 	if (!ret)
 		return 0;
 
-	if (!btrfs_check_meta_write_pointer(eb->fs_info, eb, &cache)) {
+	if (!btrfs_check_meta_write_pointer(eb->fs_info, eb, bg_context)) {
 		/*
 		 * If for_sync, this hole will be filled with
 		 * trasnsaction commit.
@@ -1910,18 +1910,15 @@ static int submit_eb_page(struct page *page, struct writeback_control *wbc,
 	*eb_context = eb;
 
 	if (!lock_extent_buffer_for_io(eb, wbc)) {
-		btrfs_revert_meta_write_pointer(cache, eb);
-		if (cache)
-			btrfs_put_block_group(cache);
+		btrfs_revert_meta_write_pointer(*bg_context, eb);
 		free_extent_buffer(eb);
 		return 0;
 	}
-	if (cache) {
+	if (*bg_context) {
 		/*
 		 * Implies write in zoned mode. Mark the last eb in a block group.
 		 */
-		btrfs_schedule_zone_finish_bg(cache, eb);
-		btrfs_put_block_group(cache);
+		btrfs_schedule_zone_finish_bg(*bg_context, eb);
 	}
 	write_one_eb(eb, wbc);
 	free_extent_buffer(eb);
@@ -1932,6 +1929,7 @@ int btree_write_cache_pages(struct address_space *mapping,
 			    struct writeback_control *wbc)
 {
 	struct extent_buffer *eb_context = NULL;
+	struct btrfs_block_group *bg_context = NULL;
 	struct btrfs_fs_info *fs_info = BTRFS_I(mapping->host)->root->fs_info;
 	int ret = 0;
 	int done = 0;
@@ -1973,7 +1971,7 @@ int btree_write_cache_pages(struct address_space *mapping,
 		for (i = 0; i < nr_folios; i++) {
 			struct folio *folio = fbatch.folios[i];
 
-			ret = submit_eb_page(&folio->page, wbc, &eb_context);
+			ret = submit_eb_page(&folio->page, wbc, &eb_context, &bg_context);
 			if (ret == 0)
 				continue;
 			if (ret < 0) {
@@ -2034,6 +2032,9 @@ int btree_write_cache_pages(struct address_space *mapping,
 		ret = 0;
 	if (!ret && BTRFS_FS_ERROR(fs_info))
 		ret = -EROFS;
+
+	if (bg_context)
+		btrfs_put_block_group(bg_context);
 	btrfs_zoned_meta_io_unlock(fs_info);
 	return ret;
 }
diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
index 5e4285ae112c..58bd2de4026d 100644
--- a/fs/btrfs/zoned.c
+++ b/fs/btrfs/zoned.c
@@ -1751,27 +1751,33 @@ bool btrfs_check_meta_write_pointer(struct btrfs_fs_info *fs_info,
 				    struct extent_buffer *eb,
 				    struct btrfs_block_group **cache_ret)
 {
-	struct btrfs_block_group *cache;
-	bool ret = true;
+	struct btrfs_block_group *cache = NULL;
 
 	if (!btrfs_is_zoned(fs_info))
 		return true;
 
-	cache = btrfs_lookup_block_group(fs_info, eb->start);
-	if (!cache)
-		return true;
+	if (*cache_ret) {
+		cache = *cache_ret;
+		if (cache->start > eb->start ||
+		    cache->start + cache->length <= eb->start) {
+			btrfs_put_block_group(cache);
+			cache = NULL;
+			*cache_ret = NULL;
+		}
+	}
 
-	if (cache->meta_write_pointer != eb->start) {
-		btrfs_put_block_group(cache);
-		cache = NULL;
-		ret = false;
-	} else {
-		cache->meta_write_pointer = eb->start + eb->len;
+	if (!cache) {
+		cache = btrfs_lookup_block_group(fs_info, eb->start);
+		if (!cache)
+			return true;
+		*cache_ret = cache;
 	}
 
-	*cache_ret = cache;
+	if (cache->meta_write_pointer != eb->start)
+		return false;
+	cache->meta_write_pointer = eb->start + eb->len;
 
-	return ret;
+	return true;
 }
 
 void btrfs_revert_meta_write_pointer(struct btrfs_block_group *cache,

From patchwork Mon Jul 24 04:18:31 2023
From: Naohiro Aota
To: linux-btrfs@vger.kernel.org
Cc: Naohiro Aota
Subject: [PATCH 2/8] btrfs: zoned: defer advancing meta_write_pointer
Date: Mon, 24 Jul 2023 13:18:31 +0900

We currently advance the meta_write_pointer in
btrfs_check_meta_write_pointer(). That makes it necessary to revert it when
locking the extent buffer fails. Instead, we can advance it just before
submitting the buffer.

This is also necessary for a following commit, which needs to release the
zoned_meta_io_lock to allow IOs to come in and wait for them to fill the
currently active block group. If we advanced the meta_write_pointer before
locking the extent buffer, the following extent buffer could pass the
meta_write_pointer check, resulting in an unaligned write failure.

Advancing the pointer is still thread-safe because the extent buffer is
locked.

Signed-off-by: Naohiro Aota
---
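[Editor's note, not part of the patch: a minimal, stand-alone sketch of the
reordering, with made-up sketch_* types and a trivial trylock in place of
the btrfs API.]

#include <stdbool.h>

struct sketch_eb { unsigned long long start, len; bool locked; };
struct sketch_bg { unsigned long long write_pointer; };

static bool sketch_trylock(struct sketch_eb *eb)
{
	if (eb->locked)
		return false;
	eb->locked = true;
	return true;
}

/*
 * Old order: advance the pointer during the check, then revert it when the
 * lock fails.  New order (below): check only, and advance the pointer once
 * the buffer is locked, so a failed lock leaves nothing to undo.
 */
static int sketch_submit(struct sketch_bg *bg, struct sketch_eb *eb)
{
	if (bg->write_pointer != eb->start)
		return -1;			/* not aligned with the zone write pointer */
	if (!sketch_trylock(eb))
		return 0;			/* pointer untouched: no revert needed */
	bg->write_pointer += eb->len;		/* safe: the eb is locked */
	/* ... submit the buffer here ... */
	return 1;
}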
 fs/btrfs/extent_io.c | 11 ++++++-----
 fs/btrfs/zoned.c     | 14 +++-----------
 fs/btrfs/zoned.h     |  8 --------
 3 files changed, 9 insertions(+), 24 deletions(-)

diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index c7a88d2b5555..46a0b5357009 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -1910,15 +1910,16 @@ static int submit_eb_page(struct page *page, struct writeback_control *wbc,
 	*eb_context = eb;
 
 	if (!lock_extent_buffer_for_io(eb, wbc)) {
-		btrfs_revert_meta_write_pointer(*bg_context, eb);
 		free_extent_buffer(eb);
 		return 0;
 	}
 	if (*bg_context) {
-		/*
-		 * Implies write in zoned mode. Mark the last eb in a block group.
-		 */
-		btrfs_schedule_zone_finish_bg(*bg_context, eb);
+		/* Implies write in zoned mode. */
+		struct btrfs_block_group *bg = *bg_context;
+
+		/* Mark the last eb in the block group. */
+		btrfs_schedule_zone_finish_bg(bg, eb);
+		bg->meta_write_pointer += eb->len;
 	}
 	write_one_eb(eb, wbc);
 	free_extent_buffer(eb);
diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
index 58bd2de4026d..3f8f5e8c28a9 100644
--- a/fs/btrfs/zoned.c
+++ b/fs/btrfs/zoned.c
@@ -1773,23 +1773,15 @@ bool btrfs_check_meta_write_pointer(struct btrfs_fs_info *fs_info,
 		*cache_ret = cache;
 	}
 
+	/* Someone already start writing this eb. */
+	if (cache->meta_write_pointer > eb->start)
+		return true;
 	if (cache->meta_write_pointer != eb->start)
 		return false;
-	cache->meta_write_pointer = eb->start + eb->len;
 
 	return true;
 }
 
-void btrfs_revert_meta_write_pointer(struct btrfs_block_group *cache,
-				     struct extent_buffer *eb)
-{
-	if (!btrfs_is_zoned(eb->fs_info) || !cache)
-		return;
-
-	ASSERT(cache->meta_write_pointer == eb->start + eb->len);
-	cache->meta_write_pointer = eb->start;
-}
-
 int btrfs_zoned_issue_zeroout(struct btrfs_device *device, u64 physical, u64 length)
 {
 	if (!btrfs_dev_is_sequential(device, physical))
diff --git a/fs/btrfs/zoned.h b/fs/btrfs/zoned.h
index 27322b926038..42a4df94dc29 100644
--- a/fs/btrfs/zoned.h
+++ b/fs/btrfs/zoned.h
@@ -61,8 +61,6 @@ void btrfs_record_physical_zoned(struct btrfs_bio *bbio);
 bool btrfs_check_meta_write_pointer(struct btrfs_fs_info *fs_info,
 				    struct extent_buffer *eb,
 				    struct btrfs_block_group **cache_ret);
-void btrfs_revert_meta_write_pointer(struct btrfs_block_group *cache,
-				     struct extent_buffer *eb);
 int btrfs_zoned_issue_zeroout(struct btrfs_device *device, u64 physical, u64 length);
 int btrfs_sync_zone_write_pointer(struct btrfs_device *tgt_dev, u64 logical,
 				  u64 physical_start, u64 physical_pos);
@@ -196,12 +194,6 @@ static inline bool btrfs_check_meta_write_pointer(struct btrfs_fs_info *fs_info,
 	return true;
 }
 
-static inline void btrfs_revert_meta_write_pointer(
-						struct btrfs_block_group *cache,
-						struct extent_buffer *eb)
-{
-}
-
 static inline int btrfs_zoned_issue_zeroout(struct btrfs_device *device,
 					    u64 physical, u64 length)
 {

From patchwork Mon Jul 24 04:18:32 2023
From: Naohiro Aota
To: linux-btrfs@vger.kernel.org
Cc: Naohiro Aota
Subject: [PATCH 3/8] btrfs: zoned: update meta_write_pointer on zone finish
Date: Mon, 24 Jul 2023 13:18:32 +0900

On finishing a zone, the meta_write_pointer should be set to the end of the
zone to reflect the actual write pointer position.

Signed-off-by: Naohiro Aota
Reviewed-by: Christoph Hellwig
---
 fs/btrfs/zoned.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
index 3f8f5e8c28a9..dd86f1858c88 100644
--- a/fs/btrfs/zoned.c
+++ b/fs/btrfs/zoned.c
@@ -2048,6 +2048,9 @@ static int do_zone_finish(struct btrfs_block_group *block_group, bool fully_writ
 	clear_bit(BLOCK_GROUP_FLAG_ZONE_IS_ACTIVE, &block_group->runtime_flags);
 	block_group->alloc_offset = block_group->zone_capacity;
+	if (block_group->flags & (BTRFS_BLOCK_GROUP_METADATA | BTRFS_BLOCK_GROUP_SYSTEM))
+		block_group->meta_write_pointer = block_group->start +
+						  block_group->zone_capacity;
 	block_group->free_space_ctl->free_space = 0;
 	btrfs_clear_treelog_bg(block_group);
 	btrfs_clear_data_reloc_bg(block_group);

From patchwork Mon Jul 24 04:18:33 2023
From: Naohiro Aota
To: linux-btrfs@vger.kernel.org
Cc: Naohiro Aota
Subject: [PATCH 4/8] btrfs: zoned: reserve zones for an active metadata/system block group
Date: Mon, 24 Jul 2023 13:18:33 +0900

Ensure that a metadata and a system block group can be activated at write
time, by leaving a certain number of active zones free when trying to
activate a data block group.

Signed-off-by: Naohiro Aota
---
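[Editor's note, not part of the patch: the reservation arithmetic of
reserved_zones() restated as a stand-alone sketch. The real code derives the
per-profile zone count from btrfs_get_alloc_profile(); here each profile is
reduced to a DUP-or-single flag.]

/*
 * One active zone is needed per copy: a single-profile block group occupies
 * one zone, a DUP block group occupies two.  The reserve covers one metadata
 * block group plus one system block group, so single/single reserves 2 zones
 * and DUP/DUP reserves 4.
 */
static int sketch_reserved_zones(int meta_is_dup, int sys_is_dup)
{
	return (meta_is_dup ? 2 : 1) + (sys_is_dup ? 2 : 1);
}

/* Data activation backs off once it would eat into the reserve. */
static int sketch_can_activate_data_zone(int active_zones_left,
					 int meta_is_dup, int sys_is_dup)
{
	return active_zones_left > sketch_reserved_zones(meta_is_dup, sys_is_dup);
}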
 fs/btrfs/zoned.c | 46 ++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 44 insertions(+), 2 deletions(-)

diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
index dd86f1858c88..dbfa49c70c1a 100644
--- a/fs/btrfs/zoned.c
+++ b/fs/btrfs/zoned.c
@@ -1867,6 +1867,35 @@ int btrfs_sync_zone_write_pointer(struct btrfs_device *tgt_dev, u64 logical,
 	return btrfs_zoned_issue_zeroout(tgt_dev, physical_pos, length);
 }
 
+/*
+ * Number of active zones reserved for one metadata block group and one
+ * system block group.
+ */
+static int reserved_zones(struct btrfs_fs_info *fs_info)
+{
+	const u64 flags[] = {BTRFS_BLOCK_GROUP_METADATA, BTRFS_BLOCK_GROUP_SYSTEM};
+	int reserved = 0;
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(flags); i++) {
+		u64 profile = btrfs_get_alloc_profile(fs_info, flags[i]);
+
+		switch (profile) {
+		case 0: /* single */
+			reserved += 1;
+			break;
+		case BTRFS_BLOCK_GROUP_DUP:
+			reserved += 2;
+			break;
+		default:
+			ASSERT(0);
+			break;
+		}
+	}
+
+	return reserved;
+}
+
 /*
  * Activate block group and underlying device zones
  *
@@ -1880,6 +1909,8 @@ bool btrfs_zone_activate(struct btrfs_block_group *block_group)
 	struct btrfs_space_info *space_info = block_group->space_info;
 	struct map_lookup *map;
 	struct btrfs_device *device;
+	const int reserved = (block_group->flags & BTRFS_BLOCK_GROUP_DATA) ?
+		reserved_zones(fs_info) : 0;
 	u64 physical;
 	bool ret;
 	int i;
@@ -1909,6 +1940,15 @@ bool btrfs_zone_activate(struct btrfs_block_group *block_group)
 		if (device->zone_info->max_active_zones == 0)
 			continue;
 
+		/*
+		 * For the data block group, leave active zones for one
+		 * metadata block group and one system block group.
+		 */
+		if (atomic_read(&device->zone_info->active_zones_left) <= reserved) {
+			ret = false;
+			goto out_unlock;
+		}
+
 		if (!btrfs_dev_set_active_zone(device, physical)) {
 			/* Cannot activate the zone */
 			ret = false;
@@ -2103,6 +2143,8 @@ bool btrfs_can_activate_zone(struct btrfs_fs_devices *fs_devices, u64 flags)
 {
 	struct btrfs_fs_info *fs_info = fs_devices->fs_info;
 	struct btrfs_device *device;
+	const int reserved = (flags & BTRFS_BLOCK_GROUP_DATA) ?
+		reserved_zones(fs_info) : 0;
 	bool ret = false;
 
 	if (!btrfs_is_zoned(fs_info))
@@ -2123,10 +2165,10 @@ bool btrfs_can_activate_zone(struct btrfs_fs_devices *fs_devices, u64 flags)
 
 		switch (flags & BTRFS_BLOCK_GROUP_PROFILE_MASK) {
 		case 0: /* single */
-			ret = (atomic_read(&zinfo->active_zones_left) >= 1);
+			ret = (atomic_read(&zinfo->active_zones_left) >= (1 + reserved));
 			break;
 		case BTRFS_BLOCK_GROUP_DUP:
-			ret = (atomic_read(&zinfo->active_zones_left) >= 2);
+			ret = (atomic_read(&zinfo->active_zones_left) >= (2 + reserved));
 			break;
 		}
 		if (ret)

From patchwork Mon Jul 24 04:18:34 2023
From: Naohiro Aota
To: linux-btrfs@vger.kernel.org
Cc: Naohiro Aota
Subject: [PATCH 5/8] btrfs: zoned: activate metadata block group on write time
Date: Mon, 24 Jul 2023 13:18:34 +0900

In the current implementation, block groups are activated at reservation
time to ensure that all reserved bytes can be written to an active metadata
block group. However, this approach has proven to be less efficient, as it
activates block groups more frequently than necessary, putting pressure on
the active zone resource and leading to potential issues such as early
ENOSPC or hung_task.

Another drawback of the current method is that it hampers metadata
over-commit, and necessitates additional flush operations and block group
allocations, resulting in decreased overall performance.

To address these issues, this commit introduces write-time activation of
metadata and system block groups. This involves reserving at least one
active block group specifically for a metadata and a system block group.

Since metadata write-out is always allocated sequentially, when we need to
write to a non-active block group, we can wait for the ongoing IOs to
complete, activate a new block group, and then proceed with writing to the
new block group.

Fixes: b09315139136 ("btrfs: zoned: activate metadata block group on flush_space")
CC: stable@vger.kernel.org # 6.1+
Signed-off-by: Naohiro Aota
---
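[Editor's note, not part of the patch: a rough, stand-alone sketch of the
pivot logic, with made-up sketch_* types and stubs in place of the real
writeback-wait, zone-finish and activation helpers.]

#include <stdbool.h>
#include <stddef.h>

struct sketch_bg {
	bool active;
	unsigned long long start, alloc_offset, write_pointer;
};

/* Stubs standing in for the writeback wait, zone finish and activation steps. */
static void sketch_wait_writeback(struct sketch_bg *bg) { (void)bg; }
static void sketch_zone_finish(struct sketch_bg *bg) { bg->active = false; }
static bool sketch_zone_activate(struct sketch_bg *bg) { bg->active = true; return true; }

/*
 * Write into the currently active metadata block group if possible;
 * otherwise finish it (after letting its writeback drain) and activate the
 * target block group.  Returns false when the caller should back off,
 * mirroring the "unsent IO left" case in the patch.
 */
static bool sketch_prepare_meta_write(struct sketch_bg **active,
				      struct sketch_bg *target, bool may_wait)
{
	struct sketch_bg *cur = *active;

	if (target->active)
		return true;				/* nothing to do */

	if (cur && cur != target) {
		/* Unsubmitted buffers below the allocation point: waiting could deadlock. */
		if (cur->write_pointer < cur->start + cur->alloc_offset && !may_wait)
			return false;

		sketch_wait_writeback(cur);		/* let in-flight IO fill the zone */
		sketch_zone_finish(cur);
		*active = NULL;
	}

	if (!sketch_zone_activate(target))
		return false;
	*active = target;
	return true;
}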
 fs/btrfs/block-group.c | 11 +++++++
 fs/btrfs/extent_io.c   |  2 +-
 fs/btrfs/fs.h          |  3 ++
 fs/btrfs/zoned.c       | 72 ++++++++++++++++++++++++++++++++++++++++++
 fs/btrfs/zoned.h       |  1 +
 5 files changed, 88 insertions(+), 1 deletion(-)

diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
index 91d38f38c1e7..75f8482f45e5 100644
--- a/fs/btrfs/block-group.c
+++ b/fs/btrfs/block-group.c
@@ -4274,6 +4274,17 @@ int btrfs_free_block_groups(struct btrfs_fs_info *info)
 	struct btrfs_caching_control *caching_ctl;
 	struct rb_node *n;
 
+	if (btrfs_is_zoned(info)) {
+		if (info->active_meta_bg) {
+			btrfs_put_block_group(info->active_meta_bg);
+			info->active_meta_bg = NULL;
+		}
+		if (info->active_system_bg) {
+			btrfs_put_block_group(info->active_system_bg);
+			info->active_system_bg = NULL;
+		}
+	}
+
 	write_lock(&info->block_group_cache_lock);
 	while (!list_empty(&info->caching_block_groups)) {
 		caching_ctl = list_entry(info->caching_block_groups.next,
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 46a0b5357009..3f104559c0cc 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -1894,7 +1894,7 @@ static int submit_eb_page(struct page *page, struct writeback_control *wbc,
 	if (!ret)
 		return 0;
 
-	if (!btrfs_check_meta_write_pointer(eb->fs_info, eb, bg_context)) {
+	if (!btrfs_check_meta_write_pointer(eb->fs_info, wbc, eb, bg_context)) {
 		/*
 		 * If for_sync, this hole will be filled with
 		 * trasnsaction commit.
diff --git a/fs/btrfs/fs.h b/fs/btrfs/fs.h
index 203d2a267828..1f2d33112106 100644
--- a/fs/btrfs/fs.h
+++ b/fs/btrfs/fs.h
@@ -766,6 +766,9 @@ struct btrfs_fs_info {
 	u64 data_reloc_bg;
 	struct mutex zoned_data_reloc_io_lock;
 
+	struct btrfs_block_group *active_meta_bg;
+	struct btrfs_block_group *active_system_bg;
+
 	u64 nr_global_roots;
 
 	spinlock_t zone_active_bgs_lock;
diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
index dbfa49c70c1a..f440853dff1c 100644
--- a/fs/btrfs/zoned.c
+++ b/fs/btrfs/zoned.c
@@ -65,6 +65,9 @@
 
 #define SUPER_INFO_SECTORS	((u64)BTRFS_SUPER_INFO_SIZE >> SECTOR_SHIFT)
 
+static void wait_eb_writebacks(struct btrfs_block_group *block_group);
+static int do_zone_finish(struct btrfs_block_group *block_group, bool fully_written);
+
 static inline bool sb_zone_is_full(const struct blk_zone *zone)
 {
 	return (zone->cond == BLK_ZONE_COND_FULL) ||
@@ -1748,6 +1751,7 @@ void btrfs_finish_ordered_zoned(struct btrfs_ordered_extent *ordered)
 }
 
 bool btrfs_check_meta_write_pointer(struct btrfs_fs_info *fs_info,
+				    struct writeback_control *wbc,
 				    struct extent_buffer *eb,
 				    struct btrfs_block_group **cache_ret)
 {
@@ -1779,6 +1783,74 @@ bool btrfs_check_meta_write_pointer(struct btrfs_fs_info *fs_info,
 	if (cache->meta_write_pointer != eb->start)
 		return false;
 
+	if (test_bit(BTRFS_FS_ACTIVE_ZONE_TRACKING, &fs_info->flags)) {
+		bool is_system = cache->flags & BTRFS_BLOCK_GROUP_SYSTEM;
+
+		spin_lock(&cache->lock);
+		if (test_bit(BLOCK_GROUP_FLAG_ZONE_IS_ACTIVE,
+			     &cache->runtime_flags)) {
+			spin_unlock(&cache->lock);
+			return true;
+		}
+
+		spin_unlock(&cache->lock);
+		if (fs_info->treelog_bg == cache->start) {
+			if (!btrfs_zone_activate(cache)) {
+				int ret_fin = btrfs_zone_finish_one_bg(fs_info);
+
+				if (ret_fin != 1 || !btrfs_zone_activate(cache))
+					return false;
+			}
+		} else if ((!is_system && fs_info->active_meta_bg != cache) ||
+			   (is_system && fs_info->active_system_bg != cache)) {
+			struct btrfs_block_group *tgt = is_system ?
+				fs_info->active_system_bg : fs_info->active_meta_bg;
+
+			/*
+			 * zoned_meta_io_lock protects
+			 * fs_info->active_{meta,system}_bg.
+			 */
+			lockdep_assert_held(&fs_info->zoned_meta_io_lock);
+
+			if (tgt) {
+				/*
+				 * If there is an unsent IO left in the
+				 * allocated area, we cannot wait for them
+				 * as it may cause a deadlock.
+				 */
+				if (tgt->meta_write_pointer < tgt->start + tgt->alloc_offset) {
+					if (wbc->sync_mode == WB_SYNC_NONE ||
+					    (wbc->sync_mode == WB_SYNC_ALL && !wbc->for_sync))
+						return false;
+				}
+
+				/* Pivot active metadata/system block group. */
+				btrfs_zoned_meta_io_unlock(fs_info);
+				wait_eb_writebacks(tgt);
+				do_zone_finish(tgt, true);
+				btrfs_zoned_meta_io_lock(fs_info);
+				if (is_system && fs_info->active_system_bg == tgt) {
+					btrfs_put_block_group(tgt);
+					fs_info->active_system_bg = NULL;
+				} else if (!is_system && fs_info->active_meta_bg == tgt) {
+					btrfs_put_block_group(tgt);
+					fs_info->active_meta_bg = NULL;
+				}
+			}
+			if (!btrfs_zone_activate(cache))
+				return false;
+			if (is_system && fs_info->active_system_bg != cache) {
+				ASSERT(fs_info->active_system_bg == NULL);
+				fs_info->active_system_bg = cache;
+				btrfs_get_block_group(cache);
+			} else if (!is_system && fs_info->active_meta_bg != cache) {
+				ASSERT(fs_info->active_meta_bg == NULL);
+				fs_info->active_meta_bg = cache;
+				btrfs_get_block_group(cache);
+			}
+		}
+	}
+
 	return true;
 }
 
diff --git a/fs/btrfs/zoned.h b/fs/btrfs/zoned.h
index 42a4df94dc29..6935e04effdd 100644
--- a/fs/btrfs/zoned.h
+++ b/fs/btrfs/zoned.h
@@ -59,6 +59,7 @@ void btrfs_redirty_list_add(struct btrfs_transaction *trans,
 bool btrfs_use_zone_append(struct btrfs_bio *bbio);
 void btrfs_record_physical_zoned(struct btrfs_bio *bbio);
 bool btrfs_check_meta_write_pointer(struct btrfs_fs_info *fs_info,
+				    struct writeback_control *wbc,
 				    struct extent_buffer *eb,
 				    struct btrfs_block_group **cache_ret);
 int btrfs_zoned_issue_zeroout(struct btrfs_device *device, u64 physical, u64 length);

From patchwork Mon Jul 24 04:18:35 2023
From: Naohiro Aota
To: linux-btrfs@vger.kernel.org
Cc: Naohiro Aota
Subject: [PATCH 6/8] btrfs: zoned: no longer count fresh BG region as zone unusable
Date: Mon, 24 Jul 2023 13:18:35 +0900

Now that we have switched to write-time activation, we no longer need to
(and must not) count the fresh region as zone unusable. This commit is
similar to reverting commit fc22cf8eba79 ("btrfs: zoned: count fresh BG
region as zone unusable").

Signed-off-by: Naohiro Aota
---
 fs/btrfs/free-space-cache.c |  8 +-------
 fs/btrfs/zoned.c            | 26 +++-----------------------
 2 files changed, 4 insertions(+), 30 deletions(-)

diff --git a/fs/btrfs/free-space-cache.c b/fs/btrfs/free-space-cache.c
index bfc01352351c..13f47d9ec13d 100644
--- a/fs/btrfs/free-space-cache.c
+++ b/fs/btrfs/free-space-cache.c
@@ -2704,13 +2704,8 @@ static int __btrfs_add_free_space_zoned(struct btrfs_block_group *block_group,
 	bg_reclaim_threshold = READ_ONCE(sinfo->bg_reclaim_threshold);
 
 	spin_lock(&ctl->tree_lock);
-	/* Count initial region as zone_unusable until it gets activated. */
 	if (!used)
 		to_free = size;
-	else if (initial &&
-		 test_bit(BTRFS_FS_ACTIVE_ZONE_TRACKING, &block_group->fs_info->flags) &&
-		 (block_group->flags & (BTRFS_BLOCK_GROUP_METADATA | BTRFS_BLOCK_GROUP_SYSTEM)))
-		to_free = 0;
 	else if (initial)
 		to_free = block_group->zone_capacity;
 	else if (offset >= block_group->alloc_offset)
@@ -2738,8 +2733,7 @@ static int __btrfs_add_free_space_zoned(struct btrfs_block_group *block_group,
 	reclaimable_unusable = block_group->zone_unusable -
 			       (block_group->length - block_group->zone_capacity);
 	/* All the region is now unusable. Mark it as unused and reclaim */
-	if (block_group->zone_unusable == block_group->length &&
-	    block_group->alloc_offset) {
+	if (block_group->zone_unusable == block_group->length) {
 		btrfs_mark_bg_unused(block_group);
 	} else if (bg_reclaim_threshold &&
 		   reclaimable_unusable >=
diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
index f440853dff1c..ce816f5885fb 100644
--- a/fs/btrfs/zoned.c
+++ b/fs/btrfs/zoned.c
@@ -1586,19 +1586,9 @@ void btrfs_calc_zone_unusable(struct btrfs_block_group *cache)
 		return;
 
 	WARN_ON(cache->bytes_super != 0);
-
-	/* Check for block groups never get activated */
-	if (test_bit(BTRFS_FS_ACTIVE_ZONE_TRACKING, &cache->fs_info->flags) &&
-	    cache->flags & (BTRFS_BLOCK_GROUP_METADATA | BTRFS_BLOCK_GROUP_SYSTEM) &&
-	    !test_bit(BLOCK_GROUP_FLAG_ZONE_IS_ACTIVE, &cache->runtime_flags) &&
-	    cache->alloc_offset == 0) {
-		unusable = cache->length;
-		free = 0;
-	} else {
-		unusable = (cache->alloc_offset - cache->used) +
-			   (cache->length - cache->zone_capacity);
-		free = cache->zone_capacity - cache->alloc_offset;
-	}
+	unusable = (cache->alloc_offset - cache->used) +
+		   (cache->length - cache->zone_capacity);
+	free = cache->zone_capacity - cache->alloc_offset;
 
 	/* We only need ->free_space in ALLOC_SEQ block groups */
 	cache->cached = BTRFS_CACHE_FINISHED;
@@ -1978,7 +1968,6 @@ static int reserved_zones(struct btrfs_fs_info *fs_info)
 bool btrfs_zone_activate(struct btrfs_block_group *block_group)
 {
 	struct btrfs_fs_info *fs_info = block_group->fs_info;
-	struct btrfs_space_info *space_info = block_group->space_info;
 	struct map_lookup *map;
 	struct btrfs_device *device;
 	const int reserved = (block_group->flags & BTRFS_BLOCK_GROUP_DATA) ?
@@ -1992,7 +1981,6 @@ bool btrfs_zone_activate(struct btrfs_block_group *block_group)
 
 	map = block_group->physical_map;
 
-	spin_lock(&space_info->lock);
 	spin_lock(&block_group->lock);
 	if (test_bit(BLOCK_GROUP_FLAG_ZONE_IS_ACTIVE, &block_group->runtime_flags)) {
 		ret = true;
@@ -2030,14 +2018,7 @@ bool btrfs_zone_activate(struct btrfs_block_group *block_group)
 
 	/* Successfully activated all the zones */
 	set_bit(BLOCK_GROUP_FLAG_ZONE_IS_ACTIVE, &block_group->runtime_flags);
-	WARN_ON(block_group->alloc_offset != 0);
-	if (block_group->zone_unusable == block_group->length) {
-		block_group->zone_unusable = block_group->length - block_group->zone_capacity;
-		space_info->bytes_zone_unusable -= block_group->zone_capacity;
-	}
 	spin_unlock(&block_group->lock);
-	btrfs_try_granting_tickets(fs_info, space_info);
-	spin_unlock(&space_info->lock);
 
 	/* For the active block group list */
 	btrfs_get_block_group(block_group);
@@ -2050,7 +2031,6 @@ bool btrfs_zone_activate(struct btrfs_block_group *block_group)
 
 out_unlock:
 	spin_unlock(&block_group->lock);
-	spin_unlock(&space_info->lock);
 	return ret;
 }

From patchwork Mon Jul 24 04:18:36 2023
From: Naohiro Aota
To: linux-btrfs@vger.kernel.org
Cc: Naohiro Aota
Subject: [PATCH 7/8] btrfs: zoned: don't activate non-DATA BG on allocation
Date: Mon, 24 Jul 2023 13:18:36 +0900

Now that a non-DATA block group is activated at write time, don't activate
it at allocation time.

Signed-off-by: Naohiro Aota
---
 fs/btrfs/block-group.c |  2 +-
 fs/btrfs/extent-tree.c |  8 +++++++-
 fs/btrfs/space-info.c  | 28 ----------------------------
 3 files changed, 8 insertions(+), 30 deletions(-)

diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
index 75f8482f45e5..fc5f6b977189 100644
--- a/fs/btrfs/block-group.c
+++ b/fs/btrfs/block-group.c
@@ -4076,7 +4076,7 @@ int btrfs_chunk_alloc(struct btrfs_trans_handle *trans, u64 flags,
 
 	if (IS_ERR(ret_bg)) {
 		ret = PTR_ERR(ret_bg);
-	} else if (from_extent_allocation) {
+	} else if (from_extent_allocation && (flags & BTRFS_BLOCK_GROUP_DATA)) {
 		/*
 		 * New block group is likely to be used soon. Try to activate
 		 * it now. Failure is OK for now.
diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
index 602cb750100c..9804e3fcc5ba 100644
--- a/fs/btrfs/extent-tree.c
+++ b/fs/btrfs/extent-tree.c
@@ -3663,7 +3663,9 @@ static int do_allocation_zoned(struct btrfs_block_group *block_group,
 	}
 	spin_unlock(&block_group->lock);
 
-	if (!ret && !btrfs_zone_activate(block_group)) {
+	/* Metadata block group is activated on write time. */
+	if (!ret && (block_group->flags & BTRFS_BLOCK_GROUP_DATA) &&
+	    !btrfs_zone_activate(block_group)) {
 		ret = 1;
 		/*
 		 * May need to clear fs_info->{treelog,data_reloc}_bg.
@@ -3843,6 +3845,10 @@ static void found_extent(struct find_free_extent_ctl *ffe_ctl,
 static int can_allocate_chunk_zoned(struct btrfs_fs_info *fs_info,
 				    struct find_free_extent_ctl *ffe_ctl)
 {
+	/* Block group's activeness is not a requirement for METADATA block groups. */
+	if (!(ffe_ctl->flags & BTRFS_BLOCK_GROUP_DATA))
+		return 0;
+
 	/* If we can activate new zone, just allocate a chunk and use it */
 	if (btrfs_can_activate_zone(fs_info->fs_devices, ffe_ctl->flags))
 		return 0;
diff --git a/fs/btrfs/space-info.c b/fs/btrfs/space-info.c
index 75e7fa337e66..a84b6088a73d 100644
--- a/fs/btrfs/space-info.c
+++ b/fs/btrfs/space-info.c
@@ -747,18 +747,6 @@ static void flush_space(struct btrfs_fs_info *fs_info,
 		break;
 	case ALLOC_CHUNK:
 	case ALLOC_CHUNK_FORCE:
-		/*
-		 * For metadata space on zoned filesystem, reaching here means we
-		 * don't have enough space left in active_total_bytes. Try to
-		 * activate a block group first, because we may have inactive
-		 * block group already allocated.
-		 */
-		ret = btrfs_zoned_activate_one_bg(fs_info, space_info, false);
-		if (ret < 0)
-			break;
-		else if (ret == 1)
-			break;
-
 		trans = btrfs_join_transaction(root);
 		if (IS_ERR(trans)) {
 			ret = PTR_ERR(trans);
@@ -770,22 +758,6 @@ static void flush_space(struct btrfs_fs_info *fs_info,
 					    CHUNK_ALLOC_FORCE);
 		btrfs_end_transaction(trans);
 
-		/*
-		 * For metadata space on zoned filesystem, allocating a new chunk
-		 * is not enough. We still need to activate the block * group.
-		 * Active the newly allocated block group by (maybe) finishing
-		 * a block group.
-		 */
-		if (ret == 1) {
-			ret = btrfs_zoned_activate_one_bg(fs_info, space_info, true);
-			/*
-			 * Revert to the original ret regardless we could finish
-			 * one block group or not.
-			 */
-			if (ret >= 0)
-				ret = 1;
-		}
-
 		if (ret > 0 || ret == -ENOSPC)
 			ret = 0;
 		break;

From patchwork Mon Jul 24 04:18:37 2023
From: Naohiro Aota
To: linux-btrfs@vger.kernel.org
Cc: Naohiro Aota
Subject: [PATCH 8/8] btrfs: zoned: re-enable metadata over-commit for zoned mode
Date: Mon, 24 Jul 2023 13:18:37 +0900

Now we can re-enable metadata over-commit. As we moved the activation from
reservation time to write time, we no longer need to ensure that all the
reserved bytes are backed by an already active block group.
Without metadata over-commit, the filesystem suffers from lower performance
because it needs to flush the delalloc items more often and to allocate more
block groups. Re-enabling metadata over-commit solves the issue.

Fixes: 79417d040f4f ("btrfs: zoned: disable metadata overcommit for zoned")
CC: stable@vger.kernel.org # 6.1+
Signed-off-by: Naohiro Aota
---
 fs/btrfs/space-info.c | 6 +-----
 1 file changed, 1 insertion(+), 5 deletions(-)

diff --git a/fs/btrfs/space-info.c b/fs/btrfs/space-info.c
index a84b6088a73d..4c4a30439fcf 100644
--- a/fs/btrfs/space-info.c
+++ b/fs/btrfs/space-info.c
@@ -389,11 +389,7 @@ int btrfs_can_overcommit(struct btrfs_fs_info *fs_info,
 		return 0;
 
 	used = btrfs_space_info_used(space_info, true);
-	if (test_bit(BTRFS_FS_ACTIVE_ZONE_TRACKING, &fs_info->flags) &&
-	    (space_info->flags & BTRFS_BLOCK_GROUP_METADATA))
-		avail = 0;
-	else
-		avail = calc_available_free_space(fs_info, space_info, flush);
+	avail = calc_available_free_space(fs_info, space_info, flush);
 
 	if (used + bytes < space_info->total_bytes + avail)
 		return 1;
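[Editor's note, not part of the series: a stand-alone illustration of the
over-commit check restored above, with made-up numbers.]

#include <stdbool.h>
#include <stdio.h>

/* The condition from btrfs_can_overcommit(): the reservation may over-commit. */
static bool sketch_can_overcommit(unsigned long long used, unsigned long long bytes,
				  unsigned long long total, unsigned long long avail)
{
	return used + bytes < total + avail;
}

int main(void)
{
	/* 900 MiB used, 512 MiB to reserve, 1 GiB of allocated metadata space. */
	unsigned long long used = 900ULL << 20, bytes = 512ULL << 20, total = 1ULL << 30;

	/* avail forced to 0 (old zoned behaviour): the reservation fails and triggers flushing. */
	printf("avail = 0:     %d\n", sketch_can_overcommit(used, bytes, total, 0));
	/* avail taken from unallocated space again: the reservation over-commits and succeeds. */
	printf("avail = 2 GiB: %d\n", sketch_can_overcommit(used, bytes, total, 2ULL << 30));
	return 0;
}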