From patchwork Thu Apr 15 13:58:33 2021
X-Patchwork-Submitter: Johannes Thumshirn
X-Patchwork-Id: 12205565
From: Johannes Thumshirn
To: David Sterba
Cc: Johannes Thumshirn, linux-btrfs@vger.kernel.org, Josef Bacik, Naohiro Aota,
    Filipe Manana, Anand Jain
Subject: [PATCH v4 1/3] btrfs: zoned: reset zones of relocated block groups
Date: Thu, 15 Apr 2021 22:58:33 +0900
X-Mailer: git-send-email 2.30.0
X-Mailing-List: linux-btrfs@vger.kernel.org

When relocating a block group, the freed-up space is not discarded in one
big block; instead, each extent is discarded on its own with -o discard=sync.
For a zoned filesystem we need to discard the whole block group at once, so
btrfs_discard_extent() will translate the discard into a REQ_OP_ZONE_RESET
operation, which then resets the device's zone.

Link: https://lore.kernel.org/linux-btrfs/459e2932c48e12e883dcfd3dda828d9da251d5b5.1617962110.git.johannes.thumshirn@wdc.com
Signed-off-by: Johannes Thumshirn
Reviewed-by: Josef Bacik
Reviewed-by: Filipe Manana
---
 fs/btrfs/volumes.c | 21 +++++++++++++++++----
 1 file changed, 17 insertions(+), 4 deletions(-)

diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index 6d9b2369f17a..b1bab75ec12a 100644
--- a/fs/btrfs/volumes.c
+++ b/fs/btrfs/volumes.c
@@ -3103,6 +3103,7 @@ static int btrfs_relocate_chunk(struct btrfs_fs_info *fs_info, u64 chunk_offset)
 	struct btrfs_root *root = fs_info->chunk_root;
 	struct btrfs_trans_handle *trans;
 	struct btrfs_block_group *block_group;
+	u64 length;
 	int ret;
 
 	/*
@@ -3130,8 +3131,24 @@ static int btrfs_relocate_chunk(struct btrfs_fs_info *fs_info, u64 chunk_offset)
 	if (!block_group)
 		return -ENOENT;
 	btrfs_discard_cancel_work(&fs_info->discard_ctl, block_group);
+	length = block_group->length;
 	btrfs_put_block_group(block_group);
 
+	/*
+	 * Step two, delete the device extents and the chunk tree entries.
+	 *
+	 * On a zoned file system, discard the whole block group, this will
+	 * trigger a REQ_OP_ZONE_RESET operation on the device zone. If
+	 * resetting the zone fails, don't treat it as a fatal problem from the
+	 * filesystem's point of view.
+	 */
+	if (btrfs_is_zoned(fs_info)) {
+		ret = btrfs_discard_extent(fs_info, chunk_offset, length, NULL);
+		if (ret)
+			btrfs_info(fs_info, "failed to reset zone %llu",
+				   chunk_offset);
+	}
+
 	trans = btrfs_start_trans_remove_block_group(root->fs_info,
 						     chunk_offset);
 	if (IS_ERR(trans)) {
@@ -3140,10 +3157,6 @@ static int btrfs_relocate_chunk(struct btrfs_fs_info *fs_info, u64 chunk_offset)
 		return ret;
 	}
 
-	/*
-	 * step two, delete the device extents and the
-	 * chunk tree entries
-	 */
 	ret = btrfs_remove_chunk(trans, chunk_offset);
 	btrfs_end_transaction(trans);
 	return ret;
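For readers not familiar with zoned block devices: space in a zone can only be
reused after the zone's write pointer has been rewound by a zone reset. The
following sketch is an illustration only, not code from this patch; the helper
name is made up, and it assumes the caller has already resolved the block
group's physical start and length on the target device. It shows roughly what
a whole-block-group discard turns into on a zoned device:

	#include <linux/blkdev.h>

	/*
	 * Illustration: translate a whole-block-group discard into a zone
	 * reset. REQ_OP_ZONE_RESET rewinds the zone's write pointer so the
	 * space can be written again.
	 */
	static int example_reset_block_group_zone(struct block_device *bdev,
						  u64 physical, u64 length)
	{
		return blkdev_zone_mgmt(bdev, REQ_OP_ZONE_RESET,
					physical >> SECTOR_SHIFT,
					length >> SECTOR_SHIFT, GFP_NOFS);
	}

In the patch itself this translation is done by btrfs_discard_extent(), so
btrfs_relocate_chunk() only has to pass the chunk's logical offset and the
block group's length.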
From patchwork Thu Apr 15 13:58:34 2021
X-Patchwork-Submitter: Johannes Thumshirn
X-Patchwork-Id: 12205559
From: Johannes Thumshirn
To: David Sterba
Cc: Johannes Thumshirn, linux-btrfs@vger.kernel.org, Josef Bacik, Naohiro Aota,
    Filipe Manana, Anand Jain
Subject: [PATCH v4 2/3] btrfs: rename delete_unused_bgs_mutex
Date: Thu, 15 Apr 2021 22:58:34 +0900
Message-Id: <160b0452ecb4a810b819e0eae68bd9ef507cc813.1618494550.git.johannes.thumshirn@wdc.com>
X-Mailer: git-send-email 2.30.0
X-Mailing-List: linux-btrfs@vger.kernel.org

As a preparation for another user, rename delete_unused_bgs_mutex to
reclaim_bgs_lock.

Signed-off-by: Johannes Thumshirn
Reviewed-by: Josef Bacik
Reviewed-by: Filipe Manana
---
 fs/btrfs/block-group.c |  6 +++---
 fs/btrfs/ctree.h       |  2 +-
 fs/btrfs/disk-io.c     |  6 +++---
 fs/btrfs/volumes.c     | 46 +++++++++++++++++++++---------------------
 4 files changed, 30 insertions(+), 30 deletions(-)

diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
index 293f3169be80..bbb5a6e170c7 100644
--- a/fs/btrfs/block-group.c
+++ b/fs/btrfs/block-group.c
@@ -1289,7 +1289,7 @@ void btrfs_delete_unused_bgs(struct btrfs_fs_info *fs_info)
 	 * Long running balances can keep us blocked here for eternity, so
 	 * simply skip deletion if we're unable to get the mutex.
 	 */
-	if (!mutex_trylock(&fs_info->delete_unused_bgs_mutex))
+	if (!mutex_trylock(&fs_info->reclaim_bgs_lock))
 		return;
 
 	spin_lock(&fs_info->unused_bgs_lock);
@@ -1462,12 +1462,12 @@ void btrfs_delete_unused_bgs(struct btrfs_fs_info *fs_info)
 		spin_lock(&fs_info->unused_bgs_lock);
 	}
 	spin_unlock(&fs_info->unused_bgs_lock);
-	mutex_unlock(&fs_info->delete_unused_bgs_mutex);
+	mutex_unlock(&fs_info->reclaim_bgs_lock);
 	return;
 
 flip_async:
 	btrfs_end_transaction(trans);
-	mutex_unlock(&fs_info->delete_unused_bgs_mutex);
+	mutex_unlock(&fs_info->reclaim_bgs_lock);
 	btrfs_put_block_group(block_group);
 	btrfs_discard_punt_unused_bgs_list(fs_info);
 }
diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
index 2c858d5349c8..c80302564e6b 100644
--- a/fs/btrfs/ctree.h
+++ b/fs/btrfs/ctree.h
@@ -957,7 +957,7 @@ struct btrfs_fs_info {
 	spinlock_t unused_bgs_lock;
 	struct list_head unused_bgs;
 	struct mutex unused_bg_unpin_mutex;
-	struct mutex delete_unused_bgs_mutex;
+	struct mutex reclaim_bgs_lock;
 
 	/* Cached block sizes */
 	u32 nodesize;
diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index 0a1182694f48..e52b89ad0a61 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -1890,10 +1890,10 @@ static int cleaner_kthread(void *arg)
 		btrfs_run_defrag_inodes(fs_info);
 
 		/*
-		 * Acquires fs_info->delete_unused_bgs_mutex to avoid racing
+		 * Acquires fs_info->reclaim_bgs_lock to avoid racing
 		 * with relocation (btrfs_relocate_chunk) and relocation
 		 * acquires fs_info->cleaner_mutex (btrfs_relocate_block_group)
-		 * after acquiring fs_info->delete_unused_bgs_mutex. So we
+		 * after acquiring fs_info->reclaim_bgs_lock. So we
 		 * can't hold, nor need to, fs_info->cleaner_mutex when deleting
 		 * unused block groups.
*/ @@ -2876,7 +2876,7 @@ void btrfs_init_fs_info(struct btrfs_fs_info *fs_info) spin_lock_init(&fs_info->treelog_bg_lock); rwlock_init(&fs_info->tree_mod_log_lock); mutex_init(&fs_info->unused_bg_unpin_mutex); - mutex_init(&fs_info->delete_unused_bgs_mutex); + mutex_init(&fs_info->reclaim_bgs_lock); mutex_init(&fs_info->reloc_mutex); mutex_init(&fs_info->delalloc_root_mutex); mutex_init(&fs_info->zoned_meta_io_lock); diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c index b1bab75ec12a..a2a7f5ab0a3e 100644 --- a/fs/btrfs/volumes.c +++ b/fs/btrfs/volumes.c @@ -3118,7 +3118,7 @@ static int btrfs_relocate_chunk(struct btrfs_fs_info *fs_info, u64 chunk_offset) * we release the path used to search the chunk/dev tree and before * the current task acquires this mutex and calls us. */ - lockdep_assert_held(&fs_info->delete_unused_bgs_mutex); + lockdep_assert_held(&fs_info->reclaim_bgs_lock); /* step one, relocate all the extents inside this chunk */ btrfs_scrub_pause(fs_info); @@ -3185,10 +3185,10 @@ static int btrfs_relocate_sys_chunks(struct btrfs_fs_info *fs_info) key.type = BTRFS_CHUNK_ITEM_KEY; while (1) { - mutex_lock(&fs_info->delete_unused_bgs_mutex); + mutex_lock(&fs_info->reclaim_bgs_lock); ret = btrfs_search_slot(NULL, chunk_root, &key, path, 0, 0); if (ret < 0) { - mutex_unlock(&fs_info->delete_unused_bgs_mutex); + mutex_unlock(&fs_info->reclaim_bgs_lock); goto error; } BUG_ON(ret == 0); /* Corruption */ @@ -3196,7 +3196,7 @@ static int btrfs_relocate_sys_chunks(struct btrfs_fs_info *fs_info) ret = btrfs_previous_item(chunk_root, path, key.objectid, key.type); if (ret) - mutex_unlock(&fs_info->delete_unused_bgs_mutex); + mutex_unlock(&fs_info->reclaim_bgs_lock); if (ret < 0) goto error; if (ret > 0) @@ -3217,7 +3217,7 @@ static int btrfs_relocate_sys_chunks(struct btrfs_fs_info *fs_info) else BUG_ON(ret); } - mutex_unlock(&fs_info->delete_unused_bgs_mutex); + mutex_unlock(&fs_info->reclaim_bgs_lock); if (found_key.offset == 0) break; @@ -3757,10 +3757,10 @@ static int __btrfs_balance(struct btrfs_fs_info *fs_info) goto error; } - mutex_lock(&fs_info->delete_unused_bgs_mutex); + mutex_lock(&fs_info->reclaim_bgs_lock); ret = btrfs_search_slot(NULL, chunk_root, &key, path, 0, 0); if (ret < 0) { - mutex_unlock(&fs_info->delete_unused_bgs_mutex); + mutex_unlock(&fs_info->reclaim_bgs_lock); goto error; } @@ -3774,7 +3774,7 @@ static int __btrfs_balance(struct btrfs_fs_info *fs_info) ret = btrfs_previous_item(chunk_root, path, 0, BTRFS_CHUNK_ITEM_KEY); if (ret) { - mutex_unlock(&fs_info->delete_unused_bgs_mutex); + mutex_unlock(&fs_info->reclaim_bgs_lock); ret = 0; break; } @@ -3784,7 +3784,7 @@ static int __btrfs_balance(struct btrfs_fs_info *fs_info) btrfs_item_key_to_cpu(leaf, &found_key, slot); if (found_key.objectid != key.objectid) { - mutex_unlock(&fs_info->delete_unused_bgs_mutex); + mutex_unlock(&fs_info->reclaim_bgs_lock); break; } @@ -3801,12 +3801,12 @@ static int __btrfs_balance(struct btrfs_fs_info *fs_info) btrfs_release_path(path); if (!ret) { - mutex_unlock(&fs_info->delete_unused_bgs_mutex); + mutex_unlock(&fs_info->reclaim_bgs_lock); goto loop; } if (counting) { - mutex_unlock(&fs_info->delete_unused_bgs_mutex); + mutex_unlock(&fs_info->reclaim_bgs_lock); spin_lock(&fs_info->balance_lock); bctl->stat.expected++; spin_unlock(&fs_info->balance_lock); @@ -3831,7 +3831,7 @@ static int __btrfs_balance(struct btrfs_fs_info *fs_info) count_meta < bctl->meta.limit_min) || ((chunk_type & BTRFS_BLOCK_GROUP_SYSTEM) && count_sys < bctl->sys.limit_min)) { - 
mutex_unlock(&fs_info->delete_unused_bgs_mutex);
+			mutex_unlock(&fs_info->reclaim_bgs_lock);
 			goto loop;
 		}
@@ -3845,7 +3845,7 @@ static int __btrfs_balance(struct btrfs_fs_info *fs_info)
 			ret = btrfs_may_alloc_data_chunk(fs_info,
 							 found_key.offset);
 			if (ret < 0) {
-				mutex_unlock(&fs_info->delete_unused_bgs_mutex);
+				mutex_unlock(&fs_info->reclaim_bgs_lock);
 				goto error;
 			} else if (ret == 1) {
 				chunk_reserved = 1;
@@ -3853,7 +3853,7 @@ static int __btrfs_balance(struct btrfs_fs_info *fs_info)
 		}
 
 		ret = btrfs_relocate_chunk(fs_info, found_key.offset);
-		mutex_unlock(&fs_info->delete_unused_bgs_mutex);
+		mutex_unlock(&fs_info->reclaim_bgs_lock);
 		if (ret == -ENOSPC) {
 			enospc_errors++;
 		} else if (ret == -ETXTBSY) {
@@ -4738,16 +4738,16 @@ int btrfs_shrink_device(struct btrfs_device *device, u64 new_size)
 	key.type = BTRFS_DEV_EXTENT_KEY;
 
 	do {
-		mutex_lock(&fs_info->delete_unused_bgs_mutex);
+		mutex_lock(&fs_info->reclaim_bgs_lock);
 		ret = btrfs_search_slot(NULL, root, &key, path, 0, 0);
 		if (ret < 0) {
-			mutex_unlock(&fs_info->delete_unused_bgs_mutex);
+			mutex_unlock(&fs_info->reclaim_bgs_lock);
 			goto done;
 		}
 
 		ret = btrfs_previous_item(root, path, 0, key.type);
 		if (ret) {
-			mutex_unlock(&fs_info->delete_unused_bgs_mutex);
+			mutex_unlock(&fs_info->reclaim_bgs_lock);
 			if (ret < 0)
 				goto done;
 			ret = 0;
@@ -4760,7 +4760,7 @@ int btrfs_shrink_device(struct btrfs_device *device, u64 new_size)
 		btrfs_item_key_to_cpu(l, &key, path->slots[0]);
 
 		if (key.objectid != device->devid) {
-			mutex_unlock(&fs_info->delete_unused_bgs_mutex);
+			mutex_unlock(&fs_info->reclaim_bgs_lock);
 			btrfs_release_path(path);
 			break;
 		}
@@ -4769,7 +4769,7 @@ int btrfs_shrink_device(struct btrfs_device *device, u64 new_size)
 		length = btrfs_dev_extent_length(l, dev_extent);
 
 		if (key.offset + length <= new_size) {
-			mutex_unlock(&fs_info->delete_unused_bgs_mutex);
+			mutex_unlock(&fs_info->reclaim_bgs_lock);
 			btrfs_release_path(path);
 			break;
 		}
@@ -4785,12 +4785,12 @@ int btrfs_shrink_device(struct btrfs_device *device, u64 new_size)
 		 */
 		ret = btrfs_may_alloc_data_chunk(fs_info, chunk_offset);
 		if (ret < 0) {
-			mutex_unlock(&fs_info->delete_unused_bgs_mutex);
+			mutex_unlock(&fs_info->reclaim_bgs_lock);
 			goto done;
 		}
 
 		ret = btrfs_relocate_chunk(fs_info, chunk_offset);
-		mutex_unlock(&fs_info->delete_unused_bgs_mutex);
+		mutex_unlock(&fs_info->reclaim_bgs_lock);
 		if (ret == -ENOSPC) {
 			failed++;
 		} else if (ret) {
@@ -8016,7 +8016,7 @@ static int relocating_repair_kthread(void *data)
 		return -EBUSY;
 	}
 
-	mutex_lock(&fs_info->delete_unused_bgs_mutex);
+	mutex_lock(&fs_info->reclaim_bgs_lock);
 
 	/* Ensure block group still exists */
 	cache = btrfs_lookup_block_group(fs_info, target);
@@ -8038,7 +8038,7 @@ static int relocating_repair_kthread(void *data)
 out:
 	if (cache)
 		btrfs_put_block_group(cache);
-	mutex_unlock(&fs_info->delete_unused_bgs_mutex);
+	mutex_unlock(&fs_info->reclaim_bgs_lock);
 	btrfs_exclop_finish(fs_info);
 	return ret;
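The renamed lock keeps the same two usage patterns it had as
delete_unused_bgs_mutex, and the comments carried over in the hunks above
explain why: chunk relocation takes reclaim_bgs_lock before cleaner_mutex, so
the cleaner must not hold cleaner_mutex while deleting unused block groups,
and the cleaner only trylocks reclaim_bgs_lock so that a long-running balance
can never block it indefinitely. A condensed illustration of the two sides
(sketch only, not code from the patch):

	/* Cleaner side: skip this pass instead of waiting behind balance. */
	if (!mutex_trylock(&fs_info->reclaim_bgs_lock))
		return;
	/* ... delete unused block groups ... */
	mutex_unlock(&fs_info->reclaim_bgs_lock);

	/* Balance/shrink side: hold the lock across each chunk relocation. */
	mutex_lock(&fs_info->reclaim_bgs_lock);
	ret = btrfs_relocate_chunk(fs_info, chunk_offset);
	mutex_unlock(&fs_info->reclaim_bgs_lock);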
From patchwork Thu Apr 15 13:58:35 2021
X-Patchwork-Submitter: Johannes Thumshirn
X-Patchwork-Id: 12205561
From: Johannes Thumshirn
To: David Sterba
Cc: Johannes Thumshirn, linux-btrfs@vger.kernel.org, Josef Bacik, Naohiro Aota,
    Filipe Manana, Anand Jain
Subject: [PATCH v4 3/3] btrfs: zoned: automatically reclaim zones
Date: Thu, 15 Apr 2021 22:58:35 +0900
Message-Id: <920701be19f36b4f7ed84efd53a3d0550700f047.1618494550.git.johannes.thumshirn@wdc.com>
X-Mailer: git-send-email 2.30.0
X-Mailing-List: linux-btrfs@vger.kernel.org

When a file gets deleted on a zoned file system, the freed space is not
returned to the block group's free space but is migrated to zone_unusable.
As this zone_unusable space is behind the current write pointer, it cannot
be used for new allocations.
In the current implementation a zone is reset only once all of the block
group's space is accounted as zone unusable. This behaviour can lead to
premature ENOSPC errors on a busy file system.

Instead of reclaiming the zone only once it is completely unusable, kick off
a reclaim job once the amount of unusable bytes exceeds a user-configurable
threshold between 51% and 100%. The threshold can be set per mounted
filesystem via the sysfs tunable bg_reclaim_threshold, which defaults to 75%.

Similar to reclaiming unused block groups, these dirty block groups are added
to a reclaim list (fs_info->reclaim_bgs). On transaction commit the reclaim
process is triggered, but only after the unused block groups have been
deleted, which frees up space for the relocation process.

Signed-off-by: Johannes Thumshirn
Reviewed-by: Filipe Manana
---
 fs/btrfs/block-group.c       | 96 ++++++++++++++++++++++++++++++++++++
 fs/btrfs/block-group.h       |  3 ++
 fs/btrfs/ctree.h             |  6 +++
 fs/btrfs/disk-io.c           | 13 +++++
 fs/btrfs/free-space-cache.c  |  9 +++-
 fs/btrfs/sysfs.c             | 35 +++++++++++++
 fs/btrfs/volumes.c           |  2 +-
 fs/btrfs/volumes.h           |  1 +
 include/trace/events/btrfs.h | 12 +++++
 9 files changed, 175 insertions(+), 2 deletions(-)

diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
index bbb5a6e170c7..3f06ea42c013 100644
--- a/fs/btrfs/block-group.c
+++ b/fs/btrfs/block-group.c
@@ -1485,6 +1485,92 @@ void btrfs_mark_bg_unused(struct btrfs_block_group *bg)
 	spin_unlock(&fs_info->unused_bgs_lock);
 }
 
+void btrfs_reclaim_bgs_work(struct work_struct *work)
+{
+	struct btrfs_fs_info *fs_info =
+		container_of(work, struct btrfs_fs_info, reclaim_bgs_work);
+	struct btrfs_block_group *bg;
+	struct btrfs_space_info *space_info;
+	int ret = 0;
+
+	if (!test_bit(BTRFS_FS_OPEN, &fs_info->flags))
+		return;
+
+	if (!btrfs_exclop_start(fs_info, BTRFS_EXCLOP_BALANCE))
+		return;
+
+	mutex_lock(&fs_info->reclaim_bgs_lock);
+	spin_lock(&fs_info->unused_bgs_lock);
+	while (!list_empty(&fs_info->reclaim_bgs)) {
+		bg = list_first_entry(&fs_info->reclaim_bgs,
+				      struct btrfs_block_group,
+				      bg_list);
+		list_del_init(&bg->bg_list);
+
+		space_info = bg->space_info;
+		spin_unlock(&fs_info->unused_bgs_lock);
+
+		/* Don't want to race with allocators so take the groups_sem */
+		down_write(&space_info->groups_sem);
+
+		spin_lock(&bg->lock);
+		if (bg->reserved || bg->pinned || bg->ro) {
+			/*
+			 * We want to bail if we made new allocations or have
+			 * outstanding allocations in this block group.  We do
+			 * the ro check in case balance is currently acting on
+			 * this block group.
+ */ + spin_unlock(&bg->lock); + up_write(&space_info->groups_sem); + goto next; + } + spin_unlock(&bg->lock); + + ret = inc_block_group_ro(bg, 0); + up_write(&space_info->groups_sem); + if (ret < 0) { + ret = 0; + goto next; + } + + btrfs_info(fs_info, "reclaiming chunk %llu", bg->start); + trace_btrfs_reclaim_block_group(bg); + ret = btrfs_relocate_chunk(fs_info, bg->start); + if (ret) + btrfs_err(fs_info, "error relocating chunk %llu", + bg->start); + +next: + btrfs_put_block_group(bg); + spin_lock(&fs_info->unused_bgs_lock); + } + spin_unlock(&fs_info->unused_bgs_lock); + mutex_unlock(&fs_info->reclaim_bgs_lock); + btrfs_exclop_finish(fs_info); +} + +void btrfs_reclaim_bgs(struct btrfs_fs_info *fs_info) +{ + spin_lock(&fs_info->unused_bgs_lock); + if (!list_empty(&fs_info->reclaim_bgs)) + queue_work(system_unbound_wq, &fs_info->reclaim_bgs_work); + spin_unlock(&fs_info->unused_bgs_lock); +} + +void btrfs_mark_bg_to_reclaim(struct btrfs_block_group *bg) +{ + struct btrfs_fs_info *fs_info = bg->fs_info; + + spin_lock(&fs_info->unused_bgs_lock); + if (list_empty(&bg->bg_list)) { + btrfs_get_block_group(bg); + trace_btrfs_add_reclaim_block_group(bg); + list_add_tail(&bg->bg_list, &fs_info->reclaim_bgs); + } + spin_unlock(&fs_info->unused_bgs_lock); +} + static int read_bg_from_eb(struct btrfs_fs_info *fs_info, struct btrfs_key *key, struct btrfs_path *path) { @@ -3446,6 +3532,16 @@ int btrfs_free_block_groups(struct btrfs_fs_info *info) } spin_unlock(&info->unused_bgs_lock); + spin_lock(&info->unused_bgs_lock); + while (!list_empty(&info->reclaim_bgs)) { + block_group = list_first_entry(&info->reclaim_bgs, + struct btrfs_block_group, + bg_list); + list_del_init(&block_group->bg_list); + btrfs_put_block_group(block_group); + } + spin_unlock(&info->unused_bgs_lock); + spin_lock(&info->block_group_cache_lock); while ((n = rb_last(&info->block_group_cache_tree)) != NULL) { block_group = rb_entry(n, struct btrfs_block_group, diff --git a/fs/btrfs/block-group.h b/fs/btrfs/block-group.h index 3ecc3372a5ce..7b927425dc71 100644 --- a/fs/btrfs/block-group.h +++ b/fs/btrfs/block-group.h @@ -264,6 +264,9 @@ int btrfs_remove_block_group(struct btrfs_trans_handle *trans, u64 group_start, struct extent_map *em); void btrfs_delete_unused_bgs(struct btrfs_fs_info *fs_info); void btrfs_mark_bg_unused(struct btrfs_block_group *bg); +void btrfs_reclaim_bgs_work(struct work_struct *work); +void btrfs_reclaim_bgs(struct btrfs_fs_info *fs_info); +void btrfs_mark_bg_to_reclaim(struct btrfs_block_group *bg); int btrfs_read_block_groups(struct btrfs_fs_info *info); int btrfs_make_block_group(struct btrfs_trans_handle *trans, u64 bytes_used, u64 type, u64 chunk_offset, u64 size); diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h index c80302564e6b..88531c1fbcdf 100644 --- a/fs/btrfs/ctree.h +++ b/fs/btrfs/ctree.h @@ -954,10 +954,14 @@ struct btrfs_fs_info { struct work_struct async_data_reclaim_work; struct work_struct preempt_reclaim_work; + /* Used to reclaim data space in the background */ + struct work_struct reclaim_bgs_work; + spinlock_t unused_bgs_lock; struct list_head unused_bgs; struct mutex unused_bg_unpin_mutex; struct mutex reclaim_bgs_lock; + struct list_head reclaim_bgs; /* Cached block sizes */ u32 nodesize; @@ -998,6 +1002,8 @@ struct btrfs_fs_info { spinlock_t treelog_bg_lock; u64 treelog_bg; + int bg_reclaim_threshold; + #ifdef CONFIG_BTRFS_FS_REF_VERIFY spinlock_t ref_verify_lock; struct rb_root block_tree; diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c index e52b89ad0a61..942d894ec175 
100644 --- a/fs/btrfs/disk-io.c +++ b/fs/btrfs/disk-io.c @@ -1898,6 +1898,13 @@ static int cleaner_kthread(void *arg) * unused block groups. */ btrfs_delete_unused_bgs(fs_info); + + /* + * Reclaim block groups in the reclaim_bgs list after we deleted + * all unused block_groups. This possibly gives us some more free + * space. + */ + btrfs_reclaim_bgs(fs_info); sleep: clear_and_wake_up_bit(BTRFS_FS_CLEANER_RUNNING, &fs_info->flags); if (kthread_should_park()) @@ -2886,6 +2893,7 @@ void btrfs_init_fs_info(struct btrfs_fs_info *fs_info) INIT_LIST_HEAD(&fs_info->space_info); INIT_LIST_HEAD(&fs_info->tree_mod_seq_list); INIT_LIST_HEAD(&fs_info->unused_bgs); + INIT_LIST_HEAD(&fs_info->reclaim_bgs); #ifdef CONFIG_BTRFS_DEBUG INIT_LIST_HEAD(&fs_info->allocated_roots); INIT_LIST_HEAD(&fs_info->allocated_ebs); @@ -2974,6 +2982,9 @@ void btrfs_init_fs_info(struct btrfs_fs_info *fs_info) fs_info->swapfile_pins = RB_ROOT; fs_info->send_in_progress = 0; + + fs_info->bg_reclaim_threshold = 75; + INIT_WORK(&fs_info->reclaim_bgs_work, btrfs_reclaim_bgs_work); } static int init_mount_fs_info(struct btrfs_fs_info *fs_info, struct super_block *sb) @@ -4332,6 +4343,8 @@ void __cold close_ctree(struct btrfs_fs_info *fs_info) cancel_work_sync(&fs_info->async_data_reclaim_work); cancel_work_sync(&fs_info->preempt_reclaim_work); + cancel_work_sync(&fs_info->reclaim_bgs_work); + /* Cancel or finish ongoing discard work */ btrfs_discard_cleanup(fs_info); diff --git a/fs/btrfs/free-space-cache.c b/fs/btrfs/free-space-cache.c index 9988decd5717..e54466fc101f 100644 --- a/fs/btrfs/free-space-cache.c +++ b/fs/btrfs/free-space-cache.c @@ -11,6 +11,7 @@ #include #include #include +#include "misc.h" #include "ctree.h" #include "free-space-cache.h" #include "transaction.h" @@ -2539,6 +2540,7 @@ int __btrfs_add_free_space(struct btrfs_fs_info *fs_info, static int __btrfs_add_free_space_zoned(struct btrfs_block_group *block_group, u64 bytenr, u64 size, bool used) { + struct btrfs_fs_info *fs_info = block_group->fs_info; struct btrfs_free_space_ctl *ctl = block_group->free_space_ctl; u64 offset = bytenr - block_group->start; u64 to_free, to_unusable; @@ -2569,8 +2571,13 @@ static int __btrfs_add_free_space_zoned(struct btrfs_block_group *block_group, } /* All the region is now unusable. 
Mark it as unused and reclaim */ - if (block_group->zone_unusable == block_group->length) + if (block_group->zone_unusable == block_group->length) { btrfs_mark_bg_unused(block_group); + } else if (block_group->zone_unusable >= + div_factor_fine(block_group->length, + fs_info->bg_reclaim_threshold)) { + btrfs_mark_bg_to_reclaim(block_group); + } return 0; } diff --git a/fs/btrfs/sysfs.c b/fs/btrfs/sysfs.c index a99d1f415a7f..436ac7b4b334 100644 --- a/fs/btrfs/sysfs.c +++ b/fs/btrfs/sysfs.c @@ -980,6 +980,40 @@ static ssize_t btrfs_read_policy_store(struct kobject *kobj, } BTRFS_ATTR_RW(, read_policy, btrfs_read_policy_show, btrfs_read_policy_store); +static ssize_t btrfs_bg_reclaim_threshold_show(struct kobject *kobj, + struct kobj_attribute *a, + char *buf) +{ + struct btrfs_fs_info *fs_info = to_fs_info(kobj); + ssize_t ret; + + ret = scnprintf(buf, PAGE_SIZE, "%d\n", fs_info->bg_reclaim_threshold); + + return ret; +} + +static ssize_t btrfs_bg_reclaim_threshold_store(struct kobject *kobj, + struct kobj_attribute *a, + const char *buf, size_t len) +{ + struct btrfs_fs_info *fs_info = to_fs_info(kobj); + int thresh; + int ret; + + ret = kstrtoint(buf, 10, &thresh); + if (ret) + return ret; + + if (thresh <= 50 || thresh > 100) + return -EINVAL; + + fs_info->bg_reclaim_threshold = thresh; + + return len; +} +BTRFS_ATTR_RW(, bg_reclaim_threshold, btrfs_bg_reclaim_threshold_show, + btrfs_bg_reclaim_threshold_store); + static const struct attribute *btrfs_attrs[] = { BTRFS_ATTR_PTR(, label), BTRFS_ATTR_PTR(, nodesize), @@ -991,6 +1025,7 @@ static const struct attribute *btrfs_attrs[] = { BTRFS_ATTR_PTR(, exclusive_operation), BTRFS_ATTR_PTR(, generation), BTRFS_ATTR_PTR(, read_policy), + BTRFS_ATTR_PTR(, bg_reclaim_threshold), NULL, }; diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c index a2a7f5ab0a3e..527324154e3e 100644 --- a/fs/btrfs/volumes.c +++ b/fs/btrfs/volumes.c @@ -3098,7 +3098,7 @@ int btrfs_remove_chunk(struct btrfs_trans_handle *trans, u64 chunk_offset) return ret; } -static int btrfs_relocate_chunk(struct btrfs_fs_info *fs_info, u64 chunk_offset) +int btrfs_relocate_chunk(struct btrfs_fs_info *fs_info, u64 chunk_offset) { struct btrfs_root *root = fs_info->chunk_root; struct btrfs_trans_handle *trans; diff --git a/fs/btrfs/volumes.h b/fs/btrfs/volumes.h index d4c3e0dd32b8..9c0d84e5ec06 100644 --- a/fs/btrfs/volumes.h +++ b/fs/btrfs/volumes.h @@ -484,6 +484,7 @@ void btrfs_describe_block_groups(u64 flags, char *buf, u32 size_buf); int btrfs_resume_balance_async(struct btrfs_fs_info *fs_info); int btrfs_recover_balance(struct btrfs_fs_info *fs_info); int btrfs_pause_balance(struct btrfs_fs_info *fs_info); +int btrfs_relocate_chunk(struct btrfs_fs_info *fs_info, u64 chunk_offset); int btrfs_cancel_balance(struct btrfs_fs_info *fs_info); int btrfs_create_uuid_tree(struct btrfs_fs_info *fs_info); int btrfs_uuid_scan_kthread(void *data); diff --git a/include/trace/events/btrfs.h b/include/trace/events/btrfs.h index 0551ea65374f..a41dd8a0c730 100644 --- a/include/trace/events/btrfs.h +++ b/include/trace/events/btrfs.h @@ -1903,6 +1903,18 @@ DEFINE_EVENT(btrfs__block_group, btrfs_add_unused_block_group, TP_ARGS(bg_cache) ); +DEFINE_EVENT(btrfs__block_group, btrfs_add_reclaim_block_group, + TP_PROTO(const struct btrfs_block_group *bg_cache), + + TP_ARGS(bg_cache) +); + +DEFINE_EVENT(btrfs__block_group, btrfs_reclaim_block_group, + TP_PROTO(const struct btrfs_block_group *bg_cache), + + TP_ARGS(bg_cache) +); + DEFINE_EVENT(btrfs__block_group, btrfs_skip_unused_block_group, 
TP_PROTO(const struct btrfs_block_group *bg_cache),
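As a closing note on the threshold check added in patch 3: the decision is
made in __btrfs_add_free_space_zoned() using div_factor_fine(). The sketch
below is an illustration only, not code from the patch; the function name and
the 256MiB figure are examples (the real block group length depends on the
device's zone size). With the default bg_reclaim_threshold of 75, a 256MiB
zoned block group is queued for reclaim once 192MiB of it is zone_unusable,
while a fully unusable block group is marked unused instead:

	#include <linux/math64.h>

	/* Mirrors div_factor_fine(length, threshold) == length * threshold / 100. */
	static bool example_should_reclaim(u64 length, u64 zone_unusable,
					   int bg_reclaim_threshold)
	{
		u64 limit = div_u64(length * bg_reclaim_threshold, 100);

		return zone_unusable < length && zone_unusable >= limit;
	}

The tunable is registered with the other per-filesystem sysfs attributes, so
it should show up as bg_reclaim_threshold in the filesystem's directory under
/sys/fs/btrfs/; per the store handler above, accepted values are 51 to 100.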