From patchwork Tue Aug 16 23:12:15 2022
X-Patchwork-Submitter: Omar Sandoval
X-Patchwork-Id: 12945348
From: Omar Sandoval
To: linux-btrfs@vger.kernel.org
Cc: kernel-team@fb.com
Subject: [PATCH 1/2] btrfs: fix space cache corruption and potential double
 allocations
Date: Tue, 16 Aug 2022 16:12:15 -0700
Message-Id: <9ee45db86433bb8e4d7daff35502db241c69ad16.1660690698.git.osandov@fb.com>
X-Mailing-List: linux-btrfs@vger.kernel.org

When testing space_cache v2 on a large set of machines,
we encountered a few symptoms:

1. "unable to add free space :-17" (EEXIST) errors.
2. Missing free space info items, sometimes caught with a "missing free
   space info for X" error.
3. Double-accounted space: ranges that were allocated in the extent
   tree and also marked as free in the free space tree, ranges that
   were marked as allocated twice in the extent tree, or ranges that
   were marked as free twice in the free space tree. If the latter made
   it onto disk, the next reboot would hit the BUG_ON() in
   add_new_free_space().
4. On some hosts with no on-disk corruption or error messages, the
   in-memory space cache (dumped with drgn) disagreed with the free
   space tree.

All of these symptoms have the same underlying cause: a race between
caching the free space for a block group and returning free space to
the in-memory space cache for pinned extents causes us to double-add a
free range to the space cache. This race exists when free space is
cached from the free space tree (space_cache=v2) or the extent tree
(nospace_cache, or space_cache=v1 if the cache needs to be
regenerated). struct btrfs_block_group::last_byte_to_unpin and struct
btrfs_block_group::progress are supposed to protect against this race,
but commit d0c2f4fa555e ("btrfs: make concurrent fsyncs wait less when
waiting for a transaction commit") subtly broke this by allowing
multiple transactions to be unpinning extents at the same time.

Specifically, the race is as follows:

1. An extent is deleted from an uncached block group in transaction A.
2. btrfs_commit_transaction() is called for transaction A.
3. btrfs_run_delayed_refs() -> __btrfs_free_extent() runs the delayed
   ref for the deleted extent.
4. __btrfs_free_extent() -> do_free_extent_accounting() ->
   add_to_free_space_tree() adds the deleted extent back to the free
   space tree.
5. do_free_extent_accounting() -> btrfs_update_block_group() ->
   btrfs_cache_block_group() queues up the block group to get cached.
   block_group->progress is set to block_group->start.
6. btrfs_commit_transaction() for transaction A calls
   switch_commit_roots(). It sets block_group->last_byte_to_unpin to
   block_group->progress, which is block_group->start because the block
   group hasn't been cached yet.
7. The caching thread gets to our block group. Since the commit roots
   were already switched, load_free_space_tree() sees the deleted
   extent as free and adds it to the space cache. It finishes caching
   and sets block_group->progress to U64_MAX.
8. btrfs_commit_transaction() advances transaction A to
   TRANS_STATE_SUPER_COMMITTED.
9. fsync calls btrfs_commit_transaction() for transaction B. Since
   transaction A is already in TRANS_STATE_SUPER_COMMITTED and the
   commit is for fsync, it advances.
10. btrfs_commit_transaction() for transaction B calls
    switch_commit_roots(). This time, the block group has already been
    cached, so it sets block_group->last_byte_to_unpin to U64_MAX.
11. btrfs_commit_transaction() for transaction A calls
    btrfs_finish_extent_commit(), which calls unpin_extent_range() for
    the deleted extent. It sees last_byte_to_unpin set to U64_MAX (by
    transaction B!), so it adds the deleted extent to the space cache
    again!

This explains all of our symptoms above:

* If the sequence of events is exactly as described above, when the
  free space is re-added in step 11, it will fail with EEXIST.
* If another thread reallocates the deleted extent in between steps 7
  and 11, then step 11 will silently re-add that space to the space
  cache as free even though it is actually allocated.
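To make the interleaving easier to follow, here is a minimal userspace
model of the race (illustrative only, not kernel code: the field and
function names mirror the kernel structures described above, but the
program, its constants, and its simplified logic are invented for this
sketch):

/* Hypothetical userspace model of the race described above. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define U64_MAX UINT64_MAX

/* Stand-ins for the btrfs_block_group/btrfs_caching_ctl members. */
static uint64_t last_byte_to_unpin;
static uint64_t progress;
static bool cached;

/* Steps 6 and 10: each committing transaction updates the shared field. */
static void switch_commit_roots(const char *trans)
{
	last_byte_to_unpin = cached ? U64_MAX : progress;
	printf("%s: last_byte_to_unpin = 0x%jx\n", trans,
	       (uintmax_t)last_byte_to_unpin);
}

/* Step 7: the caching thread adds the deleted extent as free space. */
static void caching_thread(uint64_t start, uint64_t len)
{
	printf("caching thread: add free range [0x%jx, 0x%jx)\n",
	       (uintmax_t)start, (uintmax_t)(start + len));
	cached = true;
	progress = U64_MAX;
}

/* Step 11: transaction A unpins based on the value transaction B set. */
static void unpin_extent_range(uint64_t start, uint64_t len)
{
	if (start < last_byte_to_unpin)
		printf("transaction A: add free range [0x%jx, 0x%jx) again!\n",
		       (uintmax_t)start, (uintmax_t)(start + len));
}

int main(void)
{
	uint64_t block_group_start = 0x40000000;
	uint64_t extent_start = 0x40100000, extent_len = 0x4000;

	progress = block_group_start;			/* step 5 */
	switch_commit_roots("transaction A");		/* step 6 */
	caching_thread(extent_start, extent_len);	/* step 7 */
	switch_commit_roots("transaction B");		/* step 10 */
	unpin_extent_range(extent_start, extent_len);	/* step 11 */
	return 0;
}

Removing the transaction B call leaves last_byte_to_unpin at the block
group start, and the step 11 check correctly suppresses the re-add;
that is exactly the protection that commit d0c2f4fa555e defeated by
letting a second transaction update the field first.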
  Then, if that space is allocated *again*, the free space tree will be
  corrupted (namely, the wrong item will be deleted).
* If we don't catch this free space tree corruption, it will continue
  to get worse as extents are deleted and reallocated.

The v1 space_cache is synchronously loaded when an extent is deleted
(btrfs_update_block_group() with alloc=0 calls btrfs_cache_block_group()
with load_cache_only=1), so it is not normally affected by this bug.
However, as noted above, if we fail to load the space cache, we will
fall back to caching from the extent tree and may hit this bug.

The easiest fix for this race is to also make caching from the free
space tree or extent tree synchronous. Josef tested this and found no
performance regressions.

A few extra changes fall out of this change. Namely, this fix does the
following, with step 2 being the crucial fix:

1. Factor btrfs_caching_ctl_wait_done() out of
   btrfs_wait_block_group_cache_done() to allow waiting on a
   caching_ctl that we already hold a reference to.
2. Change the call in btrfs_cache_block_group() of
   btrfs_wait_space_cache_v1_finished() to
   btrfs_caching_ctl_wait_done(), which makes us wait regardless of the
   space_cache option.
3. Delete the now unused btrfs_wait_space_cache_v1_finished() and
   space_cache_v1_done().
4. Change btrfs_cache_block_group()'s `int load_cache_only` parameter
   to `bool wait` to more accurately describe its new meaning.
5. Change a few callers which had a separate call to
   btrfs_wait_block_group_cache_done() to use wait = true instead (see
   the before/after sketch after this list).
6. Make btrfs_wait_block_group_cache_done() static now that it's not
   used outside of block-group.c anymore.
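To illustrate changes 2, 4, and 5, here is the calling convention
before and after this patch, condensed from the hunks below (error
handling elided):

	/* Before: kick off caching, then wait in a separate call. */
	btrfs_cache_block_group(cache, 1);
	ret = btrfs_wait_block_group_cache_done(cache);

	/* After: a single call that caches and waits until done. */
	ret = btrfs_cache_block_group(cache, true);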
Fixes: d0c2f4fa555e ("btrfs: make concurrent fsyncs wait less when waiting for a transaction commit")
Signed-off-by: Omar Sandoval
Reviewed-by: Filipe Manana
---
 fs/btrfs/block-group.c | 36 ++++++++++++++----------------------
 fs/btrfs/block-group.h |  3 +--
 fs/btrfs/extent-tree.c | 30 ++++++------------------------
 3 files changed, 21 insertions(+), 48 deletions(-)

diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
index c8162b8d85a2..1af6fc395a52 100644
--- a/fs/btrfs/block-group.c
+++ b/fs/btrfs/block-group.c
@@ -440,33 +440,26 @@ void btrfs_wait_block_group_cache_progress(struct btrfs_block_group *cache,
 	btrfs_put_caching_control(caching_ctl);
 }
 
-int btrfs_wait_block_group_cache_done(struct btrfs_block_group *cache)
+static int btrfs_caching_ctl_wait_done(struct btrfs_block_group *cache,
+				       struct btrfs_caching_control *caching_ctl)
+{
+	wait_event(caching_ctl->wait, btrfs_block_group_done(cache));
+	return cache->cached == BTRFS_CACHE_ERROR ? -EIO : 0;
+}
+
+static int btrfs_wait_block_group_cache_done(struct btrfs_block_group *cache)
 {
 	struct btrfs_caching_control *caching_ctl;
-	int ret = 0;
+	int ret;
 
 	caching_ctl = btrfs_get_caching_control(cache);
 	if (!caching_ctl)
 		return (cache->cached == BTRFS_CACHE_ERROR) ? -EIO : 0;
-
-	wait_event(caching_ctl->wait, btrfs_block_group_done(cache));
-	if (cache->cached == BTRFS_CACHE_ERROR)
-		ret = -EIO;
+	ret = btrfs_caching_ctl_wait_done(cache, caching_ctl);
 	btrfs_put_caching_control(caching_ctl);
 	return ret;
 }
 
-static bool space_cache_v1_done(struct btrfs_block_group *cache)
-{
-	bool ret;
-
-	spin_lock(&cache->lock);
-	ret = cache->cached != BTRFS_CACHE_FAST;
-	spin_unlock(&cache->lock);
-
-	return ret;
-}
-
 #ifdef CONFIG_BTRFS_DEBUG
 static void fragment_free_space(struct btrfs_block_group *block_group)
 {
@@ -744,9 +737,8 @@ static noinline void caching_thread(struct btrfs_work *work)
 	btrfs_put_block_group(block_group);
 }
 
-int btrfs_cache_block_group(struct btrfs_block_group *cache, int load_cache_only)
+int btrfs_cache_block_group(struct btrfs_block_group *cache, bool wait)
 {
-	DEFINE_WAIT(wait);
 	struct btrfs_fs_info *fs_info = cache->fs_info;
 	struct btrfs_caching_control *caching_ctl = NULL;
 	int ret = 0;
@@ -794,8 +786,8 @@ int btrfs_cache_block_group(struct btrfs_block_group *cache, int load_cache_only
 	btrfs_queue_work(fs_info->caching_workers, &caching_ctl->work);
 out:
-	if (load_cache_only && caching_ctl)
-		wait_event(caching_ctl->wait, space_cache_v1_done(cache));
+	if (wait && caching_ctl)
+		ret = btrfs_caching_ctl_wait_done(cache, caching_ctl);
 	if (caching_ctl)
 		btrfs_put_caching_control(caching_ctl);
 
@@ -3288,7 +3280,7 @@ int btrfs_update_block_group(struct btrfs_trans_handle *trans,
 	 * space back to the block group, otherwise we will leak space.
 	 */
 	if (!alloc && !btrfs_block_group_done(cache))
-		btrfs_cache_block_group(cache, 1);
+		btrfs_cache_block_group(cache, true);
 
 	byte_in_group = bytenr - cache->start;
 	WARN_ON(byte_in_group > cache->length);
diff --git a/fs/btrfs/block-group.h b/fs/btrfs/block-group.h
index b5c8425839dc..9dba28bb1806 100644
--- a/fs/btrfs/block-group.h
+++ b/fs/btrfs/block-group.h
@@ -267,9 +267,8 @@ void btrfs_dec_nocow_writers(struct btrfs_block_group *bg);
 void btrfs_wait_nocow_writers(struct btrfs_block_group *bg);
 void btrfs_wait_block_group_cache_progress(struct btrfs_block_group *cache,
 					   u64 num_bytes);
-int btrfs_wait_block_group_cache_done(struct btrfs_block_group *cache);
 int btrfs_cache_block_group(struct btrfs_block_group *cache,
-			    int load_cache_only);
+			    bool wait);
 void btrfs_put_caching_control(struct btrfs_caching_control *ctl);
 struct btrfs_caching_control *btrfs_get_caching_control(
 		struct btrfs_block_group *cache);
diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
index 7bb05d49809b..86ac953c69ac 100644
--- a/fs/btrfs/extent-tree.c
+++ b/fs/btrfs/extent-tree.c
@@ -2551,17 +2551,10 @@ int btrfs_pin_extent_for_log_replay(struct btrfs_trans_handle *trans,
 		return -EINVAL;
 
 	/*
-	 * pull in the free space cache (if any) so that our pin
-	 * removes the free space from the cache.  We have load_only set
-	 * to one because the slow code to read in the free extents does check
-	 * the pinned extents.
+	 * Fully cache the free space first so that our pin removes the free space
+	 * from the cache.
 	 */
-	btrfs_cache_block_group(cache, 1);
-	/*
-	 * Make sure we wait until the cache is completely built in case it is
-	 * missing or is invalid and therefore needs to be rebuilt.
-	 */
-	ret = btrfs_wait_block_group_cache_done(cache);
+	ret = btrfs_cache_block_group(cache, true);
 	if (ret)
 		goto out;
 
@@ -2584,12 +2577,7 @@ static int __exclude_logged_extent(struct btrfs_fs_info *fs_info,
 	if (!block_group)
 		return -EINVAL;
 
-	btrfs_cache_block_group(block_group, 1);
-	/*
-	 * Make sure we wait until the cache is completely built in case it is
-	 * missing or is invalid and therefore needs to be rebuilt.
-	 */
-	ret = btrfs_wait_block_group_cache_done(block_group);
+	ret = btrfs_cache_block_group(block_group, true);
 	if (ret)
 		goto out;
 
@@ -4400,7 +4388,7 @@ static noinline int find_free_extent(struct btrfs_root *root,
 		ffe_ctl->cached = btrfs_block_group_done(block_group);
 		if (unlikely(!ffe_ctl->cached)) {
 			ffe_ctl->have_caching_bg = true;
-			ret = btrfs_cache_block_group(block_group, 0);
+			ret = btrfs_cache_block_group(block_group, false);
 
 			/*
 			 * If we get ENOMEM here or something else we want to
@@ -6170,13 +6158,7 @@ int btrfs_trim_fs(struct btrfs_fs_info *fs_info, struct fstrim_range *range)
 
 		if (end - start >= range->minlen) {
 			if (!btrfs_block_group_done(cache)) {
-				ret = btrfs_cache_block_group(cache, 0);
-				if (ret) {
-					bg_failed++;
-					bg_ret = ret;
-					continue;
-				}
-				ret = btrfs_wait_block_group_cache_done(cache);
+				ret = btrfs_cache_block_group(cache, true);
 				if (ret) {
 					bg_failed++;
 					bg_ret = ret;

From patchwork Tue Aug 16 23:12:16 2022
X-Patchwork-Submitter: Omar Sandoval
X-Patchwork-Id: 12945347
From: Omar Sandoval
To: linux-btrfs@vger.kernel.org
Cc: kernel-team@fb.com
Subject: [PATCH 2/2] btrfs: get rid of block group caching progress logic
Date: Tue, 16 Aug 2022 16:12:16 -0700
Message-Id: <1ac68be51384f9cc2433bb7979f4cda563e72976.1660690698.git.osandov@fb.com>
X-Mailing-List: linux-btrfs@vger.kernel.org

struct btrfs_caching_ctl::progress and struct
btrfs_block_group::last_byte_to_unpin were previously needed to ensure
that unpin_extent_range() didn't return a range to the free space cache
before the caching thread had a chance to cache that range. However,
the previous commit made it so that we always synchronously cache the
block group at the time that we pin the extent, so this machinery is no
longer necessary.
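The net effect on unpin_extent_range(), condensed from the
extent-tree.c hunk below (surrounding loop elided):

	/* Before: only return free space up to last_byte_to_unpin,
	 * under the commit_root_sem, to avoid racing with the caching
	 * thread.
	 */
	down_read(&fs_info->commit_root_sem);
	if (start < cache->last_byte_to_unpin && return_free_space) {
		u64 add_len = min(len, cache->last_byte_to_unpin - start);

		btrfs_add_free_space(cache, start, add_len);
	}
	up_read(&fs_info->commit_root_sem);

	/* After: caching finished before the extent was pinned, so the
	 * whole range can be returned unconditionally.
	 */
	if (return_free_space)
		btrfs_add_free_space(cache, start, len);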
Signed-off-by: Omar Sandoval
Reviewed-by: Filipe Manana
---
 fs/btrfs/block-group.c     | 13 ------------
 fs/btrfs/block-group.h     |  2 --
 fs/btrfs/extent-tree.c     |  9 ++-------
 fs/btrfs/free-space-tree.c |  8 --------
 fs/btrfs/transaction.c     | 41 --------------------------------------
 fs/btrfs/zoned.c           |  1 -
 6 files changed, 2 insertions(+), 72 deletions(-)

diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
index 1af6fc395a52..68992ad9ff2a 100644
--- a/fs/btrfs/block-group.c
+++ b/fs/btrfs/block-group.c
@@ -593,8 +593,6 @@ static int load_extent_tree_free(struct btrfs_caching_control *caching_ctl)
 
 		if (need_resched() ||
 		    rwsem_is_contended(&fs_info->commit_root_sem)) {
-			if (wakeup)
-				caching_ctl->progress = last;
 			btrfs_release_path(path);
 			up_read(&fs_info->commit_root_sem);
 			mutex_unlock(&caching_ctl->mutex);
@@ -618,9 +616,6 @@ static int load_extent_tree_free(struct btrfs_caching_control *caching_ctl)
 			key.objectid = last;
 			key.offset = 0;
 			key.type = BTRFS_EXTENT_ITEM_KEY;
-
-			if (wakeup)
-				caching_ctl->progress = last;
 			btrfs_release_path(path);
 			goto next;
 		}
@@ -655,7 +650,6 @@ static int load_extent_tree_free(struct btrfs_caching_control *caching_ctl)
 
 	total_found += add_new_free_space(block_group, last,
 				block_group->start + block_group->length);
-	caching_ctl->progress = (u64)-1;
 
 out:
 	btrfs_free_path(path);
@@ -725,8 +719,6 @@ static noinline void caching_thread(struct btrfs_work *work)
 	}
 #endif
 
-	caching_ctl->progress = (u64)-1;
-
 	up_read(&fs_info->commit_root_sem);
 	btrfs_free_excluded_extents(block_group);
 	mutex_unlock(&caching_ctl->mutex);
@@ -755,7 +747,6 @@ int btrfs_cache_block_group(struct btrfs_block_group *cache, bool wait)
 	mutex_init(&caching_ctl->mutex);
 	init_waitqueue_head(&caching_ctl->wait);
 	caching_ctl->block_group = cache;
-	caching_ctl->progress = cache->start;
 	refcount_set(&caching_ctl->count, 2);
 	btrfs_init_work(&caching_ctl->work, caching_thread, NULL, NULL);
 
@@ -2076,11 +2067,9 @@ static int read_one_block_group(struct btrfs_fs_info *info,
 		/* Should not have any excluded extents. Just in case, though. */
 		btrfs_free_excluded_extents(cache);
 	} else if (cache->length == cache->used) {
-		cache->last_byte_to_unpin = (u64)-1;
 		cache->cached = BTRFS_CACHE_FINISHED;
 		btrfs_free_excluded_extents(cache);
 	} else if (cache->used == 0) {
-		cache->last_byte_to_unpin = (u64)-1;
 		cache->cached = BTRFS_CACHE_FINISHED;
 		add_new_free_space(cache, cache->start,
 				   cache->start + cache->length);
@@ -2136,7 +2125,6 @@ static int fill_dummy_bgs(struct btrfs_fs_info *fs_info)
 		/* Fill dummy cache as FULL */
 		bg->length = em->len;
 		bg->flags = map->type;
-		bg->last_byte_to_unpin = (u64)-1;
 		bg->cached = BTRFS_CACHE_FINISHED;
 		bg->used = em->len;
 		bg->flags = map->type;
@@ -2482,7 +2470,6 @@ struct btrfs_block_group *btrfs_make_block_group(struct btrfs_trans_handle *tran
 	set_free_space_tree_thresholds(cache);
 	cache->used = bytes_used;
 	cache->flags = type;
-	cache->last_byte_to_unpin = (u64)-1;
 	cache->cached = BTRFS_CACHE_FINISHED;
 	cache->global_root_id = calculate_global_root_id(fs_info, cache->start);
 
diff --git a/fs/btrfs/block-group.h b/fs/btrfs/block-group.h
index 9dba28bb1806..59a86e82a28e 100644
--- a/fs/btrfs/block-group.h
+++ b/fs/btrfs/block-group.h
@@ -63,7 +63,6 @@ struct btrfs_caching_control {
 	wait_queue_head_t wait;
 	struct btrfs_work work;
 	struct btrfs_block_group *block_group;
-	u64 progress;
 	refcount_t count;
 };
 
@@ -115,7 +114,6 @@ struct btrfs_block_group {
 	/* Cache tracking stuff */
 	int cached;
 	struct btrfs_caching_control *caching_ctl;
-	u64 last_byte_to_unpin;
 
 	struct btrfs_space_info *space_info;
 
diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
index 86ac953c69ac..bcd0e72cded3 100644
--- a/fs/btrfs/extent-tree.c
+++ b/fs/btrfs/extent-tree.c
@@ -2686,13 +2686,8 @@ static int unpin_extent_range(struct btrfs_fs_info *fs_info,
 		len = cache->start + cache->length - start;
 		len = min(len, end + 1 - start);
 
-		down_read(&fs_info->commit_root_sem);
-		if (start < cache->last_byte_to_unpin && return_free_space) {
-			u64 add_len = min(len, cache->last_byte_to_unpin - start);
-
-			btrfs_add_free_space(cache, start, add_len);
-		}
-		up_read(&fs_info->commit_root_sem);
+		if (return_free_space)
+			btrfs_add_free_space(cache, start, len);
 
 		start += len;
 		total_unpinned += len;
diff --git a/fs/btrfs/free-space-tree.c b/fs/btrfs/free-space-tree.c
index 1bf89aa67216..367bcfcf68f5 100644
--- a/fs/btrfs/free-space-tree.c
+++ b/fs/btrfs/free-space-tree.c
@@ -1453,8 +1453,6 @@ static int load_free_space_bitmaps(struct btrfs_caching_control *caching_ctl,
 		ASSERT(key.type == BTRFS_FREE_SPACE_BITMAP_KEY);
 		ASSERT(key.objectid < end && key.objectid + key.offset <= end);
 
-		caching_ctl->progress = key.objectid;
-
 		offset = key.objectid;
 		while (offset < key.objectid + key.offset) {
 			bit = free_space_test_bit(block_group, path, offset);
@@ -1490,8 +1488,6 @@ static int load_free_space_bitmaps(struct btrfs_caching_control *caching_ctl,
 		goto out;
 	}
 
-	caching_ctl->progress = (u64)-1;
-
 	ret = 0;
 out:
 	return ret;
@@ -1531,8 +1527,6 @@ static int load_free_space_extents(struct btrfs_caching_control *caching_ctl,
 		ASSERT(key.type == BTRFS_FREE_SPACE_EXTENT_KEY);
 		ASSERT(key.objectid < end && key.objectid + key.offset <= end);
 
-		caching_ctl->progress = key.objectid;
-
 		total_found += add_new_free_space(block_group, key.objectid,
 						  key.objectid + key.offset);
 		if (total_found > CACHING_CTL_WAKE_UP) {
@@ -1552,8 +1546,6 @@ static int load_free_space_extents(struct btrfs_caching_control *caching_ctl,
 		goto out;
 	}
 
-	caching_ctl->progress = (u64)-1;
-
 	ret = 0;
 out:
 	return ret;
diff --git a/fs/btrfs/transaction.c b/fs/btrfs/transaction.c
index 6e3b2cb6a04a..4c87bf2abc14 100644
--- a/fs/btrfs/transaction.c
+++ b/fs/btrfs/transaction.c
@@ -161,7 +161,6 @@ static noinline void switch_commit_roots(struct btrfs_trans_handle *trans)
 	struct btrfs_transaction *cur_trans = trans->transaction;
 	struct btrfs_fs_info *fs_info = trans->fs_info;
 	struct btrfs_root *root, *tmp;
-	struct btrfs_caching_control *caching_ctl, *next;
 
 	/*
 	 * At this point no one can be using this transaction to modify any tree
@@ -196,46 +195,6 @@ static noinline void switch_commit_roots(struct btrfs_trans_handle *trans)
 	}
 	spin_unlock(&cur_trans->dropped_roots_lock);
 
-	/*
-	 * We have to update the last_byte_to_unpin under the commit_root_sem,
-	 * at the same time we swap out the commit roots.
-	 *
-	 * This is because we must have a real view of the last spot the caching
-	 * kthreads were while caching.  Consider the following views of the
-	 * extent tree for a block group
-	 *
-	 * commit root
-	 * +----+----+----+----+----+----+----+
-	 * |\\\\|    |\\\\|\\\\|    |\\\\|\\\\|
-	 * +----+----+----+----+----+----+----+
-	 * 0    1    2    3    4    5    6    7
-	 *
-	 * new commit root
-	 * +----+----+----+----+----+----+----+
-	 * |    |    |    |\\\\|    |    |\\\\|
-	 * +----+----+----+----+----+----+----+
-	 * 0    1    2    3    4    5    6    7
-	 *
-	 * If the cache_ctl->progress was at 3, then we are only allowed to
-	 * unpin [0,1) and [2,3], because the caching thread has already
-	 * processed those extents.  We are not allowed to unpin [5,6), because
-	 * the caching thread will re-start it's search from 3, and thus find
-	 * the hole from [4,6) to add to the free space cache.
-	 */
-	write_lock(&fs_info->block_group_cache_lock);
-	list_for_each_entry_safe(caching_ctl, next,
-				 &fs_info->caching_block_groups, list) {
-		struct btrfs_block_group *cache = caching_ctl->block_group;
-
-		if (btrfs_block_group_done(cache)) {
-			cache->last_byte_to_unpin = (u64)-1;
-			list_del_init(&caching_ctl->list);
-			btrfs_put_caching_control(caching_ctl);
-		} else {
-			cache->last_byte_to_unpin = caching_ctl->progress;
-		}
-	}
-	write_unlock(&fs_info->block_group_cache_lock);
 
 	up_write(&fs_info->commit_root_sem);
 }
diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
index 61ae58c3a354..56a147a6e571 100644
--- a/fs/btrfs/zoned.c
+++ b/fs/btrfs/zoned.c
@@ -1558,7 +1558,6 @@ void btrfs_calc_zone_unusable(struct btrfs_block_group *cache)
 	free = cache->zone_capacity - cache->alloc_offset;
 
 	/* We only need ->free_space in ALLOC_SEQ block groups */
-	cache->last_byte_to_unpin = (u64)-1;
 	cache->cached = BTRFS_CACHE_FINISHED;
 	cache->free_space_ctl->free_space = free;
 	cache->zone_unusable = unusable;