From patchwork Thu Mar 27 16:13:50 2025
X-Patchwork-Submitter: Filipe Manana
X-Patchwork-Id: 14031277
From: fdmanana@kernel.org
To: linux-btrfs@vger.kernel.org
Subject: [PATCH 1/3] btrfs: update comment for try_release_extent_state()
Date: Thu, 27 Mar 2025 16:13:50 +0000
Message-Id: <7cee4c63c1fb8bc66efda5acb6257e504f3eb053.1743004734.git.fdmanana@suse.com>

From: Filipe Manana

Drop reference to pages from the comment since the function is fully folio
aware and works regardless of how many pages are in the folio. Also while at
it, capitalize the first word and make it more explicit that release_folio
is a callback from struct address_space_operations.

Signed-off-by: Filipe Manana
---
 fs/btrfs/extent_io.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index b168dc354f20..a11b22fcd154 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -2618,9 +2618,9 @@ int extent_invalidate_folio(struct extent_io_tree *tree,
 }
 
 /*
- * a helper for release_folio, this tests for areas of the page that
- * are locked or under IO and drops the related state bits if it is safe
- * to drop the page.
+ * A helper for struct address_space_operations::release_folio, this tests for
+ * areas of the folio that are locked or under IO and drops the related state
+ * bits if it is safe to drop the folio.
  */
 static bool try_release_extent_state(struct extent_io_tree *tree,
                                      struct folio *folio)
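For context on the callback the updated comment refers to: a filesystem hands
release_folio to the VM through its struct address_space_operations, and in
btrfs that callback (btrfs_release_folio()) eventually reaches
try_release_extent_state(). The sketch below is only an illustration of that
wiring, not code from the patch; my_release_folio, my_aops and
my_folio_io_tree() are made-up names.

/*
 * Illustrative sketch of a release_folio callback that defers to
 * try_release_extent_state(). Hypothetical names throughout.
 */
static bool my_release_folio(struct folio *folio, gfp_t gfp)
{
        /* How the io_tree is looked up is filesystem specific. */
        struct extent_io_tree *tree = my_folio_io_tree(folio);

        /* Returns true if it is safe for the VM to drop the folio. */
        return try_release_extent_state(tree, folio);
}

static const struct address_space_operations my_aops = {
        .release_folio = my_release_folio,
        /* ... other callbacks ... */
};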
From patchwork Thu Mar 27 16:13:51 2025
X-Patchwork-Submitter: Filipe Manana
X-Patchwork-Id: 14031278
From: fdmanana@kernel.org
To: linux-btrfs@vger.kernel.org
Subject: [PATCH 2/3] btrfs: allow folios to be released while ordered extent is finishing
Date: Thu, 27 Mar 2025 16:13:51 +0000

From: Filipe Manana

When the release_folio callback (from struct address_space_operations) is
invoked we don't allow the folio to be released if its range is currently
locked in the inode's io_tree, as that may indicate the folio is still needed
by the task that locked the range.

However, if the range is locked because an ordered extent is finishing, then
we can safely allow the folio to be released, because ordered extent
completion doesn't need to use the folio at all.

When we are under memory pressure, the kernel starts writeback of dirty pages
(folios) with the goal of releasing them from the page cache after writeback
completes. However, this often is not possible on btrfs because:

* Once writeback completes, we queue the ordered extent completion;

* Once ordered extent completion starts, we lock the range in the inode's
  io_tree (at btrfs_finish_one_ordered());

* If the release_folio callback is called while the folio's range is locked
  in the inode's io_tree, we don't allow the folio to be released, so the
  kernel has to try to release memory elsewhere, which may result in
  triggering more writeback or in releasing other pages from the page cache
  that may be more useful to keep around for applications.

In contrast, when the release_folio callback is invoked after writeback
finishes and before ordered extent completion starts or locks the range, we
allow the folio to be released. The same applies when release_folio is
invoked after ordered extent completion has unlocked the range.

Improve on this by detecting if the range is locked for ordered extent
completion and, if it is, allowing the folio to be released. The detection is
achieved by adding a new extent flag in the io_tree that is set when the
range is locked during ordered extent completion.

Signed-off-by: Filipe Manana
---
 fs/btrfs/extent-io-tree.c | 22 +++++++++++++++++
 fs/btrfs/extent-io-tree.h |  6 +++++
 fs/btrfs/extent_io.c      | 52 +++++++++++++++++++++------------------
 fs/btrfs/inode.c          |  6 +++--
 4 files changed, 60 insertions(+), 26 deletions(-)

diff --git a/fs/btrfs/extent-io-tree.c b/fs/btrfs/extent-io-tree.c
index 13de6af279e5..14510a71a8fd 100644
--- a/fs/btrfs/extent-io-tree.c
+++ b/fs/btrfs/extent-io-tree.c
@@ -1752,6 +1752,28 @@ bool test_range_bit_exists(struct extent_io_tree *tree, u64 start, u64 end, u32
         return bitset;
 }
 
+void get_range_bits(struct extent_io_tree *tree, u64 start, u64 end, u32 *bits)
+{
+        struct extent_state *state;
+
+        *bits = 0;
+
+        spin_lock(&tree->lock);
+        state = tree_search(tree, start);
+        while (state) {
+                if (state->start > end)
+                        break;
+
+                *bits |= state->state;
+
+                if (state->end >= end)
+                        break;
+
+                state = next_state(state);
+        }
+        spin_unlock(&tree->lock);
+}
+
 /*
  * Check if the whole range [@start,@end) contains the single @bit set.
  */
diff --git a/fs/btrfs/extent-io-tree.h b/fs/btrfs/extent-io-tree.h
index 6ffef1cd37c1..e49f24151167 100644
--- a/fs/btrfs/extent-io-tree.h
+++ b/fs/btrfs/extent-io-tree.h
@@ -38,6 +38,11 @@ enum {
          * that is left for the ordered extent completion.
          */
         ENUM_BIT(EXTENT_DELALLOC_NEW),
+        /*
+         * Mark that a range is being locked for finishing an ordered extent.
+         * Used together with EXTENT_LOCKED.
+         */
+        ENUM_BIT(EXTENT_FINISHING_ORDERED),
         /*
          * When an ordered extent successfully completes for a region marked as
          * a new delalloc range, use this flag when clearing a new delalloc
@@ -166,6 +171,7 @@ void free_extent_state(struct extent_state *state);
 bool test_range_bit(struct extent_io_tree *tree, u64 start, u64 end, u32 bit,
                     struct extent_state *cached_state);
 bool test_range_bit_exists(struct extent_io_tree *tree, u64 start, u64 end, u32 bit);
+void get_range_bits(struct extent_io_tree *tree, u64 start, u64 end, u32 *bits);
 int clear_record_extent_bits(struct extent_io_tree *tree, u64 start, u64 end,
                              u32 bits, struct extent_changeset *changeset);
 int __clear_extent_bit(struct extent_io_tree *tree, u64 start, u64 end,
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index a11b22fcd154..6b9a80f9e0f5 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -2627,33 +2627,37 @@ static bool try_release_extent_state(struct extent_io_tree *tree,
 {
         u64 start = folio_pos(folio);
         u64 end = start + folio_size(folio) - 1;
-        bool ret;
+        u32 range_bits;
+        u32 clear_bits;
+        int ret;
 
-        if (test_range_bit_exists(tree, start, end, EXTENT_LOCKED)) {
-                ret = false;
-        } else {
-                u32 clear_bits = ~(EXTENT_LOCKED | EXTENT_NODATASUM |
-                                   EXTENT_DELALLOC_NEW | EXTENT_CTLBITS |
-                                   EXTENT_QGROUP_RESERVED);
-                int ret2;
+        get_range_bits(tree, start, end, &range_bits);
 
-                /*
-                 * At this point we can safely clear everything except the
-                 * locked bit, the nodatasum bit and the delalloc new bit.
-                 * The delalloc new bit will be cleared by ordered extent
-                 * completion.
-                 */
-                ret2 = __clear_extent_bit(tree, start, end, clear_bits, NULL, NULL);
+        /*
+         * We can release the folio if it's locked only for ordered extent
+         * completion, since that doesn't require using the folio.
+         */
+        if ((range_bits & EXTENT_LOCKED) &&
+            !(range_bits & EXTENT_FINISHING_ORDERED))
+                return false;
 
-                /* if clear_extent_bit failed for enomem reasons,
-                 * we can't allow the release to continue.
-                 */
-                if (ret2 < 0)
-                        ret = false;
-                else
-                        ret = true;
-        }
-        return ret;
+        clear_bits = ~(EXTENT_LOCKED | EXTENT_NODATASUM | EXTENT_DELALLOC_NEW |
+                       EXTENT_CTLBITS | EXTENT_QGROUP_RESERVED |
+                       EXTENT_FINISHING_ORDERED);
+        /*
+         * At this point we can safely clear everything except the locked,
+         * nodatasum, delalloc new and finishing ordered bits. The delalloc new
+         * bit will be cleared by ordered extent completion.
+         */
+        ret = __clear_extent_bit(tree, start, end, clear_bits, NULL, NULL);
+        /*
+         * If clear_extent_bit failed for enomem reasons, we can't allow the
+         * release to continue.
+         */
+        if (ret < 0)
+                return false;
+
+        return true;
 }
 
 /*
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index e283627c087d..469b3fd64f17 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -3132,8 +3132,10 @@ int btrfs_finish_one_ordered(struct btrfs_ordered_extent *ordered_extent)
          * depending on their current state).
          */
         if (!test_bit(BTRFS_ORDERED_NOCOW, &ordered_extent->flags)) {
-                clear_bits |= EXTENT_LOCKED;
-                lock_extent(io_tree, start, end, &cached_state);
+                clear_bits |= EXTENT_LOCKED | EXTENT_FINISHING_ORDERED;
+                __lock_extent(io_tree, start, end,
+                              EXTENT_LOCKED | EXTENT_FINISHING_ORDERED,
+                              &cached_state);
         }
 
         if (freespace_inode)
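To summarize the handshake this patch introduces, the two sides look roughly
like this (condensed from the hunks above; declarations, error handling and
the clear_bits mask are omitted):

/*
 * Ordered extent completion side, in btrfs_finish_one_ordered(): the range
 * is locked and tagged so release_folio can tell why it is locked.
 */
__lock_extent(io_tree, start, end,
              EXTENT_LOCKED | EXTENT_FINISHING_ORDERED, &cached_state);

/*
 * release_folio side, in try_release_extent_state(): only refuse the release
 * when the range is locked by someone other than ordered extent completion.
 */
get_range_bits(tree, start, end, &range_bits);
if ((range_bits & EXTENT_LOCKED) &&
    !(range_bits & EXTENT_FINISHING_ORDERED))
        return false;   /* the locker may still need the folio */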
From patchwork Thu Mar 27 16:13:52 2025
X-Patchwork-Submitter: Filipe Manana
X-Patchwork-Id: 14031279
From: fdmanana@kernel.org
To: linux-btrfs@vger.kernel.org
Subject: [PATCH 3/3] btrfs: pass a pointer to get_range_bits() to cache first search result
Date: Thu, 27 Mar 2025 16:13:52 +0000
Message-Id: <6e5df30b5e774df8bdfa24b34cceaa717ede7453.1743004734.git.fdmanana@suse.com>

From: Filipe Manana

Allow get_range_bits() to take an extent state pointer to pointer argument so
that it can cache the first extent state record found in the target range,
allowing a caller to reuse that record for subsequent operations without
doing a full tree search. Currently the only user is
try_release_extent_state(), which then calls __clear_extent_bit(), which can
use such a cached state record.

Signed-off-by: Filipe Manana
---
 fs/btrfs/extent-io-tree.c | 14 +++++++++++++-
 fs/btrfs/extent-io-tree.h |  3 ++-
 fs/btrfs/extent_io.c      | 18 +++++++++++-------
 3 files changed, 26 insertions(+), 9 deletions(-)

diff --git a/fs/btrfs/extent-io-tree.c b/fs/btrfs/extent-io-tree.c
index 14510a71a8fd..7ae24a533404 100644
--- a/fs/btrfs/extent-io-tree.c
+++ b/fs/btrfs/extent-io-tree.c
@@ -1752,14 +1752,26 @@ bool test_range_bit_exists(struct extent_io_tree *tree, u64 start, u64 end, u32
         return bitset;
 }
 
-void get_range_bits(struct extent_io_tree *tree, u64 start, u64 end, u32 *bits)
+void get_range_bits(struct extent_io_tree *tree, u64 start, u64 end, u32 *bits,
+                    struct extent_state **cached_state)
 {
         struct extent_state *state;
 
+        /*
+         * The cached state is currently mandatory and not used to start the
+         * search, only to cache the first state record found in the range.
+         */
+        ASSERT(cached_state != NULL);
+        ASSERT(*cached_state == NULL);
+
         *bits = 0;
 
         spin_lock(&tree->lock);
         state = tree_search(tree, start);
+        if (state && state->start < end) {
+                *cached_state = state;
+                refcount_inc(&state->refs);
+        }
         while (state) {
                 if (state->start > end)
                         break;
diff --git a/fs/btrfs/extent-io-tree.h b/fs/btrfs/extent-io-tree.h
index e49f24151167..cf83e094b00e 100644
--- a/fs/btrfs/extent-io-tree.h
+++ b/fs/btrfs/extent-io-tree.h
@@ -171,7 +171,8 @@ void free_extent_state(struct extent_state *state);
 bool test_range_bit(struct extent_io_tree *tree, u64 start, u64 end, u32 bit,
                     struct extent_state *cached_state);
 bool test_range_bit_exists(struct extent_io_tree *tree, u64 start, u64 end, u32 bit);
-void get_range_bits(struct extent_io_tree *tree, u64 start, u64 end, u32 *bits);
+void get_range_bits(struct extent_io_tree *tree, u64 start, u64 end, u32 *bits,
+                    struct extent_state **cached_state);
 int clear_record_extent_bits(struct extent_io_tree *tree, u64 start, u64 end,
                              u32 bits, struct extent_changeset *changeset);
 int __clear_extent_bit(struct extent_io_tree *tree, u64 start, u64 end,
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 6b9a80f9e0f5..6a6f9ded00e3 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -2625,13 +2625,15 @@ int extent_invalidate_folio(struct extent_io_tree *tree,
 static bool try_release_extent_state(struct extent_io_tree *tree,
                                      struct folio *folio)
 {
+        struct extent_state *cached_state = NULL;
         u64 start = folio_pos(folio);
         u64 end = start + folio_size(folio) - 1;
         u32 range_bits;
         u32 clear_bits;
-        int ret;
+        bool ret = false;
+        int ret2;
 
-        get_range_bits(tree, start, end, &range_bits);
+        get_range_bits(tree, start, end, &range_bits, &cached_state);
 
         /*
          * We can release the folio if it's locked only for ordered extent
@@ -2639,7 +2641,7 @@ static bool try_release_extent_state(struct extent_io_tree *tree,
          */
         if ((range_bits & EXTENT_LOCKED) &&
             !(range_bits & EXTENT_FINISHING_ORDERED))
-                return false;
+                goto out;
 
         clear_bits = ~(EXTENT_LOCKED | EXTENT_NODATASUM | EXTENT_DELALLOC_NEW |
                        EXTENT_CTLBITS | EXTENT_QGROUP_RESERVED |
@@ -2649,15 +2651,17 @@ static bool try_release_extent_state(struct extent_io_tree *tree,
          * nodatasum, delalloc new and finishing ordered bits. The delalloc new
          * bit will be cleared by ordered extent completion.
          */
-        ret = __clear_extent_bit(tree, start, end, clear_bits, NULL, NULL);
+        ret2 = __clear_extent_bit(tree, start, end, clear_bits, &cached_state, NULL);
         /*
          * If clear_extent_bit failed for enomem reasons, we can't allow the
          * release to continue.
          */
-        if (ret < 0)
-                return false;
+        if (ret2 == 0)
+                ret = true;
+out:
+        free_extent_state(cached_state);
 
-        return true;
+        return ret;
 }
 
 /*
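The caller-side pattern for the new cached_state argument, mirroring
try_release_extent_state() above: get_range_bits() takes an extra reference
on the first extent state record it finds (via refcount_inc()), the caller
may hand that record to __clear_extent_bit() so it can skip a second tree
search, and the caller must drop the reference with free_extent_state(),
which also handles the case where no record was found and cached_state
stayed NULL. The sketch below is illustrative only; the function name and
the range_bits check are placeholders, not code from the series.

/* Sketch of a get_range_bits() caller using the new cached_state argument. */
static bool my_try_release_range(struct extent_io_tree *tree, u64 start,
                                 u64 end, u32 clear_bits)
{
        struct extent_state *cached_state = NULL;
        bool ret = false;
        u32 range_bits;

        /* Also caches the first state record overlapping [start, end]. */
        get_range_bits(tree, start, end, &range_bits, &cached_state);

        /* Placeholder check; a real caller decides based on range_bits. */
        if ((range_bits & EXTENT_LOCKED) &&
            !(range_bits & EXTENT_FINISHING_ORDERED))
                goto out;

        /* Reusing the cached record lets __clear_extent_bit() skip the search. */
        if (__clear_extent_bit(tree, start, end, clear_bits, &cached_state, NULL) == 0)
                ret = true;
out:
        /* Drop the reference taken by get_range_bits(); NULL is handled. */
        free_extent_state(cached_state);
        return ret;
}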