From patchwork Fri Sep 22 10:39:06 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Filipe Manana
X-Patchwork-Id: 13395524
From: fdmanana@kernel.org
To: linux-btrfs@vger.kernel.org
Subject: [PATCH 5/8] btrfs: collapse wait_on_state() to its caller wait_extent_bit()
Date: Fri, 22 Sep 2023 11:39:06 +0100
Message-Id: 
X-Mailer: git-send-email 2.34.1
In-Reply-To: 
References: 
Precedence: bulk
List-ID: 
X-Mailing-List: linux-btrfs@vger.kernel.org

From: Filipe Manana

The wait_on_state() function is very short and has a single caller, which is
wait_extent_bit(), so remove the function and put its code into the caller.

Signed-off-by: Filipe Manana
Reviewed-by: Anand Jain
---
 fs/btrfs/extent-io-tree.c | 23 ++++++++---------------
 1 file changed, 8 insertions(+), 15 deletions(-)

diff --git a/fs/btrfs/extent-io-tree.c b/fs/btrfs/extent-io-tree.c
index c939c2bc88e5..700b84fc1588 100644
--- a/fs/btrfs/extent-io-tree.c
+++ b/fs/btrfs/extent-io-tree.c
@@ -127,7 +127,7 @@ void extent_io_tree_release(struct extent_io_tree *tree)
 		/*
 		 * No need for a memory barrier here, as we are holding the tree
 		 * lock and we only change the waitqueue while holding that lock
-		 * (see wait_on_state()).
+		 * (see wait_extent_bit()).
 		 */
 		ASSERT(!waitqueue_active(&state->wq));
 		free_extent_state(state);
@@ -747,19 +747,6 @@ int __clear_extent_bit(struct extent_io_tree *tree, u64 start, u64 end,
 
 }
 
-static void wait_on_state(struct extent_io_tree *tree,
-			  struct extent_state *state)
-		__releases(tree->lock)
-		__acquires(tree->lock)
-{
-	DEFINE_WAIT(wait);
-	prepare_to_wait(&state->wq, &wait, TASK_UNINTERRUPTIBLE);
-	spin_unlock(&tree->lock);
-	schedule();
-	spin_lock(&tree->lock);
-	finish_wait(&state->wq, &wait);
-}
-
 /*
  * Wait for one or more bits to clear on a range in the state tree.
  * The range [start, end] is inclusive.
@@ -797,9 +784,15 @@ static void wait_extent_bit(struct extent_io_tree *tree, u64 start, u64 end,
 			goto out;
 
 		if (state->state & bits) {
+			DEFINE_WAIT(wait);
+
 			start = state->start;
 			refcount_inc(&state->refs);
-			wait_on_state(tree, state);
+			prepare_to_wait(&state->wq, &wait, TASK_UNINTERRUPTIBLE);
+			spin_unlock(&tree->lock);
+			schedule();
+			spin_lock(&tree->lock);
+			finish_wait(&state->wq, &wait);
 			free_extent_state(state);
 			goto again;
 		}
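
Note for readers less familiar with the pattern being open coded here: the
collapsed code drops tree->lock before sleeping and retakes it after waking,
then rechecks the state bits. Below is a minimal userspace sketch (not kernel
code, and not part of this patch) of that same drop-lock/sleep/retake-lock
idea, using pthreads; every name in it (demo_state, bits_busy, waiter,
clearer) is made up for illustration, and pthread_cond_wait() performs the
unlock/relock internally rather than via the explicit
prepare_to_wait()/schedule()/finish_wait() calls used in the patch.

/*
 * Userspace analogue of waiting for a bit to clear while holding a lock.
 * Build with: gcc demo.c -o demo -lpthread
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

struct demo_state {
	pthread_mutex_t lock;	/* plays the role of tree->lock */
	pthread_cond_t wq;	/* plays the role of state->wq */
	bool bits_busy;		/* plays the role of (state->state & bits) */
};

static struct demo_state ds = {
	.lock = PTHREAD_MUTEX_INITIALIZER,
	.wq = PTHREAD_COND_INITIALIZER,
	.bits_busy = true,
};

/* Waiter: roughly what wait_extent_bit() does for one state record. */
static void *waiter(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&ds.lock);
	while (ds.bits_busy) {
		/*
		 * pthread_cond_wait() releases the lock while sleeping and
		 * reacquires it before returning, mirroring the explicit
		 * spin_unlock()/schedule()/spin_lock() sequence in the patch.
		 * The loop re-checks the condition after every wakeup.
		 */
		pthread_cond_wait(&ds.wq, &ds.lock);
	}
	pthread_mutex_unlock(&ds.lock);
	printf("waiter: bits cleared, done\n");
	return NULL;
}

/* Clearer: the side that clears the bits and wakes up any waiters. */
static void *clearer(void *arg)
{
	(void)arg;
	sleep(1);
	pthread_mutex_lock(&ds.lock);
	ds.bits_busy = false;
	pthread_cond_broadcast(&ds.wq);
	pthread_mutex_unlock(&ds.lock);
	return NULL;
}

int main(void)
{
	pthread_t w, c;

	pthread_create(&w, NULL, waiter, NULL);
	pthread_create(&c, NULL, clearer, NULL);
	pthread_join(w, NULL);
	pthread_join(c, NULL);
	return 0;
}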