
[4/8] btrfs: remove pointless memory barrier from extent_io_tree_release()

Message ID f7f89af6619fc4ba4f98c7654496c4a4c13445b9.1695333278.git.fdmanana@suse.com
State New, archived
Series btrfs: some speedups for io tree operations and cleanups

Commit Message

Filipe Manana Sept. 22, 2023, 10:39 a.m. UTC
From: Filipe Manana <fdmanana@suse.com>

The memory barrier at extent_io_tree_release() is pointless because:

1) We have just called spin_lock() and that implies a memory barrier;

2) We only change the waitqueue of an extent state record while holding
   the tree lock - see wait_on_state()

So remove the memory barrier.

Signed-off-by: Filipe Manana <fdmanana@suse.com>
---
 fs/btrfs/extent-io-tree.c | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

Comments

David Sterba Sept. 29, 2023, 3:25 p.m. UTC | #1
On Fri, Sep 22, 2023 at 11:39:05AM +0100, fdmanana@kernel.org wrote:
> From: Filipe Manana <fdmanana@suse.com>
> 
> The memory barrier at extent_io_tree_release() is pointless because:
> 
> 1) We have just called spin_lock() and that implies a memory barrier;

A spin lock is not a full memory barrier, only half of one (it orders
updates in one direction); a full memory barrier is implied by a
spin_unlock/spin_lock sequence, so I think this reasoning is a bit
imprecise.

> 
> 2) We only change the waitqueue of an extent state record while holding
>    the tree lock - see wait_on_state()

This looks like the real reason why it can be removed. What
waitqueue_active() cares about are updates that must not be lost between
CPUs before the wakeup.

Here the updates under the spin lock mean that there's always an
unlock/lock sequence: one thread does the updates (and unlocks), then
the other one (first) locks and then checks.

> So remove the memory barrier.

Yeah, it's redundant, I would not call it pointless because there's a
legit reason in connection with waitqueue_active().

I'll reword the changelog along the reasoning above.

Patch

diff --git a/fs/btrfs/extent-io-tree.c b/fs/btrfs/extent-io-tree.c
index 033544f79e2b..c939c2bc88e5 100644
--- a/fs/btrfs/extent-io-tree.c
+++ b/fs/btrfs/extent-io-tree.c
@@ -115,12 +115,6 @@  void extent_io_tree_init(struct btrfs_fs_info *fs_info,
 void extent_io_tree_release(struct extent_io_tree *tree)
 {
 	spin_lock(&tree->lock);
-	/*
-	 * Do a single barrier for the waitqueue_active check here, the state
-	 * of the waitqueue should not change once extent_io_tree_release is
-	 * called.
-	 */
-	smp_mb();
 	while (!RB_EMPTY_ROOT(&tree->state)) {
 		struct rb_node *node;
 		struct extent_state *state;
@@ -130,6 +124,11 @@  void extent_io_tree_release(struct extent_io_tree *tree)
 		rb_erase(&state->rb_node, &tree->state);
 		RB_CLEAR_NODE(&state->rb_node);
 		ASSERT(!(state->state & EXTENT_LOCKED));
+		/*
+		 * No need for a memory barrier here, as we are holding the tree
+		 * lock and we only change the waitqueue while holding that lock
+		 * (see wait_on_state()).
+		 */
 		ASSERT(!waitqueue_active(&state->wq));
 		free_extent_state(state);