
[5/6] btrfs: simplify waiting loop in btrfs_tree_lock

Message ID cc392a65b45fc90672c8e1ebf5f81cc96254a964.1548264220.git.dsterba@suse.com (mailing list archive)
State New, archived
Series Extent buffer locking cleanups, part 1

Commit Message

David Sterba Jan. 23, 2019, 5:38 p.m. UTC
Currently, the number of blocking readers and writers is checked and, if
there are any, we wait and retry taking the lock. There's some
duplication before the branches go back to the 'again' label, e.g.
calling wait_event on blocking_readers twice.

The sequence being transformed (also sketched as code after the list):

loop:
* wait for readers
* wait for writers
* write_lock
* check readers, unlock and wait for readers, loop
* check writers, unlock and wait for writers, loop
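
For reference, a sketch of the pre-patch loop, reconstructed from the
removed lines and context of the hunk below (the again: label itself is
just above the quoted context, so its exact placement is approximate):

again:
	/* wait until no blocking readers or writers remain */
	wait_event(eb->read_lock_wq, atomic_read(&eb->blocking_readers) == 0);
	wait_event(eb->write_lock_wq, atomic_read(&eb->blocking_writers) == 0);
	write_lock(&eb->lock);
	if (atomic_read(&eb->blocking_readers)) {
		/* duplicated wait: unlock, wait for readers again, retry */
		write_unlock(&eb->lock);
		wait_event(eb->read_lock_wq,
			   atomic_read(&eb->blocking_readers) == 0);
		goto again;
	}
	if (atomic_read(&eb->blocking_writers)) {
		/* same pattern duplicated for writers */
		write_unlock(&eb->lock);
		wait_event(eb->write_lock_wq,
			   atomic_read(&eb->blocking_writers) == 0);
		goto again;
	}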

The new sequence is not exactly the same due to the simplification; for
readers it's slightly faster. For writers, the original code does

* wait for writers
* (loop) wait for readers
*        wait for writers -- again

while the new code goes directly to the reader check. This should behave
the same on a contended lock with multiple writers and readers, but can
reduce the number of times we wait on something (see the sketch below).
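
A sketch of the loop after this patch, i.e. the added lines of the hunk
together with the unchanged context (again assuming the again: label sits
just above the quoted context):

again:
	wait_event(eb->read_lock_wq, atomic_read(&eb->blocking_readers) == 0);
	wait_event(eb->write_lock_wq, atomic_read(&eb->blocking_writers) == 0);
	write_lock(&eb->lock);
	if (atomic_read(&eb->blocking_readers) ||
	    atomic_read(&eb->blocking_writers)) {
		/* single unlock-and-retry path for both readers and writers */
		write_unlock(&eb->lock);
		goto again;
	}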

Signed-off-by: David Sterba <dsterba@suse.com>
---
 fs/btrfs/locking.c | 11 ++---------
 1 file changed, 2 insertions(+), 9 deletions(-)

Comments

Johannes Thumshirn Jan. 24, 2019, 9:08 a.m. UTC | #1
Looks good,
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>

Patch

diff --git a/fs/btrfs/locking.c b/fs/btrfs/locking.c
index 7f89ca6f1fbc..82b84e4daad1 100644
--- a/fs/btrfs/locking.c
+++ b/fs/btrfs/locking.c
@@ -237,16 +237,9 @@  void btrfs_tree_lock(struct extent_buffer *eb)
 	wait_event(eb->read_lock_wq, atomic_read(&eb->blocking_readers) == 0);
 	wait_event(eb->write_lock_wq, atomic_read(&eb->blocking_writers) == 0);
 	write_lock(&eb->lock);
-	if (atomic_read(&eb->blocking_readers)) {
+	if (atomic_read(&eb->blocking_readers) ||
+	    atomic_read(&eb->blocking_writers)) {
 		write_unlock(&eb->lock);
-		wait_event(eb->read_lock_wq,
-			   atomic_read(&eb->blocking_readers) == 0);
-		goto again;
-	}
-	if (atomic_read(&eb->blocking_writers)) {
-		write_unlock(&eb->lock);
-		wait_event(eb->write_lock_wq,
-			   atomic_read(&eb->blocking_writers) == 0);
 		goto again;
 	}
 	WARN_ON(atomic_read(&eb->spinning_writers));