[v2,3/3] btrfs: Always use a cached extent_state in btrfs_lock_and_flush_ordered_range

Message ID: 20190507071924.17643-4-nborisov@suse.com (mailing list archive)
State: New, archived
Series: Ordered extent flushing refactor

Commit Message

Nikolay Borisov May 7, 2019, 7:19 a.m. UTC
In case no cached_state argument is passed to
btrfs_lock_and_flush_ordered_range, use one locally in the function.
This optimises the case when an ordered extent is found, since the
unlock function can then unlock that state directly without searching
for it again.

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
---
 fs/btrfs/ordered-data.c | 19 ++++++++++++++++---
 1 file changed, 16 insertions(+), 3 deletions(-)
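To illustrate the pattern the commit message relies on, here is a minimal
caller sketch. This is not part of the patch; the caller body is made up,
while btrfs_lock_and_flush_ordered_range and unlock_extent_cached are the
real helpers touched by this series:

	struct extent_state *cached = NULL;

	/*
	 * Lock the range and flush any ordered extents in it; the
	 * locked extent_state is remembered in 'cached'.
	 */
	btrfs_lock_and_flush_ordered_range(&inode->io_tree, inode, start, end,
					   &cached);

	/* ... work on the locked, ordered-extent-free range ... */

	/* The unlock releases the cached state directly, no tree search. */
	unlock_extent_cached(&inode->io_tree, start, end, &cached);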

Comments

David Sterba May 9, 2019, 3:35 p.m. UTC | #1
On Tue, May 07, 2019 at 10:19:24AM +0300, Nikolay Borisov wrote:
> In case no cached_state argument is passed to
> btrfs_lock_and_flush_ordered_range, use one locally in the function.
> This optimises the case when an ordered extent is found, since the
> unlock function can then unlock that state directly without searching
> for it again.

This will speed up all callers that previously did not cache the state,
right? That could bring some improvement. I wonder if the caching can
be used in more places; there are still many plain lock_extent calls.

check_can_nocow calls unlock_extent from 2 locations, so passing the
cached pointer could help in case an ordered extent is found and the
unlock thus happens outside of btrfs_lock_and_flush_ordered_range.
Elsewhere it's only the locking part, so the cache would have to be
passed along, but this might not make sense in all cases. Anyway,
that's for another patch.
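A rough sketch of that idea, for illustration only: this is not code from
the series, check_can_nocow's surrounding logic is elided, and
check_range_is_nocow is a hypothetical stand-in for the actual nocow check:

	struct extent_state *cached = NULL;
	u64 lockstart = round_down(pos, fs_info->sectorsize);
	u64 lockend = round_up(pos + *write_bytes, fs_info->sectorsize) - 1;
	int ret;

	/* Lock the range; the locked state is remembered in 'cached'. */
	btrfs_lock_and_flush_ordered_range(&inode->io_tree, inode, lockstart,
					   lockend, &cached);

	ret = check_range_is_nocow(inode, lockstart, lockend); /* stand-in */
	if (ret <= 0) {
		/* Unlock site 1: reuses the cached state, no tree search. */
		unlock_extent_cached(&inode->io_tree, lockstart, lockend,
				     &cached);
		return ret;
	}

	/* ... shrink *write_bytes to the part that can be nocow'ed ... */

	/* Unlock site 2: also hits the cached state directly. */
	unlock_extent_cached(&inode->io_tree, lockstart, lockend, &cached);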

Patch

diff --git a/fs/btrfs/ordered-data.c b/fs/btrfs/ordered-data.c
index 37401cc04a6b..df02ed25b7db 100644
--- a/fs/btrfs/ordered-data.c
+++ b/fs/btrfs/ordered-data.c
@@ -982,14 +982,27 @@ void btrfs_lock_and_flush_ordered_range(struct extent_io_tree *tree,
 					struct extent_state **cached_state)
 {
 	struct btrfs_ordered_extent *ordered;
+	struct extent_state *cache = NULL;
+	struct extent_state **cachedp = &cache;
+
+	if (cached_state)
+		cachedp = cached_state;
 
 	while (1) {
-		lock_extent_bits(tree, start, end, cached_state);
+		lock_extent_bits(tree, start, end, cachedp);
 		ordered = btrfs_lookup_ordered_range(inode, start,
 						     end - start + 1);
-		if (!ordered)
+		if (!ordered) {
+			/*
+			 * If no external cached_state has been passed then
+			 * decrement the extra ref taken for cachedp since we
+			 * aren't exposing it outside of this function
+			 */
+			if (!cached_state)
+				refcount_dec(&cache->refs);
 			break;
-		unlock_extent_cached(tree, start, end, cached_state);
+		}
+		unlock_extent_cached(tree, start, end, cachedp);
 		btrfs_start_ordered_extent(&inode->vfs_inode, ordered, 1);
 		btrfs_put_ordered_extent(ordered);
 	}
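
For completeness, the two call modes the patched function handles, as a
sketch (the caller fragments are illustrative, not from the series):

	/* Caller with its own cached_state: gets the locked state back. */
	struct extent_state *cached = NULL;

	btrfs_lock_and_flush_ordered_range(tree, inode, start, end, &cached);
	/* ... */
	unlock_extent_cached(tree, start, end, &cached);

	/*
	 * Caller passing NULL: the function now caches internally, so its
	 * own unlock in the retry loop skips the tree search, and the
	 * extra reference on the state is dropped before returning.
	 */
	btrfs_lock_and_flush_ordered_range(tree, inode, start, end, NULL);
	/* ... */
	unlock_extent(tree, start, end);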