
[1/2] btrfs: drop radix-tree preload from btrfs_get_or_create_delayed_node()

Message ID b2e1e88e405ef7e0fbaf9e9aa667c95265f0aa68.1701384168.git.dsterba@suse.com (mailing list archive)
State New, archived
Series Convert btrfs_root::delayed_nodes_tree to xarray

Commit Message

David Sterba Nov. 30, 2023, 10:49 p.m. UTC
This is preparatory work for the conversion of delayed_nodes_tree to an
xarray. The preload interface has no equivalent in the xarray API. Its
benefit is an early allocation outside of a spinlock, with less strict
GFP flags.
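
For reference, this is a minimal sketch of the preload idiom being
removed (as it appears in btrfs_get_or_create_delayed_node() before
this patch):

	/* Preallocate per-CPU radix tree nodes; may sleep with GFP_NOFS. */
	ret = radix_tree_preload(GFP_NOFS);
	if (ret) {
		kmem_cache_free(delayed_node_cache, node);
		return ERR_PTR(ret);
	}

	spin_lock(&root->inode_lock);
	/* Insertion consumes the preloaded nodes, so it cannot fail with ENOMEM. */
	ret = radix_tree_insert(&root->delayed_nodes_tree, ino, node);
	spin_unlock(&root->inode_lock);
	/* Preload disables preemption on success; this re-enables it. */
	radix_tree_preload_end();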

Without it we rely on GFP_ATOMIC, which is set for the structure at
initialization time. In order to bring back the less strict flags we'd
need to convert btrfs_root::inode_lock to a mutex, but that is a more
significant change and should be done separately later.
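
For context, the tree is initialized with GFP_ATOMIC (currently in
disk-io.c), which is what insertions fall back to once the preload is
gone. A rough illustration of where this leads after the xarray
conversion, keeping the current field name purely for the sketch (the
actual conversion is in the next patch of the series and may differ):

	/* Tree set up with GFP_ATOMIC, so un-preloaded inserts allocate atomically. */
	INIT_RADIX_TREE(&root->delayed_nodes_tree, GFP_ATOMIC);

	/* Possible xarray form: allocation happens inside xa_insert() under the lock. */
	spin_lock(&root->inode_lock);
	ret = xa_insert(&root->delayed_nodes_tree, ino, node, GFP_ATOMIC);
	spin_unlock(&root->inode_lock);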

Signed-off-by: David Sterba <dsterba@suse.com>
---
 fs/btrfs/delayed-inode.c | 14 ++++----------
 1 file changed, 4 insertions(+), 10 deletions(-)

Patch

diff --git a/fs/btrfs/delayed-inode.c b/fs/btrfs/delayed-inode.c
index 91159dd7355b..c9c4a53048a1 100644
--- a/fs/btrfs/delayed-inode.c
+++ b/fs/btrfs/delayed-inode.c
@@ -134,23 +134,17 @@  static struct btrfs_delayed_node *btrfs_get_or_create_delayed_node(
 	/* cached in the btrfs inode and can be accessed */
 	refcount_set(&node->refs, 2);
 
-	ret = radix_tree_preload(GFP_NOFS);
-	if (ret) {
-		kmem_cache_free(delayed_node_cache, node);
-		return ERR_PTR(ret);
-	}
-
 	spin_lock(&root->inode_lock);
 	ret = radix_tree_insert(&root->delayed_nodes_tree, ino, node);
-	if (ret == -EEXIST) {
+	if (ret < 0) {
 		spin_unlock(&root->inode_lock);
 		kmem_cache_free(delayed_node_cache, node);
-		radix_tree_preload_end();
-		goto again;
+		if (ret == -EEXIST)
+			goto again;
+		return ERR_PTR(ret);
 	}
 	btrfs_inode->delayed_node = node;
 	spin_unlock(&root->inode_lock);
-	radix_tree_preload_end();
 
 	return node;
 }