diff mbox series

[v2,11/11] btrfs: add assertions and comment about path expectations to btrfs_cross_ref_exist()

Message ID 4d5d175428bb38d17fc2214f8f31f511298ba67f.1733929328.git.fdmanana@suse.com (mailing list archive)
State New
Series btrfs: fixes around swap file activation and cleanups

Commit Message

Filipe Manana Dec. 11, 2024, 3:05 p.m. UTC
From: Filipe Manana <fdmanana@suse.com>

We should always call check_delayed_ref() with a path that has a locked leaf
from the extent tree where the extent item is located, or where it should be
located in case it doesn't exist yet (when there's a pending, unflushed
delayed ref to create it). We need to lock any existing delayed ref head
while holding that leaf locked, in order to avoid races with delayed
reference flushing, which could otherwise make us think an extent is not
shared when it really is.

So add some assertions and a comment about such expectations to
btrfs_cross_ref_exist(), which is the only caller of check_delayed_ref().

Signed-off-by: Filipe Manana <fdmanana@suse.com>
---
 fs/btrfs/extent-tree.c | 25 +++++++++++++++++++++++++
 fs/btrfs/locking.h     |  5 +++++
 2 files changed, 30 insertions(+)

Comments

Qu Wenruo Dec. 13, 2024, 9:03 p.m. UTC | #1
On 2024/12/12 01:35, fdmanana@kernel.org wrote:
> From: Filipe Manana <fdmanana@suse.com>
>
> We should always call check_delayed_ref() with a path having a locked leaf
> from the extent tree where either the extent item is located or where it
> should be located in case it doesn't exist yet (when there's a pending
> unflushed delayed ref to do it), as we need to lock any existing delayed
> ref head while holding such leaf locked in order to avoid races with
> flushing delayed references, which could make us think an extent is not
> shared when it really is.
>
> So add some assertions and a comment about such expectations to
> btrfs_cross_ref_exist(), which is the only caller of check_delayed_ref().
>
> Signed-off-by: Filipe Manana <fdmanana@suse.com>

Just a small nitpick.
> ---
>   fs/btrfs/extent-tree.c | 25 +++++++++++++++++++++++++
>   fs/btrfs/locking.h     |  5 +++++
>   2 files changed, 30 insertions(+)
>
> diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
> index bd13059299e1..0f30f53f51b9 100644
> --- a/fs/btrfs/extent-tree.c
> +++ b/fs/btrfs/extent-tree.c
> @@ -2426,6 +2426,31 @@ int btrfs_cross_ref_exist(struct btrfs_inode *inode, u64 offset,
>   		if (ret && ret != -ENOENT)
>   			goto out;
>
> +		/*
> +		 * The path must have a locked leaf from the extent tree where
> +		 * the extent item for our extent is located, in case it exists,
> +		 * or where it should be located in case it doesn't exist yet
> +		 * because it's new and its delayed ref was not yet flushed.
> +		 * We need to lock the delayed ref head at check_delayed_ref(),
> +		 * if one exists, while holding the leaf locked in order to not
> +		 * race with delayed ref flushing, missing references and
> +		 * incorrectly reporting that the extent is not shared.
> +		 */
> +		if (IS_ENABLED(CONFIG_BTRFS_ASSERT)) {

I think we can just get rid of the CONFIG_BTRFS_ASSERT() check.

All the assert functions have already done the check anyway.
We only skip the btrfs_item_key_to_cpu() call, which shouldn't be that
costly.

Thanks,
Qu
Filipe Manana Dec. 13, 2024, 10:29 p.m. UTC | #2
On Fri, Dec 13, 2024 at 9:04 PM Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>
>
>
> On 2024/12/12 01:35, fdmanana@kernel.org wrote:
> > [...]
> > +             if (IS_ENABLED(CONFIG_BTRFS_ASSERT)) {
>
> I think we can just get rid of the CONFIG_BTRFS_ASSERT() check.
>
> All the assert functions have already done the check anyway.
> We only skip the btrfs_item_key_to_cpu() call, which shouldn't be that
> costly.

It's done this way because otherwise it may trigger warnings from compilers
or static analysis tools if assertions are disabled, since in that case we
would write to the key structure but never read from it.

Thanks.


Patch

diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
index bd13059299e1..0f30f53f51b9 100644
--- a/fs/btrfs/extent-tree.c
+++ b/fs/btrfs/extent-tree.c
@@ -2426,6 +2426,31 @@  int btrfs_cross_ref_exist(struct btrfs_inode *inode, u64 offset,
 		if (ret && ret != -ENOENT)
 			goto out;
 
+		/*
+		 * The path must have a locked leaf from the extent tree where
+		 * the extent item for our extent is located, in case it exists,
+		 * or where it should be located in case it doesn't exist yet
+		 * because it's new and its delayed ref was not yet flushed.
+		 * We need to lock the delayed ref head at check_delayed_ref(),
+		 * if one exists, while holding the leaf locked in order to not
+		 * race with delayed ref flushing, missing references and
+		 * incorrectly reporting that the extent is not shared.
+		 */
+		if (IS_ENABLED(CONFIG_BTRFS_ASSERT)) {
+			struct extent_buffer *leaf = path->nodes[0];
+
+			ASSERT(leaf != NULL);
+			btrfs_assert_tree_read_locked(leaf);
+
+			if (ret != -ENOENT) {
+				struct btrfs_key key;
+
+				btrfs_item_key_to_cpu(leaf, &key, path->slots[0]);
+				ASSERT(key.objectid == bytenr);
+				ASSERT(key.type == BTRFS_EXTENT_ITEM_KEY);
+			}
+		}
+
 		ret = check_delayed_ref(inode, path, offset, bytenr);
 	} while (ret == -EAGAIN && !path->nowait);
 
diff --git a/fs/btrfs/locking.h b/fs/btrfs/locking.h
index 35036b151bf5..c69e57ff804b 100644
--- a/fs/btrfs/locking.h
+++ b/fs/btrfs/locking.h
@@ -199,8 +199,13 @@  static inline void btrfs_assert_tree_write_locked(struct extent_buffer *eb)
 {
 	lockdep_assert_held_write(&eb->lock);
 }
+static inline void btrfs_assert_tree_read_locked(struct extent_buffer *eb)
+{
+	lockdep_assert_held_read(&eb->lock);
+}
 #else
 static inline void btrfs_assert_tree_write_locked(struct extent_buffer *eb) { }
+static inline void btrfs_assert_tree_read_locked(struct extent_buffer *eb) { }
 #endif
 
 void btrfs_unlock_up_safe(struct btrfs_path *path, int level);