[v4,16/68] btrfs: extent_io: add assert_spin_locked() for attach_extent_buffer_page()

Message ID 20201021062554.68132-17-wqu@suse.com
State New, archived
Series btrfs: add basic rw support for subpage sector size

Commit Message

Qu Wenruo Oct. 21, 2020, 6:25 a.m. UTC
When calling attach_extent_buffer_page(), we're either attaching
anonymous pages, called from btrfs_clone_extent_buffer(), or attaching
btree_inode pages, called from alloc_extent_buffer().

For the latter case, we should hold page->mapping->private_lock to
avoid racing on page->private modification.

Add assert_spin_locked() for the case where we're called from
alloc_extent_buffer().

Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: Nikolay Borisov <nborisov@suse.com>
---
 fs/btrfs/extent_io.c | 9 +++++++++
 1 file changed, 9 insertions(+)
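
For context, the btree_inode path is expected to take the private lock
around the attach. A minimal sketch of that caller side, assuming the
usual shape of alloc_extent_buffer() (illustrative, not the exact
upstream code):

	spin_lock(&mapping->private_lock);
	/* ... handle a page that already has a private eb attached ... */
	attach_extent_buffer_page(eb, page);	/* the new assertion expects the lock here */
	spin_unlock(&mapping->private_lock);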

Comments

David Sterba Oct. 27, 2020, 10:43 a.m. UTC | #1
On Wed, Oct 21, 2020 at 02:25:02PM +0800, Qu Wenruo wrote:
> When calling attach_extent_buffer_page(), we're either attaching
> anonymous pages, called from btrfs_clone_extent_buffer(), or attaching
> btree_inode pages, called from alloc_extent_buffer().
> 
> For the latter case, we should hold page->mapping->private_lock to
> avoid racing on page->private modification.
> 
> Add assert_spin_locked() for the case where we're called from
> alloc_extent_buffer().
> 
> Signed-off-by: Qu Wenruo <wqu@suse.com>
> Reviewed-by: Nikolay Borisov <nborisov@suse.com>
> ---
>  fs/btrfs/extent_io.c | 9 +++++++++
>  1 file changed, 9 insertions(+)
> 
> diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
> index 5842d3522865..8bf38948bd37 100644
> --- a/fs/btrfs/extent_io.c
> +++ b/fs/btrfs/extent_io.c
> @@ -3106,6 +3106,15 @@ static int submit_extent_page(unsigned int opf,
>  static void attach_extent_buffer_page(struct extent_buffer *eb,
>  				      struct page *page)
>  {
> +	/*
> +	 * If the page is mapped to the btree inode, we should hold the
> +	 * private lock to prevent races.
> +	 * For cloned or dummy extent buffers, their pages are not mapped and
> +	 * will not race with any other ebs.
> +	 */
> +	if (page->mapping)
> +		assert_spin_locked(&page->mapping->private_lock);

assert_spin_locked() per documentation checks whether the spinlock is
locked on any CPU, but from the comments above you want to assert that
it's held by the caller. For that you need lockdep_assert_held(); I
don't think we'd ever want assert_spin_locked() in our code.
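
A minimal sketch of the suggested change, keeping the same
page->mapping guard used in the patch:

	if (page->mapping)
		lockdep_assert_held(&page->mapping->private_lock);

Unlike assert_spin_locked(), lockdep_assert_held() checks that the
current context actually holds the lock, and it compiles away entirely
when lockdep is disabled.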

> +
>  	if (!PagePrivate(page))
>  		attach_page_private(page, eb);
>  	else
> -- 
> 2.28.0

Patch

diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 5842d3522865..8bf38948bd37 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -3106,6 +3106,15 @@ static int submit_extent_page(unsigned int opf,
 static void attach_extent_buffer_page(struct extent_buffer *eb,
 				      struct page *page)
 {
+	/*
+	 * If the page is mapped to the btree inode, we should hold the
+	 * private lock to prevent races.
+	 * For cloned or dummy extent buffers, their pages are not mapped and
+	 * will not race with any other ebs.
+	 */
+	if (page->mapping)
+		assert_spin_locked(&page->mapping->private_lock);
+
 	if (!PagePrivate(page))
 		attach_page_private(page, eb);
 	else
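
For reference, attach_page_private() above is the generic pagecache
helper; as of this kernel version it is roughly equivalent to the
following (a sketch of the helper from include/linux/pagemap.h):

	get_page(page);
	set_page_private(page, (unsigned long)data);
	SetPagePrivate(page);

In other words, the eb pointer is stashed in page->private with a page
reference held, which is exactly the field that private_lock
serializes.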