diff mbox series

[07/20] btrfs: simplify the read_extent_buffer end_io handler

Message ID 20230309090526.332550-8-hch@lst.de (mailing list archive)
State New, archived
Headers show
Series [01/20] btrfs: mark extent_buffer_under_io static | expand

Commit Message

Christoph Hellwig March 9, 2023, 9:05 a.m. UTC
Now that we always use a single bio to read an extent_buffer, the buffer
can be passed to the end_io handler as private data.  This allows
implementing a much simplified dedicated end I/O handler for metadata
reads.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 fs/btrfs/disk-io.c   | 105 +------------------------------------------
 fs/btrfs/disk-io.h   |   5 +--
 fs/btrfs/extent_io.c |  80 +++++++++++++++------------------
 3 files changed, 41 insertions(+), 149 deletions(-)
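The core of the change is passing the extent_buffer as the bio's private data so the completion handler receives it directly. A minimal single-threaded C sketch of that callback-with-private-data pattern (all names here are hypothetical illustrations, not btrfs code):

```c
#include <assert.h>
#include <stddef.h>

/*
 * Sketch of the pattern the patch adopts: the submitter stores a
 * pointer to the object under I/O ("private") next to the end_io
 * callback, so the completion handler gets the buffer directly
 * instead of re-deriving it from each page's page->private.
 */
struct sketch_bio {
	void (*end_io)(struct sketch_bio *bio);
	void *private;	/* here: the buffer being read */
	int status;	/* 0 on success, negative errno otherwise */
};

struct sketch_buffer {
	int uptodate;
};

static void buffer_read_end_io(struct sketch_bio *bio)
{
	/* No page lookup needed: the buffer arrived with the bio. */
	struct sketch_buffer *buf = bio->private;

	buf->uptodate = (bio->status == 0);
}

static void submit_and_complete(struct sketch_bio *bio, int status)
{
	/* Pretend the I/O just finished with the given status. */
	bio->status = status;
	bio->end_io(bio);
}
```

This is why the old per-page `find_extent_buffer_readpage()` lookup can go away: the handler already knows which buffer it is completing.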

Comments

Johannes Thumshirn March 9, 2023, 1:08 p.m. UTC | #1
Looks good,
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Qu Wenruo March 10, 2023, 8:14 a.m. UTC | #2
On 2023/3/9 17:05, Christoph Hellwig wrote:
> Now that we always use a single bio to read an extent_buffer, the buffer
> can be passed to the end_io handler as private data.  This allows
> implementing a much simplified dedicated end I/O handler for metadata
> reads.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

This greatly simplifies the subpage routine.

Although we no longer share the data and metadata read endio functions, it's
still a net reduction in code.

But there is a problem related to how we handle the validation.

>   
> +static void extent_buffer_read_end_io(struct btrfs_bio *bbio)
> +{
> +	struct extent_buffer *eb = bbio->private;
> +	bool uptodate = !bbio->bio.bi_status;
> +	struct bvec_iter_all iter_all;
> +	struct bio_vec *bvec;
> +	u32 bio_offset = 0;
> +
> +	atomic_inc(&eb->refs);
> +	eb->read_mirror = bbio->mirror_num;
> +
> +	if (uptodate &&
> +	    btrfs_validate_extent_buffer(eb, &bbio->parent_check) < 0)

Here we call btrfs_validate_extent_buffer() directly.

But in the case that a metadata bio is split, the endio function would 
be called on each split part.

Thus for the first half, we may fail when checking the eb, as the second 
half may not have finished yet.

I'm afraid here in the endio function, we still need to call 
btrfs_validate_metadata_buffer(), which only does the validation 
after all parts of the metadata are properly read.

Thanks,
Qu
> +		uptodate = false;
> +
> +	if (uptodate) {
> +		set_extent_buffer_uptodate(eb);
> +	} else {
> +		clear_extent_buffer_uptodate(eb);
> +		set_bit(EXTENT_BUFFER_READ_ERR, &eb->bflags);
> +	}
> +
> +	bio_for_each_segment_all(bvec, &bbio->bio, iter_all) {
> +		atomic_dec(&eb->io_pages);
> +		end_page_read(bvec->bv_page, uptodate, eb->start + bio_offset,
> +			      bvec->bv_len);
> +		bio_offset += bvec->bv_len;
> +	}
> +
> +	unlock_extent(&bbio->inode->io_tree, eb->start,
> +		      eb->start + bio_offset - 1, NULL);
> +	free_extent_buffer(eb);
> +
> +	bio_put(&bbio->bio);
> +}
> +
>   static void __read_extent_buffer_pages(struct extent_buffer *eb, int mirror_num,
>   				       struct btrfs_tree_parent_check *check)
>   {
> @@ -4233,7 +4227,7 @@ static void __read_extent_buffer_pages(struct extent_buffer *eb, int mirror_num,
>   	bbio = btrfs_bio_alloc(INLINE_EXTENT_BUFFER_PAGES,
>   			       REQ_OP_READ | REQ_META,
>   			       BTRFS_I(eb->fs_info->btree_inode),
> -			       end_bio_extent_readpage, NULL);
> +			       extent_buffer_read_end_io, eb);
>   	bbio->bio.bi_iter.bi_sector = eb->start >> SECTOR_SHIFT;
>   	bbio->file_offset = eb->start;
>   	memcpy(&bbio->parent_check, check, sizeof(*check));
Christoph Hellwig March 10, 2023, 8:17 a.m. UTC | #3
On Fri, Mar 10, 2023 at 04:14:46PM +0800, Qu Wenruo wrote:
> Here we call btrfs_validate_extent_buffer() directly.
>
> But in the case that a metadata bio is split, the endio function would be 
> called on each split part.

No.  bbio->end_io is called on the originally submitted bbio after all
I/O has finished, and is not called multiple times when split.  Without
that all consumers of btrfs_submit_bio would have to know about splitting
and need to be able to deal with cloned bios, which is exactly what I
spent great effort on to avoid (similar to how the block layer works).
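The guarantee described here can be sketched with a shared remaining-counter: each split fragment completes against the counter, and only the last one to finish invokes the original callback. A minimal single-threaded C illustration (hypothetical names; the real block layer and btrfs use atomic counters and bio chaining):

```c
#include <assert.h>

/*
 * Sketch: end_io runs exactly once on the originally submitted I/O,
 * no matter how many fragments the bio was split into.
 */
struct parent_io {
	int remaining;		/* fragments still in flight */
	int error;		/* first error seen, if any */
	int completions;	/* times end_io ran; must end at 1 */
};

static void parent_end_io(struct parent_io *io)
{
	io->completions++;
}

static void fragment_done(struct parent_io *io, int error)
{
	if (error && !io->error)
		io->error = error;
	/* Only the last fragment to complete fires the callback. */
	if (--io->remaining == 0)
		parent_end_io(io);
}
```

With this shape, consumers of the submit path never see partial completions, which is what lets the new handler validate the whole extent_buffer in one place.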
Qu Wenruo March 10, 2023, 8:30 a.m. UTC | #4
On 2023/3/10 16:17, Christoph Hellwig wrote:
> On Fri, Mar 10, 2023 at 04:14:46PM +0800, Qu Wenruo wrote:
>> Here we call btrfs_validate_extent_buffer() directly.
>>
>> But in the case that a metadata bio is split, the endio function would be
>> called on each split part.
> 
> No.  bbio->end_io is called on the originally submitted bbio after all
> I/O has finished, and is not called multiple times when split.  Without
> that all consumers of btrfs_submit_bio would have to know about splitting
> and need to be able to deal with cloned bios, which is exactly what I
> spent great effort on to avoid (similar to how the block layer works).

Oh, you avoided the endio call for each split bio but still handle 
interleaved RAID0 corruption cases by going through btrfs_check_read_bio(), 
which directly submits repair for the failed sectors.


Then the code is fine, and it makes the life of the endio handler much easier.

Reviewed-by: Qu Wenruo <wqu@suse.com>

And finally I understand why you moved the data read repair part to the 
bio layer.

Thanks,
Qu
Qu Wenruo March 10, 2023, 9:30 a.m. UTC | #5
On 2023/3/9 17:05, Christoph Hellwig wrote:
> Now that we always use a single bio to read an extent_buffer, the buffer
> can be passed to the end_io handler as private data.  This allows
> implementing a much simplified dedicated end I/O handler for metadata
> reads.

This greatly simplifies the behavior for subpage.

Looks pretty good.

Reviewed-by: Qu Wenruo <wqu@suse.com>

Thanks,
Qu

> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>   fs/btrfs/disk-io.c   | 105 +------------------------------------------
>   fs/btrfs/disk-io.h   |   5 +--
>   fs/btrfs/extent_io.c |  80 +++++++++++++++------------------
>   3 files changed, 41 insertions(+), 149 deletions(-)
> 
> diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
> index d03b431b07781c..6795acae476993 100644
> --- a/fs/btrfs/disk-io.c
> +++ b/fs/btrfs/disk-io.c
> @@ -485,8 +485,8 @@ static int check_tree_block_fsid(struct extent_buffer *eb)
>   }
>   
>   /* Do basic extent buffer checks at read time */
> -static int validate_extent_buffer(struct extent_buffer *eb,
> -				  struct btrfs_tree_parent_check *check)
> +int btrfs_validate_extent_buffer(struct extent_buffer *eb,
> +				 struct btrfs_tree_parent_check *check)
>   {
>   	struct btrfs_fs_info *fs_info = eb->fs_info;
>   	u64 found_start;
> @@ -599,107 +599,6 @@ static int validate_extent_buffer(struct extent_buffer *eb,
>   	return ret;
>   }
>   
> -static int validate_subpage_buffer(struct page *page, u64 start, u64 end,
> -				   int mirror, struct btrfs_tree_parent_check *check)
> -{
> -	struct btrfs_fs_info *fs_info = btrfs_sb(page->mapping->host->i_sb);
> -	struct extent_buffer *eb;
> -	bool reads_done;
> -	int ret = 0;
> -
> -	ASSERT(check);
> -
> -	/*
> -	 * We don't allow bio merge for subpage metadata read, so we should
> -	 * only get one eb for each endio hook.
> -	 */
> -	ASSERT(end == start + fs_info->nodesize - 1);
> -	ASSERT(PagePrivate(page));
> -
> -	eb = find_extent_buffer(fs_info, start);
> -	/*
> -	 * When we are reading one tree block, eb must have been inserted into
> -	 * the radix tree. If not, something is wrong.
> -	 */
> -	ASSERT(eb);
> -
> -	reads_done = atomic_dec_and_test(&eb->io_pages);
> -	/* Subpage read must finish in page read */
> -	ASSERT(reads_done);
> -
> -	eb->read_mirror = mirror;
> -	if (test_bit(EXTENT_BUFFER_READ_ERR, &eb->bflags)) {
> -		ret = -EIO;
> -		goto err;
> -	}
> -	ret = validate_extent_buffer(eb, check);
> -	if (ret < 0)
> -		goto err;
> -
> -	set_extent_buffer_uptodate(eb);
> -
> -	free_extent_buffer(eb);
> -	return ret;
> -err:
> -	/*
> -	 * end_bio_extent_readpage decrements io_pages in case of error,
> -	 * make sure it has something to decrement.
> -	 */
> -	atomic_inc(&eb->io_pages);
> -	clear_extent_buffer_uptodate(eb);
> -	free_extent_buffer(eb);
> -	return ret;
> -}
> -
> -int btrfs_validate_metadata_buffer(struct btrfs_bio *bbio,
> -				   struct page *page, u64 start, u64 end,
> -				   int mirror)
> -{
> -	struct extent_buffer *eb;
> -	int ret = 0;
> -	int reads_done;
> -
> -	ASSERT(page->private);
> -
> -	if (btrfs_sb(page->mapping->host->i_sb)->nodesize < PAGE_SIZE)
> -		return validate_subpage_buffer(page, start, end, mirror,
> -					       &bbio->parent_check);
> -
> -	eb = (struct extent_buffer *)page->private;
> -
> -	/*
> -	 * The pending IO might have been the only thing that kept this buffer
> -	 * in memory.  Make sure we have a ref for all this other checks
> -	 */
> -	atomic_inc(&eb->refs);
> -
> -	reads_done = atomic_dec_and_test(&eb->io_pages);
> -	if (!reads_done)
> -		goto err;
> -
> -	eb->read_mirror = mirror;
> -	if (test_bit(EXTENT_BUFFER_READ_ERR, &eb->bflags)) {
> -		ret = -EIO;
> -		goto err;
> -	}
> -	ret = validate_extent_buffer(eb, &bbio->parent_check);
> -	if (!ret)
> -		set_extent_buffer_uptodate(eb);
> -err:
> -	if (ret) {
> -		/*
> -		 * our io error hook is going to dec the io pages
> -		 * again, we have to make sure it has something
> -		 * to decrement
> -		 */
> -		atomic_inc(&eb->io_pages);
> -		clear_extent_buffer_uptodate(eb);
> -	}
> -	free_extent_buffer(eb);
> -
> -	return ret;
> -}
> -
>   #ifdef CONFIG_MIGRATION
>   static int btree_migrate_folio(struct address_space *mapping,
>   		struct folio *dst, struct folio *src, enum migrate_mode mode)
> diff --git a/fs/btrfs/disk-io.h b/fs/btrfs/disk-io.h
> index 4d577233011023..2923b5d7cfca0b 100644
> --- a/fs/btrfs/disk-io.h
> +++ b/fs/btrfs/disk-io.h
> @@ -84,9 +84,8 @@ void btrfs_btree_balance_dirty(struct btrfs_fs_info *fs_info);
>   void btrfs_btree_balance_dirty_nodelay(struct btrfs_fs_info *fs_info);
>   void btrfs_drop_and_free_fs_root(struct btrfs_fs_info *fs_info,
>   				 struct btrfs_root *root);
> -int btrfs_validate_metadata_buffer(struct btrfs_bio *bbio,
> -				   struct page *page, u64 start, u64 end,
> -				   int mirror);
> +int btrfs_validate_extent_buffer(struct extent_buffer *eb,
> +				 struct btrfs_tree_parent_check *check);
>   #ifdef CONFIG_BTRFS_FS_RUN_SANITY_TESTS
>   struct btrfs_root *btrfs_alloc_dummy_root(struct btrfs_fs_info *fs_info);
>   #endif
> diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
> index d60a80572b8ba2..738fcf5cbc71d6 100644
> --- a/fs/btrfs/extent_io.c
> +++ b/fs/btrfs/extent_io.c
> @@ -663,35 +663,6 @@ static void begin_page_read(struct btrfs_fs_info *fs_info, struct page *page)
>   	btrfs_subpage_start_reader(fs_info, page, page_offset(page), PAGE_SIZE);
>   }
>   
> -/*
> - * Find extent buffer for a givne bytenr.
> - *
> - * This is for end_bio_extent_readpage(), thus we can't do any unsafe locking
> - * in endio context.
> - */
> -static struct extent_buffer *find_extent_buffer_readpage(
> -		struct btrfs_fs_info *fs_info, struct page *page, u64 bytenr)
> -{
> -	struct extent_buffer *eb;
> -
> -	/*
> -	 * For regular sectorsize, we can use page->private to grab extent
> -	 * buffer
> -	 */
> -	if (fs_info->nodesize >= PAGE_SIZE) {
> -		ASSERT(PagePrivate(page) && page->private);
> -		return (struct extent_buffer *)page->private;
> -	}
> -
> -	/* For subpage case, we need to lookup buffer radix tree */
> -	rcu_read_lock();
> -	eb = radix_tree_lookup(&fs_info->buffer_radix,
> -			       bytenr >> fs_info->sectorsize_bits);
> -	rcu_read_unlock();
> -	ASSERT(eb);
> -	return eb;
> -}
> -
>   /*
>    * after a readpage IO is done, we need to:
>    * clear the uptodate bits on error
> @@ -713,7 +684,6 @@ static void end_bio_extent_readpage(struct btrfs_bio *bbio)
>   	 * larger than UINT_MAX, u32 here is enough.
>   	 */
>   	u32 bio_offset = 0;
> -	int mirror;
>   	struct bvec_iter_all iter_all;
>   
>   	ASSERT(!bio_flagged(bio, BIO_CLONED));
> @@ -753,11 +723,6 @@ static void end_bio_extent_readpage(struct btrfs_bio *bbio)
>   		end = start + bvec->bv_len - 1;
>   		len = bvec->bv_len;
>   
> -		mirror = bbio->mirror_num;
> -		if (uptodate && !is_data_inode(inode) &&
> -		    btrfs_validate_metadata_buffer(bbio, page, start, end, mirror))
> -			uptodate = false;
> -
>   		if (likely(uptodate)) {
>   			loff_t i_size = i_size_read(inode);
>   			pgoff_t end_index = i_size >> PAGE_SHIFT;
> @@ -778,13 +743,6 @@ static void end_bio_extent_readpage(struct btrfs_bio *bbio)
>   				zero_user_segment(page, zero_start,
>   						  offset_in_page(end) + 1);
>   			}
> -		} else if (!is_data_inode(inode)) {
> -			struct extent_buffer *eb;
> -
> -			eb = find_extent_buffer_readpage(fs_info, page, start);
> -			set_bit(EXTENT_BUFFER_READ_ERR, &eb->bflags);
> -			eb->read_mirror = mirror;
> -			atomic_dec(&eb->io_pages);
>   		}
>   
>   		/* Update page status and unlock. */
> @@ -4219,6 +4177,42 @@ void set_extent_buffer_uptodate(struct extent_buffer *eb)
>   	}
>   }
>   
> +static void extent_buffer_read_end_io(struct btrfs_bio *bbio)
> +{
> +	struct extent_buffer *eb = bbio->private;
> +	bool uptodate = !bbio->bio.bi_status;
> +	struct bvec_iter_all iter_all;
> +	struct bio_vec *bvec;
> +	u32 bio_offset = 0;
> +
> +	atomic_inc(&eb->refs);
> +	eb->read_mirror = bbio->mirror_num;
> +
> +	if (uptodate &&
> +	    btrfs_validate_extent_buffer(eb, &bbio->parent_check) < 0)
> +		uptodate = false;
> +
> +	if (uptodate) {
> +		set_extent_buffer_uptodate(eb);
> +	} else {
> +		clear_extent_buffer_uptodate(eb);
> +		set_bit(EXTENT_BUFFER_READ_ERR, &eb->bflags);
> +	}
> +
> +	bio_for_each_segment_all(bvec, &bbio->bio, iter_all) {
> +		atomic_dec(&eb->io_pages);
> +		end_page_read(bvec->bv_page, uptodate, eb->start + bio_offset,
> +			      bvec->bv_len);
> +		bio_offset += bvec->bv_len;
> +	}
> +
> +	unlock_extent(&bbio->inode->io_tree, eb->start,
> +		      eb->start + bio_offset - 1, NULL);
> +	free_extent_buffer(eb);
> +
> +	bio_put(&bbio->bio);
> +}
> +
>   static void __read_extent_buffer_pages(struct extent_buffer *eb, int mirror_num,
>   				       struct btrfs_tree_parent_check *check)
>   {
> @@ -4233,7 +4227,7 @@ static void __read_extent_buffer_pages(struct extent_buffer *eb, int mirror_num,
>   	bbio = btrfs_bio_alloc(INLINE_EXTENT_BUFFER_PAGES,
>   			       REQ_OP_READ | REQ_META,
>   			       BTRFS_I(eb->fs_info->btree_inode),
> -			       end_bio_extent_readpage, NULL);
> +			       extent_buffer_read_end_io, eb);
>   	bbio->bio.bi_iter.bi_sector = eb->start >> SECTOR_SHIFT;
>   	bbio->file_offset = eb->start;
>   	memcpy(&bbio->parent_check, check, sizeof(*check));

Patch

diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index d03b431b07781c..6795acae476993 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -485,8 +485,8 @@  static int check_tree_block_fsid(struct extent_buffer *eb)
 }
 
 /* Do basic extent buffer checks at read time */
-static int validate_extent_buffer(struct extent_buffer *eb,
-				  struct btrfs_tree_parent_check *check)
+int btrfs_validate_extent_buffer(struct extent_buffer *eb,
+				 struct btrfs_tree_parent_check *check)
 {
 	struct btrfs_fs_info *fs_info = eb->fs_info;
 	u64 found_start;
@@ -599,107 +599,6 @@  static int validate_extent_buffer(struct extent_buffer *eb,
 	return ret;
 }
 
-static int validate_subpage_buffer(struct page *page, u64 start, u64 end,
-				   int mirror, struct btrfs_tree_parent_check *check)
-{
-	struct btrfs_fs_info *fs_info = btrfs_sb(page->mapping->host->i_sb);
-	struct extent_buffer *eb;
-	bool reads_done;
-	int ret = 0;
-
-	ASSERT(check);
-
-	/*
-	 * We don't allow bio merge for subpage metadata read, so we should
-	 * only get one eb for each endio hook.
-	 */
-	ASSERT(end == start + fs_info->nodesize - 1);
-	ASSERT(PagePrivate(page));
-
-	eb = find_extent_buffer(fs_info, start);
-	/*
-	 * When we are reading one tree block, eb must have been inserted into
-	 * the radix tree. If not, something is wrong.
-	 */
-	ASSERT(eb);
-
-	reads_done = atomic_dec_and_test(&eb->io_pages);
-	/* Subpage read must finish in page read */
-	ASSERT(reads_done);
-
-	eb->read_mirror = mirror;
-	if (test_bit(EXTENT_BUFFER_READ_ERR, &eb->bflags)) {
-		ret = -EIO;
-		goto err;
-	}
-	ret = validate_extent_buffer(eb, check);
-	if (ret < 0)
-		goto err;
-
-	set_extent_buffer_uptodate(eb);
-
-	free_extent_buffer(eb);
-	return ret;
-err:
-	/*
-	 * end_bio_extent_readpage decrements io_pages in case of error,
-	 * make sure it has something to decrement.
-	 */
-	atomic_inc(&eb->io_pages);
-	clear_extent_buffer_uptodate(eb);
-	free_extent_buffer(eb);
-	return ret;
-}
-
-int btrfs_validate_metadata_buffer(struct btrfs_bio *bbio,
-				   struct page *page, u64 start, u64 end,
-				   int mirror)
-{
-	struct extent_buffer *eb;
-	int ret = 0;
-	int reads_done;
-
-	ASSERT(page->private);
-
-	if (btrfs_sb(page->mapping->host->i_sb)->nodesize < PAGE_SIZE)
-		return validate_subpage_buffer(page, start, end, mirror,
-					       &bbio->parent_check);
-
-	eb = (struct extent_buffer *)page->private;
-
-	/*
-	 * The pending IO might have been the only thing that kept this buffer
-	 * in memory.  Make sure we have a ref for all this other checks
-	 */
-	atomic_inc(&eb->refs);
-
-	reads_done = atomic_dec_and_test(&eb->io_pages);
-	if (!reads_done)
-		goto err;
-
-	eb->read_mirror = mirror;
-	if (test_bit(EXTENT_BUFFER_READ_ERR, &eb->bflags)) {
-		ret = -EIO;
-		goto err;
-	}
-	ret = validate_extent_buffer(eb, &bbio->parent_check);
-	if (!ret)
-		set_extent_buffer_uptodate(eb);
-err:
-	if (ret) {
-		/*
-		 * our io error hook is going to dec the io pages
-		 * again, we have to make sure it has something
-		 * to decrement
-		 */
-		atomic_inc(&eb->io_pages);
-		clear_extent_buffer_uptodate(eb);
-	}
-	free_extent_buffer(eb);
-
-	return ret;
-}
-
 #ifdef CONFIG_MIGRATION
 static int btree_migrate_folio(struct address_space *mapping,
 		struct folio *dst, struct folio *src, enum migrate_mode mode)
diff --git a/fs/btrfs/disk-io.h b/fs/btrfs/disk-io.h
index 4d577233011023..2923b5d7cfca0b 100644
--- a/fs/btrfs/disk-io.h
+++ b/fs/btrfs/disk-io.h
@@ -84,9 +84,8 @@  void btrfs_btree_balance_dirty(struct btrfs_fs_info *fs_info);
 void btrfs_btree_balance_dirty_nodelay(struct btrfs_fs_info *fs_info);
 void btrfs_drop_and_free_fs_root(struct btrfs_fs_info *fs_info,
 				 struct btrfs_root *root);
-int btrfs_validate_metadata_buffer(struct btrfs_bio *bbio,
-				   struct page *page, u64 start, u64 end,
-				   int mirror);
+int btrfs_validate_extent_buffer(struct extent_buffer *eb,
+				 struct btrfs_tree_parent_check *check);
 #ifdef CONFIG_BTRFS_FS_RUN_SANITY_TESTS
 struct btrfs_root *btrfs_alloc_dummy_root(struct btrfs_fs_info *fs_info);
 #endif
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index d60a80572b8ba2..738fcf5cbc71d6 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -663,35 +663,6 @@  static void begin_page_read(struct btrfs_fs_info *fs_info, struct page *page)
 	btrfs_subpage_start_reader(fs_info, page, page_offset(page), PAGE_SIZE);
 }
 
-/*
- * Find extent buffer for a givne bytenr.
- *
- * This is for end_bio_extent_readpage(), thus we can't do any unsafe locking
- * in endio context.
- */
-static struct extent_buffer *find_extent_buffer_readpage(
-		struct btrfs_fs_info *fs_info, struct page *page, u64 bytenr)
-{
-	struct extent_buffer *eb;
-
-	/*
-	 * For regular sectorsize, we can use page->private to grab extent
-	 * buffer
-	 */
-	if (fs_info->nodesize >= PAGE_SIZE) {
-		ASSERT(PagePrivate(page) && page->private);
-		return (struct extent_buffer *)page->private;
-	}
-
-	/* For subpage case, we need to lookup buffer radix tree */
-	rcu_read_lock();
-	eb = radix_tree_lookup(&fs_info->buffer_radix,
-			       bytenr >> fs_info->sectorsize_bits);
-	rcu_read_unlock();
-	ASSERT(eb);
-	return eb;
-}
-
 /*
  * after a readpage IO is done, we need to:
  * clear the uptodate bits on error
@@ -713,7 +684,6 @@  static void end_bio_extent_readpage(struct btrfs_bio *bbio)
 	 * larger than UINT_MAX, u32 here is enough.
 	 */
 	u32 bio_offset = 0;
-	int mirror;
 	struct bvec_iter_all iter_all;
 
 	ASSERT(!bio_flagged(bio, BIO_CLONED));
@@ -753,11 +723,6 @@  static void end_bio_extent_readpage(struct btrfs_bio *bbio)
 		end = start + bvec->bv_len - 1;
 		len = bvec->bv_len;
 
-		mirror = bbio->mirror_num;
-		if (uptodate && !is_data_inode(inode) &&
-		    btrfs_validate_metadata_buffer(bbio, page, start, end, mirror))
-			uptodate = false;
-
 		if (likely(uptodate)) {
 			loff_t i_size = i_size_read(inode);
 			pgoff_t end_index = i_size >> PAGE_SHIFT;
@@ -778,13 +743,6 @@  static void end_bio_extent_readpage(struct btrfs_bio *bbio)
 				zero_user_segment(page, zero_start,
 						  offset_in_page(end) + 1);
 			}
-		} else if (!is_data_inode(inode)) {
-			struct extent_buffer *eb;
-
-			eb = find_extent_buffer_readpage(fs_info, page, start);
-			set_bit(EXTENT_BUFFER_READ_ERR, &eb->bflags);
-			eb->read_mirror = mirror;
-			atomic_dec(&eb->io_pages);
 		}
 
 		/* Update page status and unlock. */
@@ -4219,6 +4177,42 @@  void set_extent_buffer_uptodate(struct extent_buffer *eb)
 	}
 }
 
+static void extent_buffer_read_end_io(struct btrfs_bio *bbio)
+{
+	struct extent_buffer *eb = bbio->private;
+	bool uptodate = !bbio->bio.bi_status;
+	struct bvec_iter_all iter_all;
+	struct bio_vec *bvec;
+	u32 bio_offset = 0;
+
+	atomic_inc(&eb->refs);
+	eb->read_mirror = bbio->mirror_num;
+
+	if (uptodate &&
+	    btrfs_validate_extent_buffer(eb, &bbio->parent_check) < 0)
+		uptodate = false;
+
+	if (uptodate) {
+		set_extent_buffer_uptodate(eb);
+	} else {
+		clear_extent_buffer_uptodate(eb);
+		set_bit(EXTENT_BUFFER_READ_ERR, &eb->bflags);
+	}
+
+	bio_for_each_segment_all(bvec, &bbio->bio, iter_all) {
+		atomic_dec(&eb->io_pages);
+		end_page_read(bvec->bv_page, uptodate, eb->start + bio_offset,
+			      bvec->bv_len);
+		bio_offset += bvec->bv_len;
+	}
+
+	unlock_extent(&bbio->inode->io_tree, eb->start,
+		      eb->start + bio_offset - 1, NULL);
+	free_extent_buffer(eb);
+
+	bio_put(&bbio->bio);
+}
+
 static void __read_extent_buffer_pages(struct extent_buffer *eb, int mirror_num,
 				       struct btrfs_tree_parent_check *check)
 {
@@ -4233,7 +4227,7 @@  static void __read_extent_buffer_pages(struct extent_buffer *eb, int mirror_num,
 	bbio = btrfs_bio_alloc(INLINE_EXTENT_BUFFER_PAGES,
 			       REQ_OP_READ | REQ_META,
 			       BTRFS_I(eb->fs_info->btree_inode),
-			       end_bio_extent_readpage, NULL);
+			       extent_buffer_read_end_io, eb);
 	bbio->bio.bi_iter.bi_sector = eb->start >> SECTOR_SHIFT;
 	bbio->file_offset = eb->start;
 	memcpy(&bbio->parent_check, check, sizeof(*check));