
[1/8] block: tidy up the bio full checks in bio_add_hw_page

Message ID 20230724165433.117645-2-hch@lst.de (mailing list archive)
State New, archived
Series [1/8] block: tidy up the bio full checks in bio_add_hw_page

Commit Message

Christoph Hellwig July 24, 2023, 4:54 p.m. UTC
bio_add_hw_page already checks that the number of bytes to be added fits
into the max_hw_sectors limit of the queue.  Remove the call to bio_full
and instead check the vector count against the smaller of the number of
segments the bio can hold and the queue's max segments limit, and do this
cheap check before the more expensive gap-to-previous check.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/bio.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

Comments

Jinyoung Choi July 25, 2023, 1:41 a.m. UTC | #1
Looks good to me,

Reviewed-by: Jinyoung Choi <j-young.choi@samsung.com>

Patch

diff --git a/block/bio.c b/block/bio.c
index 8672179213b939..72488ecea47acf 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -1014,6 +1014,10 @@  int bio_add_hw_page(struct request_queue *q, struct bio *bio,
 		if (bio_try_merge_hw_seg(q, bio, page, len, offset, same_page))
 			return len;
 
+		if (bio->bi_vcnt >=
+		    min(bio->bi_max_vecs, queue_max_segments(q)))
+			return 0;
+
 		/*
 		 * If the queue doesn't support SG gaps and adding this segment
 		 * would create a gap, disallow it.
@@ -1023,12 +1027,6 @@  int bio_add_hw_page(struct request_queue *q, struct bio *bio,
 			return 0;
 	}
 
-	if (bio_full(bio, len))
-		return 0;
-
-	if (bio->bi_vcnt >= queue_max_segments(q))
-		return 0;
-
 	bvec_set_page(&bio->bi_io_vec[bio->bi_vcnt], page, len, offset);
 	bio->bi_vcnt++;
 	bio->bi_iter.bi_size += len;