[02/26] block: Remove req_bio_endio()

Message ID 20240202073104.2418230-3-dlemoal@kernel.org (mailing list archive)
State Not Applicable
Series Zone write plugging

Commit Message

Damien Le Moal Feb. 2, 2024, 7:30 a.m. UTC
Moving the req_bio_endio() code into its only caller, blk_update_request(),
allows reducing accesses to and tests of bio and request fields. Also,
given that partial completion of zone append operations is not possible
and that zone append operations cannot be merged, the update of the BIO
sector using the request sector for these operations can be moved directly
before the call to bio_endio().

Signed-off-by: Damien Le Moal <dlemoal@kernel.org>
---
 block/blk-mq.c | 66 ++++++++++++++++++++++++--------------------------
 1 file changed, 31 insertions(+), 35 deletions(-)
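
In outline, the restructured per-BIO completion loop of blk_update_request()
ends up as sketched below. This is a simplified sketch derived from the full
diff at the end of this page, not the patch itself; flag handling
(BIO_QUIET, BIO_TRACE_COMPLETION) and loop-exit details are elided:

	/* Simplified sketch of the completion loop after this patch. */
	while (req->bio) {
		struct bio *bio = req->bio;
		unsigned bio_bytes = min(bio->bi_iter.bi_size, nr_bytes);

		if (unlikely(error))
			bio->bi_status = error;

		if (bio_bytes == bio->bi_iter.bi_size) {
			req->bio = bio->bi_next;
		} else if (req_op(req) == REQ_OP_ZONE_APPEND) {
			/* Partial zone append completions are not possible. */
			bio->bi_status = BLK_STS_IOERR;
			bio_bytes = bio->bi_iter.bi_size;
		}

		bio_advance(bio, bio_bytes);

		/* The zone append sector update now happens right here. */
		if (!bio->bi_iter.bi_size && !is_flush) {
			if (req_op(req) == REQ_OP_ZONE_APPEND)
				bio->bi_iter.bi_sector = req->__sector;
			bio_endio(bio);
		}

		total_bytes += bio_bytes;
		nr_bytes -= bio_bytes;
	}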

Comments

Hannes Reinecke Feb. 4, 2024, 11:57 a.m. UTC | #1
On 2/2/24 15:30, Damien Le Moal wrote:
> Moving the req_bio_endio() code into its only caller, blk_update_request(),
> allows reducing accesses to and tests of bio and request fields. Also,
> given that partial completion of zone append operations is not possible
> and that zone append operations cannot be merged, the update of the BIO
> sector using the request sector for these operations can be moved directly
> before the call to bio_endio().
> 
> Signed-off-by: Damien Le Moal <dlemoal@kernel.org>
> ---
>   block/blk-mq.c | 66 ++++++++++++++++++++++++--------------------------
>   1 file changed, 31 insertions(+), 35 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
Bart Van Assche Feb. 5, 2024, 5:28 p.m. UTC | #2
On 2/1/24 23:30, Damien Le Moal wrote:
> @@ -916,9 +888,8 @@ bool blk_update_request(struct request *req, blk_status_t error,
>   	if (blk_crypto_rq_has_keyslot(req) && nr_bytes >= blk_rq_bytes(req))
>   		__blk_crypto_rq_put_keyslot(req);
>   
> -	if (unlikely(error && !blk_rq_is_passthrough(req) &&
> -		     !(req->rq_flags & RQF_QUIET)) &&
> -		     !test_bit(GD_DEAD, &req->q->disk->state)) {
> +	if (unlikely(error && !blk_rq_is_passthrough(req) && !quiet) &&
> +	    !test_bit(GD_DEAD, &req->q->disk->state)) {

The new indentation of !test_bit(GD_DEAD, &req->q->disk->state) looks odd to me ...

>   		blk_print_req_error(req, error);
>   		trace_block_rq_error(req, error, nr_bytes);
>   	}
> @@ -930,12 +901,37 @@ bool blk_update_request(struct request *req, blk_status_t error,
>   		struct bio *bio = req->bio;
>   		unsigned bio_bytes = min(bio->bi_iter.bi_size, nr_bytes);
>   
> -		if (bio_bytes == bio->bi_iter.bi_size)
> +		if (unlikely(error))
> +			bio->bi_status = error;
> +
> +		if (bio_bytes == bio->bi_iter.bi_size) {
>   			req->bio = bio->bi_next;

The behavior has been changed compared to the original code: the original code
only tests bio_bytes if error == 0. The new code tests bio_bytes no matter what
value the 'error' variable has. Is this behavior change intentional?

Otherwise this patch looks good to me.

Thanks,

Bart.
Damien Le Moal Feb. 5, 2024, 11:45 p.m. UTC | #3
On 2/6/24 02:28, Bart Van Assche wrote:
> On 2/1/24 23:30, Damien Le Moal wrote:
>> @@ -916,9 +888,8 @@ bool blk_update_request(struct request *req, blk_status_t error,
>>   	if (blk_crypto_rq_has_keyslot(req) && nr_bytes >= blk_rq_bytes(req))
>>   		__blk_crypto_rq_put_keyslot(req);
>>   
>> -	if (unlikely(error && !blk_rq_is_passthrough(req) &&
>> -		     !(req->rq_flags & RQF_QUIET)) &&
>> -		     !test_bit(GD_DEAD, &req->q->disk->state)) {
>> +	if (unlikely(error && !blk_rq_is_passthrough(req) && !quiet) &&
>> +	    !test_bit(GD_DEAD, &req->q->disk->state)) {
> 
> The new indentation of !test_bit(GD_DEAD, &req->q->disk->state) looks odd to me ...
> 
>>   		blk_print_req_error(req, error);
>>   		trace_block_rq_error(req, error, nr_bytes);
>>   	}
>> @@ -930,12 +901,37 @@ bool blk_update_request(struct request *req, blk_status_t error,
>>   		struct bio *bio = req->bio;
>>   		unsigned bio_bytes = min(bio->bi_iter.bi_size, nr_bytes);
>>   
>> -		if (bio_bytes == bio->bi_iter.bi_size)
>> +		if (unlikely(error))
>> +			bio->bi_status = error;
>> +
>> +		if (bio_bytes == bio->bi_iter.bi_size) {
>>   			req->bio = bio->bi_next;
> 
> The behavior has been changed compared to the original code: the original code
> only tests bio_bytes if error == 0. The new code tests bio_bytes no matter what
> value the 'error' variable has. Is this behavior change intentional?

No. I do not think it is a problem though since if there is an error, bio_bytes
will always be less than bio->bi_iter.bi_size. I will tweak this to restore the
previous behavior.
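
A sketch of one way such a tweak could look, gating the zone append
forced-completion on !error as the old req_bio_endio() did (illustrative
only; the actual follow-up change is not shown in this thread):

	if (bio_bytes == bio->bi_iter.bi_size) {
		req->bio = bio->bi_next;
	} else if (!error && req_op(req) == REQ_OP_ZONE_APPEND) {
		/*
		 * Only force full completion when there is no error,
		 * matching the old req_bio_endio() behavior.
		 */
		bio->bi_status = BLK_STS_IOERR;
		bio_bytes = bio->bi_iter.bi_size;
	}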
Damien Le Moal Feb. 9, 2024, 6:53 a.m. UTC | #4
On 2/6/24 02:28, Bart Van Assche wrote:
> On 2/1/24 23:30, Damien Le Moal wrote:
>> @@ -916,9 +888,8 @@ bool blk_update_request(struct request *req, blk_status_t error,
>>   	if (blk_crypto_rq_has_keyslot(req) && nr_bytes >= blk_rq_bytes(req))
>>   		__blk_crypto_rq_put_keyslot(req);
>>   
>> -	if (unlikely(error && !blk_rq_is_passthrough(req) &&
>> -		     !(req->rq_flags & RQF_QUIET)) &&
>> -		     !test_bit(GD_DEAD, &req->q->disk->state)) {
>> +	if (unlikely(error && !blk_rq_is_passthrough(req) && !quiet) &&
>> +	    !test_bit(GD_DEAD, &req->q->disk->state)) {
> 
> The new indentation of !test_bit(GD_DEAD, &req->q->disk->state) looks odd to me

But it is actually correct because that test bit is not part of the unlikely().
Not sure if that is intentional though.
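
That is, the condition parses as below, so the continuation line aligns
with unlikely() rather than falling inside its argument list (a
hypothetical rewrite for illustration only, equivalent to the diff):

	bool hinted = unlikely(error && !blk_rq_is_passthrough(req) && !quiet);

	if (hinted && !test_bit(GD_DEAD, &req->q->disk->state)) {
		blk_print_req_error(req, error);
		trace_block_rq_error(req, error, nr_bytes);
	}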

> ...
> 
>>   		blk_print_req_error(req, error);
>>   		trace_block_rq_error(req, error, nr_bytes);
>>   	}
>> @@ -930,12 +901,37 @@ bool blk_update_request(struct request *req, blk_status_t error,
>>   		struct bio *bio = req->bio;
>>   		unsigned bio_bytes = min(bio->bi_iter.bi_size, nr_bytes);
>>   
>> -		if (bio_bytes == bio->bi_iter.bi_size)
>> +		if (unlikely(error))
>> +			bio->bi_status = error;
>> +
>> +		if (bio_bytes == bio->bi_iter.bi_size) {
>>   			req->bio = bio->bi_next;
> 
> The behavior has been changed compared to the original code: the original code
> only tests bio_bytes if error == 0. The new code tests bio_bytes no matter what
> value the 'error' variable has. Is this behavior change intentional?

No change actually. The bio_bytes test was in blk_update_request() already.

> 
> Otherwise this patch looks good to me.
> 
> Thanks,
> 
> Bart.

Patch

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 21cd54ad1873..bfebb8fcd248 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -763,36 +763,6 @@ void blk_dump_rq_flags(struct request *rq, char *msg)
 }
 EXPORT_SYMBOL(blk_dump_rq_flags);
 
-static void req_bio_endio(struct request *rq, struct bio *bio,
-			  unsigned int nbytes, blk_status_t error)
-{
-	if (unlikely(error)) {
-		bio->bi_status = error;
-	} else if (req_op(rq) == REQ_OP_ZONE_APPEND) {
-		/*
-		 * Partial zone append completions cannot be supported as the
-		 * BIO fragments may end up not being written sequentially.
-		 * For such case, force the completed nbytes to be equal to
-		 * the BIO size so that bio_advance() sets the BIO remaining
-		 * size to 0 and we end up calling bio_endio() before returning.
-		 */
-		if (bio->bi_iter.bi_size != nbytes) {
-			bio->bi_status = BLK_STS_IOERR;
-			nbytes = bio->bi_iter.bi_size;
-		} else {
-			bio->bi_iter.bi_sector = rq->__sector;
-		}
-	}
-
-	bio_advance(bio, nbytes);
-
-	if (unlikely(rq->rq_flags & RQF_QUIET))
-		bio_set_flag(bio, BIO_QUIET);
-	/* don't actually finish bio if it's part of flush sequence */
-	if (bio->bi_iter.bi_size == 0 && !(rq->rq_flags & RQF_FLUSH_SEQ))
-		bio_endio(bio);
-}
-
 static void blk_account_io_completion(struct request *req, unsigned int bytes)
 {
 	if (req->part && blk_do_io_stat(req)) {
@@ -896,6 +866,8 @@ static void blk_complete_request(struct request *req)
 bool blk_update_request(struct request *req, blk_status_t error,
 		unsigned int nr_bytes)
 {
+	bool is_flush = req->rq_flags & RQF_FLUSH_SEQ;
+	bool quiet = req->rq_flags & RQF_QUIET;
 	int total_bytes;
 
 	trace_block_rq_complete(req, error, nr_bytes);
@@ -916,9 +888,8 @@ bool blk_update_request(struct request *req, blk_status_t error,
 	if (blk_crypto_rq_has_keyslot(req) && nr_bytes >= blk_rq_bytes(req))
 		__blk_crypto_rq_put_keyslot(req);
 
-	if (unlikely(error && !blk_rq_is_passthrough(req) &&
-		     !(req->rq_flags & RQF_QUIET)) &&
-		     !test_bit(GD_DEAD, &req->q->disk->state)) {
+	if (unlikely(error && !blk_rq_is_passthrough(req) && !quiet) &&
+	    !test_bit(GD_DEAD, &req->q->disk->state)) {
 		blk_print_req_error(req, error);
 		trace_block_rq_error(req, error, nr_bytes);
 	}
@@ -930,12 +901,37 @@ bool blk_update_request(struct request *req, blk_status_t error,
 		struct bio *bio = req->bio;
 		unsigned bio_bytes = min(bio->bi_iter.bi_size, nr_bytes);
 
-		if (bio_bytes == bio->bi_iter.bi_size)
+		if (unlikely(error))
+			bio->bi_status = error;
+
+		if (bio_bytes == bio->bi_iter.bi_size) {
 			req->bio = bio->bi_next;
+		} else if (req_op(req) == REQ_OP_ZONE_APPEND) {
+			/*
+			 * Partial zone append completions cannot be supported
+			 * as the BIO fragments may end up not being written
+			 * sequentially. For such case, force the completed
+			 * nbytes to be equal to the BIO size so that
+			 * bio_advance() sets the BIO remaining size to 0 and
+			 * we end up calling bio_endio() before returning.
+			 */
+			bio->bi_status = BLK_STS_IOERR;
+			bio_bytes = bio->bi_iter.bi_size;
+		}
 
 		/* Completion has already been traced */
 		bio_clear_flag(bio, BIO_TRACE_COMPLETION);
-		req_bio_endio(req, bio, bio_bytes, error);
+		if (unlikely(quiet))
+			bio_set_flag(bio, BIO_QUIET);
+
+		bio_advance(bio, bio_bytes);
+
+		/* Don't actually finish bio if it's part of flush sequence */
+		if (!bio->bi_iter.bi_size && !is_flush) {
+			if (req_op(req) == REQ_OP_ZONE_APPEND)
+				bio->bi_iter.bi_sector = req->__sector;
+			bio_endio(bio);
+		}
 
 		total_bytes += bio_bytes;
 		nr_bytes -= bio_bytes;
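
For context, the reason the sector update must happen before bio_endio()
is that zone append issuers read the actual write position from the
completed BIO. A minimal, hypothetical consumer sketch follows;
my_append_endio() and note_write_position() are illustrative names, not
kernel APIs:

	static void my_append_endio(struct bio *bio)
	{
		if (!bio->bi_status) {
			/*
			 * For REQ_OP_ZONE_APPEND, bi_sector now holds the
			 * sector the device actually wrote to, set by
			 * blk_update_request() just before bio_endio().
			 */
			note_write_position(bio->bi_iter.bi_sector);
		}
		bio_put(bio);
	}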