
[v15,03/19] block: Preserve the order of requeued zoned writes

Message ID 20231114211804.1449162-4-bvanassche@acm.org (mailing list archive)
State New, archived
Series Improve write performance for zoned UFS devices

Commit Message

Bart Van Assche Nov. 14, 2023, 9:16 p.m. UTC
blk_mq_requeue_work() inserts requeued requests in front of other
requests. This is fine for all request types except sequential zoned
writes, which must be dispatched in LBA order. Hence this patch, which
no longer inserts requeued sequential zoned writes at the head of the
queue.
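
To illustrate the ordering issue, here is a toy user-space sketch (not
kernel code and not part of this patch): three requeued sequential zoned
writes with LBAs 8, 16 and 24 end up reversed when each is inserted at
the head of the dispatch list, while tail insertion preserves the
submission order that the zone write pointer requires.

/*
 * Toy model of the requeue insertion order. Head insertion reverses the
 * relative order of the requeued writes; tail insertion preserves it.
 */
#include <stdio.h>
#include <string.h>

#define MAX_RQS 8

struct dispatch_list {
	unsigned int lba[MAX_RQS];
	int count;
};

/* Models BLK_MQ_INSERT_AT_HEAD: the newest request is dispatched first. */
static void insert_at_head(struct dispatch_list *dl, unsigned int lba)
{
	memmove(&dl->lba[1], &dl->lba[0], dl->count * sizeof(dl->lba[0]));
	dl->lba[0] = lba;
	dl->count++;
}

/* Models insertion without BLK_MQ_INSERT_AT_HEAD: FIFO order is kept. */
static void insert_at_tail(struct dispatch_list *dl, unsigned int lba)
{
	dl->lba[dl->count++] = lba;
}

static void show(const char *name, const struct dispatch_list *dl)
{
	printf("%s dispatch order:", name);
	for (int i = 0; i < dl->count; i++)
		printf(" %u", dl->lba[i]);
	printf("\n");
}

int main(void)
{
	const unsigned int requeued[] = { 8, 16, 24 };	/* submission order */
	struct dispatch_list head = { .count = 0 }, tail = { .count = 0 };

	for (int i = 0; i < 3; i++) {
		insert_at_head(&head, requeued[i]);
		insert_at_tail(&tail, requeued[i]);
	}

	show("head insertion", &head);	/* 24 16 8 - out of order */
	show("tail insertion", &tail);	/* 8 16 24 - sequential */
	return 0;
}

Built with any C99 compiler, the first line prints "24 16 8" and the
second "8 16 24".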

Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Ming Lei <ming.lei@redhat.com>
Cc: Hannes Reinecke <hare@suse.de>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 block/blk-mq.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

Patch

diff --git a/block/blk-mq.c b/block/blk-mq.c
index e2d11183f62e..e678edca3fa8 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1484,8 +1484,12 @@  static void blk_mq_requeue_work(struct work_struct *work)
 			list_del_init(&rq->queuelist);
 			blk_mq_request_bypass_insert(rq, 0);
 		} else {
+			blk_insert_t insert_flags = BLK_MQ_INSERT_AT_HEAD;
+
 			list_del_init(&rq->queuelist);
-			blk_mq_insert_request(rq, BLK_MQ_INSERT_AT_HEAD);
+			if (blk_rq_is_seq_zoned_write(rq))
+				insert_flags = 0;
+			blk_mq_insert_request(rq, insert_flags);
 		}
 	}