
[07/12] block: Make it easier to debug zoned write reordering

Message ID 20230407001710.104169-8-bvanassche@acm.org (mailing list archive)
State New, archived
Series Submit zoned writes in order

Commit Message

Bart Van Assche April 7, 2023, 12:17 a.m. UTC
Issue a kernel warning if a sequential zoned write reaches
blk_mq_request_bypass_insert() or __blk_mq_try_issue_directly() while an
I/O scheduler is attached, since both paths bypass the scheduler and
hence could reorder such writes.

Cc: Christoph Hellwig <hch@lst.de>
Cc: Damien Le Moal <damien.lemoal@opensource.wdc.com>
Cc: Ming Lei <ming.lei@redhat.com>
Cc: Mike Snitzer <snitzer@kernel.org>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 block/blk-mq.c | 4 ++++
 1 file changed, 4 insertions(+)
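
blk_rq_is_seq_zoned_write() is introduced by an earlier patch in this series
and its definition is not shown in this message. As a rough sketch of the
assumed semantics, such a helper would return true for writes that must land
at the write pointer of a sequential-write-required zone (zone appends do not
count, since the drive chooses the write location for those). The sketch below
only relies on the existing req_op() and blk_rq_zone_is_seq() helpers:

static inline bool blk_rq_is_seq_zoned_write(struct request *rq)
{
	/*
	 * Sketch under the above assumptions: a regular write or a
	 * write-zeroes request that targets a sequential-write-required
	 * zone must be submitted at the zone write pointer and hence
	 * must not be reordered.
	 */
	switch (req_op(rq)) {
	case REQ_OP_WRITE:
	case REQ_OP_WRITE_ZEROES:
		return blk_rq_zone_is_seq(rq);
	default:
		return false;
	}
}

With that assumption, the warnings added below fire exactly when a write that
must stay ordered skips the I/O scheduler even though one is configured.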

Patch

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 2cf317d49f56..07426dbbe720 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2480,6 +2480,8 @@  void blk_mq_request_bypass_insert(struct request *rq, bool at_head,
 {
 	struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
 
+	WARN_ON_ONCE(rq->q->elevator && blk_rq_is_seq_zoned_write(rq));
+
 	spin_lock(&hctx->lock);
 	if (at_head)
 		list_add(&rq->queuelist, &hctx->dispatch);
@@ -2572,6 +2574,8 @@  static blk_status_t __blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
 	bool run_queue = true;
 	int budget_token;
 
+	WARN_ON_ONCE(q->elevator && blk_rq_is_seq_zoned_write(rq));
+
 	/*
 	 * RCU or SRCU read lock is needed before checking quiesced flag.
 	 *