Message ID | 20230320234905.3832131-4-bvanassche@acm.org
---|---
State | New, archived |
Series | Submit zoned requests in LBA order per zone
On Mon, Mar 20, 2023 at 04:49:05PM -0700, Bart Van Assche wrote:
> When requeuing a request to a zoned block device, preserve the LBA order
> per zone.

What causes this requeue?
On 3/20/23 22:58, Christoph Hellwig wrote:
> On Mon, Mar 20, 2023 at 04:49:05PM -0700, Bart Van Assche wrote:
>> When requeuing a request to a zoned block device, preserve the LBA order
>> per zone.
>
> What causes this requeue?

Hi Christoph,

Two examples of why the SCSI core can decide to requeue a command are a
retryable unit attention or ufshcd_queuecommand() returning
SCSI_MLQUEUE_HOST_BUSY. For example, ufshcd_queuecommand() returns
SCSI_MLQUEUE_HOST_BUSY while clock scaling is in progress (changing the
frequency of the link between host controller and UFS device).

Thanks,

Bart.
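For readers unfamiliar with the .queuecommand contract: a SCSI low-level
driver signals this kind of transient busy condition by returning
SCSI_MLQUEUE_HOST_BUSY from its queuecommand handler, and the SCSI core
then requeues the command. The fragment below is a minimal, hypothetical
sketch of that path, not the real ufshcd code; struct my_host and its
clk_scaling_in_progress flag are invented for illustration, while the
handler signature, shost_priv() and SCSI_MLQUEUE_HOST_BUSY are the real
midlayer interface.

#include <scsi/scsi.h>
#include <scsi/scsi_cmnd.h>
#include <scsi/scsi_host.h>

/* Hypothetical low-level driver state; not the real ufshcd structures. */
struct my_host {
	bool clk_scaling_in_progress;
};

static int my_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *cmd)
{
	struct my_host *host = shost_priv(shost);

	/*
	 * Refuse new commands while the link frequency is being changed.
	 * The SCSI core responds to SCSI_MLQUEUE_HOST_BUSY by requeuing
	 * the command, which is one way a request ends up on the blk-mq
	 * requeue list.
	 */
	if (host->clk_scaling_in_progress)
		return SCSI_MLQUEUE_HOST_BUSY;

	/* ... hand the command to the hardware here ... */
	return 0;
}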
On Tue, Mar 21, 2023 at 07:46:51AM -0700, Bart Van Assche wrote:
> On 3/20/23 22:58, Christoph Hellwig wrote:
>> On Mon, Mar 20, 2023 at 04:49:05PM -0700, Bart Van Assche wrote:
>>> When requeuing a request to a zoned block device, preserve the LBA order
>>> per zone.
>>
>> What causes this requeue?
>
> Hi Christoph,
>
> Two examples of why the SCSI core can decide to requeue a command are a
> retryable unit attention or ufshcd_queuecommand() returning
> SCSI_MLQUEUE_HOST_BUSY. For example, ufshcd_queuecommand() returns
> SCSI_MLQUEUE_HOST_BUSY while clock scaling is in progress (changing the
> frequency of the link between host controller and UFS device).

None of these should happen as the upper layers enforce a per-zone queue
depth of 1.
diff --git a/block/blk-mq.c b/block/blk-mq.c
index cc32ad0cd548..2ec7d6140114 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1495,6 +1495,44 @@ static void blk_mq_requeue_work(struct work_struct *work)
 	blk_mq_run_hw_queues(q, false);
 }
 
+static void blk_mq_insert_rq(struct request *rq, struct list_head *list,
+			     bool at_head)
+{
+	bool zone_in_list = false;
+	struct request *rq2;
+
+	/*
+	 * For request queues associated with a zoned block device, check
+	 * whether another request for the same zone has already been queued.
+	 */
+	if (blk_queue_is_zoned(rq->q)) {
+		const unsigned int zno = blk_rq_zone_no(rq);
+
+		list_for_each_entry(rq2, list, queuelist) {
+			if (blk_rq_zone_no(rq2) == zno) {
+				zone_in_list = true;
+				if (blk_rq_pos(rq) < blk_rq_pos(rq2))
+					break;
+			}
+		}
+	}
+	if (!zone_in_list) {
+		if (at_head) {
+			rq->rq_flags |= RQF_SOFTBARRIER;
+			list_add(&rq->queuelist, list);
+		} else {
+			list_add_tail(&rq->queuelist, list);
+		}
+	} else {
+		/*
+		 * Insert the request in the list before another request for
+		 * the same zone and with a higher LBA. If there is no such
+		 * request, insert the request at the end of the list.
+		 */
+		list_add_tail(&rq->queuelist, &rq2->queuelist);
+	}
+}
+
 void blk_mq_add_to_requeue_list(struct request *rq, bool at_head,
 				bool kick_requeue_list)
 {
@@ -1508,12 +1546,7 @@ void blk_mq_add_to_requeue_list(struct request *rq, bool at_head,
 	BUG_ON(rq->rq_flags & RQF_SOFTBARRIER);
 
 	spin_lock_irqsave(&q->requeue_lock, flags);
-	if (at_head) {
-		rq->rq_flags |= RQF_SOFTBARRIER;
-		list_add(&rq->queuelist, &q->requeue_list);
-	} else {
-		list_add_tail(&rq->queuelist, &q->requeue_list);
-	}
+	blk_mq_insert_rq(rq, &q->requeue_list, at_head);
	spin_unlock_irqrestore(&q->requeue_lock, flags);
 
 	if (kick_requeue_list)
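One subtlety worth noting: list_add_tail(new, entry) links new immediately
in front of entry, so passing &rq2->queuelist above inserts the requeued
request right before the first same-zone request with a higher LBA, and at
the tail when no such request exists (per the comment in the patch). The
user-space sketch below only illustrates that insert-before-first-higher
ordering; the node type and LBA values are invented, and the per-zone
filtering of the real code is dropped to keep it short.

/* User-space sketch only; not kernel code. */
#include <stdio.h>
#include <stdlib.h>

struct node {
	unsigned long lba;
	struct node *next;
};

/* Insert @lba in front of the first existing entry with a higher LBA. */
static void insert_ordered(struct node **head, unsigned long lba)
{
	struct node **pos = head;
	struct node *n = malloc(sizeof(*n));

	n->lba = lba;
	while (*pos && (*pos)->lba <= lba)
		pos = &(*pos)->next;
	n->next = *pos;
	*pos = n;
}

int main(void)
{
	struct node *head = NULL, *n;

	/* Requeuing in the order 100, 300, 200 ... */
	insert_ordered(&head, 100);
	insert_ordered(&head, 300);
	insert_ordered(&head, 200);
	/* ... still walks the list in LBA order: 100 200 300. */
	for (n = head; n; n = n->next)
		printf("%lu ", n->lba);
	printf("\n");
	return 0;
}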
When requeuing a request to a zoned block device, preserve the LBA order
per zone.

Cc: Christoph Hellwig <hch@lst.de>
Cc: Ming Lei <ming.lei@redhat.com>
Cc: Damien Le Moal <damien.lemoal@opensource.wdc.com>
Cc: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 block/blk-mq.c | 45 +++++++++++++++++++++++++++++++++++++++------
 1 file changed, 39 insertions(+), 6 deletions(-)