Message ID: 20230418224002.1195163-10-bvanassche@acm.org
State: New, archived
Series: mq-deadline: Improve support for zoned block devices
> deadline_add_rq_rb(struct dd_per_prio *per_prio, struct request *rq)
> {
> 	struct rb_root *root = deadline_rb_root(per_prio, rq);
> +	struct request **next_rq __maybe_unused;
>
> 	elv_rb_add(root, rq);
> +#ifdef CONFIG_BLK_DEV_ZONED
> +	next_rq = &per_prio->next_rq[rq_data_dir(rq)];
> +	if (*next_rq == NULL || !blk_queue_is_zoned(rq->q))
> +		return;
> +	/*
> +	 * If a request got requeued or requests have been submitted out of
> +	 * order, make sure that per zone the request with the lowest LBA is
> +	 * submitted first.
> +	 */
> +	if (blk_rq_pos(rq) < blk_rq_pos(*next_rq) &&
> +	    blk_rq_zone_no(rq) == blk_rq_zone_no(*next_rq))
> +		*next_rq = rq;
> +#endif

Please move this into a helper that is only called when blk_queue_is_zoned
is true.
On 4/18/23 22:07, Christoph Hellwig wrote:
>> deadline_add_rq_rb(struct dd_per_prio *per_prio, struct request *rq)
>> {
>> 	struct rb_root *root = deadline_rb_root(per_prio, rq);
>> +	struct request **next_rq __maybe_unused;
>>
>> 	elv_rb_add(root, rq);
>> +#ifdef CONFIG_BLK_DEV_ZONED
>> +	next_rq = &per_prio->next_rq[rq_data_dir(rq)];
>> +	if (*next_rq == NULL || !blk_queue_is_zoned(rq->q))
>> +		return;
>> +	/*
>> +	 * If a request got requeued or requests have been submitted out of
>> +	 * order, make sure that per zone the request with the lowest LBA is
>> +	 * submitted first.
>> +	 */
>> +	if (blk_rq_pos(rq) < blk_rq_pos(*next_rq) &&
>> +	    blk_rq_zone_no(rq) == blk_rq_zone_no(*next_rq))
>> +		*next_rq = rq;
>> +#endif
>
> Please move this into a helper that is only called when blk_queue_is_zoned
> is true.

Hi Christoph,

I'm looking into an alternative, namely to remove the next_rq member
from struct dd_per_prio and instead to do the following:
* Track the offset (blk_rq_pos()) of the most recently dispatched
  request ("latest_pos").
* Where the next_rq member is read, look up the request that comes after
  latest_pos in the RB-tree. This should require an effort similar to
  that of updating next_rq after having dispatched a request.

With this approach the code quoted above that is surrounded with
#ifdef/#endif will disappear.

Bart.
On 4/20/23 08:01, Bart Van Assche wrote:
> On 4/18/23 22:07, Christoph Hellwig wrote:
>>> deadline_add_rq_rb(struct dd_per_prio *per_prio, struct request *rq)
>>> {
>>> 	struct rb_root *root = deadline_rb_root(per_prio, rq);
>>> +	struct request **next_rq __maybe_unused;
>>>
>>> 	elv_rb_add(root, rq);
>>> +#ifdef CONFIG_BLK_DEV_ZONED
>>> +	next_rq = &per_prio->next_rq[rq_data_dir(rq)];
>>> +	if (*next_rq == NULL || !blk_queue_is_zoned(rq->q))
>>> +		return;
>>> +	/*
>>> +	 * If a request got requeued or requests have been submitted out of
>>> +	 * order, make sure that per zone the request with the lowest LBA is
>>> +	 * submitted first.
>>> +	 */
>>> +	if (blk_rq_pos(rq) < blk_rq_pos(*next_rq) &&
>>> +	    blk_rq_zone_no(rq) == blk_rq_zone_no(*next_rq))
>>> +		*next_rq = rq;
>>> +#endif
>>
>> Please move this into a helper that is only called when blk_queue_is_zoned
>> is true.
> Hi Christoph,
>
> I'm looking into an alternative, namely to remove the next_rq member
> from struct dd_per_prio and instead to do the following:
> * Track the offset (blk_rq_pos()) of the most recently dispatched
>   request ("latest_pos").
> * Where the next_rq member is read, look up the request that comes after
>   latest_pos in the RB-tree. This should require an effort similar to
>   that of updating next_rq after having dispatched a request.
>
> With this approach the code quoted above that is surrounded with
> #ifdef/#endif will disappear.

This sounds much better, given that in practice there are lots of cases
where next_rq is set to NULL and we end up getting the next request from
the head of the fifo list. At least that is what I saw the last time I
looked at this (when I wrote the patch to skip sequential writes spanning
two zones).

>
> Bart.
diff --git a/block/mq-deadline.c b/block/mq-deadline.c
index 32a2cc013ed3..4d2bfb3898b0 100644
--- a/block/mq-deadline.c
+++ b/block/mq-deadline.c
@@ -160,8 +160,22 @@ static void
 deadline_add_rq_rb(struct dd_per_prio *per_prio, struct request *rq)
 {
 	struct rb_root *root = deadline_rb_root(per_prio, rq);
+	struct request **next_rq __maybe_unused;
 
 	elv_rb_add(root, rq);
+#ifdef CONFIG_BLK_DEV_ZONED
+	next_rq = &per_prio->next_rq[rq_data_dir(rq)];
+	if (*next_rq == NULL || !blk_queue_is_zoned(rq->q))
+		return;
+	/*
+	 * If a request got requeued or requests have been submitted out of
+	 * order, make sure that per zone the request with the lowest LBA is
+	 * submitted first.
+	 */
+	if (blk_rq_pos(rq) < blk_rq_pos(*next_rq) &&
+	    blk_rq_zone_no(rq) == blk_rq_zone_no(*next_rq))
+		*next_rq = rq;
+#endif
 }
 
 static inline void
@@ -816,6 +830,8 @@ static void dd_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
 		list_add(&rq->queuelist, &per_prio->dispatch);
 		rq->fifo_time = jiffies;
 	} else {
+		struct list_head *insert_before;
+
 		deadline_add_rq_rb(per_prio, rq);
 
 		if (rq_mergeable(rq)) {
@@ -828,7 +844,16 @@ static void dd_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
 		/*
 		 * set expire time and add to fifo list
 		 */
 		rq->fifo_time = jiffies + dd->fifo_expire[data_dir];
-		list_add_tail(&rq->queuelist, &per_prio->fifo_list[data_dir]);
+		insert_before = &per_prio->fifo_list[data_dir];
+#ifdef CONFIG_BLK_DEV_ZONED
+		if (blk_rq_is_seq_zoned_write(rq)) {
+			struct request *rq2 = deadline_latter_request(rq);
+
+			if (rq2 && blk_rq_zone_no(rq2) == blk_rq_zone_no(rq))
+				insert_before = &rq2->queuelist;
+		}
+#endif
+		list_add_tail(&rq->queuelist, insert_before);
 	}
 }
If a zoned write is requeued with an LBA that is lower than already
inserted zoned writes, make sure that it is submitted first.

Cc: Damien Le Moal <damien.lemoal@opensource.wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 block/mq-deadline.c | 27 ++++++++++++++++++++++++++-
 1 file changed, 26 insertions(+), 1 deletion(-)