| Message ID | 20240403212354.523925-2-bvanassche@acm.org (mailing list archive) |
|---|---|
| State | New, archived |
| Series | Fix the mq-deadline async_depth implementation |
Calling limit_depth with the hctx set makes sense, but the way it's done
looks odd. Why not something like this?

diff --git a/block/blk-mq.c b/block/blk-mq.c
index b8dbfed8b28be1..88886fd93b1a9c 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -448,6 +448,10 @@ static struct request *__blk_mq_alloc_requests(struct blk_mq_alloc_data *data)
 	if (data->cmd_flags & REQ_NOWAIT)
 		data->flags |= BLK_MQ_REQ_NOWAIT;
 
+retry:
+	data->ctx = blk_mq_get_ctx(q);
+	data->hctx = blk_mq_map_queue(q, data->cmd_flags, data->ctx);
+
 	if (q->elevator) {
 		/*
 		 * All requests use scheduler tags when an I/O scheduler is
@@ -469,13 +473,9 @@ static struct request *__blk_mq_alloc_requests(struct blk_mq_alloc_data *data)
 			if (ops->limit_depth)
 				ops->limit_depth(data->cmd_flags, data);
 		}
-	}
-
-retry:
-	data->ctx = blk_mq_get_ctx(q);
-	data->hctx = blk_mq_map_queue(q, data->cmd_flags, data->ctx);
-	if (!(data->rq_flags & RQF_SCHED_TAGS))
+	} else {
 		blk_mq_tag_busy(data->hctx);
+	}
 
 	if (data->flags & BLK_MQ_REQ_RESERVED)
 		data->rq_flags |= RQF_RESV;
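For context on what .limit_depth() does with this information: mq-deadline throttles asynchronous requests by capping data->shallow_depth. A simplified sketch of such a hook, modeled on mq-deadline's dd_limit_depth(), follows; this is an illustration from memory, not verbatim kernel code:

/*
 * Simplified sketch of an elevator .limit_depth() hook, modeled on
 * mq-deadline's dd_limit_depth(). Illustration only.
 */
static void dd_limit_depth(blk_opf_t opf, struct blk_mq_alloc_data *data)
{
	struct deadline_data *dd = data->q->elevator->elevator_data;

	/* Do not throttle synchronous reads. */
	if (op_is_sync(opf) && !op_is_write(opf))
		return;

	/*
	 * Cap the tag allocation depth for async requests and writes so
	 * that they cannot starve synchronous request allocation.
	 */
	data->shallow_depth = dd->async_depth;
}

With Christoph's reordering, a hook like this runs only after data->hctx has been assigned, so it could also consult the per-hctx scheduler tags when computing the cap.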
On 4/5/24 01:46, Christoph Hellwig wrote:
> Calling limit_depth with the hctx set makes sense, but the way it's done
> looks odd. Why not something like this?
>
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index b8dbfed8b28be1..88886fd93b1a9c 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -448,6 +448,10 @@ static struct request *__blk_mq_alloc_requests(struct blk_mq_alloc_data *data)
>  	if (data->cmd_flags & REQ_NOWAIT)
>  		data->flags |= BLK_MQ_REQ_NOWAIT;
>
> +retry:
> +	data->ctx = blk_mq_get_ctx(q);
> +	data->hctx = blk_mq_map_queue(q, data->cmd_flags, data->ctx);
> +
>  	if (q->elevator) {
>  		/*
>  		 * All requests use scheduler tags when an I/O scheduler is
> @@ -469,13 +473,9 @@ static struct request *__blk_mq_alloc_requests(struct blk_mq_alloc_data *data)
>  			if (ops->limit_depth)
>  				ops->limit_depth(data->cmd_flags, data);
>  		}
> -	}
> -
> -retry:
> -	data->ctx = blk_mq_get_ctx(q);
> -	data->hctx = blk_mq_map_queue(q, data->cmd_flags, data->ctx);
> -	if (!(data->rq_flags & RQF_SCHED_TAGS))
> +	} else {
>  		blk_mq_tag_busy(data->hctx);
> +	}
>
>  	if (data->flags & BLK_MQ_REQ_RESERVED)
>  		data->rq_flags |= RQF_RESV;

Hi Christoph,

The above patch looks good to me and I'm fine with replacing patch 1/2 with it. Do you want me to add your Signed-off-by to it?

Thanks,

Bart.
Hi Jens Axboe,

Do you have any comments about this patch set from Bart Van Assche? We are hitting the "warning issue" about async_depth; more detailed info is in:
https://lore.kernel.org/all/CAHJ8P3KEOC_DXQmZK3u7PHgZFmWpMVzPa6pgkOgpyoH7wgT5nw@mail.gmail.com/
Please consider this series; it solves the above warning issue.

Thanks.

Attached below is the corresponding message from Bart Van Assche:
https://lore.kernel.org/all/20240403212354.523925-1-bvanassche@acm.org/#R

-----Original Message-----
From: Bart Van Assche <bvanassche@acm.org>
Sent: April 6, 2024 4:05
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>; linux-block@vger.kernel.org; Damien Le Moal <dlemoal@kernel.org>; Zhiguo Niu <Zhiguo.Niu@unisoc.com>
Subject: Re: [PATCH 1/2] block: Call .limit_depth() after .hctx has been set

On 4/5/24 01:46, Christoph Hellwig wrote:
> Calling limit_depth with the hctx set makes sense, but the way it's
> done looks odd. Why not something like this?
>
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index b8dbfed8b28be1..88886fd93b1a9c 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -448,6 +448,10 @@ static struct request *__blk_mq_alloc_requests(struct blk_mq_alloc_data *data)
>  	if (data->cmd_flags & REQ_NOWAIT)
>  		data->flags |= BLK_MQ_REQ_NOWAIT;
>
> +retry:
> +	data->ctx = blk_mq_get_ctx(q);
> +	data->hctx = blk_mq_map_queue(q, data->cmd_flags, data->ctx);
> +
>  	if (q->elevator) {
>  		/*
>  		 * All requests use scheduler tags when an I/O scheduler is
> @@ -469,13 +473,9 @@ static struct request *__blk_mq_alloc_requests(struct blk_mq_alloc_data *data)
>  			if (ops->limit_depth)
>  				ops->limit_depth(data->cmd_flags, data);
>  		}
> -	}
> -
> -retry:
> -	data->ctx = blk_mq_get_ctx(q);
> -	data->hctx = blk_mq_map_queue(q, data->cmd_flags, data->ctx);
> -	if (!(data->rq_flags & RQF_SCHED_TAGS))
> +	} else {
>  		blk_mq_tag_busy(data->hctx);
> +	}
>
>  	if (data->flags & BLK_MQ_REQ_RESERVED)
>  		data->rq_flags |= RQF_RESV;

Hi Christoph,

The above patch looks good to me and I'm fine with replacing patch 1/2 with it. Do you want me to add your Signed-off-by to it?

Thanks,

Bart.
On 5/7/24 19:28, Zhiguo Niu wrote:
> Do you have any comments about this patch set from Bart Van Assche?
> We are hitting the "warning issue" about async_depth; more detailed
> info is in:
> https://lore.kernel.org/all/CAHJ8P3KEOC_DXQmZK3u7PHgZFmWpMVzPa6pgkOgpyoH7wgT5nw@mail.gmail.com/
> Please consider this series; it solves the above warning issue.

Since Christoph posted a comment, I think it's up to me to address it. I plan to repost this patch series next week; I'm currently out of the office.

Thanks,

Bart.
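For readers tracking down the "warning issue" itself: data->shallow_depth feeds sbitmap's shallow allocation path, which sanity-checks the requested depth against the minimum the scheduler registered earlier. The check looks roughly like the following, paraphrased from lib/sbitmap.c in recent kernels; consult the actual source for the authoritative version:

/*
 * Paraphrased from lib/sbitmap.c, for context only. The WARN_ON_ONCE()
 * fires when a caller passes a shallow depth below the minimum that was
 * registered via sbitmap_queue_min_shallow_depth().
 */
int __sbitmap_queue_get_shallow(struct sbitmap_queue *sbq,
				unsigned int shallow_depth)
{
	WARN_ON_ONCE(shallow_depth < sbq->min_shallow_depth);

	return sbitmap_get_shallow(&sbq->sb, shallow_depth);
}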
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 34060d885c5a..bcaa722896a0 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -434,6 +434,7 @@ __blk_mq_alloc_requests_batch(struct blk_mq_alloc_data *data)
 
 static struct request *__blk_mq_alloc_requests(struct blk_mq_alloc_data *data)
 {
+	void (*limit_depth)(blk_opf_t, struct blk_mq_alloc_data *) = NULL;
 	struct request_queue *q = data->q;
 	u64 alloc_time_ns = 0;
 	struct request *rq;
@@ -459,13 +460,11 @@ static struct request *__blk_mq_alloc_requests(struct blk_mq_alloc_data *data)
 		 */
 		if ((data->cmd_flags & REQ_OP_MASK) != REQ_OP_FLUSH &&
 		    !blk_op_is_passthrough(data->cmd_flags)) {
-			struct elevator_mq_ops *ops = &q->elevator->type->ops;
+			limit_depth = q->elevator->type->ops.limit_depth;
 
 			WARN_ON_ONCE(data->flags & BLK_MQ_REQ_RESERVED);
 
 			data->rq_flags |= RQF_USE_SCHED;
-			if (ops->limit_depth)
-				ops->limit_depth(data->cmd_flags, data);
 		}
 	}
 
@@ -478,6 +477,9 @@ static struct request *__blk_mq_alloc_requests(struct blk_mq_alloc_data *data)
 	if (data->flags & BLK_MQ_REQ_RESERVED)
 		data->rq_flags |= RQF_RESV;
 
+	if (limit_depth)
+		limit_depth(data->cmd_flags, data);
+
 	/*
 	 * Try batched alloc if we want more than 1 tag.
 	 */
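The point of this reordering is that .limit_depth() now executes with data->hctx valid. A hypothetical hook could then size its limit from the per-hctx scheduler tag map instead of a cached queue-wide value; a sketch under that assumption follows (example_limit_depth() and the 25% policy are invented for illustration, and the tag-map field layout follows recent kernels):

/*
 * Hypothetical .limit_depth() sketch that depends on data->hctx being
 * set before the hook is invoked, which is what this patch guarantees.
 * Not actual kernel code.
 */
static void example_limit_depth(blk_opf_t opf, struct blk_mq_alloc_data *data)
{
	/* Scheduler tags hang off the hctx, so data->hctx must be valid. */
	struct blk_mq_tags *tags = data->hctx->sched_tags;
	unsigned int full_depth = tags->bitmap_tags.sb.depth;

	/* Throttle async requests and writes to ~25% of the tag space. */
	if (!op_is_sync(opf) || op_is_write(opf))
		data->shallow_depth = max(1U, full_depth / 4);
}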
Call .limit_depth() after data->hctx has been set such that data->hctx
can be used in .limit_depth() implementations.

Cc: Christoph Hellwig <hch@lst.de>
Cc: Damien Le Moal <dlemoal@kernel.org>
Cc: Zhiguo Niu <zhiguo.niu@unisoc.com>
Fixes: 07757588e507 ("block/mq-deadline: Reserve 25% of scheduler tags for synchronous requests")
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 block/blk-mq.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)
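For background on the Fixes: tag: commit 07757588e507 introduced the async_depth throttle that this series repairs. From memory, its depth-update hook looks roughly like the sketch below; the bitmap_tags member layout has changed across kernel versions, so treat this as an approximation:

/*
 * Approximate reconstruction of the hook added by commit 07757588e507
 * in block/mq-deadline.c: reserve ~25% of the scheduler tags for
 * synchronous requests by capping async allocations at 75% of
 * q->nr_requests. See the real commit for the authoritative code.
 */
static void dd_depth_updated(struct blk_mq_hw_ctx *hctx)
{
	struct request_queue *q = hctx->queue;
	struct deadline_data *dd = q->elevator->elevator_data;
	struct blk_mq_tags *tags = hctx->sched_tags;

	dd->async_depth = max(1UL, 3 * q->nr_requests / 4);

	sbitmap_queue_min_shallow_depth(&tags->bitmap_tags, dd->async_depth);
}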