Message ID | 1691061162-22898-1-git-send-email-zhiguo.niu@unisoc.com (mailing list archive)
---|---
State | New, archived
Series | block/mq-deadline: use correct way to throttling write requests
On 8/3/23 04:12, Zhiguo Niu wrote:
> diff --git a/block/mq-deadline.c b/block/mq-deadline.c
> index 5839a027e0f0..7e043d4a78f8 100644
> --- a/block/mq-deadline.c
> +++ b/block/mq-deadline.c
> @@ -620,8 +620,9 @@ static void dd_depth_updated(struct blk_mq_hw_ctx *hctx)
>  	struct request_queue *q = hctx->queue;
>  	struct deadline_data *dd = q->elevator->elevator_data;
>  	struct blk_mq_tags *tags = hctx->sched_tags;
> +	unsigned int shift = tags->bitmap_tags.sb.shift;
>
> -	dd->async_depth = max(1UL, 3 * q->nr_requests / 4);
> +	dd->async_depth = max(1U, 3 * (1U << shift) / 4);
>
>  	sbitmap_queue_min_shallow_depth(&tags->bitmap_tags, dd->async_depth);
>  }

Reviewed-by: Bart Van Assche <bvanassche@acm.org>
On Thu, 03 Aug 2023 19:12:42 +0800, Zhiguo Niu wrote:
> The original formula was inaccurate:
> dd->async_depth = max(1UL, 3 * q->nr_requests / 4);
>
> For write requests, when we assign a tags from sched_tags,
> data->shallow_depth will be passed to sbitmap_find_bit,
> see the following code:
>
> [...]

Applied, thanks!

[1/1] block/mq-deadline: use correct way to throttling write requests
      commit: d47f9717e5cfd0dd8c0ba2ecfa47c38d140f1bb6

Best regards,
diff --git a/block/mq-deadline.c b/block/mq-deadline.c
index 5839a027e0f0..7e043d4a78f8 100644
--- a/block/mq-deadline.c
+++ b/block/mq-deadline.c
@@ -620,8 +620,9 @@ static void dd_depth_updated(struct blk_mq_hw_ctx *hctx)
 	struct request_queue *q = hctx->queue;
 	struct deadline_data *dd = q->elevator->elevator_data;
 	struct blk_mq_tags *tags = hctx->sched_tags;
+	unsigned int shift = tags->bitmap_tags.sb.shift;
 
-	dd->async_depth = max(1UL, 3 * q->nr_requests / 4);
+	dd->async_depth = max(1U, 3 * (1U << shift) / 4);
 
 	sbitmap_queue_min_shallow_depth(&tags->bitmap_tags, dd->async_depth);
 }
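As a standalone sketch of the arithmetic the updated hunk performs (a userspace illustration only; the helper name async_depth_from_shift and its plain-integer parameter are invented here and do not exist in the kernel):

#include <stdio.h>

/* Sketch: derive the async depth from the number of bits in one
 * sbitmap word (1 << shift), reserving roughly 25% of each word for
 * synchronous requests, as the new formula does.
 */
static unsigned int async_depth_from_shift(unsigned int sb_shift)
{
	unsigned int word_depth = 1U << sb_shift;
	unsigned int depth = 3 * word_depth / 4;

	return depth ? depth : 1;	/* mirrors max(1U, ...) */
}

int main(void)
{
	/* shift = 5 matches the mmc example in the changelog below. */
	printf("shift 5 -> async_depth %u\n", async_depth_from_shift(5));	/* 24 */
	printf("shift 0 -> async_depth %u\n", async_depth_from_shift(0));	/* 1 */
	return 0;
}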
The original formula was inaccurate:

    dd->async_depth = max(1UL, 3 * q->nr_requests / 4);

For write requests, when we assign tags from sched_tags,
data->shallow_depth is passed to sbitmap_find_bit; see the
following code:

    nr = sbitmap_find_bit_in_word(&sb->map[index],
                min_t(unsigned int, __map_depth(sb, index), depth),
                alloc_hint, wrap);

The smaller of data->shallow_depth and __map_depth(sb, index) is used
as the maximum range when allocating bits.

For an mmc device (one hw queue, deadline I/O scheduler),
q->nr_requests = sched_tags = 128, so with the previous calculation
method dd->async_depth = data->shallow_depth = 96. On a 64-bit
platform with 8 CPUs, sched_tags.bitmap_tags.sb.shift = 5, so
sb.maps[] = 32/32/32/32. Since 32 is smaller than 96, both read and
write I/O can allocate tags over the full range of each word, so the
limit has no throttling effect.

In addition, following the approach of the bfq/kyber I/O schedulers,
the limit ratio is calculated based on sched_tags.bitmap_tags.sb.shift.

With this change, write requests are actually throttled.

Fixes: 07757588e507 ("block/mq-deadline: Reserve 25% of scheduler tags for synchronous requests")
Signed-off-by: Zhiguo Niu <zhiguo.niu@unisoc.com>
---
 block/mq-deadline.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
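To make the numbers above concrete, here is a small userspace sketch (not kernel code; the min_u helper is invented for this example) of why a shallow depth computed from nr_requests stops throttling once it exceeds the per-word depth, while the per-word value from the new formula still does:

#include <stdio.h>

static unsigned int min_u(unsigned int a, unsigned int b)
{
	return a < b ? a : b;
}

int main(void)
{
	unsigned int nr_requests = 128;		/* q->nr_requests in the example */
	unsigned int shift = 5;			/* sb.shift, i.e. 32 bits per word */
	unsigned int word_depth = 1U << shift;

	unsigned int old_depth = 3 * nr_requests / 4;	/* 96, old formula */
	unsigned int new_depth = 3 * word_depth / 4;	/* 24, new formula */

	/* sbitmap_find_bit() caps the search in each word at
	 * min(shallow_depth, word_depth), analogous to the min_t()
	 * call quoted above.
	 */
	printf("old: search %u of %u bits per word (no throttling)\n",
	       min_u(old_depth, word_depth), word_depth);
	printf("new: search %u of %u bits per word (~25%% reserved)\n",
	       min_u(new_depth, word_depth), word_depth);
	return 0;
}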