
[RFC] blk-mq: cut fair share of tag depth

Message ID 20200104102056.1632-1-hdanton@sina.com (mailing list archive)
State New, archived
Series [RFC] blk-mq: cut fair share of tag depth

Commit Message

Hillf Danton Jan. 4, 2020, 10:20 a.m. UTC
Active tag allocators are currently tracked in an attempt to provide each
of them a fair share of the tag depth, but the tracked number of allocators
is bogus because test_bit() and test_and_set_bit() are used to maintain it.
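
For reference, the tracking in question amounts to roughly the sketch
below (paraphrased from blk_mq_tag_busy()/__blk_mq_tag_busy(); the exact
code may differ between kernel versions):

/*
 * An hctx is counted as an active user only the first time it allocates
 * a tag while the TAG_ACTIVE state bit is still clear; __blk_mq_tag_idle()
 * later drops it from the count again once the hctx goes idle.
 */
bool __blk_mq_tag_busy(struct blk_mq_hw_ctx *hctx)
{
	if (!test_bit(BLK_MQ_S_TAG_ACTIVE, &hctx->state) &&
	    !test_and_set_bit(BLK_MQ_S_TAG_ACTIVE, &hctx->state))
		atomic_inc(&hctx->tags->active_queues);

	return true;
}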

Even if that number were not bogus, the result of hctx_may_queue() would
still be incorrect, because the number of tags an hctx has already
allocated is compared against the expected fair share.
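
For illustration only, the check being dropped is equivalent to the
standalone sketch below; may_queue() and its parameters are hypothetical
stand-ins for hctx_may_queue(), hctx->nr_active, bt->sb.depth and
tags->active_queues:

#include <stdbool.h>

/* Model of the fair-share check removed by this patch. */
static bool may_queue(unsigned int nr_active, unsigned int depth,
		      unsigned int users)
{
	unsigned int share;

	if (depth == 1 || users == 0)
		return true;

	/* ceil(depth / users), but allow at least 4 tags per user */
	share = (depth + users - 1) / users;
	if (share < 4)
		share = 4;

	return nr_active < share;
}

With depth = 256 and users = 3, for instance, share = 86, so an hctx is
throttled once it holds 86 tags, no matter how many tags the other two
users actually consume.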

Worse, the per-allocator tag consumption seen when a new allocator arrives
is not an argument strong enough to overturn what was already determined
for the previous allocators, as it is hard to tell how many tags each
allocator actually needs. IOW the tag depth cannot be shared fairly
without picking innocent victims in the dark.
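
For example (hypothetical numbers), with a tag depth of 256 the per-user
share drops from ceil(256/3) = 86 to ceil(256/4) = 64 as soon as a fourth
allocator shows up; an hctx already holding 80 tags is throttled at once,
even if the newcomer only ever needs a handful of tags.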

Therefore, no longer provide a fair share of the tag depth.

Signed-off-by: Hillf Danton <hdanton@sina.com>
---

Patch

--- a/block/blk-mq-tag.c
+++ b/block/blk-mq-tag.c
@@ -56,35 +56,15 @@  void __blk_mq_tag_idle(struct blk_mq_hw_
 	blk_mq_tag_wakeup_all(tags, false);
 }
 
-/*
- * For shared tag users, we track the number of currently active users
- * and attempt to provide a fair share of the tag depth for each of them.
- */
 static inline bool hctx_may_queue(struct blk_mq_hw_ctx *hctx,
 				  struct sbitmap_queue *bt)
 {
-	unsigned int depth, users;
-
 	if (!hctx || !(hctx->flags & BLK_MQ_F_TAG_SHARED))
 		return true;
 	if (!test_bit(BLK_MQ_S_TAG_ACTIVE, &hctx->state))
 		return true;
 
-	/*
-	 * Don't try dividing an ant
-	 */
-	if (bt->sb.depth == 1)
-		return true;
-
-	users = atomic_read(&hctx->tags->active_queues);
-	if (!users)
-		return true;
-
-	/*
-	 * Allow at least some tags
-	 */
-	depth = max((bt->sb.depth + users - 1) / users, 4U);
-	return atomic_read(&hctx->nr_active) < depth;
+	return atomic_read(&hctx->nr_active) < bt->sb.depth;
 }
 
 static int __blk_mq_get_tag(struct blk_mq_alloc_data *data,