From patchwork Sat Apr 16 09:37:49 2022
X-Patchwork-Submitter: Yu Kuai
X-Patchwork-Id: 12815768
From: Yu Kuai
Subject: [PATCH -next v2 1/5] block, bfq: cleanup bfq_weights_tree add/remove apis
Date: Sat, 16 Apr 2022 17:37:49 +0800
Message-ID: <20220416093753.3054696-2-yukuai3@huawei.com>
In-Reply-To: <20220416093753.3054696-1-yukuai3@huawei.com>
References: <20220416093753.3054696-1-yukuai3@huawei.com>
X-Mailing-List: linux-block@vger.kernel.org

These functions already take 'bfqd' as their first parameter, so there
is no need to pass 'bfqd->queue_weights_tree' as a separate parameter.

Signed-off-by: Yu Kuai
Reviewed-by: Jan Kara
---
 block/bfq-iosched.c | 14 +++++++-------
 block/bfq-iosched.h |  7 ++-----
 block/bfq-wf2q.c    | 16 +++++-----------
 3 files changed, 14 insertions(+), 23 deletions(-)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 2e0dd68a3cbe..2deea2d07a1f 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -862,9 +862,9 @@ static bool bfq_asymmetric_scenario(struct bfq_data *bfqd,
  * In most scenarios, the rate at which nodes are created/destroyed
  * should be low too.
  */
-void bfq_weights_tree_add(struct bfq_data *bfqd, struct bfq_queue *bfqq,
-			  struct rb_root_cached *root)
+void bfq_weights_tree_add(struct bfq_data *bfqd, struct bfq_queue *bfqq)
 {
+	struct rb_root_cached *root = &bfqd->queue_weights_tree;
 	struct bfq_entity *entity = &bfqq->entity;
 	struct rb_node **new = &(root->rb_root.rb_node), *parent = NULL;
 	bool leftmost = true;
@@ -936,13 +936,14 @@ void bfq_weights_tree_add(struct bfq_data *bfqd, struct bfq_queue *bfqq,
  * See the comments to the function bfq_weights_tree_add() for considerations
  * about overhead.
  */
-void __bfq_weights_tree_remove(struct bfq_data *bfqd,
-			       struct bfq_queue *bfqq,
-			       struct rb_root_cached *root)
+void __bfq_weights_tree_remove(struct bfq_data *bfqd, struct bfq_queue *bfqq)
 {
+	struct rb_root_cached *root;
+
 	if (!bfqq->weight_counter)
 		return;
 
+	root = &bfqd->queue_weights_tree;
 	bfqq->weight_counter->num_active--;
 	if (bfqq->weight_counter->num_active > 0)
 		goto reset_entity_pointer;
@@ -1004,8 +1005,7 @@ void bfq_weights_tree_remove(struct bfq_data *bfqd,
 	 * has no dispatched request. DO NOT use bfqq after the next
 	 * function invocation.
 	 */
-	__bfq_weights_tree_remove(bfqd, bfqq,
-				  &bfqd->queue_weights_tree);
+	__bfq_weights_tree_remove(bfqd, bfqq);
 }
 
 /*
diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h
index 3b83e3d1c2e5..072099b0c11a 100644
--- a/block/bfq-iosched.h
+++ b/block/bfq-iosched.h
@@ -969,11 +969,8 @@ struct bfq_queue *bic_to_bfqq(struct bfq_io_cq *bic, bool is_sync);
 void bic_set_bfqq(struct bfq_io_cq *bic, struct bfq_queue *bfqq, bool is_sync);
 struct bfq_data *bic_to_bfqd(struct bfq_io_cq *bic);
 void bfq_pos_tree_add_move(struct bfq_data *bfqd, struct bfq_queue *bfqq);
-void bfq_weights_tree_add(struct bfq_data *bfqd, struct bfq_queue *bfqq,
-			  struct rb_root_cached *root);
-void __bfq_weights_tree_remove(struct bfq_data *bfqd,
-			       struct bfq_queue *bfqq,
-			       struct rb_root_cached *root);
+void bfq_weights_tree_add(struct bfq_data *bfqd, struct bfq_queue *bfqq);
+void __bfq_weights_tree_remove(struct bfq_data *bfqd, struct bfq_queue *bfqq);
 void bfq_weights_tree_remove(struct bfq_data *bfqd,
 			     struct bfq_queue *bfqq);
 void bfq_bfqq_expire(struct bfq_data *bfqd, struct bfq_queue *bfqq,
diff --git a/block/bfq-wf2q.c b/block/bfq-wf2q.c
index f8eb340381cf..a1296058c1ec 100644
--- a/block/bfq-wf2q.c
+++ b/block/bfq-wf2q.c
@@ -707,7 +707,6 @@ __bfq_entity_update_weight_prio(struct bfq_service_tree *old_st,
 	struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
 	unsigned int prev_weight, new_weight;
 	struct bfq_data *bfqd = NULL;
-	struct rb_root_cached *root;
 #ifdef CONFIG_BFQ_GROUP_IOSCHED
 	struct bfq_sched_data *sd;
 	struct bfq_group *bfqg;
@@ -770,19 +769,15 @@ __bfq_entity_update_weight_prio(struct bfq_service_tree *old_st,
 	 * queue, remove the entity from its old weight counter (if
 	 * there is a counter associated with the entity).
 	 */
-	if (prev_weight != new_weight && bfqq) {
-		root = &bfqd->queue_weights_tree;
-		__bfq_weights_tree_remove(bfqd, bfqq, root);
-	}
+	if (prev_weight != new_weight && bfqq)
+		__bfq_weights_tree_remove(bfqd, bfqq);
 	entity->weight = new_weight;
 	/*
 	 * Add the entity, if it is not a weight-raised queue,
 	 * to the counter associated with its new weight.
 	 */
-	if (prev_weight != new_weight && bfqq && bfqq->wr_coeff == 1) {
-		/* If we get here, root has been initialized. */
-		bfq_weights_tree_add(bfqd, bfqq, root);
-	}
+	if (prev_weight != new_weight && bfqq && bfqq->wr_coeff == 1)
+		bfq_weights_tree_add(bfqd, bfqq);
 
 	new_st->wsum += entity->weight;
 
@@ -1686,8 +1681,7 @@ void bfq_add_bfqq_busy(struct bfq_data *bfqd, struct bfq_queue *bfqq)
 
 	if (!bfqq->dispatched)
 		if (bfqq->wr_coeff == 1)
-			bfq_weights_tree_add(bfqd, bfqq,
-					     &bfqd->queue_weights_tree);
+			bfq_weights_tree_add(bfqd, bfqq);
 
 	if (bfqq->wr_coeff > 1)
 		bfqd->wr_busy_queues++;

From patchwork Sat Apr 16 09:37:50 2022
X-Patchwork-Submitter: Yu Kuai
X-Patchwork-Id: 12815767
From: Yu Kuai
Subject: [PATCH -next v2 2/5] block, bfq: add fake weight_counter for weight-raised queue
Date: Sat, 16 Apr 2022 17:37:50 +0800
Message-ID: <20220416093753.3054696-3-yukuai3@huawei.com>
In-Reply-To: <20220416093753.3054696-1-yukuai3@huawei.com>
References: <20220416093753.3054696-1-yukuai3@huawei.com>
X-Mailing-List: linux-block@vger.kernel.org

A weight-raised queue is not inserted into weights_tree, which makes it
impossible to track how many queues have pending requests through
weights_tree insertion and removal. This patch adds a fake
weight_counter for weight-raised queues to make that possible.
Signed-off-by: Yu Kuai
---
 block/bfq-iosched.c | 11 +++++++++++
 block/bfq-wf2q.c    |  5 ++---
 2 files changed, 13 insertions(+), 3 deletions(-)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 2deea2d07a1f..a2977c938c70 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -134,6 +134,8 @@
 #include "bfq-iosched.h"
 #include "blk-wbt.h"
 
+#define BFQ_FAKE_WEIGHT_COUNTER ((void *) POISON_INUSE)
+
 #define BFQ_BFQQ_FNS(name)						\
 void bfq_mark_bfqq_##name(struct bfq_queue *bfqq)			\
 {									\
@@ -884,6 +886,12 @@ void bfq_weights_tree_add(struct bfq_data *bfqd, struct bfq_queue *bfqq)
 	if (bfqq->weight_counter)
 		return;
 
+	if (bfqq->wr_coeff != 1) {
+		bfqq->weight_counter = BFQ_FAKE_WEIGHT_COUNTER;
+		bfqq->ref++;
+		return;
+	}
+
 	while (*new) {
 		struct bfq_weight_counter *__counter = container_of(*new,
 						struct bfq_weight_counter,
@@ -943,6 +951,9 @@ void __bfq_weights_tree_remove(struct bfq_data *bfqd, struct bfq_queue *bfqq)
 	if (!bfqq->weight_counter)
 		return;
 
+	if (bfqq->weight_counter == BFQ_FAKE_WEIGHT_COUNTER)
+		goto reset_entity_pointer;
+
 	root = &bfqd->queue_weights_tree;
 	bfqq->weight_counter->num_active--;
 	if (bfqq->weight_counter->num_active > 0)
diff --git a/block/bfq-wf2q.c b/block/bfq-wf2q.c
index a1296058c1ec..ae12c6b2c525 100644
--- a/block/bfq-wf2q.c
+++ b/block/bfq-wf2q.c
@@ -776,7 +776,7 @@ __bfq_entity_update_weight_prio(struct bfq_service_tree *old_st,
 	 * Add the entity, if it is not a weight-raised queue,
 	 * to the counter associated with its new weight.
 	 */
-	if (prev_weight != new_weight && bfqq && bfqq->wr_coeff == 1)
+	if (prev_weight != new_weight && bfqq)
 		bfq_weights_tree_add(bfqd, bfqq);
 
 	new_st->wsum += entity->weight;
@@ -1680,8 +1680,7 @@ void bfq_add_bfqq_busy(struct bfq_data *bfqd, struct bfq_queue *bfqq)
 	bfqd->busy_queues[bfqq->ioprio_class - 1]++;
 
 	if (!bfqq->dispatched)
-		if (bfqq->wr_coeff == 1)
-			bfq_weights_tree_add(bfqd, bfqq);
+		bfq_weights_tree_add(bfqd, bfqq);
 
 	if (bfqq->wr_coeff > 1)
 		bfqd->wr_busy_queues++;

From patchwork Sat Apr 16 09:37:51 2022
X-Patchwork-Submitter: Yu Kuai
X-Patchwork-Id: 12815769
From: Yu Kuai
Subject: [PATCH -next v2 3/5] bfq, block: record how many queues have pending requests in bfq_group
Date: Sat, 16 Apr 2022 17:37:51 +0800
Message-ID: <20220416093753.3054696-4-yukuai3@huawei.com>
In-Reply-To: <20220416093753.3054696-1-yukuai3@huawei.com>
References: <20220416093753.3054696-1-yukuai3@huawei.com>
X-Mailing-List: linux-block@vger.kernel.org

Prepare to refactor the counting of 'num_groups_with_pending_reqs'.

A bfqq is inserted into weights_tree when new I/O is inserted into it,
and removed from weights_tree when all its requests are completed. Thus
weights_tree insertion and removal can be used to track how many queues
have pending requests.
Signed-off-by: Yu Kuai
---
 block/bfq-cgroup.c  |  1 +
 block/bfq-iosched.c | 17 ++++++++++++++++-
 block/bfq-iosched.h |  1 +
 3 files changed, 18 insertions(+), 1 deletion(-)

diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c
index 420eda2589c0..e8b58b2361a6 100644
--- a/block/bfq-cgroup.c
+++ b/block/bfq-cgroup.c
@@ -557,6 +557,7 @@ static void bfq_pd_init(struct blkg_policy_data *pd)
 	 */
 	bfqg->bfqd = bfqd;
 	bfqg->active_entities = 0;
+	bfqg->num_entities_with_pending_reqs = 0;
 	bfqg->rq_pos_tree = RB_ROOT;
 }
 
diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index a2977c938c70..994c6b36a5d5 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -889,7 +889,7 @@ void bfq_weights_tree_add(struct bfq_data *bfqd, struct bfq_queue *bfqq)
 	if (bfqq->wr_coeff != 1) {
 		bfqq->weight_counter = BFQ_FAKE_WEIGHT_COUNTER;
 		bfqq->ref++;
-		return;
+		goto update;
 	}
 
 	while (*new) {
@@ -936,6 +936,14 @@ void bfq_weights_tree_add(struct bfq_data *bfqd, struct bfq_queue *bfqq)
 inc_counter:
 	bfqq->weight_counter->num_active++;
 	bfqq->ref++;
+
+update:
+#ifdef CONFIG_BFQ_GROUP_IOSCHED
+	if (!entity->in_groups_with_pending_reqs) {
+		entity->in_groups_with_pending_reqs = true;
+		bfqq_group(bfqq)->num_entities_with_pending_reqs++;
+	}
+#endif
 }
 
 /*
@@ -963,6 +971,13 @@ void __bfq_weights_tree_remove(struct bfq_data *bfqd, struct bfq_queue *bfqq)
 	kfree(bfqq->weight_counter);
 
 reset_entity_pointer:
+#ifdef CONFIG_BFQ_GROUP_IOSCHED
+	if (bfqq->entity.in_groups_with_pending_reqs) {
+		bfqq->entity.in_groups_with_pending_reqs = false;
+		bfqq_group(bfqq)->num_entities_with_pending_reqs--;
+	}
+#endif
+
 	bfqq->weight_counter = NULL;
 	bfq_put_queue(bfqq);
 }
diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h
index 072099b0c11a..5e1a0ead2b6a 100644
--- a/block/bfq-iosched.h
+++ b/block/bfq-iosched.h
@@ -940,6 +940,7 @@ struct bfq_group {
 	struct bfq_entity *my_entity;
 
 	int active_entities;
+	int num_entities_with_pending_reqs;
 
 	struct rb_root rq_pos_tree;

From patchwork Sat Apr 16 09:37:52 2022
X-Patchwork-Submitter: Yu Kuai
X-Patchwork-Id: 12815770
From: Yu Kuai
Subject: [PATCH -next v2 4/5] block, bfq: refactor the counting of 'num_groups_with_pending_reqs'
Date: Sat, 16 Apr 2022 17:37:52 +0800
Message-ID: <20220416093753.3054696-5-yukuai3@huawei.com>
In-Reply-To: <20220416093753.3054696-1-yukuai3@huawei.com>
References: <20220416093753.3054696-1-yukuai3@huawei.com>
X-Mailing-List: linux-block@vger.kernel.org

Currently, bfq can't handle sync I/O concurrently as long as it is not
issued from the root group, because 'bfqd->num_groups_with_pending_reqs
> 0' is always true in bfq_asymmetric_scenario().

How a bfqg is counted into 'num_groups_with_pending_reqs':

Before this patch:
1) The root group is never counted.
2) A group is counted if it or any of its child groups has pending
   requests.
3) A group stops being counted only when it and all its child groups
   have completed all their requests.

After this patch:
1) The root group is counted.
2) A group is counted if it has pending requests.
3) A group stops being counted when it has completed all its requests.

With this patch, the case in which only one group has pending requests
can be detected, and the next patch uses that to support concurrent
sync I/O in this case.

Signed-off-by: Yu Kuai
---
 block/bfq-iosched.c | 52 ++++++---------------------------------------
 block/bfq-iosched.h | 18 ++++++++--------
 block/bfq-wf2q.c    | 13 ------------
 3 files changed, 15 insertions(+), 68 deletions(-)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 994c6b36a5d5..39abcd95df8e 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -941,7 +941,8 @@ void bfq_weights_tree_add(struct bfq_data *bfqd, struct bfq_queue *bfqq)
 #ifdef CONFIG_BFQ_GROUP_IOSCHED
 	if (!entity->in_groups_with_pending_reqs) {
 		entity->in_groups_with_pending_reqs = true;
-		bfqq_group(bfqq)->num_entities_with_pending_reqs++;
+		if (!(bfqq_group(bfqq)->num_entities_with_pending_reqs++))
+			bfqd->num_groups_with_pending_reqs++;
 	}
 #endif
 }
@@ -974,7 +975,8 @@ void __bfq_weights_tree_remove(struct bfq_data *bfqd, struct bfq_queue *bfqq)
 #ifdef CONFIG_BFQ_GROUP_IOSCHED
 	if (bfqq->entity.in_groups_with_pending_reqs) {
 		bfqq->entity.in_groups_with_pending_reqs = false;
-		bfqq_group(bfqq)->num_entities_with_pending_reqs--;
+		if (!(--bfqq_group(bfqq)->num_entities_with_pending_reqs))
+			bfqd->num_groups_with_pending_reqs--;
 	}
 #endif
 
@@ -989,48 +991,6 @@ void __bfq_weights_tree_remove(struct bfq_data *bfqd, struct bfq_queue *bfqq)
 void bfq_weights_tree_remove(struct bfq_data *bfqd,
 			     struct bfq_queue *bfqq)
 {
-	struct bfq_entity *entity = bfqq->entity.parent;
-
-	for_each_entity(entity) {
-		struct bfq_sched_data *sd = entity->my_sched_data;
-
-		if (sd->next_in_service || sd->in_service_entity) {
-			/*
-			 * entity is still active, because either
-			 * next_in_service or in_service_entity is not
-			 * NULL (see the comments on the definition of
-			 * next_in_service for details on why
-			 * in_service_entity must be checked too).
-			 *
-			 * As a consequence, its parent entities are
-			 * active as well, and thus this loop must
-			 * stop here.
-			 */
-			break;
-		}
-
-		/*
-		 * The decrement of num_groups_with_pending_reqs is
-		 * not performed immediately upon the deactivation of
-		 * entity, but it is delayed to when it also happens
-		 * that the first leaf descendant bfqq of entity gets
-		 * all its pending requests completed. The following
-		 * instructions perform this delayed decrement, if
-		 * needed. See the comments on
-		 * num_groups_with_pending_reqs for details.
-		 */
-		if (entity->in_groups_with_pending_reqs) {
-			entity->in_groups_with_pending_reqs = false;
-			bfqd->num_groups_with_pending_reqs--;
-		}
-	}
-
-	/*
-	 * Next function is invoked last, because it causes bfqq to be
-	 * freed if the following holds: bfqq is not in service and
-	 * has no dispatched request. DO NOT use bfqq after the next
-	 * function invocation.
-	 */
 	__bfq_weights_tree_remove(bfqd, bfqq);
 }
 
@@ -3710,7 +3670,7 @@ static void bfq_dispatch_remove(struct request_queue *q, struct request *rq)
  * group. More precisely, for conditions (i-a) or (i-b) to become
  * false because of such a group, it is not even necessary that the
  * group is (still) active: it is sufficient that, even if the group
- * has become inactive, some of its descendant processes still have
+ * has become inactive, some of its processes still have
 * some request already dispatched but still waiting for
 * completion. In fact, requests have still to be guaranteed their
 * share of the throughput even after being dispatched. In this
@@ -3719,7 +3679,7 @@ static void bfq_dispatch_remove(struct request_queue *q, struct request *rq)
 * happens, the group is not considered in the calculation of whether
 * the scenario is asymmetric, then the group may fail to be
 * guaranteed its fair share of the throughput (basically because
- * idling may not be performed for the descendant processes of the
+ * idling may not be performed for the processes of the
 * group, but it had to be). We address this issue with the following
 * bi-modal behavior, implemented in the function
 * bfq_asymmetric_scenario().
diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h
index 5e1a0ead2b6a..0850ca03e1d5 100644
--- a/block/bfq-iosched.h
+++ b/block/bfq-iosched.h
@@ -495,27 +495,27 @@ struct bfq_data {
 	struct rb_root_cached queue_weights_tree;
 
 	/*
-	 * Number of groups with at least one descendant process that
+	 * Number of groups with at least one process that
 	 * has at least one request waiting for completion. Note that
 	 * this accounts for also requests already dispatched, but not
 	 * yet completed. Therefore this number of groups may differ
 	 * (be larger) than the number of active groups, as a group is
 	 * considered active only if its corresponding entity has
-	 * descendant queues with at least one request queued. This
+	 * queues with at least one request queued. This
 	 * number is used to decide whether a scenario is symmetric.
 	 * For a detailed explanation see comments on the computation
 	 * of the variable asymmetric_scenario in the function
 	 * bfq_better_to_idle().
 	 *
 	 * However, it is hard to compute this number exactly, for
-	 * groups with multiple descendant processes. Consider a group
-	 * that is inactive, i.e., that has no descendant process with
+	 * groups with multiple processes. Consider a group
+	 * that is inactive, i.e., that has no process with
 	 * pending I/O inside BFQ queues. Then suppose that
 	 * num_groups_with_pending_reqs is still accounting for this
-	 * group, because the group has descendant processes with some
+	 * group, because the group has processes with some
 	 * I/O request still in flight. num_groups_with_pending_reqs
 	 * should be decremented when the in-flight request of the
-	 * last descendant process is finally completed (assuming that
+	 * last process is finally completed (assuming that
 	 * nothing else has changed for the group in the meantime, in
 	 * terms of composition of the group and active/inactive state of child
 	 * groups and processes). To accomplish this, an additional
@@ -524,7 +524,7 @@ struct bfq_data {
 	 * we resort to the following tradeoff between simplicity and
 	 * accuracy: for an inactive group that is still counted in
 	 * num_groups_with_pending_reqs, we decrement
-	 * num_groups_with_pending_reqs when the first descendant
+	 * num_groups_with_pending_reqs when the first
 	 * process of the group remains with no request waiting for
 	 * completion.
 	 *
@@ -532,12 +532,12 @@ struct bfq_data {
 	 * carefulness: to avoid multiple decrements, we flag a group,
 	 * more precisely an entity representing a group, as still
 	 * counted in num_groups_with_pending_reqs when it becomes
-	 * inactive. Then, when the first descendant queue of the
+	 * inactive. Then, when the first queue of the
 	 * entity remains with no request waiting for completion,
 	 * num_groups_with_pending_reqs is decremented, and this flag
 	 * is reset. After this flag is reset for the entity,
 	 * num_groups_with_pending_reqs won't be decremented any
-	 * longer in case a new descendant queue of the entity remains
+	 * longer in case a new queue of the entity remains
 	 * with no request waiting for completion.
 	 */
 	unsigned int num_groups_with_pending_reqs;
diff --git a/block/bfq-wf2q.c b/block/bfq-wf2q.c
index ae12c6b2c525..e848d5d2bcdc 100644
--- a/block/bfq-wf2q.c
+++ b/block/bfq-wf2q.c
@@ -979,19 +979,6 @@ static void __bfq_activate_entity(struct bfq_entity *entity,
 		entity->on_st_or_in_serv = true;
 	}
 
-#ifdef CONFIG_BFQ_GROUP_IOSCHED
-	if (!bfq_entity_to_bfqq(entity)) { /* bfq_group */
-		struct bfq_group *bfqg =
-			container_of(entity, struct bfq_group, entity);
-		struct bfq_data *bfqd = bfqg->bfqd;
-
-		if (!entity->in_groups_with_pending_reqs) {
-			entity->in_groups_with_pending_reqs = true;
-			bfqd->num_groups_with_pending_reqs++;
-		}
-	}
-#endif
-
 	bfq_update_fin_time_enqueue(entity, st, backshifted);
 }

From patchwork Sat Apr 16 09:37:53 2022
X-Patchwork-Submitter: Yu Kuai
X-Patchwork-Id: 12815771
From: Yu Kuai
Subject: [PATCH -next v2 5/5] block, bfq: do not idle if only one cgroup is activated
Date: Sat, 16 Apr 2022 17:37:53 +0800
Message-ID: <20220416093753.3054696-6-yukuai3@huawei.com>
In-Reply-To: <20220416093753.3054696-1-yukuai3@huawei.com>
References: <20220416093753.3054696-1-yukuai3@huawei.com>
X-Mailing-List: linux-block@vger.kernel.org

Now that the root group is counted into 'num_groups_with_pending_reqs',
'num_groups_with_pending_reqs > 0' is always true in
bfq_asymmetric_scenario(). Thus change the condition to
'num_groups_with_pending_reqs > 1'.

On the other hand, now that 'num_groups_with_pending_reqs' represents
how many groups have pending requests, this change enables concurrent
sync I/O when only one cgroup is activated.
Signed-off-by: Yu Kuai
---
 block/bfq-iosched.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 39abcd95df8e..7d9f94882f8e 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -846,7 +846,7 @@ static bool bfq_asymmetric_scenario(struct bfq_data *bfqd,
 
 	return varied_queue_weights || multiple_classes_busy
 #ifdef CONFIG_BFQ_GROUP_IOSCHED
-	       || bfqd->num_groups_with_pending_reqs > 0
+	       || bfqd->num_groups_with_pending_reqs > 1
 #endif
 	       ;
 }