From patchwork Sat Nov 27 10:11:24 2021
From: Yu Kuai <yukuai3@huawei.com>
Subject: [PATCH RFC 1/9] block, bfq: add new apis to iterate bfq entities
Date: Sat, 27 Nov 2021 18:11:24 +0800
Message-ID: <20211127101132.486806-2-yukuai3@huawei.com>
In-Reply-To: <20211127101132.486806-1-yukuai3@huawei.com>
References: <20211127101132.486806-1-yukuai3@huawei.com>
X-Mailing-List: linux-block@vger.kernel.org

The old and the new apis are currently the same; this is preparation
for counting the root group into 'num_groups_with_pending_reqs'. The
old apis will be used to iterate including the root group's entity,
and the new apis will be used to iterate excluding the root group's
entity.

Signed-off-by: Yu Kuai
---
 block/bfq-iosched.h | 19 ++++++++++++++++++-
 1 file changed, 18 insertions(+), 1 deletion(-)

diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h
index a73488eec8a4..f5afc80ff11c 100644
--- a/block/bfq-iosched.h
+++ b/block/bfq-iosched.h
@@ -1034,9 +1034,20 @@ extern struct blkcg_policy blkcg_policy_bfq;
 #define for_each_entity_safe(entity, parent) \
 	for (; entity && ({ parent = entity->parent; 1; }); entity = parent)
 
+#define is_root_entity(entity) \
+	(entity->sched_data == NULL)
+
+#define for_each_entity_not_root(entity) \
+	for (; entity && !is_root_entity(entity); entity = entity->parent)
+
+#define for_each_entity_not_root_safe(entity, parent) \
+	for (; entity && !is_root_entity(entity) && \
+	       ({ parent = entity->parent; 1; }); entity = parent)
 #else /* CONFIG_BFQ_GROUP_IOSCHED */
+#define is_root_entity(entity) (false)
+
 /*
- * Next two macros are fake loops when cgroups support is not
+ * Next four macros are fake loops when cgroups support is not
  * enabled. In fact, in such a case, there is only one level to go up
  * (to reach the root group).
  */
@@ -1045,6 +1056,12 @@ extern struct blkcg_policy blkcg_policy_bfq;
 
 #define for_each_entity_safe(entity, parent) \
 	for (parent = NULL; entity ; entity = parent)
+
+#define for_each_entity_not_root(entity) \
+	for (; entity ; entity = NULL)
+
+#define for_each_entity_not_root_safe(entity, parent) \
+	for (parent = NULL; entity ; entity = parent)
 #endif /* CONFIG_BFQ_GROUP_IOSCHED */
 
 struct bfq_group *bfq_bfqq_to_bfqg(struct bfq_queue *bfqq);
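
For illustration, a minimal sketch of how the two families of helpers
differ (hypothetical function, not part of the series; it assumes a
bfqq whose parent chain will, after patch 4, end at the root group's
entity):

static void example_reset_service(struct bfq_queue *bfqq)
{
	struct bfq_entity *entity = &bfqq->entity;

	/*
	 * Stops before the root group: is_root_entity() holds as soon
	 * as entity->sched_data == NULL. for_each_entity() would keep
	 * going and visit the root group's entity as well (once patch
	 * 4 links child entities to it).
	 */
	for_each_entity_not_root(entity)
		entity->service = 0;
}
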
From patchwork Sat Nov 27 10:11:25 2021
From: Yu Kuai <yukuai3@huawei.com>
Subject: [PATCH RFC 2/9] block, bfq: apply new apis where root group is not expected
Date: Sat, 27 Nov 2021 18:11:25 +0800
Message-ID: <20211127101132.486806-3-yukuai3@huawei.com>
In-Reply-To: <20211127101132.486806-1-yukuai3@huawei.com>
References: <20211127101132.486806-1-yukuai3@huawei.com>
X-Mailing-List: linux-block@vger.kernel.org

'entity->sched_data' is set to the parent group's sched_data, and is
therefore NULL for the root group. for_each_entity() is widely used to
access 'entity->sched_data', so apply the new apis wherever the root
group is not expected. The case where the root group is expected will
be handled in the next patch.

Signed-off-by: Yu Kuai
---
 block/bfq-iosched.c |  2 +-
 block/bfq-iosched.h | 22 ++++++++--------------
 block/bfq-wf2q.c    | 10 +++++-----
 3 files changed, 14 insertions(+), 20 deletions(-)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 1ce1a99a7160..3262d062e21f 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -4273,7 +4273,7 @@ void bfq_bfqq_expire(struct bfq_data *bfqd,
 	 * service with the same budget.
 	 */
 	entity = entity->parent;
-	for_each_entity(entity)
+	for_each_entity_not_root(entity)
 		entity->service = 0;
 }
 
diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h
index f5afc80ff11c..ef875b8046e5 100644
--- a/block/bfq-iosched.h
+++ b/block/bfq-iosched.h
@@ -1021,25 +1021,22 @@ extern struct blkcg_policy blkcg_policy_bfq;
 /* - interface of the internal hierarchical B-WF2Q+ scheduler - */
 
 #ifdef CONFIG_BFQ_GROUP_IOSCHED
-/* both next loops stop at one of the child entities of the root group */
+/* stop at one of the child entities of the root group */
 #define for_each_entity(entity) \
 	for (; entity ; entity = entity->parent)
 
-/*
- * For each iteration, compute parent in advance, so as to be safe if
- * entity is deallocated during the iteration. Such a deallocation may
- * happen as a consequence of a bfq_put_queue that frees the bfq_queue
- * containing entity.
- */
-#define for_each_entity_safe(entity, parent) \
-	for (; entity && ({ parent = entity->parent; 1; }); entity = parent)
-
 #define is_root_entity(entity) \
 	(entity->sched_data == NULL)
 
 #define for_each_entity_not_root(entity) \
 	for (; entity && !is_root_entity(entity); entity = entity->parent)
 
+/*
+ * For each iteration, compute parent in advance, so as to be safe if
+ * entity is deallocated during the iteration. Such a deallocation may
+ * happen as a consequence of a bfq_put_queue that frees the bfq_queue
+ * containing entity.
+ */
 #define for_each_entity_not_root_safe(entity, parent) \
 	for (; entity && !is_root_entity(entity) && \
 	       ({ parent = entity->parent; 1; }); entity = parent)
@@ -1047,16 +1044,13 @@ extern struct blkcg_policy blkcg_policy_bfq;
 #define is_root_entity(entity) (false)
 
 /*
- * Next four macros are fake loops when cgroups support is not
+ * Next three macros are fake loops when cgroups support is not
  * enabled. In fact, in such a case, there is only one level to go up
  * (to reach the root group).
  */
 #define for_each_entity(entity) \
 	for (; entity ; entity = NULL)
 
-#define for_each_entity_safe(entity, parent) \
-	for (parent = NULL; entity ; entity = parent)
-
 #define for_each_entity_not_root(entity) \
 	for (; entity ; entity = NULL)
 
diff --git a/block/bfq-wf2q.c b/block/bfq-wf2q.c
index b74cc0da118e..67e32481e455 100644
--- a/block/bfq-wf2q.c
+++ b/block/bfq-wf2q.c
@@ -830,7 +830,7 @@ void bfq_bfqq_served(struct bfq_queue *bfqq, int served)
 	bfqq->service_from_wr += served;
 	bfqq->service_from_backlogged += served;
 
-	for_each_entity(entity) {
+	for_each_entity_not_root(entity) {
 		st = bfq_entity_service_tree(entity);
 
 		entity->service += served;
@@ -1216,7 +1216,7 @@ static void bfq_deactivate_entity(struct bfq_entity *entity,
 	struct bfq_sched_data *sd;
 	struct bfq_entity *parent = NULL;
 
-	for_each_entity_safe(entity, parent) {
+	for_each_entity_not_root_safe(entity, parent) {
 		sd = entity->sched_data;
 
 		if (!__bfq_deactivate_entity(entity, ins_into_idle_tree)) {
@@ -1285,7 +1285,7 @@ static void bfq_deactivate_entity(struct bfq_entity *entity,
 	 * is not the case.
 	 */
 	entity = parent;
-	for_each_entity(entity) {
+	for_each_entity_not_root(entity) {
 		/*
 		 * Invoke __bfq_requeue_entity on entity, even if
 		 * already active, to requeue/reposition it in the
@@ -1585,7 +1585,7 @@ struct bfq_queue *bfq_get_next_queue(struct bfq_data *bfqd)
 	 * We can finally update all next-to-serve entities along the
 	 * path from the leaf entity just set in service to the root.
 	 */
-	for_each_entity(entity) {
+	for_each_entity_not_root(entity) {
 		struct bfq_sched_data *sd = entity->sched_data;
 
 		if (!bfq_update_next_in_service(sd, NULL, false))
@@ -1612,7 +1612,7 @@ bool __bfq_bfqd_reset_in_service(struct bfq_data *bfqd)
 	 * execute the final step: reset in_service_entity along the
 	 * path from entity to the root.
 	 */
-	for_each_entity(entity)
+	for_each_entity_not_root(entity)
 		entity->sched_data->in_service_entity = NULL;
 
 	/*
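
To make the motivation concrete, a sketch of the failure mode the
conversion avoids (illustrative only; it assumes the root group's
entity is reachable via entity->parent, which patch 4 makes true):

/* Would dereference a NULL sched_data on the last iteration, because
 * the root group's entity has no parent sched_data:
 */
static void example_would_crash(struct bfq_entity *entity)
{
	for_each_entity(entity)
		entity->sched_data->in_service_entity = NULL;
}

/* The _not_root variant stops before the root entity: */
static void example_safe(struct bfq_entity *entity)
{
	for_each_entity_not_root(entity)
		entity->sched_data->in_service_entity = NULL;
}
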
From patchwork Sat Nov 27 10:11:26 2021
From: Yu Kuai <yukuai3@huawei.com>
Subject: [PATCH RFC 3/9] block, bfq: handle the case when for_each_entity() accesses the root group
Date: Sat, 27 Nov 2021 18:11:26 +0800
Message-ID: <20211127101132.486806-4-yukuai3@huawei.com>
In-Reply-To: <20211127101132.486806-1-yukuai3@huawei.com>
References: <20211127101132.486806-1-yukuai3@huawei.com>
X-Mailing-List: linux-block@vger.kernel.org

Prevent a null-ptr-deref once the root group is counted into
'num_groups_with_pending_reqs': the root entity has no sched_data, so
code reached through for_each_entity() must check 'sched_data' for
NULL before dereferencing it.
Signed-off-by: Yu Kuai
---
 block/bfq-iosched.c |  2 +-
 block/bfq-wf2q.c    | 17 +++++++++++++----
 2 files changed, 14 insertions(+), 5 deletions(-)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 3262d062e21f..47722f931ee3 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -864,7 +864,7 @@ void bfq_weights_tree_remove(struct bfq_data *bfqd,
 	for_each_entity(entity) {
 		struct bfq_sched_data *sd = entity->my_sched_data;
 
-		if (sd->next_in_service || sd->in_service_entity) {
+		if (sd && (sd->next_in_service || sd->in_service_entity)) {
 			/*
 			 * entity is still active, because either
 			 * next_in_service or in_service_entity is not
diff --git a/block/bfq-wf2q.c b/block/bfq-wf2q.c
index 67e32481e455..6693765ff3a0 100644
--- a/block/bfq-wf2q.c
+++ b/block/bfq-wf2q.c
@@ -965,6 +965,13 @@ static void __bfq_activate_entity(struct bfq_entity *entity,
 	bool backshifted = false;
 	unsigned long long min_vstart;
 
+	if (is_root_entity(entity))
+#ifdef CONFIG_BFQ_GROUP_IOSCHED
+		goto update;
+#else
+		return;
+#endif
+
 	/* See comments on bfq_fqq_update_budg_for_activation */
 	if (non_blocking_wait_rq && bfq_gt(st->vtime, entity->finish)) {
 		backshifted = true;
@@ -999,7 +1006,10 @@ static void __bfq_activate_entity(struct bfq_entity *entity,
 		entity->on_st_or_in_serv = true;
 	}
 
+	bfq_update_fin_time_enqueue(entity, st, backshifted);
+
 #ifdef CONFIG_BFQ_GROUP_IOSCHED
+update:
 	if (!bfq_entity_to_bfqq(entity)) { /* bfq_group */
 		struct bfq_group *bfqg =
 			container_of(entity, struct bfq_group, entity);
@@ -1011,8 +1021,6 @@ static void __bfq_activate_entity(struct bfq_entity *entity,
 		}
 	}
 #endif
-
-	bfq_update_fin_time_enqueue(entity, st, backshifted);
 }
 
 /**
@@ -1102,7 +1110,8 @@ static void __bfq_activate_requeue_entity(struct bfq_entity *entity,
 {
 	struct bfq_service_tree *st = bfq_entity_service_tree(entity);
 
-	if (sd->in_service_entity == entity || entity->tree == &st->active)
+	if (sd && (sd->in_service_entity == entity ||
+		   entity->tree == &st->active))
 		/*
 		 * in service or already queued on the active tree,
 		 * requeue or reposition
@@ -1140,7 +1149,7 @@ static void bfq_activate_requeue_entity(struct bfq_entity *entity,
 		sd = entity->sched_data;
 		__bfq_activate_requeue_entity(entity, sd, non_blocking_wait_rq);
 
-		if (!bfq_update_next_in_service(sd, entity, expiration) &&
+		if (sd && !bfq_update_next_in_service(sd, entity, expiration) &&
 		    !requeue)
 			break;
 	}
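
The pattern that the new NULL checks follow can be sketched as below
(hypothetical function; in the patch the checks are inlined at the
call sites shown above):

static void example_walk(struct bfq_entity *entity)
{
	for_each_entity(entity) {
		struct bfq_sched_data *sd = entity->sched_data;

		if (!sd)	/* root entity: nothing to update above it */
			break;
		sd->in_service_entity = NULL;
	}
}
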
From patchwork Sat Nov 27 10:11:27 2021
From: Yu Kuai <yukuai3@huawei.com>
Subject: [PATCH RFC 4/9] block, bfq: count root group into 'num_groups_with_pending_reqs'
Date: Sat, 27 Nov 2021 18:11:27 +0800
Message-ID: <20211127101132.486806-5-yukuai3@huawei.com>
In-Reply-To: <20211127101132.486806-1-yukuai3@huawei.com>
References: <20211127101132.486806-1-yukuai3@huawei.com>
X-Mailing-List: linux-block@vger.kernel.org

The root group is not counted into 'num_groups_with_pending_reqs'
because 'entity->parent' is set to NULL for its child entities, so
for_each_entity() can't reach the root group. This patch sets
'entity->parent' of child entities to the root group's entity. This is
safe now that the previous patches handle the case where
for_each_entity() accesses the root group.

Signed-off-by: Yu Kuai
---
 block/bfq-cgroup.c  | 2 +-
 block/bfq-iosched.h | 3 ++-
 2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c
index 24a5c5329bcd..9c36deda4ed4 100644
--- a/block/bfq-cgroup.c
+++ b/block/bfq-cgroup.c
@@ -436,7 +436,7 @@ void bfq_init_entity(struct bfq_entity *entity, struct bfq_group *bfqg)
 		 */
 		bfqg_and_blkg_get(bfqg);
 	}
-	entity->parent = bfqg->my_entity; /* NULL for root group */
+	entity->parent = &bfqg->entity;
 	entity->sched_data = &bfqg->sched_data;
 }
 
diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h
index ef875b8046e5..515eb2604a37 100644
--- a/block/bfq-iosched.h
+++ b/block/bfq-iosched.h
@@ -1021,13 +1021,14 @@ extern struct blkcg_policy blkcg_policy_bfq;
 /* - interface of the internal hierarchical B-WF2Q+ scheduler - */
 
 #ifdef CONFIG_BFQ_GROUP_IOSCHED
-/* stop at one of the child entities of the root group */
+/* stop at root group */
 #define for_each_entity(entity) \
 	for (; entity ; entity = entity->parent)
 
 #define is_root_entity(entity) \
 	(entity->sched_data == NULL)
 
+/* stop at one of the child entities of the root group */
 #define for_each_entity_not_root(entity) \
 	for (; entity && !is_root_entity(entity); entity = entity->parent)
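
A sketch of the observable effect (hypothetical helper; the cgroup
name /c1 is just an example). For a queue in /c1 the parent chain is
now bfqq->entity -> c1's entity -> root group's entity -> NULL, one
level longer than before:

static int example_depth(struct bfq_queue *bfqq)
{
	struct bfq_entity *entity = &bfqq->entity;
	int depth = 0;

	/* 2 before this patch, 3 after, since the root group's entity
	 * is now included in the walk.
	 */
	for_each_entity(entity)
		depth++;
	return depth;
}
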
From patchwork Sat Nov 27 10:11:28 2021
From: Yu Kuai <yukuai3@huawei.com>
Subject: [PATCH RFC 5/9] block, bfq: do not idle if only one cgroup is activated
Date: Sat, 27 Nov 2021 18:11:28 +0800
Message-ID: <20211127101132.486806-6-yukuai3@huawei.com>
In-Reply-To: <20211127101132.486806-1-yukuai3@huawei.com>
References: <20211127101132.486806-1-yukuai3@huawei.com>
X-Mailing-List: linux-block@vger.kernel.org

Now that the root group is counted into 'num_groups_with_pending_reqs',
'num_groups_with_pending_reqs > 0' is always true in
bfq_asymmetric_scenario() whenever requests are pending. If only one
group is activated, there is no need to guarantee the same share of
the throughput for queues in the same group. Thus change the condition
to 'num_groups_with_pending_reqs > 1'.

Signed-off-by: Yu Kuai
---
 block/bfq-iosched.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 47722f931ee3..63e4e13db92f 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -709,7 +709,7 @@ bfq_pos_tree_add_move(struct bfq_data *bfqd, struct bfq_queue *bfqq)
 * much easier to maintain the needed state:
 * 1) all active queues have the same weight,
 * 2) all active queues belong to the same I/O-priority class,
- * 3) there are no active groups.
+ * 3) there is at most one active group.
 * In particular, the last condition is always true if hierarchical
 * support or the cgroups interface are not enabled, thus no state
 * needs to be maintained in this case.
@@ -741,7 +741,7 @@ static bool bfq_asymmetric_scenario(struct bfq_data *bfqd,
 	return varied_queue_weights || multiple_classes_busy
 #ifdef CONFIG_BFQ_GROUP_IOSCHED
-	       || bfqd->num_groups_with_pending_reqs > 0
+	       || bfqd->num_groups_with_pending_reqs > 1
 #endif
 	       ;
 }
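
Reduced to the group term only, the check now behaves as in this
simplified fragment (not the literal kernel function, which also
considers queue weights and I/O-priority classes):

static bool example_group_asymmetry(struct bfq_data *bfqd)
{
	/* At most one group with pending requests: the scenario is
	 * symmetric as far as groups are concerned, so idling is not
	 * needed to preserve per-group shares.
	 */
	return bfqd->num_groups_with_pending_reqs > 1;
}

In particular, io issued only in the root group no longer forces
idling, since the count is then 1.
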
From patchwork Sat Nov 27 10:11:29 2021
From: Yu Kuai <yukuai3@huawei.com>
Subject: [PATCH RFC 6/9] block, bfq: only count the group that the bfq_queue belongs to
Date: Sat, 27 Nov 2021 18:11:29 +0800
Message-ID: <20211127101132.486806-7-yukuai3@huawei.com>
In-Reply-To: <20211127101132.486806-1-yukuai3@huawei.com>
References: <20211127101132.486806-1-yukuai3@huawei.com>
X-Mailing-List: linux-block@vger.kernel.org

Currently, a group is counted into 'num_groups_with_pending_reqs' once
its child cgroup is activated, even if the group doesn't have any
pending requests itself. For example, if we issue sync io in cgroup
/root/c1/c2, then root, c1 and c2 are all counted into
'num_groups_with_pending_reqs', which makes it impossible to handle
requests concurrently. This patch stops counting a group that doesn't
have any pending requests, even if its child group is activated.

Signed-off-by: Yu Kuai
---
 block/bfq-wf2q.c | 17 ++++++++++++-----
 1 file changed, 12 insertions(+), 5 deletions(-)

diff --git a/block/bfq-wf2q.c b/block/bfq-wf2q.c
index 6693765ff3a0..343cfc8b952e 100644
--- a/block/bfq-wf2q.c
+++ b/block/bfq-wf2q.c
@@ -950,6 +950,8 @@ static void bfq_update_fin_time_enqueue(struct bfq_entity *entity,
 * __bfq_activate_entity - handle activation of entity.
 * @entity: the entity being activated.
 * @non_blocking_wait_rq: true if entity was waiting for a request
+ * @count_group: if entity represents a group, true if the group will be
+ *		 counted in 'num_groups_with_pending_reqs'.
 *
 * Called for a 'true' activation, i.e., if entity is not active and
 * one of its children receives a new request.
@@ -959,7 +961,8 @@ static void bfq_update_fin_time_enqueue(struct bfq_entity *entity,
 * from its idle tree.
 */
 static void __bfq_activate_entity(struct bfq_entity *entity,
-				  bool non_blocking_wait_rq)
+				  bool non_blocking_wait_rq,
+				  bool count_group)
 {
 	struct bfq_service_tree *st = bfq_entity_service_tree(entity);
 	bool backshifted = false;
@@ -1010,7 +1013,7 @@ static void __bfq_activate_entity(struct bfq_entity *entity,
 
 #ifdef CONFIG_BFQ_GROUP_IOSCHED
 update:
-	if (!bfq_entity_to_bfqq(entity)) { /* bfq_group */
+	if (count_group && !bfq_entity_to_bfqq(entity)) { /* bfq_group */
 		struct bfq_group *bfqg =
 			container_of(entity, struct bfq_group, entity);
 		struct bfq_data *bfqd = bfqg->bfqd;
@@ -1106,7 +1109,8 @@ static void __bfq_requeue_entity(struct bfq_entity *entity)
 
 static void __bfq_activate_requeue_entity(struct bfq_entity *entity,
 					  struct bfq_sched_data *sd,
-					  bool non_blocking_wait_rq)
+					  bool non_blocking_wait_rq,
+					  bool count_group)
 {
 	struct bfq_service_tree *st = bfq_entity_service_tree(entity);
 
@@ -1122,7 +1126,8 @@ static void __bfq_activate_requeue_entity(struct bfq_entity *entity,
 	 * Not in service and not queued on its active tree:
 	 * the activity is idle and this is a true activation.
 	 */
-	__bfq_activate_entity(entity, non_blocking_wait_rq);
+	__bfq_activate_entity(entity, non_blocking_wait_rq,
+			      count_group);
 }
 
@@ -1144,10 +1149,12 @@ static void bfq_activate_requeue_entity(struct bfq_entity *entity,
 				       bool requeue, bool expiration)
 {
 	struct bfq_sched_data *sd;
+	int depth = 0;
 
 	for_each_entity(entity) {
 		sd = entity->sched_data;
-		__bfq_activate_requeue_entity(entity, sd, non_blocking_wait_rq);
+		__bfq_activate_requeue_entity(entity, sd, non_blocking_wait_rq,
+					      depth++ == 1);
 
 		if (sd && !bfq_update_next_in_service(sd, entity, expiration) &&
 		    !requeue)
			break;
	}
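
The 'depth++ == 1' argument deserves a note: on the activation walk,
depth 0 is the queue's own entity and depth 1 is the group the queue
directly belongs to, so only that group is counted. A simplified
sketch of the traversal (hypothetical function; pr_info() is only for
illustration):

static void example_activation_walk(struct bfq_entity *entity)
{
	int depth = 0;

	/* for a queue in /c1/c2: depth 0 = bfqq, depth 1 = c2 (counted),
	 * depth 2 = c1, depth 3 = root group (both not counted)
	 */
	for_each_entity(entity) {
		bool count_group = (depth == 1);

		pr_info("depth %d counts group: %d\n", depth, count_group);
		depth++;
	}
}
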
From patchwork Sat Nov 27 10:11:30 2021
From: Yu Kuai <yukuai3@huawei.com>
Subject: [PATCH RFC 7/9] block, bfq: record how many queues have pending requests in bfq_group
Date: Sat, 27 Nov 2021 18:11:30 +0800
Message-ID: <20211127101132.486806-8-yukuai3@huawei.com>
In-Reply-To: <20211127101132.486806-1-yukuai3@huawei.com>
References: <20211127101132.486806-1-yukuai3@huawei.com>
X-Mailing-List: linux-block@vger.kernel.org
<20211127101132.486806-1-yukuai3@huawei.com> References: <20211127101132.486806-1-yukuai3@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.175.127.227] X-ClientProxiedBy: dggems701-chm.china.huawei.com (10.3.19.178) To kwepemm600009.china.huawei.com (7.193.23.164) X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org Prepare to decrease 'num_groups_with_pending_reqs' earlier. Signed-off-by: Yu Kuai --- block/bfq-cgroup.c | 1 + block/bfq-iosched.c | 21 +++++++++++++++++++++ block/bfq-iosched.h | 1 + 3 files changed, 23 insertions(+) diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c index 9c36deda4ed4..d6f7e96ec852 100644 --- a/block/bfq-cgroup.c +++ b/block/bfq-cgroup.c @@ -557,6 +557,7 @@ static void bfq_pd_init(struct blkg_policy_data *pd) */ bfqg->bfqd = bfqd; bfqg->active_entities = 0; + bfqg->num_entities_with_pending_reqs = 0; bfqg->rq_pos_tree = RB_ROOT; } diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c index 63e4e13db92f..e3c31db4bffb 100644 --- a/block/bfq-iosched.c +++ b/block/bfq-iosched.c @@ -825,6 +825,16 @@ void bfq_weights_tree_add(struct bfq_data *bfqd, struct bfq_queue *bfqq, inc_counter: bfqq->weight_counter->num_active++; bfqq->ref++; + +#ifdef CONFIG_BFQ_GROUP_IOSCHED + if (!entity->in_groups_with_pending_reqs) { + struct bfq_group *bfqg = + container_of(entity->parent, struct bfq_group, entity); + + entity->in_groups_with_pending_reqs = true; + bfqg->num_entities_with_pending_reqs++; + } +#endif } /* @@ -841,6 +851,17 @@ void __bfq_weights_tree_remove(struct bfq_data *bfqd, return; bfqq->weight_counter->num_active--; + +#ifdef CONFIG_BFQ_GROUP_IOSCHED + if (bfqq->entity.in_groups_with_pending_reqs) { + struct bfq_group *bfqg = container_of(bfqq->entity.parent, + struct bfq_group, entity); + + bfqq->entity.in_groups_with_pending_reqs = false; + bfqg->num_entities_with_pending_reqs--; + } +#endif + if (bfqq->weight_counter->num_active > 0) goto reset_entity_pointer; diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h index 515eb2604a37..df08bff89a70 100644 --- a/block/bfq-iosched.h +++ b/block/bfq-iosched.h @@ -937,6 +937,7 @@ struct bfq_group { struct bfq_entity *my_entity; int active_entities; + int num_entities_with_pending_reqs; struct rb_root rq_pos_tree; From patchwork Sat Nov 27 10:11:31 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yu Kuai X-Patchwork-Id: 12642197 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3433FC433F5 for ; Sat, 27 Nov 2021 10:01:40 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1354769AbhK0KEv (ORCPT ); Sat, 27 Nov 2021 05:04:51 -0500 Received: from szxga08-in.huawei.com ([45.249.212.255]:28111 "EHLO szxga08-in.huawei.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1354079AbhK0KCm (ORCPT ); Sat, 27 Nov 2021 05:02:42 -0500 Received: from dggemv704-chm.china.huawei.com (unknown [172.30.72.56]) by szxga08-in.huawei.com (SkyGuard) with ESMTP id 4J1Rpq2FLHz1DJNq; Sat, 27 Nov 2021 17:56:51 +0800 (CST) Received: from kwepemm600009.china.huawei.com (7.193.23.164) by dggemv704-chm.china.huawei.com (10.3.19.47) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2308.20; Sat, 27 Nov 2021 17:59:27 +0800 Received: from huawei.com 
From patchwork Sat Nov 27 10:11:31 2021
From: Yu Kuai <yukuai3@huawei.com>
Subject: [PATCH RFC 8/9] block, bfq: move forward __bfq_weights_tree_remove()
Date: Sat, 27 Nov 2021 18:11:31 +0800
Message-ID: <20211127101132.486806-9-yukuai3@huawei.com>
In-Reply-To: <20211127101132.486806-1-yukuai3@huawei.com>
References: <20211127101132.486806-1-yukuai3@huawei.com>
X-Mailing-List: linux-block@vger.kernel.org

Call __bfq_weights_tree_remove() at the beginning of
bfq_weights_tree_remove(), in preparation for decreasing
'num_groups_with_pending_reqs' earlier. Since
__bfq_weights_tree_remove() may free bfqq, take a reference first and
drop it with bfq_put_queue() at the end.

Signed-off-by: Yu Kuai
---
 block/bfq-iosched.c | 13 +++++--------
 1 file changed, 5 insertions(+), 8 deletions(-)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index e3c31db4bffb..4239b3996e23 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -882,6 +882,10 @@ void bfq_weights_tree_remove(struct bfq_data *bfqd,
 {
 	struct bfq_entity *entity = bfqq->entity.parent;
 
+	bfqq->ref++;
+	__bfq_weights_tree_remove(bfqd, bfqq,
+				  &bfqd->queue_weights_tree);
+
 	for_each_entity(entity) {
 		struct bfq_sched_data *sd = entity->my_sched_data;
 
@@ -916,14 +920,7 @@ void bfq_weights_tree_remove(struct bfq_data *bfqd,
 		}
 	}
 
-	/*
-	 * Next function is invoked last, because it causes bfqq to be
-	 * freed if the following holds: bfqq is not in service and
-	 * has no dispatched request. DO NOT use bfqq after the next
-	 * function invocation.
-	 */
-	__bfq_weights_tree_remove(bfqd, bfqq,
-				  &bfqd->queue_weights_tree);
+	bfq_put_queue(bfqq);
 }
 
 /*
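
The ref++/bfq_put_queue() pair around the early removal is the usual
pin-while-in-use pattern, sketched generically below (comments are
illustrative):

static void example_pinned_use(struct bfq_data *bfqd, struct bfq_queue *bfqq)
{
	bfqq->ref++;			/* pin: bfqq stays valid below */
	__bfq_weights_tree_remove(bfqd, bfqq,
				  &bfqd->queue_weights_tree);
	/* ... bfqq may still be used safely here ... */
	bfq_put_queue(bfqq);		/* unpin: may free bfqq */
}
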
From patchwork Sat Nov 27 10:11:32 2021
From: Yu Kuai <yukuai3@huawei.com>
Subject: [PATCH RFC 9/9] block, bfq: decrease 'num_groups_with_pending_reqs' earlier
Date: Sat, 27 Nov 2021 18:11:32 +0800
Message-ID: <20211127101132.486806-10-yukuai3@huawei.com>
In-Reply-To: <20211127101132.486806-1-yukuai3@huawei.com>
References: <20211127101132.486806-1-yukuai3@huawei.com>
X-Mailing-List: linux-block@vger.kernel.org

Currently 'num_groups_with_pending_reqs' is not decreased when a group
no longer has any pending requests of its own, as long as any of its
child groups still has pending requests; the decrement is delayed
until none of the child groups has pending requests either. For
example:

1) t1 issues sync io in the root group, and t2 and t3 issue sync io in
   the same child group. 'num_groups_with_pending_reqs' is 2 now.

2) t1 stops issuing io, but 'num_groups_with_pending_reqs' is still 2,
   so io from t2 and t3 still can't be handled concurrently.

Fix the problem by decreasing 'num_groups_with_pending_reqs'
immediately upon the deactivation of the last entity of the group that
has pending requests.

Signed-off-by: Yu Kuai
---
 block/bfq-iosched.c | 58 ++++++++++++++++-----------------------------
 block/bfq-iosched.h | 16 ++++++-------
 2 files changed, 29 insertions(+), 45 deletions(-)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 4239b3996e23..55925e1ee85d 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -873,6 +873,26 @@ void __bfq_weights_tree_remove(struct bfq_data *bfqd,
 	bfq_put_queue(bfqq);
 }
 
+static void decrease_groups_with_pending_reqs(struct bfq_data *bfqd,
+					      struct bfq_queue *bfqq)
+{
+#ifdef CONFIG_BFQ_GROUP_IOSCHED
+	struct bfq_entity *entity = bfqq->entity.parent;
+	struct bfq_group *bfqg = container_of(entity, struct bfq_group, entity);
+
+	/*
+	 * The decrement of num_groups_with_pending_reqs is performed
+	 * immediately upon the deactivation of the last entity that
+	 * has pending requests.
+	 */
+	if (!bfqg->num_entities_with_pending_reqs &&
+	    entity->in_groups_with_pending_reqs) {
+		entity->in_groups_with_pending_reqs = false;
+		bfqd->num_groups_with_pending_reqs--;
+	}
+#endif
+}
+
 /*
 * Invoke __bfq_weights_tree_remove on bfqq and decrement the number
 * of active groups for each queue's inactive parent entity.
@@ -880,46 +900,10 @@ void __bfq_weights_tree_remove(struct bfq_data *bfqd,
 void bfq_weights_tree_remove(struct bfq_data *bfqd,
 			     struct bfq_queue *bfqq)
 {
-	struct bfq_entity *entity = bfqq->entity.parent;
-
 	bfqq->ref++;
 	__bfq_weights_tree_remove(bfqd, bfqq,
 				  &bfqd->queue_weights_tree);
-
-	for_each_entity(entity) {
-		struct bfq_sched_data *sd = entity->my_sched_data;
-
-		if (sd && (sd->next_in_service || sd->in_service_entity)) {
-			/*
-			 * entity is still active, because either
-			 * next_in_service or in_service_entity is not
-			 * NULL (see the comments on the definition of
-			 * next_in_service for details on why
-			 * in_service_entity must be checked too).
-			 *
-			 * As a consequence, its parent entities are
-			 * active as well, and thus this loop must
-			 * stop here.
-			 */
-			break;
-		}
-
-		/*
-		 * The decrement of num_groups_with_pending_reqs is
-		 * not performed immediately upon the deactivation of
-		 * entity, but it is delayed to when it also happens
-		 * that the first leaf descendant bfqq of entity gets
-		 * all its pending requests completed. The following
-		 * instructions perform this delayed decrement, if
-		 * needed. See the comments on
-		 * num_groups_with_pending_reqs for details.
-		 */
-		if (entity->in_groups_with_pending_reqs) {
-			entity->in_groups_with_pending_reqs = false;
-			bfqd->num_groups_with_pending_reqs--;
-		}
-	}
-
+	decrease_groups_with_pending_reqs(bfqd, bfqq);
 	bfq_put_queue(bfqq);
 }
 
diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h
index df08bff89a70..7ae11f62900b 100644
--- a/block/bfq-iosched.h
+++ b/block/bfq-iosched.h
@@ -493,7 +493,7 @@ struct bfq_data {
 	struct rb_root_cached queue_weights_tree;
 
 	/*
-	 * Number of groups with at least one descendant process that
+	 * Number of groups with at least one process that
 	 * has at least one request waiting for completion. Note that
 	 * this accounts for also requests already dispatched, but not
 	 * yet completed. Therefore this number of groups may differ
@@ -506,14 +506,14 @@ struct bfq_data {
 	 * bfq_better_to_idle().
 	 *
 	 * However, it is hard to compute this number exactly, for
-	 * groups with multiple descendant processes. Consider a group
-	 * that is inactive, i.e., that has no descendant process with
+	 * groups with multiple processes. Consider a group
+	 * that is inactive, i.e., that has no process with
 	 * pending I/O inside BFQ queues. Then suppose that
 	 * num_groups_with_pending_reqs is still accounting for this
-	 * group, because the group has descendant processes with some
+	 * group, because the group has processes with some
 	 * I/O request still in flight. num_groups_with_pending_reqs
 	 * should be decremented when the in-flight request of the
-	 * last descendant process is finally completed (assuming that
+	 * last process is finally completed (assuming that
 	 * nothing else has changed for the group in the meantime, in
 	 * terms of composition of the group and active/inactive state of child
 	 * groups and processes). To accomplish this, an additional
@@ -522,7 +522,7 @@ struct bfq_data {
 	 * we resort to the following tradeoff between simplicity and
 	 * accuracy: for an inactive group that is still counted in
 	 * num_groups_with_pending_reqs, we decrement
-	 * num_groups_with_pending_reqs when the first descendant
+	 * num_groups_with_pending_reqs when the last
 	 * process of the group remains with no request waiting for
 	 * completion.
 	 *
@@ -530,12 +530,12 @@ struct bfq_data {
 	 * carefulness: to avoid multiple decrements, we flag a group,
 	 * more precisely an entity representing a group, as still
 	 * counted in num_groups_with_pending_reqs when it becomes
-	 * inactive. Then, when the first descendant queue of the
+	 * inactive. Then, when the last queue of the
 	 * entity remains with no request waiting for completion,
 	 * num_groups_with_pending_reqs is decremented, and this flag
 	 * is reset. After this flag is reset for the entity,
 	 * num_groups_with_pending_reqs won't be decremented any
-	 * longer in case a new descendant queue of the entity remains
+	 * longer in case a new queue of the entity remains
 	 * with no request waiting for completion.
 	 */
 	unsigned int num_groups_with_pending_reqs;
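
Walking the commit-message example through the reworked accounting
(illustrative trace; group name c1 is hypothetical):

/*
 * t1 issues sync io in the root group:
 *     root.num_entities_with_pending_reqs = 1
 *     num_groups_with_pending_reqs        = 1   (root)
 * t2/t3 issue sync io in child group c1:
 *     c1.num_entities_with_pending_reqs   = 2
 *     num_groups_with_pending_reqs        = 2   (root, c1)
 * t1's last request completes:
 *     root.num_entities_with_pending_reqs = 0
 *     num_groups_with_pending_reqs        = 1   (c1) <- decreased
 *                                                       immediately
 *
 * With only one group pending, bfq_asymmetric_scenario() (patch 5)
 * no longer forces idling, so io from t2 and t3 can be handled
 * concurrently.
 */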