From patchwork Wed May 31 16:34:00 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Waiman Long X-Patchwork-Id: 13262526 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7D637C77B7A for ; Wed, 31 May 2023 16:35:38 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229805AbjEaQfh (ORCPT ); Wed, 31 May 2023 12:35:37 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58172 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229782AbjEaQfg (ORCPT ); Wed, 31 May 2023 12:35:36 -0400 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 714BEE4E for ; Wed, 31 May 2023 09:34:18 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1685550857; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding; bh=hNwDItSa7gah8NTUpgcKB6a8T7nVlWQpfJD91nGKxdk=; b=ZjqX0PbZcsUhCbdzr+FG+6lLgBlpYmnBfCO2pl9zt35zTUHoDYibWyU0UrBMhvEnN4w9Xp j6Y2GP6bqsufArcYij5AtLpE2eq1e89qQu31wTQVa/LTmYIvVKE9CZk6wIItZb59Ro2qeq BVT7P1CLHYrjTyeHEB7WubREoXRRyn8= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-537-puj2kxdNPPKPRc2zfJywhA-1; Wed, 31 May 2023 12:34:12 -0400 X-MC-Unique: puj2kxdNPPKPRc2zfJywhA-1 Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.rdu2.redhat.com [10.11.54.6]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client 
certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 2358C185A7B0; Wed, 31 May 2023 16:34:12 +0000 (UTC) Received: from llong.com (dhcp-17-153.bos.redhat.com [10.18.17.153]) by smtp.corp.redhat.com (Postfix) with ESMTP id 9E35E2166B26; Wed, 31 May 2023 16:34:11 +0000 (UTC) From: Waiman Long To: Tejun Heo , Zefan Li , Johannes Weiner , Jonathan Corbet , Shuah Khan Cc: linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Juri Lelli , Valentin Schneider , Frederic Weisbecker , Mrunal Patel , Ryan Phillips , Brent Rowsell , Peter Hunt , Phil Auld , Waiman Long Subject: [PATCH v2 1/6] cgroup/cpuset: Extract out CS_CPU_EXCLUSIVE & CS_SCHED_LOAD_BALANCE handling Date: Wed, 31 May 2023 12:34:00 -0400 Message-Id: <20230531163405.2200292-2-longman@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.6 Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org Extract out the setting of CS_CPU_EXCLUSIVE and CS_SCHED_LOAD_BALANCE flags as well as the rebuilding of scheduling domains into the new update_partition_exclusive() and update_partition_sd_lb() helper functions to simplify the logic. The update_partition_exclusive() helper is called mainly at the beginning of the caller, but it may be called at the end too. The update_partition_sd_lb() helper is called at the end of the caller. This patch should reduce the chance that cpuset partition will end up in an incorrect state. 
Signed-off-by: Waiman Long --- kernel/cgroup/cpuset.c | 134 ++++++++++++++++++++++++----------------- 1 file changed, 79 insertions(+), 55 deletions(-) diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c index 2c76fcd9f0bc..12a0b583aca4 100644 --- a/kernel/cgroup/cpuset.c +++ b/kernel/cgroup/cpuset.c @@ -1278,7 +1278,7 @@ static void update_tasks_cpumask(struct cpuset *cs, struct cpumask *new_cpus) static void compute_effective_cpumask(struct cpumask *new_cpus, struct cpuset *cs, struct cpuset *parent) { - if (parent->nr_subparts_cpus) { + if (parent->nr_subparts_cpus && is_partition_valid(cs)) { cpumask_or(new_cpus, parent->effective_cpus, parent->subparts_cpus); cpumask_and(new_cpus, new_cpus, cs->cpus_allowed); @@ -1300,6 +1300,43 @@ enum subparts_cmd { static int update_flag(cpuset_flagbits_t bit, struct cpuset *cs, int turning_on); + +/* + * Update partition exclusive flag + * + * Return: 0 if successful, an error code otherwise + */ +static int update_partition_exclusive(struct cpuset *cs, int new_prs) +{ + bool exclusive = (new_prs > 0); + + if (exclusive && !is_cpu_exclusive(cs)) { + if (update_flag(CS_CPU_EXCLUSIVE, cs, 1)) + return PERR_NOTEXCL; + } else if (!exclusive && is_cpu_exclusive(cs)) { + /* Turning off CS_CPU_EXCLUSIVE will not return error */ + update_flag(CS_CPU_EXCLUSIVE, cs, 0); + } + return 0; +} + +/* + * Update partition load balance flag and/or rebuild sched domain + * + * Changing load balance flag will automatically call + * rebuild_sched_domains_locked(). 
+ */ +static void update_partition_sd_lb(struct cpuset *cs, int old_prs) +{ + int new_prs = cs->partition_root_state; + bool new_lb = (new_prs != PRS_ISOLATED); + + if (new_lb != !!is_sched_load_balance(cs)) + update_flag(CS_SCHED_LOAD_BALANCE, cs, new_lb); + else if ((new_prs > 0) || (old_prs > 0)) + rebuild_sched_domains_locked(); +} + /** * update_parent_subparts_cpumask - update subparts_cpus mask of parent cpuset * @cs: The cpuset that requests change in partition root state @@ -1359,8 +1396,7 @@ static int update_parent_subparts_cpumask(struct cpuset *cs, int cmd, return is_partition_invalid(parent) ? PERR_INVPARENT : PERR_NOTPART; } - if ((newmask && cpumask_empty(newmask)) || - (!newmask && cpumask_empty(cs->cpus_allowed))) + if (!newmask && cpumask_empty(cs->cpus_allowed)) return PERR_CPUSEMPTY; /* @@ -1426,11 +1462,16 @@ static int update_parent_subparts_cpumask(struct cpuset *cs, int cmd, cpumask_and(tmp->addmask, newmask, parent->cpus_allowed); adding = cpumask_andnot(tmp->addmask, tmp->addmask, parent->subparts_cpus); + /* + * Empty cpumask is not allewed + */ + if (cpumask_empty(newmask)) { + part_error = PERR_CPUSEMPTY; /* * Make partition invalid if parent's effective_cpus could * become empty and there are tasks in the parent. */ - if (adding && + } else if (adding && cpumask_subset(parent->effective_cpus, tmp->addmask) && !cpumask_intersects(tmp->delmask, cpu_active_mask) && partition_is_populated(parent, cs)) { @@ -1503,14 +1544,13 @@ static int update_parent_subparts_cpumask(struct cpuset *cs, int cmd, /* * Transitioning between invalid to valid or vice versa may require - * changing CS_CPU_EXCLUSIVE and CS_SCHED_LOAD_BALANCE. + * changing CS_CPU_EXCLUSIVE. 
*/ if (old_prs != new_prs) { - if (is_prs_invalid(old_prs) && !is_cpu_exclusive(cs) && - (update_flag(CS_CPU_EXCLUSIVE, cs, 1) < 0)) - return PERR_NOTEXCL; - if (is_prs_invalid(new_prs) && is_cpu_exclusive(cs)) - update_flag(CS_CPU_EXCLUSIVE, cs, 0); + int err = update_partition_exclusive(cs, new_prs); + + if (err) + return err; } /* @@ -1547,15 +1587,16 @@ static int update_parent_subparts_cpumask(struct cpuset *cs, int cmd, update_tasks_cpumask(parent, tmp->addmask); /* - * Set or clear CS_SCHED_LOAD_BALANCE when partcmd_update, if necessary. - * rebuild_sched_domains_locked() may be called. + * For partcmd_update without newmask, it is being called from + * cpuset_hotplug_workfn() where cpus_read_lock() wasn't taken. + * Update the load balance flag and scheduling domain if + * cpus_read_trylock() is successful. */ - if (old_prs != new_prs) { - if (old_prs == PRS_ISOLATED) - update_flag(CS_SCHED_LOAD_BALANCE, cs, 1); - else if (new_prs == PRS_ISOLATED) - update_flag(CS_SCHED_LOAD_BALANCE, cs, 0); + if ((cmd == partcmd_update) && !newmask && cpus_read_trylock()) { + update_partition_sd_lb(cs, old_prs); + cpus_read_unlock(); } + notify_partition_change(cs, old_prs); return 0; } @@ -1770,6 +1811,7 @@ static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs, int retval; struct tmpmasks tmp; bool invalidate = false; + int old_prs = cs->partition_root_state; /* top_cpuset.cpus_allowed tracks cpu_online_mask; it's read-only */ if (cs == &top_cpuset) @@ -1889,6 +1931,9 @@ static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs, */ if (parent->child_ecpus_count) update_sibling_cpumasks(parent, cs, &tmp); + + /* Update CS_SCHED_LOAD_BALANCE and/or sched_domains */ + update_partition_sd_lb(cs, old_prs); } return 0; } @@ -2265,7 +2310,6 @@ static int update_flag(cpuset_flagbits_t bit, struct cpuset *cs, static int update_prstate(struct cpuset *cs, int new_prs) { int err = PERR_NONE, old_prs = cs->partition_root_state; - bool sched_domain_rebuilt = 
false; struct cpuset *parent = parent_cs(cs); struct tmpmasks tmpmask; @@ -2284,45 +2328,28 @@ static int update_prstate(struct cpuset *cs, int new_prs) if (alloc_cpumasks(NULL, &tmpmask)) return -ENOMEM; + err = update_partition_exclusive(cs, new_prs); + if (err) + goto out; + if (!old_prs) { /* - * Turning on partition root requires setting the - * CS_CPU_EXCLUSIVE bit implicitly as well and cpus_allowed - * cannot be empty. + * cpus_allowed cannot be empty. */ if (cpumask_empty(cs->cpus_allowed)) { err = PERR_CPUSEMPTY; goto out; } - err = update_flag(CS_CPU_EXCLUSIVE, cs, 1); - if (err) { - err = PERR_NOTEXCL; - goto out; - } - err = update_parent_subparts_cpumask(cs, partcmd_enable, NULL, &tmpmask); - if (err) { - update_flag(CS_CPU_EXCLUSIVE, cs, 0); + if (err) goto out; - } - - if (new_prs == PRS_ISOLATED) { - /* - * Disable the load balance flag should not return an - * error unless the system is running out of memory. - */ - update_flag(CS_SCHED_LOAD_BALANCE, cs, 0); - sched_domain_rebuilt = true; - } } else if (old_prs && new_prs) { /* * A change in load balance state only, no change in cpumasks. 
*/ - update_flag(CS_SCHED_LOAD_BALANCE, cs, (new_prs != PRS_ISOLATED)); - sched_domain_rebuilt = true; - goto out; /* Sched domain is rebuilt in update_flag() */ + goto out; } else { /* * Switching back to member is always allowed even if it @@ -2341,15 +2368,6 @@ static int update_prstate(struct cpuset *cs, int new_prs) compute_effective_cpumask(cs->effective_cpus, cs, parent); spin_unlock_irq(&callback_lock); } - - /* Turning off CS_CPU_EXCLUSIVE will not return error */ - update_flag(CS_CPU_EXCLUSIVE, cs, 0); - - if (!is_sched_load_balance(cs)) { - /* Make sure load balance is on */ - update_flag(CS_SCHED_LOAD_BALANCE, cs, 1); - sched_domain_rebuilt = true; - } } update_tasks_cpumask(parent, tmpmask.new_cpus); @@ -2357,18 +2375,24 @@ static int update_prstate(struct cpuset *cs, int new_prs) if (parent->child_ecpus_count) update_sibling_cpumasks(parent, cs, &tmpmask); - if (!sched_domain_rebuilt) - rebuild_sched_domains_locked(); out: /* - * Make partition invalid if an error happen + * Make partition invalid & disable CS_CPU_EXCLUSIVE if an error + * happens. */ - if (err) + if (err) { new_prs = -new_prs; + update_partition_exclusive(cs, new_prs); + } + spin_lock_irq(&callback_lock); cs->partition_root_state = new_prs; WRITE_ONCE(cs->prs_err, err); spin_unlock_irq(&callback_lock); + + /* Update sched domains and load balance flag */ + update_partition_sd_lb(cs, old_prs); + /* * Update child cpusets, if present. * Force update if switching back to member. 
From patchwork Wed May 31 16:34:01 2023
X-Patchwork-Submitter: Waiman Long
X-Patchwork-Id: 13262528
From: Waiman Long
To: Tejun Heo, Zefan Li, Johannes Weiner, Jonathan Corbet, Shuah Khan
Cc: linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Juri Lelli, Valentin Schneider, Frederic Weisbecker, Mrunal Patel, Ryan Phillips, Brent Rowsell, Peter Hunt, Phil Auld, Waiman Long
Subject: [PATCH v2 2/6] cgroup/cpuset: Improve temporary cpumasks handling
Date: Wed, 31 May 2023 12:34:01 -0400
Message-Id: <20230531163405.2200292-3-longman@redhat.com>

The limitation that update_parent_subparts_cpumask() can only use the addmask & delmask in the given tmp cpumasks is fragile and may lead to unexpected errors. Add a new statically allocated cs_tmp_cpus cpumask (protected by cpuset_mutex) for internal use so that all three temporary cpumasks can be used freely. With this change, we can move the update_tasks_cpumask() call for the parent and the update_sibling_cpumasks() call for the siblings into update_parent_subparts_cpumask(). Also add an init_tmpmasks() helper to handle initialization of the tmpmasks structure when cpumasks are too big to be statically allocated on the stack.
Signed-off-by: Waiman Long --- kernel/cgroup/cpuset.c | 66 ++++++++++++++++++++++++------------------ 1 file changed, 38 insertions(+), 28 deletions(-) diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c index 12a0b583aca4..8604c919e1e4 100644 --- a/kernel/cgroup/cpuset.c +++ b/kernel/cgroup/cpuset.c @@ -208,6 +208,8 @@ struct cpuset { struct cgroup_file partition_file; }; +static cpumask_var_t cs_tmp_cpus; /* Temp cpumask for partition */ + /* * Partition root states: * @@ -668,6 +670,24 @@ static inline void free_cpumasks(struct cpuset *cs, struct tmpmasks *tmp) } } +/* + * init_tmpmasks - Initialize the cpumasks in tmpmasks with the given ones + */ +#ifdef CONFIG_CPUMASK_OFFSTACK +static inline void +init_tmpmasks(struct tmpmasks *tmp, struct cpumask *new_cpus, + struct cpumask *addmask, struct cpumask *delmask) +{ + tmp->new_cpus = new_cpus; + tmp->addmask = addmask; + tmp->delmask = delmask; +} +#else +static inline void +init_tmpmasks(struct tmpmasks *tmp, struct cpumask *new_cpus, + struct cpumask *addmask, struct cpumask *delmask) { } +#endif + /** * alloc_trial_cpuset - allocate a trial cpuset * @cs: the cpuset that the trial cpuset duplicates @@ -1300,6 +1320,8 @@ enum subparts_cmd { static int update_flag(cpuset_flagbits_t bit, struct cpuset *cs, int turning_on); +static void update_sibling_cpumasks(struct cpuset *parent, struct cpuset *cs, + struct tmpmasks *tmp); /* * Update partition exclusive flag @@ -1463,7 +1485,7 @@ static int update_parent_subparts_cpumask(struct cpuset *cs, int cmd, adding = cpumask_andnot(tmp->addmask, tmp->addmask, parent->subparts_cpus); /* - * Empty cpumask is not allewed + * Empty cpumask is not allowed */ if (cpumask_empty(newmask)) { part_error = PERR_CPUSEMPTY; @@ -1583,8 +1605,11 @@ static int update_parent_subparts_cpumask(struct cpuset *cs, int cmd, spin_unlock_irq(&callback_lock); - if (adding || deleting) + if (adding || deleting) { update_tasks_cpumask(parent, tmp->addmask); + if 
(parent->child_ecpus_count) + update_sibling_cpumasks(parent, cs, tmp); + } /* * For partcmd_update without newmask, it is being called from @@ -1839,18 +1864,13 @@ static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs, if (cpumask_equal(cs->cpus_allowed, trialcs->cpus_allowed)) return 0; -#ifdef CONFIG_CPUMASK_OFFSTACK /* * Use the cpumasks in trialcs for tmpmasks when they are pointers - * to allocated cpumasks. - * - * Note that update_parent_subparts_cpumask() uses only addmask & - * delmask, but not new_cpus. + * to allocated cpumasks & save the newmask into cs_tmp_cpus. */ - tmp.addmask = trialcs->subparts_cpus; - tmp.delmask = trialcs->effective_cpus; - tmp.new_cpus = NULL; -#endif + cpumask_copy(cs_tmp_cpus, trialcs->cpus_allowed); + init_tmpmasks(&tmp, trialcs->cpus_allowed, trialcs->subparts_cpus, + trialcs->effective_cpus); retval = validate_change(cs, trialcs); @@ -1870,7 +1890,7 @@ static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs, parent = parent_cs(cs); cpuset_for_each_child(cp, css, parent) if (is_partition_valid(cp) && - cpumask_intersects(trialcs->cpus_allowed, cp->cpus_allowed)) { + cpumask_intersects(cs_tmp_cpus, cp->cpus_allowed)) { rcu_read_unlock(); update_parent_subparts_cpumask(cp, partcmd_invalidate, NULL, &tmp); rcu_read_lock(); @@ -1887,13 +1907,15 @@ static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs, NULL, &tmp); else update_parent_subparts_cpumask(cs, partcmd_update, - trialcs->cpus_allowed, &tmp); + cs_tmp_cpus, &tmp); } + /* Restore trialcs->cpus_allowed */ + cpumask_copy(trialcs->cpus_allowed, cs_tmp_cpus); compute_effective_cpumask(trialcs->effective_cpus, trialcs, parent_cs(cs)); spin_lock_irq(&callback_lock); - cpumask_copy(cs->cpus_allowed, trialcs->cpus_allowed); + cpumask_copy(cs->cpus_allowed, cs_tmp_cpus); /* * Make sure that subparts_cpus, if not empty, is a subset of @@ -1914,11 +1936,6 @@ static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs, } 
spin_unlock_irq(&callback_lock); -#ifdef CONFIG_CPUMASK_OFFSTACK - /* Now trialcs->cpus_allowed is available */ - tmp.new_cpus = trialcs->cpus_allowed; -#endif - /* effective_cpus will be updated here */ update_cpumasks_hier(cs, &tmp, false); @@ -2343,13 +2360,11 @@ static int update_prstate(struct cpuset *cs, int new_prs) err = update_parent_subparts_cpumask(cs, partcmd_enable, NULL, &tmpmask); - if (err) - goto out; } else if (old_prs && new_prs) { /* * A change in load balance state only, no change in cpumasks. */ - goto out; + ; } else { /* * Switching back to member is always allowed even if it @@ -2369,12 +2384,6 @@ static int update_prstate(struct cpuset *cs, int new_prs) spin_unlock_irq(&callback_lock); } } - - update_tasks_cpumask(parent, tmpmask.new_cpus); - - if (parent->child_ecpus_count) - update_sibling_cpumasks(parent, cs, &tmpmask); - out: /* * Make partition invalid & disable CS_CPU_EXCLUSIVE if an error @@ -3500,6 +3509,7 @@ int __init cpuset_init(void) BUG_ON(!alloc_cpumask_var(&top_cpuset.cpus_allowed, GFP_KERNEL)); BUG_ON(!alloc_cpumask_var(&top_cpuset.effective_cpus, GFP_KERNEL)); BUG_ON(!zalloc_cpumask_var(&top_cpuset.subparts_cpus, GFP_KERNEL)); + BUG_ON(!zalloc_cpumask_var(&cs_tmp_cpus, GFP_KERNEL)); cpumask_setall(top_cpuset.cpus_allowed); nodes_setall(top_cpuset.mems_allowed);
From patchwork Wed May 31 16:34:02 2023
X-Patchwork-Submitter: Waiman Long
X-Patchwork-Id: 13262531
From: Waiman Long
To: Tejun Heo, Zefan Li, Johannes Weiner, Jonathan Corbet, Shuah Khan
Cc: linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Juri Lelli, Valentin Schneider, Frederic Weisbecker, Mrunal Patel, Ryan Phillips, Brent Rowsell, Peter Hunt, Phil Auld, Waiman Long
Subject:
[PATCH v2 3/6] cgroup/cpuset: Add cpuset.cpus.reserve for top cpuset
Date: Wed, 31 May 2023 12:34:02 -0400
Message-Id: <20230531163405.2200292-4-longman@redhat.com>

A cpuset partition is a collection of cpusets with a partition root and its descendants from that root downward, excluding any cpusets that are part of other partitions. A partition has exclusive access to the set of CPUs granted to it; cpusets outside of a partition cannot use any CPUs in that set.

Currently, creating a partition requires a hierarchical CPU distribution model where the parent of a partition root must itself be a partition root. Hence all the partition roots have to be clustered around the cgroup root. To enable the creation of a remote partition further down in the hierarchy without a parental partition root, we need a way to reserve the CPUs that will be used in a remote partition. Introduce a new root-only "cpuset.cpus.reserve" control file in the top cpuset for this particular purpose.

By default, the new "cpuset.cpus.reserve" control file tracks the subparts_cpus cpumask in the top cpuset. By writing into this new control file, however, we can reserve additional CPUs that can be used in a remote partition. Any CPUs in "cpuset.cpus.reserve" have to be removed from the effective_cpus of all cpusets that are not part of a valid partition.

The prefixes "+" and "-" can be used to indicate addition to or subtraction from the existing CPUs in "cpuset.cpus.reserve". A single "-" character indicates the deletion of all the free reserve CPUs not allocated to any existing partition.
Signed-off-by: Waiman Long --- kernel/cgroup/cpuset.c | 253 ++++++++++++++++++++++++++++++++++++++--- 1 file changed, 239 insertions(+), 14 deletions(-) diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c index 8604c919e1e4..69abe95a9969 100644 --- a/kernel/cgroup/cpuset.c +++ b/kernel/cgroup/cpuset.c @@ -208,7 +208,33 @@ struct cpuset { struct cgroup_file partition_file; }; -static cpumask_var_t cs_tmp_cpus; /* Temp cpumask for partition */ +/* + * Reserved CPUs for partitions. + * + * By default, CPUs used in partitions are tracked in the parent's + * subparts_cpus mask following a hierarchical CPUs distribution model. + * To enable the creation of a remote partition down in the hierarchy + * without a parental partition root, one can write directly to + * cpuset.cpus.reserve in the root cgroup to allocate more CPUs that can + * be used by remote partitions. Removal of existing reserved CPUs may + * also cause some existing partitions to become invalid. + * + * All the cpumasks below should only be used with cpuset_mutex held. + * Modification of cs_reserve_cpus & cs_free_reserve_cpus also requires + * holding the callback_lock. + * + * Relationship among cs_reserve_cpus, cs_free_reserve_cpus and + * top_cpuset.subparts_cpus are: + * + * top_cpuset.subparts_cpus ⊆ cs_reserve_cpus + * cs_free_reserve_cpus ⊆ cs_reserve_cpus + * top_cpuset.subparts_cpus ∩ cs_free_reserve_cpus = ∅ + * cs_reserve_cpus - cs_free_reserve_cpus - top_cpuset.subparts_cpus + * = CPUs dedicated to remote partitions + */ +static cpumask_var_t cs_reserve_cpus; /* Reserved CPUs */ +static cpumask_var_t cs_free_reserve_cpus; /* Unallocated reserved CPUs */ +static cpumask_var_t cs_tmp_cpus; /* Temp cpumask for partition */ /* * Partition root states: @@ -1202,13 +1228,13 @@ static void rebuild_sched_domains_locked(void) * should be the same as the active CPUs, so checking only top_cpuset * is enough to detect racing CPU offlines. 
*/ - if (!top_cpuset.nr_subparts_cpus && + if (cpumask_empty(cs_reserve_cpus) && !cpumask_equal(top_cpuset.effective_cpus, cpu_active_mask)) return; /* * With subpartition CPUs, however, the effective CPUs of a partition - * root should be only a subset of the active CPUs. Since a CPU in any + * root should only be a subset of the active CPUs. Since a CPU in any * partition root could be offlined, all must be checked. */ if (top_cpuset.nr_subparts_cpus) { @@ -1275,7 +1301,7 @@ static void update_tasks_cpumask(struct cpuset *cs, struct cpumask *new_cpus) */ if ((task->flags & PF_KTHREAD) && kthread_is_per_cpu(task)) continue; - cpumask_andnot(new_cpus, possible_mask, cs->subparts_cpus); + cpumask_andnot(new_cpus, possible_mask, cs_reserve_cpus); } else { cpumask_and(new_cpus, possible_mask, cs->effective_cpus); } @@ -1406,6 +1432,7 @@ static int update_parent_subparts_cpumask(struct cpuset *cs, int cmd, int deleting; /* Moving cpus from subparts_cpus to effective_cpus */ int old_prs, new_prs; int part_error = PERR_NONE; /* Partition error? */ + bool update_reserve = (parent == &top_cpuset); lockdep_assert_held(&cpuset_mutex); @@ -1576,7 +1603,7 @@ static int update_parent_subparts_cpumask(struct cpuset *cs, int cmd, } /* - * Change the parent's subparts_cpus. + * Change the parent's subparts_cpus and maybe cs_reserve_cpus. * Newly added CPUs will be removed from effective_cpus and * newly deleted ones will be added back to effective_cpus. 
*/ @@ -1586,10 +1613,25 @@ static int update_parent_subparts_cpumask(struct cpuset *cs, int cmd, parent->subparts_cpus, tmp->addmask); cpumask_andnot(parent->effective_cpus, parent->effective_cpus, tmp->addmask); + if (update_reserve) { + cpumask_or(cs_reserve_cpus, + cs_reserve_cpus, tmp->addmask); + cpumask_andnot(cs_free_reserve_cpus, + cs_free_reserve_cpus, tmp->addmask); + } } if (deleting) { cpumask_andnot(parent->subparts_cpus, parent->subparts_cpus, tmp->delmask); + /* + * The automatic cpu reservation of adjacent partition + * won't add back the deleted CPUs to cs_free_reserve_cpus. + * Instead, they are returned back to effective_cpus of top + * cpuset. + */ + if (update_reserve) + cpumask_andnot(cs_reserve_cpus, + cs_reserve_cpus, tmp->delmask); /* * Some of the CPUs in subparts_cpus might have been offlined. */ @@ -1783,6 +1825,8 @@ static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp, if (need_rebuild_sched_domains) rebuild_sched_domains_locked(); + + return; } /** @@ -1955,6 +1999,167 @@ static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs, return 0; } +/** + * update_reserve_cpumask - update cs_reserve_cpus + * @trialcs: trial cpuset + * @buf: buffer of cpu numbers written to this cpuset + * Return: 0 if successful, < 0 if error + */ +static int update_reserve_cpumask(struct cpuset *trialcs, const char *buf) +{ + struct cgroup_subsys_state *css; + struct cpuset *cs; + bool adding, deleting; + struct tmpmasks tmp; + + adding = deleting = false; + if (*buf == '+') { + adding = true; + buf++; + } else if (*buf == '-') { + deleting = true; + buf++; + } + + if (!*buf) { + if (adding) + return -EINVAL; + + if (deleting) { + if (cpumask_empty(cs_free_reserve_cpus)) + return 0; + cpumask_copy(trialcs->cpus_allowed, cs_free_reserve_cpus); + } else { + cpumask_clear(trialcs->cpus_allowed); + } + } else { + int retval = cpulist_parse(buf, trialcs->cpus_allowed); + + if (retval < 0) + return retval; + } + + if (!adding && 
!deleting && + cpumask_equal(trialcs->cpus_allowed, cs_reserve_cpus)) + return 0; + + /* Preserve trialcs->cpus_allowed for now */ + init_tmpmasks(&tmp, NULL, trialcs->subparts_cpus, + trialcs->effective_cpus); + + /* + * Compute the addition and removal of CPUs to/from cs_reserve_cpus + */ + if (!adding && !deleting) { + adding = cpumask_andnot(tmp.addmask, trialcs->cpus_allowed, + cs_reserve_cpus); + deleting = cpumask_andnot(tmp.delmask, cs_reserve_cpus, + trialcs->cpus_allowed); + } else if (adding) { + adding = cpumask_andnot(tmp.addmask, + trialcs->cpus_allowed, cs_reserve_cpus); + cpumask_or(trialcs->cpus_allowed, cs_reserve_cpus, tmp.addmask); + } else { /* deleting */ + deleting = cpumask_and(tmp.delmask, + trialcs->cpus_allowed, cs_reserve_cpus); + cpumask_andnot(trialcs->cpus_allowed, cs_reserve_cpus, tmp.delmask); + } + + if (!adding && !deleting) + return 0; + + /* + * Invalidate remote partitions if necessary + */ + if (deleting) { + /* TODO */ + } + + /* + * Cannot use up all the CPUs in top_cpuset.effective_cpus + */ + if (!deleting && adding && + cpumask_subset(top_cpuset.effective_cpus, tmp.addmask)) + return -EINVAL; + + spin_lock_irq(&callback_lock); + /* + * Update top_cpuset.effective_cpus, cs_reserve_cpus & + * cs_free_reserve_cpus. + */ + if (adding) + cpumask_or(cs_free_reserve_cpus, cs_free_reserve_cpus, + tmp.addmask); + cpumask_copy(cs_reserve_cpus, trialcs->cpus_allowed); + cpumask_andnot(top_cpuset.effective_cpus, + cpu_active_mask, cs_reserve_cpus); + + /* + * Remove CPUs from cs_free_reserve_cpus first. Anything left + * means some partitions has to be made invalid. 
+ */ + if (deleting & cpumask_and(cs_tmp_cpus, cs_free_reserve_cpus, + tmp.delmask)) { + cpumask_andnot(cs_free_reserve_cpus, cs_free_reserve_cpus, + cs_tmp_cpus); + deleting = cpumask_andnot(tmp.delmask, tmp.delmask, + cs_tmp_cpus); + } + spin_unlock_irq(&callback_lock); + + /* + * Invalidate some adjacent partitions under top cpuset, if necessary + */ + if (deleting && cpumask_and(cs_tmp_cpus, tmp.delmask, + top_cpuset.subparts_cpus)) { + struct cgroup_subsys_state *css; + struct cpuset *cp; + + /* + * Temporarily save the remaining CPUs to be deleted in + * trialcs->cpus_allowed to be restored back to tmp.delmask + * later. + */ + deleting = cpumask_andnot(trialcs->cpus_allowed, tmp.delmask, + cs_tmp_cpus); + rcu_read_lock(); + cpuset_for_each_child(cp, css, &top_cpuset) + if (is_partition_valid(cp) && + cpumask_intersects(cs_tmp_cpus, cp->cpus_allowed)) { + rcu_read_unlock(); + update_parent_subparts_cpumask(cp, partcmd_invalidate, NULL, &tmp); + rcu_read_lock(); + } + rcu_read_unlock(); + if (deleting) + cpumask_copy(tmp.delmask, trialcs->cpus_allowed); + } + + /* Can now use all of trialcs */ + init_tmpmasks(&tmp, trialcs->cpus_allowed, trialcs->subparts_cpus, + trialcs->effective_cpus); + + /* + * Update effective_cpus of all descendants that are not in + * partitions and rebuild sched domaiins. + */ + rcu_read_lock(); + cpuset_for_each_child(cs, css, &top_cpuset) { + compute_effective_cpumask(tmp.new_cpus, cs, &top_cpuset); + if (cpumask_equal(tmp.new_cpus, cs->effective_cpus)) + continue; + if (!css_tryget_online(&cs->css)) + continue; + rcu_read_unlock(); + update_cpumasks_hier(cs, &tmp, false); + rcu_read_lock(); + css_put(&cs->css); + } + rcu_read_unlock(); + rebuild_sched_domains_locked(); + return 0; +} + /* * Migrate memory region from one set of nodes to another. 
This is * performed asynchronously as it can be called from process migration path @@ -2743,6 +2948,7 @@ typedef enum { FILE_EFFECTIVE_CPULIST, FILE_EFFECTIVE_MEMLIST, FILE_SUBPARTS_CPULIST, + FILE_RESERVE_CPULIST, FILE_CPU_EXCLUSIVE, FILE_MEM_EXCLUSIVE, FILE_MEM_HARDWALL, @@ -2880,6 +3086,9 @@ static ssize_t cpuset_write_resmask(struct kernfs_open_file *of, case FILE_CPULIST: retval = update_cpumask(cs, trialcs, buf); break; + case FILE_RESERVE_CPULIST: + retval = update_reserve_cpumask(trialcs, buf); + break; case FILE_MEMLIST: retval = update_nodemask(cs, trialcs, buf); break; @@ -2927,6 +3136,9 @@ static int cpuset_common_seq_show(struct seq_file *sf, void *v) case FILE_EFFECTIVE_MEMLIST: seq_printf(sf, "%*pbl\n", nodemask_pr_args(&cs->effective_mems)); break; + case FILE_RESERVE_CPULIST: + seq_printf(sf, "%*pbl\n", cpumask_pr_args(cs_reserve_cpus)); + break; case FILE_SUBPARTS_CPULIST: seq_printf(sf, "%*pbl\n", cpumask_pr_args(cs->subparts_cpus)); break; @@ -3200,6 +3412,14 @@ static struct cftype dfl_files[] = { .file_offset = offsetof(struct cpuset, partition_file), }, + { + .name = "cpus.reserve", + .seq_show = cpuset_common_seq_show, + .write = cpuset_write_resmask, + .private = FILE_RESERVE_CPULIST, + .flags = CFTYPE_ONLY_ON_ROOT, + }, + { .name = "cpus.subpartitions", .seq_show = cpuset_common_seq_show, @@ -3510,6 +3730,8 @@ int __init cpuset_init(void) BUG_ON(!alloc_cpumask_var(&top_cpuset.effective_cpus, GFP_KERNEL)); BUG_ON(!zalloc_cpumask_var(&top_cpuset.subparts_cpus, GFP_KERNEL)); BUG_ON(!zalloc_cpumask_var(&cs_tmp_cpus, GFP_KERNEL)); + BUG_ON(!zalloc_cpumask_var(&cs_reserve_cpus, GFP_KERNEL)); + BUG_ON(!zalloc_cpumask_var(&cs_free_reserve_cpus, GFP_KERNEL)); cpumask_setall(top_cpuset.cpus_allowed); nodes_setall(top_cpuset.mems_allowed); @@ -3788,10 +4010,10 @@ static void cpuset_hotplug_workfn(struct work_struct *work) mems_updated = !nodes_equal(top_cpuset.effective_mems, new_mems); /* - * In the rare case that hotplug removes all the cpus in 
subparts_cpus, + * In the rare case that hotplug removes all the reserve cpus, * we assumed that cpus are updated. */ - if (!cpus_updated && top_cpuset.nr_subparts_cpus) + if (!cpus_updated && !cpumask_empty(cs_reserve_cpus)) cpus_updated = true; /* synchronize cpus_allowed to cpu_active_mask */ @@ -3801,18 +4023,21 @@ static void cpuset_hotplug_workfn(struct work_struct *work) cpumask_copy(top_cpuset.cpus_allowed, &new_cpus); /* * Make sure that CPUs allocated to child partitions - * do not show up in effective_cpus. If no CPU is left, - * we clear the subparts_cpus & let the child partitions - * fight for the CPUs again. + * do not show up in top_cpuset's effective_cpus. In the + * unlikely event that no effective CPU is left in top_cpuset, + * we clear all the reserve cpus and let the non-remote child + * partitions fight for the CPUs again. */ - if (top_cpuset.nr_subparts_cpus) { - if (cpumask_subset(&new_cpus, - top_cpuset.subparts_cpus)) { + if (!cpumask_empty(cs_reserve_cpus)) { + + if (cpumask_subset(&new_cpus, cs_reserve_cpus)) { top_cpuset.nr_subparts_cpus = 0; cpumask_clear(top_cpuset.subparts_cpus); + cpumask_clear(cs_free_reserve_cpus); + cpumask_clear(cs_reserve_cpus); } else { cpumask_andnot(&new_cpus, &new_cpus, - top_cpuset.subparts_cpus); + cs_reserve_cpus); } } cpumask_copy(top_cpuset.effective_cpus, &new_cpus);

From patchwork Wed May 31 16:34:03 2023 X-Patchwork-Submitter: Waiman Long X-Patchwork-Id: 13262530 From: Waiman Long To: Tejun Heo, Zefan Li, Johannes Weiner, Jonathan Corbet, Shuah Khan Cc: linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Juri Lelli, Valentin Schneider, Frederic Weisbecker, Mrunal Patel, Ryan Phillips, Brent Rowsell, Peter Hunt, Phil Auld, Waiman Long Subject: [PATCH v2 4/6] cgroup/cpuset: Introduce remote isolated partition Date: Wed, 31 May 2023 12:34:03 -0400 Message-Id: <20230531163405.2200292-5-longman@redhat.com>

One can use "cpuset.cpus.partition" to create multiple scheduling domains or to produce a set of isolated CPUs where load balancing is disabled. The former use case is less common, but the latter is frequently used, especially in Telco use cases like DPDK. The existing "isolated" partition can be used to produce isolated CPUs if the applications have full control of a system. However, in a containerized environment where all the apps are run in a container, it is hard to distribute isolated CPUs from the root down given the unified hierarchy nature of cgroup v2. The container running on isolated CPUs can be several layers down from the root. The current partition feature requires that all the ancestors of a leaf partition root be partition roots themselves. This can be hard to configure. This patch introduces a new type of partition called remote partition. A remote partition is a partition whose parent is not a partition root itself and whose CPUs are acquired directly from available CPUs in the top cpuset's cpuset.cpus.reserve. In contrast, the existing type of partitions, whose parents have to be valid partition roots, are referred to as adjacent partitions as they have to be clustered around the cgroup root. This patch enables only the creation of remote isolated partitions for now. The creation of a remote isolated partition is a 2-step process. 1) Reserve the CPUs needed by the remote partition by adding CPUs to cpuset.cpus.reserve of the top cpuset. 2) Enable an isolated partition by # echo isolated > cpuset.cpus.partition Such a remote isolated partition P will only be valid if the following conditions are true. 
1) P/cpuset.cpus is a subset of top cpuset's cpuset.cpus.reserve. 2) All the CPUs in P/cpuset.cpus are present in the cpuset.cpus of all its ancestors to ensure that those CPUs are properly granted to P in a hierarchical manner. 3) None of the CPUs in P/cpuset.cpus have been acquired by other valid partitions. Like adjacent partitions, a remote partition has exclusive access to the CPUs allocated to that partition. Because of the exclusive nature, none of the cpuset.cpus of its sibling cpusets can contain any CPUs allocated to the remote partition, or the partition creation process will fail. Signed-off-by: Waiman Long --- kernel/cgroup/cpuset.c | 306 +++++++++++++++++++++++++++++++++++++++-- 1 file changed, 291 insertions(+), 15 deletions(-) diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c index 69abe95a9969..280018cddaba 100644 --- a/kernel/cgroup/cpuset.c +++ b/kernel/cgroup/cpuset.c @@ -98,6 +98,7 @@ enum prs_errcode { PERR_NOCPUS, PERR_HOTPLUG, PERR_CPUSEMPTY, + PERR_RMTPARENT, }; static const char * const perr_strings[] = { @@ -108,6 +109,7 @@ static const char * const perr_strings[] = { [PERR_NOCPUS] = "Parent unable to distribute cpu downstream", [PERR_HOTPLUG] = "No cpu available due to hotplug", [PERR_CPUSEMPTY] = "cpuset.cpus is empty", + [PERR_RMTPARENT] = "New partition not allowed under remote partition", }; struct cpuset { @@ -206,6 +208,9 @@ struct cpuset { /* Handle for cpuset.cpus.partition */ struct cgroup_file partition_file; + + /* Remote partition sibling list anchored at remote_children */ + struct list_head remote_sibling; }; /* @@ -236,6 +241,9 @@ static cpumask_var_t cs_reserve_cpus; /* Reserved CPUs */ static cpumask_var_t cs_free_reserve_cpus; /* Unallocated reserved CPUs */ static cpumask_var_t cs_tmp_cpus; /* Temp cpumask for partition */ +/* List of remote partition root children */ +static struct list_head remote_children; + /* * Partition root states: * @@ -385,6 +393,8 @@ static struct cpuset top_cpuset = { .flags = ((1 
<< CS_ONLINE) | (1 << CS_CPU_EXCLUSIVE) | (1 << CS_MEM_EXCLUSIVE)), .partition_root_state = PRS_ROOT, + .remote_sibling = LIST_HEAD_INIT(top_cpuset.remote_sibling), + }; /** @@ -1385,6 +1395,209 @@ static void update_partition_sd_lb(struct cpuset *cs, int old_prs) rebuild_sched_domains_locked(); } +static inline bool is_remote_partition(struct cpuset *cs) +{ + return !list_empty(&cs->remote_sibling); +} + +/* + * update_isolated_cpumasks_hier - Update effective cpumasks and tasks + * @cs: the cpuset to consider + * @lb: load balance flag + * + * This is called for descendant cpusets when a cpuset switches to or + * from an isolated remote partition. There can't be any remote partitions + * underneath it. + */ +static void update_isolated_cpumasks_hier(struct cpuset *cs, bool lb) +{ + struct cpuset *cp; + struct cgroup_subsys_state *pos_css; + + rcu_read_lock(); + cpuset_for_each_descendant_pre(cp, pos_css, cs) { + struct cpuset *parent = parent_cs(cp); + + if (cp == cs) + continue; /* Skip partition root */ + + WARN_ON_ONCE(is_partition_valid(cp)); + spin_lock_irq(&callback_lock); + + if (cpumask_and(cp->effective_cpus, cp->cpus_allowed, + parent->effective_cpus)) { + if (cp->use_parent_ecpus) { + WARN_ON_ONCE(--parent->child_ecpus_count < 0); + cp->use_parent_ecpus = false; + } + } else { + cpumask_copy(cp->effective_cpus, parent->effective_cpus); + if (!cp->use_parent_ecpus) { + parent->child_ecpus_count++; + cp->use_parent_ecpus = true; + } + } + if (lb) + set_bit(CS_SCHED_LOAD_BALANCE, &cp->flags); + else + clear_bit(CS_SCHED_LOAD_BALANCE, &cp->flags); + + spin_unlock_irq(&callback_lock); + } + rcu_read_unlock(); +} + +/* + * isolated_cpus_acquire - Acquire isolated CPUs from cpuset.cpus.reserve + * @cs: the cpuset to update + * Return: 1 if successful, 0 if error + * + * Acquire isolated CPUs from cpuset.cpus.reserve and become an isolated + * partition root. cpuset_mutex must be held by the caller. 
+ * + * Note that freely available reserve CPUs have already been isolated, so + * we don't need to rebuild sched domains. Since the cpuset is likely + * using effective_cpus from its parent before the conversion, we have to + * update parent's child_ecpus_count accordingly. + */ +static int isolated_cpus_acquire(struct cpuset *cs) +{ + struct cpuset *ancestor, *parent; + + ancestor = parent = parent_cs(cs); + + /* + * To enable acquiring of isolated CPUs from cpuset.cpus.reserve, + * cpus_allowed must be a subset of both its ancestor's cpus_allowed + * and cs_free_reserve_cpus and the user must have sysadmin privilege. + */ + if (!capable(CAP_SYS_ADMIN) || + !cpumask_subset(cs->cpus_allowed, cs_free_reserve_cpus)) + return 0; + + /* + * Check cpus_allowed of all its ancestors, except top_cpuset. + */ + while (ancestor != &top_cpuset) { + if (!cpumask_subset(cs->cpus_allowed, ancestor->cpus_allowed)) + return 0; + ancestor = parent_cs(ancestor); + } + + spin_lock_irq(&callback_lock); + cpumask_andnot(cs_free_reserve_cpus, + cs_free_reserve_cpus, cs->cpus_allowed); + cpumask_and(cs->effective_cpus, cs->cpus_allowed, cpu_active_mask); + + if (cs->use_parent_ecpus) { + cs->use_parent_ecpus = false; + parent->child_ecpus_count--; + } + list_add(&cs->remote_sibling, &remote_children); + clear_bit(CS_SCHED_LOAD_BALANCE, &cs->flags); + spin_unlock_irq(&callback_lock); + + if (!list_empty(&cs->css.children)) + update_isolated_cpumasks_hier(cs, false); + + return 1; +} + +/* + * isolated_cpus_release - Release isolated CPUs back to cpuset.cpus.reserve + * @cs: the cpuset to update + * + * Release isolated CPUs back to cpuset.cpus.reserve. + * cpuset_mutex must be held by the caller. + */ +static void isolated_cpus_release(struct cpuset *cs) +{ + struct cpuset *parent = parent_cs(cs); + + if (!is_remote_partition(cs)) + return; + + /* + * This can be called when the cpu list in cs_reserve_cpus + * is reduced. 
So not all the cpus should be returned to + cs_free_reserve_cpus. + */ + WARN_ON_ONCE(cs->partition_root_state != PRS_ISOLATED); + WARN_ON_ONCE(!cpumask_subset(cs->cpus_allowed, cs_reserve_cpus)); + spin_lock_irq(&callback_lock); + if (!cpumask_and(cs->effective_cpus, + parent->effective_cpus, cs->cpus_allowed)) { + cs->use_parent_ecpus = true; + parent->child_ecpus_count++; + cpumask_copy(cs->effective_cpus, parent->effective_cpus); + } + list_del_init(&cs->remote_sibling); + cs->partition_root_state = PRS_INVALID_ISOLATED; + if (!cs->prs_err) + cs->prs_err = PERR_INVCPUS; + + /* Add the CPUs back to cs_free_reserve_cpus */ + cpumask_or(cs_free_reserve_cpus, + cs_free_reserve_cpus, cs->cpus_allowed); + + /* + * There is no change in the CPU load balance state that requires + * rebuilding sched domains. So the flags bits can be set directly. + */ + set_bit(CS_SCHED_LOAD_BALANCE, &cs->flags); + clear_bit(CS_CPU_EXCLUSIVE, &cs->flags); + spin_unlock_irq(&callback_lock); + + if (!list_empty(&cs->css.children)) + update_isolated_cpumasks_hier(cs, true); +} + +/* + * isolated_cpus_update - cpuset.cpus change in a remote isolated partition + * + * Return: 1 if successful, 0 if it needs to become invalid. + */ +static int isolated_cpus_update(struct cpuset *cs, struct cpumask *newmask, + struct tmpmasks *tmp) +{ + bool adding, deleting; + + if (WARN_ON_ONCE((cs->partition_root_state != PRS_ISOLATED) || + !is_remote_partition(cs))) + return 0; + + if (cpumask_empty(newmask)) + goto invalidate; + + adding = cpumask_andnot(tmp->addmask, newmask, cs->cpus_allowed); + deleting = cpumask_andnot(tmp->delmask, cs->cpus_allowed, newmask); + + /* + * Addition of isolated CPUs is only allowed if those CPUs are + * in cs_free_reserve_cpus and the caller has sysadmin privilege. 
+ */ + if (adding && (!capable(CAP_SYS_ADMIN) || + !cpumask_subset(tmp->addmask, cs_free_reserve_cpus))) + goto invalidate; + + spin_lock_irq(&callback_lock); + if (adding) + cpumask_andnot(cs_free_reserve_cpus, + cs_free_reserve_cpus, tmp->addmask); + if (deleting) + cpumask_or(cs_free_reserve_cpus, + cs_free_reserve_cpus, tmp->delmask); + cpumask_copy(cs->cpus_allowed, newmask); + cpumask_andnot(cs->effective_cpus, newmask, cs->subparts_cpus); + cpumask_and(cs->effective_cpus, cs->effective_cpus, cpu_active_mask); + spin_unlock_irq(&callback_lock); + return 1; + +invalidate: + isolated_cpus_release(cs); + return 0; +} + /** * update_parent_subparts_cpumask - update subparts_cpus mask of parent cpuset * @cs: The cpuset that requests change in partition root state @@ -1457,9 +1670,12 @@ static int update_parent_subparts_cpumask(struct cpuset *cs, int cmd, if (cmd == partcmd_enable) { /* * Enabling partition root is not allowed if cpus_allowed - * doesn't overlap parent's cpus_allowed. + * doesn't overlap parent's cpus_allowed or if it intersects + * cs_free_reserve_cpus since it needs to be a remote partition + * in this case. */ - if (!cpumask_intersects(cs->cpus_allowed, parent->cpus_allowed)) + if (!cpumask_intersects(cs->cpus_allowed, parent->cpus_allowed) || + cpumask_intersects(cs->cpus_allowed, cs_free_reserve_cpus)) return PERR_INVCPUS; /* @@ -1694,6 +1910,15 @@ static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp, struct cpuset *parent = parent_cs(cp); bool update_parent = false; + /* + * Skip remote partition that acquires isolated CPUs directly + * from cs_reserve_cpus. 
+ */ + if (is_remote_partition(cp)) { + pos_css = css_rightmost_descendant(pos_css); + continue; + } + compute_effective_cpumask(tmp->new_cpus, cp, parent); /* @@ -1804,7 +2029,7 @@ static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp, WARN_ON(!is_in_v2_mode() && !cpumask_equal(cp->cpus_allowed, cp->effective_cpus)); - update_tasks_cpumask(cp, tmp->new_cpus); + update_tasks_cpumask(cp, cp->effective_cpus); /* * On legacy hierarchy, if the effective cpumask of any non- @@ -1946,6 +2171,14 @@ static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs, return retval; if (cs->partition_root_state) { + /* + * Call isolated_cpus_update() to handle valid remote partition + */ + if (is_remote_partition(cs)) { + isolated_cpus_update(cs, cs_tmp_cpus, &tmp); + goto update_hier; + } + if (invalidate) update_parent_subparts_cpumask(cs, partcmd_invalidate, NULL, &tmp); @@ -1980,10 +2213,11 @@ static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs, } spin_unlock_irq(&callback_lock); +update_hier: /* effective_cpus will be updated here */ update_cpumasks_hier(cs, &tmp, false); - if (cs->partition_root_state) { + if (cs->partition_root_state && !is_remote_partition(cs)) { struct cpuset *parent = parent_cs(cs); /* @@ -2072,7 +2306,13 @@ static int update_reserve_cpumask(struct cpuset *trialcs, const char *buf) * Invalidate remote partitions if necessary */ if (deleting) { - /* TODO */ + struct cpuset *child, *next; + + list_for_each_entry_safe(child, next, &remote_children, + remote_sibling) { + if (cpumask_intersects(child->cpus_allowed, tmp.delmask)) + isolated_cpus_release(child); + } } /* @@ -2539,21 +2779,32 @@ static int update_prstate(struct cpuset *cs, int new_prs) return 0; /* - * For a previously invalid partition root, leave it at being - * invalid if new_prs is not "member". + * For a previously invalid partition root, treat it like a "member". 
*/ - if (new_prs && is_prs_invalid(old_prs)) { - cs->partition_root_state = -new_prs; - return 0; - } + if (new_prs && is_prs_invalid(old_prs)) + old_prs = PRS_MEMBER; if (alloc_cpumasks(NULL, &tmpmask)) return -ENOMEM; + if ((old_prs == PRS_ISOLATED) && is_remote_partition(cs)) { + /* Pre-invalidate a remote isolated partition */ + isolated_cpus_release(cs); + old_prs = PRS_MEMBER; + } + err = update_partition_exclusive(cs, new_prs); if (err) goto out; + /* + * New partition is not allowed under a remote partition + */ + if (new_prs && is_remote_partition(parent)) { + err = PERR_RMTPARENT; + goto out; + } + if (!old_prs) { /* * cpus_allowed cannot be empty. @@ -2565,6 +2816,12 @@ static int update_prstate(struct cpuset *cs, int new_prs) err = update_parent_subparts_cpumask(cs, partcmd_enable, NULL, &tmpmask); + /* + * If an attempt to become adjacent isolated partition fails, + * try to become a remote isolated partition instead. + */ + if (err && (new_prs == PRS_ISOLATED) && isolated_cpus_acquire(cs)) + err = 0; /* Become remote isolated partition */ } else if (old_prs && new_prs) { /* * A change in load balance state only, no change in cpumasks. 
@@ -3462,6 +3719,7 @@ cpuset_css_alloc(struct cgroup_subsys_state *parent_css) nodes_clear(cs->effective_mems); fmeter_init(&cs->fmeter); cs->relax_domain_level = -1; + INIT_LIST_HEAD(&cs->remote_sibling); /* Set CS_MEMORY_MIGRATE for default hierarchy */ if (cgroup_subsys_on_dfl(cpuset_cgrp_subsys)) @@ -3497,6 +3755,11 @@ static int cpuset_css_online(struct cgroup_subsys_state *css) cs->effective_mems = parent->effective_mems; cs->use_parent_ecpus = true; parent->child_ecpus_count++; + /* + * Clear CS_SCHED_LOAD_BALANCE if parent is isolated + */ + if (!is_sched_load_balance(parent)) + clear_bit(CS_SCHED_LOAD_BALANCE, &cs->flags); } spin_unlock_irq(&callback_lock); @@ -3741,6 +4004,7 @@ int __init cpuset_init(void) fmeter_init(&top_cpuset.fmeter); set_bit(CS_SCHED_LOAD_BALANCE, &top_cpuset.flags); top_cpuset.relax_domain_level = -1; + INIT_LIST_HEAD(&remote_children); BUG_ON(!alloc_cpumask_var(&cpus_attach, GFP_KERNEL)); @@ -3873,9 +4137,20 @@ static void cpuset_hotplug_update_tasks(struct cpuset *cs, struct tmpmasks *tmp) } parent = parent_cs(cs); - compute_effective_cpumask(&new_cpus, cs, parent); nodes_and(new_mems, cs->mems_allowed, parent->effective_mems); + /* + * In the special case of a valid remote isolated partition, + * we just need to mask offline cpus from cpus_allowed unless + * all the isolated cpus are gone. + */ + if (is_remote_partition(cs)) { + if (!cpumask_and(&new_cpus, cs->cpus_allowed, cpu_active_mask)) + isolated_cpus_release(cs); + } else { + compute_effective_cpumask(&new_cpus, cs, parent); + } + if (cs->nr_subparts_cpus) /* * Make sure that CPUs allocated to child partitions @@ -3906,10 +4181,11 @@ static void cpuset_hotplug_update_tasks(struct cpuset *cs, struct tmpmasks *tmp) * the following conditions hold: * 1) empty effective cpus but not valid empty partition. * 2) parent is invalid or doesn't grant any cpus to child - * partitions. + * partitions and not a remote partition. 
*/ - if (is_partition_valid(cs) && (!parent->nr_subparts_cpus || - (cpumask_empty(&new_cpus) && partition_is_populated(cs, NULL)))) { + if (is_partition_valid(cs) && + ((!parent->nr_subparts_cpus && !is_remote_partition(cs)) || + (cpumask_empty(&new_cpus) && partition_is_populated(cs, NULL)))) { int old_prs, parent_prs; update_parent_subparts_cpumask(cs, partcmd_disable, NULL, tmp);

From patchwork Wed May 31 16:34:04 2023 X-Patchwork-Submitter: Waiman Long X-Patchwork-Id: 13262527 From: Waiman Long To: Tejun Heo, Zefan Li, Johannes Weiner, Jonathan Corbet, Shuah Khan Cc: linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Juri Lelli, Valentin Schneider, Frederic Weisbecker, Mrunal Patel, Ryan Phillips, Brent Rowsell, Peter Hunt, Phil Auld, Waiman Long Subject: [PATCH v2 5/6] cgroup/cpuset: Documentation update for partition Date: Wed, 31 May 2023 12:34:04 -0400 Message-Id: <20230531163405.2200292-6-longman@redhat.com>

This patch updates the cgroup-v2.rst file to include information about the new "cpuset.cpus.reserve" control file as well as the new remote partition. Signed-off-by: Waiman Long --- Documentation/admin-guide/cgroup-v2.rst | 92 +++++++++++++++++++++---- 1 file changed, 79 insertions(+), 13 deletions(-) diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst index f67c0829350b..3e9351c2cd27 100644 --- a/Documentation/admin-guide/cgroup-v2.rst +++ b/Documentation/admin-guide/cgroup-v2.rst @@ -2215,6 +2215,38 @@ Cpuset Interface Files Its value will be affected by memory nodes hotplug events. 
+ cpuset.cpus.reserve + A read-write multiple values file which exists only on root + cgroup. + + It lists all the CPUs that are reserved for adjacent and remote + partitions created in the system. See the next section for + more information on what adjacent and remote partitions are. + + Creation of an adjacent partition does not require touching this + control file, as CPU reservation is done automatically. + In order to create a remote partition, the CPUs needed by the + remote partition have to be written to this file first. + + Because "cpuset.cpus.reserve" holds reserve CPUs + that can be used by multiple partitions, and automatic reservation + may also race with manual reservation, the extension prefixes + "+" and "-" are allowed for this file to reduce such races. + + A "+" prefix can be used to indicate a list of additional + CPUs that are to be added without disturbing the CPUs that are + originally there. For example, if its current value is "3-4", + echoing "+5" to it will change it to "3-5". + + Once a remote partition is destroyed, its CPUs have to be + removed from this file, or no other process can use them. A "-" + prefix can be used to remove a list of CPUs from it. However, + removing CPUs that are currently used in existing partitions + may cause those partitions to become invalid. A single "-" + character without any number can be used to indicate removal + of all the free CPUs not yet allocated to any partitions to + avoid accidental partition invalidation. + cpuset.cpus.partition A read-write single value file which exists on non-root cpuset-enabled cgroups. This flag is owned by the parent cgroup @@ -2228,25 +2260,49 @@ Cpuset Interface Files "isolated" Partition root without load balancing ========== ===================================== - The root cgroup is always a partition root and its state - cannot be changed. All other non-root cgroups start out as - "member". 
+ A cpuset partition is a collection of cgroups with a partition + root at the top of the hierarchy and its descendants except + those that are separate partition roots themselves and their + descendants. A partition has exclusive access to the set of + CPUs allocated to it. Other cgroups outside of that partition + cannot use any CPUs in that set. + + There are two types of partitions - adjacent and remote. The + parent of an adjacent partition must be a valid partition root. + Partition roots of adjacent partitions are all clustered around + the root cgroup. Creation of an adjacent partition is done by + writing the desired partition type into "cpuset.cpus.partition". + + A remote partition does not require a partition root parent. + So a remote partition can be formed far from the root cgroup. + However, its creation is a 2-step process. The CPUs needed + by a remote partition ("cpuset.cpus" of the partition root) + have to be written into "cpuset.cpus.reserve" of the root + cgroup first. After that, "isolated" can be written into + "cpuset.cpus.partition" of the partition root to form a remote + isolated partition, which is the only supported remote partition + type for now. + + All remote partitions are terminal, as adjacent partitions cannot + be created underneath them. With the way a remote partition is + formed, it is not possible to create another valid remote + partition underneath it. + + The root cgroup is always a partition root and its state cannot + be changed. All other non-root cgroups start out as "member". When set to "root", the current cgroup is the root of a new - partition or scheduling domain that comprises itself and all - its descendants except those that are separate partition roots - themselves and their descendants. + partition or scheduling domain. - When set to "isolated", the CPUs in that partition root will + When set to "isolated", the CPUs in that partition will be in an isolated state without any load balancing from the scheduler. 
Tasks placed in such a partition with multiple CPUs should be carefully distributed and bound to each of the individual CPUs for optimal performance. - The value shown in "cpuset.cpus.effective" of a partition root - is the CPUs that the partition root can dedicate to a potential - new child partition root. The new child subtracts available - CPUs from its parent "cpuset.cpus.effective". + The value shown in "cpuset.cpus.effective" of a partition root is + the CPUs that are dedicated to that partition and not available + to cgroups outside of that partition. A partition root ("root" or "isolated") can be in one of the two possible states - valid or invalid. An invalid partition @@ -2270,8 +2326,8 @@ Cpuset Interface Files In the case of an invalid partition root, a descriptive string on why the partition is invalid is included within parentheses. - For a partition root to become valid, the following conditions - must be met. + For an adjacent partition root to be valid, the following + conditions must be met. 1) The "cpuset.cpus" is exclusive with its siblings, i.e. they are not shared by any of its siblings (exclusivity rule). @@ -2281,6 +2337,16 @@ Cpuset Interface Files 4) The "cpuset.cpus.effective" cannot be empty unless there is no task associated with this partition. + For a remote partition root to be valid, the following conditions + must be met. + + 1) The same exclusivity rule as an adjacent partition root. + 2) The "cpuset.cpus" is not empty and all the CPUs must be + present in "cpuset.cpus.reserve" of the root cgroup and none + of them are allocated to another partition. + 3) The "cpuset.cpus" value must be present in all its ancestors + to ensure proper hierarchical cpu distribution. + External events like hotplug or changes to "cpuset.cpus" can cause a valid partition root to become invalid and vice versa. 
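The two-step remote-partition workflow described above can be sketched as a small shell helper. This is an illustrative sketch only: the control files ("cpuset.cpus.reserve", "cpuset.cpus", "cpuset.cpus.partition") are the ones documented in this patch series, but the helper name make_remote_isolated is hypothetical, and CGROUP_ROOT parameterizes the cgroup v2 mount point (normally /sys/fs/cgroup), which requires root privileges and a kernel with this series applied.

```shell
# CGROUP_ROOT would normally be the cgroup v2 mount point.
CGROUP_ROOT=${CGROUP_ROOT:-/sys/fs/cgroup}

# make_remote_isolated <cgroup-relative-path> <cpu-list>
# Hypothetical helper showing the documented two-step sequence.
make_remote_isolated() {
    cg="$CGROUP_ROOT/$1"
    cpus="$2"
    # Step 1: reserve the CPUs in the top cpuset; the "+" prefix adds
    # them without disturbing CPUs that are already reserved.
    echo "+$cpus" > "$CGROUP_ROOT/cpuset.cpus.reserve"
    # Step 2: give the target cgroup the same CPUs, then turn it into
    # a remote isolated partition root.
    echo "$cpus" > "$cg/cpuset.cpus"
    echo isolated > "$cg/cpuset.cpus.partition"
}
```

After the partition is destroyed, the documentation notes that the reservation must be released by hand: a "-" prefix removes a CPU list from "cpuset.cpus.reserve", and a bare "-" releases all free reserve CPUs not allocated to any partition.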
Note that a task cannot be moved to a cgroup with empty

From patchwork Wed May 31 16:34:05 2023
X-Patchwork-Submitter: Waiman Long
X-Patchwork-Id: 13262529
From: Waiman Long
To: Tejun Heo, Zefan Li, Johannes Weiner, Jonathan Corbet, Shuah Khan
Cc: linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Juri Lelli, Valentin Schneider, Frederic Weisbecker, Mrunal Patel, Ryan Phillips, Brent Rowsell, Peter Hunt, Phil Auld, Waiman Long
Subject: [PATCH v2 6/6] cgroup/cpuset: Extend test_cpuset_prs.sh to test remote partition
Date: Wed, 31 May 2023 12:34:05 -0400
Message-Id: <20230531163405.2200292-7-longman@redhat.com>
X-Mailing-List: linux-kselftest@vger.kernel.org

This patch extends the test_cpuset_prs.sh test script to support testing
of the new remote partition type and the new "cpuset.cpus.reserve" file
of the root cgroup by adding new tests for them. In addition, the
following changes are also made:

 1) Run the state transition tests directly under root to ease testing
    of remote partitions and remove the unneeded test column.
 2) Add support for the .__DEBUG__.cpuset.cpus.subpartitions file if the
    "cgroup_debug" kernel boot option is specified, and add a new column
    to TEST_MATRIX for testing against this cgroup control file.
 3) Add another column for the list of expected isolated CPUs and
    compare it with the actual value by looking at the state of
    /sys/kernel/debug/sched/domains, which will be available if the
    verbose flag is set.
Signed-off-by: Waiman Long --- .../selftests/cgroup/test_cpuset_prs.sh | 403 ++++++++++++------ 1 file changed, 267 insertions(+), 136 deletions(-) diff --git a/tools/testing/selftests/cgroup/test_cpuset_prs.sh b/tools/testing/selftests/cgroup/test_cpuset_prs.sh index 2b5215cc599f..8054b2f16a4f 100755 --- a/tools/testing/selftests/cgroup/test_cpuset_prs.sh +++ b/tools/testing/selftests/cgroup/test_cpuset_prs.sh @@ -3,9 +3,13 @@ # # Test for cpuset v2 partition root state (PRS) # -# The sched verbose flag is set, if available, so that the console log +# The sched verbose flag can be optionally set so that the console log # can be examined for the correct setting of scheduling domain. # +# Due to RCU freeing of dying cpusets, this test may report effective cpus +# reset failure if the system is not able to clean up those cpusets in time. +# Repeating the test with the -d option to increase the delay factor may help. +# skip_test() { echo "$1" @@ -22,27 +26,27 @@ WAIT_INOTIFY=$(cd $(dirname $0); pwd)/wait_inotify # Find cgroup v2 mount point CGROUP2=$(mount -t cgroup2 | head -1 | awk -e '{print $3}') [[ -n "$CGROUP2" ]] || skip_test "Cgroup v2 mount point not found!" +RESERVE_CPUS=$CGROUP2/cpuset.cpus.reserve +CPULIST=$(cat $CGROUP2/cpuset.cpus.effective) -CPUS=$(lscpu | grep "^CPU(s):" | sed -e "s/.*:[[:space:]]*//") -[[ $CPUS -lt 8 ]] && skip_test "Test needs at least 8 cpus available!" +NR_CPUS=$(lscpu | grep "^CPU(s):" | sed -e "s/.*:[[:space:]]*//") +[[ $NR_CPUS -lt 8 ]] && skip_test "Test needs at least 8 cpus available!"
# Set verbose flag and delay factor PROG=$1 -VERBOSE= +VERBOSE=0 DELAY_FACTOR=1 SCHED_DEBUG= while [[ "$1" = -* ]] do case "$1" in - -v) VERBOSE=1 + -v) ((VERBOSE++)) # Enable sched/verbose can slow thing down [[ $DELAY_FACTOR -eq 1 ]] && DELAY_FACTOR=2 - break ;; -d) DELAY_FACTOR=$2 shift - break ;; *) echo "Usage: $PROG [-v] [-d " exit @@ -52,7 +56,7 @@ do done # Set sched verbose flag if available when "-v" option is specified -if [[ -n "$VERBOSE" && -d /sys/kernel/debug/sched ]] +if [[ $VERBOSE -gt 0 && -d /sys/kernel/debug/sched ]] then # Used to restore the original setting during cleanup SCHED_DEBUG=$(cat /sys/kernel/debug/sched/verbose) @@ -61,15 +65,28 @@ fi cd $CGROUP2 echo +cpuset > cgroup.subtree_control + +# +# If cpuset has been set up and used in child cgroups, we may not be able to +# create partition under root cgroup because of the CPU exclusivity rule. +# So we are going to skip the test if this is the case. +# [[ -d test ]] || mkdir test -cd test +echo 0-6 > test/cpuset.cpus +echo root > test/cpuset.cpus.partition +cat test/cpuset.cpus.partition | grep -q invalid +RESULT=$? +echo member > test/cpuset.cpus.partition +echo "" > test/cpuset.cpus +[[ $RESULT -eq 0 ]] && skip_test "Child cgroups are using cpuset!" cleanup() { online_cpus + cd $CGROUP2 rmdir A1/A2/A3 A1/A2 A1 B1 > /dev/null 2>&1 - cd .. rmdir test > /dev/null 2>&1 + echo "" > cpuset.cpus.reserve [[ -n "$SCHED_DEBUG" ]] && echo "$SCHED_DEBUG" > /sys/kernel/debug/sched/verbose } @@ -103,7 +120,7 @@ test_partition() [[ $? 
-eq 0 ]] || exit 1 ACTUAL_VAL=$(cat cpuset.cpus.partition) [[ $ACTUAL_VAL != $EXPECTED_VAL ]] && { - echo "cpuset.cpus.partition: expect $EXPECTED_VAL, found $EXPECTED_VAL" + echo "cpuset.cpus.partition: expect $EXPECTED_VAL, found $ACTUAL_VAL" echo "Test FAILED" exit 1 } @@ -114,7 +131,7 @@ test_effective_cpus() EXPECTED_VAL=$1 ACTUAL_VAL=$(cat cpuset.cpus.effective) [[ "$ACTUAL_VAL" != "$EXPECTED_VAL" ]] && { - echo "cpuset.cpus.effective: expect '$EXPECTED_VAL', found '$EXPECTED_VAL'" + echo "cpuset.cpus.effective: expect '$EXPECTED_VAL', found '$ACTUAL_VAL'" echo "Test FAILED" exit 1 } @@ -139,6 +156,7 @@ test_add_proc() # test_isolated() { + cd $CGROUP2/test echo 2-3 > cpuset.cpus TYPE=$(cat cpuset.cpus.partition) [[ $TYPE = member ]] || echo member > cpuset.cpus.partition @@ -203,125 +221,152 @@ test_isolated() # # Cgroup test hierarchy # -# test -- A1 -- A2 -- A3 -# \- B1 +# root -- A1 -- A2 -- A3 +# +- B1 # -# P = set cpus.partition (0:member, 1:root, 2:isolated, -1:root invalid) +# P = set cpus.partition (0:member, 1:root, 2:isolated) # C = add cpu-list # S

= use prefix in subtree_control # T = put a task into cgroup -# O- = Write to CPU online file of +# O= = Write to CPU online file of # SETUP_A123_PARTITIONS="C1-3:P1:S+ C2-3:P1:S+ C3:P1" TEST_MATRIX=( - # test old-A1 old-A2 old-A3 old-B1 new-A1 new-A2 new-A3 new-B1 fail ECPUs Pstate - # ---- ------ ------ ------ ------ ------ ------ ------ ------ ---- ----- ------ - " S+ C0-1 . . C2-3 S+ C4-5 . . 0 A2:0-1" - " S+ C0-1 . . C2-3 P1 . . . 0 " - " S+ C0-1 . . C2-3 P1:S+ C0-1:P1 . . 0 " - " S+ C0-1 . . C2-3 P1:S+ C1:P1 . . 0 " - " S+ C0-1:S+ . . C2-3 . . . P1 0 " - " S+ C0-1:P1 . . C2-3 S+ C1 . . 0 " - " S+ C0-1:P1 . . C2-3 S+ C1:P1 . . 0 " - " S+ C0-1:P1 . . C2-3 S+ C1:P1 . P1 0 " - " S+ C0-1:P1 . . C2-3 C4-5 . . . 0 A1:4-5" - " S+ C0-1:P1 . . C2-3 S+:C4-5 . . . 0 A1:4-5" - " S+ C0-1 . . C2-3:P1 . . . C2 0 " - " S+ C0-1 . . C2-3:P1 . . . C4-5 0 B1:4-5" - " S+ C0-3:P1:S+ C2-3:P1 . . . . . . 0 A1:0-1,A2:2-3" - " S+ C0-3:P1:S+ C2-3:P1 . . C1-3 . . . 0 A1:1,A2:2-3" - " S+ C2-3:P1:S+ C3:P1 . . C3 . . . 0 A1:,A2:3 A1:P1,A2:P1" - " S+ C2-3:P1:S+ C3:P1 . . C3 P0 . . 0 A1:3,A2:3 A1:P1,A2:P0" - " S+ C2-3:P1:S+ C2:P1 . . C2-4 . . . 0 A1:3-4,A2:2" - " S+ C2-3:P1:S+ C3:P1 . . C3 . . C0-2 0 A1:,B1:0-2 A1:P1,A2:P1" - " S+ $SETUP_A123_PARTITIONS . C2-3 . . . 0 A1:,A2:2,A3:3 A1:P1,A2:P1,A3:P1" + # old-A1 old-A2 old-A3 old-B1 new-A1 new-A2 new-A3 new-B1 fail ECPUs Pstate PCPUS ISOLCPUS + # ------ ------ ------ ------ ------ ------ ------ ------ ---- ----- ------ ----- -------- + " C0-1 . . C2-3 S+ C4-5 . . 0 A2:0-1" + " C0-1 . . C2-3 P1 . . . 0 " + " C0-1 . . C2-3 P1:S+ C0-1:P1 . . 0 " + " C0-1 . . C2-3 P1:S+ C1:P1 . . 0 " + " C0-1:S+ . . C2-3 . . . P1 0 " + " C0-1:P1 . . C2-3 S+ C1 . . 0 " + " C0-1:P1 . . C2-3 S+ C1:P1 . . 0 " + " C0-1:P1 . . C2-3 S+ C1:P1 . P1 0 " + " C0-1:P1 . . C2-3 C4-5 . . . 0 A1:4-5" + " C0-1:P1 . . C2-3 S+:C4-5 . . . 0 A1:4-5" + " C0-1 . . C2-3:P1 . . . C2 0 " + " C0-1 . . C2-3:P1 . . . C4-5 0 B1:4-5" + "C0-3:P1:S+ C2-3:P1 . . . . . . 
0 A1:0-1,A2:2-3" + "C0-3:P1:S+ C2-3:P1 . . C1-3 . . . 0 A1:1,A2:2-3" + "C2-3:P1:S+ C3:P1 . . C3 . . . 0 A1:,A2:3 A1:P1,A2:P1" + "C2-3:P1:S+ C3:P1 . . C3 P0 . . 0 A1:3,A2:3 A1:P1,A2:P0" + "C2-3:P1:S+ C2:P1 . . C2-4 . . . 0 A1:3-4,A2:2" + "C2-3:P1:S+ C3:P1 . . C3 . . C0-2 0 A1:,B1:0-2 A1:P1,A2:P1" + "$SETUP_A123_PARTITIONS . C2-3 . . . 0 A1:,A2:2,A3:3 A1:P1,A2:P1,A3:P1" # CPU offlining cases: - " S+ C0-1 . . C2-3 S+ C4-5 . O2-0 0 A1:0-1,B1:3" - " S+ C0-3:P1:S+ C2-3:P1 . . O2-0 . . . 0 A1:0-1,A2:3" - " S+ C0-3:P1:S+ C2-3:P1 . . O2-0 O2-1 . . 0 A1:0-1,A2:2-3" - " S+ C0-3:P1:S+ C2-3:P1 . . O1-0 . . . 0 A1:0,A2:2-3" - " S+ C0-3:P1:S+ C2-3:P1 . . O1-0 O1-1 . . 0 A1:0-1,A2:2-3" - " S+ C2-3:P1:S+ C3:P1 . . O3-0 O3-1 . . 0 A1:2,A2:3 A1:P1,A2:P1" - " S+ C2-3:P1:S+ C3:P2 . . O3-0 O3-1 . . 0 A1:2,A2:3 A1:P1,A2:P2" - " S+ C2-3:P1:S+ C3:P1 . . O2-0 O2-1 . . 0 A1:2,A2:3 A1:P1,A2:P1" - " S+ C2-3:P1:S+ C3:P2 . . O2-0 O2-1 . . 0 A1:2,A2:3 A1:P1,A2:P2" - " S+ C2-3:P1:S+ C3:P1 . . O2-0 . . . 0 A1:,A2:3 A1:P1,A2:P1" - " S+ C2-3:P1:S+ C3:P1 . . O3-0 . . . 0 A1:2,A2: A1:P1,A2:P1" - " S+ C2-3:P1:S+ C3:P1 . . T:O2-0 . . . 0 A1:3,A2:3 A1:P1,A2:P-1" - " S+ C2-3:P1:S+ C3:P1 . . . T:O3-0 . . 0 A1:2,A2:2 A1:P1,A2:P-1" - " S+ $SETUP_A123_PARTITIONS . O1-0 . . . 0 A1:,A2:2,A3:3 A1:P1,A2:P1,A3:P1" - " S+ $SETUP_A123_PARTITIONS . O2-0 . . . 0 A1:1,A2:,A3:3 A1:P1,A2:P1,A3:P1" - " S+ $SETUP_A123_PARTITIONS . O3-0 . . . 0 A1:1,A2:2,A3: A1:P1,A2:P1,A3:P1" - " S+ $SETUP_A123_PARTITIONS . T:O1-0 . . . 0 A1:2-3,A2:2-3,A3:3 A1:P1,A2:P-1,A3:P-1" - " S+ $SETUP_A123_PARTITIONS . . T:O2-0 . . 0 A1:1,A2:3,A3:3 A1:P1,A2:P1,A3:P-1" - " S+ $SETUP_A123_PARTITIONS . . . T:O3-0 . 0 A1:1,A2:2,A3:2 A1:P1,A2:P1,A3:P-1" - " S+ $SETUP_A123_PARTITIONS . T:O1-0 O1-1 . . 0 A1:1,A2:2,A3:3 A1:P1,A2:P1,A3:P1" - " S+ $SETUP_A123_PARTITIONS . . T:O2-0 O2-1 . 0 A1:1,A2:2,A3:3 A1:P1,A2:P1,A3:P1" - " S+ $SETUP_A123_PARTITIONS . . . T:O3-0 O3-1 0 A1:1,A2:2,A3:3 A1:P1,A2:P1,A3:P1" - " S+ $SETUP_A123_PARTITIONS . T:O1-0 O2-0 O1-1 . 
0 A1:1,A2:,A3:3 A1:P1,A2:P1,A3:P1" - " S+ $SETUP_A123_PARTITIONS . T:O1-0 O2-0 O2-1 . 0 A1:2-3,A2:2-3,A3:3 A1:P1,A2:P-1,A3:P-1" - - # test old-A1 old-A2 old-A3 old-B1 new-A1 new-A2 new-A3 new-B1 fail ECPUs Pstate - # ---- ------ ------ ------ ------ ------ ------ ------ ------ ---- ----- ------ + " C0-1 . . C2-3 S+ C4-5 . O2=0 0 A1:0-1,B1:3" + "C0-3:P1:S+ C2-3:P1 . . O2=0 . . . 0 A1:0-1,A2:3" + "C0-3:P1:S+ C2-3:P1 . . O2=0 O2=1 . . 0 A1:0-1,A2:2-3" + "C0-3:P1:S+ C2-3:P1 . . O1=0 . . . 0 A1:0,A2:2-3" + "C0-3:P1:S+ C2-3:P1 . . O1=0 O1=1 . . 0 A1:0-1,A2:2-3" + "C2-3:P1:S+ C3:P1 . . O3=0 O3=1 . . 0 A1:2,A2:3 A1:P1,A2:P1" + "C2-3:P1:S+ C3:P2 . . O3=0 O3=1 . . 0 A1:2,A2:3 A1:P1,A2:P2" + "C2-3:P1:S+ C3:P1 . . O2=0 O2=1 . . 0 A1:2,A2:3 A1:P1,A2:P1" + "C2-3:P1:S+ C3:P2 . . O2=0 O2=1 . . 0 A1:2,A2:3 A1:P1,A2:P2" + "C2-3:P1:S+ C3:P1 . . O2=0 . . . 0 A1:,A2:3 A1:P1,A2:P1" + "C2-3:P1:S+ C3:P1 . . O3=0 . . . 0 A1:2,A2: A1:P1,A2:P1" + "C2-3:P1:S+ C3:P1 . . T:O2=0 . . . 0 A1:3,A2:3 A1:P1,A2:P-1" + "C2-3:P1:S+ C3:P1 . . . T:O3=0 . . 0 A1:2,A2:2 A1:P1,A2:P-1" + "$SETUP_A123_PARTITIONS . O1=0 . . . 0 A1:,A2:2,A3:3 A1:P1,A2:P1,A3:P1" + "$SETUP_A123_PARTITIONS . O2=0 . . . 0 A1:1,A2:,A3:3 A1:P1,A2:P1,A3:P1" + "$SETUP_A123_PARTITIONS . O3=0 . . . 0 A1:1,A2:2,A3: A1:P1,A2:P1,A3:P1" + "$SETUP_A123_PARTITIONS . T:O1=0 . . . 0 A1:2-3,A2:2-3,A3:3 A1:P1,A2:P-1,A3:P-1" + "$SETUP_A123_PARTITIONS . . T:O2=0 . . 0 A1:1,A2:3,A3:3 A1:P1,A2:P1,A3:P-1" + "$SETUP_A123_PARTITIONS . . . T:O3=0 . 0 A1:1,A2:2,A3:2 A1:P1,A2:P1,A3:P-1" + "$SETUP_A123_PARTITIONS . T:O1=0 O1=1 . . 0 A1:1,A2:2,A3:3 A1:P1,A2:P1,A3:P1" + "$SETUP_A123_PARTITIONS . . T:O2=0 O2=1 . 0 A1:1,A2:2,A3:3 A1:P1,A2:P1,A3:P1" + "$SETUP_A123_PARTITIONS . . . T:O3=0 O3=1 0 A1:1,A2:2,A3:3 A1:P1,A2:P1,A3:P1" + "$SETUP_A123_PARTITIONS . T:O1=0 O2=0 O1=1 . 0 A1:1,A2:,A3:3 A1:P1,A2:P1,A3:P1" + "$SETUP_A123_PARTITIONS . T:O1=0 O2=0 O2=1 . 
0 A1:2-3,A2:2-3,A3:3 A1:P1,A2:P-1,A3:P-1" + + # old-A1 old-A2 old-A3 old-B1 new-A1 new-A2 new-A3 new-B1 fail ECPUs Pstate PCPUS ISOLCPUS + # ------ ------ ------ ------ ------ ------ ------ ------ ---- ----- ------ ----- -------- + # + # Remote partition and cpuset.cpus.reserve tests + # + " C0-3:S+ C1-3:S+ C2-3 . R2-3 . . . 0 A1:0-1,A2:1,A3:1 . . 2-3" + " C0-3:S+ C1-3:S+ C2-3 . R2-3 P2 . . 0 A1:0-1,A2:1,A3:1 A1:P0,A2:P-2 . 2-3" + " C0-3:S+ C1-3:S+ C2-3 . R2-3 . P2 . 0 A1:0-1,A2:1,A3:2-3 A1:P0,A3:P2 . 2-3" + " C0-3:S+ C1-3:S+ C2-3 . R2-3 . P2:C3 . 0 A1:0-1,A2:1,A3:3 A1:P0,A3:P2 . 2-3" + " C0-3:S+ C1-3:S+ C2-3 C2-3 R2-3 . . P2 0 A1:0-1,A2:1,A3:1 A1:P0,A3:P0,B1:P-2 . 2-3" + " C0-3:S+ C1-3:S+ C2-3 C4-5 R4-5 . . P2 0 B1:4-5 B1:P2 . 4-5" + " C0-3:S+ C1-3:S+ C2-3 C4 R2-4 . P2 P2 0 A3:2-3,B1:4 A3:P2,B1:P2 . 2-4" + " C0-3:S+ C1-3:S+ C2-3 C4 R2-4 . P2:C1-3 P2 0 A3:1,B1:4 A3:P-2,B1:P2 . 2-4" + " C0-3:S+ C1-3:S+ C2-3 C4 R2-3 . P2 P2 0 A3:2-3,B1:4 A3:P2,B1:P2 . 2-4" + " C0-3:S+ C1-3:S+ C2-3 C4 R1-3 P2 P2 . 0 A2:1-3,A3:2-3 A2:P2,A3:P-2 . 1-3" + " C0-3:S+ C1-3:S+ C2-3 C4 R2-3 . P2 P2:C4-5 0 A3:2-3,B1:4-5 A3:P2,B1:P2 . 2-5" + " C0-3:S+ C1-3:S+ C2-3 C4 R2-4 . P2 P2:R-4 0 A3:2-3 A3:P2,B1:P-2 . 2-3" + " C0-3:S+ C1-3:S+ C2-3 C4 R2-4 . P2:R-2 P2 0 A3:2,B1:4 A3:P-2,B1:P2 . 3-4" + + # Remote partition offline test + " C0-3:S+ C1-3:S+ C2-3 . R2-3 . P2:O2=0 . 0 A1:0-1,A2:1,A3:3 A1:P0,A3:P2 . 2-3" + " C0-3:S+ C1-3:S+ C2-3 . R2-3 . P2:O2=0 O2=1 0 A1:0-1,A2:1,A3:2-3 A1:P0,A3:P2 . 2-3" + " C0-3:S+ C1-3:S+ C2 . R2-3 . P2:O2=0 . 0 A1:0-1,A2:1,A3:1 A1:P0,A3:P-2 . 2-3" + + # An invalidated remote partition cannot self-recover from hotplug + " C0-3:S+ C1-3:S+ C2 . R2-3 . P2:O2=0 O2=1 0 A1:0-1,A2:1,A3:1 A1:P0,A3:P-2 . 
2-3" + + # base old-A1 old-A2 old-A3 old-B1 new-A1 new-A2 new-A3 new-B1 fail ECPUs Pstate PCPUS ISOLCPUS + # ---- ------ ------ ------ ------ ------ ------ ------ ------ ---- ----- ------ ----- -------- # # Incorrect change to cpuset.cpus invalidates partition root # # Adding CPUs to partition root that are not in parent's # cpuset.cpus is allowed, but those extra CPUs are ignored. - " S+ C2-3:P1:S+ C3:P1 . . . C2-4 . . 0 A1:,A2:2-3 A1:P1,A2:P1" + "C2-3:P1:S+ C3:P1 . . . C2-4 . . 0 A1:,A2:2-3 A1:P1,A2:P1" # Taking away all CPUs from parent or itself if there are tasks # will make the partition invalid. - " S+ C2-3:P1:S+ C3:P1 . . T C2-3 . . 0 A1:2-3,A2:2-3 A1:P1,A2:P-1" - " S+ C3:P1:S+ C3 . . T P1 . . 0 A1:3,A2:3 A1:P1,A2:P-1" - " S+ $SETUP_A123_PARTITIONS . T:C2-3 . . . 0 A1:2-3,A2:2-3,A3:3 A1:P1,A2:P-1,A3:P-1" - " S+ $SETUP_A123_PARTITIONS . T:C2-3:C1-3 . . . 0 A1:1,A2:2,A3:3 A1:P1,A2:P1,A3:P1" + "C2-3:P1:S+ C3:P1 . . T C2-3 . . 0 A1:2-3,A2:2-3 A1:P1,A2:P-1" + " C3:P1:S+ C3 . . T P1 . . 0 A1:3,A2:3 A1:P1,A2:P-1" + "$SETUP_A123_PARTITIONS . T:C2-3 . . . 0 A1:2-3,A2:2-3,A3:3 A1:P1,A2:P-1,A3:P-1" + "$SETUP_A123_PARTITIONS . T:C2-3:C1-3 . . . 0 A1:1,A2:2,A3:3 A1:P1,A2:P1,A3:P1" # Changing a partition root to member makes child partitions invalid - " S+ C2-3:P1:S+ C3:P1 . . P0 . . . 0 A1:2-3,A2:3 A1:P0,A2:P-1" - " S+ $SETUP_A123_PARTITIONS . C2-3 P0 . . 0 A1:2-3,A2:2-3,A3:3 A1:P1,A2:P0,A3:P-1" + "C2-3:P1:S+ C3:P1 . . P0 . . . 0 A1:2-3,A2:3 A1:P0,A2:P-1" + "$SETUP_A123_PARTITIONS . C2-3 P0 . . 0 A1:2-3,A2:2-3,A3:3 A1:P1,A2:P0,A3:P-1" # cpuset.cpus can contains cpus not in parent's cpuset.cpus as long # as they overlap. - " S+ C2-3:P1:S+ . . . . C3-4:P1 . . 0 A1:2,A2:3 A1:P1,A2:P1" + "C2-3:P1:S+ . . . . C3-4:P1 . . 0 A1:2,A2:3 A1:P1,A2:P1" # Deletion of CPUs distributed to child cgroup is allowed. - " S+ C0-1:P1:S+ C1 . C2-3 C4-5 . . . 0 A1:4-5,A2:4-5" + "C0-1:P1:S+ C1 . C2-3 C4-5 . . . 
0 A1:4-5,A2:4-5" # To become a valid partition root, cpuset.cpus must overlap parent's # cpuset.cpus. - " S+ C0-1:P1 . . C2-3 S+ C4-5:P1 . . 0 A1:0-1,A2:0-1 A1:P1,A2:P-1" + " C0-1:P1 . . C2-3 S+ C4-5:P1 . . 0 A1:0-1,A2:0-1 A1:P1,A2:P-1" # Enabling partition with child cpusets is allowed - " S+ C0-1:S+ C1 . C2-3 P1 . . . 0 A1:0-1,A2:1 A1:P1" + " C0-1:S+ C1 . C2-3 P1 . . . 0 A1:0-1,A2:1 A1:P1" # A partition root with non-partition root parent is invalid, but it # can be made valid if its parent becomes a partition root too. - " S+ C0-1:S+ C1 . C2-3 . P2 . . 0 A1:0-1,A2:1 A1:P0,A2:P-2" - " S+ C0-1:S+ C1:P2 . C2-3 P1 . . . 0 A1:0,A2:1 A1:P1,A2:P2" + " C0-1:S+ C1 . C2-3 . P2 . . 0 A1:0-1,A2:1 A1:P0,A2:P-2" + " C0-1:S+ C1:P2 . C2-3 P1 . . . 0 A1:0,A2:1 A1:P1,A2:P2" # A non-exclusive cpuset.cpus change will invalidate partition and its siblings - " S+ C0-1:P1 . . C2-3 C0-2 . . . 0 A1:0-2,B1:2-3 A1:P-1,B1:P0" - " S+ C0-1:P1 . . P1:C2-3 C0-2 . . . 0 A1:0-2,B1:2-3 A1:P-1,B1:P-1" - " S+ C0-1 . . P1:C2-3 C0-2 . . . 0 A1:0-2,B1:2-3 A1:P0,B1:P-1" + " C0-1:P1 . . C2-3 C0-2 . . . 0 A1:0-2,B1:2-3 A1:P-1,B1:P0" + " C0-1:P1 . . P1:C2-3 C0-2 . . . 0 A1:0-2,B1:2-3 A1:P-1,B1:P-1" + " C0-1 . . P1:C2-3 C0-2 . . . 0 A1:0-2,B1:2-3 A1:P0,B1:P-1" - # test old-A1 old-A2 old-A3 old-B1 new-A1 new-A2 new-A3 new-B1 fail ECPUs Pstate - # ---- ------ ------ ------ ------ ------ ------ ------ ------ ---- ----- ------ + # base old-A1 old-A2 old-A3 old-B1 new-A1 new-A2 new-A3 new-B1 fail ECPUs Pstate PCPUS ISOLCPUS + # ---- ------ ------ ------ ------ ------ ------ ------ ------ ---- ----- ------ ----- -------- # Failure cases: # A task cannot be added to a partition with no cpu - " S+ C2-3:P1:S+ C3:P1 . . O2-0:T . . . 1 A1:,A2:3 A1:P1,A2:P1" + "C2-3:P1:S+ C3:P1 . . O2=0:T . . . 
1 A1:,A2:3 A1:P1,A2:P1" ) # # Write to the cpu online file -# $1 - - where = cpu number, value to be written +# $1 - = where = cpu number, value to be written # write_cpu_online() { - CPU=${1%-*} - VAL=${1#*-} + CPU=${1%=*} + VAL=${1#*=} CPUFILE=//sys/devices/system/cpu/cpu${CPU}/online if [[ $VAL -eq 0 ]] then @@ -349,11 +394,12 @@ set_ctrl_state() TMPMSG=/tmp/.msg_$$ CGRP=$1 STATE=$2 - SHOWERR=${3}${VERBOSE} + SHOWERR=${3} CTRL=${CTRL:=$CONTROLLER} HASERR=0 REDIRECT="2> $TMPMSG" [[ -z "$STATE" || "$STATE" = '.' ]] && return 0 + [[ $VERBOSE -gt 0 ]] && SHOWERR=1 rm -f $TMPMSG for CMD in $(echo $STATE | sed -e "s/:/ /g") @@ -368,6 +414,11 @@ set_ctrl_state() PREFIX=${CMD#?} COMM="echo ${PREFIX}${CTRL} > $SFILE" eval $COMM $REDIRECT + elif [[ $S = R ]] + then + CPUS=${CMD#?} + COMM="echo $CPUS > $RESERVE_CPUS" + eval $COMM $REDIRECT elif [[ $S = C ]] then CPUS=${CMD#?} @@ -430,7 +481,7 @@ online_cpus() [[ -n "OFFLINE_CPUS" ]] && { for C in $OFFLINE_CPUS do - write_cpu_online ${C}-1 + write_cpu_online ${C}=1 done } } @@ -443,18 +494,25 @@ reset_cgroup_states() echo 0 > $CGROUP2/cgroup.procs online_cpus rmdir A1/A2/A3 A1/A2 A1 B1 > /dev/null 2>&1 - set_ctrl_state . S- + pause 0.02 + set_ctrl_state . R- pause 0.01 } dump_states() { - for DIR in A1 A1/A2 A1/A2/A3 B1 + for DIR in . 
A1 A1/A2 A1/A2/A3 B1 do ECPUS=$DIR/cpuset.cpus.effective + RCPUS=$DIR/cpuset.cpus.reserve + CPUS=$DIR/cpuset.cpus PRS=$DIR/cpuset.cpus.partition + PCPUS=$DIR/.__DEBUG__.cpuset.cpus.subpartitions + [[ -e $CPUS ]] && echo "$CPUS: $(cat $CPUS)" + [[ -e $RCPUS ]] && echo "$RCPUS: $(cat $RCPUS)" [[ -e $ECPUS ]] && echo "$ECPUS: $(cat $ECPUS)" [[ -e $PRS ]] && echo "$PRS: $(cat $PRS)" + [[ -e $PCPUS ]] && echo "$PCPUS: $(cat $PCPUS)" done } @@ -478,6 +536,26 @@ check_effective_cpus() done } +# +# Check subparts cpus +# $1 - check string, format: :[,:]* +# +check_subparts_cpus() +{ + CHK_STR=$1 + for CHK in $(echo $CHK_STR | sed -e "s/,/ /g") + do + set -- $(echo $CHK | sed -e "s/:/ /g") + CGRP=$1 + CPUS=$2 + [[ $CGRP = A2 ]] && CGRP=A1/A2 + [[ $CGRP = A3 ]] && CGRP=A1/A2/A3 + FILE=$CGRP/.__DEBUG__.cpuset.cpus.subpartitions + [[ -e $FILE ]] || return 0 # Skip test + [[ $CPUS = $(cat $FILE) ]] || return 1 + done +} + # # Check cgroup states # $1 - check string, format: :[,:]* @@ -524,6 +602,67 @@ check_cgroup_states() return 0 } +# +# Get isolated (including offline) CPUs by looking at +# /sys/kernel/debug/sched/domains and compare that with the expected value. +# +# $1 - expected isolated cpu list +# +check_isolcpus() +{ + EXPECT_VAL=$1 + ISOLCPUS= + LASTISOLCPU= + SCHED_DOMAINS=/sys/kernel/debug/sched/domains + [[ -d $SCHED_DOMAINS ]] || { + # Check reserve cpus instead + ISOLCPUS=$(cat $RESERVE_CPUS) + [[ $EXPECT_VAL = $ISOLCPUS ]] + return $? + } + + for ((CPU=0; CPU < $NR_CPUS; CPU++)) + do + [[ -n "$(ls ${SCHED_DOMAINS}/cpu$CPU)" ]] && continue + + if [[ -z "$LASTISOLCPU" ]] + then + ISOLCPUS=$CPU + LASTISOLCPU=$CPU + elif [[ "$LASTISOLCPU" -eq $((CPU - 1)) ]] + then + echo $ISOLCPUS | grep -q "\<$LASTISOLCPU\$" + if [[ $? 
-eq 0 ]] + then + ISOLCPUS=${ISOLCPUS}- + fi + LASTISOLCPU=$CPU + else + if [[ $ISOLCPUS = *- ]] + then + ISOLCPUS=${ISOLCPUS}$LASTISOLCPU + fi + ISOLCPUS=${ISOLCPUS},$CPU + LASTISOLCPU=$CPU + fi + done + [[ "$ISOLCPUS" = *- ]] && ISOLCPUS=${ISOLCPUS}$LASTISOLCPU + [[ $EXPECT_VAL = $ISOLCPUS ]] +} + +test_fail() +{ + TESTNUM=$1 + TESTTYPE=$2 + ADDINFO=$3 + echo "Test $TEST[$TESTNUM] failed $TESTTYPE check!" + [[ -n "$ADDINFO" ]] && echo "*** $ADDINFO ***" + eval echo \"\${$TEST[$I]}\" + echo + dump_states + exit 1 +} + # # Run cpuset state transition test # $1 - test matrix name @@ -536,88 +675,81 @@ run_state_test() { TEST=$1 CONTROLLER=cpuset - CPULIST=0-6 I=0 eval CNT="\${#$TEST[@]}" reset_cgroup_states - echo $CPULIST > cpuset.cpus - echo root > cpuset.cpus.partition console_msg "Running state transition test ..." while [[ $I -lt $CNT ]] do echo "Running test $I ..." > /dev/console + [[ $VERBOSE -gt 1 ]] && { + echo "" + eval echo \"\${$TEST[$I]}\" + } eval set -- "\${$TEST[$I]}" - ROOT=$1 - OLD_A1=$2 - OLD_A2=$3 - OLD_A3=$4 - OLD_B1=$5 - NEW_A1=$6 - NEW_A2=$7 - NEW_A3=$8 - NEW_B1=$9 - RESULT=${10} - ECPUS=${11} - STATES=${12} - - set_ctrl_state_noerr . $ROOT + OLD_A1=$1 + OLD_A2=$2 + OLD_A3=$3 + OLD_B1=$4 + NEW_A1=$5 + NEW_A2=$6 + NEW_A3=$7 + NEW_B1=$8 + RESULT=$9 + ECPUS=${10} + STATES=${11} + PCPUS=${12} + ICPUS=${13} + + set_ctrl_state_noerr B1 $OLD_B1 set_ctrl_state_noerr A1 $OLD_A1 set_ctrl_state_noerr A1/A2 $OLD_A2 set_ctrl_state_noerr A1/A2/A3 $OLD_A3 - set_ctrl_state_noerr B1 $OLD_B1 RETVAL=0 set_ctrl_state A1 $NEW_A1; ((RETVAL += $?)) set_ctrl_state A1/A2 $NEW_A2; ((RETVAL += $?)) set_ctrl_state A1/A2/A3 $NEW_A3; ((RETVAL += $?)) set_ctrl_state B1 $NEW_B1; ((RETVAL += $?)) - [[ $RETVAL -ne $RESULT ]] && { - echo "Test $TEST[$I] failed result check!" - eval echo \"\${$TEST[$I]}\" - dump_states - exit 1 - } + [[ $RETVAL -ne $RESULT ]] && test_fail $I result [[ -n "$ECPUS" && "$ECPUS" != . ]] && { check_effective_cpus $ECPUS - [[ $? 
-ne 0 ]] && { - echo "Test $TEST[$I] failed effective CPU check!" - eval echo \"\${$TEST[$I]}\" - echo - dump_states - exit 1 - } + [[ $? -ne 0 ]] && test_fail $I "effective CPU" } - [[ -n "$STATES" ]] && { + [[ -n "$STATES" && "$STATES" != . ]] && { check_cgroup_states $STATES - [[ $? -ne 0 ]] && { - echo "FAILED: Test $TEST[$I] failed states check!" - eval echo \"\${$TEST[$I]}\" - echo - dump_states - exit 1 - } + [[ $? -ne 0 ]] && test_fail $I states + } + + [[ -n "$PCPUS" && "$PCPUS" != . ]] && { + check_subparts_cpus $PCPUS + [[ $? -ne 0 ]] && test_fail $I "subpartitions CPU" } + # Compare the expected isolated CPUs with the actual ones, + # if available + [[ -n "$ICPUS" ]] && { + check_isolcpus $ICPUS + [[ $? -ne 0 ]] && test_fail $I "isolated CPU" \ + "Expect $ICPUS, get $ISOLCPUS instead" + } reset_cgroup_states # # Check to see if effective cpu list changes # - pause 0.05 NEWLIST=$(cat cpuset.cpus.effective) [[ $NEWLIST != $CPULIST ]] && { echo "Effective cpus changed to $NEWLIST after test $I!" exit 1 } - [[ -n "$VERBOSE" ]] && echo "Test $I done." + [[ $VERBOSE -gt 0 ]] && echo "Test $I done." ((I++)) done echo "All $I tests of $TEST PASSED." - - echo member > cpuset.cpus.partition } # @@ -642,6 +774,7 @@ test_inotify() { ERR=0 PRS=/tmp/.prs_$$ + cd $CGROUP2/test [[ -f $WAIT_INOTIFY ]] || { echo "wait_inotify not found, inotify test SKIPPED." return @@ -655,7 +788,7 @@ test_inotify() rm -f $PRS wait_inotify $PWD/cpuset.cpus.partition $PRS & pause 0.01 - set_ctrl_state . "O1-0" + set_ctrl_state . "O1=0" pause 0.01 check_cgroup_states ".:P-1" if [[ $? -ne 0 ]] @@ -689,5 +822,3 @@ run_state_test TEST_MATRIX test_isolated test_inotify echo "All tests PASSED." -cd .. -rmdir test