
[v11,3/8] cgroup/cpuset: Allow no-task partition to have empty cpuset.cpus.effective

Message ID 20220510153413.400020-4-longman@redhat.com (mailing list archive)
State New
Series [v11,1/8] cgroup/cpuset: Add top_cpuset check in update_tasks_cpumask()

Commit Message

Waiman Long May 10, 2022, 3:34 p.m. UTC
Currently, a partition root cannot have an empty "cpuset.cpus.effective".
As a result, a parent partition root cannot distribute all its CPUs to
child partitions with no CPU left for itself. However, in most cases
there shouldn't be any tasks associated with intermediate nodes of the
default hierarchy, so the current rule is too restrictive and can waste
valuable CPU resources.

To address this issue, we now allow a partition to have an empty
"cpuset.cpus.effective" as long as it has no tasks. A parent partition
with no tasks can therefore have all its CPUs distributed out to its
child partitions. The top cpuset always has some housekeeping tasks
running, so its list of effective CPUs can never be empty.

Once a partition with empty "cpuset.cpus.effective" is formed, no
new task can be moved into it until "cpuset.cpus.effective" becomes
non-empty.

Signed-off-by: Waiman Long <longman@redhat.com>
---
 kernel/cgroup/cpuset.c | 112 +++++++++++++++++++++++++++++++----------
 1 file changed, 85 insertions(+), 27 deletions(-)
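For illustration, the behavior described in the commit message can be exercised from userspace roughly as follows. This is a sketch, not part of the patch: it assumes cgroup v2 is mounted at /sys/fs/cgroup, a kernel with this series applied, CPUs 0-3 granted to the parent, and hypothetical cgroup names; it must be run as root.

```sh
# Create a parent partition and give it CPUs 0-3.
mkdir /sys/fs/cgroup/parent
echo "+cpuset" > /sys/fs/cgroup/cgroup.subtree_control
echo 0-3 > /sys/fs/cgroup/parent/cpuset.cpus
echo root > /sys/fs/cgroup/parent/cpuset.cpus.partition

# Create a child partition that takes away all four CPUs.
mkdir /sys/fs/cgroup/parent/child
echo "+cpuset" > /sys/fs/cgroup/parent/cgroup.subtree_control
echo 0-3 > /sys/fs/cgroup/parent/child/cpuset.cpus
echo root > /sys/fs/cgroup/parent/child/cpuset.cpus.partition

# With this series, the parent (which has no tasks) is left with an
# empty effective CPU list, and new tasks cannot be attached to it.
cat /sys/fs/cgroup/parent/cpuset.cpus.effective
```

Before this series, the last write would fail with -EINVAL because the parent must keep at least one effective CPU.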

Comments

Tejun Heo June 12, 2022, 5:40 p.m. UTC | #1
Hello,

Sorry about the long delay.

On Tue, May 10, 2022 at 11:34:08AM -0400, Waiman Long wrote:
> Once a partition with empty "cpuset.cpus.effective" is formed, no
> new task can be moved into it until "cpuset.cpus.effective" becomes
> non-empty.

This is always true due to no-tasks-in-intermediate-cgroups requirement,
right?

Thanks.
Tejun Heo June 12, 2022, 5:41 p.m. UTC | #2
On Sun, Jun 12, 2022 at 07:40:25AM -1000, Tejun Heo wrote:
> Hello,
> 
> Sorry about the long delay.
> 
> On Tue, May 10, 2022 at 11:34:08AM -0400, Waiman Long wrote:
> > Once a partition with empty "cpuset.cpus.effective" is formed, no
> > new task can be moved into it until "cpuset.cpus.effective" becomes
> > non-empty.
> 
> This is always true due to no-tasks-in-intermediate-cgroups requirement,
> right?

or rather, I should have asked, why does this need an explicit check?

Thanks.
Waiman Long June 13, 2022, 2:50 a.m. UTC | #3
On 6/12/22 13:40, Tejun Heo wrote:
> Hello,
>
> Sorry about the long delay.
>
> On Tue, May 10, 2022 at 11:34:08AM -0400, Waiman Long wrote:
>> Once a partition with empty "cpuset.cpus.effective" is formed, no
>> new task can be moved into it until "cpuset.cpus.effective" becomes
>> non-empty.
> This is always true due to no-tasks-in-intermediate-cgroups requirement,
> right?

I seem to remember there are corner cases where a task can be moved to 
an intermediate cgroup under certain circumstances. I need to dig 
further to find out what they are.

Cheers,
Longman
Waiman Long June 13, 2022, 2:53 a.m. UTC | #4
On 6/12/22 13:41, Tejun Heo wrote:
> On Sun, Jun 12, 2022 at 07:40:25AM -1000, Tejun Heo wrote:
>> Hello,
>>
>> Sorry about the long delay.
>>
>> On Tue, May 10, 2022 at 11:34:08AM -0400, Waiman Long wrote:
>>> Once a partition with empty "cpuset.cpus.effective" is formed, no
>>> new task can be moved into it until "cpuset.cpus.effective" becomes
>>> non-empty.
>> This is always true due to no-tasks-in-intermediate-cgroups requirement,
>> right?
> or rather, I should have asked, why does this need an explicit check?

Without this patch, cpus.effective can never be empty; it just falls 
back to its parent's value whenever it would become empty. Now, with an 
empty cpus.effective, I am afraid that if a task is somehow moved into 
such a cpuset, something bad may happen. So I added this check as a 
safeguard.

Cheers,
Longman
Tejun Heo June 13, 2022, 2:55 a.m. UTC | #5
On Sun, Jun 12, 2022 at 10:53:53PM -0400, Waiman Long wrote:
> Without this patch, cpus.effective will never be empty. It just falls back
> to its parent if it becomes empty. Now with an empty cpus.effective, I am

Yeah, that part is fine.

> afraid that if a task is somehow moved to such a cpuset, something bad may
> happen. So I add this check as a safeguard.

But how would that happen? A lot of other things would break too if that
were to happen.

Thanks.
Waiman Long June 13, 2022, 3:04 a.m. UTC | #6
On 6/12/22 22:55, Tejun Heo wrote:
> On Sun, Jun 12, 2022 at 10:53:53PM -0400, Waiman Long wrote:
>> Without this patch, cpus.effective will never be empty. It just falls back
>> to its parent if it becomes empty. Now with an empty cpus.effective, I am
> Yeah, that part is fine.
>
>> afraid that if a task is somehow moved to such a cpuset, something bad may
>> happen. So I add this check as a safeguard.
> But how would that happen? A lot of other things would break too if that
> were to happen.

I will perform further checks to see if this check is necessary.

Thanks,
Longman
Michal Koutný June 13, 2022, 2:02 p.m. UTC | #7
On Sun, Jun 12, 2022 at 04:55:13PM -1000, Tejun Heo <tj@kernel.org> wrote:
> But how would that happen? A lot of other things would break too if that
> were to happen.

cpuset is a threaded controller where the internal-node-constraint does
not hold. So the additional condition for cpuset migrations is IMO
warranted (and needed if there's no "fall up").

Michal
Waiman Long June 13, 2022, 4:47 p.m. UTC | #8
On 6/13/22 10:02, Michal Koutný wrote:
> On Sun, Jun 12, 2022 at 04:55:13PM -1000, Tejun Heo <tj@kernel.org> wrote:
>> But how would that happen? A lot of other things would break too if that
>> were to happen.
> cpuset is a threaded controller where the internal-node-constraint does
> not hold. So the additional condition for cpuset migrations is IMO
> warranted (and needed if there's no "fall up").

Yes, you are right. cpuset is threaded, so a cpuset may have tasks even 
if it is not a leaf node.

Thanks,
Longman
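(For readers following along: the threaded-cgroup situation Michal describes can be reproduced roughly as below. This is a rough sketch of the cgroup v2 interface from the admin guide, not part of the series; it needs root, the paths are hypothetical, and interface ordering details may need adjustment.)

```sh
# Making a child threaded turns the parent into a threaded domain root.
mkdir -p /sys/fs/cgroup/tg/leaf
echo threaded > /sys/fs/cgroup/tg/leaf/cgroup.type
echo "+cpuset" > /sys/fs/cgroup/tg/cgroup.subtree_control

# Because cpuset is a threaded controller, the no-internal-process
# constraint does not apply here: tg can hold tasks even though it has
# a populated child with cpuset enabled.
echo "$PID" > /sys/fs/cgroup/tg/cgroup.procs
cat /sys/fs/cgroup/tg/cgroup.type    # "domain threaded"
```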
Tejun Heo June 13, 2022, 5:23 p.m. UTC | #9
Hello,

On Mon, Jun 13, 2022 at 12:47:37PM -0400, Waiman Long wrote:
> On 6/13/22 10:02, Michal Koutný wrote:
> > On Sun, Jun 12, 2022 at 04:55:13PM -1000, Tejun Heo <tj@kernel.org> wrote:
> > > But how would that happen? A lot of other things would break too if that
> > > were to happen.
> > cpuset is a threaded controller where the internal-node-constraint does
> > not hold. So the additional condition for cpuset migrations is IMO
> > warranted (and needed if there's no "fall up").
> 
> Yes, you are right. cpuset is threaded and so it may have tasks even if it
> is not the leaf node.

And we had this same exchange the last time. Can you please add a comment?
We may also have discussed this before, but is it necessary to allow
threaded cgroups to be isolated roots? The interaction between being
threaded and being isolated would be cleaner at that layer, as it is an
interaction between two explicit mode changes.

Thanks.

Patch

diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index d156a39d7a08..6c65bcf278cb 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -412,6 +412,41 @@  static inline bool is_in_v2_mode(void)
 	      (cpuset_cgrp_subsys.root->flags & CGRP_ROOT_CPUSET_V2_MODE);
 }
 
+/**
+ * partition_is_populated - check if partition has tasks
+ * @cs: partition root to be checked
+ * @excluded_child: a child cpuset to be excluded in task checking
+ * Return: true if there are tasks, false otherwise
+ *
+ * It is assumed that @cs is a valid partition root. @excluded_child should
+ * be non-NULL when this cpuset is going to become a partition itself.
+ */
+static inline bool partition_is_populated(struct cpuset *cs,
+					  struct cpuset *excluded_child)
+{
+	struct cgroup_subsys_state *css;
+	struct cpuset *child;
+
+	if (cs->css.cgroup->nr_populated_csets)
+		return true;
+	if (!excluded_child && !cs->nr_subparts_cpus)
+		return cgroup_is_populated(cs->css.cgroup);
+
+	rcu_read_lock();
+	cpuset_for_each_child(child, css, cs) {
+		if (child == excluded_child)
+			continue;
+		if (is_partition_valid(child))
+			continue;
+		if (cgroup_is_populated(child->css.cgroup)) {
+			rcu_read_unlock();
+			return true;
+		}
+	}
+	rcu_read_unlock();
+	return false;
+}
+
 /*
  * Return in pmask the portion of a task's cpusets's cpus_allowed that
  * are online and are capable of running the task.  If none are found,
@@ -1252,22 +1287,25 @@  static int update_parent_subparts_cpumask(struct cpuset *cs, int cmd,
 	if ((cmd != partcmd_update) && css_has_online_children(&cs->css))
 		return -EBUSY;
 
-	/*
-	 * Enabling partition root is not allowed if not all the CPUs
-	 * can be granted from parent's effective_cpus or at least one
-	 * CPU will be left after that.
-	 */
-	if ((cmd == partcmd_enable) &&
-	   (!cpumask_subset(cs->cpus_allowed, parent->effective_cpus) ||
-	     cpumask_equal(cs->cpus_allowed, parent->effective_cpus)))
-		return -EINVAL;
-
-	/*
-	 * A cpumask update cannot make parent's effective_cpus become empty.
-	 */
 	adding = deleting = false;
 	old_prs = new_prs = cs->partition_root_state;
 	if (cmd == partcmd_enable) {
+		/*
+		 * Enabling partition root is not allowed if not all the CPUs
+		 * can be granted from parent's effective_cpus.
+		 */
+		if (!cpumask_subset(cs->cpus_allowed, parent->effective_cpus))
+			return -EINVAL;
+
+		/*
+		 * A parent can be left with no CPU as long as there is no
+		 * task directly associated with the parent partition. For
+		 * such a parent, no new task can be moved into it.
+		 */
+		if (cpumask_equal(cs->cpus_allowed, parent->effective_cpus) &&
+		    partition_is_populated(parent, cs))
+			return -EINVAL;
+
 		cpumask_copy(tmp->addmask, cs->cpus_allowed);
 		adding = true;
 	} else if (cmd == partcmd_disable) {
@@ -1289,10 +1327,12 @@  static int update_parent_subparts_cpumask(struct cpuset *cs, int cmd,
 		adding = cpumask_andnot(tmp->addmask, tmp->addmask,
 					parent->subparts_cpus);
 		/*
-		 * Return error if the new effective_cpus could become empty.
+		 * Return error if the new effective_cpus could become empty
+		 * and there are tasks in the parent.
 		 */
 		if (adding &&
-		    cpumask_equal(parent->effective_cpus, tmp->addmask)) {
+		    cpumask_equal(parent->effective_cpus, tmp->addmask) &&
+		    partition_is_populated(parent, cs)) {
 			if (!deleting)
 				return -EINVAL;
 			/*
@@ -1317,8 +1357,8 @@  static int update_parent_subparts_cpumask(struct cpuset *cs, int cmd,
 		 */
 		adding = cpumask_and(tmp->addmask, cs->cpus_allowed,
 				     parent->effective_cpus);
-		part_error = cpumask_equal(tmp->addmask,
-					   parent->effective_cpus);
+		part_error = cpumask_equal(tmp->addmask, parent->effective_cpus) &&
+			     partition_is_populated(parent, cs);
 	}
 
 	if (cmd == partcmd_update) {
@@ -1420,9 +1460,15 @@  static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp)
 
 		/*
 		 * If it becomes empty, inherit the effective mask of the
-		 * parent, which is guaranteed to have some CPUs.
+		 * parent, which is guaranteed to have some CPUs unless
+		 * it is a partition root that has explicitly distributed
+		 * out all its CPUs.
 		 */
 		if (is_in_v2_mode() && cpumask_empty(tmp->new_cpus)) {
+			if (is_partition_valid(cp) &&
+			    cpumask_equal(cp->cpus_allowed, cp->subparts_cpus))
+				goto update_parent_subparts;
+
 			cpumask_copy(tmp->new_cpus, parent->effective_cpus);
 			if (!cp->use_parent_ecpus) {
 				cp->use_parent_ecpus = true;
@@ -1444,6 +1490,7 @@  static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp)
 			continue;
 		}
 
+update_parent_subparts:
 		/*
 		 * update_parent_subparts_cpumask() should have been called
 		 * for cs already in update_cpumask(). We should also call
@@ -2249,6 +2296,13 @@  static int cpuset_can_attach(struct cgroup_taskset *tset)
 	    (cpumask_empty(cs->cpus_allowed) || nodes_empty(cs->mems_allowed)))
 		goto out_unlock;
 
+	/*
+	 * On default hierarchy, task cannot be moved to a cpuset with empty
+	 * effective cpus.
+	 */
+	if (is_in_v2_mode() && cpumask_empty(cs->effective_cpus))
+		goto out_unlock;
+
 	cgroup_taskset_for_each(task, css, tset) {
 		ret = task_can_attach(task, cs->cpus_allowed);
 		if (ret)
@@ -3115,7 +3169,8 @@  hotplug_update_tasks(struct cpuset *cs,
 		     struct cpumask *new_cpus, nodemask_t *new_mems,
 		     bool cpus_updated, bool mems_updated)
 {
-	if (cpumask_empty(new_cpus))
+	/* A partition root is allowed to have empty effective cpus */
+	if (cpumask_empty(new_cpus) && !is_partition_valid(cs))
 		cpumask_copy(new_cpus, parent_cs(cs)->effective_cpus);
 	if (nodes_empty(*new_mems))
 		*new_mems = parent_cs(cs)->effective_mems;
@@ -3184,10 +3239,11 @@  static void cpuset_hotplug_update_tasks(struct cpuset *cs, struct tmpmasks *tmp)
 
 	/*
 	 * In the unlikely event that a partition root has empty
-	 * effective_cpus or its parent becomes invalid, we have to
-	 * transition it to the invalid state.
+	 * effective_cpus with tasks or its parent becomes invalid, we
+	 * have to transition it to the invalid state.
 	 */
-	if (is_partition_valid(cs) && (cpumask_empty(&new_cpus) ||
+	if (is_partition_valid(cs) &&
+	   ((cpumask_empty(&new_cpus) && partition_is_populated(cs, NULL)) ||
 	    is_partition_invalid(parent))) {
 		if (cs->nr_subparts_cpus) {
 			spin_lock_irq(&callback_lock);
@@ -3198,13 +3254,15 @@  static void cpuset_hotplug_update_tasks(struct cpuset *cs, struct tmpmasks *tmp)
 		}
 
 		/*
-		 * If the effective_cpus is empty because the child
-		 * partitions take away all the CPUs, we can keep
-		 * the current partition and let the child partitions
-		 * fight for available CPUs.
+		 * Force the partition to become invalid if either one of
+		 * the following conditions hold:
+		 * 1) empty effective cpus but not valid empty partition.
+		 * 2) parent is invalid or doesn't grant any cpus to child
+		 *    partitions.
 		 */
 		if (is_partition_invalid(parent) ||
-		     cpumask_empty(&new_cpus)) {
+		    (cpumask_empty(&new_cpus) &&
+		     partition_is_populated(cs, NULL))) {
 			int old_prs;
 
 			update_parent_subparts_cpumask(cs, partcmd_disable,