
[v7,08/22] cpuset: Don't use the cpu_possible_mask as a last resort for cgroup v1

Message ID 20210525151432.16875-9-will@kernel.org (mailing list archive)
State New, archived
Series Add support for 32-bit tasks on asymmetric AArch32 systems

Commit Message

Will Deacon May 25, 2021, 3:14 p.m. UTC
If the scheduler cannot find an allowed CPU for a task,
cpuset_cpus_allowed_fallback() will widen the affinity to cpu_possible_mask
if cgroup v1 is in use.

In preparation for allowing architectures to provide their own fallback
mask, just return early if we're either using cgroup v1 or using cgroup
v2 with a mask that contains CPUs outside of the task's
task_cpu_possible_mask(). This will allow select_fallback_rq() to
figure out the mask by itself.

Cc: Li Zefan <lizefan@huawei.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Quentin Perret <qperret@google.com>
Signed-off-by: Will Deacon <will@kernel.org>
---
 include/linux/cpuset.h |  1 +
 kernel/cgroup/cpuset.c | 12 ++++++++++--
 2 files changed, 11 insertions(+), 2 deletions(-)
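
For context, the "try harder" path that the commit message defers to is the
fallback cascade in select_fallback_rq() (kernel/sched/core.c). Below is a
simplified sketch of that cascade as it looked around the time of this series;
the NUMA-node fast path, the printk on success and the CONFIG_CPUSETS
fallthrough are omitted, so this is illustrative rather than verbatim:

static int select_fallback_rq(int cpu, struct task_struct *p)
{
	/* 'cpu' is only used by the omitted NUMA-node fast path */
	enum { cpuset, possible, fail } state = cpuset;
	int dest_cpu;

	for (;;) {
		/* Any allowed, active CPU left in the task's mask? */
		for_each_cpu(dest_cpu, p->cpus_ptr) {
			if (cpumask_test_cpu(dest_cpu, cpu_active_mask))
				return dest_cpu;
		}

		/* No more Mr. Nice Guy. */
		switch (state) {
		case cpuset:
			/*
			 * Ask cpuset for a wider mask. After this patch
			 * it may leave the mask untouched, deferring to
			 * the "possible" step below.
			 */
			cpuset_cpus_allowed_fallback(p);
			state = possible;
			break;
		case possible:
			do_set_cpus_allowed(p, cpu_possible_mask);
			state = fail;
			break;
		case fail:
			BUG();
			break;
		}
	}
}

Elsewhere in the series, the "possible" step is itself switched from
cpu_possible_mask to the architecture-provided task_cpu_possible_mask(p),
which is what makes it safe on asymmetric systems to leave the final
choice to select_fallback_rq().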

Comments

Peter Zijlstra May 26, 2021, 3:02 p.m. UTC | #1
On Tue, May 25, 2021 at 04:14:18PM +0100, Will Deacon wrote:
>  void cpuset_cpus_allowed_fallback(struct task_struct *tsk)
>  {
> +	const struct cpumask *cs_mask;
> +	const struct cpumask *possible_mask = task_cpu_possible_mask(tsk);
> +
>  	rcu_read_lock();
> +	cs_mask = task_cs(tsk)->cpus_allowed;
> +
> +	if (!is_in_v2_mode() || !cpumask_subset(cs_mask, possible_mask))
> +		goto unlock; /* select_fallback_rq will try harder */
> +
> +	do_set_cpus_allowed(tsk, cs_mask);
> +unlock:

	if (is_in_v2_mode() && cpumask_subset(cs_mask, possible_mask))
		do_set_cpus_allowed(tsk, cs_mask);

perhaps?
Will Deacon May 26, 2021, 4:07 p.m. UTC | #2
On Wed, May 26, 2021 at 05:02:20PM +0200, Peter Zijlstra wrote:
> On Tue, May 25, 2021 at 04:14:18PM +0100, Will Deacon wrote:
> >  void cpuset_cpus_allowed_fallback(struct task_struct *tsk)
> >  {
> > +	const struct cpumask *cs_mask;
> > +	const struct cpumask *possible_mask = task_cpu_possible_mask(tsk);
> > +
> >  	rcu_read_lock();
> > +	cs_mask = task_cs(tsk)->cpus_allowed;
> > +
> > +	if (!is_in_v2_mode() || !cpumask_subset(cs_mask, possible_mask))
> > +		goto unlock; /* select_fallback_rq will try harder */
> > +
> > +	do_set_cpus_allowed(tsk, cs_mask);
> > +unlock:
> 
> 	if (is_in_v2_mode() && cpumask_subset(cs_mask, possible_mask))
> 		do_set_cpus_allowed(tsk, cs_mask);
> 
> perhaps?

Absolutely.

Will
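
With Peter's simplification folded in, the function would presumably end up
looking something like the sketch below in the next spin (this is not the
archived follow-up patch; the trailing comment block in the function is
unchanged and elided here):

void cpuset_cpus_allowed_fallback(struct task_struct *tsk)
{
	const struct cpumask *possible_mask = task_cpu_possible_mask(tsk);
	const struct cpumask *cs_mask;

	rcu_read_lock();
	cs_mask = task_cs(tsk)->cpus_allowed;

	/* select_fallback_rq() will try harder if we leave the mask alone */
	if (is_in_v2_mode() && cpumask_subset(cs_mask, possible_mask))
		do_set_cpus_allowed(tsk, cs_mask);

	rcu_read_unlock();

	/* remainder of the function unchanged */
}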

Patch

diff --git a/include/linux/cpuset.h b/include/linux/cpuset.h
index 04c20de66afc..ed6ec677dd6b 100644
--- a/include/linux/cpuset.h
+++ b/include/linux/cpuset.h
@@ -15,6 +15,7 @@
 #include <linux/cpumask.h>
 #include <linux/nodemask.h>
 #include <linux/mm.h>
+#include <linux/mmu_context.h>
 #include <linux/jump_label.h>
 
 #ifdef CONFIG_CPUSETS
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index a945504c0ae7..8c799260a4a2 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -3322,9 +3322,17 @@ void cpuset_cpus_allowed(struct task_struct *tsk, struct cpumask *pmask)
 
 void cpuset_cpus_allowed_fallback(struct task_struct *tsk)
 {
+	const struct cpumask *cs_mask;
+	const struct cpumask *possible_mask = task_cpu_possible_mask(tsk);
+
 	rcu_read_lock();
-	do_set_cpus_allowed(tsk, is_in_v2_mode() ?
-		task_cs(tsk)->cpus_allowed : cpu_possible_mask);
+	cs_mask = task_cs(tsk)->cpus_allowed;
+
+	if (!is_in_v2_mode() || !cpumask_subset(cs_mask, possible_mask))
+		goto unlock; /* select_fallback_rq will try harder */
+
+	do_set_cpus_allowed(tsk, cs_mask);
+unlock:
 	rcu_read_unlock();
 
 	/*
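
As a footnote on the new <linux/mmu_context.h> include: that header is where
task_cpu_possible_mask() lives, introduced earlier in this series. Unless the
architecture overrides it, the generic definition is roughly:

#ifndef task_cpu_possible_mask
# define task_cpu_possible_mask(p)	cpu_possible_mask
#endif

so on symmetric systems the new cpumask_subset() check is trivially true and
the cgroup v2 behaviour is unchanged.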