[v4,11/14] sched: Reject CPU affinity changes based on arch_task_cpu_possible_mask()

Message ID 20201124155039.13804-12-will@kernel.org (mailing list archive)
State New, archived
Series An alternative series for asymmetric AArch32 systems

Commit Message

Will Deacon Nov. 24, 2020, 3:50 p.m. UTC
Reject explicit requests to change the affinity mask of a task via
set_cpus_allowed_ptr() if the requested mask is not a subset of the
mask returned by arch_task_cpu_possible_mask(). This ensures that the
'cpus_mask' for a given task cannot contain CPUs which are incapable of
executing it, except in cases where the affinity is forced.

Signed-off-by: Will Deacon <will@kernel.org>
---
 kernel/sched/core.c | 4 ++++
 1 file changed, 4 insertions(+)

Comments

Quentin Perret Nov. 27, 2020, 9:54 a.m. UTC | #1
On Tuesday 24 Nov 2020 at 15:50:36 (+0000), Will Deacon wrote:
> Reject explicit requests to change the affinity mask of a task via
> set_cpus_allowed_ptr() if the requested mask is not a subset of the
> mask returned by arch_task_cpu_possible_mask(). This ensures that the
> 'cpus_mask' for a given task cannot contain CPUs which are incapable of
> executing it, except in cases where the affinity is forced.

I guess mentioning here (or as a comment) the 'funny' behaviour we get
with cpusets wouldn't hurt. But this is a sensible patch nonetheless so:

Reviewed-by: Quentin Perret <qperret@google.com>

Thanks,
Quentin
Patch

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 99992d0beb65..095deda50643 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1877,6 +1877,7 @@  static int __set_cpus_allowed_ptr_locked(struct task_struct *p,
 					 struct rq_flags *rf)
 {
 	const struct cpumask *cpu_valid_mask = cpu_active_mask;
+	const struct cpumask *cpu_allowed_mask = arch_task_cpu_possible_mask(p);
 	unsigned int dest_cpu;
 	int ret = 0;
 
@@ -1887,6 +1888,9 @@  static int __set_cpus_allowed_ptr_locked(struct task_struct *p,
 		 * Kernel threads are allowed on online && !active CPUs
 		 */
 		cpu_valid_mask = cpu_online_mask;
+	} else if (!cpumask_subset(new_mask, cpu_allowed_mask)) {
+		ret = -EINVAL;
+		goto out;
 	}
 
 	/*