Message ID | 20221130082313.3241517-12-tj@kernel.org (mailing list archive) |
---|---|
State | RFC |
Delegated to: | BPF |
Series | [01/31] rhashtable: Allow rhashtable to be used from irq-safe contexts |
Context | Check | Description |
---|---|---|
bpf/vmtest-bpf-PR | fail | merge-conflict |
On Tue, Nov 29, 2022 at 10:22:53PM -1000, Tejun Heo wrote:
> sched_move_task() can be called for both cgroup and autogroup moves. Add a
> parameter to distinguish the two cases. This will be used by a new
> sched_class to track cgroup migrations.

This all seems pointless, you can trivially distinguish a cgroup/autogroup
task_group if you so want (again for unspecified raisins).
On Mon, Dec 12, 2022 at 01:00:35PM +0100, Peter Zijlstra wrote:
> On Tue, Nov 29, 2022 at 10:22:53PM -1000, Tejun Heo wrote:
> > sched_move_task() can be called for both cgroup and autogroup moves. Add a
> > parameter to distinguish the two cases. This will be used by a new
> > sched_class to track cgroup migrations.
>
> This all seems pointless, you can trivially distinguish a
> cgroup/autogroup task_group if you so want (again for unspecified
> raisins).

Lemme add better explanations on the patches. This one, sched_ext just wants
to tell cgroup moves from autogroup ones to decide whether to invoke the BPF
scheduler's cgroup migration callback. But, yeah, you're right. It should be
able to tell that by looking at the task_group itself. Will try that.

Thanks.
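For reference, a minimal sketch of the alternative being discussed: rather than threading a reason argument through sched_move_task(), the destination task_group can be inspected directly. This relies on the existing task_group_is_autogroup() helper from kernel/sched/autogroup.h; the notify_cgroup_move() name and the callback it stands in for are hypothetical, not part of the posted series.

```c
/*
 * Sketch only, not part of the posted patch: distinguish an autogroup
 * move from a cgroup move by looking at the task_group itself.
 * task_group_is_autogroup() already exists (it checks tg->autogroup
 * under CONFIG_SCHED_AUTOGROUP and is false otherwise);
 * notify_cgroup_move() is a hypothetical stand-in.
 */
static void notify_cgroup_move(struct task_struct *tsk)
{
	struct task_group *tg = task_group(tsk);

	/* Autogroup moves don't change the task's cgroup; skip them. */
	if (task_group_is_autogroup(tg))
		return;

	/* Cgroup move: invoke the BPF scheduler's migration callback here. */
}
```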
```diff
diff --git a/kernel/sched/autogroup.c b/kernel/sched/autogroup.c
index 991fc9002535..2be1b10ce93e 100644
--- a/kernel/sched/autogroup.c
+++ b/kernel/sched/autogroup.c
@@ -151,7 +151,7 @@ void sched_autogroup_exit_task(struct task_struct *p)
 	 * see this thread after that: we can no longer use signal->autogroup.
 	 * See the PF_EXITING check in task_wants_autogroup().
 	 */
-	sched_move_task(p);
+	sched_move_task(p, SCHED_MOVE_TASK_AUTOGROUP);
 }
 
 static void
@@ -183,7 +183,7 @@ autogroup_move_group(struct task_struct *p, struct autogroup *ag)
 	 * sched_autogroup_exit_task().
 	 */
 	for_each_thread(p, t)
-		sched_move_task(t);
+		sched_move_task(t, SCHED_MOVE_TASK_AUTOGROUP);
 
 	unlock_task_sighand(p, &flags);
 	autogroup_kref_put(prev);
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 0699b49b1a21..9c5bfeeb30ba 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -10210,7 +10210,7 @@ static void sched_change_group(struct task_struct *tsk)
  * now. This function just updates tsk->se.cfs_rq and tsk->se.parent to reflect
  * its new group.
  */
-void sched_move_task(struct task_struct *tsk)
+void sched_move_task(struct task_struct *tsk, enum sched_move_task_reason reason)
 {
 	int queued, running, queue_flags =
 		DEQUEUE_SAVE | DEQUEUE_MOVE | DEQUEUE_NOCLOCK;
@@ -10321,7 +10321,7 @@ static void cpu_cgroup_attach(struct cgroup_taskset *tset)
 	struct cgroup_subsys_state *css;
 
 	cgroup_taskset_for_each(task, css, tset)
-		sched_move_task(task);
+		sched_move_task(task, SCHED_MOVE_TASK_CGROUP);
 }
 
 #ifdef CONFIG_UCLAMP_TASK_GROUP
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 3c6ea8296ae4..ef8da88e677c 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -506,7 +506,12 @@ extern void sched_online_group(struct task_group *tg,
 extern void sched_destroy_group(struct task_group *tg);
 extern void sched_release_group(struct task_group *tg);
 
-extern void sched_move_task(struct task_struct *tsk);
+enum sched_move_task_reason {
+	SCHED_MOVE_TASK_CGROUP,
+	SCHED_MOVE_TASK_AUTOGROUP,
+};
+extern void sched_move_task(struct task_struct *tsk,
+			    enum sched_move_task_reason reason);
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
 extern int sched_group_set_shares(struct task_group *tg, unsigned long shares);
```
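For contrast, roughly how the reason argument added by this patch could be consumed. This is a sketch under assumptions: scx_move_task() and scx_cgroup_move_task() are invented names standing in for the BPF scheduler's hook, not anything defined in this series.

```c
/*
 * Hypothetical sketch, not from the posted series: a helper like this
 * could be called from sched_move_task() to gate the BPF scheduler's
 * cgroup-migration callback on the move reason.  Both function names
 * below are assumptions for illustration.
 */
static void scx_move_task(struct task_struct *tsk,
			  enum sched_move_task_reason reason)
{
	/* Autogroup moves don't change the task's cgroup, so skip them. */
	if (reason != SCHED_MOVE_TASK_CGROUP)
		return;

	scx_cgroup_move_task(tsk);	/* hypothetical sched_ext callback */
}
```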