Message ID | 20221130082313.3241517-5-tj@kernel.org
---|---
State | RFC |
Delegated to: | BPF |
Series | [01/31] rhashtable: Allow rhashtable to be used from irq-safe contexts
Context | Check | Description
---|---|---
bpf/vmtest-bpf-PR | fail | merge-conflict |
On Tue, Nov 29, 2022 at 10:22:46PM -1000, Tejun Heo wrote:
> A new sched_clas needs a bit more control over forking. This patch makes the
                 ^
     (insufficient s's)
> following changes:
>
> * Add sched_cancel_fork() which is called if fork fails after sched_fork()
>   succeeds so that the preparation can be undone.
>
> * Allow sched_cgroup_fork() to fail.
>
> Neither is used yet and this patch shouldn't cause any behavior changes.

Fails to explain why this would be needed and why that would be a good
thing. IOW, total lack of justification.
On Mon, Dec 12, 2022 at 12:13:31PM +0100, Peter Zijlstra wrote:
> On Tue, Nov 29, 2022 at 10:22:46PM -1000, Tejun Heo wrote:
> > A new sched_clas needs a bit more control over forking. This patch makes the
>                    ^
>        (insufficient s's)

Will update.

> > following changes:
> >
> > * Add sched_cancel_fork() which is called if fork fails after sched_fork()
> >   succeeds so that the preparation can be undone.
> >
> > * Allow sched_cgroup_fork() to fail.
> >
> > Neither is used yet and this patch shouldn't cause any behavior changes.
>
> Fails to explain why this would be needed and why that would be a good
> thing. IOW, total lack of justification.

This is because sched_ext calls out to the BPF scheduler's prepare_enable()
operation to prepare the task. The operation is allowed to fail (e.g. it
might need to allocate something which can fail), so we need a way to back
out of it.

Thanks.
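For illustration, a minimal sketch of the kind of failable preparation and matching cancellation being described. Everything here is hypothetical: scx_prepare_task(), scx_cancel_task(), struct scx_task_ctx and the p->scx_ctx field are made-up names for this sketch, not the actual sched_ext interface.

```c
#include <linux/sched.h>
#include <linux/slab.h>

/* Hypothetical per-task state; not a real sched_ext structure. */
struct scx_task_ctx {
	u64 flags;
};

/*
 * Hypothetical preparation step run during fork. It can fail because it
 * allocates memory, which is exactly why fork needs a way to back out.
 */
static int scx_prepare_task(struct task_struct *p)
{
	struct scx_task_ctx *ctx;

	ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
	if (!ctx)
		return -ENOMEM;

	p->scx_ctx = ctx;	/* hypothetical task_struct field */
	return 0;
}

/* Undo the preparation when a later step of fork fails. */
static void scx_cancel_task(struct task_struct *p)
{
	kfree(p->scx_ctx);	/* hypothetical task_struct field */
	p->scx_ctx = NULL;
}
```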
On Mon, Dec 12, 2022 at 08:03:24AM -1000, Tejun Heo wrote:
> On Mon, Dec 12, 2022 at 12:13:31PM +0100, Peter Zijlstra wrote:
> > On Tue, Nov 29, 2022 at 10:22:46PM -1000, Tejun Heo wrote:
> > > A new sched_clas needs a bit more control over forking. This patch makes the
> >                      ^
> >          (insufficient s's)
>
> Will update.
>
> > > following changes:
> > >
> > > * Add sched_cancel_fork() which is called if fork fails after sched_fork()
> > >   succeeds so that the preparation can be undone.
> > >
> > > * Allow sched_cgroup_fork() to fail.
> > >
> > > Neither is used yet and this patch shouldn't cause any behavior changes.
> >
> > Fails to explain why this would be needed and why that would be a good
> > thing. IOW, total lack of justification.
>
> This is because sched_ext calls out to the BPF scheduler's prepare_enable()
> operation to prepare the task. The operation is allowed to fail (e.g. it
> might need to allocate something which can fail), so we need a way to back
> out of it.

sched_fork() can already fail; why isn't that a suitable location to do
what needs doing?
On Mon, Dec 12, 2022 at 09:07:11PM +0100, Peter Zijlstra wrote:
> On Mon, Dec 12, 2022 at 08:03:24AM -1000, Tejun Heo wrote:
> > On Mon, Dec 12, 2022 at 12:13:31PM +0100, Peter Zijlstra wrote:
> > > On Tue, Nov 29, 2022 at 10:22:46PM -1000, Tejun Heo wrote:
> > > > A new sched_clas needs a bit more control over forking. This patch makes the
> > >                      ^
> > >          (insufficient s's)
> >
> > Will update.
> >
> > > > following changes:
> > > >
> > > > * Add sched_cancel_fork() which is called if fork fails after sched_fork()
> > > >   succeeds so that the preparation can be undone.
> > > >
> > > > * Allow sched_cgroup_fork() to fail.
> > > >
> > > > Neither is used yet and this patch shouldn't cause any behavior changes.
> > >
> > > Fails to explain why this would be needed and why that would be a good
> > > thing. IOW, total lack of justification.
> >
> > This is because sched_ext calls out to the BPF scheduler's prepare_enable()
> > operation to prepare the task. The operation is allowed to fail (e.g. it
> > might need to allocate something which can fail), so we need a way to back
> > out of it.
>
> sched_fork() can already fail; why isn't that a suitable location to do
> what needs doing?

Because SCX's ops.prepare_enable() wants the cgroup (p->sched_task_group) to
be initialized in case the BPF scheduler wants to perform cgroup-related
initializations.

Thanks.
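The constraint is ordering: sched_fork() runs before the child's css_set has been pinned, while sched_cgroup_fork() runs after cgroup_can_fork() and sets up p->sched_task_group from it. Below is a condensed sketch of that ordering; fork_sched_setup() is an illustrative wrapper, not a function that exists in the kernel, and the many unrelated fork steps are omitted.

```c
#include <linux/cgroup.h>
#include <linux/sched/task.h>

/*
 * Illustrative wrapper condensing the relevant ordering inside
 * copy_process(); not actual kernel code.
 */
static int fork_sched_setup(unsigned long clone_flags, struct task_struct *p,
			    struct kernel_clone_args *args)
{
	int retval;

	/* Runs before the child's cgroup/css_set has been chosen. */
	retval = sched_fork(clone_flags, p);
	if (retval)
		return retval;

	/* Pins the css_set the child will be attached to. */
	retval = cgroup_can_fork(p, args);
	if (retval)
		goto cancel_sched;

	/*
	 * Only here can p->sched_task_group be set up and a cgroup-aware,
	 * possibly-failing preparation step run, hence the new return value.
	 */
	retval = sched_cgroup_fork(p, args);
	if (retval)
		goto cancel_cgroup;

	return 0;

cancel_cgroup:
	cgroup_cancel_fork(p, args);
cancel_sched:
	sched_cancel_fork(p);
	return retval;
}
```

This mirrors the error unwinding added to copy_process() in the patch below, where any failure between sched_fork() and the end of fork falls through to the new bad_fork_sched_cancel_fork label.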
```diff
diff --git a/include/linux/sched/task.h b/include/linux/sched/task.h
index d6c48163c6de..b5ff1361ac8d 100644
--- a/include/linux/sched/task.h
+++ b/include/linux/sched/task.h
@@ -58,7 +58,8 @@ extern asmlinkage void schedule_tail(struct task_struct *prev);
 extern void init_idle(struct task_struct *idle, int cpu);
 
 extern int sched_fork(unsigned long clone_flags, struct task_struct *p);
-extern void sched_cgroup_fork(struct task_struct *p, struct kernel_clone_args *kargs);
+extern int sched_cgroup_fork(struct task_struct *p, struct kernel_clone_args *kargs);
+extern void sched_cancel_fork(struct task_struct *p);
 extern void sched_post_fork(struct task_struct *p);
 extern void sched_dead(struct task_struct *p);
 
diff --git a/kernel/fork.c b/kernel/fork.c
index 08969f5aa38d..a90c6a4938c6 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -2226,7 +2226,7 @@ static __latent_entropy struct task_struct *copy_process(
 
 	retval = perf_event_init_task(p, clone_flags);
 	if (retval)
-		goto bad_fork_cleanup_policy;
+		goto bad_fork_sched_cancel_fork;
 	retval = audit_alloc(p);
 	if (retval)
 		goto bad_fork_cleanup_perf;
@@ -2367,7 +2367,9 @@ static __latent_entropy struct task_struct *copy_process(
 	 * cgroup specific, it unconditionally needs to place the task on a
 	 * runqueue.
 	 */
-	sched_cgroup_fork(p, args);
+	retval = sched_cgroup_fork(p, args);
+	if (retval)
+		goto bad_fork_cancel_cgroup;
 
 	/*
 	 * From this point on we must avoid any synchronous user-space
@@ -2419,13 +2421,13 @@ static __latent_entropy struct task_struct *copy_process(
 	/* Don't start children in a dying pid namespace */
 	if (unlikely(!(ns_of_pid(pid)->pid_allocated & PIDNS_ADDING))) {
 		retval = -ENOMEM;
-		goto bad_fork_cancel_cgroup;
+		goto bad_fork_core_free;
 	}
 
 	/* Let kill terminate clone/fork in the middle */
 	if (fatal_signal_pending(current)) {
 		retval = -EINTR;
-		goto bad_fork_cancel_cgroup;
+		goto bad_fork_core_free;
 	}
 
 	init_task_pid_links(p);
@@ -2492,10 +2494,11 @@ static __latent_entropy struct task_struct *copy_process(
 
 	return p;
 
-bad_fork_cancel_cgroup:
+bad_fork_core_free:
 	sched_core_free(p);
 	spin_unlock(&current->sighand->siglock);
 	write_unlock_irq(&tasklist_lock);
+bad_fork_cancel_cgroup:
 	cgroup_cancel_fork(p, args);
 bad_fork_put_pidfd:
 	if (clone_flags & CLONE_PIDFD) {
@@ -2534,6 +2537,8 @@ static __latent_entropy struct task_struct *copy_process(
 	audit_free(p);
 bad_fork_cleanup_perf:
 	perf_event_free_task(p);
+bad_fork_sched_cancel_fork:
+	sched_cancel_fork(p);
 bad_fork_cleanup_policy:
 	lockdep_free_task(p);
 #ifdef CONFIG_NUMA
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index cb2aa2b54c7a..85eb82ad2ffd 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4604,7 +4604,7 @@ int sched_fork(unsigned long clone_flags, struct task_struct *p)
 	return 0;
 }
 
-void sched_cgroup_fork(struct task_struct *p, struct kernel_clone_args *kargs)
+int sched_cgroup_fork(struct task_struct *p, struct kernel_clone_args *kargs)
 {
 	unsigned long flags;
 
@@ -4631,6 +4631,12 @@ void sched_cgroup_fork(struct task_struct *p, struct kernel_clone_args *kargs)
 	if (p->sched_class->task_fork)
 		p->sched_class->task_fork(p);
 	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
+
+	return 0;
+}
+
+void sched_cancel_fork(struct task_struct *p)
+{
 }
 
 void sched_post_fork(struct task_struct *p)
```
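As the commit message says, nothing in this patch uses the new hook yet; sched_cancel_fork() is deliberately an empty stub. Purely as a hedged sketch of where a later patch could take it, the stub might eventually dispatch to the owning scheduling class, mirroring how sched_cgroup_fork() calls task_fork(); the task_cancel_fork method below is hypothetical and not defined anywhere in this series.

```c
/*
 * Hypothetical follow-up, for illustration only: 'task_cancel_fork' is not
 * an existing sched_class method; this patch leaves sched_cancel_fork() empty.
 */
void sched_cancel_fork(struct task_struct *p)
{
	if (p->sched_class->task_cancel_fork)
		p->sched_class->task_cancel_fork(p);
}
```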