Message ID | 20241014220603.35280-1-andrea.righi@linux.dev (mailing list archive) |
---|---|
State | Superseded |
Delegated to: | BPF |
Series | [v2] sched_ext: Trigger ops.update_idle() from pick_task_idle() |
Context | Check | Description |
---|---|---|
netdev/tree_selection | success | Not a local patch |
Hello,

On Tue, Oct 15, 2024 at 12:06:03AM +0200, Andrea Righi wrote:
> @@ -459,13 +459,13 @@ static void put_prev_task_idle(struct rq *rq, struct task_struct *prev, struct t
>  static void set_next_task_idle(struct rq *rq, struct task_struct *next, bool first)
>  {
>  	update_idle_core(rq);
> -	scx_update_idle(rq, true);
>  	schedstat_inc(rq->sched_goidle);
>  	next->se.exec_start = rq_clock_task(rq);
>  }
> 
>  struct task_struct *pick_task_idle(struct rq *rq)
>  {
> +	scx_update_idle(rq, true);

Thanks a lot for debugging this. Both the analysis and solution make sense
to me. However, as this puts scx_update_idle() in a different place from
other idle handling functions, can you please add a comment explaining why
it needs to be in pick_task_idle() instead of set_next_task_idle()?

Thanks.
On Mon, Oct 14, 2024 at 03:12:16PM -1000, Tejun Heo wrote:
> Hello,
> 
> On Tue, Oct 15, 2024 at 12:06:03AM +0200, Andrea Righi wrote:
> > @@ -459,13 +459,13 @@ static void put_prev_task_idle(struct rq *rq, struct task_struct *prev, struct t
> >  static void set_next_task_idle(struct rq *rq, struct task_struct *next, bool first)
> >  {
> >  	update_idle_core(rq);
> > -	scx_update_idle(rq, true);
> >  	schedstat_inc(rq->sched_goidle);
> >  	next->se.exec_start = rq_clock_task(rq);
> >  }
> > 
> >  struct task_struct *pick_task_idle(struct rq *rq)
> >  {
> > +	scx_update_idle(rq, true);
> 
> Thanks a lot for debugging this. Both the analysis and solution make sense
> to me. However, as this puts scx_update_idle() in a different place from
> other idle handling functions, can you please add a comment explaining why
> it needs to be in pick_task_idle() instead of set_next_task_idle()?
> 
> Thanks.

Sure, I'll send a v3 with a proper comment.

Thanks,
-Andrea

> 
> -- 
> tejun
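For context, the change Tejun is asking for would amount to annotating the
new call site, roughly as in the sketch below; the comment wording here is
hypothetical and not taken from the actual v3 posting:

  struct task_struct *pick_task_idle(struct rq *rq)
  {
          /*
           * scx_update_idle() must be called from pick_task_idle() rather
           * than set_next_task_idle(): after the put_prev_task()/
           * set_next_task() consolidation, set_next_task_idle() is skipped
           * when the previous and next task are both rq->idle, so the idle
           * notification would never be re-issued after scx_bpf_kick_cpu().
           */
          scx_update_idle(rq, true);
          return rq->idle;
  }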
On Tue, Oct 15, 2024 at 12:06:03AM +0200, Andrea Righi wrote:

> diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
> index d2f096bb274c..5a10cbc7e9df 100644
> --- a/kernel/sched/idle.c
> +++ b/kernel/sched/idle.c
> @@ -459,13 +459,13 @@ static void put_prev_task_idle(struct rq *rq, struct task_struct *prev, struct t
>  static void set_next_task_idle(struct rq *rq, struct task_struct *next, bool first)
>  {
>  	update_idle_core(rq);
> -	scx_update_idle(rq, true);
>  	schedstat_inc(rq->sched_goidle);
>  	next->se.exec_start = rq_clock_task(rq);
>  }
> 
>  struct task_struct *pick_task_idle(struct rq *rq)
>  {
> +	scx_update_idle(rq, true);
>  	return rq->idle;
>  }

Does this do the right thing in the case of core-scheduling doing
pick_task() for force-idle on a remote cpu?

The core-sched case is somewhat special in that the pick can be ignored
-- in which case you're doing a spurious scx_update_idle() call.
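For readers who don't have the core-scheduling pick path in mind: during a
core-wide selection, pick_task() -- and therefore pick_task_idle() -- can be
invoked on the runqueues of SMT siblings to force them idle, and a sibling
may later discard that pick. The snippet below is a heavily simplified
illustration of that flow, not the actual kernel/sched/core.c code;
cookie_match() and core_cookie are illustrative names only:

  /*
   * Heavily simplified sketch of the core-scheduling selection loop:
   * one CPU picks a candidate for every SMT sibling and forces a
   * sibling idle when its candidate doesn't match the core cookie.
   * pick_task_idle() is thus called on remote rqs, and the sibling
   * may later ignore rq->core_pick, making such an idle "pick"
   * spurious from sched_ext's point of view.
   */
  for_each_cpu(cpu, smt_mask) {
          struct rq *rq_i = cpu_rq(cpu);
          struct task_struct *p = pick_task(rq_i);

          if (!cookie_match(p, core_cookie))
                  p = idle_sched_class.pick_task(rq_i);   /* force idle */

          rq_i->core_pick = p;
  }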
On Tue, Oct 15, 2024 at 09:45:26AM +0200, Peter Zijlstra wrote:
> On Tue, Oct 15, 2024 at 12:06:03AM +0200, Andrea Righi wrote:
> 
> > diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
> > index d2f096bb274c..5a10cbc7e9df 100644
> > --- a/kernel/sched/idle.c
> > +++ b/kernel/sched/idle.c
> > @@ -459,13 +459,13 @@ static void put_prev_task_idle(struct rq *rq, struct task_struct *prev, struct t
> >  static void set_next_task_idle(struct rq *rq, struct task_struct *next, bool first)
> >  {
> >  	update_idle_core(rq);
> > -	scx_update_idle(rq, true);
> >  	schedstat_inc(rq->sched_goidle);
> >  	next->se.exec_start = rq_clock_task(rq);
> >  }
> > 
> >  struct task_struct *pick_task_idle(struct rq *rq)
> >  {
> > +	scx_update_idle(rq, true);
> >  	return rq->idle;
> >  }
> 
> Does this do the right thing in the case of core-scheduling doing
> pick_task() for force-idle on a remote cpu?
> 
> The core-sched case is somewhat special in that the pick can be ignored
> -- in which case you're doing a spurious scx_update_idle() call.

Hm... that's right.

So, what about keeping scx_update_idle() in set_next_task_idle() and also
calling it from pick_task(), but only when rq->curr == rq->idle?

In this way, we should still be able to handle the scx_bpf_kick_cpu() call
from ops.update_idle() properly and, while we might still encounter
spurious calls in the core scheduling case, the idle state provided by
ops.update_idle() will always be correct. So, scx schedulers that want to
implement their own cpu idle state can rely on ops.update_idle().

-Andrea
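A minimal sketch of the alternative Andrea describes here (keeping the
notification in set_next_task_idle() and gating the one in the pick path on
the CPU already running its idle task). This illustrates the idea in the
email, not the patch that was eventually merged:

  static void set_next_task_idle(struct rq *rq, struct task_struct *next, bool first)
  {
          update_idle_core(rq);
          scx_update_idle(rq, true);              /* kept here, as before */
          schedstat_inc(rq->sched_goidle);
          next->se.exec_start = rq_clock_task(rq);
  }

  struct task_struct *pick_task_idle(struct rq *rq)
  {
          /*
           * Also notify from the pick path, but only when this CPU is
           * already running its idle task (prev == next == rq->idle), the
           * case where set_next_task_idle() is skipped. Core-sched
           * force-idle picks on remote CPUs may still trigger spurious
           * calls, but the reported idle state is then always correct,
           * since such a CPU really is running its idle task.
           */
          if (rq->curr == rq->idle)
                  scx_update_idle(rq, true);
          return rq->idle;
  }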
diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
index d2f096bb274c..5a10cbc7e9df 100644
--- a/kernel/sched/idle.c
+++ b/kernel/sched/idle.c
@@ -459,13 +459,13 @@ static void put_prev_task_idle(struct rq *rq, struct task_struct *prev, struct t
 static void set_next_task_idle(struct rq *rq, struct task_struct *next, bool first)
 {
 	update_idle_core(rq);
-	scx_update_idle(rq, true);
 	schedstat_inc(rq->sched_goidle);
 	next->se.exec_start = rq_clock_task(rq);
 }
 
 struct task_struct *pick_task_idle(struct rq *rq)
 {
+	scx_update_idle(rq, true);
 	return rq->idle;
 }
With the consolidation of put_prev_task/set_next_task(), see commit
436f3eed5c69 ("sched: Combine the last put_prev_task() and the first
set_next_task()"), we are now skipping the transition between these two
functions when the previous and the next tasks are the same.

As a result, ops.update_idle() is now called only once when the CPU
transitions to the idle class. If the CPU stays active (e.g., through a
call to scx_bpf_kick_cpu()), ops.update_idle() will not be triggered
again since the task remains unchanged (rq->idle).

While this behavior seems generally correct, it can cause issues in
certain sched_ext scenarios. For example, a BPF scheduler might use logic
like the following to keep the CPU active under specific conditions:

  void BPF_STRUCT_OPS(sched_update_idle, s32 cpu, bool idle)
  {
          if (!idle)
                  return;
          if (condition)
                  scx_bpf_kick_cpu(cpu, 0);
  }

A call to scx_bpf_kick_cpu() wakes up the CPU, so in theory,
ops.update_idle() should be triggered again until the condition becomes
false. However, this doesn't happen, and scx_bpf_kick_cpu() doesn't
produce the expected effect.

In practice, this change badly impacts performance in user-space
schedulers that rely on ops.update_idle() to activate user-space
components. For instance, in the case of scx_rustland, performance drops
significantly (e.g., gaming benchmarks fall from ~60fps to ~10fps).

To address this, trigger ops.update_idle() from pick_task_idle() rather
than set_next_task_idle(). This restores the correct behavior of
ops.update_idle() and allows us to fix the performance regression in
scx_rustland.

Fixes: 7c65ae81ea86 ("sched_ext: Don't call put_prev_task_scx() before picking the next task")
Signed-off-by: Andrea Righi <andrea.righi@linux.dev>
---
 kernel/sched/idle.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

ChangeLog v1 -> v2:
 - move the logic from put_prev_set_next_task() to scx_update_idle()
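To see why set_next_task_idle() stops being called when the CPU keeps
running rq->idle (the behavior described at the top of this changelog),
the consolidated helper introduced by commit 436f3eed5c69 behaves roughly
as in the simplified sketch below; this is a paraphrase for illustration,
not the exact kernel/sched/sched.h implementation:

  static inline void put_prev_set_next_task(struct rq *rq,
                                            struct task_struct *prev,
                                            struct task_struct *next)
  {
          /*
           * When the next task is the same as the previous one (e.g. the
           * CPU picks rq->idle again), the put_prev_task()/set_next_task()
           * pair is skipped entirely, so set_next_task_idle() -- and with
           * it scx_update_idle() -- is not invoked a second time.
           */
          if (next == prev)
                  return;

          prev->sched_class->put_prev_task(rq, prev, next);
          next->sched_class->set_next_task(rq, next, true);
  }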