[v8,1/2] sched: Move task_mm_cid_work to mm work_struct

Message ID 20250220102639.141314-2-gmonaco@redhat.com (mailing list archive)
State New
Series [v8,1/2] sched: Move task_mm_cid_work to mm work_struct

Commit Message

Gabriele Monaco Feb. 20, 2025, 10:26 a.m. UTC
Currently, the task_mm_cid_work function is called in a task work
triggered by a scheduler tick to frequently compact the mm_cids of each
process. This can delay the execution of the corresponding thread for
the entire duration of the function, negatively affecting response
times for real-time tasks. In practice, we observe task_mm_cid_work
increasing the latency by 30-35us on a 128-core system; this order of
magnitude is meaningful under PREEMPT_RT.

Run task_mm_cid_work in a new work_struct connected to the mm_struct
rather than in the task context before returning to userspace.

This work_struct is initialised with the mm and disabled before the mm
is freed. The work is queued while returning to userspace in
__rseq_handle_notify_resume, keeping the checks that prevent it from
running more often than once per MM_CID_SCAN_DELAY.
To make sure this also happens predictably for long-running tasks, we
trigger a call to __rseq_handle_notify_resume from the scheduler tick
as well (which in turn also schedules the work item).

The main advantage of this change is that the function can be offloaded
to a different CPU and even preempted by RT tasks.

Moreover, the new behaviour is more predictable for periodic tasks
with short runtimes, which may rarely be running when a scheduler tick
fires. Now the work is always scheduled when the task returns to
userspace.

The work is disabled during mmdrop: since the function cannot sleep in
all kernel configurations, we cannot wait for possibly running work
items to terminate. We make sure the mm stays valid while the task is
terminating by reserving it with mmgrab/mmdrop, and return prematurely
if we turn out to be the last user by the time the work runs.
This situation is unlikely since we don't schedule the work for exiting
tasks, but we cannot rule it out.

Fixes: 223baf9d17f2 ("sched: Fix performance regression introduced by mm_cid")
Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
---
 include/linux/mm_types.h |  8 ++++++++
 include/linux/sched.h    |  7 ++++++-
 kernel/rseq.c            |  1 +
 kernel/sched/core.c      | 38 ++++++++++++++++----------------------
 kernel/sched/sched.h     |  2 --
 5 files changed, 31 insertions(+), 25 deletions(-)

Comments

Mathieu Desnoyers Feb. 20, 2025, 2:42 p.m. UTC | #1
On 2025-02-20 05:26, Gabriele Monaco wrote:
> Currently, the task_mm_cid_work function is called in a task work
> triggered by a scheduler tick to frequently compact the mm_cids of each
> process. This can delay the execution of the corresponding thread for
> the entire duration of the function, negatively affecting the response
> in case of real time tasks. In practice, we observe task_mm_cid_work
> increasing the latency of 30-35us on a 128 cores system, this order of
> magnitude is meaningful under PREEMPT_RT.
> 
> Run the task_mm_cid_work in a new work_struct connected to the
> mm_struct rather than in the task context before returning to
> userspace.
> 
> This work_struct is initialised with the mm and disabled before freeing
> it. The queuing of the work happens while returning to userspace in
> __rseq_handle_notify_resume, maintaining the checks to avoid running
> more frequently than MM_CID_SCAN_DELAY.
> To make sure this happens predictably also on long running tasks, we
> trigger a call to __rseq_handle_notify_resume also from the scheduler
> tick (which in turn will also schedule the work item).
> 
> The main advantage of this change is that the function can be offloaded
> to a different CPU and even preempted by RT tasks.
> 
> Moreover, this new behaviour is more predictable with periodic tasks
> with short runtime, which may rarely run during a scheduler tick.
> Now, the work is always scheduled when the task returns to userspace.
> 
> The work is disabled during mmdrop, since the function cannot sleep in
> all kernel configurations, we cannot wait for possibly running work
> items to terminate. We make sure the mm is valid in case the task is
> terminating by reserving it with mmgrab/mmdrop, returning prematurely if
> we are really the last user while the work gets to run.
> This situation is unlikely since we don't schedule the work for exiting
> tasks, but we cannot rule it out.
> 
> Fixes: 223baf9d17f2 ("sched: Fix performance regression introduced by mm_cid")
> Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
> ---
[...]
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 9aecd914ac691..363e51dd25175 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -5663,7 +5663,7 @@ void sched_tick(void)
>   		resched_latency = cpu_resched_latency(rq);
>   	calc_global_load_tick(rq);
>   	sched_core_tick(rq);
> -	task_tick_mm_cid(rq, donor);
> +	rseq_preempt(donor);
>   	scx_tick(rq);
>   
>   	rq_unlock(rq, &rf);

There is one tiny important detail worth discussing here: I wonder if
executing a __rseq_handle_notify_resume() on return to userspace on
every scheduler tick will cause noticeable performance degradation ?

I think we can mitigate the impact if we can quickly compute the amount
of contiguous unpreempted runtime since last preemption, then we could
use this as a way to only issue rseq_preempt() when there has been a
minimum amount of contiguous unpreempted execution. Otherwise the
rseq_preempt() already issued by preemption is enough.

I'm not entirely sure how to compute this "unpreempted contiguous
runtime" value within sched_tick() though, any ideas ?

Thanks,

Mathieu
Gabriele Monaco Feb. 20, 2025, 3:30 p.m. UTC | #2
On Thu, 2025-02-20 at 09:42 -0500, Mathieu Desnoyers wrote:
> On 2025-02-20 05:26, Gabriele Monaco wrote:
> > Currently, the task_mm_cid_work function is called in a task work
> > triggered by a scheduler tick to frequently compact the mm_cids of
> > each
> > process. This can delay the execution of the corresponding thread
> > for
> > the entire duration of the function, negatively affecting the
> > response
> > in case of real time tasks. In practice, we observe
> > task_mm_cid_work
> > increasing the latency of 30-35us on a 128 cores system, this order
> > of
> > magnitude is meaningful under PREEMPT_RT.
> > 
> > Run the task_mm_cid_work in a new work_struct connected to the
> > mm_struct rather than in the task context before returning to
> > userspace.
> > 
> > This work_struct is initialised with the mm and disabled before
> > freeing
> > it. The queuing of the work happens while returning to userspace in
> > __rseq_handle_notify_resume, maintaining the checks to avoid
> > running
> > more frequently than MM_CID_SCAN_DELAY.
> > To make sure this happens predictably also on long running tasks,
> > we
> > trigger a call to __rseq_handle_notify_resume also from the
> > scheduler
> > tick (which in turn will also schedule the work item).
> > 
> > The main advantage of this change is that the function can be
> > offloaded
> > to a different CPU and even preempted by RT tasks.
> > 
> > Moreover, this new behaviour is more predictable with periodic
> > tasks
> > with short runtime, which may rarely run during a scheduler tick.
> > Now, the work is always scheduled when the task returns to
> > userspace.
> > 
> > The work is disabled during mmdrop, since the function cannot sleep
> > in
> > all kernel configurations, we cannot wait for possibly running work
> > items to terminate. We make sure the mm is valid in case the task
> > is
> > terminating by reserving it with mmgrab/mmdrop, returning
> > prematurely if
> > we are really the last user while the work gets to run.
> > This situation is unlikely since we don't schedule the work for
> > exiting
> > tasks, but we cannot rule it out.
> > 
> > Fixes: 223baf9d17f2 ("sched: Fix performance regression introduced
> > by mm_cid")
> > Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
> > ---
> [...]
> > diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> > index 9aecd914ac691..363e51dd25175 100644
> > --- a/kernel/sched/core.c
> > +++ b/kernel/sched/core.c
> > @@ -5663,7 +5663,7 @@ void sched_tick(void)
> >   		resched_latency = cpu_resched_latency(rq);
> >   	calc_global_load_tick(rq);
> >   	sched_core_tick(rq);
> > -	task_tick_mm_cid(rq, donor);
> > +	rseq_preempt(donor);
> >   	scx_tick(rq);
> >   
> >   	rq_unlock(rq, &rf);
> 
> There is one tiny important detail worth discussing here: I wonder if
> executing a __rseq_handle_notify_resume() on return to userspace on
> every scheduler tick will cause noticeable performance degradation ?
> 
> I think we can mitigate the impact if we can quickly compute the
> amount
> of contiguous unpreempted runtime since last preemption, then we
> could
> use this as a way to only issue rseq_preempt() when there has been a
> minimum amount of contiguous unpreempted execution. Otherwise the
> rseq_preempt() already issued by preemption is enough.
> 
> I'm not entirely sure how to compute this "unpreempted contiguous
> runtime" value within sched_tick() though, any ideas ?

I was a bit concerned but, at least from the latency perspective, I
didn't see any noticeable difference. This may also depend on the
system under test, though.

We may not need to do that: what we are doing here is, improperly,
calling rseq_preempt. What if we instead call an rseq_tick which sets a
different bit in rseq_event_mask, and take that into consideration while
running __rseq_handle_notify_resume? (I sketch such a helper below.)

We could follow the periodicity of the mm_cid compaction and, if the
rseq event is a tick, only continue if it is time to compact (and we
can return this value from task_queue_mm_cid to avoid checking twice).
We would be off by one period (committing the rseq fields happens before
we schedule the next compaction), but it should be acceptable:

    __rseq_handle_notify_resume()
    {
        should_queue = task_queue_mm_cid();
        if (!should_queue && test_bit(RSEQ_EVENT_TICK, &t->rseq_event_mask))
            return;
        /* go on with __rseq_handle_notify_resume */
    }

Does it sound like an acceptable solution?
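
Something like this for the tick side (RSEQ_EVENT_TICK_BIT and rseq_tick
are made-up names, untested sketch):

    /* Mirror rseq_preempt(), but record the event as a tick. */
    static inline void rseq_tick(struct task_struct *t)
    {
        __set_bit(RSEQ_EVENT_TICK_BIT, &t->rseq_event_mask);
        rseq_set_notify_resume(t);
    }

    /* In sched_tick(), instead of rseq_preempt(donor): */
    rseq_tick(donor);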

Another doubt about this case: here we are worrying about a
hypothetical long-running task, and I'm assuming this can happen only for:
1. isolated CPUs with nohz_full and 1 task (the approach wouldn't work)
  or
2. tasks with RT priority mostly starving the CPU

In 1. I'm not sure the user would really need rseq in the first place.
In 2., assuming nothing like stalld/sched RT throttling is in place, we
will probably also never run the kworker doing the mm_cid compaction
(I'm currently using the system_wq); for this reason it's probably wiser
to use the system_unbound_wq, which as far as I could understand is the
only one that would allow the work to run on any other CPU.

I might be missing something trivial here, what do you think though?
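
Concretely, the change would just be queuing on the unbound workqueue in
task_queue_mm_cid(), something along these lines (sketch only):

    /* Ensure the mm exists when we run. */
    mmgrab(curr->mm);
    /* system_unbound_wq lets the item run on any CPU. */
    queue_work(system_unbound_wq, work);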

Thanks,
Gabriele
Mathieu Desnoyers Feb. 20, 2025, 3:30 p.m. UTC | #3
On 2025-02-20 09:42, Mathieu Desnoyers wrote:
> On 2025-02-20 05:26, Gabriele Monaco wrote:
>> Currently, the task_mm_cid_work function is called in a task work
>> triggered by a scheduler tick to frequently compact the mm_cids of each
>> process. This can delay the execution of the corresponding thread for
>> the entire duration of the function, negatively affecting the response
>> in case of real time tasks. In practice, we observe task_mm_cid_work
>> increasing the latency of 30-35us on a 128 cores system, this order of
>> magnitude is meaningful under PREEMPT_RT.
>>
>> Run the task_mm_cid_work in a new work_struct connected to the
>> mm_struct rather than in the task context before returning to
>> userspace.
>>
>> This work_struct is initialised with the mm and disabled before freeing
>> it. The queuing of the work happens while returning to userspace in
>> __rseq_handle_notify_resume, maintaining the checks to avoid running
>> more frequently than MM_CID_SCAN_DELAY.
>> To make sure this happens predictably also on long running tasks, we
>> trigger a call to __rseq_handle_notify_resume also from the scheduler
>> tick (which in turn will also schedule the work item).
>>
>> The main advantage of this change is that the function can be offloaded
>> to a different CPU and even preempted by RT tasks.
>>
>> Moreover, this new behaviour is more predictable with periodic tasks
>> with short runtime, which may rarely run during a scheduler tick.
>> Now, the work is always scheduled when the task returns to userspace.
>>
>> The work is disabled during mmdrop, since the function cannot sleep in
>> all kernel configurations, we cannot wait for possibly running work
>> items to terminate. We make sure the mm is valid in case the task is
>> terminating by reserving it with mmgrab/mmdrop, returning prematurely if
>> we are really the last user while the work gets to run.
>> This situation is unlikely since we don't schedule the work for exiting
>> tasks, but we cannot rule it out.
>>
>> Fixes: 223baf9d17f2 ("sched: Fix performance regression introduced by 
>> mm_cid")
>> Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
>> ---
> [...]
>> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
>> index 9aecd914ac691..363e51dd25175 100644
>> --- a/kernel/sched/core.c
>> +++ b/kernel/sched/core.c
>> @@ -5663,7 +5663,7 @@ void sched_tick(void)
>>           resched_latency = cpu_resched_latency(rq);
>>       calc_global_load_tick(rq);
>>       sched_core_tick(rq);
>> -    task_tick_mm_cid(rq, donor);
>> +    rseq_preempt(donor);
>>       scx_tick(rq);
>>       rq_unlock(rq, &rf);
> 
> There is one tiny important detail worth discussing here: I wonder if
> executing a __rseq_handle_notify_resume() on return to userspace on
> every scheduler tick will cause noticeable performance degradation ?
> 
> I think we can mitigate the impact if we can quickly compute the amount
> of contiguous unpreempted runtime since last preemption, then we could
> use this as a way to only issue rseq_preempt() when there has been a
> minimum amount of contiguous unpreempted execution. Otherwise the
> rseq_preempt() already issued by preemption is enough.
> 
> I'm not entirely sure how to compute this "unpreempted contiguous
> runtime" value within sched_tick() though, any ideas ?

I just discussed this with Peter over IRC, here is a possible way
forward for this:

The fair class has the information we are looking for as:

   se->sum_exec_runtime - se->prev_sum_exec_runtime

For the rt and dl classes, we'll need to keep track of prev_sum_exec_runtime
in their respective set_next_task() in the same way as fair does in
set_next_entity(). AFAIU it's not tracked at the moment in either rt or dl.

Then we can decide on a threshold of consecutive runtime beyond which it
makes sense to trigger a rseq_preempt() from sched_tick(), and use that
to lessen its impact.
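
For the rt class that would be something along these lines (untested,
just to illustrate; dl would need the same one-liner):

    static void set_next_task_rt(struct rq *rq, struct task_struct *p, bool first)
    {
        /* Remember where the current run started, as the fair class does. */
        p->se.prev_sum_exec_runtime = p->se.sum_exec_runtime;

        /* ... existing set_next_task_rt() body ... */
    }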

Thanks,

Mathieu

> 
> Thanks,
> 
> Mathieu
>
Mathieu Desnoyers Feb. 20, 2025, 3:47 p.m. UTC | #4
On 2025-02-20 10:30, Gabriele Monaco wrote:
> 
> 
> On Thu, 2025-02-20 at 09:42 -0500, Mathieu Desnoyers wrote:
>> On 2025-02-20 05:26, Gabriele Monaco wrote:
>>> Currently, the task_mm_cid_work function is called in a task work
>>> triggered by a scheduler tick to frequently compact the mm_cids of
>>> each
>>> process. This can delay the execution of the corresponding thread
>>> for
>>> the entire duration of the function, negatively affecting the
>>> response
>>> in case of real time tasks. In practice, we observe
>>> task_mm_cid_work
>>> increasing the latency of 30-35us on a 128 cores system, this order
>>> of
>>> magnitude is meaningful under PREEMPT_RT.
>>>
>>> Run the task_mm_cid_work in a new work_struct connected to the
>>> mm_struct rather than in the task context before returning to
>>> userspace.
>>>
>>> This work_struct is initialised with the mm and disabled before
>>> freeing
>>> it. The queuing of the work happens while returning to userspace in
>>> __rseq_handle_notify_resume, maintaining the checks to avoid
>>> running
>>> more frequently than MM_CID_SCAN_DELAY.
>>> To make sure this happens predictably also on long running tasks,
>>> we
>>> trigger a call to __rseq_handle_notify_resume also from the
>>> scheduler
>>> tick (which in turn will also schedule the work item).
>>>
>>> The main advantage of this change is that the function can be
>>> offloaded
>>> to a different CPU and even preempted by RT tasks.
>>>
>>> Moreover, this new behaviour is more predictable with periodic
>>> tasks
>>> with short runtime, which may rarely run during a scheduler tick.
>>> Now, the work is always scheduled when the task returns to
>>> userspace.
>>>
>>> The work is disabled during mmdrop, since the function cannot sleep
>>> in
>>> all kernel configurations, we cannot wait for possibly running work
>>> items to terminate. We make sure the mm is valid in case the task
>>> is
>>> terminating by reserving it with mmgrab/mmdrop, returning
>>> prematurely if
>>> we are really the last user while the work gets to run.
>>> This situation is unlikely since we don't schedule the work for
>>> exiting
>>> tasks, but we cannot rule it out.
>>>
>>> Fixes: 223baf9d17f2 ("sched: Fix performance regression introduced
>>> by mm_cid")
>>> Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
>>> ---
>> [...]
>>> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
>>> index 9aecd914ac691..363e51dd25175 100644
>>> --- a/kernel/sched/core.c
>>> +++ b/kernel/sched/core.c
>>> @@ -5663,7 +5663,7 @@ void sched_tick(void)
>>>    		resched_latency = cpu_resched_latency(rq);
>>>    	calc_global_load_tick(rq);
>>>    	sched_core_tick(rq);
>>> -	task_tick_mm_cid(rq, donor);
>>> +	rseq_preempt(donor);
>>>    	scx_tick(rq);
>>>    
>>>    	rq_unlock(rq, &rf);
>>
>> There is one tiny important detail worth discussing here: I wonder if
>> executing a __rseq_handle_notify_resume() on return to userspace on
>> every scheduler tick will cause noticeable performance degradation ?
>>
>> I think we can mitigate the impact if we can quickly compute the
>> amount
>> of contiguous unpreempted runtime since last preemption, then we
>> could
>> use this as a way to only issue rseq_preempt() when there has been a
>> minimum amount of contiguous unpreempted execution. Otherwise the
>> rseq_preempt() already issued by preemption is enough.
>>
>> I'm not entirely sure how to compute this "unpreempted contiguous
>> runtime" value within sched_tick() though, any ideas ?
> 
> I was a bit concerned but, at least from the latency perspective, I
> didn't see any noticeable difference. This may also depend on the
> system under test, though.

I see this as an issue for performance-related workloads, not
specifically for latency: we'd be adding additional rseq notifiers
triggered by the tick in workloads that are CPU-heavy and would
otherwise not run it after a tick. And we'd be adding this overhead
even in scenarios where there are relatively frequent preemptions
happening, because every tick would end up issuing rseq_preempt().

> We may not need to do that, what we are doing here is improperly
> calling rseq_preempt. What if we call an rseq_tick which sets a
> different bit in rseq_event_mask and take that into consideration while
> running __rseq_handle_notify_resume?

I'm not sure how much it would help. It may reduce the amount of
work to do, but we'd still be doing additional work at every tick.

See my other email about using

   se->sum_exec_runtime - se->prev_sum_exec_runtime

to only do rseq_preempt() when the last preemption was a certain amount
of consecutive runtime long ago. This is a better alternative I think.

> 
> We could follow the periodicity of the mm_cid compaction and, if the
> rseq event is a tick, only continue if it is time to compact (and we
> can return this value from task_queue_mm_cid to avoid checking twice).

Note that the mm_cid compaction delay is per-mm, and the fact that we
want to run __rseq_handle_notify_resume periodically to update the
mm_cid fields applies to all threads. Therefore, I don't think we can
use the mm_cid compaction delay (per-mm) for this.

> We would be off by one period (commit the rseq happens before we
> schedule the next compaction), but it should be acceptable:
> 
>      __rseq_handle_notify_resume()
>      {
>          should_queue = task_queue_mm_cid();
>          if (!should_queue && test_bit(RSEQ_EVENT_TICK, &t-
>> rseq_event_mask))
>              return;
>          /* go on with __rseq_handle_notify_resume */
>      }
> 
> Does it sound like an acceptable solution?

I'm not convinced your approach works due to the reasons explained
above. However the prev_sum_exec_runtime approach should work fine.

> 
> Another doubt about this case, here we are worrying about this
> hypothetical long-running task, I'm assuming this can happen only for:
> 1. isolated cpus with nohz_full and 1 task (the approach wouldn't work)

The prev_sum_exec_runtime approach would work for this case.

>    or
> 2. tasks with RT priority mostly starving the cpu

Likewise.

> 
> In 1. I'm not sure the user would really need rseq in the first place,

Not sure, but I'd prefer to keep this option available unless we have a
strong reason for not being able to support this.

> in 2., assuming nothing like stalld/sched rt throttling is in place, we
> will probably also never run the kworker doing mm_cid compaction (I'm
> using the system_wq), for this reason it's probably wiser to use the
> system_unbound_wq, which as far as I could understand is the only one
> that would allow the work to run on any other CPU.
> 
> I might be missing something trivial here, what do you think though?

Good point. I suspect using the system_unbound_wq would be preferable
here, especially given that we're iterating over possible CPUs anyway,
so I don't expect much gain from running in a system_wq over
system_unbound_wq. Or am I missing something ?

Thanks,

Mathieu

> 
> Thanks,
> Gabriele
>
Gabriele Monaco Feb. 20, 2025, 5:31 p.m. UTC | #5
2025-02-20T15:47:26Z Mathieu Desnoyers <mathieu.desnoyers@efficios.com>:

> On 2025-02-20 10:30, Gabriele Monaco wrote:
>>
>> On Thu, 2025-02-20 at 09:42 -0500, Mathieu Desnoyers wrote:
>>> On 2025-02-20 05:26, Gabriele Monaco wrote:
>>>> Currently, the task_mm_cid_work function is called in a task work
>>>> triggered by a scheduler tick to frequently compact the mm_cids of
>>>> each
>>>> process. This can delay the execution of the corresponding thread
>>>> for
>>>> the entire duration of the function, negatively affecting the
>>>> response
>>>> in case of real time tasks. In practice, we observe
>>>> task_mm_cid_work
>>>> increasing the latency of 30-35us on a 128 cores system, this order
>>>> of
>>>> magnitude is meaningful under PREEMPT_RT.
>>>>
>>>> Run the task_mm_cid_work in a new work_struct connected to the
>>>> mm_struct rather than in the task context before returning to
>>>> userspace.
>>>>
>>>> This work_struct is initialised with the mm and disabled before
>>>> freeing
>>>> it. The queuing of the work happens while returning to userspace in
>>>> __rseq_handle_notify_resume, maintaining the checks to avoid
>>>> running
>>>> more frequently than MM_CID_SCAN_DELAY.
>>>> To make sure this happens predictably also on long running tasks,
>>>> we
>>>> trigger a call to __rseq_handle_notify_resume also from the
>>>> scheduler
>>>> tick (which in turn will also schedule the work item).
>>>>
>>>> The main advantage of this change is that the function can be
>>>> offloaded
>>>> to a different CPU and even preempted by RT tasks.
>>>>
>>>> Moreover, this new behaviour is more predictable with periodic
>>>> tasks
>>>> with short runtime, which may rarely run during a scheduler tick.
>>>> Now, the work is always scheduled when the task returns to
>>>> userspace.
>>>>
>>>> The work is disabled during mmdrop, since the function cannot sleep
>>>> in
>>>> all kernel configurations, we cannot wait for possibly running work
>>>> items to terminate. We make sure the mm is valid in case the task
>>>> is
>>>> terminating by reserving it with mmgrab/mmdrop, returning
>>>> prematurely if
>>>> we are really the last user while the work gets to run.
>>>> This situation is unlikely since we don't schedule the work for
>>>> exiting
>>>> tasks, but we cannot rule it out.
>>>>
>>>> Fixes: 223baf9d17f2 ("sched: Fix performance regression introduced
>>>> by mm_cid")
>>>> Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
>>>> ---
>>> [...]
>>>> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
>>>> index 9aecd914ac691..363e51dd25175 100644
>>>> --- a/kernel/sched/core.c
>>>> +++ b/kernel/sched/core.c
>>>> @@ -5663,7 +5663,7 @@ void sched_tick(void)
>>>>        resched_latency = cpu_resched_latency(rq);
>>>>    calc_global_load_tick(rq);
>>>>    sched_core_tick(rq);
>>>> -   task_tick_mm_cid(rq, donor);
>>>> +   rseq_preempt(donor);
>>>>    scx_tick(rq);
>>>>         rq_unlock(rq, &rf);
>>>
>>> There is one tiny important detail worth discussing here: I wonder if
>>> executing a __rseq_handle_notify_resume() on return to userspace on
>>> every scheduler tick will cause noticeable performance degradation ?
>>>
>>> I think we can mitigate the impact if we can quickly compute the
>>> amount
>>> of contiguous unpreempted runtime since last preemption, then we
>>> could
>>> use this as a way to only issue rseq_preempt() when there has been a
>>> minimum amount of contiguous unpreempted execution. Otherwise the
>>> rseq_preempt() already issued by preemption is enough.
>>>
>>> I'm not entirely sure how to compute this "unpreempted contiguous
>>> runtime" value within sched_tick() though, any ideas ?
>> I was a bit concerned but, at least from the latency perspective, I
>> didn't see any noticeable difference. This may also depend on the
>> system under test, though.
>
> I see this as an issue for performance-related workloads, not
> specifically for latency: we'd be adding additional rseq notifiers
> triggered by the tick in workloads that are CPU-heavy and would
> otherwise not run it after tick. And we'd be adding this overhead
> even in scenarios where there are relatively frequent preemptions
> happening, because every tick would end up issuing rseq_preempt().
>
>> We may not need to do that, what we are doing here is improperly
>> calling rseq_preempt. What if we call an rseq_tick which sets a
>> different bit in rseq_event_mask and take that into consideration while
>> running __rseq_handle_notify_resume?
>
> I'm not sure how much it would help. It may reduce the amount of
> work to do, but we'd still be doing additional work at every tick.
>
> See my other email about using
>
>    se->sum_exec_runtime - se->prev_sum_exec_runtime
>
> to only do rseq_preempt() when the last preemption was a certain amount
> of consecutive runtime long ago. This is a better alternative I think.
>
>> We could follow the periodicity of the mm_cid compaction and, if the
>> rseq event is a tick, only continue if it is time to compact (and we
>> can return this value from task_queue_mm_cid to avoid checking twice).
>
> Note that the mm_cid compaction delay is per-mm, and the fact that we
> want to run __rseq_handle_notify_resume periodically to update the
> mm_cid fields applies to all threads. Therefore, I don't think we can
> use the mm_cid compaction delay (per-mm) for this.
>

Alright, I didn't think of that; I can explore your suggestion. It looks like most of it is already implemented.
What would be a good value to consider that the notify has waited long enough? 100ms or even less?
I don't think this would deserve a config option.

>> We would be off by one period (commit the rseq happens before we
>> schedule the next compaction), but it should be acceptable:
>>      __rseq_handle_notify_resume()
>>      {
>>          should_queue = task_queue_mm_cid();
>
>> Another doubt about this case, here we are worrying about this
>> hypothetical long-running task, I'm assuming this can happen only for:
>> 1. isolated cpus with nohz_full and 1 task (the approach wouldn't work)
>
> The prev_sum_exec_runtime approach would work for this case.
>

I mean that in that case nohz_full and isolation would ensure nothing else runs on the core, not even the tick (or perhaps that's also nohz=on). I don't think there's much we can do in such a case, is there? (On that core/context at least.)

>>    or
>> 2. tasks with RT priority mostly starving the cpu
>
> Likewise.
>
>> In 1. I'm not sure the user would really need rseq in the first place,
>
> Not sure, but I'd prefer to keep this option available unless we have a
> strong reason for not being able to support this.
>
>> in 2., assuming nothing like stalld/sched rt throttling is in place, we
>> will probably also never run the kworker doing mm_cid compaction (I'm
>> using the system_wq), for this reason it's probably wiser to use the
>> system_unbound_wq, which as far as I could understand is the only one
>> that would allow the work to run on any other CPU.
>> I might be missing something trivial here, what do you think though?
>
> Good point. I suspect using the system_unbound_wq would be preferable
> here, especially given that we're iterating over possible CPUs anyway,
> so I don't expect much gain from running in a system_wq over
> system_unbound_wq. Or am I missing something ?

I don't think so; I just picked the system_wq as it was easier, but it's probably best to switch.

Thanks,
Gabriele
Mathieu Desnoyers Feb. 20, 2025, 9:10 p.m. UTC | #6
On 2025-02-20 12:31, Gabriele Monaco wrote:
> 2025-02-20T15:47:26Z Mathieu Desnoyers <mathieu.desnoyers@efficios.com>:
> 
>> On 2025-02-20 10:30, Gabriele Monaco wrote:
>>>
>>> On Thu, 2025-02-20 at 09:42 -0500, Mathieu Desnoyers wrote:
>>>> On 2025-02-20 05:26, Gabriele Monaco wrote:
>>>>> Currently, the task_mm_cid_work function is called in a task work
>>>>> triggered by a scheduler tick to frequently compact the mm_cids of
>>>>> each
>>>>> process. This can delay the execution of the corresponding thread
>>>>> for
>>>>> the entire duration of the function, negatively affecting the
>>>>> response
>>>>> in case of real time tasks. In practice, we observe
>>>>> task_mm_cid_work
>>>>> increasing the latency of 30-35us on a 128 cores system, this order
>>>>> of
>>>>> magnitude is meaningful under PREEMPT_RT.
>>>>>
>>>>> Run the task_mm_cid_work in a new work_struct connected to the
>>>>> mm_struct rather than in the task context before returning to
>>>>> userspace.
>>>>>
>>>>> This work_struct is initialised with the mm and disabled before
>>>>> freeing
>>>>> it. The queuing of the work happens while returning to userspace in
>>>>> __rseq_handle_notify_resume, maintaining the checks to avoid
>>>>> running
>>>>> more frequently than MM_CID_SCAN_DELAY.
>>>>> To make sure this happens predictably also on long running tasks,
>>>>> we
>>>>> trigger a call to __rseq_handle_notify_resume also from the
>>>>> scheduler
>>>>> tick (which in turn will also schedule the work item).
>>>>>
>>>>> The main advantage of this change is that the function can be
>>>>> offloaded
>>>>> to a different CPU and even preempted by RT tasks.
>>>>>
>>>>> Moreover, this new behaviour is more predictable with periodic
>>>>> tasks
>>>>> with short runtime, which may rarely run during a scheduler tick.
>>>>> Now, the work is always scheduled when the task returns to
>>>>> userspace.
>>>>>
>>>>> The work is disabled during mmdrop, since the function cannot sleep
>>>>> in
>>>>> all kernel configurations, we cannot wait for possibly running work
>>>>> items to terminate. We make sure the mm is valid in case the task
>>>>> is
>>>>> terminating by reserving it with mmgrab/mmdrop, returning
>>>>> prematurely if
>>>>> we are really the last user while the work gets to run.
>>>>> This situation is unlikely since we don't schedule the work for
>>>>> exiting
>>>>> tasks, but we cannot rule it out.
>>>>>
>>>>> Fixes: 223baf9d17f2 ("sched: Fix performance regression introduced
>>>>> by mm_cid")
>>>>> Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
>>>>> ---
>>>> [...]
>>>>> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
>>>>> index 9aecd914ac691..363e51dd25175 100644
>>>>> --- a/kernel/sched/core.c
>>>>> +++ b/kernel/sched/core.c
>>>>> @@ -5663,7 +5663,7 @@ void sched_tick(void)
>>>>>         resched_latency = cpu_resched_latency(rq);
>>>>>     calc_global_load_tick(rq);
>>>>>     sched_core_tick(rq);
>>>>> -   task_tick_mm_cid(rq, donor);
>>>>> +   rseq_preempt(donor);
>>>>>     scx_tick(rq);
>>>>>          rq_unlock(rq, &rf);
>>>>
>>>> There is one tiny important detail worth discussing here: I wonder if
>>>> executing a __rseq_handle_notify_resume() on return to userspace on
>>>> every scheduler tick will cause noticeable performance degradation ?
>>>>
>>>> I think we can mitigate the impact if we can quickly compute the
>>>> amount
>>>> of contiguous unpreempted runtime since last preemption, then we
>>>> could
>>>> use this as a way to only issue rseq_preempt() when there has been a
>>>> minimum amount of contiguous unpreempted execution. Otherwise the
>>>> rseq_preempt() already issued by preemption is enough.
>>>>
>>>> I'm not entirely sure how to compute this "unpreempted contiguous
>>>> runtime" value within sched_tick() though, any ideas ?
>>> I was a bit concerned but, at least from the latency perspective, I
>>> didn't see any noticeable difference. This may also depend on the
>>> system under test, though.
>>
>> I see this as an issue for performance-related workloads, not
>> specifically for latency: we'd be adding additional rseq notifiers
>> triggered by the tick in workloads that are CPU-heavy and would
>> otherwise not run it after tick. And we'd be adding this overhead
>> even in scenarios where there are relatively frequent preemptions
>> happening, because every tick would end up issuing rseq_preempt().
>>
>>> We may not need to do that, what we are doing here is improperly
>>> calling rseq_preempt. What if we call an rseq_tick which sets a
>>> different bit in rseq_event_mask and take that into consideration while
>>> running __rseq_handle_notify_resume?
>>
>> I'm not sure how much it would help. It may reduce the amount of
>> work to do, but we'd still be doing additional work at every tick.
>>
>> See my other email about using
>>
>>     se->sum_exec_runtime - se->prev_sum_exec_runtime
>>
>> to only do rseq_preempt() when the last preemption was a certain amount
>> of consecutive runtime long ago. This is a better alternative I think.
>>
>>> We could follow the periodicity of the mm_cid compaction and, if the
>>> rseq event is a tick, only continue if it is time to compact (and we
>>> can return this value from task_queue_mm_cid to avoid checking twice).
>>
>> Note that the mm_cid compaction delay is per-mm, and the fact that we
>> want to run __rseq_handle_notify_resume periodically to update the
>> mm_cid fields applies to all threads. Therefore, I don't think we can
>> use the mm_cid compaction delay (per-mm) for this.
>>
> 
> Alright, didn't think of that, I can explore your suggestion. Looks like most of it is already implemented.
> What would be a good value to consider the notify waited enough? 100ms or even less?
> I don't think this would deserve a config.

I'd go with 100ms initially, and adjust if need be.
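
In sched_tick() the check could then look roughly like this (constant
name and value are placeholders):

    #define RSEQ_UNPREEMPTED_THRESHOLD	(100ULL * NSEC_PER_MSEC)

    /* Only notify rseq if the task ran unpreempted for long enough. */
    if (donor->se.sum_exec_runtime - donor->se.prev_sum_exec_runtime >
        RSEQ_UNPREEMPTED_THRESHOLD)
        rseq_preempt(donor);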

> 
>>> We would be off by one period (commit the rseq happens before we
>>> schedule the next compaction), but it should be acceptable:
>>>       __rseq_handle_notify_resume()
>>>       {
>>>           should_queue = task_queue_mm_cid();
>>
>>> Another doubt about this case, here we are worrying about this
>>> hypothetical long-running task, I'm assuming this can happen only for:
>>> 1. isolated cpus with nohz_full and 1 task (the approach wouldn't work)
>>
>> The prev_sum_exec_runtime approach would work for this case.
>>
> 
> I mean in that case nohz_full and isolation would ensure nothing else runs on the core, not even the tick (or perhaps that's also nohz=on). I don't think there's much we can do in such a case is there? (On that core/context at least)

In case of nohz_full without tick, the goal is to have pretty much no
kernel involved. So if userspace depends on the kernel for updating its
rseq mm_cid fields and it does not happen, well, too bad, userspace gets
what it asked for. Having less-compact mm_cid values than there should be
in that corner case is not an issue I think we need to deal with.

E.g. it's similar to missing scheduler stats bookkeeping with the tick
disabled. I don't think userspace should expect precise stats in that
nohz_full-without-tick scenario.

> 
>>>     or
>>> 2. tasks with RT priority mostly starving the cpu
>>
>> Likewise.
>>
>>> In 1. I'm not sure the user would really need rseq in the first place,
>>
>> Not sure, but I'd prefer to keep this option available unless we have a
>> strong reason for not being able to support this.
>>
>>> in 2., assuming nothing like stalld/sched rt throttling is in place, we
>>> will probably also never run the kworker doing mm_cid compaction (I'm
>>> using the system_wq), for this reason it's probably wiser to use the
>>> system_unbound_wq, which as far as I could understand is the only one
>>> that would allow the work to run on any other CPU.
>>> I might be missing something trivial here, what do you think though?
>>
>> Good point. I suspect using the system_unbound_wq would be preferable
>> here, especially given that we're iterating over possible CPUs anyway,
>> so I don't expect much gain from running in a system_wq over
>> system_unbound_wq. Or am I missing something ?
> 
> I don't think so, I just picked it as it was easier, but it's probably best to switch.

OK,

Thanks,

Mathieu

Patch

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 0234f14f2aa6b..e748cf51e0c32 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -889,6 +889,10 @@  struct mm_struct {
 		 * mm nr_cpus_allowed updates.
 		 */
 		raw_spinlock_t cpus_allowed_lock;
+		/*
+		 * @cid_work: Work item to run the mm_cid scan.
+		 */
+		struct work_struct cid_work;
 #endif
 #ifdef CONFIG_MMU
 		atomic_long_t pgtables_bytes;	/* size of all page tables */
@@ -1185,6 +1189,8 @@  enum mm_cid_state {
 	MM_CID_LAZY_PUT = (1U << 31),
 };
 
+extern void task_mm_cid_work(struct work_struct *work);
+
 static inline bool mm_cid_is_unset(int cid)
 {
 	return cid == MM_CID_UNSET;
@@ -1257,12 +1263,14 @@  static inline int mm_alloc_cid_noprof(struct mm_struct *mm, struct task_struct *
 	if (!mm->pcpu_cid)
 		return -ENOMEM;
 	mm_init_cid(mm, p);
+	INIT_WORK(&mm->cid_work, task_mm_cid_work);
 	return 0;
 }
 #define mm_alloc_cid(...)	alloc_hooks(mm_alloc_cid_noprof(__VA_ARGS__))
 
 static inline void mm_destroy_cid(struct mm_struct *mm)
 {
+	disable_work(&mm->cid_work);
 	free_percpu(mm->pcpu_cid);
 	mm->pcpu_cid = NULL;
 }
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 9632e3318e0d6..2fd65f125153d 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1397,7 +1397,6 @@  struct task_struct {
 	int				last_mm_cid;	/* Most recent cid in mm */
 	int				migrate_from_cpu;
 	int				mm_cid_active;	/* Whether cid bitmap is active */
-	struct callback_head		cid_work;
 #endif
 
 	struct tlbflush_unmap_batch	tlb_ubc;
@@ -2254,4 +2253,10 @@  static __always_inline void alloc_tag_restore(struct alloc_tag *tag, struct allo
 #define alloc_tag_restore(_tag, _old)		do {} while (0)
 #endif
 
+#ifdef CONFIG_SCHED_MM_CID
+extern void task_queue_mm_cid(struct task_struct *curr);
+#else
+static inline void task_queue_mm_cid(struct task_struct *curr) { }
+#endif
+
 #endif
diff --git a/kernel/rseq.c b/kernel/rseq.c
index 442aba29bc4cf..f8394ebbb6f4d 100644
--- a/kernel/rseq.c
+++ b/kernel/rseq.c
@@ -419,6 +419,7 @@  void __rseq_handle_notify_resume(struct ksignal *ksig, struct pt_regs *regs)
 	}
 	if (unlikely(rseq_update_cpu_node_id(t)))
 		goto error;
+	task_queue_mm_cid(t);
 	return;
 
 error:
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 9aecd914ac691..363e51dd25175 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5663,7 +5663,7 @@  void sched_tick(void)
 		resched_latency = cpu_resched_latency(rq);
 	calc_global_load_tick(rq);
 	sched_core_tick(rq);
-	task_tick_mm_cid(rq, donor);
+	rseq_preempt(donor);
 	scx_tick(rq);
 
 	rq_unlock(rq, &rf);
@@ -10530,22 +10530,16 @@  static void sched_mm_cid_remote_clear_weight(struct mm_struct *mm, int cpu,
 	sched_mm_cid_remote_clear(mm, pcpu_cid, cpu);
 }
 
-static void task_mm_cid_work(struct callback_head *work)
+void task_mm_cid_work(struct work_struct *work)
 {
 	unsigned long now = jiffies, old_scan, next_scan;
-	struct task_struct *t = current;
 	struct cpumask *cidmask;
-	struct mm_struct *mm;
+	struct mm_struct *mm = container_of(work, struct mm_struct, cid_work);
 	int weight, cpu;
 
-	SCHED_WARN_ON(t != container_of(work, struct task_struct, cid_work));
-
-	work->next = work;	/* Prevent double-add */
-	if (t->flags & PF_EXITING)
-		return;
-	mm = t->mm;
-	if (!mm)
-		return;
+	/* We are the last user, process already terminated. */
+	if (atomic_read(&mm->mm_count) == 1)
+		goto out_drop;
 	old_scan = READ_ONCE(mm->mm_cid_next_scan);
 	next_scan = now + msecs_to_jiffies(MM_CID_SCAN_DELAY);
 	if (!old_scan) {
@@ -10558,9 +10552,9 @@  static void task_mm_cid_work(struct callback_head *work)
 			old_scan = next_scan;
 	}
 	if (time_before(now, old_scan))
-		return;
+		goto out_drop;
 	if (!try_cmpxchg(&mm->mm_cid_next_scan, &old_scan, next_scan))
-		return;
+		goto out_drop;
 	cidmask = mm_cidmask(mm);
 	/* Clear cids that were not recently used. */
 	for_each_possible_cpu(cpu)
@@ -10572,6 +10566,8 @@  static void task_mm_cid_work(struct callback_head *work)
 	 */
 	for_each_possible_cpu(cpu)
 		sched_mm_cid_remote_clear_weight(mm, cpu, weight);
+out_drop:
+	mmdrop(mm);
 }
 
 void init_sched_mm_cid(struct task_struct *t)
@@ -10584,23 +10580,21 @@  void init_sched_mm_cid(struct task_struct *t)
 		if (mm_users == 1)
 			mm->mm_cid_next_scan = jiffies + msecs_to_jiffies(MM_CID_SCAN_DELAY);
 	}
-	t->cid_work.next = &t->cid_work;	/* Protect against double add */
-	init_task_work(&t->cid_work, task_mm_cid_work);
 }
 
-void task_tick_mm_cid(struct rq *rq, struct task_struct *curr)
+void task_queue_mm_cid(struct task_struct *curr)
 {
-	struct callback_head *work = &curr->cid_work;
+	struct work_struct *work = &curr->mm->cid_work;
 	unsigned long now = jiffies;
 
-	if (!curr->mm || (curr->flags & (PF_EXITING | PF_KTHREAD)) ||
-	    work->next != work)
+	if (!curr->mm || (curr->flags & (PF_EXITING | PF_KTHREAD)))
 		return;
 	if (time_before(now, READ_ONCE(curr->mm->mm_cid_next_scan)))
 		return;
 
-	/* No page allocation under rq lock */
-	task_work_add(curr, work, TWA_RESUME);
+	/* Ensure the mm exists when we run. */
+	mmgrab(curr->mm);
+	schedule_work(work);
 }
 
 void sched_mm_cid_exit_signals(struct task_struct *t)
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index c8512a9fb0229..37a2e2328283e 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -3630,7 +3630,6 @@  extern int use_cid_lock;
 
 extern void sched_mm_cid_migrate_from(struct task_struct *t);
 extern void sched_mm_cid_migrate_to(struct rq *dst_rq, struct task_struct *t);
-extern void task_tick_mm_cid(struct rq *rq, struct task_struct *curr);
 extern void init_sched_mm_cid(struct task_struct *t);
 
 static inline void __mm_cid_put(struct mm_struct *mm, int cid)
@@ -3899,7 +3898,6 @@  static inline void switch_mm_cid(struct rq *rq,
 static inline void switch_mm_cid(struct rq *rq, struct task_struct *prev, struct task_struct *next) { }
 static inline void sched_mm_cid_migrate_from(struct task_struct *t) { }
 static inline void sched_mm_cid_migrate_to(struct rq *dst_rq, struct task_struct *t) { }
-static inline void task_tick_mm_cid(struct rq *rq, struct task_struct *curr) { }
 static inline void init_sched_mm_cid(struct task_struct *t) { }
 #endif /* !CONFIG_SCHED_MM_CID */