
[v8,5/5] tick/sched: Ensure quiet_vmstat() is called when the idle tick was stopped too

Message ID 20220924152441.822460-1-atomlin@redhat.com (mailing list archive)
State New
Series Ensure quiet_vmstat() is called when the idle tick was stopped too

Commit Message

Aaron Tomlin Sept. 24, 2022, 3:24 p.m. UTC
In the context of the idle task and an adaptive-tick mode (nohz_full)
CPU, quiet_vmstat() can be called: before stopping the idle tick, when
entering an idle state and on exit. In particular, for the latter case,
when the idle task is required to reschedule, the idle tick can remain
stopped and the timer expiration time can be KTIME_MAX i.e., effectively
endless. Now, before a nohz_full CPU enters an idle state, its
CPU-specific vmstat counters should be processed to ensure the
respective values have been reset and folded into the zone-specific
'vm_stat[]'. That said, the fold can be missed: this occurs only when
the idle tick was previously stopped and reprogramming of the timer is
not required.

A customer provided evidence indicating that the idle tick was stopped,
yet CPU-specific vmstat counters still remained populated. Thus one can
only assume that quiet_vmstat() was not invoked on return to the idle
loop.

If I understand correctly, I suspect this divergence might erroneously
prevent a reclaim attempt by kswapd. If the number of zone-specific free
pages is below their per-CPU drift value, then zone_page_state_snapshot()
is used to compute a more accurate view of that statistic. Thus any task
blocked on the NUMA-node-specific pfmemalloc_wait queue will be unable to
make significant progress via direct reclaim unless it is killed after
being woken up by kswapd (see throttle_direct_reclaim()).
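
For reference, zone_page_state_snapshot() folds the still-unfolded
per-CPU diffs into the reading, unlike plain zone_page_state() which
only reads the zone-wide atomic counter. A simplified sketch,
paraphrased from include/linux/vmstat.h (the per-CPU field is named
'per_cpu_zonestats' on v5.14+ kernels):

static inline unsigned long zone_page_state_snapshot(struct zone *zone,
					enum zone_stat_item item)
{
	long x = atomic_long_read(&zone->vm_stat[item]);

#ifdef CONFIG_SMP
	int cpu;

	/* Fold in the per-CPU deltas that have not been flushed yet */
	for_each_online_cpu(cpu)
		x += per_cpu_ptr(zone->per_cpu_zonestats, cpu)->vm_stat_diff[item];

	if (x < 0)
		x = 0;
#endif
	return x;
}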

Consider the following theoretical scenario:

 - Note: CPU X is part of 'tick_nohz_full_mask'

    1.      CPU Y migrated running task A to CPU X, which
            was in an idle state i.e. waiting for an IRQ;
            it marked the current task on CPU X as needing
            a reschedule i.e., set TIF_NEED_RESCHED, and
            sent a reschedule IPI to CPU X
            (see sched_move_task())

    2.      CPU X acknowledged the reschedule IPI. Generic
            idle loop code noticed the TIF_NEED_RESCHED flag
            against the idle task, attempted to exit the
            loop and called the main scheduler function i.e.
            __schedule().

            Since the idle tick was previously stopped, no
            scheduling-clock tick would occur, and so no
            deferred timers would be handled

    3.      After the transition to kernel execution, task A,
            running on CPU X, indirectly released a few pages
            (e.g. see __free_one_page()); CPU X's
            'vm_stat_diff[NR_FREE_PAGES]' was updated and the
            zone-specific 'vm_stat[]' update was deferred as
            per the CPU-specific stat threshold

    4.      Task A then invoked exit(2) and the kernel
            removed the task from the run-queue; the idle task
            was selected to execute next since there were no
            other runnable tasks assigned to the given CPU
            (see pick_next_task() and pick_next_task_idle())

    5.      On return to the idle loop, since the idle tick
            was already stopped and could remain so (see (1)
            in the trace below) e.g. no pending soft IRQs, no
            attempt was made to zero and fold CPU X's vmstat
            counters, since reprogramming of the scheduling-clock
            tick was not required (see (2))

		  ...
		    do_idle
		    {

		      __current_set_polling()
		      tick_nohz_idle_enter()

		      while (!need_resched()) {

			local_irq_disable()

			...

			/* No polling or broadcast event */
			cpuidle_idle_call()
			{

			  if (cpuidle_not_available(drv, dev)) {
			    tick_nohz_idle_stop_tick()
			      __tick_nohz_idle_stop_tick(this_cpu_ptr(&tick_cpu_sched))
			      {
				int cpu = smp_processor_id()

				if (ts->timer_expires_base)
				  expires = ts->timer_expires
				else if (can_stop_idle_tick(cpu, ts))
	      (1) ------->        expires = tick_nohz_next_event(ts, cpu)
				else
				  return

				ts->idle_calls++

				if (expires > 0LL) {

				  tick_nohz_stop_tick(ts, cpu)
				  {

				    if (ts->tick_stopped && (expires == ts->next_tick)) {
	      (2) ------->            if (tick == KTIME_MAX || ts->next_tick ==
					hrtimer_get_expires(&ts->sched_timer))
					return
				    }
				    ...
				  }

So, the idea of this patch is to ensure that refresh_cpu_vm_stats(false)
is called, when appropriate, on return to the idle loop even if the idle
tick was previously stopped.

A trivial test program was used to determine the impact of the proposed
changes relative to vanilla. The nanosleep(2) system call was used several
times to suspend execution for a period of time so as to approximately
compute the number of CPU cycles spent in the idle code path. The
following is the average count of CPU cycles:

				  Vanilla                 Modified

  Cycles per idle loop            151858                  153258  (+1.0%)

Signed-off-by: Aaron Tomlin <atomlin@redhat.com>
---
 kernel/time/tick-sched.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

Comments

Hillf Danton Sept. 25, 2022, 1:05 a.m. UTC | #1
On 24 Sep 2022 16:24:41 +0100 Aaron Tomlin <atomlin@redhat.com> wrote:
> 
> In the context of the idle task and an adaptive-tick mode (nohz_full)
> CPU, quiet_vmstat() can be called: before stopping the idle tick, when
> entering an idle state and on exit. In particular, for the latter case,
> when the idle task is required to reschedule, the idle tick can remain
> stopped and the timer expiration time can be KTIME_MAX i.e., effectively
> endless. Now, before a nohz_full CPU enters an idle state, its
> CPU-specific vmstat counters should be processed to ensure the
> respective values have been reset and folded into the zone-specific
> 'vm_stat[]'. That said, the fold can be missed: this occurs only when
> the idle tick was previously stopped and reprogramming of the timer is
> not required.
> 
> A customer provided evidence indicating that the idle tick was stopped,
> yet CPU-specific vmstat counters still remained populated. Thus one can
> only assume that quiet_vmstat() was not invoked on return to the idle
> loop.

Setting this assumption aside, why did the housekeeping CPUs fail to do
their work?

> [ log message of
>   [PATCH v8 3/5] mm/vmstat: Do not queue vmstat_update if tick is stopped
> 
> From the vmstat shepherd, for CPUs that have the tick stopped, do not
> queue local work to flush the per-CPU vmstats, since in that case the
> flush is performed on return to userspace or when entering idle.
> Also cancel any delayed work on the local CPU, when entering idle on nohz
> full CPUs. Per-CPU pages can be freed remotely from housekeeping CPUs.
> 
> end of log message]
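
For context, a minimal sketch of the shepherd behaviour that log message
describes — an illustration, not the literal 3/5 hunk; it assumes
tick_nohz_tick_stopped_cpu() from <linux/tick.h> and reuses the
vmstat_work/mm_percpu_wq/shepherd names from mm/vmstat.c:

static void vmstat_shepherd(struct work_struct *w)
{
	int cpu;

	cpus_read_lock();
	for_each_online_cpu(cpu) {
		struct delayed_work *dw = &per_cpu(vmstat_work, cpu);

		/*
		 * nohz_full CPUs with the tick stopped flush their own
		 * counters on return to userspace or on idle entry, so
		 * do not queue (and later run) work on them remotely.
		 */
		if (tick_nohz_tick_stopped_cpu(cpu))
			continue;

		if (!delayed_work_pending(dw) && need_update(cpu))
			queue_delayed_work_on(cpu, mm_percpu_wq, dw, 0);
	}
	cpus_read_unlock();

	schedule_delayed_work(&shepherd,
		round_jiffies_relative(sysctl_stat_interval));
}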
Aaron Tomlin Sept. 26, 2022, 9:20 a.m. UTC | #2
On Sun 2022-09-25 09:05 +0800, Hillf Danton wrote:
> On 24 Sep 2022 16:24:41 +0100 Aaron Tomlin <atomlin@redhat.com> wrote:
> > 
> > In the context of the idle task and an adaptive-tick mode (nohz_full)
> > CPU, quiet_vmstat() can be called: before stopping the idle tick, when
> > entering an idle state and on exit. In particular, for the latter case,
> > when the idle task is required to reschedule, the idle tick can remain
> > stopped and the timer expiration time can be KTIME_MAX i.e., effectively
> > endless. Now, before a nohz_full CPU enters an idle state, its
> > CPU-specific vmstat counters should be processed to ensure the
> > respective values have been reset and folded into the zone-specific
> > 'vm_stat[]'. That said, the fold can be missed: this occurs only when
> > the idle tick was previously stopped and reprogramming of the timer is
> > not required.
> > 
> > A customer provided evidence indicating that the idle tick was stopped,
> > yet CPU-specific vmstat counters still remained populated. Thus one can
> > only assume that quiet_vmstat() was not invoked on return to the idle
> > loop.
> 
> Setting this assumption aside, why did the housekeeping CPUs fail to do
> their work?

Hi Hillf,

I'm not sure I understand your question.

In this context, when tick processing is stopped, delayed work is not going
to be handled until the CPU exits idle.


Kind regards,
Hillf Danton Oct. 3, 2022, 12:44 p.m. UTC | #3
On 26 Sep 2022 10:20:04 +0100  Aaron Tomlin <atomlin@redhat.com> wrote:
> On Sun 2022-09-25 09:05 +0800, Hillf Danton wrote:
> > On 24 Sep 2022 16:24:41 +0100 Aaron Tomlin <atomlin@redhat.com> wrote:
> > > 
> > > In the context of the idle task and an adaptive-tick mode (nohz_full)
> > > CPU, quiet_vmstat() can be called: before stopping the idle tick, when
> > > entering an idle state and on exit. In particular, for the latter case,
> > > when the idle task is required to reschedule, the idle tick can remain
> > > stopped and the timer expiration time can be KTIME_MAX i.e., effectively
> > > endless. Now, before a nohz_full CPU enters an idle state, its
> > > CPU-specific vmstat counters should be processed to ensure the
> > > respective values have been reset and folded into the zone-specific
> > > 'vm_stat[]'. That said, the fold can be missed: this occurs only when
> > > the idle tick was previously stopped and reprogramming of the timer is
> > > not required.
> > > 
> > > A customer provided evidence indicating that the idle tick was stopped,
> > > yet CPU-specific vmstat counters still remained populated. Thus one can
> > > only assume that quiet_vmstat() was not invoked on return to the idle
> > > loop.
> > 
> > Setting this assumption aside, why did the housekeeping CPUs fail to
> > do their work?
> 
> Hi Hillf,
> 
> I'm not sure I understand your question.
> 
> In this context, when tick processing is stopped, delayed work is not going
> to be handled until the CPU exits idle.

Given that the work is canceled because per-CPU pages can be freed
remotely from housekeeping CPUs (see patch 3/5), what is added here is
not needed.

IOW, which one is incorrect?

BTW, given that delayed work is not going to be handled until the CPU
exits idle, canceling the work is a no-op in 3/5, even though the vmstat
shepherd's behaviour does not depend on the tick.
Aaron Tomlin Oct. 12, 2022, 12:41 p.m. UTC | #4
> Given that the work is canceled because per-CPU pages can be freed
> remotely from housekeeping CPUs (see patch 3/5), what is added here is
> not needed.

Hi Hillf,

Firstly, apologies for the delay!

The concern is to ensure that CPU-specific vmstat counters are reset and
folded into the NUMA-node-specific, zone-specific and global counters
before entering idle. It is necessary to invoke quiet_vmstat() on return
to idle even if the scheduling-clock tick has been previously stopped.
Please refer again to the complete scenario I described.

If I understand correctly, the remote drain/free of a zone's CPU-specific
pages can indeed be initiated by a "housekeeping" CPU, i.e. via
refresh_cpu_vm_stats(true) in a kworker, yet the actual free will only
occur when the nohz_full CPU exits the idle code and calls
schedule_idle().


Kind regards,
Marcelo Tosatti Oct. 17, 2022, 4:04 p.m. UTC | #5
On Mon, Oct 03, 2022 at 08:44:35PM +0800, Hillf Danton wrote:
> On 26 Sep 2022 10:20:04 +0100  Aaron Tomlin <atomlin@redhat.com> wrote:
> > On Sun 2022-09-25 09:05 +0800, Hillf Danton wrote:
> > > On 24 Sep 2022 16:24:41 +0100 Aaron Tomlin <atomlin@redhat.com> wrote:
> > > > 
> > > > In the context of the idle task and an adaptive-tick mode (nohz_full)
> > > > CPU, quiet_vmstat() can be called: before stopping the idle tick, when
> > > > entering an idle state and on exit. In particular, for the latter case,
> > > > when the idle task is required to reschedule, the idle tick can remain
> > > > stopped and the timer expiration time can be KTIME_MAX i.e., effectively
> > > > endless. Now, before a nohz_full CPU enters an idle state, its
> > > > CPU-specific vmstat counters should be processed to ensure the
> > > > respective values have been reset and folded into the zone-specific
> > > > 'vm_stat[]'. That said, the fold can be missed: this occurs only when
> > > > the idle tick was previously stopped and reprogramming of the timer is
> > > > not required.
> > > > 
> > > > A customer provided evidence indicating that the idle tick was stopped,
> > > > yet CPU-specific vmstat counters still remained populated. Thus one can
> > > > only assume that quiet_vmstat() was not invoked on return to the idle
> > > > loop.
> > > 
> > > Setting this assumption aside, why did the housekeeping CPUs fail
> > > to do their work?
> > 
> > Hi Hillf,
> > 
> > I'm not sure I understand your question.
> > 
> > In this context, when tick processing is stopped, delayed work is not going
> > to be handled until the CPU exits idle.
> 
> Given that the work is canceled because per-CPU pages can be freed
> remotely from housekeeping CPUs (see patch 3/5), what is added here is
> not needed.
> 
> IOW, which one is incorrect?
> 
> BTW, given that delayed work is not going to be handled until the CPU
> exits idle,

Hi Hillf,

The current code, with its comment, is:

void quiet_vmstat(void)
{
        if (system_state != SYSTEM_RUNNING)
                return;

        if (!delayed_work_pending(this_cpu_ptr(&vmstat_work)))
                return;

        if (!need_update(smp_processor_id()))
                return;
        
        /*
         * Just refresh counters and do not care about the pending delayed
         * vmstat_update. It doesn't fire that often to matter and canceling
         * it would be too expensive from this path.
         * vmstat_shepherd will take care about that for us.
         */
        refresh_cpu_vm_stats(false);
}

However, this is incorrect: the pending delayed work is only cancelled
when it is executed and not requeued, i.e. in:

static void vmstat_update(struct work_struct *w)
{
        if (refresh_cpu_vm_stats(true)) {
                /*
                 * Counters were updated so we expect more updates
                 * to occur in the future. Keep on running the
                 * update worker thread.
                 */
                queue_delayed_work_on(smp_processor_id(), mm_percpu_wq,
                                this_cpu_ptr(&vmstat_work),
                                round_jiffies_relative(sysctl_stat_interval));
        }
}

Since this patchset changes the synchronization to happen on return to
userspace or when entering idle, we do want to cancel that work (which,
after the synchronization, is no longer necessary).
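
A minimal sketch of what that cancellation could look like —
hypothetical, not the literal 3/5 hunk; it reuses need_update() and the
per-CPU vmstat_work from mm/vmstat.c:

void quiet_vmstat(void)
{
	if (system_state != SYSTEM_RUNNING)
		return;

	if (!need_update(smp_processor_id()))
		return;

	/* Fold the differentials into the global counters now... */
	refresh_cpu_vm_stats(false);

	/*
	 * ...and cancel the pending vmstat_update, which is redundant
	 * after the flush and would otherwise later run from a kworker
	 * and interrupt an isolated nohz_full CPU.
	 */
	cancel_delayed_work(this_cpu_ptr(&vmstat_work));
}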

> canceling the work is a no-op in 3/5, even though the vmstat shepherd's
> behaviour does not depend on the tick.

Canceling the work is not a no-op in 3/5: if the work is not cancelled
(i.e. if 3/5 is dropped), there will be pending work to be executed from
the kworker thread on an isolated CPU, which is undesired for a fully
isolated CPU that should run with no interruptions.

Patch

diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
index 634cd0fac267..88a3e9fc3824 100644
--- a/kernel/time/tick-sched.c
+++ b/kernel/time/tick-sched.c
@@ -926,13 +926,14 @@  static void tick_nohz_stop_tick(struct tick_sched *ts, int cpu)
 	 */
 	if (!ts->tick_stopped) {
 		calc_load_nohz_start();
-		quiet_vmstat();
 
 		ts->last_tick = hrtimer_get_expires(&ts->sched_timer);
 		ts->tick_stopped = 1;
 		trace_tick_stop(1, TICK_DEP_MASK_NONE);
 	}
 
+	/* Attempt to fold whether or not the idle tick was just stopped */
+	quiet_vmstat();
 	ts->next_tick = tick;
 
 	/*