sched: Fix stop_one_cpu_nowait() vs hotplug

Message ID 20231010200442.GA16515@noisy.programming.kicks-ass.net (mailing list archive)
State New, archived
Series sched: Fix stop_one_cpu_nowait() vs hotplug

Commit Message

Peter Zijlstra Oct. 10, 2023, 8:04 p.m. UTC
On Tue, Oct 10, 2023 at 04:57:47PM +0200, Peter Zijlstra wrote:
> On Tue, Oct 10, 2023 at 02:40:22PM +0000, Kuyo Chang (張建文) wrote:

> > It has been running well so far (more than a week) on the hotplug/set affinity
> > stress test. I will keep testing and report back if it happens
> > again.
> 
> OK, I suppose I should look at writing a coherent Changelog for this
> then...

Something like the below... ?

---
Subject: sched: Fix stop_one_cpu_nowait() vs hotplug
From: Peter Zijlstra <peterz@infradead.org>
Date: Tue Oct 10 20:57:39 CEST 2023

Kuyo reported sporadic failures on a sched_setaffinity() vs CPU
hotplug stress-test -- notably affine_move_task() remains stuck in
wait_for_completion(), leading to a hung-task detector warning.

Specifically, it was reported that stop_one_cpu_nowait(.fn =
migration_cpu_stop) returns false -- this stopper is responsible for
the matching complete().
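
For context, stop_one_cpu_nowait() fills in a cpu_stop_work and returns
the result of cpu_stop_queue_work(). A simplified sketch of that
queueing path (kernel/stop_machine.c, with some details elided):

	static bool cpu_stop_queue_work(unsigned int cpu, struct cpu_stop_work *work)
	{
		struct cpu_stopper *stopper = &per_cpu(cpu_stopper, cpu);
		DEFINE_WAKE_Q(wakeq);
		unsigned long flags;
		bool enabled;

		raw_spin_lock_irqsave(&stopper->lock, flags);
		enabled = stopper->enabled;
		if (enabled)
			__cpu_stop_queue_work(stopper, work, &wakeq);	/* queue + wake the stopper */
		else if (work->done)
			cpu_stop_signal_done(work->done);
		raw_spin_unlock_irqrestore(&stopper->lock, flags);
		wake_up_q(&wakeq);

		return enabled;	/* false once stop_machine_park() cleared ->enabled */
	}

So once the target CPU has run stop_machine_park(), the work is never
queued and migration_cpu_stop() never runs, hence no complete().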

The race scenario is:

	CPU0					CPU1

					// doing _cpu_down()

  __set_cpus_allowed_ptr()
    task_rq_lock();
					takedown_cpu()
					  stop_machine_cpuslocked(take_cpu_down..)
	
					<PREEMPT: cpu_stopper_thread()
					  MULTI_STOP_PREPARE
					  ...
    __set_cpus_allowed_ptr_locked()
      affine_move_task()
        task_rq_unlock();

  <PREEMPT: cpu_stopper_thread()\> 
    ack_state()
					  MULTI_STOP_RUN
					    take_cpu_down()
					      __cpu_disable();
					      stop_machine_park();
						stopper->enabled = false;
					 />
   />
	stop_one_cpu_nowait(.fn = migration_cpu_stop);
          if (stopper->enabled) // false!!!

	
That is, by doing stop_one_cpu_nowait() after dropping rq-lock, the
stopper thread gets a chance to preempt and allows the cpu-down for
the target CPU to complete.

OTOH, since stop_one_cpu_nowait() / cpu_stop_queue_work() needs to
issue a wakeup, it must not be run under the scheduler locks.
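
Concretely, a sketch of the forbidden nesting (illustrative only, using
the affine_move_task() call site for the example):

	task_rq_lock(rq, p, &rf);	/* holds p->pi_lock and rq->lock */
	/*
	 * cpu_stop_queue_work() wakes the target CPU's stopper thread;
	 * that wakeup (wake_up_q() -> try_to_wake_up()) itself takes
	 * pi_lock and rq->lock, so it must not nest inside the
	 * scheduler locks held here.
	 */
	stop_one_cpu_nowait(cpu_of(rq), migration_cpu_stop,
			    &pending->arg, &pending->stop_work);
	task_rq_unlock(rq, p, &rf);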

Solve this apparent contradiction by keeping preemption disabled over
the unlock + queue_stopper combination:

	preempt_disable();
	task_rq_unlock(...);
	if (!stop_pending)
	  stop_one_cpu_nowait(...)
	preempt_enable();

This respects the lock ordering constraints while still avoiding the
above race. That is, if we find the CPU is online under rq-lock, the
targeted stop_one_cpu_nowait() must succeed: with preemption disabled
the local stopper thread cannot run, so the stop_machine_cpuslocked()
invocation driving take_cpu_down() cannot make progress, and the target
CPU's stopper remains enabled until after the work has been queued.

Apply this pattern to all similar stop_one_cpu_nowait() invocations.

Fixes: 6d337eab041d ("sched: Fix migrate_disable() vs set_cpus_allowed_ptr()")
Reported-by: "Kuyo Chang (張建文)" <Kuyo.Chang@mediatek.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Tested-by: "Kuyo Chang (張建文)" <Kuyo.Chang@mediatek.com>
---
 kernel/sched/core.c     |   10 ++++++++--
 kernel/sched/deadline.c |    2 ++
 kernel/sched/fair.c     |    4 +++-
 3 files changed, 13 insertions(+), 3 deletions(-)

Comments

Kuyo Chang (張建文) Oct. 11, 2023, 3:24 a.m. UTC | #1
On Tue, 2023-10-10 at 22:04 +0200, Peter Zijlstra wrote:
>  On Tue, Oct 10, 2023 at 04:57:47PM +0200, Peter Zijlstra wrote:
> > On Tue, Oct 10, 2023 at 02:40:22PM +0000, Kuyo Chang (張建文) wrote:
> 
> > It has been running well so far (more than a week) on the hotplug/set
> > affinity stress test. I will keep testing and report back if it happens
> > again.
> > 
> > OK, I suppose I should look at writing a coherent Changelog for
> > this
> > then...
> 
> Something like the below... ?
> 
Thanks for illustrating the race scenario. It looks good to me.
But what about RT? Does RT also need these changes, as below?

---
 kernel/sched/rt.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index e93b69ef919b..6aaf0a3d6081 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -2063,9 +2063,11 @@ static int push_rt_task(struct rq *rq, bool pull)
                 */
                push_task = get_push_task(rq);
                if (push_task) {
+                       preempt_disable();
                        raw_spin_rq_unlock(rq);
                        stop_one_cpu_nowait(rq->cpu, push_cpu_stop,
                                            push_task, &rq->push_work);
+                       preempt_enable();
                        raw_spin_rq_lock(rq);
                }

@@ -2402,9 +2404,11 @@ static void pull_rt_task(struct rq *this_rq)
                double_unlock_balance(this_rq, src_rq);

                if (push_task) {
+                       preempt_disable();
                        raw_spin_rq_unlock(this_rq);
                        stop_one_cpu_nowait(src_rq->cpu, push_cpu_stop,
                                            push_task, &src_rq->push_work);
+                       preempt_enable();
                        raw_spin_rq_lock(this_rq);
                }
        }

> [...]
Peter Zijlstra Oct. 11, 2023, 1:26 p.m. UTC | #2
On Wed, Oct 11, 2023 at 03:24:19AM +0000, Kuyo Chang (張建文) wrote:
> On Tue, 2023-10-10 at 22:04 +0200, Peter Zijlstra wrote:
> >  On Tue, Oct 10, 2023 at 04:57:47PM +0200, Peter Zijlstra wrote:
> > > On Tue, Oct 10, 2023 at 02:40:22PM +0000, Kuyo Chang (張建文) wrote:
> > 
> > > > It has been running well so far (more than a week) on the hotplug/set
> > > > affinity stress test. I will keep testing and report back if it happens
> > > > again.
> > > 
> > > OK, I suppose I should look at writing a coherent Changelog for
> > > this
> > > then...
> > 
> > Something like the below... ?
> > 
> Thanks for illustrating the race scenario. It looks good to me.
> But what about RT? Does RT also need these changes, as below?
> 
> ---
>  kernel/sched/rt.c | 4 ++++
>  1 file changed, 4 insertions(+)
> 
> diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
> index e93b69ef919b..6aaf0a3d6081 100644
> --- a/kernel/sched/rt.c
> +++ b/kernel/sched/rt.c
> @@ -2063,9 +2063,11 @@ static int push_rt_task(struct rq *rq, bool pull)
>                  */
>                 push_task = get_push_task(rq);
>                 if (push_task) {
> +                       preempt_disable();
>                         raw_spin_rq_unlock(rq);
>                         stop_one_cpu_nowait(rq->cpu, push_cpu_stop,
>                                             push_task, &rq->push_work);
> +                       preempt_enable();
>                         raw_spin_rq_lock(rq);
>                 }
> 
> @@ -2402,9 +2404,11 @@ static void pull_rt_task(struct rq *this_rq)
>                 double_unlock_balance(this_rq, src_rq);
> 
>                 if (push_task) {
> +                       preempt_disable();
>                         raw_spin_rq_unlock(this_rq);
>                         stop_one_cpu_nowait(src_rq->cpu, push_cpu_stop,
>                                             push_task, &src_rq->push_work);
> +                       preempt_enable();
>                         raw_spin_rq_lock(this_rq);
>                 }
>         }

Bah, clearly git-grep didn't work for me last night; I'll go fix it up.

Patch

--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2645,9 +2645,11 @@  static int migration_cpu_stop(void *data
 		 * it.
 		 */
 		WARN_ON_ONCE(!pending->stop_pending);
+		preempt_disable();
 		task_rq_unlock(rq, p, &rf);
 		stop_one_cpu_nowait(task_cpu(p), migration_cpu_stop,
 				    &pending->arg, &pending->stop_work);
+		preempt_enable();
 		return 0;
 	}
 out:
@@ -2967,12 +2969,13 @@  static int affine_move_task(struct rq *r
 			complete = true;
 		}
 
+		preempt_disable();
 		task_rq_unlock(rq, p, rf);
-
 		if (push_task) {
 			stop_one_cpu_nowait(rq->cpu, push_cpu_stop,
 					    p, &rq->push_work);
 		}
+		preempt_enable();
 
 		if (complete)
 			complete_all(&pending->done);
@@ -3038,12 +3041,13 @@  static int affine_move_task(struct rq *r
 		if (flags & SCA_MIGRATE_ENABLE)
 			p->migration_flags &= ~MDF_PUSH;
 
+		preempt_disable();
 		task_rq_unlock(rq, p, rf);
-
 		if (!stop_pending) {
 			stop_one_cpu_nowait(cpu_of(rq), migration_cpu_stop,
 					    &pending->arg, &pending->stop_work);
 		}
+		preempt_enable();
 
 		if (flags & SCA_MIGRATE_ENABLE)
 			return 0;
@@ -9459,9 +9463,11 @@  static void balance_push(struct rq *rq)
 	 * Temporarily drop rq->lock such that we can wake-up the stop task.
 	 * Both preemption and IRQs are still disabled.
 	 */
+	preempt_disable();
 	raw_spin_rq_unlock(rq);
 	stop_one_cpu_nowait(rq->cpu, __balance_push_cpu_stop, push_task,
 			    this_cpu_ptr(&push_work));
+	preempt_enable();
 	/*
 	 * At this point need_resched() is true and we'll take the loop in
 	 * schedule(). The next pick is obviously going to be the stop task
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -2420,9 +2420,11 @@  static void pull_dl_task(struct rq *this
 		double_unlock_balance(this_rq, src_rq);
 
 		if (push_task) {
+			preempt_disable();
 			raw_spin_rq_unlock(this_rq);
 			stop_one_cpu_nowait(src_rq->cpu, push_cpu_stop,
 					    push_task, &src_rq->push_work);
+			preempt_enable();
 			raw_spin_rq_lock(this_rq);
 		}
 	}
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -11299,13 +11299,15 @@  static int load_balance(int this_cpu, st
 				busiest->push_cpu = this_cpu;
 				active_balance = 1;
 			}
-			raw_spin_rq_unlock_irqrestore(busiest, flags);
 
+			preempt_disable();
+			raw_spin_rq_unlock_irqrestore(busiest, flags);
 			if (active_balance) {
 				stop_one_cpu_nowait(cpu_of(busiest),
 					active_load_balance_cpu_stop, busiest,
 					&busiest->active_balance_work);
 			}
+			preempt_enable();
 		}
 	} else {
 		sd->nr_balance_failed = 0;