
[v15,02/12] cpu/hotplug: Move idle_thread_get() to <linux/smpboot.h>

Message ID 20230316222109.1940300-3-usama.arif@bytedance.com (mailing list archive)
State Superseded
Series Parallel CPU bringup for x86_64

Commit Message

Usama Arif March 16, 2023, 10:20 p.m. UTC
From: David Woodhouse <dwmw@amazon.co.uk>

Instead of relying purely on the special-case wrapper in bringup_cpu()
to pass the idle thread to __cpu_up(), expose idle_thread_get() so that
the architecture code can obtain it directly when necessary.

This will be useful when the existing __cpu_up() is split into multiple
phases, only *one* of which will actually need the idle thread.

If the architecture code is to register its new pre-bringup states with
the cpuhp core, having a special-case wrapper to pass extra arguments is
non-trivial and it's easier just to let the arch register its function
pointer to be invoked with the standard API.

To reduce duplication, move the shadow stack reset and kasan unpoisoning
into idle_thread_get() too.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Usama Arif <usama.arif@bytedance.com>
Tested-by: Paul E. McKenney <paulmck@kernel.org>
Tested-by: Kim Phillips <kim.phillips@amd.com>
Tested-by: Oleksandr Natalenko <oleksandr@natalenko.name>
Tested-by: Guilherme G. Piccoli <gpiccoli@igalia.com>
---
 include/linux/smpboot.h | 10 ++++++++++
 kernel/cpu.c            | 13 +++----------
 kernel/smpboot.c        | 11 ++++++++++-
 kernel/smpboot.h        |  2 --
 4 files changed, 23 insertions(+), 13 deletions(-)
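
As an illustration of the intent here, below is a minimal sketch of how architecture code might consume the newly exported idle_thread_get() from a pre-bringup hotplug state, using the two-argument signature this patch introduces. The callback name, the do_arch_kick_ap() helper and the choice of CPUHP_BP_PREPARE_DYN are placeholders for illustration only and are not part of this series:

#include <linux/cpuhotplug.h>
#include <linux/err.h>
#include <linux/init.h>
#include <linux/smpboot.h>

/* Placeholder for whatever arch routine actually starts the AP. */
extern int do_arch_kick_ap(unsigned int cpu, struct task_struct *idle);

/* Hypothetical pre-bringup phase: the one phase that needs the idle thread. */
static int arch_cpu_kick_ap(unsigned int cpu)
{
	struct task_struct *idle = idle_thread_get(cpu, true);

	if (IS_ERR(idle))
		return PTR_ERR(idle);

	return do_arch_kick_ap(cpu, idle);
}

/* Registered through the standard cpuhp API; no special-case wrapper. */
static int __init arch_register_kick_state(void)
{
	int ret = cpuhp_setup_state_nocalls(CPUHP_BP_PREPARE_DYN,
					    "arch/cpu:kick_ap",
					    arch_cpu_kick_ap, NULL);

	/* Dynamic states return the allocated state number on success. */
	return ret < 0 ? ret : 0;
}
early_initcall(arch_register_kick_state);

Because the callback is registered like any other hotplug state, the cpuhp core invokes it with just the CPU number, which is exactly why the idle thread has to be obtainable without a special-case wrapper.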

Comments

Borislav Petkov March 19, 2023, 4:34 p.m. UTC | #1
On Thu, Mar 16, 2023 at 10:20:59PM +0000, Usama Arif wrote:
> From: David Woodhouse <dwmw@amazon.co.uk>
> 
> Instead of relying purely on the special-case wrapper in bringup_cpu()
> to pass the idle thread to __cpu_up(), expose idle_thread_get() so that
> the architecture code can obtain it directly when necessary.
> 
> This will be useful when the existing __cpu_up() is split into multiple
> phases, only *one* of which will actually need the idle thread.
> 
> If the architecture code is to register its new pre-bringup states with
> the cpuhp core, having a special-case wrapper to pass extra arguments is
> non-trivial and it's easier just to let the arch register its function
> pointer to be invoked with the standard API.
> 
> To reduce duplication, move the shadow stack reset and kasan unpoisoning

I was wondering what "shadow stack" means here, as that set is not
upstream yet. You mean "shadow call stack", which is apparently
something else: a compiler-generated, purely software thing.

> into idle_thread_get() too.

Frankly, I don't think resetting shadow call stack and kasan state
belongs in a function which returns the idle thread. Even more so if you
have to add an @unpoison param which is false sometimes and sometimes
true, depending on where you call the function.

I think you should have a helper

	tsk_reset_stacks(struct task_struct *tsk);

or so which is called where @unpoison == true instead of having a getter
function do something unrelated too.
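
Spelled out, such a helper would presumably be little more than the two
calls the patch currently folds into the getter (a sketch; the name
follows the suggestion above and nothing like it exists in-tree):

#include <linux/kasan.h>
#include <linux/scs.h>
#include <linux/sched.h>

/* Reset stale stack state from the last time this CPU was online. */
static inline void tsk_reset_stacks(struct task_struct *tsk)
{
	scs_task_reset(tsk);
	kasan_unpoison_task_stack(tsk);
}

Callers that would otherwise pass @unpoison == true would then call it
explicitly right after idle_thread_get(cpu).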

Thx.
David Woodhouse March 20, 2023, 8:17 a.m. UTC | #2
On Sun, 2023-03-19 at 17:34 +0100, Borislav Petkov wrote:
> Frankly, I don't think resetting shadow call stack and kasan state
> belongs in a function which returns the idle thread. Even more so if you
> have to add an @unpoison param which is false sometimes and sometimes
> true, depending on where you call the function.
> 
> I think you should have a helper
> 
>         tsk_reset_stacks(struct task_struct *tsk);
> 
> or so which is called where @unpoison == true instead of having a getter
> function do something unrelated too.

Yeah, I see that but my primary concern was that I didn't want callers
to be able to *forget* it, which is what happened the first time.

I don't feel so bad about the getter function actually making the
object *usable* as well as getting it. We don't have to remember to
make a separate function call to unpoison a pointer newly returned from
kmalloc either.

I *think* the 'unpoison' arg is purely an optimisation anyway, because
those operations are, or at least *could* be, idempotent. It's just
that it's a bit pointless doing them from the finish_cpu() call.

But actually, I think we can have it both ways. There's an early call
to idle_thread_get(cpu) in _cpu_up(), with the comment /* Let it fail
before we try to bring the cpu up */

With an adjustment to the comment, we could do the unpoison and reset
the shadow call stack there — in common code where no architecture can
forget to do it — and still leave idle_thread_get() doing precisely
just the 'get'.

In _cpu_up() the call to idle_thread_get() only happens if the CPU in
question is starting from CPUHP_OFFLINE but I think that's sufficient?

diff --git a/kernel/cpu.c b/kernel/cpu.c
index 6c0a92ca6bb5..43e0a77f21e8 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -591,12 +591,6 @@ static int bringup_cpu(unsigned int cpu)
 	struct task_struct *idle = idle_thread_get(cpu);
 	int ret;
 
-	/*
-	 * Reset stale stack state from the last time this CPU was online.
-	 */
-	scs_task_reset(idle);
-	kasan_unpoison_task_stack(idle);
-
 	/*
 	 * Some architectures have to walk the irq descriptors to
 	 * setup the vector space for the cpu which comes online.
@@ -1383,6 +1377,12 @@ static int _cpu_up(unsigned int cpu, int tasks_frozen, enum cpuhp_state target)
 			ret = PTR_ERR(idle);
 			goto out;
 		}
+
+		/*
+		 * Reset stale stack state from the last time this CPU was online.
+		 */
+		scs_task_reset(idle);
+		kasan_unpoison_task_stack(idle);
 	}
 
 	cpuhp_tasks_frozen = tasks_frozen;
David Woodhouse March 21, 2023, 7:20 p.m. UTC | #3
On Mon, 2023-03-20 at 08:17 +0000, David Woodhouse wrote:
> 
> In _cpu_up() the call to idle_thread_get() only happens if the CPU in
> question is starting from CPUHP_OFFLINE but I think that's sufficient?
> 
> diff --git a/kernel/cpu.c b/kernel/cpu.c
> index 6c0a92ca6bb5..43e0a77f21e8 100644
> --- a/kernel/cpu.c
> +++ b/kernel/cpu.c
> @@ -591,12 +591,6 @@ static int bringup_cpu(unsigned int cpu)
>  	struct task_struct *idle = idle_thread_get(cpu);
>  	int ret;
>  
> -	/*
> -	 * Reset stale stack state from the last time this CPU was online.
> -	 */
> -	scs_task_reset(idle);
> -	kasan_unpoison_task_stack(idle);
> -
>  	/*
>  	 * Some architectures have to walk the irq descriptors to
>  	 * setup the vector space for the cpu which comes online.
> @@ -1383,6 +1377,12 @@ static int _cpu_up(unsigned int cpu, int tasks_frozen, enum cpuhp_state target)
>  			ret = PTR_ERR(idle);
>  			goto out;
>  		}
> +
> +		/*
> +		 * Reset stale stack state from the last time this CPU was online.
> +		 */
> +		scs_task_reset(idle);
> +		kasan_unpoison_task_stack(idle);
>  	}
>  
>  	cpuhp_tasks_frozen = tasks_frozen;

I put that into a separate commit, rebased on tip/x86/apic and pushed
it out with the rest of the changes to
https://git.infradead.org/users/dwmw2/linux.git/shortlog/refs/heads/parallel-6.2-rc3-v16

I'll let Usama do another round of testing and repost it (please).

Thanks!

Patch

diff --git a/include/linux/smpboot.h b/include/linux/smpboot.h
index 9d1bc65d226c..df6417703e4c 100644
--- a/include/linux/smpboot.h
+++ b/include/linux/smpboot.h
@@ -5,6 +5,16 @@ 
 #include <linux/types.h>
 
 struct task_struct;
+
+#ifdef CONFIG_GENERIC_SMP_IDLE_THREAD
+struct task_struct *idle_thread_get(unsigned int cpu, bool unpoison);
+#else
+static inline struct task_struct *idle_thread_get(unsigned int cpu, bool unpoison)
+{
+	return NULL;
+}
+#endif
+
 /* Cookie handed to the thread_fn*/
 struct smpboot_thread_data;
 
diff --git a/kernel/cpu.c b/kernel/cpu.c
index 6c0a92ca6bb5..6b3dccb4a888 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -31,7 +31,6 @@ 
 #include <linux/smpboot.h>
 #include <linux/relay.h>
 #include <linux/slab.h>
-#include <linux/scs.h>
 #include <linux/percpu-rwsem.h>
 #include <linux/cpuset.h>
 #include <linux/random.h>
@@ -588,15 +587,9 @@  static int bringup_wait_for_ap(unsigned int cpu)
 
 static int bringup_cpu(unsigned int cpu)
 {
-	struct task_struct *idle = idle_thread_get(cpu);
+	struct task_struct *idle = idle_thread_get(cpu, true);
 	int ret;
 
-	/*
-	 * Reset stale stack state from the last time this CPU was online.
-	 */
-	scs_task_reset(idle);
-	kasan_unpoison_task_stack(idle);
-
 	/*
 	 * Some architectures have to walk the irq descriptors to
 	 * setup the vector space for the cpu which comes online.
@@ -614,7 +607,7 @@  static int bringup_cpu(unsigned int cpu)
 
 static int finish_cpu(unsigned int cpu)
 {
-	struct task_struct *idle = idle_thread_get(cpu);
+	struct task_struct *idle = idle_thread_get(cpu, false);
 	struct mm_struct *mm = idle->active_mm;
 
 	/*
@@ -1378,7 +1371,7 @@  static int _cpu_up(unsigned int cpu, int tasks_frozen, enum cpuhp_state target)
 
 	if (st->state == CPUHP_OFFLINE) {
 		/* Let it fail before we try to bring the cpu up */
-		idle = idle_thread_get(cpu);
+		idle = idle_thread_get(cpu, false);
 		if (IS_ERR(idle)) {
 			ret = PTR_ERR(idle);
 			goto out;
diff --git a/kernel/smpboot.c b/kernel/smpboot.c
index 2c7396da470c..24e81c725e7b 100644
--- a/kernel/smpboot.c
+++ b/kernel/smpboot.c
@@ -11,6 +11,7 @@ 
 #include <linux/slab.h>
 #include <linux/sched.h>
 #include <linux/sched/task.h>
+#include <linux/scs.h>
 #include <linux/export.h>
 #include <linux/percpu.h>
 #include <linux/kthread.h>
@@ -27,12 +28,20 @@ 
  */
 static DEFINE_PER_CPU(struct task_struct *, idle_threads);
 
-struct task_struct *idle_thread_get(unsigned int cpu)
+struct task_struct *idle_thread_get(unsigned int cpu, bool unpoison)
 {
 	struct task_struct *tsk = per_cpu(idle_threads, cpu);
 
 	if (!tsk)
 		return ERR_PTR(-ENOMEM);
+
+	if (unpoison) {
+		/*
+		 * Reset stale stack state from last time this CPU was online.
+		 */
+		scs_task_reset(tsk);
+		kasan_unpoison_task_stack(tsk);
+	}
 	return tsk;
 }
 
diff --git a/kernel/smpboot.h b/kernel/smpboot.h
index 34dd3d7ba40b..60c609318ad6 100644
--- a/kernel/smpboot.h
+++ b/kernel/smpboot.h
@@ -5,11 +5,9 @@ 
 struct task_struct;
 
 #ifdef CONFIG_GENERIC_SMP_IDLE_THREAD
-struct task_struct *idle_thread_get(unsigned int cpu);
 void idle_thread_set_boot_cpu(void);
 void idle_threads_init(void);
 #else
-static inline struct task_struct *idle_thread_get(unsigned int cpu) { return NULL; }
 static inline void idle_thread_set_boot_cpu(void) { }
 static inline void idle_threads_init(void) { }
 #endif