diff mbox series

[1/3] xen/sched: populate cpupool0 only after all cpus are up

Message ID 20190802130730.15942-2-jgross@suse.com (mailing list archive)
State Superseded
Series xen/sched: use new idle scheduler for free cpus

Commit Message

Jürgen Groß Aug. 2, 2019, 1:07 p.m. UTC
With core or socket scheduling we need to know the number of siblings
per scheduling unit before we can setup the scheduler properly. In
order to prepare that do cpupool0 population only after all cpus are
up.

With that in place there is no need to create cpupool0 earlier, so
do that just before assigning the cpus. Initialize free cpus with all
online cpus at that time in order to be able to add the cpu notifier
late, too.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V1: new patch
---
 xen/common/cpupool.c | 18 ++++++++++++++----
 1 file changed, 14 insertions(+), 4 deletions(-)

Comments

Dario Faggioli Aug. 13, 2019, 4:07 p.m. UTC | #1
On Fri, 2019-08-02 at 15:07 +0200, Juergen Gross wrote:
> With core or socket scheduling we need to know the number of siblings
> per scheduling unit before we can setup the scheduler properly. In
> order to prepare that do cpupool0 population only after all cpus are
> up.
> 
> With that in place there is no need to create cpupool0 earlier, so
> do that just before assigning the cpus. Initialize free cpus with all
> online cpus at that time in order to be able to add the cpu notifier
> late, too.
> 
So, now that this series has been made independent, I think that
mentions of the core-scheduling one should be dropped.

I mean, it is at least possible that this series would go in, while the
core-scheduling one never will. And at that point, it would be very
hard, for someone doing archaeology, to understand what went on.

It seems to me that this patch simplifies cpupool initialization (as,
e.g., the direct call to the CPU_ONLINE notifier for the BSP was IMO
rather convoluted). And that is made possible by moving the
initialization itself to a later point, making all the online CPUs look
like free CPUs, and using the standard (internal) API directly (i.e.,
cpupool_assign_cpu_locked()) to add them to Pool-0.

So, I'd kill the very first sentence and rearrange the rest to include
at least a quick mention of the simplification that we achieve.

Regards
George Dunlap Aug. 14, 2019, 4:15 p.m. UTC | #2
On Fri, Aug 2, 2019 at 2:08 PM Juergen Gross <jgross@suse.com> wrote:
>
> With core or socket scheduling we need to know the number of siblings
> per scheduling unit before we can setup the scheduler properly. In
> order to prepare that do cpupool0 population only after all cpus are
> up.
>
> With that in place there is no need to create cpupool0 earlier, so
> do that just before assigning the cpus. Initialize free cpus with all
> online cpus at that time in order to be able to add the cpu notifier
> late, too.
>
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
> V1: new patch
> ---
>  xen/common/cpupool.c | 18 ++++++++++++++----
>  1 file changed, 14 insertions(+), 4 deletions(-)
>
> diff --git a/xen/common/cpupool.c b/xen/common/cpupool.c
> index f90e496eda..caea5bd8b3 100644
> --- a/xen/common/cpupool.c
> +++ b/xen/common/cpupool.c
> @@ -762,18 +762,28 @@ static struct notifier_block cpu_nfb = {
>      .notifier_call = cpu_callback
>  };
>
> -static int __init cpupool_presmp_init(void)
> +static int __init cpupool_init(void)
>  {
> +    unsigned int cpu;
>      int err;
> -    void *cpu = (void *)(long)smp_processor_id();
> +
>      cpupool0 = cpupool_create(0, 0, &err);
>      BUG_ON(cpupool0 == NULL);
>      cpupool_put(cpupool0);
> -    cpu_callback(&cpu_nfb, CPU_ONLINE, cpu);
>      register_cpu_notifier(&cpu_nfb);
> +
> +    spin_lock(&cpupool_lock);
> +
> +    cpumask_copy(&cpupool_free_cpus, &cpu_online_map);
> +
> +    for_each_cpu ( cpu, &cpupool_free_cpus )
> +        cpupool_assign_cpu_locked(cpupool0, cpu);
> +
> +    spin_unlock(&cpupool_lock);

Just to make sure I understand what's happening here -- cpu_callback()
now won't get called with CPU_ONLINE early in the boot process; but it
will still be called with CPU_ONLINE in other circumstances (e.g.,
hot-plug / suspend / whatever)?

If not, then it's probably best to remove CPU_ONLINE from that switch statement.

Sorry that's an overly-basic question; I don't have a good picture for
the cpu state machine.

 -George
Dario Faggioli Aug. 14, 2019, 4:58 p.m. UTC | #3
On Wed, 2019-08-14 at 17:15 +0100, George Dunlap wrote:
> On Fri, Aug 2, 2019 at 2:08 PM Juergen Gross <jgross@suse.com> wrote:
> > --- a/xen/common/cpupool.c
> > +++ b/xen/common/cpupool.c
> > @@ -762,18 +762,28 @@ static struct notifier_block cpu_nfb = {
> >      .notifier_call = cpu_callback
> >  };
> > 
> > -static int __init cpupool_presmp_init(void)
> > +static int __init cpupool_init(void)
> >  {
> > +    unsigned int cpu;
> >      int err;
> > -    void *cpu = (void *)(long)smp_processor_id();
> > +
> >      cpupool0 = cpupool_create(0, 0, &err);
> >      BUG_ON(cpupool0 == NULL);
> >      cpupool_put(cpupool0);
> > -    cpu_callback(&cpu_nfb, CPU_ONLINE, cpu);
> >      register_cpu_notifier(&cpu_nfb);
> > +
> > +    spin_lock(&cpupool_lock);
> > +
> > +    cpumask_copy(&cpupool_free_cpus, &cpu_online_map);
> > +
> > +    for_each_cpu ( cpu, &cpupool_free_cpus )
> > +        cpupool_assign_cpu_locked(cpupool0, cpu);
> > +
> > +    spin_unlock(&cpupool_lock);
> 
> Just to make sure I understand what's happening here --
> cpu_callback()
> now won't get called with CPU_ONLINE early in the boot process; but
> it
> will still be called with CPU_ONLINE in other circumstances (e.g.,
> hot-plug / suspend / whatever)?
> 
Exactly.

It is not used for resume (from suspend) any longer, since commit
6870ea9d1fad ("xen/cpupool: simplify suspend/resume handling").

But it is used for putting the various CPUs in Pool-0, as they come up
during boot.

This patch removes the "hack" of calling it directly, during cpupool
initialization, for the BSP.

> Sorry that's an overly-basic question; I don't have a good picture
> for
> the cpu state machine.
> 
Well, I used to... I tried to quickly double check things, and what I
said above should still be valid, even after the latest changes (or so
I hope :-) ).

Regards
Jürgen Groß Aug. 26, 2019, 8:35 a.m. UTC | #4
On 13.08.19 18:07, Dario Faggioli wrote:
> On Fri, 2019-08-02 at 15:07 +0200, Juergen Gross wrote:
>> With core or socket scheduling we need to know the number of siblings
>> per scheduling unit before we can setup the scheduler properly. In
>> order to prepare that do cpupool0 population only after all cpus are
>> up.
>>
>> With that in place there is no need to create cpupool0 earlier, so
>> do that just before assigning the cpus. Initialize free cpus with all
>> online cpus at that time in order to be able to add the cpu notifier
>> late, too.
>>
> So, now that this series has been made independent, I think that
> mentions of the core-scheduling one should be dropped.
> 
> I mean, it is at least possible that this series would go in, while the
> core-scheduling one never will. And at that point, it would be very
> hard, for someone doing archaeology, to understand what went on.
> 
> It seems to me that this patch simplifies cpupool initialization (as,
> e.g., the direct call to the CPU_ONLINE notifier for the BSP was IMO
> rather convoluted). And that is made possible by moving the
> initialization itself to a later point, making all the online CPUs look
> like free CPUs, and using the standard (internal) API directly (i.e.,
> cpupool_assign_cpu_locked()) to add them to Pool-0.
> 
> So, I'd kill the very first sentence and rearrange the rest to include
> at least a quick mention of the simplification that we achieve.

Fine with me.


Juergen

Patch

diff --git a/xen/common/cpupool.c b/xen/common/cpupool.c
index f90e496eda..caea5bd8b3 100644
--- a/xen/common/cpupool.c
+++ b/xen/common/cpupool.c
@@ -762,18 +762,28 @@  static struct notifier_block cpu_nfb = {
     .notifier_call = cpu_callback
 };
 
-static int __init cpupool_presmp_init(void)
+static int __init cpupool_init(void)
 {
+    unsigned int cpu;
     int err;
-    void *cpu = (void *)(long)smp_processor_id();
+
     cpupool0 = cpupool_create(0, 0, &err);
     BUG_ON(cpupool0 == NULL);
     cpupool_put(cpupool0);
-    cpu_callback(&cpu_nfb, CPU_ONLINE, cpu);
     register_cpu_notifier(&cpu_nfb);
+
+    spin_lock(&cpupool_lock);
+
+    cpumask_copy(&cpupool_free_cpus, &cpu_online_map);
+
+    for_each_cpu ( cpu, &cpupool_free_cpus )
+        cpupool_assign_cpu_locked(cpupool0, cpu);
+
+    spin_unlock(&cpupool_lock);
+
     return 0;
 }
-presmp_initcall(cpupool_presmp_init);
+__initcall(cpupool_init);
 
 /*
  * Local variables: