
xen/sched: fix a potential issue with core scheduling

Message ID: 20191108065739.21345-1-jgross@suse.com
State: New, archived
Series: xen/sched: fix a potential issue with core scheduling

Commit Message

Jürgen Groß Nov. 8, 2019, 6:57 a.m. UTC
cpupool_online_cpumask() is used by the credit and rt schedulers. It
returns all the cpus of a cpupool, or all online cpus in case no
cpupool is specified.

The "no cpupool" case can be dropped, as no scheduler other than the
init scheduler will ever work on cpus not associated with any cpupool.

As the individual schedulers should only ever work on scheduling
resources instead of individual cpus, their cpupool_online_cpumask()
use should be replaced by cpupool->res_valid.

Note that this might result in problems only with core scheduling
active, as with cpu scheduling both masks are identical.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/common/sched_credit.c  | 3 +--
 xen/common/sched_rt.c      | 4 ++--
 xen/include/xen/sched-if.h | 3 ---
 3 files changed, 3 insertions(+), 7 deletions(-)
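
To make the cpu/resource distinction concrete, here is a minimal sketch
(illustration only, not part of the patch; process_resource() is a
hypothetical helper):

    /* Illustration only, not part of the patch. */
    static void visit_pool_resources(const struct cpupool *c)
    {
        unsigned int cpu;

        /*
         * With core scheduling active, res_valid contains one
         * representative cpu per scheduling resource, while cpu_valid
         * contains all cpus of the pool; with cpu scheduling the two
         * masks are identical.
         */
        for_each_cpu ( cpu, c->res_valid )
            /* process_resource() is a hypothetical helper. */
            process_resource(get_sched_res(cpu));
    }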

Comments

Dario Faggioli Nov. 12, 2019, 9:14 a.m. UTC | #1
On Fri, 2019-11-08 at 07:57 +0100, Juergen Gross wrote:
> cpupool_online_cpumask() is used by the credit and rt schedulers. It
> returns all the cpus of a cpupool, or all online cpus in case no
> cpupool is specified.
> 
> The "no cpupool" case can be dropped, as no scheduler other than the
> init scheduler will ever work on cpus not associated with any
> cpupool.
> 
Yes, this is a cool thing about having the init cpupool/idle scheduler
in place. It's even cooler in Credit2, where it will allow us to drop
some of the cpumask_and() and cpumask_or() operations.

It's the reason why, even before core scheduling, I was considering
doing an idle scheduler myself.

I'll get to write that patch (the one for Credit2, I mean) at some
point. :-)
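
To illustrate the kind of cpumask_and()/cpumask_or() operation meant
here (hypothetical code, not actual Credit2 source; rqd->active stands
in for a runqueue's mask of active cpus):

    cpumask_t mask;

    /* Before: intersect with the pool mask to filter out foreign cpus. */
    cpumask_and(&mask, &rqd->active, cpupool_online_cpumask(c));

    /* After: every cpu the scheduler ever sees belongs to its pool. */
    cpumask_copy(&mask, &rqd->active);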

> As the individual schedulers should only ever work on scheduling
> resources instead of individual cpus, their cpupool_online_cpumask()
> use should be replaced by cpupool->res_valid.
> 
> Note that this might result in problems only with core scheduling
> active, as with cpu scheduling both masks are identical.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
>
Reviewed-by: Dario Faggioli <dfaggioli@suse.com>

Regards
Jürgen Groß Nov. 12, 2019, 9:28 a.m. UTC | #2
On 12.11.19 10:14, Dario Faggioli wrote:
> On Fri, 2019-11-08 at 07:57 +0100, Juergen Gross wrote:
>> cpupool_online_cpumask() is used by the credit and rt schedulers. It
>> returns all the cpus of a cpupool, or all online cpus in case no
>> cpupool is specified.
>>
>> The "no cpupool" case can be dropped, as no scheduler other than the
>> init scheduler will ever work on cpus not associated with any
>> cpupool.
>>
> Yes, this is a cool thing about having the init cpupool/idle scheduler
> in place. It's even cooler in Credit2, where it will allow us to drop
> some of the cpumask_and() and cpumask_or() operations.
> 
> It's the reason why, even before core scheduling, I was considering
> doing an idle scheduler myself.
> 
> I'll get to write that patch (the one for Credit2, I mean) at some
> point. :-)
> 
>> As the individual schedulers should only ever work on scheduling
>> resources instead of individual cpus, their cpupool_online_cpumask()
>> use should be replaced by cpupool->res_valid.
>>
>> Note that this might result in problems only with core scheduling
>> active, as with cpu scheduling both masks are identical.
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>
> Reviewed-by: Dario Faggioli <dfaggioli@suse.com>

And with my release manager hat on:

Release-acked-by: Juergen Gross <jgross@suse.com>


Juergen

Patch

diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
index fbffcf3996..645cdc5e9a 100644
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -1684,12 +1684,11 @@ csched_load_balance(struct csched_private *prv, int cpu,
     struct cpupool *c = get_sched_res(cpu)->cpupool;
     struct csched_unit *speer;
     cpumask_t workers;
-    cpumask_t *online;
+    cpumask_t *online = c->res_valid;
     int peer_cpu, first_cpu, peer_node, bstep;
     int node = cpu_to_node(cpu);
 
     BUG_ON(get_sched_res(cpu) != snext->unit->res);
-    online = cpupool_online_cpumask(c);
 
     /*
      * If this CPU is going offline, or is not (yet) part of any cpupool
diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
index 6e93e50acb..b2b29481f3 100644
--- a/xen/common/sched_rt.c
+++ b/xen/common/sched_rt.c
@@ -774,8 +774,8 @@ rt_deinit_pdata(const struct scheduler *ops, void *pcpu, int cpu)
 
     if ( prv->repl_timer.cpu == cpu )
     {
-        struct cpupool *c = get_sched_res(cpu)->cpupool;
-        unsigned int new_cpu = cpumask_cycle(cpu, cpupool_online_cpumask(c));
+        cpumask_t *online = get_sched_res(cpu)->cpupool->res_valid;
+        unsigned int new_cpu = cpumask_cycle(cpu, online);
 
         /*
          * Make sure the timer run on one of the cpus that are still available
diff --git a/xen/include/xen/sched-if.h b/xen/include/xen/sched-if.h
index 29715652bc..b0ac54e63d 100644
--- a/xen/include/xen/sched-if.h
+++ b/xen/include/xen/sched-if.h
@@ -545,9 +545,6 @@ struct cpupool
     enum sched_gran  gran;
 };
 
-#define cpupool_online_cpumask(_pool) \
-    (((_pool) == NULL) ? &cpu_online_map : (_pool)->cpu_valid)
-
 static inline cpumask_t *cpupool_domain_master_cpumask(const struct domain *d)
 {
     /*
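
As a side note on the sched_rt.c hunk above (sketch only, mirroring the
patched code): cpumask_cycle() returns the next cpu set in the mask
strictly after the given one, wrapping around, so the replenishment
timer moves to another still-valid scheduling resource of the same pool.

    /* Sketch of the pattern used in the sched_rt.c hunk above. */
    static unsigned int pick_new_timer_cpu(unsigned int cpu)
    {
        /* res_valid: one representative cpu per scheduling resource. */
        cpumask_t *online = get_sched_res(cpu)->cpupool->res_valid;

        /*
         * E.g. for mask {0, 2, 5}: cpumask_cycle(0, .) == 2,
         * cpumask_cycle(2, .) == 5, cpumask_cycle(5, .) == 0.
         */
        return cpumask_cycle(cpu, online);
    }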