
sched: print information about scheduler granularity

Message ID 20200416083341.21122-1-sergey.dyasli@citrix.com (mailing list archive)
State New, archived
Series sched: print information about scheduler granularity

Commit Message

Sergey Dyasli April 16, 2020, 8:33 a.m. UTC
Currently it might not be obvious which scheduling mode is being used
by the scheduler. Alleviate this by printing additional information
about the selected granularity. Messages now look like these:

1. boot
(XEN) [00089808f0ea7496] Using scheduler: SMP Credit Scheduler (credit) in core-scheduling mode

2. xl debug-keys r
(XEN) [   45.914314] Scheduler: SMP Credit Scheduler (credit) in 2-way core-scheduling mode

Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
---
CC: Juergen Gross <jgross@suse.com>
CC: Dario Faggioli <dfaggioli@suse.com>
CC: George Dunlap <george.dunlap@citrix.com>
CC: Jan Beulich <jbeulich@suse.com>
---
 xen/common/sched/core.c    | 10 ++++++++--
 xen/common/sched/cpupool.c | 30 +++++++++++++++++++++++++++++-
 xen/common/sched/private.h |  2 ++
 3 files changed, 39 insertions(+), 3 deletions(-)

Comments

Jürgen Groß April 16, 2020, 8:57 a.m. UTC | #1
On 16.04.20 10:33, Sergey Dyasli wrote:
> Currently it might be not obvious which scheduling mode is being used
> by the scheduler. Alleviate this by printing additional information
> about the selected granularity. Messages now look like these:
> 
> 1. boot
> (XEN) [00089808f0ea7496] Using scheduler: SMP Credit Scheduler (credit) in core-scheduling mode
> 
> 2. xl debug-keys r
> (XEN) [   45.914314] Scheduler: SMP Credit Scheduler (credit) in 2-way core-scheduling mode
> 
> Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>

Hmm, do we need that?

The xen commandline is part of the boot messages and is contained
in the "xl info" output.


Juergen
Sergey Dyasli April 16, 2020, 9:20 a.m. UTC | #2
On 16/04/2020 09:57, Jürgen Groß wrote:
> On 16.04.20 10:33, Sergey Dyasli wrote:
>> Currently it might be not obvious which scheduling mode is being used
>> by the scheduler. Alleviate this by printing additional information
>> about the selected granularity. Messages now look like these:
>>
>> 1. boot
>> (XEN) [00089808f0ea7496] Using scheduler: SMP Credit Scheduler (credit) in core-scheduling mode
>>
>> 2. xl debug-keys r
>> (XEN) [   45.914314] Scheduler: SMP Credit Scheduler (credit) in 2-way core-scheduling mode
>>
>> Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
> 
> Hmm, do we need that?
> 
> The xen commandline ins part of the boot messages and is contained
> in the "xl info" output.

It's true that you can see "sched-gran=core" in "xl info" output. But that's
just the switch - not the end result. A user might want to verify that he did
everything correctly and core-scheduling mode has indeed been enabled.

--
Thanks,
Sergey
Jürgen Groß April 16, 2020, 9:25 a.m. UTC | #3
On 16.04.20 11:20, Sergey Dyasli wrote:
> On 16/04/2020 09:57, Jürgen Groß wrote:
>> On 16.04.20 10:33, Sergey Dyasli wrote:
>>> Currently it might be not obvious which scheduling mode is being used
>>> by the scheduler. Alleviate this by printing additional information
>>> about the selected granularity. Messages now look like these:
>>>
>>> 1. boot
>>> (XEN) [00089808f0ea7496] Using scheduler: SMP Credit Scheduler (credit) in core-scheduling mode
>>>
>>> 2. xl debug-keys r
>>> (XEN) [   45.914314] Scheduler: SMP Credit Scheduler (credit) in 2-way core-scheduling mode
>>>
>>> Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
>>
>> Hmm, do we need that?
>>
>> The xen commandline ins part of the boot messages and is contained
>> in the "xl info" output.
> 
> It's true that you can see "sched-gran=core" in "xl info" output. But that's
> just the switch - not the end result. A user might want to verify that he did
> everything correctly and core-scheduling mode has indeed been enabled.

I'm planning to add this information in the pending hypfs (per cpupool).

I'm not opposed to your patch, but as soon as we have per-cpupool
granularity it should be reverted again.


Juergen
Sergey Dyasli April 16, 2020, 9:38 a.m. UTC | #4
On 16/04/2020 10:25, Jürgen Groß wrote:
> On 16.04.20 11:20, Sergey Dyasli wrote:
>> On 16/04/2020 09:57, Jürgen Groß wrote:
>>> On 16.04.20 10:33, Sergey Dyasli wrote:
>>>> Currently it might be not obvious which scheduling mode is being used
>>>> by the scheduler. Alleviate this by printing additional information
>>>> about the selected granularity. Messages now look like these:
>>>>
>>>> 1. boot
>>>> (XEN) [00089808f0ea7496] Using scheduler: SMP Credit Scheduler (credit) in core-scheduling mode
>>>>
>>>> 2. xl debug-keys r
>>>> (XEN) [   45.914314] Scheduler: SMP Credit Scheduler (credit) in 2-way core-scheduling mode
>>>>
>>>> Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
>>>
>>> Hmm, do we need that?
>>>
>>> The xen commandline ins part of the boot messages and is contained
>>> in the "xl info" output.
>>
>> It's true that you can see "sched-gran=core" in "xl info" output. But that's
>> just the switch - not the end result. A user might want to verify that he did
>> everything correctly and core-scheduling mode has indeed been enabled.
> 
> I'm planning to add this information in the pending hypfs (per cpupool).

hypfs is certainly nice, but I doubt it'll be available for Xen 4.13.

> I'm not opposed to your patch, but as soon as we have per-cpupool
> granularity it should be reverted again.

"xl debug-keys r" already prints the granularity information per cpupool.
It's just opt_sched_granularity is currently a single global variable. Once
per-cpupool granularity is implemented, sched_gran_str() should simply gain
granularity as a parameter.

--
Thanks,
Sergey
Dario Faggioli April 16, 2020, 4:43 p.m. UTC | #5
On Thu, 2020-04-16 at 09:33 +0100, Sergey Dyasli wrote:
> Currently it might be not obvious which scheduling mode is being used
> by the scheduler. Alleviate this by printing additional information
> about the selected granularity.
>
I like the idea. However, I don't like how verbose and long that line
becomes.

>  Messages now look like these:
> 
> 1. boot
> (XEN) [00089808f0ea7496] Using scheduler: SMP Credit Scheduler
> (credit) in core-scheduling mode
> 
> 2. xl debug-keys r
> (XEN) [   45.914314] Scheduler: SMP Credit Scheduler (credit) in 2-
> way core-scheduling mode
> 
What about adding an entry, just below these ones. Something looking
like, for instance (both at boot and in the debug-key dump):

"Scheduling granularity: cpu"

(or "core", or "socket")

Also

> --- a/xen/common/sched/cpupool.c
> +++ b/xen/common/sched/cpupool.c
> @@ -38,7 +38,35 @@ static cpumask_t cpupool_locked_cpus;
>  static DEFINE_SPINLOCK(cpupool_lock);
>  
>  static enum sched_gran __read_mostly opt_sched_granularity =
> SCHED_GRAN_cpu;
> -static unsigned int __read_mostly sched_granularity = 1;
> +static unsigned int __read_mostly sched_granularity;
> +
> +char *sched_gran_str(char *str, size_t size)
> +{
> +    char *mode = "";
> +
> +    switch ( opt_sched_granularity )
> +    {
> +    case SCHED_GRAN_cpu:
> +        mode = "cpu";
> +        break;
> +    case SCHED_GRAN_core:
> +        mode = "core";
> +        break;
> +    case SCHED_GRAN_socket:
> +        mode = "socket";
> +        break;
> +    default:
> +        ASSERT_UNREACHABLE();
> +        break;
> +    }
> +
> +    if ( sched_granularity )
> +        snprintf(str, size, "%u-way %s", sched_granularity, mode);
>
I'm not sure about using the value of the enum like this.

E.g., in a system with 4 threads per core, enabling core scheduling
granularity would mean having 4 vCPUs in the scheduling units. But this
will still print "2-way core-scheduling", which I think would sound
confusing.

So I'd just go with "cpu", "core" and "socket" strings.

Regards
Dario Faggioli April 16, 2020, 4:47 p.m. UTC | #6
On Thu, 2020-04-16 at 11:25 +0200, Jürgen Groß wrote:
> On 16.04.20 11:20, Sergey Dyasli wrote:
> > On 16/04/2020 09:57, Jürgen Groß wrote:
> > > 
> > > The xen commandline ins part of the boot messages and is
> > > contained
> > > in the "xl info" output.
> > 
> > It's true that you can see "sched-gran=core" in "xl info" output.
> > But that's
> > just the switch - not the end result. A user might want to verify
> > that he did
> > everything correctly and core-scheduling mode has indeed been
> > enabled.
> 
> I'm not opposed to your patch, but as soon as we have per-cpupool
> granularity it should be reverted again.
> 
Why reverted? Each cpupool dumps its own scheduling information. With
per-pool granularity, we'll see something like

cpupool: Pool-A
Scheduler: SMP Credit Scheduler (credit)
Scheduling granularity: cpu
...
cpupool: Pool-B
Scheduler: SMP Credit Scheduler (credit)
Scheduling granularity: core

etc.

Or am I missing something?

Regards
Jürgen Groß April 17, 2020, 7:54 a.m. UTC | #7
On 16.04.20 18:47, Dario Faggioli wrote:
> On Thu, 2020-04-16 at 11:25 +0200, Jürgen Groß wrote:
>> On 16.04.20 11:20, Sergey Dyasli wrote:
>>> On 16/04/2020 09:57, Jürgen Groß wrote:
>>>>
>>>> The xen commandline ins part of the boot messages and is
>>>> contained
>>>> in the "xl info" output.
>>>
>>> It's true that you can see "sched-gran=core" in "xl info" output.
>>> But that's
>>> just the switch - not the end result. A user might want to verify
>>> that he did
>>> everything correctly and core-scheduling mode has indeed been
>>> enabled.
>>
>> I'm not opposed to your patch, but as soon as we have per-cpupool
>> granularity it should be reverted again.
>>
> Why reverted? Each cpupool dumps its own scheduling information. With
> per-pool granularity, we'll see something like
> 
> cpupool: Pool-A
> Scheduler: SMP Credit Scheduler (credit)
> Scheduling granularity: cpu
> ...
> cpupool: Pool-B
> Scheduler: SMP Credit Scheduler (credit)
> Scheduling granularity: core
> 
> etc.
> 
> Or am I missing something?

"Reworking" might have been a better wording.

The patch is looking only at the global variable opt_sched_granularity
for deciding which granularity to print out.


Juergen
Jürgen Groß April 17, 2020, 7:57 a.m. UTC | #8
On 16.04.20 18:43, Dario Faggioli wrote:
> On Thu, 2020-04-16 at 09:33 +0100, Sergey Dyasli wrote:
>> Currently it might be not obvious which scheduling mode is being used
>> by the scheduler. Alleviate this by printing additional information
>> about the selected granularity.
>>
> I like the idea. However, I don't like how verbose and long that line
> becomes.
> 
>>   Messages now look like these:
>>
>> 1. boot
>> (XEN) [00089808f0ea7496] Using scheduler: SMP Credit Scheduler
>> (credit) in core-scheduling mode
>>
>> 2. xl debug-keys r
>> (XEN) [   45.914314] Scheduler: SMP Credit Scheduler (credit) in 2-
>> way core-scheduling mode
>>
> What about adding an entry, just below these ones. Something looking
> like, for instance (both at boot and in the debug-key dump):
> 
> "Scheduling granularity: cpu"
> 
> (or "core", or "socket")
> 
> Also
> 
>> --- a/xen/common/sched/cpupool.c
>> +++ b/xen/common/sched/cpupool.c
>> @@ -38,7 +38,35 @@ static cpumask_t cpupool_locked_cpus;
>>   static DEFINE_SPINLOCK(cpupool_lock);
>>   
>>   static enum sched_gran __read_mostly opt_sched_granularity =
>> SCHED_GRAN_cpu;
>> -static unsigned int __read_mostly sched_granularity = 1;
>> +static unsigned int __read_mostly sched_granularity;
>> +
>> +char *sched_gran_str(char *str, size_t size)
>> +{
>> +    char *mode = "";
>> +
>> +    switch ( opt_sched_granularity )
>> +    {
>> +    case SCHED_GRAN_cpu:
>> +        mode = "cpu";
>> +        break;
>> +    case SCHED_GRAN_core:
>> +        mode = "core";
>> +        break;
>> +    case SCHED_GRAN_socket:
>> +        mode = "socket";
>> +        break;
>> +    default:
>> +        ASSERT_UNREACHABLE();
>> +        break;
>> +    }
>> +
>> +    if ( sched_granularity )
>> +        snprintf(str, size, "%u-way %s", sched_granularity, mode);
>>
> I'm not sure about using the value of the enum like this.

enum? sched_granularity holds the number of cpus per scheduling
resource. opt_sched_granularity is the enum.

> 
> E.g., in a system with 4 threads per core, enabling core scheduling
> granularity would mean having 4 vCPUs in the scheduling units. But this
> will still print "2-way core-scheduling", which I think would sound
> confusing.

It would print "4-way", of course.

> 
> So I'd just go with "cpu", "core" and "socket" strings.

No, this is not a good idea. With e.g. smt=0 you'll be able to have
"1-way core" which is much more informative than "core".


Juergen
Sergey Dyasli April 17, 2020, 1:43 p.m. UTC | #9
On 17/04/2020 08:57, Jürgen Groß wrote:
> On 16.04.20 18:43, Dario Faggioli wrote:
>> On Thu, 2020-04-16 at 09:33 +0100, Sergey Dyasli wrote:
>>> Currently it might be not obvious which scheduling mode is being used
>>> by the scheduler. Alleviate this by printing additional information
>>> about the selected granularity.
>>>
>> I like the idea. However, I don't like how verbose and long that line
>> becomes.
>>
>>>   Messages now look like these:
>>>
>>> 1. boot
>>> (XEN) [00089808f0ea7496] Using scheduler: SMP Credit Scheduler
>>> (credit) in core-scheduling mode
>>>
>>> 2. xl debug-keys r
>>> (XEN) [   45.914314] Scheduler: SMP Credit Scheduler (credit) in 2-
>>> way core-scheduling mode
>>>
>> What about adding an entry, just below these ones. Something looking
>> like, for instance (both at boot and in the debug-key dump):
>>
>> "Scheduling granularity: cpu"
>>
>> (or "core", or "socket")

I agree that the line becomes too long. I'll print the new information
on a separate line as you suggest in v2.

>>
>> Also
>>
>>> --- a/xen/common/sched/cpupool.c
>>> +++ b/xen/common/sched/cpupool.c
>>> @@ -38,7 +38,35 @@ static cpumask_t cpupool_locked_cpus;
>>>   static DEFINE_SPINLOCK(cpupool_lock);
>>>   static enum sched_gran __read_mostly opt_sched_granularity =
>>> SCHED_GRAN_cpu;
>>> -static unsigned int __read_mostly sched_granularity = 1;
>>> +static unsigned int __read_mostly sched_granularity;
>>> +
>>> +char *sched_gran_str(char *str, size_t size)
>>> +{
>>> +    char *mode = "";
>>> +
>>> +    switch ( opt_sched_granularity )
>>> +    {
>>> +    case SCHED_GRAN_cpu:
>>> +        mode = "cpu";
>>> +        break;
>>> +    case SCHED_GRAN_core:
>>> +        mode = "core";
>>> +        break;
>>> +    case SCHED_GRAN_socket:
>>> +        mode = "socket";
>>> +        break;
>>> +    default:
>>> +        ASSERT_UNREACHABLE();
>>> +        break;
>>> +    }
>>> +
>>> +    if ( sched_granularity )
>>> +        snprintf(str, size, "%u-way %s", sched_granularity, mode);
>>>
>> I'm not sure about using the value of the enum like this.
> 
> enum? sched_granularity holds the number of cpus per scheduling
> resource. opt_sched_granularity is the enum.
> 
>>
>> E.g., in a system with 4 threads per core, enabling core scheduling
>> granularity would mean having 4 vCPUs in the scheduling units. But this
>> will still print "2-way core-scheduling", which I think would sound
>> confusing.
> 
> It would print "4-way", of course.
> 
>>
>> So I'd just go with "cpu", "core" and "socket" strings.
> 
> No, this is not a good idea. With e.g. smt=0 you'll be able to have
> "1-way core" which is much more informative than "core".

Can confirm the above. "sched-gran=core" on a Knights Mill produces:
(XEN) [  232.018648] Scheduler: SMP Credit Scheduler (credit) in 4-way core-scheduling mode

While "sched-gran=core smt=0" gives:
(XEN) [  259.337588] Scheduler: SMP Credit Scheduler (credit) in 1-way core-scheduling mode

--
Thanks,
Sergey
Jürgen Groß April 17, 2020, 1:52 p.m. UTC | #10
On 17.04.20 15:43, Sergey Dyasli wrote:
> On 17/04/2020 08:57, Jürgen Groß wrote:
>> On 16.04.20 18:43, Dario Faggioli wrote:
>>> On Thu, 2020-04-16 at 09:33 +0100, Sergey Dyasli wrote:
>>>> Currently it might be not obvious which scheduling mode is being used
>>>> by the scheduler. Alleviate this by printing additional information
>>>> about the selected granularity.
>>>>
>>> I like the idea. However, I don't like how verbose and long that line
>>> becomes.
>>>
>>>>    Messages now look like these:
>>>>
>>>> 1. boot
>>>> (XEN) [00089808f0ea7496] Using scheduler: SMP Credit Scheduler
>>>> (credit) in core-scheduling mode
>>>>
>>>> 2. xl debug-keys r
>>>> (XEN) [   45.914314] Scheduler: SMP Credit Scheduler (credit) in 2-
>>>> way core-scheduling mode
>>>>
>>> What about adding an entry, just below these ones. Something looking
>>> like, for instance (both at boot and in the debug-key dump):
>>>
>>> "Scheduling granularity: cpu"
>>>
>>> (or "core", or "socket")
> 
> I agree that the line becomes too long. I'll print the new information
> on a separate line as you suggest in v2.
> 
>>>
>>> Also
>>>
>>>> --- a/xen/common/sched/cpupool.c
>>>> +++ b/xen/common/sched/cpupool.c
>>>> @@ -38,7 +38,35 @@ static cpumask_t cpupool_locked_cpus;
>>>>    static DEFINE_SPINLOCK(cpupool_lock);
>>>>    static enum sched_gran __read_mostly opt_sched_granularity =
>>>> SCHED_GRAN_cpu;
>>>> -static unsigned int __read_mostly sched_granularity = 1;
>>>> +static unsigned int __read_mostly sched_granularity;
>>>> +
>>>> +char *sched_gran_str(char *str, size_t size)
>>>> +{
>>>> +    char *mode = "";
>>>> +
>>>> +    switch ( opt_sched_granularity )
>>>> +    {
>>>> +    case SCHED_GRAN_cpu:
>>>> +        mode = "cpu";
>>>> +        break;
>>>> +    case SCHED_GRAN_core:
>>>> +        mode = "core";
>>>> +        break;
>>>> +    case SCHED_GRAN_socket:
>>>> +        mode = "socket";
>>>> +        break;
>>>> +    default:
>>>> +        ASSERT_UNREACHABLE();
>>>> +        break;
>>>> +    }
>>>> +
>>>> +    if ( sched_granularity )
>>>> +        snprintf(str, size, "%u-way %s", sched_granularity, mode);
>>>>
>>> I'm not sure about using the value of the enum like this.
>>
>> enum? sched_granularity holds the number of cpus per scheduling
>> resource. opt_sched_granularity is the enum.
>>
>>>
>>> E.g., in a system with 4 threads per core, enabling core scheduling
>>> granularity would mean having 4 vCPUs in the scheduling units. But this
>>> will still print "2-way core-scheduling", which I think would sound
>>> confusing.
>>
>> It would print "4-way", of course.
>>
>>>
>>> So I'd just go with "cpu", "core" and "socket" strings.
>>
>> No, this is not a good idea. With e.g. smt=0 you'll be able to have
>> "1-way core" which is much more informative than "core".
> 
> Can confirm the above. "sched-gran=core" on a Knights Mill produces:
> (XEN) [  232.018648] Scheduler: SMP Credit Scheduler (credit) in 4-way core-scheduling mode
> 
> While "sched-gran=core smt=0" gives:
> (XEN) [  259.337588] Scheduler: SMP Credit Scheduler (credit) in 1-way core-scheduling mode

You might want to consider not using the global variables
[opt_]sched_granularity, but the per-cpupool ones. Right now they have
the same value, but this might change in future...


Juergen
Dario Faggioli April 17, 2020, 2:36 p.m. UTC | #11
On Fri, 2020-04-17 at 09:57 +0200, Jürgen Groß wrote:
> On 16.04.20 18:43, Dario Faggioli wrote:
> > On Thu, 2020-04-16 at 09:33 +0100, Sergey Dyasli wrote:
> > > 
> > > +char *sched_gran_str(char *str, size_t size)
> > > +{
> > > +    char *mode = "";
> > > +
> > > +    switch ( opt_sched_granularity )
> > > +    {
> > > +    case SCHED_GRAN_cpu:
> > > +        mode = "cpu";
> > > +        break;
> > > +    case SCHED_GRAN_core:
> > > +        mode = "core";
> > > +        break;
> > > +    case SCHED_GRAN_socket:
> > > +        mode = "socket";
> > > +        break;
> > > +    default:
> > > +        ASSERT_UNREACHABLE();
> > > +        break;
> > > +    }
> > > +
> > > +    if ( sched_granularity )
> > > +        snprintf(str, size, "%u-way %s", sched_granularity,
> > > mode);
> > > 
> > I'm not sure about using the value of the enum like this.
> 
> enum? sched_granularity holds the number of cpus per scheduling
> resource. opt_sched_granularity is the enum.
> 
Ah, indeed! I failed to see that the if and the snprintf above use
that, and not opt_sched_granularity! Sorry for the confusion.

> > So I'd just go with "cpu", "core" and "socket" strings.
> 
> No, this is not a good idea. With e.g. smt=0 you'll be able to have
> "1-way core" which is much more informative than "core".
> 
True. And thinking to this cases, I agree that it makes sense to
provide an information that takes that into account too.

I'm now not sure whether something like "1-way core-scheduling" (or
"1-way core-scheduling mode") is really the best way to tell the user
we're using core-scheduling but have disabled SMT... Perhaps
something like:

Scheduling granularity: core, 1CPU(s) per sched-resource

Or something like that?

Regards
Dario Faggioli April 17, 2020, 2:37 p.m. UTC | #12
On Fri, 2020-04-17 at 15:52 +0200, Jürgen Groß wrote:
> On 17.04.20 15:43, Sergey Dyasli wrote:
> > While "sched-gran=core smt=0" gives:
> > (XEN) [  259.337588] Scheduler: SMP Credit Scheduler (credit) in 1-
> > way core-scheduling mode
> 
> You might want to consider not using the global variables
> [opt_]sched_granularity, but the per-cpupool ones. Right now they
> have
> the same value, but this might change in future...
> 
Yes, agreed.

Regards

Patch

diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index d4a6489929..b1b09a159b 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -2883,6 +2883,7 @@  void scheduler_enable(void)
 void __init scheduler_init(void)
 {
     struct domain *idle_domain;
+    char sched_gran[20];
     int i;
 
     scheduler_enable();
@@ -2937,7 +2938,9 @@  void __init scheduler_init(void)
         BUG();
     register_cpu_notifier(&cpu_schedule_nfb);
 
-    printk("Using scheduler: %s (%s)\n", ops.name, ops.opt_name);
+    printk("Using scheduler: %s (%s) in %s-scheduling mode\n",
+           ops.name, ops.opt_name,
+           sched_gran_str(sched_gran, sizeof(sched_gran)));
     if ( sched_init(&ops) )
         panic("scheduler returned error on init\n");
 
@@ -3267,6 +3270,7 @@  void schedule_dump(struct cpupool *c)
     unsigned int      i, j;
     struct scheduler *sched;
     cpumask_t        *cpus;
+    char              sched_gran[20];
 
     /* Locking, if necessary, must be handled withing each scheduler */
 
@@ -3276,7 +3280,9 @@  void schedule_dump(struct cpupool *c)
     {
         sched = c->sched;
         cpus = c->res_valid;
-        printk("Scheduler: %s (%s)\n", sched->name, sched->opt_name);
+        printk("Scheduler: %s (%s) in %s-scheduling mode\n",
+               sched->name, sched->opt_name,
+               sched_gran_str(sched_gran, sizeof(sched_gran)));
         sched_dump_settings(sched);
     }
     else
diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index d40345b585..a37b97f4c2 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -38,7 +38,35 @@  static cpumask_t cpupool_locked_cpus;
 static DEFINE_SPINLOCK(cpupool_lock);
 
 static enum sched_gran __read_mostly opt_sched_granularity = SCHED_GRAN_cpu;
-static unsigned int __read_mostly sched_granularity = 1;
+static unsigned int __read_mostly sched_granularity;
+
+char *sched_gran_str(char *str, size_t size)
+{
+    char *mode = "";
+
+    switch ( opt_sched_granularity )
+    {
+    case SCHED_GRAN_cpu:
+        mode = "cpu";
+        break;
+    case SCHED_GRAN_core:
+        mode = "core";
+        break;
+    case SCHED_GRAN_socket:
+        mode = "socket";
+        break;
+    default:
+        ASSERT_UNREACHABLE();
+        break;
+    }
+
+    if ( sched_granularity )
+        snprintf(str, size, "%u-way %s", sched_granularity, mode);
+    else
+        snprintf(str, size, "%s", mode);
+
+    return str;
+}
 
 #ifdef CONFIG_HAS_SCHED_GRANULARITY
 static int __init sched_select_granularity(const char *str)
diff --git a/xen/common/sched/private.h b/xen/common/sched/private.h
index 367811a12f..fd49f545cb 100644
--- a/xen/common/sched/private.h
+++ b/xen/common/sched/private.h
@@ -30,6 +30,8 @@  enum sched_gran {
     SCHED_GRAN_socket
 };
 
+char *sched_gran_str(char *str, size_t size);
+
 /*
  * In order to allow a scheduler to remap the lock->cpu mapping,
  * we have a per-cpu pointer, along with a pre-allocated set of