xen/sched: add some diagnostic info in the run queue keyhandler

Message ID 20200207072405.2236-1-jgross@suse.com (mailing list archive)
State Superseded
Series xen/sched: add some diagnostic info in the run queue keyhandler

Commit Message

Jürgen Groß Feb. 7, 2020, 7:24 a.m. UTC
When dumping the run queue information, add some more data regarding
the current and (if known) previous vcpu for each physical cpu.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/common/sched/core.c | 19 +++++++++++++------
 1 file changed, 13 insertions(+), 6 deletions(-)

Comments

Dario Faggioli Feb. 7, 2020, 11:09 a.m. UTC | #1
On Fri, 2020-02-07 at 08:24 +0100, Juergen Gross wrote:
> When dumping the run queue information, add some more data regarding
> the current and (if known) previous vcpu for each physical cpu.
>
Looks good to me.

Can we have, here in the changelog, a sample of what the new output
looks like?

Regards
Jürgen Groß Feb. 7, 2020, 12:59 p.m. UTC | #2
On 07.02.20 12:09, Dario Faggioli wrote:
> On Fri, 2020-02-07 at 08:24 +0100, Juergen Gross wrote:
>> When dumping the run queue information, add some more data regarding
>> the current and (if known) previous vcpu for each physical cpu.
>>
> Looks good to me.
> 
> Can we have, here in the changelog, a sample of what the new output
> looks like?

Sure. And I'll even add the proper locking before accessing the
percpu scheduling data.


Juergen
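
As an illustrative sketch of what the requested sample might look like
(derived from the printk() formats in the patch below; %pv renders a
vcpu as d<domid>v<vcpuid>, and the domain/vcpu numbers here are
invented for illustration):

    CPUs info:
    CPU[00] current=d0v0, curr=d0v0, prev=d0v0
    CPU[01] current=d0v1, curr=d0v1, prev=d0v1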
Patch

diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index d4e8944e0e..103d94bd02 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -3235,7 +3235,7 @@ void scheduler_free(struct scheduler *sched)
 
 void schedule_dump(struct cpupool *c)
 {
-    unsigned int      i;
+    unsigned int      i, j;
     struct scheduler *sched;
     cpumask_t        *cpus;
 
@@ -3246,7 +3246,7 @@ void schedule_dump(struct cpupool *c)
     if ( c != NULL )
     {
         sched = c->sched;
-        cpus = c->cpu_valid;
+        cpus = c->res_valid;
         printk("Scheduler: %s (%s)\n", sched->name, sched->opt_name);
         sched_dump_settings(sched);
     }
@@ -3256,11 +3256,18 @@ void schedule_dump(struct cpupool *c)
         cpus = &cpupool_free_cpus;
     }
 
-    if ( sched->dump_cpu_state != NULL )
+    printk("CPUs info:\n");
+    for_each_cpu (i, cpus)
     {
-        printk("CPUs info:\n");
-        for_each_cpu (i, cpus)
-            sched_dump_cpu_state(sched, i);
+        struct sched_resource *sr = get_sched_res(i);
+
+        printk("CPU[%02d] current=%pv, curr=%pv, prev=%pv\n", i,
+               get_cpu_current(i), sr->curr ? sr->curr->vcpu_list : NULL,
+               sr->prev ? sr->prev->vcpu_list : NULL);
+        for_each_cpu (j, sr->cpus)
+            if ( i != j )
+                printk("CPU[%02d] current=%pv\n", j, get_cpu_current(j));
+        sched_dump_cpu_state(sched, i);
     }
 
     rcu_read_unlock(&sched_res_rculock);
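
As a hedged sketch of the extra locking Juergen mentions above, the
per-cpu scheduler lock could be taken around the reads of sr->curr and
sr->prev. This assumes a pcpu_schedule_lock_irqsave() /
pcpu_schedule_unlock_irqrestore() helper pair as generated by the
sched_lock()/sched_unlock() macros in xen/common/sched/private.h; the
actual follow-up version of the patch may arrange this differently:

    for_each_cpu (i, cpus)
    {
        struct sched_resource *sr = get_sched_res(i);
        unsigned long flags;
        spinlock_t *lock;

        /* Serialize against the scheduler before reading sr->curr/sr->prev. */
        lock = pcpu_schedule_lock_irqsave(i, &flags);

        printk("CPU[%02d] current=%pv, curr=%pv, prev=%pv\n", i,
               get_cpu_current(i), sr->curr ? sr->curr->vcpu_list : NULL,
               sr->prev ? sr->prev->vcpu_list : NULL);

        pcpu_schedule_unlock_irqrestore(lock, flags, i);

        for_each_cpu (j, sr->cpus)
            if ( i != j )
                printk("CPU[%02d] current=%pv\n", j, get_cpu_current(j));

        sched_dump_cpu_state(sched, i);
    }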