From patchwork Tue Feb 11 12:27:36 2020
X-Patchwork-Submitter: Juergen Gross
X-Patchwork-Id: 11375367
From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, George Dunlap, Dario Faggioli
Date: Tue, 11 Feb 2020 13:27:36 +0100
Message-Id: <20200211122736.16714-1-jgross@suse.com>
X-Mailer: git-send-email 2.16.4
Subject: [Xen-devel] [PATCH v2] xen/sched: add some diagnostic info in the run queue keyhandler

When dumping the run queue information, add some more data regarding the
current and (if known) previous vcpu for each physical cpu.
With core scheduling activated the printed data will be e.g.:

(XEN) CPUs info:
(XEN) CPU[00] current=d[IDLE]v0, curr=d[IDLE]v0, prev=NULL
(XEN) CPU[01] current=d[IDLE]v1
(XEN) CPU[02] current=d[IDLE]v2, curr=d[IDLE]v2, prev=NULL
(XEN) CPU[03] current=d[IDLE]v3

Signed-off-by: Juergen Gross
Reviewed-by: Dario Faggioli
---
V2: add proper locking
---
 xen/common/sched/core.c | 26 ++++++++++++++++++++------
 1 file changed, 20 insertions(+), 6 deletions(-)

diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index 2e43f8029f..6fbc30e678 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -3234,7 +3234,7 @@ void scheduler_free(struct scheduler *sched)
 
 void schedule_dump(struct cpupool *c)
 {
-    unsigned int i;
+    unsigned int i, j;
     struct scheduler *sched;
     cpumask_t *cpus;
 
@@ -3245,7 +3245,7 @@ void schedule_dump(struct cpupool *c)
     if ( c != NULL )
     {
         sched = c->sched;
-        cpus = c->cpu_valid;
+        cpus = c->res_valid;
         printk("Scheduler: %s (%s)\n", sched->name, sched->opt_name);
         sched_dump_settings(sched);
     }
@@ -3255,11 +3255,25 @@ void schedule_dump(struct cpupool *c)
         cpus = &cpupool_free_cpus;
     }
 
-    if ( sched->dump_cpu_state != NULL )
+    printk("CPUs info:\n");
+    for_each_cpu (i, cpus)
     {
-        printk("CPUs info:\n");
-        for_each_cpu (i, cpus)
-            sched_dump_cpu_state(sched, i);
+        struct sched_resource *sr = get_sched_res(i);
+        unsigned long flags;
+        spinlock_t *lock;
+
+        lock = pcpu_schedule_lock_irqsave(i, &flags);
+
+        printk("CPU[%02d] current=%pv, curr=%pv, prev=%pv\n", i,
+               get_cpu_current(i), sr->curr ? sr->curr->vcpu_list : NULL,
+               sr->prev ? sr->prev->vcpu_list : NULL);
+        for_each_cpu (j, sr->cpus)
+            if ( i != j )
+                printk("CPU[%02d] current=%pv\n", j, get_cpu_current(j));
+
+        pcpu_schedule_unlock_irqrestore(lock, flags, i);
+
+        sched_dump_cpu_state(sched, i);
     }
 
     rcu_read_unlock(&sched_res_rculock);
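
For readers less familiar with core scheduling, the standalone sketch below
(plain C, not Xen code) models the dump logic added above, as produced by the
run queue keyhandler (typically triggered via the 'r' debug key): the reporting
CPU of each scheduling resource prints current, curr and prev, while its
sibling CPUs only print current, and the d<domid>v<vcpu> / d[IDLE]v<vcpu>
identifiers approximate the output of Xen's %pv printk format. All types and
helper names in the sketch (struct vcpu, struct sched_res, print_vcpu(),
dump_cpus()) are simplified stand-ins invented for illustration, not the Xen
definitions.

/*
 * Standalone illustration only -- none of the types or helpers below are
 * the real Xen definitions; they are simplified stand-ins used to model
 * the per-resource dump added by this patch.
 */
#include <stdio.h>

#define IDLE_DOMID (-1)           /* stand-in for Xen's idle domain id */

struct vcpu {
    int domid;                    /* owning domain, IDLE_DOMID for idle vcpus */
    int vcpu_id;
};

/* One scheduling resource covering a set of sibling physical CPUs. */
struct sched_res {
    int cpus[2];                  /* physical CPUs belonging to this resource */
    int ncpus;
    const struct vcpu *curr;      /* unit currently scheduled (if known)      */
    const struct vcpu *prev;      /* unit previously scheduled (if known)     */
};

/* Rough approximation of what Xen's "%pv" printk format produces. */
static void print_vcpu(const struct vcpu *v)
{
    if ( v == NULL )
        printf("NULL");
    else if ( v->domid == IDLE_DOMID )
        printf("d[IDLE]v%d", v->vcpu_id);
    else
        printf("d%dv%d", v->domid, v->vcpu_id);
}

/* Simplified model of the loop this patch adds to schedule_dump(). */
static void dump_cpus(const struct sched_res *res, int nres,
                      const struct vcpu **cpu_current)
{
    printf("CPUs info:\n");
    for ( int r = 0; r < nres; r++ )
    {
        const struct sched_res *sr = &res[r];
        int master = sr->cpus[0];

        /* In the real code this runs under pcpu_schedule_lock_irqsave(). */
        printf("CPU[%02d] current=", master);
        print_vcpu(cpu_current[master]);
        printf(", curr=");
        print_vcpu(sr->curr);
        printf(", prev=");
        print_vcpu(sr->prev);
        printf("\n");

        /* Sibling CPUs of the same resource only report their current vcpu. */
        for ( int k = 1; k < sr->ncpus; k++ )
        {
            printf("CPU[%02d] current=", sr->cpus[k]);
            print_vcpu(cpu_current[sr->cpus[k]]);
            printf("\n");
        }
    }
}

int main(void)
{
    /* Four idle vcpus, one per physical CPU, as in the example above. */
    struct vcpu idle[4] = { { IDLE_DOMID, 0 }, { IDLE_DOMID, 1 },
                            { IDLE_DOMID, 2 }, { IDLE_DOMID, 3 } };
    const struct vcpu *cpu_current[4] = { &idle[0], &idle[1],
                                          &idle[2], &idle[3] };
    /* Two scheduling resources, each spanning two sibling CPUs. */
    struct sched_res res[2] = {
        { { 0, 1 }, 2, &idle[0], NULL },
        { { 2, 3 }, 2, &idle[2], NULL },
    };

    dump_cpus(res, 2, cpu_current);
    return 0;
}

Compiled and run, the sketch reproduces the four example lines quoted at the
top of the commit message, minus the "(XEN) " console prefix.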