From patchwork Thu Feb 2 17:48:04 2017
X-Patchwork-Submitter: Dario Faggioli
X-Patchwork-Id: 9552803
From: Dario Faggioli
To: xen-devel@lists.xenproject.org
Date: Thu, 02 Feb 2017 18:48:04 +0100
Message-ID: <148605768478.25703.7903523788166062864.stgit@Solace.fritz.box>
User-Agent: StGit/0.17.1-dirty
Cc: George Dunlap, Anshul Makkar, Meng Xu
Subject: [Xen-devel] [PATCH] xen: sched: harmonize debug dump output among schedulers.
List-Id: Xen developer discussion

Information we currently print for idle pCPUs is rather useless. Credit2
already stopped showing that; do the same for Credit and RTDS.

Also, define a new CPU status dump hook, which is not defined by those
schedulers which already dump such info in other ways (e.g., Credit2,
which does that while dumping runqueue information).
This also means that, still in Credit2, we can keep the runqueue and
pCPU info closer together.

Signed-off-by: Dario Faggioli
Acked-by: Meng Xu
---
Cc: George Dunlap
Cc: Anshul Makkar
---
This is basically the rebase of "xen: sched: improve debug dump output.",
on top of "xen: credit2: improve debug dump output." (i.e., commit
3af86727b8204).

Sorry again, George, for the mess... I was sure I hadn't sent the first
one out yet, when I sent what turned out to be the second one (and, even
worse, slightly reworked! :-( ).

I'm keeping Meng's ack, as I did not touch the RTDS part, wrt the patch
he sent it against.
---
 xen/common/sched_credit.c  |    6 +++---
 xen/common/sched_credit2.c |   34 +++++++++++-----------------------
 xen/common/sched_rt.c      |    9 ++++++++-
 xen/common/schedule.c      |    8 ++++----
 4 files changed, 26 insertions(+), 31 deletions(-)

diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
index ad20819..7c0ff47 100644
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -1988,13 +1988,13 @@ csched_dump_pcpu(const struct scheduler *ops, int cpu)
     runq = &spc->runq;
 
     cpumask_scnprintf(cpustr, sizeof(cpustr), per_cpu(cpu_sibling_mask, cpu));
-    printk(" sort=%d, sibling=%s, ", spc->runq_sort_last, cpustr);
+    printk("CPU[%02d] sort=%d, sibling=%s, ", cpu, spc->runq_sort_last, cpustr);
     cpumask_scnprintf(cpustr, sizeof(cpustr), per_cpu(cpu_core_mask, cpu));
     printk("core=%s\n", cpustr);
 
-    /* current VCPU */
+    /* current VCPU (nothing to say if that's the idle vcpu). */
     svc = CSCHED_VCPU(curr_on_cpu(cpu));
-    if ( svc )
+    if ( svc && !is_idle_vcpu(svc->vcpu) )
     {
         printk("\trun: ");
         csched_dump_vcpu(svc);
diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
index 93c6d32..9f5a190 100644
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -2627,28 +2627,15 @@ csched2_dump_vcpu(struct csched2_private *prv, struct csched2_vcpu *svc)
     printk("\n");
 }
 
-static void
-csched2_dump_pcpu(const struct scheduler *ops, int cpu)
+static inline void
+dump_pcpu(const struct scheduler *ops, int cpu)
 {
     struct csched2_private *prv = CSCHED2_PRIV(ops);
     struct csched2_vcpu *svc;
-    unsigned long flags;
-    spinlock_t *lock;
 #define cpustr keyhandler_scratch
 
-    /*
-     * We need both locks:
-     * - we print current, so we need the runqueue lock for this
-     *   cpu (the one of the runqueue this cpu is associated to);
-     * - csched2_dump_vcpu() wants to access domains' weights,
-     *   which are protected by the private scheduler lock.
-     */
-    read_lock_irqsave(&prv->lock, flags);
-    lock = per_cpu(schedule_data, cpu).schedule_lock;
-    spin_lock(lock);
-
     cpumask_scnprintf(cpustr, sizeof(cpustr), per_cpu(cpu_sibling_mask, cpu));
-    printk(" runq=%d, sibling=%s, ", c2r(ops, cpu), cpustr);
+    printk("CPU[%02d] runq=%d, sibling=%s, ", cpu, c2r(ops, cpu), cpustr);
     cpumask_scnprintf(cpustr, sizeof(cpustr), per_cpu(cpu_core_mask, cpu));
     printk("core=%s\n", cpustr);
@@ -2659,9 +2646,6 @@ csched2_dump_pcpu(const struct scheduler *ops, int cpu)
         printk("\trun: ");
         csched2_dump_vcpu(prv, svc);
     }
-
-    spin_unlock(lock);
-    read_unlock_irqrestore(&prv->lock, flags);
 #undef cpustr
 }
 
@@ -2671,7 +2655,7 @@ csched2_dump(const struct scheduler *ops)
     struct list_head *iter_sdom;
     struct csched2_private *prv = CSCHED2_PRIV(ops);
     unsigned long flags;
-    int i, loop;
+    unsigned int i, j, loop;
 #define cpustr keyhandler_scratch
 
     /*
@@ -2741,7 +2725,6 @@ csched2_dump(const struct scheduler *ops)
         }
     }
 
-    printk("Runqueue info:\n");
     for_each_cpu(i, &prv->active_queues)
     {
         struct csched2_runqueue_data *rqd = prv->rqd + i;
@@ -2750,7 +2733,13 @@ csched2_dump(const struct scheduler *ops)
         /* We need the lock to scan the runqueue. */
         spin_lock(&rqd->lock);
-        printk("runqueue %d:\n", i);
+
+        printk("Runqueue %d:\n", i);
+
+        for_each_cpu(j, &rqd->active)
+            dump_pcpu(ops, j);
+
+        printk("RUNQ:\n");
         list_for_each( iter, runq )
         {
             struct csched2_vcpu *svc = __runq_elem(iter);
@@ -3108,7 +3097,6 @@ static const struct scheduler sched_credit2_def = {
     .do_schedule    = csched2_schedule,
     .context_saved  = csched2_context_saved,
 
-    .dump_cpu_state = csched2_dump_pcpu,
     .dump_settings  = csched2_dump,
     .init           = csched2_init,
     .deinit         = csched2_deinit,
diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
index 24b4b22..f2d979c 100644
--- a/xen/common/sched_rt.c
+++ b/xen/common/sched_rt.c
@@ -320,10 +320,17 @@ static void
 rt_dump_pcpu(const struct scheduler *ops, int cpu)
 {
     struct rt_private *prv = rt_priv(ops);
+    struct rt_vcpu *svc;
     unsigned long flags;
 
     spin_lock_irqsave(&prv->lock, flags);
-    rt_dump_vcpu(ops, rt_vcpu(curr_on_cpu(cpu)));
+    printk("CPU[%02d]\n", cpu);
+    /* current VCPU (nothing to say if that's the idle vcpu). */
+    svc = rt_vcpu(curr_on_cpu(cpu));
+    if ( svc && !is_idle_vcpu(svc->vcpu) )
+    {
+        rt_dump_vcpu(ops, svc);
+    }
     spin_unlock_irqrestore(&prv->lock, flags);
 }
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index ed77990..e4320f3 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -1844,11 +1844,11 @@ void schedule_dump(struct cpupool *c)
         cpus = &cpupool_free_cpus;
     }
 
-    printk("CPUs info:\n");
-    for_each_cpu (i, cpus)
+    if ( sched->dump_cpu_state != NULL )
     {
-        printk("CPU[%02d] ", i);
-        SCHED_OP(sched, dump_cpu_state, i);
+        printk("CPUs info:\n");
+        for_each_cpu (i, cpus)
+            SCHED_OP(sched, dump_cpu_state, i);
     }
 }
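The schedule.c hunk above boils down to an "optional hook" dispatch pattern: a scheduler may leave `dump_cpu_state` NULL, and the generic dump code must then skip the whole "CPUs info:" section, header included. A minimal, self-contained sketch of that pattern follows; `struct sched_sketch`, `dump_cpus_sketch()`, and `credit_like_dump()` are hypothetical stand-ins for Xen's real `struct scheduler`, `SCHED_OP()`, and `printk()`-based hooks, and output goes into a buffer instead of the console.

```c
#include <assert.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical stand-in for Xen's struct scheduler: a NULL
 * dump_cpu_state means "this scheduler dumps pCPU info some
 * other way" (as Credit2 now does, per runqueue). */
struct sched_sketch {
    const char *name;
    void (*dump_cpu_state)(int cpu, char *buf, size_t len);
};

/* A Credit-like hook: each pCPU prints its own "CPU[NN]" header. */
static void credit_like_dump(int cpu, char *buf, size_t len)
{
    snprintf(buf, len, "CPU[%02d]\n", cpu);
}

/* Mirrors the reworked schedule_dump(): print the "CPUs info:"
 * header and walk the CPUs only when the hook is defined. */
static void dump_cpus_sketch(const struct sched_sketch *sched,
                             int ncpus, char *out, size_t outlen)
{
    out[0] = '\0';

    if ( sched->dump_cpu_state != NULL )
    {
        snprintf(out, outlen, "CPUs info:\n");
        for ( int i = 0; i < ncpus; i++ )
        {
            char line[32];
            size_t used = strlen(out);

            sched->dump_cpu_state(i, line, sizeof(line));
            snprintf(out + used, outlen - used, "%s", line);
        }
    }
}
```

With a Credit-like scheduler the buffer gets the header plus one line per pCPU; with a Credit2-like one (NULL hook) it stays empty, which is exactly why the header had to move inside the `if` in the real patch.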