From patchwork Thu Jan 26 16:52:50 2017
X-Patchwork-Submitter: Dario Faggioli
X-Patchwork-Id: 9539785
From: Dario Faggioli
To: xen-devel@lists.xenproject.org
Date: Thu, 26 Jan 2017 17:52:50 +0100
Message-ID: <148544957013.26566.1886390191777485188.stgit@Solace.fritz.box>
User-Agent: StGit/0.17.1-dirty
Cc: George Dunlap, Anshul Makkar, Meng Xu
Subject: [Xen-devel] [PATCH] xen: sched: improve debug dump output.

Scheduling information debug dump for Credit2 is hard to read, as it
contains the same information repeated multiple times in different
ways. In fact, in Credit2, CPUs are grouped in runqueues.
Here's the current debug output:

CPU[00] sibling=00000000,00000003, core=00000000,000000ff
	run: [32767.0] flags=0 cpu=0 credit=-1073741824 [w=0] load=0 (~0%)
	  1: [0.3] flags=0 cpu=2 credit=3273410 [w=256] load=262144 (~100%)
	  2: [0.4] flags=0 cpu=2 credit=2974954 [w=256] load=262144 (~100%)
CPU[01] sibling=00000000,00000003, core=00000000,000000ff
	run: [32767.1] flags=0 cpu=1 credit=-1073741824 [w=0] load=0 (~0%)
	  1: [0.3] flags=0 cpu=2 credit=3273410 [w=256] load=262144 (~100%)
	  2: [0.4] flags=0 cpu=2 credit=2974954 [w=256] load=262144 (~100%)
CPU[02] sibling=00000000,0000000c, core=00000000,000000ff
	run: [0.2] flags=2 cpu=2 credit=3556909 [w=256] load=262144 (~100%)
	  1: [0.3] flags=0 cpu=2 credit=3273410 [w=256] load=262144 (~100%)
	  2: [0.4] flags=0 cpu=2 credit=2974954 [w=256] load=262144 (~100%)

Here, CPUs 0, 1 and 2 are all part of runqueue 0, the content of which
(which, BTW, is d0v3 and d0v4) is printed 3 times! It is also not very
useful to see the details of the idle vCPUs, as they're always the same
(except for the vCPU ids).

With this change, we print:
 - the pCPUs' details and, for non-idle ones, which vCPU they're running;
 - the runqueue content, once and for all.

Runqueue 0:
CPU[00] runq=0, sibling=00000000,00000003, core=00000000,000000ff
	run: [0.15] flags=2 cpu=0 credit=5804742 [w=256] load=3655 (~1%)
CPU[01] runq=0, sibling=00000000,00000003, core=00000000,000000ff
CPU[02] runq=0, sibling=00000000,0000000c, core=00000000,000000ff
	run: [0.3] flags=2 cpu=2 credit=6674856 [w=256] load=262144 (~100%)
CPU[03] runq=0, sibling=00000000,0000000c, core=00000000,000000ff
RUNQ:
	  0: [0.1] flags=0 cpu=2 credit=6561215 [w=256] load=262144 (~100%)
	  1: [0.2] flags=0 cpu=2 credit=5812356 [w=256] load=262144 (~100%)

Stop printing details of idle vCPUs also in Credit1 and RTDS (they're
pretty useless in there too).
Signed-off-by: Dario Faggioli
---
Cc: George Dunlap
Cc: Anshul Makkar
Cc: Meng Xu
---
 xen/common/sched_credit.c  |  6 ++--
 xen/common/sched_credit2.c | 72 +++++++++++++++++++++----------------------
 xen/common/sched_rt.c      |  9 +++++-
 xen/common/schedule.c      |  7 ++--
 4 files changed, 49 insertions(+), 45 deletions(-)

diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
index ad20819..7c0ff47 100644
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -1988,13 +1988,13 @@ csched_dump_pcpu(const struct scheduler *ops, int cpu)
     runq = &spc->runq;
 
     cpumask_scnprintf(cpustr, sizeof(cpustr), per_cpu(cpu_sibling_mask, cpu));
-    printk(" sort=%d, sibling=%s, ", spc->runq_sort_last, cpustr);
+    printk("CPU[%02d] sort=%d, sibling=%s, ", cpu, spc->runq_sort_last, cpustr);
     cpumask_scnprintf(cpustr, sizeof(cpustr), per_cpu(cpu_core_mask, cpu));
     printk("core=%s\n", cpustr);
 
-    /* current VCPU */
+    /* current VCPU (nothing to say if that's the idle vcpu). */
     svc = CSCHED_VCPU(curr_on_cpu(cpu));
-    if ( svc )
+    if ( svc && !is_idle_vcpu(svc->vcpu) )
     {
         printk("\trun: ");
         csched_dump_vcpu(svc);
diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
index b2f2b17..c4e2b9a 100644
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -2627,56 +2627,25 @@ csched2_dump_vcpu(struct csched2_private *prv, struct csched2_vcpu *svc)
     printk("\n");
 }
 
-static void
-csched2_dump_pcpu(const struct scheduler *ops, int cpu)
+static inline void
+dump_pcpu(const struct scheduler *ops, int cpu)
 {
     struct csched2_private *prv = CSCHED2_PRIV(ops);
-    struct list_head *runq, *iter;
     struct csched2_vcpu *svc;
-    unsigned long flags;
-    spinlock_t *lock;
-    int loop;
 #define cpustr keyhandler_scratch
 
-    /*
-     * We need both locks:
-     *  - csched2_dump_vcpu() wants to access domains' weights,
-     *    which are protected by the private scheduler lock;
-     *  - we scan through the runqueue, so we need the proper runqueue
-     *    lock (the one of the runqueue this cpu is associated to).
-     */
-    read_lock_irqsave(&prv->lock, flags);
-    lock = per_cpu(schedule_data, cpu).schedule_lock;
-    spin_lock(lock);
-
-    runq = &RQD(ops, cpu)->runq;
-
     cpumask_scnprintf(cpustr, sizeof(cpustr), per_cpu(cpu_sibling_mask, cpu));
-    printk(" sibling=%s, ", cpustr);
+    printk("CPU[%02d] runq=%d, sibling=%s, ", cpu, c2r(ops, cpu), cpustr);
     cpumask_scnprintf(cpustr, sizeof(cpustr), per_cpu(cpu_core_mask, cpu));
     printk("core=%s\n", cpustr);
 
-    /* current VCPU */
+    /* current VCPU (nothing to say if that's the idle vcpu) */
     svc = CSCHED2_VCPU(curr_on_cpu(cpu));
-    if ( svc )
+    if ( svc && !is_idle_vcpu(svc->vcpu) )
     {
         printk("\trun: ");
         csched2_dump_vcpu(prv, svc);
     }
-
-    loop = 0;
-    list_for_each( iter, runq )
-    {
-        svc = __runq_elem(iter);
-        if ( svc )
-        {
-            printk("\t%3d: ", ++loop);
-            csched2_dump_vcpu(prv, svc);
-        }
-    }
-
-    spin_unlock(lock);
-    read_unlock_irqrestore(&prv->lock, flags);
 #undef cpustr
 }
@@ -2686,7 +2655,7 @@ csched2_dump(const struct scheduler *ops)
     struct list_head *iter_sdom;
     struct csched2_private *prv = CSCHED2_PRIV(ops);
     unsigned long flags;
-    int i, loop;
+    unsigned int i, j, loop;
 #define cpustr keyhandler_scratch
 
     /*
@@ -2756,6 +2725,34 @@ csched2_dump(const struct scheduler *ops)
         }
     }
 
+    for_each_cpu(i, &prv->active_queues)
+    {
+        struct csched2_runqueue_data *rqd = prv->rqd + i;
+        struct list_head *iter, *runq = &rqd->runq;
+        int loop = 0;
+
+        /* We need the lock to scan the runqueue. */
+        spin_lock(&rqd->lock);
+
+        printk("Runqueue %d:\n", i);
+
+        for_each_cpu(j, &rqd->active)
+            dump_pcpu(ops, j);
+
+        printk("RUNQ:\n");
+        list_for_each( iter, runq )
+        {
+            struct csched2_vcpu *svc = __runq_elem(iter);
+
+            if ( svc )
+            {
+                printk("\t%3d: ", loop++);
+                csched2_dump_vcpu(prv, svc);
+            }
+        }
+        spin_unlock(&rqd->lock);
+    }
+
     read_unlock_irqrestore(&prv->lock, flags);
 #undef cpustr
 }
@@ -3100,7 +3097,6 @@ static const struct scheduler sched_credit2_def = {
     .do_schedule    = csched2_schedule,
     .context_saved  = csched2_context_saved,
 
-    .dump_cpu_state = csched2_dump_pcpu,
     .dump_settings  = csched2_dump,
     .init           = csched2_init,
     .deinit         = csched2_deinit,
diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
index 24b4b22..f2d979c 100644
--- a/xen/common/sched_rt.c
+++ b/xen/common/sched_rt.c
@@ -320,10 +320,17 @@ static void
 rt_dump_pcpu(const struct scheduler *ops, int cpu)
 {
     struct rt_private *prv = rt_priv(ops);
+    struct rt_vcpu *svc;
     unsigned long flags;
 
     spin_lock_irqsave(&prv->lock, flags);
-    rt_dump_vcpu(ops, rt_vcpu(curr_on_cpu(cpu)));
+    printk("CPU[%02d]\n", cpu);
+    /* current VCPU (nothing to say if that's the idle vcpu). */
+    svc = rt_vcpu(curr_on_cpu(cpu));
+    if ( svc && !is_idle_vcpu(svc->vcpu) )
+    {
+        rt_dump_vcpu(ops, svc);
+    }
     spin_unlock_irqrestore(&prv->lock, flags);
 }
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index 43b5b99..e4320f3 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -1844,10 +1844,11 @@ void schedule_dump(struct cpupool *c)
         cpus = &cpupool_free_cpus;
     }
 
-    for_each_cpu (i, cpus)
+    if ( sched->dump_cpu_state != NULL )
     {
-        printk("CPU[%02d] ", i);
-        SCHED_OP(sched, dump_cpu_state, i);
+        printk("CPUs info:\n");
+        for_each_cpu (i, cpus)
+            SCHED_OP(sched, dump_cpu_state, i);
     }
 }