From patchwork Mon Jun 30 16:05:34 2014
X-Patchwork-Submitter: Vincent Guittot
X-Patchwork-Id: 4452791
From: Vincent Guittot
To: peterz@infradead.org, mingo@kernel.org, linux-kernel@vger.kernel.org,
	linux@arm.linux.org.uk, linux-arm-kernel@lists.infradead.org
Cc: nicolas.pitre@linaro.org, efault@gmx.de, Vincent Guittot,
	daniel.lezcano@linaro.org, dietmar.eggemann@arm.com,
	linaro-kernel@lists.linaro.org, preeti@linux.vnet.ibm.com,
	Morten.Rasmussen@arm.com
Subject: [PATCH v3 03/12] sched: fix avg_load computation
Date: Mon, 30 Jun 2014 18:05:34 +0200
Message-Id: <1404144343-18720-4-git-send-email-vincent.guittot@linaro.org>
In-Reply-To: <1404144343-18720-1-git-send-email-vincent.guittot@linaro.org>
References: <1404144343-18720-1-git-send-email-vincent.guittot@linaro.org>

The computation of avg_load and avg_load_per_task should only take into
account the number of CFS tasks. Non-CFS tasks are already taken into
account by decreasing the CPU's capacity (cpu_power), and they will be
tracked in the CPU's utilization (group_utilization) introduced by the
next patches.

Signed-off-by: Vincent Guittot
Acked-by: Rik van Riel
---
 kernel/sched/fair.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index c6dba48..148b277 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4051,7 +4051,7 @@ static unsigned long capacity_of(int cpu)
 static unsigned long cpu_avg_load_per_task(int cpu)
 {
 	struct rq *rq = cpu_rq(cpu);
-	unsigned long nr_running = ACCESS_ONCE(rq->nr_running);
+	unsigned long nr_running = ACCESS_ONCE(rq->cfs.h_nr_running);
 	unsigned long load_avg = rq->cfs.runnable_load_avg;
 
 	if (nr_running)
@@ -5865,7 +5865,7 @@ static inline void update_sg_lb_stats(struct lb_env *env,
 		load = source_load(i, load_idx);
 
 		sgs->group_load += load;
-		sgs->sum_nr_running += rq->nr_running;
+		sgs->sum_nr_running += rq->cfs.h_nr_running;
 #ifdef CONFIG_NUMA_BALANCING
 		sgs->nr_numa_running += rq->nr_numa_running;
 		sgs->nr_preferred_running += rq->nr_preferred_running;
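
[Editor's note] For readers less familiar with the load balancer, the point of the
change can be shown with a minimal user-space sketch (hypothetical struct and
made-up numbers, not the kernel's data structures): rq->cfs.runnable_load_avg only
sums the load of CFS tasks, so dividing it by rq->nr_running, which also counts
RT/deadline tasks, underestimates the load per CFS task; dividing by
rq->cfs.h_nr_running keeps numerator and denominator consistent.

/* Illustrative sketch only, assuming simplified stand-ins for the rq fields. */
#include <stdio.h>

struct cpu_stats {
	unsigned long cfs_runnable_load;   /* load contributed by CFS tasks only */
	unsigned long cfs_h_nr_running;    /* CFS tasks, incl. nested task groups */
	unsigned long total_nr_running;    /* CFS tasks + RT/deadline tasks */
};

static unsigned long avg_load_per_task(const struct cpu_stats *s)
{
	/* Divide CFS load by CFS tasks only; non-CFS tasks are not part of
	 * cfs_runnable_load, they show up as reduced CPU capacity instead. */
	return s->cfs_h_nr_running ?
		s->cfs_runnable_load / s->cfs_h_nr_running : 0;
}

int main(void)
{
	/* 2 CFS tasks plus 1 RT task on the same CPU (made-up numbers) */
	struct cpu_stats s = {
		.cfs_runnable_load = 2048,
		.cfs_h_nr_running  = 2,
		.total_nr_running  = 3,
	};

	printf("divide by all tasks: %lu\n",
	       s.cfs_runnable_load / s.total_nr_running);  /* 682: underestimate */
	printf("divide by CFS tasks: %lu\n",
	       avg_load_per_task(&s));                      /* 1024 */
	return 0;
}

The same reasoning applies to sum_nr_running in update_sg_lb_stats(): group_load
only accumulates CFS load, so counting non-CFS tasks in the divisor skews
avg_load_per_task for the whole scheduling group.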