From patchwork Mon Jul 28 17:51:42 2014
X-Patchwork-Submitter: Vincent Guittot
X-Patchwork-Id: 4636161
From: Vincent Guittot <vincent.guittot@linaro.org>
To: peterz@infradead.org, mingo@kernel.org, linux-kernel@vger.kernel.org,
	preeti@linux.vnet.ibm.com, linux@arm.linux.org.uk,
	linux-arm-kernel@lists.infradead.org
Cc: nicolas.pitre@linaro.org, riel@redhat.com, daniel.lezcano@linaro.org,
	efault@gmx.de, dietmar.eggemann@arm.com, linaro-kernel@lists.linaro.org,
	Morten.Rasmussen@arm.com
Subject: [PATCH v4 08/12] sched: move cfs task on a CPU with higher capacity
Date: Mon, 28 Jul 2014 19:51:42 +0200
Message-Id: <1406569906-9763-9-git-send-email-vincent.guittot@linaro.org>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1406569906-9763-1-git-send-email-vincent.guittot@linaro.org>
References: <1406569906-9763-1-git-send-email-vincent.guittot@linaro.org>
If the CPU is used for handling a lot of IRQs, trigger a load balance to
check whether it's worth moving its tasks to another CPU that has more
capacity.

As a side note, this will not generate more spurious ilbs because we
already trigger an ilb if there is more than 1 busy cpu. If this cpu is
the only one that has a task, we will trigger the ilb once to migrate
the task.

The nohz_kick_needed function has been cleaned up a bit while adding the
new test.

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
---
 kernel/sched/fair.c | 69 +++++++++++++++++++++++++++++++++++++----------------
 1 file changed, 49 insertions(+), 20 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 6843016..1cde8dd 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5969,6 +5969,14 @@ static bool update_sd_pick_busiest(struct lb_env *env,
 		return true;
 	}
 
+	/*
+	 * The group capacity is reduced probably because of activity from other
+	 * sched classes or interrupts which use part of the available capacity.
+	 */
+	if ((sg->sgc->capacity_orig * 100) > (sgs->group_capacity *
+				env->sd->imbalance_pct))
+		return true;
+
 	return false;
 }
 
@@ -6455,13 +6463,23 @@ static int need_active_balance(struct lb_env *env)
 	struct sched_domain *sd = env->sd;
 
 	if (env->idle == CPU_NEWLY_IDLE) {
+		int src_cpu = env->src_cpu;
 
 		/*
 		 * ASYM_PACKING needs to force migrate tasks from busy but
 		 * higher numbered CPUs in order to pack all tasks in the
 		 * lowest numbered CPUs.
 		 */
-		if ((sd->flags & SD_ASYM_PACKING) && env->src_cpu > env->dst_cpu)
+		if ((sd->flags & SD_ASYM_PACKING) && src_cpu > env->dst_cpu)
+			return 1;
+
+		/*
+		 * If the CPUs share their cache and the src_cpu's capacity is
+		 * reduced because of other sched_class tasks or IRQs, we trigger
+		 * an active balance to move the task.
+		 */
+		if ((capacity_orig_of(src_cpu) * 100) > (capacity_of(src_cpu) *
+				sd->imbalance_pct))
 			return 1;
 	}
 
@@ -6563,6 +6581,8 @@ static int load_balance(int this_cpu, struct rq *this_rq,
 
 	schedstat_add(sd, lb_imbalance[idle], env.imbalance);
 
+	env.src_cpu = busiest->cpu;
+
 	ld_moved = 0;
 	if (busiest->nr_running > 1) {
 		/*
@@ -6572,7 +6592,6 @@ static int load_balance(int this_cpu, struct rq *this_rq,
 		 * correctly treated as an imbalance.
 		 */
 		env.flags |= LBF_ALL_PINNED;
-		env.src_cpu = busiest->cpu;
 		env.src_rq = busiest;
 		env.loop_max = min(sysctl_sched_nr_migrate, busiest->nr_running);
 
@@ -7262,10 +7281,12 @@ static void nohz_idle_balance(struct rq *this_rq, enum cpu_idle_type idle)
 
 /*
  * Current heuristic for kicking the idle load balancer in the presence
- * of an idle cpu is the system.
+ * of an idle cpu in the system.
  *   - This rq has more than one task.
- *   - At any scheduler domain level, this cpu's scheduler group has multiple
- *     busy cpu's exceeding the group's capacity.
+ *   - This rq has at least one CFS task and the capacity of the CPU is
+ *     significantly reduced because of RT tasks or IRQs.
+ *   - At parent of LLC scheduler domain level, this cpu's scheduler group has
+ *     multiple busy cpus.
  *   - For SD_ASYM_PACKING, if the lower numbered cpu's in the scheduler
  *     domain span are idle.
  */
@@ -7275,9 +7296,10 @@ static inline int nohz_kick_needed(struct rq *rq)
 	struct sched_domain *sd;
 	struct sched_group_capacity *sgc;
 	int nr_busy, cpu = rq->cpu;
+	bool kick = false;
 
 	if (unlikely(rq->idle_balance))
-		return 0;
+		return false;
 
 	/*
 	 * We may be recently in ticked or tickless idle mode. At the first
@@ -7291,38 +7313,45 @@ static inline int nohz_kick_needed(struct rq *rq)
 	 * balancing.
 	 */
 	if (likely(!atomic_read(&nohz.nr_cpus)))
-		return 0;
+		return false;
 
 	if (time_before(now, nohz.next_balance))
-		return 0;
+		return false;
 
 	if (rq->nr_running >= 2)
-		goto need_kick;
+		return true;
 
 	rcu_read_lock();
 	sd = rcu_dereference(per_cpu(sd_busy, cpu));
-
 	if (sd) {
 		sgc = sd->groups->sgc;
 		nr_busy = atomic_read(&sgc->nr_busy_cpus);
 
-		if (nr_busy > 1)
-			goto need_kick_unlock;
+		if (nr_busy > 1) {
+			kick = true;
+			goto unlock;
+		}
+
 	}
 
-	sd = rcu_dereference(per_cpu(sd_asym, cpu));
+	sd = rcu_dereference(rq->sd);
+	if (sd) {
+		if ((rq->cfs.h_nr_running >= 1) &&
+				((rq->cpu_capacity * sd->imbalance_pct) <
+				(rq->cpu_capacity_orig * 100))) {
+			kick = true;
+			goto unlock;
+		}
+	}
 
+	sd = rcu_dereference(per_cpu(sd_asym, cpu));
 	if (sd && (cpumask_first_and(nohz.idle_cpus_mask,
 				  sched_domain_span(sd)) < cpu))
-		goto need_kick_unlock;
+		kick = true;
 
+unlock:
 	rcu_read_unlock();
-	return 0;
-
-need_kick_unlock:
-	rcu_read_unlock();
-need_kick:
-	return 1;
+	return kick;
 }
 #else
 static void nohz_idle_balance(struct rq *this_rq, enum cpu_idle_type idle) { }