From patchwork Thu Oct 25 10:26:00 2012
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: preeti
X-Patchwork-Id: 1643061
Subject: [RFC PATCH 10/13] sched: Modify fix_small_imbalance to use PJT's metric
To: svaidy@linux.vnet.ibm.com, linux-kernel@vger.kernel.org
From: Preeti U Murthy
Date: Thu, 25 Oct 2012 15:56:00 +0530
Message-ID: <20121025102559.21022.88893.stgit@preeti.in.ibm.com>
In-Reply-To: <20121025102045.21022.92489.stgit@preeti.in.ibm.com>
References: <20121025102045.21022.92489.stgit@preeti.in.ibm.com>
User-Agent: StGit/0.16-38-g167d
Cc: Morten.Rasmussen@arm.com, venki@google.com, robin.randhawa@arm.com,
 linaro-dev@lists.linaro.org, a.p.zijlstra@chello.nl, mjg59@srcf.ucam.org,
 viresh.kumar@linaro.org, amit.kucheria@linaro.org, deepthi@linux.vnet.ibm.com,
 Arvind.Chauhan@arm.com, paul.mckenney@linaro.org,
 suresh.b.siddha@intel.com, tglx@linutronix.de, srivatsa.bhat@linux.vnet.ibm.com,
 vincent.guittot@linaro.org, akpm@linux-foundation.org, paulmck@linux.vnet.ibm.com,
 arjan@linux.intel.com, mingo@kernel.org, linux-arm-kernel@lists.infradead.org,
 pjt@google.com

Use additional parameters, calculated with PJT's per-entity load-tracking
metric, to aid the decisions taken in fix_small_imbalance.

Signed-off-by: Preeti U Murthy
---
 kernel/sched/fair.c |   54 +++++++++++++++++++++++++++++++--------------------
 1 file changed, 33 insertions(+), 21 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 3b18f5f..a5affbc 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2936,8 +2936,9 @@ static unsigned long cpu_avg_load_per_task(int cpu)
 	struct rq *rq = cpu_rq(cpu);
 	unsigned long nr_running = ACCESS_ONCE(rq->nr_running);
 
-	if (nr_running)
+	if (nr_running) {
 		return rq->load.weight / nr_running;
+	}
 
 	return 0;
 }
@@ -4830,27 +4831,38 @@ static int check_asym_packing(struct lb_env *env, struct sd_lb_stats *sds)
 static inline
 void fix_small_imbalance(struct lb_env *env, struct sd_lb_stats *sds)
 {
-	unsigned long tmp, pwr_now = 0, pwr_move = 0;
+	/* Parameters introduced to use PJT's metrics */
+	u64 tmp, pwr_now = 0, pwr_move = 0;
 	unsigned int imbn = 2;
 	unsigned long scaled_busy_load_per_task;
+	u64 scaled_busy_sg_load_per_task; /* Parameter to use PJT's metric */
+	unsigned long nr_running = ACCESS_ONCE(cpu_rq(env->dst_cpu)->nr_running);
 
 	if (sds->this_nr_running) {
-		sds->this_load_per_task /= sds->this_nr_running;
-		if (sds->busiest_load_per_task >
-				sds->this_load_per_task)
+		sds->this_sg_load_per_task /= sds->this_nr_running;
+		if (sds->busiest_sg_load_per_task >
+				sds->this_sg_load_per_task)
 			imbn = 1;
 	} else {
-		sds->this_load_per_task =
-			cpu_avg_load_per_task(env->dst_cpu);
+		if (nr_running) {
+			sds->this_sg_load_per_task =
+			/* The below decision based on PJT's metric */
+			cpu_rq(env->dst_cpu)->cfs.runnable_load_avg / nr_running;
+		} else {
+			sds->this_sg_load_per_task = 0;
+		}
 	}
 
 	scaled_busy_load_per_task = sds->busiest_load_per_task
 					 * SCHED_POWER_SCALE;
+	scaled_busy_sg_load_per_task = sds->busiest_sg_load_per_task
+					 * SCHED_POWER_SCALE;
 	scaled_busy_load_per_task /= sds->busiest->sgp->power;
+	scaled_busy_sg_load_per_task /= sds->busiest->sgp->power;
 
-	if (sds->max_load - sds->this_load + scaled_busy_load_per_task >=
-			(scaled_busy_load_per_task * imbn)) {
-		env->imbalance = sds->busiest_load_per_task;
+	if (sds->max_sg_load - sds->this_sg_load + scaled_busy_sg_load_per_task >=
+			(scaled_busy_sg_load_per_task * imbn)) {
+		env->load_imbalance = sds->busiest_sg_load_per_task;
 		return;
 	}
 
@@ -4861,33 +4873,33 @@ void fix_small_imbalance(struct lb_env *env, struct sd_lb_stats *sds)
 	 */
 
 	pwr_now += sds->busiest->sgp->power *
-			min(sds->busiest_load_per_task, sds->max_load);
+			min(sds->busiest_sg_load_per_task, sds->max_sg_load);
 	pwr_now += sds->this->sgp->power *
-			min(sds->this_load_per_task, sds->this_load);
+			min(sds->this_sg_load_per_task, sds->this_sg_load);
 	pwr_now /= SCHED_POWER_SCALE;
 
 	/* Amount of load we'd subtract */
-	tmp = (sds->busiest_load_per_task * SCHED_POWER_SCALE) /
+	tmp = (sds->busiest_sg_load_per_task * SCHED_POWER_SCALE) /
 		sds->busiest->sgp->power;
-	if (sds->max_load > tmp)
+	if (sds->max_sg_load > tmp)
 		pwr_move += sds->busiest->sgp->power *
-			min(sds->busiest_load_per_task, sds->max_load - tmp);
+			min(sds->busiest_sg_load_per_task, sds->max_sg_load - tmp);
 
 	/* Amount of load we'd add */
-	if (sds->max_load * sds->busiest->sgp->power <
-		sds->busiest_load_per_task * SCHED_POWER_SCALE)
-		tmp = (sds->max_load * sds->busiest->sgp->power) /
+	if (sds->max_sg_load * sds->busiest->sgp->power <
+		sds->busiest_sg_load_per_task * SCHED_POWER_SCALE)
+		tmp = (sds->max_sg_load * sds->busiest->sgp->power) /
 			sds->this->sgp->power;
 	else
-		tmp = (sds->busiest_load_per_task * SCHED_POWER_SCALE) /
+		tmp = (sds->busiest_sg_load_per_task * SCHED_POWER_SCALE) /
 			sds->this->sgp->power;
 	pwr_move += sds->this->sgp->power *
-			min(sds->this_load_per_task, sds->this_load + tmp);
+			min(sds->this_sg_load_per_task, sds->this_sg_load + tmp);
 	pwr_move /= SCHED_POWER_SCALE;
 
 	/* Move if we gain throughput */
 	if (pwr_move > pwr_now)
-		env->imbalance = sds->busiest_load_per_task;
+		env->load_imbalance = sds->busiest_sg_load_per_task;
 }
 
 /**