From: Peter Zijlstra
To: Vincent Guittot
Cc: nicolas.pitre@linaro.org, riel@redhat.com, linaro-kernel@lists.linaro.org,
	linux@arm.linux.org.uk, daniel.lezcano@linaro.org, efault@gmx.de,
	linux-kernel@vger.kernel.org, Morten.Rasmussen@arm.com,
	preeti@linux.vnet.ibm.com, dietmar.eggemann@arm.com, mingo@kernel.org,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [PATCH v5 11/12] sched: replace capacity_factor by utilization
Date: Thu, 11 Sep 2014 17:39:10 +0200
Message-ID: <20140911153910.GZ3190@worktop.ger.corp.intel.com>
In-Reply-To: <1409051215-16788-12-git-send-email-vincent.guittot@linaro.org>
References: <1409051215-16788-1-git-send-email-vincent.guittot@linaro.org>
	<1409051215-16788-12-git-send-email-vincent.guittot@linaro.org>

On Tue, Aug 26, 2014 at 01:06:54PM +0200, Vincent Guittot wrote:
> The scheduler tries to compute how many tasks a group of CPUs can handle
> by assuming that a task's load is SCHED_LOAD_SCALE and a CPU's capacity
> is SCHED_CAPACITY_SCALE.
> Thanks to the rework of group_capacity_orig and group_utilization, we now
> have a better idea of both the capacity of a group of CPUs and of its
> utilization, and we can deduce how much capacity is still available.
>
> Signed-off-by: Vincent Guittot
> ---
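For context, the changelog's "how much capacity is still available" is just
the difference of the two reworked per-group numbers. A minimal userspace
sketch of that idea (the values are illustrative, not from the patch):

#include <stdio.h>

#define SCHED_CAPACITY_SCALE	1024UL

int main(void)
{
	/* illustrative: a 2-CPU group, roughly 60% utilized overall */
	unsigned long group_capacity_orig = 2 * SCHED_CAPACITY_SCALE;
	unsigned long group_utilization   = 1229;

	/* the capacity still available for additional load */
	unsigned long spare = group_capacity_orig - group_utilization;

	printf("spare: %lu of %lu\n", spare, group_capacity_orig);
	return 0;
}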
A few minor changes I did while going through it.

--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5669,7 +5669,7 @@ struct sg_lb_stats {
 	unsigned int idle_cpus;
 	unsigned int group_weight;
 	enum group_type group_type;
-	int group_out_of_capacity;
+	int group_no_capacity;
 #ifdef CONFIG_NUMA_BALANCING
 	unsigned int nr_numa_running;
 	unsigned int nr_preferred_running;
@@ -5931,37 +5931,37 @@ static inline int sg_imbalanced(struct s
 	return group->sgc->imbalance;
 }
 
-static inline int group_has_free_capacity(struct sg_lb_stats *sgs,
-					  struct lb_env *env)
+static inline bool
+group_has_capacity(struct lb_env *env, struct sg_lb_stats *sgs)
 {
 	if ((sgs->group_capacity_orig * 100) >
 			(sgs->group_utilization * env->sd->imbalance_pct))
-		return 1;
+		return true;
 
 	if (sgs->sum_nr_running < sgs->group_weight)
-		return 1;
+		return true;
 
-	return 0;
+	return false;
 }
 
-static inline int group_is_overloaded(struct sg_lb_stats *sgs,
-				      struct lb_env *env)
+static inline bool
+group_is_overloaded(struct lb_env *env, struct sg_lb_stats *sgs)
 {
 	if (sgs->sum_nr_running <= sgs->group_weight)
-		return 0;
+		return false;
 
 	if ((sgs->group_capacity_orig * 100) <
 			(sgs->group_utilization * env->sd->imbalance_pct))
-		return 1;
+		return true;
 
-	return 0;
+	return false;
 }
 
-static enum group_type
-group_classify(struct sched_group *group, struct sg_lb_stats *sgs,
-	       struct lb_env *env)
+static enum group_type group_classify(struct lb_env *env,
+				      struct sched_group *group,
+				      struct sg_lb_stats *sgs)
 {
-	if (group_is_overloaded(sgs, env))
+	if (group_is_overloaded(env, sgs))
 		return group_overloaded;
 
 	if (sg_imbalanced(group))
@@ -6024,9 +6024,8 @@ static inline void update_sg_lb_stats(st
 
 	sgs->group_weight = group->group_weight;
 
-	sgs->group_type = group_classify(group, sgs, env);
-
-	sgs->group_out_of_capacity = group_is_overloaded(sgs, env);
+	sgs->group_type = group_classify(env, group, sgs);
+	sgs->group_no_capacity = group_is_overloaded(env, sgs);
 }
 
 /**
@@ -6157,9 +6156,9 @@ static inline void update_sd_lb_stats(st
 	 * with a large weight task outweighs the tasks on the system).
 	 */
 	if (prefer_sibling && sds->local &&
-	    group_has_free_capacity(&sds->local_stat, env)) {
+	    group_has_capacity(env, &sds->local_stat)) {
 		if (sgs->sum_nr_running > 1)
-			sgs->group_out_of_capacity = 1;
+			sgs->group_no_capacity = 1;
 		sgs->group_capacity = min(sgs->group_capacity,
 					SCHED_CAPACITY_SCALE);
 	}
@@ -6430,9 +6429,8 @@ static struct sched_group *find_busiest_
 		goto force_balance;
 
 	/* SD_BALANCE_NEWIDLE trumps SMP nice when underutilized */
-	if (env->idle == CPU_NEWLY_IDLE &&
-	    group_has_free_capacity(local, env) &&
-	    busiest->group_out_of_capacity)
+	if (env->idle == CPU_NEWLY_IDLE && group_has_capacity(env, local) &&
+	    busiest->group_no_capacity)
 		goto force_balance;
 
 	/*
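As an aside, note that the two predicates are not exact negations of one
another: with one task per CPU and utilization past the imbalance_pct
margin, both return false. A minimal userspace sketch (simplified from the
hunks above; the numbers and the 125 imbalance_pct are made up):

#include <stdbool.h>
#include <stdio.h>

/* mirrors only the fields the two predicates actually read */
struct sg_stats {
	unsigned long group_capacity_orig;
	unsigned long group_utilization;
	unsigned int sum_nr_running;
	unsigned int group_weight;
};

static bool group_has_capacity(const struct sg_stats *s, unsigned int pct)
{
	if (s->group_capacity_orig * 100 > s->group_utilization * pct)
		return true;
	if (s->sum_nr_running < s->group_weight)
		return true;
	return false;
}

static bool group_is_overloaded(const struct sg_stats *s, unsigned int pct)
{
	if (s->sum_nr_running <= s->group_weight)
		return false;
	if (s->group_capacity_orig * 100 < s->group_utilization * pct)
		return true;
	return false;
}

int main(void)
{
	/* 2-CPU group, one task per CPU, utilization past the margin */
	struct sg_stats s = { 2048, 1700, 2, 2 };

	/* prints: has_capacity=0 overloaded=0 */
	printf("has_capacity=%d overloaded=%d\n",
	       group_has_capacity(&s, 125), group_is_overloaded(&s, 125));
	return 0;
}

A fully packed but not over-running group thus lands in neither bucket and
falls through to the later checks in group_classify().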