From patchwork Wed Sep 10 13:50:39 2014
X-Patchwork-Submitter: Peter Zijlstra
X-Patchwork-Id: 4877491
Date: Wed, 10 Sep 2014 15:50:39 +0200
From: Peter Zijlstra
To: Preeti U Murthy
Subject: Re: [PATCH v5 04/12] sched: Allow all archs to set the capacity_orig
Message-ID: <20140910135039.GP3190@worktop.ger.corp.intel.com>
References: <1409051215-16788-1-git-send-email-vincent.guittot@linaro.org>
 <1409051215-16788-5-git-send-email-vincent.guittot@linaro.org>
 <540204DC.5090204@linux.vnet.ibm.com>
In-Reply-To: <540204DC.5090204@linux.vnet.ibm.com>
Cc: nicolas.pitre@linaro.org, riel@redhat.com, linaro-kernel@lists.linaro.org,
 linux@arm.linux.org.uk, daniel.lezcano@linaro.org, efault@gmx.de,
 linux-kernel@vger.kernel.org, Morten.Rasmussen@arm.com, Vincent Guittot,
 dietmar.eggemann@arm.com, mingo@kernel.org, linux-arm-kernel@lists.infradead.org

On Sat, Aug 30, 2014 at 10:37:40PM +0530, Preeti U Murthy wrote:
> > -	if ((sd->flags & SD_SHARE_CPUCAPACITY) && weight > 1) {
> > -		if (sched_feat(ARCH_CAPACITY))
>
> Aren't you missing this check above? I understand that it is not
> crucial, but that would also mean removing ARCH_CAPACITY sched_feat
> altogether, wouldn't it?

Yes, he's missing that; I added the bit below on top.

The argument last time:

  lkml.kernel.org/r/20140709105721.GT19379@twins.programming.kicks-ass.net

was that you cannot put sched_feat(ARCH_CAPACITY) inside a weak arch_*
function. The test has to be outside, since it must decide whether to call
the arch function at all (or fall back to the default implementation).
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5700,7 +5700,7 @@ unsigned long __weak arch_scale_freq_cap
 	return default_scale_capacity(sd, cpu);
 }
 
-unsigned long __weak arch_scale_cpu_capacity(struct sched_domain *sd, int cpu)
+static unsigned long default_scale_cpu_capacity(struct sched_domain *sd, int cpu)
 {
 	if ((sd->flags & SD_SHARE_CPUCAPACITY) && (sd->span_weight > 1))
 		return sd->smt_gain / sd->span_weight;
@@ -5708,6 +5708,11 @@ unsigned long __weak arch_scale_cpu_capa
 	return SCHED_CAPACITY_SCALE;
 }
 
+unsigned long __weak arch_scale_cpu_capacity(struct sched_domain *sd, int cpu)
+{
+	return default_scale_cpu_capacity(sd, cpu);
+}
+
 static unsigned long scale_rt_capacity(int cpu)
 {
 	struct rq *rq = cpu_rq(cpu);
@@ -5747,7 +5752,10 @@ static void update_cpu_capacity(struct s
 	unsigned long capacity = SCHED_CAPACITY_SCALE;
 	struct sched_group *sdg = sd->groups;
 
-	capacity *= arch_scale_cpu_capacity(sd, cpu);
+	if (sched_feat(ARCH_CAPACITY))
+		capacity *= arch_scale_cpu_capacity(sd, cpu);
+	else
+		capacity *= default_scale_cpu_capacity(sd, cpu);
+
 	capacity >>= SCHED_CAPACITY_SHIFT;