From patchwork Fri Aug 26 18:40:47 2016
From: Steve Muckle
To: Peter Zijlstra, Ingo Molnar, "Rafael J. Wysocki"
Cc: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org, Vincent Guittot, Morten Rasmussen, Dietmar Eggemann, Juri Lelli, Patrick Bellasi, Steve Muckle
Subject: [PATCH 1/2] sched: cpufreq: ignore SMT when determining max cpu capacity
Date: Fri, 26 Aug 2016 11:40:47 -0700
Message-Id: <1472236848-17038-2-git-send-email-smuckle@linaro.org>
In-Reply-To: <1472236848-17038-1-git-send-email-smuckle@linaro.org>
References: <1472236848-17038-1-git-send-email-smuckle@linaro.org>
X-Mailing-List: linux-pm@vger.kernel.org

PELT does not consider SMT when scaling its utilization values via
arch_scale_cpu_capacity().
The value in rq->cpu_capacity_orig does take SMT into consideration
though, and therefore may be smaller than the utilization reported by
PELT. On an Intel i7-3630QM, for example, rq->cpu_capacity_orig is 589
but util_avg scales up to 1024. This means that a 50% utilized CPU will
show up in schedutil as ~86% busy.

Fix this by using the same CPU scaling value in schedutil as is used
by PELT.

Signed-off-by: Steve Muckle
---
 kernel/sched/cpufreq_schedutil.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index 60d985f4dc47..cb8a77b1ef1b 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -147,7 +147,9 @@ static unsigned int get_next_freq(struct sugov_cpu *sg_cpu, unsigned long util,
 static void sugov_get_util(unsigned long *util, unsigned long *max)
 {
 	struct rq *rq = this_rq();
-	unsigned long cfs_max = rq->cpu_capacity_orig;
+	unsigned long cfs_max;
+
+	cfs_max = arch_scale_cpu_capacity(NULL, smp_processor_id());
 
 	*util = min(rq->cfs.avg.util_avg, cfs_max);
 	*max = cfs_max;