From patchwork Thu Dec 3 17:55:56 2015
X-Patchwork-Submitter: plongepe
X-Patchwork-Id: 7762141
From: Philippe Longepe
To: linux-pm@vger.kernel.org
Cc: srinivas.pandruvada@linux.intel.com, Stephane Gasparini
Subject: [PATCH V5 2/3] cpufreq: intel_pstate: account for non C0 time
Date: Thu, 3 Dec 2015 18:55:56 +0100
Message-Id: <1449165359-25832-5-git-send-email-philippe.longepe@linux.intel.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1449165359-25832-1-git-send-email-philippe.longepe@linux.intel.com>
References: <1449165359-25832-1-git-send-email-philippe.longepe@linux.intel.com>
The current function to estimate busyness uses the ratio of the actual
performance to the max non-turbo state requested during the last sample
period. This overestimates busyness, which results in higher power
consumption and is a problem for low-power systems.

The algorithm here uses CPU busyness to select the next target P-state.
The percent busy (or load) can be estimated as the ratio of the mperf
counter, which runs at a constant frequency only during active periods
(C0), to the time stamp counter, which runs at the same frequency but
also during idle:

	Percent Busy = 100 * (delta_mperf / delta_tsc)

Use this algorithm on platforms with SoCs based on Airmont and
Silvermont cores.
Signed-off-by: Philippe Longepe
Signed-off-by: Stephane Gasparini
---
 drivers/cpufreq/intel_pstate.c | 25 +++++++++++++++++++++++--
 1 file changed, 23 insertions(+), 2 deletions(-)

diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
index c2553a1..2cf8bb6 100644
--- a/drivers/cpufreq/intel_pstate.c
+++ b/drivers/cpufreq/intel_pstate.c
@@ -143,6 +143,7 @@ struct cpu_defaults {
 };
 
 static inline int32_t get_target_pstate_use_performance(struct cpudata *cpu);
+static inline int32_t get_target_pstate_use_cpu_load(struct cpudata *cpu);
 
 static struct pstate_adjust_policy pid_params;
 static struct pstate_funcs pstate_funcs;
@@ -763,7 +764,7 @@ static struct cpu_defaults silvermont_params = {
 		.set = atom_set_pstate,
 		.get_scaling = silvermont_get_scaling,
 		.get_vid = atom_get_vid,
-		.get_target_pstate = get_target_pstate_use_performance,
+		.get_target_pstate = get_target_pstate_use_cpu_load,
 	},
 };
 
@@ -784,7 +785,7 @@ static struct cpu_defaults airmont_params = {
 		.set = atom_set_pstate,
 		.get_scaling = airmont_get_scaling,
 		.get_vid = atom_get_vid,
-		.get_target_pstate = get_target_pstate_use_performance,
+		.get_target_pstate = get_target_pstate_use_cpu_load,
 	},
 };
 
@@ -930,6 +931,26 @@ static inline void intel_pstate_set_sample_time(struct cpudata *cpu)
 	mod_timer_pinned(&cpu->timer, jiffies + delay);
 }
 
+static inline int32_t get_target_pstate_use_cpu_load(struct cpudata *cpu)
+{
+	struct sample *sample = &cpu->sample;
+	int32_t cpu_load;
+
+	/*
+	 * The load can be estimated as the ratio of the mperf counter
+	 * running at a constant frequency during active periods
+	 * (C0) and the time stamp counter running at the same frequency
+	 * also during C-states.
+	 */
+	cpu_load = div64_u64(100 * sample->mperf, sample->tsc);
+
+	cpu->sample.busy_scaled = int_tofp(cpu_load);
+
+	return (cpu->pstate.current_pstate - pid_calc(&cpu->pid,
+		cpu->sample.busy_scaled));
+}
+
+
 static inline int32_t get_target_pstate_use_performance(struct cpudata *cpu)
 {
 	int32_t core_busy, max_pstate, current_pstate, sample_ratio;