From patchwork Thu Dec 3 17:55:59 2015
X-Patchwork-Submitter: plongepe
X-Patchwork-Id: 7762161
From: Philippe Longepe
To: linux-pm@vger.kernel.org
Cc: srinivas.pandruvada@linux.intel.com, Stephane Gasparini, Philippe Longepe
Subject: [PATCH V5 3/3] cpufreq: intel_pstate: Account for IO wait time
Date: Thu, 3 Dec 2015 18:55:59 +0100
Message-Id: <1449165359-25832-8-git-send-email-philippe.longepe@linux.intel.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1449165359-25832-1-git-send-email-philippe.longepe@linux.intel.com>
References: <1449165359-25832-1-git-send-email-philippe.longepe@linux.intel.com>
From: Stephane Gasparini

To improve the performance of IO-bound workloads, account for IO wait time
when calculating CPU busy. This change obtains the IO wait time with
get_cpu_iowait_time_us() and converts that time into the number of IO cycles
spent at the max non-turbo frequency. This IO cycle count is added to the
mperf value to account for the IO wait time.

Signed-off-by: Philippe Longepe
Signed-off-by: Stephane Gasparini
---
 drivers/cpufreq/intel_pstate.c | 17 +++++++++++++++--
 1 file changed, 15 insertions(+), 2 deletions(-)

diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
index 2cf8bb6..fb92402 100644
--- a/drivers/cpufreq/intel_pstate.c
+++ b/drivers/cpufreq/intel_pstate.c
@@ -114,6 +114,7 @@ struct cpudata {
 	u64	prev_mperf;
 	u64	prev_tsc;
 	struct sample sample;
+	u64	prev_cummulative_iowait;
 };
 
 static struct cpudata **all_cpu_data;
@@ -934,16 +935,28 @@ static inline void intel_pstate_set_sample_time(struct cpudata *cpu)
 static inline int32_t get_target_pstate_use_cpu_load(struct cpudata *cpu)
 {
 	struct sample *sample = &cpu->sample;
+	u64 cummulative_iowait, delta_iowait_us;
+	u64 delta_iowait_mperf;
+	u64 mperf, now;
 	int32_t cpu_load;
 
+	cummulative_iowait = get_cpu_iowait_time_us(cpu->cpu, &now);
+
+	/* Convert iowait time into number of IO cycles spent at max_freq */
+	delta_iowait_us = cummulative_iowait - cpu->prev_cummulative_iowait;
+	delta_iowait_mperf = div64_u64(delta_iowait_us * cpu->pstate.scaling *
+		cpu->pstate.max_pstate, 1000);
+
+	mperf = cpu->sample.mperf + delta_iowait_mperf;
+	cpu->prev_cummulative_iowait = cummulative_iowait;
+
 	/*
 	 * The load can be estimated as the ratio of the mperf counter
 	 * running at a constant frequency during active periods
 	 * (C0) and the time stamp counter running at the same frequency
 	 * also during C-states.
 	 */
-	cpu_load = div64_u64(100 * sample->mperf, sample->tsc);
-
+	cpu_load = div64_u64(100 * mperf, sample->tsc);
 	cpu->sample.busy_scaled = int_tofp(cpu_load);
 
 	return (cpu->pstate.current_pstate - pid_calc(&cpu->pid,