From patchwork Fri Dec  4 16:40:35 2015
From: Philippe Longepe
To: linux-pm@vger.kernel.org
Cc: srinivas.pandruvada@linux.intel.com, Philippe Longepe,
    Stephane Gasparini
Subject: [PATCH V6 3/3] cpufreq: intel_pstate: Account for IO wait time
Date: Fri, 4 Dec 2015 17:40:35 +0100
Message-Id: <1449247235-29389-8-git-send-email-philippe.longepe@linux.intel.com>
In-Reply-To: <1449247235-29389-1-git-send-email-philippe.longepe@linux.intel.com>
References: <1449247235-29389-1-git-send-email-philippe.longepe@linux.intel.com>

In cases where we have many I/Os, the global load becomes low and the
load algorithm will decrease the requested P-state. Because of that, the
I/O overhead will increase and impact I/O performance.

To improve I/O-bound workloads, we can count the iowait time as busy
time when calculating CPU busy.

This change uses get_cpu_iowait_time_us() to obtain the iowait time and
converts that time into the number of cycles spent waiting on I/O at the
TSC rate. At the moment, this trick is only used for Atom.
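For reference, the unit conversion at the heart of the patch can be
reproduced in a standalone sketch (illustration only, not part of the
patch; the input values below are hypothetical, and scaling = 100000 kHz
per P-state step matches Core, whereas Atom derives its scaling from a
platform-specific value):

	#include <stdint.h>
	#include <stdio.h>

	#define MSEC_PER_SEC 1000ULL

	int main(void)
	{
		/* Example inputs -- hypothetical values: */
		uint64_t delta_iowait_us = 5000;  /* 5 ms spent in iowait */
		uint64_t max_pstate = 24;         /* highest P-state ratio */
		uint64_t scaling = 100000;        /* kHz per P-state step */

		/*
		 * Same arithmetic as the patch: max_pstate * scaling is the
		 * maximum frequency in kHz, and kHz * us / MSEC_PER_SEC
		 * yields cycles at that frequency.
		 */
		uint64_t delta_iowait_mperf =
			delta_iowait_us * scaling * max_pstate / MSEC_PER_SEC;

		printf("%llu cycles\n", (unsigned long long)delta_iowait_mperf);
		return 0;
	}

With these numbers, 5 ms of iowait at 2.4 GHz converts to 12,000,000
mperf-equivalent cycles.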
Signed-off-by: Philippe Longepe
Signed-off-by: Stephane Gasparini
---
 drivers/cpufreq/intel_pstate.c | 24 +++++++++++++++++++++---
 1 file changed, 21 insertions(+), 3 deletions(-)

diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
index c8437861..6964df9 100644
--- a/drivers/cpufreq/intel_pstate.c
+++ b/drivers/cpufreq/intel_pstate.c
@@ -113,6 +113,7 @@ struct cpudata {
 	u64	prev_aperf;
 	u64	prev_mperf;
 	u64	prev_tsc;
+	u64	prev_cummulative_iowait;
 	struct sample sample;
 };
 
@@ -933,22 +934,39 @@ static inline void intel_pstate_set_sample_time(struct cpudata *cpu)
 static inline int32_t get_target_pstate_use_cpu_load(struct cpudata *cpu)
 {
 	struct sample *sample = &cpu->sample;
+	u64 cummulative_iowait, delta_iowait_us;
+	u64 delta_iowait_mperf;
+	u64 mperf, now;
 	int32_t cpu_load;
 
+	cummulative_iowait = get_cpu_iowait_time_us(cpu->cpu, &now);
+
+	/*
+	 * Convert iowait time into number of IO cycles spent at max_freq.
+	 * IO is considered as busy only for the cpu_load algorithm. For
+	 * performance this is not needed since we always try to reach the
+	 * maximum P-State, so we are already boosting the IOs.
+	 */
+	delta_iowait_us = cummulative_iowait - cpu->prev_cummulative_iowait;
+	delta_iowait_mperf = div64_u64(delta_iowait_us * cpu->pstate.scaling *
+		cpu->pstate.max_pstate, MSEC_PER_SEC);
+
+	mperf = cpu->sample.mperf + delta_iowait_mperf;
+	cpu->prev_cummulative_iowait = cummulative_iowait;
+
+
 	/*
 	 * The load can be estimated as the ratio of the mperf counter
 	 * running at a constant frequency during active periods
 	 * (C0) and the time stamp counter running at the same frequency
 	 * also during C-states.
 	 */
-	cpu_load = div64_u64(int_tofp(100) * sample->mperf, sample->tsc);
-
+	cpu_load = div64_u64(int_tofp(100) * mperf, sample->tsc);
 	cpu->sample.busy_scaled = cpu_load;
 
 	return cpu->pstate.current_pstate - pid_calc(&cpu->pid, cpu_load);
 }
 
-
 static inline int32_t get_target_pstate_use_performance(struct cpudata *cpu)
 {
 	int32_t core_busy, max_pstate, current_pstate, sample_ratio;
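As a rough sanity check with the same hypothetical numbers as above (not
taken from the changelog): over a 10 ms sample with a 2.4 GHz TSC,
sample->tsc = 24,000,000 cycles. If the raw mperf delta is 6,000,000
(25% C0 residency) and 5 ms of the sample was spent in iowait, then
delta_iowait_mperf = 5000 us * 100000 kHz * 24 / MSEC_PER_SEC =
12,000,000 cycles, so cpu_load becomes 100 * (6,000,000 + 12,000,000) /
24,000,000 = 75 instead of 25, and the PID controller is fed a much
higher busy value during the I/O-bound phase, keeping the requested
P-state up.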