From patchwork Wed Jan 30 13:00:02 2013
X-Patchwork-Submitter: Fabio Baltieri
X-Patchwork-Id: 2067461
From: Fabio Baltieri
To: "Rafael J. Wysocki", cpufreq@vger.kernel.org, linux-pm@vger.kernel.org,
	Viresh Kumar
Cc: Linus Walleij, swarren@wwwdotorg.org, linaro-dev@lists.linaro.org,
	Nicolas Pitre, mathieu.poirier@linaro.org, Joseph Lo,
	linux-kernel@vger.kernel.org, Fabio Baltieri
Subject: [PATCH v7 3/4] cpufreq: conservative: call dbs_check_cpu only when necessary
Date: Wed, 30 Jan 2013 14:00:02 +0100
Message-Id: <1359550803-18577-4-git-send-email-fabio.baltieri@linaro.org>
In-Reply-To: <1359550803-18577-1-git-send-email-fabio.baltieri@linaro.org>
References: <1359550803-18577-1-git-send-email-fabio.baltieri@linaro.org>
X-Mailing-List: linux-pm@vger.kernel.org

Modify the conservative governor timer so that it does not resample CPU
utilization if the load has recently been sampled from another SW
coordinated core.
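The rate-limiting idea can also be sketched outside the kernel. The
standalone userspace C program below is only an illustration and is not
part of the patch; leader_info, should_sample(), SAMPLING_RATE_US and the
tick loop are made-up stand-ins for the shared per-policy dbs_info, the
time_stamp comparison and cs_tuners.sampling_rate.

/*
 * Userspace sketch (illustration only): several coordinated "CPU" timers
 * share one leader timestamp, and a timer only resamples load when at
 * least half a sampling period has passed since the last sample.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define SAMPLING_RATE_US 20000	/* stand-in for cs_tuners.sampling_rate */

struct leader_info {
	int64_t time_stamp_us;	/* last time load was actually sampled */
};

static int64_t now_us(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (int64_t)ts.tv_sec * 1000000 + ts.tv_nsec / 1000;
}

/*
 * Same shape as the check added to cs_timer_coordinated(): skip the
 * sample if the shared timestamp was refreshed less than half a
 * sampling period ago, otherwise refresh it and sample.
 */
static bool should_sample(struct leader_info *leader)
{
	int64_t delta_us = now_us() - leader->time_stamp_us;

	if (delta_us < SAMPLING_RATE_US / 2)
		return false;

	leader->time_stamp_us = now_us();
	return true;
}

int main(void)
{
	/* Pretend the last sample happened long ago so the first tick fires. */
	struct leader_info leader = { .time_stamp_us = 0 };
	struct timespec pause = { .tv_sec = 0, .tv_nsec = 6 * 1000 * 1000 };
	int tick;

	/* Two SW coordinated "CPUs" whose timers fire back to back every ~6 ms. */
	for (tick = 0; tick < 6; tick++) {
		bool cpu0 = should_sample(&leader);
		bool cpu1 = should_sample(&leader);

		printf("tick %d: cpu0 %s, cpu1 %s\n", tick,
		       cpu0 ? "samples" : "skips", cpu1 ? "samples" : "skips");
		nanosleep(&pause, NULL);
	}
	return 0;
}

Compiled and run, the sketch should show that per tick at most one of the
two simulated CPUs actually samples, and only when at least half a
sampling period has elapsed since the previous sample.
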
Signed-off-by: Fabio Baltieri
---
 drivers/cpufreq/cpufreq_conservative.c | 47 +++++++++++++++++++++++++++++-----
 1 file changed, 41 insertions(+), 6 deletions(-)

diff --git a/drivers/cpufreq/cpufreq_conservative.c b/drivers/cpufreq/cpufreq_conservative.c
index b9d7f14..5d8e894 100644
--- a/drivers/cpufreq/cpufreq_conservative.c
+++ b/drivers/cpufreq/cpufreq_conservative.c
@@ -111,22 +111,57 @@ static void cs_check_cpu(int cpu, unsigned int load)
 	}
 }
 
-static void cs_dbs_timer(struct work_struct *work)
+static void cs_timer_update(struct cs_cpu_dbs_info_s *dbs_info, bool sample,
+			    struct delayed_work *dw)
 {
-	struct cs_cpu_dbs_info_s *dbs_info = container_of(work,
-			struct cs_cpu_dbs_info_s, cdbs.work.work);
 	unsigned int cpu = dbs_info->cdbs.cpu;
 	int delay = delay_for_sampling_rate(cs_tuners.sampling_rate);
 
+	if (sample)
+		dbs_check_cpu(&cs_dbs_data, cpu);
+
+	schedule_delayed_work_on(smp_processor_id(), dw, delay);
+}
+
+static void cs_timer_coordinated(struct cs_cpu_dbs_info_s *dbs_info_local,
+				 struct delayed_work *dw)
+{
+	struct cs_cpu_dbs_info_s *dbs_info;
+	ktime_t time_now;
+	s64 delta_us;
+	bool sample = true;
+
+	/* use leader CPU's dbs_info */
+	dbs_info = &per_cpu(cs_cpu_dbs_info, dbs_info_local->cdbs.cpu);
 	mutex_lock(&dbs_info->cdbs.timer_mutex);
 
-	dbs_check_cpu(&cs_dbs_data, cpu);
+	time_now = ktime_get();
+	delta_us = ktime_us_delta(time_now, dbs_info->cdbs.time_stamp);
 
-	schedule_delayed_work_on(smp_processor_id(), &dbs_info->cdbs.work,
-			delay);
+	/* Do nothing if we recently have sampled */
+	if (delta_us < (s64)(cs_tuners.sampling_rate / 2))
+		sample = false;
+	else
+		dbs_info->cdbs.time_stamp = time_now;
+
+	cs_timer_update(dbs_info, sample, dw);
 	mutex_unlock(&dbs_info->cdbs.timer_mutex);
 }
 
+static void cs_dbs_timer(struct work_struct *work)
+{
+	struct delayed_work *dw = to_delayed_work(work);
+	struct cs_cpu_dbs_info_s *dbs_info = container_of(work,
+			struct cs_cpu_dbs_info_s, cdbs.work.work);
+
+	if (dbs_sw_coordinated_cpus(&dbs_info->cdbs)) {
+		cs_timer_coordinated(dbs_info, dw);
+	} else {
+		mutex_lock(&dbs_info->cdbs.timer_mutex);
+		cs_timer_update(dbs_info, true, dw);
+		mutex_unlock(&dbs_info->cdbs.timer_mutex);
+	}
+}
 static int dbs_cpufreq_notifier(struct notifier_block *nb, unsigned long val,
 				void *data)
 {