From patchwork Mon Nov 26 16:39:53 2012
X-Patchwork-Submitter: Fabio Baltieri
X-Patchwork-Id: 1803101
From: Fabio Baltieri
To: "Rafael J. Wysocki", cpufreq@vger.kernel.org, linux-pm@vger.kernel.org
Cc: Rickard Andersson, Vincent Guittot, Linus Walleij, Lee Jones,
 linux-kernel@vger.kernel.org, Fabio Baltieri
Subject: [PATCH 2/5] cpufreq: start/stop cpufreq timers on cpu hotplug
Date: Mon, 26 Nov 2012 17:39:53 +0100
Message-Id: <1353947996-26723-3-git-send-email-fabio.baltieri@linaro.org>
X-Mailer: git-send-email 1.7.12.1
In-Reply-To: <1353947996-26723-1-git-send-email-fabio.baltieri@linaro.org>
References: <1353947996-26723-1-git-send-email-fabio.baltieri@linaro.org>

Add a CPU notifier to start and stop the individual core timers on CPU
hotplug events when running on CPUs with software-coordinated frequency.
Signed-off-by: Fabio Baltieri
---
 drivers/cpufreq/cpufreq_governor.c | 51 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 51 insertions(+)

diff --git a/drivers/cpufreq/cpufreq_governor.c b/drivers/cpufreq/cpufreq_governor.c
index a00f02d..b7c1f89 100644
--- a/drivers/cpufreq/cpufreq_governor.c
+++ b/drivers/cpufreq/cpufreq_governor.c
@@ -25,9 +25,12 @@
 #include
 #include
 #include
+#include <linux/cpu.h>
 
 #include "cpufreq_governor.h"
 
+static DEFINE_PER_CPU(struct dbs_data *, cpu_cur_dbs);
+
 static inline u64 get_cpu_idle_time_jiffy(unsigned int cpu, u64 *wall)
 {
         u64 idle_time;
@@ -193,6 +196,46 @@ static inline void dbs_timer_exit(struct cpu_dbs_common_info *cdbs)
         cancel_delayed_work_sync(&cdbs->work);
 }
 
+static int __cpuinit cpu_callback(struct notifier_block *nfb,
+                                  unsigned long action, void *hcpu)
+{
+        unsigned int cpu = (unsigned long)hcpu;
+        struct device *cpu_dev = get_cpu_device(cpu);
+        struct dbs_data *dbs_data = per_cpu(cpu_cur_dbs, cpu);
+        struct cpu_dbs_common_info *cpu_cdbs = dbs_data->get_cpu_cdbs(cpu);
+        unsigned int sampling_rate;
+
+        if (dbs_data->governor == GOV_CONSERVATIVE) {
+                struct cs_dbs_tuners *cs_tuners = dbs_data->tuners;
+                sampling_rate = cs_tuners->sampling_rate;
+        } else {
+                struct od_dbs_tuners *od_tuners = dbs_data->tuners;
+                sampling_rate = od_tuners->sampling_rate;
+        }
+
+        if (cpu_dev) {
+                switch (action) {
+                case CPU_ONLINE:
+                case CPU_ONLINE_FROZEN:
+                case CPU_DOWN_FAILED:
+                case CPU_DOWN_FAILED_FROZEN:
+                        dbs_timer_init(dbs_data, cpu_cdbs,
+                                       sampling_rate, cpu);
+                        break;
+                case CPU_DOWN_PREPARE:
+                case CPU_DOWN_PREPARE_FROZEN:
+                        dbs_timer_exit(cpu_cdbs);
+                        break;
+                }
+        }
+
+        return NOTIFY_OK;
+}
+
+static struct notifier_block __refdata ondemand_cpu_notifier = {
+        .notifier_call = cpu_callback,
+};
+
 int cpufreq_governor_dbs(struct dbs_data *dbs_data,
                 struct cpufreq_policy *policy, unsigned int event)
 {
@@ -304,7 +347,11 @@ second_time:
                         j_cdbs = dbs_data->get_cpu_cdbs(j);
 
                         dbs_timer_init(dbs_data, j_cdbs, *sampling_rate, j);
+
+                        per_cpu(cpu_cur_dbs, j) = dbs_data;
                 }
+
+                register_hotcpu_notifier(&ondemand_cpu_notifier);
         } else {
                 dbs_timer_init(dbs_data, cpu_cdbs, *sampling_rate, cpu);
         }
@@ -315,11 +362,15 @@ second_time:
                 cs_dbs_info->enable = 0;
 
                 if (dbs_sw_coordinated_cpus(cpu_cdbs)) {
+                        unregister_hotcpu_notifier(&ondemand_cpu_notifier);
+
                         for_each_cpu(j, policy->cpus) {
                                 struct cpu_dbs_common_info *j_cdbs;
                                 j_cdbs = dbs_data->get_cpu_cdbs(j);
 
                                 dbs_timer_exit(j_cdbs);
+
+                                per_cpu(cpu_cur_dbs, j) = NULL;
                         }
                 } else {
                         dbs_timer_exit(cpu_cdbs);
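
For reference, the CPU hotplug notifier pattern used above reduces to the
minimal sketch below: a notifier_block whose callback reacts to
online/offline transitions, registered with register_hotcpu_notifier().
This is only an illustration of the mechanism, not part of the patch; the
callback name and pr_info() messages are placeholders.

#include <linux/cpu.h>
#include <linux/init.h>
#include <linux/notifier.h>
#include <linux/printk.h>

/* Illustrative sketch of the hotplug notifier mechanism; names and
 * messages here are placeholders, not taken from the patch above. */
static int __cpuinit example_cpu_callback(struct notifier_block *nfb,
                                          unsigned long action, void *hcpu)
{
        unsigned int cpu = (unsigned long)hcpu;

        switch (action) {
        case CPU_ONLINE:
        case CPU_ONLINE_FROZEN:
        case CPU_DOWN_FAILED:
        case CPU_DOWN_FAILED_FROZEN:
                /* CPU is (or stays) online: (re)start its per-core timer */
                pr_info("cpu%u online: start governor timer\n", cpu);
                break;
        case CPU_DOWN_PREPARE:
        case CPU_DOWN_PREPARE_FROZEN:
                /* CPU is about to go offline: stop its per-core timer */
                pr_info("cpu%u going down: stop governor timer\n", cpu);
                break;
        }

        return NOTIFY_OK;
}

static struct notifier_block __refdata example_cpu_notifier = {
        .notifier_call = example_cpu_callback,
};

/* The notifier is registered when the governor starts on a set of
 * SW-coordinated CPUs and unregistered when it stops:
 *
 *        register_hotcpu_notifier(&example_cpu_notifier);
 *        ...
 *        unregister_hotcpu_notifier(&example_cpu_notifier);
 */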