From patchwork Wed Feb 27 19:58:36 2019
X-Patchwork-Submitter: Ulf Hansson
X-Patchwork-Id: 10832249
From: Ulf Hansson
To: "Rafael J. Wysocki", linux-pm@vger.kernel.org
Cc: Frederic Weisbecker, Thomas Gleixner, Sudeep Holla, Lorenzo Pieralisi,
    Mark Rutland, Daniel Lezcano,
N" , Stephen Boyd , Tony Lindgren , Kevin Hilman , Lina Iyer , Ulf Hansson , Viresh Kumar , Vincent Guittot , Geert Uytterhoeven , linux-arm-kernel@lists.infradead.org, linux-arm-msm@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH v12 4/4] PM / Domains: Add genpd governor for CPUs Date: Wed, 27 Feb 2019 20:58:36 +0100 Message-Id: <20190227195836.24739-5-ulf.hansson@linaro.org> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20190227195836.24739-1-ulf.hansson@linaro.org> References: <20190227195836.24739-1-ulf.hansson@linaro.org> Sender: linux-pm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-pm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP As it's now perfectly possible that a PM domain managed by genpd contains devices belonging to CPUs, we should start to take into account the residency values for the idle states during the state selection process. The residency value specifies the minimum duration of time, the CPU or a group of CPUs, needs to spend in an idle state to not waste energy entering it. To deal with this, let's add a new genpd governor, pm_domain_cpu_gov, that may be used for a PM domain that have CPU devices attached or if the CPUs are attached through subdomains. The new governor computes the minimum expected idle duration time for the online CPUs being attached to the PM domain and its subdomains. Then in the state selection process, trying the deepest state first, it verifies that the idle duration time satisfies the state's residency value. It should be noted that, when computing the minimum expected idle duration time, we use the information about the next timer/tick that is stored in the per CPU variable, cpuidle_devices, for the related CPUs. Future wise, this deserves to be improved, as there are obviously more reasons to why a CPU may be woken up from idle. Cc: Lina Iyer Co-developed-by: Lina Iyer Signed-off-by: Ulf Hansson Acked-by: Daniel Lezcano --- Changes in v12: - Rebased. --- drivers/base/power/domain_governor.c | 62 +++++++++++++++++++++++++++- include/linux/pm_domain.h | 3 ++ 2 files changed, 64 insertions(+), 1 deletion(-) diff --git a/drivers/base/power/domain_governor.c b/drivers/base/power/domain_governor.c index 99896fbf18e4..fb8fd21e69a7 100644 --- a/drivers/base/power/domain_governor.c +++ b/drivers/base/power/domain_governor.c @@ -10,6 +10,9 @@ #include #include #include +#include +#include +#include static int dev_update_qos_constraint(struct device *dev, void *data) { @@ -211,8 +214,10 @@ static bool default_power_down_ok(struct dev_pm_domain *pd) struct generic_pm_domain *genpd = pd_to_genpd(pd); struct gpd_link *link; - if (!genpd->max_off_time_changed) + if (!genpd->max_off_time_changed) { + genpd->state_idx = genpd->cached_power_down_state_idx; return genpd->cached_power_down_ok; + } /* * We have to invalidate the cached results for the masters, so @@ -237,6 +242,7 @@ static bool default_power_down_ok(struct dev_pm_domain *pd) genpd->state_idx--; } + genpd->cached_power_down_state_idx = genpd->state_idx; return genpd->cached_power_down_ok; } @@ -245,6 +251,55 @@ static bool always_on_power_down_ok(struct dev_pm_domain *domain) return false; } +static bool cpu_power_down_ok(struct dev_pm_domain *pd) +{ + struct generic_pm_domain *genpd = pd_to_genpd(pd); + struct cpuidle_device *dev; + ktime_t domain_wakeup; + s64 idle_duration_ns; + int cpu, i; + + /* Validate dev PM QoS constraints. 
+	if (!default_power_down_ok(pd))
+		return false;
+
+	if (!(genpd->flags & GENPD_FLAG_CPU_DOMAIN))
+		return true;
+
+	/*
+	 * Find the next wakeup for any of the online CPUs within the PM domain
+	 * and its subdomains. Note, we only need the genpd->cpus, as it already
+	 * contains a mask of all CPUs from subdomains.
+	 */
+	domain_wakeup = ktime_set(KTIME_SEC_MAX, 0);
+	for_each_cpu_and(cpu, genpd->cpus, cpu_online_mask) {
+		dev = per_cpu(cpuidle_devices, cpu);
+		if (dev && ktime_before(dev->next_hrtimer, domain_wakeup))
+			domain_wakeup = dev->next_hrtimer;
+	}
+
+	/* The minimum idle duration is from now - until the next wakeup. */
+	idle_duration_ns = ktime_to_ns(ktime_sub(domain_wakeup, ktime_get()));
+	if (idle_duration_ns <= 0)
+		return false;
+
+	/*
+	 * Find the deepest idle state that has its residency value satisfied
+	 * and by also taking into account the power off latency for the state.
+	 * Start at the state picked by the dev PM QoS constraint validation.
+	 */
+	i = genpd->state_idx;
+	do {
+		if (idle_duration_ns >= (genpd->states[i].residency_ns +
+				genpd->states[i].power_off_latency_ns)) {
+			genpd->state_idx = i;
+			return true;
+		}
+	} while (--i >= 0);
+
+	return false;
+}
+
 struct dev_power_governor simple_qos_governor = {
 	.suspend_ok = default_suspend_ok,
 	.power_down_ok = default_power_down_ok,
@@ -257,3 +312,8 @@ struct dev_power_governor pm_domain_always_on_gov = {
 	.power_down_ok = always_on_power_down_ok,
 	.suspend_ok = default_suspend_ok,
 };
+
+struct dev_power_governor pm_domain_cpu_gov = {
+	.suspend_ok = default_suspend_ok,
+	.power_down_ok = cpu_power_down_ok,
+};
diff --git a/include/linux/pm_domain.h b/include/linux/pm_domain.h
index a6e251fe9deb..ae7061556a26 100644
--- a/include/linux/pm_domain.h
+++ b/include/linux/pm_domain.h
@@ -118,6 +118,7 @@ struct generic_pm_domain {
 	s64 max_off_time_ns;	/* Maximum allowed "suspended" time. */
 	bool max_off_time_changed;
 	bool cached_power_down_ok;
+	bool cached_power_down_state_idx;
 	int (*attach_dev)(struct generic_pm_domain *domain,
 			  struct device *dev);
 	void (*detach_dev)(struct generic_pm_domain *domain,
@@ -202,6 +203,7 @@ int dev_pm_genpd_set_performance_state(struct device *dev, unsigned int state);
 
 extern struct dev_power_governor simple_qos_governor;
 extern struct dev_power_governor pm_domain_always_on_gov;
+extern struct dev_power_governor pm_domain_cpu_gov;
 #else
 
 static inline struct generic_pm_domain_data *dev_gpd_data(struct device *dev)
@@ -245,6 +247,7 @@ static inline int dev_pm_genpd_set_performance_state(struct device *dev,
 
 #define simple_qos_governor		(*(struct dev_power_governor *)(NULL))
 #define pm_domain_always_on_gov		(*(struct dev_power_governor *)(NULL))
+#define pm_domain_cpu_gov		(*(struct dev_power_governor *)(NULL))
 #endif
 
 #ifdef CONFIG_PM_GENERIC_DOMAINS_SLEEP
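
For context, below is a minimal, hypothetical sketch of how a platform might
wire up the new governor; it is not part of this patch. Only pm_genpd_init(),
GENPD_FLAG_CPU_DOMAIN (added earlier in this series) and pm_domain_cpu_gov are
real interfaces here; the foo_* names, the init hook and the latency/residency
numbers are illustrative placeholders.

/*
 * Hypothetical usage sketch (not part of this patch): a platform registers a
 * CPU PM domain and lets the new governor select its idle states. The foo_*
 * names and the latency/residency values are placeholders.
 */
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/pm_domain.h>

static struct genpd_power_state foo_cluster_states[] = {
	{
		/* Deepest cluster idle state; numbers are made up. */
		.power_off_latency_ns	= 500000,
		.power_on_latency_ns	= 500000,
		.residency_ns		= 2000000,
	},
};

static struct generic_pm_domain foo_cluster_pd = {
	.name		= "foo-cluster",
	/* Tell cpu_power_down_ok() to consider the CPUs' next wakeups. */
	.flags		= GENPD_FLAG_CPU_DOMAIN,
	.states		= foo_cluster_states,
	.state_count	= ARRAY_SIZE(foo_cluster_states),
};

static int __init foo_cluster_pd_init(void)
{
	/* Register the domain with the CPU-aware governor added above. */
	return pm_genpd_init(&foo_cluster_pd, &pm_domain_cpu_gov, false);
}
core_initcall(foo_cluster_pd_init);

With such a setup, cpu_power_down_ok() only consults the per-CPU next_hrtimer
values once the CPU devices have been attached to the domain (directly or via
subdomains), which is what populates genpd->cpus.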