| Message ID | 20191029164438.17012-12-ulf.hansson@linaro.org |
|---|---|
| State | Not Applicable, archived |
| Series | cpuidle: psci: Support hierarchical CPU arrangement |
On Tue, Oct 29, 2019 at 05:44:36PM +0100, Ulf Hansson wrote:
> In case we have succeeded in attaching a CPU to its PM domain, let's
> deploy runtime PM support for the corresponding attached device, to allow
> the CPU to be power-managed accordingly.
>
> To set the triggering point for when runtime PM reference counting should
> be done, let's store the index of the deepest idle state for the CPU in
> the per-CPU struct. Then compare this index against the selected idle
> state index when entering idle, to determine whether runtime PM reference
> counting is needed or not.
>
> Note that, from the hierarchical point of view, there may be good reasons
> to do runtime PM reference counting even for shallower idle states, but at
> this point this isn't supported, mainly due to limitations set by the
> generic PM domain.
>

Looks much better now with psci_enter_domain_state split out as a separate
function.

--
Regards,
Sudeep
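The split Sudeep refers to keeps the PM-domain path in its own helper, psci_enter_domain_state(), while CPUs without an attached domain keep entering the plain per-CPU PSCI state. A minimal sketch of how the idle-entry callback could dispatch between the two paths; the wrapper name and the per-CPU variable are assumptions for illustration, only psci_enter_domain_state(), _psci_enter_state() and struct psci_cpuidle_data appear in the patch below:

/*
 * Illustrative sketch only; psci_enter_idle_state() and the per-CPU
 * variable are hypothetical names, not taken from this patch.
 */
static DEFINE_PER_CPU(struct psci_cpuidle_data, psci_cpuidle_data);

static int psci_enter_idle_state(struct cpuidle_device *dev,
				 struct cpuidle_driver *drv, int idx)
{
	struct psci_cpuidle_data *data = this_cpu_ptr(&psci_cpuidle_data);

	/* A CPU attached to a PM domain takes the domain-aware path. */
	if (data->dev)
		return psci_enter_domain_state(idx, data);

	/* Otherwise enter the plain, per-CPU PSCI state directly. */
	return _psci_enter_state(idx, data->psci_states[idx]);
}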
diff --git a/drivers/cpuidle/cpuidle-psci.c b/drivers/cpuidle/cpuidle-psci.c
index 4b0183d010e0..937a8e450251 100644
--- a/drivers/cpuidle/cpuidle-psci.c
+++ b/drivers/cpuidle/cpuidle-psci.c
@@ -16,6 +16,7 @@
 #include <linux/of.h>
 #include <linux/of_device.h>
 #include <linux/psci.h>
+#include <linux/pm_runtime.h>
 #include <linux/slab.h>
 
 #include <asm/cpuidle.h>
@@ -25,6 +26,7 @@ struct psci_cpuidle_data {
 	u32 *psci_states;
+	u32 rpm_state_id;
 	struct device *dev;
 };
@@ -50,13 +52,26 @@ static int psci_enter_domain_state(int idx, struct psci_cpuidle_data *data)
 {
 	int ret;
 	u32 *states = data->psci_states;
-	u32 state = psci_get_domain_state();
+	struct device *pd_dev = data->dev;
+	bool runtime_pm = (pd_dev && data->rpm_state_id == idx);
+	u32 state;
 
+	/*
+	 * Do runtime PM if we are using the hierarchical CPU topology, but only
+	 * when cpuidle has selected the deepest idle state for the CPU.
+	 */
+	if (runtime_pm)
+		pm_runtime_put_sync_suspend(pd_dev);
+
+	state = psci_get_domain_state();
 	if (!state)
 		state = states[idx];
 
 	ret = _psci_enter_state(idx, state);
 
+	if (runtime_pm)
+		pm_runtime_get_sync(pd_dev);
+
 	/* Clear the domain state to start fresh when back from idle. */
 	psci_set_domain_state(0);
 	return ret;
@@ -160,6 +175,7 @@ static int __init psci_dt_cpu_init_idle(struct device_node *cpu_node,
 	}
 
 	data->dev = dev;
+	data->rpm_state_id = state_count - 1;
 
 	/* Idle states parsed correctly, store them in the per-cpu struct. */
 	data->psci_states = psci_states;
In case we have succeeded in attaching a CPU to its PM domain, let's deploy
runtime PM support for the corresponding attached device, to allow the CPU to
be power-managed accordingly.

To set the triggering point for when runtime PM reference counting should be
done, let's store the index of the deepest idle state for the CPU in the
per-CPU struct. Then compare this index against the selected idle state index
when entering idle, to determine whether runtime PM reference counting is
needed or not.

Note that, from the hierarchical point of view, there may be good reasons to
do runtime PM reference counting even for shallower idle states, but at this
point this isn't supported, mainly due to limitations set by the generic PM
domain.

Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
---

Changes in v2:
	- Rebased.

---
 drivers/cpuidle/cpuidle-psci.c | 18 +++++++++++++++++-
 1 file changed, 17 insertions(+), 1 deletion(-)
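Because the change is spread over several hunks, the resulting function is easier to review in one piece. A rough reconstruction of psci_enter_domain_state() with this patch applied, assembled from the hunks above; context lines that are not visible in the diff are assumed:

/* Reconstructed from the diff for readability; not a verbatim copy of the file. */
static int psci_enter_domain_state(int idx, struct psci_cpuidle_data *data)
{
	int ret;
	u32 *states = data->psci_states;
	struct device *pd_dev = data->dev;
	/* Only the deepest idle state triggers runtime PM reference counting. */
	bool runtime_pm = (pd_dev && data->rpm_state_id == idx);
	u32 state;

	/* Drop the reference so genpd is allowed to power off the CPU's domain. */
	if (runtime_pm)
		pm_runtime_put_sync_suspend(pd_dev);

	/* Use the domain-selected state if one was picked, else the CPU's own state. */
	state = psci_get_domain_state();
	if (!state)
		state = states[idx];

	ret = _psci_enter_state(idx, state);

	/* Back from idle: retake the reference before doing anything else. */
	if (runtime_pm)
		pm_runtime_get_sync(pd_dev);

	/* Clear the domain state to start fresh when back from idle. */
	psci_set_domain_state(0);
	return ret;
}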