
[RFC,v1,4/8] drivers: qcom: cpu_pd: add cpu power domain support using genpd

Message ID 1539206455-29342-5-git-send-email-rplsssn@codeaurora.org (mailing list archive)
State RFC
Delegated to: Andy Gross
Series: drivers: qcom: Add cpu power domain for SDM845

Commit Message

Raju P.L.S.S.S.N Oct. 10, 2018, 9:20 p.m. UTC
RPMH-based targets require that the sleep and wake state request votes
be sent during system low power mode entry. The votes help reduce power
consumption when the AP is not using the resources. The votes sent by
the clients are cached in the RPMH controller and need to be flushed
when the last CPU enters a low power mode. So add a CPU power domain,
using the Linux generic power domain infrastructure, to perform the
necessary tasks as part of domain power down.

Suggested-by: Lina Iyer <ilina@codeaurora.org>
Signed-off-by: Raju P.L.S.S.S.N <rplsssn@codeaurora.org>
---
 drivers/soc/qcom/Kconfig  |   9 ++++
 drivers/soc/qcom/Makefile |   1 +
 drivers/soc/qcom/cpu_pd.c | 104 ++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 114 insertions(+)
 create mode 100644 drivers/soc/qcom/cpu_pd.c

Comments

Sudeep Holla Oct. 11, 2018, 11:13 a.m. UTC | #1
On Thu, Oct 11, 2018 at 02:50:51AM +0530, Raju P.L.S.S.S.N wrote:
> RPMH based targets require that the sleep and wake state request votes
> be sent during system low power mode entry. The votes help reduce the
> power consumption when the AP is not using them. The votes sent by the
> clients are cached in RPMH controller and needs to be flushed when the
> last cpu enters low power mode. So add cpu power domain using Linux
> generic power domain infrastructure to perform necessary tasks as part
> of domain power down.
>

You seem to have either randomly chosen just 3 patches from Lina/Ulf's
CPU genpd series, or this series doesn't entirely depend on it?

If the latter, how does this work with PSCI CPU_SUSPEND operations?

And why can't this be part of the PSCI firmware implementation? Only the
PSCI firmware needs to know whether the RPMH votes need to be flushed or
not. So, honestly, I don't see the need for this in Linux.

--
Regards,
Sudeep
Ulf Hansson Oct. 11, 2018, 3:27 p.m. UTC | #2
On 11 October 2018 at 13:13, Sudeep Holla <sudeep.holla@arm.com> wrote:
> On Thu, Oct 11, 2018 at 02:50:51AM +0530, Raju P.L.S.S.S.N wrote:
>> RPMH based targets require that the sleep and wake state request votes
>> be sent during system low power mode entry. The votes help reduce the
>> power consumption when the AP is not using them. The votes sent by the
>> clients are cached in RPMH controller and needs to be flushed when the
>> last cpu enters low power mode. So add cpu power domain using Linux
>> generic power domain infrastructure to perform necessary tasks as part
>> of domain power down.
>>
>
> You seem to have either randomly chosen just 3 patches from Lina/Ulf's
> CPU genpd series or this series doesn't entirely depend on it ?

Yep, it's not easy to follow. But I do understand what you are trying to do here.

>
> If latter, how does this work with PSCI CPU_SUSPEND operations ?
>
> And why this can be part of PSCI firmware implementation. Only PSCI
> firmware needs if RPMH votes need to be flushed or not. So, honestly
> I don't see the need for this in Linux.

I do think there is a clear need for this in Linux. More precisely,
the PSCI firmware has knowledge solely about CPUs (and clusters of
CPUs), but not about other shared resources/devices present on the
SoC.

What Raju is trying to do here is to manage those resources which need
special treatment before and after the CPU (likely the cluster) goes
idle and returns from idle.

One question here though: what particular idle state is relevant for
the QCOM SoC to take last-man actions for? I assume it's only cluster
idle states, and not CPU idle states, no? Raju, can you please
clarify?

Historically, the typical solution has been to use the
cpu_cluster_pm_enter|exit() notifiers. Those could potentially be
replaced by instead building a hierarchical topology, using
master/subdomain of genpd/"power-domains", along the lines of what
Raju is doing. However, I am not sure if that is the correct approach;
at least we need to make sure it models the HW in DT correctly.
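
For reference, a minimal sketch of how such a master/subdomain link
could be expressed with the existing genpd API (the domain pointers and
the helper name are hypothetical, not taken from this series):

#include <linux/pm_domain.h>

/*
 * Make the PSCI cluster domain a subdomain of the RPMH-backed CPU
 * power domain, so the parent's ->power_off() runs only once the
 * whole cluster has powered down.
 */
static int link_cluster_to_rpmh_pd(struct generic_pm_domain *rpmh_pd,
				   struct generic_pm_domain *cluster_pd)
{
	return pm_genpd_add_subdomain(rpmh_pd, cluster_pd);
}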

Kind regards
Uffe
Sudeep Holla Oct. 11, 2018, 3:59 p.m. UTC | #3
On Thu, Oct 11, 2018 at 05:27:59PM +0200, Ulf Hansson wrote:
> On 11 October 2018 at 13:13, Sudeep Holla <sudeep.holla@arm.com> wrote:
> > On Thu, Oct 11, 2018 at 02:50:51AM +0530, Raju P.L.S.S.S.N wrote:
> >> RPMH based targets require that the sleep and wake state request votes
> >> be sent during system low power mode entry. The votes help reduce the
> >> power consumption when the AP is not using them. The votes sent by the
> >> clients are cached in RPMH controller and needs to be flushed when the
> >> last cpu enters low power mode. So add cpu power domain using Linux
> >> generic power domain infrastructure to perform necessary tasks as part
> >> of domain power down.
> >>
> >
> > You seem to have either randomly chosen just 3 patches from Lina/Ulf's
> > CPU genpd series or this series doesn't entirely depend on it ?
> 
> Yep, it not easy to follow. But I do understand what you are trying to do here.
> 
> >
> > If latter, how does this work with PSCI CPU_SUSPEND operations ?
> >
> > And why this can be part of PSCI firmware implementation. Only PSCI
> > firmware needs if RPMH votes need to be flushed or not. So, honestly
> > I don't see the need for this in Linux.
> 
> I do think there is clear need for this in Linux. More precisely,
> since the PSCI firmware have knowledge solely about CPUs (and clusters
> of CPUs), but not about other shared resources/devices present on the
> SoC.
>

I disagree. Even with OSI, you indicate the cluster power off through
the PSCI CPU_SUSPEND call. If, for any async wakeup reason, the
firmware decides not to enter cluster OFF, then it may skip flushing
the RPMH vote. So doing it in PSCI is more correct and elegant, to
avoid such corner cases.

> What Raju is trying to do here, is to manage those resources which
> needs special treatment, before and after the CPU (likely cluster) is
> going idle and returns from idle.
>

OK, I get that, but why is Linux better than PSCI? I have my reasoning
above.

> One question here though, what particular idle state is relevant for
> the QCOM SoC to take last-man-actions for? I assume it's only cluster
> idle states, and not about cpu idle states, no? Raju, can you please
> clarify?
>

I assume so. I did see some comment or commit message to indicate the
same.

> Historically, the typical solution have been to use the
> cpu_cluster_pm_enter|exit() notifiers. Those could potentially be
> replaced by instead building a hierarchical topology, using
> master/subdomain of genpd/"power-domains", along the lines of what
> Raju is doing. However, I am not sure if that is the correct approach,
> at least we need to make sure it models the HW in DT correctly.
>

Indeed. I am not sure how to represent both PSCI and these power
domains. Though I believe the latter is not required at all.

--
Regards,
Sudeep
Ulf Hansson Oct. 12, 2018, 9:23 a.m. UTC | #4
On 11 October 2018 at 17:59, Sudeep Holla <sudeep.holla@arm.com> wrote:
> On Thu, Oct 11, 2018 at 05:27:59PM +0200, Ulf Hansson wrote:
>> On 11 October 2018 at 13:13, Sudeep Holla <sudeep.holla@arm.com> wrote:
>> > On Thu, Oct 11, 2018 at 02:50:51AM +0530, Raju P.L.S.S.S.N wrote:
>> >> RPMH based targets require that the sleep and wake state request votes
>> >> be sent during system low power mode entry. The votes help reduce the
>> >> power consumption when the AP is not using them. The votes sent by the
>> >> clients are cached in RPMH controller and needs to be flushed when the
>> >> last cpu enters low power mode. So add cpu power domain using Linux
>> >> generic power domain infrastructure to perform necessary tasks as part
>> >> of domain power down.
>> >>
>> >
>> > You seem to have either randomly chosen just 3 patches from Lina/Ulf's
>> > CPU genpd series or this series doesn't entirely depend on it ?
>>
>> Yep, it not easy to follow. But I do understand what you are trying to do here.
>>
>> >
>> > If latter, how does this work with PSCI CPU_SUSPEND operations ?
>> >
>> > And why this can be part of PSCI firmware implementation. Only PSCI
>> > firmware needs if RPMH votes need to be flushed or not. So, honestly
>> > I don't see the need for this in Linux.
>>
>> I do think there is clear need for this in Linux. More precisely,
>> since the PSCI firmware have knowledge solely about CPUs (and clusters
>> of CPUs), but not about other shared resources/devices present on the
>> SoC.
>>
>
> I disagree. Even with OSI, you indicate the cluster power off though
> PSCI CPU_SUSPEND call. If for any async wakeup reasons, firmware decides
> not to enter cluster OFF, then it may skip flushing RPMH vote. So doing
> it in PSCI is more correct and elegant to avoid such corner cases.
>
>> What Raju is trying to do here, is to manage those resources which
>> needs special treatment, before and after the CPU (likely cluster) is
>> going idle and returns from idle.
>>
>
> OK I get that, but why is Linux better than PSCI. I have my reasoning
> above.

I assume Lina, in the other thread from Raju, has provided you the
details of why PSCI is not the place to do this and why Linux needs to
be involved. In any case, let's continue that discussion in that thread
rather than here.

>
>> One question here though, what particular idle state is relevant for
>> the QCOM SoC to take last-man-actions for? I assume it's only cluster
>> idle states, and not about cpu idle states, no? Raju, can you please
>> clarify?
>>
>
> I assume so. I did see some comment or commit message to indicate the
> same.
>
>> Historically, the typical solution have been to use the
>> cpu_cluster_pm_enter|exit() notifiers. Those could potentially be
>> replaced by instead building a hierarchical topology, using
>> master/subdomain of genpd/"power-domains", along the lines of what
>> Raju is doing. However, I am not sure if that is the correct approach,
>> at least we need to make sure it models the HW in DT correctly.
>>
>
> Indeed. I am not sure how to represent both PSCI and this power domains ?
> Though I believe the latter is not required at all.

After my re-work of the PSCI power domain series, I believe the
power-domain that Raju is adding should then be modeled as the master
power-domain of the PSCI *cluster* power domain. But, as stated, I am
not sure if it's the correct way to model the HW/topology. Maybe it
is.

The other option is to explore cpu_cluster_pm_enter|exit(), which
today is the only viable solution in the kernel. In principle, we would
then need to call cpu_cluster_pm_enter() from the PSCI cluster PM
domain's genpd ->power_off() callback, and cpu_cluster_pm_exit() from
the ->power_on() callback.
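
A minimal sketch of that bridging, assuming a genpd provider backs the
PSCI cluster domain (the callback names are illustrative, not code from
any posted series):

#include <linux/cpu_pm.h>
#include <linux/pm_domain.h>

static int psci_cluster_pd_power_off(struct generic_pm_domain *pd)
{
	/*
	 * Last CPU of the cluster is entering idle: run the
	 * CPU_CLUSTER_PM_ENTER notifier chain (last-man actions).
	 */
	return cpu_cluster_pm_enter();
}

static int psci_cluster_pd_power_on(struct generic_pm_domain *pd)
{
	/* First CPU of the cluster has woken up: run CPU_CLUSTER_PM_EXIT. */
	return cpu_cluster_pm_exit();
}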

Or maybe we simply need something entirely new, like genpd PM
domain on/off notifiers, which may scale better. That has even been
suggested on the mailing list, a long time ago.

Kind regards
Uffe
Sudeep Holla Oct. 12, 2018, 2:33 p.m. UTC | #5
On Thu, Oct 11, 2018 at 02:50:51AM +0530, Raju P.L.S.S.S.N wrote:
> RPMH based targets require that the sleep and wake state request votes
> be sent during system low power mode entry. The votes help reduce the
> power consumption when the AP is not using them. The votes sent by the
> clients are cached in RPMH controller and needs to be flushed when the
> last cpu enters low power mode. So add cpu power domain using Linux
> generic power domain infrastructure to perform necessary tasks as part
> of domain power down.
>
> Suggested-by: Lina Iyer <ilina@codeaurora.org>
> Signed-off-by: Raju P.L.S.S.S.N <rplsssn@codeaurora.org>
> ---
>  drivers/soc/qcom/Kconfig  |   9 ++++
>  drivers/soc/qcom/Makefile |   1 +
>  drivers/soc/qcom/cpu_pd.c | 104 ++++++++++++++++++++++++++++++++++++++++++++++
>  3 files changed, 114 insertions(+)
>  create mode 100644 drivers/soc/qcom/cpu_pd.c
>
> diff --git a/drivers/soc/qcom/Kconfig b/drivers/soc/qcom/Kconfig
> index ba79b60..91e8b3b 100644
> --- a/drivers/soc/qcom/Kconfig
> +++ b/drivers/soc/qcom/Kconfig
> @@ -95,6 +95,7 @@ config QCOM_RMTFS_MEM
>  config QCOM_RPMH
>  	bool "Qualcomm RPM-Hardened (RPMH) Communication"
>  	depends on ARCH_QCOM && ARM64 && OF || COMPILE_TEST
> +	select QCOM_CPU_PD
>  	help
>  	  Support for communication with the hardened-RPM blocks in
>  	  Qualcomm Technologies Inc (QTI) SoCs. RPMH communication uses an
> @@ -102,6 +103,14 @@ config QCOM_RPMH
>  	  of hardware components aggregate requests for these resources and
>  	  help apply the aggregated state on the resource.
>
> +config QCOM_CPU_PD
> +    bool "Qualcomm cpu power domain driver"
> +    depends on QCOM_RPMH && PM_GENERIC_DOMAINS || COMPILE_TEST
> +    help
> +	  Support for QCOM platform cpu power management to perform tasks
> +	  necessary while application processor votes for deeper modes so that
> +	  the firmware can enter SoC level low power modes to save power.
> +
>  config QCOM_SMEM
>  	tristate "Qualcomm Shared Memory Manager (SMEM)"
>  	depends on ARCH_QCOM
> diff --git a/drivers/soc/qcom/Makefile b/drivers/soc/qcom/Makefile
> index f25b54c..57a1b0e 100644
> --- a/drivers/soc/qcom/Makefile
> +++ b/drivers/soc/qcom/Makefile
> @@ -12,6 +12,7 @@ obj-$(CONFIG_QCOM_RMTFS_MEM)	+= rmtfs_mem.o
>  obj-$(CONFIG_QCOM_RPMH)		+= qcom_rpmh.o
>  qcom_rpmh-y			+= rpmh-rsc.o
>  qcom_rpmh-y			+= rpmh.o
> +obj-$(CONFIG_QCOM_CPU_PD) += cpu_pd.o
>  obj-$(CONFIG_QCOM_SMD_RPM)	+= smd-rpm.o
>  obj-$(CONFIG_QCOM_SMEM) +=	smem.o
>  obj-$(CONFIG_QCOM_SMEM_STATE) += smem_state.o
> diff --git a/drivers/soc/qcom/cpu_pd.c b/drivers/soc/qcom/cpu_pd.c
> new file mode 100644
> index 0000000..565c510
> --- /dev/null
> +++ b/drivers/soc/qcom/cpu_pd.c
> @@ -0,0 +1,104 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (c) 2018, The Linux Foundation. All rights reserved.
> + */
> +
> +#include <linux/of_platform.h>
> +#include <linux/pm_domain.h>
> +#include <linux/slab.h>
> +
> +#include <soc/qcom/rpmh.h>
> +
> +static struct device *cpu_pd_dev;
> +

This doesn't scale if you have 2 instances.

> +static int cpu_pd_power_off(struct generic_pm_domain *domain)
> +{
> +	if (rpmh_ctrlr_idle(cpu_pd_dev)) {

How is this expected to compile? I couldn't find any definition of this function.

> +		/* Flush the sleep/wake sets */
> +		rpmh_flush(cpu_pd_dev);

So it's just flushing the pending requests on the controller. The function
implementation carries a note that it's assumed to be called only from
system PM, and here we may call it in the CPU idle path. Is that fine?
If so, maybe the comment needs to be dropped.

Also, where exactly is the voting for the CPU happening in this path?

> +	} else {
> +		pr_debug("rpmh controller is busy\n");
> +		return -EBUSY;
> +	}
> +
> +	return 0;
> +}
> +
> +static int cpu_pm_domain_probe(struct platform_device *pdev)
> +{
> +	struct device *dev = &pdev->dev;
> +	struct device_node *np = dev->of_node;
> +	struct generic_pm_domain *cpu_pd;
> +	int ret = -EINVAL, cpu;
> +
> +	if (!np) {
> +		dev_err(dev, "device tree node not found\n");
> +		return -ENODEV;
> +	}
> +
> +	if (!of_find_property(np, "#power-domain-cells", NULL)) {
> +		pr_err("power-domain-cells not found\n");
> +		return -ENODEV;
> +	}
> +
> +	cpu_pd_dev = &pdev->dev;
> +	if (IS_ERR_OR_NULL(cpu_pd_dev))

Isn't this too late to check? You would already have crashed on
dev->of_node, so this check sounds pretty useless.

> +		return PTR_ERR(cpu_pd_dev);
> +
> +	cpu_pd = devm_kzalloc(dev, sizeof(*cpu_pd), GFP_KERNEL);
> +	if (!cpu_pd)
> +		return -ENOMEM;
> +
> +	cpu_pd->name = kasprintf(GFP_KERNEL, "%s", np->name);
> +	if (!cpu_pd->name)
> +		goto free_cpu_pd;
> +	cpu_pd->name = kbasename(cpu_pd->name);
> +	cpu_pd->power_off = cpu_pd_power_off;

If some kind of voting is done in power off, why is there nothing to
take care of that in pd_power_on, if it's per EL (linux/hyp/secure)?

--
Regards,
Sudeep
Raju P.L.S.S.S.N Oct. 12, 2018, 6:01 p.m. UTC | #6
On 10/12/2018 8:03 PM, Sudeep Holla wrote:
> On Thu, Oct 11, 2018 at 02:50:51AM +0530, Raju P.L.S.S.S.N wrote:
>> RPMH based targets require that the sleep and wake state request votes
>> be sent during system low power mode entry. The votes help reduce the
>> power consumption when the AP is not using them. The votes sent by the
>> clients are cached in RPMH controller and needs to be flushed when the
>> last cpu enters low power mode. So add cpu power domain using Linux
>> generic power domain infrastructure to perform necessary tasks as part
>> of domain power down.
>>
>> Suggested-by: Lina Iyer <ilina@codeaurora.org>
>> Signed-off-by: Raju P.L.S.S.S.N <rplsssn@codeaurora.org>
>> ---
>>   drivers/soc/qcom/Kconfig  |   9 ++++
>>   drivers/soc/qcom/Makefile |   1 +
>>   drivers/soc/qcom/cpu_pd.c | 104 ++++++++++++++++++++++++++++++++++++++++++++++
>>   3 files changed, 114 insertions(+)
>>   create mode 100644 drivers/soc/qcom/cpu_pd.c
>>
>> diff --git a/drivers/soc/qcom/Kconfig b/drivers/soc/qcom/Kconfig
>> index ba79b60..91e8b3b 100644
>> --- a/drivers/soc/qcom/Kconfig
>> +++ b/drivers/soc/qcom/Kconfig
>> @@ -95,6 +95,7 @@ config QCOM_RMTFS_MEM
>>   config QCOM_RPMH
>>   	bool "Qualcomm RPM-Hardened (RPMH) Communication"
>>   	depends on ARCH_QCOM && ARM64 && OF || COMPILE_TEST
>> +	select QCOM_CPU_PD
>>   	help
>>   	  Support for communication with the hardened-RPM blocks in
>>   	  Qualcomm Technologies Inc (QTI) SoCs. RPMH communication uses an
>> @@ -102,6 +103,14 @@ config QCOM_RPMH
>>   	  of hardware components aggregate requests for these resources and
>>   	  help apply the aggregated state on the resource.
>>
>> +config QCOM_CPU_PD
>> +    bool "Qualcomm cpu power domain driver"
>> +    depends on QCOM_RPMH && PM_GENERIC_DOMAINS || COMPILE_TEST
>> +    help
>> +	  Support for QCOM platform cpu power management to perform tasks
>> +	  necessary while application processor votes for deeper modes so that
>> +	  the firmware can enter SoC level low power modes to save power.
>> +
>>   config QCOM_SMEM
>>   	tristate "Qualcomm Shared Memory Manager (SMEM)"
>>   	depends on ARCH_QCOM
>> diff --git a/drivers/soc/qcom/Makefile b/drivers/soc/qcom/Makefile
>> index f25b54c..57a1b0e 100644
>> --- a/drivers/soc/qcom/Makefile
>> +++ b/drivers/soc/qcom/Makefile
>> @@ -12,6 +12,7 @@ obj-$(CONFIG_QCOM_RMTFS_MEM)	+= rmtfs_mem.o
>>   obj-$(CONFIG_QCOM_RPMH)		+= qcom_rpmh.o
>>   qcom_rpmh-y			+= rpmh-rsc.o
>>   qcom_rpmh-y			+= rpmh.o
>> +obj-$(CONFIG_QCOM_CPU_PD) += cpu_pd.o
>>   obj-$(CONFIG_QCOM_SMD_RPM)	+= smd-rpm.o
>>   obj-$(CONFIG_QCOM_SMEM) +=	smem.o
>>   obj-$(CONFIG_QCOM_SMEM_STATE) += smem_state.o
>> diff --git a/drivers/soc/qcom/cpu_pd.c b/drivers/soc/qcom/cpu_pd.c
>> new file mode 100644
>> index 0000000..565c510
>> --- /dev/null
>> +++ b/drivers/soc/qcom/cpu_pd.c
>> @@ -0,0 +1,104 @@
>> +// SPDX-License-Identifier: GPL-2.0
>> +/*
>> + * Copyright (c) 2018, The Linux Foundation. All rights reserved.
>> + */
>> +
>> +#include <linux/of_platform.h>
>> +#include <linux/pm_domain.h>
>> +#include <linux/slab.h>
>> +
>> +#include <soc/qcom/rpmh.h>
>> +
>> +static struct device *cpu_pd_dev;
>> +
> 
> This doesn't scale if you have 2 instances.

There would be one instance of this driver for this platform.
This platform has a single cluster and a single power domain which
includes all the CPUs. Even if there is more than one cluster (let's
say, later on), the top-level grouping of all the clusters will be
considered as a domain. In that case, if a hierarchical topological
representation is needed, the driver probe needs to be modified. The
naming might have led to the confusion. Should I change it to something
like top_level_pd_dev? Another way is to define the compatible string
as SoC specific, like "qcom,cpu-pm-domain-sdm845", but then there will
be multiple SoCs based on a single cluster, and a compatible string
needs to be added for each of them.




> 
>> +static int cpu_pd_power_off(struct generic_pm_domain *domain)
>> +{
>> +	if (rpmh_ctrlr_idle(cpu_pd_dev)) {
> 
> How is this expected to compile ? I couldn't find any instance of this.
> 
>> +		/* Flush the sleep/wake sets */
>> +		rpmh_flush(cpu_pd_dev);
> 
> So it's just flushing the pending requests on the controller. The function
> implementation carries a note that it's assumed to be called only from
> system PM and we may call it in cpu idle path here. Is that fine ?
> If so, may be the comment needs to be dropped.

Sure, we will change the comment.
When the RPMH driver was being developed, it referred to the top-level
power domain which includes all the CPUs as system PM.


> 
> Also, where exactly this voting for CPU is happening in this path ?
> 

This SoC uses PC mode and we are not voting for the CPU's idle state
here. We are just doing the power domain activities that are needed
when all the CPUs are down.



>> +	} else {
>> +		pr_debug("rpmh controller is busy\n");
>> +		return -EBUSY;
>> +	}
>> +
>> +	return 0;
>> +}
>> +
>> +static int cpu_pm_domain_probe(struct platform_device *pdev)
>> +{
>> +	struct device *dev = &pdev->dev;
>> +	struct device_node *np = dev->of_node;
>> +	struct generic_pm_domain *cpu_pd;
>> +	int ret = -EINVAL, cpu;
>> +
>> +	if (!np) {
>> +		dev_err(dev, "device tree node not found\n");
>> +		return -ENODEV;
>> +	}
>> +
>> +	if (!of_find_property(np, "#power-domain-cells", NULL)) {
>> +		pr_err("power-domain-cells not found\n");
>> +		return -ENODEV;
>> +	}
>> +
>> +	cpu_pd_dev = &pdev->dev;
>> +	if (IS_ERR_OR_NULL(cpu_pd_dev))
> 
> Isn't this too late to check ? You would have crashed on dev->of_node.
> So sounds pretty useless

Agree. I will change this.


> 
>> +		return PTR_ERR(cpu_pd_dev);
>> +
>> +	cpu_pd = devm_kzalloc(dev, sizeof(*cpu_pd), GFP_KERNEL);
>> +	if (!cpu_pd)
>> +		return -ENOMEM;
>> +
>> +	cpu_pd->name = kasprintf(GFP_KERNEL, "%s", np->name);
>> +	if (!cpu_pd->name)
>> +		goto free_cpu_pd;
>> +	cpu_pd->name = kbasename(cpu_pd->name);
>> +	cpu_pd->power_off = cpu_pd_power_off;
> 
> If some kind of voting is done in off, why is there nothing to take care
> of that in pd_power_on  if it's per EL(linux/hyp/secure).
Both sleep state votes (which would be applied while powering down) and
wake state votes (which would be applied while powering on) are applied
in hardware. Software is expected to write both these types of votes to
RPMH only while powering down. As the hardware takes care of applying
the votes, there is no need for any pd_power_on operations.
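
To illustrate (a hedged sketch, not part of this patch; the resource
address and data values are made up), a client caches its votes for both
states up front with rpmh_write(), and the CPU power domain's
->power_off() path then only needs to flush them:

#include <soc/qcom/rpmh.h>
#include <soc/qcom/tcs.h>

static int cache_sleep_wake_votes(struct device *dev)
{
	/* Hypothetical resource address and values. */
	struct tcs_cmd cmd = { .addr = 0x30000, .data = 0x1 };
	int ret;

	/* Cached in the RPMH controller; HW applies it on power down. */
	ret = rpmh_write(dev, RPMH_SLEEP_STATE, &cmd, 1);
	if (ret)
		return ret;

	/* HW applies this on the way back up, so no ->power_on() work. */
	cmd.data = 0x3;
	return rpmh_write(dev, RPMH_WAKE_ONLY_STATE, &cmd, 1);
}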


Thanks a lot for your time & review, Sudeep. Happy weekend.

- Raju.



> 
> --
> Regards,
> Sudeep
>

Patch

diff --git a/drivers/soc/qcom/Kconfig b/drivers/soc/qcom/Kconfig
index ba79b60..91e8b3b 100644
--- a/drivers/soc/qcom/Kconfig
+++ b/drivers/soc/qcom/Kconfig
@@ -95,6 +95,7 @@  config QCOM_RMTFS_MEM
 config QCOM_RPMH
 	bool "Qualcomm RPM-Hardened (RPMH) Communication"
 	depends on ARCH_QCOM && ARM64 && OF || COMPILE_TEST
+	select QCOM_CPU_PD
 	help
 	  Support for communication with the hardened-RPM blocks in
 	  Qualcomm Technologies Inc (QTI) SoCs. RPMH communication uses an
@@ -102,6 +103,14 @@  config QCOM_RPMH
 	  of hardware components aggregate requests for these resources and
 	  help apply the aggregated state on the resource.
 
+config QCOM_CPU_PD
+    bool "Qualcomm cpu power domain driver"
+    depends on QCOM_RPMH && PM_GENERIC_DOMAINS || COMPILE_TEST
+    help
+	  Support for QCOM platform cpu power management to perform tasks
+	  necessary while application processor votes for deeper modes so that
+	  the firmware can enter SoC level low power modes to save power.
+
 config QCOM_SMEM
 	tristate "Qualcomm Shared Memory Manager (SMEM)"
 	depends on ARCH_QCOM
diff --git a/drivers/soc/qcom/Makefile b/drivers/soc/qcom/Makefile
index f25b54c..57a1b0e 100644
--- a/drivers/soc/qcom/Makefile
+++ b/drivers/soc/qcom/Makefile
@@ -12,6 +12,7 @@  obj-$(CONFIG_QCOM_RMTFS_MEM)	+= rmtfs_mem.o
 obj-$(CONFIG_QCOM_RPMH)		+= qcom_rpmh.o
 qcom_rpmh-y			+= rpmh-rsc.o
 qcom_rpmh-y			+= rpmh.o
+obj-$(CONFIG_QCOM_CPU_PD) += cpu_pd.o
 obj-$(CONFIG_QCOM_SMD_RPM)	+= smd-rpm.o
 obj-$(CONFIG_QCOM_SMEM) +=	smem.o
 obj-$(CONFIG_QCOM_SMEM_STATE) += smem_state.o
diff --git a/drivers/soc/qcom/cpu_pd.c b/drivers/soc/qcom/cpu_pd.c
new file mode 100644
index 0000000..565c510
--- /dev/null
+++ b/drivers/soc/qcom/cpu_pd.c
@@ -0,0 +1,104 @@ 
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2018, The Linux Foundation. All rights reserved.
+ */
+
+#include <linux/of_platform.h>
+#include <linux/pm_domain.h>
+#include <linux/slab.h>
+
+#include <soc/qcom/rpmh.h>
+
+static struct device *cpu_pd_dev;
+
+static int cpu_pd_power_off(struct generic_pm_domain *domain)
+{
+	if (rpmh_ctrlr_idle(cpu_pd_dev)) {
+		/* Flush the sleep/wake sets */
+		rpmh_flush(cpu_pd_dev);
+	} else {
+		pr_debug("rpmh controller is busy\n");
+		return -EBUSY;
+	}
+
+	return 0;
+}
+
+static int cpu_pm_domain_probe(struct platform_device *pdev)
+{
+	struct device *dev = &pdev->dev;
+	struct device_node *np = dev->of_node;
+	struct generic_pm_domain *cpu_pd;
+	int ret = -EINVAL, cpu;
+
+	if (!np) {
+		dev_err(dev, "device tree node not found\n");
+		return -ENODEV;
+	}
+
+	if (!of_find_property(np, "#power-domain-cells", NULL)) {
+		pr_err("power-domain-cells not found\n");
+		return -ENODEV;
+	}
+
+	cpu_pd_dev = &pdev->dev;
+	if (IS_ERR_OR_NULL(cpu_pd_dev))
+		return PTR_ERR(cpu_pd_dev);
+
+	cpu_pd = devm_kzalloc(dev, sizeof(*cpu_pd), GFP_KERNEL);
+	if (!cpu_pd)
+		return -ENOMEM;
+
+	cpu_pd->name = kasprintf(GFP_KERNEL, "%s", np->name);
+	if (!cpu_pd->name)
+		goto free_cpu_pd;
+	cpu_pd->name = kbasename(cpu_pd->name);
+	cpu_pd->power_off = cpu_pd_power_off;
+	cpu_pd->flags |= GENPD_FLAG_IRQ_SAFE;
+
+	ret = pm_genpd_init(cpu_pd, NULL, false);
+	if (ret)
+		goto free_name;
+
+	ret = of_genpd_add_provider_simple(np, cpu_pd);
+	if (ret)
+		goto remove_pd;
+
+	pr_info("init PM domain %s\n", cpu_pd->name);
+
+	for_each_present_cpu(cpu) {
+		ret = of_genpd_attach_cpu(cpu);
+		if (ret)
+			goto detach_cpu;
+	}
+	return 0;
+
+detach_cpu:
+	of_genpd_detach_cpu(cpu);
+
+remove_pd:
+	pm_genpd_remove(cpu_pd);
+
+free_name:
+	kfree(cpu_pd->name);
+
+free_cpu_pd:
+	kfree(cpu_pd);
+	cpu_pd_dev = NULL;
+	pr_err("failed to init PM domain ret=%d %pOF\n", ret, np);
+	return ret;
+}
+
+static const struct of_device_id cpu_pd_drv_match[] = {
+	{ .compatible = "qcom,cpu-pm-domain", },
+	{ }
+};
+
+static struct platform_driver cpu_pm_domain_driver = {
+	.probe = cpu_pm_domain_probe,
+	.driver	= {
+		.name = "cpu_pm_domain",
+		.of_match_table = cpu_pd_drv_match,
+	},
+};
+builtin_platform_driver(cpu_pm_domain_driver);