
[v6,1/3] PM: cpu: Add CPU_LAST_PM_ENTER and CPU_FIRST_PM_EXIT support

Message ID 20220223125536.230224-2-shawn.guo@linaro.org (mailing list archive)
State Superseded
Series Add Qualcomm MPM irqchip driver support

Commit Message

Shawn Guo Feb. 23, 2022, 12:55 p.m. UTC
It has become a common situation on some platforms that certain
hardware setup needs to be done on the last standing cpu, and
rpmh-rsc[1] is one existing example.  As figuring out the last
standing cpu is really something generic, add CPU_LAST_PM_ENTER (and
CPU_FIRST_PM_EXIT) event support to the cpu_pm helper, so that
individual drivers can be notified when the last standing cpu is about
to enter a low power state.

[1] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/soc/qcom/rpmh-rsc.c?id=v5.16#n773

Signed-off-by: Shawn Guo <shawn.guo@linaro.org>
---
 include/linux/cpu_pm.h | 15 +++++++++++++++
 kernel/cpu_pm.c        | 33 +++++++++++++++++++++++++++++++--
 2 files changed, 46 insertions(+), 2 deletions(-)
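
As an illustration of the interface added here, a driver consuming the
new events would use the usual cpu_pm notifier pattern, roughly as in
the sketch below.  This is a minimal sketch, not part of the series;
the foo_* names and the foo_suspend_hw()/foo_resume_hw() hooks are
hypothetical placeholders for driver-specific code.

#include <linux/cpu_pm.h>
#include <linux/notifier.h>

/* Hypothetical hardware hooks standing in for driver-specific code. */
static int foo_suspend_hw(void);
static void foo_resume_hw(void);

static int foo_cpu_pm_callback(struct notifier_block *nb,
			       unsigned long action, void *data)
{
	switch (action) {
	case CPU_LAST_PM_ENTER:
		/* The last standing cpu is about to enter a low power state. */
		if (foo_suspend_hw())
			return NOTIFY_BAD;
		return NOTIFY_OK;
	case CPU_FIRST_PM_EXIT:
		/* The first cpu is exiting the low power state. */
		foo_resume_hw();
		return NOTIFY_OK;
	default:
		return NOTIFY_DONE;
	}
}

static struct notifier_block foo_pm_nb = {
	.notifier_call = foo_cpu_pm_callback,
};

/* In the driver's probe/init path: cpu_pm_register_notifier(&foo_pm_nb); */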

Comments

Sudeep Holla Feb. 23, 2022, 7:30 p.m. UTC | #1
On Wed, Feb 23, 2022 at 08:55:34PM +0800, Shawn Guo wrote:
> It has become a common situation on some platforms that certain
> hardware setup needs to be done on the last standing cpu, and
> rpmh-rsc[1] is one existing example.  As figuring out the last
> standing cpu is really something generic, add CPU_LAST_PM_ENTER (and
> CPU_FIRST_PM_EXIT) event support to the cpu_pm helper, so that
> individual drivers can be notified when the last standing cpu is about
> to enter a low power state.

Sorry for not getting back on the previous email thread.
When I said I didn't want to use CPU_CLUSTER_PM_{ENTER,EXIT}, I wasn't
thinking of new ones being added as an alternative. With OSI cpuidle, we
have introduced the concept of power domains, and I was checking whether
we can associate these requirements with them rather than introducing the
first and last cpu notion. The power domains already identify the first
and last cpu in order to turn on or off. I am not sure if there is any
notification mechanism in genpd/power domains. I really don't like this
addition; it fragments the solutions for OSI and makes them hard to
understand.

One solution I can think of (not sure if others will like it or whether
it is feasible) is to create a parent power domain that encloses all the
last-level CPU power domains, which means that when the last one is
getting powered off, you will be asked to power off and can take
whatever action you want.

--
Regards,
Sudeep
Shawn Guo Feb. 25, 2022, 4:33 a.m. UTC | #2
On Wed, Feb 23, 2022 at 07:30:50PM +0000, Sudeep Holla wrote:
> On Wed, Feb 23, 2022 at 08:55:34PM +0800, Shawn Guo wrote:
> > It has become a common situation on some platforms that certain
> > hardware setup needs to be done on the last standing cpu, and
> > rpmh-rsc[1] is one existing example.  As figuring out the last
> > standing cpu is really something generic, add CPU_LAST_PM_ENTER (and
> > CPU_FIRST_PM_EXIT) event support to the cpu_pm helper, so that
> > individual drivers can be notified when the last standing cpu is about
> > to enter a low power state.
> 
> Sorry for not getting back on the previous email thread.
> When I said I didn't want to use CPU_CLUSTER_PM_{ENTER,EXIT}, I wasn't
> thinking of new ones being added as an alternative. With OSI cpuidle, we
> have introduced the concept of power domains, and I was checking whether
> we can associate these requirements with them rather than introducing the
> first and last cpu notion. The power domains already identify the first
> and last cpu in order to turn on or off. I am not sure if there is any
> notification mechanism in genpd/power domains. I really don't like this
> addition; it fragments the solutions for OSI and makes them hard to
> understand.
> 
> One solution I can think of (not sure if others will like it or whether
> it is feasible) is to create a parent power domain that encloses all the
> last-level CPU power domains, which means that when the last one is
> getting powered off, you will be asked to power off and can take
> whatever action you want.

Thanks Sudeep for the input!  Yes, it works for me (if I understand your
suggestion correctly).  So the needed changes on top of the current
version would be:

1) Declare MPM as a PD (power domain) provider and have it be the
   parent PD of the cpu cluster (the platform has only one cluster,
   containing 4 cpus).

diff --git a/arch/arm64/boot/dts/qcom/qcm2290.dtsi b/arch/arm64/boot/dts/qcom/qcm2290.dtsi
index 5bc5ce0b5d77..0cd0a9722ec5 100644
--- a/arch/arm64/boot/dts/qcom/qcm2290.dtsi
+++ b/arch/arm64/boot/dts/qcom/qcm2290.dtsi
@@ -240,6 +240,7 @@ CPU_PD3: cpu3 {
 
                CLUSTER_PD: cpu-cluster0 {
                        #power-domain-cells = <0>;
+                       power-domains = <&mpm>;
                        domain-idle-states = <&CLUSTER_SLEEP_0>;
                };
        };
@@ -490,6 +491,7 @@ mpm: interrupt-controller@45f01b8 {
                        interrupt-controller;
                        interrupt-parent = <&intc>;
                        #interrupt-cells = <2>;
+                       #power-domain-cells = <0>;
                        qcom,mpm-pin-count = <96>;
                        qcom,mpm-pin-map = <2 275>,     /* tsens0_tsens_upper_lower_int */
                                           <5 296>,     /* lpass_irq_out_sdc */


2) Add a PD to the MPM driver and call qcom_mpm_enter_sleep() from the
   PD's .power_off hook.

diff --git a/drivers/irqchip/qcom-mpm.c b/drivers/irqchip/qcom-mpm.c
index d3d8251e57e4..f4409c169a3a 100644
--- a/drivers/irqchip/qcom-mpm.c
+++ b/drivers/irqchip/qcom-mpm.c
@@ -4,7 +4,6 @@
  * Copyright (c) 2010-2020, The Linux Foundation. All rights reserved.
  */
 
-#include <linux/cpu_pm.h>
 #include <linux/delay.h>
 #include <linux/err.h>
 #include <linux/init.h>
@@ -18,6 +17,7 @@
 #include <linux/of.h>
 #include <linux/of_device.h>
 #include <linux/platform_device.h>
+#include <linux/pm_domain.h>
 #include <linux/slab.h>
 #include <linux/soc/qcom/irq.h>
 #include <linux/spinlock.h>
@@ -84,7 +84,7 @@ struct qcom_mpm_priv {
 	unsigned int map_cnt;
 	unsigned int reg_stride;
 	struct irq_domain *domain;
-	struct notifier_block pm_nb;
+	struct generic_pm_domain genpd;
 };
 
 static u32 qcom_mpm_read(struct qcom_mpm_priv *priv, unsigned int reg,
@@ -312,23 +312,12 @@ static int qcom_mpm_enter_sleep(struct qcom_mpm_priv *priv)
 	return 0;
 }
 
-static int qcom_mpm_cpu_pm_callback(struct notifier_block *nb,
-				    unsigned long action, void *data)
+static int mpm_pd_power_off(struct generic_pm_domain *genpd)
 {
-	struct qcom_mpm_priv *priv = container_of(nb, struct qcom_mpm_priv,
-						  pm_nb);
-	int ret = NOTIFY_OK;
-
-	switch (action) {
-	case CPU_LAST_PM_ENTER:
-		if (qcom_mpm_enter_sleep(priv))
-			ret = NOTIFY_BAD;
-		break;
-	default:
-		ret = NOTIFY_DONE;
-	}
+	struct qcom_mpm_priv *priv = container_of(genpd, struct qcom_mpm_priv,
+						  genpd);
 
-	return ret;
+	return qcom_mpm_enter_sleep(priv);
 }
 
 static int qcom_mpm_init(struct device_node *np, struct device_node *parent)
@@ -336,6 +325,7 @@ static int qcom_mpm_init(struct device_node *np, struct device_node *parent)
 	struct platform_device *pdev = of_find_device_by_node(np);
 	struct device *dev = &pdev->dev;
 	struct irq_domain *parent_domain;
+	struct generic_pm_domain *genpd;
 	struct qcom_mpm_priv *priv;
 	unsigned int pin_cnt;
 	int i, irq;
@@ -387,6 +377,26 @@ static int qcom_mpm_init(struct device_node *np, struct device_node *parent)
 	if (irq < 0)
 		return irq;
 
+	genpd = &priv->genpd;
+	genpd->flags = GENPD_FLAG_IRQ_SAFE;
+	genpd->power_off = mpm_pd_power_off;
+
+	genpd->name = devm_kasprintf(dev, GFP_KERNEL, "%s", dev_name(dev));
+	if (!genpd->name)
+		return -ENOMEM;
+
+	ret = pm_genpd_init(genpd, NULL, false);
+	if (ret) {
+		dev_err(dev, "failed to init genpd: %d\n", ret);
+		return ret;
+	}
+
+	ret = of_genpd_add_provider_simple(np, genpd);
+	if (ret) {
+		dev_err(dev, "failed to add genpd provider: %d\n", ret);
+		goto remove_genpd;
+	}
+
 	priv->mbox_client.dev = dev;
 	priv->mbox_chan = mbox_request_channel(&priv->mbox_client, 0);
 	if (IS_ERR(priv->mbox_chan)) {
@@ -420,15 +430,14 @@ static int qcom_mpm_init(struct device_node *np, struct device_node *parent)
 		goto remove_domain;
 	}
 
-	priv->pm_nb.notifier_call = qcom_mpm_cpu_pm_callback;
-	cpu_pm_register_notifier(&priv->pm_nb);
-
 	return 0;
 
 remove_domain:
 	irq_domain_remove(priv->domain);
 free_mbox:
 	mbox_free_channel(priv->mbox_chan);
+remove_genpd:
+	pm_genpd_remove(genpd);
 	return ret;
 }
 

Let me know if this is what you are asking for, thanks!

Shawn
Sudeep Holla Feb. 25, 2022, 2:20 p.m. UTC | #3
On Fri, Feb 25, 2022 at 12:33:11PM +0800, Shawn Guo wrote:
> On Wed, Feb 23, 2022 at 07:30:50PM +0000, Sudeep Holla wrote:
> > On Wed, Feb 23, 2022 at 08:55:34PM +0800, Shawn Guo wrote:
> > > It has become a common situation on some platforms that certain
> > > hardware setup needs to be done on the last standing cpu, and
> > > rpmh-rsc[1] is one existing example.  As figuring out the last
> > > standing cpu is really something generic, add CPU_LAST_PM_ENTER (and
> > > CPU_FIRST_PM_EXIT) event support to the cpu_pm helper, so that
> > > individual drivers can be notified when the last standing cpu is about
> > > to enter a low power state.
> > 
> > Sorry for not getting back on the previous email thread.
> > When I said I didn't want to use CPU_CLUSTER_PM_{ENTER,EXIT}, I wasn't
> > thinking of new ones being added as an alternative. With OSI cpuidle, we
> > have introduced the concept of power domains, and I was checking whether
> > we can associate these requirements with them rather than introducing the
> > first and last cpu notion. The power domains already identify the first
> > and last cpu in order to turn on or off. I am not sure if there is any
> > notification mechanism in genpd/power domains. I really don't like this
> > addition; it fragments the solutions for OSI and makes them hard to
> > understand.
> > 
> > One solution I can think of (not sure if others will like it or whether
> > it is feasible) is to create a parent power domain that encloses all the
> > last-level CPU power domains, which means that when the last one is
> > getting powered off, you will be asked to power off and can take
> > whatever action you want.
> 
> Thanks Sudeep for the input!  Yes, it works for me (if I understand your
> suggestion correctly).  So the needed changes on top of the current
> version would be:
> 
> 1) Declare MPM as a PD (power domain) provider and have it be the
>    parent PD of the cpu cluster (the platform has only one cluster,
>    containing 4 cpus).
> 

[...]

> 
> Let me know if this is what you are asking for, thanks!

That matches exactly. I don't know if there is anything I am missing,
but if this is possible, it is easier for me to understand, as it is all
linked to power domains like everything else in OSI cpuidle.

So yes, I prefer this, but let us see what others have to say about it.

Patch

diff --git a/include/linux/cpu_pm.h b/include/linux/cpu_pm.h
index 552b8f9ea05e..153344307b7c 100644
--- a/include/linux/cpu_pm.h
+++ b/include/linux/cpu_pm.h
@@ -55,6 +55,21 @@  enum cpu_pm_event {
 
 	/* A cpu power domain is exiting a low power state */
 	CPU_CLUSTER_PM_EXIT,
+
+	/*
+	 * A cpu is entering a low power state after all other cpus
+	 * in the system have entered the lower power state.
+	 */
+	CPU_LAST_PM_ENTER,
+
+	/* The last cpu failed to enter a low power state */
+	CPU_LAST_PM_ENTER_FAILED,
+
+	/*
+	 * A cpu is exiting a low power state before any other cpus
+	 * in the system exits the low power state.
+	 */
+	CPU_FIRST_PM_EXIT,
 };
 
 #ifdef CONFIG_CPU_PM
diff --git a/kernel/cpu_pm.c b/kernel/cpu_pm.c
index 246efc74e3f3..7c104446e1e9 100644
--- a/kernel/cpu_pm.c
+++ b/kernel/cpu_pm.c
@@ -26,6 +26,8 @@  static struct {
 	.lock  = __RAW_SPIN_LOCK_UNLOCKED(cpu_pm_notifier.lock),
 };
 
+static atomic_t cpus_in_pm;
+
 static int cpu_pm_notify(enum cpu_pm_event event)
 {
 	int ret;
@@ -116,7 +118,20 @@  EXPORT_SYMBOL_GPL(cpu_pm_unregister_notifier);
  */
 int cpu_pm_enter(void)
 {
-	return cpu_pm_notify_robust(CPU_PM_ENTER, CPU_PM_ENTER_FAILED);
+	int ret;
+
+	ret = cpu_pm_notify_robust(CPU_PM_ENTER, CPU_PM_ENTER_FAILED);
+	if (ret)
+		return ret;
+
+	if (atomic_inc_return(&cpus_in_pm) == num_online_cpus()) {
+		ret = cpu_pm_notify_robust(CPU_LAST_PM_ENTER,
+					   CPU_LAST_PM_ENTER_FAILED);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
 }
 EXPORT_SYMBOL_GPL(cpu_pm_enter);
 
@@ -134,7 +149,21 @@  EXPORT_SYMBOL_GPL(cpu_pm_enter);
  */
 int cpu_pm_exit(void)
 {
-	return cpu_pm_notify(CPU_PM_EXIT);
+	int ret;
+
+	ret = cpu_pm_notify(CPU_PM_EXIT);
+	if (ret)
+		return ret;
+
+	if (atomic_read(&cpus_in_pm) == num_online_cpus()) {
+		ret = cpu_pm_notify(CPU_FIRST_PM_EXIT);
+		if (ret)
+			return ret;
+	}
+
+	atomic_dec(&cpus_in_pm);
+
+	return 0;
 }
 EXPORT_SYMBOL_GPL(cpu_pm_exit);
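
For context, cpu_pm_enter() and cpu_pm_exit() are called from the
platform's idle entry path around a context-losing idle state, which is
what drives the cpus_in_pm accounting above on each cpu.  Below is a
minimal, hypothetical sketch of such a caller; the foo_* names are
placeholders, not real cpuidle code.

#include <linux/cpu_pm.h>

static int foo_firmware_suspend(void);	/* placeholder for the actual state entry */

static int foo_enter_deep_idle(void)
{
	int ret;

	/* Notifies CPU_PM_ENTER and, on the last online cpu, CPU_LAST_PM_ENTER. */
	ret = cpu_pm_enter();
	if (ret)
		return ret;

	ret = foo_firmware_suspend();

	/* Notifies CPU_PM_EXIT and, on the first cpu to wake, CPU_FIRST_PM_EXIT. */
	cpu_pm_exit();

	return ret;
}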