
[v3,08/13] cpuidle: psci: Add a helper to attach a CPU to its PM domain

Message ID 20191127102914.18729-9-ulf.hansson@linaro.org (mailing list archive)
State Superseded
Series cpuidle: psci: Support hierarchical CPU arrangement

Commit Message

Ulf Hansson Nov. 27, 2019, 10:29 a.m. UTC
Introduce a PSCI DT helper function, psci_dt_attach_cpu(), which takes a
CPU number as an in-parameter and tries to attach the CPU's struct device
to its corresponding PM domain.

Let's make use of dev_pm_domain_attach_by_name(), as it allows us to
specify "psci" as the "name" of the PM domain to attach to. Additionally,
let's prepare the attached device to be power managed via runtime PM.
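
For illustration, the caller in cpuidle-psci could store the returned device
and use it roughly as sketched below. The actual call site is only added by
subsequent patches in this series, and the per-CPU variable and function
names here are purely illustrative:

#include <linux/device.h>
#include <linux/err.h>
#include <linux/percpu.h>

#include "cpuidle-psci.h"

/* Illustrative sketch - not part of this patch. */
static DEFINE_PER_CPU(struct device *, psci_cpuidle_dev);

static int psci_cpuidle_attach_cpu(int cpu)
{
        struct device *dev = psci_dt_attach_cpu(cpu);

        /* NULL means there is no PM domain to attach to (no OSI mode). */
        if (!dev)
                return 0;

        if (IS_ERR(dev))
                return PTR_ERR(dev);

        /* Keep the device around for runtime PM calls from the idle path. */
        per_cpu(psci_cpuidle_dev, cpu) = dev;
        return 0;
}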

Note that the implementation of the new helper function is in a new,
separate C file, which may seem a bit too much at this point. However,
the subsequent changes that implement the remaining parts of the PM domain
support for cpuidle-psci help to justify this split.

Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
---

Changes in v3:
	- None.

---
 drivers/cpuidle/Makefile              |  4 ++-
 drivers/cpuidle/cpuidle-psci-domain.c | 36 +++++++++++++++++++++++++++
 drivers/cpuidle/cpuidle-psci.h        | 12 +++++++++
 3 files changed, 51 insertions(+), 1 deletion(-)
 create mode 100644 drivers/cpuidle/cpuidle-psci-domain.c
 create mode 100644 drivers/cpuidle/cpuidle-psci.h

Comments

Lorenzo Pieralisi Nov. 28, 2019, 2:14 p.m. UTC | #1
On Wed, Nov 27, 2019 at 11:29:09AM +0100, Ulf Hansson wrote:

[...]

> +struct device *psci_dt_attach_cpu(int cpu)
> +{
> +	struct device *dev;
> +
> +	/* Currently limit the hierarchical topology to be used in OSI mode. */
> +	if (!psci_has_osi_support())
> +		return NULL;
> +
> +	dev = dev_pm_domain_attach_by_name(get_cpu_device(cpu), "psci");
> +	if (IS_ERR_OR_NULL(dev))
> +		return dev;
> +
> +	pm_runtime_irq_safe(dev);
> +	if (cpu_online(cpu))

It is unclear to me how we handle (or rather we don't) CPU hotplug
with this series - it does not look OK unless genpd code manages
that automatically.

Lorenzo
Ulf Hansson Nov. 28, 2019, 5:21 p.m. UTC | #2
On Thu, 28 Nov 2019 at 15:15, Lorenzo Pieralisi
<lorenzo.pieralisi@arm.com> wrote:
>
> On Wed, Nov 27, 2019 at 11:29:09AM +0100, Ulf Hansson wrote:
>
> [...]
>
> > +struct device *psci_dt_attach_cpu(int cpu)
> > +{
> > +     struct device *dev;
> > +
> > +     /* Currently limit the hierarchical topology to be used in OSI mode. */
> > +     if (!psci_has_osi_support())
> > +             return NULL;
> > +
> > +     dev = dev_pm_domain_attach_by_name(get_cpu_device(cpu), "psci");
> > +     if (IS_ERR_OR_NULL(dev))
> > +             return dev;
> > +
> > +     pm_runtime_irq_safe(dev);
> > +     if (cpu_online(cpu))
>
> It is unclear to me how we handle (or rather we don't) CPU hotplug
> with this series - it does not look OK unless genpd code manages
> that automatically.

The series doesn't handle CPU hotplug at the moment, simply because I
am aiming to get the basic support upstream first.

From a functionality point of view, this isn't a problem in my opinion,
simply because the consequence is only that the idle states for the
"cluster" will not be reached if a CPU is brought offline.

As we talked about at LPC, and as I also told Sudeep for the v2 series,
CPU hotplug is going to be implemented by using a CPU HP notifier.
That should be fine, right?
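
For reference, here is a rough sketch of what such a notifier could look
like. The cpuhp state, the per-CPU device pointer and the function names
are just assumptions for the sake of illustration, not the final
implementation:

#include <linux/cpuhotplug.h>
#include <linux/device.h>
#include <linux/percpu.h>
#include <linux/pm_runtime.h>

/* Illustrative sketch only. */
static DEFINE_PER_CPU(struct device *, psci_cpu_pd_dev);

static int psci_idle_cpuhp_up(unsigned int cpu)
{
        struct device *dev = per_cpu(psci_cpu_pd_dev, cpu);

        /* Hold a runtime PM reference while the CPU is online. */
        if (dev)
                pm_runtime_get_sync(dev);

        return 0;
}

static int psci_idle_cpuhp_down(unsigned int cpu)
{
        struct device *dev = per_cpu(psci_cpu_pd_dev, cpu);

        /* Drop the reference so an offline CPU doesn't block "cluster" idle. */
        if (dev)
                pm_runtime_put_sync(dev);

        return 0;
}

static int psci_idle_init_cpuhp(void)
{
        int err;

        /* Invoke the callbacks above as each CPU goes online/offline. */
        err = cpuhp_setup_state_nocalls(CPUHP_AP_ONLINE_DYN,
                                        "cpuidle/psci:online",
                                        psci_idle_cpuhp_up,
                                        psci_idle_cpuhp_down);
        return err < 0 ? err : 0;
}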

Kind regards
Uffe
Lorenzo Pieralisi Nov. 28, 2019, 6:31 p.m. UTC | #3
On Thu, Nov 28, 2019 at 06:21:03PM +0100, Ulf Hansson wrote:
> On Thu, 28 Nov 2019 at 15:15, Lorenzo Pieralisi
> <lorenzo.pieralisi@arm.com> wrote:
> >
> > On Wed, Nov 27, 2019 at 11:29:09AM +0100, Ulf Hansson wrote:
> >
> > [...]
> >
> > > +struct device *psci_dt_attach_cpu(int cpu)
> > > +{
> > > +     struct device *dev;
> > > +
> > > +     /* Currently limit the hierarchical topology to be used in OSI mode. */
> > > +     if (!psci_has_osi_support())
> > > +             return NULL;
> > > +
> > > +     dev = dev_pm_domain_attach_by_name(get_cpu_device(cpu), "psci");
> > > +     if (IS_ERR_OR_NULL(dev))
> > > +             return dev;
> > > +
> > > +     pm_runtime_irq_safe(dev);
> > > +     if (cpu_online(cpu))
> >
> > It is unclear to me how we handle (or rather we don't) CPU hotplug
> > with this series - it does not look OK unless genpd code manages
> > that automatically.
> 
> The series doesn't handle CPU hotplug at the moment, simply because I
> am aiming to get the basic support upstream first.

Basic support must work, and that includes CPU hotplug - I don't want
to merge code that works with assumptions that aren't valid.

> From a functionality point of view, this isn't a problem in my opinion,
> simply because the consequence is only that the idle states for the
> "cluster" will not be reached if a CPU is brought offline.
> 
> As we talked about at LPC, and as I also told Sudeep for the v2 series,
> CPU hotplug is going to be implemented by using a CPU HP notifier.

Yes, it should be part of the series.

> That should be fine, right?

Yes, hopefully but it has to be part of the series.

Thanks,
Lorenzo
Ulf Hansson Nov. 28, 2019, 10:04 p.m. UTC | #4
On Thu, 28 Nov 2019 at 19:31, Lorenzo Pieralisi
<lorenzo.pieralisi@arm.com> wrote:
>
> On Thu, Nov 28, 2019 at 06:21:03PM +0100, Ulf Hansson wrote:
> > On Thu, 28 Nov 2019 at 15:15, Lorenzo Pieralisi
> > <lorenzo.pieralisi@arm.com> wrote:
> > >
> > > On Wed, Nov 27, 2019 at 11:29:09AM +0100, Ulf Hansson wrote:
> > >
> > > [...]
> > >
> > > > +struct device *psci_dt_attach_cpu(int cpu)
> > > > +{
> > > > +     struct device *dev;
> > > > +
> > > > +     /* Currently limit the hierarchical topology to be used in OSI mode. */
> > > > +     if (!psci_has_osi_support())
> > > > +             return NULL;
> > > > +
> > > > +     dev = dev_pm_domain_attach_by_name(get_cpu_device(cpu), "psci");
> > > > +     if (IS_ERR_OR_NULL(dev))
> > > > +             return dev;
> > > > +
> > > > +     pm_runtime_irq_safe(dev);
> > > > +     if (cpu_online(cpu))
> > >
> > > It is unclear to me how we handle (or rather we don't) CPU hotplug
> > > with this series - it does not look OK unless genpd code manages
> > > that automatically.
> >
> > The series doesn't handle CPU hotplug at the moment, simply because I
> > am aiming to get the basic support upstream first.
>
> Basic support must work, and that includes CPU hotplug - I don't want
> to merge code that works with assumptions that aren't valid.
>
> > From a functionality point of view, this isn't a problem in my opinion,
> > simply because the consequence is only that the idle states for the
> > "cluster" will not be reached if a CPU is brought offline.
> >
> > As we talked about at LPC, and as I also told Sudeep for the v2 series,
> > CPU hotplug is going to be implemented by using a CPU HP notifier.
>
> Yes, it should be part of the series.
>
> > That should be fine, right?
>
> Yes, hopefully but it has to be part of the series.

Alright, I'll send a patch on top asap.

Please review the rest, and if it needs a respin, I can fold it into the series.

Kind regards
Uffe

Patch

diff --git a/drivers/cpuidle/Makefile b/drivers/cpuidle/Makefile
index ee70d5cc5b99..cc8c769d7fa9 100644
--- a/drivers/cpuidle/Makefile
+++ b/drivers/cpuidle/Makefile
@@ -21,7 +21,9 @@  obj-$(CONFIG_ARM_U8500_CPUIDLE)         += cpuidle-ux500.o
 obj-$(CONFIG_ARM_AT91_CPUIDLE)          += cpuidle-at91.o
 obj-$(CONFIG_ARM_EXYNOS_CPUIDLE)        += cpuidle-exynos.o
 obj-$(CONFIG_ARM_CPUIDLE)		+= cpuidle-arm.o
-obj-$(CONFIG_ARM_PSCI_CPUIDLE)		+= cpuidle-psci.o
+obj-$(CONFIG_ARM_PSCI_CPUIDLE)		+= cpuidle_psci.o
+cpuidle_psci-y				:= cpuidle-psci.o
+cpuidle_psci-$(CONFIG_PM_GENERIC_DOMAINS_OF) += cpuidle-psci-domain.o
 
 ###############################################################################
 # MIPS drivers
diff --git a/drivers/cpuidle/cpuidle-psci-domain.c b/drivers/cpuidle/cpuidle-psci-domain.c
new file mode 100644
index 000000000000..bc7df4dc0686
--- /dev/null
+++ b/drivers/cpuidle/cpuidle-psci-domain.c
@@ -0,0 +1,36 @@ 
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * PM domains for CPUs via genpd - managed by cpuidle-psci.
+ *
+ * Copyright (C) 2019 Linaro Ltd.
+ * Author: Ulf Hansson <ulf.hansson@linaro.org>
+ *
+ */
+
+#include <linux/cpu.h>
+#include <linux/device.h>
+#include <linux/kernel.h>
+#include <linux/pm_domain.h>
+#include <linux/pm_runtime.h>
+#include <linux/psci.h>
+
+#include "cpuidle-psci.h"
+
+struct device *psci_dt_attach_cpu(int cpu)
+{
+	struct device *dev;
+
+	/* Currently limit the hierarchical topology to be used in OSI mode. */
+	if (!psci_has_osi_support())
+		return NULL;
+
+	dev = dev_pm_domain_attach_by_name(get_cpu_device(cpu), "psci");
+	if (IS_ERR_OR_NULL(dev))
+		return dev;
+
+	pm_runtime_irq_safe(dev);
+	if (cpu_online(cpu))
+		pm_runtime_get_sync(dev);
+
+	return dev;
+}
diff --git a/drivers/cpuidle/cpuidle-psci.h b/drivers/cpuidle/cpuidle-psci.h
new file mode 100644
index 000000000000..0cadbb71dc55
--- /dev/null
+++ b/drivers/cpuidle/cpuidle-psci.h
@@ -0,0 +1,12 @@ 
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef __CPUIDLE_PSCI_H
+#define __CPUIDLE_PSCI_H
+
+#ifdef CONFIG_PM_GENERIC_DOMAINS_OF
+struct device *psci_dt_attach_cpu(int cpu);
+#else
+static inline struct device *psci_dt_attach_cpu(int cpu) { return NULL; }
+#endif
+
+#endif /* __CPUIDLE_PSCI_H */