
[00/18] ARM/ARM64: Support hierarchical CPU arrangement for PSCI

Message ID 20190513192300.653-1-ulf.hansson@linaro.org (mailing list archive)

Message

Ulf Hansson May 13, 2019, 7:22 p.m. UTC
This series enables support for hierarchical CPU arrangement, managed by PSCI
for ARM/ARM64. It's based on using the generic PM domain (genpd), which
recently was extended to manage devices belonging to CPUs.

The last two DTS patches enable the hierarchical topology to be used for the
Qcom 410c Dragonboard and the HiSilicon HiKey board. The former uses the PSCI
OS-initiated mode, while the latter uses the PSCI platform-coordinated mode. In
other words, the hierarchical description of the topology in DT is orthogonal
to the supported PSCI CPU suspend mode.

Do note, these patches have been posted earlier, but then as part of a bigger
series, which at that point also included the needed infrastructure changes to
genpd and cpuidle. Rather than continuing to carry the old version history,
which may be a bit confusing, I decided to start over. For clarity, the
changelog below explains what has changed since the last submission.

Changes since last submission:
 - Converted to use dev_pm_domain_attach_by_name() rather than
   dev_pm_domain_attach(), when attaching a CPU to its PM domain. This is done
   to cope with multiple PM domains per CPU, if that turns out to be needed in
   the future. The changes mainly consisted of storing the returned struct
   device* from dev_pm_domain_attach_by_name() in a per-CPU struct (see the
   sketch after this list).
 - Due to the above changes, some simplification of the code became possible;
   in particular, the deployment of runtime PM became a bit nicer, I think.
 - Moved some of the new code inside "#ifdef CONFIG_CPU_IDLE".
 - Addressed various comments for each patch.
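
To illustrate the first item above, here is a minimal, hedged sketch of what
the per-CPU attach could look like. The helper psci_pd_attach_cpu(), the
per-CPU variable psci_cpu_pd_dev and the domain name "psci" are assumptions
made for this example, not necessarily what the patches end up using:

#include <linux/cpu.h>
#include <linux/err.h>
#include <linux/percpu.h>
#include <linux/pm_domain.h>

/* One virtual PM domain device per CPU, used later in the idle path. */
static DEFINE_PER_CPU(struct device *, psci_cpu_pd_dev);

static int psci_pd_attach_cpu(int cpu)
{
	struct device *cpu_dev = get_cpu_device(cpu);
	struct device *pd_dev;

	if (!cpu_dev)
		return -ENODEV;

	/* Attach by name, so a CPU can get multiple PM domains later on. */
	pd_dev = dev_pm_domain_attach_by_name(cpu_dev, "psci");
	if (IS_ERR_OR_NULL(pd_dev))
		return pd_dev ? PTR_ERR(pd_dev) : -ENODEV;

	/* Store the returned struct device* in a per-CPU variable. */
	per_cpu(psci_cpu_pd_dev, cpu) = pd_dev;

	return 0;
}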

The series is also available at:
git.linaro.org/people/ulf.hansson/linux-pm.git next

More background (if you are still awake):
For ARM64/ARM based platforms, CPUs are often arranged in a hierarchical manner.
From a CPU idle state perspective, this means some states may be shared among a
group of CPUs (aka a CPU cluster).

To deal with idle management of a group of CPUs, the kernel sometimes needs to
be involved to manage the last-man-standing algorithm, simply because it can't
rely solely on the power management firmware to deal with this. This depends on
the platform, of course.

There are a couple of typical scenarios in which the kernel needs to be in
control, synchronizing the point when the last CPU in a cluster is about to
enter a deep idle state.

1)
The kernel needs to carry out so called last-man activities before the
CPU cluster can enter a deep idle state. This may for example involve to
configure external logics for wakeups, as the GIC may no longer be functional
once a deep cluster idle state have been entered. Likewise, these operations
may need to be restored, when the first CPU wakes up.
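
For illustration, a hedged sketch of where such last-man activities could live
when using genpd: the cluster domain's ->power_off callback is invoked only
once the last CPU in the domain has gone idle, and ->power_on is invoked when
the first CPU wakes up. The two platform helpers below are made-up stubs:

#include <linux/pm_domain.h>

/* Made-up platform helpers, shown as empty stubs for illustration only. */
static void platform_setup_external_wakeups(void) { }
static void platform_restore_wakeup_routing(void) { }

/* Last CPU in the cluster is entering a deep idle state. */
static int cluster_pd_power_off(struct generic_pm_domain *pd)
{
	/* The GIC may lose state: route wakeups via always-on logic. */
	platform_setup_external_wakeups();
	return 0;
}

/* First CPU in the cluster is waking up. */
static int cluster_pd_power_on(struct generic_pm_domain *pd)
{
	platform_restore_wakeup_routing();
	return 0;
}

static struct generic_pm_domain cluster_pd = {
	.name = "cluster-pd",
	.power_off = cluster_pd_power_off,
	.power_on = cluster_pd_power_on,
};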

2)
Other more generic I/O devices, such as an MMC controller for example, may be a
part of the same power domain as the CPU cluster, due to a shared power-rail.
For these scenarios, when the MMC controller is in use dealing with an MMC
request, a deeper idle state of the CPU cluster may needs to be temporarily
disabled. This is needed to retain the MMC controller in a functional state,
else it may loose its register-context in the middle of serving a request.
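
As a rough, hypothetical sketch of this scenario: the MMC driver can hold a
runtime PM reference for the duration of a request, and as long as that
reference is held, genpd keeps the shared power domain (and thus the power
rail) on, so the deeper cluster idle state is not entered. mmc_do_request()
below is a made-up stand-in for the real request handling:

#include <linux/device.h>
#include <linux/pm_runtime.h>

/* Made-up stand-in for the actual MMC request handling. */
static int mmc_do_request(struct device *dev)
{
	return 0;
}

static int mmc_handle_request(struct device *dev)
{
	int ret;

	/* Keeps the shared PM domain, and thereby the power rail, powered. */
	ret = pm_runtime_get_sync(dev);
	if (ret < 0) {
		pm_runtime_put_noidle(dev);
		return ret;
	}

	ret = mmc_do_request(dev);

	/* Allow the shared domain to power off once the request is done. */
	pm_runtime_put(dev);

	return ret;
}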

Kind regards
Ulf Hansson


Lina Iyer (4):
  dt: psci: Update DT bindings to support hierarchical PSCI states
  cpuidle: dt: Support hierarchical CPU idle states
  drivers: firmware: psci: Support hierarchical CPU idle states
  arm64: dts: Convert to the hierarchical CPU topology layout for
    MSM8916

Ulf Hansson (14):
  of: base: Add of_get_cpu_state_node() to get idle states for a CPU
    node
  ARM/ARM64: cpuidle: Let back-end init ops take the driver as input
  drivers: firmware: psci: Simplify state node parsing
  drivers: firmware: psci: Prepare to use OS initiated suspend mode
  drivers: firmware: psci: Prepare to support PM domains
  drivers: firmware: psci: Add support for PM domains using genpd
  drivers: firmware: psci: Add hierarchical domain idle states converter
  drivers: firmware: psci: Introduce psci_dt_topology_init()
  drivers: firmware: psci: Add a helper to attach a CPU to its PM domain
  drivers: firmware: psci: Attach the CPU's device to its PM domain
  drivers: firmware: psci: Manage runtime PM in the idle path for CPUs
  drivers: firmware: psci: Support CPU hotplug for the hierarchical
    model
  arm64: kernel: Respect the hierarchical CPU topology in DT for PSCI
  arm64: dts: hikey: Convert to the hierarchical CPU topology layout

 .../devicetree/bindings/arm/psci.txt          | 166 ++++++++
 arch/arm/include/asm/cpuidle.h                |   4 +-
 arch/arm/kernel/cpuidle.c                     |   5 +-
 arch/arm64/boot/dts/hisilicon/hi6220.dtsi     |  87 +++-
 arch/arm64/boot/dts/qcom/msm8916.dtsi         |  57 ++-
 arch/arm64/include/asm/cpu_ops.h              |   4 +-
 arch/arm64/include/asm/cpuidle.h              |   6 +-
 arch/arm64/kernel/cpuidle.c                   |   6 +-
 arch/arm64/kernel/setup.c                     |   3 +
 drivers/cpuidle/cpuidle-arm.c                 |   2 +-
 drivers/cpuidle/dt_idle_states.c              |   5 +-
 drivers/firmware/psci/Makefile                |   2 +-
 drivers/firmware/psci/psci.c                  | 219 ++++++++--
 drivers/firmware/psci/psci.h                  |  29 ++
 drivers/firmware/psci/psci_pm_domain.c        | 403 ++++++++++++++++++
 drivers/of/base.c                             |  36 ++
 drivers/soc/qcom/spm.c                        |   3 +-
 include/linux/of.h                            |   8 +
 include/linux/psci.h                          |   6 +-
 19 files changed, 987 insertions(+), 64 deletions(-)
 create mode 100644 drivers/firmware/psci/psci.h
 create mode 100644 drivers/firmware/psci/psci_pm_domain.c

Comments

Rafael J. Wysocki May 14, 2019, 8:08 a.m. UTC | #1
On Mon, May 13, 2019 at 9:23 PM Ulf Hansson <ulf.hansson@linaro.org> wrote:
>
> This series enables support for hierarchical CPU arrangement, managed by PSCI
> for ARM/ARM64. It's based on using the generic PM domain (genpd), which
> recently was extended to manage devices belonging to CPUs.

ACK for the patches touching cpuidle in this series (from the
framework perspective), but I'm assuming it to be taken care of by
ARM/ARM64 maintainers.
Ulf Hansson May 14, 2019, 8:58 a.m. UTC | #2
On Tue, 14 May 2019 at 10:08, Rafael J. Wysocki <rafael@kernel.org> wrote:
>
> On Mon, May 13, 2019 at 9:23 PM Ulf Hansson <ulf.hansson@linaro.org> wrote:
> >
> > This series enables support for hierarchical CPU arrangement, managed by PSCI
> > for ARM/ARM64. It's based on using the generic PM domain (genpd), which
> > recently was extended to manage devices belonging to CPUs.
>
> ACK for the patches touching cpuidle in this series (from the
> framework perspective), but I'm assuming it to be taken care of by
> ARM/ARM64 maintainers.

Thanks for the ack! Yes, this is for PSCI/ARM maintainers.

BTW, apologies for sending this in the merge window, but I wanted to
take the opportunity for people to have a look before OSPM Pisa next
week.

Kind regards
Uffe
Ulf Hansson June 7, 2019, 11:19 a.m. UTC | #3
Sudeep, Lorenzo, Mark,

On Mon, 13 May 2019 at 21:23, Ulf Hansson <ulf.hansson@linaro.org> wrote:
>
> This series enables support for hierarchical CPU arrangement, managed by PSCI
> for ARM/ARM64. It's based on using the generic PM domain (genpd), which
> recently was extended to manage devices belonging to CPUs.
>
> The last two DTS patches enables the hierarchical topology to be used for the
> Qcom 410c Dragonboard and the Hisilicon Hikey board. The former uses PSCI OS-
> initiated mode, while the latter uses the PSCI Platform-Coordinated mode. In
> other words, the hierarchical description of the topology in DT, is orthogonal
> to the supported PSCI CPU suspend mode.
>
> Do note, these patches have been posted earlier, but then being part of bigger
> series, which at that point also included the needed infrastructure changes to
> genpd and cpuidle. Rather than continue to carry the old version history,
> which may be a bit confusing, I decided to start over. Although, for clarity,
> the changelog below explains what changes that have been made since the last
> submission was made.

Is there anything I can do to help get the review going here?

FYI, I gave a talk about "cluster idle" at OSPM in Pisa a few weeks
ago. There are a couple of slides [1] with flowcharts of how it works,
which may be of interest to you.

Kind regards
Uffe

[...]

[1]
http://retis.sssup.it/ospm-summit/Downloads/01_02-ClusterIdle_UlfHansson.pdf
Sudeep Holla June 7, 2019, 3:42 p.m. UTC | #4
On Tue, May 14, 2019 at 10:58:04AM +0200, Ulf Hansson wrote:
> On Tue, 14 May 2019 at 10:08, Rafael J. Wysocki <rafael@kernel.org> wrote:
> >
> > On Mon, May 13, 2019 at 9:23 PM Ulf Hansson <ulf.hansson@linaro.org> wrote:
> > >
> > > This series enables support for hierarchical CPU arrangement, managed by PSCI
> > > for ARM/ARM64. It's based on using the generic PM domain (genpd), which
> > > recently was extended to manage devices belonging to CPUs.
> >
> > ACK for the patches touching cpuidle in this series (from the
> > framework perspective), but I'm assuming it to be taken care of by
> > ARM/ARM64 maintainers.
>
> Thanks for the ack! Yes, this is for PSCI/ARM maintainers.
>
> BTW, apologize for sending this in the merge window, but wanted to
> take the opportunity for people to have a look before OSPM Pisa next
> week.
>

I will start looking at this series. But I would request PSCI/other
maintainers to wait until we see some comparison data before we merge.
If they are fine to merge w/o that, I am fine. As of now we have just
1-2 platforms to test (and those are not so simple to get started with), and
the long-term support for them is questionable. Also, with SDM845 supporting
PC, we have an excellent opportunity to compare and draw conclusions from the
results found.

--
Regards,
Sudeep
Bjorn Andersson June 7, 2019, 7:34 p.m. UTC | #5
On Fri 07 Jun 08:42 PDT 2019, Sudeep Holla wrote:

> On Tue, May 14, 2019 at 10:58:04AM +0200, Ulf Hansson wrote:
> > On Tue, 14 May 2019 at 10:08, Rafael J. Wysocki <rafael@kernel.org> wrote:
> > >
> > > On Mon, May 13, 2019 at 9:23 PM Ulf Hansson <ulf.hansson@linaro.org> wrote:
> > > >
> > > > This series enables support for hierarchical CPU arrangement, managed by PSCI
> > > > for ARM/ARM64. It's based on using the generic PM domain (genpd), which
> > > > recently was extended to manage devices belonging to CPUs.
> > >
> > > ACK for the patches touching cpuidle in this series (from the
> > > framework perspective), but I'm assuming it to be taken care of by
> > > ARM/ARM64 maintainers.
> >
> > Thanks for the ack! Yes, this is for PSCI/ARM maintainers.
> >
> > BTW, apologize for sending this in the merge window, but wanted to
> > take the opportunity for people to have a look before OSPM Pisa next
> > week.
> >
> 
> I will start looking at this series. But I would request PSCI/other
> maintainers to wait until we see some comparison data before we merge.

What comparison are you asking for here? Do you want to see the
improvement this series gives or are you hoping to compare it with some
other mechanism?

> If they are fine to merge w/o that, I am fine. As of now we have just
> 1-2 platforms to test(that too not so simple to get started) and the
> long term support for them are questionable.

Why is the support for these platforms questionable? People are actively
working on these platforms and the feature set is constantly improving.

> Also with SDM845 supporting PC, we have excellent opportunity to
> compare and conclude the results found.

That's correct, ATF exists for SDM845. But with the standard choice of
firmware you will get OSI, and I don't know of a board out there where
you can switch between them and do an apples-to-apples comparison.

Devices such as RB3 (96boards SDM845), Pixel3 and the Windows laptops
are all OSI only.


So landing this support is not a question of PC or OSI being the better
choice, it's a question of whether we want to be able to enter these lower
power states - with the upstream kernel - on any past, present or future
Qualcomm device.

Regards,
Bjorn
Sudeep Holla June 10, 2019, 10:32 a.m. UTC | #6
On Fri, Jun 07, 2019 at 12:34:07PM -0700, Bjorn Andersson wrote:
> On Fri 07 Jun 08:42 PDT 2019, Sudeep Holla wrote:
>
> > On Tue, May 14, 2019 at 10:58:04AM +0200, Ulf Hansson wrote:
> > > On Tue, 14 May 2019 at 10:08, Rafael J. Wysocki <rafael@kernel.org> wrote:
> > > >
> > > > On Mon, May 13, 2019 at 9:23 PM Ulf Hansson <ulf.hansson@linaro.org> wrote:
> > > > >
> > > > > This series enables support for hierarchical CPU arrangement, managed by PSCI
> > > > > for ARM/ARM64. It's based on using the generic PM domain (genpd), which
> > > > > recently was extended to manage devices belonging to CPUs.
> > > >
> > > > ACK for the patches touching cpuidle in this series (from the
> > > > framework perspective), but I'm assuming it to be taken care of by
> > > > ARM/ARM64 maintainers.
> > >
> > > Thanks for the ack! Yes, this is for PSCI/ARM maintainers.
> > >
> > > BTW, apologize for sending this in the merge window, but wanted to
> > > take the opportunity for people to have a look before OSPM Pisa next
> > > week.
> > >
> >
> > I will start looking at this series. But I would request PSCI/other
> > maintainers to wait until we see some comparison data before we merge.
>
> What comparison are you asking for here? Do you want to see the
> improvement this series gives or are you hoping to compare it with some
> other mechanism?
>

OK, I have mentioned this many times already, let me repeat it again.
This series adds an alternative to the existing PC mode of CPU idle
management. And it's clear that the main reason for it is the
improvement OSI mode offers vs the PC mode. I am asking for a comparison
of exactly that. And yes, we need to compare apples with apples and not
oranges here.

> > If they are fine to merge w/o that, I am fine. As of now we have just
> > 1-2 platforms to test(that too not so simple to get started) and the
> > long term support for them are questionable.
>
> Why is the support for these platforms questionable? People are actively
> working on these platforms and the feature set constantly improving.
>

Qualcomm will never fix any firmware issues, so we need to quirk around
any bugs found. I would prefer the first platform to minimize those,
as it would be the reference. But I am sure QC won't care about the
firmware on SDM845 anymore, so it's not an ideal fit.

We need to add support in TF-A to build a complete reference story around
OSI mode.

> > Also with SDM845 supporting PC, we have excellent opportunity to
> > compare and conclude the results found.
>
> That's correct, ATF exists for SDM845. But with the standard choice of
> firmware you will get OSI and I don't know of a board out there where
> you can switch between them and do a apple to apple comparison.
>

One that's not PSCI compliant; the system must boot in PC mode. If QC were
at all serious about this, they would have attempted to fix it in the
firmware. We gave this feedback at least 4 years back, and if that's still
not addressed in current-gen products, it says something. Sorry, I don't
trust the firmware story from QC.

> Devices such as RB3 (96boards SDM845), Pixel3 and the Windows laptops
> are all OSI only.
>

Again not fully PSCI compliant.

>
> So landing this support is not a question of PC or OSI being the better
> choice, it's a question of do we want to be able to enter these lower
> power states - with the upstream kernel - on any past, present or future
> Qualcomm devices.
>

Nope, I disagree. Better that they fix future products. This is a new feature
in the kernel with the claim that it's better, yet for the last 2-3 years
no effort has been made to prove that claim. So I am not really worried
about running low power modes on their past/present devices, but more
worried about the precedent this might set with an unproven claim, and other
vendors moving to this without considering all the implications.

--
Regards,
Sudeep
Ulf Hansson June 10, 2019, 3:54 p.m. UTC | #7
On Mon, 10 Jun 2019 at 12:32, Sudeep Holla <sudeep.holla@arm.com> wrote:
>
> On Fri, Jun 07, 2019 at 12:34:07PM -0700, Bjorn Andersson wrote:
> > On Fri 07 Jun 08:42 PDT 2019, Sudeep Holla wrote:
> >
> > > On Tue, May 14, 2019 at 10:58:04AM +0200, Ulf Hansson wrote:
> > > > On Tue, 14 May 2019 at 10:08, Rafael J. Wysocki <rafael@kernel.org> wrote:
> > > > >
> > > > > On Mon, May 13, 2019 at 9:23 PM Ulf Hansson <ulf.hansson@linaro.org> wrote:
> > > > > >
> > > > > > This series enables support for hierarchical CPU arrangement, managed by PSCI
> > > > > > for ARM/ARM64. It's based on using the generic PM domain (genpd), which
> > > > > > recently was extended to manage devices belonging to CPUs.
> > > > >
> > > > > ACK for the patches touching cpuidle in this series (from the
> > > > > framework perspective), but I'm assuming it to be taken care of by
> > > > > ARM/ARM64 maintainers.
> > > >
> > > > Thanks for the ack! Yes, this is for PSCI/ARM maintainers.
> > > >
> > > > BTW, apologize for sending this in the merge window, but wanted to
> > > > take the opportunity for people to have a look before OSPM Pisa next
> > > > week.
> > > >
> > >
> > > I will start looking at this series. But I would request PSCI/other
> > > maintainers to wait until we see some comparison data before we merge.
> >
> > What comparison are you asking for here? Do you want to see the
> > improvement this series gives or are you hoping to compare it with some
> > other mechanism?
> >
>
> OK, I have mentioned this many times already, let me repeat it again.
> This series adds an alternative to the existing PC mode of CPU idle
> management. And it's clear that the main reason for the same is the
> improvement OSI mode offers vs the PC mode. I am asking the comparison
> for the same. And yes we need to compare apples with apples and not
> oranges here.

In the cover letter you can see the two main reasons behind this series.
Yeah, OSI support is a part of the series, but the choice of OSI or PC mode
is orthogonal to the overall changes this series implements.

When it comes to comparing OSI mode vs PC mode, let's try to avoid
that tiring discussion again, please. :-)

My summary from the earlier ones is that, because the PSCI spec
includes support for OSI, we should also support it in the kernel (and
ATF). In an offlist discussion, Lorenzo agreed that it's okay to add,
without an apples-to-apples comparison. Maybe Lorenzo can fill in and
state this publicly, to save us all some time?

My final point in regards to the OSI mode support: it's a minor part
of the series. I don't see how it should hurt from a maintenance
point of view, or perhaps I am wrong? In any case, I offer my help
with review/maintenance in any form you see fit.

[...]

Kind regards
Uffe
Lorenzo Pieralisi June 10, 2019, 5:16 p.m. UTC | #8
On Mon, Jun 10, 2019 at 05:54:39PM +0200, Ulf Hansson wrote:

[...]

> My summary from the earlier ones, is that because the PSCI spec
> includes support for OSI, we should also support it in the kernel (and
> ATF). In a discussion offlist, Lorenzo agreed that it's okay to add,
> without an apple to apple comparison. Maybe Lorenzo can fill in and
> state this publicly, to save us all some time?

The comparison should have been made before even requesting PSCI OSI
mode changes to the specifications, so we have a chip on our shoulders
anyway.

We will enable PSCI OSI but that's not where the problem lies, enabling
PSCI OSI from a firmware perspective should take 10 lines of code,
not:

 drivers/firmware/psci/Makefile                |   2 +-
 drivers/firmware/psci/psci.c                  | 219 ++++++++--
 drivers/firmware/psci/psci.h                  |  29 ++
 drivers/firmware/psci/psci_pm_domain.c        | 403 ++++++++++++++++++

I have some concerns about these changes that I will state in the
relevant patches.

> My final point in regards to the OSI mode support, it's a minor part
> of the series. I don't see how that should hurt from a maintenance
> point of view, or perhaps I am wrong? In any case, I offer my help
> with review/maintenance in any form as you may see need/fit.

I will go through the series but most of this code should move
to core PM code, it has nothing to do with PSCI.

BTW, apologies for the delay, I was away.

Thanks,
Lorenzo
Ulf Hansson June 10, 2019, 6:57 p.m. UTC | #9
On Mon, 10 Jun 2019 at 19:16, Lorenzo Pieralisi
<lorenzo.pieralisi@arm.com> wrote:
>
> On Mon, Jun 10, 2019 at 05:54:39PM +0200, Ulf Hansson wrote:
>
> [...]
>
> > My summary from the earlier ones, is that because the PSCI spec
> > includes support for OSI, we should also support it in the kernel (and
> > ATF). In a discussion offlist, Lorenzo agreed that it's okay to add,
> > without an apple to apple comparison. Maybe Lorenzo can fill in and
> > state this publicly, to save us all some time?
>
> The comparison should have been made before even requesting PSCI OSI
> mode changes to the specifications, so we have a chip on our shoulders
> anyway.
>
> We will enable PSCI OSI but that's not where the problem lies, enabling
> PSCI OSI from a firmware perspective should take 10 lines of code,
> not:

Thanks for confirming!

>
>  drivers/firmware/psci/Makefile                |   2 +-
>  drivers/firmware/psci/psci.c                  | 219 ++++++++--
>  drivers/firmware/psci/psci.h                  |  29 ++
>  drivers/firmware/psci/psci_pm_domain.c        | 403 ++++++++++++++++++
>
> I have some concerns about these changes that I will state in the
> relevant patches.

Most of the above changes aren't solely for OSI, but rather to support a
hierarchical topology described in the PSCI DT layout. This is for
example needed when other resources share the same power rail as the
CPU cluster.

In other words, the series is orthogonal to whether OSI or PC mode is
used for PSCI, just to make that clear. BTW, this is what you
requested me to change to, a while ago.

>
> > My final point in regards to the OSI mode support, it's a minor part
> > of the series. I don't see how that should hurt from a maintenance
> > point of view, or perhaps I am wrong? In any case, I offer my help
> > with review/maintenance in any form as you may see need/fit.
>
> I will go through the series but most of this code should move
> to core PM code, it has nothing to do with PSCI.

I am looking forward to your review - and for sure, I am open to suggestions!

>
> BTW, apologies for the delay, I was away.
>
> Thanks,
> Lorenzo

Kind regards
Uffe
Ulf Hansson June 18, 2019, 11:56 a.m. UTC | #10
On Mon, 10 Jun 2019 at 20:57, Ulf Hansson <ulf.hansson@linaro.org> wrote:
>
> On Mon, 10 Jun 2019 at 19:16, Lorenzo Pieralisi
> <lorenzo.pieralisi@arm.com> wrote:
> >
> > On Mon, Jun 10, 2019 at 05:54:39PM +0200, Ulf Hansson wrote:
> >
> > [...]
> >
> > > My summary from the earlier ones, is that because the PSCI spec
> > > includes support for OSI, we should also support it in the kernel (and
> > > ATF). In a discussion offlist, Lorenzo agreed that it's okay to add,
> > > without an apple to apple comparison. Maybe Lorenzo can fill in and
> > > state this publicly, to save us all some time?
> >
> > The comparison should have been made before even requesting PSCI OSI
> > mode changes to the specifications, so we have a chip on our shoulders
> > anyway.
> >
> > We will enable PSCI OSI but that's not where the problem lies, enabling
> > PSCI OSI from a firmware perspective should take 10 lines of code,
> > not:
>
> Thanks for confirming!
>
> >
> >  drivers/firmware/psci/Makefile                |   2 +-
> >  drivers/firmware/psci/psci.c                  | 219 ++++++++--
> >  drivers/firmware/psci/psci.h                  |  29 ++
> >  drivers/firmware/psci/psci_pm_domain.c        | 403 ++++++++++++++++++
> >
> > I have some concerns about these changes that I will state in the
> > relevant patches.
>
> Most of the above changes isn't for solely for OSI, but to support a
> hierarchical topology described in the PSCI DT layout. This is for
> example needed when other resources shares the same power rail as the
> CPU cluster.
>
> In other words, the series is orthogonal to whether OSI or PC mode is
> used for PSCI, just to make that clear. BTW, this is what you
> requested me to change into, a while ago.
>
> >
> > > My final point in regards to the OSI mode support, it's a minor part
> > > of the series. I don't see how that should hurt from a maintenance
> > > point of view, or perhaps I am wrong? In any case, I offer my help
> > > with review/maintenance in any form as you may see need/fit.
> >
> > I will go through the series but most of this code should move
> > to core PM code, it has nothing to do with PSCI.
>
> I am looking forward to your review - and for sure, I am open to suggestions!
>
> >
> > BTW, apologies for the delay, I was away.

Lorenzo, a gentle ping.

Kind regards
Uffe