
[v16,4/6] soc: qcom: rpmh: Invoke rpmh_flush() for dirty caches

Message ID 1586154741-8293-5-git-send-email-mkshah@codeaurora.org (mailing list archive)
State Superseded
Series Invoke rpmh_flush for non OSI targets

Commit Message

Maulik Shah April 6, 2020, 6:32 a.m. UTC
Add changes to invoke rpmh flush() from CPU PM notification.
This is done when the last the cpu is entering power collapse and
controller is not busy.

Controllers that do have 'HW solver' mode do not need to register
for CPU PM notification. They may be in autonomous mode executing
low power mode and do not require rpmh_flush() to happen from CPU
PM notification.

Signed-off-by: Maulik Shah <mkshah@codeaurora.org>
Reviewed-by: Douglas Anderson <dianders@chromium.org>
---
 drivers/soc/qcom/rpmh-internal.h |  25 +++++---
 drivers/soc/qcom/rpmh-rsc.c      | 123 +++++++++++++++++++++++++++++++++++----
 drivers/soc/qcom/rpmh.c          |  26 +++------
 3 files changed, 137 insertions(+), 37 deletions(-)

Comments

Stephen Boyd April 8, 2020, 2:50 a.m. UTC | #1
Quoting Maulik Shah (2020-04-05 23:32:19)
> Add changes to invoke rpmh flush() from CPU PM notification.
> This is done when the last the cpu is entering power collapse and
> controller is not busy.
> 
> Controllers that do have 'HW solver' mode do not need to register

Controllers that have 'HW solver' mode don't need to register? The 'do
have' is throwing me off.

> for CPU PM notification. They may be in autonomous mode executing
> low power mode and do not require rpmh_flush() to happen from CPU
> PM notification.
> 
> Signed-off-by: Maulik Shah <mkshah@codeaurora.org>
> Reviewed-by: Douglas Anderson <dianders@chromium.org>
> ---
>  drivers/soc/qcom/rpmh-internal.h |  25 +++++---
>  drivers/soc/qcom/rpmh-rsc.c      | 123 +++++++++++++++++++++++++++++++++++----
>  drivers/soc/qcom/rpmh.c          |  26 +++------
>  3 files changed, 137 insertions(+), 37 deletions(-)
> 
> diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
> index b718221..fbe1f3e 100644
> --- a/drivers/soc/qcom/rpmh-rsc.c
> +++ b/drivers/soc/qcom/rpmh-rsc.c
> @@ -6,6 +6,7 @@
[...]
> +
> +static int rpmh_rsc_cpu_pm_callback(struct notifier_block *nfb,
> +                                   unsigned long action, void *v)
> +{
> +       struct rsc_drv *drv = container_of(nfb, struct rsc_drv, rsc_pm);
> +       int ret = NOTIFY_OK;
> +
> +       spin_lock(&drv->pm_lock);
> +
> +       switch (action) {
> +       case CPU_PM_ENTER:

I thought CPU_PM notifiers weren't supposed to be used anymore? Or at
least, the genpd work that has gone on for cpuidle could be used here in
place of CPU_PM notifiers? And so this isn't actually any different
than what was proposed originally to use genpd for this?

> +               cpumask_set_cpu(raw_smp_processor_id(),

Why do we need to use raw_smp_processor_id()? smp_processor_id() should
work just as well?

> +                               &drv->cpus_entered_pm);
> +
> +               if (!cpumask_equal(&drv->cpus_entered_pm, cpu_online_mask))
> +                       goto exit;
> +               break;
> +       case CPU_PM_ENTER_FAILED:
> +       case CPU_PM_EXIT:
> +               cpumask_clear_cpu(raw_smp_processor_id(),
> +                                 &drv->cpus_entered_pm);
> +               goto exit;
> +       }
> +
> +       ret = rpmh_rsc_ctrlr_is_busy(drv);
> +       if (ret) {
> +               ret = NOTIFY_BAD;
> +               goto exit;
> +       }
> +
> +       ret = rpmh_flush(&drv->client);
> +       if (ret)
> +               ret = NOTIFY_BAD;
> +       else
> +               ret = NOTIFY_OK;
> +
> +exit:
> +       spin_unlock(&drv->pm_lock);
> +       return ret;
> +}
> +
Maulik Shah April 8, 2020, 7:08 a.m. UTC | #2
Hi,

On 4/8/2020 8:20 AM, Stephen Boyd wrote:
> Quoting Maulik Shah (2020-04-05 23:32:19)
>> Add changes to invoke rpmh flush() from CPU PM notification.
>> This is done when the last the cpu is entering power collapse and
>> controller is not busy.
>>
>> Controllers that do have 'HW solver' mode do not need to register
> Controllers that have 'HW solver' mode don't need to register? The 'do
> have' is throwing me off.
Okay, I will remove 'do' from this line.
>> for CPU PM notification. They may be in autonomous mode executing
>> low power mode and do not require rpmh_flush() to happen from CPU
>> PM notification.
>>
>> Signed-off-by: Maulik Shah <mkshah@codeaurora.org>
>> Reviewed-by: Douglas Anderson <dianders@chromium.org>
>> ---
>>   drivers/soc/qcom/rpmh-internal.h |  25 +++++---
>>   drivers/soc/qcom/rpmh-rsc.c      | 123 +++++++++++++++++++++++++++++++++++----
>>   drivers/soc/qcom/rpmh.c          |  26 +++------
>>   3 files changed, 137 insertions(+), 37 deletions(-)
>>
>> diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
>> index b718221..fbe1f3e 100644
>> --- a/drivers/soc/qcom/rpmh-rsc.c
>> +++ b/drivers/soc/qcom/rpmh-rsc.c
>> @@ -6,6 +6,7 @@
> [...]
>> +
>> +static int rpmh_rsc_cpu_pm_callback(struct notifier_block *nfb,
>> +                                   unsigned long action, void *v)
>> +{
>> +       struct rsc_drv *drv = container_of(nfb, struct rsc_drv, rsc_pm);
>> +       int ret = NOTIFY_OK;
>> +
>> +       spin_lock(&drv->pm_lock);
>> +
>> +       switch (action) {
>> +       case CPU_PM_ENTER:
> I thought CPU_PM notifiers weren't supposed to be used anymore? Or at
> least, the genpd work that has gone on for cpuidle could be used here in
> place of CPU_PM notifiers?

genpd was used in v3 and v4 of this series, where rpmh_flush() was invoked
from the pd's .power_off function.

genpd can be useful if the target firmware supports PSCI's OSI mode, while
sc7180 is a non-OSI target.

The current approach (using CPU PM notification) can be used for both OSI
and non-OSI targets to invoke rpmh_flush() when the last cpu goes to power down.
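
As a rough illustration (not the actual v3/v4 code), the genpd approach
amounted to something like the sketch below, assuming a hypothetical 'pd'
member embedded in struct rsc_drv:

#include <linux/pm_domain.h>

static int rpmh_rsc_pd_power_off(struct generic_pm_domain *domain)
{
	struct rsc_drv *drv = container_of(domain, struct rsc_drv, pd);

	/* Last CPU in the domain is powering down; flush the cached votes. */
	return rpmh_flush(&drv->client);
}

static struct generic_pm_domain rpmh_rsc_pd = {
	.name      = "rpmh-rsc-pd",
	.power_off = rpmh_rsc_pd_power_off,
	.flags     = GENPD_FLAG_IRQ_SAFE,
};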

> And so this isn't actually any different
> than what was proposed originally to use genpd for this?
>
>> +               cpumask_set_cpu(raw_smp_processor_id(),
> Why do we need to use raw_smp_processor_id()? smp_processor_id() should
> work just as well?
Yes, seems it will work as well. I will change to use smp_processor_id().
>
>> +                               &drv->cpus_entered_pm);
>> +
>> +               if (!cpumask_equal(&drv->cpus_entered_pm, cpu_online_mask))
>> +                       goto exit;
>> +               break;
>> +       case CPU_PM_ENTER_FAILED:
>> +       case CPU_PM_EXIT:
>> +               cpumask_clear_cpu(raw_smp_processor_id(),
>> +                                 &drv->cpus_entered_pm);
>> +               goto exit;
>> +       }
>> +
>> +       ret = rpmh_rsc_ctrlr_is_busy(drv);
>> +       if (ret) {
>> +               ret = NOTIFY_BAD;
>> +               goto exit;
>> +       }
>> +
>> +       ret = rpmh_flush(&drv->client);
>> +       if (ret)
>> +               ret = NOTIFY_BAD;
>> +       else
>> +               ret = NOTIFY_OK;
>> +
>> +exit:
>> +       spin_unlock(&drv->pm_lock);
>> +       return ret;
>> +}
>> +
Thanks,
Maulik
Stephen Boyd April 9, 2020, 8:16 a.m. UTC | #3
Quoting Maulik Shah (2020-04-08 00:08:48)
> Hi,
> 
> On 4/8/2020 8:20 AM, Stephen Boyd wrote:
> > Quoting Maulik Shah (2020-04-05 23:32:19)
> >> for CPU PM notification. They may be in autonomous mode executing
> >> low power mode and do not require rpmh_flush() to happen from CPU
> >> PM notification.
> >>
> >> Signed-off-by: Maulik Shah <mkshah@codeaurora.org>
> >> Reviewed-by: Douglas Anderson <dianders@chromium.org>
> >> ---
> >>   drivers/soc/qcom/rpmh-internal.h |  25 +++++---
> >>   drivers/soc/qcom/rpmh-rsc.c      | 123 +++++++++++++++++++++++++++++++++++----
> >>   drivers/soc/qcom/rpmh.c          |  26 +++------
> >>   3 files changed, 137 insertions(+), 37 deletions(-)
> >>
> >> diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
> >> index b718221..fbe1f3e 100644
> >> --- a/drivers/soc/qcom/rpmh-rsc.c
> >> +++ b/drivers/soc/qcom/rpmh-rsc.c
> >> @@ -6,6 +6,7 @@
> > [...]
> >> +
> >> +static int rpmh_rsc_cpu_pm_callback(struct notifier_block *nfb,
> >> +                                   unsigned long action, void *v)
> >> +{
> >> +       struct rsc_drv *drv = container_of(nfb, struct rsc_drv, rsc_pm);
> >> +       int ret = NOTIFY_OK;
> >> +
> >> +       spin_lock(&drv->pm_lock);
> >> +
> >> +       switch (action) {
> >> +       case CPU_PM_ENTER:
> > I thought CPU_PM notifiers weren't supposed to be used anymore? Or at
> > least, the genpd work that has gone on for cpuidle could be used here in
> > place of CPU_PM notifiers?
> 
> genpd was used in v3 and v4 of this series, where from pd's .power_off  
> function, rpmh_flush() was invoked.
> 
> genpd can be useful if target firmware supports PSCI's OSI mode, while 
> sc7180 is non-OSI target.
> 
> The current approch (using cpu pm notification) can be used for both OSI 
> and non-OSI targets to invoke rpmh_flush() when last cpu goes to power down.

Ok. Doug and I talked today and I re-read the earlier series and I think
Sudeep was suggesting that if we're doing last man down activities here
then we're better off using OSI vs. PC mode. But I can only assume
that's because the concern is something here requires software's help
for last man down activities like lowering a CPU voltage setting or
turning off some power switch to a hardware block through some i2c
message. The way I understand it the last man down activities here are
just setting up the sleep and wake TCS FIFOs to "do the right thing"
when the last CPU actually goes down and the first CPU wakes up by
running through the pile of "instructions" that we program into the
FIFOs.

The execution of those instructions is all done in hardware so any
aggregation or coordination between CPUs is not really important here.
All that matters is that we set up the sleep and wake TCS FIFOs properly
so that _if_ the whole CPU subsystem goes to sleep we're going to let
the hardware turn off the right stuff and lower voltages, etc. and
vice-versa for wake. If we didn't have to share the TCS FIFOs with
active mode control then we could just tweak the sleep and wake TCS
buckets at runtime and let the hardware state of the CPUs decide to
trigger them at the right time. Unfortunately, we don't have that luxury
and we're stuck repurposing the sleep TCS FIFO to control things like
regulator voltages when the CPU is awake. Yuck!
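
To make that concrete, a client's sleep/wake votes only reach those FIFOs via
the cached path, roughly as in this sketch (made-up address/data, using the
existing rpmh_write() client API, where 'dev' is the client's device):

	struct tcs_cmd cmd = {
		.addr = 0x12345,	/* made-up command-db address */
		.data = 0x1,
	};

	/* Cached in the rpmh_ctrlr; nothing touches a TCS yet. */
	rpmh_write(dev, RPMH_SLEEP_STATE, &cmd, 1);
	cmd.data = 0x2;
	rpmh_write(dev, RPMH_WAKE_ONLY_STATE, &cmd, 1);

	/*
	 * Only when rpmh_flush() runs (from the last CPU's PM notifier in
	 * this series) do the cached votes get programmed into the SLEEP
	 * and WAKE TCS FIFOs for the hardware to execute on its own.
	 */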

> 
> > And so this isn't actually any different
> > than what was proposed originally to use genpd for this?
> >

I guess the answer to this is yes. Which is fine. CPU PM notifiers are
still used by various drivers to do things like save/restore state of
devices that lose state when the CPUs power down. The use of genpd is
helpful for OSI mode because it can describe how/when big and little
clusters are powered off by putting them in different genpds. For
counting the last CPU to turn off it seems simpler to just register for
CPU PM notifiers and not care about genpd logic and nesting clusters,
etc. I'm happy to see this not be a blocker.
Stephen Boyd April 10, 2020, 4:15 a.m. UTC | #4
Quoting Maulik Shah (2020-04-05 23:32:19)
> Add changes to invoke rpmh flush() from CPU PM notification.
> This is done when the last the cpu is entering power collapse and

'power collapse' is a qcom-ism. Maybe say something like "deep CPU idle
states"?

> controller is not busy.
> 
> Controllers that do have 'HW solver' mode do not need to register
> for CPU PM notification. They may be in autonomous mode executing
> low power mode and do not require rpmh_flush() to happen from CPU
> PM notification.

Can you provide an example of a HW solver mode controller? Presumably
the display RSC is one of these?

> 
> Signed-off-by: Maulik Shah <mkshah@codeaurora.org>
> Reviewed-by: Douglas Anderson <dianders@chromium.org>

> diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
> index b718221..fbe1f3e 100644
> --- a/drivers/soc/qcom/rpmh-rsc.c
> +++ b/drivers/soc/qcom/rpmh-rsc.c
> @@ -521,8 +527,86 @@ int rpmh_rsc_write_ctrl_data(struct rsc_drv *drv, const struct tcs_request *msg)
>         return tcs_ctrl_write(drv, msg);
>  }
>  
> +/**
> + * rpmh_rsc_ctrlr_is_busy() - Check if any of the AMCs are busy.
> + *
> + * @drv: The controller
> + *
> + * Checks if any of the AMCs are busy in handling ACTIVE sets.
> + * This is called from the last cpu powering down before flushing
> + * SLEEP and WAKE sets. If AMCs are busy, controller can not enter
> + * power collapse, so deny from the last cpu's pm notification.
> + *
> + * Return:
> + * * False             - AMCs are idle
> + * * True              - AMCs are busy
> + */
> +static bool rpmh_rsc_ctrlr_is_busy(struct rsc_drv *drv)

Can drv be const? Would be nice to make it const in some places in this
driver.

> +{
> +       int m;
> +       struct tcs_group *tcs = get_tcs_of_type(drv, ACTIVE_TCS);
> +
> +       /*
> +        * If we made an active request on a RSC that does not have a
> +        * dedicated TCS for active state use, then re-purposed wake TCSes
> +        * should be checked for not busy.

not busy, because we use wake TCSes for active requests in this case.

> +        *
> +        * Since this is called from the last cpu, need not take drv or tcs
> +        * lock before checking tcs_is_free().
> +        */
> +       if (!tcs->num_tcs)
> +               tcs = get_tcs_of_type(drv, WAKE_TCS);
> +
> +       for (m = tcs->offset; m < tcs->offset + tcs->num_tcs; m++) {
> +               if (!tcs_is_free(drv, m))
> +                       return true;
> +       }
> +
> +       return false;
> +}
[...]
> +
> diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
> index a75f3df..88f8801 100644
> --- a/drivers/soc/qcom/rpmh.c
> +++ b/drivers/soc/qcom/rpmh.c
> @@ -433,16 +430,17 @@ static int send_single(struct rpmh_ctrlr *ctrlr, enum rpmh_state state,
>  }
>  
>  /**
> - * rpmh_flush: Flushes the buffered active and sleep sets to TCS
> + * rpmh_flush() - Flushes the buffered sleep and wake sets to TCSes
>   *
> - * @ctrlr: controller making request to flush cached data
> + * @ctrlr: Controller making request to flush cached data
>   *
> - * Return: -EBUSY if the controller is busy, probably waiting on a response
> - * to a RPMH request sent earlier.
> + * This function is called from sleep code on the last CPU
> + * (thus no spinlock needed).

Might be good to stick a lockdep_assert_irqs_disabled() in this function
then. That would document that this function should only be called when
irqs are disabled.

>   *
> - * This function is always called from the sleep code from the last CPU
> - * that is powering down the entire system. Since no other RPMH API would be
> - * executing at this time, it is safe to run lockless.
> + * Return:
> + * * 0          - Success
> + * * -EAGAIN    - Retry again
> + * * Error code - Otherwise
>   */
>  int rpmh_flush(struct rpmh_ctrlr *ctrlr)

This function name keeps throwing me off. Can we please call it
something like rpmh_configure_tcs_sleep_wake()? The word "flush" seems
to imply there's some sort of cache going on, but that's not really the
case. We're programming a couple TCS FIFOs so that they can be used
across deep CPU sleep states.

>  {
> @@ -455,9 +453,7 @@ int rpmh_flush(struct rpmh_ctrlr *ctrlr)
>         }
>  
>         /* Invalidate the TCSes first to avoid stale data */
> -       do {
> -               ret = rpmh_rsc_invalidate(ctrlr_to_drv(ctrlr));
> -       } while (ret == -EAGAIN);
> +       ret = rpmh_rsc_invalidate(ctrlr_to_drv(ctrlr));
>         if (ret)
>                 return ret;
>
Doug Anderson April 10, 2020, 2:52 p.m. UTC | #5
Hi,

On Thu, Apr 9, 2020 at 9:15 PM Stephen Boyd <swboyd@chromium.org> wrote:
>
> >  int rpmh_flush(struct rpmh_ctrlr *ctrlr)
>
> This function name keeps throwing me off. Can we please call it
> something like rpmh_configure_tcs_sleep_wake()? The word "flush" seems
> to imply there's some sort of cache going on, but that's not really the
> case. We're programming a couple TCS FIFOs so that they can be used
> across deep CPU sleep states.

I'm hoping this rename can be deferred until Maulik's series and my
cleanup series land.  While I agree that rpmh_flush() is a bit of a
confusing name, it's not a new name and renaming it midway through
when there are a bunch of patches in-flight is going to be a hassle.

Assuming others agree, my thought is that Maulik will do one more v17
spin with small nits fixed up, then his series can land early next
week when (presumably) -rc1 comes out.  If my current cleanup doesn't
apply cleanly (or if Bjorn/Andy don't want to fix the two nits in
commit messages when applying) I can repost my series and that can
land in short order.  Once those land:

* Maulik can post this rpmh_flush() rename atop.

* I can post the patch to remove the "pm_lock" that was introduced in
this series.  A preview at <https://crrev.com/c/2142823>.

* Maulik can post some of the cleanups that Maulik wanted to do in
rpmh.c with regards to __fill_rpmh_msg()

...assuming those are not controversial perhaps they can be reviewed
quickly and land quickly?  I suppose I can always dream...


-Doug
Maulik Shah April 12, 2020, 1:51 p.m. UTC | #6
Hi,

On 4/9/2020 1:46 PM, Stephen Boyd wrote:
> Quoting Maulik Shah (2020-04-08 00:08:48)
>> Hi,
>>
>> On 4/8/2020 8:20 AM, Stephen Boyd wrote:
>>> Quoting Maulik Shah (2020-04-05 23:32:19)
>>>> for CPU PM notification. They may be in autonomous mode executing
>>>> low power mode and do not require rpmh_flush() to happen from CPU
>>>> PM notification.
>>>>
>>>> Signed-off-by: Maulik Shah <mkshah@codeaurora.org>
>>>> Reviewed-by: Douglas Anderson <dianders@chromium.org>
>>>> ---
>>>>    drivers/soc/qcom/rpmh-internal.h |  25 +++++---
>>>>    drivers/soc/qcom/rpmh-rsc.c      | 123 +++++++++++++++++++++++++++++++++++----
>>>>    drivers/soc/qcom/rpmh.c          |  26 +++------
>>>>    3 files changed, 137 insertions(+), 37 deletions(-)
>>>>
>>>> diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
>>>> index b718221..fbe1f3e 100644
>>>> --- a/drivers/soc/qcom/rpmh-rsc.c
>>>> +++ b/drivers/soc/qcom/rpmh-rsc.c
>>>> @@ -6,6 +6,7 @@
>>> [...]
>>>> +
>>>> +static int rpmh_rsc_cpu_pm_callback(struct notifier_block *nfb,
>>>> +                                   unsigned long action, void *v)
>>>> +{
>>>> +       struct rsc_drv *drv = container_of(nfb, struct rsc_drv, rsc_pm);
>>>> +       int ret = NOTIFY_OK;
>>>> +
>>>> +       spin_lock(&drv->pm_lock);
>>>> +
>>>> +       switch (action) {
>>>> +       case CPU_PM_ENTER:
>>> I thought CPU_PM notifiers weren't supposed to be used anymore? Or at
>>> least, the genpd work that has gone on for cpuidle could be used here in
>>> place of CPU_PM notifiers?
>> genpd was used in v3 and v4 of this series, where from pd's .power_off
>> function, rpmh_flush() was invoked.
>>
>> genpd can be useful if target firmware supports PSCI's OSI mode, while
>> sc7180 is non-OSI target.
>>
>> The current approch (using cpu pm notification) can be used for both OSI
>> and non-OSI targets to invoke rpmh_flush() when last cpu goes to power down.
> Ok. Doug and I talked today and I re-read the earlier series and I think
> Sudeep was suggesting that if we're doing last man down activities here
> then we're better off using OSI vs. PC mode. But I can only assume
> that's because the concern is something here requires software's help
> for last man down activities like lowering a CPU voltage setting or
> turning off some power switch to a hardware block through some i2c
> message. The way I understand it the last man down activities here are
> just setting up the sleep and wake TCS FIFOs to "do the right thing"
> when the last CPU actually goes down and the first CPU wakes up by
> running through the pile of "instructions" that we program into the
> FIFOs.
>
> The execution of those instructions is all done in hardware so any
> aggregation or coordination between CPUs is not really important here.
> All that matters is that we set up the sleep and wake TCS FIFOs properly
> so that _if_ the whole CPU subsystem goes to sleep we're going to let
> the hardware turn off the right stuff and lower voltages, etc. and
> vice-versa for wake. If we didn't have to share the TCS FIFOs with
> active mode control then we could just tweak the sleep and wake TCS
> buckets at runtime and let the hardware state of the CPUs decide to
> trigger them at the right time.
Correct.
>   Unfortunately, we don't have that luxury
> and we're stuck repurposing the sleep TCS FIFO to control things like
> regulator voltages when the CPU is awake. Yuck!
The RSC has TCS hardware, and it is currently divided into an ACTIVE/SLEEP/WAKE
TCS configuration.
The ACTIVE TCS and the SLEEP + WAKE TCS use cases are mutually exclusive, in
the sense that when the ACTIVE TCS is in use the SLEEP + WAKE TCSes are not
(since a CPU is already out of low power mode doing an active-only transfer,
the firmware cannot trigger the deepest low power modes where the SLEEP and
WAKE TCSes are used).

Similarly, when the SLEEP + WAKE TCSes are in use, the ACTIVE TCS is not
(since none of the CPUs are in Linux to send an active message). With the
above, some of the RSCs already don't have a dedicated ACTIVE TCS, and when we
want to send an active-only message we borrow one TCS from the WAKE TCS pool,
configure it for ACTIVE use (like enabling the completion IRQ for the TCS)
and, once done, re-configure it to be used as a WAKE TCS only.

So with reduced hardware (removing the ACTIVE TCSes), the same functionality
is achieved.
The rpmh driver is designed to support such scenarios.
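
Roughly, the borrow logic boils down to something like this (a sketch of the
idea, not the driver's exact code):

	struct tcs_group *tcs = get_tcs_of_type(drv, ACTIVE_TCS);

	/*
	 * No dedicated ACTIVE TCSes on this RSC: borrow one from the WAKE
	 * pool, enable its completion IRQ for the duration of the
	 * active-only transfer, then re-configure it back to WAKE use.
	 */
	if (msg->state == RPMH_ACTIVE_ONLY_STATE && !tcs->num_tcs)
		tcs = get_tcs_of_type(drv, WAKE_TCS);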

Thanks,
Maulik
>
>>> And so this isn't actually any different
>>> than what was proposed originally to use genpd for this?
>>>
> I guess this answer to this is yes. Which is fine. CPU PM notifiers are
> still used by various drivers to do things like save/restore state of
> devices that lose state when the CPUs power down. The use of genpd is
> helpful for OSI mode because it can describe how/when big and little
> clusters are powered off by putting them in different genpds. For
> counting the last CPU to turn off it seems simpler to just register for
> CPU PM notifiers and not care about genpd logic and nesting clusters,
> etc. I'm happy to see this not be a blocker.
Maulik Shah April 12, 2020, 2:04 p.m. UTC | #7
Hi,

On 4/10/2020 9:45 AM, Stephen Boyd wrote:
> Quoting Maulik Shah (2020-04-05 23:32:19)
>> Add changes to invoke rpmh flush() from CPU PM notification.
>> This is done when the last the cpu is entering power collapse and
> 'power collapse' is a qcom-ism. Maybe say something like "deep CPU idle
> states"?
Okay, I updated it in v17.
>
>> controller is not busy.
>>
>> Controllers that do have 'HW solver' mode do not need to register
>> for CPU PM notification. They may be in autonomous mode executing
>> low power mode and do not require rpmh_flush() to happen from CPU
>> PM notification.
> Can you provide an example of a HW solver mode controller? Presumably
> the display RSC is one of these?
Sure, added in v17 to mention the display RSC.
>> Signed-off-by: Maulik Shah <mkshah@codeaurora.org>
>> Reviewed-by: Douglas Anderson <dianders@chromium.org>
>> diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
>> index b718221..fbe1f3e 100644
>> --- a/drivers/soc/qcom/rpmh-rsc.c
>> +++ b/drivers/soc/qcom/rpmh-rsc.c
>> @@ -521,8 +527,86 @@ int rpmh_rsc_write_ctrl_data(struct rsc_drv *drv, const struct tcs_request *msg)
>>          return tcs_ctrl_write(drv, msg);
>>   }
>>   
>> +/**
>> + * rpmh_rsc_ctrlr_is_busy() - Check if any of the AMCs are busy.
>> + *
>> + * @drv: The controller
>> + *
>> + * Checks if any of the AMCs are busy in handling ACTIVE sets.
>> + * This is called from the last cpu powering down before flushing
>> + * SLEEP and WAKE sets. If AMCs are busy, controller can not enter
>> + * power collapse, so deny from the last cpu's pm notification.
>> + *
>> + * Return:
>> + * * False             - AMCs are idle
>> + * * True              - AMCs are busy
>> + */
>> +static bool rpmh_rsc_ctrlr_is_busy(struct rsc_drv *drv)
> Can drv be const? Would be nice to make it const in some places in this
> driver.

It can, but it will require changing multiple places to make it const. Some
of those functions are dropped/modified in Doug's cleanup series. I can do it
in a separate patch once this series is pulled in.

>
>> +{
>> +       int m;
>> +       struct tcs_group *tcs = get_tcs_of_type(drv, ACTIVE_TCS);
>> +
>> +       /*
>> +        * If we made an active request on a RSC that does not have a
>> +        * dedicated TCS for active state use, then re-purposed wake TCSes
>> +        * should be checked for not busy.
> not busy, because we use wake TCSes for active requests in this case.
Ok, added in v17.
>
>> +        *
>> +        * Since this is called from the last cpu, need not take drv or tcs
>> +        * lock before checking tcs_is_free().
>> +        */
>> +       if (!tcs->num_tcs)
>> +               tcs = get_tcs_of_type(drv, WAKE_TCS);
>> +
>> +       for (m = tcs->offset; m < tcs->offset + tcs->num_tcs; m++) {
>> +               if (!tcs_is_free(drv, m))
>> +                       return true;
>> +       }
>> +
>> +       return false;
>> +}
> [...]
>> +
>> diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
>> index a75f3df..88f8801 100644
>> --- a/drivers/soc/qcom/rpmh.c
>> +++ b/drivers/soc/qcom/rpmh.c
>> @@ -433,16 +430,17 @@ static int send_single(struct rpmh_ctrlr *ctrlr, enum rpmh_state state,
>>   }
>>   
>>   /**
>> - * rpmh_flush: Flushes the buffered active and sleep sets to TCS
>> + * rpmh_flush() - Flushes the buffered sleep and wake sets to TCSes
>>    *
>> - * @ctrlr: controller making request to flush cached data
>> + * @ctrlr: Controller making request to flush cached data
>>    *
>> - * Return: -EBUSY if the controller is busy, probably waiting on a response
>> - * to a RPMH request sent earlier.
>> + * This function is called from sleep code on the last CPU
>> + * (thus no spinlock needed).
> Might be good to stick a lockdep_assert_irqs_disabled() in this function
> then. That would document that this function should only be called when
> irqs are disabled.

Okay. Added lockdep_assert_irqs_disabled(). But this may need to be dropped
when we add support for the display RSC.
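
i.e. something like this sketch (not the exact v17 hunk):

int rpmh_flush(struct rpmh_ctrlr *ctrlr)
{
	/*
	 * Expected to run on the last CPU entering a deep idle state,
	 * with interrupts already disabled; let lockdep document and
	 * enforce that assumption.
	 */
	lockdep_assert_irqs_disabled();

	/* ... existing invalidate + sleep/wake TCS programming ... */
}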

>
>>    *
>> - * This function is always called from the sleep code from the last CPU
>> - * that is powering down the entire system. Since no other RPMH API would be
>> - * executing at this time, it is safe to run lockless.
>> + * Return:
>> + * * 0          - Success
>> + * * -EAGAIN    - Retry again
>> + * * Error code - Otherwise
>>    */
>>   int rpmh_flush(struct rpmh_ctrlr *ctrlr)
> This function name keeps throwing me off. Can we please call it
> something like rpmh_configure_tcs_sleep_wake()? The word "flush" seems
> to imply there's some sort of cache going on, but that's not really the
> case. We're programming a couple TCS FIFOs so that they can be used
> across deep CPU sleep states.
>
>>   {
>> @@ -455,9 +453,7 @@ int rpmh_flush(struct rpmh_ctrlr *ctrlr)
>>          }
>>   
>>          /* Invalidate the TCSes first to avoid stale data */
>> -       do {
>> -               ret = rpmh_rsc_invalidate(ctrlr_to_drv(ctrlr));
>> -       } while (ret == -EAGAIN);
>> +       ret = rpmh_rsc_invalidate(ctrlr_to_drv(ctrlr));
>>          if (ret)
>>                  return ret;

Thanks,

Maulik
Maulik Shah April 12, 2020, 2:10 p.m. UTC | #8
Hi,

On 4/10/2020 8:22 PM, Doug Anderson wrote:
> Hi,
>
> On Thu, Apr 9, 2020 at 9:15 PM Stephen Boyd <swboyd@chromium.org> wrote:
>>>   int rpmh_flush(struct rpmh_ctrlr *ctrlr)
>> This function name keeps throwing me off. Can we please call it
>> something like rpmh_configure_tcs_sleep_wake()? The word "flush" seems
>> to imply there's some sort of cache going on, but that's not really the
>> case. We're programming a couple TCS FIFOs so that they can be used
>> across deep CPU sleep states.
> I'm hoping this rename can be deferred until Maulik's series and my
> cleanup series land.  While I agree that rpmh_flush() is a bit of a
> confusing name, it's not a new name and renaming it midway through
> when there are a bunch of patches in-flight is going to be a hassle.
>
> Assuming others agree, my thought is that Maulik will do one more v17
> spin with small nits fixed up, then his series can land early next
> week when (presumably) -rc1 comes out.  If my current cleanup doesn't
> apply cleanly (or if Bjorn/Andy don't want to fix the two nits in
> commit messages when applying) I can repost my series and that can
> land in short order.  Once those land:
>
> * Maulik can post this rpmh_flush() rename atop.
>
> * I can post the patch to remove the "pm_lock" that was introduced in
> this series.  A preview at <https://crrev.com/c/2142823>.
>
> * Maulik can post some of the cleanups that Maulik wanted to do in
> rpmh.c with regards to __fill_rpmh_msg()
>
> ...assuming those are not controversial perhaps they can be reviewed
> quickly and land quickly?  I suppose I can always dream...
>
>
> -Doug

Agree, I can defer the rename until this series lands and then post the
above listed changes.

Thanks,
Maulik

Patch

diff --git a/drivers/soc/qcom/rpmh-internal.h b/drivers/soc/qcom/rpmh-internal.h
index 6eec32b..e9a90cb 100644
--- a/drivers/soc/qcom/rpmh-internal.h
+++ b/drivers/soc/qcom/rpmh-internal.h
@@ -84,23 +84,32 @@  struct rpmh_ctrlr {
  * struct rsc_drv: the Direct Resource Voter (DRV) of the
  * Resource State Coordinator controller (RSC)
  *
- * @name:       controller identifier
- * @tcs_base:   start address of the TCS registers in this controller
- * @id:         instance id in the controller (Direct Resource Voter)
- * @num_tcs:    number of TCSes in this DRV
- * @tcs:        TCS groups
- * @tcs_in_use: s/w state of the TCS
- * @lock:       synchronize state of the controller
- * @client:     handle to the DRV's client.
+ * @name:               Controller identifier
+ * @tcs_base:           Start address of the TCS registers in this controller
+ * @id:                 Instance id in the controller (Direct Resource Voter)
+ * @num_tcs:            Number of TCSes in this DRV
+ * @rsc_pm:             CPU PM notifier for controller
+ *                      Used when solver mode is not present
+ * @cpus_entered_pm:    CPU mask for cpus in idle power collapse
+ *                      Used when solver mode is not present
+ * @tcs:                TCS groups
+ * @tcs_in_use:         S/W state of the TCS
+ * @lock:               Synchronize state of the controller
+ * @pm_lock:            Synchronize during PM notifications
+ *                      Used when solver mode is not present
+ * @client:             Handle to the DRV's client.
  */
 struct rsc_drv {
 	const char *name;
 	void __iomem *tcs_base;
 	int id;
 	int num_tcs;
+	struct notifier_block rsc_pm;
+	struct cpumask cpus_entered_pm;
 	struct tcs_group tcs[TCS_TYPE_NR];
 	DECLARE_BITMAP(tcs_in_use, MAX_TCS_NR);
 	spinlock_t lock;
+	spinlock_t pm_lock;
 	struct rpmh_ctrlr client;
 };
 
diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
index b718221..fbe1f3e 100644
--- a/drivers/soc/qcom/rpmh-rsc.c
+++ b/drivers/soc/qcom/rpmh-rsc.c
@@ -6,6 +6,7 @@ 
 #define pr_fmt(fmt) "%s " fmt, KBUILD_MODNAME
 
 #include <linux/atomic.h>
+#include <linux/cpu_pm.h>
 #include <linux/delay.h>
 #include <linux/interrupt.h>
 #include <linux/io.h>
@@ -30,7 +31,12 @@ 
 #define RSC_DRV_TCS_OFFSET		672
 #define RSC_DRV_CMD_OFFSET		20
 
-/* DRV Configuration Information Register */
+/* DRV HW Solver Configuration Information Register */
+#define DRV_SOLVER_CONFIG		0x04
+#define DRV_HW_SOLVER_MASK		1
+#define DRV_HW_SOLVER_SHIFT		24
+
+/* DRV TCS Configuration Information Register */
 #define DRV_PRNT_CHLD_CONFIG		0x0C
 #define DRV_NUM_TCS_MASK		0x3F
 #define DRV_NUM_TCS_SHIFT		6
@@ -521,8 +527,86 @@  int rpmh_rsc_write_ctrl_data(struct rsc_drv *drv, const struct tcs_request *msg)
 	return tcs_ctrl_write(drv, msg);
 }
 
+/**
+ * rpmh_rsc_ctrlr_is_busy() - Check if any of the AMCs are busy.
+ *
+ * @drv: The controller
+ *
+ * Checks if any of the AMCs are busy in handling ACTIVE sets.
+ * This is called from the last cpu powering down before flushing
+ * SLEEP and WAKE sets. If AMCs are busy, controller can not enter
+ * power collapse, so deny from the last cpu's pm notification.
+ *
+ * Return:
+ * * False		- AMCs are idle
+ * * True		- AMCs are busy
+ */
+static bool rpmh_rsc_ctrlr_is_busy(struct rsc_drv *drv)
+{
+	int m;
+	struct tcs_group *tcs = get_tcs_of_type(drv, ACTIVE_TCS);
+
+	/*
+	 * If we made an active request on a RSC that does not have a
+	 * dedicated TCS for active state use, then re-purposed wake TCSes
+	 * should be checked for not busy.
+	 *
+	 * Since this is called from the last cpu, need not take drv or tcs
+	 * lock before checking tcs_is_free().
+	 */
+	if (!tcs->num_tcs)
+		tcs = get_tcs_of_type(drv, WAKE_TCS);
+
+	for (m = tcs->offset; m < tcs->offset + tcs->num_tcs; m++) {
+		if (!tcs_is_free(drv, m))
+			return true;
+	}
+
+	return false;
+}
+
+static int rpmh_rsc_cpu_pm_callback(struct notifier_block *nfb,
+				    unsigned long action, void *v)
+{
+	struct rsc_drv *drv = container_of(nfb, struct rsc_drv, rsc_pm);
+	int ret = NOTIFY_OK;
+
+	spin_lock(&drv->pm_lock);
+
+	switch (action) {
+	case CPU_PM_ENTER:
+		cpumask_set_cpu(raw_smp_processor_id(),
+				&drv->cpus_entered_pm);
+
+		if (!cpumask_equal(&drv->cpus_entered_pm, cpu_online_mask))
+			goto exit;
+		break;
+	case CPU_PM_ENTER_FAILED:
+	case CPU_PM_EXIT:
+		cpumask_clear_cpu(raw_smp_processor_id(),
+				  &drv->cpus_entered_pm);
+		goto exit;
+	}
+
+	ret = rpmh_rsc_ctrlr_is_busy(drv);
+	if (ret) {
+		ret = NOTIFY_BAD;
+		goto exit;
+	}
+
+	ret = rpmh_flush(&drv->client);
+	if (ret)
+		ret = NOTIFY_BAD;
+	else
+		ret = NOTIFY_OK;
+
+exit:
+	spin_unlock(&drv->pm_lock);
+	return ret;
+}
+
 static int rpmh_probe_tcs_config(struct platform_device *pdev,
-				 struct rsc_drv *drv)
+				 struct rsc_drv *drv, void __iomem *base)
 {
 	struct tcs_type_config {
 		u32 type;
@@ -532,15 +616,6 @@  static int rpmh_probe_tcs_config(struct platform_device *pdev,
 	u32 config, max_tcs, ncpt, offset;
 	int i, ret, n, st = 0;
 	struct tcs_group *tcs;
-	struct resource *res;
-	void __iomem *base;
-	char drv_id[10] = {0};
-
-	snprintf(drv_id, ARRAY_SIZE(drv_id), "drv-%d", drv->id);
-	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, drv_id);
-	base = devm_ioremap_resource(&pdev->dev, res);
-	if (IS_ERR(base))
-		return PTR_ERR(base);
 
 	ret = of_property_read_u32(dn, "qcom,tcs-offset", &offset);
 	if (ret)
@@ -620,7 +695,11 @@  static int rpmh_rsc_probe(struct platform_device *pdev)
 {
 	struct device_node *dn = pdev->dev.of_node;
 	struct rsc_drv *drv;
+	struct resource *res;
+	char drv_id[10] = {0};
 	int ret, irq;
+	u32 solver_config;
+	void __iomem *base;
 
 	/*
 	 * Even though RPMh doesn't directly use cmd-db, all of its children
@@ -646,7 +725,13 @@  static int rpmh_rsc_probe(struct platform_device *pdev)
 	if (!drv->name)
 		drv->name = dev_name(&pdev->dev);
 
-	ret = rpmh_probe_tcs_config(pdev, drv);
+	snprintf(drv_id, ARRAY_SIZE(drv_id), "drv-%d", drv->id);
+	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, drv_id);
+	base = devm_ioremap_resource(&pdev->dev, res);
+	if (IS_ERR(base))
+		return PTR_ERR(base);
+
+	ret = rpmh_probe_tcs_config(pdev, drv, base);
 	if (ret)
 		return ret;
 
@@ -663,6 +748,20 @@  static int rpmh_rsc_probe(struct platform_device *pdev)
 	if (ret)
 		return ret;
 
+	/*
+	 * CPU PM notification are not required for controllers that support
+	 * 'HW solver' mode where they can be in autonomous mode executing low
+	 * power mode to power down.
+	 */
+	solver_config = readl_relaxed(base + DRV_SOLVER_CONFIG);
+	solver_config &= DRV_HW_SOLVER_MASK << DRV_HW_SOLVER_SHIFT;
+	solver_config = solver_config >> DRV_HW_SOLVER_SHIFT;
+	if (!solver_config) {
+		drv->rsc_pm.notifier_call = rpmh_rsc_cpu_pm_callback;
+		spin_lock_init(&drv->pm_lock);
+		cpu_pm_register_notifier(&drv->rsc_pm);
+	}
+
 	/* Enable the active TCS to send requests immediately */
 	write_tcs_reg(drv, RSC_DRV_IRQ_ENABLE, 0, drv->tcs[ACTIVE_TCS].mask);
 
diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
index a75f3df..88f8801 100644
--- a/drivers/soc/qcom/rpmh.c
+++ b/drivers/soc/qcom/rpmh.c
@@ -297,12 +297,10 @@  static int flush_batch(struct rpmh_ctrlr *ctrlr)
 {
 	struct batch_cache_req *req;
 	const struct rpmh_request *rpm_msg;
-	unsigned long flags;
 	int ret = 0;
 	int i;
 
 	/* Send Sleep/Wake requests to the controller, expect no response */
-	spin_lock_irqsave(&ctrlr->cache_lock, flags);
 	list_for_each_entry(req, &ctrlr->batch_cache, list) {
 		for (i = 0; i < req->count; i++) {
 			rpm_msg = req->rpm_msgs + i;
@@ -312,7 +310,6 @@  static int flush_batch(struct rpmh_ctrlr *ctrlr)
 				break;
 		}
 	}
-	spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
 
 	return ret;
 }
@@ -433,16 +430,17 @@  static int send_single(struct rpmh_ctrlr *ctrlr, enum rpmh_state state,
 }
 
 /**
- * rpmh_flush: Flushes the buffered active and sleep sets to TCS
+ * rpmh_flush() - Flushes the buffered sleep and wake sets to TCSes
  *
- * @ctrlr: controller making request to flush cached data
+ * @ctrlr: Controller making request to flush cached data
  *
- * Return: -EBUSY if the controller is busy, probably waiting on a response
- * to a RPMH request sent earlier.
+ * This function is called from sleep code on the last CPU
+ * (thus no spinlock needed).
  *
- * This function is always called from the sleep code from the last CPU
- * that is powering down the entire system. Since no other RPMH API would be
- * executing at this time, it is safe to run lockless.
+ * Return:
+ * * 0          - Success
+ * * -EAGAIN    - Retry again
+ * * Error code - Otherwise
  */
 int rpmh_flush(struct rpmh_ctrlr *ctrlr)
 {
@@ -455,9 +453,7 @@  int rpmh_flush(struct rpmh_ctrlr *ctrlr)
 	}
 
 	/* Invalidate the TCSes first to avoid stale data */
-	do {
-		ret = rpmh_rsc_invalidate(ctrlr_to_drv(ctrlr));
-	} while (ret == -EAGAIN);
+	ret = rpmh_rsc_invalidate(ctrlr_to_drv(ctrlr));
 	if (ret)
 		return ret;
 
@@ -466,10 +462,6 @@  int rpmh_flush(struct rpmh_ctrlr *ctrlr)
 	if (ret)
 		return ret;
 
-	/*
-	 * Nobody else should be calling this function other than system PM,
-	 * hence we can run without locks.
-	 */
 	list_for_each_entry(p, &ctrlr->cache, list) {
 		if (!is_req_valid(p)) {
 			pr_debug("%s: skipping RPMH req: a:%#x s:%#x w:%#x",