
[v14,6/6] soc: qcom: rpmh-rsc: Allow using free WAKE TCS for active request

Message ID 1585244270-637-7-git-send-email-mkshah@codeaurora.org (mailing list archive)
State Superseded
Series Invoke rpmh_flush for non OSI targets

Commit Message

Maulik Shah March 26, 2020, 5:37 p.m. UTC
When more than one WAKE TCS is available and there is no dedicated
ACTIVE TCS available, invalidating all WAKE TCSes and waiting for the
current transfer to complete in the first WAKE TCS blocks another free
WAKE TCS from being used to complete the current request.

Remove the rpmh_rsc_invalidate() call from tcs_write() when a WAKE TCS
is re-purposed for active mode use. Clear only the register
configuration of the WAKE TCS currently being used.

Mark the caches as dirty so that the next time rpmh_flush() is invoked
it can invalidate and program the cached sleep and wake sets again.

Fixes: 2de4b8d33eab ("drivers: qcom: rpmh-rsc: allow active requests from wake TCS")
Signed-off-by: Maulik Shah <mkshah@codeaurora.org>
---
 drivers/soc/qcom/rpmh-rsc.c | 29 +++++++++++++++++++----------
 1 file changed, 19 insertions(+), 10 deletions(-)
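
For context, the dirty flag gates rpmh_flush() in rpmh.c. A trimmed
sketch of that flow, assuming rpmh.c's internals as of this series
(send_single() and the ctrlr->cache list are that file's helpers;
batch-request handling is omitted here):

int rpmh_flush(struct rpmh_ctrlr *ctrlr)
{
	struct cache_req *p;
	int ret;

	if (!ctrlr->dirty) {
		pr_debug("Skipping flush, TCS has latest data.\n");
		return 0;
	}

	/* Invalidate the TCSes first so no stale sleep/wake data remains */
	do {
		ret = rpmh_rsc_invalidate(ctrlr_to_drv(ctrlr));
	} while (ret == -EAGAIN);
	if (ret)
		return ret;

	/* Re-program every cached sleep and wake vote */
	list_for_each_entry(p, &ctrlr->cache, list) {
		if (p->sleep_val == UINT_MAX || p->wake_val == UINT_MAX)
			continue;	/* no sleep/wake vote cached for this address */
		ret = send_single(ctrlr, RPMH_SLEEP_STATE, p->addr, p->sleep_val);
		if (ret)
			return ret;
		ret = send_single(ctrlr, RPMH_WAKE_ONLY_STATE, p->addr, p->wake_val);
		if (ret)
			return ret;
	}

	ctrlr->dirty = false;

	return 0;
}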

Comments

Douglas Anderson March 26, 2020, 9:46 p.m. UTC | #1
Hi,

On Thu, Mar 26, 2020 at 10:38 AM Maulik Shah <mkshah@codeaurora.org> wrote:
>
> When more than one WAKE TCS is available and there is no dedicated
> ACTIVE TCS available, invalidating all WAKE TCSes and waiting for the
> current transfer to complete in the first WAKE TCS blocks another free
> WAKE TCS from being used to complete the current request.
>
> Remove the rpmh_rsc_invalidate() call from tcs_write() when a WAKE TCS
> is re-purposed for active mode use. Clear only the register
> configuration of the WAKE TCS currently being used.
>
> Mark the caches as dirty so that the next time rpmh_flush() is invoked
> it can invalidate and program the cached sleep and wake sets again.
>
> Fixes: 2de4b8d33eab ("drivers: qcom: rpmh-rsc: allow active requests from wake TCS")
> Signed-off-by: Maulik Shah <mkshah@codeaurora.org>
> ---
>  drivers/soc/qcom/rpmh-rsc.c | 29 +++++++++++++++++++----------
>  1 file changed, 19 insertions(+), 10 deletions(-)
>
> diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
> index 8fa70b4..c0513af 100644
> --- a/drivers/soc/qcom/rpmh-rsc.c
> +++ b/drivers/soc/qcom/rpmh-rsc.c
> @@ -154,8 +154,9 @@ int rpmh_rsc_invalidate(struct rsc_drv *drv)
>  static struct tcs_group *get_tcs_for_msg(struct rsc_drv *drv,
>                                          const struct tcs_request *msg)
>  {
> -       int type, ret;
> +       int type;
>         struct tcs_group *tcs;
> +       unsigned long flags;
>
>         switch (msg->state) {
>         case RPMH_ACTIVE_ONLY_STATE:
> @@ -175,18 +176,18 @@ static struct tcs_group *get_tcs_for_msg(struct rsc_drv *drv,
>          * If we are making an active request on a RSC that does not have a
>          * dedicated TCS for active state use, then re-purpose a wake TCS to
>          * send active votes.
> -        * NOTE: The driver must be aware that this RSC does not have a
> -        * dedicated AMC, and therefore would invalidate the sleep and wake
> -        * TCSes before making an active state request.
> +        *
> +        * NOTE: Mark caches as dirty here since existing data in the wake
> +        * TCS will be lost. rpmh_flush() will process the dirty caches to
> +        * restore the data.
>          */
>         tcs = get_tcs_of_type(drv, type);
>         if (msg->state == RPMH_ACTIVE_ONLY_STATE && !tcs->num_tcs) {
>                 tcs = get_tcs_of_type(drv, WAKE_TCS);
> -               if (tcs->num_tcs) {
> -                       ret = rpmh_rsc_invalidate(drv);
> -                       if (ret)
> -                               return ERR_PTR(ret);
> -               }
> +
> +               spin_lock_irqsave(&drv->client.cache_lock, flags);
> +               drv->client.dirty = true;
> +               spin_unlock_irqrestore(&drv->client.cache_lock, flags);

This seems like a huge abstraction violation.  Why can't rpmh_write()
/ rpmh_write_async() / rpmh_write_batch() just always unconditionally
mark the cache dirty?  Are there really lots of cases when those calls
are made and they do nothing?
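
For concreteness, a sketch of that alternative, assuming rpmh.c's
cache_rpm_request() (the __find_req() helper is that file's; the
find-or-create details are elided into a hypothetical alloc_req()):

static struct cache_req *cache_rpm_request(struct rpmh_ctrlr *ctrlr,
					   enum rpmh_state state,
					   struct tcs_cmd *cmd)
{
	struct cache_req *req;
	unsigned long flags;

	spin_lock_irqsave(&ctrlr->cache_lock, flags);
	req = __find_req(ctrlr, cmd->addr);
	if (!req)
		req = alloc_req(ctrlr, cmd);	/* hypothetical find-or-create */

	switch (state) {
	case RPMH_ACTIVE_ONLY_STATE:	/* an active vote also updates wake */
	case RPMH_WAKE_ONLY_STATE:
		req->wake_val = cmd->data;
		break;
	case RPMH_SLEEP_STATE:
		req->sleep_val = cmd->data;
		break;
	}

	/* Every rpmh_write*() path lands here, so always mark dirty */
	ctrlr->dirty = true;
	spin_unlock_irqrestore(&ctrlr->cache_lock, flags);

	return req;
}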


Other than that this patch seems sane to me and addresses one of the
comments I had in:

https://lore.kernel.org/r/CAD=FV=XmBQb8yfx14T-tMQ68F-h=3UHog744b3X3JZViu15+4g@mail.gmail.com

...interestingly, after your patch I guess tcs_invalidate() no longer
needs spinlocks since it's only ever called from PM code on the last
CPU.  ...if you agree, I can always do it in my cleanup
series.  See:

https://lore.kernel.org/r/CAD=FV=Xp1o68HnC2-hMnffDDsi+jjgc9pNrdNuypjQZbS5K4nQ@mail.gmail.com
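
For what it's worth, a rough sketch of that lock-free version, assuming
the driver's existing helpers (get_tcs_of_type(), tcs_is_free(),
write_tcs_reg_sync()):

static int tcs_invalidate(struct rsc_drv *drv, int type)
{
	int m;
	struct tcs_group *tcs = get_tcs_of_type(drv, type);

	/* Only ever called from PM code on the last CPU: no tcs->lock */
	if (bitmap_empty(tcs->slots, MAX_TCS_SLOTS))
		return 0;

	for (m = tcs->offset; m < tcs->offset + tcs->num_tcs; m++) {
		if (!tcs_is_free(drv, m))
			return -EAGAIN;
		write_tcs_reg_sync(drv, RSC_DRV_CMD_ENABLE, m, 0);
		write_tcs_reg_sync(drv, RSC_DRV_CMD_WAIT_FOR_CMPL, m, 0);
	}
	bitmap_zero(tcs->slots, MAX_TCS_SLOTS);

	return 0;
}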

-Doug
Maulik Shah March 27, 2020, 12:04 p.m. UTC | #2
Hi,

On 3/27/2020 3:16 AM, Doug Anderson wrote:
> Hi,
>
> On Thu, Mar 26, 2020 at 10:38 AM Maulik Shah <mkshah@codeaurora.org> wrote:
>> When more than one WAKE TCS is available and there is no dedicated
>> ACTIVE TCS available, invalidating all WAKE TCSes and waiting for the
>> current transfer to complete in the first WAKE TCS blocks another free
>> WAKE TCS from being used to complete the current request.
>>
>> Remove the rpmh_rsc_invalidate() call from tcs_write() when a WAKE TCS
>> is re-purposed for active mode use. Clear only the register
>> configuration of the WAKE TCS currently being used.
>>
>> Mark the caches as dirty so that the next time rpmh_flush() is invoked
>> it can invalidate and program the cached sleep and wake sets again.
>>
>> Fixes: 2de4b8d33eab ("drivers: qcom: rpmh-rsc: allow active requests from wake TCS")
>> Signed-off-by: Maulik Shah <mkshah@codeaurora.org>
>> ---
>>  drivers/soc/qcom/rpmh-rsc.c | 29 +++++++++++++++++++----------
>>  1 file changed, 19 insertions(+), 10 deletions(-)
>>
>> diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
>> index 8fa70b4..c0513af 100644
>> --- a/drivers/soc/qcom/rpmh-rsc.c
>> +++ b/drivers/soc/qcom/rpmh-rsc.c
>> @@ -154,8 +154,9 @@ int rpmh_rsc_invalidate(struct rsc_drv *drv)
>>  static struct tcs_group *get_tcs_for_msg(struct rsc_drv *drv,
>>                                          const struct tcs_request *msg)
>>  {
>> -       int type, ret;
>> +       int type;
>>         struct tcs_group *tcs;
>> +       unsigned long flags;
>>
>>         switch (msg->state) {
>>         case RPMH_ACTIVE_ONLY_STATE:
>> @@ -175,18 +176,18 @@ static struct tcs_group *get_tcs_for_msg(struct rsc_drv *drv,
>>          * If we are making an active request on a RSC that does not have a
>>          * dedicated TCS for active state use, then re-purpose a wake TCS to
>>          * send active votes.
>> -        * NOTE: The driver must be aware that this RSC does not have a
>> -        * dedicated AMC, and therefore would invalidate the sleep and wake
>> -        * TCSes before making an active state request.
>> +        *
>> +        * NOTE: Mark caches as dirty here since existing data in the wake
>> +        * TCS will be lost. rpmh_flush() will process the dirty caches to
>> +        * restore the data.
>>          */
>>         tcs = get_tcs_of_type(drv, type);
>>         if (msg->state == RPMH_ACTIVE_ONLY_STATE && !tcs->num_tcs) {
>>                 tcs = get_tcs_of_type(drv, WAKE_TCS);
>> -               if (tcs->num_tcs) {
>> -                       ret = rpmh_rsc_invalidate(drv);
>> -                       if (ret)
>> -                               return ERR_PTR(ret);
>> -               }
>> +
>> +               spin_lock_irqsave(&drv->client.cache_lock, flags);
>> +               drv->client.dirty = true;
>> +               spin_unlock_irqrestore(&drv->client.cache_lock, flags);
> This seems like a huge abstraction violation.  

Agreed, the cache_lock and dirty flag belong to rpmh.c.

I will address this by either notifying rpmh.c to mark the cache dirty or finding another solution.

> Why can't rpmh_write()
> / rpmh_write_async() / rpmh_write_batch() just always unconditionally
> mark the cache dirty?  Are there really lots of cases when those calls
> are made and they do nothing?

rpmh.c doesn't know that rpmh-rsc.c worked on a borrowed TCS to finish the request.

We should not blindly mark the caches dirty every time.

>
>
> Other than that this patch seems sane to me and addresses one of the
> comments I had in:
>
> https://lore.kernel.org/r/CAD=FV=XmBQb8yfx14T-tMQ68F-h=3UHog744b3X3JZViu15+4g@mail.gmail.com
>
> ...interestingly, after your patch I guess tcs_invalidate() no longer
> needs spinlocks since it's only ever called from PM code on the last
> CPU.  ...if you agree, I can always do it in my cleanup
> series.  See:
>
> https://lore.kernel.org/r/CAD=FV=Xp1o68HnC2-hMnffDDsi+jjgc9pNrdNuypjQZbS5K4nQ@mail.gmail.com
>
> -Doug

There are other RSCs which use the same driver, so let's keep the spinlock.

I still didn't get a chance to validate your patch (I will have an update sometime next week). Just to note, I have never seen any issue internally using spin_lock even in the nosmp case, though that might require changing to the _irqsave/_irqrestore variant.

Thanks,
Maulik
Douglas Anderson March 27, 2020, 6:42 p.m. UTC | #3
Hi,

On Fri, Mar 27, 2020 at 5:04 AM Maulik Shah <mkshah@codeaurora.org> wrote:
>
> > Why can't rpmh_write()
> > / rpmh_write_async() / rpmh_write_batch() just always unconditionally
> > mark the cache dirty?  Are there really lots of cases when those calls
> > are made and they do nothing?
>
> rpmh.c doesn't know that rpmh-rsc.c worked on a borrowed TCS to finish the request.
>
> We should not blindly mark the caches dirty every time.

In message ID "5a5274ac-41f4-b06d-ff49-c00cef67aa7f@codeaurora.org"
which seems to be missing from the archives, you said:

> yes we should trust callers not to send duplicate data

...you can see some reference to it in my reply:

https://lore.kernel.org/r/CAD=FV=VPSahhK71k_D+nfL1=5QE5sKMQT=6zzyEF7+JWMcTxsg@mail.gmail.com/

If callers are trusted to never send duplicate data then every call to
rpmh_write() will make a change.  ...and thus the cache should always
be marked dirty, no?  Also note that since rpmh_write() to "active"
also counts as a write to "wake", even those will dirty the cache.

Which case are you expecting a rpmh_write() call to not dirty the cache?


> > ...interestingly, after your patch I guess tcs_invalidate() no longer
> > needs spinlocks since it's only ever called from PM code on the last
> > CPU.  ...if you agree, I can always do it in my cleanup
> > series.  See:
> >
> > https://lore.kernel.org/r/CAD=FV=Xp1o68HnC2-hMnffDDsi+jjgc9pNrdNuypjQZbS5K4nQ@mail.gmail.com
> >
> > -Doug
>
> There are other RSCs which use same driver, so lets keep spinlock.

It is really hard to write code keeping in mind these "other RSCs"
for which there is no code upstream.  IMO we should write the code
keeping in mind what is supported upstream and then when those "other
RSCs" get added we can evaluate their needs.

Specifically when reasoning about rpmh.c and rpmh-rsc.c I can only
look at the code that's there now and decide whether it is race free
or there are races.  Back when I was analyzing the proposal to do
rpmh_flush() all the time (not from PM code) it felt like there were a
bunch of races, especially in the zero-active-TCS case.  Most of the
races go away when you assume that rpmh_flush() is only ever called
from the PM code when nobody could be in the middle of an active
transfer.

If we are ever planning to call rpmh_flush() from another place we
need to re-look at all those races.


-Doug
Maulik Shah March 31, 2020, 8:57 a.m. UTC | #4
Hi,

On 3/28/2020 12:12 AM, Doug Anderson wrote:
> Hi,
>
> On Fri, Mar 27, 2020 at 5:04 AM Maulik Shah <mkshah@codeaurora.org> wrote:
>>> Why can't rpmh_write()
>>> / rpmh_write_async() / rpmh_write_batch() just always unconditionally
>>> mark the cache dirty?  Are there really lots of cases when those calls
>>> are made and they do nothing?
>> rpmh.c doesn't know that rpmh-rsc.c worked on a borrowed TCS to finish the request.
>>
>> We should not blindly mark the caches dirty every time.
> In message ID "5a5274ac-41f4-b06d-ff49-c00cef67aa7f@codeaurora.org"
> which seems to be missing from the archives, you said:
>
>> yes we should trust callers not to send duplicate data
> ...you can see some reference to it in my reply:
>
> https://lore.kernel.org/r/CAD=FV=VPSahhK71k_D+nfL1=5QE5sKMQT=6zzyEF7+JWMcTxsg@mail.gmail.com/
>
> If callers are trusted to never send duplicate data then every call to
> rpmh_write() will make a change.  ...and thus the cache should always
> be marked dirty, no?  Also note that since rpmh_write() to "active"
> also counts as a write to "wake", even those will dirty the cache.
>
> Which case are you expecting a rpmh_write() call to not dirty the cache?
Ok, I will remove marking the cache dirty here.
>
>
>>> ...interestingly, after your patch I guess tcs_invalidate() no longer
>>> needs spinlocks since it's only ever called from PM code on the last
>>> CPU.  ...if you agree, I can always do it in my cleanup
>>> series.  See:
>>>
>>> https://lore.kernel.org/r/CAD=FV=Xp1o68HnC2-hMnffDDsi+jjgc9pNrdNuypjQZbS5K4nQ@mail.gmail.com
>>>
>>> -Doug
>> There are other RSCs which use same driver, so lets keep spinlock.
> It is really hard to write code keeping in mind these "other RSCs"
> for which there is no code upstream.  IMO we should write the code
> keeping in mind what is supported upstream and then when those "other
> RSCs" get added we can evaluate their needs.

Agreed, but I would insist on not removing the existing locks in your
cleanup/documentation series; they would only need to be added back
again.

The locks don't cause any issue being there, since only the last CPU
invokes rpmh_flush() at present.

Adding support for other RSCs is on my to-do list, and when that is
done we can re-evaluate and remove the locks if they are not required.

>
> Specifically when reasoning about rpmh.c and rpmh-rsc.c I can only
> look at the code that's there now and decide whether it is race free
> or there are races.  Back when I was analyzing the proposal to do
> rpmh_flush() all the time (not from PM code) it felt like there were a
> bunch of races, especially in the zero-active-TCS case.  Most of the
> races go away when you assume that rpmh_flush() is only ever called
> from the PM code when nobody could be in the middle of an active
> transfer.
>
> If we are ever planning to call rpmh_flush() from another place we
> need to re-look at all those races.
Sure, we can re-look at all those cases.
>
>
> -Doug
Thanks,
Maulik

Patch

diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
index 8fa70b4..c0513af 100644
--- a/drivers/soc/qcom/rpmh-rsc.c
+++ b/drivers/soc/qcom/rpmh-rsc.c
@@ -154,8 +154,9 @@  int rpmh_rsc_invalidate(struct rsc_drv *drv)
 static struct tcs_group *get_tcs_for_msg(struct rsc_drv *drv,
 					 const struct tcs_request *msg)
 {
-	int type, ret;
+	int type;
 	struct tcs_group *tcs;
+	unsigned long flags;
 
 	switch (msg->state) {
 	case RPMH_ACTIVE_ONLY_STATE:
@@ -175,18 +176,18 @@  static struct tcs_group *get_tcs_for_msg(struct rsc_drv *drv,
 	 * If we are making an active request on a RSC that does not have a
 	 * dedicated TCS for active state use, then re-purpose a wake TCS to
 	 * send active votes.
-	 * NOTE: The driver must be aware that this RSC does not have a
-	 * dedicated AMC, and therefore would invalidate the sleep and wake
-	 * TCSes before making an active state request.
+	 *
+	 * NOTE: Mark caches as dirty here since existing data in the wake
+	 * TCS will be lost. rpmh_flush() will process the dirty caches to
+	 * restore the data.
 	 */
 	tcs = get_tcs_of_type(drv, type);
 	if (msg->state == RPMH_ACTIVE_ONLY_STATE && !tcs->num_tcs) {
 		tcs = get_tcs_of_type(drv, WAKE_TCS);
-		if (tcs->num_tcs) {
-			ret = rpmh_rsc_invalidate(drv);
-			if (ret)
-				return ERR_PTR(ret);
-		}
+
+		spin_lock_irqsave(&drv->client.cache_lock, flags);
+		drv->client.dirty = true;
+		spin_unlock_irqrestore(&drv->client.cache_lock, flags);
 	}
 
 	return tcs;
@@ -412,8 +413,16 @@  static int tcs_write(struct rsc_drv *drv, const struct tcs_request *msg)
 
 	tcs->req[tcs_id - tcs->offset] = msg;
 	set_bit(tcs_id, drv->tcs_in_use);
-	if (msg->state == RPMH_ACTIVE_ONLY_STATE && tcs->type != ACTIVE_TCS)
+	if (msg->state == RPMH_ACTIVE_ONLY_STATE && tcs->type != ACTIVE_TCS) {
+		/*
+		 * Clear previously programmed WAKE commands in the selected
+		 * re-purposed TCS to avoid triggering them. tcs->slots will be
+		 * cleared by rpmh_flush() when it invokes rpmh_rsc_invalidate().
+		 */
+		write_tcs_reg_sync(drv, RSC_DRV_CMD_ENABLE, tcs_id, 0);
+		write_tcs_reg_sync(drv, RSC_DRV_CMD_WAIT_FOR_CMPL, tcs_id, 0);
 		enable_tcs_irq(drv, tcs_id, true);
+	}
 	spin_unlock(&drv->lock);
 
 	__tcs_buffer_write(drv, tcs_id, 0, msg);
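
For readers following along, a sketch of the resulting flow for an
active-only request on an RSC with no dedicated ACTIVE TCS (function
names per this series):

/*
 *   rpmh_rsc_send_data()
 *     tcs_write()
 *       get_tcs_for_msg()  -> borrows a free WAKE TCS and marks the
 *                             rpmh.c cache dirty
 *       write_tcs_reg_sync(RSC_DRV_CMD_ENABLE, tcs_id, 0) and
 *       write_tcs_reg_sync(RSC_DRV_CMD_WAIT_FOR_CMPL, tcs_id, 0)
 *                          -> clear stale WAKE commands in that TCS
 *       enable_tcs_irq()   -> completion IRQ for the borrowed TCS
 *       __tcs_buffer_write() and then the TCS trigger
 *
 * Later, from PM code on the last CPU, rpmh_flush() sees the dirty
 * flag, calls rpmh_rsc_invalidate() and re-programs the cached sleep
 * and wake sets.
 */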