Message ID | 20200422145408.v4.1.Ic7096b3b9b7828cdd41cd5469a6dee5eb6abf549@changeid
---|---
State | New, archived
Series | [v4,1/5] soc: qcom: rpmh-rsc: Corrently ignore CPU_CLUSTER_PM notifications
Hi,

there is a typo in subject, Corrently to correctly.
Other than this, fix seems good to me.

Reviewed-by: Maulik Shah <mkshah@codeaurora.org>

Thanks,
Maulik

On 4/23/2020 3:24 AM, Douglas Anderson wrote:
> Our switch statement doesn't have entries for CPU_CLUSTER_PM_ENTER,
> CPU_CLUSTER_PM_ENTER_FAILED, and CPU_CLUSTER_PM_EXIT and doesn't have
> a default. This means that we'll try to do a flush in those cases but
> we won't necessarily be the last CPU down. That's not so ideal since
> our (lack of) locking assumes we're on the last CPU.
>
> Luckily this isn't as big a problem as you'd think since (at least on
> the SoC I tested) we don't get these notifications except on full
> system suspend. ...and on full system suspend we get them on the last
> CPU down. That means that the worst problem we hit is flushing twice.
> Still, it's good to make it correct.
>
> Fixes: 985427f997b6 ("soc: qcom: rpmh: Invoke rpmh_flush() for dirty caches")
> Reported-by: Stephen Boyd <swboyd@chromium.org>
> Signed-off-by: Douglas Anderson <dianders@chromium.org>
> ---
>
> Changes in v4:
> - ("...Corrently ignore CPU_CLUSTER_PM notifications") split out for v4.
>
> Changes in v3: None
> Changes in v2: None
>
>  drivers/soc/qcom/rpmh-rsc.c | 2 ++
>  1 file changed, 2 insertions(+)
>
> diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
> index a9e15699f55f..3571a99fc839 100644
> --- a/drivers/soc/qcom/rpmh-rsc.c
> +++ b/drivers/soc/qcom/rpmh-rsc.c
> @@ -806,6 +806,8 @@ static int rpmh_rsc_cpu_pm_callback(struct notifier_block *nfb,
>  	case CPU_PM_EXIT:
>  		cpumask_clear_cpu(smp_processor_id(), &drv->cpus_entered_pm);
>  		goto exit;
> +	default:
> +		return NOTIFY_DONE;
>  	}
>
>  	ret = rpmh_rsc_ctrlr_is_busy(drv);
Hi,

On Wed, Apr 22, 2020 at 9:45 PM Maulik Shah <mkshah@codeaurora.org> wrote:
>
> Hi,
>
> there is a typo in subject, Corrently to correctly.
> Other than this, fix seems good to me.
>
> Reviewed-by: Maulik Shah <mkshah@codeaurora.org>

Sigh. My brian has never worked very well. One of these days I'll see
if I can get it tuned up.

Unless there is another reason to spin this series or I'm requested to,
I'll assume that Bjron or Andy can fix my typo in the subject when
applying.

Thanks!

-Doug
Quoting Douglas Anderson (2020-04-22 14:54:59)
> Our switch statement doesn't have entries for CPU_CLUSTER_PM_ENTER,
> CPU_CLUSTER_PM_ENTER_FAILED, and CPU_CLUSTER_PM_EXIT and doesn't have
> a default. This means that we'll try to do a flush in those cases but
> we won't necessarily be the last CPU down. That's not so ideal since
> our (lack of) locking assumes we're on the last CPU.
>
> Luckily this isn't as big a problem as you'd think since (at least on
> the SoC I tested) we don't get these notifications except on full
> system suspend. ...and on full system suspend we get them on the last
> CPU down. That means that the worst problem we hit is flushing twice.
> Still, it's good to make it correct.
>
> Fixes: 985427f997b6 ("soc: qcom: rpmh: Invoke rpmh_flush() for dirty caches")
> Reported-by: Stephen Boyd <swboyd@chromium.org>
> Signed-off-by: Douglas Anderson <dianders@chromium.org>
> ---

Reviewed-by: Stephen Boyd <swboyd@chromium.org>
diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
index a9e15699f55f..3571a99fc839 100644
--- a/drivers/soc/qcom/rpmh-rsc.c
+++ b/drivers/soc/qcom/rpmh-rsc.c
@@ -806,6 +806,8 @@ static int rpmh_rsc_cpu_pm_callback(struct notifier_block *nfb,
 	case CPU_PM_EXIT:
 		cpumask_clear_cpu(smp_processor_id(), &drv->cpus_entered_pm);
 		goto exit;
+	default:
+		return NOTIFY_DONE;
 	}

 	ret = rpmh_rsc_ctrlr_is_busy(drv);
Our switch statement doesn't have entries for CPU_CLUSTER_PM_ENTER,
CPU_CLUSTER_PM_ENTER_FAILED, and CPU_CLUSTER_PM_EXIT and doesn't have
a default. This means that we'll try to do a flush in those cases but
we won't necessarily be the last CPU down. That's not so ideal since
our (lack of) locking assumes we're on the last CPU.

Luckily this isn't as big a problem as you'd think since (at least on
the SoC I tested) we don't get these notifications except on full
system suspend. ...and on full system suspend we get them on the last
CPU down. That means that the worst problem we hit is flushing twice.
Still, it's good to make it correct.

Fixes: 985427f997b6 ("soc: qcom: rpmh: Invoke rpmh_flush() for dirty caches")
Reported-by: Stephen Boyd <swboyd@chromium.org>
Signed-off-by: Douglas Anderson <dianders@chromium.org>
---

Changes in v4:
- ("...Corrently ignore CPU_CLUSTER_PM notifications") split out for v4.

Changes in v3: None
Changes in v2: None

 drivers/soc/qcom/rpmh-rsc.c | 2 ++
 1 file changed, 2 insertions(+)
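For context, the callback being patched follows the generic CPU PM notifier
pattern from <linux/cpu_pm.h>. Below is a minimal, self-contained sketch of
that pattern, showing how a driver handles only the per-CPU events and
explicitly ignores CPU_CLUSTER_PM_* (and any future events) with a default
case returning NOTIFY_DONE, which is what this patch adds. The names
example_cpu_pm_callback and example_cpus_entered_pm, and the last-CPU check,
are illustrative assumptions, not the actual rpmh-rsc implementation.

/*
 * Illustrative sketch only -- not the rpmh-rsc driver code.  It shows the
 * CPU PM notifier shape: handle the per-CPU events, and send everything
 * else (including CPU_CLUSTER_PM_*) through a default case that returns
 * NOTIFY_DONE.
 */
#include <linux/cpu_pm.h>
#include <linux/cpumask.h>
#include <linux/notifier.h>
#include <linux/smp.h>

static struct cpumask example_cpus_entered_pm;	/* hypothetical bookkeeping */

static int example_cpu_pm_callback(struct notifier_block *nfb,
				   unsigned long action, void *v)
{
	unsigned int cpu = smp_processor_id();

	switch (action) {
	case CPU_PM_ENTER:
		cpumask_set_cpu(cpu, &example_cpus_entered_pm);
		break;	/* continue to the "last CPU?" check below */
	case CPU_PM_ENTER_FAILED:
	case CPU_PM_EXIT:
		cpumask_clear_cpu(cpu, &example_cpus_entered_pm);
		return NOTIFY_OK;
	default:
		/* CPU_CLUSTER_PM_* and anything new: not this CPU's job */
		return NOTIFY_DONE;
	}

	/*
	 * Only the last CPU entering PM should do the expensive work
	 * (in rpmh-rsc: flushing the sleep/wake state).
	 */
	if (!cpumask_equal(&example_cpus_entered_pm, cpu_online_mask))
		return NOTIFY_OK;

	/* ... last-CPU work would go here ... */
	return NOTIFY_OK;
}

static struct notifier_block example_cpu_pm_nb = {
	.notifier_call = example_cpu_pm_callback,
};

/* Registered once, e.g. from probe(): cpu_pm_register_notifier(&example_cpu_pm_nb); */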