Message ID | 20200422145408.v4.2.I1927d1bca2569a27b2d04986baf285027f0818a2@changeid
---|---
State | New, archived
Series | [v4,1/5] soc: qcom: rpmh-rsc: Correctly ignore CPU_CLUSTER_PM notifications
Reviewed-by: Maulik Shah <mkshah@codeaurora.org>

Thanks,
Maulik

On 4/23/2020 3:25 AM, Douglas Anderson wrote:
> When a PM Notifier returns NOTIFY_BAD it doesn't get called with
> CPU_PM_ENTER_FAILED. It only get called for CPU_PM_ENTER_FAILED if
> someone else (further down the notifier chain) returns NOTIFY_BAD.
>
> Handle this case by taking our CPU out of the list of ones that have
> entered PM. Without this it's possible we could detect that the last
> CPU went down (and we would flush) even if some CPU was alive. That's
> not good since our flushing routines currently assume they're running
> on the last CPU for mutual exclusion.
>
> Fixes: 985427f997b6 ("soc: qcom: rpmh: Invoke rpmh_flush() for dirty caches")
> Signed-off-by: Douglas Anderson <dianders@chromium.org>
> ---
>
> Changes in v4:
> - ("...We aren't notified of our own failure...") split out for v4.
>
> Changes in v3: None
> Changes in v2: None
>
>  drivers/soc/qcom/rpmh-rsc.c | 4 ++++
>  1 file changed, 4 insertions(+)
>
> diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
> index 3571a99fc839..e540e49fd61c 100644
> --- a/drivers/soc/qcom/rpmh-rsc.c
> +++ b/drivers/soc/qcom/rpmh-rsc.c
> @@ -823,6 +823,10 @@ static int rpmh_rsc_cpu_pm_callback(struct notifier_block *nfb,
>  	ret = NOTIFY_OK;
>  
>  exit:
> +	if (ret == NOTIFY_BAD)
> +		/* We won't be called w/ CPU_PM_ENTER_FAILED */
> +		cpumask_clear_cpu(smp_processor_id(), &drv->cpus_entered_pm);
> +
>  	spin_unlock(&drv->pm_lock);
>  	return ret;
>  }
Quoting Douglas Anderson (2020-04-22 14:55:00)
> When a PM Notifier returns NOTIFY_BAD it doesn't get called with
> CPU_PM_ENTER_FAILED. It only get called for CPU_PM_ENTER_FAILED if
> someone else (further down the notifier chain) returns NOTIFY_BAD.
>
> Handle this case by taking our CPU out of the list of ones that have
> entered PM. Without this it's possible we could detect that the last
> CPU went down (and we would flush) even if some CPU was alive. That's
> not good since our flushing routines currently assume they're running
> on the last CPU for mutual exclusion.
>
> Fixes: 985427f997b6 ("soc: qcom: rpmh: Invoke rpmh_flush() for dirty caches")
> Signed-off-by: Douglas Anderson <dianders@chromium.org>
> ---

Reported-by: Stephen Boyd <swboyd@chromium.org>
Reviewed-by: Stephen Boyd <swboyd@chromium.org>
Quoting Stephen Boyd (2020-04-23 19:38:24)
> Quoting Douglas Anderson (2020-04-22 14:55:00)
> > When a PM Notifier returns NOTIFY_BAD it doesn't get called with
> > CPU_PM_ENTER_FAILED. It only get called for CPU_PM_ENTER_FAILED if
> > someone else (further down the notifier chain) returns NOTIFY_BAD.
> >
> > Handle this case by taking our CPU out of the list of ones that have
> > entered PM. Without this it's possible we could detect that the last
> > CPU went down (and we would flush) even if some CPU was alive. That's
> > not good since our flushing routines currently assume they're running
> > on the last CPU for mutual exclusion.
> >
> > Fixes: 985427f997b6 ("soc: qcom: rpmh: Invoke rpmh_flush() for dirty caches")
> > Signed-off-by: Douglas Anderson <dianders@chromium.org>
> > ---
>
> Reported-by: Stephen Boyd <swboyd@chromium.org>

Scratch that one! Copy/paste for the lose.
diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
index 3571a99fc839..e540e49fd61c 100644
--- a/drivers/soc/qcom/rpmh-rsc.c
+++ b/drivers/soc/qcom/rpmh-rsc.c
@@ -823,6 +823,10 @@ static int rpmh_rsc_cpu_pm_callback(struct notifier_block *nfb,
 	ret = NOTIFY_OK;
 
 exit:
+	if (ret == NOTIFY_BAD)
+		/* We won't be called w/ CPU_PM_ENTER_FAILED */
+		cpumask_clear_cpu(smp_processor_id(), &drv->cpus_entered_pm);
+
 	spin_unlock(&drv->pm_lock);
 	return ret;
 }
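For reference, the hunk above lands at the common exit path of the driver's CPU PM callback. Below is a simplified sketch, not the actual rpmh-rsc source, of the overall pattern that hunk completes: a notifier tracks which CPUs have entered PM in a cpumask and must clear its own CPU before returning NOTIFY_BAD, because it will never be invoked with CPU_PM_ENTER_FAILED for a veto it issued itself. The struct my_drv type and the my_drv_is_busy() helper are hypothetical stand-ins for the driver's real state and busy check.

/*
 * Simplified sketch of a CPU PM notifier that tracks entered CPUs in a
 * cpumask.  Hypothetical names (my_drv, my_drv_is_busy); not the actual
 * rpmh-rsc code.
 */
#include <linux/cpu_pm.h>
#include <linux/cpumask.h>
#include <linux/kernel.h>
#include <linux/notifier.h>
#include <linux/smp.h>
#include <linux/spinlock.h>

struct my_drv {
	struct notifier_block pm_nb;
	struct cpumask cpus_entered_pm;
	spinlock_t pm_lock;
};

/* Hypothetical busy check; a real driver would inspect hardware state. */
static bool my_drv_is_busy(struct my_drv *drv)
{
	return false;
}

static int my_drv_cpu_pm_callback(struct notifier_block *nfb,
				  unsigned long action, void *v)
{
	struct my_drv *drv = container_of(nfb, struct my_drv, pm_nb);
	unsigned int cpu = smp_processor_id();
	int ret = NOTIFY_OK;

	spin_lock(&drv->pm_lock);

	switch (action) {
	case CPU_PM_ENTER:
		cpumask_set_cpu(cpu, &drv->cpus_entered_pm);
		/* Only the last CPU to go down does the expensive work. */
		if (!cpumask_equal(&drv->cpus_entered_pm, cpu_online_mask))
			goto exit;
		break;
	case CPU_PM_ENTER_FAILED:
	case CPU_PM_EXIT:
		/* A later notifier vetoed, or this CPU is waking back up. */
		cpumask_clear_cpu(cpu, &drv->cpus_entered_pm);
		goto exit;
	default:
		/* e.g. CPU_CLUSTER_PM_* events: not interesting here. */
		ret = NOTIFY_DONE;
		goto exit;
	}

	/* Last CPU down: veto the low-power entry if work is pending. */
	if (my_drv_is_busy(drv))
		ret = NOTIFY_BAD;

exit:
	if (ret == NOTIFY_BAD)
		/* No CPU_PM_ENTER_FAILED will arrive for our own veto. */
		cpumask_clear_cpu(cpu, &drv->cpus_entered_pm);

	spin_unlock(&drv->pm_lock);
	return ret;
}

Putting the cleanup at the shared exit label, as the patch does, keeps the NOTIFY_BAD path correct regardless of which branch set it. A driver would hook such a callback up with cpu_pm_register_notifier(&drv->pm_nb).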
When a PM Notifier returns NOTIFY_BAD it doesn't get called with
CPU_PM_ENTER_FAILED. It only gets called for CPU_PM_ENTER_FAILED if
someone else (further down the notifier chain) returns NOTIFY_BAD.

Handle this case by taking our CPU out of the list of ones that have
entered PM. Without this it's possible we could detect that the last
CPU went down (and we would flush) even if some CPU was alive. That's
not good since our flushing routines currently assume they're running
on the last CPU for mutual exclusion.

Fixes: 985427f997b6 ("soc: qcom: rpmh: Invoke rpmh_flush() for dirty caches")
Signed-off-by: Douglas Anderson <dianders@chromium.org>
---

Changes in v4:
- ("...We aren't notified of our own failure...") split out for v4.

Changes in v3: None
Changes in v2: None

 drivers/soc/qcom/rpmh-rsc.c | 4 ++++
 1 file changed, 4 insertions(+)
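The first paragraph above relies on how the CPU PM core unwinds its notifier chain: when one callback refuses CPU_PM_ENTER, only the callbacks that had already succeeded are re-invoked with CPU_PM_ENTER_FAILED; the one that returned NOTIFY_BAD is not. The small standalone C program below (userspace, not kernel code, with made-up notifier indices) mimics that unwind so the behaviour can be seen directly.

#include <stdbool.h>
#include <stdio.h>

enum pm_event { ENTER, ENTER_FAILED };
enum notify_ret { NOTIFY_OK, NOTIFY_BAD };

#define NR_NOTIFIERS 3

/* Notifier 1 vetoes ENTER; the others always accept. */
static int notifier_cb(int idx, enum pm_event event)
{
	if (event == ENTER && idx == 1) {
		printf("notifier %d: ENTER        -> NOTIFY_BAD\n", idx);
		return NOTIFY_BAD;
	}
	printf("notifier %d: %-12s -> NOTIFY_OK\n", idx,
	       event == ENTER ? "ENTER" : "ENTER_FAILED");
	return NOTIFY_OK;
}

int main(void)
{
	int nr_calls = 0;
	bool failed = false;

	/* Forward pass, like CPU_PM_ENTER: stop at the first veto. */
	for (int i = 0; i < NR_NOTIFIERS; i++) {
		nr_calls++;
		if (notifier_cb(i, ENTER) == NOTIFY_BAD) {
			failed = true;
			break;
		}
	}

	/*
	 * Unwind, like CPU_PM_ENTER_FAILED: only the nr_calls - 1
	 * notifiers that succeeded are told about the failure.  The
	 * vetoing notifier never hears back, so it must clean up its
	 * own state before returning NOTIFY_BAD.
	 */
	if (failed)
		for (int i = 0; i < nr_calls - 1; i++)
			notifier_cb(i, ENTER_FAILED);

	return 0;
}

Running it shows that notifier 0 sees ENTER and then ENTER_FAILED, while notifier 1 (the one that vetoed) sees only ENTER, which is exactly why the patch clears this CPU's bit in cpus_entered_pm before returning NOTIFY_BAD.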