Message ID | 1399282040-8995-6-git-send-email-chander.kashyap@linaro.org (mailing list archive) |
---|---|
State | New, archived |
On Mon, May 05, 2014 at 10:27:20AM +0100, Chander Kashyap wrote:
> In order to support cpuidle through mcpm, suspend and powered-up
> callbacks are required in mcpm platform code.
> Hence populate the same callbacks.
>
> Signed-off-by: Chander Kashyap <chander.kashyap@linaro.org>
> Signed-off-by: Chander Kashyap <k.chander@samsung.com>
> ---
> Changes in v4: None
> Changes in v3:
> 1. Removed coherancy enablement after suspend failure.

coherency

> 2. Use generic function to poweron cpu.
> changes in v2:
> 1. Fixed typo: enynos_pmu_cpunr to exynos_pmu_cpunr
>  arch/arm/mach-exynos/mcpm-exynos.c | 34 ++++++++++++++++++++++++++++++++++
>  1 file changed, 34 insertions(+)
>
> diff --git a/arch/arm/mach-exynos/mcpm-exynos.c b/arch/arm/mach-exynos/mcpm-exynos.c
> index d0f7461..6d4a907 100644
> --- a/arch/arm/mach-exynos/mcpm-exynos.c
> +++ b/arch/arm/mach-exynos/mcpm-exynos.c
> @@ -256,10 +256,44 @@ static int exynos_power_down_finish(unsigned int cpu, unsigned int cluster)
>  	return -ETIMEDOUT; /* timeout */
>  }
>
> +void exynos_powered_up(void)

static ?

> +{
> +	unsigned int mpidr, cpu, cluster;
> +
> +	mpidr = read_cpuid_mpidr();
> +	cpu = MPIDR_AFFINITY_LEVEL(mpidr, 0);
> +	cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
> +
> +	arch_spin_lock(&exynos_mcpm_lock);
> +	if (cpu_use_count[cpu][cluster] == 0)
> +		cpu_use_count[cpu][cluster] = 1;
> +	arch_spin_unlock(&exynos_mcpm_lock);
> +}
> +
> +static void exynos_suspend(u64 residency)
> +{
> +	unsigned int mpidr, cpunr;
> +
> +	mpidr = read_cpuid_mpidr();
> +	cpunr = exynos_pmu_cpunr(mpidr);

If I were to be picky, I would compute these values only if they
are needed, ie move the computation after exynos_power_down().

There is another quite horrible issue here. We know this code works
because the processors A15/A7 hit the caches with C bit in SCTLR cleared.

On processors where this is not true, this sequence would explode
if power down fails (in case core is gated but L2 is still powered on,
the stack is stuck in L2) since it is going to read stack data that is
in L2 but can't be read.

It is not related to this sequence only, but it is an issue in general
and wanted to mention that on the lists for public awareness.

The gist of what I am saying is, please add a comment to that extent,
here and it should be added in exynos_power_down() too.

> +	__raw_writel(virt_to_phys(mcpm_entry_point), ns_sram_base_addr + 0x1c);

No magic numbers please (0x1c). You can add a macro/wrapper, as TC2 does.

> +	exynos_power_down();
> +
> +	/*
> +	 * Execution reaches here only if cpu did not power down.
> +	 * Hence roll back the changes done in exynos_power_down function.
> +	 */
> +	exynos_cpu_powerup(cpunr);

Please be aware that if this function returns MCPM will soft reboot, and
the CPUidle driver will have no way to detect a state entry failure.

I am just flagging this up, since fixing this behaviour is not easy, and
honestly, since power down failure should be the exception not the rule,
the idle stats should not be affected much.

I think this is the proper way of implementing the sequence but please
all keep in mind what I wrote above.
Lorenzo

> +}
> +
>  static const struct mcpm_platform_ops exynos_power_ops = {
>  	.power_up		= exynos_power_up,
>  	.power_down		= exynos_power_down,
>  	.power_down_finish	= exynos_power_down_finish,
> +	.suspend		= exynos_suspend,
> +	.powered_up		= exynos_powered_up,
>  };
>
>  static void __init exynos_mcpm_usage_count_init(void)
> --
> 1.7.9.5
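As an aside, the named offset Lorenzo asks for could look like the minimal sketch below. Only the 0x1c value comes from the patch; the macro name is illustrative, and the identifier eventually merged may differ (TC2 keeps a similar definition next to its register map).

/* Hypothetical name for the boot vector slot at offset 0x1c in NS SRAM */
#define EXYNOS_BOOT_VECTOR_OFFSET	0x1c

	/* Point the secondary boot vector at the MCPM entry point. */
	__raw_writel(virt_to_phys(mcpm_entry_point),
		     ns_sram_base_addr + EXYNOS_BOOT_VECTOR_OFFSET);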
Hi Lorenzo

On 9 May 2014 21:02, Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> wrote:
> On Mon, May 05, 2014 at 10:27:20AM +0100, Chander Kashyap wrote:
>> In order to support cpuidle through mcpm, suspend and powered-up
>> callbacks are required in mcpm platform code.
>> Hence populate the same callbacks.
>>
>> Signed-off-by: Chander Kashyap <chander.kashyap@linaro.org>
>> Signed-off-by: Chander Kashyap <k.chander@samsung.com>
>> ---
>> Changes in v4: None
>> Changes in v3:
>> 1. Removed coherancy enablement after suspend failure.
>
> coherency
>
>> 2. Use generic function to poweron cpu.
>> changes in v2:
>> 1. Fixed typo: enynos_pmu_cpunr to exynos_pmu_cpunr
>>  arch/arm/mach-exynos/mcpm-exynos.c | 34 ++++++++++++++++++++++++++++++++++
>>  1 file changed, 34 insertions(+)
>>
>> diff --git a/arch/arm/mach-exynos/mcpm-exynos.c b/arch/arm/mach-exynos/mcpm-exynos.c
>> index d0f7461..6d4a907 100644
>> --- a/arch/arm/mach-exynos/mcpm-exynos.c
>> +++ b/arch/arm/mach-exynos/mcpm-exynos.c
>> @@ -256,10 +256,44 @@ static int exynos_power_down_finish(unsigned int cpu, unsigned int cluster)
>>  	return -ETIMEDOUT; /* timeout */
>>  }
>>
>> +void exynos_powered_up(void)
>
> static ?

Ok, done

>
>> +{
>> +	unsigned int mpidr, cpu, cluster;
>> +
>> +	mpidr = read_cpuid_mpidr();
>> +	cpu = MPIDR_AFFINITY_LEVEL(mpidr, 0);
>> +	cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
>> +
>> +	arch_spin_lock(&exynos_mcpm_lock);
>> +	if (cpu_use_count[cpu][cluster] == 0)
>> +		cpu_use_count[cpu][cluster] = 1;
>> +	arch_spin_unlock(&exynos_mcpm_lock);
>> +}
>> +
>> +static void exynos_suspend(u64 residency)
>> +{
>> +	unsigned int mpidr, cpunr;
>> +
>> +	mpidr = read_cpuid_mpidr();
>> +	cpunr = exynos_pmu_cpunr(mpidr);
>
> If I were to be picky, I would compute these values only if they
> are needed, ie move the computation after exynos_power_down().

Yes thats make sense. I will realign it.

>
> There is another quite horrible issue here. We know this code works
> because the processors A15/A7 hit the caches with C bit in SCTLR cleared.
>
> On processors where this is not true, this sequence would explode
> if power down fails (in case core is gated but L2 is still powered on,
> the stack is stuck in L2) since it is going to read stack data that is
> in L2 but can't be read.
>
> It is not related to this sequence only, but it is an issue in general
> and wanted to mention that on the lists for public awareness.
>

Can you please elaborate. I didn't understand.

> The gist of what I am saying is, please add a comment to that extent,
> here and it should be added in exynos_power_down() too.
>
>> +	__raw_writel(virt_to_phys(mcpm_entry_point), ns_sram_base_addr + 0x1c);
>
> No magic numbers please (0x1c). You can add a macro/wrapper, as TC2 does.

Yes i will remove it.

>
>> +	exynos_power_down();
>> +
>> +	/*
>> +	 * Execution reaches here only if cpu did not power down.
>> +	 * Hence roll back the changes done in exynos_power_down function.
>> +	 */
>> +	exynos_cpu_powerup(cpunr);
>
> Please be aware that if this function returns MCPM will soft reboot, and
> the CPUidle driver will have no way to detect a state entry failure.
>
> I am just flagging this up, since fixing this behaviour is not easy, and
> honestly, since power down failure should be the exception not the rule,
> the idle stats should not be affected much.
>
> I think this is the proper way of implementing the sequence but please
> all keep in mind what I wrote above.
>
> Lorenzo
>
>> +}
>> +
>>  static const struct mcpm_platform_ops exynos_power_ops = {
>>  	.power_up		= exynos_power_up,
>>  	.power_down		= exynos_power_down,
>>  	.power_down_finish	= exynos_power_down_finish,
>> +	.suspend		= exynos_suspend,
>> +	.powered_up		= exynos_powered_up,
>>  };
>>
>>  static void __init exynos_mcpm_usage_count_init(void)
>> --
>> 1.7.9.5
>>
>
On Tue, May 13, 2014 at 12:43:31PM +0100, Chander Kashyap wrote:

[...]

> >> +static void exynos_suspend(u64 residency)
> >> +{
> >> +	unsigned int mpidr, cpunr;
> >> +
> >> +	mpidr = read_cpuid_mpidr();
> >> +	cpunr = exynos_pmu_cpunr(mpidr);
> >
> > If I were to be picky, I would compute these values only if they
> > are needed, ie move the computation after exynos_power_down().
>
> Yes thats make sense. I will realign it.
>
> >
> > There is another quite horrible issue here. We know this code works
> > because the processors A15/A7 hit the caches with C bit in SCTLR cleared.
> >
> > On processors where this is not true, this sequence would explode
> > if power down fails (in case core is gated but L2 is still powered on,
> > the stack is stuck in L2) since it is going to read stack data that is
> > in L2 but can't be read.
> >
> > It is not related to this sequence only, but it is an issue in general
> > and wanted to mention that on the lists for public awareness.
> >
>
> Can you please elaborate. I didn't understand.

It is not related to this patch only. This function carries out writes to the
stack (which might end up in eg L2) and then disables the C bit in SCTLR
through MCPM.

A15 and A7 processors hit the cache with the C bit clear in the SCTLR
so the processor still "hits" the stack values if the power down fails.
On processors where caches are not hit with the C bit clear (eg A9) this code
would fail since the stack values that sit in the caches cannot be read with
the C bit clear in SCTLR until the SCTLR is restored, so it will have to
be implemented in assembly with no stack usage (or better, no cacheable data
usage).

So, all I am saying is, to avoid copy'n'paste havoc and to avoid running
this code on Exynos platforms where it must not be run as-is, please add
a comment along the line:

"This function requires the stack data to be visible through power down
and can only be executed on processors like A15 and A7 that hit the cache
with the C bit clear in the SCTLR register."

Please let me know if that's clear.

Lorenzo

>
> > The gist of what I am saying is, please add a comment to that extent,
> > here and it should be added in exynos_power_down() too.
> >
> >> +	__raw_writel(virt_to_phys(mcpm_entry_point), ns_sram_base_addr + 0x1c);
> >
> > No magic numbers please (0x1c). You can add a macro/wrapper, as TC2 does.
>
> Yes i will remove it.
>
> >
> >> +	exynos_power_down();
> >> +
> >> +	/*
> >> +	 * Execution reaches here only if cpu did not power down.
> >> +	 * Hence roll back the changes done in exynos_power_down function.
> >> +	 */
> >> +	exynos_cpu_powerup(cpunr);
> >
> > Please be aware that if this function returns MCPM will soft reboot, and
> > the CPUidle driver will have no way to detect a state entry failure.
> >
> > I am just flagging this up, since fixing this behaviour is not easy, and
> > honestly, since power down failure should be the exception not the rule,
> > the idle stats should not be affected much.
> >
> > I think this is the proper way of implementing the sequence but please
> > all keep in mind what I wrote above.
> >
> > Lorenzo
> >
> >> +}
> >> +
> >>  static const struct mcpm_platform_ops exynos_power_ops = {
> >>  	.power_up		= exynos_power_up,
> >>  	.power_down		= exynos_power_down,
> >>  	.power_down_finish	= exynos_power_down_finish,
> >> +	.suspend		= exynos_suspend,
> >> +	.powered_up		= exynos_powered_up,
> >>  };
> >>
> >>  static void __init exynos_mcpm_usage_count_init(void)
> >> --
> >> 1.7.9.5
> >>
> >
>
> --
> with warm regards,
> Chander Kashyap
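Expressed as code, the warning Lorenzo asks for would sit above exynos_suspend() and, per the review, above exynos_power_down() as well. This is simply his suggested wording formatted as a kernel comment block; the final phrasing in the merged driver may differ.

/*
 * This function requires the stack data to be visible through power down
 * and can only be executed on processors like A15 and A7 that hit the
 * cache with the C bit clear in the SCTLR register.
 */
static void exynos_suspend(u64 residency)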
Hi Lorenzo,

On 13 May 2014 22:44, Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> wrote:
> On Tue, May 13, 2014 at 12:43:31PM +0100, Chander Kashyap wrote:
>
> [...]
>
>> >> +static void exynos_suspend(u64 residency)
>> >> +{
>> >> +	unsigned int mpidr, cpunr;
>> >> +
>> >> +	mpidr = read_cpuid_mpidr();
>> >> +	cpunr = exynos_pmu_cpunr(mpidr);
>> >
>> > If I were to be picky, I would compute these values only if they
>> > are needed, ie move the computation after exynos_power_down().
>>
>> Yes thats make sense. I will realign it.
>>
>> >
>> > There is another quite horrible issue here. We know this code works
>> > because the processors A15/A7 hit the caches with C bit in SCTLR cleared.
>> >
>> > On processors where this is not true, this sequence would explode
>> > if power down fails (in case core is gated but L2 is still powered on,
>> > the stack is stuck in L2) since it is going to read stack data that is
>> > in L2 but can't be read.
>> >
>> > It is not related to this sequence only, but it is an issue in general
>> > and wanted to mention that on the lists for public awareness.
>> >
>>
>> Can you please elaborate. I didn't understand.
>
> It is not related to this patch only. This function carries out writes to the
> stack (which might end up in eg L2) and then disables the C bit in SCTLR
> through MCPM.
>
> A15 and A7 processors hit the cache with the C bit clear in the SCTLR
> so the processor still "hits" the stack values if the power down fails.
> On processors where caches are not hit with the C bit clear (eg A9) this code
> would fail since the stack values that sit in the caches cannot be read with
> the C bit clear in SCTLR until the SCTLR is restored, so it will have to
> be implemented in assembly with no stack usage (or better, no cacheable data
> usage).
>
> So, all I am saying is, to avoid copy'n'paste havoc and to avoid running
> this code on Exynos platforms where it must not be run as-is, please add
> a comment along the line:
>
> "This function requires the stack data to be visible through power down
> and can only be executed on processors like A15 and A7 that hit the cache
> with the C bit clear in the SCTLR register."
>
> Please let me know if that's clear.

It all clear now. Thanks a lot.

>
> Lorenzo
>
>>
>> > The gist of what I am saying is, please add a comment to that extent,
>> > here and it should be added in exynos_power_down() too.
>> >
>> >> +	__raw_writel(virt_to_phys(mcpm_entry_point), ns_sram_base_addr + 0x1c);
>> >
>> > No magic numbers please (0x1c). You can add a macro/wrapper, as TC2 does.
>>
>> Yes i will remove it.
>>
>> >
>> >> +	exynos_power_down();
>> >> +
>> >> +	/*
>> >> +	 * Execution reaches here only if cpu did not power down.
>> >> +	 * Hence roll back the changes done in exynos_power_down function.
>> >> +	 */
>> >> +	exynos_cpu_powerup(cpunr);
>> >
>> > Please be aware that if this function returns MCPM will soft reboot, and
>> > the CPUidle driver will have no way to detect a state entry failure.
>> >
>> > I am just flagging this up, since fixing this behaviour is not easy, and
>> > honestly, since power down failure should be the exception not the rule,
>> > the idle stats should not be affected much.
>> >
>> > I think this is the proper way of implementing the sequence but please
>> > all keep in mind what I wrote above.
>> >
>> > Lorenzo
>> >
>> >> +}
>> >> +
>> >>  static const struct mcpm_platform_ops exynos_power_ops = {
>> >>  	.power_up		= exynos_power_up,
>> >>  	.power_down		= exynos_power_down,
>> >>  	.power_down_finish	= exynos_power_down_finish,
>> >> +	.suspend		= exynos_suspend,
>> >> +	.powered_up		= exynos_powered_up,
>> >>  };
>> >>
>> >>  static void __init exynos_mcpm_usage_count_init(void)
>> >> --
>> >> 1.7.9.5
>> >>
>> >
>>
>> --
>> with warm regards,
>> Chander Kashyap
>>
>
diff --git a/arch/arm/mach-exynos/mcpm-exynos.c b/arch/arm/mach-exynos/mcpm-exynos.c
index d0f7461..6d4a907 100644
--- a/arch/arm/mach-exynos/mcpm-exynos.c
+++ b/arch/arm/mach-exynos/mcpm-exynos.c
@@ -256,10 +256,44 @@ static int exynos_power_down_finish(unsigned int cpu, unsigned int cluster)
 	return -ETIMEDOUT; /* timeout */
 }
 
+void exynos_powered_up(void)
+{
+	unsigned int mpidr, cpu, cluster;
+
+	mpidr = read_cpuid_mpidr();
+	cpu = MPIDR_AFFINITY_LEVEL(mpidr, 0);
+	cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
+
+	arch_spin_lock(&exynos_mcpm_lock);
+	if (cpu_use_count[cpu][cluster] == 0)
+		cpu_use_count[cpu][cluster] = 1;
+	arch_spin_unlock(&exynos_mcpm_lock);
+}
+
+static void exynos_suspend(u64 residency)
+{
+	unsigned int mpidr, cpunr;
+
+	mpidr = read_cpuid_mpidr();
+	cpunr = exynos_pmu_cpunr(mpidr);
+
+	__raw_writel(virt_to_phys(mcpm_entry_point), ns_sram_base_addr + 0x1c);
+
+	exynos_power_down();
+
+	/*
+	 * Execution reaches here only if cpu did not power down.
+	 * Hence roll back the changes done in exynos_power_down function.
+	 */
+	exynos_cpu_powerup(cpunr);
+}
+
 static const struct mcpm_platform_ops exynos_power_ops = {
 	.power_up		= exynos_power_up,
 	.power_down		= exynos_power_down,
 	.power_down_finish	= exynos_power_down_finish,
+	.suspend		= exynos_suspend,
+	.powered_up		= exynos_powered_up,
 };
 
 static void __init exynos_mcpm_usage_count_init(void)
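Folding the review feedback into the new code gives roughly the sketch below. It is an illustration only, not the version that was eventually merged: exynos_powered_up() is made static, the MPIDR/cpunr computation is deferred until exynos_power_down() has returned (i.e. the power down failed), the bare 0x1c is hidden behind a hypothetical named macro, and the cache-behaviour warning dictated in the thread is added as a comment.

/* Hypothetical name for the boot vector slot at offset 0x1c in NS SRAM */
#define EXYNOS_BOOT_VECTOR_OFFSET	0x1c

static void exynos_powered_up(void)
{
	unsigned int mpidr, cpu, cluster;

	mpidr = read_cpuid_mpidr();
	cpu = MPIDR_AFFINITY_LEVEL(mpidr, 0);
	cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);

	arch_spin_lock(&exynos_mcpm_lock);
	if (cpu_use_count[cpu][cluster] == 0)
		cpu_use_count[cpu][cluster] = 1;
	arch_spin_unlock(&exynos_mcpm_lock);
}

/*
 * This function requires the stack data to be visible through power down
 * and can only be executed on processors like A15 and A7 that hit the
 * cache with the C bit clear in the SCTLR register.
 */
static void exynos_suspend(u64 residency)
{
	unsigned int mpidr, cpunr;

	/* Point the secondary boot vector at the MCPM entry point. */
	__raw_writel(virt_to_phys(mcpm_entry_point),
		     ns_sram_base_addr + EXYNOS_BOOT_VECTOR_OFFSET);

	exynos_power_down();

	/*
	 * Execution reaches here only if cpu did not power down.
	 * Hence roll back the changes done in exynos_power_down function.
	 * The MPIDR is only read here, after the failed power down, as
	 * requested in review.
	 */
	mpidr = read_cpuid_mpidr();
	cpunr = exynos_pmu_cpunr(mpidr);
	exynos_cpu_powerup(cpunr);
}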