Message ID | 20181207102749.15205-2-bigeasy@linutronix.de (mailing list archive)
---|---
State | Not Applicable
Series | [1/3] arm: Convert arm boot_lock to raw
On Fri, Dec 7, 2018 at 11:28 AM Sebastian Andrzej Siewior
<bigeasy@linutronix.de> wrote:

> From: Frank Rowand <frank.rowand@am.sony.com>
>
> The arm boot_lock is used by the secondary processor startup code. The locking
> task is the idle thread, which has idle->sched_class == &idle_sched_class.
> idle_sched_class->enqueue_task == NULL, so if the idle task blocks on the
> lock, the attempt to wake it when the lock becomes available will fail:
>
>   try_to_wake_up()
>     ...
>       activate_task()
>         enqueue_task()
>           p->sched_class->enqueue_task(rq, p, flags)
>
> Fix by converting boot_lock to a raw spin lock.

This makes perfect sense.

Reviewed-by: Linus Walleij <linus.walleij@linaro.org>

Yours,
Linus Walleij
On Fri, Dec 07, 2018 at 11:27:47AM +0100, Sebastian Andrzej Siewior wrote:
> From: Frank Rowand <frank.rowand@am.sony.com>
>
> The arm boot_lock is used by the secondary processor startup code. The locking
> task is the idle thread, which has idle->sched_class == &idle_sched_class.
> idle_sched_class->enqueue_task == NULL, so if the idle task blocks on the
> lock, the attempt to wake it when the lock becomes available will fail:
>
>   try_to_wake_up()
>     ...
>       activate_task()
>         enqueue_task()
>           p->sched_class->enqueue_task(rq, p, flags)
>
> Fix by converting boot_lock to a raw spin lock.

I think the bigger question is why all these platforms are using this.
For the ARM development platforms, it's fair as they have no way to
power down the CPUs or reset the other CPUs, and the "boot lock" was
there originally to prevent the delay calibration going awry. (I've
been saying this for years, but obviously people like to stuff their
fingers in their ears.)

It annoys me when people cargo-cult copy code and don't bother trying
to understand it, which is shown clearly in your diffstat.

However, modern platforms do have ways to control the other CPUs, so
the boot lock stuff should not be necessary. Also note that CPU hotplug
and CPU unplug are all synchronised in the core CPU hotplug code, so
spinlocks within the arch-specific code to allegedly serialise what
they're doing between CPU death and CPU bringup are completely
redundant.

Rather than trying to fix this up, can we instead try to eliminate as
much of the copying of the Versatile Express etc. code as possible?
The only place this should be needed is arch/arm/plat-versatile/platsmp.c

>  arch/arm/mach-actions/platsmp.c   |  6 +++---
>  arch/arm/mach-exynos/platsmp.c    | 12 ++++++------
>  arch/arm/mach-hisi/platmcpm.c     | 22 +++++++++++-----------
>  arch/arm/mach-omap2/omap-smp.c    | 10 +++++-----
>  arch/arm/mach-prima2/platsmp.c    | 10 +++++-----
>  arch/arm/mach-qcom/platsmp.c      | 10 +++++-----
>  arch/arm/mach-spear/platsmp.c     | 10 +++++-----
>  arch/arm/mach-sti/platsmp.c       | 10 +++++-----
>  arch/arm/mach-sunxi/mc_smp.c      | 20 ++++++++++----------
>  arch/arm/plat-versatile/platsmp.c | 10 +++++-----
>  10 files changed, 60 insertions(+), 60 deletions(-)
>
> diff --git a/arch/arm/mach-actions/platsmp.c b/arch/arm/mach-actions/platsmp.c
> index 3efaa10efc439..770079245d273 100644
> --- a/arch/arm/mach-actions/platsmp.c
> +++ b/arch/arm/mach-actions/platsmp.c
> @@ -39,7 +39,7 @@ static void __iomem *sps_base_addr;
>  static void __iomem *timer_base_addr;
>  static int ncores;
>
> -static DEFINE_SPINLOCK(boot_lock);
> +static DEFINE_RAW_SPINLOCK(boot_lock);
>
>  void owl_secondary_startup(void);
>
> @@ -93,7 +93,7 @@ static int s500_smp_boot_secondary(unsigned int cpu, struct task_struct *idle)
>
>  	udelay(10);
>
> -	spin_lock(&boot_lock);
> +	raw_spin_lock(&boot_lock);
>
>  	smp_send_reschedule(cpu);
>
> @@ -106,7 +106,7 @@ static int s500_smp_boot_secondary(unsigned int cpu, struct task_struct *idle)
>  	writel(0, timer_base_addr + OWL_CPU1_ADDR + (cpu - 1) * 4);
>  	writel(0, timer_base_addr + OWL_CPU1_FLAG + (cpu - 1) * 4);
>
> -	spin_unlock(&boot_lock);
> +	raw_spin_unlock(&boot_lock);
>
>  	return 0;
>  }
> diff --git a/arch/arm/mach-exynos/platsmp.c b/arch/arm/mach-exynos/platsmp.c
> index 6a1e682371b32..17dca0ff336e0 100644
> --- a/arch/arm/mach-exynos/platsmp.c
> +++ b/arch/arm/mach-exynos/platsmp.c
> @@ -239,7 +239,7 @@ static void write_pen_release(int val)
>  	sync_cache_w(&pen_release);
>  }
>
> -static DEFINE_SPINLOCK(boot_lock);
> +static DEFINE_RAW_SPINLOCK(boot_lock);
>
>  static void exynos_secondary_init(unsigned int cpu)
>  {
> @@ -252,8 +252,8 @@ static void exynos_secondary_init(unsigned int cpu)
>  	/*
>  	 * Synchronise with the boot thread.
>  	 */
> -	spin_lock(&boot_lock);
> -	spin_unlock(&boot_lock);
> +	raw_spin_lock(&boot_lock);
> +	raw_spin_unlock(&boot_lock);
>  }
>
>  int exynos_set_boot_addr(u32 core_id, unsigned long boot_addr)
> @@ -317,7 +317,7 @@ static int exynos_boot_secondary(unsigned int cpu, struct task_struct *idle)
>  	 * Set synchronisation state between this boot processor
>  	 * and the secondary one
>  	 */
> -	spin_lock(&boot_lock);
> +	raw_spin_lock(&boot_lock);
>
>  	/*
>  	 * The secondary processor is waiting to be released from
> @@ -344,7 +344,7 @@ static int exynos_boot_secondary(unsigned int cpu, struct task_struct *idle)
>
>  		if (timeout == 0) {
>  			printk(KERN_ERR "cpu1 power enable failed");
> -			spin_unlock(&boot_lock);
> +			raw_spin_unlock(&boot_lock);
>  			return -ETIMEDOUT;
>  		}
>  	}
> @@ -390,7 +390,7 @@ static int exynos_boot_secondary(unsigned int cpu, struct task_struct *idle)
>  	 * calibrations, then wait for it to finish
>  	 */
>  fail:
> -	spin_unlock(&boot_lock);
> +	raw_spin_unlock(&boot_lock);
>
>  	return pen_release != -1 ? ret : 0;
>  }
> diff --git a/arch/arm/mach-hisi/platmcpm.c b/arch/arm/mach-hisi/platmcpm.c
> index f66815c3dd07e..00524abd963f7 100644
> --- a/arch/arm/mach-hisi/platmcpm.c
> +++ b/arch/arm/mach-hisi/platmcpm.c
> @@ -61,7 +61,7 @@
>
>  static void __iomem *sysctrl, *fabric;
>  static int hip04_cpu_table[HIP04_MAX_CLUSTERS][HIP04_MAX_CPUS_PER_CLUSTER];
> -static DEFINE_SPINLOCK(boot_lock);
> +static DEFINE_RAW_SPINLOCK(boot_lock);
>  static u32 fabric_phys_addr;
>  /*
>   * [0]: bootwrapper physical address
> @@ -113,7 +113,7 @@ static int hip04_boot_secondary(unsigned int l_cpu, struct task_struct *idle)
>  	if (cluster >= HIP04_MAX_CLUSTERS || cpu >= HIP04_MAX_CPUS_PER_CLUSTER)
>  		return -EINVAL;
>
> -	spin_lock_irq(&boot_lock);
> +	raw_spin_lock_irq(&boot_lock);
>
>  	if (hip04_cpu_table[cluster][cpu])
>  		goto out;
> @@ -147,7 +147,7 @@ static int hip04_boot_secondary(unsigned int l_cpu, struct task_struct *idle)
>
>  out:
>  	hip04_cpu_table[cluster][cpu]++;
> -	spin_unlock_irq(&boot_lock);
> +	raw_spin_unlock_irq(&boot_lock);
>
>  	return 0;
>  }
> @@ -162,11 +162,11 @@ static void hip04_cpu_die(unsigned int l_cpu)
>
>  	cpu = MPIDR_AFFINITY_LEVEL(mpidr, 0);
>  	cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
>
> -	spin_lock(&boot_lock);
> +	raw_spin_lock(&boot_lock);
>  	hip04_cpu_table[cluster][cpu]--;
>  	if (hip04_cpu_table[cluster][cpu] == 1) {
>  		/* A power_up request went ahead of us. */
> -		spin_unlock(&boot_lock);
> +		raw_spin_unlock(&boot_lock);
>  		return;
>  	} else if (hip04_cpu_table[cluster][cpu] > 1) {
>  		pr_err("Cluster %d CPU%d boots multiple times\n", cluster, cpu);
> @@ -174,7 +174,7 @@ static void hip04_cpu_die(unsigned int l_cpu)
>  	}
>
>  	last_man = hip04_cluster_is_down(cluster);
> -	spin_unlock(&boot_lock);
> +	raw_spin_unlock(&boot_lock);
>  	if (last_man) {
>  		/* Since it's Cortex A15, disable L2 prefetching. */
>  		asm volatile(
> @@ -203,7 +203,7 @@ static int hip04_cpu_kill(unsigned int l_cpu)
>  		cpu >= HIP04_MAX_CPUS_PER_CLUSTER);
>
>  	count = TIMEOUT_MSEC / POLL_MSEC;
> -	spin_lock_irq(&boot_lock);
> +	raw_spin_lock_irq(&boot_lock);
>  	for (tries = 0; tries < count; tries++) {
>  		if (hip04_cpu_table[cluster][cpu])
>  			goto err;
> @@ -211,10 +211,10 @@ static int hip04_cpu_kill(unsigned int l_cpu)
>  		data = readl_relaxed(sysctrl + SC_CPU_RESET_STATUS(cluster));
>  		if (data & CORE_WFI_STATUS(cpu))
>  			break;
> -		spin_unlock_irq(&boot_lock);
> +		raw_spin_unlock_irq(&boot_lock);
>  		/* Wait for clean L2 when the whole cluster is down. */
>  		msleep(POLL_MSEC);
> -		spin_lock_irq(&boot_lock);
> +		raw_spin_lock_irq(&boot_lock);
>  	}
>  	if (tries >= count)
>  		goto err;
> @@ -231,10 +231,10 @@ static int hip04_cpu_kill(unsigned int l_cpu)
>  		goto err;
>  	if (hip04_cluster_is_down(cluster))
>  		hip04_set_snoop_filter(cluster, 0);
> -	spin_unlock_irq(&boot_lock);
> +	raw_spin_unlock_irq(&boot_lock);
>  	return 1;
>  err:
> -	spin_unlock_irq(&boot_lock);
> +	raw_spin_unlock_irq(&boot_lock);
>  	return 0;
>  }
>  #endif
> diff --git a/arch/arm/mach-omap2/omap-smp.c b/arch/arm/mach-omap2/omap-smp.c
> index 1c73694c871ad..ac4d2f030b874 100644
> --- a/arch/arm/mach-omap2/omap-smp.c
> +++ b/arch/arm/mach-omap2/omap-smp.c
> @@ -69,7 +69,7 @@ static const struct omap_smp_config omap5_cfg __initconst = {
>  	.startup_addr = omap5_secondary_startup,
>  };
>
> -static DEFINE_SPINLOCK(boot_lock);
> +static DEFINE_RAW_SPINLOCK(boot_lock);
>
>  void __iomem *omap4_get_scu_base(void)
>  {
> @@ -177,8 +177,8 @@ static void omap4_secondary_init(unsigned int cpu)
>  	/*
>  	 * Synchronise with the boot thread.
>  	 */
> -	spin_lock(&boot_lock);
> -	spin_unlock(&boot_lock);
> +	raw_spin_lock(&boot_lock);
> +	raw_spin_unlock(&boot_lock);
>  }
>
>  static int omap4_boot_secondary(unsigned int cpu, struct task_struct *idle)
> @@ -191,7 +191,7 @@ static int omap4_boot_secondary(unsigned int cpu, struct task_struct *idle)
>  	 * Set synchronisation state between this boot processor
>  	 * and the secondary one
>  	 */
> -	spin_lock(&boot_lock);
> +	raw_spin_lock(&boot_lock);
>
>  	/*
>  	 * Update the AuxCoreBoot0 with boot state for secondary core.
> @@ -270,7 +270,7 @@ static int omap4_boot_secondary(unsigned int cpu, struct task_struct *idle)
>  	 * Now the secondary core is starting up let it run its
>  	 * calibrations, then wait for it to finish
>  	 */
> -	spin_unlock(&boot_lock);
> +	raw_spin_unlock(&boot_lock);
>
>  	return 0;
>  }
> diff --git a/arch/arm/mach-prima2/platsmp.c b/arch/arm/mach-prima2/platsmp.c
> index 75ef5d4be554c..c17c86e5d860a 100644
> --- a/arch/arm/mach-prima2/platsmp.c
> +++ b/arch/arm/mach-prima2/platsmp.c
> @@ -22,7 +22,7 @@
>
>  static void __iomem *clk_base;
>
> -static DEFINE_SPINLOCK(boot_lock);
> +static DEFINE_RAW_SPINLOCK(boot_lock);
>
>  static void sirfsoc_secondary_init(unsigned int cpu)
>  {
> @@ -36,8 +36,8 @@ static void sirfsoc_secondary_init(unsigned int cpu)
>  	/*
>  	 * Synchronise with the boot thread.
>  	 */
> -	spin_lock(&boot_lock);
> -	spin_unlock(&boot_lock);
> +	raw_spin_lock(&boot_lock);
> +	raw_spin_unlock(&boot_lock);
>  }
>
>  static const struct of_device_id clk_ids[] = {
> @@ -75,7 +75,7 @@ static int sirfsoc_boot_secondary(unsigned int cpu, struct task_struct *idle)
>  	/* make sure write buffer is drained */
>  	mb();
>
> -	spin_lock(&boot_lock);
> +	raw_spin_lock(&boot_lock);
>
>  	/*
>  	 * The secondary processor is waiting to be released from
> @@ -107,7 +107,7 @@ static int sirfsoc_boot_secondary(unsigned int cpu, struct task_struct *idle)
>  	 * now the secondary core is starting up let it run its
>  	 * calibrations, then wait for it to finish
>  	 */
> -	spin_unlock(&boot_lock);
> +	raw_spin_unlock(&boot_lock);
>
>  	return pen_release != -1 ? -ENOSYS : 0;
>  }
> diff --git a/arch/arm/mach-qcom/platsmp.c b/arch/arm/mach-qcom/platsmp.c
> index 5494c9e0c909b..e8ce157d3548a 100644
> --- a/arch/arm/mach-qcom/platsmp.c
> +++ b/arch/arm/mach-qcom/platsmp.c
> @@ -46,7 +46,7 @@
>
>  extern void secondary_startup_arm(void);
>
> -static DEFINE_SPINLOCK(boot_lock);
> +static DEFINE_RAW_SPINLOCK(boot_lock);
>
>  #ifdef CONFIG_HOTPLUG_CPU
>  static void qcom_cpu_die(unsigned int cpu)
> @@ -60,8 +60,8 @@ static void qcom_secondary_init(unsigned int cpu)
>  	/*
>  	 * Synchronise with the boot thread.
>  	 */
> -	spin_lock(&boot_lock);
> -	spin_unlock(&boot_lock);
> +	raw_spin_lock(&boot_lock);
> +	raw_spin_unlock(&boot_lock);
>  }
>
>  static int scss_release_secondary(unsigned int cpu)
> @@ -284,7 +284,7 @@ static int qcom_boot_secondary(unsigned int cpu, int (*func)(unsigned int))
>  	 * set synchronisation state between this boot processor
>  	 * and the secondary one
>  	 */
> -	spin_lock(&boot_lock);
> +	raw_spin_lock(&boot_lock);
>
>  	/*
>  	 * Send the secondary CPU a soft interrupt, thereby causing
> @@ -297,7 +297,7 @@ static int qcom_boot_secondary(unsigned int cpu, int (*func)(unsigned int))
>  	 * now the secondary core is starting up let it run its
>  	 * calibrations, then wait for it to finish
>  	 */
> -	spin_unlock(&boot_lock);
> +	raw_spin_unlock(&boot_lock);
>
>  	return ret;
>  }
> diff --git a/arch/arm/mach-spear/platsmp.c b/arch/arm/mach-spear/platsmp.c
> index 39038a03836ac..6da5c93872bf8 100644
> --- a/arch/arm/mach-spear/platsmp.c
> +++ b/arch/arm/mach-spear/platsmp.c
> @@ -32,7 +32,7 @@ static void write_pen_release(int val)
>  	sync_cache_w(&pen_release);
>  }
>
> -static DEFINE_SPINLOCK(boot_lock);
> +static DEFINE_RAW_SPINLOCK(boot_lock);
>
>  static void __iomem *scu_base = IOMEM(VA_SCU_BASE);
>
> @@ -47,8 +47,8 @@ static void spear13xx_secondary_init(unsigned int cpu)
>  	/*
>  	 * Synchronise with the boot thread.
>  	 */
> -	spin_lock(&boot_lock);
> -	spin_unlock(&boot_lock);
> +	raw_spin_lock(&boot_lock);
> +	raw_spin_unlock(&boot_lock);
>  }
>
>  static int spear13xx_boot_secondary(unsigned int cpu, struct task_struct *idle)
> @@ -59,7 +59,7 @@ static int spear13xx_boot_secondary(unsigned int cpu, struct task_struct *idle)
>  	 * set synchronisation state between this boot processor
>  	 * and the secondary one
>  	 */
> -	spin_lock(&boot_lock);
> +	raw_spin_lock(&boot_lock);
>
>  	/*
>  	 * The secondary processor is waiting to be released from
> @@ -84,7 +84,7 @@ static int spear13xx_boot_secondary(unsigned int cpu, struct task_struct *idle)
>  	 * now the secondary core is starting up let it run its
>  	 * calibrations, then wait for it to finish
>  	 */
> -	spin_unlock(&boot_lock);
> +	raw_spin_unlock(&boot_lock);
>
>  	return pen_release != -1 ? -ENOSYS : 0;
>  }
> diff --git a/arch/arm/mach-sti/platsmp.c b/arch/arm/mach-sti/platsmp.c
> index 231f19e174365..a3419b7003e6f 100644
> --- a/arch/arm/mach-sti/platsmp.c
> +++ b/arch/arm/mach-sti/platsmp.c
> @@ -35,7 +35,7 @@ static void write_pen_release(int val)
>  	sync_cache_w(&pen_release);
>  }
>
> -static DEFINE_SPINLOCK(boot_lock);
> +static DEFINE_RAW_SPINLOCK(boot_lock);
>
>  static void sti_secondary_init(unsigned int cpu)
>  {
> @@ -48,8 +48,8 @@ static void sti_secondary_init(unsigned int cpu)
>  	/*
>  	 * Synchronise with the boot thread.
>  	 */
> -	spin_lock(&boot_lock);
> -	spin_unlock(&boot_lock);
> +	raw_spin_lock(&boot_lock);
> +	raw_spin_unlock(&boot_lock);
>  }
>
>  static int sti_boot_secondary(unsigned int cpu, struct task_struct *idle)
> @@ -60,7 +60,7 @@ static int sti_boot_secondary(unsigned int cpu, struct task_struct *idle)
>  	 * set synchronisation state between this boot processor
>  	 * and the secondary one
>  	 */
> -	spin_lock(&boot_lock);
> +	raw_spin_lock(&boot_lock);
>
>  	/*
>  	 * The secondary processor is waiting to be released from
> @@ -91,7 +91,7 @@ static int sti_boot_secondary(unsigned int cpu, struct task_struct *idle)
>  	 * now the secondary core is starting up let it run its
>  	 * calibrations, then wait for it to finish
>  	 */
> -	spin_unlock(&boot_lock);
> +	raw_spin_unlock(&boot_lock);
>
>  	return pen_release != -1 ? -ENOSYS : 0;
>  }
> diff --git a/arch/arm/mach-sunxi/mc_smp.c b/arch/arm/mach-sunxi/mc_smp.c
> index b4037b603897d..8a6c872f52b59 100644
> --- a/arch/arm/mach-sunxi/mc_smp.c
> +++ b/arch/arm/mach-sunxi/mc_smp.c
> @@ -367,7 +367,7 @@ static void sunxi_cluster_cache_disable_without_axi(void)
>  static int sunxi_mc_smp_cpu_table[SUNXI_NR_CLUSTERS][SUNXI_CPUS_PER_CLUSTER];
>  int sunxi_mc_smp_first_comer;
>
> -static DEFINE_SPINLOCK(boot_lock);
> +static DEFINE_RAW_SPINLOCK(boot_lock);
>
>  static bool sunxi_mc_smp_cluster_is_down(unsigned int cluster)
>  {
> @@ -399,7 +399,7 @@ static int sunxi_mc_smp_boot_secondary(unsigned int l_cpu, struct task_struct *i
>  	if (cluster >= SUNXI_NR_CLUSTERS || cpu >= SUNXI_CPUS_PER_CLUSTER)
>  		return -EINVAL;
>
> -	spin_lock_irq(&boot_lock);
> +	raw_spin_lock_irq(&boot_lock);
>
>  	if (sunxi_mc_smp_cpu_table[cluster][cpu])
>  		goto out;
> @@ -417,7 +417,7 @@ static int sunxi_mc_smp_boot_secondary(unsigned int l_cpu, struct task_struct *i
>
>  out:
>  	sunxi_mc_smp_cpu_table[cluster][cpu]++;
> -	spin_unlock_irq(&boot_lock);
> +	raw_spin_unlock_irq(&boot_lock);
>
>  	return 0;
>  }
> @@ -448,13 +448,13 @@ static void sunxi_mc_smp_cpu_die(unsigned int l_cpu)
>  	cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
>  	pr_debug("%s: cluster %u cpu %u\n", __func__, cluster, cpu);
>
> -	spin_lock(&boot_lock);
> +	raw_spin_lock(&boot_lock);
>  	sunxi_mc_smp_cpu_table[cluster][cpu]--;
>  	if (sunxi_mc_smp_cpu_table[cluster][cpu] == 1) {
>  		/* A power_up request went ahead of us. */
>  		pr_debug("%s: aborting due to a power up request\n",
>  			 __func__);
> -		spin_unlock(&boot_lock);
> +		raw_spin_unlock(&boot_lock);
>  		return;
>  	} else if (sunxi_mc_smp_cpu_table[cluster][cpu] > 1) {
>  		pr_err("Cluster %d CPU%d boots multiple times\n",
> @@ -463,7 +463,7 @@ static void sunxi_mc_smp_cpu_die(unsigned int l_cpu)
>  	}
>
>  	last_man = sunxi_mc_smp_cluster_is_down(cluster);
> -	spin_unlock(&boot_lock);
> +	raw_spin_unlock(&boot_lock);
>
>  	gic_cpu_if_down(0);
>  	if (last_man)
> @@ -542,11 +542,11 @@ static int sunxi_mc_smp_cpu_kill(unsigned int l_cpu)
>
>  	/* wait for CPU core to die and enter WFI */
>  	count = TIMEOUT_USEC / POLL_USEC;
> -	spin_lock_irq(&boot_lock);
> +	raw_spin_lock_irq(&boot_lock);
>  	for (tries = 0; tries < count; tries++) {
> -		spin_unlock_irq(&boot_lock);
> +		raw_spin_unlock_irq(&boot_lock);
>  		usleep_range(POLL_USEC / 2, POLL_USEC);
> -		spin_lock_irq(&boot_lock);
> +		raw_spin_lock_irq(&boot_lock);
>
>  		/*
>  		 * If the user turns off a bunch of cores at the same
> @@ -593,7 +593,7 @@ static int sunxi_mc_smp_cpu_kill(unsigned int l_cpu)
>  		sunxi_cluster_powerdown(cluster);
>
>  out:
> -	spin_unlock_irq(&boot_lock);
> +	raw_spin_unlock_irq(&boot_lock);
>  	pr_debug("%s: cluster %u cpu %u powerdown: %d\n",
>  		 __func__, cluster, cpu, ret);
>  	return !ret;
> diff --git a/arch/arm/plat-versatile/platsmp.c b/arch/arm/plat-versatile/platsmp.c
> index c2366510187a8..6b60f582b738c 100644
> --- a/arch/arm/plat-versatile/platsmp.c
> +++ b/arch/arm/plat-versatile/platsmp.c
> @@ -32,7 +32,7 @@ static void write_pen_release(int val)
>  	sync_cache_w(&pen_release);
>  }
>
> -static DEFINE_SPINLOCK(boot_lock);
> +static DEFINE_RAW_SPINLOCK(boot_lock);
>
>  void versatile_secondary_init(unsigned int cpu)
>  {
> @@ -45,8 +45,8 @@ void versatile_secondary_init(unsigned int cpu)
>  	/*
>  	 * Synchronise with the boot thread.
>  	 */
> -	spin_lock(&boot_lock);
> -	spin_unlock(&boot_lock);
> +	raw_spin_lock(&boot_lock);
> +	raw_spin_unlock(&boot_lock);
>  }
>
>  int versatile_boot_secondary(unsigned int cpu, struct task_struct *idle)
> @@ -57,7 +57,7 @@ int versatile_boot_secondary(unsigned int cpu, struct task_struct *idle)
>  	 * Set synchronisation state between this boot processor
>  	 * and the secondary one
>  	 */
> -	spin_lock(&boot_lock);
> +	raw_spin_lock(&boot_lock);
>
>  	/*
>  	 * This is really belt and braces; we hold unintended secondary
> @@ -87,7 +87,7 @@ int versatile_boot_secondary(unsigned int cpu, struct task_struct *idle)
>  	 * now the secondary core is starting up let it run its
>  	 * calibrations, then wait for it to finish
>  	 */
> -	spin_unlock(&boot_lock);
> +	raw_spin_unlock(&boot_lock);
>
>  	return pen_release != -1 ? -ENOSYS : 0;
>  }
> --
> 2.20.0.rc2
On Fri, Dec 7, 2018 at 2:00 PM Russell King - ARM Linux
<linux@armlinux.org.uk> wrote:

> I think the bigger question is why are all these platforms using this.
> For the ARM development platforms, it's fair as they have no way to
> power down the CPUs or reset the other CPUs, and the "boot lock" was
> there originally to prevent the delay calibration going awry. (I've
> been saying this for years but obviously people like to stuff their
> fingers in their ears.)
>
> It annoys me when people cargo-cult copy code and don't bother trying
> to understand it, which is shown clearly in your diffstat.

It's a very good point.

I did away with "my" culprit at one point in commit c00def71efd9
("ARM: ux500: simplify secondary CPU boot"), so people could look at
that as an example of how pointless this "pen holding" (what does it
even mean? I never figured it out) and boot_lock are.

I thought it was just a few platforms around the time the first SMP
systems were arriving that were doing this, but as you point out,
recent newcomers are doing it too, such as sti, actions, qcom, hisi
and sunxi.

I would encourage other platforms that have this "write some magic
stuff in some register to boot CPU n" scheme to look at
arch/arm/mach-ux500/platsmp.c; something like that should be all you
need: no special assembly, no "pen", no "boot lock". Look up resources
etc. in .smp_prepare_cpus(), make sure to call set_cpu_possible() for
each available CPU, then just kick CPU n in .smp_boot_secondary() by
doing your platform magic. Optionally implement cpu_die() if you need
special code, or just put in a wfi() for that. That's all.

Yours,
Linus Walleij
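[Editorial note: the minimal pattern Linus describes has roughly the following shape. This is a sketch, not the ux500 code itself; the `my_chip_*` helpers are hypothetical placeholders for whatever register magic the platform needs, and it is not compilable outside a kernel tree.]

```c
/* Minimal smp_operations sketch: no pen_release, no boot_lock. */

static void __init my_smp_prepare_cpus(unsigned int max_cpus)
{
	/*
	 * Map the power/reset controller here. CPU availability
	 * (set_cpu_possible() per CPU) is typically established
	 * earlier, in .smp_init_cpus().
	 */
}

static int my_smp_boot_secondary(unsigned int cpu, struct task_struct *idle)
{
	/*
	 * Point the secondary at the generic secondary_startup entry
	 * and kick it -- the core hotplug code does the rest of the
	 * synchronisation.
	 */
	my_chip_set_boot_addr(cpu, __pa_symbol(secondary_startup));
	my_chip_power_up_cpu(cpu);
	return 0;
}

static const struct smp_operations my_smp_ops __initconst = {
	.smp_prepare_cpus	= my_smp_prepare_cpus,
	.smp_boot_secondary	= my_smp_boot_secondary,
	/* .cpu_die: optional; a wfi() loop may be enough. */
};
```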
On Sun, Dec 09, 2018 at 12:15:35AM +0100, Linus Walleij wrote:
> On Fri, Dec 7, 2018 at 2:00 PM Russell King - ARM Linux
> <linux@armlinux.org.uk> wrote:
>
> > I think the bigger question is why are all these platforms using this.
> > For the ARM development platforms, it's fair as they have no way to
> > power down the CPUs or reset the other CPUs, and the "boot lock" was
> > there originally to prevent the delay calibration going awry. (I've
> > been saying this for years but obviously people like to stuff their
> > fingers in their ears.)
> >
> > It annoys me when people cargo-cult copy code and don't bother trying
> > to understand it, which is shown clearly in your diffstat.
>
> It's a very good point.
>
> I did away with "my" culprit at one point in
> commit c00def71efd9
> "ARM: ux500: simplify secondary CPU boot"
> so people could look at that for an example to see how pointless
> this "pen holding" (what does it even mean, I never figured it
> out?) and boot_lock is.

It's a "holding pen" - in normal parlance, a place where livestock are
temporarily confined. In this case, our livestock are CPUs, and they
are temporarily confined in a tight loop.

Early ARM development boards did not have a way to wake individual
secondary CPUs from the boot loader, so the only way to boot them as
Linux wanted was to direct the boot loader to release all CPUs into a
"holding pen", and then release them from the holding pen one at a time
when Linux wanted a secondary CPU to come online.

The early systems also did not have very good bandwidth between the
CPUs and memory, which meant that the CPU requesting another core to be
plugged in would perturb the secondary core's delay calibration by a
noticeable amount; trapping the requesting core in a spinlock would
prevent the delay calibration going awry.

So, really, _both_ these things are specific to ARM development
platforms, and have nothing to do with modern production hardware.
No one should _ever_ copy the ARM reference platform SMP hotplug code.
Needing that code is quite simply a sign that the platform is not
production hardware.

I really don't get how we've ended up with so many copies of this.
Maybe the code isn't being properly reviewed? Maybe the process for
merging new platforms is way too lenient? Whatever the reason, the fact
that we're ending up with new copies of the pen release and boot lock
stuff demonstrably illustrates that the review process for new
platforms is very broken.
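[Editorial note: for readers who have never seen the "holding pen" Russell describes, it is conceptually just this loop, run by every parked secondary CPU. This is an illustrative sketch in the style of the old ARM reference code (the real versions live partly in assembly in headsmp.S), not compilable on its own.]

```c
/*
 * Sketch of the classic "holding pen": the boot loader parks all
 * secondary CPUs here; the kernel releases one at a time by writing
 * that CPU's ID into pen_release (and cleaning it to RAM, since the
 * waiter may be running with caches off).
 */
static void holding_pen(unsigned int cpu)
{
	while (pen_release != cpu)
		wfe();		/* sleep until an event/SEV releases us */

	/* Released: jump into secondary_startup and come online. */
}
```

The requesting CPU writes `pen_release = cpu`, waits for the secondary to write `-1` back as an acknowledgement, and it is this handshake (plus the delay-calibration concern above) that `boot_lock` was guarding on the development boards.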
On 2018-12-09 00:41:39 [+0000], Russell King - ARM Linux wrote:
> So, really _both_ these things are really really specific to ARM
> development platforms, and have nothing to do with modern production
> hardware.
>
> No one should _ever_ copy the ARM reference platform SMP hotplug
> code. Needing that code is quite simply a sign that the platform is
> quite simply not production hardware.
>
> I really don't get how we've ended up with so many copies of this.
> Maybe the code isn't being properly reviewed? Maybe the process for
> merging new platforms is way too lenient? Whatever, the fact that
> we're ending up with new copies of the pen release and boot lock
> stuff demonstrably illustrates that the review process for new
> platforms is very broken.

Okay. So the whole patch won't get applied, right? If so, may I please
get the

	arch/arm/plat-versatile/platsmp.c

hunk applied? The remaining platforms would then have to either justify
why they have the same problems as the ARM devel boards, or rework
their code.

Sebastian
On Mon, Dec 10, 2018 at 3:37 PM Sebastian Andrzej Siewior
<bigeasy@linutronix.de> wrote:

> Okay. So the whole patch won't get applied, right? If so, may I please
> get the
>
> 	arch/arm/plat-versatile/platsmp.c
>
> hunk applied?

Can you please send one patch per mach-*/plat-*?

I can apply the plat-versatile patch and send it upstream.

> The remaining platforms would have then either justify why
> they have the same problems like the ARM devel board or rework their
> code.

I sent a patch for Actions Semiconductor today:
https://marc.info/?l=linux-arm-kernel&m=154451577304234&w=2

I guess I will keep on patching when I have time. It would be even
nicer if platform maintainers saw the problem and did it themselves,
but that doesn't always work, so I usually poke them with a patch when
I want something to improve.

Yours,
Linus Walleij
On 2018-12-11 19:19:32 [+0100], Linus Walleij wrote:
> Can you please send one patch per mach-* plat-*?
>
> I can apply the plat-versatile patch and send upstream.

I just resent the versatile hunk. Should I really split up and resend
the other platforms? Russell was not very amused about it.

Sebastian
On Tue, Dec 11, 2018 at 07:19:32PM +0100, Linus Walleij wrote:
> On Mon, Dec 10, 2018 at 3:37 PM Sebastian Andrzej Siewior
> <bigeasy@linutronix.de> wrote:
>
> > Okay. So the whole patch won't get applied, right? If so, may I please
> > get the
> >
> > 	arch/arm/plat-versatile/platsmp.c
> >
> > hunk applied?
>
> Can you please send one patch per mach-* plat-*?

I think just patch arch/arm/plat-versatile/platsmp.c and ignore the
rest - as I say, the right solution is to eliminate the lock. I have a
patch to simply remove the lock from omap2+.
On Tue, Dec 11, 2018 at 10:29:30PM +0100, Sebastian Andrzej Siewior wrote:
> On 2018-12-11 19:19:32 [+0100], Linus Walleij wrote:
> > Can you please send one patch per mach-* plat-*?
> >
> > I can apply the plat-versatile patch and send upstream.
>
> I just resent the versatile hunk. Should I really split up and resend
> the other platforms? Russell was not very amused about it.

Just the Versatile bit, and no, I don't see any point in fixing the
other SoCs - it's an incentive for them to fix up their code. However,
given that hardly any have responded, pointing out this problem in
email clearly hasn't worked, judging by the resounding silence from SoC
maintainers.

Removing boot_lock from those platforms that just cargo-culted the
boot_lock (including its comments) without the pen_release stuff is
easy: all the lock is doing there is providing a bit of unnecessary
synchronisation between the requesting CPU and the booting CPU. Only a
couple of SoCs fall into this case - OMAP4 and qcom.

The rest aren't something that could just be ripped out without the
involvement of the SoC maintainer to test the changes - so I'm of the
opinion that we should just send SoC maintainers patches that remove
the pen_release and boot_lock crap. If they blindly accept the patches,
so be it.

I'm preparing a series to attack some of this.
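[Editorial note: for the easy cargo-cult case Russell mentions, the removal amounts to deleting the lock and its lock/unlock pairs. A sketch (not Russell's actual patch) of what, e.g., the qcom secondary_init/boot_secondary pair could reduce to:]

```c
/* Sketch: the cargo-culted "synchronise with the boot thread" pair
 * disappears entirely, along with boot_lock itself; the core CPU
 * hotplug code already serialises bringup and teardown. */

static void qcom_secondary_init(unsigned int cpu)
{
	/* nothing to do: boot_lock lock/unlock pair deleted */
}

static int qcom_boot_secondary(unsigned int cpu, int (*func)(unsigned int))
{
	int ret = 0;

	/* ... release/power-up magic as before, just without taking
	 * and dropping boot_lock around it ... */
	return ret;
}
```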
On 2018-12-13 11:48:42 [+0000], Russell King - ARM Linux wrote: > On Tue, Dec 11, 2018 at 10:29:30PM +0100, Sebastian Andrzej Siewior wrote: > > I just resent the versatile hunk. Should I really split up and resend > > the other platforms? Russell was not very amused about it. > > Just the Versatile bit, and no I don't see any point in fixing the > other SoCs - it's an incentive for them to fix up their code. Okay. I already resent that. > The rest aren't something that could just be ripped out without > the involvement of the SoC maintainer to test the changes - so > I'm of the opinion that we should just send SoC maintainers > patches that remove the pen_release and boot_lock crap. If they > blindly accept the patches, so be it. > > I'm preparing a series to attack some of this. Thank you. Sebastian
*/ - spin_lock(&boot_lock); - spin_unlock(&boot_lock); + raw_spin_lock(&boot_lock); + raw_spin_unlock(&boot_lock); } static int scss_release_secondary(unsigned int cpu) @@ -284,7 +284,7 @@ static int qcom_boot_secondary(unsigned int cpu, int (*func)(unsigned int)) * set synchronisation state between this boot processor * and the secondary one */ - spin_lock(&boot_lock); + raw_spin_lock(&boot_lock); /* * Send the secondary CPU a soft interrupt, thereby causing @@ -297,7 +297,7 @@ static int qcom_boot_secondary(unsigned int cpu, int (*func)(unsigned int)) * now the secondary core is starting up let it run its * calibrations, then wait for it to finish */ - spin_unlock(&boot_lock); + raw_spin_unlock(&boot_lock); return ret; } diff --git a/arch/arm/mach-spear/platsmp.c b/arch/arm/mach-spear/platsmp.c index 39038a03836ac..6da5c93872bf8 100644 --- a/arch/arm/mach-spear/platsmp.c +++ b/arch/arm/mach-spear/platsmp.c @@ -32,7 +32,7 @@ static void write_pen_release(int val) sync_cache_w(&pen_release); } -static DEFINE_SPINLOCK(boot_lock); +static DEFINE_RAW_SPINLOCK(boot_lock); static void __iomem *scu_base = IOMEM(VA_SCU_BASE); @@ -47,8 +47,8 @@ static void spear13xx_secondary_init(unsigned int cpu) /* * Synchronise with the boot thread. 
*/ - spin_lock(&boot_lock); - spin_unlock(&boot_lock); + raw_spin_lock(&boot_lock); + raw_spin_unlock(&boot_lock); } static int spear13xx_boot_secondary(unsigned int cpu, struct task_struct *idle) @@ -59,7 +59,7 @@ static int spear13xx_boot_secondary(unsigned int cpu, struct task_struct *idle) * set synchronisation state between this boot processor * and the secondary one */ - spin_lock(&boot_lock); + raw_spin_lock(&boot_lock); /* * The secondary processor is waiting to be released from @@ -84,7 +84,7 @@ static int spear13xx_boot_secondary(unsigned int cpu, struct task_struct *idle) * now the secondary core is starting up let it run its * calibrations, then wait for it to finish */ - spin_unlock(&boot_lock); + raw_spin_unlock(&boot_lock); return pen_release != -1 ? -ENOSYS : 0; } diff --git a/arch/arm/mach-sti/platsmp.c b/arch/arm/mach-sti/platsmp.c index 231f19e174365..a3419b7003e6f 100644 --- a/arch/arm/mach-sti/platsmp.c +++ b/arch/arm/mach-sti/platsmp.c @@ -35,7 +35,7 @@ static void write_pen_release(int val) sync_cache_w(&pen_release); } -static DEFINE_SPINLOCK(boot_lock); +static DEFINE_RAW_SPINLOCK(boot_lock); static void sti_secondary_init(unsigned int cpu) { @@ -48,8 +48,8 @@ static void sti_secondary_init(unsigned int cpu) /* * Synchronise with the boot thread. 
*/ - spin_lock(&boot_lock); - spin_unlock(&boot_lock); + raw_spin_lock(&boot_lock); + raw_spin_unlock(&boot_lock); } static int sti_boot_secondary(unsigned int cpu, struct task_struct *idle) @@ -60,7 +60,7 @@ static int sti_boot_secondary(unsigned int cpu, struct task_struct *idle) * set synchronisation state between this boot processor * and the secondary one */ - spin_lock(&boot_lock); + raw_spin_lock(&boot_lock); /* * The secondary processor is waiting to be released from @@ -91,7 +91,7 @@ static int sti_boot_secondary(unsigned int cpu, struct task_struct *idle) * now the secondary core is starting up let it run its * calibrations, then wait for it to finish */ - spin_unlock(&boot_lock); + raw_spin_unlock(&boot_lock); return pen_release != -1 ? -ENOSYS : 0; } diff --git a/arch/arm/mach-sunxi/mc_smp.c b/arch/arm/mach-sunxi/mc_smp.c index b4037b603897d..8a6c872f52b59 100644 --- a/arch/arm/mach-sunxi/mc_smp.c +++ b/arch/arm/mach-sunxi/mc_smp.c @@ -367,7 +367,7 @@ static void sunxi_cluster_cache_disable_without_axi(void) static int sunxi_mc_smp_cpu_table[SUNXI_NR_CLUSTERS][SUNXI_CPUS_PER_CLUSTER]; int sunxi_mc_smp_first_comer; -static DEFINE_SPINLOCK(boot_lock); +static DEFINE_RAW_SPINLOCK(boot_lock); static bool sunxi_mc_smp_cluster_is_down(unsigned int cluster) { @@ -399,7 +399,7 @@ static int sunxi_mc_smp_boot_secondary(unsigned int l_cpu, struct task_struct *i if (cluster >= SUNXI_NR_CLUSTERS || cpu >= SUNXI_CPUS_PER_CLUSTER) return -EINVAL; - spin_lock_irq(&boot_lock); + raw_spin_lock_irq(&boot_lock); if (sunxi_mc_smp_cpu_table[cluster][cpu]) goto out; @@ -417,7 +417,7 @@ static int sunxi_mc_smp_boot_secondary(unsigned int l_cpu, struct task_struct *i out: sunxi_mc_smp_cpu_table[cluster][cpu]++; - spin_unlock_irq(&boot_lock); + raw_spin_unlock_irq(&boot_lock); return 0; } @@ -448,13 +448,13 @@ static void sunxi_mc_smp_cpu_die(unsigned int l_cpu) cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1); pr_debug("%s: cluster %u cpu %u\n", __func__, cluster, cpu); - 
spin_lock(&boot_lock); + raw_spin_lock(&boot_lock); sunxi_mc_smp_cpu_table[cluster][cpu]--; if (sunxi_mc_smp_cpu_table[cluster][cpu] == 1) { /* A power_up request went ahead of us. */ pr_debug("%s: aborting due to a power up request\n", __func__); - spin_unlock(&boot_lock); + raw_spin_unlock(&boot_lock); return; } else if (sunxi_mc_smp_cpu_table[cluster][cpu] > 1) { pr_err("Cluster %d CPU%d boots multiple times\n", @@ -463,7 +463,7 @@ static void sunxi_mc_smp_cpu_die(unsigned int l_cpu) } last_man = sunxi_mc_smp_cluster_is_down(cluster); - spin_unlock(&boot_lock); + raw_spin_unlock(&boot_lock); gic_cpu_if_down(0); if (last_man) @@ -542,11 +542,11 @@ static int sunxi_mc_smp_cpu_kill(unsigned int l_cpu) /* wait for CPU core to die and enter WFI */ count = TIMEOUT_USEC / POLL_USEC; - spin_lock_irq(&boot_lock); + raw_spin_lock_irq(&boot_lock); for (tries = 0; tries < count; tries++) { - spin_unlock_irq(&boot_lock); + raw_spin_unlock_irq(&boot_lock); usleep_range(POLL_USEC / 2, POLL_USEC); - spin_lock_irq(&boot_lock); + raw_spin_lock_irq(&boot_lock); /* * If the user turns off a bunch of cores at the same @@ -593,7 +593,7 @@ static int sunxi_mc_smp_cpu_kill(unsigned int l_cpu) sunxi_cluster_powerdown(cluster); out: - spin_unlock_irq(&boot_lock); + raw_spin_unlock_irq(&boot_lock); pr_debug("%s: cluster %u cpu %u powerdown: %d\n", __func__, cluster, cpu, ret); return !ret; diff --git a/arch/arm/plat-versatile/platsmp.c b/arch/arm/plat-versatile/platsmp.c index c2366510187a8..6b60f582b738c 100644 --- a/arch/arm/plat-versatile/platsmp.c +++ b/arch/arm/plat-versatile/platsmp.c @@ -32,7 +32,7 @@ static void write_pen_release(int val) sync_cache_w(&pen_release); } -static DEFINE_SPINLOCK(boot_lock); +static DEFINE_RAW_SPINLOCK(boot_lock); void versatile_secondary_init(unsigned int cpu) { @@ -45,8 +45,8 @@ void versatile_secondary_init(unsigned int cpu) /* * Synchronise with the boot thread. 
*/ - spin_lock(&boot_lock); - spin_unlock(&boot_lock); + raw_spin_lock(&boot_lock); + raw_spin_unlock(&boot_lock); } int versatile_boot_secondary(unsigned int cpu, struct task_struct *idle) @@ -57,7 +57,7 @@ int versatile_boot_secondary(unsigned int cpu, struct task_struct *idle) * Set synchronisation state between this boot processor * and the secondary one */ - spin_lock(&boot_lock); + raw_spin_lock(&boot_lock); /* * This is really belt and braces; we hold unintended secondary @@ -87,7 +87,7 @@ int versatile_boot_secondary(unsigned int cpu, struct task_struct *idle) * now the secondary core is starting up let it run its * calibrations, then wait for it to finish */ - spin_unlock(&boot_lock); + raw_spin_unlock(&boot_lock); return pen_release != -1 ? -ENOSYS : 0; }