Message ID | 20231016102501.3643901-2-mark.rutland@arm.com (mailing list archive) |
---|---|
State | New, archived |
Series | [v4,01/38] clocksource/drivers/arm_arch_timer: Initialize evtstrm after finalizing cpucaps |
On 16/10/2023 12:24, Mark Rutland wrote:
> We attempt to initialize each CPU's arch_timer event stream in
> arch_timer_evtstrm_enable(), which we call from the
> arch_timer_starting_cpu() cpu hotplug callback which is registered early
> in boot. As this is registered before we initialize the system cpucaps,
> the test for ARM64_HAS_ECV will always be false for CPUs present at boot
> time, and will only be taken into account for CPUs onlined late
> (including those which are hotplugged out and in again).
>
> Due to this, CPUs present at boot time may not use the intended divider
> and scale factor to generate the event stream, and may differ from other
> CPUs.
>
> Correct this by only initializing the event stream after cpucaps have been
> finalized, registering a separate CPU hotplug callback for the event stream
> configuration. Since the caps must be finalized by this point, use
> cpus_have_final_cap() to verify this.
>
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Acked-by: Marc Zyngier <maz@kernel.org>
> Acked-by: Thomas Gleixner <tglx@linutronix.de>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
> Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
> Cc: Will Deacon <will@kernel.org>
> ---

May I consider picking this patch or shall it go with the series?

[ ... ]
On Tue, Oct 24, 2023 at 05:02:30PM +0200, Daniel Lezcano wrote:
> On 16/10/2023 12:24, Mark Rutland wrote:
> > We attempt to initialize each CPU's arch_timer event stream in
> > arch_timer_evtstrm_enable(), which we call from the
> > arch_timer_starting_cpu() cpu hotplug callback which is registered early
> > in boot. [ ... ]
>
> May I consider picking this patch or shall it go with the series?

It has been queued already on the arm64 for-next/cpus_have_const_cap
branch. Thanks.
On 24/10/2023 18:22, Catalin Marinas wrote:
> On Tue, Oct 24, 2023 at 05:02:30PM +0200, Daniel Lezcano wrote:
>> On 16/10/2023 12:24, Mark Rutland wrote:
>>> We attempt to initialize each CPU's arch_timer event stream in
>>> arch_timer_evtstrm_enable(), which we call from the
>>> arch_timer_starting_cpu() cpu hotplug callback which is registered early
>>> in boot. [ ... ]
>>
>> May I consider picking this patch or shall it go with the series?
>
> It has been queued already on the arm64 for-next/cpus_have_const_cap
> branch.

Noted, thanks
diff --git a/drivers/clocksource/arm_arch_timer.c b/drivers/clocksource/arm_arch_timer.c
index 7dd2c615bce23..d1e9e556da81a 100644
--- a/drivers/clocksource/arm_arch_timer.c
+++ b/drivers/clocksource/arm_arch_timer.c
@@ -917,7 +917,7 @@ static void arch_timer_evtstrm_enable(unsigned int divider)
 
 #ifdef CONFIG_ARM64
 	/* ECV is likely to require a large divider. Use the EVNTIS flag. */
-	if (cpus_have_const_cap(ARM64_HAS_ECV) && divider > 15) {
+	if (cpus_have_final_cap(ARM64_HAS_ECV) && divider > 15) {
 		cntkctl |= ARCH_TIMER_EVT_INTERVAL_SCALE;
 		divider -= 8;
 	}
@@ -955,6 +955,30 @@ static void arch_timer_configure_evtstream(void)
 	arch_timer_evtstrm_enable(max(0, lsb));
 }
 
+static int arch_timer_evtstrm_starting_cpu(unsigned int cpu)
+{
+	arch_timer_configure_evtstream();
+	return 0;
+}
+
+static int arch_timer_evtstrm_dying_cpu(unsigned int cpu)
+{
+	cpumask_clear_cpu(smp_processor_id(), &evtstrm_available);
+	return 0;
+}
+
+static int __init arch_timer_evtstrm_register(void)
+{
+	if (!arch_timer_evt || !evtstrm_enable)
+		return 0;
+
+	return cpuhp_setup_state(CPUHP_AP_ARM_ARCH_TIMER_EVTSTRM_STARTING,
+				 "clockevents/arm/arch_timer_evtstrm:starting",
+				 arch_timer_evtstrm_starting_cpu,
+				 arch_timer_evtstrm_dying_cpu);
+}
+core_initcall(arch_timer_evtstrm_register);
+
 static void arch_counter_set_user_access(void)
 {
 	u32 cntkctl = arch_timer_get_cntkctl();
@@ -1016,8 +1040,6 @@ static int arch_timer_starting_cpu(unsigned int cpu)
 	}
 
 	arch_counter_set_user_access();
-	if (evtstrm_enable)
-		arch_timer_configure_evtstream();
 
 	return 0;
 }
@@ -1164,8 +1186,6 @@ static int arch_timer_dying_cpu(unsigned int cpu)
 {
 	struct clock_event_device *clk = this_cpu_ptr(arch_timer_evt);
 
-	cpumask_clear_cpu(smp_processor_id(), &evtstrm_available);
-
 	arch_timer_stop(clk);
 	return 0;
 }
@@ -1279,6 +1299,7 @@ static int __init arch_timer_register(void)
 
 out_free:
 	free_percpu(arch_timer_evt);
+	arch_timer_evt = NULL;
 out:
 	return err;
 }
diff --git a/include/linux/cpuhotplug.h b/include/linux/cpuhotplug.h
index 068f7738be22a..bb4d0bcac81b6 100644
--- a/include/linux/cpuhotplug.h
+++ b/include/linux/cpuhotplug.h
@@ -172,6 +172,7 @@ enum cpuhp_state {
 	CPUHP_AP_ARM_L2X0_STARTING,
 	CPUHP_AP_EXYNOS4_MCT_TIMER_STARTING,
 	CPUHP_AP_ARM_ARCH_TIMER_STARTING,
+	CPUHP_AP_ARM_ARCH_TIMER_EVTSTRM_STARTING,
 	CPUHP_AP_ARM_GLOBAL_TIMER_STARTING,
 	CPUHP_AP_JCORE_TIMER_STARTING,
 	CPUHP_AP_ARM_TWD_STARTING,