| Message ID | 1367369675-13535-2-git-send-email-sboyd@codeaurora.org (mailing list archive) |
|---|---|
| State | New, archived |
On Wed, May 01, 2013 at 01:54:35AM +0100, Stephen Boyd wrote:
> Use the generic sched_clock infrastructure instead of rolling our
> own. This has the added benefit of fixing suspend/resume as
> outlined in 6a4dae5 (ARM: 7565/1: sched: stop sched_clock()
> during suspend, 2012-10-23) and correcting the timestamps when
> the hardware returns a value instead of 0 upon the first read.
>
> Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>

Looks ok.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
On 05/01/2013 05:11 AM, Catalin Marinas wrote:
> On Wed, May 01, 2013 at 01:54:35AM +0100, Stephen Boyd wrote:
>> Use the generic sched_clock infrastructure instead of rolling our
>> own. This has the added benefit of fixing suspend/resume as
>> outlined in 6a4dae5 (ARM: 7565/1: sched: stop sched_clock()
>> during suspend, 2012-10-23) and correcting the timestamps when
>> the hardware returns a value instead of 0 upon the first read.
>>
>> Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
>
> Looks ok.
>
> Acked-by: Catalin Marinas <catalin.marinas@arm.com>

I built and ran this change and its dependencies on top of Catalin's
soc-armv8-model branch [1] and was able to verify that it fixed the
printk timestamp jump.

1. http://git.kernel.org/cgit/linux/kernel/git/cmarinas/linux-aarch64.git/log/?h=soc-armv8-model

Tested-by: Christopher Covington <cov@codeaurora.org>
```diff
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 4f4c418..b941cca 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -13,6 +13,7 @@ config ARM64
 	select GENERIC_IOMAP
 	select GENERIC_IRQ_PROBE
 	select GENERIC_IRQ_SHOW
+	select GENERIC_SCHED_CLOCK
 	select GENERIC_SMP_IDLE_THREAD
 	select GENERIC_TIME_VSYSCALL
 	select HARDIRQS_SW_RESEND
diff --git a/arch/arm64/kernel/time.c b/arch/arm64/kernel/time.c
index a551f88..fd07ef9 100644
--- a/arch/arm64/kernel/time.c
+++ b/arch/arm64/kernel/time.c
@@ -33,6 +33,7 @@
 #include <linux/irq.h>
 #include <linux/delay.h>
 #include <linux/clocksource.h>
+#include <linux/sched_clock.h>
 
 #include <clocksource/arm_arch_timer.h>
 
@@ -61,13 +62,6 @@ unsigned long profile_pc(struct pt_regs *regs)
 EXPORT_SYMBOL(profile_pc);
 #endif
 
-static u64 sched_clock_mult __read_mostly;
-
-unsigned long long notrace sched_clock(void)
-{
-	return arch_timer_read_counter() * sched_clock_mult;
-}
-
 int read_current_timer(unsigned long *timer_value)
 {
 	*timer_value = arch_timer_read_counter();
@@ -84,8 +78,7 @@ void __init time_init(void)
 	if (!arch_timer_rate)
 		panic("Unable to initialise architected timer.\n");
 
-	/* Cache the sched_clock multiplier to save a divide in the hot path. */
-	sched_clock_mult = NSEC_PER_SEC / arch_timer_rate;
+	setup_sched_clock_64(arch_timer_read_counter, 56, arch_timer_rate);
 
 	/* Calibrate the delay loop directly */
 	lpj_fine = arch_timer_rate / HZ;
```
Use the generic sched_clock infrastructure instead of rolling our
own. This has the added benefit of fixing suspend/resume as
outlined in 6a4dae5 (ARM: 7565/1: sched: stop sched_clock()
during suspend, 2012-10-23) and correcting the timestamps when
the hardware returns a value instead of 0 upon the first read.

Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
---
 arch/arm64/Kconfig       |  1 +
 arch/arm64/kernel/time.c | 11 ++---------
 2 files changed, 3 insertions(+), 9 deletions(-)