[v3,3/3] arm64: Early boot time stamps

Message ID 20181226164509.22916-4-pasha.tatashin@soleen.com (mailing list archive)
State New, archived
Series Early boot time stamps for arm64

Commit Message

Pasha Tatashin Dec. 26, 2018, 4:45 p.m. UTC
Allow printk time stamps/sched_clock() to be available from early
boot.

Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
---
 arch/arm64/kernel/setup.c | 25 +++++++++++++++++++++++++
 1 file changed, 25 insertions(+)

Comments

Marc Zyngier Jan. 3, 2019, 10:51 a.m. UTC | #1
Hi Pavel,

On 26/12/2018 16:45, Pavel Tatashin wrote:
> Allow printk time stamps/sched_clock() to be available from early
> boot.
> 
> Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
> ---
>  arch/arm64/kernel/setup.c | 25 +++++++++++++++++++++++++
>  1 file changed, 25 insertions(+)
> 
> diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
> index 4b0e1231625c..28126facc4ed 100644
> --- a/arch/arm64/kernel/setup.c
> +++ b/arch/arm64/kernel/setup.c
> @@ -40,6 +40,7 @@
>  #include <linux/efi.h>
>  #include <linux/psci.h>
>  #include <linux/sched/task.h>
> +#include <linux/sched_clock.h>
>  #include <linux/mm.h>
>  
>  #include <asm/acpi.h>
> @@ -279,8 +280,32 @@ arch_initcall(reserve_memblock_reserved_regions);
>  
>  u64 __cpu_logical_map[NR_CPUS] = { [0 ... NR_CPUS-1] = INVALID_HWID };
>  
> +/*
> + * Make time stamps available early in boot, which is useful for
> + * identifying boot time issues.
> + */
> +static __init void sched_clock_early_init(void)
> +{
> +	u64 (*read_time)(void) = arch_counter_get_cntvct;
> +	u64 freq = arch_timer_get_cntfrq();
> +
> +	/*
> +	 * The arm64 boot protocol mandates that CNTFRQ_EL0 reflects
> +	 * the timer frequency. To avoid breakage on misconfigured
> +	 * systems, do not register the early sched_clock if the
> +	 * programmed value is zero. Other random values will just
> +	 * result in random output.
> +	 */
> +	if (!freq)
> +		return;
> +
> +	sched_clock_register(read_time, ARCH_TIMER_NBITS, freq);
> +}
> +
>  void __init setup_arch(char **cmdline_p)
>  {
> +	sched_clock_early_init();
> +
>  	init_mm.start_code = (unsigned long) _text;
>  	init_mm.end_code   = (unsigned long) _etext;
>  	init_mm.end_data   = (unsigned long) _edata;
> 

I still think this approach is flawed. You provide the kernel with a
potentially broken sched_clock that may jump back and forth until the
workaround kicks in. Nobody expects this.

Instead, I'd suggest you allow for something other than local_clock()
to be used for the time stamping until a properly working sched_clock
gets registered.

This way, you'll only impact the timestamps when running on a broken system.

Thanks,

	M.
Pasha Tatashin Jan. 3, 2019, 7:58 p.m. UTC | #2
> I still think this approach is flawed. You provide the kernel with a
> potentially broken sched_clock that may jump back and forth until the
> workaround kicks in. Nobody expects this.
>
> Instead, I'd suggest you allow for something other than local_clock()
> to be used for the time stamping until a properly working sched_clock
> gets registered.
>
> This way, you'll only impact the timestamps when running on a broken system.

I think, given that on other platforms sched_clock() is already used
early, it is not a good idea to invent a different clock just for time
stamps.

We could limit the arm64 approach to chips where cntvct_el0 is
working: i.e. the frequency is known, and the clock is stable, meaning
it cannot go backward. Perhaps we would start the early clock a little
later, but at least it would be available for the sane chips. The only
question is where during boot this is known.

Another approach is to modify sched_clock() in
kernel/time/sched_clock.c to never return a backward value during boot.

1. Rename the current implementation of sched_clock() to sched_clock_raw()
2. New sched_clock() would look like this:

u64 sched_clock(void)
{
	if (static_branch_unlikely(&early_unstable_clock))
		return sched_clock_unstable();
	else
		return sched_clock_raw();
}

3. sched_clock_unstable() would look like this:

u64 sched_clock_unstable(void)
{
	static u64 old_clock;
	u64 old_clock_read, new_clock;

again:
	old_clock_read = READ_ONCE(old_clock);
	new_clock = sched_clock_raw();
	/* It is ok if time does not progress, but don't allow it to go backward */
	if (new_clock < old_clock_read)
		return old_clock_read;
	/* update the old_clock value */
	if (cmpxchg64(&old_clock, old_clock_read, new_clock) != old_clock_read)
		goto again;
	return new_clock;
}

Pasha
Marc Zyngier Jan. 4, 2019, 3:39 p.m. UTC | #3
On Thu, 03 Jan 2019 19:58:25 +0000,
Pavel Tatashin <pasha.tatashin@soleen.com> wrote:
> 
> > I still think this approach is flawed. You provide the kernel with a
> > potentially broken sched_clock that may jump back and forth until the
> > workaround kicks in. Nobody expects this.
> >
> > Instead, I'd suggest you allow for something other than local_clock()
> > to be used for the time stamping until a properly working sched_clock
> > gets registered.
> >
> > This way, you'll only impact the timestamps when running on a broken system.
> 
> I think, given that on other platforms sched_clock() is already used
> early, it is not a good idea to invent a different clock just for time
> stamps.

Square pegs vs round holes. Mimicking other architectures isn't always
the right thing to do when faced with a different problem. We put a
lot of effort in working around timer errata for a good reason, and
feeding the rest of the system bogus timing information doesn't sound
great.

> We could limit the arm64 approach to chips where cntvct_el0 is
> working: i.e. the frequency is known, and the clock is stable, meaning
> it cannot go backward. Perhaps we would start the early clock a little
> later, but at least it would be available for the sane chips. The only
> question is where during boot this is known.

How do you propose we do that? Defective timers can be a property of
the implementation, of the integration, or both. In any case, it
requires firmware support (DT, ACPI). All that is only available quite
late, and moving it earlier is not easily doable.

> Another approach is to modify sched_clock() in
> kernel/time/sched_clock.c to never return a backward value during boot.
>
> 1. Rename the current implementation of sched_clock() to sched_clock_raw()
> 2. New sched_clock() would look like this:
>
> u64 sched_clock(void)
> {
> 	if (static_branch_unlikely(&early_unstable_clock))
> 		return sched_clock_unstable();
> 	else
> 		return sched_clock_raw();
> }
>
> 3. sched_clock_unstable() would look like this:
>
> u64 sched_clock_unstable(void)
> {
> 	static u64 old_clock;
> 	u64 old_clock_read, new_clock;
>
> again:
> 	old_clock_read = READ_ONCE(old_clock);
> 	new_clock = sched_clock_raw();
> 	/* It is ok if time does not progress, but don't allow it to go backward */
> 	if (new_clock < old_clock_read)
> 		return old_clock_read;
> 	/* update the old_clock value */
> 	if (cmpxchg64(&old_clock, old_clock_read, new_clock) != old_clock_read)
> 		goto again;
> 	return new_clock;
> }

You now have an "unstable" clock that is only allowed to move forward,
until you switch to the real one. And at handover time, anything can
happen.

It is one thing to allow for the time stamping to be imprecise. But
imposing the same behaviour on other parts of the kernel that have so
far relied on a strictly monotonic sched_clock feels like a bad idea.

What I'm proposing is that we allow architectures to override the hard
tie between local_clock/sched_clock and kernel log time stamping, with
the default being of course what we have today. This gives a clean
separation between the two when the architecture needs to delay the
availability of sched_clock until implementation requirements are
discovered. It also keeps sched_clock simple and efficient.
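
Concretely, the shape is something like this (hand-written sketch, not
the actual patches; register_timestamp_clock() is a made-up name here):

static u64 (*timestamp_clock_fn)(void);

u64 timestamp_clock(void)
{
	/* Use an arch-registered early clock if we have one... */
	if (timestamp_clock_fn)
		return timestamp_clock_fn();
	/* ...otherwise behave exactly as today. */
	return local_clock();
}

/* Made-up registration hook for the sketch above. */
void register_timestamp_clock(u64 (*fn)(void))
{
	timestamp_clock_fn = fn;
}

The log time stamping would then call timestamp_clock() instead of
local_clock(), and an architecture that cannot yet trust its counter
simply never registers anything, keeping today's behaviour.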

To illustrate what I'm trying to argue for, I've pushed out a couple
of proof of concept patches here[1]. I've briefly tested them in a
guest, and things seem to work OK.

Thanks,

	M.

[1] https://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms.git/log/?h=arm64/tsclock
Pasha Tatashin Jan. 4, 2019, 4:23 p.m. UTC | #4
Hi Marc,

Thank you for taking a look at this; please see my replies below.

> > I think, given that on other platforms sched_clock() is already used
> > early, it is not a good idea to invent a different clock just for time
> > stamps.
>
> Square pegs vs round holes. Mimicking other architectures isn't always
> the right thing to do when faced with a different problem. We put a
> lot of effort in working around timer errata for a good reason, and
> feeding the rest of the system bogus timing information doesn't sound
> great.
>
> > We could limit the arm64 approach to chips where cntvct_el0 is
> > working: i.e. the frequency is known, and the clock is stable, meaning
> > it cannot go backward. Perhaps we would start the early clock a little
> > later, but at least it would be available for the sane chips. The only
> > question is where during boot this is known.
>
> How do you propose we do that? Defective timers can be a property of
> the implementation, of the integration, or both. In any case, it
> requires firmware support (DT, ACPI). All that is only available quite
> late, and moving it earlier is not easily doable.

OK, but could we at least whitelist something early with the
expectation that future chips won't be bogus?
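
For example, something along these lines early in the arm64 boot code
(sketch only; the whitelist entry is a placeholder, not a claim that
any particular part is known-good):

static bool __init early_counter_trusted(void)
{
	u32 midr = read_cpuid_id();

	/* Placeholder whitelist: only use the counter for the early
	 * sched_clock on implementations believed to be sane. */
	switch (midr & MIDR_CPU_MODEL_MASK) {
	case MIDR_CORTEX_A53:	/* example entry only */
		return true;
	default:
		return false;
	}
}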

> > Another approach is to modify sched_clock() in
> > kernel/time/sched_clock.c to never return a backward value during boot.
> >
> > 1. Rename the current implementation of sched_clock() to sched_clock_raw()
> > 2. New sched_clock() would look like this:
> >
> > u64 sched_clock(void)
> > {
> > 	if (static_branch_unlikely(&early_unstable_clock))
> > 		return sched_clock_unstable();
> > 	else
> > 		return sched_clock_raw();
> > }
> >
> > 3. sched_clock_unstable() would look like this:
> >
> > u64 sched_clock_unstable(void)
> > {
> > 	static u64 old_clock;
> > 	u64 old_clock_read, new_clock;
> >
> > again:
> > 	old_clock_read = READ_ONCE(old_clock);
> > 	new_clock = sched_clock_raw();
> > 	/* It is ok if time does not progress, but don't allow it to go backward */
> > 	if (new_clock < old_clock_read)
> > 		return old_clock_read;
> > 	/* update the old_clock value */
> > 	if (cmpxchg64(&old_clock, old_clock_read, new_clock) != old_clock_read)
> > 		goto again;
> > 	return new_clock;
> > }
>
> You now have an "unstable" clock that is only allowed to move forward,
> until you switch to the real one. And at handover time, anything can
> happen.
>
> It is one thing to allow for the time stamping to be imprecise. But
> imposing the same behaviour on other parts of the kernel that have so
> far relied on a strictly monotonic sched_clock feels like a bad idea.

sched_clock() will still be strictly monotonic. During switch over, we
will guarantee to continue from where the early clock left off.

>
> What I'm proposing is that we allow architectures to override the hard
> tie between local_clock/sched_clock and kernel log time stamping, with
> the default being of course what we have today. This gives a clean
> separation between the two when the architecture needs to delay the
> availability of sched_clock until implementation requirements are
> discovered. It also keeps sched_clock simple and efficient.
>
> To illustrate what I'm trying to argue for, I've pushed out a couple
> of proof of concept patches here[1]. I've briefly tested them in a
> guest, and things seem to work OK.

What I am worried about is that decoupling time stamps from
sched_clock() will cause uptime and other commands that show boot time
not to correlate with the timestamps in dmesg. For them to correlate,
we would still have to switch back to local_clock() in
timestamp_clock() after we are done with early boot, which brings us
back to using the temporarily unstable clock that I proposed above,
but without adding an architectural hook for it. Again, we would need
to solve the problem of time continuity during switch over, which is
not a hard problem, as we already do it in sched_clock.c and every
time the clocksource changes.

During the early boot time stamps project for x86, we were extra
careful to make sure that they stayed the same.

Thank you,
Pasha
Marc Zyngier Jan. 4, 2019, 4:49 p.m. UTC | #5
On 04/01/2019 16:23, Pavel Tatashin wrote:

Hi Pavel,

>>> We could limit the arm64 approach to chips where cntvct_el0 is
>>> working: i.e. the frequency is known, and the clock is stable, meaning
>>> it cannot go backward. Perhaps we would start the early clock a little
>>> later, but at least it would be available for the sane chips. The only
>>> question is where during boot this is known.
>>
>> How do you propose we do that? Defective timers can be a property of
>> the implementation, of the integration, or both. In any case, it
>> requires firmware support (DT, ACPI). All that is only available quite
>> late, and moving it earlier is not easily doable.
> 
> OK, but could we at least whitelist something early with the
> expectation that future chips won't be bogus?

Just as I wish we had universal world peace. Timer integration is
probably the most broken thing in the whole ARM ecosystem (clock
domains, Gray code and general incompetence do get in the way). And as I
said above, detecting a broken implementation usually relies on some
firmware indication, which is only available at a later time (and I'm
trying really hard to keep the errata handling in the timer code).

>>> Another approach is to modify sched_clock() in
>>> kernel/time/sched_clock.c to never return a backward value during boot.
>>>
>>> 1. Rename the current implementation of sched_clock() to sched_clock_raw()
>>> 2. New sched_clock() would look like this:
>>>
>>> u64 sched_clock(void)
>>> {
>>> 	if (static_branch_unlikely(&early_unstable_clock))
>>> 		return sched_clock_unstable();
>>> 	else
>>> 		return sched_clock_raw();
>>> }
>>>
>>> 3. sched_clock_unstable() would look like this:
>>>
>>> u64 sched_clock_unstable(void)
>>> {
>>> 	static u64 old_clock;
>>> 	u64 old_clock_read, new_clock;
>>>
>>> again:
>>> 	old_clock_read = READ_ONCE(old_clock);
>>> 	new_clock = sched_clock_raw();
>>> 	/* It is ok if time does not progress, but don't allow it to go backward */
>>> 	if (new_clock < old_clock_read)
>>> 		return old_clock_read;
>>> 	/* update the old_clock value */
>>> 	if (cmpxchg64(&old_clock, old_clock_read, new_clock) != old_clock_read)
>>> 		goto again;
>>> 	return new_clock;
>>> }
>>
>> You now have an "unstable" clock that is only allowed to move forward,
>> until you switch to the real one. And at handover time, anything can
>> happen.
>>
>> It is one thing to allow for the time stamping to be imprecise. But
>> imposing the same behaviour on other parts of the kernel that have so
>> far relied on a strictly monotonic sched_clock feels like a bad idea.
> 
> sched_clock() will still be strictly monotonic. During switch over, we
> will guarantee to continue from where the early clock left off.

Not quite. There is at least one broken integration that results in
large, spurious jumps ahead. If one of these jumps happens during the
"unstable" phase, we'll only return old_clock. At some point, we switch
early_unstable_clock to be false, as we've now properly initialized the
timer and found the appropriate workaround. We'll now return a much
smaller value. sched_clock continuity doesn't seem to apply here, as
you're not registering a new sched_clock (or at least that's not how I
understand your code above).
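
To put made-up numbers on it: suppose the counter glitches from 100us
to 5s during the unstable phase, so old_clock latches 5s. When
early_unstable_clock is later flipped and sched_clock_raw() is used
directly, it may well read 200us, and time jumps backward by almost 5
seconds across the handover.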

>> What I'm proposing is that we allow architectures to override the hard
>> tie between local_clock/sched_clock and kernel log time stamping, with
>> the default being of course what we have today. This gives a clean
>> separation between the two when the architecture needs to delay the
>> availability of sched_clock until implementation requirements are
>> discovered. It also keeps sched_clock simple and efficient.
>>
>> To illustrate what I'm trying to argue for, I've pushed out a couple
>> of proof of concept patches here[1]. I've briefly tested them in a
>> guest, and things seem to work OK.
> 
> What I am worried about is that decoupling time stamps from
> sched_clock() will cause uptime and other commands that show boot time
> not to correlate with the timestamps in dmesg. For them to correlate,
> we would still have to switch back to local_clock() in
> timestamp_clock() after we are done with early boot, which brings us
> back to using the temporarily unstable clock that I proposed above,
> but without adding an architectural hook for it. Again, we would need
> to solve the problem of time continuity during switch over, which is
> not a hard problem, as we already do it in sched_clock.c and every
> time the clocksource changes.
> 
> During the early boot time stamps project for x86, we were extra
> careful to make sure that they stayed the same.

I can see two ways to achieve this requirement:

- we allow timestamp_clock to fall back to sched_clock once it becomes
non-zero. It has the drawback of resetting the time stamping in the
middle of the boot, which isn't great.

- we allow sched_clock to inherit the timestamp_clock value instead of
starting at zero like it does now. Not sure if that breaks anything, but
that's worth trying (it should be a matter of setting new_epoch to zero
in sched_clock_register).
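
One way to write that inheritance (sketch against my reading of
kernel/time/sched_clock.c, where rd is the local struct
clock_read_data; timestamp_clock() is the hook discussed above):

new_epoch = read();
/* Seed the epoch from the time stamp clock instead of the old
 * (zero-based) sched_clock value, so both clocks agree at the
 * handover point. */
ns = timestamp_clock();
rd.epoch_cyc = new_epoch;
rd.epoch_ns = ns;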

Thanks,

	M.
Pasha Tatashin Jan. 4, 2019, 8:54 p.m. UTC | #6
> > sched_clock() will still be strictly monotonic. During switch over, we
> > will guarantee to continue from where the early clock left off.
>
> Not quite. There is at least one broken integration that results in
> large, spurious jumps ahead. If one of these jumps happens during the
> "unstable" phase, we'll only return old_clock. At some point, we switch
> early_unstable_clock to be false, as we've now properly initialized the
> timer and found the appropriate workaround. We'll now return a much
> smaller value. sched_clock continuity doesn't seem to apply here, as
> you're not registering a new sched_clock (or at least that's not how I
> understand your code above).
>
> >> What I'm proposing is that we allow architectures to override the hard
> >> tie between local_clock/sched_clock and kernel log time stamping, with
> >> the default being of course what we have today. This gives a clean
> >> separation between the two when the architecture needs to delay the
> >> availability of sched_clock until implementation requirements are
> >> discovered. It also keeps sched_clock simple and efficient.
> >>
> >> To illustrate what I'm trying to argue for, I've pushed out a couple
> >> of proof of concept patches here[1]. I've briefly tested them in a
> >> guest, and things seem to work OK.
> >
> > What I am worried about is that decoupling time stamps from
> > sched_clock() will cause uptime and other commands that show boot time
> > not to correlate with the timestamps in dmesg. For them to correlate,
> > we would still have to switch back to local_clock() in
> > timestamp_clock() after we are done with early boot, which brings us
> > back to using the temporarily unstable clock that I proposed above,
> > but without adding an architectural hook for it. Again, we would need
> > to solve the problem of time continuity during switch over, which is
> > not a hard problem, as we already do it in sched_clock.c and every
> > time the clocksource changes.
> >
> > During the early boot time stamps project for x86, we were extra
> > careful to make sure that they stayed the same.
>
> I can see two ways to achieve this requirement:
>
> - we allow timestamp_clock to fall back to sched_clock once it becomes
> non-zero. It has the drawback of resetting the time stamping in the
> middle of the boot, which isn't great.

Right, I'd like those timestamps to be continuous.

>
> - we allow sched_clock to inherit the timestamp_clock value instead of
> starting at zero like it does now. Not sure if that breaks anything, but
> that's worth trying (it should be a matter of setting new_epoch to zero
> in sched_clock_register).

This is what I am proposing above with my approach: inherit the last
value of the unstable sched_clock before switching to the permanent
one. Please see [1] for how I implemented it, and we can discuss which
is better: the timestamp hook in printk or what I am suggesting.

[1] https://github.com/soleen/time_arm64/commits/time
"sched_clock: generic unstable clock" is a new patch; the other
patches are the ones sent out in this series. Because we use the last
sched_clock() value when calculating the epoch in
sched_clock_register(), we guarantee monotonicity during the clock
change.

Thank you,
Pasha

Patch

diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
index 4b0e1231625c..28126facc4ed 100644
--- a/arch/arm64/kernel/setup.c
+++ b/arch/arm64/kernel/setup.c
@@ -40,6 +40,7 @@ 
 #include <linux/efi.h>
 #include <linux/psci.h>
 #include <linux/sched/task.h>
+#include <linux/sched_clock.h>
 #include <linux/mm.h>
 
 #include <asm/acpi.h>
@@ -279,8 +280,32 @@  arch_initcall(reserve_memblock_reserved_regions);
 
 u64 __cpu_logical_map[NR_CPUS] = { [0 ... NR_CPUS-1] = INVALID_HWID };
 
+/*
+ * Make time stamps available early in boot, which is useful for
+ * identifying boot time issues.
+ */
+static __init void sched_clock_early_init(void)
+{
+	u64 (*read_time)(void) = arch_counter_get_cntvct;
+	u64 freq = arch_timer_get_cntfrq();
+
+	/*
+	 * The arm64 boot protocol mandates that CNTFRQ_EL0 reflects
+	 * the timer frequency. To avoid breakage on misconfigured
+	 * systems, do not register the early sched_clock if the
+	 * programmed value is zero. Other random values will just
+	 * result in random output.
+	 */
+	if (!freq)
+		return;
+
+	sched_clock_register(read_time, ARCH_TIMER_NBITS, freq);
+}
+
 void __init setup_arch(char **cmdline_p)
 {
+	sched_clock_early_init();
+
 	init_mm.start_code = (unsigned long) _text;
 	init_mm.end_code   = (unsigned long) _etext;
 	init_mm.end_data   = (unsigned long) _edata;