
[01/11] perf/x86: Fix native_sched_clock_from_tsc() with __sched_clock_offset

Message ID 20220209084929.54331-2-adrian.hunter@intel.com (mailing list archive)
State New, archived
Series perf intel-pt: Add perf event clocks to better support VM tracing

Commit Message

Adrian Hunter Feb. 9, 2022, 8:49 a.m. UTC
native_sched_clock_from_tsc() is used to produce a time value that can
be consistent with perf_clock().  Consequently, it should be adjusted by
__sched_clock_offset, the same as perf_clock() would be.

Fixes: 698eff6355f735 ("sched/clock, x86/perf: Fix perf test tsc")
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 arch/x86/kernel/tsc.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

Comments

Peter Zijlstra Feb. 9, 2022, 12:54 p.m. UTC | #1
On Wed, Feb 09, 2022 at 10:49:19AM +0200, Adrian Hunter wrote:
> native_sched_clock_from_tsc() is used to produce a time value that can
> be consistent with perf_clock().  Consequently, it should be adjusted by
> __sched_clock_offset, the same as perf_clock() would be.
> 
> Fixes: 698eff6355f735 ("sched/clock, x86/perf: Fix perf test tsc")
> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
> ---
>  arch/x86/kernel/tsc.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c
> index a698196377be..c1c73fe324cd 100644
> --- a/arch/x86/kernel/tsc.c
> +++ b/arch/x86/kernel/tsc.c
> @@ -242,7 +242,8 @@ u64 native_sched_clock(void)
>   */
>  u64 native_sched_clock_from_tsc(u64 tsc)
>  {
> -	return cycles_2_ns(tsc);
> +	return cycles_2_ns(tsc) +
> +	       (sched_clock_stable() ? __sched_clock_offset : 0);
>  }

Why do we care about the !sched_clock_stable() case?
Adrian Hunter Feb. 9, 2022, 2:26 p.m. UTC | #2
On 09/02/2022 14:54, Peter Zijlstra wrote:
> On Wed, Feb 09, 2022 at 10:49:19AM +0200, Adrian Hunter wrote:
>> native_sched_clock_from_tsc() is used to produce a time value that can
>> be consistent with perf_clock().  Consequently, it should be adjusted by
>> __sched_clock_offset, the same as perf_clock() would be.
>>
>> Fixes: 698eff6355f735 ("sched/clock, x86/perf: Fix perf test tsc")
>> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
>> ---
>>  arch/x86/kernel/tsc.c | 3 ++-
>>  1 file changed, 2 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c
>> index a698196377be..c1c73fe324cd 100644
>> --- a/arch/x86/kernel/tsc.c
>> +++ b/arch/x86/kernel/tsc.c
>> @@ -242,7 +242,8 @@ u64 native_sched_clock(void)
>>   */
>>  u64 native_sched_clock_from_tsc(u64 tsc)
>>  {
>> -	return cycles_2_ns(tsc);
>> +	return cycles_2_ns(tsc) +
>> +	       (sched_clock_stable() ? __sched_clock_offset : 0);
>>  }
> 
> Why do we care about the !sched_clock_stable() case?

I guess we don't.  So add __sched_clock_offset unconditionally then?
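If the offset is added unconditionally, as suggested above, the function would presumably reduce to the following. This is a sketch of the follow-up being discussed, not a posted revision:

```c
u64 native_sched_clock_from_tsc(u64 tsc)
{
	return cycles_2_ns(tsc) + __sched_clock_offset;
}
```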

Patch

diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c
index a698196377be..c1c73fe324cd 100644
--- a/arch/x86/kernel/tsc.c
+++ b/arch/x86/kernel/tsc.c
@@ -242,7 +242,8 @@ u64 native_sched_clock(void)
  */
 u64 native_sched_clock_from_tsc(u64 tsc)
 {
-	return cycles_2_ns(tsc);
+	return cycles_2_ns(tsc) +
+	       (sched_clock_stable() ? __sched_clock_offset : 0);
 }
 
 /* We need to define a real function for sched_clock, to override the