| Message ID | 1571662320-1280-1-git-send-email-chenhc@lemote.com (mailing list archive) |
|---|---|
| State | Not Applicable |
| Delegated to | Paul Burton |
| Series | None |
On Mon, 21 Oct 2019, Huacai Chen wrote:

> @@ -50,7 +50,7 @@ static int do_hres(const struct vdso_data *vd, clockid_t clk,
>  	cycles = __arch_get_hw_counter(vd->clock_mode);
>  	ns = vdso_ts->nsec;
>  	last = vd->cycle_last;
> -	if (unlikely((s64)cycles < 0))
> +	if (unlikely(cycles == U64_MAX))
>  		return -1;

That used to create worse code than the weird (s64) type cast which has
the same effect. Did you double check that there is no change?

Thanks,

	tglx
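[Editor's note: for readers who want to see the codegen difference tglx refers to, here is a minimal standalone sketch (plain C, hypothetical helper names, not the vDSO source) of the two checks; compiling it with something like `gcc -O2 -m32 -S` makes the difference visible on 32-bit targets.]

```c
/*
 * Standalone sketch of the two fallback checks being discussed, so the
 * generated code can be compared. The helper names are made up for
 * illustration; this is not kernel code.
 */
#include <stdint.h>

/* Sign-bit test: typically a single test of the high word on 32-bit targets. */
int invalid_by_sign(uint64_t cycles)
{
	return (int64_t)cycles < 0;
}

/* Equality test: generally has to compare both 32-bit halves against ~0. */
int invalid_by_sentinel(uint64_t cycles)
{
	return cycles == UINT64_MAX;
}
```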
On Mon, 21 Oct 2019, Thomas Gleixner wrote:
> On Mon, 21 Oct 2019, Huacai Chen wrote:
> > @@ -50,7 +50,7 @@ static int do_hres(const struct vdso_data *vd, clockid_t clk,
> >  	cycles = __arch_get_hw_counter(vd->clock_mode);
> >  	ns = vdso_ts->nsec;
> >  	last = vd->cycle_last;
> > -	if (unlikely((s64)cycles < 0))
> > +	if (unlikely(cycles == U64_MAX))
> >  		return -1;
>
> That used to create worse code than the weird (s64) type cast which has
> the same effect. Did you double check that there is no change?

It still does for 32bit.
Hi, Thomas,

If we use (s64)cycles < 0, then how do we solve the problem that a 64bit
counter becomes negative? Maybe we can change the "invalid" value from
U64_MAX to 0? I think the performance of "cycles == 0" is better than
"cycles == U64_MAX".

Huacai

On Mon, Oct 21, 2019 at 10:58 PM Thomas Gleixner <tglx@linutronix.de> wrote:
>
> On Mon, 21 Oct 2019, Thomas Gleixner wrote:
>
> > On Mon, 21 Oct 2019, Huacai Chen wrote:
> > > @@ -50,7 +50,7 @@ static int do_hres(const struct vdso_data *vd, clockid_t clk,
> > >  	cycles = __arch_get_hw_counter(vd->clock_mode);
> > >  	ns = vdso_ts->nsec;
> > >  	last = vd->cycle_last;
> > > -	if (unlikely((s64)cycles < 0))
> > > +	if (unlikely(cycles == U64_MAX))
> > >  		return -1;
> >
> > That used to create worse code than the weird (s64) type cast which has the
> > same effect. Did you double check that there is no change?
>
> It still does for 32bit.
On Tue, 22 Oct 2019, Huacai Chen wrote:

  https://people.kernel.org/tglx/notes-about-netiquette

Look for Toppost

> If we use (s64)cycles < 0, then how do we solve the problem that a 64bit
> counter becomes negative?

I doubt that you will be able to observe that. A 64bit value becomes
negative after 1<<63 cycles, i.e. at 1GHz that's 292 years of uptime.

What's your problem?

Thanks,

	tglx
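[Editor's note: the 292-year figure is easy to verify; a throwaway check of the arithmetic (standalone C, assuming one cycle per nanosecond at 1 GHz):]

```c
/*
 * Back-of-the-envelope check of the "292 years" figure: at 1 GHz one
 * cycle is 1 ns, so the sign bit of a 64-bit counter is reached after
 * 2^63 ns. Standalone C, only to verify the arithmetic.
 */
#include <stdio.h>

int main(void)
{
	double ns_to_sign_bit = 9223372036854775808.0;	/* 2^63 cycles @ 1 GHz = 2^63 ns */
	double seconds = ns_to_sign_bit / 1e9;
	double years = seconds / (365.25 * 24 * 3600);

	printf("%.1f years\n", years);	/* prints roughly 292.3 */
	return 0;
}
```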
diff --git a/kernel/time/vsyscall.c b/kernel/time/vsyscall.c
index 4bc37ac..5ee0f77 100644
--- a/kernel/time/vsyscall.c
+++ b/kernel/time/vsyscall.c
@@ -110,8 +110,7 @@ void update_vsyscall(struct timekeeper *tk)
 	nsec = nsec + tk->wall_to_monotonic.tv_nsec;
 	vdso_ts->sec += __iter_div_u64_rem(nsec, NSEC_PER_SEC, &vdso_ts->nsec);
 
-	if (__arch_use_vsyscall(vdata))
-		update_vdso_data(vdata, tk);
+	update_vdso_data(vdata, tk);
 
 	__arch_update_vsyscall(vdata, tk);
 
@@ -124,10 +123,8 @@ void update_vsyscall_tz(void)
 {
 	struct vdso_data *vdata = __arch_get_k_vdso_data();
 
-	if (__arch_use_vsyscall(vdata)) {
-		vdata[CS_HRES_COARSE].tz_minuteswest = sys_tz.tz_minuteswest;
-		vdata[CS_HRES_COARSE].tz_dsttime = sys_tz.tz_dsttime;
-	}
+	vdata[CS_HRES_COARSE].tz_minuteswest = sys_tz.tz_minuteswest;
+	vdata[CS_HRES_COARSE].tz_dsttime = sys_tz.tz_dsttime;
 
 	__arch_sync_vdso_data(vdata);
 }
diff --git a/lib/vdso/gettimeofday.c b/lib/vdso/gettimeofday.c
index e630e7f..5a31643 100644
--- a/lib/vdso/gettimeofday.c
+++ b/lib/vdso/gettimeofday.c
@@ -50,7 +50,7 @@ static int do_hres(const struct vdso_data *vd, clockid_t clk,
 	cycles = __arch_get_hw_counter(vd->clock_mode);
 	ns = vdso_ts->nsec;
 	last = vd->cycle_last;
-	if (unlikely((s64)cycles < 0))
+	if (unlikely(cycles == U64_MAX))
 		return -1;
 
 	ns += vdso_calc_delta(cycles, last, vd->mask, vd->mult);
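[Editor's note: for context on why the sentinel matters at all, here is a simplified sketch of the caller side of do_hres() in the generic vDSO library; the flow is condensed and the `_sketch` suffix marks it as an illustration, not the actual lib/vdso code.]

```c
/*
 * Simplified sketch (not the actual lib/vdso code) of why do_hres()
 * returning -1 matters: the generic vDSO wrapper treats any non-zero
 * return as "cannot do this in userspace" and falls back to the real
 * system call, which is the intended behaviour when clock_mode is
 * invalid for the current clocksource.
 */
static __maybe_unused int
__cvdso_clock_gettime_sketch(clockid_t clock, struct __kernel_timespec *ts)
{
	const struct vdso_data *vd = __arch_get_vdso_data();

	if (do_hres(&vd[CS_HRES_COARSE], clock, ts) == 0)
		return 0;				/* fast path succeeded */

	return clock_gettime_fallback(clock, ts);	/* real syscall */
}
```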
In do_hres(), we currently use whether the return value of
__arch_get_hw_counter() is negative to indicate fallback, but this is
not a good idea, because:

1. ARM64 returns ULL_MAX but MIPS returns 0 when clock_mode is invalid;
2. For a 64bit counter, a "negative" value of the counter is actually
   valid.

To be sure, MIPS has a bug when clock_mode is invalid and should return
ULL_MAX as ARM64 does (Vincenzo has already submitted a patch). But
do_hres() can still be improved, so we use U64_MAX as the only "invalid"
return value -- this is still not fully correct, but it is the simplest
fix and causes no problems in most cases (a 64bit counter can hardly be
seen to overflow).

By the way, update_vdso_data() and update_vsyscall_tz() currently rely
on __arch_use_vsyscall(), which causes __cvdso_clock_getres() and some
other functions to get wrong results when clock_mode is invalid. So, we
update the vdso data unconditionally.

Fixes: 00b26474c2f1613d7ab894c5 ("lib/vdso: Provide generic VDSO implementation")
Fixes: 44f57d788e7deecb50484353 ("timekeeping: Provide a generic update_vsyscall() implementation")
Cc: stable@vger.kernel.org
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Paul Burton <paul.burton@mips.com>
Cc: linux-mips@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org
Signed-off-by: Huacai Chen <chenhc@lemote.com>
---
 kernel/time/vsyscall.c  | 9 +++------
 lib/vdso/gettimeofday.c | 2 +-
 2 files changed, 4 insertions(+), 7 deletions(-)
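[Editor's note: to illustrate the convention the commit message argues for (ARM64-style U64_MAX on an invalid clock_mode, rather than MIPS's 0), a hypothetical per-arch counter hook might look like the sketch below; the enum constant and the raw-read helper are placeholders, not any architecture's real names.]

```c
/*
 * Illustrative sketch (not any architecture's real implementation) of
 * the per-arch hook following the convention this patch standardizes on:
 * return the raw counter when the clocksource is vDSO-capable, and
 * U64_MAX when clock_mode says the fast path must not be used.
 */
static __always_inline u64 __arch_get_hw_counter(s32 clock_mode)
{
	if (clock_mode == ARCH_CLOCKMODE_NONE)	/* constant name assumed for illustration */
		return U64_MAX;			/* do_hres() will return -1 and fall back */

	return read_arch_cycle_counter();	/* hypothetical raw counter read */
}
```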