
timekeeping: move multigrain ctime floor handling into timekeeper

Message ID 20240911-mgtime-v1-1-e4aedf1d0d15@kernel.org (mailing list archive)
State New
Series timekeeping: move multigrain ctime floor handling into timekeeper

Commit Message

Jeff Layton Sept. 11, 2024, 12:56 p.m. UTC
The kernel test robot reported a performance regression in some
will-it-scale tests due to the multigrain timestamp patches. The data
showed that coarse_ctime() was slowing down current_time(), which is
called frequently in the I/O path.

Add ktime_get_coarse_real_ts64_with_floor(), which returns either the
coarse time or the floor as a realtime value. This avoids some of the
conversion overhead of coarse_ctime(), and recovers some of the
performance in these tests.

The will-it-scale pipe1_threads microbenchmark shows these averages on
my test rig:

	v6.11-rc7:			83830660 (baseline)
	v6.11-rc7 + mgtime series:	77631748 (93% of baseline)
	v6.11-rc7 + mgtime + this:	81620228 (97% of baseline)

Reported-by: kernel test robot <oliver.sang@intel.com>
Closes: https://lore.kernel.org/oe-lkp/202409091303.31b2b713-oliver.sang@intel.com
Suggested-by: Arnd Bergmann <arnd@kernel.org>
Signed-off-by: Jeff Layton <jlayton@kernel.org>
---
Arnd suggested moving this into the timekeeper when reviewing an earlier
version of this series, and that turns out to be better for performance.

I'm not sure how this should go in (if acceptable). The multigrain
timestamp patches that this would affect are in Christian's tree, so
that may be best if the timekeeper maintainers are OK with this
approach.
---
 fs/inode.c                  | 35 +++++++++--------------------------
 include/linux/timekeeping.h |  2 ++
 kernel/time/timekeeping.c   | 29 +++++++++++++++++++++++++++++
 3 files changed, 40 insertions(+), 26 deletions(-)


---
base-commit: 962e66693d6214b1d48f32f68ed002170a98f2c0
change-id: 20240910-mgtime-e244049f2aea

Best regards,

Comments

John Stultz Sept. 11, 2024, 7:55 p.m. UTC | #1
On Wed, Sep 11, 2024 at 5:57 AM Jeff Layton <jlayton@kernel.org> wrote:
>
> The kernel test robot reported a performance regression in some
> will-it-scale tests due to the multigrain timestamp patches. The data
> showed that coarse_ctime() was slowing down current_time(), which is
> called frequently in the I/O path.

Maybe add a link to/sha for multigrain timestamp patches?

It might be helpful as well to further explain the overhead you're
seeing in detail?

> Add ktime_get_coarse_real_ts64_with_floor(), which returns either the
> coarse time or the floor as a realtime value. This avoids some of the
> conversion overhead of coarse_ctime(), and recovers some of the
> performance in these tests.
>
> The will-it-scale pipe1_threads microbenchmark shows these averages on
> my test rig:
>
>         v6.11-rc7:                      83830660 (baseline)
>         v6.11-rc7 + mgtime series:      77631748 (93% of baseline)
>         v6.11-rc7 + mgtime + this:      81620228 (97% of baseline)
>
> Reported-by: kernel test robot <oliver.sang@intel.com>
> Closes: https://lore.kernel.org/oe-lkp/202409091303.31b2b713-oliver.sang@intel.com

Fixes: ?

> Suggested-by: Arnd Bergmann <arnd@kernel.org>
> Signed-off-by: Jeff Layton <jlayton@kernel.org>
> ---
> Arnd suggested moving this into the timekeeper when reviewing an earlier
> version of this series, and that turns out to be better for performance.
>
> I'm not sure how this should go in (if acceptable). The multigrain
> timestamp patches that this would affect are in Christian's tree, so
> that may be best if the timekeeper maintainers are OK with this
> approach.
> ---
>  fs/inode.c                  | 35 +++++++++--------------------------
>  include/linux/timekeeping.h |  2 ++
>  kernel/time/timekeeping.c   | 29 +++++++++++++++++++++++++++++
>  3 files changed, 40 insertions(+), 26 deletions(-)
>
> diff --git a/fs/inode.c b/fs/inode.c
> index 01f7df1973bd..47679a054472 100644
> --- a/fs/inode.c
> +++ b/fs/inode.c
> @@ -2255,25 +2255,6 @@ int file_remove_privs(struct file *file)
>  }
>  EXPORT_SYMBOL(file_remove_privs);
>
> -/**
> - * coarse_ctime - return the current coarse-grained time
> - * @floor: current (monotonic) ctime_floor value
> - *
> - * Get the coarse-grained time, and then determine whether to
> - * return it or the current floor value. Returns the later of the
> - * floor and coarse grained timestamps, converted to realtime
> - * clock value.
> - */
> -static ktime_t coarse_ctime(ktime_t floor)
> -{
> -       ktime_t coarse = ktime_get_coarse();
> -
> -       /* If coarse time is already newer, return that */
> -       if (!ktime_after(floor, coarse))
> -               return ktime_get_coarse_real();
> -       return ktime_mono_to_real(floor);
> -}

I'm guessing this is part of the patch set being worked on, but this
is a very unintuitive function.

You give it a CLOCK_MONOTONIC floor value, but it returns
CLOCK_REALTIME based time?

It looks like it's asking to be misused.

...
> diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
> index 5391e4167d60..56b979471c6a 100644
> --- a/kernel/time/timekeeping.c
> +++ b/kernel/time/timekeeping.c
> @@ -2394,6 +2394,35 @@ void ktime_get_coarse_real_ts64(struct timespec64 *ts)
>  }
>  EXPORT_SYMBOL(ktime_get_coarse_real_ts64);
>
> +/**
> + * ktime_get_coarse_real_ts64_with_floor - get later of coarse grained time or floor
> + * @ts: timespec64 to be filled
> + * @floor: monotonic floor value
> + *
> + * Adjust @floor to realtime and compare that to the coarse time. Fill
> + * @ts with the later of the two.
> + */
> +void ktime_get_coarse_real_ts64_with_floor(struct timespec64 *ts, ktime_t floor)

Maybe name 'floor' 'mono_floor' so it's very clear?

> +{
> +       struct timekeeper *tk = &tk_core.timekeeper;
> +       unsigned int seq;
> +       ktime_t f_real, offset, coarse;
> +
> +       WARN_ON(timekeeping_suspended);
> +
> +       do {
> +               seq = read_seqcount_begin(&tk_core.seq);
> +               *ts = tk_xtime(tk);
> +               offset = *offsets[TK_OFFS_REAL];
> +       } while (read_seqcount_retry(&tk_core.seq, seq));
> +
> +       coarse = timespec64_to_ktime(*ts);
> +       f_real = ktime_add(floor, offset);
> +       if (ktime_after(f_real, coarse))
> +               *ts = ktime_to_timespec64(f_real);


I am still very wary of the function taking a CLOCK_MONOTONIC
comparator and returning a REALTIME value.
But I think I understand why you might want it: You want a ratchet to
filter inconsistencies from mixing fine and coarse (which very quickly
return the time in the recent past) grained timestamps, but you want
to avoid having a one-way ratchet getting stuck if settimeofday() gets
called.
So you implemented the ratchet against CLOCK_MONOTONIC, so
settimeofday offsets are ignored.

Is that close?

My confusion comes from the fact it seems like that would mean you
have to do all your timestamping with CLOCK_MONOTONIC (so you have a
useful floor value that you're keeping), so I'm not sure I understand
the utility of returning CLOCK_REALTIME values. I guess I don't quite
see the logic where the floor value is updated here, so I'm guessing.

Further, while this change avoids the earlier method's two calls taking
the timekeeping seqlock, going from timespec->ktime->timespec still
seems a little less than optimal if this is a performance hotpath (the
coarse clocks are based on CLOCK_REALTIME timespecs because that was the
legacy hotpath being optimized for, so if we have to internalize this
odd-seeming realtime-against-monotonic usage model, we probably should
better optimise through the stack there).

thanks
-john
Arnd Bergmann Sept. 11, 2024, 8:19 p.m. UTC | #2
On Wed, Sep 11, 2024, at 19:55, John Stultz wrote:
>> diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
>> index 5391e4167d60..56b979471c6a 100644

> My confusion comes from the fact it seems like that would mean you
> have to do all your timestamping with CLOCK_MONOTONIC (so you have a
> useful floor value that you're keeping), so I'm not sure I understand
> the utility of returning CLOCK_REALTIME values. I guess I don't quite
> see the logic where the floor value is updated here, so I'm guessing.

I think we could take this further and store the floor value
in the timekeeper itself rather than in a global variable
next to the caller.

And instead of storing the absolute floor value, it would
be enough to store the delta since the previous
update_wall_time(), which in turn can get updated by a
variant of ktime_get_real_ts64() and reset to zero during
update_wall_time().

That way the coarse function only gains a call to
timespec64_add_ns() over the traditional version, and the
fine-grained version needs to atomically update that value.
If the delta value has to be a 64-bit integer, there also
needs to be some serialization of the reader side, but I
think that can be done with read_seqcount_begin() .

      Arnd
Jeff Layton Sept. 11, 2024, 8:19 p.m. UTC | #3
On Wed, 2024-09-11 at 12:55 -0700, John Stultz wrote:
> On Wed, Sep 11, 2024 at 5:57 AM Jeff Layton <jlayton@kernel.org> wrote:
> > 
> > The kernel test robot reported a performance regression in some
> > will-it-scale tests due to the multigrain timestamp patches. The data
> > showed that coarse_ctime() was slowing down current_time(), which is
> > called frequently in the I/O path.
> 
> Maybe add a link to/sha for multigrain timestamp patches?
> 

Sure. This is the latest posting:

https://lore.kernel.org/linux-fsdevel/20240715-mgtime-v6-0-48e5d34bd2ba@kernel.org/

The patches are in the vfs.mgtime branch of Christian's public tree as
well.

> It might be helpful as well to further explain the overhead you're
> seeing in detail?
> 

I changed current_time() to call a new coarse_ctime() function. That
function just calls ktime_* functions, but it makes 2 trips through
seqcount loops. Each of those implies a smp_mb() call.

This patch gets that down to a single seqcount loop.

> > Add ktime_get_coarse_real_ts64_with_floor(), which returns either the
> > coarse time or the floor as a realtime value. This avoids some of the
> > conversion overhead of coarse_ctime(), and recovers some of the
> > performance in these tests.
> > 
> > The will-it-scale pipe1_threads microbenchmark shows these averages on
> > my test rig:
> > 
> >         v6.11-rc7:                      83830660 (baseline)
> >         v6.11-rc7 + mgtime series:      77631748 (93% of baseline)
> >         v6.11-rc7 + mgtime + this:      81620228 (97% of baseline)
> > 
> > Reported-by: kernel test robot <oliver.sang@intel.com>
> > Closes: https://lore.kernel.org/oe-lkp/202409091303.31b2b713-oliver.sang@intel.com
> 
> Fixes: ?

Sure. But as I said, this is not in mainline yet:

    Fixes: a037d5e7f81b ("fs: add infrastructure for multigrain timestamps")

> 
> > Suggested-by: Arnd Bergmann <arnd@kernel.org>
> > Signed-off-by: Jeff Layton <jlayton@kernel.org>
> > ---
> > Arnd suggested moving this into the timekeeper when reviewing an earlier
> > version of this series, and that turns out to be better for performance.
> > 
> > I'm not sure how this should go in (if acceptable). The multigrain
> > timestamp patches that this would affect are in Christian's tree, so
> > that may be best if the timekeeper maintainers are OK with this
> > approach.
> > ---
> >  fs/inode.c                  | 35 +++++++++--------------------------
> >  include/linux/timekeeping.h |  2 ++
> >  kernel/time/timekeeping.c   | 29 +++++++++++++++++++++++++++++
> >  3 files changed, 40 insertions(+), 26 deletions(-)
> > 
> > diff --git a/fs/inode.c b/fs/inode.c
> > index 01f7df1973bd..47679a054472 100644
> > --- a/fs/inode.c
> > +++ b/fs/inode.c
> > @@ -2255,25 +2255,6 @@ int file_remove_privs(struct file *file)
> >  }
> >  EXPORT_SYMBOL(file_remove_privs);
> > 
> > -/**
> > - * coarse_ctime - return the current coarse-grained time
> > - * @floor: current (monotonic) ctime_floor value
> > - *
> > - * Get the coarse-grained time, and then determine whether to
> > - * return it or the current floor value. Returns the later of the
> > - * floor and coarse grained timestamps, converted to realtime
> > - * clock value.
> > - */
> > -static ktime_t coarse_ctime(ktime_t floor)
> > -{
> > -       ktime_t coarse = ktime_get_coarse();
> > -
> > -       /* If coarse time is already newer, return that */
> > -       if (!ktime_after(floor, coarse))
> > -               return ktime_get_coarse_real();
> > -       return ktime_mono_to_real(floor);
> > -}
> 
> I'm guessing this is part of the patch set being worked on, but this
> is a very unintuitive function.
> 
> You give it a CLOCK_MONOTONIC floor value, but it returns
> CLOCK_REALTIME based time?
> 
> It looks like it's asking to be misused.
> 

I get your point, but I think it's unavoidable here, unfortunately.

> ...
> > diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
> > index 5391e4167d60..56b979471c6a 100644
> > --- a/kernel/time/timekeeping.c
> > +++ b/kernel/time/timekeeping.c
> > @@ -2394,6 +2394,35 @@ void ktime_get_coarse_real_ts64(struct timespec64 *ts)
> >  }
> >  EXPORT_SYMBOL(ktime_get_coarse_real_ts64);
> > 
> > +/**
> > + * ktime_get_coarse_real_ts64_with_floor - get later of coarse grained time or floor
> > + * @ts: timespec64 to be filled
> > + * @floor: monotonic floor value
> > + *
> > + * Adjust @floor to realtime and compare that to the coarse time. Fill
> > + * @ts with the later of the two.
> > + */
> > +void ktime_get_coarse_real_ts64_with_floor(struct timespec64 *ts, ktime_t floor)
> 
> Maybe name 'floor' 'mono_floor' so it's very clear?
> 

Sure. Will do.

> > +{
> > +       struct timekeeper *tk = &tk_core.timekeeper;
> > +       unsigned int seq;
> > +       ktime_t f_real, offset, coarse;
> > +
> > +       WARN_ON(timekeeping_suspended);
> > +
> > +       do {
> > +               seq = read_seqcount_begin(&tk_core.seq);
> > +               *ts = tk_xtime(tk);
> > +               offset = *offsets[TK_OFFS_REAL];
> > +       } while (read_seqcount_retry(&tk_core.seq, seq));
> > +
> > +       coarse = timespec64_to_ktime(*ts);
> > +       f_real = ktime_add(floor, offset);
> > +       if (ktime_after(f_real, coarse))
> > +               *ts = ktime_to_timespec64(f_real);
> 
> 
> I am still very wary of the function taking a CLOCK_MONOTONIC
> comparator and returning a REALTIME value.
> But I think I understand why you might want it: You want a ratchet to
> filter inconsistencies from mixing fine and coarse (which very quickly
> return the time in the recent past) grained timestamps, but you want
> to avoid having a one-way ratchet getting stuck if settimeofday() gets
> called.
> So you implemented the ratchet against CLOCK_MONOTONIC, so
> settimeofday offsets are ignored.
> 
> Is that close?
> 

Bingo.

> My confusion comes from the fact it seems like that would mean you
> have to do all your timestamping with CLOCK_MONOTONIC (so you have a
> useful floor value that you're keeping), so I'm not sure I understand
> the utility of returning CLOCK_REALTIME values. I guess I don't quite
> see the logic where the floor value is updated here, so I'm guessing.
> 

The floor value is updated in inode_set_ctime_current() in the
multigrain series. The comments over that hopefully describe how it
works, but basically, once we determine that we need a fine-grained
timestamp, we fetch a new fine-grained value and try to swap it into
ctime_floor. After that, we convert it to a realtime value and try to
swap the nsec field into the inode's ctime.

The conversion is a bit expensive, but the multigrain series takes
great pains to only update the ctime_floor as a last resort. It's a
global value, so we _really_ don't want to write to it any more than
necessary.

> Further, while this change avoids the earlier method's two calls taking
> the timekeeping seqlock, going from timespec->ktime->timespec still
> seems a little less than optimal if this is a performance hotpath (the
> coarse clocks are based on CLOCK_REALTIME timespecs because that was the
> legacy hotpath being optimized for, so if we have to internalize this
> odd-seeming realtime-against-monotonic usage model, we probably should
> better optimise through the stack there).
> 

The floor is tracked as a ktime_t, as we need to be able to swap it
into place with a cmpxchg() operation. I did originally try to use
timespec64's for everything, but it was too hard to keep everything
consistent without resorting to locking.

That said, I'm open to suggestions to make this better. I did (briefly)
look at whether moving the floor tracking into the timekeeper wholesale
would be better, but it didn't seem to be.

Thanks for taking a look!
Jeff Layton Sept. 11, 2024, 8:43 p.m. UTC | #4
On Wed, 2024-09-11 at 20:19 +0000, Arnd Bergmann wrote:
> On Wed, Sep 11, 2024, at 19:55, John Stultz wrote:
> > > diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
> > > index 5391e4167d60..56b979471c6a 100644
> 
> > My confusion comes from the fact it seems like that would mean you
> > have to do all your timestamping with CLOCK_MONOTONIC (so you have a
> > useful floor value that you're keeping), so I'm not sure I understand
> > the utility of returning CLOCK_REALTIME values. I guess I don't quite
> > see the logic where the floor value is updated here, so I'm guessing.
> 
> I think we could take this further and store the floor value
> in the timekeeper itself rather than in a global variable
> next to the caller.
> 
> And instead of storing the absolute floor value, it would
> be enough to store the delta since the previous
> update_wall_time(), which in turn can get updated by a
> variant of ktime_get_real_ts64() and reset to zero during
> update_wall_time().
>
> That way the coarse function only gains a call to
> timespec64_add_ns() over the traditional version, and the
> fine-grained version needs to atomically update that value.
> If the delta value has to be a 64-bit integer, there also
> needs to be some serialization of the reader side, but I
> think that can be done with read_seqcount_begin() .
> 

I think we'd have to track this delta as an atomic value and cmpxchg
new values into place. The zeroing seems quite tricky to make race-
free.

Currently, we fetch the floor value early in the process and if it
changes before we can swap a new one into place, we just take whatever
the new value is (since it's just as good). Since these are monotonic
values, any new value is still newer than the original one, so it's
fine. I'm not sure that still works if we're dealing with a delta that
is sliding upward and downward.

Maybe it does though. I'll take a stab at this tomorrow and see how it
looks.

Thanks for the suggestion!
Arnd Bergmann Sept. 12, 2024, 10:01 a.m. UTC | #5
On Wed, Sep 11, 2024, at 20:43, Jeff Layton wrote:
>
> I think we'd have to track this delta as an atomic value and cmpxchg
> new values into place. The zeroing seems quite tricky to make race-
> free.
>
> Currently, we fetch the floor value early in the process and if it
> changes before we can swap a new one into place, we just take whatever
> the new value is (since it's just as good). Since these are monotonic
> values, any new value is still newer than the original one, so it's
> fine. I'm not sure that still works if we're dealing with a delta that
> is sliding upward and downward.
>
> Maybe it does though. I'll take a stab at this tomorrow and see how it
> looks.

Right, the only idea I had for this would be to atomically
update a 64-bit tuple of the 32-bit sequence count and the
32-bit delta value in the timekeeper. That way I think the
"coarse" reader would still get a correct value when running
concurrently with both a fine-grained reader updating the count
and the timer tick setting a new count.

There are still a couple of problems:

- this extends the timekeeper logic beyond what the seqlock
  semantics normally allow, and I can't prove that this actually
  works in all corner cases.

- if the delta doesn't fit in a 32-bit value, there has to 
  be another fallback mechanism.

- This still requires an atomic64_cmpxchg() in the
  fine-grained ktime_get_real_ts64() replacement, which
  I think is what inode_set_ctime_current() needs today
  as well to ensure that the next coarse value is the
  highest one that has been read so far.

There is another idea that would completely replace
your design with something /much/ simpler:

 - add a variant of ktime_get_real_ts64() that just
   sets a flag in the timekeeper to signify that a
   fine-grained time has been read since the last
   timer tick
 - add a variant of ktime_get_coarse_real_ts64()
   that returns either tk_xtime() if the flag is
   clear or calls ktime_get_real_ts64() if it's set
 - reset the flag in timekeeping_advance() and any other
   place that updates tk_xtime

That way you avoid the atomic64_try_cmpxchg() in
inode_set_ctime_current(), making that case faster,
and avoid all overhead in coarse_ctime() unless you
use both types during the same tick.

      Arnd
Jeff Layton Sept. 12, 2024, 11:34 a.m. UTC | #6
On Thu, 2024-09-12 at 10:01 +0000, Arnd Bergmann wrote:
> On Wed, Sep 11, 2024, at 20:43, Jeff Layton wrote:
> > 
> > I think we'd have to track this delta as an atomic value and cmpxchg
> > new values into place. The zeroing seems quite tricky to make race-
> > free.
> > 
> > Currently, we fetch the floor value early in the process and if it
> > changes before we can swap a new one into place, we just take whatever
> > the new value is (since it's just as good). Since these are monotonic
> > values, any new value is still newer than the original one, so it's
> > fine. I'm not sure that still works if we're dealing with a delta that
> > is sliding upward and downward.
> > 
> > Maybe it does though. I'll take a stab at this tomorrow and see how it
> > looks.
> 
> Right, the only idea I had for this would be to atomically
> update a 64-bit tuple of the 32-bit sequence count and the
> 32-bit delta value in the timekeeper. That way I think the
> "coarse" reader would still get a correct value when running
> concurrently with both a fine-grained reader updating the count
> and the timer tick setting a new count.
> 
> There are still a couple of problems:
> 
> - this extends the timekeeper logic beyond what the seqlock
>   semantics normally allow, and I can't prove that this actually
>   works in all corner cases.
>
> - if the delta doesn't fit in a 32-bit value, there has to 
>   be another fallback mechanism.
> 

That could be a problem. I was hoping the delta couldn't grow that
large between timer ticks, but I guess it can. I guess the fallback
could be to just grab new fine-grained timestamps on each call until
the timer ticks.

> - This still requires an atomic64_cmpxchg() in the
>   fine-grained ktime_get_real_ts64() replacement, which
>   I think is what inode_set_ctime_current() needs today
>   as well to ensure that the next coarse value is the
>   highest one that has been read so far.
> 

Yes. We really don't want to take the seqlock for write just to update
timestamps. I'd prefer to keep the floor-handling lock-free if
possible.

> There is another idea that would completely replace
> your design with something /much/ simpler:
> 
>  - add a variant of ktime_get_real_ts64() that just
>    sets a flag in the timekeeper to signify that a
>    fine-grained time has been read since the last
>    timer tick
>  - add a variant of ktime_get_coarse_real_ts64()
>    that returns either tk_xtime() if the flag is
>    clear or calls ktime_get_real_ts64() if it's set
>  - reset the flag in timekeeping_advance() and any other
>    place that updates tk_xtime
> 
> That way you avoid the atomic64_try_cmpxchg() in
> inode_set_ctime_current(), making that case faster,
> and avoid all overhead in coarse_ctime() unless you
> use both types during the same tick.
> 

With the current code we only get a fine grained timestamp iff:

1/ the timestamps have been queried (a'la I_CTIME_QUERIED)
2/ the current coarse-grained or floor time would not show a change in
the ctime

If we do what you're suggesting above, as soon as one task sets the
flag, anyone calling current_time() will end up getting a brand new
fine-grained timestamp, even when the current floor time would have
been fine.

That means a lot more calls into ktime_get_real_ts64(), at least until
the timer ticks, and would probably mean a lot of extra journal
transactions, since those timestamps would all be distinct from one
another and would need to go to disk more often.
Christian Brauner Sept. 12, 2024, 12:31 p.m. UTC | #7
On Wed, Sep 11, 2024 at 08:56:56AM GMT, Jeff Layton wrote:
> The kernel test robot reported a performance regression in some
> will-it-scale tests due to the multigrain timestamp patches. The data
> showed that coarse_ctime() was slowing down current_time(), which is
> called frequently in the I/O path.
> 
> Add ktime_get_coarse_real_ts64_with_floor(), which returns either the
> coarse time or the floor as a realtime value. This avoids some of the
> conversion overhead of coarse_ctime(), and recovers some of the
> performance in these tests.
> 
> The will-it-scale pipe1_threads microbenchmark shows these averages on
> my test rig:
> 
> 	v6.11-rc7:			83830660 (baseline)
> 	v6.11-rc7 + mgtime series:	77631748 (93% of baseline)
> 	v6.11-rc7 + mgtime + this:	81620228 (97% of baseline)
> 
> Reported-by: kernel test robot <oliver.sang@intel.com>
> Closes: https://lore.kernel.org/oe-lkp/202409091303.31b2b713-oliver.sang@intel.com
> Suggested-by: Arnd Bergmann <arnd@kernel.org>
> Signed-off-by: Jeff Layton <jlayton@kernel.org>
> ---
> Arnd suggested moving this into the timekeeper when reviewing an earlier
> version of this series, and that turns out to be better for performance.
> 
> I'm not sure how this should go in (if acceptable). The multigrain
> timestamp patches that this would affect are in Christian's tree, so
> that may be best if the timekeeper maintainers are OK with this
> approach.

We will need this as otherwise we can't really merge the multigrain
timestamp work with known performance regressions?
Jeff Layton Sept. 12, 2024, 12:39 p.m. UTC | #8
On Thu, 2024-09-12 at 14:31 +0200, Christian Brauner wrote:
> On Wed, Sep 11, 2024 at 08:56:56AM GMT, Jeff Layton wrote:
> > The kernel test robot reported a performance regression in some
> > will-it-scale tests due to the multigrain timestamp patches. The data
> > showed that coarse_ctime() was slowing down current_time(), which is
> > called frequently in the I/O path.
> > 
> > Add ktime_get_coarse_real_ts64_with_floor(), which returns either the
> > coarse time or the floor as a realtime value. This avoids some of the
> > conversion overhead of coarse_ctime(), and recovers some of the
> > performance in these tests.
> > 
> > The will-it-scale pipe1_threads microbenchmark shows these averages on
> > my test rig:
> > 
> > 	v6.11-rc7:			83830660 (baseline)
> > 	v6.11-rc7 + mgtime series:	77631748 (93% of baseline)
> > 	v6.11-rc7 + mgtime + this:	81620228 (97% of baseline)
> > 
> > Reported-by: kernel test robot <oliver.sang@intel.com>
> > Closes: https://lore.kernel.org/oe-lkp/202409091303.31b2b713-oliver.sang@intel.com
> > Suggested-by: Arnd Bergmann <arnd@kernel.org>
> > Signed-off-by: Jeff Layton <jlayton@kernel.org>
> > ---
> > Arnd suggested moving this into the timekeeper when reviewing an earlier
> > version of this series, and that turns out to be better for performance.
> > 
> > I'm not sure how this should go in (if acceptable). The multigrain
> > timestamp patches that this would affect are in Christian's tree, so
> > that may be best if the timekeeper maintainers are OK with this
> > approach.
> 
> We will need this as otherwise we can't really merge the multigrain
> timestamp work with known performance regressions?

Yes, I think we'll need something here. Arnd suggested an alternative
way to do this that might be even better. I'm not 100% sure that it'll
work though since the approach is a bit different.

I'd still like to see this go in for v6.12, so what I'd probably prefer
is to take this patch initially (with the variable name change that
John suggested), and then we can work on the alternative approach in
the meantime.

Would that be acceptable?
Christian Brauner Sept. 12, 2024, 12:43 p.m. UTC | #9
On Thu, Sep 12, 2024 at 08:39:32AM GMT, Jeff Layton wrote:
> On Thu, 2024-09-12 at 14:31 +0200, Christian Brauner wrote:
> > On Wed, Sep 11, 2024 at 08:56:56AM GMT, Jeff Layton wrote:
> > > The kernel test robot reported a performance regression in some
> > > will-it-scale tests due to the multigrain timestamp patches. The data
> > > showed that coarse_ctime() was slowing down current_time(), which is
> > > called frequently in the I/O path.
> > > 
> > > Add ktime_get_coarse_real_ts64_with_floor(), which returns either the
> > > coarse time or the floor as a realtime value. This avoids some of the
> > > conversion overhead of coarse_ctime(), and recovers some of the
> > > performance in these tests.
> > > 
> > > The will-it-scale pipe1_threads microbenchmark shows these averages on
> > > my test rig:
> > > 
> > > 	v6.11-rc7:			83830660 (baseline)
> > > 	v6.11-rc7 + mgtime series:	77631748 (93% of baseline)
> > > 	v6.11-rc7 + mgtime + this:	81620228 (97% of baseline)
> > > 
> > > Reported-by: kernel test robot <oliver.sang@intel.com>
> > > Closes: https://lore.kernel.org/oe-lkp/202409091303.31b2b713-oliver.sang@intel.com
> > > Suggested-by: Arnd Bergmann <arnd@kernel.org>
> > > Signed-off-by: Jeff Layton <jlayton@kernel.org>
> > > ---
> > > Arnd suggested moving this into the timekeeper when reviewing an earlier
> > > version of this series, and that turns out to be better for performance.
> > > 
> > > I'm not sure how this should go in (if acceptable). The multigrain
> > > timestamp patches that this would affect are in Christian's tree, so
> > > that may be best if the timekeeper maintainers are OK with this
> > > approach.
> > 
> > We will need this as otherwise we can't really merge the multigrain
> > timestamp work with known performance regressions?
> 
> Yes, I think we'll need something here. Arnd suggested an alternative
> way to do this that might be even better. I'm not 100% sure that it'll
> work though since the approach is a bit different.
> 
> I'd still like to see this go in for v6.12, so what I'd probably prefer
> is to take this patch initially (with the variable name change that
> John suggested), and then we can work on the alternative approach in
> the meantime
> 
> Would that be acceptable?

It would be OK with me, but we should get a nod from the timekeeper folks.
Arnd Bergmann Sept. 12, 2024, 1:17 p.m. UTC | #10
On Thu, Sep 12, 2024, at 11:34, Jeff Layton wrote:
> On Thu, 2024-09-12 at 10:01 +0000, Arnd Bergmann wrote:
>> On Wed, Sep 11, 2024, at 20:43, Jeff Layton wrote:
>>
>> That way you avoid the atomic64_try_cmpxchg() in
>> inode_set_ctime_current(), making that case faster,
>> and avoid all overhead in coarse_ctime() unless you
>> use both types during the same tick.
>> 
>
> With the current code we only get a fine grained timestamp iff:
>
> 1/ the timestamps have been queried (a'la I_CTIME_QUERIED)
> 2/ the current coarse-grained or floor time would not show a change in
> the ctime
>
> If we do what you're suggesting above, as soon as one task sets the
> flag, anyone calling current_time() will end up getting a brand new
> fine-grained timestamp, even when the current floor time would have
> been fine.

Right, I forgot about this part of your work, the 
I_CTIME_QUERIED logic definitely has to stay.

> That means a lot more calls into ktime_get_real_ts64(), at least until
> the timer ticks, and would probably mean a lot of extra journal
> transactions, since those timestamps would all be distinct from one
> another and would need to go to disk more often.

I guess some of that overhead would go away if we just treated
tk_xtime() as the floor value without an additional cache,
and did the comparison against inode->i_ctime inside of
a new ktime_get_real_ts64_newer_than(), but there is still the
case of a single inode getting updated a lot, and it would
break the ordering of updates between inodes.
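
A minimal userspace model of what such a helper might look like (the name and semantics of ktime_get_real_ts64_newer_than() are only sketched in this thread, not an existing kernel API; plain int64_t nanoseconds stand in for ktime_t/timespec64, and the "+ 1" fallback is just one possible policy):

```c
#include <stdint.h>

/* Stand-in for the fine-grained clock read (ktime_get_real_ts64()). */
static int64_t fine_clock_ns;

/*
 * Hypothetical helper: return a fine-grained time guaranteed to
 * compare after @than, so a ctime update stays visible even when
 * the clock has not yet advanced past the inode's current ctime.
 */
static int64_t real_ns_newer_than(int64_t than)
{
	int64_t now = fine_clock_ns;

	return now > than ? now : than + 1;
}
```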

       Arnd
Jeff Layton Sept. 12, 2024, 1:26 p.m. UTC | #11
On Thu, 2024-09-12 at 13:17 +0000, Arnd Bergmann wrote:
> On Thu, Sep 12, 2024, at 11:34, Jeff Layton wrote:
> > On Thu, 2024-09-12 at 10:01 +0000, Arnd Bergmann wrote:
> > > On Wed, Sep 11, 2024, at 20:43, Jeff Layton wrote:
> > > 
> > > That way you avoid the atomic64_try_cmpxchg()
> > > inode_set_ctime_current(), making that case faster,
> > > and avoid all overhead in coarse_ctime() unless you
> > > use both types during the same tick.
> > > 
> > 
> > With the current code we only get a fine grained timestamp iff:
> > 
> > 1/ the timestamps have been queried (a'la I_CTIME_QUERIED)
> > 2/ the current coarse-grained or floor time would not show a change in
> > the ctime
> > 
> > If we do what you're suggesting above, as soon as one task sets the
> > flag, anyone calling current_time() will end up getting a brand new
> > fine-grained timestamp, even when the current floor time would have
> > been fine.
> 
> Right, I forgot about this part of your work, the 
> I_CTIME_QUERIED logic definitely has to stay.
> 
> > That means a lot more calls into ktime_get_real_ts64(), at least until
> > the timer ticks, and would probably mean a lot of extra journal
> > transactions, since those timestamps would all be distinct from one
> > another and would need to go to disk more often.
> 
> I guess some of that overhead would go away if we just treated
> tk_xtime() as the floor value without an additional cache,
> and did the comparison against inode->i_ctime inside of
> a new ktime_get_real_ts64_newer_than(), but there is still the
> case of a single inode getting updated a lot, and it would
> break the ordering of updates between inodes.
> 

Yes, and the breaking of ordering is why we had to revert the last set,
so that's definitely no good.

I think your suggestion about using a tuple of the sequence and the
delta should work. The trick is that we need to do the fetch and the
cmpxchg of the floor tuple inside the read_seqcount loop. Zeroing it
out can be done with WRITE_ONCE(). If we get a spurious update to the
floor while zeroing then it's no big deal since everything would just
loop and do it again.

I'll plan to hack something together later today and see how it does.

Thanks for the help so far!
Jeff Layton Sept. 12, 2024, 2:37 p.m. UTC | #12
On Thu, 2024-09-12 at 09:26 -0400, Jeff Layton wrote:
> On Thu, 2024-09-12 at 13:17 +0000, Arnd Bergmann wrote:
> > On Thu, Sep 12, 2024, at 11:34, Jeff Layton wrote:
> > > On Thu, 2024-09-12 at 10:01 +0000, Arnd Bergmann wrote:
> > > > On Wed, Sep 11, 2024, at 20:43, Jeff Layton wrote:
> > > > 
> > > > That way you avoid the atomic64_try_cmpxchg()
> > > > inode_set_ctime_current(), making that case faster,
> > > > and avoid all overhead in coarse_ctime() unless you
> > > > use both types during the same tick.
> > > > 
> > > 
> > > With the current code we only get a fine grained timestamp iff:
> > > 
> > > 1/ the timestamps have been queried (a'la I_CTIME_QUERIED)
> > > 2/ the current coarse-grained or floor time would not show a change in
> > > the ctime
> > > 
> > > If we do what you're suggesting above, as soon as one task sets the
> > > flag, anyone calling current_time() will end up getting a brand new
> > > fine-grained timestamp, even when the current floor time would have
> > > been fine.
> > 
> > Right, I forgot about this part of your work, the 
> > I_CTIME_QUERIED logic definitely has to stay.
> > 
> > > That means a lot more calls into ktime_get_real_ts64(), at least until
> > > the timer ticks, and would probably mean a lot of extra journal
> > > transactions, since those timestamps would all be distinct from one
> > > another and would need to go to disk more often.
> > 
> > I guess some of that overhead would go away if we just treated
> > tk_xtime() as the floor value without an additional cache,
> > and did the comparison against inode->i_ctime inside of
> > a new ktime_get_real_ts64_newer_than(), but there is still the
> > case of a single inode getting updated a lot, and it would
> > break the ordering of updates between inodes.
> > 
> 
> Yes, and the breaking of ordering is why we had to revert the last set,
> so that's definitely no good.
> 
> I think your suggestion about using a tuple of the sequence and the
> delta should work. The trick is that we need to do the fetch and the
> cmpxchg of the floor tuple inside the read_seqcount loop. Zeroing it
> out can be done with WRITE_ONCE(). If we get a spurious update to the
> floor while zeroing then it's no big deal since everything would just
> loop and do it again.
> 
> I'll plan to hack something together later today and see how it does.
> 

Ok, already hit a couple of problems:

First, moving the floor word into struct timekeeper is probably not a
good idea. This is going to be updated more often than the rest of the
timekeeper, and so its cacheline will be invalidated more. I think we
need to keep the floor word on its own cacheline. It can be a static
u64 though inside timekeeper.c.

Second, the existing code basically does:

	get the floor time (tfc = max of coarse and ctime_floor) 
	if ctime was queried and applying tfc would show no change:
		get fine time
		cmpxchg into place and accept the result

The key there is that if the floor time changes at any point between
when we first fetch it and the attempted cmpxchg, no update happens
(which is good).
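
That race-window property can be sketched in userspace C (illustrative names; _Atomic int64_t and atomic_compare_exchange_strong() stand in for the kernel's atomic64_t and try_cmpxchg):

```c
#include <stdatomic.h>
#include <stdint.h>

/* Stand-in for the ctime_floor word (monotonic nanoseconds). */
static _Atomic int64_t ctime_floor;

/*
 * Try to raise the floor from @seen (the value sampled earlier) to
 * @fine. If the floor moved in between, the cmpxchg fails and we
 * accept whoever won -- the "no update happens" behavior described
 * above. On failure, C11 cmpxchg writes the current value into
 * @seen, so returning it returns the winner's floor.
 */
static int64_t floor_try_raise(int64_t seen, int64_t fine)
{
	if (atomic_compare_exchange_strong(&ctime_floor, &seen, fine))
		return fine;	/* no race: @fine is the new floor */
	return seen;		/* lost the race: use the newer floor */
}
```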

If we move all of the floor handling into the timekeeper, then I think
we still need to have some state that we pass back when trying to get
the floor time initially, so that we keep the race window large (which
seems weird, but is good in this case).

So, I think that we actually need an API like this:

    /* returns opaque cookie value */
    u64 ktime_get_coarse_real_ts64_mg(struct timespec64 *ts);

    /* accepts opaque cookie value from above function */ 
    void ktime_get_real_ts64_mg(struct timespec64 *ts, u64 cookie);

The first function fills in @ts with the max of coarse time and floor,
and returns an opaque cookie (a copy of the floor word). The second
fetches a fine-grained timestamp and uses the floor cookie as the "old"
value when doing the cmpxchg, and then fills in @ts with the result.
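
A rough userspace model of how the two proposed calls would compose (the behavior follows the description above; coarse_ns/fine_ns stand in for the clock reads and a plain atomic word for the floor -- all names here are illustrative, not kernel API):

```c
#include <stdatomic.h>
#include <stdint.h>

static _Atomic uint64_t floor_word;	/* models ctime_floor, monotonic ns */
static uint64_t coarse_ns, fine_ns;	/* stand-ins for the clock reads */

/* Fill @ts with max(coarse, floor); return the floor snapshot as cookie. */
static uint64_t get_coarse_with_floor(uint64_t *ts)
{
	uint64_t cookie = atomic_load(&floor_word);

	*ts = cookie > coarse_ns ? cookie : coarse_ns;
	return cookie;
}

/*
 * Fetch a fine-grained time and try to install it as the new floor,
 * using @cookie as the "old" value for the cmpxchg. On a lost race,
 * accept the floor value someone else installed instead.
 */
static void get_fine_with_cookie(uint64_t *ts, uint64_t cookie)
{
	uint64_t now = fine_ns;

	if (atomic_compare_exchange_strong(&floor_word, &cookie, now))
		*ts = now;
	else
		*ts = cookie;	/* cookie now holds the winner's floor */
}
```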

Does that sound reasonable? If so, then the next question is around
what the floor word should hold:

IMO, just keeping it as a monotonic time value seems simplest. I'm
struggling to understand where the "delta" portion would come from in
your earlier proposal, and the fact that that value could overflow
seems less than ideal.

Cheers,
Arnd Bergmann Sept. 12, 2024, 4:51 p.m. UTC | #13
On Thu, Sep 12, 2024, at 14:37, Jeff Layton wrote:
> On Thu, 2024-09-12 at 09:26 -0400, Jeff Layton wrote:
>> On Thu, 2024-09-12 at 13:17 +0000, Arnd Bergmann wrote:
>> > On Thu, Sep 12, 2024, at 11:34, Jeff Layton wrote:
>> 
>> I'll plan to hack something together later today and see how it does.
>> 
>
> Ok, already hit a couple of problems:
>
> First, moving the floor word into struct timekeeper is probably not a
> good idea. This is going to be updated more often than the rest of the
> timekeeper, and so its cacheline will be invalidated more. I think we
> need to keep the floor word on its own cacheline. It can be a static
> u64 though inside timekeeper.c.

Right.

> So, I think that we actually need an API like this:
>
>     /* returns opaque cookie value */
>     u64 ktime_get_coarse_real_ts64_mg(struct timespec64 *ts);
>
>     /* accepts opaque cookie value from above function */ 
>     void ktime_get_real_ts64_mg(struct timespec64 *ts, u64 cookie);
>
> The first function fills in @ts with the max of coarse time and floor,
> and returns an opaque cookie (a copy of the floor word). The second
> fetches a fine-grained timestamp and uses the floor cookie as the "old"
> value when doing the cmpxchg, and then fills in @ts with the result.

I think you lost me here, I'd need to look at the code in
more detail to understand it.

> Does that sound reasonable? If so, then the next question is around
> what the floor word should hold:
>
> IMO, just keeping it as a monotonic time value seems simplest. I'm
> struggling to understand where the "delta" portion would come from in
> your earlier proposal, and the fact that that value could overflow
> seems less than ideal.

I was thinking of the difference between tk->xtime_nsec and the
computed nsecs in ktime_get_real_ts64().

The calculation is what is in timekeeping_cycles_to_ns(),
with the "+ tkr->xtime_nsec" left out, roughly

   ((tk_clock_read(tkr) - tkr->cycle_last) & tkr->mask) * \
         tkr->mult >> tkr->shift

There are a few subtleties here, including the possible
1-bit rounding error from the shift. 
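
Standalone, that delta-only conversion can be sketched as follows (a userspace model with caller-supplied parameters; the real timekeeping_cycles_to_ns() also folds in xtime_nsec and handles the rounding Arnd mentions):

```c
#include <stdint.h>

/*
 * Delta-only variant of the cycles -> nanoseconds conversion:
 * elapsed cycles since @last, wrapped by the clocksource @mask,
 * scaled by the mult/shift pair. The "+ tkr->xtime_nsec" term
 * from timekeeping_cycles_to_ns() is deliberately left out.
 */
static uint64_t cycles_to_ns_delta(uint64_t now, uint64_t last,
				   uint64_t mask, uint32_t mult,
				   uint32_t shift)
{
	return ((now - last) & mask) * mult >> shift;
}
```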

     Arnd
Patch

diff --git a/fs/inode.c b/fs/inode.c
index 01f7df1973bd..47679a054472 100644
--- a/fs/inode.c
+++ b/fs/inode.c
@@ -2255,25 +2255,6 @@  int file_remove_privs(struct file *file)
 }
 EXPORT_SYMBOL(file_remove_privs);
 
-/**
- * coarse_ctime - return the current coarse-grained time
- * @floor: current (monotonic) ctime_floor value
- *
- * Get the coarse-grained time, and then determine whether to
- * return it or the current floor value. Returns the later of the
- * floor and coarse grained timestamps, converted to realtime
- * clock value.
- */
-static ktime_t coarse_ctime(ktime_t floor)
-{
-	ktime_t coarse = ktime_get_coarse();
-
-	/* If coarse time is already newer, return that */
-	if (!ktime_after(floor, coarse))
-		return ktime_get_coarse_real();
-	return ktime_mono_to_real(floor);
-}
-
 /**
  * current_time - Return FS time (possibly fine-grained)
  * @inode: inode.
@@ -2285,10 +2266,10 @@  static ktime_t coarse_ctime(ktime_t floor)
 struct timespec64 current_time(struct inode *inode)
 {
 	ktime_t floor = atomic64_read(&ctime_floor);
-	ktime_t now = coarse_ctime(floor);
-	struct timespec64 now_ts = ktime_to_timespec64(now);
+	struct timespec64 now_ts;
 	u32 cns;
 
+	ktime_get_coarse_real_ts64_with_floor(&now_ts, floor);
 	if (!is_mgtime(inode))
 		goto out;
 
@@ -2745,7 +2726,7 @@  EXPORT_SYMBOL(timestamp_truncate);
  *
  * Set the inode's ctime to the current value for the inode. Returns the
  * current value that was assigned. If this is not a multigrain inode, then we
- * just set it to whatever the coarse_ctime is.
+ * set it to the later of the coarse time and floor value.
  *
  * If it is multigrain, then we first see if the coarse-grained timestamp is
  * distinct from what we have. If so, then we'll just use that. If we have to
@@ -2756,15 +2737,15 @@  EXPORT_SYMBOL(timestamp_truncate);
  */
 struct timespec64 inode_set_ctime_current(struct inode *inode)
 {
-	ktime_t now, floor = atomic64_read(&ctime_floor);
+	ktime_t floor = atomic64_read(&ctime_floor);
 	struct timespec64 now_ts;
 	u32 cns, cur;
 
-	now = coarse_ctime(floor);
+	ktime_get_coarse_real_ts64_with_floor(&now_ts, floor);
 
 	/* Just return that if this is not a multigrain fs */
 	if (!is_mgtime(inode)) {
-		now_ts = timestamp_truncate(ktime_to_timespec64(now), inode);
+		now_ts = timestamp_truncate(now_ts, inode);
 		inode_set_ctime_to_ts(inode, now_ts);
 		goto out;
 	}
@@ -2777,6 +2758,7 @@  struct timespec64 inode_set_ctime_current(struct inode *inode)
 	cns = smp_load_acquire(&inode->i_ctime_nsec);
 	if (cns & I_CTIME_QUERIED) {
 		ktime_t ctime = ktime_set(inode->i_ctime_sec, cns & ~I_CTIME_QUERIED);
+		ktime_t now = timespec64_to_ktime(now_ts);
 
 		if (!ktime_after(now, ctime)) {
 			ktime_t old, fine;
@@ -2797,10 +2779,11 @@  struct timespec64 inode_set_ctime_current(struct inode *inode)
 			else
 				fine = old;
 			now = ktime_mono_to_real(fine);
+			now_ts = ktime_to_timespec64(now);
 		}
 	}
 	mgtime_counter_inc(mg_ctime_updates);
-	now_ts = timestamp_truncate(ktime_to_timespec64(now), inode);
+	now_ts = timestamp_truncate(now_ts, inode);
 	cur = cns;
 
 	/* No need to cmpxchg if it's exactly the same */
diff --git a/include/linux/timekeeping.h b/include/linux/timekeeping.h
index fc12a9ba2c88..9b3c957ab260 100644
--- a/include/linux/timekeeping.h
+++ b/include/linux/timekeeping.h
@@ -44,6 +44,7 @@  extern void ktime_get_ts64(struct timespec64 *ts);
 extern void ktime_get_real_ts64(struct timespec64 *tv);
 extern void ktime_get_coarse_ts64(struct timespec64 *ts);
 extern void ktime_get_coarse_real_ts64(struct timespec64 *ts);
+extern void ktime_get_coarse_real_ts64_with_floor(struct timespec64 *ts, ktime_t floor);
 
 void getboottime64(struct timespec64 *ts);
 
@@ -68,6 +69,7 @@  enum tk_offsets {
 extern ktime_t ktime_get(void);
 extern ktime_t ktime_get_with_offset(enum tk_offsets offs);
 extern ktime_t ktime_get_coarse_with_offset(enum tk_offsets offs);
+extern ktime_t ktime_get_coarse_with_floor_and_offset(enum tk_offsets offs, ktime_t floor);
 extern ktime_t ktime_mono_to_any(ktime_t tmono, enum tk_offsets offs);
 extern ktime_t ktime_get_raw(void);
 extern u32 ktime_get_resolution_ns(void);
diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
index 5391e4167d60..56b979471c6a 100644
--- a/kernel/time/timekeeping.c
+++ b/kernel/time/timekeeping.c
@@ -2394,6 +2394,35 @@  void ktime_get_coarse_real_ts64(struct timespec64 *ts)
 }
 EXPORT_SYMBOL(ktime_get_coarse_real_ts64);
 
+/**
+ * ktime_get_coarse_real_ts64_with_floor - get later of coarse grained time or floor
+ * @ts: timespec64 to be filled
+ * @floor: monotonic floor value
+ *
+ * Adjust @floor to realtime and compare that to the coarse time. Fill
+ * @ts with the later of the two.
+ */
+void ktime_get_coarse_real_ts64_with_floor(struct timespec64 *ts, ktime_t floor)
+{
+	struct timekeeper *tk = &tk_core.timekeeper;
+	unsigned int seq;
+	ktime_t f_real, offset, coarse;
+
+	WARN_ON(timekeeping_suspended);
+
+	do {
+		seq = read_seqcount_begin(&tk_core.seq);
+		*ts = tk_xtime(tk);
+		offset = *offsets[TK_OFFS_REAL];
+	} while (read_seqcount_retry(&tk_core.seq, seq));
+
+	coarse = timespec64_to_ktime(*ts);
+	f_real = ktime_add(floor, offset);
+	if (ktime_after(f_real, coarse))
+		*ts = ktime_to_timespec64(f_real);
+}
+EXPORT_SYMBOL_GPL(ktime_get_coarse_real_ts64_with_floor);
+
 void ktime_get_coarse_ts64(struct timespec64 *ts)
 {
 	struct timekeeper *tk = &tk_core.timekeeper;