
[v1,0/3] cgroup/rstat: global cgroup_rstat_lock changes

Message ID: 171328983017.3930751.9484082608778623495.stgit@firesoul

Message

Jesper Dangaard Brouer April 16, 2024, 5:51 p.m. UTC
This patchset is focused on the global cgroup_rstat_lock.

 Patch-1: Adds tracepoints to make lock behavior measurable (first sketch below).
 Patch-2: Converts the global lock into a mutex.
 Patch-3: Limits userspace-triggered pressure on the lock (second sketch below).
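
For illustration, here is a minimal sketch of the Patch-1 approach
(helper and tracepoint names are illustrative, not necessarily the
exact ones in the patch): a trylock fast path detects contention,
which is reported via a tracepoint before falling back to the
blocking acquire, and a second tracepoint marks the acquisition so
lock wait and hold times can be measured.

static DEFINE_SPINLOCK(cgroup_rstat_lock);

/* Tracepoints would be declared in include/trace/events/cgroup.h. */
static inline void __cgroup_rstat_lock(struct cgroup *cgrp, int cpu_in_loop)
	__acquires(&cgroup_rstat_lock)
{
	bool contended;

	/* A failed trylock means another CPU currently holds the lock. */
	contended = !spin_trylock_irq(&cgroup_rstat_lock);
	if (contended) {
		trace_cgroup_rstat_lock_contended(cgrp, cpu_in_loop, contended);
		spin_lock_irq(&cgroup_rstat_lock);
	}
	/* Always emitted, so hold time can be paired with the unlock event. */
	trace_cgroup_rstat_locked(cgrp, cpu_in_loop, contended);
}

static inline void __cgroup_rstat_unlock(struct cgroup *cgrp, int cpu_in_loop)
	__releases(&cgroup_rstat_lock)
{
	trace_cgroup_rstat_unlock(cgrp, cpu_in_loop, false);
	spin_unlock_irq(&cgroup_rstat_lock);
}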

Background in discussion thread [1].
 [1] https://lore.kernel.org/all/ac4cf07f-52dd-454f-b897-2a4b3796a4d9@kernel.org/
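
Similarly, a minimal sketch of the Patch-3 ratelimiting idea (the
function name and the 50 ms period are illustrative): callers that do
not need perfectly fresh statistics can skip the flush entirely when
another flush completed recently, keeping them off the global lock.

#define RSTAT_FLUSH_PERIOD	msecs_to_jiffies(50)

static unsigned long cgroup_rstat_last_flush;	/* in jiffies */

void cgroup_rstat_flush_ratelimited(struct cgroup *cgrp)
{
	/*
	 * Intentionally racy: reading a stale timestamp at worst causes
	 * one extra flush, which is harmless.
	 */
	if (time_before(jiffies, READ_ONCE(cgroup_rstat_last_flush) +
				 RSTAT_FLUSH_PERIOD))
		return;

	cgroup_rstat_flush(cgrp);
	WRITE_ONCE(cgroup_rstat_last_flush, jiffies);
}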

---

Jesper Dangaard Brouer (3):
      cgroup/rstat: add cgroup_rstat_lock helpers and tracepoints
      cgroup/rstat: convert cgroup_rstat_lock back to mutex
      cgroup/rstat: introduce ratelimited rstat flushing


 block/blk-cgroup.c            |   2 +-
 include/linux/cgroup-defs.h   |   1 +
 include/linux/cgroup.h        |   5 +-
 include/trace/events/cgroup.h |  48 +++++++++++++++
 kernel/cgroup/rstat.c         | 111 ++++++++++++++++++++++++++++++----
 mm/memcontrol.c               |   1 +
 6 files changed, 153 insertions(+), 15 deletions(-)

--

Comments

Tejun Heo April 16, 2024, 9:38 p.m. UTC | #1
On Tue, Apr 16, 2024 at 07:51:19PM +0200, Jesper Dangaard Brouer wrote:
> This patchset is focused on the global cgroup_rstat_lock.
> 
>  Patch-1: Adds tracepoints to improve measuring lock behavior.
>  Patch-2: Converts the global lock into a mutex.
>  Patch-3: Limits userspace triggered pressure on the lock.

I'll wait for people's input on patches 2 and 3. I seem to recall that
switching the lock to a mutex made some tail latencies really bad for
some workloads at Google? Yosry, was that you?

Thanks.
Yosry Ahmed April 18, 2024, 2:13 a.m. UTC | #2
On Tue, Apr 16, 2024 at 2:38 PM Tejun Heo <tj@kernel.org> wrote:
>
> On Tue, Apr 16, 2024 at 07:51:19PM +0200, Jesper Dangaard Brouer wrote:
> > This patchset is focused on the global cgroup_rstat_lock.
> >
> >  Patch-1: Adds tracepoints to improve measuring lock behavior.
> >  Patch-2: Converts the global lock into a mutex.
> >  Patch-3: Limits userspace triggered pressure on the lock.
>
> I'll wait for people's input on patches 2 and 3. I seem to recall that
> switching the lock to a mutex made some tail latencies really bad for
> some workloads at Google? Yosry, was that you?

I spent some time going through the history of my previous patchsets
to find context.

There were two separate instances where concerns were raised about
using a mutex.

(a) Converting the global rstat spinlock to a mutex:

Shakeel had concerns about priority inversion with a global sleepable
lock [1]: a low-priority task that is scheduled out while holding the
mutex mid-flush can block higher-priority waiters indefinitely, which
cannot happen with a spinlock whose holder runs with preemption
disabled. Because of those concerns, I never actually tested replacing
the spinlock with a mutex, as priority inversions would be difficult
to reproduce with synthetic tests.

Generally speaking, aside from priority inversions, I relied on Wei's
synthetic test to measure the performance of userspace reads, and on a
script I wrote that runs parallel reclaimers to measure the
performance of in-kernel flushers.
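
For illustration, a userspace-read stress test of the general shape
described above might look like the following (the cgroup path,
process count, and iteration count are placeholders; this is not Wei's
actual test):

#include <fcntl.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

#define NPROC 16	/* parallel readers */
#define ITERS 10000	/* reads per process */

int main(void)
{
	for (int i = 0; i < NPROC; i++) {
		if (fork() == 0) {
			char buf[8192];

			/* Each read can trigger an rstat flush in the kernel. */
			for (int j = 0; j < ITERS; j++) {
				int fd = open("/sys/fs/cgroup/memory.stat",
					      O_RDONLY);

				if (fd < 0)
					exit(1);
				while (read(fd, buf, sizeof(buf)) > 0)
					;
				close(fd);
			}
			exit(0);
		}
	}
	while (wait(NULL) > 0)	/* reap all children */
		;
	return 0;
}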

(b) Adding a mutex on top of the global rstat spinlock for userspace
reads (to limit contention from userspace on the in-kernel lock):

Wei reported that this significantly affected userspace read latency
[2]. I then added per-memcg thresholds for flushing, which made the
regressions from that mutex go away. However, at that point the mutex
no longer provided much value, so I removed it [3].
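
For context, the per-memcg threshold idea can be sketched roughly as
follows (field, helper, and constant names are illustrative rather
than the exact mainline ones): the expensive global flush is skipped
while the number of pending stat updates is small enough that the
resulting error stays bounded.

static bool memcg_needs_flush(struct mem_cgroup *memcg)
{
	/* Tolerate up to one batch of pending updates per online CPU. */
	return atomic64_read(&memcg->stats_updates) >
	       MEMCG_CHARGE_BATCH * num_online_cpus();
}

void memcg_flush_stats(struct mem_cgroup *memcg)
{
	/* The error is bounded; skip taking the global rstat lock. */
	if (!memcg_needs_flush(memcg))
		return;

	cgroup_rstat_flush(memcg->css.cgroup);
	atomic64_set(&memcg->stats_updates, 0);
}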

[1] https://lore.kernel.org/lkml/CALvZod441xBoXzhqLWTZ+xnqDOFkHmvrzspr9NAr+nybqXgS-A@mail.gmail.com/
[2] https://lore.kernel.org/lkml/CAAPL-u9D2b=iF5Lf_cRnKxUfkiEe0AMDTu6yhrUAzX0b6a6rDg@mail.gmail.com/
[3] https://lore.kernel.org/lkml/CAJD7tkZgP3m-VVPn+fF_YuvXeQYK=tZZjJHj=dzD=CcSSpp2qg@mail.gmail.com/