
[bpf-next,v1,0/5] bpf: rstat: cgroup hierarchical stats

Message ID 20220520012133.1217211-1-yosryahmed@google.com (mailing list archive)

Message

Yosry Ahmed May 20, 2022, 1:21 a.m. UTC
This patch series allows for using bpf to collect hierarchical cgroup
stats efficiently by integrating with the rstat framework. The rstat
framework provides an efficient way to collect cgroup stats and
propagate them through the cgroup hierarchy.

* Background on rstat (I am using a subscriber analogy that is not
commonly used):

The rstat framework maintains, for each cpu, a tree of the cgroups that
have pending updates on that cpu. A subscriber to the rstat framework
maintains its own stats; the framework tells the subscriber when and
what to flush, so that stats are propagated as efficiently as possible.
The workflow is as follows:

- When a subscriber updates a cgroup on a cpu, it informs the rstat
  framework by calling cgroup_rstat_updated(cgrp, cpu).

- When a subscriber wants to read some stats for a cgroup, it asks
  the rstat framework to initiate a stats flush (propagation) by calling
  cgroup_rstat_flush(cgrp).

- When the rstat framework initiates a flush, it calls back into
  subscribers so they can aggregate stats on the cpus that have updates
  and propagate them to the parent cgroup (sketched below).
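
To make this contract concrete, here is a minimal sketch of an
in-kernel subscriber. This is illustrative only: per-cgroup storage,
locking, and how the flush callback gets registered with rstat are all
glossed over, and everything except the two cgroup_rstat_*() calls is
made up:

#include <linux/cgroup.h>
#include <linux/percpu.h>

/* Hypothetical per-cpu counter maintained by the subscriber. */
static DEFINE_PER_CPU(u64, my_stat_pcpu);
static u64 my_stat_total;

/* Update path: record the delta, then mark (cgroup, cpu) as updated. */
static void my_stat_add(struct cgroup *cgrp, u64 delta)
{
	this_cpu_add(my_stat_pcpu, delta);
	cgroup_rstat_updated(cgrp, smp_processor_id());
}

/* Flush callback: rstat invokes this only for cpus that have updates. */
static void my_stat_flush(struct cgroup *cgrp, int cpu)
{
	u64 *pcpu = per_cpu_ptr(&my_stat_pcpu, cpu);

	my_stat_total += *pcpu;
	*pcpu = 0;
}

/* Reader: ask rstat to flush; it calls back into my_stat_flush(). */
static u64 my_stat_read(struct cgroup *cgrp)
{
	cgroup_rstat_flush(cgrp);
	return my_stat_total;
}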

Currently, the main subscribers to the rstat framework are cgroup
subsystems (e.g. memory, block). This patch series allows bpf programs
to become subscribers as well.

Patches in this series are based on two patches from the mailing list:
- bpf/btf: also allow kfunc in tracing and syscall programs
- btf: Add a new kfunc set which allows to mark a function to be
  sleepable

Both by Benjamin Tissoires, from different versions of his HID patch
series (the second patch seems to have been dropped in the last
version).

Patches in this series are organized as follows:
* The first patch adds a hook point, bpf_rstat_flush(), that is called
during rstat flushing. bpf fentry programs can attach to it, effectively
registering themselves as rstat flush callbacks.

* The second patch adds cgroup_rstat_updated() and cgroup_rstat_flush()
kfuncs, to allow bpf stat collectors and readers to communicate with rstat.

* The third patch is actually v2 of a previously submitted patch [1]
by Hao Luo. We agreed that it fits better as a part of this series. It
introduces cgroup_iter programs that can dump stats for cgroups to
userspace.
v1 -> v2:
- Get the cgroup's reference at attach time instead of at iteration
  time. (Yonghong) (context [1])
- Remove the .init_seq_private and .fini_seq_private callbacks for
  cgroup_iter. They are not needed now. (Yonghong)

* The fourth patch extends the bpf selftests cgroup helpers as needed
by the following patch.

* The fifth patch is a selftest that demonstrates the entire workflow.
It includes programs that collect, aggregate, and dump per-cgroup stats
by fully integrating with the rstat framework (a condensed sketch of
these programs follows).

[1] https://lore.kernel.org/lkml/20220225234339.2386398-9-haoluo@google.com/
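
For orientation, here is a condensed sketch of what the pieces above
could look like together from the bpf side. This is illustrative only,
not code from the series: the maps, the attach point, and all of the
"events" naming are assumptions, and it presumes a vmlinux.h generated
from a kernel with this series and the two dependency patches applied:

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char _license[] SEC("license") = "GPL";

/* kfuncs added by the second patch */
void cgroup_rstat_updated(struct cgroup *cgrp, int cpu) __ksym;
void cgroup_rstat_flush(struct cgroup *cgrp) __ksym;

/* All three maps are keyed by cgroup id. */
struct {
	__uint(type, BPF_MAP_TYPE_PERCPU_HASH);
	__uint(max_entries, 1024);
	__type(key, __u64);
	__type(value, __u64);	/* per-cpu deltas since the last flush */
} percpu_events SEC(".maps");

struct {
	__uint(type, BPF_MAP_TYPE_HASH);
	__uint(max_entries, 1024);
	__type(key, __u64);
	__type(value, __u64);	/* flushed, not yet folded into the parent */
} pending_events SEC(".maps");

struct {
	__uint(type, BPF_MAP_TYPE_HASH);
	__uint(max_entries, 1024);
	__type(key, __u64);
	__type(value, __u64);	/* long-running hierarchical totals */
} total_events SEC(".maps");

static void add_event(void *map, __u64 cg_id, __u64 delta)
{
	__u64 *cnt = bpf_map_lookup_elem(map, &cg_id);

	if (cnt)
		*cnt += delta;
	else
		bpf_map_update_elem(map, &cg_id, &delta, BPF_NOEXIST);
}

/* Collector: the attach point is an arbitrary example of a per-cgroup
 * event; count it, then mark this (cgroup, cpu) pair as updated. */
SEC("tp_btf/mm_vmscan_memcg_reclaim_end")
int BPF_PROG(collect, unsigned long nr_reclaimed)
{
	struct cgroup *cgrp = bpf_get_current_task_btf()->cgroups->dfl_cgrp;

	add_event(&percpu_events, cgrp->kn->id, 1);
	cgroup_rstat_updated(cgrp, bpf_get_smp_processor_id());
	return 0;
}

/* Flusher: fentry on the hook point added by the first patch. rstat
 * invokes it only for (cgroup, cpu) pairs with pending updates, and
 * children are flushed before parents, so pushing deltas into the
 * parent's pending entry propagates stats up the hierarchy. */
SEC("fentry/bpf_rstat_flush")
int BPF_PROG(flush, struct cgroup *cgrp, struct cgroup *parent, int cpu)
{
	__u64 id = cgrp->kn->id, delta = 0, *pcpu, *pend;

	/* percpu lookup helper recently added by Feng (see below) */
	pcpu = bpf_map_lookup_percpu_elem(&percpu_events, &id, cpu);
	if (pcpu) {
		delta += *pcpu;
		*pcpu = 0;
	}
	pend = bpf_map_lookup_elem(&pending_events, &id);
	if (pend) {
		delta += *pend;
		*pend = 0;
	}
	if (!delta)
		return 0;
	add_event(&total_events, id, delta);
	if (parent)
		add_event(&pending_events, parent->kn->id, delta);
	return 0;
}

/* Reader: a cgroup_iter program (third patch) that asks rstat to
 * flush, then dumps the total as text, cgroupfs-style. */
SEC("iter/cgroup")
int dump(struct bpf_iter__cgroup *ctx)
{
	struct cgroup *cgrp = ctx->cgroup;
	__u64 id, *total;

	if (!cgrp)
		return 0;
	cgroup_rstat_flush(cgrp);
	id = cgrp->kn->id;
	total = bpf_map_lookup_elem(&total_events, &id);
	BPF_SEQ_PRINTF(ctx->meta->seq, "total_events %llu\n",
		       total ? *total : 0);
	return 0;
}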

RFC v2 -> v1:
- Instead of introducing a new program type for rstat flushing, add an
  empty hook point, bpf_rstat_flush(), and use fentry bpf programs to
  attach to it and flush bpf stats.
- Instead of using helpers, use kfuncs for rstat functions.
- These changes simplify the patchset greatly, with minimal changes to
  uapi.

RFC v1 -> RFC v2:
- Instead of rstat flush programs attaching to subsystems, they now
  attach to rstat itself (global flushers, not per-subsystem), based on
  discussions with Tejun. The first patch is entirely rewritten.
- Pass cgroup pointers to rstat flushers instead of cgroup ids. This is
  much more flexible and less likely to need a uapi update later.
- rstat helpers are now only defined if CONFIG_CGROUPS.
- Most of the code is now only defined if CONFIG_CGROUPS and
  CONFIG_BPF_SYSCALL.
- Move rstat helper protos from bpf_base_func_proto() to
  tracing_prog_func_proto().
- The rstat helpers' argument (a cgroup pointer) is now
  ARG_PTR_TO_BTF_ID, not ARG_ANYTHING.
- Rewrote the selftest to use the cgroup helpers.
- Dropped bpf_map_lookup_percpu_elem (already added by Feng).
- Dropped patch to support cgroup v1 for cgroup_iter.
- Dropped the patch defining a cgroup_put() stub when !CONFIG_CGROUPS.
  The code that calls it is no longer compiled when !CONFIG_CGROUPS.


Hao Luo (1):
  bpf: Introduce cgroup iter

Yosry Ahmed (4):
  cgroup: bpf: add a hook for bpf progs to attach to rstat flushing
  cgroup: bpf: add cgroup_rstat_updated() and cgroup_rstat_flush()
    kfuncs
  selftests/bpf: extend cgroup helpers
  bpf: add a selftest for cgroup hierarchical stats collection

 include/linux/bpf.h                           |   2 +
 include/uapi/linux/bpf.h                      |   6 +
 kernel/bpf/Makefile                           |   3 +
 kernel/bpf/cgroup_iter.c                      | 148 ++++++++
 kernel/cgroup/rstat.c                         |  40 +++
 tools/include/uapi/linux/bpf.h                |   6 +
 tools/testing/selftests/bpf/cgroup_helpers.c  | 159 +++++---
 tools/testing/selftests/bpf/cgroup_helpers.h  |  14 +-
 .../test_cgroup_hierarchical_stats.c          | 339 ++++++++++++++++++
 tools/testing/selftests/bpf/progs/bpf_iter.h  |   7 +
 .../selftests/bpf/progs/cgroup_vmscan.c       | 221 ++++++++++++
 11 files changed, 899 insertions(+), 46 deletions(-)
 create mode 100644 kernel/bpf/cgroup_iter.c
 create mode 100644 tools/testing/selftests/bpf/prog_tests/test_cgroup_hierarchical_stats.c
 create mode 100644 tools/testing/selftests/bpf/progs/cgroup_vmscan.c

Comments

Michal Koutný June 3, 2022, 4:22 p.m. UTC | #1
Hello Yosry et al.

This is an interesting piece of work; I'll add some questions and
comments.

On Fri, May 20, 2022 at 01:21:28AM +0000, Yosry Ahmed <yosryahmed@google.com> wrote:
> This patch series allows for using bpf to collect hierarchical cgroup
> stats efficiently by integrating with the rstat framework. The rstat
> framework provides an efficient way to collect cgroup stats and
> propagate them through the cgroup hierarchy.

About the efficiency. Do you have any numbers or examples?
IIUC the idea is to utilize the cgroup's rstat subgraph of the full
tree when flushing.
I was looking at your selftest example: the measuring hooks call
cgroup_rstat_updated() and also allocate an entry bpf_map[cg_id].
The flush callback then looks up the cg_id for cgroups in the rstat
subgraph.
(I'm not familiar with the bpf_map implementation or performance, but I
imagine you're potentially one step away from erasing bpf_map[cg_id] in
the flush callback.)
It seems to me that you're building a parallel structure (inside
bpf_map(s)) with similar purpose to the rstat subgraph.

So I wonder whether there remains any benefit of coupling this with
rstat?


Also, I'd expect the custom-processed data to be useful in structured
form (within bpf_maps), but then there's the cgroup iter thing that
takes the available data and "flattens" it into text files.
I see this was discussed in subthreads already so it's not necessary to
return to it. IIUC you somehow intend to provide the custom info via the
text files. If that's true, I'd include that in the next cover message
(the purpose of the iterator).


> * The second patch adds cgroup_rstat_updated() and cgroup_rstat_flush()
> kfuncs, to allow bpf stat collectors and readers to communicate with rstat.

kfunc means that it can be just called from any BPF program?
(I'm thinking of an unprivileged user who issues cgroup_rstat_updated()
deep down in the hierarchy repeatedly just to "spam" the rstat subgraph
(which slows down flushers above). Arguably, this can be done already
e.g. by causing certain MM events, so I'd like to just clarify if this
can be a new source of such arbitrary updates.)

> * The third patch is actually v2 of a previously submitted patch [1]
> by Hao Luo. We agreed that it fits better as a part of this series. It
> introduces cgroup_iter programs that can dump stats for cgroups to
> userspace.
> v1 -> v2:
> - Get the cgroup's reference at attach time instead of at iteration
>   time. (Yonghong) (context [1])

I noticed you take a reference to the cgroup; that's fine.
But the demo program also accesses data via RCU pointers
(memory_subsys_enabled():cgroup->subsys).
Again, my BPF ignorance here: does the iterator framework somehow take
care of RCU locks?


Thanks,
Michal
Yosry Ahmed June 3, 2022, 7:47 p.m. UTC | #2
On Fri, Jun 3, 2022 at 9:22 AM Michal Koutný <mkoutny@suse.com> wrote:
>
> Hello Yosry et al.
>
> This is an interesting piece of work; I'll add some questions and
> comments.
>
> On Fri, May 20, 2022 at 01:21:28AM +0000, Yosry Ahmed <yosryahmed@google.com> wrote:
> > This patch series allows for using bpf to collect hierarchical cgroup
> > stats efficiently by integrating with the rstat framework. The rstat
> > framework provides an efficient way to collect cgroup stats and
> > propagate them through the cgroup hierarchy.
>
> About the efficiency. Do you have any numbers or examples?
> IIUC the idea is to utilize the cgroup's rstat subgraph of the full
> tree when flushing.
> I was looking at your selftest example: the measuring hooks call
> cgroup_rstat_updated() and also allocate an entry bpf_map[cg_id].
> The flush callback then looks up the cg_id for cgroups in the rstat
> subgraph.
> (I'm not familiar with the bpf_map implementation or performance, but I
> imagine you're potentially one step away from erasing bpf_map[cg_id] in
> the flush callback.)
> It seems to me that you're building a parallel structure (inside
> bpf_map(s)) with similar purpose to the rstat subgraph.
>
> So I wonder whether there remains any benefit of coupling this with
> rstat?

Hi Michal,

Thanks for taking a look at this!

The bpf_map[cg_id] is not a similar structure to the rstat flush
subgraph; it is where the stats are stored. These are long-running
numbers for (virtually) all cgroups on the system; they do not get
allocated every time we call cgroup_rstat_updated(), only the first
time. They are actually not erased at all in the whole selftest
(except when the map is deleted at the end). In a production
environment, we might have "setup" and "destroy" bpf programs that run
when cgroups are created/destroyed, and allocate/delete these map
entries then, to avoid the overhead in the first stat update/flush if
necessary.

The only reason I didn't do this in the demo selftest is that it
was complex/long enough as-is, and for the purposes of showcasing and
testing it seemed enough to allocate entries on demand on the first
stat update. I can add a comment about this in the selftest if you
think it's not obvious.

In short, think of these bpf maps as equivalents to "struct
memcg_vmstats" and "struct memcg_vmstats_percpu" in the memory
controller. They are just containers to store the stats in, they do
not have any subgraph structure and they have no use beyond storing
percpu and total stats.

I ran small microbenchmarks that are not worth posting. They compared
the latency of bpf stats collection vs. in-kernel code that adds stats
to struct memcg_vmstats[_percpu] and flushes them accordingly; the
difference was marginal. If the map lookups are deemed expensive and a
bottleneck in the future, I have some ideas about improving that. We
can rewrite the cgroup storage map to use the generic bpf local
storage code, and have it be accessible from all programs by a cgroup
key (like task_storage, for example) rather than only from programs
attached to that cgroup. However, this discussion is a tangent here.

>
>
> Also, I'd expect the custom-processed data to be useful in structured
> form (within bpf_maps), but then there's the cgroup iter thing that
> takes the available data and "flattens" it into text files.
> I see this was discussed in subthreads already so it's not necessary to
> return to it. IIUC you somehow intend to provide the custom info via the
> text files. If that's true, I'd include that in the next cover message
> (the purpose of the iterator).

The main reason for this is to provide data in a similar fashion to
cgroupfs, in a text file per cgroup. I will include this clearly in the
next cover message. You can always not use the cgroup_iter and access
the data directly from bpf maps.
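
As an example, reading such a per-cgroup text file from userspace could
be as simple as the sketch below (the pin path is hypothetical and
assumes a cgroup_iter link was pinned there beforehand):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	char buf[4096];
	ssize_t n;
	/* hypothetical pin path for a cgroup_iter link */
	int fd = open("/sys/fs/bpf/cgroup_stats", O_RDONLY);

	if (fd < 0)
		return 1;
	/* each read returns text produced by the iter program's seq_file */
	while ((n = read(fd, buf, sizeof(buf))) > 0)
		fwrite(buf, 1, n, stdout);
	close(fd);
	return 0;
}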

>
>
> > * The second patch adds cgroup_rstat_updated() and cgroup_rstat_flush()
> > kfuncs, to allow bpf stat collectors and readers to communicate with rstat.
>
> kfunc means that it can be just called from any BPF program?
> (I'm thinking of an unprivileged user who issues cgroup_rstat_updated()
> deep down in the hierarchy repeatedly just to "spam" the rstat subgraph
> (which slows down flushers above). Arguably, this can be done already
> e.g. by causing certain MM events, so I'd like to just clarify if this
> can be a new source of such arbitrary updates.)

AFAIK loading bpf programs requires a privileged user, so someone has
to approve such a program. Am I missing something?

>
> > * The third patch is actually v2 of a previously submitted patch [1]
> > by Hao Luo. We agreed that it fits better as a part of this series. It
> > introduces cgroup_iter programs that can dump stats for cgroups to
> > userspace.
> > v1 -> v2:
> > - Get the cgroup's reference at attach time instead of at iteration
> >   time. (Yonghong) (context [1])
>
> I noticed you take a reference to the cgroup; that's fine.
> But the demo program also accesses data via RCU pointers
> (memory_subsys_enabled():cgroup->subsys).
> Again, my BPF ignorance here: does the iterator framework somehow take
> care of RCU locks?

bpf_iter_run_prog() is used to run bpf iterator programs, and it grabs
rcu read lock before doing so. So AFAICT we are good on that front.
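
For reference, the runner looks roughly like this (paraphrased from
kernel/bpf/bpf_iter.c at the time of writing; comments added):

int bpf_iter_run_prog(struct bpf_prog *prog, void *ctx)
{
	int ret;

	/* iterator programs run under the rcu read lock */
	rcu_read_lock();
	migrate_disable();
	ret = bpf_prog_run(prog, ctx);
	migrate_enable();
	rcu_read_unlock();

	/* a nonzero return from the program asks the core to retry */
	return ret == 0 ? 0 : -EAGAIN;
}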

Thanks a lot for this great discussion!

>
>
> Thanks,
> Michal
Michal Koutný June 6, 2022, 12:32 p.m. UTC | #3
On Fri, Jun 03, 2022 at 12:47:19PM -0700, Yosry Ahmed <yosryahmed@google.com> wrote:
> In short, think of these bpf maps as equivalents to "struct
> memcg_vmstats" and "struct memcg_vmstats_percpu" in the memory
> controller. They are just containers to store the stats in, they do
> not have any subgraph structure and they have no use beyond storing
> percpu and total stats.

Thanks for the explanation.

> I ran small microbenchmarks that are not worth posting. They compared
> the latency of bpf stats collection vs. in-kernel code that adds stats
> to struct memcg_vmstats[_percpu] and flushes them accordingly; the
> difference was marginal.

OK, that's a reasonable comparison.

> The main reason for this is to provide data in a similar fashion to
> cgroupfs, in a text file per cgroup. I will include this clearly in the
> next cover message.

Thanks, it'd be great to have that use-case captured there.

> AFAIK loading bpf programs requires a privileged user, so someone has
> to approve such a program. Am I missing something?

A sysctl unprivileged_bpf_disabled somehow stuck in my head. But as I
wrote, this adds a way to call cgroup_rstat_updated() directly; it's
not reserved for privileged users anyhow.

> bpf_iter_run_prog() is used to run bpf iterator programs, and it grabs
> rcu read lock before doing so. So AFAICT we are good on that front.

Thanks for the clarification.


Michal
Yosry Ahmed June 6, 2022, 7:32 p.m. UTC | #4
On Mon, Jun 6, 2022 at 5:32 AM Michal Koutný <mkoutny@suse.com> wrote:
>
> On Fri, Jun 03, 2022 at 12:47:19PM -0700, Yosry Ahmed <yosryahmed@google.com> wrote:
> > In short, think of these bpf maps as equivalents to "struct
> > memcg_vmstats" and "struct memcg_vmstats_percpu" in the memory
> > controller. They are just containers to store the stats in, they do
> > not have any subgraph structure and they have no use beyond storing
> > percpu and total stats.
>
> Thanks for the explanation.
>
> > I ran small microbenchmarks that are not worth posting. They compared
> > the latency of bpf stats collection vs. in-kernel code that adds stats
> > to struct memcg_vmstats[_percpu] and flushes them accordingly; the
> > difference was marginal.
>
> OK, that's a reasonable comparison.
>
> > The main reason for this is to provide data in a similar fashion to
> > cgroupfs, in a text file per cgroup. I will include this clearly in the
> > next cover message.
>
> Thanks, it'd be great to have that use-case captured there.
>
> > AFAIK loading bpf programs requires a privileged user, so someone has
> > to approve such a program. Am I missing something?
>
> A sysctl unprivileged_bpf_disabled somehow stuck in my head. But as I
> wrote, this adds a way to call cgroup_rstat_updated() directly; it's
> not reserved for privileged users anyhow.

I am not sure if kfuncs have different privilege requirements or if
there is a way to mark a kfunc as privileged. Maybe someone with more
bpf knowledge can help here. But I assume if unprivileged_bpf_disabled
is not set then there is a certain amount of risk/trust that you are
taking anyway?

>
> > bpf_iter_run_prog() is used to run bpf iterator programs, and it grabs
> > rcu read lock before doing so. So AFAICT we are good on that front.
>
> Thanks for the clarification.
>
>
> Michal
Kumar Kartikeya Dwivedi June 6, 2022, 7:54 p.m. UTC | #5
On Tue, Jun 07, 2022 at 01:02:04AM IST, Yosry Ahmed wrote:
> On Mon, Jun 6, 2022 at 5:32 AM Michal Koutný <mkoutny@suse.com> wrote:
> >
> > On Fri, Jun 03, 2022 at 12:47:19PM -0700, Yosry Ahmed <yosryahmed@google.com> wrote:
> > > In short, think of these bpf maps as equivalents to "struct
> > > memcg_vmstats" and "struct memcg_vmstats_percpu" in the memory
> > > controller. They are just containers to store the stats in, they do
> > > not have any subgraph structure and they have no use beyond storing
> > > percpu and total stats.
> >
> > Thanks for the explanation.
> >
> > > I ran small microbenchmarks that are not worth posting. They compared
> > > the latency of bpf stats collection vs. in-kernel code that adds stats
> > > to struct memcg_vmstats[_percpu] and flushes them accordingly; the
> > > difference was marginal.
> >
> > OK, that's a reasonable comparison.
> >
> > > The main reason for this is to provide data in a similar fashion to
> > > cgroupfs, in a text file per cgroup. I will include this clearly in the
> > > next cover message.
> >
> > Thanks, it'd be great to have that use-case captured there.
> >
> > > AFAIK loading bpf programs requires a privileged user, so someone has
> > > to approve such a program. Am I missing something?
> >
> > A sysctl unprivileged_bpf_disabled somehow stuck in my head. But as I
> > wrote, this adds a way to call cgroup_rstat_updated() directly; it's
> > not reserved for privileged users anyhow.
>
> I am not sure if kfuncs have different privilege requirements or if
> there is a way to mark a kfunc as privileged. Maybe someone with more
> bpf knowledge can help here. But I assume if unprivileged_bpf_disabled
> is not set then there is a certain amount of risk/trust that you are
> taking anyway?
>

It requires CAP_BPF or CAP_SYS_ADMIN, see verifier.c:add_subprog_or_kfunc.
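
For context, bpf_capable() covers both capabilities (definition from
include/linux/capability.h); add_subprog_or_kfunc() rejects kfunc calls
when it returns false:

static inline bool bpf_capable(void)
{
	return capable(CAP_BPF) || capable(CAP_SYS_ADMIN);
}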

> >
> > > bpf_iter_run_prog() is used to run bpf iterator programs, and it grabs
> > > rcu read lock before doing so. So AFAICT we are good on that front.
> >
> > Thanks for the clarification.
> >
> >
> > Michal

--
Kartikeya
Yosry Ahmed June 6, 2022, 8 p.m. UTC | #6
On Mon, Jun 6, 2022 at 12:55 PM Kumar Kartikeya Dwivedi
<memxor@gmail.com> wrote:
>
> On Tue, Jun 07, 2022 at 01:02:04AM IST, Yosry Ahmed wrote:
> > On Mon, Jun 6, 2022 at 5:32 AM Michal Koutný <mkoutny@suse.com> wrote:
> > >
> > > On Fri, Jun 03, 2022 at 12:47:19PM -0700, Yosry Ahmed <yosryahmed@google.com> wrote:
> > > > In short, think of these bpf maps as equivalents to "struct
> > > > memcg_vmstats" and "struct memcg_vmstats_percpu" in the memory
> > > > controller. They are just containers to store the stats in, they do
> > > > not have any subgraph structure and they have no use beyond storing
> > > > percpu and total stats.
> > >
> > > Thanks for the explanation.
> > >
> > > > I ran small microbenchmarks that are not worth posting. They compared
> > > > the latency of bpf stats collection vs. in-kernel code that adds stats
> > > > to struct memcg_vmstats[_percpu] and flushes them accordingly; the
> > > > difference was marginal.
> > >
> > > OK, that's a reasonable comparison.
> > >
> > > > The main reason for this is to provide data in a similar fashion to
> > > > cgroupfs, in a text file per cgroup. I will include this clearly in the
> > > > next cover message.
> > >
> > > Thanks, it'd be great to have that use-case captured there.
> > >
> > > > AFAIK loading bpf programs requires a privileged user, so someone has
> > > > to approve such a program. Am I missing something?
> > >
> > > A sysctl unprivileged_bpf_disabled somehow stuck in my head. But as I
> > > wrote, this adds a way to call cgroup_rstat_updated() directly; it's
> > > not reserved for privileged users anyhow.
> >
> > I am not sure if kfuncs have different privilege requirements or if
> > there is a way to mark a kfunc as privileged. Maybe someone with more
> > bpf knowledge can help here. But I assume if unprivileged_bpf_disabled
> > is not set then there is a certain amount of risk/trust that you are
> > taking anyway?
> >
>
> It requires CAP_BPF or CAP_SYS_ADMIN, see verifier.c:add_subprog_or_kfunc.

Thanks for the clarification!

>
> > >
> > > > bpf_iter_run_prog() is used to run bpf iterator programs, and it grabs
> > > > rcu read lock before doing so. So AFAICT we are good on that front.
> > >
> > > Thanks for the clarification.
> > >
> > >
> > > Michal
>
> --
> Kartikeya