
[v2] cgroup: memcg: net: do not associate sock with unrelated cgroup

Message ID 20200214222415.181467-1-shakeelb@google.com (mailing list archive)
State New, archived
Series [v2] cgroup: memcg: net: do not associate sock with unrelated cgroup

Commit Message

Shakeel Butt Feb. 14, 2020, 10:24 p.m. UTC
We are testing network memory accounting in our setup and noticed
inconsistent network memory usage; often the network usage of
unrelated cgroups correlates with the testing workload. On further
inspection, it seems like mem_cgroup_sk_alloc() and cgroup_sk_alloc()
are broken in irq context, especially for cgroup v1.

mem_cgroup_sk_alloc() and cgroup_sk_alloc() can be called in irq
context and implicitly assume that this can only happen from
sk_clone_lock() and that the source sock object already has an
associated cgroup. However, in cgroup v1, where network memory
accounting is opt-in, the source sock may not be associated with any
cgroup, and the newly cloned sock can then get associated with the
unrelated interrupted task's cgroup.

Cgroup v2 can also suffer if the source sock object was created by a
process in the root cgroup, or if sk_alloc() is called in irq context.
The fix is to simply do nothing in interrupt context.

Fixes: 2d7580738345 ("mm: memcontrol: consolidate cgroup socket tracking")
Fixes: d979a39d7242 ("cgroup: duplicate cgroup reference when cloning sockets")
Signed-off-by: Shakeel Butt <shakeelb@google.com>
---

Changes since v1:
- Fix cgroup_sk_alloc() too.

 kernel/cgroup/cgroup.c | 4 ++++
 mm/memcontrol.c        | 4 ++++
 2 files changed, 8 insertions(+)
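
For context, a paraphrased sketch of the path the new check short-circuits.
It is simplified from mem_cgroup_sk_alloc() (the sk_clone_lock() fast path
for an already-associated source sock and the cgroup v1 opt-in test are
elided), so refer to mm/memcontrol.c for the exact code:

	void mem_cgroup_sk_alloc(struct sock *sk)	/* paraphrased, not verbatim */
	{
		struct mem_cgroup *memcg;

		if (!mem_cgroup_sockets_enabled)
			return;

		/* New with this patch: in irq context "current" is whichever
		 * task the interrupt preempted, so bail out instead of
		 * charging that unrelated memcg. */
		if (in_interrupt())
			return;

		rcu_read_lock();
		memcg = mem_cgroup_from_task(current);
		if (memcg != root_mem_cgroup && css_tryget(&memcg->css))
			sk->sk_memcg = memcg;	/* sock now accounted to this memcg */
		rcu_read_unlock();
	}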

Comments

Roman Gushchin Feb. 14, 2020, 10:33 p.m. UTC | #1
On Fri, Feb 14, 2020 at 02:24:15PM -0800, Shakeel Butt wrote:
> We are testing network memory accounting in our setup and noticed
> inconsistent network memory usage; often the network usage of
> unrelated cgroups correlates with the testing workload. On further
> inspection, it seems like mem_cgroup_sk_alloc() and cgroup_sk_alloc()
> are broken in irq context, especially for cgroup v1.
>
> mem_cgroup_sk_alloc() and cgroup_sk_alloc() can be called in irq
> context and implicitly assume that this can only happen from
> sk_clone_lock() and that the source sock object already has an
> associated cgroup. However, in cgroup v1, where network memory
> accounting is opt-in, the source sock may not be associated with any
> cgroup, and the newly cloned sock can then get associated with the
> unrelated interrupted task's cgroup.
>
> Cgroup v2 can also suffer if the source sock object was created by a
> process in the root cgroup, or if sk_alloc() is called in irq context.
> The fix is to simply do nothing in interrupt context.
> 
> Fixes: 2d7580738345 ("mm: memcontrol: consolidate cgroup socket tracking")
> Fixes: d979a39d7242 ("cgroup: duplicate cgroup reference when cloning sockets")
> Signed-off-by: Shakeel Butt <shakeelb@google.com>
> ---
> 
> Changes since v1:
> - Fix cgroup_sk_alloc() too.
> 
>  kernel/cgroup/cgroup.c | 4 ++++
>  mm/memcontrol.c        | 4 ++++
>  2 files changed, 8 insertions(+)
> 
> diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
> index 9a8a5ded3c48..46e5f5518fba 100644
> --- a/kernel/cgroup/cgroup.c
> +++ b/kernel/cgroup/cgroup.c
> @@ -6449,6 +6449,10 @@ void cgroup_sk_alloc(struct sock_cgroup_data *skcd)
>  		return;
>  	}
>  
> +	/* Do not associate the sock with unrelated interrupted task's memcg. */
                                                                       ^^^^^
								       cgroup?
> +	if (in_interrupt())
> +		return;
> +
>  	rcu_read_lock();
>  
>  	while (true) {
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 63bb6a2aab81..f500da82bfe8 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -6697,6 +6697,10 @@ void mem_cgroup_sk_alloc(struct sock *sk)
>  		return;
>  	}

Can you please include the stack trace in the commit log?
Apart from a minor typo (see above),
Reviewed-by: Roman Gushchin <guro@fb.com>

A really good catch.

Thank you!
Eric Dumazet Feb. 14, 2020, 10:38 p.m. UTC | #2
On Fri, Feb 14, 2020 at 2:24 PM Shakeel Butt <shakeelb@google.com> wrote:
>
> We are testing network memory accounting in our setup and noticed
> inconsistent network memory usage; often the network usage of
> unrelated cgroups correlates with the testing workload. On further
> inspection, it seems like mem_cgroup_sk_alloc() and cgroup_sk_alloc()
> are broken in irq context, especially for cgroup v1.
>
> mem_cgroup_sk_alloc() and cgroup_sk_alloc() can be called in irq
> context and implicitly assume that this can only happen from
> sk_clone_lock() and that the source sock object already has an
> associated cgroup. However, in cgroup v1, where network memory
> accounting is opt-in, the source sock may not be associated with any
> cgroup, and the newly cloned sock can then get associated with the
> unrelated interrupted task's cgroup.
>
> Cgroup v2 can also suffer if the source sock object was created by a
> process in the root cgroup, or if sk_alloc() is called in irq context.
> The fix is to simply do nothing in interrupt context.

So, when will the association be done?
At accept() time?
Is it done already?

Thanks


>
> Fixes: 2d7580738345 ("mm: memcontrol: consolidate cgroup socket tracking")
> Fixes: d979a39d7242 ("cgroup: duplicate cgroup reference when cloning sockets")
> Signed-off-by: Shakeel Butt <shakeelb@google.com>
> ---
>
> Changes since v1:
> - Fix cgroup_sk_alloc() too.
>
>  kernel/cgroup/cgroup.c | 4 ++++
>  mm/memcontrol.c        | 4 ++++
>  2 files changed, 8 insertions(+)
>
> diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
> index 9a8a5ded3c48..46e5f5518fba 100644
> --- a/kernel/cgroup/cgroup.c
> +++ b/kernel/cgroup/cgroup.c
> @@ -6449,6 +6449,10 @@ void cgroup_sk_alloc(struct sock_cgroup_data *skcd)
>                 return;
>         }
>
> +       /* Do not associate the sock with unrelated interrupted task's memcg. */
> +       if (in_interrupt())
> +               return;
> +
>         rcu_read_lock();
>
>         while (true) {
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 63bb6a2aab81..f500da82bfe8 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -6697,6 +6697,10 @@ void mem_cgroup_sk_alloc(struct sock *sk)
>                 return;
>         }
>
> +       /* Do not associate the sock with unrelated interrupted task's memcg. */
> +       if (in_interrupt())
> +               return;
> +
>         rcu_read_lock();
>         memcg = mem_cgroup_from_task(current);
>         if (memcg == root_mem_cgroup)
> --
> 2.25.0.265.gbab2e86ba0-goog
>
Shakeel Butt Feb. 14, 2020, 10:44 p.m. UTC | #3
On Fri, Feb 14, 2020 at 2:33 PM Roman Gushchin <guro@fb.com> wrote:
>
> On Fri, Feb 14, 2020 at 02:24:15PM -0800, Shakeel Butt wrote:
> > We are testing network memory accounting in our setup and noticed
> > inconsistent network memory usage; often the network usage of
> > unrelated cgroups correlates with the testing workload. On further
> > inspection, it seems like mem_cgroup_sk_alloc() and cgroup_sk_alloc()
> > are broken in irq context, especially for cgroup v1.
> >
> > mem_cgroup_sk_alloc() and cgroup_sk_alloc() can be called in irq
> > context and implicitly assume that this can only happen from
> > sk_clone_lock() and that the source sock object already has an
> > associated cgroup. However, in cgroup v1, where network memory
> > accounting is opt-in, the source sock may not be associated with any
> > cgroup, and the newly cloned sock can then get associated with the
> > unrelated interrupted task's cgroup.
> >
> > Cgroup v2 can also suffer if the source sock object was created by a
> > process in the root cgroup, or if sk_alloc() is called in irq context.
> > The fix is to simply do nothing in interrupt context.
> >
> > Fixes: 2d7580738345 ("mm: memcontrol: consolidate cgroup socket tracking")
> > Fixes: d979a39d7242 ("cgroup: duplicate cgroup reference when cloning sockets")
> > Signed-off-by: Shakeel Butt <shakeelb@google.com>
> > ---
> >
> > Changes since v1:
> > - Fix cgroup_sk_alloc() too.
> >
> >  kernel/cgroup/cgroup.c | 4 ++++
> >  mm/memcontrol.c        | 4 ++++
> >  2 files changed, 8 insertions(+)
> >
> > diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
> > index 9a8a5ded3c48..46e5f5518fba 100644
> > --- a/kernel/cgroup/cgroup.c
> > +++ b/kernel/cgroup/cgroup.c
> > @@ -6449,6 +6449,10 @@ void cgroup_sk_alloc(struct sock_cgroup_data *skcd)
> >               return;
> >       }
> >
> > +     /* Do not associate the sock with unrelated interrupted task's memcg. */
>                                                                        ^^^^^
>                                                                        cgroup?
> > +     if (in_interrupt())
> > +             return;
> > +
> >       rcu_read_lock();
> >
> >       while (true) {
> > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > index 63bb6a2aab81..f500da82bfe8 100644
> > --- a/mm/memcontrol.c
> > +++ b/mm/memcontrol.c
> > @@ -6697,6 +6697,10 @@ void mem_cgroup_sk_alloc(struct sock *sk)
> >               return;
> >       }
>
> Can you please include the stack trace in the commit log?
> Apart from a minor typo (see above),
> Reviewed-by: Roman Gushchin <guro@fb.com>
>
> A really good catch.
>

Thanks, I will add the stack trace and fix the typo.

Shakeel
Shakeel Butt Feb. 14, 2020, 10:48 p.m. UTC | #4
On Fri, Feb 14, 2020 at 2:38 PM Eric Dumazet <edumazet@google.com> wrote:
>
> On Fri, Feb 14, 2020 at 2:24 PM Shakeel Butt <shakeelb@google.com> wrote:
> >
> > We are testing network memory accounting in our setup and noticed
> > inconsistent network memory usage; often the network usage of
> > unrelated cgroups correlates with the testing workload. On further
> > inspection, it seems like mem_cgroup_sk_alloc() and cgroup_sk_alloc()
> > are broken in irq context, especially for cgroup v1.
> >
> > mem_cgroup_sk_alloc() and cgroup_sk_alloc() can be called in irq
> > context and implicitly assume that this can only happen from
> > sk_clone_lock() and that the source sock object already has an
> > associated cgroup. However, in cgroup v1, where network memory
> > accounting is opt-in, the source sock may not be associated with any
> > cgroup, and the newly cloned sock can then get associated with the
> > unrelated interrupted task's cgroup.
> >
> > Cgroup v2 can also suffer if the source sock object was created by a
> > process in the root cgroup, or if sk_alloc() is called in irq context.
> > The fix is to simply do nothing in interrupt context.
>
> So, when will the association be done?
> At accept() time?
> Is it done already?
>

I think in the current code, if the association is skipped at
allocation time, then the sock will remain unassociated for its
lifetime.

Maybe we can add the association at a later stage, but it does not
seem to be a simple task; see edbe69ef2c90f ("Revert "defer call to
mem_cgroup_sk_alloc()"").

Shakeel
Eric Dumazet Feb. 14, 2020, 11:12 p.m. UTC | #5
On Fri, Feb 14, 2020 at 2:48 PM Shakeel Butt <shakeelb@google.com> wrote:

>
> I think in the current code, if the association is skipped at
> allocation time, then the sock will remain unassociated for its
> lifetime.
>
> Maybe we can add the association at a later stage, but it does not
> seem to be a simple task; see edbe69ef2c90f ("Revert "defer call to
> mem_cgroup_sk_alloc()"").

Half of TCP sockets are passive, so this means that 50% of TCP
sockets won't be charged (the socket cloning always happens from BH
context).

I think this deserves a comment in the changelog or documentation;
otherwise some people might think using memcg will make them safe.
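
For reference, the passive-open path Eric refers to looks roughly like this
for IPv4 (call chain approximate, reconstructed from the TCP code rather
than quoted from this thread); the child sock is cloned while the final ACK
is handled in softirq, so "current" is whatever task the interrupt preempted:

	tcp_v4_rcv()				/* softirq / BH context */
	  tcp_check_req()
	    tcp_v4_syn_recv_sock()
	      tcp_create_openreq_child()
	        inet_csk_clone_lock()
	          sk_clone_lock()
	            mem_cgroup_sk_alloc(newsk)
	            cgroup_sk_alloc(&newsk->sk_cgrp_data)

With the patch applied both helpers return early on this path, which is why
passively opened sockets stay unassociated.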
Shakeel Butt Feb. 15, 2020, 12:04 a.m. UTC | #6
On Fri, Feb 14, 2020 at 3:12 PM Eric Dumazet <edumazet@google.com> wrote:
>
> On Fri, Feb 14, 2020 at 2:48 PM Shakeel Butt <shakeelb@google.com> wrote:
>
> >
> > I think in the current code, if the association is skipped at
> > allocation time, then the sock will remain unassociated for its
> > lifetime.
> >
> > Maybe we can add the association at a later stage, but it does not
> > seem to be a simple task; see edbe69ef2c90f ("Revert "defer call to
> > mem_cgroup_sk_alloc()"").
>
> Half of TCP sockets are passive, so this means that 50% of TCP
> sockets won't be charged (the socket cloning always happens from BH
> context).
>
> I think this deserves a comment in the changelog or documentation;
> otherwise some people might think using memcg will make them safe.

Thanks, I will update the changelog. Also, is inet_csk_accept() the
right place for delayed cgroup/memcg binding (if we decide to do
that)? I am wondering if we can force-charge the memcg during late
binding to cater for the issue fixed in edbe69ef2c90f.

Shakeel
Eric Dumazet Feb. 15, 2020, 12:11 a.m. UTC | #7
On Fri, Feb 14, 2020 at 4:04 PM Shakeel Butt <shakeelb@google.com> wrote:
>
> On Fri, Feb 14, 2020 at 3:12 PM Eric Dumazet <edumazet@google.com> wrote:
> >
> > On Fri, Feb 14, 2020 at 2:48 PM Shakeel Butt <shakeelb@google.com> wrote:
> >
> > >
> > > I think in the current code, if the association is skipped at
> > > allocation time, then the sock will remain unassociated for its
> > > lifetime.
> > >
> > > Maybe we can add the association at a later stage, but it does not
> > > seem to be a simple task; see edbe69ef2c90f ("Revert "defer call to
> > > mem_cgroup_sk_alloc()"").
> >
> > Half of TCP sockets are passive, so this means that 50% of TCP
> > sockets won't be charged (the socket cloning always happens from BH
> > context).
>
> > I think this deserves a comment in the changelog or documentation;
> > otherwise some people might think using memcg will make them safe.
>
> Thanks, I will update the changelog. Also, is inet_csk_accept() the
> right place for delayed cgroup/memcg binding (if we decide to do
> that)? I am wondering if we can force-charge the memcg during late
> binding to cater for the issue fixed in edbe69ef2c90f.
>

Yes, this is exactly why accept() would be the natural choice.

You do not want to test/change the binding at sendmsg()/recvmsg() time, right?
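
For completeness, one possible shape of such an accept()-time binding,
sketched from the discussion above. The helper name is made up and the
force-charge of already-queued memory follows Shakeel's suggestion; none of
this is part of the posted patch, it only illustrates late binding in
process context:

	/* Hypothetical sketch, not part of this series: would be called
	 * from inet_csk_accept() with the child sock locked, i.e. in the
	 * process context of the task calling accept(). */
	static void sock_late_memcg_bind(struct sock *newsk)
	{
		int amt;

		if (!mem_cgroup_sockets_enabled || newsk->sk_memcg)
			return;

		/* Safe now: "current" is the accepting task, not an
		 * interrupted bystander. */
		mem_cgroup_sk_alloc(newsk);

		/* Force-charge whatever the sock already consumed while
		 * sitting in the accept queue. */
		amt = sk_mem_pages(newsk->sk_forward_alloc +
				   atomic_read(&newsk->sk_rmem_alloc));
		if (newsk->sk_memcg && amt)
			mem_cgroup_charge_skmem(newsk->sk_memcg, amt);
	}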

Patch

diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
index 9a8a5ded3c48..46e5f5518fba 100644
--- a/kernel/cgroup/cgroup.c
+++ b/kernel/cgroup/cgroup.c
@@ -6449,6 +6449,10 @@  void cgroup_sk_alloc(struct sock_cgroup_data *skcd)
 		return;
 	}
 
+	/* Do not associate the sock with unrelated interrupted task's memcg. */
+	if (in_interrupt())
+		return;
+
 	rcu_read_lock();
 
 	while (true) {
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 63bb6a2aab81..f500da82bfe8 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -6697,6 +6697,10 @@  void mem_cgroup_sk_alloc(struct sock *sk)
 		return;
 	}
 
+	/* Do not associate the sock with unrelated interrupted task's memcg. */
+	if (in_interrupt())
+		return;
+
 	rcu_read_lock();
 	memcg = mem_cgroup_from_task(current);
 	if (memcg == root_mem_cgroup)