doc: memcontrol: add description for oom_kill

Message ID 20210226021254.3980-1-shy828301@gmail.com
State New, archived
Series doc: memcontrol: add description for oom_kill

Commit Message

Yang Shi Feb. 26, 2021, 2:12 a.m. UTC
When debugging an oom issue, I found the oom_kill counter of memcg
confusing.  At first glance, without checking the documentation, I
thought it only counted memcg ooms, but it turns out it counts both
global and memcg ooms.

Cgroup v2 documents it, but the description is missing for cgroup v1.

Signed-off-by: Yang Shi <shy828301@gmail.com>
---
 Documentation/admin-guide/cgroup-v1/memory.rst | 3 +++
 1 file changed, 3 insertions(+)
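
For reference, a minimal sketch of what reading the file looks like
under cgroup v1 (the mount point is the conventional one; the "demo"
cgroup name is made up for illustration):

  # mkdir /sys/fs/cgroup/memory/demo
  # cat /sys/fs/cgroup/memory/demo/memory.oom_control
  oom_kill_disable 0
  under_oom 0
  oom_kill 0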

Comments

Michal Hocko Feb. 26, 2021, 7:30 a.m. UTC | #1
On Thu 25-02-21 18:12:54, Yang Shi wrote:
> When debugging an oom issue, I found the oom_kill counter of memcg
> confusing.  At first glance, without checking the documentation, I
> thought it only counted memcg ooms, but it turns out it counts both
> global and memcg ooms.

Yes, this is the case indeed. The point of the counter was to count oom
victims from the memcg rather than matching that to the source of the
oom. Remember that this could have been a memcg oom up in the
hierarchy as well. Counting victims on the oom origin could be equally
confusing because in many cases there would be no victim counted for the
above mentioned memcg ooms.
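
To make the semantics concrete, a rough cgroup v1 sketch (the
"parent"/"child" names and the memory-hog program are hypothetical,
and it assumes hierarchical accounting): a task in a child memcg that
is killed because an ancestor's limit was hit is counted in the
child's oom_kill, not the ancestor's.

  # mkdir -p /sys/fs/cgroup/memory/parent/child
  # echo 100M > /sys/fs/cgroup/memory/parent/memory.limit_in_bytes
  # echo $$ > /sys/fs/cgroup/memory/parent/child/tasks
  # ./memory-hog    # hypothetical program allocating well over 100M
  Killed
  # grep '^oom_kill ' /sys/fs/cgroup/memory/parent/child/memory.oom_control
  oom_kill 1
  # grep '^oom_kill ' /sys/fs/cgroup/memory/parent/memory.oom_control
  oom_kill 0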

> Cgroup v2 documents it, but the description is missing for cgroup v1.
> 
> Signed-off-by: Yang Shi <shy828301@gmail.com>

Acked-by: Michal Hocko <mhocko@suse.com>

> ---
>  Documentation/admin-guide/cgroup-v1/memory.rst | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/Documentation/admin-guide/cgroup-v1/memory.rst b/Documentation/admin-guide/cgroup-v1/memory.rst
> index 0936412e044e..44d5429636e2 100644
> --- a/Documentation/admin-guide/cgroup-v1/memory.rst
> +++ b/Documentation/admin-guide/cgroup-v1/memory.rst
> @@ -851,6 +851,9 @@ At reading, current status of OOM is shown.
>  	  (if 1, oom-killer is disabled)
>  	- under_oom	   0 or 1
>  	  (if 1, the memory cgroup is under OOM, tasks may be stopped.)
> +        - oom_kill         integer counter
> +          The number of processes belonging to this cgroup killed by any
> +          kind of OOM killer.
>  
>  11. Memory Pressure
>  ===================
> -- 
> 2.26.2
>
Shakeel Butt Feb. 26, 2021, 2:23 p.m. UTC | #2
On Thu, Feb 25, 2021 at 6:12 PM Yang Shi <shy828301@gmail.com> wrote:
>
> When debugging an oom issue, I found the oom_kill counter of memcg
> confusing.  At first glance, without checking the documentation, I
> thought it only counted memcg ooms, but it turns out it counts both
> global and memcg ooms.
>
> Cgroup v2 documents it, but the description is missing for cgroup v1.
>
> Signed-off-by: Yang Shi <shy828301@gmail.com>

Reviewed-by: Shakeel Butt <shakeelb@google.com>
Chris Down Feb. 26, 2021, 2:33 p.m. UTC | #3
Yang Shi writes:
>When debugging an oom issue, I found the oom_kill counter of memcg
>confusing.  At first glance, without checking the documentation, I
>thought it only counted memcg ooms, but it turns out it counts both
>global and memcg ooms.
>
>Cgroup v2 documents it, but the description is missing for cgroup v1.
>
>Signed-off-by: Yang Shi <shy828301@gmail.com>

Thanks.

Acked-by: Chris Down <chris@chrisdown.name>

>---
> Documentation/admin-guide/cgroup-v1/memory.rst | 3 +++
> 1 file changed, 3 insertions(+)
>
>diff --git a/Documentation/admin-guide/cgroup-v1/memory.rst b/Documentation/admin-guide/cgroup-v1/memory.rst
>index 0936412e044e..44d5429636e2 100644
>--- a/Documentation/admin-guide/cgroup-v1/memory.rst
>+++ b/Documentation/admin-guide/cgroup-v1/memory.rst
>@@ -851,6 +851,9 @@ At reading, current status of OOM is shown.
> 	  (if 1, oom-killer is disabled)
> 	- under_oom	   0 or 1
> 	  (if 1, the memory cgroup is under OOM, tasks may be stopped.)
>+        - oom_kill         integer counter
>+          The number of processes belonging to this cgroup killed by any
>+          kind of OOM killer.
>
> 11. Memory Pressure
> ===================
>-- 
>2.26.2
>
>
Yang Shi Feb. 26, 2021, 4:42 p.m. UTC | #4
On Thu, Feb 25, 2021 at 11:30 PM Michal Hocko <mhocko@suse.com> wrote:
>
> On Thu 25-02-21 18:12:54, Yang Shi wrote:
> > When debugging an oom issue, I found the oom_kill counter of memcg
> > confusing.  At first glance, without checking the documentation, I
> > thought it only counted memcg ooms, but it turns out it counts both
> > global and memcg ooms.
>
> Yes, this is the case indeed. The point of the counter was to count oom
> victims from the memcg rather than matching that to the source of the
> oom. Remember that this could have been a memcg oom up in the
> hierarchy as well. Counting victims on the oom origin could be equally

Yes, it is updated hierarchically on v2, but not on v1. I suppose
this is because v1 may work in non-hierarchical mode? If that is the
only reason, we may be able to remove this restriction to align with
v2, since non-hierarchical mode is no longer supported.

> confusing because in many cases there would be no victim counted for the
> above mentioned memcg ooms.
>
> > Cgroup v2 documents it, but the description is missing for cgroup v1.
> >
> > Signed-off-by: Yang Shi <shy828301@gmail.com>
>
> Acked-by: Michal Hocko <mhocko@suse.com>
>
> > ---
> >  Documentation/admin-guide/cgroup-v1/memory.rst | 3 +++
> >  1 file changed, 3 insertions(+)
> >
> > diff --git a/Documentation/admin-guide/cgroup-v1/memory.rst b/Documentation/admin-guide/cgroup-v1/memory.rst
> > index 0936412e044e..44d5429636e2 100644
> > --- a/Documentation/admin-guide/cgroup-v1/memory.rst
> > +++ b/Documentation/admin-guide/cgroup-v1/memory.rst
> > @@ -851,6 +851,9 @@ At reading, current status of OOM is shown.
> >         (if 1, oom-killer is disabled)
> >       - under_oom        0 or 1
> >         (if 1, the memory cgroup is under OOM, tasks may be stopped.)
> > +        - oom_kill         integer counter
> > +          The number of processes belonging to this cgroup killed by any
> > +          kind of OOM killer.
> >
> >  11. Memory Pressure
> >  ===================
> > --
> > 2.26.2
> >
>
> --
> Michal Hocko
> SUSE Labs
Yang Shi Feb. 26, 2021, 7:19 p.m. UTC | #5
On Fri, Feb 26, 2021 at 8:42 AM Yang Shi <shy828301@gmail.com> wrote:
>
> On Thu, Feb 25, 2021 at 11:30 PM Michal Hocko <mhocko@suse.com> wrote:
> >
> > On Thu 25-02-21 18:12:54, Yang Shi wrote:
> > > When debugging an oom issue, I found the oom_kill counter of memcg
> > > confusing.  At first glance, without checking the documentation, I
> > > thought it only counted memcg ooms, but it turns out it counts both
> > > global and memcg ooms.
> >
> > Yes, this is the case indeed. The point of the counter was to count oom
> > victims from the memcg rather than matching that to the source of the
> > oom. Remember that this could have been a memcg oom up in the
> > hierarchy as well. Counting victims on the oom origin could be equally
>
> Yes, it is updated hierarchically on v2, but not on v1. I suppose
> this is because v1 may work in non-hierarchical mode? If that is the
> only reason, we may be able to remove this restriction to align with
> v2, since non-hierarchical mode is no longer supported.

BTW, having the counter recorded hierarchically would help one of
our use cases. We want to monitor oom_kill for some services, but
systemd wipes out the cgroup if the service is oom-killed and then
restarts the service from scratch (that is, it creates a brand new
cgroup with the same name). This systemd behavior makes the counter
useless if it is not recorded hierarchically.
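
For comparison, a sketch of what this looks like on cgroup v2, where
memory.events is hierarchical and memory.events.local holds the local
count (the system.slice path is just an example):

  # grep oom_kill /sys/fs/cgroup/system.slice/memory.events
  oom_kill 3

Here the parent's counter retains the kills even after systemd tears
down and recreates the service's own cgroup.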

>
> > confusing because in many cases there would be no victim counted for the
> > above mentioned memcg ooms.
> >
> > > Cgroup v2 documents it, but the description is missing for cgroup v1.
> > >
> > > Signed-off-by: Yang Shi <shy828301@gmail.com>
> >
> > Acked-by: Michal Hocko <mhocko@suse.com>
> >
> > > ---
> > >  Documentation/admin-guide/cgroup-v1/memory.rst | 3 +++
> > >  1 file changed, 3 insertions(+)
> > >
> > > diff --git a/Documentation/admin-guide/cgroup-v1/memory.rst b/Documentation/admin-guide/cgroup-v1/memory.rst
> > > index 0936412e044e..44d5429636e2 100644
> > > --- a/Documentation/admin-guide/cgroup-v1/memory.rst
> > > +++ b/Documentation/admin-guide/cgroup-v1/memory.rst
> > > @@ -851,6 +851,9 @@ At reading, current status of OOM is shown.
> > >         (if 1, oom-killer is disabled)
> > >       - under_oom        0 or 1
> > >         (if 1, the memory cgroup is under OOM, tasks may be stopped.)
> > > +        - oom_kill         integer counter
> > > +          The number of processes belonging to this cgroup killed by any
> > > +          kind of OOM killer.
> > >
> > >  11. Memory Pressure
> > >  ===================
> > > --
> > > 2.26.2
> > >
> >
> > --
> > Michal Hocko
> > SUSE Labs
Michal Hocko March 1, 2021, 12:15 p.m. UTC | #6
On Fri 26-02-21 08:42:29, Yang Shi wrote:
> On Thu, Feb 25, 2021 at 11:30 PM Michal Hocko <mhocko@suse.com> wrote:
> >
> > On Thu 25-02-21 18:12:54, Yang Shi wrote:
> > > When debugging an oom issue, I found the oom_kill counter of memcg
> > > confusing.  At first glance, without checking the documentation, I
> > > thought it only counted memcg ooms, but it turns out it counts both
> > > global and memcg ooms.
> >
> > Yes, this is the case indeed. The point of the counter was to count oom
> > victims from the memcg rather than matching that to the source of the
> > oom. Remember that this could have been a memcg oom up in the
> > hierarchy as well. Counting victims on the oom origin could be equally
> 
> Yes, it is updated hierarchically on v2, but not on v1. I suppose
> this is because v1 may work in non-hierarchical mode? If that is the
> only reason, we may be able to remove this restriction to align with
> v2, since non-hierarchical mode is no longer supported.

I believe the reason is that v1 can have tasks in the intermediate
(non-leaf) memcgs. So you wouldn't have a way to tell whether the oom
kill has happened in such a memcg or somewhere down the hierarchy.
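
To illustrate the difference (cgroup names hypothetical): v1 happily
accepts a task in a non-leaf memcg, while v2 rejects the same attach
once the parent has enabled controllers for its children (the "no
internal processes" rule):

  # v1: attaching to an intermediate memcg is allowed
  echo $$ > /sys/fs/cgroup/memory/parent/tasks

  # v2: fails with EBUSY when controllers are enabled in
  # parent/cgroup.subtree_control
  echo $$ > /sys/fs/cgroup/parent/cgroup.procs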
Michal Hocko March 1, 2021, 12:24 p.m. UTC | #7
On Fri 26-02-21 11:19:51, Yang Shi wrote:
> On Fri, Feb 26, 2021 at 8:42 AM Yang Shi <shy828301@gmail.com> wrote:
> >
> > On Thu, Feb 25, 2021 at 11:30 PM Michal Hocko <mhocko@suse.com> wrote:
> > >
> > > On Thu 25-02-21 18:12:54, Yang Shi wrote:
> > > > When debugging an oom issue, I found the oom_kill counter of memcg
> > > > confusing.  At first glance, without checking the documentation, I
> > > > thought it only counted memcg ooms, but it turns out it counts both
> > > > global and memcg ooms.
> > >
> > > Yes, this is the case indeed. The point of the counter was to count oom
> > > victims from the memcg rather than matching that to the source of the
> > > oom. Remember that this could have been a memcg oom up in the
> > > hierarchy as well. Counting victims on the oom origin could be equally
> >
> > Yes, it is updated hierarchically on v2, but not on v1. I suppose
> > this is because v1 may work in non-hierarchical mode? If that is the
> > only reason, we may be able to remove this restriction to align with
> > v2, since non-hierarchical mode is no longer supported.
> 
> BTW, having the counter recorded hierarchically would help one of
> our use cases. We want to monitor oom_kill for some services, but
> systemd wipes out the cgroup if the service is oom-killed and then
> restarts the service from scratch (that is, it creates a brand new
> cgroup with the same name). This systemd behavior makes the counter
> useless if it is not recorded hierarchically.

Just to make sure I understand correctly. You have a setup where memcg
for a service has a hard limit configured and it is destroyed when oom
happens inside that memcg. A new instance is created at the same place
in the hierarchy with a new memcg. Your problem is that the oom-killed
memcg will not be recorded in its parent's oom events, and the
information will get lost with the torn-down memcg. Correct?

If yes, then how do you tell which of the child cgroups was killed from
the parent counter? Or is there only a single child?

Anyway, cgroup v2 will offer the hierarchical behavior. Do you have any
strong reasons that you cannot use v2?
Yang Shi March 1, 2021, 5:07 p.m. UTC | #8
On Mon, Mar 1, 2021 at 4:15 AM Michal Hocko <mhocko@suse.com> wrote:
>
> On Fri 26-02-21 08:42:29, Yang Shi wrote:
> > On Thu, Feb 25, 2021 at 11:30 PM Michal Hocko <mhocko@suse.com> wrote:
> > >
> > > On Thu 25-02-21 18:12:54, Yang Shi wrote:
> > > > When debugging an oom issue, I found the oom_kill counter of memcg
> > > > confusing.  At first glance, without checking the documentation, I
> > > > thought it only counted memcg ooms, but it turns out it counts both
> > > > global and memcg ooms.
> > >
> > > Yes, this is the case indeed. The point of the counter was to count oom
> > > victims from the memcg rather than matching that to the source of the
> > > oom. Remember that this could have been a memcg oom up in the
> > > hierarchy as well. Counting victims on the oom origin could be equally
> >
> > Yes, it is updated hierarchically on v2, but not on v1. I suppose
> > this is because v1 may work in non-hierarchical mode? If that is the
> > only reason, we may be able to remove this restriction to align with
> > v2, since non-hierarchical mode is no longer supported.
>
> I believe the reason is that v1 can have tasks in the intermediate
> (non-leaf) memcgs. So you wouldn't have a way to tell whether the oom
> kill has happened in such a memcg or somewhere down the hierarchy.

Aha, I forgot that; that's bad. Although we don't have tasks in
intermediate nodes in practice, I do understand it is not forbidden in
v1 the way it is in cgroup v2.

> --
> Michal Hocko
> SUSE Labs
Yang Shi March 1, 2021, 5:17 p.m. UTC | #9
On Mon, Mar 1, 2021 at 4:24 AM Michal Hocko <mhocko@suse.com> wrote:
>
> On Fri 26-02-21 11:19:51, Yang Shi wrote:
> > On Fri, Feb 26, 2021 at 8:42 AM Yang Shi <shy828301@gmail.com> wrote:
> > >
> > > On Thu, Feb 25, 2021 at 11:30 PM Michal Hocko <mhocko@suse.com> wrote:
> > > >
> > > > On Thu 25-02-21 18:12:54, Yang Shi wrote:
> > > > > When debugging an oom issue, I found the oom_kill counter of memcg
> > > > > confusing.  At first glance, without checking the documentation, I
> > > > > thought it only counted memcg ooms, but it turns out it counts both
> > > > > global and memcg ooms.
> > > >
> > > > Yes, this is the case indeed. The point of the counter was to count oom
> > > > victims from the memcg rather than matching that to the source of the
> > > > oom. Remember that this could have been a memcg oom up in the
> > > > hierarchy as well. Counting victims on the oom origin could be equally
> > >
> > > Yes, it is updated hierarchically on v2, but not on v1. I suppose
> > > this is because v1 may work in non-hierarchical mode? If that is the
> > > only reason, we may be able to remove this restriction to align with
> > > v2, since non-hierarchical mode is no longer supported.
> >
> > BTW, having the counter recorded hierarchically would help one of
> > our use cases. We want to monitor oom_kill for some services, but
> > systemd wipes out the cgroup if the service is oom-killed and then
> > restarts the service from scratch (that is, it creates a brand new
> > cgroup with the same name). This systemd behavior makes the counter
> > useless if it is not recorded hierarchically.
>
> Just to make sure I understand correctly. You have a setup where memcg
> for a service has a hard limit configured and it is destroyed when oom
> happens inside that memcg. A new instance is created at the same place
> in the hierarchy with a new memcg. Your problem is that the oom-killed
> memcg will not be recorded in its parent's oom events, and the
> information will get lost with the torn-down memcg. Correct?

Yes, but it is a global oom instead of a memcg oom.

>
> If yes, then how do you tell which of the child cgroups was killed from
> the parent counter? Or is there only a single child?

Not only a single child, but in our case the oom-killed child
consumes 90% of memory, so a global oom would kill it. That certainly
doesn't prevent ooms from other children from being accounted too, but
we don't need a very accurate counter, and in our case we can tell
that 99% of oom kills happen in that specific memcg.

>
> Anyway, cgroup v2 will offer the hierarchical behavior. Do you have any
> strong reasons that you cannot use v2?

I personally do prefer to migrate to cgroup v2, but it incurs
significant work for orchestration tools, infrastructure
configuration, monitoring tools, etc., which are out of my control.

> --
> Michal Hocko
> SUSE Labs
Jonathan Corbet March 1, 2021, 9:20 p.m. UTC | #10
Yang Shi <shy828301@gmail.com> writes:

> When debugging an oom issue, I found the oom_kill counter of memcg
> confusing.  At first glance, without checking the documentation, I
> thought it only counted memcg ooms, but it turns out it counts both
> global and memcg ooms.
>
> Cgroup v2 documents it, but the description is missing for cgroup v1.
>
> Signed-off-by: Yang Shi <shy828301@gmail.com>
> ---
>  Documentation/admin-guide/cgroup-v1/memory.rst | 3 +++
>  1 file changed, 3 insertions(+)
>
> diff --git a/Documentation/admin-guide/cgroup-v1/memory.rst b/Documentation/admin-guide/cgroup-v1/memory.rst
> index 0936412e044e..44d5429636e2 100644
> --- a/Documentation/admin-guide/cgroup-v1/memory.rst
> +++ b/Documentation/admin-guide/cgroup-v1/memory.rst
> @@ -851,6 +851,9 @@ At reading, current status of OOM is shown.
>  	  (if 1, oom-killer is disabled)
>  	- under_oom	   0 or 1
>  	  (if 1, the memory cgroup is under OOM, tasks may be stopped.)
> +        - oom_kill         integer counter
> +          The number of processes belonging to this cgroup killed by any
> +          kind of OOM killer.

Applied, thanks.

jon

Patch

diff --git a/Documentation/admin-guide/cgroup-v1/memory.rst b/Documentation/admin-guide/cgroup-v1/memory.rst
index 0936412e044e..44d5429636e2 100644
--- a/Documentation/admin-guide/cgroup-v1/memory.rst
+++ b/Documentation/admin-guide/cgroup-v1/memory.rst
@@ -851,6 +851,9 @@  At reading, current status of OOM is shown.
 	  (if 1, oom-killer is disabled)
 	- under_oom	   0 or 1
 	  (if 1, the memory cgroup is under OOM, tasks may be stopped.)
+        - oom_kill         integer counter
+          The number of processes belonging to this cgroup killed by any
+          kind of OOM killer.
 
 11. Memory Pressure
 ===================