Message ID | 20181203080119.18989-1-xlpang@linux.alibaba.com (mailing list archive) |
---|---|
State | New, archived |
Series | [1/3] mm/memcg: Fix min/low usage in propagate_protected_usage() |
On Mon 03-12-18 16:01:17, Xunlei Pang wrote:
> When usage exceeds min, min usage should be min rather than 0.
> Apply the same for low.

Why? What is the actual problem?

> Signed-off-by: Xunlei Pang <xlpang@linux.alibaba.com>
> ---
>  mm/page_counter.c | 12 ++----------
>  1 file changed, 2 insertions(+), 10 deletions(-)
>
> diff --git a/mm/page_counter.c b/mm/page_counter.c
> index de31470655f6..75d53f15f040 100644
> --- a/mm/page_counter.c
> +++ b/mm/page_counter.c
> @@ -23,11 +23,7 @@ static void propagate_protected_usage(struct page_counter *c,
>  		return;
>
>  	if (c->min || atomic_long_read(&c->min_usage)) {
> -		if (usage <= c->min)
> -			protected = usage;
> -		else
> -			protected = 0;
> -
> +		protected = min(usage, c->min);
>  		old_protected = atomic_long_xchg(&c->min_usage, protected);
>  		delta = protected - old_protected;
>  		if (delta)
> @@ -35,11 +31,7 @@ static void propagate_protected_usage(struct page_counter *c,
>  	}
>
>  	if (c->low || atomic_long_read(&c->low_usage)) {
> -		if (usage <= c->low)
> -			protected = usage;
> -		else
> -			protected = 0;
> -
> +		protected = min(usage, c->low);
>  		old_protected = atomic_long_xchg(&c->low_usage, protected);
>  		delta = protected - old_protected;
>  		if (delta)
> --
> 2.13.5 (Apple Git-94)
On 2018/12/3 7:54 PM, Michal Hocko wrote:
> On Mon 03-12-18 16:01:17, Xunlei Pang wrote:
>> When usage exceeds min, min usage should be min rather than 0.
>> Apply the same for low.
>
> Why? What is the actual problem?

children_min_usage tracks the total children usage under min, so it is
natural that min is what gets added into children_min_usage when usage is
above min; I can't imagine why 0 is added instead, is there some special
history I missed? See mem_cgroup_protected(): when usage exceeds min,
emin is calculated as "parent_emin*min/children_min_usage".

>
>> Signed-off-by: Xunlei Pang <xlpang@linux.alibaba.com>
>> ---
>>  mm/page_counter.c | 12 ++----------
>>  1 file changed, 2 insertions(+), 10 deletions(-)
>>
>> diff --git a/mm/page_counter.c b/mm/page_counter.c
>> index de31470655f6..75d53f15f040 100644
>> --- a/mm/page_counter.c
>> +++ b/mm/page_counter.c
>> @@ -23,11 +23,7 @@ static void propagate_protected_usage(struct page_counter *c,
>>  		return;
>>
>>  	if (c->min || atomic_long_read(&c->min_usage)) {
>> -		if (usage <= c->min)
>> -			protected = usage;
>> -		else
>> -			protected = 0;
>> -
>> +		protected = min(usage, c->min);
>>  		old_protected = atomic_long_xchg(&c->min_usage, protected);
>>  		delta = protected - old_protected;
>>  		if (delta)
>> @@ -35,11 +31,7 @@ static void propagate_protected_usage(struct page_counter *c,
>>  	}
>>
>>  	if (c->low || atomic_long_read(&c->low_usage)) {
>> -		if (usage <= c->low)
>> -			protected = usage;
>> -		else
>> -			protected = 0;
>> -
>> +		protected = min(usage, c->low);
>>  		old_protected = atomic_long_xchg(&c->low_usage, protected);
>>  		delta = protected - old_protected;
>>  		if (delta)
>> --
>> 2.13.5 (Apple Git-94)
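[Editor's note: the calculation referred to above lives in mem_cgroup_protected() in mm/memcontrol.c. The sketch below is a simplified, illustrative reconstruction of that logic for the kernels discussed in this thread, not the verbatim source; the helper names min_ul() and effective_min() and the standalone parameter list are invented for readability. It shows where children_min_usage enters the proportional split of the parent's effective protection.]

/*
 * Illustrative sketch only: roughly how mem_cgroup_protected()
 * distributes a parent's effective min among its children.
 */
static unsigned long min_ul(unsigned long a, unsigned long b)
{
	return a < b ? a : b;
}

static unsigned long effective_min(unsigned long usage,             /* child usage */
				   unsigned long memory_min,         /* child memory.min */
				   unsigned long parent_emin,        /* parent effective min */
				   unsigned long children_min_usage) /* sum of sibling claims */
{
	/* A child can never be protected more than its parent. */
	unsigned long emin = min_ul(memory_min, parent_emin);

	/* This child's claim to the parent's protection. */
	unsigned long claim = min_ul(usage, memory_min);

	/*
	 * children_min_usage is the sum of the claims that
	 * propagate_protected_usage() recorded for all of the parent's
	 * children, and the parent's effective protection is split among
	 * them in proportion to those claims.  With the old code a child
	 * whose usage exceeds its min records a claim of 0 and therefore
	 * gets no share; with the patch it claims min(usage, memory.min).
	 */
	if (claim && children_min_usage)
		emin = min_ul(emin, parent_emin * claim / children_min_usage);

	return emin;
}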
On Mon, Dec 03, 2018 at 04:01:17PM +0800, Xunlei Pang wrote:
> When usage exceeds min, min usage should be min rather than 0.
> Apply the same for low.
>
> Signed-off-by: Xunlei Pang <xlpang@linux.alibaba.com>
> ---
>  mm/page_counter.c | 12 ++----------
>  1 file changed, 2 insertions(+), 10 deletions(-)
>
> diff --git a/mm/page_counter.c b/mm/page_counter.c
> index de31470655f6..75d53f15f040 100644
> --- a/mm/page_counter.c
> +++ b/mm/page_counter.c
> @@ -23,11 +23,7 @@ static void propagate_protected_usage(struct page_counter *c,
>  		return;
>
>  	if (c->min || atomic_long_read(&c->min_usage)) {
> -		if (usage <= c->min)
> -			protected = usage;
> -		else
> -			protected = 0;
> -
> +		protected = min(usage, c->min);

This change makes sense in combination with patch 3, but not as a
standalone "fix". It's not a bug; it's required unless you start scanning
proportionally to the memory.low/min excess. Please reflect this in the
commit message. Or, even better, merge it into patch 3.

Also, please make sure that the cgroup kselftests are passing after your
changes.

Thanks!
Hi Roman,

On 2018/12/4 AM 2:00, Roman Gushchin wrote:
> On Mon, Dec 03, 2018 at 04:01:17PM +0800, Xunlei Pang wrote:
>> When usage exceeds min, min usage should be min rather than 0.
>> Apply the same for low.
>>
>> Signed-off-by: Xunlei Pang <xlpang@linux.alibaba.com>
>> ---
>>  mm/page_counter.c | 12 ++----------
>>  1 file changed, 2 insertions(+), 10 deletions(-)
>>
>> diff --git a/mm/page_counter.c b/mm/page_counter.c
>> index de31470655f6..75d53f15f040 100644
>> --- a/mm/page_counter.c
>> +++ b/mm/page_counter.c
>> @@ -23,11 +23,7 @@ static void propagate_protected_usage(struct page_counter *c,
>>  		return;
>>
>>  	if (c->min || atomic_long_read(&c->min_usage)) {
>> -		if (usage <= c->min)
>> -			protected = usage;
>> -		else
>> -			protected = 0;
>> -
>> +		protected = min(usage, c->min);
>
> This change makes sense in combination with patch 3, but not as a
> standalone "fix". It's not a bug; it's required unless you start scanning
> proportionally to the memory.low/min excess.
>
> Please reflect this in the commit message. Or, even better, merge it into
> patch 3.

The more I look at it, the more I think it's a bug, but anyway I'm fine
with merging it into patch 3 :-)

>
> Also, please make sure that the cgroup kselftests are passing after your
> changes.

Sure, will do and send v2. Thanks for your inputs.
On Wed, Dec 05, 2018 at 04:58:31PM +0800, Xunlei Pang wrote:
> Hi Roman,
>
> On 2018/12/4 AM 2:00, Roman Gushchin wrote:
> > On Mon, Dec 03, 2018 at 04:01:17PM +0800, Xunlei Pang wrote:
> >> When usage exceeds min, min usage should be min rather than 0.
> >> Apply the same for low.
> >>
> >> Signed-off-by: Xunlei Pang <xlpang@linux.alibaba.com>
> >> ---
> >>  mm/page_counter.c | 12 ++----------
> >>  1 file changed, 2 insertions(+), 10 deletions(-)
> >>
> >> diff --git a/mm/page_counter.c b/mm/page_counter.c
> >> index de31470655f6..75d53f15f040 100644
> >> --- a/mm/page_counter.c
> >> +++ b/mm/page_counter.c
> >> @@ -23,11 +23,7 @@ static void propagate_protected_usage(struct page_counter *c,
> >>  		return;
> >>
> >>  	if (c->min || atomic_long_read(&c->min_usage)) {
> >> -		if (usage <= c->min)
> >> -			protected = usage;
> >> -		else
> >> -			protected = 0;
> >> -
> >> +		protected = min(usage, c->min);
> >
> > This change makes sense in combination with patch 3, but not as a
> > standalone "fix". It's not a bug; it's required unless you start scanning
> > proportionally to the memory.low/min excess.
> >
> > Please reflect this in the commit message. Or, even better, merge it into
> > patch 3.
>
> The more I look at it, the more I think it's a bug, but anyway I'm fine
> with merging it into patch 3 :-)

It's not. I explained this back when we were discussing that patch.
TL;DR: because the decision to scan or to skip is binary now, this trick
is necessary to prioritize one cgroup over another. Otherwise both
cgroups can have their usages above their effective memory protections
and will be scanned at the same pace.

If you have any doubts, you can run the memcg kselftests with and without
this change; you'll see the difference.

Thanks!
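[Editor's note: to make the "binary decision" point concrete, reclaim at that time did not scan proportionally to how far a cgroup sat above its memory.low/min; it made a per-cgroup keep-or-skip choice based on mem_cgroup_protected(). The fragment below is a rough, self-contained sketch modeled on the shrink_node() logic of that era; the MEMCG_PROT_* values and the memcg_low_reclaim/memcg_low_skipped fields mirror the kernel, while should_scan() and the *_sketch struct are invented for illustration.]

/* Illustrative sketch only; not the actual reclaim code. */
enum mem_cgroup_protection { MEMCG_PROT_NONE, MEMCG_PROT_LOW, MEMCG_PROT_MIN };

struct scan_control_sketch {
	int memcg_low_reclaim;	/* retry pass that may dip below memory.low */
	int memcg_low_skipped;	/* remember that a low-protected memcg was skipped */
};

/*
 * Returns 1 if the cgroup gets scanned, 0 if it is skipped entirely.
 * The decision is binary: there is no "scan less because the usage is
 * only slightly above the protection", which is why two cgroups that
 * are both above their effective protection are reclaimed at the same
 * pace unless one of them retains a non-zero protected usage.
 */
static int should_scan(enum mem_cgroup_protection prot,
		       struct scan_control_sketch *sc)
{
	switch (prot) {
	case MEMCG_PROT_MIN:
		return 0;		/* hard protection: never reclaim */
	case MEMCG_PROT_LOW:
		if (!sc->memcg_low_reclaim) {
			sc->memcg_low_skipped = 1;
			return 0;	/* skip on the first pass */
		}
		return 1;
	case MEMCG_PROT_NONE:
	default:
		return 1;		/* reclaim at the normal pace */
	}
}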
diff --git a/mm/page_counter.c b/mm/page_counter.c
index de31470655f6..75d53f15f040 100644
--- a/mm/page_counter.c
+++ b/mm/page_counter.c
@@ -23,11 +23,7 @@ static void propagate_protected_usage(struct page_counter *c,
 		return;
 
 	if (c->min || atomic_long_read(&c->min_usage)) {
-		if (usage <= c->min)
-			protected = usage;
-		else
-			protected = 0;
-
+		protected = min(usage, c->min);
 		old_protected = atomic_long_xchg(&c->min_usage, protected);
 		delta = protected - old_protected;
 		if (delta)
@@ -35,11 +31,7 @@ static void propagate_protected_usage(struct page_counter *c,
 	}
 
 	if (c->low || atomic_long_read(&c->low_usage)) {
-		if (usage <= c->low)
-			protected = usage;
-		else
-			protected = 0;
-
+		protected = min(usage, c->low);
 		old_protected = atomic_long_xchg(&c->low_usage, protected);
 		delta = protected - old_protected;
 		if (delta)
When usage exceeds min, min usage should be min rather than 0. Apply the
same for low.

Signed-off-by: Xunlei Pang <xlpang@linux.alibaba.com>
---
 mm/page_counter.c | 12 ++----------
 1 file changed, 2 insertions(+), 10 deletions(-)
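[Editor's note: for convenience, this is roughly what propagate_protected_usage() looks like with the diff above applied. The lines outside the hunks are reconstructed from the kernel sources of that period and may differ in detail; the comments are added here.]

static void propagate_protected_usage(struct page_counter *c,
				      unsigned long usage)
{
	unsigned long protected, old_protected;
	long delta;

	if (!c->parent)
		return;

	if (c->min || atomic_long_read(&c->min_usage)) {
		/* Record how much of this counter's usage falls under min. */
		protected = min(usage, c->min);
		old_protected = atomic_long_xchg(&c->min_usage, protected);
		delta = protected - old_protected;
		if (delta)
			atomic_long_add(delta, &c->parent->children_min_usage);
	}

	if (c->low || atomic_long_read(&c->low_usage)) {
		/* Same bookkeeping for the low protection. */
		protected = min(usage, c->low);
		old_protected = atomic_long_xchg(&c->low_usage, protected);
		delta = protected - old_protected;
		if (delta)
			atomic_long_add(delta, &c->parent->children_low_usage);
	}
}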