[3/3] mm: improvements on memcg protection functions

Message ID 20200425152418.28388-4-laoar.shao@gmail.com (mailing list archive)
State New, archived
Series mm: improve proportional memcg protection

Commit Message

Yafang Shao April 25, 2020, 3:24 p.m. UTC
Since proportional memory.{min, low} reclaim was introduced in
commit 9783aa9917f8 ("mm, memcg: proportional memory.{low,min} reclaim"),
it has proved hard to understand, and the issues it causes are even
harder to understand [1]. This dilemma comes from the fact that
proportional reclaim mixes up the memcg and the reclaim context.

In proportional reclaim, the whole reclaim context - including both the
memcg to be reclaimed and the reclaimer - should be considered, rather
than the memcg only.

To make it clear, a new member 'protection' is introduced in the reclaim
context (struct shrink_control) to replace mem_cgroup_protection(). This
member is set when we check whether the memcg is protected or not.

After this change, the issue I pointed out in [1] - a really old
left-over value can slow down target reclaim - is fixed, and I think it
could also avoid some potential races.

[1]. https://lore.kernel.org/linux-mm/20200423061629.24185-1-laoar.shao@gmail.com
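
Concretely, the protection value is now computed and cached once per
memcg while walking the tree, and consumed later when scan pressure is
calculated. A minimal excerpt of that flow, taken from the diff at the
end of this page:

	/* shrink_node_memcgs(): compute and cache the protection */
	sc->protection = 0;
	if (mem_cgroup_protected(target_memcg, memcg, sc))
		continue;	/* fully protected - skip this memcg */

	/* get_scan_count(): consume the cached value */
	protection = sc->protection;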

Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Roman Gushchin <guro@fb.com>
Cc: Chris Down <chris@chrisdown.name>
Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
---
 include/linux/memcontrol.h | 25 ----------------
 mm/internal.h              | 17 +++++++++++
 mm/memcontrol.c            | 58 +++++++++++++++++++++++++++-----------
 mm/vmscan.c                | 35 +++--------------------
 4 files changed, 63 insertions(+), 72 deletions(-)

Comments

Michal Hocko April 27, 2020, 9:40 a.m. UTC | #1
On Sat 25-04-20 11:24:18, Yafang Shao wrote:
> Since proportional memory.{min, low} reclaim was introduced in
> commit 9783aa9917f8 ("mm, memcg: proportional memory.{low,min} reclaim"),
> it has proved hard to understand, and the issues it causes are even
> harder to understand [1]. This dilemma comes from the fact that
> proportional reclaim mixes up the memcg and the reclaim context.
> 
> In proportional reclaim, the whole reclaim context - including both the
> memcg to be reclaimed and the reclaimer - should be considered, rather
> than the memcg only.
> 
> To make it clear, a new member 'protection' is introduced in the reclaim
> context (struct shrink_control) to replace mem_cgroup_protection(). This

s@shrink_control@scan_control@

> member is set when we check whether the memcg is protected or not.
> 
> After this change, the issue I pointed out in [1] - a really old
> left-over value can slow down target reclaim - is fixed, and I think it
> could also avoid some potential races.

The patch would have been much easier to review if you had only focused
on the effective protection value caching. I really fail to see why you
had to make mem_cgroup_protected even more convoluted with more side
effects (e.g. sc->memcg_low_skipped). This goes directly opposite to
what Johannes was proposing in the other email AFAICS.

Your changelog doesn't explain why double caching the effective value is
an improvement. I believe your goal was to drop the special casing for
the reclaim targets, which are the only ones to ignore protection as they are
clearly violating the consumption constraints. This makes some sense
to me because it makes the special case have a local effect.

But I really dislike your patch. Please follow up on Johannes'
suggestion to split up the mem_cgroup_protected into parts
http://lkml.kernel.org/r/20200424134438.GA496852@cmpxchg.org
Yafang Shao April 27, 2020, 10:09 a.m. UTC | #2
On Mon, Apr 27, 2020 at 5:40 PM Michal Hocko <mhocko@kernel.org> wrote:
>
> On Sat 25-04-20 11:24:18, Yafang Shao wrote:
> > Since proportional memory.{min, low} reclaim was introduced in
> > commit 9783aa9917f8 ("mm, memcg: proportional memory.{low,min} reclaim"),
> > it has proved hard to understand, and the issues it causes are even
> > harder to understand [1]. This dilemma comes from the fact that
> > proportional reclaim mixes up the memcg and the reclaim context.
> >
> > In proportional reclaim, the whole reclaim context - including both the
> > memcg to be reclaimed and the reclaimer - should be considered, rather
> > than the memcg only.
> >
> > To make it clear, a new member 'protection' is introduced in the reclaim
> > context (struct shrink_control) to replace mem_cgroup_protection(). This
>
> s@shrink_control@scan_control@
>

Thanks for pointing this out.

> > member is set when we check whether the memcg is protected or not.
> >
> > After this change, the issue I pointed out in [1] - a really old
> > left-over value can slow down target reclaim - is fixed, and I think it
> > could also avoid some potential races.
>
> The patch would have been much easier to review if you had only focused
> on the effective protection value caching. I really fail to see why you
> had to make mem_cgroup_protected even more convoluted with more side
> effects (e.g. sc->memcg_low_skipped). This goes directly opposite to
> what Johannes was proposing in the other email AFAICS.
>

Sorry, I fail to see the advantage of Johannes's proposal except for
the better naming.

> Your changelog doesn't explain why double caching the effective value is
> an improvement.

The improvement is to avoid getting a wrong value calculated by
other reclaimers, and to avoid issues in mem_cgroup_protection() that we
haven't noticed.

> I believe your goal was to drop the special casing for
> the reclaim targets, which are the only ones to ignore protection as they are
> clearly violating the consumption constraints. This makes some sense
> to me because it makes the special case have a local effect.
>

> But I really dislike your patch. Please follow up on Johannes'
> suggestion to split up the mem_cgroup_protected into parts
> http://lkml.kernel.org/r/20200424134438.GA496852@cmpxchg.org
>

As I said above, I fail to see the advantage of this proposal.
Maybe we should wait until Johannes shows his code.

Hi Johannes,

Would you please show a simple implementation of your proposal ASAP?
Michal Hocko April 27, 2020, 10:50 a.m. UTC | #3
On Mon 27-04-20 18:09:27, Yafang Shao wrote:
> On Mon, Apr 27, 2020 at 5:40 PM Michal Hocko <mhocko@kernel.org> wrote:
> >
> > On Sat 25-04-20 11:24:18, Yafang Shao wrote:
> > > Since proportional memory.{min, low} reclaim was introduced in
> > > commit 9783aa9917f8 ("mm, memcg: proportional memory.{low,min} reclaim"),
> > > it has proved hard to understand, and the issues it causes are even
> > > harder to understand [1]. This dilemma comes from the fact that
> > > proportional reclaim mixes up the memcg and the reclaim context.
> > >
> > > In proportional reclaim, the whole reclaim context - including both the
> > > memcg to be reclaimed and the reclaimer - should be considered, rather
> > > than the memcg only.
> > >
> > > To make it clear, a new member 'protection' is introduced in the reclaim
> > > context (struct shrink_control) to replace mem_cgroup_protection(). This
> >
> > s@shrink_control@scan_control@
> >
> 
> Thanks for pointing this out.
> 
> > > member is set when we check whether the memcg is protected or not.
> > >
> > > After this change, the issue I pointed out in [1] - a really old
> > > left-over value can slow down target reclaim - is fixed, and I think it
> > > could also avoid some potential races.
> >
> > The patch would have been much easier to review if you had only focused
> > on the effective protection value caching. I really fail to see why you
> > had to make mem_cgroup_protected even more convoluted with more side
> > effects (e.g. sc->memcg_low_skipped). This goes directly opposite to
> > what Johannes was proposing in the other email AFAICS.
> >
> 
> Sorry, I fail to see the advantage of Johannes's proposal except for
> the better naming.

The immediate advantage is that a predicate had better not have any
side effects. So splitting it into a calculation part, which has clearly
defined side effects, and a simple predicate that consults
pre-calculated values makes a lot of sense to me.

> > Your changelog doesn't explain why double caching the effective value is
> > an improvement.
> 
> The improvement is to avoid getting a wrong value calculated by
> other reclaimers, and to avoid issues in mem_cgroup_protection() that we
> haven't noticed.

This is not true in general. There is still parallel calculation done
and so parallel reclaimers might affect each other. Your patch only
makes a real difference for leaf memcgs which are the reclaim target as
well. All intermediate nodes really do not care about the cached values
because they do not have any pages on the LRU lists.
Yafang Shao April 27, 2020, 11:06 a.m. UTC | #4
On Mon, Apr 27, 2020 at 6:50 PM Michal Hocko <mhocko@kernel.org> wrote:
>
> On Mon 27-04-20 18:09:27, Yafang Shao wrote:
> > On Mon, Apr 27, 2020 at 5:40 PM Michal Hocko <mhocko@kernel.org> wrote:
> > >
> > > On Sat 25-04-20 11:24:18, Yafang Shao wrote:
> > > > Since proportional memory.{min, low} reclaim was introduced in
> > > > commit 9783aa9917f8 ("mm, memcg: proportional memory.{low,min} reclaim"),
> > > > it has proved hard to understand, and the issues it causes are even
> > > > harder to understand [1]. This dilemma comes from the fact that
> > > > proportional reclaim mixes up the memcg and the reclaim context.
> > > >
> > > > In proportional reclaim, the whole reclaim context - including both the
> > > > memcg to be reclaimed and the reclaimer - should be considered, rather
> > > > than the memcg only.
> > > >
> > > > To make it clear, a new member 'protection' is introduced in the reclaim
> > > > context (struct shrink_control) to replace mem_cgroup_protection(). This
> > >
> > > s@shrink_control@scan_control@
> > >
> >
> > Thanks for pointing this out.
> >
> > > > member is set when we check whether the memcg is protected or not.
> > > >
> > > > After this change, the issue I pointed out in [1] - a really old
> > > > left-over value can slow down target reclaim - is fixed, and I think it
> > > > could also avoid some potential races.
> > >
> > > The patch would have been much easier to review if you had only focused
> > > on the effective protection value caching. I really fail to see why you
> > > had to make mem_cgroup_protected even more convoluted with more side
> > > effects (e.g. sc->memcg_low_skipped). This goes directly opposite to
> > > what Johannes was proposing in the other email AFAICS.
> > >
> >
> > Sorry, I fail to see the advantage of Johannes's proposal except for
> > the better naming.
>
> The immediate advantage is that a predicate had better not have any
> side effects.

No, it still has side effects. That is not an immediate advantage either.
See below:

> So splitting it into a calculation part, which has clearly
> defined side effects, and a simple predicate that consults
> pre-calculated values makes a lot of sense to me.
>

When you calculate the effective values, the source values used to
calculate them may already have changed by the time you do the
predicate.
IOW, this proposal greatly increases the race window.

> > > Your changelog doesn't explain why double caching the effective value is
> > > an improvement.
> >
> > The improvement is to avoid getting a wrong value calculated by
> > other reclaimers, and to avoid issues in mem_cgroup_protection() that we
> > haven't noticed.
>
> This is not true in general. There is still parallel calculation done
> and so parallel reclaimers might affect each other. Your patch only
> makes a real difference for leaf memcgs which are the reclaim target as
> well.

The race between parallel reclaimers fundamentally exists, and all we
can do now is reduce the race window as far as possible.

> All intermediate nodes really do not care about the cached values
> because they do not have any pages on the LRU lists.
>

But reading these cached values can save a lot of time compared with the
existing code, as the existing code still calculates them even if they
are useless.
If you think that is an issue, why not skip scanning these intermediate
nodes, since they don't have any pages on the LRU lists?
Michal Hocko April 27, 2020, 11:24 a.m. UTC | #5
On Mon 27-04-20 19:06:52, Yafang Shao wrote:
> On Mon, Apr 27, 2020 at 6:50 PM Michal Hocko <mhocko@kernel.org> wrote:
> >
> > On Mon 27-04-20 18:09:27, Yafang Shao wrote:
> > > On Mon, Apr 27, 2020 at 5:40 PM Michal Hocko <mhocko@kernel.org> wrote:
> > > >
> > > > On Sat 25-04-20 11:24:18, Yafang Shao wrote:
> > > > > Since proportional memory.{min, low} reclaim was introduced in
> > > > > commit 9783aa9917f8 ("mm, memcg: proportional memory.{low,min} reclaim"),
> > > > > it has proved hard to understand, and the issues it causes are even
> > > > > harder to understand [1]. This dilemma comes from the fact that
> > > > > proportional reclaim mixes up the memcg and the reclaim context.
> > > > >
> > > > > In proportional reclaim, the whole reclaim context - including both the
> > > > > memcg to be reclaimed and the reclaimer - should be considered, rather
> > > > > than the memcg only.
> > > > >
> > > > > To make it clear, a new member 'protection' is introduced in the reclaim
> > > > > context (struct shrink_control) to replace mem_cgroup_protection(). This
> > > >
> > > > s@shrink_control@scan_control@
> > > >
> > >
> > > Thanks for pointing this out.
> > >
> > > > > member is set when we check whether the memcg is protected or not.
> > > > >
> > > > > After this change, the issue I pointed out in [1] - a really old
> > > > > left-over value can slow down target reclaim - is fixed, and I think it
> > > > > could also avoid some potential races.
> > > >
> > > > The patch would have been much easier to review if you had only focused
> > > > on the effective protection value caching. I really fail to see why you
> > > > had to make mem_cgroup_protected even more convoluted with more side
> > > > effects (e.g. sc->memcg_low_skipped). This goes directly opposite to
> > > > what Johannes was proposing in the other email AFAICS.
> > > >
> > >
> > > Sorry, I fail to see the advantage of Johannes's proposal except for
> > > the better naming.
> >
> > The immediate advantage is that a predicate had better not have any
> > side effects.
> 
> No, it still has side effects. That is not an immediate advantage either.
> See below:

I believe you have misunderstood the proposal.
mem_cgroup_below_{min,low,protection} won't have any side effects on the
memcg. It would only be mem_cgroup_calculate_protection which updates
the cached state. So there won't be any predicate with side effects.
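
A rough sketch of what that split could look like (hypothetical code
based on the names above, not Johannes' actual patch):

	/*
	 * Calculation step: the only part with side effects. Called once
	 * per memcg during the top-down tree walk; it updates the cached
	 * effective values memcg->memory.emin and memcg->memory.elow.
	 */
	void mem_cgroup_calculate_protection(struct mem_cgroup *root,
					     struct mem_cgroup *memcg);

	/* Predicates: pure reads of the pre-calculated state. */
	static inline bool mem_cgroup_below_min(struct mem_cgroup *memcg)
	{
		return page_counter_read(&memcg->memory) <=
		       READ_ONCE(memcg->memory.emin);
	}

	static inline bool mem_cgroup_below_low(struct mem_cgroup *memcg)
	{
		return page_counter_read(&memcg->memory) <=
		       READ_ONCE(memcg->memory.elow);
	}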

> > So splitting it into a calculation part, which has clearly
> > defined side effects, and a simple predicate that consults
> > pre-calculated values makes a lot of sense to me.
> >
> 
> When you calculate the effective values, the source values used to
> calculate them may already have changed by the time you do the
> predicate.
> IOW, this proposal greatly increases the race window.

Why do you think so?

> > > > Your changelog doesn't explain why double caching the effective value is
> > > > an improvement.
> > >
> > > The improvement is to avoid getting a wrong value calculated by
> > > other reclaimers, and to avoid issues in mem_cgroup_protection() that we
> > > haven't noticed.
> >
> > This is not true in general. There is still parallel calculation done
> > and so parallel reclaimers might affect each other. Your patch only
> > makes a real difference for leaf memcgs which are the reclaim target as
> > well.
> 
> The race between parallel reclaimers fundamentally exists, and all we
> can do now is reduce the race window as far as possible.

Reducing the race window is futile. The situation changes after each
reclaim attempt. All we need is to keep the protection consistent
regardless of where the reclaim root is. That means that hierarchies
deeper in the tree cannot override the protection of those which are
higher.
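
To illustrate with hypothetical numbers (using the children_low_usage
arithmetic visible in the patch below): take A (memory.low=10G) with a
child A/B (memory.low=20G). For any reclaim running above A:

	A:   A->elow = 10G (assuming A fits within its parent's protection)
	A/B: B->elow = min(20G, A->elow * B_low_usage / children_low_usage)
	             <= A->elow = 10G

where B_low_usage stands for B's share of the low-protected usage, so B
can never appear more protected than the 10G that A passes down, no
matter which reclaimer last updated the cached values.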

> > All intermediate nodes really do not care about the cached values
> > because they do not have any pages on the LRU lists.
> >
> 
> But reading these cached values can save a lot of time compared with the
> existing code, as the existing code still calculates them even if they
> are useless.

They are not useless. Intermediate values are necessary for distributing
protection to lower-level memcgs.
Yafang Shao April 27, 2020, 11:32 a.m. UTC | #6
On Mon, Apr 27, 2020 at 7:24 PM Michal Hocko <mhocko@kernel.org> wrote:
>
> On Mon 27-04-20 19:06:52, Yafang Shao wrote:
> > On Mon, Apr 27, 2020 at 6:50 PM Michal Hocko <mhocko@kernel.org> wrote:
> > >
> > > On Mon 27-04-20 18:09:27, Yafang Shao wrote:
> > > > On Mon, Apr 27, 2020 at 5:40 PM Michal Hocko <mhocko@kernel.org> wrote:
> > > > >
> > > > > On Sat 25-04-20 11:24:18, Yafang Shao wrote:
> > > > > > Since proportional memory.{min, low} reclaim was introduced in
> > > > > > commit 9783aa9917f8 ("mm, memcg: proportional memory.{low,min} reclaim"),
> > > > > > it has proved hard to understand, and the issues it causes are even
> > > > > > harder to understand [1]. This dilemma comes from the fact that
> > > > > > proportional reclaim mixes up the memcg and the reclaim context.
> > > > > >
> > > > > > In proportional reclaim, the whole reclaim context - including both the
> > > > > > memcg to be reclaimed and the reclaimer - should be considered, rather
> > > > > > than the memcg only.
> > > > > >
> > > > > > To make it clear, a new member 'protection' is introduced in the reclaim
> > > > > > context (struct shrink_control) to replace mem_cgroup_protection(). This
> > > > >
> > > > > s@shrink_control@scan_control@
> > > > >
> > > >
> > > > Thanks for pointing this out.
> > > >
> > > > > > member is set when we check whether the memcg is protected or not.
> > > > > >
> > > > > > After this change, the issue I pointed out in [1] - a really old
> > > > > > left-over value can slow down target reclaim - is fixed, and I think it
> > > > > > could also avoid some potential races.
> > > > >
> > > > > The patch would have been much easier to review if you had only focused
> > > > > on the effective protection value caching. I really fail to see why you
> > > > > had to make mem_cgroup_protected even more convoluted with more side
> > > > > effects (e.g. sc->memcg_low_skipped). This goes directly opposite to
> > > > > what Johannes was proposing in the other email AFAICS.
> > > > >
> > > >
> > > > Sorry, I fail to see the advantage of Johannes's proposal except for
> > > > the better naming.
> > >
> > > The immediate advantage is that a predicate had better not have any
> > > side effects.
> >
> > No, it still has side effects. That is not an immediate advantage either.
> > See below:
>
> I believe you have misunderstood the proposal.
> mem_cgroup_below_{min,low,protection} won't have any side effects on the
> memcg. It would only be mem_cgroup_calculate_protection which updates
> the cached state. So there won't be any predicate with side effects.
>

Maybe I misunderstood this proposal, or both of us misunderstood it.

> > > So splitting it into a calculation part, which has clearly
> > > defined side effects, and a simple predicate that consults
> > > pre-calculated values makes a lot of sense to me.
> > >
> >
> > When you calculate the effective values, the source values used to
> > calculate them may already have changed by the time you do the
> > predicate.
> > IOW, this proposal greatly increases the race window.
>
> Why do you think so?
>

See above. I don't think it is proper to discuss it any more until we
see the code.

> > > > > Your changelog doesn't explain why double caching the effective value is
> > > > > an improvement.
> > > >
> > > > The improvement is to avoid getting a wrong value calculated by
> > > > other reclaimers, and to avoid issues in mem_cgroup_protection() that we
> > > > haven't noticed.
> > >
> > > This is not true in general. There is still parallel calculation done
> > > and so parallel reclaimers might affect each other. Your patch only
> > > makes a real difference for leaf memcgs which are the reclaim target as
> > > well.
> >
> > The race between parallel reclaimers fundamentally exists, and all we
> > can do now is reduce the race window as far as possible.
>
> Reducing the race window is futile. The situation changes after each
> reclaim attempt. All we need is to keep the protection consistent
> regardless of where the reclaim root is. That means that hierarchies
> deeper in the tree cannot override the protection of those which are
> higher.
>

That is what I have implemented in this patchset.

> > > All intermediate nodes really do not care about the cached values
> > > because they do not have any pages on the LRU lists.
> > >
> >
> > But reading these cached values can save a lot of time compared with the
> > existing code, as the existing code still calculates them even if they
> > are useless.
>
> They are not useless. Intermediate values are necessary for distributing
> protection to lower-level memcgs.
>

Patch

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index b327857a1e7e..9d5ceeba3b31 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -50,12 +50,6 @@  enum memcg_memory_event {
 	MEMCG_NR_MEMORY_EVENTS,
 };
 
-enum mem_cgroup_protection {
-	MEMCG_PROT_NONE,
-	MEMCG_PROT_LOW,
-	MEMCG_PROT_MIN,
-};
-
 struct mem_cgroup_reclaim_cookie {
 	pg_data_t *pgdat;
 	unsigned int generation;
@@ -344,19 +338,6 @@  static inline bool mem_cgroup_disabled(void)
 	return !cgroup_subsys_enabled(memory_cgrp_subsys);
 }
 
-static inline unsigned long mem_cgroup_protection(struct mem_cgroup *memcg,
-						  bool in_low_reclaim)
-{
-	if (mem_cgroup_disabled())
-		return 0;
-
-	if (in_low_reclaim)
-		return READ_ONCE(memcg->memory.emin);
-
-	return max(READ_ONCE(memcg->memory.emin),
-		   READ_ONCE(memcg->memory.elow));
-}
-
 int mem_cgroup_try_charge(struct page *page, struct mm_struct *mm,
 			  gfp_t gfp_mask, struct mem_cgroup **memcgp,
 			  bool compound);
@@ -832,12 +813,6 @@  static inline void memcg_memory_event_mm(struct mm_struct *mm,
 {
 }
 
-static inline unsigned long mem_cgroup_protection(struct mem_cgroup *memcg,
-						  bool in_low_reclaim)
-{
-	return 0;
-}
-
 static inline int mem_cgroup_try_charge(struct page *page, struct mm_struct *mm,
 					gfp_t gfp_mask,
 					struct mem_cgroup **memcgp,
diff --git a/mm/internal.h b/mm/internal.h
index a0b3bdd933b9..10c762a79c0c 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -271,6 +271,9 @@  struct scan_control {
 	 */
 	struct mem_cgroup *target_mem_cgroup;
 
+	/* Memcg protection in this reclaim context */
+	unsigned long protection;
+
 	/* Can active pages be deactivated as part of reclaim? */
 #define DEACTIVATE_ANON 1
 #define DEACTIVATE_FILE 2
@@ -338,6 +341,20 @@  struct scan_control {
 	struct reclaim_state reclaim_state;
 };
 
+#ifdef CONFIG_MEMCG
+bool mem_cgroup_protected(struct mem_cgroup *target,
+			  struct mem_cgroup *memcg,
+			  struct scan_control *sc);
+
+#else
+static inline bool mem_cgroup_protected(struct mem_cgroup *target,
+					struct mem_cgroup *memcg,
+					struct scan_control *sc)
+{
+	return false;
+}
+#endif
+
 /*
  * This function returns the order of a free page in the buddy system. In
  * general, page_zone(page)->lock must be held by the caller to prevent the
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 51dab7f2e714..f2f191898f2b 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -6372,35 +6372,30 @@  static unsigned long effective_protection(unsigned long usage,
  * WARNING: This function is not stateless! It can only be used as part
  *          of a top-down tree iteration, not for isolated queries.
  *
- * Returns one of the following:
- *   MEMCG_PROT_NONE: cgroup memory is not protected
- *   MEMCG_PROT_LOW: cgroup memory is protected as long there is
- *     an unprotected supply of reclaimable memory from other cgroups.
- *   MEMCG_PROT_MIN: cgroup memory is protected
  */
-enum mem_cgroup_protection mem_cgroup_protected(struct mem_cgroup *target,
-						struct mem_cgroup *memcg,
-						struct scan_control *sc)
+bool mem_cgroup_protected(struct mem_cgroup *target,
+			  struct mem_cgroup *memcg,
+			  struct scan_control *sc)
 {
 	unsigned long usage, parent_usage;
 	struct mem_cgroup *parent;
 
 	if (mem_cgroup_disabled())
-		return MEMCG_PROT_NONE;
+		return false;
 
 	if (!target)
 		target = root_mem_cgroup;
 	if (memcg == target)
-		return MEMCG_PROT_NONE;
+		return false;
 
 	usage = page_counter_read(&memcg->memory);
 	if (!usage)
-		return MEMCG_PROT_NONE;
+		return false;
 
 	parent = parent_mem_cgroup(memcg);
 	/* No parent means a non-hierarchical mode on v1 memcg */
 	if (!parent)
-		return MEMCG_PROT_NONE;
+		return false;
 
 	if (parent == target) {
 		memcg->memory.emin = READ_ONCE(memcg->memory.min);
@@ -6420,12 +6415,43 @@  enum mem_cgroup_protection mem_cgroup_protected(struct mem_cgroup *target,
 			atomic_long_read(&parent->memory.children_low_usage)));
 
 out:
+	/*
+	 * Hard protection.
+	 * If there is no reclaimable memory, OOM.
+	 */
 	if (usage <= memcg->memory.emin)
-		return MEMCG_PROT_MIN;
-	else if (usage <= memcg->memory.elow)
-		return MEMCG_PROT_LOW;
+		return true;
+
+	/* The protection takes effect when false is returned. */
+	if (sc->memcg_low_reclaim)
+		sc->protection = memcg->memory.emin;
 	else
-		return MEMCG_PROT_NONE;
+		sc->protection = max(memcg->memory.emin, memcg->memory.elow);
+
+	if (usage <= memcg->memory.elow) {
+		/*
+		 * Soft protection.
+		 * Respect the protection only as long as there is an
+		 * unprotected supply of reclaimable memory from other
+		 * cgroups.
+		 */
+		if (!sc->memcg_low_reclaim) {
+			sc->memcg_low_skipped = 1;
+			return true;
+		}
+
+		memcg_memory_event(memcg, MEMCG_LOW);
+
+		return false;
+	}
+
+	/*
+	 * All protection thresholds breached. We may still choose to vary
+	 * the scan pressure applied based on by how much the cgroup in
+	 * question has exceeded its protection thresholds
+	 * (see get_scan_count).
+	 */
+	return false;
 }
 
 /**
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 61c944e7f587..a81bf736ac11 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2263,8 +2263,7 @@  static void get_scan_count(struct lruvec *lruvec, struct scan_control *sc,
 		unsigned long protection;
 
 		lruvec_size = lruvec_lru_size(lruvec, lru, sc->reclaim_idx);
-		protection = mem_cgroup_protection(memcg,
-						   sc->memcg_low_reclaim);
+		protection = sc->protection;
 
 		if (protection) {
 			/*
@@ -2551,36 +2550,10 @@  static void shrink_node_memcgs(pg_data_t *pgdat, struct scan_control *sc)
 		unsigned long reclaimed;
 		unsigned long scanned;
 
-		switch (mem_cgroup_protected(target_memcg, memcg, sc)) {
-		case MEMCG_PROT_MIN:
-			/*
-			 * Hard protection.
-			 * If there is no reclaimable memory, OOM.
-			 */
+		sc->protection = 0;
+
+		if (mem_cgroup_protected(target_memcg, memcg, sc))
 			continue;
-		case MEMCG_PROT_LOW:
-			/*
-			 * Soft protection.
-			 * Respect the protection only as long as
-			 * there is an unprotected supply
-			 * of reclaimable memory from other cgroups.
-			 */
-			if (!sc->memcg_low_reclaim) {
-				sc->memcg_low_skipped = 1;
-				continue;
-			}
-			memcg_memory_event(memcg, MEMCG_LOW);
-			break;
-		case MEMCG_PROT_NONE:
-			/*
-			 * All protection thresholds breached. We may
-			 * still choose to vary the scan pressure
-			 * applied based on by how much the cgroup in
-			 * question has exceeded its protection
-			 * thresholds (see get_scan_count).
-			 */
-			break;
-		}
 
 		reclaimed = sc->nr_reclaimed;
 		scanned = sc->nr_scanned;