[3/8] mm/vmscan: Throttle reclaim when no progress is being made

Message ID 20211008135332.19567-4-mgorman@techsingularity.net (mailing list archive)
State New, archived
Series Remove dependency on congestion_wait in mm/

Commit Message

Mel Gorman Oct. 8, 2021, 1:53 p.m. UTC
Memcg reclaim throttles on congestion if no reclaim progress is made.
This makes little sense; the lack of progress might be due to writeback
or a host of other factors.

For !memcg reclaim, it's messy. Direct reclaim is primarily throttled
in the page allocator if it is failing to make progress. Kswapd
throttles if too many pages are under writeback and marked for
immediate reclaim.

This patch explicitly throttles if reclaim is failing to make progress.

Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
---
 include/linux/mmzone.h        |  1 +
 include/trace/events/vmscan.h |  4 +++-
 mm/memcontrol.c               | 10 +--------
 mm/vmscan.c                   | 38 +++++++++++++++++++++++++++++++++++
 4 files changed, 43 insertions(+), 10 deletions(-)

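For context, reclaim_throttle() and the per-pgdat reclaim_wait[] waitqueues
used below are introduced by an earlier patch in this series, not here. A
minimal sketch of the helper's shape, inferred from the callers in this
patch; the series' actual helper differs in detail (it also accounts the
stall and emits a tracepoint):

/* Sketch only; not the series' real implementation. */
void reclaim_throttle(pg_data_t *pgdat, enum vmscan_throttle_state reason,
		      long timeout)
{
	wait_queue_head_t *wqh = &pgdat->reclaim_wait[reason];
	DEFINE_WAIT(wait);

	/*
	 * Sleep until a waker observes reclaim progress (e.g.
	 * consider_reclaim_throttle() doing wake_up_all()) or the
	 * timeout (HZ/10 at the call sites below) expires.
	 */
	prepare_to_wait(wqh, &wait, TASK_UNINTERRUPTIBLE);
	schedule_timeout(timeout);
	finish_wait(wqh, &wait);
}
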
Comments

Vlastimil Babka Oct. 14, 2021, 12:31 p.m. UTC | #1
On 10/8/21 15:53, Mel Gorman wrote:
> Memcg reclaim throttles on congestion if no reclaim progress is made.
> This makes little sense; the lack of progress might be due to writeback
> or a host of other factors.
> 
> For !memcg reclaim, it's messy. Direct reclaim is primarily throttled
> in the page allocator if it is failing to make progress. Kswapd
> throttles if too many pages are under writeback and marked for
> immediate reclaim.
> 
> This patch explicitly throttles if reclaim is failing to make progress.
> 
> Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
...
> @@ -3769,6 +3797,16 @@ unsigned long try_to_free_mem_cgroup_pages(struct mem_cgroup *memcg,
>  	trace_mm_vmscan_memcg_reclaim_end(nr_reclaimed);
>  	set_task_reclaim_state(current, NULL);
>  
> +	if (!nr_reclaimed) {
> +		struct zoneref *z;
> +		pg_data_t *pgdat;
> +
> +		z = first_zones_zonelist(zonelist, sc.reclaim_idx, sc.nodemask);
> +		pgdat = zonelist_zone(z)->zone_pgdat;
> +
> +		reclaim_throttle(pgdat, VMSCAN_THROTTLE_NOPROGRESS, HZ/10);
> +	}

Is this necessary? AFAICS here we just returned from:

do_try_to_free_pages()
  shrink_zones()
   for_each_zone()...
     consider_reclaim_throttle()

Which already throttles when needed, using the appropriate pgdat, while
here we have to somewhat awkwardly assume the preferred one.

> +
>  	return nr_reclaimed;
>  }
>  #endif
>
Mel Gorman Oct. 14, 2021, 1:03 p.m. UTC | #2
On Thu, Oct 14, 2021 at 02:31:17PM +0200, Vlastimil Babka wrote:
> On 10/8/21 15:53, Mel Gorman wrote:
> > Memcg reclaim throttles on congestion if no reclaim progress is made.
> > This makes little sense; the lack of progress might be due to writeback
> > or a host of other factors.
> > 
> > For !memcg reclaim, it's messy. Direct reclaim is primarily throttled
> > in the page allocator if it is failing to make progress. Kswapd
> > throttles if too many pages are under writeback and marked for
> > immediate reclaim.
> > 
> > This patch explicitly throttles if reclaim is failing to make progress.
> > 
> > Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
> ...
> > @@ -3769,6 +3797,16 @@ unsigned long try_to_free_mem_cgroup_pages(struct mem_cgroup *memcg,
> >  	trace_mm_vmscan_memcg_reclaim_end(nr_reclaimed);
> >  	set_task_reclaim_state(current, NULL);
> >  
> > +	if (!nr_reclaimed) {
> > +		struct zoneref *z;
> > +		pg_data_t *pgdat;
> > +
> > +		z = first_zones_zonelist(zonelist, sc.reclaim_idx, sc.nodemask);
> > +		pgdat = zonelist_zone(z)->zone_pgdat;
> > +
> > +		reclaim_throttle(pgdat, VMSCAN_THROTTLE_NOPROGRESS, HZ/10);
> > +	}
> 
> Is this necessary? AFAICS here we just returned from:
> 
> do_try_to_free_pages()
>   shrink_zones()
>    for_each_zone()...
>      consider_reclaim_throttle()
> 
> Which already throttles when needed, using the appropriate pgdat, while
> here we have to somewhat awkwardly assume the preferred one.
> 

Yes, you're right: consider_reclaim_throttle() not only throttles on the
appropriate pgdat but also takes priority into account.

Well spotted!
Vlastimil Babka Oct. 14, 2021, 3:45 p.m. UTC | #3
On 10/14/21 15:03, Mel Gorman wrote:
> On Thu, Oct 14, 2021 at 02:31:17PM +0200, Vlastimil Babka wrote:
>> On 10/8/21 15:53, Mel Gorman wrote:
>> > Memcg reclaim throttles on congestion if no reclaim progress is made.
>> > This makes little sense; the lack of progress might be due to writeback
>> > or a host of other factors.
>> > 
>> > For !memcg reclaim, it's messy. Direct reclaim is primarily throttled
>> > in the page allocator if it is failing to make progress. Kswapd
>> > throttles if too many pages are under writeback and marked for
>> > immediate reclaim.
>> > 
>> > This patch explicitly throttles if reclaim is failing to make progress.
>> > 
>> > Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
>> ...
>> > @@ -3769,6 +3797,16 @@ unsigned long try_to_free_mem_cgroup_pages(struct mem_cgroup *memcg,
>> >  	trace_mm_vmscan_memcg_reclaim_end(nr_reclaimed);
>> >  	set_task_reclaim_state(current, NULL);
>> >  
>> > +	if (!nr_reclaimed) {
>> > +		struct zoneref *z;
>> > +		pg_data_t *pgdat;
>> > +
>> > +		z = first_zones_zonelist(zonelist, sc.reclaim_idx, sc.nodemask);
>> > +		pgdat = zonelist_zone(z)->zone_pgdat;
>> > +
>> > +		reclaim_throttle(pgdat, VMSCAN_THROTTLE_NOPROGRESS, HZ/10);
>> > +	}
>> 
>> Is this necessary? AFAICS here we just returned from:
>> 
>> do_try_to_free_pages()
>>   shrink_zones()
>>    for_each_zone()...
>>      consider_reclaim_throttle()
>> 
>> Which already throttles when needed, using the appropriate pgdat, while
>> here we have to somewhat awkwardly assume the preferred one.
>> 
> 
> Yes, you're right: consider_reclaim_throttle() not only throttles on the
> appropriate pgdat but also takes priority into account.
> 
> Well spotted!

So with that part removed
Acked-by: Vlastimil Babka <vbabka@suse.cz>

Thanks!
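
For reference, dropping that hunk leaves the tail of
try_to_free_mem_cgroup_pages() as it was before this patch; a sketch of
the result, assuming the respin changes nothing else in the function:

	trace_mm_vmscan_memcg_reclaim_end(nr_reclaimed);
	set_task_reclaim_state(current, NULL);

	return nr_reclaimed;
}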

Patch

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index ca65d6a64bdd..7c08cc91d526 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -276,6 +276,7 @@  enum lru_list {
 enum vmscan_throttle_state {
 	VMSCAN_THROTTLE_WRITEBACK,
 	VMSCAN_THROTTLE_ISOLATED,
+	VMSCAN_THROTTLE_NOPROGRESS,
 	NR_VMSCAN_THROTTLE,
 };
 
diff --git a/include/trace/events/vmscan.h b/include/trace/events/vmscan.h
index d4905bd9e9c4..f25a6149d3ba 100644
--- a/include/trace/events/vmscan.h
+++ b/include/trace/events/vmscan.h
@@ -29,11 +29,13 @@ 
 
 #define _VMSCAN_THROTTLE_WRITEBACK	(1 << VMSCAN_THROTTLE_WRITEBACK)
 #define _VMSCAN_THROTTLE_ISOLATED	(1 << VMSCAN_THROTTLE_ISOLATED)
+#define _VMSCAN_THROTTLE_NOPROGRESS	(1 << VMSCAN_THROTTLE_NOPROGRESS)
 
 #define show_throttle_flags(flags)						\
 	(flags) ? __print_flags(flags, "|",					\
 		{_VMSCAN_THROTTLE_WRITEBACK,	"VMSCAN_THROTTLE_WRITEBACK"},	\
-		{_VMSCAN_THROTTLE_ISOLATED,	"VMSCAN_THROTTLE_ISOLATED"}	\
+		{_VMSCAN_THROTTLE_ISOLATED,	"VMSCAN_THROTTLE_ISOLATED"},	\
+		{_VMSCAN_THROTTLE_NOPROGRESS,	"VMSCAN_THROTTLE_NOPROGRESS"}	\
 		) : "VMSCAN_THROTTLE_NONE"
 
 
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 6da5020a8656..8b33152c9b85 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3465,19 +3465,11 @@  static int mem_cgroup_force_empty(struct mem_cgroup *memcg)
 
 	/* try to free all pages in this cgroup */
 	while (nr_retries && page_counter_read(&memcg->memory)) {
-		int progress;
-
 		if (signal_pending(current))
 			return -EINTR;
 
-		progress = try_to_free_mem_cgroup_pages(memcg, 1,
-							GFP_KERNEL, true);
-		if (!progress) {
+		if (!try_to_free_mem_cgroup_pages(memcg, 1, GFP_KERNEL, true))
 			nr_retries--;
-			/* maybe some writeback is necessary */
-			congestion_wait(BLK_RW_ASYNC, HZ/10);
-		}
-
 	}
 
 	return 0;
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 9ce4195d4123..cdebfc618179 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -3311,6 +3311,33 @@  static inline bool compaction_ready(struct zone *zone, struct scan_control *sc)
 	return zone_watermark_ok_safe(zone, 0, watermark, sc->reclaim_idx);
 }
 
+static void consider_reclaim_throttle(pg_data_t *pgdat, struct scan_control *sc)
+{
+	/* If reclaim is making progress, wake any throttled tasks. */
+	if (sc->nr_reclaimed) {
+		wait_queue_head_t *wqh;
+
+		wqh = &pgdat->reclaim_wait[VMSCAN_THROTTLE_NOPROGRESS];
+		if (waitqueue_active(wqh))
+			wake_up_all(wqh);
+
+		return;
+	}
+
+	/*
+	 * Do not throttle kswapd on NOPROGRESS as it will throttle on
+	 * VMSCAN_THROTTLE_WRITEBACK if there are too many pages under
+	 * writeback and marked for immediate reclaim at the tail of
+	 * the LRU.
+	 */
+	if (current_is_kswapd())
+		return;
+
+	/* Throttle if making no progress at high priorities. */
+	if (sc->priority < DEF_PRIORITY - 2)
+		reclaim_throttle(pgdat, VMSCAN_THROTTLE_NOPROGRESS, HZ/10);
+}
+
 /*
  * This is the direct reclaim path, for page-allocating processes.  We only
  * try to reclaim pages from zones which will satisfy the caller's allocation
@@ -3395,6 +3422,7 @@  static void shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
 			continue;
 		last_pgdat = zone->zone_pgdat;
 		shrink_node(zone->zone_pgdat, sc);
+		consider_reclaim_throttle(zone->zone_pgdat, sc);
 	}
 
 	/*
@@ -3769,6 +3797,16 @@  unsigned long try_to_free_mem_cgroup_pages(struct mem_cgroup *memcg,
 	trace_mm_vmscan_memcg_reclaim_end(nr_reclaimed);
 	set_task_reclaim_state(current, NULL);
 
+	if (!nr_reclaimed) {
+		struct zoneref *z;
+		pg_data_t *pgdat;
+
+		z = first_zones_zonelist(zonelist, sc.reclaim_idx, sc.nodemask);
+		pgdat = zonelist_zone(z)->zone_pgdat;
+
+		reclaim_throttle(pgdat, VMSCAN_THROTTLE_NOPROGRESS, HZ/10);
+	}
+
 	return nr_reclaimed;
 }
 #endif