
[3/8] mm: vmscan: move inactive_list_is_low() swap check to the caller

Message ID 20191022144803.302233-4-hannes@cmpxchg.org (mailing list archive)
State New, archived
Series: mm: vmscan: cgroup-related cleanups

Commit Message

Johannes Weiner Oct. 22, 2019, 2:47 p.m. UTC
inactive_list_is_low() should be about one thing: checking the ratio
between the inactive and active lists. Kitchen-sink checks like the
one for swap space make the function hard to use and its call sites
hard to modify. Luckily, most callers already know the swap
situation, so it's easy to clean up.
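
For reference, the ratio check that remains boils down to roughly the
following sketch. This is a paraphrase of mm/vmscan.c around this
series, not part of the patch: the helper name is made up, PAGE_SHIFT
and int_sqrt() are the kernel's, and the real function additionally
special-cases the file list based on refault activity.

    /*
     * Paraphrase: allow the active list to outgrow the inactive list
     * by the square root of the total size in GB, e.g. 1GB -> 3:1,
     * 10GB -> 10:1.
     */
    static bool inactive_ratio_low(unsigned long inactive, unsigned long active)
    {
        unsigned long gb = (inactive + active) >> (30 - PAGE_SHIFT);
        unsigned long inactive_ratio = gb ? int_sqrt(10 * gb) : 1;

        return inactive * inactive_ratio < active;
    }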

get_scan_count() has its own, memcg-aware swap check, and doesn't even
get to the inactive_list_is_low() check on the anon list when there is
no swap space available.
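
That check, paraphrased from get_scan_count() in mainline at the
time, looks roughly like this; once it fires, the anon scan targets
stay zero:

    if (!sc->may_swap || mem_cgroup_get_nr_swap_pages(memcg) <= 0) {
        scan_balance = SCAN_FILE;    /* leave the anon lists alone */
        goto out;
    }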

shrink_list() is called on the results of get_scan_count(), so that
check is redundant too.
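
For reference, shrink_list() at the time dispatched roughly like
this, so with zero anon scan targets the anon half is simply never
reached:

    static unsigned long shrink_list(enum lru_list lru, unsigned long nr_to_scan,
                                     struct lruvec *lruvec, struct scan_control *sc)
    {
        if (is_active_lru(lru)) {
            if (inactive_list_is_low(lruvec, is_file_lru(lru), sc, true))
                shrink_active_list(nr_to_scan, lruvec, sc, lru);
            return 0;
        }

        return shrink_inactive_list(nr_to_scan, lruvec, sc, lru);
    }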

age_active_anon() has its own total_swap_pages check right before it
checks the list proportions.
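
Again roughly, from mainline at the time:

    static void age_active_anon(struct pglist_data *pgdat,
                                struct scan_control *sc)
    {
        struct mem_cgroup *memcg;

        if (!total_swap_pages)
            return;

        /*
         * ... walk the memcg hierarchy and, per memcg, check
         * inactive_list_is_low() before shrink_active_list() ...
         */
    }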

The shrink_node_memcg() site is the only one that doesn't do its own
swap check. Add it there.

Then delete the swap check from inactive_list_is_low().

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
---
 mm/vmscan.c | 9 +--------
 1 file changed, 1 insertion(+), 8 deletions(-)

Comments

Roman Gushchin Oct. 22, 2019, 7:28 p.m. UTC | #1
On Tue, Oct 22, 2019 at 10:47:58AM -0400, Johannes Weiner wrote:
> inactive_list_is_low() should be about one thing: checking the ratio
> between the inactive and active lists. Kitchen-sink checks like the
> one for swap space make the function hard to use and its call sites
> hard to modify. Luckily, most callers already know the swap
> situation, so it's easy to clean up.
> 
> get_scan_count() has its own, memcg-aware swap check, and doesn't even
> get to the inactive_list_is_low() check on the anon list when there is
> no swap space available.
> 
> shrink_list() is called on the results of get_scan_count(), so that
> check is redundant too.
> 
> age_active_anon() has its own total_swap_pages check right before it
> checks the list proportions.
> 
> The shrink_node_memcg() site is the only one that doesn't do its own
> swap check. Add it there.
> 
> Then delete the swap check from inactive_list_is_low().
> 
> Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
> ---
>  mm/vmscan.c | 9 +--------
>  1 file changed, 1 insertion(+), 8 deletions(-)
> 
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index be3c22c274c1..622b77488144 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -2226,13 +2226,6 @@ static bool inactive_list_is_low(struct lruvec *lruvec, bool file,
>  	unsigned long refaults;
>  	unsigned long gb;
>  
> -	/*
> -	 * If we don't have swap space, anonymous page deactivation
> -	 * is pointless.
> -	 */
> -	if (!file && !total_swap_pages)
> -		return false;
> -
>  	inactive = lruvec_lru_size(lruvec, inactive_lru, sc->reclaim_idx);
>  	active = lruvec_lru_size(lruvec, active_lru, sc->reclaim_idx);
>  
> @@ -2653,7 +2646,7 @@ static void shrink_node_memcg(struct pglist_data *pgdat, struct mem_cgroup *memc
>  	 * Even if we did not try to evict anon pages at all, we want to
>  	 * rebalance the anon lru active/inactive ratio.
>  	 */
> -	if (inactive_list_is_low(lruvec, false, sc, true))
> +	if (total_swap_pages && inactive_list_is_low(lruvec, false, sc, true))
>  		shrink_active_list(SWAP_CLUSTER_MAX, lruvec,
>  				   sc, LRU_ACTIVE_ANON);
>  }
> -- 
> 2.23.0
> 
> 

Reviewed-by: Roman Gushchin <guro@fb.com>

Thanks!

Michal Hocko Oct. 23, 2019, 2:06 p.m. UTC | #2
On Tue 22-10-19 10:47:58, Johannes Weiner wrote:
> inactive_list_is_low() should be about one thing: checking the ratio
> between the inactive and active lists. Kitchen-sink checks like the
> one for swap space make the function hard to use and its call sites
> hard to modify. Luckily, most callers already know the swap
> situation, so it's easy to clean up.
> 
> get_scan_count() has its own, memcg-aware swap check, and doesn't even
> get to the inactive_list_is_low() check on the anon list when there is
> no swap space available.
> 
> shrink_list() is called on the results of get_scan_count(), so that
> check is redundant too.
> 
> age_active_anon() has its own total_swap_pages check right before it
> checks the list proportions.
> 
> The shrink_node_memcg() site is the only one that doesn't do its own
> swap check. Add it there.
> 
> Then delete the swap check from inactive_list_is_low().
> 
> Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>

OK, makes sense to me.
Acked-by: Michal Hocko <mhocko@suse.com>

> ---
>  mm/vmscan.c | 9 +--------
>  1 file changed, 1 insertion(+), 8 deletions(-)
> 
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index be3c22c274c1..622b77488144 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -2226,13 +2226,6 @@ static bool inactive_list_is_low(struct lruvec *lruvec, bool file,
>  	unsigned long refaults;
>  	unsigned long gb;
>  
> -	/*
> -	 * If we don't have swap space, anonymous page deactivation
> -	 * is pointless.
> -	 */
> -	if (!file && !total_swap_pages)
> -		return false;
> -
>  	inactive = lruvec_lru_size(lruvec, inactive_lru, sc->reclaim_idx);
>  	active = lruvec_lru_size(lruvec, active_lru, sc->reclaim_idx);
>  
> @@ -2653,7 +2646,7 @@ static void shrink_node_memcg(struct pglist_data *pgdat, struct mem_cgroup *memc
>  	 * Even if we did not try to evict anon pages at all, we want to
>  	 * rebalance the anon lru active/inactive ratio.
>  	 */
> -	if (inactive_list_is_low(lruvec, false, sc, true))
> +	if (total_swap_pages && inactive_list_is_low(lruvec, false, sc, true))
>  		shrink_active_list(SWAP_CLUSTER_MAX, lruvec,
>  				   sc, LRU_ACTIVE_ANON);
>  }
> -- 
> 2.23.0

Patch

diff --git a/mm/vmscan.c b/mm/vmscan.c
index be3c22c274c1..622b77488144 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2226,13 +2226,6 @@ static bool inactive_list_is_low(struct lruvec *lruvec, bool file,
 	unsigned long refaults;
 	unsigned long gb;
 
-	/*
-	 * If we don't have swap space, anonymous page deactivation
-	 * is pointless.
-	 */
-	if (!file && !total_swap_pages)
-		return false;
-
 	inactive = lruvec_lru_size(lruvec, inactive_lru, sc->reclaim_idx);
 	active = lruvec_lru_size(lruvec, active_lru, sc->reclaim_idx);
 
@@ -2653,7 +2646,7 @@ static void shrink_node_memcg(struct pglist_data *pgdat, struct mem_cgroup *memc
 	 * Even if we did not try to evict anon pages at all, we want to
 	 * rebalance the anon lru active/inactive ratio.
 	 */
-	if (inactive_list_is_low(lruvec, false, sc, true))
+	if (total_swap_pages && inactive_list_is_low(lruvec, false, sc, true))
 		shrink_active_list(SWAP_CLUSTER_MAX, lruvec,
 				   sc, LRU_ACTIVE_ANON);
 }