[v2] mm: zswap: remove nr_zswap_stored atomic

Message ID 20240322001001.1562517-1-yosryahmed@google.com (mailing list archive)
State New
Series [v2] mm: zswap: remove nr_zswap_stored atomic

Commit Message

Yosry Ahmed March 22, 2024, 12:10 a.m. UTC
nr_stored was introduced by commit b5ba474f3f51 ("zswap: shrink zswap
pool based on memory pressure") as a per zswap_pool counter of the
number of stored pages that are not same-filled pages. It is used in
zswap_shrinker_count() to scale the number of freeable compressed pages
by the compression ratio.  That is, to reduce the amount of writeback
from zswap with higher compression ratios as the ROI from IO diminishes.
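
For illustration (hypothetical numbers, not taken from the patch), a
minimal userspace sketch of that scaling; the kernel uses the
mult_frac() helper, open-coded here as plain division:

  #include <stdio.h>

  int main(void)
  {
          unsigned long nr_backing  = 250;   /* pool pages actually used */
          unsigned long nr_stored   = 1000;  /* pages stored in zswap */
          unsigned long nr_freeable = 400;   /* entries on the zswap LRU */

          /* 4:1 compression ratio: report only a quarter as freeable  */
          /* mult_frac(400, 250, 1000) == 400 * 250 / 1000 == 100      */
          printf("%lu\n", nr_freeable * nr_backing / nr_stored);
          return 0;
  }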

Later on, commit bf9b7df23cb3 ("mm/zswap: global lru and shrinker shared
by all zswap_pools") made the shrinker global (not per zswap_pool), and
replaced nr_stored with nr_zswap_stored (initially introduced as
zswap.nr_stored), which is now a global counter.

The counter is now awfully close to zswap_stored_pages. The only
difference is that the latter also includes same-filled pages. Also,
when memcgs are enabled, we use memcg_page_state(memcg, MEMCG_ZSWAPPED),
which includes same-filled pages anyway (i.e.  equivalent to
zswap_stored_pages).

Use zswap_stored_pages instead in zswap_shrinker_count() to keep things
consistent whether memcgs are enabled or not, and add a comment about
the number of freeable pages possibly being scaled down more than it
should if we have lots of same-filled pages (i.e. inflated compression
ratio).
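
To make that caveat concrete, a hedged sketch with made-up numbers:
zswap_stored_pages also counts same-filled pages, which occupy no zpool
space and are never added to the LRU, so the denominator grows while
nr_backing and nr_freeable do not, and the reported count shrinks a bit
more than with the old counter:

  #include <stdio.h>

  int main(void)
  {
          unsigned long nr_backing     = 200;  /* pool pages used by compressed entries */
          unsigned long nr_compressed  = 600;  /* stored pages that are not same-filled */
          unsigned long nr_same_filled = 400;  /* same-filled stored pages */
          unsigned long nr_freeable    = 600;  /* LRU entries (compressed only) */

          /* old nr_zswap_stored denominator: 600 * 200 / 600 == 200 */
          printf("old: %lu\n", nr_freeable * nr_backing / nr_compressed);

          /* zswap_stored_pages denominator: 600 * 200 / 1000 == 120 */
          printf("new: %lu\n",
                 nr_freeable * nr_backing / (nr_compressed + nr_same_filled));
          return 0;
  }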

Remove nr_zswap_stored and one atomic operation in the store and free
paths.

Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
Reviewed-by: Nhat Pham <nphamcs@gmail.com>
---
 mm/zswap.c | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

Comments

Chengming Zhou March 22, 2024, 2:46 a.m. UTC | #1
On 2024/3/22 08:10, Yosry Ahmed wrote:
> nr_stored was introduced by commit b5ba474f3f51 ("zswap: shrink zswap
> pool based on memory pressure") as a per zswap_pool counter of the
> number of stored pages that are not same-filled pages. It is used in
> zswap_shrinker_count() to scale the number of freeable compressed pages
> by the compression ratio.  That is, to reduce the amount of writeback
> from zswap with higher compression ratios as the ROI from IO diminishes.
> 
> Later on, commit bf9b7df23cb3 ("mm/zswap: global lru and shrinker shared
> by all zswap_pools") made the shrinker global (not per zswap_pool), and
> replaced nr_stored with nr_zswap_stored (initially introduced as
> zswap.nr_stored), which is now a global counter.
> 
> The counter is now awfully close to zswap_stored_pages. The only
> difference is that the latter also includes same-filled pages. Also,
> when memcgs are enabled, we use memcg_page_state(memcg, MEMCG_ZSWAPPED),
> which includes same-filled pages anyway (i.e.  equivalent to
> zswap_stored_pages).
> 
> Use zswap_stored_pages instead in zswap_shrinker_count() to keep things
> consistent whether memcgs are enabled or not, and add a comment about
> the number of freeable pages possibly being scaled down more than it
> should if we have lots of same-filled pages (i.e. inflated compression
> ratio).
> 
> Remove nr_zswap_stored and one atomic operation in the store and free
> paths.
> 
> Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
> Reviewed-by: Nhat Pham <nphamcs@gmail.com>

Reviewed-by: Chengming Zhou <chengming.zhou@linux.dev>

Thanks.

Johannes Weiner March 22, 2024, 3:21 a.m. UTC | #2
On Fri, Mar 22, 2024 at 12:10:01AM +0000, Yosry Ahmed wrote:
> nr_stored was introduced by commit b5ba474f3f51 ("zswap: shrink zswap
> pool based on memory pressure") as a per zswap_pool counter of the
> number of stored pages that are not same-filled pages. It is used in
> zswap_shrinker_count() to scale the number of freeable compressed pages
> by the compression ratio.  That is, to reduce the amount of writeback
> from zswap with higher compression ratios as the ROI from IO diminishes.
> 
> Later on, commit bf9b7df23cb3 ("mm/zswap: global lru and shrinker shared
> by all zswap_pools") made the shrinker global (not per zswap_pool), and
> replaced nr_stored with nr_zswap_stored (initially introduced as
> zswap.nr_stored), which is now a global counter.
> 
> The counter is now awfully close to zswap_stored_pages. The only
> difference is that the latter also includes same-filled pages. Also,
> when memcgs are enabled, we use memcg_page_state(memcg, MEMCG_ZSWAPPED),
> which includes same-filled pages anyway (i.e.  equivalent to
> zswap_stored_pages).
> 
> Use zswap_stored_pages instead in zswap_shrinker_count() to keep things
> consistent whether memcgs are enabled or not, and add a comment about
> the number of freeable pages possibly being scaled down more than it
> should if we have lots of same-filled pages (i.e. inflated compression
> ratio).
> 
> Remove nr_zswap_stored and one atomic operation in the store and free
> paths.
> 
> Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
> Reviewed-by: Nhat Pham <nphamcs@gmail.com>

Acked-by: Johannes Weiner <hannes@cmpxchg.org>

Patch

diff --git a/mm/zswap.c b/mm/zswap.c
index b31c977f53e9c..1a79f99606cef 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -181,8 +181,6 @@  struct zswap_pool {
 
 /* Global LRU lists shared by all zswap pools. */
 static struct list_lru zswap_list_lru;
-/* counter of pages stored in all zswap pools. */
-static atomic_t zswap_nr_stored = ATOMIC_INIT(0);
 
 /* The lock protects zswap_next_shrink updates. */
 static DEFINE_SPINLOCK(zswap_shrink_lock);
@@ -885,7 +883,6 @@  static void zswap_entry_free(struct zswap_entry *entry)
 	else {
 		zswap_lru_del(&zswap_list_lru, entry);
 		zpool_free(zswap_find_zpool(entry), entry->handle);
-		atomic_dec(&zswap_nr_stored);
 		zswap_pool_put(entry->pool);
 	}
 	if (entry->objcg) {
@@ -1310,7 +1307,7 @@  static unsigned long zswap_shrinker_count(struct shrinker *shrinker,
 #else
 	/* use pool stats instead of memcg stats */
 	nr_backing = zswap_total_pages();
-	nr_stored = atomic_read(&zswap_nr_stored);
+	nr_stored = atomic_read(&zswap_stored_pages);
 #endif
 
 	if (!nr_stored)
@@ -1330,6 +1327,11 @@  static unsigned long zswap_shrinker_count(struct shrinker *shrinker,
 	 * This ensures that the better zswap compresses memory, the fewer
 	 * pages we will evict to swap (as it will otherwise incur IO for
 	 * relatively small memory saving).
+	 *
+	 * The memory saving factor calculated here takes same-filled pages into
+	 * account, but those are not freeable since they almost occupy no
+	 * space. Hence, we may scale nr_freeable down a little bit more than we
+	 * should if we have a lot of same-filled pages.
 	 */
 	return mult_frac(nr_freeable, nr_backing, nr_stored);
 }
@@ -1575,7 +1577,6 @@  bool zswap_store(struct folio *folio)
 	if (entry->length) {
 		INIT_LIST_HEAD(&entry->lru);
 		zswap_lru_add(&zswap_list_lru, entry);
-		atomic_inc(&zswap_nr_stored);
 	}
 	spin_unlock(&tree->lock);
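
For reference, the mult_frac() used in the return statement above
computes x * numer / denom while limiting intermediate overflow. A
minimal userspace approximation of its semantics (a sketch, not the
kernel macro):

  #include <stdio.h>

  static unsigned long mult_frac_sketch(unsigned long x, unsigned long numer,
                                        unsigned long denom)
  {
          /* split x to keep the intermediate products small */
          unsigned long quot = x / denom;
          unsigned long rem  = x % denom;

          return quot * numer + rem * numer / denom;
  }

  int main(void)
  {
          /* the shrinker's scaling: nr_freeable * nr_backing / nr_stored */
          printf("%lu\n", mult_frac_sketch(400, 250, 1000));  /* prints 100 */
          return 0;
  }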