
[v3,mm-hotfixes] mm/zswap: fix inconsistency when zswap_store_page() fails

Message ID 20250129100844.2935-1-42.hyeyoo@gmail.com (mailing list archive)
State New
Series [v3,mm-hotfixes] mm/zswap: fix inconsistency when zswap_store_page() fails

Commit Message

Hyeonggon Yoo Jan. 29, 2025, 10:08 a.m. UTC
Commit b7c0ccdfbafd ("mm: zswap: support large folios in zswap_store()")
skips charging any zswap entries when it fails to zswap the entire
folio.

However, when some base pages are zswapped but zswapping the entire
folio fails, the operation is rolled back. When freeing the zswap
entries for those pages, zswap_entry_free() uncharges entries that were
never charged, causing zswap charging to become inconsistent.

This inconsistency triggers two warnings with the following steps:
  # On a machine with 64GiB of RAM and 36GiB of zswap
  $ stress-ng --bigheap 2 # wait until the OOM-killer kills stress-ng
  $ sudo reboot

  The two warnings are:
    in mm/memcontrol.c:163, function obj_cgroup_release():
      WARN_ON_ONCE(nr_bytes & (PAGE_SIZE - 1));

    in mm/page_counter.c:60, function page_counter_cancel():
      if (WARN_ONCE(new < 0, "page_counter underflow: %ld nr_pages=%lu\n",
	  new, nr_pages))

zswap_stored_pages also becomes inconsistent in the same way.

As suggested by Kanchana, increment zswap_stored_pages and charge zswap
entries within zswap_store_page() when it succeeds. This way,
zswap_entry_free() will decrement the counter and uncharge the entries
when zswapping the entire folio fails.

While this could potentially be optimized by batching the objcg charging
and the counter updates, let's focus on fixing the bug for now and leave
the optimization for later, after some evaluation.

After resolving the inconsistency, the warnings disappear.

Fixes: b7c0ccdfbafd ("mm: zswap: support large folios in zswap_store()")
Cc: stable@vger.kernel.org
Co-developed-by: Kanchana P Sridhar <kanchana.p.sridhar@intel.com>
Signed-off-by: Kanchana P Sridhar <kanchana.p.sridhar@intel.com>
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---

v2 -> v3:
  - Addressed Kanchana's feedback:
    - Fixed the inconsistency in zswap_stored_pages
    - objcg charging and the zswap_stored_pages increment are now done
      within zswap_store_page(), one page at a time

 mm/zswap.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

Comments

Yosry Ahmed Jan. 29, 2025, 3:52 p.m. UTC | #1
On Wed, Jan 29, 2025 at 07:08:44PM +0900, Hyeonggon Yoo wrote:
> Commit b7c0ccdfbafd ("mm: zswap: support large folios in zswap_store()")
> skips charging any zswap entries when it fails to zswap the entire
> folio.
> 
> However, when some base pages are zswapped but zswapping the entire
> folio fails, the operation is rolled back. When freeing the zswap
> entries for those pages, zswap_entry_free() uncharges entries that were
> never charged, causing zswap charging to become inconsistent.
> 
> This inconsistency triggers two warnings with the following steps:
>   # On a machine with 64GiB of RAM and 36GiB of zswap
>   $ stress-ng --bigheap 2 # wait until the OOM-killer kills stress-ng
>   $ sudo reboot
> 
>   The two warnings are:
>     in mm/memcontrol.c:163, function obj_cgroup_release():
>       WARN_ON_ONCE(nr_bytes & (PAGE_SIZE - 1));
> 
>     in mm/page_counter.c:60, function page_counter_cancel():
>       if (WARN_ONCE(new < 0, "page_counter underflow: %ld nr_pages=%lu\n",
> 	  new, nr_pages))
> 
> zswap_stored_pages also becomes inconsistent in the same way.
> 
> As suggested by Kanchana, increment zswap_stored_pages and charge zswap
> entries within zswap_store_page() when it succeeds. This way,
> zswap_entry_free() will decrement the counter and uncharge the entries
> when zswapping the entire folio fails.
> 
> While this could potentially be optimized by batching the objcg charging
> and the counter updates, let's focus on fixing the bug for now and leave
> the optimization for later, after some evaluation.
> 
> After resolving the inconsistency, the warnings disappear.
> 
> Fixes: b7c0ccdfbafd ("mm: zswap: support large folios in zswap_store()")
> Cc: stable@vger.kernel.org
> Co-developed-by: Kanchana P Sridhar <kanchana.p.sridhar@intel.com>
> Signed-off-by: Kanchana P Sridhar <kanchana.p.sridhar@intel.com>
> Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>

I have a few nits, but generally:

Acked-by: Yosry Ahmed <yosry.ahmed@linux.dev>

> ---
> 
> v2 -> v3:
>   - Addressed Kanchana's feedback:
>     - Fixed the inconsistency in zswap_stored_pages
>     - objcg charging and the zswap_stored_pages increment are now done
>       within zswap_store_page(), one page at a time
> 
>  mm/zswap.c | 10 ++++------
>  1 file changed, 4 insertions(+), 6 deletions(-)
> 
> diff --git a/mm/zswap.c b/mm/zswap.c
> index 6504174fbc6a..f0bd962bffd5 100644
> --- a/mm/zswap.c
> +++ b/mm/zswap.c
> @@ -1504,11 +1504,14 @@ static ssize_t zswap_store_page(struct page *page,
>  	entry->pool = pool;
>  	entry->swpentry = page_swpentry;
>  	entry->objcg = objcg;
> +	if (objcg)
> +		obj_cgroup_charge_zswap(objcg, entry->length);

nit: This can be moved to the existing if (objcg) check where we call
obj_cgroup_get(). At that point there shouldn't be a possibility of
failure. If you want to keep it here to make it obvious that we only
charge when we set entry->objcg that's fine, but we can probably move
obj_cgroup_get() here as well in this case.
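
Roughly, that first option would look like the sketch below. This is
only an illustration of the suggested placement; the surrounding
refcount block in zswap_store_page() is paraphrased, not quoted from
the tree:

	/* After the tree store has succeeded, failure is no longer
	 * possible: take the refs and charge in one place.
	 */
	zswap_pool_get(pool);
	if (objcg) {
		obj_cgroup_get(objcg);
		obj_cgroup_charge_zswap(objcg, entry->length);
	}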

>  	entry->referenced = true;
>  	if (entry->length) {
>  		INIT_LIST_HEAD(&entry->lru);
>  		zswap_lru_add(&zswap_list_lru, entry);
>  	}
> +	atomic_long_inc(&zswap_stored_pages);

nit: If you keep the charging after setting entry->objcg because that's
when the freeing path will uncharge, then perhaps you want to move this
after the tree store is successful, because at that point the freeing
path will decrement the counter.
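
Sketched out (again only an illustration, with the surrounding xarray
store paraphrased and the error accounting omitted), that placement
would be something like:

	old = xa_store(swap_zswap_tree(page_swpentry),
		       swp_offset(page_swpentry), entry, GFP_KERNEL);
	if (xa_is_err(old))
		goto store_failed;

	/* The entry is now reachable from the tree, so zswap_entry_free()
	 * will decrement the counter from here on; increment it here
	 * rather than at the end of the function.
	 */
	atomic_long_inc(&zswap_stored_pages);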

>  	return entry->length;
>  
> @@ -1526,7 +1529,6 @@ bool zswap_store(struct folio *folio)
>  	struct obj_cgroup *objcg = NULL;
>  	struct mem_cgroup *memcg = NULL;
>  	struct zswap_pool *pool;
> -	size_t compressed_bytes = 0;
>  	bool ret = false;
>  	long index;
>  
> @@ -1569,15 +1571,11 @@ bool zswap_store(struct folio *folio)
>  		bytes = zswap_store_page(page, objcg, pool);
>  		if (bytes < 0)
>  			goto put_pool;

Do we need 'bytes' anymore? I think we don't even need
zswap_store_page() to return the compressed size anymore. Seems like a
boolean will suffice.
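
If zswap_store_page() is changed to return a bool, the store loop in
zswap_store() could collapse to something like the following (rough
sketch of the suggested follow-up, not part of this patch):

	for (index = 0; index < nr_pages; ++index) {
		if (!zswap_store_page(folio_page(folio, index), objcg, pool))
			goto put_pool;
	}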

> -		compressed_bytes += bytes;
>  	}
>  
> -	if (objcg) {
> -		obj_cgroup_charge_zswap(objcg, compressed_bytes);
> +	if (objcg)
>  		count_objcg_events(objcg, ZSWPOUT, nr_pages);
> -	}
>  
> -	atomic_long_add(nr_pages, &zswap_stored_pages);
>  	count_vm_events(ZSWPOUT, nr_pages);
>  
>  	ret = true;
> -- 
> 2.47.1
> 
>

Patch

diff --git a/mm/zswap.c b/mm/zswap.c
index 6504174fbc6a..f0bd962bffd5 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -1504,11 +1504,14 @@  static ssize_t zswap_store_page(struct page *page,
 	entry->pool = pool;
 	entry->swpentry = page_swpentry;
 	entry->objcg = objcg;
+	if (objcg)
+		obj_cgroup_charge_zswap(objcg, entry->length);
 	entry->referenced = true;
 	if (entry->length) {
 		INIT_LIST_HEAD(&entry->lru);
 		zswap_lru_add(&zswap_list_lru, entry);
 	}
+	atomic_long_inc(&zswap_stored_pages);
 
 	return entry->length;
 
@@ -1526,7 +1529,6 @@  bool zswap_store(struct folio *folio)
 	struct obj_cgroup *objcg = NULL;
 	struct mem_cgroup *memcg = NULL;
 	struct zswap_pool *pool;
-	size_t compressed_bytes = 0;
 	bool ret = false;
 	long index;
 
@@ -1569,15 +1571,11 @@  bool zswap_store(struct folio *folio)
 		bytes = zswap_store_page(page, objcg, pool);
 		if (bytes < 0)
 			goto put_pool;
-		compressed_bytes += bytes;
 	}
 
-	if (objcg) {
-		obj_cgroup_charge_zswap(objcg, compressed_bytes);
+	if (objcg)
 		count_objcg_events(objcg, ZSWPOUT, nr_pages);
-	}
 
-	atomic_long_add(nr_pages, &zswap_stored_pages);
 	count_vm_events(ZSWPOUT, nr_pages);
 
 	ret = true;