
[v2] mm: zswap: increase reject_compress_poor but not reject_compress_fail if compression returns ENOSPC

Message ID 20240217053642.79558-1-21cnbao@gmail.com (mailing list archive)
State: New

Commit Message

Barry Song Feb. 17, 2024, 5:36 a.m. UTC
From: Barry Song <v-songbaohua@oppo.com>

We used to rely on the -ENOSPC return value of zpool_malloc() to
increment reject_compress_poor. But after commit 744e1885922a
("crypto: scomp - fix req->dst buffer overflow"), that code is no
longer reached: the new code jumps to the exit path as soon as the
special compression case occurs, so zpool_malloc() may never run.
As a result, we incorrectly increment zswap_reject_compress_fail
instead. Fix the counter handling for the case where compression
returns -ENOSPC. This patch also centralizes the counter handling
for compress_poor, compress_fail and alloc_fail.

Fixes: 744e1885922a ("crypto: scomp - fix req->dst buffer overflow")
Cc: Chengming Zhou <zhouchengming@bytedance.com>
Cc: Nhat Pham <nphamcs@gmail.com>
Cc: Sergey Senozhatsky <senozhatsky@chromium.org>
Signed-off-by: Barry Song <v-songbaohua@oppo.com>
---
 -v2: 
 * correct the fixes target according to Yosry, Chengming, Nhat's
   comments;
 * centralize the counters handling according to Yosry's comment

 mm/zswap.c | 25 ++++++++++++-------------
 1 file changed, 12 insertions(+), 13 deletions(-)

Comments

Nhat Pham Feb. 17, 2024, 8:52 a.m. UTC | #1
On Sat, Feb 17, 2024 at 12:36 PM Barry Song <21cnbao@gmail.com> wrote:
>
> From: Barry Song <v-songbaohua@oppo.com>
>
> We used to rely on the returned -ENOSPC of zpool_malloc() to increase
> reject_compress_poor. But the code wouldn't get to there after commit
> 744e1885922a ("crypto: scomp - fix req->dst buffer overflow") as the
> new code will goto out immediately after the special compression case
> happens. So there might be no longer a chance to execute zpool_malloc
> now. We are incorrectly increasing zswap_reject_compress_fail instead.
> Thus, we need to fix the counters handling right after compressions
> return ENOSPC. This patch also centralizes the counters handling for
> all of compress_poor, compress_fail and alloc_fail.
>
> Fixes: 744e1885922a ("crypto: scomp - fix req->dst buffer overflow")
> Cc: Chengming Zhou <zhouchengming@bytedance.com>
> Cc: Nhat Pham <nphamcs@gmail.com>
> Cc: Sergey Senozhatsky <senozhatsky@chromium.org>
> Signed-off-by: Barry Song <v-songbaohua@oppo.com>

LGTM.
Reviewed-by: Nhat Pham <nphamcs@gmail.com>
Yosry Ahmed Feb. 17, 2024, 8:57 a.m. UTC | #2
On Sat, Feb 17, 2024 at 06:36:42PM +1300, Barry Song wrote:
> From: Barry Song <v-songbaohua@oppo.com>
> 
> We used to rely on the returned -ENOSPC of zpool_malloc() to increase
> reject_compress_poor. But the code wouldn't get to there after commit
> 744e1885922a ("crypto: scomp - fix req->dst buffer overflow") as the
> new code will goto out immediately after the special compression case
> happens. So there might be no longer a chance to execute zpool_malloc
> now. We are incorrectly increasing zswap_reject_compress_fail instead.
> Thus, we need to fix the counters handling right after compressions
> return ENOSPC. This patch also centralizes the counters handling for
> all of compress_poor, compress_fail and alloc_fail.
> 
> Fixes: 744e1885922a ("crypto: scomp - fix req->dst buffer overflow")
> Cc: Chengming Zhou <zhouchengming@bytedance.com>
> Cc: Nhat Pham <nphamcs@gmail.com>
> Cc: Sergey Senozhatsky <senozhatsky@chromium.org>
> Signed-off-by: Barry Song <v-songbaohua@oppo.com>
> ---
>  -v2: 
>  * correct the fixes target according to Yosry, Chengming, Nhat's
>    comments;
>  * centralize the counters handling according to Yosry's comment

Yet Yosry is not CC'd :P

The patch LGTM, but it won't apply on top of mm-unstable given the
amount of zswap refactoring there. I would rebase on top of mm-unstable
if I were you (and if you did, add mm-unstable in the subject prefix).

Acked-by: Yosry Ahmed <yosryahmed@google.com>

Thanks!
Barry Song Feb. 17, 2024, 10:19 a.m. UTC | #3
On Sat, Feb 17, 2024 at 4:57 PM Yosry Ahmed <yosryahmed@google.com> wrote:
>
> On Sat, Feb 17, 2024 at 06:36:42PM +1300, Barry Song wrote:
> > From: Barry Song <v-songbaohua@oppo.com>
> >
> > We used to rely on the returned -ENOSPC of zpool_malloc() to increase
> > reject_compress_poor. But the code wouldn't get to there after commit
> > 744e1885922a ("crypto: scomp - fix req->dst buffer overflow") as the
> > new code will goto out immediately after the special compression case
> > happens. So there might be no longer a chance to execute zpool_malloc
> > now. We are incorrectly increasing zswap_reject_compress_fail instead.
> > Thus, we need to fix the counters handling right after compressions
> > return ENOSPC. This patch also centralizes the counters handling for
> > all of compress_poor, compress_fail and alloc_fail.
> >
> > Fixes: 744e1885922a ("crypto: scomp - fix req->dst buffer overflow")
> > Cc: Chengming Zhou <zhouchengming@bytedance.com>
> > Cc: Nhat Pham <nphamcs@gmail.com>
> > Cc: Sergey Senozhatsky <senozhatsky@chromium.org>
> > Signed-off-by: Barry Song <v-songbaohua@oppo.com>
> > ---
> >  -v2:
> >  * correct the fixes target according to Yosry, Chengming, Nhat's
> >    comments;
> >  * centralize the counters handling according to Yosry's comment
>
> Yet Yosry is not CC'd :P

terribly sorry. I thought you were in my git send-email list ... but you
were not...

>
> The patch LGTM, but it won't apply on top of mm-unstable given the
> amount of zswap refactoring there. I would rebase on top of mm-unstable
> if I were you (and if you did, add mm-unstable in the subject prefix).

This patch has a "fixes" tag, so I assume it should also be in 6.8?

>
> Acked-by: Yosry Ahmed <yosryahmed@google.com>

thanks!

>
> Thanks!
Chengming Zhou Feb. 17, 2024, 1:27 p.m. UTC | #4
On 2024/2/17 13:36, Barry Song wrote:
> From: Barry Song <v-songbaohua@oppo.com>
> 
> We used to rely on the returned -ENOSPC of zpool_malloc() to increase
> reject_compress_poor. But the code wouldn't get to there after commit
> 744e1885922a ("crypto: scomp - fix req->dst buffer overflow") as the
> new code will goto out immediately after the special compression case
> happens. So there might be no longer a chance to execute zpool_malloc
> now. We are incorrectly increasing zswap_reject_compress_fail instead.
> Thus, we need to fix the counters handling right after compressions
> return ENOSPC. This patch also centralizes the counters handling for
> all of compress_poor, compress_fail and alloc_fail.
> 
> Fixes: 744e1885922a ("crypto: scomp - fix req->dst buffer overflow")
> Cc: Chengming Zhou <zhouchengming@bytedance.com>
> Cc: Nhat Pham <nphamcs@gmail.com>
> Cc: Sergey Senozhatsky <senozhatsky@chromium.org>
> Signed-off-by: Barry Song <v-songbaohua@oppo.com>

LGTM, thanks!

Reviewed-by: Chengming Zhou <zhouchengming@bytedance.com>

> ---
>  -v2: 
>  * correct the fixes target according to Yosry, Chengming, Nhat's
>    comments;
>  * centralize the counters handling according to Yosry's comment
> 
>  mm/zswap.c | 25 ++++++++++++-------------
>  1 file changed, 12 insertions(+), 13 deletions(-)
> 
> diff --git a/mm/zswap.c b/mm/zswap.c
> index 350dd2fc8159..47cf07d56362 100644
> --- a/mm/zswap.c
> +++ b/mm/zswap.c
> @@ -1498,6 +1498,7 @@ bool zswap_store(struct folio *folio)
>  	struct zswap_tree *tree = zswap_trees[type];
>  	struct zswap_entry *entry, *dupentry;
>  	struct scatterlist input, output;
> +	int comp_ret = 0, alloc_ret = 0;
>  	struct crypto_acomp_ctx *acomp_ctx;
>  	struct obj_cgroup *objcg = NULL;
>  	struct mem_cgroup *memcg = NULL;
> @@ -1508,7 +1509,6 @@ bool zswap_store(struct folio *folio)
>  	char *buf;
>  	u8 *src, *dst;
>  	gfp_t gfp;
> -	int ret;
>  
>  	VM_WARN_ON_ONCE(!folio_test_locked(folio));
>  	VM_WARN_ON_ONCE(!folio_test_swapcache(folio));
> @@ -1621,28 +1621,20 @@ bool zswap_store(struct folio *folio)
>  	 * but in different threads running on different cpu, we have different
>  	 * acomp instance, so multiple threads can do (de)compression in parallel.
>  	 */
> -	ret = crypto_wait_req(crypto_acomp_compress(acomp_ctx->req), &acomp_ctx->wait);
> +	comp_ret = crypto_wait_req(crypto_acomp_compress(acomp_ctx->req), &acomp_ctx->wait);
>  	dlen = acomp_ctx->req->dlen;
>  
> -	if (ret) {
> -		zswap_reject_compress_fail++;
> +	if (comp_ret)
>  		goto put_dstmem;
> -	}
>  
>  	/* store */
>  	zpool = zswap_find_zpool(entry);
>  	gfp = __GFP_NORETRY | __GFP_NOWARN | __GFP_KSWAPD_RECLAIM;
>  	if (zpool_malloc_support_movable(zpool))
>  		gfp |= __GFP_HIGHMEM | __GFP_MOVABLE;
> -	ret = zpool_malloc(zpool, dlen, gfp, &handle);
> -	if (ret == -ENOSPC) {
> -		zswap_reject_compress_poor++;
> -		goto put_dstmem;
> -	}
> -	if (ret) {
> -		zswap_reject_alloc_fail++;
> +	alloc_ret = zpool_malloc(zpool, dlen, gfp, &handle);
> +	if (alloc_ret)
>  		goto put_dstmem;
> -	}
>  	buf = zpool_map_handle(zpool, handle, ZPOOL_MM_WO);
>  	memcpy(buf, dst, dlen);
>  	zpool_unmap_handle(zpool, handle);
> @@ -1689,6 +1681,13 @@ bool zswap_store(struct folio *folio)
>  	return true;
>  
>  put_dstmem:
> +	if (comp_ret == -ENOSPC || alloc_ret == -ENOSPC)
> +		zswap_reject_compress_poor++;
> +	else if (comp_ret)
> +		zswap_reject_compress_fail++;
> +	else if (alloc_ret)
> +		zswap_reject_alloc_fail++;
> +
>  	mutex_unlock(&acomp_ctx->mutex);
>  put_pool:
>  	zswap_pool_put(entry->pool);
Yosry Ahmed Feb. 17, 2024, 11:14 p.m. UTC | #5
On Sat, Feb 17, 2024 at 2:19 AM Barry Song <21cnbao@gmail.com> wrote:
>
> On Sat, Feb 17, 2024 at 4:57 PM Yosry Ahmed <yosryahmed@google.com> wrote:
> >
> > On Sat, Feb 17, 2024 at 06:36:42PM +1300, Barry Song wrote:
> > > From: Barry Song <v-songbaohua@oppo.com>
> > >
> > > We used to rely on the returned -ENOSPC of zpool_malloc() to increase
> > > reject_compress_poor. But the code wouldn't get to there after commit
> > > 744e1885922a ("crypto: scomp - fix req->dst buffer overflow") as the
> > > new code will goto out immediately after the special compression case
> > > happens. So there might be no longer a chance to execute zpool_malloc
> > > now. We are incorrectly increasing zswap_reject_compress_fail instead.
> > > Thus, we need to fix the counters handling right after compressions
> > > return ENOSPC. This patch also centralizes the counters handling for
> > > all of compress_poor, compress_fail and alloc_fail.
> > >
> > > Fixes: 744e1885922a ("crypto: scomp - fix req->dst buffer overflow")
> > > Cc: Chengming Zhou <zhouchengming@bytedance.com>
> > > Cc: Nhat Pham <nphamcs@gmail.com>
> > > Cc: Sergey Senozhatsky <senozhatsky@chromium.org>
> > > Signed-off-by: Barry Song <v-songbaohua@oppo.com>
> > > ---
> > >  -v2:
> > >  * correct the fixes target according to Yosry, Chengming, Nhat's
> > >    comments;
> > >  * centralize the counters handling according to Yosry's comment
> >
> > Yet Yosry is not CC'd :P
>
> terribly sorry. I thought you were in my git send-email list ... but you
> were not...

No problem, I caught it on linux-mm anyway :)

>
> >
> > The patch LGTM, but it won't apply on top of mm-unstable given the
> > amount of zswap refactoring there. I would rebase on top of mm-unstable
> > if I were you (and if you did, add mm-unstable in the subject prefix).
>
> This patch has a "fixes" tag, so I assume it should be also in 6.8?

Hmm that's up to Andrew. This fixes debug counters so it's not
critical. On the other hand, it will conflict with the cleanup series
in his tree and he'll have to rebase and fix the conflicts (which
aren't a lot, but could still be annoying). Personally I think this
can wait till v6.9, but if Andrew doesn't have a problem taking it for
v6.8 that's fine too.
Andrew Morton Feb. 18, 2024, 9:45 p.m. UTC | #6
On Sat, 17 Feb 2024 15:14:34 -0800 Yosry Ahmed <yosryahmed@google.com> wrote:

> >
> > >
> > > The patch LGTM, but it won't apply on top of mm-unstable given the
> > > amount of zswap refactoring there. I would rebase on top of mm-unstable
> > > if I were you (and if you did, add mm-unstable in the subject prefix).
> >
> > This patch has a "fixes" tag, so I assume it should be also in 6.8?
> 
> Hmm that's up to Andrew. This fixes debug counters so it's not
> critical. On the other hand, it will conflict with the cleanup series
> in his tree and he'll have to rebase and fix the conflicts (which
> aren't a lot, but could still be annoying). Personally I think this
> can wait till v6.9, but if Andrew doesn't have a problem taking it for
> v6.8 that's fine too.

Yes, there are some pretty extensive repairs needed after this change. 
I'd prefer not to because lazy, and there are risks involved.

So against mm-unstable would be preferred please.

Patch

diff --git a/mm/zswap.c b/mm/zswap.c
index 350dd2fc8159..47cf07d56362 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -1498,6 +1498,7 @@  bool zswap_store(struct folio *folio)
 	struct zswap_tree *tree = zswap_trees[type];
 	struct zswap_entry *entry, *dupentry;
 	struct scatterlist input, output;
+	int comp_ret = 0, alloc_ret = 0;
 	struct crypto_acomp_ctx *acomp_ctx;
 	struct obj_cgroup *objcg = NULL;
 	struct mem_cgroup *memcg = NULL;
@@ -1508,7 +1509,6 @@  bool zswap_store(struct folio *folio)
 	char *buf;
 	u8 *src, *dst;
 	gfp_t gfp;
-	int ret;
 
 	VM_WARN_ON_ONCE(!folio_test_locked(folio));
 	VM_WARN_ON_ONCE(!folio_test_swapcache(folio));
@@ -1621,28 +1621,20 @@  bool zswap_store(struct folio *folio)
 	 * but in different threads running on different cpu, we have different
 	 * acomp instance, so multiple threads can do (de)compression in parallel.
 	 */
-	ret = crypto_wait_req(crypto_acomp_compress(acomp_ctx->req), &acomp_ctx->wait);
+	comp_ret = crypto_wait_req(crypto_acomp_compress(acomp_ctx->req), &acomp_ctx->wait);
 	dlen = acomp_ctx->req->dlen;
 
-	if (ret) {
-		zswap_reject_compress_fail++;
+	if (comp_ret)
 		goto put_dstmem;
-	}
 
 	/* store */
 	zpool = zswap_find_zpool(entry);
 	gfp = __GFP_NORETRY | __GFP_NOWARN | __GFP_KSWAPD_RECLAIM;
 	if (zpool_malloc_support_movable(zpool))
 		gfp |= __GFP_HIGHMEM | __GFP_MOVABLE;
-	ret = zpool_malloc(zpool, dlen, gfp, &handle);
-	if (ret == -ENOSPC) {
-		zswap_reject_compress_poor++;
-		goto put_dstmem;
-	}
-	if (ret) {
-		zswap_reject_alloc_fail++;
+	alloc_ret = zpool_malloc(zpool, dlen, gfp, &handle);
+	if (alloc_ret)
 		goto put_dstmem;
-	}
 	buf = zpool_map_handle(zpool, handle, ZPOOL_MM_WO);
 	memcpy(buf, dst, dlen);
 	zpool_unmap_handle(zpool, handle);
@@ -1689,6 +1681,13 @@  bool zswap_store(struct folio *folio)
 	return true;
 
 put_dstmem:
+	if (comp_ret == -ENOSPC || alloc_ret == -ENOSPC)
+		zswap_reject_compress_poor++;
+	else if (comp_ret)
+		zswap_reject_compress_fail++;
+	else if (alloc_ret)
+		zswap_reject_alloc_fail++;
+
 	mutex_unlock(&acomp_ctx->mutex);
 put_pool:
 	zswap_pool_put(entry->pool);