[v3,0/5] Implement writeback for zsmalloc

Message ID 20221108193207.3297327-1-nphamcs@gmail.com (mailing list archive)

Message

Nhat Pham Nov. 8, 2022, 7:32 p.m. UTC
Changelog:
v3:
  * Set pool->ops = NULL when pool->zpool_ops is null (patch 4).
  * Stop holding pool's lock when calling lock_zspage() (patch 5).
    (suggested by Sergey Senozhatsky)
  * Stop holding pool's lock when checking pool->ops and retries.
    (patch 5) (suggested by Sergey Senozhatsky)
  * Fix formatting issues (.shrink, extra spaces in casting removed).
    (patch 5) (suggested by Sergey Senozhatsky)

v2:
  * Add missing CONFIG_ZPOOL ifdefs (patch 5)
    (detected by kernel test robot).
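A side note on the v3 locking entries above: the sketch below is a guess, based only on the changelog wording, at the shape of the pattern they describe: take the pool lock to pick a victim, drop it before locking the zspage, and keep the pool->ops check and the retry loop outside the pool lock. It is plain user-space C with pthread locks as stand-ins; none of these names exist in the kernel and this is not the actual mm/zsmalloc.c code.

/*
 * Illustrative model only. It mirrors the v3 changelog: the pool lock is
 * dropped before the zspage lock is taken, and the ops check plus the
 * retry loop run without the pool lock held.
 */
#include <pthread.h>
#include <stdbool.h>

struct model_zspage {
	pthread_mutex_t lock;			/* stands in for lock_zspage() */
};

struct model_pool {
	pthread_mutex_t lock;			/* the "pool's lock" in the changelog */
	struct model_zspage *lru_tail;		/* oldest zspage; may change under us */
	bool has_evict_handler;			/* stands in for the pool->ops check */
};

int model_reclaim_one(struct model_pool *pool)
{
	int retries = 8;			/* arbitrary bound for this sketch */

	if (!pool->has_evict_handler)		/* checked without the pool lock */
		return -1;

	while (retries--) {
		struct model_zspage *zspage;

		pthread_mutex_lock(&pool->lock);
		zspage = pool->lru_tail;	/* pick the least recently used zspage */
		pthread_mutex_unlock(&pool->lock);	/* drop before taking the zspage lock */
		if (!zspage)
			return -1;

		pthread_mutex_lock(&zspage->lock);
		/*
		 * ... evict the zspage's objects here; on failure the loop
		 * would unlock and retry with the (possibly new) LRU tail ...
		 */
		pthread_mutex_unlock(&zspage->lock);
		return 0;
	}
	return -1;
}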

Unlike zswap's other allocators, such as zbud or z3fold, zsmalloc
currently lacks a writeback mechanism. This means that when the zswap
pool is full, zswap will simply reject further allocations, and the
pages will be written directly to swap.

This series of patches implements writeback for zsmalloc. When the zswap
pool becomes full, zsmalloc will attempt to evict all the compressed
objects in the least-recently used zspages.
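To make the described behavior concrete, here is a small self-contained
model of the store decision in plain C. It is not the real zswap/zsmalloc
code path; every name in it (model_pool, model_store, and so on) is
invented for illustration, and the real series works in terms of zspages
and an evict callback rather than a simple page counter.

/*
 * Toy model: with no writeback, a full pool means the page is rejected
 * and written directly to swap; with writeback, room is made first by
 * evicting the least recently used entries back to swap.
 */
#include <stdbool.h>
#include <stdio.h>

enum store_result { STORED_COMPRESSED, WENT_TO_SWAP };

struct model_pool {
	int used_pages;
	int capacity;
	bool can_writeback;	/* zbud/z3fold: yes; zsmalloc before this series: no */
};

static void model_evict_oldest(struct model_pool *pool)
{
	/* Corresponds to writing back the LRU zspage's objects to swap. */
	printf("  evicting least recently used entries to swap\n");
	pool->used_pages--;
}

static enum store_result model_store(struct model_pool *pool, int page_id)
{
	if (pool->used_pages >= pool->capacity) {
		if (!pool->can_writeback) {
			/* Current zsmalloc behavior: reject the store outright. */
			printf("page %d rejected, written directly to swap\n", page_id);
			return WENT_TO_SWAP;
		}
		/* Behavior this series adds: make room, then store compressed. */
		model_evict_oldest(pool);
	}
	pool->used_pages++;
	printf("page %d stored compressed\n", page_id);
	return STORED_COMPRESSED;
}

int main(void)
{
	struct model_pool pool = { .used_pages = 4, .capacity = 4, .can_writeback = false };

	model_store(&pool, 1);		/* rejected: no writeback available */
	pool.can_writeback = true;
	model_store(&pool, 2);		/* evicts the oldest entry, then stores */
	return 0;
}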

There are 5 patches in this series:

Johannes Weiner (1):
  zswap: fix writeback lock ordering for zsmalloc

Nhat Pham (4):
  zsmalloc: Consolidate zs_pool's migrate_lock and size_class's locks
  zsmalloc: Add a LRU to zs_pool to keep track of zspages in LRU order
  zsmalloc: Add ops fields to zs_pool to store evict handlers
  zsmalloc: Implement writeback mechanism for zsmalloc
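For orientation only, the sketch below guesses at the shape of the
data-structure additions those patch titles describe. The stand-in
list_head and spinlock_t definitions exist purely so the snippet is
self-contained outside the kernel tree; the actual field names and types
in mm/zsmalloc.c may differ.

/* Hedged sketch, not the real mm/zsmalloc.c definitions. */
struct list_head { struct list_head *next, *prev; };	/* stand-in */
typedef struct { int locked; } spinlock_t;		/* stand-in */

struct zs_pool;

/* Patch 4: evict handler supplied by the zpool layer (i.e. zswap). */
struct zs_pool_ops {
	int (*evict)(struct zs_pool *pool, unsigned long handle);
};

struct zspage {
	/* ... existing zspage fields ... */
	struct list_head lru;		/* patch 3: position in the pool-wide LRU */
};

struct zs_pool {
	/* ... existing zs_pool fields ... */
	spinlock_t lock;		/* patch 2: migrate_lock and per-size_class locks merged */
	struct list_head lru;		/* patch 3: zspages in LRU order, oldest at the tail */
	const struct zs_pool_ops *ops;	/* patch 4: NULL when no evict callback is provided */
};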

 mm/zsmalloc.c | 346 +++++++++++++++++++++++++++++++++++++++++---------
 mm/zswap.c    |  37 +++---
 2 files changed, 303 insertions(+), 80 deletions(-)

--
2.30.2

Comments

Johannes Weiner Nov. 8, 2022, 8:45 p.m. UTC | #1
On Tue, Nov 08, 2022 at 11:32:02AM -0800, Nhat Pham wrote:
> Changelog:
> v3:
>   * Set pool->ops = NULL when pool->zpool_ops is null (patch 4).
>   * Stop holding pool's lock when calling lock_zspage() (patch 5).
>     (suggested by Sergey Senozhatsky)
>   * Stop holding pool's lock when checking pool->ops and retries.
>     (patch 5) (suggested by Sergey Senozhatsky)
>   * Fix formatting issues (.shrink, extra spaces in casting removed).
>     (patch 5) (suggested by Sergey Senozhatsky)
> 
> v2:
>   * Add missing CONFIG_ZPOOL ifdefs (patch 5)
>     (detected by kernel test robot).
> 
> Unlike zswap's other allocators, such as zbud or z3fold, zsmalloc
> currently lacks a writeback mechanism. This means that when the zswap
> pool is full, zswap will simply reject further allocations, and the
> pages will be written directly to swap.
> 
> This series of patches implements writeback for zsmalloc. When the zswap
> pool becomes full, zsmalloc will attempt to evict all the compressed
> objects in the least-recently used zspages.
> 
> There are 5 patches in this series:

For the series:

Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Andrew Morton Nov. 9, 2022, 4:40 a.m. UTC | #2
On Tue,  8 Nov 2022 11:32:02 -0800 Nhat Pham <nphamcs@gmail.com> wrote:

> This series of patches implements writeback for zsmalloc. 

There's quite a bit of churn in zsmalloc at present.  So for the sake
of clarity I have dropped all zsmalloc patches except for "zsmalloc:
replace IS_ERR() with IS_ERR_VALUE()".

Please coordinate with Sergey and Minchan on getting all this pending
work finalized and reviewed.