
[v6,0/6] Implement writeback for zsmalloc

Message ID 20221119001536.2086599-1-nphamcs@gmail.com (mailing list archive)

Message

Nhat Pham Nov. 19, 2022, 12:15 a.m. UTC
Changelog:
v6:
  * Move the move-to-front logic into zs_map_object (patch 4)
    (suggested by Minchan Kim).
  * Small clean up for free_zspage at free_handles() call site
    (patch 6) (suggested by Minchan Kim).
v5:
  * Add a new patch that eliminates unused code in zpool and simplify
    the logic for storing evict handler in zbud/z3fold (patch 2)
  * Remove redundant fields in zs_pool (previously required by zpool)
    (patch 3)
  * Wrap under_reclaim and deferred handle freeing logic in CONFIG_ZPOOL
    (patch 6) (suggested by Minchan Kim)
  * Move a small piece of refactoring from patch 6 to patch 4.
v4:
  * Wrap the new LRU logic in CONFIG_ZPOOL (patch 3).
    (suggested by Minchan Kim)
v3:
  * Set pool->ops = NULL when pool->zpool_ops is null (patch 4).
  * Stop holding pool's lock when calling lock_zspage() (patch 5).
    (suggested by Sergey Senozhatsky)
  * Stop holding pool's lock when checking pool->ops and retries.
    (patch 5) (suggested by Sergey Senozhatsky)
  * Fix formatting issues (.shrink, extra spaces in casting removed).
    (patch 5) (suggested by Sergey Senozhatsky)
v2:
  * Add missing CONFIG_ZPOOL ifdefs (patch 5)
    (detected by kernel test robot).

Unlike zswap's other allocators, such as zbud and z3fold, zsmalloc
currently lacks a writeback mechanism. This means that when the zswap
pool is full, it simply rejects further allocations, and the pages are
written directly to swap.

This series of patches implements writeback for zsmalloc. When the
zswap pool becomes full, zsmalloc attempts to evict all the compressed
objects in the least recently used zspages.
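To illustrate the mechanism described above, here is a simplified,
userspace sketch of the LRU bookkeeping: zspages sit on a list, mapping
an object moves its zspage to the front (as patch 4 does in
zs_map_object), and reclaim evicts from the tail through a
zpool_ops-style callback (as in patch 6). All names here (toy_pool,
toy_zspage, toy_shrink, etc.) are illustrative, not the kernel's actual
identifiers:

```c
#include <assert.h>
#include <stddef.h>

#define MAX_PAGES 8

/* Toy stand-in for a zspage on the pool's LRU list. */
struct toy_zspage {
	int id;
	struct toy_zspage *prev, *next;
};

/* Toy stand-in for zs_pool with an LRU list and an evict hook. */
struct toy_pool {
	struct toy_zspage *head, *tail;	/* head = most recently used */
	int nr_pages;
	void (*evict)(struct toy_zspage *zspage); /* zpool_ops-style hook */
};

static void lru_unlink(struct toy_pool *pool, struct toy_zspage *z)
{
	if (z->prev) z->prev->next = z->next; else pool->head = z->next;
	if (z->next) z->next->prev = z->prev; else pool->tail = z->prev;
	z->prev = z->next = NULL;
}

static void lru_add_front(struct toy_pool *pool, struct toy_zspage *z)
{
	z->prev = NULL;
	z->next = pool->head;
	if (pool->head) pool->head->prev = z;
	pool->head = z;
	if (!pool->tail) pool->tail = z;
}

/* Analogue of the move-to-front done on map (patch 4). */
static void toy_map_object(struct toy_pool *pool, struct toy_zspage *z)
{
	lru_unlink(pool, z);
	lru_add_front(pool, z);
}

/* Analogue of writeback: evict the least recently used zspage.
 * Returns the evicted zspage's id, or -1 if the pool is empty. */
static int toy_shrink(struct toy_pool *pool)
{
	struct toy_zspage *victim = pool->tail;

	if (!victim)
		return -1;
	lru_unlink(pool, victim);
	pool->nr_pages--;
	if (pool->evict)
		pool->evict(victim); /* write objects back to swap */
	return victim->id;
}
```

The key property, which the series preserves in the real allocator, is
that any access through the map path refreshes a zspage's position, so
shrinking always targets the coldest zspage first.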

There are 6 patches in this series:

Johannes Weiner (2):
  zswap: fix writeback lock ordering for zsmalloc
  zpool: clean out dead code

Nhat Pham (4):
  zsmalloc: Consolidate zs_pool's migrate_lock and size_class's locks
  zsmalloc: Add a LRU to zs_pool to keep track of zspages in LRU order
  zsmalloc: Add zpool_ops field to zs_pool to store evict handlers
  zsmalloc: Implement writeback mechanism for zsmalloc

 mm/z3fold.c   |  36 +-----
 mm/zbud.c     |  32 +----
 mm/zpool.c    |  10 +-
 mm/zsmalloc.c | 325 ++++++++++++++++++++++++++++++++++++++++----------
 mm/zswap.c    |  37 +++---
 5 files changed, 295 insertions(+), 145 deletions(-)

--
2.30.2

Comments

Nhat Pham Nov. 21, 2022, 7:29 p.m. UTC | #1
Hi Andrew, looks like Minchan is on board with the series - the concerns
with the latter patches have been resolved. Feel free to cherry-pick
this series back to your mm-unstable branch!
Nhat Pham Nov. 23, 2022, 7:26 p.m. UTC | #2
The suggested changes seem relatively minor, so instead of sending a v7
series of patches, I've just sent the two fixes in a separate thread.

Andrew, would you mind applying those fixes on top of patch 4 and patch
6 respectively? Thanks!