
[v2,0/9] zsmalloc: remove bit_spin_lock

Message ID 20211115185909.3949505-1-minchan@kernel.org

Minchan Kim Nov. 15, 2021, 6:59 p.m. UTC
zsmalloc has used a bit_spin_lock to minimize space overhead, since
it is a zspage-granularity lock. However, it makes zsmalloc unusable
under PREEMPT_RT and adds too much complication.

This patchset replaces the bit_spin_lock with a per-pool rwlock. It
also removes the unnecessary zspage isolation logic from the class,
which was the other source of excessive complication in zsmalloc.
The last patch changes get_cpu_var to local_lock so that it works
under PREEMPT_RT.
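
As background for that last patch, below is a minimal sketch of what
a get_cpu_var -> local_lock conversion of the per-CPU mapping area
could look like. The struct layout and function names here are
illustrative assumptions, not the exact patch.

#include <linux/local_lock.h>
#include <linux/percpu.h>

/* Illustrative per-CPU mapping area; the fields are assumptions. */
struct mapping_area {
	local_lock_t lock;	/* replaces the implicit preempt_disable of get_cpu_var() */
	char *vm_buf;		/* bounce buffer for objects spanning two pages */
	char *vm_addr;		/* mapped address handed back to the caller */
};

static DEFINE_PER_CPU(struct mapping_area, zs_map_area) = {
	.lock = INIT_LOCAL_LOCK(lock),
};

static void *map_area_get(void)
{
	struct mapping_area *area;

	/* Before: area = &get_cpu_var(zs_map_area); (preempt_disable only) */
	local_lock(&zs_map_area.lock);	/* a sleeping lock on PREEMPT_RT */
	area = this_cpu_ptr(&zs_map_area);
	/* ... fill area->vm_buf / area->vm_addr ... */
	return area->vm_addr;
}

static void map_area_put(void)
{
	/* Before: put_cpu_var(zs_map_area); */
	local_unlock(&zs_map_area.lock);
}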

Mike Galbraith (1):
  zsmalloc: replace get_cpu_var with local_lock

Minchan Kim (8):
  zsmalloc: introduce some helper functions
  zsmalloc: rename zs_stat_type to class_stat_type
  zsmalloc: decouple class actions from zspage works
  zsmalloc: introduce obj_allocated
  zsmalloc: move huge compressed obj from page to zspage
  zsmalloc: remove zspage isolation for migration
  locking/rwlocks: introduce write_lock_nested
  zsmalloc: replace per zpage lock with pool->migrate_lock

 include/linux/rwlock.h          |   6 +
 include/linux/rwlock_api_smp.h  |   9 +
 include/linux/rwlock_rt.h       |   6 +
 include/linux/spinlock_api_up.h |   1 +
 kernel/locking/spinlock.c       |   6 +
 kernel/locking/spinlock_rt.c    |  12 +
 mm/zsmalloc.c                   | 529 ++++++++++++--------------------
 7 files changed, 228 insertions(+), 341 deletions(-)
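
For reference, a rough sketch of the locking scheme the series moves
to, going by the patch titles above: object accessors take the
pool-level migrate_lock for read, migration/compaction takes it for
write, and the new write_lock_nested() allows taking a second lock of
the same lock class. Apart from pool->migrate_lock and
write_lock_nested(), the names and structure below are assumptions
for illustration only.

#include <linux/spinlock.h>
#include <linux/lockdep.h>

/* Illustrative structures; the real field layout differs. */
struct zspage_sketch {
	rwlock_t lock;		/* per-zspage lock replacing the bit_spin_lock */
};

struct zs_pool_sketch {
	rwlock_t migrate_lock;	/* pool-level: excludes migration from object access */
};

/* Object access paths only need to keep migration away, so they read-lock. */
static void obj_access(struct zs_pool_sketch *pool)
{
	read_lock(&pool->migrate_lock);
	/* ... translate the handle to page/offset and touch the object ... */
	read_unlock(&pool->migrate_lock);
}

/* Compaction moves objects from a source to a destination zspage. */
static void compact_pair(struct zs_pool_sketch *pool,
			 struct zspage_sketch *src, struct zspage_sketch *dst)
{
	write_lock(&pool->migrate_lock);
	write_lock(&src->lock);
	/* Second write lock of the same class: use the new write_lock_nested(). */
	write_lock_nested(&dst->lock, SINGLE_DEPTH_NESTING);

	/* ... migrate objects from src to dst ... */

	write_unlock(&dst->lock);
	write_unlock(&src->lock);
	write_unlock(&pool->migrate_lock);
}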