zram: Replace bit spinlocks with spinlock_t for PREEMPT_RT.

Message ID 20230323161830.jFbWCosd@linutronix.de (mailing list archive)
State New, archived
Series zram: Replace bit spinlocks with spinlock_t for PREEMPT_RT.

Commit Message

Sebastian Andrzej Siewior March 23, 2023, 4:18 p.m. UTC
From: Mike Galbraith <umgwanakikbuti@gmail.com>

The bit spinlock disables preemption. On PREEMPT_RT, spinlock_t becomes a
sleeping lock and cannot be acquired in this context. Within the locked
section, zs_free() acquires zs_pool::lock, and zram::wb_limit_lock is
accessed as well.

Use a spinlock_t for locking on PREEMPT_RT and set/clear the ZRAM_LOCK bit
after the lock has been acquired/dropped.

Signed-off-by: Mike Galbraith <umgwanakikbuti@gmail.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Link: https://lkml.kernel.org/r/YqIbMuHCPiQk+Ac2@linutronix.de
---

I'm simply forwarding Mike's patch here. The other alternative is to let
the driver depend on !PREEMPT_RT. I can't tell how likely it is that this
driver is used. Mike most likely stumbled upon it while running LTP.
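
To illustrate the problem described in the commit message (a simplified
sketch only, not part of the patch): with the bit spinlock, the slot-locked
section runs with preemption disabled, yet the code it calls ends up taking
regular spinlock_t locks, which are sleeping locks on PREEMPT_RT:

	/*
	 * Sketch of the problematic flow (simplified).
	 * zram_slot_lock() is bit_spin_lock(ZRAM_LOCK, ...), which disables
	 * preemption for the whole critical section.
	 */
	zram_slot_lock(zram, index);
	zram_free_page(zram, index);	/* -> zs_free() takes zs_pool::lock;	*/
					/*    spinlock_t sleeps on PREEMPT_RT	*/
					/*    and must not be acquired with	*/
					/*    preemption disabled		*/
	zram_slot_unlock(zram, index);

The same applies to zram::wb_limit_lock, which is also touched while the
slot lock is held.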

 drivers/block/zram/zram_drv.c |   36 ++++++++++++++++++++++++++++++++++++
 drivers/block/zram/zram_drv.h |    3 +++
 2 files changed, 39 insertions(+)

Comments

Sergey Senozhatsky March 24, 2023, 4:07 a.m. UTC | #1
On (23/03/23 17:18), Sebastian Andrzej Siewior wrote:
> From: Mike Galbraith <umgwanakikbuti@gmail.com>
> 
> The bit spinlock disables preemption. On PREEMPT_RT, spinlock_t becomes a
> sleeping lock and cannot be acquired in this context. Within the locked
> section, zs_free() acquires zs_pool::lock, and zram::wb_limit_lock is
> accessed as well.
> 
> Use a spinlock_t for locking on PREEMPT_RT and set/clear the ZRAM_LOCK bit
> after the lock has been acquired/dropped.
> 
> Signed-off-by: Mike Galbraith <umgwanakikbuti@gmail.com>
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> Link: https://lkml.kernel.org/r/YqIbMuHCPiQk+Ac2@linutronix.de
> ---
> 
> I'm simply forwarding Mike's patch here. The other alternative is to let
> the driver depend on !PREEMPT_RT. I can't tell how likely it is that this
> driver is used. Mike most likely stumbled upon it while running LTP.

Yeah, I'm curious if anyone uses zram on preempt-rt systems. I don't
mind this patch, but it would be nice to add new code only when it solves
real problems. Maybe `depends on !PREEMPT_RT` would be a better option.
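
For reference, the `depends on !PREEMPT_RT` alternative mentioned here would
amount to roughly the following in drivers/block/zram/Kconfig (a sketch only,
not part of this series; surrounding lines abbreviated):

	config ZRAM
		tristate "Compressed RAM block device support"
		# ... existing dependencies and selects unchanged ...
		depends on !PREEMPT_RT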
Mike Galbraith March 24, 2023, 4:32 a.m. UTC | #2
On Fri, 2023-03-24 at 13:07 +0900, Sergey Senozhatsky wrote:
> On (23/03/23 17:18), Sebastian Andrzej Siewior wrote:
> > Mike most likely stumbled upon it while running LTP.
>
> Yeah, I'm curious if anyone uses zram on preempt-rt systems. I don't
> mind this patch, but it would be nice to add new code only when it solves
> real problems. Maybe `depends on !PREEMPT_RT` would be a better option.

Patchlet's job here is only obese-config RT vs !RT testing. It can
always move back into local_patches; it won't be lonely ;-)

	-Mike

Patch

--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -57,6 +57,40 @@  static void zram_free_page(struct zram *
 static int zram_bvec_read(struct zram *zram, struct bio_vec *bvec,
 				u32 index, int offset, struct bio *bio);
 
+#ifdef CONFIG_PREEMPT_RT
+static void zram_meta_init_table_locks(struct zram *zram, size_t num_pages)
+{
+	size_t index;
+
+	for (index = 0; index < num_pages; index++)
+		spin_lock_init(&zram->table[index].lock);
+}
+
+static int zram_slot_trylock(struct zram *zram, u32 index)
+{
+	int ret;
+
+	ret = spin_trylock(&zram->table[index].lock);
+	if (ret)
+		__set_bit(ZRAM_LOCK, &zram->table[index].flags);
+	return ret;
+}
+
+static void zram_slot_lock(struct zram *zram, u32 index)
+{
+	spin_lock(&zram->table[index].lock);
+	__set_bit(ZRAM_LOCK, &zram->table[index].flags);
+}
+
+static void zram_slot_unlock(struct zram *zram, u32 index)
+{
+	__clear_bit(ZRAM_LOCK, &zram->table[index].flags);
+	spin_unlock(&zram->table[index].lock);
+}
+
+#else
+
+static void zram_meta_init_table_locks(struct zram *zram, size_t num_pages) { }
 
 static int zram_slot_trylock(struct zram *zram, u32 index)
 {
@@ -72,6 +106,7 @@  static void zram_slot_unlock(struct zram
 {
 	bit_spin_unlock(ZRAM_LOCK, &zram->table[index].flags);
 }
+#endif
 
 static inline bool init_done(struct zram *zram)
 {
@@ -1311,6 +1346,7 @@  static bool zram_meta_alloc(struct zram
 
 	if (!huge_class_size)
 		huge_class_size = zs_huge_class_size(zram->mem_pool);
+	zram_meta_init_table_locks(zram, num_pages);
 	return true;
 }
 
--- a/drivers/block/zram/zram_drv.h
+++ b/drivers/block/zram/zram_drv.h
@@ -69,6 +69,9 @@  struct zram_table_entry {
 		unsigned long element;
 	};
 	unsigned long flags;
+#ifdef CONFIG_PREEMPT_RT
+	spinlock_t lock;
+#endif
 #ifdef CONFIG_ZRAM_MEMORY_TRACKING
 	ktime_t ac_time;
 #endif