
[17/95] lib/test_lockup.c: minimum fix to get it compiled on PREEMPT_RT

Message ID: 20201216044313.-SpiO0P-3%akpm@linux-foundation.org (mailing list archive)
State: New, archived
Series: [01/95] mm: fix a race on nr_swap_pages

Commit Message

Andrew Morton Dec. 16, 2020, 4:43 a.m. UTC
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Subject: lib/test_lockup.c: minimum fix to get it compiled on PREEMPT_RT

On PREEMPT_RT the locks are quite different, so they can't be tested as is
done below.  The alternative is to test for the wait_lock within the
rtmutex.
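
As a rough userspace sketch of why the offsets differ (the struct
definitions below are simplified stand-ins, not the real kernel types):
on PREEMPT_RT a spinlock_t wraps an rtmutex, and only the rtmutex's
internal wait_lock is still a raw lock carrying the CONFIG_DEBUG_SPINLOCK
magic, hence the lock.wait_lock.magic chain used in the patch.

#include <stddef.h>
#include <stdio.h>

/* Simplified stand-in types, not the real kernel definitions. */
struct raw_spinlock {
	unsigned int magic;		/* SPINLOCK_MAGIC with CONFIG_DEBUG_SPINLOCK */
};

struct rt_mutex {
	struct raw_spinlock wait_lock;	/* protects the waiter list */
	void *owner;
};

typedef struct spinlock {
	struct rt_mutex lock;		/* PREEMPT_RT: spinlock_t is a sleeping lock */
} spinlock_t;

int main(void)
{
	/*
	 * The patch probes this offset; the !PREEMPT_RT layout would use
	 * offsetof(spinlock_t, rlock.magic) instead.
	 */
	printf("RT-like magic offset: %zu\n",
	       offsetof(spinlock_t, lock.wait_lock.magic));
	return 0;
}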

This is the bare minimum to get it compiled.  Problems which exist on
PREEMPT_RT:

- none of the locks (spinlock_t, rwlock_t, mutex_t, rw_semaphore) may be
  acquired with disabled preemption or interrupts (see the sketch after
  this list).
  If I read the code correctly, it is possible to acquire a mutex_t with
  disabled interrupts.
  I don't know how to obtain a lock pointer.  Technically they are not
  exported to userland.

- memory cannot be allocated with disabled preemption or interrupts,
  even with GFP_ATOMIC.
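
As a sketch of the first two problems (the lock and helper names below
are made up for illustration; this is not code from test_lockup.c): on
PREEMPT_RT all of these locks are sleeping locks, so a sequence roughly
like the following, which the module can be configured to produce, is
invalid there.

#include <linux/irqflags.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(test_lock);	/* hypothetical lock */

static void rt_invalid_pattern(void)	/* hypothetical helper */
{
	void *buf;

	local_irq_disable();

	/*
	 * On PREEMPT_RT spin_lock() may sleep, so taking it with
	 * interrupts disabled triggers "sleeping function called from
	 * invalid context".
	 */
	spin_lock(&test_lock);

	/*
	 * Likewise, the allocator itself takes sleeping locks on
	 * PREEMPT_RT, so even GFP_ATOMIC allocations are not allowed
	 * with preemption or interrupts disabled.
	 */
	buf = kmalloc(64, GFP_ATOMIC);
	kfree(buf);

	spin_unlock(&test_lock);
	local_irq_enable();
}

The hunk below only adjusts the magic-value sanity checks so the module
builds with the PREEMPT_RT lock layouts; it does not make the
combinations above legal.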

Link: https://lkml.kernel.org/r/20201028181041.xyeothhkouc3p4md@linutronix.de
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 lib/test_lockup.c |   16 ++++++++++++++++
 1 file changed, 16 insertions(+)

Patch

--- a/lib/test_lockup.c~lib-test_lockup-minimum-fix-to-get-it-compiled-on-preempt_rt
+++ a/lib/test_lockup.c
@@ -480,6 +480,21 @@  static int __init test_lockup_init(void)
 		return -EINVAL;
 
 #ifdef CONFIG_DEBUG_SPINLOCK
+#ifdef CONFIG_PREEMPT_RT
+	if (test_magic(lock_spinlock_ptr,
+		       offsetof(spinlock_t, lock.wait_lock.magic),
+		       SPINLOCK_MAGIC) ||
+	    test_magic(lock_rwlock_ptr,
+		       offsetof(rwlock_t, rtmutex.wait_lock.magic),
+		       SPINLOCK_MAGIC) ||
+	    test_magic(lock_mutex_ptr,
+		       offsetof(struct mutex, lock.wait_lock.magic),
+		       SPINLOCK_MAGIC) ||
+	    test_magic(lock_rwsem_ptr,
+		       offsetof(struct rw_semaphore, rtmutex.wait_lock.magic),
+		       SPINLOCK_MAGIC))
+		return -EINVAL;
+#else
 	if (test_magic(lock_spinlock_ptr,
 		       offsetof(spinlock_t, rlock.magic),
 		       SPINLOCK_MAGIC) ||
@@ -494,6 +509,7 @@  static int __init test_lockup_init(void)
 		       SPINLOCK_MAGIC))
 		return -EINVAL;
 #endif
+#endif
 
 	if ((wait_state != TASK_RUNNING ||
 	     (call_cond_resched && !reacquire_locks) ||