
[05/13] mm/pagemap: Cleanup PREEMPT_COUNT leftovers

Message ID 20200914204441.486057928@linutronix.de (mailing list archive)
State New, archived
Series: preempt: Make preempt count unconditional

Commit Message

Thomas Gleixner Sept. 14, 2020, 8:42 p.m. UTC
CONFIG_PREEMPT_COUNT is now unconditionally enabled, and the config option
will be removed. Clean up the leftovers before doing so.
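With preempt_count() always maintained, !in_atomic() && !irqs_disabled() and
preemptible() describe the same state, so the #ifdef guard and the compound
check can collapse into a single assertion. Below is a minimal sketch of that
equivalence, written as a hypothetical userspace model rather than kernel code:
the macro bodies mirror the in_atomic() and preemptible() definitions from
include/linux/preempt.h, and the model_* variables are invented for
illustration only.

/*
 * Hypothetical userspace model, not kernel code: walks all four states
 * and shows the old and new predicates agree once the count is maintained.
 */
#include <stdbool.h>
#include <stdio.h>

static int  model_preempt_count;  /* stands in for preempt_count()  */
static bool model_irqs_off;       /* stands in for irqs_disabled()  */

#define preempt_count()  (model_preempt_count)
#define irqs_disabled()  (model_irqs_off)
#define in_atomic()      (preempt_count() != 0)
#define preemptible()    (preempt_count() == 0 && !irqs_disabled())

int main(void)
{
	for (int count = 0; count <= 1; count++) {
		for (int off = 0; off <= 1; off++) {
			model_preempt_count = count;
			model_irqs_off = off;

			bool old_bug = !in_atomic() && !irqs_disabled();
			bool new_bug = preemptible();

			printf("count=%d irqs_off=%d old=%d new=%d\n",
			       count, off, old_bug, new_bug);
		}
	}
	return 0;
}

The two predicates only diverge when preempt_count() is not maintained at all,
which is exactly the case the removed CONFIG_PREEMPT_COUNT guard used to cover.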

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mm@kvack.org
---
 include/linux/pagemap.h |    4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

Patch

--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -168,9 +168,7 @@  void release_pages(struct page **pages,
 static inline int __page_cache_add_speculative(struct page *page, int count)
 {
 #ifdef CONFIG_TINY_RCU
-# ifdef CONFIG_PREEMPT_COUNT
-	VM_BUG_ON(!in_atomic() && !irqs_disabled());
-# endif
+	VM_BUG_ON(preemptible());
 	/*
 	 * Preempt must be disabled here - we rely on rcu_read_lock doing
 	 * this for us.