[RFC,04/19] cpufreq: bring data structures close to their locks

Message ID: 20160112155843.GL6357@twins.programming.kicks-ass.net (mailing list archive)
State: Not Applicable, archived

Commit Message

Peter Zijlstra Jan. 12, 2016, 3:58 p.m. UTC
On Tue, Jan 12, 2016 at 03:26:01PM +0000, Juri Lelli wrote:
> > > 	#define for_each_governor(_g) \
> > > 		list_for_each_entry(_g, &cpufreq_governor_list, governor_list)
> > > 			if (lockdep_assert_held(..), false)
> > > 				;
> > > 			else
> > > 
> > > Which should preserve C syntax rules.
> > > 
> > 
> > Oh, this is nice! I'll try it.
> > 
> 
> This second approach doesn't really play well with the
> lockdep_assert_held() definition, right?
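
As a side note, the trick can be tried in isolation. The sketch below is
user-space only: a hand-rolled list stands in for list_for_each_entry()
and the governor list, and assert_expr()/assert_stmt() stand in for
lockdep_assert_held(). It shows why the "if (<assert>, false) ; else"
dance preserves C syntax, and why it needs the assertion to be an
expression: the current do { } while (0) definition is a statement and
cannot appear as an operand of the comma, which is the incompatibility
being asked about. Building with -Wall also trips the same
-Wunused-value warning shown below.

	/* sketch.c: gcc -o sketch sketch.c (GNU C, for the ({ }) extension) */
	#include <stdio.h>

	struct item { int val; struct item *next; };

	/* Expression form: a GNU statement expression, usable inside the
	 * comma expression below. */
	#define assert_expr(x)	({ (void)(x); })

	/* Statement form, like the current lockdep_assert_held(): writing
	 * "if (assert_stmt(x), 0)" would be a syntax error, because a
	 * do { } while (0) block is not an expression. */
	#define assert_stmt(x)	do { (void)(x); } while (0)

	/* Same shape as for_each_governor() above: the "if (..., 0) ; else"
	 * keeps the macro a single statement, so the statement following
	 * "for_each_item(it, head)" still binds as the loop body. */
	#define for_each_item(_i, _head)				\
		for ((_i) = (_head); (_i); (_i) = (_i)->next)		\
			if (assert_expr(_i), 0)				\
				;					\
			else

	int main(void)
	{
		struct item c = { 3, NULL }, b = { 2, &c }, a = { 1, &b };
		struct item *it;

		for_each_item(it, &a)
			printf("%d\n", it->val);	/* prints 1, 2, 3 */

		return 0;
	}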

Right. The patch below, however, does make it work, except for this warning:

../kernel/sched/core.c: In function ‘scheduler_ipi’:
../kernel/sched/core.c:1831:32: warning: left-hand operand of comma expression has no effect [-Wunused-value]
  if (lockdep_assert_held(&lock), false)

Which is of course correct, and the lack of effect is very much on purpose :/
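
For reference, the warning can be reproduced outside the kernel. The
reduction below is a sketch with stand-in definitions for debug_locks,
WARN_ON() and lockdep_is_held(), not the real kernel headers. With the
({ ... }) form of lockdep_assert_held() from the patch, the macro
evaluates to a void expression, so gcc -Wall flags the left-hand operand
of the comma as having no effect even though discarding it is exactly
the point.

	/* repro.c: gcc -Wall -c repro.c */
	#include <stdio.h>

	static int debug_locks = 1;
	static int lock;		/* stand-in for raw_spinlock_t lock */

	/* Just enough stand-ins to let the macro below compile. */
	#define lockdep_is_held(l)	((void)(l), 1)
	#define WARN_ON(cond)		((void)((cond) && printf("WARNING\n")))

	/* The ({ }) form proposed in the patch below. */
	#define lockdep_assert_held(l)	({				\
			WARN_ON(debug_locks && !lockdep_is_held(l));	\
			(void)l; })

	void scheduler_ipi_sketch(void)
	{
		/* warning: left-hand operand of comma expression has no effect */
		if (lockdep_assert_held(&lock), 0)
			;
	}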

---


Patch

diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index c57e424d914b..caf7a89643d8 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -362,9 +362,9 @@  extern void lock_unpin_lock(struct lockdep_map *lock);
 
 #define lockdep_depth(tsk)	(debug_locks ? (tsk)->lockdep_depth : 0)
 
-#define lockdep_assert_held(l)	do {				\
+#define lockdep_assert_held(l)	({				\
 		WARN_ON(debug_locks && !lockdep_is_held(l));	\
-	} while (0)
+		(void)l; })
 
 #define lockdep_assert_held_once(l)	do {				\
 		WARN_ON_ONCE(debug_locks && !lockdep_is_held(l));	\
@@ -422,7 +422,7 @@  struct lock_class_key { };
 
 #define lockdep_depth(tsk)	(0)
 
-#define lockdep_assert_held(l)			do { (void)(l); } while (0)
+#define lockdep_assert_held(l)			({ (void)l; })
 #define lockdep_assert_held_once(l)		do { (void)(l); } while (0)
 
 #define lockdep_recursing(tsk)			(0)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 77d97a6fc715..f6f36217133d 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1817,6 +1817,8 @@  void sched_ttwu_pending(void)
 	raw_spin_unlock_irqrestore(&rq->lock, flags);
 }
 
+raw_spinlock_t lock;
+
 void scheduler_ipi(void)
 {
 	/*
@@ -1826,6 +1828,9 @@  void scheduler_ipi(void)
 	 */
 	preempt_fold_need_resched();
 
+	if (lockdep_assert_held(&lock), false)
+		;
+
 	if (llist_empty(&this_rq()->wake_list) && !got_nohz_idle_kick())
 		return;