[13/13] Revert "locking/lockdep, cpu/hotplug: Annotate AP thread"

Message ID 20181107153019.26401-13-daniel.vetter@ffwll.ch (mailing list archive)
State New, archived
Series [01/13] locking/lockdep: restore cross-release checks

Commit Message

Daniel Vetter Nov. 7, 2018, 3:30 p.m. UTC
This reverts commit cb92173d1f0474784c6171a9d3fdbbca0ee53554.

The reverted commit tries to shut up lockdep complaining about a
lockdep_assert_held() check in the AP cpuhp bringup threads while the
lock is actually held by the BP thread. Which is all kinda correct,
since the BP does wait, through a completion, for all the AP threads to
finish before it releases the cpuhp locks.
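
For context, the handoff looks roughly like the sketch below. This is
heavily simplified and not the actual cpuhp code; ap_done and the
function names are made up for illustration, only cpus_write_lock()/
cpus_write_unlock(), the completion API and lockdep_assert_cpus_held()
are real interfaces:

/*
 * Simplified model of the BP/AP handoff, not the real cpuhp code.
 * The names ap_done, bp_bring_up_cpu and ap_thread_fn are made up.
 */
static DECLARE_COMPLETION(ap_done);

static void bp_bring_up_cpu(void)
{
	cpus_write_lock();		/* BP takes cpu_hotplug_lock */
	/* ... wake the per-cpu hotplug thread (smpboot machinery) ... */
	wait_for_completion(&ap_done);	/* BP sleeps, still holding the lock */
	cpus_write_unlock();
}

static void ap_thread_fn(void)
{
	/*
	 * Runs on the AP: cpu_hotplug_lock is held, but by the BP, so a
	 * plain lockdep_assert_cpus_held() warns here even though the
	 * code is serialized correctly.
	 */
	lockdep_assert_cpus_held();
	/* ... run the hotplug callbacks for this state ... */
	complete(&ap_done);	/* only after this does the BP drop the lock */
}

Cross-release also tracks the wait_for_completion()/complete() pair,
which is how faking an acquisition of cpu_hotplug_lock on the AP side
turns into the deadlock report mentioned below.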

The only problem with this somewhat fake annotation is that
cross-release sees through the fog and rightly complains that doing
this for real would totally deadlock.

One way to fix this would be to check whether anyone is currently
holding the lock, not just the current thread. But I'm not sure that's
really a good option. Hence just revert for now, which will result in a
lockdep_assert_held splat per non-boot cpu at boot-up (and any time you
hotplug a cpu), but at least lockdep keeps working.
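
For reference, the "anyone holds it" variant could look roughly like
the sketch below, sitting next to lockdep_assert_cpus_held() in
kernel/cpu.c. The helper name is made up; it relies on the percpu-rwsem
writer taking the underlying rw_sem (the same rw_sem the reverted
annotation below pokes at), so it only catches the write-held case and
is not tied to any particular task, which is part of why it doesn't
look like a great option:

/* Sketch only, not part of this patch; the helper name is made up. */
static void lockdep_assert_cpus_held_by_someone(void)
{
	/*
	 * Unlike lockdep_assert_cpus_held(), don't require that *current*
	 * holds cpu_hotplug_lock, only that somebody does. Only the
	 * write-held case is visible through the underlying rw_sem;
	 * percpu readers never touch it.
	 */
	WARN_ON_ONCE(!rwsem_is_locked(&cpu_hotplug_lock.rw_sem));
}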

Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: "Peter Zijlstra (Intel)" <peterz@infraded.org>
Cc: Mukesh Ojha <mojha@codeaurora.org>
Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Lai Jiangshan <jiangshanlai@gmail.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Arnd Bergmann <arnd@arndb.de>
---
 kernel/cpu.c | 28 ----------------------------
 1 file changed, 28 deletions(-)

Patch

diff --git a/kernel/cpu.c b/kernel/cpu.c
index 3c7f3b4c453c..5ff05c284425 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -315,16 +315,6 @@  void lockdep_assert_cpus_held(void)
 	percpu_rwsem_assert_held(&cpu_hotplug_lock);
 }
 
-static void lockdep_acquire_cpus_lock(void)
-{
-	rwsem_acquire(&cpu_hotplug_lock.rw_sem.dep_map, 0, 0, _THIS_IP_);
-}
-
-static void lockdep_release_cpus_lock(void)
-{
-	rwsem_release(&cpu_hotplug_lock.rw_sem.dep_map, 1, _THIS_IP_);
-}
-
 /*
  * Wait for currently running CPU hotplug operations to complete (if any) and
  * disable future CPU hotplug (from sysfs). The 'cpu_add_remove_lock' protects
@@ -354,17 +344,6 @@  void cpu_hotplug_enable(void)
 	cpu_maps_update_done();
 }
 EXPORT_SYMBOL_GPL(cpu_hotplug_enable);
-
-#else
-
-static void lockdep_acquire_cpus_lock(void)
-{
-}
-
-static void lockdep_release_cpus_lock(void)
-{
-}
-
 #endif	/* CONFIG_HOTPLUG_CPU */
 
 #ifdef CONFIG_HOTPLUG_SMT
@@ -638,12 +617,6 @@  static void cpuhp_thread_fun(unsigned int cpu)
 	 */
 	smp_mb();
 
-	/*
-	 * The BP holds the hotplug lock, but we're now running on the AP,
-	 * ensure that anybody asserting the lock is held, will actually find
-	 * it so.
-	 */
-	lockdep_acquire_cpus_lock();
 	cpuhp_lock_acquire(bringup);
 
 	if (st->single) {
@@ -689,7 +662,6 @@  static void cpuhp_thread_fun(unsigned int cpu)
 	}
 
 	cpuhp_lock_release(bringup);
-	lockdep_release_cpus_lock();
 
 	if (!st->should_run)
 		complete_ap_thread(st, bringup);