=============================================
[ INFO: possible recursive locking detected ]
3.9.0-rc2 #22 Not tainted
---------------------------------------------
kworker/0:1/54 is trying to acquire lock:
(&dmac->lock){+.+...}, at: [<ffffffffa05fffb3>] evo_wait+0x43/0xf0 [nouveau]
but task is already holding lock:
(&dmac->lock){+.+...}, at: [<ffffffffa05fffb3>] evo_wait+0x43/0xf0 [nouveau]
other info that might help us debug this:
Possible unsafe locking scenario:
CPU0
----
lock(&dmac->lock);
lock(&dmac->lock);
*** DEADLOCK ***
May be due to missing lock nesting notation
5 locks held by kworker/0:1/54:
#0: (events){.+.+.+}, at: [<ffffffff8106e311>] process_one_work+0x171/0x4c0
#1: ((&nv_connector->hpd_work)){+.+.+.}, at: [<ffffffff8106e311>] process_one_work+0x171/0x4c0
#2: (&dev->mode_config.mutex){+.+.+.}, at: [<ffffffffa022ee2a>] drm_modeset_lock_all+0x2a/0x70 [drm]
#3: (&crtc->mutex){+.+.+.}, at: [<ffffffffa022ee54>] drm_modeset_lock_all+0x54/0x70 [drm]
#4: (&dmac->lock){+.+...}, at: [<ffffffffa05fffb3>] evo_wait+0x43/0xf0 [nouveau]
stack backtrace:
Pid: 54, comm: kworker/0:1 Not tainted 3.9.0-rc2 #22
Call Trace:
[<ffffffff810b71e5>] __lock_acquire+0x715/0x1be0
[<ffffffffa056361c>] ? dcb_table+0x1ac/0x2a0 [nouveau]
[<ffffffff810b8c31>] lock_acquire+0xa1/0x130
[<ffffffffa05fffb3>] ? evo_wait+0x43/0xf0 [nouveau]
[<ffffffff816aac59>] ? mutex_lock_nested+0x299/0x340
[<ffffffff816aaa09>] mutex_lock_nested+0x49/0x340
[<ffffffffa05fffb3>] ? evo_wait+0x43/0xf0 [nouveau]
[<ffffffff810b954f>] ? mark_held_locks+0xaf/0x110
[<ffffffffa05fffb3>] evo_wait+0x43/0xf0 [nouveau]
[<ffffffffa0602a63>] nv50_display_flip_next+0x713/0x7a0 [nouveau]
[<ffffffff816ab95e>] ? mutex_unlock+0xe/0x10
[<ffffffffa0600097>] ? evo_kick+0x37/0x40 [nouveau]
[<ffffffffa0602cee>] nv50_crtc_commit+0x10e/0x230 [nouveau]
[<ffffffffa0158125>] drm_crtc_helper_set_mode+0x365/0x510 [drm_kms_helper]
[<ffffffffa015953e>] drm_crtc_helper_set_config+0xa4e/0xb70 [drm_kms_helper]
[<ffffffffa022fe71>] drm_mode_set_config_internal+0x31/0x70 [drm]
[<ffffffffa0157621>] drm_fb_helper_set_par+0x71/0xf0 [drm_kms_helper]
[<ffffffffa022eaa2>] ? drm_modeset_unlock_all+0x52/0x60 [drm]
[<ffffffffa0157581>] drm_fb_helper_hotplug_event+0x81/0xb0 [drm_kms_helper]
[<ffffffffa05e964c>] nouveau_fbcon_output_poll_changed+0x1c/0x20 [nouveau]
[<ffffffffa0157bbb>] drm_kms_helper_hotplug_event+0x2b/0x40 [drm_kms_helper]
[<ffffffffa0158ada>] drm_helper_hpd_irq_event+0x12a/0x140 [drm_kms_helper]
[<ffffffffa05ec323>] nouveau_connector_hotplug_work+0x93/0x100 [nouveau]
[<ffffffff8106e371>] process_one_work+0x1d1/0x4c0
[<ffffffff8106e311>] ? process_one_work+0x171/0x4c0
[<ffffffff8106fd0f>] worker_thread+0x10f/0x380
[<ffffffff8106fc00>] ? busy_worker_rebind_fn+0xb0/0xb0
[<ffffffff8107aaca>] kthread+0xea/0xf0
[<ffffffff8107a9e0>] ? kthread_create_on_node+0x160/0x160
[<ffffffff816b80ac>] ret_from_fork+0x7c/0xb0
[<ffffffff8107a9e0>] ? kthread_create_on_node+0x160/0x160
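
For what it's worth, the "May be due to missing lock nesting notation" hint refers to lockdep's subclass annotations. From the backtrace it looks like nv50_crtc_commit() enters evo_wait() with one channel's &dmac->lock held, and nv50_display_flip_next() then calls evo_wait() on what is presumably a different channel whose mutex belongs to the same lock class, which lockdep cannot tell apart from a genuine recursive lock. Purely as an illustration, here is a minimal sketch of what such an annotation looks like in general; "struct chan" and "push_to_both" are made-up names, not nouveau code, and the annotation is only legitimate if the outer/inner order really is fixed by design:

#include <linux/mutex.h>
#include <linux/lockdep.h>

struct chan {
	struct mutex lock;
};

static void push_to_both(struct chan *outer, struct chan *inner)
{
	mutex_lock(&outer->lock);

	/*
	 * outer->lock and inner->lock are different mutexes but share a
	 * lockdep class, so a plain mutex_lock() here would trigger
	 * "possible recursive locking detected".  If the outer -> inner
	 * order is guaranteed, the inner acquisition can be marked as a
	 * deeper nesting level:
	 */
	mutex_lock_nested(&inner->lock, SINGLE_DEPTH_NESTING);

	/* ... emit commands to both channels ... */

	mutex_unlock(&inner->lock);
	mutex_unlock(&outer->lock);
}

Whether an annotation like that is the right fix, or whether the nested evo_wait() call should be avoided entirely, I leave to the nouveau developers.
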
This is consistently reproducible, and git bisect pointed at:
65b5f42e2a9eb9c8383fb67698bf8c27657f8c14 is the first bad commit
commit 65b5f42e2a9eb9c8383fb67698bf8c27657f8c14
Author: Ben Skeggs <bskeggs@redhat.com>
Date:   Wed Feb 20 16:47:44 2013 +1000

    drm/nve0/graph: some random reg moved on kepler

    Signed-off-by: Ben Skeggs <bskeggs@redhat.com>

:040000 040000 9658d1fd413b8797fe06fb2ca8ce681d4dbbedb0 c5b38586625718fc78c0eb062af3baa201fe2e7f M	drivers
The commit itself only touches one register write in drivers/gpu/drm/nouveau/core/engine/graph/nve0.c:
--- a/drivers/gpu/drm/nouveau/core/engine/graph/nve0.c
+++ b/drivers/gpu/drm/nouveau/core/engine/graph/nve0.c
@@ -350,7 +350,7 @@ nve0_graph_init_gpc_0(struct nvc0_graph_priv *priv)
 		nv_wr32(priv, GPC_UNIT(gpc, 0x0918), magicgpc918);
 	}
 
-	nv_wr32(priv, GPC_BCAST(0x1bd4), magicgpc918);
+	nv_wr32(priv, GPC_BCAST(0x3fd4), magicgpc918);
 	nv_wr32(priv, GPC_BCAST(0x08ac), nv_rd32(priv, 0x100800));
 }