Message ID | 87mvan9ljx.fsf@dilma.collabora.co.uk (mailing list archive) |
---|---|
State | New, archived |
On Mon, 2017-05-08 at 16:48 -0300, Gabriel Krisman Bertazi wrote:
> Thanks for reporting this. Can you confirm the following patch prevents
> the issue?

Nope, it still gripes.

[ 43.910362] BUG: sleeping function called from invalid context at mm/slab.h:432
[ 43.910955] in_atomic(): 1, irqs_disabled(): 0, pid: 2077, name: Xorg
[ 43.911472] Preemption disabled at:
[ 43.911478] [<ffffffffa02b1c45>] qxl_bo_kmap_atomic_page+0xa5/0x100 [qxl]
[ 43.912103] CPU: 0 PID: 2077 Comm: Xorg Tainted: G            E   4.12.0-master #38
[ 43.912550] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.8.1-0-g4adadbd-20161202_174313-build11a 04/01/2014
[ 43.913202] Call Trace:
[ 43.913371]  dump_stack+0x65/0x89
[ 43.913581]  ? qxl_bo_kmap_atomic_page+0xa5/0x100 [qxl]
[ 43.913876]  ___might_sleep+0x11a/0x190
[ 43.914095]  __might_sleep+0x4a/0x80
[ 43.914319]  ? qxl_bo_create+0x50/0x190 [qxl]
[ 43.914565]  kmem_cache_alloc_trace+0x46/0x180
[ 43.914836]  qxl_bo_create+0x50/0x190 [qxl]
[ 43.915082]  ? refcount_dec_and_test+0x11/0x20
[ 43.915332]  ? ttm_mem_io_reserve+0x41/0xe0 [ttm]
[ 43.915595]  qxl_alloc_bo_reserved+0x37/0xb0 [qxl]
[ 43.915884]  qxl_cursor_atomic_update+0x8f/0x260 [qxl]
[ 43.916172]  ? drm_atomic_helper_update_legacy_modeset_state+0x1d6/0x210 [drm_kms_helper]
[ 43.916623]  drm_atomic_helper_commit_planes+0xec/0x230 [drm_kms_helper]
[ 43.916995]  drm_atomic_helper_commit_tail+0x2b/0x60 [drm_kms_helper]
[ 43.917398]  commit_tail+0x65/0x70 [drm_kms_helper]
[ 43.917693]  drm_atomic_helper_commit+0xa9/0x100 [drm_kms_helper]
[ 43.918039]  drm_atomic_commit+0x4b/0x50 [drm]
[ 43.918334]  drm_atomic_helper_update_plane+0xf1/0x110 [drm_kms_helper]
[ 43.918902]  __setplane_internal+0x19f/0x280 [drm]
[ 43.919240]  drm_mode_cursor_universal+0x101/0x1c0 [drm]
[ 43.919541]  drm_mode_cursor_common+0x15b/0x1d0 [drm]
[ 43.919858]  drm_mode_cursor2_ioctl+0xe/0x10 [drm]
[ 43.920157]  drm_ioctl+0x211/0x460 [drm]
[ 43.920383]  ? drm_mode_cursor_ioctl+0x50/0x50 [drm]
[ 43.920664]  ? handle_mm_fault+0x93/0x160
[ 43.920893]  do_vfs_ioctl+0x96/0x6e0
[ 43.921117]  ? __fget+0x73/0xa0
[ 43.921322]  SyS_ioctl+0x41/0x70
[ 43.921545]  entry_SYSCALL_64_fastpath+0x1a/0xa5
[ 43.922188] RIP: 0033:0x7f1145804bc7
[ 43.922526] RSP: 002b:00007ffcd3e50508 EFLAGS: 00003246 ORIG_RAX: 0000000000000010
[ 43.923367] RAX: ffffffffffffffda RBX: 0000000000000040 RCX: 00007f1145804bc7
[ 43.923852] RDX: 00007ffcd3e50540 RSI: 00000000c02464bb RDI: 000000000000000b
[ 43.924299] RBP: 0000000000000040 R08: 0000000000000040 R09: 000000000000000c
[ 43.924694] R10: 00007ffcd3e50340 R11: 0000000000003246 R12: 0000000000000018
[ 43.925128] R13: 00000000022bc390 R14: 0000000000000040 R15: 00007ffcd3e5062c
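For context, the splat above is the kernel's atomic-sleep debug check firing: a GFP_KERNEL slab allocation is attempted after qxl_bo_kmap_atomic_page() has disabled preemption. A minimal sketch of that generic pattern follows; this is not the qxl code, and the function name is made up for illustration.

```c
/*
 * Minimal sketch (not the qxl code) of the pattern the splat complains
 * about: with CONFIG_DEBUG_ATOMIC_SLEEP enabled, a GFP_KERNEL slab
 * allocation issued while preemption is disabled trips ___might_sleep()
 * and prints "BUG: sleeping function called from invalid context".
 */
#include <linux/preempt.h>
#include <linux/slab.h>

static void sleeping_alloc_in_atomic_sketch(void)
{
	void *p;

	preempt_disable();		/* what qxl_bo_kmap_atomic_page() ends up doing */
	p = kmalloc(64, GFP_KERNEL);	/* may sleep -> the warning fires here */
	preempt_enable();

	kfree(p);
}
```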
On Tue, 2017-05-09 at 04:37 +0200, Mike Galbraith wrote:
> On Mon, 2017-05-08 at 16:48 -0300, Gabriel Krisman Bertazi wrote:
> 
> > Thanks for reporting this. Can you confirm the following patch prevents
> > the issue?
> 
> Nope, it still gripes.

The reason for this gripe is that we find that we don't have memory
reserved.. a tad too late.

 Xorg-2252 [000] .... 135.409756: qxl_release_map <-qxl_cursor_atomic_update
 Xorg-2252 [000] .... 135.409756: qxl_bo_kmap_atomic_page <-qxl_release_map
 Xorg-2252 [000] .... 135.409757: qxl_bo_kmap_atomic_page: ENTER
 Xorg-2252 [000] .... 135.409757: ttm_mem_io_lock <-qxl_bo_kmap_atomic_page
 Xorg-2252 [000] .... 135.409757: ttm_mem_io_reserve <-qxl_bo_kmap_atomic_page
 Xorg-2252 [000] .... 135.409757: qxl_ttm_io_mem_reserve <-ttm_mem_io_reserve
 Xorg-2252 [000] .... 135.409757: ttm_mem_io_unlock <-qxl_bo_kmap_atomic_page
 Xorg-2252 [000] .... 135.409757: qxl_bo_kmap_atomic_page: PREEMPTION DISABLED
 Xorg-2252 [000] ...1 135.409758: qxl_bo_kmap <-qxl_cursor_atomic_update
 Xorg-2252 [000] ...1 135.409758: ttm_bo_kmap <-qxl_bo_kmap              <== too late
 Xorg-2252 [000] ...1 135.409758: ttm_mem_io_reserve <-ttm_bo_kmap
 Xorg-2252 [000] ...1 135.409758: qxl_ttm_io_mem_reserve <-ttm_mem_io_reserve
 Xorg-2252 [000] ...1 135.409759: ioremap_nocache <-ttm_bo_kmap           <== game over
 Xorg-2252 [000] ...1 135.409759: __ioremap_caller <-ioremap_nocache
 Xorg-2252 [000] ...1 135.409759: walk_system_ram_range <-__ioremap_caller
 Xorg-2252 [000] ...1 135.409759: find_next_iomem_res <-walk_system_ram_range
 Xorg-2252 [000] ...1 135.409759: _raw_read_lock <-find_next_iomem_res
 Xorg-2252 [000] ...1 135.409760: reserve_memtype <-__ioremap_caller
 Xorg-2252 [000] ...1 135.409760: pat_pagerange_is_ram <-reserve_memtype
 Xorg-2252 [000] ...1 135.409761: walk_system_ram_range <-pat_pagerange_is_ram
 Xorg-2252 [000] ...1 135.409761: find_next_iomem_res <-walk_system_ram_range
 Xorg-2252 [000] ...1 135.409761: _raw_read_lock <-find_next_iomem_res
 Xorg-2252 [000] ...1 135.409761: kmem_cache_alloc_trace <-reserve_memtype
 Xorg-2252 [000] ...1 135.409761: __might_sleep <-kmem_cache_alloc_trace
 Xorg-2252 [000] ...1 135.409762: ___might_sleep <-__might_sleep
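The trace boils down to an ordering problem: every call that can sleep (BO allocation, qxl_bo_kmap() -> ttm_bo_kmap() -> ioremap_nocache() -> kmalloc()) has to finish before qxl_bo_kmap_atomic_page() disables preemption. The sketch below only restates that ordering; the *_stub() helpers are placeholders, not qxl APIs, and only the names in the comments come from the trace.

```c
/*
 * Ordering sketch only; the *_stub() helpers are placeholders, not qxl
 * functions. Sleeping setup must complete before the atomic mapping.
 */
static int alloc_reserve_and_kmap_stub(void)      { return 0; } /* may sleep */
static void map_release_page_atomic_stub(void)    { }           /* disables preemption */
static void fill_cursor_command_stub(void)        { }           /* atomic-safe work only */
static void unmap_release_page_atomic_stub(void)  { }           /* re-enables preemption */

static int cursor_update_ordering_sketch(void)
{
	int ret;

	ret = alloc_reserve_and_kmap_stub();	/* qxl_alloc_bo_reserved(), qxl_bo_kmap(), ... */
	if (ret)
		return ret;

	map_release_page_atomic_stub();		/* qxl_release_map() -> qxl_bo_kmap_atomic_page() */
	fill_cursor_command_stub();		/* no allocations, no ioremap in here */
	unmap_release_page_atomic_stub();

	return 0;
}
```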
Mike Galbraith <efault@gmx.de> writes:

> On Tue, 2017-05-09 at 04:37 +0200, Mike Galbraith wrote:
>> On Mon, 2017-05-08 at 16:48 -0300, Gabriel Krisman Bertazi wrote:
>>
>> > Thanks for reporting this. Can you confirm the following patch prevents
>> > the issue?
>>
>> Nope, it still gripes.
>
> The reason for this gripe is that we find that we don't have memory
> reserved.. a tad too late.
>

Thanks for the info. Sorry I wasn't able to get back to this last week.
I'll try to get another patch for -rc2.

> Xorg-2252 [000] .... 135.409756: qxl_release_map <-qxl_cursor_atomic_update
> Xorg-2252 [000] .... 135.409756: qxl_bo_kmap_atomic_page <-qxl_release_map
> Xorg-2252 [000] .... 135.409757: qxl_bo_kmap_atomic_page: ENTER
> Xorg-2252 [000] .... 135.409757: ttm_mem_io_lock <-qxl_bo_kmap_atomic_page
> Xorg-2252 [000] .... 135.409757: ttm_mem_io_reserve <-qxl_bo_kmap_atomic_page
> Xorg-2252 [000] .... 135.409757: qxl_ttm_io_mem_reserve <-ttm_mem_io_reserve
> Xorg-2252 [000] .... 135.409757: ttm_mem_io_unlock <-qxl_bo_kmap_atomic_page
> Xorg-2252 [000] .... 135.409757: qxl_bo_kmap_atomic_page: PREEMPTION DISABLED
> Xorg-2252 [000] ...1 135.409758: qxl_bo_kmap <-qxl_cursor_atomic_update
> Xorg-2252 [000] ...1 135.409758: ttm_bo_kmap <-qxl_bo_kmap              <== too late
> Xorg-2252 [000] ...1 135.409758: ttm_mem_io_reserve <-ttm_bo_kmap
> Xorg-2252 [000] ...1 135.409758: qxl_ttm_io_mem_reserve <-ttm_mem_io_reserve
> Xorg-2252 [000] ...1 135.409759: ioremap_nocache <-ttm_bo_kmap           <== game over
> Xorg-2252 [000] ...1 135.409759: __ioremap_caller <-ioremap_nocache
> Xorg-2252 [000] ...1 135.409759: walk_system_ram_range <-__ioremap_caller
> Xorg-2252 [000] ...1 135.409759: find_next_iomem_res <-walk_system_ram_range
> Xorg-2252 [000] ...1 135.409759: _raw_read_lock <-find_next_iomem_res
> Xorg-2252 [000] ...1 135.409760: reserve_memtype <-__ioremap_caller
> Xorg-2252 [000] ...1 135.409760: pat_pagerange_is_ram <-reserve_memtype
> Xorg-2252 [000] ...1 135.409761: walk_system_ram_range <-pat_pagerange_is_ram
> Xorg-2252 [000] ...1 135.409761: find_next_iomem_res <-walk_system_ram_range
> Xorg-2252 [000] ...1 135.409761: _raw_read_lock <-find_next_iomem_res
> Xorg-2252 [000] ...1 135.409761: kmem_cache_alloc_trace <-reserve_memtype
> Xorg-2252 [000] ...1 135.409761: __might_sleep <-kmem_cache_alloc_trace
> Xorg-2252 [000] ...1 135.409762: ___might_sleep <-__might_sleep
diff --git a/drivers/gpu/drm/qxl/qxl_display.c b/drivers/gpu/drm/qxl/qxl_display.c
index 058340a002c2..e92d0feaf639 100644
--- a/drivers/gpu/drm/qxl/qxl_display.c
+++ b/drivers/gpu/drm/qxl/qxl_display.c
@@ -581,25 +581,25 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
 		obj = to_qxl_framebuffer(fb)->obj;
 		user_bo = gem_to_qxl_bo(obj);
 
-		/* pinning is done in the prepare/cleanup framevbuffer */
-		ret = qxl_bo_kmap(user_bo, &user_ptr);
-		if (ret)
-			goto out_free_release;
-
 		ret = qxl_alloc_bo_reserved(qdev, release,
 					    sizeof(struct qxl_cursor) + size,
 					    &cursor_bo);
 		if (ret)
-			goto out_kunmap;
+			goto out_free_release;
 
 		ret = qxl_release_reserve_list(release, true);
 		if (ret)
 			goto out_free_bo;
 
-		ret = qxl_bo_kmap(cursor_bo, (void **)&cursor);
+		/* pinning is done in the prepare/cleanup framevbuffer */
+		ret = qxl_bo_kmap(user_bo, &user_ptr);
 		if (ret)
 			goto out_backoff;
 
+		ret = qxl_bo_kmap(cursor_bo, (void **)&cursor);
+		if (ret)
+			goto out_kunmap;
+
 		cursor->header.unique = 0;
 		cursor->header.type = SPICE_CURSOR_TYPE_ALPHA;
 		cursor->header.width = 64;
@@ -636,12 +636,12 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
 
 	return;
 
+out_kunmap:
+	qxl_bo_kunmap(user_bo);
 out_backoff:
 	qxl_release_backoff_reserve_list(release);
 out_free_bo:
 	qxl_bo_unref(&cursor_bo);
-out_kunmap:
-	qxl_bo_kunmap(user_bo);
 out_free_release:
 	qxl_release_free(qdev, release);
 	return;