From: Chris Wilson
To: intel-gfx@lists.freedesktop.org
Date: Mon, 23 Apr 2018 11:14:00 +0100
Message-Id: <20180423101400.27418-6-chris@chris-wilson.co.uk>
In-Reply-To: <20180423101400.27418-1-chris@chris-wilson.co.uk>
References: <20180423101400.27418-1-chris@chris-wilson.co.uk>
Subject: [Intel-gfx] [PATCH 6/6] drm/i915: Lazily unbind vma on close

When userspace is passing around swapbuffers using DRI, we frequently
have to open and close the same object in the foreign address space.
This shows itself as the same object being rebound at roughly 30fps
(with a second object also being rebound at 30fps), which involves us
having to rewrite the page tables and maintain the drm_mm range manager
every time.

However, since the object still exists and it is only the local handle
that disappears, if we are lazy and defer unbinding the VMA until the
GPU is idle, rather than unbinding it immediately when the local user
closes the object, we can reuse the same VMA binding.
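In outline, closing a handle now only queues the VMA on a per-device
"closed" list; the real unbind happens at the next idle (parked) point,
and a lookup of the same object in the same context before then simply
takes the VMA back off that list. A minimal, standalone sketch of that
defer-and-flush pattern follows (illustrative C only; the toy_vma_*
names are made up for this sketch and are not the driver's API - the
actual changes are i915_vma_close(), __i915_vma_final_close() and
i915_vma_parked() in the diff below):

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/* Toy stand-in for a bound VMA: only the state needed for the pattern. */
struct toy_vma {
	bool closed;
	struct toy_vma *next_closed;	/* link on the device's closed list */
};

/* Per-device list of VMAs whose handles are closed but still bound. */
static struct toy_vma *closed_list;

/* Handle closed: mark the VMA closed and defer the actual unbind. */
static void toy_vma_close(struct toy_vma *vma)
{
	vma->closed = true;
	vma->next_closed = closed_list;
	closed_list = vma;
}

/* Same object looked up again before idle: resurrect the binding. */
static void toy_vma_reopen(struct toy_vma *vma)
{
	struct toy_vma **p = &closed_list;

	while (*p && *p != vma)
		p = &(*p)->next_closed;
	if (*p) {
		*p = vma->next_closed;
		vma->closed = false;
	}
}

/* Device went idle: now really tear down (here, just free) everything closed. */
static void toy_vma_parked(void)
{
	while (closed_list) {
		struct toy_vma *vma = closed_list;

		closed_list = vma->next_closed;
		free(vma);
	}
}

int main(void)
{
	struct toy_vma *vma = calloc(1, sizeof(*vma));

	toy_vma_close(vma);	/* handle closed, binding kept */
	toy_vma_reopen(vma);	/* reused before idle: no rebind needed */
	toy_vma_close(vma);	/* closed again */
	toy_vma_parked();	/* idle: binding finally torn down */
	printf("deferred teardown done\n");
	return 0;
}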
We still have to be careful to mark the handle and lookup tables as
closed to maintain the uABI, just allowing the underlying VMA to be
resurrected if the user is able to access the same object from the same
context again. If the object itself is destroyed (with no userspace
handle left to it), the VMA is reaped immediately, as before.

In the future, this will be even more useful as instantiating a new VMA
for use on the GPU will become heavier. A nuisance indeed, so nip it in
the bud.

Signed-off-by: Chris Wilson
Cc: Tvrtko Ursulin
---
 drivers/gpu/drm/i915/i915_drv.h               |  1 +
 drivers/gpu/drm/i915/i915_gem.c               |  4 +-
 drivers/gpu/drm/i915/i915_gem_execbuffer.c    |  5 +++
 drivers/gpu/drm/i915/i915_gem_gtt.c           | 14 ++++---
 drivers/gpu/drm/i915/i915_vma.c               | 39 ++++++++++++-------
 drivers/gpu/drm/i915/i915_vma.h               |  5 +++
 drivers/gpu/drm/i915/selftests/huge_pages.c   |  2 +-
 .../gpu/drm/i915/selftests/mock_gem_device.c  |  1 +
 8 files changed, 51 insertions(+), 20 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 89cb74c30a00..b2f78da9bc3c 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2060,6 +2060,7 @@ struct drm_i915_private {
 		struct list_head timelines;
 
 		struct list_head live_rings;
+		struct list_head closed_vma;
 		u32 active_requests;
 
 		/**
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index f07556693cfe..4fb7d45b5b5d 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -165,6 +165,7 @@ static u32 __i915_gem_park(struct drm_i915_private *i915)
 	i915_timelines_park(i915);
 
 	i915_pmu_gt_parked(i915);
+	i915_vma_parked(i915);
 
 	i915->gt.awake = false;
 
@@ -4792,7 +4793,7 @@ static void __i915_gem_free_objects(struct drm_i915_private *i915,
 					 &obj->vma_list, obj_link) {
 			GEM_BUG_ON(i915_vma_is_active(vma));
 			vma->flags &= ~I915_VMA_PIN_MASK;
-			i915_vma_close(vma);
+			__i915_vma_final_close(vma);
 		}
 		GEM_BUG_ON(!list_empty(&obj->vma_list));
 		GEM_BUG_ON(!RB_EMPTY_ROOT(&obj->vma_tree));
@@ -5595,6 +5596,7 @@ int i915_gem_init_early(struct drm_i915_private *dev_priv)
 
 	INIT_LIST_HEAD(&dev_priv->gt.live_rings);
 	INIT_LIST_HEAD(&dev_priv->gt.timelines);
+	INIT_LIST_HEAD(&dev_priv->gt.closed_vma);
 
 	i915_gem_init__mm(dev_priv);
 
diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
index c74f5df3fb5a..e5b8371e30de 100644
--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
@@ -768,6 +768,11 @@ static int eb_lookup_vmas(struct i915_execbuffer *eb)
 		lut->ctx = eb->ctx;
 		lut->handle = handle;
 
+		if (vma->flags & I915_VMA_CLOSED) {
+			vma->flags &= ~I915_VMA_CLOSED;
+			list_del(&vma->closed_link);
+		}
+
 add_vma:
 		err = eb_add_vma(eb, i, vma);
 		if (unlikely(err))
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index e9d828324f67..c1587d83fd19 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -2218,6 +2218,12 @@ i915_ppgtt_create(struct drm_i915_private *dev_priv,
 }
 
 void i915_ppgtt_close(struct i915_address_space *vm)
+{
+	GEM_BUG_ON(vm->closed);
+	vm->closed = true;
+}
+
+static void ppgtt_close_vma(struct i915_address_space *vm)
 {
 	struct list_head *phases[] = {
 		&vm->active_list,
@@ -2226,15 +2232,12 @@ void i915_ppgtt_close(struct i915_address_space *vm)
 		NULL,
 	}, **phase;
 
-	GEM_BUG_ON(vm->closed);
 	vm->closed = true;
-
 	for (phase = phases; *phase; phase++) {
 		struct i915_vma *vma, *vn;
 
 		list_for_each_entry_safe(vma, vn, *phase, vm_link)
-			if (!i915_vma_is_closed(vma))
-				i915_vma_close(vma);
+			__i915_vma_final_close(vma);
 	}
 }
 
@@ -2245,7 +2248,8 @@ void i915_ppgtt_release(struct kref *kref)
 
 	trace_i915_ppgtt_release(&ppgtt->base);
 
-	/* vmas should already be unbound and destroyed */
+	ppgtt_close_vma(&ppgtt->base);
+
 	GEM_BUG_ON(!list_empty(&ppgtt->base.active_list));
 	GEM_BUG_ON(!list_empty(&ppgtt->base.inactive_list));
 	GEM_BUG_ON(!list_empty(&ppgtt->base.unbound_list));
diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c
index 4bda3bd29bf5..5744f4409f09 100644
--- a/drivers/gpu/drm/i915/i915_vma.c
+++ b/drivers/gpu/drm/i915/i915_vma.c
@@ -46,8 +46,6 @@ i915_vma_retire(struct i915_gem_active *active, struct i915_request *rq)
 
 	GEM_BUG_ON(!drm_mm_node_allocated(&vma->node));
 	list_move_tail(&vma->vm_link, &vma->vm->inactive_list);
-	if (unlikely(i915_vma_is_closed(vma) && !i915_vma_is_pinned(vma)))
-		WARN_ON(i915_vma_unbind(vma));
 
 	GEM_BUG_ON(!i915_gem_object_is_active(obj));
 	if (--obj->active_count)
@@ -232,7 +230,6 @@ i915_vma_instance(struct drm_i915_gem_object *obj,
 	if (!vma)
 		vma = vma_create(obj, vm, view);
 
-	GEM_BUG_ON(!IS_ERR(vma) && i915_vma_is_closed(vma));
 	GEM_BUG_ON(!IS_ERR(vma) && i915_vma_compare(vma, vm, view));
 	GEM_BUG_ON(!IS_ERR(vma) && vma_lookup(obj, vm, view) != vma);
 	return vma;
@@ -689,8 +686,6 @@ static void i915_vma_destroy(struct i915_vma *vma)
 	int i;
 
 	GEM_BUG_ON(vma->node.allocated);
-	GEM_BUG_ON(i915_vma_is_active(vma));
-	GEM_BUG_ON(!i915_vma_is_closed(vma));
 	GEM_BUG_ON(vma->fence);
 
 	for (i = 0; i < ARRAY_SIZE(vma->last_read); i++)
@@ -699,6 +694,7 @@ static void i915_vma_destroy(struct i915_vma *vma)
 
 	list_del(&vma->obj_link);
 	list_del(&vma->vm_link);
+	rb_erase(&vma->obj_node, &vma->obj->vma_tree);
 
 	if (!i915_vma_is_ggtt(vma))
 		i915_ppgtt_put(i915_vm_to_ppgtt(vma->vm));
@@ -708,13 +704,34 @@ static void i915_vma_destroy(struct i915_vma *vma)
 
 void i915_vma_close(struct i915_vma *vma)
 {
+	lockdep_assert_held(&vma->vm->i915->drm.struct_mutex);
+
 	GEM_BUG_ON(i915_vma_is_closed(vma));
 	vma->flags |= I915_VMA_CLOSED;
 
-	rb_erase(&vma->obj_node, &vma->obj->vma_tree);
+	list_add_tail(&vma->closed_link, &vma->vm->i915->gt.closed_vma);
+}
 
-	if (!i915_vma_is_active(vma) && !i915_vma_is_pinned(vma))
-		WARN_ON(i915_vma_unbind(vma));
+void __i915_vma_final_close(struct i915_vma *vma)
+{
+	lockdep_assert_held(&vma->vm->i915->drm.struct_mutex);
+
+	GEM_BUG_ON(i915_vma_is_active(vma));
+	GEM_BUG_ON(i915_vma_is_pinned(vma));
+
+	if (i915_vma_is_closed(vma))
+		list_del(&vma->closed_link);
+
+	WARN_ON(i915_vma_unbind(vma));
+	i915_vma_destroy(vma);
+}
+
+void i915_vma_parked(struct drm_i915_private *i915)
+{
+	struct i915_vma *vma, *next;
+
+	list_for_each_entry_safe(vma, next, &i915->gt.closed_vma, closed_link)
+		__i915_vma_final_close(vma);
 }
 
 static void __i915_vma_iounmap(struct i915_vma *vma)
@@ -804,7 +821,7 @@ int i915_vma_unbind(struct i915_vma *vma)
 		return -EBUSY;
 
 	if (!drm_mm_node_allocated(&vma->node))
-		goto destroy;
+		return 0;
 
 	GEM_BUG_ON(obj->bind_count == 0);
 	GEM_BUG_ON(!i915_gem_object_has_pinned_pages(obj));
@@ -841,10 +858,6 @@
 
 	i915_vma_remove(vma);
 
-destroy:
-	if (unlikely(i915_vma_is_closed(vma)))
-		i915_vma_destroy(vma);
-
 	return 0;
 }
 
diff --git a/drivers/gpu/drm/i915/i915_vma.h b/drivers/gpu/drm/i915/i915_vma.h
index 8c5022095418..0a9790555d84 100644
--- a/drivers/gpu/drm/i915/i915_vma.h
+++ b/drivers/gpu/drm/i915/i915_vma.h
@@ -119,6 +119,8 @@ struct i915_vma {
 	/** This vma's place in the eviction list */
 	struct list_head evict_link;
 
+	struct list_head closed_link;
+
 	/**
 	 * Used for performing relocations during execbuffer insertion.
 	 */
@@ -285,6 +287,7 @@ void i915_vma_revoke_mmap(struct i915_vma *vma);
 int __must_check i915_vma_unbind(struct i915_vma *vma);
 void i915_vma_unlink_ctx(struct i915_vma *vma);
 void i915_vma_close(struct i915_vma *vma);
+void __i915_vma_final_close(struct i915_vma *vma);
 
 int __i915_vma_do_pin(struct i915_vma *vma,
 		      u64 size, u64 alignment, u64 flags);
@@ -408,6 +411,8 @@ i915_vma_unpin_fence(struct i915_vma *vma)
 		__i915_vma_unpin_fence(vma);
 }
 
+void i915_vma_parked(struct drm_i915_private *i915);
+
 #define for_each_until(cond) if (cond) break; else
 
 /**
diff --git a/drivers/gpu/drm/i915/selftests/huge_pages.c b/drivers/gpu/drm/i915/selftests/huge_pages.c
index 05bbef363fff..8fb0b9f434c9 100644
--- a/drivers/gpu/drm/i915/selftests/huge_pages.c
+++ b/drivers/gpu/drm/i915/selftests/huge_pages.c
@@ -1091,7 +1091,7 @@ static int __igt_write_huge(struct i915_gem_context *ctx,
 out_vma_unpin:
 	i915_vma_unpin(vma);
 out_vma_close:
-	i915_vma_close(vma);
+	__i915_vma_final_close(vma);
 
 	return err;
 }
diff --git a/drivers/gpu/drm/i915/selftests/mock_gem_device.c b/drivers/gpu/drm/i915/selftests/mock_gem_device.c
index ab6cd0e16d9b..c50d9fe76465 100644
--- a/drivers/gpu/drm/i915/selftests/mock_gem_device.c
+++ b/drivers/gpu/drm/i915/selftests/mock_gem_device.c
@@ -226,6 +226,7 @@ struct drm_i915_private *mock_gem_device(void)
 
 	INIT_LIST_HEAD(&i915->gt.live_rings);
 	INIT_LIST_HEAD(&i915->gt.timelines);
+	INIT_LIST_HEAD(&i915->gt.closed_vma);
 
 	mutex_lock(&i915->drm.struct_mutex);
 	mock_init_ggtt(i915);