From patchwork Tue Aug 7 10:45:00 2018
From: Chris Wilson
To: intel-gfx@lists.freedesktop.org, amd-gfx@lists.freedesktop.org
Cc: Alex Deucher, Christian König, dri-devel@lists.freedesktop.org
Subject: [PATCH] drm/amdgpu: Transfer fences to dmabuf importer
Date: Tue, 7 Aug 2018 11:45:00 +0100
Message-Id: <20180807104500.31264-1-chris@chris-wilson.co.uk>

amdgpu only uses shared fences internally, but dmabuf importers rely on
implicit write-hazard tracking via reservation_object.fence_excl. For
example, an importer uses the write hazard to time a page flip so that
it only occurs after the exporter has finished flushing its writes into
the surface. As such, on exporting a dmabuf, we must either flush all
outstanding fences (since we do not know which of them are writes and
should have been exclusive) or, alternatively, create a new exclusive
fence that is the composite of all the existing shared fences, and so
will only be signaled when all earlier fences are signaled (ensuring
that we cannot be signaled before the completion of any earlier write).
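
For illustration only (not part of this patch): a minimal sketch of how
an importer might honour that write hazard before issuing a flip,
against the 4.18-era reservation_object API. The helper name
wait_for_exporter_writes() is hypothetical.

#include <linux/dma-fence.h>
#include <linux/reservation.h>

/* Wait for the exporter's outstanding writes, i.e. the exclusive fence. */
static long wait_for_exporter_writes(struct reservation_object *resv)
{
	struct dma_fence *excl;
	long ret = 0;

	/* Take a reference to the current exclusive (write) fence, if any. */
	excl = reservation_object_get_excl_rcu(resv);
	if (excl) {
		/* Block (interruptibly) until the write has completed. */
		ret = dma_fence_wait(excl, true);
		dma_fence_put(excl);
	}

	return ret;
}

An importer would perform such a wait (or wait on the fence
asynchronously) before scanning out the buffer, which is exactly the
hazard the exclusive fence created below is meant to carry.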
Testcase: igt/amd_prime/amd-to-i915
Signed-off-by: Chris Wilson
Cc: Alex Deucher
Cc: "Christian König"
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c | 70 ++++++++++++++++++++---
 1 file changed, 62 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c
index 1c5d97f4b4dd..47e6ec5510b6 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c
@@ -37,6 +37,7 @@
 #include "amdgpu_display.h"
 #include <drm/amdgpu_drm.h>
 #include <linux/dma-buf.h>
+#include <linux/dma-fence-array.h>
 
 static const struct dma_buf_ops amdgpu_dmabuf_ops;
 
@@ -188,6 +189,57 @@ amdgpu_gem_prime_import_sg_table(struct drm_device *dev,
 	return ERR_PTR(ret);
 }
 
+static int
+__reservation_object_make_exclusive(struct reservation_object *obj)
+{
+	struct reservation_object_list *fobj;
+	struct dma_fence_array *array;
+	struct dma_fence **fences;
+	unsigned int count, i;
+
+	fobj = reservation_object_get_list(obj);
+	if (!fobj)
+		return 0;
+
+	count = !!rcu_access_pointer(obj->fence_excl);
+	count += fobj->shared_count;
+
+	fences = kmalloc_array(count, sizeof(*fences), GFP_KERNEL);
+	if (!fences)
+		return -ENOMEM;
+
+	for (i = 0; i < fobj->shared_count; i++) {
+		struct dma_fence *f =
+			rcu_dereference_protected(fobj->shared[i],
+						  reservation_object_held(obj));
+
+		fences[i] = dma_fence_get(f);
+	}
+
+	if (rcu_access_pointer(obj->fence_excl)) {
+		struct dma_fence *f =
+			rcu_dereference_protected(obj->fence_excl,
+						  reservation_object_held(obj));
+
+		fences[i] = dma_fence_get(f);
+	}
+
+	array = dma_fence_array_create(count, fences,
+				       dma_fence_context_alloc(1), 0,
+				       false);
+	if (!array)
+		goto err_fences_put;
+
+	reservation_object_add_excl_fence(obj, &array->base);
+	return 0;
+
+err_fences_put:
+	for (i = 0; i < count; i++)
+		dma_fence_put(fences[i]);
+	kfree(fences);
+	return -ENOMEM;
+}
+
 /**
  * amdgpu_gem_map_attach - &dma_buf_ops.attach implementation
  * @dma_buf: shared DMA buffer
@@ -219,16 +271,18 @@ static int amdgpu_gem_map_attach(struct dma_buf *dma_buf,
 
 	if (attach->dev->driver != adev->dev->driver) {
 		/*
-		 * Wait for all shared fences to complete before we switch to future
-		 * use of exclusive fence on this prime shared bo.
+		 * We only create shared fences for internal use, but importers
+		 * of the dmabuf rely on exclusive fences for implicitly
+		 * tracking write hazards. As any of the current fences may
+		 * correspond to a write, we need to convert all existing
+		 * fences on the reservation object into a single exclusive
+		 * fence.
 		 */
-		r = reservation_object_wait_timeout_rcu(bo->tbo.resv,
-							true, false,
-							MAX_SCHEDULE_TIMEOUT);
-		if (unlikely(r < 0)) {
-			DRM_DEBUG_PRIME("Fence wait failed: %li\n", r);
+		reservation_object_lock(bo->tbo.resv, NULL);
+		r = __reservation_object_make_exclusive(bo->tbo.resv);
+		reservation_object_unlock(bo->tbo.resv);
+		if (r)
 			goto error_unreserve;
-		}
 	}
 
 	/* pin buffer into GTT */
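
A usage note, not part of the diff above: the helper dereferences the
shared and exclusive fence pointers under reservation_object_held(), so
any caller must hold the reservation lock, just as the attach path does.
A minimal hypothetical caller (example_make_bo_exclusive() is made up),
written as if it lived next to the helper in amdgpu_prime.c:

static int example_make_bo_exclusive(struct amdgpu_bo *bo)
{
	int r;

	/* The helper walks the fence lists under the reservation lock. */
	r = reservation_object_lock(bo->tbo.resv, NULL);
	if (r)
		return r;

	/*
	 * Collapse all current fences into one exclusive fence; as the
	 * dma_fence_array is created with signal_on_any=false, it signals
	 * only once every collected fence has signaled.
	 */
	r = __reservation_object_make_exclusive(bo->tbo.resv);

	reservation_object_unlock(bo->tbo.resv);
	return r;
}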