From patchwork Tue Jul 16 07:12:10 2013
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Daniel Vetter
X-Patchwork-Id: 2827983
From: Daniel Vetter
To: DRI Development
Subject: [PATCH 19/20] drm/prime: proper locking+refcounting for obj->dma_buf link
Date: Tue, 16 Jul 2013 09:12:10 +0200
Message-Id: <1373958731-4132-20-git-send-email-daniel.vetter@ffwll.ch>
X-Mailer: git-send-email 1.8.3.2
In-Reply-To: <1373958731-4132-1-git-send-email-daniel.vetter@ffwll.ch>
References: <1373958731-4132-1-git-send-email-daniel.vetter@ffwll.ch>
Cc: Daniel Vetter
List-Id: Direct Rendering Infrastructure - Development

The exported dma-buf cache is semantically similar to a flink name. So it
makes sense to treat it the same way and remove the name (i.e. the dma_buf
pointer) and its references when the last gem handle disappears.

Again we need to be careful, but doubly so: not only can an export race
with a gem close ioctl (so we need to recheck obj->handle_count when
assigning the new name), but multiple exports can also race against each
other. This is prevented by holding dev->object_name_lock across the
entire section which touches obj->dma_buf.

With the new scheme we also need to reinstate the obj->dma_buf link at
import time (in case the only reference userspace held in between was
through the dma-buf fd and not through any native gem handle). For
simplicity we don't check whether it's a native object but unconditionally
set up that link - with the new scheme of dropping the obj->dma_buf
reference when the last handle disappears we can do that. To make it clear
that this is not just for exported buffers any more, also rename it from
export_dma_buf to dma_buf.

To make sure that no one can race an fd_to_handle or handle_to_fd against
gem_close we use the same trick as in flink and extend the
dev->object_name_lock critical section.

With this change we finally have a guaranteed 1:1 relationship (at least
for native objects) between gem objects and dma-bufs, even accounting for
races (which can happen since the dma-buf itself holds a reference while
in flight).
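Roughly, the export path now looks like this (a simplified sketch of the
drm_gem_prime_handle_to_fd() flow in the patch below; error handling and
the re-export of imported objects are left out):

	mutex_lock(&dev->object_name_lock);
	if (obj->dma_buf) {
		/* cache hit: reuse the dma-buf we exported earlier */
		get_dma_buf(obj->dma_buf);
		dmabuf = obj->dma_buf;
	} else {
		/* rechecks obj->handle_count so we can't race gem_close,
		 * then caches the new dma-buf in obj->dma_buf with its own
		 * reference, dropped again when the last handle goes away */
		dmabuf = export_and_register_object(dev, obj, flags);
	}
	mutex_unlock(&dev->object_name_lock);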
Signed-off-by: Daniel Vetter
---
 drivers/gpu/drm/drm_fops.c  |  1 +
 drivers/gpu/drm/drm_gem.c   | 26 +++++++++++++++--
 drivers/gpu/drm/drm_prime.c | 71 +++++++++++++++++++++++++++++++++++----------
 include/drm/drmP.h          | 12 ++++++--
 4 files changed, 90 insertions(+), 20 deletions(-)

diff --git a/drivers/gpu/drm/drm_fops.c b/drivers/gpu/drm/drm_fops.c
index 3a24385..cea9bc5 100644
--- a/drivers/gpu/drm/drm_fops.c
+++ b/drivers/gpu/drm/drm_fops.c
@@ -555,6 +555,7 @@ int drm_release(struct inode *inode, struct file *filp)
 
 	if (dev->driver->postclose)
 		dev->driver->postclose(dev, file_priv);
+
 	if (drm_core_check_feature(dev, DRIVER_PRIME))
 		drm_prime_destroy_file_private(&file_priv->prime);
 
diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index dd758c8..44ee16b 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -204,9 +204,16 @@ drm_gem_remove_prime_handles(struct drm_gem_object *obj, struct drm_file *filp)
 		drm_prime_remove_buf_handle(&filp->prime,
 				obj->import_attach->dmabuf);
 	}
-	if (obj->export_dma_buf) {
+
+	/*
+	 * Note: obj->dma_buf can't disappear as long as we still hold a
+	 * handle reference in obj->handle_count.
+	 */
+	if (obj->dma_buf) {
+		printk("removing exported handle for file %p, dmabuf %p\n",
+		       filp, obj->dma_buf);
 		drm_prime_remove_buf_handle(&filp->prime,
-				obj->export_dma_buf);
+				obj->dma_buf);
 	}
 }
 
@@ -240,6 +247,15 @@ static void drm_gem_object_handle_free(struct drm_gem_object *obj)
 	}
 }
 
+static void drm_gem_object_exported_dma_buf_free(struct drm_gem_object *obj)
+{
+	/* Unbreak the reference cycle if we have an exported dma_buf. */
+	if (obj->dma_buf) {
+		dma_buf_put(obj->dma_buf);
+		obj->dma_buf = NULL;
+	}
+}
+
 static void
 drm_gem_object_handle_unreference_unlocked(struct drm_gem_object *obj)
 {
@@ -253,8 +269,10 @@ drm_gem_object_handle_unreference_unlocked(struct drm_gem_object *obj)
 	*/
 
 	mutex_lock(&obj->dev->object_name_lock);
-	if (--obj->handle_count == 0)
+	if (--obj->handle_count == 0) {
 		drm_gem_object_handle_free(obj);
+		drm_gem_object_exported_dma_buf_free(obj);
+	}
 	mutex_unlock(&obj->dev->object_name_lock);
 
 	drm_gem_object_unreference_unlocked(obj);
@@ -647,6 +665,8 @@ drm_gem_release(struct drm_device *dev, struct drm_file *file_private)
 void
 drm_gem_object_release(struct drm_gem_object *obj)
 {
+	WARN_ON(obj->dma_buf);
+
 	if (obj->filp)
 		fput(obj->filp);
 }
diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
index e4436c1..6d0755a 100644
--- a/drivers/gpu/drm/drm_prime.c
+++ b/drivers/gpu/drm/drm_prime.c
@@ -196,11 +196,8 @@ void drm_gem_dmabuf_release(struct dma_buf *dma_buf)
 {
 	struct drm_gem_object *obj = dma_buf->priv;
 
-	if (obj->export_dma_buf == dma_buf) {
-		/* drop the reference on the export fd holds */
-		obj->export_dma_buf = NULL;
-		drm_gem_object_unreference_unlocked(obj);
-	}
+	/* drop the reference on the export fd holds */
+	drm_gem_object_unreference_unlocked(obj);
 }
 EXPORT_SYMBOL(drm_gem_dmabuf_release);
 
@@ -301,6 +298,38 @@ struct dma_buf *drm_gem_prime_export(struct drm_device *dev,
 }
 EXPORT_SYMBOL(drm_gem_prime_export);
 
+static struct dma_buf *export_and_register_object(struct drm_device *dev,
+						  struct drm_gem_object *obj,
+						  uint32_t flags)
+{
+	struct dma_buf *dmabuf;
+
+	/* prevent races with concurrent gem_close. */
+	if (obj->handle_count == 0) {
+		dma_buf_put(dmabuf);
+		dmabuf = ERR_PTR(-ENOENT);
+		return dmabuf;
+	}
+
+	dmabuf = dev->driver->gem_prime_export(dev, obj, flags);
+	if (IS_ERR(dmabuf)) {
+		/* normally the created dma-buf takes ownership of the ref,
+		 * but if that fails then drop the ref
+		 */
+		return dmabuf;
+	}
+
+	/*
+	 * Note that callers do not need to clean up the export cache
+	 * since the check for obj->handle_count guarantees that someone
+	 * will clean it up.
+	 */
+	obj->dma_buf = dmabuf;
+	get_dma_buf(obj->dma_buf);
+
+	return dmabuf;
+}
+
 int drm_gem_prime_handle_to_fd(struct drm_device *dev,
 		struct drm_file *file_priv, uint32_t handle, uint32_t flags,
 		int *prime_fd)
@@ -316,15 +345,20 @@ int drm_gem_prime_handle_to_fd(struct drm_device *dev,
 	/* re-export the original imported object */
 	if (obj->import_attach) {
 		dmabuf = obj->import_attach->dmabuf;
+		get_dma_buf(dmabuf);
 		goto out_have_obj;
 	}
 
-	if (obj->export_dma_buf) {
-		dmabuf = obj->export_dma_buf;
+	mutex_lock(&dev->object_name_lock);
+	if (obj->dma_buf) {
+		get_dma_buf(obj->dma_buf);
+		dmabuf = obj->dma_buf;
+		mutex_unlock(&dev->object_name_lock);
 		goto out_have_obj;
 	}
 
-	dmabuf = dev->driver->gem_prime_export(dev, obj, flags);
+	dmabuf = export_and_register_object(dev, obj, flags);
+	mutex_unlock(&dev->object_name_lock);
 	if (IS_ERR(dmabuf)) {
 		/* normally the created dma-buf takes ownership of the ref,
 		 * but if that fails then drop the ref
@@ -332,14 +366,13 @@ int drm_gem_prime_handle_to_fd(struct drm_device *dev,
 		ret = PTR_ERR(dmabuf);
 		goto out;
 	}
-	obj->export_dma_buf = dmabuf;
 
 	mutex_lock(&file_priv->prime.lock);
 	/* if we've exported this buffer the cheat and add it to the import list
 	 * so we get the correct handle back
 	 */
 	ret = drm_prime_add_buf_handle(&file_priv->prime,
-			obj->export_dma_buf, handle);
+			dmabuf, handle);
 	if (ret)
 		goto fail_put_dmabuf;
 
@@ -352,7 +385,6 @@ int drm_gem_prime_handle_to_fd(struct drm_device *dev,
 	return 0;
 
 out_have_obj:
-	get_dma_buf(dmabuf);
 	ret = dma_buf_fd(dmabuf, flags);
 	if (ret < 0) {
 		dma_buf_put(dmabuf);
@@ -368,8 +400,6 @@ fail_rm_handle:
 					   dmabuf);
 	mutex_unlock(&file_priv->prime.lock);
 fail_put_dmabuf:
-	/* clear NOT to be checked when releasing dma_buf */
-	obj->export_dma_buf = NULL;
 	dma_buf_put(dmabuf);
 out:
 	drm_gem_object_unreference_unlocked(obj);
@@ -451,13 +481,22 @@ int drm_gem_prime_fd_to_handle(struct drm_device *dev,
 		goto out_put;
 
 	/* never seen this one, need to import */
+	mutex_lock(&dev->object_name_lock);
 	obj = dev->driver->gem_prime_import(dev, dma_buf);
 	if (IS_ERR(obj)) {
 		ret = PTR_ERR(obj);
-		goto out_put;
+		goto out_unlock;
+	}
+
+	if (obj->dma_buf) {
+		WARN_ON(obj->dma_buf != dma_buf);
+	} else {
+		obj->dma_buf = dma_buf;
+		get_dma_buf(dma_buf);
 	}
 
-	ret = drm_gem_handle_create(file_priv, obj, handle);
+	/* drm_gem_handle_create_tail unlocks dev->object_name_lock. */
+	ret = drm_gem_handle_create_tail(file_priv, obj, handle);
 	drm_gem_object_unreference_unlocked(obj);
 	if (ret)
 		goto out_put;
@@ -478,6 +517,8 @@ fail:
 	 * to detach.. which seems ok..
 	 */
 	drm_gem_handle_delete(file_priv, *handle);
+out_unlock:
+	mutex_unlock(&dev->object_name_lock);
 out_put:
 	dma_buf_put(dma_buf);
 	mutex_unlock(&file_priv->prime.lock);
diff --git a/include/drm/drmP.h b/include/drm/drmP.h
index 9d82b33..c19cc2b 100644
--- a/include/drm/drmP.h
+++ b/include/drm/drmP.h
@@ -686,8 +686,16 @@ struct drm_gem_object {
 
 	void *driver_private;
 
-	/* dma buf exported from this GEM object */
-	struct dma_buf *export_dma_buf;
+	/**
+	 * dma_buf - dma buf associated with this GEM object
+	 *
+	 * Pointer to the dma-buf associated with this gem object (either
+	 * through importing or exporting). We break the resulting reference
+	 * loop when the last gem handle for this object is released.
+	 *
+	 * Protected by dev->object_name_lock
+	 */
+	struct dma_buf *dma_buf;
 
 	/**
 	 * import_attach - dma buf attachment backing this object