From patchwork Wed Aug 14 22:02:49 2013
X-Patchwork-Submitter: Daniel Vetter
X-Patchwork-Id: 2844855
From: Daniel Vetter <daniel.vetter@ffwll.ch>
To: Intel Graphics Development <intel-gfx@lists.freedesktop.org>,
	DRI Development <dri-devel@lists.freedesktop.org>
Date: Thu, 15 Aug 2013 00:02:49 +0200
Message-Id: <1376517769-7171-21-git-send-email-daniel.vetter@ffwll.ch>
X-Mailer: git-send-email 1.8.3.2
In-Reply-To: <1376517769-7171-1-git-send-email-daniel.vetter@ffwll.ch>
References: <1376517769-7171-1-git-send-email-daniel.vetter@ffwll.ch>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Subject: [Intel-gfx] [PATCH 20/20] drm/prime: Always add exported buffers to the handle cache

... not only when the dma-buf is freshly created. In contrived examples
someone else could have exported/imported the dma-buf already and handed
us the gem object with a flink name. If such an object gets reexported
as a dma_buf we won't have it in the handle cache already, which breaks
the guarantee that for dma-buf imports we always hand back an existing
handle if there is one.

This is exercised by igt/prime_self_import/with_one_bo_two_files

Now if we extend the locked sections just a notch more we can also plug
the racy buf/handle cache setup in handle_to_fd: If evil userspace races
a concurrent gem close against a prime export operation we can end up
tearing down the gem handle before the dma-buf handle cache is set up.
When handle_to_fd gets around to adding the handle to the cache there
will be no one left to clean it up, effectively leaking the bo (and the
dma-buf, since the handle cache holds a ref on the dma-buf):

Thread A			Thread B

handle_to_fd:

lookup gem object from handle
creates new dma_buf

				gem_close on the same handle
				- obj->dma_buf is set, but file priv buf
				  handle cache has no entry

				- obj->handle_count drops to 0

drm_prime_add_buf_handle sets up
the handle cache

-> We have a dma-buf reference in the handle cache, but since the
handle_count of the gem object already dropped to 0 no one will clean it
up. When closing the drm device fd we'll hit the WARN_ON in
drm_prime_destroy_file_private.

The important change is to extend the critical section of the
filp->prime.lock to cover the gem handle lookup. This serializes with a
concurrent gem handle close.
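For illustration, here is a minimal userspace sketch of the race window
(this is not the actual igt testcase; opening the drm fd and creating
the gem handle are driver-specific and only stubbed out here):

	#include <pthread.h>
	#include <stdint.h>
	#include <sys/ioctl.h>
	#include <drm.h>	/* via libdrm's include path */

	static int drm_fd;		/* assumed: open("/dev/dri/card0", ...) */
	static uint32_t gem_handle;	/* assumed: from a driver-specific create ioctl */

	/* Thread A: prime export, which internally does the gem handle
	 * lookup and the handle cache setup that this patch turns into
	 * one atomic step under file_priv->prime.lock. */
	static void *export_thread(void *arg)
	{
		struct drm_prime_handle prime = {
			.handle = gem_handle,
			.flags = DRM_CLOEXEC,
		};

		ioctl(drm_fd, DRM_IOCTL_PRIME_HANDLE_TO_FD, &prime);
		return NULL;
	}

	/* Thread B: gem close on the same handle, which may drop
	 * obj->handle_count to 0 before thread A reaches the cache setup. */
	static void *gem_close_thread(void *arg)
	{
		struct drm_gem_close req = { .handle = gem_handle };

		ioctl(drm_fd, DRM_IOCTL_GEM_CLOSE, &req);
		return NULL;
	}

	int main(void)
	{
		pthread_t a, b;

		/* ... open drm_fd and create gem_handle here ... */
		pthread_create(&a, NULL, export_thread, NULL);
		pthread_create(&b, NULL, gem_close_thread, NULL);
		pthread_join(a, NULL);
		pthread_join(b, NULL);
		return 0;
	}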
This leak is exercised by igt/prime_self_import/export-vs-gem_close-race

Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
---
 drivers/gpu/drm/drm_gem.c   |  6 ++--
 drivers/gpu/drm/drm_prime.c | 81 +++++++++++++++++++++++++++------------------
 include/drm/drmP.h          |  2 +-
 3 files changed, 53 insertions(+), 36 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index 9d72028..a3654fe 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -195,10 +195,12 @@ drm_gem_remove_prime_handles(struct drm_gem_object *obj, struct drm_file *filp)
 	 * Note: obj->dma_buf can't disappear as long as we still hold a
 	 * handle reference in obj->handle_count.
 	 */
+	mutex_lock(&filp->prime.lock);
 	if (obj->dma_buf) {
-		drm_prime_remove_buf_handle(&filp->prime,
-					    obj->dma_buf);
+		drm_prime_remove_buf_handle_locked(&filp->prime,
+						   obj->dma_buf);
 	}
+	mutex_unlock(&filp->prime.lock);
 }
 
 static void drm_gem_object_ref_bug(struct kref *list_kref)
diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
index ed1ea5c..7ae2bfc 100644
--- a/drivers/gpu/drm/drm_prime.c
+++ b/drivers/gpu/drm/drm_prime.c
@@ -83,6 +83,19 @@ static int drm_prime_add_buf_handle(struct drm_prime_file_private *prime_fpriv,
 	return 0;
 }
 
+static struct dma_buf *drm_prime_lookup_buf_by_handle(struct drm_prime_file_private *prime_fpriv,
+						      uint32_t handle)
+{
+	struct drm_prime_member *member;
+
+	list_for_each_entry(member, &prime_fpriv->head, entry) {
+		if (member->handle == handle)
+			return member->dma_buf;
+	}
+
+	return NULL;
+}
+
 static int drm_prime_lookup_buf_handle(struct drm_prime_file_private *prime_fpriv,
 				       struct dma_buf *dma_buf,
 				       uint32_t *handle)
@@ -146,9 +159,8 @@ static void drm_gem_map_detach(struct dma_buf *dma_buf,
 	attach->priv = NULL;
 }
 
-static void drm_prime_remove_buf_handle_locked(
-		struct drm_prime_file_private *prime_fpriv,
-		struct dma_buf *dma_buf)
+void drm_prime_remove_buf_handle_locked(struct drm_prime_file_private *prime_fpriv,
+					struct dma_buf *dma_buf)
 {
 	struct drm_prime_member *member, *safe;
 
@@ -337,6 +349,8 @@ static struct dma_buf *export_and_register_object(struct drm_device *dev,
 	 */
 	obj->dma_buf = dmabuf;
 	get_dma_buf(obj->dma_buf);
+	/* Grab a new ref since the caller's ref is now used by the dma-buf */
+	drm_gem_object_reference(obj);
 
 	return dmabuf;
 }
@@ -349,10 +363,20 @@ int drm_gem_prime_handle_to_fd(struct drm_device *dev,
 	int ret = 0;
 	struct dma_buf *dmabuf;
 
+	mutex_lock(&file_priv->prime.lock);
 	obj = drm_gem_object_lookup(dev, file_priv, handle);
-	if (!obj)
-		return -ENOENT;
+	if (!obj) {
+		ret = -ENOENT;
+		goto out_unlock;
+	}
+
+	dmabuf = drm_prime_lookup_buf_by_handle(&file_priv->prime, handle);
+	if (dmabuf) {
+		get_dma_buf(dmabuf);
+		goto out_have_handle;
+	}
 
+	mutex_lock(&dev->object_name_lock);
 	/* re-export the original imported object */
 	if (obj->import_attach) {
 		dmabuf = obj->import_attach->dmabuf;
@@ -360,45 +384,45 @@ int drm_gem_prime_handle_to_fd(struct drm_device *dev,
 		goto out_have_obj;
 	}
 
-	mutex_lock(&dev->object_name_lock);
 	if (obj->dma_buf) {
 		get_dma_buf(obj->dma_buf);
 		dmabuf = obj->dma_buf;
-		mutex_unlock(&dev->object_name_lock);
 		goto out_have_obj;
 	}
 
 	dmabuf = export_and_register_object(dev, obj, flags);
-	mutex_unlock(&dev->object_name_lock);
 	if (IS_ERR(dmabuf)) {
 		/* normally the created dma-buf takes ownership of the ref,
 		 * but if that fails then drop the ref
 		 */
 		ret = PTR_ERR(dmabuf);
+		mutex_unlock(&dev->object_name_lock);
 		goto out;
 	}
 
-	mutex_lock(&file_priv->prime.lock);
-	/* if we've exported this buffer the cheat and add it to the import list
-	 * so we get the correct handle back
+out_have_obj:
+	/*
+	 * If we've exported this buffer then cheat and add it to the import list
+	 * so we get the correct handle back. We must do this under the
+	 * protection of dev->object_name_lock to ensure that a racing gem close
+	 * ioctl doesn't miss to remove this buffer handle from the cache.
 	 */
 	ret = drm_prime_add_buf_handle(&file_priv->prime,
 				       dmabuf, handle);
+	mutex_unlock(&dev->object_name_lock);
 	if (ret)
 		goto fail_put_dmabuf;
 
+out_have_handle:
 	ret = dma_buf_fd(dmabuf, flags);
-	if (ret < 0)
-		goto fail_rm_handle;
-
-	*prime_fd = ret;
-	mutex_unlock(&file_priv->prime.lock);
-	return 0;
-
-out_have_obj:
-	ret = dma_buf_fd(dmabuf, flags);
+	/*
+	 * We must _not_ remove the buffer from the handle cache since the newly
+	 * created dma buf is already linked in the global obj->dma_buf pointer,
+	 * and that is invariant as long as a userspace gem handle exists.
+	 * Closing the handle will clean out the cache anyway, so we don't leak.
+	 */
 	if (ret < 0) {
-		dma_buf_put(dmabuf);
+		goto fail_put_dmabuf;
 	} else {
 		*prime_fd = ret;
 		ret = 0;
@@ -406,14 +430,13 @@ out_have_obj:
 
 	goto out;
 
-fail_rm_handle:
-	drm_prime_remove_buf_handle_locked(&file_priv->prime,
-					   dmabuf);
-	mutex_unlock(&file_priv->prime.lock);
 fail_put_dmabuf:
 	dma_buf_put(dmabuf);
 out:
 	drm_gem_object_unreference_unlocked(obj);
+out_unlock:
+	mutex_unlock(&file_priv->prime.lock);
+
 	return ret;
 }
 EXPORT_SYMBOL(drm_gem_prime_handle_to_fd);
@@ -669,11 +692,3 @@ void drm_prime_destroy_file_private(struct drm_prime_file_private *prime_fpriv)
 	WARN_ON(!list_empty(&prime_fpriv->head));
 }
 EXPORT_SYMBOL(drm_prime_destroy_file_private);
-
-void drm_prime_remove_buf_handle(struct drm_prime_file_private *prime_fpriv, struct dma_buf *dma_buf)
-{
-	mutex_lock(&prime_fpriv->lock);
-	drm_prime_remove_buf_handle_locked(prime_fpriv, dma_buf);
-	mutex_unlock(&prime_fpriv->lock);
-}
-EXPORT_SYMBOL(drm_prime_remove_buf_handle);
diff --git a/include/drm/drmP.h b/include/drm/drmP.h
index 8047f76..417aa89 100644
--- a/include/drm/drmP.h
+++ b/include/drm/drmP.h
@@ -1543,7 +1543,7 @@ int drm_gem_dumb_destroy(struct drm_file *file,
 
 void drm_prime_init_file_private(struct drm_prime_file_private *prime_fpriv);
 void drm_prime_destroy_file_private(struct drm_prime_file_private *prime_fpriv);
-void drm_prime_remove_buf_handle(struct drm_prime_file_private *prime_fpriv, struct dma_buf *dma_buf);
+void drm_prime_remove_buf_handle_locked(struct drm_prime_file_private *prime_fpriv, struct dma_buf *dma_buf);
 int drm_prime_add_dma_buf(struct drm_device *dev, struct drm_gem_object *obj);
 int drm_prime_lookup_obj(struct drm_device *dev, struct dma_buf *buf,