From patchwork Tue Jun 30 16:55:32 2015
X-Patchwork-Submitter: Chris Wilson
X-Patchwork-Id: 6697491
From: Chris Wilson <chris@chris-wilson.co.uk>
To: intel-gfx@lists.freedesktop.org
Cc: stable@vger.kernel.org
Date: Tue, 30 Jun 2015 17:55:32 +0100
Message-Id: <1435683333-17844-2-git-send-email-chris@chris-wilson.co.uk>
X-Mailer: git-send-email 2.1.4
In-Reply-To: <1435683333-17844-1-git-send-email-chris@chris-wilson.co.uk>
References: <1435676470-18125-1-git-send-email-michal.winiarski@intel.com>
 <1435683333-17844-1-git-send-email-chris@chris-wilson.co.uk>
Subject: [Intel-gfx] [PATCH 2/3] drm/i915: Fix userptr deadlock with MAP_FIXED

Michał Winiarski found a really evil way to trigger a struct_mutex
deadlock with userptr. He found that if he allocated a userptr bo and
then GTT mmaped another bo, or even itself, at the same address as the
userptr using MAP_FIXED, he could then cause a deadlock any time we had
to invalidate the GTT mmappings (so at will).

To counteract the deadlock, we make the observation that when the
MAP_FIXED mmap is made we receive an invalidate_range event for our
object; after that we no longer alias with the rogue mmapping. If we
can then mark the object as no longer in use after the first
invalidation, we do not need to grab the struct_mutex for the
subsequent invalidations.
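For reference, the trigger looks roughly like the following from
userspace. This is only a minimal sketch in the spirit of the
igt/gem_userptr_blits/map-fixed* testcases, not the igt code itself;
the helper name and the omitted error handling are illustrative:

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <drm/i915_drm.h>

/* Back a userptr bo with an anonymous mapping, then place a GTT mmap
 * of another bo over the same range with MAP_FIXED. Any later GTT
 * invalidation now fires the mmu notifier for the userptr range while
 * struct_mutex is already held. */
static void map_fixed_alias(int fd, uint32_t gtt_handle, size_t size)
{
	struct drm_i915_gem_userptr userptr;
	struct drm_i915_gem_mmap_gtt mmap_gtt;
	void *ptr;

	ptr = mmap(NULL, size, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	memset(&userptr, 0, sizeof(userptr));
	userptr.user_ptr = (uintptr_t)ptr;
	userptr.user_size = size;
	ioctl(fd, DRM_IOCTL_I915_GEM_USERPTR, &userptr);
	/* userptr.handle now tracks [ptr, ptr + size) */

	memset(&mmap_gtt, 0, sizeof(mmap_gtt));
	mmap_gtt.handle = gtt_handle;
	ioctl(fd, DRM_IOCTL_I915_GEM_MMAP_GTT, &mmap_gtt);

	/* MAP_FIXED silently unmaps the anonymous vma underneath the
	 * userptr and replaces it with the GTT mmapping of gtt_handle. */
	mmap(ptr, size, PROT_READ | PROT_WRITE,
	     MAP_SHARED | MAP_FIXED, fd, mmap_gtt.offset);
}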
The patch makes one eye-catching change: the removal of serial = 0
after detecting a to-be-freed object inside the invalidate walker. I
felt setting serial = 0 was a questionable pessimisation: it denies us
the chance to reuse the current iterator for the next loop (before it
is freed), and being explicit makes the reader question the validity of
the locking (since the object-free race could occur elsewhere). The
serialisation of the iterator is through the spinlock; if the object is
freed before the next loop, the notifier.serial will have been
incremented and we restart the walk from the beginning as we detect the
invalid cache.

v2: Grammar fixes
v3: Reorder set-active so that it is only set when obj->pages is set
(and so needs cancellation). Only the order of setting obj->pages and
the active-flag is crucial. Calling gup after invalidate-range begin
means the userptr sees the new set of backing storage (and so will not
need to invalidate its new pages), but we have to be careful not to set
the active-flag prior to successfully establishing obj->pages.

Reported-by: Michał Winiarski <michal.winiarski@intel.com>
Testcase: igt/gem_userptr_blits/map-fixed*
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Michał Winiarski <michal.winiarski@intel.com>
Cc: Tvrtko Ursulin
Cc: stable@vger.kernel.org
---
 drivers/gpu/drm/i915/i915_gem_userptr.c | 43 +++++++++++++++++++++++++++------
 1 file changed, 36 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem_userptr.c b/drivers/gpu/drm/i915/i915_gem_userptr.c
index cb367d9f7909..d3213fdefafc 100644
--- a/drivers/gpu/drm/i915/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/i915_gem_userptr.c
@@ -59,6 +59,7 @@ struct i915_mmu_object {
 	struct interval_tree_node it;
 	struct list_head link;
 	struct drm_i915_gem_object *obj;
+	bool active;
 	bool is_linear;
 };
 
@@ -114,7 +115,8 @@ restart:
 
 		obj = mo->obj;
 
-		if (!kref_get_unless_zero(&obj->base.refcount))
+		if (!mo->active ||
+		    !kref_get_unless_zero(&obj->base.refcount))
 			continue;
 
 		spin_unlock(&mn->lock);
@@ -151,7 +153,8 @@ static void i915_gem_userptr_mn_invalidate_range_start(struct mmu_notifier *_mn,
 		else
 			it = interval_tree_iter_first(&mn->objects, start, end);
 		if (it != NULL) {
-			obj = container_of(it, struct i915_mmu_object, it)->obj;
+			struct i915_mmu_object *mo =
+				container_of(it, struct i915_mmu_object, it);
 
 			/* The mmu_object is released late when destroying the
 			 * GEM object so it is entirely possible to gain a
@@ -160,11 +163,9 @@ static void i915_gem_userptr_mn_invalidate_range_start(struct mmu_notifier *_mn,
 			 * the struct_mutex - and consequently use it after it
 			 * is freed and then double free it.
 			 */
-			if (!kref_get_unless_zero(&obj->base.refcount)) {
-				spin_unlock(&mn->lock);
-				serial = 0;
-				continue;
-			}
+			if (mo->active &&
+			    kref_get_unless_zero(&mo->obj->base.refcount))
+				obj = mo->obj;
 
 			serial = mn->serial;
 		}
@@ -546,6 +547,30 @@ err:
 }
 
 static void
+__i915_gem_userptr_set_active(struct drm_i915_gem_object *obj,
+			      bool value)
+{
+	/* During mm_invalidate_range we need to cancel any userptr that
+	 * overlaps the range being invalidated. Doing so requires the
+	 * struct_mutex, and that risks recursion. In order to cause
+	 * recursion, the user must alias the userptr address space with
+	 * a GTT mmapping (possible with a MAP_FIXED) - then when we have
+	 * to invalidate that mmaping, mm_invalidate_range is called with
+	 * the userptr address *and* the struct_mutex held. To prevent that
+	 * we set a flag under the i915_mmu_notifier spinlock to indicate
+	 * whether this object is valid.
+	 */
+#if defined(CONFIG_MMU_NOTIFIER)
+	if (obj->userptr.mmu_object == NULL)
+		return;
+
+	spin_lock(&obj->userptr.mmu_object->mn->lock);
+	obj->userptr.mmu_object->active = value;
+	spin_unlock(&obj->userptr.mmu_object->mn->lock);
+#endif
+}
+
+static void
 __i915_gem_userptr_get_pages_worker(struct work_struct *_work)
 {
 	struct get_pages_work *work = container_of(_work, typeof(*work), work);
@@ -585,6 +610,7 @@ __i915_gem_userptr_get_pages_worker(struct work_struct *_work)
 		if (pinned == num_pages) {
 			ret = st_set_pages(&obj->pages, pvec, num_pages);
 			if (ret == 0) {
+				__i915_gem_userptr_set_active(obj, true);
 				list_add_tail(&obj->global_list, &to_i915(dev)->mm.unbound_list);
 				pinned = 0;
 			}
@@ -699,6 +725,7 @@ i915_gem_userptr_get_pages(struct drm_i915_gem_object *obj)
 	} else {
 		ret = st_set_pages(&obj->pages, pvec, num_pages);
 		if (ret == 0) {
+			__i915_gem_userptr_set_active(obj, true);
 			obj->userptr.work = NULL;
 			pinned = 0;
 		}
@@ -732,6 +759,8 @@ i915_gem_userptr_put_pages(struct drm_i915_gem_object *obj)
 
 	sg_free_table(obj->pages);
 	kfree(obj->pages);
+
+	__i915_gem_userptr_set_active(obj, false);
 }
 
 static void
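As an aside for reviewers, the locking pattern the commit message
describes can be distilled into a small userspace analogue. This is
only a sketch with stand-in names (pthread locks in place of the
i915_mmu_notifier spinlock and struct_mutex), not driver code:

#include <pthread.h>
#include <stdbool.h>

/* Stand-in names: "lock" plays the role of the i915_mmu_notifier
 * spinlock and "heavy" the role of struct_mutex. */
struct object {
	pthread_spinlock_t lock;
	pthread_mutex_t heavy;
	bool active;	/* raised only once the pages are established */
};

/* Mirrors put_pages: drop the backing storage, then retire the flag
 * under the spinlock. Called with "heavy" held. */
static void release_pages(struct object *obj)
{
	/* ... free the backing storage ... */
	pthread_spin_lock(&obj->lock);
	obj->active = false;
	pthread_spin_unlock(&obj->lock);
}

/* Mirrors invalidate_range_start: a cheap flag test under the spinlock
 * decides whether the heavyweight mutex is needed at all. */
static void invalidate_range(struct object *obj)
{
	bool need_cancel;

	pthread_spin_lock(&obj->lock);
	need_cancel = obj->active;
	pthread_spin_unlock(&obj->lock);

	if (!need_cancel)
		return;	/* already retired: never touch "heavy" */

	pthread_mutex_lock(&obj->heavy);
	release_pages(obj);
	pthread_mutex_unlock(&obj->heavy);
}

The point is that the first invalidation retires the object (active
goes false in the release path), so every later invalidation of the
aliased range returns before touching the heavyweight mutex that the
caller may already hold.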