From patchwork Tue Jul  1 18:17:48 2014
From: Ben Widawsky
To: Intel GFX
Cc: Ben Widawsky, Ben Widawsky
Date: Tue,  1 Jul 2014 11:17:48 -0700
Message-Id: <1404238671-18760-14-git-send-email-benjamin.widawsky@intel.com>
In-Reply-To: <1404238671-18760-1-git-send-email-benjamin.widawsky@intel.com>
References: <1404238671-18760-1-git-send-email-benjamin.widawsky@intel.com>
X-Mailer: git-send-email 2.0.1
List-Id: Intel graphics driver community testing & development
Subject: [Intel-gfx] [PATCH 13/16] drm/i915: More correct (slower) ppgtt cleanup

If a VM still has objects which are bound (exactly: which have a node
reserved in the drm_mm) and we are in the middle of a reset, we have no
hope of the standard methods fixing the situation (idling the rings
won't work). We must therefore let the reset handler take its course,
and then we can resume tearing down the VM.

This logic very much duplicates^Wresembles the logic in our wait-for-error
code. I've decided to leave it open coded because I expect this bit of
code to require tweaks and changes over time.

Interruption via signals causes a really similar problem.

This should deprecate the need for the as yet unmerged patch from Chris
(and an identical patch from me, which was first!!):
drm/i915: Prevent signals from interrupting close()

I have a follow-up patch to implement deferred free, before you complain.
Signed-off-by: Ben Widawsky
---
 drivers/gpu/drm/i915/i915_gem_context.c | 51 +++++++++++++++++++++++++++++++--
 1 file changed, 48 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c
index 8d106d9..e1b5613 100644
--- a/drivers/gpu/drm/i915/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/i915_gem_context.c
@@ -101,6 +101,32 @@ static void do_ppgtt_cleanup(struct i915_hw_ppgtt *ppgtt)
 	struct drm_device *dev = ppgtt->base.dev;
 	struct drm_i915_private *dev_priv = dev->dev_private;
 	struct i915_address_space *vm = &ppgtt->base;
+	bool do_idle = false;
+	int ret;
+
+	/* If we get here while in reset, we need to let the reset handler run
+	 * first, or else our VM teardown isn't going to go smoothly. There are
+	 * a couple of options at this point, but letting the reset handler do
+	 * its thing is the most desirable. The reset handler will take care of
+	 * retiring the stuck requests.
+	 */
+	if (i915_reset_in_progress(&dev_priv->gpu_error)) {
+		mutex_unlock(&dev->struct_mutex);
+#define EXIT_COND (!i915_reset_in_progress(&dev_priv->gpu_error) || \
+		    i915_terminally_wedged(&dev_priv->gpu_error))
+		ret = wait_event_timeout(dev_priv->gpu_error.reset_queue,
+					 EXIT_COND,
+					 10 * HZ);
+		if (!ret) {
+			/* it's unlikely idling will solve anything, but it
+			 * shouldn't hurt to try. */
+			do_idle = true;
+			/* TODO: go down kicking and screaming harder */
+		}
+#undef EXIT_COND
+
+		mutex_lock(&dev->struct_mutex);
+	}
 
 	if (ppgtt == dev_priv->mm.aliasing_ppgtt ||
 	    (list_empty(&vm->active_list) && list_empty(&vm->inactive_list))) {
@@ -117,14 +143,33 @@ static void do_ppgtt_cleanup(struct i915_hw_ppgtt *ppgtt)
 	if (!list_empty(&vm->active_list)) {
 		struct i915_vma *vma;
 
+		do_idle = true;
 		list_for_each_entry(vma, &vm->active_list, mm_list)
 			if (WARN_ON(list_empty(&vma->vma_link) ||
 				    list_is_singular(&vma->vma_link)))
 				break;
 
-		i915_gem_evict_vm(&ppgtt->base, true, true);
-	} else {
+	} else
 		i915_gem_retire_requests(dev);
-		i915_gem_evict_vm(&ppgtt->base, false, true);
+
+	/* We have a problem here where VM teardown cannot be interrupted, or
+	 * else the ppgtt cleanup will fail. As an example, a precisely timed
+	 * SIGKILL could lead to an OOPS, or worse. There are two options:
+	 * 1. Make the eviction uninterruptible
+	 * 2. Defer the eviction if it was interrupted.
+	 *
+	 * Option #1 is not the friendliest, but it's the easiest to implement,
+	 * and least error prone.
+	 * TODO: Implement option 2
+	 */
+	ret = i915_gem_evict_vm(&ppgtt->base, do_idle, !do_idle);
+	if (ret == -ERESTARTSYS)
+		ret = i915_gem_evict_vm(&ppgtt->base, do_idle, false);
+	WARN_ON(ret);
+	WARN_ON(!list_empty(&vm->active_list));
+
+	/* This is going to blow up badly if the mm is unclean */
+	if (WARN_ON(!list_empty(&ppgtt->base.mm.head_node.node_list))) {
+		/* TODO: go down kicking and screaming harder++ */
 	}
 
 	ppgtt->base.cleanup(&ppgtt->base);