From patchwork Thu Aug 1 00:00:05 2013
X-Patchwork-Submitter: Ben Widawsky
X-Patchwork-Id: 2836640
From: Ben Widawsky
To: Intel GFX <intel-gfx@lists.freedesktop.org>
Cc: Ben Widawsky
Date: Wed, 31 Jul 2013 17:00:05 -0700
Message-Id: <1375315222-4785-13-git-send-email-ben@bwidawsk.net>
X-Mailer: git-send-email 1.8.3.4
In-Reply-To: <1375315222-4785-1-git-send-email-ben@bwidawsk.net>
References: <1375315222-4785-1-git-send-email-ben@bwidawsk.net>
Subject: [Intel-gfx] [PATCH 12/29] drm/i915: make reset&hangcheck code VM aware

Hangcheck, and some of the recent reset code for guilty batches, needs to
know which address space the object was in at the time of a hangcheck.
This is because we use offsets in the (PP|G)GTT to determine this
information, and those offsets can differ depending on which VM they are
bound into.

Since we still only ever have 1 VM, this code shouldn't yet have any
impact.
Signed-off-by: Ben Widawsky
---
 drivers/gpu/drm/i915/i915_gem.c | 30 +++++++++++++++++++++++-------
 1 file changed, 23 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index dbf72d5..b4c35f0 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2110,10 +2110,11 @@ i915_gem_request_remove_from_client(struct drm_i915_gem_request *request)
 	spin_unlock(&file_priv->mm.lock);
 }
 
-static bool i915_head_inside_object(u32 acthd, struct drm_i915_gem_object *obj)
+static bool i915_head_inside_object(u32 acthd, struct drm_i915_gem_object *obj,
+				    struct i915_address_space *vm)
 {
-	if (acthd >= i915_gem_obj_ggtt_offset(obj) &&
-	    acthd < i915_gem_obj_ggtt_offset(obj) + obj->base.size)
+	if (acthd >= i915_gem_obj_offset(obj, vm) &&
+	    acthd < i915_gem_obj_offset(obj, vm) + obj->base.size)
 		return true;
 
 	return false;
@@ -2136,6 +2137,17 @@ static bool i915_head_inside_request(const u32 acthd_unmasked,
 	return false;
 }
 
+static struct i915_address_space *
+request_to_vm(struct drm_i915_gem_request *request)
+{
+	struct drm_i915_private *dev_priv = request->ring->dev->dev_private;
+	struct i915_address_space *vm;
+
+	vm = &dev_priv->gtt.base;
+
+	return vm;
+}
+
 static bool i915_request_guilty(struct drm_i915_gem_request *request,
 				const u32 acthd, bool *inside)
 {
@@ -2143,9 +2155,9 @@ static bool i915_request_guilty(struct drm_i915_gem_request *request,
 	 * pointing inside the ring, matches the batch_obj address range.
 	 * However this is extremely unlikely.
 	 */
-
 	if (request->batch_obj) {
-		if (i915_head_inside_object(acthd, request->batch_obj)) {
+		if (i915_head_inside_object(acthd, request->batch_obj,
+					    request_to_vm(request))) {
 			*inside = true;
 			return true;
 		}
@@ -2165,17 +2177,21 @@ static void i915_set_reset_status(struct intel_ring_buffer *ring,
 {
 	struct i915_ctx_hang_stats *hs = NULL;
 	bool inside, guilty;
+	unsigned long offset = 0;
 
 	/* Innocent until proven guilty */
 	guilty = false;
 
+	if (request->batch_obj)
+		offset = i915_gem_obj_offset(request->batch_obj,
+					     request_to_vm(request));
+
 	if (ring->hangcheck.action != wait &&
 	    i915_request_guilty(request, acthd, &inside)) {
 		DRM_ERROR("%s hung %s bo (0x%lx ctx %d) at 0x%x\n",
 			  ring->name,
 			  inside ? "inside" : "flushing",
-			  request->batch_obj ?
-			  i915_gem_obj_ggtt_offset(request->batch_obj) : 0,
+			  offset,
 			  request->ctx ? request->ctx->id : 0,
 			  acthd);
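
For readers skimming the series, the core idea of the change above is that an
object's offset is a property of the (object, address space) pair, so the
guilty-batch test has to ask where the batch sits in that particular VM rather
than in the global GTT. The following is a standalone, simplified sketch of
that idea, not driver code: the struct layouts, field names, and the example
offsets are made up for illustration, and only i915_head_inside_object's shape
is taken from the hunks above.

/*
 * Standalone sketch: the same object can be bound at different offsets in
 * different address spaces, so a head-pointer range check is only meaningful
 * against one specific VM.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct address_space {
	const char *name;
	uint32_t obj_offset;	/* where the object is bound in this VM */
};

struct gem_object {
	uint32_t size;
};

/* Simplified analogue of i915_head_inside_object(): is the unmasked head
 * pointer within the object's range as mapped in vm? */
static bool head_inside_object(uint32_t acthd,
			       const struct gem_object *obj,
			       const struct address_space *vm)
{
	return acthd >= vm->obj_offset &&
	       acthd < vm->obj_offset + obj->size;
}

int main(void)
{
	struct gem_object batch = { .size = 0x1000 };
	/* The same batch object bound at different offsets in two VMs. */
	struct address_space ggtt  = { "ggtt",  0x00010000 };
	struct address_space ppgtt = { "ppgtt", 0x00200000 };
	uint32_t acthd = 0x00200040;	/* hypothetical hung head pointer */

	/* Checking against the wrong VM gives the wrong verdict. */
	printf("inside batch in %s:  %d\n", ggtt.name,
	       head_inside_object(acthd, &batch, &ggtt));
	printf("inside batch in %s: %d\n", ppgtt.name,
	       head_inside_object(acthd, &batch, &ppgtt));
	return 0;
}

With only the global GTT in play today (request_to_vm() always returns
&dev_priv->gtt.base), both checks coincide, which is why the patch has no
functional impact yet; the distinction starts to matter once per-process
address spaces can bind the batch elsewhere.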