From patchwork Wed Nov 14 16:14:05 2012
X-Patchwork-Submitter: Daniel Vetter
X-Patchwork-Id: 1742651
From: Daniel Vetter
To: Intel Graphics Development
Cc: Daniel Vetter
Date: Wed, 14 Nov 2012 17:14:05 +0100
Message-Id: <1352909648-21514-4-git-send-email-daniel.vetter@ffwll.ch>
In-Reply-To: <1352909648-21514-1-git-send-email-daniel.vetter@ffwll.ch>
References: <1352909648-21514-1-git-send-email-daniel.vetter@ffwll.ch>
Subject: [Intel-gfx] [PATCH 3/6] drm/i915: move wedged to the other gpu error handling stuff

And to make Ben Widawsky happier, use the gpu_error instead of the
entire device as the argument in some functions.

Drop the outdated comment on ->wedged for now, a follow-up patch will
change the semantics and add a proper comment again.

Signed-off-by: Daniel Vetter
Reviewed-by: Damien Lespiau
---
 drivers/gpu/drm/i915/i915_debugfs.c     |  2 +-
 drivers/gpu/drm/i915/i915_drv.h         | 13 +++----------
 drivers/gpu/drm/i915/i915_gem.c         | 34 ++++++++++++++++-----------------
 drivers/gpu/drm/i915/i915_irq.c         |  6 +++---
 drivers/gpu/drm/i915/intel_display.c    |  4 ++--
 drivers/gpu/drm/i915/intel_ringbuffer.c |  6 ++++--
 6 files changed, 30 insertions(+), 35 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index 276997a..ad4cdfe 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -1596,7 +1596,7 @@ i915_wedged_read(struct file *filp,
 
 	len = snprintf(buf, sizeof(buf),
 		       "wedged : %d\n",
-		       atomic_read(&dev_priv->mm.wedged));
+		       atomic_read(&dev_priv->gpu_error.wedged));
 
 	if (len > sizeof(buf))
 		len = sizeof(buf);
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 03218f9..6958bb0 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -694,15 +694,6 @@ struct i915_gem_mm {
 	 */
 	int suspended;
 
-	/**
-	 * Flag if the hardware appears to be wedged.
-	 *
-	 * This is set when attempts to idle the device timeout.
-	 * It prevents command submission from occurring and makes
-	 * every pending request fail
-	 */
-	atomic_t wedged;
-
 	/** Bit 6 swizzling required for X tiling */
 	uint32_t bit_6_swizzle_x;
 	/** Bit 6 swizzling required for Y tiling */
@@ -736,6 +727,8 @@ struct i915_gpu_error {
 
 	unsigned long last_reset;
 
+	atomic_t wedged;
+
 	/* For gpu hang simulation. */
 	unsigned int stop_rings;
 };
@@ -1462,7 +1455,7 @@ i915_gem_object_unpin_fence(struct drm_i915_gem_object *obj)
 
 void i915_gem_retire_requests(struct drm_device *dev);
 void i915_gem_retire_requests_ring(struct intel_ring_buffer *ring);
-int __must_check i915_gem_check_wedge(struct drm_i915_private *dev_priv,
+int __must_check i915_gem_check_wedge(struct i915_gpu_error *error,
 				      bool interruptible);
 
 void i915_gem_reset(struct drm_device *dev);
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 6e29bed..b2620c7 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -87,14 +87,13 @@ static void i915_gem_info_remove_obj(struct drm_i915_private *dev_priv,
 }
 
 static int
-i915_gem_wait_for_error(struct drm_device *dev)
+i915_gem_wait_for_error(struct i915_gpu_error *error)
 {
-	struct drm_i915_private *dev_priv = dev->dev_private;
-	struct completion *x = &dev_priv->gpu_error.completion;
+	struct completion *x = &error->completion;
 	unsigned long flags;
 	int ret;
 
-	if (!atomic_read(&dev_priv->mm.wedged))
+	if (!atomic_read(&error->wedged))
 		return 0;
 
 	/*
@@ -110,7 +109,7 @@ i915_gem_wait_for_error(struct drm_device *dev)
 		return ret;
 	}
 
-	if (atomic_read(&dev_priv->mm.wedged)) {
+	if (atomic_read(&error->wedged)) {
 		/* GPU is hung, bump the completion count to account for
 		 * the token we just consumed so that we never hit zero and
 		 * end up waiting upon a subsequent completion event that
@@ -125,9 +124,10 @@
 
 int i915_mutex_lock_interruptible(struct drm_device *dev)
 {
+	struct drm_i915_private *dev_priv = dev->dev_private;
 	int ret;
 
-	ret = i915_gem_wait_for_error(dev);
+	ret = i915_gem_wait_for_error(&dev_priv->gpu_error);
 	if (ret)
 		return ret;
 
@@ -940,11 +940,11 @@ unlock:
 }
 
 int
-i915_gem_check_wedge(struct drm_i915_private *dev_priv,
+i915_gem_check_wedge(struct i915_gpu_error *error,
 		     bool interruptible)
 {
-	if (atomic_read(&dev_priv->mm.wedged)) {
-		struct completion *x = &dev_priv->gpu_error.completion;
+	if (atomic_read(&error->wedged)) {
+		struct completion *x = &error->completion;
 		bool recovery_complete;
 		unsigned long flags;
 
@@ -1026,7 +1026,7 @@ static int __wait_seqno(struct intel_ring_buffer *ring, u32 seqno,
 
 #define EXIT_COND \
 	(i915_seqno_passed(ring->get_seqno(ring, false), seqno) || \
-	atomic_read(&dev_priv->mm.wedged))
+	atomic_read(&dev_priv->gpu_error.wedged))
 	do {
 		if (interruptible)
 			end = wait_event_interruptible_timeout(ring->irq_queue,
@@ -1036,7 +1036,7 @@ static int __wait_seqno(struct intel_ring_buffer *ring, u32 seqno,
 			end = wait_event_timeout(ring->irq_queue, EXIT_COND,
 						 timeout_jiffies);
 
-		ret = i915_gem_check_wedge(dev_priv, interruptible);
+		ret = i915_gem_check_wedge(&dev_priv->gpu_error, interruptible);
 		if (ret)
 			end = ret;
 	} while (end == 0 && wait_forever);
@@ -1082,7 +1082,7 @@ i915_wait_seqno(struct intel_ring_buffer *ring, uint32_t seqno)
 	BUG_ON(!mutex_is_locked(&dev->struct_mutex));
 	BUG_ON(seqno == 0);
 
-	ret = i915_gem_check_wedge(dev_priv, interruptible);
+	ret = i915_gem_check_wedge(&dev_priv->gpu_error, interruptible);
 	if (ret)
 		return ret;
 
@@ -1147,7 +1147,7 @@ i915_gem_object_wait_rendering__nonblocking(struct drm_i915_gem_object *obj,
 	if (seqno == 0)
 		return 0;
 
-	ret = i915_gem_check_wedge(dev_priv, true);
+	ret = i915_gem_check_wedge(&dev_priv->gpu_error, true);
 	if (ret)
 		return ret;
 
@@ -1385,7 +1385,7 @@ out:
 		/* If this -EIO is due to a gpu hang, give the reset code a
 		 * chance to clean up the mess. Otherwise return the proper
 		 * SIGBUS. */
-		if (!atomic_read(&dev_priv->mm.wedged))
+		if (!atomic_read(&dev_priv->gpu_error.wedged))
 			return VM_FAULT_SIGBUS;
 	case -EAGAIN:
 		/* Give the error handler a chance to run and move the
@@ -3402,7 +3402,7 @@ i915_gem_ring_throttle(struct drm_device *dev, struct drm_file *file)
 	u32 seqno = 0;
 	int ret;
 
-	if (atomic_read(&dev_priv->mm.wedged))
+	if (atomic_read(&dev_priv->gpu_error.wedged))
 		return -EIO;
 
 	spin_lock(&file_priv->mm.lock);
@@ -4027,9 +4027,9 @@ i915_gem_entervt_ioctl(struct drm_device *dev, void *data,
 	if (drm_core_check_feature(dev, DRIVER_MODESET))
 		return 0;
 
-	if (atomic_read(&dev_priv->mm.wedged)) {
+	if (atomic_read(&dev_priv->gpu_error.wedged)) {
 		DRM_ERROR("Reenabling wedged hardware, good luck\n");
-		atomic_set(&dev_priv->mm.wedged, 0);
+		atomic_set(&dev_priv->gpu_error.wedged, 0);
 	}
 
 	mutex_lock(&dev->struct_mutex);
diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
index 8b71e1d..9d8921a 100644
--- a/drivers/gpu/drm/i915/i915_irq.c
+++ b/drivers/gpu/drm/i915/i915_irq.c
@@ -847,11 +847,11 @@ static void i915_error_work_func(struct work_struct *work)
 
 	kobject_uevent_env(&dev->primary->kdev.kobj, KOBJ_CHANGE, error_event);
 
-	if (atomic_read(&dev_priv->mm.wedged)) {
+	if (atomic_read(&dev_priv->gpu_error.wedged)) {
 		DRM_DEBUG_DRIVER("resetting chip\n");
 		kobject_uevent_env(&dev->primary->kdev.kobj, KOBJ_CHANGE, reset_event);
 		if (!i915_reset(dev)) {
-			atomic_set(&dev_priv->mm.wedged, 0);
+			atomic_set(&dev_priv->gpu_error.wedged, 0);
 			kobject_uevent_env(&dev->primary->kdev.kobj, KOBJ_CHANGE, reset_done_event);
 		}
 		complete_all(&dev_priv->gpu_error.completion);
@@ -1435,7 +1435,7 @@ void i915_handle_error(struct drm_device *dev, bool wedged)
 
 	if (wedged) {
 		INIT_COMPLETION(dev_priv->gpu_error.completion);
-		atomic_set(&dev_priv->mm.wedged, 1);
+		atomic_set(&dev_priv->gpu_error.wedged, 1);
 
 		/*
 		 * Wakeup waiting processes so they don't hang
diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index 655f87c..e321c9e 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -2242,7 +2242,7 @@ intel_finish_fb(struct drm_framebuffer *old_fb)
 	int ret;
 
 	wait_event(dev_priv->pending_flip_queue,
-		   atomic_read(&dev_priv->mm.wedged) ||
+		   atomic_read(&dev_priv->gpu_error.wedged) ||
 		   atomic_read(&obj->pending_flip) == 0);
 
 	/* Big Hammer, we also need to ensure that any pending
@@ -2963,7 +2963,7 @@ static bool intel_crtc_has_pending_flip(struct drm_crtc *crtc)
 	unsigned long flags;
 	bool pending;
 
-	if (atomic_read(&dev_priv->mm.wedged))
+	if (atomic_read(&dev_priv->gpu_error.wedged))
 		return false;
 
 	spin_lock_irqsave(&dev->event_lock, flags);
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index a81cdb4..8279dd0 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -1305,7 +1305,8 @@ int intel_wait_ring_buffer(struct intel_ring_buffer *ring, int n)
 
 		msleep(1);
 
-		ret = i915_gem_check_wedge(dev_priv, dev_priv->mm.interruptible);
+		ret = i915_gem_check_wedge(&dev_priv->gpu_error,
+					   dev_priv->mm.interruptible);
 		if (ret)
 			return ret;
 	} while (!time_after(jiffies, end));
@@ -1320,7 +1321,8 @@ int intel_ring_begin(struct intel_ring_buffer *ring,
 	int n = 4*num_dwords;
 	int ret;
 
-	ret = i915_gem_check_wedge(dev_priv, dev_priv->mm.interruptible);
+	ret = i915_gem_check_wedge(&dev_priv->gpu_error,
+				   dev_priv->mm.interruptible);
 	if (ret)
 		return ret;
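
For readers who want the net effect without walking every hunk, here is a
minimal, self-contained C sketch of the call pattern this patch establishes.
It is not the driver's actual code: struct i915_gpu_error_sketch and
check_wedge_sketch() are hypothetical stand-ins, simplified from the diff
above, and the real i915_gem_check_wedge() also consults the reset-recovery
completion before deciding between -EAGAIN and -EIO.

/*
 * Sketch only: after this patch the wedged flag lives in the gpu error
 * state, and helpers take a pointer to that state rather than to the
 * whole drm_i915_private.
 */
#include <linux/atomic.h>
#include <linux/completion.h>
#include <linux/errno.h>
#include <linux/types.h>

struct i915_gpu_error_sketch {
	struct completion completion;	/* signalled when reset handling is done */
	unsigned long last_reset;
	atomic_t wedged;		/* moved here from i915_gem_mm */
	unsigned int stop_rings;	/* for gpu hang simulation */
};

/*
 * Hypothetical stand-in for i915_gem_check_wedge(); callers now pass the
 * error state directly, e.g.
 *	ret = i915_gem_check_wedge(&dev_priv->gpu_error, interruptible);
 */
static int check_wedge_sketch(struct i915_gpu_error_sketch *error,
			      bool interruptible)
{
	if (!atomic_read(&error->wedged))
		return 0;

	/* Simplified: interruptible waiters may retry, others see -EIO. */
	return interruptible ? -EAGAIN : -EIO;
}

Passing only the gpu_error state keeps these helpers decoupled from the rest
of drm_i915_private, which is the point of the refactor.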