From patchwork Sat Aug 3 22:24:47 2013
From: Ben Widawsky
To: Chris Wilson, Intel GFX
Date: Sat, 3 Aug 2013 15:24:47 -0700
Message-ID: <20130803222447.GB28218@bwidawsk.net>
In-Reply-To: <20130803105942.GD4878@cantiga.alporthouse.com>
Subject: Re: [Intel-gfx] [PATCH 13/29] drm/i915: clear domains for all objects on reset

On Sat, Aug 03, 2013 at 11:59:42AM +0100, Chris Wilson wrote:
> On Wed, Jul 31, 2013 at 05:00:06PM -0700, Ben Widawsky wrote:
> > Simply iterating over one inactive list is insufficient for the way we
> > now track inactive (one list per address space). We could alternatively
> > do this with the bound + unbound lists and an inactive check, but this
> > way is a bit easier to understand.
> >
> > Signed-off-by: Ben Widawsky
> > ---
> >  drivers/gpu/drm/i915/i915_gem.c | 7 ++++---
> >  1 file changed, 4 insertions(+), 3 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> > index b4c35f0..8ce3545 100644
> > --- a/drivers/gpu/drm/i915/i915_gem.c
> > +++ b/drivers/gpu/drm/i915/i915_gem.c
> > @@ -2282,7 +2282,7 @@ void i915_gem_restore_fences(struct drm_device *dev)
> >  void i915_gem_reset(struct drm_device *dev)
> >  {
> >  	struct drm_i915_private *dev_priv = dev->dev_private;
> > -	struct i915_address_space *vm = &dev_priv->gtt.base;
> > +	struct i915_address_space *vm;
> >  	struct drm_i915_gem_object *obj;
> >  	struct intel_ring_buffer *ring;
> >  	int i;
> > @@ -2293,8 +2293,9 @@ void i915_gem_reset(struct drm_device *dev)
> >  	/* Move everything out of the GPU domains to ensure we do any
> >  	 * necessary invalidation upon reuse.
> >  	 */
> > -	list_for_each_entry(obj, &vm->inactive_list, mm_list)
> > -		obj->base.read_domains &= ~I915_GEM_GPU_DOMAINS;
> > +	list_for_each_entry(vm, &dev_priv->vm_list, global_link)
> > +		list_for_each_entry(obj, &vm->inactive_list, mm_list)
> > +			obj->base.read_domains &= ~I915_GEM_GPU_DOMAINS;
>
> This code is dead. Just remove it rather than port it to vma.
> -Chris
>
> --
> Chris Wilson, Intel Open Source Technology Centre

Got it, and moved to the front of the series.

commit 8472f08863da69159aa0a7555836ca0511754877
Author: Ben Widawsky
Date:   Sat Aug 3 15:22:17 2013 -0700

    drm/i915: eliminate dead domain clearing on reset

    The code is no longer accurate once we have multiple address spaces,
    since clearing the domains of every object would require scanning the
    inactive list of every VM. As it is dead code anyway, just remove it
    instead of porting it.

    "This code is dead. Just remove it rather than port it to vma."
    - Chris Wilson

    Recommended-by: Chris Wilson
    Signed-off-by: Ben Widawsky

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 3a5d4ba..c7e3cee 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2277,12 +2277,6 @@ void i915_gem_reset(struct drm_device *dev)
 	for_each_ring(ring, dev_priv, i)
 		i915_gem_reset_ring_lists(dev_priv, ring);
 
-	/* Move everything out of the GPU domains to ensure we do any
-	 * necessary invalidation upon reuse.
-	 */
-	list_for_each_entry(obj, &vm->inactive_list, mm_list)
-		obj->base.read_domains &= ~I915_GEM_GPU_DOMAINS;
-
 	i915_gem_restore_fences(dev);
 }
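For reference only, a minimal sketch of the "bound + unbound lists, and an
inactive check" alternative mentioned in the original commit message; it is
not part of either patch above, and it assumes this era's
dev_priv->mm.bound_list, obj->global_list and obj->active fields. Unbound
objects cannot be referenced by the GPU, so only the bound list should need
scanning:

	/*
	 * Hypothetical sketch, never posted: clear GPU read domains for
	 * every inactive bound object instead of walking the per-VM
	 * inactive lists.
	 */
	list_for_each_entry(obj, &dev_priv->mm.bound_list, global_list)
		if (!obj->active)
			obj->base.read_domains &= ~I915_GEM_GPU_DOMAINS;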