From patchwork Thu Feb  4 21:05:03 2010
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Daniel Vetter
X-Patchwork-Id: 77205
From: Daniel Vetter
To: intel-gfx@lists.freedesktop.org
Date: Thu, 4 Feb 2010 22:05:03 +0100
Message-Id: <1265317513-27723-4-git-send-email-daniel.vetter@ffwll.ch>
X-Mailer: git-send-email 1.6.6.1
In-Reply-To: <1265317513-27723-3-git-send-email-daniel.vetter@ffwll.ch>
References: <1265317513-27723-1-git-send-email-daniel.vetter@ffwll.ch>
 <1265317513-27723-2-git-send-email-daniel.vetter@ffwll.ch>
 <1265317513-27723-3-git-send-email-daniel.vetter@ffwll.ch>
Cc: Daniel Vetter
Subject: [Intel-gfx] [PATCH 03/13] drm/i915: move flushing list processing to
	i915_gem_flush
List-Id: Intel graphics driver community testing & development

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 3c0bf2c..b78b0e5 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1674,7 +1674,7 @@ i915_add_request(struct drm_device *dev, struct drm_file *file_priv,
 	}
 
 	/* Associate any objects on the flushing list matching the write
-	 * domain we're flushing with our flush.
+	 * domain we're flushing with our request.
 	 */
 	if (flush_domains != 0)
 		i915_gem_process_flushing_list(dev, flush_domains, seqno);
@@ -1852,6 +1852,7 @@ i915_do_wait_request(struct drm_device *dev, uint32_t seqno, int interruptible)
 	int ret = 0;
 
 	BUG_ON(seqno == 0);
+	BUG_ON(seqno == dev_priv->mm.next_gem_seqno);
 
 	if (atomic_read(&dev_priv->mm.wedged))
 		return -EIO;
@@ -1890,8 +1891,9 @@ i915_do_wait_request(struct drm_device *dev, uint32_t seqno, int interruptible)
 		ret = -EIO;
 
 	if (ret && ret != -ERESTARTSYS)
-		DRM_ERROR("%s returns %d (awaiting %d at %d)\n",
-			  __func__, ret, seqno, i915_get_gem_seqno(dev));
+		DRM_ERROR("%s returns %d (awaiting %d at %d, next %d)\n",
+			  __func__, ret, seqno, i915_get_gem_seqno(dev),
+			  dev_priv->mm.next_gem_seqno);
 
 	/* Directly dispatch request retiring.  While we have the work queue
 	 * to handle this, the waiter on a request often wants an associated
@@ -1985,6 +1987,13 @@ i915_gem_flush(struct drm_device *dev,
 		OUT_RING(MI_NOOP);
 		ADVANCE_LP_RING();
 	}
+
+	/* Associate any objects on the flushing list matching the write
+	 * domain we're flushing with the next request.
+	 */
+	if (flush_domains != 0)
+		i915_gem_process_flushing_list(dev, flush_domains, 0);
+
 }
 
 /**
@@ -2142,7 +2151,7 @@ i915_gpu_idle(struct drm_device *dev)
 	/* Flush everything onto the inactive list. */
 	i915_gem_flush(dev, I915_GEM_GPU_DOMAINS, I915_GEM_GPU_DOMAINS);
-	seqno = i915_add_request(dev, NULL, I915_GEM_GPU_DOMAINS);
+	seqno = i915_add_request(dev, NULL, 0);
 	if (seqno == 0)
 		return -ENOMEM;
@@ -2255,7 +2264,7 @@ i915_gem_evict_something(struct drm_device *dev, int min_size)
 			i915_gem_flush(dev,
 				       obj->write_domain,
 				       obj->write_domain);
-			seqno = i915_add_request(dev, NULL, obj->write_domain);
+			seqno = i915_add_request(dev, NULL, 0);
 			if (seqno == 0)
 				return -ENOMEM;
@@ -2768,7 +2777,7 @@ i915_gem_object_flush_gpu_write_domain(struct drm_gem_object *obj)
 	/* Queue the GPU write cache flushing we need. */
 	old_write_domain = obj->write_domain;
 	i915_gem_flush(dev, 0, obj->write_domain);
-	(void) i915_add_request(dev, NULL, obj->write_domain);
+	(void) i915_add_request(dev, NULL, 0);
 	BUG_ON(obj->write_domain);
 
 	trace_i915_gem_object_change_domain(obj,
@@ -3918,8 +3927,7 @@ i915_gem_do_execbuffer(struct drm_device *dev, void *data,
 				       dev->invalidate_domains,
 				       dev->flush_domains);
 		if (dev->flush_domains & I915_GEM_GPU_DOMAINS)
-			(void)i915_add_request(dev, file_priv,
-					       dev->flush_domains);
+			(void)i915_add_request(dev, file_priv, 0);
 	}
 
 	for (i = 0; i < args->buffer_count; i++) {