From patchwork Tue Sep 24 16:57:58 2013
X-Patchwork-Submitter: Ben Widawsky
X-Patchwork-Id: 2934671
From: Ben Widawsky
To: Intel GFX
Cc: Ben Widawsky
Date: Tue, 24 Sep 2013 09:57:58 -0700
Message-Id: <1380041881-3397-3-git-send-email-benjamin.widawsky@intel.com>
In-Reply-To: <1380041881-3397-1-git-send-email-benjamin.widawsky@intel.com>
References: <1380041881-3397-1-git-send-email-benjamin.widawsky@intel.com>
Subject: [Intel-gfx] [PATCH 3/6] drm/i915: Convert active API to VMA

From: Ben Widawsky

Even though we track object activity and not VMA activity, the active_list
is kept per VM, so it makes the most sense to use VMAs in the API.

NOTE: Daniel intends to eventually rip out the active/inactive LRUs, but
for now, leave them be.

v2: Remove the leftover hunk from the previous patch which didn't keep
i915_gem_object_move_to_active. That patch had to rely on the ring to get
the dev instead of the obj. (Chris)
Reviewed-by: Chris Wilson
Signed-off-by: Ben Widawsky
---
 drivers/gpu/drm/i915/i915_drv.h            | 5 ++---
 drivers/gpu/drm/i915/i915_gem.c            | 9 ++++++++-
 drivers/gpu/drm/i915/i915_gem_context.c    | 5 +----
 drivers/gpu/drm/i915/i915_gem_execbuffer.c | 3 +--
 4 files changed, 12 insertions(+), 10 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 67a31c2..8530e49 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -1901,9 +1901,8 @@ static inline void i915_gem_object_unpin_pages(struct drm_i915_gem_object *obj)
 int __must_check i915_mutex_lock_interruptible(struct drm_device *dev);
 int i915_gem_object_sync(struct drm_i915_gem_object *obj,
			 struct intel_ring_buffer *to);
-void i915_gem_object_move_to_active(struct drm_i915_gem_object *obj,
-				    struct intel_ring_buffer *ring);
-
+void i915_vma_move_to_active(struct i915_vma *vma,
+			     struct intel_ring_buffer *ring);
 int i915_gem_dumb_create(struct drm_file *file_priv,
			 struct drm_device *dev,
			 struct drm_mode_create_dumb *args);
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 6b134f0..f6c8b0e 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1910,7 +1910,7 @@ i915_gem_object_get_pages(struct drm_i915_gem_object *obj)
 	return 0;
 }
 
-void
+static void
 i915_gem_object_move_to_active(struct drm_i915_gem_object *obj,
			       struct intel_ring_buffer *ring)
 {
@@ -1949,6 +1949,13 @@ i915_gem_object_move_to_active(struct drm_i915_gem_object *obj,
 	}
 }
 
+void i915_vma_move_to_active(struct i915_vma *vma,
+			     struct intel_ring_buffer *ring)
+{
+	list_move_tail(&vma->mm_list, &vma->vm->active_list);
+	return i915_gem_object_move_to_active(vma->obj, ring);
+}
+
 static void
 i915_gem_object_move_to_inactive(struct drm_i915_gem_object *obj)
 {
diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c
index 9af3fe7..1a877a5 100644
--- a/drivers/gpu/drm/i915/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/i915_gem_context.c
@@ -453,11 +453,8 @@ static int do_switch(struct i915_hw_context *to)
 	 * MI_SET_CONTEXT instead of when the next seqno has completed.
 	 */
 	if (from != NULL) {
-		struct drm_i915_private *dev_priv = from->obj->base.dev->dev_private;
-		struct i915_address_space *ggtt = &dev_priv->gtt.base;
 		from->obj->base.read_domains = I915_GEM_DOMAIN_INSTRUCTION;
-		list_move_tail(&i915_gem_obj_to_vma(from->obj, ggtt)->mm_list, &ggtt->active_list);
-		i915_gem_object_move_to_active(from->obj, ring);
+		i915_vma_move_to_active(i915_gem_obj_to_ggtt(from->obj), ring);
 		/* As long as MI_SET_CONTEXT is serializing, ie. it flushes the
		 * whole damn pipeline, we don't need to explicitly mark the
		 * object dirty. The only exception is that the context must be
diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
index da23cfe..0ce0d47 100644
--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
@@ -872,8 +872,7 @@ i915_gem_execbuffer_move_to_active(struct list_head *vmas,
 		obj->base.read_domains = obj->base.pending_read_domains;
 		obj->fenced_gpu_access = obj->pending_fenced_gpu_access;
 
-		list_move_tail(&vma->mm_list, &vma->vm->active_list);
-		i915_gem_object_move_to_active(obj, ring);
+		i915_vma_move_to_active(vma, ring);
 		if (obj->base.write_domain) {
 			obj->dirty = 1;
 			obj->last_write_seqno = intel_ring_get_seqno(ring);
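
For reference, this is roughly what the conversion looks like from a
caller's point of view. The fragment below is only an illustrative sketch,
not part of the patch: the helper name example_mark_vma_active is invented
for the example, the declarations are copied from the i915_drv.h hunk
above, and the snippet assumes the rest of the i915 tree rather than
building standalone.

#include <linux/list.h>

struct intel_ring_buffer;
struct i915_vma;

/* New entry point added by this patch (declared in i915_drv.h). */
void i915_vma_move_to_active(struct i915_vma *vma,
			     struct intel_ring_buffer *ring);

/*
 * Before this patch, callers open-coded the per-VM LRU update and then
 * marked the backing object active:
 *
 *	list_move_tail(&vma->mm_list, &vma->vm->active_list);
 *	i915_gem_object_move_to_active(vma->obj, ring);
 *
 * After this patch, the VMA-based helper does both steps, and
 * i915_gem_object_move_to_active() becomes static to i915_gem.c.
 */
static void example_mark_vma_active(struct i915_vma *vma,
				    struct intel_ring_buffer *ring)
{
	i915_vma_move_to_active(vma, ring);
}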