From patchwork Tue Sep 24 16:57:59 2013
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Ben Widawsky
X-Patchwork-Id: 2934681
From: Ben Widawsky
To: Intel GFX <intel-gfx@lists.freedesktop.org>
Cc: Ben Widawsky
Date: Tue, 24 Sep 2013 09:57:59 -0700
Message-Id: <1380041881-3397-4-git-send-email-benjamin.widawsky@intel.com>
X-Mailer: git-send-email 1.8.4
In-Reply-To: <1380041881-3397-1-git-send-email-benjamin.widawsky@intel.com>
References: <1380041881-3397-1-git-send-email-benjamin.widawsky@intel.com>
Subject: [Intel-gfx] [PATCH 4/6] drm/i915: Add bind/unbind object functions to VM

As we plumb the code with more VM information, it has become more
obvious that the easiest way to deal with bind and unbind is to simply
put the function pointers in the vm, and let those choose the correct
way to handle the page table updates. This change allows many places in
the code to simply call vm->bind, and not have to worry about
distinguishing PPGTT vs GGTT.

Note that this patch has no impact on functionality. I've decided to
save the actual change until the next patch because I think it's easier
to review that way.

I'm happy to squash the two, or let Daniel do it on merge.

v2: Make ggtt handle the quirky aliasing ppgtt
Add flags to bind object to support above
Don't ever call bind/unbind directly for PPGTT until we have real, full
PPGTT (use NULLs to assert this)
Make sure we rebind the ggtt if there already is a ggtt binding. This
happens on set_cache_level.
Use VMA for bind/unbind (Daniel, Ben)

v3: Reorganize ggtt_vma_bind to be more concise and easier to read
(Ville). Change logic in unbind to only unbind ggtt when there is a
global mapping, and to remove a redundant check if the aliasing ppgtt
exists.

v4: Make the bind function a bit smarter about the cache levels to
avoid unnecessary multiple remaps. "I accept it is a wart, I think
unifying the pin_vma / bind_vma could be unified later" (Chris)
Removed the git notes, and put version info here. (Daniel)

v5: Update the comment to not suck (Chris)

Reviewed-by: Chris Wilson
Signed-off-by: Ben Widawsky
---
 drivers/gpu/drm/i915/i915_drv.h     |  69 ++++++++++++----------
 drivers/gpu/drm/i915/i915_gem_gtt.c | 112 ++++++++++++++++++++++++++++++++++++
 2 files changed, 151 insertions(+), 30 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 8530e49..9995cdb 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -465,6 +465,36 @@ enum i915_cache_level {
 
 typedef uint32_t gen6_gtt_pte_t;
 
+/**
+ * A VMA represents a GEM BO that is bound into an address space. Therefore, a
+ * VMA's presence cannot be guaranteed before binding, or after unbinding the
+ * object into/from the address space.
+ *
+ * To make things as simple as possible (ie. no refcounting), a VMA's lifetime
+ * will always be <= an objects lifetime. So object refcounting should cover us.
+ */
+struct i915_vma {
+	struct drm_mm_node node;
+	struct drm_i915_gem_object *obj;
+	struct i915_address_space *vm;
+
+	/** This object's place on the active/inactive lists */
+	struct list_head mm_list;
+
+	struct list_head vma_link; /* Link in the object's VMA list */
+
+	/** This vma's place in the batchbuffer or on the eviction list */
+	struct list_head exec_list;
+
+	/**
+	 * Used for performing relocations during execbuffer insertion.
+	 */
+	struct hlist_node exec_node;
+	unsigned long exec_handle;
+	struct drm_i915_gem_exec_object2 *exec_entry;
+
+};
+
 struct i915_address_space {
 	struct drm_mm mm;
 	struct drm_device *dev;
@@ -503,9 +533,18 @@ struct i915_address_space {
 	/* FIXME: Need a more generic return type */
 	gen6_gtt_pte_t (*pte_encode)(dma_addr_t addr,
 				     enum i915_cache_level level);
+
+	/** Unmap an object from an address space. This usually consists of
+	 * setting the valid PTE entries to a reserved scratch page. */
+	void (*unbind_vma)(struct i915_vma *vma);
 	void (*clear_range)(struct i915_address_space *vm,
 			    unsigned int first_entry,
 			    unsigned int num_entries);
+	/* Map an object into an address space with the given cache flags. */
+#define GLOBAL_BIND (1<<0)
+	void (*bind_vma)(struct i915_vma *vma,
+			 enum i915_cache_level cache_level,
+			 u32 flags);
 	void (*insert_entries)(struct i915_address_space *vm,
 			       struct sg_table *st,
 			       unsigned int first_entry,
@@ -552,36 +591,6 @@ struct i915_hw_ppgtt {
 	int (*enable)(struct drm_device *dev);
 };
 
-/**
- * A VMA represents a GEM BO that is bound into an address space. Therefore, a
- * VMA's presence cannot be guaranteed before binding, or after unbinding the
- * object into/from the address space.
- *
- * To make things as simple as possible (ie. no refcounting), a VMA's lifetime
- * will always be <= an objects lifetime. So object refcounting should cover us.
- */
-struct i915_vma {
-	struct drm_mm_node node;
-	struct drm_i915_gem_object *obj;
-	struct i915_address_space *vm;
-
-	/** This object's place on the active/inactive lists */
-	struct list_head mm_list;
-
-	struct list_head vma_link; /* Link in the object's VMA list */
-
-	/** This vma's place in the batchbuffer or on the eviction list */
-	struct list_head exec_list;
-
-	/**
-	 * Used for performing relocations during execbuffer insertion.
-	 */
-	struct hlist_node exec_node;
-	unsigned long exec_handle;
-	struct drm_i915_gem_exec_object2 *exec_entry;
-
-};
-
 struct i915_ctx_hang_stats {
 	/* This context had batch pending when hang was declared */
 	unsigned batch_pending;
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index e999496..65b61d4 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -57,6 +57,11 @@
 #define HSW_WB_ELLC_LLC_AGE0		HSW_CACHEABILITY_CONTROL(0xb)
 #define HSW_WT_ELLC_LLC_AGE0		HSW_CACHEABILITY_CONTROL(0x6)
 
+static void gen6_ppgtt_bind_vma(struct i915_vma *vma,
+				enum i915_cache_level cache_level,
+				u32 flags);
+static void gen6_ppgtt_unbind_vma(struct i915_vma *vma);
+
 static gen6_gtt_pte_t snb_pte_encode(dma_addr_t addr,
 				     enum i915_cache_level level)
 {
@@ -332,7 +337,9 @@ static int gen6_ppgtt_init(struct i915_hw_ppgtt *ppgtt)
 	ppgtt->base.pte_encode = dev_priv->gtt.base.pte_encode;
 	ppgtt->num_pd_entries = GEN6_PPGTT_PD_ENTRIES;
 	ppgtt->enable = gen6_ppgtt_enable;
+	ppgtt->base.unbind_vma = NULL;
 	ppgtt->base.clear_range = gen6_ppgtt_clear_range;
+	ppgtt->base.bind_vma = NULL;
 	ppgtt->base.insert_entries = gen6_ppgtt_insert_entries;
 	ppgtt->base.cleanup = gen6_ppgtt_cleanup;
 	ppgtt->base.scratch = dev_priv->gtt.base.scratch;
@@ -439,6 +446,18 @@ void i915_ppgtt_bind_object(struct i915_hw_ppgtt *ppgtt,
 			    cache_level);
 }
 
+static void __always_unused
+gen6_ppgtt_bind_vma(struct i915_vma *vma,
+		    enum i915_cache_level cache_level,
+		    u32 flags)
+{
+	const unsigned long entry = vma->node.start >> PAGE_SHIFT;
+
+	WARN_ON(flags);
+
+	gen6_ppgtt_insert_entries(vma->vm, vma->obj->pages, entry, cache_level);
+}
+
 void i915_ppgtt_unbind_object(struct i915_hw_ppgtt *ppgtt,
 			      struct drm_i915_gem_object *obj)
 {
@@ -447,6 +466,14 @@ void i915_ppgtt_unbind_object(struct i915_hw_ppgtt *ppgtt,
 			      obj->base.size >> PAGE_SHIFT);
 }
 
+static void __always_unused gen6_ppgtt_unbind_vma(struct i915_vma *vma)
+{
+	const unsigned long entry = vma->node.start >> PAGE_SHIFT;
+
+	gen6_ppgtt_clear_range(vma->vm, entry,
+			       vma->obj->base.size >> PAGE_SHIFT);
+}
+
 extern int intel_iommu_gfx_mapped;
 /* Certain Gen5 chipsets require require idling the GPU before
  * unmapping anything from the GTT when VT-d is enabled.
@@ -592,6 +619,19 @@ static void i915_ggtt_insert_entries(struct i915_address_space *vm,
 
 }
 
+static void i915_ggtt_bind_vma(struct i915_vma *vma,
+			       enum i915_cache_level cache_level,
+			       u32 unused)
+{
+	const unsigned long entry = vma->node.start >> PAGE_SHIFT;
+	unsigned int flags = (cache_level == I915_CACHE_NONE) ?
+			AGP_USER_MEMORY : AGP_USER_CACHED_MEMORY;
+
+	BUG_ON(!i915_is_ggtt(vma->vm));
+	intel_gtt_insert_sg_entries(vma->obj->pages, entry, flags);
+	vma->obj->has_global_gtt_mapping = 1;
+}
+
 static void i915_ggtt_clear_range(struct i915_address_space *vm,
 				  unsigned int first_entry,
 				  unsigned int num_entries)
@@ -599,6 +639,53 @@ static void i915_ggtt_clear_range(struct i915_address_space *vm,
 	intel_gtt_clear_range(first_entry, num_entries);
 }
 
+static void i915_ggtt_unbind_vma(struct i915_vma *vma)
+{
+	const unsigned int first = vma->node.start >> PAGE_SHIFT;
+	const unsigned int size = vma->obj->base.size >> PAGE_SHIFT;
+
+	BUG_ON(!i915_is_ggtt(vma->vm));
+	vma->obj->has_global_gtt_mapping = 0;
+	intel_gtt_clear_range(first, size);
+}
+
+static void gen6_ggtt_bind_vma(struct i915_vma *vma,
+			       enum i915_cache_level cache_level,
+			       u32 flags)
+{
+	struct drm_device *dev = vma->vm->dev;
+	struct drm_i915_private *dev_priv = dev->dev_private;
+	struct drm_i915_gem_object *obj = vma->obj;
+	const unsigned long entry = vma->node.start >> PAGE_SHIFT;
+
+	/* If there is no aliasing PPGTT, or the caller needs a global mapping,
+	 * or we have a global mapping already but the cacheability flags have
+	 * changed, set the global PTEs.
+	 *
+	 * If there is an aliasing PPGTT it is anecdotally faster, so use that
+	 * instead if none of the above hold true.
+	 *
+	 * NB: A global mapping should only be needed for special regions like
+	 * "gtt mappable", SNB errata, or if specified via special execbuf
+	 * flags. At all other times, the GPU will use the aliasing PPGTT.
+	 */
+	if (!dev_priv->mm.aliasing_ppgtt || flags & GLOBAL_BIND) {
+		if (!obj->has_global_gtt_mapping ||
+		    (cache_level != obj->cache_level)) {
+			gen6_ggtt_insert_entries(vma->vm, obj->pages, entry,
+						 cache_level);
+			obj->has_global_gtt_mapping = 1;
+		}
+	}
+
+	if (dev_priv->mm.aliasing_ppgtt &&
+	    (!obj->has_aliasing_ppgtt_mapping ||
+	     (cache_level != obj->cache_level))) {
+		gen6_ppgtt_insert_entries(&dev_priv->mm.aliasing_ppgtt->base,
+					  vma->obj->pages, entry, cache_level);
+		vma->obj->has_aliasing_ppgtt_mapping = 1;
+	}
+}
+
 void i915_gem_gtt_bind_object(struct drm_i915_gem_object *obj,
 			      enum i915_cache_level cache_level)
@@ -627,6 +714,27 @@ void i915_gem_gtt_unbind_object(struct drm_i915_gem_object *obj)
 
 	obj->has_global_gtt_mapping = 0;
 }
 
+static void gen6_ggtt_unbind_vma(struct i915_vma *vma)
+{
+	struct drm_device *dev = vma->vm->dev;
+	struct drm_i915_private *dev_priv = dev->dev_private;
+	struct drm_i915_gem_object *obj = vma->obj;
+	const unsigned long entry = vma->node.start >> PAGE_SHIFT;
+
+	if (obj->has_global_gtt_mapping) {
+		gen6_ggtt_clear_range(vma->vm, entry,
+				      vma->obj->base.size >> PAGE_SHIFT);
+		obj->has_global_gtt_mapping = 0;
+	}
+
+	if (obj->has_aliasing_ppgtt_mapping) {
+		gen6_ppgtt_clear_range(&dev_priv->mm.aliasing_ppgtt->base,
+				       entry,
+				       obj->base.size >> PAGE_SHIFT);
+		obj->has_aliasing_ppgtt_mapping = 0;
+	}
+}
+
 void i915_gem_gtt_finish_object(struct drm_i915_gem_object *obj)
 {
 	struct drm_device *dev = obj->base.dev;
@@ -860,7 +968,9 @@ static int gen6_gmch_probe(struct drm_device *dev,
 		DRM_ERROR("Scratch setup failed\n");
 
 	dev_priv->gtt.base.clear_range = gen6_ggtt_clear_range;
+	dev_priv->gtt.base.unbind_vma = gen6_ggtt_unbind_vma;
 	dev_priv->gtt.base.insert_entries = gen6_ggtt_insert_entries;
+	dev_priv->gtt.base.bind_vma = gen6_ggtt_bind_vma;
 
 	return ret;
 }
@@ -892,7 +1002,9 @@ static int i915_gmch_probe(struct drm_device *dev,
 	dev_priv->gtt.do_idle_maps = needs_idle_maps(dev_priv->dev);
 	dev_priv->gtt.base.clear_range = i915_ggtt_clear_range;
+	dev_priv->gtt.base.unbind_vma = i915_ggtt_unbind_vma;
 	dev_priv->gtt.base.insert_entries = i915_ggtt_insert_entries;
+	dev_priv->gtt.base.bind_vma = i915_ggtt_bind_vma;
 
 	return 0;
 }
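
A quick illustration of the dispatch pattern the commit message
describes: once bind_vma/unbind_vma live in struct i915_address_space,
a caller no longer needs to distinguish PPGTT from GGTT and simply goes
through the vm's function pointers. This is an editor's sketch of a
hypothetical call site, not code from the patch; only i915_vma,
bind_vma, unbind_vma and GLOBAL_BIND come from the change above, and
the function name is invented for illustration.

	/* Hypothetical call site: the vm supplies the implementation
	 * (e.g. gen6_ggtt_bind_vma for the global GTT), so the caller
	 * needs no PPGTT vs GGTT special casing. */
	static void example_bind_and_unbind(struct i915_vma *vma,
					    enum i915_cache_level level,
					    bool need_global)
	{
		/* GLOBAL_BIND requests PTEs in the global GTT even when
		 * an aliasing PPGTT is present. */
		u32 flags = need_global ? GLOBAL_BIND : 0;

		vma->vm->bind_vma(vma, level, flags);

		/* ... GPU work that uses the mapping ... */

		vma->vm->unbind_vma(vma);
	}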