From patchwork Tue Apr 7 15:21:01 2015
From: Chris Wilson
To: intel-gfx@lists.freedesktop.org
Date: Tue, 7 Apr 2015 16:21:01 +0100
Subject: [Intel-gfx] [PATCH 37/70] drm/i915: Squash more pointer indirection for i915_is_ggtt
Message-Id: <1428420094-18352-38-git-send-email-chris@chris-wilson.co.uk>
In-Reply-To: <1428420094-18352-1-git-send-email-chris@chris-wilson.co.uk>

12:58 < jlahtine> there're actually equally many i915_is_ggtt(vma->vm) calls
12:58 < jlahtine> (one less)
12:59 < jlahtine> so while at it I'd make it vm->is_ggtt and vma->is_ggtt
12:59 < jlahtine> then get rid of the whole helper, maybe
13:00 < ickle> you preempted my beautiful macro
13:03 < ickle> just don't complain about the increased churn

* to be squashed into the previous patch if desired
---
 drivers/gpu/drm/i915/i915_debugfs.c        |  4 ++--
 drivers/gpu/drm/i915/i915_drv.h            |  7 +------
 drivers/gpu/drm/i915/i915_gem.c            | 32 ++++++++++++++----------------
 drivers/gpu/drm/i915/i915_gem_execbuffer.c |  5 ++---
 drivers/gpu/drm/i915/i915_gem_gtt.c        | 21 ++++++++++----------
 drivers/gpu/drm/i915/i915_gem_gtt.h        |  1 +
 drivers/gpu/drm/i915/i915_gpu_error.c      |  2 +-
 drivers/gpu/drm/i915/i915_trace.h          | 18 ++++++-----------
 8 files changed, 39 insertions(+), 51 deletions(-)
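The shape of the change, as a condensed stand-alone sketch (simplified names, no kernel dependencies -- not the driver's actual definitions): the address space records once whether it is the GGTT, each VMA caches that flag when it is created, and call sites then test a plain bitfield instead of chasing vma->vm through the i915_is_ggtt() helper.

/* Editorial sketch only: illustrates the pattern with simplified types. */
#include <stdbool.h>

struct address_space {
	bool is_ggtt;			/* set once when the address space is created */
};

struct vma {
	struct address_space *vm;
	unsigned is_ggtt : 1;		/* cached copy of vm->is_ggtt */
};

/* Before: every test dereferences vma->vm via a helper. */
static inline bool is_ggtt_before(const struct vma *vma)
{
	return vma->vm->is_ggtt;
}

/* After: the flag is latched when the VMA is created ... */
static void vma_init(struct vma *vma, struct address_space *vm)
{
	vma->vm = vm;
	vma->is_ggtt = vm->is_ggtt;
}

/* ... and callers only read the bit they already hold. */
static inline bool is_ggtt_after(const struct vma *vma)
{
	return vma->is_ggtt;
}

In the driver itself the equivalent latch is the new "vma->is_ggtt = vm->is_ggtt" assignment added to __i915_gem_vma_create() in i915_gem_gtt.c below.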
diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index 6c147e1bff0c..2e851c6a310c 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -156,7 +156,7 @@ describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj)
 	if (obj->fence_reg != I915_FENCE_REG_NONE)
 		seq_printf(m, " (fence: %d)", obj->fence_reg);
 	list_for_each_entry(vma, &obj->vma_list, vma_link) {
-		if (!i915_is_ggtt(vma->vm))
+		if (!vma->is_ggtt)
 			seq_puts(m, " (pp");
 		else
 			seq_puts(m, " (g");
@@ -335,7 +335,7 @@ static int per_file_stats(int id, void *ptr, void *data)
 		if (!drm_mm_node_allocated(&vma->node))
 			continue;
 
-		if (i915_is_ggtt(vma->vm)) {
+		if (vma->is_ggtt) {
 			stats->global += obj->base.size;
 			continue;
 		}
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 2a5343a9ed24..0dbc7d69f148 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2896,16 +2896,11 @@ bool i915_gem_obj_is_pinned(struct drm_i915_gem_object *obj);
 /* Some GGTT VM helpers */
 #define i915_obj_to_ggtt(obj) \
 	(&((struct drm_i915_private *)(obj)->base.dev->dev_private)->gtt.base)
-static inline bool i915_is_ggtt(struct i915_address_space *vm)
-{
-	return vm->is_ggtt;
-}
 
 static inline struct i915_hw_ppgtt *
 i915_vm_to_ppgtt(struct i915_address_space *vm)
 {
-	WARN_ON(i915_is_ggtt(vm));
-
+	WARN_ON(vm->is_ggtt);
 	return container_of(vm, struct i915_hw_ppgtt, base);
 }
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 796dc69a6c47..36add864593a 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -3200,8 +3200,7 @@ int i915_vma_unbind(struct i915_vma *vma)
 	 * cause memory corruption through use-after-free.
 	 */
 
-	if (i915_is_ggtt(vma->vm) &&
-	    vma->ggtt_view.type == I915_GGTT_VIEW_NORMAL) {
+	if (vma->is_ggtt && vma->ggtt_view.type == I915_GGTT_VIEW_NORMAL) {
 		i915_gem_object_finish_gtt(obj);
 
 		/* release the fence reg _after_ flushing */
@@ -3215,7 +3214,7 @@ int i915_vma_unbind(struct i915_vma *vma)
 	vma->unbind_vma(vma);
 
 	list_del_init(&vma->mm_list);
-	if (i915_is_ggtt(vma->vm)) {
+	if (vma->is_ggtt) {
 		if (vma->ggtt_view.type == I915_GGTT_VIEW_NORMAL) {
 			obj->map_and_fenceable = false;
 		} else if (vma->ggtt_view.pages) {
@@ -3658,7 +3657,7 @@ i915_gem_object_bind_to_vm(struct drm_i915_gem_object *obj,
 	struct i915_vma *vma;
 	int ret;
 
-	if(WARN_ON(i915_is_ggtt(vm) != !!ggtt_view))
+	if (WARN_ON(vm->is_ggtt != !!ggtt_view))
 		return ERR_PTR(-EINVAL);
 
 	fence_size = i915_gem_get_gtt_size(dev,
@@ -3756,8 +3755,7 @@ search_free:
 
 	/* allocate before insert / bind */
 	if (vma->vm->allocate_va_range) {
-		trace_i915_va_alloc(vma->vm, vma->node.start, vma->node.size,
-				    VM_TO_TRACE_NAME(vma->vm));
+		trace_i915_va_alloc(vma->vm, vma->node.start, vma->node.size);
 		ret = vma->vm->allocate_va_range(vma->vm,
 						 vma->node.start,
 						 vma->node.size);
@@ -4360,13 +4358,13 @@ i915_gem_object_do_pin(struct drm_i915_gem_object *obj,
 	if (WARN_ON(vm == &dev_priv->mm.aliasing_ppgtt->base))
 		return -ENODEV;
 
-	if (WARN_ON(flags & (PIN_GLOBAL | PIN_MAPPABLE) && !i915_is_ggtt(vm)))
+	if (WARN_ON(flags & (PIN_GLOBAL | PIN_MAPPABLE) && !vm->is_ggtt))
 		return -EINVAL;
 
 	if (WARN_ON((flags & (PIN_MAPPABLE | PIN_GLOBAL)) == PIN_MAPPABLE))
 		return -EINVAL;
 
-	if (WARN_ON(i915_is_ggtt(vm) != !!ggtt_view))
+	if (WARN_ON(vm->is_ggtt != !!ggtt_view))
 		return -EINVAL;
 
 	vma = ggtt_view ? i915_gem_obj_to_ggtt_view(obj, ggtt_view) :
@@ -4456,7 +4454,7 @@ i915_gem_object_pin(struct drm_i915_gem_object *obj,
 		    uint64_t flags)
 {
 	return i915_gem_object_do_pin(obj, vm,
-				      i915_is_ggtt(vm) ? &i915_ggtt_view_normal : NULL,
+				      vm->is_ggtt ? &i915_ggtt_view_normal : NULL,
 				      size, alignment, flags);
 }
@@ -4788,7 +4786,7 @@ struct i915_vma *i915_gem_obj_to_vma(struct drm_i915_gem_object *obj,
 {
 	struct i915_vma *vma;
 	list_for_each_entry(vma, &obj->vma_list, vma_link) {
-		if (i915_is_ggtt(vma->vm) &&
+		if (vma->is_ggtt &&
 		    vma->ggtt_view.type != I915_GGTT_VIEW_NORMAL)
 			continue;
 		if (vma->vm == vm)
@@ -4824,7 +4822,7 @@ void i915_gem_vma_destroy(struct i915_vma *vma)
 
 	vm = vma->vm;
 
-	if (!i915_is_ggtt(vm))
+	if (!vm->is_ggtt)
 		i915_ppgtt_put(i915_vm_to_ppgtt(vm));
 
 	list_del(&vma->vma_link);
@@ -5188,7 +5186,7 @@ init_ring_lists(struct intel_engine_cs *ring)
 void i915_init_vm(struct drm_i915_private *dev_priv,
 		  struct i915_address_space *vm)
 {
-	if (!i915_is_ggtt(vm))
+	if (!vm->is_ggtt)
 		drm_mm_init(&vm->mm, vm->start, vm->total);
 	vm->dev = dev_priv->dev;
 	INIT_LIST_HEAD(&vm->active_list);
@@ -5353,7 +5351,7 @@ i915_gem_obj_offset(struct drm_i915_gem_object *o,
 	WARN_ON(vm == &dev_priv->mm.aliasing_ppgtt->base);
 
 	list_for_each_entry(vma, &o->vma_list, vma_link) {
-		if (i915_is_ggtt(vma->vm) &&
+		if (vma->is_ggtt &&
 		    vma->ggtt_view.type != I915_GGTT_VIEW_NORMAL)
 			continue;
 		if (vma->vm == vm)
@@ -5361,7 +5359,7 @@ i915_gem_obj_offset(struct drm_i915_gem_object *o,
 	}
 
 	WARN(1, "%s vma for this object not found.\n",
-	     i915_is_ggtt(vm) ? "global" : "ppgtt");
+	     vm->is_ggtt ? "global" : "ppgtt");
 
 	return -1;
 }
@@ -5387,7 +5385,7 @@ bool i915_gem_obj_bound(struct drm_i915_gem_object *o,
 	struct i915_vma *vma;
 
 	list_for_each_entry(vma, &o->vma_list, vma_link) {
-		if (i915_is_ggtt(vma->vm) &&
+		if (vma->is_ggtt &&
 		    vma->ggtt_view.type != I915_GGTT_VIEW_NORMAL)
 			continue;
 		if (vma->vm == vm && drm_mm_node_allocated(&vma->node))
@@ -5434,7 +5432,7 @@ unsigned long i915_gem_obj_size(struct drm_i915_gem_object *o,
 	BUG_ON(list_empty(&o->vma_list));
 
 	list_for_each_entry(vma, &o->vma_list, vma_link) {
-		if (i915_is_ggtt(vma->vm) &&
+		if (vma->is_ggtt &&
 		    vma->ggtt_view.type != I915_GGTT_VIEW_NORMAL)
 			continue;
 		if (vma->vm == vm)
@@ -5447,7 +5445,7 @@ bool i915_gem_obj_is_pinned(struct drm_i915_gem_object *obj)
 {
 	struct i915_vma *vma;
 	list_for_each_entry(vma, &obj->vma_list, vma_link) {
-		if (i915_is_ggtt(vma->vm) &&
+		if (vma->is_ggtt &&
 		    vma->ggtt_view.type != I915_GGTT_VIEW_NORMAL)
 			continue;
 		if (vma->pin_count > 0)
diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
index 1eda0bdc5eab..5f735b491e2f 100644
--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
@@ -642,7 +642,7 @@ need_reloc_mappable(struct i915_vma *vma)
 	if (entry->relocation_count == 0)
 		return false;
 
-	if (!i915_is_ggtt(vma->vm))
+	if (!vma->is_ggtt)
 		return false;
 
 	/* See also use_cpu_reloc() */
@@ -661,8 +661,7 @@ eb_vma_misplaced(struct i915_vma *vma)
 	struct drm_i915_gem_exec_object2 *entry = vma->exec_entry;
 	struct drm_i915_gem_object *obj = vma->obj;
 
-	WARN_ON(entry->flags & __EXEC_OBJECT_NEEDS_MAP &&
-		!i915_is_ggtt(vma->vm));
+	WARN_ON(entry->flags & __EXEC_OBJECT_NEEDS_MAP && !vma->is_ggtt);
 
 	if (entry->alignment &&
 	    vma->node.start & (entry->alignment - 1))
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index df1ee971138e..85077beb9338 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -1703,7 +1703,7 @@ void i915_gem_restore_gtt_mappings(struct drm_device *dev)
 			container_of(vm, struct i915_hw_ppgtt, base);
 
-		if (i915_is_ggtt(vm))
+		if (vm->is_ggtt)
 			ppgtt = dev_priv->mm.aliasing_ppgtt;
 
 		gen6_write_page_range(dev_priv, &ppgtt->pd,
@@ -1881,7 +1881,7 @@ static void i915_ggtt_bind_vma(struct i915_vma *vma,
 	unsigned int flags = (cache_level == I915_CACHE_NONE) ?
 		AGP_USER_MEMORY : AGP_USER_CACHED_MEMORY;
 
-	BUG_ON(!i915_is_ggtt(vma->vm));
+	BUG_ON(!vma->is_ggtt);
 	intel_gtt_insert_sg_entries(vma->ggtt_view.pages, entry, flags);
 	vma->bound = GLOBAL_BIND;
 }
@@ -1901,7 +1901,7 @@ static void i915_ggtt_unbind_vma(struct i915_vma *vma)
 	const unsigned int first = vma->node.start >> PAGE_SHIFT;
 	const unsigned int size = vma->obj->base.size >> PAGE_SHIFT;
 
-	BUG_ON(!i915_is_ggtt(vma->vm));
+	BUG_ON(!vma->is_ggtt);
 	vma->bound = 0;
 	intel_gtt_clear_range(first, size);
 }
@@ -1919,7 +1919,7 @@ static void ggtt_bind_vma(struct i915_vma *vma,
 	if (obj->gt_ro)
 		flags |= PTE_READ_ONLY;
 
-	if (i915_is_ggtt(vma->vm))
+	if (vma->is_ggtt)
 		pages = vma->ggtt_view.pages;
 
 	/* If there is no aliasing PPGTT, or the caller needs a global mapping,
@@ -2541,7 +2541,7 @@ __i915_gem_vma_create(struct drm_i915_gem_object *obj,
 {
 	struct i915_vma *vma;
 
-	if (WARN_ON(i915_is_ggtt(vm) != !!ggtt_view))
+	if (WARN_ON(vm->is_ggtt != !!ggtt_view))
 		return ERR_PTR(-EINVAL);
 
 	vma = kmem_cache_zalloc(to_i915(obj->base.dev)->vmas, GFP_KERNEL);
@@ -2553,9 +2553,10 @@ __i915_gem_vma_create(struct drm_i915_gem_object *obj,
 	INIT_LIST_HEAD(&vma->exec_list);
 	vma->vm = vm;
 	vma->obj = obj;
+	vma->is_ggtt = vm->is_ggtt;
 
 	if (INTEL_INFO(vm->dev)->gen >= 6) {
-		if (i915_is_ggtt(vm)) {
+		if (vm->is_ggtt) {
 			vma->ggtt_view = *ggtt_view;
 
 			vma->unbind_vma = ggtt_unbind_vma;
@@ -2565,14 +2566,14 @@ __i915_gem_vma_create(struct drm_i915_gem_object *obj,
 			vma->bind_vma = ppgtt_bind_vma;
 		}
 	} else {
-		BUG_ON(!i915_is_ggtt(vm));
+		BUG_ON(!vm->is_ggtt);
 		vma->ggtt_view = *ggtt_view;
 		vma->unbind_vma = i915_ggtt_unbind_vma;
 		vma->bind_vma = i915_ggtt_bind_vma;
 	}
 
 	list_add_tail(&vma->vma_link, &obj->vma_list);
-	if (!i915_is_ggtt(vm))
+	if (!vm->is_ggtt)
 		i915_ppgtt_get(i915_vm_to_ppgtt(vm));
 
 	return vma;
@@ -2587,7 +2588,7 @@ i915_gem_obj_lookup_or_create_vma(struct drm_i915_gem_object *obj,
 	vma = i915_gem_obj_to_vma(obj, vm);
 	if (!vma)
 		vma = __i915_gem_vma_create(obj, vm,
-					    i915_is_ggtt(vm) ? &i915_ggtt_view_normal : NULL);
+					    vm->is_ggtt ? &i915_ggtt_view_normal : NULL);
 
 	return vma;
 }
@@ -2758,7 +2759,7 @@ i915_get_ggtt_vma_pages(struct i915_vma *vma)
 int i915_vma_bind(struct i915_vma *vma, enum i915_cache_level cache_level,
 		  u32 flags)
 {
-	if (i915_is_ggtt(vma->vm)) {
+	if (vma->is_ggtt) {
 		int ret = i915_get_ggtt_vma_pages(vma);
 
 		if (ret)
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.h b/drivers/gpu/drm/i915/i915_gem_gtt.h
index db9ec04d312c..4e6cdaba2569 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.h
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.h
@@ -160,6 +160,7 @@ struct i915_vma {
 #define LOCAL_BIND	(1<<1)
 #define PTE_READ_ONLY	(1<<2)
 	unsigned int bound : 4;
+	unsigned is_ggtt : 1;
 
 	/**
 	 * Support different GGTT views into the same object.
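Continuing the sketch from the top of the patch: the one-bit cache added to struct i915_vma above stays coherent only because a VMA never moves to a different address space after __i915_gem_vma_create() latches the flag. A hypothetical cross-check, purely illustrative and not part of the patch:

#include <assert.h>

/* Hypothetical helper on the simplified types from the earlier sketch; the
 * cached bit can always be compared against its source because vma->vm is
 * fixed for the lifetime of the VMA. */
static inline bool vma_is_ggtt_checked(const struct vma *vma)
{
	assert(vma->is_ggtt == vma->vm->is_ggtt);
	return vma->is_ggtt;
}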
diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
index 17dc2fcaba10..8832f1b2a495 100644
--- a/drivers/gpu/drm/i915/i915_gpu_error.c
+++ b/drivers/gpu/drm/i915/i915_gpu_error.c
@@ -606,7 +606,7 @@ i915_error_object_create(struct drm_i915_private *dev_priv,
 		dst->gtt_offset = -1;
 
 	reloc_offset = dst->gtt_offset;
-	if (i915_is_ggtt(vm))
+	if (vm->is_ggtt)
 		vma = i915_gem_obj_to_ggtt(src);
 	use_ggtt = (src->cache_level == I915_CACHE_NONE &&
 		   vma && (vma->bound & GLOBAL_BIND) &&
diff --git a/drivers/gpu/drm/i915/i915_trace.h b/drivers/gpu/drm/i915/i915_trace.h
index 97483e21c9b4..ce8ee9e8bced 100644
--- a/drivers/gpu/drm/i915/i915_trace.h
+++ b/drivers/gpu/drm/i915/i915_trace.h
@@ -156,35 +156,29 @@ TRACE_EVENT(i915_vma_unbind,
 		      __entry->obj, __entry->offset, __entry->size, __entry->vm)
 );
 
-#define VM_TO_TRACE_NAME(vm) \
-	(i915_is_ggtt(vm) ? "G" : \
-			    "P")
-
 DECLARE_EVENT_CLASS(i915_va,
-	TP_PROTO(struct i915_address_space *vm, u64 start, u64 length, const char *name),
-	TP_ARGS(vm, start, length, name),
+	TP_PROTO(struct i915_address_space *vm, u64 start, u64 length),
+	TP_ARGS(vm, start, length),
 
 	TP_STRUCT__entry(
 		__field(struct i915_address_space *, vm)
 		__field(u64, start)
 		__field(u64, end)
-		__string(name, name)
 	),
 
 	TP_fast_assign(
 		__entry->vm = vm;
 		__entry->start = start;
 		__entry->end = start + length - 1;
-		__assign_str(name, name);
 	),
 
-	TP_printk("vm=%p (%s), 0x%llx-0x%llx",
-		  __entry->vm, __get_str(name), __entry->start, __entry->end)
+	TP_printk("vm=%p (%c), 0x%llx-0x%llx",
+		  __entry->vm, __entry->vm->is_ggtt ? 'G' : 'P', __entry->start, __entry->end)
 );
 
 DEFINE_EVENT(i915_va, i915_va_alloc,
-	     TP_PROTO(struct i915_address_space *vm, u64 start, u64 length, const char *name),
-	     TP_ARGS(vm, start, length, name)
+	     TP_PROTO(struct i915_address_space *vm, u64 start, u64 length),
+	     TP_ARGS(vm, start, length)
 );
 
 DECLARE_EVENT_CLASS(i915_page_table_entry,
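With the name string dropped from the i915_va event class, callers now pass only the vm and the range (see the trace_i915_va_alloc() hunk in i915_gem.c earlier in this patch), and the single-character G/P tag is derived from vm->is_ggtt when the event is printed. A rough plain-C rendering of the new format string, using the simplified types from the earlier sketch and no tracepoint machinery:

#include <stdio.h>

/* Sketch only: mirrors the new TP_printk() output outside the tracing
 * infrastructure. */
static void print_va_event(const struct address_space *vm,
			   unsigned long long start, unsigned long long length)
{
	printf("vm=%p (%c), 0x%llx-0x%llx\n",
	       (const void *)vm, vm->is_ggtt ? 'G' : 'P',
	       start, start + length - 1);
}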