
[2/3] drm/i915: Infrastructure for supporting different GGTT views per object

Message ID 1417618766-14845-3-git-send-email-tvrtko.ursulin@linux.intel.com
State New, archived

Commit Message

Tvrtko Ursulin Dec. 3, 2014, 2:59 p.m. UTC
From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Things like reliable GGTT mappings and mirrored 2d-on-3d display will need
to map objects into the same address space multiple times.

Added a GGTT view concept and linked it with the VMA to distinguish between
multiple instances per address space.

New objects, and GEM functions which do not take this new view as a
parameter, assume the default view of zero (I915_GGTT_VIEW_NORMAL),
which preserves the previous behaviour.

This means that objects can now have multiple VMA entries, so the code
which assumed there would only be one had to be modified as well.

Alternative GGTT views are supposed to borrow DMA addresses from
obj->pages, which is DMA mapped on first VMA instantiation and unmapped
when the last VMA goes away.

v2:
    * Removed per view special casing in i915_gem_ggtt_prepare /
      finish_object in favour of creating and destroying DMA mappings
      on first VMA instantiation and last VMA destruction. (Daniel Vetter)
    * Simplified i915_vma_unbind which does not need to count the GGTT views.
      (Daniel Vetter)
    * Also moved obj->map_and_fenceable reset under the same check.
    * Checkpatch cleanups.

v3:
    * Only retire objects once the last VMA is unbound.

For: VIZ-4544
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
---
 drivers/gpu/drm/i915/i915_debugfs.c        |  5 +-
 drivers/gpu/drm/i915/i915_drv.h            | 46 +++++++++++++++++--
 drivers/gpu/drm/i915/i915_gem.c            | 73 ++++++++++++++++--------------
 drivers/gpu/drm/i915/i915_gem_context.c    |  4 +-
 drivers/gpu/drm/i915/i915_gem_execbuffer.c |  4 +-
 drivers/gpu/drm/i915/i915_gem_gtt.c        | 61 +++++++++++++++++++------
 drivers/gpu/drm/i915/i915_gem_gtt.h        | 22 ++++++++-
 drivers/gpu/drm/i915/i915_gpu_error.c      |  8 +---
 8 files changed, 159 insertions(+), 64 deletions(-)
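
For illustration, the new *_view entry points take an explicit view while the
old calls become trivial wrappers around them; pinning through an explicit
view would look roughly like this (a sketch against this patch, with arbitrary
alignment and flags, and assuming the global GTT address space at
dev_priv->gtt.base):

    struct i915_ggtt_view view = { .type = I915_GGTT_VIEW_NORMAL };
    int ret;

    ret = i915_gem_object_pin_view(obj, &dev_priv->gtt.base,
                                   4096, PIN_GLOBAL, &view);
    if (ret)
            return ret;
    /* the VMA for this (obj, vm, view) triplet is now bound */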

Comments

Chris Wilson Dec. 4, 2014, 9:53 a.m. UTC | #1
On Wed, Dec 03, 2014 at 02:59:25PM +0000, Tvrtko Ursulin wrote:
> +void i915_vma_bind(struct i915_vma *vma, enum i915_cache_level cache_level,
> +		   u32 flags)
> +{
> +	struct sg_table *pages = i915_ggtt_view_pages(vma);
> +
> +	if (pages && !IS_ERR(pages)) {
> +		vma->bind_vma(vma, pages, cache_level, flags);
> +
> +		if (vma->ggtt_view.type != I915_GGTT_VIEW_NORMAL) {
> +			sg_free_table(pages);
> +			kfree(pages);
> +		}
> +	}
> +}

Stop. Even if the failure path is impossible with the present
implementation, here you are masking the error only to go and pretend
the binding succeeded.

Don't be lazy, this is a very nasty bug that should be hit during igt -
or else you are not testing well enough.
-Chris
Tvrtko Ursulin Dec. 4, 2014, 10:19 a.m. UTC | #2
On 12/04/2014 09:53 AM, Chris Wilson wrote:
> On Wed, Dec 03, 2014 at 02:59:25PM +0000, Tvrtko Ursulin wrote:
>> +void i915_vma_bind(struct i915_vma *vma, enum i915_cache_level cache_level,
>> +		   u32 flags)
>> +{
>> +	struct sg_table *pages = i915_ggtt_view_pages(vma);
>> +
>> +	if (pages && !IS_ERR(pages)) {
>> +		vma->bind_vma(vma, pages, cache_level, flags);
>> +
>> +		if (vma->ggtt_view.type != I915_GGTT_VIEW_NORMAL) {
>> +			sg_free_table(pages);
>> +			kfree(pages);
>> +		}
>> +	}
>> +}
>
> Stop. Even if the failure path is impossible with the present
> implementation, here you are masking the error only to go and pretend
> the binding succeeded.
>
> Don't be lazy, this is a very nasty bug that should be hit during igt -
> or else you are not testing well enough.

Fair comment, even if a bit too assuming. I actually had this as TODO 
but somehow lost it.

I don't have any ideas on how to provoke this to fail from an IGT. Even
with future implementations it boils down to a couple of small
allocations which would have to fail reliably.

But will add the error path for it.

Regards,

Tvrtko
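
For reference, the promised error path would presumably mean i915_vma_bind()
growing a return value, along these lines (a sketch only; the int return and
the callers checking it are what the thread asks for, not part of the patch
as posted):

    int i915_vma_bind(struct i915_vma *vma, enum i915_cache_level cache_level,
                      u32 flags)
    {
            struct sg_table *pages = i915_ggtt_view_pages(vma);

            if (pages == NULL)
                    return -EINVAL;
            if (IS_ERR(pages))
                    return PTR_ERR(pages);

            vma->bind_vma(vma, pages, cache_level, flags);

            /* Non-default views own their sg_table; the normal view
             * borrows obj->pages and must not free it. */
            if (vma->ggtt_view.type != I915_GGTT_VIEW_NORMAL) {
                    sg_free_table(pages);
                    kfree(pages);
            }

            return 0;
    }
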
Chris Wilson Dec. 4, 2014, 10:26 a.m. UTC | #3
On Thu, Dec 04, 2014 at 10:19:09AM +0000, Tvrtko Ursulin wrote:
> 
> On 12/04/2014 09:53 AM, Chris Wilson wrote:
> >On Wed, Dec 03, 2014 at 02:59:25PM +0000, Tvrtko Ursulin wrote:
> >>+void i915_vma_bind(struct i915_vma *vma, enum i915_cache_level cache_level,
> >>+		   u32 flags)
> >>+{
> >>+	struct sg_table *pages = i915_ggtt_view_pages(vma);
> >>+
> >>+	if (pages && !IS_ERR(pages)) {
> >>+		vma->bind_vma(vma, pages, cache_level, flags);
> >>+
> >>+		if (vma->ggtt_view.type != I915_GGTT_VIEW_NORMAL) {
> >>+			sg_free_table(pages);
> >>+			kfree(pages);
> >>+		}
> >>+	}
> >>+}
> >
> >Stop. Even if the failure path is impossible with the present
> >implementation, here you are masking the error only to go and pretend
> >the binding succeeded.
> >
> >Don't be lazy, this is a very nasty bug that should be hit during igt -
> >or else you are not testing well enough.
> 
> Fair comment, even if a bit too assuming. I actually had this as
> TODO but somehow lost it.
> 
> I don't have any ideas on how to provoke this to fail from an IGT?
> Even with future implementations it boils down to a couple of small
> allocations which would have to fail reliably.

We have quite a few thrash tests now that are fairly good at getting
even the small allocations to fail.

What we don't have is a single-fd, multi-ctx thrash test (well except
for some GL tests...)
-Chris
Daniel Vetter Dec. 4, 2014, 10:59 a.m. UTC | #4
On Thu, Dec 04, 2014 at 10:26:14AM +0000, Chris Wilson wrote:
> On Thu, Dec 04, 2014 at 10:19:09AM +0000, Tvrtko Ursulin wrote:
> > 
> > On 12/04/2014 09:53 AM, Chris Wilson wrote:
> > >On Wed, Dec 03, 2014 at 02:59:25PM +0000, Tvrtko Ursulin wrote:
> > >>+void i915_vma_bind(struct i915_vma *vma, enum i915_cache_level cache_level,
> > >>+		   u32 flags)
> > >>+{
> > >>+	struct sg_table *pages = i915_ggtt_view_pages(vma);
> > >>+
> > >>+	if (pages && !IS_ERR(pages)) {
> > >>+		vma->bind_vma(vma, pages, cache_level, flags);
> > >>+
> > >>+		if (vma->ggtt_view.type != I915_GGTT_VIEW_NORMAL) {
> > >>+			sg_free_table(pages);
> > >>+			kfree(pages);
> > >>+		}
> > >>+	}
> > >>+}
> > >
> > >Stop. Even if the failure path is impossible with the present
> > >implementation, here you are masking the error only to go and pretend
> > >the binding succeeded.
> > >
> > >Don't be lazy, this is a very nasty bug that should be hit during igt -
> > >or else you are not testing well enough.
> > 
> > Fair comment, even if a bit too assuming. I actually had this as
> > TODO but somehow lost it.
> > 
> > I don't have any ideas on how to provoke this to fail from an IGT?
> > Even with future implementations it boils down to a couple of small
> > allocations which would have to fail reliably.
> 
> We have quite a few thrash tests now that are fairly good at getting
> even the small allocations to fail.
> 
> What we don't have is a single-fd, multi-ctx thrash test (well except
> for some GL tests...)

But none of these tests result in permanent memory failures (only the
occasional ioctl restart when waiting for gpu rendering). And sg table
alloc only recurses through the shrinker so that can't happen. So I think
we just have to get by with review.

We did have issues with sg table allocations in stress tests though,
before we added the recursive shrinker locking, hence sg table alloc
can indeed go south.
-Daniel
Tvrtko Ursulin Dec. 4, 2014, 12:17 p.m. UTC | #5
On 12/04/2014 10:59 AM, Daniel Vetter wrote:
> On Thu, Dec 04, 2014 at 10:26:14AM +0000, Chris Wilson wrote:
>> On Thu, Dec 04, 2014 at 10:19:09AM +0000, Tvrtko Ursulin wrote:
>>>
>>> On 12/04/2014 09:53 AM, Chris Wilson wrote:
>>>> On Wed, Dec 03, 2014 at 02:59:25PM +0000, Tvrtko Ursulin wrote:
>>>>> +void i915_vma_bind(struct i915_vma *vma, enum i915_cache_level cache_level,
>>>>> +		   u32 flags)
>>>>> +{
>>>>> +	struct sg_table *pages = i915_ggtt_view_pages(vma);
>>>>> +
>>>>> +	if (pages && !IS_ERR(pages)) {
>>>>> +		vma->bind_vma(vma, pages, cache_level, flags);
>>>>> +
>>>>> +		if (vma->ggtt_view.type != I915_GGTT_VIEW_NORMAL) {
>>>>> +			sg_free_table(pages);
>>>>> +			kfree(pages);
>>>>> +		}
>>>>> +	}
>>>>> +}
>>>>
>>>> Stop. Even if the failure path is impossible with the present
>>>> implementation, here you are masking the error only to go and pretend
>>>> the binding succeeded.
>>>>
>>>> Don't be lazy, this is a very nasty bug that should be hit during igt -
>>>> or else you are not testing well enough.
>>>
>>> Fair comment, even if a bit too assuming. I actually had this as
>>> TODO but somehow lost it.
>>>
>>> I don't have any ideas on how to provoke this to fail from an IGT?
>>> Even with future implementations it boils down to a couple of small
>>> allocations which would have to fail reliably.
>>
>> We have quite a few thrash tests now that are fairly good at getting
>> even the small allocations to fail.
>>
>> What we don't have is a single-fd, multi-ctx thrash test (well except
>> for some GL tests...)
>
> But none of these tests result in permanent memory failures (only the
> occasional ioctl restart when waiting for gpu rendering). And sg table
> alloc only recurses through the shrinker so that can't happen. So I think
> we just have to get by with review.
>
> We did have issues with sg table allocations in stress tests though,
> before we've added the recursive shrinker locking, hence sg table alloc
> can indeed go south.

I looked at propagating errors from i915_vma_bind() out to callers and 
it is mostly all fine apart from the i915_gem_restore_gtt_mappings 
during i915_drm_resume.

I don't see how this is fixable apart from going back to having sgls stay
around for the lifetime of their VMAs. It shouldn't be such a big deal -
they are not so big even with non-coalesced entries.

Thoughts?

Regards,

Tvrtko
Chris Wilson Dec. 4, 2014, 12:27 p.m. UTC | #6
On Thu, Dec 04, 2014 at 12:17:51PM +0000, Tvrtko Ursulin wrote:
> On 12/04/2014 10:59 AM, Daniel Vetter wrote:
> >On Thu, Dec 04, 2014 at 10:26:14AM +0000, Chris Wilson wrote:
> >>On Thu, Dec 04, 2014 at 10:19:09AM +0000, Tvrtko Ursulin wrote:
> >>>
> >>>On 12/04/2014 09:53 AM, Chris Wilson wrote:
> >>>>On Wed, Dec 03, 2014 at 02:59:25PM +0000, Tvrtko Ursulin wrote:
> >>>>>+void i915_vma_bind(struct i915_vma *vma, enum i915_cache_level cache_level,
> >>>>>+		   u32 flags)
> >>>>>+{
> >>>>>+	struct sg_table *pages = i915_ggtt_view_pages(vma);
> >>>>>+
> >>>>>+	if (pages && !IS_ERR(pages)) {
> >>>>>+		vma->bind_vma(vma, pages, cache_level, flags);
> >>>>>+
> >>>>>+		if (vma->ggtt_view.type != I915_GGTT_VIEW_NORMAL) {
> >>>>>+			sg_free_table(pages);
> >>>>>+			kfree(pages);
> >>>>>+		}
> >>>>>+	}
> >>>>>+}
> >>>>
> >>>>Stop. Even if the failure path is impossible with the present
> >>>>implementation, here you are masking the error only to go and pretend
> >>>>the binding succeeded.
> >>>>
> >>>>Don't be lazy, this is a very nasty bug that should be hit during igt -
> >>>>or else you are not testing well enough.
> >>>
> >>>Fair comment, even if a bit too assuming. I actually had this as
> >>>TODO but somehow lost it.
> >>>
> >>>I don't have any ideas on how to provoke this to fail from an IGT?
> >>>Even with future implementations it boils down to a couple of small
> >>>allocations which would have to fail reliably.
> >>
> >>We have quite a few thrash tests now that are fairly good at getting
> >>even the small allocations to fail.
> >>
> >>What we don't have is a single-fd, multi-ctx thrash test (well except
> >>for some GL tests...)
> >
> >But none of these tests result in permanent memory failures (only the
> >occasional ioctl restart when waiting for gpu rendering). And sg table
> >alloc only recurses through the shrinker so that can't happen. So I think
> >we just have to get by with review.
> >
> >We did have issues with sg table allocations in stress tests though,
> >before we've added the recursive shrinker locking, hence sg table alloc
> >can indeed go south.
> 
> I looked at propagating errors from i915_vma_bind() out to callers
> and it is mostly all fine apart from the
> i915_gem_restore_gtt_mappings during i915_drm_resume.
> 
> I don't see how this is fixable apart by going back and having sgls
> stay around for the lifetime of their VMAs. It shouldn't be such a
> big deal - they are not so big even with non-coalesced entries.

The good news is that for a rebind, you already have the sg lists
already allocated and converted to dma addresses, and so you can just do
something like:

static inline void
i915_vma_rebind(struct i915_vma *vma, enum i915_cache_level cache_level)
{
        unsigned rebind;

        rebind = vma->bound;
        vma->bound = 0;

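        /* assumes ->bind_vma is reworked to drop the sg_table argument
         * and return an error code (it is void in the patch as posted) */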
        WARN_ON(vma->bind_vma(vma, cache_level, rebind));
}
-Chris
Daniel Vetter Dec. 4, 2014, 1:01 p.m. UTC | #7
On Thu, Dec 4, 2014 at 1:17 PM, Tvrtko Ursulin
<tvrtko.ursulin@linux.intel.com> wrote:
> I looked at propagating errors from i915_vma_bind() out to callers and it is
> mostly all fine apart from the i915_gem_restore_gtt_mappings during
> i915_drm_resume.

Bummer.

> I don't see how this is fixable apart by going back and having sgls stay
> around for the lifetime of their VMAs. It shouldn't be such a big deal -
> they are not so big even with non-coalesced entries.
>
> Thoughts?

I guess we'll just have to eat the additional pointer in vma structs
and that's it. If you want to go fancy we could have a new subtype:

struct i915_vma_special_view {
    struct i915_vma base;
    struct sg_table *sgt;
}

And then typecast accordingly for non-normal views. Feels a bit like
overkill though, at least until we start using vmas for other crazy
things which need special handling.
-Daniel
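
The "typecast accordingly" step would presumably be a container_of() downcast;
purely as an illustration of that idea (the subtype above and this helper are
hypothetical, not part of the patch):

    static inline struct i915_vma_special_view *
    to_special_view(struct i915_vma *vma)
    {
            /* only valid for VMAs allocated as the subtype */
            WARN_ON(vma->ggtt_view.type == I915_GGTT_VIEW_NORMAL);
            return container_of(vma, struct i915_vma_special_view, base);
    }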

Patch

diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index 6c16939..bd08289 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -152,8 +152,9 @@  describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj)
 			seq_puts(m, " (pp");
 		else
 			seq_puts(m, " (g");
-		seq_printf(m, "gtt offset: %08lx, size: %08lx)",
-			   vma->node.start, vma->node.size);
+		seq_printf(m, "gtt offset: %08lx, size: %08lx, type: %u)",
+			   vma->node.start, vma->node.size,
+			   vma->ggtt_view.type);
 	}
 	if (obj->stolen)
 		seq_printf(m, " (stolen: %08lx)", obj->stolen->start);
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 049482f..03c8132 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2514,10 +2514,23 @@  void i915_gem_vma_destroy(struct i915_vma *vma);
 #define PIN_GLOBAL 0x4
 #define PIN_OFFSET_BIAS 0x8
 #define PIN_OFFSET_MASK (~4095)
+int __must_check i915_gem_object_pin_view(struct drm_i915_gem_object *obj,
+					  struct i915_address_space *vm,
+					  uint32_t alignment,
+					  uint64_t flags,
+					  const struct i915_ggtt_view *view);
+static inline
 int __must_check i915_gem_object_pin(struct drm_i915_gem_object *obj,
 				     struct i915_address_space *vm,
 				     uint32_t alignment,
-				     uint64_t flags);
+				     uint64_t flags)
+{
+	return i915_gem_object_pin_view(obj, vm, alignment, flags,
+						&i915_ggtt_view_normal);
+}
+
+void i915_vma_bind(struct i915_vma *vma, enum i915_cache_level cache_level,
+		   u32 flags);
 int __must_check i915_vma_unbind(struct i915_vma *vma);
 int i915_gem_object_put_pages(struct drm_i915_gem_object *obj);
 void i915_gem_release_all_mmaps(struct drm_i915_private *dev_priv);
@@ -2679,18 +2692,43 @@  struct dma_buf *i915_gem_prime_export(struct drm_device *dev,
 
 void i915_gem_restore_fences(struct drm_device *dev);
 
+unsigned long i915_gem_obj_offset_view(struct drm_i915_gem_object *o,
+				       struct i915_address_space *vm,
+				       enum i915_ggtt_view_type view);
+static inline
 unsigned long i915_gem_obj_offset(struct drm_i915_gem_object *o,
-				  struct i915_address_space *vm);
+				  struct i915_address_space *vm)
+{
+	return i915_gem_obj_offset_view(o, vm, I915_GGTT_VIEW_NORMAL);
+}
 bool i915_gem_obj_bound_any(struct drm_i915_gem_object *o);
 bool i915_gem_obj_bound(struct drm_i915_gem_object *o,
 			struct i915_address_space *vm);
 unsigned long i915_gem_obj_size(struct drm_i915_gem_object *o,
 				struct i915_address_space *vm);
+struct i915_vma *i915_gem_obj_to_vma_view(struct drm_i915_gem_object *obj,
+					  struct i915_address_space *vm,
+					  const struct i915_ggtt_view *view);
+static inline
 struct i915_vma *i915_gem_obj_to_vma(struct drm_i915_gem_object *obj,
-				     struct i915_address_space *vm);
+				     struct i915_address_space *vm)
+{
+	return i915_gem_obj_to_vma_view(obj, vm, &i915_ggtt_view_normal);
+}
+
+struct i915_vma *
+i915_gem_obj_lookup_or_create_vma_view(struct drm_i915_gem_object *obj,
+				       struct i915_address_space *vm,
+				       const struct i915_ggtt_view *view);
+
+static inline
 struct i915_vma *
 i915_gem_obj_lookup_or_create_vma(struct drm_i915_gem_object *obj,
-				  struct i915_address_space *vm);
+				  struct i915_address_space *vm)
+{
+	return i915_gem_obj_lookup_or_create_vma_view(obj, vm,
+						&i915_ggtt_view_normal);
+}
 
 struct i915_vma *i915_gem_obj_to_ggtt(struct drm_i915_gem_object *obj);
 static inline bool i915_gem_obj_is_pinned(struct drm_i915_gem_object *obj) {
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index c1c1141..976597c 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2087,8 +2087,7 @@  i915_gem_shrink(struct drm_i915_private *dev_priv,
 			/* For the unbound phase, this should be a no-op! */
 			list_for_each_entry_safe(vma, v,
 						 &obj->vma_list, vma_link)
-				if (i915_vma_unbind(vma))
-					break;
+				i915_vma_unbind(vma);
 
 			if (i915_gem_object_put_pages(obj) == 0)
 				count += obj->base.size >> PAGE_SHIFT;
@@ -2297,17 +2296,14 @@  void i915_vma_move_to_active(struct i915_vma *vma,
 static void
 i915_gem_object_move_to_inactive(struct drm_i915_gem_object *obj)
 {
-	struct drm_i915_private *dev_priv = obj->base.dev->dev_private;
-	struct i915_address_space *vm;
 	struct i915_vma *vma;
 
 	BUG_ON(obj->base.write_domain & ~I915_GEM_GPU_DOMAINS);
 	BUG_ON(!obj->active);
 
-	list_for_each_entry(vm, &dev_priv->vm_list, global_link) {
-		vma = i915_gem_obj_to_vma(obj, vm);
-		if (vma && !list_empty(&vma->mm_list))
-			list_move_tail(&vma->mm_list, &vm->inactive_list);
+	list_for_each_entry(vma, &obj->vma_list, vma_link) {
+		if (!list_empty(&vma->mm_list))
+			list_move_tail(&vma->mm_list, &vma->vm->inactive_list);
 	}
 
 	intel_fb_obj_flush(obj, true);
@@ -3062,10 +3058,8 @@  int i915_vma_unbind(struct i915_vma *vma)
 	 * cause memory corruption through use-after-free.
 	 */
 
-	/* Throw away the active reference before moving to the unbound list */
-	i915_gem_object_retire(obj);
-
-	if (i915_is_ggtt(vma->vm)) {
+	if (i915_is_ggtt(vma->vm) &&
+	    vma->ggtt_view.type == I915_GGTT_VIEW_NORMAL) {
 		i915_gem_object_finish_gtt(obj);
 
 		/* release the fence reg _after_ flushing */
@@ -3079,7 +3073,8 @@  int i915_vma_unbind(struct i915_vma *vma)
 	vma->unbind_vma(vma);
 
 	list_del_init(&vma->mm_list);
-	if (i915_is_ggtt(vma->vm))
+	if (i915_is_ggtt(vma->vm) &&
+	    vma->ggtt_view.type == I915_GGTT_VIEW_NORMAL)
 		obj->map_and_fenceable = false;
 
 	drm_mm_remove_node(&vma->node);
@@ -3088,6 +3083,10 @@  int i915_vma_unbind(struct i915_vma *vma)
 	/* Since the unbound list is global, only move to that list if
 	 * no more VMAs exist. */
 	if (list_empty(&obj->vma_list)) {
+		/* Throw away the active reference before
+		 * moving to the unbound list. */
+		i915_gem_object_retire(obj);
+
 		i915_gem_gtt_finish_object(obj);
 		list_move_tail(&obj->global_list, &dev_priv->mm.unbound_list);
 	}
@@ -3498,7 +3497,8 @@  static struct i915_vma *
 i915_gem_object_bind_to_vm(struct drm_i915_gem_object *obj,
 			   struct i915_address_space *vm,
 			   unsigned alignment,
-			   uint64_t flags)
+			   uint64_t flags,
+			   const struct i915_ggtt_view *view)
 {
 	struct drm_device *dev = obj->base.dev;
 	struct drm_i915_private *dev_priv = dev->dev_private;
@@ -3548,7 +3548,7 @@  i915_gem_object_bind_to_vm(struct drm_i915_gem_object *obj,
 
 	i915_gem_object_pin_pages(obj);
 
-	vma = i915_gem_obj_lookup_or_create_vma(obj, vm);
+	vma = i915_gem_obj_lookup_or_create_vma_view(obj, vm, view);
 	if (IS_ERR(vma))
 		goto err_unpin;
 
@@ -3582,7 +3582,7 @@  search_free:
 	list_add_tail(&vma->mm_list, &vm->inactive_list);
 
 	trace_i915_vma_bind(vma, flags);
-	vma->bind_vma(vma, obj->cache_level,
+	i915_vma_bind(vma, obj->cache_level,
 		      flags & PIN_GLOBAL ? GLOBAL_BIND : 0);
 
 	return vma;
@@ -3790,8 +3790,8 @@  int i915_gem_object_set_cache_level(struct drm_i915_gem_object *obj,
 
 		list_for_each_entry(vma, &obj->vma_list, vma_link)
 			if (drm_mm_node_allocated(&vma->node))
-				vma->bind_vma(vma, cache_level,
-						vma->bound & GLOBAL_BIND);
+				i915_vma_bind(vma, cache_level,
+					      vma->bound & GLOBAL_BIND);
 	}
 
 	list_for_each_entry(vma, &obj->vma_list, vma_link)
@@ -4144,10 +4144,11 @@  i915_vma_misplaced(struct i915_vma *vma, uint32_t alignment, uint64_t flags)
 }
 
 int
-i915_gem_object_pin(struct drm_i915_gem_object *obj,
-		    struct i915_address_space *vm,
-		    uint32_t alignment,
-		    uint64_t flags)
+i915_gem_object_pin_view(struct drm_i915_gem_object *obj,
+			 struct i915_address_space *vm,
+			 uint32_t alignment,
+			 uint64_t flags,
+			 const struct i915_ggtt_view *view)
 {
 	struct drm_i915_private *dev_priv = obj->base.dev->dev_private;
 	struct i915_vma *vma;
@@ -4163,7 +4164,7 @@  i915_gem_object_pin(struct drm_i915_gem_object *obj,
 	if (WARN_ON((flags & (PIN_MAPPABLE | PIN_GLOBAL)) == PIN_MAPPABLE))
 		return -EINVAL;
 
-	vma = i915_gem_obj_to_vma(obj, vm);
+	vma = i915_gem_obj_to_vma_view(obj, vm, view);
 	if (vma) {
 		if (WARN_ON(vma->pin_count == DRM_I915_GEM_OBJECT_MAX_PIN_COUNT))
 			return -EBUSY;
@@ -4173,7 +4174,8 @@  i915_gem_object_pin(struct drm_i915_gem_object *obj,
 			     "bo is already pinned with incorrect alignment:"
 			     " offset=%lx, req.alignment=%x, req.map_and_fenceable=%d,"
 			     " obj->map_and_fenceable=%d\n",
-			     i915_gem_obj_offset(obj, vm), alignment,
+			     i915_gem_obj_offset_view(obj, vm, view->type),
+			     alignment,
 			     !!(flags & PIN_MAPPABLE),
 			     obj->map_and_fenceable);
 			ret = i915_vma_unbind(vma);
@@ -4186,13 +4188,14 @@  i915_gem_object_pin(struct drm_i915_gem_object *obj,
 
 	bound = vma ? vma->bound : 0;
 	if (vma == NULL || !drm_mm_node_allocated(&vma->node)) {
-		vma = i915_gem_object_bind_to_vm(obj, vm, alignment, flags);
+		vma = i915_gem_object_bind_to_vm(obj, vm, alignment,
+						 flags, view);
 		if (IS_ERR(vma))
 			return PTR_ERR(vma);
 	}
 
 	if (flags & PIN_GLOBAL && !(vma->bound & GLOBAL_BIND))
-		vma->bind_vma(vma, obj->cache_level, GLOBAL_BIND);
+		i915_vma_bind(vma, obj->cache_level, GLOBAL_BIND);
 
 	if ((bound ^ vma->bound) & GLOBAL_BIND) {
 		bool mappable, fenceable;
@@ -4528,12 +4531,13 @@  void i915_gem_free_object(struct drm_gem_object *gem_obj)
 	intel_runtime_pm_put(dev_priv);
 }
 
-struct i915_vma *i915_gem_obj_to_vma(struct drm_i915_gem_object *obj,
-				     struct i915_address_space *vm)
+struct i915_vma *i915_gem_obj_to_vma_view(struct drm_i915_gem_object *obj,
+					  struct i915_address_space *vm,
+					  const struct i915_ggtt_view *view)
 {
 	struct i915_vma *vma;
 	list_for_each_entry(vma, &obj->vma_list, vma_link)
-		if (vma->vm == vm)
+		if (vma->vm == vm && vma->ggtt_view.type == view->type)
 			return vma;
 
 	return NULL;
@@ -5147,8 +5151,9 @@  i915_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc)
 }
 
 /* All the new VM stuff */
-unsigned long i915_gem_obj_offset(struct drm_i915_gem_object *o,
-				  struct i915_address_space *vm)
+unsigned long i915_gem_obj_offset_view(struct drm_i915_gem_object *o,
+				       struct i915_address_space *vm,
+				       enum i915_ggtt_view_type view)
 {
 	struct drm_i915_private *dev_priv = o->base.dev->dev_private;
 	struct i915_vma *vma;
@@ -5156,7 +5161,7 @@  unsigned long i915_gem_obj_offset(struct drm_i915_gem_object *o,
 	WARN_ON(vm == &dev_priv->mm.aliasing_ppgtt->base);
 
 	list_for_each_entry(vma, &o->vma_list, vma_link) {
-		if (vma->vm == vm)
+		if (vma->vm == vm && vma->ggtt_view.type == view)
 			return vma->node.start;
 
 	}
@@ -5307,7 +5312,9 @@  struct i915_vma *i915_gem_obj_to_ggtt(struct drm_i915_gem_object *obj)
 	struct i915_vma *vma;
 
 	list_for_each_entry(vma, &obj->vma_list, vma_link) {
-		if (vma->vm == ggtt)
+		if (vma->vm != ggtt)
+			continue;
+		if (vma->ggtt_view.type == I915_GGTT_VIEW_NORMAL)
 			return vma;
 	}
 
diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c
index 5cd2b97..c2e6f2e 100644
--- a/drivers/gpu/drm/i915/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/i915_gem_context.c
@@ -591,8 +591,8 @@  static int do_switch(struct intel_engine_cs *ring,
 
 	vma = i915_gem_obj_to_ggtt(to->legacy_hw_ctx.rcs_state);
 	if (!(vma->bound & GLOBAL_BIND))
-		vma->bind_vma(vma, to->legacy_hw_ctx.rcs_state->cache_level,
-				GLOBAL_BIND);
+		i915_vma_bind(vma, to->legacy_hw_ctx.rcs_state->cache_level,
+			      GLOBAL_BIND);
 
 	if (!to->legacy_hw_ctx.initialized || i915_gem_context_is_default(to))
 		hw_flags |= MI_RESTORE_INHIBIT;
diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
index 0c25f62..56ac483 100644
--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
@@ -361,8 +361,8 @@  i915_gem_execbuffer_relocate_entry(struct drm_i915_gem_object *obj,
 	if (unlikely(IS_GEN6(dev) &&
 	    reloc->write_domain == I915_GEM_DOMAIN_INSTRUCTION &&
 	    !(target_vma->bound & GLOBAL_BIND)))
-		target_vma->bind_vma(target_vma, target_i915_obj->cache_level,
-				GLOBAL_BIND);
+		i915_vma_bind(target_vma, target_i915_obj->cache_level,
+			      GLOBAL_BIND);
 
 	/* Validate that the target is in a valid r/w GPU domain */
 	if (unlikely(reloc->write_domain & (reloc->write_domain - 1))) {
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index ac03a38..02eabf9 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -30,6 +30,8 @@ 
 #include "i915_trace.h"
 #include "intel_drv.h"
 
+const struct i915_ggtt_view i915_ggtt_view_normal;
+
 static void bdw_setup_private_ppat(struct drm_i915_private *dev_priv);
 static void chv_setup_private_ppat(struct drm_i915_private *dev_priv);
 
@@ -76,7 +78,7 @@  static int sanitize_enable_ppgtt(struct drm_device *dev, int enable_ppgtt)
 }
 
 
-static void ppgtt_bind_vma(struct i915_vma *vma,
+static void ppgtt_bind_vma(struct i915_vma *vma, struct sg_table *pages,
 			   enum i915_cache_level cache_level,
 			   u32 flags);
 static void ppgtt_unbind_vma(struct i915_vma *vma);
@@ -1200,7 +1202,7 @@  void  i915_ppgtt_release(struct kref *kref)
 }
 
 static void
-ppgtt_bind_vma(struct i915_vma *vma,
+ppgtt_bind_vma(struct i915_vma *vma, struct sg_table *pages,
 	       enum i915_cache_level cache_level,
 	       u32 flags)
 {
@@ -1208,7 +1210,7 @@  ppgtt_bind_vma(struct i915_vma *vma,
 	if (vma->obj->gt_ro)
 		flags |= PTE_READ_ONLY;
 
-	vma->vm->insert_entries(vma->vm, vma->obj->pages, vma->node.start,
+	vma->vm->insert_entries(vma->vm, pages, vma->node.start,
 				cache_level, flags);
 }
 
@@ -1343,7 +1345,7 @@  void i915_gem_restore_gtt_mappings(struct drm_device *dev)
 		 * without telling our object about it. So we need to fake it.
 		 */
 		vma->bound &= ~GLOBAL_BIND;
-		vma->bind_vma(vma, obj->cache_level, GLOBAL_BIND);
+		i915_vma_bind(vma, obj->cache_level, GLOBAL_BIND);
 	}
 
 
@@ -1529,7 +1531,7 @@  static void gen6_ggtt_clear_range(struct i915_address_space *vm,
 }
 
 
-static void i915_ggtt_bind_vma(struct i915_vma *vma,
+static void i915_ggtt_bind_vma(struct i915_vma *vma, struct sg_table *pages,
 			       enum i915_cache_level cache_level,
 			       u32 unused)
 {
@@ -1538,7 +1540,7 @@  static void i915_ggtt_bind_vma(struct i915_vma *vma,
 		AGP_USER_MEMORY : AGP_USER_CACHED_MEMORY;
 
 	BUG_ON(!i915_is_ggtt(vma->vm));
-	intel_gtt_insert_sg_entries(vma->obj->pages, entry, flags);
+	intel_gtt_insert_sg_entries(pages, entry, flags);
 	vma->bound = GLOBAL_BIND;
 }
 
@@ -1562,7 +1564,7 @@  static void i915_ggtt_unbind_vma(struct i915_vma *vma)
 	intel_gtt_clear_range(first, size);
 }
 
-static void ggtt_bind_vma(struct i915_vma *vma,
+static void ggtt_bind_vma(struct i915_vma *vma, struct sg_table *pages,
 			  enum i915_cache_level cache_level,
 			  u32 flags)
 {
@@ -1588,7 +1590,7 @@  static void ggtt_bind_vma(struct i915_vma *vma,
 	if (!dev_priv->mm.aliasing_ppgtt || flags & GLOBAL_BIND) {
 		if (!(vma->bound & GLOBAL_BIND) ||
 		    (cache_level != obj->cache_level)) {
-			vma->vm->insert_entries(vma->vm, obj->pages,
+			vma->vm->insert_entries(vma->vm, pages,
 						vma->node.start,
 						cache_level, flags);
 			vma->bound |= GLOBAL_BIND;
@@ -1600,7 +1602,7 @@  static void ggtt_bind_vma(struct i915_vma *vma,
 	     (cache_level != obj->cache_level))) {
 		struct i915_hw_ppgtt *appgtt = dev_priv->mm.aliasing_ppgtt;
 		appgtt->base.insert_entries(&appgtt->base,
-					    vma->obj->pages,
+					    pages,
 					    vma->node.start,
 					    cache_level, flags);
 		vma->bound |= LOCAL_BIND;
@@ -2165,7 +2167,8 @@  int i915_gem_gtt_init(struct drm_device *dev)
 }
 
 static struct i915_vma *__i915_gem_vma_create(struct drm_i915_gem_object *obj,
-					      struct i915_address_space *vm)
+					      struct i915_address_space *vm,
+					      const struct i915_ggtt_view *view)
 {
 	struct i915_vma *vma = kzalloc(sizeof(*vma), GFP_KERNEL);
 	if (vma == NULL)
@@ -2176,6 +2179,7 @@  static struct i915_vma *__i915_gem_vma_create(struct drm_i915_gem_object *obj,
 	INIT_LIST_HEAD(&vma->exec_list);
 	vma->vm = vm;
 	vma->obj = obj;
+	vma->ggtt_view = *view;
 
 	switch (INTEL_INFO(vm->dev)->gen) {
 	case 9:
@@ -2210,14 +2214,43 @@  static struct i915_vma *__i915_gem_vma_create(struct drm_i915_gem_object *obj,
 }
 
 struct i915_vma *
-i915_gem_obj_lookup_or_create_vma(struct drm_i915_gem_object *obj,
-				  struct i915_address_space *vm)
+i915_gem_obj_lookup_or_create_vma_view(struct drm_i915_gem_object *obj,
+				       struct i915_address_space *vm,
+				       const struct i915_ggtt_view *view)
 {
 	struct i915_vma *vma;
 
-	vma = i915_gem_obj_to_vma(obj, vm);
+	vma = i915_gem_obj_to_vma_view(obj, vm, view);
 	if (!vma)
-		vma = __i915_gem_vma_create(obj, vm);
+		vma = __i915_gem_vma_create(obj, vm, view);
 
 	return vma;
 }
+
+static inline
+struct sg_table *i915_ggtt_view_pages(struct i915_vma *vma)
+{
+	struct sg_table *pages = ERR_PTR(-EINVAL);
+
+	if (vma->ggtt_view.type == I915_GGTT_VIEW_NORMAL)
+		pages = vma->obj->pages;
+	else
+		WARN(1, "Not implemented!\n");
+
+	return pages;
+}
+
+void i915_vma_bind(struct i915_vma *vma, enum i915_cache_level cache_level,
+		   u32 flags)
+{
+	struct sg_table *pages = i915_ggtt_view_pages(vma);
+
+	if (pages && !IS_ERR(pages)) {
+		vma->bind_vma(vma, pages, cache_level, flags);
+
+		if (vma->ggtt_view.type != I915_GGTT_VIEW_NORMAL) {
+			sg_free_table(pages);
+			kfree(pages);
+		}
+	}
+}
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.h b/drivers/gpu/drm/i915/i915_gem_gtt.h
index dd849df..a6a0e27 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.h
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.h
@@ -109,7 +109,18 @@  typedef gen8_gtt_pte_t gen8_ppgtt_pde_t;
 #define GEN8_PPAT_ELLC_OVERRIDE		(0<<2)
 #define GEN8_PPAT(i, x)			((uint64_t) (x) << ((i) * 8))
 
+enum i915_ggtt_view_type {
+	I915_GGTT_VIEW_NORMAL = 0,
+};
+
+struct i915_ggtt_view {
+	enum i915_ggtt_view_type type;
+};
+
+extern const struct i915_ggtt_view i915_ggtt_view_normal;
+
 enum i915_cache_level;
+
 /**
  * A VMA represents a GEM BO that is bound into an address space. Therefore, a
  * VMA's presence cannot be guaranteed before binding, or after unbinding the
@@ -129,6 +140,15 @@  struct i915_vma {
 #define PTE_READ_ONLY	(1<<2)
 	unsigned int bound : 4;
 
+	/**
+	 * Support different GGTT views into the same object.
+	 * This means there can be multiple VMA mappings per object and per VM.
+	 * i915_ggtt_view_type is used to distinguish between those entries.
+	 * The default view of zero (I915_GGTT_VIEW_NORMAL) is assumed in
+	 * GEM functions which take no ggtt view parameter.
+	 */
+	struct i915_ggtt_view ggtt_view;
+
 	/** This object's place on the active/inactive lists */
 	struct list_head mm_list;
 
@@ -160,7 +180,7 @@  struct i915_vma {
 	 * setting the valid PTE entries to a reserved scratch page. */
 	void (*unbind_vma)(struct i915_vma *vma);
 	/* Map an object into an address space with the given cache flags. */
-	void (*bind_vma)(struct i915_vma *vma,
+	void (*bind_vma)(struct i915_vma *vma, struct sg_table *pages,
 			 enum i915_cache_level cache_level,
 			 u32 flags);
 };
diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
index c4536e1..f97479a 100644
--- a/drivers/gpu/drm/i915/i915_gpu_error.c
+++ b/drivers/gpu/drm/i915/i915_gpu_error.c
@@ -718,10 +718,8 @@  static u32 capture_pinned_bo(struct drm_i915_error_buffer *err,
 			break;
 
 		list_for_each_entry(vma, &obj->vma_list, vma_link)
-			if (vma->vm == vm && vma->pin_count > 0) {
+			if (vma->vm == vm && vma->pin_count > 0)
 				capture_bo(err++, vma);
-				break;
-			}
 	}
 
 	return err - first;
@@ -1096,10 +1094,8 @@  static void i915_gem_capture_vm(struct drm_i915_private *dev_priv,
 
 	list_for_each_entry(obj, &dev_priv->mm.bound_list, global_list) {
 		list_for_each_entry(vma, &obj->vma_list, vma_link)
-			if (vma->vm == vm && vma->pin_count > 0) {
+			if (vma->vm == vm && vma->pin_count > 0)
 				i++;
-				break;
-			}
 	}
 	error->pinned_bo_count[ndx] = i - error->active_bo_count[ndx];