
drm/i915: Use the new vm [un]bind functions

Message ID 1379649999-16923-1-git-send-email-benjamin.widawsky@intel.com (mailing list archive)
State New, archived

Commit Message

Ben Widawsky Sept. 20, 2013, 4:06 a.m. UTC
From: Ben Widawsky <ben@bwidawsk.net>

Building on the last patch which created the new function pointers in
the VM for bind/unbind, here we actually put those new function pointers
to use.

Split out as a separate patch to aid in review. I'm fine with squashing
into the previous patch if people request it.

v2: Updated to address the smart ggtt which can do aliasing as needed.
Make sure we bind to global gtt when mappable and fenceable. I thought
we could get away without this initially, but we cannot.

v3: Make the global GTT binding explicitly use the ggtt VM for
bind_vma(). While at it, use the new ggtt_vma helper (Chris)

v4: Make the code support the secure dispatch flag, which requires
special handling during execbuf. This was fixed (incorrectly) later in
the series, but having it here earlier in the series should be perfectly
acceptable. (Chris)
Move do_switch over to the new, special ggtt_vma interface.

v5: Don't use a local variable (or assertion) when setting the batch
object to the global GTT during secure dispatch (Chris)

v6: Calculate the exec offset for the secure case (Bug fix missed on
v4). (Chris)
Remove redundant check for has_global_gtt_mapping, since it is done in
bind_vma.

v7: Remove now unused dev_priv in do_switch
Don't pass the vm to ggtt_offset (error from v6 which I should have
caught before sending).

v8: Assert, and rework the SNB workaround code (to make it a bit
clearer), ensuring the VM can't be anything but the GGTT.

Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
---
 drivers/gpu/drm/i915/i915_drv.h            | 15 ++++-----
 drivers/gpu/drm/i915/i915_gem.c            | 39 ++++++++--------------
 drivers/gpu/drm/i915/i915_gem_context.c    |  6 ++--
 drivers/gpu/drm/i915/i915_gem_execbuffer.c | 52 ++++++++++++++++++------------
 drivers/gpu/drm/i915/i915_gem_gtt.c        | 48 ++-------------------------
 5 files changed, 59 insertions(+), 101 deletions(-)

Comments

Chris Wilson Sept. 20, 2013, 10:43 a.m. UTC | #1
On Thu, Sep 19, 2013 at 09:06:39PM -0700, Ben Widawsky wrote:
> @@ -1117,8 +1114,25 @@ i915_gem_do_execbuffer(struct drm_device *dev, void *data,
>  	 * batch" bit. Hence we need to pin secure batches into the global gtt.
>  	 * hsw should have this fixed, but let's be paranoid and do it
>  	 * unconditionally for now. */
> -	if (flags & I915_DISPATCH_SECURE && !batch_obj->has_global_gtt_mapping)
> -		i915_gem_gtt_bind_object(batch_obj, batch_obj->cache_level);
> +	if (flags & I915_DISPATCH_SECURE) {
> +		struct i915_address_space *ggtt = obj_to_ggtt(batch_obj);
> +		/* Assuming all privileged batches are in the global GTT means
> +		 * we need to make sure we have a global gtt offset, as well as
> +		 * the PTEs mapped. As mentioned above, we can forego this on
> +		 * HSW, but don't.
> +		 */
> +		ret = i915_gem_object_bind_to_vm(batch_obj, ggtt, 0, false,
> +						 false);
> +		if (ret)
> +			goto err;

bind_to_vm() has unwanted side-effects here - notably always allocating
a node and corrupting lists.

Just pin, ggtt->bind_vma, unpin. Hmmm, except that we also need a
move_to_active (as we are not presuming vm == ggtt).

pin, ggtt->bind_vma, move_to_active(ggtt), unpin.

And then hope we have the correct flushes in place for that to be
retired if nothing else is going on with that ggtt.

> +
> +		ggtt->bind_vma(i915_gem_obj_to_ggtt(batch_obj),
> +			       batch_obj->cache_level,
> +			       GLOBAL_BIND);
> +
> +		exec_start += i915_gem_obj_ggtt_offset(batch_obj);
> +	} else
> +		exec_start += i915_gem_obj_offset(batch_obj, vm);
>  
>  	ret = i915_gem_execbuffer_move_to_gpu(ring, &eb->vmas);
>  	if (ret)
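
A rough sketch of the sequence Chris proposes (pseudocode, not
compilable outside this series; the exact pin() argument list and the
move-to-active helper's signature are assumptions for illustration):

```
/* Sketch: reserve with a plain (!map_and_fenceable) pin, bind the
 * global-GTT PTEs explicitly, keep the vma alive until retirement,
 * then drop the pin. */
ret = i915_gem_object_pin(batch_obj, ggtt, 0, false, false);
if (ret)
	goto err;

/* a !map_and_fenceable pin does not itself map the GGTT PTEs */
ggtt->bind_vma(i915_gem_obj_to_ggtt(batch_obj),
	       batch_obj->cache_level, GLOBAL_BIND);

/* track activity on the ggtt vma so it is retired correctly */
i915_gem_object_move_to_active(batch_obj, ring);

i915_gem_object_unpin(batch_obj);
```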
Daniel Vetter Sept. 20, 2013, 1:24 p.m. UTC | #2
On Fri, Sep 20, 2013 at 12:43 PM, Chris Wilson <chris@chris-wilson.co.uk> wrote:
> On Thu, Sep 19, 2013 at 09:06:39PM -0700, Ben Widawsky wrote:
>> @@ -1117,8 +1114,25 @@ i915_gem_do_execbuffer(struct drm_device *dev, void *data,
>>        * batch" bit. Hence we need to pin secure batches into the global gtt.
>>        * hsw should have this fixed, but let's be paranoid and do it
>>        * unconditionally for now. */
>> -     if (flags & I915_DISPATCH_SECURE && !batch_obj->has_global_gtt_mapping)
>> -             i915_gem_gtt_bind_object(batch_obj, batch_obj->cache_level);
>> +     if (flags & I915_DISPATCH_SECURE) {
>> +             struct i915_address_space *ggtt = obj_to_ggtt(batch_obj);
>> +             /* Assuming all privileged batches are in the global GTT means
>> +              * we need to make sure we have a global gtt offset, as well as
>> +              * the PTEs mapped. As mentioned above, we can forego this on
>> +              * HSW, but don't.
>> +              */
>> +             ret = i915_gem_object_bind_to_vm(batch_obj, ggtt, 0, false,
>> +                                              false);
>> +             if (ret)
>> +                     goto err;
>
> bind_to_vm() has unwanted side-effects here - notably always allocating
> a node and corrupting lists.
>
> Just pin, ggtt->bind_vma, unpin. Hmmm, except that we also need a
> move_to_active (as we are not presuming vm == ggtt).
>
> pin, ggtt->bind_vma, move_to_active(ggtt), unpin.
>
> And then hope we have the correct flushes in place for that to be
> retired if nothing else is going on with that ggtt.

New idea: Can't we make this work in an easier fashion by changing the
vma we look up for the eb lists using the right gtt appropriate for
the batch?

Then (presuming all our code is clear of unnecessary (obj, vm) -> vma
lookups) everything should Just Work, including grabbing the gtt
offset. Or am I just dreaming here? Of course a BUG_ON to check that
vma->vm of the batch object points at the global gtt vm if we have a
secure dispatch bb would still be dutiful.

Cheers, Daniel
Daniel Vetter Sept. 20, 2013, 1:26 p.m. UTC | #3
On Fri, Sep 20, 2013 at 3:24 PM, Daniel Vetter <daniel@ffwll.ch> wrote:
> On Fri, Sep 20, 2013 at 12:43 PM, Chris Wilson <chris@chris-wilson.co.uk> wrote:
>> On Thu, Sep 19, 2013 at 09:06:39PM -0700, Ben Widawsky wrote:
>>> @@ -1117,8 +1114,25 @@ i915_gem_do_execbuffer(struct drm_device *dev, void *data,
>>>        * batch" bit. Hence we need to pin secure batches into the global gtt.
>>>        * hsw should have this fixed, but let's be paranoid and do it
>>>        * unconditionally for now. */
>>> -     if (flags & I915_DISPATCH_SECURE && !batch_obj->has_global_gtt_mapping)
>>> -             i915_gem_gtt_bind_object(batch_obj, batch_obj->cache_level);
>>> +     if (flags & I915_DISPATCH_SECURE) {
>>> +             struct i915_address_space *ggtt = obj_to_ggtt(batch_obj);
>>> +             /* Assuming all privileged batches are in the global GTT means
>>> +              * we need to make sure we have a global gtt offset, as well as
>>> +              * the PTEs mapped. As mentioned above, we can forego this on
>>> +              * HSW, but don't.
>>> +              */
>>> +             ret = i915_gem_object_bind_to_vm(batch_obj, ggtt, 0, false,
>>> +                                              false);
>>> +             if (ret)
>>> +                     goto err;
>>
>> bind_to_vm() has unwanted side-effects here - notably always allocating
>> a node and corrupting lists.
>>
>> Just pin, ggtt->bind_vma, unpin. Hmmm, except that we also need a
>> move_to_active (as we are not presuming vm == ggtt).
>>
>> pin, ggtt->bind_vma, move_to_active(ggtt), unpin.
>>
>> And then hope we have the correct flushes in place for that to be
>> retired if nothing else is going on with that ggtt.
>
> New idea: Can't we make this work in an easier fashion by changing the
> vma we look up for the eb lists using the right gtt appropriate for
> the batch?
>
> Then (presuming all our code is clear of unnecessary (obj, vm) -> vma
> lookups) everything should Just Work, including grabbing the gtt
> offset. Or am I just dreaming here? Of course a BUG_ON to check that
> vma->vm of the batch object points at the global gtt vm if we have a
> secure dispatch bb would still be dutiful.

Ok, I'm dreaming, or at least it's not that simple: We also need the
ppgtt binding for the non-CS access to the batch bo (like indirect
state and stuff). So could we instead just insert two vmas into the eb
lists, one for the ppgtt and one for the global gtt?

Of course that means we can only tackle this once we do have multiple vms.
-Daniel
Chris Wilson Sept. 20, 2013, 1:29 p.m. UTC | #4
On Fri, Sep 20, 2013 at 03:24:43PM +0200, Daniel Vetter wrote:
> On Fri, Sep 20, 2013 at 12:43 PM, Chris Wilson <chris@chris-wilson.co.uk> wrote:
> > On Thu, Sep 19, 2013 at 09:06:39PM -0700, Ben Widawsky wrote:
> >> @@ -1117,8 +1114,25 @@ i915_gem_do_execbuffer(struct drm_device *dev, void *data,
> >>        * batch" bit. Hence we need to pin secure batches into the global gtt.
> >>        * hsw should have this fixed, but let's be paranoid and do it
> >>        * unconditionally for now. */
> >> -     if (flags & I915_DISPATCH_SECURE && !batch_obj->has_global_gtt_mapping)
> >> -             i915_gem_gtt_bind_object(batch_obj, batch_obj->cache_level);
> >> +     if (flags & I915_DISPATCH_SECURE) {
> >> +             struct i915_address_space *ggtt = obj_to_ggtt(batch_obj);
> >> +             /* Assuming all privileged batches are in the global GTT means
> >> +              * we need to make sure we have a global gtt offset, as well as
> >> +              * the PTEs mapped. As mentioned above, we can forego this on
> >> +              * HSW, but don't.
> >> +              */
> >> +             ret = i915_gem_object_bind_to_vm(batch_obj, ggtt, 0, false,
> >> +                                              false);
> >> +             if (ret)
> >> +                     goto err;
> >
> > bind_to_vm() has unwanted side-effects here - notably always allocating
> > a node and corrupting lists.
> >
> > Just pin, ggtt->bind_vma, unpin. Hmmm, except that we also need a
> > move_to_active (as we are not presuming vm == ggtt).
> >
> > pin, ggtt->bind_vma, move_to_active(ggtt), unpin.
> >
> > And then hope we have the correct flushes in place for that to be
> > retired if nothing else is going on with that ggtt.
> 
> New idea: Can't we make this work in an easier fashion by changing the
> vma we look up for the eb lists using the right gtt appropriate for
> the batch?

Not quite, in the eb it does need to be in the ppgtt for self- or
back-references to the batch bo. It is only the CS that needs the GGTT
entry in addition to the PPGTT entry required for everything else. And
we can quite happily handle having different offsets in each address
space.
 
> Then (presuming all our code is clear of unnecessary (obj, vm) -> vma
> > lookups) everything should Just Work, including grabbing the gtt
> offset. Or am I just dreaming here? Of course a BUG_ON to check that
> vma->vm of the batch object points at the global gtt vm if we have a
> secure dispatch bb would still be dutiful.

Dream on. :)
-Chris
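
Put differently, the batch object ends up with two views at potentially
different addresses: the CS fetches it through the GGTT, while relocated
self-references use the PPGTT offset. A pseudocode sketch using helpers
from this series (reloc_target is a hypothetical name for illustration):

```
/* Sketch: same object, two address-space views. The CS start
 * address comes from the global GTT binding... */
exec_start = args->batch_start_offset +
	     i915_gem_obj_ggtt_offset(batch_obj);

/* ...while self-/back-references written during relocation use the
 * per-process view, which may be a different offset entirely. */
reloc_target = i915_gem_obj_offset(batch_obj, vm);
```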
Ben Widawsky Sept. 20, 2013, 8:44 p.m. UTC | #5
On Fri, Sep 20, 2013 at 11:43:48AM +0100, Chris Wilson wrote:
> On Thu, Sep 19, 2013 at 09:06:39PM -0700, Ben Widawsky wrote:
> > @@ -1117,8 +1114,25 @@ i915_gem_do_execbuffer(struct drm_device *dev, void *data,
> >  	 * batch" bit. Hence we need to pin secure batches into the global gtt.
> >  	 * hsw should have this fixed, but let's be paranoid and do it
> >  	 * unconditionally for now. */
> > -	if (flags & I915_DISPATCH_SECURE && !batch_obj->has_global_gtt_mapping)
> > -		i915_gem_gtt_bind_object(batch_obj, batch_obj->cache_level);
> > +	if (flags & I915_DISPATCH_SECURE) {
> > +		struct i915_address_space *ggtt = obj_to_ggtt(batch_obj);
> > +		/* Assuming all privileged batches are in the global GTT means
> > +		 * we need to make sure we have a global gtt offset, as well as
> > +		 * the PTEs mapped. As mentioned above, we can forego this on
> > +		 * HSW, but don't.
> > +		 */
> > +		ret = i915_gem_object_bind_to_vm(batch_obj, ggtt, 0, false,
> > +						 false);
> > +		if (ret)
> > +			goto err;
> 
> bind_to_vm() has unwanted side-effects here - notably always allocating
> a node and corrupting lists.
> 
> Just pin, ggtt->bind_vma, unpin. Hmmm, except that we also need a
> move_to_active (as we are not presuming vm == ggtt).
> 
> pin, ggtt->bind_vma, move_to_active(ggtt), unpin.
> 
> And then hope we have the correct flushes in place for that to be
> retired if nothing else is going on with that ggtt.

Yes, you're right, and a particularly nice catch on the move to active; I
completely forgot. I think ggtt->bind_vma is redundant though. Shouldn't
it just be:
pin, move_to_active, unpin?

Furthermore, the actual pinning (pin count increment) should be
unnecessary, but I assume you were just trying to save me some typing.

> 
> > +
> > +		ggtt->bind_vma(i915_gem_obj_to_ggtt(batch_obj),
> > +			       batch_obj->cache_level,
> > +			       GLOBAL_BIND);
> > +
> > +		exec_start += i915_gem_obj_ggtt_offset(batch_obj);
> > +	} else
> > +		exec_start += i915_gem_obj_offset(batch_obj, vm);
> >  
> >  	ret = i915_gem_execbuffer_move_to_gpu(ring, &eb->vmas);
> >  	if (ret)
> 
> -- 
> Chris Wilson, Intel Open Source Technology Centre
Chris Wilson Sept. 20, 2013, 8:55 p.m. UTC | #6
On Fri, Sep 20, 2013 at 01:44:23PM -0700, Ben Widawsky wrote:
> On Fri, Sep 20, 2013 at 11:43:48AM +0100, Chris Wilson wrote:
> > On Thu, Sep 19, 2013 at 09:06:39PM -0700, Ben Widawsky wrote:
> > > @@ -1117,8 +1114,25 @@ i915_gem_do_execbuffer(struct drm_device *dev, void *data,
> > >  	 * batch" bit. Hence we need to pin secure batches into the global gtt.
> > >  	 * hsw should have this fixed, but let's be paranoid and do it
> > >  	 * unconditionally for now. */
> > > -	if (flags & I915_DISPATCH_SECURE && !batch_obj->has_global_gtt_mapping)
> > > -		i915_gem_gtt_bind_object(batch_obj, batch_obj->cache_level);
> > > +	if (flags & I915_DISPATCH_SECURE) {
> > > +		struct i915_address_space *ggtt = obj_to_ggtt(batch_obj);
> > > +		/* Assuming all privileged batches are in the global GTT means
> > > +		 * we need to make sure we have a global gtt offset, as well as
> > > +		 * the PTEs mapped. As mentioned above, we can forego this on
> > > +		 * HSW, but don't.
> > > +		 */
> > > +		ret = i915_gem_object_bind_to_vm(batch_obj, ggtt, 0, false,
> > > +						 false);
> > > +		if (ret)
> > > +			goto err;
> > 
> > bind_to_vm() has unwanted side-effects here - notably always allocating
> > a node and corrupting lists.
> > 
> > Just pin, ggtt->bind_vma, unpin. Hmmm, except that we also need a
> > move_to_active (as we are not presuming vm == ggtt).
> > 
> > pin, ggtt->bind_vma, move_to_active(ggtt), unpin.
> > 
> > And then hope we have the correct flushes in place for that to be
> > retired if nothing else is going on with that ggtt.
> 
> Yes, you're right, and a particularly nice catch on the move to active; I
> completely forgot. I think ggtt->bind_vma is redundant though. Shouldn't
> it just be:
> pin, move_to_active, unpin?

Since we will ask for a !map_and_fenceable pin, pin() will not
automatically bind into the global GTT, so I think we still need the
ggtt->bind_vma().

 
> Furthermore, the actual pinning (pin count increment) should be
> unnecessary, but I assume you were just trying to save me some typing.

Yes, the pin-count adjustments should be unnecessary - but not a huge
burden, and I was thinking it may help in the future as we may want to
explicitly hold the pin until move-to-active for all objects. That
future being where we strive to reduce hold times on struct_mutex.
-Chris
Ben Widawsky Sept. 20, 2013, 9:08 p.m. UTC | #7
On Fri, Sep 20, 2013 at 09:55:51PM +0100, Chris Wilson wrote:
> On Fri, Sep 20, 2013 at 01:44:23PM -0700, Ben Widawsky wrote:
> > On Fri, Sep 20, 2013 at 11:43:48AM +0100, Chris Wilson wrote:
> > > On Thu, Sep 19, 2013 at 09:06:39PM -0700, Ben Widawsky wrote:
> > > > @@ -1117,8 +1114,25 @@ i915_gem_do_execbuffer(struct drm_device *dev, void *data,
> > > >  	 * batch" bit. Hence we need to pin secure batches into the global gtt.
> > > >  	 * hsw should have this fixed, but let's be paranoid and do it
> > > >  	 * unconditionally for now. */
> > > > -	if (flags & I915_DISPATCH_SECURE && !batch_obj->has_global_gtt_mapping)
> > > > -		i915_gem_gtt_bind_object(batch_obj, batch_obj->cache_level);
> > > > +	if (flags & I915_DISPATCH_SECURE) {
> > > > +		struct i915_address_space *ggtt = obj_to_ggtt(batch_obj);
> > > > +		/* Assuming all privileged batches are in the global GTT means
> > > > +		 * we need to make sure we have a global gtt offset, as well as
> > > > +		 * the PTEs mapped. As mentioned above, we can forego this on
> > > > +		 * HSW, but don't.
> > > > +		 */
> > > > +		ret = i915_gem_object_bind_to_vm(batch_obj, ggtt, 0, false,
> > > > +						 false);
> > > > +		if (ret)
> > > > +			goto err;
> > > 
> > > bind_to_vm() has unwanted side-effects here - notably always allocating
> > > a node and corrupting lists.
> > > 
> > > Just pin, ggtt->bind_vma, unpin. Hmmm, except that we also need a
> > > move_to_active (as we are not presuming vm == ggtt).
> > > 
> > > pin, ggtt->bind_vma, move_to_active(ggtt), unpin.
> > > 
> > > And then hope we have the correct flushes in place for that to be
> > > retired if nothing else is going on with that ggtt.
> > 
> > Yes, you're right, and a particularly nice catch on the move to active; I
> > completely forgot. I think ggtt->bind_vma is redundant though. Shouldn't
> > it just be:
> > pin, move_to_active, unpin?
> 
> Since we will ask for a !map_and_fenceable pin, pin() will not
> automatically bind into the global GTT, so I think we still need the
> ggtt->bind_vma().
> 
pin gets passed a VM, which will be the GGTT.
>  
> > Furthermore, the actual pinning (pin count increment) should be
> > unnecessary, but I assume you were just trying to save me some typing.
> 
> Yes, the pin-count adjustments should be unnecessary - but not a huge
> burden, and I was thinking it may help in the future as we may want to
> explicitly hold the pin until move-to-active for all objects. That
> future being where we strive to reduce hold times on struct_mutex.
> -Chris
> 
> -- 
> Chris Wilson, Intel Open Source Technology Centre
Daniel Vetter Sept. 20, 2013, 9:22 p.m. UTC | #8
On Fri, Sep 20, 2013 at 09:55:51PM +0100, Chris Wilson wrote:
> On Fri, Sep 20, 2013 at 01:44:23PM -0700, Ben Widawsky wrote:
> > Furthermore, the actual pinning (pin count increment) should be
> > unnecessary, but I assume you were just trying to save me some typing.
> 
> Yes, the pin-count adjustments should be unnecessary - but not a huge
> burden, and I was thinking it may help in the future as we may want to
> explicitly hold the pin until move-to-active for all objects. That
> future being where we strive to reduce hold times on struct_mutex.

My grand plan is that pinning-to-mark-an-object-reserved-for-execbuf will
be replaced by per-object-lock-acquired. By using the owner-tracking of ww
mutexes we'll even get a "you have this already acquired" notice for free.
And then we obviously need to hold the ww mutex lock until we're done
updating the state, so past the move-to-active.

But I haven't worked out a concrete plan for how to get there yet, so
dunno whether sprinkling more pinning around is a good idea or not. Just
wanted to drop my 2 uninformed cents here ;-)
-Daniel

Patch

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 686a66c..8172eb1 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -1862,6 +1862,12 @@  int __must_check i915_gem_object_pin(struct drm_i915_gem_object *obj,
 				     uint32_t alignment,
 				     bool map_and_fenceable,
 				     bool nonblocking);
+int __must_check
+i915_gem_object_bind_to_vm(struct drm_i915_gem_object *obj,
+			   struct i915_address_space *vm,
+			   unsigned alignment,
+			   bool map_and_fenceable,
+			   bool nonblocking);
 void i915_gem_object_unpin(struct drm_i915_gem_object *obj);
 int __must_check i915_vma_unbind(struct i915_vma *vma);
 int __must_check i915_gem_object_ggtt_unbind(struct drm_i915_gem_object *obj);
@@ -2085,17 +2091,8 @@  int i915_gem_context_destroy_ioctl(struct drm_device *dev, void *data,
 
 /* i915_gem_gtt.c */
 void i915_gem_cleanup_aliasing_ppgtt(struct drm_device *dev);
-void i915_ppgtt_bind_object(struct i915_hw_ppgtt *ppgtt,
-			    struct drm_i915_gem_object *obj,
-			    enum i915_cache_level cache_level);
-void i915_ppgtt_unbind_object(struct i915_hw_ppgtt *ppgtt,
-			      struct drm_i915_gem_object *obj);
-
 void i915_gem_restore_gtt_mappings(struct drm_device *dev);
 int __must_check i915_gem_gtt_prepare_object(struct drm_i915_gem_object *obj);
-void i915_gem_gtt_bind_object(struct drm_i915_gem_object *obj,
-				enum i915_cache_level cache_level);
-void i915_gem_gtt_unbind_object(struct drm_i915_gem_object *obj);
 void i915_gem_gtt_finish_object(struct drm_i915_gem_object *obj);
 void i915_gem_init_global_gtt(struct drm_device *dev);
 void i915_gem_setup_global_gtt(struct drm_device *dev, unsigned long start,
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 651b91c..4f8dc67 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -43,12 +43,6 @@  static void i915_gem_object_flush_cpu_write_domain(struct drm_i915_gem_object *o
 static __must_check int
 i915_gem_object_wait_rendering(struct drm_i915_gem_object *obj,
 			       bool readonly);
-static __must_check int
-i915_gem_object_bind_to_vm(struct drm_i915_gem_object *obj,
-			   struct i915_address_space *vm,
-			   unsigned alignment,
-			   bool map_and_fenceable,
-			   bool nonblocking);
 static int i915_gem_phys_pwrite(struct drm_device *dev,
 				struct drm_i915_gem_object *obj,
 				struct drm_i915_gem_pwrite *args,
@@ -2693,12 +2687,8 @@  int i915_vma_unbind(struct i915_vma *vma)
 
 	trace_i915_vma_unbind(vma);
 
-	if (obj->has_global_gtt_mapping)
-		i915_gem_gtt_unbind_object(obj);
-	if (obj->has_aliasing_ppgtt_mapping) {
-		i915_ppgtt_unbind_object(dev_priv->mm.aliasing_ppgtt, obj);
-		obj->has_aliasing_ppgtt_mapping = 0;
-	}
+	vma->vm->unbind_vma(vma);
+
 	i915_gem_gtt_finish_object(obj);
 	i915_gem_object_unpin_pages(obj);
 
@@ -3155,7 +3145,7 @@  static void i915_gem_verify_gtt(struct drm_device *dev)
 /**
  * Finds free space in the GTT aperture and binds the object there.
  */
-static int
+int
 i915_gem_object_bind_to_vm(struct drm_i915_gem_object *obj,
 			   struct i915_address_space *vm,
 			   unsigned alignment,
@@ -3424,7 +3414,6 @@  int i915_gem_object_set_cache_level(struct drm_i915_gem_object *obj,
 				    enum i915_cache_level cache_level)
 {
 	struct drm_device *dev = obj->base.dev;
-	drm_i915_private_t *dev_priv = dev->dev_private;
 	struct i915_vma *vma;
 	int ret;
 
@@ -3463,11 +3452,8 @@  int i915_gem_object_set_cache_level(struct drm_i915_gem_object *obj,
 				return ret;
 		}
 
-		if (obj->has_global_gtt_mapping)
-			i915_gem_gtt_bind_object(obj, cache_level);
-		if (obj->has_aliasing_ppgtt_mapping)
-			i915_ppgtt_bind_object(dev_priv->mm.aliasing_ppgtt,
-					       obj, cache_level);
+		list_for_each_entry(vma, &obj->vma_list, vma_link)
+			vma->vm->bind_vma(vma, cache_level, 0);
 	}
 
 	list_for_each_entry(vma, &obj->vma_list, vma_link)
@@ -3795,6 +3781,7 @@  i915_gem_object_pin(struct drm_i915_gem_object *obj,
 		    bool map_and_fenceable,
 		    bool nonblocking)
 {
+	const u32 flags = map_and_fenceable ? GLOBAL_BIND : 0;
 	struct i915_vma *vma;
 	int ret;
 
@@ -3823,20 +3810,22 @@  i915_gem_object_pin(struct drm_i915_gem_object *obj,
 	}
 
 	if (!i915_gem_obj_bound(obj, vm)) {
-		struct drm_i915_private *dev_priv = obj->base.dev->dev_private;
-
 		ret = i915_gem_object_bind_to_vm(obj, vm, alignment,
 						 map_and_fenceable,
 						 nonblocking);
 		if (ret)
 			return ret;
 
-		if (!dev_priv->mm.aliasing_ppgtt)
-			i915_gem_gtt_bind_object(obj, obj->cache_level);
-	}
+		vma = i915_gem_obj_to_vma(obj, vm);
+		vm->bind_vma(vma, obj->cache_level, flags);
+	} else
+		vma = i915_gem_obj_to_vma(obj, vm);
 
+	/* Objects are created map and fenceable. If we bind an object
+	 * the first time, and we had aliasing PPGTT (and didn't request
+	 * GLOBAL), we'll need to do this on the second bind.*/
 	if (!obj->has_global_gtt_mapping && map_and_fenceable)
-		i915_gem_gtt_bind_object(obj, obj->cache_level);
+		vm->bind_vma(vma, obj->cache_level, GLOBAL_BIND);
 
 	obj->pin_count++;
 	obj->pin_mappable |= map_and_fenceable;
diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c
index cb3b7e8..2e7416f 100644
--- a/drivers/gpu/drm/i915/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/i915_gem_context.c
@@ -415,8 +415,10 @@  static int do_switch(struct i915_hw_context *to)
 		return ret;
 	}
 
-	if (!to->obj->has_global_gtt_mapping)
-		i915_gem_gtt_bind_object(to->obj, to->obj->cache_level);
+	if (!to->obj->has_global_gtt_mapping) {
+		struct i915_vma *vma = i915_gem_obj_to_ggtt(to->obj);
+		vma->vm->bind_vma(vma, to->obj->cache_level, GLOBAL_BIND);
+	}
 
 	if (!to->is_initialized || is_default_context(to))
 		hw_flags |= MI_RESTORE_INHIBIT;
diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
index b26d979..5702a30 100644
--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
@@ -286,8 +286,14 @@  i915_gem_execbuffer_relocate_entry(struct drm_i915_gem_object *obj,
 	if (unlikely(IS_GEN6(dev) &&
 	    reloc->write_domain == I915_GEM_DOMAIN_INSTRUCTION &&
 	    !target_i915_obj->has_global_gtt_mapping)) {
-		i915_gem_gtt_bind_object(target_i915_obj,
-					 target_i915_obj->cache_level);
+		/* SNB shall not support full PPGTT. This path can only be taken
+		 * when the VM is the GGTT (aliasing PPGTT is not a real VM, and
+		 * therefore doesn't count).
+		 */
+		BUG_ON(vm != obj_to_ggtt(target_i915_obj));
+		vm->bind_vma(i915_gem_obj_to_ggtt(target_i915_obj),
+			     target_i915_obj->cache_level,
+			     GLOBAL_BIND);
 	}
 
 	/* Validate that the target is in a valid r/w GPU domain */
@@ -464,11 +470,12 @@  i915_gem_execbuffer_reserve_vma(struct i915_vma *vma,
 				struct intel_ring_buffer *ring,
 				bool *need_reloc)
 {
-	struct drm_i915_private *dev_priv = ring->dev->dev_private;
+	struct drm_i915_gem_object *obj = vma->obj;
 	struct drm_i915_gem_exec_object2 *entry = vma->exec_entry;
 	bool has_fenced_gpu_access = INTEL_INFO(ring->dev)->gen < 4;
 	bool need_fence, need_mappable;
-	struct drm_i915_gem_object *obj = vma->obj;
+	u32 flags = (entry->flags & EXEC_OBJECT_NEEDS_GTT) &&
+		!vma->obj->has_global_gtt_mapping ? GLOBAL_BIND : 0;
 	int ret;
 
 	need_fence =
@@ -497,14 +504,6 @@  i915_gem_execbuffer_reserve_vma(struct i915_vma *vma,
 		}
 	}
 
-	/* Ensure ppgtt mapping exists if needed */
-	if (dev_priv->mm.aliasing_ppgtt && !obj->has_aliasing_ppgtt_mapping) {
-		i915_ppgtt_bind_object(dev_priv->mm.aliasing_ppgtt,
-				       obj, obj->cache_level);
-
-		obj->has_aliasing_ppgtt_mapping = 1;
-	}
-
 	if (entry->offset != vma->node.start) {
 		entry->offset = vma->node.start;
 		*need_reloc = true;
@@ -515,9 +514,7 @@  i915_gem_execbuffer_reserve_vma(struct i915_vma *vma,
 		obj->base.pending_write_domain = I915_GEM_DOMAIN_RENDER;
 	}
 
-	if (entry->flags & EXEC_OBJECT_NEEDS_GTT &&
-	    !obj->has_global_gtt_mapping)
-		i915_gem_gtt_bind_object(obj, obj->cache_level);
+	vma->vm->bind_vma(vma, obj->cache_level, flags);
 
 	return 0;
 }
@@ -936,7 +933,7 @@  i915_gem_do_execbuffer(struct drm_device *dev, void *data,
 	struct intel_ring_buffer *ring;
 	struct i915_ctx_hang_stats *hs;
 	u32 ctx_id = i915_execbuffer2_get_context_id(*args);
-	u32 exec_start, exec_len;
+	u32 exec_len, exec_start = args->batch_start_offset;
 	u32 mask, flags;
 	int ret, mode, i;
 	bool need_relocs;
@@ -1117,8 +1114,25 @@  i915_gem_do_execbuffer(struct drm_device *dev, void *data,
 	 * batch" bit. Hence we need to pin secure batches into the global gtt.
 	 * hsw should have this fixed, but let's be paranoid and do it
 	 * unconditionally for now. */
-	if (flags & I915_DISPATCH_SECURE && !batch_obj->has_global_gtt_mapping)
-		i915_gem_gtt_bind_object(batch_obj, batch_obj->cache_level);
+	if (flags & I915_DISPATCH_SECURE) {
+		struct i915_address_space *ggtt = obj_to_ggtt(batch_obj);
+		/* Assuming all privileged batches are in the global GTT means
+		 * we need to make sure we have a global gtt offset, as well as
+		 * the PTEs mapped. As mentioned above, we can forego this on
+		 * HSW, but don't.
+		 */
+		ret = i915_gem_object_bind_to_vm(batch_obj, ggtt, 0, false,
+						 false);
+		if (ret)
+			goto err;
+
+		ggtt->bind_vma(i915_gem_obj_to_ggtt(batch_obj),
+			       batch_obj->cache_level,
+			       GLOBAL_BIND);
+
+		exec_start += i915_gem_obj_ggtt_offset(batch_obj);
+	} else
+		exec_start += i915_gem_obj_offset(batch_obj, vm);
 
 	ret = i915_gem_execbuffer_move_to_gpu(ring, &eb->vmas);
 	if (ret)
@@ -1160,8 +1174,6 @@  i915_gem_do_execbuffer(struct drm_device *dev, void *data,
 			goto err;
 	}
 
-	exec_start = i915_gem_obj_offset(batch_obj, vm) +
-		args->batch_start_offset;
 	exec_len = args->batch_len;
 	if (cliprects) {
 		for (i = 0; i < args->num_cliprects; i++) {
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index 0ea40b3..7e4a308 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -437,15 +437,6 @@  void i915_gem_cleanup_aliasing_ppgtt(struct drm_device *dev)
 	dev_priv->mm.aliasing_ppgtt = NULL;
 }
 
-void i915_ppgtt_bind_object(struct i915_hw_ppgtt *ppgtt,
-			    struct drm_i915_gem_object *obj,
-			    enum i915_cache_level cache_level)
-{
-	ppgtt->base.insert_entries(&ppgtt->base, obj->pages,
-				   i915_gem_obj_ggtt_offset(obj) >> PAGE_SHIFT,
-				   cache_level);
-}
-
 static void __always_unused
 gen6_ppgtt_bind_vma(struct i915_vma *vma,
 		    enum i915_cache_level cache_level,
@@ -458,14 +449,6 @@  gen6_ppgtt_bind_vma(struct i915_vma *vma,
 	gen6_ppgtt_insert_entries(vma->vm, vma->obj->pages, entry, cache_level);
 }
 
-void i915_ppgtt_unbind_object(struct i915_hw_ppgtt *ppgtt,
-			      struct drm_i915_gem_object *obj)
-{
-	ppgtt->base.clear_range(&ppgtt->base,
-				i915_gem_obj_ggtt_offset(obj) >> PAGE_SHIFT,
-				obj->base.size >> PAGE_SHIFT);
-}
-
 static void __always_unused gen6_ppgtt_unbind_vma(struct i915_vma *vma)
 {
 	const unsigned long entry = vma->node.start >> PAGE_SHIFT;
@@ -523,8 +506,10 @@  void i915_gem_restore_gtt_mappings(struct drm_device *dev)
 				       dev_priv->gtt.base.total / PAGE_SIZE);
 
 	list_for_each_entry(obj, &dev_priv->mm.bound_list, global_list) {
+		struct i915_vma *vma = i915_gem_obj_to_vma(obj,
+							   &dev_priv->gtt.base);
 		i915_gem_clflush_object(obj, obj->pin_display);
-		i915_gem_gtt_bind_object(obj, obj->cache_level);
+		vma->vm->bind_vma(vma, obj->cache_level, 0);
 	}
 
 	i915_gem_chipset_flush(dev);
@@ -687,33 +672,6 @@  static void gen6_ggtt_bind_vma(struct i915_vma *vma,
 	}
 }
 
-void i915_gem_gtt_bind_object(struct drm_i915_gem_object *obj,
-			      enum i915_cache_level cache_level)
-{
-	struct drm_device *dev = obj->base.dev;
-	struct drm_i915_private *dev_priv = dev->dev_private;
-	const unsigned long entry = i915_gem_obj_ggtt_offset(obj) >> PAGE_SHIFT;
-
-	dev_priv->gtt.base.insert_entries(&dev_priv->gtt.base, obj->pages,
-					  entry,
-					  cache_level);
-
-	obj->has_global_gtt_mapping = 1;
-}
-
-void i915_gem_gtt_unbind_object(struct drm_i915_gem_object *obj)
-{
-	struct drm_device *dev = obj->base.dev;
-	struct drm_i915_private *dev_priv = dev->dev_private;
-	const unsigned long entry = i915_gem_obj_ggtt_offset(obj) >> PAGE_SHIFT;
-
-	dev_priv->gtt.base.clear_range(&dev_priv->gtt.base,
-				       entry,
-				       obj->base.size >> PAGE_SHIFT);
-
-	obj->has_global_gtt_mapping = 0;
-}
-
 static void gen6_ggtt_unbind_vma(struct i915_vma *vma)
 {
 	struct drm_device *dev = vma->vm->dev;