
drm/i915: avoid leaking DMA mappings

Message ID 1436194237-850-1-git-send-email-imre.deak@intel.com (mailing list archive)
State New, archived

Commit Message

Imre Deak July 6, 2015, 2:50 p.m. UTC
We have 3 types of DMA mappings for GEM objects:
1. physically contiguous for stolen and for objects needing contiguous
   memory
2. DMA-buf mappings imported via a DMA-buf attach operation
3. SG DMA mappings for shmem backed and userptr objects

For 1. and 2. the lifetime of the DMA mapping matches the lifetime of the
corresponding backing pages and so in practice we create/release the
mapping in the object's get_pages/put_pages callback.

For 3. the lifetime of the mapping matches that of any existing GPU binding
of the object, so we'll create the mapping when the object is bound to
the first vma and release the mapping when the object is unbound from its
last vma.

Since the object can be bound to multiple vmas, we can end up creating a
new DMA mapping in the 3. case even if the object already had one. This
is not allowed by the DMA API and can lead to leaked mapping data and
IOMMU memory space starvation in certain cases. For example HW IOMMU
drivers (intel_iommu) allocate a new range from their memory space
whenever a mapping is created, silently overriding a pre-existing
mapping.
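
A minimal sketch of the offending pattern (illustrative only, not code
from this patch; dev and st here stand for the PCI device and the
object's sg_table):

	/* object bound to its first vma: allocates IOVA range #1 */
	dma_map_sg(dev, st->sgl, st->nents, PCI_DMA_BIDIRECTIONAL);
	/* object bound to a second vma, no unmap in between:
	 * allocates IOVA range #2 and silently leaks range #1 */
	dma_map_sg(dev, st->sgl, st->nents, PCI_DMA_BIDIRECTIONAL);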

Fix this by adding new callbacks to create/release the DMA mapping. This
way we can use the has_dma_mapping flag for objects of the 3. case also
(so far the flag was only used for the 1. and 2. case) and skip creating
a new mapping if one exists already.

Note that I also thought about simply creating/releasing the mapping
when get_pages/put_pages is called. However since creating a DMA mapping
may have associated resources (at least in case of HW IOMMU) it does
make sense to release these resources as early as possible. We can
release the DMA mapping as soon as the object is unbound from the last
vma, before we drop the backing pages, hence it's worth keeping the two
operations separate.
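
The intended lifetimes, roughly sketched:

	get_pages               -> backing pages allocated
	bind to first vma       -> DMA mapping created
	bind/unbind other vmas  -> existing mapping reused
	unbind from last vma    -> DMA mapping released (IOMMU resources
	                           freed as early as possible)
	put_pages               -> backing pages dropped (possibly much
	                           later, e.g. at object destruction)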

I noticed this issue by enabling DMA debugging, which got disabled after
a while due to its internal mapping tables getting full. It also reported
errors in connection to random other drivers that did a DMA mapping for
an address that was previously mapped by i915 but was never released.
Besides these diagnostic messages and the memory space starvation
problem for IOMMUs, I'm not aware of this causing a real issue.

Signed-off-by: Imre Deak <imre.deak@intel.com>
---
 drivers/gpu/drm/i915/i915_drv.h     |  2 ++
 drivers/gpu/drm/i915/i915_gem.c     | 26 ++++++++++++++++++++++++++
 drivers/gpu/drm/i915/i915_gem_gtt.c | 15 ++++-----------
 3 files changed, 32 insertions(+), 11 deletions(-)

Comments

Chris Wilson July 6, 2015, 2:57 p.m. UTC | #1
On Mon, Jul 06, 2015 at 05:50:37PM +0300, Imre Deak wrote:
> We have 3 types of DMA mappings for GEM objects:
> 1. physically contiguous for stolen and for objects needing contiguous
>    memory
> 2. DMA-buf mappings imported via a DMA-buf attach operation
> 3. SG DMA mappings for shmem backed and userptr objects
> 
> For 1. and 2. the lifetime of the DMA mapping matches the lifetime of the
> corresponding backing pages and so in practice we create/release the
> mapping in the object's get_pages/put_pages callback.
> 
> For 3. the lifetime of the mapping matches that of any existing GPU binding
> of the object, so we'll create the mapping when the object is bound to
> the first vma and release the mapping when the object is unbound from its
> last vma.
> 
> Since the object can be bound to multiple vmas, we can end up creating a
> new DMA mapping in the 3. case even if the object already had one. This
> is not allowed by the DMA API and can lead to leaked mapping data and
> IOMMU memory space starvation in certain cases. For example HW IOMMU
> drivers (intel_iommu) allocate a new range from their memory space
> whenever a mapping is created, silently overriding a pre-existing
> mapping.
> 
> Fix this by adding new callbacks to create/release the DMA mapping. This
> way we can use the has_dma_mapping flag for objects of the 3. case also
> (so far the flag was only used for the 1. and 2. case) and skip creating
> a new mapping if one exists already.
> 
> Note that I also thought about simply creating/releasing the mapping
> when get_pages/put_pages is called. However since creating a DMA mapping
> may have associated resources (at least in case of HW IOMMU) it does
> make sense to release these resources as early as possible. We can
> release the DMA mapping as soon as the object is unbound from the last
> vma, before we drop the backing pages, hence it's worth keeping the two
> operations separate.
> 
> I noticed this issue by enabling DMA debugging, which got disabled after
> a while due to its internal mapping tables getting full. It also reported
> errors in connection to random other drivers that did a DMA mapping for
> an address that was previously mapped by i915 but was never released.
> Besides these diagnostic messages and the memory space starvation
> problem for IOMMUs, I'm not aware of this causing a real issue.

Nope, it is much much simpler. Since we only do the dma prepare/finish
from inside get_pages/put_pages, we can put the calls there. The only
caveat there is userptr worker, but that can be easily fixed up.

http://cgit.freedesktop.org/~ickle/linux-2.6/commit/?h=nightly&id=f55727d7d6f76aeee687c1f2d31411662ff03b6f

Nak.
-Chris
Tvrtko Ursulin July 6, 2015, 3:11 p.m. UTC | #2
Hi,

On 07/06/2015 03:50 PM, Imre Deak wrote:
> We have 3 types of DMA mappings for GEM objects:
> 1. physically contiguous for stolen and for objects needing contiguous
>     memory
> 2. DMA-buf mappings imported via a DMA-buf attach operation
> 3. SG DMA mappings for shmem backed and userptr objects
>
> For 1. and 2. the lifetime of the DMA mapping matches the lifetime of the
> corresponding backing pages and so in practice we create/release the
> mapping in the object's get_pages/put_pages callback.
>
> For 3. the lifetime of the mapping matches that of any existing GPU binding
> of the object, so we'll create the mapping when the object is bound to
> the first vma and release the mapping when the object is unbound from its
> last vma.
>
> Since the object can be bound to multiple vmas, we can end up creating a
> new DMA mapping in the 3. case even if the object already had one. This
> is not allowed by the DMA API and can lead to leaked mapping data and
> IOMMU memory space starvation in certain cases. For example HW IOMMU
> drivers (intel_iommu) allocate a new range from their memory space
> whenever a mapping is created, silently overriding a pre-existing
> mapping.

Ha.. back when I was adding multiple GGTT views I had this implemented 
by only calling i915_gem_gtt_prepare_object on the first VMA being 
instantiated, and the opposite for the last one going away. Someone 
told me it was not needed though and to rip it out. :) To be fair I had 
no clue, so I got it right just by being defensive.

> Fix this by adding new callbacks to create/release the DMA mapping. This
> way we can use the has_dma_mapping flag for objects of the 3. case also
> (so far the flag was only used for the 1. and 2. case) and skip creating
> a new mapping if one exists already.
>
> Note that I also thought about simply creating/releasing the mapping
> when get_pages/put_pages is called. However since creating a DMA mapping
> may have associated resources (at least in case of HW IOMMU) it does
> make sense to release these resources as early as possible. We can
> release the DMA mapping as soon as the object is unbound from the last
> vma, before we drop the backing pages, hence it's worth keeping the two
> operations separate.
>
> I noticed this issue by enabling DMA debugging, which got disabled after
> a while due to its internal mapping tables getting full. It also reported
> errors in connection to random other drivers that did a DMA mapping for
> an address that was previously mapped by i915 but was never released.
> Besides these diagnostic messages and the memory space starvation
> problem for IOMMUs, I'm not aware of this causing a real issue.

Out of interest how to enable DMA debugging?

> Signed-off-by: Imre Deak <imre.deak@intel.com>
> ---
>   drivers/gpu/drm/i915/i915_drv.h     |  2 ++
>   drivers/gpu/drm/i915/i915_gem.c     | 26 ++++++++++++++++++++++++++
>   drivers/gpu/drm/i915/i915_gem_gtt.c | 15 ++++-----------
>   3 files changed, 32 insertions(+), 11 deletions(-)

Patch looks good to me but I have this gut feeling Daniel will say that 
function pointers are overkill. Personally I think it is more 
readable than adding special casing to core GEM functions.

Regards,

Tvrtko
Imre Deak July 6, 2015, 3:11 p.m. UTC | #3
On ma, 2015-07-06 at 15:57 +0100, Chris Wilson wrote:
> On Mon, Jul 06, 2015 at 05:50:37PM +0300, Imre Deak wrote:
> > We have 3 types of DMA mappings for GEM objects:
> > 1. physically contiguous for stolen and for objects needing contiguous
> >    memory
> > 2. DMA-buf mappings imported via a DMA-buf attach operation
> > 3. SG DMA mappings for shmem backed and userptr objects
> > 
> > For 1. and 2. the lifetime of the DMA mapping matches the lifetime of the
> > corresponding backing pages and so in practice we create/release the
> > mapping in the object's get_pages/put_pages callback.
> > 
> > For 3. the lifetime of the mapping matches that of any existing GPU binding
> > of the object, so we'll create the mapping when the object is bound to
> > the first vma and release the mapping when the object is unbound from its
> > last vma.
> > 
> > Since the object can be bound to multiple vmas, we can end up creating a
> > new DMA mapping in the 3. case even if the object already had one. This
> > is not allowed by the DMA API and can lead to leaked mapping data and
> > IOMMU memory space starvation in certain cases. For example HW IOMMU
> > drivers (intel_iommu) allocate a new range from their memory space
> > whenever a mapping is created, silently overriding a pre-existing
> > mapping.
> > 
> > Fix this by adding new callbacks to create/release the DMA mapping. This
> > way we can use the has_dma_mapping flag for objects of the 3. case also
> > (so far the flag was only used for the 1. and 2. case) and skip creating
> > a new mapping if one exists already.
> > 
> > Note that I also thought about simply creating/releasing the mapping
> > when get_pages/put_pages is called. However since creating a DMA mapping
> > may have associated resources (at least in case of HW IOMMU) it does
> > make sense to release these resources as early as possible. We can
> > release the DMA mapping as soon as the object is unbound from the last
> > vma, before we drop the backing pages, hence it's worth keeping the two
> > operations separate.
> > 
> > I noticed this issue by enabling DMA debugging, which got disabled after
> > a while due to its internal mapping tables getting full. It also reported
> > errors in connection to random other drivers that did a DMA mapping for
> > an address that was previously mapped by i915 but was never released.
> > Besides these diagnostic messages and the memory space starvation
> > problem for IOMMUs, I'm not aware of this causing a real issue.
> 
> Nope, it is much much simpler. Since we only do the dma prepare/finish
> from inside get_pages/put_pages, we can put the calls there. The only
> caveat there is userptr worker, but that can be easily fixed up.
> 
> http://cgit.freedesktop.org/~ickle/linux-2.6/commit/?h=nightly&id=f55727d7d6f76aeee687c1f2d31411662ff03b6f

Yes, that's what I meant by creating/releasing the mapping in the
get_pages/put_pages callbacks. It does have the disadvantage of holding
on to IOMMU mapping resources longer than needed, as I described
above.

> Nak.

Right. Your patch doesn't explicitly mention fixing the issues I tracked
down, but it does seem to fix them. It would make sense to add this fact
to the commit log.

--Imre
Imre Deak July 6, 2015, 3:21 p.m. UTC | #4
On ma, 2015-07-06 at 16:11 +0100, Tvrtko Ursulin wrote:
> Hi,
> 
> On 07/06/2015 03:50 PM, Imre Deak wrote:
> > We have 3 types of DMA mappings for GEM objects:
> > 1. physically contiguous for stolen and for objects needing contiguous
> >     memory
> > 2. DMA-buf mappings imported via a DMA-buf attach operation
> > 3. SG DMA mappings for shmem backed and userptr objects
> >
> > For 1. and 2. the lifetime of the DMA mapping matches the lifetime of the
> > corresponding backing pages and so in practice we create/release the
> > mapping in the object's get_pages/put_pages callback.
> >
> > For 3. the lifetime of the mapping matches that of any existing GPU binding
> > of the object, so we'll create the mapping when the object is bound to
> > the first vma and release the mapping when the object is unbound from its
> > last vma.
> >
> > Since the object can be bound to multiple vmas, we can end up creating a
> > new DMA mapping in the 3. case even if the object already had one. This
> > is not allowed by the DMA API and can lead to leaked mapping data and
> > IOMMU memory space starvation in certain cases. For example HW IOMMU
> > drivers (intel_iommu) allocate a new range from their memory space
> > whenever a mapping is created, silently overriding a pre-existing
> > mapping.
> 
> Ha.. back when I was adding multiple GGTT views I had this implemented 
> by only calling i915_gem_gtt_prepare_object on first VMA being 
> instantiated, and the same but opposite for last one going away. Someone 
> told me it is not needed though and to rip it out. :) To be fair I had 
> no clue so got it right just by being defensive.
> 
> > Fix this by adding new callbacks to create/release the DMA mapping. This
> > way we can use the has_dma_mapping flag for objects of the 3. case also
> > (so far the flag was only used for the 1. and 2. case) and skip creating
> > a new mapping if one exists already.
> >
> > Note that I also thought about simply creating/releasing the mapping
> > when get_pages/put_pages is called. However since creating a DMA mapping
> > may have associated resources (at least in case of HW IOMMU) it does
> > make sense to release these resources as early as possible. We can
> > release the DMA mapping as soon as the object is unbound from the last
> > vma, before we drop the backing pages, hence it's worth keeping the two
> > operations separate.
> >
> > I noticed this issue by enabling DMA debugging, which got disabled after
> > a while due to its internal mapping tables getting full. It also reported
> > errors in connection to random other drivers that did a DMA mapping for
> > an address that was previously mapped by i915 but was never released.
> > Besides these diagnostic messages and the memory space starvation
> > problem for IOMMUs, I'm not aware of this causing a real issue.
> 
> Out of interest how to enable DMA debugging?

By adding CONFIG_DMA_API_DEBUG=y.
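
For example (dma_debug_entries is an optional boot parameter that
enlarges the preallocated tracking table, which is what filled up in my
case):

	# kernel config
	CONFIG_DMA_API_DEBUG=y

	# optional addition to the kernel command line
	dma_debug_entries=65536

The errors are then reported in dmesg.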

> 
> > Signed-off-by: Imre Deak <imre.deak@intel.com>
> > ---
> >   drivers/gpu/drm/i915/i915_drv.h     |  2 ++
> >   drivers/gpu/drm/i915/i915_gem.c     | 26 ++++++++++++++++++++++++++
> >   drivers/gpu/drm/i915/i915_gem_gtt.c | 15 ++++-----------
> >   3 files changed, 32 insertions(+), 11 deletions(-)
> 
> Patch looks good to me but I have this gut feeling Daniel will say that 
> function pointers are overkill. Personally I think it is more 
> readable than adding special casing to core GEM functions.

Yea, imo it depends on whether we want to keep put_pages and releasing
the DMA mapping as separate operations. In that case we could move the
relevant code for DMA-buf objects into these new callbacks too. But if
that's found to be not worth it, then we can just create/release the
mapping in the get_pages/put_pages callbacks and the new ones are not
needed.

Thanks for your review,
Imre
Chris Wilson July 6, 2015, 3:28 p.m. UTC | #5
On Mon, Jul 06, 2015 at 06:11:40PM +0300, Imre Deak wrote:
> On ma, 2015-07-06 at 15:57 +0100, Chris Wilson wrote:
> > On Mon, Jul 06, 2015 at 05:50:37PM +0300, Imre Deak wrote:
> > > We have 3 types of DMA mappings for GEM objects:
> > > 1. physically contiguous for stolen and for objects needing contiguous
> > >    memory
> > > 2. DMA-buf mappings imported via a DMA-buf attach operation
> > > 3. SG DMA mappings for shmem backed and userptr objects
> > > 
> > > For 1. and 2. the lifetime of the DMA mapping matches the lifetime of the
> > > corresponding backing pages and so in practice we create/release the
> > > mapping in the object's get_pages/put_pages callback.
> > > 
> > > For 3. the lifetime of the mapping matches that of any existing GPU binding
> > > of the object, so we'll create the mapping when the object is bound to
> > > the first vma and release the mapping when the object is unbound from its
> > > last vma.
> > > 
> > > Since the object can be bound to multiple vmas, we can end up creating a
> > > new DMA mapping in the 3. case even if the object already had one. This
> > > is not allowed by the DMA API and can lead to leaked mapping data and
> > > IOMMU memory space starvation in certain cases. For example HW IOMMU
> > > drivers (intel_iommu) allocate a new range from their memory space
> > > whenever a mapping is created, silently overriding a pre-existing
> > > mapping.
> > > 
> > > Fix this by adding new callbacks to create/release the DMA mapping. This
> > > way we can use the has_dma_mapping flag for objects of the 3. case also
> > > (so far the flag was only used for the 1. and 2. case) and skip creating
> > > a new mapping if one exists already.
> > > 
> > > Note that I also thought about simply creating/releasing the mapping
> > > when get_pages/put_pages is called. However since creating a DMA mapping
> > > may have associated resources (at least in case of HW IOMMU) it does
> > > make sense to release these resources as early as possible. We can
> > > release the DMA mapping as soon as the object is unbound from the last
> > > vma, before we drop the backing pages, hence it's worth keeping the two
> > > operations separate.
> > > 
> > > I noticed this issue by enabling DMA debugging, which got disabled after
> > > a while due to its internal mapping tables getting full. It also reported
> > > errors in connection to random other drivers that did a DMA mapping for
> > > an address that was previously mapped by i915 but was never released.
> > > Besides these diagnostic messages and the memory space starvation
> > > problem for IOMMUs, I'm not aware of this causing a real issue.
> > 
> > Nope, it is much much simpler. Since we only do the dma prepare/finish
> > from inside get_pages/put_pages, we can put the calls there. The only
> > caveat there is userptr worker, but that can be easily fixed up.
> > 
> > http://cgit.freedesktop.org/~ickle/linux-2.6/commit/?h=nightly&id=f55727d7d6f76aeee687c1f2d31411662ff03b6f
> 
> Yes, that's what I meant by creating/releasing the mapping in the
> get_pages/put_pages callbacks. It does have the disadvantage of holding
> on to IOMMU mapping resources longer than needed, as I described
> above.

I don't think that is a disadvantage though. You haven't introduced a
dma shrinker, which is what you'd need to handle a limited resource. So
it's a moot point, as we don't handle the allocation failure smartly. By
moving the failure into get_pages, at least it is tractable.
-Chris
Daniel Vetter July 6, 2015, 3:29 p.m. UTC | #6
On Mon, Jul 06, 2015 at 03:57:44PM +0100, Chris Wilson wrote:
> On Mon, Jul 06, 2015 at 05:50:37PM +0300, Imre Deak wrote:
> > We have 3 types of DMA mappings for GEM objects:
> > 1. physically contiguous for stolen and for objects needing contiguous
> >    memory
> > 2. DMA-buf mappings imported via a DMA-buf attach operation
> > 3. SG DMA mappings for shmem backed and userptr objects
> > 
> > For 1. and 2. the lifetime of the DMA mapping matches the lifetime of the
> > corresponding backing pages and so in practice we create/release the
> > mapping in the object's get_pages/put_pages callback.
> > 
> > For 3. the lifetime of the mapping matches that of any existing GPU binding
> > of the object, so we'll create the mapping when the object is bound to
> > the first vma and release the mapping when the object is unbound from its
> > last vma.
> > 
> > Since the object can be bound to multiple vmas, we can end up creating a
> > new DMA mapping in the 3. case even if the object already had one. This
> > is not allowed by the DMA API and can lead to leaked mapping data and
> > IOMMU memory space starvation in certain cases. For example HW IOMMU
> > drivers (intel_iommu) allocate a new range from their memory space
> > whenever a mapping is created, silently overriding a pre-existing
> > mapping.

How does this happen? Essentially list_empty(obj->vmas) ==
!dma_mapping_exists should hold for objects of the 3rd type. I don't
understand how this is broken in the current code. There were definitely
versions of the ppgtt code where this wasn't working properly, but I
thought we'd fixed that up again.
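
Something like this hypothetical check should hold for type-3 objects
(dma_mapping_exists() is made up for illustration, there is no such
helper in the code):

	WARN_ON(list_empty(&obj->vma_list) != !dma_mapping_exists(obj));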

> > Fix this by adding new callbacks to create/release the DMA mapping. This
> > way we can use the has_dma_mapping flag for objects of the 3. case also
> > (so far the flag was only used for the 1. and 2. case) and skip creating
> > a new mapping if one exists already.
> > 
> > Note that I also thought about simply creating/releasing the mapping
> > when get_pages/put_pages is called. However since creating a DMA mapping
> > may have associated resources (at least in case of HW IOMMU) it does
> > make sense to release these resources as early as possible. We can
> > release the DMA mapping as soon as the object is unbound from the last
> > vma, before we drop the backing pages, hence it's worth keeping the two
> > operations separate.
> > 
> > I noticed this issue by enabling DMA debugging, which got disabled after
> > a while due to its internal mapping tables getting full. It also reported
> > errors in connection to random other drivers that did a DMA mapping for
> > an address that was previously mapped by i915 but was never released.
> > Besides these diagnostic messages and the memory space starvation
> > problem for IOMMUs, I'm not aware of this causing a real issue.
> 
> Nope, it is much much simpler. Since we only do the dma prepare/finish
> from inside get_pages/put_pages, we can put the calls there. The only
> caveat there is userptr worker, but that can be easily fixed up.

I do kinda like the distinction between just grabbing the backing storage
and making it accessible to the hw. Small one, but I think it does help if
we keep these two maps separate. Now the function names otoh are
super-confusing, that I agree with.
-Daniel
Imre Deak July 6, 2015, 3:30 p.m. UTC | #7
On ma, 2015-07-06 at 17:29 +0200, Daniel Vetter wrote:
> On Mon, Jul 06, 2015 at 03:57:44PM +0100, Chris Wilson wrote:
> > On Mon, Jul 06, 2015 at 05:50:37PM +0300, Imre Deak wrote:
> > > We have 3 types of DMA mappings for GEM objects:
> > > 1. physically contiguous for stolen and for objects needing contiguous
> > >    memory
> > > 2. DMA-buf mappings imported via a DMA-buf attach operation
> > > 3. SG DMA mappings for shmem backed and userptr objects
> > > 
> > > For 1. and 2. the lifetime of the DMA mapping matches the lifetime of the
> > > corresponding backing pages and so in practice we create/release the
> > > mapping in the object's get_pages/put_pages callback.
> > > 
> > > For 3. the lifetime of the mapping matches that of any existing GPU binding
> > > of the object, so we'll create the mapping when the object is bound to
> > > the first vma and release the mapping when the object is unbound from its
> > > last vma.
> > > 
> > > Since the object can be bound to multiple vmas, we can end up creating a
> > > new DMA mapping in the 3. case even if the object already had one. This
> > > is not allowed by the DMA API and can lead to leaked mapping data and
> > > IOMMU memory space starvation in certain cases. For example HW IOMMU
> > > drivers (intel_iommu) allocate a new range from their memory space
> > > whenever a mapping is created, silently overriding a pre-existing
> > > mapping.
> 
> How does this happen? Essentially list_empty(obj->vmas) ==
> !dma_mapping_exists should hold for objects of the 3rd type. I don't
> understand how this is broken in the current code. There were definitely
> versions of the ppgtt code where this wasn't working properly, but I
> thought we'd fixed that up again.

When binding the object we don't check if it's already bound; we just
create the mapping regardless. So if it was already bound (and thus
already had a mapping), we create a new mapping again, overriding the
old one.
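
For reference, this is the current i915_gem_gtt_prepare_object() (the
same lines the diff at the end removes); shmem/userptr objects take the
dma_map_sg() path on every bind, since has_dma_mapping is so far only
set in the stolen/DMA-buf cases:

	int i915_gem_gtt_prepare_object(struct drm_i915_gem_object *obj)
	{
		if (obj->has_dma_mapping)
			return 0;

		if (!dma_map_sg(&obj->base.dev->pdev->dev,
				obj->pages->sgl, obj->pages->nents,
				PCI_DMA_BIDIRECTIONAL))
			return -ENOSPC;

		return 0;
	}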

> > > Fix this by adding new callbacks to create/release the DMA mapping. This
> > > way we can use the has_dma_mapping flag for objects of the 3. case also
> > > (so far the flag was only used for the 1. and 2. case) and skip creating
> > > a new mapping if one exists already.
> > > 
> > > Note that I also thought about simply creating/releasing the mapping
> > > when get_pages/put_pages is called. However since creating a DMA mapping
> > > may have associated resources (at least in case of HW IOMMU) it does
> > > make sense to release these resources as early as possible. We can
> > > release the DMA mapping as soon as the object is unbound from the last
> > > vma, before we drop the backing pages, hence it's worth keeping the two
> > > operations separate.
> > > 
> > > I noticed this issue by enabling DMA debugging, which got disabled after
> > > a while due to its internal mapping tables getting full. It also reported
> > > errors in connection to random other drivers that did a DMA mapping for
> > > an address that was previously mapped by i915 but was never released.
> > > Besides these diagnostic messages and the memory space starvation
> > > problem for IOMMUs, I'm not aware of this causing a real issue.
> > 
> > Nope, it is much much simpler. Since we only do the dma prepare/finish
> > from inside get_pages/put_pages, we can put the calls there. The only
> > caveat there is userptr worker, but that can be easily fixed up.
> 
> I do kinda like the distinction between just grabbing the backing storage
> and making it accessible to the hw. Small one, but I think it does help if
> we keep these two maps separate. Now the function names otoh are
> super-confusing, that I agree with.

Well, please convince Chris :)


> -Daniel
Imre Deak July 6, 2015, 3:31 p.m. UTC | #8
On ma, 2015-07-06 at 16:28 +0100, Chris Wilson wrote:
> On Mon, Jul 06, 2015 at 06:11:40PM +0300, Imre Deak wrote:
> > On ma, 2015-07-06 at 15:57 +0100, Chris Wilson wrote:
> > > On Mon, Jul 06, 2015 at 05:50:37PM +0300, Imre Deak wrote:
> > > > We have 3 types of DMA mappings for GEM objects:
> > > > 1. physically contiguous for stolen and for objects needing contiguous
> > > >    memory
> > > > 2. DMA-buf mappings imported via a DMA-buf attach operation
> > > > 3. SG DMA mappings for shmem backed and userptr objects
> > > > 
> > > > For 1. and 2. the lifetime of the DMA mapping matches the lifetime of the
> > > > corresponding backing pages and so in practice we create/release the
> > > > mapping in the object's get_pages/put_pages callback.
> > > > 
> > > > For 3. the lifetime of the mapping matches that of any existing GPU binding
> > > > of the object, so we'll create the mapping when the object is bound to
> > > > the first vma and release the mapping when the object is unbound from its
> > > > last vma.
> > > > 
> > > > Since the object can be bound to multiple vmas, we can end up creating a
> > > > new DMA mapping in the 3. case even if the object already had one. This
> > > > is not allowed by the DMA API and can lead to leaked mapping data and
> > > > IOMMU memory space starvation in certain cases. For example HW IOMMU
> > > > drivers (intel_iommu) allocate a new range from their memory space
> > > > whenever a mapping is created, silently overriding a pre-existing
> > > > mapping.
> > > > 
> > > > Fix this by adding new callbacks to create/release the DMA mapping. This
> > > > way we can use the has_dma_mapping flag for objects of the 3. case also
> > > > (so far the flag was only used for the 1. and 2. case) and skip creating
> > > > a new mapping if one exists already.
> > > > 
> > > > Note that I also thought about simply creating/releasing the mapping
> > > > when get_pages/put_pages is called. However since creating a DMA mapping
> > > > may have associated resources (at least in case of HW IOMMU) it does
> > > > make sense to release these resources as early as possible. We can
> > > > release the DMA mapping as soon as the object is unbound from the last
> > > > vma, before we drop the backing pages, hence it's worth keeping the two
> > > > operations separate.
> > > > 
> > > > I noticed this issue by enabling DMA debugging, which got disabled after
> > > > a while due to its internal mapping tables getting full. It also reported
> > > > errors in connection to random other drivers that did a DMA mapping for
> > > > an address that was previously mapped by i915 but was never released.
> > > > Besides these diagnostic messages and the memory space starvation
> > > > problem for IOMMUs, I'm not aware of this causing a real issue.
> > > 
> > > Nope, it is much much simpler. Since we only do the dma prepare/finish
> > > from inside get_pages/put_pages, we can put the calls there. The only
> > > caveat there is userptr worker, but that can be easily fixed up.
> > > 
> > > http://cgit.freedesktop.org/~ickle/linux-2.6/commit/?h=nightly&id=f55727d7d6f76aeee687c1f2d31411662ff03b6f
> > 
> > Yes, that's what I meant by creating/releasing the mapping in the
> > get_pages/put_pages callbacks. It does have the disadvantage of holding
> > on to IOMMU mapping resources longer than needed, as I described
> > above.
> 
> I don't think that is a disadvantage though. You haven't introduced a
> dma shrinker which is what you need to handle a limited resource. So
> it's a moot point as we don't handle the allocation failure smartly. By
> moving the failure into get pages, at least it is tractable.

That's true, but we could do this in the future, if we had the new
callbacks.


> -Chris
>
Chris Wilson July 6, 2015, 3:33 p.m. UTC | #9
On Mon, Jul 06, 2015 at 05:29:39PM +0200, Daniel Vetter wrote:
> On Mon, Jul 06, 2015 at 03:57:44PM +0100, Chris Wilson wrote:
> > On Mon, Jul 06, 2015 at 05:50:37PM +0300, Imre Deak wrote:
> > > We have 3 types of DMA mappings for GEM objects:
> > > 1. physically contiguous for stolen and for objects needing contiguous
> > >    memory
> > > 2. DMA-buf mappings imported via a DMA-buf attach operation
> > > 3. SG DMA mappings for shmem backed and userptr objects
> > > 
> > > For 1. and 2. the lifetime of the DMA mapping matches the lifetime of the
> > > corresponding backing pages and so in practice we create/release the
> > > mapping in the object's get_pages/put_pages callback.
> > > 
> > > For 3. the lifetime of the mapping matches that of any existing GPU binding
> > > of the object, so we'll create the mapping when the object is bound to
> > > the first vma and release the mapping when the object is unbound from its
> > > last vma.
> > > 
> > > Since the object can be bound to multiple vmas, we can end up creating a
> > > new DMA mapping in the 3. case even if the object already had one. This
> > > is not allowed by the DMA API and can lead to leaked mapping data and
> > > IOMMU memory space starvation in certain cases. For example HW IOMMU
> > > drivers (intel_iommu) allocate a new range from their memory space
> > > whenever a mapping is created, silently overriding a pre-existing
> > > mapping.
> 
> How does this happen? Essentially list_empty(obj->vmas) ==
> !dma_mapping_exists should hold for objects of the 3rd type. I don't
> understand how this is broken in the current code. There were definitely
> versions of the ppgtt code where this wasn't working properly, but I
> thought we'd fixed that up again.

Every g/ppgtt binding remapped the obj->pages through the iommu. Even
with the DMAR disabled, we still pay the cpu cost of sw iommu (which is
itself an annoying kernel bug that you can't disable).
 
> > > Fix this by adding new callbacks to create/release the DMA mapping. This
> > > way we can use the has_dma_mapping flag for objects of the 3. case also
> > > (so far the flag was only used for the 1. and 2. case) and skip creating
> > > a new mapping if one exists already.
> > > 
> > > Note that I also thought about simply creating/releasing the mapping
> > > when get_pages/put_pages is called. However since creating a DMA mapping
> > > may have associated resources (at least in case of HW IOMMU) it does
> > > make sense to release these resources as early as possible. We can
> > > release the DMA mapping as soon as the object is unbound from the last
> > > vma, before we drop the backing pages, hence it's worth keeping the two
> > > operations separate.
> > > 
> > > I noticed this issue by enabling DMA debugging, which got disabled after
> > > a while due to its internal mapping tables getting full. It also reported
> > > errors in connection to random other drivers that did a DMA mapping for
> > > an address that was previously mapped by i915 but was never released.
> > > Besides these diagnostic messages and the memory space starvation
> > > problem for IOMMUs, I'm not aware of this causing a real issue.
> > 
> > Nope, it is much much simpler. Since we only do the dma prepare/finish
> > from inside get_pages/put_pages, we can put the calls there. The only
> > caveat there is userptr worker, but that can be easily fixed up.
> 
> I do kinda like the distinction between just grabbing the backing storage
> and making it accessible to the hw. Small one, but I think it does help if
> we keep these two maps separate. Now the function names otoh are
> super-confusing, that I agree with.

But that is the raison d'être of get_pages(). We call it precisely when
we want the backing storage available to the hw. We relaxed that for
set-domain to avoid one type of bug, and stolen/dma-buf have their own
notion of dma mapping. userptr is the odd one out due to its worker
asynchronously grabbing the pages.
-Chris
Imre Deak July 6, 2015, 3:56 p.m. UTC | #10
On ma, 2015-07-06 at 16:33 +0100, Chris Wilson wrote:
> On Mon, Jul 06, 2015 at 05:29:39PM +0200, Daniel Vetter wrote:
> > On Mon, Jul 06, 2015 at 03:57:44PM +0100, Chris Wilson wrote:
> > > On Mon, Jul 06, 2015 at 05:50:37PM +0300, Imre Deak wrote:
> > > > We have 3 types of DMA mappings for GEM objects:
> > > > 1. physically contiguous for stolen and for objects needing contiguous
> > > >    memory
> > > > 2. DMA-buf mappings imported via a DMA-buf attach operation
> > > > 3. SG DMA mappings for shmem backed and userptr objects
> > > > 
> > > > For 1. and 2. the lifetime of the DMA mapping matches the lifetime of the
> > > > corresponding backing pages and so in practice we create/release the
> > > > mapping in the object's get_pages/put_pages callback.
> > > > 
> > > > For 3. the lifetime of the mapping matches that of any existing GPU binding
> > > > of the object, so we'll create the mapping when the object is bound to
> > > > the first vma and release the mapping when the object is unbound from its
> > > > last vma.
> > > > 
> > > > Since the object can be bound to multiple vmas, we can end up creating a
> > > > new DMA mapping in the 3. case even if the object already had one. This
> > > > is not allowed by the DMA API and can lead to leaked mapping data and
> > > > IOMMU memory space starvation in certain cases. For example HW IOMMU
> > > > drivers (intel_iommu) allocate a new range from their memory space
> > > > whenever a mapping is created, silently overriding a pre-existing
> > > > mapping.
> > 
> > How does this happen? Essentially list_empty(obj->vmas) ==
> > !dma_mapping_exists should hold for objects of the 3rd type. I don't
> > understand how this is broken in the current code. There were definitely
> > versions of the ppgtt code where this wasn't working properly, but I
> > thought we'd fixed that up again.
> 
> Every g/ppgtt binding remapped the obj->pages through the iommu. Even
> with the DMAR disabled, we still pay the cpu cost of sw iommu (which is
> itself an annoying kernel bug that you can't disable).
>  
> > > > Fix this by adding new callbacks to create/release the DMA mapping. This
> > > > way we can use the has_dma_mapping flag for objects of the 3. case also
> > > > (so far the flag was only used for the 1. and 2. case) and skip creating
> > > > a new mapping if one exists already.
> > > > 
> > > > Note that I also thought about simply creating/releasing the mapping
> > > > when get_pages/put_pages is called. However since creating a DMA mapping
> > > > may have associated resources (at least in case of HW IOMMU) it does
> > > > make sense to release these resources as early as possible. We can
> > > > release the DMA mapping as soon as the object is unbound from the last
> > > > vma, before we drop the backing pages, hence it's worth keeping the two
> > > > operations separate.
> > > > 
> > > > I noticed this issue by enabling DMA debugging, which got disabled after
> > > > a while due to its internal mapping tables getting full. It also reported
> > > > errors in connection to random other drivers that did a DMA mapping for
> > > > an address that was previously mapped by i915 but was never released.
> > > > Besides these diagnostic messages and the memory space starvation
> > > > problem for IOMMUs, I'm not aware of this causing a real issue.
> > > 
> > > Nope, it is much much simpler. Since we only do the dma prepare/finish
> > > from inside get_pages/put_pages, we can put the calls there. The only
> > > caveat there is userptr worker, but that can be easily fixed up.
> > 
> > I do kinda like the distinction between just grabbing the backing storage
> > and making it accessible to the hw. Small one, but I think it does help if
> > we keep these two maps separate. Now the function names otoh are
> > super-confusing, that I agree with.
> 
> But that is the raison d'être of get_pages(). We call it precisely when
> we want the backing storage available to the hw. We relaxed that for
> set-domain to avoid one type of bug, and stolen/dma-buf have their own
> notion of dma mapping. userptr is the odd one out due to its worker
> asynchronously grabbing the pages.

Isn't the DMA mapping operation more tied to binding the object to a
VMA? As far as I can see we call put_pages only when destroying the
object (or attaching a physically contiguous mapping to it) and that's
because at that point we also give up on the content of the buffer.
Otherwise we just do unbinding when reclaiming memory. At this point it
makes sense to release the DMA mapping independently of releasing the
buffer contents.

--Imre
Chris Wilson July 6, 2015, 4:04 p.m. UTC | #11
On Mon, Jul 06, 2015 at 06:56:00PM +0300, Imre Deak wrote:
> On ma, 2015-07-06 at 16:33 +0100, Chris Wilson wrote:
> > On Mon, Jul 06, 2015 at 05:29:39PM +0200, Daniel Vetter wrote:
> > > On Mon, Jul 06, 2015 at 03:57:44PM +0100, Chris Wilson wrote:
> > > > On Mon, Jul 06, 2015 at 05:50:37PM +0300, Imre Deak wrote:
> > > > > We have 3 types of DMA mappings for GEM objects:
> > > > > 1. physically contiguous for stolen and for objects needing contiguous
> > > > >    memory
> > > > > 2. DMA-buf mappings imported via a DMA-buf attach operation
> > > > > 3. SG DMA mappings for shmem backed and userptr objects
> > > > > 
> > > > > For 1. and 2. the lifetime of the DMA mapping matches the lifetime of the
> > > > > corresponding backing pages and so in practice we create/release the
> > > > > mapping in the object's get_pages/put_pages callback.
> > > > > 
> > > > > For 3. the lifetime of the mapping matches that of any existing GPU binding
> > > > > of the object, so we'll create the mapping when the object is bound to
> > > > > the first vma and release the mapping when the object is unbound from its
> > > > > last vma.
> > > > > 
> > > > > Since the object can be bound to multiple vmas, we can end up creating a
> > > > > new DMA mapping in the 3. case even if the object already had one. This
> > > > > is not allowed by the DMA API and can lead to leaked mapping data and
> > > > > IOMMU memory space starvation in certain cases. For example HW IOMMU
> > > > > drivers (intel_iommu) allocate a new range from their memory space
> > > > > whenever a mapping is created, silently overriding a pre-existing
> > > > > mapping.
> > > 
> > > How does this happen? Essentially list_empty(obj->vmas) ==
> > > !dma_mapping_exists should hold for objects of the 3rd type. I don't
> > > understand how this is broken in the current code. There were definitely
> > > versions of the ppgtt code where this wasn't working properly, but I
> > > thought we'd fixed that up again.
> > 
> > Every g/ppgtt binding remapped the obj->pages through the iommu. Even
> > with the DMAR disabled, we still pay the cpu cost of sw iommu (which is
> > itself an annoying kernel bug that you can't disable).
> >  
> > > > > Fix this by adding new callbacks to create/release the DMA mapping. This
> > > > > way we can use the has_dma_mapping flag for objects of the 3. case also
> > > > > (so far the flag was only used for the 1. and 2. case) and skip creating
> > > > > a new mapping if one exists already.
> > > > > 
> > > > > Note that I also thought about simply creating/releasing the mapping
> > > > > when get_pages/put_pages is called. However since creating a DMA mapping
> > > > > may have associated resources (at least in case of HW IOMMU) it does
> > > > > make sense to release these resources as early as possible. We can
> > > > > release the DMA mapping as soon as the object is unbound from the last
> > > > > vma, before we drop the backing pages, hence it's worth keeping the two
> > > > > operations separate.
> > > > > 
> > > > > I noticed this issue by enabling DMA debugging, which got disabled after
> > > > > a while due to its internal mapping tables getting full. It also reported
> > > > > errors in connection to random other drivers that did a DMA mapping for
> > > > > an address that was previously mapped by i915 but was never released.
> > > > > Besides these diagnostic messages and the memory space starvation
> > > > > problem for IOMMUs, I'm not aware of this causing a real issue.
> > > > 
> > > > Nope, it is much much simpler. Since we only do the dma prepare/finish
> > > > from inside get_pages/put_pages, we can put the calls there. The only
> > > > caveat there is userptr worker, but that can be easily fixed up.
> > > 
> > > I do kinda like the distinction between just grabbing the backing storage
> > > and making it accessible to the hw. Small one, but I think it does help if
> > > we keep these two maps separate. Now the function names otoh are
> > > super-confusing, that I agree with.
> > 
> > But that is the raison d'être of get_pages(). We call it precisely when
> > we want the backing storage available to the hw. We relaxed that for
> > set-domain to avoid one type of bug, and stolen/dma-buf have their own
> > notion of dma mapping. userptr is the odd one out due to its worker
> > asynchronously grabbing the pages.
> 
> Isn't the DMA mapping operation more tied to binding the object to a
> VMA? As far as I can see we call put_pages only when destroying the
> object (or attaching a physically contiguous mapping to it) and that's
> because at that point we also give up on the content of the buffer.
> Otherwise we just do unbinding when reclaiming memory. At this point it
> makes sense to release the DMA mapping independently of releasing the
> buffer contents.

No. As proved above, it is not about each VMA, it is about preparing the
object for access by the hw - i.e. a natural fit for the
get_pages/put_pages() greedy scheme, and if you look at the workloads
where we benefit from the current scheme, we also massively benefit from
avoiding the remapping. A dma shrinker would also simply call
i915_gem_shrink(), and we can do that today cf get_pages_gtt() and do
our own shrinking first.
-Chris
Imre Deak July 6, 2015, 4:23 p.m. UTC | #12
On ma, 2015-07-06 at 17:04 +0100, Chris Wilson wrote:
> On Mon, Jul 06, 2015 at 06:56:00PM +0300, Imre Deak wrote:
> > On ma, 2015-07-06 at 16:33 +0100, Chris Wilson wrote:
> > > On Mon, Jul 06, 2015 at 05:29:39PM +0200, Daniel Vetter wrote:
> > > > On Mon, Jul 06, 2015 at 03:57:44PM +0100, Chris Wilson wrote:
> > > > > On Mon, Jul 06, 2015 at 05:50:37PM +0300, Imre Deak wrote:
> > > > > > We have 3 types of DMA mappings for GEM objects:
> > > > > > 1. physically contiguous for stolen and for objects needing contiguous
> > > > > >    memory
> > > > > > 2. DMA-buf mappings imported via a DMA-buf attach operation
> > > > > > 3. SG DMA mappings for shmem backed and userptr objects
> > > > > > 
> > > > > > For 1. and 2. the lifetime of the DMA mapping matches the lifetime of the
> > > > > > corresponding backing pages and so in practice we create/release the
> > > > > > mapping in the object's get_pages/put_pages callback.
> > > > > > 
> > > > > > For 3. the lifetime of the mapping matches that of any existing GPU binding
> > > > > > of the object, so we'll create the mapping when the object is bound to
> > > > > > the first vma and release the mapping when the object is unbound from its
> > > > > > last vma.
> > > > > > 
> > > > > > Since the object can be bound to multiple vmas, we can end up creating a
> > > > > > new DMA mapping in the 3. case even if the object already had one. This
> > > > > > is not allowed by the DMA API and can lead to leaked mapping data and
> > > > > > IOMMU memory space starvation in certain cases. For example HW IOMMU
> > > > > > drivers (intel_iommu) allocate a new range from their memory space
> > > > > > whenever a mapping is created, silently overriding a pre-existing
> > > > > > mapping.
> > > > 
> > > > How does this happen? Essentially list_empty(obj->vmas) ==
> > > > !dma_mapping_exists should hold for objects of the 3rd type. I don't
> > > > understand how this is broken in the current code. There were definitely
> > > > versions of the ppgtt code where this wasn't working properly, but I
> > > > thought we'd fixed that up again.
> > > 
> > > Every g/ppgtt binding remapped the obj->pages through the iommu. Even
> > > with the DMAR disabled, we still pay the cpu cost of sw iommu (which is
> > > itself an annoying kernel bug that you can't disable).
> > >  
> > > > > > Fix this by adding new callbacks to create/release the DMA mapping. This
> > > > > > way we can use the has_dma_mapping flag for objects of the 3. case also
> > > > > > (so far the flag was only used for the 1. and 2. case) and skip creating
> > > > > > a new mapping if one exists already.
> > > > > > 
> > > > > > Note that I also thought about simply creating/releasing the mapping
> > > > > > when get_pages/put_pages is called. However since creating a DMA mapping
> > > > > > may have associated resources (at least in case of HW IOMMU) it does
> > > > > > make sense to release these resources as early as possible. We can
> > > > > > release the DMA mapping as soon as the object is unbound from the last
> > > > > > vma, before we drop the backing pages, hence it's worth keeping the two
> > > > > > operations separate.
> > > > > > 
> > > > > > I noticed this issue by enabling DMA debugging, which got disabled after
> > > > > > a while due to its internal mapping tables getting full. It also reported
> > > > > > errors in connection to random other drivers that did a DMA mapping for
> > > > > > an address that was previously mapped by i915 but was never released.
> > > > > > Besides these diagnostic messages and the memory space starvation
> > > > > > problem for IOMMUs, I'm not aware of this causing a real issue.
> > > > > 
> > > > > Nope, it is much much simpler. Since we only do the dma prepare/finish
> > > > > from inside get_pages/put_pages, we can put the calls there. The only
> > > > > caveat there is userptr worker, but that can be easily fixed up.
> > > > 
> > > > I do kinda like the distinction between just grabbing the backing storage
> > > > and making it accessible to the hw. Small one, but I think it does help if
> > > > we keep these two maps separate. Now the function names otoh are
> > > > super-confusing, that I agree with.
> > > 
> > > But that is the raison d'être of get_pages(). We call it precisely when
> > > we want the backing storage available to the hw. We relaxed that for
> > > set-domain to avoid one type of bug, and stolen/dma-buf have their own
> > > notion of dma mapping. userptr is the odd one out due to its worker
> > > asynchronously grabbing the pages.
> > 
> > Isn't the DMA mapping operation more tied to binding the object to a
> > VMA? As far as I can see we call put_pages only when destroying the
> > object (or attaching a physically contiguous mapping to it) and that's
> > because at that point we also give up on the content of the buffer.
> > Otherwise we just do unbinding when reclaiming memory. At this point it
> > makes sense to release the DMA mapping independently of releasing the
> > buffer contents.
> 
> No. As proved above, it is not about each VMA, it is about preparing the
> object for access by the hw - i.e. a natural fit for the
> get_pages/put_pages() greedy scheme, and if you look at the workloads
> where we benefit from the current scheme, we also massively benefit from
> avoiding the remapping. A dma shrinker would also simply call
> i915_gem_shrink(), and we can do that today cf get_pages_gtt() and do
> our own shrinking first.

Right, I misunderstood this. Adding new callbacks doesn't have a benefit
then.

--Imre
Shuang He July 7, 2015, 7:09 p.m. UTC | #13
Tested-By: Intel Graphics QA PRTS (Patch Regression Test System Contact: shuang.he@intel.com)
Task id: 6732
-------------------------------------Summary-------------------------------------
Platform          Delta          drm-intel-nightly          Series Applied
ILK                 -4              302/302              298/302
SNB                 -4              312/316              308/316
IVB                 -5              345/345              340/345
BYT                 -1              289/289              288/289
HSW                 -5              382/382              377/382
-------------------------------------Detailed-------------------------------------
Platform  Test                                drm-intel-nightly          Series Applied
*ILK  igt@gem_userptr_blits@dmabuf-sync      PASS(1)      FAIL(1)
*ILK  igt@gem_userptr_blits@dmabuf-unsync      PASS(1)      FAIL(1)
*ILK  igt@gem_userptr_blits@forked-access      PASS(1)      FAIL(1)
*ILK  igt@gem_userptr_blits@forked-sync-interruptible      PASS(1)      DMESG_WARN(1)
(dmesg patch applied)WARNING:at_drivers/gpu/drm/i915/i915_gem_userptr.c:#cancel_userptr[i915]()@WARNING:.* at .* cancel_userptr+0x
*SNB  igt@gem_userptr_blits@coherency-sync      PASS(1)      CRASH(1)
*SNB  igt@gem_userptr_blits@dmabuf-sync      PASS(1)      FAIL(1)
*SNB  igt@gem_userptr_blits@dmabuf-unsync      PASS(1)      FAIL(1)
*SNB  igt@gem_userptr_blits@forked-access      PASS(1)      FAIL(1)
*IVB  igt@gem_userptr_blits@coherency-sync      PASS(1)      CRASH(1)
*IVB  igt@gem_userptr_blits@coherency-unsync      PASS(1)      CRASH(1)
*IVB  igt@gem_userptr_blits@dmabuf-sync      PASS(1)      FAIL(1)
*IVB  igt@gem_userptr_blits@dmabuf-unsync      PASS(1)      FAIL(1)
*IVB  igt@gem_userptr_blits@forked-access      PASS(1)      FAIL(1)
*BYT  igt@gem_userptr_blits@forked-access      PASS(1)      FAIL(1)
*HSW  igt@gem_userptr_blits@coherency-sync      PASS(1)      FAIL(1)
*HSW  igt@gem_userptr_blits@coherency-unsync      PASS(1)      FAIL(1)
*HSW  igt@gem_userptr_blits@dmabuf-sync      PASS(1)      FAIL(1)
*HSW  igt@gem_userptr_blits@dmabuf-unsync      PASS(1)      FAIL(1)
*HSW  igt@gem_userptr_blits@forked-access      PASS(1)      FAIL(1)
Note: You need to pay more attention to lines starting with '*'

Patch

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 1dbd957..64fd3f0 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -1961,6 +1961,8 @@  struct drm_i915_gem_object_ops {
 	 */
 	int (*get_pages)(struct drm_i915_gem_object *);
 	void (*put_pages)(struct drm_i915_gem_object *);
+	int (*get_dma_mapping)(struct drm_i915_gem_object *);
+	void (*put_dma_mapping)(struct drm_i915_gem_object *);
 	int (*dmabuf_export)(struct drm_i915_gem_object *);
 	void (*release)(struct drm_i915_gem_object *);
 };
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index e4d31fc..fe7020c 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2349,6 +2349,30 @@  i915_gem_object_get_pages(struct drm_i915_gem_object *obj)
 	return 0;
 }
 
+static int i915_gem_object_get_dma_mapping_gtt(struct drm_i915_gem_object *obj)
+{
+	if (obj->has_dma_mapping)
+		return 0;
+
+	if (!dma_map_sg(&obj->base.dev->pdev->dev, obj->pages->sgl,
+			 obj->pages->nents, PCI_DMA_BIDIRECTIONAL))
+		return -ENOSPC;
+
+	obj->has_dma_mapping = true;
+
+	return 0;
+}
+
+static void i915_gem_object_put_dma_mapping_gtt(struct drm_i915_gem_object *obj)
+{
+	WARN_ON_ONCE(!obj->has_dma_mapping);
+
+	dma_unmap_sg(&obj->base.dev->pdev->dev, obj->pages->sgl,
+		     obj->pages->nents, PCI_DMA_BIDIRECTIONAL);
+
+	obj->has_dma_mapping = false;
+}
+
 void i915_vma_move_to_active(struct i915_vma *vma,
 			     struct drm_i915_gem_request *req)
 {
@@ -4635,6 +4659,8 @@  void i915_gem_object_init(struct drm_i915_gem_object *obj,
 static const struct drm_i915_gem_object_ops i915_gem_object_ops = {
 	.get_pages = i915_gem_object_get_pages_gtt,
 	.put_pages = i915_gem_object_put_pages_gtt,
+	.get_dma_mapping = i915_gem_object_get_dma_mapping_gtt,
+	.put_dma_mapping = i915_gem_object_put_dma_mapping_gtt,
 };
 
 struct drm_i915_gem_object *i915_gem_alloc_object(struct drm_device *dev,
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index b29b73f..56bc611 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -1802,13 +1802,8 @@  void i915_gem_suspend_gtt_mappings(struct drm_device *dev)
 
 int i915_gem_gtt_prepare_object(struct drm_i915_gem_object *obj)
 {
-	if (obj->has_dma_mapping)
-		return 0;
-
-	if (!dma_map_sg(&obj->base.dev->pdev->dev,
-			obj->pages->sgl, obj->pages->nents,
-			PCI_DMA_BIDIRECTIONAL))
-		return -ENOSPC;
+	if (obj->ops->get_dma_mapping)
+		return obj->ops->get_dma_mapping(obj);
 
 	return 0;
 }
@@ -2052,10 +2047,8 @@  void i915_gem_gtt_finish_object(struct drm_i915_gem_object *obj)
 
 	interruptible = do_idling(dev_priv);
 
-	if (!obj->has_dma_mapping)
-		dma_unmap_sg(&dev->pdev->dev,
-			     obj->pages->sgl, obj->pages->nents,
-			     PCI_DMA_BIDIRECTIONAL);
+	if (obj->ops->put_dma_mapping)
+		obj->ops->put_dma_mapping(obj);
 
 	undo_idling(dev_priv, interruptible);
 }