diff mbox

drm/i915: make compact dma scatter lists creation work with SWIOTLB backend.

Message ID 1372088868-23477-2-git-send-email-konrad.wilk@oracle.com (mailing list archive)
State New, archived
Headers show

Commit Message

Konrad Rzeszutek Wilk June 24, 2013, 3:47 p.m. UTC
Git commit 90797e6d1ec0dfde6ba62a48b9ee3803887d6ed4
("drm/i915: create compact dma scatter lists for gem objects") makes
certain assumptions about the underlying DMA API that are not always
correct.

On a ThinkPad X230 with an Intel HD 4000 with Xen during the bootup
I see:

[drm:intel_pipe_set_base] *ERROR* pin & fence failed
[drm:intel_crtc_set_config] *ERROR* failed to set mode on [CRTC:3], err = -28

A bit of debugging traced it down to dma_map_sg failing (in
i915_gem_gtt_prepare_object) because some of the SG entries were huge (3MB).

Those are, unfortunately, sizes that SWIOTLB is incapable of handling -
the maximum it can handle is an entry of 512KB of virtually contiguous
memory for its bounce buffer (see IO_TLB_SEGSIZE).

Prior to the above-mentioned git commit the SG entries were 4KB each;
the code introduced by that commit squashes CPU-contiguous PFNs into
one big virtually contiguous entry handed to the DMA API.

This patch is a simple semi-revert, where we emulate the old behavior
if we detect that SWIOTLB is online. If it is not online, we continue
with the new compact scatter-gather mechanism.

An alternative solution would be for the '.get_pages' hook and
i915_gem_gtt_prepare_object to retry with a smaller cap on the number
of PFNs that can be combined together - but with this issue discovered
during rc7, that might be too risky.

Reported-and-Tested-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
CC: Chris Wilson <chris@chris-wilson.co.uk>
CC: Imre Deak <imre.deak@intel.com>
CC: Daniel Vetter <daniel.vetter@ffwll.ch>
CC: David Airlie <airlied@linux.ie>
CC: <dri-devel@lists.freedesktop.org>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 drivers/gpu/drm/i915/i915_gem.c | 15 ++++++++++++---
 1 file changed, 12 insertions(+), 3 deletions(-)

Comments

Daniel Vetter June 24, 2013, 5:09 p.m. UTC | #1
On Mon, Jun 24, 2013 at 11:47:48AM -0400, Konrad Rzeszutek Wilk wrote:
> Git commit 90797e6d1ec0dfde6ba62a48b9ee3803887d6ed4
> ("drm/i915: create compact dma scatter lists for gem objects") makes
> certain assumptions about the under laying DMA API that are not always
> correct.
> 
> On a ThinkPad X230 with an Intel HD 4000 with Xen during the bootup
> I see:
> 
> [drm:intel_pipe_set_base] *ERROR* pin & fence failed
> [drm:intel_crtc_set_config] *ERROR* failed to set mode on [CRTC:3], err = -28
> 
> Bit of debugging traced it down to dma_map_sg failing (in
> i915_gem_gtt_prepare_object) as some of the SG entries were huge (3MB).
> 
> That unfortunately are sizes that the SWIOTLB is incapable of handling -
> the maximum it can handle is a an entry of 512KB of virtual contiguous
> memory for its bounce buffer. (See IO_TLB_SEGSIZE).
> 
> Previous to the above mention git commit the SG entries were of 4KB, and
> the code introduced by above git commit squashed the CPU contiguous PFNs
> in one big virtual address provided to DMA API.
> 
> This patch is a simple semi-revert - were we emulate the old behavior
> if we detect that SWIOTLB is online. If it is not online then we continue
> on with the new compact scatter gather mechanism.
> 
> An alternative solution would be for the the '.get_pages' and the
> i915_gem_gtt_prepare_object to retry with smaller max gap of the
> amount of PFNs that can be combined together - but with this issue
> discovered during rc7 that might be too risky.
> 
> Reported-and-Tested-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> CC: Chris Wilson <chris@chris-wilson.co.uk>
> CC: Imre Deak <imre.deak@intel.com>
> CC: Daniel Vetter <daniel.vetter@ffwll.ch>
> CC: David Airlie <airlied@linux.ie>
> CC: <dri-devel@lists.freedesktop.org>
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

Two things:

- SWIOTLB usage should seriously blow up all over the place in drm/i915.
  We really rely on the fact - true everywhere else - that the pages and
  their dma mapping point at the same backing storage.

- How is this solved elsewhere when constructing sg tables? Or are we
  really the only ones who try to construct such big sg entries? I
  somewhat expected that the dma mapping backend would fill in the
  segment limits accordingly, but I haven't found anything on a quick
  search.


Cheers, Daniel

> ---
>  drivers/gpu/drm/i915/i915_gem.c | 15 ++++++++++++---
>  1 file changed, 12 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> index 970ad17..7045f45 100644
> --- a/drivers/gpu/drm/i915/i915_gem.c
> +++ b/drivers/gpu/drm/i915/i915_gem.c
> @@ -1801,7 +1801,14 @@ i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
>  			gfp |= __GFP_NORETRY | __GFP_NOWARN | __GFP_NO_KSWAPD;
>  			gfp &= ~(__GFP_IO | __GFP_WAIT);
>  		}
> -
> +#ifdef CONFIG_SWIOTLB
> +		if (swiotlb_nr_tbl()) {
> +			st->nents++;
> +			sg_set_page(sg, page, PAGE_SIZE, 0);
> +			sg = sg_next(sg);
> +			continue;
> +		}
> +#endif
>  		if (!i || page_to_pfn(page) != last_pfn + 1) {
>  			if (i)
>  				sg = sg_next(sg);
> @@ -1812,8 +1819,10 @@ i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
>  		}
>  		last_pfn = page_to_pfn(page);
>  	}
> -
> -	sg_mark_end(sg);
> +#ifdef CONFIG_SWIOTLB
> +	if (!swiotlb_nr_tbl())
> +#endif
> +		sg_mark_end(sg);
>  	obj->pages = st;
>  
>  	if (i915_gem_object_needs_bit17_swizzle(obj))
> -- 
> 1.8.1.4
>
Konrad Rzeszutek Wilk June 24, 2013, 5:32 p.m. UTC | #2
On Mon, Jun 24, 2013 at 07:09:12PM +0200, Daniel Vetter wrote:
> On Mon, Jun 24, 2013 at 11:47:48AM -0400, Konrad Rzeszutek Wilk wrote:
> > Git commit 90797e6d1ec0dfde6ba62a48b9ee3803887d6ed4
> > ("drm/i915: create compact dma scatter lists for gem objects") makes
> > certain assumptions about the under laying DMA API that are not always
> > correct.
> > 
> > On a ThinkPad X230 with an Intel HD 4000 with Xen during the bootup
> > I see:
> > 
> > [drm:intel_pipe_set_base] *ERROR* pin & fence failed
> > [drm:intel_crtc_set_config] *ERROR* failed to set mode on [CRTC:3], err = -28
> > 
> > Bit of debugging traced it down to dma_map_sg failing (in
> > i915_gem_gtt_prepare_object) as some of the SG entries were huge (3MB).
> > 
> > That unfortunately are sizes that the SWIOTLB is incapable of handling -
> > the maximum it can handle is a an entry of 512KB of virtual contiguous
> > memory for its bounce buffer. (See IO_TLB_SEGSIZE).
> > 
> > Previous to the above mention git commit the SG entries were of 4KB, and
> > the code introduced by above git commit squashed the CPU contiguous PFNs
> > in one big virtual address provided to DMA API.
> > 
> > This patch is a simple semi-revert - were we emulate the old behavior
> > if we detect that SWIOTLB is online. If it is not online then we continue
> > on with the new compact scatter gather mechanism.
> > 
> > An alternative solution would be for the the '.get_pages' and the
> > i915_gem_gtt_prepare_object to retry with smaller max gap of the
> > amount of PFNs that can be combined together - but with this issue
> > discovered during rc7 that might be too risky.
> > 
> > Reported-and-Tested-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > CC: Chris Wilson <chris@chris-wilson.co.uk>
> > CC: Imre Deak <imre.deak@intel.com>
> > CC: Daniel Vetter <daniel.vetter@ffwll.ch>
> > CC: David Airlie <airlied@linux.ie>
> > CC: <dri-devel@lists.freedesktop.org>
> > Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> 
> Two things:

Hey Daniel,

> 
> - SWIOTLB usage should seriously blow up all over the place in drm/i915.
>   We really rely on the everywhere else true fact that the pages and their
>   dma mapping point at the same backing storage.

It works. As in, it seems to work for a normal desktop user. I don't
see much dma_sync_* sprinkled around drm/i915, so I would expect some
issues to be hit as well - but at first glance, using it on a laptop,
it looks OK.
 
> - How is this solved elsewhere when constructing sg tables? Or are we
>   really the only guys who try to construct such big sg entries? I
>   expected somewhat that the dma mapping backed would fill in the segment
>   limits accordingly, but I haven't found anything really on a quick
>   search.

The TTM layer (so radeon, nouveau) uses pci_alloc_coherent, which
constructs the dma-mapped pages. That allows it to produce
"SWIOTLB-approved" pages that won't need to go through
dma_map/dma_unmap, as they are already mapped and ready to go.

Coming back to your question - I think i915 is the only one I've
encountered.
> 
> 
> Cheers, Daniel
> 
> > ---
> >  drivers/gpu/drm/i915/i915_gem.c | 15 ++++++++++++---
> >  1 file changed, 12 insertions(+), 3 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> > index 970ad17..7045f45 100644
> > --- a/drivers/gpu/drm/i915/i915_gem.c
> > +++ b/drivers/gpu/drm/i915/i915_gem.c
> > @@ -1801,7 +1801,14 @@ i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
> >  			gfp |= __GFP_NORETRY | __GFP_NOWARN | __GFP_NO_KSWAPD;
> >  			gfp &= ~(__GFP_IO | __GFP_WAIT);
> >  		}
> > -
> > +#ifdef CONFIG_SWIOTLB
> > +		if (swiotlb_nr_tbl()) {
> > +			st->nents++;
> > +			sg_set_page(sg, page, PAGE_SIZE, 0);
> > +			sg = sg_next(sg);
> > +			continue;
> > +		}
> > +#endif
> >  		if (!i || page_to_pfn(page) != last_pfn + 1) {
> >  			if (i)
> >  				sg = sg_next(sg);
> > @@ -1812,8 +1819,10 @@ i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
> >  		}
> >  		last_pfn = page_to_pfn(page);
> >  	}
> > -
> > -	sg_mark_end(sg);
> > +#ifdef CONFIG_SWIOTLB
> > +	if (!swiotlb_nr_tbl())
> > +#endif
> > +		sg_mark_end(sg);
> >  	obj->pages = st;
> >  
> >  	if (i915_gem_object_needs_bit17_swizzle(obj))
> > -- 
> > 1.8.1.4
> > 
> 
> -- 
> Daniel Vetter
> Software Engineer, Intel Corporation
> +41 (0) 79 365 57 48 - http://blog.ffwll.ch
Daniel Vetter June 24, 2013, 6:26 p.m. UTC | #3
On Mon, Jun 24, 2013 at 7:32 PM, Konrad Rzeszutek Wilk
<konrad.wilk@oracle.com> wrote:
> On Mon, Jun 24, 2013 at 07:09:12PM +0200, Daniel Vetter wrote:
>> On Mon, Jun 24, 2013 at 11:47:48AM -0400, Konrad Rzeszutek Wilk wrote:
>> > Git commit 90797e6d1ec0dfde6ba62a48b9ee3803887d6ed4
>> > ("drm/i915: create compact dma scatter lists for gem objects") makes
>> > certain assumptions about the under laying DMA API that are not always
>> > correct.
>> >
>> > On a ThinkPad X230 with an Intel HD 4000 with Xen during the bootup
>> > I see:
>> >
>> > [drm:intel_pipe_set_base] *ERROR* pin & fence failed
>> > [drm:intel_crtc_set_config] *ERROR* failed to set mode on [CRTC:3], err = -28
>> >
>> > Bit of debugging traced it down to dma_map_sg failing (in
>> > i915_gem_gtt_prepare_object) as some of the SG entries were huge (3MB).
>> >
>> > That unfortunately are sizes that the SWIOTLB is incapable of handling -
>> > the maximum it can handle is a an entry of 512KB of virtual contiguous
>> > memory for its bounce buffer. (See IO_TLB_SEGSIZE).
>> >
>> > Previous to the above mention git commit the SG entries were of 4KB, and
>> > the code introduced by above git commit squashed the CPU contiguous PFNs
>> > in one big virtual address provided to DMA API.
>> >
>> > This patch is a simple semi-revert - were we emulate the old behavior
>> > if we detect that SWIOTLB is online. If it is not online then we continue
>> > on with the new compact scatter gather mechanism.
>> >
>> > An alternative solution would be for the the '.get_pages' and the
>> > i915_gem_gtt_prepare_object to retry with smaller max gap of the
>> > amount of PFNs that can be combined together - but with this issue
>> > discovered during rc7 that might be too risky.
>> >
>> > Reported-and-Tested-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
>> > CC: Chris Wilson <chris@chris-wilson.co.uk>
>> > CC: Imre Deak <imre.deak@intel.com>
>> > CC: Daniel Vetter <daniel.vetter@ffwll.ch>
>> > CC: David Airlie <airlied@linux.ie>
>> > CC: <dri-devel@lists.freedesktop.org>
>> > Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
>>
>> Two things:
>
> Hey Daniel,
>
>>
>> - SWIOTLB usage should seriously blow up all over the place in drm/i915.
>>   We really rely on the everywhere else true fact that the pages and their
>>   dma mapping point at the same backing storage.
>
> It works. As in, it seems to work for just a normal desktop user. I don't
> see much of dma_sync_* sprinkled around the drm/i915 so I would think that
> there are some issues would be hit as well - but at the first glance
> when using it on a laptop it looks OK.

Yeah, we have a pretty serious case of "roll our own coherency stuff".
The biggest reason is that for a long time i915.ko didn't care one bit
about iommus, and the thing we care about (flushing cpu caches for
dma) isn't supported on x86, since on x86 every dma is coherent (well,
not quite, but we don't have support for the exceptions). I think
longer-term it would make sense to move the clflushing we're doing into
the dma layer.

>> - How is this solved elsewhere when constructing sg tables? Or are we
>>   really the only guys who try to construct such big sg entries? I
>>   expected somewhat that the dma mapping backed would fill in the segment
>>   limits accordingly, but I haven't found anything really on a quick
>>   search.
>
> The TTM layer (so radeon, nouveau) uses pci_alloc_coherent which will
> construct the dma mapped pages. That allows it to construct "SWIOTLB-approved"
> pages that won't need to go through dma_map/dma_unmap as they are
> already mapped and ready to go.
>
> Coming back to your question - I think that i915 is the one that I've
> encountered.

That's a bit surprising. With dma_buf, graphics people will use sg
tables much more (there's even a nice sg_alloc_table_from_pages helper
to construct them), and those sg tables tend to have large segments. I
guess we need a more generic solution here ...

For now I guess we can live with your CONFIG_SWIOTLB hack.
-Daniel



>>
>>
>> Cheers, Daniel
>>
>> > ---
>> >  drivers/gpu/drm/i915/i915_gem.c | 15 ++++++++++++---
>> >  1 file changed, 12 insertions(+), 3 deletions(-)
>> >
>> > diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
>> > index 970ad17..7045f45 100644
>> > --- a/drivers/gpu/drm/i915/i915_gem.c
>> > +++ b/drivers/gpu/drm/i915/i915_gem.c
>> > @@ -1801,7 +1801,14 @@ i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
>> >                     gfp |= __GFP_NORETRY | __GFP_NOWARN | __GFP_NO_KSWAPD;
>> >                     gfp &= ~(__GFP_IO | __GFP_WAIT);
>> >             }
>> > -
>> > +#ifdef CONFIG_SWIOTLB
>> > +           if (swiotlb_nr_tbl()) {
>> > +                   st->nents++;
>> > +                   sg_set_page(sg, page, PAGE_SIZE, 0);
>> > +                   sg = sg_next(sg);
>> > +                   continue;
>> > +           }
>> > +#endif
>> >             if (!i || page_to_pfn(page) != last_pfn + 1) {
>> >                     if (i)
>> >                             sg = sg_next(sg);
>> > @@ -1812,8 +1819,10 @@ i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
>> >             }
>> >             last_pfn = page_to_pfn(page);
>> >     }
>> > -
>> > -   sg_mark_end(sg);
>> > +#ifdef CONFIG_SWIOTLB
>> > +   if (!swiotlb_nr_tbl())
>> > +#endif
>> > +           sg_mark_end(sg);
>> >     obj->pages = st;
>> >
>> >     if (i915_gem_object_needs_bit17_swizzle(obj))
>> > --
>> > 1.8.1.4
>> >
>>
>> --
>> Daniel Vetter
>> Software Engineer, Intel Corporation
>> +41 (0) 79 365 57 48 - http://blog.ffwll.ch
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> http://lists.freedesktop.org/mailman/listinfo/dri-devel



--
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch
Daniel Vetter June 24, 2013, 6:30 p.m. UTC | #4
On Mon, Jun 24, 2013 at 11:47:48AM -0400, Konrad Rzeszutek Wilk wrote:
> Git commit 90797e6d1ec0dfde6ba62a48b9ee3803887d6ed4
> ("drm/i915: create compact dma scatter lists for gem objects") makes
> certain assumptions about the under laying DMA API that are not always
> correct.
> 
> On a ThinkPad X230 with an Intel HD 4000 with Xen during the bootup
> I see:
> 
> [drm:intel_pipe_set_base] *ERROR* pin & fence failed
> [drm:intel_crtc_set_config] *ERROR* failed to set mode on [CRTC:3], err = -28
> 
> Bit of debugging traced it down to dma_map_sg failing (in
> i915_gem_gtt_prepare_object) as some of the SG entries were huge (3MB).
> 
> That unfortunately are sizes that the SWIOTLB is incapable of handling -
> the maximum it can handle is a an entry of 512KB of virtual contiguous
> memory for its bounce buffer. (See IO_TLB_SEGSIZE).
> 
> Previous to the above mention git commit the SG entries were of 4KB, and
> the code introduced by above git commit squashed the CPU contiguous PFNs
> in one big virtual address provided to DMA API.
> 
> This patch is a simple semi-revert - were we emulate the old behavior
> if we detect that SWIOTLB is online. If it is not online then we continue
> on with the new compact scatter gather mechanism.
> 
> An alternative solution would be for the the '.get_pages' and the
> i915_gem_gtt_prepare_object to retry with smaller max gap of the
> amount of PFNs that can be combined together - but with this issue
> discovered during rc7 that might be too risky.
> 
> Reported-and-Tested-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> CC: Chris Wilson <chris@chris-wilson.co.uk>
> CC: Imre Deak <imre.deak@intel.com>
> CC: Daniel Vetter <daniel.vetter@ffwll.ch>
> CC: David Airlie <airlied@linux.ie>
> CC: <dri-devel@lists.freedesktop.org>
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

Queued for -next (with cc: stable), thanks for the patch.
-Daniel
Konrad Rzeszutek Wilk June 24, 2013, 6:34 p.m. UTC | #5
On Mon, Jun 24, 2013 at 08:26:18PM +0200, Daniel Vetter wrote:
> On Mon, Jun 24, 2013 at 7:32 PM, Konrad Rzeszutek Wilk
> <konrad.wilk@oracle.com> wrote:
> > On Mon, Jun 24, 2013 at 07:09:12PM +0200, Daniel Vetter wrote:
> >> On Mon, Jun 24, 2013 at 11:47:48AM -0400, Konrad Rzeszutek Wilk wrote:
> >> > Git commit 90797e6d1ec0dfde6ba62a48b9ee3803887d6ed4
> >> > ("drm/i915: create compact dma scatter lists for gem objects") makes
> >> > certain assumptions about the under laying DMA API that are not always
> >> > correct.
> >> >
> >> > On a ThinkPad X230 with an Intel HD 4000 with Xen during the bootup
> >> > I see:
> >> >
> >> > [drm:intel_pipe_set_base] *ERROR* pin & fence failed
> >> > [drm:intel_crtc_set_config] *ERROR* failed to set mode on [CRTC:3], err = -28
> >> >
> >> > Bit of debugging traced it down to dma_map_sg failing (in
> >> > i915_gem_gtt_prepare_object) as some of the SG entries were huge (3MB).
> >> >
> >> > That unfortunately are sizes that the SWIOTLB is incapable of handling -
> >> > the maximum it can handle is a an entry of 512KB of virtual contiguous
> >> > memory for its bounce buffer. (See IO_TLB_SEGSIZE).
> >> >
> >> > Previous to the above mention git commit the SG entries were of 4KB, and
> >> > the code introduced by above git commit squashed the CPU contiguous PFNs
> >> > in one big virtual address provided to DMA API.
> >> >
> >> > This patch is a simple semi-revert - were we emulate the old behavior
> >> > if we detect that SWIOTLB is online. If it is not online then we continue
> >> > on with the new compact scatter gather mechanism.
> >> >
> >> > An alternative solution would be for the the '.get_pages' and the
> >> > i915_gem_gtt_prepare_object to retry with smaller max gap of the
> >> > amount of PFNs that can be combined together - but with this issue
> >> > discovered during rc7 that might be too risky.
> >> >
> >> > Reported-and-Tested-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> >> > CC: Chris Wilson <chris@chris-wilson.co.uk>
> >> > CC: Imre Deak <imre.deak@intel.com>
> >> > CC: Daniel Vetter <daniel.vetter@ffwll.ch>
> >> > CC: David Airlie <airlied@linux.ie>
> >> > CC: <dri-devel@lists.freedesktop.org>
> >> > Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> >>
> >> Two things:
> >
> > Hey Daniel,
> >
> >>
> >> - SWIOTLB usage should seriously blow up all over the place in drm/i915.
> >>   We really rely on the everywhere else true fact that the pages and their
> >>   dma mapping point at the same backing storage.
> >
> > It works. As in, it seems to work for just a normal desktop user. I don't
> > see much of dma_sync_* sprinkled around the drm/i915 so I would think that
> > there are some issues would be hit as well - but at the first glance
> > when using it on a laptop it looks OK.
> 
> Yeah, we have a pretty serious case of "roll our own coherency stuff".
> The biggest reason is that for a long time i915.ko didn't care one bit
> about iommus, and the thing we care about (flushing cpu caches for
> dma) isn't supported on x86 since x86 every dma is coherent (well, not
> quite, but we don't have support for it). I think longer-term it would
> make sense to move the clfushing we're doing into the dma layer.
> 
> >> - How is this solved elsewhere when constructing sg tables? Or are we
> >>   really the only guys who try to construct such big sg entries? I
> >>   expected somewhat that the dma mapping backed would fill in the segment
> >>   limits accordingly, but I haven't found anything really on a quick
> >>   search.
> >
> > The TTM layer (so radeon, nouveau) uses pci_alloc_coherent which will
> > construct the dma mapped pages. That allows it to construct "SWIOTLB-approved"
> > pages that won't need to go through dma_map/dma_unmap as they are
> > already mapped and ready to go.
> >
> > Coming back to your question - I think that i915 is the one that I've
> > encountered.
> 
> That's a bit surprising. With dma_buf graphics people will use sg
> tables much more (there's even a nice sg_alloc_table_from_pages helper
> to construct them), and those sg tables tend to have large segments. I
> guess we need some more generic solution here ...

Yes. I don't grok the full picture yet, so I am not sure how to help with
this right now. Is there a roadmap or wiki on how this was envisioned?
> 
> For now I guess we can live with your CONFIG_SWIOTLB hack.
> -Daniel

OK, I read that as an Acked-by. Should I send the patch to Dave Airlie
in a GIT PULL, or some other way to make it onto the v3.10-rc7 train?

> 
> 
> 
> >>
> >>
> >> Cheers, Daniel
> >>
> >> > ---
> >> >  drivers/gpu/drm/i915/i915_gem.c | 15 ++++++++++++---
> >> >  1 file changed, 12 insertions(+), 3 deletions(-)
> >> >
> >> > diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> >> > index 970ad17..7045f45 100644
> >> > --- a/drivers/gpu/drm/i915/i915_gem.c
> >> > +++ b/drivers/gpu/drm/i915/i915_gem.c
> >> > @@ -1801,7 +1801,14 @@ i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
> >> >                     gfp |= __GFP_NORETRY | __GFP_NOWARN | __GFP_NO_KSWAPD;
> >> >                     gfp &= ~(__GFP_IO | __GFP_WAIT);
> >> >             }
> >> > -
> >> > +#ifdef CONFIG_SWIOTLB
> >> > +           if (swiotlb_nr_tbl()) {
> >> > +                   st->nents++;
> >> > +                   sg_set_page(sg, page, PAGE_SIZE, 0);
> >> > +                   sg = sg_next(sg);
> >> > +                   continue;
> >> > +           }
> >> > +#endif
> >> >             if (!i || page_to_pfn(page) != last_pfn + 1) {
> >> >                     if (i)
> >> >                             sg = sg_next(sg);
> >> > @@ -1812,8 +1819,10 @@ i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
> >> >             }
> >> >             last_pfn = page_to_pfn(page);
> >> >     }
> >> > -
> >> > -   sg_mark_end(sg);
> >> > +#ifdef CONFIG_SWIOTLB
> >> > +   if (!swiotlb_nr_tbl())
> >> > +#endif
> >> > +           sg_mark_end(sg);
> >> >     obj->pages = st;
> >> >
> >> >     if (i915_gem_object_needs_bit17_swizzle(obj))
> >> > --
> >> > 1.8.1.4
> >> >
> >>
> >> --
> >> Daniel Vetter
> >> Software Engineer, Intel Corporation
> >> +41 (0) 79 365 57 48 - http://blog.ffwll.ch
> > _______________________________________________
> > dri-devel mailing list
> > dri-devel@lists.freedesktop.org
> > http://lists.freedesktop.org/mailman/listinfo/dri-devel
> 
> 
> 
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> +41 (0) 79 365 57 48 - http://blog.ffwll.ch
Dave Airlie June 24, 2013, 8:52 p.m. UTC | #6
On Tue, Jun 25, 2013 at 4:34 AM, Konrad Rzeszutek Wilk
<konrad.wilk@oracle.com> wrote:
> On Mon, Jun 24, 2013 at 08:26:18PM +0200, Daniel Vetter wrote:
>> On Mon, Jun 24, 2013 at 7:32 PM, Konrad Rzeszutek Wilk
>> <konrad.wilk@oracle.com> wrote:
>> > On Mon, Jun 24, 2013 at 07:09:12PM +0200, Daniel Vetter wrote:
>> >> On Mon, Jun 24, 2013 at 11:47:48AM -0400, Konrad Rzeszutek Wilk wrote:
>> >> > Git commit 90797e6d1ec0dfde6ba62a48b9ee3803887d6ed4
>> >> > ("drm/i915: create compact dma scatter lists for gem objects") makes
>> >> > certain assumptions about the under laying DMA API that are not always
>> >> > correct.
>> >> >
>> >> > On a ThinkPad X230 with an Intel HD 4000 with Xen during the bootup
>> >> > I see:
>> >> >
>> >> > [drm:intel_pipe_set_base] *ERROR* pin & fence failed
>> >> > [drm:intel_crtc_set_config] *ERROR* failed to set mode on [CRTC:3], err = -28
>> >> >
>> >> > Bit of debugging traced it down to dma_map_sg failing (in
>> >> > i915_gem_gtt_prepare_object) as some of the SG entries were huge (3MB).
>> >> >
>> >> > That unfortunately are sizes that the SWIOTLB is incapable of handling -
>> >> > the maximum it can handle is a an entry of 512KB of virtual contiguous
>> >> > memory for its bounce buffer. (See IO_TLB_SEGSIZE).
>> >> >
>> >> > Previous to the above mention git commit the SG entries were of 4KB, and
>> >> > the code introduced by above git commit squashed the CPU contiguous PFNs
>> >> > in one big virtual address provided to DMA API.
>> >> >
>> >> > This patch is a simple semi-revert - were we emulate the old behavior
>> >> > if we detect that SWIOTLB is online. If it is not online then we continue
>> >> > on with the new compact scatter gather mechanism.
>> >> >
>> >> > An alternative solution would be for the the '.get_pages' and the
>> >> > i915_gem_gtt_prepare_object to retry with smaller max gap of the
>> >> > amount of PFNs that can be combined together - but with this issue
>> >> > discovered during rc7 that might be too risky.
>> >> >
>> >> > Reported-and-Tested-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
>> >> > CC: Chris Wilson <chris@chris-wilson.co.uk>
>> >> > CC: Imre Deak <imre.deak@intel.com>
>> >> > CC: Daniel Vetter <daniel.vetter@ffwll.ch>
>> >> > CC: David Airlie <airlied@linux.ie>
>> >> > CC: <dri-devel@lists.freedesktop.org>
>> >> > Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
>> >>
>> >> Two things:
>> >
>> > Hey Daniel,
>> >
>> >>
>> >> - SWIOTLB usage should seriously blow up all over the place in drm/i915.
>> >>   We really rely on the everywhere else true fact that the pages and their
>> >>   dma mapping point at the same backing storage.
>> >
>> > It works. As in, it seems to work for just a normal desktop user. I don't
>> > see much of dma_sync_* sprinkled around the drm/i915 so I would think that
>> > there are some issues would be hit as well - but at the first glance
>> > when using it on a laptop it looks OK.
>>
>> Yeah, we have a pretty serious case of "roll our own coherency stuff".
>> The biggest reason is that for a long time i915.ko didn't care one bit
>> about iommus, and the thing we care about (flushing cpu caches for
>> dma) isn't supported on x86 since x86 every dma is coherent (well, not
>> quite, but we don't have support for it). I think longer-term it would
>> make sense to move the clfushing we're doing into the dma layer.
>>
>> >> - How is this solved elsewhere when constructing sg tables? Or are we
>> >>   really the only guys who try to construct such big sg entries? I
>> >>   expected somewhat that the dma mapping backed would fill in the segment
>> >>   limits accordingly, but I haven't found anything really on a quick
>> >>   search.
>> >
>> > The TTM layer (so radeon, nouveau) uses pci_alloc_coherent which will
>> > construct the dma mapped pages. That allows it to construct "SWIOTLB-approved"
>> > pages that won't need to go through dma_map/dma_unmap as they are
>> > already mapped and ready to go.
>> >
>> > Coming back to your question - I think that i915 is the one that I've
>> > encountered.
>>
>> That's a bit surprising. With dma_buf graphics people will use sg
>> tables much more (there's even a nice sg_alloc_table_from_pages helper
>> to construct them), and those sg tables tend to have large segments. I
>> guess we need some more generic solution here ...
>
> Yes. I don't grok the full picture yet so I am not sure how to help with
> this right now. Is there a roadmap or Wiki on how this was envisioned?
>>
>> For now I guess we can live with your CONFIG_SWIOTLB hack.
>> -Daniel
>
> OK, I read that as an Ack-ed-by. Should I send the patch to Dave Airlie
> in a GIT PULL or some other way to make it on the v3.10-rc7 train?

I don't like this at all; I'll accept the patch on the condition that
you investigate further :-)

If you are using swiotlb on i915, things should break. I know I've
investigated problems before where swiotlb was being incorrectly used
due to page masks or other issues. Shouldn't you be passing through
using the real iommu?

Dave.
Konrad Rzeszutek Wilk June 24, 2013, 11:18 p.m. UTC | #7
Dave Airlie <airlied@gmail.com> wrote:

>On Tue, Jun 25, 2013 at 4:34 AM, Konrad Rzeszutek Wilk
><konrad.wilk@oracle.com> wrote:
>> On Mon, Jun 24, 2013 at 08:26:18PM +0200, Daniel Vetter wrote:
>>> On Mon, Jun 24, 2013 at 7:32 PM, Konrad Rzeszutek Wilk
>>> <konrad.wilk@oracle.com> wrote:
>>> > On Mon, Jun 24, 2013 at 07:09:12PM +0200, Daniel Vetter wrote:
>>> >> On Mon, Jun 24, 2013 at 11:47:48AM -0400, Konrad Rzeszutek Wilk wrote:
>>> >> > Git commit 90797e6d1ec0dfde6ba62a48b9ee3803887d6ed4
>>> >> > ("drm/i915: create compact dma scatter lists for gem objects") makes
>>> >> > certain assumptions about the underlying DMA API that are not always
>>> >> > correct.
>>> >> >
>>> >> > On a ThinkPad X230 with an Intel HD 4000 with Xen, during bootup I see:
>>> >> >
>>> >> > [drm:intel_pipe_set_base] *ERROR* pin & fence failed
>>> >> > [drm:intel_crtc_set_config] *ERROR* failed to set mode on [CRTC:3], err = -28
>>> >> >
>>> >> > A bit of debugging traced it down to dma_map_sg failing (in
>>> >> > i915_gem_gtt_prepare_object) as some of the SG entries were huge (3MB).
>>> >> >
>>> >> > Those are unfortunately sizes that the SWIOTLB is incapable of handling -
>>> >> > the maximum it can handle is an entry of 512KB of virtually contiguous
>>> >> > memory for its bounce buffer. (See IO_TLB_SEGSIZE.)
>>> >> >
>>> >> > Prior to the above-mentioned git commit the SG entries were 4KB each, and
>>> >> > the code introduced by that commit squashed CPU-contiguous PFNs
>>> >> > into one big virtual address provided to the DMA API.
>>> >> >
>>> >> > This patch is a simple semi-revert - where we emulate the old behavior
>>> >> > if we detect that SWIOTLB is online. If it is not online then we continue
>>> >> > on with the new compact scatter gather mechanism.
>>> >> >
>>> >> > An alternative solution would be for '.get_pages' and
>>> >> > i915_gem_gtt_prepare_object to retry with a smaller maximum number of
>>> >> > PFNs that can be combined together - but with this issue
>>> >> > discovered during rc7 that might be too risky.
>>> >> >
>>> >> > Reported-and-Tested-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
>>> >> > CC: Chris Wilson <chris@chris-wilson.co.uk>
>>> >> > CC: Imre Deak <imre.deak@intel.com>
>>> >> > CC: Daniel Vetter <daniel.vetter@ffwll.ch>
>>> >> > CC: David Airlie <airlied@linux.ie>
>>> >> > CC: <dri-devel@lists.freedesktop.org>
>>> >> > Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
>>> >>
>>> >> Two things:
>>> >
>>> > Hey Daniel,
>>> >
>>> >>
>>> >> - SWIOTLB usage should seriously blow up all over the place in drm/i915.
>>> >>   We really rely on the fact - true everywhere else - that the pages and
>>> >>   their dma mapping point at the same backing storage.
>>> >
>>> > It works. As in, it seems to work for just a normal desktop user. I don't
>>> > see much dma_sync_* sprinkled around drm/i915, so I would have thought
>>> > some issues would be hit as well - but at first glance, when using it
>>> > on a laptop, it looks OK.
>>>
>>> Yeah, we have a pretty serious case of "roll our own coherency stuff".
>>> The biggest reason is that for a long time i915.ko didn't care one bit
>>> about iommus, and the thing we care about (flushing cpu caches for
>>> dma) isn't supported on x86, since on x86 every dma is coherent (well, not
>>> quite, but we don't have support for it). I think longer-term it would
>>> make sense to move the clflushing we're doing into the dma layer.
>>>
>>> >> - How is this solved elsewhere when constructing sg tables? Or are we
>>> >>   really the only guys who try to construct such big sg entries? I
>>> >>   expected somewhat that the dma mapping backend would fill in the segment
>>> >>   limits accordingly, but I haven't found anything on a quick search.
>>> >
>>> > The TTM layer (so radeon, nouveau) uses pci_alloc_coherent, which will
>>> > construct the dma-mapped pages. That allows it to construct "SWIOTLB-approved"
>>> > pages that won't need to go through dma_map/dma_unmap as they are
>>> > already mapped and ready to go.
>>> >
>>> > Coming back to your question - I think that i915 is the only one that I've
>>> > encountered.
>>>
>>> That's a bit surprising. With dma_buf, graphics people will use sg
>>> tables much more (there's even a nice sg_alloc_table_from_pages helper
>>> to construct them), and those sg tables tend to have large segments. I
>>> guess we need some more generic solution here ...
>>
>> Yes. I don't grok the full picture yet so I am not sure how to help with
>> this right now. Is there a roadmap or Wiki on how this was envisioned?
>>>
>>> For now I guess we can live with your CONFIG_SWIOTLB hack.
>>> -Daniel
>>
>> OK, I read that as an Acked-by. Should I send the patch to Dave Airlie
>> in a GIT PULL or some other way to make it onto the v3.10-rc7 train?
>
> I don't like this at all, I'll accept the patch on the condition you
> investigate further :-)
>
> If you are using swiotlb on i915 things should break. I know I've
> investigated problems before where swiotlb was being incorrectly used
> due to page masks or other issues. Shouldn't you be passing through
> using the real iommu?
>
> Dave.

Hey Dave,

Of course I will investigate.

The SWIOTLB is unfortunately used because it is a fallback (and I am the maintainer of it), and if a real IOMMU is activated it can be mitigated differently. When you say 'passed through', do you mean in terms of an IOMMU in a guest? There is no IOMMU inside a guest when passing in a PCI device.

Let me start a new thread on this when I have gotten my head wrapped around dma-buf.

Thanks, and sorry for getting to this so late in the cycle. A new laptop, and playing with it, is what triggered me finding this.
Dave Airlie June 25, 2013, 12:06 a.m. UTC | #8
On Tue, Jun 25, 2013 at 9:18 AM, Konrad Rzeszutek Wilk
<konrad.wilk@oracle.com> wrote:
> Dave Airlie <airlied@gmail.com> wrote:
> [...]
>>I don't like this at all, I'll accept the patch on the condition you
>>investigate further :-)
>>
>>If you are using swiotlb on i915 things should break, I know I've
>>investigated problems before where swiotlb was being incorrectly used
>>due to page masks or other issues. Shouldn't you be passing through
>>using the real iommu?
>>
>>Dave.
>
> Hey Dave
> Of course I will investigate.
>
> The SWIOTLB is unfortunately used because it is a fallback (and I am the maintainer of it) and if a real IOMMU is activated it can be mitigated differently. When you say 'passed through'  you mean in terms of an IOMMU in a guest? There are no IOMMU inside a guest when passing in a PCI device.

I just don't understand why anyone would see swiotlb in this case; the
host would be using the hw iommu, so why would the guest then need to use
a bounce buffer?

>
> Let me start on a new thread on this when I have gotten my head wrapped around dma  buf.
>
> Thanks and sorry for getting to this so late in the cycle.  New laptop and playing with it and that triggered me finding this.

My main worry is that this will regress things for people with swiotlb
enabled even if the gpu isn't using it. Granted, it won't be any slower
than before, so it's probably not something I care about now if I know
you'll narrow down why all this is necessary later.

Dave.
Konrad Rzeszutek Wilk June 26, 2013, 2:03 p.m. UTC | #9
> >>Dave.
> >
> > Hey Dave
> > Of course I will investigate.
> >
> > The SWIOTLB is unfortunately used because it is a fallback (and I am the maintainer of it) and if a real IOMMU is activated it can be mitigated differently. When you say 'passed through'  you mean in terms of an IOMMU in a guest? There are no IOMMU inside a guest when passing in a PCI device.
> 
> I just don't understand why anyone would see swiotlb in this case, the
> host would be using hw iommu, why would the guest then need to use a
> bounce buffer?

Hey Dave,

Sorry for the late response.

The guest has no concept of the HW IOMMU, as it is not 'emulated' and there
is no plumbing for it to interface with the host's IOMMU. That means that if
the guest has more than 4GB it will automatically turn on SWIOTLB (b/c hey, it
might have 32-bit-capable devices and it needs to bounce-buffer data that
lives above 4GB).

Normally the SWIOTLB bounce buffers won't be used unless:
 a) the pages are not contiguous. This is not the case for HVM guests (each
   _thinks_ its PFNs are always contiguous - albeit in reality they might
   not be, but it is the job of the host EPT/IOMMU to construct this fake
   view), but a Xen PV guest - which has a mapping of PFNs -> machine
   addresses - _knows_ the real machine address of each PFN. And as guests
   are created from random swaths of memory, some of the PFNs might be
   contiguous but some might not. In other words, for RAM regions:

      pfn_to_mfn(pfn + 1) != (pfn_to_mfn(pfn) + 1)

   mfn is the real physical address bit-shifted by PAGE_SHIFT. For an HVM
   guest (where pfn_to_mfn returns the pfn value), the above formula becomes:

       pfn + 1 == pfn + 1

   If this does not make any sense to you - that is OK :-) I can try to
   explain more, but it might just put you to sleep - in which case just
   think: "Xen PV CPU physical addresses are not the same as the bus (DMA)
   addresses" - which means it is similar to Sparc or other platforms
   where CPU addresses and PCI bus addresses are not identical.

 b) the pages are not page aligned. Less of an issue, but it can still
   come up.

 c) the DMA mask of the PCI device is 32-bit (common with USB devices, not
   so often with graphics cards). But hey - there are quirks that sometimes
   make a graphics card DMA only up to a certain bitness.

 d) the user provided 'swiotlb=force' and now everything is going through
   the bounce buffer.
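The contiguity condition in (a) can be illustrated with a tiny userspace model (a sketch, not kernel code: `hvm_pfn_to_mfn`, `pv_pfn_to_mfn`, and the `p2m` table below are made up, the latter standing in for a PV guest's real P2M permutation):

```c
#include <stddef.h>

/* Hypothetical identity map, modeling how an HVM guest sees its pfns. */
static unsigned long hvm_pfn_to_mfn(unsigned long pfn)
{
    return pfn;
}

/* Hypothetical scrambled map, modeling a Xen PV guest's P2M table:
 * consecutive pfns map to non-consecutive machine frames. */
static unsigned long pv_pfn_to_mfn(unsigned long pfn)
{
    static const unsigned long p2m[8] = { 7, 2, 5, 0, 6, 1, 4, 3 };
    return p2m[pfn % 8];
}

/* The test from the mail: pfn and pfn+1 can share one coalesced sg
 * segment only if their machine frames are adjacent. */
static int mfns_contiguous(unsigned long (*p2m)(unsigned long),
                           unsigned long pfn)
{
    return p2m(pfn + 1) == p2m(pfn) + 1;
}
```

With the identity (HVM-like) map every pair passes the check, so coalescing is safe; with the scrambled (PV-like) map essentially no pair does, which is why the old 4KB-per-entry layout never tripped over this.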

The nice solution is to have a virtualization aware version of IOMMU in the
guest that will disable SWIOTLB (or use it only in fallback). The AMD folks
were thinking about that for KVM, but nothing came out of it. The Citrix
folks are looking at that for Xen, but nothing yet (though I did see some
RFC patches).

> 
> >
> > Let me start on a new thread on this when I have gotten my head wrapped around dma  buf.

Hadn't gotten to that yet.
> >
> > Thanks and sorry for getting to this so late in the cycle.  New laptop and playing with it and that triggered me finding this.
> 
> My main worry is this will regress things for people with swiotlb
> enabled even if the gpu isn't using it, granted it won't be any slower
> than before so probably not something I care about now if I know
> you'll narrow down why all this is necessary later.

I am not sure how it would? The patch makes i915 construct the
scatter-gather list as it did in v3.9, so it _should_ not impact them
negatively. I was trying to follow the spirit of doing a partial revert
as closely as possible so that the risk of regression would be nil.

To summarize, I think (and please correct me if I am mistaken):
 - You or Daniel are thinking to take this patch for v3.10 or v3.11
   (and if in v3.11 then tack on stable@vger.kernel.org).

 - You will defer all SWIOTLB-related issues to me. In other words,
   if you see something that is i915 and swiotlb, you will happily shout
   "Konrad! Tag!" and wash your hands. Hopefully you can also send me
   some of the past bugs that you suspect are SWIOTLB related.

 - You expect me to look at dma-buf and figure out how it can coexist with
   SWIOTLB.

Sounds about right?
> 
> Dave.
diff mbox

Patch

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 970ad17..7045f45 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1801,7 +1801,14 @@  i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
 			gfp |= __GFP_NORETRY | __GFP_NOWARN | __GFP_NO_KSWAPD;
 			gfp &= ~(__GFP_IO | __GFP_WAIT);
 		}
-
+#ifdef CONFIG_SWIOTLB
+		if (swiotlb_nr_tbl()) {
+			st->nents++;
+			sg_set_page(sg, page, PAGE_SIZE, 0);
+			sg = sg_next(sg);
+			continue;
+		}
+#endif
 		if (!i || page_to_pfn(page) != last_pfn + 1) {
 			if (i)
 				sg = sg_next(sg);
@@ -1812,8 +1819,10 @@  i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
 		}
 		last_pfn = page_to_pfn(page);
 	}
-
-	sg_mark_end(sg);
+#ifdef CONFIG_SWIOTLB
+	if (!swiotlb_nr_tbl())
+#endif
+		sg_mark_end(sg);
 	obj->pages = st;
 
 	if (i915_gem_object_needs_bit17_swizzle(obj))
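
As a userspace illustration of what the patch changes (a sketch, not driver code: `PAGE_SZ`, `MAX_SEG`, and `largest_segment` are local stand-ins, with `MAX_SEG` playing the role of the IO_TLB_SEGSIZE-derived limit discussed above), the two loop behaviors can be modeled like this:

```c
#include <stddef.h>

#define PAGE_SZ 4096UL
/* Stand-in for the SWIOTLB bounce-buffer segment limit. */
#define MAX_SEG (512UL * 1024UL)

/* Model of the get_pages loop: either coalesce runs of contiguous pfns
 * into one sg segment, or (the SWIOTLB branch added by the patch) emit
 * one PAGE_SZ segment per page. Returns the largest segment produced. */
static unsigned long largest_segment(const unsigned long *pfns, size_t n,
                                     int coalesce)
{
    unsigned long cur = 0, max = 0;
    for (size_t i = 0; i < n; i++) {
        if (coalesce && i > 0 && pfns[i] == pfns[i - 1] + 1)
            cur += PAGE_SZ;      /* extend the current contiguous run */
        else
            cur = PAGE_SZ;       /* start a new segment */
        if (cur > max)
            max = cur;
    }
    return max;
}
```

A 3MB object backed by 768 contiguous pages yields a single 3MB segment when coalescing - well over the bounce-buffer limit, matching the err = -28 (-ENOSPC) seen in the log - while the per-page path never produces a segment larger than 4KB.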