
drm/i915: Fix DMA mapped scatterlist walks

Message ID: 20200909124457.296845-1-tvrtko.ursulin@linux.intel.com
State: New, archived
Series: drm/i915: Fix DMA mapped scatterlist walks

Commit Message

Tvrtko Ursulin Sept. 9, 2020, 12:44 p.m. UTC
From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

When walking DMA mapped scatterlists, sg_dma_len has to be used since it
can differ (due to coalescing) from the length of the backing store entry.

This also means we have to end the walk when encountering a zero-length
DMA entry and cannot rely on the normal sg list end marker.

Both issues have been present in theory for some time but were hidden by the
fact that the Intel IOMMU driver was never coalescing entries. As there are
ongoing efforts to change this, we need to start handling it.

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
References: 85d1225ec066 ("drm/i915: Introduce & use new lightweight SGL iterators")
References: b31144c0daa8 ("drm/i915: Micro-optimise gen6_ppgtt_insert_entries()")
Reported-by: Tom Murphy <murphyt7@tcd.ie>
Suggested-by: Tom Murphy <murphyt7@tcd.ie> # __sgt_iter
Suggested-by: Logan Gunthorpe <logang@deltatee.com> # __sgt_iter
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: Lu Baolu <baolu.lu@linux.intel.com>
---
 drivers/gpu/drm/i915/gt/gen6_ppgtt.c    |  6 +++---
 drivers/gpu/drm/i915/gt/gen8_ppgtt.c    | 17 ++++++++++-------
 drivers/gpu/drm/i915/gt/intel_gtt.h     |  2 +-
 drivers/gpu/drm/i915/i915_scatterlist.h | 12 ++++++++----
 4 files changed, 22 insertions(+), 15 deletions(-)
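
For illustration only (not part of the patch), the walk the commit message
describes looks roughly like the sketch below: the DMA side of a mapped
scatterlist is traversed with sg_dma_address()/sg_dma_len(), and a
zero-length DMA entry terminates the walk, since the mapped portion can be
shorter than the backing store once the IOMMU coalesces entries. The helper
name walk_dma_entries() is made up for this sketch.

#include <linux/scatterlist.h>

/*
 * Minimal sketch: walk the DMA view of a mapped sg_table. The DMA
 * mapping can be coalesced into fewer entries than the backing
 * store has, so a zero sg_dma_len() marks the end of the mapped
 * part; the normal sg list end marker only bounds the page view.
 */
static void walk_dma_entries(struct sg_table *sgt)
{
	struct scatterlist *sg;

	for (sg = sgt->sgl; sg; sg = sg_next(sg)) {
		unsigned int len = sg_dma_len(sg);

		if (!len)
			break; /* end of the DMA mapped portion */

		/* consume sg_dma_address(sg) .. sg_dma_address(sg) + len */
	}
}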

Comments

Chris Wilson Sept. 9, 2020, 1:03 p.m. UTC | #1
Quoting Tvrtko Ursulin (2020-09-09 13:44:57)
> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> 
> When walking DMA mapped scatterlists, sg_dma_len has to be used since it
> can differ (due to coalescing) from the length of the backing store entry.
> 
> This also means we have to end the walk when encountering a zero-length
> DMA entry and cannot rely on the normal sg list end marker.
> 
> Both issues have been present in theory for some time but were hidden by the
> fact that the Intel IOMMU driver was never coalescing entries. As there are
> ongoing efforts to change this, we need to start handling it.

Does this change anything if we were already feeding in coalesced
entries? I doubt we made all sg generators compact the lists though.

The change looks reasonable, it seems you have kept the dma and page
iterators distinct, which was the only worry that occurred to me.
-Chris
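
As an aside on the coalescing question above, a hedged illustration (not
from the thread): after mapping, the number of DMA segments can be smaller
than the number of backing store entries, which is exactly the situation
the patch starts to handle. The helper name map_and_report() is made up
for this sketch.

#include <linux/dma-mapping.h>
#include <linux/printk.h>
#include <linux/scatterlist.h>

/*
 * Hypothetical example: several discontiguous backing pages may come
 * back as a single DMA segment once the IOMMU coalesces them, so the
 * DMA view (sgt->nents) can end up shorter than the page view
 * (sgt->orig_nents).
 */
static int map_and_report(struct device *dev, struct sg_table *sgt)
{
	int ret;

	ret = dma_map_sgtable(dev, sgt, DMA_BIDIRECTIONAL, 0);
	if (ret)
		return ret;

	pr_debug("backing entries: %u, DMA segments: %u\n",
		 sgt->orig_nents, sgt->nents);

	return 0;
}
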
Tvrtko Ursulin Sept. 9, 2020, 1:43 p.m. UTC | #2
On 09/09/2020 14:03, Chris Wilson wrote:
> Quoting Tvrtko Ursulin (2020-09-09 13:44:57)
>> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>>
>> When walking DMA mapped scatterlists, sg_dma_len has to be used since it
>> can differ (due to coalescing) from the length of the backing store entry.
>>
>> This also means we have to end the walk when encountering a zero-length
>> DMA entry and cannot rely on the normal sg list end marker.
>>
>> Both issues have been present in theory for some time but were hidden by the
>> fact that the Intel IOMMU driver was never coalescing entries. As there are
>> ongoing efforts to change this, we need to start handling it.
> 
> Does this change anything if we were already feeding in coalesced
> entries? I doubt we made all sg generators compact the lists though.

I don't think so. My takeaway from this was the realization that page and
dma coalescing can be completely different (more so once the iommu
patches get in). I could not think of any other problems, as long as the
dma remapping fits into the number of available sg table entries we
allocated. It is just a bit weird, this duality of sg_table "views".

> The change looks reasonable, it seems you have kept the dma and page
> iterators distinct, which was the only worry that occurred to me.

It surprised me that we have two iterators, well, one and a half, with the
sgt_dma one being the half iterator. :)

Regards,

Tvrtko

P.S. Well, as I was writing this reply the full test results came in and
something is still broken, so scratch that...
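
On the "duality of sg_table views" mentioned above, a hedged sketch of the
two ways the same table can be read (written out with the generic
scatterlist helpers rather than the i915 iterators; the function name
show_both_views() is made up):

#include <linux/printk.h>
#include <linux/scatterlist.h>

/*
 * The same sg_table carries two views: the page (backing store) view,
 * bounded by orig_nents and sg->length, and the DMA view, bounded by
 * nents and sg_dma_len(). After IOMMU coalescing the two can differ
 * in both entry count and per-entry length.
 */
static void show_both_views(struct sg_table *sgt)
{
	struct scatterlist *sg;
	unsigned int i;

	/* Page view: one entry per backing store chunk. */
	for_each_sg(sgt->sgl, sg, sgt->orig_nents, i)
		pr_debug("page view: %u bytes\n", sg->length);

	/* DMA view: one entry per mapped DMA segment. */
	for_each_sg(sgt->sgl, sg, sgt->nents, i) {
		dma_addr_t addr = sg_dma_address(sg);

		pr_debug("dma view: %pad + %u\n", &addr, sg_dma_len(sg));
	}
}
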
Logan Gunthorpe Sept. 9, 2020, 7:29 p.m. UTC | #3
On 2020-09-09 6:44 a.m., Tvrtko Ursulin wrote:
> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> 
> When walking DMA mapped scatterlists, sg_dma_len has to be used since it
> can differ (due to coalescing) from the length of the backing store entry.
> 
> This also means we have to end the walk when encountering a zero-length
> DMA entry and cannot rely on the normal sg list end marker.
> 
> Both issues have been present in theory for some time but were hidden by the
> fact that the Intel IOMMU driver was never coalescing entries. As there are
> ongoing efforts to change this, we need to start handling it.
> 
> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> References: 85d1225ec066 ("drm/i915: Introduce & use new lightweight SGL iterators")
> References: b31144c0daa8 ("drm/i915: Micro-optimise gen6_ppgtt_insert_entries()")
> Reported-by: Tom Murphy <murphyt7@tcd.ie>
> Suggested-by: Tom Murphy <murphyt7@tcd.ie> # __sgt_iter
> Suggested-by: Logan Gunthorpe <logang@deltatee.com> # __sgt_iter
> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
> Cc: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Matthew Auld <matthew.auld@intel.com>
> Cc: Lu Baolu <baolu.lu@linux.intel.com>
> ---
>  drivers/gpu/drm/i915/gt/gen6_ppgtt.c    |  6 +++---
>  drivers/gpu/drm/i915/gt/gen8_ppgtt.c    | 17 ++++++++++-------
>  drivers/gpu/drm/i915/gt/intel_gtt.h     |  2 +-
>  drivers/gpu/drm/i915/i915_scatterlist.h | 12 ++++++++----
>  4 files changed, 22 insertions(+), 15 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/gt/gen6_ppgtt.c b/drivers/gpu/drm/i915/gt/gen6_ppgtt.c
> index fd0d24d28763..c0d17f87b00f 100644
> --- a/drivers/gpu/drm/i915/gt/gen6_ppgtt.c
> +++ b/drivers/gpu/drm/i915/gt/gen6_ppgtt.c
> @@ -131,17 +131,17 @@ static void gen6_ppgtt_insert_entries(struct i915_address_space *vm,
>  
>  	vaddr = kmap_atomic_px(i915_pt_entry(pd, act_pt));
>  	do {
> -		GEM_BUG_ON(iter.sg->length < I915_GTT_PAGE_SIZE);
> +		GEM_BUG_ON(sg_dma_len(iter.sg) < I915_GTT_PAGE_SIZE);
>  		vaddr[act_pte] = pte_encode | GEN6_PTE_ADDR_ENCODE(iter.dma);
>  
>  		iter.dma += I915_GTT_PAGE_SIZE;
>  		if (iter.dma == iter.max) {
>  			iter.sg = __sg_next(iter.sg);
> -			if (!iter.sg)
> +			if (!iter.sg || sg_dma_len(iter.sg) == 0)
>  				break;
>  
>  			iter.dma = sg_dma_address(iter.sg);
> -			iter.max = iter.dma + iter.sg->length;
> +			iter.max = iter.dma + sg_dma_len(iter.sg);
>  		}
>  
>  		if (++act_pte == GEN6_PTES) {
> diff --git a/drivers/gpu/drm/i915/gt/gen8_ppgtt.c b/drivers/gpu/drm/i915/gt/gen8_ppgtt.c
> index eb64f474a78c..0361b3dfdc72 100644
> --- a/drivers/gpu/drm/i915/gt/gen8_ppgtt.c
> +++ b/drivers/gpu/drm/i915/gt/gen8_ppgtt.c
> @@ -372,19 +372,19 @@ gen8_ppgtt_insert_pte(struct i915_ppgtt *ppgtt,
>  	pd = i915_pd_entry(pdp, gen8_pd_index(idx, 2));
>  	vaddr = kmap_atomic_px(i915_pt_entry(pd, gen8_pd_index(idx, 1)));
>  	do {
> -		GEM_BUG_ON(iter->sg->length < I915_GTT_PAGE_SIZE);
> +		GEM_BUG_ON(sg_dma_len(iter->sg) < I915_GTT_PAGE_SIZE);
>  		vaddr[gen8_pd_index(idx, 0)] = pte_encode | iter->dma;
>  
>  		iter->dma += I915_GTT_PAGE_SIZE;
>  		if (iter->dma >= iter->max) {
>  			iter->sg = __sg_next(iter->sg);
> -			if (!iter->sg) {
> +			if (!iter->sg || sg_dma_len(iter->sg) == 0) {
>  				idx = 0;
>  				break;
>  			}
>  
>  			iter->dma = sg_dma_address(iter->sg);
> -			iter->max = iter->dma + iter->sg->length;
> +			iter->max = iter->dma + sg_dma_len(iter->sg);
>  		}
>  
>  		if (gen8_pd_index(++idx, 0) == 0) {
> @@ -414,7 +414,7 @@ static void gen8_ppgtt_insert_huge(struct i915_vma *vma,
>  {
>  	const gen8_pte_t pte_encode = gen8_pte_encode(0, cache_level, flags);
>  	u64 start = vma->node.start;
> -	dma_addr_t rem = iter->sg->length;
> +	dma_addr_t rem = sg_dma_len(iter->sg);

Seems a little odd to me to be storing a length in a dma_addr_t. But
besides that small nit, this all makes sense to me.

Reviewed-by: Logan Gunthorpe <logang@deltatee.com>

Thanks,

Logan
Tvrtko Ursulin Sept. 10, 2020, 10:27 a.m. UTC | #4
On 09/09/2020 20:29, Logan Gunthorpe wrote:
> On 2020-09-09 6:44 a.m., Tvrtko Ursulin wrote:
>> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>>
>> When walking DMA mapped scatterlists, sg_dma_len has to be used since it
>> can differ (due to coalescing) from the length of the backing store entry.
>>
>> This also means we have to end the walk when encountering a zero-length
>> DMA entry and cannot rely on the normal sg list end marker.
>>
>> Both issues have been present in theory for some time but were hidden by the
>> fact that the Intel IOMMU driver was never coalescing entries. As there are
>> ongoing efforts to change this, we need to start handling it.
>>
>> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>> References: 85d1225ec066 ("drm/i915: Introduce & use new lightweight SGL iterators")
>> References: b31144c0daa8 ("drm/i915: Micro-optimise gen6_ppgtt_insert_entries()")
>> Reported-by: Tom Murphy <murphyt7@tcd.ie>
>> Suggested-by: Tom Murphy <murphyt7@tcd.ie> # __sgt_iter
>> Suggested-by: Logan Gunthorpe <logang@deltatee.com> # __sgt_iter
>> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
>> Cc: Chris Wilson <chris@chris-wilson.co.uk>
>> Cc: Matthew Auld <matthew.auld@intel.com>
>> Cc: Lu Baolu <baolu.lu@linux.intel.com>
>> ---
>>   drivers/gpu/drm/i915/gt/gen6_ppgtt.c    |  6 +++---
>>   drivers/gpu/drm/i915/gt/gen8_ppgtt.c    | 17 ++++++++++-------
>>   drivers/gpu/drm/i915/gt/intel_gtt.h     |  2 +-
>>   drivers/gpu/drm/i915/i915_scatterlist.h | 12 ++++++++----
>>   4 files changed, 22 insertions(+), 15 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/i915/gt/gen6_ppgtt.c b/drivers/gpu/drm/i915/gt/gen6_ppgtt.c
>> index fd0d24d28763..c0d17f87b00f 100644
>> --- a/drivers/gpu/drm/i915/gt/gen6_ppgtt.c
>> +++ b/drivers/gpu/drm/i915/gt/gen6_ppgtt.c
>> @@ -131,17 +131,17 @@ static void gen6_ppgtt_insert_entries(struct i915_address_space *vm,
>>   
>>   	vaddr = kmap_atomic_px(i915_pt_entry(pd, act_pt));
>>   	do {
>> -		GEM_BUG_ON(iter.sg->length < I915_GTT_PAGE_SIZE);
>> +		GEM_BUG_ON(sg_dma_len(iter.sg) < I915_GTT_PAGE_SIZE);
>>   		vaddr[act_pte] = pte_encode | GEN6_PTE_ADDR_ENCODE(iter.dma);
>>   
>>   		iter.dma += I915_GTT_PAGE_SIZE;
>>   		if (iter.dma == iter.max) {
>>   			iter.sg = __sg_next(iter.sg);
>> -			if (!iter.sg)
>> +			if (!iter.sg || sg_dma_len(iter.sg) == 0)
>>   				break;
>>   
>>   			iter.dma = sg_dma_address(iter.sg);
>> -			iter.max = iter.dma + iter.sg->length;
>> +			iter.max = iter.dma + sg_dma_len(iter.sg);
>>   		}
>>   
>>   		if (++act_pte == GEN6_PTES) {
>> diff --git a/drivers/gpu/drm/i915/gt/gen8_ppgtt.c b/drivers/gpu/drm/i915/gt/gen8_ppgtt.c
>> index eb64f474a78c..0361b3dfdc72 100644
>> --- a/drivers/gpu/drm/i915/gt/gen8_ppgtt.c
>> +++ b/drivers/gpu/drm/i915/gt/gen8_ppgtt.c
>> @@ -372,19 +372,19 @@ gen8_ppgtt_insert_pte(struct i915_ppgtt *ppgtt,
>>   	pd = i915_pd_entry(pdp, gen8_pd_index(idx, 2));
>>   	vaddr = kmap_atomic_px(i915_pt_entry(pd, gen8_pd_index(idx, 1)));
>>   	do {
>> -		GEM_BUG_ON(iter->sg->length < I915_GTT_PAGE_SIZE);
>> +		GEM_BUG_ON(sg_dma_len(iter->sg) < I915_GTT_PAGE_SIZE);
>>   		vaddr[gen8_pd_index(idx, 0)] = pte_encode | iter->dma;
>>   
>>   		iter->dma += I915_GTT_PAGE_SIZE;
>>   		if (iter->dma >= iter->max) {
>>   			iter->sg = __sg_next(iter->sg);
>> -			if (!iter->sg) {
>> +			if (!iter->sg || sg_dma_len(iter->sg) == 0) {
>>   				idx = 0;
>>   				break;
>>   			}
>>   
>>   			iter->dma = sg_dma_address(iter->sg);
>> -			iter->max = iter->dma + iter->sg->length;
>> +			iter->max = iter->dma + sg_dma_len(iter->sg);
>>   		}
>>   
>>   		if (gen8_pd_index(++idx, 0) == 0) {
>> @@ -414,7 +414,7 @@ static void gen8_ppgtt_insert_huge(struct i915_vma *vma,
>>   {
>>   	const gen8_pte_t pte_encode = gen8_pte_encode(0, cache_level, flags);
>>   	u64 start = vma->node.start;
>> -	dma_addr_t rem = iter->sg->length;
>> +	dma_addr_t rem = sg_dma_len(iter->sg);
> 
> Seems a little odd to me to be storing a length in a dma_addr_t. But
> besides that small nit, this all makes sense to me.
> 
> Reviewed-by: Logan Gunthorpe <logang@deltatee.com>

I did not spot that, thanks. I'll improve it in v2, since I need to add
a second patch to completely prepare i915 for this.

Regards,

Tvrtko
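
On the dma_addr_t nit, a hedged illustration of what the type cleanup could
look like (this is a fragment sketched here for clarity, not the posted v2,
whose contents are not shown in this thread):

	/*
	 * Hypothetical shape of the cleanup (not the actual v2): in
	 * gen8_ppgtt_insert_huge(), track the remaining length in the
	 * unsigned int type that sg_dma_len() returns rather than in a
	 * dma_addr_t.
	 */
	unsigned int rem = sg_dma_len(iter->sg);
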

Patch

diff --git a/drivers/gpu/drm/i915/gt/gen6_ppgtt.c b/drivers/gpu/drm/i915/gt/gen6_ppgtt.c
index fd0d24d28763..c0d17f87b00f 100644
--- a/drivers/gpu/drm/i915/gt/gen6_ppgtt.c
+++ b/drivers/gpu/drm/i915/gt/gen6_ppgtt.c
@@ -131,17 +131,17 @@  static void gen6_ppgtt_insert_entries(struct i915_address_space *vm,
 
 	vaddr = kmap_atomic_px(i915_pt_entry(pd, act_pt));
 	do {
-		GEM_BUG_ON(iter.sg->length < I915_GTT_PAGE_SIZE);
+		GEM_BUG_ON(sg_dma_len(iter.sg) < I915_GTT_PAGE_SIZE);
 		vaddr[act_pte] = pte_encode | GEN6_PTE_ADDR_ENCODE(iter.dma);
 
 		iter.dma += I915_GTT_PAGE_SIZE;
 		if (iter.dma == iter.max) {
 			iter.sg = __sg_next(iter.sg);
-			if (!iter.sg)
+			if (!iter.sg || sg_dma_len(iter.sg) == 0)
 				break;
 
 			iter.dma = sg_dma_address(iter.sg);
-			iter.max = iter.dma + iter.sg->length;
+			iter.max = iter.dma + sg_dma_len(iter.sg);
 		}
 
 		if (++act_pte == GEN6_PTES) {
diff --git a/drivers/gpu/drm/i915/gt/gen8_ppgtt.c b/drivers/gpu/drm/i915/gt/gen8_ppgtt.c
index eb64f474a78c..0361b3dfdc72 100644
--- a/drivers/gpu/drm/i915/gt/gen8_ppgtt.c
+++ b/drivers/gpu/drm/i915/gt/gen8_ppgtt.c
@@ -372,19 +372,19 @@  gen8_ppgtt_insert_pte(struct i915_ppgtt *ppgtt,
 	pd = i915_pd_entry(pdp, gen8_pd_index(idx, 2));
 	vaddr = kmap_atomic_px(i915_pt_entry(pd, gen8_pd_index(idx, 1)));
 	do {
-		GEM_BUG_ON(iter->sg->length < I915_GTT_PAGE_SIZE);
+		GEM_BUG_ON(sg_dma_len(iter->sg) < I915_GTT_PAGE_SIZE);
 		vaddr[gen8_pd_index(idx, 0)] = pte_encode | iter->dma;
 
 		iter->dma += I915_GTT_PAGE_SIZE;
 		if (iter->dma >= iter->max) {
 			iter->sg = __sg_next(iter->sg);
-			if (!iter->sg) {
+			if (!iter->sg || sg_dma_len(iter->sg) == 0) {
 				idx = 0;
 				break;
 			}
 
 			iter->dma = sg_dma_address(iter->sg);
-			iter->max = iter->dma + iter->sg->length;
+			iter->max = iter->dma + sg_dma_len(iter->sg);
 		}
 
 		if (gen8_pd_index(++idx, 0) == 0) {
@@ -414,7 +414,7 @@  static void gen8_ppgtt_insert_huge(struct i915_vma *vma,
 {
 	const gen8_pte_t pte_encode = gen8_pte_encode(0, cache_level, flags);
 	u64 start = vma->node.start;
-	dma_addr_t rem = iter->sg->length;
+	dma_addr_t rem = sg_dma_len(iter->sg);
 
 	GEM_BUG_ON(!i915_vm_is_4lvl(vma->vm));
 
@@ -456,7 +456,7 @@  static void gen8_ppgtt_insert_huge(struct i915_vma *vma,
 		}
 
 		do {
-			GEM_BUG_ON(iter->sg->length < page_size);
+			GEM_BUG_ON(sg_dma_len(iter->sg) < page_size);
 			vaddr[index++] = encode | iter->dma;
 
 			start += page_size;
@@ -467,7 +467,10 @@  static void gen8_ppgtt_insert_huge(struct i915_vma *vma,
 				if (!iter->sg)
 					break;
 
-				rem = iter->sg->length;
+				rem = sg_dma_len(iter->sg);
+				if (!rem)
+					break;
+
 				iter->dma = sg_dma_address(iter->sg);
 				iter->max = iter->dma + rem;
 
@@ -525,7 +528,7 @@  static void gen8_ppgtt_insert_huge(struct i915_vma *vma,
 		}
 
 		vma->page_sizes.gtt |= page_size;
-	} while (iter->sg);
+	} while (iter->sg && sg_dma_len(iter->sg));
 }
 
 static void gen8_ppgtt_insert(struct i915_address_space *vm,
diff --git a/drivers/gpu/drm/i915/gt/intel_gtt.h b/drivers/gpu/drm/i915/gt/intel_gtt.h
index c13c650ced22..8a33940a71f3 100644
--- a/drivers/gpu/drm/i915/gt/intel_gtt.h
+++ b/drivers/gpu/drm/i915/gt/intel_gtt.h
@@ -580,7 +580,7 @@  static inline struct sgt_dma {
 	struct scatterlist *sg = vma->pages->sgl;
 	dma_addr_t addr = sg_dma_address(sg);
 
-	return (struct sgt_dma){ sg, addr, addr + sg->length };
+	return (struct sgt_dma){ sg, addr, addr + sg_dma_len(sg) };
 }
 
 #endif
diff --git a/drivers/gpu/drm/i915/i915_scatterlist.h b/drivers/gpu/drm/i915/i915_scatterlist.h
index b7b59328cb76..510856887628 100644
--- a/drivers/gpu/drm/i915/i915_scatterlist.h
+++ b/drivers/gpu/drm/i915/i915_scatterlist.h
@@ -27,13 +27,17 @@  static __always_inline struct sgt_iter {
 } __sgt_iter(struct scatterlist *sgl, bool dma) {
 	struct sgt_iter s = { .sgp = sgl };
 
-	if (s.sgp) {
+	if (dma && s.sgp && sg_dma_len(s.sgp) == 0) {
+		s.sgp = NULL;
+	} else if (s.sgp) {
 		s.max = s.curr = s.sgp->offset;
-		s.max += s.sgp->length;
-		if (dma)
+		if (dma) {
 			s.dma = sg_dma_address(s.sgp);
-		else
+			s.max += sg_dma_len(s.sgp);
+		} else {
 			s.pfn = page_to_pfn(sg_page(s.sgp));
+			s.max += s.sgp->length;
+		}
 	}
 
 	return s;