
dma-buf: heaps: remove duplicated cache sync

Message ID 20210830100139.15632-1-guangming.cao@mediatek.com (mailing list archive)
State New, archived
Series dma-buf: heaps: remove duplicated cache sync

Commit Message

Guangming.Cao Aug. 30, 2021, 10:01 a.m. UTC
From: Guangming Cao <Guangming.Cao@mediatek.com>

In the current flow, one dma-buf may trigger a cache sync many times
if it has been mapped more than once.

Is there any case where the attachments of one dma-buf point to
different memory? If not, it seems better to do the sync only once.

Signed-off-by: Guangming Cao <Guangming.Cao@mediatek.com>
---
 drivers/dma-buf/heaps/system_heap.c | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)
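
For context, the repeated sync described in the commit message happens when more
than one device attaches to and maps the same dma-buf, since the system heap keeps
one entry per attachment on buffer->attachments. Below is a minimal, hypothetical
sketch of such a sequence (the function and device names are illustrative and not
part of the patch; map error checks are omitted for brevity); with the current heap
code, the begin/end CPU-access callbacks then perform one cache sync per mapped
attachment.

#include <linux/device.h>
#include <linux/dma-buf.h>
#include <linux/dma-direction.h>
#include <linux/err.h>

/*
 * Hypothetical example: two devices map the same dma-buf, so the heap
 * ends up with two mapped attachments and the CPU-access callbacks
 * sync the buffer once per attachment.
 */
static int example_two_mappings(struct dma_buf *dmabuf,
				struct device *dev_a, struct device *dev_b)
{
	struct dma_buf_attachment *att_a, *att_b;
	struct sg_table *sgt_a, *sgt_b;

	att_a = dma_buf_attach(dmabuf, dev_a);
	if (IS_ERR(att_a))
		return PTR_ERR(att_a);
	att_b = dma_buf_attach(dmabuf, dev_b);
	if (IS_ERR(att_b)) {
		dma_buf_detach(dmabuf, att_a);
		return PTR_ERR(att_b);
	}

	/* Each map produces a per-device sg_table and sets a->mapped. */
	sgt_a = dma_buf_map_attachment(att_a, DMA_BIDIRECTIONAL);
	sgt_b = dma_buf_map_attachment(att_b, DMA_BIDIRECTIONAL);

	/* With the unpatched heap, this syncs both attachments. */
	dma_buf_begin_cpu_access(dmabuf, DMA_FROM_DEVICE);
	/* ... CPU reads/writes the buffer here ... */
	dma_buf_end_cpu_access(dmabuf, DMA_TO_DEVICE);

	dma_buf_unmap_attachment(att_b, sgt_b, DMA_BIDIRECTIONAL);
	dma_buf_unmap_attachment(att_a, sgt_a, DMA_BIDIRECTIONAL);
	dma_buf_detach(dmabuf, att_b);
	dma_buf_detach(dmabuf, att_a);
	return 0;
}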

Comments

Christian König Aug. 30, 2021, 10:16 a.m. UTC | #1
Am 30.08.21 um 12:01 schrieb guangming.cao@mediatek.com:
> From: Guangming Cao <Guangming.Cao@mediatek.com>
>
> In the current flow, one dma-buf may trigger a cache sync many times
> if it has been mapped more than once.

Well I'm not an expert on DMA heaps, but this will most likely not work 
correctly.

> Is there any case where the attachments of one dma-buf point to
> different memory? If not, it seems better to do the sync only once.

I think that this can happen, yes.

Christian.


>
> Signed-off-by: Guangming Cao <Guangming.Cao@mediatek.com>
> ---
>   drivers/dma-buf/heaps/system_heap.c | 14 ++++++++------
>   1 file changed, 8 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/dma-buf/heaps/system_heap.c b/drivers/dma-buf/heaps/system_heap.c
> index 23a7e74ef966..909ef652a8c8 100644
> --- a/drivers/dma-buf/heaps/system_heap.c
> +++ b/drivers/dma-buf/heaps/system_heap.c
> @@ -162,9 +162,10 @@ static int system_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
>   		invalidate_kernel_vmap_range(buffer->vaddr, buffer->len);
>   
>   	list_for_each_entry(a, &buffer->attachments, list) {
> -		if (!a->mapped)
> -			continue;
> -		dma_sync_sgtable_for_cpu(a->dev, a->table, direction);
> +		if (a->mapped) {
> +			dma_sync_sgtable_for_cpu(a->dev, a->table, direction);
> +			break;
> +		}
>   	}
>   	mutex_unlock(&buffer->lock);
>   
> @@ -183,9 +184,10 @@ static int system_heap_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
>   		flush_kernel_vmap_range(buffer->vaddr, buffer->len);
>   
>   	list_for_each_entry(a, &buffer->attachments, list) {
> -		if (!a->mapped)
> -			continue;
> -		dma_sync_sgtable_for_device(a->dev, a->table, direction);
> +		if (a->mapped) {
> +			dma_sync_sgtable_for_device(a->dev, a->table, direction);
> +			break;
> +		}
>   	}
>   	mutex_unlock(&buffer->lock);
>
Guangming.Cao Aug. 31, 2021, 3:44 a.m. UTC | #2
From: Guangming Cao <Guangming.Cao@mediatek.com>

> Am 30.08.21 um 12:01 schrieb guangming.cao@mediatek.com:
> > From: Guangming Cao <Guangming.Cao@mediatek.com>
> >
> > In the current flow, one dma-buf may trigger a cache sync many times
> > if it has been mapped more than once.
>
> Well I'm not an expert on DMA heaps, but this will most likely not work 
> correctly.
>
All attachments of one dma-buf are added to a list, so I think a dma-buf
supports being mapped more than once. Could you tell me more about it?

> > Is there any case where the attachments of one dma-buf point to
> > different memory? If not, it seems better to do the sync only once.
>
> I think that this can happen, yes.
>
> Christian.
>
That seems to be a very special case on Android; if you don't mind, could
you tell me more about it?

>
> >
> > Signed-off-by: Guangming Cao <Guangming.Cao@mediatek.com>
> > ---
> >   drivers/dma-buf/heaps/system_heap.c | 14 ++++++++------
> >   1 file changed, 8 insertions(+), 6 deletions(-)
> >
> > diff --git a/drivers/dma-buf/heaps/system_heap.c b/drivers/dma-buf/heaps/system_heap.c
> > index 23a7e74ef966..909ef652a8c8 100644
> > --- a/drivers/dma-buf/heaps/system_heap.c
> > +++ b/drivers/dma-buf/heaps/system_heap.c
> > @@ -162,9 +162,10 @@ static int system_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
> >   		invalidate_kernel_vmap_range(buffer->vaddr, buffer->len);
> >   
> >   	list_for_each_entry(a, &buffer->attachments, list) {
> > -		if (!a->mapped)
> > -			continue;
> > -		dma_sync_sgtable_for_cpu(a->dev, a->table, direction);
> > +		if (a->mapped) {
> > +			dma_sync_sgtable_for_cpu(a->dev, a->table, direction);
> > +			break;
> > +		}
> >   	}
> >   	mutex_unlock(&buffer->lock);
> >   
> > @@ -183,9 +184,10 @@ static int system_heap_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
> >   		flush_kernel_vmap_range(buffer->vaddr, buffer->len);
> >   
> >   	list_for_each_entry(a, &buffer->attachments, list) {
> > -		if (!a->mapped)
> > -			continue;
> > -		dma_sync_sgtable_for_device(a->dev, a->table, direction);
> > +		if (a->mapped) {
> > +			dma_sync_sgtable_for_device(a->dev, a->table, direction);
> > +			break;
> > +		}
> >   	}
> >   	mutex_unlock(&buffer->lock);
> >
Christian König Aug. 31, 2021, 6:31 a.m. UTC | #3
Am 31.08.21 um 05:44 schrieb guangming.cao@mediatek.com:
> From: Guangming Cao <Guangming.Cao@mediatek.com>
>
>> Am 30.08.21 um 12:01 schrieb guangming.cao@mediatek.com:
>>> From: Guangming Cao <Guangming.Cao@mediatek.com>
>>>
>>> In the current flow, one dma-buf may trigger a cache sync many times
>>> if it has been mapped more than once.
>> Well I'm not an expert on DMA heaps, but this will most likely not work
>> correctly.
>>
> All attachments of one dma-buf are added to a list, so I think a dma-buf
> supports being mapped more than once. Could you tell me more about it?

Yes, that's correct, and all of those need to be synced as far as I know.

Note that dma_sync_sgtable_for_cpu() is intentionally called for each SG
table that has been given out.
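
For reference, here is a simplified, paraphrased sketch of the system heap's
map_dma_buf callback (details approximate, based on
drivers/dma-buf/heaps/system_heap.c): each attachment holds its own sg_table
copy, and that copy is DMA-mapped against the attaching device, which is why
the sync has to be done per attachment rather than once.

static struct sg_table *system_heap_map_dma_buf(struct dma_buf_attachment *attachment,
						enum dma_data_direction direction)
{
	/* Per-attachment private data: its own copy of the scatter list. */
	struct dma_heap_attachment *a = attachment->priv;
	struct sg_table *table = a->table;
	int ret;

	/* Map against this attachment's device (its IOMMU, its coherency). */
	ret = dma_map_sgtable(attachment->dev, table, direction, 0);
	if (ret)
		return ERR_PTR(ret);

	a->mapped = true;
	return table;
}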

>>> Is there any case where the attachments of one dma-buf point to
>>> different memory? If not, it seems better to do the sync only once.
>> I think that this can happen, yes.
>>
>> Christian.
>>
> That seems to be a very special case on Android; if you don't mind, could
> you tell me more about it?

That might be the case; nevertheless, this change is illegal from the
DMA API point of view as far as I can see.

Regards,
Christian.

>
>>> Signed-off-by: Guangming Cao <Guangming.Cao@mediatek.com>
>>> ---
>>>    drivers/dma-buf/heaps/system_heap.c | 14 ++++++++------
>>>    1 file changed, 8 insertions(+), 6 deletions(-)
>>>
>>> diff --git a/drivers/dma-buf/heaps/system_heap.c b/drivers/dma-buf/heaps/system_heap.c
>>> index 23a7e74ef966..909ef652a8c8 100644
>>> --- a/drivers/dma-buf/heaps/system_heap.c
>>> +++ b/drivers/dma-buf/heaps/system_heap.c
>>> @@ -162,9 +162,10 @@ static int system_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
>>>    		invalidate_kernel_vmap_range(buffer->vaddr, buffer->len);
>>>    
>>>    	list_for_each_entry(a, &buffer->attachments, list) {
>>> -		if (!a->mapped)
>>> -			continue;
>>> -		dma_sync_sgtable_for_cpu(a->dev, a->table, direction);
>>> +		if (a->mapped) {
>>> +			dma_sync_sgtable_for_cpu(a->dev, a->table, direction);
>>> +			break;
>>> +		}
>>>    	}
>>>    	mutex_unlock(&buffer->lock);
>>>    
>>> @@ -183,9 +184,10 @@ static int system_heap_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
>>>    		flush_kernel_vmap_range(buffer->vaddr, buffer->len);
>>>    
>>>    	list_for_each_entry(a, &buffer->attachments, list) {
>>> -		if (!a->mapped)
>>> -			continue;
>>> -		dma_sync_sgtable_for_device(a->dev, a->table, direction);
>>> +		if (a->mapped) {
>>> +			dma_sync_sgtable_for_device(a->dev, a->table, direction);
>>> +			break;
>>> +		}
>>>    	}
>>>    	mutex_unlock(&buffer->lock);
>>>

Patch

diff --git a/drivers/dma-buf/heaps/system_heap.c b/drivers/dma-buf/heaps/system_heap.c
index 23a7e74ef966..909ef652a8c8 100644
--- a/drivers/dma-buf/heaps/system_heap.c
+++ b/drivers/dma-buf/heaps/system_heap.c
@@ -162,9 +162,10 @@  static int system_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
 		invalidate_kernel_vmap_range(buffer->vaddr, buffer->len);
 
 	list_for_each_entry(a, &buffer->attachments, list) {
-		if (!a->mapped)
-			continue;
-		dma_sync_sgtable_for_cpu(a->dev, a->table, direction);
+		if (a->mapped) {
+			dma_sync_sgtable_for_cpu(a->dev, a->table, direction);
+			break;
+		}
 	}
 	mutex_unlock(&buffer->lock);
 
@@ -183,9 +184,10 @@  static int system_heap_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
 		flush_kernel_vmap_range(buffer->vaddr, buffer->len);
 
 	list_for_each_entry(a, &buffer->attachments, list) {
-		if (!a->mapped)
-			continue;
-		dma_sync_sgtable_for_device(a->dev, a->table, direction);
+		if (a->mapped) {
+			dma_sync_sgtable_for_device(a->dev, a->table, direction);
+			break;
+		}
 	}
 	mutex_unlock(&buffer->lock);