
[v2,1/2] swiotlb-xen: implement xen_swiotlb_dma_mmap callback

Message ID 1484565815-25015-2-git-send-email-andrii.anisov@gmail.com (mailing list archive)
State New, archived

Commit Message

Andrii Anisov Jan. 16, 2017, 11:23 a.m. UTC
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

This function creates userspace mapping for the DMA-coherent memory.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Oleksandr Dmytryshyn <oleksandr.dmytryshyn@globallogic.com>
Signed-off-by: Andrii Anisov <andrii_anisov@epam.com>
---
 arch/arm/xen/mm.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

Comments

Stefano Stabellini Jan. 16, 2017, 10:43 p.m. UTC | #1
On Mon, 16 Jan 2017, Andrii Anisov wrote:
> From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> 
> This function creates userspace mapping for the DMA-coherent memory.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> Signed-off-by: Oleksandr Dmytryshyn <oleksandr.dmytryshyn@globallogic.com>
> Signed-off-by: Andrii Anisov <andrii_anisov@epam.com>
> ---
>  arch/arm/xen/mm.c | 14 ++++++++++++++
>  1 file changed, 14 insertions(+)
> 
> diff --git a/arch/arm/xen/mm.c b/arch/arm/xen/mm.c
> index bd62d94..ff812a2 100644
> --- a/arch/arm/xen/mm.c
> +++ b/arch/arm/xen/mm.c
> @@ -163,6 +163,19 @@ bool xen_arch_need_swiotlb(struct device *dev,
>  		!is_device_dma_coherent(dev));
>  }
>  
> +/*
> + * Create userspace mapping for the DMA-coherent memory.
> + */
> +static int xen_swiotlb_dma_mmap(struct device *dev, struct vm_area_struct *vma,
> +			 void *cpu_addr, dma_addr_t dma_addr, size_t size,
> +			 unsigned long attrs)
> +{
> +	if (__generic_dma_ops(dev)->mmap)
> +		return __generic_dma_ops(dev)->mmap(dev, vma, cpu_addr, dma_addr, size, attrs);
> +
> +	return dma_common_mmap(dev, vma, cpu_addr, dma_addr, size);
> +}
> +
>  int xen_create_contiguous_region(phys_addr_t pstart, unsigned int order,
>  				 unsigned int address_bits,
>  				 dma_addr_t *dma_handle)
> @@ -198,6 +211,7 @@ static struct dma_map_ops xen_swiotlb_dma_ops = {
>  	.unmap_page = xen_swiotlb_unmap_page,
>  	.dma_supported = xen_swiotlb_dma_supported,
>  	.set_dma_mask = xen_swiotlb_set_dma_mask,
> +	.mmap = xen_swiotlb_dma_mmap,
>  };
>  
>  int __init xen_mm_init(void)

The patch should work fine and looks OK. It is better written like this,
compared to the previous versions that reimplemented dma_common_mmap. I
like the fact that we are reusing the arm specific generic mmap
functions via __generic_dma_ops.

For consistency, I would prefer to have xen_swiotlb_dma_mmap in
drivers/xen/swiotlb-xen.c, even if it needs to be #ifdef'ed CONFIG_ARM
(at least the __generic_dma_ops calls need to be #ifdef'ed).
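Roughly, the moved version could look like this (just a sketch of the idea, not actual code from this series; whether the guard should also cover arm64 is left open here):

static int xen_swiotlb_dma_mmap(struct device *dev, struct vm_area_struct *vma,
				void *cpu_addr, dma_addr_t dma_addr, size_t size,
				unsigned long attrs)
{
#ifdef CONFIG_ARM
	/* __generic_dma_ops() is ARM-specific, hence the #ifdef. */
	if (__generic_dma_ops(dev)->mmap)
		return __generic_dma_ops(dev)->mmap(dev, vma, cpu_addr,
						    dma_addr, size, attrs);
#endif
	/* Fall back to the architecture-independent helper. */
	return dma_common_mmap(dev, vma, cpu_addr, dma_addr, size);
}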

Konrad, what do you think?
Stefano Stabellini Jan. 16, 2017, 10:56 p.m. UTC | #2
On Mon, 16 Jan 2017, Stefano Stabellini wrote:
> On Mon, 16 Jan 2017, Andrii Anisov wrote:
> > From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > 
> > This function creates userspace mapping for the DMA-coherent memory.
> > 
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > Signed-off-by: Oleksandr Dmytryshyn <oleksandr.dmytryshyn@globallogic.com>
> > Signed-off-by: Andrii Anisov <andrii_anisov@epam.com>
> > ---
> >  arch/arm/xen/mm.c | 14 ++++++++++++++
> >  1 file changed, 14 insertions(+)
> > 
> > diff --git a/arch/arm/xen/mm.c b/arch/arm/xen/mm.c
> > index bd62d94..ff812a2 100644
> > --- a/arch/arm/xen/mm.c
> > +++ b/arch/arm/xen/mm.c
> > @@ -163,6 +163,19 @@ bool xen_arch_need_swiotlb(struct device *dev,
> >  		!is_device_dma_coherent(dev));
> >  }
> >  
> > +/*
> > + * Create userspace mapping for the DMA-coherent memory.
> > + */
> > +static int xen_swiotlb_dma_mmap(struct device *dev, struct vm_area_struct *vma,
> > +			 void *cpu_addr, dma_addr_t dma_addr, size_t size,
> > +			 unsigned long attrs)
> > +{

Only one more suggestion. For this to work correctly, we are assuming
that no foreign pages are involved here, which is a very reasonable
assumption given that mmap should be called on memory returned by
dma_alloc_coherent. Please add an in-code comment here so that we'll
remember.
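For instance, the comment could read something like this (only a sketch of the wording, not the actual patch text):

	/*
	 * We can assume that no foreign pages are involved here: mmap is
	 * only supposed to be called on memory previously returned by
	 * dma_alloc_coherent, which is always backed by local pages.
	 */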


> > +	if (__generic_dma_ops(dev)->mmap)
> > +		return __generic_dma_ops(dev)->mmap(dev, vma, cpu_addr, dma_addr, size, attrs);
> > +
> > +	return dma_common_mmap(dev, vma, cpu_addr, dma_addr, size);
> > +}
> > +
> >  int xen_create_contiguous_region(phys_addr_t pstart, unsigned int order,
> >  				 unsigned int address_bits,
> >  				 dma_addr_t *dma_handle)
> > @@ -198,6 +211,7 @@ static struct dma_map_ops xen_swiotlb_dma_ops = {
> >  	.unmap_page = xen_swiotlb_unmap_page,
> >  	.dma_supported = xen_swiotlb_dma_supported,
> >  	.set_dma_mask = xen_swiotlb_set_dma_mask,
> > +	.mmap = xen_swiotlb_dma_mmap,
> >  };
> >  
> >  int __init xen_mm_init(void)
> 
> The patch should work fine and looks OK. It is better written like this,
> compared to the previous versions that reimplemented dma_common_mmap. I
> like the fact that we are reusing the arm specific generic mmap
> functions via __generic_dma_ops.
> 
> For consistency, I would prefer to have xen_swiotlb_dma_mmap in
> drivers/xen/swiotlb-xen.c, even if it needs to be #ifdef'ed CONFIG_ARM
> (at least the __generic_dma_ops calls need to be #ifdef'ed).
> 
> Konrad, what do you think?
>
Andrii Anisov Jan. 18, 2017, 11:31 a.m. UTC | #3
Dear Stefano,


> Only one more suggestion. For this to work correctly, we are assuming
> that no foreign pages are involved here, which is a very reasonable
> assumption given that mmap should be called on memory returned by
> dma_alloc_coherent.

I also had this problem in mind; that's why the first version was an RFC.

> Please add an in-code comment here so that we'll remember.

Do you think a comment would be enough for now?
Maybe a fallback to the common ops would be better, in order to keep the current (even if broken) functionality for now? Or a BUG_ON, as you suggested for the get_sgtable callback?


Stefano Stabellini Jan. 18, 2017, 8:23 p.m. UTC | #4
On Wed, 18 Jan 2017, Andrii Anisov wrote:
> Dear Stefano,
> 
> 
> > Only one more suggestion. For this to work correctly, we are assuming
> > that no foreign pages are involved here, which is a very reasonable
> > assumption given that mmap should be called on memory returned by
> > dma_alloc_coherent.
> 
> I also had this problem in mind; that's why the first version was an RFC.
> 
> > Please add an in-code comment here so that we'll remember.
> 
> Do you think a comment would be enough for now?

A comment is enough in the case of xen_swiotlb_dma_mmap, because we are
sure that the function can only be called with local pages. See the
comment above dma_mmap_attrs:

 * Map a coherent DMA buffer previously allocated by dma_alloc_attrs
 * into user space.  The coherent DMA buffer must not be freed by the
 * driver until the user space mapping has been released.

If the page has to come from dma_alloc_coherent, then we are safe.


I wasn't sure about dma_get_sgtable_attrs, because there is no in-tree
description, but looking at the git log:

  commit d2b7428eb0caa7c66e34b6ac869a43915b294123
  Author: Marek Szyprowski <m.szyprowski@samsung.com>
  Date:   Wed Jun 13 10:05:52 2012 +0200
  
      common: dma-mapping: introduce dma_get_sgtable() function
      
      This patch adds dma_get_sgtable() function which is required to let
      drivers to share the buffers allocated by DMA-mapping subsystem. Right

It looks like dma_get_sgtable is also supposed to be called on buffers
returned by dma_alloc_coherent. We should be safe in both cases.
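For reference, a get_sgtable callback mirroring the mmap one above would look roughly like this (a sketch by analogy with this patch, not code copied from the 2/2 patch; the CONFIG_ARM guard is an assumption):

static int xen_swiotlb_get_sgtable(struct device *dev, struct sg_table *sgt,
				   void *cpu_addr, dma_addr_t handle,
				   size_t size, unsigned long attrs)
{
#ifdef CONFIG_ARM
	/* Reuse the ARM-specific implementation when one is provided. */
	if (__generic_dma_ops(dev)->get_sgtable)
		return __generic_dma_ops(dev)->get_sgtable(dev, sgt, cpu_addr,
							   handle, size, attrs);
#endif
	/* Otherwise fall back to the common helper. */
	return dma_common_get_sgtable(dev, sgt, cpu_addr, handle, size);
}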


> Maybe a fallback to the common ops would be better, in order to keep the current (even if broken) functionality
> for now? Or a BUG_ON, as you suggested for the get_sgtable callback?

BUG_ON is good because it is an obvious failure for a case we don't know
how to handle. If it actually works as expected, we could add it to
both functions anyway, surrounded by #ifdef DEBUG_DRIVER so as not to slow
down the common case.
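As an illustration of that idea (purely hypothetical; is_local_page() is a made-up placeholder for whatever "not a foreign page" check would be used, which the thread leaves open):

#ifdef DEBUG_DRIVER
	/* is_local_page() is a hypothetical stand-in for a real check. */
	BUG_ON(!is_local_page(cpu_addr));
#endif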

Patch

diff --git a/arch/arm/xen/mm.c b/arch/arm/xen/mm.c
index bd62d94..ff812a2 100644
--- a/arch/arm/xen/mm.c
+++ b/arch/arm/xen/mm.c
@@ -163,6 +163,19 @@  bool xen_arch_need_swiotlb(struct device *dev,
 		!is_device_dma_coherent(dev));
 }
 
+/*
+ * Create userspace mapping for the DMA-coherent memory.
+ */
+static int xen_swiotlb_dma_mmap(struct device *dev, struct vm_area_struct *vma,
+			 void *cpu_addr, dma_addr_t dma_addr, size_t size,
+			 unsigned long attrs)
+{
+	if (__generic_dma_ops(dev)->mmap)
+		return __generic_dma_ops(dev)->mmap(dev, vma, cpu_addr, dma_addr, size, attrs);
+
+	return dma_common_mmap(dev, vma, cpu_addr, dma_addr, size);
+}
+
 int xen_create_contiguous_region(phys_addr_t pstart, unsigned int order,
 				 unsigned int address_bits,
 				 dma_addr_t *dma_handle)
@@ -198,6 +211,7 @@  static struct dma_map_ops xen_swiotlb_dma_ops = {
 	.unmap_page = xen_swiotlb_unmap_page,
 	.dma_supported = xen_swiotlb_dma_supported,
 	.set_dma_mask = xen_swiotlb_set_dma_mask,
+	.mmap = xen_swiotlb_dma_mmap,
 };
 
 int __init xen_mm_init(void)