
[2/8] ARM: dma-mapping: implement dma_map_single on top of dma_map_page

Message ID 1308556213-24970-3-git-send-email-m.szyprowski@samsung.com (mailing list archive)
State New, archived

Commit Message

Marek Szyprowski June 20, 2011, 7:50 a.m. UTC
This patch consolidates dma_map_single and dma_map_page calls. This is
required to let dma-mapping framework on ARM architecture use common,
generic dma-mapping helpers.

Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
---
 arch/arm/common/dmabounce.c        |   28 ----------
 arch/arm/include/asm/dma-mapping.h |  100 +++++++++++++----------------------
 2 files changed, 37 insertions(+), 91 deletions(-)

Comments

Russell King - ARM Linux June 20, 2011, 2:39 p.m. UTC | #1
On Mon, Jun 20, 2011 at 09:50:07AM +0200, Marek Szyprowski wrote:
> This patch consolidates dma_map_single and dma_map_page calls. This is
> required to let dma-mapping framework on ARM architecture use common,
> generic dma-mapping helpers.

This breaks DMA API debugging, which requires that dma_map_page and
dma_unmap_page are paired separately from dma_map_single and
dma_unmap_single().
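
For context, dma-debug records which of the two APIs created a mapping and
warns when the buffer is released through the other one. A minimal sketch of
the mismatch it is meant to catch (dev, buf and size are hypothetical here,
not from the patch):

	/* Mapped as "single", so it must be released with dma_unmap_single(). */
	dma_addr_t handle = dma_map_single(dev, buf, size, DMA_TO_DEVICE);

	/* Releasing it through the page API makes CONFIG_DMA_API_DEBUG warn
	 * that DMA memory is being freed with the wrong function. */
	dma_unmap_page(dev, handle, size, DMA_TO_DEVICE);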

This also breaks dmabounce when used with a highmem-enabled system -
dmabounce refuses the dma_map_page() API but allows the dma_map_single()
API.
Marek Szyprowski June 20, 2011, 3:15 p.m. UTC | #2
Hello,

On Monday, June 20, 2011 4:39 PM Russell King - ARM Linux wrote:

> On Mon, Jun 20, 2011 at 09:50:07AM +0200, Marek Szyprowski wrote:
> > This patch consolidates dma_map_single and dma_map_page calls. This is
> > required to let dma-mapping framework on ARM architecture use common,
> > generic dma-mapping helpers.
> 
> This breaks DMA API debugging, which requires that dma_map_page and
> dma_unmap_page are paired separately from dma_map_single and
> dma_unmap_single().

Ok, right. This can be fixed by creating appropriate static inline functions
in dma-mapping.h and moving the dma_debug_* calls there. These functions will be
removed later, once dma_map_ops and the include/asm-generic/dma-mapping-common.h
inlines are used, which do all the dma_debug_* calls correctly anyway.
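
A sketch of what such inlines could look like (the idea only, not the eventual
patch: everything funnels through __dma_map_page() while dma-debug keeps being
told this is a dma_map_single() mapping):

static inline dma_addr_t dma_map_single(struct device *dev, void *cpu_addr,
		size_t size, enum dma_data_direction dir)
{
	dma_addr_t addr;

	BUG_ON(!valid_dma_direction(dir));

	addr = __dma_map_page(dev, virt_to_page(cpu_addr),
			      (unsigned long)cpu_addr & ~PAGE_MASK, size, dir);
	/* keep reporting this as a dma_map_single() mapping */
	debug_dma_map_page(dev, virt_to_page(cpu_addr),
			   (unsigned long)cpu_addr & ~PAGE_MASK, size,
			   dir, addr, true);
	return addr;
}

static inline void dma_unmap_single(struct device *dev, dma_addr_t handle,
		size_t size, enum dma_data_direction dir)
{
	debug_dma_unmap_page(dev, handle, size, dir, true);
	__dma_unmap_page(dev, handle, size, dir);
}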

> This also breaks dmabounce when used with a highmem-enabled system -
> dmabounce refuses the dma_map_page() API but allows the dma_map_single()
> API.

I'm really not sure how this change will break the dma bounce code.

Does it mean that it is allowed to call dma_map_single() on a kmapped HIGHMEM
page?

Best regards
Arnd Bergmann June 24, 2011, 3:24 p.m. UTC | #3
On Monday 20 June 2011, Marek Szyprowski wrote:
> > This also breaks dmabounce when used with a highmem-enabled system -
> > dmabounce refuses the dma_map_page() API but allows the dma_map_single()
> > API.
> 
> I'm really not sure how this change will break the dma bounce code.
> 
> Does it mean that it is allowed to call dma_map_single() on a kmapped HIGHMEM
> page?

dma_map_single on a kmapped page already doesn't work, the argument needs to
be inside of the linear mapping in order for virt_to_page to work.
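
Concretely, a sketch of the difference (dev, page, offset and size are
hypothetical here; this is not part of the patch):

	/* Fine: kmalloc() memory sits in lowmem, inside the linear mapping,
	 * so virt_to_page()/virt_to_dma() on the pointer are meaningful. */
	void *buf = kmalloc(size, GFP_KERNEL);
	dma_addr_t handle = dma_map_single(dev, buf, size, DMA_TO_DEVICE);

	/* Not OK: a kmap()ed highmem page lives outside the linear mapping,
	 * so virt_to_page() on the returned address is bogus.  Highmem pages
	 * have to go through dma_map_page() instead. */
	void *vaddr = kmap(page);
	handle = dma_map_single(dev, vaddr + offset, size, DMA_TO_DEVICE);	/* wrong */
	handle = dma_map_page(dev, page, offset, size, DMA_TO_DEVICE);		/* right */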

	Arnd
Marek Szyprowski June 27, 2011, 2:29 p.m. UTC | #4
Hello,

On Friday, June 24, 2011 5:24 PM Arnd Bergmann wrote:

> On Monday 20 June 2011, Marek Szyprowski wrote:
> > > This also breaks dmabounce when used with a highmem-enabled system -
> > > dmabounce refuses the dma_map_page() API but allows the dma_map_single()
> > > API.
> >
> > I'm really not sure how this change will break the dma bounce code.
> >
> > Does it mean that it is allowed to call dma_map_single() on a kmapped
> > HIGHMEM page?
> 
> dma_map_single on a kmapped page already doesn't work, the argument needs
> to be inside of the linear mapping in order for virt_to_page to work.

Then I got really confused.

Documentation/DMA-mapping.txt says that dma_map_single() can be used only
with the kernel linear mapping, while dma_map_page() can also be called on
HIGHMEM pages.

Now, let's look at the arch/arm/common/dmabounce.c code:

dma_addr_t __dma_map_page(struct device *dev, struct page *page,
                unsigned long offset, size_t size, enum dma_data_direction dir)
{
        dev_dbg(dev, "%s(page=%p,off=%#lx,size=%zx,dir=%x)\n",
                __func__, page, offset, size, dir);

        BUG_ON(!valid_dma_direction(dir));

        if (PageHighMem(page)) {
                dev_err(dev, "DMA buffer bouncing of HIGHMEM pages "
                             "is not supported\n");
                return ~0;
        }

        return map_single(dev, page_address(page) + offset, size, dir);
}
EXPORT_SYMBOL(__dma_map_page);

Am I right that something is mixed up here? I really don't get why there is a
highmem check in the dma_map_page() implementation. dma_map_single() doesn't
perform such a check and works with kmapped highmem pages...

Russell also pointed out that my patch broke dma bounce with highmem enabled.

Best regards
Arnd Bergmann June 27, 2011, 2:53 p.m. UTC | #5
On Monday 27 June 2011, Marek Szyprowski wrote:
> On Friday, June 24, 2011 5:24 PM Arnd Bergmann wrote:
> 
> > On Monday 20 June 2011, Marek Szyprowski wrote:
> > > > This also breaks dmabounce when used with a highmem-enabled system -
> > > > dmabounce refuses the dma_map_page() API but allows the dma_map_single()
> > > > API.
> > >
> > > I'm really not sure how this change will break the dma bounce code.
> > >
> > > Does it mean that it is allowed to call dma_map_single() on a kmapped
> > > HIGHMEM page?
> > 
> > dma_map_single on a kmapped page already doesn't work, the argument needs
> > to be inside of the linear mapping in order for virt_to_page to work.
> 
> Then I got really confused.
> 
> Documentation/DMA-mapping.txt says that dma_map_single() can be used only
> with the kernel linear mapping, while dma_map_page() can also be called on
> HIGHMEM pages.

Right, this is true in general.

> Now, let's look at the arch/arm/common/dmabounce.c code:
> 
> dma_addr_t __dma_map_page(struct device *dev, struct page *page,
>                 unsigned long offset, size_t size, enum dma_data_direction dir)
> {
>         dev_dbg(dev, "%s(page=%p,off=%#lx,size=%zx,dir=%x)\n",
>                 __func__, page, offset, size, dir);
> 
>         BUG_ON(!valid_dma_direction(dir));
> 
>         if (PageHighMem(page)) {
>                 dev_err(dev, "DMA buffer bouncing of HIGHMEM pages "
>                              "is not supported\n");
>                 return ~0;
>         }
> 
>         return map_single(dev, page_address(page) + offset, size, dir);
> }
> EXPORT_SYMBOL(__dma_map_page);
>
> Am I right that something is mixed up here? I really don't get why there is a
> highmem check in the dma_map_page() implementation. dma_map_single() doesn't
> perform such a check and works with kmapped highmem pages...
>
> Russell also pointed out that my patch broke dma bounce with highmem enabled.

The version of __dma_map_page that you cited is the one used with dmabounce
enabled; when CONFIG_DMABOUNCE is disabled, the following version is used:

static inline dma_addr_t __dma_map_page(struct device *dev, struct page *page,
             unsigned long offset, size_t size, enum dma_data_direction dir)
{
        __dma_page_cpu_to_dev(page, offset, size, dir);
        return pfn_to_dma(dev, page_to_pfn(page)) + offset;
}

This does not have the check, because the kernel does not need to touch
the kernel mapping in that case.

If you pass a kmapped page into dma_map_single, it should also not
work because of the BUG_ON in ___dma_single_cpu_to_dev -- it warns
you that you would end up flushing the cache for the wrong page (if any).
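
The check in question is, roughly, the virt_addr_valid() test at the top of
that helper in arch/arm/mm/dma-mapping.c (sketched here from the code of this
era, not part of this patch):

void ___dma_single_cpu_to_dev(const void *kaddr, size_t size,
		enum dma_data_direction dir)
{
	/* Rejects addresses outside the linear mapping (e.g. kmap() results);
	 * the cache maintenance below would otherwise hit the wrong page. */
	BUG_ON(!virt_addr_valid(kaddr) || !virt_addr_valid(kaddr + size - 1));

	/* ... cache maintenance on the linear-mapping address follows ... */
}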

	Arnd
Marek Szyprowski June 27, 2011, 3:06 p.m. UTC | #6
Hello,

On Monday, June 27, 2011 4:54 PM Arnd Bergmann wrote:

> On Monday 27 June 2011, Marek Szyprowski wrote:
> > On Friday, June 24, 2011 5:24 PM Arnd Bergmann wrote:
> >
> > > On Monday 20 June 2011, Marek Szyprowski wrote:
> > > > > This also breaks dmabounce when used with a highmem-enabled system -
> > > > > dmabounce refuses the dma_map_page() API but allows the dma_map_single()
> > > > > API.
> > > >
> > > > I'm really not sure how this change will break the dma bounce code.
> > > >
> > > > Does it mean that it is allowed to call dma_map_single() on a kmapped
> > > > HIGHMEM page?
> > >
> > > dma_map_single on a kmapped page already doesn't work, the argument needs
> > > to be inside of the linear mapping in order for virt_to_page to work.
> >
> > Then I got really confused.
> >
> > Documentation/DMA-mapping.txt says that dma_map_single() can be used only
> > with the kernel linear mapping, while dma_map_page() can also be called on
> > HIGHMEM pages.
> 
> Right, this is true in general.

Ok, so I see no reason not to implement dma_map_single() on top of
dma_map_page(), as has been done in asm-generic/dma-mapping-common.h.
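
For reference, that generic helper implements dma_map_single() as
dma_map_page() on virt_to_page() of the buffer; a lightly trimmed sketch of the
asm-generic/dma-mapping-common.h inline of this era:

static inline dma_addr_t dma_map_single_attrs(struct device *dev, void *ptr,
					      size_t size,
					      enum dma_data_direction dir,
					      struct dma_attrs *attrs)
{
	struct dma_map_ops *ops = get_dma_ops(dev);
	dma_addr_t addr;

	BUG_ON(!valid_dma_direction(dir));
	addr = ops->map_page(dev, virt_to_page(ptr),
			     (unsigned long)ptr & ~PAGE_MASK, size, dir, attrs);
	/* still reported to dma-debug as a dma_map_single() mapping */
	debug_dma_map_page(dev, virt_to_page(ptr),
			   (unsigned long)ptr & ~PAGE_MASK, size,
			   dir, addr, true);
	return addr;
}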
 
> > Now, let's look at the arch/arm/common/dmabounce.c code:
> >
> > dma_addr_t __dma_map_page(struct device *dev, struct page *page,
> >                 unsigned long offset, size_t size, enum dma_data_direction dir)
> > {
> >         dev_dbg(dev, "%s(page=%p,off=%#lx,size=%zx,dir=%x)\n",
> >                 __func__, page, offset, size, dir);
> >
> >         BUG_ON(!valid_dma_direction(dir));
> >
> >         if (PageHighMem(page)) {
> >                 dev_err(dev, "DMA buffer bouncing of HIGHMEM pages "
> >                              "is not supported\n");
> >                 return ~0;
> >         }
> >
> >         return map_single(dev, page_address(page) + offset, size, dir);
> > }
> > EXPORT_SYMBOL(__dma_map_page);
> >
> > Am I right that something is mixed up here? I really don't get why there is
> > a highmem check in the dma_map_page() implementation. dma_map_single()
> > doesn't perform such a check and works with kmapped highmem pages...
> >
> > Russell also pointed out that my patch broke dma bounce with highmem enabled.
> 
> The version of __dma_map_page that you cited is the one used with dmabounce
> enabled; when CONFIG_DMABOUNCE is disabled, the following version is used:
> 
> static inline dma_addr_t __dma_map_page(struct device *dev, struct page *page,
>              unsigned long offset, size_t size, enum dma_data_direction dir)
> {
>         __dma_page_cpu_to_dev(page, offset, size, dir);
>         return pfn_to_dma(dev, page_to_pfn(page)) + offset;
> }
> 
> This does not have the check, because the kernel does not need to touch
> the kernel mapping in that case.
> 
> If you pass a kmapped page into dma_map_single, it should also not
> work because of the BUG_ON in ___dma_single_cpu_to_dev -- it warns
> you that you would end up flushing the cache for the wrong page (if any).

Yes, I know that the flow is different when dma bounce is not used. The
non-dmabounce version will still work correctly after my patch.

However, I still don't get how my patch broke the dma bounce code with HIGHMEM,
as pointed out by Russell...

Best regards

Patch

diff --git a/arch/arm/common/dmabounce.c b/arch/arm/common/dmabounce.c
index f7b330f..9eb161e 100644
--- a/arch/arm/common/dmabounce.c
+++ b/arch/arm/common/dmabounce.c
@@ -329,34 +329,6 @@  static inline void unmap_single(struct device *dev, dma_addr_t dma_addr,
  * substitute the safe buffer for the unsafe one.
  * (basically move the buffer from an unsafe area to a safe one)
  */
-dma_addr_t __dma_map_single(struct device *dev, void *ptr, size_t size,
-		enum dma_data_direction dir)
-{
-	dev_dbg(dev, "%s(ptr=%p,size=%d,dir=%x)\n",
-		__func__, ptr, size, dir);
-
-	BUG_ON(!valid_dma_direction(dir));
-
-	return map_single(dev, ptr, size, dir);
-}
-EXPORT_SYMBOL(__dma_map_single);
-
-/*
- * see if a mapped address was really a "safe" buffer and if so, copy
- * the data from the safe buffer back to the unsafe buffer and free up
- * the safe buffer.  (basically return things back to the way they
- * should be)
- */
-void __dma_unmap_single(struct device *dev, dma_addr_t dma_addr, size_t size,
-		enum dma_data_direction dir)
-{
-	dev_dbg(dev, "%s(ptr=%p,size=%d,dir=%x)\n",
-		__func__, (void *) dma_addr, size, dir);
-
-	unmap_single(dev, dma_addr, size, dir);
-}
-EXPORT_SYMBOL(__dma_unmap_single);
-
 dma_addr_t __dma_map_page(struct device *dev, struct page *page,
 		unsigned long offset, size_t size, enum dma_data_direction dir)
 {
diff --git a/arch/arm/include/asm/dma-mapping.h b/arch/arm/include/asm/dma-mapping.h
index ca920aa..799669d 100644
--- a/arch/arm/include/asm/dma-mapping.h
+++ b/arch/arm/include/asm/dma-mapping.h
@@ -298,10 +298,6 @@  extern int dma_needs_bounce(struct device*, dma_addr_t, size_t);
 /*
  * The DMA API, implemented by dmabounce.c.  See below for descriptions.
  */
-extern dma_addr_t __dma_map_single(struct device *, void *, size_t,
-		enum dma_data_direction);
-extern void __dma_unmap_single(struct device *, dma_addr_t, size_t,
-		enum dma_data_direction);
 extern dma_addr_t __dma_map_page(struct device *, struct page *,
 		unsigned long, size_t, enum dma_data_direction);
 extern void __dma_unmap_page(struct device *, dma_addr_t, size_t,
@@ -325,14 +321,6 @@  static inline int dmabounce_sync_for_device(struct device *d, dma_addr_t addr,
 	return 1;
 }
 
-
-static inline dma_addr_t __dma_map_single(struct device *dev, void *cpu_addr,
-		size_t size, enum dma_data_direction dir)
-{
-	__dma_single_cpu_to_dev(cpu_addr, size, dir);
-	return virt_to_dma(dev, cpu_addr);
-}
-
 static inline dma_addr_t __dma_map_page(struct device *dev, struct page *page,
 	     unsigned long offset, size_t size, enum dma_data_direction dir)
 {
@@ -340,12 +328,6 @@  static inline dma_addr_t __dma_map_page(struct device *dev, struct page *page,
 	return pfn_to_dma(dev, page_to_pfn(page)) + offset;
 }
 
-static inline void __dma_unmap_single(struct device *dev, dma_addr_t handle,
-		size_t size, enum dma_data_direction dir)
-{
-	__dma_single_dev_to_cpu(dma_to_virt(dev, handle), size, dir);
-}
-
 static inline void __dma_unmap_page(struct device *dev, dma_addr_t handle,
 		size_t size, enum dma_data_direction dir)
 {
@@ -354,34 +336,6 @@  static inline void __dma_unmap_page(struct device *dev, dma_addr_t handle,
 }
 #endif /* CONFIG_DMABOUNCE */
 
-/**
- * dma_map_single - map a single buffer for streaming DMA
- * @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
- * @cpu_addr: CPU direct mapped address of buffer
- * @size: size of buffer to map
- * @dir: DMA transfer direction
- *
- * Ensure that any data held in the cache is appropriately discarded
- * or written back.
- *
- * The device owns this memory once this call has completed.  The CPU
- * can regain ownership by calling dma_unmap_single() or
- * dma_sync_single_for_cpu().
- */
-static inline dma_addr_t dma_map_single(struct device *dev, void *cpu_addr,
-		size_t size, enum dma_data_direction dir)
-{
-	dma_addr_t addr;
-
-	BUG_ON(!valid_dma_direction(dir));
-
-	addr = __dma_map_single(dev, cpu_addr, size, dir);
-	debug_dma_map_page(dev, virt_to_page(cpu_addr),
-			(unsigned long)cpu_addr & ~PAGE_MASK, size,
-			dir, addr, true);
-
-	return addr;
-}
 
 /**
  * dma_map_page - map a portion of a page for streaming DMA
@@ -411,48 +365,68 @@  static inline dma_addr_t dma_map_page(struct device *dev, struct page *page,
 }
 
 /**
- * dma_unmap_single - unmap a single buffer previously mapped
+ * dma_unmap_page - unmap a buffer previously mapped through dma_map_page()
  * @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
  * @handle: DMA address of buffer
- * @size: size of buffer (same as passed to dma_map_single)
- * @dir: DMA transfer direction (same as passed to dma_map_single)
+ * @size: size of buffer (same as passed to dma_map_page)
+ * @dir: DMA transfer direction (same as passed to dma_map_page)
  *
- * Unmap a single streaming mode DMA translation.  The handle and size
- * must match what was provided in the previous dma_map_single() call.
+ * Unmap a page streaming mode DMA translation.  The handle and size
+ * must match what was provided in the previous dma_map_page() call.
  * All other usages are undefined.
  *
  * After this call, reads by the CPU to the buffer are guaranteed to see
  * whatever the device wrote there.
  */
-static inline void dma_unmap_single(struct device *dev, dma_addr_t handle,
+
+static inline void dma_unmap_page(struct device *dev, dma_addr_t handle,
 		size_t size, enum dma_data_direction dir)
 {
-	debug_dma_unmap_page(dev, handle, size, dir, true);
-	__dma_unmap_single(dev, handle, size, dir);
+	debug_dma_unmap_page(dev, handle, size, dir, false);
+	__dma_unmap_page(dev, handle, size, dir);
 }
 
 /**
- * dma_unmap_page - unmap a buffer previously mapped through dma_map_page()
+ * dma_map_single - map a single buffer for streaming DMA
+ * @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
+ * @cpu_addr: CPU direct mapped address of buffer
+ * @size: size of buffer to map
+ * @dir: DMA transfer direction
+ *
+ * Ensure that any data held in the cache is appropriately discarded
+ * or written back.
+ *
+ * The device owns this memory once this call has completed.  The CPU
+ * can regain ownership by calling dma_unmap_single() or
+ * dma_sync_single_for_cpu().
+ */
+static inline dma_addr_t dma_map_single(struct device *dev, void *cpu_addr,
+		size_t size, enum dma_data_direction dir)
+{
+	return dma_map_page(dev, virt_to_page(cpu_addr),
+			    (unsigned long)cpu_addr & ~PAGE_MASK, size, dir);
+}
+
+/**
+ * dma_unmap_single - unmap a single buffer previously mapped
  * @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
  * @handle: DMA address of buffer
- * @size: size of buffer (same as passed to dma_map_page)
- * @dir: DMA transfer direction (same as passed to dma_map_page)
+ * @size: size of buffer (same as passed to dma_map_single)
+ * @dir: DMA transfer direction (same as passed to dma_map_single)
  *
- * Unmap a page streaming mode DMA translation.  The handle and size
- * must match what was provided in the previous dma_map_page() call.
+ * Unmap a single streaming mode DMA translation.  The handle and size
+ * must match what was provided in the previous dma_map_single() call.
  * All other usages are undefined.
  *
  * After this call, reads by the CPU to the buffer are guaranteed to see
  * whatever the device wrote there.
  */
-static inline void dma_unmap_page(struct device *dev, dma_addr_t handle,
+static inline void dma_unmap_single(struct device *dev, dma_addr_t handle,
 		size_t size, enum dma_data_direction dir)
 {
-	debug_dma_unmap_page(dev, handle, size, dir, false);
-	__dma_unmap_page(dev, handle, size, dir);
+	dma_unmap_page(dev, handle, size, dir);
 }
 
-
 static inline void dma_sync_single_for_cpu(struct device *dev,
 		dma_addr_t handle, size_t size, enum dma_data_direction dir)
 {