[1/4] net: sgi: ioc3-eth: don't abuse dma_direct_* calls

Message ID 20191030211233.30157-2-hch@lst.de (mailing list archive)
State Not Applicable
Delegated to: Paul Burton
Series [1/4] net: sgi: ioc3-eth: don't abuse dma_direct_* calls

Commit Message

Christoph Hellwig Oct. 30, 2019, 9:12 p.m. UTC
dma_direct_ is a low-level API that must never be used by drivers
directly.  Switch to using the proper DMA API instead.

Fixes: ed870f6a7aa2 ("net: sgi: ioc3-eth: use dma-direct for dma allocations")
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/net/ethernet/sgi/ioc3-eth.c | 25 +++++++++++--------------
 1 file changed, 11 insertions(+), 14 deletions(-)

Comments

Thomas Bogendoerfer Oct. 30, 2019, 10:05 p.m. UTC | #1
On Wed, 30 Oct 2019 14:12:30 -0700
Christoph Hellwig <hch@lst.de> wrote:

> dma_direct_ is a low-level API that must never be used by drivers
> directly.  Switch to using the proper DMA API instead.

Is the 4kb/16kb alignment still guaranteed? If not, what is the way
to get such an alignment?

Thomas.
Christoph Hellwig Oct. 30, 2019, 10:38 p.m. UTC | #2
On Wed, Oct 30, 2019 at 11:05:49PM +0100, Thomas Bogendoerfer wrote:
> On Wed, 30 Oct 2019 14:12:30 -0700
> Christoph Hellwig <hch@lst.de> wrote:
> 
> > dma_direct_ is a low-level API that must never be used by drivers
> > directly.  Switch to using the proper DMA API instead.
> 
> Is the 4kb/16kb alignment still guaranteed? If not, what is the way
> to get such an alignment?

The DMA API gives you page-aligned memory.  dma_direct doesn't give you
any guarantees, as it is an internal API explicitly documented as not
for driver usage and one that can change at any time.
Thomas Bogendoerfer Oct. 31, 2019, 8:54 a.m. UTC | #3
On Wed, 30 Oct 2019 23:38:18 +0100
Christoph Hellwig <hch@lst.de> wrote:

> On Wed, Oct 30, 2019 at 11:05:49PM +0100, Thomas Bogendoerfer wrote:
> > On Wed, 30 Oct 2019 14:12:30 -0700
> > Christoph Hellwig <hch@lst.de> wrote:
> > 
> > > dma_direct_ is a low-level API that must never be used by drivers
> > > directly.  Switch to using the proper DMA API instead.
> > 
> > Is the 4kb/16kb alignment still guaranteed? If not, what is the way
> > to get such an alignment?
> 
> The DMA API gives you page-aligned memory.  dma_direct doesn't give you
> any guarantees, as it is an internal API explicitly documented as not
> for driver usage and one that can change at any time.

I didn't want to argue about that.  What I'm interested in is a way
to allocate DMA memory that is 16kB aligned via the DMA API.

Thomas.
Christoph Hellwig Oct. 31, 2019, 1:15 p.m. UTC | #4
On Thu, Oct 31, 2019 at 09:54:30AM +0100, Thomas Bogendoerfer wrote:
> I didn't want to argue about that.  What I'm interested in is a way
> to allocate DMA memory that is 16kB aligned via the DMA API.

You can't.
Thomas Bogendoerfer Oct. 31, 2019, 3:18 p.m. UTC | #5
On Thu, 31 Oct 2019 14:15:01 +0100
Christoph Hellwig <hch@lst.de> wrote:

> On Thu, Oct 31, 2019 at 09:54:30AM +0100, Thomas Bogendoerfer wrote:
> > I didn't want to argue about that.  What I'm interested in is a way
> > to allocate DMA memory that is 16kB aligned via the DMA API.
> 
> You can't.

So then __get_free_pages() and dma_map_page() is the only way?

BTW, I've successfully tested your patches on an 8-CPU Origin 2k.

Thomas.
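
For concreteness, the __get_free_pages()/dma_map_page() route mentioned
above might look roughly like the following sketch.  It relies on the
buddy allocator returning blocks naturally aligned to their own size, so
an order-2 allocation (16kB with 4kB pages) is 16kB aligned.  "dev" is
assumed to be the driver's struct device; error paths are abbreviated:

	/* Sketch only: allocate naturally aligned pages, then map them
	 * for streaming DMA.  get_order(16384) == 2 with 4kB pages.
	 * Note a streaming mapping also needs dma_sync_*() calls around
	 * CPU accesses, unlike a coherent allocation.
	 */
	unsigned long vaddr;
	dma_addr_t dma;
	int order = get_order(TX_RING_SIZE);

	vaddr = __get_free_pages(GFP_KERNEL | __GFP_ZERO, order);
	if (!vaddr)
		return -ENOMEM;

	dma = dma_map_page(dev, virt_to_page((void *)vaddr), 0,
			   TX_RING_SIZE, DMA_BIDIRECTIONAL);
	if (dma_mapping_error(dev, dma)) {
		free_pages(vaddr, order);
		return -ENOMEM;
	}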
Christoph Hellwig Oct. 31, 2019, 5:01 p.m. UTC | #6
On Thu, Oct 31, 2019 at 04:18:17PM +0100, Thomas Bogendoerfer wrote:
> On Thu, 31 Oct 2019 14:15:01 +0100
> Christoph Hellwig <hch@lst.de> wrote:
> 
> > On Thu, Oct 31, 2019 at 09:54:30AM +0100, Thomas Bogendoerfer wrote:
> > > I didn't want to argue about that.  What I'm interested in is a way
> > > to allocate DMA memory that is 16kB aligned via the DMA API.
> > 
> > You can't.
> 
> So then __get_free_pages() and dma_map_page() is the only way?

Or dma_alloc_coherent + check the alignment, falling back to allocating
larger and aligning yourself.  In practice you'll always get aligned
memory at the moment, but there is no API guarantee.
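
As a sketch of that first option (dev, txr and txr_dma are assumed from
the driver's context, and the mask test assumes TX_RING_SIZE is a power
of two):

	/* Allocate the exact size and check whether the returned DMA
	 * address happens to be TX_RING_SIZE-aligned: true in practice
	 * today, but not guaranteed by the API.  If it isn't, fall back
	 * to over-allocating (see the sketch after David's reply below).
	 */
	txr = dma_alloc_coherent(dev, TX_RING_SIZE, &txr_dma, GFP_KERNEL);
	if (txr && (txr_dma & (TX_RING_SIZE - 1))) {
		dma_free_coherent(dev, TX_RING_SIZE, txr, txr_dma);
		txr = NULL;	/* unaligned: over-allocate instead */
	}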
David Miller Oct. 31, 2019, 7:15 p.m. UTC | #7
From: Thomas Bogendoerfer <tbogendoerfer@suse.de>
Date: Thu, 31 Oct 2019 16:18:17 +0100

> On Thu, 31 Oct 2019 14:15:01 +0100
> Christoph Hellwig <hch@lst.de> wrote:
> 
>> On Thu, Oct 31, 2019 at 09:54:30AM +0100, Thomas Bogendoerfer wrote:
>> > I didn't want to argue about that.  What I'm interested in is a way
>> > to allocate DMA memory that is 16kB aligned via the DMA API.
>> 
>> You can't.
> 
> So then __get_free_pages() and dma_map_page() is the only way?
> 
> BTW, I've successfully tested your patches on an 8-CPU Origin 2k.

Nope, it is not the only way.

Allocate a (SIZE*2)-1 sized piece of DMA memory and align the
resulting CPU pointer and DMA address as needed.
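
A minimal sketch of that over-allocation trick, using TX_RING_SIZE as the
SIZE in question; the raw/raw_dma names are illustrative, the unaligned
originals must be kept around for the eventual dma_free_coherent(), and
ALIGN() is the kernel's round-up-to-boundary macro:

	/* Ask for 2*SIZE - 1 bytes so that a SIZE-aligned boundary is
	 * guaranteed to fall within the buffer, then round the DMA
	 * address up and shift the CPU pointer by the same offset.
	 */
	void *raw, *txr;
	dma_addr_t raw_dma, txr_dma;

	raw = dma_alloc_coherent(dev, 2 * TX_RING_SIZE - 1, &raw_dma,
				 GFP_KERNEL);
	if (!raw)
		return -ENOMEM;

	txr_dma = ALIGN(raw_dma, TX_RING_SIZE);
	txr = raw + (txr_dma - raw_dma);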

Patch

diff --git a/drivers/net/ethernet/sgi/ioc3-eth.c b/drivers/net/ethernet/sgi/ioc3-eth.c
index deb636d653f3..477af82bf8a9 100644
--- a/drivers/net/ethernet/sgi/ioc3-eth.c
+++ b/drivers/net/ethernet/sgi/ioc3-eth.c
@@ -48,7 +48,7 @@ 
 #include <linux/etherdevice.h>
 #include <linux/ethtool.h>
 #include <linux/skbuff.h>
-#include <linux/dma-direct.h>
+#include <linux/dma-mapping.h>
 
 #include <net/ip.h>
 
@@ -1242,8 +1242,8 @@  static int ioc3_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 	ioc3_stop(ip);
 
 	/* Allocate rx ring.  4kb = 512 entries, must be 4kb aligned */
-	ip->rxr = dma_direct_alloc_pages(ip->dma_dev, RX_RING_SIZE,
-					 &ip->rxr_dma, GFP_ATOMIC, 0);
+	ip->rxr = dma_alloc_coherent(ip->dma_dev, RX_RING_SIZE, &ip->rxr_dma,
+				     GFP_ATOMIC);
 	if (!ip->rxr) {
 		pr_err("ioc3-eth: rx ring allocation failed\n");
 		err = -ENOMEM;
@@ -1251,9 +1251,8 @@  static int ioc3_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 	}
 
 	/* Allocate tx rings.  16kb = 128 bufs, must be 16kb aligned  */
-	ip->txr = dma_direct_alloc_pages(ip->dma_dev, TX_RING_SIZE,
-					 &ip->txr_dma,
-					 GFP_KERNEL | __GFP_ZERO, 0);
+	ip->txr = dma_alloc_coherent(ip->dma_dev, TX_RING_SIZE, &ip->txr_dma,
+				     GFP_KERNEL | __GFP_ZERO);
 	if (!ip->txr) {
 		pr_err("ioc3-eth: tx ring allocation failed\n");
 		err = -ENOMEM;
@@ -1313,11 +1312,11 @@  static int ioc3_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 out_stop:
 	del_timer_sync(&ip->ioc3_timer);
 	if (ip->rxr)
-		dma_direct_free_pages(ip->dma_dev, RX_RING_SIZE, ip->rxr,
-				      ip->rxr_dma, 0);
+		dma_free_coherent(ip->dma_dev, RX_RING_SIZE, ip->rxr,
+				  ip->rxr_dma);
 	if (ip->txr)
-		dma_direct_free_pages(ip->dma_dev, TX_RING_SIZE, ip->txr,
-				      ip->txr_dma, 0);
+		dma_free_coherent(ip->dma_dev, TX_RING_SIZE, ip->txr,
+				  ip->txr_dma);
 out_res:
 	pci_release_regions(pdev);
 out_free:
@@ -1335,10 +1334,8 @@  static void ioc3_remove_one(struct pci_dev *pdev)
 	struct net_device *dev = pci_get_drvdata(pdev);
 	struct ioc3_private *ip = netdev_priv(dev);
 
-	dma_direct_free_pages(ip->dma_dev, RX_RING_SIZE, ip->rxr,
-			      ip->rxr_dma, 0);
-	dma_direct_free_pages(ip->dma_dev, TX_RING_SIZE, ip->txr,
-			      ip->txr_dma, 0);
+	dma_free_coherent(ip->dma_dev, RX_RING_SIZE, ip->rxr, ip->rxr_dma);
+	dma_free_coherent(ip->dma_dev, TX_RING_SIZE, ip->txr, ip->txr_dma);
 
 	unregister_netdev(dev);
 	del_timer_sync(&ip->ioc3_timer);