[rdma-next] RDMA/core: Add weak ordering dma attr to dma mapping

Message ID 20200212073559.684139-1-leon@kernel.org (mailing list archive)
State Mainlined
Commit f03d9fadfe13a78ee28fec320d43f7b37574adcb
Delegated to: Jason Gunthorpe
Series [rdma-next] RDMA/core: Add weak ordering dma attr to dma mapping

Commit Message

Leon Romanovsky Feb. 12, 2020, 7:35 a.m. UTC
From: Michael Guralnik <michaelgur@mellanox.com>

Memory regions registered with IB_ACCESS_RELAXED_ORDERING will be
DMA mapped with the DMA_ATTR_WEAK_ORDERING attribute.

This allows reads and writes to the mapping to be weakly ordered,
which can enhance performance on architectures that support it.

Signed-off-by: Michael Guralnik <michaelgur@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/core/umem.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

Comments

Jason Gunthorpe Feb. 13, 2020, 7:20 p.m. UTC | #1
On Wed, Feb 12, 2020 at 09:35:59AM +0200, Leon Romanovsky wrote:
> From: Michael Guralnik <michaelgur@mellanox.com>
> 
> Memory regions registered with IB_ACCESS_RELAXED_ORDERING will be
> DMA mapped with the DMA_ATTR_WEAK_ORDERING attribute.
> 
> This allows reads and writes to the mapping to be weakly ordered,
> which can enhance performance on architectures that support it.
> 
> Signed-off-by: Michael Guralnik <michaelgur@mellanox.com>
> Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
> ---
>  drivers/infiniband/core/umem.c | 11 +++++++----
>  1 file changed, 7 insertions(+), 4 deletions(-)

Applied to for-next

Thanks,
Jason

Patch

diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
index 06b6125b5ae1..82455a1392f1 100644
--- a/drivers/infiniband/core/umem.c
+++ b/drivers/infiniband/core/umem.c
@@ -197,6 +197,7 @@  struct ib_umem *ib_umem_get(struct ib_device *device, unsigned long addr,
 	unsigned long lock_limit;
 	unsigned long new_pinned;
 	unsigned long cur_base;
+	unsigned long dma_attr = 0;
 	struct mm_struct *mm;
 	unsigned long npages;
 	int ret;
@@ -278,10 +279,12 @@  struct ib_umem *ib_umem_get(struct ib_device *device, unsigned long addr,
 
 	sg_mark_end(sg);
 
-	umem->nmap = ib_dma_map_sg(device,
-				   umem->sg_head.sgl,
-				   umem->sg_nents,
-				   DMA_BIDIRECTIONAL);
+	if (access & IB_ACCESS_RELAXED_ORDERING)
+		dma_attr |= DMA_ATTR_WEAK_ORDERING;
+
+	umem->nmap =
+		ib_dma_map_sg_attrs(device, umem->sg_head.sgl, umem->sg_nents,
+				    DMA_BIDIRECTIONAL, dma_attr);
 
 	if (!umem->nmap) {
 		ret = -ENOMEM;