[v1,rdma-next,4/5] RDMA/rxe: Use correct sizing on buffers holding page DMA addresses

Message ID: 20190328164947.13232-5-shiraz.saleem@intel.com
State: Mainlined
Commit: 93923d309bda99bc52f8cee6ea4774895b18ae5b
Delegated to: Jason Gunthorpe
Series: Use correct sizing on buffers holding page DMA addresses

Commit Message

Saleem, Shiraz March 28, 2019, 4:49 p.m. UTC
The buffer that holds the page DMA addresses is sized off umem->nmap.
This can cause out-of-bounds accesses on the PBL array when iterating
the umem DMA-mapped SGL, because when umem pages are combined,
umem->nmap can be much lower than the number of system pages in umem.

Use ib_umem_num_pages() to size this buffer.
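
For illustration, a minimal sketch of the distinction, assuming the
page-combining behavior described above; example_num_pages() is a
hypothetical stand-in for what ib_umem_num_pages() derives from the
user VA range, not the kernel's exact definition:

  #include <linux/kernel.h>
  #include <linux/mm.h>
  #include <linux/types.h>

  /*
   * Hypothetical stand-in for ib_umem_num_pages(): count PAGE_SIZE
   * pages from the user VA range itself.  Unlike umem->nmap, this
   * count is unaffected by physically contiguous pages being combined
   * into a single DMA-mapped SGL entry.
   *
   * Example: 8 contiguous 4K pages mapped as one SGL entry leave
   * umem->nmap == 1, but the PBL still needs 8 address slots.
   */
  static inline size_t example_num_pages(u64 address, u64 length)
  {
  	return (ALIGN(address + length, PAGE_SIZE) -
  		ALIGN_DOWN(address, PAGE_SIZE)) >> PAGE_SHIFT;
  }

Sizing the PBL from the page count keeps a per-page walk of the SGL in
bounds even when entry combining shrinks nmap.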

Cc: Moni Shoua <monis@mellanox.com>
Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
---
 drivers/infiniband/sw/rxe/rxe_mr.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

Patch

diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
index 42f0f25..bec23a2 100644
--- a/drivers/infiniband/sw/rxe/rxe_mr.c
+++ b/drivers/infiniband/sw/rxe/rxe_mr.c
@@ -179,7 +179,7 @@ int rxe_mem_init_user(struct rxe_pd *pd, u64 start,
 	}
 
 	mem->umem = umem;
-	num_buf = umem->nmap;
+	num_buf = ib_umem_num_pages(umem);
 
 	rxe_mem_init(access, mem);