[rdma-next,v1] RDMA/rdmavt: Adapt to handle non-uniform sizes on umem SGEs

Message ID 20190212165224.20328-1-shiraz.saleem@intel.com (mailing list archive)
State Accepted
Delegated to: Jason Gunthorpe

Commit Message

Saleem, Shiraz Feb. 12, 2019, 4:52 p.m. UTC
From: "Shiraz, Saleem" <shiraz.saleem@intel.com>

rdmavt expects a uniform size on all umem SGEs, which is
currently PAGE_SIZE.

Adapt to a umem API change that can return non-uniformly
sized SGEs when contiguous PAGE_SIZE regions are combined
into a single SGE. Use the for_each_sg_page variant to unfold
the larger SGEs into a list of PAGE_SIZE elements.

Additionally, purge umem->page_shift usage in the driver,
as it's only relevant for ODP MRs. Use the system page size
and shift instead.

Cc: Dennis Dalessandro <dennis.dalessandro@intel.com>
Cc: Mike Marciniszyn <mike.marciniszyn@intel.com>
Reviewed-by: Michael J. Ruhl <michael.j.ruhl@intel.com>
Signed-off-by: Shiraz, Saleem <shiraz.saleem@intel.com>
---
 drivers/infiniband/sw/rdmavt/mr.c | 18 ++++++++----------
 1 file changed, 8 insertions(+), 10 deletions(-)

rfc->v0:
        *remove rfc tag
v0->v1:
        *use the for_each_sg_page variant to break up SGEs - Jason's feedback.
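
[Editor's note] For readers unfamiliar with the iteration pattern referenced above, here is a minimal, hypothetical sketch (not part of the patch) of walking a umem scatterlist one PAGE_SIZE page at a time with for_each_sg_page(), against the 2019-era struct ib_umem layout this patch targets. The helper name walk_umem_pages() is illustrative only, and the segment/map bookkeeping done in rvt_reg_user_mr() is omitted.

#include <linux/mm.h>
#include <linux/scatterlist.h>
#include <rdma/ib_umem.h>

/*
 * Illustrative only: iterate a umem's scatterlist page by page.
 * for_each_sg_page() yields exactly one PAGE_SIZE page per step,
 * even when a single SGE covers several contiguous pages.
 */
static void walk_umem_pages(struct ib_umem *umem)
{
	struct sg_page_iter sg_iter;

	for_each_sg_page(umem->sg_head.sgl, &sg_iter, umem->nmap, 0) {
		void *vaddr = page_address(sg_page_iter_page(&sg_iter));

		/* each page contributes one PAGE_SIZE-long segment */
		pr_debug("page at %p, length %lu\n", vaddr,
			 (unsigned long)PAGE_SIZE);
	}
}

In the actual hunk below, each iteration stores the page's vaddr and a PAGE_SIZE length into mr->mr.map[m]->segs[n] instead of printing it.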

Comments

Jason Gunthorpe Feb. 12, 2019, 9:32 p.m. UTC | #1
On Tue, Feb 12, 2019 at 10:52:24AM -0600, Shiraz Saleem wrote:
> From: "Shiraz, Saleem" <shiraz.saleem@intel.com>
> 
> rdmavt expects a uniform size on all umem SGEs, which is
> currently PAGE_SIZE.
> 
> Adapt to a umem API change that can return non-uniformly
> sized SGEs when contiguous PAGE_SIZE regions are combined
> into a single SGE. Use the for_each_sg_page variant to unfold
> the larger SGEs into a list of PAGE_SIZE elements.
> 
> Additionally, purge umem->page_shift usage in the driver,
> as it's only relevant for ODP MRs. Use the system page size
> and shift instead.
> 
> Cc: Dennis Dalessandro <dennis.dalessandro@intel.com>
> Cc: Mike Marciniszyn <mike.marciniszyn@intel.com>
> Reviewed-by: Michael J. Ruhl <michael.j.ruhl@intel.com>
> Signed-off-by: Shiraz, Saleem <shiraz.saleem@intel.com>
> ---
>  drivers/infiniband/sw/rdmavt/mr.c | 18 ++++++++----------
>  1 file changed, 8 insertions(+), 10 deletions(-)

Applied to for-next, thanks

Jason

Patch

diff --git a/drivers/infiniband/sw/rdmavt/mr.c b/drivers/infiniband/sw/rdmavt/mr.c
index 8b1c1e8..97bef44 100644
--- a/drivers/infiniband/sw/rdmavt/mr.c
+++ b/drivers/infiniband/sw/rdmavt/mr.c
@@ -381,8 +381,8 @@  struct ib_mr *rvt_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
 {
 	struct rvt_mr *mr;
 	struct ib_umem *umem;
-	struct scatterlist *sg;
-	int n, m, entry;
+	struct sg_page_iter sg_iter;
+	int n, m;
 	struct ib_mr *ret;
 
 	if (length == 0)
@@ -407,23 +407,21 @@  struct ib_mr *rvt_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
 	mr->mr.access_flags = mr_access_flags;
 	mr->umem = umem;
 
-	mr->mr.page_shift = umem->page_shift;
+	mr->mr.page_shift = PAGE_SHIFT;
 	m = 0;
 	n = 0;
-	for_each_sg(umem->sg_head.sgl, sg, umem->nmap, entry) {
+	for_each_sg_page(umem->sg_head.sgl, &sg_iter, umem->nmap, 0) {
 		void *vaddr;
 
-		vaddr = page_address(sg_page(sg));
+		vaddr = page_address(sg_page_iter_page(&sg_iter));
 		if (!vaddr) {
 			ret = ERR_PTR(-EINVAL);
 			goto bail_inval;
 		}
 		mr->mr.map[m]->segs[n].vaddr = vaddr;
-		mr->mr.map[m]->segs[n].length = BIT(umem->page_shift);
-		trace_rvt_mr_user_seg(&mr->mr, m, n, vaddr,
-				      BIT(umem->page_shift));
-		n++;
-		if (n == RVT_SEGSZ) {
+		mr->mr.map[m]->segs[n].length = PAGE_SIZE;
+		trace_rvt_mr_user_seg(&mr->mr, m, n, vaddr, PAGE_SIZE);
+		if (++n == RVT_SEGSZ) {
 			m++;
 			n = 0;
 		}