Message ID | 1578962480-17814-3-git-send-email-rao.shoaib@oracle.com (mailing list archive)
State | Accepted
Delegated to | Jason Gunthorpe
Series | rxe should use same buffer size for SGE's and inline data
On Mon, Jan 13, 2020 at 04:41:20PM -0800, rao Shoaib wrote:
> From: Rao Shoaib <rao.shoaib@oracle.com>
>
> SGE buffer size and max_inline data should be same. Maximum of the
> two values requested is used.
>
> Signed-off-by: Rao Shoaib <rao.shoaib@oracle.com>
>  drivers/infiniband/sw/rxe/rxe_qp.c | 23 +++++++++++------------
>  1 file changed, 11 insertions(+), 12 deletions(-)
>
> diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c
> index aeea994..41c669c 100644
> +++ b/drivers/infiniband/sw/rxe/rxe_qp.c
> @@ -235,18 +235,17 @@ static int rxe_qp_init_req(struct rxe_dev *rxe, struct rxe_qp *qp,
>  		return err;
>  	qp->sk->sk->sk_user_data = qp;
>
> -	qp->sq.max_wr		= init->cap.max_send_wr;
> -	qp->sq.max_sge		= init->cap.max_send_sge;
> -	qp->sq.max_inline	= init->cap.max_inline_data;
> -
> -	wqe_size	= max_t(int, sizeof(struct rxe_send_wqe) +
> -				qp->sq.max_sge * sizeof(struct ib_sge),
> -				sizeof(struct rxe_send_wqe) +
> -				qp->sq.max_inline);
> -
> -	qp->sq.queue	= rxe_queue_init(rxe,
> -					 &qp->sq.max_wr,
> -					 wqe_size);
> +	wqe_size = max_t(int, init->cap.max_send_sge * sizeof(struct ib_sge),
> +			 init->cap.max_inline_data);
> +	qp->sq.max_sge = wqe_size/sizeof(struct ib_sge);
> +	qp->sq.max_inline = wqe_size;
> +
> +	wqe_size += sizeof(struct rxe_send_wqe);

Where does this limit the user's request to RXE_MAX_WQE_SIZE?

I seem to recall that if the requested max can't be satisfied then
that is an EINVAL?

And the init->cap should be updated with the actual allocation.

So more like:

	if (init->cap.max_send_sge > RXE_MAX_SGE ||
	    init->cap.max_inline_data > RXE_MAX_INLINE)
		return -EINVAL;

	wqe_size = max_t(int, init->cap.max_send_sge * sizeof(struct ib_sge),
			 init->cap.max_inline_data);
	qp->sq.max_sge = init->cap.max_send_sge = wqe_size/sizeof(struct ib_sge);
	qp->sq.max_inline = init->cap.max_inline_data = wqe_size;
	wqe_size += sizeof(struct rxe_send_wqe);

Jason
On 1/15/20 10:27 AM, Jason Gunthorpe wrote:
> On Mon, Jan 13, 2020 at 04:41:20PM -0800, rao Shoaib wrote:
>> From: Rao Shoaib <rao.shoaib@oracle.com>
>>
>> SGE buffer size and max_inline data should be same. Maximum of the
>> two values requested is used.
>>
>> Signed-off-by: Rao Shoaib <rao.shoaib@oracle.com>
>>  drivers/infiniband/sw/rxe/rxe_qp.c | 23 +++++++++++------------
>>  1 file changed, 11 insertions(+), 12 deletions(-)
>>
>> diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c
>> index aeea994..41c669c 100644
>> +++ b/drivers/infiniband/sw/rxe/rxe_qp.c
>> @@ -235,18 +235,17 @@ static int rxe_qp_init_req(struct rxe_dev *rxe, struct rxe_qp *qp,
>>  		return err;
>>  	qp->sk->sk->sk_user_data = qp;
>>
>> -	qp->sq.max_wr		= init->cap.max_send_wr;
>> -	qp->sq.max_sge		= init->cap.max_send_sge;
>> -	qp->sq.max_inline	= init->cap.max_inline_data;
>> -
>> -	wqe_size	= max_t(int, sizeof(struct rxe_send_wqe) +
>> -				qp->sq.max_sge * sizeof(struct ib_sge),
>> -				sizeof(struct rxe_send_wqe) +
>> -				qp->sq.max_inline);
>> -
>> -	qp->sq.queue	= rxe_queue_init(rxe,
>> -					 &qp->sq.max_wr,
>> -					 wqe_size);
>> +	wqe_size = max_t(int, init->cap.max_send_sge * sizeof(struct ib_sge),
>> +			 init->cap.max_inline_data);
>> +	qp->sq.max_sge = wqe_size/sizeof(struct ib_sge);
>> +	qp->sq.max_inline = wqe_size;
>> +
>> +	wqe_size += sizeof(struct rxe_send_wqe);
> Where does this limit the user's request to RXE_MAX_WQE_SIZE?

My understanding is that the user request can only specify sge's and/or
inline data. The check for those is made in rxe_qp_chk_cap. Since max
sge's and max inline data are constrained by RXE_MAX_WQE_SIZE, the limit
is enforced.

> I seem to recall that if the requested max can't be satisfied then
> that is an EINVAL?
>
> And the init->cap should be updated with the actual allocation.

Since the user request for both (sge's and inline data) has been
satisfied, I decided not to update the values in case the return values
are being checked. If you prefer that I update the values I can do that.

Shoaib

> So more like:
>
> 	if (init->cap.max_send_sge > RXE_MAX_SGE ||
> 	    init->cap.max_inline_data > RXE_MAX_INLINE)
> 		return -EINVAL;
>
> 	wqe_size = max_t(int, init->cap.max_send_sge * sizeof(struct ib_sge),
> 			 init->cap.max_inline_data);
> 	qp->sq.max_sge = init->cap.max_send_sge = wqe_size/sizeof(struct ib_sge);
> 	qp->sq.max_inline = init->cap.max_inline_data = wqe_size;
> 	wqe_size += sizeof(struct rxe_send_wqe);
>
> Jason
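To make the enforcement and rounding concrete, here is a minimal user-space sketch of the pattern being discussed: the requested SGE count and inline size are validated against fixed limits, and both send-queue capabilities are then re-derived from one shared buffer size. The helper, the DEMO_* limits, and the 16-byte SGE size are illustrative placeholders; this is not the kernel's rxe_qp_chk_cap() or the real RXE_MAX_* constants.

#include <errno.h>
#include <stdint.h>
#include <stdio.h>

#define DEMO_MAX_SGE    32                        /* placeholder, not RXE_MAX_SGE */
#define DEMO_SGE_BYTES  16                        /* stand-in for sizeof(struct ib_sge) */
#define DEMO_MAX_INLINE (DEMO_MAX_SGE * DEMO_SGE_BYTES)

/* Reject requests the WQE can never hold, then size one shared buffer. */
static int check_and_round(uint32_t *max_sge, uint32_t *max_inline)
{
	uint32_t buf;

	if (*max_sge > DEMO_MAX_SGE || *max_inline > DEMO_MAX_INLINE)
		return -EINVAL;

	/* the larger of the two requests decides the buffer size ... */
	buf = *max_sge * DEMO_SGE_BYTES;
	if (*max_inline > buf)
		buf = *max_inline;

	/* ... and both capabilities are re-derived from that buffer */
	*max_sge = buf / DEMO_SGE_BYTES;
	*max_inline = buf;
	return 0;
}

int main(void)
{
	uint32_t sge = 4, inline_data = 512;

	if (check_and_round(&sge, &inline_data))
		return 1;

	/* 512 bytes of inline space also holds 32 demo SGEs, so sge is raised to 32 */
	printf("max_sge=%u max_inline=%u\n", sge, inline_data);
	return 0;
}

This rounding is exactly why the question of writing the derived values back into init->cap matters: the caller asked for 4 SGEs but actually gets 32.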
On 1/15/20 11:57 AM, Rao Shoaib wrote:
>
>> I seem to recall that if the requested max can't be satisfied then
>> that is an EINVAL?
>>
>> And the init->cap should be updated with the actual allocation.
>
> Since the user request for both (sge's and inline data) has been
> satisfied, I decided not to update the values in case the return values
> are being checked. If you prefer that I update the values I can do that.
>
> Shoaib
>

In my original v1 patch I did update init->cap; I must have overlooked
it. I will resubmit the patch with that change once I hear back from you
about the enforcement.

Shoaib
On Wed, Jan 15, 2020 at 11:57:08AM -0800, Rao Shoaib wrote:
> On 1/15/20 10:27 AM, Jason Gunthorpe wrote:
> > On Mon, Jan 13, 2020 at 04:41:20PM -0800, rao Shoaib wrote:
> > > From: Rao Shoaib <rao.shoaib@oracle.com>
> > >
> > > SGE buffer size and max_inline data should be same. Maximum of the
> > > two values requested is used.
> > >
> > > Signed-off-by: Rao Shoaib <rao.shoaib@oracle.com>
> > >  drivers/infiniband/sw/rxe/rxe_qp.c | 23 +++++++++++------------
> > >  1 file changed, 11 insertions(+), 12 deletions(-)
> > >
> > > diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c
> > > index aeea994..41c669c 100644
> > > +++ b/drivers/infiniband/sw/rxe/rxe_qp.c
> > > @@ -235,18 +235,17 @@ static int rxe_qp_init_req(struct rxe_dev *rxe, struct rxe_qp *qp,
> > >  		return err;
> > >  	qp->sk->sk->sk_user_data = qp;
> > > -	qp->sq.max_wr		= init->cap.max_send_wr;
> > > -	qp->sq.max_sge		= init->cap.max_send_sge;
> > > -	qp->sq.max_inline	= init->cap.max_inline_data;
> > > -
> > > -	wqe_size	= max_t(int, sizeof(struct rxe_send_wqe) +
> > > -				qp->sq.max_sge * sizeof(struct ib_sge),
> > > -				sizeof(struct rxe_send_wqe) +
> > > -				qp->sq.max_inline);
> > > -
> > > -	qp->sq.queue	= rxe_queue_init(rxe,
> > > -					 &qp->sq.max_wr,
> > > -					 wqe_size);
> > > +	wqe_size = max_t(int, init->cap.max_send_sge * sizeof(struct ib_sge),
> > > +			 init->cap.max_inline_data);
> > > +	qp->sq.max_sge = wqe_size/sizeof(struct ib_sge);
> > > +	qp->sq.max_inline = wqe_size;
> > > +
> > > +	wqe_size += sizeof(struct rxe_send_wqe);
> > Where does this limit the user's request to RXE_MAX_WQE_SIZE?
>
> My understanding is that the user request can only specify sge's and/or
> inline data. The check for those is made in rxe_qp_chk_cap. Since max
> sge's and max inline data are constrained by RXE_MAX_WQE_SIZE, the limit
> is enforced.

Okay, that is fine. It is a bit obtuse because of how distant
rxe_qp_chk_cap() is from this function; let's just add a comment.

> > I seem to recall that if the requested max can't be satisfied then
> > that is an EINVAL?
> >
> > And the init->cap should be updated with the actual allocation.
>
> Since the user request for both (sge's and inline data) has been
> satisfied, I decided not to update the values in case the return values
> are being checked. If you prefer that I update the values I can do that.

If the sizes are increased then the driver is supposed to return the
actual maximums. It is easy, I will fix it.

Also, your patches don't apply cleanly. You need to send patches
against the rdma for-next tree.

And subjects should start with some 'RDMA/rxe: ' tag.

I fixed it all and applied to for-next.

Thanks,
Jason
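Folding Jason's two requests into the hunk (a comment pointing back at rxe_qp_chk_cap() and the write-back of the actual capabilities into init->cap) would look roughly like the fragment below. This is a sketch of the shape of the fix-up, not the text of the commit as applied to for-next.

	/* these caps are limited by rxe_qp_chk_cap() done by the caller */
	wqe_size = max_t(int, init->cap.max_send_sge * sizeof(struct ib_sge),
			 init->cap.max_inline_data);

	/* report back the capabilities the shared buffer actually provides */
	qp->sq.max_sge = init->cap.max_send_sge =
			wqe_size / sizeof(struct ib_sge);
	qp->sq.max_inline = init->cap.max_inline_data = wqe_size;

	wqe_size += sizeof(struct rxe_send_wqe);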
diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c
index aeea994..41c669c 100644
--- a/drivers/infiniband/sw/rxe/rxe_qp.c
+++ b/drivers/infiniband/sw/rxe/rxe_qp.c
@@ -235,18 +235,17 @@ static int rxe_qp_init_req(struct rxe_dev *rxe, struct rxe_qp *qp,
 		return err;
 	qp->sk->sk->sk_user_data = qp;
 
-	qp->sq.max_wr		= init->cap.max_send_wr;
-	qp->sq.max_sge		= init->cap.max_send_sge;
-	qp->sq.max_inline	= init->cap.max_inline_data;
-
-	wqe_size	= max_t(int, sizeof(struct rxe_send_wqe) +
-				qp->sq.max_sge * sizeof(struct ib_sge),
-				sizeof(struct rxe_send_wqe) +
-				qp->sq.max_inline);
-
-	qp->sq.queue	= rxe_queue_init(rxe,
-					 &qp->sq.max_wr,
-					 wqe_size);
+	wqe_size = max_t(int, init->cap.max_send_sge * sizeof(struct ib_sge),
+			 init->cap.max_inline_data);
+	qp->sq.max_sge = wqe_size/sizeof(struct ib_sge);
+	qp->sq.max_inline = wqe_size;
+
+	wqe_size += sizeof(struct rxe_send_wqe);
+
+	qp->sq.max_wr = init->cap.max_send_wr;
+
+	qp->sq.queue = rxe_queue_init(rxe, &qp->sq.max_wr, wqe_size);
+
 	if (!qp->sq.queue)
 		return -ENOMEM;
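From user space, the effect of the cap write-back is visible through the normal verbs convention that ibv_create_qp() may raise the values in qp_init_attr.cap to what the provider actually allocated. A rough sketch follows; device selection and the requested sizes are arbitrary examples, error handling is minimal, and whether the rounded rxe values show up here depends on the kernel returning them, which is the fix Jason said he would fold in.

#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
	struct ibv_device **devs = ibv_get_device_list(NULL);
	if (!devs || !devs[0])
		return 1;

	struct ibv_context *ctx = ibv_open_device(devs[0]);
	if (!ctx)
		return 1;

	struct ibv_pd *pd = ibv_alloc_pd(ctx);
	struct ibv_cq *cq = ibv_create_cq(ctx, 16, NULL, NULL, 0);
	if (!pd || !cq)
		return 1;

	struct ibv_qp_init_attr attr = {
		.send_cq = cq,
		.recv_cq = cq,
		.qp_type = IBV_QPT_RC,
		.cap = {
			.max_send_wr = 16,
			.max_recv_wr = 16,
			.max_send_sge = 4,
			.max_recv_sge = 4,
			.max_inline_data = 512,	/* larger than 4 SGEs' worth */
		},
	};

	struct ibv_qp *qp = ibv_create_qp(pd, &attr);
	if (!qp)
		return 1;

	/* with the write-back in place, both caps reflect the shared WQE buffer */
	printf("max_send_sge=%u max_inline_data=%u\n",
	       attr.cap.max_send_sge, attr.cap.max_inline_data);

	ibv_destroy_qp(qp);
	ibv_destroy_cq(cq);
	ibv_dealloc_pd(pd);
	ibv_close_device(ctx);
	ibv_free_device_list(devs);
	return 0;
}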