Message ID | 2035e9eca59810687d9b53d7d6e603b765729b1f.1484075557.git.aditr@vmware.com
---|---
State | Changes Requested
On Tue, Jan 10, 2017 at 11:15:41AM -0800, Adit Ranadive wrote:
> Moved the header page count to a macro [1]. In a future patch we will
> separate out the allocation for the header page. Thanks Yuval.
> Also, clear out the alloc_ucontext user response [2]. Thanks Dan.

Though I appreciate the above acknowledgment I'm not sure it can be part of
the commit message.

>
> [1] - http://marc.info/?l=linux-rdma&m=148101146228847&w=2
> [2] - http://marc.info/?l=linux-rdma&m=148351229926333&w=2

Same here.

>
> Fixes: 29c8d9eba550 ("IB: Add vmw_pvrdma driver")
> Reported-by: Yuval Shaia <yuval.shaia@oracle.com>
> Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
> Signed-off-by: Adit Ranadive <aditr@vmware.com>
> Reviewed-by: Aditya Sarwade <asarwade@vmware.com>
> ---
>  drivers/infiniband/hw/vmw_pvrdma/pvrdma.h       | 1 +
>  drivers/infiniband/hw/vmw_pvrdma/pvrdma_qp.c    | 9 +++++----
>  drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.c | 2 +-
>  3 files changed, 7 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/infiniband/hw/vmw_pvrdma/pvrdma.h b/drivers/infiniband/hw/vmw_pvrdma/pvrdma.h
> index ee6a941..5dada5a 100644
> --- a/drivers/infiniband/hw/vmw_pvrdma/pvrdma.h
> +++ b/drivers/infiniband/hw/vmw_pvrdma/pvrdma.h
> @@ -70,6 +70,7 @@
>  #define PCI_DEVICE_ID_VMWARE_PVRDMA	0x0820
>
>  #define PVRDMA_NUM_RING_PAGES		4
> +#define PVRDMA_QP_NUM_HEADER_PAGES	1
>
>  struct pvrdma_dev;
>
> diff --git a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_qp.c b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_qp.c
> index 765bd32..3e23425 100644
> --- a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_qp.c
> +++ b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_qp.c
> @@ -170,8 +170,9 @@ static int pvrdma_set_sq_size(struct pvrdma_dev *dev, struct ib_qp_cap *req_cap,
>  					     sizeof(struct pvrdma_sge) *
>  					     qp->sq.max_sg);
>  	/* Note: one extra page for the header. */
> -	qp->npages_send = 1 + (qp->sq.wqe_cnt * qp->sq.wqe_size +
> -			       PAGE_SIZE - 1) / PAGE_SIZE;
> +	qp->npages_send = PVRDMA_QP_NUM_HEADER_PAGES +
> +			  (qp->sq.wqe_cnt * qp->sq.wqe_size + PAGE_SIZE - 1) /
> +			  PAGE_SIZE;
>
>  	return 0;
>  }
> @@ -289,7 +290,7 @@ struct ib_qp *pvrdma_create_qp(struct ib_pd *pd,
>  		qp->npages = qp->npages_send + qp->npages_recv;
>
>  		/* Skip header page. */
> -		qp->sq.offset = PAGE_SIZE;
> +		qp->sq.offset = PVRDMA_QP_NUM_HEADER_PAGES * PAGE_SIZE;
>
>  		/* Recv queue pages are after send pages. */
>  		qp->rq.offset = qp->npages_send * PAGE_SIZE;
> @@ -342,7 +343,7 @@ struct ib_qp *pvrdma_create_qp(struct ib_pd *pd,
>  	cmd->qp_type = ib_qp_type_to_pvrdma(init_attr->qp_type);
>  	cmd->access_flags = IB_ACCESS_LOCAL_WRITE;
>  	cmd->total_chunks = qp->npages;
> -	cmd->send_chunks = qp->npages_send - 1;
> +	cmd->send_chunks = qp->npages_send - PVRDMA_QP_NUM_HEADER_PAGES;
>  	cmd->pdir_dma = qp->pdir.dir_dma;
>
>  	dev_dbg(&dev->pdev->dev, "create queuepair with %d, %d, %d, %d\n",
> diff --git a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.c b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.c
> index 5489137..c2aa526 100644
> --- a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.c
> +++ b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.c
> @@ -306,7 +306,7 @@ struct ib_ucontext *pvrdma_alloc_ucontext(struct ib_device *ibdev,
>  	union pvrdma_cmd_resp rsp;
>  	struct pvrdma_cmd_create_uc *cmd = &req.create_uc;
>  	struct pvrdma_cmd_create_uc_resp *resp = &rsp.create_uc_resp;
> -	struct pvrdma_alloc_ucontext_resp uresp;
> +	struct pvrdma_alloc_ucontext_resp uresp = {0};
>  	int ret;
>  	void *ptr;

Reviewed-by: Yuval Shaia <yuval.shaia@oracle.com>

> --
> 2.7.4
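For readers skimming the diff, the arithmetic being given a name is worth seeing in isolation: (wqe_cnt * wqe_size + PAGE_SIZE - 1) / PAGE_SIZE is integer ceiling division (the same result as the kernel's DIV_ROUND_UP() macro), the header page count is added on top via the new PVRDMA_QP_NUM_HEADER_PAGES macro, and the send/recv rings then sit behind the header page(s) in the page directory. Below is a minimal standalone userspace sketch, not driver code; PAGE_SIZE is assumed to be 4096 and the WQE values are made-up examples.

#include <stdio.h>

#define PAGE_SIZE			4096u	/* assumed for illustration */
#define PVRDMA_QP_NUM_HEADER_PAGES	1u

int main(void)
{
	unsigned int wqe_cnt = 64, wqe_size = 128;	/* example values */

	/*
	 * (n + PAGE_SIZE - 1) / PAGE_SIZE rounds n up to whole pages;
	 * the header page(s) are counted separately on top.
	 */
	unsigned int npages_send = PVRDMA_QP_NUM_HEADER_PAGES +
			(wqe_cnt * wqe_size + PAGE_SIZE - 1) / PAGE_SIZE;

	/* Layout: header page(s) first, then SQ pages, then RQ pages. */
	unsigned int sq_offset = PVRDMA_QP_NUM_HEADER_PAGES * PAGE_SIZE;
	unsigned int rq_offset = npages_send * PAGE_SIZE;

	/* 64 * 128 = 8192 bytes = 2 pages, plus 1 header page = 3. */
	printf("npages_send=%u sq_offset=%u rq_offset=%u\n",
	       npages_send, sq_offset, rq_offset);
	return 0;
}

Giving the constant a name means the page accounting in pvrdma_set_sq_size(), the sq/rq offsets, and send_chunks can no longer silently drift out of sync with each other, which is the point of the change.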
On Wed, Jan 11, 2017 at 09:09:53AM +0200, Yuval Shaia wrote:
> On Tue, Jan 10, 2017 at 11:15:41AM -0800, Adit Ranadive wrote:
> > Moved the header page count to a macro [1]. In a future patch we will
> > separate out the allocation for the header page. Thanks Yuval.
> > Also, clear out the alloc_ucontext user response [2]. Thanks Dan.
>
> Though I appreciate the above acknowledgment I'm not sure it can be part of
> the commit message.

Okay.

Doug, if possible please update the commit message as follows:
--
Moved the header page count to a macro. Also, clear out the alloc_ucontext
user response.
--

If you prefer I can send a new series with the updated message.

Thanks,
Adit
On Wed, Jan 11, 2017 at 02:27:31PM -0800, Adit Ranadive wrote:
> On Wed, Jan 11, 2017 at 09:09:53AM +0200, Yuval Shaia wrote:
> > On Tue, Jan 10, 2017 at 11:15:41AM -0800, Adit Ranadive wrote:
> > > Moved the header page count to a macro [1]. In a future patch we will
> > > separate out the allocation for the header page. Thanks Yuval.
> > > Also, clear out the alloc_ucontext user response [2]. Thanks Dan.
> >
> > Though I appreciate the above acknowledgment I'm not sure it can be part of
> > the commit message.
>
> Okay.
>
> Doug, if possible please update the commit message as follows:
> --
> Moved the header page count to a macro. Also, clear out the alloc_ucontext
> user response.
> --

With this commit message fix:
Reviewed-by: Yuval Shaia <yuval.shaia@oracle.com>

> If you prefer I can send a new series with the updated message.
>
> Thanks,
> Adit
diff --git a/drivers/infiniband/hw/vmw_pvrdma/pvrdma.h b/drivers/infiniband/hw/vmw_pvrdma/pvrdma.h
index ee6a941..5dada5a 100644
--- a/drivers/infiniband/hw/vmw_pvrdma/pvrdma.h
+++ b/drivers/infiniband/hw/vmw_pvrdma/pvrdma.h
@@ -70,6 +70,7 @@
 #define PCI_DEVICE_ID_VMWARE_PVRDMA	0x0820
 
 #define PVRDMA_NUM_RING_PAGES		4
+#define PVRDMA_QP_NUM_HEADER_PAGES	1
 
 struct pvrdma_dev;
 
diff --git a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_qp.c b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_qp.c
index 765bd32..3e23425 100644
--- a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_qp.c
+++ b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_qp.c
@@ -170,8 +170,9 @@ static int pvrdma_set_sq_size(struct pvrdma_dev *dev, struct ib_qp_cap *req_cap,
 					     sizeof(struct pvrdma_sge) *
 					     qp->sq.max_sg);
 	/* Note: one extra page for the header. */
-	qp->npages_send = 1 + (qp->sq.wqe_cnt * qp->sq.wqe_size +
-			       PAGE_SIZE - 1) / PAGE_SIZE;
+	qp->npages_send = PVRDMA_QP_NUM_HEADER_PAGES +
+			  (qp->sq.wqe_cnt * qp->sq.wqe_size + PAGE_SIZE - 1) /
+			  PAGE_SIZE;
 
 	return 0;
 }
@@ -289,7 +290,7 @@ struct ib_qp *pvrdma_create_qp(struct ib_pd *pd,
 		qp->npages = qp->npages_send + qp->npages_recv;
 
 		/* Skip header page. */
-		qp->sq.offset = PAGE_SIZE;
+		qp->sq.offset = PVRDMA_QP_NUM_HEADER_PAGES * PAGE_SIZE;
 
 		/* Recv queue pages are after send pages. */
 		qp->rq.offset = qp->npages_send * PAGE_SIZE;
@@ -342,7 +343,7 @@ struct ib_qp *pvrdma_create_qp(struct ib_pd *pd,
 	cmd->qp_type = ib_qp_type_to_pvrdma(init_attr->qp_type);
 	cmd->access_flags = IB_ACCESS_LOCAL_WRITE;
 	cmd->total_chunks = qp->npages;
-	cmd->send_chunks = qp->npages_send - 1;
+	cmd->send_chunks = qp->npages_send - PVRDMA_QP_NUM_HEADER_PAGES;
 	cmd->pdir_dma = qp->pdir.dir_dma;
 
 	dev_dbg(&dev->pdev->dev, "create queuepair with %d, %d, %d, %d\n",
diff --git a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.c b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.c
index 5489137..c2aa526 100644
--- a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.c
+++ b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.c
@@ -306,7 +306,7 @@ struct ib_ucontext *pvrdma_alloc_ucontext(struct ib_device *ibdev,
 	union pvrdma_cmd_resp rsp;
 	struct pvrdma_cmd_create_uc *cmd = &req.create_uc;
 	struct pvrdma_cmd_create_uc_resp *resp = &rsp.create_uc_resp;
-	struct pvrdma_alloc_ucontext_resp uresp;
+	struct pvrdma_alloc_ucontext_resp uresp = {0};
 	int ret;
 	void *ptr;
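The pvrdma_verbs.c hunk is a different kind of fix: uresp lives on the kernel stack and is later copied back to userspace (alloc_ucontext implementations typically do this via ib_copy_to_udata()), so any field the driver does not explicitly set would otherwise ship uninitialized stack bytes to the caller. Below is a standalone userspace sketch of the pattern; the struct layout and copy_to_user_stub() are illustrative stand-ins, not the driver's actual definitions.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Illustrative stand-in for the driver's response struct. */
struct fake_ucontext_resp {
	uint32_t qp_tab_size;	/* the one field the code fills in */
	uint32_t reserved;	/* never written explicitly */
};

/* Stand-in for ib_copy_to_udata(): copies the bytes out verbatim. */
static void copy_to_user_stub(void *dst, const void *src, size_t len)
{
	memcpy(dst, src, len);
}

int main(void)
{
	unsigned char out[sizeof(struct fake_ucontext_resp)];
	struct fake_ucontext_resp copied;

	/*
	 * With a bare `struct fake_ucontext_resp uresp;` declaration,
	 * `reserved` would carry whatever happened to be on the stack.
	 * The `= {0}` initializer zeroes every member first; a full
	 * memset(&uresp, 0, sizeof(uresp)) would additionally guarantee
	 * that any padding bytes are cleared too.
	 */
	struct fake_ucontext_resp uresp = {0};

	uresp.qp_tab_size = 64;
	copy_to_user_stub(out, &uresp, sizeof(uresp));

	memcpy(&copied, out, sizeof(copied));
	/* reserved is now deterministically 0 rather than stack garbage. */
	printf("qp_tab_size=%u reserved=%u\n",
	       (unsigned)copied.qp_tab_size, (unsigned)copied.reserved);
	return 0;
}

This is the classic pattern behind kernel info-leak reports: the fix costs one initializer and removes an entire class of accidental disclosure as fields are added to the response struct later.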