diff mbox

nvmet-rdma: Fix missing dma sync to nvme data structures

Message ID 1484412337-10860-1-git-send-email-parav@mellanox.com (mailing list archive)
State Not Applicable

Commit Message

Parav Pandit Jan. 14, 2017, 4:45 p.m. UTC
This patch performs DMA sync operations on the nvme_command,
inline data page(s) and nvme_completion.

The nvme_command and write command inline data are synced
(a) on receiving a recv queue completion, for CPU access;
(b) before posting the recv WQE back to the RDMA adapter, for device access.

The nvme_completion is synced
(a) on receiving the send completion for the nvme_completion, for CPU access;
(b) before posting the send WQE to the RDMA adapter, for device access.

Signed-off-by: Parav Pandit <parav@mellanox.com>
Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
---
 drivers/nvme/target/rdma.c | 26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)

Comments

Sagi Grimberg Jan. 14, 2017, 9:07 p.m. UTC | #1
Looks good,

Reviewed-by: Sagi Grimberg <sagi@grimberg.me>

When this is applied, I will add a stable 4.8+ tag.
--
To unsubscribe from this list: send the line "unsubscribe linux-rdma" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Christoph Hellwig Jan. 16, 2017, 3:31 p.m. UTC | #2
> +++ b/drivers/nvme/target/rdma.c
> @@ -438,6 +438,14 @@ static int nvmet_rdma_post_recv(struct nvmet_rdma_device *ndev,
>  {
>  	struct ib_recv_wr *bad_wr;
>  
> +	ib_dma_sync_single_for_device(ndev->device,
> +			cmd->sge[0].addr, sizeof(*cmd->nvme_cmd),
> +			DMA_FROM_DEVICE);
> +
> +	if (cmd->sge[1].addr)

0 can be a valid address returned from dma_map_single on some
architectures. 
Parav Pandit Jan. 16, 2017, 5:18 p.m. UTC | #3
> -----Original Message-----
> From: Christoph Hellwig [mailto:hch@lst.de]
> Sent: Monday, January 16, 2017 9:31 AM
> To: Parav Pandit <parav@mellanox.com>
> Cc: hch@lst.de; sagi@grimberg.me; linux-nvme@lists.infradead.org; linux-
> rdma@vger.kernel.org; dledford@redhat.com
> Subject: Re: [PATCH] nvmet-rdma: Fix missing dma sync to nvme data
> structures
> 
> > +++ b/drivers/nvme/target/rdma.c
> > @@ -438,6 +438,14 @@ static int nvmet_rdma_post_recv(struct
> > nvmet_rdma_device *ndev,  {
> >  	struct ib_recv_wr *bad_wr;
> >
> > +	ib_dma_sync_single_for_device(ndev->device,
> > +			cmd->sge[0].addr, sizeof(*cmd->nvme_cmd),
> > +			DMA_FROM_DEVICE);
> > +
> > +	if (cmd->sge[1].addr)
> 
> 0 can be a valid address returned from dma_map_single on some
> architectures.

I see. I will change it to check for a non-zero length instead of a non-zero address.

Christoph Hellwig Jan. 16, 2017, 5:30 p.m. UTC | #4
On Mon, Jan 16, 2017 at 05:18:15PM +0000, Parav Pandit wrote:
> I see. I will change it to check for the non-zero length instead of non-zero address.

Yes, that should work as well.


Patch

diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
index 6c1c368..da3d553 100644
--- a/drivers/nvme/target/rdma.c
+++ b/drivers/nvme/target/rdma.c
@@ -438,6 +438,14 @@  static int nvmet_rdma_post_recv(struct nvmet_rdma_device *ndev,
 {
 	struct ib_recv_wr *bad_wr;
 
+	ib_dma_sync_single_for_device(ndev->device,
+			cmd->sge[0].addr, sizeof(*cmd->nvme_cmd),
+			DMA_FROM_DEVICE);
+
+	if (cmd->sge[1].addr)
+		ib_dma_sync_single_for_device(ndev->device,
+				cmd->sge[1].addr, NVMET_RDMA_INLINE_DATA_SIZE,
+				DMA_FROM_DEVICE);
 	if (ndev->srq)
 		return ib_post_srq_recv(ndev->srq, &cmd->wr, &bad_wr);
 	return ib_post_recv(cmd->queue->cm_id->qp, &cmd->wr, &bad_wr);
@@ -507,6 +515,10 @@  static void nvmet_rdma_send_done(struct ib_cq *cq, struct ib_wc *wc)
 	struct nvmet_rdma_rsp *rsp =
 		container_of(wc->wr_cqe, struct nvmet_rdma_rsp, send_cqe);
 
+	ib_dma_sync_single_for_cpu(rsp->queue->dev->device,
+			rsp->send_sge.addr, sizeof(*rsp->req.rsp),
+			DMA_TO_DEVICE);
+
 	nvmet_rdma_release_rsp(rsp);
 
 	if (unlikely(wc->status != IB_WC_SUCCESS &&
@@ -538,6 +550,11 @@  static void nvmet_rdma_queue_response(struct nvmet_req *req)
 		first_wr = &rsp->send_wr;
 
 	nvmet_rdma_post_recv(rsp->queue->dev, rsp->cmd);
+
+	ib_dma_sync_single_for_device(rsp->queue->dev->device,
+			rsp->send_sge.addr, sizeof(*rsp->req.rsp),
+			DMA_TO_DEVICE);
+
 	if (ib_post_send(cm_id->qp, first_wr, &bad_wr)) {
 		pr_err("sending cmd response failed\n");
 		nvmet_rdma_release_rsp(rsp);
@@ -698,6 +715,15 @@  static void nvmet_rdma_handle_command(struct nvmet_rdma_queue *queue,
 	cmd->n_rdma = 0;
 	cmd->req.port = queue->port;
 
+	ib_dma_sync_single_for_cpu(queue->dev->device, cmd->cmd->sge[0].addr,
+			sizeof(*cmd->cmd->nvme_cmd), DMA_FROM_DEVICE);
+
+	if (cmd->cmd->sge[1].addr)
+		ib_dma_sync_single_for_cpu(queue->dev->device,
+				cmd->cmd->sge[1].addr,
+				NVMET_RDMA_INLINE_DATA_SIZE,
+				DMA_FROM_DEVICE);
+
 	if (!nvmet_req_init(&cmd->req, &queue->nvme_cq,
 			&queue->nvme_sq, &nvmet_rdma_ops))
 		return;