From patchwork Sat Jan 14 16:45:37 2017
X-Patchwork-Submitter: Parav Pandit
X-Patchwork-Id: 9517063
From: Parav Pandit <parav@mellanox.com>
To: hch@lst.de, sagi@grimberg.me, linux-nvme@lists.infradead.org,
    linux-rdma@vger.kernel.org, dledford@redhat.com
Cc: parav@mellanox.com
Subject: [PATCH] nvmet-rdma: Fix missing dma sync to nvme data structures
Date: Sat, 14 Jan 2017 10:45:37 -0600
Message-Id: <1484412337-10860-1-git-send-email-parav@mellanox.com>
X-Mailing-List: linux-rdma@vger.kernel.org

This patch adds the missing dma sync operations for the nvme_command,
the write command inline data page(s), and the nvme_completion.

The nvme_command and the write command inline data are synced
(a) for cpu access when the recv queue completion is received, and
(b) for device access before the recv wqe is posted back to the rdma
    adapter.

The nvme_completion is synced
(a) for cpu access when the send completion for the nvme_completion
    is received, and
(b) for device access before the send wqe is posted to the rdma
    adapter.
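For readers unfamiliar with the streaming DMA rules involved: a buffer
mapped with ib_dma_map_single() is owned by either the CPU or the
device, and ownership must be handed over explicitly with a sync call
before the other side touches it. A minimal sketch of that pairing
follows; demo_buf and the demo_* helpers are hypothetical names used
for illustration only, not code from drivers/nvme/target/rdma.c:

	#include <rdma/ib_verbs.h>

	/* Hypothetical wrapper around one mapped buffer. */
	struct demo_buf {
		struct ib_device *dev;
		u64 dma_addr;	/* returned by ib_dma_map_single() */
		size_t len;
	};

	/* Recv path: the HCA wrote the buffer; give it to the CPU. */
	static void demo_recv_done(struct demo_buf *buf)
	{
		ib_dma_sync_single_for_cpu(buf->dev, buf->dma_addr,
					   buf->len, DMA_FROM_DEVICE);
		/* The CPU may now parse the command and inline data. */
	}

	/* Recv path: hand the buffer back before reposting the recv WQE. */
	static void demo_repost_recv(struct demo_buf *buf)
	{
		ib_dma_sync_single_for_device(buf->dev, buf->dma_addr,
					      buf->len, DMA_FROM_DEVICE);
		/* Only now is it safe to post the recv work request. */
	}

	/* Send path: the CPU filled in the completion; flush it out. */
	static void demo_post_send(struct demo_buf *buf)
	{
		ib_dma_sync_single_for_device(buf->dev, buf->dma_addr,
					      buf->len, DMA_TO_DEVICE);
		/* Only now is it safe to post the send work request. */
	}

On cache-coherent platforms these syncs are typically no-ops, which is
why the missing calls can go unnoticed there; on non-coherent platforms,
skipping them lets the CPU or the HCA observe stale data.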
Signed-off-by: Parav Pandit <parav@mellanox.com>
Reviewed-by: Max Gurtovoy
Reviewed-by: Sagi Grimberg
---
 drivers/nvme/target/rdma.c | 26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)

diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
index 6c1c368..da3d553 100644
--- a/drivers/nvme/target/rdma.c
+++ b/drivers/nvme/target/rdma.c
@@ -438,6 +438,14 @@ static int nvmet_rdma_post_recv(struct nvmet_rdma_device *ndev,
 {
 	struct ib_recv_wr *bad_wr;
 
+	ib_dma_sync_single_for_device(ndev->device,
+		cmd->sge[0].addr, sizeof(*cmd->nvme_cmd),
+		DMA_FROM_DEVICE);
+
+	if (cmd->sge[1].addr)
+		ib_dma_sync_single_for_device(ndev->device,
+			cmd->sge[1].addr, NVMET_RDMA_INLINE_DATA_SIZE,
+			DMA_FROM_DEVICE);
 	if (ndev->srq)
 		return ib_post_srq_recv(ndev->srq, &cmd->wr, &bad_wr);
 	return ib_post_recv(cmd->queue->cm_id->qp, &cmd->wr, &bad_wr);
@@ -507,6 +515,10 @@ static void nvmet_rdma_send_done(struct ib_cq *cq, struct ib_wc *wc)
 	struct nvmet_rdma_rsp *rsp =
 		container_of(wc->wr_cqe, struct nvmet_rdma_rsp, send_cqe);
 
+	ib_dma_sync_single_for_cpu(rsp->queue->dev->device,
+		rsp->send_sge.addr, sizeof(*rsp->req.rsp),
+		DMA_TO_DEVICE);
+
 	nvmet_rdma_release_rsp(rsp);
 
 	if (unlikely(wc->status != IB_WC_SUCCESS &&
@@ -538,6 +550,11 @@ static void nvmet_rdma_queue_response(struct nvmet_req *req)
 		first_wr = &rsp->send_wr;
 
 	nvmet_rdma_post_recv(rsp->queue->dev, rsp->cmd);
+
+	ib_dma_sync_single_for_device(rsp->queue->dev->device,
+		rsp->send_sge.addr, sizeof(*rsp->req.rsp),
+		DMA_TO_DEVICE);
+
 	if (ib_post_send(cm_id->qp, first_wr, &bad_wr)) {
 		pr_err("sending cmd response failed\n");
 		nvmet_rdma_release_rsp(rsp);
@@ -698,6 +715,15 @@ static void nvmet_rdma_handle_command(struct nvmet_rdma_queue *queue,
 	cmd->n_rdma = 0;
 	cmd->req.port = queue->port;
 
+	ib_dma_sync_single_for_cpu(queue->dev->device, cmd->cmd->sge[0].addr,
+		sizeof(*cmd->cmd->nvme_cmd), DMA_FROM_DEVICE);
+
+	if (cmd->cmd->sge[1].addr)
+		ib_dma_sync_single_for_cpu(queue->dev->device,
+			cmd->cmd->sge[1].addr,
+			NVMET_RDMA_INLINE_DATA_SIZE,
+			DMA_FROM_DEVICE);
+
 	if (!nvmet_req_init(&cmd->req, &queue->nvme_cq,
 			&queue->nvme_sq, &nvmet_rdma_ops))
 		return;