From patchwork Thu Jan 12 22:45:09 2017
X-Patchwork-Id: 9514391
From: Parav Pandit <parav@mellanox.com>
To: hch@lst.de, sagi@grimberg.me, linux-nvme@lists.infradead.org,
    linux-rdma@vger.kernel.org, dledford@redhat.com
Cc: parav@mellanox.com
Subject: [PATCH] nvmet-rdma: Fix missing dma sync to nvme data structures
Date: Thu, 12 Jan 2017 16:45:09 -0600
Message-Id: <1484261109-3316-1-git-send-email-parav@mellanox.com>

This patch performs dma sync operations on nvme_command, inline page(s),
and nvme_completion.

nvme_command and write cmd inline data are synced:
(a) on receiving the recv queue completion, for cpu access;
(b) before posting the recv wqe back to the rdma adapter, for device access.

nvme_completion is synced:
(a) on receiving the send completion for nvme_completion, for cpu access;
(b) before posting the send wqe to the rdma adapter, for device access.

Pushing this patch through the linux-rdma tree as it is more relevant
with Bart's changes for dma_map_ops of [1]. A minimal sketch of this
sync pairing follows the patch.
[1] https://patchwork.kernel.org/patch/9514085/

Signed-off-by: Parav Pandit <parav@mellanox.com>
Reviewed-by: Max Gurtovoy
---
 drivers/nvme/target/rdma.c | 27 +++++++++++++++++++++++++++
 1 file changed, 27 insertions(+)

diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
index 8c3760a..c6468b3 100644
--- a/drivers/nvme/target/rdma.c
+++ b/drivers/nvme/target/rdma.c
@@ -438,6 +438,14 @@ static int nvmet_rdma_post_recv(struct nvmet_rdma_device *ndev,
 {
 	struct ib_recv_wr *bad_wr;
 
+	dma_sync_single_for_device(ndev->device->dma_device,
+		cmd->sge[0].addr, sizeof(*cmd->nvme_cmd),
+		DMA_FROM_DEVICE);
+
+	if (cmd->sge[1].addr)
+		dma_sync_single_for_device(ndev->device->dma_device,
+			cmd->sge[1].addr, NVMET_RDMA_INLINE_DATA_SIZE,
+			DMA_FROM_DEVICE);
 	if (ndev->srq)
 		return ib_post_srq_recv(ndev->srq, &cmd->wr, &bad_wr);
 	return ib_post_recv(cmd->queue->cm_id->qp, &cmd->wr, &bad_wr);
@@ -507,6 +515,10 @@ static void nvmet_rdma_send_done(struct ib_cq *cq, struct ib_wc *wc)
 	struct nvmet_rdma_rsp *rsp =
 		container_of(wc->wr_cqe, struct nvmet_rdma_rsp, send_cqe);
 
+	dma_sync_single_for_cpu(rsp->queue->dev->device->dma_device,
+		rsp->send_sge.addr, sizeof(*rsp->req.rsp),
+		DMA_TO_DEVICE);
+
 	nvmet_rdma_release_rsp(rsp);
 
 	if (unlikely(wc->status != IB_WC_SUCCESS &&
@@ -538,6 +550,11 @@ static void nvmet_rdma_queue_response(struct nvmet_req *req)
 		first_wr = &rsp->send_wr;
 
 	nvmet_rdma_post_recv(rsp->queue->dev, rsp->cmd);
+
+	dma_sync_single_for_device(rsp->queue->dev->device->dma_device,
+		rsp->send_sge.addr, sizeof(*rsp->req.rsp),
+		DMA_TO_DEVICE);
+
 	if (ib_post_send(cm_id->qp, first_wr, &bad_wr)) {
 		pr_err("sending cmd response failed\n");
 		nvmet_rdma_release_rsp(rsp);
@@ -698,6 +715,16 @@ static void nvmet_rdma_handle_command(struct nvmet_rdma_queue *queue,
 	cmd->n_rdma = 0;
 	cmd->req.port = queue->port;
 
+	dma_sync_single_for_cpu(queue->dev->device->dma_device,
+		cmd->cmd->sge[0].addr, sizeof(*cmd->cmd->nvme_cmd),
+		DMA_FROM_DEVICE);
+
+	if (cmd->cmd->sge[1].addr)
+		dma_sync_single_for_cpu(queue->dev->device->dma_device,
+			cmd->cmd->sge[1].addr,
+			NVMET_RDMA_INLINE_DATA_SIZE,
+			DMA_FROM_DEVICE);
+
 	if (!nvmet_req_init(&cmd->req, &queue->nvme_cq, &queue->nvme_sq,
 			&nvmet_rdma_ops))
 		return;
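
For reference, a minimal standalone sketch of the sync pairing the patch
relies on, for a streaming mapping that the device writes and the CPU
reads. The 'dev', 'buf', and 'len' names here are illustrative only, not
taken from the driver:

#include <linux/dma-mapping.h>

/* Map once at setup time; ownership of the buffer starts with the device. */
static dma_addr_t example_map(struct device *dev, void *buf, size_t len)
{
	return dma_map_single(dev, buf, len, DMA_FROM_DEVICE);
}

static void example_handle_recv(struct device *dev, dma_addr_t addr,
				void *buf, size_t len)
{
	/* Transfer ownership to the CPU before reading what the device wrote. */
	dma_sync_single_for_cpu(dev, addr, len, DMA_FROM_DEVICE);

	/* ... parse buf here, e.g. the received nvme_command ... */

	/* Hand ownership back to the device before reposting the buffer. */
	dma_sync_single_for_device(dev, addr, len, DMA_FROM_DEVICE);
}

On cache-coherent platforms these sync calls are essentially no-ops, which
is why the missing syncs go unnoticed there; on non-coherent ones, skipping
them can leave the CPU reading stale cache lines or the device seeing stale
buffer contents.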