From patchwork Tue Oct 31 08:55:22 2017
X-Patchwork-Submitter: Sagi Grimberg
X-Patchwork-Id: 10033751
From: Sagi Grimberg <sagi@grimberg.me>
To: linux-rdma@vger.kernel.org, linux-nvme@lists.infradead.org
Cc: Christoph Hellwig, Jason Gunthorpe, idanb@mellanox.com
Subject: [PATCH 3/3] nvme-rdma: wait for local invalidation before completing a request
Date: Tue, 31 Oct 2017 10:55:22 +0200
Message-Id: <1509440122-1190-4-git-send-email-sagi@grimberg.me>
In-Reply-To: <1509440122-1190-1-git-send-email-sagi@grimberg.me>
References: <1509440122-1190-1-git-send-email-sagi@grimberg.me>

We must not complete a request before the host memory region is
invalidated; until then the target still holds remote access rights to
that memory. Luckily the protocol supports Send With Invalidate, so we
usually do not need to issue the invalidation ourselves, but in case
the target did not invalidate the memory region for us, we must wait
for a local invalidation to complete before unmapping host memory and
completing the I/O.
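To illustrate the rule this patch enforces, here is a minimal
stand-alone C model (hypothetical names, not the kernel code): a
request may only be ended once all three events have been observed --
send completion, NVMe response, and MR invalidation (remote via Send
With Invalidate, or local via a signaled LOCAL_INV work request) --
and whichever completion handler observes the last event ends it.

#include <stdbool.h>
#include <stdio.h>

struct model_req {
	bool send_completed;	/* send WR completion seen */
	bool resp_completed;	/* NVMe response (CQE) received */
	bool need_inval;	/* MR still awaiting invalidation */
};

/* Called from each completion handler; ends the request only once
 * every gate is satisfied. */
static void maybe_end_request(struct model_req *req)
{
	if (req->send_completed && req->resp_completed && !req->need_inval)
		printf("request completed\n");
}

int main(void)
{
	struct model_req req = { .need_inval = true };

	req.resp_completed = true;	/* response arrives first... */
	maybe_end_request(&req);	/* gated: MR not yet invalidated */

	req.send_completed = true;	/* ...then the send completion... */
	maybe_end_request(&req);	/* still gated */

	req.need_inval = false;		/* ...finally LOCAL_INV completes */
	maybe_end_request(&req);	/* ends the request here */
	return 0;
}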
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
---
 drivers/nvme/host/rdma.c | 42 +++++++++++++++++++++++++----------------
 1 file changed, 25 insertions(+), 17 deletions(-)

diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index ae1fb66358f7..b7e0fb0fe913 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -818,8 +818,19 @@ static void nvme_rdma_memreg_done(struct ib_cq *cq, struct ib_wc *wc)
 
 static void nvme_rdma_inv_rkey_done(struct ib_cq *cq, struct ib_wc *wc)
 {
-	if (unlikely(wc->status != IB_WC_SUCCESS))
+	struct nvme_rdma_request *req =
+		container_of(wc->wr_cqe, struct nvme_rdma_request, reg_cqe);
+	struct request *rq = blk_mq_rq_from_pdu(req);
+
+	if (unlikely(wc->status != IB_WC_SUCCESS)) {
 		nvme_rdma_wr_error(cq, wc, "LOCAL_INV");
+		return;
+	}
+
+	req->mr->need_inval = false;
+	if (req->resp_completed && req->send_completed)
+		nvme_end_request(rq, req->cqe.status, req->cqe.result);
+
 }
 
 static int nvme_rdma_inv_rkey(struct nvme_rdma_queue *queue,
@@ -830,7 +841,7 @@ static int nvme_rdma_inv_rkey(struct nvme_rdma_queue *queue,
 		.opcode = IB_WR_LOCAL_INV,
 		.next = NULL,
 		.num_sge = 0,
-		.send_flags = 0,
+		.send_flags = IB_SEND_SIGNALED,
 		.ex.invalidate_rkey = req->mr->rkey,
 	};
 
@@ -844,24 +855,12 @@ static void nvme_rdma_unmap_data(struct nvme_rdma_queue *queue,
 		struct request *rq)
 {
 	struct nvme_rdma_request *req = blk_mq_rq_to_pdu(rq);
-	struct nvme_rdma_ctrl *ctrl = queue->ctrl;
 	struct nvme_rdma_device *dev = queue->device;
 	struct ib_device *ibdev = dev->dev;
-	int res;
 
 	if (!blk_rq_bytes(rq))
 		return;
 
-	if (req->mr->need_inval && test_bit(NVME_RDMA_Q_LIVE, &req->queue->flags)) {
-		res = nvme_rdma_inv_rkey(queue, req);
-		if (unlikely(res < 0)) {
-			dev_err(ctrl->ctrl.device,
-				"Queueing INV WR for rkey %#x failed (%d)\n",
-				req->mr->rkey, res);
-			nvmf_error_recovery(&queue->ctrl->ctrl);
-		}
-	}
-
 	ib_dma_unmap_sg(ibdev, req->sg_table.sgl,
 			req->nents, rq_data_dir(rq) == WRITE ? DMA_TO_DEVICE :
 			DMA_FROM_DEVICE);
@@ -1014,7 +1013,7 @@ static void nvme_rdma_send_done(struct ib_cq *cq, struct ib_wc *wc)
 	}
 
 	req->send_completed = true;
-	if (req->resp_completed)
+	if (req->resp_completed && !req->mr->need_inval)
 		nvme_end_request(rq, req->cqe.status, req->cqe.result);
 }
 
@@ -1139,10 +1138,19 @@ static int nvme_rdma_process_nvme_rsp(struct nvme_rdma_queue *queue,
 	req->resp_completed = true;
 
 	if ((wc->wc_flags & IB_WC_WITH_INVALIDATE) &&
-	    wc->ex.invalidate_rkey == req->mr->rkey)
+	    wc->ex.invalidate_rkey == req->mr->rkey) {
 		req->mr->need_inval = false;
+	} else if (req->mr->need_inval) {
+		ret = nvme_rdma_inv_rkey(queue, req);
+		if (unlikely(ret < 0)) {
+			dev_err(queue->ctrl->ctrl.device,
+				"Queueing INV WR for rkey %#x failed (%d)\n",
+				req->mr->rkey, ret);
+			nvmf_error_recovery(&queue->ctrl->ctrl);
+		}
+	}
 
-	if (req->send_completed)
+	if (req->send_completed && !req->mr->need_inval)
 		nvme_end_request(rq, req->cqe.status, req->cqe.result);
 
 	return ret;