From patchwork Mon Nov 20 10:41:54 2017
X-Patchwork-Submitter: Sagi Grimberg
X-Patchwork-Id: 10066273
From: Sagi Grimberg
To: linux-nvme@lists.infradead.org, linux-rdma@vger.kernel.org
Cc: Christoph Hellwig
Subject: [PATCH v3 1/3] nvme-rdma: don't suppress send completions
Date: Mon, 20 Nov 2017 12:41:54 +0200
Message-Id: <20171120104156.31344-2-sagi@grimberg.me>
X-Mailer: git-send-email 2.14.1
In-Reply-To: <20171120104156.31344-1-sagi@grimberg.me>
References: <20171120104156.31344-1-sagi@grimberg.me>

The entire completion suppression mechanism is currently broken, because the
HCA might retry a send operation (e.g. due to a dropped ack) after the NVMe
transaction has already completed. To handle this, signal all send
completions; the only exception is the async event command, which does not
race with anything.
Signed-off-by: Sagi Grimberg
---
 drivers/nvme/host/rdma.c | 46 +++++++++-------------------------------------
 1 file changed, 9 insertions(+), 37 deletions(-)

diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index fdd6659a09a0..85c98589a5e0 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -77,7 +77,6 @@ enum nvme_rdma_queue_flags {
 
 struct nvme_rdma_queue {
 	struct nvme_rdma_qe	*rsp_ring;
-	atomic_t		sig_count;
 	int			queue_size;
 	size_t			cmnd_capsule_len;
 	struct nvme_rdma_ctrl	*ctrl;
@@ -510,7 +509,6 @@ static int nvme_rdma_alloc_queue(struct nvme_rdma_ctrl *ctrl,
 
 	queue->cmnd_capsule_len = sizeof(struct nvme_command);
 	queue->queue_size = queue_size;
-	atomic_set(&queue->sig_count, 0);
 
 	queue->cm_id = rdma_create_id(&init_net, nvme_rdma_cm_handler, queue,
 			RDMA_PS_TCP, IB_QPT_RC);
@@ -1204,21 +1202,9 @@ static void nvme_rdma_send_done(struct ib_cq *cq, struct ib_wc *wc)
 		nvme_rdma_wr_error(cq, wc, "SEND");
 }
 
-/*
- * We want to signal completion at least every queue depth/2. This returns the
- * largest power of two that is not above half of (queue size + 1) to optimize
- * (avoid divisions).
- */
-static inline bool nvme_rdma_queue_sig_limit(struct nvme_rdma_queue *queue)
-{
-	int limit = 1 << ilog2((queue->queue_size + 1) / 2);
-
-	return (atomic_inc_return(&queue->sig_count) & (limit - 1)) == 0;
-}
-
 static int nvme_rdma_post_send(struct nvme_rdma_queue *queue,
 		struct nvme_rdma_qe *qe, struct ib_sge *sge, u32 num_sge,
-		struct ib_send_wr *first, bool flush)
+		struct ib_send_wr *first, bool signal)
 {
 	struct ib_send_wr wr, *bad_wr;
 	int ret;
@@ -1234,24 +1220,7 @@ static int nvme_rdma_post_send(struct nvme_rdma_queue *queue,
 	wr.sg_list    = sge;
 	wr.num_sge    = num_sge;
 	wr.opcode     = IB_WR_SEND;
-	wr.send_flags = 0;
-
-	/*
-	 * Unsignalled send completions are another giant desaster in the
-	 * IB Verbs spec: If we don't regularly post signalled sends
-	 * the send queue will fill up and only a QP reset will rescue us.
-	 * Would have been way to obvious to handle this in hardware or
-	 * at least the RDMA stack..
-	 *
-	 * Always signal the flushes. The magic request used for the flush
-	 * sequencer is not allocated in our driver's tagset and it's
-	 * triggered to be freed by blk_cleanup_queue(). So we need to
-	 * always mark it as signaled to ensure that the "wr_cqe", which is
-	 * embedded in request's payload, is not freed when __ib_process_cq()
-	 * calls wr_cqe->done().
-	 */
-	if (nvme_rdma_queue_sig_limit(queue) || flush)
-		wr.send_flags |= IB_SEND_SIGNALED;
+	wr.send_flags = signal ? IB_SEND_SIGNALED : 0;
 
 	if (first)
 		first->next = &wr;
@@ -1322,6 +1291,12 @@ static void nvme_rdma_submit_async_event(struct nvme_ctrl *arg)
 	ib_dma_sync_single_for_device(dev, sqe->dma, sizeof(*cmd),
 			DMA_TO_DEVICE);
 
+	/*
+	 * async events do not race with reply completions and don't
+	 * contain inline data, thus are safe to suppress. so in order
+	 * to avoid the complexity of detecting async event send completions
+	 * in the hot path we simply suppress their send completions.
+	 */
 	ret = nvme_rdma_post_send(queue, sqe, &sge, 1, NULL, false);
 	WARN_ON_ONCE(ret);
 }
@@ -1607,7 +1582,6 @@ static blk_status_t nvme_rdma_queue_rq(struct blk_mq_hw_ctx *hctx,
 	struct nvme_rdma_request *req = blk_mq_rq_to_pdu(rq);
 	struct nvme_rdma_qe *sqe = &req->sqe;
 	struct nvme_command *c = sqe->data;
-	bool flush = false;
 	struct ib_device *dev;
 	blk_status_t ret;
 	int err;
@@ -1639,10 +1613,8 @@ static blk_status_t nvme_rdma_queue_rq(struct blk_mq_hw_ctx *hctx,
 	ib_dma_sync_single_for_device(dev, sqe->dma,
 			sizeof(struct nvme_command), DMA_TO_DEVICE);
 
-	if (req_op(rq) == REQ_OP_FLUSH)
-		flush = true;
 	err = nvme_rdma_post_send(queue, sqe, req->sge, req->num_sge,
-			req->mr->need_inval ? &req->reg_wr.wr : NULL, flush);
+			req->mr->need_inval ? &req->reg_wr.wr : NULL, true);
 	if (unlikely(err)) {
 		nvme_rdma_unmap_data(queue, rq);
 		goto err;
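
To make the signaling model concrete, here is a minimal, illustrative sketch
(not part of the patch) of posting a single SEND work request with per-WR
completion signaling through the kernel verbs API, in the spirit of the
reworked nvme_rdma_post_send(); the function name example_post_send() and its
argument list are hypothetical:

#include <rdma/ib_verbs.h>

/*
 * Illustrative only: post one SEND WR and decide per WR whether the HCA
 * should generate a completion for it.  With IB_SEND_SIGNALED set, a CQE
 * is produced once the send (including any HCA-level retries) has really
 * finished, and only then may the buffers behind the SGE be reused.
 */
static int example_post_send(struct ib_qp *qp, struct ib_cqe *cqe,
		struct ib_sge *sge, u32 num_sge, bool signal)
{
	struct ib_send_wr wr = {}, *bad_wr;

	wr.wr_cqe     = cqe;	/* ties the completion back to its context */
	wr.sg_list    = sge;
	wr.num_sge    = num_sge;
	wr.opcode     = IB_WR_SEND;
	wr.send_flags = signal ? IB_SEND_SIGNALED : 0;

	return ib_post_send(qp, &wr, &bad_wr);
}

In these terms, the patch above effectively passes signal=true for every
command capsule and false only for the async event command, whose send
completion cannot race with the lifetime of a request.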