From patchwork Mon Nov 20 10:49:21 2017
Date: Mon, 20 Nov 2017 11:49:21 +0100
From: Christoph Hellwig
To: Sagi Grimberg
Cc: Christoph Hellwig, linux-rdma@vger.kernel.org,
	linux-nvme@lists.infradead.org, Max Gurtuvoy
Subject: Re: [PATCH v2 2/3] nvme-rdma: don't complete requests before a send
	work request has completed
Message-ID: <20171120104921.GA31309@lst.de>
In-Reply-To: <58bdc9c0-f98e-9d9f-f81e-fbed572f922e@grimberg.me>
References: <20171108100616.26605-1-sagi@grimberg.me>
	<20171108100616.26605-3-sagi@grimberg.me>
	<20171109092110.GB16966@lst.de>
	<0f368bc9-2e9f-4008-316c-46b85661a274@grimberg.me>
	<20171120083130.GC27552@lst.de>
	<384d8a51-aa2f-5954-c9fd-a0c88d7e5364@grimberg.me>
	<20171120084102.GA28456@lst.de>
	<58bdc9c0-f98e-9d9f-f81e-fbed572f922e@grimberg.me>
List-ID: linux-rdma@vger.kernel.org

Btw, I think we can avoid 2 atomic ops for the remote invalidation path
with a simple update like the one below:

---
diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index 5627d81735d2..2032cd8ad431 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -1141,7 +1141,6 @@ static int nvme_rdma_map_sg_fr(struct nvme_rdma_queue *queue,
 			IB_ACCESS_REMOTE_WRITE;
 
 	req->mr->need_inval = true;
-	atomic_inc(&req->ref);
 
 	sg->addr = cpu_to_le64(req->mr->iova);
 	put_unaligned_le24(req->mr->length, sg->length);
@@ -1328,10 +1327,9 @@ static int nvme_rdma_process_nvme_rsp(struct nvme_rdma_queue *queue,
 	req->cqe.status = cqe->status;
 	req->cqe.result = cqe->result;
 
-	if ((wc->wc_flags & IB_WC_WITH_INVALIDATE) &&
-	    wc->ex.invalidate_rkey == req->mr->rkey) {
-		atomic_dec(&req->ref);
-	} else if (req->mr->need_inval) {
+	if (req->mr->need_inval &&
+	    (!(wc->wc_flags & IB_WC_WITH_INVALIDATE) ||
+	     wc->ex.invalidate_rkey != req->mr->rkey)) {
 		ret = nvme_rdma_inv_rkey(queue, req);
 		if (unlikely(ret < 0)) {
 			dev_err(queue->ctrl->ctrl.device,
@@ -1339,12 +1337,12 @@
 				req->mr->rkey, ret);
 			nvme_rdma_error_recovery(queue->ctrl);
 		}
-	}
-
-	if (atomic_dec_and_test(&req->ref)) {
-		if (rq->tag == tag)
-			ret = 1;
-		nvme_end_request(rq, req->cqe.status, req->cqe.result);
+	} else {
+		if (atomic_dec_and_test(&req->ref)) {
+			if (rq->tag == tag)
+				ret = 1;
+			nvme_end_request(rq, req->cqe.status, req->cqe.result);
+		}
 	}
 
 	return ret;

--
To unsubscribe from this list: send the line "unsubscribe linux-rdma" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html