From patchwork Thu Jun 9 21:42:11 2016
X-Patchwork-Submitter: Steve Wise
X-Patchwork-Id: 9168241
From: "Steve Wise"
To: "'Sagi Grimberg'", "'Christoph Hellwig'"
Cc: "'Armen Baloyan'", "'Jay Freyensee'", "'Ming Lin'"
Subject: RE: [PATCH 4/5] nvmet-rdma: add a NVMe over Fabrics RDMA target driver
Date: Thu, 9 Jun 2016 16:42:11 -0500
Message-ID: <051801d1c297$c7d8a7d0$5789f770$@opengridcomputing.com>
References: <1465248215-18186-1-git-send-email-hch@lst.de>
 <1465248215-18186-5-git-send-email-hch@lst.de>
 <5756B75C.9000409@lightbits.io>
In-Reply-To: <5756B75C.9000409@lightbits.io>
X-Mailing-List: linux-rdma@vger.kernel.org

> > +
> > +static struct nvmet_rdma_queue *
> > +nvmet_rdma_alloc_queue(struct nvmet_rdma_device *ndev,
> > +		struct rdma_cm_id *cm_id,
> > +		struct rdma_cm_event *event)
> > +{
> > +	struct nvmet_rdma_queue *queue;
> > +	int ret;
> > +
> > +	queue = kzalloc(sizeof(*queue), GFP_KERNEL);
> > +	if (!queue) {
> > +		ret = NVME_RDMA_CM_NO_RSC;
> > +		goto out_reject;
> > +	}
> > +
> > +	ret = nvmet_sq_init(&queue->nvme_sq);
> > +	if (ret)
> > +		goto out_free_queue;
> > +
> > +	ret = nvmet_rdma_parse_cm_connect_req(&event->param.conn, queue);
> > +	if (ret)
> > +		goto out_destroy_sq;
> > +
> > +	/*
> > +	 * Schedules the actual release because calling rdma_destroy_id from
> > +	 * inside a CM callback would trigger a deadlock. (great API design..)
> > +	 */
> > +	INIT_WORK(&queue->release_work, nvmet_rdma_release_queue_work);
> > +	queue->dev = ndev;
> > +	queue->cm_id = cm_id;
> > +
> > +	spin_lock_init(&queue->state_lock);
> > +	queue->state = NVMET_RDMA_Q_CONNECTING;
> > +	INIT_LIST_HEAD(&queue->rsp_wait_list);
> > +	INIT_LIST_HEAD(&queue->rsp_wr_wait_list);
> > +	spin_lock_init(&queue->rsp_wr_wait_lock);
> > +	INIT_LIST_HEAD(&queue->free_rsps);
> > +	spin_lock_init(&queue->rsps_lock);
> > +
> > +	queue->idx = ida_simple_get(&nvmet_rdma_queue_ida, 0, 0, GFP_KERNEL);
> > +	if (queue->idx < 0) {
> > +		ret = NVME_RDMA_CM_NO_RSC;
> > +		goto out_free_queue;
> > +	}
> > +
> > +	ret = nvmet_rdma_alloc_rsps(queue);
> > +	if (ret) {
> > +		ret = NVME_RDMA_CM_NO_RSC;
> > +		goto out_ida_remove;
> > +	}
> > +
> > +	if (!ndev->srq) {
> > +		queue->cmds = nvmet_rdma_alloc_cmds(ndev,
> > +				queue->recv_queue_size,
> > +				!queue->host_qid);
> > +		if (IS_ERR(queue->cmds)) {
> > +			ret = NVME_RDMA_CM_NO_RSC;
> > +			goto out_free_cmds;
> > +		}
> > +	}
> > +

Should the above error path actually goto a block that frees the rsps?  Like this?

---

diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
index c184ee5..8aaa36f 100644
--- a/drivers/nvme/target/rdma.c
+++ b/drivers/nvme/target/rdma.c
@@ -1053,7 +1053,7 @@ nvmet_rdma_alloc_queue(struct nvmet_rdma_device *ndev,
 				!queue->host_qid);
 		if (IS_ERR(queue->cmds)) {
 			ret = NVME_RDMA_CM_NO_RSC;
-			goto out_free_cmds;
+			goto out_free_responses;
 		}
 	}
 
@@ -1073,6 +1073,8 @@ out_free_cmds:
 				queue->recv_queue_size,
 				!queue->host_qid);
 	}
+out_free_responses:
+	nvmet_rdma_free_rsps(queue);
 out_ida_remove:
 	ida_simple_remove(&nvmet_rdma_queue_ida, queue->idx);
 out_destroy_sq:

--
To unsubscribe from this list: send the line "unsubscribe linux-rdma" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
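To make the unwind-ordering concern concrete, here is a minimal, self-contained sketch of the goto-ladder pattern the proposed diff restores. This is not the driver's code: the helper functions and counters are invented stand-ins for `ida_simple_get`/`nvmet_rdma_alloc_rsps`/`nvmet_rdma_alloc_cmds`, and they only model the rule that each `out_*` label must release exactly the resources acquired before the failing step.

```c
#include <assert.h>
#include <stdbool.h>

/* Invented stand-ins for the real resources; the counters let us
 * verify that an error path released everything it acquired. */
static int rsps_allocated;
static int ida_held;

static bool get_ida(void)     { ida_held = 1;       return true; }
static void put_ida(void)     { ida_held = 0; }
static bool alloc_rsps(void)  { rsps_allocated = 1; return true; }
static void free_rsps(void)   { rsps_allocated = 0; }
static bool alloc_cmds(bool fail) { return !fail; } /* failure injection */

/* Returns 0 on success, -1 on failure.  On failure, every resource
 * acquired so far must have been released -- the point of the fix:
 * a failed cmds allocation must jump to a label that frees the rsps
 * (and then the ida), not to a label that skips them. */
static int setup_queue(bool fail_cmds)
{
	if (!get_ida())
		goto out;
	if (!alloc_rsps())
		goto out_ida_remove;
	if (!alloc_cmds(fail_cmds))
		goto out_free_rsps;	/* corrected target: frees rsps first */
	return 0;

out_free_rsps:
	free_rsps();
out_ida_remove:
	put_ida();
out:
	return -1;
}
```

The labels unwind in strictly reverse order of acquisition, so a jump to any label releases that resource and then falls through to release all earlier ones; jumping to the wrong (later) label, as in the code under review, silently leaks everything between the two labels.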