From patchwork Tue Mar 17 13:40:26 2020
From: Max Gurtovoy <maxg@mellanox.com>
To: linux-nvme@lists.infradead.org, sagi@grimberg.me, hch@lst.de,
 loberman@redhat.com, bvanassche@acm.org, linux-rdma@vger.kernel.org
Cc: kbusch@kernel.org, leonro@mellanox.com, jgg@mellanox.com,
 dledford@redhat.com, idanb@mellanox.com, shlomin@mellanox.com,
 oren@mellanox.com, vladimirk@mellanox.com, Max Gurtovoy <maxg@mellanox.com>
Subject: [PATCH 1/5] IB/core: add a simple SRQ set per PD
Date: Tue, 17 Mar 2020 15:40:26 +0200
Message-Id: <20200317134030.152833-2-maxg@mellanox.com>
In-Reply-To: <20200317134030.152833-1-maxg@mellanox.com>
References: <20200317134030.152833-1-maxg@mellanox.com>
List-ID: <linux-rdma.vger.kernel.org>

ULPs can use this API to create/destroy SRQs with the same characteristics
in order to implement logic that aims to save resources without a
significant performance penalty (e.g. create an SRQ per completion vector
and use shared receive buffers for multiple controllers of the ULP).

Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
---
 drivers/infiniband/core/Makefile  |  2 +-
 drivers/infiniband/core/srq_set.c | 78 +++++++++++++++++++++++++++++++++++++++
 drivers/infiniband/core/verbs.c   |  4 ++
 include/rdma/ib_verbs.h           |  5 +++
 include/rdma/srq_set.h            | 18 +++++++++
 5 files changed, 106 insertions(+), 1 deletion(-)
 create mode 100644 drivers/infiniband/core/srq_set.c
 create mode 100644 include/rdma/srq_set.h

diff --git a/drivers/infiniband/core/Makefile b/drivers/infiniband/core/Makefile
index d1b14887..1d3eaec 100644
--- a/drivers/infiniband/core/Makefile
+++ b/drivers/infiniband/core/Makefile
@@ -12,7 +12,7 @@ ib_core-y :=			packer.o ud_header.o verbs.o cq.o rw.o sysfs.o \
 				roce_gid_mgmt.o mr_pool.o addr.o sa_query.o \
 				multicast.o mad.o smi.o agent.o mad_rmpp.o \
 				nldev.o restrack.o counters.o ib_core_uverbs.o \
-				trace.o
+				trace.o srq_set.o
 
 ib_core-$(CONFIG_SECURITY_INFINIBAND) += security.o
 ib_core-$(CONFIG_CGROUP_RDMA) += cgroup.o

diff --git a/drivers/infiniband/core/srq_set.c b/drivers/infiniband/core/srq_set.c
new file mode 100644
index 0000000..d143561
--- /dev/null
+++ b/drivers/infiniband/core/srq_set.c
@@ -0,0 +1,78 @@
+// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
+/*
+ * Copyright (c) 2020 Mellanox Technologies. All rights reserved.
+ */
+
+#include <rdma/srq_set.h>
+
+struct ib_srq *rdma_srq_get(struct ib_pd *pd)
+{
+	struct ib_srq *srq;
+	unsigned long flags;
+
+	spin_lock_irqsave(&pd->srq_lock, flags);
+	srq = list_first_entry_or_null(&pd->srqs, struct ib_srq, pd_entry);
+	if (srq) {
+		list_del(&srq->pd_entry);
+		pd->srqs_used++;
+	}
+	spin_unlock_irqrestore(&pd->srq_lock, flags);
+
+	return srq;
+}
+EXPORT_SYMBOL(rdma_srq_get);
+
+void rdma_srq_put(struct ib_pd *pd, struct ib_srq *srq)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&pd->srq_lock, flags);
+	list_add(&srq->pd_entry, &pd->srqs);
+	pd->srqs_used--;
+	spin_unlock_irqrestore(&pd->srq_lock, flags);
+}
+EXPORT_SYMBOL(rdma_srq_put);
+
+int rdma_srq_set_init(struct ib_pd *pd, int nr,
+		struct ib_srq_init_attr *srq_attr)
+{
+	struct ib_srq *srq;
+	unsigned long flags;
+	int ret, i;
+
+	for (i = 0; i < nr; i++) {
+		srq = ib_create_srq(pd, srq_attr);
+		if (IS_ERR(srq)) {
+			ret = PTR_ERR(srq);
+			goto out;
+		}
+
+		spin_lock_irqsave(&pd->srq_lock, flags);
+		list_add_tail(&srq->pd_entry, &pd->srqs);
+		spin_unlock_irqrestore(&pd->srq_lock, flags);
+	}
+
+	return 0;
+out:
+	rdma_srq_set_destroy(pd);
+	return ret;
+}
+EXPORT_SYMBOL(rdma_srq_set_init);
+
+void rdma_srq_set_destroy(struct ib_pd *pd)
+{
+	struct ib_srq *srq;
+	unsigned long flags;
+
+	spin_lock_irqsave(&pd->srq_lock, flags);
+	while (!list_empty(&pd->srqs)) {
+		srq = list_first_entry(&pd->srqs, struct ib_srq, pd_entry);
+		list_del(&srq->pd_entry);
+
+		spin_unlock_irqrestore(&pd->srq_lock, flags);
+		ib_destroy_srq(srq);
+		spin_lock_irqsave(&pd->srq_lock, flags);
+	}
+	spin_unlock_irqrestore(&pd->srq_lock, flags);
+}
+EXPORT_SYMBOL(rdma_srq_set_destroy);
diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c
index e62c9df..6950abf 100644
--- a/drivers/infiniband/core/verbs.c
+++ b/drivers/infiniband/core/verbs.c
@@ -272,6 +272,9 @@ struct ib_pd *__ib_alloc_pd(struct ib_device *device, unsigned int flags,
 	pd->__internal_mr = NULL;
 	atomic_set(&pd->usecnt, 0);
 	pd->flags = flags;
+	pd->srqs_used = 0;
+	spin_lock_init(&pd->srq_lock);
+	INIT_LIST_HEAD(&pd->srqs);
 
 	pd->res.type = RDMA_RESTRACK_PD;
 	rdma_restrack_set_task(&pd->res, caller);
@@ -340,6 +343,7 @@ void ib_dealloc_pd_user(struct ib_pd *pd, struct ib_udata *udata)
 		pd->__internal_mr = NULL;
 	}
 
+	WARN_ON_ONCE(pd->srqs_used > 0);
 	/* uverbs manipulates usecnt with proper locking, while the kabi
 	   requires the caller to guarantee we can't race here. */
 	WARN_ON(atomic_read(&pd->usecnt));
diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
index 1f779fa..fc8207d 100644
--- a/include/rdma/ib_verbs.h
+++ b/include/rdma/ib_verbs.h
@@ -1517,6 +1517,10 @@ struct ib_pd {
 
 	u32			unsafe_global_rkey;
 
+	spinlock_t		srq_lock;
+	int			srqs_used;
+	struct list_head	srqs;
+
 	/*
 	 * Implementation details of the RDMA core, don't use in drivers:
 	 */
@@ -1585,6 +1589,7 @@ struct ib_srq {
 	void		       *srq_context;
 	enum ib_srq_type	srq_type;
 	atomic_t		usecnt;
+	struct list_head	pd_entry; /* srq set entry */
 
 	struct {
 		struct ib_cq   *cq;
diff --git a/include/rdma/srq_set.h b/include/rdma/srq_set.h
new file mode 100644
index 0000000..834c4c6
--- /dev/null
+++ b/include/rdma/srq_set.h
@@ -0,0 +1,18 @@
+/* SPDX-License-Identifier: (GPL-2.0 OR Linux-OpenIB) */
+/*
+ * Copyright (c) 2020 Mellanox Technologies. All rights reserved.
+ */
+
+#ifndef _RDMA_SRQ_SET_H
+#define _RDMA_SRQ_SET_H 1
+
+#include <rdma/ib_verbs.h>
+
+struct ib_srq *rdma_srq_get(struct ib_pd *pd);
+void rdma_srq_put(struct ib_pd *pd, struct ib_srq *srq);
+
+int rdma_srq_set_init(struct ib_pd *pd, int nr,
+		struct ib_srq_init_attr *srq_attr);
+void rdma_srq_set_destroy(struct ib_pd *pd);
+
+#endif /* _RDMA_SRQ_SET_H */

From patchwork Tue Mar 17 13:40:28 2020
From: Max Gurtovoy <maxg@mellanox.com>
To: linux-nvme@lists.infradead.org, sagi@grimberg.me, hch@lst.de,
 loberman@redhat.com, bvanassche@acm.org, linux-rdma@vger.kernel.org
Cc: kbusch@kernel.org, leonro@mellanox.com, jgg@mellanox.com,
 dledford@redhat.com, idanb@mellanox.com, shlomin@mellanox.com,
 oren@mellanox.com, vladimirk@mellanox.com, Max Gurtovoy <maxg@mellanox.com>
Subject: [PATCH 3/5] nvmet-rdma: use SRQ per completion vector
Date: Tue, 17 Mar 2020 15:40:28 +0200
Message-Id: <20200317134030.152833-4-maxg@mellanox.com>
In-Reply-To: <20200317134030.152833-1-maxg@mellanox.com>
References: <20200317134030.152833-1-maxg@mellanox.com>

In order to save resource allocation and make better use of completion
locality (compared to the per-device SRQ that exists today), allocate
Shared Receive Queues (SRQs) per completion vector. Associate each
created QP/CQ with an appropriate SRQ according to the queue index. This
association reduces lock contention in the fast path (compared to the
per-device SRQ solution) and increases locality in memory buffers. Add a
new module parameter for the SRQ size so it can be adjusted to the
expected load. Users should make sure the size is >= 256 to avoid a lack
of resources.
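For readers new to the patch-1 API, the setup flow this series builds on
looks roughly like the following. This is an illustrative ULP-side sketch,
not code from the series; ulp_setup_srqs and its parameters are invented
for the example and error handling is trimmed:

static int ulp_setup_srqs(struct ib_pd *pd, struct ib_srq **srqs,
			  int nr_srqs)
{
	struct ib_srq_init_attr srq_attr = {};
	int i, ret;

	srq_attr.attr.max_wr = 1024;	/* cf. the srq_size parameter below */
	srq_attr.attr.max_sge = 2;
	srq_attr.srq_type = IB_SRQT_BASIC;

	/* create nr_srqs SRQs with identical attributes inside the PD */
	ret = rdma_srq_set_init(pd, nr_srqs, &srq_attr);
	if (ret)
		return ret;

	/* take ownership of one SRQ from the set per completion vector */
	for (i = 0; i < nr_srqs; i++)
		srqs[i] = rdma_srq_get(pd);	/* may return NULL if empty */

	return 0;
}

Queues created later pick srqs[comp_vector % nr_srqs], which is exactly
the association described above.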
Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
---
 drivers/nvme/target/rdma.c | 204 +++++++++++++++++++++++++++++++++------------
 1 file changed, 153 insertions(+), 51 deletions(-)

diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
index 04062af..f560257 100644
--- a/drivers/nvme/target/rdma.c
+++ b/drivers/nvme/target/rdma.c
@@ -18,6 +18,7 @@
 #include <asm/unaligned.h>
 
 #include <rdma/ib_verbs.h>
+#include <rdma/srq_set.h>
 #include <rdma/rdma_cm.h>
 #include <rdma/rw.h>
 
@@ -31,6 +32,8 @@
 #define NVMET_RDMA_MAX_INLINE_SGE		4
 #define NVMET_RDMA_MAX_INLINE_DATA_SIZE	max_t(int, SZ_16K, PAGE_SIZE)
 
+struct nvmet_rdma_srq;
+
 struct nvmet_rdma_cmd {
 	struct ib_sge sge[NVMET_RDMA_MAX_INLINE_SGE + 1];
 	struct ib_cqe cqe;
@@ -38,7 +41,7 @@ struct nvmet_rdma_cmd {
 	struct scatterlist inline_sg[NVMET_RDMA_MAX_INLINE_SGE];
 	struct nvme_command *nvme_cmd;
 	struct nvmet_rdma_queue *queue;
-	struct ib_srq *srq;
+	struct nvmet_rdma_srq *nsrq;
 };
 
 enum {
@@ -80,6 +83,7 @@ struct nvmet_rdma_queue {
 	struct ib_cq		*cq;
 	atomic_t		sq_wr_avail;
 	struct nvmet_rdma_device *dev;
+	struct nvmet_rdma_srq   *nsrq;
 	spinlock_t		state_lock;
 	enum nvmet_rdma_queue_state state;
 	struct nvmet_cq		nvme_cq;
@@ -97,17 +101,24 @@ struct nvmet_rdma_queue {
 
 	int			idx;
 	int			host_qid;
+	int			comp_vector;
 	int			recv_queue_size;
 	int			send_queue_size;
 
 	struct list_head	queue_list;
 };
 
+struct nvmet_rdma_srq {
+	struct ib_srq            *srq;
+	struct nvmet_rdma_cmd    *cmds;
+	struct nvmet_rdma_device *ndev;
+};
+
 struct nvmet_rdma_device {
 	struct ib_device	*device;
 	struct ib_pd		*pd;
-	struct ib_srq		*srq;
-	struct nvmet_rdma_cmd	*srq_cmds;
+	struct nvmet_rdma_srq	**srqs;
+	int			srq_count;
 	size_t			srq_size;
 	struct kref		ref;
 	struct list_head	entry;
@@ -119,6 +130,16 @@ struct nvmet_rdma_device {
 module_param_named(use_srq, nvmet_rdma_use_srq, bool, 0444);
 MODULE_PARM_DESC(use_srq, "Use shared receive queue.");
 
+static int srq_size_set(const char *val, const struct kernel_param *kp);
+static const struct kernel_param_ops srq_size_ops = {
+	.set = srq_size_set,
+	.get = param_get_int,
+};
+
+static int nvmet_rdma_srq_size = 1024;
+module_param_cb(srq_size, &srq_size_ops, &nvmet_rdma_srq_size, 0644);
+MODULE_PARM_DESC(srq_size, "set Shared Receive Queue (SRQ) size, should be >= 256 (default: 1024)");
+
 static DEFINE_IDA(nvmet_rdma_queue_ida);
 static LIST_HEAD(nvmet_rdma_queue_list);
 static DEFINE_MUTEX(nvmet_rdma_queue_mutex);
@@ -139,6 +160,17 @@ static int nvmet_rdma_alloc_rsp(struct nvmet_rdma_device *ndev,
 
 static const struct nvmet_fabrics_ops nvmet_rdma_ops;
 
+static int srq_size_set(const char *val, const struct kernel_param *kp)
+{
+	int n = 0, ret;
+
+	ret = kstrtoint(val, 10, &n);
+	if (ret != 0 || n < 256)
+		return -EINVAL;
+
+	return param_set_int(val, kp);
+}
+
 static int num_pages(int len)
 {
 	return 1 + (((len - 1) & PAGE_MASK) >> PAGE_SHIFT);
@@ -462,8 +494,8 @@ static int nvmet_rdma_post_recv(struct nvmet_rdma_device *ndev,
 			cmd->sge[0].addr, cmd->sge[0].length,
 			DMA_FROM_DEVICE);
 
-	if (cmd->srq)
-		ret = ib_post_srq_recv(cmd->srq, &cmd->wr, NULL);
+	if (cmd->nsrq)
+		ret = ib_post_srq_recv(cmd->nsrq->srq, &cmd->wr, NULL);
 	else
 		ret = ib_post_recv(cmd->queue->cm_id->qp, &cmd->wr, NULL);
 
@@ -841,30 +873,82 @@ static void nvmet_rdma_recv_done(struct ib_cq *cq, struct ib_wc *wc)
 	nvmet_rdma_handle_command(queue, rsp);
 }
 
-static void nvmet_rdma_destroy_srq(struct nvmet_rdma_device *ndev)
+static void nvmet_rdma_destroy_srq(struct nvmet_rdma_srq *nsrq)
+{
+	nvmet_rdma_free_cmds(nsrq->ndev, nsrq->cmds, nsrq->ndev->srq_size, false);
+	rdma_srq_put(nsrq->ndev->pd, nsrq->srq);
+
+	kfree(nsrq);
+}
+
+static void nvmet_rdma_destroy_srqs(struct nvmet_rdma_device *ndev)
 {
-	if (!ndev->srq)
+	int i;
+
+	if (!ndev->srqs)
 		return;
 
-	nvmet_rdma_free_cmds(ndev, ndev->srq_cmds, ndev->srq_size, false);
-	ib_destroy_srq(ndev->srq);
+	for (i = 0; i < ndev->srq_count; i++)
+		nvmet_rdma_destroy_srq(ndev->srqs[i]);
+
+	rdma_srq_set_destroy(ndev->pd);
+	kfree(ndev->srqs);
+	ndev->srqs = NULL;
+	ndev->srq_count = 0;
+	ndev->srq_size = 0;
 }
 
-static int nvmet_rdma_init_srq(struct nvmet_rdma_device *ndev)
+static struct nvmet_rdma_srq *
+nvmet_rdma_init_srq(struct nvmet_rdma_device *ndev)
 {
-	struct ib_srq_init_attr srq_attr = { NULL, };
+	size_t srq_size = ndev->srq_size;
+	struct nvmet_rdma_srq *nsrq;
 	struct ib_srq *srq;
-	size_t srq_size;
 	int ret, i;
 
-	srq_size = 4095;	/* XXX: tune */
+	nsrq = kzalloc(sizeof(*nsrq), GFP_KERNEL);
+	if (!nsrq)
+		return ERR_PTR(-ENOMEM);
 
-	srq_attr.attr.max_wr = srq_size;
-	srq_attr.attr.max_sge = 1 + ndev->inline_page_count;
-	srq_attr.attr.srq_limit = 0;
-	srq_attr.srq_type = IB_SRQT_BASIC;
-	srq = ib_create_srq(ndev->pd, &srq_attr);
-	if (IS_ERR(srq)) {
+	srq = rdma_srq_get(ndev->pd);
+	if (!srq) {
+		ret = -EAGAIN;
+		goto out_free_nsrq;
+	}
+
+	nsrq->cmds = nvmet_rdma_alloc_cmds(ndev, srq_size, false);
+	if (IS_ERR(nsrq->cmds)) {
+		ret = PTR_ERR(nsrq->cmds);
+		goto out_put_srq;
+	}
+
+	nsrq->srq = srq;
+	nsrq->ndev = ndev;
+
+	for (i = 0; i < srq_size; i++) {
+		nsrq->cmds[i].nsrq = nsrq;
+		ret = nvmet_rdma_post_recv(ndev, &nsrq->cmds[i]);
+		if (ret)
+			goto out_free_cmds;
+	}
+
+	return nsrq;
+
+out_free_cmds:
+	nvmet_rdma_free_cmds(ndev, nsrq->cmds, srq_size, false);
+out_put_srq:
+	rdma_srq_put(ndev->pd, srq);
+out_free_nsrq:
+	kfree(nsrq);
+	return ERR_PTR(ret);
+}
+
+static int nvmet_rdma_init_srqs(struct nvmet_rdma_device *ndev)
+{
+	struct ib_srq_init_attr srq_attr = { NULL, };
+	int i, ret;
+
+	if (!ndev->device->attrs.max_srq_wr || !ndev->device->attrs.max_srq) {
 		/*
 		 * If SRQs aren't supported we just go ahead and use normal
 		 * non-shared receive queues.
@@ -873,31 +957,44 @@ static int nvmet_rdma_init_srq(struct nvmet_rdma_device *ndev)
 		return 0;
 	}
 
-	ndev->srq_cmds = nvmet_rdma_alloc_cmds(ndev, srq_size, false);
-	if (IS_ERR(ndev->srq_cmds)) {
-		ret = PTR_ERR(ndev->srq_cmds);
-		goto out_destroy_srq;
-	}
+	ndev->srq_size = min(ndev->device->attrs.max_srq_wr,
+			     nvmet_rdma_srq_size);
+	ndev->srq_count = min(ndev->device->num_comp_vectors,
+			      ndev->device->attrs.max_srq);
 
-	ndev->srq = srq;
-	ndev->srq_size = srq_size;
+	ndev->srqs = kcalloc(ndev->srq_count, sizeof(*ndev->srqs), GFP_KERNEL);
+	if (!ndev->srqs)
+		return -ENOMEM;
 
-	for (i = 0; i < srq_size; i++) {
-		ndev->srq_cmds[i].srq = srq;
-		ret = nvmet_rdma_post_recv(ndev, &ndev->srq_cmds[i]);
-		if (ret)
-			goto out_free_cmds;
+	srq_attr.attr.max_wr = ndev->srq_size;
+	srq_attr.attr.max_sge = 2;
+	srq_attr.attr.srq_limit = 0;
+	srq_attr.srq_type = IB_SRQT_BASIC;
+	ret = rdma_srq_set_init(ndev->pd, ndev->srq_count, &srq_attr);
+	if (ret)
+		goto err_free;
+
+	for (i = 0; i < ndev->srq_count; i++) {
+		ndev->srqs[i] = nvmet_rdma_init_srq(ndev);
+		if (IS_ERR(ndev->srqs[i])) {
+			ret = PTR_ERR(ndev->srqs[i]);
+			goto err_srq;
+		}
 	}
 
 	return 0;
 
-out_free_cmds:
-	nvmet_rdma_free_cmds(ndev, ndev->srq_cmds, ndev->srq_size, false);
-out_destroy_srq:
-	ib_destroy_srq(srq);
+err_srq:
+	while (--i >= 0)
+		nvmet_rdma_destroy_srq(ndev->srqs[i]);
+	rdma_srq_set_destroy(ndev->pd);
+err_free:
+	kfree(ndev->srqs);
+	ndev->srqs = NULL;
+	ndev->srq_count = 0;
+	ndev->srq_size = 0;
 	return ret;
 }
 
+
 static void nvmet_rdma_free_dev(struct kref *ref)
 {
@@ -907,7 +1004,7 @@ static void nvmet_rdma_free_dev(struct kref *ref)
 	list_del(&ndev->entry);
 	mutex_unlock(&device_list_mutex);
 
-	nvmet_rdma_destroy_srq(ndev);
+	nvmet_rdma_destroy_srqs(ndev);
 	ib_dealloc_pd(ndev->pd);
 
 	kfree(ndev);
@@ -953,7 +1050,7 @@ static void nvmet_rdma_free_dev(struct kref *ref)
 		goto out_free_dev;
 
 	if (nvmet_rdma_use_srq) {
-		ret = nvmet_rdma_init_srq(ndev);
+		ret = nvmet_rdma_init_srqs(ndev);
 		if (ret)
 			goto out_free_pd;
 	}
@@ -977,14 +1074,8 @@ static int nvmet_rdma_create_queue_ib(struct nvmet_rdma_queue *queue)
 {
 	struct ib_qp_init_attr qp_attr;
 	struct nvmet_rdma_device *ndev = queue->dev;
-	int comp_vector, nr_cqe, ret, i;
-
-	/*
-	 * Spread the io queues across completion vectors,
-	 * but still keep all admin queues on vector 0.
-	 */
-	comp_vector = !queue->host_qid ? 0 :
-		queue->idx % ndev->device->num_comp_vectors;
+	int nr_cqe, ret, i;
 
 	/*
 	 * Reserve CQ slots for RECV + RDMA_READ/RDMA_WRITE + RDMA_SEND.
@@ -992,7 +1083,7 @@ static int nvmet_rdma_create_queue_ib(struct nvmet_rdma_queue *queue)
 	nr_cqe = queue->recv_queue_size + 2 * queue->send_queue_size;
 
 	queue->cq = ib_alloc_cq(ndev->device, queue,
-			nr_cqe + 1, comp_vector,
+			nr_cqe + 1, queue->comp_vector,
 			IB_POLL_WORKQUEUE);
 	if (IS_ERR(queue->cq)) {
 		ret = PTR_ERR(queue->cq);
@@ -1014,8 +1105,8 @@ static int nvmet_rdma_create_queue_ib(struct nvmet_rdma_queue *queue)
 	qp_attr.cap.max_send_sge = max(ndev->device->attrs.max_sge_rd,
 				       ndev->device->attrs.max_send_sge);
 
-	if (ndev->srq) {
-		qp_attr.srq = ndev->srq;
+	if (queue->nsrq) {
+		qp_attr.srq = queue->nsrq->srq;
 	} else {
 		/* +1 for drain */
 		qp_attr.cap.max_recv_wr = 1 + queue->recv_queue_size;
@@ -1034,7 +1125,7 @@ static int nvmet_rdma_create_queue_ib(struct nvmet_rdma_queue *queue)
 		 __func__, queue->cq->cqe, qp_attr.cap.max_send_sge,
 		 qp_attr.cap.max_send_wr, queue->cm_id);
 
-	if (!ndev->srq) {
+	if (!queue->nsrq) {
 		for (i = 0; i < queue->recv_queue_size; i++) {
 			queue->cmds[i].queue = queue;
 			ret = nvmet_rdma_post_recv(ndev, &queue->cmds[i]);
@@ -1070,7 +1161,7 @@ static void nvmet_rdma_free_queue(struct nvmet_rdma_queue *queue)
 	nvmet_sq_destroy(&queue->nvme_sq);
 
 	nvmet_rdma_destroy_queue_ib(queue);
-	if (!queue->dev->srq) {
+	if (!queue->nsrq) {
 		nvmet_rdma_free_cmds(queue->dev, queue->cmds,
 				queue->recv_queue_size,
 				!queue->host_qid);
@@ -1182,13 +1273,22 @@ static int nvmet_rdma_cm_reject(struct rdma_cm_id *cm_id,
 		goto out_destroy_sq;
 	}
 
+	/*
+	 * Spread the io queues across completion vectors,
+	 * but still keep all admin queues on vector 0.
+	 */
+	queue->comp_vector = !queue->host_qid ? 0 :
+		queue->idx % ndev->device->num_comp_vectors;
+
 	ret = nvmet_rdma_alloc_rsps(queue);
 	if (ret) {
 		ret = NVME_RDMA_CM_NO_RSC;
 		goto out_ida_remove;
 	}
 
-	if (!ndev->srq) {
+	if (ndev->srqs) {
+		queue->nsrq = ndev->srqs[queue->comp_vector % ndev->srq_count];
+	} else {
 		queue->cmds = nvmet_rdma_alloc_cmds(ndev,
 				queue->recv_queue_size,
 				!queue->host_qid);
@@ -1209,10 +1309,12 @@ static int nvmet_rdma_cm_reject(struct rdma_cm_id *cm_id,
 	return queue;
 
 out_free_cmds:
-	if (!ndev->srq) {
+	if (!queue->nsrq) {
 		nvmet_rdma_free_cmds(queue->dev, queue->cmds,
 				queue->recv_queue_size,
 				!queue->host_qid);
+	} else {
+		queue->nsrq = NULL;
 	}
 out_free_responses:
 	nvmet_rdma_free_rsps(queue);
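A note on the vector spreading above: the patch moves the policy from CQ
creation time into queue allocation, so the chosen vector is available when
the queue picks its SRQ. An illustrative restatement (not patch code; the
helper name is invented):

static int nvmet_rdma_queue_comp_vector(int host_qid, int queue_idx,
					int num_comp_vectors)
{
	/* admin queue (host_qid == 0) stays on vector 0,
	 * I/O queues round-robin across the available vectors */
	return host_qid ? queue_idx % num_comp_vectors : 0;
}

Since srq_count <= num_comp_vectors, the later srqs[comp_vector % srq_count]
lookup keeps all queues that share a completion vector on the same SRQ.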
From patchwork Tue Mar 17 13:40:29 2020
From: Max Gurtovoy <maxg@mellanox.com>
To: linux-nvme@lists.infradead.org, sagi@grimberg.me, hch@lst.de,
 loberman@redhat.com, bvanassche@acm.org, linux-rdma@vger.kernel.org
Cc: kbusch@kernel.org, leonro@mellanox.com, jgg@mellanox.com,
 dledford@redhat.com, idanb@mellanox.com, shlomin@mellanox.com,
 oren@mellanox.com, vladimirk@mellanox.com, Max Gurtovoy <maxg@mellanox.com>
Subject: [PATCH 4/5] IB/core: cache the CQ completion vector
Date: Tue, 17 Mar 2020 15:40:29 +0200
Message-Id: <20200317134030.152833-5-maxg@mellanox.com>
In-Reply-To: <20200317134030.152833-1-maxg@mellanox.com>
References: <20200317134030.152833-1-maxg@mellanox.com>

In some cases, e.g. when using ib_alloc_cq_any, one would like to know the
completion vector that was eventually assigned to the CQ. Cache this value
during CQ creation.

Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
---
 drivers/infiniband/core/cq.c | 1 +
 include/rdma/ib_verbs.h      | 1 +
 2 files changed, 2 insertions(+)

diff --git a/drivers/infiniband/core/cq.c b/drivers/infiniband/core/cq.c
index 4f25b24..a7cbf52 100644
--- a/drivers/infiniband/core/cq.c
+++ b/drivers/infiniband/core/cq.c
@@ -217,6 +217,7 @@ struct ib_cq *__ib_alloc_cq_user(struct ib_device *dev, void *private,
 	cq->device = dev;
 	cq->cq_context = private;
 	cq->poll_ctx = poll_ctx;
+	cq->comp_vector = comp_vector;
 	atomic_set(&cq->usecnt, 0);
 
 	cq->wc = kmalloc_array(IB_POLL_BATCH, sizeof(*cq->wc), GFP_KERNEL);
diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
index fc8207d..0d61772 100644
--- a/include/rdma/ib_verbs.h
+++ b/include/rdma/ib_verbs.h
@@ -1558,6 +1558,7 @@ struct ib_cq {
 	struct ib_device       *device;
 	struct ib_ucq_object   *uobject;
 	ib_comp_handler 	comp_handler;
+	u32			comp_vector;
 	void                  (*event_handler)(struct ib_event *, void *);
 	void		       *cq_context;
 	int			cqe;
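The point of caching the value: with ib_alloc_cq_any() the core, not the
caller, chooses the completion vector, so after this patch a ULP can read
it back and select the matching per-vector SRQ. A minimal sketch assuming
the setup from earlier in the series (the helper is hypothetical, not part
of the patch):

static struct ib_srq *ulp_srq_for_cq(struct ib_cq *cq,
				     struct ib_srq **srqs, int srq_count)
{
	/* cq->comp_vector is now filled in by __ib_alloc_cq_user() */
	return srqs[cq->comp_vector % srq_count];
}

Patch 5 uses exactly this pattern when wiring srpt channels to SRQs.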
From patchwork Tue Mar 17 13:40:30 2020
From: Max Gurtovoy <maxg@mellanox.com>
To: linux-nvme@lists.infradead.org, sagi@grimberg.me, hch@lst.de,
 loberman@redhat.com, bvanassche@acm.org, linux-rdma@vger.kernel.org
Cc: kbusch@kernel.org, leonro@mellanox.com, jgg@mellanox.com,
 dledford@redhat.com, idanb@mellanox.com, shlomin@mellanox.com,
 oren@mellanox.com, vladimirk@mellanox.com, Max Gurtovoy <maxg@mellanox.com>
Subject: [PATCH 5/5] RDMA/srpt: use SRQ per completion vector
Date: Tue, 17 Mar 2020 15:40:30 +0200
Message-Id: <20200317134030.152833-6-maxg@mellanox.com>
In-Reply-To: <20200317134030.152833-1-maxg@mellanox.com>
References: <20200317134030.152833-1-maxg@mellanox.com>

In order to save resource allocation and make better use of completion
locality (compared to the per-device SRQ that exists today), allocate
Shared Receive Queues (SRQs) per completion vector. Associate each created
channel with an appropriate SRQ according to the completion vector index.
This association reduces lock contention in the fast path (compared to the
per-device SRQ solution) and increases locality in memory buffers.

Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
---
 drivers/infiniband/ulp/srpt/ib_srpt.c | 169 +++++++++++++++++++++++++---------
 drivers/infiniband/ulp/srpt/ib_srpt.h |  26 +++++-
 2 files changed, 148 insertions(+), 47 deletions(-)

diff --git a/drivers/infiniband/ulp/srpt/ib_srpt.c b/drivers/infiniband/ulp/srpt/ib_srpt.c
index 9855274..34869b7 100644
--- a/drivers/infiniband/ulp/srpt/ib_srpt.c
+++ b/drivers/infiniband/ulp/srpt/ib_srpt.c
@@ -811,6 +811,31 @@ static bool srpt_test_and_set_cmd_state(struct srpt_send_ioctx *ioctx,
 }
 
 /**
+ * srpt_srq_post_recv - post an initial IB receive request for SRQ
+ * @srq: SRPT SRQ context.
+ * @ioctx: Receive I/O context pointer.
+ */
+static int srpt_srq_post_recv(struct srpt_srq *srq, struct srpt_recv_ioctx *ioctx)
+{
+	struct srpt_device *sdev = srq->sdev;
+	struct ib_sge list;
+	struct ib_recv_wr wr;
+
+	BUG_ON(!srq);
+	list.addr = ioctx->ioctx.dma + ioctx->ioctx.offset;
+	list.length = srp_max_req_size;
+	list.lkey = sdev->lkey;
+
+	ioctx->ioctx.cqe.done = srpt_recv_done;
+	wr.wr_cqe = &ioctx->ioctx.cqe;
+	wr.next = NULL;
+	wr.sg_list = &list;
+	wr.num_sge = 1;
+
+	return ib_post_srq_recv(srq->ibsrq, &wr, NULL);
+}
+
+/**
  * srpt_post_recv - post an IB receive request
  * @sdev: SRPT HCA pointer.
  * @ch: SRPT RDMA channel.
@@ -823,6 +848,7 @@ static int srpt_post_recv(struct srpt_device *sdev, struct srpt_rdma_ch *ch,
 	struct ib_recv_wr wr;
 
 	BUG_ON(!sdev);
+	BUG_ON(!ch);
 	list.addr = ioctx->ioctx.dma + ioctx->ioctx.offset;
 	list.length = srp_max_req_size;
 	list.lkey = sdev->lkey;
@@ -834,7 +860,7 @@ static int srpt_post_recv(struct srpt_device *sdev, struct srpt_rdma_ch *ch,
 	wr.num_sge = 1;
 
 	if (sdev->use_srq)
-		return ib_post_srq_recv(sdev->srq, &wr, NULL);
+		return ib_post_srq_recv(ch->srq->ibsrq, &wr, NULL);
 	else
 		return ib_post_recv(ch->qp, &wr, NULL);
 }
@@ -1820,7 +1846,8 @@ static int srpt_create_ch_ib(struct srpt_rdma_ch *ch)
 					      SRPT_MAX_SG_PER_WQE);
 	qp_init->port_num = ch->sport->port;
 	if (sdev->use_srq) {
-		qp_init->srq = sdev->srq;
+		ch->srq = sdev->srqs[ch->cq->comp_vector % sdev->srq_count];
+		qp_init->srq = ch->srq->ibsrq;
 	} else {
 		qp_init->cap.max_recv_wr = ch->rq_size;
 		qp_init->cap.max_recv_sge = min(attrs->max_recv_sge,
@@ -1878,6 +1905,8 @@ static int srpt_create_ch_ib(struct srpt_rdma_ch *ch)
 
 static void srpt_destroy_ch_ib(struct srpt_rdma_ch *ch)
 {
+	if (ch->srq)
+		ch->srq = NULL;
 	ib_destroy_qp(ch->qp);
 	ib_free_cq(ch->cq);
 }
@@ -3018,20 +3047,75 @@ static struct se_wwn *srpt_lookup_wwn(const char *name)
 	return wwn;
 }
 
-static void srpt_free_srq(struct srpt_device *sdev)
+static void srpt_free_srq(struct srpt_srq *srq)
 {
-	if (!sdev->srq)
-		return;
-	ib_destroy_srq(sdev->srq);
-	srpt_free_ioctx_ring((struct srpt_ioctx **)sdev->ioctx_ring, sdev,
-			     sdev->srq_size, sdev->req_buf_cache,
+	srpt_free_ioctx_ring((struct srpt_ioctx **)srq->ioctx_ring, srq->sdev,
+			     srq->sdev->srq_size, srq->sdev->req_buf_cache,
 			     DMA_FROM_DEVICE);
+	rdma_srq_put(srq->sdev->pd, srq->ibsrq);
+	kfree(srq);
+
+}
+
+static void srpt_free_srqs(struct srpt_device *sdev)
+{
+	int i;
+
+	if (!sdev->srqs)
+		return;
+
+	for (i = 0; i < sdev->srq_count; i++)
+		srpt_free_srq(sdev->srqs[i]);
 	kmem_cache_destroy(sdev->req_buf_cache);
-	sdev->srq = NULL;
+	rdma_srq_set_destroy(sdev->pd);
+	kfree(sdev->srqs);
+	sdev->srqs = NULL;
 }
 
-static int srpt_alloc_srq(struct srpt_device *sdev)
+static struct srpt_srq *srpt_alloc_srq(struct srpt_device *sdev)
+{
+	struct srpt_srq *srq;
+	int i, ret;
+
+	srq = kzalloc(sizeof(*srq), GFP_KERNEL);
+	if (!srq) {
+		pr_debug("failed to allocate SRQ context\n");
+		return ERR_PTR(-ENOMEM);
+	}
+
+	srq->ibsrq = rdma_srq_get(sdev->pd);
+	if (!srq->ibsrq) {
+		ret = -EAGAIN;
+		goto free_srq;
+	}
+
+	srq->ioctx_ring = (struct srpt_recv_ioctx **)
+		srpt_alloc_ioctx_ring(sdev, sdev->srq_size,
+				      sizeof(*srq->ioctx_ring[0]),
+				      sdev->req_buf_cache, 0, DMA_FROM_DEVICE);
+	if (!srq->ioctx_ring) {
+		ret = -ENOMEM;
+		goto put_srq;
+	}
+
+	srq->sdev = sdev;
+
+	for (i = 0; i < sdev->srq_size; ++i) {
+		INIT_LIST_HEAD(&srq->ioctx_ring[i]->wait_list);
+		srpt_srq_post_recv(srq, srq->ioctx_ring[i]);
+	}
+
+	return srq;
+
+put_srq:
+	rdma_srq_put(sdev->pd, srq->ibsrq);
+free_srq:
+	kfree(srq);
+	return ERR_PTR(ret);
+}
+
+static int srpt_alloc_srqs(struct srpt_device *sdev)
 {
 	struct ib_srq_init_attr srq_attr = {
 		.event_handler = srpt_srq_event,
@@ -3041,46 +3125,45 @@ static int srpt_alloc_srq(struct srpt_device *sdev)
 		.srq_type = IB_SRQT_BASIC,
 	};
 	struct ib_device *device = sdev->device;
-	struct ib_srq *srq;
-	int i;
+	int i, j, ret;
 
-	WARN_ON_ONCE(sdev->srq);
-	srq = ib_create_srq(sdev->pd, &srq_attr);
-	if (IS_ERR(srq)) {
-		pr_debug("ib_create_srq() failed: %ld\n", PTR_ERR(srq));
-		return PTR_ERR(srq);
-	}
+	WARN_ON_ONCE(sdev->srqs);
+	sdev->srqs = kcalloc(sdev->srq_count, sizeof(*sdev->srqs), GFP_KERNEL);
+	if (!sdev->srqs)
+		return -ENOMEM;
-	pr_debug("create SRQ #wr= %d max_allow=%d dev= %s\n", sdev->srq_size,
-		 sdev->device->attrs.max_srq_wr, dev_name(&device->dev));
+	pr_debug("create SRQ set #wr= %d max_allow=%d dev= %s\n",
+		 sdev->srq_size, sdev->device->attrs.max_srq_wr,
+		 dev_name(&device->dev));
+
+	ret = rdma_srq_set_init(sdev->pd, sdev->srq_count, &srq_attr);
+	if (ret)
+		goto out_free;
 
 	sdev->req_buf_cache = kmem_cache_create("srpt-srq-req-buf",
 						srp_max_req_size, 0, 0, NULL);
 	if (!sdev->req_buf_cache)
-		goto free_srq;
+		goto out_free_set;
 
-	sdev->ioctx_ring = (struct srpt_recv_ioctx **)
-		srpt_alloc_ioctx_ring(sdev, sdev->srq_size,
-				      sizeof(*sdev->ioctx_ring[0]),
-				      sdev->req_buf_cache, 0, DMA_FROM_DEVICE);
-	if (!sdev->ioctx_ring)
-		goto free_cache;
+	for (i = 0; i < sdev->srq_count; i++) {
+		sdev->srqs[i] = srpt_alloc_srq(sdev);
+		if (IS_ERR(sdev->srqs[i]))
+			goto free_srq;
+	}
 
 	sdev->use_srq = true;
-	sdev->srq = srq;
-
-	for (i = 0; i < sdev->srq_size; ++i) {
-		INIT_LIST_HEAD(&sdev->ioctx_ring[i]->wait_list);
-		srpt_post_recv(sdev, NULL, sdev->ioctx_ring[i]);
-	}
 
 	return 0;
 
-free_cache:
-	kmem_cache_destroy(sdev->req_buf_cache);
-
 free_srq:
-	ib_destroy_srq(srq);
+	for (j = 0; j < i; j++)
+		srpt_free_srq(sdev->srqs[j]);
+	kmem_cache_destroy(sdev->req_buf_cache);
+out_free_set:
+	rdma_srq_set_destroy(sdev->pd);
+out_free:
+	kfree(sdev->srqs);
+	sdev->srqs = NULL;
 	return -ENOMEM;
 }
 
@@ -3090,10 +3173,10 @@ static int srpt_use_srq(struct srpt_device *sdev, bool use_srq)
 	int ret = 0;
 
 	if (!use_srq) {
-		srpt_free_srq(sdev);
+		srpt_free_srqs(sdev);
 		sdev->use_srq = false;
-	} else if (use_srq && !sdev->srq) {
-		ret = srpt_alloc_srq(sdev);
+	} else if (use_srq && !sdev->srqs) {
+		ret = srpt_alloc_srqs(sdev);
 	}
 	pr_debug("%s(%s): use_srq = %d; ret = %d\n", __func__,
 		 dev_name(&device->dev), sdev->use_srq, ret);
@@ -3127,6 +3210,8 @@ static void srpt_add_one(struct ib_device *device)
 	sdev->lkey = sdev->pd->local_dma_lkey;
 
 	sdev->srq_size = min(srpt_srq_size, sdev->device->attrs.max_srq_wr);
+	sdev->srq_count = min(sdev->device->num_comp_vectors,
+			      sdev->device->attrs.max_srq);
 
 	srpt_use_srq(sdev, sdev->port[0].port_attrib.use_srq);
 
@@ -3204,7 +3289,7 @@ static void srpt_add_one(struct ib_device *device)
 	if (sdev->cm_id)
 		ib_destroy_cm_id(sdev->cm_id);
 err_ring:
-	srpt_free_srq(sdev);
+	srpt_free_srqs(sdev);
 	ib_dealloc_pd(sdev->pd);
 free_dev:
 	kfree(sdev);
@@ -3255,7 +3340,7 @@ static void srpt_remove_one(struct ib_device *device, void *client_data)
 	for (i = 0; i < sdev->device->phys_port_cnt; i++)
 		srpt_release_sport(&sdev->port[i]);
 
-	srpt_free_srq(sdev);
+	srpt_free_srqs(sdev);
 
 	ib_dealloc_pd(sdev->pd);
 
diff --git a/drivers/infiniband/ulp/srpt/ib_srpt.h b/drivers/infiniband/ulp/srpt/ib_srpt.h
index 2e1a698..a637d4f 100644
--- a/drivers/infiniband/ulp/srpt/ib_srpt.h
+++ b/drivers/infiniband/ulp/srpt/ib_srpt.h
@@ -42,6 +42,7 @@
 #include <rdma/ib_verbs.h>
 #include <rdma/ib_sa.h>
 #include <rdma/ib_cm.h>
+#include <rdma/srq_set.h>
 #include <rdma/rdma_cm.h>
 #include <rdma/rw.h>
@@ -56,6 +57,7 @@
 #define SRP_SERVICE_NAME_PREFIX		"SRP.T10:"
 
 struct srpt_nexus;
+struct srpt_srq;
 
 enum {
 	/*
@@ -255,6 +257,7 @@ enum rdma_ch_state {
 /**
  * struct srpt_rdma_ch - RDMA channel
  * @nexus: I_T nexus this channel is associated with.
+ * @srq: SRQ that this channel is associated with (if use_srq is true).
 * @qp: IB queue pair used for communicating over this channel.
 * @ib_cm: See below.
 * @ib_cm.cm_id: IB CM ID associated with the channel.
@@ -295,6 +298,7 @@ enum rdma_ch_state {
  */
 struct srpt_rdma_ch {
 	struct srpt_nexus	*nexus;
+	struct srpt_srq		*srq;
 	struct ib_qp		*qp;
 	union {
 		struct {
@@ -432,17 +436,29 @@ struct srpt_port {
 };
 
 /**
+ * struct srpt_srq - SRQ (shared receive queue) context for SRPT
+ * @ibsrq:         verbs SRQ pointer.
+ * @ioctx_ring:    per-SRQ receive ring.
+ * @sdev:          backpointer to the HCA information.
+ */
+struct srpt_srq {
+	struct ib_srq		*ibsrq;
+	struct srpt_recv_ioctx	**ioctx_ring;
+	struct srpt_device	*sdev;
+};
+
+/**
  * struct srpt_device - information associated by SRPT with a single HCA
  * @device:        Backpointer to the struct ib_device managed by the IB core.
  * @pd:            IB protection domain.
  * @lkey:          L_Key (local key) with write access to all local memory.
- * @srq:           Per-HCA SRQ (shared receive queue).
  * @cm_id:         Connection identifier.
- * @srq_size:      SRQ size.
+ * @srqs:          Array of SRQ contexts.
+ * @srq_count:     Number of SRQs in the array.
+ * @srq_size:      Size of each SRQ in the array.
  * @sdev_mutex:    Serializes use_srq changes.
  * @use_srq:       Whether or not to use SRQ.
  * @req_buf_cache: kmem_cache for @ioctx_ring buffers.
- * @ioctx_ring:    Per-HCA SRQ.
  * @event_handler: Per-HCA asynchronous IB event handler.
  * @list:          Node in srpt_dev_list.
  * @port:          Information about the ports owned by this HCA.
@@ -451,13 +467,13 @@ struct srpt_device {
 	struct ib_device	*device;
 	struct ib_pd		*pd;
 	u32			lkey;
-	struct ib_srq		*srq;
 	struct ib_cm_id		*cm_id;
+	struct srpt_srq		**srqs;
+	int			srq_count;
 	int			srq_size;
 	struct mutex		sdev_mutex;
 	bool			use_srq;
 	struct kmem_cache	*req_buf_cache;
-	struct srpt_recv_ioctx	**ioctx_ring;
 	struct ib_event_handler	event_handler;
 	struct list_head	list;
 	struct srpt_port	port[];
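Across the series, the get/put pairing matters at teardown: patch 1 makes
ib_dealloc_pd() warn when pd->srqs_used is still non-zero, so every
rdma_srq_get() must be balanced with rdma_srq_put() before the set and the
PD go away. A minimal sketch of the contract, mirroring the setup example
earlier (the helper is hypothetical, not part of any patch):

static void ulp_teardown_srqs(struct ib_pd *pd, struct ib_srq **srqs,
			      int nr_srqs)
{
	int i;

	/* hand each SRQ back so pd->srqs_used drops to zero */
	for (i = 0; i < nr_srqs; i++)
		rdma_srq_put(pd, srqs[i]);

	/* destroys every SRQ remaining in the set; the PD can then
	 * be deallocated without triggering the new WARN_ON_ONCE */
	rdma_srq_set_destroy(pd);
}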