From patchwork Mon Dec 6 21:12:36 2021
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12659699
X-Patchwork-Delegate: jgg@ziepe.ca
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v6 1/8] RDMA/rxe: Replace RB tree by xarray for indexes
Date: Mon, 6 Dec 2021 15:12:36 -0600
Message-Id: <20211206211242.15528-2-rpearsonhpe@gmail.com>
In-Reply-To: <20211206211242.15528-1-rpearsonhpe@gmail.com>
References: <20211206211242.15528-1-rpearsonhpe@gmail.com>

Currently the rxe driver uses red-black trees to add indices and keys to the rxe object pool. Linux xarrays provide a better way to implement the same functionality for indices, but not for keys. This patch replaces the red-black trees by xarrays for indexed objects. Since the caller-managed locks for indexed objects are no longer used, those APIs are deleted as well. To avoid double locking, the rxe_pool rwlock is replaced by the spinlock already included in the xarray. The RDMA objects are created and destroyed by verbs calls from rdma_core but are looked up by index or key from soft IRQs, so _bh style locks are the correct type to use.

Signed-off-by: Bob Pearson
---
v6
  Minor fix to comment.
---
 drivers/infiniband/sw/rxe/rxe.c | 100 ++----------
 drivers/infiniband/sw/rxe/rxe_mcast.c | 6 +-
 drivers/infiniband/sw/rxe/rxe_mr.c | 1 -
 drivers/infiniband/sw/rxe/rxe_mw.c | 4 -
 drivers/infiniband/sw/rxe/rxe_pool.c | 221 ++++++++------------------
 drivers/infiniband/sw/rxe/rxe_pool.h | 77 ++++-----
 drivers/infiniband/sw/rxe/rxe_verbs.c | 12 --
 7 files changed, 114 insertions(+), 307 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe.c b/drivers/infiniband/sw/rxe/rxe.c index 8e0f9c489cab..09c73a0d8513 100644 --- a/drivers/infiniband/sw/rxe/rxe.c +++ b/drivers/infiniband/sw/rxe/rxe.c @@ -116,97 +116,31 @@ static void rxe_init_ports(struct rxe_dev *rxe) } /* init pools of managed objects */ -static int rxe_init_pools(struct rxe_dev *rxe) +static void rxe_init_pools(struct rxe_dev *rxe) { - int err; - - err = rxe_pool_init(rxe, &rxe->uc_pool, RXE_TYPE_UC, - rxe->max_ucontext); - if (err) - goto err1; - - err = rxe_pool_init(rxe, &rxe->pd_pool, RXE_TYPE_PD, - rxe->attr.max_pd); - if (err) - goto err2; - - err = rxe_pool_init(rxe, &rxe->ah_pool, RXE_TYPE_AH, - rxe->attr.max_ah); - if (err) - goto err3; - - err = rxe_pool_init(rxe, &rxe->srq_pool, RXE_TYPE_SRQ, - rxe->attr.max_srq); - if (err) - goto err4; - - err = rxe_pool_init(rxe, &rxe->qp_pool, RXE_TYPE_QP, - rxe->attr.max_qp); - if (err) - goto err5; - - err = rxe_pool_init(rxe, &rxe->cq_pool, RXE_TYPE_CQ, - rxe->attr.max_cq); - if (err) - goto err6; - - err = rxe_pool_init(rxe, &rxe->mr_pool, RXE_TYPE_MR, - rxe->attr.max_mr); - if (err) - goto err7; - - err = rxe_pool_init(rxe, &rxe->mw_pool, RXE_TYPE_MW, - rxe->attr.max_mw); - if (err) - goto err8; - - err = rxe_pool_init(rxe, &rxe->mc_grp_pool, RXE_TYPE_MC_GRP, + rxe_pool_init(rxe, &rxe->uc_pool, RXE_TYPE_UC, rxe->max_ucontext); + rxe_pool_init(rxe, &rxe->pd_pool, RXE_TYPE_PD, rxe->attr.max_pd); + rxe_pool_init(rxe, &rxe->ah_pool, RXE_TYPE_AH, rxe->attr.max_ah); + rxe_pool_init(rxe, &rxe->srq_pool, RXE_TYPE_SRQ, rxe->attr.max_srq); + rxe_pool_init(rxe, &rxe->qp_pool, RXE_TYPE_QP, rxe->attr.max_qp); + rxe_pool_init(rxe, &rxe->cq_pool, RXE_TYPE_CQ,
rxe->attr.max_cq); + rxe_pool_init(rxe, &rxe->mr_pool, RXE_TYPE_MR, rxe->attr.max_mr); + rxe_pool_init(rxe, &rxe->mw_pool, RXE_TYPE_MW, rxe->attr.max_mw); + rxe_pool_init(rxe, &rxe->mc_grp_pool, RXE_TYPE_MC_GRP, rxe->attr.max_mcast_grp); - if (err) - goto err9; - - err = rxe_pool_init(rxe, &rxe->mc_elem_pool, RXE_TYPE_MC_ELEM, + rxe_pool_init(rxe, &rxe->mc_elem_pool, RXE_TYPE_MC_ELEM, rxe->attr.max_total_mcast_qp_attach); - if (err) - goto err10; - - return 0; - -err10: - rxe_pool_cleanup(&rxe->mc_grp_pool); -err9: - rxe_pool_cleanup(&rxe->mw_pool); -err8: - rxe_pool_cleanup(&rxe->mr_pool); -err7: - rxe_pool_cleanup(&rxe->cq_pool); -err6: - rxe_pool_cleanup(&rxe->qp_pool); -err5: - rxe_pool_cleanup(&rxe->srq_pool); -err4: - rxe_pool_cleanup(&rxe->ah_pool); -err3: - rxe_pool_cleanup(&rxe->pd_pool); -err2: - rxe_pool_cleanup(&rxe->uc_pool); -err1: - return err; } /* initialize rxe device state */ -static int rxe_init(struct rxe_dev *rxe) +static void rxe_init(struct rxe_dev *rxe) { - int err; - /* init default device parameters */ rxe_init_device_param(rxe); rxe_init_ports(rxe); - err = rxe_init_pools(rxe); - if (err) - return err; + rxe_init_pools(rxe); /* init pending mmap list */ spin_lock_init(&rxe->mmap_offset_lock); @@ -214,8 +148,6 @@ static int rxe_init(struct rxe_dev *rxe) INIT_LIST_HEAD(&rxe->pending_mmaps); mutex_init(&rxe->usdev_lock); - - return 0; } void rxe_set_mtu(struct rxe_dev *rxe, unsigned int ndev_mtu) @@ -237,11 +169,7 @@ void rxe_set_mtu(struct rxe_dev *rxe, unsigned int ndev_mtu) */ int rxe_add(struct rxe_dev *rxe, unsigned int mtu, const char *ibdev_name) { - int err; - - err = rxe_init(rxe); - if (err) - return err; + rxe_init(rxe); rxe_set_mtu(rxe, mtu); diff --git a/drivers/infiniband/sw/rxe/rxe_mcast.c b/drivers/infiniband/sw/rxe/rxe_mcast.c index bd1ac88b8700..1692526c5b57 100644 --- a/drivers/infiniband/sw/rxe/rxe_mcast.c +++ b/drivers/infiniband/sw/rxe/rxe_mcast.c @@ -44,7 +44,7 @@ int rxe_mcast_get_grp(struct rxe_dev *rxe, union ib_gid *mgid, if (rxe->attr.max_mcast_qp_attach == 0) return -EINVAL; - write_lock_bh(&pool->pool_lock); + rxe_pool_lock_bh(pool); grp = rxe_pool_get_key_locked(pool, mgid); if (grp) @@ -52,13 +52,13 @@ int rxe_mcast_get_grp(struct rxe_dev *rxe, union ib_gid *mgid, grp = create_grp(rxe, pool, mgid); if (IS_ERR(grp)) { - write_unlock_bh(&pool->pool_lock); + rxe_pool_unlock_bh(pool); err = PTR_ERR(grp); return err; } done: - write_unlock_bh(&pool->pool_lock); + rxe_pool_unlock_bh(pool); *grp_p = grp; return 0; } diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c index 25c78aade822..3c4390adfb80 100644 --- a/drivers/infiniband/sw/rxe/rxe_mr.c +++ b/drivers/infiniband/sw/rxe/rxe_mr.c @@ -693,7 +693,6 @@ int rxe_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata) mr->state = RXE_MR_STATE_INVALID; rxe_drop_ref(mr_pd(mr)); - rxe_drop_index(mr); rxe_drop_ref(mr); return 0; diff --git a/drivers/infiniband/sw/rxe/rxe_mw.c b/drivers/infiniband/sw/rxe/rxe_mw.c index 32dd8c0b8b9e..3ae981d77c25 100644 --- a/drivers/infiniband/sw/rxe/rxe_mw.c +++ b/drivers/infiniband/sw/rxe/rxe_mw.c @@ -20,7 +20,6 @@ int rxe_alloc_mw(struct ib_mw *ibmw, struct ib_udata *udata) return ret; } - rxe_add_index(mw); mw->rkey = ibmw->rkey = (mw->elem.index << 8) | rxe_get_next_key(-1); mw->state = (mw->ibmw.type == IB_MW_TYPE_2) ? 
RXE_MW_STATE_FREE : RXE_MW_STATE_VALID; @@ -332,7 +331,4 @@ struct rxe_mw *rxe_lookup_mw(struct rxe_qp *qp, int access, u32 rkey) void rxe_mw_cleanup(struct rxe_pool_elem *elem) { - struct rxe_mw *mw = container_of(elem, typeof(*mw), elem); - - rxe_drop_index(mw); } diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c index 4cb003885e00..8970115b11ef 100644 --- a/drivers/infiniband/sw/rxe/rxe_pool.c +++ b/drivers/infiniband/sw/rxe/rxe_pool.c @@ -97,37 +97,13 @@ static const struct rxe_type_info { }, }; -static int rxe_pool_init_index(struct rxe_pool *pool, u32 max, u32 min) -{ - int err = 0; - - if ((max - min + 1) < pool->max_elem) { - pr_warn("not enough indices for max_elem\n"); - err = -EINVAL; - goto out; - } - - pool->index.max_index = max; - pool->index.min_index = min; - - pool->index.table = bitmap_zalloc(max - min + 1, GFP_KERNEL); - if (!pool->index.table) { - err = -ENOMEM; - goto out; - } - -out: - return err; -} - -int rxe_pool_init( +void rxe_pool_init( struct rxe_dev *rxe, struct rxe_pool *pool, enum rxe_elem_type type, unsigned int max_elem) { const struct rxe_type_info *info = &rxe_type_info[type]; - int err = 0; memset(pool, 0, sizeof(*pool)); @@ -142,14 +118,13 @@ int rxe_pool_init( atomic_set(&pool->num_elem, 0); - rwlock_init(&pool->pool_lock); - if (pool->flags & RXE_POOL_INDEX) { - pool->index.tree = RB_ROOT; - err = rxe_pool_init_index(pool, info->max_index, - info->min_index); - if (err) - goto out; + xa_init_flags(&pool->xarray.xa, XA_FLAGS_ALLOC); + pool->xarray.limit.max = info->max_index; + pool->xarray.limit.min = info->min_index; + } else { + /* if pool not indexed just use xa spin_lock */ + spin_lock_init(&pool->xarray.xa.xa_lock); } if (pool->flags & RXE_POOL_KEY) { @@ -157,9 +132,6 @@ int rxe_pool_init( pool->key.key_offset = info->key_offset; pool->key.key_size = info->key_size; } - -out: - return err; } void rxe_pool_cleanup(struct rxe_pool *pool) @@ -167,51 +139,6 @@ void rxe_pool_cleanup(struct rxe_pool *pool) if (atomic_read(&pool->num_elem) > 0) pr_warn("%s pool destroyed with unfree'd elem\n", pool->name); - - if (pool->flags & RXE_POOL_INDEX) - bitmap_free(pool->index.table); -} - -static u32 alloc_index(struct rxe_pool *pool) -{ - u32 index; - u32 range = pool->index.max_index - pool->index.min_index + 1; - - index = find_next_zero_bit(pool->index.table, range, pool->index.last); - if (index >= range) - index = find_first_zero_bit(pool->index.table, range); - - WARN_ON_ONCE(index >= range); - set_bit(index, pool->index.table); - pool->index.last = index; - return index + pool->index.min_index; -} - -static int rxe_insert_index(struct rxe_pool *pool, struct rxe_pool_elem *new) -{ - struct rb_node **link = &pool->index.tree.rb_node; - struct rb_node *parent = NULL; - struct rxe_pool_elem *elem; - - while (*link) { - parent = *link; - elem = rb_entry(parent, struct rxe_pool_elem, index_node); - - if (elem->index == new->index) { - pr_warn("element already exists!\n"); - return -EINVAL; - } - - if (elem->index > new->index) - link = &(*link)->rb_left; - else - link = &(*link)->rb_right; - } - - rb_link_node(&new->index_node, parent, link); - rb_insert_color(&new->index_node, &pool->index.tree); - - return 0; } static int rxe_insert_key(struct rxe_pool *pool, struct rxe_pool_elem *new) @@ -262,9 +189,9 @@ int __rxe_add_key(struct rxe_pool_elem *elem, void *key) struct rxe_pool *pool = elem->pool; int err; - write_lock_bh(&pool->pool_lock); + rxe_pool_lock_bh(pool); err = __rxe_add_key_locked(elem, key); - 
write_unlock_bh(&pool->pool_lock); + rxe_pool_unlock_bh(pool); return err; } @@ -280,55 +207,16 @@ void __rxe_drop_key(struct rxe_pool_elem *elem) { struct rxe_pool *pool = elem->pool; - write_lock_bh(&pool->pool_lock); + rxe_pool_lock_bh(pool); __rxe_drop_key_locked(elem); - write_unlock_bh(&pool->pool_lock); -} - -int __rxe_add_index_locked(struct rxe_pool_elem *elem) -{ - struct rxe_pool *pool = elem->pool; - int err; - - elem->index = alloc_index(pool); - err = rxe_insert_index(pool, elem); - - return err; -} - -int __rxe_add_index(struct rxe_pool_elem *elem) -{ - struct rxe_pool *pool = elem->pool; - int err; - - write_lock_bh(&pool->pool_lock); - err = __rxe_add_index_locked(elem); - write_unlock_bh(&pool->pool_lock); - - return err; -} - -void __rxe_drop_index_locked(struct rxe_pool_elem *elem) -{ - struct rxe_pool *pool = elem->pool; - - clear_bit(elem->index - pool->index.min_index, pool->index.table); - rb_erase(&elem->index_node, &pool->index.tree); -} - -void __rxe_drop_index(struct rxe_pool_elem *elem) -{ - struct rxe_pool *pool = elem->pool; - - write_lock_bh(&pool->pool_lock); - __rxe_drop_index_locked(elem); - write_unlock_bh(&pool->pool_lock); + rxe_pool_unlock_bh(pool); } void *rxe_alloc_locked(struct rxe_pool *pool) { struct rxe_pool_elem *elem; void *obj; + int err; if (atomic_inc_return(&pool->num_elem) > pool->max_elem) goto out_cnt; @@ -343,8 +231,18 @@ void *rxe_alloc_locked(struct rxe_pool *pool) elem->obj = obj; kref_init(&elem->ref_cnt); + if (pool->flags & RXE_POOL_INDEX) { + err = xa_alloc_cyclic_bh(&pool->xarray.xa, &elem->index, elem, + pool->xarray.limit, + &pool->xarray.next, GFP_ATOMIC); + if (err) + goto out_free; + } + return obj; +out_free: + kfree(obj); out_cnt: atomic_dec(&pool->num_elem); return NULL; @@ -354,6 +252,7 @@ void *rxe_alloc(struct rxe_pool *pool) { struct rxe_pool_elem *elem; void *obj; + int err; if (atomic_inc_return(&pool->num_elem) > pool->max_elem) goto out_cnt; @@ -368,8 +267,18 @@ void *rxe_alloc(struct rxe_pool *pool) elem->obj = obj; kref_init(&elem->ref_cnt); + if (pool->flags & RXE_POOL_INDEX) { + err = xa_alloc_cyclic_bh(&pool->xarray.xa, &elem->index, elem, + pool->xarray.limit, + &pool->xarray.next, GFP_KERNEL); + if (err) + goto out_free; + } + return obj; +out_free: + kfree(obj); out_cnt: atomic_dec(&pool->num_elem); return NULL; @@ -377,6 +286,8 @@ void *rxe_alloc(struct rxe_pool *pool) int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_elem *elem) { + int err = -EINVAL; + if (atomic_inc_return(&pool->num_elem) > pool->max_elem) goto out_cnt; @@ -384,11 +295,19 @@ int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_elem *elem) elem->obj = (u8 *)elem - pool->elem_offset; kref_init(&elem->ref_cnt); + if (pool->flags & RXE_POOL_INDEX) { + err = xa_alloc_cyclic_bh(&pool->xarray.xa, &elem->index, elem, + pool->xarray.limit, + &pool->xarray.next, GFP_KERNEL); + if (err) + goto out_cnt; + } + return 0; out_cnt: atomic_dec(&pool->num_elem); - return -EINVAL; + return err; } void rxe_elem_release(struct kref *kref) @@ -398,6 +317,9 @@ void rxe_elem_release(struct kref *kref) struct rxe_pool *pool = elem->pool; void *obj; + if (pool->flags & RXE_POOL_INDEX) + __xa_erase(&pool->xarray.xa, elem->index); + if (pool->cleanup) pool->cleanup(elem); @@ -409,42 +331,27 @@ void rxe_elem_release(struct kref *kref) atomic_dec(&pool->num_elem); } -void *rxe_pool_get_index_locked(struct rxe_pool *pool, u32 index) +/** + * rxe_pool_get_index - lookup object from index + * @pool: the object pool + * @index: the index of the 
object + * + * Returns: the object if the index exists in the pool + * and the reference count on the object is positive + * else NULL + */ +void *rxe_pool_get_index(struct rxe_pool *pool, u32 index) { - struct rb_node *node; struct rxe_pool_elem *elem; void *obj; - node = pool->index.tree.rb_node; - - while (node) { - elem = rb_entry(node, struct rxe_pool_elem, index_node); - - if (elem->index > index) - node = node->rb_left; - else if (elem->index < index) - node = node->rb_right; - else - break; - } - - if (node) { - kref_get(&elem->ref_cnt); + rxe_pool_lock_bh(pool); + elem = xa_load(&pool->xarray.xa, index); + if (elem && kref_get_unless_zero(&elem->ref_cnt)) obj = elem->obj; - } else { + else obj = NULL; - } - - return obj; -} - -void *rxe_pool_get_index(struct rxe_pool *pool, u32 index) -{ - void *obj; - - read_lock_bh(&pool->pool_lock); - obj = rxe_pool_get_index_locked(pool, index); - read_unlock_bh(&pool->pool_lock); + rxe_pool_unlock_bh(pool); return obj; } @@ -486,9 +393,9 @@ void *rxe_pool_get_key(struct rxe_pool *pool, void *key) { void *obj; - read_lock_bh(&pool->pool_lock); + rxe_pool_lock_bh(pool); obj = rxe_pool_get_key_locked(pool, key); - read_unlock_bh(&pool->pool_lock); + rxe_pool_unlock_bh(pool); return obj; } diff --git a/drivers/infiniband/sw/rxe/rxe_pool.h b/drivers/infiniband/sw/rxe/rxe_pool.h index 214279310f4d..e84de5f59af1 100644 --- a/drivers/infiniband/sw/rxe/rxe_pool.h +++ b/drivers/infiniband/sw/rxe/rxe_pool.h @@ -37,14 +37,12 @@ struct rxe_pool_elem { struct rb_node key_node; /* only used if indexed */ - struct rb_node index_node; u32 index; }; struct rxe_pool { struct rxe_dev *rxe; const char *name; - rwlock_t pool_lock; /* protects pool add/del/search */ void (*cleanup)(struct rxe_pool_elem *obj); enum rxe_pool_flags flags; enum rxe_elem_type type; @@ -56,12 +54,10 @@ struct rxe_pool { /* only used if indexed */ struct { - struct rb_root tree; - unsigned long *table; - u32 last; - u32 max_index; - u32 min_index; - } index; + struct xarray xa; + struct xa_limit limit; + u32 next; + } xarray; /* only used if keyed */ struct { @@ -71,11 +67,10 @@ struct rxe_pool { } key; }; -/* initialize a pool of objects with given limit on - * number of elements. 
gets parameters from rxe_type_info - * pool elements will be allocated out of a slab cache - */ -int rxe_pool_init(struct rxe_dev *rxe, struct rxe_pool *pool, +#define rxe_pool_lock_bh(pool) xa_lock_bh(&pool->xarray.xa) +#define rxe_pool_unlock_bh(pool) xa_unlock_bh(&pool->xarray.xa) + +void rxe_pool_init(struct rxe_dev *rxe, struct rxe_pool *pool, enum rxe_elem_type type, u32 max_elem); /* free resources from object pool */ @@ -91,28 +86,6 @@ int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_elem *elem); #define rxe_add_to_pool(pool, obj) __rxe_add_to_pool(pool, &(obj)->elem) -/* assign an index to an indexed object and insert object into - * pool's rb tree holding and not holding the pool_lock - */ -int __rxe_add_index_locked(struct rxe_pool_elem *elem); - -#define rxe_add_index_locked(obj) __rxe_add_index_locked(&(obj)->elem) - -int __rxe_add_index(struct rxe_pool_elem *elem); - -#define rxe_add_index(obj) __rxe_add_index(&(obj)->elem) - -/* drop an index and remove object from rb tree - * holding and not holding the pool_lock - */ -void __rxe_drop_index_locked(struct rxe_pool_elem *elem); - -#define rxe_drop_index_locked(obj) __rxe_drop_index_locked(&(obj)->elem) - -void __rxe_drop_index(struct rxe_pool_elem *elem); - -#define rxe_drop_index(obj) __rxe_drop_index(&(obj)->elem) - /* assign a key to a keyed object and insert object into * pool's rb tree holding and not holding pool_lock */ @@ -133,11 +106,6 @@ void __rxe_drop_key(struct rxe_pool_elem *elem); #define rxe_drop_key(obj) __rxe_drop_key(&(obj)->elem) -/* lookup an indexed object from index holding and not holding the pool_lock. - * takes a reference on object - */ -void *rxe_pool_get_index_locked(struct rxe_pool *pool, u32 index); - void *rxe_pool_get_index(struct rxe_pool *pool, u32 index); /* lookup keyed object from key holding and not holding the pool_lock. 
@@ -150,10 +118,31 @@ void *rxe_pool_get_key(struct rxe_pool *pool, void *key); /* cleanup an object when all references are dropped */ void rxe_elem_release(struct kref *kref); -/* take a reference on an object */ -#define rxe_add_ref(obj) kref_get(&(obj)->elem.ref_cnt) +/** + * __rxe_add_ref() - adds a reference to a pool element + * @elem: pool element + * + * Returns: true if the kref_get succeeds else false + */ +static inline bool __rxe_add_ref(struct rxe_pool_elem *elem) +{ + return kref_get_unless_zero(&elem->ref_cnt); +} + +#define rxe_add_ref(obj) __rxe_add_ref(&(obj)->elem) + +/* drop a reference to an object */ +static inline bool __rxe_drop_ref(struct rxe_pool_elem *elem) +{ + bool ret; + + rxe_pool_lock_bh(elem->pool); + ret = kref_put(&elem->ref_cnt, rxe_elem_release); + rxe_pool_unlock_bh(elem->pool); + + return ret; +} -/* drop a reference on an object */ -#define rxe_drop_ref(obj) kref_put(&(obj)->elem.ref_cnt, rxe_elem_release) +#define rxe_drop_ref(obj) __rxe_drop_ref(&(obj)->elem) #endif /* RXE_POOL_H */ diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c index 07ca169110bf..e3f64eae088c 100644 --- a/drivers/infiniband/sw/rxe/rxe_verbs.c +++ b/drivers/infiniband/sw/rxe/rxe_verbs.c @@ -181,7 +181,6 @@ static int rxe_create_ah(struct ib_ah *ibah, return err; /* create index > 0 */ - rxe_add_index(ah); ah->ah_num = ah->elem.index; if (uresp) { @@ -189,7 +188,6 @@ static int rxe_create_ah(struct ib_ah *ibah, err = copy_to_user(&uresp->ah_num, &ah->ah_num, sizeof(uresp->ah_num)); if (err) { - rxe_drop_index(ah); rxe_drop_ref(ah); return -EFAULT; } @@ -230,7 +228,6 @@ static int rxe_destroy_ah(struct ib_ah *ibah, u32 flags) { struct rxe_ah *ah = to_rah(ibah); - rxe_drop_index(ah); rxe_drop_ref(ah); return 0; } @@ -437,7 +434,6 @@ static int rxe_create_qp(struct ib_qp *ibqp, struct ib_qp_init_attr *init, if (err) return err; - rxe_add_index(qp); err = rxe_qp_from_init(rxe, qp, pd, init, uresp, ibqp->pd, udata); if (err) goto qp_init; @@ -445,7 +441,6 @@ static int rxe_create_qp(struct ib_qp *ibqp, struct ib_qp_init_attr *init, return 0; qp_init: - rxe_drop_index(qp); rxe_drop_ref(qp); return err; } @@ -490,7 +485,6 @@ static int rxe_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata) struct rxe_qp *qp = to_rqp(ibqp); rxe_qp_destroy(qp); - rxe_drop_index(qp); rxe_drop_ref(qp); return 0; } @@ -893,7 +887,6 @@ static struct ib_mr *rxe_get_dma_mr(struct ib_pd *ibpd, int access) if (!mr) return ERR_PTR(-ENOMEM); - rxe_add_index(mr); rxe_add_ref(pd); rxe_mr_init_dma(pd, access, mr); @@ -917,7 +910,6 @@ static struct ib_mr *rxe_reg_user_mr(struct ib_pd *ibpd, goto err2; } - rxe_add_index(mr); rxe_add_ref(pd); @@ -929,7 +921,6 @@ static struct ib_mr *rxe_reg_user_mr(struct ib_pd *ibpd, err3: rxe_drop_ref(pd); - rxe_drop_index(mr); rxe_drop_ref(mr); err2: return ERR_PTR(err); @@ -952,8 +943,6 @@ static struct ib_mr *rxe_alloc_mr(struct ib_pd *ibpd, enum ib_mr_type mr_type, goto err1; } - rxe_add_index(mr); - rxe_add_ref(pd); err = rxe_mr_init_fast(pd, max_num_sg, mr); @@ -964,7 +953,6 @@ static struct ib_mr *rxe_alloc_mr(struct ib_pd *ibpd, enum ib_mr_type mr_type, err2: rxe_drop_ref(pd); - rxe_drop_index(mr); rxe_drop_ref(mr); err1: return ERR_PTR(err);
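For readers following the series, the pattern patch 1 adopts is easiest to see in isolation: indices come from cyclic xarray allocation, and lookups bump a kref under the xarray's own _bh spinlock, so they are safe against concurrent teardown from soft IRQ context. The following is a minimal, self-contained sketch of that pattern, not the driver's code; my_pool and my_obj are hypothetical stand-ins for rxe_pool and rxe_pool_elem.

    /* Illustrative sketch only (hypothetical my_* names), showing the
     * xarray + kref pattern adopted by this patch.
     */
    #include <linux/kref.h>
    #include <linux/slab.h>
    #include <linux/xarray.h>

    struct my_pool {
            struct xarray xa;       /* index -> object map with built-in lock */
            struct xa_limit limit;  /* allowed index range */
            u32 next;               /* cursor for cyclic index allocation */
    };

    struct my_obj {
            struct kref ref_cnt;
            u32 index;
    };

    static void my_pool_init(struct my_pool *pool, u32 min, u32 max)
    {
            xa_init_flags(&pool->xa, XA_FLAGS_ALLOC);
            pool->limit = (struct xa_limit){ .min = min, .max = max };
            pool->next = min;
    }

    static struct my_obj *my_alloc(struct my_pool *pool)
    {
            struct my_obj *obj = kzalloc(sizeof(*obj), GFP_KERNEL);

            if (!obj)
                    return NULL;
            kref_init(&obj->ref_cnt);
            /* pick the next free index in the limit range, store obj there */
            if (xa_alloc_cyclic_bh(&pool->xa, &obj->index, obj, pool->limit,
                                   &pool->next, GFP_KERNEL) < 0) {
                    kfree(obj);
                    return NULL;
            }
            return obj;
    }

    static struct my_obj *my_get(struct my_pool *pool, u32 index)
    {
            struct my_obj *obj;

            xa_lock_bh(&pool->xa);  /* the same lock __xa_erase() runs under */
            obj = xa_load(&pool->xa, index);
            /* an object whose last reference is gone counts as absent */
            if (obj && !kref_get_unless_zero(&obj->ref_cnt))
                    obj = NULL;
            xa_unlock_bh(&pool->xa);

            return obj;
    }

Because the driver's release path erases the index under that same lock before freeing, a concurrent lookup either misses the entry or fails kref_get_unless_zero(), so it can never return memory that is about to be freed.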
From patchwork Mon Dec 6 21:12:37 2021
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12659697
X-Patchwork-Delegate: jgg@ziepe.ca
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v6 2/8] RDMA/rxe: Reverse the sense of RXE_POOL_NO_ALLOC
Date: Mon, 6 Dec 2021 15:12:37 -0600
Message-Id: <20211206211242.15528-3-rpearsonhpe@gmail.com>
In-Reply-To: <20211206211242.15528-1-rpearsonhpe@gmail.com>
References: <20211206211242.15528-1-rpearsonhpe@gmail.com>

Since most rxe objects are now allocated in rdma-core, change the sense of RXE_POOL_NO_ALLOC to RXE_POOL_ALLOC. This makes the code easier to understand.
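The practical effect is easiest to see at the release site. Below is a compressed, illustrative sketch (hypothetical my_* names, not the driver's code) of the ownership rule the renamed flag expresses: only pools that kzalloc'd their own objects may kfree() them; objects embedded in structures allocated by rdma-core are left for rdma-core to free.

    #include <linux/bits.h>
    #include <linux/slab.h>

    enum pool_flags {
            POOL_INDEX = BIT(1),
            POOL_KEY   = BIT(2),
            POOL_ALLOC = BIT(4),    /* pool allocated the object itself */
    };

    struct my_pool {
            enum pool_flags flags;
    };

    /* called when the last reference to obj is dropped */
    static void my_elem_release(struct my_pool *pool, void *obj)
    {
            /* before this patch the test read !(flags & NO_ALLOC),
             * a double negative; ownership is now stated positively
             */
            if (pool->flags & POOL_ALLOC)
                    kfree(obj);
    }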
Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe_pool.c | 18 ++++++++---------- drivers/infiniband/sw/rxe/rxe_pool.h | 2 +- 2 files changed, 9 insertions(+), 11 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c index 8970115b11ef..599696883c44 100644 --- a/drivers/infiniband/sw/rxe/rxe_pool.c +++ b/drivers/infiniband/sw/rxe/rxe_pool.c @@ -23,19 +23,17 @@ static const struct rxe_type_info { .name = "rxe-uc", .size = sizeof(struct rxe_ucontext), .elem_offset = offsetof(struct rxe_ucontext, elem), - .flags = RXE_POOL_NO_ALLOC, }, [RXE_TYPE_PD] = { .name = "rxe-pd", .size = sizeof(struct rxe_pd), .elem_offset = offsetof(struct rxe_pd, elem), - .flags = RXE_POOL_NO_ALLOC, }, [RXE_TYPE_AH] = { .name = "rxe-ah", .size = sizeof(struct rxe_ah), .elem_offset = offsetof(struct rxe_ah, elem), - .flags = RXE_POOL_INDEX | RXE_POOL_NO_ALLOC, + .flags = RXE_POOL_INDEX, .min_index = RXE_MIN_AH_INDEX, .max_index = RXE_MAX_AH_INDEX, }, @@ -43,7 +41,7 @@ static const struct rxe_type_info { .name = "rxe-srq", .size = sizeof(struct rxe_srq), .elem_offset = offsetof(struct rxe_srq, elem), - .flags = RXE_POOL_INDEX | RXE_POOL_NO_ALLOC, + .flags = RXE_POOL_INDEX, .min_index = RXE_MIN_SRQ_INDEX, .max_index = RXE_MAX_SRQ_INDEX, }, @@ -52,7 +50,7 @@ static const struct rxe_type_info { .size = sizeof(struct rxe_qp), .elem_offset = offsetof(struct rxe_qp, elem), .cleanup = rxe_qp_cleanup, - .flags = RXE_POOL_INDEX | RXE_POOL_NO_ALLOC, + .flags = RXE_POOL_INDEX, .min_index = RXE_MIN_QP_INDEX, .max_index = RXE_MAX_QP_INDEX, }, @@ -60,7 +58,6 @@ static const struct rxe_type_info { .name = "rxe-cq", .size = sizeof(struct rxe_cq), .elem_offset = offsetof(struct rxe_cq, elem), - .flags = RXE_POOL_NO_ALLOC, .cleanup = rxe_cq_cleanup, }, [RXE_TYPE_MR] = { @@ -68,7 +65,7 @@ static const struct rxe_type_info { .size = sizeof(struct rxe_mr), .elem_offset = offsetof(struct rxe_mr, elem), .cleanup = rxe_mr_cleanup, - .flags = RXE_POOL_INDEX, + .flags = RXE_POOL_INDEX | RXE_POOL_ALLOC, .min_index = RXE_MIN_MR_INDEX, .max_index = RXE_MAX_MR_INDEX, }, @@ -77,7 +74,7 @@ static const struct rxe_type_info { .size = sizeof(struct rxe_mw), .elem_offset = offsetof(struct rxe_mw, elem), .cleanup = rxe_mw_cleanup, - .flags = RXE_POOL_INDEX | RXE_POOL_NO_ALLOC, + .flags = RXE_POOL_INDEX, .min_index = RXE_MIN_MW_INDEX, .max_index = RXE_MAX_MW_INDEX, }, @@ -86,7 +83,7 @@ static const struct rxe_type_info { .size = sizeof(struct rxe_mc_grp), .elem_offset = offsetof(struct rxe_mc_grp, elem), .cleanup = rxe_mc_cleanup, - .flags = RXE_POOL_KEY, + .flags = RXE_POOL_KEY | RXE_POOL_ALLOC, .key_offset = offsetof(struct rxe_mc_grp, mgid), .key_size = sizeof(union ib_gid), }, @@ -94,6 +91,7 @@ static const struct rxe_type_info { .name = "rxe-mc_elem", .size = sizeof(struct rxe_mc_elem), .elem_offset = offsetof(struct rxe_mc_elem, elem), + .flags = RXE_POOL_ALLOC, }, }; @@ -323,7 +321,7 @@ void rxe_elem_release(struct kref *kref) if (pool->cleanup) pool->cleanup(elem); - if (!(pool->flags & RXE_POOL_NO_ALLOC)) { + if (pool->flags & RXE_POOL_ALLOC) { obj = elem->obj; kfree(obj); } diff --git a/drivers/infiniband/sw/rxe/rxe_pool.h b/drivers/infiniband/sw/rxe/rxe_pool.h index e84de5f59af1..2731ede2310c 100644 --- a/drivers/infiniband/sw/rxe/rxe_pool.h +++ b/drivers/infiniband/sw/rxe/rxe_pool.h @@ -10,7 +10,7 @@ enum rxe_pool_flags { RXE_POOL_INDEX = BIT(1), RXE_POOL_KEY = BIT(2), - RXE_POOL_NO_ALLOC = BIT(4), + RXE_POOL_ALLOC = BIT(4), }; enum rxe_elem_type { From patchwork Mon Dec 6 21:12:38 
2021
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12659695
X-Patchwork-Delegate: jgg@ziepe.ca
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v6 3/8] RDMA/rxe: Cleanup pool APIs for keyed objects
Date: Mon, 6 Dec 2021 15:12:38 -0600
Message-Id: <20211206211242.15528-4-rpearsonhpe@gmail.com>
In-Reply-To: <20211206211242.15528-1-rpearsonhpe@gmail.com>
References: <20211206211242.15528-1-rpearsonhpe@gmail.com>

Simplify the rxe pool APIs for keyed objects.
Eliminate the xxx_locked() APIs. Merge rxe_drop_key into rxe_drop_ref. Replace the separate rxe_get_key and rxe_add_key calls by one call, rxe_pool_add_key(), which looks up and, if necessary, creates a new object.

Signed-off-by: Bob Pearson
---
v6
  Changed gfp_t flag in __rxe_alloc() to flags so rxe_alloc() and rxe_add_to_pool() use GFP_KERNEL.
---
 drivers/infiniband/sw/rxe/rxe_loc.h | 5 +-
 drivers/infiniband/sw/rxe/rxe_mcast.c | 46 ++---
 drivers/infiniband/sw/rxe/rxe_pool.c | 255 +++++++++++++-------------
 drivers/infiniband/sw/rxe/rxe_pool.h | 50 +----
 4 files changed, 143 insertions(+), 213 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h index b1e174afb1d4..6558602be751 100644 --- a/drivers/infiniband/sw/rxe/rxe_loc.h +++ b/drivers/infiniband/sw/rxe/rxe_loc.h @@ -40,17 +40,14 @@ void rxe_cq_disable(struct rxe_cq *cq); void rxe_cq_cleanup(struct rxe_pool_elem *arg); /* rxe_mcast.c */ +int rxe_init_grp(struct rxe_pool_elem *elem); int rxe_mcast_get_grp(struct rxe_dev *rxe, union ib_gid *mgid, struct rxe_mc_grp **grp_p); - int rxe_mcast_add_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp, struct rxe_mc_grp *grp); - int rxe_mcast_drop_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp, union ib_gid *mgid); - void rxe_drop_all_mcast_groups(struct rxe_qp *qp); - void rxe_mc_cleanup(struct rxe_pool_elem *arg); /* rxe_mmap.c */ diff --git a/drivers/infiniband/sw/rxe/rxe_mcast.c b/drivers/infiniband/sw/rxe/rxe_mcast.c index 1692526c5b57..e110c4d3fbf4 100644 --- a/drivers/infiniband/sw/rxe/rxe_mcast.c +++ b/drivers/infiniband/sw/rxe/rxe_mcast.c @@ -7,59 +7,38 @@ #include "rxe.h" #include "rxe_loc.h" -/* caller should hold mc_grp_pool->pool_lock */ -static struct rxe_mc_grp *create_grp(struct rxe_dev *rxe, - struct rxe_pool *pool, - union ib_gid *mgid) +int rxe_init_grp(struct rxe_pool_elem *elem) { + struct rxe_dev *rxe = elem->pool->rxe; + struct rxe_mc_grp *grp = elem->obj; int err; - struct rxe_mc_grp *grp; - - grp = rxe_alloc_locked(&rxe->mc_grp_pool); - if (!grp) - return ERR_PTR(-ENOMEM); INIT_LIST_HEAD(&grp->qp_list); spin_lock_init(&grp->mcg_lock); grp->rxe = rxe; - rxe_add_key_locked(grp, mgid); - err = rxe_mcast_add(rxe, mgid); - if (unlikely(err)) { - rxe_drop_key_locked(grp); + err = rxe_mcast_add(rxe, &grp->mgid); + if (err) rxe_drop_ref(grp); - return ERR_PTR(err); - } - return grp; + return err; } int rxe_mcast_get_grp(struct rxe_dev *rxe, union ib_gid *mgid, struct rxe_mc_grp **grp_p) { - int err; - struct rxe_mc_grp *grp; struct rxe_pool *pool = &rxe->mc_grp_pool; + struct rxe_mc_grp *grp; if (rxe->attr.max_mcast_qp_attach == 0) return -EINVAL; - rxe_pool_lock_bh(pool); - - grp = rxe_pool_get_key_locked(pool, mgid); - if (grp) - goto done; - - grp = create_grp(rxe, pool, mgid); - if (IS_ERR(grp)) { - rxe_pool_unlock_bh(pool); - err = PTR_ERR(grp); - return err; - } + grp = rxe_pool_add_key(pool, mgid); + if (!grp) + return -EINVAL; -done: - rxe_pool_unlock_bh(pool); *grp_p = grp; + return 0; } @@ -84,7 +63,7 @@ int rxe_mcast_add_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp, goto out; } - elem = rxe_alloc_locked(&rxe->mc_elem_pool); + elem = rxe_alloc(&rxe->mc_elem_pool); if (!elem) { err = -ENOMEM; goto out; @@ -173,6 +152,5 @@ void rxe_mc_cleanup(struct rxe_pool_elem *elem) { struct rxe_mc_grp *grp = container_of(elem, typeof(*grp), elem); struct rxe_dev *rxe = grp->rxe; - rxe_drop_key(grp); rxe_mcast_delete(rxe, &grp->mgid); } diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c index 599696883c44..eb3566b2ce01 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.c +++ b/drivers/infiniband/sw/rxe/rxe_pool.c @@ -12,7 +12,8 @@ static const struct rxe_type_info { const char *name; size_t size; size_t elem_offset; - void (*cleanup)(struct rxe_pool_elem *obj); + int (*init)(struct rxe_pool_elem *elem); + void (*cleanup)(struct rxe_pool_elem *elem); enum rxe_pool_flags flags; u32 min_index; u32 max_index; @@ -82,6 +83,7 @@ static const struct rxe_type_info { .name = "rxe-mc_grp", .size = sizeof(struct rxe_mc_grp), .elem_offset = offsetof(struct rxe_mc_grp, elem), + .init = rxe_init_grp, .cleanup = rxe_mc_cleanup, .flags = RXE_POOL_KEY | RXE_POOL_ALLOC, .key_offset = offsetof(struct rxe_mc_grp, mgid), @@ -112,6 +114,7 @@ void rxe_pool_init( pool->elem_size = ALIGN(info->size, RXE_POOL_ALIGN); pool->elem_offset = info->elem_offset; pool->flags = info->flags; + pool->init = info->init; pool->cleanup = info->cleanup; atomic_set(&pool->num_elem, 0); @@ -139,78 +142,7 @@ void rxe_pool_cleanup(struct rxe_pool *pool) pool->name); } -static int rxe_insert_key(struct rxe_pool *pool, struct rxe_pool_elem *new) -{ - struct rb_node **link = &pool->key.tree.rb_node; - struct rb_node *parent = NULL; - struct rxe_pool_elem *elem; - int cmp; - - while (*link) { - parent = *link; - elem = rb_entry(parent, struct rxe_pool_elem, key_node); - - cmp = memcmp((u8 *)elem + pool->key.key_offset, - (u8 *)new + pool->key.key_offset, - pool->key.key_size); - - if (cmp == 0) { - pr_warn("key already exists!\n"); - return -EINVAL; - } - - if (cmp > 0) - link = &(*link)->rb_left; - else - link = &(*link)->rb_right; - } - - rb_link_node(&new->key_node, parent, link); - rb_insert_color(&new->key_node, &pool->key.tree); - - return 0; -} - -int __rxe_add_key_locked(struct rxe_pool_elem *elem, void *key) -{ - struct rxe_pool *pool = elem->pool; - int err; - - memcpy((u8 *)elem + pool->key.key_offset, key, pool->key.key_size); - err = rxe_insert_key(pool, elem); - - return err; -} - -int __rxe_add_key(struct rxe_pool_elem *elem, void *key) -{ - struct rxe_pool *pool = elem->pool; - int err; - - rxe_pool_lock_bh(pool); - err = __rxe_add_key_locked(elem, key); - rxe_pool_unlock_bh(pool); - - return err; -} - -void __rxe_drop_key_locked(struct rxe_pool_elem *elem) -{ - struct rxe_pool *pool = elem->pool; - - rb_erase(&elem->key_node, &pool->key.tree); -} - -void __rxe_drop_key(struct rxe_pool_elem *elem) -{ - struct rxe_pool *pool = elem->pool; - - rxe_pool_lock_bh(pool); - __rxe_drop_key_locked(elem); - rxe_pool_unlock_bh(pool); -} - -void *rxe_alloc_locked(struct rxe_pool *pool) +static void *__rxe_alloc(struct rxe_pool *pool, gfp_t flags) { struct rxe_pool_elem *elem; void *obj; @@ -219,7 +151,7 @@ void *rxe_alloc_locked(struct rxe_pool *pool) if (atomic_inc_return(&pool->num_elem) > pool->max_elem) goto out_cnt; - obj = kzalloc(pool->elem_size, GFP_ATOMIC); + obj = kzalloc(pool->elem_size, flags); if (!obj) goto out_cnt; @@ -229,46 +161,16 @@ void *rxe_alloc_locked(struct rxe_pool *pool) elem->obj = obj; kref_init(&elem->ref_cnt); - if (pool->flags & RXE_POOL_INDEX) { - err = xa_alloc_cyclic_bh(&pool->xarray.xa, &elem->index, elem, - pool->xarray.limit, - &pool->xarray.next, GFP_ATOMIC); + if (pool->init) { + err = pool->init(elem); if (err) goto out_free; } - return obj; - -out_free: - kfree(obj); -out_cnt: - atomic_dec(&pool->num_elem); - return NULL; -} - -void *rxe_alloc(struct rxe_pool *pool) -{ - struct rxe_pool_elem *elem; - void *obj; - int err; - - if (atomic_inc_return(&pool->num_elem) > pool->max_elem) - goto out_cnt; - - obj = 
kzalloc(pool->elem_size, GFP_KERNEL); - if (!obj) - goto out_cnt; - - elem = (struct rxe_pool_elem *)((u8 *)obj + pool->elem_offset); - - elem->pool = pool; - elem->obj = obj; - kref_init(&elem->ref_cnt); - if (pool->flags & RXE_POOL_INDEX) { err = xa_alloc_cyclic_bh(&pool->xarray.xa, &elem->index, elem, pool->xarray.limit, - &pool->xarray.next, GFP_KERNEL); + &pool->xarray.next, flags); if (err) goto out_free; } @@ -282,6 +184,11 @@ void *rxe_alloc(struct rxe_pool *pool) return NULL; } +void *rxe_alloc(struct rxe_pool *pool) +{ + return __rxe_alloc(pool, GFP_KERNEL); +} + int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_elem *elem) { int err = -EINVAL; @@ -293,6 +200,12 @@ int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_elem *elem) elem->obj = (u8 *)elem - pool->elem_offset; kref_init(&elem->ref_cnt); + if (pool->init) { + err = pool->init(elem); + if (err) + goto out_cnt; + } + if (pool->flags & RXE_POOL_INDEX) { err = xa_alloc_cyclic_bh(&pool->xarray.xa, &elem->index, elem, pool->xarray.limit, @@ -308,27 +221,6 @@ int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_elem *elem) return err; } -void rxe_elem_release(struct kref *kref) -{ - struct rxe_pool_elem *elem = - container_of(kref, struct rxe_pool_elem, ref_cnt); - struct rxe_pool *pool = elem->pool; - void *obj; - - if (pool->flags & RXE_POOL_INDEX) - __xa_erase(&pool->xarray.xa, elem->index); - - if (pool->cleanup) - pool->cleanup(elem); - - if (pool->flags & RXE_POOL_ALLOC) { - obj = elem->obj; - kfree(obj); - } - - atomic_dec(&pool->num_elem); -} - /** * rxe_pool_get_index - lookup object from index * @pool: the object pool @@ -354,7 +246,8 @@ void *rxe_pool_get_index(struct rxe_pool *pool, u32 index) return obj; } -void *rxe_pool_get_key_locked(struct rxe_pool *pool, void *key) +/* lookup key in pool. Caller must hold pool lock */ +static void *__rxe_get_key(struct rxe_pool *pool, void *key) { struct rb_node *node; struct rxe_pool_elem *elem; @@ -366,7 +259,7 @@ void *rxe_pool_get_key_locked(struct rxe_pool *pool, void *key) while (node) { elem = rb_entry(node, struct rxe_pool_elem, key_node); - cmp = memcmp((u8 *)elem + pool->key.key_offset, + cmp = memcmp((u8 *)elem->obj + pool->key.key_offset, key, pool->key.key_size); if (cmp > 0) @@ -387,13 +280,113 @@ void *rxe_pool_get_key_locked(struct rxe_pool *pool, void *key) return obj; } +/* add key to pool. 
Caller must hold pool lock */ +static int __rxe_add_key(struct rxe_pool_elem *new, void *key) +{ + struct rxe_pool *pool = new->pool; + struct rb_node **link = &pool->key.tree.rb_node; + struct rb_node *parent = NULL; + struct rxe_pool_elem *elem; + int cmp; + + while (*link) { + parent = *link; + elem = rb_entry(parent, struct rxe_pool_elem, key_node); + + cmp = memcmp(key, (u8 *)elem->obj + pool->key.key_offset, + pool->key.key_size); + if (cmp == 0) { + pr_warn("key already exists!\n"); + return -EINVAL; + } + + if (cmp > 0) + link = &(*link)->rb_left; + else + link = &(*link)->rb_right; + } + + rb_link_node(&new->key_node, parent, link); + rb_insert_color(&new->key_node, &pool->key.tree); + + memcpy((u8 *)new->obj + pool->key.key_offset, key, + pool->key.key_size); + + return 0; +} + +/** + * rxe_pool_get_key() - lookup key in pool and return object + * @pool: the object pool + * @key: the key + * + * Returns: if the object matching key is present in pool + * return its address and take a reference else NULL + */ void *rxe_pool_get_key(struct rxe_pool *pool, void *key) { void *obj; rxe_pool_lock_bh(pool); - obj = rxe_pool_get_key_locked(pool, key); + obj = __rxe_get_key(pool, key); + rxe_pool_unlock_bh(pool); + + return obj; +} + +/** + * rxe_pool_add_key() - lookup or add object with key in pool + * @pool: the object pool + * @key: the key + * + * Returns: If object matching key is present in pool return + * its address and take a reference else allocate a + * new object to pool with key and return its address + * with one reference. + */ +void *rxe_pool_add_key(struct rxe_pool *pool, void *key) +{ + void *obj; + + rxe_pool_lock_bh(pool); + obj = __rxe_get_key(pool, key); + if (obj) + goto done; + + obj = __rxe_alloc(pool, GFP_ATOMIC); + if (!obj) + goto done; + + __rxe_add_key(obj, key); +done: rxe_pool_unlock_bh(pool); return obj; } + +/** + * rxe_elem_release() - cleanup pool element when last reference dropped + * @kref: address of the kref contained in pool element + * + * Caller should hold pool lock + */ +void rxe_elem_release(struct kref *kref) +{ + struct rxe_pool_elem *elem = + container_of(kref, struct rxe_pool_elem, ref_cnt); + struct rxe_pool *pool = elem->pool; + + if (pool->flags & RXE_POOL_INDEX) + __xa_erase(&pool->xarray.xa, elem->index); + + if (pool->flags & RXE_POOL_KEY) + rb_erase(&elem->key_node, &pool->key.tree); + + if (pool->cleanup) + pool->cleanup(elem); + + if (pool->flags & RXE_POOL_ALLOC) + kfree(elem->obj); + + atomic_dec(&pool->num_elem); +} diff --git a/drivers/infiniband/sw/rxe/rxe_pool.h b/drivers/infiniband/sw/rxe/rxe_pool.h index 2731ede2310c..01f23f57d666 100644 --- a/drivers/infiniband/sw/rxe/rxe_pool.h +++ b/drivers/infiniband/sw/rxe/rxe_pool.h @@ -43,7 +43,8 @@ struct rxe_pool_elem { struct rxe_pool { struct rxe_dev *rxe; const char *name; - void (*cleanup)(struct rxe_pool_elem *obj); + int (*init)(struct rxe_pool_elem *elem); + void (*cleanup)(struct rxe_pool_elem *elem); enum rxe_pool_flags flags; enum rxe_elem_type type; @@ -71,67 +72,29 @@ struct rxe_pool { #define rxe_pool_unlock_bh(pool) xa_unlock_bh(&pool->xarray.xa) void rxe_pool_init(struct rxe_dev *rxe, struct rxe_pool *pool, - enum rxe_elem_type type, u32 max_elem); + enum rxe_elem_type type, u32 max_elem); -/* free resources from object pool */ void rxe_pool_cleanup(struct rxe_pool *pool); -/* allocate an object from pool holding and not holding the pool lock */ -void *rxe_alloc_locked(struct rxe_pool *pool); - void *rxe_alloc(struct rxe_pool *pool); -/* connect already allocated 
object to pool */ int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_elem *elem); - #define rxe_add_to_pool(pool, obj) __rxe_add_to_pool(pool, &(obj)->elem) -/* assign a key to a keyed object and insert object into - * pool's rb tree holding and not holding pool_lock - */ -int __rxe_add_key_locked(struct rxe_pool_elem *elem, void *key); - -#define rxe_add_key_locked(obj, key) __rxe_add_key_locked(&(obj)->elem, key) - -int __rxe_add_key(struct rxe_pool_elem *elem, void *key); - -#define rxe_add_key(obj, key) __rxe_add_key(&(obj)->elem, key) - -/* remove elem from rb tree holding and not holding the pool_lock */ -void __rxe_drop_key_locked(struct rxe_pool_elem *elem); - -#define rxe_drop_key_locked(obj) __rxe_drop_key_locked(&(obj)->elem) - -void __rxe_drop_key(struct rxe_pool_elem *elem); - -#define rxe_drop_key(obj) __rxe_drop_key(&(obj)->elem) - void *rxe_pool_get_index(struct rxe_pool *pool, u32 index); -/* lookup keyed object from key holding and not holding the pool_lock. - * takes a reference on the objecti - */ -void *rxe_pool_get_key_locked(struct rxe_pool *pool, void *key); - void *rxe_pool_get_key(struct rxe_pool *pool, void *key); -/* cleanup an object when all references are dropped */ -void rxe_elem_release(struct kref *kref); +void *rxe_pool_add_key(struct rxe_pool *pool, void *key); -/** - * __rxe_add_ref() - adds a reference to a pool element - * @elem: pool element - * - * Returns: true if the kref_get succeeds else false - */ static inline bool __rxe_add_ref(struct rxe_pool_elem *elem) { return kref_get_unless_zero(&elem->ref_cnt); } - #define rxe_add_ref(obj) __rxe_add_ref(&(obj)->elem) -/* drop a reference to an object */ +void rxe_elem_release(struct kref *kref); + static inline bool __rxe_drop_ref(struct rxe_pool_elem *elem) { bool ret; @@ -142,7 +105,6 @@ static inline bool __rxe_drop_ref(struct rxe_pool_elem *elem) return ret; } - #define rxe_drop_ref(obj) __rxe_drop_ref(&(obj)->elem) #endif /* RXE_POOL_H */
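Patch 3's central move is folding lookup, allocation, and key insertion into one critical section, so two racing rxe_mcast_get_grp() callers cannot both miss and then both insert the same mgid. Below is a minimal, self-contained sketch of that lookup-or-create shape; it uses a plain list where the driver keeps an RB tree, and every name in it is illustrative rather than the driver's.

    #include <linux/kref.h>
    #include <linux/list.h>
    #include <linux/slab.h>
    #include <linux/spinlock.h>
    #include <linux/string.h>

    struct key_pool {
            spinlock_t lock;        /* stands in for the xarray's xa_lock */
            struct list_head objs;
            size_t key_size;
    };

    struct key_obj {
            struct list_head node;
            struct kref ref_cnt;
            u8 key[16];
    };

    /* lookup-or-create: the whole operation is one critical section */
    static struct key_obj *pool_add_key(struct key_pool *pool, const void *key)
    {
            struct key_obj *obj;

            spin_lock_bh(&pool->lock);

            list_for_each_entry(obj, &pool->objs, node) {
                    if (!memcmp(obj->key, key, pool->key_size) &&
                        kref_get_unless_zero(&obj->ref_cnt))
                            goto done;      /* found: return it with a reference */
            }

            /* not found: create it without dropping the lock, so a racing
             * caller cannot insert a duplicate key in between; GFP_ATOMIC
             * because sleeping is not allowed under a spinlock
             */
            obj = kzalloc(sizeof(*obj), GFP_ATOMIC);
            if (obj) {
                    kref_init(&obj->ref_cnt);
                    memcpy(obj->key, key, pool->key_size);
                    list_add(&obj->node, &pool->objs);
            }
    done:
            spin_unlock_bh(&pool->lock);
            return obj;
    }

The GFP_ATOMIC allocation is the price of staying inside the lock, which is why this patch threads a flags argument through __rxe_alloc(): rxe_pool_add_key() passes GFP_ATOMIC while plain rxe_alloc() keeps GFP_KERNEL.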
From patchwork Mon Dec 6 21:12:39 2021
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12659689
X-Patchwork-Delegate: jgg@ziepe.ca
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v6 4/8] RDMA/rxe: Fix ref error in rxe_av.c
Date: Mon, 6 Dec 2021 15:12:39 -0600
Message-Id: <20211206211242.15528-5-rpearsonhpe@gmail.com>
In-Reply-To: <20211206211242.15528-1-rpearsonhpe@gmail.com>
References: <20211206211242.15528-1-rpearsonhpe@gmail.com>

The commit referenced below can take a reference to the AH, which is never dropped. This only happens in the UD request path. This patch optionally passes that AH back to the caller so that it can hold the reference while the AV is being accessed and then drop it. Code to do this is added to rxe_req.c. The AV is also passed to rxe_prepare in rxe_net.c as an optimization.
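The caller-side shape of the fix, sketched as a hypothetical function (send_one() and its error handling are illustrative; rxe_get_av(), rxe_prepare() and rxe_drop_ref() are the interfaces this patch establishes):

    static int send_one(struct rxe_pkt_info *pkt, struct sk_buff *skb)
    {
            struct rxe_ah *ah = NULL;
            struct rxe_av *av;
            int err;

            /* for UD QPs this looks up the AH and returns it in ah with
             * a reference held; otherwise av points into the QP and ah
             * stays NULL
             */
            av = rxe_get_av(pkt, &ah);
            if (!av)
                    return -EINVAL;

            err = rxe_prepare(av, pkt, skb);  /* av cannot go away here */

            if (ah)
                    rxe_drop_ref(ah);         /* drop the lookup reference */

            return err;
    }

The unpatched lookup took this reference internally and gave the caller no handle to drop it, which leaked the AH; returning it through ahp makes the caller responsible for the drop once it is done with the AV.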
Fixes: e2fe06c90806 ("RDMA/rxe: Lookup kernel AH from ah index in UD WQEs") Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe_av.c | 19 +++++++++- drivers/infiniband/sw/rxe/rxe_loc.h | 5 ++- drivers/infiniband/sw/rxe/rxe_net.c | 17 +++++---- drivers/infiniband/sw/rxe/rxe_req.c | 55 +++++++++++++++++----------- drivers/infiniband/sw/rxe/rxe_resp.c | 2 +- 5 files changed, 63 insertions(+), 35 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe_av.c b/drivers/infiniband/sw/rxe/rxe_av.c index 38c7b6fb39d7..360a567159fe 100644 --- a/drivers/infiniband/sw/rxe/rxe_av.c +++ b/drivers/infiniband/sw/rxe/rxe_av.c @@ -99,11 +99,14 @@ void rxe_av_fill_ip_info(struct rxe_av *av, struct rdma_ah_attr *attr) av->network_type = type; } -struct rxe_av *rxe_get_av(struct rxe_pkt_info *pkt) +struct rxe_av *rxe_get_av(struct rxe_pkt_info *pkt, struct rxe_ah **ahp) { struct rxe_ah *ah; u32 ah_num; + if (ahp) + *ahp = NULL; + if (!pkt || !pkt->qp) return NULL; @@ -117,10 +120,22 @@ struct rxe_av *rxe_get_av(struct rxe_pkt_info *pkt) if (ah_num) { /* only new user provider or kernel client */ ah = rxe_pool_get_index(&pkt->rxe->ah_pool, ah_num); - if (!ah || ah->ah_num != ah_num || rxe_ah_pd(ah) != pkt->qp->pd) { + if (!ah) { pr_warn("Unable to find AH matching ah_num\n"); return NULL; } + + if (rxe_ah_pd(ah) != pkt->qp->pd) { + pr_warn("PDs don't match for AH and QP\n"); + rxe_drop_ref(ah); + return NULL; + } + + if (ahp) + *ahp = ah; + else + rxe_drop_ref(ah); + return &ah->av; } diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h index 6558602be751..02d57c894e34 100644 --- a/drivers/infiniband/sw/rxe/rxe_loc.h +++ b/drivers/infiniband/sw/rxe/rxe_loc.h @@ -19,7 +19,7 @@ void rxe_av_to_attr(struct rxe_av *av, struct rdma_ah_attr *attr); void rxe_av_fill_ip_info(struct rxe_av *av, struct rdma_ah_attr *attr); -struct rxe_av *rxe_get_av(struct rxe_pkt_info *pkt); +struct rxe_av *rxe_get_av(struct rxe_pkt_info *pkt, struct rxe_ah **ahp); /* rxe_cq.c */ int rxe_cq_chk_attr(struct rxe_dev *rxe, struct rxe_cq *cq, @@ -99,7 +99,8 @@ void rxe_mw_cleanup(struct rxe_pool_elem *arg); /* rxe_net.c */ struct sk_buff *rxe_init_packet(struct rxe_dev *rxe, struct rxe_av *av, int paylen, struct rxe_pkt_info *pkt); -int rxe_prepare(struct rxe_pkt_info *pkt, struct sk_buff *skb); +int rxe_prepare(struct rxe_av *av, struct rxe_pkt_info *pkt, + struct sk_buff *skb); int rxe_xmit_packet(struct rxe_qp *qp, struct rxe_pkt_info *pkt, struct sk_buff *skb); const char *rxe_parent_name(struct rxe_dev *rxe, unsigned int port_num); diff --git a/drivers/infiniband/sw/rxe/rxe_net.c b/drivers/infiniband/sw/rxe/rxe_net.c index 2cb810cb890a..456e960cacd7 100644 --- a/drivers/infiniband/sw/rxe/rxe_net.c +++ b/drivers/infiniband/sw/rxe/rxe_net.c @@ -293,13 +293,13 @@ static void prepare_ipv6_hdr(struct dst_entry *dst, struct sk_buff *skb, ip6h->payload_len = htons(skb->len - sizeof(*ip6h)); } -static int prepare4(struct rxe_pkt_info *pkt, struct sk_buff *skb) +static int prepare4(struct rxe_av *av, struct rxe_pkt_info *pkt, + struct sk_buff *skb) { struct rxe_qp *qp = pkt->qp; struct dst_entry *dst; bool xnet = false; __be16 df = htons(IP_DF); - struct rxe_av *av = rxe_get_av(pkt); struct in_addr *saddr = &av->sgid_addr._sockaddr_in.sin_addr; struct in_addr *daddr = &av->dgid_addr._sockaddr_in.sin_addr; @@ -319,11 +319,11 @@ static int prepare4(struct rxe_pkt_info *pkt, struct sk_buff *skb) return 0; } -static int prepare6(struct rxe_pkt_info *pkt, struct sk_buff *skb) +static int prepare6(struct 
rxe_av *av, struct rxe_pkt_info *pkt, + struct sk_buff *skb) { struct rxe_qp *qp = pkt->qp; struct dst_entry *dst; - struct rxe_av *av = rxe_get_av(pkt); struct in6_addr *saddr = &av->sgid_addr._sockaddr_in6.sin6_addr; struct in6_addr *daddr = &av->dgid_addr._sockaddr_in6.sin6_addr; @@ -344,16 +344,17 @@ static int prepare6(struct rxe_pkt_info *pkt, struct sk_buff *skb) return 0; } -int rxe_prepare(struct rxe_pkt_info *pkt, struct sk_buff *skb) +int rxe_prepare(struct rxe_av *av, struct rxe_pkt_info *pkt, + struct sk_buff *skb) { int err = 0; if (skb->protocol == htons(ETH_P_IP)) - err = prepare4(pkt, skb); + err = prepare4(av, pkt, skb); else if (skb->protocol == htons(ETH_P_IPV6)) - err = prepare6(pkt, skb); + err = prepare6(av, pkt, skb); - if (ether_addr_equal(skb->dev->dev_addr, rxe_get_av(pkt)->dmac)) + if (ether_addr_equal(skb->dev->dev_addr, av->dmac)) pkt->mask |= RXE_LOOPBACK_MASK; return err; diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c index c8d674da5cc2..7bc1ec8a5aa6 100644 --- a/drivers/infiniband/sw/rxe/rxe_req.c +++ b/drivers/infiniband/sw/rxe/rxe_req.c @@ -358,6 +358,7 @@ static inline int get_mtu(struct rxe_qp *qp) } static struct sk_buff *init_req_packet(struct rxe_qp *qp, + struct rxe_av *av, struct rxe_send_wqe *wqe, int opcode, int payload, struct rxe_pkt_info *pkt) @@ -365,7 +366,6 @@ static struct sk_buff *init_req_packet(struct rxe_qp *qp, struct rxe_dev *rxe = to_rdev(qp->ibqp.device); struct sk_buff *skb; struct rxe_send_wr *ibwr = &wqe->wr; - struct rxe_av *av; int pad = (-payload) & 0x3; int paylen; int solicited; @@ -375,21 +375,9 @@ static struct sk_buff *init_req_packet(struct rxe_qp *qp, /* length from start of bth to end of icrc */ paylen = rxe_opcode[opcode].length + payload + pad + RXE_ICRC_SIZE; - - /* pkt->hdr, port_num and mask are initialized in ifc layer */ - pkt->rxe = rxe; - pkt->opcode = opcode; - pkt->qp = qp; - pkt->psn = qp->req.psn; - pkt->mask = rxe_opcode[opcode].mask; - pkt->paylen = paylen; - pkt->wqe = wqe; + pkt->paylen = paylen; /* init skb */ - av = rxe_get_av(pkt); - if (!av) - return NULL; - skb = rxe_init_packet(rxe, av, paylen, pkt); if (unlikely(!skb)) return NULL; @@ -450,13 +438,13 @@ static struct sk_buff *init_req_packet(struct rxe_qp *qp, return skb; } -static int finish_packet(struct rxe_qp *qp, struct rxe_send_wqe *wqe, - struct rxe_pkt_info *pkt, struct sk_buff *skb, - int paylen) +static int finish_packet(struct rxe_qp *qp, struct rxe_av *av, + struct rxe_send_wqe *wqe, struct rxe_pkt_info *pkt, + struct sk_buff *skb, int paylen) { int err; - err = rxe_prepare(pkt, skb); + err = rxe_prepare(av, pkt, skb); if (err) return err; @@ -611,6 +599,7 @@ static int rxe_do_local_ops(struct rxe_qp *qp, struct rxe_send_wqe *wqe) int rxe_requester(void *arg) { struct rxe_qp *qp = (struct rxe_qp *)arg; + struct rxe_dev *rxe = to_rdev(qp->ibqp.device); struct rxe_pkt_info pkt; struct sk_buff *skb; struct rxe_send_wqe *wqe; @@ -622,6 +611,8 @@ int rxe_requester(void *arg) struct rxe_send_wqe rollback_wqe; u32 rollback_psn; struct rxe_queue *q = qp->sq.queue; + struct rxe_ah *ah; + struct rxe_av *av; rxe_add_ref(qp); @@ -708,14 +699,28 @@ int rxe_requester(void *arg) payload = mtu; } - skb = init_req_packet(qp, wqe, opcode, payload, &pkt); + pkt.rxe = rxe; + pkt.opcode = opcode; + pkt.qp = qp; + pkt.psn = qp->req.psn; + pkt.mask = rxe_opcode[opcode].mask; + pkt.wqe = wqe; + + av = rxe_get_av(&pkt, &ah); + if (unlikely(!av)) { + pr_err("qp#%d Failed no address vector\n", qp_num(qp)); + wqe->status 
= IB_WC_LOC_QP_OP_ERR; + goto err_drop_ah; + } + + skb = init_req_packet(qp, av, wqe, opcode, payload, &pkt); if (unlikely(!skb)) { pr_err("qp#%d Failed allocating skb\n", qp_num(qp)); wqe->status = IB_WC_LOC_QP_OP_ERR; - goto err; + goto err_drop_ah; } - ret = finish_packet(qp, wqe, &pkt, skb, payload); + ret = finish_packet(qp, av, wqe, &pkt, skb, payload); if (unlikely(ret)) { pr_debug("qp#%d Error during finish packet\n", qp_num(qp)); if (ret == -EFAULT) @@ -723,9 +728,12 @@ int rxe_requester(void *arg) else wqe->status = IB_WC_LOC_QP_OP_ERR; kfree_skb(skb); - goto err; + goto err_drop_ah; } + if (ah) + rxe_drop_ref(ah); + /* * To prevent a race on wqe access between requester and completer, * wqe members state and psn need to be set before calling @@ -754,6 +762,9 @@ int rxe_requester(void *arg) goto next_wqe; +err_drop_ah: + if (ah) + rxe_drop_ref(ah); err: wqe->state = wqe_state_error; __rxe_do_task(&qp->comp.task); diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c index e8f435fa6e4d..f589f4dde35c 100644 --- a/drivers/infiniband/sw/rxe/rxe_resp.c +++ b/drivers/infiniband/sw/rxe/rxe_resp.c @@ -632,7 +632,7 @@ static struct sk_buff *prepare_ack_packet(struct rxe_qp *qp, if (ack->mask & RXE_ATMACK_MASK) atmack_set_orig(ack, qp->resp.atomic_orig); - err = rxe_prepare(ack, skb); + err = rxe_prepare(&qp->pri_av, ack, skb); if (err) { kfree_skb(skb); return NULL;

From patchwork Mon Dec 6 21:12:40 2021
X-Patchwork-Id: 12659685
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v6 5/8] RDMA/rxe: Replace mr by rkey in responder resources
Date: Mon, 6 Dec 2021 15:12:40 -0600
Message-Id: <20211206211242.15528-6-rpearsonhpe@gmail.com>
In-Reply-To: <20211206211242.15528-1-rpearsonhpe@gmail.com>

Currently rxe saves a copy of the MR in the responder resources for RDMA reads. Since responder resources are never freed, only overwritten when more are needed, this MR may not have its reference dropped until the QP is destroyed. This patch stores the rkey instead of the MR and, on subsequent packets of a multipacket read reply message, looks up the MR from the rkey for each packet. This makes it possible for a user to deregister an MR or unbind an MW on the fly and get correct behaviour.
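The per-packet revalidation that replaces the cached MR pointer has a simple shape: recover the pool index from the high bits of the rkey, look the object up, and reject it if the key or state no longer matches. A condensed user-space sketch of that check follows; the table, mr_state and the 8-bit key split are modeled on the diff below and are not a reusable API:

#include <stdint.h>

enum mr_state { MR_STATE_FREE, MR_STATE_VALID, MR_STATE_INVALID };

struct mr {
	uint32_t rkey;		/* pool index in the upper 24 bits */
	enum mr_state state;
};

/* Toy index->object table standing in for rxe_pool_get_index(). */
static struct mr *table[256];

static struct mr *recheck_mr(uint32_t rkey)
{
	struct mr *mr = table[(rkey >> 8) & 0xff];

	/* Stale rkey: slot reused or key invalidated since the request. */
	if (!mr || mr->rkey != rkey)
		return NULL;

	/* Deregistered or invalidated between packets. */
	if (mr->state != MR_STATE_VALID)
		return NULL;

	return mr;
}

Because every read-response packet repeats this check, a deregistration between packets surfaces as an rkey violation instead of a use-after-free.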
Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe_qp.c | 10 +-- drivers/infiniband/sw/rxe/rxe_resp.c | 123 ++++++++++++++++++-------- drivers/infiniband/sw/rxe/rxe_verbs.h | 1 - 3 files changed, 87 insertions(+), 47 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c index 864bb3ef145f..4922a26bb5fc 100644 --- a/drivers/infiniband/sw/rxe/rxe_qp.c +++ b/drivers/infiniband/sw/rxe/rxe_qp.c @@ -135,12 +135,8 @@ static void free_rd_atomic_resources(struct rxe_qp *qp) void free_rd_atomic_resource(struct rxe_qp *qp, struct resp_res *res) { - if (res->type == RXE_ATOMIC_MASK) { + if (res->type == RXE_ATOMIC_MASK) kfree_skb(res->atomic.skb); - } else if (res->type == RXE_READ_MASK) { - if (res->read.mr) - rxe_drop_ref(res->read.mr); - } res->type = 0; } @@ -816,10 +812,8 @@ static void rxe_qp_do_cleanup(struct work_struct *work) if (qp->pd) rxe_drop_ref(qp->pd); - if (qp->resp.mr) { + if (qp->resp.mr) rxe_drop_ref(qp->resp.mr); - qp->resp.mr = NULL; - } if (qp_type(qp) == IB_QPT_RC) sk_dst_reset(qp->sk->sk); diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c index f589f4dde35c..c776289842e5 100644 --- a/drivers/infiniband/sw/rxe/rxe_resp.c +++ b/drivers/infiniband/sw/rxe/rxe_resp.c @@ -641,6 +641,78 @@ static struct sk_buff *prepare_ack_packet(struct rxe_qp *qp, return skb; } +static struct resp_res *rxe_prepare_read_res(struct rxe_qp *qp, + struct rxe_pkt_info *pkt) +{ + struct resp_res *res; + u32 pkts; + + res = &qp->resp.resources[qp->resp.res_head]; + rxe_advance_resp_resource(qp); + free_rd_atomic_resource(qp, res); + + res->type = RXE_READ_MASK; + res->replay = 0; + res->read.va = qp->resp.va + qp->resp.offset; + res->read.va_org = qp->resp.va + qp->resp.offset; + res->read.resid = qp->resp.resid; + res->read.length = qp->resp.resid; + res->read.rkey = qp->resp.rkey; + + pkts = max_t(u32, (reth_len(pkt) + qp->mtu - 1)/qp->mtu, 1); + res->first_psn = pkt->psn; + res->cur_psn = pkt->psn; + res->last_psn = (pkt->psn + pkts - 1) & BTH_PSN_MASK; + + res->state = rdatm_res_state_new; + + return res; +} + +/** + * rxe_recheck_mr - revalidate MR from rkey and get a reference + * @qp: the qp + * @rkey: the rkey + * + * This code allows the MR to be invalidated or deregistered or + * the MW if one was used to be invalidated or deallocated. + * It is assumed that the access permissions if originally good + * are OK and the mappings to be unchanged. + * + * Return: mr on success else NULL + */ +static struct rxe_mr *rxe_recheck_mr(struct rxe_qp *qp, u32 rkey) +{ + struct rxe_dev *rxe = to_rdev(qp->ibqp.device); + struct rxe_mr *mr; + struct rxe_mw *mw; + + if (rkey_is_mw(rkey)) { + mw = rxe_pool_get_index(&rxe->mw_pool, rkey >> 8); + if (!mw || mw->rkey != rkey) + return NULL; + + if (mw->state != RXE_MW_STATE_VALID) { + rxe_drop_ref(mw); + return NULL; + } + + mr = mw->mr; + rxe_drop_ref(mw); + } else { + mr = rxe_pool_get_index(&rxe->mr_pool, rkey >> 8); + if (!mr || mr->rkey != rkey) + return NULL; + } + + if (mr->state != RXE_MR_STATE_VALID) { + rxe_drop_ref(mr); + return NULL; + } + + return mr; +} + /* RDMA read response. If res is not NULL, then we have a current RDMA request * being processed or replayed. */ @@ -655,53 +727,26 @@ static enum resp_states read_reply(struct rxe_qp *qp, int opcode; int err; struct resp_res *res = qp->resp.res; + struct rxe_mr *mr; if (!res) { - /* This is the first time we process that request. 
Get a - * resource - */ - res = &qp->resp.resources[qp->resp.res_head]; - - free_rd_atomic_resource(qp, res); - rxe_advance_resp_resource(qp); - - res->type = RXE_READ_MASK; - res->replay = 0; - - res->read.va = qp->resp.va + - qp->resp.offset; - res->read.va_org = qp->resp.va + - qp->resp.offset; - - res->first_psn = req_pkt->psn; - - if (reth_len(req_pkt)) { - res->last_psn = (req_pkt->psn + - (reth_len(req_pkt) + mtu - 1) / - mtu - 1) & BTH_PSN_MASK; - } else { - res->last_psn = res->first_psn; - } - res->cur_psn = req_pkt->psn; - - res->read.resid = qp->resp.resid; - res->read.length = qp->resp.resid; - res->read.rkey = qp->resp.rkey; - - /* note res inherits the reference to mr from qp */ - res->read.mr = qp->resp.mr; - qp->resp.mr = NULL; - - qp->resp.res = res; - res->state = rdatm_res_state_new; + res = rxe_prepare_read_res(qp, req_pkt); + qp->resp.res = res; } if (res->state == rdatm_res_state_new) { + mr = qp->resp.mr; + qp->resp.mr = NULL; + if (res->read.resid <= mtu) opcode = IB_OPCODE_RC_RDMA_READ_RESPONSE_ONLY; else opcode = IB_OPCODE_RC_RDMA_READ_RESPONSE_FIRST; } else { + mr = rxe_recheck_mr(qp, res->read.rkey); + if (!mr) + return RESPST_ERR_RKEY_VIOLATION; + if (res->read.resid > mtu) opcode = IB_OPCODE_RC_RDMA_READ_RESPONSE_MIDDLE; else @@ -717,10 +762,12 @@ static enum resp_states read_reply(struct rxe_qp *qp, if (!skb) return RESPST_ERR_RNR; - err = rxe_mr_copy(res->read.mr, res->read.va, payload_addr(&ack_pkt), + err = rxe_mr_copy(mr, res->read.va, payload_addr(&ack_pkt), payload, RXE_FROM_MR_OBJ); if (err) pr_err("Failed copying memory\n"); + if (mr) + rxe_drop_ref(mr); if (bth_pad(&ack_pkt)) { u8 *pad = payload_addr(&ack_pkt) + payload; diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h index caf1ce118765..022abba4fb6b 100644 --- a/drivers/infiniband/sw/rxe/rxe_verbs.h +++ b/drivers/infiniband/sw/rxe/rxe_verbs.h @@ -157,7 +157,6 @@ struct resp_res { struct sk_buff *skb; } atomic; struct { - struct rxe_mr *mr; u64 va_org; u32 rkey; u32 length;

From patchwork Mon Dec 6 21:12:41 2021
X-Patchwork-Id: 12659687
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v6 6/8] RDMA/rxe: Minor cleanups in rxe_pool.c/rxe_pool.h
Date: Mon, 6 Dec 2021 15:12:41 -0600
Message-Id: <20211206211242.15528-7-rpearsonhpe@gmail.com>
In-Reply-To: <20211206211242.15528-1-rpearsonhpe@gmail.com>

This patch includes a couple of minor cleanups in rxe_pool.c and rxe_pool.h.

Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe_pool.c | 10 +++------- drivers/infiniband/sw/rxe/rxe_pool.h | 1 - 2 files changed, 3 insertions(+), 8 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c index eb3566b2ce01..ab48b4dec9cf 100644 --- a/drivers/infiniband/sw/rxe/rxe_pool.c +++ b/drivers/infiniband/sw/rxe/rxe_pool.c @@ -97,11 +97,8 @@ static const struct rxe_type_info { }, }; -void rxe_pool_init( - struct rxe_dev *rxe, - struct rxe_pool *pool, - enum rxe_elem_type type, - unsigned int max_elem) +void rxe_pool_init(struct rxe_dev *rxe, struct rxe_pool *pool, + enum rxe_elem_type type, unsigned int max_elem) { const struct rxe_type_info *info = &rxe_type_info[type]; @@ -109,7 +106,6 @@ void rxe_pool_init( pool->rxe = rxe; pool->name = info->name; - pool->type = type; pool->max_elem = max_elem; pool->elem_size = ALIGN(info->size, RXE_POOL_ALIGN); pool->elem_offset = info->elem_offset; @@ -222,7 +218,7 @@ int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_elem *elem) } /** - * rxe_pool_get_index - lookup object from index + * rxe_pool_get_index() - lookup object from index * @pool: the object pool * @index: the index of the object * diff --git a/drivers/infiniband/sw/rxe/rxe_pool.h b/drivers/infiniband/sw/rxe/rxe_pool.h index 01f23f57d666..62e9e439c99c 100644 --- a/drivers/infiniband/sw/rxe/rxe_pool.h +++
b/drivers/infiniband/sw/rxe/rxe_pool.h @@ -46,7 +46,6 @@ struct rxe_pool { int (*init)(struct rxe_pool_elem *elem); void (*cleanup)(struct rxe_pool_elem *elem); enum rxe_pool_flags flags; - enum rxe_elem_type type; unsigned int max_elem; atomic_t num_elem;

From patchwork Mon Dec 6 21:12:42 2021
X-Patchwork-Id: 12659693
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v6 7/8] RDMA/rxe: Replace rxe_alloc by kzalloc for rxe_mc_elem
Date: Mon, 6 Dec 2021 15:12:42 -0600
Message-Id: <20211206211242.15528-8-rpearsonhpe@gmail.com>
In-Reply-To: <20211206211242.15528-1-rpearsonhpe@gmail.com>

Currently rxe_mc_elem structs are treated as rdma objects, which is unnecessary. This patch replaces rxe_alloc() and rxe_drop_ref() by kzalloc() and kfree() for these structs, which hold associations between multicast groups and QPs.

Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe.c | 3 --- drivers/infiniband/sw/rxe/rxe_mcast.c | 22 ++++++++++++++-------- drivers/infiniband/sw/rxe/rxe_pool.c | 6 ------ drivers/infiniband/sw/rxe/rxe_pool.h | 1 - drivers/infiniband/sw/rxe/rxe_verbs.h | 2 -- 5 files changed, 14 insertions(+), 20 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe.c b/drivers/infiniband/sw/rxe/rxe.c index 09c73a0d8513..20a925aed29c 100644 --- a/drivers/infiniband/sw/rxe/rxe.c +++ b/drivers/infiniband/sw/rxe/rxe.c @@ -31,7 +31,6 @@ void rxe_dealloc(struct ib_device *ib_dev) rxe_pool_cleanup(&rxe->mr_pool); rxe_pool_cleanup(&rxe->mw_pool); rxe_pool_cleanup(&rxe->mc_grp_pool); - rxe_pool_cleanup(&rxe->mc_elem_pool); if (rxe->tfm) crypto_free_shash(rxe->tfm); @@ -128,8 +127,6 @@ static void rxe_init_pools(struct rxe_dev *rxe) rxe_pool_init(rxe, &rxe->mw_pool, RXE_TYPE_MW, rxe->attr.max_mw); rxe_pool_init(rxe, &rxe->mc_grp_pool, RXE_TYPE_MC_GRP, rxe->attr.max_mcast_grp); - rxe_pool_init(rxe, &rxe->mc_elem_pool, RXE_TYPE_MC_ELEM, - rxe->attr.max_total_mcast_qp_attach); } /* initialize rxe device state */ diff --git a/drivers/infiniband/sw/rxe/rxe_mcast.c b/drivers/infiniband/sw/rxe/rxe_mcast.c index e110c4d3fbf4..b935634f86cd 100644 --- a/drivers/infiniband/sw/rxe/rxe_mcast.c +++ b/drivers/infiniband/sw/rxe/rxe_mcast.c @@ -63,14 +63,15 @@ int rxe_mcast_add_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp, goto out; } - elem = rxe_alloc(&rxe->mc_elem_pool); + elem = kzalloc(sizeof(*elem), GFP_KERNEL); if (!elem) { err = -ENOMEM; goto out; } - /* each qp holds a ref on the grp */ + /* each elem holds a ref on the grp and the qp */ rxe_add_ref(grp); + rxe_add_ref(qp); grp->num_qp++; elem->qp = qp; @@ -91,6 +92,7 @@ int rxe_mcast_drop_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp, { struct rxe_mc_grp *grp; struct rxe_mc_elem *elem, *tmp; + int ret = -EINVAL; grp = rxe_pool_get_key(&rxe->mc_grp_pool, mgid); if (!grp) @@ -107,18 +109,21 @@ int rxe_mcast_drop_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp, spin_unlock_bh(&grp->mcg_lock); spin_unlock_bh(&qp->grp_lock); - rxe_drop_ref(elem); - rxe_drop_ref(grp); /* ref held by QP */ - rxe_drop_ref(grp); /* ref from get_key */ - return 0; + kfree(elem); + rxe_drop_ref(qp); /* ref held by elem */ + rxe_drop_ref(grp); /* ref held by elem */ + ret = 0; + goto out_drop_ref; } } spin_unlock_bh(&grp->mcg_lock); spin_unlock_bh(&qp->grp_lock); + +out_drop_ref: rxe_drop_ref(grp); /* ref from get_key */ err1: - return -EINVAL; + return ret; } void rxe_drop_all_mcast_groups(struct rxe_qp *qp) @@ -142,8 +147,9 @@ void
rxe_drop_all_mcast_groups(struct rxe_qp *qp) list_del(&elem->qp_list); grp->num_qp--; spin_unlock_bh(&grp->mcg_lock); + rxe_drop_ref(qp); rxe_drop_ref(grp); - rxe_drop_ref(elem); + kfree(elem); } } diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c index ab48b4dec9cf..ff03d1f9d92e 100644 --- a/drivers/infiniband/sw/rxe/rxe_pool.c +++ b/drivers/infiniband/sw/rxe/rxe_pool.c @@ -89,12 +89,6 @@ static const struct rxe_type_info { .key_offset = offsetof(struct rxe_mc_grp, mgid), .key_size = sizeof(union ib_gid), }, - [RXE_TYPE_MC_ELEM] = { - .name = "rxe-mc_elem", - .size = sizeof(struct rxe_mc_elem), - .elem_offset = offsetof(struct rxe_mc_elem, elem), - .flags = RXE_POOL_ALLOC, - }, }; void rxe_pool_init(struct rxe_dev *rxe, struct rxe_pool *pool, diff --git a/drivers/infiniband/sw/rxe/rxe_pool.h b/drivers/infiniband/sw/rxe/rxe_pool.h index 62e9e439c99c..db2caff6f408 100644 --- a/drivers/infiniband/sw/rxe/rxe_pool.h +++ b/drivers/infiniband/sw/rxe/rxe_pool.h @@ -23,7 +23,6 @@ enum rxe_elem_type { RXE_TYPE_MR, RXE_TYPE_MW, RXE_TYPE_MC_GRP, - RXE_TYPE_MC_ELEM, RXE_NUM_TYPES, /* keep me last */ }; diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h index 022abba4fb6b..9f39b097a976 100644 --- a/drivers/infiniband/sw/rxe/rxe_verbs.h +++ b/drivers/infiniband/sw/rxe/rxe_verbs.h @@ -364,7 +364,6 @@ struct rxe_mc_grp { }; struct rxe_mc_elem { - struct rxe_pool_elem elem; struct list_head qp_list; struct list_head grp_list; struct rxe_qp *qp; @@ -402,7 +401,6 @@ struct rxe_dev { struct rxe_pool mr_pool; struct rxe_pool mw_pool; struct rxe_pool mc_grp_pool; - struct rxe_pool mc_elem_pool; spinlock_t pending_lock; /* guard pending_mmaps */ struct list_head pending_mmaps;

From patchwork Mon Dec 6 21:12:43 2021
X-Patchwork-Id: 12659691
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson, kernel test robot
Subject: [PATCH for-next v6 8/8] RDMA/rxe: Add wait for completion to obj destruct
Date: Mon, 6 Dec 2021 15:12:43 -0600
Message-Id: <20211206211242.15528-9-rpearsonhpe@gmail.com>
In-Reply-To: <20211206211242.15528-1-rpearsonhpe@gmail.com>

This patch adds code to wait until pending activity on RDMA objects has completed before freeing them or returning them to rdma-core, where they may be freed.

Reported-by: kernel test robot
Signed-off-by: Bob Pearson
---
v6: Corrected an incorrect comment before __rxe_fini(). Added a #define for the completion timeout value. Changed the return type of __rxe_fini() to int so it can return the value from wait_for_completion().
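The core of this change is pairing the existing kref with a completion: the final rxe_drop_ref() fires complete(), and rxe_fini() blocks until that has happened, after which the memory can be freed safely. A condensed user-space sketch of the same pattern, assuming pthread primitives as stand-ins for kref and struct completion (the kernel code additionally uses wait_for_completion_timeout() and warns on timeout, elided here):

#include <pthread.h>

/* Hypothetical stand-in for a pool element with kref + completion. */
struct elem {
	int refcnt;			/* protected by lock */
	pthread_mutex_t lock;
	pthread_cond_t done;		/* signalled when refcnt hits zero */
};

/* Like rxe_drop_ref(): the final put signals the waiter. */
static void elem_put(struct elem *e)
{
	pthread_mutex_lock(&e->lock);
	if (--e->refcnt == 0)
		pthread_cond_signal(&e->done);	/* complete(&elem->complete) */
	pthread_mutex_unlock(&e->lock);
}

/* Like __rxe_fini(): wait until every outstanding reference is gone,
 * after which the caller may free the object's memory. */
static void elem_fini(struct elem *e)
{
	pthread_mutex_lock(&e->lock);
	while (e->refcnt > 0)
		pthread_cond_wait(&e->done, &e->lock);
	pthread_mutex_unlock(&e->lock);
}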
--- drivers/infiniband/sw/rxe/rxe_comp.c | 4 +- drivers/infiniband/sw/rxe/rxe_mcast.c | 4 ++ drivers/infiniband/sw/rxe/rxe_mr.c | 2 + drivers/infiniband/sw/rxe/rxe_mw.c | 14 +++-- drivers/infiniband/sw/rxe/rxe_pool.c | 31 +++++++++- drivers/infiniband/sw/rxe/rxe_pool.h | 4 ++ drivers/infiniband/sw/rxe/rxe_recv.c | 4 +- drivers/infiniband/sw/rxe/rxe_req.c | 11 ++-- drivers/infiniband/sw/rxe/rxe_resp.c | 6 +- drivers/infiniband/sw/rxe/rxe_verbs.c | 84 ++++++++++++++++++++------- 10 files changed, 126 insertions(+), 38 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe_comp.c b/drivers/infiniband/sw/rxe/rxe_comp.c index f363fe3fa414..a2bb66f320fa 100644 --- a/drivers/infiniband/sw/rxe/rxe_comp.c +++ b/drivers/infiniband/sw/rxe/rxe_comp.c @@ -562,7 +562,9 @@ int rxe_completer(void *arg) enum comp_state state; int ret = 0; - rxe_add_ref(qp); + /* check qp pointer still valid */ + if (!rxe_add_ref(qp)) + return -EAGAIN; if (!qp->valid || qp->req.state == QP_STATE_ERROR || qp->req.state == QP_STATE_RESET) { diff --git a/drivers/infiniband/sw/rxe/rxe_mcast.c b/drivers/infiniband/sw/rxe/rxe_mcast.c index b935634f86cd..70d48f5847b0 100644 --- a/drivers/infiniband/sw/rxe/rxe_mcast.c +++ b/drivers/infiniband/sw/rxe/rxe_mcast.c @@ -122,6 +122,8 @@ int rxe_mcast_drop_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp, out_drop_ref: rxe_drop_ref(grp); /* ref from get_key */ + if (grp->elem.complete.done) + rxe_fini(grp); err1: return ret; } @@ -149,6 +151,8 @@ void rxe_drop_all_mcast_groups(struct rxe_qp *qp) spin_unlock_bh(&grp->mcg_lock); rxe_drop_ref(qp); rxe_drop_ref(grp); + if (grp->elem.complete.done) + rxe_fini(grp); kfree(elem); } } diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c index 3c4390adfb80..5f8c08da352d 100644 --- a/drivers/infiniband/sw/rxe/rxe_mr.c +++ b/drivers/infiniband/sw/rxe/rxe_mr.c @@ -695,6 +695,8 @@ int rxe_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata) rxe_drop_ref(mr_pd(mr)); rxe_drop_ref(mr); + rxe_fini(mr); + return 0; } diff --git a/drivers/infiniband/sw/rxe/rxe_mw.c b/drivers/infiniband/sw/rxe/rxe_mw.c index 3ae981d77c25..9b3468911976 100644 --- a/drivers/infiniband/sw/rxe/rxe_mw.c +++ b/drivers/infiniband/sw/rxe/rxe_mw.c @@ -12,7 +12,8 @@ int rxe_alloc_mw(struct ib_mw *ibmw, struct ib_udata *udata) struct rxe_dev *rxe = to_rdev(ibmw->device); int ret; - rxe_add_ref(pd); + if (!rxe_add_ref(pd)) + return -EINVAL; ret = rxe_add_to_pool(&rxe->mw_pool, mw); if (ret) { @@ -60,8 +61,9 @@ int rxe_dealloc_mw(struct ib_mw *ibmw) rxe_do_dealloc_mw(mw); spin_unlock_bh(&mw->lock); - rxe_drop_ref(mw); rxe_drop_ref(pd); + rxe_drop_ref(mw); + rxe_fini(mw); return 0; } @@ -178,11 +180,11 @@ static void rxe_do_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe, if (mw->length) { mw->mr = mr; atomic_inc(&mr->num_mw); - rxe_add_ref(mr); + rxe_add_ref(mr); /* safe */ } if (mw->ibmw.type == IB_MW_TYPE_2) { - rxe_add_ref(qp); + rxe_add_ref(qp); /* safe */ mw->qp = qp; } } @@ -199,7 +201,7 @@ int rxe_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe) mw = rxe_pool_get_index(&rxe->mw_pool, mw_rkey >> 8); if (unlikely(!mw)) { ret = -EINVAL; - goto err; + goto err_out; } if (unlikely(mw->rkey != mw_rkey)) { @@ -236,7 +238,7 @@ int rxe_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe) rxe_drop_ref(mr); err_drop_mw: rxe_drop_ref(mw); -err: +err_out: return ret; } diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c index ff03d1f9d92e..10a14c575487 100644 --- a/drivers/infiniband/sw/rxe/rxe_pool.c +++ 
b/drivers/infiniband/sw/rxe/rxe_pool.c @@ -6,6 +6,8 @@ #include "rxe.h" +/* timeout in jiffies for pool element to complete */ +#define RXE_POOL_TIMEOUT (100) #define RXE_POOL_ALIGN (16) static const struct rxe_type_info { @@ -150,6 +152,7 @@ static void *__rxe_alloc(struct rxe_pool *pool, gfp_t flags) elem->pool = pool; elem->obj = obj; kref_init(&elem->ref_cnt); + init_completion(&elem->complete); if (pool->init) { err = pool->init(elem); @@ -189,6 +192,7 @@ int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_elem *elem) elem->pool = pool; elem->obj = (u8 *)elem - pool->elem_offset; kref_init(&elem->ref_cnt); + init_completion(&elem->complete); if (pool->init) { err = pool->init(elem); @@ -375,8 +379,33 @@ void rxe_elem_release(struct kref *kref) if (pool->cleanup) pool->cleanup(elem); + atomic_dec(&pool->num_elem); + + complete(&elem->complete); +} + +/** + * __rxe_fini() - wait for completion of pool element + * @elem: the pool elem + * + * Wait until the reference count of an object drops to zero when + * rxe_elem_release() will complete the object and then, if locally + * allocated, free the memory containing the object and return + * + * Returns: non-zero if the object completed successfully else zero + */ +int __rxe_fini(struct rxe_pool_elem *elem) +{ + struct rxe_pool *pool = elem->pool; + int ret; + + ret = wait_for_completion_timeout(&elem->complete, RXE_POOL_TIMEOUT); + if (!ret) + pr_warn_ratelimited("Timed out waiting for %s#%d to complete\n", + pool->name, elem->index); + if (pool->flags & RXE_POOL_ALLOC) kfree(elem->obj); - atomic_dec(&pool->num_elem); + return ret; } diff --git a/drivers/infiniband/sw/rxe/rxe_pool.h b/drivers/infiniband/sw/rxe/rxe_pool.h index db2caff6f408..1f94601087f8 100644 --- a/drivers/infiniband/sw/rxe/rxe_pool.h +++ b/drivers/infiniband/sw/rxe/rxe_pool.h @@ -30,6 +30,7 @@ struct rxe_pool_elem { struct rxe_pool *pool; void *obj; struct kref ref_cnt; + struct completion complete; struct list_head list; /* only used if keyed */ @@ -105,4 +106,7 @@ static inline bool __rxe_drop_ref(struct rxe_pool_elem *elem) } #define rxe_drop_ref(obj) __rxe_drop_ref(&(obj)->elem) +int __rxe_fini(struct rxe_pool_elem *elem); +#define rxe_fini(obj) __rxe_fini(&(obj)->elem) + #endif /* RXE_POOL_H */ diff --git a/drivers/infiniband/sw/rxe/rxe_recv.c b/drivers/infiniband/sw/rxe/rxe_recv.c index 6a6cc1fa90e4..4c7077aec9a7 100644 --- a/drivers/infiniband/sw/rxe/rxe_recv.c +++ b/drivers/infiniband/sw/rxe/rxe_recv.c @@ -288,11 +288,11 @@ static void rxe_rcv_mcast_pkt(struct rxe_dev *rxe, struct sk_buff *skb) cpkt = SKB_TO_PKT(cskb); cpkt->qp = qp; - rxe_add_ref(qp); + rxe_add_ref(qp); /* safe */ rxe_rcv_pkt(cpkt, cskb); } else { pkt->qp = qp; - rxe_add_ref(qp); + rxe_add_ref(qp); /* safe */ rxe_rcv_pkt(pkt, skb); skb = NULL; /* mark consumed */ } diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c index 7bc1ec8a5aa6..9b75515cd0f4 100644 --- a/drivers/infiniband/sw/rxe/rxe_req.c +++ b/drivers/infiniband/sw/rxe/rxe_req.c @@ -614,9 +614,10 @@ int rxe_requester(void *arg) struct rxe_ah *ah; struct rxe_av *av; - rxe_add_ref(qp); + /* check qp pointer still valid */ + if (!rxe_add_ref(qp)) + return -EAGAIN; -next_wqe: if (unlikely(!qp->valid || qp->req.state == QP_STATE_ERROR)) goto exit; @@ -644,7 +645,7 @@ int rxe_requester(void *arg) if (unlikely(ret)) goto err; else - goto next_wqe; + goto done; } if (unlikely(qp_type(qp) == IB_QPT_RC && @@ -760,7 +761,9 @@ int rxe_requester(void *arg) update_state(qp, wqe, &pkt, payload); - goto 
next_wqe; +done: + rxe_drop_ref(qp); + return 0; err_drop_ah: if (ah) diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c index c776289842e5..5aaf4573c0ac 100644 --- a/drivers/infiniband/sw/rxe/rxe_resp.c +++ b/drivers/infiniband/sw/rxe/rxe_resp.c @@ -463,8 +463,8 @@ static enum resp_states check_rkey(struct rxe_qp *qp, if (mw->access & IB_ZERO_BASED) qp->resp.offset = mw->addr; + rxe_add_ref(mr); /* safe */ rxe_drop_ref(mw); - rxe_add_ref(mr); } else { mr = lookup_mr(qp->pd, access, rkey, RXE_LOOKUP_REMOTE); if (!mr) { @@ -1247,7 +1247,9 @@ int rxe_responder(void *arg) struct rxe_pkt_info *pkt = NULL; int ret = 0; - rxe_add_ref(qp); + /* check qp pointer still valid */ + if (!rxe_add_ref(qp)) + return -EAGAIN; qp->resp.aeth_syndrome = AETH_ACK_UNLIMITED; diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c index e3f64eae088c..06b508ba4e2d 100644 --- a/drivers/infiniband/sw/rxe/rxe_verbs.c +++ b/drivers/infiniband/sw/rxe/rxe_verbs.c @@ -116,6 +116,7 @@ static void rxe_dealloc_ucontext(struct ib_ucontext *ibuc) struct rxe_ucontext *uc = to_ruc(ibuc); rxe_drop_ref(uc); + rxe_fini(uc); } static int rxe_port_immutable(struct ib_device *dev, u32 port_num, @@ -150,6 +151,7 @@ static int rxe_dealloc_pd(struct ib_pd *ibpd, struct ib_udata *udata) struct rxe_pd *pd = to_rpd(ibpd); rxe_drop_ref(pd); + rxe_fini(pd); return 0; } @@ -189,6 +191,7 @@ static int rxe_create_ah(struct ib_ah *ibah, sizeof(uresp->ah_num)); if (err) { rxe_drop_ref(ah); + rxe_fini(ah); return -EFAULT; } } else if (ah->is_user) { @@ -229,6 +232,7 @@ static int rxe_destroy_ah(struct ib_ah *ibah, u32 flags) struct rxe_ah *ah = to_rah(ibah); rxe_drop_ref(ah); + rxe_fini(ah); return 0; } @@ -297,25 +301,29 @@ static int rxe_create_srq(struct ib_srq *ibsrq, struct ib_srq_init_attr *init, err = rxe_srq_chk_attr(rxe, NULL, &init->attr, IB_SRQ_INIT_MASK); if (err) - goto err1; + goto err_out; err = rxe_add_to_pool(&rxe->srq_pool, srq); if (err) - goto err1; + goto err_out; + + if (!rxe_add_ref(pd)) + goto err_drop_srq; - rxe_add_ref(pd); srq->pd = pd; err = rxe_srq_from_init(rxe, srq, init, udata, uresp); if (err) - goto err2; + goto err_drop_pd; return 0; -err2: +err_drop_pd: rxe_drop_ref(pd); +err_drop_srq: rxe_drop_ref(srq); -err1: + rxe_fini(srq); +err_out: return err; } @@ -373,6 +381,7 @@ static int rxe_destroy_srq(struct ib_srq *ibsrq, struct ib_udata *udata) rxe_drop_ref(srq->pd); rxe_drop_ref(srq); + rxe_fini(srq); return 0; } @@ -442,6 +451,7 @@ static int rxe_create_qp(struct ib_qp *ibqp, struct ib_qp_init_attr *init, qp_init: rxe_drop_ref(qp); + rxe_fini(qp); return err; } @@ -486,6 +496,7 @@ static int rxe_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata) rxe_qp_destroy(qp); rxe_drop_ref(qp); + rxe_fini(qp); return 0; } @@ -797,6 +808,7 @@ static int rxe_destroy_cq(struct ib_cq *ibcq, struct ib_udata *udata) rxe_cq_disable(cq); rxe_drop_ref(cq); + rxe_fini(cq); return 0; } @@ -882,15 +894,28 @@ static struct ib_mr *rxe_get_dma_mr(struct ib_pd *ibpd, int access) struct rxe_dev *rxe = to_rdev(ibpd->device); struct rxe_pd *pd = to_rpd(ibpd); struct rxe_mr *mr; + int err; mr = rxe_alloc(&rxe->mr_pool); - if (!mr) - return ERR_PTR(-ENOMEM); + if (!mr) { + err = -ENOMEM; + goto err_out; + } + + if (!rxe_add_ref(pd)) { + err = -EINVAL; + goto err_drop_mr; + } - rxe_add_ref(pd); rxe_mr_init_dma(pd, access, mr); return &mr->ibmr; + +err_drop_mr: + rxe_drop_ref(mr); + rxe_fini(mr); +err_out: + return ERR_PTR(err); } static struct ib_mr 
*rxe_reg_user_mr(struct ib_pd *ibpd, @@ -899,30 +924,35 @@ static struct ib_mr *rxe_reg_user_mr(struct ib_pd *ibpd, u64 iova, int access, struct ib_udata *udata) { - int err; struct rxe_dev *rxe = to_rdev(ibpd->device); struct rxe_pd *pd = to_rpd(ibpd); struct rxe_mr *mr; + int err; mr = rxe_alloc(&rxe->mr_pool); if (!mr) { err = -ENOMEM; - goto err2; + goto err_out; } - rxe_add_ref(pd); + if (!rxe_add_ref(pd)) { + err = -EINVAL; + goto err_drop_mr; + } err = rxe_mr_init_user(pd, start, length, iova, access, mr); if (err) - goto err3; + goto err_drop_pd; return &mr->ibmr; -err3: +err_drop_pd: rxe_drop_ref(pd); +err_drop_mr: rxe_drop_ref(mr); -err2: + rxe_fini(mr); +err_out: return ERR_PTR(err); } @@ -934,27 +964,34 @@ static struct ib_mr *rxe_alloc_mr(struct ib_pd *ibpd, enum ib_mr_type mr_type, struct rxe_mr *mr; int err; - if (mr_type != IB_MR_TYPE_MEM_REG) - return ERR_PTR(-EINVAL); + if (mr_type != IB_MR_TYPE_MEM_REG) { + err = -EINVAL; + goto err_out; + } mr = rxe_alloc(&rxe->mr_pool); if (!mr) { err = -ENOMEM; - goto err1; + goto err_out; } - rxe_add_ref(pd); + if (!rxe_add_ref(pd)) { + err = -EINVAL; + goto err_drop_mr; + } err = rxe_mr_init_fast(pd, max_num_sg, mr); if (err) - goto err2; + goto err_drop_pd; return &mr->ibmr; -err2: +err_drop_pd: rxe_drop_ref(pd); +err_drop_mr: rxe_drop_ref(mr); -err1: + rxe_fini(mr); +err_out: return ERR_PTR(err); } @@ -994,8 +1031,10 @@ static int rxe_attach_mcast(struct ib_qp *ibqp, union ib_gid *mgid, u16 mlid) if (err) return err; + /* adds a ref on grp if successful */ err = rxe_mcast_add_grp_elem(rxe, qp, grp); + /* drops the ref from ..get_grp() */ rxe_drop_ref(grp); return err; } @@ -1005,6 +1044,7 @@ static int rxe_detach_mcast(struct ib_qp *ibqp, union ib_gid *mgid, u16 mlid) struct rxe_dev *rxe = to_rdev(ibqp->device); struct rxe_qp *qp = to_rqp(ibqp); + /* drops a ref on grp if successful */ return rxe_mcast_drop_grp_elem(rxe, qp, mgid); }
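A final note on the if (!rxe_add_ref(...)) checks introduced above: they turn an unconditional reference take into a try-get that fails once teardown has begun, which is the kref_get_unless_zero() idiom. A minimal sketch of that idiom with C11 atomics, assuming a bare atomic counter rather than the rxe pool element:

#include <stdatomic.h>
#include <stdbool.h>

/* Take a reference only if the count is still non-zero, so a late
 * caller (e.g. a soft IRQ racing with destroy) backs off instead of
 * resurrecting a dying object. */
static bool ref_get_unless_zero(atomic_int *refcnt)
{
	int old = atomic_load(refcnt);

	while (old != 0) {
		/* On failure the CAS reloads old; retry until the count
		 * is observed as zero or the increment succeeds. */
		if (atomic_compare_exchange_weak(refcnt, &old, old + 1))
			return true;
	}
	return false;
}

rxe_requester() and rxe_responder() returning -EAGAIN when the try-get fails match this shape: the task simply declines to run against a QP that is already completing.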