From patchwork Sun Oct 10 23:59:26 2021
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12548857
X-Patchwork-Delegate: jgg@ziepe.ca
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next 1/6] RDMA/rxe: Make rxe_alloc() take pool lock
Date: Sun, 10 Oct 2021 18:59:26 -0500
Message-Id: <20211010235931.24042-2-rpearsonhpe@gmail.com>
In-Reply-To: <20211010235931.24042-1-rpearsonhpe@gmail.com>
References: <20211010235931.24042-1-rpearsonhpe@gmail.com>

In rxe there are two separate pool APIs for creating a new object,
rxe_alloc() and rxe_alloc_locked(). Currently they are identical.
Make rxe_alloc() take the pool lock, which is in line with the other
APIs in the library.

Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_pool.c | 21 ++++-----------------
 1 file changed, 4 insertions(+), 17 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c
index ffa8420b4765..7a288ebacceb 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.c
+++ b/drivers/infiniband/sw/rxe/rxe_pool.c
@@ -352,27 +352,14 @@ void *rxe_alloc_locked(struct rxe_pool *pool)

 void *rxe_alloc(struct rxe_pool *pool)
 {
-	struct rxe_type_info *info = &rxe_type_info[pool->type];
-	struct rxe_pool_entry *elem;
+	unsigned long flags;
 	u8 *obj;

-	if (atomic_inc_return(&pool->num_elem) > pool->max_elem)
-		goto out_cnt;
-
-	obj = kzalloc(info->size, GFP_KERNEL);
-	if (!obj)
-		goto out_cnt;
-
-	elem = (struct rxe_pool_entry *)(obj + info->elem_offset);
-
-	elem->pool = pool;
-	kref_init(&elem->ref_cnt);
+	write_lock_irqsave(&pool->pool_lock, flags);
+	obj = rxe_alloc_locked(pool);
+	write_unlock_irqrestore(&pool->pool_lock, flags);

 	return obj;
-
-out_cnt:
-	atomic_dec(&pool->num_elem);
-	return NULL;
 }

 int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_entry *elem)
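The locked/unlocked split used here follows a common kernel idiom: the _locked
variant assumes its caller already holds the pool lock, and the unlocked entry
point only takes the lock and delegates. A minimal userspace sketch of that
shape (illustrative only; the pool and alloc names below are hypothetical and
are not the rxe code):

#include <pthread.h>
#include <stdlib.h>

struct pool {
	pthread_rwlock_t lock;	/* stands in for rxe's pool_lock */
	size_t elem_size;
};

/* caller must already hold p->lock */
static void *pool_alloc_locked(struct pool *p)
{
	return calloc(1, p->elem_size);
}

/* unlocked wrapper: take the lock, delegate, drop the lock */
static void *pool_alloc(struct pool *p)
{
	void *obj;

	pthread_rwlock_wrlock(&p->lock);
	obj = pool_alloc_locked(p);
	pthread_rwlock_unlock(&p->lock);
	return obj;
}

Keeping the two entry points behaviorally identical, with only the locking
differing, is what lets the later patches in this series fold more work under
the same lock without changing callers.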
From patchwork Sun Oct 10 23:59:27 2021
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12548859
X-Patchwork-Delegate: jgg@ziepe.ca
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next 2/6] RDMA/rxe: Copy setup parameters into rxe_pool
Date: Sun, 10 Oct 2021 18:59:27 -0500
Message-Id: <20211010235931.24042-3-rpearsonhpe@gmail.com>
In-Reply-To: <20211010235931.24042-1-rpearsonhpe@gmail.com>
References: <20211010235931.24042-1-rpearsonhpe@gmail.com>

In rxe_pool.c copy the remaining pool setup parameters from
rxe_type_info into rxe_pool. This saves looking up rxe_type_info in
the performance path.
Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_pool.c | 47 ++++++++++++----------------
 drivers/infiniband/sw/rxe/rxe_pool.h |  5 +--
 2 files changed, 23 insertions(+), 29 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c
index 7a288ebacceb..e9d74ad3f0b7 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.c
+++ b/drivers/infiniband/sw/rxe/rxe_pool.c
@@ -9,7 +9,7 @@

 /* info about object pools */

-struct rxe_type_info rxe_type_info[RXE_NUM_TYPES] = {
+static const struct rxe_type_info rxe_type_info[RXE_NUM_TYPES] = {
 	[RXE_TYPE_UC] = {
 		.name = "rxe-uc",
 		.size = sizeof(struct rxe_ucontext),
@@ -86,11 +86,6 @@ struct rxe_type_info rxe_type_info[RXE_NUM_TYPES] = {
 	},
 };

-static inline const char *pool_name(struct rxe_pool *pool)
-{
-	return rxe_type_info[pool->type].name;
-}
-
 static int rxe_pool_init_index(struct rxe_pool *pool, u32 max, u32 min)
 {
 	int err = 0;
@@ -125,35 +120,37 @@ int rxe_pool_init(
 	enum rxe_elem_type type,
 	unsigned int max_elem)
 {
+	const struct rxe_type_info *info = &rxe_type_info[type];
 	int err = 0;
-	size_t size = rxe_type_info[type].size;

 	memset(pool, 0, sizeof(*pool));

 	pool->rxe = rxe;
+	pool->name = info->name;
 	pool->type = type;
 	pool->max_elem = max_elem;
-	pool->elem_size = ALIGN(size, RXE_POOL_ALIGN);
-	pool->flags = rxe_type_info[type].flags;
+	pool->elem_size = ALIGN(info->size, RXE_POOL_ALIGN);
+	pool->flags = info->flags;
 	pool->index.tree = RB_ROOT;
 	pool->key.tree = RB_ROOT;
-	pool->cleanup = rxe_type_info[type].cleanup;
+	pool->cleanup = info->cleanup;
+	pool->size = info->size;
+	pool->elem_offset = info->elem_offset;

 	atomic_set(&pool->num_elem, 0);

 	rwlock_init(&pool->pool_lock);

-	if (rxe_type_info[type].flags & RXE_POOL_INDEX) {
-		err = rxe_pool_init_index(pool,
-					  rxe_type_info[type].max_index,
-					  rxe_type_info[type].min_index);
+	if (info->flags & RXE_POOL_INDEX) {
+		err = rxe_pool_init_index(pool, info->max_index,
+					  info->min_index);
 		if (err)
 			goto out;
 	}

-	if (rxe_type_info[type].flags & RXE_POOL_KEY) {
-		pool->key.key_offset = rxe_type_info[type].key_offset;
-		pool->key.key_size = rxe_type_info[type].key_size;
+	if (info->flags & RXE_POOL_KEY) {
+		pool->key.key_offset = info->key_offset;
+		pool->key.key_size = info->key_size;
 	}

 out:
@@ -164,7 +161,7 @@ void rxe_pool_cleanup(struct rxe_pool *pool)
 {
 	if (atomic_read(&pool->num_elem) > 0)
 		pr_warn("%s pool destroyed with unfree'd elem\n",
-			pool_name(pool));
+			pool->name);

 	kfree(pool->index.table);
 }
@@ -327,18 +324,17 @@ void __rxe_drop_index(struct rxe_pool_entry *elem)

 void *rxe_alloc_locked(struct rxe_pool *pool)
 {
-	struct rxe_type_info *info = &rxe_type_info[pool->type];
 	struct rxe_pool_entry *elem;
 	u8 *obj;

 	if (atomic_inc_return(&pool->num_elem) > pool->max_elem)
 		goto out_cnt;

-	obj = kzalloc(info->size, GFP_ATOMIC);
+	obj = kzalloc(pool->size, GFP_ATOMIC);
 	if (!obj)
 		goto out_cnt;

-	elem = (struct rxe_pool_entry *)(obj + info->elem_offset);
+	elem = (struct rxe_pool_entry *)(obj + pool->elem_offset);

 	elem->pool = pool;
 	kref_init(&elem->ref_cnt);
@@ -382,14 +378,13 @@ void rxe_elem_release(struct kref *kref)
 	struct rxe_pool_entry *elem =
 		container_of(kref, struct rxe_pool_entry, ref_cnt);
 	struct rxe_pool *pool = elem->pool;
-	struct rxe_type_info *info = &rxe_type_info[pool->type];
 	u8 *obj;

 	if (pool->cleanup)
 		pool->cleanup(elem);

 	if (!(pool->flags & RXE_POOL_NO_ALLOC)) {
-		obj = (u8 *)elem - info->elem_offset;
+		obj = (u8 *)elem - pool->elem_offset;
 		kfree(obj);
 	}

@@ -398,7 +393,6 @@ void rxe_elem_release(struct kref *kref)

 void *rxe_pool_get_index_locked(struct rxe_pool *pool, u32 index)
 {
-	struct rxe_type_info *info = &rxe_type_info[pool->type];
 	struct rb_node *node;
 	struct rxe_pool_entry *elem;
 	u8 *obj;
@@ -418,7 +412,7 @@ void *rxe_pool_get_index_locked(struct rxe_pool *pool, u32 index)

 	if (node) {
 		kref_get(&elem->ref_cnt);
-		obj = (u8 *)elem - info->elem_offset;
+		obj = (u8 *)elem - pool->elem_offset;
 	} else {
 		obj = NULL;
 	}
@@ -440,7 +434,6 @@ void *rxe_pool_get_index(struct rxe_pool *pool, u32 index)

 void *rxe_pool_get_key_locked(struct rxe_pool *pool, void *key)
 {
-	struct rxe_type_info *info = &rxe_type_info[pool->type];
 	struct rb_node *node;
 	struct rxe_pool_entry *elem;
 	u8 *obj;
@@ -464,7 +457,7 @@ void *rxe_pool_get_key_locked(struct rxe_pool *pool, void *key)

 	if (node) {
 		kref_get(&elem->ref_cnt);
-		obj = (u8 *)elem - info->elem_offset;
+		obj = (u8 *)elem - pool->elem_offset;
 	} else {
 		obj = NULL;
 	}
diff --git a/drivers/infiniband/sw/rxe/rxe_pool.h b/drivers/infiniband/sw/rxe/rxe_pool.h
index 1feca1bffced..cd962dc5de9d 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.h
+++ b/drivers/infiniband/sw/rxe/rxe_pool.h
@@ -44,8 +44,6 @@ struct rxe_type_info {
 	size_t key_size;
 };

-extern struct rxe_type_info rxe_type_info[];
-
 struct rxe_pool_entry {
 	struct rxe_pool *pool;
 	struct kref ref_cnt;
@@ -61,6 +59,7 @@ struct rxe_pool_entry {

 struct rxe_pool {
 	struct rxe_dev *rxe;
+	const char *name;
 	rwlock_t pool_lock; /* protects pool add/del/search */
 	size_t elem_size;
 	void (*cleanup)(struct rxe_pool_entry *obj);
@@ -69,6 +68,8 @@ struct rxe_pool {
 	unsigned int max_elem;
 	atomic_t num_elem;

+	size_t size;
+	size_t elem_offset;

 	/* only used if indexed */
 	struct {
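The change amounts to caching the per-type constants in the pool at init time
so the hot paths stop indexing the global descriptor table. A stripped-down
sketch of that idea (hypothetical names, not the kernel structures):

#include <stddef.h>

/* per-type constants, known up front */
struct type_desc {
	const char *name;
	size_t size;
	size_t elem_offset;
};

/* per-pool state: carries its own copies of the constants */
struct pool {
	const char *name;
	size_t size;
	size_t elem_offset;
};

static void pool_init(struct pool *p, const struct type_desc *d)
{
	/* copied once here, so alloc/free paths never touch the table */
	p->name = d->name;
	p->size = d->size;
	p->elem_offset = d->elem_offset;
}

Making the descriptor table static const is also what allows the extern
declaration to be dropped from the header.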
From patchwork Sun Oct 10 23:59:28 2021
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12548861
X-Patchwork-Delegate: jgg@ziepe.ca
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next 3/6] RDMA/rxe: Save object pointer in pool element
Date: Sun, 10 Oct 2021 18:59:28 -0500
Message-Id: <20211010235931.24042-4-rpearsonhpe@gmail.com>
In-Reply-To: <20211010235931.24042-1-rpearsonhpe@gmail.com>
References: <20211010235931.24042-1-rpearsonhpe@gmail.com>

In rxe_pool.c there are currently many places where it is necessary to
compute the offset from a pool element struct back to the object that
contains it, in a type independent way. Saving a pointer to the object
in the pool element when the object is created avoids this extra work.
Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_pool.c | 16 +++++++++-------
 drivers/infiniband/sw/rxe/rxe_pool.h |  1 +
 2 files changed, 10 insertions(+), 7 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c
index e9d74ad3f0b7..4caaecdb0d68 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.c
+++ b/drivers/infiniband/sw/rxe/rxe_pool.c
@@ -337,6 +337,7 @@ void *rxe_alloc_locked(struct rxe_pool *pool)
 	elem = (struct rxe_pool_entry *)(obj + pool->elem_offset);

 	elem->pool = pool;
+	elem->obj = obj;
 	kref_init(&elem->ref_cnt);

 	return obj;
@@ -349,7 +350,7 @@ void *rxe_alloc_locked(struct rxe_pool *pool)
 void *rxe_alloc(struct rxe_pool *pool)
 {
 	unsigned long flags;
-	u8 *obj;
+	void *obj;

 	write_lock_irqsave(&pool->pool_lock, flags);
 	obj = rxe_alloc_locked(pool);
@@ -364,6 +365,7 @@ int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_entry *elem)
 		goto out_cnt;

 	elem->pool = pool;
+	elem->obj = (u8 *)elem - pool->elem_offset;
 	kref_init(&elem->ref_cnt);

 	return 0;
@@ -378,13 +380,13 @@ void rxe_elem_release(struct kref *kref)
 	struct rxe_pool_entry *elem =
 		container_of(kref, struct rxe_pool_entry, ref_cnt);
 	struct rxe_pool *pool = elem->pool;
-	u8 *obj;
+	void *obj;

 	if (pool->cleanup)
 		pool->cleanup(elem);

 	if (!(pool->flags & RXE_POOL_NO_ALLOC)) {
-		obj = (u8 *)elem - pool->elem_offset;
+		obj = elem->obj;
 		kfree(obj);
 	}

@@ -395,7 +397,7 @@ void *rxe_pool_get_index_locked(struct rxe_pool *pool, u32 index)
 {
 	struct rb_node *node;
 	struct rxe_pool_entry *elem;
-	u8 *obj;
+	void *obj;

 	node = pool->index.tree.rb_node;

@@ -412,7 +414,7 @@ void *rxe_pool_get_index_locked(struct rxe_pool *pool, u32 index)

 	if (node) {
 		kref_get(&elem->ref_cnt);
-		obj = (u8 *)elem - pool->elem_offset;
+		obj = elem->obj;
 	} else {
 		obj = NULL;
 	}
@@ -436,7 +438,7 @@ void *rxe_pool_get_key_locked(struct rxe_pool *pool, void *key)
 {
 	struct rb_node *node;
 	struct rxe_pool_entry *elem;
-	u8 *obj;
+	void *obj;
 	int cmp;

 	node = pool->key.tree.rb_node;
@@ -457,7 +459,7 @@ void *rxe_pool_get_key_locked(struct rxe_pool *pool, void *key)

 	if (node) {
 		kref_get(&elem->ref_cnt);
-		obj = (u8 *)elem - pool->elem_offset;
+		obj = elem->obj;
 	} else {
 		obj = NULL;
 	}
diff --git a/drivers/infiniband/sw/rxe/rxe_pool.h b/drivers/infiniband/sw/rxe/rxe_pool.h
index cd962dc5de9d..570bda77f4a6 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.h
+++ b/drivers/infiniband/sw/rxe/rxe_pool.h
@@ -46,6 +46,7 @@ struct rxe_type_info {

 struct rxe_pool_entry {
 	struct rxe_pool *pool;
+	void *obj;
 	struct kref ref_cnt;
 	struct list_head list;
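In other words, each embedded element gains a back-pointer to its containing
object, set once at creation, instead of redoing the offset arithmetic on
every lookup and release. A small illustrative sketch (hypothetical types,
not the rxe structures):

#include <stddef.h>

struct pool_entry {
	void *obj;			/* back-pointer to containing object */
};

struct my_object {
	int payload;
	struct pool_entry pelem;	/* element embedded in the object */
};

static void my_object_init(struct my_object *o)
{
	o->pelem.obj = o;		/* recorded once at creation */
}

/* later lookups just follow the stored pointer */
static void *entry_to_obj(struct pool_entry *e)
{
	return e->obj;
}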
From patchwork Sun Oct 10 23:59:29 2021
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12548863
X-Patchwork-Delegate: jgg@ziepe.ca
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next 4/6] RDMA/rxe: Combine rxe_add_index with rxe_alloc
Date: Sun, 10 Oct 2021 18:59:29 -0500
Message-Id: <20211010235931.24042-5-rpearsonhpe@gmail.com>
In-Reply-To: <20211010235931.24042-1-rpearsonhpe@gmail.com>
References: <20211010235931.24042-1-rpearsonhpe@gmail.com>

Currently rxe objects which have an index require adding and dropping
the index in separate API calls from allocating and freeing the object,
yet the two steps are always performed together. This patch combines
them into a single operation. By taking a single pool lock around
allocating the object and adding the index metadata, or around dropping
the index metadata and releasing the object, the possibility of a race
condition in which the metadata is not consistent with the state of the
object is removed.
Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_mr.c    |  1 -
 drivers/infiniband/sw/rxe/rxe_mw.c    |  5 +--
 drivers/infiniband/sw/rxe/rxe_pool.c  | 59 +++++++++++++++------------
 drivers/infiniband/sw/rxe/rxe_pool.h  | 22 ----------
 drivers/infiniband/sw/rxe/rxe_verbs.c | 10 -----
 5 files changed, 33 insertions(+), 64 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
index 53271df10e47..6e71f67ccfe9 100644
--- a/drivers/infiniband/sw/rxe/rxe_mr.c
+++ b/drivers/infiniband/sw/rxe/rxe_mr.c
@@ -693,7 +693,6 @@ int rxe_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata)
 	mr->state = RXE_MR_STATE_INVALID;
 	rxe_drop_ref(mr_pd(mr));
-	rxe_drop_index(mr);
 	rxe_drop_ref(mr);

 	return 0;
diff --git a/drivers/infiniband/sw/rxe/rxe_mw.c b/drivers/infiniband/sw/rxe/rxe_mw.c
index 9534a7fe1a98..854d0c283521 100644
--- a/drivers/infiniband/sw/rxe/rxe_mw.c
+++ b/drivers/infiniband/sw/rxe/rxe_mw.c
@@ -20,7 +20,6 @@ int rxe_alloc_mw(struct ib_mw *ibmw, struct ib_udata *udata)
 		return ret;
 	}

-	rxe_add_index(mw);
 	mw->rkey = ibmw->rkey = (mw->pelem.index << 8) | rxe_get_next_key(-1);
 	mw->state = (mw->ibmw.type == IB_MW_TYPE_2) ?
 			RXE_MW_STATE_FREE : RXE_MW_STATE_VALID;
@@ -335,7 +334,5 @@ struct rxe_mw *rxe_lookup_mw(struct rxe_qp *qp, int access, u32 rkey)

 void rxe_mw_cleanup(struct rxe_pool_entry *elem)
 {
-	struct rxe_mw *mw = container_of(elem, typeof(*mw), pelem);
-
-	rxe_drop_index(mw);
+	/* nothing to do currently */
 }
diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c
index 4caaecdb0d68..d55a40291692 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.c
+++ b/drivers/infiniband/sw/rxe/rxe_pool.c
@@ -166,12 +166,16 @@ void rxe_pool_cleanup(struct rxe_pool *pool)
 	kfree(pool->index.table);
 }

+/* should never fail because there are at least as many indices as
+ * max objects
+ */
 static u32 alloc_index(struct rxe_pool *pool)
 {
 	u32 index;
 	u32 range = pool->index.max_index - pool->index.min_index + 1;

-	index = find_next_zero_bit(pool->index.table, range, pool->index.last);
+	index = find_next_zero_bit(pool->index.table, range,
+				   pool->index.last);
 	if (index >= range)
 		index = find_first_zero_bit(pool->index.table, range);

@@ -192,7 +196,8 @@ static int rxe_insert_index(struct rxe_pool *pool, struct rxe_pool_entry *new)
 		elem = rb_entry(parent, struct rxe_pool_entry, index_node);

 		if (elem->index == new->index) {
-			pr_warn("element already exists!\n");
+			pr_warn("element with index = 0x%x already exists!\n",
+				new->index);
 			return -EINVAL;
 		}

@@ -280,31 +285,21 @@ void __rxe_drop_key(struct rxe_pool_entry *elem)
 	write_unlock_irqrestore(&pool->pool_lock, flags);
 }

-int __rxe_add_index_locked(struct rxe_pool_entry *elem)
+static int rxe_add_index(struct rxe_pool_entry *elem)
 {
 	struct rxe_pool *pool = elem->pool;
 	int err;

 	elem->index = alloc_index(pool);
 	err = rxe_insert_index(pool, elem);
+	if (err)
+		clear_bit(elem->index - pool->index.min_index,
+			  pool->index.table);

 	return err;
 }

-int __rxe_add_index(struct rxe_pool_entry *elem)
-{
-	struct rxe_pool *pool = elem->pool;
-	unsigned long flags;
-	int err;
-
-	write_lock_irqsave(&pool->pool_lock, flags);
-	err = __rxe_add_index_locked(elem);
-	write_unlock_irqrestore(&pool->pool_lock, flags);
-
-	return err;
-}
-
-void __rxe_drop_index_locked(struct rxe_pool_entry *elem)
+static void rxe_drop_index(struct rxe_pool_entry *elem)
 {
 	struct rxe_pool *pool = elem->pool;

@@ -312,20 +307,11 @@
 	rb_erase(&elem->index_node, &pool->index.tree);
 }

-void __rxe_drop_index(struct rxe_pool_entry *elem)
-{
-	struct rxe_pool *pool = elem->pool;
-	unsigned long flags;
-
-	write_lock_irqsave(&pool->pool_lock, flags);
-	__rxe_drop_index_locked(elem);
-	write_unlock_irqrestore(&pool->pool_lock, flags);
-}
-
 void *rxe_alloc_locked(struct rxe_pool *pool)
 {
 	struct rxe_pool_entry *elem;
 	u8 *obj;
+	int err;

 	if (atomic_inc_return(&pool->num_elem) > pool->max_elem)
 		goto out_cnt;
@@ -340,6 +326,14 @@ void *rxe_alloc_locked(struct rxe_pool *pool)
 	elem->obj = obj;
 	kref_init(&elem->ref_cnt);

+	if (pool->flags & RXE_POOL_INDEX) {
+		err = rxe_add_index(elem);
+		if (err) {
+			kfree(obj);
+			goto out_cnt;
+		}
+	}
+
 	return obj;

 out_cnt:
@@ -361,6 +355,8 @@ void *rxe_alloc(struct rxe_pool *pool)

 int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_entry *elem)
 {
+	int err;
+
 	if (atomic_inc_return(&pool->num_elem) > pool->max_elem)
 		goto out_cnt;

@@ -368,6 +364,12 @@ int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_entry *elem)
 	elem->obj = (u8 *)elem - pool->elem_offset;
 	kref_init(&elem->ref_cnt);

+	if (pool->flags & RXE_POOL_INDEX) {
+		err = rxe_add_index(elem);
+		if (err)
+			goto out_cnt;
+	}
+
 	return 0;

 out_cnt:
@@ -385,6 +387,9 @@ void rxe_elem_release(struct kref *kref)
 	if (pool->cleanup)
 		pool->cleanup(elem);

+	if (pool->flags & RXE_POOL_INDEX)
+		rxe_drop_index(elem);
+
 	if (!(pool->flags & RXE_POOL_NO_ALLOC)) {
 		obj = elem->obj;
 		kfree(obj);
diff --git a/drivers/infiniband/sw/rxe/rxe_pool.h b/drivers/infiniband/sw/rxe/rxe_pool.h
index 570bda77f4a6..41eaf47a64a3 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.h
+++ b/drivers/infiniband/sw/rxe/rxe_pool.h
@@ -110,28 +110,6 @@ int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_entry *elem);

 #define rxe_add_to_pool(pool, obj) __rxe_add_to_pool(pool, &(obj)->pelem)

-/* assign an index to an indexed object and insert object into
- * pool's rb tree holding and not holding the pool_lock
- */
-int __rxe_add_index_locked(struct rxe_pool_entry *elem);
-
-#define rxe_add_index_locked(obj) __rxe_add_index_locked(&(obj)->pelem)
-
-int __rxe_add_index(struct rxe_pool_entry *elem);
-
-#define rxe_add_index(obj) __rxe_add_index(&(obj)->pelem)
-
-/* drop an index and remove object from rb tree
- * holding and not holding the pool_lock
- */
-void __rxe_drop_index_locked(struct rxe_pool_entry *elem);
-
-#define rxe_drop_index_locked(obj) __rxe_drop_index_locked(&(obj)->pelem)
-
-void __rxe_drop_index(struct rxe_pool_entry *elem);
-
-#define rxe_drop_index(obj) __rxe_drop_index(&(obj)->pelem)
-
 /* assign a key to a keyed object and insert object into
  * pool's rb tree holding and not holding pool_lock
  */
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
index c49ba0381964..bc40200436f0 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
@@ -409,7 +409,6 @@ static int rxe_create_qp(struct ib_qp *ibqp, struct ib_qp_init_attr *init,
 	if (err)
 		return err;

-	rxe_add_index(qp);
 	err = rxe_qp_from_init(rxe, qp, pd, init, uresp, ibqp->pd, udata);
 	if (err)
 		goto qp_init;
@@ -417,7 +416,6 @@ static int rxe_create_qp(struct ib_qp *ibqp, struct ib_qp_init_attr *init,
 	return 0;

 qp_init:
-	rxe_drop_index(qp);
 	rxe_drop_ref(qp);
 	return err;
 }
@@ -462,7 +460,6 @@ static int rxe_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata)
 	struct rxe_qp *qp = to_rqp(ibqp);

 	rxe_qp_destroy(qp);
-	rxe_drop_index(qp);
 	rxe_drop_ref(qp);
 	return 0;
 }
@@ -871,7 +868,6 @@ static struct ib_mr *rxe_get_dma_mr(struct ib_pd *ibpd, int access)
 	if (!mr)
 		return ERR_PTR(-ENOMEM);

-	rxe_add_index(mr);
 	rxe_add_ref(pd);
 	rxe_mr_init_dma(pd, access, mr);

@@ -895,8 +891,6 @@ static struct ib_mr *rxe_reg_user_mr(struct ib_pd *ibpd,
 		goto err2;
 	}

-	rxe_add_index(mr);
-
 	rxe_add_ref(pd);

 	err = rxe_mr_init_user(pd, start, length, iova, access, mr);
@@ -907,7 +901,6 @@ static struct ib_mr *rxe_reg_user_mr(struct ib_pd *ibpd,

 err3:
 	rxe_drop_ref(pd);
-	rxe_drop_index(mr);
 	rxe_drop_ref(mr);
 err2:
 	return ERR_PTR(err);
@@ -930,8 +923,6 @@ static struct ib_mr *rxe_alloc_mr(struct ib_pd *ibpd, enum ib_mr_type mr_type,
 		goto err1;
 	}

-	rxe_add_index(mr);
-
 	rxe_add_ref(pd);

 	err = rxe_mr_init_fast(pd, max_num_sg, mr);
@@ -942,7 +933,6 @@ static struct ib_mr *rxe_alloc_mr(struct ib_pd *ibpd, enum ib_mr_type mr_type,

 err2:
 	rxe_drop_ref(pd);
-	rxe_drop_index(mr);
 	rxe_drop_ref(mr);
 err1:
 	return ERR_PTR(err);
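The shape of the combined operation is: allocate, then insert into the index,
all inside one critical section, and unwind the allocation if the insert
fails. A condensed userspace analog (illustrative only; index_insert() below
is a hypothetical stand-in for the rb-tree insert):

#include <pthread.h>
#include <stdlib.h>

struct pool {
	pthread_rwlock_t lock;
	/* ... index bitmap / tree would live here ... */
};

/* hypothetical helper: 0 on success, nonzero if the index already exists */
static int index_insert(struct pool *p, void *obj)
{
	(void)p; (void)obj;
	return 0;
}

static void *pool_alloc_indexed(struct pool *p, size_t size)
{
	void *obj;

	pthread_rwlock_wrlock(&p->lock);
	obj = calloc(1, size);
	if (obj && index_insert(p, obj) != 0) {
		free(obj);	/* never publish a half-set-up object */
		obj = NULL;
	}
	pthread_rwlock_unlock(&p->lock);
	return obj;
}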
From patchwork Sun Oct 10 23:59:30 2021
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12548865
X-Patchwork-Delegate: jgg@ziepe.ca
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next 5/6] RDMA/rxe: Combine rxe_add_key with rxe_alloc
Date: Sun, 10 Oct 2021 18:59:30 -0500
Message-Id: <20211010235931.24042-6-rpearsonhpe@gmail.com>
In-Reply-To: <20211010235931.24042-1-rpearsonhpe@gmail.com>
References: <20211010235931.24042-1-rpearsonhpe@gmail.com>

Currently adding and dropping a key for a rxe object requires API calls
separate from allocating and freeing the object, but the two steps are
always performed together. This patch combines them into single APIs,
adding new rxe_alloc_with_key(_locked) variants. By allocating an
object and adding its key metadata inside a single locked sequence, and
dropping the key metadata when the object is released, the possibility
of a race condition in which the object state and the key metadata
state are inconsistent is removed.

Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_mcast.c |  5 +-
 drivers/infiniband/sw/rxe/rxe_pool.c  | 81 +++++++++++++--------------
 drivers/infiniband/sw/rxe/rxe_pool.h  | 24 ++------
 3 files changed, 45 insertions(+), 65 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_mcast.c b/drivers/infiniband/sw/rxe/rxe_mcast.c
index 1c1d1b53312d..337dc2c68051 100644
--- a/drivers/infiniband/sw/rxe/rxe_mcast.c
+++ b/drivers/infiniband/sw/rxe/rxe_mcast.c
@@ -15,18 +15,16 @@ static struct rxe_mc_grp *create_grp(struct rxe_dev *rxe,
 	int err;
 	struct rxe_mc_grp *grp;

-	grp = rxe_alloc_locked(&rxe->mc_grp_pool);
+	grp = rxe_alloc_with_key_locked(&rxe->mc_grp_pool, mgid);
 	if (!grp)
 		return ERR_PTR(-ENOMEM);

 	INIT_LIST_HEAD(&grp->qp_list);
 	spin_lock_init(&grp->mcg_lock);
 	grp->rxe = rxe;
-	rxe_add_key_locked(grp, mgid);

 	err = rxe_mcast_add(rxe, mgid);
 	if (unlikely(err)) {
-		rxe_drop_key_locked(grp);
 		rxe_drop_ref(grp);
 		return ERR_PTR(err);
 	}
@@ -174,6 +172,5 @@ void rxe_mc_cleanup(struct rxe_pool_entry *arg)
 	struct rxe_mc_grp *grp = container_of(arg, typeof(*grp), pelem);
 	struct rxe_dev *rxe = grp->rxe;

-	rxe_drop_key(grp);
 	rxe_mcast_delete(rxe, &grp->mgid);
 }
diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c
index d55a40291692..70f407108b92 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.c
+++ b/drivers/infiniband/sw/rxe/rxe_pool.c
@@ -244,47 +244,6 @@ static int rxe_insert_key(struct rxe_pool *pool, struct rxe_pool_entry *new)
 	return 0;
 }

-int __rxe_add_key_locked(struct rxe_pool_entry *elem, void *key)
-{
-	struct rxe_pool *pool = elem->pool;
-	int err;
-
-	memcpy((u8 *)elem + pool->key.key_offset, key, pool->key.key_size);
-	err = rxe_insert_key(pool, elem);
-
-	return err;
-}
-
-int __rxe_add_key(struct rxe_pool_entry *elem, void *key)
-{
-	struct rxe_pool *pool = elem->pool;
-	unsigned long flags;
-	int err;
-
-	write_lock_irqsave(&pool->pool_lock, flags);
-	err = __rxe_add_key_locked(elem, key);
-	write_unlock_irqrestore(&pool->pool_lock, flags);
-
-	return err;
-}
-
-void __rxe_drop_key_locked(struct rxe_pool_entry *elem)
-{
-	struct rxe_pool *pool = elem->pool;
-
-	rb_erase(&elem->key_node, &pool->key.tree);
-}
-
-void __rxe_drop_key(struct rxe_pool_entry *elem)
-{
-	struct rxe_pool *pool = elem->pool;
-	unsigned long flags;
-
-	write_lock_irqsave(&pool->pool_lock, flags);
-	__rxe_drop_key_locked(elem);
-	write_unlock_irqrestore(&pool->pool_lock, flags);
-}
-
 static int rxe_add_index(struct rxe_pool_entry *elem)
 {
 	struct rxe_pool *pool = elem->pool;
@@ -341,6 +300,31 @@ void *rxe_alloc_locked(struct rxe_pool *pool)
 	return NULL;
 }

+void *rxe_alloc_with_key_locked(struct rxe_pool *pool, void *key)
+{
+	struct rxe_pool_entry *elem;
+	u8 *obj;
+	int err;
+
+	obj = rxe_alloc_locked(pool);
+	if (!obj)
+		return NULL;
+
+	elem = (struct rxe_pool_entry *)(obj + pool->elem_offset);
+	memcpy((u8 *)elem + pool->key.key_offset, key, pool->key.key_size);
+	err = rxe_insert_key(pool, elem);
+	if (err) {
+		kfree(obj);
+		goto out_cnt;
+	}
+
+	return obj;
+
+out_cnt:
+	atomic_dec(&pool->num_elem);
+	return NULL;
+}
+
 void *rxe_alloc(struct rxe_pool *pool)
 {
 	unsigned long flags;
@@ -353,6 +337,18 @@ void *rxe_alloc(struct rxe_pool *pool)
 	return obj;
 }

+void *rxe_alloc_with_key(struct rxe_pool *pool, void *key)
+{
+	unsigned long flags;
+	void *obj;
+
+	write_lock_irqsave(&pool->pool_lock, flags);
+	obj = rxe_alloc_with_key_locked(pool, key);
+	write_unlock_irqrestore(&pool->pool_lock, flags);
+
+	return obj;
+}
+
 int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_entry *elem)
 {
 	int err;
@@ -390,6 +386,9 @@ void rxe_elem_release(struct kref *kref)
 	if (pool->flags & RXE_POOL_INDEX)
 		rxe_drop_index(elem);

+	if (pool->flags & RXE_POOL_KEY)
+		rb_erase(&elem->key_node, &pool->key.tree);
+
 	if (!(pool->flags & RXE_POOL_NO_ALLOC)) {
 		obj = elem->obj;
 		kfree(obj);
diff --git a/drivers/infiniband/sw/rxe/rxe_pool.h b/drivers/infiniband/sw/rxe/rxe_pool.h
index 41eaf47a64a3..ad287c4ddc1a 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.h
+++ b/drivers/infiniband/sw/rxe/rxe_pool.h
@@ -105,31 +105,15 @@ void *rxe_alloc_locked(struct rxe_pool *pool);

 void *rxe_alloc(struct rxe_pool *pool);

+void *rxe_alloc_with_key_locked(struct rxe_pool *pool, void *key);
+
+void *rxe_alloc_with_key(struct rxe_pool *pool, void *key);
+
 /* connect already allocated object to pool */
 int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_entry *elem);

 #define rxe_add_to_pool(pool, obj) __rxe_add_to_pool(pool, &(obj)->pelem)

-/* assign a key to a keyed object and insert object into
- * pool's rb tree holding and not holding pool_lock
- */
-int __rxe_add_key_locked(struct rxe_pool_entry *elem, void *key);
-
-#define rxe_add_key_locked(obj, key) __rxe_add_key_locked(&(obj)->pelem, key)
-
-int __rxe_add_key(struct rxe_pool_entry *elem, void *key);
-
-#define rxe_add_key(obj, key) __rxe_add_key(&(obj)->pelem, key)
-
-/* remove elem from rb tree holding and not holding the pool_lock */
-void __rxe_drop_key_locked(struct rxe_pool_entry *elem);
-
-#define rxe_drop_key_locked(obj) __rxe_drop_key_locked(&(obj)->pelem)
-
-void __rxe_drop_key(struct rxe_pool_entry *elem);
-
-#define rxe_drop_key(obj) __rxe_drop_key(&(obj)->pelem)
-
 /* lookup an indexed object from index holding and not holding the pool_lock.
  * takes a reference on object */
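With the combined API, a keyed object is created and inserted into the key
tree under one pool lock, so there is no window where it exists without its
key. A simplified userspace sketch of that shape (illustrative only;
key_insert() is a hypothetical stand-in for the rb-tree insert, and the fixed
16-byte key merely mimics a multicast GID):

#include <pthread.h>
#include <stdlib.h>
#include <string.h>

struct keyed_obj {
	unsigned char key[16];		/* e.g. a multicast GID */
	/* ... payload ... */
};

struct pool {
	pthread_rwlock_t lock;
	/* ... key-ordered tree would live here ... */
};

/* hypothetical helper: 0 on success, nonzero on duplicate key */
static int key_insert(struct pool *p, struct keyed_obj *o)
{
	(void)p; (void)o;
	return 0;
}

static struct keyed_obj *pool_alloc_with_key(struct pool *p, const void *key)
{
	struct keyed_obj *o;

	pthread_rwlock_wrlock(&p->lock);
	o = calloc(1, sizeof(*o));
	if (o) {
		memcpy(o->key, key, sizeof(o->key));
		if (key_insert(p, o) != 0) {
			free(o);
			o = NULL;
		}
	}
	pthread_rwlock_unlock(&p->lock);
	return o;
}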
From patchwork Sun Oct 10 23:59:31 2021
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12548867
X-Patchwork-Delegate: jgg@ziepe.ca
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next 6/6] RDMA/rxe: Fix potential race condition in rxe_pool
Date: Sun, 10 Oct 2021 18:59:31 -0500
Message-Id: <20211010235931.24042-7-rpearsonhpe@gmail.com>
In-Reply-To: <20211010235931.24042-1-rpearsonhpe@gmail.com>
References: <20211010235931.24042-1-rpearsonhpe@gmail.com>

Currently there is a possible race condition involving rxe indexed or
keyed objects. One thread can be the last holder of a reference to an
object and drop that reference, triggering a call to
rxe_elem_release(), while at the same time another thread looks the
object up from its index or key by calling rxe_pool_get_index(/_key).
This can happen if an unexpected packet arrives as the result of a
retry attempt and looks up its rkey, or if a multicast packet arrives
just as the verbs consumer drops the mcast group. Add locking to
prevent looking up an object from its index or key while another
thread is trying to destroy it.

Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_pool.c | 53 +++++++++++++++++-----------
 drivers/infiniband/sw/rxe/rxe_pool.h | 15 ++++++--
 2 files changed, 46 insertions(+), 22 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c
index 70f407108b92..c6a583894956 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.c
+++ b/drivers/infiniband/sw/rxe/rxe_pool.c
@@ -266,10 +266,10 @@ static void rxe_drop_index(struct rxe_pool_entry *elem)
 	rb_erase(&elem->index_node, &pool->index.tree);
 }

-void *rxe_alloc_locked(struct rxe_pool *pool)
+static void *__rxe_alloc_locked(struct rxe_pool *pool)
 {
 	struct rxe_pool_entry *elem;
-	u8 *obj;
+	void *obj;
 	int err;

 	if (atomic_inc_return(&pool->num_elem) > pool->max_elem)
@@ -279,11 +279,10 @@ void *rxe_alloc_locked(struct rxe_pool *pool)
 	if (!obj)
 		goto out_cnt;

-	elem = (struct rxe_pool_entry *)(obj + pool->elem_offset);
+	elem = (struct rxe_pool_entry *)((u8 *)obj + pool->elem_offset);

 	elem->pool = pool;
 	elem->obj = obj;
-	kref_init(&elem->ref_cnt);

 	if (pool->flags & RXE_POOL_INDEX) {
 		err = rxe_add_index(elem);
@@ -300,17 +299,32 @@ void *rxe_alloc_locked(struct rxe_pool *pool)
 	return NULL;
 }

+void *rxe_alloc_locked(struct rxe_pool *pool)
+{
+	struct rxe_pool_entry *elem;
+	void *obj;
+
+	obj = __rxe_alloc_locked(pool);
+	if (!obj)
+		return NULL;
+
+	elem = (struct rxe_pool_entry *)(obj + pool->elem_offset);
+	kref_init(&elem->ref_cnt);
+
+	return obj;
+}
+
 void *rxe_alloc_with_key_locked(struct rxe_pool *pool, void *key)
 {
 	struct rxe_pool_entry *elem;
-	u8 *obj;
+	void *obj;
 	int err;

-	obj = rxe_alloc_locked(pool);
+	obj = __rxe_alloc_locked(pool);
 	if (!obj)
 		return NULL;

-	elem = (struct rxe_pool_entry *)(obj + pool->elem_offset);
+	elem = (struct rxe_pool_entry *)((u8 *)obj + pool->elem_offset);
 	memcpy((u8 *)elem + pool->key.key_offset, key, pool->key.key_size);
 	err = rxe_insert_key(pool, elem);
 	if (err) {
@@ -318,6 +332,8 @@ void *rxe_alloc_with_key_locked(struct rxe_pool *pool, void *key)
 		goto out_cnt;
 	}

+	kref_init(&elem->ref_cnt);
+
 	return obj;

 out_cnt:
@@ -351,14 +367,15 @@ void *rxe_alloc_with_key(struct rxe_pool *pool, void *key)

 int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_entry *elem)
 {
+	unsigned long flags;
 	int err;

+	write_lock_irqsave(&pool->pool_lock, flags);
 	if (atomic_inc_return(&pool->num_elem) > pool->max_elem)
 		goto out_cnt;

 	elem->pool = pool;
 	elem->obj = (u8 *)elem - pool->elem_offset;
-	kref_init(&elem->ref_cnt);

 	if (pool->flags & RXE_POOL_INDEX) {
 		err = rxe_add_index(elem);
@@ -366,10 +383,14 @@ int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_entry *elem)
 			goto out_cnt;
 	}

+	kref_init(&elem->ref_cnt);
+	write_unlock_irqrestore(&pool->pool_lock, flags);
+
 	return 0;

 out_cnt:
 	atomic_dec(&pool->num_elem);
+	write_unlock_irqrestore(&pool->pool_lock, flags);
 	return -EINVAL;
 }

@@ -401,7 +422,7 @@ void *rxe_pool_get_index_locked(struct rxe_pool *pool, u32 index)
 {
 	struct rb_node *node;
 	struct rxe_pool_entry *elem;
-	void *obj;
+	void *obj = NULL;

 	node = pool->index.tree.rb_node;

@@ -416,12 +437,8 @@ void *rxe_pool_get_index_locked(struct rxe_pool *pool, u32 index)
 		break;
 	}

-	if (node) {
-		kref_get(&elem->ref_cnt);
+	if (node && kref_get_unless_zero(&elem->ref_cnt))
 		obj = elem->obj;
-	} else {
-		obj = NULL;
-	}

 	return obj;
 }
@@ -442,7 +459,7 @@ void *rxe_pool_get_key_locked(struct rxe_pool *pool, void *key)
 {
 	struct rb_node *node;
 	struct rxe_pool_entry *elem;
-	void *obj;
+	void *obj = NULL;
 	int cmp;

 	node = pool->key.tree.rb_node;

@@ -461,12 +478,8 @@ void *rxe_pool_get_key_locked(struct rxe_pool *pool, void *key)
 		break;
 	}

-	if (node) {
-		kref_get(&elem->ref_cnt);
+	if (node && kref_get_unless_zero(&elem->ref_cnt))
 		obj = elem->obj;
-	} else {
-		obj = NULL;
-	}

 	return obj;
 }
diff --git a/drivers/infiniband/sw/rxe/rxe_pool.h b/drivers/infiniband/sw/rxe/rxe_pool.h
index ad287c4ddc1a..43dac03ad82e 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.h
+++ b/drivers/infiniband/sw/rxe/rxe_pool.h
@@ -132,9 +132,20 @@ void *rxe_pool_get_key(struct rxe_pool *pool, void *key);
 void rxe_elem_release(struct kref *kref);

 /* take a reference on an object */
-#define rxe_add_ref(elem) kref_get(&(elem)->pelem.ref_cnt)
+static inline int __rxe_add_ref(struct rxe_pool_entry *elem)
+{
+	int ret = kref_get_unless_zero(&elem->ref_cnt);
+
+	if (unlikely(!ret))
+		pr_warn("Taking a reference on a %s object that is already destroyed\n",
+			elem->pool->name);
+
+	return (ret) ? 0 : -EINVAL;
+}
+
+#define rxe_add_ref(obj) __rxe_add_ref(&(obj)->pelem)

 /* drop a reference on an object */
-#define rxe_drop_ref(elem) kref_put(&(elem)->pelem.ref_cnt, rxe_elem_release)
+#define rxe_drop_ref(obj) kref_put(&(obj)->pelem.ref_cnt, rxe_elem_release)

 #endif /* RXE_POOL_H */
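The key ingredient in the lookup paths above is kref_get_unless_zero(): a
lookup only succeeds if the refcount has not already dropped to zero, so a
concurrent release cannot hand out a dying object. A userspace analog of that
primitive using C11 atomics (illustrative only, hypothetical names):

#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

struct entry {
	atomic_int refcount;
	void *obj;
};

/* mirrors kref_get_unless_zero(): take a reference only if one is left */
static bool get_unless_zero(struct entry *e)
{
	int old = atomic_load(&e->refcount);

	while (old != 0) {
		if (atomic_compare_exchange_weak(&e->refcount, &old, old + 1))
			return true;	/* reference taken */
		/* on failure, old was reloaded with the current value; retry */
	}
	return false;			/* object is already being destroyed */
}

static void *lookup(struct entry *e)
{
	return get_unless_zero(e) ? e->obj : NULL;
}

Combined with taking the pool lock in __rxe_add_to_pool() and dropping the
index/key under that same lock in rxe_elem_release(), a lookup can no longer
observe an object whose last reference is already gone.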