From patchwork Thu Apr 21 01:40:34 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12820976
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v14 01/10] RDMA/rxe: Remove IB_SRQ_INIT_MASK
Date: Wed, 20 Apr 2022 20:40:34 -0500
Message-Id: <20220421014042.26985-2-rpearsonhpe@gmail.com>
In-Reply-To: <20220421014042.26985-1-rpearsonhpe@gmail.com>
References: <20220421014042.26985-1-rpearsonhpe@gmail.com>

Currently the #define IB_SRQ_INIT_MASK is used to distinguish the
rxe_create_srq verb from the rxe_modify_srq verb so that some code can
be shared between these two subroutines. This commit splits
rxe_srq_chk_attr into two subroutines, rxe_srq_chk_init and
rxe_srq_chk_attr, which handle the create_srq and modify_srq verbs
separately.
Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_loc.h   |   9 +-
 drivers/infiniband/sw/rxe/rxe_srq.c   | 118 +++++++++++++++-----------
 drivers/infiniband/sw/rxe/rxe_verbs.c |   4 +-
 3 files changed, 74 insertions(+), 57 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index 2ffbe3390668..ff6cae2c2949 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -159,15 +159,12 @@ void retransmit_timer(struct timer_list *t);
 void rnr_nak_timer(struct timer_list *t);
 
 /* rxe_srq.c */
-#define IB_SRQ_INIT_MASK (~IB_SRQ_LIMIT)
-
-int rxe_srq_chk_attr(struct rxe_dev *rxe, struct rxe_srq *srq,
-		     struct ib_srq_attr *attr, enum ib_srq_attr_mask mask);
-
+int rxe_srq_chk_init(struct rxe_dev *rxe, struct ib_srq_init_attr *init);
 int rxe_srq_from_init(struct rxe_dev *rxe, struct rxe_srq *srq,
 		      struct ib_srq_init_attr *init, struct ib_udata *udata,
 		      struct rxe_create_srq_resp __user *uresp);
-
+int rxe_srq_chk_attr(struct rxe_dev *rxe, struct rxe_srq *srq,
+		     struct ib_srq_attr *attr, enum ib_srq_attr_mask mask);
 int rxe_srq_from_attr(struct rxe_dev *rxe, struct rxe_srq *srq,
 		      struct ib_srq_attr *attr, enum ib_srq_attr_mask mask,
 		      struct rxe_modify_srq_cmd *ucmd, struct ib_udata *udata);
diff --git a/drivers/infiniband/sw/rxe/rxe_srq.c b/drivers/infiniband/sw/rxe/rxe_srq.c
index 0c0721f04357..e2dcfc5d97e3 100644
--- a/drivers/infiniband/sw/rxe/rxe_srq.c
+++ b/drivers/infiniband/sw/rxe/rxe_srq.c
@@ -6,64 +6,34 @@
 #include
 
 #include "rxe.h"
-#include "rxe_loc.h"
 #include "rxe_queue.h"
 
-int rxe_srq_chk_attr(struct rxe_dev *rxe, struct rxe_srq *srq,
-		     struct ib_srq_attr *attr, enum ib_srq_attr_mask mask)
+int rxe_srq_chk_init(struct rxe_dev *rxe, struct ib_srq_init_attr *init)
 {
-	if (srq && srq->error) {
-		pr_warn("srq in error state\n");
+	struct ib_srq_attr *attr = &init->attr;
+
+	if (attr->max_wr > rxe->attr.max_srq_wr) {
+		pr_warn("max_wr(%d) > max_srq_wr(%d)\n",
+			attr->max_wr, rxe->attr.max_srq_wr);
 		goto err1;
 	}
 
-	if (mask & IB_SRQ_MAX_WR) {
-		if (attr->max_wr > rxe->attr.max_srq_wr) {
-			pr_warn("max_wr(%d) > max_srq_wr(%d)\n",
-				attr->max_wr, rxe->attr.max_srq_wr);
-			goto err1;
-		}
-
-		if (attr->max_wr <= 0) {
-			pr_warn("max_wr(%d) <= 0\n", attr->max_wr);
-			goto err1;
-		}
-
-		if (srq && srq->limit && (attr->max_wr < srq->limit)) {
-			pr_warn("max_wr (%d) < srq->limit (%d)\n",
-				attr->max_wr, srq->limit);
-			goto err1;
-		}
-
-		if (attr->max_wr < RXE_MIN_SRQ_WR)
-			attr->max_wr = RXE_MIN_SRQ_WR;
+	if (attr->max_wr <= 0) {
+		pr_warn("max_wr(%d) <= 0\n", attr->max_wr);
+		goto err1;
 	}
 
-	if (mask & IB_SRQ_LIMIT) {
-		if (attr->srq_limit > rxe->attr.max_srq_wr) {
-			pr_warn("srq_limit(%d) > max_srq_wr(%d)\n",
-				attr->srq_limit, rxe->attr.max_srq_wr);
-			goto err1;
-		}
+	if (attr->max_wr < RXE_MIN_SRQ_WR)
+		attr->max_wr = RXE_MIN_SRQ_WR;
 
-		if (srq && (attr->srq_limit > srq->rq.queue->buf->index_mask)) {
-			pr_warn("srq_limit (%d) > cur limit(%d)\n",
-				attr->srq_limit,
-				srq->rq.queue->buf->index_mask);
-			goto err1;
-		}
+	if (attr->max_sge > rxe->attr.max_srq_sge) {
+		pr_warn("max_sge(%d) > max_srq_sge(%d)\n",
+			attr->max_sge, rxe->attr.max_srq_sge);
+		goto err1;
 	}
 
-	if (mask == IB_SRQ_INIT_MASK) {
-		if (attr->max_sge > rxe->attr.max_srq_sge) {
-			pr_warn("max_sge(%d) > max_srq_sge(%d)\n",
-				attr->max_sge, rxe->attr.max_srq_sge);
-			goto err1;
-		}
-
-		if (attr->max_sge < RXE_MIN_SRQ_SGE)
-			attr->max_sge = RXE_MIN_SRQ_SGE;
-	}
+	if (attr->max_sge < RXE_MIN_SRQ_SGE)
+		attr->max_sge = RXE_MIN_SRQ_SGE;
 
 	return 0;
 
@@ -93,8 +63,7 @@ int rxe_srq_from_init(struct rxe_dev *rxe, struct rxe_srq *srq,
 	spin_lock_init(&srq->rq.consumer_lock);
 
 	type = QUEUE_TYPE_FROM_CLIENT;
-	q = rxe_queue_init(rxe, &srq->rq.max_wr,
-			srq_wqe_size, type);
+	q = rxe_queue_init(rxe, &srq->rq.max_wr, srq_wqe_size, type);
 	if (!q) {
 		pr_warn("unable to allocate queue for srq\n");
 		return -ENOMEM;
@@ -121,6 +90,57 @@ int rxe_srq_from_init(struct rxe_dev *rxe, struct rxe_srq *srq,
 	return 0;
 }
 
+int rxe_srq_chk_attr(struct rxe_dev *rxe, struct rxe_srq *srq,
+		     struct ib_srq_attr *attr, enum ib_srq_attr_mask mask)
+{
+	if (srq->error) {
+		pr_warn("srq in error state\n");
+		goto err1;
+	}
+
+	if (mask & IB_SRQ_MAX_WR) {
+		if (attr->max_wr > rxe->attr.max_srq_wr) {
+			pr_warn("max_wr(%d) > max_srq_wr(%d)\n",
+				attr->max_wr, rxe->attr.max_srq_wr);
+			goto err1;
+		}
+
+		if (attr->max_wr <= 0) {
+			pr_warn("max_wr(%d) <= 0\n", attr->max_wr);
+			goto err1;
+		}
+
+		if (srq->limit && (attr->max_wr < srq->limit)) {
+			pr_warn("max_wr (%d) < srq->limit (%d)\n",
+				attr->max_wr, srq->limit);
+			goto err1;
+		}
+
+		if (attr->max_wr < RXE_MIN_SRQ_WR)
+			attr->max_wr = RXE_MIN_SRQ_WR;
+	}
+
+	if (mask & IB_SRQ_LIMIT) {
+		if (attr->srq_limit > rxe->attr.max_srq_wr) {
+			pr_warn("srq_limit(%d) > max_srq_wr(%d)\n",
+				attr->srq_limit, rxe->attr.max_srq_wr);
+			goto err1;
+		}
+
+		if (attr->srq_limit > srq->rq.queue->buf->index_mask) {
+			pr_warn("srq_limit (%d) > cur limit(%d)\n",
+				attr->srq_limit,
+				srq->rq.queue->buf->index_mask);
+			goto err1;
+		}
+	}
+
+	return 0;
+
+err1:
+	return -EINVAL;
+}
+
 int rxe_srq_from_attr(struct rxe_dev *rxe, struct rxe_srq *srq,
 		      struct ib_srq_attr *attr, enum ib_srq_attr_mask mask,
 		      struct rxe_modify_srq_cmd *ucmd, struct ib_udata *udata)
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
index 58e4412b1d16..2ddfd99dd020 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
@@ -7,8 +7,8 @@
 #include
 #include
 #include
+
 #include "rxe.h"
-#include "rxe_loc.h"
 #include "rxe_queue.h"
 #include "rxe_hw_counters.h"
 
@@ -295,7 +295,7 @@ static int rxe_create_srq(struct ib_srq *ibsrq, struct ib_srq_init_attr *init,
 		uresp = udata->outbuf;
 	}
 
-	err = rxe_srq_chk_attr(rxe, NULL, &init->attr, IB_SRQ_INIT_MASK);
+	err = rxe_srq_chk_init(rxe, init);
 	if (err)
 		goto err1;

From patchwork Thu Apr 21 01:40:35 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12820977
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v14 02/10] RDMA/rxe: Add rxe_srq_cleanup()
Date: Wed, 20 Apr 2022 20:40:35 -0500
Message-Id: <20220421014042.26985-3-rpearsonhpe@gmail.com>
In-Reply-To: <20220421014042.26985-1-rpearsonhpe@gmail.com>
References: <20220421014042.26985-1-rpearsonhpe@gmail.com>

Move the cleanup code from rxe_destroy_srq() to rxe_srq_cleanup(), which
is called after all references are dropped, to allow code depending on
the srq object to complete.
Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_loc.h   |  7 ++++---
 drivers/infiniband/sw/rxe/rxe_pool.c  |  1 +
 drivers/infiniband/sw/rxe/rxe_srq.c   | 11 +++++++++++
 drivers/infiniband/sw/rxe/rxe_verbs.c | 27 +++++++++++----------------
 4 files changed, 27 insertions(+), 19 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index ff6cae2c2949..18f3c5dac381 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -37,7 +37,7 @@ int rxe_cq_post(struct rxe_cq *cq, struct rxe_cqe *cqe, int solicited);
 
 void rxe_cq_disable(struct rxe_cq *cq);
 
-void rxe_cq_cleanup(struct rxe_pool_elem *arg);
+void rxe_cq_cleanup(struct rxe_pool_elem *elem);
 
 /* rxe_mcast.c */
 struct rxe_mcg *rxe_lookup_mcg(struct rxe_dev *rxe, union ib_gid *mgid);
@@ -81,7 +81,7 @@ int rxe_invalidate_mr(struct rxe_qp *qp, u32 rkey);
 int rxe_reg_fast_mr(struct rxe_qp *qp, struct rxe_send_wqe *wqe);
 int rxe_mr_set_page(struct ib_mr *ibmr, u64 addr);
 int rxe_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata);
-void rxe_mr_cleanup(struct rxe_pool_elem *arg);
+void rxe_mr_cleanup(struct rxe_pool_elem *elem);
 
 /* rxe_mw.c */
 int rxe_alloc_mw(struct ib_mw *ibmw, struct ib_udata *udata);
@@ -89,7 +89,7 @@ int rxe_dealloc_mw(struct ib_mw *ibmw);
 int rxe_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe);
 int rxe_invalidate_mw(struct rxe_qp *qp, u32 rkey);
 struct rxe_mw *rxe_lookup_mw(struct rxe_qp *qp, int access, u32 rkey);
-void rxe_mw_cleanup(struct rxe_pool_elem *arg);
+void rxe_mw_cleanup(struct rxe_pool_elem *elem);
 
 /* rxe_net.c */
 struct sk_buff *rxe_init_packet(struct rxe_dev *rxe, struct rxe_av *av,
@@ -168,6 +168,7 @@ int rxe_srq_chk_attr(struct rxe_dev *rxe, struct rxe_srq *srq,
 int rxe_srq_from_attr(struct rxe_dev *rxe, struct rxe_srq *srq,
 		      struct ib_srq_attr *attr, enum ib_srq_attr_mask mask,
 		      struct rxe_modify_srq_cmd *ucmd, struct ib_udata *udata);
+void rxe_srq_cleanup(struct rxe_pool_elem *elem);
 
 void rxe_dealloc(struct ib_device *ib_dev);
diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c
index 87066d04ed18..5963b1429ad8 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.c
+++ b/drivers/infiniband/sw/rxe/rxe_pool.c
@@ -46,6 +46,7 @@ static const struct rxe_type_info {
 		.name = "srq",
 		.size = sizeof(struct rxe_srq),
 		.elem_offset = offsetof(struct rxe_srq, elem),
+		.cleanup = rxe_srq_cleanup,
 		.min_index = RXE_MIN_SRQ_INDEX,
 		.max_index = RXE_MAX_SRQ_INDEX,
 		.max_elem = RXE_MAX_SRQ_INDEX - RXE_MIN_SRQ_INDEX + 1,
diff --git a/drivers/infiniband/sw/rxe/rxe_srq.c b/drivers/infiniband/sw/rxe/rxe_srq.c
index e2dcfc5d97e3..02b39498c370 100644
--- a/drivers/infiniband/sw/rxe/rxe_srq.c
+++ b/drivers/infiniband/sw/rxe/rxe_srq.c
@@ -174,3 +174,14 @@ int rxe_srq_from_attr(struct rxe_dev *rxe, struct rxe_srq *srq,
 	srq->rq.queue = NULL;
 	return err;
 }
+
+void rxe_srq_cleanup(struct rxe_pool_elem *elem)
+{
+	struct rxe_srq *srq = container_of(elem, typeof(*srq), elem);
+
+	if (srq->pd)
+		rxe_put(srq->pd);
+
+	if (srq->rq.queue)
+		rxe_queue_cleanup(srq->rq.queue);
+}
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
index 2ddfd99dd020..30491b976d39 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
@@ -286,36 +286,35 @@ static int rxe_create_srq(struct ib_srq *ibsrq, struct ib_srq_init_attr *init,
 	struct rxe_srq *srq = to_rsrq(ibsrq);
 	struct rxe_create_srq_resp __user *uresp = NULL;
 
-	if (init->srq_type != IB_SRQT_BASIC)
-		return -EOPNOTSUPP;
-
 	if (udata) {
 		if (udata->outlen < sizeof(*uresp))
 			return -EINVAL;
 		uresp = udata->outbuf;
 	}
 
+	if (init->srq_type != IB_SRQT_BASIC)
+		return -EOPNOTSUPP;
+
 	err = rxe_srq_chk_init(rxe, init);
 	if (err)
-		goto err1;
+		goto err_out;
 
 	err = rxe_add_to_pool(&rxe->srq_pool, srq);
 	if (err)
-		goto err1;
+		goto err_out;
 
 	rxe_get(pd);
 	srq->pd = pd;
 
 	err = rxe_srq_from_init(rxe, srq, init, udata, uresp);
 	if (err)
-		goto err2;
+		goto err_cleanup;
 
 	return 0;
 
-err2:
-	rxe_put(pd);
+err_cleanup:
 	rxe_put(srq);
-err1:
+err_out:
 	return err;
 }
 
@@ -339,15 +338,15 @@ static int rxe_modify_srq(struct ib_srq *ibsrq, struct ib_srq_attr *attr,
 
 	err = rxe_srq_chk_attr(rxe, srq, attr, mask);
 	if (err)
-		goto err1;
+		goto err_out;
 
 	err = rxe_srq_from_attr(rxe, srq, attr, mask, &ucmd, udata);
 	if (err)
-		goto err1;
+		goto err_out;
 
 	return 0;
 
-err1:
+err_out:
 	return err;
 }
 
@@ -368,10 +367,6 @@ static int rxe_destroy_srq(struct ib_srq *ibsrq, struct ib_udata *udata)
 {
 	struct rxe_srq *srq = to_rsrq(ibsrq);
 
-	if (srq->rq.queue)
-		rxe_queue_cleanup(srq->rq.queue);
-
-	rxe_put(srq->pd);
 	rxe_put(srq);
 	return 0;
 }

From patchwork Thu Apr 21 01:40:36 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12820978
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v14 03/10] RDMA/rxe: Check rxe_get() return value
Date: Wed, 20 Apr 2022 20:40:36 -0500
Message-Id: <20220421014042.26985-4-rpearsonhpe@gmail.com>
In-Reply-To: <20220421014042.26985-1-rpearsonhpe@gmail.com>
References: <20220421014042.26985-1-rpearsonhpe@gmail.com>

In the tasklets (completer, responder, and requester) check the return
value from rxe_get() to detect failures to get a reference. This only
occurs if the qp has had its reference count drop to zero, which
indicates that it should no longer be used. This is in preparation for
an upcoming change that will move the qp cleanup code to
rxe_qp_cleanup().
Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_comp.c | 3 ++-
 drivers/infiniband/sw/rxe/rxe_req.c  | 3 ++-
 drivers/infiniband/sw/rxe/rxe_resp.c | 3 ++-
 3 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_comp.c b/drivers/infiniband/sw/rxe/rxe_comp.c
index 138b3e7d3a5f..da3a398053b8 100644
--- a/drivers/infiniband/sw/rxe/rxe_comp.c
+++ b/drivers/infiniband/sw/rxe/rxe_comp.c
@@ -562,7 +562,8 @@ int rxe_completer(void *arg)
 	enum comp_state state;
 	int ret = 0;
 
-	rxe_get(qp);
+	if (!rxe_get(qp))
+		return -EAGAIN;
 
 	if (!qp->valid || qp->req.state == QP_STATE_ERROR ||
 	    qp->req.state == QP_STATE_RESET) {
diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
index 9bb24b824968..ca55bc4cd120 100644
--- a/drivers/infiniband/sw/rxe/rxe_req.c
+++ b/drivers/infiniband/sw/rxe/rxe_req.c
@@ -609,7 +609,8 @@ int rxe_requester(void *arg)
 	struct rxe_ah *ah;
 	struct rxe_av *av;
 
-	rxe_get(qp);
+	if (!rxe_get(qp))
+		return -EAGAIN;
 
 next_wqe:
 	if (unlikely(!qp->valid || qp->req.state == QP_STATE_ERROR))
diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c
index 49133bd0d756..f4f6ee5d81fe 100644
--- a/drivers/infiniband/sw/rxe/rxe_resp.c
+++ b/drivers/infiniband/sw/rxe/rxe_resp.c
@@ -1262,7 +1262,8 @@ int rxe_responder(void *arg)
 	struct rxe_pkt_info *pkt = NULL;
 	int ret = 0;
 
-	rxe_get(qp);
+	if (!rxe_get(qp))
+		return -EAGAIN;
 
 	qp->resp.aeth_syndrome = AETH_ACK_UNLIMITED;

From patchwork Thu Apr 21 01:40:37 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12820979
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v14 04/10] RDMA/rxe: Move qp cleanup code to rxe_qp_do_cleanup()
Date: Wed, 20 Apr 2022 20:40:37 -0500
Message-Id: <20220421014042.26985-5-rpearsonhpe@gmail.com>
In-Reply-To: <20220421014042.26985-1-rpearsonhpe@gmail.com>
References: <20220421014042.26985-1-rpearsonhpe@gmail.com>

Move the code from rxe_qp_destroy() to rxe_qp_do_cleanup(). This allows
flows holding references to the qp to complete before the qp object is
torn down.
Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_loc.h   |  1 -
 drivers/infiniband/sw/rxe/rxe_qp.c    | 12 ++++--------
 drivers/infiniband/sw/rxe/rxe_verbs.c |  1 -
 3 files changed, 4 insertions(+), 10 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index 18f3c5dac381..0e022ae1b8a5 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -114,7 +114,6 @@ int rxe_qp_from_attr(struct rxe_qp *qp, struct ib_qp_attr *attr,
 int rxe_qp_to_attr(struct rxe_qp *qp, struct ib_qp_attr *attr, int mask);
 void rxe_qp_error(struct rxe_qp *qp);
 int rxe_qp_chk_destroy(struct rxe_qp *qp);
-void rxe_qp_destroy(struct rxe_qp *qp);
 void rxe_qp_cleanup(struct rxe_pool_elem *elem);
 
 static inline int qp_num(struct rxe_qp *qp)
diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c
index ff58f76347c9..a8011757784e 100644
--- a/drivers/infiniband/sw/rxe/rxe_qp.c
+++ b/drivers/infiniband/sw/rxe/rxe_qp.c
@@ -765,9 +765,11 @@ int rxe_qp_chk_destroy(struct rxe_qp *qp)
 	return 0;
 }
 
-/* called by the destroy qp verb */
-void rxe_qp_destroy(struct rxe_qp *qp)
+/* called when the last reference to the qp is dropped */
+static void rxe_qp_do_cleanup(struct work_struct *work)
 {
+	struct rxe_qp *qp = container_of(work, typeof(*qp), cleanup_work.work);
+
 	qp->valid = 0;
 	qp->qp_timeout_jiffies = 0;
 	rxe_cleanup_task(&qp->resp.task);
@@ -786,12 +788,6 @@ void rxe_qp_destroy(struct rxe_qp *qp)
 		__rxe_do_task(&qp->comp.task);
 		__rxe_do_task(&qp->req.task);
 	}
-}
-
-/* called when the last reference to the qp is dropped */
-static void rxe_qp_do_cleanup(struct work_struct *work)
-{
-	struct rxe_qp *qp = container_of(work, typeof(*qp), cleanup_work.work);
 
 	if (qp->sq.queue)
 		rxe_queue_cleanup(qp->sq.queue);
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
index 30491b976d39..8585b1096538 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
@@ -490,7 +490,6 @@ static int rxe_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata)
 	if (ret)
 		return ret;
 
-	rxe_qp_destroy(qp);
 	rxe_put(qp);
 	return 0;
 }

From patchwork Thu Apr 21 01:40:38 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12820980
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v14 05/10] RDMA/rxe: Move mr cleanup code to rxe_mr_cleanup()
Date: Wed, 20 Apr 2022 20:40:38 -0500
Message-Id: <20220421014042.26985-6-rpearsonhpe@gmail.com>
In-Reply-To: <20220421014042.26985-1-rpearsonhpe@gmail.com>
References: <20220421014042.26985-1-rpearsonhpe@gmail.com>

Move the code which tears down an mr to rxe_mr_cleanup() to allow
operations holding a reference to the mr to complete.
Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_mr.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
index 60a31b718774..fc3942e04a1f 100644
--- a/drivers/infiniband/sw/rxe/rxe_mr.c
+++ b/drivers/infiniband/sw/rxe/rxe_mr.c
@@ -683,14 +683,10 @@ int rxe_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata)
 {
 	struct rxe_mr *mr = to_rmr(ibmr);

-	if (atomic_read(&mr->num_mw) > 0) {
-		pr_warn("%s: Attempt to deregister an MR while bound to MWs\n",
-			__func__);
+	/* See IBA 10.6.7.2.6 */
+	if (atomic_read(&mr->num_mw) > 0)
 		return -EINVAL;
-	}

-	mr->state = RXE_MR_STATE_INVALID;
-	rxe_put(mr_pd(mr));
 	rxe_put(mr);

 	return 0;
@@ -700,6 +696,8 @@ void rxe_mr_cleanup(struct rxe_pool_elem *elem)
 {
 	struct rxe_mr *mr = container_of(elem, typeof(*mr), elem);

+	rxe_put(mr_pd(mr));
+
 	ib_umem_release(mr->umem);

 	if (mr->cur_map_set)

From patchwork Thu Apr 21 01:40:39 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12820981
X-Patchwork-Delegate: jgg@ziepe.ca
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v14 06/10] RDMA/rxe: Move mw cleanup code to rxe_mw_cleanup()
Date: Wed, 20 Apr 2022 20:40:39 -0500
Message-Id: <20220421014042.26985-7-rpearsonhpe@gmail.com>
In-Reply-To: <20220421014042.26985-1-rpearsonhpe@gmail.com>
References: <20220421014042.26985-1-rpearsonhpe@gmail.com>

Move code from rxe_dealloc_mw() to rxe_mw_cleanup() to allow flows
which hold a reference to the mw to complete.

Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_mw.c   | 57 ++++++++++++++--------------
 drivers/infiniband/sw/rxe/rxe_pool.c |  1 +
 2 files changed, 29 insertions(+), 29 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_mw.c b/drivers/infiniband/sw/rxe/rxe_mw.c
index f29829efd07d..2e1fa844fabf 100644
--- a/drivers/infiniband/sw/rxe/rxe_mw.c
+++ b/drivers/infiniband/sw/rxe/rxe_mw.c
@@ -36,40 +36,11 @@ int rxe_alloc_mw(struct ib_mw *ibmw, struct ib_udata *udata)
 	return 0;
 }

-static void rxe_do_dealloc_mw(struct rxe_mw *mw)
-{
-	if (mw->mr) {
-		struct rxe_mr *mr = mw->mr;
-
-		mw->mr = NULL;
-		atomic_dec(&mr->num_mw);
-		rxe_put(mr);
-	}
-
-	if (mw->qp) {
-		struct rxe_qp *qp = mw->qp;
-
-		mw->qp = NULL;
-		rxe_put(qp);
-	}
-
-	mw->access = 0;
-	mw->addr = 0;
-	mw->length = 0;
-	mw->state = RXE_MW_STATE_INVALID;
-}
-
 int rxe_dealloc_mw(struct ib_mw *ibmw)
 {
 	struct rxe_mw *mw = to_rmw(ibmw);
-	struct rxe_pd *pd = to_rpd(ibmw->pd);
-
-	spin_lock_bh(&mw->lock);
-	rxe_do_dealloc_mw(mw);
-	spin_unlock_bh(&mw->lock);

 	rxe_put(mw);
-	rxe_put(pd);

 	return 0;
 }
@@ -336,3 +307,31 @@ struct rxe_mw *rxe_lookup_mw(struct rxe_qp *qp, int access, u32 rkey)

 	return mw;
 }
+
+void rxe_mw_cleanup(struct rxe_pool_elem *elem)
+{
+	struct rxe_mw *mw = container_of(elem, typeof(*mw), elem);
+	struct rxe_pd *pd = to_rpd(mw->ibmw.pd);
+
+	rxe_put(pd);
+
+	if (mw->mr) {
+		struct rxe_mr *mr = mw->mr;
+
+		mw->mr = NULL;
+		atomic_dec(&mr->num_mw);
+		rxe_put(mr);
+	}
+
+	if (mw->qp) {
+		struct rxe_qp *qp = mw->qp;
+
+		mw->qp = NULL;
+		rxe_put(qp);
+	}
+
+	mw->access = 0;
+	mw->addr = 0;
+	mw->length = 0;
+	mw->state = RXE_MW_STATE_INVALID;
+}
diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c
index 5963b1429ad8..0fdde3d46949 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.c
+++ b/drivers/infiniband/sw/rxe/rxe_pool.c
@@ -83,6 +83,7 @@ static const struct rxe_type_info {
 		.name		= "mw",
 		.size		= sizeof(struct rxe_mw),
 		.elem_offset	= offsetof(struct rxe_mw, elem),
+		.cleanup	= rxe_mw_cleanup,
 		.min_index	= RXE_MIN_MW_INDEX,
 		.max_index	= RXE_MAX_MW_INDEX,
 		.max_elem	= RXE_MAX_MW_INDEX - RXE_MIN_MW_INDEX + 1,

From patchwork Thu Apr 21 01:40:40 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12820982
X-Patchwork-Delegate: jgg@ziepe.ca
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v14 07/10] RDMA/rxe: Enforce IBA C11-17
Date: Wed, 20 Apr 2022 20:40:40 -0500
Message-Id: <20220421014042.26985-8-rpearsonhpe@gmail.com>
In-Reply-To: <20220421014042.26985-1-rpearsonhpe@gmail.com>
References: <20220421014042.26985-1-rpearsonhpe@gmail.com>

Add a counter to keep track of the number of WQs connected to a CQ and
return an error if destroy_cq() is called while the counter is non-zero.

Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_qp.c    | 10 ++++++++++
 drivers/infiniband/sw/rxe/rxe_verbs.c |  6 ++++++
 drivers/infiniband/sw/rxe/rxe_verbs.h |  1 +
 3 files changed, 17 insertions(+)

diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c
index a8011757784e..22e9b85344c3 100644
--- a/drivers/infiniband/sw/rxe/rxe_qp.c
+++ b/drivers/infiniband/sw/rxe/rxe_qp.c
@@ -322,6 +322,9 @@ int rxe_qp_from_init(struct rxe_dev *rxe, struct rxe_qp *qp, struct rxe_pd *pd,
 	qp->scq = scq;
 	qp->srq = srq;

+	atomic_inc(&rcq->num_wq);
+	atomic_inc(&scq->num_wq);
+
 	rxe_qp_init_misc(rxe, qp, init);

 	err = rxe_qp_init_req(rxe, qp, init, udata, uresp);
@@ -341,6 +344,9 @@ int rxe_qp_from_init(struct rxe_dev *rxe, struct rxe_qp *qp, struct rxe_pd *pd,
 	rxe_queue_cleanup(qp->sq.queue);
 	qp->sq.queue = NULL;
 err1:
+	atomic_dec(&rcq->num_wq);
+	atomic_dec(&scq->num_wq);
+
 	qp->pd = NULL;
 	qp->rcq = NULL;
 	qp->scq = NULL;
@@ -798,10 +804,14 @@ static void rxe_qp_do_cleanup(struct work_struct *work)
 	if (qp->rq.queue)
 		rxe_queue_cleanup(qp->rq.queue);

+	atomic_dec(&qp->scq->num_wq);
 	if (qp->scq)
 		rxe_put(qp->scq);
+
+	atomic_dec(&qp->rcq->num_wq);
 	if (qp->rcq)
 		rxe_put(qp->rcq);
+
 	if (qp->pd)
 		rxe_put(qp->pd);
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
index 8585b1096538..7357794b951a 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
@@ -800,6 +800,12 @@ static int rxe_destroy_cq(struct ib_cq *ibcq, struct ib_udata *udata)
 {
 	struct rxe_cq *cq = to_rcq(ibcq);

+	/* See IBA C11-17: The CI shall return an error if this Verb is
+	 * invoked while a Work Queue is still associated with the CQ.
+	 */
+	if (atomic_read(&cq->num_wq))
+		return -EINVAL;
+
 	rxe_cq_disable(cq);

 	rxe_put(cq);
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h
index 86068d70cd95..ac464e68c923 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.h
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.h
@@ -67,6 +67,7 @@ struct rxe_cq {
 	bool			is_dying;
 	bool			is_user;
 	struct tasklet_struct	comp_task;
+	atomic_t		num_wq;
 };

 enum wqe_state {

From patchwork Thu Apr 21 01:40:41 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12820985
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v14 08/10] RDMA/rxe: Stop lookup of partially built objects
Date: Wed, 20 Apr 2022 20:40:41 -0500
Message-Id: <20220421014042.26985-9-rpearsonhpe@gmail.com>
In-Reply-To: <20220421014042.26985-1-rpearsonhpe@gmail.com>
References: <20220421014042.26985-1-rpearsonhpe@gmail.com>

Currently the rdma_rxe driver has a security weakness: it gives
partially initialized objects indices, allowing external actors to
gain access to them by sending packets which refer to their index
(e.g. qpn, rkey, etc.), causing unpredictable results.

This patch adds a new API, rxe_finalize(obj), which enables looking up
pool objects from indices using rxe_pool_get_index() for AH, QP, MR,
and MW. It is called in the create verbs only after the objects are
fully initialized.

It also adds a wait for completion to the destroy/dealloc verbs to
ensure that all references have been dropped before returning to
rdma_core, by implementing a new rxe_pool API, rxe_cleanup(), which
drops a reference to the object and then waits for all other
references to be dropped. When the last reference is dropped the
object is completed by kref. After that it cleans up the object and,
if locally allocated, frees the memory.

Combined with deferring cleanup code to type-specific cleanup
routines, this allows all pending activity referring to objects to
complete before returning to rdma_core.
Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_mr.c    |  2 +-
 drivers/infiniband/sw/rxe/rxe_mw.c    |  4 +-
 drivers/infiniband/sw/rxe/rxe_pool.c  | 62 +++++++++++++++++++++++++--
 drivers/infiniband/sw/rxe/rxe_pool.h  | 11 +++--
 drivers/infiniband/sw/rxe/rxe_verbs.c | 30 ++++++++-----
 5 files changed, 90 insertions(+), 19 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
index fc3942e04a1f..9a5c2af6a56f 100644
--- a/drivers/infiniband/sw/rxe/rxe_mr.c
+++ b/drivers/infiniband/sw/rxe/rxe_mr.c
@@ -687,7 +687,7 @@ int rxe_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata)
 	if (atomic_read(&mr->num_mw) > 0)
 		return -EINVAL;

-	rxe_put(mr);
+	rxe_cleanup(mr);

 	return 0;
 }
diff --git a/drivers/infiniband/sw/rxe/rxe_mw.c b/drivers/infiniband/sw/rxe/rxe_mw.c
index 2e1fa844fabf..86e63d7dc1f3 100644
--- a/drivers/infiniband/sw/rxe/rxe_mw.c
+++ b/drivers/infiniband/sw/rxe/rxe_mw.c
@@ -33,6 +33,8 @@ int rxe_alloc_mw(struct ib_mw *ibmw, struct ib_udata *udata)
 		RXE_MW_STATE_FREE : RXE_MW_STATE_VALID;
 	spin_lock_init(&mw->lock);

+	rxe_finalize(mw);
+
 	return 0;
 }

@@ -40,7 +42,7 @@ int rxe_dealloc_mw(struct ib_mw *ibmw)
 {
 	struct rxe_mw *mw = to_rmw(ibmw);

-	rxe_put(mw);
+	rxe_cleanup(mw);

 	return 0;
 }
diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c
index 0fdde3d46949..f5380b6bdea2 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.c
+++ b/drivers/infiniband/sw/rxe/rxe_pool.c
@@ -6,6 +6,8 @@

 #include "rxe.h"

+#define RXE_POOL_TIMEOUT	(200)
+#define RXE_POOL_MAX_TIMEOUTS	(3)
 #define RXE_POOL_ALIGN		(16)

 static const struct rxe_type_info {
@@ -139,8 +141,12 @@ void *rxe_alloc(struct rxe_pool *pool)
 	elem->pool = pool;
 	elem->obj = obj;
 	kref_init(&elem->ref_cnt);
+	init_completion(&elem->complete);

-	err = xa_alloc_cyclic(&pool->xa, &elem->index, elem, pool->limit,
+	/* allocate index in array but leave pointer as NULL so it
+	 * can't be looked up until rxe_finalize() is called
+	 */
+	err = xa_alloc_cyclic(&pool->xa, &elem->index, NULL, pool->limit,
 			      &pool->next, GFP_KERNEL);
 	if (err)
 		goto err_free;
@@ -167,8 +173,9 @@ int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_elem *elem)
 	elem->pool = pool;
 	elem->obj = (u8 *)elem - pool->elem_offset;
 	kref_init(&elem->ref_cnt);
+	init_completion(&elem->complete);

-	err = xa_alloc_cyclic(&pool->xa, &elem->index, elem, pool->limit,
+	err = xa_alloc_cyclic(&pool->xa, &elem->index, NULL, pool->limit,
 			      &pool->next, GFP_KERNEL);
 	if (err)
 		goto err_cnt;
@@ -201,9 +208,44 @@ void *rxe_pool_get_index(struct rxe_pool *pool, u32 index)
 static void rxe_elem_release(struct kref *kref)
 {
 	struct rxe_pool_elem *elem = container_of(kref, typeof(*elem), ref_cnt);
+
+	complete(&elem->complete);
+}
+
+int __rxe_cleanup(struct rxe_pool_elem *elem)
+{
 	struct rxe_pool *pool = elem->pool;
+	struct xarray *xa = &pool->xa;
+	static int timeout = RXE_POOL_TIMEOUT;
+	unsigned long flags;
+	int ret, err = 0;
+	void *xa_ret;

-	xa_erase(&pool->xa, elem->index);
+	/* erase xarray entry to prevent looking up
+	 * the pool elem from its index
+	 */
+	xa_lock_irqsave(xa, flags);
+	xa_ret = __xa_erase(xa, elem->index);
+	xa_unlock_irqrestore(xa, flags);
+	WARN_ON(xa_err(xa_ret));
+
+	/* if this is the last call to rxe_put complete the
+	 * object. It is safe to touch elem after this since
+	 * it is freed below
+	 */
+	__rxe_put(elem);
+
+	if (timeout) {
+		ret = wait_for_completion_timeout(&elem->complete, timeout);
+		if (!ret) {
+			pr_warn("Timed out waiting for %s#%d to complete\n",
+				pool->name, elem->index);
+			if (++pool->timeouts >= RXE_POOL_MAX_TIMEOUTS)
+				timeout = 0;
+
+			err = -EINVAL;
+		}
+	}

 	if (pool->cleanup)
 		pool->cleanup(elem);
@@ -212,6 +254,8 @@ static void rxe_elem_release(struct kref *kref)
 		kfree(elem->obj);

 	atomic_dec(&pool->num_elem);
+
+	return err;
 }

 int __rxe_get(struct rxe_pool_elem *elem)
@@ -223,3 +267,15 @@ int __rxe_put(struct rxe_pool_elem *elem)
 {
 	return kref_put(&elem->ref_cnt, rxe_elem_release);
 }
+
+void __rxe_finalize(struct rxe_pool_elem *elem)
+{
+	struct xarray *xa = &elem->pool->xa;
+	unsigned long flags;
+	void *ret;
+
+	xa_lock_irqsave(xa, flags);
+	ret = __xa_store(&elem->pool->xa, elem->index, elem, GFP_KERNEL);
+	xa_unlock_irqrestore(xa, flags);
+	WARN_ON(xa_err(ret));
+}
diff --git a/drivers/infiniband/sw/rxe/rxe_pool.h b/drivers/infiniband/sw/rxe/rxe_pool.h
index 24bcc786c1b3..83f96b2d5096 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.h
+++ b/drivers/infiniband/sw/rxe/rxe_pool.h
@@ -28,6 +28,7 @@ struct rxe_pool_elem {
 	void			*obj;
 	struct kref		ref_cnt;
 	struct list_head	list;
+	struct completion	complete;
 	u32			index;
 };

@@ -37,6 +38,7 @@ struct rxe_pool {
 	void			(*cleanup)(struct rxe_pool_elem *elem);
 	enum rxe_pool_flags	flags;
 	enum rxe_elem_type	type;
+	unsigned int		timeouts;

 	unsigned int		max_elem;
 	atomic_t		num_elem;
@@ -63,20 +65,23 @@ void *rxe_alloc(struct rxe_pool *pool);

 /* connect already allocated object to pool */
 int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_elem *elem);
-
 #define rxe_add_to_pool(pool, obj) __rxe_add_to_pool(pool, &(obj)->elem)

 /* lookup an indexed object from index. takes a reference on object */
 void *rxe_pool_get_index(struct rxe_pool *pool, u32 index);

 int __rxe_get(struct rxe_pool_elem *elem);
-
 #define rxe_get(obj) __rxe_get(&(obj)->elem)

 int __rxe_put(struct rxe_pool_elem *elem);
-
 #define rxe_put(obj) __rxe_put(&(obj)->elem)

+int __rxe_cleanup(struct rxe_pool_elem *elem);
+#define rxe_cleanup(obj) __rxe_cleanup(&(obj)->elem)
+
 #define rxe_read(obj) kref_read(&(obj)->elem.ref_cnt)

+void __rxe_finalize(struct rxe_pool_elem *elem);
+#define rxe_finalize(obj) __rxe_finalize(&(obj)->elem)
+
 #endif /* RXE_POOL_H */
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
index 7357794b951a..b003bc126fb7 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
@@ -115,7 +115,7 @@ static void rxe_dealloc_ucontext(struct ib_ucontext *ibuc)
 {
 	struct rxe_ucontext *uc = to_ruc(ibuc);

-	rxe_put(uc);
+	rxe_cleanup(uc);
 }

 static int rxe_port_immutable(struct ib_device *dev, u32 port_num,
@@ -149,7 +149,7 @@ static int rxe_dealloc_pd(struct ib_pd *ibpd, struct ib_udata *udata)
 {
 	struct rxe_pd *pd = to_rpd(ibpd);

-	rxe_put(pd);
+	rxe_cleanup(pd);

 	return 0;
 }
@@ -188,7 +188,7 @@ static int rxe_create_ah(struct ib_ah *ibah,
 		err = copy_to_user(&uresp->ah_num, &ah->ah_num,
 					 sizeof(uresp->ah_num));
 		if (err) {
-			rxe_put(ah);
+			rxe_cleanup(ah);
 			return -EFAULT;
 		}
 	} else if (ah->is_user) {
@@ -197,6 +197,8 @@ static int rxe_create_ah(struct ib_ah *ibah,
 	}

 	rxe_init_av(init_attr->ah_attr, &ah->av);
+	rxe_finalize(ah);
+
 	return 0;
 }
@@ -228,7 +230,7 @@ static int rxe_destroy_ah(struct ib_ah *ibah, u32 flags)
 {
 	struct rxe_ah *ah = to_rah(ibah);

-	rxe_put(ah);
+	rxe_cleanup(ah);

 	return 0;
 }
@@ -313,7 +315,7 @@ static int rxe_create_srq(struct ib_srq *ibsrq, struct ib_srq_init_attr *init,
 	return 0;

 err_cleanup:
-	rxe_put(srq);
+	rxe_cleanup(srq);
 err_out:
 	return err;
 }
@@ -367,7 +369,7 @@ static int rxe_destroy_srq(struct ib_srq *ibsrq, struct ib_udata *udata)
 {
 	struct rxe_srq *srq = to_rsrq(ibsrq);

-	rxe_put(srq);
+	rxe_cleanup(srq);

 	return 0;
 }
@@ -434,10 +436,11 @@ static int rxe_create_qp(struct ib_qp *ibqp, struct ib_qp_init_attr *init,
 	if (err)
 		goto qp_init;

+	rxe_finalize(qp);
 	return 0;

 qp_init:
-	rxe_put(qp);
+	rxe_cleanup(qp);
 	return err;
 }
@@ -490,7 +493,7 @@ static int rxe_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata)
 	if (ret)
 		return ret;

-	rxe_put(qp);
+	rxe_cleanup(qp);

 	return 0;
 }
@@ -808,7 +811,7 @@ static int rxe_destroy_cq(struct ib_cq *ibcq, struct ib_udata *udata)

 	rxe_cq_disable(cq);

-	rxe_put(cq);
+	rxe_cleanup(cq);

 	return 0;
 }
@@ -903,6 +906,7 @@ static struct ib_mr *rxe_get_dma_mr(struct ib_pd *ibpd, int access)
 	rxe_get(pd);
 	rxe_mr_init_dma(pd, access, mr);
+	rxe_finalize(mr);

 	return &mr->ibmr;
 }
@@ -931,11 +935,13 @@ static struct ib_mr *rxe_reg_user_mr(struct ib_pd *ibpd,
 	if (err)
 		goto err3;

+	rxe_finalize(mr);
+
 	return &mr->ibmr;

 err3:
 	rxe_put(pd);
-	rxe_put(mr);
+	rxe_cleanup(mr);
 err2:
 	return ERR_PTR(err);
 }
@@ -963,11 +969,13 @@ static struct ib_mr *rxe_alloc_mr(struct ib_pd *ibpd, enum ib_mr_type mr_type,
 	if (err)
 		goto err2;

+	rxe_finalize(mr);
+
 	return &mr->ibmr;

 err2:
 	rxe_put(pd);
-	rxe_put(mr);
+	rxe_cleanup(mr);
 err1:
 	return ERR_PTR(err);
 }

From patchwork Thu Apr 21 01:40:42 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12820983
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v14 09/10] RDMA/rxe: Convert read side locking to rcu
Date: Wed, 20 Apr 2022 20:40:42 -0500
Message-Id: <20220421014042.26985-10-rpearsonhpe@gmail.com>
In-Reply-To: <20220421014042.26985-1-rpearsonhpe@gmail.com>
References: <20220421014042.26985-1-rpearsonhpe@gmail.com>

Use rcu_read_lock() to protect read-side operations in rxe_pool.c.
Convert write-side locking to use plain spin_lock().

Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_pool.c | 26 ++++++++++----------------
 1 file changed, 10 insertions(+), 16 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c
index f5380b6bdea2..661e0af522a9 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.c
+++ b/drivers/infiniband/sw/rxe/rxe_pool.c
@@ -191,16 +191,15 @@ void *rxe_pool_get_index(struct rxe_pool *pool, u32 index)
 {
 	struct rxe_pool_elem *elem;
 	struct xarray *xa = &pool->xa;
-	unsigned long flags;
 	void *obj;

-	xa_lock_irqsave(xa, flags);
+	rcu_read_lock();
 	elem = xa_load(xa, index);
 	if (elem && kref_get_unless_zero(&elem->ref_cnt))
 		obj = elem->obj;
 	else
 		obj = NULL;
-	xa_unlock_irqrestore(xa, flags);
+	rcu_read_unlock();

 	return obj;
 }
@@ -217,16 +216,15 @@ int __rxe_cleanup(struct rxe_pool_elem *elem)
 	struct rxe_pool *pool = elem->pool;
 	struct xarray *xa = &pool->xa;
 	static int timeout = RXE_POOL_TIMEOUT;
-	unsigned long flags;
 	int ret, err = 0;
 	void *xa_ret;

+	WARN_ON(!in_task());
+
 	/* erase xarray entry to prevent looking up
 	 * the pool elem from its index
 	 */
-	xa_lock_irqsave(xa, flags);
-	xa_ret = __xa_erase(xa, elem->index);
-	xa_unlock_irqrestore(xa, flags);
+	xa_ret = xa_erase(xa, elem->index);
 	WARN_ON(xa_err(xa_ret));

 	/* if this is the last call to rxe_put complete the
@@ -251,7 +249,7 @@ int __rxe_cleanup(struct rxe_pool_elem *elem)
 		pool->cleanup(elem);

 	if (pool->flags & RXE_POOL_ALLOC)
-		kfree(elem->obj);
+		kfree_rcu(elem->obj);

 	atomic_dec(&pool->num_elem);

@@ -270,12 +268,8 @@ int __rxe_put(struct rxe_pool_elem *elem)

 void __rxe_finalize(struct rxe_pool_elem *elem)
 {
-	struct xarray *xa = &elem->pool->xa;
-	unsigned long flags;
-	void *ret;
-
-	xa_lock_irqsave(xa, flags);
-	ret = __xa_store(&elem->pool->xa, elem->index, elem, GFP_KERNEL);
-	xa_unlock_irqrestore(xa, flags);
-	WARN_ON(xa_err(ret));
+	void *xa_ret;
+
+	xa_ret = xa_store(&elem->pool->xa, elem->index, elem, GFP_KERNEL);
+	WARN_ON(xa_err(xa_ret));
 }

From patchwork Thu Apr 21 01:40:43 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12820984
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v14 10/10] RDMA/rxe: Cleanup rxe_pool.c
Date: Wed, 20 Apr 2022 20:40:43 -0500
Message-Id: <20220421014042.26985-11-rpearsonhpe@gmail.com>
In-Reply-To: <20220421014042.26985-1-rpearsonhpe@gmail.com>
References: <20220421014042.26985-1-rpearsonhpe@gmail.com>

Minor cleanup of rxe_pool.c. Add kernel-doc comment headers for the
subroutines. Increase alignment for pool elements. Convert some
printk's to WARN_ON()s.

Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_pool.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c
index 661e0af522a9..24bcf5d1f66f 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.c
+++ b/drivers/infiniband/sw/rxe/rxe_pool.c
@@ -256,16 +256,32 @@ int __rxe_cleanup(struct rxe_pool_elem *elem)
 	return err;
 }

+/**
+ * __rxe_get - takes a ref on the object unless ref count is zero
+ * @elem: rxe_pool_elem embedded in object
+ *
+ * Returns: 1 if reference is added else 0
+ */
 int __rxe_get(struct rxe_pool_elem *elem)
 {
 	return kref_get_unless_zero(&elem->ref_cnt);
 }

+/**
+ * __rxe_put - puts a ref on the object
+ * @elem: rxe_pool_elem embedded in object
+ *
+ * Returns: 1 if ref count reaches zero and release called else 0
+ */
 int __rxe_put(struct rxe_pool_elem *elem)
 {
 	return kref_put(&elem->ref_cnt, rxe_elem_release);
 }

+/**
+ * __rxe_finalize - enable looking up object from index
+ * @elem: rxe_pool_elem embedded in object
+ */
 void __rxe_finalize(struct
rxe_pool_elem *elem) { void *xa_ret;
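
The read-side invariant patch 09/10 depends on: once the xarray spinlock is gone, rxe_pool_get_index() can race with teardown, so after finding an elem under rcu_read_lock() it may only use the object if kref_get_unless_zero() succeeds. Below is a minimal userspace sketch of that get-unless-zero loop using C11 atomics in place of the kernel's kref; the struct and function names are illustrative stand-ins, not the kernel implementation.

```c
#include <stdatomic.h>
#include <stddef.h>

/* Toy stand-in for a pool element: a refcount plus the payload. */
struct elem {
	atomic_int ref_cnt;
	void *obj;
};

/*
 * Take a reference only if the count is still nonzero, i.e. the
 * object is not already being torn down.  Returns 1 on success,
 * 0 if the object is dying.  This mirrors the guarantee the patch
 * gets from kref_get_unless_zero(): a dying object is never revived.
 */
static int get_unless_zero(struct elem *e)
{
	int old = atomic_load(&e->ref_cnt);

	while (old != 0) {
		/* on CAS failure, 'old' is reloaded with the current count */
		if (atomic_compare_exchange_weak(&e->ref_cnt, &old, old + 1))
			return 1;
	}
	return 0;
}
```

A reader that gets 0 back treats the lookup as a miss and returns NULL, which is exactly how the patched rxe_pool_get_index() behaves.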
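
Patch 10/10's kernel-doc for __rxe_finalize() states its purpose: an object becomes visible to index lookups only after it is fully initialized. The publish/lookup ordering behind that can be sketched in userspace with a release store and an acquire load over a toy index table; the table and function names below are hypothetical stand-ins for the pool's xarray, not the rxe code.

```c
#include <stdatomic.h>
#include <stddef.h>

#define POOL_SIZE 8

/* toy index table standing in for the pool's xarray */
static _Atomic(void *) table[POOL_SIZE];

/* Publish obj at index: every write to *obj made before this store
 * is visible to any reader that observes the pointer. */
static void finalize(unsigned int index, void *obj)
{
	atomic_store_explicit(&table[index], obj, memory_order_release);
}

/* Look up an index: returns NULL until finalize() has published it,
 * just as rxe_pool_get_index() misses before __rxe_finalize() runs. */
static void *lookup(unsigned int index)
{
	return atomic_load_explicit(&table[index], memory_order_acquire);
}
```

The same two-phase pattern appears in reverse on teardown: __rxe_cleanup() erases the index entry first, so no new lookups can find the elem while its references drain.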