From patchwork Mon Apr 4 21:50:51 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12800877
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v13 01/10] RDMA/rxe: Remove IB_SRQ_INIT_MASK
Date: Mon, 4 Apr 2022 16:50:51 -0500
Message-Id: <20220404215059.39819-2-rpearsonhpe@gmail.com>
In-Reply-To: <20220404215059.39819-1-rpearsonhpe@gmail.com>
References: <20220404215059.39819-1-rpearsonhpe@gmail.com>

Currently the #define IB_SRQ_INIT_MASK is used to distinguish the
rxe_create_srq verb from the rxe_modify_srq verb so that some code can
be shared between these two subroutines. This commit splits
rxe_srq_chk_attr into two subroutines, rxe_srq_chk_init and
rxe_srq_chk_attr, which handle the create_srq and modify_srq verbs
separately.
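[Editorial note] The split described above can be sketched in plain user-space C. This is a toy model, not the kernel code: the struct names, capability limits, and mask bits below are invented for illustration. The point is that the create-time checker needs no mask at all, while the modify-time checker only validates the attributes selected by its mask.

```c
#include <assert.h>

/* Toy stand-ins for the device caps and SRQ attributes (not the rxe types). */
struct dev_caps { int max_srq_wr; int max_srq_sge; };
struct srq_attr { int max_wr; int max_sge; int srq_limit; };

enum { SRQ_LIMIT = 1 << 0, SRQ_MAX_WR = 1 << 1 };

/* create-time check: validates everything unconditionally, no mask needed */
static int srq_chk_init(const struct dev_caps *caps, const struct srq_attr *attr)
{
	if (attr->max_wr <= 0 || attr->max_wr > caps->max_srq_wr)
		return -1;
	if (attr->max_sge > caps->max_srq_sge)
		return -1;
	return 0;
}

/* modify-time check: validates only the attributes selected by @mask */
static int srq_chk_attr(const struct dev_caps *caps,
			const struct srq_attr *attr, int mask)
{
	if ((mask & SRQ_MAX_WR) &&
	    (attr->max_wr <= 0 || attr->max_wr > caps->max_srq_wr))
		return -1;
	if ((mask & SRQ_LIMIT) && attr->srq_limit > caps->max_srq_wr)
		return -1;
	return 0;
}
```

With the two paths separated, the sentinel "init mask" value disappears: a modify call with a bad max_wr fails only when SRQ_MAX_WR is actually selected.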
Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_loc.h   |   9 +-
 drivers/infiniband/sw/rxe/rxe_srq.c   | 118 +++++++++++++++-----------
 drivers/infiniband/sw/rxe/rxe_verbs.c |   4 +-
 3 files changed, 74 insertions(+), 57 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index 2ffbe3390668..ff6cae2c2949 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -159,15 +159,12 @@ void retransmit_timer(struct timer_list *t);
 void rnr_nak_timer(struct timer_list *t);
 
 /* rxe_srq.c */
-#define IB_SRQ_INIT_MASK (~IB_SRQ_LIMIT)
-
-int rxe_srq_chk_attr(struct rxe_dev *rxe, struct rxe_srq *srq,
-		     struct ib_srq_attr *attr, enum ib_srq_attr_mask mask);
-
+int rxe_srq_chk_init(struct rxe_dev *rxe, struct ib_srq_init_attr *init);
 int rxe_srq_from_init(struct rxe_dev *rxe, struct rxe_srq *srq,
 		      struct ib_srq_init_attr *init, struct ib_udata *udata,
 		      struct rxe_create_srq_resp __user *uresp);
-
+int rxe_srq_chk_attr(struct rxe_dev *rxe, struct rxe_srq *srq,
+		     struct ib_srq_attr *attr, enum ib_srq_attr_mask mask);
 int rxe_srq_from_attr(struct rxe_dev *rxe, struct rxe_srq *srq,
 		      struct ib_srq_attr *attr, enum ib_srq_attr_mask mask,
 		      struct rxe_modify_srq_cmd *ucmd, struct ib_udata *udata);
diff --git a/drivers/infiniband/sw/rxe/rxe_srq.c b/drivers/infiniband/sw/rxe/rxe_srq.c
index 0c0721f04357..e2dcfc5d97e3 100644
--- a/drivers/infiniband/sw/rxe/rxe_srq.c
+++ b/drivers/infiniband/sw/rxe/rxe_srq.c
@@ -6,64 +6,34 @@
 #include
 
 #include "rxe.h"
-#include "rxe_loc.h"
 #include "rxe_queue.h"
 
-int rxe_srq_chk_attr(struct rxe_dev *rxe, struct rxe_srq *srq,
-		     struct ib_srq_attr *attr, enum ib_srq_attr_mask mask)
+int rxe_srq_chk_init(struct rxe_dev *rxe, struct ib_srq_init_attr *init)
 {
-	if (srq && srq->error) {
-		pr_warn("srq in error state\n");
+	struct ib_srq_attr *attr = &init->attr;
+
+	if (attr->max_wr > rxe->attr.max_srq_wr) {
+		pr_warn("max_wr(%d) > max_srq_wr(%d)\n",
+			attr->max_wr, rxe->attr.max_srq_wr);
 		goto err1;
 	}
 
-	if (mask & IB_SRQ_MAX_WR) {
-		if (attr->max_wr > rxe->attr.max_srq_wr) {
-			pr_warn("max_wr(%d) > max_srq_wr(%d)\n",
-				attr->max_wr, rxe->attr.max_srq_wr);
-			goto err1;
-		}
-
-		if (attr->max_wr <= 0) {
-			pr_warn("max_wr(%d) <= 0\n", attr->max_wr);
-			goto err1;
-		}
-
-		if (srq && srq->limit && (attr->max_wr < srq->limit)) {
-			pr_warn("max_wr (%d) < srq->limit (%d)\n",
-				attr->max_wr, srq->limit);
-			goto err1;
-		}
-
-		if (attr->max_wr < RXE_MIN_SRQ_WR)
-			attr->max_wr = RXE_MIN_SRQ_WR;
+	if (attr->max_wr <= 0) {
+		pr_warn("max_wr(%d) <= 0\n", attr->max_wr);
+		goto err1;
 	}
 
-	if (mask & IB_SRQ_LIMIT) {
-		if (attr->srq_limit > rxe->attr.max_srq_wr) {
-			pr_warn("srq_limit(%d) > max_srq_wr(%d)\n",
-				attr->srq_limit, rxe->attr.max_srq_wr);
-			goto err1;
-		}
+	if (attr->max_wr < RXE_MIN_SRQ_WR)
+		attr->max_wr = RXE_MIN_SRQ_WR;
 
-		if (srq && (attr->srq_limit > srq->rq.queue->buf->index_mask)) {
-			pr_warn("srq_limit (%d) > cur limit(%d)\n",
-				attr->srq_limit,
-				srq->rq.queue->buf->index_mask);
-			goto err1;
-		}
+	if (attr->max_sge > rxe->attr.max_srq_sge) {
+		pr_warn("max_sge(%d) > max_srq_sge(%d)\n",
+			attr->max_sge, rxe->attr.max_srq_sge);
+		goto err1;
 	}
 
-	if (mask == IB_SRQ_INIT_MASK) {
-		if (attr->max_sge > rxe->attr.max_srq_sge) {
-			pr_warn("max_sge(%d) > max_srq_sge(%d)\n",
-				attr->max_sge, rxe->attr.max_srq_sge);
-			goto err1;
-		}
-
-		if (attr->max_sge < RXE_MIN_SRQ_SGE)
-			attr->max_sge = RXE_MIN_SRQ_SGE;
-	}
+	if (attr->max_sge < RXE_MIN_SRQ_SGE)
+		attr->max_sge = RXE_MIN_SRQ_SGE;
 
 	return 0;
 
@@ -93,8 +63,7 @@ int rxe_srq_from_init(struct rxe_dev *rxe, struct rxe_srq *srq,
 	spin_lock_init(&srq->rq.consumer_lock);
 
 	type = QUEUE_TYPE_FROM_CLIENT;
-	q = rxe_queue_init(rxe, &srq->rq.max_wr,
-			   srq_wqe_size, type);
+	q = rxe_queue_init(rxe, &srq->rq.max_wr, srq_wqe_size, type);
 	if (!q) {
 		pr_warn("unable to allocate queue for srq\n");
 		return -ENOMEM;
@@ -121,6 +90,57 @@ int rxe_srq_from_init(struct rxe_dev *rxe, struct rxe_srq *srq,
 	return 0;
 }
 
+int rxe_srq_chk_attr(struct rxe_dev *rxe, struct rxe_srq *srq,
+		     struct ib_srq_attr *attr, enum ib_srq_attr_mask mask)
+{
+	if (srq->error) {
+		pr_warn("srq in error state\n");
+		goto err1;
+	}
+
+	if (mask & IB_SRQ_MAX_WR) {
+		if (attr->max_wr > rxe->attr.max_srq_wr) {
+			pr_warn("max_wr(%d) > max_srq_wr(%d)\n",
+				attr->max_wr, rxe->attr.max_srq_wr);
+			goto err1;
+		}
+
+		if (attr->max_wr <= 0) {
+			pr_warn("max_wr(%d) <= 0\n", attr->max_wr);
+			goto err1;
+		}
+
+		if (srq->limit && (attr->max_wr < srq->limit)) {
+			pr_warn("max_wr (%d) < srq->limit (%d)\n",
+				attr->max_wr, srq->limit);
+			goto err1;
+		}
+
+		if (attr->max_wr < RXE_MIN_SRQ_WR)
+			attr->max_wr = RXE_MIN_SRQ_WR;
+	}
+
+	if (mask & IB_SRQ_LIMIT) {
+		if (attr->srq_limit > rxe->attr.max_srq_wr) {
+			pr_warn("srq_limit(%d) > max_srq_wr(%d)\n",
+				attr->srq_limit, rxe->attr.max_srq_wr);
+			goto err1;
+		}
+
+		if (attr->srq_limit > srq->rq.queue->buf->index_mask) {
+			pr_warn("srq_limit (%d) > cur limit(%d)\n",
+				attr->srq_limit,
+				srq->rq.queue->buf->index_mask);
+			goto err1;
+		}
+	}
+
+	return 0;
+
+err1:
+	return -EINVAL;
+}
+
 int rxe_srq_from_attr(struct rxe_dev *rxe, struct rxe_srq *srq,
 		      struct ib_srq_attr *attr, enum ib_srq_attr_mask mask,
 		      struct rxe_modify_srq_cmd *ucmd, struct ib_udata *udata)
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
index 67184b0281a0..c1d543e24281 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
@@ -7,8 +7,8 @@
 #include
 #include
 #include
+
 #include "rxe.h"
-#include "rxe_loc.h"
 #include "rxe_queue.h"
 #include "rxe_hw_counters.h"
 
@@ -295,7 +295,7 @@ static int rxe_create_srq(struct ib_srq *ibsrq, struct ib_srq_init_attr *init,
 		uresp = udata->outbuf;
 	}
 
-	err = rxe_srq_chk_attr(rxe, NULL, &init->attr, IB_SRQ_INIT_MASK);
+	err = rxe_srq_chk_init(rxe, init);
 	if (err)
 		goto err1;

From patchwork Mon Apr 4 21:50:52 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12800879
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v13 02/10] RDMA/rxe: Add rxe_srq_cleanup()
Date: Mon, 4 Apr 2022 16:50:52 -0500
Message-Id: <20220404215059.39819-3-rpearsonhpe@gmail.com>
In-Reply-To: <20220404215059.39819-1-rpearsonhpe@gmail.com>
References: <20220404215059.39819-1-rpearsonhpe@gmail.com>

Move the cleanup code from rxe_destroy_srq() to rxe_srq_cleanup(), which
is called after all references are dropped, to allow code depending on
the srq object to complete.
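[Editorial note] The pattern this patch moves toward — a per-type cleanup callback that runs only once the element's last reference is dropped — can be sketched with a toy user-space model. The names and single-threaded refcounting below are simplified stand-ins, not the rxe pool API:

```c
#include <assert.h>
#include <stddef.h>

/* Toy pool element: a refcount plus a cleanup callback (illustrative only). */
struct pool_elem {
	int refcount;
	void (*cleanup)(struct pool_elem *elem);
};

/* Drop one reference; run cleanup only when the count reaches zero. */
static void obj_put(struct pool_elem *elem)
{
	if (--elem->refcount == 0 && elem->cleanup)
		elem->cleanup(elem);	/* safe: no users remain */
}

static int cleaned_up;

static void srq_cleanup(struct pool_elem *elem)
{
	(void)elem;
	cleaned_up = 1;	/* stands in for releasing pd and queue */
}
```

The destroy verb then only drops its reference; whoever drops the last one triggers the teardown, so in-flight users holding references never race with it.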
Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_loc.h   |  7 ++++---
 drivers/infiniband/sw/rxe/rxe_pool.c  |  1 +
 drivers/infiniband/sw/rxe/rxe_srq.c   | 11 +++++++++++
 drivers/infiniband/sw/rxe/rxe_verbs.c | 27 +++++++++++----------------
 4 files changed, 27 insertions(+), 19 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index ff6cae2c2949..18f3c5dac381 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -37,7 +37,7 @@ int rxe_cq_post(struct rxe_cq *cq, struct rxe_cqe *cqe, int solicited);
 
 void rxe_cq_disable(struct rxe_cq *cq);
 
-void rxe_cq_cleanup(struct rxe_pool_elem *arg);
+void rxe_cq_cleanup(struct rxe_pool_elem *elem);
 
 /* rxe_mcast.c */
 struct rxe_mcg *rxe_lookup_mcg(struct rxe_dev *rxe, union ib_gid *mgid);
@@ -81,7 +81,7 @@ int rxe_invalidate_mr(struct rxe_qp *qp, u32 rkey);
 int rxe_reg_fast_mr(struct rxe_qp *qp, struct rxe_send_wqe *wqe);
 int rxe_mr_set_page(struct ib_mr *ibmr, u64 addr);
 int rxe_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata);
-void rxe_mr_cleanup(struct rxe_pool_elem *arg);
+void rxe_mr_cleanup(struct rxe_pool_elem *elem);
 
 /* rxe_mw.c */
 int rxe_alloc_mw(struct ib_mw *ibmw, struct ib_udata *udata);
@@ -89,7 +89,7 @@ int rxe_dealloc_mw(struct ib_mw *ibmw);
 int rxe_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe);
 int rxe_invalidate_mw(struct rxe_qp *qp, u32 rkey);
 struct rxe_mw *rxe_lookup_mw(struct rxe_qp *qp, int access, u32 rkey);
-void rxe_mw_cleanup(struct rxe_pool_elem *arg);
+void rxe_mw_cleanup(struct rxe_pool_elem *elem);
 
 /* rxe_net.c */
 struct sk_buff *rxe_init_packet(struct rxe_dev *rxe, struct rxe_av *av,
@@ -168,6 +168,7 @@ int rxe_srq_chk_attr(struct rxe_dev *rxe, struct rxe_srq *srq,
 int rxe_srq_from_attr(struct rxe_dev *rxe, struct rxe_srq *srq,
 		      struct ib_srq_attr *attr, enum ib_srq_attr_mask mask,
 		      struct rxe_modify_srq_cmd *ucmd, struct ib_udata *udata);
+void rxe_srq_cleanup(struct rxe_pool_elem *elem);
 
 void rxe_dealloc(struct ib_device *ib_dev);
diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c
index 87066d04ed18..5963b1429ad8 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.c
+++ b/drivers/infiniband/sw/rxe/rxe_pool.c
@@ -46,6 +46,7 @@ static const struct rxe_type_info {
 		.name = "srq",
 		.size = sizeof(struct rxe_srq),
 		.elem_offset = offsetof(struct rxe_srq, elem),
+		.cleanup = rxe_srq_cleanup,
 		.min_index = RXE_MIN_SRQ_INDEX,
 		.max_index = RXE_MAX_SRQ_INDEX,
 		.max_elem = RXE_MAX_SRQ_INDEX - RXE_MIN_SRQ_INDEX + 1,
diff --git a/drivers/infiniband/sw/rxe/rxe_srq.c b/drivers/infiniband/sw/rxe/rxe_srq.c
index e2dcfc5d97e3..02b39498c370 100644
--- a/drivers/infiniband/sw/rxe/rxe_srq.c
+++ b/drivers/infiniband/sw/rxe/rxe_srq.c
@@ -174,3 +174,14 @@ int rxe_srq_from_attr(struct rxe_dev *rxe, struct rxe_srq *srq,
 	srq->rq.queue = NULL;
 	return err;
 }
+
+void rxe_srq_cleanup(struct rxe_pool_elem *elem)
+{
+	struct rxe_srq *srq = container_of(elem, typeof(*srq), elem);
+
+	if (srq->pd)
+		rxe_put(srq->pd);
+
+	if (srq->rq.queue)
+		rxe_queue_cleanup(srq->rq.queue);
+}
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
index c1d543e24281..83271bea83b1 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
@@ -286,36 +286,35 @@ static int rxe_create_srq(struct ib_srq *ibsrq, struct ib_srq_init_attr *init,
 	struct rxe_srq *srq = to_rsrq(ibsrq);
 	struct rxe_create_srq_resp __user *uresp = NULL;
 
-	if (init->srq_type != IB_SRQT_BASIC)
-		return -EOPNOTSUPP;
-
 	if (udata) {
 		if (udata->outlen < sizeof(*uresp))
 			return -EINVAL;
 		uresp = udata->outbuf;
 	}
 
+	if (init->srq_type != IB_SRQT_BASIC)
+		return -EOPNOTSUPP;
+
 	err = rxe_srq_chk_init(rxe, init);
 	if (err)
-		goto err1;
+		goto err_out;
 
 	err = rxe_add_to_pool(&rxe->srq_pool, srq);
 	if (err)
-		goto err1;
+		goto err_out;
 
 	rxe_get(pd);
 	srq->pd = pd;
 
 	err = rxe_srq_from_init(rxe, srq, init, udata, uresp);
 	if (err)
-		goto err2;
+		goto err_cleanup;
 
 	return 0;
 
-err2:
-	rxe_put(pd);
+err_cleanup:
 	rxe_put(srq);
-err1:
+err_out:
 	return err;
 }
 
@@ -339,15 +338,15 @@ static int rxe_modify_srq(struct ib_srq *ibsrq, struct ib_srq_attr *attr,
 
 	err = rxe_srq_chk_attr(rxe, srq, attr, mask);
 	if (err)
-		goto err1;
+		goto err_out;
 
 	err = rxe_srq_from_attr(rxe, srq, attr, mask, &ucmd, udata);
 	if (err)
-		goto err1;
+		goto err_out;
 
 	return 0;
 
-err1:
+err_out:
 	return err;
 }
 
@@ -368,10 +367,6 @@ static int rxe_destroy_srq(struct ib_srq *ibsrq, struct ib_udata *udata)
 {
 	struct rxe_srq *srq = to_rsrq(ibsrq);
 
-	if (srq->rq.queue)
-		rxe_queue_cleanup(srq->rq.queue);
-
-	rxe_put(srq->pd);
 	rxe_put(srq);
 
 	return 0;
 }

From patchwork Mon Apr 4 21:50:53 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12800878
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v13 03/10] RDMA/rxe: Check rxe_get() return value
Date: Mon, 4 Apr 2022 16:50:53 -0500
Message-Id: <20220404215059.39819-4-rpearsonhpe@gmail.com>
In-Reply-To: <20220404215059.39819-1-rpearsonhpe@gmail.com>
References: <20220404215059.39819-1-rpearsonhpe@gmail.com>

In the tasklets (completer, responder, and requester), check the return
value from rxe_get() to detect failures to take a reference. Taking a
reference fails only if the qp's reference count has already dropped to
zero, which indicates that the qp should no longer be used. This is in
preparation for an upcoming change that will move the qp cleanup code to
rxe_qp_cleanup().
Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_comp.c | 3 ++-
 drivers/infiniband/sw/rxe/rxe_req.c  | 3 ++-
 drivers/infiniband/sw/rxe/rxe_resp.c | 3 ++-
 3 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_comp.c b/drivers/infiniband/sw/rxe/rxe_comp.c
index 138b3e7d3a5f..da3a398053b8 100644
--- a/drivers/infiniband/sw/rxe/rxe_comp.c
+++ b/drivers/infiniband/sw/rxe/rxe_comp.c
@@ -562,7 +562,8 @@ int rxe_completer(void *arg)
 	enum comp_state state;
 	int ret = 0;
 
-	rxe_get(qp);
+	if (!rxe_get(qp))
+		return -EAGAIN;
 
 	if (!qp->valid || qp->req.state == QP_STATE_ERROR ||
 	    qp->req.state == QP_STATE_RESET) {
diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
index ae5fbc79dd5c..27aba921cc66 100644
--- a/drivers/infiniband/sw/rxe/rxe_req.c
+++ b/drivers/infiniband/sw/rxe/rxe_req.c
@@ -611,7 +611,8 @@ int rxe_requester(void *arg)
 	struct rxe_ah *ah;
 	struct rxe_av *av;
 
-	rxe_get(qp);
+	if (!rxe_get(qp))
+		return -EAGAIN;
 
 next_wqe:
 	if (unlikely(!qp->valid || qp->req.state == QP_STATE_ERROR))
diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c
index 16fc7ea1298d..1ed45c192cf5 100644
--- a/drivers/infiniband/sw/rxe/rxe_resp.c
+++ b/drivers/infiniband/sw/rxe/rxe_resp.c
@@ -1250,7 +1250,8 @@ int rxe_responder(void *arg)
 	struct rxe_pkt_info *pkt = NULL;
 	int ret = 0;
 
-	rxe_get(qp);
+	if (!rxe_get(qp))
+		return -EAGAIN;
 
 	qp->resp.aeth_syndrome = AETH_ACK_UNLIMITED;

From patchwork Mon Apr 4 21:50:54 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12800880
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v13 04/10] RDMA/rxe: Move qp cleanup code to rxe_qp_do_cleanup()
Date: Mon, 4 Apr 2022 16:50:54 -0500
Message-Id: <20220404215059.39819-5-rpearsonhpe@gmail.com>
In-Reply-To: <20220404215059.39819-1-rpearsonhpe@gmail.com>
References: <20220404215059.39819-1-rpearsonhpe@gmail.com>

Move the code from rxe_qp_destroy() to rxe_qp_do_cleanup(). This allows
flows holding references to the qp to complete before the qp object is
torn down.
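[Editorial note] The cleanup handler the code moves into receives only a work-item pointer and recovers the enclosing qp with container_of. A toy single-threaded model of that shape — `work_struct`, the `container_of` macro, and the qp fields here are simplified stand-ins for the kernel versions:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified work item: just a function pointer (no workqueue). */
struct work_struct {
	void (*func)(struct work_struct *work);
};

/* Classic container_of: step back from a member to its enclosing struct. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct qp {
	int valid;
	struct work_struct cleanup_work;	/* embedded work item */
};

/* The handler only sees the work pointer; recover the qp from it. */
static void qp_do_cleanup(struct work_struct *work)
{
	struct qp *qp = container_of(work, struct qp, cleanup_work);

	qp->valid = 0;	/* teardown now happens here, not in the destroy verb */
}
```

In the kernel the work item is queued when the last reference drops; here we simply invoke the function pointer directly to show the container_of round trip.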
Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_loc.h   |  1 -
 drivers/infiniband/sw/rxe/rxe_qp.c    | 12 ++++--------
 drivers/infiniband/sw/rxe/rxe_verbs.c |  1 -
 3 files changed, 4 insertions(+), 10 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index 18f3c5dac381..0e022ae1b8a5 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -114,7 +114,6 @@ int rxe_qp_from_attr(struct rxe_qp *qp, struct ib_qp_attr *attr,
 int rxe_qp_to_attr(struct rxe_qp *qp, struct ib_qp_attr *attr, int mask);
 void rxe_qp_error(struct rxe_qp *qp);
 int rxe_qp_chk_destroy(struct rxe_qp *qp);
-void rxe_qp_destroy(struct rxe_qp *qp);
 void rxe_qp_cleanup(struct rxe_pool_elem *elem);
 
 static inline int qp_num(struct rxe_qp *qp)
diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c
index 62acf890af6c..f5200777399c 100644
--- a/drivers/infiniband/sw/rxe/rxe_qp.c
+++ b/drivers/infiniband/sw/rxe/rxe_qp.c
@@ -777,9 +777,11 @@ int rxe_qp_chk_destroy(struct rxe_qp *qp)
 	return 0;
 }
 
-/* called by the destroy qp verb */
-void rxe_qp_destroy(struct rxe_qp *qp)
+/* called when the last reference to the qp is dropped */
+static void rxe_qp_do_cleanup(struct work_struct *work)
 {
+	struct rxe_qp *qp = container_of(work, typeof(*qp), cleanup_work.work);
+
 	qp->valid = 0;
 	qp->qp_timeout_jiffies = 0;
 	rxe_cleanup_task(&qp->resp.task);
@@ -798,12 +800,6 @@ void rxe_qp_destroy(struct rxe_qp *qp)
 		__rxe_do_task(&qp->comp.task);
 		__rxe_do_task(&qp->req.task);
 	}
-}
-
-/* called when the last reference to the qp is dropped */
-static void rxe_qp_do_cleanup(struct work_struct *work)
-{
-	struct rxe_qp *qp = container_of(work, typeof(*qp), cleanup_work.work);
 
 	if (qp->sq.queue)
 		rxe_queue_cleanup(qp->sq.queue);
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
index 83271bea83b1..6738f1b4a543 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
@@ -490,7 +490,6 @@ static int rxe_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata)
 	if (ret)
 		return ret;
 
-	rxe_qp_destroy(qp);
 	rxe_put(qp);
 
 	return 0;
 }

From patchwork Mon Apr 4 21:50:55 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12800881
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v13 05/10] RDMA/rxe: Move mr cleanup code to rxe_mr_cleanup()
Date: Mon, 4 Apr 2022 16:50:55 -0500
Message-Id: <20220404215059.39819-6-rpearsonhpe@gmail.com>
In-Reply-To: <20220404215059.39819-1-rpearsonhpe@gmail.com>
References: <20220404215059.39819-1-rpearsonhpe@gmail.com>

Move the code that tears down an mr to rxe_mr_cleanup() to allow
operations holding a reference to the mr to complete.
Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_mr.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
index 60a31b718774..fc3942e04a1f 100644
--- a/drivers/infiniband/sw/rxe/rxe_mr.c
+++ b/drivers/infiniband/sw/rxe/rxe_mr.c
@@ -683,14 +683,10 @@ int rxe_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata)
 {
 	struct rxe_mr *mr = to_rmr(ibmr);

-	if (atomic_read(&mr->num_mw) > 0) {
-		pr_warn("%s: Attempt to deregister an MR while bound to MWs\n",
-			__func__);
+	/* See IBA 10.6.7.2.6 */
+	if (atomic_read(&mr->num_mw) > 0)
 		return -EINVAL;
-	}

-	mr->state = RXE_MR_STATE_INVALID;
-	rxe_put(mr_pd(mr));
 	rxe_put(mr);

 	return 0;
@@ -700,6 +696,8 @@ void rxe_mr_cleanup(struct rxe_pool_elem *elem)
 {
 	struct rxe_mr *mr = container_of(elem, typeof(*mr), elem);

+	rxe_put(mr_pd(mr));
+
 	ib_umem_release(mr->umem);

 	if (mr->cur_map_set)

From patchwork Mon Apr 4 21:50:56 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12800883
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Subject: [PATCH for-next v13 06/10] RDMA/rxe: Move mw cleanup code to rxe_mw_cleanup()
Date: Mon, 4 Apr 2022 16:50:56 -0500
Message-Id: <20220404215059.39819-7-rpearsonhpe@gmail.com>

Move code from rxe_dealloc_mw() to rxe_mw_cleanup() to allow flows which hold a reference to the mw to complete.

Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_mw.c   | 57 ++++++++++++++--------------
 drivers/infiniband/sw/rxe/rxe_pool.c |  1 +
 2 files changed, 29 insertions(+), 29 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_mw.c b/drivers/infiniband/sw/rxe/rxe_mw.c
index c86b2efd58f2..ba3f94c69171 100644
--- a/drivers/infiniband/sw/rxe/rxe_mw.c
+++ b/drivers/infiniband/sw/rxe/rxe_mw.c
@@ -28,40 +28,11 @@ int rxe_alloc_mw(struct ib_mw *ibmw, struct ib_udata *udata)
 	return 0;
 }

-static void rxe_do_dealloc_mw(struct rxe_mw *mw)
-{
-	if (mw->mr) {
-		struct rxe_mr *mr = mw->mr;
-
-		mw->mr = NULL;
-		atomic_dec(&mr->num_mw);
-		rxe_put(mr);
-	}
-
-	if (mw->qp) {
-		struct rxe_qp *qp = mw->qp;
-
-		mw->qp = NULL;
-		rxe_put(qp);
-	}
-
-	mw->access = 0;
-	mw->addr = 0;
-	mw->length = 0;
-	mw->state = RXE_MW_STATE_INVALID;
-}
-
 int rxe_dealloc_mw(struct ib_mw *ibmw)
 {
 	struct rxe_mw *mw = to_rmw(ibmw);
-	struct rxe_pd *pd = to_rpd(ibmw->pd);
-
-	spin_lock_bh(&mw->lock);
-	rxe_do_dealloc_mw(mw);
-	spin_unlock_bh(&mw->lock);

 	rxe_put(mw);
-	rxe_put(pd);

 	return 0;
 }
@@ -328,3 +299,31 @@ struct rxe_mw *rxe_lookup_mw(struct rxe_qp *qp, int access, u32 rkey)
 	return mw;
 }
+
+void rxe_mw_cleanup(struct rxe_pool_elem *elem)
+{
+	struct rxe_mw *mw = container_of(elem, typeof(*mw), elem);
+	struct rxe_pd *pd = to_rpd(mw->ibmw.pd);
+
+	rxe_put(pd);
+
+	if (mw->mr) {
+		struct rxe_mr *mr = mw->mr;
+
+		mw->mr = NULL;
+		atomic_dec(&mr->num_mw);
+		rxe_put(mr);
+	}
+
+	if (mw->qp) {
+		struct rxe_qp *qp = mw->qp;
+
+		mw->qp = NULL;
+		rxe_put(qp);
+	}
+
+	mw->access = 0;
+	mw->addr = 0;
+	mw->length = 0;
+	mw->state = RXE_MW_STATE_INVALID;
+}
diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c
index 5963b1429ad8..0fdde3d46949 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.c
+++ b/drivers/infiniband/sw/rxe/rxe_pool.c
@@ -83,6 +83,7 @@ static const struct rxe_type_info {
 		.name = "mw",
 		.size = sizeof(struct rxe_mw),
 		.elem_offset = offsetof(struct rxe_mw, elem),
+		.cleanup = rxe_mw_cleanup,
 		.min_index = RXE_MIN_MW_INDEX,
 		.max_index = RXE_MAX_MW_INDEX,
 		.max_elem = RXE_MAX_MW_INDEX - RXE_MIN_MW_INDEX + 1,
From patchwork Mon Apr 4 21:50:57 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12800882
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Subject: [PATCH for-next v13 07/10] RDMA/rxe: Enforce IBA C11-17
Date: Mon, 4 Apr 2022 16:50:57 -0500
Message-Id: <20220404215059.39819-8-rpearsonhpe@gmail.com>

Add a counter to keep track of the number of WQs connected to a CQ and return an error if destroy_cq() is called while the counter is non-zero.

Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_qp.c    | 10 ++++++++++
 drivers/infiniband/sw/rxe/rxe_verbs.c |  6 ++++++
 drivers/infiniband/sw/rxe/rxe_verbs.h |  1 +
 3 files changed, 17 insertions(+)

diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c
index f5200777399c..18861b9edbfd 100644
--- a/drivers/infiniband/sw/rxe/rxe_qp.c
+++ b/drivers/infiniband/sw/rxe/rxe_qp.c
@@ -334,6 +334,9 @@ int rxe_qp_from_init(struct rxe_dev *rxe, struct rxe_qp *qp, struct rxe_pd *pd,
 	qp->scq = scq;
 	qp->srq = srq;

+	atomic_inc(&rcq->num_wq);
+	atomic_inc(&scq->num_wq);
+
 	rxe_qp_init_misc(rxe, qp, init);

 	err = rxe_qp_init_req(rxe, qp, init, udata, uresp);
@@ -353,6 +356,9 @@ int rxe_qp_from_init(struct rxe_dev *rxe, struct rxe_qp *qp, struct rxe_pd *pd,
 	rxe_queue_cleanup(qp->sq.queue);
 	qp->sq.queue = NULL;
 err1:
+	atomic_dec(&rcq->num_wq);
+	atomic_dec(&scq->num_wq);
+
 	qp->pd = NULL;
 	qp->rcq = NULL;
 	qp->scq = NULL;
@@ -810,10 +816,14 @@ static void rxe_qp_do_cleanup(struct work_struct *work)
 	if (qp->rq.queue)
 		rxe_queue_cleanup(qp->rq.queue);

+	atomic_dec(&qp->scq->num_wq);
 	if (qp->scq)
 		rxe_put(qp->scq);
+
+	atomic_dec(&qp->rcq->num_wq);
 	if (qp->rcq)
 		rxe_put(qp->rcq);
+
 	if (qp->pd)
 		rxe_put(qp->pd);
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
index 6738f1b4a543..4adcb93af9b1 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
@@ -801,6 +801,12 @@ static int rxe_destroy_cq(struct ib_cq *ibcq, struct ib_udata *udata)
 {
 	struct rxe_cq *cq = to_rcq(ibcq);

+	/* See IBA C11-17: The CI shall return an error if this Verb is
+	 * invoked while a Work Queue is still associated with the CQ.
+	 */
+	if (atomic_read(&cq->num_wq))
+		return -EINVAL;
+
 	rxe_cq_disable(cq);

 	rxe_put(cq);
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h
index e7eff1ca75e9..bb15a74f624e 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.h
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.h
@@ -67,6 +67,7 @@ struct rxe_cq {
 	bool is_dying;
 	bool is_user;
 	struct tasklet_struct comp_task;
+	atomic_t num_wq;
 };

 enum wqe_state {
From patchwork Mon Apr 4 21:50:58 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12800884
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Subject: [PATCH for-next v13 08/10] RDMA/rxe: Stop lookup of partially built objects
Date: Mon, 4 Apr 2022 16:50:58 -0500
Message-Id: <20220404215059.39819-9-rpearsonhpe@gmail.com>

Currently the rdma_rxe driver has a security weakness: it gives indices to objects that are only partially initialized, which allows external actors to reach those objects by sending packets that refer to their index (e.g. qpn, rkey, etc.) with unpredictable results.

This patch adds a new API, rxe_finalize(obj), which makes AH, QP, MR, and MW pool objects visible to rxe_pool_get_index() lookups. It is called in the create verbs only after the objects are fully initialized.

It also adds a wait-for-completion to the destroy/dealloc verbs to ensure that all references have been dropped before returning to rdma_core. This is implemented by a new rxe_pool API, rxe_cleanup(), which drops a reference to the object and then waits for all other references to be dropped. When the last reference is dropped the object is completed by kref. After that the object is cleaned up and, if it was locally allocated, its memory is freed.

Combined with deferring cleanup code to the type-specific cleanup routines, this allows all pending activity referring to objects to complete before returning to rdma_core.
Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_mr.c    |  2 +-
 drivers/infiniband/sw/rxe/rxe_mw.c    |  4 +-
 drivers/infiniband/sw/rxe/rxe_pool.c  | 61 +++++++++++++++++++++++--
 drivers/infiniband/sw/rxe/rxe_pool.h  | 11 +++--
 drivers/infiniband/sw/rxe/rxe_verbs.c | 30 ++++++++-----
 5 files changed, 89 insertions(+), 19 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
index fc3942e04a1f..9a5c2af6a56f 100644
--- a/drivers/infiniband/sw/rxe/rxe_mr.c
+++ b/drivers/infiniband/sw/rxe/rxe_mr.c
@@ -687,7 +687,7 @@ int rxe_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata)
 	if (atomic_read(&mr->num_mw) > 0)
 		return -EINVAL;

-	rxe_put(mr);
+	rxe_cleanup(mr);

 	return 0;
 }
diff --git a/drivers/infiniband/sw/rxe/rxe_mw.c b/drivers/infiniband/sw/rxe/rxe_mw.c
index ba3f94c69171..ebd95b6453da 100644
--- a/drivers/infiniband/sw/rxe/rxe_mw.c
+++ b/drivers/infiniband/sw/rxe/rxe_mw.c
@@ -25,6 +25,8 @@ int rxe_alloc_mw(struct ib_mw *ibmw, struct ib_udata *udata)
 			RXE_MW_STATE_FREE : RXE_MW_STATE_VALID;
 	spin_lock_init(&mw->lock);

+	rxe_finalize(mw);
+
 	return 0;
 }
@@ -32,7 +34,7 @@ int rxe_dealloc_mw(struct ib_mw *ibmw)
 {
 	struct rxe_mw *mw = to_rmw(ibmw);

-	rxe_put(mw);
+	rxe_cleanup(mw);

 	return 0;
 }
diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c
index 0fdde3d46949..38f435762238 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.c
+++ b/drivers/infiniband/sw/rxe/rxe_pool.c
@@ -6,6 +6,8 @@

 #include "rxe.h"

+#define RXE_POOL_TIMEOUT (200)
+#define RXE_POOL_MAX_TIMEOUTS (3)
 #define RXE_POOL_ALIGN (16)

 static const struct rxe_type_info {
@@ -139,8 +141,11 @@ void *rxe_alloc(struct rxe_pool *pool)
 	elem->pool = pool;
 	elem->obj = obj;
 	kref_init(&elem->ref_cnt);
+	init_completion(&elem->complete);

-	err = xa_alloc_cyclic(&pool->xa, &elem->index, elem, pool->limit,
+	/* allocate index in array but leave pointer as NULL so it
+	 * can't be looked up until rxe_finalize() is called */
+	err = xa_alloc_cyclic(&pool->xa, &elem->index, NULL, pool->limit,
 			      &pool->next, GFP_KERNEL);
 	if (err)
 		goto err_free;
@@ -167,8 +172,9 @@ int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_elem *elem)
 	elem->pool = pool;
 	elem->obj = (u8 *)elem - pool->elem_offset;
 	kref_init(&elem->ref_cnt);
+	init_completion(&elem->complete);

-	err = xa_alloc_cyclic(&pool->xa, &elem->index, elem, pool->limit,
+	err = xa_alloc_cyclic(&pool->xa, &elem->index, NULL, pool->limit,
 			      &pool->next, GFP_KERNEL);
 	if (err)
 		goto err_cnt;
@@ -201,9 +207,44 @@ void *rxe_pool_get_index(struct rxe_pool *pool, u32 index)
 static void rxe_elem_release(struct kref *kref)
 {
 	struct rxe_pool_elem *elem = container_of(kref, typeof(*elem), ref_cnt);
+
+	complete(&elem->complete);
+}
+
+int __rxe_cleanup(struct rxe_pool_elem *elem)
+{
 	struct rxe_pool *pool = elem->pool;
+	struct xarray *xa = &pool->xa;
+	static int timeout = RXE_POOL_TIMEOUT;
+	unsigned long flags;
+	int ret, err = 0;
+	void *xa_ret;

-	xa_erase(&pool->xa, elem->index);
+	/* erase xarray entry to prevent looking up
+	 * the pool elem from its index
+	 */
+	xa_lock_irqsave(xa, flags);
+	xa_ret = __xa_erase(xa, elem->index);
+	xa_unlock_irqrestore(xa, flags);
+	WARN_ON(xa_err(xa_ret));
+
+	/* if this is the last call to rxe_put complete the
+	 * object. It is safe to touch elem after this since
+	 * it is freed below
+	 */
+	__rxe_put(elem);
+
+	if (timeout) {
+		ret = wait_for_completion_timeout(&elem->complete, timeout);
+		if (!ret) {
+			pr_warn("Timed out waiting for %s#%d to complete\n",
+				pool->name, elem->index);
+			if (++pool->timeouts >= RXE_POOL_MAX_TIMEOUTS)
+				timeout = 0;
+
+			err = -EINVAL;
+		}
+	}

 	if (pool->cleanup)
 		pool->cleanup(elem);
@@ -212,6 +253,8 @@ static void rxe_elem_release(struct kref *kref)
 		kfree(elem->obj);

 	atomic_dec(&pool->num_elem);
+
+	return err;
 }

 int __rxe_get(struct rxe_pool_elem *elem)
@@ -223,3 +266,15 @@ int __rxe_put(struct rxe_pool_elem *elem)
 {
 	return kref_put(&elem->ref_cnt, rxe_elem_release);
 }
+
+void __rxe_finalize(struct rxe_pool_elem *elem)
+{
+	struct xarray *xa = &elem->pool->xa;
+	unsigned long flags;
+	void *ret;
+
+	xa_lock_irqsave(xa, flags);
+	ret = __xa_store(&elem->pool->xa, elem->index, elem, GFP_KERNEL);
+	xa_unlock_irqrestore(xa, flags);
+	WARN_ON(xa_err(ret));
+}
diff --git a/drivers/infiniband/sw/rxe/rxe_pool.h b/drivers/infiniband/sw/rxe/rxe_pool.h
index 24bcc786c1b3..83f96b2d5096 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.h
+++ b/drivers/infiniband/sw/rxe/rxe_pool.h
@@ -28,6 +28,7 @@ struct rxe_pool_elem {
 	void *obj;
 	struct kref ref_cnt;
 	struct list_head list;
+	struct completion complete;
 	u32 index;
 };

@@ -37,6 +38,7 @@ struct rxe_pool {
 	void (*cleanup)(struct rxe_pool_elem *elem);
 	enum rxe_pool_flags flags;
 	enum rxe_elem_type type;
+	unsigned int timeouts;

 	unsigned int max_elem;
 	atomic_t num_elem;
@@ -63,20 +65,23 @@ void *rxe_alloc(struct rxe_pool *pool);

 /* connect already allocated object to pool */
 int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_elem *elem);
-
 #define rxe_add_to_pool(pool, obj) __rxe_add_to_pool(pool, &(obj)->elem)

 /* lookup an indexed object from index. takes a reference on object */
 void *rxe_pool_get_index(struct rxe_pool *pool, u32 index);

 int __rxe_get(struct rxe_pool_elem *elem);
-
 #define rxe_get(obj) __rxe_get(&(obj)->elem)

 int __rxe_put(struct rxe_pool_elem *elem);
-
 #define rxe_put(obj) __rxe_put(&(obj)->elem)

+int __rxe_cleanup(struct rxe_pool_elem *elem);
+#define rxe_cleanup(obj) __rxe_cleanup(&(obj)->elem)
+
 #define rxe_read(obj) kref_read(&(obj)->elem.ref_cnt)

+void __rxe_finalize(struct rxe_pool_elem *elem);
+#define rxe_finalize(obj) __rxe_finalize(&(obj)->elem)
+
 #endif /* RXE_POOL_H */
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
index 4adcb93af9b1..5ca21922b5e7 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
@@ -115,7 +115,7 @@ static void rxe_dealloc_ucontext(struct ib_ucontext *ibuc)
 {
 	struct rxe_ucontext *uc = to_ruc(ibuc);

-	rxe_put(uc);
+	rxe_cleanup(uc);
 }

 static int rxe_port_immutable(struct ib_device *dev, u32 port_num,
@@ -149,7 +149,7 @@ static int rxe_dealloc_pd(struct ib_pd *ibpd, struct ib_udata *udata)
 {
 	struct rxe_pd *pd = to_rpd(ibpd);

-	rxe_put(pd);
+	rxe_cleanup(pd);

 	return 0;
 }
@@ -188,7 +188,7 @@ static int rxe_create_ah(struct ib_ah *ibah,
 		err = copy_to_user(&uresp->ah_num, &ah->ah_num,
					sizeof(uresp->ah_num));
 		if (err) {
-			rxe_put(ah);
+			rxe_cleanup(ah);
 			return -EFAULT;
 		}
 	} else if (ah->is_user) {
@@ -197,6 +197,8 @@ static int rxe_create_ah(struct ib_ah *ibah,
 	}

 	rxe_init_av(init_attr->ah_attr, &ah->av);
+	rxe_finalize(ah);
+
 	return 0;
 }
@@ -228,7 +230,7 @@ static int rxe_destroy_ah(struct ib_ah *ibah, u32 flags)
 {
 	struct rxe_ah *ah = to_rah(ibah);

-	rxe_put(ah);
+	rxe_cleanup(ah);

 	return 0;
 }
@@ -313,7 +315,7 @@ static int rxe_create_srq(struct ib_srq *ibsrq, struct ib_srq_init_attr *init,
 	return 0;

 err_cleanup:
-	rxe_put(srq);
+	rxe_cleanup(srq);
 err_out:
 	return err;
 }
@@ -367,7 +369,7 @@ static int rxe_destroy_srq(struct ib_srq *ibsrq, struct ib_udata *udata)
 {
 	struct rxe_srq *srq = to_rsrq(ibsrq);

-	rxe_put(srq);
+	rxe_cleanup(srq);

 	return 0;
 }
@@ -434,10 +436,11 @@ static int rxe_create_qp(struct ib_qp *ibqp, struct ib_qp_init_attr *init,
 	if (err)
 		goto qp_init;

+	rxe_finalize(qp);
 	return 0;

 qp_init:
-	rxe_put(qp);
+	rxe_cleanup(qp);
 	return err;
 }
@@ -490,7 +493,7 @@ static int rxe_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata)
 	if (ret)
 		return ret;

-	rxe_put(qp);
+	rxe_cleanup(qp);

 	return 0;
 }
@@ -809,7 +812,7 @@ static int rxe_destroy_cq(struct ib_cq *ibcq, struct ib_udata *udata)
 	rxe_cq_disable(cq);

-	rxe_put(cq);
+	rxe_cleanup(cq);

 	return 0;
 }
@@ -904,6 +907,7 @@ static struct ib_mr *rxe_get_dma_mr(struct ib_pd *ibpd, int access)
 	rxe_get(pd);
 	rxe_mr_init_dma(pd, access, mr);
+	rxe_finalize(mr);

 	return &mr->ibmr;
 }
@@ -932,11 +936,13 @@ static struct ib_mr *rxe_reg_user_mr(struct ib_pd *ibpd,
 	if (err)
 		goto err3;

+	rxe_finalize(mr);
+
 	return &mr->ibmr;

 err3:
 	rxe_put(pd);
-	rxe_put(mr);
+	rxe_cleanup(mr);
 err2:
 	return ERR_PTR(err);
 }
@@ -964,11 +970,13 @@ static struct ib_mr *rxe_alloc_mr(struct ib_pd *ibpd, enum ib_mr_type mr_type,
 	if (err)
 		goto err2;

+	rxe_finalize(mr);
+
 	return &mr->ibmr;

 err2:
 	rxe_put(pd);
-	rxe_put(mr);
+	rxe_cleanup(mr);
 err1:
 	return ERR_PTR(err);
 }
From patchwork Mon Apr 4 21:50:59 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12800885
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Subject: [PATCH for-next v13 09/10] RDMA/rxe: Convert read side locking to rcu
Date: Mon, 4 Apr 2022 16:50:59 -0500
Message-Id: <20220404215059.39819-10-rpearsonhpe@gmail.com>

Use rcu_read_lock() for protecting read side operations in rxe_pool.c.

Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_pool.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c
index 38f435762238..cbe7b05c3b66 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.c
+++ b/drivers/infiniband/sw/rxe/rxe_pool.c
@@ -190,16 +190,15 @@ void *rxe_pool_get_index(struct rxe_pool *pool, u32 index)
 {
 	struct rxe_pool_elem *elem;
 	struct xarray *xa = &pool->xa;
-	unsigned long flags;
 	void *obj;

-	xa_lock_irqsave(xa, flags);
+	rcu_read_lock();
 	elem = xa_load(xa, index);
 	if (elem && kref_get_unless_zero(&elem->ref_cnt))
 		obj = elem->obj;
 	else
 		obj = NULL;
-	xa_unlock_irqrestore(xa, flags);
+	rcu_read_unlock();

 	return obj;
 }
@@ -250,7 +249,7 @@ int __rxe_cleanup(struct rxe_pool_elem *elem)
 		pool->cleanup(elem);

 	if (pool->flags & RXE_POOL_ALLOC)
-		kfree(elem->obj);
+		kfree_rcu(elem->obj);

 	atomic_dec(&pool->num_elem);
X-Patchwork-Submitter: Bob Pearson X-Patchwork-Id: 12800886 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id AEA1BC433FE for ; Mon, 4 Apr 2022 22:28:19 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S239116AbiDDWaN (ORCPT ); Mon, 4 Apr 2022 18:30:13 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45120 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1345323AbiDDW2X (ORCPT ); Mon, 4 Apr 2022 18:28:23 -0400 Received: from mail-oi1-x22d.google.com (mail-oi1-x22d.google.com [IPv6:2607:f8b0:4864:20::22d]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5821051E6D for ; Mon, 4 Apr 2022 14:51:32 -0700 (PDT) Received: by mail-oi1-x22d.google.com with SMTP id k10so11535757oia.0 for ; Mon, 04 Apr 2022 14:51:32 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=ACEymdZ+o6PzYutMiFq8CSopxsHRpbZdzFfwaeopvi0=; b=YilbgnBSlpCnDrVP88PWj9EaLjZMPJzNlA3pWM33wSWuW7GhzbL+OCrDJz2xseSAvd PHoqYxAGlCvKKgbl3lQE/9jM2OfE6X4us5sC0+tftmRHx7NAvgHR/EzfYfZled0ezFHp t7cH/upklCViU9XS0ZOJbg0Rz+VP4UlvHzEzuzg7SC6h8TFXNihIBO+RQRlJws7N19S5 3n38Q4Z0cFdxTP8mW/xPTJvtwpPYPn+wqD4tjFXN6OIJfafwASlWFmbayhBWKAPSQGHS vqW0hAQEr65g/OWX9GpZIZY/A+H8HjXYkGxnfWzhZS239hllv12ZEJe2o7d84TOzsgbk Cqmg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=ACEymdZ+o6PzYutMiFq8CSopxsHRpbZdzFfwaeopvi0=; b=TL983wZxGj65Zz4T+5uB8P9UMYFIFNcjvX85XHs6SdHeFjvaLF6MsuyIMghQKtGsBp pWnJYUj5vyoR6A3O/iruf2VnD8K7y/IgvMmIkO5GZAoR+ndZwV679fb9gMLVkGD+ksxG 
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v13 10/10] RDMA/rxe: Cleanup rxe_pool.c
Date: Mon, 4 Apr 2022 16:51:00 -0500
Message-Id: <20220404215059.39819-11-rpearsonhpe@gmail.com>
In-Reply-To: <20220404215059.39819-1-rpearsonhpe@gmail.com>
References: <20220404215059.39819-1-rpearsonhpe@gmail.com>
List-ID: linux-rdma@vger.kernel.org

Minor cleanup of rxe_pool.c: add documentation comment headers for the
subroutines, increase alignment for pool elements, and convert some
printk's to WARN_ON's.
Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_pool.c | 89 +++++++++++++++++++++++-----
 1 file changed, 73 insertions(+), 16 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c
index cbe7b05c3b66..3b98650fb971 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.c
+++ b/drivers/infiniband/sw/rxe/rxe_pool.c
@@ -8,7 +8,7 @@
 
 #define RXE_POOL_TIMEOUT	(200)
 #define RXE_POOL_MAX_TIMEOUTS	(3)
-#define RXE_POOL_ALIGN		(16)
+#define RXE_POOL_ALIGN		(64)
 
 static const struct rxe_type_info {
 	const char *name;
@@ -120,24 +120,35 @@ void rxe_pool_cleanup(struct rxe_pool *pool)
 	WARN_ON(!xa_empty(&pool->xa));
 }
 
+/**
+ * rxe_alloc - allocate a new pool object
+ * @pool: object pool
+ *
+ * Context: in task.
+ * Returns: object on success else an ERR_PTR
+ */
 void *rxe_alloc(struct rxe_pool *pool)
 {
 	struct rxe_pool_elem *elem;
 	void *obj;
-	int err;
+	int err = -EINVAL;
 
 	if (WARN_ON(!(pool->flags & RXE_POOL_ALLOC)))
-		return NULL;
+		goto err_out;
+
+	if (WARN_ON(!in_task()))
+		goto err_out;
 
 	if (atomic_inc_return(&pool->num_elem) > pool->max_elem)
-		goto err_cnt;
+		goto err_dec;
 
 	obj = kzalloc(pool->elem_size, GFP_KERNEL);
-	if (!obj)
-		goto err_cnt;
+	if (!obj) {
+		err = -ENOMEM;
+		goto err_dec;
+	}
 
 	elem = (struct rxe_pool_elem *)((u8 *)obj + pool->elem_offset);
-
 	elem->pool = pool;
 	elem->obj = obj;
 	kref_init(&elem->ref_cnt);
@@ -154,20 +165,32 @@ void *rxe_alloc(struct rxe_pool *pool)
 
 err_free:
 	kfree(obj);
-err_cnt:
+err_dec:
 	atomic_dec(&pool->num_elem);
-	return NULL;
+err_out:
+	return ERR_PTR(err);
 }
 
+/**
+ * __rxe_add_to_pool - add rdma-core allocated object to rxe object pool
+ * @pool: object pool
+ * @elem: rxe_pool_elem embedded in object
+ *
+ * Context: in task.
+ * Returns: 0 on success else an error
+ */
 int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_elem *elem)
 {
-	int err;
+	int err = -EINVAL;
 
 	if (WARN_ON(pool->flags & RXE_POOL_ALLOC))
-		return -EINVAL;
+		goto err_out;
+
+	if (WARN_ON(!in_task()))
+		goto err_out;
 
 	if (atomic_inc_return(&pool->num_elem) > pool->max_elem)
-		goto err_cnt;
+		goto err_dec;
 
 	elem->pool = pool;
 	elem->obj = (u8 *)elem - pool->elem_offset;
@@ -177,15 +200,23 @@ int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_elem *elem)
 	err = xa_alloc_cyclic(&pool->xa, &elem->index, NULL, pool->limit,
 			      &pool->next, GFP_KERNEL);
 	if (err)
-		goto err_cnt;
+		goto err_dec;
 
 	return 0;
 
-err_cnt:
+err_dec:
 	atomic_dec(&pool->num_elem);
-	return -EINVAL;
+err_out:
+	return err;
 }
 
+/**
+ * rxe_pool_get_index - find object in pool with given index
+ * @pool: object pool
+ * @index: index
+ *
+ * Returns: object on success else NULL
+ */
 void *rxe_pool_get_index(struct rxe_pool *pool, u32 index)
 {
 	struct rxe_pool_elem *elem;
@@ -203,6 +234,10 @@ void *rxe_pool_get_index(struct rxe_pool *pool, u32 index)
 	return obj;
 }
 
+/**
+ * rxe_elem_release - complete object when last reference is dropped
+ * @kref: kref contained in rxe_pool_elem
+ */
 static void rxe_elem_release(struct kref *kref)
 {
 	struct rxe_pool_elem *elem = container_of(kref, typeof(*elem), ref_cnt);
@@ -210,6 +245,12 @@ static void rxe_elem_release(struct kref *kref)
 	complete(&elem->complete);
 }
 
+/**
+ * __rxe_cleanup - cleanup object after waiting for all refs to be dropped
+ * @elem: rxe_pool_elem
+ *
+ * Returns: 0 on success else an error
+ */
 int __rxe_cleanup(struct rxe_pool_elem *elem)
 {
 	struct rxe_pool *pool = elem->pool;
@@ -229,7 +270,7 @@ int __rxe_cleanup(struct rxe_pool_elem *elem)
 
 	/* if this is the last call to rxe_put complete the
 	 * object. It is safe to touch elem after this since
-	 * it is freed below
+	 * it is freed below if locally allocated
 	 */
 	__rxe_put(elem);
 
@@ -256,16 +297,32 @@ int __rxe_cleanup(struct rxe_pool_elem *elem)
 	return err;
 }
 
+/**
+ * __rxe_get - takes a ref on the object unless ref count is zero
+ * @elem: rxe_pool_elem embedded in object
+ *
+ * Returns: 1 if reference is added else 0
+ */
 int __rxe_get(struct rxe_pool_elem *elem)
 {
 	return kref_get_unless_zero(&elem->ref_cnt);
 }
 
+/**
+ * __rxe_put - puts a ref on the object
+ * @elem: rxe_pool_elem embedded in object
+ *
+ * Returns: 1 if ref count reaches zero and release called else 0
+ */
 int __rxe_put(struct rxe_pool_elem *elem)
 {
 	return kref_put(&elem->ref_cnt, rxe_elem_release);
 }
 
+/**
+ * __rxe_finalize - enable looking up object from index
+ * @elem: rxe_pool_elem embedded in object
+ */
void __rxe_finalize(struct rxe_pool_elem *elem)
{
	struct xarray *xa = &elem->pool->xa;