From patchwork Thu Jan 27 21:37:30 2022
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [RFC PATCH v9 01/26] RDMA/rxe: Move rxe_mcast_add/delete to rxe_mcast.c
Date: Thu, 27 Jan 2022 15:37:30 -0600
Message-Id: <20220127213755.31697-2-rpearsonhpe@gmail.com>

Move rxe_mcast_add and rxe_mcast_delete from rxe_net.c to rxe_mcast.c,
make them static, and remove their declarations from rxe_loc.h.

Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_loc.h   |  2 --
 drivers/infiniband/sw/rxe/rxe_mcast.c | 18 ++++++++++++++++++
 drivers/infiniband/sw/rxe/rxe_net.c   | 18 ------------------
 3 files changed, 18 insertions(+), 20 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index b1e174afb1d4..bcec33c3c3b7 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -106,8 +106,6 @@ int rxe_prepare(struct rxe_pkt_info *pkt, struct sk_buff *skb);
 int rxe_xmit_packet(struct rxe_qp *qp, struct rxe_pkt_info *pkt,
                     struct sk_buff *skb);
 const char *rxe_parent_name(struct rxe_dev *rxe, unsigned int port_num);
-int rxe_mcast_add(struct rxe_dev *rxe, union ib_gid *mgid);
-int rxe_mcast_delete(struct rxe_dev *rxe, union ib_gid *mgid);
 
 /* rxe_qp.c */
 int rxe_qp_chk_init(struct rxe_dev *rxe, struct ib_qp_init_attr *init);
diff --git a/drivers/infiniband/sw/rxe/rxe_mcast.c b/drivers/infiniband/sw/rxe/rxe_mcast.c
index bd1ac88b8700..e5689c161984 100644
--- a/drivers/infiniband/sw/rxe/rxe_mcast.c
+++ b/drivers/infiniband/sw/rxe/rxe_mcast.c
@@ -7,6 +7,24 @@
 #include "rxe.h"
 #include "rxe_loc.h"
 
+static int rxe_mcast_add(struct rxe_dev *rxe, union ib_gid *mgid)
+{
+        unsigned char ll_addr[ETH_ALEN];
+
+        ipv6_eth_mc_map((struct in6_addr *)mgid->raw, ll_addr);
+
+        return dev_mc_add(rxe->ndev, ll_addr);
+}
+
+static int rxe_mcast_delete(struct rxe_dev *rxe, union ib_gid *mgid)
+{
+        unsigned char ll_addr[ETH_ALEN];
+
+        ipv6_eth_mc_map((struct in6_addr *)mgid->raw, ll_addr);
+
+        return dev_mc_del(rxe->ndev, ll_addr);
+}
+
 /* caller should hold mc_grp_pool->pool_lock */
 static struct rxe_mc_grp *create_grp(struct rxe_dev *rxe,
                                      struct rxe_pool *pool,
diff --git a/drivers/infiniband/sw/rxe/rxe_net.c b/drivers/infiniband/sw/rxe/rxe_net.c
index be72bdbfb4ba..a8cfa7160478 100644
--- a/drivers/infiniband/sw/rxe/rxe_net.c
+++ b/drivers/infiniband/sw/rxe/rxe_net.c
@@ -20,24 +20,6 @@
 
 static struct rxe_recv_sockets recv_sockets;
 
-int rxe_mcast_add(struct rxe_dev *rxe, union ib_gid *mgid)
-{
-        unsigned char ll_addr[ETH_ALEN];
-
-        ipv6_eth_mc_map((struct in6_addr *)mgid->raw, ll_addr);
-
-        return dev_mc_add(rxe->ndev, ll_addr);
-}
-
-int rxe_mcast_delete(struct rxe_dev *rxe, union ib_gid *mgid)
-{
-        unsigned char ll_addr[ETH_ALEN];
-
-        ipv6_eth_mc_map((struct in6_addr *)mgid->raw, ll_addr);
-
-        return dev_mc_del(rxe->ndev, ll_addr);
-}
-
 static struct dst_entry *rxe_find_route4(struct net_device *ndev,
                                          struct in_addr *saddr,
                                          struct in_addr *daddr)
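For reference, ipv6_eth_mc_map() applies the standard RFC 2464 mapping from
a 128-bit multicast GID to an Ethernet multicast MAC: the fixed prefix 33:33
followed by the low-order 32 bits of the GID, which dev_mc_add() then
programs into the netdev's multicast filter. A minimal userspace model of
that mapping (the helper name here is hypothetical, not part of the patch):

        #include <stdint.h>

        /* model of ipv6_eth_mc_map(): 33:33 + last 4 bytes of the GID */
        static void mgid_to_eth_mcast(const uint8_t gid[16], uint8_t mac[6])
        {
                mac[0] = 0x33;
                mac[1] = 0x33;
                mac[2] = gid[12];
                mac[3] = gid[13];
                mac[4] = gid[14];
                mac[5] = gid[15];
        }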
From patchwork Thu Jan 27 21:37:31 2022
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [RFC PATCH v9 02/26] RDMA/rxe: Move rxe_mcast_attach/detach to rxe_mcast.c
Date: Thu, 27 Jan 2022 15:37:31 -0600
Message-Id: <20220127213755.31697-3-rpearsonhpe@gmail.com>

Move rxe_mcast_attach and rxe_mcast_detach from rxe_verbs.c to
rxe_mcast.c, make them non-static, and add declarations to rxe_loc.h.
Make the subroutines in rxe_mcast.c referenced by these routines static
and remove their declarations from rxe_loc.h.

Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_loc.h   | 12 ++-------
 drivers/infiniband/sw/rxe/rxe_mcast.c | 36 +++++++++++++++++++++++----
 drivers/infiniband/sw/rxe/rxe_verbs.c | 26 -------------------
 3 files changed, 33 insertions(+), 41 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index bcec33c3c3b7..dc606241f0d6 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -40,18 +40,10 @@ void rxe_cq_disable(struct rxe_cq *cq);
 void rxe_cq_cleanup(struct rxe_pool_elem *arg);
 
 /* rxe_mcast.c */
-int rxe_mcast_get_grp(struct rxe_dev *rxe, union ib_gid *mgid,
-                      struct rxe_mc_grp **grp_p);
-
-int rxe_mcast_add_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp,
-                           struct rxe_mc_grp *grp);
-
-int rxe_mcast_drop_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp,
-                            union ib_gid *mgid);
-
 void rxe_drop_all_mcast_groups(struct rxe_qp *qp);
-
 void rxe_mc_cleanup(struct rxe_pool_elem *arg);
+int rxe_attach_mcast(struct ib_qp *ibqp, union ib_gid *mgid, u16 mlid);
+int rxe_detach_mcast(struct ib_qp *ibqp, union ib_gid *mgid, u16 mlid);
 
 /* rxe_mmap.c */
 struct rxe_mmap_info {
diff --git a/drivers/infiniband/sw/rxe/rxe_mcast.c b/drivers/infiniband/sw/rxe/rxe_mcast.c
index e5689c161984..f86e32f4e77f 100644
--- a/drivers/infiniband/sw/rxe/rxe_mcast.c
+++ b/drivers/infiniband/sw/rxe/rxe_mcast.c
@@ -52,8 +52,8 @@ static struct rxe_mc_grp *create_grp(struct rxe_dev *rxe,
         return grp;
 }
 
-int rxe_mcast_get_grp(struct rxe_dev *rxe, union ib_gid *mgid,
-                      struct rxe_mc_grp **grp_p)
+static int rxe_mcast_get_grp(struct rxe_dev *rxe, union ib_gid *mgid,
+                             struct rxe_mc_grp **grp_p)
 {
         int err;
         struct rxe_mc_grp *grp;
@@ -81,7 +81,7 @@ int rxe_mcast_get_grp(struct rxe_dev *rxe, union ib_gid *mgid,
         return 0;
 }
 
-int rxe_mcast_add_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp,
+static int rxe_mcast_add_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp,
                            struct rxe_mc_grp *grp)
 {
         int err;
@@ -125,8 +125,8 @@ int rxe_mcast_add_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp,
         return err;
 }
 
-int rxe_mcast_drop_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp,
-                            union ib_gid *mgid)
+static int rxe_mcast_drop_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp,
+                                   union ib_gid *mgid)
 {
         struct rxe_mc_grp *grp;
         struct rxe_mc_elem *elem, *tmp;
@@ -194,3 +194,29 @@ void rxe_mc_cleanup(struct rxe_pool_elem *elem)
         rxe_drop_key(grp);
         rxe_mcast_delete(rxe, &grp->mgid);
 }
+
+int rxe_attach_mcast(struct ib_qp *ibqp, union ib_gid *mgid, u16 mlid)
+{
+        int err;
+        struct rxe_dev *rxe = to_rdev(ibqp->device);
+        struct rxe_qp *qp = to_rqp(ibqp);
+        struct rxe_mc_grp *grp;
+
+        /* takes a ref on grp if successful */
+        err = rxe_mcast_get_grp(rxe, mgid, &grp);
+        if (err)
+                return err;
+
+        err = rxe_mcast_add_grp_elem(rxe, qp, grp);
+
+        rxe_drop_ref(grp);
+        return err;
+}
+
+int rxe_detach_mcast(struct ib_qp *ibqp, union ib_gid *mgid, u16 mlid)
+{
+        struct rxe_dev *rxe = to_rdev(ibqp->device);
+        struct rxe_qp *qp = to_rqp(ibqp);
+
+        return rxe_mcast_drop_grp_elem(rxe, qp, mgid);
+}
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
index 915ad6664321..f7682541f9af 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
@@ -999,32 +999,6 @@ static int rxe_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg,
         return n;
 }
 
-static int rxe_attach_mcast(struct ib_qp *ibqp, union ib_gid *mgid, u16 mlid)
-{
-        int err;
-        struct rxe_dev *rxe = to_rdev(ibqp->device);
-        struct rxe_qp *qp = to_rqp(ibqp);
-        struct rxe_mc_grp *grp;
-
-        /* takes a ref on grp if successful */
-        err = rxe_mcast_get_grp(rxe, mgid, &grp);
-        if (err)
-                return err;
-
-        err = rxe_mcast_add_grp_elem(rxe, qp, grp);
-
-        rxe_drop_ref(grp);
-        return err;
-}
-
-static int rxe_detach_mcast(struct ib_qp *ibqp, union ib_gid *mgid, u16 mlid)
-{
-        struct rxe_dev *rxe = to_rdev(ibqp->device);
-        struct rxe_qp *qp = to_rqp(ibqp);
-
-        return rxe_mcast_drop_grp_elem(rxe, qp, mgid);
-}
-
 static ssize_t parent_show(struct device *device,
                            struct device_attribute *attr, char *buf)
 {
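The relocated rxe_attach_mcast/rxe_detach_mcast keep their ib_qp-based
signatures because they are still installed as verbs entry points. An
abbreviated sketch of the wiring in rxe_verbs.c (only the two relevant
members are shown; the real ib_device_ops table sets many more ops):

        static const struct ib_device_ops rxe_dev_ops = {
                /* ... */
                .attach_mcast = rxe_attach_mcast,
                .detach_mcast = rxe_detach_mcast,
                /* ... */
        };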
From patchwork Thu Jan 27 21:37:32 2022
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [RFC PATCH v9 03/26] RDMA/rxe: Rename rxe_mc_grp and rxe_mc_elem
Date: Thu, 27 Jan 2022 15:37:32 -0600
Message-Id: <20220127213755.31697-4-rpearsonhpe@gmail.com>

Rename rxe_mc_grp to rxe_mcg and rxe_mc_elem to rxe_mca. These can be
read as 'multicast group' and 'multicast attachment'. 'elem' collided
with the use of elem in rxe pools and was a little confusing.

Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_mcast.c | 26 +++++++++++++-------------
 drivers/infiniband/sw/rxe/rxe_pool.c  | 10 +++++-----
 drivers/infiniband/sw/rxe/rxe_recv.c  |  4 ++--
 drivers/infiniband/sw/rxe/rxe_verbs.h |  6 +++---
 4 files changed, 23 insertions(+), 23 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_mcast.c b/drivers/infiniband/sw/rxe/rxe_mcast.c
index f86e32f4e77f..949784198d80 100644
--- a/drivers/infiniband/sw/rxe/rxe_mcast.c
+++ b/drivers/infiniband/sw/rxe/rxe_mcast.c
@@ -26,12 +26,12 @@ static int rxe_mcast_delete(struct rxe_dev *rxe, union ib_gid *mgid)
 }
 
 /* caller should hold mc_grp_pool->pool_lock */
-static struct rxe_mc_grp *create_grp(struct rxe_dev *rxe,
+static struct rxe_mcg *create_grp(struct rxe_dev *rxe,
                                   struct rxe_pool *pool,
                                   union ib_gid *mgid)
 {
         int err;
-        struct rxe_mc_grp *grp;
+        struct rxe_mcg *grp;
 
         grp = rxe_alloc_locked(&rxe->mc_grp_pool);
         if (!grp)
@@ -53,10 +53,10 @@ static struct rxe_mc_grp *create_grp(struct rxe_dev *rxe,
 }
 
 static int rxe_mcast_get_grp(struct rxe_dev *rxe, union ib_gid *mgid,
-                             struct rxe_mc_grp **grp_p)
+                             struct rxe_mcg **grp_p)
 {
         int err;
-        struct rxe_mc_grp *grp;
+        struct rxe_mcg *grp;
         struct rxe_pool *pool = &rxe->mc_grp_pool;
 
         if (rxe->attr.max_mcast_qp_attach == 0)
@@ -82,10 +82,10 @@ static int rxe_mcast_get_grp(struct rxe_dev *rxe, union ib_gid *mgid,
 }
 
 static int rxe_mcast_add_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp,
-                           struct rxe_mc_grp *grp)
+                           struct rxe_mcg *grp)
 {
         int err;
-        struct rxe_mc_elem *elem;
+        struct rxe_mca *elem;
 
         /* check to see of the qp is already a member of the group */
         spin_lock_bh(&qp->grp_lock);
@@ -128,8 +128,8 @@ static int rxe_mcast_add_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp,
 static int rxe_mcast_drop_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp,
                                    union ib_gid *mgid)
 {
-        struct rxe_mc_grp *grp;
-        struct rxe_mc_elem *elem, *tmp;
+        struct rxe_mcg *grp;
+        struct rxe_mca *elem, *tmp;
 
         grp = rxe_pool_get_key(&rxe->mc_grp_pool, mgid);
         if (!grp)
@@ -162,8 +162,8 @@ static int rxe_mcast_drop_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp,
 
 void rxe_drop_all_mcast_groups(struct rxe_qp *qp)
 {
-        struct rxe_mc_grp *grp;
-        struct rxe_mc_elem *elem;
+        struct rxe_mcg *grp;
+        struct rxe_mca *elem;
 
         while (1) {
                 spin_lock_bh(&qp->grp_lock);
@@ -171,7 +171,7 @@ void rxe_drop_all_mcast_groups(struct rxe_qp *qp)
                         spin_unlock_bh(&qp->grp_lock);
                         break;
                 }
-                elem = list_first_entry(&qp->grp_list, struct rxe_mc_elem,
+                elem = list_first_entry(&qp->grp_list, struct rxe_mca,
                                         grp_list);
                 list_del(&elem->grp_list);
                 spin_unlock_bh(&qp->grp_lock);
@@ -188,7 +188,7 @@ void rxe_drop_all_mcast_groups(struct rxe_qp *qp)
 
 void rxe_mc_cleanup(struct rxe_pool_elem *elem)
 {
-        struct rxe_mc_grp *grp = container_of(elem, typeof(*grp), elem);
+        struct rxe_mcg *grp = container_of(elem, typeof(*grp), elem);
         struct rxe_dev *rxe = grp->rxe;
 
         rxe_drop_key(grp);
@@ -200,7 +200,7 @@ int rxe_attach_mcast(struct ib_qp *ibqp, union ib_gid *mgid, u16 mlid)
         int err;
         struct rxe_dev *rxe = to_rdev(ibqp->device);
         struct rxe_qp *qp = to_rqp(ibqp);
-        struct rxe_mc_grp *grp;
+        struct rxe_mcg *grp;
 
         /* takes a ref on grp if successful */
         err = rxe_mcast_get_grp(rxe, mgid, &grp);
diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c
index 4cb003885e00..63c594173565 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.c
+++ b/drivers/infiniband/sw/rxe/rxe_pool.c
@@ -83,17 +83,17 @@ static const struct rxe_type_info {
         },
         [RXE_TYPE_MC_GRP] = {
                 .name = "rxe-mc_grp",
-                .size = sizeof(struct rxe_mc_grp),
-                .elem_offset = offsetof(struct rxe_mc_grp, elem),
+                .size = sizeof(struct rxe_mcg),
+                .elem_offset = offsetof(struct rxe_mcg, elem),
                 .cleanup = rxe_mc_cleanup,
                 .flags = RXE_POOL_KEY,
-                .key_offset = offsetof(struct rxe_mc_grp, mgid),
+                .key_offset = offsetof(struct rxe_mcg, mgid),
                 .key_size = sizeof(union ib_gid),
         },
         [RXE_TYPE_MC_ELEM] = {
                 .name = "rxe-mc_elem",
-                .size = sizeof(struct rxe_mc_elem),
-                .elem_offset = offsetof(struct rxe_mc_elem, elem),
+                .size = sizeof(struct rxe_mca),
+                .elem_offset = offsetof(struct rxe_mca, elem),
         },
 };
diff --git a/drivers/infiniband/sw/rxe/rxe_recv.c b/drivers/infiniband/sw/rxe/rxe_recv.c
index 6a6cc1fa90e4..7ff6b53555f4 100644
--- a/drivers/infiniband/sw/rxe/rxe_recv.c
+++ b/drivers/infiniband/sw/rxe/rxe_recv.c
@@ -233,8 +233,8 @@ static inline void rxe_rcv_pkt(struct rxe_pkt_info *pkt, struct sk_buff *skb)
 static void rxe_rcv_mcast_pkt(struct rxe_dev *rxe, struct sk_buff *skb)
 {
         struct rxe_pkt_info *pkt = SKB_TO_PKT(skb);
-        struct rxe_mc_grp *mcg;
-        struct rxe_mc_elem *mce;
+        struct rxe_mcg *mcg;
+        struct rxe_mca *mce;
         struct rxe_qp *qp;
         union ib_gid dgid;
         int err;
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h
index e48969e8d4c8..388b7dc23dd7 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.h
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.h
@@ -353,7 +353,7 @@ struct rxe_mw {
         u64 length;
 };
 
-struct rxe_mc_grp {
+struct rxe_mcg {
         struct rxe_pool_elem elem;
         spinlock_t mcg_lock; /* guard group */
         struct rxe_dev *rxe;
@@ -364,12 +364,12 @@ struct rxe_mc_grp {
         u16 pkey;
 };
 
-struct rxe_mc_elem {
+struct rxe_mca {
         struct rxe_pool_elem elem;
         struct list_head qp_list;
         struct list_head grp_list;
         struct rxe_qp *qp;
-        struct rxe_mc_grp *grp;
+        struct rxe_mcg *grp;
 };
 
 struct rxe_port {
From patchwork Thu Jan 27 21:37:33 2022
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [RFC PATCH v9 04/26] RDMA/rxe: Enforce IBA o10-2.2.3
Date: Thu, 27 Jan 2022 15:37:33 -0600
Message-Id: <20220127213755.31697-5-rpearsonhpe@gmail.com>

Add code to check if a QP is attached to one or more multicast groups
when destroy_qp is called and return an error if so.
Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_loc.h   |  9 +--------
 drivers/infiniband/sw/rxe/rxe_mcast.c |  2 ++
 drivers/infiniband/sw/rxe/rxe_qp.c    | 14 ++++++++++++++
 drivers/infiniband/sw/rxe/rxe_verbs.c |  5 +++++
 drivers/infiniband/sw/rxe/rxe_verbs.h |  1 +
 5 files changed, 23 insertions(+), 8 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index dc606241f0d6..052beaaacf43 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -101,26 +101,19 @@ const char *rxe_parent_name(struct rxe_dev *rxe, unsigned int port_num);
 
 /* rxe_qp.c */
 int rxe_qp_chk_init(struct rxe_dev *rxe, struct ib_qp_init_attr *init);
-
 int rxe_qp_from_init(struct rxe_dev *rxe, struct rxe_qp *qp, struct rxe_pd *pd,
                      struct ib_qp_init_attr *init,
                      struct rxe_create_qp_resp __user *uresp,
                      struct ib_pd *ibpd, struct ib_udata *udata);
-
 int rxe_qp_to_init(struct rxe_qp *qp, struct ib_qp_init_attr *init);
-
 int rxe_qp_chk_attr(struct rxe_dev *rxe, struct rxe_qp *qp,
                     struct ib_qp_attr *attr, int mask);
-
 int rxe_qp_from_attr(struct rxe_qp *qp, struct ib_qp_attr *attr,
                      int mask, struct ib_udata *udata);
-
 int rxe_qp_to_attr(struct rxe_qp *qp, struct ib_qp_attr *attr, int mask);
-
 void rxe_qp_error(struct rxe_qp *qp);
-
+int rxe_qp_chk_destroy(struct rxe_qp *qp);
 void rxe_qp_destroy(struct rxe_qp *qp);
-
 void rxe_qp_cleanup(struct rxe_pool_elem *elem);
 
 static inline int qp_num(struct rxe_qp *qp)
diff --git a/drivers/infiniband/sw/rxe/rxe_mcast.c b/drivers/infiniband/sw/rxe/rxe_mcast.c
index 949784198d80..34e3c52f0b72 100644
--- a/drivers/infiniband/sw/rxe/rxe_mcast.c
+++ b/drivers/infiniband/sw/rxe/rxe_mcast.c
@@ -114,6 +114,7 @@ static int rxe_mcast_add_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp,
         grp->num_qp++;
         elem->qp = qp;
         elem->grp = grp;
+        atomic_inc(&qp->mcg_num);
 
         list_add(&elem->qp_list, &grp->qp_list);
         list_add(&elem->grp_list, &qp->grp_list);
@@ -143,6 +144,7 @@ static int rxe_mcast_drop_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp,
                         list_del(&elem->qp_list);
                         list_del(&elem->grp_list);
                         grp->num_qp--;
+                        atomic_dec(&qp->mcg_num);
 
                         spin_unlock_bh(&grp->mcg_lock);
                         spin_unlock_bh(&qp->grp_lock);
diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c
index 5018b9387694..2af19b79dd23 100644
--- a/drivers/infiniband/sw/rxe/rxe_qp.c
+++ b/drivers/infiniband/sw/rxe/rxe_qp.c
@@ -770,6 +770,20 @@ int rxe_qp_to_attr(struct rxe_qp *qp, struct ib_qp_attr *attr, int mask)
         return 0;
 }
 
+int rxe_qp_chk_destroy(struct rxe_qp *qp)
+{
+        /* See IBA o10-2.2.3
+         * An attempt to destroy a QP while attached to a mcast group
+         * will fail immediately.
+         */
+        if (atomic_read(&qp->mcg_num)) {
+                pr_warn_once("Attempt to destroy QP while attached to multicast group\n");
+                return -EBUSY;
+        }
+
+        return 0;
+}
+
 /* called by the destroy qp verb */
 void rxe_qp_destroy(struct rxe_qp *qp)
 {
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
index f7682541f9af..9f0aef4b649d 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
@@ -493,6 +493,11 @@ static int rxe_query_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr,
 static int rxe_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata)
 {
         struct rxe_qp *qp = to_rqp(ibqp);
+        int ret;
+
+        ret = rxe_qp_chk_destroy(qp);
+        if (ret)
+                return ret;
 
         rxe_qp_destroy(qp);
         rxe_drop_index(qp);
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h
index 388b7dc23dd7..4910d0782e33 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.h
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.h
@@ -235,6 +235,7 @@ struct rxe_qp {
         /* list of mcast groups qp has joined (for cleanup) */
         struct list_head grp_list;
         spinlock_t grp_lock; /* guard grp_list */
+        atomic_t mcg_num;
 
         struct sk_buff_head req_pkts;
         struct sk_buff_head resp_pkts;
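A hedged userspace illustration of the rule this patch enforces, using
standard libibverbs calls (the helper name and its error handling are
illustrative, not part of the patch): ibv_destroy_qp() on a QP that is
still attached to a multicast group now fails with the EBUSY the driver
returns, so an application must detach first.

        #include <infiniband/verbs.h>

        int teardown_qp(struct ibv_qp *qp, const union ibv_gid *mgid,
                        uint16_t mlid)
        {
                /* fails (EBUSY) while still attached, per IBA o10-2.2.3 */
                if (ibv_destroy_qp(qp) == 0)
                        return 0;

                /* drop the attachment, then destroy succeeds */
                if (ibv_detach_mcast(qp, mgid, mlid))
                        return -1;

                return ibv_destroy_qp(qp);
        }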
From patchwork Thu Jan 27 21:37:34 2022
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [RFC PATCH v9 05/26] RDMA/rxe: Remove rxe_drop_all_mcast_groups
Date: Thu, 27 Jan 2022 15:37:34 -0600
Message-Id: <20220127213755.31697-6-rpearsonhpe@gmail.com>

With o10-2.2.3 enforced, rxe_drop_all_mcast_groups is completely
unnecessary. Remove it and all references to it.

Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_loc.h   |  1 -
 drivers/infiniband/sw/rxe/rxe_mcast.c | 26 --------------------------
 drivers/infiniband/sw/rxe/rxe_qp.c    |  2 --
 3 files changed, 29 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index 052beaaacf43..af40e3c212fb 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -40,7 +40,6 @@ void rxe_cq_disable(struct rxe_cq *cq);
 void rxe_cq_cleanup(struct rxe_pool_elem *arg);
 
 /* rxe_mcast.c */
-void rxe_drop_all_mcast_groups(struct rxe_qp *qp);
 void rxe_mc_cleanup(struct rxe_pool_elem *arg);
 int rxe_attach_mcast(struct ib_qp *ibqp, union ib_gid *mgid, u16 mlid);
 int rxe_detach_mcast(struct ib_qp *ibqp, union ib_gid *mgid, u16 mlid);
diff --git a/drivers/infiniband/sw/rxe/rxe_mcast.c b/drivers/infiniband/sw/rxe/rxe_mcast.c
index 34e3c52f0b72..39a41daa7a6b 100644
--- a/drivers/infiniband/sw/rxe/rxe_mcast.c
+++ b/drivers/infiniband/sw/rxe/rxe_mcast.c
@@ -162,32 +162,6 @@ static int rxe_mcast_drop_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp,
         return -EINVAL;
 }
 
-void rxe_drop_all_mcast_groups(struct rxe_qp *qp)
-{
-        struct rxe_mcg *grp;
-        struct rxe_mca *elem;
-
-        while (1) {
-                spin_lock_bh(&qp->grp_lock);
-                if (list_empty(&qp->grp_list)) {
-                        spin_unlock_bh(&qp->grp_lock);
-                        break;
-                }
-                elem = list_first_entry(&qp->grp_list, struct rxe_mca,
-                                        grp_list);
-                list_del(&elem->grp_list);
-                spin_unlock_bh(&qp->grp_lock);
-
-                grp = elem->grp;
-                spin_lock_bh(&grp->mcg_lock);
-                list_del(&elem->qp_list);
-                grp->num_qp--;
-                spin_unlock_bh(&grp->mcg_lock);
-                rxe_drop_ref(grp);
-                rxe_drop_ref(elem);
-        }
-}
-
 void rxe_mc_cleanup(struct rxe_pool_elem *elem)
 {
         struct rxe_mcg *grp = container_of(elem, typeof(*grp), elem);
diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c
index 2af19b79dd23..087126550caf 100644
--- a/drivers/infiniband/sw/rxe/rxe_qp.c
+++ b/drivers/infiniband/sw/rxe/rxe_qp.c
@@ -812,8 +812,6 @@ static void rxe_qp_do_cleanup(struct work_struct *work)
 {
         struct rxe_qp *qp = container_of(work, typeof(*qp), cleanup_work.work);
 
-        rxe_drop_all_mcast_groups(qp);
-
         if (qp->sq.queue)
                 rxe_queue_cleanup(qp->sq.queue);
From patchwork Thu Jan 27 21:37:35 2022
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [RFC PATCH v9 06/26] RDMA/rxe: Remove qp->grp_lock and qp->grp_list
Date: Thu, 27 Jan 2022 15:37:35 -0600
Message-Id: <20220127213755.31697-7-rpearsonhpe@gmail.com>

Since it is no longer required to clean up attachments to multicast
groups when a QP is destroyed, qp->grp_lock and qp->grp_list are no
longer needed and are removed.

Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_mcast.c | 8 --------
 drivers/infiniband/sw/rxe/rxe_qp.c    | 3 ---
 drivers/infiniband/sw/rxe/rxe_verbs.h | 5 -----
 3 files changed, 16 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_mcast.c b/drivers/infiniband/sw/rxe/rxe_mcast.c
index 39a41daa7a6b..9336295c4ee2 100644
--- a/drivers/infiniband/sw/rxe/rxe_mcast.c
+++ b/drivers/infiniband/sw/rxe/rxe_mcast.c
@@ -88,7 +88,6 @@ static int rxe_mcast_add_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp,
         struct rxe_mca *elem;
 
         /* check to see of the qp is already a member of the group */
-        spin_lock_bh(&qp->grp_lock);
         spin_lock_bh(&grp->mcg_lock);
         list_for_each_entry(elem, &grp->qp_list, qp_list) {
                 if (elem->qp == qp) {
@@ -113,16 +112,13 @@ static int rxe_mcast_add_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp,
 
         grp->num_qp++;
         elem->qp = qp;
-        elem->grp = grp;
         atomic_inc(&qp->mcg_num);
 
         list_add(&elem->qp_list, &grp->qp_list);
-        list_add(&elem->grp_list, &qp->grp_list);
 
         err = 0;
 out:
         spin_unlock_bh(&grp->mcg_lock);
-        spin_unlock_bh(&qp->grp_lock);
         return err;
 }
 
@@ -136,18 +132,15 @@ static int rxe_mcast_drop_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp,
         if (!grp)
                 goto err1;
 
-        spin_lock_bh(&qp->grp_lock);
         spin_lock_bh(&grp->mcg_lock);
 
         list_for_each_entry_safe(elem, tmp, &grp->qp_list, qp_list) {
                 if (elem->qp == qp) {
                         list_del(&elem->qp_list);
-                        list_del(&elem->grp_list);
                         grp->num_qp--;
                         atomic_dec(&qp->mcg_num);
 
                         spin_unlock_bh(&grp->mcg_lock);
-                        spin_unlock_bh(&qp->grp_lock);
                         rxe_drop_ref(elem);
                         rxe_drop_ref(grp); /* ref held by QP */
                         rxe_drop_ref(grp); /* ref from get_key */
@@ -156,7 +149,6 @@ static int rxe_mcast_drop_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp,
         }
 
         spin_unlock_bh(&grp->mcg_lock);
-        spin_unlock_bh(&qp->grp_lock);
         rxe_drop_ref(grp); /* ref from get_key */
 err1:
         return -EINVAL;
diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c
index 087126550caf..742073ce0709 100644
--- a/drivers/infiniband/sw/rxe/rxe_qp.c
+++ b/drivers/infiniband/sw/rxe/rxe_qp.c
@@ -188,9 +188,6 @@ static void rxe_qp_init_misc(struct rxe_dev *rxe, struct rxe_qp *qp,
                 break;
         }
 
-        INIT_LIST_HEAD(&qp->grp_list);
-
-        spin_lock_init(&qp->grp_lock);
         spin_lock_init(&qp->state_lock);
 
         atomic_set(&qp->ssn, 0);
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h
index 4910d0782e33..55f8ed2bc621 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.h
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.h
@@ -232,9 +232,6 @@ struct rxe_qp {
         struct rxe_av pri_av;
         struct rxe_av alt_av;
 
-        /* list of mcast groups qp has joined (for cleanup) */
-        struct list_head grp_list;
-        spinlock_t grp_lock; /* guard grp_list */
         atomic_t mcg_num;
 
         struct sk_buff_head req_pkts;
@@ -368,9 +365,7 @@ struct rxe_mcg {
 
 struct rxe_mca {
         struct rxe_pool_elem elem;
         struct list_head qp_list;
-        struct list_head grp_list;
         struct rxe_qp *qp;
-        struct rxe_mcg *grp;
 };
 
 struct rxe_port {
From patchwork Thu Jan 27 21:37:36 2022
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [RFC PATCH v9 07/26] RDMA/rxe: Use kzalloc/kfree for mca
Date: Thu, 27 Jan 2022 15:37:36 -0600
Message-Id: <20220127213755.31697-8-rpearsonhpe@gmail.com>

Remove rxe_mca (was rxe_mc_elem) from rxe pools and use kzalloc and
kfree to allocate and free. Allocate the new mca with

    new_mca = kzalloc(sizeof(*new_mca), GFP_KERNEL);

before taking the spinlock and re-check for a racing attach under the
lock, instead of allocating with GFP_ATOMIC while holding the spinlock.
Add an extra reference to the mcg when a new one is created and drop it
when the last qp is detached.

Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe.c       |  8 -----
 drivers/infiniband/sw/rxe/rxe_mcast.c | 51 ++++++++++++++++-----------
 drivers/infiniband/sw/rxe/rxe_pool.c  |  5 ---
 drivers/infiniband/sw/rxe/rxe_pool.h  |  1 -
 drivers/infiniband/sw/rxe/rxe_verbs.h |  2 --
 5 files changed, 30 insertions(+), 37 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe.c b/drivers/infiniband/sw/rxe/rxe.c
index fab291245366..c55736e441e7 100644
--- a/drivers/infiniband/sw/rxe/rxe.c
+++ b/drivers/infiniband/sw/rxe/rxe.c
@@ -29,7 +29,6 @@ void rxe_dealloc(struct ib_device *ib_dev)
         rxe_pool_cleanup(&rxe->mr_pool);
         rxe_pool_cleanup(&rxe->mw_pool);
         rxe_pool_cleanup(&rxe->mc_grp_pool);
-        rxe_pool_cleanup(&rxe->mc_elem_pool);
 
         if (rxe->tfm)
                 crypto_free_shash(rxe->tfm);
@@ -163,15 +162,8 @@ static int rxe_init_pools(struct rxe_dev *rxe)
         if (err)
                 goto err9;
 
-        err = rxe_pool_init(rxe, &rxe->mc_elem_pool, RXE_TYPE_MC_ELEM,
-                            rxe->attr.max_total_mcast_qp_attach);
-        if (err)
-                goto err10;
-
         return 0;
 
-err10:
-        rxe_pool_cleanup(&rxe->mc_grp_pool);
 err9:
         rxe_pool_cleanup(&rxe->mw_pool);
 err8:
diff --git a/drivers/infiniband/sw/rxe/rxe_mcast.c b/drivers/infiniband/sw/rxe/rxe_mcast.c
index 9336295c4ee2..39f38ee665f2 100644
--- a/drivers/infiniband/sw/rxe/rxe_mcast.c
+++ b/drivers/infiniband/sw/rxe/rxe_mcast.c
@@ -36,6 +36,7 @@ static struct rxe_mcg *create_grp(struct rxe_dev *rxe,
         grp = rxe_alloc_locked(&rxe->mc_grp_pool);
         if (!grp)
                 return ERR_PTR(-ENOMEM);
+        rxe_add_ref(grp);
 
         INIT_LIST_HEAD(&grp->qp_list);
         spin_lock_init(&grp->mcg_lock);
@@ -85,12 +86,28 @@ static int rxe_mcast_add_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp,
                                struct rxe_mcg *grp)
 {
         int err;
-        struct rxe_mca *elem;
+        struct rxe_mca *mca, *new_mca;
 
-        /* check to see of the qp is already a member of the group */
+        /* check to see if the qp is already a member of the group */
         spin_lock_bh(&grp->mcg_lock);
-        list_for_each_entry(elem, &grp->qp_list, qp_list) {
-                if (elem->qp == qp) {
+        list_for_each_entry(mca, &grp->qp_list, qp_list) {
+                if (mca->qp == qp) {
+                        spin_unlock_bh(&grp->mcg_lock);
+                        return 0;
+                }
+        }
+        spin_unlock_bh(&grp->mcg_lock);
+
+        /* speculative alloc new mca without using GFP_ATOMIC */
+        new_mca = kzalloc(sizeof(*mca), GFP_KERNEL);
+        if (!new_mca)
+                return -ENOMEM;
+
+        spin_lock_bh(&grp->mcg_lock);
+        /* re-check to see if someone else just attached qp */
+        list_for_each_entry(mca, &grp->qp_list, qp_list) {
+                if (mca->qp == qp) {
+                        kfree(new_mca);
                         err = 0;
                         goto out;
                 }
@@ -101,20 +118,11 @@ static int rxe_mcast_add_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp,
                 goto out;
         }
 
-        elem = rxe_alloc_locked(&rxe->mc_elem_pool);
-        if (!elem) {
-                err = -ENOMEM;
-                goto out;
-        }
-
-        /* each qp holds a ref on the grp */
-        rxe_add_ref(grp);
-
         grp->num_qp++;
-        elem->qp = qp;
+        new_mca->qp = qp;
         atomic_inc(&qp->mcg_num);
 
-        list_add(&elem->qp_list, &grp->qp_list);
+        list_add(&new_mca->qp_list, &grp->qp_list);
 
         err = 0;
 out:
@@ -126,7 +134,7 @@ static int rxe_mcast_drop_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp,
                                union ib_gid *mgid)
 {
         struct rxe_mcg *grp;
-        struct rxe_mca *elem, *tmp;
+        struct rxe_mca *mca, *tmp;
 
         grp = rxe_pool_get_key(&rxe->mc_grp_pool, mgid);
         if (!grp)
@@ -134,16 +142,17 @@ static int rxe_mcast_drop_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp,
 
         spin_lock_bh(&grp->mcg_lock);
 
-        list_for_each_entry_safe(elem, tmp, &grp->qp_list, qp_list) {
-                if (elem->qp == qp) {
-                        list_del(&elem->qp_list);
+        list_for_each_entry_safe(mca, tmp, &grp->qp_list, qp_list) {
+                if (mca->qp == qp) {
+                        list_del(&mca->qp_list);
                         grp->num_qp--;
+                        if (grp->num_qp <= 0)
+                                rxe_drop_ref(grp);
                         atomic_dec(&qp->mcg_num);
 
                         spin_unlock_bh(&grp->mcg_lock);
-                        rxe_drop_ref(elem);
-                        rxe_drop_ref(grp); /* ref held by QP */
                         rxe_drop_ref(grp); /* ref from get_key */
+                        kfree(mca);
                         return 0;
                 }
         }
diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c
index 63c594173565..a6756aa93e2b 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.c
+++ b/drivers/infiniband/sw/rxe/rxe_pool.c
@@ -90,11 +90,6 @@ static const struct rxe_type_info {
                 .key_offset = offsetof(struct rxe_mcg, mgid),
                 .key_size = sizeof(union ib_gid),
         },
-        [RXE_TYPE_MC_ELEM] = {
-                .name = "rxe-mc_elem",
-                .size = sizeof(struct rxe_mca),
-                .elem_offset = offsetof(struct rxe_mca, elem),
-        },
 };
diff --git a/drivers/infiniband/sw/rxe/rxe_pool.h b/drivers/infiniband/sw/rxe/rxe_pool.h
index 214279310f4d..511f81554fd1 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.h
+++ b/drivers/infiniband/sw/rxe/rxe_pool.h
@@ -23,7 +23,6 @@ enum rxe_elem_type {
         RXE_TYPE_MR,
         RXE_TYPE_MW,
         RXE_TYPE_MC_GRP,
-        RXE_TYPE_MC_ELEM,
         RXE_NUM_TYPES, /* keep me last */
 };
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h
index 55f8ed2bc621..02745d51c163 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.h
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.h
@@ -363,7 +363,6 @@ struct rxe_mcg {
 };
 
 struct rxe_mca {
-        struct rxe_pool_elem elem;
         struct list_head qp_list;
         struct rxe_qp *qp;
 };
@@ -397,7 +396,6 @@ struct rxe_dev {
         struct rxe_pool mr_pool;
         struct rxe_pool mw_pool;
         struct rxe_pool mc_grp_pool;
-        struct rxe_pool mc_elem_pool;
 
         spinlock_t pending_lock; /* guard pending_mmaps */
         struct list_head pending_mmaps;
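The allocate-then-recheck pattern this patch adopts, shown in isolation
as a minimal self-contained sketch (struct item, add_item and the field
names are hypothetical): allocate with GFP_KERNEL while no lock is held,
then re-validate under the lock and free the object if another thread
won the race.

        #include <linux/list.h>
        #include <linux/slab.h>
        #include <linux/spinlock.h>

        struct item {
                struct list_head entry;
                int key;
        };

        static int add_item(struct list_head *list, spinlock_t *lock, int key)
        {
                struct item *new_item, *it;

                /* may sleep: no lock held yet */
                new_item = kzalloc(sizeof(*new_item), GFP_KERNEL);
                if (!new_item)
                        return -ENOMEM;
                new_item->key = key;

                spin_lock_bh(lock);
                list_for_each_entry(it, list, entry) {
                        if (it->key == key) {
                                /* lost the race; discard our copy */
                                spin_unlock_bh(lock);
                                kfree(new_item);
                                return 0;
                        }
                }
                list_add(&new_item->entry, list);
                spin_unlock_bh(lock);
                return 0;
        }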
From patchwork Thu Jan 27 21:37:37 2022
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [RFC PATCH v9 08/26] RDMA/rxe: Rename grp to mcg and mce to mca
Date: Thu, 27 Jan 2022 15:37:37 -0600
Message-Id: <20220127213755.31697-9-rpearsonhpe@gmail.com>

In rxe_mcast.c and rxe_recv.c replace 'grp' by 'mcg' and 'mce' by 'mca'.
Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_mcast.c | 102 +++++++++++++-------------
 drivers/infiniband/sw/rxe/rxe_recv.c  |   8 +-
 2 files changed, 55 insertions(+), 55 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_mcast.c b/drivers/infiniband/sw/rxe/rxe_mcast.c
index 39f38ee665f2..ed1b9ca65da3 100644
--- a/drivers/infiniband/sw/rxe/rxe_mcast.c
+++ b/drivers/infiniband/sw/rxe/rxe_mcast.c
@@ -31,33 +31,33 @@ static struct rxe_mcg *create_grp(struct rxe_dev *rxe,
                                   union ib_gid *mgid)
 {
         int err;
-        struct rxe_mcg *grp;
+        struct rxe_mcg *mcg;
 
-        grp = rxe_alloc_locked(&rxe->mc_grp_pool);
-        if (!grp)
+        mcg = rxe_alloc_locked(&rxe->mc_grp_pool);
+        if (!mcg)
                 return ERR_PTR(-ENOMEM);
-        rxe_add_ref(grp);
+        rxe_add_ref(mcg);
 
-        INIT_LIST_HEAD(&grp->qp_list);
-        spin_lock_init(&grp->mcg_lock);
-        grp->rxe = rxe;
-        rxe_add_key_locked(grp, mgid);
+        INIT_LIST_HEAD(&mcg->qp_list);
+        spin_lock_init(&mcg->mcg_lock);
+        mcg->rxe = rxe;
+        rxe_add_key_locked(mcg, mgid);
 
         err = rxe_mcast_add(rxe, mgid);
         if (unlikely(err)) {
-                rxe_drop_key_locked(grp);
-                rxe_drop_ref(grp);
+                rxe_drop_key_locked(mcg);
+                rxe_drop_ref(mcg);
                 return ERR_PTR(err);
         }
 
-        return grp;
+        return mcg;
 }
 
 static int rxe_mcast_get_grp(struct rxe_dev *rxe, union ib_gid *mgid,
-                             struct rxe_mcg **grp_p)
+                             struct rxe_mcg **mcgp)
 {
         int err;
-        struct rxe_mcg *grp;
+        struct rxe_mcg *mcg;
         struct rxe_pool *pool = &rxe->mc_grp_pool;
 
         if (rxe->attr.max_mcast_qp_attach == 0)
@@ -65,47 +65,47 @@ static int rxe_mcast_get_grp(struct rxe_dev *rxe, union ib_gid *mgid,
 
         write_lock_bh(&pool->pool_lock);
 
-        grp = rxe_pool_get_key_locked(pool, mgid);
-        if (grp)
+        mcg = rxe_pool_get_key_locked(pool, mgid);
+        if (mcg)
                 goto done;
 
-        grp = create_grp(rxe, pool, mgid);
-        if (IS_ERR(grp)) {
+        mcg = create_grp(rxe, pool, mgid);
+        if (IS_ERR(mcg)) {
                 write_unlock_bh(&pool->pool_lock);
-                err = PTR_ERR(grp);
+                err = PTR_ERR(mcg);
                 return err;
         }
 
 done:
         write_unlock_bh(&pool->pool_lock);
-        *grp_p = grp;
+        *mcgp = mcg;
         return 0;
 }
 
 static int rxe_mcast_add_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp,
-                               struct rxe_mcg *grp)
+                               struct rxe_mcg *mcg)
 {
         int err;
         struct rxe_mca *mca, *new_mca;
 
         /* check to see if the qp is already a member of the group */
-        spin_lock_bh(&grp->mcg_lock);
-        list_for_each_entry(mca, &grp->qp_list, qp_list) {
+        spin_lock_bh(&mcg->mcg_lock);
+        list_for_each_entry(mca, &mcg->qp_list, qp_list) {
                 if (mca->qp == qp) {
-                        spin_unlock_bh(&grp->mcg_lock);
+                        spin_unlock_bh(&mcg->mcg_lock);
                         return 0;
                 }
         }
-        spin_unlock_bh(&grp->mcg_lock);
+        spin_unlock_bh(&mcg->mcg_lock);
 
         /* speculative alloc new mca without using GFP_ATOMIC */
         new_mca = kzalloc(sizeof(*mca), GFP_KERNEL);
         if (!new_mca)
                 return -ENOMEM;
 
-        spin_lock_bh(&grp->mcg_lock);
+        spin_lock_bh(&mcg->mcg_lock);
         /* re-check to see if someone else just attached qp */
-        list_for_each_entry(mca, &grp->qp_list, qp_list) {
+        list_for_each_entry(mca, &mcg->qp_list, qp_list) {
                 if (mca->qp == qp) {
                         kfree(new_mca);
                         err = 0;
@@ -113,63 +113,63 @@ static int rxe_mcast_add_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp,
                 }
         }
 
-        if (grp->num_qp >= rxe->attr.max_mcast_qp_attach) {
+        if (mcg->num_qp >= rxe->attr.max_mcast_qp_attach) {
                 err = -ENOMEM;
                 goto out;
         }
 
-        grp->num_qp++;
+        mcg->num_qp++;
         new_mca->qp = qp;
         atomic_inc(&qp->mcg_num);
 
-        list_add(&new_mca->qp_list, &grp->qp_list);
+        list_add(&new_mca->qp_list, &mcg->qp_list);
 
         err = 0;
 out:
-        spin_unlock_bh(&grp->mcg_lock);
+        spin_unlock_bh(&mcg->mcg_lock);
         return err;
 }
 
 static int rxe_mcast_drop_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp,
                                    union ib_gid *mgid)
 {
-        struct rxe_mcg *grp;
+        struct rxe_mcg *mcg;
         struct rxe_mca *mca, *tmp;
 
-        grp = rxe_pool_get_key(&rxe->mc_grp_pool, mgid);
-        if (!grp)
+        mcg = rxe_pool_get_key(&rxe->mc_grp_pool, mgid);
+        if (!mcg)
                 goto err1;
 
-        spin_lock_bh(&grp->mcg_lock);
+        spin_lock_bh(&mcg->mcg_lock);
 
-        list_for_each_entry_safe(mca, tmp, &grp->qp_list, qp_list) {
+        list_for_each_entry_safe(mca, tmp, &mcg->qp_list, qp_list) {
                 if (mca->qp == qp) {
                         list_del(&mca->qp_list);
-                        grp->num_qp--;
-                        if (grp->num_qp <= 0)
-                                rxe_drop_ref(grp);
+                        mcg->num_qp--;
+                        if (mcg->num_qp <= 0)
+                                rxe_drop_ref(mcg);
                         atomic_dec(&qp->mcg_num);
 
-                        spin_unlock_bh(&grp->mcg_lock);
-                        rxe_drop_ref(grp); /* ref from get_key */
+                        spin_unlock_bh(&mcg->mcg_lock);
+                        rxe_drop_ref(mcg); /* ref from get_key */
                         kfree(mca);
                         return 0;
                 }
         }
 
-        spin_unlock_bh(&grp->mcg_lock);
-        rxe_drop_ref(grp); /* ref from get_key */
+        spin_unlock_bh(&mcg->mcg_lock);
+        rxe_drop_ref(mcg); /* ref from get_key */
 err1:
         return -EINVAL;
 }
 
 void rxe_mc_cleanup(struct rxe_pool_elem *elem)
 {
-        struct rxe_mcg *grp = container_of(elem, typeof(*grp), elem);
-        struct rxe_dev *rxe = grp->rxe;
+        struct rxe_mcg *mcg = container_of(elem, typeof(*mcg), elem);
+        struct rxe_dev *rxe = mcg->rxe;
 
-        rxe_drop_key(grp);
-        rxe_mcast_delete(rxe, &grp->mgid);
+        rxe_drop_key(mcg);
+        rxe_mcast_delete(rxe, &mcg->mgid);
 }
 
 int rxe_attach_mcast(struct ib_qp *ibqp, union ib_gid *mgid, u16 mlid)
@@ -177,16 +177,16 @@ int rxe_attach_mcast(struct ib_qp *ibqp, union ib_gid *mgid, u16 mlid)
         int err;
         struct rxe_dev *rxe = to_rdev(ibqp->device);
         struct rxe_qp *qp = to_rqp(ibqp);
-        struct rxe_mcg *grp;
+        struct rxe_mcg *mcg;
 
-        /* takes a ref on grp if successful */
-        err = rxe_mcast_get_grp(rxe, mgid, &grp);
+        /* takes a ref on mcg if successful */
+        err = rxe_mcast_get_grp(rxe, mgid, &mcg);
         if (err)
                 return err;
 
-        err = rxe_mcast_add_grp_elem(rxe, qp, grp);
+        err = rxe_mcast_add_grp_elem(rxe, qp, mcg);
 
-        rxe_drop_ref(grp);
+        rxe_drop_ref(mcg);
         return err;
 }
diff --git a/drivers/infiniband/sw/rxe/rxe_recv.c b/drivers/infiniband/sw/rxe/rxe_recv.c
index 7ff6b53555f4..814a002b8911 100644
--- a/drivers/infiniband/sw/rxe/rxe_recv.c
+++ b/drivers/infiniband/sw/rxe/rxe_recv.c
@@ -234,7 +234,7 @@ static void rxe_rcv_mcast_pkt(struct rxe_dev *rxe, struct sk_buff *skb)
 {
         struct rxe_pkt_info *pkt = SKB_TO_PKT(skb);
         struct rxe_mcg *mcg;
-        struct rxe_mca *mce;
+        struct rxe_mca *mca;
         struct rxe_qp *qp;
         union ib_gid dgid;
         int err;
@@ -257,8 +257,8 @@ static void rxe_rcv_mcast_pkt(struct rxe_dev *rxe, struct sk_buff *skb)
          * single QP happen and just move on and try
          * the rest of them on the list
          */
-        list_for_each_entry(mce, &mcg->qp_list, qp_list) {
-                qp = mce->qp;
+        list_for_each_entry(mca, &mcg->qp_list, qp_list) {
+                qp = mca->qp;
 
                 /* validate qp for incoming packet */
                 err = check_type_state(rxe, pkt, qp);
@@ -273,7 +273,7 @@ static void rxe_rcv_mcast_pkt(struct rxe_dev *rxe, struct sk_buff *skb)
          * skb and pass to the QP. Pass the original skb to
          * the last QP in the list.
          */
-        if (mce->qp_list.next != &mcg->qp_list) {
+        if (mca->qp_list.next != &mcg->qp_list) {
                 struct sk_buff *cskb;
                 struct rxe_pkt_info *cpkt;
From patchwork Thu Jan 27 21:37:38 2022
[97.99.248.255]) by smtp.googlemail.com with ESMTPSA id v32sm3994677ooj.45.2022.01.27.13.38.11 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 27 Jan 2022 13:38:11 -0800 (PST) From: Bob Pearson To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org Cc: Bob Pearson Subject: [RFC PATCH v9 09/26] RDMA/rxe: Introduce RXECB(skb) Date: Thu, 27 Jan 2022 15:37:38 -0600 Message-Id: <20220127213755.31697-10-rpearsonhpe@gmail.com> X-Mailer: git-send-email 2.32.0 In-Reply-To: <20220127213755.31697-1-rpearsonhpe@gmail.com> References: <20220127213755.31697-1-rpearsonhpe@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org Add a #define RXECB(skb) to rxe_hdr.h as a short cut to refer to single members of rxe_pkt_info which is stored in skb->cb in the receive path. Use this to make some cleanups in rxe_recv.c Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe_hdr.h | 3 ++ drivers/infiniband/sw/rxe/rxe_recv.c | 55 +++++++++++++--------------- 2 files changed, 29 insertions(+), 29 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe_hdr.h b/drivers/infiniband/sw/rxe/rxe_hdr.h index e432f9e37795..2a85d1e40e6a 100644 --- a/drivers/infiniband/sw/rxe/rxe_hdr.h +++ b/drivers/infiniband/sw/rxe/rxe_hdr.h @@ -36,6 +36,9 @@ static inline struct sk_buff *PKT_TO_SKB(struct rxe_pkt_info *pkt) return container_of((void *)pkt, struct sk_buff, cb); } +/* alternative to access a single element of rxe_pkt_info from skb */ +#define RXECB(skb) ((struct rxe_pkt_info *)((skb)->cb)) + /* * IBA header types and methods * diff --git a/drivers/infiniband/sw/rxe/rxe_recv.c b/drivers/infiniband/sw/rxe/rxe_recv.c index 814a002b8911..10020103ea4a 100644 --- a/drivers/infiniband/sw/rxe/rxe_recv.c +++ b/drivers/infiniband/sw/rxe/rxe_recv.c @@ -107,17 +107,15 @@ static int check_keys(struct rxe_dev *rxe, struct rxe_pkt_info *pkt, return -EINVAL; } -static int check_addr(struct rxe_dev *rxe, struct rxe_pkt_info *pkt, +static int check_addr(struct rxe_dev *rxe, struct sk_buff *skb, struct rxe_qp *qp) { - struct sk_buff *skb = PKT_TO_SKB(pkt); - if (qp_type(qp) != IB_QPT_RC && qp_type(qp) != IB_QPT_UC) goto done; - if (unlikely(pkt->port_num != qp->attr.port_num)) { + if (unlikely(RXECB(skb)->port_num != qp->attr.port_num)) { pr_warn_ratelimited("port %d != qp port %d\n", - pkt->port_num, qp->attr.port_num); + RXECB(skb)->port_num, qp->attr.port_num); goto err1; } @@ -167,8 +165,9 @@ static int check_addr(struct rxe_dev *rxe, struct rxe_pkt_info *pkt, return -EINVAL; } -static int hdr_check(struct rxe_pkt_info *pkt) +static int hdr_check(struct sk_buff *skb) { + struct rxe_pkt_info *pkt = RXECB(skb); struct rxe_dev *rxe = pkt->rxe; struct rxe_port *port = &rxe->port; struct rxe_qp *qp = NULL; @@ -199,7 +198,7 @@ static int hdr_check(struct rxe_pkt_info *pkt) if (unlikely(err)) goto err2; - err = check_addr(rxe, pkt, qp); + err = check_addr(rxe, skb, qp); if (unlikely(err)) goto err2; @@ -222,17 +221,19 @@ static int hdr_check(struct rxe_pkt_info *pkt) return -EINVAL; } -static inline void rxe_rcv_pkt(struct rxe_pkt_info *pkt, struct sk_buff *skb) +static inline void rxe_rcv_pkt(struct sk_buff *skb) { - if (pkt->mask & RXE_REQ_MASK) - rxe_resp_queue_pkt(pkt->qp, skb); + if (RXECB(skb)->mask & RXE_REQ_MASK) + rxe_resp_queue_pkt(RXECB(skb)->qp, skb); else - rxe_comp_queue_pkt(pkt->qp, skb); + rxe_comp_queue_pkt(RXECB(skb)->qp, skb); } -static void rxe_rcv_mcast_pkt(struct rxe_dev *rxe, struct sk_buff *skb) +static void rxe_rcv_mcast_pkt(struct sk_buff 
*skb) { - struct rxe_pkt_info *pkt = SKB_TO_PKT(skb); + struct sk_buff *s; + struct rxe_pkt_info *pkt = RXECB(skb); + struct rxe_dev *rxe = pkt->rxe; struct rxe_mcg *mcg; struct rxe_mca *mca; struct rxe_qp *qp; @@ -274,26 +275,22 @@ static void rxe_rcv_mcast_pkt(struct rxe_dev *rxe, struct sk_buff *skb) * the last QP in the list. */ if (mca->qp_list.next != &mcg->qp_list) { - struct sk_buff *cskb; - struct rxe_pkt_info *cpkt; - - cskb = skb_clone(skb, GFP_ATOMIC); - if (unlikely(!cskb)) + s = skb_clone(skb, GFP_ATOMIC); + if (unlikely(!s)) continue; if (WARN_ON(!ib_device_try_get(&rxe->ib_dev))) { - kfree_skb(cskb); + kfree_skb(s); break; } - cpkt = SKB_TO_PKT(cskb); - cpkt->qp = qp; + RXECB(s)->qp = qp; rxe_add_ref(qp); - rxe_rcv_pkt(cpkt, cskb); + rxe_rcv_pkt(s); } else { - pkt->qp = qp; + RXECB(skb)->qp = qp; rxe_add_ref(qp); - rxe_rcv_pkt(pkt, skb); + rxe_rcv_pkt(skb); skb = NULL; /* mark consumed */ } } @@ -326,7 +323,7 @@ static void rxe_rcv_mcast_pkt(struct rxe_dev *rxe, struct sk_buff *skb) */ static int rxe_chk_dgid(struct rxe_dev *rxe, struct sk_buff *skb) { - struct rxe_pkt_info *pkt = SKB_TO_PKT(skb); + struct rxe_pkt_info *pkt = RXECB(skb); const struct ib_gid_attr *gid_attr; union ib_gid dgid; union ib_gid *pdgid; @@ -359,7 +356,7 @@ static int rxe_chk_dgid(struct rxe_dev *rxe, struct sk_buff *skb) void rxe_rcv(struct sk_buff *skb) { int err; - struct rxe_pkt_info *pkt = SKB_TO_PKT(skb); + struct rxe_pkt_info *pkt = RXECB(skb); struct rxe_dev *rxe = pkt->rxe; if (unlikely(skb->len < RXE_BTH_BYTES)) @@ -378,7 +375,7 @@ void rxe_rcv(struct sk_buff *skb) if (unlikely(skb->len < header_size(pkt))) goto drop; - err = hdr_check(pkt); + err = hdr_check(skb); if (unlikely(err)) goto drop; @@ -389,9 +386,9 @@ void rxe_rcv(struct sk_buff *skb) rxe_counter_inc(rxe, RXE_CNT_RCVD_PKTS); if (unlikely(bth_qpn(pkt) == IB_MULTICAST_QPN)) - rxe_rcv_mcast_pkt(rxe, skb); + rxe_rcv_mcast_pkt(skb); else - rxe_rcv_pkt(pkt, skb); + rxe_rcv_pkt(skb); return; From patchwork Thu Jan 27 21:37:39 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bob Pearson X-Patchwork-Id: 12727446 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 18B75C433FE for ; Thu, 27 Jan 2022 21:38:21 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1344433AbiA0ViT (ORCPT ); Thu, 27 Jan 2022 16:38:19 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37316 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1344420AbiA0ViN (ORCPT ); Thu, 27 Jan 2022 16:38:13 -0500 Received: from mail-oi1-x229.google.com (mail-oi1-x229.google.com [IPv6:2607:f8b0:4864:20::229]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 50DCEC061749 for ; Thu, 27 Jan 2022 13:38:13 -0800 (PST) Received: by mail-oi1-x229.google.com with SMTP id u129so8536321oib.4 for ; Thu, 27 Jan 2022 13:38:13 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=Xcskru2ZEKc8psByd3UYRhdd+k0OhRhy/pBOrm7y3Do=; b=ClTmqVtEhibVEqVQRg2oOjnJOFa2jyKhAEvhCchPjfVRiwUS/xlYH7Oz8MkGkO6c2m rx7rvIDGX1pEWwNysW5nI4FfetN6qFP7ZmdQPYzoG2Sf/YTDfk3GJuKF3G0vzgpUF1cL 
From: Bob Pearson To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org Cc: Bob Pearson Subject: [RFC PATCH v9 10/26] RDMA/rxe: Split rxe_rcv_mcast_pkt into two phases Date: Thu, 27 Jan 2022 15:37:39 -0600 Message-Id: <20220127213755.31697-11-rpearsonhpe@gmail.com> X-Mailer: git-send-email 2.32.0 In-Reply-To: <20220127213755.31697-1-rpearsonhpe@gmail.com> References: <20220127213755.31697-1-rpearsonhpe@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org Currently rxe_rcv_mcast_pkt performs most of its work under the mcg->mcg_lock and calls into rxe_rcv, which queues the packets to the responder and completer tasklets while still holding the lock; this is a very bad idea. This patch walks the qp_list in mcg and copies the qp addresses to a dynamically allocated array under the lock, but does the rest of the work without holding the lock. The critical section is now very small.
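The two-phase shape used here is a general pattern: take a snapshot of the membership inside a short critical section, then do the slow per-member work with the lock dropped. A minimal user-space C sketch of the idea follows; all names are hypothetical, a pthread mutex stands in for the driver's lock, and the reference counting that a real implementation needs to keep members alive across phase 2 is elided.

#include <pthread.h>
#include <stdatomic.h>
#include <stdlib.h>

struct member {
	struct member *next;
	int id;
};

static struct member *head;	/* list guarded by list_lock */
static atomic_int nmembers;	/* current number of members */
static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;

static void deliver(struct member *m)
{
	(void)m;	/* stand-in for the slow per-member work */
}

static void deliver_to_all(void)
{
	struct member **snap, *m;
	int i, n = 0, nmax;

	/* size and allocate the snapshot before locking, with a
	 * little slack in case members join in the meantime
	 */
	nmax = atomic_load(&nmembers) + 2;
	snap = malloc(nmax * sizeof(*snap));
	if (!snap)
		return;

	/* phase 1: short critical section that only copies pointers */
	pthread_mutex_lock(&list_lock);
	for (m = head; m && n < nmax; m = m->next)
		snap[n++] = m;
	pthread_mutex_unlock(&list_lock);

	/* phase 2: slow delivery work runs with the lock dropped */
	for (i = 0; i < n; i++)
		deliver(snap[i]);

	free(snap);
}

Missing a member that attaches concurrently is harmless here for the same reason the patch gives: for an unreliable datagram service it only changes the instant at which the packet is considered to have been received.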
Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe_mcast.c | 11 +++++---- drivers/infiniband/sw/rxe/rxe_recv.c | 33 +++++++++++++++++++++++---- drivers/infiniband/sw/rxe/rxe_verbs.h | 2 +- 3 files changed, 35 insertions(+), 11 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe_mcast.c b/drivers/infiniband/sw/rxe/rxe_mcast.c index ed1b9ca65da3..3b66019fc26d 100644 --- a/drivers/infiniband/sw/rxe/rxe_mcast.c +++ b/drivers/infiniband/sw/rxe/rxe_mcast.c @@ -113,16 +113,16 @@ static int rxe_mcast_add_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp, } } - if (mcg->num_qp >= rxe->attr.max_mcast_qp_attach) { + if (atomic_read(&mcg->qp_num) >= rxe->attr.max_mcast_qp_attach) { err = -ENOMEM; goto out; } - mcg->num_qp++; + atomic_inc(&mcg->qp_num); new_mca->qp = qp; atomic_inc(&qp->mcg_num); - list_add(&new_mca->qp_list, &mcg->qp_list); + list_add_tail(&new_mca->qp_list, &mcg->qp_list); err = 0; out: @@ -135,6 +135,7 @@ static int rxe_mcast_drop_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp, { struct rxe_mcg *mcg; struct rxe_mca *mca, *tmp; + int n; mcg = rxe_pool_get_key(&rxe->mc_grp_pool, mgid); if (!mcg) @@ -145,8 +146,8 @@ static int rxe_mcast_drop_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp, list_for_each_entry_safe(mca, tmp, &mcg->qp_list, qp_list) { if (mca->qp == qp) { list_del(&mca->qp_list); - mcg->num_qp--; - if (mcg->num_qp <= 0) + n = atomic_dec_return(&mcg->qp_num); + if (n <= 0) rxe_drop_ref(mcg); atomic_dec(&qp->mcg_num); diff --git a/drivers/infiniband/sw/rxe/rxe_recv.c b/drivers/infiniband/sw/rxe/rxe_recv.c index 10020103ea4a..41571c6b7d98 100644 --- a/drivers/infiniband/sw/rxe/rxe_recv.c +++ b/drivers/infiniband/sw/rxe/rxe_recv.c @@ -229,6 +229,11 @@ static inline void rxe_rcv_pkt(struct sk_buff *skb) rxe_comp_queue_pkt(RXECB(skb)->qp, skb); } +/* split processing of the qp list into two stages. + * first just make a simple linear array from the + * current list while holding the lock and then + * process each qp without holding the lock. + */ static void rxe_rcv_mcast_pkt(struct sk_buff *skb) { struct sk_buff *s; @@ -237,7 +242,9 @@ static void rxe_rcv_mcast_pkt(struct sk_buff *skb) struct rxe_mcg *mcg; struct rxe_mca *mca; struct rxe_qp *qp; + struct rxe_qp **qp_array; union ib_gid dgid; + int n, nmax; int err; if (skb->protocol == htons(ETH_P_IP)) @@ -251,15 +258,31 @@ static void rxe_rcv_mcast_pkt(struct sk_buff *skb) if (!mcg) goto drop; /* mcast group not registered */ + /* this is the current number of qp's attached to mcg plus a + * little room in case new qp's are attached. It isn't wrong + * to miss some qp's since it is just a matter of precisely + * when the packet is assumed to be received. + */ + nmax = atomic_read(&mcg->qp_num) + 2; + qp_array = kmalloc_array(nmax, sizeof(qp), GFP_KERNEL); + + n = 0; spin_lock_bh(&mcg->mcg_lock); + list_for_each_entry(mca, &mcg->qp_list, qp_list) { + qp_array[n++] = mca->qp; + if (n == nmax) + break; + } + spin_unlock_bh(&mcg->mcg_lock); + nmax = n; /* this is unreliable datagram service so we let * failures to deliver a multicast packet to a * single QP happen and just move on and try * the rest of them on the list */ - list_for_each_entry(mca, &mcg->qp_list, qp_list) { - qp = mca->qp; + for (n = 0; n < nmax; n++) { + qp = qp_array[n]; /* validate qp for incoming packet */ err = check_type_state(rxe, pkt, qp); @@ -274,8 +297,8 @@ static void rxe_rcv_mcast_pkt(struct sk_buff *skb) * skb and pass to the QP. Pass the original skb to * the last QP in the list. 
*/ - if (mca->qp_list.next != &mcg->qp_list) { - s = skb_clone(skb, GFP_ATOMIC); + if (n < nmax - 1) { + s = skb_clone(skb, GFP_KERNEL); if (unlikely(!s)) continue; @@ -295,7 +318,7 @@ static void rxe_rcv_mcast_pkt(struct sk_buff *skb) } } - spin_unlock_bh(&mcg->mcg_lock); + kfree(qp_array); rxe_drop_ref(mcg); /* drop ref from rxe_pool_get_key. */ diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h index 02745d51c163..d65c358798c6 100644 --- a/drivers/infiniband/sw/rxe/rxe_verbs.h +++ b/drivers/infiniband/sw/rxe/rxe_verbs.h @@ -356,8 +356,8 @@ struct rxe_mcg { spinlock_t mcg_lock; /* guard group */ struct rxe_dev *rxe; struct list_head qp_list; + atomic_t qp_num; union ib_gid mgid; - int num_qp; u32 qkey; u16 pkey; }; From patchwork Thu Jan 27 21:37:40 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bob Pearson X-Patchwork-Id: 12727450 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 95771C4321E for ; Thu, 27 Jan 2022 21:38:22 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1344423AbiA0ViV (ORCPT ); Thu, 27 Jan 2022 16:38:21 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37378 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1344432AbiA0ViT (ORCPT ); Thu, 27 Jan 2022 16:38:19 -0500 Received: from mail-oi1-x234.google.com (mail-oi1-x234.google.com [IPv6:2607:f8b0:4864:20::234]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D4E41C06174E for ; Thu, 27 Jan 2022 13:38:18 -0800 (PST) Received: by mail-oi1-x234.google.com with SMTP id u129so8536772oib.4 for ; Thu, 27 Jan 2022 13:38:18 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=8Qk2vgeclDXUdo7237r2VFHmcnL2+s+zUxnDBrX0loQ=; b=dQfXCJxdNahfnh9Eqp+/o9zN2ufR/jsw3tZs9MfWhev2ZQQk7hGCRpHyHytFyyWqel 7HMYFLyfzIThEGd6XyXN651hdl6OwiaiLP4XxNO04WDo+L/MEcqKLJ/uEKz6rkSGv4XE strjb7WxHY4l37IvjV38OMMawiBD+ykuTSoB56ySmWmE+2GqZl0L0azkGKikVBLci2RW eZ1guBU9RIhFi5bWFiAZuj3YwRsz+x5KFhB1k7g53dqmaZu9tU5o+FyDIdd4G9kgDLH8 V47Xktqf0ZXe/L6Xgy1ABm6DsIbgMbRsRcAPPyKT8WtMAzbSY7gW7XQWI6KDdvnXBXv1 wsFQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=8Qk2vgeclDXUdo7237r2VFHmcnL2+s+zUxnDBrX0loQ=; b=d9Iu9TQzNlJJIfB71eDvqy/wY7UNHzxsOM8AqEp5zKF80jgoKfuEByl8u6TP2UOPn2 WuA5gYxwD9xhV2VcWXpl/50dFoQTUx72NAvwuF5fJjwRFVSuP+Oji5w82KbuPOnDhKMk BfJtPqwm70DfmBNnNO6lQVJUvSy1HxJMkVOu6W59j/m0LVUCY92E4B1verx7aL9SuCQF gjsbUGly02YCHAZJJtZIECBvP6OxAwMDIV7z7YuikjTbSkJ+BiviN6/KqTC2f2HdaUEd BEXOMPow/ptBASoP+OXsE6foVNMpPufsGeYvcD2ypU3D5HIUWWtRmUYv+KINA1KOVqN+ Y+TA== X-Gm-Message-State: AOAM532dPfG4jiTKJlL4+6pLI2s84tHB5fjBCBx6XKqo+hQrgVm9DiYK ou7rzd+zuG9pK8mCZbiXIw4= X-Google-Smtp-Source: ABdhPJykZk1CatuWq4HlHJSshOdys5l2JBiVroJdcFCO4Is9xR1+Kcx2bRHDW4tdRFGfD44jtAiemw== X-Received: by 2002:a05:6808:10c7:: with SMTP id s7mr3360073ois.332.1643319493363; Thu, 27 Jan 2022 13:38:13 -0800 (PST) Received: from ubuntu-21.tx.rr.com (097-099-248-255.res.spectrum.com. 
[97.99.248.255]) by smtp.googlemail.com with ESMTPSA id v32sm3994677ooj.45.2022.01.27.13.38.12 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 27 Jan 2022 13:38:12 -0800 (PST) From: Bob Pearson To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org Cc: Bob Pearson Subject: [RFC PATCH v9 11/26] RDMA/rxe: Replace locks by rxe->mcg_lock Date: Thu, 27 Jan 2022 15:37:40 -0600 Message-Id: <20220127213755.31697-12-rpearsonhpe@gmail.com> X-Mailer: git-send-email 2.32.0 In-Reply-To: <20220127213755.31697-1-rpearsonhpe@gmail.com> References: <20220127213755.31697-1-rpearsonhpe@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org Starting to decouple mcg from rxe pools, replace the spin lock mcg->mcg_lock and the write lock pool->pool_lock by rxe->mcg_lock. Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe.c | 2 ++ drivers/infiniband/sw/rxe/rxe_mcast.c | 25 ++++++++++++------------- drivers/infiniband/sw/rxe/rxe_recv.c | 4 ++-- drivers/infiniband/sw/rxe/rxe_verbs.h | 3 ++- 4 files changed, 18 insertions(+), 16 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe.c b/drivers/infiniband/sw/rxe/rxe.c index c55736e441e7..46a07e2d9dcf 100644 --- a/drivers/infiniband/sw/rxe/rxe.c +++ b/drivers/infiniband/sw/rxe/rxe.c @@ -198,6 +198,8 @@ static int rxe_init(struct rxe_dev *rxe) if (err) return err; + spin_lock_init(&rxe->mcg_lock); + /* init pending mmap list */ spin_lock_init(&rxe->mmap_offset_lock); spin_lock_init(&rxe->pending_lock); diff --git a/drivers/infiniband/sw/rxe/rxe_mcast.c b/drivers/infiniband/sw/rxe/rxe_mcast.c index 3b66019fc26d..62ace10206b0 100644 --- a/drivers/infiniband/sw/rxe/rxe_mcast.c +++ b/drivers/infiniband/sw/rxe/rxe_mcast.c @@ -25,7 +25,7 @@ static int rxe_mcast_delete(struct rxe_dev *rxe, union ib_gid *mgid) return dev_mc_del(rxe->ndev, ll_addr); } -/* caller should hold mc_grp_pool->pool_lock */ +/* caller should hold mc_grp_rxe->mcg_lock */ static struct rxe_mcg *create_grp(struct rxe_dev *rxe, struct rxe_pool *pool, union ib_gid *mgid) @@ -39,7 +39,6 @@ static struct rxe_mcg *create_grp(struct rxe_dev *rxe, rxe_add_ref(mcg); INIT_LIST_HEAD(&mcg->qp_list); - spin_lock_init(&mcg->mcg_lock); mcg->rxe = rxe; rxe_add_key_locked(mcg, mgid); @@ -63,7 +62,7 @@ static int rxe_mcast_get_grp(struct rxe_dev *rxe, union ib_gid *mgid, if (rxe->attr.max_mcast_qp_attach == 0) return -EINVAL; - write_lock_bh(&pool->pool_lock); + spin_lock_bh(&rxe->mcg_lock); mcg = rxe_pool_get_key_locked(pool, mgid); if (mcg) @@ -71,13 +70,13 @@ static int rxe_mcast_get_grp(struct rxe_dev *rxe, union ib_gid *mgid, mcg = create_grp(rxe, pool, mgid); if (IS_ERR(mcg)) { - write_unlock_bh(&pool->pool_lock); + spin_unlock_bh(&rxe->mcg_lock); err = PTR_ERR(mcg); return err; } done: - write_unlock_bh(&pool->pool_lock); + spin_unlock_bh(&rxe->mcg_lock); *mcgp = mcg; return 0; } @@ -89,21 +88,21 @@ static int rxe_mcast_add_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp, struct rxe_mca *mca, *new_mca; /* check to see if the qp is already a member of the group */ - spin_lock_bh(&mcg->mcg_lock); + spin_lock_bh(&rxe->mcg_lock); list_for_each_entry(mca, &mcg->qp_list, qp_list) { if (mca->qp == qp) { - spin_unlock_bh(&mcg->mcg_lock); + spin_unlock_bh(&rxe->mcg_lock); return 0; } } - spin_unlock_bh(&mcg->mcg_lock); + spin_unlock_bh(&rxe->mcg_lock); /* speculative alloc new mca without using GFP_ATOMIC */ new_mca = kzalloc(sizeof(*mca), GFP_KERNEL); if (!new_mca) return -ENOMEM; - spin_lock_bh(&mcg->mcg_lock); + 
spin_lock_bh(&rxe->mcg_lock); /* re-check to see if someone else just attached qp */ list_for_each_entry(mca, &mcg->qp_list, qp_list) { if (mca->qp == qp) { @@ -126,7 +125,7 @@ static int rxe_mcast_add_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp, err = 0; out: - spin_unlock_bh(&mcg->mcg_lock); + spin_unlock_bh(&rxe->mcg_lock); return err; } @@ -141,7 +140,7 @@ static int rxe_mcast_drop_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp, if (!mcg) goto err1; - spin_lock_bh(&mcg->mcg_lock); + spin_lock_bh(&rxe->mcg_lock); list_for_each_entry_safe(mca, tmp, &mcg->qp_list, qp_list) { if (mca->qp == qp) { @@ -151,14 +150,14 @@ static int rxe_mcast_drop_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp, rxe_drop_ref(mcg); atomic_dec(&qp->mcg_num); - spin_unlock_bh(&mcg->mcg_lock); + spin_unlock_bh(&rxe->mcg_lock); rxe_drop_ref(mcg); /* ref from get_key */ kfree(mca); return 0; } } - spin_unlock_bh(&mcg->mcg_lock); + spin_unlock_bh(&rxe->mcg_lock); rxe_drop_ref(mcg); /* ref from get_key */ err1: return -EINVAL; diff --git a/drivers/infiniband/sw/rxe/rxe_recv.c b/drivers/infiniband/sw/rxe/rxe_recv.c index 41571c6b7d98..11246589fda7 100644 --- a/drivers/infiniband/sw/rxe/rxe_recv.c +++ b/drivers/infiniband/sw/rxe/rxe_recv.c @@ -267,13 +267,13 @@ static void rxe_rcv_mcast_pkt(struct sk_buff *skb) qp_array = kmalloc_array(nmax, sizeof(qp), GFP_KERNEL); n = 0; - spin_lock_bh(&mcg->mcg_lock); + spin_lock_bh(&rxe->mcg_lock); list_for_each_entry(mca, &mcg->qp_list, qp_list) { qp_array[n++] = mca->qp; if (n == nmax) break; } - spin_unlock_bh(&mcg->mcg_lock); + spin_unlock_bh(&rxe->mcg_lock); nmax = n; /* this is unreliable datagram service so we let diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h index d65c358798c6..b72f8f09d984 100644 --- a/drivers/infiniband/sw/rxe/rxe_verbs.h +++ b/drivers/infiniband/sw/rxe/rxe_verbs.h @@ -353,7 +353,6 @@ struct rxe_mw { struct rxe_mcg { struct rxe_pool_elem elem; - spinlock_t mcg_lock; /* guard group */ struct rxe_dev *rxe; struct list_head qp_list; atomic_t qp_num; @@ -397,6 +396,8 @@ struct rxe_dev { struct rxe_pool mw_pool; struct rxe_pool mc_grp_pool; + spinlock_t mcg_lock; /* guard multicast groups */ + spinlock_t pending_lock; /* guard pending_mmaps */ struct list_head pending_mmaps; From patchwork Thu Jan 27 21:37:41 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bob Pearson X-Patchwork-Id: 12727444 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 24933C433F5 for ; Thu, 27 Jan 2022 21:38:19 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1344413AbiA0ViR (ORCPT ); Thu, 27 Jan 2022 16:38:17 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37342 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1344436AbiA0ViQ (ORCPT ); Thu, 27 Jan 2022 16:38:16 -0500 Received: from mail-oi1-x230.google.com (mail-oi1-x230.google.com [IPv6:2607:f8b0:4864:20::230]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D0800C06175A for ; Thu, 27 Jan 2022 13:38:14 -0800 (PST) Received: by mail-oi1-x230.google.com with SMTP id g205so8542403oif.5 for ; Thu, 27 Jan 2022 13:38:14 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; 
From: Bob Pearson To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org Cc: Bob Pearson Subject: [RFC PATCH v9 12/26] RDMA/rxe: Replace pool key by rxe->mcg_tree Date: Thu, 27 Jan 2022 15:37:41 -0600 Message-Id: <20220127213755.31697-13-rpearsonhpe@gmail.com> X-Mailer: git-send-email 2.32.0 In-Reply-To: <20220127213755.31697-1-rpearsonhpe@gmail.com> References: <20220127213755.31697-1-rpearsonhpe@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org Continuing to decouple mcg from rxe pools, create red-black tree code in rxe_mcast.c to index the mcg objects by their mgid.
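The rxe_get_mcg() added below also shows a common kernel idiom worth calling out: allocate the new object up front with a sleeping allocator, then take the lock and re-check whether a concurrent caller already inserted the same key, discarding the speculative allocation if so. A stripped-down user-space sketch of that idiom follows; the names are hypothetical, and a plain linked list with a pthread mutex stands in for the mgid-keyed red-black tree under rxe->mcg_lock.

#include <pthread.h>
#include <stdlib.h>
#include <string.h>

struct node {
	struct node *next;
	unsigned char key[16];	/* e.g. a 128-bit multicast GID */
};

static struct node *table;	/* guarded by table_lock */
static pthread_mutex_t table_lock = PTHREAD_MUTEX_INITIALIZER;

/* caller must hold table_lock */
static struct node *__lookup(const unsigned char *key)
{
	struct node *n;

	for (n = table; n; n = n->next)
		if (memcmp(n->key, key, sizeof(n->key)) == 0)
			return n;
	return NULL;
}

static struct node *get_or_create(const unsigned char *key)
{
	struct node *n, *tmp;

	/* fast path: key already present */
	pthread_mutex_lock(&table_lock);
	n = __lookup(key);
	pthread_mutex_unlock(&table_lock);
	if (n)
		return n;

	/* speculative allocation while no lock is held */
	n = calloc(1, sizeof(*n));
	if (!n)
		return NULL;
	memcpy(n->key, key, sizeof(n->key));

	pthread_mutex_lock(&table_lock);
	tmp = __lookup(key);	/* re-check: another thread may have won */
	if (tmp) {
		pthread_mutex_unlock(&table_lock);
		free(n);	/* lose the race gracefully */
		return tmp;
	}
	n->next = table;
	table = n;
	pthread_mutex_unlock(&table_lock);
	return n;
}

Allocating before taking the lock is what lets the driver use GFP_KERNEL instead of GFP_ATOMIC inside the spinlock; the price is the re-check and an occasional wasted allocation.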
Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe.c | 1 + drivers/infiniband/sw/rxe/rxe_loc.h | 3 +- drivers/infiniband/sw/rxe/rxe_mcast.c | 187 +++++++++++++++++++++----- drivers/infiniband/sw/rxe/rxe_recv.c | 4 +- drivers/infiniband/sw/rxe/rxe_verbs.h | 3 + 5 files changed, 159 insertions(+), 39 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe.c b/drivers/infiniband/sw/rxe/rxe.c index 46a07e2d9dcf..310e184ae9e8 100644 --- a/drivers/infiniband/sw/rxe/rxe.c +++ b/drivers/infiniband/sw/rxe/rxe.c @@ -199,6 +199,7 @@ static int rxe_init(struct rxe_dev *rxe) return err; spin_lock_init(&rxe->mcg_lock); + rxe->mcg_tree = RB_ROOT; /* init pending mmap list */ spin_lock_init(&rxe->mmap_offset_lock); diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h index af40e3c212fb..d9faf3a1ee61 100644 --- a/drivers/infiniband/sw/rxe/rxe_loc.h +++ b/drivers/infiniband/sw/rxe/rxe_loc.h @@ -40,9 +40,10 @@ void rxe_cq_disable(struct rxe_cq *cq); void rxe_cq_cleanup(struct rxe_pool_elem *arg); /* rxe_mcast.c */ -void rxe_mc_cleanup(struct rxe_pool_elem *arg); +struct rxe_mcg *rxe_lookup_mcg(struct rxe_dev *rxe, union ib_gid *mgid); int rxe_attach_mcast(struct ib_qp *ibqp, union ib_gid *mgid, u16 mlid); int rxe_detach_mcast(struct ib_qp *ibqp, union ib_gid *mgid, u16 mlid); +void rxe_mc_cleanup(struct rxe_pool_elem *arg); /* rxe_mmap.c */ struct rxe_mmap_info { diff --git a/drivers/infiniband/sw/rxe/rxe_mcast.c b/drivers/infiniband/sw/rxe/rxe_mcast.c index 62ace10206b0..4c3eb9c723b4 100644 --- a/drivers/infiniband/sw/rxe/rxe_mcast.c +++ b/drivers/infiniband/sw/rxe/rxe_mcast.c @@ -25,60 +25,172 @@ static int rxe_mcast_delete(struct rxe_dev *rxe, union ib_gid *mgid) return dev_mc_del(rxe->ndev, ll_addr); } -/* caller should hold mc_grp_rxe->mcg_lock */ -static struct rxe_mcg *create_grp(struct rxe_dev *rxe, - struct rxe_pool *pool, - union ib_gid *mgid) +/** + * __rxe_insert_mcg - insert an mcg into red-black tree (rxe->mcg_tree) + * @mcg: mcast group object with an embedded red-black tree node + * + * Context: caller must hold a reference to mcg and rxe->mcg_lock and + * is responsible to avoid adding the same mcg twice to the tree. 
+ */ +static void __rxe_insert_mcg(struct rxe_mcg *mcg) { - int err; + struct rb_root *tree = &mcg->rxe->mcg_tree; + struct rb_node **link = &tree->rb_node; + struct rb_node *node = NULL; + struct rxe_mcg *tmp; + int cmp; + + while (*link) { + node = *link; + tmp = rb_entry(node, struct rxe_mcg, node); + + cmp = memcmp(&tmp->mgid, &mcg->mgid, sizeof(mcg->mgid)); + if (cmp > 0) + link = &(*link)->rb_left; + else + link = &(*link)->rb_right; + } + + rb_link_node(&mcg->node, node, link); + rb_insert_color(&mcg->node, tree); +} + +/** + * __rxe_remove_mcg - remove an mcg from red-black tree holding lock + * @mcg: mcast group object with an embedded red-black tree node + * + * Context: caller must hold a reference to mcg and rxe->mcg_lock + */ +static void __rxe_remove_mcg(struct rxe_mcg *mcg) +{ + rb_erase(&mcg->node, &mcg->rxe->mcg_tree); +} + +/** + * __rxe_lookup_mcg - lookup mcg in rxe->mcg_tree while holding lock + * @rxe: rxe device object + * @mgid: multicast IP address + * + * Context: caller must hold rxe->mcg_lock + * Returns: mcg on success and takes a ref to mcg else NULL + */ +static struct rxe_mcg *__rxe_lookup_mcg(struct rxe_dev *rxe, + union ib_gid *mgid) +{ + struct rb_root *tree = &rxe->mcg_tree; struct rxe_mcg *mcg; + struct rb_node *node; + int cmp; - mcg = rxe_alloc_locked(&rxe->mc_grp_pool); - if (!mcg) - return ERR_PTR(-ENOMEM); - rxe_add_ref(mcg); + node = tree->rb_node; - INIT_LIST_HEAD(&mcg->qp_list); - mcg->rxe = rxe; - rxe_add_key_locked(mcg, mgid); + while (node) { + mcg = rb_entry(node, struct rxe_mcg, node); - err = rxe_mcast_add(rxe, mgid); - if (unlikely(err)) { - rxe_drop_key_locked(mcg); - rxe_drop_ref(mcg); - return ERR_PTR(err); + cmp = memcmp(&mcg->mgid, mgid, sizeof(*mgid)); + + if (cmp > 0) + node = node->rb_left; + else if (cmp < 0) + node = node->rb_right; + else + break; } - return mcg; + if (node) { + rxe_add_ref(mcg); + return mcg; + } + + return NULL; } -static int rxe_mcast_get_grp(struct rxe_dev *rxe, union ib_gid *mgid, - struct rxe_mcg **mcgp) +/** + * rxe_lookup_mcg - lookup up mcg in red-back tree + * @rxe: rxe device object + * @mgid: multicast IP address + * + * Returns: mcg if found else NULL + */ +struct rxe_mcg *rxe_lookup_mcg(struct rxe_dev *rxe, union ib_gid *mgid) { - int err; struct rxe_mcg *mcg; + + spin_lock_bh(&rxe->mcg_lock); + mcg = __rxe_lookup_mcg(rxe, mgid); + spin_unlock_bh(&rxe->mcg_lock); + + return mcg; +} + +/** + * rxe_get_mcg - lookup or allocate a mcg + * @rxe: rxe device object + * @mgid: multicast IP address + * @mcgp: address of returned mcg value + * + * Adds one ref if mcg already exists else add a second reference + * which is dropped when qp_num goes to zero. 
+ * + * Returns: 0 and sets *mcgp to mcg on success else an error + */ +static int rxe_get_mcg(struct rxe_dev *rxe, union ib_gid *mgid, + struct rxe_mcg **mcgp) +{ + struct rxe_mcg *mcg, *tmp; + int ret; struct rxe_pool *pool = &rxe->mc_grp_pool; - if (rxe->attr.max_mcast_qp_attach == 0) + if (rxe->attr.max_mcast_grp == 0) return -EINVAL; - spin_lock_bh(&rxe->mcg_lock); + /* check to see if mcg already exists */ + mcg = rxe_lookup_mcg(rxe, mgid); + if (mcg) { + *mcgp = mcg; + return 0; + } - mcg = rxe_pool_get_key_locked(pool, mgid); - if (mcg) - goto done; + /* speculative alloc of mcg without using GFP_ATOMIC */ + mcg = rxe_alloc(pool); + if (!mcg) + return -ENOMEM; - mcg = create_grp(rxe, pool, mgid); - if (IS_ERR(mcg)) { + spin_lock_bh(&rxe->mcg_lock); + /* re-check to see if someone else just added it */ + tmp = __rxe_lookup_mcg(rxe, mgid); + if (tmp) { spin_unlock_bh(&rxe->mcg_lock); - err = PTR_ERR(mcg); - return err; + rxe_drop_ref(mcg); + mcg = tmp; + goto out; } -done: + if (atomic_inc_return(&rxe->mcg_num) > rxe->attr.max_mcast_grp) + goto err_dec; + + ret = rxe_mcast_add(rxe, mgid); + if (ret) + goto err_out; + + rxe_add_ref(mcg); + mcg->rxe = rxe; + memcpy(&mcg->mgid, mgid, sizeof(*mgid)); + INIT_LIST_HEAD(&mcg->qp_list); + atomic_inc(&rxe->mcg_num); + __rxe_insert_mcg(mcg); spin_unlock_bh(&rxe->mcg_lock); +out: *mcgp = mcg; return 0; + +err_dec: + atomic_dec(&rxe->mcg_num); + ret = -ENOMEM; +err_out: + spin_unlock_bh(&rxe->mcg_lock); + rxe_drop_ref(mcg); + return ret; } static int rxe_mcast_add_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp, @@ -136,7 +248,7 @@ static int rxe_mcast_drop_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp, struct rxe_mca *mca, *tmp; int n; - mcg = rxe_pool_get_key(&rxe->mc_grp_pool, mgid); + mcg = rxe_lookup_mcg(rxe, mgid); if (!mcg) goto err1; @@ -151,14 +263,14 @@ static int rxe_mcast_drop_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp, atomic_dec(&qp->mcg_num); spin_unlock_bh(&rxe->mcg_lock); - rxe_drop_ref(mcg); /* ref from get_key */ + rxe_drop_ref(mcg); kfree(mca); return 0; } } spin_unlock_bh(&rxe->mcg_lock); - rxe_drop_ref(mcg); /* ref from get_key */ + rxe_drop_ref(mcg); err1: return -EINVAL; } @@ -168,7 +280,10 @@ void rxe_mc_cleanup(struct rxe_pool_elem *elem) struct rxe_mcg *mcg = container_of(elem, typeof(*mcg), elem); struct rxe_dev *rxe = mcg->rxe; - rxe_drop_key(mcg); + spin_lock_bh(&rxe->mcg_lock); + __rxe_remove_mcg(mcg); + spin_unlock_bh(&rxe->mcg_lock); + rxe_mcast_delete(rxe, &mcg->mgid); } @@ -180,7 +295,7 @@ int rxe_attach_mcast(struct ib_qp *ibqp, union ib_gid *mgid, u16 mlid) struct rxe_mcg *mcg; /* takes a ref on mcg if successful */ - err = rxe_mcast_get_grp(rxe, mgid, &mcg); + err = rxe_get_mcg(rxe, mgid, &mcg); if (err) return err; diff --git a/drivers/infiniband/sw/rxe/rxe_recv.c b/drivers/infiniband/sw/rxe/rxe_recv.c index 11246589fda7..f1ca83e09160 100644 --- a/drivers/infiniband/sw/rxe/rxe_recv.c +++ b/drivers/infiniband/sw/rxe/rxe_recv.c @@ -254,7 +254,7 @@ static void rxe_rcv_mcast_pkt(struct sk_buff *skb) memcpy(&dgid, &ipv6_hdr(skb)->daddr, sizeof(dgid)); /* lookup mcast group corresponding to mgid, takes a ref */ - mcg = rxe_pool_get_key(&rxe->mc_grp_pool, &dgid); + mcg = rxe_lookup_mcg(rxe, &dgid); if (!mcg) goto drop; /* mcast group not registered */ @@ -320,7 +320,7 @@ static void rxe_rcv_mcast_pkt(struct sk_buff *skb) kfree(qp_array); - rxe_drop_ref(mcg); /* drop ref from rxe_pool_get_key. 
*/ + rxe_drop_ref(mcg); if (likely(!skb)) return; diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h index b72f8f09d984..ea2d9ff29744 100644 --- a/drivers/infiniband/sw/rxe/rxe_verbs.h +++ b/drivers/infiniband/sw/rxe/rxe_verbs.h @@ -353,6 +353,7 @@ struct rxe_mw { struct rxe_mcg { struct rxe_pool_elem elem; + struct rb_node node; struct rxe_dev *rxe; struct list_head qp_list; atomic_t qp_num; @@ -397,6 +398,8 @@ struct rxe_dev { struct rxe_pool mc_grp_pool; spinlock_t mcg_lock; /* guard multicast groups */ + struct rb_root mcg_tree; + atomic_t mcg_num; spinlock_t pending_lock; /* guard pending_mmaps */ struct list_head pending_mmaps; From patchwork Thu Jan 27 21:37:42 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bob Pearson X-Patchwork-Id: 12727443 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 33D2DC4332F for ; Thu, 27 Jan 2022 21:38:19 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1344415AbiA0ViR (ORCPT ); Thu, 27 Jan 2022 16:38:17 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37330 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1344442AbiA0ViQ (ORCPT ); Thu, 27 Jan 2022 16:38:16 -0500 Received: from mail-oi1-x22f.google.com (mail-oi1-x22f.google.com [IPv6:2607:f8b0:4864:20::22f]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 658F0C06175C for ; Thu, 27 Jan 2022 13:38:15 -0800 (PST) Received: by mail-oi1-x22f.google.com with SMTP id y23so8460151oia.13 for ; Thu, 27 Jan 2022 13:38:15 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=vTtTwdu5em41EWFvJMufZEtcNuGGg+0nCV4u7Ii4q58=; b=Ji5XCfV182dH2OnMN2vhdtzzs16cfNWv2YeE9pR3pjhFh2VJHHkmJ5mzVqrwU2+hrC TmacHVmRAPon5uVOhTGsWqkJI1WraW0RcQBP3dqEyedJdJc4cypCHZ1JHfyvt9ZAydtt xFDvnLrw0VvEG1YEth46H15TjqnOlfmpP111KfgfoTTnVwL5BrfFW4AirAxb+YT2JGrV EoI7FUCek/K5s09Ft99plJWs/AS6ZNyU2mlFKobmWlu7zZIEp8kvVxnU4K4WFoy+XM/t X/NF9+InpxYLe1UPna8TQfLVrQK/Tox/LIvuGG32YTip1gmxTsCgw+ucsMt/8VmwzKjA HuNQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=vTtTwdu5em41EWFvJMufZEtcNuGGg+0nCV4u7Ii4q58=; b=N4V6+vngoYp2xNxXbAJI8iEmhsSSKhUtQi/ACBJOMlAdrwKaKfiEV/v+Lsrzgi0haB Lj1dFsiPyKjmMchtADdfZyGVhbXiZ0Ra/5pg7ibA28PB0B7uB4xitAjbF0KZ5etqjGJ+ drK0Ps6MeMw0tgsyjmOlGmgiW11wziSIvsPFpiqt2obX8iCrhfmJUbrMsa2qrrfQwd9d 8xl9nTnPePCAeUrMHxZw+ucYh6+YA43uDQU4Yl2bLg0mMVe5HQwtnlcK1A+aOgnlcd6k 2VTA8A4f9/SlFGIYVd1fHEgBeYHXbDlnkakCWnF1aesPSeeydXAt0bjQb/m/isNLN5YQ yzNg== X-Gm-Message-State: AOAM531degF/sv/PyCxTd3vd1OVDDKYbi+PAgwn8KbFq9l5Spmjn6PW6 is2PFuCtKr9xlRCKja3LD0foyznTRI8= X-Google-Smtp-Source: ABdhPJyxXe97SZ8DUYMujkAnK6IXmjujFC098VWHrlxTiXY9+9HWyL328OyGa+20xQ8DYdWZ54Yl9A== X-Received: by 2002:a54:4812:: with SMTP id j18mr8208763oij.186.1643319494811; Thu, 27 Jan 2022 13:38:14 -0800 (PST) Received: from ubuntu-21.tx.rr.com (097-099-248-255.res.spectrum.com. 
[97.99.248.255]) by smtp.googlemail.com with ESMTPSA id v32sm3994677ooj.45.2022.01.27.13.38.14 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 27 Jan 2022 13:38:14 -0800 (PST) From: Bob Pearson To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org Cc: Bob Pearson Subject: [RFC PATCH v9 13/26] RDMA/rxe: Remove key'ed object support Date: Thu, 27 Jan 2022 15:37:42 -0600 Message-Id: <20220127213755.31697-14-rpearsonhpe@gmail.com> X-Mailer: git-send-email 2.32.0 In-Reply-To: <20220127213755.31697-1-rpearsonhpe@gmail.com> References: <20220127213755.31697-1-rpearsonhpe@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org Now that rxe_mcast.c has its own red-black tree support there is no longer any requirement for key'ed objects in rxe pools. This patch removes the key APIs and related code. Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe_pool.c | 126 --------------------------- drivers/infiniband/sw/rxe/rxe_pool.h | 38 -------- 2 files changed, 164 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c index a6756aa93e2b..673b29f1f12c 100644 --- a/drivers/infiniband/sw/rxe/rxe_pool.c +++ b/drivers/infiniband/sw/rxe/rxe_pool.c @@ -16,8 +16,6 @@ static const struct rxe_type_info { enum rxe_pool_flags flags; u32 min_index; u32 max_index; - size_t key_offset; - size_t key_size; } rxe_type_info[RXE_NUM_TYPES] = { [RXE_TYPE_UC] = { .name = "rxe-uc", @@ -86,9 +84,6 @@ static const struct rxe_type_info { .size = sizeof(struct rxe_mcg), .elem_offset = offsetof(struct rxe_mcg, elem), .cleanup = rxe_mc_cleanup, - .flags = RXE_POOL_KEY, - .key_offset = offsetof(struct rxe_mcg, mgid), - .key_size = sizeof(union ib_gid), }, }; @@ -147,12 +142,6 @@ int rxe_pool_init( goto out; } - if (pool->flags & RXE_POOL_KEY) { - pool->key.tree = RB_ROOT; - pool->key.key_offset = info->key_offset; - pool->key.key_size = info->key_size; - } - out: return err; } @@ -209,77 +198,6 @@ static int rxe_insert_index(struct rxe_pool *pool, struct rxe_pool_elem *new) return 0; } -static int rxe_insert_key(struct rxe_pool *pool, struct rxe_pool_elem *new) -{ - struct rb_node **link = &pool->key.tree.rb_node; - struct rb_node *parent = NULL; - struct rxe_pool_elem *elem; - int cmp; - - while (*link) { - parent = *link; - elem = rb_entry(parent, struct rxe_pool_elem, key_node); - - cmp = memcmp((u8 *)elem + pool->key.key_offset, - (u8 *)new + pool->key.key_offset, - pool->key.key_size); - - if (cmp == 0) { - pr_warn("key already exists!\n"); - return -EINVAL; - } - - if (cmp > 0) - link = &(*link)->rb_left; - else - link = &(*link)->rb_right; - } - - rb_link_node(&new->key_node, parent, link); - rb_insert_color(&new->key_node, &pool->key.tree); - - return 0; -} - -int __rxe_add_key_locked(struct rxe_pool_elem *elem, void *key) -{ - struct rxe_pool *pool = elem->pool; - int err; - - memcpy((u8 *)elem + pool->key.key_offset, key, pool->key.key_size); - err = rxe_insert_key(pool, elem); - - return err; -} - -int __rxe_add_key(struct rxe_pool_elem *elem, void *key) -{ - struct rxe_pool *pool = elem->pool; - int err; - - write_lock_bh(&pool->pool_lock); - err = __rxe_add_key_locked(elem, key); - write_unlock_bh(&pool->pool_lock); - - return err; -} - -void __rxe_drop_key_locked(struct rxe_pool_elem *elem) -{ - struct rxe_pool *pool = elem->pool; - - rb_erase(&elem->key_node, &pool->key.tree); -} - -void __rxe_drop_key(struct rxe_pool_elem *elem) -{ - struct rxe_pool *pool = elem->pool; - 
write_lock_bh(&pool->pool_lock); - __rxe_drop_key_locked(elem); - write_unlock_bh(&pool->pool_lock); -} - int __rxe_add_index_locked(struct rxe_pool_elem *elem) { struct rxe_pool *pool = elem->pool; @@ -443,47 +361,3 @@ void *rxe_pool_get_index(struct rxe_pool *pool, u32 index) return obj; } - -void *rxe_pool_get_key_locked(struct rxe_pool *pool, void *key) -{ - struct rb_node *node; - struct rxe_pool_elem *elem; - void *obj; - int cmp; - - node = pool->key.tree.rb_node; - - while (node) { - elem = rb_entry(node, struct rxe_pool_elem, key_node); - - cmp = memcmp((u8 *)elem + pool->key.key_offset, - key, pool->key.key_size); - - if (cmp > 0) - node = node->rb_left; - else if (cmp < 0) - node = node->rb_right; - else - break; - } - - if (node) { - kref_get(&elem->ref_cnt); - obj = elem->obj; - } else { - obj = NULL; - } - - return obj; -} - -void *rxe_pool_get_key(struct rxe_pool *pool, void *key) -{ - void *obj; - - read_lock_bh(&pool->pool_lock); - obj = rxe_pool_get_key_locked(pool, key); - read_unlock_bh(&pool->pool_lock); - - return obj; -} diff --git a/drivers/infiniband/sw/rxe/rxe_pool.h b/drivers/infiniband/sw/rxe/rxe_pool.h index 511f81554fd1..b6de415e10d2 100644 --- a/drivers/infiniband/sw/rxe/rxe_pool.h +++ b/drivers/infiniband/sw/rxe/rxe_pool.h @@ -9,7 +9,6 @@ enum rxe_pool_flags { RXE_POOL_INDEX = BIT(1), - RXE_POOL_KEY = BIT(2), RXE_POOL_NO_ALLOC = BIT(4), }; @@ -32,9 +31,6 @@ struct rxe_pool_elem { struct kref ref_cnt; struct list_head list; - /* only used if keyed */ - struct rb_node key_node; - /* only used if indexed */ struct rb_node index_node; u32 index; @@ -61,13 +57,6 @@ struct rxe_pool { u32 max_index; u32 min_index; } index; - - /* only used if keyed */ - struct { - struct rb_root tree; - size_t key_offset; - size_t key_size; - } key; }; /* initialize a pool of objects with given limit on @@ -112,26 +101,6 @@ void __rxe_drop_index(struct rxe_pool_elem *elem); #define rxe_drop_index(obj) __rxe_drop_index(&(obj)->elem) -/* assign a key to a keyed object and insert object into - * pool's rb tree holding and not holding pool_lock - */ -int __rxe_add_key_locked(struct rxe_pool_elem *elem, void *key); - -#define rxe_add_key_locked(obj, key) __rxe_add_key_locked(&(obj)->elem, key) - -int __rxe_add_key(struct rxe_pool_elem *elem, void *key); - -#define rxe_add_key(obj, key) __rxe_add_key(&(obj)->elem, key) - -/* remove elem from rb tree holding and not holding the pool_lock */ -void __rxe_drop_key_locked(struct rxe_pool_elem *elem); - -#define rxe_drop_key_locked(obj) __rxe_drop_key_locked(&(obj)->elem) - -void __rxe_drop_key(struct rxe_pool_elem *elem); - -#define rxe_drop_key(obj) __rxe_drop_key(&(obj)->elem) - /* lookup an indexed object from index holding and not holding the pool_lock. * takes a reference on object */ @@ -139,13 +108,6 @@ void *rxe_pool_get_index_locked(struct rxe_pool *pool, u32 index); void *rxe_pool_get_index(struct rxe_pool *pool, u32 index); -/* lookup keyed object from key holding and not holding the pool_lock. 
- * takes a reference on the objecti - */ -void *rxe_pool_get_key_locked(struct rxe_pool *pool, void *key); - -void *rxe_pool_get_key(struct rxe_pool *pool, void *key); - /* cleanup an object when all references are dropped */ void rxe_elem_release(struct kref *kref); From patchwork Thu Jan 27 21:37:43 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bob Pearson X-Patchwork-Id: 12727442 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0AD92C433FE for ; Thu, 27 Jan 2022 21:38:18 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1344411AbiA0ViR (ORCPT ); Thu, 27 Jan 2022 16:38:17 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37348 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1344413AbiA0ViQ (ORCPT ); Thu, 27 Jan 2022 16:38:16 -0500 Received: from mail-oi1-x22d.google.com (mail-oi1-x22d.google.com [IPv6:2607:f8b0:4864:20::22d]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0B88DC061714 for ; Thu, 27 Jan 2022 13:38:16 -0800 (PST) Received: by mail-oi1-x22d.google.com with SMTP id s127so8599048oig.2 for ; Thu, 27 Jan 2022 13:38:16 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=AVK7u2P52N1G3b9h/Df3g4Qwbi3NOjtbe74Ha1KGhjk=; b=c982XjkxXuLVbFOws/IpjneWloOvHcrUEw+h+NxNK52jgsMQ2w/rrjkWCPRSM8kZ7/ 5EskT+CURzbsR1diW5yZfUla1XIc8bFTIzAAkBw+a2bCaca7SuvOUb7JWwlOnXcujb0W 7q6LXoGEfqsY87CLyq8RTuyn/+v2HW8PUBk48nKctsvxl8s2ZvsTI/UZi+RFZ4Ov+7mC mewyDVl7aBPhRh1rh/UwHPCIda2D4csNQtRQSm62w8Fggj3FGo2SHsACftIdrIF6J1Yz LGVFWGMSIAL8SNDLlhKewdC1DN9c0ifSaHQxCSdwkWumw2mlQ+qD6sY+pJxu3Fy8RdsB vmSw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=AVK7u2P52N1G3b9h/Df3g4Qwbi3NOjtbe74Ha1KGhjk=; b=k+MXmrDQYCbVm79gnPztvP28sS2qodCh7sLq8r7y83J99GFbZaItwcpSG/EUv64sEZ lrrZR5SLIhXnwvsZOMC/sihQfqz3kf/OEC9OkU4Z2GHntRmdLhD5yMs+Rj5r45IYXBny kD8Tk3vL91d9CWWPoXgAbwQeZ6lB5z7tF9JzedefIgcscvSjtQfsaTdeN8d7NoopwPoA QA0Jv9lP1Nnuh2sy0S6vnUpYFqexUr/n/HJ3XgnanILlBzAL5/ZSUX5NdqnOnwqLPYxL GK5eS/EnZBvfkaCTk/CRx8AFl5wi5XVs1AziETAigvozqVpiFdHm4BarWgf5IxKOVq23 3XWQ== X-Gm-Message-State: AOAM5324UVVx6O0pkAvCozNLhHR+K6Lq8+9MSikj3BSXQQ9XWM1SJIKN blUnyqT8If3SXH/BEce/twM= X-Google-Smtp-Source: ABdhPJzkWm/ir2kSUPJFpdUmH6R5fgFJECzwrpYgV28ATzdrsNnW9EPvcxbNXbNoPZCgJcxUf8Vlig== X-Received: by 2002:a05:6808:2324:: with SMTP id bn36mr2374836oib.212.1643319495428; Thu, 27 Jan 2022 13:38:15 -0800 (PST) Received: from ubuntu-21.tx.rr.com (097-099-248-255.res.spectrum.com. 
[97.99.248.255]) by smtp.googlemail.com with ESMTPSA id v32sm3994677ooj.45.2022.01.27.13.38.14 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 27 Jan 2022 13:38:15 -0800 (PST) From: Bob Pearson To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org Cc: Bob Pearson Subject: [RFC PATCH v9 14/26] RDMA/rxe: Remove mcg from rxe pools Date: Thu, 27 Jan 2022 15:37:43 -0600 Message-Id: <20220127213755.31697-15-rpearsonhpe@gmail.com> X-Mailer: git-send-email 2.32.0 In-Reply-To: <20220127213755.31697-1-rpearsonhpe@gmail.com> References: <20220127213755.31697-1-rpearsonhpe@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org Finish removing mcg from rxe pools. Replace rxe pools ref counting by kref's. Replace rxe_alloc by kzalloc. Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe.c | 8 --- drivers/infiniband/sw/rxe/rxe_loc.h | 2 +- drivers/infiniband/sw/rxe/rxe_mcast.c | 76 ++++++++++++++++++--------- drivers/infiniband/sw/rxe/rxe_pool.c | 6 --- drivers/infiniband/sw/rxe/rxe_pool.h | 1 - drivers/infiniband/sw/rxe/rxe_recv.c | 4 +- drivers/infiniband/sw/rxe/rxe_verbs.h | 2 +- 7 files changed, 54 insertions(+), 45 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe.c b/drivers/infiniband/sw/rxe/rxe.c index 310e184ae9e8..c560d467a972 100644 --- a/drivers/infiniband/sw/rxe/rxe.c +++ b/drivers/infiniband/sw/rxe/rxe.c @@ -28,7 +28,6 @@ void rxe_dealloc(struct ib_device *ib_dev) rxe_pool_cleanup(&rxe->cq_pool); rxe_pool_cleanup(&rxe->mr_pool); rxe_pool_cleanup(&rxe->mw_pool); - rxe_pool_cleanup(&rxe->mc_grp_pool); if (rxe->tfm) crypto_free_shash(rxe->tfm); @@ -157,15 +156,8 @@ static int rxe_init_pools(struct rxe_dev *rxe) if (err) goto err8; - err = rxe_pool_init(rxe, &rxe->mc_grp_pool, RXE_TYPE_MC_GRP, - rxe->attr.max_mcast_grp); - if (err) - goto err9; - return 0; -err9: - rxe_pool_cleanup(&rxe->mw_pool); err8: rxe_pool_cleanup(&rxe->mr_pool); err7: diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h index d9faf3a1ee61..409efeecd581 100644 --- a/drivers/infiniband/sw/rxe/rxe_loc.h +++ b/drivers/infiniband/sw/rxe/rxe_loc.h @@ -43,7 +43,7 @@ void rxe_cq_cleanup(struct rxe_pool_elem *arg); struct rxe_mcg *rxe_lookup_mcg(struct rxe_dev *rxe, union ib_gid *mgid); int rxe_attach_mcast(struct ib_qp *ibqp, union ib_gid *mgid, u16 mlid); int rxe_detach_mcast(struct ib_qp *ibqp, union ib_gid *mgid, u16 mlid); -void rxe_mc_cleanup(struct rxe_pool_elem *arg); +void rxe_cleanup_mcg(struct kref *kref); /* rxe_mmap.c */ struct rxe_mmap_info { diff --git a/drivers/infiniband/sw/rxe/rxe_mcast.c b/drivers/infiniband/sw/rxe/rxe_mcast.c index 4c3eb9c723b4..d01456052879 100644 --- a/drivers/infiniband/sw/rxe/rxe_mcast.c +++ b/drivers/infiniband/sw/rxe/rxe_mcast.c @@ -98,7 +98,7 @@ static struct rxe_mcg *__rxe_lookup_mcg(struct rxe_dev *rxe, } if (node) { - rxe_add_ref(mcg); + kref_get(&mcg->ref_cnt); return mcg; } @@ -139,7 +139,6 @@ static int rxe_get_mcg(struct rxe_dev *rxe, union ib_gid *mgid, { struct rxe_mcg *mcg, *tmp; int ret; - struct rxe_pool *pool = &rxe->mc_grp_pool; if (rxe->attr.max_mcast_grp == 0) return -EINVAL; @@ -152,7 +151,7 @@ static int rxe_get_mcg(struct rxe_dev *rxe, union ib_gid *mgid, } /* speculative alloc of mcg without using GFP_ATOMIC */ - mcg = rxe_alloc(pool); + mcg = kzalloc(sizeof(*mcg), GFP_KERNEL); if (!mcg) return -ENOMEM; @@ -161,19 +160,22 @@ static int rxe_get_mcg(struct rxe_dev *rxe, union ib_gid *mgid, tmp = __rxe_lookup_mcg(rxe, mgid); if (tmp) { 
spin_unlock_bh(&rxe->mcg_lock); - rxe_drop_ref(mcg); + kfree(mcg); mcg = tmp; goto out; } - if (atomic_inc_return(&rxe->mcg_num) > rxe->attr.max_mcast_grp) + if (atomic_inc_return(&rxe->mcg_num) > rxe->attr.max_mcast_grp) { + ret = -ENOMEM; goto err_dec; + } ret = rxe_mcast_add(rxe, mgid); if (ret) - goto err_out; + goto err_dec; - rxe_add_ref(mcg); + kref_init(&mcg->ref_cnt); + kref_get(&mcg->ref_cnt); mcg->rxe = rxe; memcpy(&mcg->mgid, mgid, sizeof(*mgid)); INIT_LIST_HEAD(&mcg->qp_list); @@ -186,13 +188,47 @@ static int rxe_get_mcg(struct rxe_dev *rxe, union ib_gid *mgid, err_dec: atomic_dec(&rxe->mcg_num); - ret = -ENOMEM; -err_out: spin_unlock_bh(&rxe->mcg_lock); - rxe_drop_ref(mcg); + kfree(mcg); return ret; } +/** + * __rxe_cleanup_mcg - cleanup mcg object holding lock + * @kref: kref embedded in mcg object + * + * Context: caller has put all references to mcg + * caller should hold rxe->mcg_lock + */ +static void __rxe_cleanup_mcg(struct kref *kref) +{ + struct rxe_mcg *mcg = container_of(kref, typeof(*mcg), ref_cnt); + struct rxe_dev *rxe = mcg->rxe; + + __rxe_remove_mcg(mcg); + rxe_mcast_delete(rxe, &mcg->mgid); + atomic_dec(&rxe->mcg_num); + + kfree(mcg); +} + +/** + * rxe_cleanup_mcg - cleanup mcg object + * @kref: kref embedded in mcg object + * + * Context: caller has put all references to mcg and no one should be + * able to get another one + */ +void rxe_cleanup_mcg(struct kref *kref) +{ + struct rxe_mcg *mcg = container_of(kref, typeof(*mcg), ref_cnt); + struct rxe_dev *rxe = mcg->rxe; + + spin_lock_bh(&rxe->mcg_lock); + __rxe_cleanup_mcg(kref); + spin_unlock_bh(&rxe->mcg_lock); +} + static int rxe_mcast_add_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp, struct rxe_mcg *mcg) { @@ -259,34 +295,22 @@ static int rxe_mcast_drop_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp, list_del(&mca->qp_list); n = atomic_dec_return(&mcg->qp_num); if (n <= 0) - rxe_drop_ref(mcg); + kref_put(&mcg->ref_cnt, __rxe_cleanup_mcg); atomic_dec(&qp->mcg_num); spin_unlock_bh(&rxe->mcg_lock); - rxe_drop_ref(mcg); + kref_put(&mcg->ref_cnt, __rxe_cleanup_mcg); kfree(mca); return 0; } } spin_unlock_bh(&rxe->mcg_lock); - rxe_drop_ref(mcg); + kref_put(&mcg->ref_cnt, rxe_cleanup_mcg); err1: return -EINVAL; } -void rxe_mc_cleanup(struct rxe_pool_elem *elem) -{ - struct rxe_mcg *mcg = container_of(elem, typeof(*mcg), elem); - struct rxe_dev *rxe = mcg->rxe; - - spin_lock_bh(&rxe->mcg_lock); - __rxe_remove_mcg(mcg); - spin_unlock_bh(&rxe->mcg_lock); - - rxe_mcast_delete(rxe, &mcg->mgid); -} - int rxe_attach_mcast(struct ib_qp *ibqp, union ib_gid *mgid, u16 mlid) { int err; @@ -301,7 +325,7 @@ int rxe_attach_mcast(struct ib_qp *ibqp, union ib_gid *mgid, u16 mlid) err = rxe_mcast_add_grp_elem(rxe, qp, mcg); - rxe_drop_ref(mcg); + kref_put(&mcg->ref_cnt, rxe_cleanup_mcg); return err; } diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c index 673b29f1f12c..b6fe7c93aaab 100644 --- a/drivers/infiniband/sw/rxe/rxe_pool.c +++ b/drivers/infiniband/sw/rxe/rxe_pool.c @@ -79,12 +79,6 @@ static const struct rxe_type_info { .min_index = RXE_MIN_MW_INDEX, .max_index = RXE_MAX_MW_INDEX, }, - [RXE_TYPE_MC_GRP] = { - .name = "rxe-mc_grp", - .size = sizeof(struct rxe_mcg), - .elem_offset = offsetof(struct rxe_mcg, elem), - .cleanup = rxe_mc_cleanup, - }, }; static int rxe_pool_init_index(struct rxe_pool *pool, u32 max, u32 min) diff --git a/drivers/infiniband/sw/rxe/rxe_pool.h b/drivers/infiniband/sw/rxe/rxe_pool.h index b6de415e10d2..99b1eb04b405 100644 --- 
a/drivers/infiniband/sw/rxe/rxe_pool.h +++ b/drivers/infiniband/sw/rxe/rxe_pool.h @@ -21,7 +21,6 @@ enum rxe_elem_type { RXE_TYPE_CQ, RXE_TYPE_MR, RXE_TYPE_MW, - RXE_TYPE_MC_GRP, RXE_NUM_TYPES, /* keep me last */ }; diff --git a/drivers/infiniband/sw/rxe/rxe_recv.c b/drivers/infiniband/sw/rxe/rxe_recv.c index f1ca83e09160..357a6cea1484 100644 --- a/drivers/infiniband/sw/rxe/rxe_recv.c +++ b/drivers/infiniband/sw/rxe/rxe_recv.c @@ -274,6 +274,8 @@ static void rxe_rcv_mcast_pkt(struct sk_buff *skb) break; } spin_unlock_bh(&rxe->mcg_lock); + kref_put(&mcg->ref_cnt, rxe_cleanup_mcg); + nmax = n; /* this is unreliable datagram service so we let @@ -320,8 +322,6 @@ static void rxe_rcv_mcast_pkt(struct sk_buff *skb) kfree(qp_array); - rxe_drop_ref(mcg); - if (likely(!skb)) return; diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h index ea2d9ff29744..dea24ebdb3d0 100644 --- a/drivers/infiniband/sw/rxe/rxe_verbs.h +++ b/drivers/infiniband/sw/rxe/rxe_verbs.h @@ -352,8 +352,8 @@ struct rxe_mw { }; struct rxe_mcg { - struct rxe_pool_elem elem; struct rb_node node; + struct kref ref_cnt; struct rxe_dev *rxe; struct list_head qp_list; atomic_t qp_num; From patchwork Thu Jan 27 21:37:44 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bob Pearson X-Patchwork-Id: 12727445 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 54040C433EF for ; Thu, 27 Jan 2022 21:38:20 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1344409AbiA0ViS (ORCPT ); Thu, 27 Jan 2022 16:38:18 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37352 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1344412AbiA0ViR (ORCPT ); Thu, 27 Jan 2022 16:38:17 -0500 Received: from mail-oi1-x229.google.com (mail-oi1-x229.google.com [IPv6:2607:f8b0:4864:20::229]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 9418FC061747 for ; Thu, 27 Jan 2022 13:38:16 -0800 (PST) Received: by mail-oi1-x229.google.com with SMTP id u129so8536609oib.4 for ; Thu, 27 Jan 2022 13:38:16 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=aKNe5f1UiRXo4zGBhN7dw31pRci+hLgKcAj+G8bW8o4=; b=CyK8NNhe8SxICIU0Lg1WRr+fpsS5MELfOvjIeWqk272j2R95aHv8VBOEXMFalN5PVo DRyQyRp6vluLZiI/28yha+Q/YMlU8U68NkrAge/x7lDA4HabpgS2e4E2afyP6BBBp0KF RYnbgE0yZTa0iqFzCOvZ09fDkSegkg1edZXv8jFaPVIKUiChQ7IpANONmwXRjgQPCugs SeDwAGiIFX1oS1zL4WxVoBGKe6BU+sHOq+dwtvSQdpzCETFXNRmEHH9N0bbbVhzGDQ9I WNcELtThamOrnWYnyJD69hu8bHFIwz9GV3nLzstM5A0PdIy+H+4BTmwSmqUDNdHJsjXC lO1A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=aKNe5f1UiRXo4zGBhN7dw31pRci+hLgKcAj+G8bW8o4=; b=aCjY4+nj3JPeC/t2iHP/x1IPzAlgBVF15IJspnU/DY9oMZoknafGKc02iAtG5XGU3z hp0oYa3f2SABM58OW0rx4eU7RC1OrpRwIG+6UIoJMWqCqlDP7M3RY03PVxI6/rQhGEeT t7LknvP9M1ph+Nnvk4K+nQpNqCeIYRP8e7wFpxFW5538wAOR1mmVZs0xPB3lvby1OZXO BGDyGqJYFGrcx6TgecMKvZK9xnB5yYJ+IugQOP2Ie5XcGDnSKLGul13/RslH5ev+5wLk J17A/kPV0v63Pgx8jGEWIPLC4cX5pLS9ppfwQ9ygqS6OgjfQR/D9MWJkVFJ/EoFasqoo 3y4g== X-Gm-Message-State: 
AOAM533dsa4YCSeg25Xjm07L5SE4jGn4JLHzV3G/gVToI47Oh05UO4zL VlVbFK9rX2hQZhpQws440Hkw+k4mTgw= X-Google-Smtp-Source: ABdhPJyWKHA2PwnrS/A+A6blOkHiibW7jCzNFt6woS45OzcVaOX069rm4iRzn9axtVRfJ7xQU8KxQg== X-Received: by 2002:a05:6808:13cc:: with SMTP id d12mr3851685oiw.29.1643319496066; Thu, 27 Jan 2022 13:38:16 -0800 (PST) Received: from ubuntu-21.tx.rr.com (097-099-248-255.res.spectrum.com. [97.99.248.255]) by smtp.googlemail.com with ESMTPSA id v32sm3994677ooj.45.2022.01.27.13.38.15 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 27 Jan 2022 13:38:15 -0800 (PST) From: Bob Pearson To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org Cc: Bob Pearson Subject: [RFC PATCH v9 15/26] RDMA/rxe: Add code to cleanup mcast memory Date: Thu, 27 Jan 2022 15:37:44 -0600 Message-Id: <20220127213755.31697-16-rpearsonhpe@gmail.com> X-Mailer: git-send-email 2.32.0 In-Reply-To: <20220127213755.31697-1-rpearsonhpe@gmail.com> References: <20220127213755.31697-1-rpearsonhpe@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org Well behaved applications will free all memory allocated by multicast but programs which do not clean up properly can leave behind allocated memory when the rxe driver is unloaded. This patch walks the red-black tree holding multicast group elements and then walks the list of attached qp's freeing the mca's and finally the mcg's. Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe.c | 2 ++ drivers/infiniband/sw/rxe/rxe_loc.h | 1 + drivers/infiniband/sw/rxe/rxe_mcast.c | 31 +++++++++++++++++++++++++++ 3 files changed, 34 insertions(+) diff --git a/drivers/infiniband/sw/rxe/rxe.c b/drivers/infiniband/sw/rxe/rxe.c index c560d467a972..74c5521e9b3d 100644 --- a/drivers/infiniband/sw/rxe/rxe.c +++ b/drivers/infiniband/sw/rxe/rxe.c @@ -29,6 +29,8 @@ void rxe_dealloc(struct ib_device *ib_dev) rxe_pool_cleanup(&rxe->mr_pool); rxe_pool_cleanup(&rxe->mw_pool); + rxe_cleanup_mcast(rxe); + if (rxe->tfm) crypto_free_shash(rxe->tfm); } diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h index 409efeecd581..0bc1b7e2877c 100644 --- a/drivers/infiniband/sw/rxe/rxe_loc.h +++ b/drivers/infiniband/sw/rxe/rxe_loc.h @@ -44,6 +44,7 @@ struct rxe_mcg *rxe_lookup_mcg(struct rxe_dev *rxe, union ib_gid *mgid); int rxe_attach_mcast(struct ib_qp *ibqp, union ib_gid *mgid, u16 mlid); int rxe_detach_mcast(struct ib_qp *ibqp, union ib_gid *mgid, u16 mlid); void rxe_cleanup_mcg(struct kref *kref); +void rxe_cleanup_mcast(struct rxe_dev *rxe); /* rxe_mmap.c */ struct rxe_mmap_info { diff --git a/drivers/infiniband/sw/rxe/rxe_mcast.c b/drivers/infiniband/sw/rxe/rxe_mcast.c index d01456052879..49cc1ad05bba 100644 --- a/drivers/infiniband/sw/rxe/rxe_mcast.c +++ b/drivers/infiniband/sw/rxe/rxe_mcast.c @@ -336,3 +336,34 @@ int rxe_detach_mcast(struct ib_qp *ibqp, union ib_gid *mgid, u16 mlid) return rxe_mcast_drop_grp_elem(rxe, qp, mgid); } + +/** + * rxe_cleanup_mcast - cleanup all resources held by mcast + * @rxe: rxe object + * + * Called when rxe device is unloaded. Walk red-black tree to + * find all mcg's and then walk mcg->qp_list to find all mca's and + * free them. These should have been freed already if apps are + * well behaved. 
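+ *
+ * Context: called from rxe_dealloc() while the rxe device is being
+ * unloaded, so no new qps can attach while the tree is walked.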
+ */ +void rxe_cleanup_mcast(struct rxe_dev *rxe) +{ + struct rb_root *root = &rxe->mcg_tree; + struct rb_node *node, *next; + struct rxe_mcg *mcg; + struct rxe_mca *mca, *tmp; + + for (node = rb_first(root); node; node = next) { + next = rb_next(node); + mcg = rb_entry(node, typeof(*mcg), node); + + spin_lock_bh(&rxe->mcg_lock); + list_for_each_entry_safe(mca, tmp, &mcg->qp_list, qp_list) + kfree(mca); + + __rxe_remove_mcg(mcg); + spin_unlock_bh(&rxe->mcg_lock); + + kfree(mcg); + } +} From patchwork Thu Jan 27 21:37:45 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bob Pearson X-Patchwork-Id: 12727447 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9DA28C433F5 for ; Thu, 27 Jan 2022 21:38:21 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1344418AbiA0ViU (ORCPT ); Thu, 27 Jan 2022 16:38:20 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37364 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1344421AbiA0ViR (ORCPT ); Thu, 27 Jan 2022 16:38:17 -0500 Received: from mail-oi1-x22c.google.com (mail-oi1-x22c.google.com [IPv6:2607:f8b0:4864:20::22c]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 541BFC061747 for ; Thu, 27 Jan 2022 13:38:17 -0800 (PST) Received: by mail-oi1-x22c.google.com with SMTP id e81so8544183oia.6 for ; Thu, 27 Jan 2022 13:38:17 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=O9HP4u1QtSz+UTDeR5PfbOxd1zQDa2gpLbjYJV32Jsc=; b=hevN2zdNUuWHf+Z2CCawe9Uur5FU1WfWod4PxURq2N24Cy9kwgoNeWBqQnQHCZMSmc qOizGxsQtXGRNaX7w2e2diUKScN15MBqhY/UU4w1bLmZ0rSLojos4GZFAepPolV1m5Rj RpcGp2vpdRHNuNXpRI/r0UyDstSu2Sb+lnnVH1ERLsDL+mZvpZ8ebrS6pBplO3Fwj/9H 8QhaWW6su02SLHsXn1JvJDlLS9cvdNRewCNWUD28F8Xk/63OLr45mCTO5IGT/BFluF6I y/bu1WTg2f6WV78F4gD9YHeQWSyYlmoe4avSlTiqRyzsa0x7uBBOF65fV1rQFJ+D4rBP ANnQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=O9HP4u1QtSz+UTDeR5PfbOxd1zQDa2gpLbjYJV32Jsc=; b=RgPdmIqsjkLkThXOv030H9fwBz93Qqvu4ubLl/Cgo5IojIZpt048Ldn6ITmiy+2hba juD3FwQ/qXEPECH5Uuj+kP9y0Lp2OKRbvx/GRG2IhigNBoTa8IRC+7ngyiQc3zQ9Dl1E ExwX5gjDlo2jbryPFuZcj/T87+zp+H03TVc1gojf21uwl0/exspcMPEAcYptFK8b3WnO PYUIaUVqs0s4Lrb4C3oJWReCvk2ZJj/9BSvR0TOLUCDzI/TYGNFf2gNHEizA3diVL3Y/ J4xDG9puZ2Gs7KQOSIA4CAUrgvSbFMr/Dp3kPYiVfBjSC7zEKJ7QDH5P1+WZWGL+9VmE bjSA== X-Gm-Message-State: AOAM532hUBobievsz2jOggJv431mhXz28dRhIufDs6cdM2eiNDPl3BnB 60Ygs1a3vqUPOq5qadIjF+U= X-Google-Smtp-Source: ABdhPJzOIiHJR1T0h2oM15XpinwS2xKtmy2xWO3P8MTcFGYxFGquZsx45IqdVXjARHr4VbE6A2Q3/A== X-Received: by 2002:a05:6808:30a3:: with SMTP id bl35mr8214215oib.226.1643319496738; Thu, 27 Jan 2022 13:38:16 -0800 (PST) Received: from ubuntu-21.tx.rr.com (097-099-248-255.res.spectrum.com. 
[97.99.248.255]) by smtp.googlemail.com with ESMTPSA id v32sm3994677ooj.45.2022.01.27.13.38.16 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 27 Jan 2022 13:38:16 -0800 (PST) From: Bob Pearson To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org Cc: Bob Pearson Subject: [RFC PATCH v9 16/26] RDMA/rxe: Add comments to rxe_mcast.c Date: Thu, 27 Jan 2022 15:37:45 -0600 Message-Id: <20220127213755.31697-17-rpearsonhpe@gmail.com> X-Mailer: git-send-email 2.32.0 In-Reply-To: <20220127213755.31697-1-rpearsonhpe@gmail.com> References: <20220127213755.31697-1-rpearsonhpe@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org Add comments to rxe_mcast.c. Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe_mcast.c | 42 ++++++++++++++++++++++++++- 1 file changed, 41 insertions(+), 1 deletion(-) diff --git a/drivers/infiniband/sw/rxe/rxe_mcast.c b/drivers/infiniband/sw/rxe/rxe_mcast.c index 49cc1ad05bba..77f166a5d5c8 100644 --- a/drivers/infiniband/sw/rxe/rxe_mcast.c +++ b/drivers/infiniband/sw/rxe/rxe_mcast.c @@ -1,12 +1,45 @@ // SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB /* + * Copyright (c) 2022 Hewlett Packard Enterprise, Inc. All rights reserved. * Copyright (c) 2016 Mellanox Technologies Ltd. All rights reserved. * Copyright (c) 2015 System Fabric Works, Inc. All rights reserved. */ +/* + * rxe_mcast.c implements driver support for multicast transport. + * It is based on two data structures struct rxe_mcg ('mcg') and + * struct rxe_mca ('mca'). An mcg is allocated each time a qp is + * attached to a new mgid for the first time. These are indexed by + * a red-black tree using the mgid. This data structure is searched + * for the mcg when a multicast packet is received and when another + * qp is attached to the same mgid. It is cleaned up when the last qp + * is detached from the mcg. Each time a qp is attached to an mcg an + * mca is created. It holds a pointer to the qp and is added to a list + * of qp's that are attached to the mcg. The qp_list is used to replicate + * mcast packets in the rxe receive path. + * + * mcg's keep a count of the number of qp's attached and once the count + * goes to zero it needs to be cleaned up. mcg's also have a reference + * count. While InfiniBand multicast groups are created and destroyed + * by explicit MADs, for rxe devices this is more implicit and the mcg + * is created by the first qp attach and destroyed by the last qp detach. + * To implement this there is some hysteresis with an extra kref_get when + * the mcg is created and an extra kref_put when the qp count decreases + * to zero. + * + * The qp list and the red-black tree are protected by a single + * rxe->mcg_lock per device. 
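+ *
+ * The kref release function is rxe_cleanup_mcg() which removes the mcg
+ * from the tree, drops the ethernet multicast address and frees the mcg.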
+ */ + #include "rxe.h" -#include "rxe_loc.h" +/** + * rxe_mcast_add - add multicast address to rxe device + * @rxe: rxe device object + * @mgid: multicast address as a gid + * + * Returns 0 on success else an error + */ static int rxe_mcast_add(struct rxe_dev *rxe, union ib_gid *mgid) { unsigned char ll_addr[ETH_ALEN]; @@ -16,6 +49,13 @@ static int rxe_mcast_add(struct rxe_dev *rxe, union ib_gid *mgid) return dev_mc_add(rxe->ndev, ll_addr); } +/** + * rxe_mcast_delete - delete multicast address from rxe device + * @rxe: rxe device object + * @mgid: multicast address as a gid + * + * Returns 0 on success else an error + */ static int rxe_mcast_delete(struct rxe_dev *rxe, union ib_gid *mgid) { unsigned char ll_addr[ETH_ALEN]; From patchwork Thu Jan 27 21:37:46 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bob Pearson X-Patchwork-Id: 12727448 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 45B71C4167B for ; Thu, 27 Jan 2022 21:38:22 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1344420AbiA0ViV (ORCPT ); Thu, 27 Jan 2022 16:38:21 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37368 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1344423AbiA0ViS (ORCPT ); Thu, 27 Jan 2022 16:38:18 -0500 Received: from mail-ot1-x32b.google.com (mail-ot1-x32b.google.com [IPv6:2607:f8b0:4864:20::32b]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 116C3C061714 for ; Thu, 27 Jan 2022 13:38:18 -0800 (PST) Received: by mail-ot1-x32b.google.com with SMTP id x52-20020a05683040b400b0059ea92202daso3869516ott.7 for ; Thu, 27 Jan 2022 13:38:18 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=k/bmw1xcxgv8ClRBPc9ZyzRB8UlHVaxTNT3B0AOI8DE=; b=Zb1B/XTjebax+dCI2Ta8zwm7toAHqzfLcmuCN34IERJ9KyDnHbCPWITNe8SKKuOCFV OFfeAl4PmTddC6em95YR4JU2pAT2We+fdVyadpDFpYMBys+ek9GX59mGSj+daCcMVVVE bpHWZMH6Zaun2gJ9XSHYfT9FKcobVm6X9c0gGPKV+T7R3ZuFAe/PqQopu6ahuX8fEI1T 9NCaxJ1et3p4WnVQmHjUmR0afhttJvBq0Ylwibk7/1iNPe1f7JVIa54uoHJa3MbBtLIq oBMGl+fvakgOk47+UsFUoiuv5Mtm/ztIrRPGcM4ucVpiRQh05Vqpu6hSN7zHvAPNxarS Nvdg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=k/bmw1xcxgv8ClRBPc9ZyzRB8UlHVaxTNT3B0AOI8DE=; b=m4aa0Y4BDlyI1xgyO9f1C2a1gqc8LHvzj9Wx9VbCdUPb7rpLeRq2WhHyh5wnHC3NFg YOPfizEVqAx4T/R/W2jwuFwdJ98969XSjfDWz7evyRQi6/7sW/mi1Ax0dxmaeggPo19D +LiqiGZlgDYCQnXdCm+w71h6D+yy9g94AKr+29vaGyKr0uFn9L1uCTWik+qe8lCy3iHe KR//Avus+Py0zXUQUBfsVcVJT7N1WEgUyDh1hhszEmXZmrV9O5y2UP3E3nmPDZ9u0otb 6xtJqbwYTv4RNDTYrXarqgdT8CPrx12GEuOX0JsTN8/cQS7R0eOmkPFFDnz87OvqQlT8 2FRQ== X-Gm-Message-State: AOAM533E9GRJNOAfw6Rmz9/1WkJmqOLoUJEBD30mJ6bETZNQMuZnJDID ifZVc3Ut0HINTLT1GCdw7BNT6jnS9p4= X-Google-Smtp-Source: ABdhPJxhAj3EuRqy9rkMPRaR0sY/lvvJi5533Cjr2Xr74rz+JfK29Zc5gkdmST/bqC09zHP81LtzZA== X-Received: by 2002:a05:6830:2304:: with SMTP id u4mr3175714ote.348.1643319497446; Thu, 27 Jan 2022 13:38:17 -0800 (PST) Received: from ubuntu-21.tx.rr.com (097-099-248-255.res.spectrum.com. 
[97.99.248.255]) by smtp.googlemail.com with ESMTPSA id v32sm3994677ooj.45.2022.01.27.13.38.16 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 27 Jan 2022 13:38:17 -0800 (PST) From: Bob Pearson To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org Cc: Bob Pearson Subject: [RFC PATCH v9 17/26] RDMA/rxe: Separate code into subroutines Date: Thu, 27 Jan 2022 15:37:46 -0600 Message-Id: <20220127213755.31697-18-rpearsonhpe@gmail.com> X-Mailer: git-send-email 2.32.0 In-Reply-To: <20220127213755.31697-1-rpearsonhpe@gmail.com> References: <20220127213755.31697-1-rpearsonhpe@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org Cleanup rxe_mcast.c code by separating initialization and cleanup of mca objects into subroutines. Added remaining documentation comments. Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe_mcast.c | 162 +++++++++++++++++++------- drivers/infiniband/sw/rxe/rxe_verbs.h | 1 + 2 files changed, 121 insertions(+), 42 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe_mcast.c b/drivers/infiniband/sw/rxe/rxe_mcast.c index 77f166a5d5c8..865e6e85084f 100644 --- a/drivers/infiniband/sw/rxe/rxe_mcast.c +++ b/drivers/infiniband/sw/rxe/rxe_mcast.c @@ -178,7 +178,7 @@ static int rxe_get_mcg(struct rxe_dev *rxe, union ib_gid *mgid, struct rxe_mcg **mcgp) { struct rxe_mcg *mcg, *tmp; - int ret; + int err; if (rxe->attr.max_mcast_grp == 0) return -EINVAL; @@ -206,12 +206,12 @@ static int rxe_get_mcg(struct rxe_dev *rxe, union ib_gid *mgid, } if (atomic_inc_return(&rxe->mcg_num) > rxe->attr.max_mcast_grp) { - ret = -ENOMEM; + err = -ENOMEM; goto err_dec; } - ret = rxe_mcast_add(rxe, mgid); - if (ret) + err = rxe_mcast_add(rxe, mgid); + if (err) goto err_dec; kref_init(&mcg->ref_cnt); @@ -230,7 +230,7 @@ static int rxe_get_mcg(struct rxe_dev *rxe, union ib_gid *mgid, atomic_dec(&rxe->mcg_num); spin_unlock_bh(&rxe->mcg_lock); kfree(mcg); - return ret; + return err; } /** @@ -269,11 +269,59 @@ void rxe_cleanup_mcg(struct kref *kref) spin_unlock_bh(&rxe->mcg_lock); } -static int rxe_mcast_add_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp, - struct rxe_mcg *mcg) +/** + * __rxe_init_mca - initialize a new mca holding lock + * @qp: qp object + * @mcg: mcg object + * @mca: empty space for new mca + * + * Context: caller must hold references on qp and mcg, rxe->mcg_lock + * and pass memory for new mca + * + * Returns: 0 on success else an error + */ +static int __rxe_init_mca(struct rxe_qp *qp, struct rxe_mcg *mcg, + struct rxe_mca *mca) { - int err; + struct rxe_dev *rxe = to_rdev(qp->ibqp.device); + int n; + + n = atomic_inc_return(&rxe->mcg_attach); + if (n > rxe->attr.max_total_mcast_qp_attach) { + atomic_dec(&rxe->mcg_attach); + return -ENOMEM; + } + + n = atomic_inc_return(&mcg->qp_num); + if (n > rxe->attr.max_mcast_qp_attach) { + atomic_dec(&mcg->qp_num); + atomic_dec(&rxe->mcg_attach); + return -ENOMEM; + } + + atomic_inc(&qp->mcg_num); + + rxe_add_ref(qp); + mca->qp = qp; + + list_add_tail(&mca->qp_list, &mcg->qp_list); + + return 0; +} + +/** + * rxe_attach_mcg - attach qp to mcg if not already attached + * @mcg: mcg object + * @qp: qp object + * + * Context: caller must hold reference on qp and mcg. 
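+ * rxe->mcg_lock is taken and released internally so the
+ * caller must not hold it.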
+ * Returns: 0 on success else an error + */ +static int rxe_attach_mcg(struct rxe_mcg *mcg, struct rxe_qp *qp) +{ + struct rxe_dev *rxe = mcg->rxe; struct rxe_mca *mca, *new_mca; + int err; /* check to see if the qp is already a member of the group */ spin_lock_bh(&rxe->mcg_lock); @@ -296,61 +344,74 @@ static int rxe_mcast_add_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp, if (mca->qp == qp) { kfree(new_mca); err = 0; - goto out; + goto done; } } - if (atomic_read(&mcg->qp_num) >= rxe->attr.max_mcast_qp_attach) { - err = -ENOMEM; - goto out; - } + mca = new_mca; + err = __rxe_init_mca(qp, mcg, mca); + if (err) + kfree(mca); +done: + spin_unlock_bh(&rxe->mcg_lock); - atomic_inc(&mcg->qp_num); - new_mca->qp = qp; - atomic_inc(&qp->mcg_num); + return err; +} + +/** + * __rxe_cleanup_mca - cleanup mca object holding lock + * @mca: mca object + * @mcg: mcg object + * + * Context: caller must hold a reference to mcg and rxe->mcg_lock + */ +static void __rxe_cleanup_mca(struct rxe_mca *mca, struct rxe_mcg *mcg) +{ + list_del(&mca->qp_list); - list_add_tail(&new_mca->qp_list, &mcg->qp_list); + atomic_dec(&mcg->qp_num); + atomic_dec(&mcg->rxe->mcg_attach); + atomic_dec(&mca->qp->mcg_num); - err = 0; -out: - spin_unlock_bh(&rxe->mcg_lock); - return err; + rxe_drop_ref(mca->qp); } -static int rxe_mcast_drop_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp, - union ib_gid *mgid) +/** + * rxe_detach_mcg - detach qp from mcg + * @mcg: mcg object + * @qp: qp object + * + * Returns: 0 on success else an error if qp is not attached. + */ +static int rxe_detach_mcg(struct rxe_mcg *mcg, struct rxe_qp *qp) { - struct rxe_mcg *mcg; + struct rxe_dev *rxe = mcg->rxe; struct rxe_mca *mca, *tmp; - int n; - - mcg = rxe_lookup_mcg(rxe, mgid); - if (!mcg) - goto err1; spin_lock_bh(&rxe->mcg_lock); - list_for_each_entry_safe(mca, tmp, &mcg->qp_list, qp_list) { if (mca->qp == qp) { - list_del(&mca->qp_list); - n = atomic_dec_return(&mcg->qp_num); - if (n <= 0) + __rxe_cleanup_mca(mca, mcg); + if (atomic_read(&mcg->qp_num) <= 0) kref_put(&mcg->ref_cnt, __rxe_cleanup_mcg); - atomic_dec(&qp->mcg_num); - spin_unlock_bh(&rxe->mcg_lock); - kref_put(&mcg->ref_cnt, __rxe_cleanup_mcg); kfree(mca); return 0; } } - spin_unlock_bh(&rxe->mcg_lock); - kref_put(&mcg->ref_cnt, rxe_cleanup_mcg); -err1: + return -EINVAL; } +/** + * rxe_attach_mcast - attach qp to multicast group (see IBA-11.3.1) + * @ibqp: (IB) qp object + * @mgid: multicast IP address + * @mlid: multicast LID, ignored for RoCEv2 (see IBA-A17.5.6) + * + * Returns: 0 on success else an errno + */ int rxe_attach_mcast(struct ib_qp *ibqp, union ib_gid *mgid, u16 mlid) { int err; @@ -363,18 +424,35 @@ int rxe_attach_mcast(struct ib_qp *ibqp, union ib_gid *mgid, u16 mlid) if (err) return err; - err = rxe_mcast_add_grp_elem(rxe, qp, mcg); - + err = rxe_attach_mcg(mcg, qp); kref_put(&mcg->ref_cnt, rxe_cleanup_mcg); + return err; } +/** + * rxe_detach_mcast - detach qp from multicast group (see IBA-11.3.2) + * @ibqp: address of (IB) qp object + * @mgid: multicast IP address + * @mlid: multicast LID, ignored for RoCEv2 (see IBA-A17.5.6) + * + * Returns: 0 on success else an errno + */ int rxe_detach_mcast(struct ib_qp *ibqp, union ib_gid *mgid, u16 mlid) { struct rxe_dev *rxe = to_rdev(ibqp->device); struct rxe_qp *qp = to_rqp(ibqp); + struct rxe_mcg *mcg; + int err; + + mcg = rxe_lookup_mcg(rxe, mgid); + if (!mcg) + return -EINVAL; - return rxe_mcast_drop_grp_elem(rxe, qp, mgid); + err = rxe_detach_mcg(mcg, qp); + kref_put(&mcg->ref_cnt, rxe_cleanup_mcg); + + return err; } 
/** diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h index dea24ebdb3d0..76350d43ce2a 100644 --- a/drivers/infiniband/sw/rxe/rxe_verbs.h +++ b/drivers/infiniband/sw/rxe/rxe_verbs.h @@ -400,6 +400,7 @@ struct rxe_dev { spinlock_t mcg_lock; /* guard multicast groups */ struct rb_root mcg_tree; atomic_t mcg_num; + atomic_t mcg_attach; spinlock_t pending_lock; /* guard pending_mmaps */ struct list_head pending_mmaps; From patchwork Thu Jan 27 21:37:47 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bob Pearson X-Patchwork-Id: 12727449
From: Bob Pearson To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org Cc: Bob Pearson Subject: [RFC PATCH v9 18/26] RDMA/rxe: Convert mca read locking to RCU Date: Thu, 27 Jan 2022 15:37:47 -0600 Message-Id: <20220127213755.31697-19-rpearsonhpe@gmail.com> X-Mailer: git-send-email 2.32.0 In-Reply-To: <20220127213755.31697-1-rpearsonhpe@gmail.com> References: <20220127213755.31697-1-rpearsonhpe@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org Replace spinlocks with rcu read locks for read side operations on mca in rxe_recv.c and rxe_mcast.c. Use rcu list extensions on write side operations and keep spinlocks to serialize write threads. Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe_mcast.c | 57 ++++++++++++++++++----------- drivers/infiniband/sw/rxe/rxe_recv.c | 6 +-- drivers/infiniband/sw/rxe/rxe_verbs.h | 1 + 3 files changed, 39 insertions(+), 25 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe_mcast.c b/drivers/infiniband/sw/rxe/rxe_mcast.c index 865e6e85084f..c193bd4975f7 100644 --- a/drivers/infiniband/sw/rxe/rxe_mcast.c +++ b/drivers/infiniband/sw/rxe/rxe_mcast.c @@ -27,7 +27,8 @@ * the mcg is created and an extra kref_put when the qp count decreases * to zero. * - * The qp list and the red-black tree are protected by a single + * The qp list is protected for read operations by RCU and the qp list and + * the red-black tree are protected for write operations by a single * rxe->mcg_lock per device. */ @@ -270,7 +271,7 @@ void rxe_cleanup_mcg(struct kref *kref) } /** - * __rxe_init_mca - initialize a new mca holding lock + * __rxe_init_mca_rcu - initialize a new mca holding lock * @qp: qp object * @mcg: mcg object * @mca: empty space for new mca @@ -280,7 +281,7 @@ void rxe_cleanup_mcg(struct kref *kref) * * Returns: 0 on success else an error */ -static int __rxe_init_mca(struct rxe_qp *qp, struct rxe_mcg *mcg, +static int __rxe_init_mca_rcu(struct rxe_qp *qp, struct rxe_mcg *mcg, struct rxe_mca *mca) { struct rxe_dev *rxe = to_rdev(qp->ibqp.device); @@ -304,7 +305,7 @@ static int __rxe_init_mca(struct rxe_qp *qp, struct rxe_mcg *mcg, rxe_add_ref(qp); mca->qp = qp; - list_add_tail(&mca->qp_list, &mcg->qp_list); + list_add_tail_rcu(&mca->qp_list, &mcg->qp_list); return 0; } @@ -324,14 +325,14 @@ static int rxe_attach_mcg(struct rxe_mcg *mcg, struct rxe_qp *qp) int err; /* check to see if the qp is already a member of the group */ - spin_lock_bh(&rxe->mcg_lock); - list_for_each_entry(mca, &mcg->qp_list, qp_list) { + rcu_read_lock(); + list_for_each_entry_rcu(mca, &mcg->qp_list, qp_list) { if (mca->qp == qp) { - spin_unlock_bh(&rxe->mcg_lock); + rcu_read_unlock(); return 0; } } - spin_unlock_bh(&rxe->mcg_lock); + rcu_read_unlock(); /* speculative alloc new mca without using GFP_ATOMIC */ new_mca = kzalloc(sizeof(*mca), GFP_KERNEL); @@ -340,16 +341,19 @@ static int rxe_attach_mcg(struct rxe_mcg *mcg, struct rxe_qp *qp) spin_lock_bh(&rxe->mcg_lock); /* re-check to see if someone else just attached qp */ - list_for_each_entry(mca, &mcg->qp_list, qp_list) { + rcu_read_lock(); + list_for_each_entry_rcu(mca, &mcg->qp_list, qp_list) { if (mca->qp == qp) { + rcu_read_unlock(); kfree(new_mca); err = 0; goto done; } } + rcu_read_unlock(); mca = new_mca; - err = __rxe_init_mca(qp, mcg, mca); + err = __rxe_init_mca_rcu(qp, mcg,
mca); if (err) kfree(mca); done: @@ -359,21 +363,23 @@ static int rxe_attach_mcg(struct rxe_mcg *mcg, struct rxe_qp *qp) } /** - * __rxe_cleanup_mca - cleanup mca object holding lock + * __rxe_cleanup_mca_rcu - cleanup mca object holding lock * @mca: mca object * @mcg: mcg object * * Context: caller must hold a reference to mcg and rxe->mcg_lock */ -static void __rxe_cleanup_mca(struct rxe_mca *mca, struct rxe_mcg *mcg) +static void __rxe_cleanup_mca_rcu(struct rxe_mca *mca, struct rxe_mcg *mcg) { - list_del(&mca->qp_list); + list_del_rcu(&mca->qp_list); atomic_dec(&mcg->qp_num); atomic_dec(&mcg->rxe->mcg_attach); atomic_dec(&mca->qp->mcg_num); rxe_drop_ref(mca->qp); + + kfree_rcu(mca, rcu); } /** @@ -386,22 +392,29 @@ static void __rxe_cleanup_mca(struct rxe_mca *mca, struct rxe_mcg *mcg) static int rxe_detach_mcg(struct rxe_mcg *mcg, struct rxe_qp *qp) { struct rxe_dev *rxe = mcg->rxe; - struct rxe_mca *mca, *tmp; + struct rxe_mca *mca; + int ret; spin_lock_bh(&rxe->mcg_lock); - list_for_each_entry_safe(mca, tmp, &mcg->qp_list, qp_list) { + rcu_read_lock(); + list_for_each_entry_rcu(mca, &mcg->qp_list, qp_list) { if (mca->qp == qp) { - __rxe_cleanup_mca(mca, mcg); - if (atomic_read(&mcg->qp_num) <= 0) - kref_put(&mcg->ref_cnt, __rxe_cleanup_mcg); - spin_unlock_bh(&rxe->mcg_lock); - kfree(mca); - return 0; + rcu_read_unlock(); + goto found; } } + rcu_read_unlock(); + ret = -EINVAL; + goto done; +found: + __rxe_cleanup_mca_rcu(mca, mcg); + if (atomic_read(&mcg->qp_num) <= 0) + kref_put(&mcg->ref_cnt, __rxe_cleanup_mcg); + ret = 0; +done: spin_unlock_bh(&rxe->mcg_lock); - return -EINVAL; + return ret; } /** diff --git a/drivers/infiniband/sw/rxe/rxe_recv.c b/drivers/infiniband/sw/rxe/rxe_recv.c index 357a6cea1484..7f2ea61a52c1 100644 --- a/drivers/infiniband/sw/rxe/rxe_recv.c +++ b/drivers/infiniband/sw/rxe/rxe_recv.c @@ -267,13 +267,13 @@ static void rxe_rcv_mcast_pkt(struct sk_buff *skb) qp_array = kmalloc_array(nmax, sizeof(qp), GFP_KERNEL); n = 0; - spin_lock_bh(&rxe->mcg_lock); - list_for_each_entry(mca, &mcg->qp_list, qp_list) { + rcu_read_lock(); + list_for_each_entry_rcu(mca, &mcg->qp_list, qp_list) { qp_array[n++] = mca->qp; if (n == nmax) break; } - spin_unlock_bh(&rxe->mcg_lock); + rcu_read_unlock(); kref_put(&mcg->ref_cnt, rxe_cleanup_mcg); nmax = n; diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h index 76350d43ce2a..12bff190fc1f 100644 --- a/drivers/infiniband/sw/rxe/rxe_verbs.h +++ b/drivers/infiniband/sw/rxe/rxe_verbs.h @@ -365,6 +365,7 @@ struct rxe_mcg { struct rxe_mca { struct list_head qp_list; struct rxe_qp *qp; + struct rcu_head rcu; }; struct rxe_port { From patchwork Thu Jan 27 21:37:48 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bob Pearson X-Patchwork-Id: 12727451 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C9735C4167D for ; Thu, 27 Jan 2022 21:38:22 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1344412AbiA0ViW (ORCPT ); Thu, 27 Jan 2022 16:38:22 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37392 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1344445AbiA0ViT (ORCPT ); Thu, 27 Jan 2022 16:38:19 -0500 Received: from mail-oi1-x231.google.com (mail-oi1-x231.google.com 
[IPv6:2607:f8b0:4864:20::231]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 6D60BC061748 for ; Thu, 27 Jan 2022 13:38:19 -0800 (PST) Received: by mail-oi1-x231.google.com with SMTP id x193so8661669oix.0 for ; Thu, 27 Jan 2022 13:38:19 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=CWWfpsAW3PURcAs6cTHYlX1JonKPfT0wiVd2NrLN1/4=; b=JJcHZo+EbNiqtIWbl3moYZI82l/Jx3QtLxVi0GCsR1ybvVEdlcfAr2k3QHRv9zPPJR MZNr2FM7ibHtSWNN1bkDLdUTp865eiN/GHT1sTkp3qOUKYQx4YoN4UiSTe5By4F4RmId wyySZF0DsUDmw/hiNtvPXyH0V9B8ardXALHs8LjUKzkSm9Yn3xUxdc6Tfd9SZc0uODbP PVuQc5gk4enXEsenS8Z5LyZW7A24ugPRzQUOwkM3SUeHftUAnYBXo2+q4BwBZMgq+ZlR fieacwtk7dQfHDrU8QetvTM7kAZoJAeqr1wHMSDABliUhLg7pPdZE9rK5RI/BiGMMFR4 xJFQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=CWWfpsAW3PURcAs6cTHYlX1JonKPfT0wiVd2NrLN1/4=; b=JxktGuXlJ0/Ba8rtVgCYxqpnZ/Rry3hq3IVFeKfx/YG0Cb/kBppeXsd92IPgBFVzQJ iMUphDYtCivZyLYuCfPMVUQPp+hM0vddYaTspgwtHXKIqtD/MI0m9tLbuh6ocsWp19PX 7DyWnFnXOUj+7A7Go3GmlseKykZFglDlqVM5vu0OG69lgQgGxEOdZt4yKf1bTyVZiJMq ni2lV+bagGscoqJDEKoHlizKSWEEPo+OIkNki+Z4MwwAFpBXBCnL5PyRn3u/KbM7dTJ7 1NTz+0UhnSbXRbdZWTtwD7AnsJv9D9zYWvGODUTSOzVABFwyV9biFT4Ka99hnCEUAP2d j8sQ== X-Gm-Message-State: AOAM531hcr6A+7jra4/1H06GIB+3EK395XWiydZgO5TOlVpUlaIx97J5 cI/Y7SNfUYc+/I4DJahkvwAHJFIMa+Q= X-Google-Smtp-Source: ABdhPJyMU/d2I2ug/Ixzp6u21Wkf86MK0lGHhQHM9fWX8BQyzfxzPZ3Fq6KWhTT0rWudyrNvlZ04Gg== X-Received: by 2002:a05:6808:1598:: with SMTP id t24mr7977777oiw.50.1643319498889; Thu, 27 Jan 2022 13:38:18 -0800 (PST) Received: from ubuntu-21.tx.rr.com (097-099-248-255.res.spectrum.com. [97.99.248.255]) by smtp.googlemail.com with ESMTPSA id v32sm3994677ooj.45.2022.01.27.13.38.18 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 27 Jan 2022 13:38:18 -0800 (PST) From: Bob Pearson To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org Cc: Bob Pearson Subject: [RFC PATCH v9 19/26] RDMA/rxe: Reverse the sense of RXE_POOL_NO_ALLOC Date: Thu, 27 Jan 2022 15:37:48 -0600 Message-Id: <20220127213755.31697-20-rpearsonhpe@gmail.com> X-Mailer: git-send-email 2.32.0 In-Reply-To: <20220127213755.31697-1-rpearsonhpe@gmail.com> References: <20220127213755.31697-1-rpearsonhpe@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org There is only one remaining object type that allocates its own memory, that is MR. So the sense of RXE_POOL_NO_ALLOC is changed to RXE_POOL_ALLOC. Add checks to rxe_alloc() and rxe_add_to_pool() to make sure the correct call is used for the setting of this flag. 
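A minimal sketch of the resulting calling convention (the variable names here are illustrative, not taken from the patch):

	/* MR is the only type with RXE_POOL_ALLOC set; the pool
	 * allocates the object memory itself
	 */
	struct rxe_mr *mr = rxe_alloc(&rxe->mr_pool);

	/* all other types embed a rxe_pool_elem in memory allocated
	 * by the core stack and are only linked into the pool
	 */
	int err = rxe_add_to_pool(&rxe->qp_pool, qp);

Calling the wrong entry point for a pool now fails and fires the pr_warn_once() added below.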
Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe_pool.c | 27 ++++++++++++++++++--------- drivers/infiniband/sw/rxe/rxe_pool.h | 2 +- 2 files changed, 19 insertions(+), 10 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c index b6fe7c93aaab..8fc3f0026f69 100644 --- a/drivers/infiniband/sw/rxe/rxe_pool.c +++ b/drivers/infiniband/sw/rxe/rxe_pool.c @@ -21,19 +21,17 @@ static const struct rxe_type_info { .name = "rxe-uc", .size = sizeof(struct rxe_ucontext), .elem_offset = offsetof(struct rxe_ucontext, elem), - .flags = RXE_POOL_NO_ALLOC, }, [RXE_TYPE_PD] = { .name = "rxe-pd", .size = sizeof(struct rxe_pd), .elem_offset = offsetof(struct rxe_pd, elem), - .flags = RXE_POOL_NO_ALLOC, }, [RXE_TYPE_AH] = { .name = "rxe-ah", .size = sizeof(struct rxe_ah), .elem_offset = offsetof(struct rxe_ah, elem), - .flags = RXE_POOL_INDEX | RXE_POOL_NO_ALLOC, + .flags = RXE_POOL_INDEX, .min_index = RXE_MIN_AH_INDEX, .max_index = RXE_MAX_AH_INDEX, }, @@ -41,7 +39,7 @@ static const struct rxe_type_info { .name = "rxe-srq", .size = sizeof(struct rxe_srq), .elem_offset = offsetof(struct rxe_srq, elem), - .flags = RXE_POOL_INDEX | RXE_POOL_NO_ALLOC, + .flags = RXE_POOL_INDEX, .min_index = RXE_MIN_SRQ_INDEX, .max_index = RXE_MAX_SRQ_INDEX, }, @@ -50,7 +48,7 @@ static const struct rxe_type_info { .size = sizeof(struct rxe_qp), .elem_offset = offsetof(struct rxe_qp, elem), .cleanup = rxe_qp_cleanup, - .flags = RXE_POOL_INDEX | RXE_POOL_NO_ALLOC, + .flags = RXE_POOL_INDEX, .min_index = RXE_MIN_QP_INDEX, .max_index = RXE_MAX_QP_INDEX, }, @@ -58,7 +56,6 @@ static const struct rxe_type_info { .name = "rxe-cq", .size = sizeof(struct rxe_cq), .elem_offset = offsetof(struct rxe_cq, elem), - .flags = RXE_POOL_NO_ALLOC, .cleanup = rxe_cq_cleanup, }, [RXE_TYPE_MR] = { @@ -66,7 +63,7 @@ static const struct rxe_type_info { .size = sizeof(struct rxe_mr), .elem_offset = offsetof(struct rxe_mr, elem), .cleanup = rxe_mr_cleanup, - .flags = RXE_POOL_INDEX, + .flags = RXE_POOL_INDEX | RXE_POOL_ALLOC, .min_index = RXE_MIN_MR_INDEX, .max_index = RXE_MAX_MR_INDEX, }, @@ -75,7 +72,7 @@ static const struct rxe_type_info { .size = sizeof(struct rxe_mw), .elem_offset = offsetof(struct rxe_mw, elem), .cleanup = rxe_mw_cleanup, - .flags = RXE_POOL_INDEX | RXE_POOL_NO_ALLOC, + .flags = RXE_POOL_INDEX, .min_index = RXE_MIN_MW_INDEX, .max_index = RXE_MAX_MW_INDEX, }, @@ -262,6 +259,12 @@ void *rxe_alloc(struct rxe_pool *pool) struct rxe_pool_elem *elem; void *obj; + if (!(pool->flags & RXE_POOL_ALLOC)) { + pr_warn_once("%s: Pool %s must call rxe_add_to_pool\n", + __func__, pool->name); + return NULL; + } + if (atomic_inc_return(&pool->num_elem) > pool->max_elem) goto out_cnt; @@ -284,6 +287,12 @@ void *rxe_alloc(struct rxe_pool *pool) int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_elem *elem) { + if (pool->flags & RXE_POOL_ALLOC) { + pr_warn_once("%s: Pool %s must call rxe_alloc\n", + __func__, pool->name); + return -EINVAL; + } + if (atomic_inc_return(&pool->num_elem) > pool->max_elem) goto out_cnt; @@ -308,7 +317,7 @@ void rxe_elem_release(struct kref *kref) if (pool->cleanup) pool->cleanup(elem); - if (!(pool->flags & RXE_POOL_NO_ALLOC)) { + if (pool->flags & RXE_POOL_ALLOC) { obj = elem->obj; kfree(obj); } diff --git a/drivers/infiniband/sw/rxe/rxe_pool.h b/drivers/infiniband/sw/rxe/rxe_pool.h index 99b1eb04b405..ca7e5c4c44cf 100644 --- a/drivers/infiniband/sw/rxe/rxe_pool.h +++ b/drivers/infiniband/sw/rxe/rxe_pool.h @@ -9,7 +9,7 @@ enum rxe_pool_flags { RXE_POOL_INDEX 
= BIT(1), - RXE_POOL_NO_ALLOC = BIT(4), + RXE_POOL_ALLOC = BIT(2), }; enum rxe_elem_type { From patchwork Thu Jan 27 21:37:49 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bob Pearson X-Patchwork-Id: 12727452 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 20236C4167E for ; Thu, 27 Jan 2022 21:38:23 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1344429AbiA0ViW (ORCPT ); Thu, 27 Jan 2022 16:38:22 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37380 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1344422AbiA0ViU (ORCPT ); Thu, 27 Jan 2022 16:38:20 -0500 Received: from mail-ot1-x32a.google.com (mail-ot1-x32a.google.com [IPv6:2607:f8b0:4864:20::32a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 22F01C06174A for ; Thu, 27 Jan 2022 13:38:20 -0800 (PST) Received: by mail-ot1-x32a.google.com with SMTP id j38-20020a9d1926000000b0059fa6de6c71so3860584ota.10 for ; Thu, 27 Jan 2022 13:38:20 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=m7m1BGBziqwvyvCrwIk/mIsGbri46HjiDVdhAI03lww=; b=idhfjdqrY8NMtra/jlxwJ4hBj85yhbbx7+/JjbJePugurWwzEGO9trPIj4pn7u/AIh RSmc14aZt4NtVR1m5JT2FcdAUNogNfce16SLbt8+oWkhsdcNkYK1MAC8xDHeHpYyxOpl EA551Ei9GyQokKvZhTg/U5XsQxqiSC9ZP3URd0uJXXO2hQGK16mXI4Gfh5m9wUoT6xr+ 47JC3mNVR8/M4jBDHMG2fk19DfQRWj2w/8cawwH1Yj7rMnxg9Ex8FvFXvDLgurRRBRyj YvF9hm+qMyDq3NqHAfA/EGkv7f7tlg0WgzjdzALCxQVbcZINWha4dcGtd3g4H5wK0gnK dhAw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=m7m1BGBziqwvyvCrwIk/mIsGbri46HjiDVdhAI03lww=; b=yNfl/9x6LaB+ZQyoAFAy1qlb6oOWou1mFsjsIF/6m3xHVx+r0Wvpmqsqk09eb1Hx1D UPDo1Nv53u56DTpcq9+UDoJN0dwssXvuZqLAdJuIEDRgOhFo0VuOJjH8IyML8OCl+E6+ WHfV6qFMxCFXW9Lj8wsNx5ERjJjJIA0+/qN6qFeChM0oMgMgiGqqvZZ31sfVt+K8smgn X/onAXIltwkZdS+M2EUUS13BrY42uN67k6w5TQGewmUBx2EanneTG/e47uVGPF8OKalw 2dtvkLY/lMrcXYm8VTItjtTx2ezogyCN2SCbuhyR/gcnSs3fRB12bZXdIc86G+GwbG1u IxbQ== X-Gm-Message-State: AOAM530RctS5O5euRebmHEc861PoIA66NoisDSwTexjgB24o5uQbv3N1 cWbtHBzG701hhJTUjZkI6pELieR3RxQ= X-Google-Smtp-Source: ABdhPJwX6Lm+zBr7PwduQzHC3tGPZUCEqwQeOI0eS03Ar1oI8EMaQ9I/cw3G1wB1oYLnmcMnH+Lw5Q== X-Received: by 2002:a05:6830:2433:: with SMTP id k19mr3111297ots.216.1643319499555; Thu, 27 Jan 2022 13:38:19 -0800 (PST) Received: from ubuntu-21.tx.rr.com (097-099-248-255.res.spectrum.com. 
[97.99.248.255]) by smtp.googlemail.com with ESMTPSA id v32sm3994677ooj.45.2022.01.27.13.38.18 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 27 Jan 2022 13:38:19 -0800 (PST) From: Bob Pearson To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org Cc: Bob Pearson Subject: [RFC PATCH v9 20/26] RDMA/rxe: Delete _locked() APIs for pool objects Date: Thu, 27 Jan 2022 15:37:49 -0600 Message-Id: <20220127213755.31697-21-rpearsonhpe@gmail.com> X-Mailer: git-send-email 2.32.0 In-Reply-To: <20220127213755.31697-1-rpearsonhpe@gmail.com> References: <20220127213755.31697-1-rpearsonhpe@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org Since caller managed locks for indexed objects are no longer used these APIs are deleted. Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe_pool.c | 63 +++------------------------- drivers/infiniband/sw/rxe/rxe_pool.h | 24 ++--------- 2 files changed, 10 insertions(+), 77 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c index 8fc3f0026f69..b3c74988b0e9 100644 --- a/drivers/infiniband/sw/rxe/rxe_pool.c +++ b/drivers/infiniband/sw/rxe/rxe_pool.c @@ -189,71 +189,29 @@ static int rxe_insert_index(struct rxe_pool *pool, struct rxe_pool_elem *new) return 0; } -int __rxe_add_index_locked(struct rxe_pool_elem *elem) -{ - struct rxe_pool *pool = elem->pool; - int err; - - elem->index = alloc_index(pool); - err = rxe_insert_index(pool, elem); - - return err; -} - int __rxe_add_index(struct rxe_pool_elem *elem) { struct rxe_pool *pool = elem->pool; int err; write_lock_bh(&pool->pool_lock); - err = __rxe_add_index_locked(elem); + elem->index = alloc_index(pool); + err = rxe_insert_index(pool, elem); write_unlock_bh(&pool->pool_lock); return err; } -void __rxe_drop_index_locked(struct rxe_pool_elem *elem) -{ - struct rxe_pool *pool = elem->pool; - - clear_bit(elem->index - pool->index.min_index, pool->index.table); - rb_erase(&elem->index_node, &pool->index.tree); -} - void __rxe_drop_index(struct rxe_pool_elem *elem) { struct rxe_pool *pool = elem->pool; write_lock_bh(&pool->pool_lock); - __rxe_drop_index_locked(elem); + clear_bit(elem->index - pool->index.min_index, pool->index.table); + rb_erase(&elem->index_node, &pool->index.tree); write_unlock_bh(&pool->pool_lock); } -void *rxe_alloc_locked(struct rxe_pool *pool) -{ - struct rxe_pool_elem *elem; - void *obj; - - if (atomic_inc_return(&pool->num_elem) > pool->max_elem) - goto out_cnt; - - obj = kzalloc(pool->elem_size, GFP_ATOMIC); - if (!obj) - goto out_cnt; - - elem = (struct rxe_pool_elem *)((u8 *)obj + pool->elem_offset); - - elem->pool = pool; - elem->obj = obj; - kref_init(&elem->ref_cnt); - - return obj; - -out_cnt: - atomic_dec(&pool->num_elem); - return NULL; -} - void *rxe_alloc(struct rxe_pool *pool) { struct rxe_pool_elem *elem; @@ -325,12 +283,13 @@ void rxe_elem_release(struct kref *kref) atomic_dec(&pool->num_elem); } -void *rxe_pool_get_index_locked(struct rxe_pool *pool, u32 index) +void *rxe_pool_get_index(struct rxe_pool *pool, u32 index) { struct rb_node *node; struct rxe_pool_elem *elem; void *obj; + read_lock_bh(&pool->pool_lock); node = pool->index.tree.rb_node; while (node) { @@ -350,16 +309,6 @@ void *rxe_pool_get_index_locked(struct rxe_pool *pool, u32 index) } else { obj = NULL; } - - return obj; -} - -void *rxe_pool_get_index(struct rxe_pool *pool, u32 index) -{ - void *obj; - - read_lock_bh(&pool->pool_lock); - obj = rxe_pool_get_index_locked(pool, index); 
read_unlock_bh(&pool->pool_lock); return obj; diff --git a/drivers/infiniband/sw/rxe/rxe_pool.h b/drivers/infiniband/sw/rxe/rxe_pool.h index ca7e5c4c44cf..b7babf4789c7 100644 --- a/drivers/infiniband/sw/rxe/rxe_pool.h +++ b/drivers/infiniband/sw/rxe/rxe_pool.h @@ -68,9 +68,7 @@ int rxe_pool_init(struct rxe_dev *rxe, struct rxe_pool *pool, /* free resources from object pool */ void rxe_pool_cleanup(struct rxe_pool *pool); -/* allocate an object from pool holding and not holding the pool lock */ -void *rxe_alloc_locked(struct rxe_pool *pool); - +/* allocate an object from pool */ void *rxe_alloc(struct rxe_pool *pool); /* connect already allocated object to pool */ @@ -79,32 +77,18 @@ int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_elem *elem); #define rxe_add_to_pool(pool, obj) __rxe_add_to_pool(pool, &(obj)->elem) /* assign an index to an indexed object and insert object into - * pool's rb tree holding and not holding the pool_lock + * pool's rb tree */ -int __rxe_add_index_locked(struct rxe_pool_elem *elem); - -#define rxe_add_index_locked(obj) __rxe_add_index_locked(&(obj)->elem) - int __rxe_add_index(struct rxe_pool_elem *elem); #define rxe_add_index(obj) __rxe_add_index(&(obj)->elem) -/* drop an index and remove object from rb tree - * holding and not holding the pool_lock - */ -void __rxe_drop_index_locked(struct rxe_pool_elem *elem); - -#define rxe_drop_index_locked(obj) __rxe_drop_index_locked(&(obj)->elem) - +/* drop an index and remove object from rb tree */ void __rxe_drop_index(struct rxe_pool_elem *elem); #define rxe_drop_index(obj) __rxe_drop_index(&(obj)->elem) -/* lookup an indexed object from index holding and not holding the pool_lock. - * takes a reference on object - */ -void *rxe_pool_get_index_locked(struct rxe_pool *pool, u32 index); - +/* lookup an indexed object from index. 
takes a reference on object */ void *rxe_pool_get_index(struct rxe_pool *pool, u32 index); /* cleanup an object when all references are dropped */ From patchwork Thu Jan 27 21:37:50 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bob Pearson X-Patchwork-Id: 12727453 From: Bob Pearson To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org Cc: Bob Pearson Subject: [RFC PATCH v9 21/26] RDMA/rxe: Replace obj by elem in declaration Date: Thu, 27 Jan 2022 15:37:50 -0600 Message-Id: <20220127213755.31697-22-rpearsonhpe@gmail.com> X-Mailer: git-send-email 2.32.0 In-Reply-To: <20220127213755.31697-1-rpearsonhpe@gmail.com> References: <20220127213755.31697-1-rpearsonhpe@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org Fix a harmless typo replacing obj by elem in the cleanup fields. This has no effect but is confusing. Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe_pool.c | 2 +- drivers/infiniband/sw/rxe/rxe_pool.h | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c index b3c74988b0e9..a024c3bf8696 100644 --- a/drivers/infiniband/sw/rxe/rxe_pool.c +++ b/drivers/infiniband/sw/rxe/rxe_pool.c @@ -12,7 +12,7 @@ static const struct rxe_type_info { const char *name; size_t size; size_t elem_offset; - void (*cleanup)(struct rxe_pool_elem *obj); + void (*cleanup)(struct rxe_pool_elem *elem); enum rxe_pool_flags flags; u32 min_index; u32 max_index; diff --git a/drivers/infiniband/sw/rxe/rxe_pool.h b/drivers/infiniband/sw/rxe/rxe_pool.h index b7babf4789c7..3d3470d0e3c8 100644 --- a/drivers/infiniband/sw/rxe/rxe_pool.h +++ b/drivers/infiniband/sw/rxe/rxe_pool.h @@ -39,7 +39,7 @@ struct rxe_pool { struct rxe_dev *rxe; const char *name; rwlock_t pool_lock; /* protects pool add/del/search */ - void (*cleanup)(struct rxe_pool_elem *obj); + void (*cleanup)(struct rxe_pool_elem *elem); enum rxe_pool_flags flags; enum rxe_elem_type type; From patchwork Thu Jan 27 21:37:51 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bob Pearson X-Patchwork-Id: 12727455 From: Bob Pearson To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org Cc: Bob Pearson Subject: [RFC PATCH v9 22/26] RDMA/rxe: Replace red-black trees by xarrays Date: Thu, 27 Jan 2022 15:37:51 -0600 Message-Id: <20220127213755.31697-23-rpearsonhpe@gmail.com> X-Mailer: git-send-email 2.32.0 In-Reply-To: <20220127213755.31697-1-rpearsonhpe@gmail.com> References: <20220127213755.31697-1-rpearsonhpe@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org Currently the rxe driver uses red-black trees to add indices to the rxe object pool. Linux xarrays provide a better way to implement the same functionality for indices. This patch replaces red-black trees by xarrays for pool objects.
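For reference, a minimal standalone sketch of the xarray allocating-ID API that replaces the tree (assuming obj points at the object being indexed; the 1..255 range is an arbitrary placeholder):

	struct xarray xa;
	u32 id, next = 0;
	int err;

	xa_init_flags(&xa, XA_FLAGS_ALLOC);	/* enable xa_alloc_*() */

	/* take the next free id in [1, 255], cycling the way the old
	 * alloc_index() did; err < 0 on failure, 1 only means the
	 * id range wrapped
	 */
	err = xa_alloc_cyclic(&xa, &id, obj, XA_LIMIT(1, 255), &next,
			      GFP_KERNEL);

	obj = xa_load(&xa, id);			/* lookup by index */
	xa_erase(&xa, id);			/* drop the index */

The diff below uses the _bh variant (xa_alloc_cyclic_bh), matching the _bh style locking used elsewhere in the driver.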
Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe.c | 86 ++-------- drivers/infiniband/sw/rxe/rxe_mr.c | 1 - drivers/infiniband/sw/rxe/rxe_mw.c | 8 - drivers/infiniband/sw/rxe/rxe_pool.c | 218 +++++++++----------------- drivers/infiniband/sw/rxe/rxe_pool.h | 40 ++--- drivers/infiniband/sw/rxe/rxe_verbs.c | 12 -- 6 files changed, 98 insertions(+), 267 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe.c b/drivers/infiniband/sw/rxe/rxe.c index 74c5521e9b3d..de94947df18f 100644 --- a/drivers/infiniband/sw/rxe/rxe.c +++ b/drivers/infiniband/sw/rxe/rxe.c @@ -114,83 +114,27 @@ static void rxe_init_ports(struct rxe_dev *rxe) } /* init pools of managed objects */ -static int rxe_init_pools(struct rxe_dev *rxe) +static void rxe_init_pools(struct rxe_dev *rxe) { - int err; - - err = rxe_pool_init(rxe, &rxe->uc_pool, RXE_TYPE_UC, - rxe->max_ucontext); - if (err) - goto err1; - - err = rxe_pool_init(rxe, &rxe->pd_pool, RXE_TYPE_PD, - rxe->attr.max_pd); - if (err) - goto err2; - - err = rxe_pool_init(rxe, &rxe->ah_pool, RXE_TYPE_AH, - rxe->attr.max_ah); - if (err) - goto err3; - - err = rxe_pool_init(rxe, &rxe->srq_pool, RXE_TYPE_SRQ, - rxe->attr.max_srq); - if (err) - goto err4; - - err = rxe_pool_init(rxe, &rxe->qp_pool, RXE_TYPE_QP, - rxe->attr.max_qp); - if (err) - goto err5; - - err = rxe_pool_init(rxe, &rxe->cq_pool, RXE_TYPE_CQ, - rxe->attr.max_cq); - if (err) - goto err6; - - err = rxe_pool_init(rxe, &rxe->mr_pool, RXE_TYPE_MR, - rxe->attr.max_mr); - if (err) - goto err7; - - err = rxe_pool_init(rxe, &rxe->mw_pool, RXE_TYPE_MW, - rxe->attr.max_mw); - if (err) - goto err8; - - return 0; - -err8: - rxe_pool_cleanup(&rxe->mr_pool); -err7: - rxe_pool_cleanup(&rxe->cq_pool); -err6: - rxe_pool_cleanup(&rxe->qp_pool); -err5: - rxe_pool_cleanup(&rxe->srq_pool); -err4: - rxe_pool_cleanup(&rxe->ah_pool); -err3: - rxe_pool_cleanup(&rxe->pd_pool); -err2: - rxe_pool_cleanup(&rxe->uc_pool); -err1: - return err; + rxe_pool_init(rxe, &rxe->uc_pool, RXE_TYPE_UC, rxe->max_ucontext); + rxe_pool_init(rxe, &rxe->pd_pool, RXE_TYPE_PD, rxe->attr.max_pd); + rxe_pool_init(rxe, &rxe->ah_pool, RXE_TYPE_AH, rxe->attr.max_ah); + rxe_pool_init(rxe, &rxe->srq_pool, RXE_TYPE_SRQ, rxe->attr.max_srq); + rxe_pool_init(rxe, &rxe->qp_pool, RXE_TYPE_QP, rxe->attr.max_qp); + rxe_pool_init(rxe, &rxe->cq_pool, RXE_TYPE_CQ, rxe->attr.max_cq); + rxe_pool_init(rxe, &rxe->mr_pool, RXE_TYPE_MR, rxe->attr.max_mr); + rxe_pool_init(rxe, &rxe->mw_pool, RXE_TYPE_MW, rxe->attr.max_mw); } /* initialize rxe device state */ -static int rxe_init(struct rxe_dev *rxe) +static void rxe_init(struct rxe_dev *rxe) { - int err; - /* init default device parameters */ rxe_init_device_param(rxe); rxe_init_ports(rxe); - err = rxe_init_pools(rxe); - if (err) - return err; + rxe_init_pools(rxe); spin_lock_init(&rxe->mcg_lock); rxe->mcg_tree = RB_ROOT; @@ -201,8 +145,6 @@ static int rxe_init(struct rxe_dev *rxe) INIT_LIST_HEAD(&rxe->pending_mmaps); mutex_init(&rxe->usdev_lock); - - return 0; } void rxe_set_mtu(struct rxe_dev *rxe, unsigned int ndev_mtu) @@ -224,11 +166,7 @@ void rxe_set_mtu(struct rxe_dev *rxe, unsigned int ndev_mtu) */ int rxe_add(struct rxe_dev *rxe, unsigned int mtu, const char *ibdev_name) { - int err; - - err = rxe_init(rxe); - if (err) - return err; + rxe_init(rxe); rxe_set_mtu(rxe, mtu); diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c index 453ef3c9d535..35628b8a00b4 100644 --- a/drivers/infiniband/sw/rxe/rxe_mr.c +++ b/drivers/infiniband/sw/rxe/rxe_mr.c @@ -691,7 +691,6 @@ int 
rxe_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata) mr->state = RXE_MR_STATE_INVALID; rxe_drop_ref(mr_pd(mr)); - rxe_drop_index(mr); rxe_drop_ref(mr); return 0; diff --git a/drivers/infiniband/sw/rxe/rxe_mw.c b/drivers/infiniband/sw/rxe/rxe_mw.c index 32dd8c0b8b9e..7df36c40eec2 100644 --- a/drivers/infiniband/sw/rxe/rxe_mw.c +++ b/drivers/infiniband/sw/rxe/rxe_mw.c @@ -20,7 +20,6 @@ int rxe_alloc_mw(struct ib_mw *ibmw, struct ib_udata *udata) return ret; } - rxe_add_index(mw); mw->rkey = ibmw->rkey = (mw->elem.index << 8) | rxe_get_next_key(-1); mw->state = (mw->ibmw.type == IB_MW_TYPE_2) ? RXE_MW_STATE_FREE : RXE_MW_STATE_VALID; @@ -329,10 +328,3 @@ struct rxe_mw *rxe_lookup_mw(struct rxe_qp *qp, int access, u32 rkey) return mw; } - -void rxe_mw_cleanup(struct rxe_pool_elem *elem) -{ - struct rxe_mw *mw = container_of(elem, typeof(*mw), elem); - - rxe_drop_index(mw); -} diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c index a024c3bf8696..928bc56b439f 100644 --- a/drivers/infiniband/sw/rxe/rxe_pool.c +++ b/drivers/infiniband/sw/rxe/rxe_pool.c @@ -21,11 +21,15 @@ static const struct rxe_type_info { .name = "rxe-uc", .size = sizeof(struct rxe_ucontext), .elem_offset = offsetof(struct rxe_ucontext, elem), + .min_index = 1, + .max_index = UINT_MAX, }, [RXE_TYPE_PD] = { .name = "rxe-pd", .size = sizeof(struct rxe_pd), .elem_offset = offsetof(struct rxe_pd, elem), + .min_index = 1, + .max_index = UINT_MAX, }, [RXE_TYPE_AH] = { .name = "rxe-ah", @@ -57,6 +61,8 @@ static const struct rxe_type_info { .size = sizeof(struct rxe_cq), .elem_offset = offsetof(struct rxe_cq, elem), .cleanup = rxe_cq_cleanup, + .min_index = 1, + .max_index = UINT_MAX, }, [RXE_TYPE_MR] = { .name = "rxe-mr", @@ -71,44 +77,16 @@ static const struct rxe_type_info { .name = "rxe-mw", .size = sizeof(struct rxe_mw), .elem_offset = offsetof(struct rxe_mw, elem), - .cleanup = rxe_mw_cleanup, .flags = RXE_POOL_INDEX, .min_index = RXE_MIN_MW_INDEX, .max_index = RXE_MAX_MW_INDEX, }, }; -static int rxe_pool_init_index(struct rxe_pool *pool, u32 max, u32 min) -{ - int err = 0; - - if ((max - min + 1) < pool->max_elem) { - pr_warn("not enough indices for max_elem\n"); - err = -EINVAL; - goto out; - } - - pool->index.max_index = max; - pool->index.min_index = min; - - pool->index.table = bitmap_zalloc(max - min + 1, GFP_KERNEL); - if (!pool->index.table) { - err = -ENOMEM; - goto out; - } - -out: - return err; -} - -int rxe_pool_init( - struct rxe_dev *rxe, - struct rxe_pool *pool, - enum rxe_elem_type type, - unsigned int max_elem) +void rxe_pool_init(struct rxe_dev *rxe, struct rxe_pool *pool, + enum rxe_elem_type type, unsigned int max_elem) { const struct rxe_type_info *info = &rxe_type_info[type]; - int err = 0; memset(pool, 0, sizeof(*pool)); @@ -125,110 +103,54 @@ int rxe_pool_init( rwlock_init(&pool->pool_lock); - if (pool->flags & RXE_POOL_INDEX) { - pool->index.tree = RB_ROOT; - err = rxe_pool_init_index(pool, info->max_index, - info->min_index); - if (err) - goto out; - } - -out: - return err; + xa_init_flags(&pool->xa, XA_FLAGS_ALLOC); + pool->limit.max = info->max_index; + pool->limit.min = info->min_index; } void rxe_pool_cleanup(struct rxe_pool *pool) { - if (atomic_read(&pool->num_elem) > 0) - pr_warn("%s pool destroyed with unfree'd elem\n", - pool->name); - - if (pool->flags & RXE_POOL_INDEX) - bitmap_free(pool->index.table); -} - -static u32 alloc_index(struct rxe_pool *pool) -{ - u32 index; - u32 range = pool->index.max_index - pool->index.min_index + 1; - - index = 
find_next_zero_bit(pool->index.table, range, pool->index.last); - if (index >= range) - index = find_first_zero_bit(pool->index.table, range); - - WARN_ON_ONCE(index >= range); - set_bit(index, pool->index.table); - pool->index.last = index; - return index + pool->index.min_index; -} - -static int rxe_insert_index(struct rxe_pool *pool, struct rxe_pool_elem *new) -{ - struct rb_node **link = &pool->index.tree.rb_node; - struct rb_node *parent = NULL; struct rxe_pool_elem *elem; - - while (*link) { - parent = *link; - elem = rb_entry(parent, struct rxe_pool_elem, index_node); - - if (elem->index == new->index) { - pr_warn("element already exists!\n"); - return -EINVAL; + unsigned long index = 0; + unsigned long max = ULONG_MAX; + unsigned int elem_count = 0; + unsigned int free_count = 0; + + do { + elem = xa_find(&pool->xa, &index, max, XA_PRESENT); + if (elem) { + elem_count++; + xa_erase(&pool->xa, index); + if (pool->flags & RXE_POOL_ALLOC) { + kfree(elem->obj); + free_count++; + } } + } while (elem); - if (elem->index > new->index) - link = &(*link)->rb_left; - else - link = &(*link)->rb_right; - } - - rb_link_node(&new->index_node, parent, link); - rb_insert_color(&new->index_node, &pool->index.tree); - - return 0; -} - -int __rxe_add_index(struct rxe_pool_elem *elem) -{ - struct rxe_pool *pool = elem->pool; - int err; - - write_lock_bh(&pool->pool_lock); - elem->index = alloc_index(pool); - err = rxe_insert_index(pool, elem); - write_unlock_bh(&pool->pool_lock); - - return err; -} - -void __rxe_drop_index(struct rxe_pool_elem *elem) -{ - struct rxe_pool *pool = elem->pool; - - write_lock_bh(&pool->pool_lock); - clear_bit(elem->index - pool->index.min_index, pool->index.table); - rb_erase(&elem->index_node, &pool->index.tree); - write_unlock_bh(&pool->pool_lock); + if (elem_count || free_count) + pr_warn("Freed %d indices and %d objects from pool %s\n", + elem_count, free_count, pool->name); } void *rxe_alloc(struct rxe_pool *pool) { struct rxe_pool_elem *elem; void *obj; + int err; if (!(pool->flags & RXE_POOL_ALLOC)) { - pr_warn_once("%s: Pool %s must call rxe_add_to_pool\n", + pr_warn_once("%s: pool %s must call rxe_add_to_pool\n", __func__, pool->name); return NULL; } if (atomic_inc_return(&pool->num_elem) > pool->max_elem) - goto out_cnt; + goto err_cnt; obj = kzalloc(pool->elem_size, GFP_KERNEL); if (!obj) - goto out_cnt; + goto err_cnt; elem = (struct rxe_pool_elem *)((u8 *)obj + pool->elem_offset); @@ -236,36 +158,66 @@ void *rxe_alloc(struct rxe_pool *pool) elem->obj = obj; kref_init(&elem->ref_cnt); + err = xa_alloc_cyclic_bh(&pool->xa, &elem->index, elem, pool->limit, + &pool->next, GFP_KERNEL); + if (err) + goto err_free; + return obj; -out_cnt: +err_free: + kfree(obj); +err_cnt: atomic_dec(&pool->num_elem); return NULL; } int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_elem *elem) { + int err; + if (pool->flags & RXE_POOL_ALLOC) { - pr_warn_once("%s: Pool %s must call rxe_alloc\n", + pr_warn_once("%s: pool %s must call rxe_alloc\n", __func__, pool->name); return -EINVAL; } if (atomic_inc_return(&pool->num_elem) > pool->max_elem) - goto out_cnt; + goto err_cnt; elem->pool = pool; elem->obj = (u8 *)elem - pool->elem_offset; kref_init(&elem->ref_cnt); + err = xa_alloc_cyclic_bh(&pool->xa, &elem->index, elem, pool->limit, + &pool->next, GFP_KERNEL); + if (err) + goto err_cnt; + return 0; -out_cnt: +err_cnt: atomic_dec(&pool->num_elem); return -EINVAL; } -void rxe_elem_release(struct kref *kref) +void *rxe_pool_get_index(struct rxe_pool *pool, u32 index) +{ + 
struct rxe_pool_elem *elem; + void *obj; + + read_lock_bh(&pool->pool_lock); + elem = xa_load(&pool->xa, index); + if (elem && kref_get_unless_zero(&elem->ref_cnt)) + obj = elem->obj; + else + obj = NULL; + read_unlock_bh(&pool->pool_lock); + + return obj; +} + +static void rxe_elem_release(struct kref *kref) { struct rxe_pool_elem *elem = container_of(kref, struct rxe_pool_elem, ref_cnt); @@ -280,36 +232,16 @@ void rxe_elem_release(struct kref *kref) kfree(obj); } + xa_erase(&pool->xa, elem->index); atomic_dec(&pool->num_elem); } -void *rxe_pool_get_index(struct rxe_pool *pool, u32 index) +int __rxe_add_ref(struct rxe_pool_elem *elem) { - struct rb_node *node; - struct rxe_pool_elem *elem; - void *obj; - - read_lock_bh(&pool->pool_lock); - node = pool->index.tree.rb_node; - - while (node) { - elem = rb_entry(node, struct rxe_pool_elem, index_node); - - if (elem->index > index) - node = node->rb_left; - else if (elem->index < index) - node = node->rb_right; - else - break; - } - - if (node) { - kref_get(&elem->ref_cnt); - obj = elem->obj; - } else { - obj = NULL; - } - read_unlock_bh(&pool->pool_lock); + return kref_get_unless_zero(&elem->ref_cnt); +} - return obj; +int __rxe_drop_ref(struct rxe_pool_elem *elem) +{ + return kref_put(&elem->ref_cnt, rxe_elem_release); } diff --git a/drivers/infiniband/sw/rxe/rxe_pool.h b/drivers/infiniband/sw/rxe/rxe_pool.h index 3d3470d0e3c8..c985ed519066 100644 --- a/drivers/infiniband/sw/rxe/rxe_pool.h +++ b/drivers/infiniband/sw/rxe/rxe_pool.h @@ -29,9 +29,6 @@ struct rxe_pool_elem { void *obj; struct kref ref_cnt; struct list_head list; - - /* only used if indexed */ - struct rb_node index_node; u32 index; }; @@ -48,21 +45,17 @@ struct rxe_pool { size_t elem_size; size_t elem_offset; - /* only used if indexed */ - struct { - struct rb_root tree; - unsigned long *table; - u32 last; - u32 max_index; - u32 min_index; - } index; + struct xarray xa; + struct xa_limit limit; + u32 next; + int locked; /* ?? */ }; /* initialize a pool of objects with given limit on * number of elements. gets parameters from rxe_type_info * pool elements will be allocated out of a slab cache */ -int rxe_pool_init(struct rxe_dev *rxe, struct rxe_pool *pool, +void rxe_pool_init(struct rxe_dev *rxe, struct rxe_pool *pool, enum rxe_elem_type type, u32 max_elem); /* free resources from object pool */ @@ -76,28 +69,17 @@ int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_elem *elem); #define rxe_add_to_pool(pool, obj) __rxe_add_to_pool(pool, &(obj)->elem) -/* assign an index to an indexed object and insert object into - * pool's rb tree - */ -int __rxe_add_index(struct rxe_pool_elem *elem); - -#define rxe_add_index(obj) __rxe_add_index(&(obj)->elem) - -/* drop an index and remove object from rb tree */ -void __rxe_drop_index(struct rxe_pool_elem *elem); - -#define rxe_drop_index(obj) __rxe_drop_index(&(obj)->elem) - /* lookup an indexed object from index. 
takes a reference on object */ void *rxe_pool_get_index(struct rxe_pool *pool, u32 index); -/* cleanup an object when all references are dropped */ -void rxe_elem_release(struct kref *kref); - /* take a reference on an object */ -#define rxe_add_ref(obj) kref_get(&(obj)->elem.ref_cnt) +int __rxe_add_ref(struct rxe_pool_elem *elem); + +#define rxe_add_ref(obj) __rxe_add_ref(&(obj)->elem) /* drop a reference on an object */ -#define rxe_drop_ref(obj) kref_put(&(obj)->elem.ref_cnt, rxe_elem_release) +int __rxe_drop_ref(struct rxe_pool_elem *elem); + +#define rxe_drop_ref(obj) __rxe_drop_ref(&(obj)->elem) #endif /* RXE_POOL_H */ diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c index 9f0aef4b649d..3ca374f1cf9b 100644 --- a/drivers/infiniband/sw/rxe/rxe_verbs.c +++ b/drivers/infiniband/sw/rxe/rxe_verbs.c @@ -181,7 +181,6 @@ static int rxe_create_ah(struct ib_ah *ibah, return err; /* create index > 0 */ - rxe_add_index(ah); ah->ah_num = ah->elem.index; if (uresp) { @@ -189,7 +188,6 @@ static int rxe_create_ah(struct ib_ah *ibah, err = copy_to_user(&uresp->ah_num, &ah->ah_num, sizeof(uresp->ah_num)); if (err) { - rxe_drop_index(ah); rxe_drop_ref(ah); return -EFAULT; } @@ -230,7 +228,6 @@ static int rxe_destroy_ah(struct ib_ah *ibah, u32 flags) { struct rxe_ah *ah = to_rah(ibah); - rxe_drop_index(ah); rxe_drop_ref(ah); return 0; } @@ -437,7 +434,6 @@ static int rxe_create_qp(struct ib_qp *ibqp, struct ib_qp_init_attr *init, if (err) return err; - rxe_add_index(qp); err = rxe_qp_from_init(rxe, qp, pd, init, uresp, ibqp->pd, udata); if (err) goto qp_init; @@ -445,7 +441,6 @@ static int rxe_create_qp(struct ib_qp *ibqp, struct ib_qp_init_attr *init, return 0; qp_init: - rxe_drop_index(qp); rxe_drop_ref(qp); return err; } @@ -500,7 +495,6 @@ static int rxe_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata) return ret; rxe_qp_destroy(qp); - rxe_drop_index(qp); rxe_drop_ref(qp); return 0; } @@ -903,7 +897,6 @@ static struct ib_mr *rxe_get_dma_mr(struct ib_pd *ibpd, int access) if (!mr) return ERR_PTR(-ENOMEM); - rxe_add_index(mr); rxe_add_ref(pd); rxe_mr_init_dma(pd, access, mr); @@ -927,7 +920,6 @@ static struct ib_mr *rxe_reg_user_mr(struct ib_pd *ibpd, goto err2; } - rxe_add_index(mr); rxe_add_ref(pd); @@ -939,7 +931,6 @@ static struct ib_mr *rxe_reg_user_mr(struct ib_pd *ibpd, err3: rxe_drop_ref(pd); - rxe_drop_index(mr); rxe_drop_ref(mr); err2: return ERR_PTR(err); @@ -962,8 +953,6 @@ static struct ib_mr *rxe_alloc_mr(struct ib_pd *ibpd, enum ib_mr_type mr_type, goto err1; } - rxe_add_index(mr); - rxe_add_ref(pd); err = rxe_mr_init_fast(pd, max_num_sg, mr); @@ -974,7 +963,6 @@ static struct ib_mr *rxe_alloc_mr(struct ib_pd *ibpd, enum ib_mr_type mr_type, err2: rxe_drop_ref(pd); - rxe_drop_index(mr); rxe_drop_ref(mr); err1: return ERR_PTR(err); From patchwork Thu Jan 27 21:37:52 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bob Pearson X-Patchwork-Id: 12727454 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 98E3EC4332F for ; Thu, 27 Jan 2022 21:38:24 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1344447AbiA0ViX (ORCPT ); Thu, 27 Jan 2022 16:38:23 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37428 "EHLO lindbergh.monkeyblade.net" 
rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1344445AbiA0ViW (ORCPT ); Thu, 27 Jan 2022 16:38:22 -0500
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [RFC PATCH v9 23/26] RDMA/rxe: Change pool locking to RCU
Date: Thu, 27 Jan 2022 15:37:52 -0600
Message-Id: <20220127213755.31697-24-rpearsonhpe@gmail.com>
X-Mailer: git-send-email 2.32.0
In-Reply-To: <20220127213755.31697-1-rpearsonhpe@gmail.com>
References: <20220127213755.31697-1-rpearsonhpe@gmail.com>

The previous patch replaced the red-black trees in the rxe object pools with xarrays. This patch changes the pool locking to RCU: read-side operations are protected by rcu_read_lock() while write-side operations, which all come from verbs API calls, are protected by the xa_lock spinlock.
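As an illustrative sketch (hypothetical names; the real code is rxe_pool_get_index() in the diff below), the read path now takes no lock at all. It relies on rcu_read_lock() plus kref_get_unless_zero(), which is safe only because the write side defers the actual kfree() through call_rcu():

	#include <linux/kref.h>
	#include <linux/rcupdate.h>
	#include <linux/xarray.h>

	struct elem {			/* stand-in for struct rxe_pool_elem */
		struct kref ref_cnt;
	};

	static struct elem *pool_get(struct xarray *xa, unsigned long index)
	{
		struct elem *e;

		rcu_read_lock();
		e = xa_load(xa, index);	/* xarray lookups are safe under RCU */
		if (e && !kref_get_unless_zero(&e->ref_cnt))
			e = NULL;	/* found, but the last reference is already gone */
		rcu_read_unlock();

		return e;		/* non-NULL means the caller now holds a reference */
	}

If the refcount has already dropped to zero the object may be queued for freeing, so the lookup must fail rather than resurrect it; the RCU grace period only guarantees the memory stays valid long enough to test the kref.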
Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe_pool.c | 50 +++++++++++++++------------ drivers/infiniband/sw/rxe/rxe_pool.h | 19 ++-------- drivers/infiniband/sw/rxe/rxe_verbs.h | 1 + 3 files changed, 30 insertions(+), 40 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c index 928bc56b439f..18cdf5e0ad4e 100644 --- a/drivers/infiniband/sw/rxe/rxe_pool.c +++ b/drivers/infiniband/sw/rxe/rxe_pool.c @@ -1,5 +1,6 @@ // SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB /* + * Copyright (c) 2022 Hewlett Packard Enterprise, Inc. All rights reserved. * Copyright (c) 2016 Mellanox Technologies Ltd. All rights reserved. * Copyright (c) 2015 System Fabric Works, Inc. All rights reserved. */ @@ -35,7 +36,6 @@ static const struct rxe_type_info { .name = "rxe-ah", .size = sizeof(struct rxe_ah), .elem_offset = offsetof(struct rxe_ah, elem), - .flags = RXE_POOL_INDEX, .min_index = RXE_MIN_AH_INDEX, .max_index = RXE_MAX_AH_INDEX, }, @@ -43,7 +43,6 @@ static const struct rxe_type_info { .name = "rxe-srq", .size = sizeof(struct rxe_srq), .elem_offset = offsetof(struct rxe_srq, elem), - .flags = RXE_POOL_INDEX, .min_index = RXE_MIN_SRQ_INDEX, .max_index = RXE_MAX_SRQ_INDEX, }, @@ -52,7 +51,6 @@ static const struct rxe_type_info { .size = sizeof(struct rxe_qp), .elem_offset = offsetof(struct rxe_qp, elem), .cleanup = rxe_qp_cleanup, - .flags = RXE_POOL_INDEX, .min_index = RXE_MIN_QP_INDEX, .max_index = RXE_MAX_QP_INDEX, }, @@ -69,7 +67,7 @@ static const struct rxe_type_info { .size = sizeof(struct rxe_mr), .elem_offset = offsetof(struct rxe_mr, elem), .cleanup = rxe_mr_cleanup, - .flags = RXE_POOL_INDEX | RXE_POOL_ALLOC, + .flags = RXE_POOL_ALLOC, .min_index = RXE_MIN_MR_INDEX, .max_index = RXE_MAX_MR_INDEX, }, @@ -77,7 +75,6 @@ static const struct rxe_type_info { .name = "rxe-mw", .size = sizeof(struct rxe_mw), .elem_offset = offsetof(struct rxe_mw, elem), - .flags = RXE_POOL_INDEX, .min_index = RXE_MIN_MW_INDEX, .max_index = RXE_MAX_MW_INDEX, }, @@ -100,14 +97,14 @@ void rxe_pool_init(struct rxe_dev *rxe, struct rxe_pool *pool, pool->cleanup = info->cleanup; atomic_set(&pool->num_elem, 0); - - rwlock_init(&pool->pool_lock); + spin_lock_init(&pool->xa.xa_lock); xa_init_flags(&pool->xa, XA_FLAGS_ALLOC); pool->limit.max = info->max_index; pool->limit.min = info->min_index; } +/* runs single threaded at driver shutdown */ void rxe_pool_cleanup(struct rxe_pool *pool) { struct rxe_pool_elem *elem; @@ -204,36 +201,42 @@ int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_elem *elem) void *rxe_pool_get_index(struct rxe_pool *pool, u32 index) { struct rxe_pool_elem *elem; - void *obj; + void *obj = NULL; - read_lock_bh(&pool->pool_lock); + rcu_read_lock(); elem = xa_load(&pool->xa, index); if (elem && kref_get_unless_zero(&elem->ref_cnt)) obj = elem->obj; - else - obj = NULL; - read_unlock_bh(&pool->pool_lock); + rcu_read_unlock(); return obj; } -static void rxe_elem_release(struct kref *kref) +static void rxe_obj_free_rcu(struct rcu_head *rcu) { - struct rxe_pool_elem *elem = - container_of(kref, struct rxe_pool_elem, ref_cnt); + struct rxe_pool_elem *elem = container_of(rcu, typeof(*elem), rcu); + + kfree(elem->obj); +} + +static void __rxe_elem_release_rcu(struct kref *kref) + __releases(&pool->xa.xa_lock) +{ + struct rxe_pool_elem *elem = container_of(kref, + struct rxe_pool_elem, ref_cnt); struct rxe_pool *pool = elem->pool; - void *obj; + + __xa_erase(&pool->xa, elem->index); + + spin_unlock(&pool->xa.xa_lock); if (pool->cleanup) 
pool->cleanup(elem); - if (pool->flags & RXE_POOL_ALLOC) { - obj = elem->obj; - kfree(obj); - } - - xa_erase(&pool->xa, elem->index); atomic_dec(&pool->num_elem); + + if (pool->flags & RXE_POOL_ALLOC) + call_rcu(&elem->rcu, rxe_obj_free_rcu); } int __rxe_add_ref(struct rxe_pool_elem *elem) @@ -243,5 +246,6 @@ int __rxe_add_ref(struct rxe_pool_elem *elem) int __rxe_drop_ref(struct rxe_pool_elem *elem) { - return kref_put(&elem->ref_cnt, rxe_elem_release); + return kref_put_lock(&elem->ref_cnt, __rxe_elem_release_rcu, + &elem->pool->xa.xa_lock); } diff --git a/drivers/infiniband/sw/rxe/rxe_pool.h b/drivers/infiniband/sw/rxe/rxe_pool.h index c985ed519066..40026d746563 100644 --- a/drivers/infiniband/sw/rxe/rxe_pool.h +++ b/drivers/infiniband/sw/rxe/rxe_pool.h @@ -8,8 +8,7 @@ #define RXE_POOL_H enum rxe_pool_flags { - RXE_POOL_INDEX = BIT(1), - RXE_POOL_ALLOC = BIT(2), + RXE_POOL_ALLOC = BIT(1), }; enum rxe_elem_type { @@ -29,13 +28,13 @@ struct rxe_pool_elem { void *obj; struct kref ref_cnt; struct list_head list; + struct rcu_head rcu; u32 index; }; struct rxe_pool { struct rxe_dev *rxe; const char *name; - rwlock_t pool_lock; /* protects pool add/del/search */ void (*cleanup)(struct rxe_pool_elem *elem); enum rxe_pool_flags flags; enum rxe_elem_type type; @@ -48,38 +47,24 @@ struct rxe_pool { struct xarray xa; struct xa_limit limit; u32 next; - int locked; /* ?? */ }; -/* initialize a pool of objects with given limit on - * number of elements. gets parameters from rxe_type_info - * pool elements will be allocated out of a slab cache - */ void rxe_pool_init(struct rxe_dev *rxe, struct rxe_pool *pool, enum rxe_elem_type type, u32 max_elem); -/* free resources from object pool */ void rxe_pool_cleanup(struct rxe_pool *pool); -/* allocate an object from pool */ void *rxe_alloc(struct rxe_pool *pool); -/* connect already allocated object to pool */ int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_elem *elem); - #define rxe_add_to_pool(pool, obj) __rxe_add_to_pool(pool, &(obj)->elem) -/* lookup an indexed object from index. 
takes a reference on object */ void *rxe_pool_get_index(struct rxe_pool *pool, u32 index); -/* take a reference on an object */ int __rxe_add_ref(struct rxe_pool_elem *elem); - #define rxe_add_ref(obj) __rxe_add_ref(&(obj)->elem) -/* drop a reference on an object */ int __rxe_drop_ref(struct rxe_pool_elem *elem); - #define rxe_drop_ref(obj) __rxe_drop_ref(&(obj)->elem) #endif /* RXE_POOL_H */ diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h index 12bff190fc1f..d70d44392c32 100644 --- a/drivers/infiniband/sw/rxe/rxe_verbs.h +++ b/drivers/infiniband/sw/rxe/rxe_verbs.h @@ -309,6 +309,7 @@ static inline int rkey_is_mw(u32 rkey) struct rxe_mr { struct rxe_pool_elem elem; struct ib_mr ibmr; + struct rcu_head rcu; struct ib_umem *umem; From patchwork Thu Jan 27 21:37:53 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bob Pearson X-Patchwork-Id: 12727456 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id D7BDAC41535 for ; Thu, 27 Jan 2022 21:38:24 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1344426AbiA0ViX (ORCPT ); Thu, 27 Jan 2022 16:38:23 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37438 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1344432AbiA0ViX (ORCPT ); Thu, 27 Jan 2022 16:38:23 -0500 Received: from mail-oi1-x234.google.com (mail-oi1-x234.google.com [IPv6:2607:f8b0:4864:20::234]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id ED138C06173B for ; Thu, 27 Jan 2022 13:38:22 -0800 (PST) Received: by mail-oi1-x234.google.com with SMTP id x193so8661974oix.0 for ; Thu, 27 Jan 2022 13:38:22 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=YTTT87wMC4v2jPPK4ulEKzEXnIs47mB9ujkCoKuWINg=; b=mudim/pERMGohkODQTe71eF9V0whlWq1K4ng2LvbHUYf+Z4eS7/loqk9PAJTOuGTg0 netxdPdBhDDGReWFxEwh+uzQaCBTSvfw8mNJJFJrfJCaWAMbRolMEChnT8qbiK9atXIx fTrF+dgPnnY1AyxWhmaKCLyZdTxTqFAWdkvcuirS0WAAkQ6TgnS5rIaKXuI5vRDrWGcn gjunB8YVF1km+g/oh1J0MzqBiD6s+2cH5PHzw/XJ10e8MqeRGaXIkz/BFPjKugP0nZ7g jz9ux3rOtlTRffaCppbdCBr+buaPojcVZ0X8wtzl7AUXHX5FpJp01G0AGCghGcZpgbjo MmJw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=YTTT87wMC4v2jPPK4ulEKzEXnIs47mB9ujkCoKuWINg=; b=HbI1uzSoCIUumjEAGPD1VUtGCceAgGVgcuejpmWHVutocbxI5pFj7Y8vpsIXfoK968 cewZ1Do9UMTCfVlFcXaNCRHFncIbcAL7WEXa/dXW0N69/CXzZeRqIJMzuDRd9TfV4NaZ 6N/jaYj0hpa1j4hBLqx3hUQ6r23lbZoRoFBi88eXinDBeVw5+G/1J6JUtaK0xBAYypkS GMTEgS/57bGKQKpo80wkJg9bsKvkqzxGKkiAt7NZ6ZfKV+xjKUl8Ydr8EpC1Lo93H4tW 3wBVLh/PyThxV+RlWpI6wCnJI34RCRJ71EJZ7UqO5GfLjNWyLVPJUzviO/vA1iajI8BD 5c4A== X-Gm-Message-State: AOAM531UujQqZRHhNJvvNyDOuKXFG7usgRj04xNA+AuI1UZHcFAb2tIH JZlgBvQ/EHu5JiJKzeU89l0= X-Google-Smtp-Source: ABdhPJxmXDom2yQaGVZWtkX5XqK/qJzkpwoGS9GwKOV6/Q+Oh2JqtnA/Vr2vg0xKYIjol9iTZXBFeQ== X-Received: by 2002:a05:6808:1645:: with SMTP id az5mr6708328oib.313.1643319502374; Thu, 27 Jan 2022 13:38:22 -0800 (PST) Received: from ubuntu-21.tx.rr.com (097-099-248-255.res.spectrum.com. 
[97.99.248.255]) by smtp.googlemail.com with ESMTPSA id v32sm3994677ooj.45.2022.01.27.13.38.21 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 27 Jan 2022 13:38:21 -0800 (PST) From: Bob Pearson To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org Cc: Bob Pearson Subject: [RFC PATCH v9 24/26] RDMA/rxe: Add wait_for_completion to pool objects Date: Thu, 27 Jan 2022 15:37:53 -0600 Message-Id: <20220127213755.31697-25-rpearsonhpe@gmail.com> X-Mailer: git-send-email 2.32.0 In-Reply-To: <20220127213755.31697-1-rpearsonhpe@gmail.com> References: <20220127213755.31697-1-rpearsonhpe@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org Reference counting for object deletion can cause an object to wait for something else to happen before an object gets deleted. The destroy verbs can then return to rdma-core with the object still holding references. Adding wait_for_completion in this path prevents this. Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe_mr.c | 1 + drivers/infiniband/sw/rxe/rxe_mw.c | 3 +- drivers/infiniband/sw/rxe/rxe_pool.c | 79 ++++++++++++++++++++++----- drivers/infiniband/sw/rxe/rxe_pool.h | 4 ++ drivers/infiniband/sw/rxe/rxe_verbs.c | 11 ++++ 5 files changed, 84 insertions(+), 14 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c index 35628b8a00b4..6d1ce05bcf65 100644 --- a/drivers/infiniband/sw/rxe/rxe_mr.c +++ b/drivers/infiniband/sw/rxe/rxe_mr.c @@ -692,6 +692,7 @@ int rxe_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata) mr->state = RXE_MR_STATE_INVALID; rxe_drop_ref(mr_pd(mr)); rxe_drop_ref(mr); + rxe_wait(mr); return 0; } diff --git a/drivers/infiniband/sw/rxe/rxe_mw.c b/drivers/infiniband/sw/rxe/rxe_mw.c index 7df36c40eec2..dd3d02db3d03 100644 --- a/drivers/infiniband/sw/rxe/rxe_mw.c +++ b/drivers/infiniband/sw/rxe/rxe_mw.c @@ -60,8 +60,9 @@ int rxe_dealloc_mw(struct ib_mw *ibmw) rxe_do_dealloc_mw(mw); spin_unlock_bh(&mw->lock); - rxe_drop_ref(mw); rxe_drop_ref(pd); + rxe_drop_ref(mw); + rxe_wait(mw); return 0; } diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c index 18cdf5e0ad4e..5402dae01554 100644 --- a/drivers/infiniband/sw/rxe/rxe_pool.c +++ b/drivers/infiniband/sw/rxe/rxe_pool.c @@ -7,6 +7,7 @@ #include "rxe.h" +#define RXE_POOL_TIMEOUT (200) #define RXE_POOL_ALIGN (16) static const struct rxe_type_info { @@ -154,6 +155,7 @@ void *rxe_alloc(struct rxe_pool *pool) elem->pool = pool; elem->obj = obj; kref_init(&elem->ref_cnt); + init_completion(&elem->complete); err = xa_alloc_cyclic_bh(&pool->xa, &elem->index, elem, pool->limit, &pool->next, GFP_KERNEL); @@ -185,6 +187,7 @@ int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_elem *elem) elem->pool = pool; elem->obj = (u8 *)elem - pool->elem_offset; kref_init(&elem->ref_cnt); + init_completion(&elem->complete); err = xa_alloc_cyclic_bh(&pool->xa, &elem->index, elem, pool->limit, &pool->next, GFP_KERNEL); @@ -212,31 +215,22 @@ void *rxe_pool_get_index(struct rxe_pool *pool, u32 index) return obj; } -static void rxe_obj_free_rcu(struct rcu_head *rcu) -{ - struct rxe_pool_elem *elem = container_of(rcu, typeof(*elem), rcu); - - kfree(elem->obj); -} - static void __rxe_elem_release_rcu(struct kref *kref) __releases(&pool->xa.xa_lock) { - struct rxe_pool_elem *elem = container_of(kref, - struct rxe_pool_elem, ref_cnt); + struct rxe_pool_elem *elem = container_of(kref, typeof(*elem), ref_cnt); struct rxe_pool *pool = elem->pool; 
__xa_erase(&pool->xa, elem->index); - spin_unlock(&pool->xa.xa_lock); + spin_unlock_bh(&pool->xa.xa_lock); if (pool->cleanup) pool->cleanup(elem); atomic_dec(&pool->num_elem); - if (pool->flags & RXE_POOL_ALLOC) - call_rcu(&elem->rcu, rxe_obj_free_rcu); + complete(&elem->complete); } int __rxe_add_ref(struct rxe_pool_elem *elem) @@ -244,8 +238,67 @@ int __rxe_add_ref(struct rxe_pool_elem *elem) return kref_get_unless_zero(&elem->ref_cnt); } +static bool refcount_dec_and_lock_bh(refcount_t *r, spinlock_t *lock) + __acquires(lock) __releases(lock) +{ + if (refcount_dec_not_one(r)) + return false; + + spin_lock_bh(lock); + if (!refcount_dec_and_test(r)) { + spin_unlock_bh(lock); + return false; + } + + return true; +} + +static int kref_put_lock_bh(struct kref *kref, + void (*release)(struct kref *kref), + spinlock_t *lock) +{ + if (refcount_dec_and_lock_bh(&kref->refcount, lock)) { + release(kref); + return 1; + } + return 0; +} + int __rxe_drop_ref(struct rxe_pool_elem *elem) { - return kref_put_lock(&elem->ref_cnt, __rxe_elem_release_rcu, + return kref_put_lock_bh(&elem->ref_cnt, __rxe_elem_release_rcu, &elem->pool->xa.xa_lock); } + +static void rxe_obj_free_rcu(struct rcu_head *rcu) +{ + struct rxe_pool_elem *elem = container_of(rcu, typeof(*elem), rcu); + + kfree(elem->obj); +} + +int __rxe_wait(struct rxe_pool_elem *elem) +{ + struct rxe_pool *pool = elem->pool; + static int timeout = RXE_POOL_TIMEOUT; + static int timeout_failures; + int ret = 0; + + if (timeout) { + ret = wait_for_completion_timeout(&elem->complete, timeout); + if (!ret) { + if (timeout_failures++ == 5) { + timeout = 0; + pr_warn("Exceeded max completion timeouts. Disabling wait_for_completion\n"); + } else { + pr_warn_ratelimited("Timed out waiting for %s#%d to complete\n", + pool->name + 4, elem->index); + } + } + } + + if (pool->flags & RXE_POOL_ALLOC) + call_rcu(&elem->rcu, rxe_obj_free_rcu); + + return ret; +} diff --git a/drivers/infiniband/sw/rxe/rxe_pool.h b/drivers/infiniband/sw/rxe/rxe_pool.h index 40026d746563..f085750c4c5a 100644 --- a/drivers/infiniband/sw/rxe/rxe_pool.h +++ b/drivers/infiniband/sw/rxe/rxe_pool.h @@ -29,6 +29,7 @@ struct rxe_pool_elem { struct kref ref_cnt; struct list_head list; struct rcu_head rcu; + struct completion complete; u32 index; }; @@ -67,4 +68,7 @@ int __rxe_add_ref(struct rxe_pool_elem *elem); int __rxe_drop_ref(struct rxe_pool_elem *elem); #define rxe_drop_ref(obj) __rxe_drop_ref(&(obj)->elem) +int __rxe_wait(struct rxe_pool_elem *elem); +#define rxe_wait(obj) __rxe_wait(&(obj)->elem) + #endif /* RXE_POOL_H */ diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c index 3ca374f1cf9b..f2c1037696c5 100644 --- a/drivers/infiniband/sw/rxe/rxe_verbs.c +++ b/drivers/infiniband/sw/rxe/rxe_verbs.c @@ -116,6 +116,7 @@ static void rxe_dealloc_ucontext(struct ib_ucontext *ibuc) struct rxe_ucontext *uc = to_ruc(ibuc); rxe_drop_ref(uc); + rxe_wait(uc); } static int rxe_port_immutable(struct ib_device *dev, u32 port_num, @@ -150,6 +151,7 @@ static int rxe_dealloc_pd(struct ib_pd *ibpd, struct ib_udata *udata) struct rxe_pd *pd = to_rpd(ibpd); rxe_drop_ref(pd); + rxe_wait(pd); return 0; } @@ -189,6 +191,7 @@ static int rxe_create_ah(struct ib_ah *ibah, sizeof(uresp->ah_num)); if (err) { rxe_drop_ref(ah); + rxe_wait(ah); return -EFAULT; } } else if (ah->is_user) { @@ -229,6 +232,7 @@ static int rxe_destroy_ah(struct ib_ah *ibah, u32 flags) struct rxe_ah *ah = to_rah(ibah); rxe_drop_ref(ah); + rxe_wait(ah); return 0; } @@ -315,6 +319,7 @@ static int
rxe_create_srq(struct ib_srq *ibsrq, struct ib_srq_init_attr *init, err2: rxe_drop_ref(pd); rxe_drop_ref(srq); + rxe_wait(srq); err1: return err; } @@ -373,6 +378,7 @@ static int rxe_destroy_srq(struct ib_srq *ibsrq, struct ib_udata *udata) rxe_drop_ref(srq->pd); rxe_drop_ref(srq); + rxe_wait(srq); return 0; } @@ -442,6 +448,7 @@ static int rxe_create_qp(struct ib_qp *ibqp, struct ib_qp_init_attr *init, qp_init: rxe_drop_ref(qp); + rxe_wait(qp); return err; } @@ -496,6 +503,7 @@ static int rxe_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata) rxe_qp_destroy(qp); rxe_drop_ref(qp); + rxe_wait(qp); return 0; } @@ -807,6 +815,7 @@ static int rxe_destroy_cq(struct ib_cq *ibcq, struct ib_udata *udata) rxe_cq_disable(cq); rxe_drop_ref(cq); + rxe_wait(cq); return 0; } @@ -932,6 +941,7 @@ static struct ib_mr *rxe_reg_user_mr(struct ib_pd *ibpd, err3: rxe_drop_ref(pd); rxe_drop_ref(mr); + rxe_wait(mr); err2: return ERR_PTR(err); } @@ -964,6 +974,7 @@ static struct ib_mr *rxe_alloc_mr(struct ib_pd *ibpd, enum ib_mr_type mr_type, err2: rxe_drop_ref(pd); rxe_drop_ref(mr); + rxe_wait(mr); err1: return ERR_PTR(err); } From patchwork Thu Jan 27 21:37:54 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bob Pearson X-Patchwork-Id: 12727457 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id EAD62C433F5 for ; Thu, 27 Jan 2022 21:38:25 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1344424AbiA0ViY (ORCPT ); Thu, 27 Jan 2022 16:38:24 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37458 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1344419AbiA0ViX (ORCPT ); Thu, 27 Jan 2022 16:38:23 -0500 Received: from mail-oi1-x235.google.com (mail-oi1-x235.google.com [IPv6:2607:f8b0:4864:20::235]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id BD8BEC061748 for ; Thu, 27 Jan 2022 13:38:23 -0800 (PST) Received: by mail-oi1-x235.google.com with SMTP id s9so8492894oib.11 for ; Thu, 27 Jan 2022 13:38:23 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=NfM6reqAdf0gdHXElwBXQfSnLF2WGcKRIDOZUdXrjbo=; b=gBsPXTgvXhOT8vbdV68txvTL5s3RWs5bVEd1OCoXJeSMdyK3nzRtFcXlDDHpwKneI0 4oK9+SDiL3K6YLu0j803tNtTiSKEHNJO72V6rIy2VEhVN0XrM7wDREyGDD58eMQp/RG8 39Njk9ENZKkCodsv6XkYA0hCOVz/u6tHXnIfxq5tcJEkxa6cOyb9wU52PI8RE6ypw1Gt EYwl/K2rZO0OtjF0U24FPKqknTJb16bw4Q1DBi+kdFlD06wRjPbYCsE81nAgDrCn+HK9 sbH6EIO3i3p0gu5PO0rGf489odUf7u7VysvwrTA61vsuSi8Wmx8CEz/KtIQpf7OvGDGA mCwA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=NfM6reqAdf0gdHXElwBXQfSnLF2WGcKRIDOZUdXrjbo=; b=1+W0ZQ9dxb8XopiD2yYU5nVk8ctiwXFSCgjl8aolB9nfBhgmS5mu/2ZZb58VEPyvIa nzDUiSiXMF5u73mosPDVOT1pzf1iEFPyRLFZN1mT7HUWanUgbRGcCXlTslUWARHUlB0s SOS/wmFQnpXgxsrRGLeWuzYZTy3QjLEHmsElpKj0cHKRozSt/IbSxAy8De7S0QK1J1ko 2+Vz2YWeqBAzQxxx9mUXz4pRZykNL9UqY96F6arMAUeCSFNY+VasqGzpk+R5xy5kieGJ 5g8kB/9sUUPqEEvjAybsnCmuc4rLytAp1znWwrA/sHFbOCONe8MZadYUHZVDdxx2t3fQ XSjw== X-Gm-Message-State: AOAM530BblChLwdNsJlUMYjz6SaPWm0E79xINWyZToTscNGXvCDwyLtV 
1WQpf2WXiKczmyArnlJ8hqI= X-Google-Smtp-Source: ABdhPJzxl/mm2PFYJWr/GE/st5IF6PFOlxGVyjyTc24N+RzOwqr91c9uVetJ2I2s/j2CBWxPyFR/qQ== X-Received: by 2002:a05:6808:198f:: with SMTP id bj15mr3293003oib.119.1643319503129; Thu, 27 Jan 2022 13:38:23 -0800 (PST) Received: from ubuntu-21.tx.rr.com (097-099-248-255.res.spectrum.com. [97.99.248.255]) by smtp.googlemail.com with ESMTPSA id v32sm3994677ooj.45.2022.01.27.13.38.22 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 27 Jan 2022 13:38:22 -0800 (PST) From: Bob Pearson To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org Cc: Bob Pearson Subject: [RFC PATCH v9 25/26] RDMA/rxe: Fix ref error in rxe_av.c Date: Thu, 27 Jan 2022 15:37:54 -0600 Message-Id: <20220127213755.31697-26-rpearsonhpe@gmail.com> X-Mailer: git-send-email 2.32.0 In-Reply-To: <20220127213755.31697-1-rpearsonhpe@gmail.com> References: <20220127213755.31697-1-rpearsonhpe@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org The commit referenced below can take a reference to the AH which is never dropped. This only happens in the UD request path. This patch optionally passes that AH back to the caller so that it can hold the reference while the AV is being accessed and then drop it. Code to do this is added to rxe_req.c. The AV is also passed to rxe_prepare in rxe_net.c as an optimization. Fixes: e2fe06c90806 ("RDMA/rxe: Lookup kernel AH from ah index in UD WQEs") Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe_av.c | 19 +++++++++- drivers/infiniband/sw/rxe/rxe_loc.h | 5 ++- drivers/infiniband/sw/rxe/rxe_net.c | 17 +++++---- drivers/infiniband/sw/rxe/rxe_req.c | 55 +++++++++++++++++----------- drivers/infiniband/sw/rxe/rxe_resp.c | 2 +- 5 files changed, 63 insertions(+), 35 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe_av.c b/drivers/infiniband/sw/rxe/rxe_av.c index 38c7b6fb39d7..360a567159fe 100644 --- a/drivers/infiniband/sw/rxe/rxe_av.c +++ b/drivers/infiniband/sw/rxe/rxe_av.c @@ -99,11 +99,14 @@ void rxe_av_fill_ip_info(struct rxe_av *av, struct rdma_ah_attr *attr) av->network_type = type; } -struct rxe_av *rxe_get_av(struct rxe_pkt_info *pkt) +struct rxe_av *rxe_get_av(struct rxe_pkt_info *pkt, struct rxe_ah **ahp) { struct rxe_ah *ah; u32 ah_num; + if (ahp) + *ahp = NULL; + if (!pkt || !pkt->qp) return NULL; @@ -117,10 +120,22 @@ struct rxe_av *rxe_get_av(struct rxe_pkt_info *pkt) if (ah_num) { /* only new user provider or kernel client */ ah = rxe_pool_get_index(&pkt->rxe->ah_pool, ah_num); - if (!ah || ah->ah_num != ah_num || rxe_ah_pd(ah) != pkt->qp->pd) { + if (!ah) { pr_warn("Unable to find AH matching ah_num\n"); return NULL; } + + if (rxe_ah_pd(ah) != pkt->qp->pd) { + pr_warn("PDs don't match for AH and QP\n"); + rxe_drop_ref(ah); + return NULL; + } + + if (ahp) + *ahp = ah; + else + rxe_drop_ref(ah); + return &ah->av; } diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h index 0bc1b7e2877c..31a052c5d5f8 100644 --- a/drivers/infiniband/sw/rxe/rxe_loc.h +++ b/drivers/infiniband/sw/rxe/rxe_loc.h @@ -19,7 +19,7 @@ void rxe_av_to_attr(struct rxe_av *av, struct rdma_ah_attr *attr); void rxe_av_fill_ip_info(struct rxe_av *av, struct rdma_ah_attr *attr); -struct rxe_av *rxe_get_av(struct rxe_pkt_info *pkt); +struct rxe_av *rxe_get_av(struct rxe_pkt_info *pkt, struct rxe_ah **ahp); /* rxe_cq.c */ int rxe_cq_chk_attr(struct rxe_dev *rxe, struct rxe_cq *cq, @@ -95,7 +95,8 @@ void rxe_mw_cleanup(struct rxe_pool_elem *arg); /* rxe_net.c */ struct 
sk_buff *rxe_init_packet(struct rxe_dev *rxe, struct rxe_av *av, int paylen, struct rxe_pkt_info *pkt); -int rxe_prepare(struct rxe_pkt_info *pkt, struct sk_buff *skb); +int rxe_prepare(struct rxe_av *av, struct rxe_pkt_info *pkt, + struct sk_buff *skb); int rxe_xmit_packet(struct rxe_qp *qp, struct rxe_pkt_info *pkt, struct sk_buff *skb); const char *rxe_parent_name(struct rxe_dev *rxe, unsigned int port_num); diff --git a/drivers/infiniband/sw/rxe/rxe_net.c b/drivers/infiniband/sw/rxe/rxe_net.c index a8cfa7160478..b06f22ffc5a8 100644 --- a/drivers/infiniband/sw/rxe/rxe_net.c +++ b/drivers/infiniband/sw/rxe/rxe_net.c @@ -271,13 +271,13 @@ static void prepare_ipv6_hdr(struct dst_entry *dst, struct sk_buff *skb, ip6h->payload_len = htons(skb->len - sizeof(*ip6h)); } -static int prepare4(struct rxe_pkt_info *pkt, struct sk_buff *skb) +static int prepare4(struct rxe_av *av, struct rxe_pkt_info *pkt, + struct sk_buff *skb) { struct rxe_qp *qp = pkt->qp; struct dst_entry *dst; bool xnet = false; __be16 df = htons(IP_DF); - struct rxe_av *av = rxe_get_av(pkt); struct in_addr *saddr = &av->sgid_addr._sockaddr_in.sin_addr; struct in_addr *daddr = &av->dgid_addr._sockaddr_in.sin_addr; @@ -297,11 +297,11 @@ static int prepare4(struct rxe_pkt_info *pkt, struct sk_buff *skb) return 0; } -static int prepare6(struct rxe_pkt_info *pkt, struct sk_buff *skb) +static int prepare6(struct rxe_av *av, struct rxe_pkt_info *pkt, + struct sk_buff *skb) { struct rxe_qp *qp = pkt->qp; struct dst_entry *dst; - struct rxe_av *av = rxe_get_av(pkt); struct in6_addr *saddr = &av->sgid_addr._sockaddr_in6.sin6_addr; struct in6_addr *daddr = &av->dgid_addr._sockaddr_in6.sin6_addr; @@ -322,16 +322,17 @@ static int prepare6(struct rxe_pkt_info *pkt, struct sk_buff *skb) return 0; } -int rxe_prepare(struct rxe_pkt_info *pkt, struct sk_buff *skb) +int rxe_prepare(struct rxe_av *av, struct rxe_pkt_info *pkt, + struct sk_buff *skb) { int err = 0; if (skb->protocol == htons(ETH_P_IP)) - err = prepare4(pkt, skb); + err = prepare4(av, pkt, skb); else if (skb->protocol == htons(ETH_P_IPV6)) - err = prepare6(pkt, skb); + err = prepare6(av, pkt, skb); - if (ether_addr_equal(skb->dev->dev_addr, rxe_get_av(pkt)->dmac)) + if (ether_addr_equal(skb->dev->dev_addr, av->dmac)) pkt->mask |= RXE_LOOPBACK_MASK; return err; diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c index 5eb89052dd66..f44535f82bea 100644 --- a/drivers/infiniband/sw/rxe/rxe_req.c +++ b/drivers/infiniband/sw/rxe/rxe_req.c @@ -358,6 +358,7 @@ static inline int get_mtu(struct rxe_qp *qp) } static struct sk_buff *init_req_packet(struct rxe_qp *qp, + struct rxe_av *av, struct rxe_send_wqe *wqe, int opcode, int payload, struct rxe_pkt_info *pkt) @@ -365,7 +366,6 @@ static struct sk_buff *init_req_packet(struct rxe_qp *qp, struct rxe_dev *rxe = to_rdev(qp->ibqp.device); struct sk_buff *skb; struct rxe_send_wr *ibwr = &wqe->wr; - struct rxe_av *av; int pad = (-payload) & 0x3; int paylen; int solicited; @@ -374,21 +374,9 @@ static struct sk_buff *init_req_packet(struct rxe_qp *qp, /* length from start of bth to end of icrc */ paylen = rxe_opcode[opcode].length + payload + pad + RXE_ICRC_SIZE; - - /* pkt->hdr, port_num and mask are initialized in ifc layer */ - pkt->rxe = rxe; - pkt->opcode = opcode; - pkt->qp = qp; - pkt->psn = qp->req.psn; - pkt->mask = rxe_opcode[opcode].mask; - pkt->paylen = paylen; - pkt->wqe = wqe; + pkt->paylen = paylen; /* init skb */ - av = rxe_get_av(pkt); - if (!av) - return NULL; - skb = rxe_init_packet(rxe, av, 
paylen, pkt); if (unlikely(!skb)) return NULL; @@ -447,13 +435,13 @@ static struct sk_buff *init_req_packet(struct rxe_qp *qp, return skb; } -static int finish_packet(struct rxe_qp *qp, struct rxe_send_wqe *wqe, - struct rxe_pkt_info *pkt, struct sk_buff *skb, - int paylen) +static int finish_packet(struct rxe_qp *qp, struct rxe_av *av, + struct rxe_send_wqe *wqe, struct rxe_pkt_info *pkt, + struct sk_buff *skb, int paylen) { int err; - err = rxe_prepare(pkt, skb); + err = rxe_prepare(av, pkt, skb); if (err) return err; @@ -608,6 +596,7 @@ static int rxe_do_local_ops(struct rxe_qp *qp, struct rxe_send_wqe *wqe) int rxe_requester(void *arg) { struct rxe_qp *qp = (struct rxe_qp *)arg; + struct rxe_dev *rxe = to_rdev(qp->ibqp.device); struct rxe_pkt_info pkt; struct sk_buff *skb; struct rxe_send_wqe *wqe; @@ -619,6 +608,8 @@ int rxe_requester(void *arg) struct rxe_send_wqe rollback_wqe; u32 rollback_psn; struct rxe_queue *q = qp->sq.queue; + struct rxe_ah *ah; + struct rxe_av *av; rxe_add_ref(qp); @@ -705,14 +696,28 @@ int rxe_requester(void *arg) payload = mtu; } - skb = init_req_packet(qp, wqe, opcode, payload, &pkt); + pkt.rxe = rxe; + pkt.opcode = opcode; + pkt.qp = qp; + pkt.psn = qp->req.psn; + pkt.mask = rxe_opcode[opcode].mask; + pkt.wqe = wqe; + + av = rxe_get_av(&pkt, &ah); + if (unlikely(!av)) { + pr_err("qp#%d Failed no address vector\n", qp_num(qp)); + wqe->status = IB_WC_LOC_QP_OP_ERR; + goto err_drop_ah; + } + + skb = init_req_packet(qp, av, wqe, opcode, payload, &pkt); if (unlikely(!skb)) { pr_err("qp#%d Failed allocating skb\n", qp_num(qp)); wqe->status = IB_WC_LOC_QP_OP_ERR; - goto err; + goto err_drop_ah; } - ret = finish_packet(qp, wqe, &pkt, skb, payload); + ret = finish_packet(qp, av, wqe, &pkt, skb, payload); if (unlikely(ret)) { pr_debug("qp#%d Error during finish packet\n", qp_num(qp)); if (ret == -EFAULT) @@ -720,9 +725,12 @@ int rxe_requester(void *arg) else wqe->status = IB_WC_LOC_QP_OP_ERR; kfree_skb(skb); - goto err; + goto err_drop_ah; } + if (ah) + rxe_drop_ref(ah); + /* * To prevent a race on wqe access between requester and completer, * wqe members state and psn need to be set before calling @@ -751,6 +759,9 @@ int rxe_requester(void *arg) goto next_wqe; +err_drop_ah: + if (ah) + rxe_drop_ref(ah); err: wqe->state = wqe_state_error; __rxe_do_task(&qp->comp.task); diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c index e8f435fa6e4d..f589f4dde35c 100644 --- a/drivers/infiniband/sw/rxe/rxe_resp.c +++ b/drivers/infiniband/sw/rxe/rxe_resp.c @@ -632,7 +632,7 @@ static struct sk_buff *prepare_ack_packet(struct rxe_qp *qp, if (ack->mask & RXE_ATMACK_MASK) atmack_set_orig(ack, qp->resp.atomic_orig); - err = rxe_prepare(ack, skb); + err = rxe_prepare(&qp->pri_av, ack, skb); if (err) { kfree_skb(skb); return NULL; From patchwork Thu Jan 27 21:37:55 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bob Pearson X-Patchwork-Id: 12727458 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 86C87C433EF for ; Thu, 27 Jan 2022 21:38:26 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1344437AbiA0ViZ (ORCPT ); Thu, 27 Jan 2022 16:38:25 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37466 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) 
by vger.kernel.org with ESMTP id S1344445AbiA0ViY (ORCPT ); Thu, 27 Jan 2022 16:38:24 -0500
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [RFC PATCH v9 26/26] RDMA/rxe: Replace mr by rkey in responder resources
Date: Thu, 27 Jan 2022 15:37:55 -0600
Message-Id: <20220127213755.31697-27-rpearsonhpe@gmail.com>
X-Mailer: git-send-email 2.32.0
In-Reply-To: <20220127213755.31697-1-rpearsonhpe@gmail.com>
References: <20220127213755.31697-1-rpearsonhpe@gmail.com>

Currently rxe saves a copy of the MR in the responder resources for RDMA reads. Since the responder resources are never freed, only overwritten when more are needed, that MR may hold its reference until the QP is destroyed. This patch stores the rkey instead of the MR and, on each subsequent packet of a multi-packet read reply message, looks the MR up from the rkey. This makes it possible for a user to deregister an MR or unbind an MW on the fly and still get correct behaviour.
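The per-packet lookup relies on the rkey layout rxe already uses: the pool index sits in the upper 24 bits of the rkey and the low byte is the key variant. A condensed sketch of the MR branch of rxe_recheck_mr() from the diff below (not verbatim; this version also drops the reference when the rkey no longer matches):

	static struct rxe_mr *lookup_mr_by_rkey(struct rxe_dev *rxe, u32 rkey)
	{
		/* rkey = (index << 8) | key, so the pool index is rkey >> 8 */
		struct rxe_mr *mr = rxe_pool_get_index(&rxe->mr_pool, rkey >> 8);

		if (!mr)
			return NULL;

		/* the index may have been recycled or the key bumped; recheck both */
		if (mr->rkey != rkey || mr->state != RXE_MR_STATE_VALID) {
			rxe_drop_ref(mr);
			return NULL;
		}

		return mr;
	}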
Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe_qp.c | 10 +-- drivers/infiniband/sw/rxe/rxe_resp.c | 123 ++++++++++++++++++-------- drivers/infiniband/sw/rxe/rxe_verbs.h | 1 - 3 files changed, 87 insertions(+), 47 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c index 742073ce0709..c595a140e893 100644 --- a/drivers/infiniband/sw/rxe/rxe_qp.c +++ b/drivers/infiniband/sw/rxe/rxe_qp.c @@ -135,12 +135,8 @@ static void free_rd_atomic_resources(struct rxe_qp *qp) void free_rd_atomic_resource(struct rxe_qp *qp, struct resp_res *res) { - if (res->type == RXE_ATOMIC_MASK) { + if (res->type == RXE_ATOMIC_MASK) kfree_skb(res->atomic.skb); - } else if (res->type == RXE_READ_MASK) { - if (res->read.mr) - rxe_drop_ref(res->read.mr); - } res->type = 0; } @@ -825,10 +821,8 @@ static void rxe_qp_do_cleanup(struct work_struct *work) if (qp->pd) rxe_drop_ref(qp->pd); - if (qp->resp.mr) { + if (qp->resp.mr) rxe_drop_ref(qp->resp.mr); - qp->resp.mr = NULL; - } if (qp_type(qp) == IB_QPT_RC) sk_dst_reset(qp->sk->sk); diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c index f589f4dde35c..c776289842e5 100644 --- a/drivers/infiniband/sw/rxe/rxe_resp.c +++ b/drivers/infiniband/sw/rxe/rxe_resp.c @@ -641,6 +641,78 @@ static struct sk_buff *prepare_ack_packet(struct rxe_qp *qp, return skb; } +static struct resp_res *rxe_prepare_read_res(struct rxe_qp *qp, + struct rxe_pkt_info *pkt) +{ + struct resp_res *res; + u32 pkts; + + res = &qp->resp.resources[qp->resp.res_head]; + rxe_advance_resp_resource(qp); + free_rd_atomic_resource(qp, res); + + res->type = RXE_READ_MASK; + res->replay = 0; + res->read.va = qp->resp.va + qp->resp.offset; + res->read.va_org = qp->resp.va + qp->resp.offset; + res->read.resid = qp->resp.resid; + res->read.length = qp->resp.resid; + res->read.rkey = qp->resp.rkey; + + pkts = max_t(u32, (reth_len(pkt) + qp->mtu - 1)/qp->mtu, 1); + res->first_psn = pkt->psn; + res->cur_psn = pkt->psn; + res->last_psn = (pkt->psn + pkts - 1) & BTH_PSN_MASK; + + res->state = rdatm_res_state_new; + + return res; +} + +/** + * rxe_recheck_mr - revalidate MR from rkey and get a reference + * @qp: the qp + * @rkey: the rkey + * + * This code allows the MR to be invalidated or deregistered or + * the MW if one was used to be invalidated or deallocated. + * It is assumed that the access permissions if originally good + * are OK and the mappings to be unchanged. + * + * Return: mr on success else NULL + */ +static struct rxe_mr *rxe_recheck_mr(struct rxe_qp *qp, u32 rkey) +{ + struct rxe_dev *rxe = to_rdev(qp->ibqp.device); + struct rxe_mr *mr; + struct rxe_mw *mw; + + if (rkey_is_mw(rkey)) { + mw = rxe_pool_get_index(&rxe->mw_pool, rkey >> 8); + if (!mw || mw->rkey != rkey) + return NULL; + + if (mw->state != RXE_MW_STATE_VALID) { + rxe_drop_ref(mw); + return NULL; + } + + mr = mw->mr; + rxe_drop_ref(mw); + } else { + mr = rxe_pool_get_index(&rxe->mr_pool, rkey >> 8); + if (!mr || mr->rkey != rkey) + return NULL; + } + + if (mr->state != RXE_MR_STATE_VALID) { + rxe_drop_ref(mr); + return NULL; + } + + return mr; +} + /* RDMA read response. If res is not NULL, then we have a current RDMA request * being processed or replayed. */ @@ -655,53 +727,26 @@ static enum resp_states read_reply(struct rxe_qp *qp, int opcode; int err; struct resp_res *res = qp->resp.res; + struct rxe_mr *mr; if (!res) { - /* This is the first time we process that request. 
Get a - * resource - */ - res = &qp->resp.resources[qp->resp.res_head]; - - free_rd_atomic_resource(qp, res); - rxe_advance_resp_resource(qp); - - res->type = RXE_READ_MASK; - res->replay = 0; - - res->read.va = qp->resp.va + - qp->resp.offset; - res->read.va_org = qp->resp.va + - qp->resp.offset; - - res->first_psn = req_pkt->psn; - - if (reth_len(req_pkt)) { - res->last_psn = (req_pkt->psn + - (reth_len(req_pkt) + mtu - 1) / - mtu - 1) & BTH_PSN_MASK; - } else { - res->last_psn = res->first_psn; - } - res->cur_psn = req_pkt->psn; - - res->read.resid = qp->resp.resid; - res->read.length = qp->resp.resid; - res->read.rkey = qp->resp.rkey; - - /* note res inherits the reference to mr from qp */ - res->read.mr = qp->resp.mr; - qp->resp.mr = NULL; - - qp->resp.res = res; - res->state = rdatm_res_state_new; + res = rxe_prepare_read_res(qp, req_pkt); + qp->resp.res = res; } if (res->state == rdatm_res_state_new) { + mr = qp->resp.mr; + qp->resp.mr = NULL; + if (res->read.resid <= mtu) opcode = IB_OPCODE_RC_RDMA_READ_RESPONSE_ONLY; else opcode = IB_OPCODE_RC_RDMA_READ_RESPONSE_FIRST; } else { + mr = rxe_recheck_mr(qp, res->read.rkey); + if (!mr) + return RESPST_ERR_RKEY_VIOLATION; + if (res->read.resid > mtu) opcode = IB_OPCODE_RC_RDMA_READ_RESPONSE_MIDDLE; else @@ -717,10 +762,12 @@ static enum resp_states read_reply(struct rxe_qp *qp, if (!skb) return RESPST_ERR_RNR; - err = rxe_mr_copy(res->read.mr, res->read.va, payload_addr(&ack_pkt), + err = rxe_mr_copy(mr, res->read.va, payload_addr(&ack_pkt), payload, RXE_FROM_MR_OBJ); if (err) pr_err("Failed copying memory\n"); + if (mr) + rxe_drop_ref(mr); if (bth_pad(&ack_pkt)) { u8 *pad = payload_addr(&ack_pkt) + payload; diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h index d70d44392c32..81996e5af079 100644 --- a/drivers/infiniband/sw/rxe/rxe_verbs.h +++ b/drivers/infiniband/sw/rxe/rxe_verbs.h @@ -157,7 +157,6 @@ struct resp_res { struct sk_buff *skb; } atomic; struct { - struct rxe_mr *mr; u64 va_org; u32 rkey; u32 length;