From patchwork Mon Jan 31 22:08:34 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12731256
X-Patchwork-Delegate: jgg@ziepe.ca
From: Bob Pearson <rpearsonhpe@gmail.com>
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson <rpearsonhpe@gmail.com>
Subject: [PATCH for-next v10 01/17] RDMA/rxe: Move rxe_mcast_add/delete to rxe_mcast.c
Date: Mon, 31 Jan 2022 16:08:34 -0600
Message-Id: <20220131220849.10170-2-rpearsonhpe@gmail.com>
In-Reply-To: <20220131220849.10170-1-rpearsonhpe@gmail.com>
References: <20220131220849.10170-1-rpearsonhpe@gmail.com>

Move rxe_mcast_add and rxe_mcast_delete from rxe_net.c to rxe_mcast.c,
make them static, and remove their declarations from rxe_loc.h.

Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
---
 drivers/infiniband/sw/rxe/rxe_loc.h   |  2 --
 drivers/infiniband/sw/rxe/rxe_mcast.c | 18 ++++++++++++++++++
 drivers/infiniband/sw/rxe/rxe_net.c   | 18 ------------------
 3 files changed, 18 insertions(+), 20 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index b1e174afb1d4..bcec33c3c3b7 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -106,8 +106,6 @@ int rxe_prepare(struct rxe_pkt_info *pkt, struct sk_buff *skb);
 int rxe_xmit_packet(struct rxe_qp *qp, struct rxe_pkt_info *pkt,
 		    struct sk_buff *skb);
 const char *rxe_parent_name(struct rxe_dev *rxe, unsigned int port_num);
-int rxe_mcast_add(struct rxe_dev *rxe, union ib_gid *mgid);
-int rxe_mcast_delete(struct rxe_dev *rxe, union ib_gid *mgid);
 
 /* rxe_qp.c */
 int rxe_qp_chk_init(struct rxe_dev *rxe, struct ib_qp_init_attr *init);
diff --git a/drivers/infiniband/sw/rxe/rxe_mcast.c b/drivers/infiniband/sw/rxe/rxe_mcast.c
index bd1ac88b8700..e5689c161984 100644
--- a/drivers/infiniband/sw/rxe/rxe_mcast.c
+++ b/drivers/infiniband/sw/rxe/rxe_mcast.c
@@ -7,6 +7,24 @@
 #include "rxe.h"
 #include "rxe_loc.h"
 
+static int rxe_mcast_add(struct rxe_dev *rxe, union ib_gid *mgid)
+{
+	unsigned char ll_addr[ETH_ALEN];
+
+	ipv6_eth_mc_map((struct in6_addr *)mgid->raw, ll_addr);
+
+	return dev_mc_add(rxe->ndev, ll_addr);
+}
+
+static int rxe_mcast_delete(struct rxe_dev *rxe, union ib_gid *mgid)
+{
+	unsigned char ll_addr[ETH_ALEN];
+
+	ipv6_eth_mc_map((struct in6_addr *)mgid->raw, ll_addr);
+
+	return dev_mc_del(rxe->ndev, ll_addr);
+}
+
 /* caller should hold mc_grp_pool->pool_lock */
 static struct rxe_mc_grp *create_grp(struct rxe_dev *rxe,
 				     struct rxe_pool *pool,
diff --git a/drivers/infiniband/sw/rxe/rxe_net.c b/drivers/infiniband/sw/rxe/rxe_net.c
index be72bdbfb4ba..a8cfa7160478 100644
--- a/drivers/infiniband/sw/rxe/rxe_net.c
+++ b/drivers/infiniband/sw/rxe/rxe_net.c
@@ -20,24 +20,6 @@
 
 static struct rxe_recv_sockets recv_sockets;
 
-int rxe_mcast_add(struct rxe_dev *rxe, union ib_gid *mgid)
-{
-	unsigned char ll_addr[ETH_ALEN];
-
-	ipv6_eth_mc_map((struct in6_addr *)mgid->raw, ll_addr);
-
-	return dev_mc_add(rxe->ndev, ll_addr);
-}
-
-int rxe_mcast_delete(struct rxe_dev *rxe, union ib_gid *mgid)
-{
-	unsigned char ll_addr[ETH_ALEN];
-
-	ipv6_eth_mc_map((struct in6_addr *)mgid->raw, ll_addr);
-
-	return dev_mc_del(rxe->ndev, ll_addr);
-}
-
 static struct dst_entry *rxe_find_route4(struct net_device *ndev,
 					 struct in_addr *saddr,
 					 struct in_addr *daddr)
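For readers following the series, ipv6_eth_mc_map() used above applies the
standard RFC 2464 mapping from an IPv6 multicast address (here, the raw
bytes of the MGID) to an Ethernet multicast MAC: 33:33 followed by the
low-order 32 bits of the group address. A minimal user-space sketch of the
same computation, with a made-up example GID (illustrative only, not the
kernel code):

	#include <stdio.h>
	#include <string.h>

	/* MAC = 33:33 + last four bytes of the 128-bit group address */
	static void mcast_gid_to_mac(const unsigned char gid[16],
				     unsigned char mac[6])
	{
		mac[0] = 0x33;
		mac[1] = 0x33;
		memcpy(mac + 2, gid + 12, 4);
	}

	int main(void)
	{
		/* hypothetical example GID, ff0e::0101-style */
		unsigned char gid[16] = { 0xff, 0x0e, [12] = 0x00, 0x00, 0x01, 0x01 };
		unsigned char mac[6];

		mcast_gid_to_mac(gid, mac);
		printf("%02x:%02x:%02x:%02x:%02x:%02x\n",
		       mac[0], mac[1], mac[2], mac[3], mac[4], mac[5]);
		return 0;
	}

This is why attaching a QP to a multicast GID also programs the underlying
net_device's MAC filter via dev_mc_add().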
From patchwork Mon Jan 31 22:08:35 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12731258
X-Patchwork-Delegate: jgg@ziepe.ca
From: Bob Pearson <rpearsonhpe@gmail.com>
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson <rpearsonhpe@gmail.com>
Subject: [PATCH for-next v10 02/17] RDMA/rxe: Move rxe_mcast_attach/detach to rxe_mcast.c
Date: Mon, 31 Jan 2022 16:08:35 -0600
Message-Id: <20220131220849.10170-3-rpearsonhpe@gmail.com>
In-Reply-To: <20220131220849.10170-1-rpearsonhpe@gmail.com>
References: <20220131220849.10170-1-rpearsonhpe@gmail.com>

Move rxe_mcast_attach and rxe_mcast_detach from rxe_verbs.c to
rxe_mcast.c, make them non-static, and add declarations to rxe_loc.h.
Make the subroutines in rxe_mcast.c referenced by these routines static
and remove their declarations from rxe_loc.h.

Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
---
 drivers/infiniband/sw/rxe/rxe_loc.h   | 12 ++-------
 drivers/infiniband/sw/rxe/rxe_mcast.c | 36 +++++++++++++++++++++++----
 drivers/infiniband/sw/rxe/rxe_verbs.c | 26 -------------------
 3 files changed, 33 insertions(+), 41 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index bcec33c3c3b7..dc606241f0d6 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -40,18 +40,10 @@ void rxe_cq_disable(struct rxe_cq *cq);
 void rxe_cq_cleanup(struct rxe_pool_elem *arg);
 
 /* rxe_mcast.c */
-int rxe_mcast_get_grp(struct rxe_dev *rxe, union ib_gid *mgid,
-		      struct rxe_mc_grp **grp_p);
-
-int rxe_mcast_add_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp,
-			   struct rxe_mc_grp *grp);
-
-int rxe_mcast_drop_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp,
-			    union ib_gid *mgid);
-
 void rxe_drop_all_mcast_groups(struct rxe_qp *qp);
-
 void rxe_mc_cleanup(struct rxe_pool_elem *arg);
+int rxe_attach_mcast(struct ib_qp *ibqp, union ib_gid *mgid, u16 mlid);
+int rxe_detach_mcast(struct ib_qp *ibqp, union ib_gid *mgid, u16 mlid);
 
 /* rxe_mmap.c */
 struct rxe_mmap_info {
diff --git a/drivers/infiniband/sw/rxe/rxe_mcast.c b/drivers/infiniband/sw/rxe/rxe_mcast.c
index e5689c161984..f86e32f4e77f 100644
--- a/drivers/infiniband/sw/rxe/rxe_mcast.c
+++ b/drivers/infiniband/sw/rxe/rxe_mcast.c
@@ -52,8 +52,8 @@ static struct rxe_mc_grp *create_grp(struct rxe_dev *rxe,
 	return grp;
 }
 
-int rxe_mcast_get_grp(struct rxe_dev *rxe, union ib_gid *mgid,
-		      struct rxe_mc_grp **grp_p)
+static int rxe_mcast_get_grp(struct rxe_dev *rxe, union ib_gid *mgid,
+			     struct rxe_mc_grp **grp_p)
 {
 	int err;
 	struct rxe_mc_grp *grp;
@@ -81,7 +81,7 @@ int rxe_mcast_get_grp(struct rxe_dev *rxe, union ib_gid *mgid,
 	return 0;
 }
 
-int rxe_mcast_add_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp,
+static int rxe_mcast_add_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp,
 			   struct rxe_mc_grp *grp)
 {
 	int err;
@@ -125,8 +125,8 @@ int rxe_mcast_add_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp,
 	return err;
 }
 
-int rxe_mcast_drop_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp,
-			    union ib_gid *mgid)
+static int rxe_mcast_drop_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp,
+				   union ib_gid *mgid)
 {
 	struct rxe_mc_grp *grp;
 	struct rxe_mc_elem *elem, *tmp;
@@ -194,3 +194,29 @@ void rxe_mc_cleanup(struct rxe_pool_elem *elem)
 	rxe_drop_key(grp);
 	rxe_mcast_delete(rxe, &grp->mgid);
 }
+
+int rxe_attach_mcast(struct ib_qp *ibqp, union ib_gid *mgid, u16 mlid)
+{
+	int err;
+	struct rxe_dev *rxe = to_rdev(ibqp->device);
+	struct rxe_qp *qp = to_rqp(ibqp);
+	struct rxe_mc_grp *grp;
+
+	/* takes a ref on grp if successful */
+	err = rxe_mcast_get_grp(rxe, mgid, &grp);
+	if (err)
+		return err;
+
+	err = rxe_mcast_add_grp_elem(rxe, qp, grp);
+
+	rxe_drop_ref(grp);
+	return err;
+}
+
+int rxe_detach_mcast(struct ib_qp *ibqp, union ib_gid *mgid, u16 mlid)
+{
+	struct rxe_dev *rxe = to_rdev(ibqp->device);
+	struct rxe_qp *qp = to_rqp(ibqp);
+
+	return rxe_mcast_drop_grp_elem(rxe, qp, mgid);
+}
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
index 915ad6664321..f7682541f9af 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
@@ -999,32 +999,6 @@ static int rxe_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg,
 	return n;
 }
 
-static int rxe_attach_mcast(struct ib_qp *ibqp, union ib_gid *mgid, u16 mlid)
-{
-	int err;
-	struct rxe_dev *rxe = to_rdev(ibqp->device);
-	struct rxe_qp *qp = to_rqp(ibqp);
-	struct rxe_mc_grp *grp;
-
-	/* takes a ref on grp if successful */
-	err = rxe_mcast_get_grp(rxe, mgid, &grp);
-	if (err)
-		return err;
-
-	err = rxe_mcast_add_grp_elem(rxe, qp, grp);
-
-	rxe_drop_ref(grp);
-	return err;
-}
-
-static int rxe_detach_mcast(struct ib_qp *ibqp, union ib_gid *mgid, u16 mlid)
-{
-	struct rxe_dev *rxe = to_rdev(ibqp->device);
-	struct rxe_qp *qp = to_rqp(ibqp);
-
-	return rxe_mcast_drop_grp_elem(rxe, qp, mgid);
-}
-
 static ssize_t parent_show(struct device *device,
 			   struct device_attribute *attr, char *buf)
 {
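rxe_attach_mcast()/rxe_detach_mcast() are the driver's implementations of
the attach_mcast/detach_mcast verbs, reached from user space through the
ib_core uverbs layer. A hedged sketch of how a libibverbs consumer would
exercise this path; the GID value is a made-up example and 'qp' is assumed
to be an existing UD QP:

	#include <infiniband/verbs.h>

	static int join_and_leave(struct ibv_qp *qp)
	{
		/* hypothetical multicast GID; mlid is unused by RoCE/rxe */
		union ibv_gid mgid = { .raw = { 0xff, 0x0e, [15] = 0x01 } };
		uint16_t mlid = 0;
		int err;

		err = ibv_attach_mcast(qp, &mgid, mlid);
		if (err)
			return err;

		/* ... receive traffic sent to the group ... */

		return ibv_detach_mcast(qp, &mgid, mlid);
	}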
From patchwork Mon Jan 31 22:08:36 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12731257
X-Patchwork-Delegate: jgg@ziepe.ca
From: Bob Pearson <rpearsonhpe@gmail.com>
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson <rpearsonhpe@gmail.com>
Subject: [PATCH for-next v10 03/17] RDMA/rxe: Rename rxe_mc_grp and rxe_mc_elem
Date: Mon, 31 Jan 2022 16:08:36 -0600
Message-Id: <20220131220849.10170-4-rpearsonhpe@gmail.com>
In-Reply-To: <20220131220849.10170-1-rpearsonhpe@gmail.com>
References: <20220131220849.10170-1-rpearsonhpe@gmail.com>

Rename rxe_mc_grp to rxe_mcg. Rename rxe_mc_elem to rxe_mca. These can
be read 'multicast group' and 'multicast attachment'. 'elem' collided
with the use of elem in rxe pools and was a little confusing.

Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
---
 drivers/infiniband/sw/rxe/rxe_mcast.c | 26 +++++++++++++-------------
 drivers/infiniband/sw/rxe/rxe_pool.c  | 10 +++++-----
 drivers/infiniband/sw/rxe/rxe_recv.c  |  4 ++--
 drivers/infiniband/sw/rxe/rxe_verbs.h |  6 +++---
 4 files changed, 23 insertions(+), 23 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_mcast.c b/drivers/infiniband/sw/rxe/rxe_mcast.c
index f86e32f4e77f..949784198d80 100644
--- a/drivers/infiniband/sw/rxe/rxe_mcast.c
+++ b/drivers/infiniband/sw/rxe/rxe_mcast.c
@@ -26,12 +26,12 @@ static int rxe_mcast_delete(struct rxe_dev *rxe, union ib_gid *mgid)
 }
 
 /* caller should hold mc_grp_pool->pool_lock */
-static struct rxe_mc_grp *create_grp(struct rxe_dev *rxe,
+static struct rxe_mcg *create_grp(struct rxe_dev *rxe,
 				     struct rxe_pool *pool,
 				     union ib_gid *mgid)
 {
 	int err;
-	struct rxe_mc_grp *grp;
+	struct rxe_mcg *grp;
 
 	grp = rxe_alloc_locked(&rxe->mc_grp_pool);
 	if (!grp)
@@ -53,10 +53,10 @@ static struct rxe_mc_grp *create_grp(struct rxe_dev *rxe,
 }
 
 static int rxe_mcast_get_grp(struct rxe_dev *rxe, union ib_gid *mgid,
-			     struct rxe_mc_grp **grp_p)
+			     struct rxe_mcg **grp_p)
 {
 	int err;
-	struct rxe_mc_grp *grp;
+	struct rxe_mcg *grp;
 	struct rxe_pool *pool = &rxe->mc_grp_pool;
 
 	if (rxe->attr.max_mcast_qp_attach == 0)
@@ -82,10 +82,10 @@
 static int rxe_mcast_add_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp,
-				  struct rxe_mc_grp *grp)
+				  struct rxe_mcg *grp)
 {
 	int err;
-	struct rxe_mc_elem *elem;
+	struct rxe_mca *elem;
 
 	/* check to see of the qp is already a member of the group */
 	spin_lock_bh(&qp->grp_lock);
@@ -128,8 +128,8 @@
 static int rxe_mcast_drop_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp,
 				   union ib_gid *mgid)
 {
-	struct rxe_mc_grp *grp;
-	struct rxe_mc_elem *elem, *tmp;
+	struct rxe_mcg *grp;
+	struct rxe_mca *elem, *tmp;
 
 	grp = rxe_pool_get_key(&rxe->mc_grp_pool, mgid);
 	if (!grp)
@@ -162,8 +162,8 @@ static int rxe_mcast_drop_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp,
 
 void rxe_drop_all_mcast_groups(struct rxe_qp *qp)
 {
-	struct rxe_mc_grp *grp;
-	struct rxe_mc_elem *elem;
+	struct rxe_mcg *grp;
+	struct rxe_mca *elem;
 
 	while (1) {
 		spin_lock_bh(&qp->grp_lock);
@@ -171,7 +171,7 @@ void rxe_drop_all_mcast_groups(struct rxe_qp *qp)
 			spin_unlock_bh(&qp->grp_lock);
 			break;
 		}
-		elem = list_first_entry(&qp->grp_list, struct rxe_mc_elem,
+		elem = list_first_entry(&qp->grp_list, struct rxe_mca,
 					grp_list);
 		list_del(&elem->grp_list);
 		spin_unlock_bh(&qp->grp_lock);
@@ -188,7 +188,7 @@ void rxe_drop_all_mcast_groups(struct rxe_qp *qp)
 
 void rxe_mc_cleanup(struct rxe_pool_elem *elem)
 {
-	struct rxe_mc_grp *grp = container_of(elem, typeof(*grp), elem);
+	struct rxe_mcg *grp = container_of(elem, typeof(*grp), elem);
 	struct rxe_dev *rxe = grp->rxe;
 
 	rxe_drop_key(grp);
@@ -200,7 +200,7 @@ int rxe_attach_mcast(struct ib_qp *ibqp, union ib_gid *mgid, u16 mlid)
 	int err;
 	struct rxe_dev *rxe = to_rdev(ibqp->device);
 	struct rxe_qp *qp = to_rqp(ibqp);
-	struct rxe_mc_grp *grp;
+	struct rxe_mcg *grp;
 
 	/* takes a ref on grp if successful */
 	err = rxe_mcast_get_grp(rxe, mgid, &grp);
diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c
index 4cb003885e00..63c594173565 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.c
+++ b/drivers/infiniband/sw/rxe/rxe_pool.c
@@ -83,17 +83,17 @@ static const struct rxe_type_info {
 	},
 	[RXE_TYPE_MC_GRP] = {
 		.name = "rxe-mc_grp",
-		.size = sizeof(struct rxe_mc_grp),
-		.elem_offset = offsetof(struct rxe_mc_grp, elem),
+		.size = sizeof(struct rxe_mcg),
+		.elem_offset = offsetof(struct rxe_mcg, elem),
 		.cleanup = rxe_mc_cleanup,
 		.flags = RXE_POOL_KEY,
-		.key_offset = offsetof(struct rxe_mc_grp, mgid),
+		.key_offset = offsetof(struct rxe_mcg, mgid),
 		.key_size = sizeof(union ib_gid),
 	},
 	[RXE_TYPE_MC_ELEM] = {
 		.name = "rxe-mc_elem",
-		.size = sizeof(struct rxe_mc_elem),
-		.elem_offset = offsetof(struct rxe_mc_elem, elem),
+		.size = sizeof(struct rxe_mca),
+		.elem_offset = offsetof(struct rxe_mca, elem),
 	},
 };
diff --git a/drivers/infiniband/sw/rxe/rxe_recv.c b/drivers/infiniband/sw/rxe/rxe_recv.c
index 6a6cc1fa90e4..7ff6b53555f4 100644
--- a/drivers/infiniband/sw/rxe/rxe_recv.c
+++ b/drivers/infiniband/sw/rxe/rxe_recv.c
@@ -233,8 +233,8 @@ static inline void rxe_rcv_pkt(struct rxe_pkt_info *pkt, struct sk_buff *skb)
 static void rxe_rcv_mcast_pkt(struct rxe_dev *rxe, struct sk_buff *skb)
 {
 	struct rxe_pkt_info *pkt = SKB_TO_PKT(skb);
-	struct rxe_mc_grp *mcg;
-	struct rxe_mc_elem *mce;
+	struct rxe_mcg *mcg;
+	struct rxe_mca *mce;
 	struct rxe_qp *qp;
 	union ib_gid dgid;
 	int err;
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h
index e48969e8d4c8..388b7dc23dd7 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.h
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.h
@@ -353,7 +353,7 @@ struct rxe_mw {
 	u64 length;
 };
 
-struct rxe_mc_grp {
+struct rxe_mcg {
 	struct rxe_pool_elem elem;
 	spinlock_t mcg_lock; /* guard group */
 	struct rxe_dev *rxe;
@@ -364,12 +364,12 @@ struct rxe_mc_grp {
 	u16 pkey;
 };
 
-struct rxe_mc_elem {
+struct rxe_mca {
 	struct rxe_pool_elem elem;
 	struct list_head qp_list;
 	struct list_head grp_list;
 	struct rxe_qp *qp;
-	struct rxe_mc_grp *grp;
+	struct rxe_mcg *grp;
 };
 
 struct rxe_port {
From patchwork Mon Jan 31 22:08:37 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12731259
X-Patchwork-Delegate: jgg@ziepe.ca
From: Bob Pearson <rpearsonhpe@gmail.com>
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson <rpearsonhpe@gmail.com>
Subject: [PATCH for-next v10 04/17] RDMA/rxe: Enforce IBA o10-2.2.3
Date: Mon, 31 Jan 2022 16:08:37 -0600
Message-Id: <20220131220849.10170-5-rpearsonhpe@gmail.com>
In-Reply-To: <20220131220849.10170-1-rpearsonhpe@gmail.com>
References: <20220131220849.10170-1-rpearsonhpe@gmail.com>

Add code to check if a QP is attached to one or more multicast groups
when destroy_qp is called and return an error if so.

Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
---
 drivers/infiniband/sw/rxe/rxe_loc.h   |  9 +--------
 drivers/infiniband/sw/rxe/rxe_mcast.c |  2 ++
 drivers/infiniband/sw/rxe/rxe_qp.c    | 14 ++++++++++++++
 drivers/infiniband/sw/rxe/rxe_verbs.c |  5 +++++
 drivers/infiniband/sw/rxe/rxe_verbs.h |  1 +
 5 files changed, 23 insertions(+), 8 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index dc606241f0d6..052beaaacf43 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -101,26 +101,19 @@ const char *rxe_parent_name(struct rxe_dev *rxe, unsigned int port_num);
 
 /* rxe_qp.c */
 int rxe_qp_chk_init(struct rxe_dev *rxe, struct ib_qp_init_attr *init);
-
 int rxe_qp_from_init(struct rxe_dev *rxe, struct rxe_qp *qp, struct rxe_pd *pd,
 		     struct ib_qp_init_attr *init,
 		     struct rxe_create_qp_resp __user *uresp,
 		     struct ib_pd *ibpd, struct ib_udata *udata);
-
 int rxe_qp_to_init(struct rxe_qp *qp, struct ib_qp_init_attr *init);
-
 int rxe_qp_chk_attr(struct rxe_dev *rxe, struct rxe_qp *qp,
 		    struct ib_qp_attr *attr, int mask);
-
 int rxe_qp_from_attr(struct rxe_qp *qp, struct ib_qp_attr *attr,
 		     int mask, struct ib_udata *udata);
-
 int rxe_qp_to_attr(struct rxe_qp *qp, struct ib_qp_attr *attr, int mask);
-
 void rxe_qp_error(struct rxe_qp *qp);
-
+int rxe_qp_chk_destroy(struct rxe_qp *qp);
 void rxe_qp_destroy(struct rxe_qp *qp);
-
 void rxe_qp_cleanup(struct rxe_pool_elem *elem);
 
 static inline int qp_num(struct rxe_qp *qp)
diff --git a/drivers/infiniband/sw/rxe/rxe_mcast.c b/drivers/infiniband/sw/rxe/rxe_mcast.c
index 949784198d80..34e3c52f0b72 100644
--- a/drivers/infiniband/sw/rxe/rxe_mcast.c
+++ b/drivers/infiniband/sw/rxe/rxe_mcast.c
@@ -114,6 +114,7 @@ static int rxe_mcast_add_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp,
 	grp->num_qp++;
 	elem->qp = qp;
 	elem->grp = grp;
+	atomic_inc(&qp->mcg_num);
 
 	list_add(&elem->qp_list, &grp->qp_list);
 	list_add(&elem->grp_list, &qp->grp_list);
@@ -143,6 +144,7 @@ static int rxe_mcast_drop_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp,
 			list_del(&elem->qp_list);
 			list_del(&elem->grp_list);
 			grp->num_qp--;
+			atomic_dec(&qp->mcg_num);
 
 			spin_unlock_bh(&grp->mcg_lock);
 			spin_unlock_bh(&qp->grp_lock);
diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c
index 5018b9387694..99284337f547 100644
--- a/drivers/infiniband/sw/rxe/rxe_qp.c
+++ b/drivers/infiniband/sw/rxe/rxe_qp.c
@@ -770,6 +770,20 @@ int rxe_qp_to_attr(struct rxe_qp *qp, struct ib_qp_attr *attr, int mask)
 	return 0;
 }
 
+int rxe_qp_chk_destroy(struct rxe_qp *qp)
+{
+	/* See IBA o10-2.2.3
+	 * An attempt to destroy a QP while attached to a mcast group
+	 * will fail immediately.
+	 */
+	if (atomic_read(&qp->mcg_num)) {
+		pr_debug_once("Attempt to destroy QP while attached to multicast group\n");
+		return -EBUSY;
+	}
+
+	return 0;
+}
+
 /* called by the destroy qp verb */
 void rxe_qp_destroy(struct rxe_qp *qp)
 {
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
index f7682541f9af..9f0aef4b649d 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
@@ -493,6 +493,11 @@ static int rxe_query_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr,
 static int rxe_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata)
 {
 	struct rxe_qp *qp = to_rqp(ibqp);
+	int ret;
+
+	ret = rxe_qp_chk_destroy(qp);
+	if (ret)
+		return ret;
 
 	rxe_qp_destroy(qp);
 	rxe_drop_index(qp);
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h
index 388b7dc23dd7..4910d0782e33 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.h
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.h
@@ -235,6 +235,7 @@ struct rxe_qp {
 	/* list of mcast groups qp has joined (for cleanup) */
 	struct list_head grp_list;
 	spinlock_t grp_lock; /* guard grp_list */
+	atomic_t mcg_num;
 
 	struct sk_buff_head req_pkts;
 	struct sk_buff_head resp_pkts;
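From a consumer's point of view, the rule this patch enforces is that a
QP must be detached from every multicast group before it can be
destroyed. A hedged user-space sketch of the resulting behavior, assuming
'qp' is a UD QP already attached as in the earlier example and error
handling is trimmed (illustrative only):

	int err;

	err = ibv_attach_mcast(qp, &mgid, 0);
	if (err)
		return err;

	err = ibv_destroy_qp(qp);	/* with this patch: err == EBUSY */

	err = ibv_detach_mcast(qp, &mgid, 0);
	if (err)
		return err;

	return ibv_destroy_qp(qp);	/* succeeds once detached */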
From patchwork Mon Jan 31 22:08:38 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12731260
X-Patchwork-Delegate: jgg@ziepe.ca
From: Bob Pearson <rpearsonhpe@gmail.com>
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson <rpearsonhpe@gmail.com>
Subject: [PATCH for-next v10 05/17] RDMA/rxe: Remove rxe_drop_all_mcast_groups
Date: Mon, 31 Jan 2022 16:08:38 -0600
Message-Id: <20220131220849.10170-6-rpearsonhpe@gmail.com>
In-Reply-To: <20220131220849.10170-1-rpearsonhpe@gmail.com>
References: <20220131220849.10170-1-rpearsonhpe@gmail.com>

With IBA o10-2.2.3 enforced, rxe_drop_all_mcast_groups is completely
unnecessary. Remove it and all references to it.

Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
---
 drivers/infiniband/sw/rxe/rxe_loc.h   |  1 -
 drivers/infiniband/sw/rxe/rxe_mcast.c | 26 --------------------------
 drivers/infiniband/sw/rxe/rxe_qp.c    |  2 --
 3 files changed, 29 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index 052beaaacf43..af40e3c212fb 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -40,7 +40,6 @@ void rxe_cq_disable(struct rxe_cq *cq);
 void rxe_cq_cleanup(struct rxe_pool_elem *arg);
 
 /* rxe_mcast.c */
-void rxe_drop_all_mcast_groups(struct rxe_qp *qp);
 void rxe_mc_cleanup(struct rxe_pool_elem *arg);
 int rxe_attach_mcast(struct ib_qp *ibqp, union ib_gid *mgid, u16 mlid);
 int rxe_detach_mcast(struct ib_qp *ibqp, union ib_gid *mgid, u16 mlid);
diff --git a/drivers/infiniband/sw/rxe/rxe_mcast.c b/drivers/infiniband/sw/rxe/rxe_mcast.c
index 34e3c52f0b72..39a41daa7a6b 100644
--- a/drivers/infiniband/sw/rxe/rxe_mcast.c
+++ b/drivers/infiniband/sw/rxe/rxe_mcast.c
@@ -162,32 +162,6 @@ static int rxe_mcast_drop_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp,
 	return -EINVAL;
 }
 
-void rxe_drop_all_mcast_groups(struct rxe_qp *qp)
-{
-	struct rxe_mcg *grp;
-	struct rxe_mca *elem;
-
-	while (1) {
-		spin_lock_bh(&qp->grp_lock);
-		if (list_empty(&qp->grp_list)) {
-			spin_unlock_bh(&qp->grp_lock);
-			break;
-		}
-		elem = list_first_entry(&qp->grp_list, struct rxe_mca,
-					grp_list);
-		list_del(&elem->grp_list);
-		spin_unlock_bh(&qp->grp_lock);
-
-		grp = elem->grp;
-		spin_lock_bh(&grp->mcg_lock);
-		list_del(&elem->qp_list);
-		grp->num_qp--;
-		spin_unlock_bh(&grp->mcg_lock);
-		rxe_drop_ref(grp);
-		rxe_drop_ref(elem);
-	}
-}
-
 void rxe_mc_cleanup(struct rxe_pool_elem *elem)
 {
 	struct rxe_mcg *grp = container_of(elem, typeof(*grp), elem);
diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c
index 99284337f547..a21d704dc376 100644
--- a/drivers/infiniband/sw/rxe/rxe_qp.c
+++ b/drivers/infiniband/sw/rxe/rxe_qp.c
@@ -812,8 +812,6 @@ static void rxe_qp_do_cleanup(struct work_struct *work)
 {
 	struct rxe_qp *qp = container_of(work, typeof(*qp), cleanup_work.work);
 
-	rxe_drop_all_mcast_groups(qp);
-
 	if (qp->sq.queue)
 		rxe_queue_cleanup(qp->sq.queue);
From patchwork Mon Jan 31 22:08:39 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12731261
X-Patchwork-Delegate: jgg@ziepe.ca
From: Bob Pearson <rpearsonhpe@gmail.com>
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson <rpearsonhpe@gmail.com>
Subject: [PATCH for-next v10 06/17] RDMA/rxe: Remove qp->grp_lock and qp->grp_list
Date: Mon, 31 Jan 2022 16:08:39 -0600
Message-Id: <20220131220849.10170-7-rpearsonhpe@gmail.com>
In-Reply-To: <20220131220849.10170-1-rpearsonhpe@gmail.com>
References: <20220131220849.10170-1-rpearsonhpe@gmail.com>

Since it is no longer required to clean up attachments to multicast
groups when a QP is destroyed, qp->grp_lock and qp->grp_list are no
longer needed and are removed.

Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
---
 drivers/infiniband/sw/rxe/rxe_mcast.c | 8 --------
 drivers/infiniband/sw/rxe/rxe_qp.c    | 3 ---
 drivers/infiniband/sw/rxe/rxe_verbs.h | 5 -----
 3 files changed, 16 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_mcast.c b/drivers/infiniband/sw/rxe/rxe_mcast.c
index 39a41daa7a6b..9336295c4ee2 100644
--- a/drivers/infiniband/sw/rxe/rxe_mcast.c
+++ b/drivers/infiniband/sw/rxe/rxe_mcast.c
@@ -88,7 +88,6 @@ static int rxe_mcast_add_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp,
 	struct rxe_mca *elem;
 
 	/* check to see of the qp is already a member of the group */
-	spin_lock_bh(&qp->grp_lock);
 	spin_lock_bh(&grp->mcg_lock);
 	list_for_each_entry(elem, &grp->qp_list, qp_list) {
 		if (elem->qp == qp) {
@@ -113,16 +112,13 @@ static int rxe_mcast_add_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp,
 
 	grp->num_qp++;
 	elem->qp = qp;
-	elem->grp = grp;
 	atomic_inc(&qp->mcg_num);
 
 	list_add(&elem->qp_list, &grp->qp_list);
-	list_add(&elem->grp_list, &qp->grp_list);
 
 	err = 0;
 out:
 	spin_unlock_bh(&grp->mcg_lock);
-	spin_unlock_bh(&qp->grp_lock);
 	return err;
 }
 
@@ -136,18 +132,15 @@ static int rxe_mcast_drop_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp,
 	if (!grp)
 		goto err1;
 
-	spin_lock_bh(&qp->grp_lock);
 	spin_lock_bh(&grp->mcg_lock);
 
 	list_for_each_entry_safe(elem, tmp, &grp->qp_list, qp_list) {
 		if (elem->qp == qp) {
 			list_del(&elem->qp_list);
-			list_del(&elem->grp_list);
 			grp->num_qp--;
 			atomic_dec(&qp->mcg_num);
 
 			spin_unlock_bh(&grp->mcg_lock);
-			spin_unlock_bh(&qp->grp_lock);
 			rxe_drop_ref(elem);
 			rxe_drop_ref(grp);	/* ref held by QP */
 			rxe_drop_ref(grp);	/* ref from get_key */
@@ -156,7 +149,6 @@ static int rxe_mcast_drop_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp,
 	}
 
 	spin_unlock_bh(&grp->mcg_lock);
-	spin_unlock_bh(&qp->grp_lock);
 	rxe_drop_ref(grp);	/* ref from get_key */
 err1:
 	return -EINVAL;
diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c
index a21d704dc376..58ccca96c209 100644
--- a/drivers/infiniband/sw/rxe/rxe_qp.c
+++ b/drivers/infiniband/sw/rxe/rxe_qp.c
@@ -188,9 +188,6 @@ static void rxe_qp_init_misc(struct rxe_dev *rxe, struct rxe_qp *qp,
 		break;
 	}
 
-	INIT_LIST_HEAD(&qp->grp_list);
-
-	spin_lock_init(&qp->grp_lock);
 	spin_lock_init(&qp->state_lock);
 
 	atomic_set(&qp->ssn, 0);
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h
index 4910d0782e33..55f8ed2bc621 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.h
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.h
@@ -232,9 +232,6 @@ struct rxe_qp {
 	struct rxe_av pri_av;
 	struct rxe_av alt_av;
 
-	/* list of mcast groups qp has joined (for cleanup) */
-	struct list_head grp_list;
-	spinlock_t grp_lock; /* guard grp_list */
 	atomic_t mcg_num;
 
 	struct sk_buff_head req_pkts;
@@ -368,9 +365,7 @@ struct rxe_mcg {
 struct rxe_mca {
 	struct rxe_pool_elem elem;
 	struct list_head qp_list;
-	struct list_head grp_list;
 	struct rxe_qp *qp;
-	struct rxe_mcg *grp;
 };
 
 struct rxe_port {
From patchwork Mon Jan 31 22:08:40 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12731262
From: Bob Pearson <rpearsonhpe@gmail.com>
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson <rpearsonhpe@gmail.com>
Subject: [PATCH for-next v10 07/17] RDMA/rxe: Use kzalloc/kfree for mca
Date: Mon, 31 Jan 2022 16:08:40 -0600
Message-Id: <20220131220849.10170-8-rpearsonhpe@gmail.com>
In-Reply-To: <20220131220849.10170-1-rpearsonhpe@gmail.com>
References: <20220131220849.10170-1-rpearsonhpe@gmail.com>

Remove rxe_mca (was rxe_mc_elem) from rxe pools and use kzalloc and
kfree to allocate and free. Use the sequence

	new_mca = kzalloc(sizeof(*new_mca), GFP_KERNEL);
	/* re-check in case of a race */

instead of allocating with GFP_ATOMIC inside of the spinlock. Add an
extra reference to the multicast group to protect the pointer in the
index that maps mgid to group.

Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Reported-by: kernel test robot
Reported-by: kernel test robot
---
 drivers/infiniband/sw/rxe/rxe.c       |   8 --
 drivers/infiniband/sw/rxe/rxe_mcast.c | 102 +++++++++++++------------
 drivers/infiniband/sw/rxe/rxe_pool.c  |   5 --
 drivers/infiniband/sw/rxe/rxe_pool.h  |   1 -
 drivers/infiniband/sw/rxe/rxe_verbs.h |   2 -
 5 files changed, 59 insertions(+), 59 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe.c b/drivers/infiniband/sw/rxe/rxe.c
index fab291245366..c55736e441e7 100644
--- a/drivers/infiniband/sw/rxe/rxe.c
+++ b/drivers/infiniband/sw/rxe/rxe.c
@@ -29,7 +29,6 @@ void rxe_dealloc(struct ib_device *ib_dev)
 	rxe_pool_cleanup(&rxe->mr_pool);
 	rxe_pool_cleanup(&rxe->mw_pool);
 	rxe_pool_cleanup(&rxe->mc_grp_pool);
-	rxe_pool_cleanup(&rxe->mc_elem_pool);
 
 	if (rxe->tfm)
 		crypto_free_shash(rxe->tfm);
@@ -163,15 +162,8 @@ static int rxe_init_pools(struct rxe_dev *rxe)
 	if (err)
 		goto err9;
 
-	err = rxe_pool_init(rxe, &rxe->mc_elem_pool, RXE_TYPE_MC_ELEM,
-			    rxe->attr.max_total_mcast_qp_attach);
-	if (err)
-		goto err10;
-
 	return 0;
 
-err10:
-	rxe_pool_cleanup(&rxe->mc_grp_pool);
 err9:
 	rxe_pool_cleanup(&rxe->mw_pool);
 err8:
diff --git a/drivers/infiniband/sw/rxe/rxe_mcast.c b/drivers/infiniband/sw/rxe/rxe_mcast.c
index 9336295c4ee2..4a5896a225a6 100644
--- a/drivers/infiniband/sw/rxe/rxe_mcast.c
+++ b/drivers/infiniband/sw/rxe/rxe_mcast.c
@@ -26,30 +26,40 @@ static int rxe_mcast_delete(struct rxe_dev *rxe, union ib_gid *mgid)
 }
 
 /* caller should hold mc_grp_pool->pool_lock */
-static struct rxe_mcg *create_grp(struct rxe_dev *rxe,
-				  struct rxe_pool *pool,
-				  union ib_gid *mgid)
+static int __rxe_create_grp(struct rxe_dev *rxe, struct rxe_pool *pool,
+			    union ib_gid *mgid, struct rxe_mcg **grp_p)
 {
 	int err;
 	struct rxe_mcg *grp;
 
 	grp = rxe_alloc_locked(&rxe->mc_grp_pool);
 	if (!grp)
-		return ERR_PTR(-ENOMEM);
+		return -ENOMEM;
+
+	err = rxe_mcast_add(rxe, mgid);
+	if (unlikely(err)) {
+		rxe_drop_ref(grp);
+		return err;
+	}
 
 	INIT_LIST_HEAD(&grp->qp_list);
 	spin_lock_init(&grp->mcg_lock);
 	grp->rxe = rxe;
+
+	rxe_add_ref(grp);
 	rxe_add_key_locked(grp, mgid);
 
-	err = rxe_mcast_add(rxe, mgid);
-	if (unlikely(err)) {
-		rxe_drop_key_locked(grp);
-		rxe_drop_ref(grp);
-		return ERR_PTR(err);
-	}
+	*grp_p = grp;
+	return 0;
+}
+
+/* caller is holding a ref from lookup and mcg->mcg_lock*/
+void __rxe_destroy_mcg(struct rxe_mcg *grp)
+{
+	rxe_drop_key(grp);
+	rxe_drop_ref(grp);
 
-	return grp;
+	rxe_mcast_delete(grp->rxe, &grp->mgid);
 }
 
 static int rxe_mcast_get_grp(struct rxe_dev *rxe, union ib_gid *mgid,
@@ -68,10 +78,9 @@ static int rxe_mcast_get_grp(struct rxe_dev *rxe, union ib_gid *mgid,
 	if (grp)
 		goto done;
 
-	grp = create_grp(rxe, pool, mgid);
-	if (IS_ERR(grp)) {
+	err = __rxe_create_grp(rxe, pool, mgid, &grp);
+	if (err) {
 		write_unlock_bh(&pool->pool_lock);
-		err = PTR_ERR(grp);
 		return err;
 	}
 
@@ -85,36 +94,44 @@ static int rxe_mcast_add_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp,
 			      struct rxe_mcg *grp)
 {
 	int err;
-	struct rxe_mca *elem;
+	struct rxe_mca *mca, *new_mca;
 
-	/* check to see of the qp is already a member of the group */
+	/* check to see if the qp is already a member of the group */
 	spin_lock_bh(&grp->mcg_lock);
-	list_for_each_entry(elem, &grp->qp_list, qp_list) {
-		if (elem->qp == qp) {
+	list_for_each_entry(mca, &grp->qp_list, qp_list) {
+		if (mca->qp == qp) {
+			spin_unlock_bh(&grp->mcg_lock);
+			return 0;
+		}
+	}
+	spin_unlock_bh(&grp->mcg_lock);
+
+	/* speculative alloc new mca without using GFP_ATOMIC */
+	new_mca = kzalloc(sizeof(*mca), GFP_KERNEL);
+	if (!new_mca)
+		return -ENOMEM;
+
+	spin_lock_bh(&grp->mcg_lock);
+	/* re-check to see if someone else just attached qp */
+	list_for_each_entry(mca, &grp->qp_list, qp_list) {
+		if (mca->qp == qp) {
+			kfree(new_mca);
 			err = 0;
 			goto out;
 		}
 	}
+	mca = new_mca;
 
 	if (grp->num_qp >= rxe->attr.max_mcast_qp_attach) {
 		err = -ENOMEM;
 		goto out;
 	}
 
-	elem = rxe_alloc_locked(&rxe->mc_elem_pool);
-	if (!elem) {
-		err = -ENOMEM;
-		goto out;
-	}
-
-	/* each qp holds a ref on the grp */
-	rxe_add_ref(grp);
-
 	grp->num_qp++;
-	elem->qp = qp;
+	mca->qp = qp;
 	atomic_inc(&qp->mcg_num);
 
-	list_add(&elem->qp_list, &grp->qp_list);
+	list_add(&mca->qp_list, &grp->qp_list);
 
 	err = 0;
 out:
@@ -126,7 +143,7 @@ static int rxe_mcast_drop_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp,
 			       union ib_gid *mgid)
 {
 	struct rxe_mcg *grp;
-	struct rxe_mca *elem, *tmp;
+	struct rxe_mca *mca, *tmp;
 
 	grp = rxe_pool_get_key(&rxe->mc_grp_pool, mgid);
 	if (!grp)
@@ -134,33 +151,30 @@ static int rxe_mcast_drop_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp,
 
 	spin_lock_bh(&grp->mcg_lock);
 
-	list_for_each_entry_safe(elem, tmp, &grp->qp_list, qp_list) {
-		if (elem->qp == qp) {
-			list_del(&elem->qp_list);
+	list_for_each_entry_safe(mca, tmp, &grp->qp_list, qp_list) {
+		if (mca->qp == qp) {
+			list_del(&mca->qp_list);
 			grp->num_qp--;
+			if (grp->num_qp <= 0)
+				__rxe_destroy_mcg(grp);
 			atomic_dec(&qp->mcg_num);
 
 			spin_unlock_bh(&grp->mcg_lock);
-			rxe_drop_ref(elem);
-			rxe_drop_ref(grp);	/* ref held by QP */
-			rxe_drop_ref(grp);	/* ref from get_key */
+			rxe_drop_ref(grp);
+			kfree(mca);
 			return 0;
 		}
 	}
 
 	spin_unlock_bh(&grp->mcg_lock);
-	rxe_drop_ref(grp);	/* ref from get_key */
+	rxe_drop_ref(grp);
 err1:
 	return -EINVAL;
 }
 
 void rxe_mc_cleanup(struct rxe_pool_elem *elem)
 {
-	struct rxe_mcg *grp = container_of(elem, typeof(*grp), elem);
-	struct rxe_dev *rxe = grp->rxe;
-
-	rxe_drop_key(grp);
-	rxe_mcast_delete(rxe, &grp->mgid);
+	/* nothing left to do */
 }
 
 int rxe_attach_mcast(struct ib_qp *ibqp, union ib_gid *mgid, u16 mlid)
@@ -170,13 +184,15 @@ int rxe_attach_mcast(struct ib_qp *ibqp, union ib_gid *mgid, u16 mlid)
 	struct rxe_qp *qp = to_rqp(ibqp);
 	struct rxe_mcg *grp;
 
-	/* takes a ref on grp if successful */
 	err = rxe_mcast_get_grp(rxe, mgid, &grp);
 	if (err)
 		return err;
 
 	err = rxe_mcast_add_grp_elem(rxe, qp, grp);
 
+	if (grp->num_qp == 0)
+		__rxe_destroy_mcg(grp);
+
 	rxe_drop_ref(grp);
 	return err;
 }
diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c
index 63c594173565..a6756aa93e2b 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.c
+++ b/drivers/infiniband/sw/rxe/rxe_pool.c
@@ -90,11 +90,6 @@ static const struct rxe_type_info {
 		.key_offset = offsetof(struct rxe_mcg, mgid),
 		.key_size = sizeof(union ib_gid),
 	},
-	[RXE_TYPE_MC_ELEM] = {
-		.name = "rxe-mc_elem",
-		.size = sizeof(struct rxe_mca),
-		.elem_offset = offsetof(struct rxe_mca, elem),
-	},
 };
diff --git a/drivers/infiniband/sw/rxe/rxe_pool.h b/drivers/infiniband/sw/rxe/rxe_pool.h
index 214279310f4d..511f81554fd1 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.h
+++ b/drivers/infiniband/sw/rxe/rxe_pool.h
@@ -23,7 +23,6 @@ enum rxe_elem_type {
 	RXE_TYPE_MR,
 	RXE_TYPE_MW,
 	RXE_TYPE_MC_GRP,
-	RXE_TYPE_MC_ELEM,
 	RXE_NUM_TYPES,	/* keep me last */
 };
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h
index 55f8ed2bc621..02745d51c163 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.h
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.h
@@ -363,7 +363,6 @@ struct rxe_mcg {
 };
 
 struct rxe_mca {
-	struct rxe_pool_elem elem;
 	struct list_head qp_list;
 	struct rxe_qp *qp;
 };
@@ -397,7 +396,6 @@ struct rxe_dev {
 	struct rxe_pool mr_pool;
 	struct rxe_pool mw_pool;
 	struct rxe_pool mc_grp_pool;
-	struct rxe_pool mc_elem_pool;
 
 	spinlock_t pending_lock; /* guard pending_mmaps */
 	struct list_head pending_mmaps;
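The allocation pattern this patch adopts is a common one: take a sleeping
allocator outside the lock, then re-check for a racing insert under the
lock and free the speculative allocation if the race was lost. A
self-contained user-space analogue of the same pattern, with
pthread_mutex standing in for the kernel spinlock and a toy linked list
standing in for the mcg qp_list (a sketch, not the driver code):

	#include <pthread.h>
	#include <stdlib.h>

	struct node { int key; struct node *next; };

	static struct node *head;
	static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

	/* caller must hold 'lock' */
	static int list_contains(int key)
	{
		struct node *n;

		for (n = head; n; n = n->next)
			if (n->key == key)
				return 1;
		return 0;
	}

	static int add_unique(int key)
	{
		struct node *new_node;

		pthread_mutex_lock(&lock);
		if (list_contains(key)) {	/* already present */
			pthread_mutex_unlock(&lock);
			return 0;
		}
		pthread_mutex_unlock(&lock);

		/* the "GFP_KERNEL outside the lock" step */
		new_node = calloc(1, sizeof(*new_node));
		if (!new_node)
			return -1;

		pthread_mutex_lock(&lock);
		if (list_contains(key)) {	/* lost the race */
			pthread_mutex_unlock(&lock);
			free(new_node);
			return 0;
		}
		new_node->key = key;
		new_node->next = head;		/* insert at head */
		head = new_node;
		pthread_mutex_unlock(&lock);
		return 0;
	}

The cost is a second list walk in the uncommon racing case; the benefit is
never sleeping, and never failing spuriously with GFP_ATOMIC, while the
lock is held.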
Sk7Hq7472qC/FzmGLAVFnA77PWjbjWUmp67i4teyUsBkTn1zFSWbO1pvGzQBVyQlIVFR 7thV8qRFVOeAtP7Lk1Df6Z0QnfBekaYpuXiL2rISDMMK2tGDZsQhqbtCOh5iy44kiRvo MNr9fYCgSDd0O/J6ejt8V6dHQ89iF6sSN5KMOz6TQmH9JndC9SNb6MuUaZDq9SQqft0l WUMw== X-Gm-Message-State: AOAM530kQAjcSYojIKFunTIS2Mgrex1XiINGrv/XcUu1kJRFWgOJ/YVY 8PLKTjo/EXo2QiCFhwbO2XU= X-Google-Smtp-Source: ABdhPJy9ubc5+wRsHyFOEKVgKUgSwiwajn6tFI84ARmtPSjJCV5zUlOUkx30BTX8QPmPSd0L9KIQPA== X-Received: by 2002:a05:6808:210c:: with SMTP id r12mr14731219oiw.221.1643667010405; Mon, 31 Jan 2022 14:10:10 -0800 (PST) Received: from ubuntu-21.tx.rr.com (2603-8081-140c-1a00-5c63-4cee-84ac-42bc.res6.spectrum.com. [2603:8081:140c:1a00:5c63:4cee:84ac:42bc]) by smtp.googlemail.com with ESMTPSA id t21sm8304929otq.81.2022.01.31.14.10.09 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 31 Jan 2022 14:10:10 -0800 (PST) From: Bob Pearson To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org Cc: Bob Pearson Subject: [PATCH for-next v10 08/17] RDMA/rxe: Rename grp to mcg and mce to mca Date: Mon, 31 Jan 2022 16:08:41 -0600 Message-Id: <20220131220849.10170-9-rpearsonhpe@gmail.com> X-Mailer: git-send-email 2.32.0 In-Reply-To: <20220131220849.10170-1-rpearsonhpe@gmail.com> References: <20220131220849.10170-1-rpearsonhpe@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org In rxe_mcast.c and rxe_recv.c replace 'grp' by 'mcg' and 'mce' by 'mca'. Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe_mcast.c | 104 +++++++++++++------------- drivers/infiniband/sw/rxe/rxe_recv.c | 8 +- 2 files changed, 56 insertions(+), 56 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe_mcast.c b/drivers/infiniband/sw/rxe/rxe_mcast.c index 4a5896a225a6..29e6c9e11c77 100644 --- a/drivers/infiniband/sw/rxe/rxe_mcast.c +++ b/drivers/infiniband/sw/rxe/rxe_mcast.c @@ -26,47 +26,47 @@ static int rxe_mcast_delete(struct rxe_dev *rxe, union ib_gid *mgid) } /* caller should hold mc_grp_pool->pool_lock */ -static int __rxe_create_grp(struct rxe_dev *rxe, struct rxe_pool *pool, - union ib_gid *mgid, struct rxe_mcg **grp_p) +static int __rxe_create_mcg(struct rxe_dev *rxe, struct rxe_pool *pool, + union ib_gid *mgid, struct rxe_mcg **mcg_p) { int err; - struct rxe_mcg *grp; + struct rxe_mcg *mcg; - grp = rxe_alloc_locked(&rxe->mc_grp_pool); - if (!grp) + mcg = rxe_alloc_locked(&rxe->mc_grp_pool); + if (!mcg) return -ENOMEM; err = rxe_mcast_add(rxe, mgid); if (unlikely(err)) { - rxe_drop_ref(grp); + rxe_drop_ref(mcg); return err; } - INIT_LIST_HEAD(&grp->qp_list); - spin_lock_init(&grp->mcg_lock); - grp->rxe = rxe; + INIT_LIST_HEAD(&mcg->qp_list); + spin_lock_init(&mcg->mcg_lock); + mcg->rxe = rxe; - rxe_add_ref(grp); - rxe_add_key_locked(grp, mgid); + rxe_add_ref(mcg); + rxe_add_key_locked(mcg, mgid); - *grp_p = grp; + *mcg_p = mcg; return 0; } /* caller is holding a ref from lookup and mcg->mcg_lock*/ -void __rxe_destroy_mcg(struct rxe_mcg *grp) +void __rxe_destroy_mcg(struct rxe_mcg *mcg) { - rxe_drop_key(grp); - rxe_drop_ref(grp); + rxe_drop_key(mcg); + rxe_drop_ref(mcg); - rxe_mcast_delete(grp->rxe, &grp->mgid); + rxe_mcast_delete(mcg->rxe, &mcg->mgid); } -static int rxe_mcast_get_grp(struct rxe_dev *rxe, union ib_gid *mgid, - struct rxe_mcg **grp_p) +static int rxe_mcast_get_mcg(struct rxe_dev *rxe, union ib_gid *mgid, + struct rxe_mcg **mcg_p) { int err; - struct rxe_mcg *grp; + struct rxe_mcg *mcg; struct rxe_pool *pool = &rxe->mc_grp_pool; if (rxe->attr.max_mcast_qp_attach == 0) @@ -74,11 +74,11 @@ static int 
rxe_mcast_get_grp(struct rxe_dev *rxe, union ib_gid *mgid, write_lock_bh(&pool->pool_lock); - grp = rxe_pool_get_key_locked(pool, mgid); - if (grp) + mcg = rxe_pool_get_key_locked(pool, mgid); + if (mcg) goto done; - err = __rxe_create_grp(rxe, pool, mgid, &grp); + err = __rxe_create_mcg(rxe, pool, mgid, &mcg); if (err) { write_unlock_bh(&pool->pool_lock); return err; @@ -86,34 +86,34 @@ static int rxe_mcast_get_grp(struct rxe_dev *rxe, union ib_gid *mgid, done: write_unlock_bh(&pool->pool_lock); - *grp_p = grp; + *mcg_p = mcg; return 0; } static int rxe_mcast_add_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp, - struct rxe_mcg *grp) + struct rxe_mcg *mcg) { int err; struct rxe_mca *mca, *new_mca; /* check to see if the qp is already a member of the group */ - spin_lock_bh(&grp->mcg_lock); - list_for_each_entry(mca, &grp->qp_list, qp_list) { + spin_lock_bh(&mcg->mcg_lock); + list_for_each_entry(mca, &mcg->qp_list, qp_list) { if (mca->qp == qp) { - spin_unlock_bh(&grp->mcg_lock); + spin_unlock_bh(&mcg->mcg_lock); return 0; } } - spin_unlock_bh(&grp->mcg_lock); + spin_unlock_bh(&mcg->mcg_lock); /* speculative alloc new mca without using GFP_ATOMIC */ new_mca = kzalloc(sizeof(*mca), GFP_KERNEL); if (!new_mca) return -ENOMEM; - spin_lock_bh(&grp->mcg_lock); + spin_lock_bh(&mcg->mcg_lock); /* re-check to see if someone else just attached qp */ - list_for_each_entry(mca, &grp->qp_list, qp_list) { + list_for_each_entry(mca, &mcg->qp_list, qp_list) { if (mca->qp == qp) { kfree(new_mca); err = 0; @@ -122,52 +122,52 @@ static int rxe_mcast_add_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp, } mca = new_mca; - if (grp->num_qp >= rxe->attr.max_mcast_qp_attach) { + if (mcg->num_qp >= rxe->attr.max_mcast_qp_attach) { err = -ENOMEM; goto out; } - grp->num_qp++; + mcg->num_qp++; mca->qp = qp; atomic_inc(&qp->mcg_num); - list_add(&mca->qp_list, &grp->qp_list); + list_add(&mca->qp_list, &mcg->qp_list); err = 0; out: - spin_unlock_bh(&grp->mcg_lock); + spin_unlock_bh(&mcg->mcg_lock); return err; } static int rxe_mcast_drop_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp, union ib_gid *mgid) { - struct rxe_mcg *grp; + struct rxe_mcg *mcg; struct rxe_mca *mca, *tmp; - grp = rxe_pool_get_key(&rxe->mc_grp_pool, mgid); - if (!grp) + mcg = rxe_pool_get_key(&rxe->mc_grp_pool, mgid); + if (!mcg) goto err1; - spin_lock_bh(&grp->mcg_lock); + spin_lock_bh(&mcg->mcg_lock); - list_for_each_entry_safe(mca, tmp, &grp->qp_list, qp_list) { + list_for_each_entry_safe(mca, tmp, &mcg->qp_list, qp_list) { if (mca->qp == qp) { list_del(&mca->qp_list); - grp->num_qp--; - if (grp->num_qp <= 0) - __rxe_destroy_mcg(grp); + mcg->num_qp--; + if (mcg->num_qp <= 0) + __rxe_destroy_mcg(mcg); atomic_dec(&qp->mcg_num); - spin_unlock_bh(&grp->mcg_lock); - rxe_drop_ref(grp); + spin_unlock_bh(&mcg->mcg_lock); + rxe_drop_ref(mcg); kfree(mca); return 0; } } - spin_unlock_bh(&grp->mcg_lock); - rxe_drop_ref(grp); + spin_unlock_bh(&mcg->mcg_lock); + rxe_drop_ref(mcg); err1: return -EINVAL; } @@ -182,18 +182,18 @@ int rxe_attach_mcast(struct ib_qp *ibqp, union ib_gid *mgid, u16 mlid) int err; struct rxe_dev *rxe = to_rdev(ibqp->device); struct rxe_qp *qp = to_rqp(ibqp); - struct rxe_mcg *grp; + struct rxe_mcg *mcg; - err = rxe_mcast_get_grp(rxe, mgid, &grp); + err = rxe_mcast_get_mcg(rxe, mgid, &mcg); if (err) return err; - err = rxe_mcast_add_grp_elem(rxe, qp, grp); + err = rxe_mcast_add_grp_elem(rxe, qp, mcg); - if (grp->num_qp == 0) - __rxe_destroy_mcg(grp); + if (mcg->num_qp == 0) + __rxe_destroy_mcg(mcg); - rxe_drop_ref(grp); + 
rxe_drop_ref(mcg); return err; } diff --git a/drivers/infiniband/sw/rxe/rxe_recv.c b/drivers/infiniband/sw/rxe/rxe_recv.c index 7ff6b53555f4..814a002b8911 100644 --- a/drivers/infiniband/sw/rxe/rxe_recv.c +++ b/drivers/infiniband/sw/rxe/rxe_recv.c @@ -234,7 +234,7 @@ static void rxe_rcv_mcast_pkt(struct rxe_dev *rxe, struct sk_buff *skb) { struct rxe_pkt_info *pkt = SKB_TO_PKT(skb); struct rxe_mcg *mcg; - struct rxe_mca *mce; + struct rxe_mca *mca; struct rxe_qp *qp; union ib_gid dgid; int err; @@ -257,8 +257,8 @@ static void rxe_rcv_mcast_pkt(struct rxe_dev *rxe, struct sk_buff *skb) * single QP happen and just move on and try * the rest of them on the list */ - list_for_each_entry(mce, &mcg->qp_list, qp_list) { - qp = mce->qp; + list_for_each_entry(mca, &mcg->qp_list, qp_list) { + qp = mca->qp; /* validate qp for incoming packet */ err = check_type_state(rxe, pkt, qp); @@ -273,7 +273,7 @@ static void rxe_rcv_mcast_pkt(struct rxe_dev *rxe, struct sk_buff *skb) * skb and pass to the QP. Pass the original skb to * the last QP in the list. */ - if (mce->qp_list.next != &mcg->qp_list) { + if (mca->qp_list.next != &mcg->qp_list) { struct sk_buff *cskb; struct rxe_pkt_info *cpkt;
From patchwork Mon Jan 31 22:08:42 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bob Pearson X-Patchwork-Id: 12731264
From: Bob Pearson To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org Cc: Bob Pearson Subject: [PATCH for-next v10 09/17] RDMA/rxe: Introduce RXECB(skb) Date: Mon, 31 Jan 2022 16:08:42 -0600 Message-Id: <20220131220849.10170-10-rpearsonhpe@gmail.com> X-Mailer: git-send-email 2.32.0 In-Reply-To: <20220131220849.10170-1-rpearsonhpe@gmail.com> References: <20220131220849.10170-1-rpearsonhpe@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org
Add a #define RXECB(skb) to rxe_hdr.h as a shortcut to refer to single members of rxe_pkt_info, which is stored in skb->cb in the receive path. Use this to make some cleanups in rxe_recv.c.
Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe_hdr.h | 3 ++ drivers/infiniband/sw/rxe/rxe_recv.c | 55 +++++++++++++--------------- 2 files changed, 29 insertions(+), 29 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe_hdr.h b/drivers/infiniband/sw/rxe/rxe_hdr.h index e432f9e37795..2a85d1e40e6a 100644 --- a/drivers/infiniband/sw/rxe/rxe_hdr.h +++ b/drivers/infiniband/sw/rxe/rxe_hdr.h @@ -36,6 +36,9 @@ static inline struct sk_buff *PKT_TO_SKB(struct rxe_pkt_info *pkt) return container_of((void *)pkt, struct sk_buff, cb); } +/* alternative to access a single element of rxe_pkt_info from skb */ +#define RXECB(skb) ((struct rxe_pkt_info *)((skb)->cb)) + /* * IBA header types and methods * diff --git a/drivers/infiniband/sw/rxe/rxe_recv.c b/drivers/infiniband/sw/rxe/rxe_recv.c index 814a002b8911..10020103ea4a 100644 --- a/drivers/infiniband/sw/rxe/rxe_recv.c +++ b/drivers/infiniband/sw/rxe/rxe_recv.c @@ -107,17 +107,15 @@ static int check_keys(struct rxe_dev *rxe, struct rxe_pkt_info *pkt, return -EINVAL; } -static int check_addr(struct rxe_dev *rxe, struct rxe_pkt_info *pkt, +static int check_addr(struct rxe_dev *rxe, struct sk_buff *skb, struct rxe_qp *qp) { - struct sk_buff *skb = PKT_TO_SKB(pkt); - if (qp_type(qp) != IB_QPT_RC && qp_type(qp) != IB_QPT_UC) goto done; - if (unlikely(pkt->port_num != qp->attr.port_num)) { + if (unlikely(RXECB(skb)->port_num != qp->attr.port_num)) { pr_warn_ratelimited("port %d != qp port %d\n", - pkt->port_num, qp->attr.port_num); + RXECB(skb)->port_num, qp->attr.port_num); goto err1; } @@ -167,8 +165,9 @@ static int check_addr(struct rxe_dev *rxe, struct rxe_pkt_info *pkt, return -EINVAL; } -static int hdr_check(struct rxe_pkt_info *pkt) +static int hdr_check(struct sk_buff *skb) { + struct rxe_pkt_info *pkt = RXECB(skb); struct rxe_dev *rxe = pkt->rxe; struct rxe_port *port = &rxe->port; struct rxe_qp *qp = NULL; @@ -199,7 +198,7 @@ static int hdr_check(struct rxe_pkt_info *pkt) if (unlikely(err)) goto err2; - err = check_addr(rxe, pkt, qp); + err = check_addr(rxe, skb, qp); if (unlikely(err)) goto err2; @@ -222,17 +221,19 @@ static int hdr_check(struct rxe_pkt_info *pkt) return -EINVAL; } -static inline void rxe_rcv_pkt(struct rxe_pkt_info *pkt,
struct sk_buff *skb) +static inline void rxe_rcv_pkt(struct sk_buff *skb) { - if (pkt->mask & RXE_REQ_MASK) - rxe_resp_queue_pkt(pkt->qp, skb); + if (RXECB(skb)->mask & RXE_REQ_MASK) + rxe_resp_queue_pkt(RXECB(skb)->qp, skb); else - rxe_comp_queue_pkt(pkt->qp, skb); + rxe_comp_queue_pkt(RXECB(skb)->qp, skb); } -static void rxe_rcv_mcast_pkt(struct rxe_dev *rxe, struct sk_buff *skb) +static void rxe_rcv_mcast_pkt(struct sk_buff *skb) { - struct rxe_pkt_info *pkt = SKB_TO_PKT(skb); + struct sk_buff *s; + struct rxe_pkt_info *pkt = RXECB(skb); + struct rxe_dev *rxe = pkt->rxe; struct rxe_mcg *mcg; struct rxe_mca *mca; struct rxe_qp *qp; @@ -274,26 +275,22 @@ static void rxe_rcv_mcast_pkt(struct rxe_dev *rxe, struct sk_buff *skb) * the last QP in the list. */ if (mca->qp_list.next != &mcg->qp_list) { - struct sk_buff *cskb; - struct rxe_pkt_info *cpkt; - - cskb = skb_clone(skb, GFP_ATOMIC); - if (unlikely(!cskb)) + s = skb_clone(skb, GFP_ATOMIC); + if (unlikely(!s)) continue; if (WARN_ON(!ib_device_try_get(&rxe->ib_dev))) { - kfree_skb(cskb); + kfree_skb(s); break; } - cpkt = SKB_TO_PKT(cskb); - cpkt->qp = qp; + RXECB(s)->qp = qp; rxe_add_ref(qp); - rxe_rcv_pkt(cpkt, cskb); + rxe_rcv_pkt(s); } else { - pkt->qp = qp; + RXECB(skb)->qp = qp; rxe_add_ref(qp); - rxe_rcv_pkt(pkt, skb); + rxe_rcv_pkt(skb); skb = NULL; /* mark consumed */ } } @@ -326,7 +323,7 @@ static void rxe_rcv_mcast_pkt(struct rxe_dev *rxe, struct sk_buff *skb) */ static int rxe_chk_dgid(struct rxe_dev *rxe, struct sk_buff *skb) { - struct rxe_pkt_info *pkt = SKB_TO_PKT(skb); + struct rxe_pkt_info *pkt = RXECB(skb); const struct ib_gid_attr *gid_attr; union ib_gid dgid; union ib_gid *pdgid; @@ -359,7 +356,7 @@ static int rxe_chk_dgid(struct rxe_dev *rxe, struct sk_buff *skb) void rxe_rcv(struct sk_buff *skb) { int err; - struct rxe_pkt_info *pkt = SKB_TO_PKT(skb); + struct rxe_pkt_info *pkt = RXECB(skb); struct rxe_dev *rxe = pkt->rxe; if (unlikely(skb->len < RXE_BTH_BYTES)) @@ -378,7 +375,7 @@ void rxe_rcv(struct sk_buff *skb) if (unlikely(skb->len < header_size(pkt))) goto drop; - err = hdr_check(pkt); + err = hdr_check(skb); if (unlikely(err)) goto drop; @@ -389,9 +386,9 @@ void rxe_rcv(struct sk_buff *skb) rxe_counter_inc(rxe, RXE_CNT_RCVD_PKTS); if (unlikely(bth_qpn(pkt) == IB_MULTICAST_QPN)) - rxe_rcv_mcast_pkt(rxe, skb); + rxe_rcv_mcast_pkt(skb); else - rxe_rcv_pkt(pkt, skb); + rxe_rcv_pkt(skb); return; From patchwork Mon Jan 31 22:08:43 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bob Pearson X-Patchwork-Id: 12731266 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 358B7C433EF for ; Mon, 31 Jan 2022 22:10:15 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231660AbiAaWKN (ORCPT ); Mon, 31 Jan 2022 17:10:13 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58938 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231769AbiAaWKM (ORCPT ); Mon, 31 Jan 2022 17:10:12 -0500 Received: from mail-oi1-x230.google.com (mail-oi1-x230.google.com [IPv6:2607:f8b0:4864:20::230]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 9D7F4C061714 for ; Mon, 31 Jan 2022 14:10:12 -0800 (PST) Received: by mail-oi1-x230.google.com with SMTP id u13so13359009oie.5 for ; Mon, 31 Jan 2022 14:10:12 -0800 
From: Bob Pearson To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org Cc: Bob Pearson Subject: [PATCH for-next v10 10/17] RDMA/rxe: Split rxe_rcv_mcast_pkt into two phases Date: Mon, 31 Jan 2022 16:08:43 -0600 Message-Id: <20220131220849.10170-11-rpearsonhpe@gmail.com> X-Mailer: git-send-email 2.32.0 In-Reply-To: <20220131220849.10170-1-rpearsonhpe@gmail.com> References: <20220131220849.10170-1-rpearsonhpe@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org
Currently rxe_rcv_mcast_pkt performs most of its work under mcg->mcg_lock and calls into rxe_rcv, which queues the packets to the responder and completer tasklets while still holding the lock; that is a very bad idea. This patch walks the qp_list in mcg and copies the qp addresses to a dynamically allocated array under the lock, but does the rest of the work without holding it. The critical section is now very small.
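The change is an instance of the general snapshot-then-process idiom: hold the spinlock only long enough to copy the membership into a private array (taking a reference on each entry), then do the slow per-member delivery with the lock dropped. A minimal user-space sketch of the idiom follows; the names (item, snapshot_and_deliver, deliver) are illustrative assumptions, not rxe symbols:

#include <stdlib.h>
#include <pthread.h>

struct item {
	struct item *next;
	void *payload;
};

static pthread_spinlock_t list_lock;	/* assume initialized elsewhere */
static struct item *list_head;

/* Phase 1: copy payload pointers out of the list under the lock.
 * Phase 2: deliver to each member with the lock released, so the
 * delivery work can block or take other locks safely.
 */
static void snapshot_and_deliver(void (*deliver)(void *))
{
	void **snap;
	struct item *it;
	int n = 0, nmax = 16;	/* fixed headroom; the patch sizes from qp_num */

	snap = malloc(nmax * sizeof(*snap));
	if (!snap)
		return;

	pthread_spin_lock(&list_lock);
	for (it = list_head; it && n < nmax; it = it->next)
		snap[n++] = it->payload;	/* a real version also takes a ref here */
	pthread_spin_unlock(&list_lock);

	while (n--)
		deliver(snap[n]);	/* lock is not held during delivery */

	free(snap);
}

The snapshot can race with attach/detach, but for an unreliable datagram service that only shifts which members count as attached at the instant the packet is received, which is exactly the trade the commit message describes.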
Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe_mcast.c | 11 ++++--- drivers/infiniband/sw/rxe/rxe_recv.c | 41 +++++++++++++++++++++------ drivers/infiniband/sw/rxe/rxe_verbs.h | 2 +- 3 files changed, 38 insertions(+), 16 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe_mcast.c b/drivers/infiniband/sw/rxe/rxe_mcast.c index 29e6c9e11c77..52bd46ca22c9 100644 --- a/drivers/infiniband/sw/rxe/rxe_mcast.c +++ b/drivers/infiniband/sw/rxe/rxe_mcast.c @@ -122,16 +122,16 @@ static int rxe_mcast_add_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp, } mca = new_mca; - if (mcg->num_qp >= rxe->attr.max_mcast_qp_attach) { + if (atomic_read(&mcg->qp_num) >= rxe->attr.max_mcast_qp_attach) { err = -ENOMEM; goto out; } - mcg->num_qp++; + atomic_inc(&mcg->qp_num); mca->qp = qp; atomic_inc(&qp->mcg_num); - list_add(&mca->qp_list, &mcg->qp_list); + list_add_tail(&mca->qp_list, &mcg->qp_list); err = 0; out: @@ -154,8 +154,7 @@ static int rxe_mcast_drop_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp, list_for_each_entry_safe(mca, tmp, &mcg->qp_list, qp_list) { if (mca->qp == qp) { list_del(&mca->qp_list); - mcg->num_qp--; - if (mcg->num_qp <= 0) + if (atomic_dec_return(&mcg->qp_num) <= 0) __rxe_destroy_mcg(mcg); atomic_dec(&qp->mcg_num); @@ -190,7 +189,7 @@ int rxe_attach_mcast(struct ib_qp *ibqp, union ib_gid *mgid, u16 mlid) err = rxe_mcast_add_grp_elem(rxe, qp, mcg); - if (mcg->num_qp == 0) + if (atomic_read(&mcg->qp_num) == 0) __rxe_destroy_mcg(mcg); rxe_drop_ref(mcg); diff --git a/drivers/infiniband/sw/rxe/rxe_recv.c b/drivers/infiniband/sw/rxe/rxe_recv.c index 10020103ea4a..ed80125f1dc5 100644 --- a/drivers/infiniband/sw/rxe/rxe_recv.c +++ b/drivers/infiniband/sw/rxe/rxe_recv.c @@ -229,6 +229,11 @@ static inline void rxe_rcv_pkt(struct sk_buff *skb) rxe_comp_queue_pkt(RXECB(skb)->qp, skb); } +/* split processing of the qp list into two stages. + * first just make a simple linear array from the + * current list while holding the lock and then + * process each qp without holding the lock. + */ static void rxe_rcv_mcast_pkt(struct sk_buff *skb) { struct sk_buff *s; @@ -237,7 +242,9 @@ static void rxe_rcv_mcast_pkt(struct sk_buff *skb) struct rxe_mcg *mcg; struct rxe_mca *mca; struct rxe_qp *qp; + struct rxe_qp **qp_array; union ib_gid dgid; + int n, nmax; int err; if (skb->protocol == htons(ETH_P_IP)) @@ -251,15 +258,32 @@ static void rxe_rcv_mcast_pkt(struct sk_buff *skb) if (!mcg) goto drop; /* mcast group not registered */ + /* this is the current number of qp's attached to mcg plus a + * little room in case new qp's are attached. It isn't wrong + * to miss some qp's since it is just a matter of precisely + * when the packet is assumed to be received. + */ + nmax = atomic_read(&mcg->qp_num) + 2; + qp_array = kmalloc_array(nmax, sizeof(qp), GFP_KERNEL); + + n = 0; spin_lock_bh(&mcg->mcg_lock); + list_for_each_entry(mca, &mcg->qp_list, qp_list) { + rxe_add_ref(mca->qp); + qp_array[n++] = mca->qp; + if (n == nmax) + break; + } + spin_unlock_bh(&mcg->mcg_lock); + nmax = n; /* this is unreliable datagram service so we let * failures to deliver a multicast packet to a * single QP happen and just move on and try * the rest of them on the list */ - list_for_each_entry(mca, &mcg->qp_list, qp_list) { - qp = mca->qp; + for (n = 0; n < nmax; n++) { + qp = qp_array[n]; /* validate qp for incoming packet */ err = check_type_state(rxe, pkt, qp); @@ -274,28 +298,27 @@ static void rxe_rcv_mcast_pkt(struct sk_buff *skb) * skb and pass to the QP. Pass the original skb to * the last QP in the list. 
*/ - if (mca->qp_list.next != &mcg->qp_list) { - s = skb_clone(skb, GFP_ATOMIC); + if (n < nmax - 1) { + s = skb_clone(skb, GFP_KERNEL); if (unlikely(!s)) continue; + RXECB(s)->qp = qp; if (WARN_ON(!ib_device_try_get(&rxe->ib_dev))) { + rxe_drop_ref(RXECB(s)->qp); kfree_skb(s); - break; + continue; } - RXECB(s)->qp = qp; - rxe_add_ref(qp); rxe_rcv_pkt(s); } else { RXECB(skb)->qp = qp; - rxe_add_ref(qp); rxe_rcv_pkt(skb); skb = NULL; /* mark consumed */ } } - spin_unlock_bh(&mcg->mcg_lock); + kfree(qp_array); rxe_drop_ref(mcg); /* drop ref from rxe_pool_get_key. */ diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h index 02745d51c163..d65c358798c6 100644 --- a/drivers/infiniband/sw/rxe/rxe_verbs.h +++ b/drivers/infiniband/sw/rxe/rxe_verbs.h @@ -356,8 +356,8 @@ struct rxe_mcg { spinlock_t mcg_lock; /* guard group */ struct rxe_dev *rxe; struct list_head qp_list; + atomic_t qp_num; union ib_gid mgid; - int num_qp; u32 qkey; u16 pkey; }; From patchwork Mon Jan 31 22:08:44 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bob Pearson X-Patchwork-Id: 12731265 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4F465C4332F for ; Mon, 31 Jan 2022 22:10:15 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231710AbiAaWKO (ORCPT ); Mon, 31 Jan 2022 17:10:14 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58946 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231627AbiAaWKN (ORCPT ); Mon, 31 Jan 2022 17:10:13 -0500 Received: from mail-oi1-x234.google.com (mail-oi1-x234.google.com [IPv6:2607:f8b0:4864:20::234]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 6F4F0C06173D for ; Mon, 31 Jan 2022 14:10:13 -0800 (PST) Received: by mail-oi1-x234.google.com with SMTP id v67so29534185oie.9 for ; Mon, 31 Jan 2022 14:10:13 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=vyhU+SAzlULycIcyLyDOAHPl5t3FH7EOR8gipo1aZZQ=; b=ncTLCO+May2EkmrxW6WtEfI2yP8a05YnuM5yyZ4Ewc5uxuKHLEnLqLMQk7BXKRedld lhqGvanKtsXAKMKLhgUwkzYhk/AUcFSerscE459b9hrJ48XdTpoLhyJkIPsS+dG7nVo4 VXGQbbFef+FeQVvoDbKL6GgJQGNlhSBabOR2QKJGhNd+lR86307aXJ8paFY2rWRs/L76 orXsv6eNZLOT1jUHDC/5OgV/qO9J0jCh3X9UmIbtR7BdpTK1jYeA4DqU3so259PdsvrX SqHE/z9IAc10nqEXyuBve8sfRc3Bi71ThFQFnsghPQ6/jtgJhkCLCRMraj6abOZ6itdV Zb9Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=vyhU+SAzlULycIcyLyDOAHPl5t3FH7EOR8gipo1aZZQ=; b=wfRkpfPvDqVsSCh4IMEWPJ8+z1pLATPCSFlXHvIswe/e9OL8Ku3SGoKvDJOus0/9vs cGEzV8G1nFhxPjK9MeN8Y/a6ytPJ8bfGgRoAKaM5nL8oSEn33xQviihnNtP+0spF8QZX 75IHnxqAC635MATssO7CVoZNjQ1tgDmkoPHB2Z9UgrAd5t+dSgPSs7qQ7rjwD5BMIV+6 URfyi5SzWC82vKPWUMIgwfSal677X2EWaBAagxmXGJf/ECk4JAUNx+JWQ8drmZH22egm /slo5euIQHM6SNVY7zmwzi3NN9dKNGm5yJS2fB6q3SLxKdz28YRv3KZdLsP+EVe0EIrD NOeA== X-Gm-Message-State: AOAM530qYh6Lj3ax8B7gP7cZBBpIavCJOncpuzkG2zqucYmRHY36zLXA aC1u9usMyrKDMcNNr2AQess9v6/A9Vk= X-Google-Smtp-Source: ABdhPJw/BCmJkZ4CcaTR3Bn+ku9DrpIAPWfLq+VKaT9bfKCkhh4DEIHlx2XOrDW3FIsKpF5e15FspA== X-Received: by 
2002:a05:6808:1250:: with SMTP id o16mr14512371oiv.95.1643667012873; Mon, 31 Jan 2022 14:10:12 -0800 (PST) Received: from ubuntu-21.tx.rr.com (2603-8081-140c-1a00-5c63-4cee-84ac-42bc.res6.spectrum.com. [2603:8081:140c:1a00:5c63:4cee:84ac:42bc]) by smtp.googlemail.com with ESMTPSA id t21sm8304929otq.81.2022.01.31.14.10.12 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 31 Jan 2022 14:10:12 -0800 (PST) From: Bob Pearson To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org Cc: Bob Pearson Subject: [PATCH for-next v10 11/17] RDMA/rxe: Replace mcg locks by rxe->mcg_lock Date: Mon, 31 Jan 2022 16:08:44 -0600 Message-Id: <20220131220849.10170-12-rpearsonhpe@gmail.com> X-Mailer: git-send-email 2.32.0 In-Reply-To: <20220131220849.10170-1-rpearsonhpe@gmail.com> References: <20220131220849.10170-1-rpearsonhpe@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org Starting to decouple mcg from rxe pools, replace the spin lock mcg->mcg_lock and the read/write lock pool->pool_lock by rxe->mcg_lock. Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe.c | 2 ++ drivers/infiniband/sw/rxe/rxe_mcast.c | 25 ++++++++++++------------- drivers/infiniband/sw/rxe/rxe_recv.c | 4 ++-- drivers/infiniband/sw/rxe/rxe_verbs.h | 3 ++- 4 files changed, 18 insertions(+), 16 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe.c b/drivers/infiniband/sw/rxe/rxe.c index c55736e441e7..46a07e2d9dcf 100644 --- a/drivers/infiniband/sw/rxe/rxe.c +++ b/drivers/infiniband/sw/rxe/rxe.c @@ -198,6 +198,8 @@ static int rxe_init(struct rxe_dev *rxe) if (err) return err; + spin_lock_init(&rxe->mcg_lock); + /* init pending mmap list */ spin_lock_init(&rxe->mmap_offset_lock); spin_lock_init(&rxe->pending_lock); diff --git a/drivers/infiniband/sw/rxe/rxe_mcast.c b/drivers/infiniband/sw/rxe/rxe_mcast.c index 52bd46ca22c9..d35070777214 100644 --- a/drivers/infiniband/sw/rxe/rxe_mcast.c +++ b/drivers/infiniband/sw/rxe/rxe_mcast.c @@ -25,7 +25,7 @@ static int rxe_mcast_delete(struct rxe_dev *rxe, union ib_gid *mgid) return dev_mc_del(rxe->ndev, ll_addr); } -/* caller should hold mc_grp_pool->pool_lock */ +/* caller should hold rxe->mcg_lock */ static int __rxe_create_mcg(struct rxe_dev *rxe, struct rxe_pool *pool, union ib_gid *mgid, struct rxe_mcg **mcg_p) { @@ -43,7 +43,6 @@ static int __rxe_create_mcg(struct rxe_dev *rxe, struct rxe_pool *pool, } INIT_LIST_HEAD(&mcg->qp_list); - spin_lock_init(&mcg->mcg_lock); mcg->rxe = rxe; rxe_add_ref(mcg); @@ -72,7 +71,7 @@ static int rxe_mcast_get_mcg(struct rxe_dev *rxe, union ib_gid *mgid, if (rxe->attr.max_mcast_qp_attach == 0) return -EINVAL; - write_lock_bh(&pool->pool_lock); + spin_lock_bh(&rxe->mcg_lock); mcg = rxe_pool_get_key_locked(pool, mgid); if (mcg) @@ -80,12 +79,12 @@ static int rxe_mcast_get_mcg(struct rxe_dev *rxe, union ib_gid *mgid, err = __rxe_create_mcg(rxe, pool, mgid, &mcg); if (err) { - write_unlock_bh(&pool->pool_lock); + spin_unlock_bh(&rxe->mcg_lock); return err; } done: - write_unlock_bh(&pool->pool_lock); + spin_unlock_bh(&rxe->mcg_lock); *mcg_p = mcg; return 0; } @@ -97,21 +96,21 @@ static int rxe_mcast_add_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp, struct rxe_mca *mca, *new_mca; /* check to see if the qp is already a member of the group */ - spin_lock_bh(&mcg->mcg_lock); + spin_lock_bh(&rxe->mcg_lock); list_for_each_entry(mca, &mcg->qp_list, qp_list) { if (mca->qp == qp) { - spin_unlock_bh(&mcg->mcg_lock); + spin_unlock_bh(&rxe->mcg_lock); return 0; } } - 
spin_unlock_bh(&mcg->mcg_lock); + spin_unlock_bh(&rxe->mcg_lock); /* speculative alloc new mca without using GFP_ATOMIC */ new_mca = kzalloc(sizeof(*mca), GFP_KERNEL); if (!new_mca) return -ENOMEM; - spin_lock_bh(&mcg->mcg_lock); + spin_lock_bh(&rxe->mcg_lock); /* re-check to see if someone else just attached qp */ list_for_each_entry(mca, &mcg->qp_list, qp_list) { if (mca->qp == qp) { @@ -135,7 +134,7 @@ static int rxe_mcast_add_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp, err = 0; out: - spin_unlock_bh(&mcg->mcg_lock); + spin_unlock_bh(&rxe->mcg_lock); return err; } @@ -149,7 +148,7 @@ static int rxe_mcast_drop_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp, if (!mcg) goto err1; - spin_lock_bh(&mcg->mcg_lock); + spin_lock_bh(&rxe->mcg_lock); list_for_each_entry_safe(mca, tmp, &mcg->qp_list, qp_list) { if (mca->qp == qp) { @@ -158,14 +157,14 @@ static int rxe_mcast_drop_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp, __rxe_destroy_mcg(mcg); atomic_dec(&qp->mcg_num); - spin_unlock_bh(&mcg->mcg_lock); + spin_unlock_bh(&rxe->mcg_lock); rxe_drop_ref(mcg); kfree(mca); return 0; } } - spin_unlock_bh(&mcg->mcg_lock); + spin_unlock_bh(&rxe->mcg_lock); rxe_drop_ref(mcg); err1: return -EINVAL; diff --git a/drivers/infiniband/sw/rxe/rxe_recv.c b/drivers/infiniband/sw/rxe/rxe_recv.c index ed80125f1dc5..9a45743c8eaa 100644 --- a/drivers/infiniband/sw/rxe/rxe_recv.c +++ b/drivers/infiniband/sw/rxe/rxe_recv.c @@ -267,14 +267,14 @@ static void rxe_rcv_mcast_pkt(struct sk_buff *skb) qp_array = kmalloc_array(nmax, sizeof(qp), GFP_KERNEL); n = 0; - spin_lock_bh(&mcg->mcg_lock); + spin_lock_bh(&rxe->mcg_lock); list_for_each_entry(mca, &mcg->qp_list, qp_list) { rxe_add_ref(mca->qp); qp_array[n++] = mca->qp; if (n == nmax) break; } - spin_unlock_bh(&mcg->mcg_lock); + spin_unlock_bh(&rxe->mcg_lock); nmax = n; /* this is unreliable datagram service so we let diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h index d65c358798c6..b72f8f09d984 100644 --- a/drivers/infiniband/sw/rxe/rxe_verbs.h +++ b/drivers/infiniband/sw/rxe/rxe_verbs.h @@ -353,7 +353,6 @@ struct rxe_mw { struct rxe_mcg { struct rxe_pool_elem elem; - spinlock_t mcg_lock; /* guard group */ struct rxe_dev *rxe; struct list_head qp_list; atomic_t qp_num; @@ -397,6 +396,8 @@ struct rxe_dev { struct rxe_pool mw_pool; struct rxe_pool mc_grp_pool; + spinlock_t mcg_lock; /* guard multicast groups */ + spinlock_t pending_lock; /* guard pending_mmaps */ struct list_head pending_mmaps; From patchwork Mon Jan 31 22:08:45 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bob Pearson X-Patchwork-Id: 12731268 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3CDEDC433EF for ; Mon, 31 Jan 2022 22:10:18 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231745AbiAaWKR (ORCPT ); Mon, 31 Jan 2022 17:10:17 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58956 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231627AbiAaWKO (ORCPT ); Mon, 31 Jan 2022 17:10:14 -0500 Received: from mail-oi1-x22a.google.com (mail-oi1-x22a.google.com [IPv6:2607:f8b0:4864:20::22a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 58FEFC061714 for ; Mon, 31 Jan 2022 14:10:14 -0800 (PST) Received: by 
From: Bob Pearson To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org Cc: Bob Pearson Subject: [PATCH for-next v10 12/17] RDMA/rxe: Replace pool key by rxe->mcg_tree Date: Mon, 31 Jan 2022 16:08:45 -0600 Message-Id: <20220131220849.10170-13-rpearsonhpe@gmail.com> X-Mailer: git-send-email 2.32.0 In-Reply-To: <20220131220849.10170-1-rpearsonhpe@gmail.com> References: <20220131220849.10170-1-rpearsonhpe@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org
Continuing to decouple mcg from rxe pools. Create red-black tree code in rxe_mcast.c to hold mcg index. Replace pool key calls by calls to local red-black routines.
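The new tree is keyed by a raw memcmp() over the 16-byte mgid, following the kernel convention that <linux/rbtree.h> leaves key comparison to the caller, so every user writes the same descend-and-compare loops. A user-space sketch of the shape of those loops, using a plain (unbalanced) binary search tree; rb_link_node()/rb_insert_color() supply the rebalancing in the kernel version, and the names here are illustrative assumptions rather than rxe symbols:

#include <string.h>
#include <stddef.h>

struct node {
	unsigned char key[16];	/* stands in for the mgid */
	struct node *left, *right;
};

static struct node *lookup(struct node *root, const unsigned char *key)
{
	while (root) {
		int cmp = memcmp(root->key, key, sizeof(root->key));

		if (cmp > 0)
			root = root->left;
		else if (cmp < 0)
			root = root->right;
		else
			return root;	/* found; caller would take a reference */
	}
	return NULL;
}

static void insert(struct node **link, struct node *new)
{
	/* like the patch, assumes the caller has ruled out duplicates */
	while (*link) {
		int cmp = memcmp((*link)->key, new->key, sizeof(new->key));

		link = (cmp > 0) ? &(*link)->left : &(*link)->right;
	}
	new->left = new->right = NULL;
	*link = new;	/* kernel version: rb_link_node() + rb_insert_color() */
}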
Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe.c | 1 + drivers/infiniband/sw/rxe/rxe_loc.h | 2 +- drivers/infiniband/sw/rxe/rxe_mcast.c | 233 +++++++++++++++++++++----- drivers/infiniband/sw/rxe/rxe_pool.c | 6 +- drivers/infiniband/sw/rxe/rxe_recv.c | 4 +- drivers/infiniband/sw/rxe/rxe_verbs.h | 3 + 6 files changed, 197 insertions(+), 52 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe.c b/drivers/infiniband/sw/rxe/rxe.c index 46a07e2d9dcf..310e184ae9e8 100644 --- a/drivers/infiniband/sw/rxe/rxe.c +++ b/drivers/infiniband/sw/rxe/rxe.c @@ -199,6 +199,7 @@ static int rxe_init(struct rxe_dev *rxe) return err; spin_lock_init(&rxe->mcg_lock); + rxe->mcg_tree = RB_ROOT; /* init pending mmap list */ spin_lock_init(&rxe->mmap_offset_lock); diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h index af40e3c212fb..bd701af7758c 100644 --- a/drivers/infiniband/sw/rxe/rxe_loc.h +++ b/drivers/infiniband/sw/rxe/rxe_loc.h @@ -40,7 +40,7 @@ void rxe_cq_disable(struct rxe_cq *cq); void rxe_cq_cleanup(struct rxe_pool_elem *arg); /* rxe_mcast.c */ -void rxe_mc_cleanup(struct rxe_pool_elem *arg); +struct rxe_mcg *rxe_lookup_mcg(struct rxe_dev *rxe, union ib_gid *mgid); int rxe_attach_mcast(struct ib_qp *ibqp, union ib_gid *mgid, u16 mlid); int rxe_detach_mcast(struct ib_qp *ibqp, union ib_gid *mgid, u16 mlid); diff --git a/drivers/infiniband/sw/rxe/rxe_mcast.c b/drivers/infiniband/sw/rxe/rxe_mcast.c index d35070777214..82669d14d8a9 100644 --- a/drivers/infiniband/sw/rxe/rxe_mcast.c +++ b/drivers/infiniband/sw/rxe/rxe_mcast.c @@ -25,68 +25,189 @@ static int rxe_mcast_delete(struct rxe_dev *rxe, union ib_gid *mgid) return dev_mc_del(rxe->ndev, ll_addr); } -/* caller should hold rxe->mcg_lock */ -static int __rxe_create_mcg(struct rxe_dev *rxe, struct rxe_pool *pool, - union ib_gid *mgid, struct rxe_mcg **mcg_p) +/** + * __rxe_insert_mcg - insert an mcg into red-black tree (rxe->mcg_tree) + * @mcg: mcg object with an embedded red-black tree node + * + * Context: caller must hold a reference to mcg and rxe->mcg_lock and + * is responsible to avoid adding the same mcg twice to the tree. 
+ */ +static void __rxe_insert_mcg(struct rxe_mcg *mcg) { - int err; + struct rb_root *tree = &mcg->rxe->mcg_tree; + struct rb_node **link = &tree->rb_node; + struct rb_node *node = NULL; + struct rxe_mcg *tmp; + int cmp; + + while (*link) { + node = *link; + tmp = rb_entry(node, struct rxe_mcg, node); + + cmp = memcmp(&tmp->mgid, &mcg->mgid, sizeof(mcg->mgid)); + if (cmp > 0) + link = &(*link)->rb_left; + else + link = &(*link)->rb_right; + } + + rb_link_node(&mcg->node, node, link); + rb_insert_color(&mcg->node, tree); +} + +/** + * __rxe_remove_mcg - remove an mcg from red-black tree holding lock + * @mcg: mcast group object with an embedded red-black tree node + * + * Context: caller must hold a reference to mcg and rxe->mcg_lock + */ +static void __rxe_remove_mcg(struct rxe_mcg *mcg) +{ + rb_erase(&mcg->node, &mcg->rxe->mcg_tree); +} + +/** + * __rxe_lookup_mcg - lookup mcg in rxe->mcg_tree while holding lock + * @rxe: rxe device object + * @mgid: multicast IP address + * + * Context: caller must hold rxe->mcg_lock + * Returns: mcg on success and takes a ref to mcg else NULL + */ +static struct rxe_mcg *__rxe_lookup_mcg(struct rxe_dev *rxe, + union ib_gid *mgid) +{ + struct rb_root *tree = &rxe->mcg_tree; struct rxe_mcg *mcg; + struct rb_node *node; + int cmp; - mcg = rxe_alloc_locked(&rxe->mc_grp_pool); - if (!mcg) - return -ENOMEM; + node = tree->rb_node; + + while (node) { + mcg = rb_entry(node, struct rxe_mcg, node); + + cmp = memcmp(&mcg->mgid, mgid, sizeof(*mgid)); + + if (cmp > 0) + node = node->rb_left; + else if (cmp < 0) + node = node->rb_right; + else + break; + } + + if (node) { + rxe_add_ref(mcg); + return mcg; + } + + return NULL; +} + +/** + * rxe_lookup_mcg - lookup up mcg in red-back tree + * @rxe: rxe device object + * @mgid: multicast IP address + * + * Returns: mcg if found else NULL + */ +struct rxe_mcg *rxe_lookup_mcg(struct rxe_dev *rxe, union ib_gid *mgid) +{ + struct rxe_mcg *mcg; + + spin_lock_bh(&rxe->mcg_lock); + mcg = __rxe_lookup_mcg(rxe, mgid); + spin_unlock_bh(&rxe->mcg_lock); + + return mcg; +} + +/** + * __rxe_init_mcg - initialize a new mcg + * @rxe: rxe device + * @mgid: multicast address as a gid + * @mcg: new mcg object + * + * Context: caller should hold rxe->mcg lock + * Returns: 0 on success else an error + */ +static int __rxe_init_mcg(struct rxe_dev *rxe, union ib_gid *mgid, + struct rxe_mcg *mcg) +{ + int err; err = rxe_mcast_add(rxe, mgid); - if (unlikely(err)) { - rxe_drop_ref(mcg); + if (unlikely(err)) return err; - } + memcpy(&mcg->mgid, mgid, sizeof(mcg->mgid)); INIT_LIST_HEAD(&mcg->qp_list); mcg->rxe = rxe; rxe_add_ref(mcg); - rxe_add_key_locked(mcg, mgid); + __rxe_insert_mcg(mcg); - *mcg_p = mcg; - return 0; -} -/* caller is holding a ref from lookup and mcg->mcg_lock*/ -void __rxe_destroy_mcg(struct rxe_mcg *mcg) -{ - rxe_drop_key(mcg); - rxe_drop_ref(mcg); - - rxe_mcast_delete(mcg->rxe, &mcg->mgid); + return 0; } -static int rxe_mcast_get_mcg(struct rxe_dev *rxe, union ib_gid *mgid, - struct rxe_mcg **mcg_p) +/** + * rxe_get_mcg - lookup or allocate a mcg + * @rxe: rxe device object + * @mgid: multicast IP address + * @mcgp: address of returned mcg value + * + * Returns: 0 and sets *mcgp to mcg on success else an error + */ +static int rxe_get_mcg(struct rxe_dev *rxe, union ib_gid *mgid, + struct rxe_mcg **mcgp) { - int err; - struct rxe_mcg *mcg; struct rxe_pool *pool = &rxe->mc_grp_pool; + struct rxe_mcg *mcg, *tmp; + int err; - if (rxe->attr.max_mcast_qp_attach == 0) + if (rxe->attr.max_mcast_grp == 0) return -EINVAL; - 
spin_lock_bh(&rxe->mcg_lock); + /* check to see if mcg already exists */ + mcg = rxe_lookup_mcg(rxe, mgid); + if (mcg) { + *mcgp = mcg; + return 0; + } - mcg = rxe_pool_get_key_locked(pool, mgid); - if (mcg) - goto done; + /* speculative alloc of mcg */ + mcg = rxe_alloc(pool); + if (!mcg) + return -ENOMEM; - err = __rxe_create_mcg(rxe, pool, mgid, &mcg); - if (err) { - spin_unlock_bh(&rxe->mcg_lock); - return err; + spin_lock_bh(&rxe->mcg_lock); + /* re-check to see if someone else just added it */ + tmp = __rxe_lookup_mcg(rxe, mgid); + if (tmp) { + rxe_drop_ref(mcg); + mcg = tmp; + goto out; } -done: + if (atomic_inc_return(&rxe->mcg_num) > rxe->attr.max_mcast_grp) { + err = -ENOMEM; + goto err_dec; + } + + err = __rxe_init_mcg(rxe, mgid, mcg); + if (err) + goto err_dec; +out: spin_unlock_bh(&rxe->mcg_lock); - *mcg_p = mcg; + *mcgp = mcg; return 0; +err_dec: + atomic_dec(&rxe->mcg_num); + spin_unlock_bh(&rxe->mcg_lock); + rxe_drop_ref(mcg); + return err; } static int rxe_mcast_add_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp, @@ -138,13 +259,42 @@ static int rxe_mcast_add_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp, return err; } +/** + * __rxe_destroy_mcg - destroy mcg object holding rxe->mcg_lock + * @mcg: the mcg object + * + * Context: caller is holding rxe->mcg_lock, all refs to mcg are dropped + * no qp's are attached to mcg + */ +void __rxe_destroy_mcg(struct rxe_mcg *mcg) +{ + __rxe_remove_mcg(mcg); + + rxe_drop_ref(mcg); + + rxe_mcast_delete(mcg->rxe, &mcg->mgid); +} + +/** + * rxe_destroy_mcg - destroy mcg object + * @mcg: the mcg object + * + * Context: all refs to mcg are dropped, no qp's are attached to mcg + */ +static void rxe_destroy_mcg(struct rxe_mcg *mcg) +{ + spin_lock_bh(&mcg->rxe->mcg_lock); + __rxe_destroy_mcg(mcg); + spin_unlock_bh(&mcg->rxe->mcg_lock); +} + static int rxe_mcast_drop_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp, union ib_gid *mgid) { struct rxe_mcg *mcg; struct rxe_mca *mca, *tmp; - mcg = rxe_pool_get_key(&rxe->mc_grp_pool, mgid); + mcg = rxe_lookup_mcg(rxe, mgid); if (!mcg) goto err1; @@ -170,11 +320,6 @@ static int rxe_mcast_drop_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp, return -EINVAL; } -void rxe_mc_cleanup(struct rxe_pool_elem *elem) -{ - /* nothing left to do */ -} - int rxe_attach_mcast(struct ib_qp *ibqp, union ib_gid *mgid, u16 mlid) { int err; @@ -182,14 +327,14 @@ int rxe_attach_mcast(struct ib_qp *ibqp, union ib_gid *mgid, u16 mlid) struct rxe_qp *qp = to_rqp(ibqp); struct rxe_mcg *mcg; - err = rxe_mcast_get_mcg(rxe, mgid, &mcg); + err = rxe_get_mcg(rxe, mgid, &mcg); if (err) return err; err = rxe_mcast_add_grp_elem(rxe, qp, mcg); if (atomic_read(&mcg->qp_num) == 0) - __rxe_destroy_mcg(mcg); + rxe_destroy_mcg(mcg); rxe_drop_ref(mcg); return err; diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c index a6756aa93e2b..4eff95d57aa4 100644 --- a/drivers/infiniband/sw/rxe/rxe_pool.c +++ b/drivers/infiniband/sw/rxe/rxe_pool.c @@ -82,13 +82,9 @@ static const struct rxe_type_info { .max_index = RXE_MAX_MW_INDEX, }, [RXE_TYPE_MC_GRP] = { - .name = "rxe-mc_grp", + .name = "rxe-mcg", .size = sizeof(struct rxe_mcg), .elem_offset = offsetof(struct rxe_mcg, elem), - .cleanup = rxe_mc_cleanup, - .flags = RXE_POOL_KEY, - .key_offset = offsetof(struct rxe_mcg, mgid), - .key_size = sizeof(union ib_gid), }, }; diff --git a/drivers/infiniband/sw/rxe/rxe_recv.c b/drivers/infiniband/sw/rxe/rxe_recv.c index 9a45743c8eaa..9a92e5a486ee 100644 --- a/drivers/infiniband/sw/rxe/rxe_recv.c +++ 
b/drivers/infiniband/sw/rxe/rxe_recv.c @@ -254,7 +254,7 @@ static void rxe_rcv_mcast_pkt(struct sk_buff *skb) memcpy(&dgid, &ipv6_hdr(skb)->daddr, sizeof(dgid)); /* lookup mcast group corresponding to mgid, takes a ref */ - mcg = rxe_pool_get_key(&rxe->mc_grp_pool, &dgid); + mcg = rxe_lookup_mcg(rxe, &dgid); if (!mcg) goto drop; /* mcast group not registered */ @@ -320,7 +320,7 @@ static void rxe_rcv_mcast_pkt(struct sk_buff *skb) kfree(qp_array); - rxe_drop_ref(mcg); /* drop ref from rxe_pool_get_key. */ + rxe_drop_ref(mcg); if (likely(!skb)) return; diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h index b72f8f09d984..ea2d9ff29744 100644 --- a/drivers/infiniband/sw/rxe/rxe_verbs.h +++ b/drivers/infiniband/sw/rxe/rxe_verbs.h @@ -353,6 +353,7 @@ struct rxe_mw { struct rxe_mcg { struct rxe_pool_elem elem; + struct rb_node node; struct rxe_dev *rxe; struct list_head qp_list; atomic_t qp_num; @@ -397,6 +398,8 @@ struct rxe_dev { struct rxe_pool mc_grp_pool; spinlock_t mcg_lock; /* guard multicast groups */ + struct rb_root mcg_tree; + atomic_t mcg_num; spinlock_t pending_lock; /* guard pending_mmaps */ struct list_head pending_mmaps; From patchwork Mon Jan 31 22:08:46 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bob Pearson X-Patchwork-Id: 12731267 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5407BC433FE for ; Mon, 31 Jan 2022 22:10:18 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231769AbiAaWKR (ORCPT ); Mon, 31 Jan 2022 17:10:17 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58972 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231803AbiAaWKP (ORCPT ); Mon, 31 Jan 2022 17:10:15 -0500 Received: from mail-ot1-x335.google.com (mail-ot1-x335.google.com [IPv6:2607:f8b0:4864:20::335]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 2E916C061744 for ; Mon, 31 Jan 2022 14:10:15 -0800 (PST) Received: by mail-ot1-x335.google.com with SMTP id n6-20020a9d6f06000000b005a0750019a7so14427643otq.5 for ; Mon, 31 Jan 2022 14:10:15 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=e8FYVdtWAAqdzoqyYFlGhe2LJ7OpG5+d8J1MkrjCL+s=; b=k/+JKCgpwnFToKQCaHOjxxui96Wl8kxVcbKq3Amm8RgM3Dq9VDenAf9XXbBLiaLCPj euwCDpZsknKv4o8rD5OXrEG9ftSw0tKHzOmBJmuXQvyF6hB3e8o78RiQnR561mqgkuCc UdHilOq5ZeT9YMg7rf9d28IzsbyU577KVMfAIqLLdpz97wmfOty2z4Qpwbts9aFbvaLu xl0dBbxkWHqPSTaZSDrwSEYyt526tIPpRpkS/UVsKvh8vwO7IJFZXCxm4aD41NlbaoGu sz5034Z8VrazDd80AZd7QaFvP8GcxWyusoNBDcVg9gyWujRWu9sKuaJskx+mWU4roUY9 tiKg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=e8FYVdtWAAqdzoqyYFlGhe2LJ7OpG5+d8J1MkrjCL+s=; b=0wWRya7XRINnNm48PiOZsw0og3nXRDFt8PHkCAUj4/hv8F/lzDf3UL14hdyVamVIu2 6kHoY8SfahPnTP/EmMUXqzHUX1FKwe3yL5y6oNYenTJeBP2rtQxT+U8fajGKYO2YCMJS 9Hj21Et4vlz5B9uJ4n2HojsX+1SjKZrcoFcrXOrVomkSg3t8LcqN6w8raVm5KLJUTw88 Lor2WCHv2Joy6G0LLgt5PbI4VTgiys65WGgKWBPXTtHksP5WPqa8k+sNnIEGAMvTsEhx 8lALAOGMUxeDKhB5oX+WybMpfYXpoQ0KwnEnxoLNlLc3uIr/8lM+YZvTNHsXHrmFATW2 
From: Bob Pearson To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org Cc: Bob Pearson Subject: [PATCH for-next v10 13/17] RDMA/rxe: Remove key'ed object support Date: Mon, 31 Jan 2022 16:08:46 -0600 Message-Id: <20220131220849.10170-14-rpearsonhpe@gmail.com> X-Mailer: git-send-email 2.32.0 In-Reply-To: <20220131220849.10170-1-rpearsonhpe@gmail.com> References: <20220131220849.10170-1-rpearsonhpe@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org
Now that rxe_mcast.c has its own red-black tree support there is no longer any requirement for key'ed objects in rxe pools. This patch removes the key APIs and related code.
Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe_pool.c | 123 --------------------------- drivers/infiniband/sw/rxe/rxe_pool.h | 38 --------- 2 files changed, 161 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c index 4eff95d57aa4..fe0fcca47d8d 100644 --- a/drivers/infiniband/sw/rxe/rxe_pool.c +++ b/drivers/infiniband/sw/rxe/rxe_pool.c @@ -16,8 +16,6 @@ static const struct rxe_type_info { enum rxe_pool_flags flags; u32 min_index; u32 max_index; - size_t key_offset; - size_t key_size; } rxe_type_info[RXE_NUM_TYPES] = { [RXE_TYPE_UC] = { .name = "rxe-uc", @@ -143,12 +141,6 @@ int rxe_pool_init( goto out; } - if (pool->flags & RXE_POOL_KEY) { - pool->key.tree = RB_ROOT; - pool->key.key_offset = info->key_offset; - pool->key.key_size = info->key_size; - } - out: return err; } @@ -205,77 +197,6 @@ static int rxe_insert_index(struct rxe_pool *pool, struct rxe_pool_elem *new) return 0; } -static int rxe_insert_key(struct rxe_pool *pool, struct rxe_pool_elem *new) -{ - struct rb_node **link = &pool->key.tree.rb_node; - struct rb_node *parent = NULL; - struct rxe_pool_elem *elem; - int cmp; - - while (*link) { - parent = *link; - elem = rb_entry(parent, struct rxe_pool_elem, key_node); - - cmp = memcmp((u8 *)elem + pool->key.key_offset, - (u8 *)new + pool->key.key_offset, - pool->key.key_size); - - if (cmp == 0) { - pr_warn("key already exists!\n"); - return -EINVAL; - } - - if (cmp > 0) - link = &(*link)->rb_left; - else - link = &(*link)->rb_right; - } - - rb_link_node(&new->key_node, parent, link); - rb_insert_color(&new->key_node, &pool->key.tree); - - return 0; -} - -int __rxe_add_key_locked(struct rxe_pool_elem *elem, void *key) -{ - struct rxe_pool *pool = elem->pool; - int err; - - memcpy((u8 *)elem + pool->key.key_offset, key, pool->key.key_size); - err = rxe_insert_key(pool, elem); - - return err; -} - -int __rxe_add_key(struct rxe_pool_elem *elem, void *key) -{ - struct rxe_pool *pool = elem->pool; - int err; - - write_lock_bh(&pool->pool_lock); - err = __rxe_add_key_locked(elem, key); - write_unlock_bh(&pool->pool_lock); - - return err; -} - -void __rxe_drop_key_locked(struct rxe_pool_elem *elem) -{ - struct
rxe_pool *pool = elem->pool; - - rb_erase(&elem->key_node, &pool->key.tree); -} - -void __rxe_drop_key(struct rxe_pool_elem *elem) -{ - struct rxe_pool *pool = elem->pool; - - write_lock_bh(&pool->pool_lock); - __rxe_drop_key_locked(elem); - write_unlock_bh(&pool->pool_lock); -} - int __rxe_add_index_locked(struct rxe_pool_elem *elem) { struct rxe_pool *pool = elem->pool; @@ -439,47 +360,3 @@ void *rxe_pool_get_index(struct rxe_pool *pool, u32 index) return obj; } - -void *rxe_pool_get_key_locked(struct rxe_pool *pool, void *key) -{ - struct rb_node *node; - struct rxe_pool_elem *elem; - void *obj; - int cmp; - - node = pool->key.tree.rb_node; - - while (node) { - elem = rb_entry(node, struct rxe_pool_elem, key_node); - - cmp = memcmp((u8 *)elem + pool->key.key_offset, - key, pool->key.key_size); - - if (cmp > 0) - node = node->rb_left; - else if (cmp < 0) - node = node->rb_right; - else - break; - } - - if (node) { - kref_get(&elem->ref_cnt); - obj = elem->obj; - } else { - obj = NULL; - } - - return obj; -} - -void *rxe_pool_get_key(struct rxe_pool *pool, void *key) -{ - void *obj; - - read_lock_bh(&pool->pool_lock); - obj = rxe_pool_get_key_locked(pool, key); - read_unlock_bh(&pool->pool_lock); - - return obj; -} diff --git a/drivers/infiniband/sw/rxe/rxe_pool.h b/drivers/infiniband/sw/rxe/rxe_pool.h index 511f81554fd1..b6de415e10d2 100644 --- a/drivers/infiniband/sw/rxe/rxe_pool.h +++ b/drivers/infiniband/sw/rxe/rxe_pool.h @@ -9,7 +9,6 @@ enum rxe_pool_flags { RXE_POOL_INDEX = BIT(1), - RXE_POOL_KEY = BIT(2), RXE_POOL_NO_ALLOC = BIT(4), }; @@ -32,9 +31,6 @@ struct rxe_pool_elem { struct kref ref_cnt; struct list_head list; - /* only used if keyed */ - struct rb_node key_node; - /* only used if indexed */ struct rb_node index_node; u32 index; @@ -61,13 +57,6 @@ struct rxe_pool { u32 max_index; u32 min_index; } index; - - /* only used if keyed */ - struct { - struct rb_root tree; - size_t key_offset; - size_t key_size; - } key; }; /* initialize a pool of objects with given limit on @@ -112,26 +101,6 @@ void __rxe_drop_index(struct rxe_pool_elem *elem); #define rxe_drop_index(obj) __rxe_drop_index(&(obj)->elem) -/* assign a key to a keyed object and insert object into - * pool's rb tree holding and not holding pool_lock - */ -int __rxe_add_key_locked(struct rxe_pool_elem *elem, void *key); - -#define rxe_add_key_locked(obj, key) __rxe_add_key_locked(&(obj)->elem, key) - -int __rxe_add_key(struct rxe_pool_elem *elem, void *key); - -#define rxe_add_key(obj, key) __rxe_add_key(&(obj)->elem, key) - -/* remove elem from rb tree holding and not holding the pool_lock */ -void __rxe_drop_key_locked(struct rxe_pool_elem *elem); - -#define rxe_drop_key_locked(obj) __rxe_drop_key_locked(&(obj)->elem) - -void __rxe_drop_key(struct rxe_pool_elem *elem); - -#define rxe_drop_key(obj) __rxe_drop_key(&(obj)->elem) - /* lookup an indexed object from index holding and not holding the pool_lock. * takes a reference on object */ @@ -139,13 +108,6 @@ void *rxe_pool_get_index_locked(struct rxe_pool *pool, u32 index); void *rxe_pool_get_index(struct rxe_pool *pool, u32 index); -/* lookup keyed object from key holding and not holding the pool_lock. 
- * takes a reference on the objecti - */ -void *rxe_pool_get_key_locked(struct rxe_pool *pool, void *key); - -void *rxe_pool_get_key(struct rxe_pool *pool, void *key); - /* cleanup an object when all references are dropped */ void rxe_elem_release(struct kref *kref); From patchwork Mon Jan 31 22:08:47 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bob Pearson X-Patchwork-Id: 12731272 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 05613C4332F for ; Mon, 31 Jan 2022 22:10:19 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231435AbiAaWKR (ORCPT ); Mon, 31 Jan 2022 17:10:17 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58986 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231932AbiAaWKQ (ORCPT ); Mon, 31 Jan 2022 17:10:16 -0500 Received: from mail-oi1-x22a.google.com (mail-oi1-x22a.google.com [IPv6:2607:f8b0:4864:20::22a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 06FE5C061714 for ; Mon, 31 Jan 2022 14:10:16 -0800 (PST) Received: by mail-oi1-x22a.google.com with SMTP id e81so29570043oia.6 for ; Mon, 31 Jan 2022 14:10:16 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=BIc5VDJc+DCdBKA/jf3bVYVvCkoKJbmsqCkrUFi/7bM=; b=TonqHlriJKe4oDZJHsmnCtpydvo/cA86EHlA0vhbWzuKxiyFxH8/IpgKf8UTqFnqjm Q3bnSi0HZp0mGdP+pfc0WctZeSVMbpF01PaCDjxFGHtny5jGE9p/VYA8mMyOLhFu79tp nEOeCnx3W5b+3ktrQdh/wng/q93gceKTIcUUJE58cxG/uTxu0TbO6z8FyK+CJk5qk0QD eAwA/eKZkzNGA3LsBtZT3saqBO8KVGnACAlO/3jIlsN7NmrUiNhHJpYtIceJ1imSLpLp XdfUna9oBX0sm4WLbFieARggcQKMXmjWhuinV3NLCRwMbcwj5TOfO6iP7dEzMgT15cKg PYxQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=BIc5VDJc+DCdBKA/jf3bVYVvCkoKJbmsqCkrUFi/7bM=; b=PWq2gUYRs0Nt6jKBdYNcjkI/IBVPoYpR39zIHX0tLoeK3yaHyCE5b2qXNRbBTTiXuy rHyOCevMiHXu8FIqLRL8IAkEOZEBhhmzYqgTeZqmDLcmwUzBR0j3CoQ1MWCmPzC2ttSu Lz4vFRkfcb/KUv+OxuehDUorS0JKfwNnnnR/NFOzK5K76DGO9VkmqyAq9a9wy0sAow2J TLibL1va8xRNKTKTDUWfiBtWGTjvWYWRbpmMcfF2ebcBwNqLz/qGVGLEDTphNiGlbLMc 785j0R0+4elxRtFpfWErcR7LhO9Bt7OjQQNfYWcWDiOfId3a5FkM3w0h6RMgkEG4PzV5 508g== X-Gm-Message-State: AOAM5313vI49li1tpn1tHzjfEEpXZGtjgSF3wUIhQJWPFOgkR3Bgv2ZN ZbtuVbxnouFwO6aebpiOqqs= X-Google-Smtp-Source: ABdhPJwXAIQIHwU/RFm4yFLS260blc/ErQeQQUxTAEk0GMLPofylpM5hJr1BuguRLCbU8kUfOPaQnw== X-Received: by 2002:a05:6808:189d:: with SMTP id bi29mr18752800oib.68.1643667015383; Mon, 31 Jan 2022 14:10:15 -0800 (PST) Received: from ubuntu-21.tx.rr.com (2603-8081-140c-1a00-5c63-4cee-84ac-42bc.res6.spectrum.com. 
[2603:8081:140c:1a00:5c63:4cee:84ac:42bc]) by smtp.googlemail.com with ESMTPSA id t21sm8304929otq.81.2022.01.31.14.10.14 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 31 Jan 2022 14:10:15 -0800 (PST) From: Bob Pearson To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org Cc: Bob Pearson Subject: [PATCH for-next v10 14/17] RDMA/rxe: Remove mcg from rxe pools Date: Mon, 31 Jan 2022 16:08:47 -0600 Message-Id: <20220131220849.10170-15-rpearsonhpe@gmail.com> X-Mailer: git-send-email 2.32.0 In-Reply-To: <20220131220849.10170-1-rpearsonhpe@gmail.com> References: <20220131220849.10170-1-rpearsonhpe@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org Finish removing mcg from rxe pools. Replace rxe pools ref counting by kref's. Replace rxe_alloc by kzalloc. Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe.c | 8 --- drivers/infiniband/sw/rxe/rxe_loc.h | 1 + drivers/infiniband/sw/rxe/rxe_mcast.c | 91 ++++++++++++++++----------- drivers/infiniband/sw/rxe/rxe_pool.c | 5 -- drivers/infiniband/sw/rxe/rxe_pool.h | 1 - drivers/infiniband/sw/rxe/rxe_recv.c | 4 +- drivers/infiniband/sw/rxe/rxe_verbs.h | 4 +- 7 files changed, 59 insertions(+), 55 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe.c b/drivers/infiniband/sw/rxe/rxe.c index 310e184ae9e8..c560d467a972 100644 --- a/drivers/infiniband/sw/rxe/rxe.c +++ b/drivers/infiniband/sw/rxe/rxe.c @@ -28,7 +28,6 @@ void rxe_dealloc(struct ib_device *ib_dev) rxe_pool_cleanup(&rxe->cq_pool); rxe_pool_cleanup(&rxe->mr_pool); rxe_pool_cleanup(&rxe->mw_pool); - rxe_pool_cleanup(&rxe->mc_grp_pool); if (rxe->tfm) crypto_free_shash(rxe->tfm); @@ -157,15 +156,8 @@ static int rxe_init_pools(struct rxe_dev *rxe) if (err) goto err8; - err = rxe_pool_init(rxe, &rxe->mc_grp_pool, RXE_TYPE_MC_GRP, - rxe->attr.max_mcast_grp); - if (err) - goto err9; - return 0; -err9: - rxe_pool_cleanup(&rxe->mw_pool); err8: rxe_pool_cleanup(&rxe->mr_pool); err7: diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h index bd701af7758c..409efeecd581 100644 --- a/drivers/infiniband/sw/rxe/rxe_loc.h +++ b/drivers/infiniband/sw/rxe/rxe_loc.h @@ -43,6 +43,7 @@ void rxe_cq_cleanup(struct rxe_pool_elem *arg); struct rxe_mcg *rxe_lookup_mcg(struct rxe_dev *rxe, union ib_gid *mgid); int rxe_attach_mcast(struct ib_qp *ibqp, union ib_gid *mgid, u16 mlid); int rxe_detach_mcast(struct ib_qp *ibqp, union ib_gid *mgid, u16 mlid); +void rxe_cleanup_mcg(struct kref *kref); /* rxe_mmap.c */ struct rxe_mmap_info { diff --git a/drivers/infiniband/sw/rxe/rxe_mcast.c b/drivers/infiniband/sw/rxe/rxe_mcast.c index 82669d14d8a9..ed23d0a270fd 100644 --- a/drivers/infiniband/sw/rxe/rxe_mcast.c +++ b/drivers/infiniband/sw/rxe/rxe_mcast.c @@ -98,7 +98,7 @@ static struct rxe_mcg *__rxe_lookup_mcg(struct rxe_dev *rxe, } if (node) { - rxe_add_ref(mcg); + kref_get(&mcg->ref_cnt); return mcg; } @@ -141,11 +141,13 @@ static int __rxe_init_mcg(struct rxe_dev *rxe, union ib_gid *mgid, if (unlikely(err)) return err; + kref_init(&mcg->ref_cnt); memcpy(&mcg->mgid, mgid, sizeof(mcg->mgid)); INIT_LIST_HEAD(&mcg->qp_list); mcg->rxe = rxe; + mcg->index = rxe->mcg_next++; - rxe_add_ref(mcg); + kref_get(&mcg->ref_cnt); __rxe_insert_mcg(mcg); @@ -163,7 +165,6 @@ static int __rxe_init_mcg(struct rxe_dev *rxe, union ib_gid *mgid, static int rxe_get_mcg(struct rxe_dev *rxe, union ib_gid *mgid, struct rxe_mcg **mcgp) { - struct rxe_pool *pool = &rxe->mc_grp_pool; struct rxe_mcg *mcg, *tmp; int err; @@ -178,7 +179,7 
+179,7 @@ static int rxe_get_mcg(struct rxe_dev *rxe, union ib_gid *mgid, } /* speculative alloc of mcg */ - mcg = rxe_alloc(pool); + mcg = kzalloc(sizeof(*mcg), GFP_KERNEL); if (!mcg) return -ENOMEM; @@ -186,7 +187,7 @@ static int rxe_get_mcg(struct rxe_dev *rxe, union ib_gid *mgid, /* re-check to see if someone else just added it */ tmp = __rxe_lookup_mcg(rxe, mgid); if (tmp) { - rxe_drop_ref(mcg); + kfree(mcg); mcg = tmp; goto out; } @@ -206,10 +207,53 @@ static int rxe_get_mcg(struct rxe_dev *rxe, union ib_gid *mgid, err_dec: atomic_dec(&rxe->mcg_num); spin_unlock_bh(&rxe->mcg_lock); - rxe_drop_ref(mcg); + kfree(mcg); return err; } +/** + * rxe_cleanup_mcg - cleanup mcg for kref_put + * @kref: kref embedded in the mcg object + * + * caller may or may not hold rxe->mcg_lock + */ +void rxe_cleanup_mcg(struct kref *kref) +{ + struct rxe_mcg *mcg = container_of(kref, typeof(*mcg), ref_cnt); + + kfree(mcg); +} + +/** + * __rxe_destroy_mcg - destroy mcg object holding rxe->mcg_lock + * @mcg: the mcg object + * + * Context: caller is holding rxe->mcg_lock, no qp's are attached to mcg + */ +void __rxe_destroy_mcg(struct rxe_mcg *mcg) +{ + struct rxe_dev *rxe = mcg->rxe; + + __rxe_remove_mcg(mcg); + kref_put(&mcg->ref_cnt, rxe_cleanup_mcg); + + rxe_mcast_delete(rxe, &mcg->mgid); + atomic_dec(&rxe->mcg_num); +} + +/** + * rxe_destroy_mcg - destroy mcg object + * @mcg: the mcg object + * + * Context: no qp's are attached to mcg + */ +static void rxe_destroy_mcg(struct rxe_mcg *mcg) +{ + spin_lock_bh(&mcg->rxe->mcg_lock); + __rxe_destroy_mcg(mcg); + spin_unlock_bh(&mcg->rxe->mcg_lock); +} + static int rxe_mcast_add_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp, struct rxe_mcg *mcg) { @@ -259,35 +303,6 @@ static int rxe_mcast_add_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp, return err; } -/** - * __rxe_destroy_mcg - destroy mcg object holding rxe->mcg_lock - * @mcg: the mcg object - * - * Context: caller is holding rxe->mcg_lock, all refs to mcg are dropped - * no qp's are attached to mcg - */ -void __rxe_destroy_mcg(struct rxe_mcg *mcg) -{ - __rxe_remove_mcg(mcg); - - rxe_drop_ref(mcg); - - rxe_mcast_delete(mcg->rxe, &mcg->mgid); -} - -/** - * rxe_destroy_mcg - destroy mcg object - * @mcg: the mcg object - * - * Context: all refs to mcg are dropped, no qp's are attached to mcg - */ -static void rxe_destroy_mcg(struct rxe_mcg *mcg) -{ - spin_lock_bh(&mcg->rxe->mcg_lock); - __rxe_destroy_mcg(mcg); - spin_unlock_bh(&mcg->rxe->mcg_lock); -} - static int rxe_mcast_drop_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp, union ib_gid *mgid) { @@ -308,14 +323,14 @@ static int rxe_mcast_drop_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp, atomic_dec(&qp->mcg_num); spin_unlock_bh(&rxe->mcg_lock); - rxe_drop_ref(mcg); + kref_put(&mcg->ref_cnt, rxe_cleanup_mcg); kfree(mca); return 0; } } spin_unlock_bh(&rxe->mcg_lock); - rxe_drop_ref(mcg); + kref_put(&mcg->ref_cnt, rxe_cleanup_mcg); err1: return -EINVAL; } @@ -336,7 +351,7 @@ int rxe_attach_mcast(struct ib_qp *ibqp, union ib_gid *mgid, u16 mlid) if (atomic_read(&mcg->qp_num) == 0) rxe_destroy_mcg(mcg); - rxe_drop_ref(mcg); + kref_put(&mcg->ref_cnt, rxe_cleanup_mcg); return err; } diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c index fe0fcca47d8d..b6fe7c93aaab 100644 --- a/drivers/infiniband/sw/rxe/rxe_pool.c +++ b/drivers/infiniband/sw/rxe/rxe_pool.c @@ -79,11 +79,6 @@ static const struct rxe_type_info { .min_index = RXE_MIN_MW_INDEX, .max_index = RXE_MAX_MW_INDEX, }, - [RXE_TYPE_MC_GRP] = { - .name = "rxe-mcg", - .size = sizeof(struct rxe_mcg), -
.elem_offset = offsetof(struct rxe_mcg, elem), - }, }; static int rxe_pool_init_index(struct rxe_pool *pool, u32 max, u32 min) diff --git a/drivers/infiniband/sw/rxe/rxe_pool.h b/drivers/infiniband/sw/rxe/rxe_pool.h index b6de415e10d2..99b1eb04b405 100644 --- a/drivers/infiniband/sw/rxe/rxe_pool.h +++ b/drivers/infiniband/sw/rxe/rxe_pool.h @@ -21,7 +21,6 @@ enum rxe_elem_type { RXE_TYPE_CQ, RXE_TYPE_MR, RXE_TYPE_MW, - RXE_TYPE_MC_GRP, RXE_NUM_TYPES, /* keep me last */ }; diff --git a/drivers/infiniband/sw/rxe/rxe_recv.c b/drivers/infiniband/sw/rxe/rxe_recv.c index 9a92e5a486ee..04fe0cd36d6c 100644 --- a/drivers/infiniband/sw/rxe/rxe_recv.c +++ b/drivers/infiniband/sw/rxe/rxe_recv.c @@ -275,6 +275,8 @@ static void rxe_rcv_mcast_pkt(struct sk_buff *skb) break; } spin_unlock_bh(&rxe->mcg_lock); + kref_put(&mcg->ref_cnt, rxe_cleanup_mcg); + nmax = n; /* this is unreliable datagram service so we let @@ -320,8 +322,6 @@ static void rxe_rcv_mcast_pkt(struct sk_buff *skb) kfree(qp_array); - rxe_drop_ref(mcg); - if (likely(!skb)) return; diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h index ea2d9ff29744..97d3a59e5c6f 100644 --- a/drivers/infiniband/sw/rxe/rxe_verbs.h +++ b/drivers/infiniband/sw/rxe/rxe_verbs.h @@ -352,12 +352,13 @@ struct rxe_mw { }; struct rxe_mcg { - struct rxe_pool_elem elem; struct rb_node node; + struct kref ref_cnt; struct rxe_dev *rxe; struct list_head qp_list; atomic_t qp_num; union ib_gid mgid; + unsigned int index; u32 qkey; u16 pkey; }; @@ -400,6 +401,7 @@ struct rxe_dev { spinlock_t mcg_lock; /* guard multicast groups */ struct rb_root mcg_tree; atomic_t mcg_num; + unsigned int mcg_next; spinlock_t pending_lock; /* guard pending_mmaps */ struct list_head pending_mmaps; From patchwork Mon Jan 31 22:08:48 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bob Pearson X-Patchwork-Id: 12731270 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 22AB4C433F5 for ; Mon, 31 Jan 2022 22:10:19 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231759AbiAaWKS (ORCPT ); Mon, 31 Jan 2022 17:10:18 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58990 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231733AbiAaWKR (ORCPT ); Mon, 31 Jan 2022 17:10:17 -0500 Received: from mail-ot1-x32f.google.com (mail-ot1-x32f.google.com [IPv6:2607:f8b0:4864:20::32f]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id BB853C061714 for ; Mon, 31 Jan 2022 14:10:16 -0800 (PST) Received: by mail-ot1-x32f.google.com with SMTP id o9-20020a9d7189000000b0059ee49b4f0fso14433696otj.2 for ; Mon, 31 Jan 2022 14:10:16 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=M22qyIuNeSQMVHPFilxhiSz2IK2SLwIzs2Vwzwb7EBA=; b=eQ3qWzVz6o5IeBybLGjUrba+bxsqWQ6IB2pHvpW3A2h9N2sBc7mZWu7Hkh55/BsDy3 YYeCvSmt0q4K1MbUfBoOn54i02iQPEZKduZxkpaQpfA9CzEx9A0FxaWdwU5Iv/0/Vvac Gqr0r/mBkFSrdR9tPdxqcxXffYKk9uFDwdS/wyBq2YC20Zc6qJgLnlmAvhAZkbI/Lj67 Lo84q1JOOoGoJpOcEf8Xinad2tW8c/mtxlngVPWdpJfg/YO2rVarWlc8koQ7iB1k9+GE p5GqSYHc4UJswV1tojApWxTbSwhry8C0Rm1ZoBJsM4Gw5ki5hyIaVrZ4tkUF6sNdp20q 4BtA== X-Google-DKIM-Signature: v=1; 
a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=M22qyIuNeSQMVHPFilxhiSz2IK2SLwIzs2Vwzwb7EBA=; b=B11aEoWssWUxX1FhJX2zTEFZ512CO+dVVZSf/WU/tVE++cYom2xJi/dN3Fjgdzwl4e eY3h3txg3PU9DGnWVdirWBf7wING5FvUpH1mnD15ToPm0Suz1nwK0Jvd6qWGrJLFtz7x EnFH0q/E9dtgNq3m1tylmGEi/MbGNxFG0Mw2lsWa1vCRfNQ+YgB/m+ciUqspVNeYEGQW 3Y5EjUJfHCz9pdTU9at9E8yxQfIL0MGCvir5TFU09ttRM+vdKA40ehQTxAbNtwurkzyq TppI5xbNi7BxR/1dA+wV+ItsHQe+ihNiWJziZy7W7UlnmZe4pB/fcyFqH6iY/UzFKOcJ 0Wew== X-Gm-Message-State: AOAM530KxZKcIz+ChwJrvk0+Vxpk1XItp4chADsn8erM9gdzbgq/tmCE v6Q9hQNQ2kLMsB77pLrpy+cmaBKoyhU= X-Google-Smtp-Source: ABdhPJyA7pXVeeMrhykaSWe4KmOkPHDBjyhASix/Khz2TsR/zEEnE+UHGkN1Cbn+LNEH80uquvkALA== X-Received: by 2002:a9d:7ad7:: with SMTP id m23mr12444769otn.20.1643667016158; Mon, 31 Jan 2022 14:10:16 -0800 (PST) Received: from ubuntu-21.tx.rr.com (2603-8081-140c-1a00-5c63-4cee-84ac-42bc.res6.spectrum.com. [2603:8081:140c:1a00:5c63:4cee:84ac:42bc]) by smtp.googlemail.com with ESMTPSA id t21sm8304929otq.81.2022.01.31.14.10.15 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 31 Jan 2022 14:10:15 -0800 (PST) From: Bob Pearson To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org Cc: Bob Pearson Subject: [PATCH for-next v10 15/17] RDMA/rxe: Add code to cleanup mcast memory Date: Mon, 31 Jan 2022 16:08:48 -0600 Message-Id: <20220131220849.10170-16-rpearsonhpe@gmail.com> X-Mailer: git-send-email 2.32.0 In-Reply-To: <20220131220849.10170-1-rpearsonhpe@gmail.com> References: <20220131220849.10170-1-rpearsonhpe@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org Well behaved applications will free all memory allocated by multicast but programs which do not clean up properly can leave behind allocated memory when the rxe driver is unloaded. This patch walks the red-black tree holding multicast group elements and then walks the list of attached qp's freeing the mca's and finally the mcg's. 
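Patch 14 above moved mcg's from rxe pool reference counting to krefs, and the teardown below frees any stragglers directly with kfree(). For reference, a minimal sketch of the kref lifecycle that conversion relies on; struct foo and the foo_* helpers are illustrative names, not rxe code:

	#include <linux/kref.h>
	#include <linux/slab.h>

	struct foo {
		struct kref ref_cnt;
		/* payload ... */
	};

	/* release callback, runs when the last reference is dropped */
	static void foo_release(struct kref *kref)
	{
		struct foo *f = container_of(kref, struct foo, ref_cnt);

		kfree(f);
	}

	static struct foo *foo_alloc(void)
	{
		struct foo *f = kzalloc(sizeof(*f), GFP_KERNEL);

		if (f)
			kref_init(&f->ref_cnt);		/* count starts at 1 */

		return f;
	}

	static void foo_use(struct foo *f)
	{
		kref_get(&f->ref_cnt);			/* one get per user ... */
		/* ... use f ... */
		kref_put(&f->ref_cnt, foo_release);	/* ... matched by one put */
	}
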
Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe.c | 2 ++ drivers/infiniband/sw/rxe/rxe_loc.h | 1 + drivers/infiniband/sw/rxe/rxe_mcast.c | 31 +++++++++++++++++++++++++++ 3 files changed, 34 insertions(+) diff --git a/drivers/infiniband/sw/rxe/rxe.c b/drivers/infiniband/sw/rxe/rxe.c index c560d467a972..74c5521e9b3d 100644 --- a/drivers/infiniband/sw/rxe/rxe.c +++ b/drivers/infiniband/sw/rxe/rxe.c @@ -29,6 +29,8 @@ void rxe_dealloc(struct ib_device *ib_dev) rxe_pool_cleanup(&rxe->mr_pool); rxe_pool_cleanup(&rxe->mw_pool); + rxe_cleanup_mcast(rxe); + if (rxe->tfm) crypto_free_shash(rxe->tfm); } diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h index 409efeecd581..0bc1b7e2877c 100644 --- a/drivers/infiniband/sw/rxe/rxe_loc.h +++ b/drivers/infiniband/sw/rxe/rxe_loc.h @@ -44,6 +44,7 @@ struct rxe_mcg *rxe_lookup_mcg(struct rxe_dev *rxe, union ib_gid *mgid); int rxe_attach_mcast(struct ib_qp *ibqp, union ib_gid *mgid, u16 mlid); int rxe_detach_mcast(struct ib_qp *ibqp, union ib_gid *mgid, u16 mlid); void rxe_cleanup_mcg(struct kref *kref); +void rxe_cleanup_mcast(struct rxe_dev *rxe); /* rxe_mmap.c */ struct rxe_mmap_info { diff --git a/drivers/infiniband/sw/rxe/rxe_mcast.c b/drivers/infiniband/sw/rxe/rxe_mcast.c index ed23d0a270fd..00b4e3046d39 100644 --- a/drivers/infiniband/sw/rxe/rxe_mcast.c +++ b/drivers/infiniband/sw/rxe/rxe_mcast.c @@ -362,3 +362,34 @@ int rxe_detach_mcast(struct ib_qp *ibqp, union ib_gid *mgid, u16 mlid) return rxe_mcast_drop_grp_elem(rxe, qp, mgid); } + +/** + * rxe_cleanup_mcast - cleanup all resources held by mcast + * @rxe: rxe device object + * + * Called when the rxe device is unloaded. Walk the red-black tree to + * find all mcg's, then walk each mcg->qp_list to find all mca's, and + * free them. These should already have been freed if applications are + * well behaved.
+ */ +void rxe_cleanup_mcast(struct rxe_dev *rxe) +{ + struct rb_root *root = &rxe->mcg_tree; + struct rb_node *node, *next; + struct rxe_mcg *mcg; + struct rxe_mca *mca, *tmp; + + for (node = rb_first(root); node; node = next) { + next = rb_next(node); + mcg = rb_entry(node, typeof(*mcg), node); + + spin_lock_bh(&rxe->mcg_lock); + list_for_each_entry_safe(mca, tmp, &mcg->qp_list, qp_list) + kfree(mca); + + __rxe_remove_mcg(mcg); + spin_unlock_bh(&rxe->mcg_lock); + + kfree(mcg); + } +} From patchwork Mon Jan 31 22:08:49 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bob Pearson X-Patchwork-Id: 12731269 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id A7A73C43217 for ; Mon, 31 Jan 2022 22:10:19 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231803AbiAaWKS (ORCPT ); Mon, 31 Jan 2022 17:10:18 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58994 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231627AbiAaWKR (ORCPT ); Mon, 31 Jan 2022 17:10:17 -0500 Received: from mail-ot1-x331.google.com (mail-ot1-x331.google.com [IPv6:2607:f8b0:4864:20::331]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 846C0C06173D for ; Mon, 31 Jan 2022 14:10:17 -0800 (PST) Received: by mail-ot1-x331.google.com with SMTP id o9-20020a9d7189000000b0059ee49b4f0fso14433724otj.2 for ; Mon, 31 Jan 2022 14:10:17 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=ND85foxjmkXjnsWGNjGYV2h8qNK61stwtqXRrzd7Kv4=; b=gIoBekliG6ITOmVi8e89Krw6sSsSh+BciQZg9FmjmYd4XNFWEVTD83q9uSKRNMWufS +T9RFqpwA8OIoJ4s1+Yk2H3j9zXQ8I9JYJf0DnxHd3KdiDnSVWEwTwS5NmTR4M7tyKfv gRmIv+TVmkcOVIfk3I1PE1UWCEt3H1sJE/2KEWhJBwJ9pCddYVWNvFOx3Q/Xl6sZWxSM we0ksivFCBm+ID3swWeHMGDcKV+SwXQYUS1Sb2ysd901Zm5YmxVlUccDpIHyPXzJ6F3i hUSFmMWh/Zk9bFIAdnJJKhOYJajJCJTz3IcI7Po9EeqaQN87RX3c+Xdpfb5w44ssgoOG qvKA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=ND85foxjmkXjnsWGNjGYV2h8qNK61stwtqXRrzd7Kv4=; b=V3+9EfRKhh1/EnVWN0CwspNPMdr/9OhRumBp0J22UeNQOO6zJUt1Qt9H5kVph1w+2l DbeAb9DEexmVbAQQarfy2HeKrlm0iBSuOLxjI22yClpYLDUjCUMnb7pjM36ysCrfjCNn 0rUxqEspQVZjAvTx7q1d+f78ZhP60vQrxa444oBs06y4L1fj5fYUDZVFjylWPlFqrmgM V2vcsxrQ+7KIiV2EkLBOd1A2eDVT1OBXl0+UTGlvu86F9h7SbV3+44P59f3QEzEhPpTY Tt/HXL3YDT9KD4pNjgH1Gw4rsa6T+xWXP734GRJg1mIIi/94zumggm3L52jlj93/1krV K6ew== X-Gm-Message-State: AOAM532p0d0IjTmHVjYWWVPQQ6Pwkd8TGMtDte0zAI6wonN/s+mT/Fpe B38LQ1iKpS5yBDBemWPAVIPoaHQFKmU= X-Google-Smtp-Source: ABdhPJw/A/FHXzpfITAJ7jl57iHLxrKI6r0TZ3KvJYnAVNyoWqwcvGrpG7qr+2jpQKou9tOrjkAp4w== X-Received: by 2002:a05:6830:13cd:: with SMTP id e13mr12771141otq.193.1643667016916; Mon, 31 Jan 2022 14:10:16 -0800 (PST) Received: from ubuntu-21.tx.rr.com (2603-8081-140c-1a00-5c63-4cee-84ac-42bc.res6.spectrum.com. 
[2603:8081:140c:1a00:5c63:4cee:84ac:42bc]) by smtp.googlemail.com with ESMTPSA id t21sm8304929otq.81.2022.01.31.14.10.16 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 31 Jan 2022 14:10:16 -0800 (PST) From: Bob Pearson To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org Cc: Bob Pearson Subject: [PATCH for-next v10 16/17] RDMA/rxe: Add comments to rxe_mcast.c Date: Mon, 31 Jan 2022 16:08:49 -0600 Message-Id: <20220131220849.10170-17-rpearsonhpe@gmail.com> X-Mailer: git-send-email 2.32.0 In-Reply-To: <20220131220849.10170-1-rpearsonhpe@gmail.com> References: <20220131220849.10170-1-rpearsonhpe@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org Add comments to rxe_mcast.c. Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe_mcast.c | 30 ++++++++++++++++++++++++++- drivers/infiniband/sw/rxe/rxe_verbs.h | 2 +- 2 files changed, 30 insertions(+), 2 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe_mcast.c b/drivers/infiniband/sw/rxe/rxe_mcast.c index 00b4e3046d39..2fccf69f9a4b 100644 --- a/drivers/infiniband/sw/rxe/rxe_mcast.c +++ b/drivers/infiniband/sw/rxe/rxe_mcast.c @@ -1,12 +1,33 @@ // SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB /* + * Copyright (c) 2022 Hewlett Packard Enterprise, Inc. All rights reserved. * Copyright (c) 2016 Mellanox Technologies Ltd. All rights reserved. * Copyright (c) 2015 System Fabric Works, Inc. All rights reserved. */ +/* + * rxe_mcast.c implements driver support for multicast transport. + * It is based on two data structures: struct rxe_mcg ('mcg') and + * struct rxe_mca ('mca'). An mcg is allocated the first time a qp + * is attached to a new mgid. The mcg's are indexed by a red-black + * tree keyed on the mgid. The tree is searched for the mcg when a + * multicast packet is received and when another qp is attached to + * the same mgid, and the mcg is cleaned up when the last qp is + * detached from it. Each time a qp is attached to an mcg an mca is + * created. It holds a pointer to the qp and is added to the list of + * qp's attached to the mcg. This qp_list is used to replicate + * mcast packets in the rxe receive path.
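Behind rxe_mcast_add()/rxe_mcast_delete(), documented just below, ipv6_eth_mc_map() derives the Ethernet multicast address from the mgid using the standard RFC 2464 mapping: the fixed prefix 33:33 followed by the low 32 bits of the address. A small userspace sketch of the same mapping; the helper name and the example GID are illustrative:

	#include <stdint.h>
	#include <stdio.h>
	#include <string.h>

	/* mirrors what the kernel's ipv6_eth_mc_map() computes:
	 * 33:33 plus the last four bytes of the IPv6 address
	 */
	static void gid_to_eth_mc(const uint8_t gid[16], uint8_t mac[6])
	{
		mac[0] = 0x33;
		mac[1] = 0x33;
		memcpy(&mac[2], &gid[12], 4);
	}

	int main(void)
	{
		/* example GID ff0e::0101 (illustrative value) */
		uint8_t gid[16] = { 0xff, 0x0e, [14] = 0x01, [15] = 0x01 };
		uint8_t mac[6];

		gid_to_eth_mc(gid, mac);
		printf("%02x:%02x:%02x:%02x:%02x:%02x\n",
		       mac[0], mac[1], mac[2], mac[3], mac[4], mac[5]);
		return 0;
	}

Running this prints 33:33:00:00:01:01, which is the address dev_mc_add()/dev_mc_del() would install on or remove from the underlying netdev.
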
+ */ + #include "rxe.h" -#include "rxe_loc.h" +/** + * rxe_mcast_add - add multicast address to rxe device + * @rxe: rxe device object + * @mgid: multicast address as a gid + * + * Returns 0 on success else an error + */ static int rxe_mcast_add(struct rxe_dev *rxe, union ib_gid *mgid) { unsigned char ll_addr[ETH_ALEN]; @@ -16,6 +37,13 @@ static int rxe_mcast_add(struct rxe_dev *rxe, union ib_gid *mgid) return dev_mc_add(rxe->ndev, ll_addr); } +/** + * rxe_mcast_delete - delete multicast address from rxe device + * @rxe: rxe device object + * @mgid: multicast address as a gid + * + * Returns 0 on success else an error + */ static int rxe_mcast_delete(struct rxe_dev *rxe, union ib_gid *mgid) { unsigned char ll_addr[ETH_ALEN]; diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h index 97d3a59e5c6f..72a913a8e0cb 100644 --- a/drivers/infiniband/sw/rxe/rxe_verbs.h +++ b/drivers/infiniband/sw/rxe/rxe_verbs.h @@ -358,7 +358,7 @@ struct rxe_mcg { struct list_head qp_list; atomic_t qp_num; union ib_gid mgid; - unsigned int index; + unsigned int index; /* debugging help */ u32 qkey; u16 pkey; }; From patchwork Mon Jan 31 22:08:50 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bob Pearson X-Patchwork-Id: 12731271 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 05570C43219 for ; Mon, 31 Jan 2022 22:10:20 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231960AbiAaWKT (ORCPT ); Mon, 31 Jan 2022 17:10:19 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:59000 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231771AbiAaWKS (ORCPT ); Mon, 31 Jan 2022 17:10:18 -0500 Received: from mail-ot1-x336.google.com (mail-ot1-x336.google.com [IPv6:2607:f8b0:4864:20::336]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 59BE7C061714 for ; Mon, 31 Jan 2022 14:10:18 -0800 (PST) Received: by mail-ot1-x336.google.com with SMTP id x52-20020a05683040b400b0059ea92202daso14407008ott.7 for ; Mon, 31 Jan 2022 14:10:18 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=cjZScGIBobsFfva4tVaWlGYA7Ed52AsbUVIfIYf+mXo=; b=dHq37mSidr2th7EKtwDO1F/m2aehyWPlTNNdd51j7lIuO+Lp1Me+OPVgyLo4zxWndq rjh/7gyY7z0t7oBFYinakAi+3RNHpUFHwbU6osTONnr1nHjfHqEArokN+sEk1azbDM/D 6dVf6fKLlrgoJcUQtZ3TejIVujCfFc7VglLBpwzGuvGpL7xHHD2Vt7nd2vfwxPuMBJ8A TqHUCDp7uV7OYeyo0Kxf2mM8wl0VmZhogOdOVJps7NcdC4m366mSaV+LfPdRGiTYXBFV +YC0eVCGEjqoU9VZG9tXgmQX4uGWeX2H4+uLg6/xYZQYUyxy0hHmZljiYj5MivHH8l5e NNHw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=cjZScGIBobsFfva4tVaWlGYA7Ed52AsbUVIfIYf+mXo=; b=HxC25+B0kZ4TYJPO+CBUETaCBe9uV2ZUggtFGjyy81XGISA7fn6RCaAah6UM4hjQYH j7ubUbS9+/fA8TBHzEH710dreaqfEBdKFO7u0pJcmjM5IskNk4up3ixa3oIX872wMPnC 3kLmSUEAUZRmZxekJ7CMrB+YMGgOPU/9etzEUCBuSk7I6FOcNOR/X9PsjtKqUYouy6DD UJ/Kb1AUGHVqigsinjuvYvin72rxQXfmw/Caf6zlS9V4Su6CUEB9S2crDmP3cW1aHV5/ JbY9l3PB0F1wfeAnCPXYrD090Y0EOT6vPsg3lyF0M2EUeXfLDdd0ImlScg0j3vyXJaOl AejQ== X-Gm-Message-State: 
AOAM5327H0VzoDVhWb6o1JgATGsbJTrhtV2abdbcSt2M3AVARwACsf4C 7mZVpbmoXiCENRJWxMxe2mY= X-Google-Smtp-Source: ABdhPJylawVz/C5zoBa3vVSqkPuH0CpYhJX9mFLhvZ81wpYVOXyEWDHoHiI0HEPgb7Dnp93q/IGp3g== X-Received: by 2002:a05:6830:16cf:: with SMTP id l15mr12316326otr.378.1643667017671; Mon, 31 Jan 2022 14:10:17 -0800 (PST) Received: from ubuntu-21.tx.rr.com (2603-8081-140c-1a00-5c63-4cee-84ac-42bc.res6.spectrum.com. [2603:8081:140c:1a00:5c63:4cee:84ac:42bc]) by smtp.googlemail.com with ESMTPSA id t21sm8304929otq.81.2022.01.31.14.10.17 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 31 Jan 2022 14:10:17 -0800 (PST) From: Bob Pearson To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org Cc: Bob Pearson Subject: [PATCH for-next v10 17/17] RDMA/rxe: Finish cleanup of rxe_mcast.c Date: Mon, 31 Jan 2022 16:08:50 -0600 Message-Id: <20220131220849.10170-18-rpearsonhpe@gmail.com> X-Mailer: git-send-email 2.32.0 In-Reply-To: <20220131220849.10170-1-rpearsonhpe@gmail.com> References: <20220131220849.10170-1-rpearsonhpe@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org Cleanup rxe_mcast.c code. Minor changes and complete comments. Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe_mcast.c | 163 +++++++++++++++++++------- drivers/infiniband/sw/rxe/rxe_verbs.h | 1 + 2 files changed, 124 insertions(+), 40 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe_mcast.c b/drivers/infiniband/sw/rxe/rxe_mcast.c index 2fccf69f9a4b..2e5b41063f83 100644 --- a/drivers/infiniband/sw/rxe/rxe_mcast.c +++ b/drivers/infiniband/sw/rxe/rxe_mcast.c @@ -175,6 +175,7 @@ static int __rxe_init_mcg(struct rxe_dev *rxe, union ib_gid *mgid, mcg->rxe = rxe; mcg->index = rxe->mcg_next++; + /* take reference to protect pointer in red-black tree */ kref_get(&mcg->ref_cnt); __rxe_insert_mcg(mcg); @@ -263,6 +264,7 @@ void __rxe_destroy_mcg(struct rxe_mcg *mcg) struct rxe_dev *rxe = mcg->rxe; __rxe_remove_mcg(mcg); + /* drop reference that protected pointer in red-black tree */ kref_put(&mcg->ref_cnt, rxe_cleanup_mcg); rxe_mcast_delete(rxe, &mcg->mgid); @@ -282,11 +284,59 @@ static void rxe_destroy_mcg(struct rxe_mcg *mcg) spin_unlock_bh(&mcg->rxe->mcg_lock); } -static int rxe_mcast_add_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp, - struct rxe_mcg *mcg) +/** + * __rxe_init_mca - initialize a new mca holding lock + * @qp: qp object + * @mcg: mcg object + * @mca: empty space for new mca + * + * Context: caller must hold references on qp and mcg, rxe->mcg_lock + * and pass memory for new mca + * + * Returns: 0 on success else an error + */ +static int __rxe_init_mca(struct rxe_qp *qp, struct rxe_mcg *mcg, + struct rxe_mca *mca) +{ + struct rxe_dev *rxe = to_rdev(qp->ibqp.device); + int n; + + n = atomic_inc_return(&rxe->mcg_attach); + if (n > rxe->attr.max_total_mcast_qp_attach) { + atomic_dec(&rxe->mcg_attach); + return -ENOMEM; + } + + n = atomic_inc_return(&mcg->qp_num); + if (n > rxe->attr.max_mcast_qp_attach) { + atomic_dec(&mcg->qp_num); + atomic_dec(&rxe->mcg_attach); + return -ENOMEM; + } + + atomic_inc(&qp->mcg_num); + + rxe_add_ref(qp); + mca->qp = qp; + + list_add_tail(&mca->qp_list, &mcg->qp_list); + + return 0; +} + +/** + * rxe_attach_mcg - attach qp to mcg if not already attached + * @mcg: mcg object + * @qp: qp object + * + * Context: caller must hold reference on qp and mcg. 
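The limit checks in __rxe_init_mca() above are optimistic: the atomic counter is incremented first, and the increment is backed out if the cap was exceeded, so concurrent attachers can never overshoot unnoticed. A condensed sketch of that shape; take_attach_slot(), count and max are illustrative names, not rxe code:

	#include <linux/atomic.h>
	#include <linux/errno.h>

	/* optimistic limit enforcement: increment first, back off on overflow */
	static int take_attach_slot(atomic_t *count, int max)
	{
		if (atomic_inc_return(count) > max) {
			atomic_dec(count);
			return -ENOMEM;
		}

		return 0;
	}
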
+ * Returns: 0 on success else an error + */ +static int rxe_attach_mcg(struct rxe_mcg *mcg, struct rxe_qp *qp) { + struct rxe_dev *rxe = mcg->rxe; + struct rxe_mca *mca, *tmp; int err; - struct rxe_mca *mca, *new_mca; /* check to see if the qp is already a member of the group */ spin_lock_bh(&rxe->mcg_lock); @@ -298,71 +348,84 @@ static int rxe_mcast_add_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp, } spin_unlock_bh(&rxe->mcg_lock); - /* speculative alloc new mca without using GFP_ATOMIC */ - new_mca = kzalloc(sizeof(*mca), GFP_KERNEL); - if (!new_mca) + /* speculative alloc new mca */ + mca = kzalloc(sizeof(*mca), GFP_KERNEL); + if (!mca) return -ENOMEM; spin_lock_bh(&rxe->mcg_lock); /* re-check to see if someone else just attached qp */ - list_for_each_entry(mca, &mcg->qp_list, qp_list) { + list_for_each_entry(tmp, &mcg->qp_list, qp_list) { - if (mca->qp == qp) { + if (tmp->qp == qp) { - kfree(new_mca); + kfree(mca); err = 0; - goto out; + goto done; } } - mca = new_mca; - if (atomic_read(&mcg->qp_num) >= rxe->attr.max_mcast_qp_attach) { - err = -ENOMEM; - goto out; - } + err = __rxe_init_mca(qp, mcg, mca); + if (err) + kfree(mca); +done: + spin_unlock_bh(&rxe->mcg_lock); - atomic_inc(&mcg->qp_num); - mca->qp = qp; - atomic_inc(&qp->mcg_num); + return err; +} - list_add_tail(&mca->qp_list, &mcg->qp_list); +/** + * __rxe_cleanup_mca - cleanup mca object holding lock + * @mca: mca object + * @mcg: mcg object + * + * Context: caller must hold a reference to mcg and rxe->mcg_lock + */ +static void __rxe_cleanup_mca(struct rxe_mca *mca, struct rxe_mcg *mcg) +{ + list_del(&mca->qp_list); - err = 0; -out: - spin_unlock_bh(&rxe->mcg_lock); - return err; + rxe_drop_ref(mca->qp); + + atomic_dec(&mcg->qp_num); + atomic_dec(&mcg->rxe->mcg_attach); + atomic_dec(&mca->qp->mcg_num); } -static int rxe_mcast_drop_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp, - union ib_gid *mgid) +/** + * rxe_detach_mcg - detach qp from mcg + * @mcg: mcg object + * @qp: qp object + * + * Returns: 0 on success else an error if qp is not attached.
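rxe_attach_mcg() above keeps the "speculative allocation" shape also used by rxe_get_mcg(): allocate with GFP_KERNEL while the spinlock is dropped, retake the lock, and re-check for a racing inserter before committing. A condensed sketch of the shape; struct tbl, struct obj, obj_lookup() and obj_insert() are stand-in names:

	#include <linux/slab.h>
	#include <linux/spinlock.h>

	static struct obj *obj_get(struct tbl *t, u32 key)
	{
		struct obj *obj, *tmp;

		/* kzalloc(GFP_KERNEL) may sleep, so call it before locking */
		obj = kzalloc(sizeof(*obj), GFP_KERNEL);
		if (!obj)
			return NULL;

		spin_lock_bh(&t->lock);
		tmp = obj_lookup(t, key);	/* re-check for a racing insert */
		if (tmp) {
			kfree(obj);		/* lost the race; reuse winner's obj */
			obj = tmp;
		} else {
			obj_insert(t, key, obj);
		}
		spin_unlock_bh(&t->lock);

		return obj;
	}

The wasted allocation on the slow path is the price for never sleeping while holding the lock.
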
+ */ +static int rxe_detach_mcg(struct rxe_mcg *mcg, struct rxe_qp *qp) { - struct rxe_mcg *mcg; + struct rxe_dev *rxe = mcg->rxe; struct rxe_mca *mca, *tmp; - mcg = rxe_lookup_mcg(rxe, mgid); - if (!mcg) - goto err1; - spin_lock_bh(&rxe->mcg_lock); - list_for_each_entry_safe(mca, tmp, &mcg->qp_list, qp_list) { if (mca->qp == qp) { - list_del(&mca->qp_list); - if (atomic_dec_return(&mcg->qp_num) <= 0) + __rxe_cleanup_mca(mca, mcg); + if (atomic_read(&mcg->qp_num) <= 0) __rxe_destroy_mcg(mcg); - atomic_dec(&qp->mcg_num); - spin_unlock_bh(&rxe->mcg_lock); - kref_put(&mcg->ref_cnt, rxe_cleanup_mcg); kfree(mca); return 0; } } - spin_unlock_bh(&rxe->mcg_lock); - kref_put(&mcg->ref_cnt, rxe_cleanup_mcg); -err1: + return -EINVAL; } +/** + * rxe_attach_mcast - attach qp to multicast group (see IBA-11.3.1) + * @ibqp: (IB) qp object + * @mgid: multicast IP address + * @mlid: multicast LID, ignored for RoCEv2 (see IBA-A17.5.6) + * + * Returns: 0 on success else an errno + */ int rxe_attach_mcast(struct ib_qp *ibqp, union ib_gid *mgid, u16 mlid) { int err; @@ -374,8 +437,11 @@ int rxe_attach_mcast(struct ib_qp *ibqp, union ib_gid *mgid, u16 mlid) if (err) return err; - err = rxe_mcast_add_grp_elem(rxe, qp, mcg); + err = rxe_attach_mcg(mcg, qp); + /* this can happen if we failed to attach a first qp to mcg + * go ahead and destroy mcg + */ if (atomic_read(&mcg->qp_num) == 0) rxe_destroy_mcg(mcg); @@ -383,12 +449,29 @@ int rxe_attach_mcast(struct ib_qp *ibqp, union ib_gid *mgid, u16 mlid) return err; } +/** + * rxe_detach_mcast - detach qp from multicast group (see IBA-11.3.2) + * @ibqp: address of (IB) qp object + * @mgid: multicast IP address + * @mlid: multicast LID, ignored for RoCEv2 (see IBA-A17.5.6) + * + * Returns: 0 on success else an errno + */ int rxe_detach_mcast(struct ib_qp *ibqp, union ib_gid *mgid, u16 mlid) { struct rxe_dev *rxe = to_rdev(ibqp->device); struct rxe_qp *qp = to_rqp(ibqp); + struct rxe_mcg *mcg; + int err; - return rxe_mcast_drop_grp_elem(rxe, qp, mgid); + mcg = rxe_lookup_mcg(rxe, mgid); + if (!mcg) + return -EINVAL; + + err = rxe_detach_mcg(mcg, qp); + kref_put(&mcg->ref_cnt, rxe_cleanup_mcg); + + return err; } /** diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h index 72a913a8e0cb..716f11ec80fe 100644 --- a/drivers/infiniband/sw/rxe/rxe_verbs.h +++ b/drivers/infiniband/sw/rxe/rxe_verbs.h @@ -401,6 +401,7 @@ struct rxe_dev { spinlock_t mcg_lock; /* guard multicast groups */ struct rb_root mcg_tree; atomic_t mcg_num; + atomic_t mcg_attach; unsigned int mcg_next; spinlock_t pending_lock; /* guard pending_mmaps */
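Taken together, patches 14-17 rework the kernel side of multicast attach and detach. From userspace these paths are exercised through the libibverbs multicast verbs; a minimal sketch, with error handling trimmed and the qp/mgid setup assumed to exist elsewhere:

	#include <infiniband/verbs.h>

	/* qp must be a UD QP on an rxe device; mgid selects the multicast
	 * group. The lid argument is ignored for RoCEv2, matching the
	 * patch comments above.
	 */
	static int mcast_join_leave(struct ibv_qp *qp, const union ibv_gid *mgid)
	{
		int err;

		err = ibv_attach_mcast(qp, mgid, 0);	/* reaches rxe_attach_mcast() */
		if (err)
			return err;

		/* ... post receives and consume multicast traffic ... */

		return ibv_detach_mcast(qp, mgid, 0);	/* reaches rxe_detach_mcast() */
	}
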