From patchwork Wed Feb 23 23:07:03 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12757655
X-Patchwork-Delegate: jgg@ziepe.ca
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v13 1/6] RDMA/rxe: Warn if mcast memory is not freed
Date: Wed, 23 Feb 2022 17:07:03 -0600
Message-Id: <20220223230706.50332-2-rpearsonhpe@gmail.com>
In-Reply-To: <20220223230706.50332-1-rpearsonhpe@gmail.com>
References: <20220223230706.50332-1-rpearsonhpe@gmail.com>
X-Mailing-List: linux-rdma@vger.kernel.org

Print a warning if memory allocated by mcast is not cleared when the
rxe driver is unloaded.

Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/infiniband/sw/rxe/rxe.c b/drivers/infiniband/sw/rxe/rxe.c
index 3520eb2db685..fce3994d8f7a 100644
--- a/drivers/infiniband/sw/rxe/rxe.c
+++ b/drivers/infiniband/sw/rxe/rxe.c
@@ -29,6 +29,8 @@ void rxe_dealloc(struct ib_device *ib_dev)
 	rxe_pool_cleanup(&rxe->mr_pool);
 	rxe_pool_cleanup(&rxe->mw_pool);
 
+	WARN_ON(!RB_EMPTY_ROOT(&rxe->mcg_tree));
+
 	if (rxe->tfm)
 		crypto_free_shash(rxe->tfm);
 }
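The idiom here is a teardown-time leak check: if every attach was balanced
by a detach, the red-black tree must be empty when the device is torn down,
so a non-empty tree surfaces as a kernel warning instead of a silent leak.
A minimal standalone sketch of the same idiom, with hypothetical demo_*
names (not rxe code):

	#include <linux/bug.h>
	#include <linux/rbtree.h>

	/* hypothetical device; 'obj_tree' stands in for rxe->mcg_tree */
	struct demo_dev {
		struct rb_root obj_tree;	/* objects still attached */
	};

	static void demo_dealloc(struct demo_dev *dev)
	{
		/* every attach must have been balanced by a detach;
		 * a non-empty tree here means an object was leaked
		 */
		WARN_ON(!RB_EMPTY_ROOT(&dev->obj_tree));
	}

WARN_ON() is deliberate: the driver is being unloaded, so there is nothing
to unwind, but the splat makes the refcounting bug visible in testing.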
From patchwork Wed Feb 23 23:07:04 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12757656
X-Patchwork-Delegate: jgg@ziepe.ca
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH v13 for-next 2/6] RDMA/rxe: Collect mca init code in a subroutine
Date: Wed, 23 Feb 2022 17:07:04 -0600
Message-Id: <20220223230706.50332-3-rpearsonhpe@gmail.com>
In-Reply-To: <20220223230706.50332-1-rpearsonhpe@gmail.com>
References: <20220223230706.50332-1-rpearsonhpe@gmail.com>
X-Mailing-List: linux-rdma@vger.kernel.org

Collect the initialization code for struct rxe_mca into a subroutine,
__rxe_init_mca(), to clean up rxe_attach_mcg() in rxe_mcast.c. Also
check the limit on the total number of attached qp's.

Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_mcast.c | 58 ++++++++++++++++++++-------
 drivers/infiniband/sw/rxe/rxe_verbs.h |  1 +
 2 files changed, 44 insertions(+), 15 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_mcast.c b/drivers/infiniband/sw/rxe/rxe_mcast.c
index 4935fe5c5868..a0a7f8720f95 100644
--- a/drivers/infiniband/sw/rxe/rxe_mcast.c
+++ b/drivers/infiniband/sw/rxe/rxe_mcast.c
@@ -259,6 +259,46 @@ static void rxe_destroy_mcg(struct rxe_mcg *mcg)
 	spin_unlock_irqrestore(&mcg->rxe->mcg_lock, flags);
 }
 
+/**
+ * __rxe_init_mca - initialize a new mca holding lock
+ * @qp: qp object
+ * @mcg: mcg object
+ * @mca: empty space for new mca
+ *
+ * Context: caller must hold references on qp and mcg, rxe->mcg_lock
+ * and pass memory for new mca
+ *
+ * Returns: 0 on success else an error
+ */
+static int __rxe_init_mca(struct rxe_qp *qp, struct rxe_mcg *mcg,
+			  struct rxe_mca *mca)
+{
+	struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
+	int n;
+
+	n = atomic_inc_return(&rxe->mcg_attach);
+	if (n > rxe->attr.max_total_mcast_qp_attach) {
+		atomic_dec(&rxe->mcg_attach);
+		return -ENOMEM;
+	}
+
+	n = atomic_inc_return(&mcg->qp_num);
+	if (n > rxe->attr.max_mcast_qp_attach) {
+		atomic_dec(&mcg->qp_num);
+		atomic_dec(&rxe->mcg_attach);
+		return -ENOMEM;
+	}
+
+	atomic_inc(&qp->mcg_num);
+
+	rxe_add_ref(qp);
+	mca->qp = qp;
+
+	list_add_tail(&mca->qp_list, &mcg->qp_list);
+
+	return 0;
+}
+
 static int rxe_attach_mcg(struct rxe_dev *rxe, struct rxe_qp *qp,
 			  struct rxe_mcg *mcg)
 {
@@ -291,22 +331,9 @@ static int rxe_attach_mcg(struct rxe_dev *rxe, struct rxe_qp *qp,
 		}
 	}
 
-	/* check limits after checking if already attached */
-	if (atomic_inc_return(&mcg->qp_num) > rxe->attr.max_mcast_qp_attach) {
-		atomic_dec(&mcg->qp_num);
+	err = __rxe_init_mca(qp, mcg, mca);
+	if (err)
 		kfree(mca);
-		err = -ENOMEM;
-		goto out;
-	}
-
-	/* protect pointer to qp in mca */
-	rxe_add_ref(qp);
-	mca->qp = qp;
-
-	atomic_inc(&qp->mcg_num);
-	list_add(&mca->qp_list, &mcg->qp_list);
-
-	err = 0;
 out:
 	spin_unlock_irqrestore(&rxe->mcg_lock, flags);
 	return err;
@@ -329,6 +356,7 @@ static int rxe_detach_mcg(struct rxe_dev *rxe, struct rxe_qp *qp,
 		if (mca->qp == qp) {
 			list_del(&mca->qp_list);
 			atomic_dec(&qp->mcg_num);
+			atomic_dec(&rxe->mcg_attach);
 			rxe_drop_ref(qp);
 
 			/* if the number of qp's attached to the
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h
index 20fe3ee6589d..6b15251ff67a 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.h
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.h
@@ -401,6 +401,7 @@ struct rxe_dev {
 	spinlock_t mcg_lock;
 	struct rb_root mcg_tree;
 	atomic_t mcg_num;
+	atomic_t mcg_attach;
 
 	spinlock_t pending_lock; /* guard pending_mmaps */
 	struct list_head pending_mmaps;
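Both limit checks in __rxe_init_mca() use the optimistic
increment-then-roll-back idiom: take the slot first with
atomic_inc_return(), and if that pushed the counter past the limit, back
the increment out and fail. Reduced to its core, the idiom looks like
this (hypothetical helper, not part of the patch):

	#include <linux/atomic.h>
	#include <linux/errno.h>

	/* take one slot against 'limit'; roll the increment back on
	 * failure so concurrent callers never see a stuck count
	 */
	static int demo_take_slot(atomic_t *count, int limit)
	{
		if (atomic_inc_return(count) > limit) {
			atomic_dec(count);
			return -ENOMEM;
		}
		return 0;
	}

Because the increment is atomic, two racing attach calls cannot both
sneak under the limit; at worst one briefly observes an over-limit value
that its owner is about to undo.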
From patchwork Wed Feb 23 23:07:05 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12757657
X-Patchwork-Delegate: jgg@ziepe.ca
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v13 3/6] RDMA/rxe: Collect cleanup mca code in a subroutine
Date: Wed, 23 Feb 2022 17:07:05 -0600
Message-Id: <20220223230706.50332-4-rpearsonhpe@gmail.com>
In-Reply-To: <20220223230706.50332-1-rpearsonhpe@gmail.com>
References: <20220223230706.50332-1-rpearsonhpe@gmail.com>
X-Mailing-List: linux-rdma@vger.kernel.org

Collect the cleanup code for struct rxe_mca into a subroutine,
__rxe_cleanup_mca(), called from rxe_detach_mcg() in rxe_mcast.c.

Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_mcast.c | 41 +++++++++++++++++----------
 1 file changed, 26 insertions(+), 15 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_mcast.c b/drivers/infiniband/sw/rxe/rxe_mcast.c
index a0a7f8720f95..66c1ae703976 100644
--- a/drivers/infiniband/sw/rxe/rxe_mcast.c
+++ b/drivers/infiniband/sw/rxe/rxe_mcast.c
@@ -339,13 +339,31 @@ static int rxe_attach_mcg(struct rxe_dev *rxe, struct rxe_qp *qp,
 	return err;
 }
 
+/**
+ * __rxe_cleanup_mca - cleanup mca object holding lock
+ * @mca: mca object
+ * @mcg: mcg object
+ *
+ * Context: caller must hold a reference to mcg and rxe->mcg_lock
+ */
+static void __rxe_cleanup_mca(struct rxe_mca *mca, struct rxe_mcg *mcg)
+{
+	list_del(&mca->qp_list);
+
+	atomic_dec(&mcg->qp_num);
+	atomic_dec(&mcg->rxe->mcg_attach);
+	atomic_dec(&mca->qp->mcg_num);
+	rxe_drop_ref(mca->qp);
+
+	kfree(mca);
+}
+
 static int rxe_detach_mcg(struct rxe_dev *rxe, struct rxe_qp *qp,
 			  union ib_gid *mgid)
 {
 	struct rxe_mcg *mcg;
 	struct rxe_mca *mca, *tmp;
 	unsigned long flags;
-	int err;
 
 	mcg = rxe_lookup_mcg(rxe, mgid);
 	if (!mcg)
@@ -354,37 +372,30 @@ static int rxe_detach_mcg(struct rxe_dev *rxe, struct rxe_qp *qp,
 	spin_lock_irqsave(&rxe->mcg_lock, flags);
 	list_for_each_entry_safe(mca, tmp, &mcg->qp_list, qp_list) {
 		if (mca->qp == qp) {
-			list_del(&mca->qp_list);
-			atomic_dec(&qp->mcg_num);
-			atomic_dec(&rxe->mcg_attach);
-			rxe_drop_ref(qp);
+			__rxe_cleanup_mca(mca, mcg);
 
 			/* if the number of qp's attached to the
 			 * mcast group falls to zero go ahead and
 			 * tear it down. This will not free the
 			 * object since we are still holding a ref
-			 * from the get key above.
+			 * from the get key above
 			 */
-			if (atomic_dec_return(&mcg->qp_num) <= 0)
+			if (atomic_read(&mcg->qp_num) <= 0)
 				__rxe_destroy_mcg(mcg);
 
 			/* drop the ref from get key. This will free the
 			 * object if qp_num is zero.
 			 */
 			kref_put(&mcg->ref_cnt, rxe_cleanup_mcg);
-			kfree(mca);
-			err = 0;
-			goto out_unlock;
+
+			spin_unlock_irqrestore(&rxe->mcg_lock, flags);
+			return 0;
 		}
 	}
 
 	/* we didn't find the qp on the list */
-	kref_put(&mcg->ref_cnt, rxe_cleanup_mcg);
-	err = -EINVAL;
-
-out_unlock:
 	spin_unlock_irqrestore(&rxe->mcg_lock, flags);
-	return err;
+	return -EINVAL;
 }
 
 int rxe_attach_mcast(struct ib_qp *ibqp, union ib_gid *mgid, u16 mlid)
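Together with patch 2 this restores a symmetric init/cleanup pairing:
everything __rxe_init_mca() takes (list membership, counters, qp ref,
the allocation itself) is released in one place, in reverse order. A
schematic of the pairing with hypothetical demo_* types (not rxe code):

	#include <linux/atomic.h>
	#include <linux/list.h>
	#include <linux/slab.h>

	struct demo_group {
		struct list_head members;
		atomic_t num;		/* attached objects */
	};

	struct demo_obj {
		struct list_head list;
	};

	/* undo, in reverse order, exactly what the matching init
	 * helper did: unlink, drop the counter, then free
	 */
	static void demo_cleanup_obj(struct demo_obj *obj,
				     struct demo_group *grp)
	{
		list_del(&obj->list);
		atomic_dec(&grp->num);
		kfree(obj);
	}

Keeping the teardown in one helper is what lets the later RCU conversion
(patch 6) change the freeing discipline without touching every caller.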
From patchwork Wed Feb 23 23:07:06 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12757658
X-Patchwork-Delegate: jgg@ziepe.ca
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH v13 for-next 4/6] RDMA/rxe: Cleanup rxe_mcast.c
Date: Wed, 23 Feb 2022 17:07:06 -0600
Message-Id: <20220223230706.50332-5-rpearsonhpe@gmail.com>
In-Reply-To: <20220223230706.50332-1-rpearsonhpe@gmail.com>
References: <20220223230706.50332-1-rpearsonhpe@gmail.com>
X-Mailing-List: linux-rdma@vger.kernel.org

Finish adding kernel-doc comment headers to the subroutines in
rxe_mcast.c and make minor API cleanups.

Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_mcast.c | 97 +++++++++++++++++++++------
 1 file changed, 78 insertions(+), 19 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_mcast.c b/drivers/infiniband/sw/rxe/rxe_mcast.c
index 66c1ae703976..c399a29b648b 100644
--- a/drivers/infiniband/sw/rxe/rxe_mcast.c
+++ b/drivers/infiniband/sw/rxe/rxe_mcast.c
@@ -1,12 +1,33 @@
 // SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
 /*
+ * Copyright (c) 2022 Hewlett Packard Enterprise, Inc. All rights reserved.
  * Copyright (c) 2016 Mellanox Technologies Ltd. All rights reserved.
  * Copyright (c) 2015 System Fabric Works, Inc. All rights reserved.
  */
 
+/*
+ * rxe_mcast.c implements driver support for multicast transport.
+ * It is based on two data structures: struct rxe_mcg ('mcg') and
+ * struct rxe_mca ('mca'). An mcg is allocated each time a qp is
+ * attached to a new mgid for the first time. These are indexed by
+ * a red-black tree using the mgid. This data structure is searched
+ * for the mcg when a multicast packet is received and when another
+ * qp is attached to the same mgid. It is cleaned up when the last qp
+ * is detached from the mcg. Each time a qp is attached to an mcg an
+ * mca is created. It holds a pointer to the qp and is added to a list
+ * of qp's that are attached to the mcg. The qp_list is used to replicate
+ * mcast packets in the rxe receive path.
+ */
+
 #include "rxe.h"
-#include "rxe_loc.h"
 
+/**
+ * rxe_mcast_add - add multicast address to rxe device
+ * @rxe: rxe device object
+ * @mgid: multicast address as a gid
+ *
+ * Returns 0 on success else an error
+ */
 static int rxe_mcast_add(struct rxe_dev *rxe, union ib_gid *mgid)
 {
 	unsigned char ll_addr[ETH_ALEN];
@@ -16,6 +37,13 @@ static int rxe_mcast_add(struct rxe_dev *rxe, union ib_gid *mgid)
 	return dev_mc_add(rxe->ndev, ll_addr);
 }
 
+/**
+ * rxe_mcast_delete - delete multicast address from rxe device
+ * @rxe: rxe device object
+ * @mgid: multicast address as a gid
+ *
+ * Returns 0 on success else an error
+ */
 static int rxe_mcast_delete(struct rxe_dev *rxe, union ib_gid *mgid)
 {
 	unsigned char ll_addr[ETH_ALEN];
@@ -216,7 +244,7 @@ static struct rxe_mcg *rxe_get_mcg(struct rxe_dev *rxe, union ib_gid *mgid)
 
 /**
  * rxe_cleanup_mcg - cleanup mcg for kref_put
- * @kref:
+ * @kref: struct kref embedded in mcg
  */
 void rxe_cleanup_mcg(struct kref *kref)
 {
@@ -299,9 +327,17 @@ static int __rxe_init_mca(struct rxe_qp *qp, struct rxe_mcg *mcg,
 	return 0;
 }
 
-static int rxe_attach_mcg(struct rxe_dev *rxe, struct rxe_qp *qp,
-			  struct rxe_mcg *mcg)
+/**
+ * rxe_attach_mcg - attach qp to mcg if not already attached
+ * @qp: qp object
+ * @mcg: mcg object
+ *
+ * Context: caller must hold reference on qp and mcg.
+ * Returns: 0 on success else an error
+ */
+static int rxe_attach_mcg(struct rxe_mcg *mcg, struct rxe_qp *qp)
 {
+	struct rxe_dev *rxe = mcg->rxe;
 	struct rxe_mca *mca, *tmp;
 	unsigned long flags;
 	int err;
@@ -358,17 +394,19 @@ static void __rxe_cleanup_mca(struct rxe_mca *mca, struct rxe_mcg *mcg)
 	kfree(mca);
 }
 
-static int rxe_detach_mcg(struct rxe_dev *rxe, struct rxe_qp *qp,
-			  union ib_gid *mgid)
+/**
+ * rxe_detach_mcg - detach qp from mcg
+ * @mcg: mcg object
+ * @qp: qp object
+ *
+ * Returns: 0 on success else an error if qp is not attached.
+ */
+static int rxe_detach_mcg(struct rxe_mcg *mcg, struct rxe_qp *qp)
 {
-	struct rxe_mcg *mcg;
+	struct rxe_dev *rxe = mcg->rxe;
 	struct rxe_mca *mca, *tmp;
 	unsigned long flags;
 
-	mcg = rxe_lookup_mcg(rxe, mgid);
-	if (!mcg)
-		return -EINVAL;
-
 	spin_lock_irqsave(&rxe->mcg_lock, flags);
 	list_for_each_entry_safe(mca, tmp, &mcg->qp_list, qp_list) {
 		if (mca->qp == qp) {
@@ -378,16 +416,11 @@ static int rxe_detach_mcg(struct rxe_dev *rxe, struct rxe_qp *qp,
 			 * mcast group falls to zero go ahead and
 			 * tear it down. This will not free the
 			 * object since we are still holding a ref
-			 * from the get key above
+			 * from the caller
 			 */
 			if (atomic_read(&mcg->qp_num) <= 0)
 				__rxe_destroy_mcg(mcg);
 
-			/* drop the ref from get key. This will free the
-			 * object if qp_num is zero.
-			 */
-			kref_put(&mcg->ref_cnt, rxe_cleanup_mcg);
-
 			spin_unlock_irqrestore(&rxe->mcg_lock, flags);
 			return 0;
 		}
@@ -398,6 +431,14 @@ static int rxe_detach_mcg(struct rxe_dev *rxe, struct rxe_qp *qp,
 	return -EINVAL;
 }
 
+/**
+ * rxe_attach_mcast - attach qp to multicast group (see IBA-11.3.1)
+ * @ibqp: (IB) qp object
+ * @mgid: multicast IP address
+ * @mlid: multicast LID, ignored for RoCEv2 (see IBA-A17.5.6)
+ *
+ * Returns: 0 on success else an errno
+ */
 int rxe_attach_mcast(struct ib_qp *ibqp, union ib_gid *mgid, u16 mlid)
 {
 	int err;
@@ -410,20 +451,38 @@ int rxe_attach_mcast(struct ib_qp *ibqp, union ib_gid *mgid, u16 mlid)
 	if (IS_ERR(mcg))
 		return PTR_ERR(mcg);
 
-	err = rxe_attach_mcg(rxe, qp, mcg);
+	err = rxe_attach_mcg(mcg, qp);
 
 	/* if we failed to attach the first qp to mcg tear it down */
 	if (atomic_read(&mcg->qp_num) == 0)
 		rxe_destroy_mcg(mcg);
 
 	kref_put(&mcg->ref_cnt, rxe_cleanup_mcg);
+
 	return err;
 }
 
+/**
+ * rxe_detach_mcast - detach qp from multicast group (see IBA-11.3.2)
+ * @ibqp: address of (IB) qp object
+ * @mgid: multicast IP address
+ * @mlid: multicast LID, ignored for RoCEv2 (see IBA-A17.5.6)
+ *
+ * Returns: 0 on success else an errno
+ */
 int rxe_detach_mcast(struct ib_qp *ibqp, union ib_gid *mgid, u16 mlid)
 {
 	struct rxe_dev *rxe = to_rdev(ibqp->device);
 	struct rxe_qp *qp = to_rqp(ibqp);
+	struct rxe_mcg *mcg;
+	int err;
+
+	mcg = rxe_lookup_mcg(rxe, mgid);
+	if (!mcg)
+		return -EINVAL;
 
-	return rxe_detach_mcg(rxe, qp, mgid);
+	err = rxe_detach_mcg(mcg, qp);
+	kref_put(&mcg->ref_cnt, rxe_cleanup_mcg);
+
+	return err;
 }
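The comment headers added throughout this patch follow the kernel-doc
convention (Documentation/doc-guide/kernel-doc.rst): a one-line summary,
one line per parameter, and optional Context/Return sections. A schematic
example with a hypothetical function, showing the shape the patch uses:

	/**
	 * demo_frob - one-line summary of what the function does
	 * @dev: device the operation applies to
	 * @val: value to program
	 *
	 * Context: may sleep; caller must hold a reference on @dev.
	 * Return: 0 on success else a negative errno.
	 */
	int demo_frob(struct demo_dev *dev, int val);

Following the format exactly matters because scripts/kernel-doc parses
these blocks to generate the driver API documentation.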
From patchwork Wed Feb 23 23:07:07 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12757660
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH v13 for-next 5/6] RDMA/rxe: For mcast copy qp list to temp array
Date: Wed, 23 Feb 2022 17:07:07 -0600
Message-Id: <20220223230706.50332-6-rpearsonhpe@gmail.com>
In-Reply-To: <20220223230706.50332-1-rpearsonhpe@gmail.com>
References: <20220223230706.50332-1-rpearsonhpe@gmail.com>
X-Mailing-List: linux-rdma@vger.kernel.org

Currently rxe_rcv_mcast_pkt() performs most of its work under
rxe->mcg_lock and calls into rxe_rcv(), which queues the packets to the
responder and completer tasklets while still holding the lock; this is
a very bad idea. This patch walks the qp_list in the mcg and copies the
qp addresses to a temporary array under the lock, but does the rest of
the work without holding it. The critical section is now very small.

Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_recv.c | 103 +++++++++++++++++----------
 1 file changed, 64 insertions(+), 39 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_recv.c b/drivers/infiniband/sw/rxe/rxe_recv.c
index 53924453abef..9b21cbb22602 100644
--- a/drivers/infiniband/sw/rxe/rxe_recv.c
+++ b/drivers/infiniband/sw/rxe/rxe_recv.c
@@ -232,11 +232,15 @@ static inline void rxe_rcv_pkt(struct rxe_pkt_info *pkt, struct sk_buff *skb)
 
 static void rxe_rcv_mcast_pkt(struct rxe_dev *rxe, struct sk_buff *skb)
 {
+	struct sk_buff *skb_copy;
 	struct rxe_pkt_info *pkt = SKB_TO_PKT(skb);
+	struct rxe_pkt_info *pkt_copy;
 	struct rxe_mcg *mcg;
 	struct rxe_mca *mca;
 	struct rxe_qp *qp;
+	struct rxe_qp **qp_array;
 	union ib_gid dgid;
+	int n, nmax;
 	int err;
 
 	if (skb->protocol == htons(ETH_P_IP))
@@ -248,68 +252,89 @@ static void rxe_rcv_mcast_pkt(struct rxe_dev *rxe, struct sk_buff *skb)
 	/* lookup mcast group corresponding to mgid, takes a ref */
 	mcg = rxe_lookup_mcg(rxe, &dgid);
 	if (!mcg)
-		goto drop;	/* mcast group not registered */
+		goto err_drop;	/* mcast group not registered */
+
+	/* this is the current number of qp's attached to mcg plus a
+	 * little room in case new qp's are attached between here
+	 * and when we finish walking the qp list. If someone can
+	 * attach more than 4 new qp's we will miss forwarding
+	 * packets to those qp's. This is actually OK since UD is
+	 * an unreliable service.
+	 */
+	nmax = atomic_read(&mcg->qp_num) + 4;
+	qp_array = kmalloc_array(nmax, sizeof(qp), GFP_KERNEL);
+	n = 0;
 
 	spin_lock_bh(&rxe->mcg_lock);
-
-	/* this is unreliable datagram service so we let
-	 * failures to deliver a multicast packet to a
-	 * single QP happen and just move on and try
-	 * the rest of them on the list
-	 */
 	list_for_each_entry(mca, &mcg->qp_list, qp_list) {
-		qp = mca->qp;
+		/* protect the qp pointers in the list */
+		rxe_add_ref(mca->qp);
+		qp_array[n++] = mca->qp;
+		if (n == nmax)
+			break;
+	}
+	spin_unlock_bh(&rxe->mcg_lock);
+	nmax = n;
+	kref_put(&mcg->ref_cnt, rxe_cleanup_mcg);
 
-		/* validate qp for incoming packet */
+	for (n = 0; n < nmax; n++) {
+		qp = qp_array[n];
+
+		/* since this is an unreliable transport if
+		 * one of the qp's fails to pass these checks
+		 * just don't forward a packet and continue
+		 * on to the other qp's. If there aren't any
+		 * drop the skb
+		 */
 		err = check_type_state(rxe, pkt, qp);
-		if (err)
+		if (err) {
+			rxe_drop_ref(qp);
+			if (n == nmax - 1)
+				goto err_free;
 			continue;
+		}
 
 		err = check_keys(rxe, pkt, bth_qpn(pkt), qp);
-		if (err)
+		if (err) {
+			rxe_drop_ref(qp);
+			if (n == nmax - 1)
+				goto err_free;
 			continue;
+		}
 
-		/* for all but the last QP create a new clone of the
-		 * skb and pass to the QP. Pass the original skb to
-		 * the last QP in the list.
+		/* for all but the last qp create a new copy(clone)
+		 * of the skb and pass to the qp. Pass the original
+		 * skb to the last qp in the list unless it failed
+		 * checks above
 		 */
-		if (mca->qp_list.next != &mcg->qp_list) {
-			struct sk_buff *cskb;
-			struct rxe_pkt_info *cpkt;
-
-			cskb = skb_clone(skb, GFP_ATOMIC);
-			if (unlikely(!cskb))
+		if (n < nmax - 1) {
+			skb_copy = skb_clone(skb, GFP_KERNEL);
+			if (unlikely(!skb_copy)) {
+				rxe_drop_ref(qp);
 				continue;
+			}
 
 			if (WARN_ON(!ib_device_try_get(&rxe->ib_dev))) {
-				kfree_skb(cskb);
-				break;
+				kfree_skb(skb_copy);
+				rxe_drop_ref(qp);
+				continue;
 			}
 
-			cpkt = SKB_TO_PKT(cskb);
-			cpkt->qp = qp;
-			rxe_add_ref(qp);
-			rxe_rcv_pkt(cpkt, cskb);
+			pkt_copy = SKB_TO_PKT(skb_copy);
+			pkt_copy->qp = qp;
+			rxe_rcv_pkt(pkt_copy, skb_copy);
 		} else {
 			pkt->qp = qp;
-			rxe_add_ref(qp);
 			rxe_rcv_pkt(pkt, skb);
-			skb = NULL;	/* mark consumed */
 		}
 	}
 
-	spin_unlock_bh(&rxe->mcg_lock);
-
-	kref_put(&mcg->ref_cnt, rxe_cleanup_mcg);
-
-	if (likely(!skb))
-		return;
-
-	/* This only occurs if one of the checks fails on the last
-	 * QP in the list above
-	 */
+	kfree(qp_array);
 	return;
 
-drop:
+err_free:
+	kfree(qp_array);
+err_drop:
 	kfree_skb(skb);
 	ib_device_put(&rxe->ib_dev);
 }
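The technique is a general one: snapshot list members into an array under
the lock, taking a reference on each so it stays valid, then do the heavy
per-member work with the lock dropped. A minimal sketch with hypothetical
demo_* names (not the rxe code; note the sketch also checks the
kmalloc_array() result, which the posted hunk does not):

	#include <linux/atomic.h>
	#include <linux/list.h>
	#include <linux/slab.h>
	#include <linux/spinlock.h>

	struct demo_qp;				/* hypothetical endpoint */
	void demo_qp_get(struct demo_qp *qp);	/* hypothetical ref helpers */
	void demo_qp_put(struct demo_qp *qp);
	void demo_deliver(struct demo_qp *qp);	/* heavy per-qp work */

	struct demo_member {
		struct list_head list;
		struct demo_qp *qp;
	};

	struct demo_group {
		spinlock_t lock;
		struct list_head members;
		atomic_t num;
	};

	static void demo_fan_out(struct demo_group *grp)
	{
		struct demo_member *m;
		struct demo_qp **array;
		int n = 0, nmax;

		/* slack of 4 covers qp's attached during the walk */
		nmax = atomic_read(&grp->num) + 4;
		array = kmalloc_array(nmax, sizeof(*array), GFP_KERNEL);
		if (!array)
			return;

		spin_lock_bh(&grp->lock);
		list_for_each_entry(m, &grp->members, list) {
			demo_qp_get(m->qp);	/* pin for use unlocked */
			array[n++] = m->qp;
			if (n == nmax)
				break;
		}
		spin_unlock_bh(&grp->lock);

		/* all heavy work happens with the lock dropped */
		for (nmax = n, n = 0; n < nmax; n++) {
			demo_deliver(array[n]);
			demo_qp_put(array[n]);
		}
		kfree(array);
	}

A side benefit visible in the patch: once delivery no longer happens
under the spinlock, skb_clone() can use GFP_KERNEL instead of GFP_ATOMIC.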
From patchwork Wed Feb 23 23:07:08 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12757659
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v13 6/6] RDMA/rxe: Convert mca read locking to RCU
Date: Wed, 23 Feb 2022 17:07:08 -0600
Message-Id: <20220223230706.50332-7-rpearsonhpe@gmail.com>
In-Reply-To: <20220223230706.50332-1-rpearsonhpe@gmail.com>
References: <20220223230706.50332-1-rpearsonhpe@gmail.com>
X-Mailing-List: linux-rdma@vger.kernel.org

Replace the spinlock with RCU read locks for read-side operations on mca
in rxe_recv.c and rxe_mcast.c.

Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_mcast.c | 67 ++++++++++++++++++++-------
 drivers/infiniband/sw/rxe/rxe_recv.c  |  6 +--
 drivers/infiniband/sw/rxe/rxe_verbs.h |  3 ++
 3 files changed, 56 insertions(+), 20 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_mcast.c b/drivers/infiniband/sw/rxe/rxe_mcast.c
index c399a29b648b..b2ca4bf5658f 100644
--- a/drivers/infiniband/sw/rxe/rxe_mcast.c
+++ b/drivers/infiniband/sw/rxe/rxe_mcast.c
@@ -17,6 +17,12 @@
  * mca is created. It holds a pointer to the qp and is added to a list
  * of qp's that are attached to the mcg. The qp_list is used to replicate
  * mcast packets in the rxe receive path.
+ *
+ * The highest performance operations are mca list traversal when
+ * processing incoming multicast packets which need to be fanned out
+ * to the attached qp's. This list is protected by RCU locking for read
+ * operations and a spinlock in the rxe_dev struct for write operations.
+ * The red-black tree is protected by the same spinlock.
  */
 
 #include "rxe.h"
@@ -299,7 +305,7 @@ static void rxe_destroy_mcg(struct rxe_mcg *mcg)
  * Returns: 0 on success else an error
  */
 static int __rxe_init_mca(struct rxe_qp *qp, struct rxe_mcg *mcg,
-				struct rxe_mca *mca)
+			  struct rxe_mca *mca)
 {
 	struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
 	int n;
@@ -322,7 +328,12 @@ static int __rxe_init_mca(struct rxe_qp *qp, struct rxe_mcg *mcg,
 	rxe_add_ref(qp);
 	mca->qp = qp;
 
-	list_add_tail(&mca->qp_list, &mcg->qp_list);
+	kref_get(&mcg->ref_cnt);
+	mca->mcg = mcg;
+
+	init_completion(&mca->complete);
+
+	list_add_tail_rcu(&mca->qp_list, &mcg->qp_list);
 
 	return 0;
 }
@@ -343,14 +354,14 @@ static int rxe_attach_mcg(struct rxe_mcg *mcg, struct rxe_qp *qp)
 	int err;
 
 	/* check to see if the qp is already a member of the group */
-	spin_lock_irqsave(&rxe->mcg_lock, flags);
-	list_for_each_entry(mca, &mcg->qp_list, qp_list) {
+	rcu_read_lock();
+	list_for_each_entry_rcu(mca, &mcg->qp_list, qp_list) {
 		if (mca->qp == qp) {
-			spin_unlock_irqrestore(&rxe->mcg_lock, flags);
+			rcu_read_unlock();
 			return 0;
 		}
 	}
-	spin_unlock_irqrestore(&rxe->mcg_lock, flags);
+	rcu_read_unlock();
 
 	/* speculative alloc new mca without using GFP_ATOMIC */
 	mca = kzalloc(sizeof(*mca), GFP_KERNEL);
@@ -375,6 +386,20 @@ static int rxe_attach_mcg(struct rxe_mcg *mcg, struct rxe_qp *qp)
 	return err;
 }
 
+/**
+ * rxe_destroy_mca - free mca resources
+ * @head: rcu_head embedded in mca
+ */
+static void rxe_destroy_mca(struct rcu_head *head)
+{
+	struct rxe_mca *mca = container_of(head, typeof(*mca), rcu);
+
+	atomic_dec(&mca->qp->mcg_num);
+	rxe_drop_ref(mca->qp);
+
+	complete(&mca->complete);
+}
+
 /**
  * __rxe_cleanup_mca - cleanup mca object holding lock
  * @mca: mca object
@@ -384,14 +409,12 @@ static int rxe_attach_mcg(struct rxe_mcg *mcg, struct rxe_qp *qp)
  */
 static void __rxe_cleanup_mca(struct rxe_mca *mca, struct rxe_mcg *mcg)
 {
-	list_del(&mca->qp_list);
-
+	mca->mcg = NULL;
+	kref_put(&mcg->ref_cnt, rxe_cleanup_mcg);
 	atomic_dec(&mcg->qp_num);
 	atomic_dec(&mcg->rxe->mcg_attach);
-	atomic_dec(&mca->qp->mcg_num);
-	rxe_drop_ref(mca->qp);
 
-	kfree(mca);
+	list_del_rcu(&mca->qp_list);
 }
 
 /**
@@ -404,11 +427,10 @@ static void __rxe_cleanup_mca(struct rxe_mca *mca, struct rxe_mcg *mcg)
 static int rxe_detach_mcg(struct rxe_mcg *mcg, struct rxe_qp *qp)
 {
 	struct rxe_dev *rxe = mcg->rxe;
-	struct rxe_mca *mca, *tmp;
-	unsigned long flags;
+	struct rxe_mca *mca;
 
-	spin_lock_irqsave(&rxe->mcg_lock, flags);
-	list_for_each_entry_safe(mca, tmp, &mcg->qp_list, qp_list) {
+	spin_lock_bh(&rxe->mcg_lock);
+	list_for_each_entry_rcu(mca, &mcg->qp_list, qp_list) {
 		if (mca->qp == qp) {
 			__rxe_cleanup_mca(mca, mcg);
 
@@ -420,14 +442,25 @@ static int rxe_detach_mcg(struct rxe_mcg *mcg, struct rxe_qp *qp)
 			 */
 			if (atomic_read(&mcg->qp_num) <= 0)
 				__rxe_destroy_mcg(mcg);
+			spin_unlock_bh(&rxe->mcg_lock);
+
+			/* schedule rxe_destroy_mca and then wait for
+			 * completion before returning to rdma-core.
+			 * Having an outstanding call_rcu() causes
+			 * rdma-core to fail. It may be simpler to
+			 * just call synchronize_rcu() and then
+			 * rxe_destroy_mca(), but this works OK.
+			 */
+			call_rcu(&mca->rcu, rxe_destroy_mca);
+			wait_for_completion(&mca->complete);
+			kfree(mca);
 
-			spin_unlock_irqrestore(&rxe->mcg_lock, flags);
 			return 0;
 		}
 	}
 
 	/* we didn't find the qp on the list */
-	spin_unlock_irqrestore(&rxe->mcg_lock, flags);
+	spin_unlock_bh(&rxe->mcg_lock);
 	return -EINVAL;
 }
 
diff --git a/drivers/infiniband/sw/rxe/rxe_recv.c b/drivers/infiniband/sw/rxe/rxe_recv.c
index 9b21cbb22602..c2cab85c6576 100644
--- a/drivers/infiniband/sw/rxe/rxe_recv.c
+++ b/drivers/infiniband/sw/rxe/rxe_recv.c
@@ -265,15 +265,15 @@ static void rxe_rcv_mcast_pkt(struct rxe_dev *rxe, struct sk_buff *skb)
 	qp_array = kmalloc_array(nmax, sizeof(qp), GFP_KERNEL);
 	n = 0;
 
-	spin_lock_bh(&rxe->mcg_lock);
-	list_for_each_entry(mca, &mcg->qp_list, qp_list) {
+	rcu_read_lock();
+	list_for_each_entry_rcu(mca, &mcg->qp_list, qp_list) {
 		/* protect the qp pointers in the list */
 		rxe_add_ref(mca->qp);
 		qp_array[n++] = mca->qp;
 		if (n == nmax)
 			break;
 	}
-	spin_unlock_bh(&rxe->mcg_lock);
+	rcu_read_unlock();
 	nmax = n;
 	kref_put(&mcg->ref_cnt, rxe_cleanup_mcg);
 
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h
index 6b15251ff67a..14a574e6140e 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.h
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.h
@@ -364,7 +364,10 @@ struct rxe_mcg {
 
 struct rxe_mca {
 	struct list_head qp_list;
+	struct rcu_head rcu;
 	struct rxe_qp *qp;
+	struct rxe_mcg *mcg;
+	struct completion complete;
 };
 
 struct rxe_port {
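The conversion follows the standard RCU-protected-list recipe: readers
traverse under rcu_read_lock(), the writer unlinks with list_del_rcu()
under the spinlock, and the entry is freed only after a grace period.
The patch additionally pairs call_rcu() with a completion so detach does
not return to rdma-core with a callback still outstanding. A minimal
sketch of that remove path with hypothetical demo_* names (not rxe code;
init_completion() is assumed to have run at attach time, as the patch
does in __rxe_init_mca()):

	#include <linux/completion.h>
	#include <linux/kernel.h>
	#include <linux/rculist.h>
	#include <linux/slab.h>
	#include <linux/spinlock.h>

	struct demo_group {
		spinlock_t lock;		/* serializes writers */
		struct list_head members;	/* read via RCU */
	};

	struct demo_member {
		struct list_head list;
		struct rcu_head rcu;
		struct completion done;		/* init'ed at attach */
	};

	static void demo_member_reclaim(struct rcu_head *head)
	{
		struct demo_member *m = container_of(head, typeof(*m), rcu);

		complete(&m->done);	/* grace period over: no readers left */
	}

	static void demo_remove(struct demo_group *grp, struct demo_member *m)
	{
		spin_lock_bh(&grp->lock);
		list_del_rcu(&m->list);
		spin_unlock_bh(&grp->lock);

		/* wait out the grace period before freeing, mirroring
		 * the call_rcu() + wait_for_completion() pairing in the
		 * patch, so no callback outlives the caller
		 */
		call_rcu(&m->rcu, demo_member_reclaim);
		wait_for_completion(&m->done);
		kfree(m);
	}

Since demo_remove() blocks anyway, synchronize_rcu() followed by a direct
free would also work, as the patch's own comment notes; the completion
variant simply keeps the reclaim work inside the RCU callback.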